Autism
Autism is a developmental disorder characterized by difficulties with social interaction and communication, and by restricted and repetitive behavior. Parents usually notice signs in the first two or three years of their child's life. These signs often develop gradually, though some children with autism reach their developmental milestones at a normal pace and then worsen. Autism is caused by a combination of genetic and environmental factors. Risk factors include certain infections during pregnancy, such as rubella, as well as valproic acid, alcohol or cocaine use during pregnancy. Controversies surround other proposed environmental causes, for example the vaccine hypotheses, which have been disproven. Autism affects information processing in the brain by altering how nerve cells and their synapses connect and organize; how this occurs is not well understood. In the DSM-5, autism is included within the broader autism spectrum disorder (ASD), along with Asperger syndrome, which is less severe, and pervasive developmental disorder not otherwise specified (PDD-NOS). Early speech or behavioral interventions can help children with autism gain self-care, social, and communication skills. Although there is no known cure, there have been cases of children who have recovered from the condition. Few children with autism live independently after reaching adulthood, though some are successful. An autistic culture has developed, with some individuals seeking a cure and others believing autism should be accepted as a difference and not treated as a disorder.
Globally, autism is estimated to affect 24.8 million people as of 2015. In the 2000s, the number of people affected was estimated at 1–2 per 1,000 people worldwide. In developed countries, about 1.5% of children are diagnosed with ASD, more than double the 0.7% rate reported in the United States in 2000. It occurs four to five times more often in boys than in girls. The number of people diagnosed has increased dramatically since the 1960s, partly due to changes in diagnostic practice; the question of whether actual rates have increased is unresolved.
Autism is a highly variable neurodevelopmental disorder that first appears during infancy or childhood, and generally follows a steady course without remission. People with autism may be severely impaired in some respects but normal, or even superior, in others. Overt symptoms gradually begin after the age of six months, become established by age two or three years, and tend to continue through adulthood, although often in more muted form. It is distinguished not by a single symptom but by a characteristic triad of symptoms: impairments in social interaction, impairments in communication, and restricted interests and repetitive behavior. Other aspects, such as atypical eating, are also common but are not essential for diagnosis. Individual symptoms of autism occur in the general population and appear not to associate highly, without a sharp line separating pathologically severe from common traits.
Social deficits distinguish autism and the related autism spectrum disorders (ASD; see Classification) from other developmental disorders. People with autism have social impairments and often lack the intuition about others that many people take for granted. The noted autistic Temple Grandin described her inability to understand the social communication of neurotypicals, or people with normal neural development, as leaving her feeling "like an anthropologist on Mars".
Unusual social development becomes apparent early in childhood. Autistic infants show less attention to social stimuli, smile and look at others less often, and respond less to their own name. Autistic toddlers differ more strikingly from social norms; for example, they have less eye contact and turn-taking, and do not have the ability to use simple movements to express themselves, such as pointing at things. Three- to five-year-old children with autism are less likely to exhibit social understanding, approach others spontaneously, imitate and respond to emotions, communicate nonverbally, and take turns with others. However, they do form attachments to their primary caregivers. Most children with autism display moderately less attachment security than neurotypical children, although this difference disappears in children with higher mental development or less severe ASD. Older children and adults with ASD perform worse on tests of face and emotion recognition although this may be partly due to a lower ability to define a person's own emotions. Children with high-functioning autism suffer from more intense and frequent loneliness compared to non-autistic peers, despite the common belief that children with autism prefer to be alone. Making and maintaining friendships often proves to be difficult for those with autism. For them, the quality of friendships, not the number of friends, predicts how lonely they feel. Functional friendships, such as those resulting in invitations to parties, may affect the quality of life more deeply. There are many anecdotal reports, but few systematic studies, of aggression and violence in individuals with ASD. The limited data suggest that, in children with intellectual disability, autism is associated with aggression, destruction of property, and tantrums. About a third to a half of individuals with autism do not develop enough natural speech to meet their daily communication needs. Differences in communication may be present from the first year of life, and may include delayed onset of babbling, unusual gestures, diminished responsiveness, and vocal patterns that are not synchronized with the caregiver. In the second and third years, children with autism have less frequent and less diverse babbling, consonants, words, and word combinations; their gestures are less often integrated with words. Children with autism are less likely to make requests or share experiences, and are more likely to simply repeat others' words (echolalia) or reverse pronouns. Joint attention seems to be necessary for functional speech, and deficits in joint attention seem to distinguish infants with ASD: for example, they may look at a pointing hand instead of the pointed-at object, and they consistently fail to point at objects in order to comment on or share an experience. Children with autism may have difficulty with imaginative play and with developing symbols into language. In a pair of studies, high-functioning children with autism aged 8–15 performed equally well as, and as adults better than, individually matched controls at basic language tasks involving vocabulary and spelling. Both autistic groups performed worse than controls at complex language tasks such as figurative language, comprehension and inference. As people are often sized up initially from their basic language skills, these studies suggest that people speaking to autistic individuals are more likely to overestimate what their audience comprehends. 
Autistic individuals can display many forms of repetitive or restricted behavior, which the Repetitive Behavior Scale-Revised (RBS-R) groups into categories such as stereotyped, compulsive, ritualistic, sameness, restricted, and self-injurious behavior. No single repetitive or self-injurious behavior seems to be specific to autism, but autism appears to have an elevated pattern of occurrence and severity of these behaviors.
Autistic individuals may have symptoms that are independent of the diagnosis, but that can affect the individual or the family. An estimated 0.5% to 10% of individuals with ASD show unusual abilities, ranging from splinter skills such as the memorization of trivia to the extraordinarily rare talents of prodigious autistic savants. Many individuals with ASD show superior skills in perception and attention, relative to the general population. Sensory abnormalities are found in over 90% of those with autism, and are considered core features by some, although there is no good evidence that sensory symptoms differentiate autism from other developmental disorders. Differences are greater for under-responsivity (for example, walking into things) than for over-responsivity (for example, distress from loud noises) or for sensation seeking (for example, rhythmic movements). An estimated 60%–80% of autistic people have motor signs that include poor muscle tone, poor motor planning, and toe walking; deficits in motor coordination are pervasive across ASD and are greater in autism proper. Unusual eating behavior occurs in about three-quarters of children with ASD, to the extent that it was formerly a diagnostic indicator. Selectivity is the most common problem, although eating rituals and food refusal also occur; this does not appear to result in malnutrition. Although some children with autism also have gastrointestinal symptoms, there is a lack of published rigorous data to support the theory that children with autism have more or different gastrointestinal symptoms than usual; studies report conflicting results, and the relationship between gastrointestinal problems and ASD is unclear.
Parents of children with ASD have higher levels of stress. Siblings of children with ASD report greater admiration of and less conflict with the affected sibling than siblings of unaffected children, and are similar to siblings of children with Down syndrome in these aspects of the sibling relationship. However, they report lower levels of closeness and intimacy than siblings of children with Down syndrome; siblings of individuals with ASD have greater risk of negative well-being and poorer sibling relationships as adults.
It has long been presumed that there is a common cause at the genetic, cognitive, and neural levels for autism's characteristic triad of symptoms. However, there is increasing suspicion that autism is instead a complex disorder whose core aspects have distinct causes that often co-occur. Autism has a strong genetic basis, although the genetics of autism are complex and it is unclear whether ASD is explained more by rare mutations with major effects, or by rare multigene interactions of common genetic variants. Complexity arises due to interactions among multiple genes, the environment, and epigenetic factors which do not change the DNA sequence but are heritable and influence gene expression. Many genes have been associated with autism through sequencing the genomes of affected individuals and their parents.
Studies of twins suggest that heritability is 0.7 for autism and as high as 0.9 for ASD, and siblings of those with autism are about 25 times more likely to be autistic than the general population; an illustrative sketch of how twin data yield such heritability estimates appears below. However, most of the mutations that increase autism risk have not been identified. Typically, autism cannot be traced to a Mendelian (single-gene) mutation or to a single chromosome abnormality, and none of the genetic syndromes associated with ASDs have been shown to selectively cause ASD. Numerous candidate genes have been located, with only small effects attributable to any particular gene. Most loci individually explain less than 1% of cases of autism. The large number of autistic individuals with unaffected family members may result from spontaneous structural variation, such as deletions, duplications or inversions in genetic material during meiosis. Hence, a substantial fraction of autism cases may be traceable to genetic causes that are highly heritable but not inherited: that is, the mutation that causes the autism is not present in the parental genome.
Several lines of evidence point to synaptic dysfunction as a cause of autism. Some rare mutations may lead to autism by disrupting some synaptic pathways, such as those involved with cell adhesion. Gene replacement studies in mice suggest that autistic symptoms are closely related to later developmental steps that depend on activity in synapses and on activity-dependent changes.
All known teratogens (agents that cause birth defects) related to the risk of autism appear to act during the first eight weeks from conception, and though this does not exclude the possibility that autism can be initiated or affected later, there is strong evidence that autism arises very early in development. Exposure to air pollution during pregnancy, especially heavy metals and particulates, may increase the risk of autism. Environmental factors that have been claimed without evidence to contribute to or exacerbate autism include certain foods, infectious diseases, solvents, PCBs, phthalates and phenols used in plastic products, pesticides, brominated flame retardants, alcohol, smoking, illicit drugs, vaccines, and prenatal stress. Some, such as the MMR vaccine, have been completely disproven.
Parents may first become aware of autistic symptoms in their child around the time of a routine vaccination. This has led to unsupported theories blaming vaccine "overload", a vaccine preservative, or the MMR vaccine for causing autism. The latter theory was supported by a litigation-funded study that has since been shown to have been "an elaborate fraud". Although these theories lack convincing scientific evidence and are biologically implausible, parental concern about a potential vaccine link with autism has led to lower rates of childhood immunizations, outbreaks of previously controlled childhood diseases in some countries, and the preventable deaths of several children.
Autism's symptoms result from maturation-related changes in various systems of the brain. How autism occurs is not well understood. Its mechanism can be divided into two areas: the pathophysiology of brain structures and processes associated with autism, and the neuropsychological linkages between brain structures and behaviors. The behaviors appear to have multiple pathophysiologies.
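As a brief aside on the twin-study figures quoted above: such heritability estimates are often approximated with Falconer's formula, which compares correlations between monozygotic (MZ) and dizygotic (DZ) twin pairs. The sketch below uses purely illustrative numbers, not values reported for autism or taken from the studies cited in this article.
```latex
% Falconer's approximation of broad-sense heritability from twin data.
% r_MZ and r_DZ are observed trait correlations for identical and fraternal
% twin pairs; the values below are hypothetical, for illustration only.
\[
  h^{2} \approx 2\,\bigl(r_{\mathrm{MZ}} - r_{\mathrm{DZ}}\bigr),
  \qquad \text{e.g. } r_{\mathrm{MZ}} = 0.90,\; r_{\mathrm{DZ}} = 0.45
  \;\Rightarrow\; h^{2} \approx 0.90 .
\]
```
Under this approximation, a heritability near 0.9 simply reflects identical twins being far more alike on the trait than fraternal twins.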
Unlike many other brain disorders, such as Parkinson's, autism does not have a clear unifying mechanism at either the molecular, cellular, or systems level; it is not known whether autism is a few disorders caused by mutations converging on a few common molecular pathways, or is (like intellectual disability) a large set of disorders with diverse mechanisms. Autism appears to result from developmental factors that affect many or all functional brain systems, and to disturb the timing of brain development more than the final product. Neuroanatomical studies and the associations with teratogens strongly suggest that autism's mechanism includes alteration of brain development soon after conception. This anomaly appears to start a cascade of pathological events in the brain that are significantly influenced by environmental factors. Just after birth, the brains of children with autism tend to grow faster than usual, followed by normal or relatively slower growth in childhood. It is not known whether early overgrowth occurs in all children with autism. It seems to be most prominent in brain areas underlying the development of higher cognitive specialization. Several hypotheses have been proposed for the cellular and molecular bases of this pathological early overgrowth, but none is established.
The immune system is thought to play an important role in autism. Children with autism have been found by researchers to have inflammation of both the peripheral and central immune systems, as indicated by increased levels of pro-inflammatory cytokines and significant activation of microglia. Biomarkers of abnormal immune function have also been associated with increased impairments in behaviors that are characteristic of the core features of autism, such as deficits in social interactions and communication. Interactions between the immune system and the nervous system begin early during the embryonic stage of life, and successful neurodevelopment depends on a balanced immune response. It is thought that activation of a pregnant mother's immune system, such as from environmental toxicants or infection, can contribute to causing autism through a disruption of brain development. This is supported by recent studies that have found that infection during pregnancy is associated with an increased risk of autism.
The relationship of neurochemicals to autism is not well understood; several have been investigated, with the most evidence for the role of serotonin and of genetic differences in its transport. The role of group I metabotropic glutamate receptors (mGluR) in the pathogenesis of fragile X syndrome, the most common identified genetic cause of autism, has led to interest in the possible implications for future autism research into this pathway. Some data suggest neuronal overgrowth potentially related to an increase in several growth hormones or to impaired regulation of growth factor receptors. Also, some inborn errors of metabolism are associated with autism, but probably account for less than 5% of cases.
The mirror neuron system (MNS) theory of autism hypothesizes that distortion in the development of the MNS interferes with imitation and leads to autism's core features of social impairment and communication difficulties. The MNS operates when an animal performs an action or observes another animal perform the same action. The MNS may contribute to an individual's understanding of other people by enabling the modeling of their behavior via embodied simulation of their actions, intentions, and emotions.
Several studies have tested this hypothesis by demonstrating structural abnormalities in MNS regions of individuals with ASD, delayed activation in the core circuit for imitation in individuals with Asperger syndrome, and a correlation between reduced MNS activity and severity of the syndrome in children with ASD. However, individuals with autism also have abnormal brain activation in many circuits outside the MNS, and the MNS theory does not explain the normal performance of children with autism on imitation tasks that involve a goal or object.
ASD-related patterns of low function and aberrant activation in the brain differ depending on whether the brain is doing social or nonsocial tasks. In autism there is evidence for reduced functional connectivity of the default network, a large-scale brain network involved in social and emotional processing, with intact connectivity of the task-positive network, used in sustained attention and goal-directed thinking. In people with autism the two networks are not negatively correlated in time, suggesting an imbalance in toggling between the two networks, possibly reflecting a disturbance of self-referential thought.
The underconnectivity theory of autism hypothesizes that autism is marked by underfunctioning high-level neural connections and synchronization, along with an excess of low-level processes. Evidence for this theory has been found in functional neuroimaging studies on autistic individuals and by a brainwave study that suggested that adults with ASD have local overconnectivity in the cortex and weak functional connections between the frontal lobe and the rest of the cortex. Other evidence suggests the underconnectivity is mainly within each hemisphere of the cortex and that autism is a disorder of the association cortex.
From studies based on event-related potentials, transient changes to the brain's electrical activity in response to stimuli, there is considerable evidence for differences in autistic individuals with respect to attention, orientation to auditory and visual stimuli, novelty detection, language and face processing, and information storage; several studies have found a preference for nonsocial stimuli. For example, magnetoencephalography studies have found evidence in children with autism of delayed responses in the brain's processing of auditory signals.
In the genetic area, relations have been found between autism and schizophrenia based on duplications and deletions of chromosomes; research has shown that schizophrenia and autism are significantly more common in combination with 1q21.1 deletion syndrome. Research on autism/schizophrenia relations for chromosome 15 (15q13.3), chromosome 16 (16p13.1) and chromosome 17 (17p12) is inconclusive. Functional connectivity studies have found both hypo- and hyper-connectivity in brains of people with autism. Hypo-connectivity seems to dominate, especially for interhemispheric and cortico-cortical functional connectivity.
Two major categories of cognitive theories have been proposed about the links between autistic brains and behavior. The first category focuses on deficits in social cognition. Simon Baron-Cohen's empathizing–systemizing theory postulates that autistic individuals can systemize, that is, develop internal rules of operation to handle events inside the brain, but are less effective at empathizing, which involves handling events generated by other agents.
An extension, the extreme male brain theory, hypothesizes that autism is an extreme case of the male brain, defined psychometrically as individuals in whom systemizing is better than empathizing. These theories are somewhat related to Baron-Cohen's earlier theory of mind approach, which hypothesizes that autistic behavior arises from an inability to ascribe mental states to oneself and others. The theory of mind hypothesis is supported by the atypical responses of children with autism to the Sally–Anne test for reasoning about others' motivations, and the mirror neuron system theory of autism described in "Pathophysiology" maps well to the hypothesis. However, most studies have found no evidence of impairment in autistic individuals' ability to understand other people's basic intentions or goals; instead, data suggest that impairments are found in understanding more complex social emotions or in considering others' viewpoints.
The second category focuses on nonsocial or general processing: the executive functions, such as working memory, planning, and inhibition. In his review, Kenworthy states that "the claim of executive dysfunction as a causal factor in autism is controversial"; however, "it is clear that executive dysfunction plays a role in the social and cognitive deficits observed in individuals with autism". Tests of core executive processes such as eye movement tasks indicate improvement from late childhood to adolescence, but performance never reaches typical adult levels. A strength of the theory is predicting stereotyped behavior and narrow interests; two weaknesses are that executive function is hard to measure and that executive function deficits have not been found in young children with autism.
Weak central coherence theory hypothesizes that a limited ability to see the big picture underlies the central disturbance in autism. One strength of this theory is predicting special talents and peaks in performance in autistic people. A related theory, enhanced perceptual functioning, focuses more on the superiority of locally oriented and perceptual operations in autistic individuals. Yet another, monotropism, posits that autism stems from a different cognitive style, tending to focus attention (or processing resources) intensely, to the exclusion of other stimuli. These theories align well with the underconnectivity theory of autism.
Neither category is satisfactory on its own; social cognition theories poorly address autism's rigid and repetitive behaviors, while the nonsocial theories have difficulty explaining social impairment and communication difficulties. A combined theory based on multiple deficits may prove to be more useful.
Diagnosis is based on behavior, not cause or mechanism. Under the DSM-5, autism is characterized by persistent deficits in social communication and interaction across multiple contexts, as well as restricted, repetitive patterns of behavior, interests, or activities. These deficits are present in early childhood, typically before age three, and lead to clinically significant functional impairment. Sample symptoms include lack of social or emotional reciprocity, stereotyped and repetitive use of language or idiosyncratic language, and persistent preoccupation with unusual objects. The disturbance must not be better accounted for by Rett syndrome, intellectual disability or global developmental delay. ICD-10 uses essentially the same definition. Several diagnostic instruments are available.
Two are commonly used in autism research: the Autism Diagnostic Interview-Revised (ADI-R) is a semistructured parent interview, and the Autism Diagnostic Observation Schedule (ADOS) uses observation and interaction with the child. The Childhood Autism Rating Scale (CARS) is used widely in clinical environments to assess severity of autism based on observation of children. The Diagnostic interview for social and communication disorders (DISCO) may also be used. A pediatrician commonly performs a preliminary investigation by taking developmental history and physically examining the child. If warranted, diagnosis and evaluations are conducted with help from ASD specialists, observing and assessing cognitive, communication, family, and other factors using standardized tools, and taking into account any associated medical conditions. A pediatric neuropsychologist is often asked to assess behavior and cognitive skills, both to aid diagnosis and to help recommend educational interventions. A differential diagnosis for ASD at this stage might also consider intellectual disability, hearing impairment, and a specific language impairment such as Landau–Kleffner syndrome. The presence of autism can make it harder to diagnose coexisting psychiatric disorders such as depression. Clinical genetics evaluations are often done once ASD is diagnosed, particularly when other symptoms already suggest a genetic cause. Although genetic technology allows clinical geneticists to link an estimated 40% of cases to genetic causes, consensus guidelines in the US and UK are limited to high-resolution chromosome and fragile X testing. A genotype-first model of diagnosis has been proposed, which would routinely assess the genome's copy number variations. As new genetic tests are developed several ethical, legal, and social issues will emerge. Commercial availability of tests may precede adequate understanding of how to use test results, given the complexity of autism's genetics. Metabolic and neuroimaging tests are sometimes helpful, but are not routine. ASD can sometimes be diagnosed by age 14 months, although diagnosis becomes increasingly stable over the first three years of life: for example, a one-year-old who meets diagnostic criteria for ASD is less likely than a three-year-old to continue to do so a few years later. In the UK the National Autism Plan for Children recommends at most 30 weeks from first concern to completed diagnosis and assessment, though few cases are handled that quickly in practice. Although the symptoms of autism and ASD begin early in childhood, they are sometimes missed; years later, adults may seek diagnoses to help them or their friends and family understand themselves, to help their employers make adjustments, or in some locations to claim disability living allowances or other benefits. Girls are often diagnosed later than boys. Underdiagnosis and overdiagnosis are problems in marginal cases, and much of the recent increase in the number of reported ASD cases is likely due to changes in diagnostic practices. The increasing popularity of drug treatment options and the expansion of benefits has given providers incentives to diagnose ASD, resulting in some overdiagnosis of children with uncertain symptoms. Conversely, the cost of screening and diagnosis and the challenge of obtaining payment can inhibit or delay diagnosis. 
It is particularly hard to diagnose autism among the visually impaired, partly because some of its diagnostic criteria depend on vision, and partly because autistic symptoms overlap with those of common blindness syndromes or blindisms. Autism is one of the five pervasive developmental disorders (PDD), which are characterized by widespread abnormalities of social interactions and communication, and severely restricted interests and highly repetitive behavior. These symptoms do not imply sickness, fragility, or emotional disturbance. Of the five PDD forms, Asperger syndrome is closest to autism in signs and likely causes; Rett syndrome and childhood disintegrative disorder share several signs with autism, but may have unrelated causes; PDD not otherwise specified (PDD-NOS; also called "atypical autism") is diagnosed when the criteria are not met for a more specific disorder. Unlike with autism, people with Asperger syndrome have no substantial delay in language development. The terminology of autism can be bewildering, with autism, Asperger syndrome and PDD-NOS often called the "autism spectrum disorders" (ASD) or sometimes the "autistic disorders", whereas autism itself is often called "autistic disorder", "childhood autism", or "infantile autism". In this article, "autism" refers to the classic autistic disorder; in clinical practice, though, "autism", "ASD", and "PDD" are often used interchangeably. ASD, in turn, is a subset of the broader autism phenotype, which describes individuals who may not have ASD but do have autistic-like traits, such as avoiding eye contact. The manifestations of autism cover a wide spectrum, ranging from individuals with severe impairments—who may be silent, developmentally disabled, and locked into hand flapping and rocking—to high functioning individuals who may have active but distinctly odd social approaches, narrowly focused interests, and verbose, pedantic communication. Because the behavior spectrum is continuous, boundaries between diagnostic categories are necessarily somewhat arbitrary. Sometimes the syndrome is divided into low-, medium- or high-functioning autism (LFA, MFA, and HFA), based on IQ thresholds, or on how much support the individual requires in daily life; these subdivisions are not standardized and are controversial. Autism can also be divided into syndromal and non-syndromal autism; the syndromal autism is associated with severe or profound intellectual disability or a congenital syndrome with physical symptoms, such as tuberous sclerosis. Although individuals with Asperger syndrome tend to perform better cognitively than those with autism, the extent of the overlap between Asperger syndrome, HFA, and non-syndromal autism is unclear. Some studies have reported diagnoses of autism in children due to a loss of language or social skills, as opposed to a failure to make progress, typically from 15 to 30 months of age. The validity of this distinction remains controversial; it is possible that regressive autism is a specific subtype, or that there is a continuum of behaviors between autism with and without regression. Research into causes has been hampered by the inability to identify biologically meaningful subgroups within the autistic population and by the traditional boundaries between the disciplines of psychiatry, psychology, neurology and pediatrics. 
Newer technologies such as fMRI and diffusion tensor imaging can help identify biologically relevant phenotypes (observable traits) that can be viewed on brain scans, to help further neurogenetic studies of autism; one example is lowered activity in the fusiform face area of the brain, which is associated with impaired perception of people versus objects. It has been proposed to classify autism using genetics as well as behavior.
About half of parents of children with ASD notice their child's unusual behaviors by age 18 months, and about four-fifths notice by age 24 months. According to an article, failure to meet certain early developmental milestones "is an absolute indication to proceed with further evaluations. Delay in referral for such testing may delay early diagnosis and treatment and affect the long-term outcome". The United States Preventive Services Task Force in 2016 found it was unclear whether screening was beneficial or harmful among children in whom there are no concerns. The Japanese practice is to screen all children for ASD at 18 and 24 months, using autism-specific formal screening tests. In contrast, in the UK, children whose families or doctors recognize possible signs of autism are screened. It is not known which approach is more effective.
Screening tools include the Modified Checklist for Autism in Toddlers (M-CHAT), the Early Screening of Autistic Traits Questionnaire, and the First Year Inventory; initial data on M-CHAT and its predecessor, the Checklist for Autism in Toddlers (CHAT), on children aged 18–30 months suggest that it is best used in a clinical setting and that it has low sensitivity (many false negatives) but good specificity (few false positives); these two measures are defined in the note below. It may be more accurate to precede these tests with a broadband screener that does not distinguish ASD from other developmental disorders. Screening tools designed for one culture's norms for behaviors like eye contact may be inappropriate for a different culture. Although genetic screening for autism is generally still impractical, it can be considered in some cases, such as children with neurological symptoms and dysmorphic features.
While infection with rubella during pregnancy causes fewer than 1% of cases of autism, vaccination against rubella can prevent many of those cases.
The main goals when treating children with autism are to lessen associated deficits and family distress, and to increase quality of life and functional independence. In general, higher IQs are correlated with greater responsiveness to treatment and improved treatment outcomes. No single treatment is best and treatment is typically tailored to the child's needs. Families and the educational system are the main resources for treatment. Studies of interventions have methodological problems that prevent definitive conclusions about efficacy; however, the development of evidence-based interventions has advanced in recent years. Although many psychosocial interventions have some positive evidence, suggesting that some form of treatment is preferable to no treatment, the methodological quality of systematic reviews of these studies has generally been poor, their clinical results are mostly tentative, and there is little evidence for the relative effectiveness of treatment options.
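For readers unfamiliar with the screening terminology used above, sensitivity and specificity are defined from the counts of true and false positives and negatives produced by a test. The note below is a generic definition, not data reported for M-CHAT or CHAT.
```latex
% Standard screening-test definitions; TP, FN, TN and FP denote counts of
% true positives, false negatives, true negatives and false positives.
\[
  \text{sensitivity} = \frac{TP}{TP + FN},
  \qquad
  \text{specificity} = \frac{TN}{TN + FP}.
\]
```
In these terms, "low sensitivity but good specificity" means that many affected children are missed (false negatives) while relatively few unaffected children are incorrectly flagged (false positives).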
Intensive, sustained special education programs and behavior therapy early in life can help children acquire self-care, social, and job skills, and often improve functioning and decrease symptom severity and maladaptive behaviors; claims that intervention by around age three years is crucial are not substantiated. Available approaches include applied behavior analysis (ABA), developmental models, structured teaching, speech and language therapy, social skills therapy, and occupational therapy. Among these approaches, interventions either treat autistic features comprehensively, or focalize treatment on a specific area of deficit. There is some evidence that early intensive behavioral intervention (EIBI), an early intervention model based on ABA for 20 to 40 hours a week for multiple years, is an effective treatment for some children with ASD. Two theoretical frameworks outlined for early childhood intervention include applied behavioral analysis (ABA) and developmental social pragmatic models (DSP). One interventional strategy utilizes a parent training model, which teaches parents how to implement various ABA and DSP techniques, allowing for parents to disseminate interventions themselves. Various DSP programs have been developed to explicitly deliver intervention systems through at-home parent implementation. Despite the recent development of parent training models, these interventions have demonstrated effectiveness in numerous studies, being evaluated as a probable efficacious mode of treatment. Educational interventions can be effective to varying degrees in most children: intensive ABA treatment has demonstrated effectiveness in enhancing global functioning in preschool children and is well-established for improving intellectual performance of young children. Similarly, teacher-implemented intervention that utilizes an ABA combined with a developmental social pragmatic approach has been found to be a well-established treatment in improving social-communication skills in young children, although there is less evidence in its treatment of global symptoms. Neuropsychological reports are often poorly communicated to educators, resulting in a gap between what a report recommends and what education is provided. It is not known whether treatment programs for children lead to significant improvements after the children grow up, and the limited research on the effectiveness of adult residential programs shows mixed results. The appropriateness of including children with varying severity of autism spectrum disorders in the general education population is a subject of current debate among educators and researchers. Many medications are used to treat ASD symptoms that interfere with integrating a child into home or school when behavioral treatment fails. More than half of US children diagnosed with ASD are prescribed psychoactive drugs or anticonvulsants, with the most common drug classes being antidepressants, stimulants, and antipsychotics. Antipsychotics, such as risperidone and aripiprazole, have been found to be useful for treating irritability, repetitive behavior, and sleeplessness that often occurs with autism, however their side effects must be weighed against their potential benefits, and people with autism may respond atypically. There is scant reliable research about the effectiveness or safety of drug treatments for adolescents and adults with ASD. No known medication relieves autism's core symptoms of social and communication impairments. 
Experiments in mice have reversed or reduced some symptoms related to autism by replacing or modulating gene function, suggesting the possibility of targeting therapies to specific rare mutations known to cause autism.
Although many alternative therapies and interventions are available, few are supported by scientific studies. Treatment approaches have little empirical support in quality-of-life contexts, and many programs focus on success measures that lack predictive validity and real-world relevance. Some alternative treatments may place the child at risk. A 2008 study found that, compared to their peers, autistic boys have significantly thinner bones if on casein-free diets; in 2005, botched chelation therapy killed a five-year-old child with autism. Another alternative medicine practice with no evidence is CEASE therapy, a mixture of homeopathy, supplements, and 'vaccine detoxing'. Although popularly used as an alternative treatment for people with autism, there is no good evidence that a gluten-free diet is of benefit. In the subset of people who have gluten sensitivity, there is limited evidence suggesting that a gluten-free diet may improve some autistic behaviors. There is tentative evidence that music therapy may improve social interactions, verbal communication, and non-verbal communication skills. There has been early research looking at hyperbaric treatments in children with autism.
The emergence of the autism rights movement has served as an attempt to encourage people to be more tolerant of those with autism. Through this movement, people hope to cause others to think of autism as a difference instead of a disease. Proponents of this movement wish to seek "acceptance, not cures." There have also been many worldwide events promoting autism awareness, such as World Autism Awareness Day, Light It Up Blue, Autism Sunday, Autistic Pride Day, Autreat, and others. There have also been many organizations dedicated to increasing awareness of autism and the effects that autism has on someone's life. These organizations include Autism Speaks, the Autism National Committee, the Autism Society of America, and many others. Social-science scholars have had an increased focus on studying those with autism in hopes of learning more about "autism as a culture, transcultural comparisons... and research on social movements."
The media has influenced how the public perceives those with autism. "Rain Man", a film that won four Oscars including Best Picture, depicts a character with autism who has incredible talents and abilities. While many autistic individuals do not have these special abilities, some have been successful in their fields.
Treatment is expensive; indirect costs are more so. For someone born in 2000, a US study estimated an average lifetime cost of $ (net present value in dollars, inflation-adjusted from 2003 estimate), with about 10% medical care, 30% extra education and other care, and 60% lost economic productivity. Publicly supported programs are often inadequate or inappropriate for a given child, and unreimbursed out-of-pocket medical or therapy expenses are associated with a higher likelihood of family financial problems; one 2008 US study found a 14% average loss of annual income in families of children with ASD, and a related study found that ASD is associated with a higher probability that child care problems will greatly affect parental employment.
US states increasingly require private health insurance to cover autism services, shifting costs from publicly funded education programs to privately funded health insurance. After childhood, key treatment issues include residential care, job training and placement, sexuality, social skills, and estate planning.
There is no known cure. Children recover occasionally, so that they lose their diagnosis of ASD; this occurs sometimes after intensive treatment and sometimes not. It is not known how often recovery happens; reported rates in unselected samples of children with ASD have ranged from 3% to 25%. Most children with autism acquire language by age five or younger, though a few have developed communication skills in later years. (See also: nonverbal autism) Most children with autism lack social support, meaningful relationships, future employment opportunities or self-determination. Although core difficulties tend to persist, symptoms often become less severe with age. Few high-quality studies address long-term prognosis. Some adults show modest improvement in communication skills, but a few decline; no study has focused on autism after midlife. Acquiring language before age six, having an IQ above 50, and having a marketable skill all predict better outcomes; independent living is unlikely with severe autism. Most people with autism face significant obstacles in transitioning to adulthood.
Most recent reviews tend to estimate a prevalence of 1–2 per 1,000 for autism and close to 6 per 1,000 for ASD, and 11 per 1,000 children in the United States for ASD as of 2008; because of inadequate data, these numbers may underestimate ASD's true rate. Globally, autism affects an estimated 24.8 million people as of 2015, while Asperger syndrome affects a further 37.2 million. In 2012, the NHS estimated that the overall prevalence of autism among adults aged 18 years and over in the UK was 1.1%. The rate of PDD-NOS has been estimated at 3.7 per 1,000, Asperger syndrome at roughly 0.6 per 1,000, and childhood disintegrative disorder at 0.02 per 1,000. The CDC's most recent estimate is that 1 out of every 68 children, or 14.7 per 1,000, has an ASD as of 2010.
The number of reported cases of autism increased dramatically in the 1990s and early 2000s. This increase is largely attributable to changes in diagnostic practices, referral patterns, availability of services, age at diagnosis, and public awareness, though unidentified environmental risk factors cannot be ruled out. The available evidence does not rule out the possibility that autism's true prevalence has increased; a real increase would suggest directing more attention and funding toward changing environmental factors instead of continuing to focus on genetics.
Boys are at higher risk for ASD than girls. The sex ratio averages 4.3:1 and is greatly modified by cognitive impairment: it may be close to 2:1 with intellectual disability and more than 5.5:1 without. Several theories about the higher prevalence in males have been investigated, but the cause of the difference is unconfirmed; one theory is that females are underdiagnosed. Although the evidence does not implicate any single pregnancy-related risk factor as a cause of autism, the risk of autism is associated with advanced age in either parent, and with diabetes, bleeding, and use of psychiatric drugs in the mother during pregnancy.
The risk is greater with older fathers than with older mothers; two potential explanations are the known increase in mutation burden in older sperm, and the hypothesis that men marry later if they carry genetic liability and show some signs of autism. Most professionals believe that race, ethnicity, and socioeconomic background do not affect the occurrence of autism. Several other conditions are common in children with autism.
A few examples of autistic symptoms and treatments were described long before autism was named. The "Table Talk" of Martin Luther, compiled by his notetaker, Mathesius, contains the story of a 12-year-old boy who may have been severely autistic. Luther reportedly thought the boy was a soulless mass of flesh possessed by the devil, and suggested that he be suffocated, although a later critic has cast doubt on the veracity of this report. The earliest well-documented case of autism is that of Hugh Blair of Borgue, as detailed in a 1747 court case in which his brother successfully petitioned to annul Blair's marriage to gain Blair's inheritance. The Wild Boy of Aveyron, a feral child caught in 1798, showed several signs of autism; the medical student Jean Itard treated him with a behavioral program designed to help him form social attachments and to induce speech via imitation.
The New Latin word "autismus" (English translation "autism") was coined by the Swiss psychiatrist Eugen Bleuler in 1910 as he was defining symptoms of schizophrenia. He derived it from the Greek word "autós" (αὐτός, meaning "self"), and used it to mean morbid self-admiration, referring to "autistic withdrawal of the patient to his fantasies, against which any influence from outside becomes an intolerable disturbance". The word "autism" first took its modern sense in 1938 when Hans Asperger of the Vienna University Hospital adopted Bleuler's terminology "autistic psychopaths" in a lecture in German about child psychology. Asperger was investigating an ASD now known as Asperger syndrome, though for various reasons it was not widely recognized as a separate diagnosis until 1981. Leo Kanner of the Johns Hopkins Hospital first used "autism" in its modern sense in English when he introduced the label "early infantile autism" in a 1943 report of 11 children with striking behavioral similarities. Almost all the characteristics described in Kanner's first paper on the subject, notably "autistic aloneness" and "insistence on sameness", are still regarded as typical of the autistic spectrum of disorders. It is not known whether Kanner derived the term independently of Asperger.
Donald Triplett was the first person diagnosed with autism. He was diagnosed by Kanner after being first examined in 1938, and was labeled as "case 1". Triplett was noted for his savant abilities, particularly being able to name musical notes played on a piano and to mentally multiply numbers. His father, Oliver, described him as socially withdrawn but interested in number patterns, music notes, letters of the alphabet, and U.S. president pictures. By the age of 2, he could recite the 23rd Psalm and had memorized 25 questions and answers from the Presbyterian catechism. He was also interested in creating musical chords.
Kanner's reuse of "autism" led to decades of confused terminology like "infantile schizophrenia", and child psychiatry's focus on maternal deprivation led to misconceptions of autism as an infant's response to "refrigerator mothers".
Starting in the late 1960s autism was established as a separate syndrome. As late as the mid-1970s there was little evidence of a genetic role in autism, but by 2007 it was believed to be one of the most heritable psychiatric conditions. Although the rise of parent organizations and the destigmatization of childhood ASD have affected how ASD is viewed, parents continue to feel social stigma in situations where their child's autistic behavior is perceived negatively, and many primary care physicians and medical specialists express some beliefs consistent with outdated autism research.
It took until 1980 for the DSM-III to differentiate autism from childhood schizophrenia. In 1987, the DSM-III-R provided a checklist for diagnosing autism. In May 2013, the DSM-5 was released, updating the classification for pervasive developmental disorders. The grouping of disorders, including PDD-NOS, autism, Asperger syndrome, Rett syndrome, and CDD, has been removed and replaced with the general term of autism spectrum disorders. The two categories that exist are impaired social communication and/or interaction, and restricted and/or repetitive behaviors.
The Internet has helped autistic individuals bypass nonverbal cues and emotional sharing that they find difficult to deal with, and has given them a way to form online communities and work remotely. Societal and cultural aspects of autism have developed: some in the community seek a cure, while others believe that autism is simply another way of being.
Amphibian
Amphibians are ectothermic, tetrapod vertebrates of the class Amphibia. Modern amphibians are all Lissamphibia. They inhabit a wide variety of habitats, with most species living within terrestrial, fossorial, arboreal or freshwater aquatic ecosystems. Amphibians typically start out as larvae living in water, but some species have developed behavioural adaptations to bypass this. The young generally undergo metamorphosis from larva with gills to an adult air-breathing form with lungs. Amphibians use their skin as a secondary respiratory surface, and some small terrestrial salamanders and frogs lack lungs and rely entirely on their skin. They are superficially similar to reptiles such as lizards, but reptiles, along with mammals and birds, are amniotes and do not require water bodies in which to breed. With their complex reproductive needs and permeable skins, amphibians are often ecological indicators; in recent decades there has been a dramatic decline in amphibian populations for many species around the globe.
The earliest amphibians evolved in the Devonian period from sarcopterygian fish with lungs and bony-limbed fins, features that were helpful in adapting to dry land. They diversified and became dominant during the Carboniferous and Permian periods, but were later displaced by reptiles and other vertebrates. Over time, amphibians shrank in size and decreased in diversity, leaving only the modern subclass Lissamphibia. The three modern orders of amphibians are Anura (the frogs and toads), Urodela (the salamanders), and Apoda (the caecilians). The number of known amphibian species is approximately 7,000, of which nearly 90% are frogs. The smallest amphibian (and vertebrate) in the world is a frog from New Guinea ("Paedophryne amauensis") with a length of just . The largest living amphibian is the Chinese giant salamander ("Andrias davidianus"), but this is dwarfed by the extinct "Prionosuchus" from the middle Permian of Brazil.
The study of amphibians is called batrachology, while the study of both reptiles and amphibians is called herpetology. The word "amphibian" is derived from the Ancient Greek term ἀμφίβιος ("amphíbios"), which means "both kinds of life", "ἀμφί" meaning "of both kinds" and "βιος" meaning "life". The term was initially used as a general adjective for animals that could live on land or in water, including seals and otters.
Traditionally, the class Amphibia includes all tetrapod vertebrates that are not amniotes. Amphibia in its widest sense ("sensu lato") was divided into three subclasses, two of which are extinct. The actual number of species in each group depends on the taxonomic classification followed. The two most common systems are the classification adopted by the website AmphibiaWeb, University of California, Berkeley, and the classification by herpetologist Darrel Frost and the American Museum of Natural History, available as the online reference database "Amphibian Species of the World". The numbers of species cited above follow Frost, and the total number of known amphibian species is over 7,000, of which nearly 90% are frogs.
With the phylogenetic classification, the taxon Labyrinthodontia has been discarded as it is a paraphyletic group without unique defining features apart from shared primitive characteristics. Classification varies according to the preferred phylogeny of the author and whether they use a stem-based or a node-based classification. Traditionally, amphibians as a class are defined as all tetrapods with a larval stage, while the group that includes the common ancestors of all living amphibians (frogs, salamanders and caecilians) and all their descendants is called Lissamphibia. The phylogeny of Paleozoic amphibians is uncertain, and Lissamphibia may possibly fall within extinct groups, like the Temnospondyli (traditionally placed in the subclass Labyrinthodontia) or the Lepospondyli, and in some analyses even in the amniotes. This means that advocates of phylogenetic nomenclature have removed a large number of basal Devonian and Carboniferous amphibian-type tetrapod groups that were formerly placed in Amphibia in Linnaean taxonomy, and included them elsewhere under cladistic taxonomy. If the common ancestor of amphibians and amniotes is included in Amphibia, it becomes a paraphyletic group.
All modern amphibians are included in the subclass Lissamphibia, which is usually considered a clade, a group of species that have evolved from a common ancestor. The three modern orders are Anura (the frogs and toads), Caudata (or Urodela, the salamanders), and Gymnophiona (or Apoda, the caecilians). It has been suggested that salamanders arose separately from a temnospondyl-like ancestor, and even that caecilians are the sister group of the advanced reptiliomorph amphibians, and thus of amniotes. Although the fossils of several older proto-frogs with primitive characteristics are known, the oldest "true frog" is "Prosalirus bitis", from the Early Jurassic Kayenta Formation of Arizona. It is anatomically very similar to modern frogs. The oldest known caecilian is another Early Jurassic species, "Eocaecilia micropodia", also from Arizona. The earliest salamander is "Beiyanerpeton jianpingensis" from the Late Jurassic of northeastern China. Authorities disagree as to whether Salientia is a superorder that includes the order Anura, or whether Anura is a sub-order of the order Salientia.
The Lissamphibia are traditionally divided into three orders, but an extinct salamander-like family, the Albanerpetontidae, is now considered part of Lissamphibia alongside the superorder Salientia. Furthermore, Salientia includes all three recent orders plus the Triassic proto-frog, "Triadobatrachus". The first major groups of amphibians developed in the Devonian period, around 370 million years ago, from lobe-finned fish which were similar to the modern coelacanth and lungfish. These ancient lobe-finned fish had evolved multi-jointed leg-like fins with digits that enabled them to crawl along the sea bottom. Some fish had developed primitive lungs to help them breathe air when the stagnant pools of the Devonian swamps were low in oxygen. They could also use their strong fins to hoist themselves out of the water and onto dry land if circumstances so required. Eventually, their bony fins would evolve into limbs and they would become the ancestors to all tetrapods, including modern amphibians, reptiles, birds, and mammals. Despite being able to crawl on land, many of these prehistoric tetrapodomorph fish still spent most of their time in the water. They had started to develop lungs, but still breathed predominantly with gills. Many examples of species showing transitional features have been discovered. "Ichthyostega" was one of the first primitive amphibians, with nostrils and more efficient lungs. It had four sturdy limbs, a neck, a tail with fins and a skull very similar to that of the lobe-finned fish, "Eusthenopteron". Amphibians evolved adaptations that allowed them to stay out of the water for longer periods. Their lungs improved and their skeletons became heavier and stronger, better able to support the weight of their bodies on land. They developed "hands" and "feet" with five or more digits; the skin became more capable of retaining body fluids and resisting desiccation. The fish's hyomandibula bone in the hyoid region behind the gills diminished in size and became the stapes of the amphibian ear, an adaptation necessary for hearing on dry land. An affinity between the amphibians and the teleost fish is the multi-folded structure of the teeth and the paired supra-occipital bones at the back of the head, neither of these features being found elsewhere in the animal kingdom. At the end of the Devonian period (360 million years ago), the seas, rivers and lakes were teeming with life while the land was the realm of early plants and devoid of vertebrates, though some, such as "Ichthyostega", may have sometimes hauled themselves out of the water. It is thought they may have propelled themselves with their forelimbs, dragging their hindquarters in a similar manner to that used by the elephant seal. In the early Carboniferous (360 to 345 million years ago), the climate became wet and warm. Extensive swamps developed with mosses, ferns, horsetails and calamites. Air-breathing arthropods evolved and invaded the land where they provided food for the carnivorous amphibians that began to adapt to the terrestrial environment. There were no other tetrapods on the land and the amphibians were at the top of the food chain, occupying the ecological position currently held by the crocodile. Though equipped with limbs and the ability to breathe air, most still had a long tapering body and strong tail. They were the top land predators, sometimes reaching several metres in length, preying on the large insects of the period and the many types of fish in the water. 
They still needed to return to water to lay their shell-less eggs, and even most modern amphibians have a fully aquatic larval stage with gills like their fish ancestors. It was the development of the amniotic egg, which prevents the developing embryo from drying out, that enabled the reptiles to reproduce on land and which led to their dominance in the period that followed. After the Carboniferous rainforest collapse, amphibian dominance gave way to reptiles, and amphibians were further devastated by the Permian–Triassic extinction event. During the Triassic Period (250 to 200 million years ago), the reptiles continued to out-compete the amphibians, leading to a reduction in both the amphibians' size and their importance in the biosphere. According to the fossil record, Lissamphibia, which includes all modern amphibians and is the only surviving lineage, may have branched off from the extinct groups Temnospondyli and Lepospondyli at some period between the Late Carboniferous and the Early Triassic. The relative scarcity of fossil evidence precludes precise dating, but the most recent molecular study, based on multilocus sequence typing, suggests a Late Carboniferous/Early Permian origin for extant amphibians. The origins and evolutionary relationships between the three main groups of amphibians are a matter of debate. A 2005 molecular phylogeny, based on rDNA analysis, suggests that salamanders and caecilians are more closely related to each other than they are to frogs. It also appears that the divergence of the three groups took place in the Paleozoic or early Mesozoic (around 250 million years ago), before the breakup of the supercontinent Pangaea and soon after their divergence from the lobe-finned fish. The briefness of this period, and the swiftness with which radiation took place, would help account for the relative scarcity of primitive amphibian fossils. There are large gaps in the fossil record, but the discovery of "Gerobatrachus hottoni" from the Early Permian in Texas in 2008 provided a missing link with many of the characteristics of modern frogs. Molecular analysis suggests that the frog–salamander divergence took place considerably earlier than the palaeontological evidence indicates. Newer research indicates that the common ancestor of all Lissamphibians lived about 315 million years ago, and that stereospondyls are the closest relatives to the caecilians. As they evolved from lunged fish, amphibians had to make certain adaptations for living on land, including the need to develop new means of locomotion. In the water, the sideways thrusts of their tails had propelled them forward, but on land, quite different mechanisms were required. Their vertebral columns, limbs, limb girdles and musculature needed to be strong enough to raise them off the ground for locomotion and feeding. Terrestrial adults discarded their lateral line systems and adapted their sensory systems to receive stimuli via the medium of the air. They needed to develop new methods to regulate their body heat to cope with fluctuations in ambient temperature. They developed behaviours suitable for reproduction in a terrestrial environment. Their skins were exposed to harmful ultraviolet rays that had previously been absorbed by the water. The skin changed to become more protective and prevent excessive water loss. The superclass Tetrapoda is divided into four classes of vertebrate animals with four limbs. 
Reptiles, birds and mammals are amniotes, the eggs of which are either laid or carried by the female and are surrounded by several membranes, some of which are impervious. Lacking these membranes, amphibians require water bodies for reproduction, although some species have developed various strategies for protecting or bypassing the vulnerable aquatic larval stage. They are not found in the sea with the exception of one or two frogs that live in brackish water in mangrove swamps; Anderson's salamander meanwhile occurs in brackish or salt water lakes. On land, amphibians are restricted to moist habitats because of the need to keep their skin damp. The smallest amphibian (and vertebrate) in the world is a microhylid frog from New Guinea ("Paedophryne amauensis") first discovered in 2012. It has an average length of and is part of a genus that contains four of the world's ten smallest frog species. The largest living amphibian is the Chinese giant salamander ("Andrias davidianus") but this is a great deal smaller than the largest amphibian that ever existed—the extinct "Prionosuchus", a crocodile-like temnospondyl dating to 270 million years ago from the Middle Permian of Brazil. The largest frog is the African Goliath frog ("Conraua goliath"), which can reach and weigh . Amphibians are ectothermic (cold-blooded) vertebrates that do not maintain their body temperature through internal physiological processes. Their metabolic rate is low and as a result, their food and energy requirements are limited. In the adult state, they have tear ducts and movable eyelids, and most species have ears that can detect airborne or ground vibrations. They have muscular tongues, which in many species can be protruded. Modern amphibians have fully ossified vertebrae with articular processes. Their ribs are usually short and may be fused to the vertebrae. Their skulls are mostly broad and short, and are often incompletely ossified. Their skin contains little keratin and lacks scales, apart from a few fish-like scales in certain caecilians. The skin contains many mucous glands and in some species, poison glands (a type of granular gland). The hearts of amphibians have three chambers, two atria and one ventricle. They have a urinary bladder and nitrogenous waste products are excreted primarily as urea. Most amphibians lay their eggs in water and have aquatic larvae that undergo metamorphosis to become terrestrial adults. Amphibians breathe by means of a pump action in which air is first drawn into the buccopharyngeal region through the nostrils. The nostrils are then closed and the air is forced into the lungs by contraction of the throat. They supplement this with gas exchange through the skin. The order Anura (from the Ancient Greek "a(n)-" meaning "without" and "oura" meaning "tail") comprises the frogs and toads. They usually have long hind limbs that fold underneath them, shorter forelimbs, webbed toes with no claws, no tails, large eyes and glandular moist skin. Members of this order with smooth skins are commonly referred to as frogs, while those with warty skins are known as toads. The difference is not a formal one taxonomically and there are numerous exceptions to this rule. Members of the family Bufonidae are known as the "true toads". Frogs range in size from the Goliath frog ("Conraua goliath") of West Africa to "Paedophryne amauensis", first described in Papua New Guinea in 2012, which is also the smallest known vertebrate. 
Although most species are associated with water and damp habitats, some are specialised to live in trees or in deserts. They are found worldwide except for polar areas. Anura is divided into three suborders that are broadly accepted by the scientific community, but the relationships between some families remain unclear. Future molecular studies should provide further insights into their evolutionary relationships. The suborder Archaeobatrachia contains four families of primitive frogs. These are Ascaphidae, Bombinatoridae, Discoglossidae and Leiopelmatidae, which have few derived features and are probably paraphyletic with regard to other frog lineages. The six families in the more evolutionarily advanced suborder Mesobatrachia are the fossorial Megophryidae, Pelobatidae, Pelodytidae, Scaphiopodidae and Rhinophrynidae and the obligatorily aquatic Pipidae. These have certain characteristics that are intermediate between the two other suborders. Neobatrachia is by far the largest suborder and includes the remaining families of modern frogs, including most common species. Ninety-six percent of the over 5,000 extant species of frog are neobatrachians. The order Caudata (from the Latin "cauda" meaning "tail") consists of the salamanders—elongated, low-slung animals that mostly resemble lizards in form. This is a symplesiomorphic trait and they are no more closely related to lizards than they are to mammals. Salamanders lack claws, have scale-free skins, either smooth or covered with tubercles, and tails that are usually flattened from side to side and often finned. They range in size from the Chinese giant salamander ("Andrias davidianus"), which has been reported to grow to a length of , to the diminutive "Thorius pennatulus" from Mexico, which seldom exceeds in length. Salamanders have a mostly Laurasian distribution, being present in much of the Holarctic region of the northern hemisphere. The family Plethodontidae is also found in Central America and South America north of the Amazon basin; South America was apparently invaded from Central America by about the start of the Miocene, 23 million years ago. Urodela is a name sometimes used for all the extant species of salamanders. Members of several salamander families have become paedomorphic and either fail to complete their metamorphosis or retain some larval characteristics as adults. Most salamanders are under long. They may be terrestrial or aquatic and many spend part of the year in each habitat. When on land, they mostly spend the day hidden under stones or logs or in dense vegetation, emerging in the evening and night to forage for worms, insects and other invertebrates. The suborder Cryptobranchoidea contains the primitive salamanders. A number of fossil cryptobranchids have been found, but there are only three living species, the Chinese giant salamander ("Andrias davidianus"), the Japanese giant salamander ("Andrias japonicus") and the hellbender ("Cryptobranchus alleganiensis") from North America. These large amphibians retain several larval characteristics in their adult state; gill slits are present and the eyes are unlidded. A unique feature is their ability to feed by suction, depressing either the left side of their lower jaw or the right. The males excavate nests, persuade females to lay their egg strings inside them, and guard them. As well as breathing with lungs, they respire through the many folds in their thin skin, which has capillaries close to the surface. The suborder Salamandroidea contains the advanced salamanders. 
They differ from the cryptobranchids by having fused prearticular bones in the lower jaw, and by using internal fertilisation. In salamandrids, the male deposits a bundle of sperm, the spermatophore, and the female picks it up and inserts it into her cloaca where the sperm is stored until the eggs are laid. The largest family in this group is Plethodontidae, the lungless salamanders, which includes 60% of all salamander species. The family Salamandridae includes the true salamanders and the name "newt" is given to members of its subfamily Pleurodelinae. The third suborder, Sirenoidea, contains the four species of sirens, which are in a single family, Sirenidae. Members of this suborder are eel-like aquatic salamanders with much reduced forelimbs and no hind limbs. Some of their features are primitive while others are derived. Fertilisation is likely to be external as sirenids lack the cloacal glands used by male salamandrids to produce spermatophores and the females lack spermathecae for sperm storage. Despite this, the eggs are laid singly, a behaviour not conducive for external fertilisation. The order Gymnophiona (from the Greek "gymnos" meaning "naked" and "ophis" meaning "serpent") or Apoda (from the Greek "an-" meaning "without" and "poda" meaning "legs") comprises the caecilians. These are long, cylindrical, limbless animals with a snake- or worm-like form. The adults vary in length from 8 to 75 centimetres (3 to 30 inches) with the exception of Thomson's caecilian ("Caecilia thompsoni"), which can reach . A caecilian's skin has a large number of transverse folds and in some species contains tiny embedded dermal scales. It has rudimentary eyes covered in skin, which are probably limited to discerning differences in light intensity. It also has a pair of short tentacles near the eye that can be extended and which have tactile and olfactory functions. Most caecilians live underground in burrows in damp soil, in rotten wood and under plant debris, but some are aquatic. Most species lay their eggs underground and when the larvae hatch, they make their way to adjacent bodies of water. Others brood their eggs and the larvae undergo metamorphosis before the eggs hatch. A few species give birth to live young, nourishing them with glandular secretions while they are in the oviduct. Caecilians have a mostly Gondwanan distribution, being found in tropical regions of Africa, Asia and Central and South America. The skin of amphibians contains some typical characteristics common to terrestrial vertebrates, such as the presence of highly cornified outer layers, renewed periodically through a moulting process controlled by the pituitary and thyroid glands. Local thickenings (often called warts) are common, such as those found on toads. The outside of the skin is shed periodically mostly in one piece, in contrast to mammals and birds where it is shed in flakes. Amphibians often eat the sloughed skin. Caecilians are unique among amphibians in having mineralized dermal scales embedded in the dermis between the furrows in the skin. The similarity of these to the scales of bony fish is largely superficial. Lizards and some frogs have somewhat similar osteoderms forming bony deposits in the dermis, but this is an example of convergent evolution with similar structures having arisen independently in diverse vertebrate lineages. Amphibian skin is permeable to water. 
Gas exchange can take place through the skin (cutaneous respiration) and this allows adult amphibians to respire without rising to the surface of water and to hibernate at the bottom of ponds. To compensate for their thin and delicate skin, amphibians have evolved mucous glands, principally on their heads, backs and tails. The secretions produced by these help keep the skin moist. In addition, most species of amphibian have granular glands that secrete distasteful or poisonous substances. Some amphibian toxins can be lethal to humans while others have little effect. The main poison-producing glands, the parotoids, produce the neurotoxin bufotoxin and are located behind the ears of toads, along the backs of frogs, behind the eyes of salamanders and on the upper surface of caecilians. The skin colour of amphibians is produced by three layers of pigment cells called chromatophores. These three cell layers consist of the melanophores (occupying the deepest layer), the guanophores (forming an intermediate layer and containing many granules, producing a blue-green colour) and the lipophores (yellow, the most superficial layer). The colour change displayed by many species is initiated by hormones secreted by the pituitary gland. Unlike bony fish, there is no direct control of the pigment cells by the nervous system, and this results in the colour change taking place more slowly than happens in fish. A vividly coloured skin usually indicates that the species is toxic and is a warning sign to predators. Amphibians have a skeletal system that is structurally homologous to other tetrapods, though with a number of variations. They all have four limbs except for the legless caecilians and a few species of salamander with reduced or no limbs. The bones are hollow and lightweight. The musculoskeletal system is strong to enable it to support the head and body. The bones are fully ossified and the vertebrae interlock with each other by means of overlapping processes. The pectoral girdle is supported by muscle, and the well-developed pelvic girdle is attached to the backbone by a pair of sacral ribs. The ilium slopes forward and the body is held closer to the ground than is the case in mammals. In most amphibians, there are four digits on the fore foot and five on the hind foot, but no claws on either. Some salamanders have fewer digits and the amphiumas are eel-like in appearance with tiny, stubby legs. The sirens are aquatic salamanders with stumpy forelimbs and no hind limbs. The caecilians are limbless. They burrow in the manner of earthworms with zones of muscle contractions moving along the body. On the surface of the ground or in water they move by undulating their body from side to side. In frogs, the hind legs are larger than the fore legs, especially so in those species that principally move by jumping or swimming. In the walkers and runners the hind limbs are not so large, and the burrowers mostly have short limbs and broad bodies. The feet have adaptations for the way of life, with webbing between the toes for swimming, broad adhesive toe pads for climbing, and keratinised tubercles on the hind feet for digging (frogs usually dig backwards into the soil). In most salamanders, the limbs are short and more or less the same length and project at right angles from the body. Locomotion on land is by walking and the tail often swings from side to side or is used as a prop, particularly when climbing. 
In their normal gait, only one leg is advanced at a time in the manner adopted by their ancestors, the lobe-finned fish. Some salamanders in the genus "Aneides" and certain plethodontids climb trees and have long limbs, large toepads and prehensile tails. In aquatic salamanders and in frog tadpoles, the tail has dorsal and ventral fins and is moved from side to side as a means of propulsion. Adult frogs do not have tails and caecilians have only very short ones. Salamanders use their tails in defence and some are prepared to jettison them to save their lives in a process known as autotomy. Certain species in the Plethodontidae have a weak zone at the base of the tail and use this strategy readily. The tail often continues to twitch after separation which may distract the attacker and allow the salamander to escape. Both tails and limbs can be regenerated. Adult frogs are unable to regrow limbs but tadpoles can do so. Amphibians have a juvenile stage and an adult stage, and the circulatory systems of the two are distinct. In the juvenile (or tadpole) stage, the circulation is similar to that of a fish; the two-chambered heart pumps the blood through the gills where it is oxygenated, and is spread around the body and back to the heart in a single loop. In the adult stage, amphibians (especially frogs) lose their gills and develop lungs. They have a heart that consists of a single ventricle and two atria. When the ventricle starts contracting, deoxygenated blood is pumped through the pulmonary artery to the lungs. Continued contraction then pumps oxygenated blood around the rest of the body. Mixing of the two bloodstreams is minimized by the anatomy of the chambers. The nervous system is basically the same as in other vertebrates, with a central brain, a spinal cord, and nerves throughout the body. The amphibian brain is less well developed than that of reptiles, birds and mammals but is similar in morphology and function to that of a fish. It is believed amphibians are capable of perceiving pain. The brain consists of equal parts, cerebrum, midbrain and cerebellum. Various parts of the cerebrum process sensory input, such as smell in the olfactory lobe and sight in the optic lobe, and it is additionally the centre of behaviour and learning. The cerebellum is the center of muscular coordination and the medulla oblongata controls some organ functions including heartbeat and respiration. The brain sends signals through the spinal cord and nerves to regulate activity in the rest of the body. The pineal body, known to regulate sleep patterns in humans, is thought to produce the hormones involved in hibernation and aestivation in amphibians. Tadpoles retain the lateral line system of their ancestral fishes, but this is lost in terrestrial adult amphibians. Some caecilians possess electroreceptors that allow them to locate objects around them when submerged in water. The ears are well developed in frogs. There is no external ear, but the large circular eardrum lies on the surface of the head just behind the eye. This vibrates and sound is transmitted through a single bone, the stapes, to the inner ear. Only high-frequency sounds like mating calls are heard in this way, but low-frequency noises can be detected through another mechanism. There is a patch of specialized hair cells, called "papilla amphibiorum", in the inner ear capable of detecting deeper sounds. 
Another feature, unique to frogs and salamanders, is the columella-operculum complex adjoining the auditory capsule which is involved in the transmission of both airborne and seismic signals. The ears of salamanders and caecilians are less highly developed than those of frogs as they do not normally communicate with each other through the medium of sound. The eyes of tadpoles lack lids, but at metamorphosis, the cornea becomes more dome-shaped, the lens becomes flatter, and eyelids and associated glands and ducts develop. The adult eyes are an improvement on invertebrate eyes and were a first step in the development of more advanced vertebrate eyes. They allow colour vision and depth of focus. In the retinas are green rods, which are receptive to a wide range of wavelengths. Many amphibians catch their prey by flicking out an elongated tongue with a sticky tip and drawing it back into the mouth before seizing the item with their jaws. Some use inertial feeding to help them swallow the prey, repeatedly thrusting their head forward sharply causing the food to move backwards in their mouth by inertia. Most amphibians swallow their prey whole without much chewing so they possess voluminous stomachs. The short oesophagus is lined with cilia that help to move the food to the stomach and mucus produced by glands in the mouth and pharynx eases its passage. The enzyme chitinase produced in the stomach helps digest the chitinous cuticle of arthropod prey. Amphibians possess a pancreas, liver and gall bladder. The liver is usually large with two lobes. Its size is determined by its function as a glycogen and fat storage unit, and may change with the seasons as these reserves are built or used up. Adipose tissue is another important means of storing energy and this occurs in the abdomen (in internal structures called fat bodies), under the skin and, in some salamanders, in the tail. There are two kidneys located dorsally, near the roof of the body cavity. Their job is to filter the blood of metabolic waste and transport the urine via ureters to the urinary bladder where it is stored before being passed out periodically through the cloacal vent. Larvae and most aquatic adult amphibians excrete the nitrogen as ammonia in large quantities of dilute urine, while terrestrial species, with a greater need to conserve water, excrete the less toxic product urea. Some tree frogs with limited access to water excrete most of their metabolic waste as uric acid. The lungs in amphibians are primitive compared to those of amniotes, possessing few internal septa and large alveoli, and consequently having a comparatively slow diffusion rate for oxygen entering the blood. Ventilation is accomplished by buccal pumping. Most amphibians, however, are able to exchange gases with the water or air via their skin. To enable sufficient cutaneous respiration, the surface of their highly vascularised skin must remain moist to allow the oxygen to diffuse at a sufficiently high rate. Because oxygen concentration in the water increases at both low temperatures and high flow rates, aquatic amphibians in these situations can rely primarily on cutaneous respiration, as in the Titicaca water frog and the hellbender salamander. In air, where oxygen is more concentrated, some small species can rely solely on cutaneous gas exchange, most famously the plethodontid salamanders, which have neither lungs nor gills. 
Many aquatic salamanders and all tadpoles have gills in their larval stage, with some (such as the axolotl) retaining gills as aquatic adults. For the purpose of reproduction most amphibians require fresh water although some lay their eggs on land and have developed various means of keeping them moist. A few (e.g. "Fejervarya raja") can inhabit brackish water, but there are no true marine amphibians. There are reports, however, of particular amphibian populations unexpectedly invading marine waters. Such was the case with the Black Sea invasion of the natural hybrid "Pelophylax esculentus" reported in 2010. Several hundred frog species in adaptive radiations (e.g., "Eleutherodactylus", the Pacific "Platymantis", the Australo-Papuan microhylids, and many other tropical frogs), however, do not need any water for breeding in the wild. They reproduce via direct development, an ecological and evolutionary adaptation that has allowed them to be completely independent from free-standing water. Almost all of these frogs live in wet tropical rainforests and their eggs hatch directly into miniature versions of the adult, passing through the tadpole stage within the egg. Reproductive success of many amphibians is dependent not only on the quantity of rainfall, but the seasonal timing. In the tropics, many amphibians breed continuously or at any time of year. In temperate regions, breeding is mostly seasonal, usually in the spring, and is triggered by increasing day length, rising temperatures or rainfall. Experiments have shown the importance of temperature, but the trigger event, especially in arid regions, is often a storm. In anurans, males usually arrive at the breeding sites before females and the vocal chorus they produce may stimulate ovulation in females and the endocrine activity of males that are not yet reproductively active. In caecilians, fertilisation is internal, the male extruding an intromittent organ, the phallodeum, and inserting it into the female cloaca. The paired Müllerian glands inside the male cloaca secrete a fluid which resembles that produced by mammalian prostate glands and which may transport and nourish the sperm. Fertilisation probably takes place in the oviduct. The majority of salamanders also engage in internal fertilisation. In most of these, the male deposits a spermatophore, a small packet of sperm on top of a gelatinous cone, on the substrate either on land or in the water. The female takes up the sperm packet by grasping it with the lips of the cloaca and pushing it into the vent. The spermatozoa move to the spermatheca in the roof of the cloaca where they remain until ovulation which may be many months later. Courtship rituals and methods of transfer of the spermatophore vary between species. In some, the spermatophore may be placed directly into the female cloaca while in others, the female may be guided to the spermatophore or restrained with an embrace called amplexus. Certain primitive salamanders in the families Sirenidae, Hynobiidae and Cryptobranchidae practice external fertilisation in a similar manner to frogs, with the female laying the eggs in water and the male releasing sperm onto the egg mass. With a few exceptions, frogs use external fertilisation. The male grasps the female tightly with his forelimbs either behind the arms or in front of the back legs, or in the case of "Epipedobates tricolor", around the neck. 
They remain in amplexus with their cloacae positioned close together while the female lays the eggs and the male covers them with sperm. Roughened nuptial pads on the male's hands aid in retaining grip. Often the male collects and retains the egg mass, forming a sort of basket with the hind feet. An exception is the granular poison frog ("Oophaga granulifera") where the male and female place their cloacae in close proximity while facing in opposite directions and then release eggs and sperm simultaneously. The tailed frog ("Ascaphus truei") exhibits internal fertilisation. The "tail" is only possessed by the male and is an extension of the cloaca and used to inseminate the female. This frog lives in fast-flowing streams and internal fertilisation prevents the sperm from being washed away before fertilisation occurs. The sperm may be retained in storage tubes attached to the oviduct until the following spring. Most frogs can be classified as either prolonged or explosive breeders. Typically, prolonged breeders congregate at a breeding site, the males usually arriving first, calling and setting up territories. Other satellite males remain quietly nearby, waiting for their opportunity to take over a territory. The females arrive sporadically, mate selection takes place and eggs are laid. The females depart and territories may change hands. More females appear and in due course, the breeding season comes to an end. Explosive breeders on the other hand are found where temporary pools appear in dry regions after rainfall. These frogs are typically fossorial species that emerge after heavy rains and congregate at a breeding site. They are attracted there by the calling of the first male to find a suitable place, perhaps a pool that forms in the same place each rainy season. The assembled frogs may call in unison and frenzied activity ensues, the males scrambling to mate with the usually smaller number of females. There is a direct competition between males to win the attention of the females in salamanders and newts, with elaborate courtship displays to keep the female's attention long enough to get her interested in choosing him to mate with. Some species store sperm through long breeding seasons, as the extra time may allow for interactions with rival sperm. Most amphibians go through metamorphosis, a process of significant morphological change after birth. In typical amphibian development, eggs are laid in water and larvae are adapted to an aquatic lifestyle. Frogs, toads and salamanders all hatch from the egg as larvae with external gills. Metamorphosis in amphibians is regulated by thyroxine concentration in the blood, which stimulates metamorphosis, and prolactin, which counteracts thyroxine's effect. Specific events are dependent on threshold values for different tissues. Because most embryonic development is outside the parental body, it is subject to many adaptations due to specific environmental circumstances. For this reason tadpoles can have horny ridges instead of teeth, whisker-like skin extensions or fins. They also make use of a sensory lateral line organ similar to that of fish. After metamorphosis, these organs become redundant and will be reabsorbed by controlled cell death, called apoptosis. The variety of adaptations to specific environmental circumstances among amphibians is wide, with many discoveries still being made. 
The egg of an amphibian is typically surrounded by a transparent gelatinous covering secreted by the oviducts and containing mucoproteins and mucopolysaccharides. This capsule is permeable to water and gases, and swells considerably as it absorbs water. The ovum is at first rigidly held, but in fertilised eggs the innermost layer liquefies and allows the embryo to move freely. This also happens in salamander eggs, even when they are unfertilised. Eggs of some salamanders and frogs contain unicellular green algae. These penetrate the jelly envelope after the eggs are laid and may increase the supply of oxygen to the embryo through photosynthesis. They seem to both speed up the development of the larvae and reduce mortality. Most eggs contain the pigment melanin which raises their temperature through the absorption of light and also protects them against ultraviolet radiation. Caecilians, some plethodontid salamanders and certain frogs lay eggs underground that are unpigmented. In the wood frog ("Rana sylvatica"), the interior of the globular egg cluster has been found to be up to warmer than its surroundings, which is an advantage in its cool northern habitat. The eggs may be deposited singly or in small groups, or may take the form of spherical egg masses, rafts or long strings. In terrestrial caecilians, the eggs are laid in grape-like clusters in burrows near streams. The amphibious salamander "Ensatina" attaches its similar clusters by stalks to underwater stems and roots. The greenhouse frog ("Eleutherodactylus planirostris") lays eggs in small groups in the soil where they develop in about two weeks directly into juvenile frogs without an intervening larval stage. The tungara frog ("Physalaemus pustulosus") builds a floating nest from foam to protect its eggs. First a raft is built, then eggs are laid in the centre, and finally a foam cap is overlaid. The foam has anti-microbial properties. It contains no detergents but is created by whipping up proteins and lectins secreted by the female. The eggs of amphibians are typically laid in water and hatch into free-living larvae that complete their development in water and later transform into either aquatic or terrestrial adults. In many species of frog and in most lungless salamanders (Plethodontidae), direct development takes place, the larvae growing within the eggs and emerging as miniature adults. Many caecilians and some other amphibians lay their eggs on land, and the newly hatched larvae wriggle or are transported to water bodies. Some caecilians, the alpine salamander ("Salamandra atra") and some of the African live-bearing toads ("Nectophrynoides spp.") are viviparous. Their larvae feed on glandular secretions and develop within the female's oviduct, often for long periods. Other amphibians, but not caecilians, are ovoviviparous. The eggs are retained in or on the parent's body, but the larvae subsist on the yolks of their eggs and receive no nourishment from the adult. The larvae emerge at varying stages of their growth, either before or after metamorphosis, according to their species. The toad genus "Nectophrynoides" exhibits all of these developmental patterns among its dozen or so members. Frog larvae are known as tadpoles and typically have oval bodies and long, vertically flattened tails with fins. The free-living larvae are normally fully aquatic, but the tadpoles of some species (such as "Nannophrys ceylonensis") are semi-terrestrial and live among wet rocks. 
Tadpoles have cartilaginous skeletons, gills for respiration (external gills at first, internal gills later), lateral line systems and large tails that they use for swimming. Newly hatched tadpoles soon develop gill pouches that cover the gills. The lungs develop early and are used as accessory breathing organs, the tadpoles rising to the water surface to gulp air. Some species complete their development inside the egg and hatch directly into small frogs. These larvae do not have gills but instead have specialised areas of skin through which respiration takes place. While tadpoles do not have true teeth, in most species, the jaws have long, parallel rows of small keratinized structures called keradonts surrounded by a horny beak. Front legs are formed under the gill sac and hind legs become visible a few days later. Iodine and T4 (thyroxine) over-stimulate apoptosis (programmed cell death) of the cells of the larval gills, tail and fins, and also stimulate the development of the nervous system, transforming the aquatic, vegetarian tadpole into a terrestrial, carnivorous frog with better neurological, visuospatial, olfactory and cognitive abilities for hunting. In fact, tadpoles developing in ponds and streams are typically herbivorous. Pond tadpoles tend to have deep bodies, large caudal fins and small mouths; they swim in the quiet waters feeding on growing or loose fragments of vegetation. Stream dwellers mostly have larger mouths, shallow bodies and caudal fins; they attach themselves to plants and stones and feed on the surface films of algae and bacteria. They also feed on diatoms, filtered from the water through the gills, and stir up the sediment at the bottom of the pond, ingesting edible fragments. They have a relatively long, spiral-shaped gut to enable them to digest this diet. Some species are carnivorous at the tadpole stage, eating insects, smaller tadpoles and fish. Young of the Cuban tree frog ("Osteopilus septentrionalis") can occasionally be cannibalistic, the younger tadpoles attacking a larger, more developed tadpole when it is undergoing metamorphosis. At metamorphosis, rapid changes in the body take place as the lifestyle of the frog changes completely. The spiral-shaped mouth with horny tooth ridges is reabsorbed together with the spiral gut. The animal develops a large jaw, and its gills disappear along with its gill sac. Eyes and legs grow quickly, and a tongue is formed. There are associated changes in the neural networks such as development of stereoscopic vision and loss of the lateral line system. All this can happen in about a day. A few days later, the tail is reabsorbed, due to the higher thyroxine concentration required for this to take place. At hatching, a typical salamander larva has eyes without lids, teeth in both upper and lower jaws, three pairs of feathery external gills, a somewhat laterally flattened body and a long tail with dorsal and ventral fins. The forelimbs may be partially developed and the hind limbs are rudimentary in pond-living species but may be rather more developed in species that reproduce in moving water. Pond-type larvae often have a pair of balancers, rod-like structures on either side of the head that may prevent the gills from becoming clogged up with sediment. Some members of the genera "Ambystoma" and "Dicamptodon" have larvae that never fully develop into the adult form, but this varies with species and with populations. 
The northwestern salamander ("Ambystoma gracile") is one of these and, depending on environmental factors, either remains permanently in the larval state, a condition known as neoteny, or transforms into an adult. Both of these are able to breed. Neoteny occurs when the animal's growth rate is very low and is usually linked to adverse conditions such as low water temperatures that may change the response of the tissues to the hormone thyroxine. Other factors that may inhibit metamorphosis include lack of food, lack of trace elements and competition from conspecifics. The tiger salamander ("Ambystoma tigrinum") also sometimes behaves in this way and may grow particularly large in the process. The adult tiger salamander is terrestrial, but the larva is aquatic and able to breed while still in the larval state. When conditions are particularly inhospitable on land, larval breeding may allow continuation of a population that would otherwise die out. There are fifteen species of obligate neotenic salamanders, including species of "Necturus", "Proteus" and "Amphiuma", and many examples of facultative ones that adopt this strategy under appropriate environmental circumstances. Lungless salamanders in the family Plethodontidae are terrestrial and lay a small number of unpigmented eggs in a cluster among damp leaf litter. Each egg has a large yolk sac and the larva feeds on this while it develops inside the egg, emerging fully formed as a juvenile salamander. The female salamander often broods the eggs. In the genus "Ensatina", the female has been observed to coil around them and press her throat area against them, effectively massaging them with a mucous secretion. In newts and salamanders, metamorphosis is less dramatic than in frogs. This is because the larvae are already carnivorous and continue to feed as predators when they are adults so few changes are needed to their digestive systems. Their lungs are functional early, but the larvae do not make as much use of them as do tadpoles. Their gills are never covered by gill sacs and are reabsorbed just before the animals leave the water. Other changes include the reduction in size or loss of tail fins, the closure of gill slits, thickening of the skin, the development of eyelids, and certain changes in dentition and tongue structure. Salamanders are at their most vulnerable at metamorphosis as swimming speeds are reduced and transforming tails are encumbrances on land. Adult salamanders often have an aquatic phase in spring and summer, and a land phase in winter. For adaptation to a water phase, prolactin is the required hormone, and for adaptation to the land phase, thyroxine. External gills do not return in subsequent aquatic phases because these are completely absorbed upon leaving the water for the first time. Most terrestrial caecilians that lay eggs do so in burrows or moist places on land near bodies of water. The development of the young of "Ichthyophis glutinosus", a species from Sri Lanka, has been much studied. The eel-like larvae hatch out of the eggs and make their way to water. They have three pairs of external red feathery gills, a blunt head with two rudimentary eyes, a lateral line system and a short tail with fins. They swim by undulating their body from side to side. They are mostly active at night, soon lose their gills and make sorties onto land. Metamorphosis is gradual. 
By the age of about ten months they have developed a pointed head with sensory tentacles near the mouth and lost their eyes, lateral line systems and tails. The skin thickens, embedded scales develop and the body divides into segments. By this time, the caecilian has constructed a burrow and is living on land. In the majority of species of caecilians, the young are produced by viviparity. "Typhlonectes compressicauda", a species from South America, is typical of these. Up to nine larvae can develop in the oviduct at any one time. They are elongated and have paired sac-like gills, small eyes and specialised scraping teeth. At first, they feed on the yolks of the eggs, but as this source of nourishment declines they begin to rasp at the ciliated epithelial cells that line the oviduct. This stimulates the secretion of fluids rich in lipids and mucoproteins on which they feed along with scrapings from the oviduct wall. They may increase their length sixfold and be two-fifths as long as their mother before being born. By this time they have undergone metamorphosis, lost their eyes and gills, developed a thicker skin and mouth tentacles, and reabsorbed their teeth. A permanent set of teeth grow through soon after birth. The ringed caecilian ("Siphonops annulatus") has developed a unique adaptation for the purposes of reproduction. The progeny feed on a skin layer that is specially developed by the adult in a phenomenon known as maternal dermatophagy. The brood feed as a batch for about seven minutes at intervals of approximately three days which gives the skin an opportunity to regenerate. Meanwhile, they have been observed to ingest fluid exuded from the maternal cloaca. The care of offspring among amphibians has been little studied but, in general, the larger the number of eggs in a batch, the less likely it is that any degree of parental care takes place. Nevertheless, it is estimated that in up to 20% of amphibian species, one or both adults play some role in the care of the young. Those species that breed in smaller water bodies or other specialised habitats tend to have complex patterns of behaviour in the care of their young. Many woodland salamanders lay clutches of eggs under dead logs or stones on land. The black mountain salamander ("Desmognathus welteri") does this, the mother brooding the eggs and guarding them from predation as the embryos feed on the yolks of their eggs. When fully developed, they break their way out of the egg capsules and disperse as juvenile salamanders. The male hellbender, a primitive salamander, excavates an underwater nest and encourages females to lay there. The male then guards the site for the two or three months before the eggs hatch, using body undulations to fan the eggs and increase their supply of oxygen. The male "Colostethus subpunctatus", a tiny frog, protects the egg cluster which is hidden under a stone or log. When the eggs hatch, the male transports the tadpoles on his back, stuck there by a mucous secretion, to a temporary pool where he dips himself into the water and the tadpoles drop off. The male midwife toad ("Alytes obstetricans") winds egg strings round his thighs and carries the eggs around for up to eight weeks. He keeps them moist and when they are ready to hatch, he visits a pond or ditch and releases the tadpoles. The female gastric-brooding frog ("Rheobatrachus spp.") reared larvae in her stomach after swallowing either the eggs or hatchlings; however, this stage was never observed before the species became extinct. 
The tadpoles secrete a hormone that inhibits digestion in the mother whilst they develop by consuming their very large yolk supply. The pouched frog ("Assa darlingtoni") lays eggs on the ground. When they hatch, the male carries the tadpoles around in brood pouches on his hind legs. The aquatic Surinam toad ("Pipa pipa") raises its young in pores on its back where they remain until metamorphosis. The granular poison frog ("Oophaga granulifera") is typical of a number of tree frogs in the poison dart frog family Dendrobatidae. Its eggs are laid on the forest floor and when they hatch, the tadpoles are carried one by one on the back of an adult to a suitable water-filled crevice such as the axil of a leaf or the rosette of a bromeliad. The female visits the nursery sites regularly and deposits unfertilised eggs in the water and these are consumed by the tadpoles. With a few exceptions, adult amphibians are predators, feeding on virtually anything that moves that they can swallow. The diet mostly consists of small prey that do not move too fast such as beetles, caterpillars, earthworms and spiders. The sirens ("Siren spp.") often ingest aquatic plant material with the invertebrates on which they feed and a Brazilian tree frog ("Xenohyla truncata") includes a large quantity of fruit in its diet. The Mexican burrowing toad ("Rhinophrynus dorsalis") has a specially adapted tongue for picking up ants and termites. It projects its tongue with the tip foremost, whereas other frogs flick out the rear part first, their tongues being hinged at the front. Food is mostly selected by sight, even in conditions of dim light. Movement of the prey triggers a feeding response. Frogs have been caught on fish hooks baited with red flannel, and green frogs ("Rana clamitans") have been found with stomachs full of elm seeds that they had seen floating past. Toads, salamanders and caecilians also use smell to detect prey. This response is mostly secondary because salamanders have been observed to remain stationary near odoriferous prey but only feed if it moves. Cave-dwelling amphibians normally hunt by smell. Some salamanders seem to have learned to recognize immobile prey when it has no smell, even in complete darkness. Amphibians usually swallow food whole but may chew it lightly first to subdue it. They typically have small hinged pedicellate teeth, a feature unique to amphibians. The base and crown of these are composed of dentine separated by an uncalcified layer and they are replaced at intervals. Salamanders, caecilians and some frogs have one or two rows of teeth in both jaws, but some frogs ("Rana spp.") lack teeth in the lower jaw, and toads ("Bufo spp.") have no teeth. In many amphibians there are also vomerine teeth attached to a facial bone in the roof of the mouth. The tiger salamander ("Ambystoma tigrinum") is typical of the frogs and salamanders that hide under cover ready to ambush unwary invertebrates. Other amphibians, such as the "Bufo spp." toads, actively search for prey, while the Argentine horned frog ("Ceratophrys ornata") lures inquisitive prey closer by raising its hind feet over its back and vibrating its yellow toes. Among leaf litter frogs in Panama, frogs that actively hunt prey have narrow mouths and are slim, often brightly coloured and toxic, while ambushers have wide mouths and are broad and well-camouflaged. Caecilians do not flick their tongues, but catch their prey by grabbing it with their slightly backward-pointing teeth. 
The struggles of the prey and further jaw movements work it inwards and the caecilian usually retreats into its burrow. The subdued prey is gulped down whole. When they are newly hatched, frog larvae feed on the yolk of the egg. When this is exhausted some move on to feed on bacteria, algal crusts, detritus and raspings from submerged plants. Water is drawn in through their mouths, which are usually at the bottom of their heads, and passes through branchial food traps between their mouths and their gills where fine particles are trapped in mucus and filtered out. Others have specialised mouthparts consisting of a horny beak edged by several rows of labial teeth. They scrape and bite food of many kinds as well as stirring up the bottom sediment, filtering out larger particles with the papillae around their mouths. Some, such as the spadefoot toads, have strong biting jaws and are carnivorous or even cannibalistic. The calls made by caecilians and salamanders are limited to occasional soft squeaks, grunts or hisses and have not been much studied. A clicking sound sometimes produced by caecilians may be a means of orientation, as in bats, or a form of communication. Most salamanders are considered voiceless, but the California giant salamander ("Dicamptodon ensatus") has vocal cords and can produce a rattling or barking sound. Some species of salamander emit a quiet squeak or yelp if attacked. Frogs are much more vocal, especially during the breeding season when they use their voices to attract mates. The presence of a particular species in an area may be more easily discerned by its characteristic call than by a fleeting glimpse of the animal itself. In most species, the sound is produced by expelling air from the lungs over the vocal cords into an air sac or sacs in the throat or at the corner of the mouth. This may distend like a balloon and acts as a resonator, helping to transfer the sound to the atmosphere, or the water at times when the animal is submerged. The main vocalisation is the male's loud advertisement call which seeks to both encourage a female to approach and discourage other males from intruding on its territory. This call is modified to a quieter courtship call on the approach of a female or to a more aggressive version if a male intruder draws near. Calling carries the risk of attracting predators and involves the expenditure of much energy. Other calls include those given by a female in response to the advertisement call and a release call given by a male or female during unwanted attempts at amplexus. When a frog is attacked, a distress or fright call is emitted, often resembling a scream. The usually nocturnal Cuban tree frog ("Osteopilus septentrionalis") produces a rain call when there is rainfall during daylight hours. Little is known of the territorial behaviour of caecilians, but some frogs and salamanders defend home ranges. These are usually feeding, breeding or sheltering sites. Males normally exhibit such behaviour though in some species, females and even juveniles are also involved. Although in many frog species, females are larger than males, this is not the case in most species where males are actively involved in territorial defence. Some of these have specific adaptations such as enlarged teeth for biting or spines on the chest, arms or thumbs. In salamanders, defence of a territory involves adopting an aggressive posture and if necessary attacking the intruder. 
This may involve snapping, chasing and sometimes biting, occasionally causing the loss of a tail. The behaviour of red-backed salamanders ("Plethodon cinereus") has been much studied. Some 91% of marked individuals that were later recaptured were within a metre (yard) of their original daytime retreat under a log or rock. A similar proportion, when moved experimentally a distance of , found their way back to their home base. The salamanders left odour marks around their territories, which averaged in size and were sometimes inhabited by a male and female pair. These deterred the intrusion of others and delineated the boundaries between neighbouring areas. Much of their behaviour seemed stereotyped and did not involve any actual contact between individuals. An aggressive posture involved raising the body off the ground and glaring at the opponent who often turned away submissively. If the intruder persisted, a biting lunge was usually launched at either the tail region or the naso-labial grooves. Damage to either of these areas can reduce the fitness of the rival, either because of the need to regenerate tissue or because it impairs its ability to detect food. In frogs, male territorial behaviour is often observed at breeding locations; calling is both an announcement of ownership of part of this resource and an advertisement call to potential mates. In general, a deeper voice represents a heavier and more powerful individual, and this may be sufficient to prevent intrusion by smaller males. Much energy is used in the vocalization and it takes a toll on the territory holder who may be displaced by a fitter rival if he tires. There is a tendency for males to tolerate the holders of neighbouring territories while vigorously attacking unknown intruders. Holders of territories have a "home advantage" and usually come off better in an encounter between two similar-sized frogs. If threats are insufficient, chest to chest tussles may take place. Fighting methods include pushing and shoving, deflating the opponent's vocal sac, seizing him by the head, jumping on his back, biting, chasing, splashing, and ducking him under the water. Amphibians have soft bodies with thin skins, and lack claws, defensive armour, or spines. Nevertheless, they have evolved various defence mechanisms to keep themselves alive. The first line of defence in salamanders and frogs is the mucous secretion that they produce. This keeps their skin moist and makes them slippery and difficult to grip. The secretion is often sticky and distasteful or toxic. Snakes have been observed yawning and gaping when trying to swallow African clawed frogs ("Xenopus laevis"), which gives the frogs an opportunity to escape. Caecilians have been little studied in this respect, but the Cayenne caecilian ("Typhlonectes compressicauda") produces toxic mucus that has killed predatory fish in a feeding experiment in Brazil. In some salamanders, the skin is poisonous. The rough-skinned newt ("Taricha granulosa") from North America and other members of its genus contain the neurotoxin tetrodotoxin (TTX), the most toxic non-protein substance known and almost identical to that produced by pufferfish. Handling the newts does not cause harm, but ingestion of even the most minute amounts of the skin is deadly. In feeding trials, fish, frogs, reptiles, birds and mammals were all found to be susceptible. The only predators with some tolerance to the poison are certain populations of common garter snake ("Thamnophis sirtalis"). 
In locations where both snake and salamander co-exist, the snakes have developed immunity through genetic changes and they feed on the amphibians with impunity. Coevolution occurs with the newt increasing its toxic capabilities at the same rate as the snake further develops its immunity. Some frogs and toads are toxic, the main poison glands being at the side of the neck and under the warts on the back. These regions are presented to the attacking animal and their secretions may be foul-tasting or cause various physical or neurological symptoms. Altogether, over 200 toxins have been isolated from the limited number of amphibian species that have been investigated. Poisonous species often use bright colouring to warn potential predators of their toxicity. These warning colours tend to be red or yellow combined with black, with the fire salamander ("Salamandra salamandra") being an example. Once a predator has sampled one of these, it is likely to remember the colouration next time it encounters a similar animal. In some species, such as the fire-bellied toad ("Bombina spp."), the warning colouration is on the belly and these animals adopt a defensive pose when attacked, exhibiting their bright colours to the predator. The frog "Allobates zaparo" is not poisonous, but mimics the appearance of other toxic species in its locality, a strategy that may deceive predators. Many amphibians are nocturnal and hide during the day, thereby avoiding diurnal predators that hunt by sight. Other amphibians use camouflage to avoid being detected. They have various colourings such as mottled browns, greys and olives to blend into the background. Some salamanders adopt defensive poses when faced by a potential predator such as the North American northern short-tailed shrew ("Blarina brevicauda"). Their bodies writhe and they raise and lash their tails which makes it difficult for the predator to avoid contact with their poison-producing granular glands. A few salamanders will autotomise their tails when attacked, sacrificing this part of their anatomy to enable them to escape. The tail may have a constriction at its base to allow it to be easily detached. The tail is regenerated later, but the energy cost to the animal of replacing it is significant. Some frogs and toads inflate themselves to make themselves look large and fierce, and some spadefoot toads ("Pelobates spp") scream and leap towards the attacker. Giant salamanders of the genus "Andrias", as well as Ceratophrine and "Pyxicephalus" frogs possess sharp teeth and are capable of drawing blood with a defensive bite. The blackbelly salamander ("Desmognathus quadramaculatus") can bite an attacking common garter snake ("Thamnophis sirtalis") two or three times its size on the head and often manages to escape. In amphibians, there is evidence of habituation, associative learning through both classical and instrumental learning, and discrimination abilities. In one experiment, when offered live fruit flies ("Drosophila virilis"), salamanders choose the larger of 1 vs 2 and 2 vs 3. Frogs can distinguish between low numbers (1 vs 2, 2 vs 3, but not 3 vs 4) and large numbers (3 vs 6, 4 vs 8, but not 4 vs 6) of prey. This is irrespective of other characteristics, i.e. surface area, volume, weight and movement, although discrimination among large numbers may be based on surface area. 
Dramatic declines in amphibian populations, including population crashes and mass localized extinction, have been noted since the late 1980s from locations all over the world, and amphibian declines are thus perceived to be one of the most critical threats to global biodiversity. In 2004, the International Union for Conservation of Nature (IUCN) reported that extinction rates for birds, mammals, and amphibians were at a minimum 48 times greater than natural background extinction rates, and possibly as much as 1,024 times higher. In 2006 there were believed to be 4,035 species of amphibians that depended on water at some stage during their life cycle. Of these, 1,356 (33.6%) were considered to be threatened, and this figure is likely to be an underestimate because it excludes 1,427 species for which there was insufficient data to assess their status. A number of causes are believed to be involved, including habitat destruction and modification, over-exploitation, pollution, introduced species, climate change, endocrine-disrupting pollutants, destruction of the ozone layer (ultraviolet radiation has been shown to be especially damaging to the skin, eyes, and eggs of amphibians), and diseases like chytridiomycosis. However, many of the causes of amphibian declines are still poorly understood, and are a topic of ongoing discussion. With their complex reproductive needs and permeable skins, amphibians are often considered to be ecological indicators. In many terrestrial ecosystems, they constitute one of the largest parts of the vertebrate biomass. Any decline in amphibian numbers will affect the patterns of predation. The loss of carnivorous species near the top of the food chain will upset the delicate ecosystem balance and may cause dramatic increases in opportunistic species. In the Middle East, a growing appetite for frog legs and the consequent gathering of frogs for food was linked to an increase in mosquitoes. Predators that feed on amphibians are affected by their decline. The western terrestrial garter snake ("Thamnophis elegans") in California is largely aquatic and depends heavily on two species of frog that are decreasing in numbers, the Yosemite toad ("Bufo canorus") and the mountain yellow-legged frog ("Rana muscosa"), putting the snake's future at risk. If the snake were to become scarce, this would affect birds of prey and other predators that feed on it. Meanwhile, in the ponds and lakes, fewer frogs mean fewer tadpoles. These normally play an important role in controlling the growth of algae and also forage on detritus that accumulates as sediment on the bottom. A reduction in the number of tadpoles may lead to an overgrowth of algae, resulting in depletion of oxygen in the water when the algae later die and decompose. Aquatic invertebrates and fish might then die and there would be unpredictable ecological consequences. A global strategy to stem the crisis was released in 2005 in the form of the Amphibian Conservation Action Plan. Developed by over eighty leading experts in the field, this call to action details what would be required to curtail amphibian declines and extinctions over the following five years and how much this would cost. The Amphibian Specialist Group of the IUCN is spearheading efforts to implement a comprehensive global strategy for amphibian conservation. 
Amphibian Ark is an organization that was formed to implement the ex-situ conservation recommendations of this plan, and they have been working with zoos and aquaria around the world, encouraging them to create assurance colonies of threatened amphibians. One such project is the Panama Amphibian Rescue and Conservation Project that built on existing conservation efforts in Panama to create a country-wide response to the threat of chytridiomycosis. Apollo 8 Apollo 8, the second manned spaceflight mission in the United States Apollo space program, was launched on December 21, 1968, and became the first manned spacecraft to leave Earth orbit, reach the Earth's Moon, orbit it and return safely to Earth. The three-astronaut crew — Commander Frank Borman, Command Module Pilot James Lovell, and Lunar Module Pilot William Anders — became the first humans to: travel beyond low Earth orbit; see Earth as a whole planet; enter the gravity well of another celestial body (Earth's moon); orbit another celestial body (Earth's moon); directly see the far side of the Moon with their own eyes; witness an Earthrise; escape the gravity of another celestial body (Earth's moon); and re-enter the gravitational well of Earth. The 1968 mission, the third flight of the Saturn V rocket and that rocket's first crewed launch, was also the first human spaceflight launch from the Kennedy Space Center, Florida, located adjacent to Cape Canaveral Air Force Station. Originally planned as a second Lunar Module/Command Module test in an elliptical medium Earth orbit in early 1969, the mission profile was changed in August 1968 to a more ambitious Command Module-only lunar orbital flight to be flown in December, because the Lunar Module was not yet ready to make its first flight. This meant Borman's crew was scheduled to fly two to three months sooner than originally planned, leaving them a shorter time for training and preparation, thus placing more demands than usual on their time and discipline. Apollo 8 took 68 hours (2.8 days) to travel the distance to the Moon. It orbited ten times over the course of 20 hours, during which the crew made a Christmas Eve television broadcast where they read the first 10 verses from the Book of Genesis. At the time, the broadcast was the most watched TV program ever. Apollo 8's successful mission paved the way for Apollo 11 to fulfill U.S. President John F. Kennedy's goal of landing a man on the Moon before the end of the 1960s. The Apollo 8 astronauts returned to Earth on December 27, 1968, when their spacecraft splashed down in the North Pacific Ocean. The crew members were named "Time" magazine's "Men of the Year" for 1968 upon their return. Lovell was originally the CMP on the back-up crew, with Michael Collins as the prime crew's CMP. However, Collins was replaced in July 1968, after suffering a cervical disc herniation that required surgery to repair. This crew was unique among pre-shuttle era missions in that the commander was not the most experienced member of the crew, as Lovell had flown twice before, on Gemini VII and Gemini XII. It was also a rare instance, and the first, of an astronaut who had previously commanded a mission (Lovell, on Gemini XII) subsequently flying as a non-commander. On a lunar mission, the Command Module Pilot (CMP) was assigned the role of navigator, while the Lunar Module Pilot (LMP) was assigned the role of flight engineer, responsible for monitoring all spacecraft systems, even if the flight did not include a Lunar Module. 
Edwin "Buzz" Aldrin was originally the backup LMP. When Lovell was rotated to the prime crew, no one with experience on CSM-103 (the specific spacecraft used for the mission) was available, so Aldrin was moved to CMP and Fred Haise brought in as backup LMP. Neil Armstrong went on to command Apollo 11, where Aldrin was returned to the LMP position and Collins was assigned as CMP. Haise was rotated out of the crew and onto the backup crew of Apollo 11 as LMP. The Earth-based mission control teams for Apollo 8 consisted of astronauts assigned to the support crew, as well as non-astronaut flight directors and their staffs. The support crew members were not trained to fly the mission, but were able to stand in for astronauts in meetings and be involved in the minutiae of mission planning, while the prime and backup crews trained. They also served as CAPCOMs during the mission. For Apollo 8, these crew members included astronauts Michael Collins, John S. Bull, Vance D. Brand, Gerald P. Carr, and Ken Mattingly. The mission control teams on Earth rotated in three shifts, each led by a flight director. The directors for Apollo 8 included Clifford E. Charlesworth (Green team), Glynn Lunney (Black team), and Milton Windler (Maroon team). The triangular shape of the insignia symbolizes the shape of the Apollo Command Module (CM). It shows a red figure-8 looping around the Earth and Moon representing the mission number as well as the circumlunar nature of the mission. On the red number 8 are the names of the three astronauts. The initial design of the insignia was developed by Jim Lovell. Lovell reportedly sketched the initial design while riding in the backseat of a T-38 flight from California to Houston, shortly after learning of the re-designation of the flight to become a lunar-orbital mission. The graphic design of the insignia was done by Houston artist and animator William Bradley. Apollo 4 and Apollo 6 had been "A" missions, unmanned tests of the Saturn V launch vehicle using a Block I production model of the Apollo Command and Service Module in Earth orbit. Apollo 7, scheduled for October 1968, would be a manned Earth-orbit flight of the CSM, completing the objectives for Mission "C". Further missions depended on the readiness of the Lunar Module. Apollo 8 was planned as the "D" mission, a test of the LM in low Earth orbit to be flown in December 1968 by James McDivitt, David Scott and Russell Schweickart, while Borman's crew would fly the "E" mission, a more rigorous LM test in an elliptical medium Earth orbit, as Apollo 9 in early 1969. But production of the LM fell behind schedule, and when Apollo 8's LM arrived at the Kennedy Space Center in June 1968, significant defects were discovered, leading Grumman, the lead contractor for the LM, to predict that the first mission-ready LM would not be ready until at least February 1969. This would mean delaying the "D" and subsequent missions, endangering the program's goal of a lunar landing before the end of 1969. George Low, the Manager of the Apollo Spacecraft Program Office, proposed a solution in August to keep the program on track despite the LM delay. Since the Command/Service Module (CSM) would be ready three months before the Lunar Module, a CSM-only mission could be flown in December 1968. Instead of just repeating the "C" mission flight of Apollo 7, this CSM could be sent all the way to the Moon, with the possibility of entering a lunar orbit. 
The new mission would also allow NASA to test lunar landing procedures that would otherwise have to wait until Apollo 10, the scheduled "F" mission. This also meant that the medium Earth orbit "E" mission could be dispensed with. The net result was that only the "D" mission had to be delayed. Almost every senior manager at NASA agreed with this new mission, citing both confidence in the hardware and personnel, and the potential for a significant morale boost provided by a circumlunar flight. The only person who needed some convincing was James E. Webb, the NASA administrator. With the rest of his agency in support of the new mission, Webb eventually approved the mission change. The mission was officially changed from a "D" mission to a "C-Prime" lunar-orbit mission, but was still referred to in press releases as an Earth-orbit mission at Webb's direction. No public announcement was made about the change in mission until November 12, three weeks after Apollo 7's successful Earth-orbit mission and less than 40 days before launch. With the change in mission for Apollo 8, Director of Flight Crew Operations Deke Slayton decided to swap the crews of the D and E missions. This swap also meant a swap of spacecraft, requiring Borman's crew to use CSM-103, while McDivitt's crew would use CSM-104. On September 9, the crew entered the simulators to begin their preparation for the flight. By the time the mission flew, the crew had spent seven hours training for every actual hour of flight. Although all crew members were trained in all aspects of the mission, it was necessary to specialize. Borman, as commander, was given training on controlling the spacecraft during re-entry. Lovell was trained on navigating the spacecraft in case communication was lost with the Earth. Anders was placed in charge of checking that the spacecraft was in working order. Pressure on the Apollo program to meet its 1969 landing goal increased when the Soviet Union flew living creatures, including Russian tortoises, on a cislunar loop around the Moon aboard Zond 5 and returned them to Earth on September 21. There was speculation within NASA and the press that the Soviets might be preparing to launch cosmonauts on a similar circumlunar mission before the end of 1968. The Apollo 8 crew, now living in the crew quarters at Kennedy Space Center, received a visit from Charles Lindbergh and his wife, Anne Morrow Lindbergh, the night before the launch. They talked about how, before his 1927 flight, Lindbergh had used a piece of string to measure the distance from New York City to Paris on a globe and from that calculated the fuel needed for the flight. The total was a tenth of the amount that the Saturn V would burn every second. The next day, the Lindberghs watched the launch of Apollo 8 from a nearby dune. The Saturn V rocket used by Apollo 8 was designated SA-503, the third ("03") flight model of the Saturn V ("5") rocket to be used in the Saturn-Apollo ("SA") program. When it was erected in the Vertical Assembly Building on December 20, 1967, it was thought that the rocket would be used for an unmanned Earth-orbit test flight carrying a boilerplate Command/Service Module. Apollo 6 had suffered several major problems during its April 1968 flight, including severe pogo oscillation during its first stage, two second-stage engine failures, and a third stage that failed to reignite in orbit. 
Without assurances that these problems had been rectified, NASA administrators could not justify risking a manned mission until additional unmanned test flights proved that the Saturn V was ready. Teams from the Marshall Space Flight Center (MSFC) went to work on the problems. Of primary concern was the pogo oscillation, which would not only hamper engine performance, but could exert significant g-forces on a crew. A task force of contractors, NASA agency representatives, and MSFC researchers concluded that the engines vibrated at a frequency similar to the frequency at which the spacecraft itself vibrated, causing a resonance effect that induced oscillations in the rocket. A system using helium gas to absorb some of these vibrations was installed. Of equal importance was the failure of three engines during flight. Researchers quickly determined that a leaking hydrogen fuel line ruptured when exposed to vacuum, causing a loss of fuel pressure in engine two. When an automatic shutoff attempted to close the liquid hydrogen valve and shut down engine two, it accidentally shut down engine three's liquid oxygen due to a miswired connection. As a result, engine three failed within one second of engine two's shutdown. Further investigation revealed the same problem for the third-stage engine—a faulty igniter line. The team modified the igniter lines and fuel conduits, hoping to avoid similar problems on future launches. The teams tested their solutions in August 1968 at the Marshall Space Flight Center. A Saturn stage IC was equipped with shock absorbing devices to demonstrate the team's solution to the problem of pogo oscillation, while a Saturn Stage II was retrofitted with modified fuel lines to demonstrate their resistance to leaks and ruptures in vacuum conditions. Once NASA administrators were convinced that the problems were solved, they gave their approval for a manned mission using SA-503. The Apollo 8 spacecraft was placed on top of the rocket on September 21 and the rocket made the slow 3-mile (5 km) journey to the launch pad on October 9. Testing continued all through December until the day before launch, including various levels of readiness testing from December 5 through 11. Final testing of modifications to address the problems of pogo oscillation, ruptured fuel lines, and bad igniter lines took place on December 18, a mere three days before the scheduled launch. As the first manned spacecraft to orbit more than one celestial body, Apollo 8's profile had two different sets of orbital parameters, separated by a translunar injection maneuver. Apollo lunar missions would begin with a nominal circular Earth parking orbit. Apollo 8 was launched into an initial orbit with an apogee of and a perigee of , with an inclination of 32.51° to the Equator, and an orbital period of 88.19 minutes. Propellant venting increased the apogee by over the 2 hours, 44 minutes and 30 seconds spent in the parking orbit. This was followed by a Trans-Lunar Injection (TLI) burn of the S-IVB third stage for 318 seconds, accelerating the Command/Service Module and LM test article from an orbital velocity of to the injection velocity of , which set a record for the highest speed, relative to Earth, that humans had ever traveled. This speed was slightly less than the Earth's escape velocity of , but put Apollo 8 into an elongated elliptical Earth orbit, to a point where the Moon's gravity would capture it. 
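As a rough cross-check on the figures above, the quoted 88.19-minute parking-orbit period is consistent with Kepler's third law for a near-circular low Earth orbit. The short calculation below is only an illustrative sketch; it assumes a nominal altitude of roughly 185 km (a typical Apollo parking-orbit value, introduced here only because the exact apogee and perigee figures are missing above) together with standard values for Earth's mean radius and gravitational parameter:

\[
T = 2\pi\sqrt{\frac{a^{3}}{\mu_{\oplus}}},\qquad
a \approx (6371 + 185)\ \text{km} = 6.556\times 10^{6}\ \text{m},\qquad
\mu_{\oplus} \approx 3.986\times 10^{14}\ \mathrm{m^{3}\,s^{-2}}
\]
\[
T \approx 2\pi\sqrt{\frac{(6.556\times 10^{6})^{3}}{3.986\times 10^{14}}}\ \mathrm{s} \approx 5.28\times 10^{3}\ \mathrm{s} \approx 88\ \text{minutes},
\]

in good agreement with the 88.19-minute period quoted above.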
The standard lunar orbit for Apollo missions was planned as a nominal circular orbit above the Moon's surface. Initial lunar orbit insertion was an ellipse with a perilune of and an apolune of , at an inclination of 12° from the lunar equator. This was then circularized at by , with an orbital period of 128.7 minutes. The effect of lunar mass concentrations ("mascons") on the orbit was found to be greater than initially predicted; over the course of the ten lunar orbits lasting twenty hours, the orbital distance was perturbed to by . Apollo 8 achieved a maximum distance from Earth of . Apollo 8 launched at 7:51:00 a.m. Eastern Standard Time on December 21, 1968, using the Saturn V's three stages to achieve Earth orbit. The S-IC first stage impacted the Atlantic Ocean at and the S-II second stage at . The S-IVB third stage injected the craft into Earth orbit, but remained attached to later perform the trans-lunar injection (TLI) burn that put the spacecraft on a trajectory to the Moon. Once the vehicle reached Earth orbit, both the crew and Houston flight controllers spent the next 2 hours and 38 minutes checking that the spacecraft was in proper working order and ready for TLI. The proper operation of the S-IVB third stage of the rocket was crucial: in the last unmanned test, it had failed to re-ignite for TLI. During the flight, three fellow astronauts served on the ground as Capsule Communicators (usually referred to as "CAPCOMs") on a rotating schedule. The CAPCOMs were the only people who regularly communicated with the crew. Michael Collins was the first CAPCOM on duty and at 2 hours, 27 minutes and 22 seconds after launch radioed, "Apollo 8. You are Go for TLI." This communication signified that Mission Control had given official permission for Apollo 8 to go to the Moon. Over the next 12 minutes before the TLI burn, the Apollo 8 crew continued to monitor the spacecraft and the S-IVB. The engine ignited on time and performed the TLI burn perfectly. After the S-IVB had performed its required tasks, it was jettisoned. The crew then rotated the spacecraft to take some photographs of the spent stage and then practiced flying in formation with it. As the crew rotated the spacecraft, they had their first views of the Earth as they moved away from it. This marked the first time humans could view the whole Earth at once. Borman became worried that the S-IVB was staying too close to the Command/Service Module and suggested to Mission Control that the crew perform a separation maneuver. Mission Control first suggested pointing the spacecraft towards Earth and using the Reaction Control System (RCS) thrusters on the Service Module (SM) to add away from the Earth, but Borman did not want to lose sight of the S-IVB. After discussion, the crew and Mission Control decided to burn in this direction, but at instead. These discussions put the crew an hour behind their flight plan. Five hours after launch, Mission Control sent a command to the S-IVB booster to vent its remaining fuel through its engine bell to change the booster's trajectory. This S-IVB would then pass the Moon and enter into a solar orbit, posing no further hazard to Apollo 8. The S-IVB subsequently went into a solar orbit with an inclination of 23.47° from the plane of the ecliptic, and an orbital period of 340.80 days. After trans-lunar injection, the spent S-IVB third stage thus remained in this solar orbit, where it will continue to circle the Sun for many years. 
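Similarly, the 340.80-day heliocentric period reported for the discarded S-IVB fixes the approximate size of its solar orbit through Kepler's third law in solar units. This is only an illustrative two-body approximation (it ignores later perturbations by the Earth and Moon):

\[
\left(\frac{T}{1\ \text{yr}}\right)^{2} = \left(\frac{a}{1\ \text{AU}}\right)^{3}
\quad\Longrightarrow\quad
a = \left(\frac{340.80}{365.25}\right)^{2/3}\ \text{AU} \approx 0.95\ \text{AU},
\]

so the spent stage circles the Sun on an orbit whose semi-major axis is slightly smaller than the radius of Earth's orbit.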
The Apollo 8 crew were the first humans to pass through the Van Allen radiation belts, which extend up to from Earth. Scientists predicted that passing through the belts quickly at the spacecraft's high speed would cause a radiation dosage of no more than a chest X-ray, or 1 milligray (during a year, the average human receives a dose of 2 to 3 mGy). To record the actual radiation dosages, each crew member wore a Personal Radiation Dosimeter that transmitted data to Earth as well as three passive film dosimeters that showed the cumulative radiation experienced by the crew. By the end of the mission, the crew experienced an average radiation dose of 1.6 mGy. Jim Lovell's main job as Command Module Pilot was as navigator. Although Mission Control performed all the actual navigation calculations, it was necessary to have a crew member serving as navigator so that the crew could return to Earth in case of loss of communication with Mission Control. Lovell navigated by star sightings using a sextant built into the spacecraft, measuring the angle between a star and the Earth's (or the Moon's) horizon. This task was difficult, because a large cloud of debris around the spacecraft, formed by the venting S-IVB, made it hard to distinguish the stars. By seven hours into the mission, the crew was about one hour and 40 minutes behind flight plan, because of the problems in moving away from the S-IVB and Lovell's obscured star sightings. The crew now placed the spacecraft into Passive Thermal Control (PTC), also called "barbecue roll", in which the spacecraft rotated about once per hour around its long axis to ensure even heat distribution across the surface of the spacecraft. In direct sunlight, the spacecraft could be heated to over while the parts in shadow would be . These temperatures could cause the heat shield to crack and propellant lines to burst. Because it was impossible to get a perfect roll, the spacecraft swept out a cone as it rotated. The crew had to make minor adjustments every half hour as the cone pattern got larger and larger. The first mid-course correction came 11 hours into the flight. Testing on the ground had shown that the Service Propulsion System (SPS) engine had a small chance of exploding when burned for long periods unless its combustion chamber was "coated" first. Burning the engine for a short period would accomplish coating. This first correction burn was only 2.4 seconds and added about velocity prograde (in the direction of travel). This change was less than the planned , because of a bubble of helium in the oxidizer lines, which caused unexpectedly low propellant pressure. The crew had to use the small RCS thrusters to make up the shortfall. Two later planned mid-course corrections were canceled because the Apollo 8 trajectory was found to be perfect. Eleven hours into the flight, the crew had been awake for more than 16 hours. Before launch, NASA had decided that at least one crew member should be awake at all times to deal with problems that might arise. Borman started the first sleep shift, but found sleeping difficult because of the constant radio chatter and mechanical noises. About an hour after starting his sleep shift, Borman obtained permission from ground control to take a Seconal sleeping pill. The pill had little effect. Borman eventually fell asleep, and then awoke feeling ill. He vomited twice and had a bout of diarrhea; this left the spacecraft full of small globules of vomit and feces, which the crew cleaned up as well as they could. 
Borman initially did not want everyone to know about his medical problems, but Lovell and Anders wanted to inform Mission Control. The crew decided to use the Data Storage Equipment (DSE), which could tape voice recordings and telemetry and dump them to Mission Control at high speed. After recording a description of Borman's illness they asked Mission Control to check the recording, stating that they "would like an evaluation of the voice comments". The Apollo 8 crew and Mission Control medical personnel held a conference using an unoccupied second-floor control room (there were two identical control rooms in Houston, on the second and third floors, only one of which was used during a mission). The conference participants concluded that there was little to worry about and that Borman's illness was either a 24-hour flu, as Borman thought, or a reaction to the sleeping pill. Researchers now believe that he was suffering from space-adaptation syndrome, which affects about a third of astronauts during their first day in space as their vestibular system adapts to weightlessness. Space-adaptation syndrome had not occurred on previous spacecraft (Mercury and Gemini), because those astronauts couldn't move freely in the small cabins of those spacecraft. The increased cabin space in the Apollo Command Module afforded astronauts greater freedom of movement, contributing to symptoms of space sickness for Borman and, later, astronaut Russell Schweickart during Apollo 9. The cruise phase was a relatively uneventful part of the flight, except for the crew checking that the spacecraft was in working order and that they were on course. During this time, NASA scheduled a television broadcast at 31 hours after launch. The Apollo 8 crew used a 2 kg camera that broadcast in black-and-white only, using a Vidicon tube. The camera had two lenses, a very wide-angle (160°) lens, and a telephoto (9°) lens. During this first broadcast, the crew gave a tour of the spacecraft and attempted to show how the Earth appeared from space. However, difficulties aiming the narrow-angle lens without the aid of a monitor to show what it was looking at made showing the Earth impossible. Additionally, the Earth image became saturated by any bright source without proper filters. In the end, all the crew could show the people watching back on Earth was a bright blob. After broadcasting for 17 minutes, the rotation of the spacecraft took the high-gain antenna out of view of the receiving stations on Earth and they ended the transmission with Lovell wishing his mother a happy birthday. By this time, the crew had completely abandoned the planned sleep shifts. Lovell went to sleep 32½ hours into the flight—3½ hours before he had planned to. A short while later, Anders also went to sleep after taking a sleeping pill. The crew was unable to see the Moon for much of the outward cruise. Two factors made the Moon almost impossible to see from inside the spacecraft: three of the five windows fogging up due to out-gassed oils from the silicone sealant, and the attitude required for the PTC. It was not until the crew had gone behind the Moon that they would be able to see it for the first time. Apollo 8 made a second television broadcast at 55 hours into the flight. This time, the crew rigged up filters meant for the still cameras so they could acquire images of the Earth through the telephoto lens. 
Although difficult to aim, as they had to maneuver the entire spacecraft, the crew was able to broadcast back to Earth the first television pictures of the Earth. The crew spent the transmission describing the Earth and what was visible and the colors they could see. The transmission lasted 23 minutes. At about 55 hours and 40 minutes into the flight, the crew of Apollo 8 became the first humans to enter the gravitational sphere of influence of another celestial body. In other words, the effect of the Moon's gravitational force on Apollo 8 became stronger than that of the Earth. At the time it happened, Apollo 8 was from the Moon and had a speed of relative to the Moon. This historic moment was of little interest to the crew since they were still calculating their trajectory with respect to the launch pad at Kennedy Space Center. They would continue to do so until they performed their last mid-course correction, switching to a reference frame based on ideal orientation for the second engine burn they would make in lunar orbit. It was only 13 hours until they would be in lunar orbit. The last major event before Lunar Orbit Insertion (LOI) was a second mid-course correction. It was in retrograde (against direction of travel) and slowed the spacecraft down by , effectively lowering the closest distance that the spacecraft would pass the moon. At exactly 61 hours after launch, about from the Moon, the crew burned the RCS for 11 seconds. They would now pass from the lunar surface. At 64 hours into the flight, the crew began to prepare for Lunar Orbit Insertion-1 (LOI-1). This maneuver had to be performed perfectly, and due to orbital mechanics had to be on the far side of the Moon, out of contact with the Earth. After Mission Control was polled for a "go/no go" decision, the crew was told at 68 hours, they were Go and "riding the best bird we can find". Lovell replied, "We'll see you on the other side", and for the first time in history, humans travelled behind the Moon and out of radio contact with the Earth. With 10 minutes before the LOI-1, the crew began one last check of the spacecraft systems and made sure that every switch was in the correct place. At that time, they finally got their first glimpses of the Moon. They had been flying over the unlit side, and it was Lovell who saw the first shafts of sunlight obliquely illuminating the lunar surface. The LOI burn was only two minutes away, so the crew had little time to appreciate the view. The SPS ignited at 69 hours, 8 minutes, and 16 seconds after launch and burned for 4 minutes and 7 seconds, placing the Apollo 8 spacecraft in orbit around the Moon. The crew described the burn as being the longest four minutes of their lives. If the burn had not lasted exactly the correct amount of time, the spacecraft could have ended up in a highly elliptical lunar orbit or even flung off into space. If it lasted too long they could have struck the Moon. After making sure the spacecraft was working, they finally had a chance to look at the Moon, which they would orbit for the next 20 hours. On Earth, Mission Control continued to wait. If the crew had not burned the engine or the burn had not lasted the planned length of time, the crew would appear early from behind the Moon. However, this time came and went without Apollo 8 reappearing. Exactly at the calculated moment, the signal was received from the spacecraft, indicating it was in a orbit about the Moon. 
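The crossing into the Moon's gravitational sphere of influence described earlier in this passage can be estimated with a simple two-body comparison. In the crudest simplification (not the method actually used for Apollo trajectory work), the point on the Earth–Moon line where the Moon's gravitational pull first exceeds the Earth's lies at a distance r from the Moon given by

\[
\frac{GM_{\text{Moon}}}{r^{2}} = \frac{GM_{\oplus}}{(D - r)^{2}}
\quad\Longrightarrow\quad
r = \frac{D}{1 + \sqrt{M_{\oplus}/M_{\text{Moon}}}}
\approx \frac{384{,}400\ \text{km}}{1 + \sqrt{81.3}}
\approx 3.8\times 10^{4}\ \text{km},
\]

taking D as the mean Earth–Moon distance. Mission planners more often quote the Laplace sphere of influence, r ≈ D(M_Moon/M_⊕)^{2/5} ≈ 6.6 × 10⁴ km, which is the larger boundary usually cited for the point at which Apollo 8 passed into the Moon's sphere of influence.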
After reporting on the status of the spacecraft, Lovell gave the first description of what the lunar surface looked like: Lovell continued to describe the terrain they were passing over. One of the crew's major tasks was reconnaissance of planned future landing sites on the Moon, especially one in Mare Tranquillitatis that would be the Apollo 11 landing site. The launch time of Apollo 8 had been chosen to give the best lighting conditions for examining the site. A film camera had been set up in one of the spacecraft windows to record a frame every second of the Moon below. Bill Anders spent much of the next 20 hours taking as many photographs as possible of targets of interest. By the end of the mission the crew had taken 700 photographs of the Moon and 150 of the Earth. Throughout the hour that the spacecraft was in contact with Earth, Borman kept asking how the data for the SPS looked. He wanted to make sure that the engine was working and could be used to return early to the Earth if necessary. He also asked that they receive a "go/no go" decision before they passed behind the Moon on each orbit. As they reappeared for their second pass in front of the Moon, the crew set up the equipment to broadcast a view of the lunar surface. Anders described the craters that they were passing over. At the end of this second orbit they performed the 11-second LOI-2 burn of the SPS to circularize the orbit to . Through the next two orbits, the crew continued to keep check of the spacecraft and to observe and photograph the Moon. During the third pass, Borman read a small prayer for his church. He had been scheduled to participate in a service at St. Christopher's Episcopal Church near Seabrook, Texas, but due to the Apollo 8 flight he was unable to. A fellow parishioner and engineer at Mission Control, Rod Rose, suggested that Borman read the prayer which could be recorded and then replayed during the service. When the spacecraft came out from behind the Moon for its fourth pass across the front, the crew witnessed "Earthrise" for the first time in human history (NASA's Lunar Orbiter 1 took the very first picture of an Earthrise from the vicinity of the Moon, on August 23, 1966). Anders saw the Earth emerging from behind the lunar horizon, and then called in excitement to the others, taking a black-and-white photograph as he did so. Anders asked Lovell for a color film and then took "Earthrise", a more famous color photo, later picked by "Life" magazine as one of its hundred photos of the century. Due to the synchronous rotation of the Moon about the Earth, Earthrise is not generally visible from the lunar surface. Earthrise is generally only visible when orbiting the Moon, other than at selected places near the Moon's limb, where libration carries the Earth slightly above and below the lunar horizon. Anders continued to take photographs while Lovell assumed control of the spacecraft so Borman could rest. Despite the difficulty resting in the cramped and noisy spacecraft, Borman was able to sleep for two orbits, awakening periodically to ask questions about their status. Borman awoke fully, however, when he started to hear his fellow crew members make mistakes. They were beginning to not understand questions and would have to ask for the answers to be repeated. Borman realized that everyone was extremely tired from not having a good night's sleep in over three days. He ordered Anders and Lovell to get some sleep and that the rest of the flight plan regarding observing the Moon be scrubbed. 
At first Anders protested saying that he was fine, but Borman would not be swayed. At last Anders agreed as long as Borman would set up the camera to continue to take automatic shots of the Moon. Borman also remembered that there was a second television broadcast planned, and with so many people expected to be watching he wanted the crew to be alert. For the next two orbits Anders and Lovell slept while Borman sat at the helm. On subsequent Apollo missions, crews would avoid this situation by sleeping on the same schedule. As they rounded the Moon for the ninth time, the second television transmission began. Borman introduced the crew, followed by each man giving his impression of the lunar surface and what it was like to be orbiting the Moon. Borman described it as being "a vast, lonely, forbidding expanse of nothing". Then, after talking about what they were flying over, Anders said that the crew had a message for all those on Earth. Each man on board read a section from the Biblical creation story from the Book of Genesis. Borman finished the broadcast by wishing a Merry Christmas to everyone on Earth. His message appeared to sum up the feelings that all three crewmen had from their vantage point in lunar orbit. Borman said, "And from the crew of Apollo 8, we close with good night, good luck, a Merry Christmas and God bless all of you—all of you on the good Earth." The only task left for the crew at this point was to perform the Trans-Earth Injection (TEI), which was scheduled for 2½ hours after the end of the television transmission. The TEI was the most critical burn of the flight, as any failure of the SPS to ignite would strand the crew in lunar orbit, with little hope of escape. As with the previous burn, the crew had to perform the maneuver above the far side of the Moon, out of contact with Earth. The burn occurred exactly on time. The spacecraft telemetry was reacquired as it re-emerged from behind the Moon at 89 hours, 28 minutes, and 39 seconds, the exact time calculated. When voice contact was regained, Lovell announced, "Please be informed, there is a Santa Claus", to which Ken Mattingly, the current CAPCOM, replied, "That's affirmative, you are the best ones to know." The spacecraft began its journey back to Earth on December 25, Christmas Day. Later, Lovell used some otherwise idle time to do some navigational sightings, maneuvering the module to view various stars by using the computer keyboard. However, he accidentally erased some of the computer's memory, which caused the Inertial Measurement Unit (IMU) to think the module was in the same relative position it had been in before lift-off and fire the thrusters to "correct" the module's attitude. Once the crew realized why the computer had changed the module's attitude, they realized they would have to re-enter data that would tell the computer its real position. It took Lovell ten minutes to figure out the right numbers, using the thrusters to get the stars Rigel and Sirius aligned, and another 15 minutes to enter the corrected data into the computer. Sixteen months later, Lovell would once again have to perform a similar manual re-alignment, under more critical conditions, during the Apollo 13 mission, after that module's IMU had to be turned off to conserve energy. In his 1994 book, "Lost Moon: The Perilous Voyage of Apollo 13", Lovell wrote, "My training [on Apollo 8] came in handy!" In that book he dismissed the incident as a "planned experiment", requested by the ground crew. 
In subsequent interviews, Lovell has acknowledged that the incident was an accident, caused by his mistake. The cruise back to Earth was mostly a time for the crew to relax and monitor the spacecraft. As long as the trajectory specialists had calculated everything correctly, the spacecraft would re-enter two and a half days after TEI and splash down in the Pacific. On Christmas afternoon, the crew made their fifth television broadcast. This time they gave a tour of the spacecraft, showing how an astronaut lived in space. When they finished broadcasting they found a small present from Deke Slayton in the food locker: a real turkey dinner with stuffing, in the same kind of pack that the troops in Vietnam received. Another Slayton surprise was a gift of three miniature bottles of brandy, which Borman ordered the crew to leave alone until after they landed. They remained unopened, even years after the flight. There were also small presents to the crew from their wives. The next day, at about 124 hours into the mission, the sixth and final TV transmission showed the mission's best video images of the Earth, in a four-minute broadcast. After two uneventful days the crew prepared for re-entry. The computer would control the re-entry and all the crew had to do was put the spacecraft in the correct attitude, blunt end forward. If the computer broke down, Borman would take over. Once the Command Module was separated from the Service Module, the astronauts were committed to re-entry. Six minutes before they hit the top of the atmosphere, the crew saw the Moon rising above the Earth's horizon, just as had been predicted by the trajectory specialists. As they hit the thin outer atmosphere they noticed it was becoming hazy outside as glowing plasma formed around the spacecraft. The spacecraft started slowing down and the deceleration peaked at 6 g (59 m/s²). With the computer controlling the descent by changing the attitude of the spacecraft, Apollo 8 rose briefly like a skipping stone before descending to the ocean. At the drogue parachute stabilized the spacecraft and was followed at by the three main parachutes. The spacecraft splashdown position was officially reported as in the North Pacific Ocean south of Hawaii. When it hit the water, the parachutes dragged the spacecraft over and left it upside down, in what was termed Stable 2 position. About six minutes later the Command Module was righted into its normal apex-up splashdown orientation by the inflatable bag uprighting system. Buffeted by a swell, Borman was sick while waiting for the three flotation balloons to right the spacecraft. It was 43 minutes after splashdown before the first frogman from arrived, as the spacecraft had landed before sunrise. Forty-five minutes later, the crew was safe on the deck of the aircraft carrier. Apollo 8 came at the end of 1968, a year that had seen much upheaval in the United States and most of the world. Even though the year saw political assassinations, political unrest in the streets of Europe and America, and the Prague Spring, "Time" magazine chose the crew of Apollo 8 as its Men of the Year for 1968, recognizing them as the people who most influenced events in the preceding year. They had been the first people ever to leave the gravitational influence of the Earth and orbit another celestial body. They had survived a mission that even the crew themselves had rated as only having a fifty-fifty chance of fully succeeding. 
The effect of Apollo 8 can be summed up by a telegram from a stranger, received by Borman after the mission, that simply stated, "Thank you Apollo 8. You saved 1968." One of the most famous aspects of the flight was the Earthrise picture that was taken as they came around for their fourth orbit of the Moon. This was the first time that humans had taken such a picture while actually behind the camera, and it has been credited with a role in inspiring the first Earth Day in 1970. It was selected as the first of "Life" magazine's "100 Photographs That Changed the World". Apollo 11 astronaut Michael Collins said, "Eight's momentous historic significance was foremost"; while space historian Robert K. Poole saw Apollo 8 as the most historically significant of all the Apollo missions. The mission was the most widely covered by the media since the first American orbital flight, Mercury-Atlas 6 by John Glenn in 1962. There were 1200 journalists covering the mission, with the BBC coverage being broadcast in 54 countries in 15 different languages. The Soviet newspaper "Pravda" featured a quote from Boris Nikolaevich Petrov, Chairman of the Soviet Interkosmos program, who described the flight as an "outstanding achievement of American space sciences and technology". It is estimated that a quarter of the people alive at the time saw—either live or delayed—the Christmas Eve transmission during the ninth orbit of the Moon. The Apollo 8 broadcasts won an Emmy Award, the highest honor given by the Academy of Television Arts & Sciences. Madalyn Murray O'Hair, an atheist, later caused controversy by bringing a lawsuit against NASA over the reading from Genesis. O'Hair wished the courts to ban American astronauts—who were all government employees—from public prayer in space. Though the case was rejected by the Supreme Court of the United States for lack of jurisdiction, it caused NASA to be skittish about the issue of religion throughout the rest of the Apollo program. Buzz Aldrin, on Apollo 11, self-communicated Presbyterian Communion on the surface of the Moon after landing; he refrained from mentioning this publicly for several years, and only obliquely referred to it at the time. In 1969, the United States Postal Service issued a postage stamp (Scott catalogue #1371) commemorating the Apollo 8 flight around the Moon. The stamp featured a detail of the famous photograph of the Earthrise over the Moon taken by Anders on Christmas Eve, and the words, "In the beginning God ..." Just 18 days after the crew's return to Earth, they were featured during the 1969 Super Bowl pre-game show reciting the Pledge of Allegiance prior to the national anthem being performed by Anita Bryant. In January 1970, the spacecraft was delivered to Osaka, Japan, for display in the U.S. pavilion at Expo '70. It is now displayed at the Chicago Museum of Science and Industry, along with a collection of personal items from the flight donated by Lovell and the space suit worn by Frank Borman. Jim Lovell's Apollo 8 space suit is on public display in the Visitor Center at NASA's Glenn Research Center. Bill Anders's space suit is on display at the Science Museum in London, United Kingdom. Apollo 8's historic mission has been shown and referred to in several forms, both documentary and fiction. The various television transmissions and 16 mm footage shot by the crew of Apollo 8 was compiled and released by NASA in the 1969 documentary, "Debrief: Apollo 8", which was hosted by Burgess Meredith. 
In addition, Spacecraft Films released, in 2003, a three-disc DVD set containing all of NASA's TV and 16 mm film footage related to the mission, including all TV transmissions from space, training and launch footage, and motion pictures taken in flight. Portions of the Apollo 8 mission can be seen in the 1989 documentary "For All Mankind", which won the Grand Jury Prize Documentary at the Sundance Film Festival. The television series "American Experience" aired a documentary, "Race to the Moon", in 2005 during season 18. The Apollo 8 mission was well covered in the 2007 British documentary "In the Shadow of the Moon". Portions of the mission are dramatized in the 1998 miniseries "From the Earth to the Moon" episode "1968". The S-IVB stage of Apollo 8 was also portrayed as the location of an alien device in the 1970 "UFO" episode "Conflict". The mission was also featured in the Discovery Channel's 6-part documentary series "" (Part 3, "Landing the Eagle"). All three astronauts were featured in this documentary, telling the story of their historic mission to the Moon in their own words. Footage of Apollo 8 appears in the film "Dawn of the Planet of the Apes", in which a clip of Frank Borman is shown on a background television news broadcast covering the first manned mission to Mars. Apollo 8's Lunar Orbit Insertion One was chronicled with actual recordings in the song "The Other Side" on the album "The Race for Space" by the band Public Service Broadcasting. Aikido Ueshiba's goal was to create an art that practitioners could use to defend themselves while also protecting their attacker from injury. Aikido's techniques include irimi (entering) and tenkan (turning) movements that redirect the opponent's attack momentum, as well as various types of throws and joint locks. Aikido derives mainly from the martial art of Daitō-ryū Aiki-jūjutsu, but began to diverge from it in the late 1920s, partly due to Ueshiba's involvement with the Ōmoto-kyō religion. Ueshiba's early students' documents bear the term "aiki-jūjutsu". Ueshiba's senior students have different approaches to aikido, depending partly on when they studied with him. Today aikido is found all over the world in a number of styles, with broad ranges of interpretation and emphasis. However, they all share techniques formulated by Ueshiba and most have concern for the well-being of the attacker. The word "aikido" is formed of three kanji: The term "aiki" does not readily appear in the Japanese language outside the scope of budō. This has led to many possible interpretations of the word. The term is also found in martial arts such as judo and kendo, and in various non-martial arts, such as Japanese calligraphy (), flower arranging () and tea ceremony (). Therefore, from a purely literal interpretation, aikido is the "Way of combining forces", in that the term "aiki" refers to the martial arts principle or tactic of blending with an attacker's movements for the purpose of controlling their actions with minimal effort. One applies aiki by understanding the rhythm and intent of the attacker to find the optimal position and timing to apply a counter-technique. Aikido was created by Morihei Ueshiba ( , 14 December 1883 – 26 April 1969), referred to by some aikido practitioners as ("Great Teacher"). The term "aikido" was coined in the twentieth century. Ueshiba envisioned aikido not only as the synthesis of his martial training, but as an expression of his personal philosophy of universal peace and reconciliation. 
During Ueshiba's lifetime and continuing today, aikido has evolved from the "aiki" that Ueshiba studied into a variety of expressions by martial artists throughout the world. Ueshiba developed aikido primarily during the late 1920s through the 1930s through the synthesis of the older martial arts that he had studied. The core martial art from which aikido derives is Daitō-ryū aiki-jūjutsu, which Ueshiba studied directly with Takeda Sōkaku, the reviver of that art. Additionally, Ueshiba is known to have studied Tenjin Shin'yō-ryū with Tozawa Tokusaburō in Tokyo in 1901, Gotōha Yagyū Shingan-ryū under Nakai Masakatsu in Sakai from 1903 to 1908, and judo with Kiyoichi Takagi ( , 1894–1972) in Tanabe in 1911. The art of Daitō-ryū is the primary technical influence on aikido. Along with empty-handed throwing and joint-locking techniques, Ueshiba incorporated training movements with weapons, such as those for the spear (), short staff (), and perhaps the . However, aikido derives much of its technical structure from the art of swordsmanship (). Ueshiba moved to Hokkaidō in 1912, and began studying under Takeda Sokaku in 1915. His official association with Daitō-ryū continued until 1937. However, during the latter part of that period, Ueshiba had already begun to distance himself from Takeda and the Daitō-ryū. At that time Ueshiba was referring to his martial art as "Aiki Budō". It is unclear exactly when Ueshiba began using the name "aikido", but it became the official name of the art in 1942 when the Greater Japan Martial Virtue Society () was engaged in a government sponsored reorganization and centralization of Japanese martial arts. After Ueshiba left Hokkaidō in 1919, he met and was profoundly influenced by Onisaburo Deguchi, the spiritual leader of the Ōmoto-kyō religion (a neo-Shinto movement) in Ayabe. One of the primary features of Ōmoto-kyō is its emphasis on the attainment of utopia during one's life. This was a great influence on Ueshiba's martial arts philosophy of extending love and compassion especially to those who seek to harm others. Aikido demonstrates this philosophy in its emphasis on mastering martial arts so that one may receive an attack and harmlessly redirect it. In an ideal resolution, not only is the receiver unharmed, but so is the attacker. In addition to the effect on his spiritual growth, the connection with Deguchi gave Ueshiba entry to elite political and military circles as a martial artist. As a result of this exposure, he was able to attract not only financial backing but also gifted students. Several of these students would found their own styles of aikido. Aikido was first brought to the rest of the world in 1951 by Minoru Mochizuki with a visit to France where he introduced aikido techniques to judo students. He was followed by Tadashi Abe in 1952, who came as the official Aikikai Hombu representative, remaining in France for seven years. Kenji Tomiki toured with a delegation of various martial arts through 15 continental states of the United States in 1953. Later that year, Koichi Tohei was sent by Aikikai Hombu to Hawaii for a full year, where he set up several dojo. This trip was followed by several further visits and is considered the formal introduction of aikido to the United States. The United Kingdom followed in 1955; Italy in 1964 by Hiroshi Tada; and Germany in 1965 by Katsuaki Asai. Designated "Official Delegate for Europe and Africa" by Morihei Ueshiba, Masamichi Noro arrived in France in September 1961. 
Seiichi Sugano was appointed to introduce aikido to Australia in 1965. Today there are aikido dojo throughout the world. The largest aikido organization is the Aikikai Foundation, which remains under the control of the Ueshiba family. However, aikido has many styles, mostly formed by Morihei Ueshiba's major students. The earliest independent styles to emerge were Yoseikan Aikido, begun by Minoru Mochizuki in 1931, Yoshinkan Aikido, founded by Gozo Shioda in 1955, and Shodokan Aikido, founded by Kenji Tomiki in 1967. The emergence of these styles pre-dated Ueshiba's death and did not cause any major upheavals when they were formalized. Shodokan Aikido, however, was controversial, since it introduced a unique rule-based competition that some felt was contrary to the spirit of aikido. After Ueshiba's death in 1969, two more major styles emerged. Significant controversy arose with the departure of the Aikikai Hombu Dojo's chief instructor Koichi Tohei, in 1974. Tohei left as a result of a disagreement with the son of the founder, Kisshomaru Ueshiba, who at that time headed the Aikikai Foundation. The disagreement was over the proper role of "ki" development in regular aikido training. After Tohei left, he formed his own style, called Shin Shin Toitsu Aikido, and the organization that governs it, the Ki Society ("Ki no Kenkyūkai"). A final major style evolved from Ueshiba's retirement in Iwama, Ibaraki and the teaching methodology of long term student Morihiro Saito. It is unofficially referred to as the "Iwama style", and at one point a number of its followers formed a loose network of schools they called Iwama Ryu. Although Iwama style practitioners remained part of the Aikikai until Saito's death in 2002, followers of Saito subsequently split into two groups. One remained with the Aikikai and the other formed the independent Shinshin Aikishuren Kai in 2004 around Saito's son Hitohiro Saito. Today, the major styles of aikido are each run by a separate governing organization, have their own in Japan, and have an international breadth. The study of "ki" is an important component of aikido, and its study defies categorization as either "physical" or "mental" training, as it encompasses both. The "kanji" for "ki" normally is written as . It was written as until the writing reforms after World War II, and this older form still is seen on occasion. The character for "ki" is used in everyday Japanese terms, such as , or . "Ki" has many meanings, including "ambience", "mind", "mood", and "intention", however, in traditional martial arts it is often used to refer to "life energy". Gōzō Shioda's Yoshinkan Aikido, considered one of the "hard styles", largely follows Ueshiba's teachings from before World War II, and surmises that the secret to "ki" lies in timing and the application of the whole body's strength to a single point. In later years, Ueshiba's application of "ki" in aikido took on a softer, more gentle feel. This was his Takemusu Aiki and many of his later students teach about "ki" from this perspective. Koichi Tohei's Ki Society centers almost exclusively around the study of the empirical (albeit subjective) experience of "ki" with students ranked separately in aikido techniques and "ki" development. In aikido, as in virtually all Japanese martial arts, there are both physical and mental aspects of training. The physical training in aikido is diverse, covering both general physical fitness and conditioning, as well as specific techniques. 
Because a substantial portion of any aikido curriculum consists of throws, beginners learn how to safely fall or roll. The specific techniques for attack include both strikes and grabs; the techniques for defense consist of throws and pins. After basic techniques are learned, students study freestyle defense against multiple opponents, and techniques with weapons. Physical training goals pursued in conjunction with aikido include controlled relaxation, correct movement of joints such as hips and shoulders, flexibility, and endurance, with less emphasis on strength training. In aikido, pushing or extending movements are much more common than pulling or contracting movements. This distinction can be applied to general fitness goals for the aikido practitioner. In aikido, specific muscles or muscle groups are not isolated and worked to improve tone, mass, or power. Aikido-related training emphasizes the use of coordinated whole-body movement and balance similar to yoga or pilates. For example, many dojos begin each class with , which may include stretching and ukemi (break falls). Aikido training is based primarily on two partners practicing pre-arranged forms ("kata") rather than freestyle practice. The basic pattern is for the receiver of the technique ("uke") to initiate an attack against the person who applies the technique—the "tori", or "shite" (depending on aikido style), also referred to as "nage" (when applying a throwing technique), who neutralises this attack with an aikido technique. Both halves of the technique, that of "uke" and that of "tori", are considered essential to aikido training. Both are studying aikido principles of blending and adaptation. "Tori" learns to blend with and control attacking energy, while "uke" learns to become calm and flexible in the disadvantageous, off-balance positions in which "tori" places them. This "receiving" of the technique is called "ukemi". "Uke" continuously seeks to regain balance and cover vulnerabilities (e.g., an exposed side), while "tori" uses position and timing to keep "uke" off-balance and vulnerable. In more advanced training, "uke" will sometimes apply to regain balance and pin or throw "tori". Aikido techniques are usually a defense against an attack, so students must learn to deliver various types of attacks to be able to practice aikido with a partner. Although attacks are not studied as thoroughly as in striking-based arts, sincere attacks (a strong strike or an immobilizing grab) are needed to study correct and effective application of technique. Many of the of aikido resemble cuts from a sword or other grasped object, which indicate its origins in techniques intended for armed combat. Other techniques, which explicitly appear to be punches ("tsuki"), are practiced as thrusts with a knife or sword. Kicks are generally reserved for upper-level variations; reasons cited include that falls from kicks are especially dangerous, and that kicks (high kicks in particular) were uncommon during the types of combat prevalent in feudal Japan. Some basic strikes include: Beginners in particular often practice techniques from grabs, both because they are safer and because it is easier to feel the energy and lines of force of a hold than a strike. Some grabs are historically derived from being held while trying to draw a weapon; a technique could then be used to free oneself and immobilize or strike the attacker who is grabbing the defender. 
The following are examples of some basic grabs: a single hand grabbing one wrist ("katate-dori"), both hands grabbing one wrist ("morote-dori"), both hands grabbing both wrists ("ryōte-dori"), a shoulder grab ("kata-dori"), and a grab of the chest or lapel ("mune-dori"). The following are a sample of the basic or widely practiced throws and pins. Many of these techniques derive from Daitō-ryū Aiki-jūjutsu, but some others were invented by Morihei Ueshiba. The precise terminology for some may vary between organisations and styles, so what follows are the terms used by the Aikikai Foundation. Note that despite the names of the first five techniques listed, they are not universally taught in numeric order. They include the five numbered arm pins "ikkyō" through "gokyō" ("first teaching" through "fifth teaching"), along with throws such as "shihōnage" (the four-direction throw), "kotegaeshi" (the wrist-return throw), "iriminage" (the entering throw), "tenchinage" (the heaven-and-earth throw), "kokyūnage" (breath throws), and "koshinage" (the hip throw). Aikido makes use of body movement ("tai sabaki") to blend with "uke". For example, an "entering" ("irimi") technique consists of movements inward towards "uke", while a "turning" ("tenkan") technique uses a pivoting motion. Additionally, an "inside" ("uchi") technique takes place in front of "uke", whereas an "outside" ("soto") technique takes place to their side; a front ("omote") technique is applied with motion to the front of "uke", and a rear ("ura") version is applied with motion towards the rear of "uke", usually by incorporating a turning or pivoting motion. Finally, most techniques can be performed while in a seated posture ("seiza"). Techniques where both "uke" and "tori" are standing are called "tachi-waza", techniques where both start off in "seiza" are called "suwari-waza", and techniques performed with "uke" standing and "tori" sitting are called "hanmi handachi". Thus, from fewer than twenty basic techniques, there are thousands of possible implementations. For instance, "ikkyō" can be applied to an opponent moving forward with a strike (perhaps with an "ura" type of movement to redirect the incoming force), or to an opponent who has already struck and is now moving back to reestablish distance (perhaps an "omote-waza" version). Specific aikido "kata" are typically referred to with the formula "attack-technique(-modifier)". For instance, "katate-dori ikkyō" refers to any "ikkyō" technique executed when "uke" is holding one wrist. This could be further specified as "katate-dori ikkyō omote", referring to any forward-moving "ikkyō" technique from that grab. "Atemi" are strikes (or feints) employed during an aikido technique. Some view "atemi" as attacks against "vital points" meant to cause damage in and of themselves. For instance, Gōzō Shioda described using "atemi" in a brawl to quickly down a gang's leader. Others consider "atemi", especially to the face, to be methods of distraction meant to enable other techniques. A strike, whether or not it is blocked, can startle the target and break their concentration. The target may become unbalanced in attempting to avoid the blow, for example by jerking the head back, which may allow for an easier throw. Many sayings about "atemi" are attributed to Morihei Ueshiba, who considered them an essential element of technique. Weapons training in aikido traditionally includes the short staff ("jō", whose techniques closely resemble those of the bayonet, or Jūkendō), the wooden sword ("bokken"), and the knife ("tantō"). Some schools incorporate firearm-disarming techniques. Both weapon-taking and weapon-retention are taught. Some schools, such as the Iwama style of Morihiro Saito, usually spend substantial time practicing with both "bokken" and "jō", under the names of "aiki-ken" and "aiki-jō", respectively. The founder developed many of the empty-handed techniques from traditional sword, spear and bayonet movements. 
Consequently, the practice of the weapons arts gives insight into the origin of techniques and movements, and reinforces the concepts of distance, timing, foot movement, presence and connectedness with one's training partner(s). One feature of aikido is training to defend against multiple attackers, often called "taninzudori", or "taninzugake". Freestyle practice with multiple attackers, called "randori", is a key part of most curricula and is required for the higher level ranks. "Randori" exercises a person's ability to intuitively perform techniques in an unstructured environment. Strategic choice of techniques, based on how they reposition the student relative to other attackers, is important in "randori" training. For instance, an "ura" technique might be used to neutralise the current attacker while turning to face attackers approaching from behind. In Shodokan Aikido, "randori" differs in that it is not performed with multiple persons with defined roles of defender and attacker, but between two people, where both participants attack, defend, and counter at will. In this respect it resembles judo "randori". In applying a technique during training, it is the responsibility of "tori" to prevent injury to "uke" by employing a speed and force of application that is commensurate with their partner's proficiency in "ukemi". Injuries (especially those to the joints), when they do occur in aikido, are often the result of "tori" misjudging the ability of "uke" to receive the throw or pin. A study of injuries in the martial arts showed that the types of injury varied considerably from one art to another. Soft tissue injuries are one of the most common types of injuries found within aikido, as well as joint strain and stubbed fingers and toes. Several deaths from head-and-neck injuries, caused by aggressive "shihōnage" in a senpai/kōhai hazing context, have been reported. Aikido training is mental as well as physical, emphasizing the ability to relax the mind and body even under the stress of dangerous situations. This is necessary to enable the practitioner to perform the bold enter-and-blend movements that underlie aikido techniques, wherein an attack is met with confidence and directness. Morihei Ueshiba once remarked that one "must be willing to receive 99% of an opponent's attack and stare death in the face" in order to execute techniques without hesitation. As a martial art concerned not only with fighting proficiency but with the betterment of daily life, this mental aspect is of key importance to aikido practitioners. Aikido practitioners (commonly called "aikidōka" outside Japan) generally progress by promotion through a series of "grades" ("kyū"), followed by a series of "degrees" ("dan"), pursuant to formal testing procedures. Some aikido organizations use belts to distinguish practitioners' grades, often simply white and black belts to distinguish "kyu" and "dan" grades, though some use various belt colors. Testing requirements vary, so a particular rank in one organization is not comparable or interchangeable with the rank of another. Some dojos do not allow students to take the test to obtain a "dan" rank unless they are 16 or older. The uniform worn for practicing aikido ("aikidōgi") is similar to the training uniform ("keikogi") used in most other modern martial arts; simple trousers and a wraparound jacket, usually white. Both thick ("judo-style") and thin ("karate-style") cotton tops are used. 
Aikido-specific tops are available with shorter sleeves which reach to just below the elbow. Most aikido systems add a pair of wide pleated black or indigo trousers called a "hakama" (used also in Naginatajutsu, kendo, and iaido). In many schools, its use is reserved for practitioners with "dan" ranks or for instructors, while others allow all practitioners to wear a "hakama" regardless of rank. The most common criticism of aikido is that it suffers from a lack of realism in training. The attacks initiated by "uke" (and which "tori" must defend against) have been criticized as being "weak", "sloppy", and "little more than caricatures of an attack". Weak attacks from "uke" allow for a conditioned response from "tori", and result in underdevelopment of the skills needed for the safe and effective practice of both partners. To counteract this, some styles allow students to become less compliant over time but, in keeping with the core philosophies, this is after having demonstrated proficiency in being able to protect themselves and their training partners. Shodokan Aikido addresses the issue by practising in a competitive format. Such adaptations are debated between styles, with some maintaining that there is no need to adjust their methods because either the criticisms are unjustified, or they are not training for self-defense or combat effectiveness, but for spiritual, fitness, or other reasons. Another criticism pertains to the shift in training focus after the end of Ueshiba's seclusion in Iwama from 1942 to the mid-1950s, as he increasingly emphasized the spiritual and philosophical aspects of aikido. As a result, strikes to vital points by "tori", entering ("irimi") and initiation of techniques by "tori", the distinction between "omote" (front side) and "ura" (back side) techniques, and the use of weapons, were all de-emphasized or eliminated from practice. Some Aikido practitioners feel that lack of training in these areas leads to an overall loss of effectiveness. Conversely, some styles of aikido receive criticism for not placing enough importance on the spiritual practices emphasized by Ueshiba. According to Minoru Shibata of Aikido Journal, "O-Sensei's aikido was not a continuation and extension of the old and has a distinct discontinuity with past martial and philosophical concepts." That is, aikido practitioners who focus on aikido's roots in traditional "jujutsu" or "kenjutsu" are diverging from what Ueshiba taught. Such critics urge practitioners to embrace the assertion that "[Ueshiba's] transcendence to the spiritual and universal reality were the fundamentals of the paradigm that he demonstrated." Aries (constellation) Aries is one of the constellations of the zodiac. It is located in the northern celestial hemisphere between Pisces to the west and Taurus to the east. The name Aries is Latin for ram, and its symbol is ♈ (Unicode U+2648), representing a ram's horns. It is one of the 48 constellations described by the 2nd century astronomer Ptolemy, and remains one of the 88 modern constellations. It is a mid-sized constellation, ranking 39th in overall size, with an area of 441 square degrees (1.1% of the celestial sphere). Although Aries came to represent specifically the ram whose fleece became the Golden Fleece of Ancient Greek mythology, it has represented a ram since late Babylonian times. Before that, the stars of Aries formed a farmhand. 
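The area fraction quoted above follows from the total area of the celestial sphere, which is 4π steradians, or roughly 41,253 square degrees (a standard value not stated in the text); a minimal arithmetic check:

```python
import math

# Total area of the celestial sphere in square degrees:
# 4*pi steradians times (180/pi)^2 square degrees per steradian (~41,253).
total_sq_deg = 4 * math.pi * (180 / math.pi) ** 2

aries_sq_deg = 441.0
fraction = aries_sq_deg / total_sq_deg

print(f"{fraction:.3%}")   # ~1.069%, consistent with the quoted 1.1%
```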
Different cultures have incorporated the stars of Aries into different constellations, including twin inspectors in China and a porpoise in the Marshall Islands. Aries is a relatively dim constellation, possessing only four bright stars: Hamal (Alpha Arietis, second magnitude), Sheratan (Beta Arietis, third magnitude), Mesarthim (Gamma Arietis, fourth magnitude), and 41 Arietis (also fourth magnitude). The few deep-sky objects within the constellation are quite faint and include several pairs of interacting galaxies. Several meteor showers appear to radiate from Aries, including the Daytime Arietids and the Epsilon Arietids. Aries is now recognized as an official constellation, albeit as a specific region of the sky, by the International Astronomical Union. It was originally defined in ancient texts as a specific pattern of stars, and has remained a constellation since ancient times; it now includes the ancient pattern as well as the surrounding stars. In the description of the Babylonian zodiac given in the clay tablets known as the MUL.APIN, the constellation now known as Aries was the final station along the ecliptic. The MUL.APIN was a comprehensive table of the risings and settings of stars, which likely served as an agricultural calendar. Modern-day Aries was known as "The Agrarian Worker" or "The Hired Man". Although likely compiled in the 12th or 11th century BC, the MUL.APIN reflects a tradition which marks the Pleiades as the vernal equinox, which was the case with some precision at the beginning of the Middle Bronze Age. The earliest identifiable reference to Aries as a distinct constellation comes from the boundary stones that date from 1350 to 1000 BC. On several boundary stones, a zodiacal ram figure is distinct from the other characters present. The shift in identification from the constellation as the Agrarian Worker to the Ram likely occurred in later Babylonian tradition because of its growing association with Dumuzi the Shepherd. By the time the MUL.APIN was created—by 1000 BC—modern Aries was identified with both Dumuzi's ram and a hired laborer. The exact timing of this shift is difficult to determine due to the lack of images of Aries or other ram figures. In ancient Egyptian astronomy, Aries was associated with the god Amon-Ra, who was depicted as a man with a ram's head and represented fertility and creativity. Because it was the location of the vernal equinox, it was called the "Indicator of the Reborn Sun". During the times of the year when Aries was prominent, priests would process statues of Amon-Ra to temples, a practice that was modified by Persian astronomers centuries later. Aries acquired the title of "Lord of the Head" in Egypt, referring to its symbolic and mythological importance. Aries was not fully accepted as a constellation until classical times. In Hellenistic astrology, the constellation of Aries is associated with the golden ram of Greek mythology that rescued Phrixus and Helle on orders from Hermes, taking Phrixus to the land of Colchis. Phrixus and Helle were the son and daughter of King Athamas and his first wife Nephele. The king's second wife, Ino, was jealous and wished to kill his children. To accomplish this, she induced a famine in Boeotia, then falsified a message from the Oracle of Delphi that said Phrixus must be sacrificed to end the famine. Athamas was about to sacrifice his son atop Mount Laphystium when Aries, sent by Nephele, arrived. 
Helle fell off of Aries's back in flight and drowned in the Dardanelles, also called the Hellespont in her honor. After arriving, Phrixus sacrificed the ram to Zeus and gave the Fleece to Aeëtes of Colchis, who rewarded him with an engagement to his daughter Chalciope. Aeëtes hung its skin in a sacred place where it became known as the Golden Fleece and was guarded by a dragon. In a later myth, this Golden Fleece was stolen by Jason and the Argonauts. Historically, Aries has been depicted as a crouched, wingless ram with its head turned towards Taurus. Ptolemy asserted in his "Almagest" that Hipparchus depicted Alpha Arietis as the ram's muzzle, though Ptolemy did not include it in his constellation figure. Instead, it was listed as an "unformed star", and denoted as "the star over the head". John Flamsteed, in his "Atlas Coelestis", followed Ptolemy's description by mapping it above the figure's head. Flamsteed followed the general convention of maps by depicting Aries lying down. Astrologically, Aries has been associated with the head and its humors. It was strongly associated with Mars, both the planet and the god. It was considered to govern Western Europe and Syria, and to indicate a strong temper in a person. The First Point of Aries, the location of the vernal equinox, is named for the constellation. This is because the Sun crossed the celestial equator from south to north in Aries more than two millennia ago. Hipparchus defined it in 130 BC as a point south of Gamma Arietis. Because of the precession of the equinoxes, the First Point of Aries has since moved into Pisces and will move into Aquarius by around 2600 AD. The Sun now appears in Aries from late April through mid May, though the constellation is still associated with the beginning of spring. Medieval Muslim astronomers depicted Aries in various ways. Astronomers like al-Sufi saw the constellation as a ram, modeled on the precedent of Ptolemy. However, some Islamic celestial globes depicted Aries as a nondescript four-legged animal with what may be antlers instead of horns. Some early Bedouin observers saw a ram elsewhere in the sky; this constellation featured the Pleiades as the ram's tail. The generally accepted Arabic formation of Aries consisted of thirteen stars in a figure along with five "unformed" stars, four of which were over the animal's hindquarters and one of which was the disputed star over Aries's head. Al-Sufi's depiction differed from both other Arab astronomers' and Flamsteed's, in that his Aries was running and looking behind itself. The obsolete constellations introduced in Aries (Musca Borealis, Lilium, Vespa, and Apes) were all composed of stars in the northern part of the constellation. Musca Borealis was created from the stars 33 Arietis, 35 Arietis, 39 Arietis, and 41 Arietis. In 1612, Petrus Plancius introduced Apes, a constellation representing a bee. In 1624, the same stars were used by Jakob Bartsch to create a constellation called Vespa, representing a wasp. In 1679 Augustin Royer used these stars for his constellation Lilium, representing the fleur-de-lis. None of these constellations became widely accepted. Johann Hevelius renamed the constellation "Musca" in 1690 in his "Firmamentum Sobiescianum". To differentiate it from Musca, the southern fly, it was later renamed Musca Borealis but it did not gain acceptance and its stars were ultimately officially reabsorbed into Aries. In 1922, the International Astronomical Union defined its recommended three-letter abbreviation, "Ari". 
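The roughly two-millennium drift of the equinox out of Aries described above can be estimated from the precession rate; a rough sketch assuming the commonly cited value of about 50.3 arcseconds per year (a figure not given in the text):

```python
# Rough precession timescales, assuming a rate of ~50.3 arcseconds per year.
PRECESSION_ARCSEC_PER_YEAR = 50.3

full_circle_years = 360 * 3600 / PRECESSION_ARCSEC_PER_YEAR      # ~25,770 years for a full cycle
years_per_zodiac_sign = 30 * 3600 / PRECESSION_ARCSEC_PER_YEAR   # ~2,150 years per 30-degree sign

print(round(full_circle_years), round(years_per_zodiac_sign))
```

On this estimate the equinox spends on the order of two thousand years in each equal 30° sign; because the modern constellation boundaries are unequal in extent, the time it actually spends within Pisces before reaching Aquarius differs somewhat from the equal-sign figure.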
The official boundaries of Aries were defined in 1930 by Eugène Delporte as a polygon of 12 segments. Its right ascension is between 1h 46.4m and 3h 29.4m and its declination is between 10.36° and 31.22° in the equatorial coordinate system. In traditional Chinese astronomy, stars from Aries were used in several constellations. The brightest stars—Alpha, Beta, and Gamma Arietis—formed a constellation called "Lou", variously translated as "bond", "lasso", and "sickle", which was associated with the ritual sacrifice of cattle. This name was shared by the 16th lunar mansion, the location of the full moon closest to the autumnal equinox. The lunar mansion represented the area where animals were gathered before sacrifice around that time. This constellation has also been associated with harvest-time as it could represent a woman carrying a basket of food on her head. 35, 39, and 41 Arietis were part of a constellation called "Wei", which represented a fat abdomen and was the namesake of the 17th lunar mansion, which represented granaries. Delta and Zeta Arietis were a part of the constellation "Tianyin", thought to represent the Emperor's hunting partner. "Zuogeng" ("Tso-kang"), a constellation depicting a marsh and pond inspector, was composed of Mu, Nu, Omicron, Pi, and Sigma Arietis. He was accompanied by "Yeou-kang", a constellation depicting an official in charge of pasture distribution. In a similar system to the Chinese, the first lunar mansion in Hindu astronomy was called "Aswini", after the traditional names for Beta and Gamma Arietis, the Aswins. Because the Hindu new year began with the vernal equinox, the Rig Veda contains more than 50 new-year-related hymns to the twins, making them some of the most prominent characters in the work. Aries itself was known as "Aja" and "Mesha". In Hebrew astronomy Aries was named "Teli"; it signified either Simeon or Gad, and generally symbolizes the "Lamb of the World". The neighboring Syrians named the constellation "Amru", and the bordering Turks named it "Kuzi". Half a world away, in the Marshall Islands, several stars from Aries were incorporated into a constellation depicting a porpoise, along with stars from Cassiopeia, Andromeda, and Triangulum. Alpha, Beta, and Gamma Arietis formed the head of the porpoise, while stars from Andromeda formed the body and the bright stars of Cassiopeia formed the tail. Other Polynesian peoples recognized Aries as a constellation. The Marquesas islanders called it "Na-pai-ka"; the Māori constellation "Pipiri" may correspond to modern Aries as well. In indigenous Peruvian astronomy, a constellation with most of the same stars as Aries existed. It was called the "Market Moon" and the "Kneeling Terrace", as a reminder for when to hold the annual harvest festival, Ayri Huay. Aries has three prominent stars forming an asterism, designated Alpha, Beta, and Gamma Arietis by Johann Bayer. All three are commonly used for navigation. There is also one other star above the fourth magnitude, 41 Arietis (Bharani). α Arietis, called Hamal, is the brightest star in Aries. Its traditional name is derived from the Arabic word for "lamb" or "head of the ram" ("ras al-hamal"), which references Aries's mythological background. With a spectral class of K2 and a luminosity class of III, it is an orange giant with an apparent visual magnitude of 2.00 that lies 66 light-years from Earth. Its absolute magnitude is −0.1. 
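The apparent magnitudes, distances, and absolute magnitudes quoted for these stars are tied together by the standard distance-modulus relation, M = m − 5(log10 d − 1), with the distance d in parsecs (one parsec is about 3.26 light-years). A minimal sketch of that relation, ignoring interstellar extinction; the worked example uses the figures quoted further on for Lambda Arietis:

```python
import math

LIGHT_YEARS_PER_PARSEC = 3.2616

def absolute_magnitude(apparent_mag: float, distance_ly: float) -> float:
    """Distance-modulus relation M = m - 5*(log10(d_pc) - 1); extinction ignored."""
    d_pc = distance_ly / LIGHT_YEARS_PER_PARSEC
    return apparent_mag - 5 * (math.log10(d_pc) - 1)

# Lambda Arietis as quoted below: apparent magnitude 4.8 at 129 light-years.
print(round(absolute_magnitude(4.8, 129), 2))   # ~1.81, close to the quoted 1.7
```

Differences of a few tenths of a magnitude between such estimates and the catalogue values quoted in the text can arise from rounding and from the particular parallaxes used by the source.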
β Arietis, also known as Sheratan, is a blue-white star with an apparent visual magnitude of 2.64. Its traditional name is derived from "sharatayn", the Arabic word for "the two signs", referring to both Beta and Gamma Arietis in their position as heralds of the vernal equinox. The two stars were known to the Bedouin as "qarna al-hamal", "horns of the ram". It is 59 light-years from Earth. Its absolute magnitude is 2.1. It is a spectroscopic binary star, one in which the companion star is only known through analysis of the spectra. The spectral class of the primary is A5. Hermann Carl Vogel determined that Sheratan was a spectroscopic binary in 1903; its orbit was determined by Hans Ludendorff in 1907. It has since been studied for its eccentric orbit. γ Arietis, with a common name of Mesarthim, is a binary star with two white-hued components, located in a rich field of magnitude 8–12 stars. Its traditional name has conflicting derivations. It may be derived from a corruption of "al-sharatan", the Arabic word meaning "pair", or a word for "fat ram". However, it may also come from the Sanskrit for "first star of Aries" or the Hebrew for "ministerial servants", both of which are unusual languages of origin for star names. Along with Beta Arietis, it was known to the Bedouin as "qarna al-hamal". The primary is of magnitude 4.59 and the secondary is of magnitude 4.68. The system is 164 light-years from Earth. The two components are separated by 7.8 arcseconds, and the system as a whole has an apparent magnitude of 3.9. The primary is an A-type star with an absolute magnitude of 0.2 and the secondary is a B9-type star with an absolute magnitude of 0.4. The angle between the two components is 1°. Mesarthim was discovered to be a double star by Robert Hooke in 1664, one of the earliest such telescopic discoveries. The primary, γ Arietis, is an Alpha² Canum Venaticorum variable star that has a range of 0.02 magnitudes and a period of 2.607 days. It is unusual because of its strong silicon emission lines. The constellation is home to several double stars, including Epsilon, Lambda, and Pi Arietis. ε Arietis is a binary star with two white components. The primary is of magnitude 5.2 and the secondary is of magnitude 5.5. The system is 290 light-years from Earth. Its overall magnitude is 4.63, and the primary has an absolute magnitude of 1.4. Its spectral class is A2. The two components are separated by 1.5 arcseconds. λ Arietis is a wide double star with a white-hued primary and a yellow-hued secondary. The primary is of magnitude 4.8 and the secondary is of magnitude 7.3. The primary is 129 light-years from Earth. It has an absolute magnitude of 1.7 and a spectral class of F0. The two components are separated by 36 arcseconds at an angle of 50°; the two stars are located 0.5° east of 7 Arietis. π Arietis is a close binary star with a blue-white primary and a white secondary. The primary is of magnitude 5.3 and the secondary is of magnitude 8.5. The primary is 776 light-years from Earth. The primary itself is a wide double star with a separation of 25.2 arcseconds; the tertiary has a magnitude of 10.8. The primary and secondary are separated by 3.2 arcseconds. Most of the other stars in Aries visible to the naked eye have magnitudes between 3 and 5. δ Ari, called Boteïn, is a star of magnitude 4.35, 170 light-years away. It has an absolute magnitude of −0.1 and a spectral class of K2. 
ζ Arietis is a star of magnitude 4.89, 263 light-years away. Its spectral class is A0 and its absolute magnitude is 0.0. 14 Arietis is a star of magnitude 4.98, 288 light-years away. Its spectral class is F2 and its absolute magnitude is 0.6. 39 Arietis (Lilii Borea) is a similar star of magnitude 4.51, 172 light-years away. Its spectral class is K1 and its absolute magnitude is 0.0. 35 Arietis is a dim star of magnitude 4.55, 343 light-years away. Its spectral class is B3 and its absolute magnitude is −1.7. 41 Arietis, known both as c Arietis and Nair al Butain, is a brighter star of magnitude 3.63, 165 light-years away. Its spectral class is B8 and its absolute magnitude is −0.2. 53 Arietis is a runaway star of magnitude 6.09, 815 light-years away. Its spectral class is B2. It was likely ejected from the Orion Nebula approximately five million years ago, possibly due to supernovae. Finally, Teegarden's Star is the closest star to Earth in Aries. It is a brown dwarf of magnitude 15.14 and spectral class M6.5V. With a proper motion of 5.1 arcseconds per year, it is the 24th closest star to Earth overall. Aries has its share of variable stars, including R and U Arietis, Mira-type variable stars, and T Arietis, a semi-regular variable star. R Arietis is a Mira variable star that ranges in magnitude from a minimum of 13.7 to a maximum of 7.4 with a period of 186.8 days. It is 4,080 light-years away. U Arietis is another Mira variable star that ranges in magnitude from a minimum of 15.2 to a maximum of 7.2 with a period of 371.1 days. T Arietis is a semiregular variable star that ranges in magnitude from a minimum of 11.3 to a maximum of 7.5 with a period of 317 days. It is 1,630 light-years away. One particularly interesting variable in Aries is SX Arietis, a rotating variable star considered to be the prototype of its class, helium variable stars. SX Arietis stars have very prominent emission lines of Helium I and Silicon III. They are normally main-sequence B0p–B9p stars, and their variations are not usually visible to the naked eye. Therefore, they are observed photometrically, usually having periods that fit in the course of one night. Similar to Alpha² Canum Venaticorum variables, SX Arietis stars have periodic changes in their light and magnetic field, which correspond to the periodic rotation; they differ from the Alpha² Canum Venaticorum variables in their higher temperature. There are between 39 and 49 SX Arietis variable stars currently known; ten are noted as being "uncertain" in the General Catalog of Variable Stars. NGC 772 is a spiral galaxy with an integrated magnitude of 10.3, located southeast of β Arietis and 15 arcminutes west of 15 Arietis. It is a relatively bright galaxy and shows obvious nebulosity and ellipticity in an amateur telescope. It is 7.2 by 4.2 arcminutes, meaning that its surface brightness, magnitude 13.6, is significantly lower than its integrated magnitude. NGC 772 is a class SA(s)b galaxy, which means that it is an unbarred spiral galaxy without a ring that possesses a somewhat prominent bulge and spiral arms that are wound somewhat tightly. The main arm, on the northwest side of the galaxy, is home to many star forming regions; this is due to previous gravitational interactions with other galaxies. NGC 772 has a small companion galaxy, NGC 770, that is about 113,000 light-years away from the larger galaxy. The two galaxies together are also classified as Arp 78 in the Arp peculiar galaxy catalog. 
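The physical diameter given just below for NGC 772 can be recovered from the apparent size and distance quoted above using the small-angle approximation; a minimal sketch:

```python
import math

def physical_size_ly(angular_size_arcmin: float, distance_ly: float) -> float:
    """Small-angle approximation: physical size = distance * angle (angle in radians)."""
    angle_rad = math.radians(angular_size_arcmin / 60)
    return distance_ly * angle_rad

# NGC 772: roughly 7.2 arcminutes along its long axis at ~114 million light-years.
print(round(physical_size_ly(7.2, 114e6), -3))   # ~239,000 ly, close to the quoted 240,000
```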
NGC 772 has a diameter of 240,000 light-years and the system is 114 million light-years from Earth. Another spiral galaxy in Aries is NGC 673, a face-on class SAB(s)c galaxy. It is a weakly barred spiral galaxy with loosely wound arms. It has no ring and a faint bulge and is 2.5 by 1.9 arcminutes. It has two primary arms with fragments located farther from the core. 171,000 light-years in diameter, NGC 673 is 235 million light-years from Earth. NGC 678 and NGC 680 are a pair of galaxies in Aries that are only about 200,000 light-years apart. Part of the NGC 691 group of galaxies, both are at a distance of approximately 130 million light-years. NGC 678 is an edge-on spiral galaxy that is 4.5 by 0.8 arcminutes. NGC 680, an elliptical galaxy with an asymmetrical boundary, is the brighter of the two at magnitude 12.9; NGC 678 has a magnitude of 13.35. Both galaxies have bright cores, but NGC 678 is the larger galaxy at a diameter of 171,000 light-years; NGC 680 has a diameter of 72,000 light-years. NGC 678 is further distinguished by its prominent dust lane. NGC 691 itself is a spiral galaxy slightly inclined to our line of sight. It has multiple spiral arms and a bright core. Because it is so diffuse, it has a low surface brightness. It has a diameter of 126,000 light-years and is 124 million light-years away. NGC 877 is the brightest member of an 8-galaxy group that also includes NGC 870, NGC 871, and NGC 876, with a magnitude of 12.53. It is 2.4 by 1.8 arcminutes and is 178 million light-years away with a diameter of 124,000 light-years. Its companion is NGC 876, which is about 103,000 light-years from the core of NGC 877. They are interacting gravitationally, as they are connected by a faint stream of gas and dust. Arp 276 is a different pair of interacting galaxies in Aries, consisting of NGC 935 and IC 1801. NGC 821 is an E6 elliptical galaxy. It is unusual because it has hints of an early spiral structure, which is normally only found in lenticular and spiral galaxies. NGC 821 is 2.6 by 2.0 arcminutes and has a visual magnitude of 11.3. Its diameter is 61,000 light-years and it is 80 million light-years away. Another unusual galaxy in Aries is Segue 2. Segue 2 is a dwarf galaxy that is a satellite galaxy of the Milky Way, recently discovered to be a potential relic of the epoch of reionization. Aries is home to several meteor showers. The Daytime Arietid meteor shower is one of the strongest meteor showers that occurs during the day, lasting from 22 May to 2 July. It is an annual shower associated with the Marsden group of comets that peaks on 7 June with a maximum zenithal hourly rate of 54 meteors. Its parent body may be the asteroid Icarus. The meteors are sometimes visible before dawn, because the radiant is 32 degrees away from the Sun. They usually appear at a rate of 1–2 per hour as "earthgrazers", meteors that last several seconds and often begin at the horizon. Because most of the Daytime Arietids are not visible to the naked eye, they are observed in the radio spectrum. This is possible because of the ionized gas they leave in their wake. Other meteor showers radiate from Aries during the day; these include the Daytime Epsilon Arietids and the Northern and Southern Daytime May Arietids. The Jodrell Bank Observatory discovered the Daytime Arietids in 1947 when James Hey and G. S. Stewart adapted the World War II-era radar systems for meteor observations. The Delta Arietids are another meteor shower radiating from Aries. 
Peaking on 9 December with a low peak rate, the shower lasts from 8 December to 14 January, with the highest rates visible from 8 to 14 December. The average Delta Arietid meteor is very slow. However, this shower sometimes produces bright fireballs. This meteor shower has northern and southern components, both of which are likely associated with 1990 HA, a near-Earth asteroid. The Autumn Arietids also radiate from Aries. The shower lasts from 7 September to 27 October and peaks on 9 October. Its peak rate is low. The Epsilon Arietids appear from 12 to 23 October. Other meteor showers radiating from Aries include the October Delta Arietids, Daytime Epsilon Arietids, Daytime May Arietids, Sigma Arietids, Nu Arietids, and Beta Arietids. The Sigma Arietids, a class IV meteor shower, are visible from 12 to 19 October, with a maximum zenithal hourly rate of less than two meteors per hour on 19 October. Aries contains several stars with extrasolar planets. HIP 14810, a G5 type star, is orbited by three giant planets (those more than ten times the mass of Earth). HD 12661, like HIP 14810, is a G-type main sequence star, slightly larger than the Sun, with two orbiting planets. One planet is 2.3 times the mass of Jupiter, and the other is 1.57 times the mass of Jupiter. HD 20367 is a G0 type star, approximately the size of the Sun, with one orbiting planet. The planet, discovered in 2002, has a mass 1.07 times that of Jupiter and orbits every 500 days. Ancient Egypt Ancient Egypt was a civilization of ancient North Africa, concentrated along the lower reaches of the Nile River in the place that is now the country Egypt. Ancient Egyptian civilization followed prehistoric Egypt and coalesced around 3100 BC (according to conventional Egyptian chronology) with the political unification of Upper and Lower Egypt under Menes (often identified with Narmer). The history of ancient Egypt occurred as a series of stable kingdoms, separated by periods of relative instability known as Intermediate Periods: the Old Kingdom of the Early Bronze Age, the Middle Kingdom of the Middle Bronze Age and the New Kingdom of the Late Bronze Age. Egypt reached the pinnacle of its power in the New Kingdom, ruling much of Nubia and a sizable portion of the Near East, after which it entered a period of slow decline. During the course of its history Egypt was invaded or conquered by a number of foreign powers, including the Hyksos, the Libyans, the Nubians, the Assyrians, the Achaemenid Persians, and the Macedonians under the command of Alexander the Great. The Greek Ptolemaic Kingdom, formed in the aftermath of Alexander's death, ruled Egypt until 30 BC, when, under Cleopatra, it fell to the Roman Empire and became a Roman province. The success of ancient Egyptian civilization came partly from its ability to adapt to the conditions of the Nile River valley for agriculture. The predictable flooding and controlled irrigation of the fertile valley produced surplus crops, which supported a denser population, and social development and culture. With resources to spare, the administration sponsored mineral exploitation of the valley and surrounding desert regions, the early development of an independent writing system, the organization of collective construction and agricultural projects, trade with surrounding regions, and a military intended to defeat foreign enemies and assert Egyptian dominance. 
Motivating and organizing these activities was a bureaucracy of elite scribes, religious leaders, and administrators under the control of a pharaoh, who ensured the cooperation and unity of the Egyptian people in the context of an elaborate system of religious beliefs. The many achievements of the ancient Egyptians include the quarrying, surveying and construction techniques that supported the building of monumental pyramids, temples, and obelisks; a system of mathematics, a practical and effective system of medicine, irrigation systems and agricultural production techniques, the first known planked boats, Egyptian faience and glass technology, new forms of literature, and the earliest known peace treaty, made with the Hittites. Ancient Egypt has left a lasting legacy. Its art and architecture were widely copied, and its antiquities carried off to far corners of the world. Its monumental ruins have inspired the imaginations of travelers and writers for centuries. A new-found respect for antiquities and excavations in the early modern period by Europeans and Egyptians led to the scientific investigation of Egyptian civilization and a greater appreciation of its cultural legacy. The Nile has been the lifeline of its region for much of human history. The fertile floodplain of the Nile gave humans the opportunity to develop a settled agricultural economy and a more sophisticated, centralized society that became a cornerstone in the history of human civilization. Nomadic modern human hunter-gatherers began living in the Nile valley through the end of the Middle Pleistocene some 120,000 years ago. By the late Paleolithic period, the arid climate of Northern Africa became increasingly hot and dry, forcing the populations of the area to concentrate along the river region. In Predynastic and Early Dynastic times, the Egyptian climate was much less arid than it is today. Large regions of Egypt were covered in treed savanna and traversed by herds of grazing ungulates. Foliage and fauna were far more prolific in all environs and the Nile region supported large populations of waterfowl. Hunting would have been common for Egyptians, and this is also the period when many animals were first domesticated. By about 5500 BC, small tribes living in the Nile valley had developed into a series of cultures demonstrating firm control of agriculture and animal husbandry, and identifiable by their pottery and personal items, such as combs, bracelets, and beads. The largest of these early cultures in upper (Southern) Egypt was the Badari, which probably originated in the Western Desert; it was known for its high quality ceramics, stone tools, and its use of copper. The Badari was followed by the Amratian (Naqada I) and Gerzeh (Naqada II) cultures, which brought a number of technological improvements. As early as the Naqada I Period, predynastic Egyptians imported obsidian from Ethiopia, used to shape blades and other objects from flakes. In Naqada II times, early evidence exists of contact with the Near East, particularly Canaan and the Byblos coast. Over a period of about 1,000 years, the Naqada culture developed from a few small farming communities into a powerful civilization whose leaders were in complete control of the people and resources of the Nile valley. Establishing a power center at Nekhen (in Greek, Hierakonpolis), and later at Abydos, Naqada III leaders expanded their control of Egypt northwards along the Nile. 
They also traded with Nubia to the south, the oases of the western desert to the west, and the cultures of the eastern Mediterranean and Near East to the east. The Naqada culture manufactured a diverse selection of material goods, reflective of the increasing power and wealth of the elite, as well as societal personal-use items, which included combs, small statuary, painted pottery, high quality decorative stone vases, cosmetic palettes, and jewelry made of gold, lapis, and ivory. They also developed a ceramic glaze known as faience, which was used well into the Roman Period to decorate cups, amulets, and figurines. During the last predynastic phase, the Naqada culture began using written symbols that eventually were developed into a full system of hieroglyphs for writing the ancient Egyptian language. The Early Dynastic Period was approximately contemporary to the early Sumerian-Akkadian civilisation of Mesopotamia and of ancient Elam. The third-century BC Egyptian priest Manetho grouped the long line of pharaohs from Menes to his own time into 30 dynasties, a system still used today. He began his official history with the king named "Meni" (or "Menes" in Greek) who was believed to have united the two kingdoms of Upper and Lower Egypt. The transition to a unified state happened more gradually than ancient Egyptian writers represented, and there is no contemporary record of Menes. Some scholars now believe, however, that the mythical Menes may have been the pharaoh Narmer, who is depicted wearing royal regalia on the ceremonial "Narmer Palette," in a symbolic act of unification. In the Early Dynastic Period, which began about 3000 BC, the first of the Dynastic pharaohs solidified control over lower Egypt by establishing a capital at Memphis, from which he could control the labour force and agriculture of the fertile delta region, as well as the lucrative and critical trade routes to the Levant. The increasing power and wealth of the pharaohs during the early dynastic period was reflected in their elaborate mastaba tombs and mortuary cult structures at Abydos, which were used to celebrate the deified pharaoh after his death. The strong institution of kingship developed by the pharaohs served to legitimize state control over the land, labour, and resources that were essential to the survival and growth of ancient Egyptian civilization. Major advances in architecture, art, and technology were made during the Old Kingdom, fueled by the increased agricultural productivity and resulting population, made possible by a well-developed central administration. Some of ancient Egypt's crowning achievements, the Giza pyramids and Great Sphinx, were constructed during the Old Kingdom. Under the direction of the vizier, state officials collected taxes, coordinated irrigation projects to improve crop yield, drafted peasants to work on construction projects, and established a justice system to maintain peace and order. With the rising importance of central administration in Egypt a new class of educated scribes and officials arose who were granted estates by the pharaoh in payment for their services. Pharaohs also made land grants to their mortuary cults and local temples, to ensure that these institutions had the resources to worship the pharaoh after his death. Scholars believe that five centuries of these practices slowly eroded the economic vitality of Egypt, and that the economy could no longer afford to support a large centralized administration. 
As the power of the pharaohs diminished, regional governors called nomarchs began to challenge the supremacy of the office of pharaoh. This, coupled with severe droughts between 2200 and 2150 BC, is believed to have caused the country to enter the 140-year period of famine and strife known as the First Intermediate Period. After Egypt's central government collapsed at the end of the Old Kingdom, the administration could no longer support or stabilize the country's economy. Regional governors could not rely on the king for help in times of crisis, and the ensuing food shortages and political disputes escalated into famines and small-scale civil wars. Yet despite difficult problems, local leaders, owing no tribute to the pharaoh, used their new-found independence to establish a thriving culture in the provinces. Once in control of their own resources, the provinces became economically richer—which was demonstrated by larger and better burials among all social classes. In bursts of creativity, provincial artisans adopted and adapted cultural motifs formerly restricted to the royalty of the Old Kingdom, and scribes developed literary styles that expressed the optimism and originality of the period. Free from their loyalties to the pharaoh, local rulers began competing with each other for territorial control and political power. By 2160 BC, rulers in Herakleopolis controlled Lower Egypt in the north, while a rival clan based in Thebes, the Intef family, took control of Upper Egypt in the south. As the Intefs grew in power and expanded their control northward, a clash between the two rival dynasties became inevitable. Around 2055 BC the northern Theban forces under Nebhepetre Mentuhotep II finally defeated the Herakleopolitan rulers, reuniting the Two Lands. They inaugurated a period of economic and cultural renaissance known as the Middle Kingdom. The pharaohs of the Middle Kingdom restored the country's stability and prosperity, thereby stimulating a resurgence of art, literature, and monumental building projects. Mentuhotep II and his Eleventh Dynasty successors ruled from Thebes, but the vizier Amenemhat I, upon assuming the kingship at the beginning of the Twelfth Dynasty around 1985 BC, shifted the nation's capital to the city of Itjtawy, located in Faiyum. From Itjtawy, the pharaohs of the Twelfth Dynasty undertook a far-sighted land reclamation and irrigation scheme to increase agricultural output in the region. Moreover, the military reconquered territory in Nubia that was rich in quarries and gold mines, while laborers built a defensive structure in the Eastern Delta, called the "Walls-of-the-Ruler", to defend against foreign attack. With the pharaohs having secured the country militarily and politically and with vast agricultural and mineral wealth at their disposal, the nation's population, arts, and religion flourished. In contrast to elitist Old Kingdom attitudes towards the gods, the Middle Kingdom displayed an increase in expressions of personal piety. Middle Kingdom literature featured sophisticated themes and characters written in a confident, eloquent style. The relief and portrait sculpture of the period captured subtle, individual details that reached new heights of technical sophistication. The last great ruler of the Middle Kingdom, Amenemhat III, allowed Semitic-speaking Canaanite settlers from the Near East into the Delta region to provide a sufficient labour force for his especially active mining and building campaigns. 
These ambitious building and mining activities, however, combined with severe Nile floods later in his reign, strained the economy and precipitated the slow decline into the Second Intermediate Period during the later Thirteenth and Fourteenth dynasties. During this decline, the Canaanite settlers began to assume greater control of the Delta region, eventually coming to power in Egypt as the Hyksos. Around 1785 BC, as the power of the Middle Kingdom pharaohs weakened, a Western Asian people called the Hyksos, who had already settled in the Delta, seized control of Egypt and established their capital at Avaris, forcing the former central government to retreat to Thebes. The pharaoh was treated as a vassal and expected to pay tribute. The Hyksos ("foreign rulers") retained Egyptian models of government and identified as pharaohs, thereby integrating Egyptian elements into their culture. They and other invaders introduced new tools of warfare into Egypt, most notably the composite bow and the horse-drawn chariot. After retreating south, the native Theban kings found themselves trapped between the Canaanite Hyksos ruling the north and the Hyksos' Nubian allies, the Kushites, to the south. After years of vassalage, Thebes gathered enough strength to challenge the Hyksos in a conflict that lasted more than 30 years, until 1555 BC. The pharaohs Seqenenre Tao II and Kamose were ultimately able to defeat the Nubians to the south of Egypt, but failed to defeat the Hyksos. That task fell to Kamose's successor, Ahmose I, who successfully waged a series of campaigns that permanently eradicated the Hyksos' presence in Egypt. He established a new dynasty and, in the New Kingdom that followed, the military became a central priority for the pharaohs, who sought to expand Egypt's borders and attempted to gain mastery of the Near East. The New Kingdom pharaohs established a period of unprecedented prosperity by securing their borders and strengthening diplomatic ties with their neighbours, including the Mitanni Empire, Assyria, and Canaan. Military campaigns waged under Tuthmosis I and his grandson Tuthmosis III extended the influence of the pharaohs to the largest empire Egypt had ever seen. Between their reigns, Hatshepsut, a queen who established herself as pharaoh, launched many building projects, including restoration of temples damaged by the Hyksos, and sent trading expeditions to Punt and the Sinai. When Tuthmosis III died in 1425 BC, Egypt had an empire extending from Niya in northwest Syria to the Fourth Cataract of the Nile in Nubia, cementing loyalties and opening access to critical imports such as bronze and wood. The New Kingdom pharaohs began a large-scale building campaign to promote the god Amun, whose growing cult was based in Karnak. They also constructed monuments to glorify their own achievements, both real and imagined. The Karnak temple is the largest Egyptian temple ever built. Around 1350 BC, the stability of the New Kingdom was threatened when Amenhotep IV ascended the throne and instituted a series of radical and chaotic reforms. Changing his name to Akhenaten, he touted the previously obscure sun deity Aten as the supreme deity, suppressed the worship of most other deities, and moved the capital to the new city of Akhetaten (modern-day Amarna). He was devoted to his new religion and artistic style. After his death, the cult of the Aten was quickly abandoned and the traditional religious order restored. 
The subsequent pharaohs, Tutankhamun, Ay, and Horemheb, worked to erase all mention of Akhenaten's heresy, now known as the Amarna Period. Around 1279 BC, Ramesses II, also known as Ramesses the Great, ascended the throne, and went on to build more temples, erect more statues and obelisks, and sire more children than any other pharaoh in history. A bold military leader, Ramesses II led his army against the Hittites in the Battle of Kadesh (in modern Syria) and, after fighting to a stalemate, finally agreed to the first recorded peace treaty, around 1258 BC. Egypt's wealth, however, made it a tempting target for invasion, particularly by the Libyan Berbers to the west, and the Sea Peoples, a conjectured confederation of seafarers from the Aegean Sea. Initially, the military was able to repel these invasions, but Egypt eventually lost control of its remaining territories in southern Canaan, much of it falling to the Assyrians. The effects of external threats were exacerbated by internal problems such as corruption, tomb robbery, and civil unrest. After regaining their power, the high priests at the temple of Amun in Thebes accumulated vast tracts of land and wealth, and their expanded power splintered the country during the Third Intermediate Period. Following the death of Ramesses XI in 1078 BC, Smendes assumed authority over the northern part of Egypt, ruling from the city of Tanis. The south was effectively controlled by the High Priests of Amun at Thebes, who recognized Smendes in name only. During this time, Libyans had been settling in the western delta, and chieftains of these settlers began increasing their autonomy. Libyan princes took control of the delta under Shoshenq I in 945 BC, founding the so-called Libyan or Bubastite dynasty that would rule for some 200 years. Shoshenq also gained control of southern Egypt by placing his family members in important priestly positions. Libyan control began to erode as a rival dynasty in the delta arose in Leontopolis, and Kushites threatened from the south. Around 727 BC the Kushite king Piye invaded northward, seizing control of Thebes and eventually the Delta. Egypt's far-reaching prestige declined considerably toward the end of the Third Intermediate Period. Its foreign allies had fallen under the Assyrian sphere of influence, and by 700 BC war between the two states became inevitable. Between 671 and 667 BC the Assyrians began their attack on Egypt. The reigns of both Taharqa and his successor, Tanutamun, were filled with constant conflict with the Assyrians, against whom Egypt enjoyed several victories. Ultimately, the Assyrians pushed the Kushites back into Nubia, occupied Memphis, and sacked the temples of Thebes. The Assyrians left control of Egypt to a series of vassals who became known as the Saite kings of the Twenty-Sixth Dynasty. By 653 BC, the Saite king Psamtik I was able to oust the Assyrians with the help of Greek mercenaries, who were recruited to form Egypt's first navy. Greek influence expanded greatly as the city of Naukratis became the home of Greeks in the delta. The Saite kings based in the new capital of Sais witnessed a brief but spirited resurgence in the economy and culture, but in 525 BC, the powerful Persians, led by Cambyses II, began their conquest of Egypt, eventually capturing the pharaoh Psamtik III at the battle of Pelusium. Cambyses II then assumed the formal title of pharaoh, but ruled Egypt from Iran, leaving Egypt under the control of a satrapy. 
A few successful revolts against the Persians marked the 5th century BC, but Egypt was never able to permanently overthrow the Persians. Following its annexation by Persia, Egypt was joined with Cyprus and Phoenicia in the sixth satrapy of the Achaemenid Persian Empire. This first period of Persian rule over Egypt, also known as the Twenty-Seventh dynasty, ended in 402 BC, when Egypt regained independence under a series of native dynasties. The last of these dynasties, the Thirtieth, proved to be the last native royal house of ancient Egypt, ending with the kingship of Nectanebo II. A brief restoration of Persian rule, sometimes known as the Thirty-First Dynasty, began in 343 BC, but shortly after, in 332 BC, the Persian ruler Mazaces handed Egypt over to Alexander the Great without a fight. In 332 BC, Alexander the Great conquered Egypt with little resistance from the Persians and was welcomed by the Egyptians as a deliverer. The administration established by Alexander's successors, the Macedonian Ptolemaic Kingdom, was based on an Egyptian model and based in the new capital city of Alexandria. The city showcased the power and prestige of Hellenistic rule, and became a seat of learning and culture, centered at the famous Library of Alexandria. The Lighthouse of Alexandria lit the way for the many ships that kept trade flowing through the city—as the Ptolemies made commerce and revenue-generating enterprises, such as papyrus manufacturing, their top priority. Hellenistic culture did not supplant native Egyptian culture, as the Ptolemies supported time-honored traditions in an effort to secure the loyalty of the populace. They built new temples in Egyptian style, supported traditional cults, and portrayed themselves as pharaohs. Some traditions merged, as Greek and Egyptian gods were syncretized into composite deities, such as Serapis, and classical Greek forms of sculpture influenced traditional Egyptian motifs. Despite their efforts to appease the Egyptians, the Ptolemies were challenged by native rebellion, bitter family rivalries, and the powerful mob of Alexandria that formed after the death of Ptolemy IV. In addition, as Rome relied more heavily on imports of grain from Egypt, the Romans took great interest in the political situation in the country. Continued Egyptian revolts, ambitious politicians, and powerful opponents from the Near East made this situation unstable, leading Rome to send forces to secure the country as a province of its empire. Egypt became a province of the Roman Empire in 30 BC, following the defeat of Marc Antony and Ptolemaic Queen Cleopatra VII by Octavian (later Emperor Augustus) in the Battle of Actium. The Romans relied heavily on grain shipments from Egypt, and the Roman army, under the control of a prefect appointed by the Emperor, quelled rebellions, strictly enforced the collection of heavy taxes, and prevented attacks by bandits, which had become a notorious problem during the period. Alexandria became an increasingly important center on the trade route with the orient, as exotic luxuries were in high demand in Rome. Although the Romans had a more hostile attitude than the Greeks towards the Egyptians, some traditions such as mummification and worship of the traditional gods continued. The art of mummy portraiture flourished, and some Roman emperors had themselves depicted as pharaohs, though not to the extent that the Ptolemies had. The former lived outside Egypt and did not perform the ceremonial functions of Egyptian kingship. 
Local administration became Roman in style and closed to native Egyptians. From the mid-first century AD, Christianity took root in Egypt and it was originally seen as another cult that could be accepted. However, it was an uncompromising religion that sought to win converts from Egyptian Religion and Greco-Roman religion and threatened popular religious traditions. This led to the persecution of converts to Christianity, culminating in the great purges of Diocletian starting in 303, but eventually Christianity won out. In 391 the Christian Emperor Theodosius introduced legislation that banned pagan rites and closed temples. Alexandria became the scene of great anti-pagan riots with public and private religious imagery destroyed. As a consequence, Egypt's native religious culture was continually in decline. While the native population certainly continued to speak their language, the ability to read hieroglyphic writing slowly disappeared as the role of the Egyptian temple priests and priestesses diminished. The temples themselves were sometimes converted to churches or abandoned to the desert. In the fourth century, as the Roman Empire divided, Egypt found itself in the Eastern Empire with its capital at Constantinople. In the waning years of the Empire, Egypt fell to the Sassanid Persian army (618–628 AD), was recaptured by the Roman Emperor Heraclius (629–639 AD), and then was finally captured by the Muslim Rashidun army in 639–641 AD, ending Roman rule. The pharaoh was the absolute monarch of the country and, at least in theory, wielded complete control of the land and its resources. The king was the supreme military commander and head of the government, who relied on a bureaucracy of officials to manage his affairs. In charge of the administration was his second in command, the vizier, who acted as the king's representative and coordinated land surveys, the treasury, building projects, the legal system, and the archives. At a regional level, the country was divided into as many as 42 administrative regions called nomes, each governed by a nomarch, who was accountable to the vizier for his jurisdiction. The temples formed the backbone of the economy. Not only were they houses of worship, but they were also responsible for collecting and storing the nation's wealth in a system of granaries and treasuries administered by overseers, who redistributed grain and goods. Much of the economy was centrally organized and strictly controlled. Although the ancient Egyptians did not use coinage until the Late period, they did use a type of money-barter system, with standard sacks of grain and the "deben", a weight of roughly 91 grams of copper or silver, forming a common denominator. Workers were paid in grain; a simple laborer might earn 5½ sacks (200 kg or 400 lb) of grain per month, while a foreman might earn 7½ sacks (250 kg or 550 lb). Prices were fixed across the country and recorded in lists to facilitate trading; for example a shirt cost five copper deben, while a cow cost 140 deben. Grain could be traded for other goods, according to the fixed price list. During the fifth century BC coined money was introduced into Egypt from abroad. At first the coins were used as standardized pieces of precious metal rather than true money, but in the following centuries international traders came to rely on coinage. Egyptian society was highly stratified, and social status was expressly displayed. 
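Because prices were fixed and expressed in deben, simple exchange arithmetic is possible; a minimal sketch using only the example prices quoted above (five deben for a shirt, 140 deben for a cow):

```python
# Fixed prices in copper deben, taken from the examples in the text.
PRICES_DEBEN = {
    "shirt": 5,
    "cow": 140,
}

def exchange_rate(goods_wanted: str, goods_offered: str) -> float:
    """How many units of goods_offered trade for one unit of goods_wanted."""
    return PRICES_DEBEN[goods_wanted] / PRICES_DEBEN[goods_offered]

print(exchange_rate("cow", "shirt"))   # 28.0 shirts per cow at the listed prices
```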
Farmers made up the bulk of the population, but agricultural produce was owned directly by the state, temple, or noble family that owned the land. Farmers were also subject to a labor tax and were required to work on irrigation or construction projects in a corvée system. Artists and craftsmen were of higher status than farmers, but they were also under state control, working in the shops attached to the temples and paid directly from the state treasury. Scribes and officials formed the upper class in ancient Egypt, known as the "white kilt class" in reference to the bleached linen garments that served as a mark of their rank. The upper class prominently displayed their social status in art and literature. Below the nobility were the priests, physicians, and engineers with specialized training in their field. Slavery was known in ancient Egypt, but the extent and prevalence of its practice are unclear. The ancient Egyptians viewed men and women, including people from all social classes except slaves, as essentially equal under the law, and even the lowliest peasant was entitled to petition the vizier and his court for redress. Although slaves were mostly used as indentured servants, they were able to buy and sell their servitude, work their way to freedom or nobility, and were usually treated by doctors in the workplace. Both men and women had the right to own and sell property, make contracts, marry and divorce, receive inheritance, and pursue legal disputes in court. Married couples could own property jointly and protect themselves from divorce by agreeing to marriage contracts, which stipulated the financial obligations of the husband to his wife and children should the marriage end. Compared with their counterparts in ancient Greece, Rome, and even more modern places around the world, ancient Egyptian women had a greater range of personal choices and opportunities for achievement. Women such as Hatshepsut and Cleopatra VII even became pharaohs, while others wielded power as Divine Wives of Amun. Despite these freedoms, ancient Egyptian women did not often take part in official roles in the administration, served only secondary roles in the temples, and were not as likely to be as educated as men. The head of the legal system was officially the pharaoh, who was responsible for enacting laws, delivering justice, and maintaining law and order, a concept the ancient Egyptians referred to as Ma'at. Although no legal codes from ancient Egypt survive, court documents show that Egyptian law was based on a common-sense view of right and wrong that emphasized reaching agreements and resolving conflicts rather than strictly adhering to a complicated set of statutes. Local councils of elders, known as "Kenbet" in the New Kingdom, were responsible for ruling in court cases involving small claims and minor disputes. More serious cases involving murder, major land transactions, and tomb robbery were referred to the "Great Kenbet", over which the vizier or pharaoh presided. Plaintiffs and defendants were expected to represent themselves and were required to swear an oath that they had told the truth. In some cases, the state took on both the role of prosecutor and judge, and it could torture the accused with beatings to obtain a confession and the names of any co-conspirators. Whether the charges were trivial or serious, court scribes documented the complaint, testimony, and verdict of the case for future reference. 
Punishment for minor crimes involved either imposition of fines, beatings, facial mutilation, or exile, depending on the severity of the offense. Serious crimes such as murder and tomb robbery were punished by execution, carried out by decapitation, drowning, or impaling the criminal on a stake. Punishment could also be extended to the criminal's family. Beginning in the New Kingdom, oracles played a major role in the legal system, dispensing justice in both civil and criminal cases. The procedure was to ask the god a "yes" or "no" question concerning the right or wrong of an issue. The god, carried by a number of priests, rendered judgment by choosing one or the other, moving forward or backward, or pointing to one of the answers written on a piece of papyrus or an ostracon. A combination of favorable geographical features contributed to the success of ancient Egyptian culture, the most important of which was the rich fertile soil resulting from annual inundations of the Nile River. The ancient Egyptians were thus able to produce an abundance of food, allowing the population to devote more time and resources to cultural, technological, and artistic pursuits. Land management was crucial in ancient Egypt because taxes were assessed based on the amount of land a person owned. Farming in Egypt was dependent on the cycle of the Nile River. The Egyptians recognized three seasons: "Akhet" (flooding), "Peret" (planting), and "Shemu" (harvesting). The flooding season lasted from June to September, depositing on the river's banks a layer of mineral-rich silt ideal for growing crops. After the floodwaters had receded, the growing season lasted from October to February. Farmers plowed and planted seeds in the fields, which were irrigated with ditches and canals. Egypt received little rainfall, so farmers relied on the Nile to water their crops. From March to May, farmers used sickles to harvest their crops, which were then threshed with a flail to separate the straw from the grain. Winnowing removed the chaff from the grain, and the grain was then ground into flour, brewed to make beer, or stored for later use. The ancient Egyptians cultivated emmer and barley, and several other cereal grains, all of which were used to make the two main food staples of bread and beer. Flax plants, uprooted before they started flowering, were grown for the fibers of their stems. These fibers were split along their length and spun into thread, which was used to weave sheets of linen and to make clothing. Papyrus growing on the banks of the Nile River was used to make paper. Vegetables and fruits were grown in garden plots, close to habitations and on higher ground, and had to be watered by hand. Vegetables included leeks, garlic, melons, squashes, pulses, lettuce, and other crops, in addition to grapes that were made into wine. The Egyptians believed that a balanced relationship between people and animals was an essential element of the cosmic order; thus humans, animals and plants were believed to be members of a single whole. Animals, both domesticated and wild, were therefore a critical source of spirituality, companionship, and sustenance to the ancient Egyptians. Cattle were the most important livestock; the administration collected taxes on livestock in regular censuses, and the size of a herd reflected the prestige and importance of the estate or temple that owned them. In addition to cattle, the ancient Egyptians kept sheep, goats, and pigs. 
Poultry, such as ducks, geese, and pigeons, were captured in nets and bred on farms, where they were force-fed with dough to fatten them. The Nile provided a plentiful source of fish. Bees were also domesticated from at least the Old Kingdom, and provided both honey and wax. The ancient Egyptians used donkeys and oxen as beasts of burden, and they were responsible for plowing the fields and trampling seed into the soil. The slaughter of a fattened ox was also a central part of an offering ritual. Horses were introduced by the Hyksos in the Second Intermediate Period. Camels, although known from the New Kingdom, were not used as beasts of burden until the Late Period. There is also evidence to suggest that elephants were briefly utilized in the Late Period but largely abandoned due to lack of grazing land. Dogs, cats, and monkeys were common family pets, while more exotic pets imported from the heart of Africa, such as Sub-Saharan African lions, were reserved for royalty. Herodotus observed that the Egyptians were the only people to keep their animals with them in their houses. During the Late Period, the worship of the gods in their animal form was extremely popular, such as the cat goddess Bastet and the ibis god Thoth, and these animals were bred in large numbers on farms for the purpose of ritual sacrifice. Egypt is rich in building and decorative stone, copper and lead ores, gold, and semiprecious stones. These natural resources allowed the ancient Egyptians to build monuments, sculpt statues, make tools, and fashion jewelry. Embalmers used salts from the Wadi Natrun for mummification, which also provided the gypsum needed to make plaster. Ore-bearing rock formations were found in distant, inhospitable wadis in the eastern desert and the Sinai, requiring large, state-controlled expeditions to obtain natural resources found there. There were extensive gold mines in Nubia, and one of the first maps known is of a gold mine in this region. The Wadi Hammamat was a notable source of granite, greywacke, and gold. Flint was the first mineral collected and used to make tools, and flint handaxes are the earliest pieces of evidence of habitation in the Nile valley. Nodules of the mineral were carefully flaked to make blades and arrowheads of moderate hardness and durability even after copper was adopted for this purpose. Ancient Egyptians were among the first to use minerals such as sulfur as cosmetic substances. The Egyptians worked deposits of the lead ore galena at Gebel Rosas to make net sinkers, plumb bobs, and small figurines. Copper was the most important metal for toolmaking in ancient Egypt and was smelted in furnaces from malachite ore mined in the Sinai. Workers collected gold by washing the nuggets out of sediment in alluvial deposits, or by the more labor-intensive process of grinding and washing gold-bearing quartzite. Iron deposits found in upper Egypt were utilized in the Late Period. High-quality building stones were abundant in Egypt; the ancient Egyptians quarried limestone all along the Nile valley, granite from Aswan, and basalt and sandstone from the wadis of the eastern desert. Deposits of decorative stones such as porphyry, greywacke, alabaster, and carnelian dotted the eastern desert and were collected even before the First Dynasty. In the Ptolemaic and Roman Periods, miners worked deposits of emeralds in Wadi Sikait and amethyst in Wadi el-Hudi. The ancient Egyptians engaged in trade with their foreign neighbors to obtain rare, exotic goods not found in Egypt. 
In the Predynastic Period, they established trade with Nubia to obtain gold and incense. They also established trade with Palestine, as evidenced by Palestinian-style oil jugs found in the burials of the First Dynasty pharaohs. An Egyptian colony stationed in southern Canaan dates to slightly before the First Dynasty. Narmer had Egyptian pottery produced in Canaan and exported back to Egypt. By the Second Dynasty at latest, ancient Egyptian trade with Byblos yielded a critical source of quality timber not found in Egypt. By the Fifth Dynasty, trade with Punt provided gold, aromatic resins, ebony, ivory, and wild animals such as monkeys and baboons. Egypt relied on trade with Anatolia for essential quantities of tin as well as supplementary supplies of copper, both metals being necessary for the manufacture of bronze. The ancient Egyptians prized the blue stone lapis lazuli, which had to be imported from far-away Afghanistan. Egypt's Mediterranean trade partners also included Greece and Crete, which provided, among other goods, supplies of olive oil. In exchange for its luxury imports and raw materials, Egypt mainly exported grain, gold, linen, and papyrus, in addition to other finished goods including glass and stone objects. The Egyptian language is a northern Afro-Asiatic language closely related to the Berber and Semitic languages. It has the second longest known history of any language (after Sumerian), having been written from c. 3200 BC to the Middle Ages and remaining as a spoken language for longer. The phases of ancient Egyptian are Old Egyptian, Middle Egyptian (Classical Egyptian), Late Egyptian, Demotic and Coptic. Egyptian writings do not show dialect differences before Coptic, but the language was probably spoken in regional dialects around Memphis and later Thebes. Ancient Egyptian was a synthetic language, but it became more analytic later on. Late Egyptian developed prefixal definite and indefinite articles, which replaced the older inflectional suffixes. There was a change from the older verb–subject–object word order to subject–verb–object. The Egyptian hieroglyphic, hieratic, and demotic scripts were eventually replaced by the more phonetic Coptic alphabet. Coptic is still used in the liturgy of the Coptic Orthodox Church, and traces of it are found in modern Egyptian Arabic. Ancient Egyptian has 25 consonants similar to those of other Afro-Asiatic languages. These include pharyngeal and emphatic consonants, voiced and voiceless stops, voiceless fricatives and voiced and voiceless affricates. It has three long and three short vowels, which expanded in Late Egyptian to about nine. The basic word in Egyptian, similar to Semitic and Berber, is a triliteral or biliteral root of consonants and semiconsonants. Suffixes are added to form words. The verb conjugation corresponds to the person. For example, the triconsonantal skeleton sḏm is the semantic core of the word 'hear'; its basic conjugation is sḏm.f, 'he hears'. If the subject is a noun, suffixes are not added to the verb: sḏm ḥmt, 'the woman hears'. Adjectives are derived from nouns through a process that Egyptologists call "nisbation" because of its similarity with Arabic. The word order is predicate–subject in verbal and adjectival sentences, and subject–predicate in nominal and adverbial sentences. The subject can be moved to the beginning of sentences if it is long and is followed by a resumptive pronoun. Verbs and nouns are negated by the particle "n", but "nn" is used for adverbial and adjectival sentences. 
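The suffix-conjugation pattern described above can be sketched in a few lines of Python (an illustrative toy written for this edition; the ASCII transliterations sdm and hmt stand in for the Egyptological sḏm 'hear' and ḥmt 'woman', and the pronoun suffixes shown are the conventional textbook set rather than forms listed in this article):

    # Toy model of Egyptian suffix conjugation: a verbal root is a consonantal
    # skeleton, and a pronominal subject is written as a suffix on the verb.
    ROOT = "sdm"  # ASCII stand-in for the root sdm, "hear"

    # A few conventional pronoun suffixes (ASCII approximations, illustrative only).
    SUFFIX_PRONOUNS = {
        "I": ".i",
        "you (masc.)": ".k",
        "he": ".f",
        "she": ".s",
        "we": ".n",
    }

    def conjugate(root, person):
        """Suffix-conjugated form, e.g. sdm + .f -> sdm.f, 'he hears'."""
        return root + SUFFIX_PRONOUNS[person]

    def with_noun_subject(root, noun):
        """With a noun subject no suffix is added; the noun follows the verb."""
        return root + " " + noun

    print(conjugate(ROOT, "he"))           # sdm.f  -> 'he hears'
    print(with_noun_subject(ROOT, "hmt"))  # sdm hmt -> 'the woman hears'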
Stress falls on the ultimate or penultimate syllable, which can be open (CV) or closed (CVC). Hieroglyphic writing dates from c. 3000 BC, and is composed of hundreds of symbols. A hieroglyph can represent a word, a sound, or a silent determinative; and the same symbol can serve different purposes in different contexts. Hieroglyphs were a formal script, used on stone monuments and in tombs, that could be as detailed as individual works of art. In day-to-day writing, scribes used a cursive form of writing, called hieratic, which was quicker and easier. While formal hieroglyphs may be read in rows or columns in either direction (though typically written from right to left), hieratic was always written from right to left, usually in horizontal rows. A new form of writing, Demotic, became the prevalent writing style, and it is this form of writing—along with formal hieroglyphs—that accompany the Greek text on the Rosetta Stone. Around the first century AD, the Coptic alphabet started to be used alongside the Demotic script. Coptic is a modified Greek alphabet with the addition of some Demotic signs. Although formal hieroglyphs were used in a ceremonial role until the fourth century, towards the end only a small handful of priests could still read them. As the traditional religious establishments were disbanded, knowledge of hieroglyphic writing was mostly lost. Attempts to decipher them date to the Byzantine and Islamic periods in Egypt, but only in the 1820s, after the discovery of the Rosetta Stone and years of research by Thomas Young and Jean-François Champollion, were hieroglyphs substantially deciphered. Writing first appeared in association with kingship on labels and tags for items found in royal tombs. It was primarily an occupation of the scribes, who worked out of the "Per Ankh" institution or the House of Life. The latter comprised offices, libraries (called House of Books), laboratories and observatories. Some of the best-known pieces of ancient Egyptian literature, such as the Pyramid and Coffin Texts, were written in Classical Egyptian, which continued to be the language of writing until about 1300 BC. Late Egyptian was spoken from the New Kingdom onward and is represented in Ramesside administrative documents, love poetry and tales, as well as in Demotic and Coptic texts. During this period, the tradition of writing had evolved into the tomb autobiography, such as those of Harkhuf and Weni. The genre known as "Sebayt" ("instructions") was developed to communicate teachings and guidance from famous nobles; the Ipuwer papyrus, a poem of lamentations describing natural disasters and social upheaval, is a famous example. The Story of Sinuhe, written in Middle Egyptian, might be the classic of Egyptian literature. Also written at this time was the Westcar Papyrus, a set of stories told to Khufu by his sons relating the marvels performed by priests. The Instruction of Amenemope is considered a masterpiece of Near Eastern literature. Towards the end of the New Kingdom, the vernacular language was more often employed to write popular pieces like the Story of Wenamun and the Instruction of Any. The former tells the story of a noble who is robbed on his way to buy cedar from Lebanon and of his struggle to return to Egypt. From about 700 BC, narrative stories and instructions, such as the popular Instructions of Onchsheshonqy, as well as personal and business documents were written in the demotic script and phase of Egyptian. 
Many stories written in demotic during the Greco-Roman period were set in previous historical eras, when Egypt was an independent nation ruled by great pharaohs such as Ramesses II. Most ancient Egyptians were farmers tied to the land. Their dwellings were restricted to immediate family members, and were constructed of mud-brick designed to remain cool in the heat of the day. Each home had a kitchen with an open roof, which contained a grindstone for milling grain and a small oven for baking the bread. Walls were painted white and could be covered with dyed linen wall hangings. Floors were covered with reed mats, while wooden stools, beds raised from the floor and individual tables comprised the furniture. The ancient Egyptians placed a great value on hygiene and appearance. Most bathed in the Nile and used a pasty soap made from animal fat and chalk. Men shaved their entire bodies for cleanliness; perfumes and aromatic ointments covered bad odors and soothed skin. Clothing was made from simple linen sheets that were bleached white, and both men and women of the upper classes wore wigs, jewelry, and cosmetics. Children went without clothing until maturity, at about age 12, and at this age males were circumcised and had their heads shaved. Mothers were responsible for taking care of the children, while the father provided the family's income. Music and dance were popular entertainments for those who could afford them. Early instruments included flutes and harps, while instruments similar to trumpets, oboes, and pipes developed later and became popular. In the New Kingdom, the Egyptians played on bells, cymbals, tambourines, drums, and imported lutes and lyres from Asia. The sistrum was a rattle-like musical instrument that was especially important in religious ceremonies. The ancient Egyptians enjoyed a variety of leisure activities, including games and music. Senet, a board game where pieces moved according to random chance, was particularly popular from the earliest times; another similar game was mehen, which had a circular gaming board. Juggling and ball games were popular with children, and wrestling is also documented in a tomb at Beni Hasan. The wealthy members of ancient Egyptian society enjoyed hunting and boating as well. The excavation of the workers village of Deir el-Medina has resulted in one of the most thoroughly documented accounts of community life in the ancient world, which spans almost four hundred years. There is no comparable site in which the organization, social interactions, working and living conditions of a community have been studied in such detail. Egyptian cuisine remained remarkably stable over time; indeed, the cuisine of modern Egypt retains some striking similarities to the cuisine of the ancients. The staple diet consisted of bread and beer, supplemented with vegetables such as onions and garlic, and fruit such as dates and figs. Wine and meat were enjoyed by all on feast days while the upper classes indulged on a more regular basis. Fish, meat, and fowl could be salted or dried, and could be cooked in stews or roasted on a grill. The architecture of ancient Egypt includes some of the most famous structures in the world: the Great Pyramids of Giza and the temples at Thebes. Building projects were organized and funded by the state for religious and commemorative purposes, but also to reinforce the wide-ranging power of the pharaoh. 
The ancient Egyptians were skilled builders; using only simple but effective tools and sighting instruments, architects could build large stone structures with great accuracy and precision that is still envied today. The domestic dwellings of elite and ordinary Egyptians alike were constructed from perishable materials such as mud bricks and wood, and have not survived. Peasants lived in simple homes, while the palaces of the elite and the pharaoh were more elaborate structures. A few surviving New Kingdom palaces, such as those in Malkata and Amarna, show richly decorated walls and floors with scenes of people, birds, water pools, deities and geometric designs. Important structures such as temples and tombs that were intended to last forever were constructed of stone instead of mud bricks. The architectural elements used in the world's first large-scale stone building, Djoser's mortuary complex, include post and lintel supports in the papyrus and lotus motif. The earliest preserved ancient Egyptian temples, such as those at Giza, consist of single, enclosed halls with roof slabs supported by columns. In the New Kingdom, architects added the pylon, the open courtyard, and the enclosed hypostyle hall to the front of the temple's sanctuary, a style that was standard until the Greco-Roman period. The earliest and most popular tomb architecture in the Old Kingdom was the mastaba, a flat-roofed rectangular structure of mudbrick or stone built over an underground burial chamber. The step pyramid of Djoser is a series of stone mastabas stacked on top of each other. Pyramids were built during the Old and Middle Kingdoms, but most later rulers abandoned them in favor of less conspicuous rock-cut tombs. The use of the pyramid form continued in private tomb chapels of the New Kingdom and in the royal pyramids of Nubia. The ancient Egyptians produced art to serve functional purposes. For over 3500 years, artists adhered to artistic forms and iconography that were developed during the Old Kingdom, following a strict set of principles that resisted foreign influence and internal change. These artistic standards—simple lines, shapes, and flat areas of color combined with the characteristic flat projection of figures with no indication of spatial depth—created a sense of order and balance within a composition. Images and text were intimately interwoven on tomb and temple walls, coffins, stelae, and even statues. The Narmer Palette, for example, displays figures that can also be read as hieroglyphs. Because of the rigid rules that governed its highly stylized and symbolic appearance, ancient Egyptian art served its political and religious purposes with precision and clarity. Ancient Egyptian artisans used stone as a medium for carving statues and fine reliefs, but used wood as a cheap and easily carved substitute. Paints were obtained from minerals such as iron ores (red and yellow ochres), copper ores (blue and green), soot or charcoal (black), and limestone (white). Paints could be mixed with gum arabic as a binder and pressed into cakes, which could be moistened with water when needed. Pharaohs used reliefs to record victories in battle, royal decrees, and religious scenes. Common citizens had access to pieces of funerary art, such as shabti statues and books of the dead, which they believed would protect them in the afterlife. During the Middle Kingdom, wooden or clay models depicting scenes from everyday life became popular additions to the tomb. 
In an attempt to duplicate the activities of the living in the afterlife, these models show laborers, houses, boats, and even military formations that are scale representations of the ideal ancient Egyptian afterlife. Despite the homogeneity of ancient Egyptian art, the styles of particular times and places sometimes reflected changing cultural or political attitudes. After the invasion of the Hyksos in the Second Intermediate Period, Minoan-style frescoes were found in Avaris. The most striking example of a politically driven change in artistic forms comes from the Amarna period, where figures were radically altered to conform to Akhenaten's revolutionary religious ideas. This style, known as Amarna art, was quickly abandoned after Akhenaten's death and replaced by the traditional forms. Beliefs in the divine and in the afterlife were ingrained in ancient Egyptian civilization from its inception; pharaonic rule was based on the divine right of kings. The Egyptian pantheon was populated by gods who had supernatural powers and were called on for help or protection. However, the gods were not always viewed as benevolent, and Egyptians believed they had to be appeased with offerings and prayers. The structure of this pantheon changed continually as new deities were promoted in the hierarchy, but priests made no effort to organize the diverse and sometimes conflicting myths and stories into a coherent system. These various conceptions of divinity were not considered contradictory but rather layers in the multiple facets of reality. Gods were worshiped in cult temples administered by priests acting on the king's behalf. At the center of the temple was the cult statue in a shrine. Temples were not places of public worship or congregation, and only on select feast days and celebrations was a shrine carrying the statue of the god brought out for public worship. Normally, the god's domain was sealed off from the outside world and was only accessible to temple officials. Common citizens could worship private statues in their homes, and amulets offered protection against the forces of chaos. After the New Kingdom, the pharaoh's role as a spiritual intermediary was de-emphasized as religious customs shifted to direct worship of the gods. As a result, priests developed a system of oracles to communicate the will of the gods directly to the people. The Egyptians believed that every human being was composed of physical and spiritual parts or "aspects". In addition to the body, each person had a "šwt" (shadow), a "ba" (personality or soul), a "ka" (life-force), and a "name". The heart, rather than the brain, was considered the seat of thoughts and emotions. After death, the spiritual aspects were released from the body and could move at will, but they required the physical remains (or a substitute, such as a statue) as a permanent home. The ultimate goal of the deceased was to rejoin his "ka" and "ba" and become one of the "blessed dead", living on as an "akh", or "effective one". For this to happen, the deceased had to be judged worthy in a trial, in which the heart was weighed against a "feather of truth". If deemed worthy, the deceased could continue their existence on earth in spiritual form. The ancient Egyptians maintained an elaborate set of burial customs that they believed were necessary to ensure immortality after death. These customs involved preserving the body by mummification, performing burial ceremonies, and interring with the body goods the deceased would use in the afterlife. 
Before the Old Kingdom, bodies buried in desert pits were naturally preserved by desiccation. The arid, desert conditions were a boon throughout the history of ancient Egypt for burials of the poor, who could not afford the elaborate burial preparations available to the elite. Wealthier Egyptians began to bury their dead in stone tombs and use artificial mummification, which involved removing the internal organs, wrapping the body in linen, and burying it in a rectangular stone sarcophagus or wooden coffin. Beginning in the Fourth Dynasty, some parts were preserved separately in canopic jars. By the New Kingdom, the ancient Egyptians had perfected the art of mummification; the best technique took 70 days and involved removing the internal organs, removing the brain through the nose, and desiccating the body in a mixture of salts called natron. The body was then wrapped in linen with protective amulets inserted between layers and placed in a decorated anthropoid coffin. Mummies of the Late Period were also placed in painted cartonnage mummy cases. Actual preservation practices declined during the Ptolemaic and Roman eras, while greater emphasis was placed on the outer appearance of the mummy, which was decorated. Wealthy Egyptians were buried with larger quantities of luxury items, but all burials, regardless of social status, included goods for the deceased. Funerary texts were often included in the grave, and, beginning in the New Kingdom, so were shabti statues that were believed to perform manual labor for them in the afterlife. Rituals in which the deceased was magically re-animated accompanied burials. After burial, living relatives were expected to occasionally bring food to the tomb and recite prayers on behalf of the deceased. The ancient Egyptian military was responsible for defending Egypt against foreign invasion, and for maintaining Egypt's domination in the ancient Near East. The military protected mining expeditions to the Sinai during the Old Kingdom and fought civil wars during the First and Second Intermediate Periods. The military was responsible for maintaining fortifications along important trade routes, such as those found at the city of Buhen on the way to Nubia. Forts also were constructed to serve as military bases, such as the fortress at Sile, which was a base of operations for expeditions to the Levant. In the New Kingdom, a series of pharaohs used the standing Egyptian army to attack and conquer Kush and parts of the Levant. Typical military equipment included bows and arrows, spears, and round-topped shields made by stretching animal skin over a wooden frame. In the New Kingdom, the military began using chariots that had earlier been introduced by the Hyksos invaders. Weapons and armor continued to improve after the adoption of bronze: shields were now made from solid wood with a bronze buckle, spears were tipped with a bronze point, and the Khopesh was adopted from Asiatic soldiers. The pharaoh was usually depicted in art and literature riding at the head of the army; it has been suggested that at least a few pharaohs, such as Seqenenre Tao II and his sons, did do so. However, it has also been argued that "kings of this period did not personally act as frontline war leaders, fighting alongside their troops." Soldiers were recruited from the general population, but during, and especially after, the New Kingdom, mercenaries from Nubia, Kush, and Libya were hired to fight for Egypt. 
In technology, medicine, and mathematics, ancient Egypt achieved a relatively high standard of productivity and sophistication. Traditional empiricism, as evidenced by the Edwin Smith and Ebers papyri (c. 1600 BC), is first credited to Egypt. The Egyptians created their own alphabet and decimal system. Even before the Old Kingdom, the ancient Egyptians had developed a glassy material known as faience, which they treated as a type of artificial semi-precious stone. Faience is a non-clay ceramic made of silica, small amounts of lime and soda, and a colorant, typically copper. The material was used to make beads, tiles, figurines, and small wares. Several methods can be used to create faience, but typically production involved application of the powdered materials in the form of a paste over a clay core, which was then fired. By a related technique, the ancient Egyptians produced a pigment known as Egyptian Blue, also called blue frit, which is produced by fusing (or sintering) silica, copper, lime, and an alkali such as natron. The product can be ground up and used as a pigment. The ancient Egyptians could fabricate a wide variety of objects from glass with great skill, but it is not clear whether they developed the process independently. It is also unclear whether they made their own raw glass or merely imported pre-made ingots, which they melted and finished. However, they did have technical expertise in making objects, as well as adding trace elements to control the color of the finished glass. A range of colors could be produced, including yellow, red, green, blue, purple, and white, and the glass could be made either transparent or opaque. The medical problems of the ancient Egyptians stemmed directly from their environment. Living and working close to the Nile brought hazards from malaria and debilitating schistosomiasis parasites, which caused liver and intestinal damage. Dangerous wildlife such as crocodiles and hippos were also a common threat. The lifelong labors of farming and building put stress on the spine and joints, and traumatic injuries from construction and warfare all took a significant toll on the body. The grit and sand from stone-ground flour abraded teeth, leaving them susceptible to abscesses (though caries were rare). The diets of the wealthy were rich in sugars, which promoted periodontal disease. Despite the flattering physiques portrayed on tomb walls, the overweight mummies of many of the upper class show the effects of a life of overindulgence. Adult life expectancy was about 35 for men and 30 for women, but reaching adulthood was difficult as about one-third of the population died in infancy. Ancient Egyptian physicians were renowned in the ancient Near East for their healing skills, and some, such as Imhotep, remained famous long after their deaths. Herodotus remarked that there was a high degree of specialization among Egyptian physicians, with some treating only the head or the stomach, while others were eye-doctors and dentists. Training of physicians took place at the "Per Ankh" or "House of Life" institution, most notably those headquartered in Per-Bastet during the New Kingdom and at Abydos and Saïs in the Late Period. Medical papyri show empirical knowledge of anatomy, injuries, and practical treatments. Wounds were treated by bandaging with raw meat, white linen, sutures, nets, pads, and swabs soaked with honey to prevent infection, while opium, thyme, and belladonna were used to relieve pain. 
The earliest records of burn treatment describe burn dressings that use the milk from mothers of male babies. Prayers were made to the goddess Isis. Moldy bread, honey and copper salts were also used to prevent infection from dirt in burns. Garlic and onions were used regularly to promote good health and were thought to relieve asthma symptoms. Ancient Egyptian surgeons stitched wounds, set broken bones, and amputated diseased limbs, but they recognized that some injuries were so serious that they could only make the patient comfortable until death occurred. Early Egyptians knew how to assemble planks of wood into a ship hull and had mastered advanced forms of shipbuilding as early as 3000 BC. The Archaeological Institute of America reports that the oldest planked ships known are the Abydos boats. A group of 14 discovered ships in Abydos were constructed of wooden planks "sewn" together. Discovered by Egyptologist David O'Connor of New York University, woven straps were found to have been used to lash the planks together, and reeds or grass stuffed between the planks helped to seal the seams. Because the ships are all buried together and near a mortuary belonging to Pharaoh Khasekhemwy, originally they were all thought to have belonged to him, but one of the 14 ships dates to 3000 BC, and the associated pottery jars buried with the vessels also suggest earlier dating. The ship dating to 3000 BC was long and is now thought to perhaps have belonged to an earlier pharaoh, perhaps one as early as Hor-Aha. Early Egyptians also knew how to assemble planks of wood with treenails to fasten them together, using pitch for caulking the seams. The "Khufu ship", a vessel sealed into a pit in the Giza pyramid complex at the foot of the Great Pyramid of Giza in the Fourth Dynasty around 2500 BC, is a full-size surviving example that may have filled the symbolic function of a solar barque. Early Egyptians also knew how to fasten the planks of this ship together with mortise and tenon joints. Large seagoing ships are known to have been heavily used by the Egyptians in their trade with the city states of the eastern Mediterranean, especially Byblos (on the coast of modern-day Lebanon), and in several expeditions down the Red Sea to the Land of Punt. In fact one of the earliest Egyptian words for a seagoing ship is a "Byblos Ship", which originally defined a class of Egyptian seagoing ships used on the Byblos run; however, by the end of the Old Kingdom, the term had come to include large seagoing ships, whatever their destination. In 2011 archaeologists from Italy, the United States, and Egypt excavating a dried-up lagoon known as Mersa Gawasis have unearthed traces of an ancient harbor that once launched early voyages like Hatshepsut's Punt expedition onto the open ocean. Some of the site's most evocative evidence for the ancient Egyptians' seafaring prowess include large ship timbers and hundreds of feet of ropes, made from papyrus, coiled in huge bundles. And in 2013 a team of Franco-Egyptian archaeologists discovered what is believed to be the world's oldest port, dating back about 4500 years, from the time of King Cheops on the Red Sea coast near Wadi el-Jarf (about 110 miles south of Suez). In 1977, an ancient north-south canal dating to the Middle Kingdom of Egypt was discovered extending from Lake Timsah to the Ballah Lakes. It was dated to the Middle Kingdom of Egypt by extrapolating dates of ancient sites constructed along its course. 
The earliest attested examples of mathematical calculations date to the predynastic Naqada period, and show a fully developed numeral system. The importance of mathematics to an educated Egyptian is suggested by a New Kingdom fictional letter in which the writer proposes a scholarly competition between himself and another scribe regarding everyday calculation tasks such as accounting of land, labor, and grain. Texts such as the Rhind Mathematical Papyrus and the Moscow Mathematical Papyrus show that the ancient Egyptians could perform the four basic mathematical operations—addition, subtraction, multiplication, and division—use fractions, compute the volumes of boxes and pyramids, and calculate the surface areas of rectangles, triangles, and circles. They understood basic concepts of algebra and geometry, and could solve simple sets of simultaneous equations. Mathematical notation was decimal, and based on hieroglyphic signs for each power of ten up to one million. Each of these could be written as many times as necessary to add up to the desired number; so to write the number eighty or eight hundred, the symbol for ten or one hundred was written eight times, respectively. Because their methods of calculation could not handle most fractions with a numerator greater than one, they had to write fractions as the sum of several unit fractions. For example, they resolved the fraction "two-fifths" into the sum of "one-third" + "one-fifteenth"; standard tables of values facilitated this. Some common fractions, however, were written with a special glyph, such as the sign for two-thirds. Ancient Egyptian mathematicians knew the Pythagorean theorem as an empirical formula. They were aware, for example, that a triangle had a right angle opposite the hypotenuse when its sides were in a 3–4–5 ratio. They were able to estimate the area of a circle by reducing its diameter by one-ninth and squaring the result, a reasonable approximation of the formula πr². The golden ratio seems to be reflected in many Egyptian constructions, including the pyramids, but its use may have been an unintended consequence of the ancient Egyptian practice of combining the use of knotted ropes with an intuitive sense of proportion and harmony. 
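A short Python sketch (written for this edition as a modern illustration, not an ancient procedure; in particular, the greedy decomposition below stands in for the prepared tables the scribes actually used) shows three of the techniques just described: additive decimal numerals, unit-fraction sums, and the rule of reducing a circle's diameter by one-ninth and squaring it.

    from fractions import Fraction
    from math import ceil, pi

    # 1. Additive decimal numerals: one sign per power of ten, repeated as needed.
    #    The letters here are placeholders for the hieroglyphic signs.
    SIGNS = {1000000: "G", 100000: "T", 10000: "F", 1000: "L", 100: "E", 10: "N", 1: "|"}

    def egyptian_numeral(n):
        out = []
        for value, sign in SIGNS.items():
            count, n = divmod(n, value)
            out.append(sign * count)
        return "".join(out)

    # 2. Unit fractions: 2/5 was written as 1/3 + 1/15.  A greedy decomposition
    #    reproduces such sums (the scribes used tables rather than this algorithm).
    def unit_fractions(frac):
        parts = []
        while frac > 0:
            unit = Fraction(1, ceil(frac.denominator / frac.numerator))
            parts.append(unit)
            frac -= unit
        return parts

    # 3. Circle area: reduce the diameter by one-ninth, then square the result.
    def egyptian_circle_area(diameter):
        return (diameter - diameter / 9) ** 2  # implies pi ~ 256/81 ~ 3.16

    print(egyptian_numeral(80))             # the sign for ten written eight times
    print(unit_fractions(Fraction(2, 5)))   # [Fraction(1, 3), Fraction(1, 15)]
    print(egyptian_circle_area(9.0), pi * 4.5 ** 2)  # 64.0 vs about 63.6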
Greek historian Herodotus claimed that ancient Egyptians looked like the people in Colchis (modern-day Georgia), a claim that has been largely discredited as fictional by modern-day scholars. Herodotus wrote: "For the fact is, as I soon came to realise myself, and then heard from others later, that the Colchians are obviously Egyptian. When the notion occurred to me, I asked both the Colchians and the Egyptians about it, and found that the Colchians had better recall of the Egyptians than the Egyptians did of them. Some Egyptians said that they thought the Colchians originated with Sesostris' army, but I myself guessed their Egyptian origin not only because the Colchians are dark-skinned and curly-haired (which does not count for much by itself, because these features are common in others too) but more importantly because Colchians, Egyptians and Ethiopians are the only peoples in the world who practise circumcision and who have always done so." A team led by Johannes Krause managed the first reliable sequencing of the genomes of 90 mummified individuals in 2017. Whilst not conclusive, because of the non-exhaustive time frame and restricted location that the mummies represent, their study nevertheless showed that these ancient Egyptians "closely resembled ancient and modern Near Eastern populations, especially those in the Levant, and had almost no DNA from sub-Saharan Africa. What's more, the genetics of the mummies remained remarkably consistent even as different powers—including Nubians, Greeks, and Romans—conquered the empire." Later, however, something did alter the genomes of Egyptians. Some 15% to 20% of modern Egyptians' DNA reflects sub-Saharan ancestry, but the ancient mummies had only 6–15% sub-Saharan DNA. The culture and monuments of ancient Egypt have left a lasting legacy on the world. The cult of the goddess Isis, for example, became popular in the Roman Empire, as obelisks and other relics were transported back to Rome. The Romans also imported building materials from Egypt to erect Egyptian-style structures. Early historians such as Herodotus, Strabo, and Diodorus Siculus studied and wrote about the land, which Romans came to view as a place of mystery. During the Middle Ages and the Renaissance, Egyptian pagan culture was in decline after the rise of Christianity and later Islam, but interest in Egyptian antiquity continued in the writings of medieval scholars such as Dhul-Nun al-Misri and al-Maqrizi. In the seventeenth and eighteenth centuries, European travelers and tourists brought back antiquities and wrote stories of their journeys, leading to a wave of Egyptomania across Europe. This renewed interest sent collectors to Egypt, who took, purchased, or were given many important antiquities. Although the European colonial occupation of Egypt destroyed a significant portion of the country's historical legacy, some foreigners left more positive marks. Napoleon, for example, arranged the first studies in Egyptology when he brought some 150 scientists and artists to study and document Egypt's natural history, which was published in the "Description de l'Égypte". In the 20th century, the Egyptian Government and archaeologists alike recognized the importance of cultural respect and integrity in excavations. The Supreme Council of Antiquities now approves and oversees all excavations, which are aimed at finding information rather than treasure. The council also supervises museums and monument reconstruction programs designed to preserve the historical legacy of Egypt. Astatine Astatine is a radioactive chemical element with symbol At and atomic number 85. It is the rarest naturally occurring element in the Earth's crust, occurring only as the decay product of various heavier elements. All of astatine's isotopes are short-lived; the most stable is astatine-210, with a half-life of 8.1 hours. A sample of the pure element has never been assembled, because any macroscopic specimen would be immediately vaporized by the heat of its own radioactivity. The bulk properties of astatine are not known with any certainty. Many of them have been estimated based on the element's position on the periodic table as a heavier analog of iodine, and a member of the halogens (the group of elements including fluorine, chlorine, bromine, and iodine). Astatine is likely to have a dark or lustrous appearance and may be a semiconductor or possibly a metal; it probably has a higher melting point than that of iodine. Chemically, several anionic species of astatine are known and most of its compounds resemble those of iodine. 
It also shows some metallic behavior, including being able to form a stable monatomic cation in aqueous solution (unlike the lighter halogens). The first synthesis of the element was in 1940 by Dale R. Corson, Kenneth Ross MacKenzie, and Emilio G. Segrè at the University of California, Berkeley, who named it from the Greek "astatos" (ἄστατος), meaning "unstable". Four isotopes of astatine were subsequently found to be naturally occurring, although much less than one gram is present at any given time in the Earth's crust. Neither the most stable isotope astatine-210, nor the medically useful astatine-211, occurs naturally; they can only be produced synthetically, usually by bombarding bismuth-209 with alpha particles. Astatine is an extremely radioactive element; all its isotopes have short half-lives of 8.1 hours or less, decaying into other astatine isotopes, bismuth, polonium or radon. Most of its isotopes are very unstable with half-lives of one second or less. Of the first 101 elements in the periodic table, only francium is less stable, and all the astatine isotopes more stable than francium are in any case synthetic and do not occur in nature. The bulk properties of astatine are not known with any certainty. Research is limited by its short half-life, which prevents the creation of weighable quantities. A visible piece of astatine would immediately vaporize itself because of the heat generated by its intense radioactivity. It remains to be seen if, with sufficient cooling, a macroscopic quantity of astatine could be deposited as a thin film. Astatine is usually classified as either a nonmetal or a metalloid; metal formation has also been predicted. Most of the physical properties of astatine have been estimated (by interpolation or extrapolation), using theoretically or empirically derived methods. For example, halogens get darker with increasing atomic weight – fluorine is nearly colorless, chlorine is yellow-green, bromine is red-brown, and iodine is dark gray/violet. Astatine is sometimes described as probably being a black solid (assuming it follows this trend), or as having a metallic appearance (if it is a metalloid or a metal). The melting and boiling points of astatine are also expected to follow the trend seen in the halogen series, increasing with atomic number. On this basis they are estimated to be about 575 K and 610 K (302 °C and 337 °C), respectively. Some experimental evidence suggests astatine may have lower melting and boiling points than those implied by the halogen trend. Astatine sublimes less readily than does iodine, having a lower vapor pressure. Even so, half of a given quantity of astatine will vaporize in approximately an hour if put on a clean glass surface at room temperature. The absorption spectrum of astatine in the middle ultraviolet region has lines at 224.401 and 216.225 nm, suggestive of 6p to 7s transitions. The structure of solid astatine is unknown. As an analogue of iodine it may have an orthorhombic crystalline structure composed of diatomic astatine molecules, and be a semiconductor (with a band gap of 0.7 eV). Alternatively, if condensed astatine forms a metallic phase, as has been predicted, it may have a monatomic face-centered cubic structure; in this structure it may well be a superconductor, like the similar high-pressure phase of iodine. Evidence for (or against) the existence of diatomic astatine (At₂) is sparse and inconclusive. Some sources state that it does not exist, or at least has never been observed, while other sources assert or imply its existence. 
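The kind of group-trend estimate described above can be illustrated with a minimal sketch (the melting points of the lighter halogens are standard handbook values; a straight-line fit against period number is only a crude stand-in for the extrapolations actually used in the literature):

    # Crude illustration of estimating an astatine property from the halogen trend.
    # Handbook melting points (K) of F2, Cl2, Br2 and I2, indexed by period number.
    periods = [2, 3, 4, 5]
    melting_K = [53.5, 171.6, 265.8, 386.9]

    n = len(periods)
    mean_x = sum(periods) / n
    mean_y = sum(melting_K) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(periods, melting_K))
    den = sum((x - mean_x) ** 2 for x in periods)
    slope = num / den
    intercept = mean_y - slope * mean_x

    estimate = slope * 6 + intercept  # astatine sits in period 6
    print("Linear extrapolation suggests a melting point near %.0f K" % estimate)
    # The trend points well above iodine's 387 K; the more careful estimates
    # quoted in the literature put the melting point at roughly 575 K.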
Despite this inconclusive evidence, many properties of diatomic astatine have been predicted; for example, estimates exist for its bond length and dissociation energy, and its heat of vaporization (∆Hvap) is put at 54.39 kJ/mol. The latter figure means that astatine may (at least) be metallic in the liquid state on the basis that elements with a heat of vaporization greater than ~42 kJ/mol are metallic when liquid; diatomic iodine, with a value of 41.71 kJ/mol, falls just short of the threshold figure. The chemistry of astatine is "clouded by the extremely low concentrations at which astatine experiments have been conducted, and the possibility of reactions with impurities, walls and filters, or radioactivity by-products, and other unwanted nano-scale interactions." Many of its apparent chemical properties have been observed using tracer studies on extremely dilute astatine solutions, typically less than 10⁻¹⁰ mol·L⁻¹. Some properties – such as anion formation – align with other halogens. Astatine has some metallic characteristics as well, such as plating onto a cathode, coprecipitating with metal sulfides in hydrochloric acid, and forming a stable monatomic cation in aqueous solution. It forms complexes with EDTA, a metal chelating agent, and is capable of acting as a metal in antibody radiolabeling; in some respects astatine in the +1 state is akin to silver in the same state. Most of the organic chemistry of astatine is, however, analogous to that of iodine. Astatine has an electronegativity of 2.2 on the revised Pauling scale – lower than that of iodine (2.66) and the same as hydrogen. In hydrogen astatide (HAt) the negative charge is predicted to be on the hydrogen atom, implying that this compound could be referred to as astatine hydride according to certain nomenclatures. That would be consistent with the electronegativity of astatine on the Allred–Rochow scale (1.9) being less than that of hydrogen (2.2). However, official IUPAC stoichiometric nomenclature is based on an idealized convention of determining the relative electronegativities of the elements by the mere virtue of their position within the periodic table. According to this convention, astatine is handled as though it is more electronegative than hydrogen, irrespective of its true electronegativity. The electron affinity of astatine is predicted to be reduced by one-third because of spin–orbit interactions. Less reactive than iodine, astatine is the least reactive of the halogens, although its compounds have been synthesized in microscopic amounts and studied as intensively as possible before their radioactive disintegration. The reactions involved have been typically tested with dilute solutions of astatine mixed with larger amounts of iodine. Acting as a carrier, the iodine ensures there is sufficient material for laboratory techniques (such as filtration and precipitation) to work. Like iodine, astatine has been shown to adopt odd-numbered oxidation states ranging from −1 to +7. Only a few compounds with metals have been reported, in the form of astatides of sodium, palladium, silver, thallium, and lead. Some characteristic properties of silver and sodium astatide, and the other hypothetical alkali and alkaline earth astatides, have been estimated by extrapolation from other metal halides. The formation of an astatine compound with hydrogen – usually referred to as hydrogen astatide – was noted by the pioneers of astatine chemistry. As mentioned, there are grounds for instead referring to this compound as astatine hydride. 
It is easily oxidized; acidification by dilute nitric acid gives the At⁰ or At⁺ forms, and the subsequent addition of silver(I) may only partially, at best, precipitate astatine as silver(I) astatide (AgAt). Iodine, in contrast, is not oxidized, and precipitates readily as silver(I) iodide. Astatine is known to bind to boron, carbon, and nitrogen. Various boron cage compounds have been prepared with At–B bonds, these being more stable than At–C bonds. Astatine can replace a hydrogen atom in benzene to form astatobenzene C₆H₅At; this may be oxidized to C₆H₅AtCl₂ by chlorine. By treating this compound with an alkaline solution of hypochlorite, C₆H₅AtO₂ can be produced. In the molecules dipyridine-astatine(I) perchlorate [At(C₅H₅N)₂][ClO₄] and the analogous nitrate, the astatine atom is bonded to each nitrogen atom in the two pyridine rings. With oxygen, there is evidence of the species AtO⁻ and AtO⁺ in aqueous solution, formed by the reaction of astatine with an oxidant such as elemental bromine or (in the last case) by sodium persulfate in a solution of perchloric acid. The species previously thought to be AtO₂⁻ has since been determined to be AtO(OH)₂⁻, a hydrolysis product of AtO⁺ (another such hydrolysis product being AtOOH). The well characterized anion AtO₃⁻ can be obtained by, for example, the oxidation of astatine with potassium hypochlorite in a solution of potassium hydroxide. Preparation of lanthanum triastatate La(AtO₃)₃, following the oxidation of astatine by a hot Na₂S₂O₈ solution, has been reported. Further oxidation of AtO₃⁻, such as by xenon difluoride (in a hot alkaline solution) or periodate (in a neutral or alkaline solution), yields the perastatate ion AtO₄⁻; this is only stable in neutral or alkaline solutions. Astatine is also thought to be capable of forming cations in salts with oxyanions such as iodate or dichromate; this is based on the observation that, in acidic solutions, monovalent or intermediate positive states of astatine coprecipitate with the insoluble salts of metal cations such as silver(I) iodate or thallium(I) dichromate. Astatine may form bonds to the other chalcogens; these include sulfur-containing species, a coordination selenourea compound with selenium, and an astatine–tellurium colloid with tellurium. Astatine is known to react with its lighter homologs iodine, bromine, and chlorine in the vapor state; these reactions produce diatomic interhalogen compounds with formulas AtI, AtBr, and AtCl. The first two compounds may also be produced in water – astatine reacts with iodine/iodide solution to form AtI, whereas AtBr requires (aside from astatine) an iodine/iodine monobromide/bromide solution. The excess of iodides or bromides may lead to AtI₂⁻ and AtBr₂⁻ ions, or in a chloride solution, they may produce species like AtCl₂⁻ or AtBrCl⁻ via equilibrium reactions with the chlorides. Oxidation of the element with dichromate (in nitric acid solution) showed that adding chloride turned the astatine into a molecule likely to be either AtCl or AtOCl. Similarly, AtOCl₂⁻ or AtCl₂⁻ may be produced. The polyhalides PdAtI₂, CsAtI₂, TlAtI₂, and PbAtI are known or presumed to have been precipitated. In a plasma ion source mass spectrometer, the ions [AtI]⁺, [AtBr]⁺, and [AtCl]⁺ have been formed by introducing lighter halogen vapors into a helium-filled cell containing astatine, supporting the existence of stable neutral molecules in the plasma ion state. No astatine fluorides have been discovered yet. 
Their absence has been speculatively attributed to the extreme reactivity of such compounds, including the reaction of an initially formed fluoride with the walls of the glass container to form a non-volatile product. Thus, although the synthesis of an astatine fluoride is thought to be possible, it may require a liquid halogen fluoride solvent, as has already been used for the characterization of radon fluoride. In 1869, when Dmitri Mendeleev published his periodic table, the space under iodine was empty; after Niels Bohr established the physical basis of the classification of chemical elements, it was suggested that the fifth halogen belonged there. Before its officially recognized discovery, it was called "eka-iodine" (from Sanskrit "eka" – "one") to imply it was one space under iodine (in the same manner as eka-silicon, eka-boron, and others). Scientists tried to find it in nature; given its extreme rarity, these attempts resulted in several false discoveries. The first claimed discovery of eka-iodine was made by Fred Allison and his associates at the Alabama Polytechnic Institute (now Auburn University) in 1931. The discoverers named element 85 "alabamine", and assigned it the symbol Ab, designations that were used for a few years. In 1934, H. G. MacPherson of University of California, Berkeley disproved Allison's method and the validity of his discovery. There was another claim in 1937, by the chemist Rajendralal De. Working in Dacca in British India (now Dhaka in Bangladesh), he chose the name "dakin" for element 85, which he claimed to have isolated as the thorium series equivalent of radium F (polonium-210) in the radium series. The properties he reported for dakin do not correspond to those of astatine; moreover, astatine is not found in the thorium series, and the true identity of dakin is not known. In 1936, a team of Romanian physicist Horia Hulubei and French physicist Yvette Cauchois claimed to have discovered element 85 via X-ray analysis. In 1939 they published another paper which supported and extended previous data. In 1944, Hulubei published a summary of data he had obtained up to that time, claiming it was supported by the work of other researchers. He chose the name "dor", presumably from the Romanian for "longing" [for peace], as World War II had started five years earlier. As Hulubei was writing in French, a language which does not accommodate the "ine" suffix, dor would likely have been rendered in English as "dorine", had it been adopted. In 1947, Hulubei's claim was effectively rejected by the Austrian chemist Friedrich Paneth, who would later chair the IUPAC committee responsible for recognition of new elements. Even though Hulubei's samples did contain astatine, his means to detect it were too weak, by current standards, to enable correct identification. He had also been involved in an earlier false claim as to the discovery of element 87 (francium) and this is thought to have caused other researchers to downplay his work. In 1940, the Swiss chemist Walter Minder announced the discovery of element 85 as the beta decay product of radium A (polonium-218), choosing the name "helvetium" (from , the Latin name of Switzerland). Karlik and Bernert were unsuccessful in reproducing his experiments, and subsequently attributed Minder's results to contamination of his radon stream (radon-222 is the parent isotope of polonium-218). 
In 1942, Minder, in collaboration with the English scientist Alice Leigh-Smith, announced the discovery of another isotope of element 85, presumed to be the product of thorium A (polonium-216) beta decay. They named this substance "anglo-helvetium", but Karlik and Bernert were again unable to reproduce these results. Later in 1940, Dale R. Corson, Kenneth Ross MacKenzie, and Emilio Segrè isolated the element at the University of California, Berkeley. Instead of searching for the element in nature, the scientists created it by bombarding bismuth-209 with alpha particles in a cyclotron (particle accelerator) to produce, after emission of two neutrons, astatine-211. The discoverers, however, did not immediately suggest a name for the element. The reason for this was that at the time, an element created synthetically in "invisible quantities" that had not yet been discovered in nature was not seen as a completely valid one; in addition, chemists were reluctant to recognize radioactive isotopes as being as legitimate as stable ones. In 1943, astatine was found as a product of two naturally occurring decay chains by Berta Karlik and Traude Bernert, first in the so-called uranium series, and then in the actinium series. (Since then, astatine has also been found in a third decay chain, the neptunium series.) In 1946, Friedrich Paneth called for synthetic elements to be finally recognized, citing, among other reasons, recent confirmation of their natural occurrence, and proposed that the discoverers of the newly discovered, still unnamed elements be the ones to name them. In early 1947, "Nature" published the discoverers' suggestions; a letter from Corson, MacKenzie, and Segrè suggested the name "astatine", from the Greek "astatos" (αστατος) meaning "unstable", because of its propensity for radioactive decay, with the ending "-ine" found in the names of the four previously discovered halogens. The name was also chosen to continue the tradition of the four stable halogens, where the name referred to a property of the element. Corson and his colleagues classified astatine as a metal on the basis of its analytical chemistry. Subsequent investigators reported iodine-like, cationic, or amphoteric behavior. In a 2003 retrospective, Corson wrote that "some of the properties [of astatine] are similar to iodine … it also exhibits metallic properties, more like its metallic neighbors Po and Bi." There are 39 known isotopes of astatine, with atomic masses (mass numbers) of 191–229. Theoretical modeling suggests that 37 more isotopes could exist. No stable or long-lived astatine isotope has been observed, nor is one expected to exist. Astatine's alpha decay energies follow the same trend as for other heavy elements. Lighter astatine isotopes have quite high energies of alpha decay, which become lower as the nuclei become heavier. Astatine-211 has a significantly higher energy than the previous isotope, because it has a nucleus with 126 neutrons, and 126 is a magic number corresponding to a filled neutron shell. Despite having a similar half-life to the previous isotope (8.1 hours for astatine-210 and 7.2 hours for astatine-211), the alpha decay probability is much higher for the latter: 41.81% against only 0.18%. The two following isotopes release even more energy, with astatine-213 releasing the most energy. For this reason, it is the shortest-lived astatine isotope. Even though heavier astatine isotopes release less energy, no long-lived astatine isotope exists, because of the increasing role of beta decay (electron emission).
This decay mode is especially important for astatine; as early as 1950 it was postulated that all isotopes of the element undergo beta decay. Beta decay modes have been found for all astatine isotopes except astatine-213, -214, -215, and -216m. Astatine-210 and lighter isotopes exhibit beta plus decay (positron emission), astatine-216 and heavier isotopes exhibit beta (minus) decay, and astatine-212 decays via both modes, while astatine-211 undergoes electron capture. The most stable isotope is astatine-210, which has a half-life of 8.1 hours. The primary decay mode is beta plus, to the relatively long-lived (in comparison to astatine isotopes) alpha emitter polonium-210. In total, only five isotopes have half-lives exceeding one hour (astatine-207 to -211). The least stable ground state isotope is astatine-213, with a half-life of 125 nanoseconds. It undergoes alpha decay to the extremely long-lived bismuth-209. Astatine has 24 known nuclear isomers, which are nuclei with one or more nucleons (protons or neutrons) in an excited state. A nuclear isomer may also be called a "meta-state", meaning the system has more internal energy than the "ground state" (the state with the lowest possible internal energy), making the former likely to decay into the latter. There may be more than one isomer for each isotope. The most stable of these nuclear isomers is astatine-202m1, which has a half-life of about 3 minutes, longer than those of all the ground states bar those of isotopes 203–211 and 220. The least stable is astatine-214m1; its half-life of 265 nanoseconds is shorter than those of all ground states except that of astatine-213. Astatine is the rarest naturally occurring element. The total amount of astatine in the Earth's crust (quoted mass 2.36 × 10^25 grams) is estimated to be less than one gram at any given time. Any astatine present at the formation of the Earth has long since disappeared; the four naturally occurring isotopes (astatine-215, -217, -218 and -219) are instead continuously produced as a result of the decay of radioactive thorium and uranium ores, and trace quantities of neptunium-237. The landmass of North and South America combined, to a depth of 16 kilometers (10 miles), contains only about one trillion astatine-215 atoms at any given time (around 3.5 × 10^−10 grams). Astatine-217 is produced via the radioactive decay of neptunium-237. Primordial remnants of the latter isotope—due to its relatively short half-life of 2.14 million years—are no longer present on Earth. However, trace amounts occur naturally as a product of transmutation reactions in uranium ores. Astatine-218 was the first astatine isotope discovered in nature. Astatine-219, with a half-life of 56 seconds, is the longest lived of the naturally occurring isotopes. Isotopes of astatine are sometimes not listed as naturally occurring because of misconceptions that there are no such isotopes, or discrepancies in the literature. Astatine-216 has been counted as a naturally occurring isotope but reports of its observation (which were described as doubtful) have not been confirmed. Astatine was first produced by bombarding bismuth-209 with energetic alpha particles, and this is still the major route used to create the relatively long-lived isotopes astatine-209 through astatine-211. Astatine is only produced in minuscule quantities, with modern techniques allowing production runs of up to 6.6 gigabecquerels (about 86 nanograms, or 2.47 × 10^14 atoms).
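As a rough check on the production figures just quoted, the number of astatine-211 atoms corresponding to a given activity can be estimated from the isotope's 7.2-hour half-life using N = A·t½ / ln 2. The short Python sketch below (rounded textbook constants, not data from the source) reproduces the ~2.47 × 10^14 atoms and ~86 nanograms quoted for 6.6 gigabecquerels.

import math

HALF_LIFE_S = 7.2 * 3600             # half-life of astatine-211, in seconds
ACTIVITY_BQ = 6.6e9                  # 6.6 gigabecquerels (decays per second)
ATOM_MASS_KG = 211 * 1.66054e-27     # approximate mass of one At-211 atom

decay_constant = math.log(2) / HALF_LIFE_S   # lambda = ln(2) / t_half
atoms = ACTIVITY_BQ / decay_constant         # N = A / lambda
mass_ng = atoms * ATOM_MASS_KG * 1e12        # kilograms -> nanograms

print(f"{atoms:.2e} atoms, about {mass_ng:.0f} ng")   # ~2.47e14 atoms, ~86 ng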
Synthesis of greater quantities of astatine using this method is constrained by the limited availability of suitable cyclotrons and the prospect of melting the target. Solvent radiolysis due to the cumulative effect of astatine decay is a related problem. With cryogenic technology, microgram quantities of astatine might be able to be generated via proton irradiation of thorium or uranium to yield radon-211, in turn decaying to astatine-211. Contamination with astatine-210 is expected to be a drawback of this method. The most important isotope is astatine-211, the only one in commercial use. To produce the bismuth target, the metal is sputtered onto a gold, copper, or aluminium surface at 50 to 100 milligrams per square centimeter. Bismuth oxide can be used instead; this is forcibly fused with a copper plate. The target is kept under a chemically neutral nitrogen atmosphere, and is cooled with water to prevent premature astatine vaporization. In a particle accelerator, such as a cyclotron, alpha particles are collided with the bismuth. Even though only one bismuth isotope is used (bismuth-209), the reaction may occur in three possible ways, producing astatine-209, astatine-210, or astatine-211. In order to eliminate undesired nuclides, the maximum energy of the particle accelerator is set to a value (optimally 29.17 MeV) above that for the reaction producing astatine-211 (to produce the desired isotope) and below the one producing astatine-210 (to avoid producing other astatine isotopes). Since astatine is the main product of the synthesis, after its formation it must only be separated from the target and any significant contaminants. Several methods are available, "but they generally follow one of two approaches—dry distillation or [wet] acid treatment of the target followed by solvent extraction." The methods summarized below are modern adaptations of older procedures, as reviewed by Kugler and Keller. Pre-1985 techniques more often addressed the elimination of co-produced toxic polonium; this requirement is now mitigated by capping the energy of the cyclotron irradiation beam. In the dry distillation approach, the astatine-containing cyclotron target is heated to a temperature of around 650 °C. The astatine volatilizes and is condensed in (typically) a cold trap. Higher temperatures of up to around 850 °C may increase the yield, at the risk of bismuth contamination from concurrent volatilization. Redistilling the condensate may be required to minimize the presence of bismuth (as bismuth can interfere with astatine labeling reactions). The astatine is recovered from the trap using one or more low-concentration solvents such as sodium hydroxide, methanol or chloroform. Astatine yields of up to around 80% may be achieved. Dry separation is the method most commonly used to produce a chemically useful form of astatine. In the wet approach, the bismuth (or sometimes bismuth trioxide) target is instead dissolved in, for example, concentrated nitric or perchloric acid. The astatine is then extracted using an organic solvent such as butyl or isopropyl ether, or thiosemicarbazide. A separation yield of 93% using nitric acid has been reported, falling to 72% by the time purification procedures were completed (distillation of nitric acid, purging residual nitrogen oxides, and redissolving bismuth nitrate to enable liquid–liquid extraction). Wet methods involve "multiple radioactivity handling steps" and are not well suited for isolating larger quantities of astatine.
They can enable the production of astatine in a specific oxidation state and may have greater applicability in experimental radiochemistry. Newly formed astatine-211 is the subject of ongoing research in nuclear medicine. It must be used quickly as it decays with a half-life of 7.2 hours; this is long enough to permit multistep labeling strategies. Astatine-211 has potential for targeted alpha particle radiotherapy, since it decays either via emission of an alpha particle (to bismuth-207), or via electron capture (to an extremely short-lived nuclide, polonium-211, which undergoes further alpha decay), very quickly reaching its stable granddaughter lead-207. Polonium X-rays emitted as a result of the electron capture branch, in the range of 77–92 keV, enable the tracking of astatine in animals and patients. Although astatine-210 has a slightly longer half-life, it is wholly unsuitable because it usually undergoes beta plus decay to the extremely toxic polonium-210. The principal medicinal difference between astatine-211 and iodine-131 (a radioactive iodine isotope also used in medicine) is that iodine-131 emits high-energy beta particles, and astatine does not. Beta particles have much greater penetrating power through tissues than do the much heavier alpha particles. An average alpha particle released by astatine-211 can travel up to 70 µm through surrounding tissues; an average-energy beta particle emitted by iodine-131 can travel nearly 30 times as far, to about 2 mm. The short half-life and limited penetrating power of alpha radiation through tissues offer advantages in situations where the "tumor burden is low and/or malignant cell populations are located in close proximity to essential normal tissues." Significant morbidity in cell culture models of human cancers has been achieved with from one to ten astatine-211 atoms bound per cell. Several obstacles have been encountered in the development of astatine-based radiopharmaceuticals for cancer treatment. World War II delayed research for close to a decade. Results of early experiments indicated that a cancer-selective carrier would need to be developed and it was not until the 1970s that monoclonal antibodies became available for this purpose. Unlike iodine, astatine shows a tendency to dehalogenate from molecular carriers such as these, particularly at sp3 carbon sites (less so from sp2 sites). Given the toxicity of astatine accumulated and retained in the body, this emphasized the need to ensure it remained attached to its host molecule. While astatine carriers that are slowly metabolized can be assessed for their efficacy, more rapidly metabolized carriers remain a significant obstacle to the evaluation of astatine in nuclear medicine. Mitigating the effects of astatine-induced radiolysis of labeling chemistry and carrier molecules is another area requiring further development. A practical application for astatine as a cancer treatment would potentially be suitable for a "staggering" number of patients; production of astatine in the quantities that would be required remains an issue. Animal studies show that astatine, similarly to iodine, although to a lesser extent, is preferentially concentrated in the thyroid gland. Unlike iodine, astatine also shows a tendency to be taken up by the lungs and spleen, possibly because of in-body oxidation of At− to At+. If administered in the form of a radiocolloid it tends to concentrate in the liver.
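To illustrate why astatine-211 "must be used quickly", a short calculation (a sketch assuming simple exponential decay; not from the source) shows what fraction of a freshly prepared sample has already decayed after various delays, given the 7.2-hour half-life.

HALF_LIFE_H = 7.2  # half-life of astatine-211, in hours

def fraction_decayed(hours):
    """Fraction of At-211 atoms that have decayed after a delay of `hours`."""
    return 1.0 - 0.5 ** (hours / HALF_LIFE_H)

for delay in (1, 7.2, 24):
    print(f"after {delay} h: {fraction_decayed(delay):.0%} decayed")
# after 1 h about 9% is gone; after one half-life (7.2 h) 50%; after a day roughly 90%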
Experiments in rats and monkeys suggest that astatine-211 causes much greater damage to the thyroid gland than does iodine-131, with repetitive injection of the nuclide resulting in necrosis and cell dysplasia within the gland. Early research suggested that injection of astatine into female rodents caused morphological changes in breast tissue; this conclusion remained controversial for many years. General agreement was later reached that this was likely caused by the effect of breast tissue irradiation combined with hormonal changes due to irradiation of the ovaries. Trace amounts of astatine can be handled safely in fume hoods if they are well-aerated; biological uptake of the element must be avoided. Atom An atom is the smallest constituent unit of ordinary matter that has the properties of a chemical element. Every solid, liquid, gas, and plasma is composed of neutral or ionized atoms. Atoms are extremely small; typical sizes are around 100 picometers (a ten-billionth of a meter, in the short scale). Atoms are small enough that attempting to predict their behavior using classical physics – as if they were billiard balls, for example – gives noticeably incorrect predictions due to quantum effects. Through the development of physics, atomic models have incorporated quantum principles to better explain and predict this behavior. Every atom is composed of a nucleus and one or more electrons bound to the nucleus. The nucleus is made of one or more protons and typically a similar number of neutrons. Protons and neutrons are called nucleons. More than 99.94% of an atom's mass is in the nucleus. The protons have a positive electric charge, the electrons have a negative electric charge, and the neutrons have no electric charge. If the number of protons and electrons are equal, that atom is electrically neutral. If an atom has more or fewer electrons than protons, then it has an overall negative or positive charge, respectively, and it is called an ion. The electrons of an atom are attracted to the protons in an atomic nucleus by this electromagnetic force. The protons and neutrons in the nucleus are attracted to each other by a different force, the nuclear force, which is usually stronger than the electromagnetic force repelling the positively charged protons from one another. Under certain circumstances, the repelling electromagnetic force becomes stronger than the nuclear force, and nucleons can be ejected from the nucleus, leaving behind a different element: nuclear decay resulting in nuclear transmutation. The number of protons in the nucleus defines to what chemical element the atom belongs: for example, all copper atoms contain 29 protons. The number of neutrons defines the isotope of the element. The number of electrons influences the magnetic properties of an atom. Atoms can attach to one or more other atoms by chemical bonds to form chemical compounds such as molecules. The ability of atoms to associate and dissociate is responsible for most of the physical changes observed in nature and is the subject of the discipline of chemistry. The idea that matter is made up of discrete units is a very old idea, appearing in many ancient cultures such as Greece and India. The word "atom" was coined by the ancient Greek philosophers Leucippus and his pupil Democritus. However, these ideas were founded in philosophical and theological reasoning rather than evidence and experimentation. As a result, their views on what atoms look like and how they behave were incorrect. 
They also could not convince everybody, so atomism was but one of a number of competing theories on the nature of matter. It was not until the 19th century that the idea was embraced and refined by scientists, when the blossoming science of chemistry produced discoveries that only the concept of atoms could explain. In the early 1800s, John Dalton used the concept of atoms to explain why elements always react in ratios of small whole numbers (the law of multiple proportions). For instance, there are two types of tin oxide: one is 88.1% tin and 11.9% oxygen and the other is 78.7% tin and 21.3% oxygen (tin(II) oxide and tin dioxide respectively). This means that 100 g of tin will combine with either 13.5 g or 27 g of oxygen. 13.5 and 27 form a ratio of 1:2, a ratio of small whole numbers. This common pattern in chemistry suggested to Dalton that elements react in whole number multiples of discrete units—in other words, atoms. In the case of tin oxides, one tin atom will combine with either one or two oxygen atoms. Dalton also believed atomic theory could explain why water absorbs different gases in different proportions. For example, he found that water absorbs carbon dioxide far better than it absorbs nitrogen. Dalton hypothesized this was due to the differences between the masses and configurations of the gases' respective particles, carbon dioxide molecules (CO2) being heavier and larger than nitrogen molecules (N2). In 1827, botanist Robert Brown used a microscope to look at dust grains floating in water and discovered that they moved about erratically, a phenomenon that became known as "Brownian motion". This was thought to be caused by water molecules knocking the grains about. In 1905, Albert Einstein proved the reality of these molecules and their motions by producing the first statistical physics analysis of Brownian motion. French physicist Jean Perrin used Einstein's work to experimentally determine the mass and dimensions of atoms, thereby conclusively verifying Dalton's atomic theory. The physicist J. J. Thomson measured the mass of cathode rays, showing they were made of particles, but were around 1800 times lighter than the lightest atom, hydrogen. Therefore, they were not atoms, but a new particle, the first "subatomic" particle to be discovered, which he originally called "corpuscle" but which was later named "electron", after particles postulated by George Johnstone Stoney in 1874. He also showed they were identical to particles given off by photoelectric and radioactive materials. It was quickly recognized that they are the particles that carry electric currents in metal wires, and carry the negative electric charge within atoms. Thomson was given the 1906 Nobel Prize in Physics for this work. Thus he overturned the belief that atoms are the indivisible, ultimate particles of matter. Thomson also incorrectly postulated that the low mass, negatively charged electrons were distributed throughout the atom in a uniform sea of positive charge. This became known as the plum pudding model. In 1909, Hans Geiger and Ernest Marsden, under the direction of Ernest Rutherford, bombarded a metal foil with alpha particles to observe how they scattered. They expected all the alpha particles to pass straight through with little deflection, because Thomson's model said that the charges in the atom are so diffuse that their electric fields could not affect the alpha particles much.
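The arithmetic behind Dalton's tin-oxide example above can be made explicit. The following sketch (plain Python, using only the mass percentages quoted in the paragraph, not additional data) recovers the 13.5 g and 27 g figures and their 1:2 ratio.

# Oxygen mass that combines with 100 g of tin in each oxide,
# computed from the quoted mass percentages.
oxides = {"tin(II) oxide": (88.1, 11.9), "tin dioxide": (78.7, 21.3)}

oxygen_per_100g_tin = {}
for name, (tin_pct, oxygen_pct) in oxides.items():
    oxygen_per_100g_tin[name] = 100.0 * oxygen_pct / tin_pct

for name, grams in oxygen_per_100g_tin.items():
    print(f"{name}: {grams:.1f} g of oxygen per 100 g of tin")

ratio = oxygen_per_100g_tin["tin dioxide"] / oxygen_per_100g_tin["tin(II) oxide"]
print(f"ratio: {ratio:.2f}")  # close to 2, i.e. a 1:2 ratio of small whole numbers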
However, Geiger and Marsden spotted alpha particles being deflected by angles greater than 90°, which was supposed to be impossible according to Thomson's model. To explain this, Rutherford proposed that the positive charge of the atom is concentrated in a tiny nucleus at the center of the atom. Rutherford compared his findings to firing a 15-inch shell at a sheet of tissue paper and having it come back to hit the person who fired it. While experimenting with the products of radioactive decay, in 1913 radiochemist Frederick Soddy discovered that there appeared to be more than one type of atom at each position on the periodic table. The term isotope was coined by Margaret Todd as a suitable name for different atoms that belong to the same element. J.J. Thomson created a technique for separating atom types through his work on ionized gases, which subsequently led to the discovery of stable isotopes. In 1913 the physicist Niels Bohr proposed a model in which the electrons of an atom were assumed to orbit the nucleus but could only do so in a finite set of orbits, and could jump between these orbits only in discrete changes of energy corresponding to absorption or radiation of a photon. This quantization was used to explain why the electrons' orbits are stable (given that normally, accelerating charges, including those in circular motion, lose kinetic energy which is emitted as electromagnetic radiation; see "synchrotron radiation") and why elements absorb and emit electromagnetic radiation in discrete spectra. Later in the same year Henry Moseley provided additional experimental evidence in favor of Niels Bohr's theory. These results refined Ernest Rutherford's and Antonius Van den Broek's model, which proposed that the atom contains in its nucleus a number of positive nuclear charges that is equal to its (atomic) number in the periodic table. Until these experiments, atomic number was not known to be a physical and experimental quantity. That it is equal to the atomic nuclear charge remains the accepted atomic model today. Chemical bonds between atoms were now explained, by Gilbert Newton Lewis in 1916, as the interactions between their constituent electrons. As the chemical properties of the elements were known to largely repeat themselves according to the periodic law, in 1919 the American chemist Irving Langmuir suggested that this could be explained if the electrons in an atom were connected or clustered in some manner. Groups of electrons were thought to occupy a set of electron shells about the nucleus. The Stern–Gerlach experiment of 1922 provided further evidence of the quantum nature of atomic properties. When a beam of silver atoms was passed through a specially shaped magnetic field, the beam was split in a way correlated with the direction of an atom's angular momentum, or spin. As this spin direction is initially random, the beam would be expected to deflect in a random direction. Instead, the beam was split into two directional components, corresponding to the atomic spin being oriented up or down with respect to the magnetic field. In 1925 Werner Heisenberg published the first consistent mathematical formulation of quantum mechanics (matrix mechanics). One year earlier, in 1924, Louis de Broglie had proposed that all particles behave to an extent like waves and, in 1926, Erwin Schrödinger used this idea to develop a mathematical model of the atom (wave mechanics) that described the electrons as three-dimensional waveforms rather than point particles.
A consequence of using waveforms to describe particles is that it is mathematically impossible to obtain precise values for both the position and momentum of a particle at a given point in time; this became known as the uncertainty principle, formulated by Werner Heisenberg in 1927. In this concept, for a given accuracy in measuring a position one could only obtain a range of probable values for momentum, and vice versa. This model was able to explain observations of atomic behavior that previous models could not, such as certain structural and spectral patterns of atoms larger than hydrogen. Thus, the planetary model of the atom was discarded in favor of one that described atomic orbital zones around the nucleus where a given electron is most likely to be observed. The development of the mass spectrometer allowed the mass of atoms to be measured with increased accuracy. The device uses a magnet to bend the trajectory of a beam of ions, and the amount of deflection is determined by the ratio of an atom's mass to its charge. The chemist Francis William Aston used this instrument to show that isotopes had different masses. The atomic mass of these isotopes varied by integer amounts, called the whole number rule. The explanation for these different isotopes awaited the discovery of the neutron, an uncharged particle with a mass similar to the proton, by the physicist James Chadwick in 1932. Isotopes were then explained as elements with the same number of protons, but different numbers of neutrons within the nucleus. In 1938, the German chemist Otto Hahn, a student of Rutherford, directed neutrons onto uranium atoms expecting to get transuranium elements. Instead, his chemical experiments showed barium as a product. A year later, Lise Meitner and her nephew Otto Frisch verified that Hahn's results were the first experimental evidence of "nuclear fission". In 1944, Hahn received the Nobel Prize in Chemistry. Despite Hahn's efforts, the contributions of Meitner and Frisch were not recognized. In the 1950s, the development of improved particle accelerators and particle detectors allowed scientists to study the impacts of atoms moving at high energies. Neutrons and protons were found to be hadrons, or composites of smaller particles called quarks. The standard model of particle physics was developed that so far has successfully explained the properties of the nucleus in terms of these sub-atomic particles and the forces that govern their interactions. Though the word "atom" originally denoted a particle that cannot be cut into smaller particles, in modern scientific usage the atom is composed of various subatomic particles. The constituent particles of an atom are the electron, the proton and the neutron; all three are fermions. However, the hydrogen-1 atom has no neutrons and the hydron ion has no electrons. The electron is by far the least massive of these particles at 9.11 × 10^−31 kg, with a negative electrical charge and a size that is too small to be measured using available techniques. It was the lightest particle with a positive rest mass measured, until the discovery of neutrino mass. Under ordinary conditions, electrons are bound to the positively charged nucleus by the attraction created from opposite electric charges. If an atom has more or fewer electrons than its atomic number, then it becomes respectively negatively or positively charged as a whole; a charged atom is called an ion. Electrons have been known since the late 19th century, mostly thanks to J.J. Thomson; see history of subatomic physics for details.
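As a rough illustration of the uncertainty principle just described (a sketch with textbook constants, not taken from the source), confining an electron to a region about the size of an atom forces a minimum momentum spread, and hence a kinetic energy on the order of electronvolts, which is the scale of atomic binding energies.

HBAR = 1.0546e-34      # reduced Planck constant, J*s
M_E = 9.109e-31        # electron mass, kg
EV = 1.602e-19         # joules per electronvolt

delta_x = 1e-10        # confinement length of roughly atomic size (0.1 nm)
delta_p = HBAR / (2 * delta_x)           # minimum momentum spread from dx*dp >= hbar/2
kinetic_energy = delta_p**2 / (2 * M_E)  # corresponding kinetic-energy scale

print(f"minimum momentum spread: {delta_p:.2e} kg*m/s")
print(f"energy scale: {kinetic_energy / EV:.2f} eV")   # on the order of 1 eV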
Protons have a positive charge and a mass 1,836 times that of the electron, at 1.6726 × 10^−27 kg. The number of protons in an atom is called its atomic number. Ernest Rutherford (1919) observed that nitrogen under alpha-particle bombardment ejects what appeared to be hydrogen nuclei. By 1920 he had accepted that the hydrogen nucleus is a distinct particle within the atom and named it proton. Neutrons have no electrical charge and have a free mass of 1,839 times the mass of the electron, or 1.6749 × 10^−27 kg, the heaviest of the three constituent particles, although this mass can be reduced by the nuclear binding energy. Neutrons and protons (collectively known as nucleons) have comparable dimensions—on the order of 2.5 × 10^−15 m—although the 'surface' of these particles is not sharply defined. The neutron was discovered in 1932 by the English physicist James Chadwick. In the Standard Model of physics, electrons are truly elementary particles with no internal structure. However, both protons and neutrons are composite particles composed of elementary particles called quarks. There are two types of quarks in atoms, each having a fractional electric charge. Protons are composed of two up quarks (each with charge +2/3) and one down quark (with a charge of −1/3). Neutrons consist of one up quark and two down quarks. This distinction accounts for the difference in mass and charge between the two particles. The quarks are held together by the strong interaction (or strong force), which is mediated by gluons. The protons and neutrons, in turn, are held to each other in the nucleus by the nuclear force, which is a residuum of the strong force that has somewhat different range-properties (see the article on the nuclear force for more). The gluon is a member of the family of gauge bosons, which are elementary particles that mediate physical forces. All the bound protons and neutrons in an atom make up a tiny atomic nucleus, and are collectively called nucleons. The radius of a nucleus is approximately equal to 1.07 × A^(1/3) fm, where "A" is the total number of nucleons. This is much smaller than the radius of the atom, which is on the order of 10^5 fm. The nucleons are bound together by a short-ranged attractive potential called the residual strong force. At distances smaller than 2.5 fm this force is much more powerful than the electrostatic force that causes positively charged protons to repel each other. Atoms of the same element have the same number of protons, called the atomic number. Within a single element, the number of neutrons may vary, determining the isotope of that element. The total number of protons and neutrons determines the nuclide. The number of neutrons relative to the protons determines the stability of the nucleus, with certain isotopes undergoing radioactive decay. The proton, the electron, and the neutron are classified as fermions. Fermions obey the Pauli exclusion principle which prohibits "identical" fermions, such as multiple protons, from occupying the same quantum state at the same time. Thus, every proton in the nucleus must occupy a quantum state different from all other protons, and the same applies to all neutrons of the nucleus and to all electrons of the electron cloud. However, a proton and a neutron are allowed to occupy the same quantum state. For atoms with low atomic numbers, a nucleus that has more neutrons than protons tends to drop to a lower energy state through radioactive decay so that the neutron–proton ratio is closer to one.
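The nuclear-radius rule of thumb quoted above, r ≈ 1.07 × A^(1/3) fm, is easy to evaluate. The sketch below (not from the source) applies it to a few nuclides and contrasts the result with the ~10^5 fm scale of a whole atom.

def nuclear_radius_fm(mass_number):
    """Approximate nuclear radius in femtometres, r ~ 1.07 * A^(1/3) fm."""
    return 1.07 * mass_number ** (1.0 / 3.0)

for name, A in [("hydrogen-1", 1), ("carbon-12", 12), ("lead-208", 208)]:
    r = nuclear_radius_fm(A)
    print(f"{name}: about {r:.1f} fm (an atomic radius is on the order of 1e5 fm)")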
However, as the atomic number increases, a higher proportion of neutrons is required to offset the mutual repulsion of the protons. Thus, there are no stable nuclei with equal proton and neutron numbers above atomic number "Z" = 20 (calcium) and as "Z" increases, the neutron–proton ratio of stable isotopes increases. The stable isotope with the highest neutron–proton ratio is lead-208 (about 1.5). The number of protons and neutrons in the atomic nucleus can be modified, although this can require very high energies because of the strong force. Nuclear fusion occurs when multiple atomic particles join to form a heavier nucleus, such as through the energetic collision of two nuclei. For example, at the core of the Sun protons require energies of 3–10 keV to overcome their mutual repulsion—the Coulomb barrier—and fuse together into a single nucleus. Nuclear fission is the opposite process, causing a nucleus to split into two smaller nuclei—usually through radioactive decay. The nucleus can also be modified through bombardment by high energy subatomic particles or photons. If this modifies the number of protons in a nucleus, the atom changes to a different chemical element. If the mass of the nucleus following a fusion reaction is less than the sum of the masses of the separate particles, then the difference between these two values can be emitted as a type of usable energy (such as a gamma ray, or the kinetic energy of a beta particle), as described by Albert Einstein's mass–energy equivalence formula, "E" = "m"·"c"^2, where "m" is the mass loss and "c" is the speed of light. This deficit is part of the binding energy of the new nucleus, and it is the non-recoverable loss of the energy that causes the fused particles to remain together in a state that requires this energy to separate. The fusion of two nuclei that creates a larger nucleus with an atomic number lower than that of iron or nickel—a total nucleon number of about 60—is usually an exothermic process that releases more energy than is required to bring them together. It is this energy-releasing process that makes nuclear fusion in stars a self-sustaining reaction. For heavier nuclei, the binding energy per nucleon in the nucleus begins to decrease. That means fusion processes producing nuclei that have atomic numbers higher than about 26, and atomic masses higher than about 60, are endothermic. These more massive nuclei cannot undergo an energy-producing fusion reaction that can sustain the hydrostatic equilibrium of a star. The electrons in an atom are attracted to the protons in the nucleus by the electromagnetic force. This force binds the electrons inside an electrostatic potential well surrounding the smaller nucleus, which means that an external source of energy is needed for the electron to escape. The closer an electron is to the nucleus, the greater the attractive force. Hence electrons bound near the center of the potential well require more energy to escape than those at greater separations. Electrons, like other particles, have properties of both a particle and a wave. The electron cloud is a region inside the potential well where each electron forms a type of three-dimensional standing wave—a wave form that does not move relative to the nucleus. This behavior is defined by an atomic orbital, a mathematical function that characterises the probability that an electron appears to be at a particular location when its position is measured.
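A worked example of the mass–energy relation discussed above (a sketch using standard particle masses, not data from the source): the deuterium nucleus is lighter than a free proton plus a free neutron, and converting that mass difference with E = m·c^2 gives a binding energy of roughly 2.2 MeV, the same figure cited in the next paragraph for splitting a deuterium nucleus.

U_TO_KG = 1.66054e-27   # kilograms per atomic mass unit
C = 2.998e8             # speed of light, m/s
MEV = 1.602e-13         # joules per megaelectronvolt

m_proton, m_neutron, m_deuteron = 1.007276, 1.008665, 2.013553  # masses in u

mass_defect_u = (m_proton + m_neutron) - m_deuteron   # mass lost on binding
energy_mev = mass_defect_u * U_TO_KG * C**2 / MEV     # E = m * c^2

print(f"mass defect: {mass_defect_u:.6f} u -> binding energy ~ {energy_mev:.2f} MeV")
# roughly 2.2 MeV, the energy needed to split a deuterium nucleus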
Only a discrete (or quantized) set of these orbitals exists around the nucleus, as other possible wave patterns rapidly decay into a more stable form. Orbitals can have one or more ring or node structures, and differ from each other in size, shape and orientation. Each atomic orbital corresponds to a particular energy level of the electron. The electron can change its state to a higher energy level by absorbing a photon with sufficient energy to boost it into the new quantum state. Likewise, through spontaneous emission, an electron in a higher energy state can drop to a lower energy state while radiating the excess energy as a photon. These characteristic energy values, defined by the differences in the energies of the quantum states, are responsible for atomic spectral lines. The amount of energy needed to remove or add an electron—the electron binding energy—is far less than the binding energy of nucleons. For example, it requires only 13.6 eV to strip a ground-state electron from a hydrogen atom, compared to 2.23 "million" eV for splitting a deuterium nucleus. Atoms are electrically neutral if they have an equal number of protons and electrons. Atoms that have either a deficit or a surplus of electrons are called ions. Electrons that are farthest from the nucleus may be transferred to other nearby atoms or shared between atoms. By this mechanism, atoms are able to bond into molecules and other types of chemical compounds like ionic and covalent network crystals. By definition, any two atoms with an identical number of "protons" in their nuclei belong to the same chemical element. Atoms with equal numbers of protons but a different number of "neutrons" are different isotopes of the same element. For example, all hydrogen atoms contain exactly one proton, but isotopes exist with no neutrons (hydrogen-1, by far the most common form, also called protium), one neutron (deuterium), two neutrons (tritium) and more than two neutrons. The known elements form a set of atomic numbers, from the single-proton element hydrogen up to the 118-proton element oganesson. All known isotopes of elements with atomic numbers greater than 82 are radioactive, although the radioactivity of element 83 (bismuth) is so slight as to be practically negligible. About 339 nuclides occur naturally on Earth, of which 254 (about 75%) have not been observed to decay, and are referred to as "stable isotopes". However, only 90 of these nuclides are stable to all decay, even in theory. Another 164 (bringing the total to 254) have not been observed to decay, even though in theory it is energetically possible. These are also formally classified as "stable". An additional 34 radioactive nuclides have half-lives longer than 80 million years, and are long-lived enough to be present from the birth of the solar system. This collection of 288 nuclides is known as the primordial nuclides. Finally, an additional 51 short-lived nuclides are known to occur naturally, as daughter products of primordial nuclide decay (such as radium from uranium), or else as products of natural energetic processes on Earth, such as cosmic ray bombardment (for example, carbon-14). For 80 of the chemical elements, at least one stable isotope exists. As a rule, there is only a handful of stable isotopes for each of these elements, the average being 3.2 stable isotopes per element. Twenty-six elements have only a single stable isotope, while the largest number of stable isotopes observed for any element is ten, for the element tin.
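To connect the binding energies quoted above with the spectral ideas discussed later, a photon carrying exactly the 13.6 eV hydrogen ionization energy has a wavelength of about 91 nm, in the far ultraviolet; the sketch below (standard constants, not taken from the source) does the conversion, and also shows how much shorter the wavelength is for the 2.23 MeV nuclear case.

H = 6.626e-34        # Planck constant, J*s
C = 2.998e8          # speed of light, m/s
EV = 1.602e-19       # joules per electronvolt

def photon_wavelength_nm(energy_ev):
    """Wavelength (nm) of a photon with the given energy, lambda = h*c / E."""
    return H * C / (energy_ev * EV) * 1e9

print(f"13.6 eV photon: {photon_wavelength_nm(13.6):.0f} nm (far ultraviolet)")
print(f"2.23 MeV photon: {photon_wavelength_nm(2.23e6) * 1e6:.0f} fm (gamma ray)")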
Elements 43, 61, and all elements numbered 83 or higher have no stable isotopes. Stability of isotopes is affected by the ratio of protons to neutrons, and also by the presence of certain "magic numbers" of neutrons or protons that represent closed and filled quantum shells. These quantum shells correspond to a set of energy levels within the shell model of the nucleus; filled shells, such as the filled shell of 50 protons for tin, confer unusual stability on the nuclide. Of the 254 known stable nuclides, only four have both an odd number of protons "and" an odd number of neutrons: hydrogen-2 (deuterium), lithium-6, boron-10 and nitrogen-14. Also, only four naturally occurring, radioactive odd–odd nuclides have a half-life over a billion years: potassium-40, vanadium-50, lanthanum-138 and tantalum-180m. Most odd–odd nuclei are highly unstable with respect to beta decay, because the decay products are even–even, and are therefore more strongly bound, due to nuclear pairing effects. The large majority of an atom's mass comes from the protons and neutrons that make it up. The total number of these particles (called "nucleons") in a given atom is called the mass number. It is a positive integer and dimensionless (instead of having dimension of mass), because it expresses a count. An example of use of a mass number is "carbon-12," which has 12 nucleons (six protons and six neutrons). The actual mass of an atom at rest is often expressed using the unified atomic mass unit (u), also called the dalton (Da). This unit is defined as a twelfth of the mass of a free neutral atom of carbon-12, which is approximately 1.66 × 10^−27 kg. Hydrogen-1 (the lightest isotope of hydrogen, which is also the nuclide with the lowest mass) has an atomic weight of 1.007825 u. The value of this number is called the atomic mass. A given atom has an atomic mass approximately equal (within 1%) to its mass number times the atomic mass unit (for example, the mass of a nitrogen-14 atom is roughly 14 u). However, this number will not be exactly an integer except in the case of carbon-12 (see below). The heaviest stable atom is lead-208, with a mass of about 207.98 u. As even the most massive atoms are far too light to work with directly, chemists instead use the unit of moles. One mole of atoms of any element always has the same number of atoms (about 6.022 × 10^23). This number was chosen so that if an element has an atomic mass of 1 u, a mole of atoms of that element has a mass close to one gram. Because of the definition of the unified atomic mass unit, each carbon-12 atom has an atomic mass of exactly 12 u, and so a mole of carbon-12 atoms weighs exactly 0.012 kg. Atoms lack a well-defined outer boundary, so their dimensions are usually described in terms of an atomic radius. This is a measure of the distance out to which the electron cloud extends from the nucleus. However, this assumes the atom to exhibit a spherical shape, which is only obeyed for atoms in vacuum or free space. Atomic radii may be derived from the distances between two nuclei when the two atoms are joined in a chemical bond. The radius varies with the location of an atom on the atomic chart, the type of chemical bond, the number of neighboring atoms (coordination number) and a quantum mechanical property known as spin. On the periodic table of the elements, atom size tends to increase when moving down columns, but decrease when moving across rows (left to right). Consequently, the smallest atom is helium with a radius of 32 pm, while one of the largest is caesium at 225 pm.
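The relationship between atomic mass units, grams, and moles described above can be checked directly. The sketch below (Avogadro's number and the value of u are standard constants, not figures from the source) confirms that a mole of carbon-12 atoms comes to 12 grams and estimates how many atoms sit in a single gram of carbon-12.

AVOGADRO = 6.022e23        # atoms per mole
U_TO_G = 1.66054e-24       # grams per atomic mass unit

mass_of_mole_g = 12.0 * U_TO_G * AVOGADRO     # mass of one mole of carbon-12 atoms
atoms_per_gram = AVOGADRO / 12.0              # atoms in 1 g of carbon-12

print(f"one mole of carbon-12: {mass_of_mole_g:.2f} g")    # ~12 g
print(f"atoms in 1 g of carbon-12: {atoms_per_gram:.2e}")   # ~5e22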
When subjected to external forces, like electrical fields, the shape of an atom may deviate from spherical symmetry. The deformation depends on the field magnitude and the orbital type of outer shell electrons, as shown by group-theoretical considerations. Aspherical deviations might be elicited for instance in crystals, where large crystal-electrical fields may occur at low-symmetry lattice sites. Significant ellipsoidal deformations have been shown to occur for sulfur ions and chalcogen ions in pyrite-type compounds. Atomic dimensions are thousands of times smaller than the wavelengths of light (400–700 nm) so they cannot be viewed using an optical microscope. However, individual atoms can be observed using a scanning tunneling microscope. To visualize the minuteness of the atom, consider that a typical human hair is about 1 million carbon atoms in width. A single drop of water contains about 2 sextillion (2 × 10^21) atoms of oxygen, and twice the number of hydrogen atoms. A single carat diamond, with a mass of 2 × 10^−4 kg (0.2 g), contains about 10 sextillion (10^22) atoms of carbon. If an apple were magnified to the size of the Earth, then the atoms in the apple would be approximately the size of the original apple. Every element has one or more isotopes that have unstable nuclei that are subject to radioactive decay, causing the nucleus to emit particles or electromagnetic radiation. Radioactivity can occur when the radius of a nucleus is large compared with the radius of the strong force, which only acts over distances on the order of 1 fm. The most common forms of radioactive decay are alpha decay (emission of a helium nucleus), beta decay (emission of an electron or a positron), and gamma decay (emission of a high-energy photon). Other more rare types of radioactive decay include ejection of neutrons or protons or clusters of nucleons from a nucleus, or more than one beta particle. An analog of gamma emission which allows excited nuclei to lose energy in a different way, is internal conversion—a process that produces high-speed electrons that are not beta rays, followed by production of high-energy photons that are not gamma rays. A few large nuclei explode into two or more charged fragments of varying masses plus several neutrons, in a decay called spontaneous nuclear fission. Each radioactive isotope has a characteristic decay time period—the half-life—that is determined by the amount of time needed for half of a sample to decay. This is an exponential decay process that steadily decreases the proportion of the remaining isotope by 50% every half-life. Hence after two half-lives have passed only 25% of the isotope is present, and so forth. Elementary particles possess an intrinsic quantum mechanical property known as spin. This is analogous to the angular momentum of an object that is spinning around its center of mass, although strictly speaking these particles are believed to be point-like and cannot be said to be rotating. Spin is measured in units of the reduced Planck constant (ħ), with electrons, protons and neutrons all having spin ½ ħ, or "spin-½". In an atom, electrons in motion around the nucleus possess orbital angular momentum in addition to their spin, while the nucleus itself possesses angular momentum due to its nuclear spin. The magnetic field produced by an atom—its magnetic moment—is determined by these various forms of angular momentum, just as a rotating charged object classically produces a magnetic field. However, the most dominant contribution comes from electron spin.
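The half-life behaviour described above follows the exponential law N(t) = N0 · (1/2)^(t / t½). A short sketch (not from the source) tabulates the remaining fraction after successive half-lives, reproducing the 50% and 25% figures quoted in the paragraph.

def remaining_fraction(elapsed, half_life):
    """Fraction of a radioactive sample remaining after `elapsed` time units."""
    return 0.5 ** (elapsed / half_life)

half_life = 1.0  # work in units of the half-life itself
for n in range(5):
    print(f"after {n} half-lives: {remaining_fraction(n, half_life):.4f} remaining")
# 1.0, 0.5, 0.25, 0.125, 0.0625 -- the amount halves every half-life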
Due to the nature of electrons to obey the Pauli exclusion principle, in which no two electrons may be found in the same quantum state, bound electrons pair up with each other, with one member of each pair in a spin up state and the other in the opposite, spin down state. Thus these spins cancel each other out, reducing the total magnetic dipole moment to zero in some atoms with an even number of electrons. In ferromagnetic elements such as iron, cobalt and nickel, an odd number of electrons leads to an unpaired electron and a net overall magnetic moment. The orbitals of neighboring atoms overlap and a lower energy state is achieved when the spins of unpaired electrons are aligned with each other, a spontaneous process known as an exchange interaction. When the magnetic moments of ferromagnetic atoms are lined up, the material can produce a measurable macroscopic field. Paramagnetic materials have atoms with magnetic moments that line up in random directions when no magnetic field is present, but the magnetic moments of the individual atoms line up in the presence of a field. The nucleus of an atom will have no spin when it has even numbers of both neutrons and protons, but for other cases of odd numbers, the nucleus may have a spin. Normally nuclei with spin are aligned in random directions because of thermal equilibrium. However, for certain elements (such as xenon-129) it is possible to polarize a significant proportion of the nuclear spin states so that they are aligned in the same direction—a condition called hyperpolarization. This has important applications in magnetic resonance imaging. The potential energy of an electron in an atom is negative; its magnitude is greatest close to the nucleus, and it vanishes, roughly in inverse proportion to the distance, as the distance from the nucleus goes to infinity. In the quantum-mechanical model, a bound electron can only occupy a set of states centered on the nucleus, and each state corresponds to a specific energy level; see the time-independent Schrödinger equation for a theoretical explanation. An energy level can be measured by the amount of energy needed to unbind the electron from the atom, and is usually given in units of electronvolts (eV). The lowest energy state of a bound electron is called the ground state, i.e. a stationary state, while an electron transition to a higher level results in an excited state. The electron's energy rises when "n" increases because the (average) distance to the nucleus increases. The dependence of the energy on the azimuthal quantum number "ℓ" is caused not by the electrostatic potential of the nucleus, but by the interaction between electrons. For an electron to transition between two different states, e.g. from the ground state to the first excited state, it must absorb or emit a photon at an energy matching the difference in energy between those levels, in accordance with the Niels Bohr model; this can be precisely calculated by the Schrödinger equation. Electrons jump between orbitals in a particle-like fashion. For example, if a single photon strikes the electrons, only a single electron changes states in response to the photon; see Electron properties. The energy of an emitted photon is proportional to its frequency, so these specific energy levels appear as distinct bands in the electromagnetic spectrum. Each element has a characteristic spectrum that can depend on the nuclear charge, subshells filled by electrons, the electromagnetic interactions between the electrons and other factors.
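The dependence of electron energy on the quantum number "n" can be illustrated with the hydrogen atom, where the Bohr/Schrödinger result is E_n = −13.6 eV / n^2. The sketch below (a textbook formula, not taken from the source) lists the lowest levels and the photon energy released in a 2 → 1 transition.

def hydrogen_level_ev(n):
    """Energy of hydrogen level n (eV), E_n = -13.6 / n^2 in the Bohr model."""
    return -13.6 / n**2

for n in range(1, 5):
    print(f"n = {n}: {hydrogen_level_ev(n):6.2f} eV")

photon_ev = hydrogen_level_ev(2) - hydrogen_level_ev(1)
print(f"photon emitted in a 2 -> 1 transition: {photon_ev:.1f} eV")  # ~10.2 eV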
When a continuous spectrum of energy is passed through a gas or plasma, some of the photons are absorbed by atoms, causing electrons to change their energy level. Those excited electrons that remain bound to their atom spontaneously emit this energy as a photon, traveling in a random direction, and so drop back to lower energy levels. Thus the atoms behave like a filter that forms a series of dark absorption bands in the energy output. (An observer viewing the atoms from a view that does not include the continuous spectrum in the background, instead sees a series of emission lines from the photons emitted by the atoms.) Spectroscopic measurements of the strength and width of atomic spectral lines allow the composition and physical properties of a substance to be determined. Close examination of the spectral lines reveals that some display a fine structure splitting. This occurs because of spin–orbit coupling, which is an interaction between the spin and motion of the outermost electron. When an atom is in an external magnetic field, spectral lines become split into three or more components; a phenomenon called the Zeeman effect. This is caused by the interaction of the magnetic field with the magnetic moment of the atom and its electrons. Some atoms can have multiple electron configurations with the same energy level, which thus appear as a single spectral line. The interaction of the magnetic field with the atom shifts these electron configurations to slightly different energy levels, resulting in multiple spectral lines. The presence of an external electric field can cause a comparable splitting and shifting of spectral lines by modifying the electron energy levels, a phenomenon called the Stark effect. If a bound electron is in an excited state, an interacting photon with the proper energy can cause stimulated emission of a photon with a matching energy level. For this to occur, the electron must drop to a lower energy state that has an energy difference matching the energy of the interacting photon. The emitted photon and the interacting photon then move off in parallel and with matching phases. That is, the wave patterns of the two photons are synchronized. This physical property is used to make lasers, which can emit a coherent beam of light energy in a narrow frequency band. Valency is the combining power of an element. It is equal to the number of hydrogen atoms that an atom of the element can combine with or displace in forming compounds. The outermost electron shell of an atom in its uncombined state is known as the valence shell, and the electrons in that shell are called valence electrons. The number of valence electrons determines the bonding behavior with other atoms. Atoms tend to chemically react with each other in a manner that fills (or empties) their outer valence shells. For example, a transfer of a single electron between atoms is a useful approximation for bonds that form between atoms with one electron more than a filled shell, and others that are one electron short of a full shell, such as occurs in the compound sodium chloride and other chemical ionic salts. However, many elements display multiple valences, or tendencies to share differing numbers of electrons in different compounds. Thus, chemical bonding between these elements takes many forms of electron-sharing that are more than simple electron transfers. Examples include the element carbon and the organic compounds.
The chemical elements are often displayed in a periodic table that is laid out to display recurring chemical properties, and elements with the same number of valence electrons form a group that is aligned in the same column of the table. (The horizontal rows correspond to the filling of a quantum shell of electrons.) The elements at the far right of the table have their outer shell completely filled with electrons, which results in chemically inert elements known as the noble gases. Quantities of atoms are found in different states of matter that depend on the physical conditions, such as temperature and pressure. By varying the conditions, materials can transition between solids, liquids, gases and plasmas. Within a state, a material can also exist in different allotropes. An example of this is solid carbon, which can exist as graphite or diamond. Gaseous allotropes exist as well, such as dioxygen and ozone. At temperatures close to absolute zero, atoms can form a Bose–Einstein condensate, at which point quantum mechanical effects, which are normally only observed at the atomic scale, become apparent on a macroscopic scale. This super-cooled collection of atoms then behaves as a single super atom, which may allow fundamental checks of quantum mechanical behavior. The scanning tunneling microscope is a device for viewing surfaces at the atomic level. It uses the quantum tunneling phenomenon, which allows particles to pass through a barrier that would normally be insurmountable. Electrons tunnel through the vacuum between two planar metal electrodes, on each of which is an adsorbed atom, providing a tunneling-current density that can be measured. Scanning one atom (taken as the tip) as it moves past the other (the sample) permits plotting of tip displacement versus lateral separation for a constant current. The calculation shows the extent to which scanning-tunneling-microscope images of an individual atom are visible. It confirms that for low bias, the microscope images the space-averaged dimensions of the electron orbitals across closely packed energy levels—the Fermi level local density of states. An atom can be ionized by removing one of its electrons. The electric charge causes the trajectory of an atom to bend when it passes through a magnetic field. The radius by which the trajectory of a moving ion is turned by the magnetic field is determined by the mass of the atom. The mass spectrometer uses this principle to measure the mass-to-charge ratio of ions. If a sample contains multiple isotopes, the mass spectrometer can determine the proportion of each isotope in the sample by measuring the intensity of the different beams of ions. Techniques to vaporize atoms include inductively coupled plasma atomic emission spectroscopy and inductively coupled plasma mass spectrometry, both of which use a plasma to vaporize samples for analysis. A more area-selective method is electron energy loss spectroscopy, which measures the energy loss of an electron beam within a transmission electron microscope when it interacts with a portion of a sample. The atom-probe tomograph has sub-nanometer resolution in 3-D and can chemically identify individual atoms using time-of-flight mass spectrometry. Spectra of excited states can be used to analyze the atomic composition of distant stars. Specific light wavelengths contained in the observed light from stars can be separated out and related to the quantized transitions in free gas atoms. 
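The bending of an ion's path in the mass spectrometer described above follows from balancing the magnetic force against the centripetal force, giving r = m·v / (q·B). The sketch below (illustrative speed and field values only, not figures from the source) shows how two isotopes with the same charge and speed separate in the same field because of their different masses.

E_CHARGE = 1.602e-19     # elementary charge, coulombs
U_TO_KG = 1.66054e-27    # kilograms per atomic mass unit

def bend_radius_m(mass_u, speed, charge=E_CHARGE, field=0.5):
    """Radius of circular motion for an ion in a magnetic field: r = m*v / (q*B)."""
    return mass_u * U_TO_KG * speed / (charge * field)

speed = 1e5  # ion speed in m/s (illustrative)
for isotope, mass in [("carbon-12", 12.0), ("carbon-13", 13.003)]:
    print(f"{isotope}: radius {bend_radius_m(mass, speed):.4f} m")
# the heavier isotope follows a wider arc, so the two beams arrive at different positions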
These colors can be replicated using a gas-discharge lamp containing the same element. Helium was discovered in this way in the spectrum of the Sun 23 years before it was found on Earth. Atoms form about 4% of the total energy density of the observable Universe, with an average density of about 0.25 atoms/m^3. Within a galaxy such as the Milky Way, atoms have a much higher concentration, with the density of matter in the interstellar medium (ISM) ranging from 10^5 to 10^9 atoms/m^3. The Sun is believed to be inside the Local Bubble, a region of highly ionized gas, so the density in the solar neighborhood is only about 10^3 atoms/m^3. Stars form from dense clouds in the ISM, and the evolutionary processes of stars result in the steady enrichment of the ISM with elements more massive than hydrogen and helium. Up to 95% of the Milky Way's atoms are concentrated inside stars and the total mass of atoms forms about 10% of the mass of the galaxy. (The remainder of the mass is an unknown dark matter.) Electrons are thought to have existed in the Universe since the early stages of the Big Bang. Atomic nuclei form in nucleosynthesis reactions. In about three minutes Big Bang nucleosynthesis produced most of the helium, lithium, and deuterium in the Universe, and perhaps some of the beryllium and boron. The ubiquity and stability of atoms rely on their binding energy, which means that an atom has a lower energy than an unbound system of the nucleus and electrons. Where the temperature is much higher than the ionization potential, the matter exists in the form of plasma—a gas of positively charged ions (possibly, bare nuclei) and electrons. When the temperature drops below the ionization potential, atoms become statistically favorable. Atoms (complete with bound electrons) came to dominate over charged particles 380,000 years after the Big Bang—an epoch called recombination, when the expanding Universe cooled enough to allow electrons to become attached to nuclei. Since the Big Bang, which produced no carbon or heavier elements, atomic nuclei have been combined in stars through the process of nuclear fusion to produce more of the element helium, and (via the triple alpha process) the sequence of elements from carbon up to iron; see stellar nucleosynthesis for details. Isotopes such as lithium-6, as well as some beryllium and boron, are generated in space through cosmic ray spallation. This occurs when a high-energy proton strikes an atomic nucleus, causing large numbers of nucleons to be ejected. Elements heavier than iron were produced in supernovae through the r-process and in AGB stars through the s-process, both of which involve the capture of neutrons by atomic nuclei. Elements such as lead formed largely through the radioactive decay of heavier elements. Most of the atoms that make up the Earth and its inhabitants were present in their current form in the nebula that collapsed out of a molecular cloud to form the Solar System. The rest are the result of radioactive decay, and their relative proportion can be used to determine the age of the Earth through radiometric dating. Most of the helium in the crust of the Earth (about 99% of the helium from gas wells, as shown by its lower abundance of helium-3) is a product of alpha decay. There are a few trace atoms on Earth that were not present at the beginning (i.e., not "primordial"), nor are results of radioactive decay. Carbon-14 is continuously generated by cosmic rays in the atmosphere.
Some atoms on Earth have been artificially generated either deliberately or as by-products of nuclear reactors or explosions. Of the transuranic elements—those with atomic numbers greater than 92—only plutonium and neptunium occur naturally on Earth. Transuranic elements have radioactive lifetimes shorter than the current age of the Earth, and thus identifiable quantities of these elements have long since decayed, with the exception of traces of plutonium-244 possibly deposited by cosmic dust. Natural deposits of plutonium and neptunium are produced by neutron capture in uranium ore. The Earth contains approximately 1.33 × 10⁵⁰ atoms. Although small numbers of independent atoms of noble gases exist, such as argon, neon, and helium, 99% of the atmosphere is bound in the form of molecules, including carbon dioxide and diatomic oxygen and nitrogen. At the surface of the Earth, an overwhelming majority of atoms combine to form various compounds, including water, salt, silicates and oxides. Atoms can also combine to create materials that do not consist of discrete molecules, including crystals and liquid or solid metals. This atomic matter forms networked arrangements that lack the particular type of small-scale interrupted order associated with molecular matter. While isotopes with atomic numbers higher than lead (82) are known to be radioactive, an "island of stability" has been proposed for some elements with atomic numbers above 103. These superheavy elements may have a nucleus that is relatively stable against radioactive decay. The most likely candidate for a stable superheavy atom, unbihexium, has 126 protons and 184 neutrons. Each particle of matter has a corresponding antimatter particle with the opposite electrical charge. Thus, the positron is a positively charged antielectron and the antiproton is a negatively charged equivalent of a proton. When a matter and corresponding antimatter particle meet, they annihilate each other. Because of this, along with an imbalance between the number of matter and antimatter particles, the latter are rare in the universe. The first causes of this imbalance are not yet fully understood, although theories of baryogenesis may offer an explanation. As a result, no antimatter atoms have been discovered in nature. However, in 1996 the antimatter counterpart of the hydrogen atom (antihydrogen) was synthesized at the CERN laboratory in Geneva. Other exotic atoms have been created by replacing one of the protons, neutrons or electrons with other particles that have the same charge. For example, an electron can be replaced by a more massive muon, forming a muonic atom. These types of atoms can be used to test the fundamental predictions of physics. Albert Speer Berthold Konrad Hermann Albert Speer (March 19, 1905 – September 1, 1981) was a German architect who was, for most of World War II, Reich Minister of Armaments and War Production for Nazi Germany. Speer was Adolf Hitler's chief architect before assuming ministerial office. As "the Nazi who said sorry", he accepted moral responsibility at the Nuremberg trials and in his memoirs for complicity in crimes of the Nazi regime, while insisting he had been ignorant of the Holocaust. Speer joined the Nazi Party in 1931, launching himself on a political and governmental career which lasted fourteen years. His architectural skills made him increasingly prominent within the Party and he became a member of Hitler's inner circle.
Hitler instructed him to design and construct structures including the Reich Chancellery and the "Zeppelinfeld" stadium in Nuremberg where Party rallies were held. Speer also made plans to reconstruct Berlin on a grand scale, with huge buildings, wide boulevards, and a reorganized transportation system. In February 1942, Hitler appointed him as Reich Minister of Armaments and War Production. After the war, he was tried at Nuremberg and sentenced to 20 years in prison for his role in the Nazi regime, principally for the use of forced labor. Despite repeated attempts to gain early release, he served his full sentence, most of it at Spandau Prison in West Berlin. Following his release in 1966, Speer published two bestselling autobiographical works, "Inside the Third Reich" and "Spandau: The Secret Diaries", detailing his close personal relationship with Hitler, and providing readers and historians with a unique perspective on the workings of the Nazi regime. He wrote a third book, "Infiltration", about the SS. Speer died of a stroke in 1981 while on a visit to London. Speer was born in Mannheim, into an upper-middle-class family. He was the second of three sons of Luise Máthilde Wilhelmine (Hommel) and Albert Friedrich Speer. In 1918, the family moved permanently to their summer home Villa Speer on Schloss-Wolfsbrunnenweg, Heidelberg. According to Henry T. King, deputy prosecutor at Nuremberg who later wrote a book about Speer, "Love and warmth were lacking in the household of Speer's youth." Speer was active in sports, taking up skiing and mountaineering. Speer's Heidelberg school offered rugby football, unusual for Germany, and Speer was a participant. He wanted to become a mathematician, but his father said if Speer chose this occupation he would "lead a life without money, without a position and without a future". Instead, Speer followed in the footsteps of his father and grandfather and studied architecture. Speer began his architectural studies at the University of Karlsruhe instead of a more highly acclaimed institution because the hyperinflation crisis of 1923 limited his parents' income. In 1924, when the crisis had abated, he transferred to the "much more reputable" Technical University of Munich. In 1925 he transferred again, this time to the Technical University of Berlin, where he studied under Heinrich Tessenow, whom Speer greatly admired. After passing his exams in 1927, Speer became Tessenow's assistant, a high honor for a man of 22. As such, Speer taught some of Tessenow's classes while continuing his own postgraduate studies. In Munich, and continuing in Berlin, Speer began a close friendship, ultimately spanning over 50 years, with Rudolf Wolters, who also studied under Tessenow. In mid-1922, Speer began courting Margarete (Margret) Weber (1905–1987), the daughter of a successful craftsman who employed 50 workers. The relationship was frowned upon by Speer's class-conscious mother, who felt that the Webers were socially inferior. Despite this opposition, the two married in Berlin on August 28, 1928; seven years elapsed before Margarete Speer was invited to stay at her in-laws' home. Speer stated he was apolitical when he was a young man, and that he attended a Berlin Nazi rally in December 1930 at the urging of some of his students. On March 1, 1931, he applied to join the Nazi Party and became member number 474,481. In 1931, Speer surrendered his position as Tessenow's assistant and moved to Mannheim. His father gave him a job as manager of the elder Speer's properties.
In July 1932, the Speers visited Berlin to help out the Party prior to the "Reichstag" elections. While they were there, his friend, Nazi Party official Karl Hanke, recommended the young architect to Joseph Goebbels to help renovate the Party's Berlin headquarters. Speer agreed to do the work. When the commission was completed, Speer returned to Mannheim and remained there as Hitler took office in January 1933. The organizers of the 1933 Nuremberg Rally asked Speer to submit designs for the rally, bringing him into contact with Hitler for the first time. Neither the organizers nor Rudolf Hess were willing to decide whether to approve the plans, and Hess sent Speer to Hitler's Munich apartment to seek his approval. This work won Speer his first national post, as Nazi Party "Commissioner for the Artistic and Technical Presentation of Party Rallies and Demonstrations". Shortly after Hitler had come into power, he had started to make plans to rebuild the chancellery. At the end of 1933 he contracted Paul Troost to renovate the entire building. Hitler appointed Speer, whose work for Goebbels had impressed him, to manage the building site for Troost. As Chancellor, Hitler had a residence in the building and came by every day to be briefed by Speer and the building supervisor on the progress of the renovations. After one of these briefings, Hitler invited Speer to lunch, to the architect's great excitement. Hitler evinced considerable interest in Speer during the luncheon, and later told Speer that he had been looking for a young architect capable of carrying out his architectural dreams for the new Germany. Speer quickly became part of Hitler's inner circle; he was expected to call on Hitler in the morning for a walk or chat, to provide consultation on architectural matters, and to discuss Hitler's ideas. Most days he was invited to dinner. The two men found much in common: Hitler spoke of Speer as a "kindred spirit" for whom he had always maintained "the warmest human feelings". The young, ambitious architect was dazzled by his rapid rise and close proximity to Hitler, which guaranteed him a flood of commissions from the government and from the highest ranks of the Party. Speer testified at Nuremberg, "I belonged to a circle which consisted of other artists and his personal staff. If Hitler had had any friends at all, I certainly would have been one of his close friends." When Troost died on January 21, 1934, Speer effectively replaced him as the Party's chief architect. Hitler appointed Speer as head of the Chief Office for Construction, which placed him nominally on Hess's staff. One of Speer's first commissions after Troost's death was the "Zeppelinfeld" stadium—the Nürnberg parade grounds seen in Leni Riefenstahl's propaganda masterpiece "Triumph of the Will". This huge work was able to hold 340,000 people. Speer insisted that as many events as possible be held at night, both to give greater prominence to his lighting effects and to hide the individual Nazis, many of whom were overweight. Speer surrounded the site with 130 anti-aircraft searchlights. Speer described this as his most beautiful work, and as the only one that stood the test of time. Nürnberg was to be the site of many more official Nazi buildings, most of which were never built; for example, the German Stadium would have accommodated 400,000 spectators, while an even larger rally ground would have held half a million people. 
While planning these structures, Speer conceived the concept of "ruin value": that major buildings should be constructed in such a way that they would leave aesthetically pleasing ruins for thousands of years into the future. Such ruins would be a testament to the greatness of Nazi Germany, just as ancient Greek or Roman ruins were symbols of the greatness of those civilizations. When Hitler deprecated Werner March's design for the Olympic Stadium for the 1936 Summer Olympics as too modern, Speer modified the plans by adding a stone exterior. Speer designed the German Pavilion for the 1937 international exposition in Paris. The German and Soviet pavilion sites were opposite each other. On learning (through a clandestine look at the Soviet plans) that the Soviet design included two colossal figures seemingly about to overrun the German site, Speer modified his design to include a cubic mass which would check their advance, with a huge eagle on top looking down on the Soviet figures. Speer received, from Hitler Youth leader and later fellow Spandau prisoner Baldur von Schirach, the Golden Hitler Youth Honor Badge with oak leaves. In 1937, Hitler appointed Speer as General Building Inspector for the Reich Capital, with the rank of undersecretary of state in the Reich government. The position carried with it extraordinary powers over the Berlin city government and made Speer answerable to Hitler alone. It also made Speer a member of the "Reichstag", though the body by then had little effective power. Hitler ordered Speer to develop plans to rebuild Berlin. The plans centered on a three-mile-long grand boulevard running from north to south, which Speer called the "Prachtstrasse", or Street of Magnificence; he also referred to it as the "North-South Axis". At the northern end of the boulevard, Speer planned to build the "Volkshalle", a huge domed assembly hall with floor space for 180,000 people. At the southern end of the avenue a great triumphal arch would rise; it would have been large enough to fit the Arc de Triomphe inside its opening. The outbreak of World War II in 1939 led to the postponement, and later the abandonment, of these plans. Part of the land for the boulevard was to be obtained by consolidating Berlin's railway system. Speer hired Wolters as part of his design team, with special responsibility for the "Prachtstrasse". When Speer's father saw the model for the new Berlin, he said to his son, "You've all gone completely insane." All the while, plans to build a new Reich chancellery had been underway since 1934. Land had been purchased by the end of 1934, and starting in March 1936 the first buildings were demolished to create space at Voßstraße. Speer was involved virtually from the beginning. He had been commissioned to renovate the Borsig Palace on the corner of Voßstraße and Wilhelmstraße as a headquarters for the SA, who were about to be relocated from Munich to Berlin in the aftermath of the Röhm purge, and he completed the preliminary work for the new chancellery by May 1936. In June 1936 he charged a personal honorarium of 30,000 Reichsmark and estimated that the chancellery would be completed within three to four years. Detailed plans were completed in July 1937 and the first shell of the new chancellery was complete on 1 January 1938. On 27 January 1938 Speer received plenipotentiary powers from Hitler to finish the new chancellery by 1 January 1939.
Yet for propagandistic reasons, to prove the vigor and organizational skills of National Socialism, Hitler claimed during the topping-out ceremony on 2 August 1938 that he had ordered Speer to build the new chancellery just that year. Speer reiterated this claim in his memoirs to show that he had been up to that supposed challenge, and some of his biographers, most notably Joachim Fest, have followed that account. The building itself, hailed by Hitler as the "crowning glory of the greater German political empire", was designed as a theatrical set for representation, "to intimidate and humiliate", as historian Martin Kitchen puts it. Because of shortages of labor, the construction workers had to work in two ten- to twelve-hour shifts to have the chancellery completed by early January 1939. During the war the chancellery was destroyed, except for the exterior walls, by air raids and in the Battle of Berlin in 1945. It was eventually dismantled by the Soviets. Rumor has it that the remains were used for other building projects such as the Humboldt University, Mohrenstraße metro station or Soviet war memorials in Berlin, but none of these rumors is true. During the Chancellery project, the pogrom of Kristallnacht took place. Speer made no mention of it in the first draft of "Inside the Third Reich", and it was only on the urgent advice of his publisher that he added a mention of seeing the ruins of the Central Synagogue in Berlin from his car. Speer was under significant psychological pressure during this period of his life, as he later remembered. Speer supported the German invasion of Poland and subsequent war, though he recognized that it would lead to the postponement, at the least, of his architectural dreams. In his later years, Speer, talking with his biographer-to-be Gitta Sereny, explained how he felt in 1939: "Of course I was perfectly aware that [Hitler] sought world domination ...[A]t that time I asked for nothing better. That was the whole point of my buildings. They would have looked grotesque if Hitler had sat still in Germany. All I "wanted" was for this great man to dominate the globe." Speer placed his department at the disposal of the "Wehrmacht". When Hitler remonstrated, and said it was not for Speer to decide how his workers should be used, Speer simply ignored him. Among Speer's innovations were quick-reaction squads to construct roads or clear away debris; before long, these units would be used to clear bomb sites. As the war progressed, initially to great German success, Speer continued preliminary work on the Berlin and Nürnberg plans. Speer also oversaw the construction of buildings for the "Wehrmacht" and "Luftwaffe". In 1940, Joseph Stalin proposed that Speer pay a visit to Moscow. Stalin had been particularly impressed by Speer's work in Paris, and wished to meet the "Architect of the Reich". Hitler, alternating between amusement and anger, did not allow Speer to go, fearing that Stalin would put Speer in a "rat hole" until a new Moscow arose. When Germany invaded the Soviet Union in 1941, Speer came to doubt, despite Hitler's reassurances, that his projects for Berlin would ever be completed. On February 8, 1942, Minister of Armaments Fritz Todt died in a plane crash shortly after taking off from Hitler's eastern headquarters at Rastenburg.
Speer, who had arrived in Rastenburg the previous evening, had accepted Todt's offer to fly with him to Berlin, but had cancelled some hours before takeoff (Speer stated in his memoirs that the cancellation was because of exhaustion from travel and a late-night meeting with Hitler). Later that day, Hitler appointed Speer as Todt's successor to all of his posts. In "Inside the Third Reich", Speer recounts his meeting with Hitler and his reluctance to take ministerial office, saying that he only did so because Hitler commanded it. Speer also states that Hermann Göring raced to Hitler's headquarters on hearing of Todt's death, hoping to claim Todt's powers. Hitler instead presented Göring with the "fait accompli" of Speer's appointment. At the time of Speer's accession to the office, the German economy, unlike the British one, was not fully geared for war production. Consumer goods were still being produced at nearly as high a level as during peacetime. No fewer than five "Supreme Authorities" had jurisdiction over armament production—one of which, the Ministry of Economic Affairs, had declared in November 1941 that conditions did not permit an increase in armament production. Few women were employed in the factories, which were running only one shift. One evening soon after his appointment, Speer went to visit a Berlin armament factory; he found no one on the premises. Speer overcame these difficulties by centralizing power over the war economy in himself. Factories were given autonomy, or as Speer put it, "self-responsibility", and each factory concentrated on a single product. Backed by Hitler's strong support (the dictator stated, "Speer, I'll sign anything that comes from you"), he divided the armament field according to weapon system, with experts rather than civil servants overseeing each department. No department head could be older than 55—anyone older being susceptible to "routine and arrogance"—and no deputy older than 40. Over these departments was a central planning committee headed by Speer, which took increasing responsibility for war production, and as time went by, for the German economy itself. According to the minutes of a conference at "Wehrmacht" High Command in March 1942, "It is only Speer's word that counts nowadays. He can interfere in all departments. Already he overrides all departments ... On the whole, Speer's attitude is to the point." Goebbels would note in his diary in June 1943, "Speer is still tops with the "Führer". He is truly a genius with organization." Speer was so successful in his position that by late 1943, he was widely regarded among the Nazi elite as a possible successor to Hitler. While Speer had tremendous power, he was of course subordinate to Hitler. Nazi officials sometimes went around Speer by seeking direct orders from the dictator. When Speer ordered peacetime building work suspended, the "Gauleiters" (Nazi Party district leaders) obtained an exemption for their pet projects. When Speer sought the appointment of Hanke as a labor czar to optimize the use of German and slave labor, Hitler, under the influence of Martin Bormann, instead appointed Fritz Sauckel. Rather than increasing female labor and taking other steps to better organize German labor, as Speer favored, Sauckel advocated importing more slave labour from the occupied nations – and did so, obtaining workers for (among other things) Speer's armament factories, often using the most brutal methods. 
On December 10, 1943, Speer visited the underground Mittelwerk V-2 rocket factory that used concentration camp labor. Speer claimed after the war that he had been shocked by the conditions there (5.7 percent of the work force died that month). By 1943, the Allies had gained air superiority over Germany, and bombings of German cities and industry had become commonplace. However, the Allies in their strategic bombing campaign did not concentrate on industry, and Speer was able to overcome bombing losses. In spite of these losses, German production of tanks more than doubled in 1943, production of planes increased by 80 percent, and production time for "Kriegsmarine" submarines was reduced from one year to two months. Production would continue to increase until the second half of 1944. In January 1944, Speer fell ill with complications from an inflamed knee, necessitating a leave. According to Speer's post-war memoirs, his political rivals (mainly Göring and Martin Bormann), attempted to have some of his powers permanently transferred to them during his absence. Speer claimed that SS chief Heinrich Himmler tried to have him physically isolated by having Himmler's personal physician Karl Gebhardt treat him, though his "care" did not improve his health. Speer's case was transferred to his friend Dr. Karl Brandt, and he slowly recovered. In response to the Allied air raids on aircraft factories, Adolf Hitler authorised the creation of a Jägerstab, a governmental task force composed of Reich Aviation Ministry, Armaments Ministry and SS personnel. Its aim was to ensure the preservation and growth of fighter aircraft production. The task force was established by the 1 March 1944 order of Speer, with support from Erhard Milch of the Reich Aviation Ministry. Speer and Milch played a key role in directing the activities of the agency, while the day-to-day operations were handled by Chief of Staff Karl Saur, the head of the Technical Office in the Armaments Ministry. Production continued to improve until late 1944, with allied bombing destroying just 9% of German production. Production of German fighter aircraft was more than doubled from 1943 to 1944. In April, Speer's rivals for power succeeded in having him deprived of responsibility for construction. Speer sent Hitler a bitter letter, concluding with an offer of his resignation. Judging Speer indispensable to the war effort, Field Marshal Erhard Milch persuaded Hitler to try to get his minister to reconsider. Hitler sent Milch to Speer with a message not addressing the dispute but instead stating that he still regarded Speer as highly as ever. According to Milch, upon hearing the message, Speer burst out, "The "Führer" can kiss my ass!" After a lengthy argument, Milch persuaded Speer to withdraw his offer of resignation, on the condition his powers were restored. On April 23, 1944, Speer went to see Hitler who agreed that "everything [will] stay as it was, [Speer will] remain the head of all German construction". According to Speer, while he was successful in this debate, Hitler had also won, "because he wanted and needed me back in his corner, and he got me". The Jägerstab was given extraordinary powers over labour, production and transportation resources, with its functions taking priority over housing repairs for bombed out civilians or restoration of vital city services. The factories that came under the Jägerstab program saw their work-weeks extended to 72 hours. 
At the same time, Milch took steps to rationalise production by reducing the number of variants of each type of aircraft produced. The Jägerstab was instrumental in bringing about the increased exploitation of slave labour for the benefit of Germany's war industry and its air force, the Luftwaffe. The task force immediately began implementing plans to expand the use of slave labour in aviation manufacturing. Records show that the SS provided 64,000 prisoners for 20 separate projects at the peak of the Jägerstab's construction activities. Taking into account the high mortality rate associated with the underground construction projects, the historian Marc Buggeln estimates that the workforce involved amounted to 80,000–90,000 inmates. They belonged to the various sub-camps of Mittelbau-Dora, Mauthausen-Gusen, Buchenwald and other camps. The prisoners worked for Junkers, Messerschmitt, Henschel and BMW, among others. The cooperation between the Reich Ministry of Aviation, the Ministry of Armaments and the SS proved especially productive. Although the Jägerstab was intended to function for only six months, as early as late May Speer and Milch discussed with Göring the possibility of centralising all of Germany's arms manufacturing under a similar task force. On 1 August 1944, Speer reorganised the Jägerstab into the Rüstungsstab (Armament Staff) to apply the same model of operation to all top-priority armament programs. The formation of the Rüstungsstab allowed Speer, for the first time, to consolidate key arms manufacturing projects for the three branches of the Wehrmacht under the authority of his ministry, further marginalising the Reich Ministry of Aviation. Several departments, including the once powerful Technical Office, were disbanded or transferred to the new task force. The task force oversaw the day-to-day development and production activities relating to the He 162, the "Volksjäger" ("people's fighter"), as part of the Emergency Fighter Program. The Rüstungsstab assumed responsibility for the underground transfer projects of the Jägerstab. In November 1944, 1.8 million square meters of underground space were ready for occupancy, encompassing over 1,000 spaces commissioned by the task force. But by this time German production was beginning to collapse. (Post-war, Speer sought to downplay his involvement with these projects and claimed that only 300,000 square meters had been completed.) According to Buggeln, the Rüstungsstab played a key role in maintaining and increasing production of fighter aircraft and V-2 rockets. Speer's name was included on the list of members of a post-Hitler government drawn up by the conspirators behind the July 1944 plot to assassinate Hitler. The list had a question mark and the annotation "to be won over" by his name, which likely saved him from the extensive purges that followed the scheme's failure. When Speer learned in February 1945 that the Red Army had overrun the Silesian industrial region, he drafted a memo to Hitler noting that Silesia's coal mines now supplied 60 percent of the Reich's coal. Without them, Speer wrote, Germany's coal production would only be a quarter of its 1944 total—not nearly enough to continue the war. He told Hitler in no uncertain terms that without Silesia, "the war is lost." Hitler merely filed the memo in his safe. By February 1945, Speer was working to supply areas about to be occupied with food and materials to get them through the hard times ahead.
On March 19, 1945, Hitler issued his Nero Decree, ordering a scorched earth policy in both Germany and the occupied territories. Hitler's order, by its terms, deprived Speer of any power to interfere with the decree, and Speer went to confront Hitler, reiterating that the war was lost. Hitler gave Speer 24 hours to reconsider his position, and when the two met the following day, Speer answered, "I stand unconditionally behind you." However, he demanded the exclusive power to implement the Nero Decree, and Hitler signed an order to that effect. Using this order, Speer worked to persuade generals and "Gauleiters" to circumvent the Nero Decree and avoid needless sacrifice of personnel and destruction of industry that would be needed after the war. Speer managed to reach a relatively safe area near Hamburg as the Nazi regime finally collapsed, but decided on a final, risky visit to Berlin to see Hitler one more time. Speer stated at Nuremberg, "I felt that it was my duty not to run away like a coward, but to stand up to him again." Speer visited the "Führerbunker" on April 22. Hitler seemed calm and somewhat distracted, and the two had a long, disjointed conversation in which the dictator defended his actions and informed Speer of his intent to commit suicide and have his body burned. In the published edition of "Inside the Third Reich", Speer relates that he confessed to Hitler that he had defied the Nero Decree, but then assured Hitler of his personal loyalty, bringing tears to the dictator's eyes. Speer biographer Gitta Sereny argued, "Psychologically, it is possible that this is the way he remembered the occasion, because it was how he would have liked to behave, and the way he would have liked Hitler to react. But the fact is that none of it happened; our witness to this is Speer himself." Sereny notes that Speer's original draft of his memoirs lacks the confession and Hitler's tearful reaction, and contains an explicit denial that any confession or emotional exchange took place, as had been alleged in a French magazine article. The following morning, Speer left the "Führerbunker"; Hitler curtly bade him farewell. Speer toured the damaged Chancellery one last time before leaving Berlin to return to Hamburg. On April 29, the day before committing suicide, Hitler dictated a final political testament which dropped Speer from the successor government. Speer was to be replaced by his own subordinate, Karl-Otto Saur. After Hitler's death, Speer offered his services to the so-called Flensburg Government, headed by Hitler's successor, Karl Dönitz, and took a significant role in that short-lived regime. On May 15, an Allied delegation arrived at Glücksburg Castle, where Speer had accommodations, and asked if he would be willing to provide information on the effects of the air war. Speer agreed, and over the next several days, provided information on a broad range of subjects. On May 23, two weeks after the surrender of German forces, British troops arrested the members of the Flensburg Government and brought Nazi Germany to a formal end. Speer was taken to several internment centres for Nazi officials and interrogated. In September 1945, he was told that he would be tried for war crimes, and several days later, he was taken to Nuremberg and incarcerated there. 
Speer was indicted on all four possible counts: first, participating in a common plan or conspiracy for the accomplishment of a crime against peace; second, planning, initiating and waging wars of aggression and other crimes against peace; third, war crimes; and lastly, crimes against humanity. U.S. Supreme Court Justice Robert Jackson, the chief U.S. prosecutor at Nuremberg, alleged, "Speer joined in planning and executing the program to dragoon prisoners of war and foreign workers into German war industries, which waxed in output while the workers waned in starvation." Speer's attorney, Dr. Hans Flächsner, presented Speer as an artist thrust into political life, who had always remained a non-ideologue and who had been promised by Hitler that he could return to architecture after the war. During his testimony, Speer accepted responsibility for the Nazi regime's actions. An observer at the trial, journalist and author William L. Shirer, wrote that, compared to his codefendants, Speer "made the most straightforward impression of all and ... during the long trial spoke honestly and with no attempt to shirk his responsibility and his guilt". Speer claimed that he had planned to kill Hitler in early 1945 by introducing tabun poison gas into the "Führerbunker" ventilation shaft. He said his efforts were frustrated by the impracticability of tabun and his lack of ready access to a replacement nerve agent, and also by the unexpected construction of a tall chimney that put the air intake out of reach. Speer stated his motive was despair at realising that Hitler intended to take the German people down with him. Speer's supposed assassination plan subsequently met with some skepticism, with Speer's architectural rival Hermann Giesler sneering, "the second most powerful man in the state did not have a ladder." Speer was found guilty of war crimes and crimes against humanity, though he was acquitted on the other two counts. His claim that he was unaware of Nazi extermination plans, which probably saved him from hanging, was finally revealed to be false in private correspondence written in 1971 and publicly disclosed in 2007. On 1 October 1946, he was sentenced to 20 years' imprisonment. While three of the eight judges (two Soviet and one American) initially advocated the death penalty for Speer, the other judges did not, and a compromise sentence was reached "after two days' discussion and some rather bitter horse-trading". On July 18, 1947, Speer and his six fellow prisoners, all former high officials of the Nazi regime, were flown from Nuremberg to Berlin under heavy guard. They were taken to Spandau Prison in the British Sector of what became West Berlin, where they were designated by number, with Speer given Number Five. Initially, the prisoners were kept in solitary confinement for all but half an hour a day and were not permitted to address each other or their guards. As time passed, the strict regimen was relaxed, especially during the three months out of four that the three Western powers were in control; the four occupying powers took overall control on a monthly rotation. Speer considered himself an outcast among his fellow prisoners for his acceptance of responsibility at Nuremberg. He made a deliberate effort to use his time as productively as possible. He wrote, "I am obsessed with the idea of using this time of confinement for writing a book of major importance ... That could mean transforming prison cell into scholar's den."
The prisoners were forbidden to write memoirs, and mail was severely limited and censored. However, Speer was able to have his writings sent to Wolters as a result of an offer from a sympathetic orderly, and they eventually amounted to 20,000 sheets. He had completed his memoirs by 1954, which became the basis of "Inside the Third Reich" and which Wolters arranged to have transcribed onto 1,100 typewritten pages. He was also able to send letters and financial instructions and to obtain writing paper and letters from the outside. His many letters to his children were secretly transmitted and eventually formed the basis for "Spandau: The Secret Diaries". With the draft memoir complete and clandestinely transmitted, Speer sought a new project. He found one while taking his daily exercise, walking in circles around the prison yard. Measuring the path's distance carefully, he set out to walk the distance from Berlin to Heidelberg. He then expanded his idea into a worldwide journey, visualizing the places that he was "traveling" through while walking the path around the prison yard. He ordered guidebooks and other materials about the nations through which he imagined that he was passing so as to envision as accurate a picture as possible. He meticulously calculated every meter traveled and mapped distances to the real-world geography. He began in northern Germany, passed through Asia by a southern route before entering Siberia, then crossed the Bering Strait and continued southwards, finally ending his sentence south of Guadalajara, Mexico. Speer devoted much of his time and energy to reading. The prisoners brought some books with them in their personal property, but Spandau Prison had no library; books were sent from Spandau's municipal library. From 1952, the prisoners were also able to order books from the Berlin central library in Wilmersdorf. Speer was a voracious reader and he completed well over 500 books in the first three years at Spandau alone. He read classic novels, travelogues, books on ancient Egypt, and biographies of such figures as Lucas Cranach, Édouard Manet, and Genghis Khan. He took to the prison garden for enjoyment and work, at first to do something constructive while afflicted with writer's block. He was allowed to build an ambitious garden, transforming what he initially described as a "wilderness" into what the American commander at Spandau described as "Speer's Garden of Eden". Speer's supporters maintained a continual call for his release. Among those who pledged support for his sentence to be commuted were Charles de Gaulle, U.S. diplomat George Ball, former U.S. High Commissioner John J. McCloy, and former Nuremberg prosecutor Hartley Shawcross. Willy Brandt was a strong advocate of his release, sending flowers to his daughter on the day of his release and putting an end to the de-Nazification proceedings against him, which could have caused his property to be confiscated. A reduced sentence required the consent of all four of the occupying powers, and the Soviets adamantly opposed any such proposal. Speer served his full sentence and was released at midnight on October 1, 1966. Speer's release from prison was a worldwide media event, as reporters and photographers crowded both the street outside Spandau and the lobby of the Berlin hotel where Speer spent his first hours of freedom in over 20 years. 
He said little, reserving most comments for a major interview published in "Der Spiegel" in November 1966 in which he again took personal responsibility for crimes of the Nazi regime. He abandoned plans to return to architecture, as two proposed partners died shortly before his release. Instead, he revised his Spandau writings into two autobiographical books, and later researched and published a work about Himmler and the SS. His books provide a unique and personal look into the personalities of the Nazi era, most notably "Inside the Third Reich" (in German, "Erinnerungen", or "Reminiscences") and "Spandau: The Secret Diaries", and they have become much valued by historians. Speer was aided in shaping the works by Joachim Fest and Wolf Jobst Siedler from the publishing house Ullstein. He found himself unable to re-establish his relationship with his children, even with his son Albert who had also become an architect. According to Speer's daughter Hilde, "One by one my sister and brothers gave up. There was no communication." Following the publication of his bestselling books, Speer donated a considerable amount of money to Jewish charities. According to Siedler, these donations were as high as 80% of his royalties. Speer kept the donations anonymous, both for fear of rejection and for fear of being called a hypocrite. Wolters strongly objected to Speer referring to Hitler in the memoirs as a criminal, and Speer predicted as early as 1953 that he would lose a "good many friends" if the writings were published. This came to pass following the publication of "Inside the Third Reich", as close friends distanced themselves from him, such as Wolters and sculptor Arno Breker. Hitler's personal pilot Hans Baur suggested that "Speer must have taken leave of his senses." Wolters wondered that Speer did not now "walk through life in a hair shirt, distributing his fortune among the victims of National Socialism, forswear all the vanities and pleasures of life and live on locusts and wild honey". Speer made himself widely available to historians and other enquirers. He did an extensive, in-depth interview for the June 1971 issue of "Playboy" magazine, in which he stated, "If I didn't see it, then it was because I didn't want to see it." In October 1973, Speer made his first trip to Britain, flying to London under an assumed name to be interviewed by Ludovic Kennedy on the BBC "Midweek" programme. Upon arrival, he was detained for almost eight hours at Heathrow Airport when British immigration authorities discovered his true identity. Home Secretary Robert Carr allowed him into the country for 48 hours. In the same year, he appeared on the television programme "The World at War". Speer returned to London in 1981 to participate in the BBC "Newsnight" program; while there, he suffered a stroke and died on September 1. He had formed a relationship with an Englishwoman of German origin and was with her at the time of his death. Even to the end of his life, Speer continued to question his actions under Hitler. He asks in his final book "Infiltration", "What would have happened if Hitler had asked me to make decisions that required the utmost hardness? ... How far would I have gone? ... If I had occupied a different position, to what extent would I have ordered atrocities if Hitler had told me to do so?" Speer leaves the questions unanswered. The view of Speer as an unpolitical "miracle man" is challenged by Columbia historian Adam Tooze. 
In his 2006 book, "The Wages of Destruction", Tooze, following Gitta Sereny, argues that Speer's ideological commitment to the Nazi cause was greater than he claimed. Tooze further contends that an insufficiently challenged Speer "mythology" (partly fostered by Speer himself through politically motivated, tendentious use of statistics and other propaganda) had led many historians to assign Speer far more credit for the increases in armaments production than was warranted and give insufficient consideration to the "highly political" function of the so-called armaments miracle. Little remains of Speer's personal architectural works, other than the plans and photographs. No buildings designed by Speer during the Nazi era are extant in Berlin, other than the "Schwerbelastungskörper" (heavy load bearing body), built around 1941. The high concrete cylinder was used to measure ground subsidence as part of feasibility studies for a massive triumphal arch and other large structures proposed as part of "Welthauptstadt Germania", Hitler's planned postwar renewal project for the city. The cylinder is now a protected landmark and is open to the public. Along the Strasse des 17. Juni, a double row of lampposts designed by Speer still stands. The tribune of the "Zeppelinfeld" stadium in Nuremberg, though partly demolished, can also be seen. More of Speer's own personal work can be found in London, where he redesigned the interior of the German Embassy to the United Kingdom, then located at 7–9 Carlton House Terrace. Since 1967, it has served as the offices of the Royal Society. His work there, stripped of its Nazi fixtures and partially covered by carpets, survives in part. Another legacy was the "Arbeitsstab Wiederaufbau zerstörter Städte" (Working group on Reconstruction of destroyed cities), authorised by Speer in 1943 to rebuild bombed German cities to make them more livable in the age of the automobile. Headed by Wolters, the working group took a possible military defeat into their calculations. The "Arbeitsstab"'s recommendations served as the basis of the postwar redevelopment plans in many cities, and "Arbeitsstab" members became prominent in the rebuilding. As General Building Inspector, Speer was responsible for the Central Department for Resettlement. From 1939 onward, the Department used the Nuremberg Laws to evict Jewish tenants of non-Jewish landlords in Berlin, to make way for non-Jewish tenants displaced by redevelopment or bombing. Eventually, 75,000 Jews were displaced by these measures. Speer was aware of these activities, and inquired as to their progress. At least one original memo from Speer so inquiring still exists, as does the "Chronicle" of the Department's activities, kept by Wolters. Following his release from Spandau, Speer presented to the German Federal Archives an edited version of the "Chronicle", stripped by Wolters of any mention of the Jews. When David Irving discovered discrepancies between the edited "Chronicle" and other documents, Wolters explained the situation to Speer, who responded by suggesting to Wolters that the relevant pages of the original "Chronicle" should "cease to exist". Wolters did not destroy the "Chronicle", and, as his friendship with Speer deteriorated, allowed access to the original "Chronicle" to doctoral student (who, after obtaining his doctorate, developed his thesis into a book, "Albert Speer: The End of a Myth"). Speer considered Wolters' actions to be a "betrayal" and a "stab in the back". 
The original "Chronicle" reached the Archives in 1983, after both Speer and Wolters had died. Speer maintained at Nuremberg and in his memoirs that he had no knowledge of the Holocaust. In "Inside the Third Reich", he wrote that in mid-1944, he was told by Hanke (by then "Gauleiter" of Lower Silesia) that the minister should never accept an invitation to inspect a concentration camp in neighbouring Upper Silesia, as "he had seen something there which he was not permitted to describe and moreover could not describe". Speer later concluded that Hanke must have been speaking of Auschwitz and blamed himself for not inquiring further of Hanke or seeking information from Himmler or Hitler: Much of the controversy over Speer's knowledge of the Holocaust has centered on his presence at the Posen Conference on October 6, 1943, at which Himmler gave a speech detailing the ongoing Holocaust to Nazi leaders. Himmler said, "The grave decision had to be taken to cause this people to vanish from the earth ... In the lands we occupy, the Jewish question will be dealt with by the end of the year." Speer is mentioned several times in the speech, and Himmler seems to address him directly. In "Inside the Third Reich", Speer mentions his own address to the officials (which took place earlier in the day) but does not mention Himmler's speech. In October 1971, American historian Erich Goldhagen published an article arguing that Speer was present for Himmler's speech. According to Fest in his biography of Speer, "Goldhagen's accusation certainly would have been more convincing" had he not placed supposed incriminating statements linking Speer with the Holocaust in quotation marks, attributed to Himmler, which were in fact invented by Goldhagen. In response, after considerable research in the German Federal Archives in Koblenz, Speer said he had left Posen around noon (long before Himmler's speech) to journey to Hitler's headquarters at Rastenburg. In "Inside the Third Reich", published before the Goldhagen article, Speer recalled that on the evening after the conference, many Nazi officials were so drunk that they needed help boarding the special train which was to take them to a meeting with Hitler. One of his biographers, Dan van der Vat, suggests this necessarily implies he must have still been present at Posen then and must have heard Himmler's speech. In response to Goldhagen's article, Speer had alleged that in writing "Inside the Third Reich", he erred in reporting an incident that happened at another conference at Posen a year later, as happening in 1943. In 2007, "The Guardian" reported that a letter from Speer dated December 23, 1971, had been found in Britain in a collection of his correspondence to Hélène Jeanty, widow of a Belgian resistance fighter. In the letter, Speer states that he had been present for Himmler's presentation in Posen. Speer wrote: "There is no doubt – I was present as Himmler announced on October 6, 1943, that all Jews would be killed." In 2005, the "Daily Telegraph" reported that documents had surfaced indicating that Speer had approved the allocation of materials for the expansion of Auschwitz after two of his assistants toured the facility on a day when almost a thousand Jews were killed. The documents bore annotations in Speer's own handwriting. Speer biographer Gitta Sereny stated that, due to his workload, Speer would not have been personally aware of such activities. 
The debate over Speer's knowledge of, or complicity in, the Holocaust made him a symbol for people who were involved with the Nazi regime yet did not have (or claimed not to have had) an active part in the regime's atrocities. As film director Heinrich Breloer remarked, "[Speer created] a market for people who said, 'Believe me, I didn't know anything about [the Holocaust]. Just look at the "Führer's" friend, he didn't know about it either.'" From 1934 to 1939, Speer was often referred to as "First Architect of the Reich"; however, this was mainly a title given to him by Hitler and not an actual political position within the Nazi Party or German government. Alboin Alboin (530s – June 28, 572) was king of the Lombards from about 560 until 572. During his reign the Lombards ended their migrations by settling in Italy, the northern part of which Alboin conquered between 569 and 572. He had a lasting effect on Italy and the Pannonian Basin; in the former his invasion marked the beginning of centuries of Lombard rule, and in the latter his defeat of the Gepids and his departure from Pannonia ended the dominance there of the Germanic peoples. The period of Alboin's reign as king in Pannonia following the death of his father, Audoin, was one of confrontation and conflict between the Lombards and their main neighbors, the Gepids. The Gepids initially gained the upper hand, but in 567, thanks to his alliance with the Avars, Alboin inflicted a decisive defeat on his enemies, whose lands the Avars subsequently occupied. The increasing power of his new neighbours caused Alboin some unease, however, and he therefore decided to leave Pannonia for Italy, hoping to take advantage of the Byzantine Empire's reduced ability to defend its territory in the wake of the Gothic War. After gathering a large coalition of peoples, Alboin crossed the Julian Alps in 568, entering an almost undefended Italy. He rapidly took control of most of Venetia and Liguria. In 569, unopposed, he took northern Italy's main city, Milan. Pavia offered stiff resistance, however, and was taken only after a siege lasting three years. During that time Alboin turned his attention to Tuscany, but signs of factionalism among his supporters and Alboin's diminishing control over his army increasingly began to manifest themselves. Alboin was assassinated on June 28, 572, in a coup d'état instigated by the Byzantines. It was organized by the king's foster brother, Helmichis, with the support of Alboin's wife, Rosamund, daughter of the Gepid king whom Alboin had killed some years earlier. The coup failed in the face of opposition from a majority of the Lombards, who elected Cleph as Alboin's successor, forcing Helmichis and Rosamund to flee to Ravenna under imperial protection. Alboin's death deprived the Lombards of the only leader who could have kept the newborn Germanic entity together, the last in the line of hero-kings who had led the Lombards through their migrations from the vale of the Elbe to Italy. For many centuries following his death Alboin's heroism and his success in battle were celebrated in Saxon and Bavarian epic poetry. The Lombards under King Wacho had migrated towards the east into Pannonia, taking advantage of the difficulties facing the Ostrogothic Kingdom in Italy following the death of its founder, Theodoric, in 526.
Wacho's death in about 540 brought his son Walthari to the throne, but as the latter was still a minor the kingdom was governed in his stead by Alboin's father, Audoin, of the Gausian clan. Seven years later Walthari died, giving Audoin the opportunity to crown himself and overthrow the reigning Lethings. Alboin was probably born in the 530s in Pannonia, the son of Audoin and his wife, Rodelinda. She may have been the niece of King Theodoric and betrothed to Audoin through the mediation of Emperor Justinian. Like his father, Alboin was raised a pagan, although Audoin had at one point attempted to gain Byzantine support against his neighbours by professing himself a Christian. Alboin took as his first wife the Christian Chlothsind, daughter of the Frankish King Chlothar. This marriage, which took place soon after the death of the Frankish ruler Theudebald in 555, is thought to reflect Audoin's decision to distance himself from the Byzantines, traditional allies of the Lombards, who had been lukewarm when it came to supporting Audoin against the Gepids. The new Frankish alliance was important because of the Franks' known hostility to the Byzantine empire, providing the Lombards with more than one option. However, the "Prosopography of the Later Roman Empire" interprets events and sources differently, believing that Alboin married Chlothsind when already a king in or shortly before 561, the year of Chlothar's death. Alboin first distinguished himself on the battlefield in a clash with the Gepids. At the Battle of Asfeld (552), he killed Turismod, son of the Gepid king Thurisind, in a victory that resulted in the Emperor Justinian's intervention to maintain equilibrium between the rival regional powers. After the battle, according to a tradition reported by Paul the Deacon, to be granted the right to sit at his father's table, Alboin had to ask for the hospitality of a foreign king and have him donate his weapons, as was customary. For this initiation, he went to the court of Thurisind, where the Gepid king gave him Turismod's arms. Walter Goffart believes it is probable that in this narrative Paul was making use of an oral tradition, and is sceptical that it can be dismissed as merely a typical "topos" of an epic poem. Alboin came to the throne after the death of his father, sometime between 560 and 565. As was customary among the Lombards, Alboin took the crown after an election by the tribe's freemen, who traditionally selected the king from the dead sovereign's clan. Shortly afterwards, in 565, a new war erupted with the Gepids, now led by Cunimund, Thurisind's son. The cause of the conflict is uncertain, as the sources are divided; the Lombard Paul the Deacon accuses the Gepids, while the Byzantine historian Menander Protector places the blame on Alboin, an interpretation favoured by historian Walter Pohl. An account of the war by the Byzantine Theophylact Simocatta sentimentalises the reasons behind the conflict, claiming it originated with Alboin's vain courting and subsequent kidnapping of Cunimund's daughter Rosamund, that Alboin proceeded then to marry. The tale is treated with scepticism by Walter Goffart, who observes that it conflicts with the "Origo Gentis Langobardorum", where she was captured only after the death of her father. The Gepids obtained the support of the Emperor in exchange for a promise to cede him the region of Sirmium, the seat of the Gepid kings. 
Thus in 565 or 566 Justinian's successor Justin II sent his son-in-law Baduarius as "magister militum" (field commander) to lead a Byzantine army against Alboin in support of Cunimund, ending in the Lombards' complete defeat. Faced with the possibility of annihilation, Alboin made an alliance in 566 with the Avars under Bayan I, at the expense of some tough conditions: the Avars demanded a tenth of the Lombards' cattle, half of the war booty, and on the war's conclusion all of the lands held by the Gepids. The Lombards played on the pre-existing hostility between the Avars and the Byzantines, claiming that the latter were allied with the Gepids. Cunimund, on the other hand, encountered hostility when he once again asked the Emperor for military assistance, as the Byzantines had been angered by the Gepids' failure to cede Sirmium to them, as had been agreed. Moreover, Justin II was moving away from the foreign policy of Justinian, and believed in dealing more strictly with bordering states and peoples. Attempts to mollify Justin II with tributes failed, and as a result the Byzantines kept themselves neutral if not outright supportive of the Avars. In 567 the allies made their final move against Cunimund, with Alboin invading the Gepids' lands from the northwest while Bayan attacked from the northeast. Cunimund attempted to prevent the two armies joining up by moving against the Lombards and clashing with Alboin somewhere between the Tibiscus and Danube rivers. The Gepids were defeated in the ensuing battle, their king slain by Alboin, and Cunimund's daughter Rosamund taken captive, according to references in the "Origo". The full destruction of the Gepid kingdom was completed by the Avars, who overcame the Gepids in the east. As a result, the Gepids ceased to exist as an independent people, and were partly absorbed by the Lombards and the Avars. Some time before 568, Alboin's first wife Chlothsind died, and after his victory against Cunimund Alboin married Rosamund, to establish a bond with the remaining Gepids. The war also marked a watershed in the geo-political history of the region, as together with the Lombard migration the following year, it signalled the end of six centuries of Germanic dominance in the Pannonian Basin. Despite his success against the Gepids, Alboin had failed to greatly increase his power, and was now faced with a much stronger threat from the Avars. Historians consider this the decisive factor in convincing Alboin to undertake a migration, even though there are indications that before the war with the Gepids a decision was maturing to leave for Italy, a country thousands of Lombards had seen in the 550s when hired by the Byzantines to fight in the Gothic War. Additionally, the Lombards would have known of the weakness of Byzantine Italy, which had endured a number of problems after being retaken from the Goths. In particular the so-called Plague of Justinian had ravaged the region and conflict remained endemic, with the Three-Chapter Controversy sparking religious opposition and administration at a standstill after the able governor of the peninsula, Narses, was recalled. Nevertheless, the Lombards viewed Italy as a rich land which promised great booty, assets Alboin used to gather together a horde which included not only Lombards but many other peoples of the region, including Heruli, Suebi, Gepids, Thuringii, Bulgars, Sarmatians, the remaining Romans and a few Ostrogoths. 
But the most important group, other than the Lombards, were the Saxons, of whom 20,000 male warriors with their families participated in the trek. These Saxons were tributaries to the Frankish King Sigebert, and their participation indicates that Alboin had the support of the Franks for his venture. The precise size of the heterogeneous group gathered by Alboin is impossible to know, and many different estimates have been made. Neil Christie considers 150,000–500,000 to be a realistic size, a number which would make the Lombards a more numerous force than the Ostrogoths on the eve of their invasion of Italy. Jörg Jarnut proposes 100,000–150,000 as an approximation; Wilfried Menghen in "Die Langobarden" estimates 150,000 to 200,000; while Stefano Gasparri cautiously judges the peoples united by Alboin to be somewhere between 100,000 and 300,000. As a precautionary move Alboin strengthened his alliance with the Avars, signing what Paul calls a "foedus perpetuum" ("perpetual treaty") and what is referred to in the 9th-century "Historia Langobardorum codicis Gothani" as a "pactum et foedus amicitiae" ("pact and treaty of friendship"), adding that the treaty was put down on paper. By the conditions accepted in the treaty, the Avars were to take possession of Pannonia and the Lombards were promised military support in Italy should the need arise; also, for a period of 200 years the Lombards were to maintain the right to reclaim their former territories if the plan to conquer Italy failed, thus keeping an alternative open for Alboin. The accord also had the advantage of protecting Alboin's rear, as an Avar-occupied Pannonia would make it difficult for the Byzantines to bring forces to Italy by land. The agreement proved immensely successful, and relations with the Avars were almost uninterruptedly friendly during the lifetime of the Lombard Kingdom. A further cause of the Lombard migration into Italy may have been an invitation from Narses. According to a controversial tradition reported by several medieval sources, Narses, out of spite for having been removed by Justinian's successor Justin II, called the Lombards to Italy. Often dismissed as an unreliable tradition, it has been studied closely by modern scholars, in particular Neil Christie, who see in it a possible record of a formal invitation by the Byzantine state to settle in northern Italy as "foederati", to help protect the region against the Franks, an arrangement that may have been disowned by Justin II after Narses' removal. The Lombard migration started on Easter Monday, April 2, 568. The decision to combine the departure with a Christian celebration can be understood in the context of Alboin's recent conversion to Arian Christianity, as attested by the presence of Arian Gothic missionaries at his court. The conversion is likely to have been motivated mostly by political considerations, intended to consolidate the cohesion of the migrating people and to distinguish them from the Catholic Romans. It also connected Alboin and his people to the Gothic heritage, and in this way helped him obtain the support of the Ostrogoths serving in the Byzantine army as "foederati". It has been speculated that Alboin's migration could have been partly the result of a call from surviving Ostrogoths in Italy. The season chosen for leaving Pannonia was unusually early; the Germanic peoples generally waited until autumn before beginning a migration, giving themselves time to do the harvesting and replenish their granaries for the march. 
The reason behind the spring departure may have been the anxiety induced by the neighboring Avars, despite the friendship treaty. Nomadic peoples like the Avars also waited for autumn to begin their military campaigns, as they needed enough forage for their horses. A sign of this anxiety can also be seen in the decision taken by Alboin to ravage Pannonia, which created a safety zone between the Lombards and the Avars. The road followed by Alboin to reach Italy has been the subject of controversy, as has the length of the trek. According to Neil Christie the Lombards divided themselves into migrational groups, with a vanguard scouting the road, probably following the Poetovio – Celeia – Emona – Forum Iulii route, while the wagons and most of the people proceeded slowly behind because of the goods and chattels they brought with them, and possibly also because they were waiting for the Saxons to join them on the road. By September raiding parties were looting Venetia, but it was probably only in 569 that the Julian Alps were crossed at the Vipava Valley; the eyewitness Secundus of Non gives the date as May 20 or 21. The 569 date for the entry into Italy is not without difficulties, however, and Jörg Jarnut believes the conquest of most of Venetia had already been completed in 568. According to Carlo Guido Mor, a major difficulty remains in explaining how Alboin could have reached Milan on September 3 if he had crossed the border only in May of the same year. The Lombards penetrated into Italy without meeting any resistance from the border troops ("milites limitanei"). The Byzantine military resources available on the spot were scant and of dubious loyalty, and the border forts may well have been left unmanned. What seems certain is that archaeological excavations have found no sign of violent confrontation at the sites examined. This agrees with the narrative of Paul the Deacon, who speaks of a Lombard takeover in Friuli "without any hindrance". The first town to fall into the Lombards' hands was Forum Iulii (Cividale del Friuli), the seat of the local "magister militum". Alboin chose this walled town close to the frontier to be the capital of the Duchy of Friuli and made his nephew and shield bearer, Gisulf, duke of the region, with the specific duty of defending the borders from Byzantine or Avar attacks from the east. Gisulf obtained from his uncle the right to choose for his duchy those "farae", or clans, that he preferred. Alboin's decisions to create a duchy and to designate a duke were both important innovations; until then, the Lombards had never had dukes or duchies based on a walled town. These innovations were part of Alboin's borrowing of Roman and Ostrogothic administrative models, as in Late Antiquity the "comes civitatis" (city count) was the main local authority, with full administrative powers in his region. But the shift from count ("comes") to duke ("dux") and from county ("comitatus") to duchy ("ducatus") also signalled the progressive militarization of Italy. The selection of a fortified town as the centre for the new duchy was also an important change from the time in Pannonia, for while urbanized settlements had previously been ignored by the Lombards, now a considerable part of the nobility settled itself in Forum Iulii, a pattern that was repeated regularly by the Lombards in their other duchies. From Forum Iulii, Alboin next reached Aquileia, the most important road junction in the northeast, and the administrative capital of Venetia. 
The imminent arrival of the Lombards had a considerable impact on the city's population; the Patriarch of Aquileia Paulinus fled with his clergy and flock to the island of Grado in Byzantine-controlled territory. From Aquileia, Alboin took the Via Postumia and swept through Venetia, taking in rapid succession Tarvisium (Treviso), Vicentia (Vicenza), Verona, Brixia (Brescia) and Bergomum (Bergamo). The Lombards faced difficulties only in taking Opitergium (Oderzo), which Alboin decided to avoid, as he similarly avoided tackling the main Venetian towns closer to the coast on the Via Annia, such as Altinum, Patavium (Padova), Mons Silicis (Monselice), Mantua and Cremona. The invasion of Venetia generated a considerable level of turmoil, spurring waves of refugees from the Lombard-controlled interior to the Byzantine-held coast, often led by their bishops, and resulting in new settlements such as Torcello and Heraclia. Alboin then moved west, invading the region of Liguria (north-west Italy) and reaching its capital Mediolanum (Milan) on September 3, 569, only to find it already abandoned by the "vicarius Italiae" (vicar of Italy), the authority entrusted with the administration of the diocese of Annonarian Italy. Archbishop Honoratus, his clergy, and part of the laity accompanied the "vicarius Italiae" to find a safe haven in the Byzantine port of Genua (Genoa). Alboin counted the years of his reign from the capture of Milan, when he assumed the title of "dominus Italiae" (Lord of Italy). His success also meant the collapse of Byzantine defences in the northern part of the Po plain, and large movements of refugees to Byzantine areas. Several explanations have been advanced for the swiftness and ease of the initial Lombard advance in northern Italy. It has been suggested that the towns' doors may have been opened by the betrayal of the Gothic auxiliaries in the Byzantine army, but historians generally hold that Lombard success occurred because Italy was not considered by Byzantium as a vital part of the empire, especially at a time when the empire was imperilled by the attacks of Avars and Slavs in the Balkans and Sassanids in the east. The Byzantine decision not to contest the Lombard invasion reflects the desire of Justinian's successors to reorient the core of the Empire's policies eastward. The impact of the Lombard migration on the Late Roman aristocracy was disruptive, especially in combination with the Gothic War; the latter conflict had finished in the north only in 562, when the last Gothic stronghold, Verona, was taken. Many men of means (Paul's "possessores") lost either their lives or their goods, but the exact extent of the despoliation of the Roman aristocracy is a subject of heated debate. The clergy was also greatly affected. The Lombards were mostly pagans, and displayed little respect for the clergy and Church property. Many churchmen left their sees to escape from the Lombards, like the two most senior bishops in the north, Honoratus and Paulinus. However, most of the suffragan bishops in the north sought an accommodation with the Lombards, as the bishop of Tarvisium, Felix, did in 569 when he journeyed to the Piave river to parley with Alboin, obtaining respect for the Church and its goods in return for this act of homage. It seems certain that many sees maintained an uninterrupted episcopal succession through the turmoil of the invasion and the following years. 
The transition was eased by the hostility existing among the northern Italian bishops towards the papacy and the empire due to the religious dispute involving the "Three-Chapter Controversy". In Lombard territory, churchmen were at least sure to avoid imperial religious persecution. In the view of Pierre Riché, the disappearance of 220 bishops' seats indicates that the Lombard migration was a crippling catastrophe for the Church. Yet according to Walter Pohl the regions directly occupied by Alboin suffered less devastation and had a relatively robust survival rate for towns, whereas the occupation of territory by autonomous military bands interested mainly in raiding and looting had a more severe impact, with the bishoprics in such places rarely surviving. The first attested instance of strong resistance to Alboin's migration took place at the town of Ticinum (Pavia), which he started to besiege in 569 and captured only after three years. The town was of strategic importance, sitting at the confluence of the rivers Po and Ticino and connected by waterways to Ravenna, the capital of Byzantine Italy and the seat of the Praetorian prefecture of Italy. Its fall cut direct communications between the garrisons stationed on the Alpes Maritimae and the Adriatic coast. Careful to maintain the initiative against the Byzantines, by 570 Alboin had taken their last defences in northern Italy except for the coastal areas of Liguria and Venetia and a few isolated inland centres such as Augusta Praetoria (Aosta), Segusio (Susa), and the island of Amacina in the Larius Lacus (Lake Como). During Alboin's kingship the Lombards crossed the Apennines and plundered Tuscia, but historians are not in full agreement as to whether this took place under his leadership and whether it constituted anything more than raiding. According to Herwig Wolfram, it was probably only in 578–579 that Tuscany was conquered, but Jörg Jarnut and others believe this began in some form under Alboin, although it was not completed by the time of his death. Alboin's problems in maintaining control over his people worsened during the siege of Ticinum. The nature of the Lombard monarchy made it difficult for a ruler to exert the same degree of authority over his subjects as had been exercised by Theodoric over his Goths, and the structure of the army gave great authority to the military commanders or "duces", who led each band ("fara") of warriors. Additionally, the difficulties encountered by Alboin in building a solid political entity resulted from a lack of imperial legitimacy, as, unlike the Ostrogoths, the Lombards had not entered Italy as "foederati" but as enemies of the Empire. The king's disintegrating authority over his army was also manifested in the invasion of Frankish Burgundy, which from 569 or 570 was subject to yearly raids on a major scale. The Lombard attacks were ultimately repelled following Mummolus' victory at Embrun. These attacks had lasting political consequences, souring the previously cordial Lombard-Frankish relations and opening the door to an alliance between the Empire and the Franks against the Lombards, a coalition agreed to by Guntram in about 571. Alboin is generally thought not to have been behind this invasion, but an alternative interpretation of the transalpine raids presented by Gian Piero Bognetti is that Alboin may actually have been involved in the offensive on Guntram as part of an alliance with the Frankish king of Austrasia, Sigebert I. This view is met with scepticism by scholars such as Chris Wickham. 
The weakening of royal authority may also have opened the way for the conquest of much of southern Italy by the Lombards, a conquest in which modern scholars believe Alboin played no role at all and which probably took place in 570 or 571 under the leadership of individual warlords. However, it is far from certain that the Lombard takeover occurred during those years, as very little is known of Faroald and Zotto's respective rise to power in Spoletium (Spoleto) and Beneventum (Benevento). Ticinum eventually fell to the Lombards in either May or June 572. Alboin had in the meantime chosen Verona as his seat, establishing himself and his treasure in a royal palace built there by Theodoric. This choice may have been another attempt to link himself with the Gothic king. It was in this palace that Alboin was killed on June 28, 572. In the account given by Paul the Deacon, the most detailed narrative on Alboin's death, history and saga intermingle almost inextricably. Much earlier and shorter is the story told by Marius of Aventicum in his "Chronica", written about a decade after Alboin's murder. According to his version, the king was killed in a conspiracy by a man close to him, called Hilmegis (Paul's Helmechis), with the connivance of the queen. Helmichis then married the widow, but the two were forced to escape to Byzantine Ravenna, taking with them the royal treasure and part of the army, which hints at the cooperation of Byzantium. Roger Collins describes Marius as an especially reliable source because of his early date and his having lived close to Lombard Italy. Also contemporary is Gregory of Tours' account presented in the "Historia Francorum", and echoed by the later Fredegar. Gregory's account diverges in several respects from most other sources. His tale tells how Alboin married the daughter of a man he had slain, and how she waited for a suitable occasion for revenge, eventually poisoning him. She had previously fallen in love with one of her husband's servants, and after the assassination tried to escape with him, but they were captured and killed. However, historians including Walter Goffart place little trust in this narrative. Goffart notes other similar doubtful stories in the "Historia" and calls its account of Alboin's demise "a suitably ironic tale of the doings of depraved humanity". Elements present in Marius' account are echoed in Paul's "Historia Langobardorum", which also contains distinctive features. One of the best-known aspects, found in no other source, is that of the skull cup. In Paul, the events that led to Alboin's downfall unfold in Verona. During a great feast, Alboin gets drunk and orders his wife Rosamund to drink from his cup, made from the skull of his father-in-law Cunimund after he had slain him in 567 and married Rosamund. Alboin "invited her to drink merrily with her father". This reignited the queen's determination to avenge her father. The tale has often been dismissed as a fable, and Paul was conscious of the risk of disbelief. For this reason, he insists that he saw the skull cup personally during the 740s in the royal palace of Ticinum in the hands of King Ratchis. The use of skull cups has been noted among nomadic peoples and, in particular, among the Lombards' neighbors, the Avars. Skull cups are believed to be part of a shamanistic ritual, where drinking from the cup was considered a way to assume the dead man's powers. 
In this context, Stefano Gasparri and Wilfried Menghen see in Cunimund's skull cup the sign of nomadic cultural influences on the Lombards: by drinking from his enemy's skull Alboin was taking his vital strength. As for the offering of the skull to Rosamund, it may have been a ritual demand for the complete submission of the queen and her people to the Lombards, and thus a source of shame or humiliation. Alternatively, it may have been a rite to appease the dead through the offering of a libation. In the latter interpretation, the queen's answer reveals her determination not to let the wound opened by the killing of her father be healed through a ritual act, thus openly displaying her thirst for revenge. The episode is read in a radically different way by Walter Goffart. According to him, the whole story assumes an allegorical meaning, with Paul intent on telling an edifying story of the downfall of the hero and his expulsion from the promised land, because of his human weakness. In this story, the skull cup plays a key role as it unites original sin and barbarism. Goffart does not exclude the possibility that Paul had really seen the skull, but believes that by the 740s the connection between sin and barbarism as exemplified by the skull cup had already been established. In her plan to kill her husband, the queen found an ally in Helmichis, the king's foster brother and "spatharius" (arms bearer). According to Paul, the queen then recruited the king's "cubicularius" (bedchamberlain), Peredeo, into the plot, after having seduced him. When Alboin retired for his midday rest on June 28, care was taken to leave the door open and unguarded. Alboin's sword was also removed, leaving him defenceless when Peredeo entered his room and killed him. Alboin's remains were allegedly buried beneath the palace steps. Peredeo's figure and role are mostly introduced by Paul; the "Origo" had first mentioned his name as "Peritheus", but there his role was different, as he was not the assassin but the instigator of the assassination. In the vein of his reading of the skull cup, Goffart sees Peredeo not as a historical figure but as an allegorical character: he notes a similarity between Peredeo's name and the Latin word "peritus", meaning "lost", a representation of those Lombards who entered into the service of the Empire. Alboin's death had a lasting impact, as it deprived the Lombards of the only leader who could have held the newborn Germanic entity together. His end also represents the death of the last of the line of hero-kings that had led the Lombards through their migrations from the Elba to Italy. His fame survived him for many centuries in epic poetry, with Saxons and Bavarians celebrating his prowess in battle, his heroism, and the magical properties of his weapons. To complete the coup d'état and legitimize his claim to the throne, Helmichis married the queen, whose high standing arose not only from being the king's widow but also from being the most prominent member of the remaining Gepid nation; as such, her support was a guarantee of the Gepids' loyalty to Helmichis. The latter could also count on the support of the Lombard garrison of Verona, where many may have opposed Alboin's aggressive policy and could have cultivated the hope of reaching an entente with the Empire. The Byzantines were almost certainly deeply involved in the plot. 
It was in their interest to stem the Lombard tide by bringing a pro-Byzantine regime into power in Verona, and possibly, in the long run, to break the unity of the Lombards' kingdom by winning over the dukes with honors and emoluments. The coup ultimately failed, as it met with the resistance of most of the warriors, who were opposed to the king's assassination. As a result, the Lombard garrison in Ticinum proclaimed Duke Cleph the new king, and Helmichis, rather than going to war against overwhelming odds, escaped to Ravenna with the assistance of Longinus, the Byzantine prefect of Ravenna, taking with him his wife, his troops, the royal treasure and Alboin's daughter Albsuinda. In Ravenna the two lovers became estranged and killed each other. Subsequently, Longinus sent Albsuinda and the treasure to Constantinople. Cleph kept the throne for only 18 months before being assassinated by a slave. Possibly he too was killed at the instigation of the Byzantines, who had every interest in avoiding a hostile and stable leadership among the Lombards. An important success for the Byzantines was that no king was proclaimed to succeed Cleph, opening a decade of interregnum that left the Lombards more vulnerable to attacks from the Franks and the Byzantines. It was only when faced with the danger of annihilation by the Franks in 584 that the dukes elected a new king in the person of Authari, son of Cleph. He began the definitive consolidation and centralization of the Lombard kingdom, while the remaining imperial territories were reorganized under an exarch in Ravenna capable of defending the country without the Emperor's assistance. The consolidation of Byzantine and Lombard dominions had long-lasting consequences for Italy, as the region was from that moment on fragmented among multiple rulers until Italian unification in 1861. Ealdred (archbishop of York) Ealdred (or Aldred; died 11 September 1069) was Abbot of Tavistock, Bishop of Worcester, and Archbishop of York in Anglo-Saxon England. He was related to a number of other ecclesiastics of the period. After becoming a monk at the monastery at Winchester, he was appointed Abbot of Tavistock Abbey in around 1027. In 1046 he was named to the Bishopric of Worcester. Ealdred, besides his episcopal duties, served Edward the Confessor, the King of England, as a diplomat and as a military leader. He worked to bring one of the king's relatives, Edward the Exile, back to England from Hungary to secure an heir for the childless king. In 1058 he undertook a pilgrimage to Jerusalem, the first bishop from England to do so. As administrator of the Diocese of Hereford, he was involved in fighting against the Welsh, suffering two defeats at the hands of raiders before securing a settlement with Gruffydd ap Llywelyn, a Welsh ruler. In 1060, Ealdred was elected to the archbishopric of York, but had difficulty in obtaining papal approval for his appointment, only managing to do so when he promised not to hold the bishoprics of York and Worcester simultaneously. He helped secure the election of Wulfstan as his successor at Worcester. During his archiepiscopate, he built and embellished churches in his diocese, and worked to improve his clergy by holding a synod which published regulations for the priesthood. Some sources state that following King Edward the Confessor's death in 1066, it was Ealdred who crowned Harold Godwinson as King of England. 
Ealdred supported Harold as king, but when Harold was defeated at the Battle of Hastings, Ealdred backed Edgar the Ætheling and then endorsed King William the Conqueror, the Duke of Normandy and a distant relative of King Edward's. Ealdred crowned King William on Christmas Day in 1066. William never quite trusted Ealdred or the other English leaders, and Ealdred had to accompany William back to Normandy in 1067, but he had returned to York by the time of his death in 1069. Ealdred supported the churches and monasteries in his diocese with gifts and building projects. Ealdred was probably born in the west of England, and may have been related to Lyfing, his predecessor as bishop of Worcester. His family, from Devonshire, may have been well-to-do. Another relative was Wilstan or Wulfstan, who under Ealdred's influence became Abbot of Gloucester. Ealdred was a monk in the cathedral chapter at Winchester Cathedral before becoming abbot of Tavistock Abbey about 1027, an office he held until about 1043. Even after leaving the abbacy of Tavistock, he continued to hold two properties from the abbey until his death. No contemporary documents relating to Ealdred's time as abbot have been discovered. Ealdred was made bishop of Worcester in 1046, a position he held until his resignation in 1062. He may have acted as suffragan, or subordinate bishop, to his predecessor Lyfing before formally assuming the bishopric, as from about 1043 Ealdred witnessed charters as an "episcopus", or bishop, and a charter from 1045 or early 1046 names Sihtric as abbot of Tavistock. Lyfing died on 26 March 1046, and Ealdred became bishop of Worcester shortly after. However, Ealdred did not receive the other two dioceses that Lyfing had held, Crediton and Cornwall; King Edward the Confessor (reigned 1043–1066) granted these to Leofric, who combined the two sees at Crediton in 1050. Ealdred was an advisor to King Edward the Confessor, and was often involved in the royal government. He was also a military leader, and in 1046 he led an unsuccessful expedition against the Welsh. This was in retaliation for a raid led by the Welsh rulers Gruffydd ap Rhydderch, Rhys ap Rhydderch, and Gruffydd ap Llywelyn. Ealdred's expedition was betrayed by some Welsh soldiers who were serving with the English, and Ealdred was defeated. In 1050, Ealdred went to Rome "on the king's errand", apparently to secure papal approval to move the seat, or centre, of the bishopric of Crediton to Exeter. It may also have been to secure the release of the king from a vow to go on pilgrimage, if sources from after the Norman Conquest of England are to be believed. While in Rome, he attended a papal council, along with his fellow English bishop Herman. That same year, as Ealdred was returning to England, he met Sweyn, a son of Godwin, Earl of Wessex, and probably absolved him of having abducted the abbess of Leominster Abbey in 1046. Through Ealdred's intercession, Sweyn was restored to his earldom, which he had lost after abducting the abbess and murdering his cousin Beorn Estrithson. Ealdred helped Sweyn not only because he was a supporter of Earl Godwin's family but also because Sweyn's earldom was close to his bishopric. As recently as 1049 Irish raiders had allied with Gruffydd ap Rhydderch of Gwent in raiding along the River Usk. Ealdred tried to drive off the raiders, but was again routed by the Welsh. This failure underscored Ealdred's need for a strong earl in the area to protect against raids. 
Normally, the bishop of Hereford would have led the defence in the absence of an Earl of Hereford, but in 1049 the incumbent, Æthelstan, was blind, so Ealdred took on the role of defender. Earl Godwin's rebellion against the king in 1051 came as a blow to Ealdred, who was a supporter of the earl and his family. Ealdred was present at the royal council at London that banished Godwin's family. Later in 1051, when he was sent to intercept Harold Godwinson and his brothers as they fled England after their father's outlawing, Ealdred "could not, or would not" capture the brothers. The banishment of Ealdred's patron came shortly after the death of Ælfric Puttoc, the Archbishop of York. York and Worcester had long had close ties, and the two sees had often been held in plurality, or at the same time. Ealdred probably wanted to become Archbishop of York after Ælfric's death, but his patron's eclipse led to the king appointing Cynesige, a royal chaplain, instead. In September 1052, though, Godwin returned from exile and his family was restored to power. By late 1053 Ealdred was once more in royal favour. At some point, he was alleged to have accompanied Sweyn on a pilgrimage to the Holy Land, but proof is lacking. In 1054 King Edward sent Ealdred to Germany to obtain Emperor Henry III's help in returning Edward the Exile, son of Edmund Ironside, to England. Edmund (reigned 1016) was an elder half-brother of King Edward the Confessor, and Edmund's son Edward was in Hungary with King Andrew I, having left England as an infant after his father's death and the accession of Cnut as King of England. In this mission Ealdred was somewhat successful, and he obtained insight into the workings of the German church during a year-long stay with Hermann II, the Archbishop of Cologne. He was also impressed by the buildings he saw, and later incorporated some of the German styles into his own constructions. The main objective of the mission, however, was to secure the return of Edward, but this failed, mainly because Henry III's relations with the Hungarians were strained, and the emperor was unable or unwilling to help Ealdred. Ealdred was able to discover that Edward was alive, and had a place at the Hungarian court. Although some sources state that Ealdred attended the coronation of Emperor Henry IV, this is not possible, as on the date that Henry was crowned, Ealdred was in England consecrating an abbot. Ealdred had returned to England by 1055, bringing with him a copy of the "Pontificale Romano-Germanicum", a set of liturgies. An extant copy of this work, currently manuscript Cotton Vitellius E xii, has been identified as a copy owned by Ealdred. It appears likely that the "Rule of Chrodegang", a continental set of ordinances for the communal life of secular canons, was introduced into England by Ealdred sometime before 1059. He probably brought it back from Germany, possibly in concert with Harold. After Ealdred's return to England he took charge of the sees of Hereford and Ramsbury. Ealdred also administered Winchcombe Abbey and Gloucester Abbey. The authors of the "Handbook of British Chronology Third Edition" say he was named bishop of Hereford in 1056, holding the see until he resigned it in 1060, but other sources say that he merely administered the see while it was vacant, or that he was bishop of Hereford from 1055 to 1060. 
Ealdred became involved with the see of Ramsbury after its bishop Herman got into a dispute with King Edward over the movement of the seat of his bishopric to Malmesbury Abbey. Herman wished to move the seat of his see, but Edward refused permission for the move. Ealdred was a close associate of Herman's, and the historian H. R. Loyn called Herman "something of an alter ego" to Ealdred. According to the medieval chronicler John of Worcester, Ealdred was given the see of Ramsbury to administer while Herman remained outside England. Herman returned in 1058, and resumed his bishopric. There is no contemporary documentary evidence of Ealdred's administration of Ramsbury. The king again employed Ealdred as a diplomat in 1056, when he assisted earls Harold and Leofric in negotiations with the Welsh. Edward sent Ealdred after the death in battle of Bishop Leofgar of Hereford, who had attacked Gruffydd ap Llywelyn after encouragement from the king. However, Leofgar lost the battle and his life, and Edward had to sue for peace. Although details of the negotiations are lacking, Gruffydd ap Llywelyn swore loyalty to King Edward, but the oath may not have placed any obligations on Gruffydd towards Edward. The exact terms of the submission are not known in full, but Gruffydd was not required to assist Edward in war or to attend his court. Ealdred was rewarded with the administration of the see of Hereford, which he held until 1061, when he was appointed Archbishop of York. The diocese had suffered a serious raid from the Welsh in 1055, and during his administration, Ealdred continued the rebuilding of the cathedral church as well as securing the cathedral chapter's rights. Ealdred was granted the administration so that the area might have someone with experience of the Welsh in charge. In 1058 Ealdred made a pilgrimage to Jerusalem, the first English bishop to make the journey. He travelled through Hungary, and the "Anglo-Saxon Chronicle" stated that "he went to Jerusalem in such state as no-one had done before him". While in Jerusalem he made a gift of a gold chalice to the church of the Holy Sepulchre. It is possible that the reason Ealdred travelled through Hungary was to arrange the travel of Edward the Exile's family to England. Another possibility is that he wished to search for other possible heirs to King Edward in Hungary. It is not known exactly when Edward the Exile's family returned to England, whether with Edward in 1057 or sometime later, so it is only a possibility that they returned with Ealdred in 1058. Very little documentary evidence is available from Ealdred's time as Bishop of Worcester. Only five leases that he signed survive, and all date from 1051 to 1053. Two further leases exist in "Hemming's Cartulary" as copies only. How the diocese of Worcester was administered when Ealdred was abroad is unclear, although it appears that Wulfstan, the prior of the cathedral chapter, performed the religious duties in the diocese. On the financial side, the "Evesham Chronicle" states that Æthelwig, who became abbot of Evesham Abbey in 1058, administered Worcester before he became abbot. Cynesige, the archbishop of York, died on 22 December 1060, and Ealdred was elected Archbishop of York on Christmas Day, 1060. Although a bishop was promptly appointed to Hereford, none was named to Worcester, and it appears that Ealdred intended to retain Worcester along with York, as several of his predecessors had done. 
There were a few reasons for this, one of which was political, as the kings of England preferred to appoint bishops from the south to the northern bishoprics, hoping to counter the northern tendency towards separatism. Another reason was that York was not a wealthy see, and Worcester was. Holding Worcester along with York allowed the archbishop sufficient revenue to support himself. In 1061 Ealdred travelled to Rome to receive the pallium, the symbol of an archbishop's authority. Journeying with him was Tostig, another son of Earl Godwin, who was now earl of Northumbria. William of Malmesbury says that Ealdred, by "amusing the simplicity of King Edward and alleging the custom of his predecessors, had acquired, more by bribery than by reason, the archbishopric of York while still holding his former see." On his arrival in Rome, however, charges of simony, or the buying of ecclesiastical office, and lack of learning were brought against him, and his elevation to York was refused by Pope Nicholas II, who also deposed him from Worcester. The story of Ealdred being deposed comes from the "Vita Edwardi", a life of Edward the Confessor, but the "Vita Wulfstani", an account of the life of Ealdred's successor at Worcester, Wulfstan, says that Nicholas refused the pallium until a promise to find a replacement for Worcester was given by Ealdred. Yet another chronicler, John of Worcester, mentions nothing of any trouble in Rome, and when discussing the appointment of Wulfstan, says that Wulfstan was elected freely and unanimously by the clergy and people. John of Worcester also claims that at Wulfstan's consecration, Stigand, the archbishop of Canterbury, extracted a promise from Ealdred that neither he nor his successors would lay claim to any jurisdiction over the diocese of Worcester. Given that John of Worcester wrote his chronicle after the eruption of the Canterbury–York supremacy struggle, the story of Ealdred renouncing any claims to Worcester needs to be considered suspect. For whatever reason, Ealdred gave up the see of Worcester in 1062, when papal legates arrived in England to hold a council and to make sure that he relinquished the see. This happened at Easter. Ealdred was succeeded by Wulfstan, whom he had chosen, but John of Worcester relates that Ealdred had a hard time deciding between Wulfstan and Æthelwig. The legates had urged the selection of Wulfstan because of his saintliness. Because the position of Stigand, the archbishop of Canterbury, was irregular, Wulfstan sought and received consecration as a bishop from Ealdred. Normally, Wulfstan would have gone to the archbishop of Canterbury, as the see of Worcester was within Canterbury's province. Although Ealdred gave up the bishopric, the appointment of Wulfstan allowed him to retain considerable influence over the see of Worcester. Ealdred retained a number of estates belonging to Worcester. Even after the Norman Conquest, Ealdred still controlled some events in Worcester, and it was Ealdred, not Wulfstan, who opposed Urse d'Abetot's attempt to extend the castle of Worcester into the cathedral. While archbishop, Ealdred built at Beverley, expanding on the building projects begun by his predecessor Cynesige, as well as repairing and expanding other churches in his diocese. He also built refectories for the canons at York and Southwell. 
He was also the only bishop to publish ecclesiastical legislation during Edward the Confessor's reign, attempting to discipline and reform the clergy. He held a synod of his clergy shortly before 1066. John of Worcester, a medieval chronicler, stated that Ealdred crowned King Harold II in 1066, although the Norman chroniclers mention Stigand as the officiating prelate. Given Ealdred's known support of Godwin's family, John of Worcester is probably correct. Stigand's position as archbishop was canonically suspect, and as earl, Harold had not allowed Stigand to consecrate one of his churches, so it is unlikely that Harold would have allowed Stigand to perform the much more important royal coronation. Arguments for Stigand having performed the coronation, however, rely on the fact that no other English source names the ecclesiastic who performed the ceremony; all Norman sources name Stigand as the presider. In any event, Ealdred and Harold were close, and Ealdred supported Harold's bid to become king. Ealdred perhaps accompanied Harold when the new king went to York and secured the support of the northern magnates shortly after Harold's consecration. According to the medieval chronicler Geoffrey Gaimar, after the Battle of Stamford Bridge Harold entrusted the loot gained from Harald Hardrada to Ealdred. Gaimar asserts that King Harold did this because he had heard of Duke William's landing in England, and needed to rush south to counter it. After the Battle of Hastings, Ealdred joined the group that tried to elevate Edgar the Ætheling, Edward the Exile's son, to the throne, but eventually he submitted to William the Conqueror at Berkhamsted. John of Worcester says that the group supporting Edgar vacillated over what to do while William ravaged the countryside, which led to Ealdred and Edgar's submission to William. Ealdred crowned William king on Christmas Day 1066. An innovation in William's coronation ceremony was that before the actual crowning, Ealdred asked the assembled crowd, in English, if it was their wish that William be crowned king. The Bishop of Coutances then did the same, but in Norman French. In March 1067, William took Ealdred with him when William returned to Normandy, along with the other English leaders Earl Edwin of Mercia, Earl Morcar, Edgar the Ætheling, and Archbishop Stigand. At Whitsun 1068, Ealdred performed the coronation of Matilda, William's wife. The "Laudes Regiae", or song of praise for a ruler, performed at Matilda's coronation may have been composed by Ealdred himself for the occasion. In 1069, when the northern thegns rebelled against William and attempted to install Edgar the Ætheling as king, Ealdred continued to support William; he was the only northern leader to do so. Ealdred was back at York by 1069; he died there on 11 September 1069, and was buried in his episcopal cathedral. He may have taken an active part in trying to calm the rebellions in the north in 1068 and 1069. The medieval chronicler William of Malmesbury records a story that when the new sheriff of Worcester, Urse d'Abetot, encroached on the cemetery of the cathedral chapter for Worcester Cathedral, Ealdred pronounced a rhyming curse on him, saying "Thou art called Urse. May you have God's curse." After Ealdred's death, one of the restraints on William's treatment of the English was removed. 
Ealdred was one of a few native Englishmen whom William appears to have trusted, and his death led to fewer attempts to integrate Englishmen into the administration, although such efforts did not entirely stop. In 1070, a church council was held at Westminster and a number of bishops were deposed. By 1073 there were only two Englishmen in episcopal sees, and by the time of William's death in 1087, there was only one, Wulfstan II of Worcester. Ealdred did much to restore discipline in the monasteries and churches under his authority, and was liberal with gifts to the churches of his diocese. He built the monastic church of St Peter at Gloucester (now Gloucester Cathedral, though nothing of his fabric remains), then part of his diocese of Worcester. He also repaired a large part of Beverley Minster in the diocese of York, adding a presbytery and an unusually splendid painted ceiling covering "all the upper part of the church from the choir to the tower...intermingled with gold in various ways, and in a wonderful fashion". He added a pulpit "in German style" of bronze, gold and silver, surmounted by an arch with a rood cross in the same materials; these were examples of the lavish decorations added to important churches in the years before the conquest. Ealdred encouraged Folcard, a monk of Canterbury, to write the "Life" of Saint John of Beverley. This was part of Ealdred's promotion of the cult of Saint John, who had only been canonised in 1037. Along with the "Pontificale", Ealdred may have brought back from Cologne the first manuscript of the "Cambridge Songs" to enter England, a collection of Latin Goliardic songs which became famous in the Middle Ages. The historian Michael Lapidge suggests that the "Laudes Regiae", which are included in Cotton Vitellius E xii, might have been composed by Ealdred, or a member of his household. Another historian, H. E. J. Cowdrey, argued that the "laudes" were composed at Winchester. These praise songs are probably the same as those performed at Matilda's coronation, but might have been used at other court ceremonies before Ealdred's death. Some historians have seen Ealdred as an "old-fashioned prince-bishop". Others say that he "raised the see of York from its former rustic state". He was known for his generosity and for his diplomatic and administrative abilities. After the Conquest, Ealdred provided a degree of continuity between the pre- and post-Conquest worlds. One modern historian feels that it was Ealdred who was behind the compilation of the D version of the "Anglo-Saxon Chronicle", and dates its composition to the 1050s. Certainly, Ealdred is one of the leading figures in the work, and it is likely that one of his clerks compiled the version. Andrew Jackson Andrew Jackson (March 15, 1767 – June 8, 1845) was an American soldier and statesman who served as the seventh President of the United States from 1829 to 1837. Before being elected to the presidency, Jackson gained fame as a general in the United States Army and served in both houses of Congress. As president, Jackson sought to advance the rights of the "common man" against a "corrupt aristocracy" and to preserve the Union. Born in the colonial Carolinas to a Scotch-Irish family in the decade before the American Revolutionary War, Jackson became a frontier lawyer and married Rachel Donelson Robards. He served briefly in the U.S. House of Representatives and the U.S. Senate representing Tennessee. After resigning, he served as a justice on the Tennessee Supreme Court from 1798 until 1804. 
Jackson purchased a property later known as the Hermitage, and became a wealthy, slaveowning planter. In 1801, he was appointed colonel of the Tennessee militia and was elected its commander the following year. He led troops during the Creek War of 1813–1814, winning the Battle of Horseshoe Bend. The subsequent Treaty of Fort Jackson required the Creek surrender of vast lands in present-day Alabama and Georgia. In the concurrent war against the British, Jackson's victory in 1815 at the Battle of New Orleans made him a national hero. Jackson then led U.S. forces in the First Seminole War, which led to the annexation of Florida from Spain. Jackson briefly served as Florida's first territorial governor before returning to the Senate. He ran for president in 1824, winning a plurality of the popular and electoral vote. As no candidate won an electoral majority, the House of Representatives elected John Quincy Adams in a contingent election. In reaction to the alleged "corrupt bargain" between Adams and Henry Clay and the ambitious agenda of President Adams, Jackson's supporters founded the Democratic Party. Jackson ran again in 1828, defeating Adams in a landslide. Jackson faced the threat of secession by South Carolina over what opponents called the "Tariff of Abominations." The crisis was defused when the tariff was amended, and Jackson threatened the use of military force if South Carolina attempted to secede. In Congress, Henry Clay led the effort to reauthorize the Second Bank of the United States. Jackson, regarding the Bank as a corrupt institution, vetoed the renewal of its charter. After a lengthy struggle, Jackson and his allies thoroughly dismantled the Bank. In 1835, Jackson became the only president to completely pay off the national debt, fulfilling a longtime goal. His presidency marked the beginning of the ascendancy of the party "spoils system" in American politics. In 1830, Jackson signed the Indian Removal Act, which forcibly relocated most members of the Native American tribes in the South to Indian Territory. In foreign affairs, Jackson's administration concluded a "most favored nation" treaty with Great Britain, settled claims of damages against France from the Napoleonic Wars, and recognized the Republic of Texas. In January 1835, he survived the first assassination attempt on a sitting president. In his retirement, Jackson remained active in Democratic Party politics, supporting the presidencies of Martin Van Buren and James K. Polk. Though fearful of its effects on the slavery debate, Jackson advocated the annexation of Texas, which was accomplished shortly before his death. Jackson has been widely revered in the United States as an advocate for democracy and the common man. Many of his actions, such as those during the Bank War, proved divisive, garnering both fervent support and strong opposition from many in the country. His reputation has suffered since the 1970s, largely due to his role in Indian removal. Surveys of historians and scholars have ranked Jackson favorably among United States presidents. Andrew Jackson was born on March 15, 1767, in the Waxhaws region of the Carolinas. His parents were Scots-Irish colonists Andrew and Elizabeth Hutchinson Jackson, Presbyterians who had emigrated from present day Northern Ireland two years earlier. Jackson's father was born in Carrickfergus, County Antrim, in current-day Northern Ireland, around 1738. Jackson's parents lived in the village of Boneybefore, also in County Antrim. 
His paternal family line originated in Killingswold Grove, Yorkshire, England. When they immigrated to North America in 1765, Jackson's parents probably landed in Philadelphia. Most likely they traveled overland through the Appalachian Mountains to the Scots-Irish community in the Waxhaws, straddling the border between North and South Carolina. They brought two children from Ireland, Hugh (born 1763) and Robert (born 1764). Jackson's father died in a logging accident while clearing land in February 1767 at the age of 29, three weeks before his son Andrew was born. Jackson, his mother, and his brothers lived with Jackson's aunt and uncle in the Waxhaws region, and Jackson received schooling from two nearby priests. Jackson's exact birthplace is unclear because of a lack of knowledge of his mother's actions immediately following her husband's funeral. The area was so remote that the border between North and South Carolina had not been officially surveyed. In 1824 Jackson wrote a letter saying that he was born on the plantation of his uncle James Crawford in Lancaster County, South Carolina. Jackson may have claimed to be a South Carolinian because the state was considering nullification of the Tariff of 1824, which he opposed. In the mid-1850s, second-hand evidence indicated that he might have been born at a different uncle's home in North Carolina. As a young boy, Jackson was easily offended and was considered something of a bully. He was, however, said to have taken a group of younger and weaker boys under his wing and been very kind to them. During the Revolutionary War, Jackson's eldest brother, Hugh, died from heat exhaustion after the Battle of Stono Ferry on June 20, 1779. Anti-British sentiment intensified following the brutal Waxhaws Massacre on May 29, 1780. Jackson's mother encouraged him and his elder brother Robert to attend the local militia drills. Soon, they began to help the militia as couriers. They served under Colonel William Richardson Davie at the Battle of Hanging Rock on August 6. Andrew and Robert were captured by the British in 1781 while staying at the home of the Crawford family. When Andrew refused to clean the boots of a British officer, the officer slashed at the youth with a sword, leaving him with scars on his left hand and head, as well as an intense hatred for the British. Robert also refused to do as commanded and was struck with the sword. The two brothers were held as prisoners, contracted smallpox, and nearly starved to death in captivity. Later that year, their mother Elizabeth secured the brothers' release. She then began to walk both boys back to their home in the Waxhaws, a distance of some 40 miles (64 km). Both were in very poor health. Robert, who was far worse, rode on the only horse that they had, while Andrew walked behind them. In the final two hours of the journey, a torrential downpour began which worsened the effects of the smallpox. Within two days of arriving back home, Robert was dead and Andrew in mortal danger. After nursing Andrew back to health, Elizabeth volunteered to nurse American prisoners of war on board two British ships in the Charleston harbor, where there had been an outbreak of cholera. In November, she died from the disease and was buried in an unmarked grave. Andrew became an orphan at age 14. He blamed the British personally for the loss of his brothers and mother. After the Revolutionary War, Jackson received a sporadic education in a local Waxhaw school. 
On bad terms with much of his extended family, he boarded with several different people. In 1781, he worked for a time as a saddle-maker, and eventually taught school. He apparently prospered in neither profession. In 1784, he left the Waxhaws region for Salisbury, North Carolina, where he studied law under attorney Spruce Macay. With the help of various lawyers, he was able to learn enough to qualify for the bar. In September 1787, Jackson was admitted to the North Carolina bar. Shortly thereafter, a friend helped Jackson get appointed to a vacant prosecutor position in the Western District of North Carolina, which would later become the state of Tennessee. During his travel west, Jackson bought his first slave and in 1788, having been offended by fellow lawyer Waightstill Avery, fought his first duel. The duel ended with both men firing into the air, having made a secret agreement to do so before the engagement. Jackson moved to the small frontier town of Nashville in 1788, where he lived as a boarder with Rachel Stockly Donelson, the widow of John Donelson. Here Jackson became acquainted with their daughter, Rachel Donelson Robards. At the time, the younger Rachel was in an unhappy marriage with Captain Lewis Robards; he was subject to fits of jealous rage. The two were separated in 1790. According to Jackson, he married Rachel after hearing that Robards had obtained a divorce. Her divorce had not been made final, making Rachel's marriage to Jackson bigamous and therefore invalid. After the divorce was officially completed, Rachel and Jackson remarried in 1794. To complicate matters further, evidence shows that Rachel had been living with Jackson and referred to herself as Mrs. Jackson before the petition for divorce was ever made. It was not uncommon on the frontier for relationships to be formed and dissolved unofficially, as long as they were recognized by the community. In 1794, Jackson formed a partnership with fellow lawyer John Overton, dealing in claims for land reserved by treaty for the Cherokee and Chickasaw. Like many of their contemporaries, they dealt in such claims although the land was in Indian country. Most of the transactions involved grants made under the 'land grab' act of 1783 that briefly opened Indian lands west of the Appalachians within North Carolina to claim by that state's residents. He was one of the three original investors who founded Memphis, Tennessee, in 1819. After moving to Nashville, Jackson became a protege of William Blount, a friend of the Donelsons and one of the most powerful men in the territory. Jackson became attorney general in 1791, and he won election as a delegate to the Tennessee constitutional convention in 1796. When Tennessee achieved statehood that year, he was elected its only U.S. Representative. He was a member of the Democratic-Republican Party, the dominant party in Tennessee. Jackson soon became associated with the more radical, pro-French and anti-British wing. He strongly opposed the Jay Treaty and criticized George Washington for allegedly removing Republicans from public office. Jackson joined several other Republican congressmen in voting against a resolution of thanks for Washington, a vote that would later haunt him when he sought the presidency. In 1797, the state legislature elected him as U.S. Senator. Jackson seldom participated in debate and found the job dissatisfying. He pronounced himself "disgusted with the administration" of President John Adams and resigned the following year without explanation. 
Upon returning home, with strong support from western Tennessee, he was elected to serve as a judge of the Tennessee Supreme Court at an annual salary of $600. Jackson's service as a judge is generally viewed as a success and earned him a reputation for honesty and good decision-making. Jackson resigned the judgeship in 1804. His official reason for resigning was ill health. He had been suffering financially from poor land ventures, and so it is also possible that he wanted to return full-time to his business interests. After arriving in Tennessee, Jackson won appointment as judge advocate of the Tennessee militia. In 1802, while serving on the Tennessee Supreme Court, he declared his candidacy for major general, or commander, of the Tennessee militia, a position voted on by the officers. At that time, most free men were members of the militia. The organizations, intended to be called up in case of conflict with Europeans or Indians, resembled large social clubs. Jackson saw it as a way to advance his stature. With strong support from western Tennessee, he tied with John Sevier at seventeen votes each. Sevier was a popular Revolutionary War veteran and former governor, the recognized leader of politics in eastern Tennessee. On February 5, Governor Archibald Roane broke the tie in Jackson's favor. Jackson had also presented Roane with evidence of land fraud committed by Sevier. Subsequently, in 1803, when Sevier announced his intention to regain the governorship, Roane released the evidence. Sevier insulted Jackson in public, and the two nearly fought a duel over the matter. Despite the charges leveled against Sevier, he defeated Roane, and continued to serve as governor until 1809. In addition to his legal and political career, Jackson prospered as a planter, slave owner, and merchant. He built a home and the first general store in Gallatin, Tennessee, in 1803. The next year, he acquired the Hermitage, a plantation in Davidson County, near Nashville. He later added to the plantation, which eventually grew to a considerable size. The primary crop was cotton, grown by slaves; Jackson began with nine, owned as many as 44 by 1820, and later up to 150, placing him among the planter elite. Jackson also co-owned with his son Andrew Jackson Jr. the Halcyon plantation in Coahoma County, Mississippi, which housed 51 slaves at the time of his death. Throughout his lifetime Jackson may have owned as many as 300 slaves. Jackson owned men, women, and children as slaves on three sections of the Hermitage plantation. Slaves lived in extended family units of between five and ten persons and were quartered in cabins made either of brick or logs. The size and quality of the Hermitage slave quarters exceeded the standards of the time. To help slaves acquire food, Jackson supplied them with guns, knives, and fishing equipment. At times he paid his slaves with money and coins to trade in local markets. The Hermitage plantation was a profit-making enterprise. Jackson permitted slaves to be whipped to increase productivity or if he believed his slaves' offenses were severe enough. At various times he posted advertisements for fugitive slaves who had escaped from his plantation. In one advertisement placed in the Tennessee Gazette in October 1804, Jackson offered "ten dollars extra, for every hundred lashes any person will give him, to the amount of three hundred." The controversy surrounding his marriage to Rachel remained a sore point for Jackson, who deeply resented attacks on his wife's honor. 
By May 1806, Charles Dickinson, who like Jackson raced horses, had published an attack on Jackson in the local newspaper, which resulted in Jackson issuing a written challenge to a duel. Since Dickinson was considered an expert shot, Jackson determined it would be best to let Dickinson turn and fire first, hoping that his opponent's haste might spoil his aim; Jackson would then wait and take careful aim at Dickinson. Dickinson did fire first, hitting Jackson in the chest. The bullet that struck Jackson was so close to his heart that it could not be removed. Under the rules of dueling, Dickinson had to remain still as Jackson took aim and shot and killed him. Jackson's behavior in the duel outraged men in Tennessee, who called it a brutal, cold-blooded killing and saddled Jackson with a reputation as a violent, vengeful man. He became a social outcast. After the Sevier affair and the duel, Jackson was looking for a way to salvage his reputation. He chose to align himself with former Vice President Aaron Burr, whose political career had ended after he killed Alexander Hamilton in a famous duel in 1804. After leaving office in 1805, Burr went on a tour of what was then the western United States. Burr was extremely well received by the people of Tennessee, and stayed for five days at the Hermitage. Burr's true intentions are not known with certainty. He seems to have been planning a military operation to conquer Spanish Florida and drive the Spanish from Texas. To many westerners like Jackson, the promise seemed enticing. Western American settlers had long held bitter feelings towards the Spanish because of territorial disputes, the persistent failure of the Spanish to keep Indians living on their lands from raiding American settlements, and Spain's unwillingness to return fugitive slaves. On October 4, 1806, Jackson addressed the Tennessee militia, declaring that the men should be "at a moment's warning ready to march." On the same day, he wrote to James Winchester, proclaiming that the United States "can conquer not only the Floridas [at that time there was an East Florida and a West Florida], but all Spanish North America." Jackson agreed to provide boats and other provisions for the expedition. However, on November 10, he learned from a military captain that Burr's plans apparently included seizure of New Orleans, then part of the Louisiana Territory of the United States, and incorporating it, along with lands won from the Spanish, into a new empire. He was further outraged when he learned from the same man of the involvement of Brigadier General James Wilkinson, whom he deeply disliked, in the plan. Jackson acted cautiously at first, but wrote letters to public officials, including President Thomas Jefferson, vaguely warning them about the scheme. In December, Jefferson, a political opponent of Burr, issued a proclamation declaring that a treasonous plot was underway in the West and calling for the arrest of the perpetrators. Jackson, safe from arrest because of his extensive paper trail, organized the militia. Burr was soon captured, and the men were sent home. Jackson traveled to Richmond, Virginia, to testify on Burr's behalf at trial. The defense team decided against placing him on the witness stand, fearing his remarks were too provocative. Burr was acquitted of treason, despite Jefferson's efforts to have him convicted. Jackson endorsed James Monroe for president in 1808 against James Madison. 
The latter was part of the Jeffersonian wing of the Democratic-Republican Party. Leading up to 1812, the United States found itself increasingly drawn into international conflict. Formal hostilities with Spain or France never materialized, but tensions with Britain increased for a number of reasons. Among these was the desire of many Americans for more land, particularly British Canada and Florida, the latter still controlled by Spain, Britain's European ally. On June 18, 1812, Congress officially declared war on the United Kingdom of Great Britain and Ireland, beginning the War of 1812. Jackson responded enthusiastically, sending a letter to Washington offering 2,500 volunteers. However, the men were not called up for many months. Biographer Robert V. Remini claims that Jackson saw the apparent slight as payback by the Madison administration for his support of Burr and Monroe. Meanwhile, the United States military repeatedly suffered devastating defeats on the battlefield. On January 10, 1813, Jackson led an army of 2,071 volunteers to New Orleans to defend the region against British and Native American attacks. He had been instructed to serve under General Wilkinson, who commanded Federal forces in New Orleans. Lacking adequate provisions, Wilkinson ordered Jackson to halt in Natchez, then part of the Mississippi Territory, and await further orders. Jackson reluctantly obeyed. The newly appointed Secretary of War, John Armstrong Jr., sent a letter to Jackson dated February 6 ordering him to dismiss his forces and to turn over his supplies to Wilkinson. In reply to Armstrong on March 15, Jackson defended the character and readiness of his men, and promised to turn over his supplies. He also promised, instead of dismissing the troops without provisions in Natchez, to march them back to Nashville. The march was filled with agony. Many of the men had fallen ill. Jackson and his officers turned over their horses to the sick. He paid for provisions for the men out of his own pocket. The soldiers began referring to their commander as "Hickory" because of his toughness, and Jackson became known as "Old Hickory." The army arrived in Nashville within about a month. Jackson's actions earned him the widespread respect and praise of the people of Tennessee. Jackson faced financial ruin, until his former aide-de-camp Thomas Benton persuaded Secretary Armstrong to order the army to pay the expenses Jackson had incurred. On June 14, Jackson served as a second in a duel on behalf of his junior officer William Carroll against Jesse Benton, the brother of Thomas. In September, Jackson and his top cavalry officer, Brigadier General John Coffee, were involved in a street brawl with the Benton brothers. Jackson was severely wounded by Jesse with a gunshot to the shoulder. On August 30, 1813, a group of Muscogee (also known as Creek Indians) called the Red Sticks, so named for the color of their war paint, perpetrated the Fort Mims massacre. During the massacre, hundreds of white American settlers and non-Red Stick Creeks were slaughtered. The Red Sticks, led by chiefs Red Eagle and Peter McQueen, had broken away from the rest of the Creek Confederacy, which wanted peace with the United States. They were allied with Tecumseh, a Shawnee chief who had launched Tecumseh's War against the United States, and who was fighting alongside the British. The resulting conflict became known as the Creek War. Jackson, with 2,500 men, was ordered to crush the hostile Indians. 
On October 10, he set out on the expedition, his arm still in a sling from fighting the Bentons. Jackson established Fort Strother as a supply base. On November 3, Coffee defeated a band of Red Sticks at the Battle of Tallushatchee. Coming to the relief of friendly Creeks besieged by Red Sticks, Jackson won another decisive victory at the Battle of Talladega. In the winter, Jackson, encamped at Fort Strother, faced a severe shortage of troops due to the expiration of enlistments and chronic desertions. He sent Coffee with the cavalry (which abandoned him) back to Tennessee to secure more enlistments. Jackson decided to combine his force with that of the Georgia militia, and marched to meet the Georgia troops. From January 22–24, 1814, while on their way, the Tennessee militia and allied Muscogee were attacked by the Red Sticks at the Battles of Emuckfaw and Enotachopo Creek. Jackson's troops repelled the attackers but, outnumbered, were forced to withdraw to Fort Strother. Jackson, now with over 2,000 troops, marched most of his army south to confront the Red Sticks at a fortress they had constructed at a bend in the Tallapoosa River. On March 27, enjoying an advantage of more than 2 to 1, he engaged them at the Battle of Horseshoe Bend. An initial artillery barrage did little damage to the well-constructed fort. A subsequent infantry charge, in addition to an assault by Coffee's cavalry and diversions caused by the friendly Creeks, overwhelmed the Red Sticks. The campaign ended three weeks later with Red Eagle's surrender, although some Red Sticks, such as McQueen, fled to East Florida. On June 8, Jackson accepted a commission as brigadier general in the United States Army, and 10 days later became a major general, in command of the Seventh Military Division. Subsequently, Jackson, with Madison's approval, imposed the Treaty of Fort Jackson. The treaty required the Muscogee, including those who had not joined the Red Sticks, to surrender 23 million acres (8,093,713 ha) of land to the United States. Most of the Creeks bitterly acquiesced. Though in ill health from dysentery, Jackson turned his attention to defeating Spanish and British forces. Jackson accused the Spanish of arming the Red Sticks and of violating the terms of their neutrality by allowing British soldiers into the Floridas. The first charge was true, while the second ignored the fact that it was Jackson's threats to invade Florida which had caused them to seek British protection. In the November 7 Battle of Pensacola, Jackson defeated British and Spanish forces in a short skirmish. The Spanish surrendered and the British fled. Weeks later, he learned that the British were planning an attack on New Orleans, which sat near the mouth of the Mississippi River and held immense strategic and commercial value. Jackson abandoned Pensacola to the Spanish, placed a force in Mobile, Alabama, to guard against a possible invasion there, and rushed the rest of his force west to defend the city. The Creeks coined their own name for Jackson, "Jacksa Chula Harjo" or "Jackson, old and fierce." After arriving in New Orleans on December 1, 1814, Jackson instituted martial law in the city, as he worried about the loyalty of the city's Creole and Spanish inhabitants. At the same time, he formed an alliance with Jean Lafitte's smugglers, and formed military units consisting of African-Americans and Muscogees, in addition to recruiting volunteers in the city. Jackson received some criticism for paying white and non-white volunteers the same salary. 
These forces, along with U.S. Army regulars and volunteers from surrounding states, joined with Jackson's force in defending New Orleans. The approaching British force, led by Admiral Alexander Cochrane and later General Edward Pakenham, consisted of over 10,000 soldiers, many of whom had served in the Napoleonic Wars. Jackson had only about 5,000 men, most of whom were inexperienced and poorly trained. The British arrived on the east bank of the Mississippi River on the morning of December 23. That evening, Jackson attacked the British and temporarily drove them back. On January 8, 1815, the British launched a major frontal assault against Jackson's defenses. An initial artillery barrage by the British did little damage to the well-constructed American defenses. Once the morning fog had cleared, the British infantry advanced, and their troops made easy targets for the Americans protected by their parapets. Despite managing to temporarily drive back the American right flank, the overall attack ended in disaster. For the battle on January 8, Jackson admitted to only 71 total casualties. Of these, 13 men were killed, 39 wounded, and 19 missing or captured. The British admitted 2,037 casualties. Of these, 291 men were killed (including Pakenham), 1,262 wounded, and 484 missing or captured. After the battle, the British retreated from the area, and open hostilities ended shortly thereafter when word spread that the Treaty of Ghent had been signed in Europe that December. Coming in the waning days of the war, Jackson's victory made him a national hero, as the country celebrated the end of what many called the "Second American Revolution" against the British. By a Congressional resolution on February 27, 1815, Jackson was given the Thanks of Congress and awarded a Congressional Gold Medal. Alexis de Tocqueville ("underwhelmed" by Jackson according to a 2001 commentator) later wrote in "Democracy in America" that Jackson "was raised to the Presidency, and has been maintained there, solely by the recollection of a victory which he gained, twenty years ago, under the walls of New Orleans." Jackson, still not knowing for certain that the treaty had been signed, refused to lift martial law in the city. In March 1815, after U.S. District Court Judge Dominic A. Hall signed a writ of "habeas corpus" on behalf of a Louisiana legislator whom Jackson had detained, Jackson ordered Hall's arrest. State senator Louis Louaillier had written an anonymous piece in the New Orleans newspaper, challenging Jackson's refusal to release the militia after the British ceded the field of battle. He too was put in jail. Jackson did not relent in his campaign of suppressing dissent until after ordering the arrest of a Louisiana legislator, a federal judge, and a lawyer, and after the intervention of State Judge Joshua Lewis. Lewis was simultaneously serving under Jackson in the militia, and had also signed a writ of "habeas corpus" against Jackson, his commanding officer, seeking Judge Hall's release. Civilian authorities in New Orleans had reason to fear Jackson—he summarily ordered the execution of six members of the militia who had attempted to leave. Their deaths were not well publicized until the Coffin Handbills were circulated during his 1828 presidential campaign. Following the war, Jackson remained in command of Army forces on the southern border of the U.S. He conducted official business from the Hermitage. 
He signed treaties with the Cherokee and Chickasaw which gained for the United States large parts of Tennessee and Kentucky. The treaty with the Chickasaw, finally agreed to later in the year, is commonly known as the Jackson Purchase. Several Native American tribes, which became known as the Seminole, straddled the border between the U.S. and Florida. The Seminole, in alliance with escaped slaves, frequently raided Georgia settlements before retreating back into Florida. These skirmishes continually escalated, and the conflict is now known as the First Seminole War. In 1816, Jackson led a detachment into Florida which destroyed the Negro Fort, a community of escaped slaves and their descendants. Jackson was ordered by President Monroe in December 1817 to lead a campaign in Georgia against the Seminole and Creek Indians. Jackson was also charged with preventing Spanish Florida from becoming a refuge for runaway slaves, after Spain promised freedom to fugitive slaves. Critics later alleged that Jackson exceeded orders in his Florida actions. His orders from President Monroe were to "terminate the conflict." Jackson believed the best way to do this was to seize Florida from Spain once and for all. Before departing, Jackson wrote to Monroe, "Let it be signified to me through any channel ... that the possession of the Floridas would be desirable to the United States, and in sixty days it will be accomplished." Jackson invaded Florida on March 15, 1818, capturing Pensacola. He crushed Seminole and Spanish resistance in the region and captured two British agents, Robert Ambrister and Alexander Arbuthnot, who had been working with the Seminole. After a brief trial, Jackson executed both of the men, causing a diplomatic incident with the British. Jackson's actions polarized Monroe's cabinet, some of whom argued that Jackson had gone against Monroe's orders and violated the Constitution, since the United States had not declared war upon Spain. Yet Jackson was defended by Secretary of State John Quincy Adams. Adams thought that Jackson's conquest of Florida would force Spain to finally sell the province, and Spain did indeed sell Florida to the United States in the Adams–Onís Treaty of 1819. A congressional investigation exonerated Jackson, but Jackson was deeply angered by the criticism he received, particularly from Speaker of the House Henry Clay. After the ratification of the Adams–Onís Treaty in 1821, Jackson briefly served as the territorial Governor of Florida before returning to Tennessee. In the spring of 1822, Jackson suffered a physical breakdown. His body had two bullets lodged in it, and he had grown exhausted from years of hard military campaigning. He regularly coughed up blood, and his entire body shook. Jackson feared that he was on the brink of death. After several months of rest, he recovered. During his convalescence, Jackson's thoughts increasingly turned to national affairs. He obsessed over rampant corruption in the Monroe administration and grew to detest the Second Bank of the United States, blaming it for causing the Panic of 1819 by contracting credit. Jackson turned down an offer to run for governor of his home state, but accepted John Overton's plan to have the legislature nominate him for president. On July 22, 1822, he was officially nominated by the Tennessee legislature. Jackson had come to dislike Secretary of the Treasury William H. 
Crawford, who had been the most vocal critic of Jackson in Monroe's cabinet, and he hoped to prevent Tennessee's electoral votes from going to Crawford. Yet Jackson's nomination garnered a welcoming response even outside of Tennessee, as many Americans appreciated Jackson's attacks on banks. The Panic of 1819 had devastated the fortunes of many, and banks and politicians seen as supportive of banks were particularly unpopular. With his growing political viability, Jackson emerged as one of the five major presidential candidates, along with Crawford, Adams, Clay, and Secretary of War John C. Calhoun. During the Era of Good Feelings, the Federalist Party had faded away, and all five presidential contenders were members of the Democratic-Republican Party. Jackson's campaign promoted him as a defender of the common people, as well as the one candidate who could rise above sectional divisions. On the major issues of the day, most prominently the tariff, Jackson expressed centrist beliefs, and opponents accused him of obfuscating his positions. At the forefront of Jackson's campaign was combatting corruption. Jackson vowed to restore honesty in government and to scale back its excesses. In 1823, Jackson reluctantly allowed his name to be placed in contention for one of Tennessee's U.S. Senate seats. The move was independently orchestrated by his advisors William Berkeley Lewis and U.S. Senator John Eaton in order to defeat incumbent John Williams, who openly opposed his presidential candidacy. The legislature narrowly elected him. His return, after 24 years, 11 months, 3 days out of office, marks the second longest gap in service to the chamber in history. Although Jackson was reluctant to serve once more in the Senate, he was appointed chairman of the Committee on Military Affairs. Eaton wrote to Rachel that Jackson as a senator was "in harmony and good understanding with every body," including Thomas Hart Benton, now a senator from Missouri, with whom Jackson had fought in 1813. Meanwhile, Jackson himself did little active campaigning for the presidency, as was customary. Eaton updated an already-written biography of him in preparation for the campaign and, along with others, wrote letters to newspapers praising Jackson's record and past conduct. Democratic-Republican presidential nominees had historically been chosen by informal Congressional nominating caucuses, but this method had become unpopular. In 1824, most of the Democratic-Republicans in Congress boycotted the caucus. Those who attended backed Crawford for president and Albert Gallatin for vice president. A Pennsylvania convention nominated Jackson for president a month later, stating that the irregular caucus ignored the "voice of the people" and was a "vain hope that the American people might be thus deceived into a belief that he [Crawford] was the regular democratic candidate." Gallatin criticized Jackson as "an honest man and the idol of the worshipers of military glory, but from incapacity, military habits, and habitual disregard of laws and constitutional provisions, altogether unfit for the office." After Jackson won the Pennsylvania nomination, Calhoun dropped out of the presidential race and successfully sought the vice presidency instead. In the presidential election, Jackson won a plurality of the electoral vote, taking several southern and western states as well as the mid-Atlantic states of Pennsylvania and New Jersey. 
He was the only candidate to win states outside of his regional base, as Adams dominated New England, Clay took three western states, and Crawford won Virginia and Georgia. Jackson won a plurality of the popular vote, taking 42 percent, although not all states held a popular vote for the presidency. He won 99 electoral votes, more than any other candidate, but still short of the 131 needed for a majority. With no candidate having won a majority of the electoral vote, the House of Representatives held a contingent election under the terms of the Twelfth Amendment. The amendment specifies that only the top three electoral vote-winners are eligible to be elected by the House, so Clay was eliminated from contention. Jackson believed that he was likely to win this contingent election, as Crawford and Adams lacked Jackson's national appeal, and Crawford had suffered a debilitating stroke that made many doubt his physical fitness for the presidency. Clay, who as Speaker of the House presided over the election, saw Jackson as a dangerous demagogue who might topple the republic in favor of his own leadership. He threw his support behind Adams, who shared Clay's support for federally funded internal improvements such as roads and canals. With Clay's backing, Adams won the contingent election on the first ballot. Furious supporters of Jackson accused Clay and Adams of having reached a "corrupt bargain" after Adams appointed Clay as his Secretary of State. "So you see," Jackson growled, "the Judas of the West has closed the contract and will receive the thirty pieces of silver. [H]is end will be the same." After the election, Jackson resigned his Senate seat and returned to Tennessee. Almost immediately, opposition arose to the Adams presidency. Jackson opposed Adams's plan to involve the U.S. in Panama's quest for independence, writing, "The moment we engage in confederations, or alliances with any nation, we may from that time date the down fall of our republic." Adams also damaged his standing in his first annual message to Congress, when he argued that Congress must not give the world the impression "that we are palsied by the will of our constituents." Jackson was nominated for president by the Tennessee legislature in October 1825, more than three years before the 1828 election. It was the earliest such nomination in presidential history, and it attested to the fact that Jackson's supporters began the 1828 campaign almost as soon as the 1824 campaign ended. Adams's presidency floundered, as his ambitious agenda faced defeat in a new era of mass politics. Critics led by Jackson attacked Adams's policies as a dangerous expansion of federal power. Senator Martin Van Buren, who had been a prominent supporter of Crawford in the 1824 election, emerged as one of the strongest opponents of Adams's policies, and he settled on Jackson as his preferred candidate in the 1828 election. Van Buren was joined by Vice President Calhoun, who also opposed much of Adams's agenda on states' rights grounds. Van Buren and other Jackson allies established numerous pro-Jackson newspapers and clubs around the country, while Jackson avoided campaigning but made himself available to visitors at his Hermitage plantation. In the election, Jackson won a commanding 56 percent of the popular vote and 68 percent of the electoral vote. 
The election marked the definitive end of the one-party Era of Good Feelings, as Jackson's supporters coalesced into the Democratic Party and Adams's followers became known as the National Republicans. In the large Scots-Irish community that was especially numerous in the rural South and Southwest, Jackson was a favorite hero. The campaign was very much a personal one. As was the custom at the time, neither candidate personally campaigned, but their political followers organized many campaign events. Both candidates were rhetorically attacked in the press. Jackson was strongly attacked as a slave trader who bought and sold slaves and moved them about in defiance of higher standards of slaveholder behavior. A series of pamphlets known as the Coffin Handbills were published to attack Jackson, one of which revealed his order to execute soldiers at New Orleans. Another accused him of engaging in cannibalism by eating the bodies of American Indians killed in battle, while still another labeled his mother a "common prostitute" and stated that Jackson's father was a "mulatto man." Rachel Jackson was also a frequent target of attacks, and was widely accused of bigamy, a reference to the controversial situation of her marriage with Jackson. Jackson's campaigners fired back by claiming that while serving as Minister to Russia, Adams had procured a young girl to serve as a prostitute for Emperor Alexander I. They also stated that Adams had a billiard table in the White House and that he had charged the government for it. Rachel had been under extreme stress during the election, and often struggled while Jackson was away. She began experiencing significant physical symptoms during the election season; Jackson described them as "excruciating pain in the left shoulder, arm, and breast." After struggling for three days, Rachel finally died of a heart attack on December 22, 1828, three weeks after her husband's victory in the election (which began on October 31 and ended on December 2) and 10 weeks before Jackson took office as president. A distraught Jackson had to be pulled from her so the undertaker could prepare the body. He felt that the accusations from Adams's supporters had hastened her death and never forgave him. Rachel was buried at the Hermitage on Christmas Eve. "May God Almighty forgive her murderers," Jackson swore at her funeral. "I never can." Jackson's name has been associated with Jacksonian democracy, the shift and expansion of democracy as some political power passed from established elites to ordinary voters organized in political parties. "The Age of Jackson" shaped the national agenda and American politics. Jackson's philosophy as president was similar to that of Jefferson, advocating republican values held by the Revolutionary War generation. Jackson took a moral tone, with the belief that agrarian sympathies, and a limited view of states' rights and the federal government, would produce less corruption. He feared that monied and business interests would corrupt republican values. When South Carolina opposed the tariff law, he took a strong line in favor of nationalism and against secession. Jackson believed in the ability of the people to "arrive at right conclusions." They had the right not only to elect but to "instruct their agents & representatives." Office holders should either obey the popular will or resign. 
He rejected the view of a powerful and independent Supreme Court with binding decisions, arguing that "the Congress, the Executive, and the Court must each for itself be guided by its own opinions of the Constitution." Jackson thought that Supreme Court justices should be made to stand for election, and believed in strict constructionism as the best way to ensure democratic rule. He called for term limits on presidents and the abolition of the Electoral College. In this respect, Jackson has been described as "far ahead of his times–and maybe even further than this country can ever achieve." Jackson departed from the Hermitage on January 19 and arrived in Washington on February 11. He then set about choosing his cabinet members. Jackson chose Van Buren as expected for Secretary of State, Eaton of Tennessee as Secretary of War, Samuel D. Ingham of Pennsylvania as Secretary of the Treasury, John Branch of North Carolina as Secretary of the Navy, John M. Berrien of Georgia as Attorney General, and William T. Barry of Kentucky as Postmaster General. Jackson's first choice of cabinet proved to be unsuccessful, full of bitter partisanship and gossip. Jackson blamed Adams in part for what was said about Rachel during the campaign, and refused to meet him after arriving in Washington. Therefore, Adams chose not to attend the inauguration. On March 4, 1829, Andrew Jackson became the first United States president-elect to take the oath of office on the East Portico of the U.S. Capitol. In his inaugural speech, Jackson promised to respect the sovereign powers of states and the constitutional limits of the presidency. He also promised to pursue "reform" by removing power from "unfaithful or incompetent hands." At the conclusion of the ceremony, Jackson invited the public to the White House, where his supporters held a raucous party. Thousands of spectators overwhelmed the White House staff, and minor damage was caused to fixtures and furnishings. Jackson's populism earned him the nickname "King Mob." Jackson devoted a considerable amount of time during his early years in office to responding to what came to be known as the "Petticoat affair" or "Eaton affair." Washington gossip circulated among Jackson's cabinet members and their wives, including Calhoun's wife Floride Calhoun, concerning Secretary of War Eaton and his wife Peggy Eaton. Salacious rumors held that Peggy, as a barmaid in her father's tavern, had been sexually promiscuous or had even been a prostitute. Controversy also ensued because Peggy had married soon after her previous husband's death, and it was alleged that she and her husband had engaged in an adulterous affair while her previous husband was still living. Petticoat politics emerged when the wives of cabinet members, led by Mrs. Calhoun, refused to socialize with the Eatons. Allowing a prostitute in the official family was unthinkable—but Jackson refused to believe the rumors, telling his Cabinet that "She is as chaste as a virgin!" Jackson believed that the dishonorable people were the rumormongers, who, by attempting to drive the Eatons out, in essence questioned and dishonored Jackson himself by daring to tell him whom he could and could not have in his cabinet. Jackson was also reminded of the attacks that were made against his wife. These memories increased his dedication to defending Peggy Eaton. Meanwhile, the cabinet wives insisted that the interests and honor of all American women were at stake. 
They believed a responsible woman should never accord a man sexual favors without the assurance that went with marriage. A woman who broke that code was dishonorable and unacceptable. Historian Daniel Walker Howe notes that this was the feminist spirit that in the next decade shaped the women's rights movement. Secretary of State Martin Van Buren, a widower, was already forming a coalition against Calhoun. He could now see his main chance to strike hard; he took the side of Jackson and Eaton. In the spring of 1831, Jackson, at Van Buren's suggestion, demanded the resignations of all the cabinet members except Barry. Van Buren himself resigned to avoid the appearance of bias. In 1832, Jackson nominated Van Buren to be Minister to Great Britain. Calhoun blocked the nomination with a tie-breaking vote against it, claiming the defeated nomination would "...kill [Van Buren], sir, kill dead. He will never kick, sir, never kick." Van Buren continued to serve as an important adviser to Jackson and was placed on the ticket for vice president in the 1832 election, making him Jackson's heir apparent. The Petticoat affair led to the development of the Kitchen Cabinet, an unofficial group of advisors to the president. Its existence was partially rooted in Jackson's difficulties with his official cabinet, even after the purging. Throughout his eight years in office, Jackson made about 70 treaties with Native American tribes both in the South and in the Northwest. Jackson's presidency marked a new era in Indian-Anglo American relations, initiating a policy of Indian removal. Jackson himself sometimes participated in the treaty negotiating process with various Indian tribes, though at other times he left the negotiations to his subordinates. The southern tribes included the Choctaw, Creek, Chickasaw, Seminole, and Cherokee. The northwestern tribes included the Chippewa, Ottawa, and Potawatomi. Relations between Indians and Americans increasingly grew tense and sometimes violent as a result of territorial conflicts. Previous presidents had at times supported removal or attempts to "civilize" the Indians, but generally let the problem play itself out with minimal intervention. A growing popular and political movement had developed to deal with the issue, and out of it grew a policy to relocate certain Indian populations. Jackson, never known for timidity, became an advocate for this relocation policy in what many historians consider the most controversial aspect of his presidency. In his First Annual Message to Congress, Jackson advocated that land west of the Mississippi River be set aside for Indian tribes. On May 26, 1830, Congress passed the Indian Removal Act, which Jackson signed into law two days later. The Act authorized the president to negotiate treaties to buy tribal lands in the east in exchange for lands farther west, outside of existing state borders. The act specifically pertained to the Five Civilized Tribes in the South, the conditions being that they could either move west or stay and obey state law, effectively relinquishing their sovereignty. Jackson, Eaton, and General Coffee negotiated with the Chickasaw, who quickly agreed to move. Jackson put Eaton and Coffee in charge of negotiating with the Choctaw. Lacking Jackson's skills at negotiation, they frequently bribed the chiefs in order to gain their submission. The tactics worked, and the chiefs agreed to move. 
The removal of the Choctaw took place in the winter of 1831 and 1832, and was fraught with misery and suffering. The Seminole, despite the signing of the Treaty of Payne's Landing in 1832, refused to move. In December 1835, this dispute erupted into the Second Seminole War. The war lasted over six years, finally ending in 1842. Members of the Creek Nation had signed the Treaty of Cusseta in 1832, allowing the Creek to either sell or retain their land. Conflict later erupted between the Creek who remained and the white settlers, leading to a second Creek War. A common complaint amongst the tribes was that the men who had signed the treaties did not represent the whole tribe. The state of Georgia became involved in a contentious dispute with the Cherokee, culminating in the 1832 Supreme Court decision in "Worcester v. Georgia". Chief Justice John Marshall, writing for the court, ruled that Georgia could not forbid whites from entering tribal lands, as it had attempted to do with two missionaries supposedly stirring up resistance amongst the tribespeople. The following response is frequently attributed to Jackson: "John Marshall has made his decision, now let him enforce it." The quote, apparently indicating Jackson's dismissive view of the courts, was attributed to Jackson by Horace Greeley, who cited as his source Representative George N. Briggs. Remini argues that Jackson did not say it because, while it "certainly sounds like Jackson...[t]here was nothing for him to enforce." This is because a writ of "habeas corpus" had never been issued for the missionaries. The Court also did not ask federal marshals to carry out the decision, as had become standard. A group of Cherokees led by John Ridge negotiated the Treaty of New Echota. Ridge was not a widely recognized leader of the Cherokee, and this document was rejected by some as illegitimate. Another faction, led by John Ross, unsuccessfully petitioned to protest the proposed removal. The Cherokee largely considered themselves independent, and not subject to the laws of the United States or Georgia. The treaty was enforced by Jackson's successor, Van Buren. Subsequently, as many as 4,000 out of 18,000 Cherokees died on the "Trail of Tears" in 1838. More than 45,000 American Indians were relocated to the West during Jackson's administration, though a few Cherokees walked back afterwards or migrated to the high Smoky Mountains. The Black Hawk War took place during Jackson's presidency in 1832 after a group of Indians crossed into U.S. territory. In an effort to purge the government of corruption, Jackson launched presidential investigations into all executive Cabinet offices and departments. He believed appointees should be hired on merit, and he withdrew many candidates he believed were lax in their handling of monies. He asked Congress to reform embezzlement laws, reduce fraudulent applications for federal pensions, pass revenue laws to prevent evasion of customs duties, and enact laws to improve government accounting. Jackson's Postmaster General Barry resigned after a Congressional investigation into the postal service revealed mismanagement of mail services, collusion and favoritism in awarding lucrative contracts, and failure to audit accounts and supervise contract performance. Jackson replaced Barry with Treasury Auditor and prominent Kitchen Cabinet member Amos Kendall, who went on to implement much-needed reforms in the Post Office Department. 
Jackson repeatedly called for the abolition of the Electoral College by constitutional amendment in his annual messages to Congress as president. In his third annual message to Congress, he expressed the view: "I have heretofore recommended amendments of the Federal Constitution giving the election of President and Vice-President to the people and limiting the service of the former to a single term. So important do I consider these changes in our fundamental law that I can not, in accordance with my sense of duty, omit to press them upon the consideration of a new Congress." Although he was unable to implement this goal, Jackson's time in office did see a variety of other reforms. He supported an act in July 1836 that enabled widows of Revolutionary War soldiers who met certain criteria to receive their husbands' pensions. In 1836, Jackson established the ten-hour day in national shipyards. Jackson enforced the Tenure of Office Act, signed by President Monroe in 1820, which limited appointed office tenure and authorized the president to remove and appoint political party associates. Jackson believed that a rotation in office was actually a democratic reform, preventing father-to-son succession in office and making the civil service responsible to the popular will. Jackson declared that rotation of appointments in political office was "a leading principle in the republican creed." Jackson noted, "In a country where offices are created solely for the benefit of the people no one man has any more intrinsic right to official station than another." Jackson believed that rotating political appointments would prevent the development of a corrupt bureaucracy. The number of federal office holders removed by Jackson was exaggerated by his opponents; Jackson rotated only about 20% of federal office holders during his first term, some for dereliction of duty rather than political purposes. Jackson nonetheless used his presidential power to reward loyal Democrats by granting them federal office appointments. Jackson's approach incorporated patriotism for country as a qualification for holding office. Having appointed as postmaster a soldier who had lost his leg fighting on the battlefield, Jackson stated, "[i]f he lost his leg fighting for his country, that is ... enough for me." Jackson's theory regarding rotation of office generated what would later be called the spoils system. The political realities of Washington sometimes forced Jackson to make partisan appointments despite his personal reservations. Historians believe Jackson's presidency marked the beginning of an era of decline in public ethics. Supervision of bureaus and departments whose operations were outside of Washington (such as the New York Customs House; the Postal Service; the Departments of Navy and War; and the Bureau of Indian Affairs, whose budget had increased enormously in the previous two decades) proved to be difficult. Remini claims that because "friendship, politics, and geography constituted the President's total criteria for appointments, most of his appointments were predictably substandard." In 1828, Congress had approved the "Tariff of Abominations", which set the tariff at a historically high rate. Southern planters, who sold their cotton on the world market, strongly opposed this tariff, which they saw as favoring northern interests. The South now had to pay more for goods it did not produce locally, and other countries would have more difficulty affording southern cotton. 
The issue came to a head during Jackson's presidency, resulting in the Nullification Crisis, in which South Carolina threatened disunion. The South Carolina Exposition and Protest of 1828, secretly written by Calhoun, asserted that their state had the right to "nullify"—declare void—the tariff legislation of 1828. Although Jackson sympathized with the South in the tariff debate, he also vigorously supported a strong union, with effective powers for the central government. Jackson attempted to face down Calhoun over the issue, which developed into a bitter rivalry between the two men. One incident came at the April 13, 1830, Jefferson Day dinner, involving after-dinner toasts. Robert Hayne began by toasting to "The Union of the States, and the Sovereignty of the States." Jackson then rose, and in a booming voice added "Our federal Union: It must be preserved!" – a clear challenge to Calhoun. Calhoun clarified his position by responding "The Union: Next to our Liberty, the most dear!" In May 1830, Jackson discovered that Calhoun had asked President Monroe to censure then-General Jackson for his invasion of Spanish Florida in 1818 while Calhoun was serving as Secretary of War. Calhoun's and Jackson's relationship deteriorated further. By February 1831, the break between Calhoun and Jackson was final. Responding to inaccurate press reports about the feud, Calhoun had published letters between him and Jackson detailing the conflict in the "United States Telegraph". Jackson and Calhoun began an angry correspondence which lasted until Jackson stopped it in July. The "Telegraph", edited by Duff Green, had previously supported Jackson. After it took the side of Calhoun, Jackson needed a new organ for the administration. He enlisted the help of longtime supporter Francis Preston Blair, who in November 1830 established a newspaper known as the "Washington Globe", which from then on served as the primary mouthpiece of the Democratic Party. Jackson supported a revision to tariff rates known as the Tariff of 1832. It was designed to placate the nullifiers by lowering tariff rates. Written by Treasury Secretary Louis McLane, the bill lowered duties from 45% to 27%. In May, Representative John Quincy Adams introduced a slightly revised version of the bill, which Jackson accepted. It passed Congress on July 9 and was signed by the President on July 14. The bill ultimately failed to satisfy extremists on either side. On November 24, the South Carolina legislature officially nullified both the Tariff of 1832 and the Tariff of 1828. In response, Jackson sent U.S. Navy warships to Charleston harbor, and threatened to hang any man who worked to support nullification or secession. On December 28, 1832, Calhoun resigned as vice president to become a U.S. Senator for South Carolina. This was part of a strategy whereby Calhoun, with less than three months remaining on his vice presidential term, would replace Robert Y. Hayne in the Senate, who would then become governor. Hayne had often struggled to defend nullification on the floor of the Senate, especially against fierce criticism from Senator Daniel Webster of Massachusetts. 
In December 1832, Jackson issued a resounding proclamation against the "nullifiers," stating that he considered "the power to annul a law of the United States, assumed by one State, incompatible with the existence of the Union, contradicted expressly by the letter of the Constitution, unauthorized by its spirit, inconsistent with every principle on which it was founded, and destructive of the great object for which it was formed." South Carolina, the President declared, stood on "the brink of insurrection and treason," and he appealed to the people of the state to reassert their allegiance to that Union for which their ancestors had fought. Jackson also denied the right of secession: "The Constitution ... forms a government not a league ... To say that any State may at pleasure secede from the Union is to say that the United States are not a nation." Jackson tended to personalize the controversy, frequently characterizing nullification as a conspiracy between disappointed and bitter men whose ambitions had been thwarted. Jackson asked Congress to pass a "Force Bill" explicitly authorizing the use of military force to enforce the tariff. It was introduced by Senator Felix Grundy of Tennessee, and was quickly attacked by Calhoun as "military despotism." At the same time, Calhoun and Clay began to work on a new compromise tariff. A bill sponsored by the administration had been introduced by Representative Gulian C. Verplanck of New York, but it lowered rates more sharply than Clay and other protectionists desired. Clay managed to get Calhoun to agree to a bill with higher rates in exchange for Clay's opposition to Jackson's military threats and, perhaps, with the hope that he could win some Southern votes in his next bid for the presidency. The Compromise Tariff passed on March 1, 1833. The Force Bill passed the same day. Calhoun, Clay, and several others marched out of the chamber in opposition, with the only dissenting vote coming from John Tyler of Virginia. The new tariff was opposed by Webster, who argued that it essentially surrendered to South Carolina's demands. Jackson, despite his anger over the scrapping of the Verplanck bill and the new alliance between Clay and Calhoun, saw it as an efficient way to end the crisis. He signed both bills on March 2, starting with the Force Bill. The South Carolina Convention then met and rescinded its nullification ordinance, but in a final show of defiance, nullified the Force Bill. On May 1, Jackson wrote, "the tariff was only the pretext, and disunion and southern confederacy the real object. The next pretext will be the negro, or slavery question." Addressing the subject of foreign affairs in his First Annual Address to Congress, Jackson declared it to be his "settled purpose to ask nothing that is not clearly right and to submit to nothing that is wrong." When Jackson took office, spoliation claims, or compensation demands for the capture of American ships and sailors dating from the Napoleonic era, caused strained relations between the U.S. and French governments. The French Navy had captured American ships and sent them to Spanish ports, holding their crews captive and forcing them to labor without any charges or judicial process. According to Secretary of State Martin Van Buren, relations between the U.S. and France were "hopeless." Jackson's Minister to France, William C. Rives, was able through diplomacy to convince the French government to sign a reparations treaty on July 4, 1831, that would award the U.S. 
₣25,000,000 ($5,000,000) in damages. The French government became delinquent in payment due to internal financial and political difficulties. The French king Louis Philippe I and his ministers blamed the French Chamber of Deputies. By 1834, the French government's non-payment of reparations had drawn Jackson's ire, and he became impatient. In his December 1834 annual message to Congress, Jackson sternly reprimanded the French government for non-payment, stating that the federal government was "wholly disappointed" by the French, and demanded that Congress authorize trade reprisals against France. Feeling insulted by Jackson's words, the French people began pressuring their government not to pay the indemnity until Jackson had apologized for his remarks. In his December 1835 State of the Union Address, Jackson refused to apologize, stating he had a good opinion of the French people and his intentions were peaceful. Jackson described in lengthy and minute detail the history of events surrounding the treaty and his belief that the French government was purposely stalling payment. The French accepted Jackson's statements as sincere, and in February 1836, reparations were paid. In addition to France, the Jackson administration successfully settled spoliation claims with Denmark, Portugal, and Spain. Jackson's State Department was active and successful at making trade agreements with Russia, Spain, Turkey, Great Britain, and Siam. Under the treaty with Great Britain, American trade was reopened in the West Indies. The trade agreement with Siam was the first treaty between the United States and an Asian country. As a result, American exports increased 75% while imports increased 250%. Jackson's attempt to purchase Texas from Mexico for $5,000,000 failed. The chargé d'affaires in Mexico, Colonel Anthony Butler, suggested that the U.S. take Texas over militarily, but Jackson refused. Butler was later replaced toward the end of Jackson's presidency. In 1835, the Texas Revolution began when pro-slavery American settlers in Texas fought the Mexican government for Texan independence. By May 1836, they had routed the Mexican military, establishing an independent Republic of Texas. The new Texas government legalized slavery and demanded recognition from President Jackson and annexation into the United States. Jackson was hesitant to recognize Texas, unconvinced that the new republic could maintain independence from Mexico, and not wanting to make Texas an anti-slavery issue during the 1836 election. The strategy worked; the Democratic Party and national loyalties were held intact, and Van Buren was elected president. Jackson formally recognized the Republic of Texas, nominating Alcée Louis la Branche as chargé d'affaires on the last full day of his presidency, March 3, 1837. Jackson failed in his efforts to open trade with China and Japan and was unsuccessful at thwarting Great Britain's presence and power in South America. The 1832 presidential election demonstrated the rapid development and organization of political parties during this time period. The Democratic Party's first national convention, held in Baltimore, nominated Jackson's choice for vice president, Van Buren. The National Republican Party, which had held its first convention in Baltimore earlier, in December 1831, nominated Henry Clay, now a senator from Kentucky, and John Sergeant of Pennsylvania. The Anti-Masonic Party emerged by capitalizing on opposition to Freemasonry, centered primarily in New England, after the disappearance and possible murder of William Morgan. 
The party, which had earlier held its convention also in Baltimore in September 1831, nominated William Wirt of Maryland and Amos Ellmaker of Pennsylvania. Clay was, like Jackson, a Mason, and so some anti-Jacksonians who would have supported the National Republican Party supported Wirt instead. In 1816, the Second Bank of the United States was chartered by President James Madison to restore the United States economy devastated by the War of 1812. Monroe had appointed Nicholas Biddle as the Bank's executive. Jackson believed that the Bank was a fundamentally corrupt monopoly. Its stock was mostly held by foreigners, he insisted, and it exerted an unfair amount of control over the political system. Jackson used the issue to promote his democratic values, believing the Bank was being run exclusively for the wealthy. Jackson stated the Bank made "the rich richer and the potent more powerful." He accused it of making loans with the intent of influencing elections. In his address to Congress in 1830, Jackson called for a substitute for the Bank that would have no private stockholders and no ability to lend or purchase land. Its only power would be to issue bills of exchange. The address touched off fiery debate in the Senate. Thomas Hart Benton, now a strong supporter of the President despite the brawl years earlier, gave a speech strongly denouncing the Bank and calling for open debate on its recharter. Webster led a motion to narrowly defeat the resolution. Shortly afterward, the "Globe" announced that Jackson would stand for reelection. Despite his misgivings about the Bank, he supported a plan proposed in late 1831 by his moderately pro-Bank Treasury Secretary Louis McLane, who was secretly working with Biddle, to recharter a reformed version of the Bank in a way that would free up funds which would in turn be used to strengthen the military or pay off the nation's debt. This would be done, in part, through the sale of government stock in the Bank. Over the objections of Attorney General Roger B. Taney, an irreconcilable opponent of the Bank, he allowed McLane to publish a Treasury Report which essentially recommended rechartering the Bank. Clay hoped to make the Bank an issue in the election, so as to accuse Jackson of going beyond his powers if he vetoed a recharter bill. He and Webster urged Biddle to immediately apply for recharter rather than wait to reach a compromise with the administration. Biddle received advice to the contrary from moderate Democrats such as McLane and William Lewis, who argued that Biddle should wait because Jackson would likely veto the recharter bill. On January 6, 1832 Biddle submitted to Congress a renewal of the Bank's charter without any of the proposed reforms. The submission came four years before the original 20-year charter was to end. Biddle's recharter bill passed the Senate on June 11 and the House on July 3, 1832. Jackson determined to veto it. Many moderate Democrats, including McLane, were appalled by the perceived arrogance of the bill and supported his decision. When Van Buren met Jackson on July 4, Jackson declared, "The Bank, Mr. Van Buren, is trying to kill me. But I will kill it." Jackson officially vetoed the bill on July 10. The veto message was crafted primarily by Taney, Kendall, and Jackson's nephew and advisor Andrew Jackson Donelson. It attacked the Bank as an agent of inequality that supported only the wealthy. 
The veto was considered "one of the strongest and most controversial" presidential statements and "a brilliant political manifesto." The National Republican Party immediately made Jackson's veto of the Bank a political issue. Jackson's political opponents castigated the veto as "the very slang of the leveller and demagogue," claiming Jackson was using class warfare to gain support from the common man. At Biddle's direction, the Bank poured thousands of dollars into a campaign to defeat Jackson, seemingly confirming Jackson's view that it interfered in the political process. On July 21, Clay said privately, "The campaign is over, and I think we have won the victory." Jackson successfully portrayed his veto as a defense of the common man against governmental tyranny. Clay proved to be no match for Jackson's ability to resonate with the people and the Democratic Party's strong political networks. Democratic newspapers, parades, barbecues, and rallies increased Jackson's popularity. Jackson himself made numerous public appearances on his return trip from Tennessee to Washington, D.C. Jackson won the election by a landslide, receiving 54 percent of the popular vote and 219 electoral votes. Clay received 37 percent of the popular vote and 49 electoral votes. Wirt received only eight percent of the popular vote and seven electoral votes, while the Anti-Masonic Party eventually declined. Jackson believed the solid victory was a popular mandate for his veto of the Bank's recharter and his continued warfare on the Bank's control over the national economy. In 1833, Jackson attempted to begin removing federal deposits from the bank, whose money-lending functions were taken over by the legions of local and state banks that materialized across America, thus drastically increasing credit and speculation. Jackson's moves were greatly controversial. He removed McLane from the Treasury Department, having him serve instead as Secretary of State, replacing Edward Livingston. He replaced McLane at the Treasury with William J. Duane. In September, he fired Duane for refusing to remove the deposits. Signaling his intent to continue battling the Bank, he replaced Duane with Taney. Under Taney, the deposits began to be removed. They were placed in a variety of state banks which were friendly to the administration's policies, known to critics as pet banks. Biddle responded by stockpiling the Bank's reserves and contracting credit, thus causing interest rates to rise and bringing about a financial panic. The moves were intended to force Jackson into a compromise. "Nothing but the evidence of suffering abroad will produce any effect in Congress," Biddle wrote. At first, Biddle's strategy was successful, putting enormous pressure on Jackson. But Jackson handled the situation well. When people came to him complaining, he referred them to Biddle, saying that he was the man who had "all the money." Jackson's approach worked; Biddle's strategy backfired, increasing anti-Bank sentiment. In 1834, those who disagreed with Jackson's expansion of executive power united and formed the Whig Party, calling Jackson "King Andrew I," and named their party after the English Whigs who had opposed the seventeenth-century British monarchy. A movement emerged among Whigs in the Senate to censure Jackson. The censure was a political maneuver spearheaded by Clay, which served only to perpetuate the animosity between him and Jackson. Jackson called Clay "reckless and as full of fury as a drunken man in a brothel." On March 28, the Senate voted to censure Jackson 26–20. 
It also rejected Taney as Treasury Secretary. The House, however, led by Ways and Means Committee chairman James K. Polk, declared on April 4 that the Bank "ought not to be rechartered" and that the deposits "ought not to be restored." It also voted to continue to allow pet banks to be places of deposit and voted even more overwhelmingly to investigate whether the Bank had deliberately instigated the panic. Jackson called the passage of these resolutions a "glorious triumph." It essentially sealed the Bank's demise. The Democrats later suffered a temporary setback. Polk ran for Speaker of the House to replace Andrew Stevenson. After southerners discovered his connection to Van Buren, he was defeated by fellow-Tennessean John Bell, a Democrat-turned-Whig who opposed Jackson's removal policy. The national economy boomed following the withdrawal of the remaining funds from the Bank, and the federal government, through duty revenues and the sale of public lands, was able to pay all its bills. On January 1, 1835, Jackson paid off the entire national debt, the only time in U.S. history that has been accomplished. The objective had been reached in part through Jackson's reforms aimed at eliminating the misuse of funds and through his vetoes of legislation he deemed extravagant. In December 1835, Polk defeated Bell in a rematch and was elected Speaker. Finally, on January 16, 1837, when the Jacksonians had a majority in the Senate, the censure was expunged after years of effort by Jackson supporters. The expunction movement was led, ironically, by Benton. In 1836, in response to increased land speculation, Jackson issued the Specie Circular, an executive order that required buyers of government lands to pay in "specie" (gold or silver coins). The result was high demand for specie, which many banks could not meet in exchange for their notes, contributing to the Panic of 1837. The White House Van Buren biography notes, "Basically the trouble was the 19th-century cyclical economy of 'boom and bust,' which was following its regular pattern, but Jackson's financial measures contributed to the crash. His destruction of the Second Bank of the United States had removed restrictions upon the inflationary practices of some state banks; wild speculation in lands, based on easy bank credit, had swept the West. To end this speculation, Jackson in 1836 had issued a Specie Circular..." The first recorded physical attack on a U.S. president was directed at Jackson. He had ordered the dismissal of Robert B. Randolph from the navy for embezzlement. On May 6, 1833, Jackson sailed on USS "Cygnet" to Fredericksburg, Virginia, where he was to lay the cornerstone on a monument near the grave of Mary Ball Washington, George Washington's mother. During a stopover near Alexandria, Randolph appeared and struck the President. He fled the scene, chased by several members of Jackson's party, including the writer Washington Irving. Jackson declined to press charges. On January 30, 1835, what is believed to be the first attempt to kill a sitting president of the United States occurred just outside the United States Capitol. When Jackson was leaving through the East Portico after the funeral of South Carolina Representative Warren R. Davis, Richard Lawrence, an unemployed house painter from England, aimed a pistol at Jackson, but it misfired. Lawrence then pulled out a second pistol, which also misfired. Historians believe the humid weather contributed to the double misfiring. Jackson, infuriated, attacked Lawrence with his cane. 
Others present, including Davy Crockett, restrained and disarmed Lawrence. Lawrence offered a variety of explanations for the shooting. He blamed Jackson for the loss of his job. He claimed that with the President dead, "money would be more plenty" (a reference to Jackson's struggle with the Bank of the United States), and that he "could not rise until the President fell." Finally, Lawrence told his interrogators that he was a deposed English king (specifically, Richard III, dead since 1485) and that Jackson was his clerk. He was deemed insane and was institutionalized. Afterwards, the pistols were tested and retested. Each time they performed perfectly. Many believed that Jackson had been protected by the same Providence that also protected their young nation. The incident became a part of Jacksonian mythos. Jackson initially suspected that a number of his political enemies might have orchestrated the attempt on his life. His suspicions were never proven. During the summer of 1835, Northern abolitionists began sending anti-slavery tracts through the postal system into the South. Pro-slavery Southerners demanded that the postal service ban distribution of the materials, which were deemed "incendiary," and some began to riot. Jackson wanted sectional peace, and desired to placate Southerners ahead of the 1836 election. He fiercely disliked the abolitionists, who, he believed, were attempting to destroy the Union by inciting sectional jealousies. Jackson also did not want to condone open insurrection. He supported the solution of Postmaster General Amos Kendall, which gave Southern postmasters discretionary powers to either send or detain the anti-slavery tracts. That December, Jackson called on Congress to prohibit the circulation through the South of "incendiary publications intended to instigate the slaves to insurrection." Jackson initially opposed any federal exploratory scientific expeditions during his first term in office. The last federally funded scientific expeditions had taken place from 1817 to 1823, led by Stephen H. Long on the Red River of the North. Jackson's predecessor, President Adams, attempted to launch a scientific oceanic exploration in 1828, but Congress was unwilling to fund the effort. When Jackson assumed office in 1829, he shelved Adams' expedition plans. Eventually, wanting to establish his presidential legacy, similar to Jefferson and the Lewis and Clark Expedition, Jackson sponsored scientific exploration during his second term. On May 18, 1836, Jackson signed a law creating and funding the oceanic United States Exploring Expedition. Jackson put Secretary of the Navy Mahlon Dickerson in charge of assembling suitable ships, officers, and scientific staff for the expedition, with a planned launch before Jackson's term of office expired. Dickerson proved unfit for the task; preparations stalled, and the expedition was not launched until 1838, during the presidency of Van Buren. One brig later used in the expedition, commissioned by Secretary Dickerson in May 1836, circumnavigated the world and explored and mapped the Southern Ocean, confirming the existence of the Antarctic continent. In spite of economic success following Jackson's vetoes and war against the Bank, reckless speculation in land and railroads eventually caused the Panic of 1837. 
Contributing factors included Jackson's 1832 veto of the recharter of the Second Bank of the United States and the subsequent transfer of federal monies to state banks in 1833, which caused western banks to relax their lending standards. Two other Jacksonian acts in 1836 contributed to the Panic of 1837: the Specie Circular, which mandated that western lands be purchased only with money backed by gold and silver, and the Deposit and Distribution Act, which transferred federal monies from eastern to western state banks and in turn led to a speculation frenzy by banks. Jackson's Specie Circular, although designed to reduce speculation and stabilize the economy, left many investors unable to pay their loans in gold and silver. The same year there was a downturn in Great Britain's economy that stopped investment in the United States. As a result, the U.S. economy went into a depression, banks became insolvent, the national debt (previously paid off) increased, business failures rose, cotton prices dropped, and unemployment dramatically increased. The depression that followed lasted for four years until 1841, when the economy began to rebound. Jackson appointed six Justices to the Supreme Court. Most were undistinguished. His first appointee, John McLean, was nominated to the Court after Barry agreed to take his place as postmaster general. McLean "turned Whig and forever schemed to win" the presidency. His next two appointees, Henry Baldwin and James Moore Wayne, disagreed with Jackson on some points but were poorly regarded even by Jackson's enemies. In reward for his services, Jackson nominated Taney to the Court to fill a vacancy in January 1835, but the nomination failed to win Senate approval. Chief Justice Marshall died in 1835, leaving two vacancies on the court. Jackson nominated Taney for Chief Justice and Philip Pendleton Barbour for Associate Justice. Both were confirmed by the new Senate. Taney served as Chief Justice until 1864, presiding over a court that upheld many of the precedents set by the Marshall Court. He was generally regarded as a good and respectable judge, but his opinion in "Dred Scott v. Sandford" largely overshadows his career. On the last full day of his presidency, Jackson nominated John Catron, who was confirmed. Two new states were admitted into the Union during Jackson's presidency: Arkansas (June 15, 1836) and Michigan (January 26, 1837). Both states increased Democratic power in Congress and helped Van Buren win the presidency in 1836. This was in keeping with the tradition that new states would support the party which had done the most to admit them. In 1837, after serving two terms as president, Jackson was replaced by his chosen successor, Martin Van Buren, and retired to the Hermitage. He immediately began putting it in order, as it had been poorly managed in his absence by his adopted son, Andrew Jr. Although he suffered ill health, Jackson remained highly influential in both national and state politics. He was a firm advocate of the federal union of the states and rejected any talk of secession, insisting, "I will die with the Union." Blamed for causing the Panic of 1837, he was unpopular in his early retirement. Jackson continued to denounce the "perfidy and treachery" of banks and urged his successor, Van Buren, to repudiate the Specie Circular as president. 
As a solution to the panic, he supported an Independent Treasury system, which was designed to hold the money balances of the government in the form of gold or silver and would be restricted from printing paper money so as to prevent further inflation. A coalition of conservative Democrats and Whigs opposed the bill, and it was not passed until 1840. During the delay, no effective remedy had been implemented for the depression. Van Buren grew deeply unpopular. A unified Whig Party nominated popular war hero William Henry Harrison and former Jacksonian John Tyler in the 1840 presidential election. The Whigs' campaign style in many ways mimicked that of the Democrats when Jackson ran. They depicted Van Buren as an aristocrat who did not care for the concerns of ordinary Americans, while glorifying Harrison's military record and portraying him as a man of the people. Jackson campaigned heavily for Van Buren in Tennessee. He favored the nomination of Polk for vice president at the 1840 Democratic National Convention over controversial incumbent Richard Mentor Johnson. No nominee was chosen, and the party chose to leave the decision up to individual state electors. Harrison won the election, and the Whigs captured majorities in both houses of Congress. "The democracy of the United States has been shamefully beaten," Jackson wrote to Van Buren, "but I trust, not conquered." Harrison died only a month into his term, and was replaced by Tyler. Jackson was encouraged because Tyler had a strong independent streak and was not bound by party lines. Sure enough, Tyler quickly incurred the wrath of the Whigs in 1841 when he vetoed two Whig-sponsored bills to establish a new national bank, bringing satisfaction to Jackson and other Democrats. After the second veto, Tyler's entire cabinet, with the exception of Daniel Webster, resigned. Jackson strongly favored the annexation of Texas, a feat he had been unable to accomplish during his own presidency. While Jackson still feared that annexation would stir up anti-slavery sentiment, his belief that the British would use Texas as a base to threaten the United States overrode his other concerns. He also insisted that Texas was part of the Louisiana Purchase and therefore rightfully belonged to the United States. At the request of Senator Robert J. Walker of Mississippi, acting on behalf of the Tyler administration, which also supported annexation, Jackson wrote several letters to Texas President Sam Houston, urging him to wait for the Senate to approve annexation and lecturing him on how much being a part of the United States would benefit Texas. Prior to the 1844 election, Jackson initially supported Van Buren for president and Polk for vice president. A treaty of annexation was signed by Tyler on April 12, 1844, and submitted to the Senate. When a letter from Secretary of State Calhoun to British Ambassador Richard Pakenham linking annexation to slavery was made public, anti-annexation sentiment exploded in the North and the treaty failed to win ratification. Van Buren decided to write the "Hammet letter," opposing annexation. This effectively extinguished any support that Van Buren might previously have enjoyed in the South. The Whig nominee, Henry Clay, also opposed annexation, and Jackson recognized the need for the Democrats to nominate a candidate who supported it and could therefore gain the support of the South. 
If the plan failed, Jackson warned, Texas would not join the Union and would potentially fall victim to a Mexican invasion supported by the British. Jackson met with Polk, Robert Armstrong, and Andrew Jackson Donelson in his study. He then pointed directly at a startled Polk, telling him that, as a man from the southwest and a supporter of annexation, he would be the perfect candidate. Polk called the scheme "utterly abortive," but agreed to go along with it. At the 1844 Democratic National Convention, Polk emerged as the party's nominee after Van Buren failed to win the required two-thirds majority of delegates. George M. Dallas was selected for vice president. Jackson convinced Tyler to drop his plans of running for re-election as an independent by promising, as Tyler requested, to welcome the president and his allies back into the Democratic Party and by instructing Blair to stop criticizing the president. Polk won the election, defeating Clay. A bill of annexation was passed by Congress in February and signed by Tyler on March 1. Jackson died at his plantation on June 8, 1845, at the age of 78, of chronic dropsy and heart failure. According to a newspaper account from the Boon Lick Times, "[he] fainted whilst being removed from his chair to the bed ... but he subsequently revived ... Gen. Jackson died at the Hermitage at 6 o'clock P.M. on Sunday the 8th instant. ... When the messenger finally came, the old soldier, patriot and Christian was looking out for his approach. He is gone, but his memory lives, and will continue to live." In his will, Jackson left his entire estate to his adopted son, Andrew Jackson Jr., except for specifically enumerated items that were left to various friends and other family members. Jackson had three adopted sons: Theodore, an Indian about whom little is known, Andrew Jackson Jr., the son of Rachel's brother Severn Donelson, and Lyncoya, a Creek Indian orphan adopted by Jackson after the Battle of Tallushatchee. Lyncoya died of tuberculosis on July 1, 1828, at the age of sixteen. The Jacksons also acted as guardians for eight other children. John Samuel Donelson, Daniel Smith Donelson, and Andrew Jackson Donelson were the sons of Rachel's brother Samuel Donelson, who died in 1804. Andrew Jackson Hutchings was Rachel's orphaned grand nephew. Caroline Butler, Eliza Butler, Edward Butler, and Anthony Butler were the orphaned children of Edward Butler, a family friend. They came to live with the Jacksons after the death of their father. The widower Jackson invited Rachel's niece Emily Donelson to serve as hostess at the White House. Emily was married to Andrew Jackson Donelson, who acted as Jackson's private secretary and in 1856 ran for vice president on the American Party ticket. The relationship between the President and Emily became strained during the Petticoat affair, and the two became estranged for over a year. They eventually reconciled and she resumed her duties as White House hostess. Sarah Yorke Jackson, the wife of Andrew Jackson Jr., became co-hostess of the White House in 1834. It was the only time in history when two women simultaneously acted as unofficial First Lady. Sarah took over all hostess duties after Emily died from tuberculosis in 1836. Jackson used Rip Raps as a retreat. Jackson's quick temper was notorious. Biographer H. W. Brands notes that his opponents were terrified of his temper: "Observers likened him to a volcano, and only the most intrepid or recklessly curious cared to see it erupt. ... 
His close associates all had stories of his blood-curdling oaths, his summoning of the Almighty to loose His wrath upon some miscreant, typically followed by his own vow to hang the villain or blow him to perdition. Given his record—in duels, brawls, mutiny trials, and summary hearings—listeners had to take his vows seriously." On the last day of his presidency, Jackson admitted that he had but two regrets: that he "had been unable to shoot Henry Clay or to hang John C. Calhoun." On his deathbed, he was once again quoted as regretting that he had not hanged Calhoun for treason. "My country would have sustained me in the act, and his fate would have been a warning to traitors in all time to come," he said. Remini expresses the opinion that Jackson was typically in control of his temper, and that he used his anger, along with his fearsome reputation, as a tool to get what he wanted. Jackson was a tall, lean figure with an unruly shock of red hair, which had completely grayed by the time he became president at age 61. He had penetrating deep blue eyes. Jackson was one of the more sickly presidents, suffering from chronic headaches, abdominal pains, and a hacking cough. Much of his trouble was caused by a musket ball in his lung that was never removed, which often caused him to cough up blood and sometimes made his whole body shake. In 1838, Jackson became an official member of the First Presbyterian Church in Nashville. Both his mother and his wife had been devout Presbyterians all their lives, but Jackson himself had postponed officially entering the church in order to avoid accusations that he was joining only for political reasons. Jackson was a Freemason, initiated at Harmony Lodge No. 1 in Tennessee; he also participated in chartering several other lodges in Tennessee. He was the only U.S. president to have served as Grand Master of a state's Grand Lodge until Harry S. Truman in 1945. His Masonic apron is on display in the Tennessee State Museum. An obelisk and bronze Masonic plaque decorate his tomb at the Hermitage. Jackson remains one of the most studied and controversial figures in American history. Historian Charles Grier Sellers says, "Andrew Jackson's masterful personality was enough by itself to make him one of the most controversial figures ever to stride across the American stage." There has never been universal agreement on Jackson's legacy, for "his opponents have ever been his most bitter enemies, and his friends almost his worshippers." He was always a fierce partisan, with many friends and many enemies. He has been lauded as the champion of the common man, while criticized for his treatment of Indians and for other matters. James Parton was the first man after Jackson's death to write a full biography of him, attempting to sum up the contradictions in his subject. Jackson was criticized by his contemporary Alexis de Tocqueville in "Democracy in America" for flattering the dominant ideas of his time, including mistrust of federal power, and for sometimes enforcing his views by force, with disrespect toward the institutions and the law. In the 20th century, Jackson was written about by many admirers. Arthur M. Schlesinger Jr.'s "Age of Jackson" (1945) depicts Jackson as a man of the people battling inequality and upper-class tyranny. From the 1970s to the 1980s, Robert Remini published a three-volume biography of Jackson followed by an abridged one-volume study. Remini paints a generally favorable portrait of Jackson. 
He contends that Jacksonian democracy "stretches the concept of democracy about as far as it can go and still remain workable. ... As such it has inspired much of the dynamic and dramatic events of the nineteenth and twentieth centuries in American history—Populism, Progressivism, the New and Fair Deals, and the programs of the New Frontier and Great Society." To Remini, Jackson serves as "the embodiment of the new American ... This new man was no longer British. He no longer wore the queue and silk pants. He wore trousers, and he had stopped speaking with a British accent." Other 20th-century writers such as Richard Hofstadter and Bray Hammond depict Jackson as an advocate of the sort of "laissez-faire" capitalism that benefits the rich and oppresses the poor. Jackson's initiatives to deal with the conflicts between Indians and American settlers have been a source of controversy. Starting mainly around 1970, Jackson came under attack from some historians on this issue. Howard Zinn called him "the most aggressive enemy of the Indians in early American history" and "exterminator of Indians." In 1969, Francis Paul Prucha argued that Jackson's removal of the "Five Civilized Tribes" from the extremely hostile white environment in the Old South to Oklahoma probably saved their very existence. Similarly, Remini claims that, if not for Jackson's policies, the Southern tribes would have been totally wiped out, just like other tribes, namely the Yamasee, Mahican, and Narragansett, which did not move. Jackson has long been honored, along with Thomas Jefferson, in the Jefferson–Jackson Day fundraising dinners held by state Democratic Party organizations to honor the two men whom the party regards as its founders. Because both Jefferson and Jackson were slave owners, as well as because of Jackson's Indian removal policies, many state party organizations have renamed the dinners. Brands argues that Jackson's reputation has suffered since the 1960s as his actions towards Indians and African Americans received new attention. He also claims that the Indian controversy overshadowed Jackson's other achievements. Observing shifting attitudes on different national issues, Brands notes that Jackson was often hailed during his lifetime as the "second George Washington," because, while Washington had fought for independence, Jackson confirmed it at New Orleans and made the United States a great world power. Over time, while the Revolution has maintained a strong presence in the public consciousness, memory of the War of 1812, including the Battle of New Orleans, has sharply declined. Brands argues that this is because once America had become a military power, "it was easy to think that America had been destined for this role from the beginning." Still, Jackson's performance in office has generally been ranked in the top half in public opinion polling. His position in C-SPAN's poll dropped from 13th in 2009 to 18th in 2017. Jackson has appeared on U.S. banknotes since as early as 1869, extending into the 21st century. His image has appeared on the $5, $10, $20, and $10,000 notes. Most recently, his image has appeared on the U.S. $20 Federal Reserve Note, beginning in 1928. In 2016, Treasury Secretary Jack Lew announced his goal that by 2020 an image of Harriet Tubman would replace Jackson's depiction on the front side of the $20 banknote, and that an image of Jackson would be placed on the reverse side, though the final decision will be made by his successors. Jackson has appeared on several postage stamps. 
He first appeared on an 1863 two-cent stamp, which is commonly referred to by collectors as the "Black Jack" due to the large portrait of Jackson on its face, printed in pitch black. During the American Civil War, the Confederate government also issued two postage stamps bearing Jackson's portrait, both in 1863. Numerous counties and cities are named after him, including the cities of Jacksonville in Florida and North Carolina; the cities of Jackson in Louisiana, Michigan, Mississippi, Missouri, and Tennessee; Jackson County in Florida, Illinois, Michigan, Mississippi, Missouri, Ohio, and Oregon; and Jackson Parish in Louisiana. Memorials to Jackson include a set of four identical equestrian statues by the sculptor Clark Mills: in Lafayette Square, Washington, D.C.; in Jackson Square, New Orleans; in Nashville on the grounds of the Tennessee State Capitol; and in Jacksonville, Florida. Other equestrian statues of Jackson have been erected elsewhere, as in the State Capitol grounds in Raleigh, North Carolina. That statue controversially identifies him as one of the "presidents North Carolina gave the nation," and he is featured alongside James Polk and Andrew Johnson, both U.S. presidents born in North Carolina. There is a bust of Andrew Jackson in Plaza Ferdinand VII in Pensacola, Florida, where he became the first governor of the Florida Territory in 1821. There is also a 1928 bronze sculpture of Andrew Jackson by Belle Kinney Scholz and Leopold Scholz in the U.S. Capitol Building as part of the National Statuary Hall Collection. Jackson and his wife Rachel were the main subjects of a 1951 historical novel by Irving Stone, "The President's Lady", which told the story of their lives up until Rachel's death. The novel was the basis for the 1953 film of the same name starring Charlton Heston as Jackson and Susan Hayward as Rachel. Jackson has been a supporting character in a number of historical films and television productions. Lionel Barrymore played Jackson in "The Gorgeous Hussy" (1936), a fictionalized biography of Peggy Eaton starring Joan Crawford. "The Buccaneer" (1938), depicting the Battle of New Orleans, included Hugh Sothern as Jackson, and was remade in 1958 with Heston again playing Jackson. Basil Ruysdael played Jackson in Walt Disney's 1955 "Davy Crockett" TV miniseries. Wesley Addy appeared as Jackson in some episodes of the 1976 PBS miniseries "The Adams Chronicles". Jackson is the protagonist of the comedic historic rock musical "Bloody Bloody Andrew Jackson" (2008) with music and lyrics by Michael Friedman and book by Alex Timbers. Andrew Johnson Andrew Johnson (December 29, 1808 – July 31, 1875) was the 17th President of the United States, serving from 1865 to 1869. Johnson became president as he was vice president at the time of the assassination of Abraham Lincoln. A Democrat who ran with Lincoln on the National Union ticket, Johnson came to office as the Civil War concluded. The new president favored quick restoration of the seceded states to the Union. His plans did not give protection to the former slaves, and he came into conflict with the Republican-dominated Congress, culminating in his impeachment by the House of Representatives. He was acquitted in the Senate by one vote. Johnson was born in poverty in Raleigh, North Carolina, and never attended school. Apprenticed as a tailor, he worked in several frontier towns before settling in Greeneville, Tennessee. 
He served as alderman and mayor there before being elected to the Tennessee House of Representatives in 1835. After brief service in the Tennessee Senate, Johnson was elected to the federal House of Representatives in 1843, where he served five two-year terms. He became Governor of Tennessee for four years, and was elected by the legislature to the U.S. Senate in 1857. In his congressional service, he sought passage of the Homestead Bill, which was enacted soon after he left his Senate seat in 1862. As Southern slave states, including Tennessee, seceded to form the Confederate States of America, Johnson remained firmly with the Union. He was the only sitting senator from a Confederate state who did not resign his seat upon learning of his state's secession. In 1862, Lincoln appointed him as military governor of Tennessee after most of it had been retaken. In 1864, Johnson, as a War Democrat and Southern Unionist, was a logical choice as running mate for Lincoln, who wished to send a message of national unity in his re-election campaign; their ticket easily won. When Johnson was sworn in as vice president in March 1865, he gave a rambling speech, after which he secluded himself to avoid public ridicule. Six weeks later, the assassination of Lincoln made him president. Johnson implemented his own form of Presidential Reconstruction – a series of proclamations directing the seceded states to hold conventions and elections to re-form their civil governments. When Southern states returned many of their old leaders, and passed Black Codes to deprive the freedmen of many civil liberties, Congressional Republicans refused to seat legislators from those states and advanced legislation to overrule the Southern actions. Johnson vetoed their bills, and Congressional Republicans overrode him, setting a pattern for the remainder of his presidency. Johnson opposed the Fourteenth Amendment, which gave citizenship to former slaves. In 1866, Johnson went on an unprecedented national tour promoting his executive policies, seeking to destroy his Republican opponents. As the conflict between the branches of government grew, Congress passed the Tenure of Office Act, restricting Johnson's ability to fire Cabinet officials. When he persisted in trying to dismiss Secretary of War Edwin Stanton, he was impeached by the House of Representatives, and narrowly avoided conviction in the Senate and removal from office. After failing to win the 1868 Democratic presidential nomination, Johnson left office in 1869. Returning to Tennessee after his presidency, Johnson sought political vindication, and gained it in his eyes when he was elected to the Senate again in 1875, making Johnson the only former president to serve in the Senate. He died just months into his term. While some admire Johnson's strict constitutionalism, his strong opposition to federally guaranteed rights for African Americans is widely criticized. He is regarded by many historians as one of the worst presidents in American history. Andrew Johnson was born in Raleigh, North Carolina, on December 29, 1808, to Jacob Johnson (1778–1812) and Mary ("Polly") McDonough (1783–1856), a laundress. He was of English, Scottish, and Irish ancestry. He had a brother William, four years his senior, and an older sister Elizabeth, who died in childhood. Johnson's birth in a two-room shack was a political asset in the mid-19th century, and he would frequently remind voters of his humble origins. 
Jacob Johnson was a poor man, as had been his father, William Johnson, but he became town constable of Raleigh before marrying and starting a family. Both Jacob and Mary were illiterate, and had worked as tavern servants, while Johnson never attended school. Johnson grew up in poverty. Jacob died of an apparent heart attack while ringing the town bell, shortly after rescuing three drowning men, when his son Andrew was three. Polly Johnson worked as a washerwoman and became the sole support of her family. Her occupation was then looked down on, as it often took her into other homes unaccompanied. There were even rumors that Andrew, who did not resemble his brother or sister, had been fathered by another man. Polly Johnson eventually remarried, to Turner Doughtry, who was as poor as she was. Johnson's mother apprenticed her son William to a tailor, James Selby. Andrew also became an apprentice in Selby's shop at age ten and was legally bound to serve until his 21st birthday. Johnson lived with his mother for part of his service, and one of Selby's employees taught him rudimentary literacy skills. His education was augmented by citizens who would come to Selby's shop to read to the tailors as they worked. Even before he became an apprentice, Johnson came to listen. The readings caused a lifelong love of learning, and one of his biographers, Annette Gordon-Reed, suggests that Johnson, later a gifted public speaker, learned the art as he threaded needles and cut cloth. Johnson was not happy at James Selby's, and after about five years, both he and his brother ran away. Selby responded by placing a reward for their return: "Ten Dollars Reward. Ran away from the subscriber, two apprentice boys, legally bound, named William and Andrew Johnson ... [payment] to any person who will deliver said apprentices to me in Raleigh, or I will give the above reward for Andrew Johnson alone." The brothers went to Carthage, North Carolina, where Andrew Johnson worked as a tailor for several months. Fearing he would be arrested and returned to Raleigh, Johnson moved to Laurens, South Carolina. He found work quickly, met his first love, Mary Wood, and made her a quilt as a gift. However, she rejected his marriage proposal. He returned to Raleigh, hoping to buy out his apprenticeship, but could not come to terms with Selby. Unable to stay in Raleigh, where he risked being apprehended for abandoning Selby, he decided to move west. Johnson left North Carolina for Tennessee, traveling mostly on foot. After a brief period in Knoxville, he moved to Mooresville, Alabama. He then worked as a tailor in Columbia, Tennessee, but was called back to Raleigh by his mother and stepfather, who saw limited opportunities there and who wished to emigrate west. Johnson and his party traveled through the Blue Ridge Mountains to Greeneville, Tennessee. Andrew Johnson fell in love with the town at first sight, and when he became prosperous purchased the land where he had first camped and planted a tree in commemoration. In Greeneville, Johnson established a successful tailoring business in the front of his home. In 1827, at the age of 18, he married 16-year-old Eliza McCardle, the daughter of a local shoemaker. The pair were married by Justice of the Peace Mordecai Lincoln, first cousin of Thomas Lincoln, whose son would become president. The Johnsons were married for almost 50 years and had five children: Martha (1828), Charles (1830), Mary (1832), Robert (1834), and Andrew Jr. (1852). 
Though she suffered from tuberculosis, Eliza supported her husband's endeavors. She taught him mathematics skills and tutored him to improve his writing. Shy and retiring by nature, Eliza Johnson usually remained in Greeneville during Johnson's political rise. She was not often seen during her husband's presidency; their daughter Martha usually served as official hostess. Johnson's tailoring business prospered during the early years of the marriage, enabling him to hire help and giving him the funds to invest profitably in real estate. He later boasted of his talents as a tailor: "my work never ripped or gave way." He was a voracious reader. Books about famous orators aroused his interest in political dialogue, and he had private debates on the issues of the day with customers who held opposing views. He also took part in debates at Greeneville College. Johnson helped organize a mechanics' (working men's) ticket in the 1829 Greeneville municipal election. He was elected town alderman, along with his friends Blackston McDannel and Mordecai Lincoln. Following the 1831 Nat Turner slave rebellion, a state convention was called to pass a new constitution, including provisions to disenfranchise free people of color. The convention also wanted to reform real estate tax rates, and provide ways of funding improvements to Tennessee's infrastructure. The constitution was submitted for a public vote, and Johnson spoke widely for its adoption; the successful campaign provided him with statewide exposure. On January 4, 1834, his fellow aldermen elected him mayor of Greeneville. In 1835, Johnson made a bid for election to the "floater" seat which Greene County shared with neighboring Washington County in the Tennessee House of Representatives. According to his biographer, Hans L. Trefousse, Johnson "demolished" the opposition in debate and won the election by almost a two-to-one margin. Soon after taking his seat, Johnson purchased his first slave, Dolly, aged 14. Dolly had three children over the years. Johnson had the reputation of treating his slaves kindly, and the fact that Dolly was dark-skinned, and her offspring much lighter, led to speculation both during and after his lifetime that he was the father. During his Greeneville days, Johnson joined the Tennessee Militia as a member of the 90th Regiment. He attained the rank of colonel, though while an enrolled member he was fined for an unknown offense. Afterwards, he was often addressed or referred to by his rank. In his first term in the legislature, which met in the state capital of Nashville, Johnson did not consistently vote with either the Democratic or the newly formed Whig Party, though he revered President Andrew Jackson, a Democrat and fellow Tennessean. The major parties were still determining their core values and policy proposals, with the party system in a state of flux. The Whig Party had organized in opposition to Jackson, fearing the concentration of power in the Executive Branch of the government; Johnson differed from the Whigs as he opposed more than minimal government spending and spoke against aid for the railroads, while his constituents hoped for improvements in transportation. After Brookins Campbell and the Whigs defeated Johnson for re-election in 1837, Johnson would not lose another race for thirty years. In 1839, he sought to regain his seat, initially as a Whig, but when another candidate sought the Whig nomination, he ran as a Democrat and was elected. 
From that time he supported the Democratic Party and built a powerful political machine in Greene County. Johnson became a strong advocate of the Democratic Party, noted for his oratory, and in an era when public speaking both informed the public and entertained it, people flocked to hear him. In 1840, Johnson was selected as a presidential elector for Tennessee, giving him more statewide publicity. Although Democratic President Martin Van Buren was defeated by former Ohio senator William Henry Harrison, Johnson was instrumental in keeping Tennessee and Greene County in the Democratic column. He was elected to the Tennessee Senate in 1841, where he served a two-year term. He had achieved financial success in his tailoring business, but sold it to concentrate on politics. He had also acquired additional real estate, including a larger home and a farm (where his mother and stepfather took residence), and among his assets numbered eight or nine slaves. Having served in both houses of the state legislature, Johnson saw election to Congress as the next step in his political career. He engaged in a number of political maneuvers to gain Democratic support, including the displacement of the Whig postmaster in Greeneville, and defeated Jonesborough lawyer John A. Aiken by 5,495 votes to 4,892. In Washington, he joined a new Democratic majority in the House of Representatives. Johnson advocated for the interests of the poor, maintained an anti-abolitionist stance, argued for only limited spending by the government and opposed protective tariffs. With Eliza remaining in Greeneville, Congressman Johnson shunned social functions in favor of study in the Library of Congress. Although a fellow Tennessee Democrat, James K. Polk, was elected president in 1844, and Johnson had campaigned for him, the two men had difficult relations, and President Polk refused some of his patronage suggestions. Johnson believed, as did many Southern Democrats, that the Constitution protected private property, including slaves, and thus prohibited the federal and state governments from abolishing slavery. He won a second term in 1845 against William G. Brownlow, presenting himself as the defender of the poor against the aristocracy. In his second term, Johnson supported the Polk administration's decision to fight the Mexican War, seen by some Northerners as an attempt to gain territory to expand slavery westward, and opposed the Wilmot Proviso, a proposal to ban slavery in any territory gained from Mexico. He introduced his Homestead Bill for the first time, to grant public land to people willing to settle it and gain title to it. This issue was especially important to Johnson because of his own humble beginnings. In the presidential election of 1848, the Democrats split over the slavery issue, and abolitionists formed the Free Soil Party, with former president Van Buren as their nominee. Johnson supported the Democratic candidate, former Michigan senator Lewis Cass. With the party split, Whig nominee General Zachary Taylor was easily victorious, and carried Tennessee. Johnson's relations with Polk remained poor to the end of his presidency. Due to national interest in new railroad construction and in response to the need for better transportation in his own district, Johnson also supported government assistance for the East Tennessee and Virginia Railroad. In his campaign for a fourth term, Johnson concentrated on three issues: slavery, homesteads and judicial elections. 
He defeated his opponent, Nathaniel G. Taylor, in August 1849, by a greater margin of victory than in previous campaigns. When the House convened in December, the party division caused by the Free Soil Party precluded the formation of the majority needed to elect a Speaker. Johnson proposed adoption of a rule allowing election of a Speaker by a plurality; some weeks later others took up a similar proposal, and Democrat Howell Cobb was elected. Once the Speaker election had concluded and Congress was ready to conduct legislative business, the issue of slavery took center stage. Northerners sought to admit California, a free state, to the Union. Kentucky's Henry Clay introduced in the Senate a series of resolutions, the Compromise of 1850, to admit California and pass legislation sought by each side. Johnson voted for all the provisions except for the abolition of slavery in the nation's capital. He pressed resolutions for constitutional amendments to provide for popular election of senators (then elected by state legislatures) and of the president (chosen by the Electoral College), and to limit the tenure of federal judges to 12 years. These were all defeated. A group of Democrats nominated Landon Carter Haynes to oppose Johnson as he sought a fifth term; the Whigs were so pleased with the internecine battle among the Democrats in the general election that they did not nominate a candidate of their own. The campaign included fierce debates: Johnson's main issue was the passage of the Homestead Bill; Haynes contended it would facilitate abolition. Johnson won the election by more than 1,600 votes. Though he was not enamored of the party's presidential nominee in 1852, former New Hampshire senator Franklin Pierce, Johnson campaigned for him. Pierce was elected, but he failed to carry Tennessee. In 1852, Johnson managed to get the House to pass his Homestead Bill, but it failed in the Senate. The Whigs had gained control of the Tennessee legislature, and, under the leadership of Gustavus Henry, redrew the boundaries of Johnson's First District to make it a safe seat for their party. The "Nashville Union" termed this "Henry-mandering"; Johnson lamented, "I have no political future." If Johnson considered retiring from politics upon deciding not to seek re-election, he soon changed his mind. His political friends began to maneuver to get him the nomination for governor. The Democratic convention unanimously named him, though some party members were not happy at his selection. The Whigs had won the past two gubernatorial elections, and still controlled the legislature. That party nominated Henry, making the "Henry-mandering" of the First District an immediate issue. The two men debated in county seats the length of Tennessee before the meetings were called off two weeks before the August 1853 election due to illness in Henry's family. Johnson won the election by 63,413 votes to 61,163; some votes for him were cast in return for his promise to support Whig Nathaniel Taylor for his old seat in Congress. Tennessee's governor had little power: Johnson could propose legislation but not veto it, and most appointments were made by the Whig-controlled legislature. Nevertheless, the office was a "bully pulpit" that allowed him to publicize himself and his political views. He succeeded in getting the appointments he wanted in return for his endorsement of John Bell, a Whig, for one of the state's U.S. Senate seats. 
In his first biennial speech, Johnson urged simplification of the state judicial system, abolition of the Bank of Tennessee, and establishment of an agency to provide uniformity in weights and measures; the last was passed. Johnson was critical of the Tennessee common school system and suggested funding be increased via taxes, either statewide or county by county; a mixture of the two was passed. Reforms carried out during Johnson's time as governor included the foundation of the state's public library (making books available to all) and its first public school system, and the initiation of regular state fairs to benefit craftsmen and farmers. Although the Whig Party was on its final decline nationally, it remained strong in Tennessee, and the outlook for Democrats there in 1855 was poor. Feeling that re-election as governor was necessary to give him a chance at the higher offices he sought, Johnson agreed to make the run. Meredith P. Gentry received the Whig nomination. A series of more than a dozen vitriolic debates ensued. The issues in the campaign were slavery, the prohibition of alcohol, and the nativist positions of the Know Nothing Party. Johnson favored the first, but opposed the others. Gentry was more equivocal on the alcohol question, and had gained the support of the Know Nothings, a group Johnson portrayed as a secret society. Johnson was unexpectedly victorious, albeit with a narrower margin than in 1853. When the presidential election of 1856 approached, Johnson hoped to be nominated; some Tennessee county conventions designated him a "favorite son". His position that the best interests of the Union were served by slavery in some areas made him a practical compromise candidate for president. He was never a major contender; the nomination fell to former Pennsylvania senator James Buchanan. Though he was not impressed by either, Johnson campaigned for Buchanan and his running mate, John C. Breckinridge, who were elected. Johnson decided not to seek a third term as governor, with an eye towards election to the U.S. Senate. In 1857, while Johnson was returning from Washington, his train derailed, and he suffered serious damage to his right arm. This injury would trouble him in the years to come. The victors in the 1857 state legislative campaign would, once they convened in October, elect a United States Senator. Former Whig governor William B. Campbell wrote to his uncle, "The great anxiety of the Whigs is to elect a majority in the legislature so as to defeat Andrew Johnson for senator. Should the Democrats have the majority, he will certainly be their choice, and there is no man living to whom the Americans and Whigs have as much antipathy as Johnson." The governor spoke widely in the campaign, and his party won the gubernatorial race and control of the legislature. Johnson's final address as governor gave him the chance to influence his electors, and he made proposals popular among Democrats. Two days later the legislature elected him to the Senate. The opposition was appalled, with the Richmond "Whig" newspaper referring to him as "the vilest radical and most unscrupulous demagogue in the Union." Johnson gained high office due to his proven record as a man popular among the small farmers and self-employed tradesmen who made up much of Tennessee's electorate. He called them the "plebeians"; he was less popular among the planters and lawyers who led the state Democratic Party, but none could match him as a vote-getter. 
After his death, one Tennessee voter wrote of him, "Johnson was always the same to everyone ... the honors heaped upon him did not make him forget to be kind to the humblest citizen." Always seen in impeccably tailored clothing, he cut an impressive figure, and had the stamina to endure lengthy campaigns with daily travel over bad roads leading to another speech or debate. Mostly denied the party's machinery, he relied on a network of friends, advisers, and contacts. One friend, Hugh Douglas, stated in a letter to him, "you have been in the way of our would be great men for a long time. At heart many of us never wanted you to be Governor only none of the rest of us Could have been elected at the time and we only wanted to use you. Then we did not want you to go to the Senate but "the people would send you"." The new senator took his seat when Congress convened in December 1857 (the term of his predecessor, James C. Jones, had expired in March). He came to Washington as usual without his wife and family; Eliza would visit Washington only once during Johnson's first time as senator, in 1860. Johnson immediately set about introducing the Homestead Bill in the Senate, but as most senators who supported it were Northern (many associated with the newly founded Republican Party), the matter became caught up in suspicions over the slavery issue. Southern senators felt that those who took advantage of the provisions of the Homestead Bill were more likely to be Northern non-slaveholders. The issue of slavery had been complicated by the Supreme Court's ruling earlier in the year in "Dred Scott v. Sandford" that slavery could not be prohibited in the territories. Johnson, a slaveholding senator from a Southern state, made a major speech in the Senate the following May in an attempt to convince his colleagues that the Homestead Bill and slavery were not incompatible. Nevertheless, Southern opposition was key to defeating the legislation, 30–22. In 1859, it failed on a procedural vote when Vice President Breckinridge broke a tie against the bill, and in 1860, a watered-down version passed both houses, only to be vetoed by Buchanan at the urging of Southerners. Johnson continued his opposition to spending, chairing a committee to control it. He argued against funding to build infrastructure in Washington, D.C., stating that it was unfair to expect state citizens to pay for the city's streets, even if it was the seat of government. He opposed spending money for troops to put down the revolt by the Mormons in Utah Territory, arguing for temporary volunteers as the United States should not have a standing army. In October 1859, abolitionist John Brown and sympathizers raided the federal arsenal at Harpers Ferry, Virginia (today West Virginia). Tensions in Washington between pro- and anti-slavery forces increased greatly. Johnson gave a major speech in the Senate in December, decrying Northerners who would endanger the Union by seeking to outlaw slavery. The Tennessee senator stated that the phrase "all men are created equal" from the Declaration of Independence did not apply to African Americans, since the Constitution of Illinois contained that phrase, and that document barred voting by African Americans. Johnson, by this time, was a wealthy man who owned household slaves, 14 of them according to the 1860 Federal Census. Johnson hoped that he would be a compromise candidate for the presidential nomination as the Democratic Party tore itself apart over the slavery question. 
Busy with the Homestead Bill during the 1860 Democratic National Convention in Charleston, South Carolina, he sent two of his sons and his chief political adviser to represent his interests in the backroom deal-making. The convention deadlocked, with no candidate able to gain the required two-thirds vote, but the sides were too far apart to consider Johnson as a compromise. The party split, with Northerners backing Illinois Senator Stephen Douglas while Southerners, including Johnson, supported Vice President Breckinridge for president. With former Tennessee senator John Bell running a fourth-party candidacy and further dividing the vote, the Republican Party elected its first president, former Illinois representative Abraham Lincoln. The election of Lincoln, known to be against the spread of slavery, was unacceptable to many in the South. Although secession from the Union had not been an issue in the campaign, talk of it began in the Southern states. Johnson took to the Senate floor after the election, giving a speech well received in the North, "I will not give up this government ... No; I intend to stand by it ... and I invite every man who is a patriot to ... rally around the altar of our common country ... and swear by our God, and all that is sacred and holy, that the Constitution shall be saved, and the Union preserved." As Southern senators announced they would resign if their states seceded, he reminded Mississippi Senator Jefferson Davis that if Southerners would only hold to their seats, the Democrats would control the Senate, and could defend the South's interests against any infringement by Lincoln. Gordon-Reed points out that while Johnson's belief in an indissoluble Union was sincere, he had alienated Southern leaders, including Davis, who would soon be the president of the Confederate States of America, formed by the seceding states. If the Tennessean had backed the Confederacy, he would have had small influence in its government. Johnson returned home when his state took up the issue of secession. His successor as governor, Isham G. Harris, and the legislature organized a referendum on whether to have a constitutional convention to authorize secession; when that failed, they put the question of leaving the Union to a popular vote. Despite threats on Johnson's life, and actual assaults, he campaigned against both questions, sometimes speaking with a gun on the lectern before him. Although Johnson's eastern region of Tennessee was largely against secession, the second referendum passed, and in June 1861, Tennessee joined the Confederacy. Believing he would be killed if he stayed, Johnson fled through the Cumberland Gap, where his party was in fact shot at. He left his wife and family in Greeneville. As the only member from a seceded state to remain in the Senate and the most prominent Southern Unionist, Johnson had Lincoln's ear in the early months of the war. With most of Tennessee in Confederate hands, Johnson spent congressional recesses in Kentucky and Ohio, trying in vain to convince any Union commander who would listen to conduct an operation into East Tennessee. Johnson's first tenure in the Senate came to a conclusion in March 1862 when Lincoln appointed him military governor of Tennessee. Much of the central and western portions of that seceded state had been recovered. 
Although some argued that civil government should simply resume once the Confederates were defeated in an area, Lincoln chose to use his power as commander in chief to appoint military governors over Union-controlled Southern regions. The Senate quickly confirmed Johnson's nomination along with the rank of brigadier general. In response, the Confederates confiscated his land and his slaves, and turned his home into a military hospital. Later in 1862, after his departure from the Senate and in the absence of most Southern legislators, the Homestead Bill was finally enacted. Along with legislation for land-grant colleges and for the transcontinental railroad, the Homestead Bill has been credited with opening the American West to settlement. As military governor, Johnson sought to eliminate rebel influence in the state. He demanded loyalty oaths from public officials, and shut down all newspapers owned by Confederate sympathizers. Much of eastern Tennessee remained in Confederate hands, and the ebb and flow of war during 1862 sometimes brought Confederate control again close to Nashville. However, the Confederates allowed his wife and family to pass through the lines to join him. Johnson undertook the defense of Nashville as well as he could, though the city was continually harassed by cavalry raids led by General Nathan Bedford Forrest. Relief from Union regulars did not come until William S. Rosecrans defeated the Confederates at Murfreesboro in early 1863. Much of eastern Tennessee was captured later that year. When Lincoln issued the Emancipation Proclamation in January 1863, declaring freedom for all slaves in Confederate-held areas, he exempted Tennessee at Johnson's request. The proclamation increased the debate over what should become of the slaves after the war, as not all Unionists supported abolition. Johnson finally decided that slavery had to end. He wrote, "If the institution of slavery ... seeks to overthrow it [the Government], then the Government has a clear right to destroy it." He reluctantly supported efforts to enlist former slaves into the Union Army, feeling that African Americans should perform menial tasks to release white Americans to do the fighting. Nevertheless, he succeeded in recruiting 20,000 black soldiers to serve the Union. In 1860, Lincoln's running mate had been Maine Senator Hannibal Hamlin. Vice President Hamlin had served competently, was in good health, and was willing to run again. Nevertheless, Johnson emerged as running mate for Lincoln's re-election bid in 1864. Lincoln considered several War Democrats for the ticket in 1864, and sent an agent to sound out General Benjamin Butler as a possible running mate. In May 1864, the President dispatched General Daniel Sickles to Nashville on a fact-finding mission. Although Sickles denied he was there either to investigate or interview the military governor, Johnson biographer Hans L. Trefousse believes Sickles's trip was connected to Johnson's subsequent nomination for vice president. According to historian Albert Castel in his account of Johnson's presidency, Lincoln was impressed by Johnson's administration of Tennessee. Gordon-Reed points out that while the Lincoln-Hamlin ticket might have been considered geographically balanced in 1860, "having Johnson, the "southern" War Democrat, on the ticket sent the right message about the folly of secession and the continuing capacity for union within the country." 
Another factor was the desire of Secretary of State William Seward to frustrate the vice-presidential candidacy of his fellow New Yorker, former senator Daniel S. Dickinson, a War Democrat, as Seward would probably have had to yield his place if another New Yorker became vice president. Johnson, once he was told by reporters the likely purpose of Sickles' visit, was active on his own behalf, giving speeches and having his political friends work behind the scenes to boost his candidacy. To sound a theme of unity, Lincoln in 1864 ran under the banner of the National Union Party, rather than the Republicans. At the party's convention in Baltimore in June, Lincoln was easily nominated, although there had been some talk of replacing him with a Cabinet officer or one of the more successful generals. After the convention backed Lincoln, former Secretary of War Simon Cameron offered a resolution to nominate Hamlin, but it was defeated. Johnson was nominated for vice president by C.M. Allen of Indiana with an Iowa delegate as seconder. On the first ballot, Johnson led with 200 votes to 150 for Hamlin and 108 for Dickinson. On the second ballot, Kentucky switched to vote for Johnson, beginning a stampede. Johnson was named on the second ballot with 491 votes to Hamlin's 17 and eight for Dickinson; the nomination was made unanimous. Lincoln expressed pleasure at the result, "Andy Johnson, I think, is a good man." When word reached Nashville, a crowd assembled and the military governor obliged with a speech contending his selection as a Southerner meant that the rebel states had not actually left the Union. Although it was unusual at the time for a national candidate to actively campaign, Johnson gave a number of speeches in Tennessee, Kentucky, Ohio, and Indiana. He also sought to boost his chances in Tennessee while re-establishing civil government by making the loyalty oath even more restrictive, in that voters would now have to swear they opposed making a settlement with the Confederacy. The Democratic candidate for president, George McClellan, hoped to avoid additional bloodshed by negotiation, and so the stricter loyalty oath effectively disenfranchised his supporters. Lincoln declined to override Johnson, and their ticket took the state by 25,000 votes. Congress refused to count Tennessee's electoral votes, but Lincoln and Johnson did not need them, having won in most states that had voted, and easily secured the election. Now Vice President-elect, Johnson was anxious to complete the work of re-establishing civilian government in Tennessee, although the timetable for the election of a new governor did not allow it to take place until after Inauguration Day, March 4. He hoped to remain in Nashville to complete his task, but was told by Lincoln's advisers that he could not stay, but would be sworn in with Lincoln. In these months, Union troops finished the retaking of eastern Tennessee, including Greeneville. Just before his departure, the voters of Tennessee ratified a new constitution, abolishing slavery, on February 22, 1865. One of Johnson's final acts as military governor was to certify the results. Johnson traveled to Washington to be sworn in, although according to Gordon-Reed, "in light of what happened on March 4, 1865, it might have been better if Johnson had stayed in Nashville." He may have been ill; Castel cited typhoid fever, though Gordon-Reed notes that there is no independent evidence for that diagnosis. 
On the evening of March 3, Johnson attended a party in his honor; he drank heavily. Hung over the following morning at the Capitol, he asked Vice President Hamlin for some whiskey. Hamlin produced a bottle, and Johnson took two stiff drinks, stating "I need all the strength for the occasion I can have." In the Senate Chamber, Johnson delivered a rambling address as Lincoln, the Congress, and dignitaries looked on. Almost incoherent at times, he finally meandered to a halt, whereupon Hamlin hastily swore him in as vice president. Lincoln, who had watched sadly during the debacle, then went to his own swearing-in outside the Capitol, and delivered his acclaimed Second Inaugural Address. In the weeks after the inauguration, Johnson only presided over the Senate briefly, and hid from public ridicule at the Maryland home of a friend, Francis Preston Blair. When he did return to Washington, it was with the intent of leaving for Tennessee to re-establish his family in Greeneville. Instead, he remained after word came that General Ulysses S. Grant had captured the Confederate capital of Richmond, Virginia, presaging the end of the war. Lincoln stated, in response to criticism of Johnson's behavior, that "I have known Andy Johnson for many years; he made a bad slip the other day, but you need not be scared; Andy ain't a drunkard." On the afternoon of April 14, 1865, Lincoln and Johnson met for the first time since the inauguration. Trefousse states that Johnson wanted to "induce Lincoln not to be too lenient with traitors"; Gordon-Reed agrees. That night, President Lincoln was shot and mortally wounded by John Wilkes Booth, a Confederate sympathizer. The shooting of the President was part of a conspiracy to assassinate Lincoln, Johnson, and Seward the same night. Seward barely survived his wounds, while Johnson escaped attack as his would-be assassin, George Atzerodt, got drunk instead of killing the vice president. Leonard J. Farwell, a fellow boarder at the Kirkwood House, awoke Johnson with news of Lincoln's shooting at Ford's Theatre. Johnson rushed to the President's deathbed, where he remained a short time, on his return promising, "They shall suffer for this. They shall suffer for this." Lincoln died at 7:22 am the next morning; Johnson's swearing in occurred between 10 and 11 am with Chief Justice Salmon P. Chase presiding in the presence of most of the Cabinet. Johnson's demeanor was described by the newspapers as "solemn and dignified". Some Cabinet members had last seen Johnson, apparently drunk, at the inauguration. At noon, Johnson conducted his first Cabinet meeting in the Treasury Secretary's office, and asked all members to remain in their positions. The events of the assassination resulted in speculation, then and subsequently, concerning Johnson and what the conspirators might have intended for him. In the vain hope of having his life spared after his capture, Atzerodt spoke much about the conspiracy, but did not say anything to indicate that the plotted assassination of Johnson was merely a ruse. Conspiracy theorists point to the fact that on the day of the assassination, Booth came to the Kirkwood House and left one of his cards. This object was received by Johnson's private secretary, William A. Browning, with an inscription, "Don't wish to disturb you. Are you at home? J. Wilkes Booth." Johnson presided with dignity over Lincoln's funeral ceremonies in Washington, before his predecessor's body was sent home to Springfield, Illinois, for interment. 
Shortly after Lincoln's death, Union General William T. Sherman reported he had, without consulting Washington, reached an armistice agreement with Confederate General Joseph E. Johnston for the surrender of Confederate forces in North Carolina in exchange for the existing state government remaining in power, with private property rights to be respected. This did not even acknowledge the freedom of those in slavery. This was not acceptable to Johnson or the Cabinet, who sent word for Sherman to secure the surrender without making political deals, which he did. Further, Johnson placed a $100,000 bounty on Confederate President Davis, then a fugitive, which gave him the reputation of a man who would be tough on the South. More controversially, he permitted the execution of Mary Surratt for her part in Lincoln's assassination. Surratt was executed with three others, including Atzerodt, on July 7, 1865. Upon taking office, Johnson faced the question of what to do with the Confederacy. President Lincoln had authorized loyalist governments in Virginia, Arkansas, Louisiana, and Tennessee as the Union came to control large parts of those states and advocated a ten percent plan that would allow elections after ten percent of the voters in any state took an oath of future loyalty to the Union. Congress considered this too lenient; its own plan, requiring a majority of voters to take the loyalty oath, passed both houses in 1864, but Lincoln pocket vetoed it. Johnson had three goals in Reconstruction. He sought a speedy restoration of the states, on the grounds that they had never truly left the Union, and thus should again be recognized once loyal citizens formed a government. To Johnson, African-American suffrage was a delay and a distraction; it had always been a state responsibility to decide who should vote. Second, political power in the Southern states should pass from the planter class to his beloved "plebeians". Johnson feared that the freedmen, many of whom were still economically bound to their former masters, might vote at their direction. Johnson's third priority was election in his own right in 1868, a feat no one who had succeeded a deceased president had managed to accomplish; he hoped to achieve it by building a Democratic coalition in the South opposed to Congressional Reconstruction. The Republicans had formed a number of factions. The Radical Republicans sought voting and other civil rights for African Americans. They believed that the freedmen could be induced to vote Republican in gratitude for emancipation, and that black votes could keep the Republicans in power and Southern Democrats, including former rebels, out of influence. They believed that top Confederates should be punished. The Moderate Republicans sought to keep the Democrats out of power at a national level, and prevent former rebels from resuming power. They were not as enthusiastic about the idea of African-American suffrage as their Radical colleagues, either because of their own local political concerns, or because they believed that the freedman would be likely to cast his vote badly. Northern Democrats favored the unconditional restoration of the Southern states. They did not support African-American suffrage, which might threaten Democratic control in the South. Johnson was initially left to devise a Reconstruction policy without legislative intervention, as Congress was not due to meet again until December 1865. 
Radical Republicans told the President that the Southern states were economically in a state of chaos and urged him to use his leverage to insist on rights for freedmen as a condition of restoration to the Union. But Johnson, with the support of other officials including Seward, insisted that the franchise was a state, not a federal matter. The Cabinet was divided on the issue. Johnson's first Reconstruction actions were two proclamations, with the unanimous backing of his Cabinet, on May 29. One recognized the Virginia government led by provisional Governor Francis Pierpont. The second provided amnesty for all ex-rebels except those holding property valued at $20,000 or more; it also appointed a temporary governor for North Carolina and authorized elections. Neither of these proclamations included provisions regarding black suffrage or freedmen's rights. The President ordered constitutional conventions in other former rebel states. As Southern states began the process of forming governments, Johnson's policies received considerable public support in the North, which he took as unconditional backing for quick reinstatement of the South. While he received such support from the white South, he underestimated the determination of Northerners to ensure that the war had not been fought for nothing. It was important, in Northern public opinion, that the South acknowledge its defeat, that slavery be ended, and that the lot of African Americans be improved. Voting rights were less important—after all, only a handful of Northern states (mostly in New England) gave African-American men the right to vote on the same basis as whites, and in late 1865, Connecticut, Wisconsin, and Minnesota voted down African-American suffrage proposals by large margins. Northern public opinion tolerated Johnson's inaction on black suffrage as an experiment, to be allowed if it quickened Southern acceptance of defeat. Instead, white Southerners felt emboldened. A number of Southern states passed Black Codes, binding African-American laborers to farms on annual contracts they could not quit, and allowing law enforcement at whim to arrest them for vagrancy and rent out their labor. Most Southerners elected to Congress were former Confederates, with the most prominent being Georgia Senator-designate and former Confederate vice president Alexander Stephens. Congress assembled in early December 1865; Johnson's conciliatory annual message to them was well received. Nevertheless, Congress refused to seat the Southern legislators and established a committee to recommend appropriate Reconstruction legislation. Northerners were outraged at the idea of unrepentant Confederate leaders, such as Stephens, rejoining the federal government at a time when emotional wounds from the war remained raw. They saw the Black Codes placing African Americans in a position barely above slavery. Republicans also feared that restoration of the Southern states would return the Democrats to power. In addition, according to David O. Stewart in his book on Johnson's impeachment, "the violence and poverty that oppressed the South would galvanize the opposition to Johnson". Congress was reluctant to confront the President, and initially only sought to fine-tune Johnson's policies towards the South. According to Trefousse, "If there was a time when Johnson could have come to an agreement with the moderates of the Republican Party, it was the period following the return of Congress". 
The President was unhappy about the provocative actions of the Southern states, and about the continued control by the antebellum elite there, but made no statement publicly, believing that Southerners had a right to act as they did, even if it was unwise to do so. By late January 1866, he was convinced that winning a showdown with the Radical Republicans was necessary to his political plans – both for the success of Reconstruction and for re-election in 1868. He would have preferred that the conflict arise over the legislative efforts to enfranchise African Americans in the District of Columbia, a proposal that had been defeated overwhelmingly in an all-white referendum. A bill to accomplish this passed the House of Representatives, but to Johnson's disappointment, stalled in the Senate before he could veto it. Illinois Senator Lyman Trumbull, leader of the Moderate Republicans and Chairman of the Judiciary Committee, was anxious to reach an understanding with the President. He ushered through Congress a bill extending the Freedmen's Bureau beyond its scheduled abolition in 1867, and the first Civil Rights Bill, to grant citizenship to the freedmen. Trumbull met several times with Johnson, and was convinced the President would sign the measures (Johnson rarely contradicted visitors, often fooling those who met with him into thinking he was in accord). In fact, the President opposed both bills as infringements on state sovereignty. Additionally, both of Trumbull's bills were unpopular among white Southerners, whom Johnson hoped to include in his new party. Johnson vetoed the Freedmen's Bureau bill on February 18, 1866, to the delight of white Southerners and the puzzled anger of Republican legislators. He considered himself vindicated when a move to override his veto failed in the Senate the following day. Johnson believed that the Radicals would now be isolated and defeated, and that the Moderate Republicans would form behind him; he did not understand that Moderates too wanted to see African Americans treated fairly. On February 22, 1866, Washington's Birthday, Johnson gave an impromptu speech to supporters who had marched to the White House and called for an address in honor of the first president. In his hour-long speech, he instead referred to himself over 200 times. More damagingly, he also spoke of "men ... still opposed to the Union" to whom he could not extend the hand of friendship he gave to the South. When called upon by the crowd to say who they were, Johnson named Pennsylvania Congressman Thaddeus Stevens, Massachusetts Senator Charles Sumner, and abolitionist Wendell Phillips, and accused them of plotting his assassination. Republicans viewed the address as a declaration of war, while one Democratic ally estimated Johnson's speech cost the party 200,000 votes in the 1866 congressional midterm elections. Although strongly urged by Moderates to sign the Civil Rights Bill, Johnson broke decisively with them by vetoing it on March 27. In his veto message, he objected to the measure on the grounds that it conferred citizenship on the freedmen at a time when 11 out of 36 states were unrepresented in the Congress, and that it discriminated in favor of African Americans and against whites. Within three weeks, Congress had overridden his veto, the first time that had been done on a major bill in American history. The veto of the Civil Rights Act of 1866, often seen as a key mistake of Johnson's presidency, convinced Moderates there was no hope of working with him. 
Historian Eric Foner in his volume on Reconstruction views it as "the most disastrous miscalculation of his political career". According to Stewart, the veto was "for many his defining blunder, setting a tone of perpetual confrontation with Congress that prevailed for the rest of his presidency". Congress also proposed the Fourteenth Amendment to the states. Written by Trumbull and others, it was sent for ratification by state legislatures in a process in which the president plays no part, though Johnson opposed it. The amendment was designed to put the key provisions of the Civil Rights Act into the Constitution, but also went further. The amendment extended citizenship to every person born in the United States (except Indians on reservations), penalized states that did not give the vote to freedmen, and most importantly, created new federal civil rights that could be protected by federal courts. It also guaranteed that the federal debt would be paid and forbade repayment of Confederate war debts. Further, it disqualified many former Confederates from office, although the disability could be removed—by Congress, not the president. Both houses passed the Freedmen's Bureau Act a second time, and again the President vetoed it; this time, the veto was overridden. By the summer of 1866, when Congress finally adjourned, Johnson's method of restoring states to the Union by executive fiat, without safeguards for the freedmen, was in deep trouble. His home state of Tennessee ratified the Fourteenth Amendment despite the President's opposition. When Tennessee did so, Congress immediately seated its proposed delegation, embarrassing Johnson. Efforts to compromise failed, and a political war ensued between the united Republicans on one side, and on the other, Johnson and his allies in the Democratic Party, North and South. He called a convention of the National Union Party. Republicans had returned to using their previous identifier; Johnson intended to use the discarded name to unite his supporters and gain election to a full term, in 1868. The battleground was the election of 1866; Southern states were not allowed to vote. Johnson campaigned vigorously, undertaking a public speaking tour, known as the "Swing Around the Circle". The trip, including speeches in Chicago, St. Louis, Indianapolis and Columbus, proved politically disastrous, with the President making controversial comparisons between himself and Christ, and engaging in arguments with hecklers. These exchanges were attacked as beneath the dignity of the presidency. The Republicans won by a landslide, increasing their two-thirds majority in Congress, and made plans to control Reconstruction. Johnson blamed the Democrats for giving only lukewarm support to the National Union movement. Even with the Republican victory in November 1866, Johnson considered himself in a strong position. The Fourteenth Amendment had been ratified by none of the Southern or border states except Tennessee, and had been rejected in Kentucky, Delaware, and Maryland. As the amendment required ratification by three-quarters of the states to become part of the Constitution, he believed the deadlock would be broken in his favor, leading to his election in 1868. Once it reconvened in December 1866, an energized Congress began passing legislation, often over a presidential veto; this included the District of Columbia voting bill. Congress admitted Nebraska to the Union over a veto, and the Republicans gained two senators and a state that promptly ratified the amendment. 
Johnson's veto of a bill for statehood for Colorado Territory was sustained; enough senators agreed that a district with a population of 30,000 was not yet worthy of statehood to win the day. In January 1867, Congressman Stevens introduced legislation to dissolve the Southern state governments and reconstitute them into five military districts, under martial law. The states would begin again by holding constitutional conventions. African Americans could vote for or become delegates; former Confederates could not. In the legislative process, Congress added to the bill that restoration to the Union would follow the state's ratification of the Fourteenth Amendment, and completion of the process of adding it to the Constitution. Johnson and the Southerners attempted a compromise, whereby the South would agree to a modified version of the amendment without the disqualification of former Confederates, and for limited black suffrage. The Republicans insisted on the full language of the amendment, and the deal fell through. Although Johnson could have pocket vetoed the First Reconstruction Act as it was presented to him less than ten days before the end of the Thirty-Ninth Congress, he chose to veto it directly on March 2, 1867; Congress overruled him the same day. Also on March 2, Congress passed the Tenure of Office Act over the President's veto, in response to statements during the Swing Around the Circle that he planned to fire Cabinet secretaries who did not agree with him. This bill, requiring Senate approval for the firing of Cabinet members during the tenure of the president who appointed them and for one month afterwards, was immediately controversial, with some senators doubting that it was constitutional or that its terms applied to Johnson, whose key Cabinet officers were Lincoln holdovers. Secretary of War Edwin Stanton was an able and hard-working man, but difficult to deal with. Johnson both admired and was exasperated by his War Secretary, who, in combination with General of the Army Grant, worked to undermine the president's Southern policy from within his own administration. Johnson considered firing Stanton, but respected him for his wartime service as secretary. Stanton, for his part, feared allowing Johnson to appoint his successor and refused to resign, despite his public disagreements with his president. The new Congress met for a few weeks in March 1867, then adjourned, leaving the House Committee on the Judiciary behind, charged with reporting back to the full House whether there were grounds for Johnson to be impeached. This committee duly met, examined the President's bank accounts, and summoned members of the Cabinet to testify. When a federal court released former Confederate president Davis on bail on May 13 (he had been captured shortly after the war), the committee investigated whether the President had impeded the prosecution. It learned that Johnson was eager to have Davis tried. A bipartisan majority of the committee voted down impeachment charges; the committee adjourned on June 3. Later in June, Johnson and Stanton battled over the question of whether the military officers placed in command of the South could override the civil authorities. The President had Attorney General Henry Stanbery issue an opinion backing his position that they could not. Johnson sought to pin down Stanton either as for, and thus endorsing Johnson's position, or against, showing himself to be opposed to his president and the rest of the Cabinet. 
Stanton evaded the point in meetings and written communications. When Congress reconvened in July, it passed a Reconstruction Act against Johnson's position, waited for his veto, overruled it, and went home. In addition to clarifying the powers of the generals, the legislation also deprived the President of control over the Army in the South. With Congress in recess until November, Johnson decided to fire Stanton and relieve one of the military commanders, General Philip Sheridan, who had dismissed the governor of Texas and installed a replacement with little popular support. Johnson was initially deterred by a strong objection from Grant, but on August 5, the President demanded Stanton's resignation; the secretary refused to quit with Congress out of session. Johnson then suspended him pending the next meeting of Congress as permitted under the Tenure of Office Act; Grant agreed to serve as temporary replacement while continuing to lead the Army. Grant, under protest, followed Johnson's order transferring Sheridan and another of the district commanders, Daniel Sickles, who had angered Johnson by firmly following Congress's plan. The President also issued a proclamation pardoning most Confederates, exempting those who held office under the Confederacy, or who had served in federal office before the war and had breached their oaths. Although Republicans expressed anger with his actions, the 1867 elections generally went Democratic. No seats in Congress were directly elected in the polling, but the Democrats took control of the Ohio General Assembly, allowing them to defeat for re-election one of Johnson's strongest opponents, Senator Benjamin Wade. Voters in Ohio, Connecticut, and Minnesota turned down propositions to grant African Americans the vote. The adverse results momentarily put a stop to Republican calls to impeach Johnson, who was elated by the elections. Nevertheless, once Congress met in November, the Judiciary Committee reversed itself and passed a resolution of impeachment against Johnson. After much debate about whether anything the President had done was a high crime or misdemeanor, the standard under the Constitution, the resolution was defeated by the House of Representatives on December 7, 1867, by a vote of 57 in favor to 108 opposed. Johnson notified Congress of Stanton's suspension and Grant's interim appointment. In January 1868, the Senate disapproved of his action, and reinstated Stanton, contending the President had violated the Tenure of Office Act. Grant stepped aside over Johnson's objection, causing a complete break between them. Johnson then dismissed Stanton and appointed Lorenzo Thomas to replace him. Stanton refused to leave his office, and on February 24, 1868, the House impeached the President for intentionally violating the Tenure of Office Act, by a vote of 128 to 47. The House subsequently adopted eleven articles of impeachment, for the most part alleging that he had violated the Tenure of Office Act, and had questioned the legitimacy of Congress. On March 5, 1868, the impeachment trial began in the Senate and lasted almost three months; Congressmen George S. Boutwell, Benjamin Butler and Thaddeus Stevens acted as managers for the House, or prosecutors, and William M. Evarts, Benjamin R. Curtis and former Attorney General Stanbery were Johnson's counsel; Chief Justice Chase served as presiding judge. The defense relied on the provision of the Tenure of Office Act that made it applicable only to appointees of the current administration. 
Since Lincoln had appointed Stanton, the defense maintained Johnson had not violated the act, and also argued that the President had the right to test the constitutionality of an act of Congress. Johnson's counsel insisted that he make no appearance at the trial, nor publicly comment about the proceedings, and except for a pair of interviews in April, he complied. Johnson maneuvered to gain an acquittal; for example, he pledged to Iowa Senator James W. Grimes that he would not interfere with Congress's Reconstruction efforts. Grimes reported to a group of Moderates, many of whom voted for acquittal, that he believed the President would keep his word. Johnson also promised to install the respected John Schofield as War Secretary. Kansas Senator Edmund G. Ross received assurances that the new, Radical-influenced constitutions ratified in South Carolina and Arkansas would be transmitted to the Congress without delay, an action which would give him and other senators political cover to vote for acquittal. One reason senators were reluctant to remove the President was that his successor would have been Ohio Senator Wade, the president "pro tempore" of the Senate. Wade, a lame duck who left office in early 1869, was a Radical who supported such measures as women's suffrage, placing him beyond the pale politically in much of the nation. Additionally, a President Wade was seen as an obstacle to Grant's ambitions. With the dealmaking, Johnson was confident of the result in advance of the verdict, and in the days leading up to the ballot, newspapers reported that Stevens and his Radicals had given up. On May 16, the Senate voted on the 11th article of impeachment, accusing Johnson of firing Stanton in violation of the Tenure of Office Act once the Senate had overturned his suspension. Thirty-five senators voted "guilty" and 19 "not guilty", thus falling short by a single vote of the two-thirds majority required for conviction under the Constitution. Seven Republicans—Senators Grimes, Ross, Trumbull, William Pitt Fessenden, Joseph S. Fowler, John B. Henderson, and Peter G. Van Winkle—voted to acquit the President. With Stevens bitterly disappointed at the result, the Senate then adjourned for the Republican National Convention; Grant was nominated for president. The Senate returned on May 26 and voted on the second and third articles, with identical 35–19 results. Faced with those results, Johnson's opponents gave up and dismissed the proceedings. Stanton "relinquished" his office on May 26, and the Senate subsequently confirmed Schofield. When Johnson renominated Stanbery to return to his position as Attorney General after his service as defense counsel, the Senate refused to confirm him. Allegations were made at the time and again later that bribery dictated the outcome of the trial. Even when it was in progress, Representative Butler began an investigation, held contentious hearings, and issued a report, unendorsed by any other congressman. Butler focused on a New York–based "Astor House Group", supposedly led by political boss and editor Thurlow Weed. This organization was said to have raised large sums of money from whiskey interests through Cincinnati lawyer Charles Woolley to bribe senators to acquit Johnson. Butler went so far as to imprison Woolley in the Capitol building when he refused to answer questions, but failed to prove bribery. Soon after taking office as president, Johnson reached an accord with Secretary of State William H. Seward that there would be no change in foreign policy. 
In practice, this meant that Seward would continue to run things as he had under Lincoln. Seward and Lincoln had been rivals for the nomination in 1860; the victor hoped that Seward would succeed him as president in 1869. At the time of Johnson's accession, the French had intervened in Mexico, sending troops there. While many politicians had indulged in saber rattling over the Mexican matter, Seward preferred quiet diplomacy, warning the French through diplomatic channels that their presence in Mexico was not acceptable. Although the President preferred a more aggressive approach, Seward persuaded him to follow his lead. In April 1866, the French government informed Seward that its troops would be brought home in stages, to conclude by November 1867. Seward was an expansionist, and sought opportunities to gain territory for the United States. By 1867, the Russian government saw its North American colony (today Alaska) as a financial liability, and feared losing control as American settlement reached there. It instructed its minister in Washington, Baron Eduard de Stoeckl, to negotiate a sale. De Stoeckl did so deftly, getting Seward to raise his offer from $5 million (coincidentally, the minimum that Russia had instructed de Stoeckl to accept) to $7 million, and then getting $200,000 added by raising various objections. The total purchase price came to $7.2 million. On March 30, 1867, de Stoeckl and Seward signed the treaty, working quickly as the Senate was about to adjourn. Johnson and Seward took the signed document to the President's Room in the Capitol, only to be told there was no time to deal with the matter before adjournment. The President summoned the Senate into session to meet on April 1; that body approved the treaty, 37–2. Emboldened by his success in Alaska, Seward sought acquisitions elsewhere. His only success was staking an American claim to uninhabited Wake Island in the Pacific, which would be officially claimed by the U.S. in 1898. He came close with the Danish West Indies as Denmark agreed to sell and the local population approved the transfer in a plebiscite, but the Senate never voted on the treaty and it expired. Another treaty that fared badly was the Johnson-Clarendon convention, negotiated in settlement of the "Alabama" Claims, for damages to American shipping from British-built Confederate raiders. Negotiated by the United States Minister to Britain, former Maryland senator Reverdy Johnson, in late 1868, it was ignored by the Senate during the remainder of the President's term. The treaty was rejected after he left office, and the Grant administration later negotiated considerably better terms from Britain. Johnson appointed nine Article III federal judges during his presidency, all to United States district courts; he did not appoint a justice to serve on the Supreme Court. In April 1866, he nominated Henry Stanbery to fill the vacancy left by the death of John Catron, but Congress eliminated the seat to prevent the appointment and, to ensure that he did not get to make any appointments, eliminated the next vacancy as well, providing that the court would shrink by one justice when one next departed from office. Johnson appointed his Greeneville crony, Samuel Milligan, to the United States Court of Claims, where he served from 1868 until his death in 1874. In June 1866, Johnson signed the Southern Homestead Act into law, believing that the legislation would assist poor whites. 
Around 28,000 land claims were successfully patented, although few former slaves benefitted from the law, fraud was rampant, and much of the best land was off-limits, reserved for grants to veterans or railroads. In June 1868, Johnson signed an eight-hour law passed by Congress that established an eight-hour workday for laborers and mechanics employed by the Federal Government. Although Johnson told members of a Workingmen's party delegation in Baltimore that he could not directly commit himself to an eight-hour day, he nevertheless told the same delegation that he greatly favored the "shortest number of hours consistent with the interests of all." According to Richard F. Selcer, however, the good intentions behind the law were "immediately frustrated" as wages were cut by 20%. Johnson sought nomination by the 1868 Democratic National Convention in New York in July 1868. He remained very popular among Southern whites, and boosted that popularity by issuing, just before the convention, a pardon ending the possibility of criminal proceedings against any Confederate not already indicted, meaning that only Davis and a few others still might face trial. On the first ballot, Johnson was second to former Ohio representative George H. Pendleton, who had been his Democratic opponent for vice president in 1864. Johnson's support was mostly from the South, and fell away as the ballots passed. On the 22nd ballot, former New York governor Horatio Seymour was nominated, and the President received only four votes, all from Tennessee. The conflict with Congress continued. Johnson sent Congress proposals for amendments to limit the president to a single six-year term and make the president and the Senate directly elected, and for term limits for judges. Congress took no action on them. When the President was slow to officially report ratifications of the Fourteenth Amendment by the new Southern legislatures, Congress passed a bill, again over his veto, requiring him to do so within ten days of receipt. He still delayed as much as he could, but was required, in July 1868, to report the ratifications making the amendment part of the Constitution. Seymour's operatives sought Johnson's support, but he long remained silent on the presidential campaign. It was not until October, with the vote already having taken place in some states, that he mentioned Seymour at all, and he never endorsed him. Nevertheless, Johnson regretted Grant's victory, in part because of their animus from the Stanton affair. In his annual message to Congress in December, Johnson urged the repeal of the Tenure of Office Act and told legislators that had they admitted their Southern colleagues in 1865, all would have been well. He celebrated his 60th birthday in late December with a party for several hundred children, though not including those of President-elect Grant, who did not allow his to go. On Christmas Day 1868, Johnson issued a final amnesty, this one covering everyone, including Davis. He also issued, in his final months in office, pardons for crimes, including one for Dr. Samuel Mudd, controversially convicted of involvement in the Lincoln assassination (he had set Booth's broken leg) and imprisoned on Florida's Dry Tortugas. On March 3, the President hosted a large public reception at the White House on his final full day in office. Grant had made it known that he was unwilling to ride in the same carriage as Johnson, as was customary, and Johnson refused to go to the inauguration at all. 
Despite an effort by Seward to prompt a change of mind, he spent the morning of March 4 finishing last-minute business, and then shortly after noon rode from the White House to the home of a friend. After leaving the presidency, Johnson remained for some weeks in Washington, then returned to Greeneville for the first time in eight years. He was honored with large public celebrations along the way, especially in Tennessee, where cities hostile to him during the war hung out welcome banners. He had arranged to purchase a large farm near Greeneville to live on after his presidency. Some expected Johnson to run for Governor of Tennessee or for the Senate again, while others predicted that he would become a railroad executive. Johnson found Greeneville boring, and his private life was embittered by the suicide of his son Robert in 1869. Seeking vindication for himself, and revenge against his political enemies, he launched a Senate bid soon after returning home. Tennessee had gone Republican, but court rulings restoring the vote to some whites and the violence of the Ku Klux Klan kept down the African-American vote, leading to a Democratic victory in the legislative elections in August 1869. Johnson was seen as a likely victor in the Senate election, although hated by Radical Republicans, and also by some Democrats because of his wartime activities. Although he was at one point within a single vote of victory in the legislature's balloting, the Republicans eventually elected Henry Cooper over Johnson, 54–51. In 1872, there was a special election for an at-large congressional seat from Tennessee; Johnson initially sought the Democratic nomination, but when he saw that it would go to former Confederate general Benjamin F. Cheatham, decided to run as an independent. The former president was defeated, finishing third, but the split in the Democratic Party defeated Cheatham in favor of an old Johnson Unionist ally, Horace Maynard. In 1873, Johnson contracted cholera during an epidemic but recovered; that year he lost about $73,000 when the First National Bank of Washington went under, though he was eventually repaid much of the sum. He began looking towards the next Senate election, to take place in the legislature in early 1875. Johnson began to woo the farmers' Grange movement; with his Jeffersonian leanings, he easily gained their support. He spoke throughout the state in his final campaign tour. Few African Americans outside the large towns were now able to vote as Reconstruction faded in Tennessee, setting a pattern that would be repeated in the other Southern states; white domination would last almost a century. In the Tennessee legislative elections in August, the Democrats elected 92 legislators to the Republicans' eight, and Johnson went to Nashville for the legislative session. When the balloting for the Senate seat began on January 20, 1875, he led with 30 votes, but did not have the required majority as three former Confederate generals, one former colonel, and a former Democratic congressman split the vote with him. Johnson's opponents tried to agree on a single candidate who might gain majority support and defeat him, but failed, and he was elected on January 26 on the 54th ballot, with a margin of a single vote. Nashville erupted in rejoicing; remarked Johnson, "Thank God for the vindication." Johnson's comeback garnered national attention, with the "St. Louis Republican" calling it "the most magnificent personal triumph which the history of American politics can show". 
At his swearing-in in the Senate on March 5, 1875, he was greeted with flowers and sworn in alongside his predecessor as vice president, Hamlin, by that office's current incumbent, Henry Wilson, who as senator had voted for his ousting. Many Republicans ignored Senator Johnson, though some, such as Ohio's John Sherman (who had voted for conviction), shook his hand. Johnson remains the only former president to serve in the Senate. He spoke only once in the short session, on March 22, lambasting President Grant for his use of federal troops in support of Louisiana's Reconstruction government. The former president asked, "How far off is military despotism?" and concluded his speech, "may God bless this people and God save the Constitution." Johnson returned home after the special session concluded. In late July, convinced some of his opponents were defaming him in the Ohio gubernatorial race, he decided to travel there to give speeches. He began the trip on July 28, and broke the journey at his daughter Mary's farm near Elizabethton, where his daughter Martha was also staying. That evening he suffered a stroke, but refused medical treatment until the next day, when he did not improve and two doctors were sent for from Elizabethton. He seemed to respond to their ministrations, but suffered another stroke on the evening of July 30, and died early the following morning at the age of 66. President Grant had the "painful duty" of announcing the death of the only surviving past president. Northern newspapers, in their obituaries, tended to focus on Johnson's loyalty during the war, while Southern ones paid tribute to his actions as president. Johnson's funeral was held on August 3 in Greeneville. He was buried with his body wrapped in an American flag and a copy of the U.S. Constitution placed under his head, according to his wishes. The burial ground was dedicated as the Andrew Johnson National Cemetery in 1906, and, with his home and tailor's shop, is part of the Andrew Johnson National Historic Site. According to Castel, "historians [of Johnson's presidency] have tended to concentrate to the exclusion of practically everything else upon his role in that titanic event [Reconstruction]". Through the remainder of the 19th century, there were few historical evaluations of Johnson and his presidency. Memoirs from Northerners who had dealt with him, such as former vice president Henry Wilson and Maine Senator James G. Blaine, depicted him as an obstinate boor who tried to favor the South in Reconstruction, but who was frustrated by Congress. According to historian Howard K. Beale in his journal article about the historiography of Reconstruction, "Men of the postwar decades were more concerned with justifying their own position than they were with painstaking search for truth. Thus [Alabama congressman and historian] Hilary Herbert and his corroborators presented a Southern indictment of Northern policies, and Henry Wilson's history was a brief for the North." The turn of the 20th century saw the first significant historical evaluations of Johnson. Leading the wave was Pulitzer Prize-winning historian James Ford Rhodes, who ascribed Johnson's faults to his personal weaknesses and blamed him for the problems of the postbellum South. 
Other early 20th-century historians, such as John Burgess, Woodrow Wilson (who later became president himself) and William Dunning, all Southerners, concurred with Rhodes, believing Johnson flawed and politically inept, but concluding that he had tried to carry out Lincoln's plans for the South in good faith. Author and journalist Jay Tolson suggests that Wilson "depict[ed Reconstruction] as a vindictive program that hurt even repentant southerners while benefiting northern opportunists, the so-called Carpetbaggers, and cynical white southerners, or Scalawags, who exploited alliances with blacks for political gain". Even as Rhodes and his school wrote, another group of historians was setting out on the full rehabilitation of Johnson, using for the first time primary sources such as his papers, provided by his daughter Martha before her death in 1901, and the diaries of Johnson's Navy Secretary, Gideon Welles, first published in 1911. The resulting volumes, such as David Miller DeWitt's "The Impeachment and Trial of President Andrew Johnson" (1903), presented him far more favorably than they did those who had sought to oust him. In James Schouler's 1913 "History of the Reconstruction Period", the author accused Rhodes of being "quite unfair to Johnson", though agreeing that the former president had created many of his own problems through inept political moves. These works had an effect; although historians continued to view Johnson as having deep flaws which sabotaged his presidency, they saw his Reconstruction policies as fundamentally correct. Beale wondered in 1940, "is it not time that we studied the history of Reconstruction without first assuming, at least subconsciously, that carpetbaggers and Southern white Republicans were wicked, that Negroes were illiterate incompetents, and that the whole white South owes a debt of gratitude to the restorers of 'white supremacy'?" Despite these doubts, the favorable view of Johnson survived for a time. In 1942, Van Heflin portrayed the former president as a fighter for democracy in the Hollywood film "Tennessee Johnson". In 1948, a poll of his colleagues by historian Arthur M. Schlesinger deemed Johnson among the average presidents; in 1956, one by Clinton L. Rossiter named him as one of the near-great Chief Executives. Foner notes that at the time of these surveys, "the Reconstruction era that followed the Civil War was regarded as a time of corruption and misgovernment caused by granting black men the right to vote". Earlier historians, including Beale, believed that money drove events, and had seen Reconstruction as an economic struggle. They also accepted, for the most part, that reconciliation between North and South should have been the top priority of Reconstruction. In the 1950s, historians began to focus on the African-American experience as central to Reconstruction. They rejected completely any claim of black inferiority, which had marked many earlier historical works, and saw the developing civil rights movement as a second Reconstruction; some writers stated they hoped their work on the postbellum era would advance the cause of civil rights. These authors sympathized with the Radical Republicans for their desire to help the African American, and saw Johnson as callous towards the freedman. In a number of works from 1956 onwards by such historians as Fawn Brodie, the former president was depicted as a successful saboteur of efforts to better the freedman's lot. 
These volumes included major biographies of Stevens and Stanton. Reconstruction was increasingly seen as a noble effort to integrate the freed slaves into society. In the early 21st century, Johnson is among those commonly mentioned as the worst presidents in U.S. history. According to historian Glenn W. Lafantasie, who believes Buchanan to be the worst president, "Johnson is a particular favorite for the bottom of the pile because of his impeachment ... his complete mishandling of Reconstruction policy ... his bristling personality, and his enormous sense of self-importance." Tolson suggests that "Johnson is now scorned for having resisted Radical Republican policies aimed at securing the rights and well-being of the newly emancipated African-Americans". Gordon-Reed notes that Johnson, along with his contemporaries Pierce and Buchanan, is generally listed among the five worst presidents, but states, "there have never been more difficult times in the life of this nation. The problems these men had to confront were enormous. It would have taken a succession of Lincolns to do them justice." Trefousse considers Johnson's legacy to be "the maintenance of white supremacy. His boost to Southern conservatives by undermining Reconstruction was his legacy to the nation, one that would trouble the country for generations to come." 
Amazing Grace "Amazing Grace" is a Christian hymn published in 1779, with words written by the English poet and Anglican clergyman John Newton (1725–1807). Newton wrote the words from personal experience. He grew up without any particular religious conviction, but his life's path was formed by a variety of twists and coincidences that were often put into motion by his recalcitrant insubordination. He was pressed (conscripted) into service in the Royal Navy, and after leaving the service, he became involved in the Atlantic slave trade. In 1748, a violent storm battered his vessel off the coast of County Donegal, Ireland, so severely that he called out to God for mercy, a moment that marked his spiritual conversion. He continued his slave trading career until 1754 or 1755, when he ended his seafaring altogether and began studying Christian theology. Ordained in the Church of England in 1764, Newton became curate of Olney, Buckinghamshire, where he began to write hymns with poet William Cowper. "Amazing Grace" was written to illustrate a sermon on New Year's Day of 1773. It is unknown if there was any music accompanying the verses; it may have simply been chanted by the congregation. It debuted in print in 1779 in Newton and Cowper's "Olney Hymns" but settled into relative obscurity in England. In the United States, however, "Amazing Grace" was used extensively during the Second Great Awakening in the early 19th century. It has been associated with more than 20 melodies, but in 1835 it was joined to a tune named "New Britain" to which it is most frequently sung today. With the message that forgiveness and redemption are possible regardless of sins committed and that the soul can be delivered from despair through the mercy of God, "Amazing Grace" is one of the most recognisable songs in the English-speaking world. Author Gilbert Chase writes that it is "without a doubt the most famous of all the folk hymns", and Jonathan Aitken, a Newton biographer, estimates that it is performed about 10 million times annually. It has had particular influence in folk music, and has become an emblematic African American spiritual. 
Its universal message has been a significant factor in its crossover into secular music. "Amazing Grace" saw a resurgence in popularity in the U.S. during the 1960s and has been recorded thousands of times during and since the 20th century, occasionally appearing on popular music charts. According to the "Dictionary of American Hymnology" "Amazing Grace" is John Newton's spiritual autobiography in verse. In 1725, Newton was born in Wapping, a district in London near the Thames. His father was a shipping merchant who was brought up as a Catholic but had Protestant sympathies, and his mother was a devout Independent unaffiliated with the Anglican Church. She had intended Newton to become a clergyman, but she died of tuberculosis when he was six years old. For the next few years, Newton was raised by his emotionally distant stepmother while his father was at sea, and spent some time at a boarding school where he was mistreated. At the age of eleven, he joined his father on a ship as an apprentice; his seagoing career would be marked by headstrong disobedience. As a youth, Newton began a pattern of coming very close to death, examining his relationship with God, then relapsing into bad habits. As a sailor, he denounced his faith after being influenced by a shipmate who discussed "Characteristics of Men, Manners, Opinions, Times", a book by the Third Earl of Shaftesbury, with him. In a series of letters he later wrote, "Like an unwary sailor who quits his port just before a rising storm, I renounced the hopes and comforts of the Gospel at the very time when every other comfort was about to fail me." His disobedience caused him to be pressed into the Royal Navy, and he took advantage of opportunities to overstay his leave and finally deserted to visit Mary "Polly" Catlett, a family friend with whom he had fallen in love. After enduring humiliation for deserting, he managed to get himself traded to a slave ship where he began a career in slave trading. Newton often openly mocked the captain by creating obscene poems and songs about him that became so popular the crew began to join in. He entered into disagreements with several colleagues that resulted in his being starved almost to death, imprisoned while at sea and chained like the slaves they carried, then outright enslaved and forced to work on a plantation in Sierra Leone near the Sherbro River. After several months he came to think of Sierra Leone as his home, but his father intervened after Newton sent him a letter describing his circumstances, and a ship found him by coincidence. Newton claimed the only reason he left was because of Polly. While aboard the ship "Greyhound", Newton gained notoriety for being one of the most profane men the captain had ever met. In a culture where sailors commonly used oaths and swore, Newton was admonished several times for not only using the worst words the captain had ever heard, but creating new ones to exceed the limits of verbal debauchery. In March 1748, while the "Greyhound" was in the North Atlantic, a violent storm came upon the ship that was so rough it swept overboard a crew member who was standing where Newton had been moments before. After hours of the crew emptying water from the ship and expecting to be capsized, Newton and another mate tied themselves to the ship's pump to keep from being washed overboard, working for several hours. After proposing the measure to the captain, Newton had turned and said, "If this will not do, then Lord have mercy upon us!" 
Newton rested briefly before returning to the deck to steer for the next eleven hours. During his time at the wheel he pondered his divine challenge. About two weeks later, the battered ship and starving crew landed in Lough Swilly, Ireland. For several weeks before the storm, Newton had been reading "The Christian's Pattern", a summary of the 15th-century "The Imitation of Christ" by Thomas à Kempis. The memory of his own "Lord have mercy upon us!" uttered during a moment of desperation in the storm did not leave him; he began to ask if he was worthy of God's mercy or in any way redeemable as he had not only neglected his faith but directly opposed it, mocking others who showed theirs, deriding and denouncing God as a myth. He came to believe that God had sent him a profound message and had begun to work through him. Newton's conversion was not immediate, but he contacted Polly's family and announced his intentions to marry her. Her parents were hesitant as he was known to be unreliable and impetuous. They knew he was profane, but they allowed him to write to Polly, and he began to submit to authority for her sake. He sought a place on a slave ship bound for Africa, and Newton and his crewmates participated in most of the same activities he had written about before; the only immorality from which he was able to free himself was profanity. After a severe illness his resolve was renewed, yet he retained the same attitude towards slavery as was held by his contemporaries. Newton continued in the slave trade through several voyages in which he sailed up rivers in Africa, now as a captain, procured slaves being offered for sale in larger ports, and subsequently transported them to North America. In between voyages, he married Polly in 1750, and found it increasingly difficult to leave her at the beginning of each trip. After three shipping experiences in the slave trade, Newton was promised a position as ship's captain with cargo unrelated to slavery when, at the age of thirty, he collapsed and never sailed again. Working as a customs agent in Liverpool starting in 1756, Newton began to teach himself Latin, Greek, and theology. He and Polly immersed themselves in the church community, and Newton's passion was so impressive that his friends suggested he become a priest in the Church of England. He was turned down by John Gilbert, Archbishop of York, in 1758, ostensibly for having no university degree, although the more likely reasons were his leanings toward evangelism and tendency to socialise with Methodists. Newton continued his devotions, and after being encouraged by a friend, he wrote about his experiences in the slave trade and his conversion. William Legge, 2nd Earl of Dartmouth, impressed with his story, sponsored Newton for ordination by John Green, Bishop of Lincoln, and offered him the curacy of Olney, Buckinghamshire, in 1764. Olney was a village of about 2,500 residents whose main industry was making lace by hand. The people were mostly illiterate and many of them were poor. Newton's preaching was unique in that he shared many of his own experiences from the pulpit; many clergy preached from a distance, not admitting any intimacy with temptation or sin. He was involved in his parishioners' lives and was much loved, although his writing and delivery were sometimes unpolished. But his devotion and conviction were apparent and forceful, and he often said his mission was to "break a hard heart and to heal a broken heart". 
He struck up a friendship with William Cowper, a gifted writer who had failed at a career in law and suffered bouts of insanity, attempting suicide several times. Cowper enjoyed Olney and Newton's company; he was also new to Olney and had gone through a spiritual conversion similar to Newton's. Together, their effect on the local congregation was impressive. In 1768, they found it necessary to start a weekly prayer meeting to meet the needs of an increasing number of parishioners. They also began writing lessons for children. Partly from Cowper's literary influence, and partly because learned vicars were expected to write verses, Newton began to try his hand at hymns, which had become popular through plain language that common people could understand. Several prolific hymn writers were at their most productive in the 18th century, including Isaac Watts, whose hymns Newton had grown up hearing, and Charles Wesley, with whom Newton was familiar. Wesley's brother John, the eventual founder of the Methodist Church, had encouraged Newton to go into the clergy. Watts was a pioneer in English hymn writing, basing his work on the Psalms. The most prevalent hymns by Watts and others were written in the common meter of 8.6.8.6: four-line stanzas alternating lines of eight and six syllables. Newton and Cowper attempted to present a poem or hymn for each prayer meeting. The lyrics to "Amazing Grace" were written in late 1772 and probably used in a prayer meeting for the first time on January 1, 1773. A collection of the poems Newton and Cowper had written for use in services at Olney was bound and published anonymously in 1779 under the title "Olney Hymns". Newton contributed 280 of the 348 texts in "Olney Hymns"; "1 Chronicles 17:16–17, Faith's Review and Expectation" was the title of the poem with the first line "Amazing grace! (how sweet the sound)". The general impact of "Olney Hymns" was immediate, and it became a widely popular tool for evangelicals in Britain for many years. Scholars appreciated Cowper's poetry somewhat more than Newton's plaintive and plain language, driven by his forceful personality. The most prevalent themes in the verses written by Newton in "Olney Hymns" are faith in salvation, wonder at God's grace, his love for Jesus, and his cheerful exclamations of the joy he found in his faith. As a reflection of Newton's connection to his parishioners, he wrote many of the hymns in first person, admitting his own experience with sin. Bruce Hindmarsh in "Sing Them Over Again To Me: Hymns and Hymnbooks in America" considers "Amazing Grace" an excellent example of Newton's testimonial style afforded by the use of this perspective. Several of Newton's hymns were recognized as great works ("Amazing Grace" was not among them), while others seem to have been included to fill in when Cowper was unable to write. Jonathan Aitken calls Newton, specifically referring to "Amazing Grace", an "unashamedly middlebrow lyricist writing for a lowbrow congregation", noting that only twenty-one of the nearly 150 words used in all six verses have more than one syllable. William Phipps in the "Anglican Theological Review" and author James Basker have interpreted the first stanza of "Amazing Grace" as evidence of Newton's realization that his participation in the slave trade was his wretchedness, perhaps representing a wider common understanding of Newton's motivations. 
Newton joined forces with a young man named William Wilberforce, the British Member of Parliament who led the parliamentary campaign to abolish the slave trade in the British Empire, culminating in the Slave Trade Act 1807. However, Newton did not become an ardent and outspoken abolitionist until after he left Olney in the 1780s, and he never connected the writing of the hymn that became "Amazing Grace" to anti-slavery sentiments. The lyrics in "Olney Hymns" were arranged by their association with the Biblical verses that would be used by Newton and Cowper in their prayer meetings and did not address any political objective. For Newton, the beginning of the year was a time to reflect on one's spiritual progress. At the same time, he completed a diary, since lost, that he had begun 17 years earlier, two years after he quit sailing. The last entry of 1772 was a recounting of how much he had changed since then. The title ascribed to the hymn, "1 Chronicles 17:16–17", refers to David's reaction to the prophet Nathan telling him that God intends to maintain his family line forever. Some Christians interpret this as a prediction that Jesus Christ, as a descendant of David, was promised by God as the salvation for all people. Newton's sermon on that January day in 1773 focused on the necessity of expressing one's gratitude for God's guidance, on the conviction that God is involved in the daily lives of Christians though they may not be aware of it, and on the patience for deliverance from the daily trials of life that is warranted when the glories of eternity await. Newton saw himself, like David, as a sinner who had been chosen, perhaps undeservedly, and was humbled by it. According to Newton, unconverted sinners were "blinded by the god of this world" until "mercy came to us not only undeserved but undesired ... our hearts endeavored to shut him out till he overcame us by the power of his grace." The New Testament served as the basis for many of the lyrics of "Amazing Grace". The first verse, for example, can be traced to the story of the Prodigal Son. In the Gospel of Luke the father says, "For this son of mine was dead and is alive again; he was lost, and is found". The story of Jesus healing a blind man who tells the Pharisees that he can now see is told in the Gospel of John. Newton used the words "I was blind but now I see" and declared "Oh to grace how great a debtor!" in his letters and diary entries as early as 1752. The effect of the lyrical arrangement, according to Bruce Hindmarsh, allows an instant release of energy in the exclamation "Amazing grace!", to be followed by a qualifying reply in "how sweet the sound". In "An Annotated Anthology of Hymns", Newton's use of an exclamation at the beginning of his verse is called "crude but effective" in an overall composition that "suggest(s) a forceful, if simple, statement of faith". Grace is recalled three times in the following verse, culminating in Newton's most personal story of his conversion, underscoring the use of his personal testimony with his parishioners. The sermon Newton preached that day was the last that William Cowper heard in Olney, since Cowper's mental instability returned shortly thereafter. Steve Turner, author of "Amazing Grace: The Story of America's Most Beloved Song", suggests Newton may have had his friend in mind, employing the themes of assurance and deliverance from despair for Cowper's benefit. Although it had its roots in England, "Amazing Grace" became an integral part of the Christian tapestry in the United States. 
More than 60 of Newton and Cowper's hymns were republished in other British hymnals and magazines, but "Amazing Grace" was not, appearing only once in a 1780 hymnal sponsored by the Countess of Huntingdon. Scholar John Julian commented in his 1892 "A Dictionary of Hymnology" that outside of the United States, the song was unknown and it was "far from being a good example of Newton's finest work". Between 1789 and 1799, four variations of Newton's hymn were published in the U.S. in Baptist, Dutch Reformed, and Congregationalist hymnodies; by 1830, Presbyterians and Methodists had also included Newton's verses in their hymnals. The greatest influences in the 19th century that propelled "Amazing Grace" to spread across the U.S. and become a staple of religious services in many denominations and regions were the Second Great Awakening and the development of shape note singing communities. A tremendous religious movement swept the U.S. in the early 19th century, marked by the growth and popularity of churches and religious revivals that got their start in Kentucky and Tennessee. Unprecedented gatherings of thousands of people attended camp meetings where they came to experience salvation; preaching was fiery and focused on saving the sinner from temptation and backsliding. Religion was stripped of ornament and ceremony, and made as plain and simple as possible; sermons and songs often used repetition to get across to a rural population of poor and mostly uneducated people the necessity of turning away from sin. Witnessing and testifying became an integral component of these meetings, where a congregation member or even a stranger would rise and recount his turn from a sinful life to one of piety and peace. "Amazing Grace" was one of many hymns that punctuated fervent sermons, although the contemporary style used a refrain, borrowed from other hymns, that relied on simplicity and repetition. Simultaneously, an unrelated movement of communal singing was established throughout the South and Western states. A format for teaching music to illiterate people appeared in 1800. It used four sounds to symbolize the basic scale: fa-sol-la-fa-sol-la-mi-fa. Each sound was accompanied by a specifically shaped note and thus became known as shape note singing. The method was simple to learn and teach, so schools were established throughout the South and West. Communities would come together for an entire day of singing in a large building where they sat in four distinct areas surrounding an open space, one member directing the group as a whole. Most of the music was Christian, but the purpose of communal singing was not primarily spiritual. Communities either could not afford musical accompaniment or rejected it out of a Calvinistic sense of simplicity, so the songs were sung a cappella. It is unknown what music, if any, accompanied the verses written by John Newton when they were originally used in Olney. Contemporary hymnbooks did not contain music and were simply small books of religious poetry. The first known instance of Newton's lines joined to music was in "A Companion to the Countess of Huntingdon's Hymns" (London, 1808), where they are set to the tune "Hephzibah" by English composer John Husband. Common meter hymns were interchangeable with a variety of tunes; more than twenty musical settings of "Amazing Grace" circulated with varying popularity until 1835, when William Walker assigned Newton's words to a traditional song named "New Britain", which was itself an amalgamation of two melodies ("Gallaher" and "St. 
Mary") first published in the "Columbian Harmony" by Charles H. Spilman and Benjamin Shaw (Cincinnati, 1829). Spilman and Shaw, both students at Kentucky's Centre College, compiled their tunebook both for public worship and revivals, to satisfy "the wants of the Church in her triumphal march". Most of the tunes had been previously published, but "Gallaher" and "St. Mary" had not. As neither tune is attributed and both show elements of oral transmission, scholars can only speculate that they are possibly of British origin. A manuscript from 1828 by Lucius Chapin, a famous hymn writer of that time, contains a tune very close to "St. Mary", but that does not mean that he wrote it. "Amazing Grace", with the words written by Newton and joined with "New Britain", the melody most currently associated with it, appeared for the first time in Walker's shape note tunebook "Southern Harmony" in 1847. It was, according to author Steve Turner, a "marriage made in heaven ... The music behind 'amazing' had a sense of awe to it. The music behind 'grace' sounded graceful. There was a rise at the point of confession, as though the author was stepping out into the open and making a bold declaration, but a corresponding fall when admitting his blindness." Walker's collection was enormously popular, selling about 600,000 copies all over the U.S. when the total population was just over 20 million. Another shape note tunebook named "The Sacred Harp" (1844) by Georgia residents Benjamin Franklin White and Elisha J. King became widely influential and continues to be used. Another verse was first recorded in Harriet Beecher Stowe's immensely influential 1852 anti-slavery novel "Uncle Tom's Cabin". Three verses were emblematically sung by Tom in his hour of deepest crisis. He sings the sixth and fifth verses in that order, and Stowe included another verse not written by Newton that had been passed down orally in African American communities for at least 50 years. It was originally one of between 50 and 70 verses of a song titled "Jerusalem, My Happy Home" that first appeared in a 1790 book called "A Collection of Sacred Ballads": "Amazing Grace" came to be an emblem of a Christian movement and a symbol of the U.S. itself as the country was involved in a great political experiment, attempting to employ democracy as a means of government. Shape note singing communities, with all the members sitting around an open center, each song employing a different director, illustrated this in practice. Simultaneously, the U.S. began to expand westward into previously unexplored territory that was often wilderness. The "dangers, toils, and snares" of Newton's lyrics had both literal and figurative meanings for Americans. This became poignantly true during the most serious test of American cohesion in the U.S. Civil War (1861–1865). "Amazing Grace" set to "New Britain" was included in two hymnals distributed to soldiers and with death so real and imminent, religious services in the military became commonplace. The hymn was translated into other languages as well: while on the Trail of Tears, the Cherokee sang Christian hymns as a way of coping with the ongoing tragedy, and a version of the song by Samuel Worcester that had been translated into the Cherokee language became very popular. Although "Amazing Grace" set to "New Britain" was popular, other versions existed regionally. 
Primitive Baptists in the Appalachian region often used "New Britain" with other hymns, and sometimes sang the words of "Amazing Grace" to other folk songs, including titles such as "In the Pines", "Pisgah", "Primrose", and "Evan", as all can be sung in the common meter that makes up the majority of their repertoire. A tune named "Arlington" accompanied Newton's verses as much as "New Britain" for a time in the late 19th century. The preacher Dwight Moody and the musician Ira Sankey heralded another religious revival in the cities of the U.S. and Europe, giving the song international exposure. Moody's preaching and Sankey's musical gifts were significant; their arrangements were the forerunners of gospel music, and churches all over the U.S. were eager to acquire them. Moody and Sankey began publishing their compositions in 1875, and "Amazing Grace" appeared three times with three different melodies, but they were the first to give it its title; hymns were typically published using the first line of the lyrics, or the name of the tune, such as "New Britain". A publisher named Edwin Othello Excell gave the version of "Amazing Grace" set to "New Britain" immense popularity by publishing it in a series of hymnals that were used in urban churches. Excell altered some of Walker's music, making it more contemporary and European, giving "New Britain" some distance from its rural folk-music origins. Excell's version was more palatable for a growing urban middle class and was arranged for larger church choirs. Several editions featuring Newton's first three stanzas and the verse previously included by Harriet Beecher Stowe in "Uncle Tom's Cabin" were published by Excell between 1900 and 1910, and his version of "Amazing Grace" became the standard form of the song in American churches. With the advent of recorded music and radio, "Amazing Grace" began to cross over from primarily a gospel standard to secular audiences. The ability to record, combined with the marketing of records to specific audiences, allowed "Amazing Grace" to take on thousands of different forms in the 20th century. Where Edwin Othello Excell sought to make the singing of "Amazing Grace" uniform throughout thousands of churches, records allowed artists to improvise with the words and music specific to each audience. AllMusic lists more than 7,000 recordings, including re-releases and compilations, as of September 2011. Its first recording is an a cappella version from 1922 by the Sacred Harp Choir. It was included from 1926 to 1930 in Okeh Records' catalogue, which typically concentrated strongly on blues and jazz. Demand was high for black gospel recordings of the song by H. R. Tomlin and J. M. Gates. A poignant sense of nostalgia accompanied the recordings of several gospel and blues singers in the 1940s and 1950s who used the song to remember their grandparents, traditions, and family roots. It was recorded with musical accompaniment for the first time in 1930 by Fiddlin' John Carson, although set to another folk hymn named "At the Cross", not to "New Britain". "Amazing Grace" is emblematic of several kinds of folk music styles, and is often used as the standard example to illustrate such musical techniques as lining out and call and response, which have been practiced in both black and white folk music. Mahalia Jackson's 1947 version received significant radio airplay, and as her popularity grew throughout the 1950s and 1960s, she often sang it at public events such as concerts at Carnegie Hall. 
Author James Basker states that the song has been employed by African Americans as the "paradigmatic Negro spiritual" because it expresses the joy felt at being delivered from slavery and worldly miseries. Anthony Heilbut, author of "The Gospel Sound", states that the "dangers, toils, and snares" of Newton's words are a "universal testimony" of the African American experience. During the civil rights movement and opposition to the Vietnam War, the song took on a political tone. Mahalia Jackson employed "Amazing Grace" for Civil Rights marchers, writing that she used it "to give magical protection – a charm to ward off danger, an incantation to the angels of heaven to descend ... I was not sure the magic worked outside the church walls ... in the open air of Mississippi. But I wasn't taking any chances." Folk singer Judy Collins, who knew the song before she could remember learning it, witnessed Fannie Lou Hamer leading marchers in Mississippi in 1964, singing "Amazing Grace". Collins also considered it a talisman of sorts, and saw its equal emotional impact on the marchers, witnesses, and law enforcement who opposed the civil rights demonstrators. According to fellow folk singer Joan Baez, it was one of the most requested songs from her audiences, but she never realized its origin as a hymn; by the time she was singing it in the 1960s she said it had "developed a life of its own". It even made an appearance at the Woodstock Music Festival in 1969 during Arlo Guthrie's performance. Collins decided to record it in the late 1960s amid an atmosphere of counterculture introspection; she was part of an encounter group that ended a contentious meeting by singing "Amazing Grace", as it was the only song to which all the members knew the words. Her producer was present and suggested she include a version of it on her 1970 album "Whales & Nightingales". Collins, who had a history of alcohol abuse, claimed that the song was able to "pull her through" to recovery. It was recorded in St. Paul's, the chapel at Columbia University, chosen for its acoustics. She chose an "a cappella" arrangement that was close to Edwin Othello Excell's, accompanied by a chorus of amateur singers who were friends of hers. Collins connected it to the Vietnam War, to which she objected: "I didn't know what else to do about the war in Vietnam. I had marched, I had voted, I had gone to jail on political actions and worked for the candidates I believed in. The war was still raging. There was nothing left to do, I thought ... but sing 'Amazing Grace'." Gradually and unexpectedly, the song began to be played on the radio, and then to be requested. It rose to number 15 on the "Billboard" Hot 100, remaining on the charts for 15 weeks, as if, she wrote, her fans had been "waiting to embrace it". In the UK, it charted 8 times between 1970 and 1972, peaking at number 5 and spending a total of 75 weeks on popular music charts. Her rendition also reached number 5 in New Zealand and number 12 in Ireland in 1971. Although Collins used it as a catharsis for her opposition to the Vietnam War, two years after her rendition, the Royal Scots Dragoon Guards, the senior Scottish regiment of the British Army, recorded an instrumental version featuring a bagpipe soloist accompanied by a pipe and drum band. 
The tempo of their arrangement was slowed to allow for the bagpipes, but it was based on Collins': it began with a bagpipe solo introduction similar to her lone voice, then was accompanied by the band of bagpipes and horns, whereas in her version she is backed up by a chorus. It topped the "RPM" national singles chart in Canada for three weeks, and rose as high as number 11 in the U.S. The recording was also controversial, as it combined pipes with a military band; the Pipe Major of the Royal Scots Dragoon Guards was summoned to Edinburgh Castle and chastised for demeaning the bagpipes. Ever since, a bagpipe version has often been played at funeral processions for fallen police, fire, and military personnel. The Royal Scots Dragoon Guards' version of "Amazing Grace" regained popularity in 2007, thirty-five years after it was originally released, when it was recorded for their album "Spirit of the Glen", which reached number 13 in the UK Albums Chart. Aretha Franklin and Rod Stewart also recorded "Amazing Grace" around the same time, and both of their renditions were popular. All four versions were marketed to distinct types of audiences, thereby securing "Amazing Grace" a place as a pop song. Johnny Cash recorded it on his 1975 album "Sings Precious Memories", dedicating it to his older brother Jack, who had been killed in a mill accident when they were boys in Dyess, Arkansas. Cash and his family sang it to themselves while they worked in the cotton fields following Jack's death. Cash often included the song when he toured prisons, saying "For the three minutes that song is going on, everybody is free. It just frees the spirit and frees the person." The U.S. Library of Congress has a collection of 3,000 versions of and songs inspired by "Amazing Grace", some of which were first-time recordings by folklorists Alan and John Lomax, a father and son team who in 1932 traveled thousands of miles across the South to capture the different regional styles of the song. More contemporary renditions include samples from such popular artists as Sam Cooke and the Soul Stirrers (1963), the Byrds (1970), Elvis Presley (1971), Skeeter Davis (1972), Mighty Clouds of Joy (1972), Amazing Rhythm Aces (1975), Willie Nelson (1976), and the Lemonheads (1992). Following the appropriation of the hymn in secular music, "Amazing Grace" became such an icon in American culture that it has been used for a variety of secular purposes and marketing campaigns, placing it in danger of becoming a cliché. It has been mass-produced on souvenirs, lent its name to a Superman villain, appeared on "The Simpsons" to demonstrate the redemption of a murderous character named Sideshow Bob, incorporated into Hare Krishna chants and adapted for Wicca ceremonies. It can also be sung to the theme from "The Mickey Mouse Club", as Garrison Keillor has observed. The hymn has been employed in several films, including "Alice's Restaurant", "Coal Miner's Daughter", and "Silkwood". It is referenced in the 2006 film "Amazing Grace", which highlights Newton's influence on the leading British abolitionist William Wilberforce, and in the upcoming film biography of Newton, "Newton's Grace". The 1982 science fiction film "Star Trek II: The Wrath of Khan" used "Amazing Grace" amid a context of Christian symbolism, to memorialize Mr. Spock following his death, but more practically, because the song has become "instantly recognizable to many in the audience as music that sounds appropriate for a funeral" according to a "Star Trek" scholar. 
Since 1954, when an organ instrumental of "New Britain" became a bestseller, "Amazing Grace" has been associated with funerals and memorial services. It has become a song that inspires hope in the wake of tragedy, a sort of "spiritual national anthem" according to authors Mary Rourke and Emily Gwathmey. For example, President Barack Obama recited and then sang the hymn at the memorial service for Clementa Pinckney, one of the victims of the 2015 Charleston church shooting. In recent years, the words of the hymn have been changed in some religious publications to downplay a sense of imposed self-loathing by its singers. The second line, "That saved a wretch like me!", has been rewritten as "That saved and strengthened me", "save a soul like me", or "that saved and set me free". Kathleen Norris in her book "Amazing Grace: A Vocabulary of Faith" characterizes this transformation of the original words as "wretched English", making the line that replaces the original "laughably bland". Part of the reason for this change has been the altered interpretations of what wretchedness and grace mean. Newton's Calvinistic view of redemption and divine grace formed his perspective: he considered himself a sinner so vile that he was unable to change his life or be redeemed without God's help. Yet his lyrical subtlety, in Steve Turner's opinion, leaves the hymn's meaning open to a variety of Christian and non-Christian interpretations. "Wretch" also represents a period in Newton's life when he saw himself outcast and miserable, as he was when he was enslaved in Sierra Leone; his own arrogance was matched by how far he had fallen in his life. The communal understanding of redemption and human self-worth has changed since Newton's time. Since the 1970s, self-help books, psychology, and some modern expressions of Christianity have instead framed grace as an innate quality within all people, something to be achieved by those inspired or strong enough to find it. In contrast to Newton's vision of wretchedness as his willful sin and distance from God, wretchedness has instead come to mean an obstacle of a physical, social, or spiritual nature to be overcome in order to achieve a state of grace, happiness, or contentment. Given the hymn's immense popularity and iconic status, "grace" and the meaning behind the words of "Amazing Grace" have become as individual as the singer or listener. Bruce Hindmarsh suggests that the secular popularity of "Amazing Grace" is due to the absence of any mention of God in the lyrics until the fourth verse (in Excell's version, the fourth verse begins "When we've been there ten thousand years"), and that the song represents the ability of humanity to transform itself instead of a transformation taking place at the hands of God. To John Newton, however, "grace" had a clearer meaning, as he used the word to represent God or the power of God. The transformative power of the song was investigated by journalist Bill Moyers in a documentary released in 1990. Moyers was inspired to focus on the song's power after watching a performance at Lincoln Center, where the audience consisted of Christians and non-Christians, and he noticed that it had an equal impact on everybody in attendance, unifying them. James Basker also acknowledged this force when he explained why he chose "Amazing Grace" to represent a collection of anti-slavery poetry: "there is a transformative power that is applicable ... 
: the transformation of sin and sorrow into grace, of suffering into beauty, of alienation into empathy and connection, of the unspeakable into imaginative literature." Moyers interviewed Collins, Cash, opera singer Jessye Norman, Appalachian folk musician Jean Ritchie and her family, white Sacred Harp singers in Georgia, black Sacred Harp singers in Alabama, and a prison choir at the Texas State Penitentiary at Huntsville. Collins, Cash, and Norman were unable to discern if the power of the song came from the music or the lyrics. Norman, who once notably sang it at the end of a large outdoor rock concert for Nelson Mandela's 70th birthday, stated, "I don't know whether it's the text – I don't know whether we're talking about the lyrics when we say that it touches so many people – or whether it's that tune that everybody knows." A prisoner interviewed by Moyers explained his literal interpretation of the second verse, "'Twas grace that taught my heart to fear, and grace my fears relieved", by saying that the fear became immediately real to him when he realized he might never get his life in order, a fear compounded by the loneliness and restriction of prison. Gospel singer Marion Williams summed up its effect: "That's a song that gets to everybody". The "Dictionary of American Hymnology" claims it is included in more than a thousand published hymnals, and recommends its use for "occasions of worship when we need to confess with joy that we are saved by God's grace alone; as a hymn of response to forgiveness of sin or as an assurance of pardon; as a confession of faith or after the sermon". 
Alfred Russel Wallace Alfred Russel Wallace (8 January 1823 – 7 November 1913) was an English naturalist, explorer, geographer, anthropologist, and biologist. He is best known for independently conceiving the theory of evolution through natural selection; his paper on the subject was jointly published with some of Charles Darwin's writings in 1858. This prompted Darwin to publish his own ideas in "On the Origin of Species." Wallace did extensive fieldwork, first in the Amazon River basin and then in the Malay Archipelago, where he identified the faunal divide now termed the Wallace Line, which separates the Indonesian archipelago into two distinct parts: a western portion in which the animals are largely of Asian origin, and an eastern portion where the fauna reflect Australasia. He was considered the 19th century's leading expert on the geographical distribution of animal species and is sometimes called the "father of biogeography". Wallace was one of the leading evolutionary thinkers of the 19th century and made many other contributions to the development of evolutionary theory besides being co-discoverer of natural selection. These included the concept of warning colouration in animals, and the Wallace effect, a hypothesis on how natural selection could contribute to speciation by encouraging the development of barriers against hybridisation. Wallace's 1904 book "Man's Place in the Universe" was the first serious attempt by a biologist to evaluate the likelihood of life on other planets. He was also one of the first scientists to write a serious exploration of whether there was life on Mars. Wallace was strongly attracted to unconventional ideas (such as evolution). His advocacy of spiritualism and his belief in a non-material origin for the higher mental faculties of humans strained his relationship with some members of the scientific establishment. 
Aside from scientific work, he was a social activist who was critical of what he considered to be an unjust social and economic system (capitalism) in 19th-century Britain. His interest in natural history resulted in his being one of the first prominent scientists to raise concerns over the environmental impact of human activity. He was also a prolific author who wrote on both scientific and social issues; his account of his adventures and observations during his explorations in Singapore, Indonesia and Malaysia, "The Malay Archipelago", was both popular and highly regarded. Since its publication in 1869 it has never been out of print. Wallace had financial difficulties throughout much of his life. His Amazon and Far Eastern trips were supported by the sale of specimens he collected and, after he lost most of the considerable money he made from those sales in unsuccessful investments, he had to support himself mostly from the publications he produced. Unlike some of his contemporaries in the British scientific community, such as Darwin and Charles Lyell, he had no family wealth to fall back on, and he was unsuccessful in finding a long-term salaried position, receiving no regular income until he was awarded a small government pension, through Darwin's efforts, in 1881. Alfred Wallace was born in the Welsh village of Llanbadoc, near Usk, Monmouthshire. He was the seventh of nine children of Thomas Vere Wallace and Mary Anne Greenell. Mary Anne was English; Thomas Wallace was probably of Scottish ancestry. His family, like many Wallaces, claimed a connection to William Wallace, a leader of Scottish forces during the Wars of Scottish Independence in the 13th century. Thomas Wallace graduated in law, but never practised law. He owned some income-generating property, but bad investments and failed business ventures resulted in a steady deterioration of the family's financial position. His mother was from a middle-class English family from Hertford, north of London. When Wallace was five years old, his family moved to Hertford. There he attended Hertford Grammar School until financial difficulties forced his family to withdraw him in 1836, when he was aged 14. Wallace then moved to London to board with his older brother John, a 19-year-old apprentice builder. This was a stopgap measure until William, his oldest brother, was ready to take him on as an apprentice surveyor. While in London, Alfred attended lectures and read books at the London Mechanics Institute. Here he was exposed to the radical political ideas of the Welsh social reformer Robert Owen and of Thomas Paine. He left London in 1837 to live with William and work as his apprentice for six years. At the end of 1839, they moved to Kington, Hereford, near the Welsh border, before eventually settling at Neath in Glamorgan in Wales. Between 1840 and 1843, Wallace did land surveying work in the countryside of the west of England and Wales. By the end of 1843, William's business had declined due to difficult economic conditions, and Wallace, at the age of 20, left in January. One result of Wallace's early travels is a modern controversy about his nationality. Since Wallace was born in Monmouthshire, some sources have considered him to be Welsh. 
However, some historians have questioned this because neither of his parents was Welsh, his family only briefly lived in Monmouthshire, the Welsh people Wallace knew in his childhood considered him to be English, and because Wallace himself consistently referred to himself as English rather than Welsh (even when writing about his time in Wales). One Wallace scholar has stated that the most reasonable interpretation is therefore that he was an Englishman born in Wales. After a brief period of unemployment, he was hired as a master at the Collegiate School in Leicester to teach drawing, mapmaking, and surveying. Wallace spent many hours at the library in Leicester: he read "An Essay on the Principle of Population" by Thomas Robert Malthus, and one evening he met the entomologist Henry Bates. Bates was 19 years old, and in 1843 he had published a paper on beetles in the journal "Zoologist". He befriended Wallace and started him collecting insects. William died in March 1845, and Wallace left his teaching position to assume control of his brother's firm in Neath, but his brother John and he were unable to make the business work. After a few months, Wallace found work as a civil engineer for a nearby firm that was working on a survey for a proposed railway in the Vale of Neath. Wallace's work on the survey involved spending a lot of time outdoors in the countryside, allowing him to indulge his new passion for collecting insects. Wallace persuaded his brother John to join him in starting another architecture and civil engineering firm, which carried out a number of projects, including the design of a building for the Neath Mechanics' Institute, founded in 1843. William Jevons, the founder of that institute, was impressed by Wallace and persuaded him to give lectures there on science and engineering. In the autumn of 1846, John and he purchased a cottage near Neath, where they lived with their mother and sister Fanny (his father had died in 1843). During this period, he read avidly, exchanging letters with Bates about Robert Chambers' anonymously published evolutionary treatise "Vestiges of the Natural History of Creation", Charles Darwin's "The Voyage of the Beagle", and Charles Lyell's "Principles of Geology". Inspired by the chronicles of earlier travelling naturalists, including Alexander von Humboldt, Charles Darwin and especially William Henry Edwards, Wallace decided that he too wanted to travel abroad as a naturalist. In 1848, Wallace and Henry Bates left for Brazil aboard the "Mischief". Their intention was to collect insects and other animal specimens in the Amazon Rainforest for their private collections, selling the duplicates to museums and collectors back in Britain in order to fund the trip. Wallace also hoped to gather evidence of the transmutation of species. Wallace and Bates spent most of their first year collecting near Belém, then explored inland separately, occasionally meeting to discuss their findings. In 1849, they were briefly joined by another young explorer, botanist Richard Spruce, along with Wallace's younger brother Herbert. Herbert left soon thereafter (dying two years later from yellow fever), but Spruce, like Bates, would spend over ten years collecting in South America. Wallace continued charting the Rio Negro for four years, collecting specimens and making notes on the peoples and languages he encountered as well as the geography, flora, and fauna. On 12 July 1852, Wallace embarked for the UK on the brig "Helen". 
After 26 days at sea, the ship's cargo caught fire and the crew was forced to abandon ship. All of the specimens Wallace had on the ship, mostly collected during the last, and most interesting, two years of his trip, were lost. He managed to save a few notes and pencil sketches and little else. Wallace and the crew spent ten days in an open boat before being picked up by the brig "Jordeson", which was sailing from Cuba to London. The "Jordeson"'s provisions were strained by the unexpected passengers, but after a difficult passage on very short rations the ship finally reached its destination on 1 October 1852. After his return to the UK, Wallace spent 18 months in London living on the insurance payment for his lost collection and selling a few specimens that had been shipped back to Britain before he began his exploration of the Rio Negro, which had taken him as far as the Indian town of Jativa in the Orinoco River basin and as far west as Micúru (Mitú) on the Uaupés River. He was deeply impressed by the grandeur of the virgin forest, by the variety and beauty of the butterflies and birds, and by his first encounter with the Indians of the Uaupés River area, an experience he never forgot. During this period, despite having lost almost all of the notes from his South American expedition, he wrote six academic papers (which included "On the Monkeys of the Amazon") and two books: "Palm Trees of the Amazon and Their Uses" and "Travels on the Amazon". He also made connections with a number of other British naturalists—most significantly, Darwin. From 1854 to 1862, age 31 to 39, Wallace travelled through the Malay Archipelago or East Indies (now Singapore, Malaysia and Indonesia), to collect specimens for sale and to study natural history. A set of 80 bird skeletons he collected in Indonesia and associated documentation can be found in the Cambridge University Museum of Zoology. His observations of the marked zoological differences across a narrow strait in the archipelago led to his proposing the zoogeographical boundary now known as the Wallace Line. Wallace collected more than 126,000 specimens in the Malay Archipelago (more than 80,000 beetles alone). Several thousand of them represented species new to science. One of his better-known species descriptions during this trip is that of the gliding tree frog "Rhacophorus nigropalmatus", known as Wallace's flying frog. While he was exploring the archipelago, he refined his thoughts about evolution and had his famous insight on natural selection. In 1858 he sent an article outlining his theory to Darwin; it was published, along with a description of Darwin's own theory, in the same year. Accounts of his studies and adventures there were eventually published in 1869 as "The Malay Archipelago", which became one of the most popular books of scientific exploration of the 19th century, and has never been out of print. It was praised by scientists such as Darwin (to whom the book was dedicated) and Charles Lyell, and by non-scientists such as the novelist Joseph Conrad, who called it his "favorite bedside companion" and used it as a source of information for several of his novels, especially "Lord Jim". In 1862, Wallace returned to England, where he moved in with his sister Fanny Sims and her husband Thomas. While recovering from his travels, Wallace organised his collections and gave numerous lectures about his adventures and discoveries to scientific societies such as the Zoological Society of London. 
Later that year, he visited Darwin at Down House, and became friendly with both Charles Lyell and Herbert Spencer. During the 1860s, Wallace wrote papers and gave lectures defending natural selection. He also corresponded with Darwin about a variety of topics, including sexual selection, warning colouration, and the possible effect of natural selection on hybridisation and the divergence of species. In 1865, he began investigating spiritualism. After a year of courtship, Wallace became engaged in 1864 to a young woman whom, in his autobiography, he would only identify as Miss L. Miss L. was the daughter of Lewis Leslie who played chess with Wallace. However, to Wallace's great dismay, she broke off the engagement. In 1866, Wallace married Annie Mitten. Wallace had been introduced to Mitten through the botanist Richard Spruce, who had befriended Wallace in Brazil and who was also a good friend of Annie Mitten's father, William Mitten, an expert on mosses. In 1872, Wallace built the Dell, a house of concrete, on land he leased in Grays in Essex, where he lived until 1876. The Wallaces had three children: Herbert (1867–1874), Violet (1869–1945), and William (1871–1951). In the late 1860s and 1870s, Wallace was very concerned about the financial security of his family. While he was in the Malay Archipelago, the sale of specimens had brought in a considerable amount of money, which had been carefully invested by the agent who sold the specimens for Wallace. However, on his return to the UK, Wallace made a series of bad investments in railways and mines that squandered most of the money, and he found himself badly in need of the proceeds from the publication of "The Malay Archipelago". Despite assistance from his friends, he was never able to secure a permanent salaried position such as a curatorship in a museum. To remain financially solvent, Wallace worked grading government examinations, wrote 25 papers for publication between 1872 and 1876 for various modest sums, and was paid by Lyell and Darwin to help edit some of their own works. In 1876, Wallace needed a £500 advance from the publisher of "The Geographical Distribution of Animals" to avoid having to sell some of his personal property. Darwin was very aware of Wallace's financial difficulties and lobbied long and hard to get Wallace awarded a government pension for his lifetime contributions to science. When the £200 annual pension was awarded in 1881, it helped to stabilise Wallace's financial position by supplementing the income from his writings. John Stuart Mill was impressed by remarks criticising English society that Wallace had included in "The Malay Archipelago". Mill asked him to join the general committee of his Land Tenure Reform Association, but the association dissolved after Mill's death in 1873. Wallace had written only a handful of articles on political and social issues between 1873 and 1879 when, at the age of 56, he entered the debates over trade policy and land reform in earnest. He believed that rural land should be owned by the state and leased to people who would make whatever use of it that would benefit the largest number of people, thus breaking the often-abused power of wealthy landowners in British society. In 1881, Wallace was elected as the first president of the newly formed Land Nationalisation Society. In the next year, he published a book, "Land Nationalisation; Its Necessity and Its Aims", on the subject. 
He criticised the UK's free trade policies for the negative impact they had on working-class people. In 1889, Wallace read "Looking Backward" by Edward Bellamy and declared himself a socialist, despite his earlier foray as a speculative investor. After reading "Progress and Poverty", the best selling book by the progressive land reformist Henry George, Wallace described it as "Undoubtedly the most remarkable and important book of the present century." Wallace opposed eugenics, an idea supported by other prominent 19th-century evolutionary thinkers, on the grounds that contemporary society was too corrupt and unjust to allow any reasonable determination of who was fit or unfit. In the 1890 article "Human Selection" he wrote, "Those who succeed in the race for wealth are by no means the best or the most intelligent ...". In 1898, Wallace wrote a paper advocating a pure paper money system, not backed by silver or gold, which impressed the economist Irving Fisher so much that he dedicated his 1920 book "Stabilizing the Dollar" to Wallace. Wallace wrote on other social and political topics including his support for women's suffrage, and repeatedly on the dangers and wastefulness of militarism. In an essay published in 1899 Wallace called for popular opinion to be rallied against warfare by showing people: "...that all modern wars are dynastic; that they are caused by the ambition, the interests, the jealousies, and the insatiable greed of power of their rulers, or of the great mercantile and financial classes which have power and influence over their rulers; and that the results of war are never good for the people, who yet bear all its burthens". In a letter published by the Daily Mail in 1909, with aviation in its infancy, he advocated an international treaty to ban the military use of aircraft, arguing against the idea "...that this new horror is "inevitable," and that all we can do is to be sure and be in the front rank of the aerial assassins—for surely no other term can so fitly describe the dropping of, say, ten thousand bombs at midnight into an enemy's capital from an invisible flight of airships." In 1898, Wallace published a book entitled "The Wonderful Century: Its Successes and Its Failures" about developments in the 19th century. The first part of the book covered the major scientific and technical advances of the century; the second part covered what Wallace considered to be its social failures including: the destruction and waste of wars and arms races, the rise of the urban poor and the dangerous conditions in which they lived and worked, a harsh criminal justice system that failed to reform criminals, abuses in a mental health system based on privately owned sanatoriums, the environmental damage caused by capitalism, and the evils of European colonialism. Wallace continued his social activism for the rest of his life, publishing the book "The Revolt of Democracy" just weeks before his death. Wallace continued his scientific work in parallel with his social commentary. In 1880, he published "Island Life" as a sequel to "The Geographic Distribution of Animals". In November 1886, Wallace began a ten-month trip to the United States to give a series of popular lectures. Most of the lectures were on Darwinism (evolution through natural selection), but he also gave speeches on biogeography, spiritualism, and socio-economic reform. During the trip, he was reunited with his brother John who had emigrated to California years before. 
He also spent a week in Colorado, with the American botanist Alice Eastwood as his guide, exploring the flora of the Rocky Mountains and gathering evidence that would lead him to a theory on how glaciation might explain certain commonalities between the mountain flora of Europe, Asia and North America, which he published in 1891 in the paper "English and American Flowers". He met many other prominent American naturalists and viewed their collections. His 1889 book "Darwinism" used information he collected on his American trip, and information he had compiled for the lectures. On 7 November 1913, Wallace died at home in the country house he called Old Orchard, which he had built a decade earlier. He was 90 years old. His death was widely reported in the press. "The New York Times" called him "the last of the giants belonging to that wonderful group of intellectuals that included, among others, Darwin, Huxley, Spencer, Lyell, and Owen, whose daring investigations revolutionised and evolutionised the thought of the century." Another commentator in the same edition said "No apology need be made for the few literary or scientific follies of the author of that great book on the 'Malay Archipelago'." Some of Wallace's friends suggested that he be buried in Westminster Abbey, but his wife followed his wishes and had him buried in the small cemetery at Broadstone, Dorset. Several prominent British scientists formed a committee to have a medallion of Wallace placed in Westminster Abbey near where Darwin had been buried. The medallion was unveiled on 1 November 1915. Unlike Darwin, Wallace began his career as a travelling naturalist already believing in the transmutation of species. The concept had been advocated by Jean-Baptiste Lamarck, Geoffroy Saint-Hilaire, Erasmus Darwin, and Robert Grant, among others. It was widely discussed, but not generally accepted by leading naturalists, and was considered to have radical, even revolutionary connotations. Prominent anatomists and geologists such as Georges Cuvier, Richard Owen, Adam Sedgwick, and Charles Lyell attacked it vigorously. It has been suggested that Wallace accepted the idea of the transmutation of species in part because he was always inclined to favour radical ideas in politics, religion and science, and because he was unusually open to marginal, even fringe, ideas in science. He was also profoundly influenced by Robert Chambers' work, "Vestiges of the Natural History of Creation", a highly controversial work of popular science published anonymously in 1844 that advocated an evolutionary origin for the solar system, the earth, and living things. Wallace wrote to Henry Bates in 1845: I have a rather more favourable opinion of the 'Vestiges' than you appear to have. I do not consider it a hasty generalization, but rather as an ingenious hypothesis strongly supported by some striking facts and analogies, but which remains to be proven by more facts and the additional light which more research may throw upon the problem. It furnishes a subject for every student of nature to attend to; every fact he observes will make either for or against it, and it thus serves both as an incitement to the collection of facts, and an object to which they can be applied when collected. In 1847, he wrote to Bates: I should like to take some one family [of beetles] to study thoroughly, principally with a view to the theory of the origin of species. By that means I am strongly of opinion that some definite results might be arrived at. 
Wallace deliberately planned some of his field work to test the hypothesis that under an evolutionary scenario closely related species should inhabit neighbouring territories. During his work in the Amazon basin, he came to realise that geographical barriers—such as the Amazon and its major tributaries—often separated the ranges of closely allied species, and he included these observations in his 1853 paper "On the Monkeys of the Amazon". Near the end of the paper he asks the question, "Are very closely allied species ever separated by a wide interval of country?" In February 1855, while working in Sarawak on the island of Borneo, Wallace wrote "On the Law which has Regulated the Introduction of New Species", a paper which was published in the "Annals and Magazine of Natural History" in September 1855. In this paper, he discussed observations regarding the geographic and geologic distribution of both living and fossil species, what would become known as biogeography. His conclusion that "Every species has come into existence coincident both in space and time with a closely allied species" has come to be known as the "Sarawak Law". Wallace thus answered the question he had posed in his earlier paper on the monkeys of the Amazon river basin. Although it contained no mention of any possible mechanisms for evolution, this paper foreshadowed the momentous paper he would write three years later. The paper shook Charles Lyell's belief that species were immutable. Although his friend Charles Darwin had written to him in 1842 expressing support for transmutation, Lyell had continued to be strongly opposed to the idea. Around the start of 1856, he told Darwin about Wallace's paper, as did Edward Blyth who thought it "Good! Upon the whole! ... Wallace has, I think put the matter well; and according to his theory the various domestic races of animals have been fairly developed into "species"." Despite this hint, Darwin mistook Wallace's conclusion for the progressive creationism of the time and wrote that it was "nothing very new ... Uses my simile of tree [but] it seems all creation with him." Lyell was more impressed, and opened a notebook on species, in which he grappled with the consequences, particularly for human ancestry. Darwin had already shown his theory to their mutual friend Joseph Hooker and now, for the first time, he spelt out the full details of natural selection to Lyell. Although Lyell could not agree, he urged Darwin to publish to establish priority. Darwin demurred at first, then began writing up a "species sketch" of his continuing work in May 1856. By February 1858, Wallace had been convinced by his biogeographical research in the Malay Archipelago of the reality of evolution. As he later wrote in his autobiography: The problem then was not only how and why do species change, but how and why do they change into new and well defined species, distinguished from each other in so many ways; why and how they become so exactly adapted to distinct modes of life; and why do all the intermediate grades die out (as geology shows they have died out) and leave only clearly defined and well marked species, genera, and higher groups of animals? According to his autobiography, it was while he was in bed with a fever that Wallace thought about Thomas Robert Malthus's idea of positive checks on human population growth and came up with the idea of natural selection. 
Wallace said in his autobiography that he was on the island of Ternate at the time; but historians have questioned this, saying that on the basis of the journal he kept at the time, he was on the island of Gilolo. From 1858 to 1861 he rented a house on Ternate from the Dutchman Maarten Dirk van Renesse van Duivenbode. He used this house as a base camp for expeditions to other islands such as Gilolo. Wallace describes how he discovered natural selection as follows: "It then occurred to me that these causes or their equivalents are continually acting in the case of animals also; and as animals usually breed much more quickly than does mankind, the destruction every year from these causes must be enormous in order to keep down the numbers of each species, since evidently they do not increase regularly from year to year, as otherwise the world would long ago have been crowded with those that breed most quickly. Vaguely thinking over the enormous and constant destruction which this implied, it occurred to me to ask the question, why do some die and some live? And the answer was clearly, on the whole the best fitted live ... and considering the amount of individual variation that my experience as a collector had shown me to exist, then it followed that all the changes necessary for the adaptation of the species to the changing conditions would be brought about ... In this way every part of an animals organization could be modified exactly as required, and in the very process of this modification the unmodified would die out, and thus the definite characters and the clear isolation of each new species would be explained." Wallace had once briefly met Darwin, and was one of the correspondents whose observations Darwin used to support his own theories. Although Wallace's first letter to Darwin has been lost, Wallace carefully kept the letters he received. In the first letter, dated 1 May 1857, Darwin commented that Wallace's letter of 10 October which he had recently received, as well as Wallace's paper "On the Law which has regulated the Introduction of New Species" of 1855, showed that they were both thinking alike and to some extent reaching similar conclusions, and said that he was preparing his own work for publication in about two years time. The second letter, dated 22 December 1857, said how glad he was that Wallace was theorising about distribution, adding that "without speculation there is no good and original observation" while commenting that "I believe I go much further than you". Wallace trusted Darwin's opinion on the matter and sent him his February 1858 essay, "On the Tendency of Varieties to Depart Indefinitely From the Original Type", with the request that Darwin would review it and pass it on to Charles Lyell if he thought it worthwhile. Although Wallace had sent several articles for journal publication during his travels through the Malay archipelago, the Ternate essay was in a private letter. On 18 June 1858, Darwin received the essay from Wallace. While Wallace's essay obviously did not employ Darwin's term "natural selection", it did outline the mechanics of an evolutionary divergence of species from similar ones due to environmental pressures. In this sense, it was very similar to the theory that Darwin had worked on for twenty years, but had yet to publish. Darwin sent the manuscript to Charles Lyell with a letter saying "he could not have made a better short abstract! Even his terms now stand as heads of my chapters ... 
he does not say he wishes me to publish, but I shall, of course, at once write and offer to send to any journal." Distraught about the illness of his baby son, Darwin put the problem to Charles Lyell and Joseph Hooker, who decided to publish the essay in a joint presentation together with unpublished writings which highlighted Darwin's priority. Wallace's essay was presented to the Linnean Society of London on 1 July 1858, along with excerpts from an essay which Darwin had disclosed privately to Hooker in 1847 and a letter Darwin had written to Asa Gray in 1857. Communication with Wallace in the far-off Malay Archipelago was impossible without months of delay, so he was not part of this rapid publication. Fortunately, Wallace accepted the arrangement after the fact, happy that he had been included at all, and never expressed public or private bitterness. Darwin's social and scientific status was far greater than Wallace's, and it was unlikely that, without Darwin, Wallace's views on evolution would have been taken seriously. Lyell and Hooker's arrangement relegated Wallace to the position of co-discoverer, and he was not the social equal of Darwin or the other prominent British natural scientists. However, the joint reading of their papers on natural selection associated Wallace with the more famous Darwin. This, combined with Darwin's (as well as Hooker's and Lyell's) advocacy on his behalf, would give Wallace greater access to the highest levels of the scientific community. The reaction to the reading was muted, with the president of the Linnean Society remarking in May 1859 that the year had not been marked by any striking discoveries; but, with Darwin's publication of "On the Origin of Species" later in 1859, its significance became apparent. When Wallace returned to the UK, he met Darwin. Although some of Wallace's iconoclastic opinions in the ensuing years would test Darwin's patience, they remained on friendly terms for the rest of Darwin's life. Over the years, a few people have questioned this version of events. In the early 1980s, two books, one written by Arnold Brackman and another by John Langdon Brooks, even suggested not only that there had been a conspiracy to rob Wallace of his proper credit, but that Darwin had actually stolen a key idea from Wallace to finish his own theory. These claims have been examined in detail by a number of scholars who have not found them to be convincing. Research into shipping schedules has shown that, contrary to these accusations, Wallace's letter could not have been delivered earlier than the date shown in Darwin's letter to Lyell. After the publication of Darwin's "On the Origin of Species", Wallace became one of its staunchest defenders on his return to England in 1862. In one incident in 1863 that particularly pleased Darwin, Wallace published the short paper "Remarks on the Rev. S. Haughton's Paper on the Bee's Cell, And on the Origin of Species" in order to rebut a paper by a professor of geology at the University of Dublin that had sharply criticised Darwin's comments in the "Origin" on how hexagonal honey bee cells could have evolved through natural selection. An even lengthier defence of Darwin's work was "Creation by Law", a review Wallace wrote in 1867 for the "Quarterly Journal of Science" of the book "The Reign of Law", which had been written by George Campbell, the 8th Duke of Argyll, as a refutation of natural selection. 
After an 1870 meeting of the British Science Association, Wallace wrote to Darwin complaining that there were "no opponents left who know anything of natural history, so that there are none of the good discussions we used to have." Historians of science have noted that, while Darwin considered the ideas in Wallace's paper to be essentially the same as his own, there were differences. Darwin emphasised competition between individuals of the same species to survive and reproduce, whereas Wallace emphasised environmental pressures on varieties and species forcing them to become adapted to their local conditions, leading populations in different locations to diverge. Some historians, notably Peter J. Bowler, have suggested the possibility that in the paper he mailed to Darwin, Wallace was not discussing selection of individual variations at all but rather group selection. However, Malcolm Kottler has shown that this notion is incorrect and Wallace was indeed discussing individual variations. Others have noted that another difference was that Wallace appeared to have envisioned natural selection as a kind of feedback mechanism keeping species and varieties adapted to their environment. They point to a largely overlooked passage of Wallace's famous 1858 paper: "The action of this principle is exactly like that of the centrifugal governor of the steam engine, which checks and corrects any irregularities almost before they become evident; and in like manner no unbalanced deficiency in the animal kingdom can ever reach any conspicuous magnitude, because it would make itself felt at the very first step, by rendering existence difficult and extinction almost sure soon to follow." The cybernetician and anthropologist Gregory Bateson would observe in the 1970s that, though writing it only as an example, Wallace had "probably said the most powerful thing that'd been said in the 19th Century". Bateson revisited the topic in his 1979 book "Mind and Nature: A Necessary Unity", and other scholars have continued to explore the connection between natural selection and systems theory. In 1867, Darwin wrote to Wallace about a problem he was having understanding how some caterpillars could have evolved conspicuous colour schemes. Darwin had come to believe that sexual selection, an agency to which Wallace did not attribute the same importance as Darwin did, explained many conspicuous animal colour schemes. However, Darwin realised that this could not apply to caterpillars. Wallace responded that he and Henry Bates had observed that many of the most spectacular butterflies had a peculiar odour and taste, and that he had been told by John Jenner Weir that birds would not eat a certain kind of common white moth because they found it unpalatable. Since the white moth was, as Wallace put it, "as conspicuous at dusk as a coloured caterpillar in the daylight", he wrote back to Darwin that it seemed likely that the conspicuous colour scheme served as a warning to predators and thus could have evolved through natural selection. Darwin was impressed by the idea. At a subsequent meeting of the Entomological Society, Wallace asked for any evidence anyone might have on the topic. In 1869, Weir published data from experiments and observations involving brightly coloured caterpillars that supported Wallace's idea. Warning coloration was one of a number of contributions Wallace made in the area of the evolution of animal coloration in general and the concept of protective coloration in particular. 
It was also part of a lifelong disagreement Wallace had with Darwin over the importance of sexual selection. In his 1878 book "Tropical Nature and Other Essays", he wrote extensively on the coloration of animals and plants and proposed alternative explanations for a number of cases Darwin had attributed to sexual selection. He revisited the topic at length in his 1889 book "Darwinism". In 1890, he wrote a critical review in "Nature" of his friend Edward Bagnall Poulton's "The Colours of Animals", which supported Darwin on sexual selection, attacking especially Poulton's claims on the "aesthetic preferences of the insect world". In 1889, Wallace wrote the book "Darwinism", which explained and defended natural selection. In it, he proposed the hypothesis that natural selection could drive the reproductive isolation of two varieties by encouraging the development of barriers against hybridisation. Thus it might contribute to the development of new species. He suggested the following scenario. When two populations of a species had diverged beyond a certain point, each adapted to particular conditions, hybrid offspring would be less well-adapted than either parent form and, at that point, natural selection would tend to eliminate the hybrids. Furthermore, under such conditions, natural selection would favour the development of barriers to hybridisation, as individuals that avoided hybrid matings would tend to have more fit offspring, and thus contribute to the reproductive isolation of the two incipient species. This idea came to be known as the Wallace effect, later referred to as reinforcement. Wallace had suggested to Darwin, in private correspondence as early as 1868, that natural selection could play a role in preventing hybridisation, but he had not worked it out to this level of detail. It continues to be a topic of research in evolutionary biology today, with both computer simulation and empirical results supporting its validity. In 1864, Wallace published a paper, "The Origin of Human Races and the Antiquity of Man Deduced from the Theory of 'Natural Selection'", applying the theory to humankind. Darwin had not yet publicly addressed the subject, although Thomas Huxley had in "Evidence as to Man's Place in Nature". Wallace explained the apparent stability of the human stock by pointing to the vast gap in cranial capacities between humans and the great apes. Unlike some other Darwinists, including Darwin himself, he did not "regard modern primitives as almost filling the gap between man and ape". He saw the evolution of humans in two stages: achieving a bipedal posture freeing the hands to carry out the dictates of the brain, and the "recognition of the human brain as a totally new factor in the history of life. Wallace was apparently the first evolutionist to recognize clearly that ... with the emergence of that bodily specialization which constitutes the human brain, bodily specialization itself might be said to be outmoded." For this paper he won Darwin's praise. Shortly afterwards, Wallace became a spiritualist. At about the same time, he began to maintain that natural selection could not account for mathematical, artistic, or musical genius, nor for metaphysical musings, wit, and humour. He eventually said that something in "the unseen universe of Spirit" had interceded at least three times in history. The first was the creation of life from inorganic matter. The second was the introduction of consciousness in the higher animals. 
And the third was the generation of the higher mental faculties in humankind. He also believed that the raison d'être of the universe was the development of the human spirit. These views greatly disturbed Darwin, who argued that spiritual appeals were not necessary and that sexual selection could easily explain apparently non-adaptive mental phenomena. While some historians have concluded that Wallace's belief that natural selection was insufficient to explain the development of consciousness and the human mind was directly caused by his adoption of spiritualism, other Wallace scholars have disagreed, and some maintain that Wallace never believed natural selection applied to those areas. Reaction to Wallace's ideas on this topic among leading naturalists at the time varied. Charles Lyell endorsed Wallace's views on human evolution rather than Darwin's. Wallace's belief that human consciousness could not be entirely a product of purely material causes was shared by a number of prominent intellectuals in the late 19th and early 20th centuries. However, many, including Huxley, Hooker, and Darwin himself, were critical of Wallace. As the historian of science Michael Shermer has stated, Wallace's views in this area were at odds with two major tenets of the emerging Darwinian philosophy, which were that evolution was not teleological (purpose driven) and that it was not anthropocentric (human-centred). Much later in his life Wallace returned to these themes, that evolution suggested that the universe might have a purpose and that certain aspects of living organisms might not be explainable in terms of purely materialistic processes, in a 1909 magazine article entitled "The World of Life", which he later expanded into a book of the same name; a work that Shermer said anticipated some ideas about design in nature and directed evolution that would arise from various religious traditions throughout the 20th century. In many accounts of the development of evolutionary theory, Wallace is mentioned only in passing as simply being the stimulus to the publication of Darwin's own theory. In reality, Wallace developed his own distinct evolutionary views which diverged from Darwin's, and was considered by many (especially Darwin) to be a leading thinker on evolution in his day, whose ideas could not be ignored. One historian of science has pointed out that, through both private correspondence and published works, Darwin and Wallace exchanged knowledge and stimulated each other's ideas and theories over an extended period. Wallace is the most-cited naturalist in Darwin's "Descent of Man", occasionally in strong disagreement. Both Darwin and Wallace agreed on the importance of natural selection, and some of the factors responsible for it: competition between species and geographical isolation. But Wallace believed that evolution had a purpose ("teleology") in maintaining species' fitness to their environment, whereas Darwin hesitated to attribute any purpose to a random natural process. Scientific discoveries since the 19th century support Darwin's viewpoint by identifying several additional mechanisms and triggers. Wallace remained an ardent defender of natural selection for the rest of his life. By the 1880s, evolution was widely accepted in scientific circles. In 1889, Wallace published the book "Darwinism" as a response to the scientific critics of natural selection. Of all Wallace's books, it is the most cited by scholarly publications. 
In 1872, at the urging of many of his friends, including Darwin, Philip Sclater, and Alfred Newton, Wallace began research for a general review of the geographic distribution of animals. He was unable to make much progress initially, in part because classification systems for many types of animals were in flux at the time. He resumed the work in earnest in 1874 after the publication of a number of new works on classification. Extending the system developed by Sclater for birds—which divided the earth into six separate geographic regions for describing species distribution—to cover mammals, reptiles and insects as well, Wallace created the basis for the zoogeographic regions still in use today. He discussed all of the factors then known to influence the current and past geographic distribution of animals within each geographical region. These included the effects of the appearance and disappearance of land bridges (such as the one currently connecting North America and South America) and the effects of periods of increased glaciation. He provided maps that displayed factors, such as elevation of mountains, depths of oceans, and the character of regional vegetation, that affected the distribution of animals. He also summarised all the known families and genera of the higher animals and listed their known geographic distributions. The text was organised so that it would be easy for a traveller to learn what animals could be found in a particular location. The resulting two-volume work, "The Geographical Distribution of Animals", was published in 1876 and would serve as the definitive text on zoogeography for the next 80 years. In this book Wallace did not confine himself to the biogeography of living species, but also included evidence from the fossil record to discuss the processes of evolution and migration that had led to the geographical distribution of modern animal species. For example, he discussed how fossil evidence showed that tapirs had originated in the Northern Hemisphere, migrating between North America and Eurasia and then, much more recently, to South America, after which the northern species became extinct, leaving the modern distribution of two isolated groups of tapir species in South America and Southeast Asia. Wallace was very aware of, and interested in, the mass extinction of megafauna in the late Pleistocene. In "The Geographical Distribution of Animals" (1876) he wrote, "We live in a zoologically impoverished world, from which all the hugest, and fiercest, and strangest forms have recently disappeared". He added that he believed the most likely cause of the rapid extinctions had been glaciation, but by the time he wrote "World of Life" (1911) he had come to believe those extinctions were "due to man's agency". In 1880, Wallace published the book "Island Life" as a sequel to "The Geographical Distribution of Animals". It surveyed the distribution of both animal and plant species on islands. Wallace classified islands into three different types. Oceanic islands, such as the Galapagos and Hawaiian Islands (then known as the Sandwich Islands), had formed in mid-ocean and had never been part of any large continent. Such islands were characterised by a complete lack of terrestrial mammals and amphibians, and their inhabitants (with the exception of migratory birds and species introduced by human activity) were typically the result of accidental colonisation and subsequent evolution. 
He divided continental islands into two separate classes depending on whether they had recently been part of a continent (like Britain) or much less recently (like Madagascar) and discussed how that difference affected the flora and fauna. He talked about how isolation affected evolution and how that could result in the preservation of classes of animals, such as the lemurs of Madagascar that were remnants of once widespread continental faunas. He extensively discussed how changes of climate, particularly periods of increased glaciation, may have affected the distribution of flora and fauna on some islands, and the first portion of the book discusses possible causes of these great ice ages. "Island Life" was considered a very important work at the time of its publication. It was discussed extensively in scientific circles both in published reviews and in private correspondence. Wallace's extensive work in biogeography made him aware of the impact of human activities on the natural world. In "Tropical Nature and Other Essays" (1878), he warned about the dangers of deforestation and soil erosion, especially in tropical climates prone to heavy rainfall. Noting the complex interactions between vegetation and climate, he warned that the extensive clearing of rainforest for coffee cultivation in Ceylon (Sri Lanka) and India would adversely impact the climate in those countries and lead to their eventual impoverishment due to soil erosion. In "Island Life", Wallace again mentioned deforestation and also the impact of invasive species. On the impact of European colonisation on the island of Saint Helena, he wrote: "... yet the general aspect of the island is now so barren and forbidding that some persons find it difficult to believe that it was once all green and fertile. The cause of this change is, however, very easily explained. The rich soil formed by decomposed volcanic rock and vegetable deposits could only be retained on the steep slopes so long as it was protected by the vegetation to which it in great part owed its origin. When this was destroyed, the heavy tropical rains soon washed away the soil, and has left a vast expanse of bare rock or sterile clay. This irreparable destruction was caused, in the first place, by goats, which were introduced by the Portuguese in 1513, and increased so rapidly that in 1588 they existed in the thousands. These animals are the greatest of all foes to trees, because they eat off the young seedlings, and thus prevent the natural restoration of the forest. They were, however, aided by the reckless waste of man. The East India Company took possession of the island in 1651, and about the year 1700 it began to be seen that the forests were fast diminishing, and required some protection. Two of the native trees, redwood and ebony, were good for tanning, and, to save trouble, the bark was wastefully stripped from the trunks only, the remainder being left to rot; while in 1709 a large quantity of the rapidly disappearing ebony was used to burn lime for building fortifications!" Wallace's comments on the environment grew more strident later in his career, particularly in "The World of Life" (1911). Wallace's 1904 book "Man's Place in the Universe" was the first serious attempt by a biologist to evaluate the likelihood of life on other planets. He concluded that the Earth was the only planet in the solar system that could possibly support life, mainly because it was the only one in which water could exist in the liquid phase. 
More controversially, he maintained that it was unlikely that other stars in the galaxy could have planets with the necessary properties (the existence of other galaxies not having been proved at the time). His treatment of Mars in this book was brief, and in 1907, Wallace returned to the subject with a book "Is Mars Habitable?" to criticise the claims made by Percival Lowell that there were Martian canals built by intelligent beings. Wallace did months of research, consulted various experts, and produced his own scientific analysis of the Martian climate and atmospheric conditions. Among other things, Wallace pointed out that spectroscopic analysis had shown no signs of water vapour in the Martian atmosphere, that Lowell's analysis of Mars's climate was seriously flawed and badly overestimated the surface temperature, and that low atmospheric pressure would make liquid water, let alone a planet-girding irrigation system, impossible. Richard Milner comments: "It was the brilliant and eccentric evolutionist Alfred Russel Wallace ... who effectively debunked Lowell's illusionary network of Martian canals." Wallace originally became interested in the topic because his anthropocentric philosophy inclined him to believe that man would likely be unique in the universe. Wallace also wrote poetic verse, an example being 'A Description Of Javita' from his book 'Travels on the Amazon'. The poem begins:
'Tis where the streams divide, to swell the floods
Of the two mighty rivers of our globe;
Where gushing brooklets in their narrow beds
It continues then to describe the people of the village and their lives in detail, although it is not an idyllic description:
There is an Indian village; all around,
The dark, eternal, boundless forest spreads
Its varied foliage. Stately palm-trees rise
On every side, and numerous trees unknown
Save by strange names uncouth to English ears.
Here I dwelt awhile the one white man
Among perhaps two hundred living souls.
They pass a peaceful and contented life
The poem is a lyrical description of the life of a tribe he was living with along the Amazon river. While it has echoes of Tennyson in the rhymes and rhythms, the poem itself is not overly Romantic, serving more as a tool to set the scene and then draw contrast between the lives of the people living here and those in England. The poem is a comment on savagery and greed, making the point that some in England were more 'savage' than those on the Amazon, either through their own greed for gold or through that greed driving others into poverty. Wallace concludes:
I'd be an Indian here, and live content
To fish, and hunt, and paddle my canoe,
And see my children grow, like young wild fawns,
In health of body and in peace of mind,
Rich without wealth, and happy without gold!
The poem is referenced and partially recited in the BBC television series 'The Ascent of Man'. In a letter to his brother-in-law in 1861, Wallace wrote: ... I remain an utter disbeliever in almost all that you consider the most sacred truths. I will pass over as utterly contemptible the oft-repeated accusation that sceptics shut out evidence because they will not be governed by the morality of Christianity ... I am thankful I can see much to admire in all religions. To the mass of mankind religion of some kind is a necessity. 
But whether there be a God and whatever be His nature; whether we have an immortal soul or not, or whatever may be our state after death, I can have no fear of having to suffer for the study of nature and the search for truth, or believe that those will be better off in a future state who have lived in the belief of doctrines inculcated from childhood, and which are to them rather a matter of blind faith than intelligent conviction. Wallace was an enthusiast of phrenology. Early in his career, he experimented with hypnosis, then known as mesmerism. He used some of his students in Leicester as subjects, with considerable success. When he began his experiments with mesmerism, the topic was very controversial and early experimenters, such as John Elliotson, had been harshly criticised by the medical and scientific establishment. Wallace drew a connection between his experiences with mesmerism and his later investigations into spiritualism. In 1893, he wrote: I thus learnt my first great lesson in the inquiry into these obscure fields of knowledge, never to accept the disbelief of great men or their accusations of imposture or of imbecility, as of any weight when opposed to the repeated observation of facts by other men, admittedly sane and honest. The whole history of science shows us that whenever the educated and scientific men of any age have denied the facts of other investigators on a priori grounds of absurdity or impossibility, the deniers have always been wrong. Wallace began investigating spiritualism in the summer of 1865, possibly at the urging of his older sister Fanny Sims, who had been involved with it for some time. After reviewing the literature on the topic and attempting to test the phenomena he witnessed at séances, he came to accept that the belief was connected to a natural reality. For the rest of his life, he remained convinced that at least some séance phenomena were genuine, no matter how many accusations of fraud sceptics made or how much evidence of trickery was produced. Historians and biographers have disagreed about which factors most influenced his adoption of spiritualism. It has been suggested by one biographer that the emotional shock he had received a few months earlier, when his first fiancée broke their engagement, contributed to his receptiveness to spiritualism. Other scholars have preferred to emphasise instead Wallace's desire to find rational and scientific explanations for all phenomena, both material and non-material, of the natural world and of human society. Spiritualism appealed to many educated Victorians who no longer found traditional religious doctrine, such as that of the Church of England, acceptable yet were unsatisfied with the completely materialistic and mechanical view of the world that was increasingly emerging from 19th-century science. However, several scholars who have researched Wallace's views in depth have emphasised that, for him, spiritualism was a matter of science and philosophy rather than religious belief. Among other prominent 19th-century intellectuals involved with spiritualism were the social reformer Robert Owen, who was one of Wallace's early idols, the physicists William Crookes and Lord Rayleigh, the mathematician Augustus De Morgan, and the Scottish publisher Robert Chambers. During the 1860s the stage magician John Nevil Maskelyne exposed the trickery of the Davenport brothers. 
Wallace was unable to accept that Maskelyne had replicated their feats using natural methods, and stated that Maskelyne possessed supernatural powers. However, in one of his writings Wallace dismissed Maskelyne, referring to a lecture exposing his tricks. In 1874, Wallace visited the spirit photographer Frederick Hudson. A photograph of him with his deceased mother was produced and Wallace declared the photograph genuine, stating "even if he had by some means obtained possession of all the photographs ever taken of my mother, they would not have been of the slightest use to him in the manufacture of these pictures. I see no escape from the conclusion that some spiritual being, acquainted with my mother's various aspects during life, produced these recognisable impressions on the plate." However, Hudson's photographs had previously been exposed as fraudulent in 1872. Wallace's very public advocacy of spiritualism and his repeated defence of spiritualist mediums against allegations of fraud in the 1870s damaged his scientific reputation. In 1875 Wallace published the evidence he believed proved his position in his book 'On Miracles and Modern Spiritualism', a compilation of essays he had written over a period of time. In his chapter entitled 'Modern Spiritualism: Evidence of Men of Science', Wallace refers to "three men of the highest eminence in their respective departments", namely Professor De Morgan, Professor Hare and Judge Edmonds, all of whom investigated spiritualist phenomena. However, Wallace himself was only quoting their results and had not been present at any of their investigations. His vehement defence of spiritualism strained his relationships with previously friendly scientists such as Henry Bates, Thomas Huxley, and even Darwin, who felt he was overly credulous. Evidence of this can be seen in Wallace's letters to Thomas Huxley, dated 22 November and 1 December 1866, asking whether he would be interested in getting involved in scientific spiritualist investigations, an invitation which Huxley, politely but emphatically, declined on the basis that he had neither the time nor the inclination. Others, such as the physiologist William Benjamin Carpenter and zoologist E. Ray Lankester, became openly and publicly hostile to Wallace over the issue. Wallace and other scientists who defended spiritualism, notably William Crookes, were subject to much criticism from the press, with "The Lancet", the leading English medical journal of the time, being particularly harsh. The controversy affected the public perception of Wallace's work for the rest of his career. When, in 1879, Darwin first tried to rally support among naturalists to get a civil pension awarded to Wallace, Joseph Hooker responded: "Wallace has lost caste considerably, not only by his adhesion to Spiritualism, but by the fact of his having deliberately and against the whole voice of the committee of his section of the British Association, brought about a discussion on Spiritualism at one of its sectional meetings ... This he is said to have done in an underhanded manner, and I well remember the indignation it gave rise to in the B.A. Council." Hooker eventually relented and agreed to support the pension request. In 1870, a Flat-Earth proponent named John Hampden offered a £500 wager in a magazine advertisement to anyone who could demonstrate a convex curvature in a body of water such as a river, canal, or lake. 
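To give a rough sense of what such a wager turned on, the expected curvature over a six-mile stretch of still water can be estimated with the standard small-angle approximation h ≈ d²/2R, where d is half the distance and R the Earth's mean radius. The approximation and the figures below are illustrative assumptions, not taken from Wallace's or Hampden's own accounts:

```python
# Back-of-the-envelope estimate (illustrative assumptions, not Wallace's own
# figures): the "bulge" a curved water surface shows at the midpoint of a
# six-mile sight line, using h ~= d^2 / (2R) with d = half the distance.
EARTH_RADIUS_M = 6_371_000      # assumed mean Earth radius, in metres
MILE_M = 1_609.34               # one statute mile, in metres

half_distance_m = 3 * MILE_M    # half of the six-mile stretch
bulge_m = half_distance_m ** 2 / (2 * EARTH_RADIUS_M)

print(f"Expected midpoint bulge over six miles: {bulge_m:.2f} m")  # about 1.8 m
```

A rise of nearly two metres at the midpoint is well within what a surveyor's telescope can resolve, which is why a six-mile stretch of canal was a workable setting for the test described next.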
Wallace, intrigued by the challenge and short of money at the time, designed an experiment in which he set up two objects along a six-mile (10 km) stretch of canal. Both objects were at the same height above the water, and he mounted a telescope on a bridge at the same height above the water as well. When seen through the telescope, one object appeared higher than the other, showing the curvature of the earth. The judge for the wager, the editor of "Field" magazine, declared Wallace the winner, but Hampden refused to accept the result. He sued Wallace and launched a campaign, which persisted for several years, of writing letters to various publications and to organisations of which Wallace was a member denouncing him as a swindler and a thief. Wallace won multiple libel suits against Hampden, but the resulting litigation cost Wallace more than the amount of the wager and the controversy frustrated him for years. In the early 1880s, Wallace was drawn into the debate over mandatory smallpox vaccination. Wallace originally saw the issue as a matter of personal liberty; but, after studying some of the statistics provided by anti-vaccination activists, he began to question the efficacy of vaccination. At the time, the germ theory of disease was very new and far from universally accepted. Moreover, no one knew enough about the human immune system to understand why vaccination worked. When Wallace did some research, he discovered instances where supporters of vaccination had used questionable, in a few cases completely phony, statistics to support their arguments. Always suspicious of authority, Wallace suspected that physicians had a vested interest in promoting vaccination, and became convinced that reductions in the incidence of smallpox that had been attributed to vaccination were, in fact, due to better hygiene and improvements in public sanitation. Another factor in Wallace's thinking was his belief that, because of the action of natural selection, organisms were in a state of balance with their environment, and that everything in nature, even disease-causing organisms, served a useful purpose in the natural order of things; he feared vaccination might upset that natural balance with unfortunate results. Wallace and other anti-vaccinationists pointed out that vaccination, which at the time was often done in a sloppy and unsanitary manner, could be dangerous. In 1890, Wallace gave evidence before a Royal Commission investigating the controversy. When the commission examined the material he had submitted to support his testimony, they found errors, including some questionable statistics. "The Lancet" averred that Wallace and the other anti-vaccination activists were being selective in their choice of statistics, ignoring large quantities of data inconsistent with their position. The commission found that smallpox vaccination was effective and should remain compulsory, though they did recommend some changes in procedures to improve safety, and that the penalties for people who refused to comply be made less severe. Years later, in 1898, Wallace wrote a pamphlet, "Vaccination a Delusion; Its Penal Enforcement a Crime", attacking the commission's findings. It, in turn, was attacked by "The Lancet", which stated that it contained many of the same errors as his evidence given to the commission. As a result of his writing, at the time of his death Wallace had been for many years a well-known figure both as a scientist and as a social activist. 
He was often sought out by journalists and others for his views on a variety of topics. He received honorary doctorates and a number of professional honours, such as the Royal Society's Royal Medal and Darwin Medal in 1868 and 1890 respectively, and the Order of Merit in 1908. Above all, his role as the co-discoverer of natural selection and his work on zoogeography marked him out as an exceptional figure. He was undoubtedly one of the greatest natural history explorers of the 19th century. Despite this, his fame faded quickly after his death. For a long time, he was treated as a relatively obscure figure in the history of science. A number of reasons have been suggested for this lack of attention, including his modesty, his willingness to champion unpopular causes without regard for his own reputation, and the discomfort of much of the scientific community with some of his unconventional ideas. Recently, he has become a less obscure figure with the publication of several book-length biographies on him, as well as anthologies of his writings. In 2007 a literary critic for "New Yorker" magazine observed that five such biographies and two such anthologies had been published since 2000. A web page dedicated to Wallace scholarship has also been created. In a 2010 book, the environmentalist Tim Flannery claimed that Wallace was 'the first modern scientist to comprehend how essential cooperation is to our survival,' and suggested that Wallace's understanding of natural selection and his later work on the atmosphere be seen as a forerunner to modern ecological thinking. The Natural History Museum, London, co-ordinated commemorative events for the Wallace centenary worldwide in the 'Wallace100' project in 2013. On 24 January, his portrait was unveiled in the Main Hall of the museum by Bill Bailey, a fervent admirer. On the BBC Two programme "Bill Bailey's Jungle Hero", first broadcast on 21 April 2013, Bailey revealed how Wallace cracked evolution by revisiting places where he had discovered exotic species. Episode one featured orangutans and flying frogs in Bailey's journey through Borneo. Episode two featured birds of paradise. On 7 November 2013, the 100th anniversary of Wallace's death, Sir David Attenborough unveiled a statue of Wallace at the museum. The statue was donated by the A. R. Wallace Memorial Fund, and was sculpted by Anthony Smith. It depicts Wallace as a young man, collecting in the jungle. November 2013 also marked the debut of "The Animated Life of A. R. Wallace", a paper-puppet animation film dedicated to Wallace's centennial. Wallace was a prolific author. In 2002, a historian of science published a quantitative analysis of Wallace's publications. He found that Wallace had published 22 full-length books and at least 747 shorter pieces, 508 of which were scientific papers (191 of them published in "Nature"). He further broke down the 747 short pieces by their primary subjects as follows: 29% were on biogeography and natural history, 27% were on evolutionary theory, 25% were social commentary, 12% were on anthropology, and 7% were on spiritualism and phrenology. An online bibliography of Wallace's writings has more than 750 entries. A more comprehensive list of Wallace's publications that are available online, as well as a full bibliography of all of Wallace's writings, has been compiled by the historian Charles H. Smith at The Alfred Russel Wallace Page. 
Augustus Augustus (23 September 63 BC – 19 August AD 14) was a Roman statesman and military leader who was the first Emperor of the Roman Empire, controlling Imperial Rome from 27 BC until his death in AD 14. His status as the founder of the Roman Principate has consolidated an enduring legacy as one of the most effective and controversial leaders in human history. He was born Gaius Octavius Thurinus into an old and wealthy equestrian branch of the plebeian "gens" Octavia. His maternal great-uncle Julius Caesar was assassinated in 44 BC, and Octavius was named in Caesar's will as his adopted son and heir. Then known simply as Octavianus (often anglicized to Octavian), he, along with Mark Antony and Marcus Lepidus, formed the Second Triumvirate to defeat the assassins of Caesar. Following their victory at the Battle of Philippi, the Triumvirate divided the Roman Republic among themselves and ruled as military dictators. The Triumvirate was eventually torn apart by the competing ambitions of its members. Lepidus was driven into exile and stripped of his position, and Antony committed suicide following his defeat at the Battle of Actium by Octavian in 31 BC. After the demise of the Second Triumvirate, Augustus restored the outward façade of the free Republic, with governmental power vested in the Roman Senate, the executive magistrates, and the legislative assemblies. In reality, however, he retained his autocratic power over the Republic as a military dictator. By law, Augustus held a collection of powers granted to him for life by the Senate, including supreme military command, and those of tribune and censor. It took several years for Augustus to develop the framework within which a formally republican state could be led under his sole rule. He rejected monarchical titles, and instead called himself "Princeps Civitatis" ("First Citizen of the State"). The resulting constitutional framework became known as the Principate, the first phase of the Roman Empire. The reign of Augustus initiated an era of relative peace known as the "Pax Romana". The Roman world was largely free from large-scale conflict for more than two centuries, despite continuous wars of imperial expansion on the Empire's frontiers and the year-long civil war known as the "Year of the Four Emperors" over the imperial succession. Augustus dramatically enlarged the Empire, annexing Egypt, Dalmatia, Pannonia, Noricum, and Raetia; expanding possessions in Africa; expanding into Germania; and completing the conquest of Hispania. Beyond the frontiers, he secured the Empire with a buffer region of client states and made peace with the Parthian Empire through diplomacy. He reformed the Roman system of taxation, developed networks of roads with an official courier system, established a standing army, established the Praetorian Guard, created official police and fire-fighting services for Rome, and rebuilt much of the city during his reign. Augustus died in AD 14 at the age of 75. He probably died from natural causes, although there were unconfirmed rumors that his wife Livia poisoned him. He was succeeded as Emperor by his adopted son (also stepson and former son-in-law) Tiberius. As a consequence of Roman customs, society, and personal preference, Augustus was known by many names throughout his life. While his paternal family was from the town of Velletri, near Rome, Augustus was born in the city of Rome on 23 September 63 BC. 
He was born at Ox Head, a small property on the Palatine Hill, very close to the Roman Forum. He was given the name Gaius Octavius Thurinus, his cognomen possibly commemorating his father's victory at Thurii over a rebellious band of slaves. Due to the crowded nature of Rome at the time, Octavius was taken to his father's home village at Velletri to be raised. Octavius only mentions his father's equestrian family briefly in his memoirs. His paternal great-grandfather Gaius Octavius was a military tribune in Sicily during the Second Punic War. His grandfather had served in several local political offices. His father, also named Gaius Octavius, had been governor of Macedonia. The "Marcus Octavius" vetoing the agrarian law suggested by Tiberius Gracchus in 133 BC may have been his ancestor. His mother, Atia, was the niece of Julius Caesar. In 59 BC, when he was four years old, his father died. His mother married a former governor of Syria, Lucius Marcius Philippus. Philippus claimed descent from Alexander the Great, and was elected consul in 56 BC. Philippus never had much of an interest in young Octavius. Because of this, Octavius was raised by his grandmother, Julia, the sister of Julius Caesar. Julia died in 52 or 51 BC, and Octavius delivered the funeral oration for his grandmother. From this point, his mother and stepfather took a more active role in raising him. He donned the "toga virilis" four years later, and was elected to the College of Pontiffs in 47 BC. The following year he was put in charge of the Greek games that were staged in honor of the Temple of Venus Genetrix, built by Julius Caesar. According to Nicolaus of Damascus, Octavius wished to join Caesar's staff for his campaign in Africa, but gave way when his mother protested. In 46 BC, she consented for him to join Caesar in Hispania, where he planned to fight the forces of Pompey, Caesar's late enemy, but Octavius fell ill and was unable to travel. When he had recovered, he sailed to the front, but was shipwrecked; after coming ashore with a handful of companions, he crossed hostile territory to Caesar's camp, which impressed his great-uncle considerably. Velleius Paterculus reports that after that time, Caesar allowed the young man to share his carriage. When back in Rome, Caesar deposited a new will with the Vestal Virgins, naming Octavius as the prime beneficiary. Octavius was studying and undergoing military training in Apollonia, Illyria, when Julius Caesar was killed on the Ides of March (15 March) 44 BC. He rejected the advice of some army officers to take refuge with the troops in Macedonia and sailed to Italy to ascertain whether he had any potential political fortunes or security. Caesar had no living legitimate children under Roman law, and so had adopted Octavius, his grand-nephew, making him his primary heir. Mark Antony later charged that Octavian had earned his adoption by Caesar through sexual favours, though Suetonius describes Antony's accusation as political slander. After landing at Lupiae near Brundisium, Octavius learned the contents of Caesar's will, and only then did he decide to become Caesar's political heir as well as heir to two-thirds of his estate. Upon his adoption, Octavius assumed his great-uncle's name Gaius Julius Caesar. Roman citizens adopted into a new family usually retained their old nomen in cognomen form (e.g., "Octavianus" for one who had been an Octavius, "Aemilianus" for one who had been an Aemilius, etc.). 
However, though some of his contemporaries used the name, there is no evidence that Octavius himself ever officially used "Octavianus", as it would have made his modest origins too obvious. Historians usually refer to the new Caesar as "Octavian" during the time between his adoption and his assumption of the name Augustus in 27 BC in order to avoid confusing the dead dictator with his heir. Octavian could not rely on his limited funds to make a successful entry into the upper echelons of the Roman political hierarchy. After a warm welcome by Caesar's soldiers at Brundisium, Octavian demanded a portion of the funds that were allotted by Caesar for the intended war against the Parthian Empire in the Middle East. This amounted to 700 million sesterces stored at Brundisium, the staging ground in Italy for military operations in the east. A later senatorial investigation into the disappearance of the public funds took no action against Octavian, since he subsequently used that money to raise troops against the Senate's arch enemy Mark Antony. Octavian made another bold move in 44 BC when, without official permission, he appropriated the annual tribute that had been sent from Rome's Near Eastern province to Italy. Octavian began to bolster his personal forces with Caesar's veteran legionaries and with troops designated for the Parthian war, gathering support by emphasizing his status as heir to Caesar. On his march to Rome through Italy, Octavian's presence and newly acquired funds attracted many, winning over Caesar's former veterans stationed in Campania. By June, he had gathered an army of 3,000 loyal veterans, paying each a salary of 500 denarii. Arriving in Rome on 6 May 44 BC, Octavian found consul Mark Antony, Caesar's former colleague, in an uneasy truce with the dictator's assassins. They had been granted a general amnesty on 17 March, yet Antony succeeded in driving most of them out of Rome. This was due to his "inflammatory" eulogy given at Caesar's funeral, which had turned public opinion against the assassins. Mark Antony was amassing political support, but Octavian still had the opportunity to rival him as the leading member of the faction supporting Caesar. Mark Antony had lost the support of many Romans and supporters of Caesar when he initially opposed the motion to elevate Caesar to divine status. Octavian failed to persuade Antony to relinquish Caesar's money to him. During the summer, however, he managed to win support from Caesarian sympathizers, who saw the younger heir as the lesser evil and hoped to manipulate him, or to bear with him during their efforts to get rid of Antony. Octavian began to make common cause with the Optimates, the former enemies of Caesar. In September, the leading Optimate orator Marcus Tullius Cicero began to attack Antony in a series of speeches portraying him as a threat to the Republican order. With opinion in Rome turning against him and his year of consular power nearing its end, Antony attempted to pass laws that would give him control over Cisalpine Gaul, which had been assigned as a province to Decimus Junius Brutus Albinus, one of Caesar's assassins. Octavian meanwhile built up a private army in Italy by recruiting Caesarian veterans and, on 28 November, he won over two of Antony's legions with the enticing offer of monetary gain. In the face of Octavian's large and capable force, Antony saw the danger of staying in Rome and, to the relief of the Senate, he fled to Cisalpine Gaul, which was to be handed to him on 1 January. 
Decimus Brutus refused to give up Cisalpine Gaul, so Antony besieged him at Mutina. Antony rejected the resolutions passed by the Senate to stop the violence, as the Senate had no army of its own to challenge him. This provided an opportunity for Octavian, who already was known to have armed forces. Cicero also defended Octavian against Antony's taunts about Octavian's lack of noble lineage and aping of Julius Caesar's name, stating "we have no more brilliant example of traditional piety among our youth." At the urging of Cicero, the Senate inducted Octavian as senator on 1 January 43 BC, yet he also was given the power to vote alongside the former consuls. In addition, Octavian was granted "propraetor" "imperium" (commanding power) which legalized his command of troops, sending him to relieve the siege along with Hirtius and Pansa (the consuls for 43 BC). In April 43 BC, Antony's forces were defeated at the battles of Forum Gallorum and Mutina, forcing Antony to retreat to Transalpine Gaul. Both consuls were killed, however, leaving Octavian in sole command of their armies. The senate heaped many more rewards on Decimus Brutus than on Octavian for defeating Antony, then attempted to give command of the consular legions to Decimus Brutus—yet Octavian decided not to cooperate. Instead, Octavian stayed in the Po Valley and refused to aid any further offensive against Antony. In July, an embassy of centurions sent by Octavian entered Rome and demanded that he receive the consulship left vacant by Hirtius and Pansa. Octavian also demanded that the decree should be rescinded which declared Antony a public enemy. When this was refused, he marched on the city with eight legions. He encountered no military opposition in Rome, and on 19 August 43 BC was elected consul with his relative Quintus Pedius as co-consul. Meanwhile, Antony formed an alliance with Marcus Aemilius Lepidus, another leading Caesarian. In a meeting near Bologna in October 43 BC, Octavian, Antony, and Lepidus formed a junta called the Second Triumvirate. This explicit arrogation of special powers lasting five years was then supported by law passed by the plebs, unlike the unofficial First Triumvirate formed by Pompey, Julius Caesar, and Marcus Licinius Crassus. The triumvirs then set in motion proscriptions in which 300 senators and 2,000 "equites" allegedly were branded as outlaws and deprived of their property and, for those who failed to escape, their lives. The estimation that 300 senators were proscribed was presented by Appian, although his earlier contemporary Livy asserted that only 130 senators had been proscribed. This decree issued by the triumvirate was motivated in part by a need to raise money to pay the salaries of their troops for the upcoming conflict against Caesar's assassins, Marcus Junius Brutus the Younger and Gaius Cassius Longinus. Rewards for their arrest gave incentive for Romans to capture those proscribed, while the assets and properties of those arrested were seized by the triumvirs. Contemporary Roman historians provide conflicting reports as to which triumvir was most responsible for the proscriptions and killing. However, the sources agree that enacting the proscriptions was a means by all three factions to eliminate political enemies. Marcus Velleius Paterculus asserted that Octavian tried to avoid proscribing officials whereas Lepidus and Antony were to blame for initiating them. 
Cassius Dio defended Octavian as trying to spare as many as possible, whereas Antony and Lepidus, being older and involved in politics longer, had many more enemies to deal with. This claim was rejected by Appian, who maintained that Octavian shared an equal interest with Lepidus and Antony in eradicating his enemies. Suetonius said that Octavian was reluctant to proscribe officials, but did pursue his enemies with more vigor than the other triumvirs. Plutarch described the proscriptions as a ruthless and cutthroat swapping of friends and family among Antony, Lepidus, and Octavian. For example, Octavian allowed the proscription of his ally Cicero, Antony the proscription of his maternal uncle Lucius Julius Caesar (the consul of 64 BC), and Lepidus his brother Paullus. On 1 January 42 BC, the Senate posthumously recognized Julius Caesar as a divinity of the Roman state, "Divus Iulius". Octavian was able to further his cause by emphasizing the fact that he was "Divi filius", "Son of God". Antony and Octavian then sent 28 legions by sea to face the armies of Brutus and Cassius, who had built their base of power in Greece. After two battles at Philippi in Macedonia in October 42, the Caesarian army was victorious and Brutus and Cassius committed suicide. Mark Antony later used the examples of these battles as a means to belittle Octavian, as both battles were decisively won with the use of Antony's forces. In addition to claiming responsibility for both victories, Antony also branded Octavian as a coward for handing over his direct military control to Marcus Vipsanius Agrippa instead. After Philippi, a new territorial arrangement was made among the members of the Second Triumvirate. Gaul and the provinces of Hispania and Italia were placed in the hands of Octavian. Antony traveled east to Egypt where he allied himself with Queen Cleopatra VII, the former lover of Julius Caesar and mother of Caesar's infant son Caesarion. Lepidus was left with the province of Africa, stymied by Antony, who conceded Hispania to Octavian instead. Octavian was left to decide where in Italy to settle the tens of thousands of veterans of the Macedonian campaign, whom the triumvirs had promised to discharge. The tens of thousands who had fought on the republican side with Brutus and Cassius could easily ally with a political opponent of Octavian if not appeased, and they also required land. There was no more government-controlled land to allot as settlements for their soldiers, so Octavian had to choose one of two options: alienating many Roman citizens by confiscating their land, or alienating many Roman soldiers who could mount a considerable opposition against him in the Roman heartland. Octavian chose the former. There were as many as eighteen Roman towns affected by the new settlements, with entire populations driven out or at least given partial evictions. There was widespread dissatisfaction with Octavian over these settlements of his soldiers, and this encouraged many to rally at the side of Lucius Antonius, who was brother of Mark Antony and supported by a majority in the Senate. Meanwhile, Octavian asked for a divorce from Clodia Pulchra, the daughter of Fulvia (Mark Antony's wife) and her first husband Publius Clodius Pulcher. He returned Clodia to her mother, claiming that their marriage had never been consummated. Fulvia decided to take action. Together with Lucius Antonius, she raised an army in Italy to fight for Antony's rights against Octavian. 
Lucius and Fulvia took a political and martial gamble in opposing Octavian, however, since the Roman army still depended on the triumvirs for their salaries. Lucius and his allies ended up in a defensive siege at Perusia (modern Perugia), where Octavian forced them into surrender in early 40 BC. Lucius and his army were spared, due to his kinship with Antony, the strongman of the East, while Fulvia was exiled to Sicyon. Octavian showed no mercy, however, for the mass of allies loyal to Lucius; on 15 March, the anniversary of Julius Caesar's assassination, he had 300 Roman senators and equestrians executed for allying with Lucius. Perusia also was pillaged and burned as a warning for others. This bloody event sullied Octavian's reputation and was criticized by many, such as Augustan poet Sextus Propertius. Sextus Pompeius was the son of First Triumvir Pompey and still a renegade general following Julius Caesar's victory over his father. He was established in Sicily and Sardinia as part of an agreement reached with the Second Triumvirate in 39 BC. Both Antony and Octavian were vying for an alliance with Pompeius, who was a member of the republican party, ironically, not the Caesarian faction. Octavian succeeded in a temporary alliance in 40 BC when he married Scribonia, a daughter of Lucius Scribonius Libo who was a follower of Sextus Pompeius as well as his father-in-law. Scribonia gave birth to Octavian's only natural child, Julia, who was born the same day that he divorced her to marry Livia Drusilla, little more than a year after their marriage. While in Egypt, Antony had been engaged in an affair with Cleopatra and had fathered three children with her. Aware of his deteriorating relationship with Octavian, Antony left Cleopatra; he sailed to Italy in 40 BC with a large force to oppose Octavian, laying siege to Brundisium. This new conflict proved untenable for both Octavian and Antony, however. Their centurions, who had become important figures politically, refused to fight due to their Caesarian cause, while the legions under their command followed suit. Meanwhile, in Sicyon, Antony's wife Fulvia died of a sudden illness while Antony was en route to meet her. Fulvia's death and the mutiny of their centurions allowed the two remaining triumvirs to effect a reconciliation. In the autumn of 40, Octavian and Antony approved the Treaty of Brundisium, by which Lepidus would remain in Africa, Antony in the East, Octavian in the West. The Italian Peninsula was left open to all for the recruitment of soldiers, but in reality, this provision was useless for Antony in the East. To further cement relations of alliance with Mark Antony, Octavian gave his sister, Octavia Minor, in marriage to Antony in late 40 BC. During their marriage, Octavia gave birth to two daughters (known as Antonia the Elder and Antonia Minor). Sextus Pompeius threatened Octavian in Italy by denying shipments of grain through the Mediterranean Sea to the peninsula. Pompeius' own son was put in charge as naval commander in the effort to cause widespread famine in Italy. Pompeius' control over the sea prompted him to take on the name "Neptuni filius", "son of Neptune". A temporary peace agreement was reached in 39 BC with the treaty of Misenum; the blockade on Italy was lifted once Octavian granted Pompeius Sardinia, Corsica, Sicily, and the Peloponnese, and ensured him a future position as consul for 35 BC. 
The territorial agreement between the triumvirate and Sextus Pompeius began to crumble once Octavian divorced Scribonia and married Livia on 17 January 38 BC. One of Pompeius' naval commanders betrayed him and handed over Corsica and Sardinia to Octavian. Octavian lacked the resources to confront Pompeius alone, however, so an agreement was reached with the Second Triumvirate's extension for another five-year period beginning in 37 BC. In supporting Octavian, Antony expected to gain support for his own campaign against the Parthian Empire, desiring to avenge Rome's defeat at Carrhae in 53 BC. In an agreement reached at Tarentum, Antony provided 120 ships for Octavian to use against Pompeius, while Octavian was to send 20,000 legionaries to Antony for use against Parthia. Octavian sent only a tenth of those promised, however, which Antony viewed as an intentional provocation. Octavian and Lepidus launched a joint operation against Sextus in Sicily in 36 BC. Despite setbacks for Octavian, the naval fleet of Sextus Pompeius was almost entirely destroyed on 3 September by general Agrippa at the naval Battle of Naulochus. Sextus fled to the east with his remaining forces, where he was captured and executed in Miletus by one of Antony's generals the following year. As Lepidus and Octavian accepted the surrender of Pompeius' troops, Lepidus attempted to claim Sicily for himself, ordering Octavian to leave. Lepidus' troops deserted him, however, and defected to Octavian since they were weary of fighting and were enticed by Octavian's promises of money. Lepidus surrendered to Octavian and was permitted to retain the office of "Pontifex Maximus" (head of the college of priests), but was ejected from the Triumvirate, his public career at an end, and effectively was exiled to a villa at Cape Circei in Italy. The Roman dominions were now divided between Octavian in the West and Antony in the East. To maintain peace and stability in his portion of the Empire, Octavian assured Rome's citizens of their rights to property. This time, he settled his discharged soldiers outside of Italy, while also returning 30,000 slaves to their former Roman owners—slaves who had fled to join Pompeius' army and navy. Octavian had the Senate grant him, his wife, and his sister tribunician immunity, or "sacrosanctitas", in order to ensure his own safety and that of Livia and Octavia once he returned to Rome. Meanwhile, Antony's campaign against Parthia turned disastrous, tarnishing his image as a leader, and the mere 2,000 legionaries sent by Octavian to Antony were hardly enough to replenish his forces. On the other hand, Cleopatra could restore his army to full strength; he already was engaged in a romantic affair with her, so he decided to send Octavia back to Rome. Octavian used this to spread propaganda implying that Antony was becoming less than Roman because he rejected a legitimate Roman spouse for an "Oriental paramour". In 36 BC, Octavian used a political ploy to make himself look less autocratic and Antony more the villain by proclaiming that the civil wars were coming to an end, and that he would step down as triumvir—if only Antony would do the same. Antony refused. Roman troops captured the Kingdom of Armenia in 34 BC, and Antony made his son Alexander Helios the ruler of Armenia. He also awarded the title "Queen of Kings" to Cleopatra, acts that Octavian used to convince the Roman Senate that Antony had ambitions to diminish the preeminence of Rome. 
Octavian became consul once again on 1 January 33 BC, and he opened the following session in the Senate with a vehement attack on Antony's grants of titles and territories to his relatives and to his queen. The breach between Antony and Octavian prompted a large portion of the Senators, as well as both of that year's consuls, to leave Rome and defect to Antony. However, Octavian received two key deserters from Antony in the autumn of 32 BC: Munatius Plancus and Marcus Titius. These defectors gave Octavian the information that he needed to confirm with the Senate all the accusations that he made against Antony. Octavian forcibly entered the temple of the Vestal Virgins and seized Antony's secret will, which he promptly publicized. The will would have given away Roman-conquered territories as kingdoms for his sons to rule, and designated Alexandria as the site for a tomb for him and his queen. In late 32 BC, the Senate officially revoked Antony's powers as consul and declared war on Cleopatra's regime in Egypt. In early 31 BC, Antony and Cleopatra were temporarily stationed in Greece when Octavian gained a preliminary victory: the navy successfully ferried troops across the Adriatic Sea under the command of Agrippa. Agrippa cut off Antony and Cleopatra's main force from their supply routes at sea, while Octavian landed on the mainland opposite the island of Corcyra (modern Corfu) and marched south. Trapped on land and sea, deserters of Antony's army fled to Octavian's side daily while Octavian's forces were comfortable enough to make preparations. Antony's fleet sailed through the bay of Actium on the western coast of Greece in a desperate attempt to break free of the naval blockade. It was there that Antony's fleet faced the much larger fleet of smaller, more maneuverable ships under commanders Agrippa and Gaius Sosius in the Battle of Actium on 2 September 31 BC. Antony and his remaining forces were spared only due to a last-ditch effort by Cleopatra's fleet that had been waiting nearby. Octavian pursued them and defeated their forces in Alexandria on 1 August 30 BC—after which Antony and Cleopatra committed suicide. Antony fell on his own sword and was taken by his soldiers back to Alexandria where he died in Cleopatra's arms. Cleopatra died soon after, reputedly by the venomous bite of an asp or by poison. Octavian had exploited his position as Caesar's heir to further his own political career, and he was well aware of the dangers in allowing another person to do the same. He therefore followed the advice of Arius Didymus that "two Caesars are one too many", ordering Caesarion, Julius Caesar's son by Cleopatra, killed, while sparing Cleopatra's children by Antony, with the exception of Antony's older son. Octavian had previously shown little mercy to surrendered enemies and acted in ways that had proven unpopular with the Roman people, yet he was given credit for pardoning many of his opponents after the Battle of Actium. After Actium and the defeat of Antony and Cleopatra, Octavian was in a position to rule the entire Republic under an unofficial principate—but he had to achieve this through incremental power gains. He did so by courting the Senate and the people while upholding the republican traditions of Rome, appearing that he was not aspiring to dictatorship or monarchy. Marching into Rome, Octavian and Marcus Agrippa were elected as dual consuls by the Senate. 
Years of civil war had left Rome in a state of near lawlessness, but the Republic was not prepared to accept the control of Octavian as a despot. At the same time, Octavian could not simply give up his authority without risking further civil wars among the Roman generals and, even if he desired no position of authority whatsoever, his position demanded that he look to the well-being of the city of Rome and the Roman provinces. Octavian's aims from this point forward were to return Rome to a state of stability, traditional legality, and civility by lifting the overt political pressure imposed on the courts of law and ensuring free elections—in name at least. In 27 BC, Octavian made a show of returning full power to the Roman Senate and relinquishing his control of the Roman provinces and their armies. Under his consulship, however, the Senate had little power in initiating legislation by introducing bills for senatorial debate. Octavian was no longer in direct control of the provinces and their armies, but he retained the loyalty of active duty soldiers and veterans alike. The careers of many clients and adherents depended on his patronage, as his financial power was unrivaled in the Roman Republic. The public were, to a large extent, aware of the vast financial resources that Octavian commanded. When he failed to encourage enough senators to finance the building and maintenance of road networks in Italy in 20 BC, he took direct responsibility for them himself. This was publicized on the Roman currency issued in 16 BC, after he donated vast amounts of money to the "aerarium Saturni", the public treasury. According to H. H. Scullard, however, Octavian's power was based on the exercise of "a predominant military power and ... the ultimate sanction of his authority was force, however much the fact was disguised." The Senate proposed to Octavian, the victor of Rome's civil wars, that he once again assume command of the provinces. The Senate's proposal was a ratification of Octavian's extra-constitutional power. Through the Senate, Octavian was able to continue the appearance of a still-functional constitution. Feigning reluctance, he accepted a ten-year responsibility of overseeing provinces that were considered chaotic. The provinces ceded to him for that ten-year period comprised much of the conquered Roman world, including all of Hispania and Gaul, Syria, Cilicia, Cyprus, and Egypt. Moreover, command of these provinces provided Octavian with control over the majority of Rome's legions. While Octavian acted as consul in Rome, he dispatched senators to the provinces under his command as his representatives to manage provincial affairs and ensure that his orders were carried out. The provinces not under Octavian's control were overseen by governors chosen by the Roman Senate. Octavian became the most powerful political figure in the city of Rome and in most of its provinces, but he did not have a monopoly on political and martial power. The Senate still controlled North Africa, an important regional producer of grain, as well as Illyria and Macedonia, two strategic regions with several legions. However, the Senate had control of only five or six legions distributed among three senatorial proconsuls, compared to the twenty legions under the control of Octavian, and their control of these regions did not amount to any political or military challenge to Octavian. 
The Senate's control over some of the Roman provinces helped maintain a republican façade for the autocratic Principate. Also, Octavian's control of entire provinces followed Republican-era precedents for the objective of securing peace and creating stability, in which such prominent Romans as Pompey had been granted similar military powers in times of crisis and instability. On 16 January 27 BC the Senate gave Octavian the new titles of "Augustus" and "Princeps". "Augustus" is from the Latin word "augere" (meaning to increase) and can be translated as "the illustrious one". It was a title of religious authority rather than political authority. According to Roman religious beliefs, the title symbolized a stamp of authority over humanity—and in fact nature—that went beyond any constitutional definition of his status. After the harsh methods employed in consolidating his control, the change in name served to demarcate his benign reign as Augustus from his reign of terror as Octavian. His new title of Augustus was also more favorable than "Romulus", the title he had earlier considered for himself in reference to the story of the legendary founder of Rome, which would have symbolized a second founding of Rome. The title of "Romulus" was associated too strongly with notions of monarchy and kingship, an image that Octavian tried to avoid. "Princeps" comes from the Latin phrase "primum caput", "the first head", originally meaning the oldest or most distinguished senator whose name would appear first on the senatorial roster. In the case of Augustus, however, it became an almost regnal title for a leader who was first in charge. "Princeps" had also been a title under the Republic for those who had served the state well; for example, Pompey had held the title. Augustus also styled himself as "Imperator Caesar divi filius", "Commander Caesar son of the deified one". With this title, he boasted his familial link to deified Julius Caesar, and the use of "Imperator" signified a permanent link to the Roman tradition of victory. The word "Caesar" was merely a cognomen for one branch of the Julian family, yet Augustus transformed "Caesar" into a new family line that began with him. Augustus was granted the right to hang the "corona civica" above his door, the "civic crown" made from oak, and to have laurels drape his doorposts. This crown was usually held above the head of a Roman general during a triumph, with the individual holding the crown charged to continually repeat to the general "memento mori", or "Remember that you are mortal". Additionally, laurel wreaths were important in several state ceremonies, and crowns of laurel were rewarded to champions of athletic, racing, and dramatic contests. Thus, both the laurel and the oak were integral symbols of Roman religion and statecraft; placing them on Augustus' doorposts was tantamount to declaring his home the capital. However, Augustus renounced flaunting insignia of power such as holding a scepter, wearing a diadem, or wearing the golden crown and purple toga of his predecessor Julius Caesar. Although he refused to symbolize his power by donning and bearing these items on his person, the Senate nonetheless awarded him a golden shield displayed in the meeting hall of the Curia, bearing the inscription "virtus", "pietas", "clementia", "iustitia"—"valor, piety, clemency, and justice." By 23 BC, some of the un-Republican implications of the settlement of 27 BC were becoming apparent. 
Augustus' retention of an annual consulate drew attention to his "de facto" dominance over the Roman political system, and cut in half the opportunities for others to achieve what was still nominally the preeminent position in the Roman state. Further, he was causing political problems by desiring to have his nephew Marcus Claudius Marcellus follow in his footsteps and eventually assume the Principate in his turn, alienating his three greatest supporters – Agrippa, Maecenas, and Livia. Feeling pressure from his core group of adherents, Augustus turned to the Senate for help. With an eye to bolstering his support among the Republicans, he appointed the noted Republican Calpurnius Piso (who had fought against Julius Caesar and supported Cassius and Brutus) as co-consul in 23 BC, after his original choice, Aulus Terentius Varro Murena, was executed as a consequence of his involvement in the Marcus Primus affair. In the late spring Augustus suffered a severe illness, and on his supposed deathbed made arrangements that would ensure the continuation of the Principate in some form, while allaying senators' suspicions of his anti-republicanism. Augustus prepared to hand down his signet ring to his favored general Agrippa. However, Augustus handed over to his co-consul Piso all of his official documents, an account of public finances, and authority over listed troops in the provinces while Augustus' supposedly favored nephew Marcellus came away empty-handed. This was a surprise to many who believed Augustus would have named an heir to his position as an unofficial emperor. Augustus bestowed only properties and possessions on his designated heirs, as an obvious system of institutionalized imperial inheritance would have provoked resistance and hostility among the republican-minded Romans fearful of monarchy. With regard to the Principate, it was obvious to Augustus that Marcellus was not ready to take on his position; nonetheless, by giving his signet ring to Agrippa, Augustus intended to signal to the legions that Agrippa was to be his successor, and that constitutional procedure notwithstanding, they should continue to obey Agrippa. Soon after his bout of illness subsided, Augustus gave up his consulship. The only other times Augustus would serve as consul would be in the years 5 and 2 BC, both times to introduce his grandsons into public life. This was a clever ploy by Augustus; ceasing to serve as one of two annually elected consuls allowed aspiring senators a better chance to attain the consular position, while allowing Augustus to exercise wider patronage within the senatorial class. Although Augustus had resigned as consul, he desired to retain his consular "imperium" not just in his provinces but throughout the empire. This desire, as well as the Marcus Primus Affair, led to a second compromise between him and the Senate known as the Second Settlement. The primary reasons for the Second Settlement were as follows. First, after Augustus relinquished the annual consulship, he was no longer in an official position to rule the state, yet his dominant position remained unchanged over his Roman, 'imperial' provinces where he was still a proconsul. When he annually held the office of consul, he had the power to intervene in the affairs of the other provincial proconsuls appointed by the Senate throughout the empire, when he deemed necessary. When he relinquished his annual consulship, he legally lost this power because his proconsular powers applied only to his imperial provinces. 
Augustus wanted to keep this power. A second problem arose later, showing the need for the Second Settlement, in what became known as the "Marcus Primus Affair". In late 24 or early 23 BC, charges were brought against Marcus Primus, the former proconsul (governor) of Macedonia, for waging a war without prior approval of the Senate on the Odrysian kingdom of Thrace, whose king was a Roman ally. He was defended by Lucius Licinius Varro Murena, who told the court that his client had received specific instructions from Augustus, ordering him to attack the client state. Later, Primus testified that the orders came from the recently deceased Marcellus. Such orders, had they been given, would have been considered a breach of the Senate's prerogative under the Constitutional settlement of 27 BC and its aftermath—i.e., before Augustus was granted "imperium proconsulare maius"—as Macedonia was a Senatorial province under the Senate's jurisdiction, not an imperial province under the authority of Augustus. Such an action would have ripped away the veneer of Republican restoration as promoted by Augustus, and exposed the fraud of his claim to be merely the first citizen, a first among equals. Even worse, the involvement of Marcellus provided some measure of proof that Augustus' policy was to have the youth take his place as Princeps, instituting a form of monarchy – accusations that had already played out. The situation was so serious that Augustus himself appeared at the trial, even though he had not been called as a witness. Under oath, Augustus declared that he gave no such order. Murena disbelieved Augustus' testimony and resented his attempt to subvert the trial by using his "auctoritas". He rudely demanded to know why Augustus had turned up to a trial to which he had not been called; Augustus replied that he came in the public interest. Although Primus was found guilty, some jurors voted to acquit, meaning that not everybody believed Augustus' testimony, an insult to the 'August One'. The Second Constitutional Settlement was completed in part to allay confusion and formalize Augustus' legal authority to intervene in Senatorial provinces. The Senate granted Augustus a form of general "imperium proconsulare", or proconsular imperium (power) that applied throughout the empire, not solely to his provinces. Moreover, the Senate augmented Augustus' proconsular imperium into "imperium proconsulare maius", or proconsular imperium applicable throughout the empire that was more (maius) or greater than that held by the other proconsuls. This in effect gave Augustus constitutional power superior to all other proconsuls in the empire. Augustus stayed in Rome during the renewal process and provided veterans with lavish donations to gain their support, thereby ensuring that his status of proconsular imperium maius was renewed in 13 BC. During the second settlement, Augustus was also granted the power of a tribune ("tribunicia potestas") for life, though not the official title of tribune. For some years, Augustus had been awarded "tribunicia sacrosanctitas", the immunity given to a Tribune of the Plebs. Now he decided to assume the full powers of the magistracy, renewed annually, in perpetuity. Legally, the tribunate was closed to patricians, a status that Augustus had acquired some years earlier when adopted by Julius Caesar. 
This power allowed him to convene the Senate and people at will and lay business before them, to veto the actions of either the Assembly or the Senate, to preside over elections, and to speak first at any meeting. Also included in Augustus' tribunician authority were powers usually reserved for the Roman censor; these included the right to supervise public morals and scrutinize laws to ensure that they were in the public interest, as well as the ability to hold a census and determine the membership of the Senate. With the powers of a censor, Augustus appealed to the virtues of Roman patriotism by banning all attire but the classic toga while entering the Forum. There was no precedent within the Roman system for combining the powers of the tribune and the censor into a single position, nor was Augustus ever elected to the office of censor. Julius Caesar had been granted similar powers, wherein he was charged with supervising the morals of the state. However, this position did not extend to the censor's ability to hold a census and determine the Senate's roster. The office of the "tribunus plebis" began to lose its prestige due to Augustus' amassing of tribunician powers, so he revived its importance by making it a mandatory appointment for any plebeian desiring the praetorship. Augustus was granted sole "imperium" within the city of Rome itself, in addition to being granted proconsular imperium maius and tribunician authority for life. Traditionally, proconsuls (Roman province governors) lost their proconsular "imperium" when they crossed the Pomerium – the sacred boundary of Rome – and entered the city. In these situations, Augustus would have power as part of his tribunician authority but his constitutional imperium within the Pomerium would be less than that of a serving consul. That would mean that, when he was in the city, he might not be the constitutional magistrate with the most authority. Thanks to his prestige or "auctoritas", his wishes would usually be obeyed, but there might be some difficulty. To fill this power vacuum, the Senate voted that Augustus' imperium proconsulare maius (superior proconsular power) should not lapse when he was inside the city walls. All armed forces in the city had formerly been under the control of the urban praetors and consuls, but this situation now placed them under the sole authority of Augustus. In addition, Augustus was given credit for each subsequent Roman military victory after this time, because the majority of Rome's armies were stationed in imperial provinces commanded by Augustus through his legati, who were deputies of the princeps in the provinces. Moreover, if a battle was fought in a Senatorial province, Augustus' proconsular imperium maius allowed him to take command of (or credit for) any major military victory. This meant that Augustus was the only individual able to receive a triumph, a tradition that began with Romulus, Rome's first King and first triumphant general. Lucius Cornelius Balbus was the last man outside Augustus' family to receive this award, in 19 BC. (Balbus, a nephew of Julius Caesar's great agent of the same name, was governor of Africa and conqueror of the Garamantes.) Tiberius, Augustus' elder stepson by his marriage to Livia, was the only other general to receive a triumph—for victories in Germania in 7 BC. Many of the political subtleties of the Second Settlement seem to have evaded the comprehension of the Plebeian class, who were Augustus' greatest supporters and clientele. 
This caused them to insist upon Augustus' participation in imperial affairs from time to time. Augustus did not stand for election as consul in 22 BC, and fears arose once again that he was being forced from power by the aristocratic Senate. In 22, 21, and 19 BC, the people rioted in response, and only allowed a single consul to be elected for each of those years, ostensibly to leave the other position open for Augustus. Likewise, there was a food shortage in Rome in 22 BC which sparked panic, while many urban plebs called for Augustus to take on dictatorial powers to personally oversee the crisis. After a theatrical display of refusal before the Senate, Augustus finally accepted authority over Rome's grain supply "by virtue of his proconsular imperium", and ended the crisis almost immediately. It was not until AD 8 that a food crisis of this sort prompted Augustus to establish a "praefectus annonae", a permanent prefect who was in charge of procuring food supplies for Rome. Nevertheless, there were some who were concerned by the expansion of powers granted to Augustus by the Second Settlement, and this came to a head with the apparent conspiracy of Fannius Caepio. Some time prior to 1 September 22 BC, a certain Castricius provided Augustus with information about a conspiracy led by Fannius Caepio. Among those named as conspirators was Murena, the outspoken consul who had defended Primus in the Marcus Primus Affair. The conspirators were tried in absentia with Tiberius acting as prosecutor; the jury found them guilty, but it was not a unanimous verdict. All the accused were sentenced to death for treason and executed as soon as they were captured—without ever giving testimony in their defence. Augustus ensured that the facade of Republican government continued with an effective cover-up of the events. In 19 BC, the Senate granted Augustus a form of 'general consular imperium', which was probably 'imperium consulare maius', like the proconsular powers that he received in 23 BC. Like his tribune authority, the consular powers were another instance of gaining power from offices that he did not actually hold. In addition, Augustus was allowed to wear the consul's insignia in public and before the Senate, as well as to sit in the symbolic chair between the two consuls and hold the fasces, an emblem of consular authority. This seems to have assuaged the populace; regardless of whether or not Augustus was a consul, what mattered was that he both appeared as one before the people and could exercise consular power if necessary. On 6 March 12 BC, after the death of Lepidus, he additionally took up the position of pontifex maximus, the high priest of the college of the Pontiffs, the most important position in Roman religion. On 5 February 2 BC, Augustus was also given the title "pater patriae", or "father of the country". A final reason for the Second Settlement was to give the Principate constitutional stability and staying power in case something happened to Princeps Augustus. His illness of early 23 BC and the Caepio conspiracy showed that the regime's existence hung by the thin thread of the life of one man, Augustus himself, who suffered from several severe and dangerous illnesses throughout his life. If he were to die from natural causes or fall victim to assassination, Rome could be subjected to another round of civil war. The memories of Pharsalus, the Ides of March, the proscriptions, Philippi, and Actium, barely twenty-five years distant, were still vivid in the minds of many citizens. 
Proconsular imperium was conferred upon Agrippa for five years, similar to Augustus' power, in order to accomplish this constitutional stability. The exact nature of the grant is uncertain but it probably covered Augustus' imperial provinces, east and west, perhaps lacking authority over the provinces of the Senate. That came later, as did the jealously guarded tribunicia potestas. Augustus' accumulation of powers was now complete. In fact, he dated his 'reign' from the completion of the Second Settlement, 1 July 23 BC. Almost as importantly, the Principate now had constitutional stability. Later Roman Emperors were generally limited to the powers and titles originally granted to Augustus, though often newly appointed Emperors would decline one or more of the honorifics given to Augustus in order to display humility. Just as often, as their reigns progressed, Emperors would appropriate all of the titles, regardless of whether they had been granted them by the Senate. Later Emperors took to wearing the civic crown, consular insignia, and the purple robes of a Triumphant general ("toga picta"), which became the imperial insignia well into the Byzantine era. Augustus chose "Imperator" ("victorious commander") to be his first name, since he wanted to make an emphatically clear connection between himself and the notion of victory, and consequently became known as "Imperator Caesar Divi Filius Augustus". By the year 13, Augustus boasted of 21 occasions on which his troops had proclaimed "imperator" as his title after a successful battle. Almost the entire fourth chapter in his publicly released memoirs of achievements known as the "Res Gestae" was devoted to his military victories and honors. Augustus also promoted the ideal of a superior Roman civilization with a task of ruling the world (to the extent to which the Romans knew it), a sentiment embodied in words that the contemporary poet Virgil attributes to a legendary ancestor of Augustus: "tu regere imperio populos, Romane, memento"—"Roman, remember by your strength to rule the Earth's peoples!" The impulse for expansionism was apparently prominent among all classes at Rome, and it is accorded divine sanction by Virgil's Jupiter in Book 1 of the "Aeneid", where Jupiter promises Rome "imperium sine fine", "sovereignty without end". By the end of his reign, the armies of Augustus had conquered northern Hispania (modern Spain and Portugal) and the Alpine regions of Raetia and Noricum (modern Switzerland, Bavaria, Austria, Slovenia), Illyricum and Pannonia (modern Albania, Croatia, Hungary, Serbia, etc.), and had extended the borders of the Africa Province to the east and south. Judea was added to the province of Syria when Augustus deposed Herod Archelaus, successor to client king Herod the Great (73–4 BC). Judea (like Egypt after Antony) was governed by a prefect of the equestrian class rather than by a proconsul or a legate of Augustus. Again, no military effort was needed in 25 BC when Galatia (modern Turkey) was converted to a Roman province shortly after Amyntas of Galatia was killed by an avenging widow of a slain prince from Homonada. The rebellious tribes of Asturias and Cantabria in modern-day Spain were finally quelled in 19 BC, and the territory fell under the provinces of Hispania and Lusitania. This region proved to be a major asset in funding Augustus' future military campaigns, as it was rich in mineral deposits that could be exploited in Roman mining projects, especially the very rich gold deposits at Las Medulas. 
Conquering the peoples of the Alps in 16 BC was another important victory for Rome, since it provided a large territorial buffer between the Roman citizens of Italy and Rome's enemies in Germania to the north. Horace dedicated an ode to the victory, while the monument Trophy of Augustus near Monaco was built to honor the occasion. The capture of the Alpine region also served the next offensive in 12 BC, when Tiberius began the offensive against the Pannonian tribes of Illyricum, and his brother Nero Claudius Drusus moved against the Germanic tribes of the eastern Rhineland. Both campaigns were successful, as Drusus' forces reached the Elbe River by 9 BC—though he died shortly afterwards after falling from his horse. It was recorded that the pious Tiberius walked in front of his brother's body all the way back to Rome. To protect Rome's eastern territories from the Parthian Empire, Augustus relied on the client states of the east to act as territorial buffers and areas that could raise their own troops for defense. To ensure security of the Empire's eastern flank, Augustus stationed a Roman army in Syria, while his skilled stepson Tiberius negotiated with the Parthians as Rome's diplomat to the East. Tiberius was responsible for restoring Tigranes V to the throne of the Kingdom of Armenia. Yet arguably his greatest diplomatic achievement was negotiating with Phraates IV of Parthia (37–2 BC) in 20 BC for the return of the battle standards lost by Crassus in the Battle of Carrhae, a symbolic victory and great boost of morale for Rome. Werner Eck claims that this was a great disappointment for Romans seeking to avenge Crassus' defeat by military means. However, Maria Brosius explains that Augustus used the return of the standards as propaganda symbolizing the submission of Parthia to Rome. The event was celebrated in art such as the breastplate design on the statue Augustus of Prima Porta and in monuments such as the Temple of Mars Ultor ('Mars the Avenger') built to house the standards. Parthia had always posed a threat to Rome in the east, but the real battlefront was along the Rhine and Danube rivers. Before the final fight with Antony, Octavian's campaigns against the tribes in Dalmatia were the first step in expanding Roman dominions to the Danube. Victory in battle was not always a permanent success, as newly conquered territories were constantly retaken by Rome's enemies in Germania. A prime example of Roman loss in battle was the Battle of Teutoburg Forest in AD 9, where three entire legions led by Publius Quinctilius Varus were destroyed by Arminius, leader of the Cherusci, an apparent Roman ally. Augustus retaliated by dispatching Tiberius and Drusus to the Rhineland to pacify it, which had some success, although the battle of AD 9 brought an end to Roman expansion into Germany. The Roman general Germanicus took advantage of a Cherusci civil war between Arminius and Segestes; he defeated Arminius, who fled that battle but was killed later, in AD 21, through treachery. The illness of Augustus in 23 BC brought the problem of succession to the forefront of political issues and public debate. To ensure stability, he needed to designate an heir to his unique position in Roman society and government. This was to be achieved in small, undramatic, and incremental ways that did not stir senatorial fears of monarchy. If someone were to succeed to Augustus' unofficial position of power, he would have to earn it through his own publicly proven merits. 
Some Augustan historians argue that indications pointed toward his sister's son Marcellus, who had been quickly married to Augustus' daughter Julia the Elder. Other historians dispute this, since Augustus' will, read aloud to the Senate while he was seriously ill in 23 BC, instead indicated a preference for Marcus Agrippa, who was Augustus' second in command and arguably the only one of his associates who could have controlled the legions and held the Empire together. After the death of Marcellus in 23 BC, Augustus married his daughter to Agrippa. This union produced five children, three sons and two daughters: Gaius Caesar, Lucius Caesar, Vipsania Julia, Agrippina the Elder, and Postumus Agrippa, so named because he was born after Marcus Agrippa died. Shortly after the Second Settlement, Agrippa was granted a five-year term of administering the eastern half of the Empire with the "imperium" of a proconsul and the same "tribunicia potestas" granted to Augustus (although not trumping Augustus' authority), with his seat of governance at Samos in the eastern Aegean. This granting of power showed Augustus' favor for Agrippa, but it was also a measure to please members of his Caesarian party by allowing one of their members to share a considerable amount of power with him. Augustus' intent to make Gaius and Lucius Caesar his heirs became apparent when he adopted them as his own children. He took the consulship in 5 and 2 BC so that he could personally usher them into their political careers, and they were nominated for the consulships of AD 1 and 4. Augustus also showed favor to his stepsons, Livia's children from her first marriage, Nero Claudius Drusus Germanicus (henceforth referred to as Drusus) and Tiberius Claudius (henceforth Tiberius), granting them military commands and public office, though seeming to favor Drusus. After Agrippa died in 12 BC, Tiberius was ordered to divorce his own wife Vipsania Agrippina and marry Agrippa's widow, Augustus' daughter Julia—as soon as a period of mourning for Agrippa had ended. Drusus' marriage to Antonia Minor was considered an unbreakable affair, whereas Vipsania was "only" the daughter of the late Agrippa from his first marriage. Tiberius shared in Augustus' tribune powers as of 6 BC, but shortly thereafter went into retirement, reportedly wanting no further role in politics, and exiled himself to Rhodes. No specific reason is known for his departure, though it could have been a combination of reasons, including a failing marriage with Julia, as well as a sense of envy and exclusion over Augustus' apparent favouring of his young grandchildren-turned-sons Gaius and Lucius. (Gaius and Lucius joined the college of priests at an early age, were presented to spectators in a more favorable light, and were introduced to the army in Gaul.) After the early deaths of both Lucius and Gaius in AD 2 and 4 respectively, and the earlier death of his brother Drusus (9 BC), Tiberius was recalled to Rome in June AD 4, where he was adopted by Augustus on the condition that he, in turn, adopt his nephew Germanicus. This continued the tradition of presenting at least two generations of heirs. In that year, Tiberius was also granted the powers of a tribune and a proconsul, and emissaries from foreign kings had to pay their respects to him; by AD 13 he had been awarded his second triumph and a level of "imperium" equal to that of Augustus. 
The only other possible claimant as heir was Postumus Agrippa, whom Augustus had exiled in AD 7 and officially disowned, his banishment made permanent by senatorial decree. He certainly fell out of Augustus' favor as an heir; the historian Erich S. Gruen notes various contemporary sources that state Postumus Agrippa was a "vulgar young man, brutal and brutish, and of depraved character". Postumus Agrippa was murdered at his place of exile either shortly before or after the death of Augustus. On 19 August AD 14, Augustus died while visiting Nola where his father had died. Both Tacitus and Cassius Dio wrote that Livia was rumored to have brought about Augustus' death by poisoning fresh figs. This element features in many modern works of historical fiction pertaining to Augustus' life, but some historians view it as likely to have been a salacious fabrication made by those who had favoured Postumus as heir, or by others of Tiberius' political enemies. Livia had long been the target of similar rumors of poisoning on behalf of her son, most or all of which are unlikely to have been true. Alternatively, it is possible that Livia did supply a poisoned fig (she did cultivate a variety of fig named for her that Augustus is said to have enjoyed), but did so as a means of assisted suicide rather than murder. Augustus' health had been in decline in the months immediately before his death, and he had made significant preparations for a smooth transition in power, having at last reluctantly settled on Tiberius as his choice of heir. It is likely that Augustus was not expected to return alive from Nola, but it seems that his health improved once there; it has therefore been speculated that Augustus and Livia conspired to end his life at the anticipated time, having committed all political process to accepting Tiberius, in order not to endanger that transition. Augustus' famous last words were, "Have I played the part well? Then applaud as I exit"—referring to the play-acting and regal authority that he had put on as emperor. Publicly, though, his last words were, "Behold, I found Rome of clay, and leave her to you of marble." An enormous funerary procession of mourners traveled with Augustus' body from Nola to Rome, and on the day of his burial all public and private businesses closed for the day. Tiberius and his son Drusus delivered the eulogy while standing atop two "rostra". Augustus' body was placed in a coffin and cremated on a pyre close to his mausoleum. It was proclaimed that Augustus joined the company of the gods as a member of the Roman pantheon. The mausoleum was despoiled by the Goths in 410 during the Sack of Rome, and his ashes were scattered. Historian D. C. A. Shotter states that Augustus' policy of favoring the Julian family line over the Claudian might have afforded Tiberius sufficient cause to show open disdain for Augustus after the latter's death; instead, Tiberius was always quick to rebuke those who criticized Augustus. Shotter suggests that Augustus' deification obliged Tiberius to suppress any open resentment that he might have harbored, coupled with Tiberius' "extremely conservative" attitude towards religion. Also, historian R. Shaw-Smith points to letters of Augustus to Tiberius which display affection towards Tiberius and high regard for his military merits. 
Shotter states that Tiberius focused his anger and criticism on Gaius Asinius Gallus (for marrying Vipsania after Augustus forced Tiberius to divorce her), as well as on the two young Caesars, Gaius and Lucius—rather than on Augustus, the real architect of his divorce and imperial demotion. Augustus' reign laid the foundations of a regime that lasted, in one form or another, for nearly fifteen hundred years through the ultimate decline of the Western Roman Empire and until the Fall of Constantinople in 1453. Both his adoptive surname, Caesar, and his title "Augustus" became the permanent titles of the rulers of the Roman Empire for fourteen centuries after his death, in use both at Old Rome and at New Rome. In many languages, "Caesar" became the word for "Emperor", as in the German "Kaiser" and in the Bulgarian and subsequently Russian "Tsar" (sometimes Csar or Czar). The cult of "Divus Augustus" continued until the state religion of the Empire was changed to Christianity in 391 by Theodosius I. Consequently, there are many excellent statues and busts of the first emperor. He had composed an account of his achievements, the "Res Gestae Divi Augusti", to be inscribed in bronze in front of his mausoleum. Copies of the text were inscribed throughout the Empire upon his death. The Latin inscriptions featured Greek translations beside them, and were inscribed on many public edifices, such as the temple in Ankara dubbed the "Monumentum Ancyranum", called the "queen of inscriptions" by historian Theodor Mommsen. A few other written works by Augustus are known—his poems "Sicily", "Epiphanus", and "Ajax", an autobiography of 13 books, a philosophical treatise, and his written rebuttal to Brutus' "Eulogy of Cato"—though these have not survived. Historians are able to analyze existing letters penned by Augustus to others for additional facts or clues about his personal life. Many consider Augustus to be Rome's greatest emperor; his policies certainly extended the Empire's life span and initiated the celebrated "Pax Romana" or "Pax Augusta". The Roman Senate wished subsequent emperors to "be more fortunate than Augustus and better than Trajan". Augustus was intelligent, decisive, and a shrewd politician, but he was not perhaps as charismatic as Julius Caesar, and was influenced on occasion by his third wife, Livia (sometimes for the worse). Nevertheless, his legacy proved more enduring. The city of Rome was utterly transformed under Augustus, with Rome's first institutionalized police force, fire fighting force, and the establishment of the municipal prefect as a permanent office. The police force was divided into cohorts of 500 men each, while the units of firemen ranged from 500 to 1,000 men each, with 7 units assigned to 14 divided city sectors. A "praefectus vigilum", or "Prefect of the Watch", was put in charge of the vigiles, Rome's fire brigade and police. With Rome's civil wars at an end, Augustus was also able to create a standing army for the Roman Empire, fixed at a size of 28 legions of about 170,000 soldiers. This was supported by numerous auxiliary units of 500 non-citizen soldiers each, often recruited from recently conquered areas. They usually equaled or slightly exceeded the legions in number. With his finances securing the maintenance of roads throughout Italy, Augustus also installed an official courier system of relay stations overseen by a military officer known as the "praefectus vehiculorum". 
Besides bringing swifter communication among Italian polities, his extensive building of roads throughout Italy also allowed Rome's armies to march across the country at an unprecedented pace. In the year 6 Augustus established the "aerarium militare", donating 170 million sesterces to the new military treasury that provided for both active and retired soldiers. One of the most enduring institutions of Augustus was the establishment of the Praetorian Guard in 27 BC, originally a personal bodyguard unit on the battlefield that evolved into an imperial guard as well as an important political force in Rome. They had the power to intimidate the Senate, install new emperors, and depose ones they disliked; the last emperor they served was Maxentius, as it was Constantine I who disbanded them in the early 4th century and destroyed their barracks, the Castra Praetoria. Although the most powerful individual in the Roman Empire, Augustus wished to embody the spirit of Republican virtue and norms. He also wanted to relate to and connect with the concerns of the plebs and lay people. He achieved this through various means of generosity and a cutting back of lavish excess. In the year 29 BC, Augustus gave 400 sesterces (equal to 1/10 of a Roman pound of gold) each to 250,000 citizens, 1,000 sesterces each to 120,000 veterans in the colonies, and spent 700 million sesterces in purchasing land for his soldiers to settle upon. He also restored 82 different temples to display his care for the Roman pantheon of deities. In 28 BC, he melted down 80 silver statues erected in his likeness and in his honor, in an attempt to appear frugal and modest. The longevity of Augustus' reign and its legacy to the Roman world should not be overlooked as a key factor in its success. As Tacitus wrote, the younger generations alive in AD 14 had never known any form of government other than the Principate. Had Augustus died earlier (in 23 BC, for instance), matters might have turned out differently. The attrition of the civil wars on the old Republican oligarchy and the longevity of Augustus, therefore, must be seen as major contributing factors in the transformation of the Roman state into a de facto monarchy in these years. Augustus' own experience, his patience, his tact, and his political acumen also played their parts. He directed the future of the Empire down many lasting paths, from the existence of a standing professional army stationed at or near the frontiers, to the dynastic principle so often employed in the imperial succession, to the embellishment of the capital at the emperor's expense. Augustus' ultimate legacy was the peace and prosperity the Empire enjoyed for the next two centuries under the system he initiated. His memory was enshrined in the political ethos of the Imperial age as a paradigm of the good emperor. Every Emperor of Rome adopted his name, Caesar Augustus, which gradually lost its character as a name and eventually became a title. The Augustan era poets Virgil and Horace praised Augustus as a defender of Rome, an upholder of moral justice, and an individual who bore the brunt of responsibility in maintaining the empire. However, for his rule of Rome and establishing the principate, Augustus has also been subjected to criticism throughout the ages. The contemporary Roman jurist Marcus Antistius Labeo (d. AD 10/11), fond of the days of pre-Augustan republican liberty in which he had been born, openly criticized the Augustan regime. 
In the beginning of his "Annals", the Roman historian Tacitus (c. 56–c.117) wrote that Augustus had cunningly subverted Republican Rome into a position of slavery. He continued to say that, with Augustus' death and swearing of loyalty to Tiberius, the people of Rome simply traded one slaveholder for another. Tacitus, however, records two contradictory but common views of Augustus. In a recent biography of Augustus, Anthony Everitt asserts that, through the centuries, judgments on Augustus' reign have oscillated between these two extremes. Tacitus was of the belief that Nerva (r. 96–98) successfully "mingled two formerly alien ideas, principate and liberty". The 3rd-century historian Cassius Dio acknowledged Augustus as a benign, moderate ruler, yet like most other historians after the death of Augustus, Dio viewed Augustus as an autocrat. The poet Marcus Annaeus Lucanus (AD 39–65) was of the opinion that Caesar's victory over Pompey and the fall of Cato the Younger (95 BC–46 BC) marked the end of traditional liberty in Rome; historian Chester G. Starr, Jr. writes of his avoidance of criticizing Augustus, "perhaps Augustus was too sacred a figure to accuse directly." The Anglo-Irish writer Jonathan Swift (1667–1745), in his "Discourse on the Contests and Dissentions in Athens and Rome", criticized Augustus for installing tyranny over Rome, and likened what he saw as Great Britain's virtuous constitutional monarchy to Rome's moral Republic of the 2nd century BC. In his criticism of Augustus, the admiral and historian Thomas Gordon (1658–1741) compared Augustus to the puritanical tyrant Oliver Cromwell (1599–1658). Thomas Gordon and the French political philosopher Montesquieu (1689–1755) both remarked that Augustus was a coward in battle. In his "Memoirs of the Court of Augustus", the Scottish scholar Thomas Blackwell (1701–1757) deemed Augustus a Machiavellian ruler, "a bloodthirsty vindicative usurper", "wicked and worthless", "a mean spirit", and a "tyrant". Augustus' public revenue reforms had a great impact on the subsequent success of the Empire. Augustus brought a far greater portion of the Empire's expanded land base under consistent, direct taxation from Rome, instead of exacting varying, intermittent, and somewhat arbitrary tributes from each local province as Augustus' predecessors had done. This reform greatly increased Rome's net revenue from its territorial acquisitions, stabilized its flow, and regularized the financial relationship between Rome and the provinces, rather than provoking fresh resentments with each new arbitrary exaction of tribute. The measures of taxation in the reign of Augustus were determined by population census, with fixed quotas for each province. Citizens of Rome and Italy paid indirect taxes, while direct taxes were exacted from the provinces. Indirect taxes included a 4% tax on the price of slaves, a 1% tax on goods sold at auction, and a 5% tax on the inheritance of estates valued at over 100,000 sesterces by persons other than the next of kin. An equally important reform was the abolition of private tax farming, which was replaced by salaried civil service tax collectors. Private contractors who collected taxes for the State were the norm in the Republican era. Some of them were powerful enough to influence the number of votes for men running for offices in Rome. These tax farmers, called publicans, were infamous for their depredations, their great private wealth, and the right to tax local areas. 
Rome's revenue equaled the amount of the successful bids to farm the taxes. The tax farmers' profits consisted of additional amounts they could forcibly wring from the populace, with Rome's blessing or with Rome turning a blind eye. Lack of effective supervision, combined with tax farmers' desire to maximize their profits, produced a system of arbitrary exactions regarded, quite rightly, as barbarously cruel to taxpayers, unfair, and very harmful to investment and the economy. The use of Egypt's immense land rents to finance the Empire's operations resulted from Augustus' conquest of Egypt and the shift to a Roman form of government. As Egypt was effectively considered Augustus' private property rather than a province of the Empire, it became part of each succeeding emperor's patrimonium. Instead of a legate or proconsul, Augustus installed a prefect from the equestrian class to administer Egypt and maintain its lucrative seaports; this position became the highest political achievement for any equestrian besides becoming Prefect of the Praetorian Guard. The highly productive agricultural land of Egypt yielded enormous revenues that were available to Augustus and his successors to pay for public works and military expeditions, as well as bread and circuses for the population of Rome. During his reign the circus games resulted in the killing of 3,500 elephants. The month of August (Latin: "Augustus") is named after Augustus; until his time it was called Sextilis (named so because it had been the sixth month of the original Roman calendar and the Latin word for six is "sex"). Commonly repeated lore has it that August has 31 days because Augustus wanted his month to match the length of Julius Caesar's July, but this is an invention of the 13th-century scholar Johannes de Sacrobosco. Sextilis in fact had 31 days before it was renamed, and it was not chosen for its length (see Julian calendar). According to a "senatus consultum" quoted by Macrobius, Sextilis was renamed to honor Augustus because several of the most significant events in his rise to power, culminating in the fall of Alexandria, fell in that month. On his deathbed, Augustus boasted "I found a Rome of bricks; I leave to you one of marble." Although there is some truth in the literal meaning of this, Cassius Dio asserts that it was a metaphor for the Empire's strength. Marble could be found in buildings of Rome before Augustus, but it was not extensively used as a building material until his reign. Although this did not apply to the Subura slums, which were still as rickety and fire-prone as ever, he did leave a mark on the monumental topography of the centre and of the Campus Martius, with the Ara Pacis (Altar of Peace) and monumental sundial, whose central gnomon was an obelisk taken from Egypt. The relief sculptures decorating the Ara Pacis visually augmented the written record of Augustus' triumphs in the "Res Gestae". Its reliefs depicted the imperial pageants of the praetorians, the Vestals, and the citizenry of Rome. He also built the Temple of Caesar, the Baths of Agrippa, and the Forum of Augustus with its Temple of Mars Ultor. Other projects were either encouraged by him, such as the Theatre of Balbus, and Agrippa's construction of the Pantheon, or funded by him in the name of others, often relations (e.g. Portico of Octavia, Theatre of Marcellus). Even his Mausoleum of Augustus was built before his death to house members of his family. 
To celebrate his victory at the Battle of Actium, the Arch of Augustus was built in 29 BC near the entrance of the Temple of Castor and Pollux, and widened in 19 BC to include a triple-arch design. There are also many buildings outside of the city of Rome that bear Augustus' name and legacy, such as the Theatre of Mérida in modern Spain, the Maison Carrée built at Nîmes in today's southern France, as well as the Trophy of Augustus at La Turbie, located near Monaco. After the death of Agrippa in 12 BC, a solution had to be found for maintaining Rome's water supply system, which Agrippa had overseen when he served as aedile and had even funded afterwards, at his own expense, as a private citizen. In that year, Augustus arranged a system whereby the Senate designated three of its members as prime commissioners in charge of the water supply, to ensure that Rome's aqueducts did not fall into disrepair. In the late Augustan era, the commission of five senators called the "curatores locorum publicorum iudicandorum" (translated as "Supervisors of Public Property") was put in charge of maintaining public buildings and temples of the state cult. Augustus created the senatorial group of the "curatores viarum" (translated as "Supervisors for Roads") for the upkeep of roads; this senatorial commission worked with local officials and contractors to organize regular repairs. The Corinthian order, an architectural style originating in ancient Greece, was the dominant architectural style in the age of Augustus and the imperial phase of Rome. Suetonius once commented that Rome was unworthy of its status as an imperial capital, yet Augustus and Agrippa set out to dismantle this sentiment by transforming the appearance of Rome upon the classical Greek model. His biographer Suetonius, writing about a century after Augustus' death, described his appearance as: "... unusually handsome and exceedingly graceful at all periods of his life, though he cared nothing for personal adornment. He was so far from being particular about the dressing of his hair, that he would have several barbers working in a hurry at the same time, and as for his beard he now had it clipped and now shaved, while at the very same time he would either be reading or writing something ... He had clear, bright eyes ... His teeth were wide apart, small, and ill-kept; his hair was slightly curly and inclined to golden; his eyebrows met. His ears were of moderate size, and his nose projected a little at the top and then bent ever so slightly inward. His complexion was between dark and fair. He was short of stature, although Julius Marathus, his freedman and keeper of his records, says that he was five feet and nine inches (just under 5 ft. 7 in., or 1.70 meters, in modern height measurements), but this was concealed by the fine proportion and symmetry of his figure, and was noticeable only by comparison with some taller person standing beside him...", adding that "his shoes [were] somewhat high-soled, to make him look taller than he really was". Scientific analysis of traces of paint found in his official statues shows that he most likely had light brown hair and eyes (his hair and eyes were depicted as the same color). His official images were very tightly controlled and idealized, drawing from a tradition of Hellenistic royal portraiture rather than the tradition of realism in Roman portraiture. 
He first appeared on coins at the age of 19, and from about 29 BC "the explosion in the number of Augustan portraits attests a concerted propaganda campaign aimed at dominating all aspects of civil, religious, economic and military life with Augustus' person." The early images did indeed depict a young man, but although there were gradual changes his images remained youthful until he died in his seventies, by which time they had "a distanced air of ageless majesty". Among the best known of many surviving portraits are the Augustus of Prima Porta, the image on the Ara Pacis, and the Via Labicana Augustus, which shows him as a priest. Several cameo portraits survive, including the Blacas Cameo and the "Gemma Augustea". Augustus' only biological (non-adopted) child was his daughter. Apatosaurus Apatosaurus (meaning "deceptive lizard") is a genus of herbivorous sauropod dinosaur that lived in North America during the Late Jurassic period. Othniel Charles Marsh described and named the first-known species, A. ajax, in 1877, and a second species, A. louisae, was discovered and named by William H. Holland in 1916. "Apatosaurus" lived about 152 to 151 million years ago (mya), during the early Tithonian age, and is now known from fossils in the Morrison Formation of modern-day Colorado, Oklahoma, New Mexico, and Utah in the United States. "Apatosaurus" had an average length of , and an average mass of . A few specimens indicate a maximum length of 11–30% greater than average and a mass of . The cervical vertebrae of "Apatosaurus" are less elongated and more heavily constructed than those of "Diplodocus", a diplodocid like "Apatosaurus", and the bones of the leg are much stockier despite being longer, implying that "Apatosaurus" was a more robust animal. The tail was held above the ground during normal locomotion. "Apatosaurus" had a single claw on each forelimb and three on each hindlimb. The "Apatosaurus" skull, long thought to be similar to that of "Camarasaurus", is much more similar to that of "Diplodocus". "Apatosaurus" was a generalized browser that likely held its head elevated. To lighten its vertebrae, "Apatosaurus" had air sacs that made the bones internally full of holes. Like that of other diplodocids, its tail may have been used as a whip to create loud noises. The skull of "Apatosaurus" was confused with that of "Camarasaurus" and "Brachiosaurus" until 1909, when the holotype of "A. louisae" was found, along with a complete skull just a few meters away from the front of the neck. Henry Fairfield Osborn disagreed with this association, and went on to mount a skeleton of "Apatosaurus" with a "Camarasaurus" skull cast. "Apatosaurus" skeletons were mounted with speculative skull casts until 1970, when McIntosh showed that more robust skulls assigned to "Diplodocus" were more likely from "Apatosaurus". "Apatosaurus" is a genus in the family Diplodocidae. It is one of the more basal genera, with only "Amphicoelias" and possibly a new, unnamed genus more primitive. While the subfamily Apatosaurinae was named in 1929, the group was not used validly until an extensive 2015 study. Only "Brontosaurus" is also in the subfamily, with the other genera being considered synonyms or reclassified as diplodocines. "Brontosaurus" has long been considered a junior synonym of "Apatosaurus"; its only species was reclassified as "A. excelsus" in 1903. 
A 2015 study concluded that "Brontosaurus" is a valid genus of sauropod distinct from "Apatosaurus", but not all paleontologists agree with this division. As it existed in North America during the Late Jurassic, "Apatosaurus" would have lived alongside dinosaurs such as "Allosaurus", "Camarasaurus", "Diplodocus", and "Stegosaurus". "Apatosaurus" was a large, long-necked, quadrupedal animal with a long, whip-like tail. Its forelimbs were slightly shorter than its hindlimbs. Most size estimates are based on specimen CM 3018, the type specimen of "A. louisae". In 1936 this was measured to be , by measuring the vertebral column. Current estimates are similar, finding that the individual was long and had a mass of . A 2015 study that estimated the mass of volumetric models of "Dreadnoughtus", "Apatosaurus", and "Giraffatitan" estimates CM 3018 at , similar in mass to "Dreadnoughtus". Past estimates have put the creature's mass as high as . Some specimens of "A. ajax" (such as OMNH 1670) represent individuals 11–30% longer, suggesting masses twice that of CM 3018 or , potentially rivalling the largest titanosaurs. The skull is small in relation to the size of the animal. The jaws are lined with spatulate (chisel-like) teeth suited to an herbivorous diet. The snout of "Apatosaurus" and similar diplodocoids is squared, with only "Nigersaurus" having a squarer skull. The braincase of "Apatosaurus" is well preserved in specimen BYU 17096, which also preserved much of the skeleton. A phylogenetic analysis found that the braincase had a morphology similar to those of other diplodocoids. Some skulls of "Apatosaurus" have been found still in articulation with their teeth. Those teeth that have the enamel surface exposed do not show any scratches on the surface; instead, they display a sugary texture and little wear. Like those of other sauropods, the neck vertebrae are deeply bifurcated; they carried neural spines with a large trough in the middle, resulting in a wide, deep neck. The vertebral formula for the holotype of "A. louisae" is 15 cervicals, 10 dorsals, 5 sacrals, and 82 caudals. The caudal vertebra number may vary, even within species. The cervical vertebrae of "Apatosaurus" and "Brontosaurus" are stouter and more robust than those of other diplodocids and were found to be most similar to "Camarasaurus" by Charles Whitney Gilmore. In addition, they support cervical ribs that extend farther towards the ground than in diplodocines, and have vertebrae and ribs that are narrower towards the top of the neck, making the neck nearly triangular in cross-section. In "Apatosaurus louisae", the atlas-axis complex of the first cervicals is nearly fused. The dorsal ribs are not fused or tightly attached to their vertebrae and are instead loosely articulated. "Apatosaurus" has ten dorsal ribs on either side of the body. The large neck was filled with an extensive system of weight-saving air sacs. "Apatosaurus", like its close relative "Supersaurus", has tall neural spines, which make up more than half the height of the individual bones of its vertebrae. The shape of the tail is unusual for a diplodocid; it is comparatively slender because of the rapidly decreasing height of the vertebral spines with increasing distance from the hips. "Apatosaurus" also had very long ribs compared to most other diplodocids, giving it an unusually deep chest. As in other diplodocids, the tail transformed into a whip-like structure towards the end. The limb bones are also very robust. 
Within Apatosaurinae, the scapula of "Apatosaurus louisae" is intermediate in morphology between those of "A. ajax" and "Brontosaurus excelsus". The arm bones are stout, so the humerus of "Apatosaurus" resembles that of "Camarasaurus", as well as "Brontosaurus". However, the humeri of "Brontosaurus" and "A. ajax" are more similar to each other than they are to "A. louisae". In 1936 Charles Gilmore noted that previous reconstructions of "Apatosaurus" forelimbs erroneously proposed that the radius and ulna could cross; in life they would have remained parallel. "Apatosaurus" had a single large claw on each forelimb, a feature shared by all sauropods more derived than "Shunosaurus". The first three toes of each hindlimb bore claws. The phalangeal formula is 2-1-1-1-1, meaning the innermost digit of the forelimb has two bones and the next has one. The single manual claw bone (ungual) is slightly curved and squarely truncated on the anterior end. The pelvic girdle includes the robust ilia, and the fused (co-ossified) pubes and ischia. The femora of "Apatosaurus" are very stout and represent some of the most robust femora of any member of Sauropoda. The tibia and fibula bones are different from the slender bones of "Diplodocus" but are nearly indistinguishable from those of "Camarasaurus". The fibula is longer and more slender than the tibia. The foot of "Apatosaurus" has three claws on the innermost digits; the digit formula is 3-4-5-3-2. The first metatarsal is the stoutest, a feature shared among diplodocids. The name "Apatosaurus ajax" was coined in 1877 by Othniel Charles Marsh, Professor of Paleontology at Yale University, based on a nearly complete skeleton (holotype, YPM 1860) discovered in the eastern foothills of the Rocky Mountains in Gunnison County, Colorado. The composite term "Apatosaurus" comes from the Greek words "apatē"/"apatēlos", meaning "deception"/"deceptive", and "sauros", meaning "lizard"; thus, "deceptive lizard". Marsh gave it this name based on the chevron bones, which are dissimilar to those of other dinosaurs; instead, the chevron bones of "Apatosaurus" showed similarities with those of mosasaurs. During excavation and transportation, the bones of the holotype skeleton were mixed with those of another "Apatosaurus" individual originally described as "Atlantosaurus immanis"; as a consequence, some elements cannot be ascribed to either specimen with confidence. Marsh distinguished the new genus "Apatosaurus" from "Atlantosaurus" on the basis of the number of sacral vertebrae, with "Apatosaurus" possessing three and "Atlantosaurus" four. Two years later, Marsh announced the discovery of a larger and more complete specimen at Como Bluff, Wyoming. Following the conventions of his age and working from the relatively sparse fossil record then available, he gave this specimen a new name, "Brontosaurus excelsus". It was later recognised that the features he had used to distinguish genera and species were in fact more widespread among sauropods. All specimens currently considered "Apatosaurus" were from the Morrison Formation, the location of the excavations of Marsh and his rival Edward Drinker Cope. Another specimen, in the American Museum of Natural History under specimen number 460, which is occasionally assigned to "Apatosaurus", is considered nearly complete; only the head, feet, and sections of the tail are missing, and it was the first sauropod skeleton mounted. 
The specimen was found north of Medicine Bow, Wyoming, in 1898 by Walter Granger, and took the entire summer to extract. To complete the mount, sauropod feet that were discovered at the same quarry and a tail fashioned to appear as Marsh believed it should (but which had too few vertebrae) were added. In addition, a sculpted model of what the museum thought the skull of this massive creature might look like was made. This was not a delicate skull like that of "Diplodocus" (which was later found to be more accurate), but was based on "the biggest, thickest, strongest skull bones, lower jaws and tooth crowns from three different quarries". These skulls were likely those of "Camarasaurus", the only other sauropod for which good skull material was known at the time. The mount construction was overseen by Adam Hermann, who failed to find "Apatosaurus" skulls. Hermann was forced to sculpt a stand-in skull by hand. Osborn said in a publication that the skull was "largely conjectural and based on that of "Morosaurus"" (now "Camarasaurus"). In 1903 Elmer Riggs published a study that described a well-preserved skeleton of a diplodocid from the Grand River Valley near Fruita, Colorado, Field Museum of Natural History specimen P25112. Riggs thought that the deposits were similar in age to those of Como Bluff in Wyoming from which Marsh had described "Brontosaurus". Most of the skeleton was found, and after comparison with both "Brontosaurus" and "Apatosaurus ajax", Riggs realized that the holotype of "A. ajax" was immature, and thus the features distinguishing the genera were not valid. Since "Apatosaurus" was the earlier name, Riggs concluded that "Brontosaurus" should be considered a junior synonym of "Apatosaurus", and he therefore recombined "Brontosaurus excelsus" as "Apatosaurus excelsus". Based on comparisons with other species proposed to belong to "Apatosaurus", Riggs also determined that the Field Columbian Museum specimen was likely most similar to "A. excelsus". Despite Riggs' publication, Henry Fairfield Osborn, who was a strong opponent of Marsh and his taxa, labeled the "Apatosaurus" mount of the American Museum of Natural History "Brontosaurus". Because of this decision the name "Brontosaurus" was commonly used outside of scientific literature for what Riggs considered "Apatosaurus", and the museum's popularity meant that "Brontosaurus" became one of the best known dinosaurs, even though the name was considered invalid throughout nearly all of the 20th and early 21st centuries. It was not until 1909 that an "Apatosaurus" skull was found during the first expedition, led by Earl Douglass, to what would become known as the Carnegie Quarry at Dinosaur National Monument. The skull was found a short distance from a skeleton (specimen CM 3018) identified as the new species "Apatosaurus louisae", named after Louise Carnegie, wife of Andrew Carnegie, who funded field research to find complete dinosaur skeletons in the American West. The skull was designated CM 11162; it was very similar to the skull of "Diplodocus". Another, smaller skeleton of "A. louisae" was found near CM 11162 and CM 3018. The skull was accepted as belonging to the "Apatosaurus" specimen by Douglass and Carnegie Museum director William H. Holland, although other scientists, most notably Osborn, rejected this identification. Holland defended his view in 1914 in an address to the Paleontological Society of America, yet he left the Carnegie Museum mount headless. 
While some thought Holland was attempting to avoid conflict with Osborn, others suspected Holland was waiting until an articulated skull and neck were found to confirm the association of the skull and skeleton. After Holland's death in 1934, museum staff placed a cast of a "Camarasaurus" skull on the mount. While most other museums were using cast or sculpted "Camarasaurus" skulls on "Apatosaurus" mounts, the Yale Peabody Museum decided to sculpt a skull based on the lower jaw of a "Camarasaurus", with the cranium based on Marsh's 1891 illustration of the skull. The skull also included forward-pointing nasals (something different from any dinosaur) and fenestrae differing from both the drawing and other skulls. No "Apatosaurus" skull was mentioned in literature until the 1970s, when John Stanton McIntosh and David Berman redescribed the skulls of "Diplodocus" and "Apatosaurus". They found that, though he never published his opinion, Holland was almost certainly correct that "Apatosaurus" had a "Diplodocus"-like skull. According to them, many skulls long thought to pertain to "Diplodocus" might instead be those of "Apatosaurus". They reassigned multiple skulls to "Apatosaurus" based on associated and closely associated vertebrae. Even though they supported Holland, it was noted that "Apatosaurus" might have possessed a "Camarasaurus"-like skull, based on a disarticulated "Camarasaurus"-like tooth found at the precise site where an "Apatosaurus" specimen was found years before. On October 20, 1979, after the publications by McIntosh and Berman, the first true skull of "Apatosaurus" was mounted on a skeleton in a museum, that of the Carnegie. In 1998 it was suggested that the Felch Quarry skull that Marsh had included in his 1896 skeletal restoration instead belonged to "Brachiosaurus". In 2011 the first specimen of "Apatosaurus" in which a skull was found articulated with its cervical vertebrae was described. This specimen, CMC VP 7180, was found to differ in both skull and neck features from "A. louisae", but shared many features of the cervical vertebrae with "A. ajax". Another well-preserved skull is Brigham Young University specimen 17096, a skull and skeleton with a preserved braincase. The specimen was found in Cactus Park Quarry in western Colorado. Almost all modern paleontologists agreed with Riggs that the two dinosaurs should be classified together in a single genus. According to the rules of the ICZN (which governs the scientific names of animals), the name "Apatosaurus", having been published first, has priority as the official name; "Brontosaurus" was considered a junior synonym and was therefore long discarded from formal use. Despite this, at least one paleontologist, Robert T. Bakker, argued in the 1990s that "A. ajax" and "A. excelsus" were in fact sufficiently distinct for the latter to merit a separate genus. In 2015 Emanuel Tschopp, Octávio Mateus, and Roger Benson released a paper on diplodocoid systematics, and proposed that genera could be diagnosed by thirteen differing characters, and species separated based on six. The minimum number for generic separation was chosen based on the fact that "A. ajax" and "A. louisae" differ in twelve characters, and "Diplodocus carnegiei" and "D. hallorum" differ in eleven characters. Thus, thirteen characters were chosen to validate the separation of genera. 
The six differing features for specific separation were chosen by counting the number of differing features in separate specimens generally agreed to represent one species, with only one differing character in "D. carnegiei" and "A. louisae", but five differing features in "B. excelsus". Therefore, Tschopp et al. argued that "Apatosaurus excelsus", originally classified as "Brontosaurus excelsus", had enough morphological differences from other species of "Apatosaurus" that it warranted being reclassified as a separate genus again. The conclusion was based on a comparison of 477 morphological characteristics across 81 different dinosaur individuals. Among the many notable differences is the wider (and presumably stronger) neck of "Apatosaurus" species compared to "B. excelsus". Other species previously assigned to "Apatosaurus", such as "Elosaurus parvus" and "Eobrontosaurus yahnahpin", were also reclassified as "Brontosaurus". Some features proposed to separate "Brontosaurus" from "Apatosaurus" include: posterior dorsal vertebrae with the centrum longer than wide; the scapula rear to the acromial edge and the distal blade being excavated; the acromial edge of the distal scapular blade bearing a rounded expansion; and the ratio of the proximodistal length to transverse breadth of the astragalus being 0.55 or greater. Sauropod expert Michael Daniel D'Emic pointed out that the criteria chosen were to an extent arbitrary and that they would require abandoning the name "Brontosaurus" again if newer analyses obtained different results. Mammal palaeontologist Donald Prothero criticized the mass media reaction to this study as superficial and premature, concluding that he would keep "Brontosaurus" in quotes and not treat the name as a valid genus. Many species of "Apatosaurus" have been designated from scant material. Marsh named as many species as he could, which resulted in many being based upon fragmentary and indistinguishable remains. In 2005 Paul Upchurch and colleagues published a study that analyzed the species and specimen relationships of "Apatosaurus". They found that "A. louisae" was the most basal species, followed by FMNH P25112, and then a polytomy of "A. ajax", "A. parvus", and "A. excelsus". Their analysis was revised and expanded with many additional diplodocid specimens in 2015, which resolved the relationships of "Apatosaurus" slightly differently, and also supported separating "Brontosaurus" from "Apatosaurus". The cladogram below is the result of an analysis by Tschopp, Mateus, and Benson (2015). The authors analyzed most diplodocid type specimens separately to deduce which specimen belonged to which species and genus. "Apatosaurus" is a member of the family Diplodocidae, a clade of gigantic sauropod dinosaurs. The family includes some of the longest creatures ever to walk the earth, including "Diplodocus", "Supersaurus", and "Barosaurus". "Apatosaurus" is sometimes classified in the subfamily Apatosaurinae, which may also include "Suuwassea", "Supersaurus", and "Brontosaurus". Othniel Charles Marsh described "Apatosaurus" as allied to "Atlantosaurus" within the now-defunct group Atlantosauridae. In 1878 Marsh raised this family to the rank of suborder, including "Apatosaurus", "Atlantosaurus", "Morosaurus" (="Camarasaurus") and "Diplodocus". He classified this group within Sauropoda, a group he erected in the same study. In 1903 Elmer S. Riggs said the name Sauropoda would be a junior synonym of earlier names; he grouped "Apatosaurus" within Opisthocoelia. 
Sauropoda is still used as the group name. In 2011, John Whitlock published a study that placed "Apatosaurus" as a more basal diplodocid, sometimes less basal than "Supersaurus". Cladogram of the Diplodocidae after Tschopp, Mateus, and Benson (2015). It was believed throughout the 19th and early 20th centuries that sauropods like "Apatosaurus" were too massive to support their own weight on dry land. It was theorized that they lived partly submerged in water, perhaps in swamps. More recent findings do not support this; sauropods are now thought to have been fully terrestrial animals. A study of diplodocid snouts showed that the square snout, large proportion of pits, and fine, subparallel scratches of the teeth of "Apatosaurus" suggest it was a ground-height, nonselective browser. It may have eaten ferns, cycadeoids, seed ferns, horsetails, and algae. Stevens and Parrish (2005) speculate that these sauropods fed from riverbanks on submerged water plants. A 2015 study of the necks of "Apatosaurus" and "Brontosaurus" found many differences between them and other diplodocids, and suggested that these variations may indicate that the necks of "Apatosaurus" and "Brontosaurus" were used for intraspecific combat. Various uses for the single claw on the forelimb of sauropods have been proposed. One suggestion is that they were used for defense, but their shape and size make this unlikely. They may also have been used for feeding, but the most probable use for the claw was grasping objects such as tree trunks when rearing. Trackways of sauropods like "Apatosaurus" show that they may have had a range of around per day, and that they could potentially have reached a top speed of per hour. The slow locomotion of sauropods may be due to their minimal muscling, or to recoil after strides. A trackway of a juvenile has led some to believe that they were capable of bipedalism, though this is disputed. Diplodocids like "Apatosaurus" are often portrayed with their necks held high up in the air, allowing them to browse on tall trees. Some studies state diplodocid necks were less flexible than previously believed, because the structure of the neck vertebrae would not have allowed the neck to bend far upwards, and that sauropods like "Apatosaurus" were adapted to low browsing or ground feeding. Other studies by Taylor find that all tetrapods appear to hold their necks at the maximum possible vertical extension when in a normal, alert posture; they argue the same would hold true for sauropods barring any unknown, unique characteristics that set the soft tissue anatomy of their necks apart from that of other animals. "Apatosaurus", like "Diplodocus", would have held its neck angled upwards with the head pointing downwards in a resting posture. Kent Stevens and Michael Parrish (1999 and 2005) state "Apatosaurus" had a great feeding range; its neck could bend into a U-shape laterally. The neck's range of movement would have also allowed the head to feed at the level of the feet. Matthew Cobley et al. (2013) dispute this, finding that large muscles and cartilage would have limited movement of the neck. They state the feeding ranges for sauropods like "Diplodocus" were smaller than previously believed, and the animals may have had to move their whole bodies around to better access areas where they could browse vegetation. As such, they might have spent more time foraging to meet their minimum energy needs. The conclusions of Cobley et al. 
are disputed by Taylor, who analyzed the amount and positioning of intervertebral cartilage to determine the flexibility of the neck of "Apatosaurus" and "Diplodocus". He found that the neck of "Apatosaurus" was very flexible. Given the large body mass and long neck of sauropods like "Apatosaurus", physiologists have encountered problems determining how these animals breathed. Beginning with the assumption that, like crocodilians, "Apatosaurus" did not have a diaphragm, the dead-space volume (the amount of unused air remaining in the mouth, trachea, and air tubes after each breath) has been estimated at about for a specimen. Paladino calculates its tidal volume (the amount of air moved in or out during a single breath) at with an avian respiratory system, if mammalian, and if reptilian. On this basis, its respiratory system would likely have consisted of parabronchi, with multiple pulmonary air sacs as in avian lungs, and a flow-through lung. An avian respiratory system would need a lung volume of about compared with a mammalian requirement of , which would exceed the space available. The overall thoracic volume of "Apatosaurus" has been estimated at , allowing for a , four-chambered heart and a lung capacity. That would allow about for the necessary tissue. Evidence for the avian system in "Apatosaurus" and other sauropods is also present in the pneumaticity of the vertebrae. Though this plays a role in reducing the weight of the animal, Wedel (2003) states that the pneumatic spaces are also likely connected to air sacs, as in birds. James Spotila et al. (1991) conclude that the large body size of sauropods would have made them unable to maintain high metabolic rates because they would not have been able to release enough heat. They assumed sauropods had a reptilian respiratory system. Wedel says that an avian system would have allowed them to dump more heat. Some scientists state that the heart would have had trouble sustaining sufficient blood pressure to oxygenate the brain. Others suggest that the near-horizontal posture of the head and neck would have eliminated the problem of supplying blood to the brain because it would not have been elevated. James Farlow (1987) calculates that an "Apatosaurus"-sized dinosaur about would have possessed of fermentation contents. Assuming "Apatosaurus" had an avian respiratory system and a reptilian resting-metabolism, Frank Paladino et al. (1997) estimate the animal would have needed to consume only about of water per day. A 1999 microscopic study of "Apatosaurus" and "Brontosaurus" bones concluded the animals grew rapidly when young and reached near-adult sizes in about 10 years. In 2008, a study on the growth rates of sauropods was published by Thomas Lehman and Holly Woodward. They said that by using growth lines and length-to-mass ratios, "Apatosaurus" would have grown to 25 t (25 long tons; 28 short tons) in 15 years, with growth peaking at in a single year. An alternative method, using limb length and body mass, found "Apatosaurus" grew per year, and reached its full mass before it was about 70 years old. These estimates have been called unreliable because the calculation methods are not sound; old growth lines would have been obliterated by bone remodelling. One of the first identified growth factors of "Apatosaurus" was the number of sacral vertebrae, which increased to five by the time of the creature's maturity. This was first noted in 1903 and again in 1936. Long-bone histology enables researchers to estimate the age that a specific individual reached. 
A study by Eva Griebeler et al. (2013) examined long-bone histological data and concluded that the "Apatosaurus" sp. SMA 0014 weighed , reached sexual maturity at 21 years, and died aged 28. The same growth model indicated "Apatosaurus" sp. BYU 601–17328 weighed , reached sexual maturity at 19 years, and died aged 31. Compared with most sauropods, a relatively large amount of juvenile material is known from "Apatosaurus". Multiple specimens in the OMNH are from juveniles of an undetermined species of "Apatosaurus"; this material includes partial shoulder and pelvic girdles, some vertebrae, and limb bones. The OMNH juvenile material is from at least two different age groups and, based on overlapping bones, likely comes from more than three individuals. The specimens exhibit features that distinguish "Apatosaurus" from its relatives, and thus likely belong to the genus. Juvenile sauropods tend to have proportionally shorter necks and tails, and a more pronounced forelimb-hindlimb disparity than found in adult sauropods. An article published in 1997 reported research on the mechanics of "Apatosaurus" tails by Nathan Myhrvold and paleontologist Philip J. Currie. Myhrvold carried out a computer simulation of the tail, which in diplodocids like "Apatosaurus" was a very long, tapering structure resembling a bullwhip. This computer modeling suggested sauropods were capable of producing a whiplike cracking sound of over 200 decibels, comparable to the volume of a cannon being fired. A pathology has been identified on the tail of "Apatosaurus", caused by a growth defect. Two caudal vertebrae are seamlessly fused along the entire articulating surface of the bone, including the arches of the neural spines. This defect might have been caused by the lack or inhibition of the substance that forms intervertebral disks or joints. It has been proposed that these whip-like tails could have been used in combat and defense, but the tails of diplodocids were quite light and narrow compared to those of "Shunosaurus" and mamenchisaurids, and thus injuring another animal with the tail would severely injure the tail itself. The Morrison Formation is a sequence of shallow marine and alluvial sediments which, according to radiometric dating, dates from between 156.3 mya at its base and 146.8 mya at the top, placing it in the late Oxfordian, Kimmeridgian, and early Tithonian stages of the Late Jurassic period. This formation is interpreted as originating in a locally semiarid environment with distinct wet and dry seasons. The Morrison Basin, where dinosaurs lived, stretched from New Mexico to Alberta and Saskatchewan; it was formed when the precursors to the Front Range of the Rocky Mountains started pushing up to the west. The deposits from their east-facing drainage basins were carried by streams and rivers and deposited in swampy lowlands, lakes, river channels, and floodplains. This formation is similar in age to the Lourinhã Formation in Portugal and the Tendaguru Formation in Tanzania. "Apatosaurus" was the second most common sauropod in the Morrison Formation ecosystem, after "Camarasaurus". "Apatosaurus" may have been more solitary than other Morrison Formation dinosaurs. "Supersaurus" has a greater total length than "Apatosaurus" and is the largest of all sauropods from the Morrison Formation. "Apatosaurus" fossils have only been found in the upper levels of the formation. Those of "Apatosaurus ajax" are known exclusively from the upper Brushy Basin Member, about 152–151 mya. 
"A.louisae" fossils are rare, known only from one site in the upper Brushy Basin Member; they date to the late Kimmeridgian stage, about 151mya. Additional "Apatosaurus" remains are known from similarly aged or slightly younger rocks, but they have not been identified as any particular species, and thus may instead belong to "Brontosaurus". The Morrison Formation records a time when the local environment was dominated by gigantic sauropod dinosaurs. Dinosaurs known from the Morrison Formation include the theropods "Allosaurus", "Ceratosaurus", "Ornitholestes", "Saurophaganax", and "Torvosaurus"; the sauropods "Brontosaurus", "Brachiosaurus", "Camarasaurus", and "Diplodocus"; and the ornithischians "Camptosaurus", "Dryosaurus", and "Stegosaurus". "Apatosaurus" is commonly found at the same sites as "Allosaurus", "Camarasaurus", "Diplodocus", and "Stegosaurus". "Allosaurus" accounted for 70–75% of theropod specimens and was at the top trophic level of the Morrison food web. Many of the dinosaurs of the Morrison Formation are of the same genera as those seen in Portuguese rocks of the Lourinhã Formationmainly "Allosaurus", "Ceratosaurus", and "Torvosaurus"or have a close counterpart"Brachiosaurus" and "Lusotitan", "Camptosaurus" and "Draconyx", and "Apatosaurus" and "Dinheirosaurus". Other vertebrates that are known to have shared this paleo-environment include ray-finned fishes, frogs, salamanders, turtles, sphenodonts, lizards, terrestrial and aquatic crocodylomorphans, and several species of pterosaur. Shells of bivalves and aquatic snails are also common. The flora of the period has been evidenced in fossils of green algae, fungi, mosses, horsetails, cycads, ginkgoes, and several families of conifers. Vegetation varied from river-lining forests of tree ferns with fern understory (gallery forests), to fern savannas with occasional trees such as the "Araucaria"-like conifer "Brachyphyllum". Allosaurus Allosaurus () is a genus of carnivorous theropod dinosaur that lived 155 to 150 million years ago during the late Jurassic period (Kimmeridgian to early Tithonian). The name ""Allosaurus"" means "different lizard" alluding to its unique concave vertebrae (at the time of its discovery). It is derived from the Greek /"allos" ("different, other") and /"sauros" ("lizard / generic reptile"). The first fossil remains that could definitively be ascribed to this genus were described in 1877 by paleontologist Othniel Charles Marsh. These remains became known as "Antrodemus". As one of the first well-known theropod dinosaurs, it has long attracted attention outside of paleontological circles. Indeed, it has been a top feature in several films and documentaries about prehistoric life. "Allosaurus" was a large bipedal predator. Its skull was large and equipped with dozens of sharp, serrated teeth. It averaged in length, though fragmentary remains suggest it could have reached over . Relative to the large and powerful hindlimbs, its three-fingered forelimbs were small, and the body was balanced by a long and heavily muscled tail. It is classified as an allosaurid, a type of carnosaurian theropod dinosaur. The genus has a complicated taxonomy, and includes an uncertain number of valid species, the best known of which is "A. fragilis". The bulk of "Allosaurus" remains have come from North America's Morrison Formation, with material also known from Portugal and possibly Tanzania. 
It was known for over half of the 20th century as "Antrodemus", but a study of the copious remains from the Cleveland-Lloyd Dinosaur Quarry brought the name ""Allosaurus"" back to prominence and established it as one of the best-known dinosaurs. As the most abundant large predator in the Morrison Formation, "Allosaurus" was at the top of the food chain, probably preying on contemporaneous large herbivorous dinosaurs, and perhaps even other predators. Potential prey included ornithopods, stegosaurids, and sauropods. Some paleontologists interpret "Allosaurus" as having had cooperative social behavior, and hunting in packs, while others believe individuals may have been aggressive toward each other, and that congregations of this genus are the result of lone individuals feeding on the same carcasses. It may have attacked large prey by ambush, using its upper jaw like a hatchet. "Allosaurus" was a typical large theropod, having a massive skull on a short neck, a long tail and reduced forelimbs. "Allosaurus fragilis", the best-known species, had an average length of , with the largest definitive "Allosaurus" specimen (AMNH 680) estimated at long, and an estimated weight of . In his 1976 monograph on "Allosaurus", James H. Madsen mentioned a range of bone sizes which he interpreted to show a maximum length of . As with dinosaurs in general, weight estimates are debatable, and since 1980 have ranged between , , and for modal adult weight (not maximum). John Foster, a specialist on the Morrison Formation, suggests that is reasonable for large adults of "A. fragilis", but that is a closer estimate for individuals represented by the average-sized thigh bones he has measured. Using the subadult specimen nicknamed "Big Al", researchers using computer modelling arrived at a best estimate of for the individual, but by varying parameters they found a range from approximately to approximately . Several gigantic specimens have been attributed to "Allosaurus", but may in fact belong to other genera. The closely related genus "Saurophaganax" (OMNH 1708) reached perhaps in length, and its single species has sometimes been included in the genus "Allosaurus" as "Allosaurus maximus", though recent studies support it as a separate genus. Another potential specimen of "Allosaurus", once assigned to the genus "Epanterias" (AMNH 5767), may have measured in length. A more recent discovery is a partial skeleton from the Peterson Quarry in Morrison rocks of New Mexico; this large allosaurid may be another individual of "Saurophaganax". The skull and teeth of "Allosaurus" were modestly proportioned for a theropod of its size. Paleontologist Gregory S. Paul gives a length of for a skull belonging to an individual he estimates at long. Each premaxilla (the bones that formed the tip of the snout) held five teeth with D-shaped cross-sections, and each maxilla (the main tooth-bearing bones in the upper jaw) had between 14 and 17 teeth; the number of teeth does not exactly correspond to the size of the bone. Each dentary (the tooth-bearing bone of the lower jaw) had between 14 and 17 teeth, with an average count of 16. The teeth became shorter, narrower, and more curved toward the back of the skull. All of the teeth had saw-like edges. They were shed easily, and were replaced continually, making them common fossils. The skull had a pair of horns above and in front of the eyes. These horns were composed of extensions of the lacrimal bones, and varied in shape and size. 
There were also lower paired ridges running along the top edges of the nasal bones that led into the horns. The horns were probably covered in a keratin sheath and may have had a variety of functions, including acting as sunshades for the eye, being used for display, and being used in combat against other members of the same species (although they were fragile). There was a ridge along the back of the skull roof for muscle attachment, as is also seen in tyrannosaurids. Inside the lacrimal bones were depressions that may have held glands, such as salt glands. Within the maxillae were sinuses that were better developed than those of more basal theropods such as "Ceratosaurus" and "Marshosaurus"; they may have been related to the sense of smell, perhaps holding something like Jacobson's organ. The roof of the braincase was thin, perhaps to improve thermoregulation for the brain. The skull and lower jaws had joints that permitted motion within these units. In the lower jaws, the bones of the front and back halves loosely articulated, permitting the jaws to bow outward and increasing the animal's gape. The braincase and frontals may also have had a joint. "Allosaurus" had nine vertebrae in the neck, 14 in the back, and five in the sacrum supporting the hips. The number of tail vertebrae is unknown and varied with individual size; James Madsen estimated about 50, while Gregory S. Paul considered that to be too many and suggested 45 or less. There were hollow spaces in the neck and anterior back vertebrae. Such spaces, which are also found in modern theropods (that is, the birds), are interpreted as having held air sacs used in respiration. The rib cage was broad, giving it a barrel chest, especially in comparison to less derived theropods like "Ceratosaurus". "Allosaurus" had gastralia (belly ribs), but these are not common findings, and they may have ossified poorly. In one published case, the gastralia show evidence of injury during life. A furcula (wishbone) was also present, but has only been recognized since 1996; in some cases furculae were confused with gastralia. The ilium, the main hip bone, was massive, and the pubic bone had a prominent foot that may have been used for both muscle attachment and as a prop for resting the body on the ground. Madsen noted that in about half of the individuals from the Cleveland-Lloyd Dinosaur Quarry, independent of size, the pubes had not fused to each other at their foot ends. He suggested that this was a sexual characteristic, with females lacking fused bones to make egg-laying easier. This proposal has not attracted further attention, however. The forelimbs of "Allosaurus" were short in comparison to the hindlimbs (only about 35% the length of the hindlimbs in adults) and had three fingers per hand, tipped with large, strongly curved and pointed claws. The arms were powerful, and the forearm was somewhat shorter than the upper arm (1:1.2 ulna/humerus ratio). The wrist had a version of the semilunate carpal also found in more derived theropods like maniraptorans. Of the three fingers, the innermost (or thumb) was the largest, and diverged from the others. The phalangeal formula is 2-3-4-0-0, meaning that the innermost finger (phalange) has two bones, the next has three, and the third finger has four. The legs were not as long or suited for speed as those of tyrannosaurids, and the claws of the toes were less developed and more hoof-like than those of earlier theropods. 
Each foot had three weight-bearing toes and an inner dewclaw, which Madsen suggested could have been used for grasping in juveniles. There was also what is interpreted as the splint-like remnant of a fifth (outermost) metatarsal, perhaps used as a lever between the Achilles tendon and foot. "Allosaurus" was an allosaurid, a member of a family of large theropods within the larger group Carnosauria. The family name Allosauridae was created for this genus in 1878 by Othniel Charles Marsh, but the term was largely unused until the 1970s in favor of Megalosauridae, another family of large theropods that eventually became a wastebasket taxon. This, along with the use of "Antrodemus" for "Allosaurus" during the same period, is a point that needs to be remembered when searching for information on "Allosaurus" in publications that predate James Madsen's 1976 monograph. Major publications using the name "Megalosauridae" instead of "Allosauridae" include Gilmore, 1920, von Huene, 1926, Romer, 1956 and 1966, Steel, 1970, and Walker, 1964. Following the publication of Madsen's influential monograph, Allosauridae became the preferred family assignment, but it too was not strongly defined. Semi-technical works used Allosauridae for a variety of large theropods, usually those that were larger and better-known than megalosaurids. Typical theropods that were thought to be related to "Allosaurus" included "Indosaurus", "Piatnitzkysaurus", "Piveteausaurus", "Yangchuanosaurus", "Acrocanthosaurus", "Chilantaisaurus", "Compsosuchus", "Stokesosaurus", and "Szechuanosaurus". Given modern knowledge of theropod diversity and the advent of cladistic study of evolutionary relationships, none of these theropods is now recognized as an allosaurid, although several, like "Acrocanthosaurus" and "Yangchuanosaurus", are members of closely related families. Below is a cladogram by Benson "et al." in 2010. Allosauridae is one of four families in Carnosauria; the other three are Neovenatoridae, Carcharodontosauridae and Sinraptoridae. Allosauridae has at times been proposed as ancestral to the Tyrannosauridae (which would make it paraphyletic), one recent example being Gregory S. Paul's "Predatory Dinosaurs of the World", but this has been rejected, with tyrannosaurids identified as members of a separate branch of theropods, the Coelurosauria. Allosauridae is the smallest of the carnosaur families, with only "Saurophaganax" and a currently unnamed French allosauroid accepted as possible valid genera besides "Allosaurus" in the most recent review. Another genus, "Epanterias", is a potential valid member, but it and "Saurophaganax" may turn out to be large examples of "Allosaurus". Recent reviews have kept the genus "Saurophaganax" and included "Epanterias" with "Allosaurus". The discovery and early study of "Allosaurus" is complicated by the multiplicity of names coined during the Bone Wars of the late 19th century. The first described fossil in this history was a bone obtained secondhand by Ferdinand Vandiveer Hayden in 1869. It came from Middle Park, near Granby, Colorado, probably from Morrison Formation rocks. The locals had identified such bones as "petrified horse hoofs". Hayden sent his specimen to Joseph Leidy, who identified it as half of a tail vertebra, and tentatively assigned it to the European dinosaur genus "Poekilopleuron" as "Poicilopleuron" "valens". He later decided it deserved its own genus, "Antrodemus". 
"Allosaurus" itself is based on YPM 1930, a small collection of fragmentary bones including parts of three vertebrae, a rib fragment, a tooth, a toe bone, and, most useful for later discussions, the shaft of the right humerus (upper arm). Othniel Charles Marsh gave these remains the formal name "Allosaurus fragilis" in 1877. "Allosaurus" comes from the Greek "allos/αλλος", meaning "strange" or "different" and "sauros/σαυρος", meaning "lizard" or "reptile". It was named 'different lizard' because its vertebrae were different from those of other dinosaurs known at the time of its discovery. The species epithet "fragilis" is Latin for "fragile", referring to lightening features in the vertebrae. The bones were collected from the Morrison Formation of Garden Park, north of Cañon City. Marsh and Edward Drinker Cope, who were in scientific competition, went on to coin several other genera based on similarly sparse material that would later figure in the taxonomy of "Allosaurus". These include Marsh's "Creosaurus" and "Labrosaurus", and Cope's "Epanterias". In their haste, Cope and Marsh did not always follow up on their discoveries (or, more commonly, those made by their subordinates). For example, after the discovery by Benjamin Mudge of the type specimen of "Allosaurus" in Colorado, Marsh elected to concentrate work in Wyoming; when work resumed at Garden Park in 1883, M. P. Felch found an almost complete "Allosaurus" and several partial skeletons. In addition, one of Cope's collectors, H. F. Hubbell, found a specimen in the Como Bluff area of Wyoming in 1879, but apparently did not mention its completeness, and Cope never unpacked it. Upon unpacking in 1903 (several years after Cope had died), it was found to be one of the most complete theropod specimens then known, and in 1908 the skeleton, now cataloged as AMNH 5753, was put on public view. This is the well-known mount poised over a partial "Apatosaurus" skeleton as if scavenging it, illustrated as such by Charles R. Knight. Although notable as the first free-standing mount of a theropod dinosaur, and often illustrated and photographed, it has never been scientifically described. The multiplicity of early names complicated later research, with the situation compounded by the terse descriptions provided by Marsh and Cope. Even at the time, authors such as Samuel Wendell Williston suggested that too many names had been coined. For example, Williston pointed out in 1901 that Marsh had never been able to adequately distinguish "Allosaurus" from "Creosaurus". The most influential early attempt to sort out the convoluted situation was produced by Charles W. Gilmore in 1920. He came to the conclusion that the tail vertebra named "Antrodemus" by Leidy was indistinguishable from those of "Allosaurus", and "Antrodemus" thus should be the preferred name because as the older name it had priority. "Antrodemus" became the accepted name for this familiar genus for over fifty years, until James Madsen published on the Cleveland-Lloyd specimens and concluded that "Allosaurus" should be used because "Antrodemus" was based on material with poor, if any, diagnostic features and locality information (for example, the geological formation that the single bone of "Antrodemus" came from is unknown). ""Antrodemus"" has been used informally for convenience when distinguishing between the skull Gilmore restored and the composite skull restored by Madsen. 
Although sporadic work at what became known as the Cleveland-Lloyd Dinosaur Quarry in Emery County, Utah had taken place as early as 1927, and the fossil site itself was described by William L. Stokes in 1945, major operations did not begin there until 1960. Under a cooperative effort involving nearly 40 institutions, thousands of bones were recovered between 1960 and 1965. The quarry is notable for the predominance of "Allosaurus" remains, the condition of the specimens, and the lack of scientific resolution on how it came to be. The majority of bones belong to the large theropod "Allosaurus fragilis" (it is estimated that the remains of at least 46 "A. fragilis" have been found there, out of at minimum 73 dinosaurs), and the fossils found there are disarticulated and well-mixed. Nearly a dozen scientific papers have been written on the taphonomy of the site, suggesting numerous mutually exclusive explanations for how it may have formed. Suggestions have ranged from animals getting stuck in a bog, to becoming trapped in deep mud, to falling victim to drought-induced mortality around a waterhole, to getting trapped in a spring-fed pond or seep. Regardless of the actual cause, the great quantity of well-preserved "Allosaurus" remains has allowed this genus to be known in detail, making it among the best-known theropods. Skeletal remains from the quarry pertain to individuals of almost all ages and sizes, from less than to long, and the disarticulation is an advantage for describing bones usually found fused. The period since Madsen's monograph has been marked by a great expansion in studies dealing with topics concerning "Allosaurus" in life (paleobiological and paleoecological topics). Such studies have covered topics including skeletal variation, growth, skull construction, hunting methods, the brain, and the possibility of gregarious living and parental care. Reanalysis of old material (particularly of large 'allosaur' specimens), new discoveries in Portugal, and several very complete new specimens have also contributed to the growing knowledge base. In 1991 "Big Al" (MOR 693), a 95% complete, partially articulated specimen of "Allosaurus", was discovered. It measured about 8 meters (about 26 ft) in length. MOR 693 was excavated near Shell, Wyoming, by a joint Museum of the Rockies and University of Wyoming Geological Museum team; the skeleton had been discovered by a Swiss team led by Kirby Siber. In 1996 the same team discovered a second "Allosaurus", "Big Al Two", which is the best preserved skeleton of its kind to date. The completeness, preservation, and scientific importance of this skeleton gave "Big Al" its name; the individual itself was below the average size for "Allosaurus fragilis", and was a subadult estimated at only 87% grown. The specimen was described by Breithaupt in 1996. Nineteen of its bones were broken or showed signs of infection, which may have contributed to "Big Al's" death. Pathologic bones included five ribs, five vertebrae, and four bones of the feet; several damaged bones showed osteomyelitis, a bone infection. A particular problem for the living animal was infection and trauma to the right foot that probably affected movement and may have also predisposed the other foot to injury because of a change in gait. "Big Al" had an infection on the first phalanx of the third toe, marked by an involucrum. The infection was long-lived, perhaps up to 6 months. Big Al Two is also known to have multiple injuries. 
There are currently four valid and one undescribed species of "Allosaurus" ("A. amplus", "A. europaeus", the type species "A. fragilis", the as-yet not formally described "A. jimmadseni", and "A. lucasi"). "A. fragilis", "A. jimmadseni", "A. amplus", and "A. lucasi" are all known from remains discovered in the Kimmeridgian–Tithonian Upper Jurassic-age Morrison Formation of the United States, spread across the states of Colorado, Montana, New Mexico, Oklahoma, South Dakota, Utah, and Wyoming. "A. fragilis" is regarded as the most common, known from the remains of at least sixty individuals. For a while in the late 1980s and early 1990s it was common to recognize "A. fragilis" as the short-snouted species, with the long-snouted taxon being "A. atrox"; however, subsequent analysis of specimens from the Cleveland-Lloyd Dinosaur Quarry, Como Bluff, and Dry Mesa Quarry showed that the differences seen in the Morrison Formation material could be attributed to individual variation. A study of skull elements from the Cleveland-Lloyd site found wide variation between individuals, calling into question previous species-level distinctions based on such features as the shape of the lacrimal horns, and the proposed differentiation of "A. jimmadseni" based on the shape of the jugal. "A. europaeus" was found in the Kimmeridgian-age Porto Novo Member of the Lourinhã Formation, but may be the same as "A. fragilis". "Allosaurus tendagurensis" was found in Kimmeridgian-age rocks of Tendaguru, in Mtwara, Tanzania. Subsequent studies classified it as a non-coelurosaurian tetanuran, either a megalosaurid or carcharodontosaur. Although obscure, it was a large theropod, possibly around long and in weight. "Creosaurus", "Epanterias", and "Labrosaurus" are regarded as junior synonyms of "Allosaurus". Most of the species that are regarded as synonyms of "A. fragilis", or that were misassigned to the genus, are obscure and were based on scrappy remains. One exception is "Labrosaurus ferox", named in 1884 by Marsh for an oddly formed partial lower jaw, with a prominent gap in the tooth row at the tip of the jaw, and a rear section greatly expanded and turned down. Later researchers suggested that the bone was pathologic, showing an injury to the living animal, and that part of the unusual form of the rear of the bone was due to plaster reconstruction. It is now regarded as an example of "A. fragilis." Other remains formerly thought to pertain to "Allosaurus" were described from Australia and Siberia, but these fossils have been reassessed as belonging to other dinosaurs. The issue of synonyms is complicated by the type specimen of "Allosaurus fragilis" (catalog number YPM 1930) being extremely fragmentary, consisting of a few incomplete vertebrae, limb bone fragments, rib fragments, and a tooth. Because of this, several scientists have interpreted the type specimen as potentially dubious, and thus the genus "Allosaurus" itself or at least the species "A. fragilis" would be a "nomen dubium" ("dubious name", based on a specimen too incomplete to compare to other specimens or to classify). To address this situation, Gregory S. Paul and Kenneth Carpenter (2010) submitted a petition to the ICZN to have the name ""A. fragilis"" officially transferred to the more complete specimen USNM 4734 (as a neotype). This request is currently pending review. The wealth of "Allosaurus" fossils, from nearly all ages of individuals, allows scientists to study how the animal grew and how long its lifespan may have been. 
Remains may reach as far back in the lifespan as eggs: crushed eggs from Colorado have been suggested as those of "Allosaurus". Based on histological analysis of limb bones, bone deposition appears to stop at around 22 to 28 years, which is comparable to that of other large theropods like "Tyrannosaurus". From the same analysis, its maximum growth appears to have been at age 15, with an estimated growth rate of about 150 kilograms (330 lb) per year. Medullary bone tissue (endosteally derived, ephemeral mineralization located inside the medulla of the long bones in gravid female birds) has been reported in at least one "Allosaurus" specimen, a shin bone from the Cleveland-Lloyd Quarry. Today, this bone tissue is only formed in female birds that are laying eggs, as it is used to supply calcium to shells. Its presence in the "Allosaurus" individual has been used to establish sex and show it had reached reproductive age. However, other studies have called into question some cases of medullary bone in dinosaurs, including this "Allosaurus" individual. Data from extant birds suggested that the medullary bone in this "Allosaurus" individual may have been the result of a bone pathology instead. However, with the confirmation of medullary tissue indicating gender in a specimen of "Tyrannosaurus", it may be possible to ascertain whether or not the "Allosaurus" in question was indeed female. The discovery of a juvenile specimen with a nearly complete hindlimb shows that the legs were relatively longer in juveniles, and the lower segments of the leg (shin and foot) were relatively longer than the thigh. These differences suggest that younger "Allosaurus" were faster and had different hunting strategies than adults, perhaps chasing small prey as juveniles, then becoming ambush hunters of large prey upon adulthood. The thigh bone became thicker and wider during growth, and the cross-section less circular, as muscle attachments shifted, muscles became shorter, and the growth of the leg slowed. These changes imply that juvenile legs had less predictable stresses compared with those of adults, which would have moved with more regular forward progression. Conversely, the skull bones appear to have generally grown isometrically, increasing in size without changing in proportion. Paleontologists accept "Allosaurus" as an active predator of large animals. There is dramatic evidence for allosaur attacks on "Stegosaurus", including an "Allosaurus" tail vertebra with a partially healed puncture wound that fits a "Stegosaurus" tail spike, and a "Stegosaurus" neck plate with a U-shaped wound that correlates well with an "Allosaurus" snout. Sauropods seem to be likely candidates as both live prey and as objects of scavenging, based on the presence of scrapings on sauropod bones fitting allosaur teeth well and the presence of shed allosaur teeth with sauropod bones. However, as Gregory Paul noted in 1988, "Allosaurus" was probably not a predator of fully grown sauropods, unless it hunted in packs, as it had a modestly sized skull and relatively small teeth, and was greatly outweighed by contemporaneous sauropods. Another possibility is that it preferred to hunt juveniles instead of fully grown adults. Research in the 1990s and first decade of the 21st century may have found other solutions to this question. Robert T. 
Bakker, comparing "Allosaurus" to Cenozoic sabre-toothed carnivorous mammals, found similar adaptations, such as a reduction of jaw muscles and increase in neck muscles, and the ability to open the jaws extremely wide. Although "Allosaurus" did not have sabre teeth, Bakker suggested another mode of attack that would have used such neck and jaw adaptations: the short teeth in effect became small serrations on a saw-like cutting edge running the length of the upper jaw, which would have been driven into prey. This type of jaw would permit slashing attacks against much larger prey, with the goal of weakening the victim. Similar conclusions were drawn by another study using finite element analysis on an "Allosaurus" skull. According to their biomechanical analysis, the skull was very strong but had a relatively small bite force. By using jaw muscles only, it could produce a bite force of 805 to 2,148 N, less than the values for alligators (13,000 N), lions (4,167 N), and leopards (2,268 N), but the skull could withstand nearly 55,500 N of vertical force against the tooth row. The authors suggested that "Allosaurus" used its skull like a hatchet against prey, attacking open-mouthed, slashing flesh with its teeth, and tearing it away without splintering bones, unlike "Tyrannosaurus", which is thought to have been capable of damaging bones. They also suggested that the architecture of the skull could have permitted the use of different strategies against different prey; the skull was light enough to allow attacks on smaller and more agile ornithopods, but strong enough for high-impact ambush attacks against larger prey like stegosaurids and sauropods. Their interpretations were challenged by other researchers, who found no modern analogues to a hatchet attack and considered it more likely that the skull was strong to compensate for its open construction when absorbing the stresses from struggling prey. The original authors noted that "Allosaurus" itself has no modern equivalent, that the tooth row is well-suited to such an attack, and that articulations in the skull cited by their detractors as problematic actually helped protect the palate and lessen stress. Another possibility for handling large prey is that theropods like "Allosaurus" were "flesh grazers" which could take bites of flesh out of living sauropods that were sufficient to sustain the predator so it would not have needed to expend the effort to kill the prey outright. This strategy would also potentially have allowed the prey to recover and be fed upon in a similar way later. An additional suggestion notes that ornithopods were the most common available dinosaurian prey, and that allosaurs may have subdued them by using an attack similar to that of modern big cats: grasping the prey with their forelimbs, and then making multiple bites on the throat to crush the trachea. This is compatible with other evidence that the forelimbs were strong and capable of restraining prey. Studies done by Stephan Lautenschlager "et al." from the University of Bristol also indicate "Allosaurus" could open its jaws quite wide and sustain considerable muscle force. When compared with "Tyrannosaurus" and the therizinosaurid "Erlikosaurus" in the same study, it was found that "Allosaurus" had a wider gape than either; the animal was capable of opening its jaws to a 92-degree angle at maximum. The findings also indicate that large carnivorous dinosaurs, like modern carnivores, had wider jaw gapes than herbivores. 
A biomechanical study published in 2013 by Eric Snively and colleagues found that "Allosaurus" had an unusually low attachment point on the skull for the longissimus capitis superficialis neck muscle compared to other theropods such as "Tyrannosaurus". This would have allowed the animal to make rapid and forceful vertical movements with the skull. The authors found that vertical strikes as proposed by Bakker and Rayfield are consistent with the animal's capabilities. They also found that the animal probably processed carcasses by vertical movements in a similar manner to falcons, such as kestrels: the animal could have gripped prey with the skull and feet, then pulled back and up to remove flesh. This differs from the prey-handling envisioned for tyrannosaurids, which probably tore flesh with lateral shakes of the skull, similar to crocodilians. In addition, "Allosaurus" was able to "move its head and neck around relatively rapidly and with considerable control", at the cost of power. Other aspects of feeding include the eyes, arms, and legs. The shape of the skull of "Allosaurus" limited potential binocular vision to 20° of width, slightly less than that of modern crocodilians. As with crocodilians, this may have been enough to judge prey distance and time attacks. The arms, compared with those of other theropods, were suited for both grasping prey at a distance or clutching it close, and the articulation of the claws suggests that they could have been used to hook things. Finally, the top speed of "Allosaurus" has been estimated at 30 to 55 kilometers per hour (19 to 34 miles per hour). A new paper on the cranio-dental morphology of "Allosaurus" and how it worked has deemed the hatchet jaw attack unlikely, reinterpreting the unusually wide gape as an adaptation to allow Allosaurus to deliver a muscle-driven bite to large prey, with the weaker jaw muscles being a trade-off to allow for the widened gape. It has been speculated since the 1970s that "Allosaurus" preyed on sauropods and other large dinosaurs by hunting in groups. Such a depiction is common in semitechnical and popular dinosaur literature. Robert T. Bakker has extended social behavior to parental care, and has interpreted shed allosaur teeth and chewed bones of large prey animals as evidence that adult allosaurs brought food to lairs for their young to eat until they were grown, and prevented other carnivores from scavenging on the food. However, there is actually little evidence of gregarious behavior in theropods, and social interactions with members of the same species would have included antagonistic encounters, as shown by injuries to gastralia and bite wounds to skulls (the pathologic lower jaw named "Labrosaurus ferox" is one such possible example). Such head-biting may have been a way to establish dominance in a pack or to settle territorial disputes. Although "Allosaurus" may have hunted in packs, it has been argued that "Allosaurus" and other theropods had largely aggressive interactions instead of cooperative interactions with other members of their own species. The study in question noted that cooperative hunting of prey much larger than an individual predator, as is commonly inferred for theropod dinosaurs, is rare among vertebrates in general, and modern diapsid carnivores (including lizards, crocodiles, and birds) rarely cooperate to hunt in such a way. 
Instead, they are typically territorial and will kill and cannibalize intruders of the same species, and will also do the same to smaller individuals that attempt to eat before they do when aggregated at feeding sites. According to this interpretation, the accumulation of remains of multiple "Allosaurus" individuals at the same site, e.g. in the Cleveland–Lloyd Quarry, is not due to pack hunting, but to the fact that "Allosaurus" individuals were drawn together to feed on other disabled or dead allosaurs, and were sometimes killed in the process. This could explain the high proportion of juvenile and subadult allosaurs present, as juveniles and subadults are disproportionately killed at modern group feeding sites of animals like crocodiles and Komodo dragons. The same interpretation applies to Bakker's lair sites. There is some evidence for cannibalism in "Allosaurus", including "Allosaurus" shed teeth found among rib fragments, possible tooth marks on a shoulder blade, and cannibalized allosaur skeletons among the bones at Bakker's lair sites. The brain of "Allosaurus", as interpreted from spiral CT scanning of an endocast, was more consistent with crocodilian brains than those of the other living archosaurs, birds. The structure of the vestibular apparatus indicates that the skull was held nearly horizontal, as opposed to strongly tipped up or down. The structure of the inner ear was like that of a crocodilian, and so "Allosaurus" probably could have heard lower frequencies best, and would have had trouble with subtle sounds. The olfactory bulbs were large and seem to have been well suited for detecting odors, although the area for evaluating smells was relatively small. In 2001, Bruce Rothschild and others published a study examining evidence for stress fractures and tendon avulsions in theropod dinosaurs and the implications for their behavior. Since stress fractures are caused by repeated trauma rather than singular events, they are more likely to be caused by the behavior of the animal than other kinds of injury. Stress fractures and tendon avulsions occurring in the forelimb have special behavioral significance since, while injuries to the feet could be caused by running or migration, resistant prey items are the most probable source of injuries to the hand. "Allosaurus" was one of only two theropods examined in the study to exhibit a tendon avulsion, and in both cases the avulsion occurred on the forelimb. When the researchers looked for stress fractures, they found that "Allosaurus" had a significantly greater number of stress fractures than "Albertosaurus", "Ornithomimus" or "Archaeornithomimus". Of the 47 hand bones the researchers studied, 3 were found to contain stress fractures. Of the feet, 281 bones were studied and 17 found to have stress fractures. The stress fractures in the foot bones "were distributed to the proximal phalanges" and occurred across all three weight-bearing toes in "statistically indistinguishable" numbers. Since the lower end of the third metatarsal would have contacted the ground first while an allosaur was running, it would have borne the most stress. If the allosaurs' stress fractures were caused by damage accumulating while walking or running, this bone should have experienced more stress fractures than the others. The lack of such a bias in the examined "Allosaurus" fossils indicates an origin for the stress fractures from a source other than running. 
The authors conclude that these fractures occurred during interaction with prey, like an allosaur trying to hold struggling prey with its feet. The abundance of stress fractures and avulsion injuries in "Allosaurus" provides evidence for a "very active" predation-based diet rather than a scavenging one. The left scapula and fibula of an "Allosaurus fragilis" specimen catalogued as USNM 4734 are both pathological, both probably due to healed fractures. The specimen USNM 8367 preserved several pathological gastralia which preserve evidence of healed fractures near their middle. Some of the fractures were poorly healed and "formed pseudoarthroses." A specimen with a fractured rib was recovered from the Cleveland-Lloyd Quarry. Another specimen had fractured ribs and fused vertebrae near the end of the tail. An apparent subadult male "Allosaurus fragilis" was reported to have extensive pathologies, with a total of fourteen separate injuries. The specimen MOR 693 had pathologies on five ribs, the sixth neck vertebra, the third, eighth, and thirteenth back vertebrae, the second tail vertebra and its chevron, the gastralia, the right scapula, manual phalanx I, the left ilium, metatarsals III and V, the first phalanx of the third toe, and the third phalanx of the second. The ilium had "a large hole... caused by a blow from above". The near end of the first phalanx of the third toe was afflicted by an involucrum. Other pathologies reported in "Allosaurus" include: willow breaks in two ribs; healed fractures in the humerus and radius; distortion of joint surfaces in the foot, possibly due to osteoarthritis or developmental issues; osteopetrosis along the endosteal surface of a tibia; distortions of the joint surfaces of the tail vertebrae, possibly due to osteoarthritis or developmental issues; "[e]xtensive 'neoplastic' ankylosis of caudals", possibly due to physical trauma, as well as the fusion of chevrons to centra; coossification of vertebral centra near the end of the tail; amputation of a chevron and foot bone, both possibly a result of bites; "[e]xtensive exostoses" in the first phalanx of the third toe; lesions similar to those caused by osteomyelitis in two scapulae; bone spurs in a premaxilla, an ungual, and two metacarpals; exostosis in a pedal phalanx possibly attributable to an infectious disease; and a metacarpal with a round depressed fracture. "Allosaurus" was the most common large theropod in the vast tract of Western American fossil-bearing rock known as the Morrison Formation, accounting for 70 to 75% of theropod specimens, and as such was at the top trophic level of the Morrison food web. The Morrison Formation is interpreted as a semiarid environment with distinct wet and dry seasons, and flat floodplains. Vegetation varied from river-lining forests of conifers, tree ferns, and ferns (gallery forests), to fern savannas with occasional trees such as the "Araucaria"-like conifer "Brachyphyllum". The Morrison Formation has been a rich fossil hunting ground. The flora of the period has been revealed by fossils of green algae, fungi, mosses, horsetails, ferns, cycads, ginkgoes, and several families of conifers. Animal fossils discovered include bivalves, snails, ray-finned fishes, frogs, salamanders, turtles, sphenodonts, lizards, terrestrial and aquatic crocodylomorphs, several species of pterosaur, numerous dinosaur species, and early mammals such as docodonts, multituberculates, symmetrodonts, and triconodonts. 
Dinosaurs known from the Morrison include the theropods "Ceratosaurus", "Ornitholestes", "Tanycolagreus", and "Torvosaurus", the sauropods "Haplocanthosaurus", "Camarasaurus", "Cathetosaurus", "Brachiosaurus", "Suuwassea", "Apatosaurus", "Brontosaurus", "Barosaurus", "Diplodocus", "Supersaurus", and "Amphicoelias", and the ornithischians "Camptosaurus", "Dryosaurus", and "Stegosaurus". "Allosaurus" is commonly found at the same sites as "Apatosaurus", "Camarasaurus", "Diplodocus", and "Stegosaurus". The Late Jurassic formations of Portugal where "Allosaurus" is present are interpreted as having been similar to the Morrison but with a stronger marine influence. Many of the dinosaurs of the Morrison Formation are the same genera as those seen in Portuguese rocks (mainly "Allosaurus", "Ceratosaurus", "Torvosaurus", and "Stegosaurus"), or have a close counterpart ("Brachiosaurus" and "Lusotitan", "Camptosaurus" and "Draconyx"). "Allosaurus" coexisted with fellow large theropods "Ceratosaurus" and "Torvosaurus" in both the United States and Portugal. The three appear to have had different ecological niches, based on anatomy and the location of fossils. Ceratosaurs and torvosaurs may have preferred to be active around waterways, and had lower, thinner bodies that would have given them an advantage in forest and underbrush terrains, whereas allosaurs were more compact, with longer legs, faster but less maneuverable, and seem to have preferred dry floodplains. "Ceratosaurus", better known than "Torvosaurus", differed noticeably from "Allosaurus" in functional anatomy by having a taller, narrower skull with large, broad teeth. "Allosaurus" was itself a potential food item for other carnivores, as illustrated by an "Allosaurus" pubic foot marked by the teeth of another theropod, probably "Ceratosaurus" or "Torvosaurus". The location of the bone in the body (along the bottom margin of the torso and partially shielded by the legs), and the fact that it was among the most massive in the skeleton, indicate that the "Allosaurus" was being scavenged. Along with "Tyrannosaurus", "Allosaurus" has come to represent the quintessential large, carnivorous dinosaur in western popular culture. It is a common dinosaur in American museums, due in particular to the excavations at the Cleveland-Lloyd Dinosaur Quarry; by 1976, as a result of cooperative operations, 38 museums in eight countries on three continents had Cleveland-Lloyd allosaur material or casts. "Allosaurus" is the official state fossil of Utah alongside "Utahraptor". "Allosaurus" has been depicted in popular culture since the early years of the 20th century. It is the top predator in both Arthur Conan Doyle's 1912 novel, "The Lost World", and its 1925 film adaptation, the first full-length motion picture to feature dinosaurs. "Allosaurus" was used as the starring dinosaur of the 1956 film "The Beast of Hollow Mountain", and the 1969 film "The Valley of Gwangi", two genre combinations of living dinosaurs with Westerns. In "The Valley of Gwangi", Gwangi is billed as an "Allosaurus", although Ray Harryhausen based his model for the creature on Charles R. Knight's depiction of a "Tyrannosaurus". Harryhausen sometimes confuses the two, stating in a DVD interview "They're both meat eaters, they're both tyrants... one was just a bit larger than the other." 
In "Dinosaucers", an animated television series of 1987, Allo is an evolved Allosaurus and the leader of the Dinosaucers, a group composed of intelligent anthropomorphic dinosaurs or other prehistoric saurian species from the planet Reptilon. The 1998 animated film "" features an "Allosaurus" as the main antagonist, bent on making a meal out of the main characters. It also appears in a flashback in the 2007 TV series. "Allosaurus" is featured in "" as among the dinosaurs on Isla Nublar trying to escape a volcano eruption, and eventually transferred to the mainland. "Allosaurus" appeared in the second and fifth episode of the 1999 BBC television series "Walking with Dinosaurs" and the follow-up special "The Ballad of Big Al", which speculated on the life of the "Big Al" specimen, based on scientific evidence from the numerous injuries and pathologies in its skeleton, and it is the only dinosaur to feature in multiple episodes of the documentary series. "Allosaurus" also made an appearance in the Discovery Channel series "Dinosaur Revolution". Its depiction in this series was based upon a specimen with a smashed lower jaw that was uncovered by paleontologist Thomas Holtz. "Allosaurus" appears in the 2011 BBC docu-mini-series "Planet Dinosaur" in the episode "Fight for Life". Albertosaurus Albertosaurus (; meaning "Alberta lizard") is a genus of tyrannosaurid theropod dinosaurs that lived in western North America during the Late Cretaceous Period, about 70 million years ago. The type species, "A. sarcophagus", was apparently restricted in range to the modern-day Canadian province of Alberta, after which the genus is named. Scientists disagree on the content of the genus, with some recognizing "Gorgosaurus libratus" as a second species. As a tyrannosaurid, "Albertosaurus" was a bipedal predator with tiny, two-fingered hands and a massive head that had dozens of large, sharp teeth. It may have been at the top of the food chain in its local ecosystem. While "Albertosaurus" was large for a theropod, it was much smaller than its larger and more famous relative "Tyrannosaurus rex", growing nine to ten meters long and weighing less than possibly 2 metric tons. Since the first discovery in 1884, fossils of more than 30 individuals have been recovered, providing scientists with a more detailed knowledge of "Albertosaurus" anatomy than is available for most other tyrannosaurids. The discovery of 26 individuals at one site provides evidence of pack behaviour and allows studies of ontogeny and population biology, which are impossible with lesser-known dinosaurs. "Albertosaurus" was smaller than some other tyrannosaurids, such as "Tarbosaurus" and "Tyrannosaurus". Typical "Albertosaurus" adults measured up to long, while rare individuals of great age could grow to be over long. Several independent mass estimates, obtained by different methods, suggest that an adult "Albertosaurus" weighed between 1.3 tonnes and 1.7 tonnes (1.9 tons). "Albertosaurus" shared a similar body appearance with all other tyrannosaurids. Typically for a theropod, "Albertosaurus" was bipedal and balanced the heavy head and torso with a long tail. However, tyrannosaurid forelimbs were extremely small for their body size and retained only two digits. The hind limbs were long and ended in a four-toed foot on which the first digit, called the hallux, was short and did not reach the ground. The third digit was longer than the rest. "Albertosaurus" may have been able to reach walking speeds of 14−21 km/hour (8−13 mi/hour). 
At least for the younger individuals, a high running speed is plausible. Two skin impressions from "Albertosaurus" are known, both showing scales. One patch is found with some gastralia and the impression of a long, unknown bone, indicating that the patch is from the belly. The scales are pebbly and gradually become larger and somewhat hexagonal in shape. Also preserved are two larger feature scales, placed 4.5 cm apart from each other. Another skin impression is from an unknown part of the body. These scales are small, diamond-shaped and arranged in rows. The massive skull of "Albertosaurus", which was perched on a short, S-shaped neck, was about long in the largest adults. Wide openings in the skull (fenestrae) reduced the weight of the head while also providing space for muscle attachment and sensory organs. Its long jaws contained, both sides combined, 58 or more banana-shaped teeth; larger tyrannosaurids possessed fewer teeth, while "Gorgosaurus" had at least 62. Unlike most theropods, "Albertosaurus" and other tyrannosaurids were heterodont, with teeth of different forms depending on their position in the mouth. The premaxillary teeth at the tip of the upper jaw, four per side, were much smaller than the rest, more closely packed, and D-shaped in cross section. As with "Tyrannosaurus", the maxillary (cheek) teeth of "Albertosaurus" were adapted in general form to resist lateral forces exerted by struggling prey. The bite force of "Albertosaurus" was less formidable, however, with the maximum force, by the hind teeth, reaching 3,413 newtons. Above the eyes were short bony crests that may have been brightly coloured in life and used in courtship to attract a mate. William Abler observed in 2001 that "Albertosaurus" tooth serrations resemble a crack in the tooth ending in a round void called an ampulla. Tyrannosaurid teeth were used as holdfasts for pulling flesh off a body, so when a tyrannosaur pulled back on a piece of meat, the tension could cause a purely crack-like serration to spread through the tooth. However, the presence of the ampulla distributed these forces over a larger surface area, and lessened the risk of damage to the tooth under strain. The presence of incisions ending in voids has parallels in human engineering. Guitar makers use incisions ending in voids to, as Abler describes, "impart alternating regions of flexibility and rigidity" to the wood they work. The use of a drill to create an "ampulla" of sorts and prevent the propagation of cracks through material is also used to protect aircraft surfaces. Abler demonstrated that a plexiglass bar with incisions called "kerfs" and drilled holes was more than 25% stronger than one with only regularly placed incisions. Unlike tyrannosaurs, ancient predators like phytosaurs and "Dimetrodon" had no adaptations to prevent the crack-like serrations of their teeth from spreading when subjected to the forces of feeding; the fracture-mechanics reasoning behind this effect is sketched after this paragraph. "Albertosaurus" was named by Henry Fairfield Osborn in a one-page note at the end of his 1905 description of "Tyrannosaurus rex". The name honours Alberta, the Canadian province established the same year, in which the first remains were found. The generic name also incorporates the Greek term "σαυρος"/"sauros" ("lizard"), the most common suffix in dinosaur names. 
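The crack-blunting idea behind Abler's ampulla and kerf observations can be made explicit with a standard result from fracture mechanics, the Inglis relation; this equation is an editorial illustration and is not taken from the studies cited above. For a narrow notch of half-length a with tip radius of curvature ρ in a plate under a remote tensile stress σ, the peak stress at the notch tip is approximately:

```latex
% Inglis stress concentration at the tip of an elliptical notch
% a    : half-length of the notch
% \rho : radius of curvature at the notch tip
\sigma_{\max} \approx \sigma \left( 1 + 2\sqrt{\frac{a}{\rho}} \right)
```

A serration ending in a rounded void has a far larger tip radius ρ than a sharp crack, so the peak stress for the same load is much lower; drilling a hole at the end of a kerf raises ρ in the same way, which is consistent with Abler's finding that the plexiglass bar with drilled holes was the stronger one.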
The type species is "Albertosaurus sarcophagus"; the specific name is derived from Ancient Greek σαρκοφάγος ("sarkophagos") meaning "flesh-eating" and having the same etymology as the funeral container with which it shares its name: a combination of the Greek words σαρξ/"sarx" ("flesh") and φαγειν/"phagein" ("to eat"). More than 30 specimens of all ages are known to science. The type specimen is a partial skull, collected in the summer of 1884 from an outcrop of the Horseshoe Canyon Formation alongside the Red Deer River, in Alberta. This specimen, found on June 9, 1884, was recovered by an expedition of the Geological Survey of Canada, led by the famous geologist Joseph Burr Tyrrell. Due to a lack of specialised equipment the almost complete skull could only be partially secured. In 1889, Tyrrell's colleague Thomas Chesmer Weston found an incomplete smaller skull associated with some skeletal material at a location nearby. The two skulls were assigned to the preexisting species "Laelaps incrassatus" by Edward Drinker Cope in 1892, although the name "Laelaps" was preoccupied by a genus of mite and had been changed to "Dryptosaurus" in 1877 by Othniel Charles Marsh. Cope refused to recognize the new name created by his archrival Marsh. However, Lawrence Lambe used the name "Dryptosaurus incrassatus" instead of "Laelaps incrassatus" when he described the remains in detail in 1903 and 1904, a combination first coined by Oliver Perry Hay in 1902. Shortly later, Osborn pointed out that "D. incrassatus" was based on generic tyrannosaurid teeth, so the two Horseshoe Canyon skulls could not be confidently referred to that species. The Horseshoe Canyon skulls also differed markedly from the remains of "D. aquilunguis", type species of "Dryptosaurus", so Osborn created the new name "Albertosaurus sarcophagus" for them in 1905. He did not describe the remains in any great detail, citing Lambe's complete description the year before. Both specimens (the holotype CMN 5600 and the paratype CMN 5601) are stored in the Canadian Museum of Nature in Ottawa. By the early twenty-first century, some concerns had arisen that, due to the damaged state of the holotype, "Albertosaurus" might be a "nomen dubium", a "dubious name" that could only be used for the type specimen itself because other fossils could not reliably be assigned to it. However, in 2010, Thomas Carr established that the holotype, the paratype and comparable later finds all shared a single common unique trait or autapomorphy: the possession of an enlarged pneumatic opening in the back rim of the side of the palatine bone, proving that "Albertosaurus" was a valid taxon. On 11 August 1910, American paleontologist Barnum Brown discovered the remains of a large group of "Albertosaurus" at another quarry alongside the Red Deer River. Because of the large number of bones and the limited time available, Brown's party did not collect every specimen, but made sure to collect remains from all of the individuals that they could identify in the bonebed. Among the bones deposited in the American Museum of Natural History collections in New York City are seven sets of right metatarsals, along with two isolated toe bones that did not match any of the metatarsals in size. This indicated the presence of at least nine individuals in the quarry. The Royal Tyrrell Museum of Palaeontology rediscovered the bonebed in 1997 and resumed fieldwork at the site, which is now located inside Dry Island Buffalo Jump Provincial Park. 
Further excavation from 1997 to 2005 turned up the remains of 13 more individuals of various ages, including a diminutive two-year-old and a very old individual estimated at over in length. None of these individuals are known from complete skeletons, and most are represented by remains in both museums. Excavations continued until 2008, when the minimum number of individuals present had been established at 12, on the basis of preserved elements that occur only once in a skeleton, and at 26 if mirrored elements were counted when differing in size due to ontogeny. A total of 1,128 "Albertosaurus" bones had been secured, the largest concentration of large theropod fossils known from the Cretaceous. In 1911, Barnum Brown, during the second year of American Museum of Natural History operations in Alberta, uncovered a fragmentary partial "Albertosaurus" skull at the Red Deer River near Tolman Bridge, specimen AMNH 5222. William Parks described a new species in 1928, "Albertosaurus arctunguis", based on a partial skeleton lacking the skull excavated by Gus Lindblad and Ralph Hornell near the Red Deer River in 1923, but this species has been considered identical to "A. sarcophagus" since 1970. Parks' specimen (ROM 807) is housed in the Royal Ontario Museum in Toronto. Between 1926 and 1972, no "Albertosaurus" fossils were found at all; but, since the seventies, there has been a steady increase in the known material. Apart from the Dry Island bonebed, six more skulls and skeletons have since been discovered in Alberta and are housed in various Canadian museums: specimens RTMP 81.010.001, found in 1978 by amateur paleontologist Maurice Stefanuk; RTMP 85.098.001, found by Stefanuk on 16 June 1985; RTMP 86.64.001 (December 1985); RTMP 86.205.001 (1986); RTMP 97.058.0001 (1996); and CMN 11315. However, due to vandalism and accidents, no undamaged and complete skulls could be secured among these finds. Fossils have also been reported from the American states of Montana, New Mexico, and Wyoming, but these probably do not represent "A. sarcophagus" and may not even belong to the genus "Albertosaurus". In 1913, paleontologist Charles H. Sternberg recovered another tyrannosaurid skeleton from the slightly older Dinosaur Park Formation in Alberta. Lawrence Lambe named this dinosaur "Gorgosaurus libratus" in 1914. Other specimens were later found in Alberta and the US state of Montana. Finding, largely due to a lack of good "Albertosaurus" skull material, no significant differences to separate the two taxa, Dale Russell declared the name "Gorgosaurus" a junior synonym of "Albertosaurus", which had been named first, and "G. libratus" was renamed "Albertosaurus libratus" in 1970. A species distinction was maintained because of the age difference. This addition extended the temporal range of the genus "Albertosaurus" backwards by several million years and its geographic range southwards by hundreds of kilometres. In 2003, Philip J. Currie, benefiting from much more extensive finds and a general increase in anatomical knowledge of theropods, compared several tyrannosaurid skulls and came to the conclusion that the two species are more distinct than previously thought. The decision to use one or two genera is rather arbitrary, as the two species are sister taxa, more closely related to each other than to any other species. 
Recognizing this, Currie nevertheless recommended that "Albertosaurus" and "Gorgosaurus" be retained as separate genera, as he concluded that they were no more similar than "Daspletosaurus" and "Tyrannosaurus", which are almost always separated. In addition, several albertosaurine specimens have been recovered from Alaska and New Mexico, and Currie suggested that the "Albertosaurus"-"Gorgosaurus" situation may be clarified once these are described fully. Most authors have followed Currie's recommendation, but some have not. Apart from "A. sarcophagus", "A. arctunguis" and "A. libratus", several other species of "Albertosaurus" have been named. All of these are today seen as younger synonyms of other species or as "nomina dubia", and are not assigned to "Albertosaurus". In 1930, Anatoly Nikolaevich Riabinin named "Albertosaurus pericolosus" based on a tooth from China, that probably belonged to "Tarbosaurus". In 1932, Friedrich von Huene renamed "Dryptosaurus incrassatus", not considered a "nomen dubium" by him, to "Albertosaurus incrassatus". Because he had identified "Gorgosaurus" with "Albertosaurus", in 1970, Russell also renamed "Gorgosaurus sternbergi" (Matthew & Brown 1922) into "Albertosaurus sternbergi" and "Gorgosaurus lancensis" (Gilmore 1946) into "Albertosaurus lancensis". The former species is today seen as a juvenile form of "Gorgosaurus libratus", the latter as either identical to "Tyrannosaurus" or representing a separate genus "Nanotyrannus". In 1988, Gregory S. Paul based "Albertosaurus megagracilis" on a small tyrannosaurid skeleton, specimen LACM 28345, from the Hell Creek Formation of Montana. It was renamed "Dinotyrannus" in 1995, but is now thought to represent a juvenile "Tyrannosaurus rex". Also in 1988, Paul renamed "Alectrosaurus olseni" (Gilmore 1933) into "Albertosaurus olseni"; this has found no general acceptance. In 1989, "Gorgosaurus novojilovi" (Maleev 1955) was renamed by Bryn Mader and Robert Bradley as "Albertosaurus novojilovi"; today this is seen as a synonym of "Tarbosaurus". On two occasions, species based on valid "Albertosaurus" material were reassigned to a different genus: in 1922 William Diller Matthew renamed "A. sarcophagus" into "Deinodon sarcophagus" and in 1939 German paleontologist Oskar Kuhn renamed "A. arctunguis" into "Deinodon arctunguis". "Albertosaurus" is a member of the theropod family Tyrannosauridae, in the subfamily Albertosaurinae. Its closest relative is the slightly older "Gorgosaurus libratus" (sometimes called "Albertosaurus libratus"; see below). These two species are the only described albertosaurines; other undescribed species may exist. Thomas Holtz found "Appalachiosaurus" to be an albertosaurine in 2004, but his more recent unpublished work locates it just outside Tyrannosauridae, in agreement with other authors. The other major subfamily of tyrannosaurids is the Tyrannosaurinae, which includes "Daspletosaurus", "Tarbosaurus" and "Tyrannosaurus". Compared with these robust tyrannosaurines, albertosaurines had slender builds, with proportionately smaller skulls and longer bones of the lower leg (tibia) and feet (metatarsals and phalanges). Below is the cladogram of the Tyrannosauridae based on the phylogenetic analysis conducted by Loewen "et al." in 2013. Most age categories of "Albertosaurus" are represented in the fossil record. Using bone histology, the age of an individual animal at the time of death can often be determined, allowing growth rates to be estimated and compared with other species. 
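As a rough illustration of how an age-mass growth curve of this kind can be fitted (a minimal sketch using invented numbers; the ages, masses, function, and fitting routine here are illustrative assumptions, not the published "Albertosaurus" data or method):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical (age, body mass) pairs chosen only to illustrate the fitting step.
ages = np.array([2.0, 6.0, 10.0, 14.0, 18.0, 22.0, 26.0, 28.0])                # years
masses = np.array([50.0, 140.0, 330.0, 650.0, 970.0, 1160.0, 1250.0, 1270.0])  # kg

def logistic(t, adult_mass, rate, t_mid):
    """S-shaped growth curve: asymptotic adult mass, steepness, and age of fastest growth."""
    return adult_mass / (1.0 + np.exp(-rate * (t - t_mid)))

params, _ = curve_fit(logistic, ages, masses, p0=[1300.0, 0.3, 14.0])
adult_mass, rate, t_mid = params
print(f"fitted adult mass ~{adult_mass:.0f} kg, fastest growth around age {t_mid:.1f}")
```

A logistic function is one common choice for the S-shaped curves reported for tyrannosaurids; the actual studies fitted sigmoidal growth models to specimens whose ages had been determined histologically, as described below.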
The youngest known "Albertosaurus" is a two-year-old discovered in the Dry Island bonebed, which would have weighed about 50 kilograms (110 lb) and measured slightly more than in length. The oldest and largest known specimen, also from the same quarry, was 28 years of age. When specimens of intermediate age and size are plotted on a graph, an "S"-shaped growth curve results, with the most rapid growth occurring in a four-year period ending around the sixteenth year of life, a pattern also seen in other tyrannosaurids. The growth rate during this phase was per year, based on an adult mass of 1.3 tonnes. Other studies have suggested higher adult weights; this would affect the magnitude of the growth rate, but not the overall pattern. Tyrannosaurids similar in size to "Albertosaurus" had similar growth rates, although the much larger "Tyrannosaurus rex" grew at almost five times this rate ( per year) at its peak. The end of the rapid growth phase suggests the onset of sexual maturity in "Albertosaurus", although growth continued at a slower rate throughout the animals' lives. Sexual maturation while still actively growing appears to be a shared trait among small and large dinosaurs as well as in large mammals such as humans and elephants. This pattern of relatively early sexual maturation differs strikingly from the pattern in birds, which delay their sexual maturity until after they have finished growing. During growth, the tooth morphology changed so much (largely through thickening) that, had the association of young and adult skeletons in the Dry Island bonebed not proven they belonged to the same taxon, the teeth of juveniles would likely have been identified by statistical analysis as those of a different species. Most known "Albertosaurus" individuals were aged 14 years or more at the time of death. Juvenile animals are rarely found as fossils for several reasons, mainly preservation bias, where the smaller bones of younger animals were less likely to be preserved by fossilization than the larger bones of adults, and collection bias, where smaller fossils are less likely to be noticed by collectors in the field. Young "Albertosaurus" are relatively large for juvenile animals, but their remains are still rare in the fossil record compared with adults. It has been suggested that this phenomenon is a consequence of life history, rather than bias, and that fossils of juvenile "Albertosaurus" are rare because they simply did not die as often as adults did. A hypothesis of "Albertosaurus" life history postulates that hatchlings died in large numbers, but have not been preserved in the fossil record due to their small size and fragile construction. After just two years, juveniles were larger than any other predator in the region aside from adult "Albertosaurus", and more fleet of foot than most of their prey animals. This resulted in a dramatic decrease in their mortality rate and a corresponding rarity of fossil remains. Mortality rates doubled at age twelve, perhaps the result of the physiological demands of the rapid growth phase, and then doubled again with the onset of sexual maturity between the ages of fourteen and sixteen. This elevated mortality rate continued throughout adulthood, perhaps due to the high physiological demands of procreation, including stress and injuries received during intraspecific competition for mates and resources, and eventually, the ever-increasing effects of senescence. The higher mortality rate in adults may explain their more common preservation. 
Very large animals were rare because few individuals survived long enough to attain such sizes. High infant mortality rates, followed by reduced mortality among juveniles and a sudden increase in mortality after sexual maturity, with very few animals reaching maximum size, is a pattern observed in many modern large mammals, including elephants, African buffalo, and rhinoceros. The same pattern is also seen in other tyrannosaurids. The comparison with modern animals and other tyrannosaurids lends support to this life history hypothesis, but bias in the fossil record may still play a large role, especially since more than two-thirds of all "Albertosaurus" specimens are known from one locality. The Dry Island bonebed discovered by Barnum Brown and his crew contains the remains of 26 "Albertosaurus", the most individuals found in one locality of any large Cretaceous theropod, and the second-most of any large theropod dinosaur behind the "Allosaurus" assemblage at the Cleveland-Lloyd Dinosaur Quarry in Utah. The group seems to be composed of one very old adult; eight adults between 17 and 23 years old; seven sub-adults undergoing their rapid growth phases at between 12 and 16 years old; and six juveniles between the ages of 2 and 11 years, who had not yet reached the growth phase. The near-absence of herbivore remains and the similar state of preservation common to the many individuals at the "Albertosaurus" bonebed quarry led Currie to conclude that the locality was not a predator trap like the La Brea Tar Pits in California, and that all of the preserved animals died at the same time. Currie claims this as evidence of pack behaviour. Other scientists are skeptical, observing that the animals may have been driven together by drought, flood or for other reasons. There is plentiful evidence for gregarious behaviour among herbivorous dinosaurs, including ceratopsians and hadrosaurs. However, only rarely are so many dinosaurian predators found at the same site. Small theropods like "Deinonychus" and "Coelophysis" have been found in aggregations, as have larger predators like "Allosaurus" and "Mapusaurus". There is some evidence of gregarious behaviour in other tyrannosaurids as well. Fragmentary remains of smaller individuals were found alongside "Sue", the "Tyrannosaurus" mounted in the Field Museum of Natural History in Chicago, and a bonebed in the Two Medicine Formation of Montana contains at least three specimens of "Daspletosaurus", preserved alongside several hadrosaurs. These findings may corroborate the evidence for social behaviour in "Albertosaurus", although some or all of the above localities may represent temporary or unnatural aggregations. Others have speculated that instead of social groups, at least some of these finds represent Komodo dragon-like mobbing of carcasses, where aggressive competition leads to some of the predators being killed and cannibalized. Currie also offers speculation on the pack-hunting habits of "Albertosaurus". The leg proportions of the smaller individuals were comparable to those of ornithomimids, which were probably among the fastest dinosaurs. Younger "Albertosaurus" were probably equally fleet-footed, or at least faster than their prey. Currie hypothesized that the younger members of the pack may have been responsible for driving their prey towards the adults, who were larger and more powerful, but also slower. 
Juveniles may also have had different lifestyles than adults, filling predator niches between the enormous adults and the smaller contemporaneous theropods, the largest of which were two orders of magnitude smaller than adult "Albertosaurus" in mass. A similar situation is observed in modern Komodo dragons, with hatchlings beginning life as small insectivores before growing to become the dominant predators on their islands. However, as the preservation of behaviour in the fossil record is exceedingly rare, these ideas cannot readily be tested. In 2010, Currie, though still favouring the hunting pack hypothesis, admitted that the concentration could have been brought about by other causes, such as a slowly rising water level during an extended flood. In 2009, researchers hypothesized that smooth-edged holes found in the fossil jaws of tyrannosaurid dinosaurs such as "Albertosaurus" were caused by a parasite similar to "Trichomonas gallinae", which infects birds. They suggested that tyrannosaurids transmitted the infection by biting each other, and that the infection impaired their ability to eat food. In 2001, Bruce Rothschild and others published a study examining evidence for stress fractures and tendon avulsions in theropod dinosaurs and the implications for their behavior. They found that only one of the 319 "Albertosaurus" foot bones checked for stress fractures actually had them, and none of the four hand bones did. The scientists found that stress fractures were "significantly" less common in "Albertosaurus" than in the carnosaur "Allosaurus". ROM 807, the holotype of "A. arctunguis" (now referred to "A. sarcophagus"), had a deep hole in the iliac blade, although the describer of the species did not recognize this as pathological. The specimen also contains some exostosis on the fourth left metatarsal. In 1970, two of the five "Albertosaurus sarcophagus" specimens with humeri were reported by Dale Russell as having pathological damage. In 2010, the health of the Dry Island "Albertosaurus" assemblage was reported upon. Most specimens showed no sign of disease. Three phalanges of the foot bore strange bony spurs, consisting of abnormal ossifications of the tendons (so-called enthesophytes), of unknown cause. Two ribs and a belly rib showed signs of breaking and healing. One adult specimen had a left lower jaw showing a puncture wound and both healed and unhealed bite marks. The low number of abnormalities compares favourably with the health of a "Majungasaurus" population, in which a 2007 study established that 19% of individuals showed bone pathologies. According to a scientific paper published in 2012, "Albertosaurus" could exert a bite force of around 42,000 newtons. All identifiable fossils of "Albertosaurus sarcophagus" are known from the upper Horseshoe Canyon Formation in Alberta. These younger units of the formation date to the early Maastrichtian stage of the Late Cretaceous Period, 70 to 68 Ma (million years ago). Immediately below this formation is the Bearpaw Shale, a marine formation representing a section of the Western Interior Seaway. The seaway was receding as the climate cooled and sea levels subsided towards the end of the Cretaceous, exposing land that had previously been underwater. It was not a smooth process, however, and the seaway would periodically rise to cover parts of the region throughout Horseshoe Canyon times before finally receding altogether in the years after. 
Due to the changing sea levels, many different environments are represented in the Horseshoe Canyon Formation, including offshore and near-shore marine habitats and coastal habitats like lagoons, estuaries and tidal flats. Numerous coal seams represent ancient peat swamps. Like most of the other vertebrate fossils from the formation, "Albertosaurus" remains are found in deposits laid down in the deltas and floodplains of large rivers during the latter half of Horseshoe Canyon times. The fauna of the Horseshoe Canyon Formation is well-known, as vertebrate fossils, including those of dinosaurs, are quite common. Sharks, rays, sturgeons, bowfins, gars and the gar-like "Aspidorhynchus" made up the fish fauna. Mammals included multituberculates and the marsupial "Didelphodon". The saltwater plesiosaur "Leurospondylus" has been found in marine sediments in the Horseshoe Canyon, while freshwater environments were populated by turtles, "Champsosaurus", and crocodilians like "Leidyosuchus" and "Stangerochampsa". Dinosaurs dominate the fauna, especially hadrosaurs, which make up half of all dinosaurs known, including the genera "Edmontosaurus", "Saurolophus" and "Hypacrosaurus". Ceratopsians and ornithomimids were also very common, together making up another third of the known fauna. Along with much rarer ankylosaurians and pachycephalosaurs, all of these animals would have been prey for a diverse array of carnivorous theropods, including troodontids, dromaeosaurids, and caenagnathids. Intermingled with the "Albertosaurus" remains of the Dry Island bonebed, the bones of the small theropod "Albertonykus" were found. Adult "Albertosaurus" were the apex predators in this environment, with intermediate niches possibly filled by juvenile albertosaurs. Archimedes Archimedes of Syracuse was a Greek mathematician, physicist, engineer, inventor, and astronomer. Although few details of his life are known, he is regarded as one of the leading scientists in classical antiquity. Generally considered the greatest mathematician of antiquity and one of the greatest of all time, Archimedes anticipated modern calculus and analysis by applying concepts of infinitesimals and the method of exhaustion to derive and rigorously prove a range of geometrical theorems, including the area of a circle, the surface area and volume of a sphere, and the area under a parabola. Other mathematical achievements include deriving an accurate approximation of pi, defining and investigating the spiral bearing his name, and creating a system using exponentiation for expressing very large numbers. He was also one of the first to apply mathematics to physical phenomena, founding hydrostatics and statics, including an explanation of the principle of the lever. He is credited with designing innovative machines, such as his screw pump, compound pulleys, and defensive war machines to protect his native Syracuse from invasion. Archimedes died during the Siege of Syracuse when he was killed by a Roman soldier despite orders that he should not be harmed. Cicero describes visiting the tomb of Archimedes, which was surmounted by a sphere and a cylinder that Archimedes had requested be placed there to represent his mathematical discoveries. Unlike his inventions, the mathematical writings of Archimedes were little known in antiquity. 
Mathematicians from Alexandria read and quoted him, but the first comprehensive compilation was not made until Isidore of Miletus produced one in Byzantine Constantinople, while commentaries on the works of Archimedes written by Eutocius in the sixth century AD opened them to wider readership for the first time. The relatively few copies of Archimedes' written work that survived through the Middle Ages were an influential source of ideas for scientists during the Renaissance, while the discovery in 1906 of previously unknown works by Archimedes in the Archimedes Palimpsest has provided new insights into how he obtained mathematical results. Archimedes was born c. 287 BC in the seaport city of Syracuse, Sicily, at that time a self-governing colony in Magna Graecia, the Greek-settled region along the coast of Southern Italy and in Sicily. The date of birth is based on a statement by the Byzantine Greek historian John Tzetzes that Archimedes lived for 75 years. In "The Sand Reckoner", Archimedes gives his father's name as Phidias, an astronomer about whom nothing else is known. Plutarch wrote in his "Parallel Lives" that Archimedes was related to King Hiero II, the ruler of Syracuse. A biography of Archimedes was written by his friend Heracleides, but this work has been lost, leaving the details of his life obscure. It is unknown, for instance, whether he ever married or had children. During his youth, Archimedes may have studied in Alexandria, Egypt, where Conon of Samos and Eratosthenes of Cyrene were contemporaries. He referred to Conon of Samos as his friend, while two of his works ("The Method of Mechanical Theorems" and the "Cattle Problem") have introductions addressed to Eratosthenes. Archimedes died c. 212 BC during the Second Punic War, when Roman forces under General Marcus Claudius Marcellus captured the city of Syracuse after a two-year-long siege. According to the popular account given by Plutarch, Archimedes was contemplating a mathematical diagram when the city was captured. A Roman soldier commanded him to come and meet General Marcellus but he declined, saying that he had to finish working on the problem. The soldier was enraged by this and killed Archimedes with his sword. Plutarch also gives another account of the death of Archimedes which suggests that he may have been killed while attempting to surrender to a Roman soldier. According to this story, Archimedes was carrying mathematical instruments, and was killed because the soldier thought that they were valuable items. General Marcellus was reportedly angered by the death of Archimedes, as he considered him a valuable scientific asset and had ordered that he not be harmed. Marcellus called Archimedes "a geometrical Briareus". The last words attributed to Archimedes are "Do not disturb my circles", a reference to the circles in the mathematical drawing that he was supposedly studying when disturbed by the Roman soldier. This quote is often given in Latin as "Noli turbare circulos meos", but there is no reliable evidence that Archimedes uttered these words and they do not appear in the account given by Plutarch. Valerius Maximus, writing in "Memorable Doings and Sayings" in the 1st century AD, gives the phrase as "...sed protecto manibus puluere 'noli' inquit, 'obsecro, istum disturbare'" - "... but protecting the dust with his hands, said 'I beg of you, do not disturb this.'" The phrase is also given in Katharevousa Greek as "μὴ μου τοὺς κύκλους τάραττε!" ("Mē mou tous kuklous taratte!"). 
The tomb of Archimedes carried a sculpture illustrating his favorite mathematical proof, consisting of a sphere and a cylinder of the same height and diameter. Archimedes had proven that the volume and surface area of the sphere are two thirds that of the cylinder including its bases. In 75 BC, 137 years after his death, the Roman orator Cicero was serving as quaestor in Sicily. He had heard stories about the tomb of Archimedes, but none of the locals were able to give him the location. Eventually he found the tomb near the Agrigentine gate in Syracuse, in a neglected condition and overgrown with bushes. Cicero had the tomb cleaned up, and was able to see the carving and read some of the verses that had been added as an inscription. A tomb discovered in the courtyard of the Hotel Panorama in Syracuse in the early 1960s was claimed to be that of Archimedes, but there was no compelling evidence for this and the location of his tomb today is unknown. The standard versions of the life of Archimedes were written long after his death by the historians of Ancient Rome. The account of the siege of Syracuse given by Polybius in his "Universal History" was written around seventy years after Archimedes' death, and was used subsequently as a source by Plutarch and Livy. It sheds little light on Archimedes as a person, and focuses on the war machines that he is said to have built in order to defend the city. The most widely known anecdote about Archimedes tells of how he invented a method for determining the volume of an object with an irregular shape. According to Vitruvius, a votive crown for a temple had been made for King Hiero II of Syracuse, who had supplied the pure gold to be used, and Archimedes was asked to determine whether some silver had been substituted by the dishonest goldsmith. Archimedes had to solve the problem without damaging the crown, so he could not melt it down into a regularly shaped body in order to calculate its density. While taking a bath, he noticed that the level of the water in the tub rose as he got in, and realized that this effect could be used to determine the volume of the crown. For practical purposes water is incompressible, so the submerged crown would displace an amount of water equal to its own volume. By dividing the mass of the crown by the volume of water displaced, the density of the crown could be obtained. This density would be lower than that of gold if cheaper and less dense metals had been added. Archimedes then took to the streets naked, so excited by his discovery that he had forgotten to dress, crying "Eureka!" ("heúrēka!", meaning "I have found [it]!"). The test was conducted successfully, proving that silver had indeed been mixed in. The story of the golden crown does not appear in the known works of Archimedes. Moreover, the practicality of the method it describes has been called into question, due to the extreme accuracy with which one would have to measure the water displacement. Archimedes may have instead sought a solution that applied the principle known in hydrostatics as Archimedes' principle, which he describes in his treatise "On Floating Bodies". This principle states that a body immersed in a fluid experiences a buoyant force equal to the weight of the fluid it displaces. Using this principle, it would have been possible to compare the density of the crown to that of pure gold by balancing the crown on a scale with a pure gold reference sample of the same weight, then immersing the apparatus in water. 
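As a rough numerical illustration of the two approaches described above (the displacement check reported by Vitruvius and the hydrostatic balance), the sketch below uses standard densities for gold and silver; the crown mass and the alloy fraction are invented purely for the example.

```python
# Illustrative only: a 1 kg crown, either pure gold or 70% gold / 30% silver by mass.
RHO_GOLD = 19.3    # g/cm^3
RHO_SILVER = 10.5  # g/cm^3
RHO_WATER = 1.0    # g/cm^3

mass = 1000.0  # g (assumed crown mass)

v_gold_reference = mass / RHO_GOLD                               # ~51.8 cm^3
v_alloy_crown = 0.7 * mass / RHO_GOLD + 0.3 * mass / RHO_SILVER  # ~64.8 cm^3

# Displacement method: the alloyed crown displaces ~13 cm^3 more water than a
# pure-gold reference of equal mass, so its computed density (mass / volume) is lower.
extra_displacement = v_alloy_crown - v_gold_reference

# Hydrostatic balance: submerged, each side loses weight equal to the water it
# displaces, so the alloyed crown's side becomes lighter by the same amount.
buoyancy_difference = extra_displacement * RHO_WATER

print(f"extra water displaced: {extra_displacement:.1f} cm^3")
print(f"apparent weight difference under water: {buoyancy_difference:.1f} g")
```

Reliably measuring an overflow on the order of a dozen cubic centimetres is what makes the simple displacement version delicate in practice, whereas the submerged balance converts the same difference directly into a visible tip of the beam.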
The difference in density between the two samples would cause the scale to tip accordingly. Galileo considered it "probable that this method is the same that Archimedes followed, since, besides being very accurate, it is based on demonstrations found by Archimedes himself." In a 12th-century text titled "Mappae clavicula" there are instructions on how to perform the weighings in the water in order to calculate the percentage of silver used, and thus solve the problem. The Latin poem "Carmen de ponderibus et mensuris" of the 4th or 5th century describes the use of a hydrostatic balance to solve the problem of the crown, and attributes the method to Archimedes. A large part of Archimedes' work in engineering arose from fulfilling the needs of his home city of Syracuse. The Greek writer Athenaeus of Naucratis described how King Hiero II commissioned Archimedes to design a huge ship, the "Syracusia", which could be used for luxury travel, carrying supplies, and as a naval warship. The "Syracusia" is said to have been the largest ship built in classical antiquity. According to Athenaeus, it was capable of carrying 600 people and included garden decorations, a gymnasium and a temple dedicated to the goddess Aphrodite among its facilities. Since a ship of this size would leak a considerable amount of water through the hull, the Archimedes' screw was purportedly developed in order to remove the bilge water. Archimedes' machine was a device with a revolving screw-shaped blade inside a cylinder. It was turned by hand, and could also be used to transfer water from a body of water into irrigation canals. The Archimedes' screw is still in use today for pumping liquids and granulated solids such as coal and grain. The Archimedes' screw described in Roman times by Vitruvius may have been an improvement on a screw pump that was used to irrigate the Hanging Gardens of Babylon. The world's first seagoing steamship with a screw propeller was the SS "Archimedes", which was launched in 1839 and named in honor of Archimedes and his work on the screw. The Claw of Archimedes is a weapon that he is said to have designed in order to defend the city of Syracuse. Also known as "the ship shaker," the claw consisted of a crane-like arm from which a large metal grappling hook was suspended. When the claw was dropped onto an attacking ship the arm would swing upwards, lifting the ship out of the water and possibly sinking it. There have been modern experiments to test the feasibility of the claw, and in 2005 a television documentary entitled "Superweapons of the Ancient World" built a version of the claw and concluded that it was a workable device. Archimedes may have used mirrors acting collectively as a parabolic reflector to burn ships attacking Syracuse. The 2nd century AD author Lucian wrote that during the Siege of Syracuse ("c." 214–212 BC), Archimedes destroyed enemy ships with fire. Centuries later, Anthemius of Tralles mentions burning-glasses as Archimedes' weapon. The device, sometimes called the "Archimedes heat ray", was used to focus sunlight onto approaching ships, causing them to catch fire. In the modern era, similar devices have been constructed and may be referred to as a heliostat or solar furnace. This purported weapon has been the subject of ongoing debate about its credibility since the Renaissance. René Descartes rejected it as false, while modern researchers have attempted to recreate the effect using only the means that would have been available to Archimedes. 
It has been suggested that a large array of highly polished bronze or copper shields acting as mirrors could have been employed to focus sunlight onto a ship. A test of the Archimedes heat ray was carried out in 1973 by the Greek scientist Ioannis Sakkas. The experiment took place at the Skaramagas naval base outside Athens. On this occasion 70 mirrors were used, each with a copper coating and a size of around five by three feet (1.5 by 1 m). The mirrors were pointed at a plywood mock-up of a Roman warship at a distance of around 160 feet (50 m). When the mirrors were focused accurately, the ship burst into flames within a few seconds. The plywood ship had a coating of tar paint, which may have aided combustion. A coating of tar would have been commonplace on ships in the classical era. In October 2005 a group of students from the Massachusetts Institute of Technology carried out an experiment with 127 one-foot (30 cm) square mirror tiles, focused on a wooden ship at a range of around 100 feet (30 m). Flames broke out on a patch of the ship, but only after the sky had been cloudless and the ship had remained stationary for around ten minutes. It was concluded that the device was a feasible weapon under these conditions. The MIT group repeated the experiment for the television show "MythBusters", using a wooden fishing boat in San Francisco as the target. Again some charring occurred, along with a small amount of flame. In order to catch fire, wood needs to reach its autoignition temperature, which is around 300 °C (570 °F). When "MythBusters" broadcast the result of the San Francisco experiment in January 2006, the claim was placed in the category of "busted" (or failed) because of the length of time and the ideal weather conditions required for combustion to occur. It was also pointed out that since Syracuse faces the sea towards the east, the Roman fleet would have had to attack during the morning for optimal gathering of light by the mirrors. "MythBusters" also pointed out that conventional weaponry, such as flaming arrows or bolts from a catapult, would have been a far easier way of setting a ship on fire at short distances. In December 2010, "MythBusters" again looked at the heat ray story in a special edition entitled "President's Challenge". Several experiments were carried out, including a large scale test with 500 schoolchildren aiming mirrors at a mock-up of a Roman sailing ship 400 feet (120 m) away. In all of the experiments, the sail failed to reach the 210 °C (410 °F) required to catch fire, and the verdict was again "busted". The show concluded that a more likely effect of the mirrors would have been blinding, dazzling, or distracting the crew of the ship. While Archimedes did not invent the lever, he gave an explanation of the principle involved in his work "On the Equilibrium of Planes". Earlier descriptions of the lever are found in the Peripatetic school of the followers of Aristotle, and are sometimes attributed to Archytas. According to Pappus of Alexandria, Archimedes' work on levers caused him to remark: "Give me a place to stand on, and I will move the Earth." Plutarch describes how Archimedes designed block-and-tackle pulley systems, allowing sailors to use the principle of leverage to lift objects that would otherwise have been too heavy to move. Archimedes has also been credited with improving the power and accuracy of the catapult, and with inventing the odometer during the First Punic War.
The odometer was described as a cart with a gear mechanism that dropped a ball into a container after each mile traveled. Cicero (106–43 BC) mentions Archimedes briefly in his dialogue "De re publica", which portrays a fictional conversation taking place in 129 BC. After the capture of Syracuse "c." 212 BC, General Marcus Claudius Marcellus is said to have taken back to Rome two mechanisms, constructed by Archimedes and used as aids in astronomy, which showed the motion of the Sun, Moon and five planets. Cicero mentions similar mechanisms designed by Thales of Miletus and Eudoxus of Cnidus. The dialogue says that Marcellus kept one of the devices as his only personal loot from Syracuse, and donated the other to the Temple of Virtue in Rome. Marcellus' mechanism was demonstrated, according to Cicero, by Gaius Sulpicius Gallus to Lucius Furius Philus, who described it; the device in question was a planetarium or orrery. Pappus of Alexandria stated that Archimedes had written a manuscript (now lost) on the construction of these mechanisms, entitled "On Sphere-Making". Modern research in this area has been focused on the Antikythera mechanism, another device, built in the second or first century BC, that was probably designed for the same purpose. Constructing mechanisms of this kind would have required a sophisticated knowledge of differential gearing. This was once thought to have been beyond the range of the technology available in ancient times, but the discovery of the Antikythera mechanism in 1902 has confirmed that devices of this kind were known to the ancient Greeks. While he is often regarded as a designer of mechanical devices, Archimedes also made contributions to the field of mathematics. Plutarch wrote: "He placed his whole affection and ambition in those purer speculations where there can be no reference to the vulgar needs of life." Archimedes was able to use infinitesimals in a way that is similar to modern integral calculus. Through proof by contradiction (reductio ad absurdum), he could give answers to problems to an arbitrary degree of accuracy, while specifying the limits within which the answer lay. This technique is known as the method of exhaustion, and he employed it to approximate the value of π. In "Measurement of a Circle" he did this by drawing a larger regular hexagon outside a circle and a smaller regular hexagon inside the circle, and progressively doubling the number of sides of each regular polygon, calculating the length of a side of each polygon at each step. As the number of sides increases, it becomes a more accurate approximation of a circle. After four such steps, when the polygons had 96 sides each, he was able to determine that the value of π lay between 3 1/7 (approximately 3.1429) and 3 10/71 (approximately 3.1408), consistent with its actual value of approximately 3.1416. He also proved that the area of a circle was equal to π multiplied by the square of the radius of the circle (πr²). In "On the Sphere and Cylinder", Archimedes postulates that any magnitude when added to itself enough times will exceed any given magnitude. This is the Archimedean property of real numbers. In "Measurement of a Circle", Archimedes gives the value of the square root of 3 as lying between 265/153 (approximately 1.7320261) and 1351/780 (approximately 1.7320512). The actual value is approximately 1.7320508, making this a very accurate estimate. He introduced this result without offering any explanation of how he had obtained it.
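The polygon construction and the fractional bounds quoted above can be checked numerically. The sketch below uses modern trigonometry purely as a verification; it is not a reconstruction of Archimedes' own recursive side-length computations.

```python
import math

# Perimeter bounds on pi from inscribed and circumscribed regular n-gons
# around a unit-diameter circle, doubling the sides 6 -> 12 -> 24 -> 48 -> 96.
for n in (6, 12, 24, 48, 96):
    lower = n * math.sin(math.pi / n)   # inscribed polygon perimeter
    upper = n * math.tan(math.pi / n)   # circumscribed polygon perimeter
    print(f"{n:3d} sides: {lower:.4f} < pi < {upper:.4f}")

# Archimedes' stated bounds, 3 10/71 < pi < 3 1/7:
print(3 + 10 / 71, "< pi <", 3 + 1 / 7)

# His bounds on the square root of 3, 265/153 < sqrt(3) < 1351/780:
print(265 / 153, "< sqrt(3) =", math.sqrt(3), "<", 1351 / 780)
```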
This aspect of the work of Archimedes caused John Wallis to remark that he was "as it were of set purpose to have covered up the traces of his investigation as if he had grudged posterity the secret of his method of inquiry while he wished to extort from them assent to his results." It is possible that he used an iterative procedure to calculate these values. In "The Quadrature of the Parabola", Archimedes proved that the area enclosed by a parabola and a straight line is 4/3 times the area of a corresponding inscribed triangle. He expressed the solution to the problem as an infinite geometric series with the common ratio 1/4: if the first term in this series is the area of the triangle, then the second is the sum of the areas of two triangles whose bases are the two smaller secant lines, and so on. This proof uses a variation of the series 1/4 + 1/16 + 1/64 + ..., which sums to 1/3. In "The Sand Reckoner", Archimedes set out to calculate the number of grains of sand that the universe could contain. In doing so, he challenged the notion that the number of grains of sand was too large to be counted. He wrote: "There are some, King Gelo (Gelo II, son of Hiero II), who think that the number of the sand is infinite in multitude; and I mean by the sand not only that which exists about Syracuse and the rest of Sicily but also that which is found in every region whether inhabited or uninhabited." To solve the problem, Archimedes devised a system of counting based on the myriad. The word is from the Greek "murias", for the number 10,000. He proposed a number system using powers of a myriad of myriads (100 million) and concluded that the number of grains of sand required to fill the universe would be 8 vigintillion, or 8×10^63. The works of Archimedes were written in Doric Greek, the dialect of ancient Syracuse. The written work of Archimedes has not survived as well as that of Euclid, and seven of his treatises are known to have existed only through references made to them by other authors. Pappus of Alexandria mentions "On Sphere-Making" and another work on polyhedra, while Theon of Alexandria quotes a remark about refraction from the "Catoptrica". During his lifetime, Archimedes made his work known through correspondence with the mathematicians in Alexandria. The writings of Archimedes were first collected by the Byzantine Greek architect Isidore of Miletus ("c." 530 AD), while commentaries on the works of Archimedes written by Eutocius in the sixth century AD helped to bring his work a wider audience. Archimedes' work was translated into Arabic by Thābit ibn Qurra (836–901 AD), and into Latin by Gerard of Cremona ("c." 1114–1187 AD). During the Renaissance, the "Editio Princeps" (First Edition) was published in Basel in 1544 by Johann Herwagen with the works of Archimedes in Greek and Latin. Around the year 1586 Galileo Galilei invented a hydrostatic balance for weighing metals in air and water after apparently being inspired by the work of Archimedes. Archimedes' "Book of Lemmas" or "Liber Assumptorum" is a treatise with fifteen propositions on the nature of circles. The earliest known copy of the text is in Arabic. The scholars T. L. Heath and Marshall Clagett argued that it cannot have been written by Archimedes in its current form, since it quotes Archimedes, suggesting modification by another author. The "Lemmas" may be based on an earlier work by Archimedes that is now lost.
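Returning to "The Quadrature of the Parabola" discussed above, the 4/3 ratio is exactly the limit of the geometric series Archimedes describes. The short check below takes the area of the first inscribed triangle as one unit (an arbitrary normalization) and sums the series numerically.

```python
# Numerical check of the series behind "The Quadrature of the Parabola":
# each stage adds triangles totalling 1/4 of the previous stage's area, so the
# segment's area is 1 + 1/4 + 1/16 + ... = 4/3 of the first inscribed triangle
# (whose area is taken here as 1 unit, an arbitrary normalization).
total = 0.0
stage_area = 1.0
for _ in range(40):      # 40 stages are far more than enough to converge
    total += stage_area
    stage_area /= 4

print(total, "vs. exact value 4/3 =", 4 / 3)
# The tail without the leading term sums to 1/3, the variant quoted in the text:
print(sum((1 / 4) ** k for k in range(1, 40)))
```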
It has also been claimed that Heron's formula for calculating the area of a triangle from the length of its sides was known to Archimedes. However, the first reliable reference to the formula is given by Heron of Alexandria in the 1st century AD. The foremost document containing the work of Archimedes is the Archimedes Palimpsest. In 1906, the Danish professor Johan Ludvig Heiberg visited Constantinople and examined a 174-page goatskin parchment of prayers written in the 13th century AD. He discovered that it was a palimpsest, a document with text that had been written over an erased older work. Palimpsests were created by scraping the ink from existing works and reusing them, which was a common practice in the Middle Ages as vellum was expensive. The older works in the palimpsest were identified by scholars as 10th century AD copies of previously unknown treatises by Archimedes. The parchment spent hundreds of years in a monastery library in Constantinople before being sold to a private collector in the 1920s. On October 29, 1998 it was sold at auction to an anonymous buyer for $2 million at Christie's in New York. The palimpsest holds seven treatises, including the only surviving copy of "On Floating Bodies" in the original Greek. It is the only known source of "The Method of Mechanical Theorems", referred to by Suidas and thought to have been lost forever. "Stomachion" was also discovered in the palimpsest, with a more complete analysis of the puzzle than had been found in previous texts. The palimpsest is now stored at the Walters Art Museum in Baltimore, Maryland, where it has been subjected to a range of modern tests including the use of ultraviolet and X-ray light to read the overwritten text. The treatises in the Archimedes Palimpsest are: "On the Equilibrium of Planes", "On Spirals", "Measurement of a Circle", "On the Sphere and the Cylinder", "On Floating Bodies", "The Method of Mechanical Theorems", and "Stomachion".
a. In the preface to "On Spirals" addressed to Dositheus of Pelusium, Archimedes says that "many years have elapsed since Conon's death." Conon of Samos lived "c." 280–220 BC, suggesting that Archimedes may have been an older man when writing some of his works.
b. The treatises by Archimedes known to exist only through references in the works of other authors are: "On Sphere-Making" and a work on polyhedra mentioned by Pappus of Alexandria; "Catoptrica", a work on optics mentioned by Theon of Alexandria; "Principles", addressed to Zeuxippus and explaining the number system used in "The Sand Reckoner"; "On Balances and Levers"; "On Centers of Gravity"; "On the Calendar". Of the surviving works by Archimedes, T. L. Heath offers the following suggestion as to the order in which they were written: "On the Equilibrium of Planes I", "The Quadrature of the Parabola", "On the Equilibrium of Planes II", "On the Sphere and the Cylinder I, II", "On Spirals", "On Conoids and Spheroids", "On Floating Bodies I, II", "On the Measurement of a Circle", "The Sand Reckoner".
c. Boyer, Carl Benjamin "A History of Mathematics" (1991): "Arabic scholars inform us that the familiar area formula for a triangle in terms of its three sides, usually known as Heron's formula — "k" = √("s"("s" − "a")("s" − "b")("s" − "c")), where "s" is the semiperimeter — was known to Archimedes several centuries before Heron lived. Arabic scholars also attribute to Archimedes the 'theorem on the broken chord' ... Archimedes is reported by the Arabs to have given several proofs of the theorem."
d. "It was usual to smear the seams or even the whole hull with pitch or with pitch and wax". In Νεκρικοὶ Διάλογοι ("Dialogues of the Dead"), Lucian refers to coating the seams of a skiff with wax, a reference to pitch (tar) or wax.
Ann Arbor, Michigan Ann Arbor is a city in the U.S. state of Michigan and the county seat of Washtenaw County. The 2010 census recorded its population to be 113,934, making it the sixth largest city in Michigan. Ann Arbor is home to the University of Michigan. The university shapes Ann Arbor's economy significantly as it employs about 30,000 workers, including about 12,000 in the medical center. The city's economy is also centered on high technology, with several companies drawn to the area by the university's research and development infrastructure, and by its graduates. Ann Arbor was founded in 1824, named for the wives of the village's founders, both named Ann, and the stands of bur oak trees. The University of Michigan moved from Detroit to Ann Arbor in 1837, and the city grew at a rapid rate in the early to mid-20th century. During the 1960s and 70s, the city gained a reputation as a center for left-wing politics. Ann Arbor became a focal point for political activism, such as opposition to the Vietnam War and support for the legalization of cannabis. In about 1774, the Potawatomi founded two villages in the area of what is now Ann Arbor. Ann Arbor was founded in 1824 by land speculators John Allen and Elisha Walker Rumsey. On May 25, 1824, the town plat was registered with Wayne County as "Annarbour;" this represents the earliest known use of the town's name. Allen and Rumsey decided to name it for their wives, both named Ann, and for the stands of bur oak in the 640 acres of land they purchased for $800 from the federal government at $1.25 per acre. The local Ojibwa named the settlement "kaw-goosh-kaw-nick", after the sound of Allen's sawmill. Ann Arbor became the seat of Washtenaw County in 1827, and was incorporated as a village in 1833. The Ann Arbor Land Company, a group of speculators, set aside a tract of undeveloped land and offered it to the state of Michigan as the site of the state capital, but lost the bid to Lansing. In 1837, the property was accepted instead as the site of the University of Michigan, which moved from Detroit. Since the university's establishment in the city in 1837, the histories of the University of Michigan and Ann Arbor have been closely linked. The town became a regional transportation hub in 1839 with the arrival of the Michigan Central Railroad, and a north-south railway connecting Ann Arbor to Toledo and other markets to the south was established in 1878. Throughout the 1840s and the 1850s settlers continued to come to Ann Arbor. While the earlier settlers were primarily of British ancestry, the newer settlers also included Germans, Irish, and African-Americans. In 1851, Ann Arbor was chartered as a city, though the city showed a drop in population during the Depression of 1873. It was not until the early 1880s that Ann Arbor again saw robust growth, with new immigrants from Greece, Italy, Russia, and Poland. Ann Arbor saw increased growth in manufacturing, particularly in milling. Ann Arbor's Jewish community also grew after the turn of the 20th century, and its first and oldest synagogue, Beth Israel Congregation, was established in 1916. During the 1960s and 1970s, the city gained a reputation as an important center for liberal politics. Ann Arbor also became a locus for left-wing activism and the anti-Vietnam War movement, as well as the student movement. The first major meetings of the national left-wing campus group Students for a Democratic Society took place in Ann Arbor in 1960; in 1965, the city was home to the first U.S. teach-in against the Vietnam War.
During the ensuing 15 years, many countercultural and New Left enterprises sprang up and developed large constituencies within the city. These influences washed into municipal politics during the early and mid-1970s when three members of the Human Rights Party (HRP) won city council seats on the strength of the student vote. During their time on the council, HRP representatives fought for measures including pioneering antidiscrimination ordinances, measures decriminalizing marijuana possession, and a rent-control ordinance; many of these remain in effect in modified form. Alongside these liberal and left-wing efforts, a small group of conservative institutions were born in Ann Arbor. These include Word of God (established in 1967), a charismatic inter-denominational movement; and the Thomas More Law Center (established in 1999), a religious-conservative advocacy group. Following a 1956 vote, the city of East Ann Arbor merged with Ann Arbor to encompass the eastern sections of the city. In the past several decades, Ann Arbor has grappled with the effects of sharply rising land values, gentrification, and urban sprawl stretching into outlying countryside. On November 4, 2003, voters approved a greenbelt plan under which the city government bought development rights on agricultural parcels of land adjacent to Ann Arbor to preserve them from sprawling development. Since then, a vociferous local debate has hinged on how and whether to accommodate and guide development within city limits. Ann Arbor consistently ranks in the "top places to live" lists published by various mainstream media outlets every year. In 2008, CNNMoney.com ranked it 27th out of 100 "America's best small cities", and in 2010, "Forbes" listed Ann Arbor as one of the most liveable cities in the United States. According to the United States Census Bureau, the city's total area is mostly land, with the remainder water, much of which is part of the Huron River. Ann Arbor lies west of Detroit. Ann Arbor Charter Township adjoins the city's north and east sides. Ann Arbor is situated on the Huron River in a productive agricultural and fruit-growing region. The landscape of Ann Arbor consists of hills and valleys, with the terrain becoming steeper near the Huron River. The elevation is lowest along the Huron River and highest on the city's west side, near the intersection of Maple Road and Pauline Blvd. Generally, the west-central and northwestern parts of the city and U-M's North Campus are the highest parts of the city; the lowest parts are along the Huron River and in the southeast. Ann Arbor Municipal Airport lies south of the city. Ann Arbor's "Tree Town" nickname stems from the dense forestation of its parks and residential areas. The city contains more than 50,000 trees along its streets and an equal number in parks. In recent years, the emerald ash borer has destroyed many of the city's approximately 10,500 ash trees. The city contains 157 municipal parks ranging from small neighborhood green spots to large recreation areas. Several large city parks and a university park border sections of the Huron River. Fuller Recreation Area, near the University Hospital complex, contains sports fields, pedestrian and bike paths, and swimming pools. The Nichols Arboretum, owned by the University of Michigan, is an arboretum that contains hundreds of plant and tree species. It is on the city's east side, near the university's Central Campus.
Located across the Huron River just beyond the university's North Campus is the university's Matthaei Botanical Gardens, which contains 300 acres of gardens and a large tropical conservatory. The Kerrytown Shops, Main Street Business District, the State Street Business District, and the South University Business District are commercial areas in downtown Ann Arbor. Three commercial areas south of downtown include the areas near I-94 and Ann Arbor-Saline Road, Briarwood Mall, and the South Industrial area. Other commercial areas include the Arborland/Washtenaw Avenue and Packard Road merchants on the east side, the Plymouth Road area in the northeast, and the Westgate/West Stadium areas on the west side. Downtown contains a mix of 19th- and early-20th-century structures and modern-style buildings, as well as a farmers' market in the Kerrytown district. The city's commercial districts are composed mostly of two- to four-story structures, although downtown and the area near Briarwood Mall contain a small number of high-rise buildings. Ann Arbor's residential neighborhoods contain architectural styles ranging from classic 19th- and early 20th-century designs to ranch-style houses. Among these homes are a number of kit houses built in the early 20th century. Contemporary-style houses are farther from the downtown district. Surrounding the University of Michigan campus are houses and apartment complexes occupied primarily by student renters. Tower Plaza, a 26-story condominium building located between the University of Michigan campus and downtown, is the tallest building in Ann Arbor. The 19th-century buildings and streetscape of the Old West Side neighborhood have been preserved virtually intact; in 1972, the district was listed on the National Register of Historic Places, and it is further protected by city ordinances and a nonprofit preservation group. Ann Arbor has a typically Midwestern humid continental climate (Köppen "Dfa"), which is influenced by the Great Lakes. There are four distinct seasons: winters are cold and snowy, while summers are warm to hot and humid, with slightly more precipitation; spring and autumn are transitional between the two. The area experiences lake effect weather, primarily in the form of increased cloudiness during late fall and early winter. July is on average the warmest month and January the coldest. Precipitation tends to be heaviest during the summer months, but most frequent during winter. Snowfall normally occurs from November to April but occasionally starts in October. The lowest temperature on record was set on February 11, 1885, and the highest on July 24, 1934. As of the 2010 U.S. Census, there were 113,934 people, 45,634 households, and 21,704 families residing in the city. The population density was 4,270.33 people per square mile (2653.47/km²). There were 49,982 housing units at an average density of 1,748.0 per square mile (675.0/km²), making it less densely populated than inner-ring Detroit suburbs like Oak Park and Ferndale (and than Detroit proper), but more densely populated than outer-ring suburbs like Livonia or Troy. The racial makeup of the city was 73.0% White (70.4% non-Hispanic White), 7.7% Black or African American, 0.3% Native American, 14.4% Asian, 0.0% Pacific Islander, 1.0% from other races, and 3.6% from two or more races.
Hispanic or Latino residents of any race were 4.1% of the population. In 2013, Ann Arbor had the second-largest community of Japanese citizens in the state of Michigan, numbering 1,541; this figure trailed only that of Novi, which had 2,666 Japanese nationals. In addition, as of 2005, Ann Arbor has a small population of Arab Americans, including several students as well as local Lebanese and Palestinians. In 2000, out of 45,693 households, 23.0% had children under the age of 18 living with them, 37.8% were married couples living together, 7.5% had a female householder with no husband present, and 52.5% were nonfamilies. 35.5% of households were made up of individuals and 6.6% had someone living alone who was 65 years of age or older. The average household size was 2.22 and the average family size was 2.90. The age distribution was 16.8% under 18, 26.8% from 18 to 24, 31.2% from 25 to 44, 17.3% from 45 to 64, and 7.9% were 65 or older. The median age was 28 years. For every 100 females, there were 97.7 males; while for every 100 females age 18 and over, there were 96.4 males. The median income for a household in the city was $46,299, and the median income for a family was $71,293 (these figures had risen to $51,232 and $82,293 respectively as of a 2007 estimate). Males had a median income of $48,880 versus $36,561 for females. The per capita income for the city was $26,419. About 4.6% of families and 16.6% of the population were below the poverty line, including 7.3% of those under age 18 and 5.1% of those age 65 or over. The University of Michigan shapes Ann Arbor's economy significantly. It employs about 30,000 workers, including about 12,000 in the medical center. Other employers are drawn to the area by the university's research and development money, and by its graduates. High tech, health services and biotechnology are other major components of the city's economy; numerous medical offices, laboratories, and associated companies are located in the city. Automobile manufacturers, such as General Motors and Visteon, also employ residents. High tech companies have located in the area since the 1930s, when International Radio Corporation introduced the first mass-produced AC/DC radio (the Kadette, in 1931) as well as the first pocket radio (the Kadette Jr., in 1933). The Argus camera company, originally a subsidiary of International Radio, manufactured cameras in Ann Arbor from 1936 to the 1960s. Current firms include Arbor Networks (provider of Internet traffic engineering and security systems), Arbortext (provider of XML-based publishing software), JSTOR (the digital scholarly journal archive), MediaSpan (provider of software and online services for the media industries), Truven Health Analytics, and ProQuest, which includes UMI. Ann Arbor Terminals manufactured a video-display terminal called the Ann Arbor Ambassador during the 1980s. Barracuda Networks, which provides networking, security, and storage products based on network appliances and cloud services, opened an engineering office in Ann Arbor in 2008 on Depot St. and recently announced it will move downtown to occupy the building previously used as the Borders headquarters. Duo Security, a cloud-based access security provider protecting thousands of organizations worldwide through two-factor authentication, is headquartered in Ann Arbor. Websites and online media companies in or near the city include All Media Guide, the Weather Underground, and Zattoo. 
Ann Arbor is the home to Internet2 and the Merit Network, a not-for-profit research and education computer network. Both are located in the South State Commons 2 building on South State Street, which once housed the Michigan Information Technology Center Foundation. The city is also home to the headquarters of Google's AdWords program—the company's primary revenue stream. The recent surge in companies operating in Ann Arbor has led to a decrease in its office and flex space vacancy rates. As of December 31, 2012, the total market vacancy rate for office and flex space is 11.80%, a 1.40% decrease in vacancy from one year previous, and the lowest overall vacancy level since 2003. The office vacancy rate decreased to 10.65% in 2012 from 12.08% in 2011, while the flex vacancy rate decreased slightly more, with a drop from 16.50% to 15.02%. Pfizer, once the city's second largest employer, operated a large pharmaceutical research facility on the northeast side of Ann Arbor. On January 22, 2007, Pfizer announced it would close operations in Ann Arbor by the end of 2008. The facility was previously operated by Warner-Lambert and, before that, Parke-Davis. In December 2008, the University of Michigan Board of Regents approved the purchase of the facilities, and the university anticipates hiring 2,000 researchers and staff during the next 10 years. The city is the home of other research and engineering centers, including those of Lotus Engineering, General Dynamics and the National Oceanic and Atmospheric Administration (NOAA). Other research centers sited in the city are the United States Environmental Protection Agency's National Vehicle and Fuel Emissions Laboratory and the Toyota Technical Center. The city is also home to National Sanitation Foundation International (NSF International), the nonprofit non-governmental organization that develops generally accepted standards for a variety of public health related industries and subject areas. Borders Books, started in Ann Arbor, was opened by brothers Tom and Louis Borders in 1971 with a stock of used books. The Borders chain was based in the city, as was its flagship store until it closed in September 2011. Domino's Pizza's headquarters is near Ann Arbor on Domino's Farms, a Frank Lloyd Wright-inspired complex just northeast of the city. Another Ann Arbor-based company is Zingerman's Delicatessen, which serves sandwiches and has developed businesses under a variety of brand names. Zingerman's has grown into a family of companies which offers a variety of products (bake shop, mail order, creamery, coffee) and services (business education). Flint Ink Corp., another Ann Arbor-based company, was the world's largest privately held ink manufacturer until it was acquired by Stuttgart-based XSYS Print Solutions in October 2005. Avfuel, a global supplier of aviation fuels and services, is also headquartered in Ann Arbor. Aastrom Biosciences, a publicly traded company that develops stem cell treatments for cardiovascular diseases, is also headquartered in Ann Arbor. Many cooperative enterprises were founded in the city; among those that remain are the People's Food Co-op and the Inter-Cooperative Council at the University of Michigan, a student housing cooperative founded in 1937. There are also three cohousing communities—Sunward, Great Oak, and Touchstone—located immediately to the west of the city limits. 
Several performing arts groups and facilities are on the University of Michigan's campus, as are museums dedicated to art, archaeology, and natural history and sciences. Founded in 1879, the University Musical Society is an independent performing arts organization that presents over 60 events each year, bringing international artists in music, dance, and theater. Since 2001 Shakespeare in the Arb has presented one play by Shakespeare each June, in a large park near downtown. Regional and local performing arts groups not associated with the university include the Ann Arbor Civic Theatre, the Arbor Opera Theater, the Ann Arbor Symphony Orchestra, the Ann Arbor Ballet Theater, the Ann Arbor Civic Ballet (established in 1954 as Michigan's first chartered ballet company), The Ark, and Performance Network Theatre. Another unique piece of artistic expression in Ann Arbor is the fairy doors. These small portals are examples of installation art and can be found throughout the downtown area. The Ann Arbor Hands-On Museum is located in a renovated and expanded historic downtown fire station. Multiple art galleries exist in the city, notably in the downtown area and around the University of Michigan campus. Aside from a large restaurant scene in the Main Street, South State Street, and South University Avenue areas, Ann Arbor ranks first among U.S. cities in the number of booksellers and books sold per capita. The Ann Arbor District Library maintains four branch outlets in addition to its main downtown building. The city is also home to the Gerald R. Ford Presidential Library. Several annual events—many of them centered on performing and visual arts—draw visitors to Ann Arbor. One such event is the Ann Arbor Art Fairs, a set of four concurrent juried fairs held on downtown streets. Scheduled on Thursday through Sunday of the third week of July, the fairs draw upward of half a million visitors. Another is the Ann Arbor Film Festival, held during the third week of March, which receives more than 2,500 submissions annually from more than 40 countries and serves as one of a handful of Academy Award–qualifying festivals in the United States. Ann Arbor has a long history of openness to marijuana, given Ann Arbor's decriminalization of cannabis, the large number of medical marijuana dispensaries in the city (one dispensary, called People's Co-op, was directly across the street from Michigan Stadium until zoning forced it to move one mile to the west), the large number of pro-marijuana residents, and the annual Hash Bash: an event that is held on the first Saturday of April. Until (at least) the successful passage of Michigan's medical marijuana law, the event had arguably strayed from its initial intent, although for years, a number of attendees have received serious legal responses due to marijuana use on University of Michigan property, which does not fall under the city's progressive and compassionate ticketing program. Ann Arbor is a major scene of college sports, most notably at the University of Michigan, a member of the Big Ten Conference. Several well-known college sports facilities exist in the city, including Michigan Stadium, the largest American football stadium in the world. The stadium was completed in 1927 and cost more than $950,000 to build. It has a 107,601 seating capacity after multiple renovations were made. The stadium is colloquially known as "The Big House". 
Crisler Center and Yost Ice Arena play host to the school's basketball (both men's and women's) and ice hockey teams, respectively. Concordia University, a member of the NAIA, also fields sports teams. Ann Arbor is represented in the NPSL by semi-pro soccer team AFC Ann Arbor, a club founded in 2014 who call themselves The Mighty Oak. A person from Ann Arbor is called an "Ann Arborite", and many long-time residents call themselves "townies". The city itself is often called "A²" ("A-squared") or "A2" ("A two") or "AA", "The Deuce" (mainly by Chicagoans), and "Tree Town". With tongue-in-cheek reference to the city's liberal political leanings, some occasionally refer to Ann Arbor as "The People's Republic of Ann Arbor" or "25 square miles surrounded by reality", the latter phrase being adapted from Wisconsin Governor Lee Dreyfus's description of Madison, Wisconsin. In "A Prairie Home Companion" broadcast from Ann Arbor, Garrison Keillor described Ann Arbor as "a city where people discuss socialism, but only in the fanciest restaurants." Ann Arbor sometimes appears on citation indexes as an author, instead of a location, often with the academic degree "MI", a misunderstanding of the abbreviation for Michigan. Ann Arbor has become increasingly gentrified in recent years. Ann Arbor has a council-manager form of government. The City Council has 11 voting members: the mayor and 10 city council members. Two city council members are elected from each of the city's five wards. The mayor and council serve four-year terms. The mayor and one council member from each ward are elected in Presidential election years, and the other five council members are elected in the alternate even-numbered years. The mayor is elected citywide. The mayor is the presiding officer of the City Council and has the power to appoint all Council committee members as well as board and commission members, with the approval of the City Council. The current mayor of Ann Arbor is Christopher Taylor, a Democrat who was elected as mayor in 2014. Day-to-day city operations are managed by a city administrator chosen by the city council. In 1960, Ann Arbor voters approved a $2.3 million bond issue to build the current city hall, which was designed by architect Alden B. Dow. The City Hall opened in 1963. In 1995, the building was renamed the Guy C. Larcom, Jr. Municipal Building in honor of the longtime city administrator who championed the building's construction. Ann Arbor is part of Michigan's 12th congressional district, represented in Congress by Representative Debbie Dingell, a Democrat. On the state level, the city is part of the 18th district in the Michigan Senate, represented by Democrat Rebekah Warren. In the Michigan House of Representatives, representation is split between the 55th district (northern Ann Arbor, part of Ann Arbor Township, and other surrounding areas, represented by Democrat Adam Zemke), the 53rd district (most of downtown and the southern half of the city, represented by Democrat Yousef Rabhi) and the 52nd district (southwestern areas outside Ann Arbor proper and western Washtenaw County, represented by Democrat Donna Lasinski). As the county seat of Washtenaw County, the Washtenaw County Trial Court (22nd Circuit Court) is located in Ann Arbor at the Washtenaw County Courthouse on Main Street. 
This court has countywide general jurisdiction and has two divisions: the Civil/Criminal (criminal and civil matters) and the Family Division (which includes Juvenile Court, Friend of the Court, and Probate Court sections). Seven judges serve on the court. Ann Arbor also has a local state district court (15th District Court), which serves only the City of Ann Arbor. In Michigan, the state district courts are limited jurisdiction courts which handle traffic violations, civil cases with claims under $25,000, landlord-tenant matters, and misdemeanor crimes. The Ann Arbor Federal Building (attached to a post office) on Liberty Street serves as one of the courthouses for the U.S. District Court for the Eastern District of Michigan and Court of Appeals for the Sixth Circuit. Progressive politics have been particularly strong in municipal government since the 1960s. Voters approved charter amendments that have lessened the penalties for possession of marijuana (1974), and that aim to protect access to abortion in the city should it ever become illegal in the State of Michigan (1990). In 1974, Kathy Kozachenko's victory in an Ann Arbor city-council race made her the country's first openly homosexual candidate to win public office. In 1975, Ann Arbor became the first U.S. city to use instant-runoff voting for a mayoral race. Adopted through a ballot initiative sponsored by the local Human Rights Party, which feared a splintering of the liberal vote, the process was repealed in 1976 after use in only one election. As of May 2016, Democrats hold the mayorship and nine out of the ten council seats. Nationally, Ann Arbor is located in Michigan's 12th congressional district, represented by Democrat Debbie Dingell. In 2015, Ann Arbor was ranked 11th safest among cities in Michigan with a population of over 50,000. It ranked safer than cities such as Royal Oak, Livonia, Canton and Clinton Township. The level of most crimes in Ann Arbor has fallen significantly in the past 20 years. In 1995 there were 294 aggravated assaults, 132 robberies and 43 rapes while in 2015 there were 128 aggravated assaults, 42 robberies and 58 rapes (under the revised definition). Ann Arbor's crime rate was below the national average in 2000. The violent crime rate was further below the national average than the property crime rate; the two rates were 48% and 11% lower than the U.S. average, respectively. Public schools are part of the Ann Arbor Public Schools (AAPS) district. AAPS has one of the country's leading music programs. In September 2008, 16,539 students had been enrolled in the Ann Arbor Public Schools. There were 21 elementary schools, five middle schools (Forsythe, Slauson, Tappan, Scarlett, and Clague), two K-8 schools (Ann Arbor STEAM and Ann Arbor open), three traditional high schools (Pioneer, Huron, and Skyline), and three alternative high schools (Community High, Stone School, and Roberto Clemente) in the district. The district also operates a K-8 open school program, Ann Arbor Open School, out of the former Mack School. This program is open to all families who live within the district. Ann Arbor Public Schools also operates a preschool and family center, with programs for at-risk infants and children before kindergarten. The district has a preschool center with both free and tuition-based programs for preschoolers in the district. Ann Arbor is home to several private schools, including the Rudolf Steiner School of Ann Arbor, Clonlara School, Michigan Islamic Academy, and Greenhills School, a prep school. 
The city is also home to several charter schools such as Central Academy (PreK-12) of the Global Educational Excellence (GEE) charter school company, and Honey Creek Community School. The University of Michigan dominates the city of Ann Arbor, providing the city with its distinctive college-town character. Other local colleges and universities include Concordia University Ann Arbor, a Lutheran liberal-arts institution; a campus of the University of Phoenix; and Cleary University, a private business school. Washtenaw Community College is located in neighboring Ann Arbor Township. In 2000, the Ave Maria School of Law, a Roman Catholic law school established by Domino's Pizza founder Tom Monaghan, opened in northeastern Ann Arbor, but the school moved to Ave Maria, Florida in 2009, and the Thomas M. Cooley Law School acquired the former Ave Maria buildings for use as a branch campus. "The Ann Arbor News", owned by the Michigan-based Booth Newspapers chain, is the major newspaper serving Ann Arbor and the rest of Washtenaw County. The newspaper ended its 174-year daily print run in 2009 due to economic difficulties and began producing two printed editions a week under the name AnnArbor.com. It resumed using its former name in 2013. It also produces a daily digital edition named Mlive.com. Another Ann Arbor-based publication that has ceased production is the "Ann Arbor Paper", a free monthly. Ann Arbor has been said to be the first significant city to lose its only daily paper. The "Ann Arbor Chronicle", an online newspaper, covered local news, including meetings of the library board, county commission, and DDA until September 3, 2014. Current publications in the city include the "Ann Arbor Journal" ("A2 Journal"), a weekly community newspaper; the "Ann Arbor Observer", a free monthly local magazine; and "Current", a free entertainment-focused alt-weekly. The "Ann Arbor Business Review" covers local business in the area. "Car and Driver" magazine and "Automobile Magazine" are also based in Ann Arbor. The University of Michigan is served by many student publications, including the independent "Michigan Daily" student newspaper, which reports on local, state, and regional issues in addition to campus news. Four major AM radio stations based in or near Ann Arbor are WAAM 1600, a conservative news and talk station; WLBY 1290, a business news and talk station; WDEO 990, Catholic radio; and WTKA 1050, which is primarily a sports station. The city's FM stations include NPR affiliate WUOM 91.7; country station WWWW 102.9; and adult-alternative station WQKL 107.1. Freeform station WCBN-FM 88.3 is a local community radio/college radio station operated by the students of the University of Michigan featuring noncommercial, eclectic music and public-affairs programming. The city is also served by public and commercial radio broadcasters in Ypsilanti, the Lansing/Jackson area, Detroit, Windsor, and Toledo. Ann Arbor is part of the Detroit television market. WPXD channel 31, the owned-and-operated Detroit outlet of the ION Television network, is licensed to the city. Until its sign-off on August 31, 2017, WHTV channel 18, a MyNetworkTV-affiliated station for the Lansing market, was broadcast from a transmitter in Lyndon Township, west of Ann Arbor. Community Television Network (CTN) is a city-provided cable television channel with production facilities open to city residents and nonprofit organizations.
Detroit and Toledo-area radio and television stations also serve Ann Arbor, and stations from Lansing and Windsor, Ontario, can be seen in parts of the area. The University of Michigan Medical Center, the only teaching hospital in the city, took the number 1 slot in "U.S. News & World Report" for best hospital in the state of Michigan, as of 2015. The University of Michigan Health System (UMHS) includes University Hospital, C.S. Mott Children's Hospital and Women's Hospital in its core complex. UMHS also operates out-patient clinics and facilities throughout the city. The area's other major medical centers include a large facility operated by the Department of Veterans Affairs in Ann Arbor, and Saint Joseph Mercy Hospital in nearby Superior Township. The city provides sewage disposal and water supply services, with water coming from the Huron River and groundwater sources. There are two water-treatment plants, one main and three outlying reservoirs, four pump stations, and two water towers. These facilities serve the city, which is divided into five water districts. The city's water department also operates four dams along the Huron River, two of which provide hydroelectric power. The city also offers waste management services, with Recycle Ann Arbor handling recycling service. Other utilities are provided by private entities. Electrical power and gas are provided by DTE Energy. AT&T Inc. is the primary wired telephone service provider for the area. Cable TV service is primarily provided by Comcast. The streets in downtown Ann Arbor conform to a grid pattern, though this pattern is less common in the surrounding areas. Major roads branch out from the downtown district like spokes on a wheel to the highways surrounding the city. The city is belted by three freeways: I-94, which runs along the southern and western portion of the city; U.S. Highway 23 (US 23), which primarily runs along the eastern edge of Ann Arbor; and M-14, which runs along the northern edge of the city. Other nearby highways include US 12 (Michigan Ave.), M-17 (Washtenaw Ave.), and M-153 (Ford Rd.). Several of the major surface arteries lead to the I-94/M-14 interchange in the west, US 23 in the east, and the city's southern areas. The city also has a system of bike routes and paths and includes the nearly complete Washtenaw County Border-to-Border Trail. The Ann Arbor Area Transportation Authority (AATA), which brands itself as "TheRide", operates public bus services throughout the city and nearby Ypsilanti. The AATA operates Blake Transit Center on Fourth Ave. in downtown Ann Arbor, and the Ypsilanti Transit Center. A separate zero-fare bus service operates within and between the University of Michigan campuses. Since April 2012, route 98 (the "AirRide") connects to Detroit Metro Airport a dozen times a day. There are also limited-stop bus services between Ann Arbor and Chelsea as well as Canton. These two routes, 91 and 92 respectively, are known as the "ExpressRide". Greyhound Lines provides intercity bus service. The Michigan Flyer, a service operated by Indian Trails, cooperates with AAATA for their AirRide and additionally offers bus service to East Lansing. Megabus has direct service to Chicago, Illinois, while a bus service is provided by Amtrak for rail passengers making connections to services in East Lansing and Toledo, Ohio. Ann Arbor Municipal Airport is a small, city-run general aviation airport located south of I-94. 
Detroit Metropolitan Airport, the area's large international airport, is east of the city, in Romulus. Willow Run Airport, east of the city near Ypsilanti, serves freight, corporate, and general aviation clients. The city was a major rail hub, notably for freight traffic between Toledo and ports north of Chicago, Illinois, from 1878 to 1982; however, the Ann Arbor Railroad also provided passenger service from 1878 to 1950. The city was served by the Michigan Central Railroad starting in 1837. The Ann Arbor and Ypsilanti Street Railway, Michigan's first interurban, served the city from 1891 to 1929. Amtrak, which provides service to the city at the Ann Arbor Train Station, operates the Wolverine train between Chicago and Pontiac, via Detroit. The present-day train station neighbors the city's old Michigan Central Depot, which was renovated as a restaurant in 1970. Ann Arbor has seven sister cities. Ælfheah of Canterbury Ælfheah (c. 953 – 19 April 1012) was an Anglo-Saxon Bishop of Winchester, later Archbishop of Canterbury. He became an anchorite before being elected abbot of Bath Abbey. His reputation for piety and sanctity led to his promotion to the episcopate, and eventually, to his becoming archbishop. Ælfheah furthered the cult of Dunstan and also encouraged learning. He was captured by Viking raiders in 1011 and killed by them the following year after refusing to allow himself to be ransomed. Ælfheah was canonised as a saint in 1078. Thomas Becket, a later Archbishop of Canterbury, prayed to him just before his own murder in Canterbury Cathedral. Purportedly born in Weston on the outskirts of Bath, Ælfheah became a monk early in life. His birth took place around 953. He first entered the monastery of Deerhurst, but then moved to Bath, where he became an anchorite. He was noted for his piety and austerity and rose to become abbot of Bath Abbey. The 12th century chronicler William of Malmesbury recorded that Ælfheah was a monk and prior at Glastonbury Abbey, but this is not accepted by all historians. Indications are that Ælfheah became abbot at Bath by 982, perhaps as early as around 977. He perhaps shared authority with his predecessor Æscwig after 968. Probably due to the influence of Dunstan, the Archbishop of Canterbury (959–988), Ælfheah was elected Bishop of Winchester in 984, and was consecrated on 19 October that year. While bishop he was largely responsible for the construction of a large organ in the cathedral, audible from over a mile (1600 m) away and said to require more than 24 men to operate. He also built and enlarged the city's churches, and promoted the cult of Swithun and his own predecessor, Æthelwold of Winchester. One act promoting Æthelwold's cult was the translation of Æthelwold's body to a new tomb in the cathedral at Winchester, which Ælfheah presided over on 10 September 996. Following a Viking raid in 994, a peace treaty was agreed with one of the raiders, Olaf Tryggvason. Besides receiving danegeld, Olaf converted to Christianity and undertook never to raid or fight the English again. Ælfheah may have played a part in the treaty negotiations, and it is certain that he confirmed Olaf in his new faith. In 1006 Ælfheah succeeded Ælfric as Archbishop of Canterbury, taking Swithun's head with him as a relic for the new location. He went to Rome in 1007 to receive his pallium—symbol of his status as an archbishop—from Pope John XVIII, but was robbed during his journey.
While at Canterbury he promoted the cult of Dunstan, ordering the writing of the second "Life of Dunstan", which Adelard of Ghent composed between 1006 and 1011. He also introduced new practices into the liturgy, and was instrumental in the Witenagemot's recognition of Wulfsige of Sherborne as a saint in about 1012. Ælfheah sent Ælfric of Eynsham to Cerne Abbey to take charge of its monastic school. He was present at the council of May 1008 at which Wulfstan II, Archbishop of York, preached his "Sermo Lupi ad Anglos" ("The Sermon of the Wolf to the English"), castigating the English for their moral failings and blaming these failings for the tribulations afflicting the country. In 1011 the Danes again raided England, and from 8–29 September they laid siege to Canterbury. Aided by the treachery of Ælfmaer, whose life Ælfheah had once saved, the raiders succeeded in sacking the city. Ælfheah was taken prisoner and held captive for seven months. Godwine (Bishop of Rochester), Leofrun (abbess of St Mildrith's), and the king's reeve, Ælfweard, were also captured, but the abbot of St Augustine's Abbey, Ælfmær, managed to escape. Canterbury Cathedral was plundered and burned by the Danes following Ælfheah's capture. Ælfheah refused to allow a ransom to be paid for his freedom, and as a result was killed on 19 April 1012 at Greenwich (then in Kent, now part of London), reputedly on the site of St Alfege's Church. The account of Ælfheah's death appears in the E version of the "Anglo-Saxon Chronicle". Ælfheah was the first Archbishop of Canterbury to die a violent death. A contemporary report tells that Thorkell the Tall attempted to save Ælfheah from the mob about to kill him by offering everything he owned except for his ship, in exchange for Ælfheah's life; Thorkell's presence is not mentioned in the "Anglo-Saxon Chronicle", however. Some sources record that the final blow, with the back of an axe, was delivered as an act of kindness by a Christian convert known as "Thrum." Ælfheah was buried in St Paul's Cathedral. In 1023 his body was moved by King Cnut to Canterbury, with great ceremony. Thorkell the Tall was appalled at the brutality of his fellow raiders, and switched sides to the English king Æthelred the Unready following Ælfheah's death. Pope Gregory VII canonised Ælfheah in 1078, with a feast day of 19 April. Lanfranc, the first post-Conquest archbishop, was dubious about some of the saints venerated at Canterbury. He was persuaded of Ælfheah's sanctity, but Ælfheah and Augustine of Canterbury were the only pre-conquest Anglo-Saxon archbishops kept on Canterbury's calendar of saints. Ælfheah's shrine, which had become neglected, was rebuilt and expanded in the early 12th century under Anselm of Canterbury, who was instrumental in retaining Ælfheah's name in the church calendar. After the 1174 fire in Canterbury Cathedral, Ælfheah's remains together with those of Dunstan were placed around the high altar, at which Thomas Becket is said to have commended his life into Ælfheah's care shortly before his martyrdom during the Becket controversy. The new shrine was sealed in lead, and was north of the high altar, sharing the honour with Dunstan's shrine, which was located south of the high altar. A "Life of Saint Ælfheah" in prose and verse was written by a Canterbury monk named Osbern, at Lanfranc's request. The prose version has survived, but the "Life" is very much a hagiography: many of the stories it contains have obvious Biblical parallels, making them suspect as a historical record.
In the late medieval period, Ælfheah's feast day was celebrated in Scandinavia, perhaps because of the saint's connection with Cnut. Few church dedications to him are known, with most of them occurring in Kent and one each in London and Winchester; in addition to St Alfege's Church in Greenwich, a nearby hospital (1931–1968) was named after him. In the town of Solihull in the West Midlands, St Alphege Church, dating back to approximately 1277, is dedicated to Ælfheah. In 1929 a new church in Bath was dedicated to Ælfheah, under the name Alphege, designed by Giles Gilbert Scott in homage to the ancient Roman church of Santa Maria in Cosmedin. Aaliyah Aaliyah Dana Haughton (January 16, 1979 – August 25, 2001) was an American singer, actress, and model. Born in Brooklyn, New York, and raised in Detroit, Michigan, Aaliyah first gained recognition at the age of 10, when she appeared on the television show "Star Search" and performed in concert alongside Gladys Knight. At age 12, Aaliyah signed with Jive Records and her uncle Barry Hankerson's Blackground Records. Hankerson introduced her to R. Kelly, who became her mentor, as well as lead songwriter and producer of her debut album, "Age Ain't Nothing but a Number". The album sold three million copies in the United States and was certified double platinum by the Recording Industry Association of America (RIAA). After facing allegations of an illegal marriage with Kelly, Aaliyah ended her contract with Jive and signed with Atlantic Records. Aaliyah worked with record producers Timbaland and Missy Elliott for her second album, "One in a Million", which sold three million copies in the United States and over eight million copies worldwide. In 2000, Aaliyah appeared in her first film, "Romeo Must Die". She contributed to the film's soundtrack, which spawned the single "Try Again". The song topped the "Billboard" Hot 100 solely on airplay, making Aaliyah the first artist in "Billboard" history to achieve this feat. "Try Again" also earned Aaliyah a Grammy Award nomination for Best Female R&B Vocalist. After completing "Romeo Must Die", Aaliyah filmed her role in "Queen of the Damned", and released her third and final studio album, "Aaliyah", in 2001. On August 25, 2001, Aaliyah and eight others were killed in a plane crash in the Bahamas after filming the music video for the single "Rock the Boat". The pilot, Luis Morales III, was unlicensed at the time of the accident and toxicology tests revealed that he had traces of cocaine and alcohol in his system. Aaliyah's family later filed a wrongful death lawsuit against Blackhawk International Airways, which was settled out of court. Aaliyah's music has continued to achieve commercial success with several posthumous releases, and has sold an estimated 24 to 32 million albums worldwide. She has been credited with helping to redefine contemporary R&B, pop and hip hop, earning her the nicknames "Princess of R&B" and "Queen of Urban Pop". She is listed by "Billboard" as the tenth most successful female R&B artist of the past 25 years, and the 27th most successful R&B artist in history. Aaliyah Dana Haughton was born on January 16, 1979, in Brooklyn, New York, and was the younger child of Diane and Michael "Miguel" Haughton (1951–2012). She was African American, and had Native American (Oneida) heritage from a grandmother.
Her name has been described as a female version of the Arabic "Ali"; the related Arabic and Hebrew name "Aliya (Hebrew: אליה)" derives from the Hebrew word "aliyah (Hebrew: עלייה)" and means "highest, most exalted one, the best." Regardless of origin, the singer was highly fond of her Semitic name, calling it "beautiful", asserting that she was "very proud of it," and saying that she strove to live up to her name every day. At a young age, Aaliyah was enrolled in voice lessons by her mother. She started performing at weddings, in her church choir and at charity events. When she was five years old, her family moved to Detroit, Michigan, where she was raised along with her older brother, Rashad. She attended a Catholic school, Gesu Elementary, where, in first grade, she received a part in the stage play "Annie". From then on, she was determined to become an entertainer. In Detroit, her father began working in the warehouse business, one of his brother-in-law Barry Hankerson's widening interests. Her mother stayed home and raised Aaliyah and her brother. Throughout her life, she had a good relationship with her brother, which traced back to their childhood, as Rashad reflected that growing up with Aaliyah was "amazing". He recalled her running around their home singing, and said it was never annoying because she had a "beautiful voice". She and her brother became close to their cousin Jomo Hankerson since, growing up, they lived "about five blocks apart". Jomo walked Aaliyah and Rashad home from school when their mother was not able to pick them up and recalled the Haughton household being filled with music. Aaliyah's family was very close due to the struggles of her grandparents, and when the Haughtons moved to Detroit, the Hankersons were ready to take them in if necessary. These same bonds led to ties in the music industry, under the Blackground Records label. Aaliyah's mother was a vocalist, and her uncle, Barry Hankerson, was an entertainment lawyer who had been married to Gladys Knight. As a child, Aaliyah traveled with Knight and worked with an agent in New York to audition for commercials and television programs, including "Family Matters"; she went on to appear on "Star Search" at the age of ten. Aaliyah chose to begin auditioning while her mother made the decision to have her surname dropped. She auditioned for several record labels and at age 11 appeared in concerts alongside Knight. She had several pets during her childhood, including ducks, snakes and iguanas. Her cousin Jomo had a pet alligator, which Aaliyah felt was too much, remarking, "that was something I wasn't going to stroke." Her grandmother died in 1991. Years after her death, Aaliyah said her grandmother supported everyone in the family and always wanted to hear her sing, and admitted that she had "spoiled" her and her brother Rashad "to death." She also enjoyed Aaliyah's singing and would have Aaliyah sing for her. Aaliyah stated that she thought of her grandmother whenever she fell into depression. Aaliyah's hands reminded her of her aunt, who died when she was "very young"; Aaliyah referred to her as an "amazingly beautiful woman". When she was growing up, Aaliyah attended Detroit schools and believed she was well-liked, but got teased for her short stature. She recalled coming into her own prior to age 15 and grew to love her height. Her mother would tell her to be happy that she was small and compliment her. 
Other children disliked Aaliyah, but she did not stay focused on them. "You always have to deal with people who are jealous, but there were so few it didn't even matter. The majority of kids supported me, which was wonderful. When it comes to dealing with negative people, I just let it in one ear and out the other. Those people were invisible to me." Even in her adult life, she considered herself small. She had "learned to accept and love" herself and added: "... the most important thing is to think highly of yourself because if you don't, no one else will". During her audition for acceptance to the Detroit High School for the Fine and Performing Arts, Aaliyah sang "Ave Maria" in its entirety in Italian. Aaliyah, who maintained a perfect 4.0 grade point average upon graduating from Detroit High School for the Fine and Performing Arts, felt education was important. She saw fit to keep her grades up despite the pressures and time constraints brought on by the early parts of her career. She labeled herself as a perfectionist and recalled always being a good student. Aaliyah reflected: "I always wanted to maintain that, even in high school when I first started to travel. I wanted to keep that 4.0. Being in the industry, you know, I don't want kids to think, 'I can just sing and forget about school.' I think it's very important to have an education, and even more important to have something to fall back on." She applied this in her own life, as she planned to "fall back on" another part of the entertainment industry: she believed she could teach music history, or open her own school to teach music or drama, if she did not make a living as a recording artist, because, as she reasoned, "when you pick a career it has to be something you love". After Hankerson signed a distribution deal with Jive Records, he signed Aaliyah to his Blackground Records label at the age of 12. Hankerson later introduced her to recording artist and producer R. Kelly, who became Aaliyah's mentor, as well as lead songwriter and producer of her first album, which was recorded when she was 14. Aaliyah's debut album, "Age Ain't Nothing but a Number", was released under her mononym "Aaliyah", by Jive and Blackground Records on May 24, 1994; the album debuted at number 24 on the "Billboard" 200 chart, selling 74,000 copies in its first week. It ultimately peaked at number 18 on the "Billboard" 200 and sold over three million copies in the United States, where it was certified double platinum by the RIAA. In Canada, the album sold over 50,000 copies and was certified gold by the CRIA. Aaliyah's debut single, "Back & Forth", topped the "Billboard" Hot R&B/Hip-Hop Songs chart for three weeks and was certified Gold by the RIAA. The second single, a cover of The Isley Brothers' "At Your Best (You Are Love)", peaked at number six on the "Billboard" Hot 100 and was also certified Gold by the RIAA. The title track, "Age Ain't Nothing but a Number", peaked at number 75 on the Hot 100. Additionally, she released "The Thing I Like" as part of the soundtrack to the 1994 film "A Low Down Dirty Shame". "Age Ain't Nothing but a Number" received generally favorable reviews from music critics. Some writers noted that the blend of Aaliyah's "silky vocals" and "sultry voice" with Kelly's new jack swing helped define R&B in the 1990s. Her sound was also compared to that of female quartet En Vogue. 
Christopher John Farley of "Time" magazine described the album as a "beautifully restrained work", noting that Aaliyah's "girlish, breathy vocals rode calmly on R. Kelly's rough beats". Stephen Thomas Erlewine of AllMusic felt that the album had its "share of filler", but described the singles as "slyly seductive". He also claimed that the songs on the album were "frequently better" than those on Kelly's second studio album, "12 Play". The single "At Your Best (You Are Love)" was criticized by "Billboard" for being out of place on the album and for its length. In 1996, Aaliyah left Jive Records and signed with Atlantic Records. She worked with record producers Timbaland and Missy Elliott, who contributed to her second studio album, "One in a Million". Missy Elliott recalled Timbaland and herself being nervous to work with Aaliyah, since Aaliyah had already released her successful debut album while Missy Elliott and Timbaland were just starting out. Missy Elliott also feared that Aaliyah would be a diva, but reflected that Aaliyah "came in and was so warming; she made us immediately feel like family." The album yielded the single "If Your Girl Only Knew", which topped the "Billboard" Hot R&B/Hip-Hop Songs for two weeks. It also generated the singles "Hot Like Fire" and "4 Page Letter". The following year, Aaliyah was featured on Timbaland & Magoo's debut single, "Up Jumps da Boogie". "One in a Million" peaked at number 18 on the "Billboard" 200, selling 3 million copies in the United States and over eight million copies worldwide. The album was certified double platinum by the RIAA on June 16, 1997, denoting shipments of two million copies. The month prior to "One in a Million"s release, on May 5, 1997, music publisher Windswept Pacific filed a lawsuit in U.S. District Court against Aaliyah claiming she had illegally copied Bobby Caldwell's "What You Won't Do for Love" for the single "Age Ain't Nothing but a Number". Aaliyah attended the Detroit High School for the Fine and Performing Arts, where she majored in drama and graduated in 1997 with a 4.0 GPA. Aaliyah began her acting career that same year; she played herself in the police drama television series "New York Undercover". During this time, Aaliyah participated in the Children's Benefit Concert, a charity concert that took place at the Beacon Theatre in New York. Aaliyah also became the spokesperson for Tommy Hilfiger Corporation. In 1997, Aaliyah performed the Christmas carol "What Child Is This" at the annual holiday special "Christmas in Washington". She contributed to the soundtrack album for the Fox Animation Studios animated feature "Anastasia", performing a cover version of "Journey to the Past", which earned songwriters Lynn Ahrens and Stephen Flaherty a nomination for the Academy Award for Best Original Song. Aaliyah performed the song at the 1998 Academy Awards ceremony and became the youngest singer to perform at the event. The song "Are You That Somebody?" was featured on the "Dr. Dolittle" soundtrack, which earned Aaliyah her first Grammy Award nomination. The song peaked at number 21 on the Hot 100. In 1999, Aaliyah landed her first film role in "Romeo Must Die", released March 22, 2000. Aaliyah starred opposite martial artist Jet Li, playing a couple who fall in love amid their warring families. It grossed US$18.6 million in its first weekend, ranking number two at the box office. 
Aaliyah purposely stayed away from reviews of the film to "make it easier on" herself, but she heard "that people were able to get into me, which is what I wanted." In contrast, some critics felt there was no chemistry between her and Jet Li, and viewed the film as too simplistic. This was echoed by Elvis Mitchell of "The New York Times", who wrote that while Aaliyah was "a natural" and the film was conceived as a spotlight for both her and Li, "they have so little chemistry together you'd think they're putting out a fire instead of shooting off sparks." Her role was well received by Glen Oliver of IGN, who liked that she did not portray her character "as a victimized female" but instead "as a strong female who does not come across as an over-the-top Women's Rights Advocate." In addition to acting, Aaliyah served as an executive producer of the film's soundtrack, to which she contributed four songs. "Try Again" was released as a single from the soundtrack; the song topped the "Billboard" Hot 100, making Aaliyah the first artist to top the chart based solely on airplay; this led to the song being released as a 12" vinyl and a 7" single. The music video won the Best Female Video and Best Video from a Film awards at the 2000 MTV Video Music Awards. It also earned her a Grammy Award nomination for Best Female R&B Vocal Performance. The soundtrack went on to sell 1.5 million copies in the United States. After completing "Romeo Must Die", Aaliyah began to work on her second film, "Queen of the Damned". She played the role of an ancient vampire, Queen Akasha, which she described as a "manipulative, crazy, sexual being". Prior to her death, she expressed interest in recording songs for the film's soundtrack and welcomed the possibility of collaborating with Jonathan Davis. She was also scheduled to film the sequels to "The Matrix" as the character Zee. In May 2001, Shaquille O'Neal admitted that remarks he had made during an appearance on a radio station, in which he claimed to have engaged in sexual intercourse with Aaliyah, Cindy Crawford and Venus Williams, were false, and he apologized to the three. All three denied the claims. The following month, June 2001, Aaliyah posed for a photo shoot with Eric Johnson. Johnson kept the images in his "private personal archive" for thirteen years before providing digital copies of 13 Aaliyah photographs to an online photography magazine and authorizing the publication to use the photographs for a story they were doing on Aaliyah. Not long after, he filed a lawsuit claiming ABC had infringed his rights, since the corporation had authorized further reproduction by reproducing them online. Aaliyah released her self-titled album, "Aaliyah", in July 2001. It debuted at number two on the "Billboard" 200, selling 187,000 copies in its first week. The first single from the album, "We Need a Resolution", peaked at number 59 on the "Billboard" Hot 100. She finished recording the album in March 2001, a year after she had begun recording tracks in March 2000. When she started recording the album, Aaliyah's publicist disclosed the album's release date as most likely being in October 2000. Filming for "Queen of the Damned" delayed the release of "Aaliyah". Aaliyah enjoyed balancing her singing and acting careers. 
Though she called music a "first" for her, she had also been acting since she was young and had wanted to begin acting "at some point in my career," but "wanted it to be the right time and the right vehicle" and felt "Romeo Must Die" "was it". "Aaliyah" was released five years after "One in a Million". Aaliyah had not intended for the albums to have such a gap between them. "I wanted to take a break after "One in a Million" to just relax, think about how I wanted to approach the next album. Then, when I was ready to start back up, "Romeo" happened, and so I had to take another break and do that film and then do the soundtrack, then promote it. The break turned into a longer break than I anticipated." Connie Johnson of the "Los Angeles Times" argued that Aaliyah having to focus on her film career may have caused her not to give the album "the attention it merited." Collaborator Timbaland concurred, stating that he was briefly in Australia to work on the album while Aaliyah was filming and did not feel the same production had gone into "Aaliyah" as "One in a Million" had. He also said Virgin Records had rushed the album and that Aaliyah had specifically requested that Missy Elliott and Timbaland work on "Aaliyah" with her. The week after Aaliyah's death, her third studio album, "Aaliyah", rose from number 19 to number one on the "Billboard" 200. "Rock the Boat" was released as a posthumous single. The music video premiered on BET's "Access Granted"; it became the most viewed and highest rated episode in the history of the show. The song peaked at number 14 on the "Billboard" Hot 100 and number two on the "Billboard" Hot R&B/Hip-Hop Songs chart. It was also included on the "Now That's What I Call Music! 8" compilation series; a portion of the album's profits was donated to the Aaliyah Memorial Fund. Promotional posters for "Aaliyah" that had been put up in major cities such as New York and Los Angeles became makeshift memorials for grieving fans. "More than a Woman" and "I Care 4 U" were released as posthumous singles and peaked within the top 25 of the "Billboard" Hot 100. The album was certified double Platinum by the RIAA and sold 2.6 million copies in the United States. "More than a Woman" reached number one on the UK Singles Chart, making Aaliyah the first deceased female artist to top that chart. It was replaced at number one by George Harrison's "My Sweet Lord", the only time in the chart's history that one deceased artist has replaced another at number one. In July 2001, she allowed MTV's show "Diary" behind-the-scenes access to her life and stated, "I am truly blessed to wake up every morning to do something that I love; there is nothing better than that." She continued, "Everything is worth it – the hard work, the times when you're tired, the times when you are a bit sad. In the end, it's all worth it because it really makes me happy. I wouldn't trade it for anything else in the world. I've got good friends, a beautiful family and I've got a career. I thank God for his blessings every single chance I get." Aaliyah was signed to appear in several future films, including "Honey", a romantic film titled "Some Kind of Blue", and a Whitney Houston-produced remake of the 1976 film "Sparkle". Whitney Houston recalled Aaliyah being "so enthusiastic" about the film and wanting to appear in it "so badly". 
Houston also voiced her belief that Aaliyah was more than qualified for the role; the film was shelved after she died, since Aaliyah had "gone to a better place". Studio officials of Warner Brothers stated that Aaliyah and her mother had both read the script for "Sparkle". According to them, Aaliyah was passionate about playing the lead role of a young singer in a girl group. The film was released in 2012, eleven years after Aaliyah's death. Before her death, Aaliyah had filmed part of her role in "The Matrix Reloaded" and was scheduled to appear in "The Matrix Revolutions" as Zee. Aaliyah told "Access Hollywood" that she was "beyond happy" to have landed the role. The role subsequently went to Nona Gaye. Aaliyah's scenes were included in the tribute section of the "Matrix Ultimate Collection" series. In November 2001, Ronald Isley stated that Aaliyah and the Isley Brothers had discussed a collaboration prior to her death. She had previously covered the Isley Brothers' single "At Your Best (You Are Love)". By 2001, Aaliyah was enjoying her seven-year career and felt a sense of accomplishment. "This is what I always wanted," she said of her career in "Vibe" magazine. "I breathe to perform, to entertain, I can't imagine myself doing anything else. I'm just a really happy girl right now. I honestly love every aspect of this business. I really do. I feel very fulfilled and complete." Aaliyah had the vocal range of a soprano. With the release of her debut single "Back & Forth", Dimitri Ehrlich of "Entertainment Weekly" expressed that Aaliyah's "silky vocals are more agile than those of self-proclaimed queen of hip-hop soul Mary J. Blige." In her review of Aaliyah's second studio album "One in a Million" for "Vibe" magazine, music critic Dream Hampton said that Aaliyah's "deliciously feline" voice had the same "pop appeal" as Janet Jackson's. Aaliyah described her sound as "street but sweet", which featured her "gentle" vocals over a "hard" beat. Though Aaliyah did not write any of her own material, her lyrics were described as in-depth. She incorporated R&B, pop and hip hop into her music. Her songs were often uptempo and at the same time often dark, revolving around "matters of the heart". After her R. Kelly-produced debut album, Aaliyah worked with Timbaland and Missy Elliott, whose productions were more electronic. Sasha Frere-Jones of "The Wire" found Aaliyah's "Are You That Somebody?" to be Timbaland's "masterpiece" and exemplary of his production's start-stop rhythms, with "big half-second pauses between beats and voices". Keith Harris of "Rolling Stone" cited "Are You That Somebody?" as "one of '90s R&B's most astounding moments". Aaliyah's songs have been said to have "crisp production" and "staccato arrangements" that "extend genre boundaries" while containing "old-school" soul music. Kelefa Sanneh of "The New York Times" called Aaliyah "a digital diva who wove a spell with ones and zeroes", and wrote that her songs comprised "simple vocal riffs, repeated and refracted to echo the manipulated loops that create digital rhythm", as Timbaland's "computer-programmed beats fitted perfectly with her cool, breathy voice to create a new kind of electronic music." When she experimented with other genres on "Aaliyah", such as Latin pop and heavy metal, "Entertainment Weekly"s Craig Seymour panned the attempt. 
Analyzing her eponymous album, the British publication "NME" ("New Musical Express") felt that Aaliyah's radical third album was intended to consolidate her position as US R&B's most experimental artist. As her albums progressed, writers felt that Aaliyah matured, calling her progress a "declaration of strength and independence". ABC News noted that on her third album Aaliyah's music was evolving from punchy, pop-influenced hip hop and R&B to a more mature, introspective sound. Stephen Thomas Erlewine of AllMusic described her eponymous album, "Aaliyah", as "a statement of maturity and a stunning artistic leap forward" and called it one of the strongest urban soul records of its time. She portrayed "unfamiliar sounds, styles and emotions" on the album, but managed to please critics with its contemporary sound. Ernest Hardy of "Rolling Stone" felt that "Aaliyah" reflected a stronger technique, on which she gave her best vocal performance. Prior to her death, Aaliyah expressed a desire to learn about the burgeoning UK garage scene she had heard about at the time. As an artist, Aaliyah often voiced that she was inspired by a number of performers. These included Michael Jackson, Stevie Wonder, Sade, En Vogue, Nine Inch Nails, Korn, Prince, Naughty by Nature, Johnny Mathis, Janet Jackson and Barbra Streisand. Aaliyah expressed that Michael Jackson's "Thriller" was her "favorite album" and that "nothing will ever top "Thriller"." She stated that she admired Sade because "she stays true to her style no matter what ... she's an amazing artist, an amazing performer ... and I absolutely love her." Aaliyah expressed she had always desired to work with Janet Jackson, to whom she had frequently been compared over the course of her career, stating "I admire her a great deal. She's a total performer ... I'd love to do a duet with Janet Jackson." Jackson reciprocated Aaliyah's affections, commenting "I've loved her from the beginning because she always comes out and does something different, musically." Jackson also stated she would have enjoyed collaborating with Aaliyah. Aaliyah focused on her public image throughout her career. She often wore baggy clothes and sunglasses, stating that she wanted to be herself. She described her image as being "important ... to differentiate yourself from the rest of the pack". She often wore black clothing, starting a trend for similar fashion among women in the United States and Japan. Aaliyah's fashionable style has been credited as an influence on newer fashion trends called "Health Goth" and "Ghetto Goth", also known as GHE20 GOTH1K. Aaliyah participated in fashion designer Tommy Hilfiger's All America Tour and was featured in Tommy Jean ads, which depicted her in boxer shorts, baggy jeans and a tube top. Hilfiger's brother, Andy, called it "a whole new look" that was "classy but sexy". Carson Daly, a former VJ on MTV's "Total Request Live", commented on Aaliyah's style, saying that she was "cutting edge", "always one step ahead of the curve", and that "the TRL audience looks to her to figure out what's hot and what's new". When she changed her hairstyle, Aaliyah took her mother's advice to cover her left eye, much like Veronica Lake. The look has become known as her signature and been referred to as a fusion of "unnerving emotional honesty" and "a sense of mystique". In 1998, she hired a personal trainer to keep in shape, exercising five days a week and eating diet foods. Aaliyah was praised for her "clean-cut image" and "moral values". 
Robert Christgau of "The Village Voice" wrote of Aaliyah's artistry and image, "she was lithe and dulcet in a way that signified neither jailbait nor hottie—an ingenue whose selling point was sincerity, not innocence and the obverse it implies." Aaliyah was viewed by others as a role model. Emil Wilbekin, described by CNN as "a friend of Aaliyah's" and follower of her career, explained: "Aaliyah is an excellent role model, because she started her career in the public eye at age 15 with a gold album entitled "Age Ain't Nothing but a Number". And then her second album, "One in a Million" went double platinum. She had the leading role in "Romeo Must Die", which was a box office success. She's won numerous awards, several MTV music video awards, and aside from her professional successes, many of her lyrics are very inspirational and uplifting. She also carried herself in a very professional manner. She was well spoken. She was beautiful, but she didn't use her beauty to sell her music. She used her talent. Many young hip-hop fans greatly admire her." She also was seen by others as a sex symbol. Aaliyah did not have a problem with being considered one. "I know that people think I'm sexy and I am looked at as that, and it is cool with me," she stated. "It's wonderful to have sex appeal. If you embrace it, it can be a very beautiful thing. I am totally cool with that. Definitely. I see myself as sexy. If you are comfortable with it, it can be very classy and it can be very appealing." The single "We Need a Resolution" was argued to have transformed "the once tomboy into a sexy grown woman". Aaliyah mentioned that her mother, during her childhood, would take pictures of her and notice a sex appeal. She reinforced her mother's belief by saying that she did feel "sexy for sure" and that she embraced it and was comfortable with this view of her. In her spare time, she was mostly a home person, which dated back to her younger years, but on occasion went out and played laser tag. She reasoned this was due to her liking "the simple things in life". Despite having a prosperous career that allowed her to purchase the vehicle she wanted, Aaliyah revealed during her final interview on August 21, 2001 on "106 & Park" that she had never owned a car due to living in New York City and hiring a car or driver on a regular basis. Aaliyah's family played a major role in the course of her career. Aaliyah's father Michael Haughton, who died in 2012, served as her personal manager. Her mother assisted her in her career while brother Rashad Haughton and cousin Jomo Hankerson worked with her consistently. Her father's illness ended his co-management of Aaliyah with her mother Diane Haughton. She ran all of her decisions by Rashad. Aaliyah was known to have usually been accompanied by members of her family and the "Rock the Boat" filming was credited by Rashad Haughton as being the first and only time her family was not present. In October 2001, Rashad stated: "It really boggles everyone [that] from Day One, every single video she ever shot there's always been myself or my mother or my father there. The circumstances surrounding this last video were really strange because my mother had eye surgery and couldn't fly. That really bothered her because she always traveled. My dad had to take care of my mom at that time. And I went to Australia to visit some friends. We really couldn't understand why we weren't there. You ask yourself maybe we could have stopped it. But you can't really answer the question. 
There's always gonna be that question of why." Her friend Kidada Jones said that in the last year of her life, Aaliyah's parents had given her more freedom and she had talked about wanting a family. "She wanted to have a family, and we talked about how we couldn't wait to kick back with our babies." Gladys Knight, who had been married to Aaliyah's uncle Barry Hankerson, was essential to the start of Aaliyah's career, as she provided many of Aaliyah's earliest performance opportunities. One of their last conversations concerned Aaliyah having difficulty with "another young artist" that she was trying to work with. Knight felt the argument was "petty" and insisted that Aaliyah remain who she was in spite of the conflict. With the release of "Age Ain't Nothing but a Number", rumors circulated of a relationship between Aaliyah and R. Kelly. Shortly after, the release of "Age Ain't Nothing but a Number" and the adult content that Kelly had written for Aaliyah fueled speculation about a secret marriage. "Vibe" magazine later revealed a marriage certificate that listed the couple as married on August 31, 1994, in Sheraton Gateway Suites in Rosemont, Illinois. Aaliyah, who was 15 at the time, was listed as 18 on the certificate; the illegal marriage was annulled in February 1995 by her parents. The pair continued to deny marriage allegations, stating that neither was married. One particular allegation among the rumors was that Aaliyah had wed Kelly without her parents' knowledge. Aaliyah reportedly developed a friendship with Kelly during the recording of her debut album. As she recalled to "Vibe" magazine in 1994, she and Kelly would "go watch a movie" and "go eat" when she got tired and would then "come back and work". She described the relationship between her and Kelly as being "rather close." In 2016, Kelly said that he was in love with Aaliyah as he was with "anybody else." In December 1994, Aaliyah told the "Sun-Times" that whenever she was asked about being married to Kelly, she urged them not to believe "all that mess" and that she and Kelly were "close" and "people took it the wrong way." In his 2011 book "The Man Behind the Man: Looking From the Inside Out", Demetrius Smith Sr., a former member of Kelly's entourage, wrote that Kelly told him "in a voice that sounded as if he wanted to burst into tears" that he thought Aaliyah was pregnant. Jamie Foster Brown, in a 1994 issue of "Sister 2 Sister", wrote that "R. Kelly told me that he and Aaliyah got together and it was just magic." Brown also reported hearing about a relationship between them. "I've been hearing about Robert and Aaliyah for a while—that she was pregnant. Or that she was coming and going in and out of his house. People would see her walking his dog, 12 Play, with her basketball cap and sunglasses on. Every time I asked the label, they said it was platonic. But I kept hearing complaints from people about her being in the studio with all those men." Brown later added "at 15, you have all those hormones and no brains attached to them." Aaliyah admitted in court documents that she had lied about her age. In May 1997, she filed suit in Cook County seeking to have all records of the marriage expunged because she was not old enough under state law to get married without her parents' consent. It was reported that she cut off all professional and personal ties with Kelly after the marriage was annulled and ceased contact with him. 
In 2014, Jomo Hankerson stated that Aaliyah "got villainized" over her relationship with Kelly, and that the scandal over the marriage made it difficult to find producers for her second album. "We were coming off of a multi-platinum debut album and except for a couple of relationships with Jermaine Dupri and Puffy, it was hard for us to get producers on the album." Hankerson also expressed confusion over why "they were upset" with Aaliyah given her age at the time. Aaliyah was known to avoid answering questions regarding Kelly following the professional split. During an interview with Christopher John Farley, she was asked if she was still in contact with him and if she would ever work with him again. Farley said Aaliyah responded with a "firm, frosty 'no'" to both of the questions. "Vibe" magazine said Aaliyah changed the subject anytime "you bring up the marriage with her". A spokeswoman for Aaliyah said in 2000 that when "R. Kelly comes up, she doesn't even speak his name, and nobody's allowed to ask about it at all". Kelly later commented that Aaliyah had opportunities to address the pair's relationship after they separated professionally but chose not to. In the years following her death, further allegations regarding underage girls were made against R. Kelly, and his marriage to Aaliyah was cited as evidence of his involvement with them. He has refused to discuss his relationship with her, citing her death. "Out of respect for her, and her mom and her dad, I will not discuss Aaliyah. That was a whole other situation, a whole other time, it was a whole other thing, and I'm sure that people also know that." Aaliyah's mother, Diane Haughton, reflected that everything "that went wrong in her life" began with her relationship with Kelly. However, the allegations have been said to have done "little to taint Aaliyah's image or prevent her from becoming a reliable '90s hitmaker with viable sidelines in movies and modeling." At the time of her death, Aaliyah was dating Roc-A-Fella Records co-founder Damon Dash and, though they were not formally engaged, in interviews given after Aaliyah's death, Dash claimed the couple had planned to marry. Aaliyah and Dash met in 2000 through his accountant and formed a friendship. Dash has said he is unsure of how he and Aaliyah started dating and that the two just understood each other. "I don't know [how we got involved], just spending time, you know, we just saw things the same and it was new, you know what I mean? Meeting someone that is trying to do the same thing you are doing in the urban market, in the same urban market place but not really being so urban. It was just; her mind was where my mind was. She understood me and she got my jokes. She thought my jokes were funny." Dash expressed his belief that Aaliyah was the "one" and claimed the pair were not officially engaged, but had spoken about getting married prior to her death. Aaliyah never publicly addressed her relationship with Dash as being anything but platonic. In May 2001, she hosted a party for Dash's 30th birthday at a New York City club, where they were spotted together and Dash was seen escorting her to a bathroom. Addressing this, Aaliyah stated that she and Dash were just "very good friends" and chose to "keep it at that" for the time being. Just two weeks before her death, Aaliyah traveled from New Jersey to East Hampton, New York, to visit Dash at the summer house he shared with Jay-Z. 
The couple were separated for long periods at a time, as Dash recalled that Aaliyah continuously shot films and would be gone for months, often returning only briefly before continuing her schedule. Dash was also committed to "his own thing", which did not make matters any better. Despite this, they understood that the time they had together was special. Dash remembered they would "be in a room full of people talking to each other and it felt like everyone was listening but it would be just us. It would be like we were the only ones in the room". Dash always felt their time together was essential and Aaliyah was the person he was interested in being with, which is why, as he claimed, they had begun speaking about engagement. The relationship was mentioned in the lyrics of Jay-Z's remix to her song "Miss You", released after her death. On August 25, 2001, at 6:50 p.m. (EDT), Aaliyah and members of the record company boarded a twin-engine Cessna 402B (registration N8097W) at the Marsh Harbour Airport in the Abaco Islands, the Bahamas, to travel to the Opa-locka Airport in Florida, after they had completed filming the music video for "Rock the Boat". They had a flight scheduled the following day, but with filming finishing early, Aaliyah and her entourage were eager to return to the U.S. and made the decision to leave immediately. The designated airplane was smaller than the Cessna 404 on which they had originally arrived, but the whole party and all of the equipment were accommodated on board. The plane crashed shortly after takeoff, near the departure end of the runway, and exploded. Aaliyah and the eight others on board—pilot Luis Morales III, hair stylist Eric Forman, Anthony Dodd, security guard Scott Gallin, family friend Keith Wallace, make-up stylist Christopher Maldonado, and Blackground Records employees Douglas Kratz and Gina Smith—were all killed. Gallin survived the initial impact and spent his last moments worrying about Aaliyah's condition, according to ambulance drivers. The US Federal Aviation Administration (FAA) in Atlanta identified the plane as being owned by the Florida-based company Skystream. Initial reports of the crash identified Luis Morales as "L Marael". According to findings from an inquest conducted by the coroner's office in the Bahamas, Aaliyah suffered from "severe burns and a blow to the head", in addition to severe shock and a weak heart. The coroner theorized that she went into such a state of shock that even if she had survived the crash, her recovery would have been nearly impossible given the severity of her injuries. The bodies were taken to the morgue at Princess Margaret Hospital in Nassau, where they were kept for relatives to help identify them. Some of the bodies were badly burned in the crash. As the subsequent investigation determined, when the aircraft attempted to depart it was over its maximum take-off weight and was carrying one excess passenger, according to its certification. An informational report issued by the National Transportation Safety Board stated, "The airplane was seen lifting off the runway, and then nose down, impacting in a marsh on the south side of the departure end of runway 27." It indicated that the pilot was not approved to fly the plane. Morales falsely obtained his FAA license by showing hundreds of hours never flown, and he may also have falsified how many hours he had flown in order to get a job with his employer, Blackhawk International Airways. 
Additionally, toxicology tests performed on Morales revealed traces of cocaine and alcohol in his system. Aaliyah's funeral services were held on August 31, 2001, at the Frank E. Campbell Funeral Home and St. Ignatius Loyola Church in Manhattan. Her body was set in a silver-plated copper-deposit casket, which was carried in a glass horse-drawn hearse. An estimated 800 mourners were in attendance at the procession. Among those in attendance at the private ceremony were Missy Elliott, Timbaland, Gladys Knight, Lil' Kim and Sean Combs. After the service, 22 white doves were released to symbolize each year of Aaliyah's life. Aaliyah was initially entombed in a crypt at the Ferncliff Mausoleum in Hartsdale, New York; she was later moved to a private room at the left end of a corridor in the Rosewood Mausoleum. The inscription at the bottom of Aaliyah's portrait at the funeral read: "We Were Given a Queen, We Were Given an Angel." After Aaliyah's death, the German newspaper "Die Zeit" published excerpts from an interview done shortly before her death, in which she described a recurring dream: "It is dark in my favorite dream. Someone is following me. I don't know why. I'm scared. Then suddenly I lift off. Far away. How do I feel? As if I am swimming in the air. Free. Weightless. Nobody can reach me. Nobody can touch me. It's a wonderful feeling." Immediately after Aaliyah's death, there was uncertainty over whether the music video for "Rock the Boat" would ever air. It made its world premiere on BET's "Access Granted" on October 9, 2001. She won two posthumous awards at the American Music Awards of 2002: Favorite Female R&B Artist and Favorite R&B/Soul Album for "Aaliyah". Her second and final film, "Queen of the Damned", was released in February 2002. Before its release, Aaliyah's brother, Rashad, re-dubbed some of her lines during post-production. It grossed US$15.2 million in its first weekend, ranking number one at the box office. On the first anniversary of Aaliyah's death, a candlelight vigil was held in Times Square; millions of fans observed a moment of silence; and throughout the United States, radio stations played her music in remembrance. In December 2002, a collection of previously unreleased material was released as Aaliyah's first posthumous album, "I Care 4 U". A portion of the proceeds was donated to the Aaliyah Memorial Fund, a program that benefits the Revlon UCLA Women's Cancer Research Program and Harlem's Sloan Kettering Cancer Center. It debuted at number three on the "Billboard" 200, selling 280,000 copies in its first week. The album's lead single, "Miss You", peaked at number three on the "Billboard" Hot 100 and topped the Hot R&B/Hip-Hop Songs chart. In August of the following year, clothing retailer Christian Dior donated profits from sales in honor of Aaliyah. In 2005, Aaliyah's second compilation album, "Ultimate Aaliyah", was released in the UK by Blackground Records. "Ultimate Aaliyah" is a three-disc set that included a greatest hits audio CD and a DVD. Andy Kellman of AllMusic remarked, ""Ultimate Aaliyah" adequately represents the shortened career of a tremendous talent who benefited from some of the best songwriting and production work by Timbaland, Missy Elliott, and R. Kelly." A documentary movie, "Aaliyah Live in Amsterdam", was released in 2011, shortly before the tenth anniversary of Aaliyah's death. The documentary, by Pogus Caesar, contained previously unseen footage, shot in 1995 at the beginning of her career, when she was appearing in the Netherlands. 
In March 2012, music producer Jeffrey "J-Dub" Walker announced on his Twitter account that the song "Steady Ground", which he produced for Aaliyah's third album, would be included in the forthcoming posthumous Aaliyah album. This second proposed posthumous album would feature the song using demo vocals, as Walker claims the originals were somehow lost by his sound engineer. Aaliyah's brother Rashad later disputed Walker's claim, stating that "no official album [is] being released and supported by the Haughton family." On August 5, 2012, a song entitled "Enough Said" was released online. The song was produced by Noah "40" Shebib and features Canadian rapper Drake. Four days later, Jomo Hankerson confirmed that a posthumous album was being produced and that it was scheduled to be released by Blackground Records by the end of 2012. The album was reported to include 16 unreleased songs and have contributions from Aaliyah's longtime collaborators Timbaland and Missy Elliott, among others. On August 13, Timbaland and Missy Elliott dismissed rumors that they had been contacted about, or were participating in, the project. Elliott's manager Mona Scott-Young said in a statement to "XXL", "Although Missy and Timbaland always strive to keep the memory of their close friend alive, we have not been contacted about the project nor are there any plans at this time to participate. We've seen the reports surfacing that they have been confirmed to participate but that is not the case. Both Missy and Timbaland are very sensitive to the loss still being felt by the family so we wanted to clear up any misinformation being circulated." Elliott herself said, "Tim and I carry Aaliyah with us everyday, like so many of the people who love her. She will always live in our hearts. We have nothing but love and respect for her memory and for her loved ones left behind still grieving her loss. They are always in our prayers." In June 2013, Aaliyah was featured on a new track by Chris Brown, titled "Don't Think They Know", with Aaliyah singing the song's hook. The video features dancing holographic versions of Aaliyah. The song appears on Brown's sixth studio album, "X". Timbaland voiced his disapproval of "Enough Said" and "Don't Think They Know" in July 2013. He exclaimed, "Aaliyah music only work with its soulmate, which is me". Soon after, Timbaland apologized to Chris Brown over his remarks, which he explained were made due to Aaliyah and her death being a "very sensitive subject". In January 2014, producer Noah "40" Shebib confirmed that the posthumous album was shelved due to the negative reception surrounding Drake's involvement. Shebib added, "Aaliyah's mother saying, 'I don't want this out' was enough for me ... I walked away very quickly." Aaliyah's vocals were reported to be featured on the T-Pain mixtape, "The Iron Way", on the track "Girlfriend", but were pulled after being met with criticism by fans and many in attendance at a New York listening session that he hosted for the project. In response to the criticism, T-Pain questioned if Aaliyah's legacy was driven by her death and claimed that were she still alive, she would be seen as trying to emulate Beyoncé. According to T-Pain, he was given her vocals from a session she had done prior to her death after being approached to work on a track for a posthumous Aaliyah album and completing the song, calling the exchange "just like a swap." She was featured on the Tink track "Million", which was released in May 2015 and contained samples from her song "One in a Million". 
Collaborator Timbaland was involved in the song's creation, having previously claimed that Aaliyah appeared to him in a dream and stressed that Tink was "the one". In August 2015, Timbaland confirmed that he had unreleased vocals from Aaliyah and stated a "sneak peek" would be coming soon. In September 2015, "Aaliyah by Xyrena", an official tribute fragrance, was announced. On December 19, 2015, Timbaland uploaded a snippet of a new Aaliyah song titled "He Keeps Me Shakin" on his Instagram account and said it would be released December 25, 2015, on the Timbaland mixtape "King Stays King". On August 24, 2017, MAC Cosmetics announced that an Aaliyah collection would be made available in the summer of 2018. The Aaliyah for MAC collection was released online on June 20 and in stores on June 21. Alongside the collection, MAC and i-D Magazine partnered to release a short film titled "A-Z of Aaliyah", which coincided with the launch. The short film highlighted and celebrated the legacy of Aaliyah with the help of fans who were selected to be a part of the film through a casting call competition held by MAC and i-D Magazine. The Aaliyah for MAC collector's box was sold at $250 and sold out within minutes on the first day of its release. Aaliyah has been credited for helping redefine R&B, pop and hip hop in the 1990s, "leaving an indelible imprint on the music industry as a whole." According to "Billboard", Aaliyah revolutionized R&B with her sultry mix of pop, soul and hip hop. In a 2001 review of her eponymous album, "Rolling Stone" professed that Aaliyah's impact on R&B and pop had been enormous. Steve Huey of AllMusic wrote Aaliyah ranks among the "elite" artists of the R&B genre, as she "played a major role in popularizing the stuttering, futuristic production style that consumed hip-hop and urban soul in the late 1990s." Bruce Britt of "Music World", on Broadcast Music, Inc.'s website, stated that by combining "schoolgirl charm with urban grit", Aaliyah helped define the teen-oriented sound that resulted in contemporary pop phenoms like Brandy, Christina Aguilera and Destiny's Child. Aaliyah was described as one of "R&B's most important artists" of the 1990s, and her second studio album, "One in a Million", became one of the most influential R&B albums of the decade. Music critic Simon Reynolds cited "Are You That Somebody?" as "the most radical pop single" of 1998. Kelefa Sanneh of "The New York Times" wrote that rather than being the song's focal point, Aaliyah "knew how to disappear into the music, how to match her voice to the bass line", and consequently "helped change the way popular music sounds; the twitchy, beat-driven songs of Destiny's Child owe a clear debt to 'Are You That Somebody'." Sanneh asserted that by the time of her death in 2001, Aaliyah "had recorded some of the most innovative and influential pop songs of the last five years." Music publication "Popdust" called Aaliyah an unlikely queen of the underground, due mainly to her influence on the underground alternative music scene, where her music is heavily sampled and referenced by underground artists. "Popdust" also mentioned that the forward-thinking music Aaliyah did with Timbaland and the experimental music being made by many underground alternative artists are somewhat cut from the same cloth. While compiling a list of artists that take cues from Aaliyah, MTV Hive mentioned that it's easy to spot her influence on underground movements like dubstep, strains of indie pop, and lo-fi R&B. 
With sales of 8.1 million albums in the United States and an estimated 24 to 32 million albums worldwide, Aaliyah earned the nicknames "Princess of R&B" and "Queen of Urban Pop", as she "proved she was a muse in her own right". Ernest Hardy of "Rolling Stone" dubbed her the "undisputed queen of the midtempo come-on". Aaliyah has been referred to as a pop icon and an R&B icon for her impact and contributions to those respective genres. Japanese pop singer Hikaru Utada has said several times, "It was when I heard Aaliyah's "Age Ain't Nothing but a Number" that I got hooked on R&B"; Utada went on to release her debut album "First Love" with heavy R&B influences. Another Japanese pop singer, Crystal Kay, has expressed how she admired Aaliyah when she was growing up and how she would practice dancing while watching her music videos. Aaliyah was honored at the 2001 MTV Video Music Awards by Janet Jackson, Missy Elliott, Timbaland, Ginuwine and her brother, Rashad, who all paid tribute to her. In the same year, the United States Social Security Administration ranked the name Aaliyah one of the 100 most popular names for newborn girls. Aaliyah was ranked as one of "The Top 40 Women of the Video Era" in VH1's 2003 "The Greatest" series. She was also ranked at number 18 on BET's "Top 25 Dancers of All Time". Aaliyah appeared on both the 2000 and 2001 "Maxim" Hot 100 lists, at number 41 and number 14 respectively. In 2002, VH1 ranked Aaliyah at number 36 on its list of the 100 sexiest artists. In memory of Aaliyah, the Entertainment Industry Foundation created the Aaliyah Memorial Fund to donate money raised to charities she supported. In December 2009, "Billboard" magazine ranked Aaliyah at number 70 on its Top Artists of the Decade, while her eponymous album was ranked at number 181 on the magazine's Top 200 Albums of the Decade. She is listed by "Billboard" as the tenth most successful female R&B artist of the past 25 years, and 27th most successful R&B artist overall. In 2012, VH1 ranked her number 48 in "VH1's Greatest Women in Music". Also in 2012, Aaliyah was ranked at number 10 on Complex magazine's list of the 100 hottest female singers of all time, and at number 22 on its list of the 90 hottest women of the '90s. In 2014, "NME" ("New Musical Express") ranked Aaliyah at number 18 on its list of the 100 most influential artists. The dress Aaliyah wore at the 2000 MTV Video Music Awards was featured in the fashion publication Harper's Bazaar's list of the most memorable fashion moments at the VMAs. In October 2015, Aaliyah was featured in the fashion publication Vogue's list of 10 women who became denim style icons. Aaliyah's music has influenced numerous artists including Adele, The Weeknd, Ciara, Beyoncé, Monica, Chris Brown, Rihanna, Azealia Banks, Sevyn Streeter, Keyshia Cole, J. Cole, Kelly Rowland, Zendaya, Rita Ora, The xx, Arctic Monkeys, Speedy Ortiz, Chelsea Wolfe, Haim, Angel Haze, Kiesza, Naya Rivera, Cassie, Hayley Williams, Jessie Ware, Yeasayer, Bebe Rexha, Omarion, and Years & Years frontman Olly Alexander. Canadian R&B singer Keshia Chanté, who in 2008 was said to be set to play her in a pending biopic, complimented the singer's futuristic style in music and fashion. Chanté backed out of the biopic after speaking to Diane Haughton, but has expressed a willingness to do the project if "the right production comes along and the family's behind it". Chanté also mentioned that Aaliyah had been part of her life "since I was 6." 
R&B singer and friend Brandy said of the late singer: "She came out before Monica and I did, she was our inspiration. At the time, record companies did not believe in kid acts and it was just inspiring to see someone that was winning and winning being themselves. When I met her I embraced her, I was so happy to meet her." Rapper Drake said that the singer has had the biggest influence on his career. He also has a tattoo of the singer on his back. Solange Knowles remarked on the tenth anniversary of her death that she idolized Aaliyah and proclaimed that she would never be forgotten. Adam Levine, the lead vocalist of the pop rock group Maroon 5, remembers that listening to "Are You That Somebody?" convinced him to pursue a more soulful sound than that of his then-band Kara's Flowers. Erika Ramirez, associate editor of Billboard.com, said that at the time of Aaliyah's career "there weren't many artists using the kind of soft vocals the ways she was using it, and now you see a lot of artists doing that and finding success"; this was her reasoning for Aaliyah's continued influence on current artists. She argued that Aaliyah's second album "One in a Million" was "very much ahead of its time, with the bass and electro kind of R&B sounds that they produced", referring to collaborators Timbaland and Missy Elliott, and that the sound, which "really stood out" at its time, was being replicated. In 2012, British singer-songwriter Katy B released the song "Aaliyah" as a tribute to Aaliyah's legacy and lasting impression on R&B music. The song first appeared on Katy B's "Danger" EP and featured Jessie Ware on guest vocals. In 2016, Swedish singer-songwriter Erik Hassle released a song titled "If Your Man Only Knew", which serves as a tribute to Aaliyah's 1996 single "If Your Girl Only Knew". There has been continuing belief that Aaliyah would have achieved greater career success had it not been for her death. Emil Wilbekin mentioned the deaths of The Notorious B.I.G. and Tupac Shakur in conjunction with hers and added: "Her just-released third album and scheduled role in a sequel to "The Matrix" could have made her another Janet Jackson or Whitney Houston". Director of "Queen of the Damned" Michael Rymer said of Aaliyah, "God, that girl could have gone so far" and spoke of her having "such a clarity about what she wanted. Nothing was gonna step in her way. No ego, no nervousness, no manipulation. There was nothing to stop her." On July 18, 2014, it was announced that Alexandra Shipp had replaced Zendaya in the role of Aaliyah for the Lifetime TV biopic "Aaliyah: The Princess of R&B", which premiered on November 15, 2014. Zendaya's casting drew criticism because people felt that she was too light-skinned and did not greatly resemble Aaliyah. She voiced her strong respect for Aaliyah before dropping out of the project. She explained her choice to withdraw from the film in videos on Instagram. Aaliyah's family has been vocal in their disapproval of the film. Her cousin Jomo Hankerson stated the family would prefer a "major studio release along the lines" of "What's Love Got to Do with It", the biopic based on the life of Tina Turner. Aaliyah's family has consulted a lawyer to stop Lifetime from using "any of the music, or any of the photographs and videos" they own, and Jomo Hankerson claimed the TV network "didn't reach out." On August 9, 2014, it was announced that Chattrisse Dolabaille and Izaak Smith had been cast as Aaliyah's collaborators Missy Elliott and Timbaland. 
Dolabaille received criticism for her appearance in comparison with that of Missy Elliott. Despite negative reviews, the film's premiere drew 3.2 million viewers, becoming the second-highest-rated television movie of 2014. In February 2015, a tribute dinner was held for Aaliyah by The Sugar Club in Dublin, Ireland. Arsenal F.C. Arsenal Football Club is a professional football club based in Islington, London, England, that plays in the Premier League, the top flight of English football. The club has won 13 League titles, a record 13 FA Cups, two League Cups, the League Centenary Trophy, 15 FA Community Shields, one UEFA Cup Winners' Cup and one Inter-Cities Fairs Cup. Arsenal was the first club from the South of England to join The Football League, in 1893, and they reached the First Division in 1904. Relegated only once, in 1913, they continue the longest streak in the top division, and have won the second-most top-flight matches in English football history. In the 1930s, Arsenal won five League Championships and two FA Cups, and another FA Cup and two Championships after the war. In 1970–71, they won their first League and FA Cup Double. Between 1989 and 2005, they won five League titles and five FA Cups, including two more Doubles. They completed the 20th century with the highest average league position. Herbert Chapman won Arsenal's first national trophies, but died prematurely. He helped introduce the WM formation, floodlights, and shirt numbers, and added the white sleeves and brighter red to Arsenal's kit. Arsène Wenger was the club's longest-serving manager and won the most trophies. He won a record seven FA Cups, and his title-winning team set an English record for the longest top-flight unbeaten league run at 49 games between 2003 and 2004, receiving the nickname The Invincibles, and a special gold Premier League trophy. In 1886, Woolwich munitions workers founded the club as Dial Square. In 1913, the club crossed the city to Arsenal Stadium in Highbury, becoming close neighbours of Tottenham Hotspur, and creating the North London derby. In 2006, they moved to the nearby Emirates Stadium. In terms of revenue, Arsenal was the sixth highest-earning football club in the world, earning €487.6m in the 2016–17 season. Based on social media activity from 2014–15, Arsenal's fanbase is the fifth largest in the world. In 2018, Forbes estimated that the club was the third most valuable in England, worth $2.24 billion. On 1 December 1886, munitions workers in Woolwich, now South East London, formed Arsenal as Dial Square, with David Danskin as their first captain. Named after the heart of the Royal Arsenal complex, they took the name of the whole complex a month later. Royal Arsenal F.C.'s first home was Plumstead Common, though they spent most of their time in South East London playing on the other side of Plumstead, at the Manor Ground. Royal Arsenal won Arsenal's first trophies in 1890 and 1891, and these were the only football association trophies Arsenal won during their time in South East London. In 1891, Royal Arsenal became the first London club to turn professional. Royal Arsenal renamed themselves for a second time upon becoming a limited liability company in 1893. They registered their new name, Woolwich Arsenal, with The Football League when the club joined the league later that year. Woolwich Arsenal was the first southern member of The Football League, starting out in the Second Division and winning promotion to the First Division in 1904. 
Falling attendances, due to financial difficulties among the munitions workers and the arrival of more accessible football clubs elsewhere in the city, brought the club close to bankruptcy by 1910. Businessmen Henry Norris and William Hall became involved in the club, and sought to move them elsewhere. In 1913, soon after relegation back to the Second Division, Woolwich Arsenal moved to the new Arsenal Stadium in Highbury, North London. This saw their third change of name: the following year, they reduced Woolwich Arsenal to simply The Arsenal. In 1919, The Football League voted to promote The Arsenal, instead of relegated local rivals Tottenham Hotspur, into the newly enlarged First Division, despite the club having finished only sixth in the Second Division's last pre-war season of 1914–15. Some books have speculated that the club won this election to the First Division by dubious means. Later that year, The Arsenal started dropping "The" in official documents, gradually shifting its name for the final time to Arsenal, as the club is generally known today. With a new home and First Division football, attendances were more than double those at the Manor Ground, and Arsenal's budget grew rapidly. Their location and record-breaking salary offer lured star Huddersfield Town manager Herbert Chapman in 1925. Over the next five years, Chapman built a new Arsenal. He appointed enduring new trainer Tom Whittaker, implemented Charlie Buchan's new twist on the nascent WM formation, captured young players like Cliff Bastin and Eddie Hapgood, and lavished Highbury's income on stars like David Jack and Alex James. With record-breaking spending and gate receipts, Arsenal quickly became known as the Bank of England club. Transformed, Chapman's Arsenal claimed their first national trophy, the FA Cup, in 1930. Two League Championships followed, in 1930–31 and 1932–33. Chapman also presided over multiple off-the-pitch changes: white sleeves and shirt numbers were added to the kit; a Tube station was named after the club; and the first of two opulent, Art Deco stands was completed, with some of the first floodlights in English football. Suddenly, in the middle of the 1933–34 season, Chapman died of pneumonia. His work was left to Joe Shaw and George Allison, who saw out a hat-trick of titles with the 1933–34 and 1934–35 championships, and then won the 1936 FA Cup and the 1937–38 title. World War II meant The Football League was suspended for seven years, but Arsenal returned to win it in the second post-war season, 1947–48. This was Tom Whittaker's first season as manager, after his promotion to succeed Allison, and the club had equalled the record for championships of England. They won a third FA Cup in 1950, and then won a record-breaking seventh championship in 1952–53. However, the war had taken its toll on Arsenal. The club had had more players killed than any other top-flight club, and debt from reconstructing the North Bank Stand bled Arsenal's resources. Arsenal were not to win the League or the FA Cup for another 18 years. The championship-winning squad of 1952–53 was ageing, and the club failed to attract strong enough replacements. Although Arsenal were competitive during these years, their fortunes had waned; the club spent most of the 1950s and 1960s in mid-table mediocrity. Even former England captain Billy Wright could not bring the club any success as manager, in a stint between 1962 and 1966. Arsenal tentatively appointed club physiotherapist Bertie Mee as acting manager in 1966. 
With new assistant Don Howe and new players such as Bob McNab and George Graham, Mee led Arsenal to their first League Cup finals, in 1967–68 and 1968–69. The next season saw a breakthrough: Arsenal's first competitive European trophy, the 1969–70 Inter-Cities Fairs Cup. The season after brought an even greater triumph: Arsenal's first League and FA Cup double, and a new champions of England record. This marked a premature high point of the decade; the Double-winning side was soon broken up and the rest of the decade was characterised by a series of near misses, starting with Arsenal finishing as FA Cup runners-up in 1972, and First Division runners-up in 1972–73. Former player Terry Neill succeeded Mee in 1976. At the age of 34, he became the youngest Arsenal manager to date. With new signings like Malcolm Macdonald and Pat Jennings, and a crop of talent in the side such as Liam Brady and Frank Stapleton, the club reached a trio of FA Cup finals (1978, 1979 and 1980), and lost the 1980 European Cup Winners' Cup Final on penalties. The club's only trophy during this time was a last-minute 3–2 victory over Manchester United in the 1979 FA Cup Final, widely regarded as a classic. One of Bertie Mee's double winners, George Graham, returned as manager in 1986. Arsenal won their first League Cup in 1987, Graham's first season in charge. By 1988, new signings Nigel Winterburn, Lee Dixon and Steve Bould had joined the club to complete the "famous Back Four" led by existing player Tony Adams. They immediately won the 1988 Football League Centenary Trophy, and followed it with the 1988–89 Football League title, snatched with a last-minute goal in the final game of the season against fellow title challengers Liverpool. Graham's Arsenal won another title in 1990–91, losing only one match, won the FA Cup and League Cup double in 1993, and won the European Cup Winners' Cup in 1994. Graham's reputation was tarnished when he was found to have taken kickbacks from agent Rune Hauge for signing certain players, and he was dismissed in 1995. His permanent replacement, Bruce Rioch, lasted for only one season, leaving the club after a dispute with the board of directors. The club metamorphosed during the long tenure of manager Arsène Wenger, appointed in 1996. New, attacking football, an overhaul of dietary and fitness practices, and efficiency with money defined his reign. Accumulating key players from Wenger's homeland, such as Patrick Vieira and Thierry Henry, Arsenal won a second League and Cup double in 1997–98 and a third in 2001–02. In addition, the club reached the final of the 1999–2000 UEFA Cup, were victorious in the 2003 and 2005 FA Cups, and won the Premier League in 2003–04 without losing a single match, an achievement which earned the side the nickname "The Invincibles". This latter feat came within a run of 49 league matches unbeaten from 7 May 2003 to 24 October 2004, a national record. Arsenal finished in either first or second place in the league in eight of Wenger's first nine seasons at the club, although on no occasion were they able to retain the title. The club had never progressed beyond the quarter-finals of the Champions League until 2005–06; in that season they became the first club from London in the competition's fifty-year history to reach the final, in which they were beaten 2–1 by Barcelona. In July 2006, they moved into the Emirates Stadium, after 93 years at Highbury. 
Arsenal reached the finals of the 2007 and 2011 League Cups, losing 2–1 to Chelsea and Birmingham City respectively. The club had not gained a major trophy since the 2005 FA Cup until 17 May 2014 when, spearheaded by then club-record acquisition Mesut Özil, Arsenal beat Hull City in the 2014 FA Cup Final, coming back from a 2–0 deficit to win the match 3–2. A year later, Arsenal appeared in the FA Cup final for the second time in a row, defeating Aston Villa 4–0 in the final and becoming the most successful club in the tournament's history with 12 titles, a record which Manchester United would tie the following season. Arsenal later won the FA Cup for a record 13th time, defeating Chelsea 2–1 in the 2017 final and once more becoming the outright leader in terms of FA Cups won. The victory also saw Wenger become the first manager in English football history to win seven FA Cups. However, in that same season, Arsenal finished fifth in the league, the first time they had finished outside the top four since before Wenger arrived in 1996. After another unspectacular league season the following year, Wenger announced his departure from the club on 20 April 2018, after 22 years as manager. His decision was met with praise throughout English and world football from many pundits and former players, who also thanked him for developing them as people. His final home match in charge was a 5–0 win over Burnley, where his entrance was met with a standing ovation by supporters. The final match of the Wenger era was a 1–0 away victory against Huddersfield. After conducting an overhaul of the club's operating model to coincide with Wenger's departure, Spaniard Unai Emery was named as the club's new head coach on 23 May 2018. He became the club's first ever head coach, and only their second ever manager from outside the United Kingdom. His first match was an 8–0 win in a friendly against Boreham Wood.

Unveiled in 1888, Royal Arsenal's first crest featured three cannons viewed from above, pointing northwards, similar to the coat of arms of the Metropolitan Borough of Woolwich (nowadays transferred to the coat of arms of the Royal Borough of Greenwich). These can sometimes be mistaken for chimneys, but the presence of a carved lion's head and a cascabel on each are clear indicators that they are cannons. This crest was dropped after the move to Highbury in 1913, only to be reinstated in 1922, when the club adopted a crest featuring a single cannon, pointing eastwards, with the club's nickname, "The Gunners", inscribed alongside it; this crest only lasted until 1925, when the cannon was reversed to point westward and its barrel slimmed down. In 1949, the club unveiled a modernised crest featuring the same style of cannon below the club's name, set in blackletter, and above the coat of arms of the Metropolitan Borough of Islington and a scroll inscribed with the club's newly adopted Latin motto, "Victoria Concordia Crescit" ("victory comes from harmony"), coined by the club's programme editor Harry Homer. For the first time, the crest was rendered in colour, which varied slightly over the crest's lifespan, finally becoming red, gold and green. Because of the numerous revisions of the crest, Arsenal were unable to copyright it. Although the club had managed to register the crest as a trademark, and had fought (and eventually won) a long legal battle with a local street trader who sold "unofficial" Arsenal merchandise, Arsenal eventually sought a more comprehensive legal protection. 
Therefore, in 2002 they introduced a new crest featuring more modern curved lines and a simplified style, which was copyrightable. The cannon once again faces east and the club's name is written in a sans-serif typeface above the cannon. Green was replaced by dark blue. The new crest was criticised by some supporters; the Arsenal Independent Supporters' Association claimed that the club had ignored much of Arsenal's history and tradition with such a radical modern design, and that fans had not been properly consulted on the issue. Until the 1960s, a badge was worn on the playing shirt only for high-profile matches such as FA Cup finals, usually in the form of a monogram of the club's initials in red on a white background. The monogram theme was developed into an Art Deco-style badge on which the letters A and C framed a football rather than the letter F, the whole set within a hexagonal border. This early example of a corporate logo, introduced as part of Herbert Chapman's rebranding of the club in the 1930s, was used not only on Cup Final shirts but as a design feature throughout Highbury Stadium, including above the main entrance and inlaid in the floors. From 1967, a white cannon was regularly worn on the shirts, until replaced by the club crest, sometimes with the addition of the nickname "The Gunners", in the 1990s. In the 2011–12 season, Arsenal celebrated their 125th anniversary. The celebrations included a modified version of the current crest worn on their jerseys for the season. The crest was all white, surrounded by 15 oak leaves to the right and 15 laurel leaves to the left. The oak leaves represent the 15 founding members of the club who met at the Royal Oak pub. The 15 laurel leaves represent the design detail on the six pence pieces paid by the founding fathers to establish the club. The laurel leaves also represent strength. To complete the crest, 1886 and 2011 are shown on either side of the motto "Forward" at the bottom of the crest. For much of Arsenal's history, their home colours have been bright red shirts with white sleeves and white shorts, though this has not always been the case. The choice of red is in recognition of a charitable donation from Nottingham Forest, soon after Arsenal's foundation in 1886. Two of Dial Square's founding members, Fred Beardsley and Morris Bates, were former Forest players who had moved to Woolwich for work. As they put together the first team in the area, no kit could be found, so Beardsley and Bates wrote home for help and received a set of kit and a ball. The shirt was redcurrant, a dark shade of red, and was worn with white shorts and socks with blue and white hoops. In 1933, Herbert Chapman, wanting his players to be more distinctly dressed, updated the kit, adding white sleeves and changing the shade to a brighter pillar box red. Two possibilities have been suggested for the origin of the white sleeves. One story reports that Chapman noticed a supporter in the stands wearing a red sleeveless sweater over a white shirt; another holds that he was inspired by a similar outfit worn by the cartoonist Tom Webster, with whom Chapman played golf. Regardless of which story is true, the red and white shirts have come to define Arsenal and the team have worn the combination ever since, aside from two seasons. The first was 1966–67, when Arsenal wore all-red shirts; this proved unpopular and the white sleeves returned the following season. 
The second was 2005–06, the last season that Arsenal played at Highbury, when the team wore commemorative redcurrant shirts similar to those worn in 1913, their first season in the stadium; the club reverted to their normal colours at the start of the next season. In the 2008–09 season, Arsenal replaced the traditional all-white sleeves with red sleeves with a broad white stripe. Arsenal's home colours have been the inspiration for at least three other clubs. In 1909, Sparta Prague adopted a dark red kit like the one Arsenal wore at the time; in 1938, Hibernian adopted the design of the Arsenal shirt sleeves in their own green and white strip. In 1920, Sporting Clube de Braga's manager returned from a game at Highbury and changed his team's green kit to a duplicate of Arsenal's red with white sleeves and shorts, giving rise to the team's nickname of "Os Arsenalistas". These teams still wear those designs to this day. For many years Arsenal's away colours were white shirts and either black or white shorts. In the 1969–70 season, Arsenal introduced an away kit of yellow shirts with blue shorts. This kit was worn in the 1971 FA Cup Final as Arsenal beat Liverpool to secure the double for the first time in their history. Arsenal reached the FA Cup final again the following year wearing the red and white home strip and were beaten by Leeds United. Arsenal then competed in three consecutive FA Cup finals between 1978 and 1980 wearing their "lucky" yellow and blue strip, which remained the club's away strip until the release of a green and navy away kit in 1982–83. The following season, Arsenal returned to the yellow and blue scheme, albeit with a darker shade of blue than before. When Nike took over from Adidas as Arsenal's kit provider in 1994, Arsenal's away colours were again changed to two-tone blue shirts and shorts. Since the advent of the lucrative replica kit market, the away kits have been changed regularly, with Arsenal usually releasing both away and third choice kits. During this period the designs have been either all blue designs, or variations on the traditional yellow and blue, such as the metallic gold and navy strip used in the 2001–02 season, the yellow and dark grey used from 2005 to 2007, and the yellow and maroon of 2010 to 2013. Until 2014, the away kit was changed every season, and the outgoing away kit became the third-choice kit if a new home kit was being introduced in the same year. Since Puma began manufacturing Arsenal's kits in 2014, new home, away and third kits are released every single season. Arsenal's shirts have been made by manufacturers including Bukta (from the 1930s until the early 1970s), Umbro (from the 1970s until 1986), Adidas (1986–1994), Nike (1994–2014), and Puma (from 2014). Like those of most other major football clubs, Arsenal's shirts have featured sponsors' logos since the 1980s; sponsors include JVC (1982–1999), Sega (1999–2002), O2 (2002–2006), and Emirates (from 2006). Before joining the Football League, Arsenal played briefly on Plumstead Common, then at the Manor Ground in Plumstead, then spent three years between 1890 and 1893 at the nearby Invicta Ground. Upon joining the Football League in 1893, the club returned to the Manor Ground and installed stands and terracing, upgrading it from just a field. Arsenal continued to play their home games there for the next twenty years (with two exceptions in the 1894–95 season), until the move to north London in 1913. 
Widely referred to as Highbury, Arsenal Stadium was the club's home from September 1913 until May 2006. The original stadium was designed by the renowned football architect Archibald Leitch, and had a design common to many football grounds in the UK at the time, with a single covered stand and three open-air banks of terracing. The entire stadium was given a massive overhaul in the 1930s: new Art Deco West and East stands were constructed, opening in 1932 and 1936 respectively, and a roof was added to the North Bank terrace, which was bombed during the Second World War and not restored until 1954. Highbury could hold more than 60,000 spectators at its peak, and had a capacity of 57,000 until the early 1990s. The Taylor Report and Premier League regulations obliged Arsenal to convert Highbury to an all-seater stadium in time for the 1993–94 season, thus reducing the capacity to 38,419 seated spectators. This capacity had to be reduced further during Champions League matches to accommodate additional advertising boards, so much so that for two seasons, from 1998 to 2000, Arsenal played Champions League home matches at Wembley, which could house more than 70,000 spectators. Expansion of Highbury was restricted because the East Stand had been designated as a Grade II listed building and the other three stands were close to residential properties. These limitations prevented the club from maximising matchday revenue during the 1990s and first decade of the 21st century, putting them in danger of being left behind in the football boom of that time. After considering various options, in 2000 Arsenal proposed building a new 60,361-capacity stadium at Ashburton Grove, since named the Emirates Stadium, about 500 metres south-west of Highbury. The project was initially delayed by red tape and rising costs, and construction was completed in July 2006, in time for the start of the 2006–07 season. The stadium was named after its sponsors, the airline company Emirates, with whom the club signed the largest sponsorship deal in English football history, worth around £100 million. Some fans referred to the ground as Ashburton Grove, or the Grove, as they did not agree with corporate sponsorship of stadium names. The stadium will be officially known as Emirates Stadium until at least 2028, and the airline will be the club's shirt sponsor until at least 2024. Since the start of the 2010–11 season, the stands of the stadium have been officially known as the North Bank, East Stand, West Stand and Clock End. Arsenal's players train at the Shenley Training Centre in Hertfordshire, a purpose-built facility which opened in 1999. Before that the club used facilities on a nearby site owned by the University College London Students' Union. Until 1961 they had trained at Highbury. Arsenal's Academy under-18 teams play their home matches at Shenley, while the reserves play their games at Meadow Park, which is also the home of Boreham Wood F.C. Both the Academy under-18s and the reserves occasionally play their biggest matches at the Emirates, in front of a crowd restricted to the lower west stand. Arsenal fans often refer to themselves as "Gooners", the name derived from the team's nickname, "The Gunners". The fanbase is large, generally loyal, and virtually all home matches sell out; in 2007–08 Arsenal had the second-highest average League attendance for an English club (60,070, which was 99.5% of available capacity), and, as of 2015, the third-highest all-time average attendance. 
Arsenal have the seventh-highest average attendance among European football clubs, behind only Borussia Dortmund, FC Barcelona, Manchester United, Real Madrid, Bayern Munich, and Schalke. The club's location, adjoining wealthy areas such as Canonbury and Barnsbury, mixed areas such as Islington, Holloway, Highbury, and the adjacent London Borough of Camden, and largely working-class areas such as Finsbury Park and Stoke Newington, has meant that Arsenal's supporters have come from a variety of social classes. Much of the Afro-Caribbean support comes from the neighbouring London Borough of Hackney, and a large portion of the South Asian Arsenal supporters commute to the stadium from Wembley Park, north-west of the capital. There was also traditionally a large Irish community that followed Arsenal, with the nearby Archway area having a particularly large community, but Irish migration to North London is much lower than in the 1960s or 1970s. Like all major English football clubs, Arsenal have a number of domestic supporters' clubs, including the Arsenal Football Supporters' Club, which works closely with the club, and the Arsenal Independent Supporters' Association, which maintains a more independent line. The Arsenal Supporters' Trust promotes greater participation in ownership of the club by fans. The club's supporters also publish fanzines such as "The Gooner", "Gunflash" and the satirical "Up The Arse!". In addition to the usual English football chants, supporters sing "One-Nil to the Arsenal" (to the tune of "Go West"). There have always been Arsenal supporters outside London, and since the advent of satellite television, a supporter's attachment to a football club has become less dependent on geography. Consequently, Arsenal have a significant number of fans from beyond London and all over the world; in 2007, 24 UK, 37 Irish and 49 other overseas supporters' clubs were affiliated with the club. A 2011 report by SPORT+MARKT estimated Arsenal's global fanbase at 113 million. The club's social media activity was the fifth highest in world football during the 2014–15 season. Arsenal's longest-running and deepest rivalry is with their nearest major neighbours, Tottenham Hotspur; matches between the two are referred to as North London derbies. Other rivalries within London include those with Chelsea, Fulham and West Ham United. In addition, Arsenal and Manchester United developed a strong on-pitch rivalry in the late 1980s, which intensified in recent years when both clubs were competing for the Premier League title – so much so that a 2003 online poll by the Football Fans Census listed Manchester United as Arsenal's biggest rivals, followed by Tottenham and Chelsea. A 2008 poll listed the Tottenham rivalry as more important. The largest shareholder on the Arsenal board is American sports tycoon Stan Kroenke. Kroenke first launched a bid for the club in April 2007, and faced competition for shares from Red & White Securities, which acquired its first shares from David Dein in August 2007. Red & White Securities was co-owned by Russian billionaire Alisher Usmanov and Iranian London-based financier Farhad Moshiri, though Usmanov bought Moshiri's stake in 2016. Kroenke came close to the 30% takeover threshold in November 2009, when he increased his holding to 18,594 shares (29.9%). In April 2011, Kroenke achieved a full takeover by purchasing the shareholdings of Nina Bracewell-Smith and Danny Fiszman, taking his shareholding to 62.89%. 
As of May 2017, Kroenke owns 41,721 shares (67.05%) and Red & White Securities own 18,695 shares (30.04%). Ivan Gazidis has been the club's Chief Executive since 2009. Arsenal's parent company, Arsenal Holdings plc, operates as a non-quoted public limited company, whose ownership is considerably different from that of other football clubs. Only 62,219 shares in Arsenal have been issued, and they are not traded on a public exchange such as the FTSE or AIM; instead, they are traded relatively infrequently on the ICAP Securities and Derivatives Exchange, a specialist market. On 29 May 2017, a single share in Arsenal had a mid price of £18,000, which set the club's market capitalisation at approximately £1,119.9m. Most football clubs are not listed on an exchange, which makes direct comparisons of their values difficult. Consultants Brand Finance valued the club's brand and intangible assets at $703m in 2015, and consider Arsenal an AAA global brand. Business magazine Forbes valued Arsenal as a whole at $2.238 billion (£1.69 billion) in 2018, ranked third in English football. Research by the Henley Business School also ranked Arsenal second in English football, modelling the club's value at £1.118 billion in 2015. Arsenal's financial results for the 2014–15 season show group revenue of £344.5m, with a profit before tax of £24.7m. The footballing core of the business showed a revenue of £329.3m. The Deloitte Football Money League is a publication that homogenises and compares clubs' annual revenue. It put Arsenal's footballing revenue at £331.3m (€435.5m), ranking Arsenal seventh among world football clubs. Arsenal and Deloitte both list the matchday revenue generated by the Emirates Stadium as £100.4m, more than any other football stadium in the world. Arsenal have appeared in a number of media "firsts". On 22 January 1927, their match at Highbury against Sheffield United was the first English League match to be broadcast live on radio. A decade later, on 16 September 1937, an exhibition match between Arsenal's first team and the reserves was the first football match in the world to be televised live. Arsenal also featured in the first edition of the BBC's "Match of the Day", which screened highlights of their match against Liverpool at Anfield on 22 August 1964. Sky's coverage of Arsenal's January 2010 match against Manchester United was the first live public broadcast of a sports event on 3D television. As one of the most successful teams in the country, Arsenal have often featured when football is depicted in the arts in Britain. They formed the backdrop to one of the earliest football-related novels, "The Arsenal Stadium Mystery" (1939), which was made into a film in the same year. The story centres on a friendly match between Arsenal and an amateur side, one of whose players is poisoned while playing. Many Arsenal players appeared as themselves in the film and manager George Allison was given a speaking part. More recently, the book "Fever Pitch" by Nick Hornby was an autobiographical account of Hornby's life and relationship with football and Arsenal in particular. Published in 1992, it formed part of the revival and rehabilitation of football in British society during the 1990s. The book was twice adapted for the cinema – the 1997 British film focuses on Arsenal's 1988–89 title win, and a 2005 American version features a fan of baseball's Boston Red Sox. 
Arsenal have often been stereotyped as a defensive and "boring" side, especially during the 1970s and 1980s; many comedians, such as Eric Morecambe, made jokes about this at the team's expense. The theme was repeated in the 1997 film "The Full Monty", in a scene where the lead actors move in a line and raise their hands, deliberately mimicking the Arsenal defence's offside trap, in an attempt to co-ordinate their striptease routine. Another film reference to the club's defence comes in the film "Plunkett & Macleane", in which two characters are named Dixon and Winterburn after Arsenal's long-serving full backs – the right-sided Lee Dixon and the left-sided Nigel Winterburn. In 1985, Arsenal founded a community scheme, "Arsenal in the Community", which offered sporting, social inclusion, educational and charitable projects. The club support a number of charitable causes directly and in 1992 established The Arsenal Charitable Trust, which by 2006 had raised more than £2 million for local causes. An ex-professional and celebrity football team associated with the club also raised money by playing charity matches. The club launched the Arsenal for Everyone initiative in 2008 as an annual celebration of the diversity of the Arsenal family. In the 2009–10 season Arsenal announced that they had raised a record-breaking £818,897 for the Great Ormond Street Hospital Children's Charity. The original target was £500,000. Save the Children has been Arsenal's global charity partner since 2011, and the two have worked together on numerous projects to improve safety and well-being for vulnerable children in London and abroad. On 3 September 2016, The Arsenal Foundation donated £1m to build football pitches for children in London, Indonesia, Iraq, Jordan and Somalia, funded by The Arsenal Foundation Legends Match against Milan Glorie at the Emirates Stadium. On 3 June 2018, Arsenal will play Real Madrid in the Corazon Classic Match 2018 at the Bernabeu, where the proceeds will go to Real Madrid Foundation projects aimed at the most vulnerable children. In addition, there will be a return meeting on 8 September 2018 at the Emirates Stadium, where proceeds will go towards The Arsenal Foundation. Arsenal's tally of 13 League Championships is the third highest in English football, after Manchester United (20) and Liverpool (18), and they were the first club to reach a seventh and an eighth League Championship. As of May 2016, they are one of only six teams, the others being Manchester United, Blackburn Rovers, Chelsea, Manchester City and Leicester City, to have won the Premier League since its formation in 1992. They hold the highest number of FA Cup trophies, with 13. The club is one of only six clubs to have won the FA Cup twice in succession, in 2002 and 2003, and 2014 and 2015. Arsenal have achieved three League and FA Cup "Doubles" (in 1971, 1998 and 2002), a feat only previously achieved by Manchester United (in 1994, 1996 and 1999). They were the first side in English football to complete the FA Cup and League Cup double, in 1993. Arsenal were also the first London club to reach the final of the UEFA Champions League, in 2006, losing the final 2–1 to Barcelona. Arsenal have one of the best top-flight records in history, having finished below fourteenth only seven times. They have won the second most top-flight league matches in English football, and have also accumulated the second most points, whether calculated by two points per win or by the contemporary points value. 
They have been in the top flight for the most consecutive seasons (92 as of 2017–18). Arsenal also have the highest average league finishing position for the 20th century, with an average league placement of 8.5. Arsenal hold the record for the longest run of unbeaten League matches (49 between May 2003 and October 2004). This included all 38 matches of their title-winning 2003–04 season, when Arsenal became only the second club to finish a top-flight campaign unbeaten, after Preston North End (who played only 22 matches) in 1888–89. They also hold the record for the longest top-flight winning streak. Arsenal set a Champions League record during the 2005–06 season by going ten matches without conceding a goal, beating the previous best of seven set by A.C. Milan. They went a record total stretch of 995 minutes without letting an opponent score; the streak ended in the final, when Samuel Eto'o scored a 76th-minute equaliser for Barcelona. David O'Leary holds the record for Arsenal appearances, having played 722 first-team matches between 1975 and 1993. Fellow centre half and former captain Tony Adams comes second, having played 669 times. The record for a goalkeeper is held by David Seaman, with 564 appearances. Thierry Henry is the club's top goalscorer with 228 goals in all competitions between 1999 and 2012, having surpassed Ian Wright's total of 185 in October 2005. Wright's record had stood since September 1997, when he overtook the longstanding total of 178 goals set by winger Cliff Bastin in 1939. Henry also holds the club record for goals scored in the League, with 175, a record that had been held by Bastin until February 2006. Arsenal's record home attendance is 73,707, for a UEFA Champions League match against RC Lens on 25 November 1998 at Wembley Stadium, where the club formerly played home European matches because of the limits on Highbury's capacity. The record attendance for an Arsenal match at Highbury is 73,295, for a 0–0 draw against Sunderland on 9 March 1935, while that at Emirates Stadium is 60,161, for a 2–2 draw with Manchester United on 3 November 2007. The club's current head coach is the Spaniard Unai Emery. There have been eighteen permanent and five caretaker managers of Arsenal since the appointment of the club's first professional manager, Thomas Mitchell, in 1897. The club's longest-serving manager, in terms of both length of tenure and number of games overseen, is Arsène Wenger, who managed the club between 1996 and 2018. Wenger was also Arsenal's first manager from outside the United Kingdom. Two Arsenal managers have died in the job – Herbert Chapman and Tom Whittaker. Arsenal's first ever silverware was won as the Royal Arsenal in 1890. The Kent Junior Cup, won by Royal Arsenal's reserves, was the club's first trophy, while the first team's first trophy came three weeks later when they won the Kent Senior Cup. Their first national senior honour came in 1930, when they won the FA Cup. The club enjoyed further success in the 1930s, winning another FA Cup and five Football League First Division titles. Arsenal won their first league and cup double in the 1970–71 season and twice repeated the feat, in 1997–98 and 2001–02, as well as winning a cup double of the FA Cup and League Cup in 1992–93. The 2003–04 season was the only 38-match league season unbeaten in English football history. 
A special gold version of the Premier League trophy was commissioned and presented to the club the following season. When the FA Cup was the only national football association competition available to Arsenal, the other football association competitions were County Cups, and they made up many of the matches the club played during a season. Arsenal's first first-team trophy was a County Cup, the inaugural Kent Senior Cup. Arsenal became ineligible for the London Cups when the club turned professional in 1891, and rarely participated in County Cups after this. Due to the club's original location within the borders of both the London and Kent Football Associations, Arsenal competed in and won trophies organised by each. During Arsenal's history, the club has participated in and won a variety of pre-season and friendly honours. These include Arsenal's own pre-season competition, the Emirates Cup, begun in 2007. During the two World Wars, regular competitions were widely suspended and the club had to participate in wartime competitions instead; during the Second World War, Arsenal won several of these. Arsenal Women is the women's football club affiliated to Arsenal. Founded as Arsenal Ladies F.C. in 1987 by Vic Akers, they turned semi-professional in 2002 and are currently managed by Clair Wheatley. Akers currently holds the role of Honorary President of Arsenal Women. As part of the festivities surrounding their 30th anniversary in 2017, the club announced that they were changing their formal name to Arsenal Women F.C., and would use "Arsenal" in all references except rare cases where there might be confusion with the men's side. Arsenal Women are the most successful team in English women's football. In the 2008–09 season, they won all three major English trophies – the FA Women's Premier League, FA Women's Cup and FA Women's Premier League Cup – and, as of 2017, were the only English side to have won the UEFA Women's Cup or UEFA Women's Champions League, having won the Cup in the 2006–07 season as part of a unique quadruple. The men's and women's clubs are formally separate entities but have quite close ties; Arsenal Women are entitled to play once a season at the Emirates Stadium, though they usually play their home matches at Boreham Wood.

Ælle of Sussex Ælle (also Aelle or Ella) is recorded in early sources as the first king of the South Saxons, reigning in what is now called Sussex, England, from 477 to perhaps as late as 514. According to the "Anglo-Saxon Chronicle", Ælle and three of his sons are said to have landed at a place called Cymensora and fought against the local Britons. The chronicle goes on to report a victory in 491, at present-day Pevensey, where the battle ended with the Saxons slaughtering their opponents to the last man. Ælle was the first king recorded by the 8th-century chronicler Bede to have held "imperium", or overlordship, over other Anglo-Saxon kingdoms. In the late 9th-century "Anglo-Saxon Chronicle" (written around four hundred years after his time), Ælle is recorded as being the first bretwalda, or "Britain-ruler", though there is no evidence that this was a contemporary title. Ælle's death is not recorded, and although he may have been the founder of a South Saxon dynasty, there is no firm evidence linking him with later South Saxon rulers. The 12th-century chronicler Henry of Huntingdon produced an enhanced version of the "Anglo-Saxon Chronicle" that included 514 as the date of Ælle's death, but this is not secure. 
Historians are divided on the details of Ælle's life, and even on his existence, as his reign falls within the least-documented period of English history in the last two millennia. By the early 5th century Britain had been Roman for over three hundred and fifty years. Amongst the enemies of Roman Britain were the Picts of central and northern Scotland, and the Gaels known as Scoti, who were raiders from Ireland. Also vexatious were the Saxons, the name Roman writers gave to the peoples who lived in the northern part of what is now Germany and the southern part of the Jutland peninsula. Saxon raids on the southern and eastern shores of England had been sufficiently alarming by the late 3rd century for the Romans to build the Saxon Shore forts, and subsequently to establish the role of the Count of the Saxon Shore to command the defence against these incursions. Roman control of Britain finally ended in the early part of the 5th century; the date usually given as marking the end of Roman Britain is 410, when the Emperor Honorius sent letters to the British, urging them to look to their own defence. Britain had been repeatedly stripped of troops to support usurpers' claims to the Roman empire, and after 410 the Roman armies never returned. Sources for events after this date are extremely scarce, but a tradition, reported as early as the mid-6th century by a British priest named Gildas, records that the British sent to Aetius, a Roman consul, for help against the barbarians, probably in the late 440s. No help came. Subsequently, a British leader named Vortigern is supposed to have invited continental mercenaries to help fight the Picts who were attacking from the north. The leaders, whose names are recorded as Hengest and Horsa, rebelled, and a long period of warfare ensued. The invaders—Angles, Saxons, Jutes, and Frisians—gained control of parts of England, but lost a major battle at Mons Badonicus (the location of which is not known). Some authors have speculated that Ælle may have led the Saxon forces at this battle, while others reject the idea out of hand. The British thus gained a respite, and peace lasted at least until the time Gildas was writing: that is, for perhaps forty or fifty years, from around the end of the 5th century until midway through the sixth. Shortly after Gildas's time the Anglo-Saxon advance was resumed, and by the late 6th century nearly all of southern England was under the control of the continental invaders. There are two early sources that mention Ælle by name. The earliest is "The Ecclesiastical History of the English People", a history of the English church written in 731 by Bede, a Northumbrian monk. Bede mentions Ælle as one of the Anglo-Saxon kings who exercised what he calls "imperium" over "all the provinces south of the river Humber"; "imperium" is usually translated as "overlordship". Bede gives a list of seven kings who held "imperium", and Ælle is the first of them. The other information Bede gives is that Ælle was not a Christian—Bede mentions a later king as "the first to enter the kingdom of heaven". The second source is the "Anglo-Saxon Chronicle", a collection of annals assembled in the Kingdom of Wessex in c. 890, during the reign of Alfred the Great. The "Chronicle" has three entries for Ælle, dated from 477 to 491. The "Chronicle" was put together about four hundred years after these events. It is known that the annalists used material from earlier chronicles, as well as from oral sources such as sagas, but there is no way to tell where these lines came from. 
It should also be noted that the terms "British" and "Welsh" were used interchangeably, as "Welsh" is the Saxon word meaning "foreigner", and was applied to all the native Romano-British of the era. Three of the places named in these entries may be identified. The "Chronicle" mentions Ælle once more under the year 827, where he is listed as the first of the eight "bretwaldas", or "Britain-rulers". The list consists of Bede's original seven, plus Egbert of Wessex. There has been much scholarly debate over just what it meant to be a "bretwalda", and the extent of Ælle's actual power in southern England is an open question. It is also noteworthy that there is a long gap between Ælle and the second king on Bede's list, Ceawlin of Wessex, whose reign began in the late 6th century; this may indicate a period in which Anglo-Saxon dominance was interrupted in some way. Earlier sources than Bede exist which mention the South Saxons, though they do not name Ælle. The earliest reference is still quite late, however, at about 692: a charter of King Nothelm's, which styles him "King of the South Saxons". Charters are documents which granted land to followers or to churchmen, and which would be witnessed by the kings who had power to grant the land. They are one of the key documentary sources for Anglo-Saxon history, but no original charters survive from earlier than 679. There are other early writers whose works can shed light on Ælle's time, though they do not mention either him or his kingdom. Gildas's description of the state of Britain in his time is useful for understanding the ebb and flow of the Anglo-Saxon incursions. Procopius, a Byzantine historian, writing not long after Gildas, adds to the meagre sources on population movement by including a chapter on England in one of his works. He records that the peoples of Britain—he names the English, the British, and the Frisians—were so numerous that they were migrating to the kingdom of the Franks in great numbers every year, although this is probably a reference to Britons emigrating to Armorica to escape the Anglo-Saxons. They subsequently gave their name to the area they settled as Brittany, or "la petite Bretagne" (literally "little Britain"). The early dates given in the "Anglo-Saxon Chronicle" for the colonisation of Sussex are supported by an analysis of the place names of the region. The strongest evidence comes from place names that end in "-ing", such as Worthing and Angmering. These are known to derive from an earlier form ending in "-ingas". "Hastings", for example, derives from "Hæstingas", which may mean "the followers or dependents of a person named Hæsta", although others suggest the heavily Romanised region may have had names of Gallo-Roman origin derived from "-ienses". From west of Selsey Bill to east of Pevensey can be found the densest concentration of these names anywhere in Britain. There are a total of about forty-five place names in Sussex of this form; however, personal names either were not associated with these places or fell out of use. This does not necessarily mean that the Saxons killed or drove out almost all of the native population, despite the slaughter of the Britons reported in the "Chronicle" entry for 491; however, it does imply that the invasion was on a scale that left little space for the British. These lines of reasoning cannot prove the dates given in the "Chronicle", much less the details surrounding Ælle himself, but they do support the idea of an early conquest and the establishment of a settled kingdom. 
If the dates given by the "Anglo-Saxon Chronicle" are accurate to within half a century, then Ælle's reign lies in the middle of the Anglo-Saxon expansion, and prior to the final conquest of the Britons. It also seems consistent with the dates given to assume that Ælle's battles predate Mons Badonicus. This in turn would explain the long gap, of fifty or more years, in the succession of the "bretwaldas": if the peace gained by the Britons did indeed hold until the second half of the 6th century, it is not to be expected that an Anglo-Saxon leader should have anything resembling overlordship of England during that time. The idea of a pause in the Anglo-Saxon advance is also supported by the account in Procopius of 6th-century migration from Britain to the kingdom of the Franks. Procopius's account is consistent with what is known to be a contemporary colonisation of Armorica (now Brittany, in France); the settlers appear to have been at least partly from Dumnonia (modern Cornwall), and the area acquired regions known as Dumnonée and Cornouaille. It seems likely that something at that time was interrupting the general flow of the Anglo-Saxons from the continent to Britain. The dates for Ælle's battles are also reasonably consistent with what is known of events in the kingdom of the Franks at that time. Clovis I united the Franks into a single kingdom during the 480s and afterwards, and the Franks' ability to exercise power along the southern coast of the English Channel may have diverted Saxon adventurers to England rather than the continent. It is possible, therefore, that a historical king named Ælle existed, who arrived from the continent in the late 5th century, and who conquered much of what is now Sussex. He may have been a prominent war chief with a leadership role in a federation of Anglo-Saxon groups fighting for territory in Britain at that time. This may be the origin of the reputation that led Bede to list him as holding overlordship over southern Britain. The battles listed in the "Chronicle" are compatible with a conquest of Sussex from west to east, against British resistance stiff enough to last fourteen years. His area of military control may have extended as far as Hampshire and north to the upper Thames valley, but it certainly did not extend across all of England south of the Humber, as Bede asserts. The historian Guy Halsall argues that as Ælle immediately preceded the late sixth-century King Ceawlin as bretwalda, it is far more likely that Ælle dates to the mid-sixth century, and that the "Chronicle" has moved his dates back a century in order to provide a foundation myth for Sussex which puts it chronologically and geographically between the origins of the kingdoms of Kent and Wessex. Ælle's death is not recorded by the "Chronicle", which gives no information about him, or his sons, or the South Saxons until 675, when the South Saxon king Æthelwalh was baptized. It has been conjectured that, as Saxon war leader, Ælle may have met his death in the disastrous battle of Mount Badon, when the Britons halted Saxon expansion. If Ælle died within the borders of his own kingdom then it may well have been that he was buried on Highdown Hill with his weapons and ornaments, in the usual mode of burial among the South Saxons. Highdown Hill is the traditional burial-place of the kings of Sussex. 
American Airlines Flight 77 American Airlines Flight 77 was a scheduled American Airlines domestic transcontinental passenger flight from Washington Dulles International Airport in Dulles, Virginia, to Los Angeles International Airport in Los Angeles, California. The Boeing 757-223 aircraft serving the flight was hijacked by five men affiliated with al-Qaeda on September 11, 2001, as part of the September 11 attacks. They deliberately crashed the plane into the Pentagon in Arlington County, Virginia, near Washington, D.C., killing all 64 people on board, including the five hijackers and six crew, as well as 125 people in the building. Less than 35 minutes into the flight, the hijackers stormed the cockpit. They forced the passengers, crew, and pilots to the rear of the aircraft. Hani Hanjour, one of the hijackers who was trained as a pilot, assumed control of the flight. Unknown to the hijackers, passengers aboard made telephone calls to friends and family and relayed information on the hijacking. The hijackers crashed the aircraft into the western side of the Pentagon at 09:37 EDT. Many people witnessed the crash, and news sources began reporting on the incident within minutes. The impact severely damaged an area of the Pentagon and caused a large fire. A portion of the building collapsed; firefighters spent days working to fully extinguish the blaze. The damaged sections of the Pentagon were rebuilt in 2002, with occupants moving back into the completed areas that August. The 184 victims of the attack are memorialized in the Pentagon Memorial adjacent to the crash site. The park contains a bench for each of the victims, arranged according to their year of birth and ranging from 1930 to 1998. The hijackers on American Airlines Flight 77 were led by Hani Hanjour, who piloted the aircraft into the Pentagon. Hanjour first came to the United States in 1990. Hanjour trained at the CRM Airline Training Center in Scottsdale, Arizona, earning his FAA commercial pilot's certificate in April 1999. He had wanted to be a commercial pilot for the Saudi national airline but was rejected when he applied to the civil aviation school in Jeddah in 1999. Hanjour's brother later explained that, frustrated at not finding a job, Hanjour "increasingly turned his attention toward religious texts and cassette tapes of militant Islamic preachers". Hanjour returned to Saudi Arabia after being certified as a pilot, but left again in late 1999, telling his family that he was going to the United Arab Emirates to work for an airline. Hanjour likely went to Afghanistan, where al-Qaeda recruits were screened for special skills they might have. Having already selected the Hamburg cell members, al-Qaeda leaders chose Hanjour to lead the fourth team of hijackers. Alec Station, the CIA's unit dedicated to tracking Osama bin Laden, had discovered that two of the other hijackers, al-Hazmi and al-Mihdhar, had multiple-entry visas to the United States well before 9/11. Two FBI agents inside the unit tried to alert FBI headquarters, but CIA officers rebuffed them. In December 2000, Hanjour arrived in San Diego, joining "muscle" hijackers Nawaf al-Hazmi and Khalid al-Mihdhar, who had been there since January 2000. Soon after arriving, Hanjour and Hazmi left for Mesa, Arizona, where Hanjour began refresher training at Arizona Aviation. In April 2001, they relocated to Falls Church, Virginia, where they awaited the arrival of the remaining "muscle" hijackers. 
One of these men, Majed Moqed, arrived on May 2, 2001, with Flight 175 hijacker Ahmed al-Ghamdi from Dubai at Dulles International Airport. They moved into an apartment with Hazmi and Hanjour. On May 21, 2001, Hanjour rented a room in Paterson, New Jersey, where he stayed with other hijackers through the end of August. The last Flight 77 "muscle" hijacker, Salem al-Hazmi, arrived on June 29, 2001, with Abdulaziz al-Omari (a hijacker of Flight 11) at John F. Kennedy International Airport from the United Arab Emirates. They stayed with Hanjour. Hanjour received ground instruction and did practice flights at Air Fleet Training Systems in Teterboro, New Jersey, and at Caldwell Flight Academy in Fairfield, New Jersey. Hanjour moved out of the room in Paterson and arrived at the Valencia Motel in Laurel, Maryland, on September 2, 2001. While in Maryland, Hanjour and fellow hijackers trained at Gold's Gym in Greenbelt. On September 10, he completed a certification flight, using a terrain recognition system for navigation, at Congressional Air Charters in Gaithersburg, Maryland. On September 10, Nawaf al-Hazmi—accompanied by other hijackers—checked into the Marriott in Herndon, Virginia, near Dulles Airport. According to a U.S. State Department cable leaked in the WikiLeaks dump in February 2010, the FBI had investigated another suspect, Mohammed al-Mansoori. He had associated with three Qatari citizens who flew from Los Angeles to London (via Washington) and Qatar on the eve of the attacks, after allegedly surveying the World Trade Center and the White House. U.S. law enforcement officials said that the data about the four men was "just one of many leads that were thoroughly investigated at the time and never led to terrorism charges". An official added that the three Qatari citizens were never questioned by the FBI. Eleanor Hill, the former staff director for the congressional joint inquiry on the September 11 attacks, said the cable reinforces questions about the thoroughness of the FBI's investigation. She also said that the inquiry concluded that the hijackers had a support network that helped them in different ways. The three Qatari men were booked to fly from Los Angeles to Washington on September 10, 2001, on the same plane that was hijacked and piloted into the Pentagon the following day. Instead, they flew from Los Angeles to Qatar, via Washington and London. While the cable said that Mansoori was still under investigation, U.S. law enforcement officials said that there was no active investigation of him or of the Qatari citizens mentioned in the cable. The aircraft serving American Airlines Flight 77 was a Boeing 757-223, built and first flown in 1991. The flight crew included pilot Charles Burlingame (a Naval Academy graduate and former fighter pilot), First Officer David Charlebois, and flight attendants Michele Heidenberger, Jennifer Lewis, Kenneth Lewis, and Renee May. The capacity of the aircraft was 188 passengers, but with 58 passengers on September 11, the load factor was 33 percent. American Airlines said that Tuesdays were the least-traveled day of the week, with the same load factor seen on Tuesdays in the previous three months for Flight 77. On the morning of September 11, 2001, the five hijackers arrived at Washington Dulles International Airport. At 07:15, Khalid al-Mihdhar and Majed Moqed checked in at the American Airlines ticket counter for Flight 77, arriving at the passenger security checkpoint a few minutes later at 07:18. 
Both men set off the metal detector and were put through secondary screening. Moqed continued to set off the alarm, so he was searched with a hand wand. The Hazmi brothers checked in together at the ticket counter at 07:29. Hani Hanjour checked in separately and arrived at the passenger security checkpoint at 07:35. Hanjour was followed minutes later at the checkpoint by Salem and Nawaf al-Hazmi, who also set off the metal detector's alarm. The screener at the checkpoint never resolved what had set off the alarm. As seen in security footage later released, Nawaf al-Hazmi appeared to have an unidentified item in his back pocket. Utility knives up to four inches were permitted at the time by the Federal Aviation Administration (FAA) as carry-on items. The passenger security checkpoint at Dulles International Airport was operated by Argenbright Security, under contract with United Airlines. The hijackers were all selected for extra screening of their checked bags. Hanjour, al-Mihdhar, and Moqed were chosen by the Computer Assisted Passenger Prescreening System criteria, while the brothers Nawaf and Salem al-Hazmi were selected because they did not provide adequate identification and were deemed suspicious by the airline check-in agent. Hanjour, Mihdhar, and Nawaf al-Hazmi did not check any bags for the flight. Checked bags belonging to Moqed and Salem al-Hazmi were held until they boarded the aircraft. Flight 77 was scheduled to depart for Los Angeles at 08:10; 58 passengers boarded through Gate D26, including the five hijackers. Excluding the hijackers, the 59 other passengers and crew on board included 26 men, 22 women, and five children ranging in age from three to eleven. On the flight, Hani Hanjour was seated up front in 1B, while Salem and Nawaf al-Hazmi were seated in first class in seats 5E and 5F. Majed Moqed and Khalid al-Mihdhar were seated further back in 12A and 12B, in economy class. Flight 77 left the gate on time and took off from Runway 30 at Dulles at 08:20. The 9/11 Commission estimated that the flight was hijacked between 08:51 and 08:54, shortly after American Airlines Flight 11 struck the World Trade Center and not long after United Airlines Flight 175 had been hijacked. The last normal radio communication from the aircraft to air traffic control occurred at 08:50:51. Unlike on the other three flights, there were no reports of anyone being stabbed or of a bomb threat, and the pilots were not immediately killed but were instead moved to the back of the plane with the rest of the passengers. At 08:54, the plane began to deviate from its normal, assigned flight path and turned south. Two minutes later, at 08:56, the plane's transponder was switched off. The hijackers set the flight's autopilot on a course heading east towards Washington, D.C. The FAA was aware at this point that there was an emergency on board the airplane. By this time, Flight 11 had already crashed into the North Tower of the World Trade Center and Flight 175 was known to have been hijacked and was within minutes of striking the South Tower. After learning of this second hijacking involving an American Airlines aircraft and the hijacking involving United Airlines, American Airlines' executive vice president Gerard Arpey ordered a nationwide ground stop for the airline. The Indianapolis Air Traffic Control Center, as well as American Airlines dispatchers, made several unsuccessful attempts to contact the aircraft. At the time the airplane was hijacked, it was flying over an area of limited radar coverage. 
With air traffic controllers unable to contact the flight by radio, an Indianapolis official declared at 09:09 that the Boeing 757 had possibly crashed. Two people on the aircraft made phone calls to contacts on the ground. At 09:12, flight attendant Renee May called her mother, Nancy May, in Las Vegas. During the call, which lasted nearly two minutes, May said her flight was being hijacked by six persons, and that staff and passengers had been moved to the rear of the airplane. May asked her mother to contact American Airlines, which she and her husband promptly did; American Airlines was already aware of the hijacking. Between 09:16 and 09:26, passenger Barbara Olson called her husband, United States Solicitor General Theodore Olson, and reported that the airplane had been hijacked and that the assailants had box cutters and knives. She reported that the passengers, including the pilots, had been moved to the back of the cabin and that the hijackers were unaware of her call. A minute into the conversation, the call was cut off. Theodore Olson contacted the command center at the Department of Justice and tried unsuccessfully to contact Attorney General John Ashcroft. About five minutes later, Barbara Olson called again, told her husband that the "pilot" (possibly Hanjour on the cabin intercom) had announced the flight was hijacked, and asked, "What do I tell the pilot to do?" Ted Olson asked her location and she reported the plane was flying low over a residential area. He told her of the attacks on the World Trade Center. Soon afterward, the call cut off again. The airplane was detected again by Dulles controllers on their radar screens as it approached Washington, turning and descending rapidly. Controllers initially thought it was a military fighter, due to its high speed and maneuvering. Reagan Airport controllers asked a passing Air National Guard Lockheed C-130 Hercules to identify and follow the aircraft. The pilot, Lt. Col. Steven O'Brien, told them it was a Boeing 757 or 767, and that its silver fuselage meant it was probably an American Airlines jet. He had difficulty picking out the airplane in the "East Coast haze", but then saw a "huge" fireball and initially assumed it had hit the ground. Approaching the Pentagon, he saw the impact site on the building's west side and reported to Reagan control, "Looks like that aircraft crashed into the Pentagon, sir." According to the 9/11 Commission Report, as Flight 77 was west-southwest of the Pentagon, it made a 330-degree turn. At the end of the turn, it was descending, pointed toward the Pentagon and downtown Washington. Hani Hanjour advanced the throttles to maximum power and dived toward the Pentagon. Flying level just above the ground in the final seconds, the airplane's wings knocked over five street lampposts and its right wing struck a portable generator, creating a smoke trail moments before it smashed into the Pentagon. Flight 77, flying at 530 mph (853 km/h, 237 m/s, or 460 knots) over the Navy Annex Building adjacent to Arlington National Cemetery, crashed into the western side of the Pentagon in Arlington County, Virginia, just south of Washington, D.C., at 09:37:46. The plane hit the Pentagon at the first-floor level, and at the moment of impact the airplane was rolled slightly to the left, with the right wing elevated. The front part of the fuselage disintegrated on impact, while the mid and tail sections kept moving for another fraction of a second, with tail section debris penetrating furthest into the building.
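As a quick reader's check on the speed figures quoted above (this is not part of any official report; standard conversion factors of 1 mi = 1.609 km and 1 kn = 1.151 mph are assumed):
\[
530\ \text{mph} \times 1.609 \approx 853\ \text{km/h}, \qquad \frac{853\ \text{km/h}}{3.6} \approx 237\ \text{m/s}, \qquad \frac{530\ \text{mph}}{1.151} \approx 460\ \text{kn},
\]
so the four parenthetical values given for the aircraft's final speed are mutually consistent.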
In all, the airplane took eight-tenths of a second to fully penetrate the three outermost of the building's five rings and unleashed a fireball that rose above the building. At the time of the attacks, approximately 18,000 people worked in the Pentagon, about 4,000 fewer than before renovations began in 1998. The section of the Pentagon that was struck, which had recently been renovated at a cost of $250 million, housed the Naval Command Center. In all, there were 189 deaths at the Pentagon site: 125 in the Pentagon building and 64 on board the aircraft. Passenger Barbara Olson was en route to a recording of the TV show "Politically Incorrect". A group of children, their chaperones, and two National Geographic Society staff members were also on board, embarking on an educational trip west to the Channel Islands National Marine Sanctuary near Santa Barbara, California. The fatalities at the Pentagon included 55 military personnel and 70 civilians. Of those 125 killed, 92 were on the first floor, 31 were on the second floor, and two were on the third. Seven Defense Intelligence Agency civilian employees were killed, while the Office of the Secretary of Defense lost one contractor. The U.S. Army suffered 75 fatalities—53 civilians (47 employees and six contractors) and 22 soldiers—while the U.S. Navy suffered 42 fatalities—nine civilians (six employees and three contractors) and 33 sailors. Lieutenant General Timothy Maude, an Army Deputy Chief of Staff, was the highest-ranking military officer killed at the Pentagon; also killed was retired Rear Admiral Wilson Flagg, a passenger on the plane. LT Mari-Rae Sopper, JAGC, USNR, was also on board the flight, and was the first Navy Judge Advocate ever to be killed in action. Another 106 people were injured on the ground and were treated at area hospitals. On the side where the plane hit, the Pentagon is bordered by Interstate 395 and Washington Boulevard. Motorist Mary Lyman, who was on I-395, saw the airplane pass over at a "steep angle toward the ground and going fast" and then saw the cloud of smoke from the Pentagon. Omar Campo, another witness, was cutting the grass on the other side of the road when the airplane flew over his head. Afework Hagos, a computer programmer, was on his way to work and stuck in a traffic jam near the Pentagon when the airplane flew over. "There was a huge screaming noise and I got out of the car as the plane came over. Everybody was running away in different directions. It was tilting its wings up and down like it was trying to balance. It hit some lampposts on the way in." Daryl Donley witnessed the crash and took some of the first photographs of the site. "USA Today" reporter Mike Walter was driving on Washington Boulevard when he witnessed the crash. Terrance Kean, who lived in a nearby apartment building, heard the noise of loud jet engines, glanced out his window, and saw a "very, very large passenger jet". He watched "it just plow right into the side of the Pentagon. The nose penetrated into the portico. And then it sort of disappeared, and there was fire and smoke everywhere." Tim Timmerman, himself a pilot, noticed American Airlines markings on the aircraft as he saw it hit the Pentagon. Other drivers on Washington Boulevard, Interstate 395, and Columbia Pike witnessed the crash, as did people in Pentagon City, Crystal City, and other nearby locations.
Former Georgetown University basketball coach John Thompson had originally booked a ticket on Flight 77. As he would tell the story many times in the following years, including in a September 12, 2011, interview on Jim Rome's radio show, he had been scheduled to appear on that show on September 12, 2001. Thompson was planning to be in Las Vegas for a friend's birthday on September 13, and initially insisted on traveling to Rome's Los Angeles studio on the 11th. However, this did not work for the show, which wanted him to travel on the day of the broadcast. After a Rome staffer personally assured Thompson that he would be able to travel from Los Angeles to Las Vegas immediately after the show, Thompson changed his travel plans. He felt the impact from the crash at his home near the Pentagon. Rescue efforts began immediately after the crash. Almost all the successful rescues of survivors occurred within half an hour of the impact. Initially, rescue efforts were led by the military and civilian employees within the building. Within minutes, the first fire companies arrived and found these volunteers searching near the impact site. The firefighters ordered them to leave, as the volunteers were not properly equipped or trained to deal with the hazards. The Arlington County Fire Department (ACFD) assumed command of the immediate rescue operation within 10 minutes of the crash. ACFD Assistant Chief James Schwartz implemented an incident command system (ICS) to coordinate response efforts among multiple agencies. It took about an hour for the ICS structure to become fully operational. Firefighters from Fort Myer and Reagan National Airport arrived within minutes. Rescue and firefighting efforts were impeded by rumors of additional incoming planes. Chief Schwartz ordered two evacuations during the day in response to these rumors. As firefighters attempted to extinguish the fires, they watched the building warily, fearing a structural collapse. One firefighter remarked that they "pretty much knew the building was going to collapse because it started making weird sounds and creaking". Officials saw a cornice of the building move and ordered an evacuation. Minutes later, at 10:10, the upper floors of the damaged area of the Pentagon collapsed. The amount of time between impact and collapse allowed everyone on the fourth and fifth levels to evacuate safely before the structure came down. After the collapse, the interior fires intensified, spreading through all five floors. After 11:00, firefighters mounted a two-pronged attack against the fires, which officials estimated were burning at extremely high temperatures. While progress was made against the interior fires by late afternoon, firefighters realized that a flammable layer of wood under the Pentagon's slate roof had caught fire and begun to spread. Typical firefighting tactics were rendered useless by the reinforced structure, as firefighters were unable to reach the fire to extinguish it. Firefighters instead cut firebreaks in the roof on September 12 to prevent further spreading. At 18:00 on the 12th, Arlington County issued a press release stating the fire was "controlled" but not fully "extinguished". Firefighters continued to put out smaller fires that ignited in the succeeding days. Various pieces of aircraft debris were found within the wreckage at the Pentagon. While on fire and escaping from the Navy Command Center, Lt.
Kevin Shaeffer observed a chunk of the aircraft's nose cone and the nose landing gear in the service road between rings B and C. Early in the morning on Friday, September 14, Fairfax County Urban Search and Rescue Team members Carlton Burkhammer and Brian Moravitz came across an "intact seat from the plane's cockpit", while paramedics and firefighters located the two black boxes near the punch-out hole in the A-E drive, deep inside the building. The cockpit voice recorder was too badly damaged to retrieve any information, though the flight data recorder yielded useful data. Investigators also found a part of Nawaf al-Hazmi's driver's license in the North Parking Lot rubble pile. Personal effects belonging to victims were found and taken to Fort Myer. Army engineers determined by 5:30 p.m. on the first day that no one remained alive in the damaged section of the building. In the days after the crash, news reports emerged that up to 800 people had died. Army soldiers from Fort Belvoir were the first teams to survey the interior of the crash site and noted the presence of human remains. Federal Emergency Management Agency (FEMA) Urban Search and Rescue teams, including Fairfax County Urban Search and Rescue, assisted the search for remains, working through the National Interagency Incident Management System (NIIMS). Kevin Rimrodt, a Navy photographer surveying the Navy Command Center after the attacks, remarked that "there were so many bodies, I'd almost step on them. So I'd have to really take care to look backwards as I'm backing up in the dark, looking with a flashlight, making sure I'm not stepping on somebody". Debris from the Pentagon was taken to the Pentagon's north parking lot for a more detailed search for remains and evidence. Remains recovered from the Pentagon were photographed and turned over to the Armed Forces Medical Examiner's office, located at Dover Air Force Base in Delaware. The medical examiner's office was able to identify remains belonging to 179 of the victims. Investigators eventually identified 184 of the 189 people who died in the attack. The remains of the five hijackers were identified through a process of elimination and were turned over as evidence to the Federal Bureau of Investigation (FBI). On September 21, the ACFD relinquished control of the crime scene to the FBI. The Washington Field Office, National Capital Response Squad (NCRS), and the Joint Terrorism Task Force (JTTF) led the crime scene investigation at the Pentagon. By October 2, 2001, the search for evidence and remains was complete and the site was turned over to Pentagon officials. In 2002, the remains of 25 victims were buried collectively at Arlington National Cemetery, with a five-sided granite marker inscribed with the names of all the victims killed at the Pentagon. The ceremony also honored the five victims whose remains were never found. At around 3:40 a.m. on September 14, a paramedic and a firefighter who were searching through the debris of the impact site found two dark boxes. They called for an FBI agent, who in turn called for someone from the National Transportation Safety Board (NTSB). The NTSB employee confirmed that these were the flight recorders ("black boxes") from American Airlines Flight 77. Dick Bridges, deputy manager for Arlington County, Virginia, said the cockpit voice recorder was damaged on the outside and the flight data recorder was charred. Bridges said the recorders were found "right where the plane came into the building."
The cockpit voice recorder was transported to the NTSB lab in Washington, D.C., to see what data was salvageable. In its report, the NTSB identified the unit as an L-3 Communications, Fairchild Aviation Recorders model A-100A cockpit voice recorder—a device which records on magnetic tape. No usable segments of tape were found inside the recorder; according to the NTSB's report, "[t]he majority of the recording tape was fused into a solid block of charred plastic". On the other hand, all the data from the flight data recorder, which stored data in solid-state memory, was recovered. At the moment of impact, Secretary of Defense Donald Rumsfeld was in his office on the other side of the Pentagon, away from the crash site. He ran to the site and assisted the injured. Rumsfeld then returned to his office and went to a conference room in the Executive Support Center, where he joined a secure video teleconference with Vice President Dick Cheney and other officials. On the day of the attacks, DoD officials considered moving their command operations to Site R, a backup facility in Pennsylvania. Secretary Rumsfeld insisted on remaining at the Pentagon and sent Deputy Secretary Paul Wolfowitz to Site R. The National Military Command Center (NMCC) continued to operate at the Pentagon, even as smoke entered the facility. Engineers and building managers manipulated the ventilation and other building systems that still functioned to draw smoke out of the NMCC and bring in fresh air. During a press conference held inside the Pentagon at 18:42, Rumsfeld announced, "The Pentagon's functioning. It will be in business tomorrow." Pentagon employees returned the next day to offices in mostly unaffected areas of the building. By the end of September, more workers had returned to the lightly damaged areas of the Pentagon. Early estimates were that rebuilding the damaged section of the Pentagon would take three years to complete. However, the project moved forward at an accelerated pace and was completed by the one-year anniversary of the attack. The rebuilt section of the Pentagon includes a small indoor memorial and chapel at the point of impact. An outdoor memorial, commissioned by the Pentagon and designed by Julie Beckman and Keith Kaseman, was completed on schedule for its dedication on September 11, 2008. American Airlines has continued to fly from Dulles International Airport to Los Angeles International Airport since the attacks; as of August 2016, the flight had been renumbered from 77 to 2636, was operated with a Boeing 737-800, and departed at 7:40 in the morning. On May 16, 2006, the Department of Defense released security camera footage of American Airlines Flight 77 crashing into the Pentagon, with the plane visible in one frame as a "thin white blur", followed by an explosion. The images were made public in response to a December 2004 Freedom of Information Act request by Judicial Watch. Some still images from the video had previously been released and publicly circulated, but this was the first official release of the edited video of the crash. A nearby Citgo service station also had security cameras, but a video released on September 15, 2006, did not show the crash because the camera was pointed away from the crash site. The Doubletree Hotel, located nearby in Crystal City, Virginia, also had a security camera video. The FBI released that video on December 4, 2006, in response to a FOIA lawsuit filed by Scott Bingham.
The footage is "grainy and the focus is soft, but a rapidly growing tower of smoke is visible in the distance on the upper edge of the frame as the plane crashes into the building". On September 12, 2002, Defense Secretary Donald Rumsfeld and General Richard Myers, Chairman of the Joint Chiefs of Staff, dedicated the Victims of Terrorist Attack on the Pentagon Memorial at Arlington National Cemetery. The memorial specifically honors the five individuals for whom no identifiable remains were found. These included Dana Falkenberg, age three, who was aboard American Airlines Flight 77 with her parents and older sister. A portion of the remains of 25 other victims are also buried at the site. The memorial is a pentagonal granite marker. On five sides of the memorial along the top are inscribed the words "Victims of Terrorist Attack on the Pentagon September 11, 2001". Aluminum plaques, painted black, are inscribed with the names of the 184 victims of the terrorist attack. The site is located in Section 64, on a slight rise, which gives it a view of the Pentagon. At the National September 11 Memorial, the names of the Pentagon victims are inscribed on the South Pool, on Panels S-1 and S-72 – S-76. The Pentagon Memorial, located just southwest of The Pentagon in Arlington County, Virginia, is a permanent outdoor memorial to the 184 people who died as victims in the building and on American Airlines Flight 77 during the September 11 attacks. Designed by Julie Beckman and Keith Kaseman of the architectural firm of Kaseman Beckman Advanced Strategies with engineers Buro Happold, the memorial opened on September 11, 2008, seven years after the attack. Antlia Antlia (from Ancient Greek "ἀντλία") is a constellation in the Southern Celestial Hemisphere. Its name means "pump" in Latin; it represents an air pump. Originally Antlia Pneumatica, the constellation was established by Nicolas-Louis de Lacaille in the 18th century, though its name was later abbreviated by John Herschel. Located close to the stars forming the old constellation of the ship Argo Navis, Antlia is completely visible from latitudes south of 49 degrees north. Antlia is a faint constellation; its brightest star is Alpha Antliae, an orange giant that is a suspected variable star, ranging between apparent magnitudes 4.22 and 4.29. S Antliae is an eclipsing binary star system, changing in brightness as one star passes in front of the other. Sharing a common envelope, the stars are so close they will one day merge to form a single star. Two star systems with known exoplanets, HD 93083 and WASP-66, lie within Antlia, as do NGC 2997, a spiral galaxy, and the Antlia Dwarf Galaxy. The French astronomer Nicolas-Louis de Lacaille first described the constellation in French as "la Machine Pneumatique" (the Pneumatic Machine) in 1751–52, commemorating the air pump invented by the French physicist Denis Papin. De Lacaille had observed and catalogued almost 10,000 southern stars during a two-year stay at the Cape of Good Hope, devising fourteen new constellations in uncharted regions of the Southern Celestial Hemisphere not visible from Europe. He named all but one in honour of instruments that symbolised the Age of Enlightenment. Lacaille depicted Antlia as a single-cylinder vacuum pump used in Papin's initial experiments, while German astronomer Johann Bode chose the more advanced double-cylinder version.
Lacaille Latinised the name to "Antlia pneumatica" on his 1763 chart. English astronomer John Herschel proposed shrinking the name to one word in 1844, noting that Lacaille himself had abbreviated his constellations thus on occasion. The shortened name was universally adopted, and the International Astronomical Union adopted Antlia as one of the 88 modern constellations in 1922. Although above the horizon and hence visible to the Ancient Greeks, Antlia's stars were too faint to have been included in any ancient constellations. The stars that now comprise Antlia lay within an area of the sky covered by the ancient constellation Argo Navis, the Ship of the Argonauts, which due to its immense size was split into several smaller constellations by Lacaille in 1763. Ridpath reports that due to their faintness, the stars of Antlia did not make up part of the classical depiction of Argo Navis. Chinese astronomers were able to view what is modern Antlia from their latitudes, and incorporated its stars into two different constellations. Several stars in the southern part of Antlia were a portion of "Dong'ou", which represented an area in southern China. Furthermore, Epsilon, Eta, and Theta Antliae were incorporated into the celestial temple, which also contained stars from modern Pyxis. Covering 238.9 square degrees and hence 0.579% of the sky, Antlia ranks 62nd of the 88 modern constellations by area. Its position in the Southern Celestial Hemisphere means that the whole constellation is visible to observers south of 49°N. Hydra the sea snake runs along the length of its northern border, while Pyxis the compass, Vela the sails, and Centaurus the centaur line it to the west, south and east respectively. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union, is Ant. The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of twelve segments; in the equatorial coordinate system, the declination coordinates of these borders lie between −24.54° and −40.42°. Lacaille gave nine stars Bayer designations, labelling them Alpha through to Theta, including two stars next to each other as Zeta. Gould later added a tenth, Iota Antliae. Beta and Gamma Antliae (now HR 4339 and HD 90156) ended up in the neighbouring constellation Hydra once the constellation boundaries were delineated in 1930. Within the constellation's borders, there are 42 stars brighter than or equal to apparent magnitude 6.5. The constellation's two brightest stars—Alpha and Epsilon Antliae—shine with a reddish tinge. Alpha is an orange giant of spectral type K4III that is a suspected variable star, ranging between apparent magnitudes 4.22 and 4.29. It is located 370 ± 20 light-years away from Earth. Estimated to be shining with around 480 to 555 times the luminosity of the Sun, it is most likely an ageing star that is brightening and on its way to becoming a Mira variable star, having converted all its core fuel into carbon. Located 710 ± 40 light-years from Earth, Epsilon Antliae is an evolved orange giant star of spectral type K3 IIIa that has swollen to a diameter about 69 times that of the Sun and a luminosity of around 1279 Suns. It is slightly variable. At the other end of Antlia, Iota Antliae is likewise an orange giant, of spectral type K1 III. Located near Alpha is Delta Antliae, a binary star, 430 ± 30 light-years distant from Earth.
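As a hedged aside, the area and visibility figures quoted above for Antlia can be checked directly from the boundary data; the only assumed constant is the total area of the celestial sphere, about 41,253 square degrees:
\[
\frac{238.9}{41{,}253} \approx 0.579\%, \qquad \varphi_{\max} \approx 90^\circ - 40.42^\circ \approx 49.6^\circ\ \text{N},
\]
where the second expression gives the northernmost latitude from which the constellation's southern boundary (declination −40.42°) still clears the horizon, matching the quoted limit of roughly 49°N.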
The primary of Delta Antliae is a blue-white main sequence star of spectral type B9.5V and magnitude 5.6, and the secondary is a yellow-white main sequence star of spectral type F9Ve and magnitude 9.6. Zeta Antliae is a wide optical double star. The brighter star—Zeta¹ Antliae—is 410 ± 40 light-years distant and has a magnitude of 5.74, though it is a true binary star system composed of two white main sequence stars of magnitudes 6.20 and 7.01 that are separated by 8.042 arcseconds. The fainter star—Zeta² Antliae—is 380 ± 20 light-years distant and of magnitude 5.9. Eta Antliae is another double, composed of a yellow-white star of spectral type F1V and magnitude 5.31, with a companion of magnitude 11.3. Theta Antliae is likewise double, most likely composed of an A-type main sequence star and a yellow giant. S Antliae is an eclipsing binary star system that varies in apparent magnitude from 6.27 to 6.83 over a period of 15.6 hours. The system is classed as a W Ursae Majoris variable—the primary is hotter than the secondary and the drop in magnitude is caused by the latter passing in front of the former. Calculating the properties of the component stars from the orbital period indicates that the primary star has a mass 1.94 times and a diameter 2.026 times that of the Sun, and the secondary has a mass 0.76 times and a diameter 1.322 times that of the Sun. The two stars have similar luminosity and spectral type as they have a common envelope and share stellar material. The system is thought to be around 5–6 billion years old. The two stars will eventually merge to form a single fast-spinning star. T Antliae is a yellow-white supergiant of spectral type F6Iab and a Classical Cepheid variable, ranging between magnitude 8.88 and 9.82 over 5.9 days. U Antliae is a red C-type carbon star and an irregular variable that ranges between magnitudes 5.27 and 6.04. Approximately 900 light-years distant, it is around 5819 times as luminous as the Sun. BF Antliae is a Delta Scuti variable that varies by 0.01 of a magnitude. HR 4049, also known as AG Antliae, is an unusual hot variable ageing star of spectral type B9.5Ib-II. It is undergoing intense mass loss and is a unique variable that does not belong to any known class of variable star, ranging between magnitudes 5.29 and 5.83 with a period of 429 days. UX Antliae is an R Coronae Borealis variable with a baseline apparent magnitude of around 11.85, with irregular dimmings down to below magnitude 18.0. A luminous and remote star, it is a supergiant with a spectrum resembling that of a yellow-white F-type star, but it has almost no hydrogen. HD 93083 is an orange dwarf star of spectral type K3V that is smaller and cooler than the Sun. It has a planet that was discovered by the radial velocity method with the HARPS spectrograph in 2005. About as massive as Saturn, the planet orbits its star with a period of 143 days at a mean distance of 0.477 AU. WASP-66 is a sunlike star of spectral type F4V. A planet with 2.3 times the mass of Jupiter orbits it every 4 days, discovered by the transit method in 2012. DEN 1048-3956 is a brown dwarf of spectral type M8, located around 13 light-years distant from Earth. At magnitude 17, it is much too faint to be seen with the unaided eye. It has a surface temperature of about 2500 K. Two powerful flares lasting 4–5 minutes each were detected in 2002.
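As a rough, hedged illustration of how the 15.6-hour period constrains the S Antliae system described above, Kepler's third law in solar units (separation a in AU, period P in years, masses in solar masses) gives
\[
a^3 \approx (M_1 + M_2)\,P^2 = (1.94 + 0.76)\left(\frac{15.6}{8766}\right)^2 \approx 8.6\times 10^{-6}, \qquad a \approx 0.020\ \text{AU} \approx 4.4\ R_\odot,
\]
a separation of only a few solar radii, which is consistent with a contact system whose stars share a common envelope and are expected eventually to merge.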
2MASS 0939-2448 is a system of two cool and faint brown dwarfs, probably with effective temperatures of about 500 and 700 K and masses of about 25 and 40 times that of Jupiter, though it is also possible that both objects have temperatures of 600 K and masses of 30 Jupiter masses. Antlia contains many faint galaxies, the brightest of which is NGC 2997 at magnitude 10.6. It is a loosely wound face-on spiral galaxy of type Sc. Though nondescript in most amateur telescopes, it presents bright clusters of young stars and many dark dust lanes in photographs. Discovered in 1997, the Antlia Dwarf is a dwarf spheroidal galaxy of apparent magnitude 14.8 that belongs to the Local Group of galaxies. The Antlia Cluster, also known as Abell S0636, is a cluster of galaxies located in the Hydra-Centaurus Supercluster; it is the third-nearest cluster to the Local Group after the Virgo Cluster and the Fornax Cluster. Located in the southeastern corner of the constellation, it boasts the giant elliptical galaxies NGC 3268 and NGC 3258 as the main members of a southern and northern subgroup respectively, and contains around 234 galaxies in total. Apus Apus is a small constellation in the southern sky. It represents a bird-of-paradise, and its name means "without feet" in Greek because the bird-of-paradise was once wrongly believed to lack feet. First depicted on a celestial globe by Petrus Plancius in 1598, it was charted on a star atlas by Johann Bayer in his 1603 "Uranometria". The French explorer and astronomer Nicolas Louis de Lacaille charted and gave the brighter stars their Bayer designations in 1756. The five brightest stars are all reddish in hue. Shading the others at apparent magnitude 3.8 is Alpha Apodis, an orange giant that has around 48 times the diameter and 928 times the luminosity of the Sun. Marginally fainter is Gamma Apodis, another ageing giant star. Delta Apodis is a double star, the two components of which are 103 arcseconds apart and visible with the naked eye. Two star systems have been found to have planets. Apus was one of twelve constellations created by Petrus Plancius from the observations of Pieter Dirkszoon Keyser and Frederick de Houtman, who had sailed on the first Dutch trading expedition, known as the "Eerste Schipvaart", to the East Indies. It first appeared on a 35-cm (14 in) diameter celestial globe published in 1598 in Amsterdam by Plancius with Jodocus Hondius. De Houtman included it in his southern star catalogue in 1603 under the Dutch name "De Paradijs Voghel", "The Bird of Paradise", and Plancius called the constellation "Paradysvogel Apis Indica"; the first word is Dutch for "bird of paradise". "Apis" (Latin for "bee") is assumed to have been a typographical error for "avis" ("bird"). After its introduction on Plancius's globe, the constellation's first known appearance in a celestial atlas was in German cartographer Johann Bayer's "Uranometria" of 1603. Bayer called it "Apis Indica", while fellow astronomers Johannes Kepler and his son-in-law Jakob Bartsch called it "Apus" or "Avis Indica". The name "Apus" is derived from the Greek "apous", meaning "without feet". This referred to the Western misconception that the bird-of-paradise had no feet, which arose because the only specimens available in the West had their feet and wings removed. Such specimens began to arrive in Europe in 1522, when the survivors of Ferdinand Magellan's expedition brought them home.
The constellation later lost some of its tail when Nicolas-Louis de Lacaille used those stars to establish Octans in the 1750s. Covering 206.3 square degrees and hence 0.5002% of the sky, Apus ranks 67th of the 88 modern constellations by area. Its position in the Southern Celestial Hemisphere means that the whole constellation is visible to observers south of 7°N. It is bordered by Ara, Triangulum Australe and Circinus to the north, Musca and Chamaeleon to the west, Octans to the south, and Pavo to the east. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is 'Aps'. The official constellation boundaries, as set by Eugène Delporte in 1930, are defined by a polygon of six segments; in the equatorial coordinate system, the declination coordinates of these borders lie between −67.48° and −83.12°. Lacaille gave twelve stars Bayer designations, labelling them Alpha through to Kappa, including two stars next to each other as Delta and another two stars near each other as Kappa. Within the constellation's borders, there are 39 stars brighter than or equal to apparent magnitude 6.5. Beta, Gamma and Delta Apodis form a narrow triangle, with Alpha Apodis lying to the east. The five brightest stars are all red-tinged, which is unusual among constellations. Alpha Apodis is an orange giant of spectral type K3III located 447 ± 8 light-years away from Earth, with an apparent magnitude of 3.8. It spent much of its life as a blue-white (B-type) main sequence star before expanding, cooling and brightening as it used up its core hydrogen. It has swollen to 48 times the Sun's diameter, and shines with a luminosity approximately 928 times that of the Sun, with a surface temperature of 4312 K. Beta Apodis is an orange giant 157 ± 2 light-years away, with a magnitude of 4.2. It is around 1.84 times as massive as the Sun, with a surface temperature of 4677 K. Gamma Apodis is a yellow giant of spectral type G8III located 156 ± 1 light-years away, with a magnitude of 3.87. It is approximately 63 times as luminous as the Sun, with a surface temperature of 5279 K. Delta Apodis is a double star, the two components of which are 103 arcseconds apart and visible through binoculars. Delta¹ is a red giant star of spectral type M4III located 760 ± 30 light-years away. It is a semiregular variable that varies from magnitude +4.66 to +4.87, with pulsations of multiple periods of 68.0, 94.9 and 101.7 days. Delta² is an orange giant star of spectral type K3III, located 610 ± 30 light-years away, with a magnitude of 5.3. The separate components can be resolved with the naked eye. The fifth-brightest star is Zeta Apodis at magnitude 4.8, a star that has swollen and cooled to become an orange giant of spectral type K1III, with a surface temperature of 4649 K and a luminosity 133 times that of the Sun. It is 297 ± 8 light-years distant. Near Zeta is Iota Apodis, a binary star system around 1300 light-years distant that is composed of two blue-white main sequence stars which orbit each other every 59.32 years. Of spectral types B9V and B9.5V, they are both over three times as massive as the Sun. Eta Apodis is a white main sequence star located 138 ± 1 light-years distant. Of apparent magnitude 4.89, it is 1.77 times as massive as the Sun, 15.5 times as luminous, and has 2.13 times its radius.
Aged 250 ± 200 million years, this star is emitting an excess of 24 μm infrared radiation, which may be caused by a debris disk of dust orbiting at a distance of more than 31 astronomical units from it. Theta Apodis is a cool red giant of spectral type M7 III located 370 ± 20 light-years distant. It shines with a luminosity approximately 3879 times that of the Sun and has a surface temperature of 3151 K. A semiregular variable, it varies by 0.56 magnitudes with a period of 119 days—or approximately 4 months. It is losing mass through its stellar wind; dusty material ejected from the star is interacting with the surrounding interstellar medium, forming a bow shock as the star moves through the galaxy. NO Apodis is a red giant of spectral type M3III that varies between magnitudes 5.71 and 5.95. Located around 883 light-years distant, it shines with a luminosity estimated at 2059 times that of the Sun and has a surface temperature of 3568 K. S Apodis is a rare R Coronae Borealis variable, an extremely hydrogen-deficient supergiant thought to have arisen as the result of the merger of two white dwarfs; fewer than 100 had been discovered as of 2012. It has a baseline magnitude of 9.7. R Apodis is a star that was given a variable star designation yet has turned out not to be variable. Of magnitude 5.3, it is another orange giant. Two star systems have had exoplanets discovered by Doppler spectroscopy, and the substellar companion of a third star system—the sunlike star HD 131664—has since been found to be a brown dwarf with a calculated mass of 23 times that of Jupiter (a minimum of 18 and a maximum of 49 Jovian masses). HD 134606 is a yellow sunlike star of spectral type G6IV that has begun expanding and cooling off the main sequence. Three planets orbit it with periods of 12, 59.5 and 459 days, successively larger as they are further away from the star. HD 137388 is another star—of spectral type K2IV—that is cooler than the Sun and has begun cooling off the main sequence. Around 47% as luminous and 88% as massive as the Sun, with 85% of its diameter, it is thought to be around 7.4 ± 3.9 billion years old. It has a planet that is 79 times as massive as the Earth and orbits its star every 330 days at an average distance of 0.89 astronomical units (AU). The Milky Way covers much of the constellation's area. Of the deep-sky objects in Apus, there are two prominent globular clusters—NGC 6101 and IC 4499—and a large faint nebula that covers several degrees east of Beta and Gamma Apodis. NGC 6101 is a globular cluster of apparent magnitude 9.2 located around 50,000 light-years distant from Earth, which is around 160 light-years across. Around 13 billion years old, it contains a high concentration of massive bright stars known as blue stragglers, thought to be the result of two stars merging. IC 4499 is a loose globular cluster in the medium-far galactic halo; its apparent magnitude is 10.6. The galaxies in the constellation are faint. IC 4633 is a very faint spiral galaxy surrounded by a vast amount of Milky Way line-of-sight integrated flux nebulae—large faint clouds thought to be lit by large numbers of stars. A Vindication of the Rights of Woman A Vindication of the Rights of Woman: with Strictures on Political and Moral Subjects (1792), written by the 18th-century British proto-feminist Mary Wollstonecraft, is one of the earliest works of feminist philosophy.
In it, Wollstonecraft responds to those educational and political theorists of the 18th century who did not believe women should have an education. She argues that women ought to have an education commensurate with their position in society, claiming that women are essential to the nation because they educate its children and because they could be "companions" to their husbands, rather than mere wives. Instead of viewing women as ornaments to society or property to be traded in marriage, Wollstonecraft maintains that they are human beings deserving of the same fundamental rights as men. Wollstonecraft was prompted to write the "Rights of Woman" after reading Charles Maurice de Talleyrand-Périgord's 1791 report to the French National Assembly, which stated that women should only receive a domestic education; she used her commentary on this specific event to launch a broad attack against sexual double standards and to indict men for encouraging women to indulge in excessive emotion. Wollstonecraft wrote the "Rights of Woman" hurriedly to respond directly to ongoing events; she intended to write a more thoughtful second volume but died before completing it. While Wollstonecraft does call for equality between the sexes in particular areas of life, such as morality, she does not explicitly state that men and women are equal. Her ambiguous statements regarding the equality of the sexes have since made it difficult to classify Wollstonecraft as a modern feminist, particularly since the word and the concept were unavailable to her. Although it is commonly assumed now that the "Rights of Woman" was unfavourably received, this is a modern misconception based on the belief that Wollstonecraft was as reviled during her lifetime as she became after the publication of William Godwin's "Memoirs of the Author of A Vindication of the Rights of Woman" (1798). The "Rights of Woman" was actually well received when it was first published in 1792. One biographer has called it "perhaps the most original book of [Wollstonecraft's] century". "A Vindication of the Rights of Woman" was written against the tumultuous background of the French Revolution and the debates that it spawned in Britain. In a lively and sometimes vicious pamphlet war, now referred to as the "Revolution controversy", British political commentators addressed topics ranging from representative government to human rights to the separation of church and state, many of these issues having been raised in France first. Wollstonecraft first entered this fray in 1790 with "A Vindication of the Rights of Men", a response to Edmund Burke's "Reflections on the Revolution in France" (1790). In his "Reflections", Burke criticized the view of many British thinkers and writers who had welcomed the early stages of the French revolution. While they saw the revolution as analogous to Britain's own Glorious Revolution in 1688, which had restricted the powers of the monarchy, Burke argued that the appropriate historical analogy was the English Civil War (1642–1651) in which Charles I had been executed in 1649. He viewed the French revolution as the violent overthrow of a legitimate government. In "Reflections" he argues that citizens do not have the right to revolt against their government because civilization is the result of social and political consensus; its traditions cannot be continually challenged—the result would be anarchy. 
One of the key arguments of Wollstonecraft's "Rights of Men", published just six weeks after Burke's "Reflections", is that rights cannot be based on tradition; rights, she argues, should be conferred because they are reasonable and just, regardless of their basis in tradition. When Charles Maurice de Talleyrand-Périgord presented his "Rapport sur l'instruction publique" (1791) to the National Assembly in France, Wollstonecraft was galvanized to respond. In his recommendations for a national system of education, Talleyrand had written: Let us bring up women, not to aspire to advantages which the Constitution denies them, but to know and appreciate those which it guarantees them . . . Men are destined to live on the stage of the world. A public education suits them: it early places before their eyes all the scenes of life: only the proportions are different. The paternal home is better for the education of women; they have less need to learn to deal with the interests of others, than to accustom themselves to a calm and secluded life. Wollstonecraft dedicated the "Rights of Woman" to Talleyrand: "Having read with great pleasure a pamphlet which you have lately published, I dedicate this volume to you; to induce you to reconsider the subject, and maturely weigh what I have advanced respecting the rights of woman and national education." At the end of 1791, French feminist Olympe de Gouges had published her "Declaration of the Rights of Woman and the Female Citizen", and the question of women's rights became central to political debates in both France and Britain. The "Rights of Woman" is an extension of Wollstonecraft's arguments in the "Rights of Men". In the "Rights of Men", as the title suggests, she is concerned with the rights of particular men (18th-century British men) while in the "Rights of Woman", she is concerned with the rights afforded to "woman", an abstract category. She does not isolate her argument to 18th-century women or British women. The first chapter of the "Rights of Woman" addresses the issue of natural rights and asks who has those inalienable rights and on what grounds. She answers that since natural rights are given by God, for one segment of society to deny them to another segment is a sin. "The Rights of Woman" thus engages not only specific events in France and in Britain but also larger questions being raised by political philosophers such as John Locke and Jean-Jacques Rousseau. Wollstonecraft did not employ the formal argumentation or logical prose style common to 18th-century philosophical writing when composing her own works. The "Rights of Woman" is a long essay that introduces all of its major topics in the opening chapters and then repeatedly returns to them, each time from a different point of view. It also adopts a hybrid tone that combines rational argument with the fervent rhetoric of sensibility. In the 18th century, "sensibility" was a physical phenomenon that came to be attached to a specific set of moral beliefs. Physicians and anatomists believed that the more sensitive people's nerves, the more emotionally affected they would be by their surroundings. Since women were thought to have keener nerves than men, it was also believed that women were more emotional than men. The emotional excess associated with sensibility also theoretically produced an ethic of compassion: those with sensibility could easily sympathise with people in pain. 
Thus historians have credited the discourse of sensibility and those who promoted it with increased humanitarian efforts, such as the movement to abolish the slave trade. But sensibility also paralysed those who had too much of it; as scholar G. J. Barker-Benfield explains, "an innate refinement of nerves was also identifiable with greater suffering, with weakness, and a susceptibility to disorder". By the time Wollstonecraft was writing the "Rights of Woman", sensibility had already been under sustained attack for a number of years. Sensibility, which had initially promised to draw individuals together through sympathy, was now viewed as "profoundly separatist"; novels, plays, and poems that employed the language of sensibility asserted individual rights, sexual freedom, and unconventional familial relationships based only upon feeling. Furthermore, as Janet Todd, another scholar of sensibility, argues, "to many in Britain the cult of sensibility seemed to have feminized the nation, given women undue prominence, and emasculated men." One of Wollstonecraft's central arguments in the "Rights of Woman" is that women should be educated in a rational manner to give them the opportunity to contribute to society. In the 18th century, it was often assumed by both educational philosophers and conduct book writers, who wrote what one might think of as early self-help books, that women were incapable of rational or abstract thought. Women, it was believed, were too susceptible to sensibility and too fragile to be able to think clearly. Wollstonecraft, along with other female reformers such as Catharine Macaulay and Hester Chapone, maintained that women were indeed capable of rational thought and deserved to be educated. She argued this point in her own conduct book, "Thoughts on the Education of Daughters" (1787), in her children's book, "Original Stories from Real Life" (1788), as well as in the "Rights of Woman". Stating in her preface that "my main argument is built on this simple principle, that if [woman] be not prepared by education to become the companion of man, she will stop the progress of knowledge and virtue; for truth must be common to all", Wollstonecraft contends that society will degenerate without educated women, particularly because mothers are the primary educators of young children. She attributes the problem of uneducated women to men and "a false system of education, gathered from the books written on this subject by men who [consider] females rather as women than human creatures". Women are capable of rationality; it only appears that they are not, because men have refused to educate them and encouraged them to be frivolous (Wollstonecraft describes silly women as "spaniels" and "toys"). While stressing that it is of the same kind, she entertains the notion that women might not be able to attain the same degree of knowledge that men do. Wollstonecraft attacks conduct book writers such as James Fordyce and John Gregory as well as educational philosophers such as Jean-Jacques Rousseau who argue that a woman does not need a rational education. (Rousseau famously argues in "Émile" (1762) that women should be educated for the pleasure of men; Wollstonecraft, infuriated by this argument, attacks not only it but also Rousseau himself.)
Intent on illustrating the limitations that contemporary educational theory placed upon women, Wollstonecraft writes, "taught from their infancy that beauty is woman's sceptre, the mind shapes itself to the body, and, roaming round its gilt cage, only seeks to adorn its prison", implying that without this damaging ideology, which encourages young women to focus their attention on beauty and outward accomplishments, they could achieve much more. Wives could be the rational "companions" of their husbands and even pursue careers should they so choose: "women might certainly study the art of healing, and be physicians as well as nurses. And midwifery, decency seems to allot to them . . . they might, also, study politics . . . Business of various kinds, they might likewise pursue." For Wollstonecraft, "the most perfect education" is "an exercise of the understanding as is best calculated to strengthen the body and form the heart. Or, in other words, to enable the individual to attain such habits of virtue as will render it independent." In addition to her broad philosophical arguments, Wollstonecraft lays out a specific plan for national education to counter Talleyrand's. In Chapter 12, "On National Education," she proposes that children be sent to day schools as well as given some education at home "to inspire a love of home and domestic pleasures," and that such schools be free for children "five to nine years of age." She also maintains that schooling should be co-educational, contending that men and women, whose marriages are "the cement of society," should be "educated after the same model." It is debatable to what extent the "Rights of Woman" is a feminist text; because the definitions of "feminist" vary, different scholars have come to different conclusions. Wollstonecraft would never have referred to her text as feminist because the words "feminist" and "feminism" were not coined until the 1890s. Moreover, there was no feminist movement to speak of during Wollstonecraft's lifetime. In the introduction to her seminal work on Wollstonecraft's thought, Barbara Taylor writes: Describing [Wollstonecraft's philosophy] as feminist is problematic, and I do it only after much consideration. The label is of course anachronistic . . . Treating Wollstonecraft's thought as an anticipation of nineteenth and twentieth-century feminist argument has meant sacrificing or distorting some of its key elements. Leading examples of this . . . have been the widespread neglect of her religious beliefs, and the misrepresentation of her as a bourgeois liberal, which together have resulted in the displacement of a religiously inspired utopian radicalism by a secular, class-partisan reformism as alien to Wollstonecraft's political project as her dream of a divinely promised age of universal happiness is to our own. Even more important however has been the imposition on Wollstonecraft of a heroic-individualist brand of politics utterly at odds with her own ethically driven case for women's emancipation. Wollstonecraft's leading ambition for women was that they should attain virtue, and it was to this end that she sought their liberation. In the "Rights of Woman", Wollstonecraft does not make the claim for gender equality using the same arguments or the same language that late 19th- and 20th-century feminists later would.
For instance, rather than unequivocally stating that men and women are equal, Wollstonecraft contends that men and women are equal in the eyes of God, which means that they are both subject to the same moral law. For Wollstonecraft, men and women are equal in the most important areas of life. While such an idea may not seem revolutionary to 21st-century readers, its implications were revolutionary during the 18th century. For example, it implied that both men and women—not just women—should be modest and respect the sanctity of marriage. Wollstonecraft's argument exposed the sexual double standard of the late 18th century and demanded that men adhere to the same virtues demanded of women. However, Wollstonecraft's arguments for equality stand in contrast to her statements respecting the superiority of masculine strength and valour. Wollstonecraft famously and ambiguously states: Let it not be concluded, that I wish to invert the order of things; I have already granted, that, from the constitution of their bodies, men seem to be designed by Providence to attain a greater degree of virtue. I speak collectively of the whole sex; but I see not the shadow of a reason to conclude that their virtues should differ in respect to their nature. In fact, how can they, if virtue has only one eternal standard? I must therefore, if I reason consequentially, as strenuously maintain that they have the same simple direction, as that there is a God. Moreover, Wollstonecraft calls on men, rather than women, to initiate the social and political changes she outlines in the "Rights of Woman". Because women are uneducated, they cannot alter their own situation—men must come to their aid. Wollstonecraft writes at the end of her chapter "Of the Pernicious Effects Which Arise from the Unnatural Distinctions Established in Society": I then would fain convince reasonable men of the importance of some of my remarks; and prevail on them to weigh dispassionately the whole tenor of my observations. – I appeal to their understandings; and, as a fellow-creature, claim, in the name of my sex, some interest in their hearts. I entreat them to assist to emancipate their companion, to make her a help meet for them! Would men but generously snap our chains, and be content with rational fellowship instead of slavish obedience, they would find us more observant daughters, more affectionate sisters, more faithful wives, more reasonable mothers – in a word, better citizens. It is Wollstonecraft's last novel, "Maria: or, The Wrongs of Woman" (1798), the fictionalized sequel to the "Rights of Woman", that is usually considered her most radical feminist work. One of Wollstonecraft's most scathing criticisms in the "Rights of Woman" is against false and excessive sensibility, particularly in women. She argues that women who succumb to sensibility are "blown about by every momentary gust of feeling"; because these women are "the prey of their senses", they cannot think rationally. In fact, not only do they do harm to themselves but they also do harm to all of civilization: these are not women who can refine civilization – these are women who will destroy it. But reason and feeling are not independent for Wollstonecraft; rather, she believes that they should inform each other. For Wollstonecraft, as for the important 18th-century philosopher David Hume, the passions underpin all reason. This was a theme that she would return to throughout her career, but particularly in her novels "Mary: A Fiction" (1788) and "Maria: or, The Wrongs of Woman".
As part of her argument that women should not be overly influenced by their feelings, Wollstonecraft emphasises that they should not be constrained by or made slaves to their bodies or their sexual feelings. This particular argument has led many modern feminists to suggest that Wollstonecraft intentionally avoids granting women any sexual desire. Cora Kaplan argues that the "negative and prescriptive assault on female sexuality" is a "leitmotif" of the "Rights of Woman". For example, Wollstonecraft advises her readers to "calmly let passion subside into friendship" in the ideal companionate marriage (that is, in the ideal of a love-based marriage that was developing at the time). It would be better, she writes, when "two virtuous young people marry . . . if some circumstances checked their passion". According to Wollstonecraft, "love and friendship cannot subsist in the same bosom". As Mary Poovey explains, "Wollstonecraft betrays her fear that female desire might in fact court man's lascivious and degrading attentions, that the subordinate position women have been given might even be deserved. Until women can transcend their fleshly desires and fleshly forms, they will be hostage to the body." If women are not interested in sexuality, they cannot be dominated by men. Wollstonecraft worries that women are consumed with "romantic wavering", that is, they are interested only in satisfying their lusts. Because the "Rights of Woman" eliminates sexuality from a woman's life, Kaplan contends, it "expresses a violent antagonism to the sexual" while at the same time "exaggerat[ing] the importance of the sensual in the everyday life of women". Wollstonecraft was so determined to wipe sexuality from her picture of the ideal woman that she ended up foregrounding it by insisting upon its absence. But as Kaplan and others have remarked, Wollstonecraft may have been forced to make this sacrifice: "it is important to remember that the notion of woman as politically enabled and independent [was] fatally linked [during the eighteenth century] to the unrestrained and vicious exercise of her sexuality." Claudia Johnson, a prominent Wollstonecraft scholar, has called the "Rights of Woman" "a republican manifesto". Johnson contends that Wollstonecraft is hearkening back to the Commonwealth tradition of the 17th century and attempting to reestablish a republican ethos. In Wollstonecraft's version, there would be strong, but separate, masculine and feminine roles for citizens. According to Johnson, Wollstonecraft "denounces the collapse of proper sexual distinction as the leading feature of her age, and as the grievous consequence of sentimentality itself. The problem undermining society in her view is feminized men". If men feel free to adopt both the masculine position and the sentimental feminine position, she argues, women have no position open to them in society. Johnson therefore sees Wollstonecraft as a critic, in both the "Rights of Men" and the "Rights of Woman", of the "masculinization of sensitivity" in such works as Edmund Burke's "Reflections on the Revolution in France". In the "Rights of Woman" Wollstonecraft adheres to a version of republicanism that includes a belief in the eventual overthrow of all titles, including the monarchy. She also briefly suggests that all men and women should be represented in government. But the bulk of her "political criticism," as Chris Jones, a Wollstonecraft scholar, explains, "is couched predominantly in terms of morality".
Her definition of virtue focuses on the individual's happiness rather than, for example, the good of the entire society. This is reflected in her explanation of natural rights. Because rights ultimately proceed from God, Wollstonecraft maintains that there are duties, tied to those rights, incumbent upon each and every person. For Wollstonecraft, the individual is taught republicanism and benevolence within the family; domestic relations and familial ties are crucial to her understanding of social cohesion and patriotism. In many ways the "Rights of Woman" is inflected by a bourgeois view of the world, as is its direct predecessor the "Rights of Men". Wollstonecraft addresses her text to the middle class, which she calls the "most natural state". She also frequently praises modesty and industry, virtues which, at the time, were associated with the middle class. From her position as a middle-class writer arguing for a middle-class ethos, Wollstonecraft also attacks the wealthy, criticizing them using the same arguments she employs against women. She points out the "false-refinement, immorality, and vanity" of the rich, calling them "weak, artificial beings, raised above the common wants and affections of their race, in a premature unnatural manner [who] undermine the very foundation of virtue, and spread corruption through the whole mass of society". But Wollstonecraft's criticisms of the wealthy do not necessarily reflect a concomitant sympathy for the poor. For her, the poor are fortunate because they will never be trapped by the snares of wealth: "Happy is it when people have the cares of life to struggle with; for these struggles prevent their becoming a prey to enervating vices, merely from idleness!" Moreover, she contends that charity has only negative consequences because, as Jones puts it, she "sees it as sustaining an unequal society while giving the appearance of virtue to the rich". In her national plan for education, she retains class distinctions (with an exception for the intelligent), suggesting that: "After the age of nine, girls and boys, intended for domestic employments, or mechanical trades, ought to be removed to other schools, and receive instruction, in some measure appropriated to the destination of each individual . . . The young people of superior abilities, or fortune, might now be taught, in another school, the dead and living languages, the elements of science, and continue the study of history and politics, on a more extensive scale, which would not exclude polite literature." In attempting to navigate the cultural expectations of female writers and the generic conventions of political and philosophical discourse, Wollstonecraft, as she does throughout her "oeuvre", constructs a unique blend of masculine and feminine styles in the "Rights of Woman". She uses the language of philosophy, referring to her work as a "treatise" with "arguments" and "principles". However, Wollstonecraft also uses a personal tone, employing "I" and "you", dashes and exclamation marks, and autobiographical references to create a distinctly feminine voice in the text. The "Rights of Woman" further hybridizes its genre by weaving together elements of the conduct book, the short essay, and the novel, genres often associated with women, while at the same time claiming that these genres could be used to discuss philosophical topics such as rights. 
Although Wollstonecraft argues against excessive sensibility, the rhetoric of the "Rights of Woman" is at times heated and attempts to provoke the reader. Many of the most emotional comments in the book are directed at Rousseau. For example, after excerpting a long passage from "Émile" (1762), Wollstonecraft pithily states, "I shall make no other comments on this ingenious passage, than just to observe, that it is the philosophy of lasciviousness." A mere page later, after indicting Rousseau's plan for female education, she writes, "I must relieve myself by drawing another picture." These terse exclamations are meant to draw the reader to her side of the argument (it is assumed that the reader will agree with them). While she claims to write in a plain style so that her ideas will reach the broadest possible audience, she actually combines the plain, rational language of the political treatise with the poetic, passionate language of sensibility to demonstrate that one can combine rationality and sensibility in the same self. Wollstonecraft defends her positions not only with reasoned argument but also with ardent rhetoric. In her efforts to vividly describe the condition of women within society, Wollstonecraft employs several different analogies. She often compares women to slaves, arguing that their ignorance and powerlessness places them in that position. But at the same time, she also compares them to "capricious tyrants" who use cunning and deceit to manipulate the men around them. At one point, she reasons that a woman can become either a slave or a tyrant, which she describes as two sides of the same coin. Wollstonecraft also compares women to soldiers; like military men, they are valued only for their appearance and obedience. And, like the rich, women's "softness" has "debased mankind". Wollstonecraft was forced to write the "Rights of Woman" hurriedly in order to respond to Talleyrand and ongoing events. Upon completing the work, she wrote to her friend William Roscoe: "I am dissatisfied with myself for not having done justice to the subject. – Do not suspect me of false modesty – I mean to say that had I allowed myself more time I could have written a better book, in every sense of the word . . . I intend to finish the next volume before I begin to print, for it is not pleasant to have the Devil coming for the conclusion of a sheet before it is written." When Wollstonecraft revised the "Rights of Woman" for the second edition, she took the opportunity not only to fix small spelling and grammar mistakes but also to bolster the feminist claims of her argument. She changed some of her statements regarding female and male difference to reflect a greater equality between the sexes. Wollstonecraft never wrote the second part to the "Rights of Woman", although William Godwin published her "Hints", which were "chiefly designed to have been incorporated in the second part of the "Vindication of the Rights of Woman"", in the posthumous collection of her works. However, she did begin writing the novel "Maria: or, The Wrongs of Woman", which most scholars consider a fictionalized sequel to the "Rights of Woman". It was unfinished at her death and also included in the "Posthumous Works" published by Godwin. When it was first published in 1792, the "Rights of Woman" was reviewed favourably by the "Analytical Review", the "General Magazine", the "Literary Magazine", "New York Magazine", and the "Monthly Review", although the assumption persists even today that the "Rights of Woman" received hostile reviews.
It was almost immediately released in a second edition in 1792, several American editions appeared, and it was translated into French. Taylor writes that "it was an immediate success". Moreover, other writers such as Mary Hays and Mary Robinson specifically alluded to Wollstonecraft's text in their own works. Hays cited the "Rights of Woman" in her novel "Memoirs of Emma Courtney" (1796) and modelled her female characters after Wollstonecraft's ideal woman. Although female conservatives such as Hannah More excoriated Wollstonecraft personally, they actually shared many of the same values. As the scholar Anne Mellor has shown, both More and Wollstonecraft wanted a society founded on "Christian virtues of rational benevolence, honesty, personal virtue, the fulfillment of social duty, thrift, sobriety, and hard work". During the early 1790s, many writers within British society were engaged in an intense debate regarding the position of women in society. For example, the respected poet and essayist Anna Laetitia Barbauld and Wollstonecraft sparred back and forth; Barbauld published several poems responding to Wollstonecraft's work and Wollstonecraft commented on them in footnotes to the "Rights of Woman". The work also provoked outright hostility. The bluestocking Elizabeth Carter was unimpressed with the work. Thomas Taylor, the Neoplatonist translator who had been a landlord to the Wollstonecraft family in the late 1770s, swiftly wrote a satire called "A Vindication of the Rights of Brutes": if women have rights, why not animals too? After Wollstonecraft died in 1797, her husband William Godwin published his "Memoirs of the Author of A Vindication of the Rights of Woman" (1798). He revealed much about her private life that had previously not been known to the public: her illegitimate child, her love affairs, and her attempts at suicide. While Godwin believed he was portraying his wife with love, sincerity, and compassion, contemporary readers were shocked by Wollstonecraft's unorthodox lifestyle and she became a reviled figure. Richard Polwhele targeted her in particular in his anonymous long poem "The Unsex'd Females" (1798), a defensive reaction to women's literary self-assertion: Hannah More is Christ to Wollstonecraft's Satan. His poem was "well known" among the responses to "A Vindication". One reviewer praised this "ingenious poem", with its "playful sallies of sarcastic wit" against "our modern ladies", though others found it "a tedious, lifeless piece of writing." Critical responses largely fell along clear-cut political lines. Wollstonecraft's ideas became associated with her life story and women writers felt that it was dangerous to mention her in their texts. Hays, who had previously been a close friend and an outspoken advocate for Wollstonecraft and her "Rights of Woman", for example, did not include her in the collection of "Illustrious and Celebrated Women" she published in 1803. Maria Edgeworth specifically distances herself from Wollstonecraft in her novel "Belinda" (1802); she caricatures Wollstonecraft as a radical feminist in the character of Harriet Freke. But, like Jane Austen, she does not reject Wollstonecraft's ideas. Both Edgeworth and Austen argue that women are crucial to the development of the nation; moreover, they portray women as rational beings who should choose companionate marriage. The negative views towards Wollstonecraft persisted for over a century.
The "Rights of Woman" was not reprinted until the middle of the 19th century and it still retained an aura of ill-repute. George Eliot wrote "there is in some quarters a vague prejudice against the "Rights of Woman" as in some way or other a reprehensible book, but readers who go to it with this impression will be surprised to find it eminently serious, severely moral, and withal rather heavy". The suffragist (i.e. moderate reformer, as opposed to suffragette) Millicent Garrett Fawcett wrote the introduction to the centenary edition of the "Rights of Woman", cleansing the memory of Wollstonecraft and claiming her as the foremother of the struggle for the vote. While the "Rights of Woman" may have paved the way for feminist arguments, 20th century feminists have tended to use Wollstonecraft's life story, rather than her texts, for inspiration; her unorthodox lifestyle convinced them to try new "experiments in living", as Virginia Woolf termed it in her famous essay on Wollstonecraft. However, there is some evidence that the "Rights of Woman" may be influencing current feminists. Ayaan Hirsi Ali, a feminist who is critical of Islam's dictates regarding women, cites the "Rights of Woman" in her autobiography "Infidel", writing that she was "inspired by Mary Wollstonecraft, the pioneering feminist thinker who told women they had the same ability to reason as men did and deserved the same rights". Augustine of Canterbury Augustine of Canterbury (born first third of the 6th century – died probably 26 May 604) was a Benedictine monk who became the first Archbishop of Canterbury in the year 597. He is considered the "Apostle to the English" and a founder of the Church in England. Augustine was the prior of a monastery in Rome when Pope Gregory the Great chose him in 595 to lead a mission, usually known as the Gregorian mission, to Britain to Christianize King Æthelberht and his Kingdom of Kent from Anglo-Saxon paganism. Kent was probably chosen because Æthelberht had married a Christian princess, Bertha, daughter of Charibert I the King of Paris, who was expected to exert some influence over her husband. Before reaching Kent, the missionaries had considered turning back, but Gregory urged them on, and in 597, Augustine landed on the Isle of Thanet and proceeded to Æthelberht's main town of Canterbury. King Æthelberht converted to Christianity and allowed the missionaries to preach freely, giving them land to found a monastery outside the city walls. Augustine was consecrated as a bishop and converted many of the king's subjects, including thousands during a mass baptism on Christmas Day in 597. Pope Gregory sent more missionaries in 601, along with encouraging letters and gifts for the churches, although attempts to persuade the native British bishops to submit to Augustine's authority failed. Roman bishops were established at London, and Rochester in 604, and a school was founded to train Anglo-Saxon priests and missionaries. Augustine also arranged the consecration of his successor, Laurence of Canterbury. The archbishop probably died in 604 and was soon revered as a saint. After the withdrawal of the Roman legions from their province of Britannia in 410, the inhabitants were left to defend themselves against the attacks of the Saxons. Before the Roman withdrawal, Britannia had been converted to Christianity and produced the ascetic Pelagius. Britain sent three bishops to the Council of Arles in 314, and a Gaulish bishop went to the island in 396 to help settle disciplinary matters. 
Material remains testify to a growing presence of Christians, at least until around 360. After the Roman legions departed, pagan tribes settled the southern parts of the island, while western Britain, beyond the Anglo-Saxon kingdoms, remained Christian. This native British Church developed in isolation from Rome under the influence of missionaries from Ireland and was centred on monasteries instead of bishoprics. Other distinguishing characteristics were its calculation of the date of Easter and the style of the tonsure haircut that clerics wore. Evidence for the survival of Christianity in the eastern part of Britain during this time includes the survival of the cult of Saint Alban and the occurrence in place names of "eccles", derived from the Latin "ecclesia", meaning "church". There is no evidence that these native Christians tried to convert the Anglo-Saxons. The invasions destroyed most remnants of Roman civilisation in the areas held by the Saxons and related tribes, including the economic and religious structures. It was against this background that Pope Gregory I decided to send a mission, often called the Gregorian mission, to convert the Anglo-Saxons to Christianity in 595. The Kingdom of Kent was ruled by Æthelberht, who had married a Christian princess named Bertha before 588, and perhaps earlier than 560. Bertha was the daughter of Charibert I, one of the Merovingian kings of the Franks. As one of the conditions of her marriage, she brought a bishop named Liudhard with her to Kent. Together in Canterbury, they restored a church that dated to Roman times—possibly the current St Martin's Church. Æthelberht was a pagan at this point but allowed his wife freedom of worship. One biographer of Bertha states that under his wife's influence, Æthelberht asked Pope Gregory to send missionaries. The historian Ian N. Wood feels that the initiative came from the Kentish court as well as the queen. Other historians, however, believe that Gregory initiated the mission, although the exact reasons remain unclear. Bede, an 8th-century monk who wrote a history of the English church, recorded a famous story in which Gregory saw fair-haired Saxon slaves from Britain in the Roman slave market and was inspired to try to convert their people. More practical matters, such as the acquisition of new provinces acknowledging the primacy of the papacy, and a desire to influence the emerging power of the Kentish kingdom under Æthelberht, were probably involved. The mission may have been an outgrowth of the missionary efforts against the Lombards who, as pagans and Arian Christians, were not on good terms with the Catholic Church in Rome. Aside from Æthelberht's granting of freedom of worship to his wife, the choice of Kent was probably dictated by a number of other factors. Kent was the dominant power in southeastern Britain. Since the eclipse of King Ceawlin of Wessex in 592, Æthelberht was the leading Anglo-Saxon ruler; Bede refers to Æthelberht as having imperium (overlordship) south of the River Humber. Trade between the Franks and Æthelberht's kingdom was well established, and the language barrier between the two regions was apparently only a minor obstacle, as the interpreters for the mission came from the Franks. Lastly, Kent's proximity to the Franks allowed support from a Christian area.
There is some evidence, including Gregory's letters to Frankish kings in support of the mission, that some of the Franks felt that they had a claim to overlordship over some of the southern British kingdoms at this time. The presence of a Frankish bishop could also have lent credence to claims of overlordship, if Bertha's Bishop Liudhard was felt to be acting as a representative of the Frankish church and not merely as a spiritual advisor to the queen. Frankish influence was not merely political; archaeological remains attest to a cultural influence as well. In 595, Gregory chose Augustine, who was the prior of the Abbey of St Andrew's in Rome, to head the mission to Kent. The pope selected monks to accompany Augustine and sought support from the Frankish royalty and clergy in a series of letters, of which some copies survive in Rome. He wrote to King Theuderic II of Burgundy and to King Theudebert II of Austrasia, as well as their grandmother Brunhild, seeking aid for the mission. Gregory thanked King Chlothar II of Neustria for aiding Augustine. Besides hospitality, the Frankish bishops and kings provided interpreters and Frankish priests to accompany the mission. By soliciting help from the Frankish kings and bishops, Gregory helped to assure a friendly reception for Augustine in Kent, as Æthelberht was unlikely to mistreat a mission which visibly had the support of his wife's relatives and people. Moreover, the Franks appreciated the chance to participate in a mission that would extend their influence in Kent. Chlothar, in particular, needed a friendly realm across the Channel to help guard his kingdom's flanks against his fellow Frankish kings. Sources make no mention of why Pope Gregory chose a monk to head the mission. Pope Gregory once wrote to Æthelberht complimenting Augustine's knowledge of the Bible, so Augustine was evidently well educated. Other qualifications included administrative ability, for Gregory was the abbot of St Andrews as well as being pope, which left the day-to-day running of the abbey to Augustine, the prior. Augustine was accompanied by Laurence of Canterbury, his eventual successor to the archbishopric, and a group of about 40 companions, some of whom were monks. Soon after leaving Rome, the missionaries halted, daunted by the nature of the task before them. They sent Augustine back to Rome to request papal permission to return. Gregory refused and sent Augustine back with letters encouraging the missionaries to persevere. In 597, Augustine and his companions landed in Kent. They achieved some initial success soon after their arrival: Æthelberht permitted the missionaries to settle and preach in his capital of Canterbury, where they used the church of St Martin's for services. Neither Bede nor Gregory mentions the date of Æthelberht's conversion, but it probably took place in 597. In the early medieval period, large-scale conversions required the ruler's conversion first, and Augustine is recorded as making large numbers of converts within a year of his arrival in Kent. Also, by 601, Gregory was writing to both Æthelberht and Bertha, calling the king his son and referring to his baptism. A late medieval tradition, recorded by the 15th-century chronicler Thomas Elmham, gives the date of the king's conversion as Whit Sunday, or 2 June 597; there is no reason to doubt this date, although there is no other evidence for it.
Against a date in 597 is a letter of Gregory's to Patriarch Eulogius of Alexandria in June 598, which mentions the number of converts made by Augustine, but does not mention any baptism of the king. However, it is clear that by 601 the king had been converted. His baptism likely took place at Canterbury. Augustine established his episcopal see at Canterbury. It is not clear when and where Augustine was consecrated as a bishop. Bede, writing about a century later, states that Augustine was consecrated by the Frankish Archbishop Ætherius of Arles, in Gaul, after the conversion of Æthelberht. Contemporary letters from Pope Gregory, however, refer to Augustine as a bishop before he arrived in England. A letter of Gregory's from September 597 calls Augustine a bishop, and one dated ten months later says Augustine had been consecrated on Gregory's command by bishops of the German lands. The historian R. A. Markus discusses the various theories of when and where Augustine was consecrated, and suggests he was consecrated before arriving in England, but argues the evidence does not permit deciding exactly where this took place. Soon after his arrival, Augustine founded the monastery of Saints Peter and Paul, which later became St Augustine's Abbey, on land donated by the king. It has often been claimed that this foundation was the first Benedictine abbey outside Italy and that, by founding it, Augustine introduced the Rule of St. Benedict into England, but there is no evidence that the abbey followed the Benedictine Rule at the time of its foundation. In a letter Gregory wrote to the patriarch of Alexandria in 598, he claimed that more than 10,000 Christians had been baptised; the number may be exaggerated but there is no reason to doubt that a mass conversion took place. However, there were probably some Christians already in Kent before Augustine arrived, remnants of the Christians who lived in Britain in the later Roman Empire. Few literary traces of them remain, however. One other effect of the king's conversion by Augustine's mission was that Frankish influence over the southern kingdoms of Britain decreased. After these conversions, Augustine sent Laurence back to Rome with a report of his success, along with questions about the mission. Bede records the letter and Gregory's replies in chapter 27 of his "Historia ecclesiastica gentis Anglorum"; this section of the "History" is usually known as the "Libellus responsionum". Augustine asked for Gregory's advice on a number of issues, including how to organise the church, the punishment for church robbers, guidance on who was allowed to marry whom, and the consecration of bishops. Other topics were relations between the churches of Britain and Gaul, childbirth and baptism, and when it was lawful for people to receive communion and for a priest to celebrate mass. Further missionaries were sent from Rome in 601. They brought a pallium for Augustine and a present of sacred vessels, vestments, relics, and books. The pallium was the symbol of metropolitan status, and signified that Augustine was now an archbishop unambiguously associated with the Holy See. Along with the pallium, a letter from Gregory directed the new archbishop to consecrate 12 suffragan bishops as soon as possible and to send a bishop to York. Gregory's plan was that there would be two metropolitans, one at York and one at London, with 12 suffragan bishops under each archbishop.
As part of this plan, Augustine was expected to transfer his archiepiscopal see to London from Canterbury. The move from Canterbury to London never happened; no contemporary sources give the reason, but it was probably because London was not part of Æthelberht's domains. Instead, London was part of the kingdom of Essex, ruled by Æthelberht's nephew Saebert of Essex, who converted to Christianity in 604. The historian S. Brechter has suggested that the metropolitan see was indeed moved to London, and that it was only with the abandonment of London as a see after the death of Æthelberht that Canterbury became the archiepiscopal see. This theory contradicts Bede's version of events, however. In 604, Augustine founded two more bishoprics in Britain. Two men who had come to Britain with him in 601 were consecrated, Mellitus as Bishop of London and Justus as Bishop of Rochester. Bede relates that Augustine, with the help of the king, "recovered" a church built by Roman Christians in Canterbury. It is not clear if Bede meant that Augustine rebuilt the church or that Augustine merely reconsecrated a building that had been used for pagan worship. Archaeological evidence seems to support the latter interpretation; in 1973 the remains of an aisled building dating from the Romano-British period were uncovered just south of the present Canterbury Cathedral. The historian Ian Wood argues that the existence of the "Libellus" points to more contact between Augustine and the native Christians because the topics covered in the work are not restricted to conversion from paganism, but also dealt with relations between differing styles of Christianity. Augustine failed to extend his authority to the Christians in Wales and Dumnonia to the west. Gregory had decreed that these Christians should submit to Augustine and that their bishops should obey him, apparently believing that more of the Roman governmental and ecclesiastical organisation survived in Britain than was actually the case. According to the narrative of Bede, the Britons in these regions viewed Augustine with uncertainty, and their suspicion was compounded by a diplomatic misjudgement on Augustine's part. In 603, Augustine and Æthelberht summoned the British bishops to a meeting south of the Severn. These guests retired early to confer with their people, who, according to Bede, advised them to judge Augustine based upon the respect he displayed at their next meeting. When Augustine failed to rise from his seat on the entrance of the British bishops, they refused to recognise him as their archbishop. There were, however, deep differences between Augustine and the British church that perhaps played a more significant role in preventing an agreement. At issue were the tonsure, the observance of Easter, and practical and deep-rooted differences in approach to asceticism, missionary endeavours, and how the church itself was organised. Some historians believe that Augustine had no real understanding of the history and traditions of the British church, damaging his relations with their bishops. Also, there were political dimensions involved, as Augustine's efforts were sponsored by the Kentish king, and at this period the Wessex and Mercian kingdoms were expanding to the west, into areas held by the Britons. Gregory also instructed Augustine on other matters. Temples were to be consecrated for Christian use, and feasts, if possible, moved to days celebrating Christian martyrs. 
One religious site was revealed to be a shrine of a local St Sixtus, whose worshippers were unaware of details of the martyr's life or death. They may have been native Christians, but Augustine did not treat them as such. When Gregory was informed, he told Augustine to stop the cult and use the shrine for the Roman St Sixtus. Gregory legislated on the behaviour of the laity and the clergy. He placed the new mission directly under papal authority and made it clear that English bishops would have no authority over Frankish counterparts nor vice versa. Other directives dealt with the training of native clergy and the missionaries' conduct. The King's School, Canterbury claims Augustine as its founder, which would make it the world's oldest existing school, but the first documentary records of the school date from the 16th century. Augustine did establish a school, and soon after his death Canterbury was able to send teachers out to support the East Anglian mission. Augustine received liturgical books from the pope, but their exact contents are unknown. They may have been some of the new mass books that were being written at this time. The exact liturgy that Augustine introduced to England remains unknown, but it would have been a form of the Latin-language liturgy in use at Rome. Before his death, Augustine consecrated Laurence of Canterbury as his successor to the archbishopric, probably to ensure an orderly transfer of office. Although at the time of Augustine's death, 26 May 604, the mission barely extended beyond Kent, his undertaking introduced a more active missionary style into the British Isles. Despite the earlier presence of Christians in Ireland and Wales, no efforts had been made to convert the Saxon invaders. Augustine was sent to convert the descendants of those invaders, and eventually became the decisive influence in Christianity in the British Isles. Much of his success came about because of Augustine's close relationship with Æthelberht, which gave the archbishop time to establish himself. Augustine's example also influenced the great missionary efforts of the Anglo-Saxon Church. Augustine's body was originally buried in the portico of what is now St Augustine's, Canterbury, but it was later exhumed and placed in a tomb within the abbey church, which became a place of pilgrimage and veneration. After the Norman Conquest the cult of St Augustine was actively promoted, and his shrine in St Augustine's Abbey held a central position in one of the axial chapels, flanked by the shrines of his successors Laurence and Mellitus. King Henry I of England granted St. Augustine's Abbey a six-day fair around the date on which Augustine's relics were translated to his new shrine, from 8 September through 13 September. A life of Augustine was written by Goscelin around 1090, but this life portrays Augustine in a different light from Bede's account. Goscelin's account has little new historical content, mainly being filled with miracles and imagined speeches. Building on this account, later medieval writers continued to add new miracles and stories to Augustine's life, often quite fanciful. These authors included William of Malmesbury, who claimed that Augustine founded Cerne Abbey, the author (generally believed to be John Brompton) of a late medieval chronicle containing invented letters from Augustine, and a number of medieval writers who included Augustine in their romances.
Another problem with investigating Augustine's saintly cult is the confusion that arises because most medieval liturgical documents mentioning Augustine do not distinguish between Augustine of Canterbury and Augustine of Hippo, the fourth- and fifth-century theologian. Medieval Scandinavian liturgies feature Augustine of Canterbury quite often, however. During the English Reformation, Augustine's shrine was destroyed and his relics were lost. Augustine's shrine was re-established in March 2012 at the church of St. Augustine in Ramsgate, Kent, very close to the mission's landing site. St Augustine's Cross, a Celtic cross erected in 1884, marks the spot in Ebbsfleet, Thanet, East Kent, where Augustine is said to have landed. A Wizard of Earthsea A Wizard of Earthsea is a fantasy novel written by American author Ursula K. Le Guin and first published by the small press Parnassus in 1968. It is regarded as a classic of fantasy and children's literature and has been widely influential within the genre. The story is set in the fictional archipelago of Earthsea and centers on a young mage named Ged, born in a village on the island of Gont. He displays great power while still a boy and joins the school of wizardry, where his prickly nature drives him into conflict with one of his fellows. During a magical duel, Ged's spell goes awry and releases a shadow creature that attacks him. The novel follows his journey as he seeks to be free of the creature. The book has often been described as a "Bildungsroman" or coming-of-age story, as it explores Ged's process of learning to cope with power and come to terms with death. The novel also carries Taoist themes about a fundamental balance in the universe of Earthsea, which wizards are supposed to maintain, closely tied to the idea that language and names have power to affect the material world and alter this balance. The structure of the story is similar to that of a traditional epic, although critics have also described it as subverting this genre in many ways, such as by making the protagonist dark-skinned in contrast to more typical white-skinned heroes. "A Wizard of Earthsea" received highly positive reviews, initially as a work for children and later among a general audience as well. It won the Boston Globe–Horn Book Award in 1969 and was one of the final recipients of the Lewis Carroll Shelf Award in 1979. Margaret Atwood called it one of the "wellsprings" of fantasy literature. Le Guin wrote five subsequent books that are collectively referred to as the "Earthsea Cycle", together with "A Wizard of Earthsea": "The Tombs of Atuan" (1971), "The Farthest Shore" (1972), "Tehanu" (1990), "The Other Wind" (2001), and "Tales from Earthsea" (2001). George Slusser has described the series as a "work of high style and imagination", while Amanda Craig said that "A Wizard of Earthsea" was "the most thrilling, wise, and beautiful children's novel ever". Early concepts for the Earthsea setting were developed in two short stories, "The Rule of Names" (1964) and "The Word of Unbinding" (1964), both published in "Fantastic". The stories were later collected in Le Guin's anthology "The Wind's Twelve Quarters" (1975). Earthsea was also used as the setting for a story Le Guin wrote in 1965 or 1966, which was never published. In 1967, Herman Schein (the publisher of Parnassus Press and the husband of Ruth Robbins, the illustrator of the book) asked Le Guin to try writing a book "for older kids", giving her complete freedom over the subject and the approach.
Le Guin had no previous experience specifically with the genre of young adult literature, which rose in prominence during the late 1960s. Drawing from her short stories, Le Guin began work on "A Wizard of Earthsea". Le Guin has said that the book was in part a response to the image of wizards as ancient and wise, and to her wondering where they come from. Le Guin later said that she chose the medium of fantasy, and the theme of coming of age, with her intended adolescent audience in mind. The short stories published in 1964 introduced the world of Earthsea and important concepts in it, such as Le Guin's treatment of magic. They also introduced Yevaud, a dragon who features briefly in "A Wizard of Earthsea". Le Guin's depiction of Earthsea was influenced by her familiarity with Native American legends as well as Norse mythology. Her knowledge of myths and legends, as well as her familial interest in anthropology, have been described by scholar Donna White as allowing her to create "entire cultures" for the islands of Earthsea. The influence of Norse lore in particular can be seen in the characters of the Kargs, who are blonde and blue-eyed, and worship two gods who are brothers. The influence of Taoist thought on Le Guin's writing is also visible in the idea of a cosmic "balance" in the universe of Earthsea. Earthsea itself is an archipelago or group of islands. In the fictional history of this world, the islands were raised from the ocean by Segoy, an ancient deity or hero. The world is inhabited by both humans and dragons, and several among the humans are sorcerers or wizards. The world is shown as being based on a delicate balance, which most of its inhabitants are aware of, but which is disrupted by somebody in each of the original trilogy of novels. The setting of Earthsea is preindustrial, and has many cultures within the widespread archipelago. Most of the characters of the story are of the Hardic peoples, who are dark-skinned, and who populate most of the islands. Some of the eastern islands are populated by the white-skinned Kargish people, who see the Hardic folk as evil sorcerers; the Kargish, in turn, are viewed by the Hardic as barbarians. The far western regions of the archipelago are where the dragons live. The novel follows a young boy called Duny, nicknamed "Sparrowhawk", born on the island of Gont. Discovering that the boy has great innate power, his aunt teaches him the little magic she knows. When his village is attacked by Kargish raiders, Duny summons a fog to conceal the village and its inhabitants, enabling the residents to drive off the Kargs. Hearing of this, the powerful mage Ogion takes him as an apprentice, giving him his "true name"—Ged. Ogion tries to teach Ged about the "equilibrium", the concept that magic can upset the natural order of the world if used improperly. In an attempt to impress a girl, however, Ged searches Ogion's spell books and inadvertently summons a strange shadow, which has to be banished by Ogion. Sensing Ged's eagerness to act and impatience with his slow teaching methods, Ogion sends him to the renowned school for wizards on the island of Roke. At the school, Ged's skills inspire admiration even among the teachers. He is befriended by an older student named Vetch, but generally remains aloof from his fellows. Another student, Jasper, acts condescendingly towards Ged and provokes the latter's proud nature. After Jasper needles Ged during a feast, Ged challenges him to a duel of magic.
Ged casts a powerful spell intended to raise the spirit of a legendary dead woman, but the spell goes awry and instead releases a shadow creature, which attacks him and scars his face. The Archmage Nemmerle drives the shadow away, but at the cost of his life. Ged spends many months healing before resuming his studies. The new Archmage, Gensher, describes the shadow as an ancient evil that wishes to possess Ged, and warns him that the creature has no name. Ged eventually receives his wizard's staff and takes up residence in the Ninety Isles, providing the poor villagers protection from the dragons that have seized the nearby island of Pendor, but discovers that he is still being sought by the shadow. Knowing that he cannot guard against both threats at the same time, he sails to Pendor and gambles his life on a guess of the adult dragon's true name. When he is proved right, the dragon offers to tell him the name of the shadow, but Ged instead extracts a promise that the dragon and his offspring will never threaten the archipelago. Chased by the shadow, Ged flees to Osskil, having heard of the stone of the Terrenon. He is attacked by the shadow, and barely escapes into the Court of Terrenon. Serret, the lady of the castle, shows him the stone, and urges Ged to speak to it, claiming it can give him limitless knowledge and power. Recognizing that the stone harbors one of the Old Powers—ancient, powerful, malevolent beings—Ged refuses. He flees and is pursued by the stone's minions, but transforms into a swift falcon and escapes. Ged flies back to Ogion on Gont. Unlike Gensher, Ogion insists that all creatures have a name and advises Ged to confront the shadow. Ogion is proved right; when Ged seeks out the shadow, it flees from him. Ged pursues it in a small sailboat, until it lures him into a fog where the boat is wrecked on a reef. Ged recovers with the help of an elderly couple marooned on a small island since they were children; the woman gives Ged part of a broken bracelet as a gift. Ged patches his boat and resumes his pursuit of the creature into the East Reach. On the island of Iffish, he meets his friend Vetch, who insists on joining him. They journey east far beyond the last known lands before they finally come upon the shadow. Naming it with his own name, Ged merges with it and joyfully tells Vetch he is healed and whole. The first edition of the book, published in 1968, was illustrated by Ruth Robbins. The cover illustration was in color, and the interior of the book contained a map of the archipelago of Earthsea. In addition, each chapter had a black-and-white illustration by Robbins, similar to a woodcut image. The images represented topics from each chapter; for instance, the very first image depicted the island of Gont, while the illustration for the chapter "The Dragon of Pendor" pictured a flying dragon. One of these illustrations depicts Ged sailing in his boat "Lookfar"; it was used in the 10th chapter, "The Open Sea", in which Ged and Vetch travel from Iffish eastward past all known lands to confront the shadow creature. "A Wizard of Earthsea" was first published in 1968 by Parnassus Press in Berkeley, a year before "The Left Hand of Darkness", Le Guin's watershed work. It was a personal landmark for Le Guin, as it represented her first attempt at writing for children; she had written only a handful of other novels and short stories prior to its publication. The book was also her first attempt at writing fantasy, rather than science fiction.
"A Wizard of Earthsea" was the first of Le Guin's books to receive widespread critical attention, and has been described as her best known work, as part of the Earthsea series. The book has been released in numerous editions, including an illustrated Folio Society edition released in 2015. It was also translated into a number of other languages. An omnibus edition of all of Le Guin's Earthsea works is planned to be released on the 50th anniversary of the publication of "A Wizard of Earthsea" in 2018. Le Guin originally intended for "A Wizard of Earthsea" to be a standalone novel, but decided to write a sequel after considering the loose ends in the first book, and "The Tombs of Atuan" was released in 1971. "The Farthest Shore" was written as a third volume after further consideration, and was published in 1972. "The Tombs of Atuan" tells of the story of Ged's attempt to make whole the ring of Erreth Akbe, half of which is buried in the tombs of Atuan in the Kargish lands, from where he must steal it. There, he meets the child priestess Tenar, on whom the book focuses. In "The Farthest Shore", Ged, who has become Archmage, tries to combat a dwindling of magic across Earthsea, accompanied by Arren, a young prince. The first three books are together seen as the "original trilogy"; in each of these, Ged is shown as trying to heal some imbalance in the world. They were followed by "Tehanu" (1990), "Tales from Earthsea" (2001), and "The Other Wind" (2001), which are sometimes referred to as the "second trilogy". Initial recognition for the book was from children's-book critics, among whom it garnered acclaim. "A Wizard of Earthsea" received an even more positive response in the United Kingdom when it was released there in 1971, which, according to White, reflected the greater admiration of British critics for children's fantasy. In her 1975 annotated collection "Fantasy for Children", British critic Naomi Lewis described it in the following terms: "[It is not] the easiest book for casual browsing, but readers who take the step will find themselves in one of the most important works of fantasy of our time." Similarly, literary scholar Margaret Esmonde wrote in 1981 that "Le Guin has ... enriched children's literature with what may be its finest high fantasy", while a review in "The Guardian" by author and journalist Amanda Craig said it was "The most thrilling, wise and beautiful children's novel ever, [written] in prose as taut and clean as a ship's sail." In discussing the book for a gathering of children's librarians Eleanor Cameron praised the world building in the story, saying "it is as if [Le Guin] herself has lived on the archipelago." Author David Mitchell called the titular character Ged a "superb creation", and argued that he was a more relatable wizard than those featured in prominent works of fantasy at the time. According to him, characters such as Gandalf were "variants on the archetype of Merlin, a Caucasian scholarly aristocrat amongst sorcerers" with little room to grow, whereas Ged developed as a character through his story. Mitchell also praised the other characters in the story, who he said seemed to have a "fully thought-out inner life" despite being fleeting presences. The 1995 "Encyclopedia of Science Fiction" said that the Earthsea books had been considered the finest science fiction books for children in the post-World War II period. Commentators have noted that the Earthsea novels in general received less critical attention because they were considered children's books. 
Le Guin herself took exception to this treatment of children's literature, describing it as "adult chauvinist piggery". In 1976, literary scholar George Slusser criticized the "silly publication classification designating the original series as 'children's literature'". Barbara Bucknall stated that "Le Guin was not writing for young children when she wrote these fantasies, nor yet for adults. She was writing for 'older kids.' But in fact she can be read, like Tolkien, by ten-year-olds and by adults. These stories are ageless because they deal with problems that confront us at any age." Only in later years did "A Wizard of Earthsea" receive attention from a more general audience. Literary scholar T. A. Shippey was among the first to treat "A Wizard of Earthsea" as serious literature, assuming in his analysis of the volume that it belonged alongside works by C. S. Lewis and Fyodor Dostoevsky, among others. Margaret Atwood said that she saw the book as "a fantasy book for adults", and added that the book could be categorized as either young adult fiction or as fantasy, but since it dealt with themes such as "life and mortality and who are we as human beings", it could be read and enjoyed by anybody older than twelve. The "Encyclopedia of Science Fiction" echoed this view, saying the series's appeal went "far beyond" the young adults for whom it was written. It went on to praise the book as "austere but vivid", and said the series was more thoughtful than the "Narnia" books by C. S. Lewis. In his 1980 history of fantasy, Brian Attebery called the Earthsea trilogy "the most challenging and richest American fantasy to date". Slusser described the Earthsea cycle as a "work of high style and imagination", and the original trilogy of books a product of "genuine epic vision". In 1974, critic Robert Scholes compared Le Guin's work favorably to that of C. S. Lewis, saying, "Where C. S. Lewis worked out a specifically Christian set of values, Ursula LeGuin works not with a theology but with an ecology, a cosmology, a reverence for the universe as a self-regulating structure." He added that Le Guin's three Earthsea novels were themselves a sufficient legacy for anybody to leave. In 2014 David Pringle called it "a beautiful story—poetic, thrilling, and profound". "A Wizard of Earthsea" won or contributed to several notable awards for Le Guin. It won the Boston Globe–Horn Book Award in 1969, and was one of the last winners of the Lewis Carroll Shelf Award ten years later. In 1984 it won the "Golden Sepulka" award in Poland. In 2000 Le Guin was given the Margaret A. Edwards Award by the American Library Association for young adult literature. The award cited six of her works, including the first four Earthsea volumes, "The Left Hand of Darkness", and "The Beginning Place". A 1987 poll in "Locus" ranked "A Wizard of Earthsea" third among "All-Time Best Fantasy Novels", while in 2014 Pringle listed it at number 39 in his list of the 100 best novels in modern fantasy. The book has been seen as widely influential within the genre of fantasy. Margaret Atwood has called "A Wizard of Earthsea" one of the "wellsprings" of fantasy literature, illustrating Le Guin's influence within the genre. "A Wizard of Earthsea" has been compared to major works of high fantasy such as J. R. R. Tolkien's "The Lord of the Rings" and L. Frank Baum's "The Wonderful Wizard of Oz".
The notion that names can exert power is also present in Hayao Miyazaki's 2001 film "Spirited Away"; critics have suggested that that idea originated with Le Guin's Earthsea series. Novelist David Mitchell, author of books such as "Cloud Atlas", described "A Wizard of Earthsea" as having a strong influence on him, and said that he felt a desire to "wield words with the same power as Ursula Le Guin". Modern writers have credited "A Wizard of Earthsea" with introducing the idea of a "wizard school", which would later be made famous by the "Harry Potter" series of books, and with popularizing the trope of a boy wizard, also present in "Harry Potter". Reviewers have also commented that the basic premise of "A Wizard of Earthsea", that of a talented boy going to a wizard's school and making an enemy with whom he has a close connection, is likewise the premise of "Harry Potter". Ged also receives a scar from the shadow, which hurts whenever the shadow is near him, just as Harry Potter's scar from Voldemort does. Commenting on the similarity, Le Guin said that she did not feel that J. K. Rowling "ripped her off", but that Rowling's books received too much praise for supposed originality, and that Rowling "could have been more gracious about her predecessors. My incredulity was at the critics who found the first book wonderfully original. She has many virtues, but originality isn't one of them. That hurt." "A Wizard of Earthsea" focuses on Ged's adolescence and coming of age, and along with the other two works of the original Earthsea trilogy forms a part of Le Guin's dynamic portrayal of the process of growing old. The three novels together follow Ged from youth to old age, and each of them also follows the coming of age of a different character. The novel is frequently described as a "Bildungsroman". Mike Cadden stated that the book is a convincing tale "to a reader as young and possibly as headstrong as Ged, and therefore sympathetic to him". Ged's coming of age is also intertwined with the physical journey he undertakes through the novel. Ged is depicted as proud and yet unsure of himself in multiple situations: early in his apprenticeship he believes Ogion to be mocking him, and later, at Roke, feels put upon by Jasper. In both cases, he believes that others do not appreciate his greatness, and Le Guin's sympathetic narration does not immediately contradict this belief. Cadden writes that Le Guin allows young readers to sympathize with Ged, and only gradually realize that there is a price to be paid for his actions, as he learns to discipline his magical powers. Similarly, as Ged begins his apprenticeship with Ogion, he imagines that he will be taught mysterious aspects of wizardry, and has visions of transforming himself into other creatures, but gradually comes to see that Ogion's important lessons are those about his own self. The passage at the end of the novel, wherein Ged finally accepts the shadow as a part of himself and is thus released from its terror, has been pointed to by reviewers as a rite of passage. Jeanne Walker, for example, wrote that the rite of passage at the end was an analogue for the entire plot of "A Wizard of Earthsea", and that the plot itself plays the role of a rite of passage for an adolescent reader. Walker goes on to say, "The entire action of A Wizard of Earthsea ... portrays the hero's slow realization of what it means to be an individual in society and a self in relation to higher powers."
Many readers and critics have commented on similarities between Ged's process of growing up and ideas in Jungian psychology. The young Ged has a scary encounter with a shadow creature, which he later realizes is the dark side of himself. It is only after he recognizes and merges with the shadow that he becomes a whole person. Le Guin claimed never to have read Jung before writing the Earthsea novels. Le Guin described coming of age as the main theme of the book, and wrote in a 1973 essay that she chose that theme since she was writing for an adolescent audience. She stated that "Coming of age ... is a process that took me many years; I finished it, so far as I ever will, at about age thirty-one; and so I feel rather deeply about it. So do most adolescents. It's their main occupation, in fact." She also said that fantasy was best suited as a medium for describing coming of age, because exploring the subconscious was difficult using the language of "rational daily life". The coming of age that Le Guin focused on included not just psychological development, but moral changes as well. Ged needs to recognize the balance between his power and his responsibility to use it well, a recognition which comes as he travels to the stone of Terrenon and sees the temptation that that power represents. The world of Earthsea is depicted as being based on a delicate balance, which most of its inhabitants are aware of, but which is disrupted by somebody in each of the original trilogy of novels. This includes an equilibrium between land and sea (implicit in the name Earthsea), and between people and their natural environment. In addition to physical equilibrium, there is a larger cosmic equilibrium, which everybody is aware of, and which wizards are tasked with maintaining. Describing this aspect of Earthsea, Elizabeth Cummins wrote, "The principle of balanced powers, the recognition that every act affects self, society, world, and cosmos, is both a physical and a moral principle of Le Guin's fantasy world." The concept of balance is related to the novel's other major theme of coming of age, as Ged's knowledge of the consequences of his own actions for good or ill is necessary for him to understand how the balance is maintained; while Ged is at the school on Roke, the Master Hand instructs him in this principle. The influence of Taoism on Le Guin's writing is evident through much of the book, especially in her depiction of the "balance". At the end of the novel, Ged may be seen to embody the Taoist way of life, as he has learned not to act unless absolutely necessary. He has also learned that seeming opposites, like light and dark or good and evil, are actually interdependent. Light and dark themselves are recurring images within the story. Reviewers have identified this belief as evidence of a conservative ideology within the story, shared with much of fantasy. In emphasizing concerns over balance and equilibrium, scholars have argued, Le Guin essentially justifies the status quo, which wizards strive to maintain. This tendency is in contrast to Le Guin's science fiction writing, in which change is shown to have value. The nature of human evil forms a significant related theme through "A Wizard of Earthsea" as well as the other Earthsea novels. As with other works by Le Guin, evil is shown as a misunderstanding of the balance of life.
Ged is born with great power in him, but the pride that he takes in his power leads to his downfall; he tries to demonstrate his strength by bringing a spirit back from the dead, and in performing this act against the laws of nature, releases the shadow that attacks him. Slusser suggests that although he is provoked into performing dangerous spells first by the girl on Gont and then by Jasper, this provocation exists in Ged's mind. He is shown as unwilling to look within himself and see the pride that drives him to do what he does. When he accepts the shadow into himself, he also finally accepts responsibility for his own actions, and by accepting his own mortality he is able to free himself. His companion Vetch describes the moment as Ged making himself whole. Thus, although there are several dark powers in Earthsea (like the dragon and the stone of Terrenon), the true evil was not one of these powers, or even death, but Ged's actions that went against the balance of nature. Although evil is present in the books, it is a non-Christian understanding of evil: evil is a "misunderstanding of the dynamics of life". This is contrary to conventional Western storytelling, in which light and darkness are often considered opposites, and are seen as symbolizing good and evil, which are constantly in conflict. On two different occasions, Ged is tempted to try to defy death and evil, but eventually learns that neither can be eliminated: instead, he chooses not to serve evil, and stops denying death. In Le Guin's fictional universe, to know the true name of an object or a person is to have power over it. Each child is given a true name when they reach puberty, a name which they share only with close friends. Several of the dragons in the later Earthsea novels, like Orm Embar and Kalessin, are shown as living openly with their names, which do not give anybody power over them. In "A Wizard of Earthsea", however, Ged is shown to have power over Yevaud. Cadden writes that this is because Yevaud still has an attachment to riches and material possessions, and is thus bound by the power of his name. Wizards exert their influence over the equilibrium through the use of names, thus linking this theme to Le Guin's depiction of a cosmic balance. According to Cummins, this is Le Guin's way of demonstrating the power of language in shaping reality. Since language is the tool we use for communicating about the environment, she argues that it also allows humans to affect the environment, and the wizards' power to use names symbolizes this. Cummins went on to draw an analogy between a wizard's use of names to change things and the creative use of words in fictional writing. Shippey wrote that Earthsea magic seems to work through what he called the "Rumpelstiltskin theory", in which names have power. He argued that this portrayal was part of Le Guin's effort to emphasize the power of words over objects, which, according to Shippey, was in contrast to the ideology of other writers, such as James Frazer in "The Golden Bough". Esmonde argued that each of the first three Earthsea books hinged on an act of trust. In "A Wizard of Earthsea", Vetch trusts Ged with his true name when the latter is at his lowest ebb emotionally, thus giving Ged complete power over him. Ged later offers Tenar the same gift in "The Tombs of Atuan", thereby allowing her to learn trust. "A Wizard of Earthsea" and other novels of the Earthsea cycle differ notably from Le Guin's early Hainish cycle works, although they were written at a similar time.
George Slusser described the Earthsea works as providing a counterweight to the "excessive pessimism" of the Hainish novels. He saw the former as depicting individual action in a favorable light, in contrast to works such as "Vaster than Empires and More Slow". The "Encyclopedia of Science Fiction" said the book was pervaded by a "grave joyfulness". In discussing the style of her fantasy works, Le Guin herself said that in fantasy it was necessary to be clear and direct with language, because there is no known framework for the reader's mind to rest upon. The story often appears to assume that readers are familiar with the geography and history of Earthsea, a technique which allowed Le Guin to avoid exposition: a reviewer wrote that this method "gives Le Guin's world the mysterious depths of Tolkien's, but without his tiresome back-stories and versifying". In keeping with the notion of an epic, the narration switches between looking ahead into Ged's future and looking back into the past of Earthsea. At the same time, Slusser described the mood of the novel as "strange and dreamlike", fluctuating between objective reality and the thoughts in Ged's mind; some of Ged's adversaries are real, while others are phantoms. This narrative technique, which Cadden characterizes as "free indirect discourse", makes the narrator of the book seem sympathetic to the protagonist, and does not distance his thoughts from the reader. "A Wizard of Earthsea" has strong elements of an epic; for instance, Ged's place in Earthsea history is described at the very beginning of the book in the following terms: "some say the greatest, and surely the greatest voyager, was the man called Sparrowhawk, who in his day became both dragonlord and Archmage." The story also begins with words from the Earthsea song "The Creation of Éa", which forms a ritualistic beginning to the book. The teller of the story then goes on to say that the tale is from Ged's youth, thereby establishing context for the rest of the book. In comparison with the protagonists of many of Le Guin's other works, Ged is superficially a typical hero, a mage who sets out on a quest. Reviewers have compared "A Wizard of Earthsea" to epics such as "Beowulf". Scholar Virginia White argued that the story followed a structure common to epics in which the protagonist begins an adventure, faces trials along the way, and eventually returns in triumph. White went on to suggest that this structure can be seen in the series as a whole, as well as in the individual volumes. Le Guin subverted many of the tropes typical of such "monomyths": the protagonists of her story were all dark-skinned, in comparison to the white-skinned heroes more traditionally used; the Kargish antagonists, in contrast, were white-skinned, a switching of race roles that has been remarked upon by multiple critics. Critics have also cited her use of characters from multiple class backgrounds as a choice subversive to conventional Western fantasy. At the same time, reviewers questioned Le Guin's treatment of gender in "A Wizard of Earthsea", and the original trilogy as a whole. Le Guin, who later became known as a feminist, chose to restrict the use of magic to men and boys in the first volume of Earthsea. Initial critical reactions to "A Wizard of Earthsea" saw Ged's gender as incidental. In contrast, "The Tombs of Atuan" saw Le Guin intentionally tell a female coming-of-age story, which was nonetheless described as perpetuating a male-dominated model of Earthsea. 
"Tehanu" (1990), published as the fourth volume of "Earthsea" 18 years after the third, has been described both by Le Guin and her commentators as a feminist re-imagining of the series, in which the power and status of the chief characters are reversed, and the patriarchal social structure questioned. Commenting in 1993, Le Guin wrote that she could not continue [Earthsea after 1972] until she had "wrestled with the angels of the feminist consciousness". Several critics have argued that by combining elements of epic, "Bildungsroman", and young adult fiction, Le Guin succeeded in blurring the boundaries of conventional genres. In a 1975 commentary Francis Molson argued that the series should be referred to as "ethical fantasy", a term which acknowledged that the story did not always follow the tropes of heroic fantasy, and the moral questions that it raised. The term did not become popular. A similar argument was made by children's literature critic Cordelia Sherman in 1985; she argued that "A Wizard of Earthsea" and the rest of the series sought "to teach children by dramatic example what it means to be a good adult". A condensed, illustrated version of the first chapter was printed by World Books in the third volume of "Childcraft" in 1989. Multiple audio versions of the book have been released. BBC Radio produced a radioplay version in 1996 narrated by Judi Dench, and a six-part series adapting the Earthsea novels in 2015, broadcast on Radio 4 Extra. In 2011, the work was produced as an unabridged recording performed by Robert Inglis. Two screen adaptations of the story have also been produced. An original mini-series titled "Legend of Earthsea" was broadcast in 2005 on the Sci Fi Channel. It is based very loosely on "A Wizard of Earthsea" and "The Tombs of Atuan". In an article published in "Salon", Le Guin expressed strong displeasure at the result. She stated that by casting a "petulant white kid" as Ged (who has red-brown skin in the book) the series "whitewashed Earthsea", and had ignored her choice to write the story of a non-white character, a choice she said was central to the book. This sentiment was shared by a review in "The Ultimate Encyclopedia of Fantasy", which said that "Legend of Earthsea" "totally missed the point" of Le Guin's novels, "ripping out all the subtlety, nuance and beauty of the books and inserting boring cliches, painful stereotypes and a very unwelcome 'epic' war in their place". Studio Ghibli released an adaptation of the series in 2006 titled "Tales from Earthsea". The film very loosely combines elements of the first, third, and fourth books into a new story. Le Guin commented with displeasure on the film-making process, saying that she had acquiesced to the adaptation believing Hayao Miyazaki would be producing the film himself, which was eventually not the case. Le Guin praised the imagery of the film, but disliked the use of violence. She also expressed dissatisfaction with the portrayal of morality, and in particular the use of a villain who could be slain as a means of resolving conflict, which she said was antithetical to the message of the book. The film received generally mixed responses. Anton Chekhov Anton Pavlovich Chekhov (; 29 January 1860 – 15 July 1904) was a Russian playwright and short-story writer, who is considered to be among the greatest writers of short fiction in history. His career as a playwright produced four classics, and his best short stories are held in high esteem by writers and critics. 
Along with Henrik Ibsen and August Strindberg, Chekhov is often referred to as one of the three seminal figures in the birth of early modernism in the theatre. Chekhov practiced as a medical doctor throughout most of his literary career: "Medicine is my lawful wife", he once said, "and literature is my mistress." Chekhov renounced the theatre after the reception of "The Seagull" in 1896, but the play was revived to acclaim in 1898 by Konstantin Stanislavski's Moscow Art Theatre, which subsequently also produced Chekhov's "Uncle Vanya" and premiered his last two plays, "Three Sisters" and "The Cherry Orchard". These four works present a challenge to the acting ensemble as well as to audiences, because in place of conventional action Chekhov offers a "theatre of mood" and a "submerged life in the text". Chekhov had at first written stories only for financial gain, but as his artistic ambition grew, he made formal innovations which have influenced the evolution of the modern short story. He made no apologies for the difficulties this posed to readers, insisting that the role of an artist was to ask questions, not to answer them. Anton Chekhov was born on the feast day of St. Anthony the Great (17 January Old Style) 29 January 1860, the third of six surviving children, in Taganrog, a port on the Sea of Azov in southern Russia. His father, Pavel Yegorovich Chekhov, the son of a former serf and his Ukrainian wife, was from the village of Vilkhovatka near Kobeliaky (in the Poltava region of modern-day Ukraine) and ran a grocery store. A director of the parish choir, devout Orthodox Christian, and physically abusive father, Pavel Chekhov has been seen by some historians as the model for his son's many portraits of hypocrisy. Chekhov's mother, Yevgeniya (Morozova), was an excellent storyteller who entertained the children with tales of her travels with her cloth-merchant father all over Russia. "Our talents we got from our father," Chekhov remembered, "but our soul from our mother." In adulthood, Chekhov criticised his brother Alexander's treatment of his wife and children by reminding him of Pavel's tyranny: "Let me ask you to recall that it was despotism and lying that ruined your mother's youth. Despotism and lying so mutilated our childhood that it's sickening and frightening to think about it. Remember the horror and disgust we felt in those times when Father threw a tantrum at dinner over too much salt in the soup and called Mother a fool." Chekhov attended the Greek School in Taganrog and the Taganrog "Gymnasium" (since renamed the Chekhov Gymnasium), where he was kept down for a year at fifteen for failing an examination in Ancient Greek. He sang at the Greek Orthodox monastery in Taganrog and in his father's choirs. In a letter of 1892, he used the word "suffering" to describe his childhood. In 1876, Chekhov's father was declared bankrupt after overextending his finances building a new house, having been cheated by a contractor called Mironov. To avoid debtor's prison he fled to Moscow, where his two eldest sons, Alexander and Nikolay, were attending university. The family lived in poverty in Moscow, Chekhov's mother physically and emotionally broken by the experience. Chekhov was left behind to sell the family's possessions and finish his education. Chekhov remained in Taganrog for three more years, boarding with a man called Selivanov who, like Lopakhin in "The Cherry Orchard", had bailed out the family for the price of their house. 
Chekhov had to pay for his own education, which he managed by private tutoring, catching and selling goldfinches, and selling short sketches to the newspapers, among other jobs. He sent every ruble he could spare to his family in Moscow, along with humorous letters to cheer them up. During this time, he read widely and analytically, including the works of Cervantes, Turgenev, Goncharov, and Schopenhauer, and wrote a full-length comic drama, "Fatherless", which his brother Alexander dismissed as "an inexcusable though innocent fabrication." Chekhov also enjoyed a series of love affairs, one with the wife of a teacher. In 1879, Chekhov completed his schooling and joined his family in Moscow, having gained admission to the medical school at I.M. Sechenov First Moscow State Medical University. Chekhov now assumed responsibility for the whole family. To support them and to pay his tuition fees, he wrote daily short, humorous sketches and vignettes of contemporary Russian life, many under pseudonyms such as "Antosha Chekhonte" (Антоша Чехонте) and "Man without a Spleen" (Человек без селезенки). His prodigious output gradually earned him a reputation as a satirical chronicler of Russian street life, and by 1882 he was writing for "Oskolki" ("Fragments"), owned by Nikolai Leykin, one of the leading publishers of the time. Chekhov's tone at this stage was harsher than that familiar from his mature fiction. In 1884, Chekhov qualified as a physician, which he considered his principal profession though he made little money from it and treated the poor free of charge. In 1884 and 1885, Chekhov found himself coughing blood, and in 1886 the attacks worsened, but he would not admit his tuberculosis to his family or his friends. He confessed to Leykin, "I am afraid to submit myself to be sounded by my colleagues." He continued writing for weekly periodicals, earning enough money to move the family into progressively better accommodations. Early in 1886 he was invited to write for one of the most popular papers in St. Petersburg, "Novoye Vremya" ("New Times"), owned and edited by the millionaire magnate Alexey Suvorin, who paid a rate per line double Leykin's and allowed Chekhov three times the space. Suvorin was to become a lifelong friend, perhaps Chekhov's closest. Before long, Chekhov was attracting literary as well as popular attention. The sixty-four-year-old Dmitry Grigorovich, a celebrated Russian writer of the day, wrote to Chekhov after reading his short story "The Huntsman" that "You have "real" talent, a talent that places you in the front rank among writers in the new generation." He went on to advise Chekhov to slow down, write less, and concentrate on literary quality. Chekhov replied that the letter had struck him "like a thunderbolt" and confessed, "I have written my stories the way reporters write up their notes about fires – mechanically, half-consciously, caring nothing about either the reader or myself." The admission may have done Chekhov a disservice, since early manuscripts reveal that he often wrote with extreme care, continually revising. Grigorovich's advice nevertheless inspired a more serious, artistic ambition in the twenty-six-year-old. In 1888, with a little string-pulling by Grigorovich, the short story collection "At Dusk" ("V Sumerkakh") won Chekhov the coveted Pushkin Prize "for the best literary production distinguished by high artistic worth." 
In 1887, exhausted from overwork and ill health, Chekhov took a trip to Ukraine, which reawakened him to the beauty of the steppe. On his return, he began the novella-length short story "The Steppe", which he called "something rather odd and much too original," and which was eventually published in "Severny Vestnik" ("The Northern Herald"). In a narrative that drifts with the thought processes of the characters, Chekhov evokes a chaise journey across the steppe through the eyes of a young boy sent to live away from home, and his companions, a priest and a merchant. "The Steppe" has been called a "dictionary of Chekhov's poetics", and it represented a significant advance for Chekhov, exhibiting much of the quality of his mature fiction and winning him publication in a literary journal rather than a newspaper. In autumn 1887, a theatre manager named Korsh commissioned Chekhov to write a play, the result being "Ivanov", written in a fortnight and produced that November. Though Chekhov found the experience "sickening" and painted a comic portrait of the chaotic production in a letter to his brother Alexander, the play was a hit and was praised, to Chekhov's bemusement, as a work of originality. Although Chekhov did not fully realise it at the time, his plays, such as "The Seagull" (written in 1895), "Uncle Vanya" (written in 1897), "The Three Sisters" (written in 1900), and "The Cherry Orchard" (written in 1903), became a foundation for what is now taken for granted in the craft of acting: an effort to recreate and express the "realism" of how people truly act and speak with each other, and to translate it to the stage so as to manifest the human condition as accurately as possible, in the hope of making the audience reflect upon their own definition of what it means to be human, warts and all. This approach to the art of acting has remained a cornerstone of the craft through much of the 20th century and to this day. Mikhail Chekhov considered "Ivanov" a key moment in his brother's intellectual development and literary career. From this period comes an observation of Chekhov's that has become known as Chekhov's gun, a dramatic principle that requires that every element in a narrative be necessary and irreplaceable, and that everything else be removed. The death of Chekhov's brother Nikolay from tuberculosis in 1889 influenced "A Dreary Story", finished that September, about a man who confronts the end of a life that he realises has been without purpose. Mikhail Chekhov, who recorded his brother's depression and restlessness after Nikolay's death, was researching prisons at the time as part of his law studies, and Anton Chekhov, in a search for purpose in his own life, soon became obsessed himself with the issue of prison reform. In 1890, Chekhov undertook an arduous journey by train, horse-drawn carriage, and river steamer to the Russian Far East and the "katorga", or penal colony, on Sakhalin Island, north of Japan, where he spent three months interviewing thousands of convicts and settlers for a census. The letters Chekhov wrote during the two-and-a-half-month journey to Sakhalin are considered to be among his best. His remarks to his sister about Tomsk were to become notorious. The inhabitants of Tomsk later retaliated by erecting a mocking statue of Chekhov. Chekhov witnessed much on Sakhalin that shocked and angered him, including floggings, embezzlement of supplies, and forced prostitution of women. 
He wrote, "There were times I felt that I saw before me the extreme limits of man's degradation." He was particularly moved by the plight of the children living in the penal colony with their parents. Chekhov later concluded that charity was not the answer, but that the government had a duty to finance humane treatment of the convicts. His findings were published in 1893 and 1894 as "Ostrov Sakhalin" ("The Island of Sakhalin"), a work of social science, not literature. Chekhov found literary expression for the "Hell of Sakhalin" in his long short story "The Murder", the last section of which is set on Sakhalin, where the murderer Yakov loads coal in the night while longing for home. Chekhov's writing on Sakhalin is the subject of brief comment and analysis in Haruki Murakami's novel "1Q84". It is also the subject of a poem by the Nobel Prize winner Seamus Heaney, "Chekhov on Sakhalin" (collected in the volume "Station Island"). In 1892, Chekhov bought the small country estate of Melikhovo, about forty miles south of Moscow, where he lived with his family until 1899. "It's nice to be a lord," he joked to his friend Ivan Leontyev (who wrote humorous pieces under the pseudonym Shcheglov), but he took his responsibilities as a landlord seriously and soon made himself useful to the local peasants. As well as organising relief for victims of the famine and cholera outbreaks of 1892, he went on to build three schools, a fire station, and a clinic, and to donate his medical services to peasants for miles around, despite frequent recurrences of his tuberculosis. Mikhail Chekhov, a member of the household at Melikhovo, described the extent of his brother's medical commitments. Chekhov's expenditure on drugs was considerable, but the greatest cost was making journeys of several hours to visit the sick, which reduced his time for writing. However, Chekhov's work as a doctor enriched his writing by bringing him into intimate contact with all sections of Russian society: for example, he witnessed at first hand the peasants' unhealthy and cramped living conditions, which he recalled in his short story "Peasants". Chekhov visited the upper classes as well, recording in his notebook: "Aristocrats? The same ugly bodies and physical uncleanliness, the same toothless old age and disgusting death, as with market-women." In 1894, Chekhov began writing his play "The Seagull" in a lodge he had built in the orchard at Melikhovo. In the two years since he had moved to the estate, he had refurbished the house, taken up agriculture and horticulture, tended the orchard and the pond, and planted many trees, which, according to Mikhail, he "looked after ... as though they were his children. Like Colonel Vershinin in his "Three Sisters", as he looked at them he dreamed of what they would be like in three or four hundred years." The first night of "The Seagull", at the Alexandrinsky Theatre in St. Petersburg on 17 October 1896, was a fiasco, as the play was booed by the audience, stinging Chekhov into renouncing the theatre. But the play so impressed the theatre director Vladimir Nemirovich-Danchenko that he convinced his colleague Konstantin Stanislavski to direct a new production for the innovative Moscow Art Theatre in 1898. Stanislavski's attention to psychological realism and ensemble playing coaxed the buried subtleties from the text, and restored Chekhov's interest in playwriting. 
The Art Theatre commissioned more plays from Chekhov and the following year staged "Uncle Vanya", which Chekhov had completed in 1896. In later life he became an atheist. In March 1897, Chekhov suffered a major haemorrhage of the lungs while on a visit to Moscow. With great difficulty he was persuaded to enter a clinic, where the doctors diagnosed tuberculosis on the upper part of his lungs and ordered a change in his manner of life. After his father's death in 1898, Chekhov bought a plot of land on the outskirts of Yalta and built a villa, into which he moved with his mother and sister the following year. Though he planted trees and flowers, kept dogs and tame cranes, and received guests such as Leo Tolstoy and Maxim Gorky, Chekhov was always relieved to leave his "hot Siberia" for Moscow or travels abroad. He vowed to move to Taganrog as soon as a water supply was installed there. In Yalta he completed two more plays for the Art Theatre, composing with greater difficulty than in the days when he "wrote serenely, the way I eat pancakes now". He took a year each over "Three Sisters" and "The Cherry Orchard". On 25 May 1901, Chekhov married Olga Knipper quietly, owing to his horror of weddings. She was a former protégée and sometime lover of Nemirovich-Danchenko whom Chekhov had first met at rehearsals for "The Seagull". Up to that point, Chekhov, known as "Russia's most elusive literary bachelor," had preferred passing liaisons and visits to brothels over commitment. He had once written to Suvorin describing the kind of detached marriage he could tolerate, and the letter proved prophetic of Chekhov's marital arrangements with Olga: he lived largely at Yalta, she in Moscow, pursuing her acting career. In 1902, Olga suffered a miscarriage, and Donald Rayfield has offered evidence, based on the couple's letters, that conception may have occurred when Chekhov and Olga were apart, although Russian scholars have rejected that claim. The literary legacy of this long-distance marriage is a correspondence that preserves gems of theatre history, including shared complaints about Stanislavski's directing methods and Chekhov's advice to Olga about performing in his plays. In Yalta, Chekhov wrote one of his most famous stories, "The Lady with the Dog" (also translated from the Russian as "Lady with Lapdog"), which depicts what at first seems a casual liaison between a cynical married man and an unhappy married woman who meet while vacationing in Yalta. Neither expects anything lasting from the encounter. Unexpectedly though, they gradually fall deeply in love and end up risking scandal and the security of their family lives. The story masterfully captures their feelings for each other, the inner transformation undergone by the disillusioned male protagonist as a result of falling deeply in love, and their inability to resolve the matter by letting go of either their families or each other. By May 1904, Chekhov was terminally ill with tuberculosis. Mikhail Chekhov recalled that "everyone who saw him secretly thought the end was not far off, but the nearer [he] was to the end, the less he seemed to realise it." On 3 June, he set off with Olga for the German spa town of Badenweiler in the Black Forest, from where he wrote outwardly jovial letters to his sister Masha, describing the food and surroundings, and assuring her and his mother that he was getting better. In his last letter, he complained about the way German women dressed. 
Chekhov's death has become one of "the great set pieces of literary history," retold, embroidered, and fictionalised many times since, notably in the short story "Errand" by Raymond Carver. In 1908, Olga wrote an account of her husband's last moments. Chekhov's body was transported to Moscow in a refrigerated railway car meant for oysters, a detail that offended Gorky. Some of the thousands of mourners followed the funeral procession of a General Keller by mistake, to the accompaniment of a military band. Chekhov was buried next to his father at the Novodevichy Cemetery. A few months before he died, Chekhov told the writer Ivan Bunin that he thought people might go on reading his writings for seven years. "Why seven?" asked Bunin. "Well, seven and a half," Chekhov replied. "That's not bad. I've got six years to live." Chekhov's posthumous reputation greatly exceeded his expectations. The ovations for the play "The Cherry Orchard" in the year of his death served to demonstrate the Russian public's acclaim for the writer, which placed him second in literary celebrity only to Tolstoy, who outlived him by six years. Tolstoy was an early admirer of Chekhov's short stories and had a series that he deemed "first quality" and "second quality" bound into a book. In the first category were: "Children", "The Chorus Girl", "A Play", "Home", "Misery", "The Runaway", "In Court", "Vanka", "Ladies", "A Malefactor", "The Boys", "Darkness", "Sleepy", "The Helpmate", and "The Darling"; in the second: "A Transgression", "Sorrow", "The Witch", "Verochka", "In a Strange Land", "The Cook's Wedding", "A Tedious Business", "An Upheaval", "Oh! The Public!", "The Mask", "A Woman's Luck", "Nerves", "The Wedding", "A Defenseless Creature", and "Peasant Wives." In Chekhov's lifetime, British and Irish critics generally did not find his work pleasing; E. J. Dillon thought "the effect on the reader of Chekhov's tales was repulsion at the gallery of human waste represented by his fickle, spineless, drifting people" and R. E. C. Long said "Chekhov's characters were repugnant, and that Chekhov reveled in stripping the last rags of dignity from the human soul". After his death, Chekhov was reappraised. Constance Garnett's translations won him an English-language readership and the admiration of writers such as James Joyce, Virginia Woolf, and Katherine Mansfield, whose story "The Child Who Was Tired" is similar to Chekhov's "Sleepy". The Russian critic D. S. Mirsky, who lived in England, explained Chekhov's popularity in that country by his "unusually complete rejection of what we may call the heroic values." In Russia itself, Chekhov's drama fell out of fashion after the revolution, but it was later incorporated into the Soviet canon. The character of Lopakhin, for example, was reinvented as a hero of the new order, rising from a modest background so as eventually to possess the gentry's estates. One of the first non-Russians to praise Chekhov's plays was George Bernard Shaw, who subtitled his "Heartbreak House" "A Fantasia in the Russian Manner on English Themes," and pointed out similarities between the predicament of the British landed class and that of their Russian counterparts as depicted by Chekhov: "the same nice people, the same utter futility." 
In the United States, Chekhov's reputation began its rise slightly later, partly through the influence of Stanislavski's system of acting, with its notion of subtext: "Chekhov often expressed his thought not in speeches," wrote Stanislavski, "but in pauses or between the lines or in replies consisting of a single word ... the characters often feel and think things not expressed in the lines they speak." The Group Theatre, in particular, developed the subtextual approach to drama, influencing generations of American playwrights, screenwriters, and actors, including Clifford Odets, Elia Kazan and, in particular, Lee Strasberg. In turn, Strasberg's Actors Studio and the "Method" acting approach influenced many actors, including Marlon Brando and Robert De Niro, though by then the Chekhov tradition may have been distorted by a preoccupation with realism. In 1981, the playwright Tennessee Williams adapted "The Seagull" as "The Notebook of Trigorin". One of Anton's nephews, Michael Chekhov, would also contribute heavily to modern theatre, particularly through his unique acting methods which developed Stanislavski's ideas further. Despite Chekhov's reputation as a playwright, William Boyd asserts that his short stories represent the greater achievement. Raymond Carver, who wrote the short story "Errand" about Chekhov's death, believed that Chekhov was the greatest of all short story writers. Ernest Hemingway, another writer influenced by Chekhov, was more grudging: "Chekhov wrote about six good stories. But he was an amateur writer." And Vladimir Nabokov criticised Chekhov's "medley of dreadful prosaisms, ready-made epithets, repetitions." But he also declared "The Lady with the Dog" "one of the greatest stories ever written" in its depiction of a problematic relationship, and described Chekhov as writing "the way one person relates to another the most important things in his life, slowly and yet without a break, in a slightly subdued voice." For the writer William Boyd, Chekhov's historical accomplishment was to abandon what William Gerhardie called the "event plot" for something more "blurred, interrupted, mauled or otherwise tampered with by life." Virginia Woolf mused on the unique quality of a Chekhov story in "The Common Reader" (1925). While a Professor of Comparative Literature at Princeton University, Michael Goldman presented his view on defining the elusive quality of Chekhov's comedies, stating: "Having learned that Chekhov is comic ... Chekhov is comic in a very special, paradoxical way. His plays depend, as comedy does, on the vitality of the actors to make pleasurable what would otherwise be painfully awkward – inappropriate speeches, missed connections, "faux pas", stumbles, childishness – but as part of a deeper pathos; the stumbles are not pratfalls but an energized, graceful dissolution of purpose." Alan Twigg, the chief editor and publisher of the Canadian book review magazine "BC Bookworld", has written in praise of Chekhov's worldwide reach. Chekhov has also influenced the work of Japanese playwrights including Shimizu Kunio, Yōji Sakate, and Ai Nagai. Critics have noted similarities in how Chekhov and Shimizu use a mixture of light humor and intense depictions of longing. Sakate adapted several of Chekhov's plays and transformed them in the general style of "nō". Nagai also adapted Chekhov's plays, including "Three Sisters", and transformed his dramatic style into Nagai's style of satirical realism while emphasising the social issues depicted in the plays. 
Chekhov's works have been adapted for the screen, including Sidney Lumet's "Sea Gull" and Louis Malle's "Vanya on 42nd Street". Laurence Olivier's final effort as a film director was a 1970 adaptation of "Three Sisters" in which he also played a supporting role. His work has also served as inspiration or been referenced in numerous films. In Andrei Tarkovsky's 1975 film "The Mirror", characters discuss his short story "Ward No. 6". Woody Allen has been influenced by Chekhov and references to his works are present in many of his films, including "Love and Death" (1975), "Interiors" (1978) and "Hannah and Her Sisters" (1986). Plays by Chekhov are also referenced in François Truffaut's 1980 drama film "The Last Metro", which is set in a theatre. A portion of a stage production of "Three Sisters" appears in the 2014 drama film "Still Alice". Amphetamine Amphetamine (contracted from alpha-methylphenethylamine) is a potent central nervous system (CNS) stimulant that is used in the treatment of attention deficit hyperactivity disorder (ADHD), narcolepsy, and obesity. Amphetamine was discovered in 1887 and exists as two enantiomers: levoamphetamine and dextroamphetamine. "Amphetamine" properly refers to a specific chemical, the racemic free base, which is equal parts of the two enantiomers, levoamphetamine and dextroamphetamine, in their pure amine forms. The term is frequently used informally to refer to any combination of the enantiomers, or to either of them alone. Historically, it has been used to treat nasal congestion and depression. Amphetamine is also used as an athletic performance enhancer and cognitive enhancer, and recreationally as an aphrodisiac and euphoriant. It is a prescription drug in many countries, and unauthorized possession and distribution of amphetamine are often tightly controlled due to the significant health risks associated with recreational use. The first amphetamine pharmaceutical was Benzedrine, a brand which was used to treat a variety of conditions. Currently, pharmaceutical amphetamine is prescribed as racemic amphetamine, Adderall, dextroamphetamine, or the inactive prodrug lisdexamfetamine. Amphetamine increases monoamine and excitatory neurotransmission in the brain, with its most pronounced effects targeting the norepinephrine and dopamine neurotransmitter systems. At therapeutic doses, amphetamine causes emotional and cognitive effects such as euphoria, change in desire for sex, increased wakefulness, and improved cognitive control. It induces physical effects such as improved reaction time, fatigue resistance, and increased muscle strength. Larger doses of amphetamine may impair cognitive function and induce rapid muscle breakdown. Drug addiction is a serious risk with large recreational doses but is unlikely to arise from typical long-term medical use at therapeutic doses. Very high doses can result in psychosis (e.g., delusions and paranoia) which rarely occurs at therapeutic doses even during long-term use. Recreational doses are generally much larger than prescribed therapeutic doses and carry a far greater risk of serious side effects. Amphetamine belongs to the phenethylamine class. It is also the parent compound of its own structural class, the substituted amphetamines, which includes prominent substances such as bupropion, cathinone, MDMA, and methamphetamine. 
As a member of the phenethylamine class, amphetamine is also chemically related to the naturally occurring trace amine neuromodulators, specifically phenethylamine and N-methylphenethylamine, both of which are produced within the human body. Phenethylamine is the parent compound of amphetamine, while N-methylphenethylamine is a positional isomer of amphetamine that differs only in the placement of the methyl group. Long-term amphetamine exposure at sufficiently high doses in some animal species is known to produce abnormal dopamine system development or nerve damage, but, in humans with ADHD, pharmaceutical amphetamines appear to improve brain development and nerve growth. Reviews of magnetic resonance imaging (MRI) studies suggest that long-term treatment with amphetamine decreases abnormalities in brain structure and function found in subjects with ADHD, and improves function in several parts of the brain, such as the right caudate nucleus of the basal ganglia. Reviews of clinical stimulant research have established the safety and effectiveness of long-term continuous amphetamine use for the treatment of ADHD. Randomized controlled trials of continuous stimulant therapy for the treatment of ADHD spanning 2 years have demonstrated treatment effectiveness and safety. Two reviews have indicated that long-term continuous stimulant therapy for ADHD is effective for reducing the core symptoms of ADHD (i.e., hyperactivity, inattention, and impulsivity), enhancing quality of life and academic achievement, and producing improvements in a large number of functional outcomes across 9 categories of outcomes related to academics, antisocial behavior, driving, non-medicinal drug use, obesity, occupation, self-esteem, service use (i.e., academic, occupational, health, financial, and legal services), and social function. One review highlighted a nine-month randomized controlled trial of amphetamine treatment for ADHD in children that found an average increase of 4.5 IQ points, continued increases in attention, and continued decreases in disruptive behaviors and hyperactivity. Another review indicated that, based upon the longest follow-up studies conducted to date, lifetime stimulant therapy that begins during childhood is continuously effective for controlling ADHD symptoms and reduces the risk of developing a substance use disorder as an adult. Current models of ADHD suggest that it is associated with functional impairments in some of the brain's neurotransmitter systems; these functional impairments involve impaired dopamine neurotransmission in the mesocorticolimbic projection and norepinephrine neurotransmission in the noradrenergic projections from the locus coeruleus to the prefrontal cortex. Psychostimulants like methylphenidate and amphetamine are effective in treating ADHD because they increase neurotransmitter activity in these systems. Approximately 80% of those who use these stimulants see improvements in ADHD symptoms. Children with ADHD who use stimulant medications generally have better relationships with peers and family members, perform better in school, are less distractible and impulsive, and have longer attention spans. The Cochrane Collaboration's reviews on the treatment of ADHD in children, adolescents, and adults with pharmaceutical amphetamines stated that while these drugs improve short-term symptoms, they have higher discontinuation rates than non-stimulant medications due to their adverse side effects. 
A Cochrane Collaboration review on the treatment of ADHD in children with tic disorders such as Tourette syndrome indicated that stimulants in general do not make tics worse, but high doses of dextroamphetamine could exacerbate tics in some individuals. In 2015, a systematic review and a meta-analysis of high-quality clinical trials found that, when used at low (therapeutic) doses, amphetamine produces modest yet unambiguous improvements in cognition, including working memory, long-term episodic memory, inhibitory control, and some aspects of attention, in normal healthy adults; these cognition-enhancing effects of amphetamine are known to be partially mediated through the indirect activation of both dopamine receptor D1 and adrenoceptor α2 in the prefrontal cortex. A systematic review from 2014 found that low doses of amphetamine also improve memory consolidation, in turn leading to improved recall of information. Therapeutic doses of amphetamine also enhance cortical network efficiency, an effect which mediates improvements in working memory in all individuals. Amphetamine and other ADHD stimulants also improve task saliency (motivation to perform a task) and increase arousal (wakefulness), in turn promoting goal-directed behavior. Stimulants such as amphetamine can improve performance on difficult and boring tasks and are used by some students as a study and test-taking aid. Based upon studies of self-reported illicit stimulant use, a sizable minority of college students use diverted ADHD stimulants, which are primarily used for performance enhancement rather than as recreational drugs. However, high amphetamine doses that are above the therapeutic range can interfere with working memory and other aspects of cognitive control. Amphetamine is used by some athletes for its psychological and athletic performance-enhancing effects, such as increased endurance and alertness; however, non-medical amphetamine use is prohibited at sporting events that are regulated by collegiate, national, and international anti-doping agencies. In healthy people at oral therapeutic doses, amphetamine has been shown to increase muscle strength, acceleration, athletic performance in anaerobic conditions, and endurance (i.e., it delays the onset of fatigue), while improving reaction time. Amphetamine improves endurance and reaction time primarily through reuptake inhibition and effluxion of dopamine in the central nervous system. Amphetamine and other dopaminergic drugs also increase power output at fixed levels of perceived exertion by overriding a "safety switch" that allows the core temperature limit to increase in order to access a reserve capacity that is normally off-limits. At therapeutic doses, the adverse effects of amphetamine do not impede athletic performance; however, at much higher doses, amphetamine can induce effects that severely impair performance, such as rapid muscle breakdown and elevated body temperature. According to the International Programme on Chemical Safety (IPCS) and United States Food and Drug Administration (USFDA), amphetamine is contraindicated in people with a history of drug abuse, cardiovascular disease, severe agitation, or severe anxiety. It is also contraindicated in people currently experiencing advanced arteriosclerosis (hardening of the arteries), glaucoma (increased eye pressure), hyperthyroidism (excessive production of thyroid hormone), or moderate to severe hypertension. 
These agencies indicate that people who have experienced allergic reactions to other stimulants or who are taking monoamine oxidase inhibitors (MAOIs) should not take amphetamine, although safe concurrent use of amphetamine and monoamine oxidase inhibitors has been documented. These agencies also state that anyone with anorexia nervosa, bipolar disorder, depression, hypertension, liver or kidney problems, mania, psychosis, Raynaud's phenomenon, seizures, thyroid problems, tics, or Tourette syndrome should monitor their symptoms while taking amphetamine. Evidence from human studies indicates that therapeutic amphetamine use does not cause developmental abnormalities in the fetus or newborns (i.e., it is not a human teratogen), but amphetamine abuse does pose risks to the fetus. Amphetamine has also been shown to pass into breast milk, so the IPCS and USFDA advise mothers to avoid breastfeeding when using it. Due to the potential for reversible growth impairments, the USFDA advises monitoring the height and weight of children and adolescents prescribed an amphetamine pharmaceutical. At normal therapeutic doses, the physical side effects of amphetamine vary widely by age and from person to person. Cardiovascular side effects can include hypertension or hypotension from a vasovagal response, Raynaud's phenomenon (reduced blood flow to the hands and feet), and tachycardia (increased heart rate). Sexual side effects in males may include erectile dysfunction, frequent erections, or prolonged erections. Abdominal side effects may include abdominal pain, appetite loss, nausea, and weight loss. Other potential side effects include blurred vision, dry mouth, excessive grinding of the teeth, nosebleed, profuse sweating, rhinitis medicamentosa (drug-induced nasal congestion), reduced seizure threshold, and tics (a type of movement disorder). Dangerous physical side effects are rare at typical pharmaceutical doses. Amphetamine stimulates the medullary respiratory centers, producing faster and deeper breaths. In a normal person at therapeutic doses, this effect is usually not noticeable, but when respiration is already compromised, it may be evident. Amphetamine also induces contraction in the urinary bladder sphincter, the muscle which controls urination, which can result in difficulty urinating. This effect can be useful in treating bed wetting and loss of bladder control. The effects of amphetamine on the gastrointestinal tract are unpredictable. If intestinal activity is high, amphetamine may reduce gastrointestinal motility (the rate at which content moves through the digestive system); however, amphetamine may increase motility when the smooth muscle of the tract is relaxed. Amphetamine also has a slight analgesic effect and can enhance the pain relieving effects of opioids. USFDA-commissioned studies from 2011 indicate that in children, young adults, and adults there is no association between serious adverse cardiovascular events (sudden death, heart attack, and stroke) and the medical use of amphetamine or other ADHD stimulants. However, amphetamine pharmaceuticals are contraindicated in individuals with cardiovascular disease. At normal therapeutic doses, the most common psychological side effects of amphetamine include increased alertness, apprehension, concentration, initiative, self-confidence, and sociability, mood swings (elated mood followed by mildly depressed mood), insomnia or wakefulness, and decreased sense of fatigue. 
Less common side effects include anxiety, change in libido, grandiosity, irritability, repetitive or obsessive behaviors, and restlessness; these effects depend on the user's personality and current mental state. Amphetamine psychosis (e.g., delusions and paranoia) can occur in heavy users. Although very rare, this psychosis can also occur at therapeutic doses during long-term therapy. According to the USFDA, "there is no systematic evidence" that stimulants produce aggressive behavior or hostility. Amphetamine has also been shown to produce a conditioned place preference in humans taking therapeutic doses, meaning that individuals acquire a preference for spending time in places where they have previously used amphetamine. An amphetamine overdose can lead to many different symptoms, but is rarely fatal with appropriate care. The severity of overdose symptoms increases with dosage and decreases with drug tolerance to amphetamine. Tolerant individuals have been known to take as much as 5 grams of amphetamine in a day, which is roughly 100 times the maximum daily therapeutic dose. A moderate or extremely large overdose can produce a wide range of symptoms; fatal amphetamine poisoning usually also involves convulsions and coma. In 2013, overdose on amphetamine, methamphetamine, and other compounds implicated in an "amphetamine use disorder" resulted in an estimated 3,788 deaths worldwide. Pathological overactivation of the mesolimbic pathway, a dopamine pathway that connects the ventral tegmental area to the nucleus accumbens, plays a central role in amphetamine addiction. Individuals who frequently overdose on amphetamine during recreational use have a high risk of developing an amphetamine addiction, since repeated overdoses gradually increase the level of accumbal ΔFosB, a "molecular switch" and "master control protein" for addiction. Once nucleus accumbens ΔFosB is sufficiently overexpressed, it begins to increase the severity of addictive behavior (i.e., compulsive drug-seeking) with further increases in its expression. While there are currently no effective drugs for treating amphetamine addiction, regularly engaging in sustained aerobic exercise appears to reduce the risk of developing such an addiction. Sustained aerobic exercise on a regular basis also appears to be an effective treatment for amphetamine addiction; exercise therapy improves clinical treatment outcomes and may be used as a combination therapy with cognitive behavioral therapy, which is currently the best clinical treatment available. Addiction is a serious risk with heavy recreational amphetamine use but is unlikely to arise from typical long-term medical use at therapeutic doses. Drug tolerance develops rapidly in amphetamine abuse (i.e., a recreational amphetamine overdose), so periods of extended use require increasingly larger doses of the drug in order to achieve the same effect. Chronic use of amphetamine at excessive doses causes alterations in gene expression in the mesocorticolimbic projection, which arise through transcriptional and epigenetic mechanisms. The most important transcription factors that produce these alterations are ΔFosB, cAMP response element binding protein (CREB), and nuclear factor kappa B (NF-κB). 
ΔFosB is the most significant biomolecular mechanism in addiction because the overexpression of ΔFosB in the D1-type medium spiny neurons in the nucleus accumbens is necessary and sufficient for many of the neural adaptations and behavioral effects (e.g., expression-dependent increases in drug self-administration and reward sensitization) seen in drug addiction. Once ΔFosB is sufficiently overexpressed, it induces an addictive state that becomes increasingly more severe with further increases in ΔFosB expression. It has been implicated in addictions to alcohol, cannabinoids, cocaine, methylphenidate, nicotine, opioids, phencyclidine, propofol, and substituted amphetamines, among others. ΔJunD, a transcription factor, and G9a, a histone methyltransferase enzyme, both oppose the function of ΔFosB and inhibit increases in its expression. Sufficiently overexpressing ΔJunD in the nucleus accumbens with viral vectors can completely block many of the neural and behavioral alterations seen in chronic drug abuse (i.e., the alterations mediated by ΔFosB). ΔFosB also plays an important role in regulating behavioral responses to natural rewards, such as palatable food, sex, and exercise. Since both natural rewards and addictive drugs induce expression of ΔFosB (i.e., they cause the brain to produce more of it), chronic acquisition of these rewards can result in a similar pathological state of addiction. Consequently, ΔFosB is the most significant factor involved in both amphetamine addiction and amphetamine-induced sex addictions, which are compulsive sexual behaviors that result from excessive sexual activity and amphetamine use. These sex addictions are associated with a dopamine dysregulation syndrome which occurs in some patients taking dopaminergic drugs. The effects of amphetamine on gene regulation are both dose- and route-dependent. Most of the research on gene regulation and addiction is based upon animal studies with intravenous amphetamine administration at very high doses. The few studies that have used equivalent (weight-adjusted) human therapeutic doses and oral administration show that these changes, if they occur, are relatively minor. This suggests that medical use of amphetamine does not significantly affect gene regulation. There is currently no effective pharmacotherapy for amphetamine addiction. Reviews from 2015 and 2016 indicated that TAAR1-selective agonists have significant therapeutic potential as a treatment for psychostimulant addictions; however, the only compounds which are known to function as TAAR1-selective agonists are experimental drugs. Amphetamine addiction is largely mediated through increased activation of dopamine receptors and NMDA receptors in the nucleus accumbens; magnesium ions inhibit NMDA receptors by blocking the receptor calcium channel. One review suggested that, based upon animal testing, pathological (addiction-inducing) psychostimulant use significantly reduces the level of intracellular magnesium throughout the brain. Supplemental magnesium treatment has been shown to reduce amphetamine self-administration (i.e., doses given to oneself) in humans, but it is not an effective monotherapy for amphetamine addiction. Cognitive behavioral therapy is currently the most effective clinical treatment for psychostimulant addictions. 
Additionally, research on the neurobiological effects of physical exercise suggests that daily aerobic exercise, especially endurance exercise (e.g., marathon running), prevents the development of drug addiction and is an effective adjunct therapy (i.e., a supplemental treatment) for amphetamine addiction. Exercise leads to better treatment outcomes when used as an adjunct treatment, particularly for psychostimulant addictions. In particular, aerobic exercise decreases psychostimulant self-administration, reduces the reinstatement (i.e., relapse) of drug-seeking, and induces increased dopamine receptor D2 (DRD2) density in the striatum. This is the opposite of pathological stimulant use, which induces decreased striatal DRD2 density. One review noted that exercise may also prevent the development of a drug addiction by altering ΔFosB or c-Fos immunoreactivity in the striatum or other parts of the reward system. According to another Cochrane Collaboration review on withdrawal in individuals who compulsively use amphetamine and methamphetamine, "when chronic heavy users abruptly discontinue amphetamine use, many report a time-limited withdrawal syndrome that occurs within 24 hours of their last dose." This review noted that withdrawal symptoms in chronic, high-dose users are frequent, occurring in roughly 88% of cases, and persist for weeks, with a marked "crash" phase occurring during the first week. Amphetamine withdrawal symptoms can include anxiety, drug craving, depressed mood, fatigue, increased appetite, increased movement or decreased movement, lack of motivation, sleeplessness or sleepiness, and lucid dreams. The review indicated that the severity of withdrawal symptoms is positively correlated with the age of the individual and the extent of their dependence. Mild withdrawal symptoms from the discontinuation of amphetamine treatment at therapeutic doses can be avoided by tapering the dose. In rodents and primates, sufficiently high doses of amphetamine cause dopaminergic neurotoxicity, or damage to dopamine neurons, which is characterized by dopamine terminal degeneration and reduced transporter and receptor function. There is no evidence that amphetamine is directly neurotoxic in humans. However, large doses of amphetamine may indirectly cause dopaminergic neurotoxicity as a result of hyperpyrexia, the excessive formation of reactive oxygen species, and increased autoxidation of dopamine. Animal models of neurotoxicity from high-dose amphetamine exposure indicate that the occurrence of hyperpyrexia (i.e., core body temperature ≥ 40 °C) is necessary for the development of amphetamine-induced neurotoxicity. Prolonged elevations of brain temperature above 40 °C likely promote the development of amphetamine-induced neurotoxicity in laboratory animals by facilitating the production of reactive oxygen species, disrupting cellular protein function, and transiently increasing blood–brain barrier permeability. A severe amphetamine overdose can result in a stimulant psychosis that may involve a variety of symptoms, such as delusions and paranoia. A Cochrane Collaboration review on treatment for amphetamine, dextroamphetamine, and methamphetamine psychosis states that a proportion of users fail to recover completely. According to the same review, there is at least one trial that shows antipsychotic medications effectively resolve the symptoms of acute amphetamine psychosis. Psychosis very rarely arises from therapeutic use. 
Many types of substances are known to interact with amphetamine, resulting in altered drug action or metabolism of amphetamine, the interacting substance, or both. Inhibitors of enzymes that metabolize amphetamine (e.g., CYP2D6 and FMO3) will prolong its elimination half-life, meaning that its effects will last longer. Amphetamine also interacts with monoamine oxidase inhibitors (MAOIs), particularly monoamine oxidase A inhibitors, since both MAOIs and amphetamine increase plasma catecholamines (i.e., norepinephrine and dopamine); therefore, concurrent use of both is dangerous. Amphetamine modulates the activity of most psychoactive drugs. In particular, amphetamine may decrease the effects of sedatives and depressants and increase the effects of stimulants and antidepressants. Amphetamine may also decrease the effects of antihypertensives and antipsychotics due to its effects on blood pressure and dopamine respectively. Zinc supplementation may reduce the minimum effective dose of amphetamine when it is used for the treatment of ADHD. In general, there is no significant interaction when consuming amphetamine with food, but the pH of gastrointestinal content and urine affects the absorption and excretion of amphetamine, respectively. Acidic substances reduce the absorption of amphetamine and increase urinary excretion, and alkaline substances do the opposite. Due to the effect pH has on absorption, amphetamine also interacts with gastric acid reducers such as proton pump inhibitors and H2 antihistamines, which increase gastrointestinal pH (i.e., make it less acidic). Amphetamine exerts its behavioral effects by altering the use of monoamines as neuronal signals in the brain, primarily in catecholamine neurons in the reward and executive function pathways of the brain. The concentrations of the main neurotransmitters involved in reward circuitry and executive functioning, dopamine and norepinephrine, increase dramatically in a dose-dependent manner in response to amphetamine, owing to its effects on monoamine transporters. The reinforcing and motivational salience-promoting effects of amphetamine are mostly due to enhanced dopaminergic activity in the mesolimbic pathway. The euphoric and locomotor-stimulating effects of amphetamine are dependent upon the magnitude and speed by which it increases synaptic dopamine and norepinephrine concentrations in the striatum. Amphetamine has been identified as a potent full agonist of trace amine-associated receptor 1 (TAAR1), a G protein-coupled receptor (GPCR) discovered in 2001, which is important for regulation of brain monoamines. Activation of TAAR1 increases cAMP production via adenylyl cyclase activation and inhibits monoamine transporter function. Monoamine autoreceptors (e.g., D2 short, presynaptic α2, and presynaptic 5-HT1A) have the opposite effect of TAAR1, and together these receptors provide a regulatory system for monoamines. Notably, amphetamine and trace amines possess high binding affinities for TAAR1, but not for monoamine autoreceptors. Imaging studies indicate that monoamine reuptake inhibition by amphetamine and trace amines is site specific and depends upon the presence of TAAR1 in the associated monoamine neurons. Colocalization of TAAR1 and the dopamine transporter (DAT) has been visualized in rhesus monkeys, but colocalization of TAAR1 with the norepinephrine transporter (NET) and the serotonin transporter (SERT) has only been evidenced by messenger RNA (mRNA) expression. 
In addition to the neuronal monoamine transporters, amphetamine also inhibits both vesicular monoamine transporters, VMAT1 and VMAT2, as well as SLC1A1, SLC22A3, and SLC22A5. SLC1A1 is excitatory amino acid transporter 3 (EAAT3), a glutamate transporter located in neurons, SLC22A3 is an extraneuronal monoamine transporter that is present in astrocytes, and SLC22A5 is a high-affinity carnitine transporter. Amphetamine is known to strongly induce cocaine- and amphetamine-regulated transcript (CART) gene expression; CART is a neuropeptide involved in feeding behavior, stress, and reward that induces observable increases in neuronal development and survival "in vitro". The CART receptor has yet to be identified, but there is significant evidence that CART binds to a unique receptor. Amphetamine also inhibits monoamine oxidases at very high doses, resulting in less monoamine and trace amine metabolism and consequently higher concentrations of synaptic monoamines. In humans, the only post-synaptic receptor at which amphetamine is known to bind is the 5-HT1A receptor, where it acts as an agonist with micromolar affinity. The full profile of amphetamine's short-term drug effects in humans is mostly derived through increased cellular communication or neurotransmission of dopamine, serotonin, norepinephrine, epinephrine, histamine, CART peptides, endogenous opioids, adrenocorticotropic hormone, corticosteroids, and glutamate, effects which arise through its interactions with TAAR1, the vesicular monoamine transporters, the plasma membrane monoamine transporters (DAT, NET, and SERT), and possibly other biological targets. Dextroamphetamine is a more potent agonist of TAAR1 than levoamphetamine. Consequently, dextroamphetamine produces greater stimulation than levoamphetamine, roughly three to four times more, but levoamphetamine has slightly stronger cardiovascular and peripheral effects. In certain brain regions, amphetamine increases the concentration of dopamine in the synaptic cleft. Amphetamine can enter the presynaptic neuron either through DAT or by diffusing across the neuronal membrane directly. As a consequence of DAT uptake, amphetamine produces competitive reuptake inhibition at the transporter. Upon entering the presynaptic neuron, amphetamine activates TAAR1 which, through protein kinase A (PKA) and protein kinase C (PKC) signaling, causes DAT phosphorylation. Phosphorylation by either protein kinase can result in DAT internalization (non-competitive reuptake inhibition), but PKC-mediated phosphorylation alone induces the reversal of dopamine transport through DAT (i.e., dopamine efflux). Amphetamine is also known to increase intracellular calcium, an effect which is associated with DAT phosphorylation through an unidentified Ca2+/calmodulin-dependent protein kinase (CAMK)-dependent pathway, in turn producing dopamine efflux. Through direct activation of G protein-coupled inwardly-rectifying potassium channels, TAAR1 reduces the firing rate of dopamine neurons, preventing a hyper-dopaminergic state. Amphetamine is also a substrate for the presynaptic vesicular monoamine transporter 2 (VMAT2). Following amphetamine uptake at VMAT2, amphetamine induces the collapse of the vesicular pH gradient, which results in the release of dopamine molecules from synaptic vesicles into the cytosol via dopamine efflux through VMAT2. Subsequently, the cytosolic dopamine molecules are released from the presynaptic neuron into the synaptic cleft via reverse transport at DAT. Similar to dopamine, amphetamine dose-dependently increases the level of synaptic norepinephrine, the direct precursor of epinephrine. 
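The "micromolar affinity" mentioned above for the 5-HT1A receptor has a concrete interpretation in terms of receptor occupancy. The sketch below, not part of the source text, uses the standard single-site (Hill–Langmuir) binding relationship; the Kd and concentration values are illustrative assumptions.

```python
# Illustrative sketch (assumed values, not from the source text): fractional
# receptor occupancy at equilibrium for single-site binding, showing what a
# "micromolar affinity" (hypothetical Kd of 1 uM) implies at various ligand
# concentrations.

def occupancy(conc_M: float, kd_M: float) -> float:
    """Fraction of receptors occupied at equilibrium (one-site binding)."""
    return conc_M / (conc_M + kd_M)

kd = 1e-6  # hypothetical 1 micromolar dissociation constant
for conc in (1e-8, 1e-7, 1e-6, 1e-5):
    print(f"ligand {conc:.0e} M -> occupancy {occupancy(conc, kd):.1%}")
```

Under these assumptions, appreciable occupancy of a micromolar-affinity site requires concentrations approaching the micromolar range, which is why such binding is expected to matter mainly at higher exposures.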
Based upon neuronal expression, amphetamine is thought to affect norepinephrine analogously to dopamine. In other words, amphetamine induces TAAR1-mediated efflux and reuptake inhibition at phosphorylated NET, competitive NET reuptake inhibition, and norepinephrine release from VMAT2. Amphetamine exerts analogous, yet less pronounced, effects on serotonin as on dopamine and norepinephrine. Amphetamine affects serotonin via VMAT2 and, like norepinephrine, is thought to phosphorylate SERT via TAAR1. Like dopamine, amphetamine has low, micromolar affinity at the human 5-HT1A receptor. Acute amphetamine administration in humans increases endogenous opioid release in several brain structures in the reward system. Extracellular levels of glutamate, the primary excitatory neurotransmitter in the brain, have been shown to increase in the striatum following exposure to amphetamine. This increase in extracellular glutamate presumably occurs via the amphetamine-induced internalization of EAAT3, a glutamate reuptake transporter, in dopamine neurons. Amphetamine also induces the selective release of histamine from mast cells and efflux from histaminergic neurons through VMAT2. Acute amphetamine administration can also increase adrenocorticotropic hormone and corticosteroid levels in blood plasma by stimulating the hypothalamic–pituitary–adrenal axis. The oral bioavailability of amphetamine varies with gastrointestinal pH; it is well absorbed from the gut, and bioavailability is typically over 75% for dextroamphetamine. Amphetamine is a weak base with a pKa of 9.9; consequently, when the pH is basic, more of the drug is in its lipid-soluble free base form, and more is absorbed through the lipid-rich cell membranes of the gut epithelium. Conversely, an acidic pH means the drug is predominantly in a water-soluble cationic (salt) form, and less is absorbed. A modest fraction (roughly 15–40%) of the amphetamine circulating in the bloodstream is bound to plasma proteins. Following absorption, amphetamine readily distributes into most tissues in the body, with high concentrations occurring in cerebrospinal fluid and brain tissue. The half-lives of the amphetamine enantiomers differ and vary with urine pH. At normal urine pH, the half-life of dextroamphetamine is roughly 9–11 hours and that of levoamphetamine roughly 11–14 hours. Highly acidic urine will reduce the enantiomer half-lives to 7 hours; highly alkaline urine will increase the half-lives up to 34 hours. The immediate-release and extended-release variants of salts of both isomers reach peak plasma concentrations at 3 hours and 7 hours post-dose, respectively. Amphetamine is eliminated via the kidneys, with roughly 30–40% of the drug being excreted unchanged at normal urinary pH. When the urinary pH is basic, more amphetamine is in its free base form, so less is excreted. When urine pH is abnormal, the urinary recovery of amphetamine may range from a low of 1% to a high of 75%, depending mostly upon whether urine is too basic or acidic, respectively. Following oral administration, amphetamine appears in urine within 3 hours. Roughly 90% of ingested amphetamine is eliminated 3 days after the last oral dose. CYP2D6, dopamine β-hydroxylase (DBH), flavin-containing monooxygenase 3 (FMO3), butyrate-CoA ligase (XM-ligase), and glycine N-acyltransferase (GLYAT) are the enzymes known to metabolize amphetamine or its metabolites in humans. Amphetamine has a variety of excreted metabolic products, including 4-hydroxyamphetamine, 4-hydroxynorephedrine, 4-hydroxyphenylacetone, benzoic acid, hippuric acid, norephedrine, and phenylacetone. Among these metabolites, the active sympathomimetics are 4-hydroxyamphetamine, 4-hydroxynorephedrine, and norephedrine. 
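The half-life figures above imply very different elimination time courses depending on urine pH. The short calculation below, not part of the source text, assumes simple single-compartment first-order kinetics and uses the half-lives quoted above; it is a sketch of the relationship, not a dosing model.

```python
# Illustrative first-order elimination sketch using the half-lives quoted above
# (roughly 7 h in highly acidic urine, up to 34 h in highly alkaline urine).
# Assumes single-compartment, first-order kinetics after a single dose.

def fraction_remaining(hours: float, half_life_h: float) -> float:
    return 0.5 ** (hours / half_life_h)

for label, t_half in [("highly acidic urine", 7.0), ("highly alkaline urine", 34.0)]:
    remaining = fraction_remaining(24, t_half)
    print(f"{label}: t1/2 = {t_half} h -> {1 - remaining:.0%} eliminated after 24 h")
```

Under these assumptions a dose is largely cleared within a day when the urine is acidic but persists far longer when it is alkaline, which is consistent with the wide 1–75% range of urinary recovery noted above.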
The main metabolic pathways involve aromatic para-hydroxylation, aliphatic alpha- and beta-hydroxylation, N-oxidation, N-dealkylation, and deamination. Amphetamine has a very similar structure and function to the endogenous trace amines, which are naturally occurring neurotransmitter molecules produced in the human body and brain. Among this group, the most closely related compounds are phenethylamine, the parent compound of amphetamine, and N-methylphenethylamine, an isomer of amphetamine (i.e., it has an identical molecular formula). In humans, phenethylamine is produced directly from L-phenylalanine by the aromatic amino acid decarboxylase (AADC) enzyme, which converts L-DOPA into dopamine as well. In turn, N-methylphenethylamine is metabolized from phenethylamine by phenylethanolamine N-methyltransferase, the same enzyme that metabolizes norepinephrine into epinephrine. Like amphetamine, both phenethylamine and N-methylphenethylamine regulate monoamine neurotransmission via TAAR1; unlike amphetamine, both of these substances are broken down by monoamine oxidase B, and therefore have a shorter half-life than amphetamine. Amphetamine is a methyl homolog of the mammalian neurotransmitter phenethylamine with the chemical formula C9H13N. The carbon atom adjacent to the primary amine is a stereogenic center, and amphetamine is a racemic 1:1 mixture of two enantiomers. This racemic mixture can be separated into its optical isomers: levoamphetamine and dextroamphetamine. At room temperature, the pure free base of amphetamine is a mobile, colorless, and volatile liquid with a characteristically strong amine odor and an acrid, burning taste. Frequently prepared solid salts of amphetamine include amphetamine aspartate, hydrochloride, phosphate, saccharate, and sulfate, the last of which is the most common amphetamine salt. Amphetamine is also the parent compound of its own structural class, which includes a number of psychoactive derivatives. In organic chemistry, amphetamine is an excellent chiral ligand for the stereoselective synthesis of 1,1'-bi-2-naphthol. The substituted derivatives of amphetamine, or "substituted amphetamines", are a broad range of chemicals that contain amphetamine as a "backbone"; specifically, this chemical class includes derivative compounds that are formed by replacing one or more hydrogen atoms in the amphetamine core structure with substituents. The class includes amphetamine itself, stimulants like methamphetamine, serotonergic empathogens like MDMA, and decongestants like ephedrine, among other subgroups. Since the first preparation was reported in 1887, numerous synthetic routes to amphetamine have been developed. The most common route of both legal and illicit amphetamine synthesis employs a non-metal reduction known as the Leuckart reaction (method 1). In the first step, a reaction between phenylacetone and formamide, either using additional formic acid or formamide itself as a reducing agent, yields N-formylamphetamine. This intermediate is then hydrolyzed using hydrochloric acid, and subsequently basified, extracted with organic solvent, concentrated, and distilled to yield the free base. The free base is then dissolved in an organic solvent, sulfuric acid added, and amphetamine precipitates out as the sulfate salt. A number of chiral resolutions have been developed to separate the two enantiomers of amphetamine. For example, racemic amphetamine can be treated with d-tartaric acid to form a diastereoisomeric salt which is fractionally crystallized to yield dextroamphetamine. 
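The chemical formula quoted above determines amphetamine's molar mass directly. The short calculation below is an illustrative aside, not from the source text, using standard atomic masses.

```python
# Illustrative sketch: molar mass of amphetamine from the formula C9H13N
# quoted above, using standard atomic masses.

ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007}  # g/mol
FORMULA = {"C": 9, "H": 13, "N": 1}

molar_mass = sum(ATOMIC_MASS[element] * count for element, count in FORMULA.items())
print(f"Molar mass of amphetamine (C9H13N): {molar_mass:.2f} g/mol")  # ~135.21 g/mol
```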
Chiral resolution remains the most economical method for obtaining optically pure amphetamine on a large scale. In addition, several enantioselective syntheses of amphetamine have been developed. In one example, optically pure 1-phenylethylamine is condensed with phenylacetone to yield a chiral Schiff base. In the key step, this intermediate is reduced by catalytic hydrogenation with a transfer of chirality to the carbon atom alpha to the amino group. Cleavage of the benzylic amine bond by hydrogenation yields optically pure dextroamphetamine. A large number of alternative synthetic routes to amphetamine have been developed based on classic organic reactions. One example is the Friedel–Crafts alkylation of benzene by allyl chloride to yield beta-chloropropylbenzene, which is then reacted with ammonia to produce racemic amphetamine (method 2). Another example employs the Ritter reaction (method 3). In this route, allylbenzene is reacted with acetonitrile in sulfuric acid to yield an organosulfate which in turn is treated with sodium hydroxide to give amphetamine via an acetamide intermediate. A third route starts with a malonic ester which, through a double alkylation with methyl iodide followed by benzyl chloride, can be converted into 2-methyl-3-phenylpropanoic acid. This synthetic intermediate can be transformed into amphetamine using either a Hofmann or Curtius rearrangement (method 4). A significant number of amphetamine syntheses feature a reduction of a nitro, imine, oxime, or other nitrogen-containing functional group. In one such example, a Knoevenagel condensation of benzaldehyde with nitroethane yields 1-phenyl-2-nitropropene. The double bond and nitro group of this intermediate are reduced using either catalytic hydrogenation or treatment with lithium aluminium hydride (method 5). Another method is the reaction of phenylacetone with ammonia, producing an imine intermediate that is reduced to the primary amine using hydrogen over a palladium catalyst or lithium aluminium hydride (method 6). Amphetamine is frequently measured in urine or blood as part of a drug test for sports, employment, poisoning diagnostics, and forensics. Techniques such as immunoassay, which is the most common form of amphetamine test, may cross-react with a number of sympathomimetic drugs. Chromatographic methods specific for amphetamine are employed to prevent false positive results. Chiral separation techniques may be employed to help distinguish the source of the drug, whether prescription amphetamine, prescription amphetamine prodrugs, other prescription drugs that are metabolized to amphetamine (e.g., selegiline), over-the-counter drug products that contain levomethamphetamine, or illicitly obtained substituted amphetamines. Several prescription drugs produce amphetamine as a metabolite, including benzphetamine, clobenzorex, famprofazone, fenproporex, lisdexamfetamine, mesocarb, methamphetamine, prenylamine, and selegiline, among others. These compounds may produce positive results for amphetamine on drug tests. Amphetamine is generally only detectable by a standard drug test for approximately 24 hours, although a high dose may be detectable for several days. For the assays, a study noted that an enzyme multiplied immunoassay technique (EMIT) assay for amphetamine and methamphetamine may produce more false positives than liquid chromatography–tandem mass spectrometry. Gas chromatography–mass spectrometry (GC–MS) of amphetamine and methamphetamine with a suitable derivatizing agent allows for the detection of methamphetamine in urine. 
GC–MS of amphetamine and methamphetamine with the chiral derivatizing agent Mosher's acid chloride allows for the detection of both dextroamphetamine and dextromethamphetamine in urine. Hence, the latter method may be used on samples that test positive using other methods to help distinguish between the various sources of the drug. Amphetamine was first synthesized in 1887 in Germany by Romanian chemist Lazăr Edeleanu, who named it "phenylisopropylamine"; its stimulant effects remained unknown until 1927, when it was independently resynthesized by Gordon Alles and reported to have sympathomimetic properties. Amphetamine had no medical use until late 1933, when Smith, Kline and French began selling it as an inhaler under the brand name Benzedrine as a decongestant. Benzedrine sulfate was introduced 3 years later and was used to treat a wide variety of medical conditions, including narcolepsy, obesity, low blood pressure, low libido, and chronic pain, among others. During World War II, amphetamine and methamphetamine were used extensively by both the Allied and Axis forces for their stimulant and performance-enhancing effects. As the addictive properties of the drug became known, governments began to place strict controls on the sale of amphetamine. For example, during the early 1970s in the United States, amphetamine became a schedule II controlled substance under the Controlled Substances Act. In spite of strict government controls, amphetamine has been used legally or illicitly by people from a variety of backgrounds, including authors, musicians, mathematicians, and athletes. Amphetamine is still illegally synthesized today in clandestine labs and sold on the black market, primarily in European countries. Among European Union (EU) member states, 1.2 million young adults used illicit amphetamine or methamphetamine in 2013. During 2012, approximately 5.9 metric tons of illicit amphetamine were seized within EU member states; the "street price" of illicit amphetamine within the EU varied considerably from country to country during the same period. Outside Europe, the illicit market for amphetamine is much smaller than the market for methamphetamine and MDMA. As a result of the United Nations 1971 Convention on Psychotropic Substances, amphetamine became a schedule II controlled substance, as defined in the treaty, in all 183 state parties. Consequently, it is heavily regulated in most countries. Some countries, such as South Korea and Japan, have banned substituted amphetamines even for medical use. In other nations, such as Canada (schedule I drug), the Netherlands (List I drug), the United States (schedule II drug), Australia (schedule 8), Thailand (category 1 narcotic), and the United Kingdom (class B drug), amphetamine is in a restrictive national drug schedule that allows for its use as a medical treatment. Several currently prescribed amphetamine formulations contain both enantiomers, including Adderall, Adderall XR, Mydayis, Adzenys XR-ODT, Dyanavel XR, and Evekeo, the last of which contains racemic amphetamine sulfate. Amphetamine is also prescribed in enantiopure and prodrug form as dextroamphetamine and lisdexamfetamine, respectively. Lisdexamfetamine is structurally different from amphetamine and is inactive until it metabolizes into dextroamphetamine. The free base of racemic amphetamine was previously available as Benzedrine, Psychedrine, and Sympatedrine. Levoamphetamine was previously available as Cydril. Many current amphetamine pharmaceuticals are salts due to the comparatively high volatility of the free base. 
However, oral suspension and orally disintegrating tablet (ODT) dosage forms composed of the free base were introduced in 2015 and 2016, respectively. 
Ant Ants are eusocial insects of the family Formicidae and, along with the related wasps and bees, belong to the order Hymenoptera. Ants evolved from wasp-like ancestors in the Cretaceous period, about 140 million years ago, and diversified after the rise of flowering plants. More than 12,500 of an estimated total of 22,000 species have been classified. They are easily identified by their elbowed antennae and the distinctive node-like structure that forms their slender waists. Ants form colonies that range in size from a few dozen predatory individuals living in small natural cavities to highly organised colonies that may occupy large territories and consist of millions of individuals. Larger colonies consist of various castes of sterile, wingless females, most of which are workers (ergates), as well as soldiers (dinergates) and other specialised groups. Nearly all ant colonies also have some fertile males called "drones" (aner) and one or more fertile females called "queens" (gynes). The colonies are described as superorganisms because the ants appear to operate as a unified entity, collectively working together to support the colony. Ants have colonised almost every landmass on Earth. The only places lacking indigenous ants are Antarctica and a few remote or inhospitable islands. Ants thrive in most ecosystems and may form 15–25% of the terrestrial animal biomass. Their success in so many environments has been attributed to their social organisation and their ability to modify habitats, tap resources, and defend themselves. Their long co-evolution with other species has led to mimetic, commensal, parasitic, and mutualistic relationships. Ant societies have division of labour, communication between individuals, and an ability to solve complex problems. These parallels with human societies have long been an inspiration and subject of study. Many human cultures make use of ants in cuisine, medication, and rituals. Some species are valued in their role as biological pest control agents. Their ability to exploit resources may bring ants into conflict with humans, however, as they can damage crops and invade buildings. Some species, such as the red imported fire ant ("Solenopsis invicta"), are regarded as invasive species, establishing themselves in areas where they have been introduced accidentally. The word "ant" and its chiefly dialectal form "emmet" come from Middle English forms descended from the Old English "ǣmette"; these are related to words in dialectal Dutch and to the Old High German "āmeiza", from which comes the modern German "Ameise". All of these words come from a West Germanic form whose original meaning was "the biter" (from a Proto-Germanic prefix meaning "off, away" combined with a root meaning "cut"). The family name Formicidae is derived from the Latin "formīca" ("ant"), from which the words in other Romance languages, such as the Portuguese "formiga", Italian "formica", Spanish "hormiga", Romanian "furnică", and French "fourmi", are derived. It has been hypothesised that a Proto-Indo-European word *morwi- was used, cf. Sanskrit vamrah, Latin formīca, Greek μύρμηξ "mýrmēx", Old Church Slavonic "mraviji", Old Irish "moirb", Old Norse "maurr", Dutch "mier". The family Formicidae belongs to the order Hymenoptera, which also includes sawflies, bees, and wasps. 
Ants evolved from a lineage within the aculeate wasps, and a 2013 study suggests that they are a sister group of the Apoidea. In 1966, E. O. Wilson and his colleagues identified the fossil remains of an ant ("Sphecomyrma") that lived in the Cretaceous period. The specimen, trapped in amber dating back to around 92 million years ago, has features found in some wasps, but not found in modern ants. "Sphecomyrma" was possibly a ground forager, while "Haidomyrmex" and "Haidomyrmodes", related genera in subfamily Sphecomyrminae, are reconstructed as active arboreal predators. Older ants in the genus "Sphecomyrmodes" have been found in 99 million-year-old amber from Myanmar. A 2006 study suggested that ants arose tens of millions of years earlier than previously thought, up to 168 million years ago. After the rise of flowering plants about 100 million years ago they diversified and assumed ecological dominance around 60 million years ago. Some groups, such as the Leptanillinae and Martialinae, are suggested to have diversified from early primitive ants that were likely to have been predators underneath the surface of the soil. During the Cretaceous period, a few species of primitive ants ranged widely on the Laurasian supercontinent (the Northern Hemisphere). They were scarce in comparison to the populations of other insects, representing only about 1% of the entire insect population. Ants became dominant after adaptive radiation at the beginning of the Paleogene period. By the Oligocene and Miocene, ants had come to represent 20–40% of all insects found in major fossil deposits. Of the ant genera that lived in the Eocene epoch, around one in 10 survive to the present. Genera surviving today comprise 56% of the genera in Baltic amber fossils (early Oligocene), and 92% of the genera in Dominican amber fossils (apparently early Miocene). Termites, although sometimes called 'white ants', are not ants. They belong to the sub-order Isoptera within the order Blattodea. Termites are more closely related to cockroaches and mantids. Termites are eusocial, but differ greatly in the genetics of reproduction. The similarity of their social structure to that of ants is attributed to convergent evolution. Velvet ants look like large ants, but are wingless female wasps. Ants are found on all continents except Antarctica, and only a few large islands, such as Greenland, Iceland, parts of Polynesia and the Hawaiian Islands, lack native ant species. Ants occupy a wide range of ecological niches and exploit many different food resources as direct or indirect herbivores, predators and scavengers. Most ant species are omnivorous generalists, but a few are specialist feeders. Their ecological dominance is demonstrated by their biomass: ants are estimated to contribute 15–20% of terrestrial animal biomass on average (and nearly 25% in the tropics), exceeding that of the vertebrates. Ants range in size from about 0.75 to 52 millimetres, the largest species being the fossil "Titanomyrma giganteum", the queen of which was 6 centimetres long with a wingspan of 15 centimetres. Ants vary in colour; most ants are red or black, but a few species are green and some tropical species have a metallic lustre. More than 12,000 species are currently known (with upper estimates of the potential existence of about 22,000), with the greatest diversity in the tropics. Taxonomic studies continue to resolve the classification and systematics of ants. 
Online databases of ant species, including AntBase and the Hymenoptera Name Server, help to keep track of the known and newly described species. The relative ease with which ants may be sampled and studied in ecosystems has made them useful as indicator species in biodiversity studies. Ants are distinct in their morphology from other insects in having elbowed antennae, metapleural glands, and a strong constriction of their second abdominal segment into a node-like petiole. The head, mesosoma, and metasoma are the three distinct body segments. The petiole forms a narrow waist between their mesosoma (thorax plus the first abdominal segment, which is fused to it) and gaster (abdomen less the abdominal segments in the petiole). The petiole may be formed by one or two nodes (the second alone, or the second and third abdominal segments). Like other insects, ants have an exoskeleton, an external covering that provides a protective casing around the body and a point of attachment for muscles, in contrast to the internal skeletons of humans and other vertebrates. Insects do not have lungs; oxygen and other gases, such as carbon dioxide, pass through their exoskeleton via tiny valves called spiracles. Insects also lack closed blood vessels; instead, they have a long, thin, perforated tube along the top of the body (called the "dorsal aorta") that functions like a heart, and pumps haemolymph toward the head, thus driving the circulation of the internal fluids. The nervous system consists of a ventral nerve cord that runs the length of the body, with several ganglia and branches along the way reaching into the extremities of the appendages. An ant's head contains many sensory organs. Like most insects, ants have compound eyes made from numerous tiny lenses attached together. Ant eyes are good for acute movement detection, but do not offer a high resolution image. They also have three small ocelli (simple eyes) on the top of the head that detect light levels and polarization. Compared to vertebrates, most ants have poor-to-mediocre eyesight and a few subterranean species are completely blind. However, some ants, such as Australia's bulldog ant, have excellent vision and are capable of discriminating the distance and size of objects moving nearly a metre away. Two antennae ("feelers") are attached to the head; these organs detect chemicals, air currents, and vibrations; they also are used to transmit and receive signals through touch. The head has two strong jaws, the mandibles, used to carry food, manipulate objects, construct nests, and for defence. In some species, a small pocket (infrabuccal chamber) inside the mouth stores food, so it may be passed to other ants or their larvae. Both the legs and wings of the ant are attached to the mesosoma ("thorax"). The legs terminate in a hooked claw which allows them to hook on and climb surfaces. Only reproductive ants, queens, and males, have wings. Queens shed their wings after the nuptial flight, leaving visible stubs, a distinguishing feature of queens. In a few species, wingless queens (ergatoids) and males occur. The metasoma (the "abdomen") of the ant houses important internal organs, including those of the reproductive, respiratory (tracheae), and excretory systems. Workers of many species have their egg-laying structures modified into stings that are used for subduing prey and defending their nests. In the colonies of a few ant species, there are physical castes—workers in distinct size-classes, called minor, median, and major ergates. 
Often, the larger ants have disproportionately larger heads, and correspondingly stronger mandibles. These are known as macrergates while smaller workers are known as micrergates. Although formally known as dinergates, such individuals are sometimes called "soldier" ants because their stronger mandibles make them more effective in fighting, although they still are workers and their "duties" typically do not vary greatly from the minor or median workers. In a few species, the median workers are absent, creating a sharp divide between the minors and majors. Weaver ants, for example, have a distinct bimodal size distribution. Some other species show continuous variation in the size of workers. The smallest and largest workers in "Pheidologeton diversus" show nearly a 500-fold difference in their dry-weights. Workers cannot mate; however, because of the haplodiploid sex-determination system in ants, workers of a number of species can lay unfertilised eggs that become fully fertile, haploid males. The role of workers may change with their age and in some species, such as honeypot ants, young workers are fed until their gasters are distended, and act as living food storage vessels. These food storage workers are called "repletes". For instance, these replete workers develop in the North American honeypot ant "Myrmecocystus mexicanus". Usually the largest workers in the colony develop into repletes; and, if repletes are removed from the colony, other workers become repletes, demonstrating the flexibility of this particular polymorphism. This polymorphism in morphology and behaviour of workers initially was thought to be determined by environmental factors such as nutrition and hormones that led to different developmental paths; however, genetic differences between worker castes have been noted in "Acromyrmex" sp. These polymorphisms are caused by relatively small genetic changes; differences in a single gene of "Solenopsis invicta" can decide whether the colony will have single or multiple queens. The Australian jack jumper ant ("Myrmecia pilosula") has only a single pair of chromosomes (with the males having just one chromosome as they are haploid), the lowest number known for any animal, making it an interesting subject for studies in the genetics and developmental biology of social insects. The life of an ant starts from an egg. If the egg is fertilised, the progeny will be female diploid; if not, it will be male haploid. Ants develop by complete metamorphosis with the larva stages passing through a pupal stage before emerging as an adult. The larva is largely immobile and is fed and cared for by workers. Food is given to the larvae by trophallaxis, a process in which an ant regurgitates liquid food held in its crop. This is also how adults share food, stored in the "social stomach". Larvae, especially in the later stages, may also be provided solid food, such as trophic eggs, pieces of prey, and seeds brought by workers. The larvae grow through a series of four or five moults and enter the pupal stage. The pupa has the appendages free and not fused to the body as in a butterfly pupa. The differentiation into queens and workers (which are both female), and different castes of workers, is influenced in some species by the nutrition the larvae obtain. Genetic influences and the control of gene expression by the developmental environment are complex and the determination of caste continues to be a subject of research. 
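The haplodiploid sex-determination system described above, in which fertilised eggs become diploid females and unfertilised eggs become haploid males, has a well-known genetic consequence: full sisters are unusually closely related, a point relevant to the kin-selection studies mentioned near the end of this article. The toy Monte Carlo sketch below is illustrative only and assumes a single locus and a singly mated queen; none of it comes from the source text.

```python
import random

# Toy illustration of haplodiploidy (assumptions, not from the source text):
# the mother is diploid (two alleles per locus), the father is haploid (one
# allele). A fertilised egg (daughter) receives one random maternal allele plus
# the paternal allele; an unfertilised egg (son) receives only a maternal allele.

def daughter(mother_alleles, father_allele):
    return (random.choice(mother_alleles), father_allele)

def shared_fraction(genotype_a, genotype_b):
    """Fraction of allele slots identical between two diploid genotypes."""
    return (int(genotype_a[0] == genotype_b[0]) + int(genotype_a[1] == genotype_b[1])) / 2

random.seed(0)
mother = ("M1", "M2")   # hypothetical maternal alleles
father = "P1"            # hypothetical paternal allele (haploid father)

trials = 100_000
total = sum(
    shared_fraction(daughter(mother, father), daughter(mother, father))
    for _ in range(trials)
)
print(f"average allele sharing between full sisters: {total / trials:.3f}")  # ~0.75
```

Under these simplifying assumptions, sisters always share the paternal allele and share a maternal allele half the time, giving the familiar three-quarters relatedness, higher than the one-half a mother shares with her own daughters.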
Winged male ants, called drones, emerge from pupae along with the usually winged breeding females. Some species, such as army ants, have wingless queens. Larvae and pupae need to be kept at fairly constant temperatures to ensure proper development, and so often, are moved around among the various brood chambers within the colony. A new ergate spends the first few days of its adult life caring for the queen and young. She then graduates to digging and other nest work, and later to defending the nest and foraging. These changes are sometimes fairly sudden, and define what are called temporal castes. An explanation for the sequence is suggested by the high casualties involved in foraging, making it an acceptable risk only for ants who are older and are likely to die soon of natural causes. Ant colonies can be long-lived. The queens can live for up to 30 years, and workers live from 1 to 3 years. Males, however, are more transitory, being quite short-lived and surviving for only a few weeks. Ant queens are estimated to live 100 times as long as solitary insects of a similar size. Ants are active all year long in the tropics, but, in cooler regions, they survive the winter in a state of dormancy known as hibernation. The forms of inactivity are varied and some temperate species have larvae going into the inactive state (diapause), while in others, the adults alone pass the winter in a state of reduced activity. A wide range of reproductive strategies have been noted in ant species. Females of many species are known to be capable of reproducing asexually through thelytokous parthenogenesis. Secretions from the male accessory glands in some species can plug the female genital opening and prevent females from re-mating. Most ant species have a system in which only the queen and breeding females have the ability to mate. Contrary to popular belief, some ant nests have multiple queens, while others may exist without queens. Workers with the ability to reproduce are called "gamergates" and colonies that lack queens are then called gamergate colonies; colonies with queens are said to be queen-right. Drones can also mate with existing queens by entering a foreign colony. When the drone is initially attacked by the workers, it releases a mating pheromone. If recognized as a mate, it will be carried to the queen to mate. Males may also patrol the nest and fight others by grabbing them with their mandibles, piercing their exoskeleton and then marking them with a pheromone. The marked male is interpreted as an invader by worker ants and is killed. Most ants are univoltine, producing a new generation each year. During the species-specific breeding period, new reproductives, females, and winged males leave the colony in what is called a nuptial flight. The nuptial flight usually takes place in the late spring or early summer when the weather is hot and humid. Heat makes flying easier and freshly fallen rain makes the ground softer for mated queens to dig nests. Males typically take flight before the females. Males then use visual cues to find a common mating ground, for example, a landmark such as a pine tree to which other males in the area converge. Males secrete a mating pheromone that females follow. Males will mount females in the air, but the actual mating process usually takes place on the ground. Females of some species mate with just one male but in others they may mate with as many as ten or more different males, storing the sperm in their spermathecae. 
Mated females then seek a suitable place to begin a colony. There, they break off their wings and begin to lay and care for eggs. The females can selectively fertilise future eggs with the sperm stored to produce diploid workers or lay unfertilized haploid eggs to produce drones. The first workers to hatch are known as nanitics, and are weaker and smaller than later workers, but they begin to serve the colony immediately. They enlarge the nest, forage for food, and care for the other eggs. Species that have multiple queens may have a queen leaving the nest along with some workers to found a colony at a new site, a process akin to swarming in honeybees. Ants communicate with each other using pheromones, sounds, and touch. The use of pheromones as chemical signals is more developed in ants, such as the red harvester ant, than in other hymenopteran groups. Like other insects, ants perceive smells with their long, thin, and mobile antennae. The paired antennae provide information about the direction and intensity of scents. Since most ants live on the ground, they use the soil surface to leave pheromone trails that may be followed by other ants. In species that forage in groups, a forager that finds food marks a trail on the way back to the colony; this trail is followed by other ants, these ants then reinforce the trail when they head back with food to the colony. When the food source is exhausted, no new trails are marked by returning ants and the scent slowly dissipates. This behaviour helps ants deal with changes in their environment. For instance, when an established path to a food source is blocked by an obstacle, the foragers leave the path to explore new routes. If an ant is successful, it leaves a new trail marking the shortest route on its return. Successful trails are followed by more ants, reinforcing better routes and gradually identifying the best path. Ants use pheromones for more than just making trails. A crushed ant emits an alarm pheromone that sends nearby ants into an attack frenzy and attracts more ants from farther away. Several ant species even use "propaganda pheromones" to confuse enemy ants and make them fight among themselves. Pheromones are produced by a wide range of structures including Dufour's glands, poison glands and glands on the hindgut, pygidium, rectum, sternum, and hind tibia. Pheromones also are exchanged, mixed with food, and passed by trophallaxis, transferring information within the colony. This allows other ants to detect what task group (e.g., foraging or nest maintenance) other colony members belong to. In ant species with queen castes, when the dominant queen stops producing a specific pheromone, workers begin to raise new queens in the colony. Some ants produce sounds by stridulation, using the gaster segments and their mandibles. Sounds may be used to communicate with colony members or with other species. Ants attack and defend themselves by biting and, in many species, by stinging, often injecting or spraying chemicals, such as formic acid in the case of formicine ants, alkaloids and piperidines in fire ants, and a variety of protein components in other ants. Bullet ants ("Paraponera"), located in Central and South America, are considered to have the most painful sting of any insect, although it is usually not fatal to humans. This sting is given the highest rating on the Schmidt Sting Pain Index. The sting of jack jumper ants can be fatal, and an antivenom has been developed for it. 
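The trail-laying behaviour described above, in which successful foragers deposit pheromone, other ants preferentially follow stronger trails, and unused trails evaporate, is the idea behind the ant colony optimization algorithms mentioned near the end of this article. The sketch below is a minimal, illustrative simulation of the classic two-path situation; every parameter is an arbitrary assumption rather than a measured value.

```python
import random

# Minimal, illustrative pheromone-trail simulation (assumed parameters, not from
# the source text): ants repeatedly choose between a short and a long path to
# food, deposit pheromone when they complete a trip (more per unit time on the
# shorter path), and pheromone evaporates each round, so the shorter path is
# gradually reinforced.

random.seed(1)
lengths = {"short": 1.0, "long": 2.0}      # relative path lengths
pheromone = {"short": 1.0, "long": 1.0}    # start with no preference
EVAPORATION = 0.02                          # fraction of pheromone lost per round
DEPOSIT = 1.0                               # pheromone laid per completed trip

for _ in range(200):                        # 200 rounds of foraging
    for _ in range(10):                     # 10 ants leave the nest each round
        total = pheromone["short"] + pheromone["long"]
        path = "short" if random.random() < pheromone["short"] / total else "long"
        pheromone[path] += DEPOSIT / lengths[path]
    for p in pheromone:                     # unused trails slowly fade
        pheromone[p] *= (1.0 - EVAPORATION)

share = pheromone["short"] / (pheromone["short"] + pheromone["long"])
print(f"pheromone share on the short path after 200 rounds: {share:.0%}")
```

The positive feedback between trail strength and the number of ants choosing a route is what lets the colony converge on the shorter path without any individual ant comparing the routes directly.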
Fire ants, "Solenopsis" spp., are unique in having a venom sac containing piperidine alkaloids. Their stings are painful and can be dangerous to hypersensitive people. Trap-jaw ants of the genus "Odontomachus" are equipped with mandibles called trap-jaws, which snap shut faster than any other predatory appendages within the animal kingdom. One study of "Odontomachus bauri" recorded peak speeds of between , with the jaws closing within 130 microseconds on average. The ants were also observed to use their jaws as a catapult to eject intruders or fling themselves backward to escape a threat. Before striking, the ant opens its mandibles extremely widely and locks them in this position by an internal mechanism. Energy is stored in a thick band of muscle and explosively released when triggered by the stimulation of sensory organs resembling hairs on the inside of the mandibles. The mandibles also permit slow and fine movements for other tasks. Trap-jaws also are seen in the following genera: "Anochetus", "Orectognathus", and "Strumigenys", plus some members of the Dacetini tribe, which are viewed as examples of convergent evolution. A Malaysian species of ant in the "Camponotus" "cylindricus" group has enlarged mandibular glands that extend into their gaster. If combat takes a turn for the worse, a worker may perform a final act of suicidal altruism by rupturing the membrane of its gaster, causing the content of its mandibular glands to burst from the anterior region of its head, spraying a poisonous, corrosive secretion containing acetophenones and other chemicals that immobilise small insect attackers. The worker subsequently dies. Suicidal defences by workers are also noted in a Brazilian ant, "Forelius pusillus", where a small group of ants leaves the security of the nest after sealing the entrance from the outside each evening. In addition to defence against predators, ants need to protect their colonies from pathogens. Some worker ants maintain the hygiene of the colony and their activities include undertaking or "necrophory", the disposal of dead nest-mates. Oleic acid has been identified as the compound released from dead ants that triggers necrophoric behaviour in "Atta mexicana" while workers of "Linepithema humile" react to the absence of characteristic chemicals (dolichodial and iridomyrmecin) present on the cuticle of their living nestmates to trigger similar behaviour. Nests may be protected from physical threats such as flooding and overheating by elaborate nest architecture. Workers of "Cataulacus muticus", an arboreal species that lives in plant hollows, respond to flooding by drinking water inside the nest, and excreting it outside. "Camponotus anderseni", which nests in the cavities of wood in mangrove habitats, deals with submergence under water by switching to anaerobic respiration. Many animals can learn behaviours by imitation, but ants may be the only group apart from mammals where interactive teaching has been observed. A knowledgeable forager of "Temnothorax albipennis" will lead a naive nest-mate to newly discovered food by the process of tandem running. The follower obtains knowledge through its leading tutor. The leader is acutely sensitive to the progress of the follower and slows down when the follower lags and speeds up when the follower gets too close. Controlled experiments with colonies of "Cerapachys biroi" suggest that an individual may choose nest roles based on her previous experience. 
An entire generation of identical workers was divided into two groups whose outcome in food foraging was controlled. One group was continually rewarded with prey, while it was made certain that the other failed. As a result, members of the successful group intensified their foraging attempts while the unsuccessful group ventured out fewer and fewer times. A month later, the successful foragers continued in their role while the others had moved to specialise in brood care. Complex nests are built by many ant species, but other species are nomadic and do not build permanent structures. Ants may form subterranean nests or build them on trees. These nests may be found in the ground, under stones or logs, inside logs, hollow stems, or even acorns. The materials used for construction include soil and plant matter, and ants carefully select their nest sites; "Temnothorax albipennis" will avoid sites with dead ants, as these may indicate the presence of pests or disease. They are quick to abandon established nests at the first sign of threats. The army ants of South America, such as the "Eciton burchellii" species, and the driver ants of Africa do not build permanent nests, but instead, alternate between nomadism and stages where the workers form a temporary nest (bivouac) from their own bodies, by holding each other together. Weaver ant ("Oecophylla" spp.) workers build nests in trees by attaching leaves together, first pulling them together with bridges of workers and then inducing their larvae to produce silk as they are moved along the leaf edges. Similar forms of nest construction are seen in some species of "Polyrhachis". "Formica polyctena", among other ant species, constructs nests that maintain a relatively constant interior temperature that aids in the development of larvae. The ants maintain the nest temperature by choosing the location, nest materials, controlling ventilation and maintaining the heat from solar radiation, worker activity and metabolism, and in some moist nests, microbial activity in the nest materials. Some ant species, such as those that use natural cavities, can be opportunistic and make use of the controlled micro-climate provided inside human dwellings and other artificial structures to house their colonies and nest structures. Most ants are generalist predators, scavengers, and indirect herbivores, but a few have evolved specialised ways of obtaining nutrition. It is believed that many ant species that engage in indirect herbivory rely on specialized symbiosis with their gut microbes to upgrade the nutritional value of the food they collect and allow them to survive in nitrogen poor regions, such as rainforest canopies. Leafcutter ants ("Atta" and "Acromyrmex") feed exclusively on a fungus that grows only within their colonies. They continually collect leaves which are taken to the colony, cut into tiny pieces and placed in fungal gardens. Ergates specialise in related tasks according to their sizes. The largest ants cut stalks, smaller workers chew the leaves and the smallest tend the fungus. Leafcutter ants are sensitive enough to recognise the reaction of the fungus to different plant material, apparently detecting chemical signals from the fungus. If a particular type of leaf is found to be toxic to the fungus, the colony will no longer collect it. The ants feed on structures produced by the fungi called "gongylidia". Symbiotic bacteria on the exterior surface of the ants produce antibiotics that kill bacteria introduced into the nest that may harm the fungi. 
Foraging ants travel distances of up to about 200 metres from their nest, and scent trails allow them to find their way back even in the dark. In hot and arid regions, day-foraging ants face death by desiccation, so the ability to find the shortest route back to the nest reduces that risk. Diurnal desert ants of the genus "Cataglyphis", such as the Sahara desert ant, navigate by keeping track of direction as well as distance travelled. Distances travelled are measured using an internal pedometer that keeps count of the steps taken and also by evaluating the movement of objects in their visual field (optical flow). Directions are measured using the position of the sun. They integrate this information to find the shortest route back to their nest. Like all ants, they can also make use of visual landmarks when available as well as olfactory and tactile cues to navigate. Some species of ant are able to use the Earth's magnetic field for navigation. The compound eyes of ants have specialised cells that detect polarised light from the Sun, which is used to determine direction. These polarization detectors are sensitive in the ultraviolet region of the light spectrum. In some army ant species, a group of foragers who become separated from the main column may sometimes turn back on themselves and form a circular ant mill. The workers may then run around continuously until they die of exhaustion. The female worker ants do not have wings, and reproductive females lose their wings after their mating flights in order to begin their colonies. Therefore, unlike their wasp ancestors, most ants travel by walking. Some species are capable of leaping. For example, Jerdon's jumping ant ("Harpegnathos saltator") is able to jump by synchronising the action of its mid and hind pairs of legs. There are several species of gliding ant, including "Cephalotes atratus"; this may be a common trait among most arboreal ants. Ants with this ability are able to control the direction of their descent while falling. Other species of ants can form chains to bridge gaps over water, underground, or through spaces in vegetation. Some species also form floating rafts that help them survive floods. These rafts may also have a role in allowing ants to colonise islands. "Polyrhachis sokolova", a species of ant found in Australian mangrove swamps, can swim and live in underwater nests. Since they lack gills, they go to trapped pockets of air in the submerged nests to breathe. Not all ants have the same kind of societies. The Australian bulldog ants are among the biggest and most basal of ants. Like virtually all ants, they are eusocial, but their social behaviour is poorly developed compared to other species. Each individual hunts alone, using her large eyes instead of chemical senses to find prey. Some species (such as "Tetramorium caespitum") attack and take over neighbouring ant colonies. Others are less expansionist, but just as aggressive; they invade colonies to steal eggs or larvae, which they either eat or raise as workers or slaves. Extreme specialists among these slave-raiding ants, such as the Amazon ants, are incapable of feeding themselves and need captured workers to survive. Workers of the enslaved species "Temnothorax" have evolved a counter strategy, destroying just the female pupae of the slave-making "Protomognathus americanus", but sparing the males (who don't take part in slave-raiding as adults). Ants identify kin and nestmates through their scent, which comes from hydrocarbon-laced secretions that coat their exoskeletons. 
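The desert-ant navigation described above, counting steps for distance and reading direction from the sun, amounts to vector path integration: the ant keeps a running sum of its displacement and can head home by following the negated total. The sketch below illustrates that bookkeeping; the outbound steps and function names are made-up examples, not measurements from the source.

```python
import math

# Illustrative sketch of path integration ("dead reckoning") as described for
# Cataglyphis desert ants: accumulate each step as a vector (step length from a
# "pedometer", heading from a sun compass), then return home along the negated
# sum. The outbound steps below are invented example data.

def integrate_path(steps):
    """steps: iterable of (distance, heading_degrees); returns net (x, y) displacement."""
    x = y = 0.0
    for dist, heading_deg in steps:
        heading = math.radians(heading_deg)
        x += dist * math.cos(heading)
        y += dist * math.sin(heading)
    return x, y

outbound = [(3.0, 0), (2.0, 90), (4.0, 45), (1.5, 180)]  # (metres, degrees)
x, y = integrate_path(outbound)

home_distance = math.hypot(x, y)
home_heading = (math.degrees(math.atan2(-y, -x)) + 360) % 360
print(f"net displacement: ({x:.2f}, {y:.2f}) m")
print(f"home vector: {home_distance:.2f} m at a heading of {home_heading:.0f} degrees")
```

Because only the running total is stored, the homeward bearing is available at every moment of the trip, which is why a displaced ant runs the "correct" home vector from the wrong starting point, a classic observation in this literature.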
If an ant is separated from its original colony, it will eventually lose the colony scent. Any ant that enters a colony without a matching scent will be attacked. Also, the reason why two separate colonies of ants will attack each other even if they are of the same species is because the genes responsible for pheromone production are different between them. The Argentine ant, however, does not have this characteristic, due to lack of genetic diversity, and has become a global pest because of it. Parasitic ant species enter the colonies of host ants and establish themselves as social parasites; species such as "Strumigenys xenos" are entirely parasitic and do not have workers, but instead, rely on the food gathered by their "Strumigenys perplexa" hosts. This form of parasitism is seen across many ant genera, but the parasitic ant is usually a species that is closely related to its host. A variety of methods are employed to enter the nest of the host ant. A parasitic queen may enter the host nest before the first brood has hatched, establishing herself prior to development of a colony scent. Other species use pheromones to confuse the host ants or to trick them into carrying the parasitic queen into the nest. Some simply fight their way into the nest. A conflict between the sexes of a species is seen in some species of ants with these reproducers apparently competing to produce offspring that are as closely related to them as possible. The most extreme form involves the production of clonal offspring. An extreme of sexual conflict is seen in "Wasmannia auropunctata", where the queens produce diploid daughters by thelytokous parthenogenesis and males produce clones by a process whereby a diploid egg loses its maternal contribution to produce haploid males who are clones of the father. Ants form symbiotic associations with a range of species, including other ant species, other insects, plants, and fungi. They also are preyed on by many animals and even certain fungi. Some arthropod species spend part of their lives within ant nests, either preying on ants, their larvae, and eggs, consuming the food stores of the ants, or avoiding predators. These inquilines may bear a close resemblance to ants. The nature of this ant mimicry (myrmecomorphy) varies, with some cases involving Batesian mimicry, where the mimic reduces the risk of predation. Others show Wasmannian mimicry, a form of mimicry seen only in inquilines. Aphids and other hemipteran insects secrete a sweet liquid called honeydew, when they feed on plant sap. The sugars in honeydew are a high-energy food source, which many ant species collect. In some cases, the aphids secrete the honeydew in response to ants tapping them with their antennae. The ants in turn keep predators away from the aphids and will move them from one feeding location to another. When migrating to a new area, many colonies will take the aphids with them, to ensure a continued supply of honeydew. Ants also tend mealybugs to harvest their honeydew. Mealybugs may become a serious pest of pineapples if ants are present to protect mealybugs from their natural enemies. Myrmecophilous (ant-loving) caterpillars of the butterfly family Lycaenidae (e.g., blues, coppers, or hairstreaks) are herded by the ants, led to feeding areas in the daytime, and brought inside the ants' nest at night. The caterpillars have a gland which secretes honeydew when the ants massage them. Some caterpillars produce vibrations and sounds that are perceived by the ants. 
A similar adaptation can be seen in Grizzled skipper butterflies that emit vibrations by expanding their wings in order to communicate with ants, which are natural predators of these butterflies. Other caterpillars have evolved from ant-loving to ant-eating: these myrmecophagous caterpillars secrete a pheromone that makes the ants act as if the caterpillar is one of their own larvae. The caterpillar is then taken into the ant nest where it feeds on the ant larvae. A number of specialized bacteria have been found as endosymbionts in ant guts. Some of the dominant bacteria belong to the order Rhizobiales, whose members are known for being nitrogen-fixing symbionts in legumes, but the species found in ants lack the ability to fix nitrogen. Fungus-growing ants that make up the tribe Attini, including leafcutter ants, cultivate certain species of fungus in the genera "Leucoagaricus" or "Leucocoprinus" of the family Agaricaceae. In this ant-fungus mutualism, both species depend on each other for survival. The ant "Allomerus decemarticulatus" has evolved a three-way association with the host plant, "Hirtella physophora" (Chrysobalanaceae), and a sticky fungus which is used to trap its insect prey. Lemon ants make devil's gardens by killing surrounding plants with their stings and leaving a pure patch of lemon ant trees ("Duroia hirsuta"). This modification of the forest provides the ants with more nesting sites inside the stems of the "Duroia" trees. Although some ants obtain nectar from flowers, pollination by ants is somewhat rare. Some plants have special nectar exuding structures, extrafloral nectaries, that provide food for ants, which in turn protect the plant from more damaging herbivorous insects. Species such as the bullhorn acacia ("Acacia cornigera") in Central America have hollow thorns that house colonies of stinging ants ("Pseudomyrmex ferruginea") who defend the tree against insects, browsing mammals, and epiphytic vines. Isotopic labelling studies suggest that plants also obtain nitrogen from the ants. In return, the ants obtain food from protein- and lipid-rich Beltian bodies. In Fiji, "Philidris nagasau" (Dolichoderinae) are known to selectively grow species of epiphytic "Squamellaria" (Rubiaceae) which produce large domatia inside which the ant colonies nest. The ants plant the seeds, and the domatia of young seedlings are immediately occupied and the ant faeces in them contribute to rapid growth. Similar dispersal associations are found with other dolichoderines in the region as well. Another example of this type of ectosymbiosis comes from the "Macaranga" tree, which has stems adapted to house colonies of "Crematogaster" ants. Many tropical tree species have seeds that are dispersed by ants. Seed dispersal by ants or myrmecochory is widespread and new estimates suggest that nearly 9% of all plant species may have such ant associations. Some plants in fire-prone grassland systems are particularly dependent on ants for their survival and dispersal as the seeds are transported to safety below the ground. Many ant-dispersed seeds have special external structures, elaiosomes, that are sought after by ants as food. A convergence, possibly a form of mimicry, is seen in the eggs of stick insects. They have an edible elaiosome-like structure and are taken into the ant nest where the young hatch. Most ants are predatory and some prey on and obtain food from other social insects including other ants. 
Some species specialise in preying on termites ("Megaponera" and "Termitopone") while a few Cerapachyinae prey on other ants. Some termites, including "Nasutitermes corniger", form associations with certain ant species to keep away predatory ant species. The tropical wasp "Mischocyttarus drewseni" coats the pedicel of its nest with an ant-repellent chemical. It is suggested that many tropical wasps may build their nests in trees and cover them to protect themselves from ants. Other wasps, such as "A. multipicta", defend against ants by blasting them off the nest with bursts of wing buzzing. Stingless bees ("Trigona" and "Melipona") use chemical defences against ants. Flies in the Old World genus "Bengalia" (Calliphoridae) prey on ants and are kleptoparasites, snatching prey or brood from the mandibles of adult ants. Wingless and legless females of the Malaysian phorid fly ("Vestigipoda myrmolarvoidea") live in the nests of ants of the genus "Aenictus" and are cared for by the ants. Fungi in the genera "Cordyceps" and "Ophiocordyceps" infect ants. Ants react to their infection by climbing up plants and sinking their mandibles into plant tissue. The fungus kills the ants, grows on their remains, and produces a fruiting body. It appears that the fungus alters the behaviour of the ant to help disperse its spores in a microhabitat that best suits the fungus. Strepsipteran parasites also manipulate their ant host to climb grass stems, to help the parasite find mates. A nematode ("Myrmeconema neotropicum") that infects canopy ants ("Cephalotes atratus") causes the black-coloured gasters of workers to turn red. The parasite also alters the behaviour of the ant, causing them to carry their gasters high. The conspicuous red gasters are mistaken by birds for ripe fruits, such as "Hyeronima alchorneoides", and eaten. The droppings of the bird are collected by other ants and fed to their young, leading to further spread of the nematode. South American poison dart frogs in the genus "Dendrobates" feed mainly on ants, and the toxins in their skin may come from the ants. Army ants forage in a wide roving column, attacking any animals in that path that are unable to escape. In Central and South America, "Eciton burchellii" is the swarming ant most commonly attended by "ant-following" birds such as antbirds and woodcreepers. This behaviour was once considered mutualistic, but later studies found the birds to be parasitic. Direct kleptoparasitism (birds stealing food from the ants' grasp) is rare and has been noted in Inca doves which pick seeds at nest entrances as they are being transported by species of "Pogonomyrmex". Birds that follow ants eat many prey insects and thus decrease the foraging success of ants. Birds indulge in a peculiar behaviour called anting that, as yet, is not fully understood. Here birds rest on ant nests, or pick and drop ants onto their wings and feathers; this may be a means to remove ectoparasites from the birds. Anteaters, aardvarks, pangolins, echidnas and numbats have special adaptations for living on a diet of ants. These adaptations include long, sticky tongues to capture ants and strong claws to break into ant nests. Brown bears ("Ursus arctos") have been found to feed on ants. About 12%, 16%, and 4% of their faecal volume in spring, summer, and autumn, respectively, is composed of ants. Ants perform many ecological roles that are beneficial to humans, including the suppression of pest populations and aeration of the soil. 
The use of weaver ants in citrus cultivation in southern China is considered one of the oldest known applications of biological control. On the other hand, ants may become nuisances when they invade buildings, or cause economic losses. In some parts of the world (mainly Africa and South America), large ants, especially army ants, are used as surgical sutures. The wound is pressed together and ants are applied along it. The ant seizes the edges of the wound in its mandibles and locks in place. The body is then cut off and the head and mandibles remain in place to close the wound. The large heads of the dinergates (soldiers) of the leafcutting ant "Atta cephalotes" are also used by native surgeons in closing wounds. Some ants have toxic venom and are of medical importance. The species include "Paraponera clavata" (tocandira) and "Dinoponera" spp. (false tocandiras) of South America and the "Myrmecia" ants of Australia. In South Africa, ants are used to help harvest the seeds of rooibos ("Aspalathus linearis"), a plant used to make a herbal tea. The plant disperses its seeds widely, making manual collection difficult. Black ants collect and store these and other seeds in their nest, where humans can gather them "en masse". Up to half a pound (200 g) of seeds may be collected from one ant-heap. Although most ants survive attempts by humans to eradicate them, a few are highly endangered. These tend to be island species that have evolved specialized traits and risk being displaced by introduced ant species. Examples include the critically endangered Sri Lankan relict ant ("Aneuretus simoni") and "Adetomyrma venatrix" of Madagascar. It has been estimated by E.O. Wilson that the total number of individual ants alive in the world at any one time is between one and ten quadrillion (short scale) (i.e. between 10¹⁵ and 10¹⁶). According to this estimate, the total biomass of all the ants in the world is approximately equal to the total biomass of the entire human race. Also, according to this estimate, there are approximately 1 million ants for every human on Earth. Ants and their larvae are eaten in different parts of the world. The eggs of two species of ants are used in Mexican "escamoles". They are considered a form of insect caviar and can sell for as much as US$40 per pound ($90/kg) because they are seasonal and hard to find. In the Colombian department of Santander, "hormigas culonas" (roughly translated as "large-bottomed ants") "Atta laevigata" are toasted alive and eaten. In areas of India, and throughout Burma and Thailand, a paste of the green weaver ant ("Oecophylla smaragdina") is served as a condiment with curry. Weaver ant eggs and larvae, as well as the ants, may be used in a Thai salad, "yam", in a dish called "yam khai mot daeng", or red ant egg salad, a dish that comes from the Issan or north-eastern region of Thailand. Saville-Kent, in the "Naturalist in Australia", wrote "Beauty, in the case of the green ant, is more than skin-deep. Their attractive, almost sweetmeat-like translucency possibly invited the first essays at their consumption by the human species". Mashed up in water, after the manner of lemon squash, "these ants form a pleasant acid drink which is held in high favor by the natives of North Queensland, and is even appreciated by many European palates". In his "First Summer in the Sierra", John Muir notes that the Digger Indians of California ate the tickling, acid gasters of the large jet-black carpenter ants. 
The Mexican Indians eat the replete workers, or living honey-pots, of the honey ant ("Myrmecocystus"). Some ant species are considered pests, primarily those that occur in human habitations, where their presence is often problematic. For example, the presence of ants would be undesirable in sterile places such as hospitals or kitchens. Some species or genera commonly categorized as pests include the Argentine ant, pavement ant, yellow crazy ant, banded sugar ant, Pharaoh ant, carpenter ants, odorous house ant, red imported fire ant, and European fire ant. Some ants will raid stored food, others may damage indoor structures, some can damage agricultural crops directly (or by aiding sucking pests), and some will sting or bite. The adaptive nature of ant colonies makes it nearly impossible to eliminate entire colonies, and most pest management practices aim to control local populations and tend to be temporary solutions. Ant populations are managed by a combination of approaches that make use of chemical, biological and physical methods. Chemical methods include the use of insecticidal bait, which is gathered by ants as food and brought back to the nest, where the poison is inadvertently spread to other colony members through trophallaxis. Management is based on the species, and techniques can vary according to the location and circumstance. Observed by humans since the dawn of history, the behaviour of ants has been documented and has been the subject of early writings and fables passed from one century to another. Myrmecologists study ants using scientific methods, both in the laboratory and under natural conditions. Their complex and variable social structures have made ants ideal model organisms. Ultraviolet vision was first discovered in ants by Sir John Lubbock in 1881. Studies on ants have tested hypotheses in ecology and sociobiology, and have been particularly important in examining the predictions of theories of kin selection and evolutionarily stable strategies. Ant colonies may be studied by rearing or temporarily maintaining them in "formicaria", specially constructed glass-framed enclosures. Individuals may be tracked for study by marking them with dots of colours. The successful techniques used by ant colonies have been studied in computer science and robotics to produce distributed and fault-tolerant systems for solving problems, for example ant colony optimization and ant robotics. This area of biomimetics has led to studies of ant locomotion, search engines that make use of "foraging trails", fault-tolerant storage, and networking algorithms. From the late 1950s through the late 1970s, ant farms were popular educational children's toys in the United States. Some later commercial versions use transparent gel instead of soil, allowing greater visibility at the cost of stressing the ants with unnatural light. Anthropomorphised ants have often been used in fables and children's stories to represent industriousness and cooperative effort. They also are mentioned in religious texts. In the Book of Proverbs in the Bible, ants are held up as a good example for humans for their hard work and cooperation. Aesop did the same in his fable The Ant and the Grasshopper. In the Quran, Sulayman is said to have heard and understood an ant warning other ants to return home to avoid being accidentally crushed by Sulayman and his marching army. In parts of Africa, ants are considered to be the messengers of the deities. 
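The ant colony optimization technique mentioned above can be illustrated with a minimal sketch that applies the textbook pheromone-trail heuristic to a tiny travelling-salesman instance. The city coordinates, parameter values, ant count, and iteration count below are hypothetical choices made purely for illustration; they do not come from any study cited in this article.

    # A minimal sketch of ant colony optimization (ACO) on a tiny travelling-salesman
    # instance. All coordinates and parameter values are hypothetical examples.
    import math
    import random

    cities = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 3)]          # hypothetical coordinates
    n = len(cities)
    dist = [[math.dist(a, b) for b in cities] for a in cities]

    alpha, beta = 1.0, 2.0             # pheromone vs. distance weighting (assumed values)
    evaporation, deposit = 0.5, 1.0    # pheromone decay rate and deposit constant (assumed)
    pheromone = [[1.0] * n for _ in range(n)]

    def build_tour(rng):
        # Each artificial ant picks the next city with probability proportional to
        # pheromone**alpha * (1/distance)**beta, mimicking trail-following behaviour.
        tour = [rng.randrange(n)]
        while len(tour) < n:
            cur = tour[-1]
            choices = [c for c in range(n) if c not in tour]
            weights = [(pheromone[cur][c] ** alpha) * ((1.0 / dist[cur][c]) ** beta)
                       for c in choices]
            tour.append(rng.choices(choices, weights=weights)[0])
        return tour

    def tour_length(tour):
        return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

    rng = random.Random(42)
    best_tour, best_len = None, float("inf")
    for _ in range(50):                                        # 50 iterations, 10 ants each
        tours = [build_tour(rng) for _ in range(10)]
        # Evaporate old pheromone, then let each ant reinforce the edges of its tour
        # in proportion to tour quality (shorter tours deposit more per edge).
        pheromone = [[(1 - evaporation) * p for p in row] for row in pheromone]
        for t in tours:
            length = tour_length(t)
            if length < best_len:
                best_tour, best_len = t, length
            for i in range(n):
                a, b = t[i], t[(i + 1) % n]
                pheromone[a][b] += deposit / length
                pheromone[b][a] += deposit / length

    print("best tour:", best_tour, "length:", round(best_len, 2))

Practical ACO implementations add refinements such as elitist ants, pheromone bounds, and local search, but the evaporate-and-deposit loop above is the core of the technique inspired by ant foraging trails.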
Some Native American mythology, such as the Hopi mythology, considers ants the very first animals. Ant bites are often said to have curative properties. The sting of some species of "Pseudomyrmex" is claimed to give fever relief. Ant bites are used in the initiation ceremonies of some Amazon Indian cultures as a test of endurance. Ant society has always fascinated humans and has been written about both humorously and seriously. Mark Twain wrote about ants in his 1880 book "A Tramp Abroad". Some modern authors have used the example of the ants to comment on the relationship between society and the individual. Examples are Robert Frost in his poem "Departmental" and T. H. White in his fantasy novel "The Once and Future King". The plot in French entomologist and writer Bernard Werber's "Les Fourmis" science-fiction trilogy is divided between the worlds of ants and humans; ants and their behaviour are described using contemporary scientific knowledge. H.G. Wells wrote about intelligent ants destroying human settlements in Brazil and threatening human civilization in his 1905 science-fiction short story, "The Empire of the Ants." In more recent times, animated cartoons and 3-D animated films featuring ants have been produced, including "Antz", "A Bug's Life", "The Ant Bully", "The Ant and the Aardvark", "Ferdy the Ant" and "Atom Ant." Renowned myrmecologist E. O. Wilson wrote a short story, "Trailhead", in 2010 for "The New Yorker" magazine, which describes the life and death of an ant-queen and the rise and fall of her colony, from an ant's point of view. The French neuroanatomist, psychiatrist and eugenicist Auguste Forel believed that ant societies were models for human society. He published a five-volume work from 1921 to 1923 that examined ant biology and society. In the early 1990s, the video game SimAnt, which simulated an ant colony, won the 1992 Codie award for "Best Simulation Program". Ants also are a popular inspiration for many science-fiction insectoids, such as the Formics of "Ender's Game", the Bugs of "Starship Troopers", the giant ants in the films "Them!" and "Empire of the Ants," Marvel Comics' superhero Ant-Man, and ants mutated into super-intelligence in "Phase IV". In computer strategy games, ant-based species often benefit from increased production rates due to their single-minded focus, such as the Klackons in the "Master of Orion" series of games or the ChCht in "Deadlock II". These characters are often credited with a hive mind, a common misconception about ant colonies. Archaeopteryx Archaeopteryx, meaning "old wing" (and sometimes referred to by its German name, meaning "original bird" or "first bird"), is a genus of bird-like dinosaurs that is transitional between non-avian feathered dinosaurs and modern birds. The name derives from the ancient Greek "archaīos", meaning "ancient", and "ptéryx", meaning "feather" or "wing". Between the late nineteenth century and the early twenty-first century, "Archaeopteryx" had been generally accepted by palaeontologists and popular reference books as the oldest known bird (member of the group Avialae). Older potential avialans have since been identified, including "Anchiornis", "Xiaotingia", and "Aurornis". "Archaeopteryx" lived in the Late Jurassic around 150 million years ago, in what is now southern Germany, during a time when Europe was an archipelago of islands in a shallow, warm tropical sea, much closer to the equator than it is now. 
Similar in size to a Eurasian magpie, with the largest individuals possibly attaining the size of a raven, the largest species of "Archaeopteryx" could grow to about in length. Despite their small size, broad wings, and inferred ability to fly or glide, "Archaeopteryx" had more in common with other small Mesozoic dinosaurs than with modern birds. In particular, they shared the following features with the dromaeosaurids and troodontids: jaws with sharp teeth, three fingers with claws, a long bony tail, hyperextensible second toes ("killing claw"), feathers (which also suggest warm-bloodedness), and various features of the skeleton. These features make "Archaeopteryx" a clear candidate for a transitional fossil between non-avian dinosaurs and birds. Thus, "Archaeopteryx" plays an important role, not only in the study of the origin of birds, but in the study of dinosaurs. It was named from a single feather in 1861. That same year, the first complete specimen of "Archaeopteryx" was announced. Over the years, ten more fossils of "Archaeopteryx" have surfaced. Despite variation among these fossils, most experts regard all the remains that have been discovered as belonging to a single species, although this is still debated. Most of these eleven fossils include impressions of feathers. Because these feathers are of an advanced form (flight feathers), these fossils are evidence that the evolution of feathers began before the Late Jurassic. The type specimen of "Archaeopteryx" was discovered just two years after Charles Darwin published "On the Origin of Species". "Archaeopteryx" seemed to confirm Darwin's theories and has since become a key piece of evidence for the origin of birds, the transitional fossils debate, and confirmation of evolution. In March 2018, scientists reported that "Archaeopteryx" was likely capable of flight, but in a manner substantially different from that of modern birds. Most of the specimens of "Archaeopteryx" that have been discovered come from the Solnhofen limestone in Bavaria, southern Germany, which is a lagerstätte, a rare and remarkable geological formation known for its superbly detailed fossils laid down during the early Tithonian stage of the Jurassic period, approximately 150.8–148.5 million years ago. "Archaeopteryx" was roughly the size of a raven, with broad wings that were rounded at the ends and a long tail compared to its body length. It could reach up to in body length, with an estimated mass of . "Archaeopteryx" feathers, although less documented than its other features, were very similar in structure to modern-day bird feathers. Despite the presence of numerous avian features, "Archaeopteryx" had many non-avian theropod dinosaur characteristics. Unlike modern birds, "Archaeopteryx" had small teeth, as well as a long bony tail, features which "Archaeopteryx" shared with other dinosaurs of the time. Because it displays features common to both birds and non-avian dinosaurs, "Archaeopteryx" has often been considered a link between them. In the 1970s, John Ostrom, following Thomas Henry Huxley's lead in 1868, argued that birds evolved within theropod dinosaurs and "Archaeopteryx" was a critical piece of evidence for this argument; it had several avian features, such as a wishbone, flight feathers, wings, and a partially reversed first toe along with dinosaur and theropod features. For instance, it has a long ascending process of the ankle bone, interdental plates, an obturator process of the ischium, and long chevrons in the tail. 
In particular, Ostrom found that "Archaeopteryx" was remarkably similar to the theropod family Dromaeosauridae. Specimens of "Archaeopteryx" were most notable for their well-developed flight feathers. They were markedly asymmetrical and showed the structure of flight feathers in modern birds, with vanes given stability by a barb-barbule-barbicel arrangement. The tail feathers were less asymmetrical, again in line with the situation in modern birds and also had firm vanes. The thumb did not yet bear a separately movable tuft of stiff feathers. The body plumage of "Archaeopteryx" is less well documented and has only been properly researched in the well-preserved Berlin specimen. Thus, as more than one species seems to be involved, the research into the Berlin specimen's feathers does not necessarily hold true for the rest of the species of "Archaeopteryx". In the Berlin specimen, there are "trousers" of well-developed feathers on the legs; some of these feathers seem to have a basic contour feather structure, but are somewhat decomposed (they lack barbicels as in ratites). In part they are firm and thus capable of supporting flight. A patch of pennaceous feathers is found running along its back, which was quite similar to the contour feathers of the body plumage of modern birds in being symmetrical and firm, although not as stiff as the flight-related feathers. Apart from that, the feather traces in the Berlin specimen are limited to a sort of "proto-down" not dissimilar to that found in the dinosaur "Sinosauropteryx": decomposed and fluffy, and possibly even appearing more like fur than feathers in life (although not in their microscopic structure). These occur on the remainder of the body—although some feathers did not fossilize and others were obliterated during preparation, leaving bare patches on specimens—and the lower neck. There is no indication of feathering on the upper neck and head. While these conceivably may have been nude, this may still be an artefact of preservation. It appears that most "Archaeopteryx" specimens became embedded in anoxic sediment after drifting some time on their backs in the sea—the head, neck and the tail are generally bent downward, which suggests that the specimens had just started to rot when they were embedded, with tendons and muscle relaxing so that the characteristic shape (death pose) of the fossil specimens was achieved. This would mean that the skin already was softened and loose, which is bolstered by the fact that in some specimens the flight feathers were starting to detach at the point of embedding in the sediment. So it is hypothesized that the pertinent specimens moved along the sea bed in shallow water for some time before burial, the head and upper neck feathers sloughing off, while the more firmly attached tail feathers remained. In 2011, graduate student Ryan Carney and colleagues performed the first colour study on an "Archaeopteryx" specimen. Using scanning electron microscopy technology and energy-dispersive X-ray analysis, the team was able to detect the structure of melanosomes in the isolated feather specimen described in 1861. The resultant measurements were then compared to those of 87 modern bird species, and the original colour was calculated with a 95% likelihood to be black. The feather was determined to be black throughout, with heavier pigmentation in the distal tip. The feather studied was most probably a dorsal covert, which would have partly covered the primary feathers on the wings. 
The study does not mean that "Archaeopteryx" was entirely black, but suggests that it had some black colouration which included the coverts. Carney pointed out that this is consistent with what we know of modern flight characteristics, in that black melanosomes have structural properties that strengthen feathers for flight. In a 2013 study published in the "Journal of Analytical Atomic Spectrometry", new analyses of "Archaeopteryx"'s feathers revealed that the animal may have had light-coloured inner vanes. This interpretation was based on synchrotron imaging of organic sulfur and trace metals, used as proxies for the presence of melanin. However, such biomarkers have been found to be non-specific for melanin, and thus it was suggested that the purported light-and-dark patterns were instead due to non-pigmentary organics and secondary precipitates. In 2014, Carney and colleagues presented additional data that the wing feather's color was matte black (as opposed to iridescent), due to the lower aspect ratio of the melanosomes detected in the fossil. Here they also provided evidence that the previous counter-interpretation of the feather having a light-coloured inner vane was not supported, due to the abundance of matte black melanosomes in this region. As in the wings of modern birds, the flight feathers of "Archaeopteryx" were somewhat asymmetrical and the tail feathers were rather broad. This implies that the wings and tail were used for lift generation, but it is unclear whether "Archaeopteryx" was capable of flapping flight or simply a glider. The lack of a bony breastbone suggests that "Archaeopteryx" was not a very strong flier, but flight muscles might have attached to the thick, boomerang-shaped wishbone, the platelike coracoids, or perhaps, to a cartilaginous sternum. The sideways orientation of the glenoid (shoulder) joint between scapula, coracoid, and humerus—instead of the dorsally angled arrangement found in modern birds—may indicate that "Archaeopteryx" was unable to lift its wings above its back, a requirement for the upstroke found in modern flapping flight. According to a study by Philip Senter in 2006, "Archaeopteryx" was indeed unable to use flapping flight as modern birds do, but it may well have used a downstroke-only flap-assisted gliding technique. "Archaeopteryx" wings were relatively large, which would have resulted in a low stall speed and reduced turning radius. The short and rounded shape of the wings would have increased drag, but also could have improved its ability to fly through cluttered environments such as trees and brush (similar wing shapes are seen in birds that fly through trees and brush, such as crows and pheasants). The presence of "hind wings", asymmetrical flight feathers stemming from the legs similar to those seen in dromaeosaurids such as "Microraptor", also would have added to the aerial mobility of "Archaeopteryx". The first detailed study of the hind wings by Longrich in 2006, suggested that the structures formed up to 12% of the total airfoil. This would have reduced stall speed by up to 6% and turning radius by up to 12%. The feathers of "Archaeopteryx" were asymmetrical. This has been interpreted as evidence that it was a flyer, because flightless birds tend to have symmetrical feathers. Some scientists, including Thomson and Speakman, have questioned this. 
They studied more than 70 families of living birds, and found that some flightless types do have a range of asymmetry in their feathers, and that the feathers of "Archaeopteryx" fall into this range. The degree of asymmetry seen in "Archaeopteryx" is more typical for slow flyers than for flightless birds. In 2010, Robert L. Nudds and Gareth J. Dyke in the journal "Science" published a paper in which they analysed the rachises of the primary feathers of "Confuciusornis" and "Archaeopteryx". The analysis suggested that the rachises on these two genera were thinner and weaker than those of modern birds relative to body mass. The authors determined that "Archaeopteryx" and "Confuciusornis", were unable to use flapping flight. This study was criticized by Philip J. Currie and Luis Chiappe. Chiappe suggested that it is difficult to measure the rachises of fossilized feathers, and Currie speculated that "Archaeopteryx" and "Confuciusornis" must have been able to fly to some degree, as their fossils are preserved in what is believed to have been marine or lake sediments, suggesting that they must have been able to fly over deep water. Gregory Paul also disagreed with the study, arguing in a 2010 response that Nudds and Dyke had overestimated the masses of these early birds, and that more accurate mass estimates allowed powered flight even with relatively narrow rachises. Nudds and Dyke had assumed a mass of for the Munich specimen "Archaeopteryx", a young juvenile, based on published mass estimates of larger specimens. Paul argued that a more reasonable body mass estimate for the Munich specimen is about . Paul also criticized the measurements of the rachises themselves, noting that the feathers in the Munich specimen are poorly preserved. Nudds and Dyke reported a diameter of for the longest primary feather, which Paul could not confirm using photographs. Paul measured some of the inner primary feathers, finding rachises across. Despite these criticisms, Nudds and Dyke stood by their original conclusions. They claimed that Paul's statement, that an adult "Archaeopteryx" would have been a better flyer than the juvenile Munich specimen, was dubious. This, they reasoned, would require an even thicker rachis, evidence for which has not yet been presented. Another possibility is that they had not achieved true flight, but instead used their wings as aids for extra lift while running over water after the fashion of the basilisk lizard, which could explain their presence in lake and marine deposits (see Evolution of bird flight). In 2004, scientists analysing a detailed CT scan of the braincase of the London "Archaeopteryx" concluded that its brain was significantly larger than that of most dinosaurs, indicating that it possessed the brain size necessary for flying. The overall brain anatomy was reconstructed using the scan. The reconstruction showed that the regions associated with vision took up nearly one-third of the brain. Other well-developed areas involved hearing and muscle coordination. The skull scan also revealed the structure of its inner ear. The structure more closely resembles that of modern birds than the inner ear of non-avian reptiles. These characteristics taken together suggest that "Archaeopteryx" had the keen sense of hearing, balance, spatial perception, and coordination needed to fly. 
"Archaeopteryx" had a cerebrum-to-brain-volume ratio 78% of the way to modern birds from the condition of non-coelurosaurian dinosaurs such as "Carcharodontosaurus" or "Allosaurus", which had a crocodile-like anatomy of the brain and inner ear. Newer research shows that while the "Archaeopteryx" brain was more complex than that of more primitive theropods, it had a more generalized brain volume among maniraptoran dinosaurs, even smaller than that of other non-avian dinosaurs in several instances, which indicates the neurological development required for flight was already a common trait in the maniraptoran clade. Recent studies of flight feather barb geometry reveal that modern birds possess a larger barb angle in the trailing vane of the feather, whereas "Archaeopteryx" lacks this large barb angle, indicating potentially weak flight abilities. "Archaeopteryx" continues to play an important part in scientific debates about the origin and evolution of birds. Some scientists see it as a semi-arboreal climbing animal, following the idea that birds evolved from tree-dwelling gliders (the "trees down" hypothesis for the evolution of flight proposed by O. C. Marsh). Other scientists see "Archaeopteryx" as running quickly along the ground, supporting the idea that birds evolved flight by running (the "ground up" hypothesis proposed by Samuel Wendell Williston). Still others suggest that "Archaeopteryx" might have been at home both in the trees and on the ground, like modern crows, and this latter view is what currently is considered best-supported by morphological characters. Altogether, it appears that the species was not particularly specialized for running on the ground or for perching. A scenario outlined by Elżanowski in 2002 suggested that "Archaeopteryx" used its wings mainly to escape predators by glides punctuated with shallow downstrokes to reach successively higher perches, and alternatively, to cover longer distances (mainly) by gliding down from cliffs or treetops. A histological study by Erickson, Norell, Zhongue, and others in 2009 estimated that "Archaeopteryx" grew relatively slowly compared to modern birds, presumably because the outermost portions of "Archaeopteryx" bones appear poorly vascularized; in living vertebrates, poorly vascularized bone is correlated with slow growth rate. They also assume that all known skeletons of "Archaeopteryx" come from juvenile specimens. Because the bones of "Archaeopteryx" could not be histologically sectioned in a formal skeletochronological (growth ring) analysis, Erickson and colleagues used bone vascularity (porosity) to estimate bone growth rate. They assumed that poorly vascularized bone grows at similar rates in all birds and in "Archaeopteryx". The poorly vascularized bone of "Archaeopteryx" might have grown as slowly as that in a mallard (2.5 micrometres per day) or as fast as that in an ostrich (4.2 micrometres per day). Using this range of bone growth rates, they calculated how long it would take to "grow" each specimen of "Archaeopteryx" to the observed size; it may have taken at least 970 days (there were 375 days in a Late Jurassic year) to reach an adult size of . The study also found that the avialans "Jeholornis" and "Sapeornis" grew relatively slowly, as did the dromaeosaurid "Mahakala". The avialans "Confuciusornis" and "Ichthyornis" grew relatively quickly, following a growth trend similar to that of modern birds. 
One of the few modern birds that exhibit slow growth is the flightless kiwi, and the authors speculated that "Archaeopteryx" and the kiwi had similar basal metabolic rates. Comparisons between the scleral rings of "Archaeopteryx" and modern birds and reptiles indicate that it may have been diurnal, similar to most modern birds. The richness and diversity of the Solnhofen limestones in which all specimens of "Archaeopteryx" have been found have shed light on an ancient Jurassic Bavaria strikingly different from the present day. The latitude was similar to that of Florida, though the climate was likely to have been drier, as evidenced by fossils of plants with adaptations for arid conditions and a lack of terrestrial sediments characteristic of rivers. Evidence of plants, although scarce, includes cycads and conifers, while animals found include a large number of insects, small lizards, pterosaurs, and "Compsognathus". The excellent preservation of "Archaeopteryx" fossils and other terrestrial fossils found at Solnhofen indicates that they did not travel far before becoming preserved. The "Archaeopteryx" specimens found were therefore likely to have lived on the low islands surrounding the Solnhofen lagoon rather than to have been corpses that drifted in from farther away. "Archaeopteryx" skeletons are considerably less numerous in the deposits of Solnhofen than those of pterosaurs, of which seven genera have been found. The pterosaurs included species such as "Rhamphorhynchus" belonging to the Rhamphorhynchidae, the group which dominated the niche currently occupied by seabirds, and which became extinct at the end of the Jurassic. The pterosaurs, which also included "Pterodactylus", were common enough that it is unlikely that the specimens found are vagrants from the larger islands to the north. The islands that surrounded the Solnhofen lagoon were low-lying, semi-arid, and sub-tropical, with a long dry season and little rain. The closest modern analogue for the Solnhofen conditions is said to be Orca Basin in the northern Gulf of Mexico, although it is much deeper than the Solnhofen lagoons. The flora of these islands was adapted to these dry conditions and consisted mostly of low shrubs. Contrary to reconstructions of "Archaeopteryx" climbing large trees, such trees seem to have been mostly absent from the islands; few trunks have been found in the sediments and fossilized tree pollen also is absent. The lifestyle of "Archaeopteryx" is difficult to reconstruct and there are several theories regarding it. Some researchers suggest that it was primarily adapted to life on the ground, while other researchers suggest that it was principally arboreal. The absence of trees does not preclude "Archaeopteryx" from an arboreal lifestyle, as several species of bird live exclusively in low shrubs. Various aspects of the morphology of "Archaeopteryx" point to either an arboreal or ground existence, including the length of its legs and the elongation in its feet; some authorities consider it likely to have been a generalist capable of feeding in both shrubs and open ground, as well as along the shores of the lagoon. It most likely hunted small prey, seizing it with its jaws if it was small enough, or with its claws if it was larger. Over the years, twelve body fossil specimens of "Archaeopteryx" and a feather that may belong to it have been found. All of the fossils come from the limestone deposits, quarried for centuries, near Solnhofen, Germany. 
The initial discovery, a single feather, was unearthed in 1860 or 1861 and described in 1861 by Hermann von Meyer. It is currently located at the Natural History Museum of Berlin. This is generally assigned to "Archaeopteryx" and was the initial holotype, but whether it is a feather of this species, or another, as yet undiscovered, proto-bird is unknown. There are some indications it is not from the same animal as most of the skeletons (the "typical" "A. lithographica"). The first skeleton, known as the London Specimen (BMNH 37001), was unearthed in 1861 near Langenaltheim, Germany, and perhaps given to local physician Karl Häberlein in return for medical services. He then sold it for £700 to the Natural History Museum in London, where it remains. Missing most of its head and neck, it was described in 1863 by Richard Owen as "Archaeopteryx macrura", allowing for the possibility it did not belong to the same species as the feather. In the subsequent fourth edition of his "On the Origin of Species", Charles Darwin described how some authors had maintained "that the whole class of birds came suddenly into existence during the eocene period; but now we know, on the authority of professor Owen, that a bird certainly lived during the deposition of the upper greensand; and still more recently, that strange bird, the "Archaeopteryx", with a long lizard-like tail, bearing a pair of feathers on each joint, and with its wings furnished with two free claws, has been discovered in the oolitic slates of Solnhofen. Hardly any recent discovery shows more forcibly than this how little we as yet know of the former inhabitants of the world." The Greek term "pteryx" primarily means "wing", but can also designate merely "feather". Meyer suggested this in his description. At first he referred to a single feather which appeared to resemble a modern bird's remex (wing feather), but he had heard of and been shown a rough sketch of the London specimen, to which he referred as a "Skelett eines mit ähnlichen Federn bedeckten ..." ("skeleton of an animal covered in similar feathers"). In German, this ambiguity is resolved by the term "Schwinge", which does not necessarily mean a wing used for flying. "Urschwinge" was the favoured translation of "Archaeopteryx" among German scholars in the late nineteenth century. In English, "ancient pinion" offers a rough approximation. Since then, twelve specimens have been recovered: The Berlin Specimen (HMN 1880/81) was discovered in 1874 or 1875 on the Blumenberg near Eichstätt, Germany, by farmer Jakob Niemeyer. In 1876, he sold this precious fossil, for the money to buy a cow, to innkeeper Johann Dörr, who in turn sold it to Ernst Otto Häberlein, the son of K. Häberlein. Placed on sale between 1877 and 1881, with potential buyers including O. C. Marsh of Yale University's Peabody Museum, it eventually was bought for 20,000 Goldmark by Berlin's Natural History Museum, where it now is displayed. The transaction was financed by Ernst Werner von Siemens, founder of the famous company that bears his name. Described in 1884 by Wilhelm Dames, it is the most complete specimen, and the first with a complete head. In 1897 it was named by Dames as a new species, "A. siemensii"; though often considered a synonym of "A. lithographica", several 21st-century studies have concluded that it is a distinct species which includes the Berlin, Munich, and Thermopolis specimens. 
Composed of a torso, the Maxberg Specimen (S5) was discovered in 1956 near Langenaltheim; it was brought to the attention of professor Florian Heller in 1958 and described by him in 1959. The specimen is missing its head and tail, although the rest of the skeleton is mostly intact. Although it was once exhibited at the Maxberg Museum in Solnhofen, it is currently missing. It belonged to Eduard Opitsch, who loaned it to the museum until 1974. After his death in 1991, it was discovered that the specimen was missing and may have been stolen or sold. The Haarlem Specimen (TM 6428/29, also known as the "Teylers Specimen") was discovered in 1855 near Riedenburg, Germany, and described as a "Pterodactylus crassipes" in 1857 by Meyer. It was reclassified in 1970 by John Ostrom and is currently located at the Teylers Museum in Haarlem, the Netherlands. It was the very first specimen found, but was incorrectly classified at the time. It is also one of the least complete specimens, consisting mostly of limb bones, isolated cervical vertebrae, and ribs. In 2017 it was named as a separate genus "Ostromia", considered more closely related to "Anchiornis" from China. The Eichstätt Specimen (JM 2257) was discovered in 1951 near Workerszell, Germany, and described by Peter Wellnhofer in 1974. Currently located at the Jura Museum in Eichstätt, Germany, it is the smallest known specimen and has the second best head. It is possibly a separate genus ("Jurapteryx recurva") or species ("A. recurva"). The Solnhofen Specimen (unnumbered specimen) was discovered in the 1970s near Eichstätt, Germany, and described in 1988 by Wellnhofer. Currently located at the Bürgermeister-Müller-Museum in Solnhofen, it originally was classified as "Compsognathus" by an amateur collector, the same mayor Friedrich Müller after which the museum is named. It is the largest specimen known and may belong to a separate genus and species, "Wellnhoferia grandis". It is missing only portions of the neck, tail, backbone, and head. The Munich Specimen (BSP 1999 I 50, formerly known as the "Solenhofer-Aktien-Verein Specimen") was discovered on 3 August 1992 near Langenaltheim and described in 1993 by Wellnhofer. It is currently located at the Paläontologisches Museum München in Munich, to which it was sold in 1999 for 1.9 million Deutschmark. What was initially believed to be a bony sternum turned out to be part of the coracoid, but a cartilaginous sternum may have been present. Only the front of its face is missing. It has been used as the basis for a distinct species, "A. bavarica", but more recent studies suggest it belongs to "A. siemensii". An eighth, fragmentary specimen was discovered in 1990, not in Solnhofen limestone, but in somewhat younger sediments at Daiting, Suevia. Therefore, it is known as the Daiting Specimen, and had been known since 1996 only from a cast, briefly shown at the Naturkundemuseum in Bamberg. The original was purchased by palaeontologist Raimund Albertsdörfer in 2009. It was on display for the first time with six other original fossils of "Archaeopteryx" at the Munich Mineral Show in October 2009. A first, quick look by scientists indicates that this specimen might represent a new species of "Archaeopteryx". It was found in a limestone bed that was a few hundred thousand years younger than the other finds. Another fragmentary fossil was found in 2000. 
It is in private possession and, since 2004, on loan to the Bürgermeister-Müller Museum in Solnhofen, so it is called the Bürgermeister-Müller Specimen; the institute itself officially refers to it as the "Exemplar of the families Ottman & Steil, Solnhofen". As the fragment represents the remains of a single wing of "Archaeopteryx", the popular name of this fossil is "chicken wing". Long in a private collection in Switzerland, the Thermopolis Specimen (WDC CSG 100) was discovered in Bavaria and described in 2005 by Mayr, Pohl, and Peters. Donated to the Wyoming Dinosaur Center in Thermopolis, Wyoming, it has the best-preserved head and feet; most of the neck and the lower jaw have not been preserved. The "Thermopolis" specimen was described in a 2 December 2005 "Science" journal article as "A well-preserved "Archaeopteryx" specimen with theropod features"; it shows that "Archaeopteryx" lacked a reversed toe—a universal feature of birds—limiting its ability to perch on branches and implying a terrestrial or trunk-climbing lifestyle. This has been interpreted as evidence of theropod ancestry. In 1988, Gregory S. Paul claimed to have found evidence of a hyperextensible second toe, but this was not verified and accepted by other scientists until the Thermopolis specimen was described. "Until now, the feature was thought to belong only to the species' close relatives, the deinonychosaurs." The Thermopolis Specimen was assigned to "Archaeopteryx siemensii" in 2007. The specimen is considered to represent the most complete and best-preserved "Archaeopteryx" remains yet. The discovery of an eleventh specimen was announced in 2011, and it was described in 2014. It is one of the more complete specimens, but is missing much of the skull and one forelimb. It is privately owned and has yet to be given a name. Palaeontologists of the Ludwig Maximilian University of Munich studied the specimen, which revealed previously unknown features of the plumage, such as feathers on both the upper and lower legs and metatarsus, and the only preserved tail tip. A twelfth specimen had been discovered by an amateur collector in 2010 at the Schamhaupten quarry, but the finding was only announced in February 2014. It was scientifically described in 2018. It represents a complete and mostly articulated skeleton with skull. It is the only specimen lacking preserved feathers. It is from the Painten Formation and somewhat older than the other specimens. Today, fossils of the genus "Archaeopteryx" are usually assigned to one or two species, "A. lithographica" and "A. siemensii", but their taxonomic history is complicated. Ten names have been published for the handful of specimens. As interpreted today, the name "A. lithographica" only referred to the single feather described by Meyer. In 1954, Gavin de Beer concluded that the London specimen was the holotype. In 1960, Swinton accordingly proposed that the name "Archaeopteryx lithographica" be placed on the official genera list, making the alternative names "Griphosaurus" and "Griphornis" invalid. The ICZN, implicitly accepting de Beer's standpoint, did indeed suppress the plethora of alternative names initially proposed for the first skeleton specimens, which mainly resulted from the acrimonious dispute between Meyer and his opponent Johann Andreas Wagner (whose "Griphosaurus problematicus" – "problematic riddle-lizard" – was a vitriolic sneer at Meyer's "Archaeopteryx"). 
In addition, in 1977, the Commission ruled that the first species name of the Haarlem specimen, "crassipes", described by Meyer as a pterosaur before its true nature was realized, was not to be given preference over "lithographica" in instances where scientists considered them to represent the same species. It has been noted that the feather, the first specimen of "Archaeopteryx" described, does not correspond well with the flight-related feathers of "Archaeopteryx". It certainly is a flight feather of a contemporary species, but its size and proportions indicate that it may belong to another, smaller species of feathered theropod, of which only this feather is known so far. As the feather had been designated the type specimen, the name "Archaeopteryx" should then no longer be applied to the skeletons, thus creating significant nomenclatural confusion. In 2007, two sets of scientists therefore petitioned the ICZN requesting that the London specimen explicitly be made the type by designating it as the new holotype specimen, or neotype. This suggestion was upheld by the ICZN after four years of debate, and the London specimen was designated the neotype on 3 October 2011. It has been argued that all the specimens belong to the same species, "A. lithographica". Differences do exist among the specimens, and while some researchers regard these as due to the different ages of the specimens, some may be related to actual species diversity. In particular, the Munich, Eichstätt, Solnhofen, and Thermopolis specimens differ from the London, Berlin, and Haarlem specimens in being smaller or much larger, having different finger proportions, having more slender snouts lined with forward-pointing teeth, and the possible presence of a sternum. Due to these differences, most individual specimens have been given their own species name at one point or another. The Berlin specimen has been designated as "Archaeornis siemensii", the Eichstätt specimen as "Jurapteryx recurva", the Munich specimen as "Archaeopteryx bavarica", and the Solnhofen specimen as "Wellnhoferia grandis". In 2007, a review of all well-preserved specimens, including the then-newly discovered Thermopolis specimen, concluded that two distinct species of "Archaeopteryx" could be supported: "A. lithographica" (consisting of at least the London and Solnhofen specimens), and "A. siemensii" (consisting of at least the Berlin, Munich, and Thermopolis specimens). The two species are distinguished primarily by large flexor tubercles on the foot claws in "A. lithographica" (the claws of "A. siemensii" specimens being relatively simple and straight). "A. lithographica" also had a constricted portion of the crown in some teeth and a stouter metatarsus. A supposed additional species, "Wellnhoferia grandis" (based on the Solnhofen specimen), seems to be indistinguishable from "A. lithographica" except in its larger size. ""Archaeopteryx" vicensensis" (Anon. "fide" Lambrecht, 1933) is a "nomen nudum" for what appears to be an undescribed pterosaur. 
Beginning in 1985, a group including astronomer Fred Hoyle and physicist Lee Spetner published a series of papers claiming that the feathers on the Berlin and London specimens of "Archaeopteryx" were forged. Their claims were repudiated by Alan J. Charig and others at the Natural History Museum in London. Most of their evidence for a forgery was based on unfamiliarity with the processes of lithification; for example, they proposed that, based on the difference in texture associated with the feathers, feather impressions were applied to a thin layer of cement, without realizing that feathers themselves would have caused a textural difference. They also misinterpreted the fossils, claiming that the tail was forged as one large feather, when visibly this is not the case. In addition, they claimed that the other specimens of "Archaeopteryx" known at the time did not have feathers, which is incorrect; the Maxberg and Eichstätt specimens have obvious feathers. They also expressed disbelief that slabs would split so smoothly, or that one half of a slab containing fossils would have good preservation, but not the counterslab. These are common properties of Solnhofen fossils, because the dead animals would fall onto hardened surfaces, which would form a natural plane for the future slabs to split along and would leave the bulk of the fossil on one side and little on the other. Finally, the motives they suggested for a forgery are not strong, and are contradictory; one is that Richard Owen wanted to forge evidence in support of Charles Darwin's theory of evolution, which is unlikely given Owen's views toward Darwin and his theory. The other is that Owen wanted to set a trap for Darwin, hoping the latter would support the fossils so Owen could discredit him with the forgery; this is unlikely because Owen wrote a detailed paper on the London specimen, so such an action would certainly backfire. Charig "et al." pointed to the presence of hairline cracks in the slabs running through both rock and fossil impressions, and mineral growth over the slabs that had occurred before discovery and preparation, as evidence that the feathers were original. Spetner "et al." then attempted to show that the cracks would have propagated naturally through their postulated cement layer, but neglected to account for the fact that the cracks were old and had been filled with calcite, and thus were not able to propagate. They also attempted to show the presence of cement on the London specimen through X-ray spectroscopy, and did find something that was not rock; it was not cement either, and is most probably a fragment of silicone rubber left behind when moulds were made of the specimen. Their suggestions have not been taken seriously by palaeontologists, as their evidence was largely based on misunderstandings of geology, and they never discussed the other feather-bearing specimens, which have increased in number since then. Charig "et al." reported a discolouration: a dark band between two layers of limestone – they say it is the product of sedimentation. It is natural for limestone to take on the colour of its surroundings and most limestones are coloured (if not colour banded) to some degree, so the darkness was attributed to such impurities. They also mention that a complete absence of air bubbles in the rock slabs is further proof that the specimen is authentic. Modern paleontology has often classified "Archaeopteryx" as the most primitive bird. 
It is not thought to be a true ancestor of modern birds, but rather, a close relative of that ancestor. Nonetheless, several authors have used "Archaeopteryx" as a model of the true ancestral bird. Lowe (1935) and Thulborn (1984) questioned whether "Archaeopteryx" truly was the first bird. They suggested that "Archaeopteryx" was a dinosaur that was no more closely related to birds than were other dinosaur groups. Kurzanov (1987) suggested that "Avimimus" was more likely to be the ancestor of all birds than "Archaeopteryx". Barsbold (1983) and Zweers and Van den Berge (1997) noted that many maniraptoran lineages are extremely birdlike, and they suggested that different groups of birds may have descended from different dinosaur ancestors. The discovery of the closely related "Xiaotingia" in 2011 led to new phylogenetic analyses that suggested that "Archaeopteryx" is a deinonychosaur rather than an avialan, and therefore, not a "bird" under most common uses of that term. A more thorough analysis was published soon after to test this hypothesis, and failed to arrive at the same result; it found "Archaeopteryx" in its traditional position at the base of Avialae, while "Xiaotingia" was recovered as a basal dromaeosaurid or troodontid. The authors of the follow-up study noted that uncertainties still exist, and that it may not be possible to state confidently whether "Archaeopteryx" is a member of Avialae or not, barring new and better specimens of relevant species. Phylogenetic studies conducted by Senter "et al." (2012) and Turner, Makovicky, and Norell (2012) confirmed that "Archaeopteryx" was more closely related to living birds than to dromaeosaurids and troodontids. On the other hand, Godefroit "et al." (2013) recovered "Archaeopteryx" as more closely related to dromaeosaurids and troodontids in the analysis included in their description of "Eosinopteryx brevipenna". The authors used a modified version of the matrix from the study describing "Xiaotingia", adding "Jinfengopteryx elegans" and "Eosinopteryx brevipenna" to it, as well as adding four additional characters related to the development of the plumage. Unlike the analysis from the description of "Xiaotingia", the analysis conducted by Godefroit "et al." did not find "Archaeopteryx" to be related particularly closely to "Anchiornis" and "Xiaotingia", which were recovered as basal troodontids instead. Agnolín and Novas (2013) found "Archaeopteryx" and (possibly synonymous) "Wellnhoferia" to be the basalmost avialans (Avialae being defined by the authors as including "Archaeopteryx lithographica" and "Passer", their most recent common ancestor and all of its descendants), with Microraptoria, Unenlagiinae, and the clade containing "Anchiornis" and "Xiaotingia" being successively closer outgroups to the Avialae. Another phylogenetic study by Godefroit "et al.", using a more inclusive matrix than the one from the analysis in the description of "Eosinopteryx brevipenna", also found "Archaeopteryx" to be a member of Avialae (defined by the authors as the most inclusive clade containing "Passer domesticus", but not "Dromaeosaurus albertensis" or "Troodon formosus"). "Archaeopteryx" was found to form a grade at the base of Avialae with "Xiaotingia", "Anchiornis", and "Aurornis". Compared to "Archaeopteryx", "Xiaotingia" was found to be more closely related to extant birds, while both "Anchiornis" and "Aurornis" were found to be more distantly so. Antioxidant Antioxidants are compounds that inhibit oxidation. 
Oxidation is a chemical reaction that can produce free radicals, thereby leading to chain reactions that may damage the cells of organisms. Antioxidants such as thiols or ascorbic acid (vitamin C) terminate these chain reactions. To balance the oxidative state, plants and animals maintain complex systems of overlapping antioxidants, such as glutathione and enzymes (e.g., catalase and superoxide dismutase), produced internally, or the dietary antioxidants vitamin A, vitamin C, and vitamin E. The term "antioxidant" is mostly used for two entirely different groups of substances: industrial chemicals that are added to products to prevent oxidation, and naturally occurring compounds that are present in foods and tissue. The former, industrial antioxidants, have diverse uses: acting as preservatives in food and cosmetics, and being oxidation-inhibitors in fuels. Importantly, antioxidant dietary supplements have not yet been shown to improve health in humans, or to be effective at preventing disease. Supplements of beta-carotene, vitamin A, and vitamin E have no effect on mortality rate or cancer risk. Additionally, supplementation with selenium or vitamin E does not reduce the risk of cardiovascular disease. Although certain levels of antioxidant vitamins in the diet are required for good health, there is still considerable debate on whether antioxidant-rich foods or supplements have anti-disease activity. Moreover, if they are actually beneficial, it is unknown which antioxidants are health-promoting in the diet and in what amounts beyond typical dietary intake. Some authors dispute the hypothesis that antioxidant vitamins could prevent chronic diseases, and others maintain that such a hypothesis is unproven and misguided. Polyphenols, which often have antioxidant properties in vitro, are not necessarily antioxidants in vivo due to extensive metabolism following digestion. In many polyphenols, the catechol group acts as an electron acceptor and is therefore responsible for the antioxidant activity. However, this catechol group undergoes extensive metabolism upon uptake in the human body, for example by catechol-O-methyl transferase, and is therefore no longer able to act as an electron acceptor. Many polyphenols may have non-antioxidant roles in minute concentrations that affect cell-to-cell signaling, receptor sensitivity, inflammatory enzyme activity or gene regulation. Although dietary antioxidants have been investigated for potential effects on neurodegenerative diseases such as Alzheimer's disease, Parkinson's disease, and amyotrophic lateral sclerosis, these studies have been inconclusive. Tirilazad is an antioxidant steroid derivative that inhibits the lipid peroxidation that is believed to play a key role in neuronal death in stroke and head injury. It demonstrated activity in animal models of stroke, but human trials demonstrated no effect on mortality or other outcomes in subarachnoid haemorrhage and worsened results in ischemic stroke. Similarly, the designed antioxidant NXY-059 exhibited efficacy in animal models, but failed to improve stroke outcomes in a clinical trial. As of November 2014, other antioxidants are being studied as potential neuroprotectants. Common pharmaceuticals (and supplements) with antioxidant properties may interfere with the efficacy of certain anticancer medications and radiation therapy. A 2016 systematic review examined antioxidant medications, such as allopurinol and acetylcysteine, as an add-on treatment for schizophrenia. 
Evidence was insufficient to determine benefits, and there was potential for adverse effects. Relatively strong reducing acids can have antinutrient effects by binding to dietary minerals such as iron and zinc in the gastrointestinal tract and preventing them from being absorbed. Notable examples are oxalic acid, tannins and phytic acid, which are high in plant-based diets. Calcium and iron deficiencies are not uncommon in diets in developing countries where less meat is eaten and there is high consumption of phytic acid from beans and unleavened whole grain bread. Nonpolar antioxidants such as eugenol—a major component of oil of cloves—have toxicity limits that can be exceeded with the misuse of undiluted essential oils. Toxicity associated with high doses of water-soluble antioxidants such as ascorbic acid is less of a concern, as these compounds can be excreted rapidly in urine. More seriously, very high doses of some antioxidants may have harmful long-term effects. The Beta-Carotene and Retinol Efficacy Trial (CARET) study of lung cancer patients found that smokers given supplements containing beta-carotene and vitamin A had increased rates of lung cancer. Subsequent studies confirmed these adverse effects. These harmful effects may also be seen in non-smokers, as one meta-analysis including data from approximately 230,000 patients showed that β-carotene, vitamin A or vitamin E supplementation is associated with increased mortality, but saw no significant effect from vitamin C. No health risk was seen when all the randomized controlled studies were examined together, but an increase in mortality was detected when only high-quality and low-bias risk trials were examined separately. As the majority of these low-bias trials dealt with either elderly people, or people with disease, these results may not apply to the general population. This meta-analysis was later repeated and extended by the same authors, with the new analysis published by the Cochrane Collaboration; this analysis confirmed the previous results. These two publications are consistent with some previous meta-analyses that also suggested that Vitamin E supplementation increased mortality, and that antioxidant supplements increased the risk of colon cancer. Beta-carotene may also increase the risk of lung cancer. Overall, the large number of clinical trials carried out on antioxidant supplements suggest that either these products have no effect on health, or that they cause a small increase in mortality in elderly or vulnerable populations. While antioxidant supplementation is widely used in attempts to prevent the development of cancer, antioxidants may interfere with cancer treatments, since the environment of cancer cells causes high levels of oxidative stress, making these cells more susceptible to the further oxidative stress induced by treatments. As a result, by reducing the redox stress in cancer cells, antioxidant supplements (and pharmaceuticals) could decrease the effectiveness of radiotherapy and chemotherapy. On the other hand, other reviews have suggested that antioxidants could reduce side effects or increase survival times. A paradox in metabolism is that, while the vast majority of complex life on Earth requires oxygen for its existence, oxygen is a highly reactive molecule that damages living organisms by producing reactive oxygen species. 
Consequently, organisms contain a complex network of antioxidant metabolites and enzymes that work together to prevent oxidative damage to cellular components such as DNA, proteins and lipids. In general, antioxidant systems either prevent these reactive species from being formed, or remove them before they can damage vital components of the cell. However, reactive oxygen species also have useful cellular functions, such as redox signaling. Thus, the function of antioxidant systems is not to remove oxidants entirely, but instead to keep them at an optimum level. The reactive oxygen species produced in cells include hydrogen peroxide (H₂O₂), hypochlorous acid (HClO), and free radicals such as the hydroxyl radical (·OH) and the superoxide anion (O₂·⁻). The hydroxyl radical is particularly unstable and will react rapidly and non-specifically with most biological molecules. This species is produced from hydrogen peroxide in metal-catalyzed redox reactions such as the Fenton reaction. These oxidants can damage cells by starting chemical chain reactions such as lipid peroxidation, or by oxidizing DNA or proteins. Damage to DNA can cause mutations and possibly cancer, if not reversed by DNA repair mechanisms, while damage to proteins causes enzyme inhibition, denaturation and protein degradation. The use of oxygen as part of the process for generating metabolic energy produces reactive oxygen species. In this process, the superoxide anion is produced as a by-product of several steps in the electron transport chain. Particularly important is the reduction of coenzyme Q in complex III, since a highly reactive free radical is formed as an intermediate (Q·⁻). This unstable intermediate can lead to electron "leakage", when electrons jump directly to oxygen and form the superoxide anion, instead of moving through the normal series of well-controlled reactions of the electron transport chain. Peroxide is also produced from the oxidation of reduced flavoproteins, such as complex I. However, although these enzymes can produce oxidants, the relative importance of the electron transfer chain to other processes that generate peroxide is unclear. In plants, algae, and cyanobacteria, reactive oxygen species are also produced during photosynthesis, particularly under conditions of high light intensity. This effect is partly offset by the involvement of carotenoids in photoinhibition, and in algae and cyanobacteria, by large amounts of iodide and selenium, which involves these antioxidants reacting with over-reduced forms of the photosynthetic reaction centres to prevent the production of reactive oxygen species. Antioxidants are classified into two broad divisions, depending on whether they are soluble in water (hydrophilic) or in lipids (lipophilic). In general, water-soluble antioxidants react with oxidants in the cell cytosol and the blood plasma, while lipid-soluble antioxidants protect cell membranes from lipid peroxidation. These compounds may be synthesized in the body or obtained from the diet. The different antioxidants are present at a wide range of concentrations in body fluids and tissues, with some such as glutathione or ubiquinone mostly present within cells, while others such as uric acid are more evenly distributed (see table below). Some antioxidants are only found in a few organisms and these compounds can be important in pathogens and can be virulence factors.
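The radical chemistry summarised above can be written out with a few standard textbook reactions; these equations are illustrative formulations added here for reference, not material drawn from this article:

\[
\mathrm{O_2 + e^- \longrightarrow O_2^{\bullet-}} \qquad \text{(electron "leakage" forming superoxide)}
\]
\[
\mathrm{Fe^{2+} + H_2O_2 \longrightarrow Fe^{3+} + OH^- + {}^{\bullet}OH} \qquad \text{(Fenton reaction)}
\]
\[
\mathrm{Fe^{3+} + O_2^{\bullet-} \longrightarrow Fe^{2+} + O_2} \qquad \text{(iron recycled by superoxide)}
\]

The hydroxyl radical generated in the Fenton step is the highly unstable species that attacks lipids, proteins and DNA non-specifically, which is why sequestering catalytic metal ions, as described below, is itself an antioxidant strategy.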
The relative importance of, and interactions between, the different antioxidants is a very complex question, with the various antioxidant compounds and antioxidant enzyme systems having synergistic and interdependent effects on one another. The action of one antioxidant may therefore depend on the proper function of other members of the antioxidant system. The amount of protection provided by any one antioxidant will also depend on its concentration, its reactivity towards the particular reactive oxygen species being considered, and the status of the antioxidants with which it interacts. Some compounds contribute to antioxidant defense by chelating transition metals and preventing them from catalyzing the production of free radicals in the cell. Particularly important is the ability to sequester iron, which is the function of iron-binding proteins such as transferrin and ferritin. Selenium and zinc are commonly referred to as "antioxidant nutrients", but these chemical elements have no antioxidant action themselves and are instead required for the activity of some antioxidant enzymes, as is discussed below. Uric acid is by far the highest-concentration antioxidant in human blood. Uric acid (UA) is an antioxidant oxypurine produced from xanthine by the enzyme xanthine oxidase, and is an intermediate product of purine metabolism. In almost all land animals, urate oxidase further catalyzes the oxidation of uric acid to allantoin, but in humans and most higher primates, the urate oxidase gene is nonfunctional, so that UA is not further broken down. The evolutionary reasons for this loss of urate conversion to allantoin remain the topic of active speculation. The antioxidant effects of uric acid have led researchers to suggest this mutation was beneficial to early primates and humans. Studies of high altitude acclimatization support the hypothesis that urate acts as an antioxidant by mitigating the oxidative stress caused by high-altitude hypoxia. Uric acid has the highest concentration of any blood antioxidant and provides over half of the total antioxidant capacity of human serum. Uric acid's antioxidant activities are also complex, given that it does not react with some oxidants, such as superoxide, but does act against peroxynitrite, peroxides, and hypochlorous acid. Elevated UA does raise concerns about contributing to gout, but it must be weighed as only one of many risk factors. By itself, UA-related risk of gout at high levels (415–530 μmol/L) is only 0.5% per year with an increase to 4.5% per year at UA supersaturation levels (535+ μmol/L). Many of these aforementioned studies determined UA's antioxidant actions within normal physiological levels, and some found antioxidant activity at levels as high as 285 μmol/L. Ascorbic acid or "vitamin C" is a monosaccharide oxidation-reduction (redox) catalyst found in both animals and plants. As one of the enzymes needed to make ascorbic acid has been lost by mutation during primate evolution, humans must obtain it from the diet; it is therefore a vitamin. Most other animals are able to produce this compound in their bodies and do not require it in their diets. Ascorbic acid is required for the conversion of procollagen to collagen by oxidizing proline residues to hydroxyproline. In other cells, it is maintained in its reduced form by reaction with glutathione, which can be catalysed by protein disulfide isomerase and glutaredoxins. Ascorbic acid is a redox catalyst which can reduce, and thereby neutralize, reactive oxygen species such as hydrogen peroxide.
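As a simplified, textbook-style sketch of the ascorbate chemistry just described (not taken from this article), ascorbate (AscH⁻) neutralises an oxidant by being oxidised through the ascorbyl radical (Asc·⁻) to dehydroascorbate (DHA), and is then regenerated at the expense of glutathione (GSH):

\[
\mathrm{AscH^- + {}^{\bullet}OH \longrightarrow Asc^{\bullet-} + H_2O}
\]
\[
\mathrm{2\,Asc^{\bullet-} + H^+ \longrightarrow AscH^- + DHA}
\]
\[
\mathrm{DHA + 2\,GSH \longrightarrow AscH^- + GSSG + H^+}
\]

The final step corresponds to the maintenance of ascorbate in its reduced form by glutathione mentioned above; in cells it is catalysed by enzymes such as glutaredoxins and protein disulfide isomerase.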
In addition to its direct antioxidant effects, ascorbic acid is also a substrate for the redox enzyme ascorbate peroxidase, a function that is particularly important in stress resistance in plants. Ascorbic acid is present at high levels in all parts of plants and can reach concentrations of 20 millimolar in chloroplasts. Glutathione is a cysteine-containing peptide found in most forms of aerobic life. It is not required in the diet and is instead synthesized in cells from its constituent amino acids. Glutathione has antioxidant properties since the thiol group in its cysteine moiety is a reducing agent and can be reversibly oxidized and reduced. In cells, glutathione is maintained in the reduced form by the enzyme glutathione reductase and in turn reduces other metabolites and enzyme systems, such as ascorbate in the glutathione-ascorbate cycle, glutathione peroxidases and glutaredoxins, as well as reacting directly with oxidants. Due to its high concentration and its central role in maintaining the cell's redox state, glutathione is one of the most important cellular antioxidants. In some organisms glutathione is replaced by other thiols, such as by mycothiol in the Actinomycetes, bacillithiol in some Gram-positive bacteria, or by trypanothione in the Kinetoplastids. Vitamin E is the collective name for a set of eight related tocopherols and tocotrienols, which are fat-soluble vitamins with antioxidant properties. Of these, α-tocopherol has been most studied as it has the highest bioavailability, with the body preferentially absorbing and metabolising this form. It has been claimed that the α-tocopherol form is the most important lipid-soluble antioxidant, and that it protects membranes from oxidation by reacting with lipid radicals produced in the lipid peroxidation chain reaction. This removes the free radical intermediates and prevents the propagation reaction from continuing. This reaction produces oxidised α-tocopheroxyl radicals that can be recycled back to the active reduced form through reduction by other antioxidants, such as ascorbate, retinol or ubiquinol. This is in line with findings showing that α-tocopherol, but not water-soluble antioxidants, efficiently protects glutathione peroxidase 4 (GPX4)-deficient cells from cell death. GPx4 is the only known enzyme that efficiently reduces lipid-hydroperoxides within biological membranes. However, the roles and importance of the various forms of vitamin E are presently unclear, and it has even been suggested that the most important function of α-tocopherol is as a signaling molecule, with this molecule having no significant role in antioxidant metabolism. The functions of the other forms of vitamin E are even less well understood, although γ-tocopherol is a nucleophile that may react with electrophilic mutagens, and tocotrienols may be important in protecting neurons from damage. Antioxidants that are reducing agents can also act as pro-oxidants. For example, vitamin C has antioxidant activity when it reduces oxidizing substances such as hydrogen peroxide; however, it will also reduce metal ions that generate free radicals through the Fenton reaction. The relative importance of the antioxidant and pro-oxidant activities of antioxidants is an area of current research, but vitamin C, which exerts its effects as a vitamin by oxidizing polypeptides, appears to have a mostly antioxidant action in the human body. However, less data is available for other dietary antioxidants, such as vitamin E, or the polyphenols.
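Two of the mechanisms described in this paragraph can be written out as standard textbook reactions (illustrative formulations, not equations drawn from this article): first, the chain-breaking action of α-tocopherol (α-TOH) on a lipid peroxyl radical (LOO·), with the tocopheroxyl radical recycled by ascorbate (AscH⁻); second, the pro-oxidant side of ascorbate, which reduces ferric iron to the ferrous form that drives Fenton chemistry:

\[
\mathrm{LOO^{\bullet} + \alpha\text{-}TOH \longrightarrow LOOH + \alpha\text{-}TO^{\bullet}}
\]
\[
\mathrm{\alpha\text{-}TO^{\bullet} + AscH^- \longrightarrow \alpha\text{-}TOH + Asc^{\bullet-}}
\]
\[
\mathrm{AscH^- + Fe^{3+} \longrightarrow Asc^{\bullet-} + Fe^{2+} + H^+}
\]

Because the tocopheroxyl radical is far less reactive than the peroxyl radical it replaces, the first two steps halt the propagation of lipid peroxidation; the third regenerates the Fe²⁺ needed for the Fenton reaction sketched earlier, which is why the same reducing agents can behave as antioxidants or as conditional pro-oxidants depending on the availability of free metal ions.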
As with ascorbate, the pathogenesis of diseases involving hyperuricemia likely involves uric acid's direct and indirect pro-oxidant properties. That is, paradoxically, agents which are normally considered antioxidants can act as conditional pro-oxidants and actually increase oxidative stress. Besides ascorbate, medically important conditional pro-oxidants include uric acid and sulfhydryl amino acids such as homocysteine. Typically, this involves some transition-series metal such as copper or iron as catalyst. The potential pro-oxidant role of uric acid in, for example, atherosclerosis and ischemic stroke is considered above. Another example is the postulated role of homocysteine in atherosclerosis. As with the chemical antioxidants, cells are protected against oxidative stress by an interacting network of antioxidant enzymes. Here, the superoxide released by processes such as oxidative phosphorylation is first converted to hydrogen peroxide and then further reduced to give water. This detoxification pathway is the result of multiple enzymes, with superoxide dismutases catalysing the first step and then catalases and various peroxidases removing hydrogen peroxide. As with antioxidant metabolites, the contributions of these enzymes to antioxidant defenses can be hard to separate from one another, but the generation of transgenic mice lacking just one antioxidant enzyme can be informative. Superoxide dismutases (SODs) are a class of closely related enzymes that catalyze the breakdown of the superoxide anion into oxygen and hydrogen peroxide. SOD enzymes are present in almost all aerobic cells and in extracellular fluids. Superoxide dismutase enzymes contain metal ion cofactors that, depending on the isozyme, can be copper, zinc, manganese or iron. In humans, the copper/zinc SOD is present in the cytosol, while manganese SOD is present in the mitochondrion. There also exists a third form of SOD in extracellular fluids, which contains copper and zinc in its active sites. The mitochondrial isozyme seems to be the most biologically important of these three, since mice lacking this enzyme die soon after birth. In contrast, mice lacking copper/zinc SOD (Sod1) are viable but have numerous pathologies and a reduced lifespan (see article on superoxide), while mice without the extracellular SOD have minimal defects (though they are sensitive to hyperoxia). In plants, SOD isozymes are present in the cytosol and mitochondria, with an iron SOD found in chloroplasts that is absent from vertebrates and yeast. Catalases are enzymes that catalyse the conversion of hydrogen peroxide to water and oxygen, using either an iron or manganese cofactor. This protein is localized to peroxisomes in most eukaryotic cells. Catalase is an unusual enzyme since, although hydrogen peroxide is its only substrate, it follows a ping-pong mechanism. Here, its cofactor is oxidised by one molecule of hydrogen peroxide and then regenerated by transferring the bound oxygen to a second molecule of substrate. Despite its apparent importance in hydrogen peroxide removal, humans with genetic deficiency of catalase — "acatalasemia" — or mice genetically engineered to lack catalase completely, suffer few ill effects. Peroxiredoxins are peroxidases that catalyze the reduction of hydrogen peroxide, organic hydroperoxides, and peroxynitrite. They are divided into three classes: typical 2-cysteine peroxiredoxins; atypical 2-cysteine peroxiredoxins; and 1-cysteine peroxiredoxins.
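The two-step detoxification pathway and the catalase ping-pong mechanism described above are conventionally summarised by the following reactions (a textbook sketch, not equations from this article), where Cat-Fe(III) denotes the resting ferric enzyme and Compound I its oxidised intermediate:

\[
\mathrm{2\,O_2^{\bullet-} + 2\,H^+ \;\xrightarrow{\;\mathrm{SOD}\;}\; H_2O_2 + O_2}
\]
\[
\mathrm{Cat\text{-}Fe(III) + H_2O_2 \longrightarrow \text{Compound I} + H_2O}
\]
\[
\mathrm{\text{Compound I} + H_2O_2 \longrightarrow Cat\text{-}Fe(III) + H_2O + O_2}
\]

The two catalase half-reactions sum to the familiar net conversion 2 H₂O₂ → 2 H₂O + O₂, with the cofactor alternately oxidised and regenerated, which is the ping-pong behaviour noted above.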
Peroxiredoxins of all three classes share the same basic catalytic mechanism, in which a redox-active cysteine (the peroxidatic cysteine) in the active site is oxidized to a sulfenic acid by the peroxide substrate. Over-oxidation of this cysteine residue in peroxiredoxins inactivates these enzymes, but this can be reversed by the action of sulfiredoxin. Peroxiredoxins seem to be important in antioxidant metabolism, as mice lacking peroxiredoxin 1 or 2 have a shortened lifespan and suffer from hemolytic anaemia, while plants use peroxiredoxins to remove hydrogen peroxide generated in chloroplasts. The thioredoxin system contains the 12-kDa protein thioredoxin and its companion thioredoxin reductase. Proteins related to thioredoxin are present in all sequenced organisms. Plants, such as "Arabidopsis thaliana," have a particularly great diversity of isoforms. The active site of thioredoxin consists of two neighboring cysteines, as part of a highly conserved CXXC motif, that can cycle between an active dithiol form (reduced) and an oxidized disulfide form. In its active state, thioredoxin acts as an efficient reducing agent, scavenging reactive oxygen species and maintaining other proteins in their reduced state. After being oxidized, the active thioredoxin is regenerated by the action of thioredoxin reductase, using NADPH as an electron donor. The glutathione system includes glutathione, glutathione reductase, glutathione peroxidases, and glutathione "S"-transferases. This system is found in animals, plants and microorganisms. Glutathione peroxidase is an enzyme containing four selenium cofactors that catalyzes the breakdown of hydrogen peroxide and organic hydroperoxides. There are at least four different glutathione peroxidase isozymes in animals. Glutathione peroxidase 1 is the most abundant and is a very efficient scavenger of hydrogen peroxide, while glutathione peroxidase 4 is most active with lipid hydroperoxides. Surprisingly, glutathione peroxidase 1 is dispensable, as mice lacking this enzyme have normal lifespans, but they are hypersensitive to induced oxidative stress. In addition, the glutathione "S"-transferases show high activity with lipid peroxides. These enzymes are at particularly high levels in the liver and also serve in detoxification metabolism. Oxidative stress is thought to contribute to the development of a wide range of diseases including Alzheimer's disease, Parkinson's disease, the pathologies caused by diabetes, rheumatoid arthritis, and neurodegeneration in motor neuron diseases. In many of these cases, it is unclear if oxidants trigger the disease, or if they are produced as a secondary consequence of the disease and from general tissue damage; one case in which this link is particularly well understood is the role of oxidative stress in cardiovascular disease. Here, low density lipoprotein (LDL) oxidation appears to trigger the process of atherogenesis, which results in atherosclerosis, and finally cardiovascular disease. Oxidative damage to DNA can cause cancer. Several antioxidant enzymes, such as superoxide dismutase, catalase, glutathione peroxidase, glutathione reductase and glutathione "S"-transferase, protect DNA from oxidative stress. It has been proposed that polymorphisms in these enzymes are associated with DNA damage and, subsequently, the individual's susceptibility to cancer. A low calorie diet extends median and maximum lifespan in many animals. This effect may involve a reduction in oxidative stress.
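For reference, the glutathione and thioredoxin cycles described in this paragraph are commonly summarised by the following textbook reactions (not drawn from this article), in which NADPH ultimately supplies the reducing power; GPx, GR and TrxR stand for glutathione peroxidase, glutathione reductase and thioredoxin reductase:

\[
\mathrm{2\,GSH + H_2O_2 \;\xrightarrow{\;\mathrm{GPx}\;}\; GSSG + 2\,H_2O}
\]
\[
\mathrm{GSSG + NADPH + H^+ \;\xrightarrow{\;\mathrm{GR}\;}\; 2\,GSH + NADP^+}
\]
\[
\mathrm{Trx\text{-}S_2 + NADPH + H^+ \;\xrightarrow{\;\mathrm{TrxR}\;}\; Trx\text{-}(SH)_2 + NADP^+}
\]

Reduced thioredoxin, Trx-(SH)₂, then restores oxidised cysteines in target proteins before cycling back to the disulfide form Trx-S₂.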
While there is some evidence to support the role of oxidative stress in aging in model organisms such as "Drosophila melanogaster" and "Caenorhabditis elegans", the evidence in mammals is less clear. Indeed, a 2009 review of experiments in mice concluded that almost all manipulations of antioxidant systems had no effect on aging. Antioxidants are used as food additives to help guard against food deterioration. Exposure to oxygen and sunlight are the two main factors in the oxidation of food, so food is preserved by keeping it in the dark and sealing it in containers or even coating it in wax, as with cucumbers. However, as oxygen is also important for plant respiration, storing plant materials in anaerobic conditions produces unpleasant flavors and unappealing colors. Consequently, packaging of fresh fruits and vegetables contains an ~8% oxygen atmosphere. Antioxidants are an especially important class of preservatives as, unlike bacterial or fungal spoilage, oxidation reactions still occur relatively rapidly in frozen or refrigerated food. These preservatives include natural antioxidants such as ascorbic acid (AA, E300) and tocopherols (E306), as well as synthetic antioxidants such as propyl gallate (PG, E310), tertiary butylhydroquinone (TBHQ), butylated hydroxyanisole (BHA, E320) and butylated hydroxytoluene (BHT, E321). The most common molecules attacked by oxidation are unsaturated fats; oxidation causes them to turn rancid. Since oxidized lipids are often discolored and usually have unpleasant tastes such as metallic or sulfurous flavors, it is important to avoid oxidation in fat-rich foods. Thus, these foods are rarely preserved by drying; instead, they are preserved by smoking, salting or fermenting. Even less fatty foods such as fruits are sprayed with sulfurous antioxidants prior to air drying. Oxidation is often catalyzed by metals, which is why fats such as butter should never be wrapped in aluminium foil or kept in metal containers. Some fatty foods such as olive oil are partially protected from oxidation by their natural content of antioxidants, but remain sensitive to photooxidation. Antioxidant preservatives are also added to fat-based cosmetics such as lipstick and moisturizers to prevent rancidity. Antioxidants are frequently added to industrial products. A common use is as stabilizers in fuels and lubricants to prevent oxidation, and in gasolines to prevent the polymerization that leads to the formation of engine-fouling residues. In 2014, the worldwide market for natural and synthetic antioxidants was US$2.25 billion with a forecast of growth to $3.25 billion by 2020. Antioxidant polymer stabilizers are widely used to prevent the degradation of polymers such as rubbers, plastics and adhesives that causes a loss of strength and flexibility in these materials. Polymers containing double bonds in their main chains, such as natural rubber and polybutadiene, are especially susceptible to oxidation and ozonolysis. They can be protected by antiozonants. Solid polymer products start to crack on exposed surfaces as the material degrades and the chains break. The mode of cracking varies between oxygen and ozone attack, the former causing a "crazy paving" effect, while ozone attack produces deeper cracks aligned at right angles to the tensile strain in the product. Oxidation and UV degradation are also frequently linked, mainly because UV radiation creates free radicals by bond breakage.
The free radicals then react with oxygen to produce peroxy radicals which cause yet further damage, often in a chain reaction. Other polymers susceptible to oxidation include polypropylene and polyethylene. The former is more sensitive owing to the presence of secondary carbon atoms present in every repeat unit. Attack occurs at this point because the free radical formed is more stable than one formed on a primary carbon atom. Oxidation of polyethylene tends to occur at weak links in the chain, such as branch points in low-density polyethylene. Antioxidant vitamins are found in vegetables, fruits, eggs, legumes and nuts. Vitamins A, C, and E can be destroyed by long-term storage or prolonged cooking. The effects of cooking and food processing are complex, as these processes can also increase the bioavailability of antioxidants, such as some carotenoids in vegetables. Processed food contains fewer antioxidant vitamins than fresh and uncooked foods, as preparation exposes food to heat and oxygen. Other antioxidants are not obtained from the diet, but instead are made in the body. For example, ubiquinol (coenzyme Q) is poorly absorbed from the gut and is made through the mevalonate pathway. Another example is glutathione, which is made from amino acids. As any glutathione in the gut is broken down to free cysteine, glycine and glutamic acid before being absorbed, even large oral intake has little effect on the concentration of glutathione in the body. Although large amounts of sulfur-containing amino acids such as acetylcysteine can increase glutathione, no evidence exists that eating high levels of these glutathione precursors is beneficial for healthy adults. Measurement of antioxidant content in food is not a straightforward process, as antioxidants collectively are a diverse group of compounds with different reactivities to various reactive oxygen species. In food science, the oxygen radical absorbance capacity (ORAC) was once an industry standard for antioxidant strength of whole foods, juices and food additives. However, the United States Department of Agriculture withdrew these ratings in 2012 as biologically invalid, stating that no physiological proof "in vivo" existed to support the free-radical theory or roles for ingested phytochemicals, especially for polyphenols. Consequently, the ORAC method, derived only from "in vitro" experiments, is no longer considered relevant to human diets or biology. Alternative "in vitro" measurements of antioxidant content in foods include the Folin-Ciocalteu reagent, and the Trolox equivalent antioxidant capacity assay. As part of their adaptation from marine life, terrestrial plants began producing non-marine antioxidants such as ascorbic acid (vitamin C), polyphenols and tocopherols. The evolution of angiosperm plants between 50 and 200 million years ago resulted in the development of many antioxidant pigments – particularly during the Jurassic period – as chemical defences against reactive oxygen species that are byproducts of photosynthesis. Originally, the term antioxidant specifically referred to a chemical that prevented the consumption of oxygen. In the late 19th and early 20th centuries, extensive study concentrated on the use of antioxidants in important industrial processes, such as the prevention of metal corrosion, the vulcanization of rubber, and the polymerization of fuels in the fouling of internal combustion engines. 
Early research on the role of antioxidants in biology focused on their use in preventing the oxidation of unsaturated fats, which is the cause of rancidity. Antioxidant activity could be measured simply by placing the fat in a closed container with oxygen and measuring the rate of oxygen consumption. However, it was the identification of vitamins A, C, and E as antioxidants that revolutionized the field and led to the realization of the importance of antioxidants in the biochemistry of living organisms. The possible mechanisms of action of antioxidants were first explored when it was recognized that a substance with anti-oxidative activity is likely to be one that is itself readily oxidized. Research into how vitamin E prevents the process of lipid peroxidation led to the identification of antioxidants as reducing agents that prevent oxidative reactions, often by scavenging reactive oxygen species before they can damage cells. Bird Birds, also known as Aves, are a group of endothermic vertebrates, characterised by feathers, toothless beaked jaws, the laying of hard-shelled eggs, a high metabolic rate, a four-chambered heart, and a strong yet lightweight skeleton. Birds live worldwide and range in size from the bee hummingbird to the ostrich. They rank as the world's most numerically successful class of tetrapods, with approximately ten thousand living species, more than half of these being passerines, sometimes known as perching birds. Birds have wings, which are more or less developed depending on the species; the only known groups without wings are the extinct moa and elephant birds. Wings, which evolved from forelimbs, gave birds the ability to fly, although further evolution has led to the loss of flight in flightless birds, including ratites, penguins, and diverse endemic island species of birds. The digestive and respiratory systems of birds are also uniquely adapted for flight. Some bird species of aquatic environments, particularly seabirds and some waterbirds, have further evolved for swimming. Reverse genetic engineering and the fossil record both demonstrate that birds are modern feathered dinosaurs, having evolved from earlier feathered dinosaurs within the theropod group, which are traditionally placed within the saurischian dinosaurs. The closest living relatives of birds are the crocodilians. Primitive bird-like dinosaurs that lie outside class Aves proper, in the broader group Avialae, have been found dating back to the mid-Jurassic period, around 170 million years ago. Many of these early "stem-birds", such as "Archaeopteryx", were not yet capable of fully powered flight, and many retained primitive characteristics like toothy jaws in place of beaks, and long bony tails. DNA-based evidence finds that birds diversified dramatically around the time of the Cretaceous–Palaeogene extinction event 66 million years ago, which killed off the pterosaurs and all the non-avian dinosaur lineages. But birds, especially those in the southern continents, survived this event and then migrated to other parts of the world while diversifying during periods of global cooling. This makes them the sole surviving dinosaurs according to cladistics. Some birds, especially corvids and parrots, are among the most intelligent animals; several bird species make and use tools, and many social species pass on knowledge across generations, which is considered a form of culture. Many species annually migrate great distances.
Birds are social, communicating with visual signals, calls, and bird songs, and participating in such social behaviours as cooperative breeding and hunting, flocking, and mobbing of predators. The vast majority of bird species are socially monogamous (referring to social living arrangement, distinct from genetic monogamy), usually for one breeding season at a time, sometimes for years, but rarely for life. Other species have breeding systems that are polygynous (arrangement of one male with many females) or, rarely, polyandrous (arrangement of one female with many males). Birds produce offspring by laying eggs which are fertilised through sexual reproduction. They are usually laid in a nest and incubated by the parents. Most birds have an extended period of parental care after hatching. Some birds, such as hens, lay eggs even when not fertilised, though unfertilised eggs do not produce offspring. Many species of birds are economically important as food for human consumption and raw material in manufacturing, with domesticated and undomesticated birds (poultry and game) being important sources of eggs, meat, and feathers. Songbirds, parrots, and other species are popular as pets. Guano (bird excrement) is harvested for use as a fertiliser. Birds prominently figure throughout human culture. About 120–130 species have become extinct due to human activity since the 17th century, and hundreds more before then. Human activity threatens about 1,200 bird species with extinction, though efforts are underway to protect them. Recreational birdwatching is an important part of the ecotourism industry. The first classification of birds was developed by Francis Willughby and John Ray in their 1676 volume "Ornithologiae". Carl Linnaeus modified that work in 1758 to devise the taxonomic classification system currently in use. Birds are categorised as the biological class Aves in Linnaean taxonomy. Phylogenetic taxonomy places Aves in the dinosaur clade Theropoda. Aves and a sister group, the clade Crocodilia, contain the only living representatives of the reptile clade Archosauria. During the late 1990s, Aves was most commonly defined phylogenetically as all descendants of the most recent common ancestor of modern birds and "Archaeopteryx lithographica". However, an earlier definition proposed by Jacques Gauthier gained wide currency in the 21st century, and is used by many scientists including adherents of the Phylocode system. Gauthier defined Aves to include only the crown group of the set of modern birds. This was done by excluding most groups known only from fossils, and assigning them, instead, to the Avialae, in part to avoid the uncertainties about the placement of "Archaeopteryx" in relation to animals traditionally thought of as theropod dinosaurs. Gauthier identified four different definitions for the same biological name "Aves", which is a problem. Gauthier proposed to reserve the term Aves only for the crown group consisting of the last common ancestor of all living birds and all of its descendants, which corresponds to meaning number 4 below. He assigned other names to the other groups. Under the fourth definition "Archaeopteryx" is an avialan, and not a member of Aves. Gauthier's proposals have been adopted by many researchers in the field of palaeontology and bird evolution, though the exact definitions applied have been inconsistent. 
Avialae, initially proposed to replace the traditional fossil content of Aves, is often used synonymously with the vernacular term "bird" by these researchers. Most researchers define Avialae as a branch-based clade, though definitions vary. Many authors have used a definition similar to "all theropods closer to birds than to "Deinonychus"." Avialae is also occasionally defined as an apomorphy-based clade (that is, one based on physical characteristics). Jacques Gauthier, who named Avialae in 1986, re-defined it in 2001 as all dinosaurs that possessed feathered wings used in flapping flight, and the birds that descended from them. Based on fossil and biological evidence, most scientists accept that birds are a specialised subgroup of theropod dinosaurs, and more specifically, they are members of Maniraptora, a group of theropods which includes dromaeosaurs and oviraptorids, among others. As scientists have discovered more theropods closely related to birds, the previously clear distinction between non-birds and birds has become blurred. Recent discoveries in the Liaoning Province of northeast China, which have revealed many small feathered theropod dinosaurs, contribute to this ambiguity. The consensus view in contemporary palaeontology is that the flying theropods, or avialans, are the closest relatives of the deinonychosaurs, which include dromaeosaurids and troodontids. Together, these form a group called Paraves. Some basal members of this group, such as "Microraptor", have features which may have enabled them to glide or fly. The most basal deinonychosaurs were very small. This evidence raises the possibility that the ancestor of all paravians may have been arboreal, have been able to glide, or both. Recent studies suggest that the first avialans, unlike "Archaeopteryx" and the non-avialan feathered dinosaurs, which primarily ate meat, were omnivores. The Late Jurassic "Archaeopteryx" is well known as one of the first transitional fossils to be found, and it provided support for the theory of evolution in the late 19th century. "Archaeopteryx" was the first fossil to display both clearly traditional reptilian characteristics (teeth, clawed fingers, and a long, lizard-like tail) and wings with flight feathers similar to those of modern birds. It is not considered a direct ancestor of birds, though it is possibly closely related to the true ancestor. The earliest known avialan fossils come from the Tiaojishan Formation of China, which has been dated to the late Jurassic period (Oxfordian stage), about 160 million years ago. The avialan species from this time period include "Anchiornis huxleyi", "Xiaotingia zhengi", and "Aurornis xui". The well-known early avialan, "Archaeopteryx", dates from slightly later Jurassic rocks (about 155 million years old) from Germany. Many of these early avialans shared unusual anatomical features that may be ancestral to modern birds, but were later lost during bird evolution. These features include enlarged claws on the second toe which may have been held clear of the ground in life, and long feathers or "hind wings" covering the hind limbs and feet, which may have been used in aerial maneuvering. Avialans diversified into a wide variety of forms during the Cretaceous Period. Many groups retained primitive characteristics, such as clawed wings and teeth, though the latter were lost independently in a number of avialan groups, including modern birds (Aves).
While the earliest forms, such as "Archaeopteryx" and "Jeholornis", retained the long bony tails of their ancestors, the tails of more advanced avialans were shortened with the advent of the pygostyle bone in the group Pygostylia. In the late Cretaceous, about 100 million years ago, the ancestors of all modern birds evolved a more open pelvis, allowing them to lay larger eggs compared to body size. Around 95 million years ago, they evolved a better sense of smell. The first large, diverse lineage of short-tailed avialans to evolve were the enantiornithes, or "opposite birds", so named because the construction of their shoulder bones was in reverse to that of modern birds. Enantiornithes occupied a wide array of ecological niches, from sand-probing shorebirds and fish-eaters to tree-dwelling forms and seed-eaters. While they were the dominant group of avialans during the Cretaceous period, enantiornithes became extinct along with many other dinosaur groups at the end of the Mesozoic era. Many species of the second major avialan lineage to diversify, the Euornithes (meaning "true birds", because they include the ancestors of modern birds), were semi-aquatic and specialised in eating fish and other small aquatic organisms. Unlike the enantiornithes, which dominated land-based and arboreal habitats, most early euornithes lacked perching adaptations and seem to have included shorebird-like species, waders, and swimming and diving species. The latter included the superficially gull-like "Ichthyornis" and the Hesperornithiformes, which became so well adapted to hunting fish in marine environments that they lost the ability to fly and became primarily aquatic. The early euornithes also saw the development of many traits associated with modern birds, such as strongly keeled breastbones and toothless, beaked portions of their jaws (though most non-avian euornithes retained teeth in other parts of the jaws). Euornithes also included the first avialans to develop a true pygostyle and a fully mobile fan of tail feathers, which may have replaced the "hind wing" as the primary mode of aerial maneuverability and braking in flight. A recent study on avian skull evolution has concluded that the ancestral neornithine had a beak similar to that of the modern hook-billed vanga and a skull similar to that of the Eurasian golden oriole, suggesting a similar ancestral ecological niche as an arboreal omnivore. All modern birds lie within the crown group Aves (alternately Neornithes), which has two subdivisions: the Palaeognathae, which includes the flightless ratites (such as the ostriches) and the weak-flying tinamous, and the extremely diverse Neognathae, containing all other birds. These two subdivisions are often given the rank of superorder, although Livezey and Zusi assigned them "cohort" rank. Depending on the taxonomic viewpoint, the number of known living bird species varies anywhere from 9,800 to 10,050. The discovery of "Vegavis", a late Cretaceous member of the Anatidae, proved that the diversification of modern birds started before the Cenozoic. The affinities of an earlier fossil, the possible galliform "Austinornis lentus", dated to about 85 million years ago, are still too controversial to provide fossil evidence of modern bird diversification. Most studies agree on a Cretaceous age for the most recent common ancestor of modern birds but estimates range from the Middle Cretaceous to the latest Late Cretaceous.
Similarly, there is no agreement on whether most of the early diversification of modern birds occurred before or after the Cretaceous–Palaeogene extinction event. This disagreement is in part caused by a divergence in the evidence; most molecular dating studies suggest a Cretaceous radiation, while fossil evidence points to a Cenozoic radiation (the so-called 'rocks' versus 'clocks' controversy). Previous attempts to reconcile molecular and fossil evidence have proved controversial, but more recent estimates, using a more comprehensive sample of fossils and a new way of calibrating molecular clocks, showed that while modern birds originated early in the Late Cretaceous, a pulse of diversification in all major groups occurred around the Cretaceous–Palaeogene extinction event. Cladogram of modern bird relationships based on Prum et al. (2015), with some clade names after Yuri et al. (2013). The classification of birds is a contentious issue. Sibley and Ahlquist's "Phylogeny and Classification of Birds" (1990) is a landmark work on the classification of birds, although it is frequently debated and constantly revised. Most evidence seems to suggest that the assignment of orders is accurate, but scientists disagree about the relationships between the orders themselves; evidence from modern bird anatomy, fossils and DNA has all been brought to bear on the problem, but no strong consensus has emerged. More recently, new fossil and molecular evidence is providing an increasingly clear picture of the evolution of modern bird orders. Birds live and breed in most terrestrial habitats and on all seven continents, reaching their southern extreme in the snow petrel's breeding colonies up to inland in Antarctica. The highest bird diversity occurs in tropical regions. It was earlier thought that this high diversity was the result of higher speciation rates in the tropics; however, recent studies found higher speciation rates in the high latitudes that were offset by greater extinction rates than in the tropics. Several families of birds have adapted to life both on the world's oceans and in them, with some seabird species coming ashore only to breed, while some penguins have been recorded diving up to deep. Many bird species have established breeding populations in areas to which they have been introduced by humans. Some of these introductions have been deliberate; the ring-necked pheasant, for example, has been introduced around the world as a game bird. Others have been accidental, such as the establishment of wild monk parakeets in several North American cities after their escape from captivity. Some species, including cattle egret, yellow-headed caracara and galah, have spread naturally far beyond their original ranges as agricultural practices created suitable new habitat. Compared with other vertebrates, birds have a body plan that shows many unusual adaptations, mostly to facilitate flight. The skeleton consists of very lightweight bones. They have large air-filled cavities (called pneumatic cavities) which connect with the respiratory system. The skull bones in adults are fused and do not show cranial sutures. The orbits are large and separated by a bony septum. The spine has cervical, thoracic, lumbar and caudal regions with the number of cervical (neck) vertebrae highly variable and especially flexible, but movement is reduced in the anterior thoracic vertebrae and absent in the later vertebrae. The last few are fused with the pelvis to form the synsacrum.
The ribs are flattened and the sternum is keeled for the attachment of flight muscles except in the flightless bird orders. The forelimbs are modified into wings. Like the reptiles, birds are primarily uricotelic, that is, their kidneys extract nitrogenous waste from their bloodstream and excrete it as uric acid instead of urea or ammonia through the ureters into the intestine. Birds do not have a urinary bladder or external urethral opening and (with the exception of the ostrich) uric acid is excreted along with faeces as a semisolid waste. However, birds such as hummingbirds can be facultatively ammonotelic, excreting most of the nitrogenous wastes as ammonia. They also excrete creatine, rather than creatinine like mammals. This material, as well as the output of the intestines, emerges from the bird's cloaca. The cloaca is a multi-purpose opening: waste is expelled through it, most birds mate by joining cloacae, and females lay eggs from it. In addition, many species of birds regurgitate pellets. Males within Palaeognathae (with the exception of the kiwis), the Anseriformes (with the exception of screamers), and in rudimentary forms in Galliformes (but fully developed in Cracidae) possess a penis, which is never present in Neoaves. The length is thought to be related to sperm competition. When not copulating, it is hidden within the proctodeum compartment within the cloaca, just inside the vent. The digestive system of birds is unique, with a crop for storage and a gizzard that contains swallowed stones for grinding food to compensate for the lack of teeth. Most birds are highly adapted for rapid digestion to aid with flight. Some migratory birds have adapted to use protein from many parts of their bodies, including protein from the intestines, as additional energy during migration. Birds have one of the most complex respiratory systems of all animal groups. Upon inhalation, 75% of the fresh air bypasses the lungs and flows directly into a posterior air sac which extends from the lungs and connects with air spaces in the bones and fills them with air. The other 25% of the air goes directly into the lungs. When the bird exhales, the used air flows out of the lungs and the stored fresh air from the posterior air sac is simultaneously forced into the lungs. Thus, a bird's lungs receive a constant supply of fresh air during both inhalation and exhalation. Sound production is achieved using the syrinx, a muscular chamber incorporating multiple tympanic membranes which diverges from the lower end of the trachea; the trachea being elongated in some species, increasing the volume of vocalisations and the perception of the bird's size. In birds, the main arteries taking blood away from the heart originate from the right aortic arch (or pharyngeal arch), unlike in the mammals where the left aortic arch forms this part of the aorta. The postcava receives blood from the limbs via the renal portal system. Unlike in mammals, the circulating red blood cells in birds retain their nucleus. The avian circulatory system is driven by a four-chambered, myogenic heart contained in a fibrous pericardial sac. This pericardial sac is filled with a serous fluid for lubrication. The heart itself is divided into a right and left half, each with an atrium and ventricle. The atrium and ventricles of each side are separated by atrioventricular valves which prevent back flow from one chamber to the next during contraction.
Being myogenic, the heart's pace is maintained by pacemaker cells found in the sinoatrial node, located on the right atrium. The sinoatrial node uses calcium to cause a depolarising signal transduction pathway from the atrium through the right and left atrioventricular bundles, which communicate contraction to the ventricles. The avian heart also consists of muscular arches that are made up of thick bundles of muscular layers. Much like a mammalian heart, the avian heart is composed of endocardial, myocardial and epicardial layers. The atrium walls tend to be thinner than the ventricle walls, due to the intense ventricular contraction used to pump oxygenated blood throughout the body. Avian hearts are generally larger than mammalian hearts when compared to body mass. This adaptation allows more blood to be pumped to meet the high metabolic need associated with flight. Birds have a very efficient system for diffusing oxygen into the blood; birds have a ten times greater surface area to gas exchange volume ratio than mammals. As a result, birds have more blood in their capillaries per unit of volume of lung than a mammal. The arteries are composed of thick elastic muscles to withstand the pressure of the ventricular constriction, and become more rigid as they move away from the heart. Blood moves through the arteries, which undergo vasoconstriction, and into arterioles which act as a transportation system to distribute primarily oxygen as well as nutrients to all tissues of the body. As the arterioles move away from the heart and into individual organs and tissues they are further divided to increase surface area and slow blood flow. Blood travels through the arterioles and moves into the capillaries where gas exchange can occur. Capillaries are organized into capillary beds in tissues; it is here that blood exchanges oxygen for carbon dioxide waste. In the capillary beds blood flow is slowed to allow maximum diffusion of oxygen into the tissues. Once the blood has become deoxygenated it travels through venules, then veins, and back to the heart. Veins, unlike arteries, are thin and rigid as they do not need to withstand extreme pressure. As blood travels through the venules to the veins, a funneling called vasodilation occurs, bringing blood back to the heart. Once the blood reaches the heart it moves first into the right atrium, then the right ventricle to be pumped through the lungs for further gas exchange of carbon dioxide waste for oxygen. Oxygenated blood then flows from the lungs through the left atrium to the left ventricle where it is pumped out to the body. The nervous system is large relative to the bird's size. The most developed part of the brain is the one that controls the flight-related functions, while the cerebellum coordinates movement and the cerebrum controls behaviour patterns, navigation, mating and nest building. Most birds have a poor sense of smell with notable exceptions including kiwis, New World vultures and tubenoses. The avian visual system is usually highly developed. Water birds have special flexible lenses, allowing accommodation for vision in air and water. Some species also have dual fovea. Birds are tetrachromatic, possessing ultraviolet (UV) sensitive cone cells in the eye as well as green, red and blue ones. They also have double cones, likely to mediate achromatic vision.
Many birds show plumage patterns in ultraviolet that are invisible to the human eye; some birds whose sexes appear similar to the naked eye are distinguished by the presence of ultraviolet reflective patches on their feathers. Male blue tits have an ultraviolet reflective crown patch which is displayed in courtship by posturing and raising of their nape feathers. Ultraviolet light is also used in foraging—kestrels have been shown to search for prey by detecting the UV reflective urine trail marks left on the ground by rodents. With the exception of pigeons and a few other species, the eyelids of birds are not used in blinking. Instead the eye is lubricated by the nictitating membrane, a third eyelid that moves horizontally. The nictitating membrane also covers the eye and acts as a contact lens in many aquatic birds. The bird retina has a fan shaped blood supply system called the pecten. Most birds cannot move their eyes, although there are exceptions, such as the great cormorant. Birds with eyes on the sides of their heads have a wide visual field, while birds with eyes on the front of their heads, such as owls, have binocular vision and can estimate the depth of field. The avian ear lacks external pinnae but is covered by feathers, although in some birds, such as the "Asio", "Bubo" and "Otus" owls, these feathers form tufts which resemble ears. The inner ear has a cochlea, but it is not spiral as in mammals. A few species are able to use chemical defences against predators; some Procellariiformes can eject an unpleasant stomach oil against an aggressor, and some species of pitohuis from New Guinea have a powerful neurotoxin in their skin and feathers. A lack of field observations limits our knowledge, but intraspecific conflicts are known to sometimes result in injury or death. The screamers (Anhimidae), some jacanas ("Jacana", "Hydrophasianus"), the spur-winged goose ("Plectropterus"), the torrent duck ("Merganetta") and nine species of lapwing ("Vanellus") use a sharp spur on the wing as a weapon. The steamer ducks ("Tachyeres"), geese and swans ("Anserinae"), the solitaire ("Pezophaps"), sheathbills ("Chionis"), some guans ("Crax") and stone curlews ("Burhinus") use a bony knob on the alular metacarpal to punch and hammer opponents. The jacanas "Actophilornis" and "Irediparra" have an expanded, blade-like radius. The extinct "Xenicibis" was unique in having an elongate forelimb and massive hand which likely functioned in combat or defence as a jointed club or flail. Swans, for instance, may strike with the bony spurs and bite when defending eggs or young. Birds have two sexes: either female or male. The sex of birds is determined by the Z and W sex chromosomes, rather than by the X and Y chromosomes present in mammals. Male birds have two Z chromosomes (ZZ), and female birds have a W chromosome and a Z chromosome (WZ). In nearly all species of birds, an individual's sex is determined at fertilisation. However, one recent study demonstrated temperature-dependent sex determination among the Australian brushturkey, for which higher temperatures during incubation resulted in a higher female-to-male sex ratio. This, however, was later proven not to be the case. These birds do not exhibit temperature-dependent sex determination, but temperature-dependent sex mortality. Feathers are a feature characteristic of birds (though also present in some dinosaurs not currently considered to be true birds).
They facilitate flight, provide insulation that aids in thermoregulation, and are used in display, camouflage, and signalling. There are several types of feathers, each serving its own set of purposes. Feathers are epidermal growths attached to the skin and arise only in specific tracts of skin called pterylae. The distribution pattern of these feather tracts (pterylosis) is used in taxonomy and systematics. The arrangement and appearance of feathers on the body, called plumage, may vary within species by age, social status, and sex. Plumage is regularly moulted; the standard plumage of a bird that has moulted after breeding is known as the "non-breeding" plumage, or—in the Humphrey-Parkes terminology—"basic" plumage; breeding plumages or variations of the basic plumage are known under the Humphrey-Parkes system as "alternate" plumages. Moulting is annual in most species, although some may have two moults a year, and large birds of prey may moult only once every few years. Moulting patterns vary across species. In passerines, flight feathers are replaced one at a time with the innermost being the first. When the fifth or sixth primary is replaced, the outermost tertiaries begin to drop. After the innermost tertiaries are moulted, the secondaries starting from the innermost begin to drop and this proceeds to the outer feathers (centrifugal moult). The greater primary coverts are moulted in synchrony with the primary that they overlap. A small number of species, such as ducks and geese, lose all of their flight feathers at once, temporarily becoming flightless. As a general rule, the tail feathers are moulted and replaced starting with the innermost pair. Centripetal moults of tail feathers are, however, seen in the Phasianidae. The centrifugal moult is modified in the tail feathers of woodpeckers and treecreepers, in that it begins with the second innermost pair of feathers and finishes with the central pair of feathers so that the bird maintains a functional climbing tail. The general pattern seen in passerines is that the primaries are replaced outward, secondaries inward, and the tail from centre outward. Before nesting, the females of most bird species gain a bare brood patch by losing feathers close to the belly. The skin there is well supplied with blood vessels and helps the bird in incubation. Feathers require maintenance and birds preen or groom them daily, spending an average of around 9% of their daily time on this. The bill is used to brush away foreign particles and to apply waxy secretions from the uropygial gland; these secretions protect the feathers' flexibility and act as an antimicrobial agent, inhibiting the growth of feather-degrading bacteria. This may be supplemented with the secretions of formic acid from ants, which birds receive through a behaviour known as anting, to remove feather parasites. The scales of birds are composed of the same keratin as beaks, claws, and spurs. They are found mainly on the toes and metatarsus, but may be found further up on the ankle in some birds. Most bird scales do not overlap significantly, except in the cases of kingfishers and woodpeckers. The scales of birds are thought to be homologous to those of reptiles and mammals. Most birds can fly, which distinguishes them from almost all other vertebrate classes. Flight is the primary means of locomotion for most bird species and is used for searching for food and for escaping from predators.
Birds have various adaptations for flight, including a lightweight skeleton, two large flight muscles, the pectoralis (which accounts for 15% of the total mass of the bird) and the supracoracoideus, as well as a modified forelimb (wing) that serves as an aerofoil. Wing shape and size generally determine a bird's flight style and performance; many birds combine powered, flapping flight with less energy-intensive soaring flight. About 60 extant bird species are flightless, as were many extinct birds. Flightlessness often arises in birds on isolated islands, probably due to limited resources and the absence of land predators. Though flightless, penguins use similar musculature and movements to "fly" through the water, as do auks, shearwaters and dippers. Most birds are diurnal, but some birds, such as many species of owls and nightjars, are nocturnal or crepuscular (active during twilight hours), and many coastal waders feed when the tides are appropriate, by day or night. Bird diets are varied and often include nectar, fruit, plants, seeds, carrion, and various small animals, including other birds. Because birds have no teeth, their digestive system is adapted to process unmasticated food items that are swallowed whole. Birds that employ many strategies to obtain food or feed on a variety of food items are called generalists, while others that concentrate time and effort on specific food items or have a single strategy to obtain food are considered specialists. Birds' feeding strategies vary by species. Many birds glean for insects, invertebrates, fruit, or seeds. Some hunt insects by suddenly attacking from a branch. Those species that seek pest insects are considered beneficial 'biological control agents' and their presence is encouraged in biological pest control programmes. Combined, insectivorous birds eat 400–500 million metric tons of arthropods annually. Nectar feeders such as hummingbirds, sunbirds, lories, and lorikeets, amongst others, have specially adapted brushy tongues and in many cases bills designed to fit co-adapted flowers. Kiwis and shorebirds with long bills probe for invertebrates; shorebirds' varied bill lengths and feeding methods result in the separation of ecological niches. Loons, diving ducks, penguins and auks pursue their prey underwater, using their wings or feet for propulsion, while aerial predators such as sulids, kingfishers and terns plunge dive after their prey. Flamingos, three species of prion, and some ducks are filter feeders. Geese and dabbling ducks are primarily grazers. Some species, including frigatebirds, gulls, and skuas, engage in kleptoparasitism, stealing food items from other birds. Kleptoparasitism is thought to be a supplement to food obtained by hunting, rather than a significant part of any species' diet; a study of great frigatebirds stealing from masked boobies estimated that the frigatebirds stole at most 40% of their food and on average stole only 5%. Other birds are scavengers; some of these, like vultures, are specialised carrion eaters, while others, like gulls, corvids, or other birds of prey, are opportunists. Water is needed by many birds although their mode of excretion and lack of sweat glands reduces the physiological demands. Some desert birds can obtain their water needs entirely from moisture in their food. They may also have other adaptations such as allowing their body temperature to rise, saving on moisture loss from evaporative cooling or panting.
Seabirds can drink seawater and have salt glands inside the head that eliminate excess salt through the nostrils. Most birds scoop water in their beaks and raise their head to let water run down the throat. Some species, especially of arid zones, belonging to the pigeon, finch, mousebird, button-quail and bustard families are capable of sucking up water without the need to tilt back their heads. Some desert birds depend on water sources, and sandgrouse are particularly well known for their daily congregations at waterholes. Nesting sandgrouse and many plovers carry water to their young by wetting their belly feathers. Some birds carry water for chicks at the nest in their crop or regurgitate it along with food. The pigeon family, flamingos and penguins have adaptations to produce a nutritive fluid called crop milk that they provide to their chicks. Feathers, being critical to the survival of a bird, require maintenance. Apart from physical wear and tear, feathers face the onslaught of fungi, ectoparasitic feather mites and birdlice. The physical condition of feathers is maintained by preening, often with the application of secretions from the uropygial gland. Birds also bathe in water or dust themselves. While some birds dip into shallow water, more aerial species may make aerial dips into water, and arboreal species often make use of dew or rain that collects on leaves. Birds of arid regions make use of loose soil to dust-bathe. A behaviour termed anting, in which the bird encourages ants to run through its plumage, is also thought to help reduce the ectoparasite load in feathers. Many species will spread out their wings and expose them to direct sunlight, and this too is thought to help in reducing fungal and ectoparasitic activity that may lead to feather damage. Many bird species migrate to take advantage of global differences of seasonal temperatures, therefore optimising availability of food sources and breeding habitat. These migrations vary among the different groups. Many landbirds, shorebirds, and waterbirds undertake annual long distance migrations, usually triggered by the length of daylight as well as weather conditions. These birds are characterised by a breeding season spent in the temperate or polar regions and a non-breeding season in the tropical regions or opposite hemisphere. Before migration, birds substantially increase body fats and reserves and reduce the size of some of their organs. Migration is highly demanding energetically, particularly as birds need to cross deserts and oceans without refuelling. Landbirds generally have a shorter non-stop flight range than shorebirds, and the bar-tailed godwit is capable of non-stop flights of more than 10,000 km. Seabirds also undertake long migrations, the longest annual migration being those of sooty shearwaters, which nest in New Zealand and Chile and spend the northern summer feeding in the North Pacific off Japan, Alaska and California, an annual round trip of around 64,000 km. Other seabirds disperse after breeding, travelling widely but having no set migration route. Albatrosses nesting in the Southern Ocean often undertake circumpolar trips between breeding seasons. Some bird species undertake shorter migrations, travelling only as far as is required to avoid bad weather or obtain food. Irruptive species such as the boreal finches are one such group and can commonly be found at a location in one year and absent the next. This type of migration is normally associated with food availability.
Species may also travel shorter distances over part of their range, with individuals from higher latitudes travelling into the existing range of conspecifics; others undertake partial migrations, where only a fraction of the population, usually females and subdominant males, migrates. Partial migration can form a large percentage of the migration behaviour of birds in some regions; in Australia, surveys found that 44% of non-passerine birds and 32% of passerines were partially migratory. Altitudinal migration is a form of short distance migration in which birds spend the breeding season at higher altitudes and move to lower ones during suboptimal conditions. It is most often triggered by temperature changes and usually occurs when the normal territories also become inhospitable due to lack of food. Some species may also be nomadic, holding no fixed territory and moving according to weather and food availability. Parrots as a family are overwhelmingly neither migratory nor sedentary, but are considered to be dispersive, irruptive, or nomadic, or to undertake small and irregular migrations. The ability of birds to return to precise locations across vast distances has been known for some time; in an experiment conducted in the 1950s, a Manx shearwater released in Boston in the United States returned to its colony on Skomer, in Wales, within 13 days, a distance of about 5,000 km. Birds navigate during migration using a variety of methods. For diurnal migrants, the sun is used to navigate by day, and a stellar compass is used at night. Birds that use the sun compensate for the changing position of the sun during the day by the use of an internal clock. Orientation with the stellar compass depends on the position of the constellations surrounding Polaris. These are backed up in some species by their ability to sense the Earth's geomagnetism through specialised photoreceptors. Birds communicate using primarily visual and auditory signals. Signals can be interspecific (between species) and intraspecific (within species). Birds sometimes use plumage to assess and assert social dominance, to display breeding condition in sexually selected species, or to make threatening displays, as in the sunbittern's mimicry of a large predator to ward off hawks and protect young chicks. Variation in plumage also allows for the identification of birds, particularly between species. Visual communication among birds may also involve ritualised displays, which have developed from non-signalling actions such as preening, the adjustments of feather position, pecking, or other behaviour. These displays may signal aggression or submission or may contribute to the formation of pair-bonds. The most elaborate displays occur during courtship, where "dances" are often formed from complex combinations of many possible component movements; males' breeding success may depend on the quality of such displays. Bird calls and songs, which are produced in the syrinx, are the major means by which birds communicate with sound. This communication can be very complex; some species can operate the two sides of the syrinx independently, allowing the simultaneous production of two different songs.
Calls are used for a variety of purposes, including mate attraction, evaluation of potential mates, bond formation, the claiming and maintenance of territories, the identification of other individuals (such as when parents look for chicks in colonies or when mates reunite at the start of breeding season), and the warning of other birds of potential predators, sometimes with specific information about the nature of the threat. Some birds also use mechanical sounds for auditory communication. The "Coenocorypha" snipes of New Zealand drive air through their feathers, woodpeckers drum for long distance communication, and palm cockatoos use tools to drum. While some birds are essentially territorial or live in small family groups, other birds may form large flocks. The principal benefits of flocking are safety in numbers and increased foraging efficiency. Defence against predators is particularly important in closed habitats like forests, where ambush predation is common and multiple eyes can provide a valuable early warning system. This has led to the development of many mixed-species feeding flocks, which are usually composed of small numbers of many species; these flocks provide safety in numbers but increase potential competition for resources. Costs of flocking include bullying of socially subordinate birds by more dominant birds and the reduction of feeding efficiency in certain cases. Birds sometimes also form associations with non-avian species. Plunge-diving seabirds associate with dolphins and tuna, which push shoaling fish towards the surface. Hornbills have a mutualistic relationship with dwarf mongooses, in which they forage together and warn each other of nearby birds of prey and other predators. The high metabolic rates of birds during the active part of the day are supplemented by rest at other times. Sleeping birds often use a type of sleep known as vigilant sleep, where periods of rest are interspersed with quick eye-opening "peeks", allowing them to be sensitive to disturbances and enabling rapid escape from threats. Swifts are believed to be able to sleep in flight, and radar observations suggest that they orient themselves to face the wind in their roosting flight. It has been suggested that there may be certain kinds of sleep which are possible even when in flight. Some birds have also demonstrated the capacity to fall into slow-wave sleep one hemisphere of the brain at a time. Birds tend to exercise this ability depending upon their position relative to the outside of the flock. This may allow the eye opposite the sleeping hemisphere to remain vigilant for predators by viewing the outer margins of the flock. This adaptation is also known from marine mammals. Communal roosting is common because it lowers the loss of body heat and decreases the risks associated with predators. Roosting sites are often chosen with regard to thermoregulation and safety. Many sleeping birds bend their heads over their backs and tuck their bills in their back feathers, although others place their beaks among their breast feathers. Many birds rest on one leg, while some may pull up their legs into their feathers, especially in cold weather. Perching birds have a tendon locking mechanism that helps them hold on to the perch when they are asleep. Many ground birds, such as quails and pheasants, roost in trees. A few parrots of the genus "Loriculus" roost hanging upside down. Some hummingbirds go into a nightly state of torpor accompanied with a reduction of their metabolic rates.
This physiological adaptation is shown by nearly a hundred other species, including owlet-nightjars, nightjars, and woodswallows. One species, the common poorwill, even enters a state of hibernation. Birds do not have sweat glands, but they may cool themselves by moving to shade, standing in water, panting, increasing their surface area, fluttering their throat, or by using special behaviours like urohidrosis. Ninety-five per cent of bird species are socially monogamous. These species pair for at least the length of the breeding season or—in some cases—for several years or until the death of one mate. Monogamy allows for both paternal care and biparental care, which is especially important for species in which females require males' assistance for successful brood-rearing. Among many socially monogamous species, extra-pair copulation (infidelity) is common. Such behaviour typically occurs between dominant males and females paired with subordinate males, but may also be the result of forced copulation in ducks and other anatids. Female birds have sperm storage mechanisms that allow sperm from males to remain viable long after copulation, a hundred days in some species. Sperm from multiple males may compete through this mechanism. For females, possible benefits of extra-pair copulation include getting better genes for her offspring and insuring against the possibility of infertility in her mate. Males of species that engage in extra-pair copulations will closely guard their mates to ensure the parentage of the offspring that they raise. Other mating systems, including polygyny, polyandry, polygamy, polygynandry, and promiscuity, also occur. Polygamous breeding systems arise when females are able to raise broods without the help of males. Some species may use more than one system depending on the circumstances. Breeding usually involves some form of courtship display, typically performed by the male. Most displays are rather simple and involve some type of song. Some displays, however, are quite elaborate. Depending on the species, these may include wing or tail drumming, dancing, aerial flights, or communal lekking. Females are generally the ones that drive partner selection, although in the polyandrous phalaropes, this is reversed: plainer males choose brightly coloured females. Courtship feeding, billing and allopreening are commonly performed between partners, generally after the birds have paired and mated. Homosexual behaviour has been observed in males or females in numerous species of birds, including copulation, pair-bonding, and joint parenting of chicks. Many birds actively defend a territory from others of the same species during the breeding season; maintenance of territories protects the food source for their chicks. Species that are unable to defend feeding territories, such as seabirds and swifts, often breed in colonies instead; this is thought to offer protection from predators. Colonial breeders defend small nesting sites, and competition between and within species for nesting sites can be intense. All birds lay amniotic eggs with hard shells made mostly of calcium carbonate. Hole and burrow nesting species tend to lay white or pale eggs, while open nesters lay camouflaged eggs. There are many exceptions to this pattern, however; the ground-nesting nightjars have pale eggs, and camouflage is instead provided by their plumage.
Species that are victims of brood parasites have varying egg colours to improve the chances of spotting a parasite's egg, which forces female parasites to match their eggs to those of their hosts. Bird eggs are usually laid in a nest. Most species create somewhat elaborate nests, which can be cups, domes, plates, beds, scrapes, mounds, or burrows. Some bird nests, however, are extremely primitive; albatross nests are no more than a scrape on the ground. Most birds build nests in sheltered, hidden areas to avoid predation, but large or colonial birds—which are more capable of defence—may build more open nests. During nest construction, some species seek out plant matter from plants with parasite-reducing toxins to improve chick survival, and feathers are often used for nest insulation. Some bird species have no nests; the cliff-nesting common guillemot lays its eggs on bare rock, and male emperor penguins keep eggs between their body and feet. The absence of nests is especially prevalent in ground-nesting species where the newly hatched young are precocial. Incubation, which optimises temperature for chick development, usually begins after the last egg has been laid. In monogamous species incubation duties are often shared, whereas in polygamous species one parent is wholly responsible for incubation. Warmth from parents passes to the eggs through brood patches, areas of bare skin on the abdomen or breast of the incubating birds. Incubation can be an energetically demanding process; adult albatrosses, for instance, lose a substantial amount of body weight for each day of incubation. The warmth for the incubation of the eggs of megapodes comes from the sun, decaying vegetation or volcanic sources. Incubation periods range from 10 days (in woodpeckers, cuckoos and passerine birds) to over 80 days (in albatrosses and kiwis). The diversity of characteristics of birds is great, sometimes even in closely related species. At the time of their hatching, chicks range in development from helpless to independent, depending on their species. Helpless chicks are termed "altricial", and tend to be born small, blind, immobile and naked; chicks that are mobile and feathered upon hatching are termed "precocial". Altricial chicks need help thermoregulating and must be brooded for longer than precocial chicks. The young of many bird species do not precisely fit into either the precocial or altricial category, having some aspects of each, and thus fall somewhere on an "altricial-precocial spectrum". Chicks at neither extreme but favouring one or the other may be termed semi-precocial or semi-altricial. The length and nature of parental care varies widely amongst different orders and species. At one extreme, parental care in megapodes ends at hatching; the newly hatched chick digs itself out of the nest mound without parental assistance and can fend for itself immediately. At the other extreme, many seabirds have extended periods of parental care, the longest being that of the great frigatebird, whose chicks take up to six months to fledge and are fed by the parents for up to an additional 14 months. The "chick guard stage" describes the period of breeding during which one of the adult birds is permanently present at the nest after chicks have hatched. The main purpose of the guard stage is to aid offspring to thermoregulate and protect them from predation. In some species, both parents care for nestlings and fledglings; in others, such care is the responsibility of only one sex.
In some species, other members of the same species—usually close relatives of the breeding pair, such as offspring from previous broods—will help with the raising of the young. Such alloparenting is particularly common among the Corvida, which includes such birds as the true crows, Australian magpie and fairy-wrens, but has been observed in species as different as the rifleman and red kite. Among most groups of animals, male parental care is rare. In birds, however, it is quite common—more so than in any other vertebrate class. Though territory and nest site defence, incubation, and chick feeding are often shared tasks, there is sometimes a division of labour in which one mate undertakes all or most of a particular duty. The point at which chicks fledge varies dramatically. The chicks of the "Synthliboramphus" murrelets, like the ancient murrelet, leave the nest the night after they hatch, following their parents out to sea, where they are raised away from terrestrial predators. Some other species, such as ducks, move their chicks away from the nest at an early age. In most species, chicks leave the nest just before, or soon after, they are able to fly. The amount of parental care after fledging varies; albatross chicks leave the nest on their own and receive no further help, while other species continue some supplementary feeding after fledging. Chicks may also follow their parents during their first migration. Brood parasitism, in which an egg-layer leaves her eggs with another individual's brood, is more common among birds than any other type of organism. After a parasitic bird lays her eggs in another bird's nest, they are often accepted and raised by the host at the expense of the host's own brood. Brood parasites may be either "obligate brood parasites", which must lay their eggs in the nests of other species because they are incapable of raising their own young, or "non-obligate brood parasites", which sometimes lay eggs in the nests of conspecifics to increase their reproductive output even though they could have raised their own young. One hundred bird species, including honeyguides, icterids, and ducks, are obligate parasites, though the most famous are the cuckoos. Some brood parasites are adapted to hatch before their host's young, which allows them to destroy the host's eggs by pushing them out of the nest or to kill the host's chicks; this ensures that all food brought to the nest will be fed to the parasitic chicks. Birds have evolved a variety of mating behaviours, with the peacock tail being perhaps the most famous example of sexual selection and the Fisherian runaway. Commonly occurring sexual dimorphisms such as size and colour differences are energetically costly attributes that signal competitive breeding situations. Many types of avian sexual selection have been identified: intersexual selection, also known as female choice, and intrasexual competition, where individuals of the more abundant sex compete with each other for the privilege to mate. Sexually selected traits often evolve to become more pronounced in competitive breeding situations until the trait begins to limit the individual's fitness. Conflicts between an individual's fitness and signalling adaptations ensure that sexually selected ornaments such as plumage coloration and courtship behaviour are "honest" traits. Signals must be costly to ensure that only good-quality individuals can present these exaggerated sexual ornaments and behaviours.
Inbreeding causes early death (inbreeding depression) in the zebra finch "Taeniopygia guttata". Embryo survival (that is, hatching success of fertile eggs) was significantly lower for sib-sib mating pairs than for unrelated pairs. Darwin's finch "Geospiza scandens" experiences inbreeding depression (reduced survival of offspring), and the magnitude of this effect is influenced by environmental conditions such as low food availability. Incestuous matings by the purple-crowned fairy-wren "Malurus coronatus" result in severe fitness costs due to inbreeding depression (greater than 30% reduction in hatchability of eggs). Females paired with related males may undertake extra-pair matings, which are reported in some 90% of avian species and can reduce the negative effects of inbreeding. However, there are ecological and demographic constraints on extra-pair matings. Nevertheless, 43% of broods produced by incestuously paired females contained extra-pair young. Inbreeding depression occurs in the great tit ("Parus major") when the offspring produced as a result of a mating between close relatives show reduced fitness. In natural populations of "Parus major", inbreeding is avoided by dispersal of individuals from their birthplace, which reduces the chance of mating with a close relative. Southern pied babblers "Turdoides bicolor" appear to avoid inbreeding in two ways. The first is through dispersal, and the second is by avoiding familiar group members as mates. Although both males and females disperse locally, they move outside the range where genetically related individuals are likely to be encountered. Within their group, individuals only acquire breeding positions when the opposite-sex breeder is unrelated. Cooperative breeding in birds typically occurs when offspring, usually males, delay dispersal from their natal group in order to remain with the family to help rear younger kin. Female offspring rarely stay at home, dispersing over distances that allow them to breed independently, or to join unrelated groups. In general, inbreeding is avoided because it leads to a reduction in progeny fitness (inbreeding depression) due largely to the homozygous expression of deleterious recessive alleles. Cross-fertilisation between unrelated individuals ordinarily leads to the masking of deleterious recessive alleles in progeny. Birds occupy a wide range of ecological positions. While some birds are generalists, others are highly specialised in their habitat or food requirements. Even within a single habitat, such as a forest, the niches occupied by different species of birds vary, with some species feeding in the forest canopy, others beneath the canopy, and still others on the forest floor. Forest birds may be insectivores, frugivores, and nectarivores. Aquatic birds generally feed by fishing, plant eating, and piracy or kleptoparasitism. Birds of prey specialise in hunting mammals or other birds, while vultures are specialised scavengers. Avivores are animals that are specialised at preying on birds. Some nectar-feeding birds are important pollinators, and many frugivores play a key role in seed dispersal. Plants and pollinating birds often coevolve, and in some cases a flower's primary pollinator is the only species capable of reaching its nectar. Birds are often important to island ecology. Birds have frequently reached islands that mammals have not; on those islands, birds may fulfil ecological roles typically played by larger animals.
For example, in New Zealand the moas were important browsers, as are the kereru and kokako today. Today the plants of New Zealand retain the defensive adaptations evolved to protect them from the extinct moa. Nesting seabirds may also affect the ecology of islands and surrounding seas, principally through the concentration of large quantities of guano, which may enrich the local soil and the surrounding seas. A wide variety of field methods, including counts, nest monitoring, and capturing and marking, are used for researching avian ecology. Since birds are highly visible and common animals, humans have had a relationship with them since the dawn of man. Sometimes, these relationships are mutualistic, like the cooperative honey-gathering among honeyguides and African peoples such as the Borana. Other times, they may be commensal, as when species such as the house sparrow have benefited from human activities. Several bird species have become commercially significant agricultural pests, and some pose an aviation hazard. Human activities can also be detrimental, and have threatened numerous bird species with extinction (hunting, avian lead poisoning, pesticides, roadkill, wind turbine kills and predation by pet cats and dogs are common sources of death for birds). Birds can act as vectors for spreading diseases such as psittacosis, salmonellosis, campylobacteriosis, mycobacteriosis (avian tuberculosis), avian influenza (bird flu), giardiasis, and cryptosporidiosis over long distances. Some of these are zoonotic diseases that can also be transmitted to humans. Domesticated birds raised for meat and eggs, called poultry, are the largest source of animal protein eaten by humans; in 2003 alone, tens of millions of tonnes of poultry meat and eggs were produced worldwide. Chickens account for much of human poultry consumption, though domesticated turkeys, ducks, and geese are also relatively common. Many species of birds are also hunted for meat. Bird hunting is primarily a recreational activity except in extremely undeveloped areas. The most important birds hunted in North and South America are waterfowl; other widely hunted birds include pheasants, wild turkeys, quail, doves, partridge, grouse, snipe, and woodcock. Muttonbirding is also popular in Australia and New Zealand. Though some hunting, such as that of muttonbirds, may be sustainable, hunting has led to the extinction or endangerment of dozens of species. Other commercially valuable products from birds include feathers (especially the down of geese and ducks), which are used as insulation in clothing and bedding, and seabird faeces (guano), which is a valuable source of phosphorus and nitrogen. The War of the Pacific, sometimes called the Guano War, was fought in part over the control of guano deposits. Birds have been domesticated by humans both as pets and for practical purposes. Colourful birds, such as parrots and mynas, are bred in captivity or kept as pets, a practice that has led to the illegal trafficking of some endangered species. Falcons and cormorants have long been used for hunting and fishing, respectively. Messenger pigeons, used since at least 1 AD, remained important as recently as World War II. Today, such activities are more common either as hobbies, for entertainment and tourism, or for sports such as pigeon racing. Amateur bird enthusiasts (called birdwatchers, twitchers or, more commonly, birders) number in the millions. Many homeowners erect bird feeders near their homes to attract various species.
Bird feeding has grown into a multimillion-dollar industry; for example, an estimated 75% of households in Britain provide food for birds at some point during the winter. Birds play prominent and diverse roles in religion and mythology. In religion, birds may serve as either messengers or priests and leaders for a deity, such as in the Cult of Makemake, in which the Tangata manu of Easter Island served as chiefs, or as attendants, as in the case of Hugin and Munin, the two common ravens who whispered news into the ears of the Norse god Odin. In several civilisations of ancient Italy, particularly Etruscan and Roman religion, priests were involved in augury, or interpreting the words of birds, while the "auspex" (from which the word "auspicious" is derived) watched their activities to foretell events. They may also serve as religious symbols, as when Jonah (Hebrew: יוֹנָה, dove) embodied the fright, passivity, mourning, and beauty traditionally associated with doves. Birds have themselves been deified, as in the case of the common peacock, which is perceived as Mother Earth by the Dravidians of India. In the ancient world, doves were used as symbols of the Mesopotamian goddess Inanna (later known as Ishtar), the Canaanite mother goddess Asherah, and the Greek goddess Aphrodite. In ancient Greece, Athena, the goddess of wisdom and patron deity of the city of Athens, had a little owl as her symbol. In religious images preserved from the Inca and Tiwanaku empires, birds are depicted in the process of transgressing boundaries between earthly and underground spiritual realms. Indigenous peoples of the central Andes maintain legends of birds passing to and from metaphysical worlds. Birds have featured in culture and art since prehistoric times, when they were represented in early cave paintings. Some birds have been perceived as monsters, including the mythological Roc and the Māori's legendary "Pouākai", a giant bird capable of snatching humans. Birds were later used as symbols of power, as in the magnificent Peacock Throne of the Mughal and Persian emperors. With the advent of scientific interest in birds, many paintings of birds were commissioned for books. Among the most famous of these bird artists was John James Audubon, whose paintings of North American birds were a great commercial success in Europe and who later lent his name to the National Audubon Society. Birds are also important figures in poetry; for example, Homer incorporated nightingales into his "Odyssey", and Catullus used a sparrow as an erotic symbol in his Catullus 2. The relationship between an albatross and a sailor is the central theme of Samuel Taylor Coleridge's "The Rime of the Ancient Mariner", which led to the use of the term as a metaphor for a 'burden'. Other English metaphors derive from birds; vulture funds and vulture investors, for instance, take their name from the scavenging vulture. Perceptions of bird species vary across cultures. Owls are associated with bad luck, witchcraft, and death in parts of Africa, but are regarded as wise across much of Europe. Hoopoes were considered sacred in Ancient Egypt and symbols of virtue in Persia, but were thought of as thieves across much of Europe and harbingers of war in Scandinavia. In heraldry, birds, especially eagles, often appear in coats of arms.
In music, birdsong has influenced composers and musicians in several ways: they can be inspired by birdsong; they can intentionally imitate bird song in a composition, as Vivaldi, Messiaen, and Beethoven did, along with many later composers; they can incorporate recordings of birds into their works, as Ottorino Respighi first did; or, like Beatrice Harrison and David Rothenberg, they can duet with birds. Though human activities have allowed the expansion of a few species, such as the barn swallow and European starling, they have caused population decreases or extinction in many other species. Over a hundred bird species have gone extinct in historical times, although the most dramatic human-caused avian extinctions, eradicating an estimated 750–1800 species, occurred during the human colonisation of Melanesian, Polynesian, and Micronesian islands. Many bird populations are declining worldwide, with 1,227 species listed as threatened by BirdLife International and the IUCN in 2009. The most commonly cited human threat to birds is habitat loss. Other threats include overhunting, accidental mortality due to collisions with buildings or vehicles, long-line fishing bycatch, pollution (including oil spills and pesticide use), competition and predation from nonnative invasive species, and climate change. Governments and conservation groups work to protect birds, either by passing laws that preserve and restore bird habitat or by establishing captive populations for reintroductions. Such projects have produced some successes; one study estimated that conservation efforts saved 16 species of bird that would otherwise have gone extinct between 1994 and 2004, including the California condor and Norfolk parakeet. Belarus Belarus, officially the Republic of Belarus, formerly known by its Russian name Byelorussia or Belorussia, is a landlocked country in Eastern Europe bordered by Russia to the northeast, Ukraine to the south, Poland to the west, and Lithuania and Latvia to the northwest. Its capital and most populous city is Minsk. Over 40% of its territory is forested. Its major economic sectors are service industries and manufacturing. Until the 20th century, different states at various times controlled the lands of modern-day Belarus, including the Principality of Polotsk (11th to 14th centuries), the Grand Duchy of Lithuania, the Polish–Lithuanian Commonwealth, and the Russian Empire. In the aftermath of the 1917 Russian Revolution, Belarus declared independence as the Belarusian People's Republic, which was conquered by Soviet Russia. The Socialist Soviet Republic of Byelorussia became a founding constituent republic of the Soviet Union in 1922 and was renamed as the Byelorussian Soviet Socialist Republic (Byelorussian SSR). Belarus lost almost half of its territory to Poland after the Polish–Soviet War of 1919–1921. Much of the modern border of Belarus took shape in 1939, when some lands of the Second Polish Republic were reintegrated into it after the Soviet invasion of Poland, and was finalized after World War II. During WWII, military operations devastated Belarus, which lost about a third of its population and more than half of its economic resources. The republic was redeveloped in the post-war years. In 1945 the Byelorussian SSR became a founding member of the United Nations, along with the Soviet Union and the Ukrainian SSR.
The parliament of the republic proclaimed the sovereignty of Belarus in 1990, and during the dissolution of the Soviet Union, Belarus declared independence in 1991. Alexander Lukashenko has served as the country's first president since 1994. Belarus has been labeled "Europe's last dictatorship" by some Western journalists, on account of Lukashenko's self-described authoritarian style of government. Lukashenko continued a number of Soviet-era policies, such as state ownership of large sections of the economy. Elections under Lukashenko's rule have been widely criticized as unfair, and according to many countries and organizations, political opposition has been violently suppressed. Belarus is also the last country in Europe using the death penalty. Belarus's Democracy Index rating was the lowest in Europe until 2014 (when it was passed by Russia); the country is labelled as "not free" by Freedom House, as "repressed" in the Index of Economic Freedom, and is rated as by far the worst country for press freedom in Europe in the 2013–14 Press Freedom Index published by Reporters Without Borders, which ranks Belarus 157th out of 180 nations. In 2000, Belarus and Russia signed a treaty for greater cooperation, forming the Union State. Over 70% of Belarus's population of 9.49 million resides in urban areas. More than 80% of the population is ethnic Belarusian, with sizable minorities of Russians, Poles and Ukrainians. Since a referendum in 1995, the country has had two official languages: Belarusian and Russian. The Constitution of Belarus does not declare any official religion, although the primary religion in the country is Eastern Orthodox Christianity. The second-most widespread religion, Roman Catholicism, has a much smaller following; nevertheless, Belarus celebrates both Orthodox and Catholic versions of Christmas and Easter as national holidays. Belarus has been a member of the United Nations since its founding and belongs to the Commonwealth of Independent States, the CSTO, the EEU, and the Non-Aligned Movement. Belarus has shown no aspirations for joining the European Union but nevertheless maintains a bilateral relationship with the organisation, and likewise participates in two EU projects: the Eastern Partnership and the Baku Initiative. The name "Belarus" is closely related with the term "Belaya Rus", i.e., "White Rus'". There are several claims to the origin of the name "White Rus'." An ethno-religious theory suggests that the name was used to describe the part of old Ruthenian lands within the Grand Duchy of Lithuania that had been populated mostly by early Christianized Slavs, as opposed to Black Ruthenia, which was predominantly inhabited by pagan Balts. An alternate explanation for the name comments on the white clothing worn by the local Slavic population. A third theory suggests that the old Rus' lands that were not conquered by the Tatars (i.e., Polotsk, Vitebsk and Mogilev) had been referred to as "White Rus'". The name Rus' is often conflated with its Latin forms Russia and Ruthenia, thus Belarus is often referred to as "White Russia" or "White Ruthenia". The name first appeared in German and Latin medieval literature; the chronicles of Jan of Czarnków mention the imprisonment of the Lithuanian grand duke Jogaila and his mother in "White Russia" in 1381. In some languages, including German, Afrikaans and Dutch, the country is generally called "White Russia" to this day. The Latin term "Alba Russia" was used again by Pope Pius VI in 1783 to recognize the Society of Jesus there.
The first known use of "White Russia" to refer to Belarus was in the late-16th century by the Englishman Sir Jerome Horsey, who was known for his close contacts with the Russian Royal Court. During the 17th century, the Russian tsars used "White Rus" to describe the lands added from the Grand Duchy of Lithuania. The term "Belorussia" (the latter part similar to, but spelled and stressed differently from, Росси́я, "Russia") first arose in the days of the Russian Empire, and the Russian Tsar was usually styled "the Tsar of All the Russias", as "Russia" or the "Russian Empire" was formed by three parts of Russia—the Great, Little, and White. This asserted that the territories are all Russian and all the peoples are also Russian; in the case of the Belarusians, they were variants of the Russian people. After the Bolshevik Revolution in 1917, the term "White Russia" caused some confusion, as it was also the name of the military force that opposed the red Bolsheviks. During the period of the Byelorussian SSR, the term "Byelorussia" was embraced as part of a national consciousness. In western Belarus under Polish control, "Byelorussia" became commonly used in the regions of Białystok and Grodno during the interwar period. The term "Byelorussia" (its names in other languages such as English being based on the Russian form) was only used officially until 1991, when the Supreme Soviet of the Byelorussian SSR decreed by law that the new independent republic should be called "Republic of Belarus", and that its abridged form should be "Belarus". The law decreed that all the forms of the new term should be transliterated into other languages from their Belarusian language forms. The use of Byelorussian SSR and any abbreviations thereof was allowed from 1991 to 1993. Conservative forces in the newly independent Belarus did not support the name change and opposed its inclusion in the 1991 draft of the Constitution of Belarus. Accordingly, the name "Byelorussia" was replaced by "Belarus" in English. Likewise, the adjective "Belorussian" or "Byelorussian" was replaced by "Belarusian" in English. "Belarusian" is closer to the original Belarusian term. The Belarusian intelligentsia in the Stalin era attempted to change the name from "Byelorussia" to a form of "Krivia" because of the supposed connection with Russia. Some nationalists object to the name for the same reason. Several local newspapers kept the old name of the country in Russian in their names; one example is the localized publication of a popular Russian newspaper. Also, those who wish for Belarus to be reunited with Russia continue to use "Belorussia". Officially, the full name of the country is "Republic of Belarus". From 5000 to 2000 BC, Bandkeramik cultures predominated. In addition, remains from the Dnieper-Donets culture were found in Belarus and parts of Ukraine. Cimmerians and other pastoralists roamed through the area by 1,000 BC, and by 500 AD, Slavs had taken up residence, their territory circumscribed by the Scythians who roamed its outskirts. Invaders from Asia, among whom were the Huns and Avars, swept through c. 400–600 AD, but were unable to dislodge the Slavic presence. The region that is now Belarus was first settled by Baltic tribes in the 3rd century. Around the 5th century, the area was taken over by Slavic tribes. The takeover was partially due to the lack of military coordination of the Balts, but the gradual assimilation of the Balts into Slavic culture was peaceful in nature.
In the 9th century some principalities arose on the territory of modern Belarus. Among them was the Principality of Polotsk, which for most of the time was effectively an independent state (apart from about 20 years when it was a vassal of Kievan Rus'). The Principality of Polotsk was the first nation state to be established on the land of Belarus. Many early Rus' principalities were virtually razed or severely affected by a major Mongol invasion in the 13th century, but the lands of modern Belarus avoided the brunt of the invasion and eventually joined the Grand Duchy of Lithuania. There are no sources attesting a military seizure, but the annals affirm an alliance and united foreign policy of Polotsk and Lithuania lasting for decades. For example, the Chronicle of Novgorod records in 1198 that "Izyaslav had been set to be Knyaz in Luki and covered Novgorod from the Lithuanians"; Luki lies to the east of Polotsk. The Grand Duchy of Lithuania, which originated between the Nemunas and Neris rivers, existed in the centre of Europe from the 13th to the 18th centuries; it came to comprise the entire territories of contemporary Belarus and Ukraine and parts of Poland, Lithuania and Latvia, stretching from the Baltic Sea to the Black Sea. Incorporation into the Grand Duchy of Lithuania resulted in an economic, political and ethno-cultural unification of Belarusian lands. Of the principalities held by the Duchy, nine were settled by a population that would eventually become the Belarusian people. During this time, the Duchy was involved in several military campaigns, including fighting on the side of Poland against the Teutonic Knights at the Battle of Grunwald in 1410; the joint victory allowed the Duchy to control the northwestern borderlands of Eastern Europe. The Muscovites, led by Ivan III of Moscow, began military campaigns in 1486 in an attempt to incorporate the lands of Kievan Rus', specifically the territories of modern Belarus, Russia and Ukraine. On 2 February 1386, the Grand Duchy of Lithuania and the Kingdom of Poland were joined in a personal union through a marriage of their rulers. This union set in motion the developments that eventually resulted in the formation of the Polish–Lithuanian Commonwealth, created in 1569 by the Union of Lublin. The Lithuanian nobility was forced into rapprochement because of the threat coming from Muscovy. To strengthen its independence within the format of the union, three editions of the Statutes of Lithuania were issued in the 16th century. The third Article of the Statute established that all lands of the Grand Duchy of Lithuania would remain eternally within the Grand Duchy and never become part of other states, and that land within the Grand Duchy could be owned only by its own families; anyone from outside the Duchy granted property could own it only after swearing allegiance to the Grand Duke of Lithuania. These articles were aimed at defending the rights of the Grand Duchy's nobility against the Polish, Prussian and other aristocracy of the Polish–Lithuanian Commonwealth. In the years following the union, the process of gradual Polonization of both Lithuanians and Ruthenians gained steady momentum. In culture and social life, both the Polish language and Catholicism became dominant, and in 1696, Polish replaced Ruthenian as the official language—with the Ruthenian language being banned from administrative use. However, the Ruthenian peasants continued to speak their own language and remained faithful to the Belarusian Greek Catholic Church.
The Statutes were initially issued only in the Ruthenian language and later also in Polish. Around 1840 the Statutes were banned by the Russian tsar following the November Uprising. They remained in use in modern Ukrainian lands until the 1860s. For the autonomous Duchy of Samogitia there was a separate "Statute of the Zhmudz Lands". The union between Poland and Lithuania ended in 1795 with the partitioning of Poland by Imperial Russia, Prussia, and Austria. The Belarusian territories acquired by the Russian Empire under the reign of Catherine II were included in the Belarusian Governorate in 1796 and held until their occupation by the German Empire during World War I. Under Nicholas I and Alexander III the national cultures were repressed due to the policies of de-Polonization and Russification, which included the return to Orthodox Christianity of Belarusian Uniates. In a Russification drive in the 1840s, Nicholas I prohibited use of the Belarusian language in public schools, campaigned against Belarusian publications and tried to pressure those who had converted to Catholicism under the Poles to reconvert to the Orthodox faith. In 1863, economic and cultural pressure exploded in a revolt, led by Konstanty Kalinowski. After the failed revolt, the Russian government reintroduced the use of Cyrillic to Belarusian in 1864, and no documents in Belarusian were permitted by the Russian government until 1905. During the negotiations of the Treaty of Brest-Litovsk, Belarus first declared independence under German occupation in 1918, forming the Belarusian People's Republic. Immediately afterwards, the Polish–Soviet War ignited, and the territory of Belarus was divided between Poland and Soviet Russia. The Rada of the Belarusian Democratic Republic has existed as a government in exile ever since; in fact, it is currently the world's longest-serving government in exile. The Republic of Central Lithuania was a short-lived political entity and the last attempt to restore Lithuania as the historical confederal state (it was also supposed to create Lithuania Upper and Lithuania Lower). The republic was created in 1920 following the staged rebellion of soldiers of the 1st Lithuanian–Belarusian Division of the Polish Army under Lucjan Żeligowski. Centered on the historical capital of the Grand Duchy of Lithuania, Vilna, for 18 months the entity served as a buffer state between Poland, upon which it depended, and Lithuania, which claimed the area. After a variety of delays, a disputed election took place on 8 January 1922, and the territory was annexed to Poland. In his memoir, published in London in 1943, Żeligowski later condemned the annexation by Poland, as well as the policy of closing Belarusian schools and the general disregard of Marshal Józef Piłsudski's confederation plans. A part of Belarus under Russian rule emerged as the Byelorussian Soviet Socialist Republic (Byelorussian SSR) in 1919. Soon thereafter it merged to form the Lithuanian-Byelorussian SSR. The contested lands were divided between Poland and the Soviet Union after the war ended in 1921, and the Byelorussian SSR became a founding member of the Union of Soviet Socialist Republics in 1922. The western part of modern Belarus remained part of Poland. In the 1920s and 1930s, Soviet agricultural and economic policies, including collectivization and five-year plans for the national economy, led to famine and political repression.
In 1939, Nazi Germany and the Soviet Union invaded and occupied Poland, marking the beginning of World War II. The Soviets invaded and annexed much of eastern Poland, which had been part of Poland since the Peace of Riga two decades earlier. Much of the northern section of this area was added to the Byelorussian SSR, and now constitutes West Belarus. The Soviet-controlled Byelorussian People's Council officially took control of the territories, whose populations consisted of a mixture of Poles, Ukrainians, Belarusians and Jews, on 28 October 1939 in Białystok. Nazi Germany invaded the Soviet Union in 1941. The Brest Fortress, which had been annexed in 1939, was at this time subjected to one of the most destructive onslaughts of the war. Statistically, the Byelorussian SSR was the hardest-hit Soviet republic in World War II; it remained in Nazi hands until 1944. During that time, Germany destroyed 209 out of 290 cities in the republic, 85% of the republic's industry, and more than one million buildings. The Nazi "Generalplan Ost" called for the extermination, expulsion or enslavement of most or all Belarusians for the purpose of providing more living space in the East for Germans. Casualties were estimated to be between 2 and 3 million (about a quarter to one-third of the total population), while the Jewish population of Belarus was devastated during the Holocaust and never recovered. The population of Belarus did not regain its pre-war level until 1971. It was also after this conflict that the final borders of Belarus were set by Stalin, when parts of Belarusian territory were given to the recently annexed Lithuania. After the war, Belarus was among the 51 founding signatories of the United Nations Charter, and as such it was allowed an additional vote at the UN, on top of the Soviet Union's vote. Vigorous postwar reconstruction promptly followed the end of the war, and the Byelorussian SSR became a major center of manufacturing in the western USSR, creating jobs and attracting ethnic Russians. The borders of the Byelorussian SSR and Poland were redrawn, in accord with the 1919-proposed Curzon Line. Joseph Stalin implemented a policy of Sovietization to isolate the Byelorussian SSR from Western influences. This policy involved sending Russians from various parts of the Soviet Union and placing them in key positions in the Byelorussian SSR government. After Stalin's death in 1953, Nikita Khrushchev continued his predecessor's cultural hegemony program, stating, "The sooner we all start speaking Russian, the faster we shall build communism." In 1986, the Byelorussian SSR was exposed to significant nuclear fallout from the explosion at the Chernobyl power plant in the neighboring Ukrainian SSR. In June 1988, the archaeologist and leader of the Christian Conservative Party of the BPF, Zyanon Paznyak, discovered mass graves of victims executed in 1937–41 at Kurapaty, near Minsk. Some nationalists contend that this discovery is proof that the Soviet government was trying to erase the Belarusian people, causing Belarusian nationalists to seek independence. In March 1990, elections for seats in the Supreme Soviet of the Byelorussian SSR took place. Though the pro-independence Belarusian Popular Front took only 10% of the seats, the populace was content with the selection of the delegates. Belarus declared itself sovereign on 27 July 1990 by issuing the Declaration of State Sovereignty of the Belarusian Soviet Socialist Republic.
With the support of the Communist Party, the country's name was changed to the Republic of Belarus in 1991. Stanislav Shushkevich, the chairman of the Supreme Soviet of Belarus, met with Boris Yeltsin of Russia and Leonid Kravchuk of Ukraine in December 1991 in Belavezhskaya Pushcha to formally declare the dissolution of the Soviet Union and the formation of the Commonwealth of Independent States. A national constitution was adopted in March 1994, in which the functions of prime minister were given to the President of Belarus. Two-round elections for the presidency in 1994 catapulted the formerly unknown Alexander Lukashenko into national prominence. He garnered 45% of the vote in the first round and 80% in the second, defeating Vyacheslav Kebich who received 14% of the vote. Lukashenko was re-elected in 2001, in 2006, in 2010 and again in 2015. Western governments, Amnesty International, and Human Rights Watch have criticized Lukashenko's authoritarian style of government. Since 2014, following years of embracing Russian influence in the country, Lukashenko has pressed a revival of Belarusian identity in the wake of the Russian annexation of Crimea and military intervention in Eastern Ukraine. For the first time, he delivered a speech in Belarusian (rather than Russian, which most people use), in which he said, "We are not Russian—we are Belarusians", and later encouraged the use of Belarusian. Trade disputes, a border dispute, and a much relaxed official attitude to dissident voices are all part of a weakening of the longtime warm relationship with Russia. Belarus lies between latitudes 51° and 57° N, and longitudes 23° and 33° E. It measures about 560 km from north to south and about 650 km from west to east. It is landlocked, relatively flat, and contains large tracts of marshy land. About 40% of Belarus is covered by forests. Many streams and 11,000 lakes are found in Belarus. Three major rivers run through the country: the Neman, the Pripyat, and the Dnieper. The Neman flows westward towards the Baltic Sea and the Pripyat flows eastward to the Dnieper; the Dnieper flows southward towards the Black Sea. The highest point is Dzyarzhynskaya Hara (Dzyarzhynsk Hill) at 345 metres, and the lowest point is on the Neman River at 90 metres; the average elevation of Belarus is about 160 metres above sea level. The climate features mild to cold winters, with average January minimum temperatures colder in the northeast (Vitebsk) than in the southwest (Brest), and cool, moist summers. Belarus has a moderate average annual rainfall. The country is in the transitional zone between continental climates and maritime climates. Natural resources include peat deposits, small quantities of oil and natural gas, granite, dolomite (limestone), marl, chalk, sand, gravel, and clay. About 70% of the radiation from neighboring Ukraine's 1986 Chernobyl nuclear disaster entered Belarusian territory, and about a fifth of Belarusian land (principally farmland and forests in the southeastern regions) was affected by radiation fallout. The United Nations and other agencies have aimed to reduce the level of radiation in affected areas, especially through the use of caesium binders and rapeseed cultivation, which are meant to decrease soil levels of caesium-137. Belarus borders five countries: Latvia to the north, Lithuania to the northwest, Poland to the west, Russia to the north and the east, and Ukraine to the south.
Treaties in 1995 and 1996 demarcated Belarus's borders with Latvia and Lithuania, and Belarus ratified a 1997 treaty establishing the Belarus-Ukraine border in 2009. Belarus and Lithuania ratified final border demarcation documents in February 2007. Belarus is a presidential republic, governed by a president and the National Assembly. The term for each presidency is five years. Under the 1994 constitution, the president could serve for only two terms as president, but a change in the constitution in 2004 eliminated term limits. Alexander Lukashenko has been the president of Belarus since 1994. In 1996, Lukashenko called for a controversial vote to extend the presidential term from five to seven years, and as a result the election that was supposed to occur in 1999 was pushed back to 2001. The referendum on the extension was denounced as a "fantastic" fake by the chief electoral officer, Viktar Hanchar, who was removed from office during the campaign. The National Assembly is a bicameral parliament comprising the 110-member House of Representatives (the lower house) and the 64-member Council of the Republic (the upper house). The House of Representatives has the power to appoint the prime minister, make constitutional amendments, call for a vote of confidence on the prime minister, and make suggestions on foreign and domestic policy. The Council of the Republic has the power to select various government officials, conduct an impeachment trial of the president, and accept or reject the bills passed by the House of Representatives. Each chamber has the ability to veto any law passed by local officials if it is contrary to the constitution. The government includes a Council of Ministers, headed by the prime minister and five deputy prime ministers. The members of this council need not be members of the legislature and are appointed by the president. The judiciary comprises the Supreme Court and specialized courts such as the Constitutional Court, which deals with specific issues related to constitutional and business law. The judges of national courts are appointed by the president and confirmed by the Council of the Republic. For criminal cases, the highest court of appeal is the Supreme Court. The Belarusian Constitution forbids the use of special extrajudicial courts. In the 2012 parliamentary election, 105 of the 110 members elected to the House of Representatives were not affiliated with any political party. The Communist Party of Belarus won 3 seats, and the Agrarian Party and Republican Party of Labour and Justice, one each. Most non-partisans represent a wide scope of social organizations such as workers' collectives, public associations, and civil society organizations, similar to the composition of the Soviet legislature. Neither the pro-Lukashenko parties, such as the Belarusian Socialist Sporting Party and the Republican Party of Labour and Justice, nor the People's Coalition 5 Plus opposition parties, such as the Belarusian People's Front and the United Civil Party of Belarus, won any seats in the 2004 elections. Groups such as the Organization for Security and Co-operation in Europe (OSCE) declared the election "un-free" because of the opposition parties' poor results and media bias in favor of the government. In the 2006 presidential election, Lukashenko was opposed by Alaksandar Milinkievič, who represented a coalition of opposition parties, and by Alaksandar Kazulin of the Social Democrats.
Kazulin was detained and beaten by police during protests surrounding the All Belarusian People's Assembly. Lukashenko won the election with 80% of the vote; the Russian Federation and the CIS deemed the vote open and fair while the OSCE and other organizations called the election unfair. In the December 2010 presidential election, Lukashenko was elected to a fourth straight term with nearly 80% of the vote. The runner-up opposition leader Andrei Sannikov received less than 3% of the vote; independent observers criticized the election as fraudulent. When opposition protesters took to the streets in Minsk, many people, including most rival presidential candidates, were beaten and arrested by the state militia. Many of the candidates, including Sannikov, were sentenced to prison or house arrest for terms typically of more than four years. Six months later, amid an unprecedented economic crisis, activists used social networking to initiate a fresh round of protests characterized by wordless hand-clapping. The judicial system in Belarus lacks independence and is subject to political interference. Corrupt practices such as bribery often took place during tender processes, and whistleblower protection and a national ombudsman are lacking in Belarus's anti-corruption system. However, there is political will to fight corruption in the government, which has made some progress, such as minimizing tax regulations in order to improve transparency in the tax office. Lukashenko has described himself as having an "authoritarian ruling style". Western countries have described Belarus under Lukashenko as a dictatorship; the government has accused the same Western powers of trying to oust Lukashenko. The Council of Europe has barred Belarus from membership since 1997 for undemocratic voting and election irregularities in the November 1996 constitutional referendum and parliament by-elections. The Belarusian government is also criticized for human rights violations and its persecution of non-governmental organisations, independent journalists, national minorities, and opposition politicians. In testimony to the United States Senate Committee on Foreign Relations, former United States Secretary of State Condoleezza Rice labeled Belarus as one of the world's six "outposts of tyranny". In response, the Belarusian government called the assessment "quite far from reality". The Viasna Human Rights Centre lists 11 political prisoners currently detained in Belarus. Among them is the human rights activist Ales Bialiatski, Vice President of the International Federation for Human Rights and head of Viasna. Lukashenko announced a new law in 2014 that will prohibit kolkhoz workers (around 9% of the total work force) from leaving their jobs at will—a change of job and living location will require permission from governors. The law was compared with serfdom by Lukashenko himself. Similar regulations were introduced for the forestry industry in 2012. The Byelorussian SSR was one of the two Soviet republics that joined the United Nations along with the Ukrainian SSR as one of the original 51 members in 1945. After the dissolution of the Soviet Union, under international law, Belarus became the internationally recognized successor state to the Byelorussian SSR, retaining its UN membership. Belarus and Russia have been close trading partners and diplomatic allies since the breakup of the Soviet Union. 
Belarus is dependent on Russia for imports of raw materials and for its export market. The union of Russia and Belarus, a supranational confederation, was established in a 1996–99 series of treaties that called for monetary union, equal rights, single citizenship, and a common foreign and defense policy. However, the future of the union has been placed in doubt because of Belarus's repeated delays of monetary union, the lack of a referendum date for the draft constitution, and a dispute over the petroleum trade. On 11 December 2007, reports emerged that a framework for the new state had been discussed between the two countries. In 2008, Belarusian President Lukashenko said that he had named Russian Prime Minister Vladimir Putin the "prime minister" of the Russia-Belarus alliance. The significance of this act was not immediately clear; some incorrectly speculated that Putin would become president of a unified state of Russia and Belarus after stepping down as Russian president in May 2008. Belarus was a founding member of the Commonwealth of Independent States (CIS). Belarus has trade agreements with several European Union member states (despite other member states' travel ban on Lukashenko and top officials), including neighboring Latvia, Lithuania, and Poland. Travel bans imposed by the European Union have been lifted in the past in order to allow Lukashenko to attend diplomatic meetings and also to engage his government and opposition groups in dialogue. Bilateral relations with the United States are strained because the U.S. Department of State supports various anti-Lukashenko non-governmental organizations (NGOs), and also because the Belarusian government has made it increasingly difficult for United States-based organizations to operate within the country. Diplomatic relations remained tense, and in 2004, the United States passed the Belarus Democracy Act, which authorized funding for anti-government Belarusian NGOs, and prohibited loans to the Belarusian government, except for humanitarian purposes. Despite this political friction, the two countries do cooperate on intellectual property protection, prevention of human trafficking, technology crime, and disaster relief. Sino-Belarusian relations have improved, strengthened by the visit of President Lukashenko to China in October 2005. Belarus also has strong ties with Syria, considered a key partner in the Middle East. In addition to the CIS, Belarus is a member of the Eurasian Economic Community, the Collective Security Treaty Organisation, the international Non-Aligned Movement since 1998, and the Organization for Security and Co-operation in Europe (OSCE). As an OSCE member state, Belarus's international commitments are subject to monitoring under the mandate of the U.S. Helsinki Commission. Belarus is included in the European Union's European Neighbourhood Policy (ENP), which aims at bringing the EU and its neighbours closer. On 15 February 2016 the European Union announced the easing of sanctions against Belarus during a meeting of 28 EU foreign ministers at a regular session of the Council of the European Union. Major General Andrei Ravkov heads the Ministry of Defence, and Alexander Lukashenko (as president) serves as Commander-in-Chief. The armed forces were formed in 1992 using parts of the former Soviet Armed Forces on the new republic's territory. 
The transformation of the ex-Soviet forces into the Armed Forces of Belarus, which was completed in 1997, reduced the number of its soldiers by 30,000 and restructured its leadership and military formations. Most of Belarus's service members are conscripts, who serve for 12 months if they have higher education or 18 months if they do not. A decline in the number of Belarusians of conscription age has increased the importance of contract soldiers, who numbered 12,000 in 2001. In 2005, about 1.4% of Belarus's gross domestic product was devoted to military expenditure. Belarus has not expressed a desire to join NATO but has participated in the Individual Partnership Program since 1997, and Belarus provides refueling and airspace support for the ISAF mission in Afghanistan. Belarus first began to cooperate with NATO upon signing documents to participate in their Partnership for Peace Program in 1995. However, Belarus cannot join NATO because it is a member of the Collective Security Treaty Organisation. Tensions between NATO and Belarus peaked after the March 2006 presidential election in Belarus. Belarus is divided into six regions, which are named after the cities that serve as their administrative centers. Each region has a provincial legislative authority, called a region council, which is elected by its residents, and a provincial executive authority, called a region administration, whose chairman is appointed by the president. Regions are further subdivided into "raions", commonly translated as "districts". Each "raion" has its own legislative authority, or "raion" council, elected by its residents, and an executive authority, or "raion" administration, appointed by higher executive powers. The six regions are divided into 118 "raions". The city of Minsk is split into nine districts and enjoys special status as the nation's capital. It is run by an executive committee and has been granted a charter of self-rule. Belarus is the only European country still using capital punishment. The U.S. and Belarus were the only two of the 56 member states of the Organization for Security and Co-operation in Europe to have carried out executions during 2011. In 2014 the share of manufacturing in GDP was 37%; more than two thirds of this amount falls on manufacturing industries. The number of people employed in industry is 32.7% of the working population. The growth rate is much lower than for the economy as a whole – about 1.9% in 2014. At the time of the dissolution of the Soviet Union in 1991, Belarus was one of the world's most industrially developed states by percentage of GDP as well as the richest CIS member-state. In 2015, 39.3% of Belarusians were employed by state-controlled companies, 57.2% were employed by private companies (in which the government has a 21.1% stake) and 3.5% were employed by foreign companies. The country relies on Russia for various imports, including petroleum. Important agricultural products include potatoes and cattle byproducts, including meat. In 1994, Belarus's main exports included heavy machinery (especially tractors), agricultural products, and energy products. Economically, Belarus involved itself in the CIS, Eurasian Economic Community, and Union with Russia. In the 1990s, however, industrial production plunged due to decreases in imports, investment, and demand for Belarusian products from its trading partners. 
GDP only began to rise in 1996; the country was the fastest-recovering former Soviet republic in terms of its economy. In 2006, GDP amounted to in purchasing power parity (PPP) dollars (estimate), or about $8,100 per capita. In 2005, GDP increased by 9.9%; the inflation rate averaged 9.5%. Since the disintegration of the Soviet Union, under Lukashenko's leadership, Belarus has maintained government control over key industries and eschewed the large-scale privatizations seen in other former Soviet republics. In 2006, Belarus's largest trading partner was Russia, accounting for nearly half of total trade; the European Union was the next largest trading partner, with nearly a third of foreign trade. As of 2015, 38% of Belarusian exported goods go to Russia and 56% of imported goods come from Russia. Due to its failure to protect labor rights, including passing laws forbidding unemployment or working outside of state-controlled sectors, Belarus lost its EU Generalized System of Preferences status in 2007, which raised tariff rates to their prior most favored nation levels. Belarus applied to become a member of the World Trade Organization in 1993. The labor force consists of more than four million people, among whom women hold slightly more jobs than men. In 2005, nearly a quarter of the population was employed by industrial factories. Employment is also high in agriculture, manufacturing sales, trading goods, and education. The unemployment rate, according to government statistics, was 1.5% in 2005. There were 679,000 unemployed Belarusians, two-thirds of whom were women. The unemployment rate has been in decline since 2003, and the overall rate of employment is the highest since statistics were first compiled in 1995. Until 1 July 2016, the currency of Belarus was the Belarusian ruble (BYR). The currency was introduced in May 1992, replacing the Soviet ruble. The first coins of the Republic of Belarus were issued on 27 December 1996. The ruble was reintroduced with new values in 2000 and has been in use ever since. As part of the Union of Russia and Belarus, both states have discussed using a single currency along the same lines as the Euro. This led to a proposal that the Belarusian ruble be discontinued in favor of the Russian ruble (RUB), starting as early as 2008. The National Bank of Belarus abandoned pegging the Belarusian ruble to the Russian ruble in August 2007. A new currency, the new Belarusian ruble (ISO 4217 code: BYN), was introduced in July 2016, replacing the Belarusian ruble at a rate of 1:10,000 (10,000 old rubles = 1 new ruble). From 1 July until 31 December 2016, the old and new currencies were in parallel circulation, and series 2000 notes and coins can be exchanged for series 2009 from 1 January 2017 to 31 December 2021. This redenomination can be considered an effort to fight the high inflation rate. The banking system of Belarus consists of two levels: the Central Bank (National Bank of the Republic of Belarus) and 25 commercial banks. On 23 May 2011, the Belarusian ruble depreciated 56% against the United States dollar. The depreciation was even steeper on the black market and financial collapse seemed imminent as citizens rushed to exchange their rubles for dollars, euros, durable goods, and canned goods. On 1 June 2011, Belarus requested an economic rescue package from the International Monetary Fund. According to the National Statistical Committee, as of January 2016, the population is 9.49 million people. 
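As a minimal illustration of the 1:10,000 redenomination described above, the following sketch converts amounts between old rubles (BYR) and new rubles (BYN). The function names, the use of Python's decimal module, and the rounding to kopecks are illustrative assumptions, not an official conversion tool.

```python
from decimal import Decimal, ROUND_HALF_UP

# Redenomination rate from the July 2016 reform: 10,000 old rubles (BYR) = 1 new ruble (BYN).
REDENOMINATION_RATE = Decimal("10000")

def byr_to_byn(old_rubles: Decimal) -> Decimal:
    """Convert a pre-2016 amount in BYR to BYN, rounded to two decimal places (kopecks)."""
    return (old_rubles / REDENOMINATION_RATE).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

def byn_to_byr(new_rubles: Decimal) -> Decimal:
    """Convert an amount in BYN back to the old BYR denomination."""
    return new_rubles * REDENOMINATION_RATE

# Example: 150,000 old rubles correspond to 15 new rubles under the 1:10,000 rate.
print(byr_to_byn(Decimal("150000")))  # 15.00
print(byn_to_byr(Decimal("15")))      # 150000
```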
Ethnic Belarusians constitute 83.7% of Belarus's total population. The next largest ethnic groups are Russians (8.3%), Poles (3.1%), and Ukrainians (1.7%). Belarus has a population density of about 50 people per square kilometer (127 per sq mi); 70% of its total population is concentrated in urban areas. Minsk, the nation's capital and largest city, was home to 1,937,900 residents. Gomel, with a population of 481,000, is the second-largest city and serves as the capital of the Homiel Voblast. Other large cities are Mogilev (365,100), Vitebsk (342,400), Hrodna (314,800) and Brest (298,300). Like many other eastern European countries, Belarus has a negative population growth rate and a negative natural growth rate. In 2007, Belarus's population declined by 0.41% and its fertility rate was 1.22, well below the replacement rate. Its net migration rate is +0.38 per 1,000, indicating that Belarus experiences slightly more immigration than emigration. Some 69.9% of Belarus's population is aged 14 to 64; 15.5% is under 14, and 14.6% is 65 or older. Its population is also aging; the median age of 30–34 is estimated to rise to between 60 and 64 in 2050. There are about 0.87 males per female in Belarus. The average life expectancy is 72.15 years (66.53 years for men and 78.1 years for women). Over 99% of Belarusians aged 15 and older are literate. Belarus's two official languages are Russian and Belarusian; Russian is the main language, used by 72% of the population, while Belarusian, the official first language, is spoken by 11.9%. Minorities also speak Polish, Ukrainian and Eastern Yiddish. Belarusian, although not as widely used as Russian, is the mother tongue of 53.2% of the population, whereas Russian is the mother tongue of only 41.5%. According to the census, 58.9% of all Belarusians adhere to some kind of religion; out of those, Eastern Orthodoxy (Belarusian Exarchate of the Russian Orthodox Church) makes up about 82%. Roman Catholicism is practiced mostly in the western regions, and there are also different denominations of Protestantism. Minorities also practice Greek Catholicism, Judaism, Islam and Neopaganism. Overall, 48.3% of the population is Orthodox Christian, 41.1% is not religious, 7.1% is Catholic and 3.3% follows other religions. Belarus's Catholic minority, concentrated in the western part of the country, especially around Hrodna, is made up of a mixture of Belarusians and the country's Polish and Lithuanian minorities. In a statement to the media regarding Belarusian-Vatican ties, President Lukashenko stated that Orthodox and Catholic believers are the "two main confessions in our country". Belarus was once a major center of European Jews, with 10% of the population being Jewish, but since the mid-20th century the number of Jews has been reduced by the Holocaust, deportation, and emigration, so that today Jews form a very small minority of less than one percent. The Lipka Tatars, numbering over 15,000, are predominantly Muslims. According to Article 16 of the Constitution, Belarus has no official religion. While the freedom of worship is granted in the same article, religious organizations deemed harmful to the government or social order can be prohibited. The Belarusian government sponsors annual cultural festivals such as the Slavianski Bazaar in Vitebsk, which showcases Belarusian performers, artists, writers, musicians, and actors. 
Several state holidays, such as Independence Day and Victory Day, draw big crowds and often include displays such as fireworks and military parades, especially in Vitebsk and Minsk. The government's Ministry of Culture finances events promoting Belarusian arts and culture both inside and outside the country. Belarusian literature began with 11th- to 13th-century religious scripture, such as the 12th-century poetry of Cyril of Turaw. By the 16th century, Polotsk resident Francysk Skaryna translated the Bible into Belarusian. It was published in Prague and Vilnius sometime between 1517 and 1525, making it the first book printed in Belarus or anywhere in Eastern Europe. The modern era of Belarusian literature began in the late 19th century; one prominent writer was Yanka Kupala. Many Belarusian writers of the time, such as Uładzimir Žyłka, Kazimir Svayak, Yakub Kolas, Źmitrok Biadula, and Maksim Haretski, wrote for "Nasha Niva", a Belarusian-language paper that was previously published in Vilnius but is now published in Minsk. After Belarus was incorporated into the Soviet Union, the Soviet government took control of the Republic's cultural affairs. At first, a policy of "Belarusianization" was followed in the newly formed Byelorussian SSR. This policy was reversed in the 1930s, and the majority of prominent Belarusian intellectuals and nationalist advocates were either exiled or killed in Stalinist purges. The free development of literature occurred only in Polish-held territory until Soviet occupation in 1939. Several poets and authors went into exile after the Nazi occupation of Belarus and would not return until the 1960s. The last major revival of Belarusian literature occurred in the 1960s with novels published by Vasil Bykaŭ and Uladzimir Karatkievich. An influential author who devoted his work to awakening awareness of the catastrophes the country has suffered was Ales Adamovich. He was named by Svetlana Alexievich, the Belarusian winner of the 2015 Nobel Prize in Literature, as "her main teacher, who helped her to find a path of her own". Music in Belarus largely comprises a rich tradition of folk and religious music. The country's folk music traditions can be traced back to the times of the Grand Duchy of Lithuania. In the 19th century, Polish composer Stanisław Moniuszko composed operas and chamber music pieces while living in Minsk. During his stay, he worked with Belarusian poet Vintsent Dunin-Martsinkyevich and created the opera "Sialanka" ("Peasant Woman"). At the end of the 19th century, major Belarusian cities formed their own opera and ballet companies. The ballet "Nightingale" by M. Kroshner was composed during the Soviet era and became the first Belarusian ballet showcased at the National Academic Vialiki Ballet Theatre in Minsk. After the Second World War, music focused on the hardships of the Belarusian people or on those who took up arms in defense of the homeland. During this period, Anatoly Bogatyrev, creator of the opera "In Polesye Virgin Forest", served as the "tutor" of Belarusian composers. The National Academic Theatre of Ballet in Minsk was awarded the Benois de la Dance Prize in 1996 as the top ballet company in the world. Rock music has become increasingly popular in recent years, though the Belarusian government has attempted to limit the amount of foreign music aired on the radio in favor of traditional Belarusian music. Since 2004, Belarus has been sending artists to the Eurovision Song Contest. 
Marc Chagall was born in Liozna (near Vitebsk) in 1887. He spent the World War I years in Soviet Belarus, becoming one of the country's most distinguished artists and a member of the modernist avant-garde, and was a founder of the Vitebsk Arts College. The traditional Belarusian dress originates from the Kievan Rus' period. Due to the cool climate, clothes were designed to preserve body heat and were usually made from flax or wool. They were decorated with ornate patterns influenced by the neighboring cultures: Poles, Lithuanians, Latvians, Russians, and other European nations. Each region of Belarus has developed specific design patterns. One ornamental pattern common in early dresses currently decorates the hoist of the Belarusian national flag, adopted in a disputed referendum in 1995. Belarusian cuisine consists mainly of vegetables, meat (particularly pork), and bread. Foods are usually either slowly cooked or stewed. Typically, Belarusians eat a light breakfast and two hearty meals, with dinner being the largest meal of the day. Wheat and rye breads are consumed in Belarus, but rye is more plentiful because conditions are too harsh for growing wheat. To show hospitality, a host traditionally presents an offering of bread and salt when greeting a guest or visitor. Belarus has competed in the Olympic Games since the 1994 Winter Olympics. Its National Olympic Committee has been headed by President Lukashenko since 1997. Receiving heavy sponsorship from the government, ice hockey is the nation's second most popular sport after football. The national football team has never qualified for a major tournament; however, BATE Borisov has played in the Champions League. The national hockey team finished fourth at the 2002 Salt Lake City Olympics following a memorable upset win over Sweden in the quarterfinals, and regularly competes in the World Championships, often making the quarterfinals. Numerous Belarusian players are present in the Kontinental Hockey League in Eurasia, particularly for Belarusian club HC Dinamo Minsk, and several have also played in the National Hockey League in North America. Darya Domracheva is a leading biathlete whose honours include three gold medals at the 2014 Winter Olympics. Tennis player Victoria Azarenka became the first Belarusian to win a Grand Slam singles title at the Australian Open in 2012. She also won the gold medal in mixed doubles at the 2012 Summer Olympics with Max Mirnyi, who holds ten Grand Slam titles in doubles. Other notable Belarusian sportspeople include cyclist Vasil Kiryienka, who won the 2015 Road World Time Trial Championship, and middle distance runner Maryna Arzamasava, who won the gold medal in the 800m at the 2015 World Championships in Athletics. Belarus is also known for its strong rhythmic gymnasts. Notable gymnasts include Inna Zhukova, who earned silver at the 2008 Beijing Olympics; Liubov Charkashyna, who earned bronze at the 2012 London Olympics; and Melitina Staniouta, bronze all-around medalist at the 2015 World Championships. The Belarusian senior group earned bronze at the 2012 London Olympics. Andrei Arlovski, who was born in Babruysk, Byelorussian SSR, is a current UFC fighter and the former UFC heavyweight champion of the world. The state telecom monopoly, Beltelecom, holds the exclusive interconnection with Internet providers outside of Belarus. Beltelecom owns all the backbone channels that link to the Lattelecom, TEO LT, Tata Communications (former Teleglobe), Synterra, Rostelecom, Transtelekom and MTS ISPs. 
Beltelecom is the only operator licensed to provide commercial VoIP services in Belarus. Belarus has four UNESCO-designated World Heritage Sites: the Mir Castle Complex, the Nesvizh Castle, the Belovezhskaya Pushcha (shared with Poland), and the Struve Geodetic Arc (shared with nine other countries). Baseball Baseball is a bat-and-ball game played between two opposing teams who take turns batting and fielding. The game proceeds when a player on the fielding team, called the pitcher, throws a ball which a player on the batting team tries to hit with a bat. The objectives of the offensive team (batting team) are to hit the ball into the field of play, and to run the bases—having its runners advance counter-clockwise around four bases to score what are called "runs". The objective of the defensive team (fielding team) is to prevent batters from becoming runners, and to prevent runners' advance around the bases. A run is scored when a runner legally advances around the bases in order and touches home plate (the place where the player started as a batter). The team that scores the most runs by the end of the game is the winner. The first objective of the batting team is to have a player reach first base safely. A player on the batting team who reaches first base without being called "out" can attempt to advance to subsequent bases as a runner, either immediately or during teammates' turns batting. The fielding team tries to prevent runs by getting batters or runners "out", which forces them out of the field of play. Both the pitcher and fielders have methods of getting the batting team's players out. The opposing teams switch back and forth between batting and fielding; the batting team's turn to bat is over once the fielding team records three outs. One turn batting for each team constitutes an inning. A game is usually composed of nine innings, and the team with the greater number of runs at the end of the game wins. If scores are tied at the end of nine innings, extra innings are usually played. Baseball has no game clock, although most games end in the ninth inning. Baseball evolved from older bat-and-ball games already being played in England by the mid-18th century. This game was brought by immigrants to North America, where the modern version developed. By the late 19th century, baseball was widely recognized as the national sport of the United States. Baseball is popular in North America and parts of Central and South America, the Caribbean, and East Asia, particularly in Japan and South Korea. In the United States and Canada, professional Major League Baseball (MLB) teams are divided into the National League (NL) and American League (AL), each with three divisions: East, West, and Central. The MLB champion is determined by playoffs that culminate in the World Series. The top level of play is similarly split in Japan between the Central and Pacific Leagues and in Cuba between the West League and East League. The World Baseball Classic, organized by the World Baseball Softball Confederation, is the major international competition of the sport and attracts the top national teams from around the world. A baseball game is played between two teams, each composed of nine players, that take turns playing offense (batting and baserunning) and defense (pitching and fielding). A pair of turns, one at bat and one in the field, by each team constitutes an inning. 
A game consists of nine innings (seven innings at the high school level and in doubleheaders in college and minor leagues, and six innings at the Little League level). One team—customarily the visiting team—bats in the top, or first half, of every inning. The other team—customarily the home team—bats in the bottom, or second half, of every inning. The goal of the game is to score more points (runs) than the other team. The players on the team at bat attempt to score runs by circling or completing a tour of the four bases set at the corners of the square-shaped baseball diamond. A player bats at home plate and must proceed counterclockwise to first base, second base, third base, and back home to score a run. The team in the field attempts to prevent runs from scoring and record outs, which remove opposing players from offensive action until their turn in their team's batting order comes up again. When three outs are recorded, the teams switch roles for the next half-inning. If the score of the game is tied after nine innings, extra innings are played to resolve the contest. Many amateur games, particularly unorganized ones, involve different numbers of players and innings. The game is played on a field whose primary boundaries, the foul lines, extend forward from home plate at 45-degree angles. The 90-degree area within the foul lines is referred to as fair territory; the 270-degree area outside them is foul territory. The part of the field enclosed by the bases and several yards beyond them is the infield; the area farther beyond the infield is the outfield. In the middle of the infield is a raised pitcher's mound, with a rectangular rubber plate (the rubber) at its center. The outer boundary of the outfield is typically demarcated by a raised fence, which may be of any material and height. The fair territory between home plate and the outfield boundary is baseball's field of play, though significant events can take place in foul territory, as well. There are three basic tools of baseball: the ball, the bat, and the glove or mitt. Protective helmets are also standard equipment for all batters. At the beginning of each half-inning, the nine players on the fielding team arrange themselves around the field. One of them, the pitcher, stands on the pitcher's mound. The pitcher begins the pitching delivery with one foot on the rubber, pushing off it to gain velocity when throwing toward home plate. Another player, the catcher, squats on the far side of home plate, facing the pitcher. The rest of the team faces home plate, typically arranged as four infielders—who set up along or within a few yards outside the imaginary lines (basepaths) between first, second, and third base—and three outfielders. In the standard arrangement, there is a first baseman positioned several steps to the left of first base, a second baseman to the right of second base, a shortstop to the left of second base, and a third baseman to the right of third base. The basic outfield positions are left fielder, center fielder, and right fielder. With the exception of the catcher, all fielders are required to be in fair territory when the pitch is delivered. A neutral umpire sets up behind the catcher. Other umpires will be distributed around the field as well. Play starts with a batter standing at home plate, holding a bat. The batter waits for the pitcher to throw a pitch (the ball) toward home plate, and attempts to hit the ball with the bat. 
The catcher catches pitches that the batter does not hit—as a result of either electing not to swing or failing to connect—and returns them to the pitcher. A batter who hits the ball into the field of play must drop the bat and begin running toward first base, at which point the player is referred to as a "runner" (or, until the play is over, a "batter-runner"). A batter-runner who reaches first base without being put out is said to be "safe" and is on base. A batter-runner may choose to remain at first base or attempt to advance to second base or even beyond—however far the player believes can be reached safely. A player who reaches base despite proper play by the fielders has recorded a hit. A player who reaches first base safely on a hit is credited with a single. If a player makes it to second base safely as a direct result of a hit, it is a double; third base, a triple. If the ball is hit in the air within the foul lines over the entire outfield (and outfield fence, if there is one), or otherwise safely circles all the bases, it is a home run: the batter and any runners on base may all freely circle the bases, each scoring a run. This is the most desirable result for the batter. A player who reaches base due to a fielding mistake is not credited with a hit—instead, the responsible fielder is charged with an error. Any runners already on base may attempt to advance on batted balls that land, or contact the ground, in fair territory, before or after the ball lands. A runner on first base "must" attempt to advance if a ball lands in play. If a ball hit into play rolls foul before passing through the infield, it becomes dead and any runners must return to the base they occupied when the play began. If the ball is hit in the air and caught before it lands, the batter has flied out and any runners on base may attempt to advance only if they tag up (contact the base they occupied when the play began, as or after the ball is caught). Runners may also attempt to advance to the next base while the pitcher is in the process of delivering the ball to home plate; a successful effort is a stolen base. A pitch that is not hit into the field of play is called either a strike or a ball. A batter against whom three strikes are recorded strikes out. A batter against whom four balls are recorded is awarded a base on balls or walk, a free advance to first base. (A batter may also freely advance to first base if the batter's body or uniform is struck by a pitch outside the strike zone, provided the batter does not swing and attempts to avoid being hit.) Crucial to determining balls and strikes is the umpire's judgment as to whether a pitch has passed through the strike zone, a conceptual area above home plate extending from the midpoint between the batter's shoulders and belt down to the hollow of the knee. While the team at bat is trying to score runs, the team in the field is attempting to record outs. In addition to the strikeout, common ways a member of the batting team may be put out include the flyout, ground out, force out, and tag out. It is possible to record two outs in the course of the same play. This is called a double play. Three outs in one play, a triple play, is possible, though rare. Players put out or retired must leave the field, returning to their team's dugout or bench. A runner may be stranded on base when a third out is recorded against another player on the team. Stranded runners do not benefit the team in its next turn at bat as every half-inning begins with the bases empty. 
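To make the count rules just described concrete (three strikes produce a strikeout, four balls produce a walk), here is a minimal Python sketch of a plate-appearance counter. The class and method names are illustrative assumptions rather than any standard scorekeeping interface, and the sketch deliberately ignores complications such as foul balls with two strikes and hit batters.

```python
from typing import Optional

class PlateAppearance:
    """Tracks the ball-strike count for one batter (simplified, illustrative sketch)."""

    def __init__(self) -> None:
        self.balls = 0
        self.strikes = 0

    def record_pitch(self, call: str) -> Optional[str]:
        """Record an umpire's call ('ball' or 'strike') and return the outcome, if any."""
        if call == "ball":
            self.balls += 1
            if self.balls == 4:
                return "walk"        # four balls: the batter is awarded first base
        elif call == "strike":
            self.strikes += 1
            if self.strikes == 3:
                return "strikeout"   # three strikes: the batter is out
        else:
            raise ValueError(f"unknown call: {call!r}")
        return None                  # the plate appearance continues

# Example: a full count (3 balls, 2 strikes) followed by a called third strike.
pa = PlateAppearance()
outcome = None
for call in ["ball", "strike", "ball", "strike", "ball", "strike"]:
    outcome = pa.record_pitch(call)
print(outcome)  # strikeout
```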
An individual player's turn batting or plate appearance is complete when the player reaches base, hits a home run, makes an out, or hits a ball that results in the team's third out, even if it is recorded against a teammate. On rare occasions, a batter may be at the plate when, without the batter's hitting the ball, a third out is recorded against a teammate—for instance, a runner getting caught stealing (tagged out attempting to steal a base). A batter with this sort of incomplete plate appearance starts off the team's next turn batting; any balls or strikes recorded against the batter the previous inning are erased. A runner may circle the bases only once per plate appearance and thus can score at most a single run per batting turn. Once a player has completed a plate appearance, that player may not bat again until the eight other members of the player's team have all taken their turn at bat. The batting order is set before the game begins, and may not be altered except for substitutions. Once a player has been removed for a substitute, that player may not reenter the game. Children's games often have more liberal substitution rules. If the designated hitter (DH) rule is in effect, each team has a tenth player whose sole responsibility is to bat (and run). The DH takes the place of another player—almost invariably the pitcher—in the batting order, but does not field. Thus, even with the DH, each team still has a batting order of nine players and a fielding arrangement of nine players. The number of players on a baseball roster, or "squad", varies by league and by the level of organized play. A Major League Baseball (MLB) team has a roster of 25 players with specific roles. Most baseball leagues worldwide have the DH rule, including MLB's American League, Japan's Pacific League, and Caribbean professional leagues, along with major American amateur organizations. The Central League in Japan and the National League do not have the rule, and high-level minor league clubs connected to National League teams are not required to field a DH. In leagues that apply the designated hitter rule, a typical team has nine offensive regulars (including the DH), five starting pitchers, seven or eight relievers, a backup catcher, and two or three other reserve players. The manager, or head coach, oversees the team's major strategic decisions, such as establishing the starting rotation, setting the lineup, or batting order, before each game, and making substitutions during games—in particular, bringing in relief pitchers. Managers are typically assisted by two or more coaches; they may have specialized responsibilities, such as working with players on hitting, fielding, pitching, or strength and conditioning. At most levels of organized play, two coaches are stationed on the field when the team is at bat: the first base coach and third base coach, occupying designated coaches' boxes just outside the foul lines, assist in the direction of baserunners when the ball is in play, and relay tactical signals from the manager to batters and runners during pauses in play. In contrast to many other team sports, baseball managers and coaches generally wear their team's uniforms; coaches must be in uniform to be allowed on the field to confer with players during a game. Any baseball game involves one or more umpires, who make rulings on the outcome of each play. 
At a minimum, one umpire will stand behind the catcher, to have a good view of the strike zone, and call balls and strikes. Additional umpires may be stationed near the other bases, thus making it easier to judge plays such as attempted force outs and tag outs. In MLB, four umpires are used for each game, one near each base. In the playoffs, six umpires are used: one at each base and two in the outfield along the foul lines. Many of the pre-game and in-game strategic decisions in baseball revolve around a fundamental fact: in general, right-handed batters tend to be more successful against left-handed pitchers and, to an even greater degree, left-handed batters tend to be more successful against right-handed pitchers. A manager with several left-handed batters in the regular lineup who knows the team will be facing a left-handed starting pitcher may respond by starting one or more of the right-handed backups on the team's roster. During the late innings of a game, as relief pitchers and pinch hitters are brought in, the opposing managers will often go back and forth trying to create favorable matchups with their substitutions: the manager of the fielding team trying to arrange same-handed pitcher-batter matchups, the manager of the batting team trying to arrange opposite-handed matchups. With a team that has the lead in the late innings, a manager may remove a starting position player—especially one whose turn at bat is not likely to come up again—for a more skillful fielder. The tactical decision that precedes almost every play in a baseball game involves pitch selection. By gripping and then releasing the baseball in a certain manner, and by throwing it at a certain speed, pitchers can cause the baseball to break to either side, or downward, as it approaches the batter. Among the resulting wide variety of pitches that may be thrown, the four basic types are the fastball, the changeup (or off-speed pitch), and two breaking balls—the curveball and the slider. Pitchers have different repertoires of pitches they are skillful at throwing. Conventionally, before each pitch, the catcher signals the pitcher what type of pitch to throw, as well as its general vertical and/or horizontal location. If there is disagreement on the selection, the pitcher may shake off the sign and the catcher will call for a different pitch. With a runner on base and taking a lead, the pitcher may attempt a pickoff, a quick throw to a fielder covering the base to keep the runner's lead in check or, optimally, effect a tag out. Pickoff attempts, however, are subject to rules that severely restrict the pitcher's movements before and during the pickoff attempt. Violation of any one of these rules could result in the umpire calling a balk against the pitcher, which permits any runners on base to advance one base with impunity. If an attempted stolen base is anticipated, the catcher may call for a pitchout, a ball thrown deliberately off the plate, allowing the catcher to catch it while standing and throw quickly to a base. Facing a batter with a strong tendency to hit to one side of the field, the fielding team may employ a shift, with most or all of the fielders moving to the left or right of their usual positions. With a runner on third base, the infielders may play in, moving closer to home plate to improve the odds of throwing out the runner on a ground ball, though a sharply hit grounder is more likely to carry through a drawn-in infield. 
Several basic offensive tactics come into play with a runner on first base, including the fundamental choice of whether to attempt a steal of second base. The hit and run is sometimes employed with a skillful contact hitter: the runner takes off with the pitch drawing the shortstop or second baseman over to second base, creating a gap in the infield for the batter to poke the ball through. The sacrifice bunt calls for the batter to focus on making contact with the ball so that it rolls a short distance into the infield, allowing the runner to advance into scoring position even at the expense of the batter being thrown out at first—a batter who succeeds is credited with a sacrifice. (A batter, particularly one who is a fast runner, may also attempt to bunt for a hit.) A sacrifice bunt employed with a runner on third base, aimed at bringing that runner home, is known as a squeeze play. With a runner on third and fewer than two outs, a batter may instead concentrate on hitting a fly ball that, even if it is caught, will be deep enough to allow the runner to tag up and score—a successful batter, in this case, gets credit for a sacrifice fly. The manager will sometimes signal a batter who is ahead in the count (i.e., has more balls than strikes) to take, or not swing at, the next pitch. The evolution of baseball from older bat-and-ball games is difficult to trace with precision. Consensus once held that today's baseball is a North American development from the older game rounders, popular in Great Britain and Ireland. "Baseball Before We Knew It: A Search for the Roots of the Game" (2005), by American baseball historian David Block, suggests that the game originated in England; recently uncovered historical evidence supports this position. Block argues that rounders and early baseball were actually regional variants of each other, and that the game's most direct antecedents are the English games of stoolball and "tut-ball". The earliest known reference to baseball is in a 1744 British publication, "A Little Pretty Pocket-Book", by John Newbery. Block discovered that the first recorded game of "Bass-Ball" took place in 1749 in Surrey, and featured the Prince of Wales as a player. This early form of the game was apparently brought to Canada by English immigrants. By the early 1830s, there were reports of a variety of uncodified bat-and-ball games recognizable as early forms of baseball being played around North America. In 1845, Alexander Cartwright, a member of New York City's Knickerbocker Club, led the codification of the so-called Knickerbocker Rules. While there are reports that the New York Knickerbockers played games in 1845, the contest long recognized as the first officially recorded baseball game in U.S. history took place on June 19, 1846, in Hoboken, New Jersey: the "New York Nine" defeated the Knickerbockers, 23–1, in four innings. With the Knickerbocker code as the basis, the rules of modern baseball continued to evolve over the next half-century. In the mid-1850s, a baseball craze hit the New York metropolitan area, and by 1856, local journals were referring to baseball as the "national pastime" or "national game". A year later, the sport's first governing body, the National Association of Base Ball Players, was formed. In 1867, it barred participation by African Americans. The more formally structured National League was founded in 1876. Professional Negro leagues formed, but quickly folded. 
In 1887, softball, under the name of indoor baseball or indoor-outdoor, was invented as a winter version of the parent game. The National League's first successful counterpart, the American League, which evolved from the minor Western League, was established in 1901, and virtually all of the modern baseball rules were in place by then. The National Agreement of 1903 formalized relations both between the two major leagues and between them and the National Association of Professional Base Ball Leagues, representing most of the country's minor professional leagues. The World Series, pitting the two major league champions against each other, was inaugurated that fall. The Black Sox Scandal of the 1919 World Series led to the formation of a new National Commission of baseball that drew the two major leagues closer together. The first major league baseball commissioner, Kenesaw Mountain Landis, was elected in 1920. That year also saw the founding of the Negro National League; the first significant Negro league, it would operate until 1931. For part of the 1920s, it was joined by the Eastern Colored League. Compared with the present, professional baseball in the early 20th century was lower-scoring, and pitchers were more dominant. The so-called dead-ball era ended in the early 1920s with several changes in rule and circumstance that were advantageous to hitters. Strict new regulations governing the ball's size, shape and composition, along with a new rule officially banning the spitball and other pitches that depended on the ball being treated or roughed up with foreign substances, resulted in a ball that traveled farther when hit. The rise of the legendary player Babe Ruth, the first great power hitter of the new era, helped permanently alter the nature of the game. In the late 1920s and early 1930s, St. Louis Cardinals general manager Branch Rickey invested in several minor league clubs and developed the first modern farm system. A new Negro National League was organized in 1933; four years later, it was joined by the Negro American League. The first elections to the National Baseball Hall of Fame took place in 1936. In 1939 Little League Baseball was founded in Pennsylvania. A large number of minor league teams disbanded when World War II led to a player shortage. Chicago Cubs owner Philip K. Wrigley led the formation of the All-American Girls Professional Baseball League to help keep the game in the public eye. The first crack in the unwritten agreement barring blacks from white-controlled professional ball occurred in 1945: Jackie Robinson was signed by the National League's Brooklyn Dodgers and began playing for their minor league team in Montreal. In 1947, Robinson broke the major leagues' color barrier when he debuted with the Dodgers. Latin American players, largely overlooked before, also started entering the majors in greater numbers. In 1951, two Chicago White Sox, Venezuelan-born Chico Carrasquel and black Cuban-born Minnie Miñoso, became the first Hispanic All-Stars. Integration proceeded slowly: by 1953, only six of the 16 major league teams had a black player on the roster. In 1975, the union's power—and players' salaries—began to increase greatly when the reserve clause was effectively struck down, leading to the free agency system. Significant work stoppages occurred in 1981 and 1994, the latter forcing the cancellation of the World Series for the first time in 90 years. 
Attendance had been growing steadily since the mid-1970s and in 1994, before the stoppage, the majors were setting their all-time record for per-game attendance. After play resumed in 1995, non-division-winning wild card teams became a permanent fixture of the post-season. Regular-season interleague play was introduced in 1997 and the second-highest attendance mark for a full season was set. In 2000, the National and American Leagues were dissolved as legal entities. While their identities were maintained for scheduling purposes (and the designated hitter distinction), the regulations and other functions—such as player discipline and umpire supervision—they had administered separately were consolidated under the rubric of MLB. In 2001, Barry Bonds established the current record of 73 home runs in a single season. There had long been suspicions that the dramatic increase in power hitting was fueled in large part by the abuse of illegal steroids (as well as by the dilution of pitching talent due to expansion), but the issue only began attracting significant media attention in 2002 and there was no penalty for the use of performance-enhancing drugs before 2004. In 2007, Bonds became MLB's all-time home run leader, surpassing Hank Aaron, as total major league and minor league attendance both reached all-time highs. Widely known as America's pastime, baseball is well established in several other countries as well. As early as 1877, a professional league, the International Association, featured teams from both Canada and the US. While baseball is widely played in Canada and many minor league teams have been based in the country, the American major leagues did not include a Canadian club until 1969, when the Montreal Expos joined the National League as an expansion team. In 1977, the expansion Toronto Blue Jays joined the American League. In 1847, American soldiers played what may have been the first baseball game in Mexico at Parque Los Berros in Xalapa, Veracruz. The first formal baseball league outside of the United States and Canada was founded in 1878 in Cuba, which maintains a rich baseball tradition. The Dominican Republic held its first islandwide championship tournament in 1912. Professional baseball tournaments and leagues began to form in other countries between the world wars, including the Netherlands (formed in 1922), Australia (1934), Japan (1936), Mexico (1937), and Puerto Rico (1938). The Japanese major leagues have long been considered the highest quality professional circuits outside of the United States. After World War II, professional leagues were founded in many Latin American countries, most prominently Venezuela (1946) and the Dominican Republic (1955). Since the early 1970s, the annual Caribbean Series has matched the championship clubs from the four leading Latin American winter leagues: the Dominican Professional Baseball League, Mexican Pacific League, Puerto Rican Professional Baseball League, and Venezuelan Professional Baseball League. In Asia, South Korea (1982), Taiwan (1990) and China (2003) all have professional leagues. Many European countries have professional leagues as well; the most successful, other than the Dutch league, is the Italian league, founded in 1948. In 2004, Australia won a surprise silver medal at the Olympic Games. The Confédération Européene de Baseball (European Baseball Confederation), founded in 1953, organizes a number of competitions between clubs from different countries. 
Other competitions between national teams, such as the Baseball World Cup and the Olympic baseball tournament, were administered by the International Baseball Federation (IBAF) from its formation in 1938 until its 2013 merger with the International Softball Federation to create the current joint governing body for both sports, the World Baseball Softball Confederation (WBSC). Women's baseball is played on an organized amateur basis in numerous countries. After being admitted to the Olympics as a medal sport beginning with the 1992 Games, baseball was dropped from the 2012 Summer Olympic Games at the 2005 International Olympic Committee meeting. It remained part of the 2008 Games. While the sport's lack of a following in much of the world was a factor, more important was MLB's reluctance to have a break during the Games to allow its players to participate. MLB initiated the World Baseball Classic, scheduled to precede the major league season, partly as a replacement, high-profile international tournament. The inaugural Classic, held in March 2006, was the first tournament involving national teams to feature a significant number of MLB participants. The Baseball World Cup was discontinued after its 2011 edition in favor of an expanded World Baseball Classic. Baseball has certain attributes that set it apart from the other popular team sports in the countries where it has a following. All of these sports use a clock; in all of them, play is less individual and more collective; and in none of them is the variation between playing fields nearly as substantial or important. The comparison between cricket and baseball demonstrates that many of baseball's distinctive elements are shared in various ways with its cousin sports. In clock-limited sports, games often end with a team that holds the lead killing the clock rather than competing aggressively against the opposing team. In contrast, baseball has no clock; a team cannot win without getting the last batter out and rallies are not constrained by time. At almost any turn in any baseball game, the most advantageous strategy is some form of aggressive strategy. In contrast, again, the clock comes into play even in the case of multi-day Test and first-class cricket: the possibility of a draw often encourages a team that is batting last and well behind to bat defensively, giving up any faint chance at a win to avoid a loss. While nine innings has been the standard since the beginning of professional baseball, the duration of the average major league game has increased steadily through the years. At the turn of the 20th century, games typically took an hour and a half to play. In the 1920s, they averaged just less than two hours, which eventually ballooned to 2:38 in 1960. By 1997, the average American League game lasted 2:57 (National League games were about 10 minutes shorter—pitchers at the plate making for quicker outs than designated hitters). In 2004, Major League Baseball declared that its goal was an average game of 2:45. By 2014, though, the average MLB game took over three hours to complete. The lengthening of games is attributed to longer breaks between half-innings for television commercials, increased offense, more pitching changes, and a slower pace of play with pitchers taking more time between each delivery, and batters stepping out of the box more frequently. Other leagues have experienced similar issues. 
In 2008, Nippon Professional Baseball took steps aimed at shortening games by 12 minutes from the preceding decade's average of 3:18. In 2016, the average nine-inning playoff game in Major League baseball was 3 hours and 35 minutes. This was up 10 minutes from 2015 and 21 minutes from 2014. Although baseball is a team sport, individual players are often placed under scrutiny and pressure. In 1915, a baseball instructional manual pointed out that every single pitch, of which there are often more than two hundred in a game, involves an individual, one-on-one contest: "the pitcher and the batter in a battle of wits". Contrasting the game with both football and basketball, scholar Michael Mandelbaum argues that "baseball is the one closest in evolutionary descent to the older individual sports". Pitcher, batter, and fielder all act essentially independent of each other. While coaching staffs can signal pitcher or batter to pursue certain tactics, the execution of the play itself is a series of solitary acts. If the batter hits a line drive, the outfielder is solely responsible for deciding to try to catch it or play it on the bounce and for succeeding or failing. The statistical precision of baseball is both facilitated by this isolation and reinforces it. As described by Mandelbaum, It is impossible to isolate and objectively assess the contribution each [football] team member makes to the outcome of the play... [E]very basketball player is interacting with all of his teammates all the time. In baseball, by contrast, every player is more or less on his own... Baseball is therefore a realm of complete transparency and total responsibility. A baseball player lives in a glass house, and in a stark moral universe... Everything that every player does is accounted for and everything accounted for is either good or bad, right or wrong. Cricket is more similar to baseball than many other team sports in this regard: while the individual focus in cricket is mitigated by the importance of the batting partnership and the practicalities of tandem running, it is enhanced by the fact that a batsman may occupy the wicket for an hour or much more. There is no statistical equivalent in cricket for the fielding error and thus less emphasis on personal responsibility in this area of play. Unlike those of most sports, baseball playing fields can vary significantly in size and shape. While the dimensions of the infield are specifically regulated, the only constraint on outfield size and shape for professional teams following the rules of MLB and Minor League Baseball is that fields built or remodeled since June 1, 1958, must have a minimum distance of from home plate to the fences in left and right field and to center. Major league teams often skirt even this rule. For example, at Minute Maid Park, which became the home of the Houston Astros in 2000, the Crawford Boxes in left field are only from home plate. There are no rules at all that address the height of fences or other structures at the edge of the outfield. The most famously idiosyncratic outfield boundary is the left-field wall at Boston's Fenway Park, in use since 1912: the Green Monster is from home plate down the line and tall. Similarly, there are no regulations at all concerning the dimensions of foul territory. Thus a foul fly ball may be entirely out of play in a park with little space between the foul lines and the stands, but a foulout in a park with more expansive foul ground. 
A fence in foul territory that is close to the outfield line will tend to direct balls that strike it back toward the fielders, while one that is farther away may actually prompt more collisions, as outfielders run full speed to field balls deep in the corner. These variations can make the difference between a double and a triple or inside-the-park home run. The surface of the field is also unregulated. Although there is a traditional field surfacing arrangement, used by virtually all MLB teams with naturally surfaced fields, teams are free to decide what areas will be grassed or bare. Some fields—including several in MLB—use an artificial surface, such as AstroTurf. Surface variations can have a significant effect on how ground balls behave and are fielded as well as on baserunning. Similarly, the presence of a roof (seven major league teams play in stadiums with permanent or retractable roofs) can greatly affect how fly balls are played. While football and soccer players deal with similar variations of field surface and stadium covering, the size and shape of their fields are much more standardized. The area out-of-bounds on a football or soccer field does not affect play the way foul territory in baseball does, so variations in that regard are largely insignificant. These physical variations create a distinctive set of playing conditions at each ballpark. Other local factors, such as altitude and climate, can also significantly affect play. A given stadium may acquire a reputation as a pitcher's park or a hitter's park, if one or the other discipline notably benefits from its unique mix of elements. The most exceptional park in this regard is Coors Field, home of the Colorado Rockies. Its high altitude of about 5,200 feet above sea level is responsible for giving it the strongest hitter's park effect in the major leagues due to the low air pressure. Wrigley Field, home of the Chicago Cubs, is known for its fickle disposition: a hitter's park when the strong winds off Lake Michigan are blowing out, it becomes more of a pitcher's park when they are blowing in. The absence of a standardized field affects not only how particular games play out, but the nature of team rosters and players' statistical records. For example, hitting a fly ball into right field might result in an easy catch on the warning track at one park, and a home run at another. A team that plays in a park with a relatively short right field, such as the New York Yankees, will tend to stock its roster with left-handed pull hitters, who can best exploit it. On the individual level, a player who spends most of his career with a team that plays in a hitter's park will gain an advantage in batting statistics over time—even more so if his talents are especially suited to the park. Organized baseball lends itself to statistics to a greater degree than many other sports. Each play is discrete and has a relatively small number of possible outcomes. In the late 19th century, a former cricket player, English-born Henry Chadwick of Brooklyn, was responsible for the "development of the box score, tabular standings, the annual baseball guide, the batting average, and most of the common statistics and tables used to describe baseball." The statistical record is so central to the game's "historical essence" that Chadwick came to be known as Father Baseball.
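The batting average that Chadwick helped popularize, and the park effects described above, are in practice simple ratios. The sketch below is purely illustrative: the function names and sample figures are invented for this example, and the one-season park factor shown is only a rough formulation of an idea that analysts compute in more sophisticated ways.

```python
def batting_average(hits: int, at_bats: int) -> float:
    """Traditional batting average: hits divided by at-bats."""
    return hits / at_bats if at_bats else 0.0


def simple_park_factor(runs_home: int, runs_away: int,
                       games_home: int, games_away: int) -> float:
    """A rough one-season park factor: combined runs per game in a
    team's home park divided by combined runs per game in its road
    games. Values above 1.0 suggest a hitter's park; values below
    1.0 suggest a pitcher's park."""
    return (runs_home / games_home) / (runs_away / games_away)


# Illustrative numbers only: a .300 hitter, and a park where scoring
# runs about 20% higher at home than on the road.
print(round(batting_average(165, 550), 3))
print(round(simple_park_factor(826, 689, 81, 81), 2))
```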
In the 1920s, American newspapers began devoting more and more attention to baseball statistics, initiating what journalist and historian Alan Schwarz describes as a "tectonic shift in sports, as intrigue that once focused mostly on teams began to go to individual players and their statistics lines." The Official Baseball Rules administered by MLB require the official scorer to categorize each baseball play unambiguously. The rules provide detailed criteria to promote consistency. The score report is the official basis for both the box score of the game and the relevant statistical records. General managers, managers, and baseball scouts use statistics to evaluate players and make strategic decisions. Certain traditional statistics are familiar to most baseball fans; they fall into four broad groups: basic batting, baserunning, pitching, and fielding statistics. Among the many other statistics that are kept are those collectively known as "situational statistics". For example, statistics can indicate which specific pitchers a certain batter performs best against. If a given situation statistically favors a certain batter, the manager of the fielding team may be more likely to change pitchers or have the pitcher intentionally walk the batter in order to face one who is less likely to succeed. Sabermetrics refers to the field of baseball statistical study and the development of new statistics and analytical tools. The term is also used to refer directly to new statistics themselves. The term was coined around 1980 by one of the field's leading proponents, Bill James, and derives from the Society for American Baseball Research (SABR). The growing popularity of sabermetrics since the early 1980s has brought more attention to two batting statistics that sabermetricians argue are much better gauges of a batter's skill than batting average. Some of the new statistics devised by sabermetricians have gained wide use. Writing in 1919, philosopher Morris Raphael Cohen described baseball as America's national religion. In the words of sports columnist Jayson Stark, baseball has long been "a unique paragon of American culture"—a status he sees as devastated by the steroid abuse scandal. Baseball has an important place in other national cultures as well: scholar Peter Bjarkman describes "how deeply the sport is ingrained in the history and culture of a nation such as Cuba, [and] how thoroughly it was radically reshaped and nativized in Japan." Since the early 1980s, the Dominican Republic, in particular the city of San Pedro de Macorís, has been the major leagues' primary source of foreign talent. In 2017, 83 of the 868 players on MLB Opening Day rosters (and disabled lists) were from the country. Among other Caribbean countries and territories, a combined 97 MLB players were born in Venezuela, Cuba, and Puerto Rico. Hall-of-Famer Roberto Clemente remains one of the greatest national heroes in Puerto Rico's history. While baseball has long been the island's primary athletic pastime, its once well-attended professional winter league has declined in popularity since 1990, when young Puerto Rican players began to be included in the major leagues' annual first-year player draft. In Asia, baseball is among the most popular sports in Japan and South Korea.
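For readers unfamiliar with the sabermetric measures mentioned above, on-base percentage and slugging percentage (often combined as OPS) are the statistics most commonly contrasted with batting average. The sketch below follows their standard definitions, but the sample batting line is invented purely for illustration.

```python
def on_base_percentage(hits: int, walks: int, hit_by_pitch: int,
                       at_bats: int, sac_flies: int) -> float:
    """OBP: how often a batter reaches base (hits, walks, hit-by-pitches)
    per plate appearance counted for the statistic."""
    return (hits + walks + hit_by_pitch) / (at_bats + walks + hit_by_pitch + sac_flies)


def slugging_percentage(singles: int, doubles: int, triples: int,
                        home_runs: int, at_bats: int) -> float:
    """SLG: total bases per at-bat, weighting extra-base hits."""
    total_bases = singles + 2 * doubles + 3 * triples + 4 * home_runs
    return total_bases / at_bats


# An invented season line: 180 hits (110 singles, 40 doubles, 5 triples,
# 25 home runs), 70 walks, 5 hit-by-pitches, 5 sacrifice flies, 550 at-bats.
obp = on_base_percentage(180, 70, 5, 550, 5)
slg = slugging_percentage(110, 40, 5, 25, 550)
print(round(obp, 3), round(slg, 3), round(obp + slg, 3))  # OBP, SLG, OPS
```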
The major league game in the United States was originally targeted toward a middle-class, white-collar audience: relative to other spectator pastimes, the National League's set ticket price of 50 cents in 1876 was high, while the location of playing fields outside the inner city and the workweek daytime scheduling of games were also obstacles to a blue-collar audience. A century later, the situation was very different. With the rise in popularity of other team sports with much higher average ticket prices—football, basketball, and hockey—professional baseball had become among the most blue-collar-oriented of leading American spectator sports. Overall, baseball has a large following in the United States; a 2006 poll found that nearly half of Americans are fans. In the late 1900s and early 2000s, baseball's position compared to football in the United States moved in contradictory directions. In 2008, MLB set a revenue record of $6.5 billion, matching the NFL's revenue for the first time in decades. A new MLB revenue record of more than $10 billion was set in 2017. On the other hand, the percentage of American sports fans polled who named baseball as their favorite sport was 9%, compared to pro football at 37%. In 1985, the respective figures were pro football 24%, baseball 23%. Because there are so many more major league games played, there is no comparison in overall attendance. In 2008, total attendance at major league games was the second-highest in history: 78.6 million, 0.7% off the record set the previous year. The following year, amid the U.S. recession, attendance fell by 6.6% to 73.4 million. Eight years later, it dropped under 73 million. Attendance at games held under the Minor League Baseball umbrella set a record in 2008, with 43.3 million. In Japan, where baseball is inarguably the leading spectator team sport, combined revenue for the twelve teams in Nippon Professional Baseball (NPB), the body that oversees both the Central and Pacific Leagues, was estimated at $1 billion in 2007. Total NPB attendance for the year was approximately 20 million. While in the preceding two decades, MLB attendance grew by 50 percent and revenue nearly tripled, the comparable NPB figures were stagnant. There are concerns that MLB's growing interest in acquiring star Japanese players will hurt the game in their home country. In Cuba, where baseball is by every reckoning the national sport, the national team overshadows the city and provincial teams that play in the top-level domestic leagues. Revenue figures are not released for the country's amateur system. Similarly, according to one official pronouncement, the sport's governing authority "has never taken into account attendance ... because its greatest interest has always been the development of athletes". As of 2018, Little League Baseball oversees leagues with close to 2.4 million participants in over 80 countries. The number of players has fallen since the 1990s, when 3 million children took part in Little League Baseball annually. Babe Ruth League teams have over 1 million participants. According to the president of the International Baseball Federation, between 300,000 and 500,000 women and girls play baseball around the world, including Little League and the introductory game of Tee Ball. A varsity baseball team is an established part of physical education departments at most high schools and colleges in the United States. In 2015, nearly half a million high schoolers and over 34,000 collegians played on their schools' baseball teams. 
By early in the 20th century, intercollegiate baseball was Japan's leading sport. Today, high school baseball in particular is immensely popular there. The final rounds of the two annual tournaments—the National High School Baseball Invitational Tournament in the spring, and the even more important National High School Baseball Championship in the summer—are broadcast around the country. The tournaments are known, respectively, as Spring Koshien and Summer Koshien after the 55,000-capacity stadium where they are played. In Cuba, baseball is a mandatory part of the state system of physical education, which begins at age six. Talented children as young as seven are sent to special district schools for more intensive training—the first step on a ladder whose acme is the national baseball team. Baseball has had a broad impact on popular culture, both in the United States and elsewhere. Dozens of English-language idioms have been derived from baseball; in particular, the game is the source of a number of widely used sexual euphemisms. The first networked radio broadcasts in North America were of the 1922 World Series: famed sportswriter Grantland Rice announced play-by-play from New York City's Polo Grounds on WJZ–Newark, New Jersey, which was connected by wire to WGY–Schenectady, New York, and WBZ–Springfield, Massachusetts. The baseball cap has become a ubiquitous fashion item not only in the United States and Japan, but also in countries where the sport itself is not particularly popular, such as the United Kingdom. Baseball has inspired many works of art and entertainment. One of the first major examples, Ernest Thayer's poem "Casey at the Bat", appeared in 1888. A wry description of the failure of a star player in what would now be called a "clutch situation", the poem became the source of vaudeville and other staged performances, audio recordings, film adaptations, and an opera, as well as a host of sequels and parodies in various media. There have been many baseball movies, including the Academy Award–winning "The Pride of the Yankees" (1942) and the Oscar nominees "The Natural" (1984) and "Field of Dreams" (1989). The American Film Institute's selection of the ten best sports movies includes "The Pride of the Yankees" at number 3 and "Bull Durham" (1988) at number 5. Baseball has provided thematic material for hits on both stage—the Adler–Ross musical "Damn Yankees"—and record—George J. Gaskin's "Slide, Kelly, Slide", Simon and Garfunkel's "Mrs. Robinson", and John Fogerty's "Centerfield". The baseball-inspired comedic sketch "Who's on First", popularized by Abbott and Costello in 1938, quickly became famous. Six decades later, "Time" named it the best comedy routine of the 20th century. Literary works connected to the game include the short fiction of Ring Lardner and novels such as Bernard Malamud's "The Natural" (the source for the movie), Robert Coover's "The Universal Baseball Association, Inc., J. Henry Waugh, Prop.", and W. P. Kinsella's "Shoeless Joe" (the source for "Field of Dreams"). Baseball's literary canon also includes the beat reportage of Damon Runyon; the columns of Grantland Rice, Red Smith, Dick Young, and Peter Gammons; and the essays of Roger Angell. Among the celebrated nonfiction books in the field are Lawrence S. Ritter's "The Glory of Their Times", Roger Kahn's "The Boys of Summer", and Michael Lewis's "Moneyball". 
The 1970 publication of major league pitcher Jim Bouton's tell-all chronicle "Ball Four" is considered a turning point in the reporting of professional sports. Baseball has also inspired the creation of new cultural forms. Baseball cards were introduced in the late 19th century as trade cards. A typical example featured an image of a baseball player on one side and advertising for a business on the other. In the early 1900s they were produced widely as promotional items by tobacco and confectionery companies. The 1930s saw the popularization of the modern style of baseball card, with a player photograph accompanied on the rear by statistics and biographical data. Baseball cards—many of which are now prized collectibles—are the source of the much broader trading card industry, involving similar products for different sports and non-sports-related fields. Modern fantasy sports began in 1980 with the invention of Rotisserie League Baseball by New York writer Daniel Okrent and several friends. Participants in a Rotisserie league draft notional teams from the list of active MLB players and play out an entire imaginary season with game outcomes based on the players' latest real-world statistics. Rotisserie-style play quickly became a phenomenon. Now known more generically as fantasy baseball, it has inspired similar games based on an array of different sports. The field boomed with increasing Internet access and new fantasy sports-related websites. By 2008, 29.9 million people in the United States and Canada were playing fantasy sports, spending $800 million on the hobby. The burgeoning popularity of fantasy baseball is also credited with the increasing attention paid to sabermetrics—first among fans, only later among baseball professionals. Benjamin Disraeli Benjamin Disraeli, 1st Earl of Beaconsfield, (21 December 1804 – 19 April 1881) was a British statesman of the Conservative Party who twice served as Prime Minister of the United Kingdom. He played a central role in the creation of the modern Conservative Party, defining its policies and its broad outreach. Disraeli is remembered for his influential voice in world affairs, his political battles with the Liberal Party leader William Ewart Gladstone, and his one-nation conservatism or "Tory democracy". He made the Conservatives the party most identified with the glory and power of the British Empire. He is the only British Prime Minister to have been Jewish by birth and the first person from an ethnic minority background to hold one of the Great Offices of State. He was also a novelist, publishing works of fiction even as prime minister. Disraeli was born in Bloomsbury, then a part of Middlesex. His father left Judaism after a dispute at his synagogue; young Benjamin became an Anglican at the age of 12. After several unsuccessful attempts, Disraeli entered the House of Commons in 1837. In 1846, Prime Minister Robert Peel split the Conservatives over his proposal to repeal the Corn Laws, which involved ending the tariff on imported grain. As a result of his clashes with Peel in the House of Commons, Disraeli became a major Tory figure. When Lord Derby, the party leader, thrice formed governments in the 1850s and 1860s, Disraeli served as Chancellor of the Exchequer and Leader of the House of Commons. Upon Derby's retirement in 1868, Disraeli became prime minister briefly before losing that year's general election. He returned to opposition until the general election of 1874, when he led the Tories as they won an outright majority. 
Disraeli's second term was dominated by the Eastern Question—the slow decay of the Ottoman Empire and the desire of other European powers, such as Russia, to gain at its expense. Disraeli arranged for the British to purchase a major interest in the Suez Canal Company (in Ottoman-controlled Egypt). In 1878, faced with Russian victories against the Ottomans, he worked at the Congress of Berlin to obtain peace in the Balkans at terms favourable to Britain and unfavourable to Russia, its longstanding enemy. This diplomatic victory established Disraeli as one of Europe's leading statesmen. World events thereafter moved against the Conservatives. The Second Anglo-Afghan War and the Anglo-Zulu War in South Africa undermined his public support. He angered British farmers by refusing to reinstitute the Corn Laws in response to poor harvests and cheap imported grain. With Gladstone conducting a massive speaking campaign, his Liberals bested the Conservatives at the 1880 general election. Disraeli died on 19 April 1881 at the age of 76. In his final months, he led the Conservatives in opposition. He had always maintained a close friendship with Queen Victoria, who in 1876 appointed him Earl of Beaconsfield. His last completed novel, "Endymion", was published in 1881 shortly before his death, more than 50 years after his first. Disraeli was born on 21 December 1804 at 6 King's Road, Bedford Row, Bloomsbury, London, the second child and eldest son of Isaac D'Israeli, a literary critic and historian, and Maria (Miriam), "née" Basevi. The family was of Sephardic Jewish Italian mercantile background. All of Disraeli's grandparents and great-grandparents were born in Italy; Isaac's father, Benjamin, moved to England from Venice in 1748. Disraeli later romanticised his origins, claiming that his father's family was of grand Spanish and Venetian descent; in fact Isaac's family was of no great distinction, but on Disraeli's mother's side, in which he took no interest, there were some distinguished forebears. Historians differ on Disraeli's motives for rewriting his family history: Bernard Glassman argues that it was intended to give him status comparable to that of England's ruling elite; Sarah Bradford believes "his dislike of the commonplace would not allow him to accept the facts of his birth as being as middle-class and undramatic as they really were". Disraeli's siblings were Sarah (1802–1859), Naphtali (born and died 1807), Ralph (1809–1898), and James ("Jem") (1813–1868). He was close to his sister, and on affectionate but more distant terms with his surviving brothers. Details of his schooling are sketchy. From the age of about six he was a day boy at a dame school in Islington that one of his biographers later described as "for those days a very high-class establishment". Two years later or so—the exact date has not been ascertained—he was sent as a boarder to Rev John Potticary's St Piran's school at Blackheath. While he was there, events at the family home changed the course of Disraeli's education and of his whole life: his father renounced Judaism and had the four children baptised into the Church of England in July and August 1817. Isaac D'Israeli had never taken religion very seriously, but had remained a conforming member of the Bevis Marks Synagogue. His father, the elder Benjamin, was a prominent and devout member; it was probably from respect for him that Isaac did not leave when he fell out with the synagogue authorities in 1813. 
After Benjamin senior died in 1816, Isaac felt free to leave the congregation following a second dispute. Isaac's friend Sharon Turner, a solicitor, convinced him that although he could comfortably remain unattached to any formal religion it would be disadvantageous to the children if they did so. Turner stood as godfather when Benjamin was baptised, aged twelve, on 31 July 1817. Conversion to Christianity enabled Disraeli to contemplate a career in politics. Britain in the early nineteenth century was not a greatly anti-Semitic society, and there had been Members of Parliament (MPs) from Jewish families since Samson Gideon in 1770. But until 1858, MPs were required to take the oath of allegiance "on the true faith of a Christian", necessitating at least nominal conversion. It is not known whether Disraeli formed any ambition for a parliamentary career at the time of his baptism, but there is no doubt that he bitterly regretted his parents' decision not to send him to Winchester College. As one of the great public schools of England, Winchester consistently provided recruits to the political elite. His two younger brothers were sent there, and it is not clear why Isaac D'Israeli chose to send his eldest son to a much less prestigious school. The boy evidently held his mother responsible for the decision; Bradford speculates that "Benjamin's delicate health and his obviously Jewish appearance may have had something to do with it." The school chosen for him was run by Eliezer Cogan at Higham Hill in Walthamstow. He began there in the autumn term of 1817. In November 1821, shortly before his seventeenth birthday, Disraeli was articled as a clerk to a firm of solicitors—Swain, Stevens, Maples, Pearse and Hunt—in the City of London. T F Maples was not only the young Disraeli's employer and a friend of his father's, but also his prospective father-in-law: Isaac and Maples entertained the possibility that the latter's only daughter might be a suitable match for Benjamin. A friendship developed, but there was no romance. The firm had a large and profitable business, and as the biographer R W Davis observes, the clerkship was "the kind of secure, respectable position that many fathers dream of for their children". Although biographers including Robert Blake and Bradford comment that such a post was incompatible with Disraeli's romantic and ambitious nature, he reportedly gave his employers satisfactory service, and later professed to have learned a good deal from his time with the firm. He recalled, 'I had some scruples, for even then I dreamed of Parliament. My father's refrain always was "Philip Carteret Webb", who was the most eminent solicitor of his boyhood and [...] an MP. It would be a mistake to suppose that the two years and more that I was in the office of our friend [Maples] were wasted. I have often thought, though I have often regretted the University, that it was much the reverse'. The year after joining Maples' firm, Benjamin changed his surname from D'Israeli to Disraeli. His reasons for doing so are unknown, but the biographer Bernard Glassman surmises that it was to avoid being confused with his father. Disraeli's sister and brothers adopted the new version of the name; Isaac and his wife retained the older form. Disraeli toured Belgium and the Rhine Valley with his father in the summer of 1824.
He later wrote that it was while travelling on the Rhine that he decided to abandon his position: "I determined when descending those magical waters that I would not be a lawyer." On their return to England he left the solicitors at the suggestion of Maples, with the aim of qualifying as a barrister. He enrolled as a student at Lincoln's Inn and joined the chambers of his uncle, Nathaniel Basevy, and then those of Benjamin Austen, who persuaded Isaac that Disraeli would never make a barrister and should be allowed to pursue a literary career. He had made a tentative start: in May 1824 he submitted a manuscript to his father's friend, the publisher John Murray, but withdrew it before Murray could decide whether to publish it. Released from the law, Disraeli did some work for Murray, but turned most of his attention not to literature but to speculative dealing on the stock exchange. Shares in South American mining companies were enjoying a huge boom. Spain was losing its South American colonies in the face of South American rebellions and revolutions. At the urging of George Canning, the British government recognised the new independent governments of Argentina (1824), Colombia and Mexico (both 1825). British capital could now flow much more freely to give the struggling countries and their most significant industries a sounder footing. Both sides would reap profits, all the while causing real damage to Spain, one of Britain's bitterest enemies. With no money of his own, Disraeli borrowed money to invest. He became involved with the financier John Diston Powles, who was prominent among those encouraging the mining boom. In the course of 1825, Powles asked Disraeli to write three anonymous pamphlets promoting the companies, which John Murray, another heavy investor in the boom, then published. Murray had had ambitions to establish a new morning paper to compete with "The Times". In 1825 Disraeli convinced him to proceed. The new paper, "The Representative", promoted the mines and the politicians who supported them, particularly Canning. Although Disraeli impressed Murray with his energy and commitment to the project, he failed in his key task of persuading the eminent writer John Gibson Lockhart to edit the paper. After that, Disraeli's influence on Murray waned, and to his resentment he was sidelined in the newspaper's affairs. The paper survived for only six months, partly because the mining bubble burst in late 1825, and partly because, according to Blake, the paper was "atrociously edited" and would have failed regardless. The mining crash ruined Disraeli. By June 1825, he and his business partners had lost £7,000, the equivalent of more than £615,000 in 2017. Disraeli could not pay off the last of his debts from this debacle until 1849. He turned to writing, motivated by his desperate need for money and a wish for revenge on Murray and others he felt had slighted him. There was a vogue for what was called 'silver-fork fiction'—novels depicting aristocratic life, usually by anonymous authors, read avidly by the aspirational middle classes. Disraeli's first novel, "Vivian Grey", published anonymously in four volumes in 1826–27 when he was just 23, was a thinly veiled re-telling of the history of "The Representative". It sold well, but reviewers were sharply critical of both the author and the book because its numerous solecisms made it obvious the author did not move in high society. 
When Disraeli's authorship was discovered, many in high society's most influential circles, already offended, knew where to place their ire. Furthermore, Murray and Lockhart, who had great power in the literary and publishing spheres, believed that Disraeli had caricatured them and abused their confidence—an accusation he denied but many of his biographers have given credence to. In later editions Disraeli made many changes to the novel, softening his satire, but the damage to his reputation proved long-lasting. Disraeli's biographer Jonathan Parry writes that the financial failure and personal criticism that Disraeli suffered in 1825 and 1826 were probably the trigger for a serious nervous crisis affecting him over the next four years: 'He had always been moody, sensitive, and solitary by nature, but now became seriously depressed and lethargic.' He was still living with his parents in London, but in search of the 'change of air' recommended by the family's doctors, his father took a succession of houses in the country and on the coast before Disraeli would finally seek wider horizons. Together with his sister's fiancé, William Meredith, Disraeli travelled widely in southern Europe and beyond in 1830–31. The trip was financed partly by another high-society novel, "The Young Duke", written in 1829–30. The tour was cut short suddenly by Meredith's death from smallpox in Cairo in July 1831. Despite this tragedy, and the need for treatment for a sexually transmitted disease on his return, Disraeli felt enriched by his experiences. He became, in Parry's words, "aware of values that seemed denied to his insular countrymen. The journey encouraged his self-consciousness, his moral relativism, and his interest in Eastern racial and religious attitudes." Blake regards the tour as one of the formative experiences of Disraeli's whole career: "[T]he impressions that it made on him were life-lasting. They conditioned his attitude toward some of the most important political problems which faced him in his later years—especially the Eastern Question; they also coloured many of his novels." Disraeli wrote two novels in the aftermath of the tour. "Contarini Fleming" (1832) was avowedly a self-portrait. It is subtitled "a psychological autobiography", and depicts the conflicting elements of its hero's character: the duality of northern and Mediterranean ancestry, the dreaming artist and the bold man of action. As Parry observes, the book ends on a political note, setting out Europe's progress "from feudal to federal principles". "The Wondrous Tale of Alroy", the following year, portrayed the problems of a medieval Jew in deciding between a small, exclusively Jewish state and a large empire embracing all. After the two novels were published, Disraeli declared that he would "write no more about myself". He had already turned his attention to politics in 1832, during the great crisis over the Reform Bill. He contributed to an anti-Whig pamphlet edited by John Wilson Croker and published by Murray, entitled "England and France: or a cure for Ministerial Gallomania". The choice of a Tory publication was regarded as strange by Disraeli's friends and relatives, who thought him more of a Radical. Indeed, he had objected to Murray about Croker's inserting "high Tory" sentiment: Disraeli remarked, "it is quite impossible that anything adverse to the general measure of Reform can issue from my pen." Moreover, at the time "Gallomania" was published, Disraeli was electioneering in High Wycombe in the Radical interest. 
Disraeli's politics at the time were influenced both by his rebellious streak and by his desire to make his mark. At that time, the politics of the nation were dominated by members of the aristocracy, together with a few powerful commoners. The Whigs derived from the coalition of Lords who had forced through the Bill of Rights in 1689, and in some cases were those lords' actual descendants, not merely their spiritual heirs. The Tories tended to support King and Church, and sought to thwart political change. A small number of Radicals, generally from northern constituencies, were the strongest advocates of continuing reform. In the early 1830s the Tories and the interests they represented appeared to be a lost cause. The other great party, the Whigs, were anathema to Disraeli: "Toryism is worn out & I cannot condescend to be a Whig." There were two general elections in 1832; Disraeli unsuccessfully stood as a Radical at High Wycombe in each. Disraeli's political views embraced certain Radical policies, particularly democratic reform of the electoral system, and also some Tory ones, including protectionism. He began to move in Tory circles. In 1834 he was introduced to the former Lord Chancellor, Lord Lyndhurst, by Henrietta Sykes, wife of Sir Francis Sykes. She was having an affair with Lyndhurst, and began another with Disraeli. Disraeli and Lyndhurst took an immediate liking to each other. Lyndhurst was an indiscreet gossip with a fondness for intrigue; this appealed greatly to Disraeli, who became his secretary and go-between. Disraeli stood as a Radical for the last time in 1835, unsuccessfully contesting High Wycombe once again. In April 1835, Disraeli fought a by-election at Taunton as a Tory candidate. The Irish MP Daniel O'Connell, misled by inaccurate press reports, thought Disraeli had slandered him while electioneering at Taunton, and launched an outspoken personal attack on him. Disraeli's public exchanges with O'Connell, extensively reproduced in "The Times", included a demand for a duel with the 60-year-old O'Connell's son (which resulted in Disraeli's temporary detention by the authorities), a reference to "the inextinguishable hatred with which [he] shall pursue [O'Connell's] existence", and the accusation that O'Connell's supporters had a "princely revenue wrung from a starving race of fanatical slaves". Disraeli was highly gratified by the dispute, which propelled him to general public notice for the first time. He did not defeat the incumbent Whig member, Henry Labouchere, but the Taunton constituency was regarded as unwinnable by the Tories. Disraeli kept Labouchere's majority down to 170, a good showing that put him in line for a winnable seat in the near future. With Lyndhurst's encouragement, Disraeli turned to writing propaganda for his newly adopted party. His "Vindication of the English Constitution" was published in December 1835. It was couched in the form of an open letter to Lyndhurst, and in Bradford's view encapsulates a political philosophy that Disraeli adhered to for the rest of his life. Its themes were the value of benevolent aristocratic government, a loathing of political dogma, and the modernisation of Tory policies. The following year he wrote a series of satires on politicians of the day, which he published in "The Times" under the pen-name "Runnymede". His targets included the Whigs, collectively and individually, Irish nationalists, and political corruption. Disraeli was now firmly in the Tory camp.
He was elected to the exclusively Tory Carlton Club in 1836, and was also taken up by the party's leading hostess, Lady Londonderry. In June 1837, William IV died, his niece the young Princess Victoria succeeded him, and Parliament was dissolved. On the recommendation of the Carlton Club, Disraeli was adopted as a Tory parliamentary candidate at the ensuing general election. In the election in July 1837, Disraeli won a seat in the House of Commons as one of two members, both Tory, for the constituency of Maidstone. The other was Wyndham Lewis, who helped finance Disraeli's election campaign, and who died the following year. In the same year Disraeli published a novel, "Henrietta Temple", which was a love story and social comedy, drawing on his affair with Henrietta Sykes. He had broken off the relationship in late 1836, distraught that she had taken yet another lover. His other novel of this period is "Venetia", a romance based on the characters of Shelley and Byron, written quickly to raise much-needed money. Disraeli made his maiden speech in Parliament on 7 December 1837. He followed O'Connell, whom he sharply criticised for the latter's "long, rambling, jumbling, speech". He was shouted down by O'Connell's supporters. After this unpromising start, Disraeli kept a low profile for the rest of the parliamentary session. He was a loyal supporter of the party leader Sir Robert Peel and his policies, with the exception of a personal sympathy for the Chartist movement that most Tories did not share. In 1839, Disraeli married Mary Anne Lewis, the widow of Wyndham Lewis. Twelve years Disraeli's senior, Mary Lewis had a substantial income of £5,000 a year. His motives were generally assumed to be mercenary, but the couple came to cherish one another, remaining close until she died more than three decades later. "Dizzy married me for my money", his wife said later, "But, if he had the chance again, he would marry me for love." Finding the financial demands of his Maidstone seat too much, Disraeli secured a Tory nomination for Shrewsbury, winning one of the constituency's two seats at the 1841 general election, despite serious opposition, and heavy debts which opponents seized on. The election was a massive defeat for the Whigs across the country, and Peel became Prime Minister. Disraeli hoped, unrealistically, for ministerial office. Though disappointed at being left on the back benches, he continued his support for Peel in 1842 and 1843, seeking to establish himself as an expert on foreign affairs and international trade. Although a Tory (or Conservative, as some in the party now called themselves), Disraeli was sympathetic to some of the aims of Chartism, and argued for an alliance between the landed aristocracy and the working class against the increasing power of the merchants and new industrialists in the middle class. After Disraeli won widespread acclaim in March 1842 for worsting the formidable Lord Palmerston in debate, he was taken up by a small group of idealistic new Tory MPs, with whom he formed the Young England group. They held that the landed interests should use their power to protect the poor from exploitation by middle-class businessmen. For many years in his parliamentary career Disraeli hoped to forge a paternalistic Tory-Radical alliance, but he was unsuccessful. Before the Reform Act 1867, the working class did not possess the vote and therefore had little political power.
Although Disraeli forged a personal friendship with John Bright, a Lancashire manufacturer and leading Radical, Disraeli was unable to persuade Bright to sacrifice his distinct position for parliamentary advancement. When Disraeli attempted to secure a Tory-Radical cabinet in 1852, Bright refused. Disraeli gradually became a sharp critic of Peel's government, often deliberately taking positions contrary to those of his nominal chief. The best-known of these stances were over the Maynooth Grant in 1845 and the repeal of the Corn Laws in 1846. However, the young MP had attacked his leader as early as 1843 on Ireland and then on foreign policy interventions. In a letter of February 1844, he slighted the Prime Minister for failing to send him a Policy Circular. He laid into the Whigs as freebooters, swindlers and conmen, but Peel's own free trade policies were directly in the firing line. The President of the Board of Trade, William Gladstone, resigned from the cabinet over the Maynooth Grant. The Corn Laws imposed a tariff on imported wheat, protecting British farmers from foreign competition, but making the cost of bread artificially high. Peel hoped that the repeal of the Corn Laws and the resultant influx of cheaper wheat into Britain would relieve the condition of the poor, and in particular the suffering caused by successive failure of potato crops in Ireland—the Great Famine. The first months of 1846 were dominated by a battle in Parliament between the free traders and the protectionists over the repeal of the Corn Laws, with the latter rallying around Disraeli and Lord George Bentinck. The landowning interest in the Party, under its leader, William Miles MP for East Somerset, had called upon Disraeli to lead the Party. Disraeli had declined, as Bentinck had offered to lead if he had Disraeli's support. Disraeli stated, in a letter to Sir William Miles of 11 June 1860, that he wished to help "because, from my earliest years, my sympathies had been with the landed interest of England". An alliance of free-trade Conservatives (the "Peelites"), Radicals, and Whigs carried repeal, and the Conservative Party split: the Peelites moved towards the Whigs, while a "new" Conservative Party formed around the protectionists, led by Disraeli, Bentinck, and Lord Stanley (later Lord Derby). The split in the Tory party over the repeal of the Corn Laws had profound implications for Disraeli's political career: almost every Tory politician with experience of office followed Peel, leaving the rump bereft of leadership. In Blake's words, "[Disraeli] found himself almost the only figure on his side capable of putting up the oratorical display essential for a parliamentary leader." Looking on from the House of Lords, the Duke of Argyll wrote that Disraeli "was like a subaltern in a great battle where every superior officer was killed or wounded". If the Tory Party could muster the electoral support necessary to form a government, then Disraeli now seemed to be guaranteed high office. However, he would take office with a group of men who possessed little or no official experience, who had rarely felt moved to speak in the House of Commons, and who, as a group, remained hostile to Disraeli on a personal level. In the event the matter was not put to the test, as the Tory split soon had the party out of office, not regaining power until 1852. The Conservatives would not again have a majority in the House of Commons until 1874. 
Peel successfully steered the repeal of the Corn Laws through Parliament, and was then defeated by an alliance of all his enemies on the issue of Irish law and order; he resigned in June 1846. The Tories remained split, and the Queen sent for Lord John Russell, the Whig leader. In the 1847 general election, Disraeli stood, successfully, for the Buckinghamshire constituency. The new House of Commons had more Conservative than Whig members, but the depth of the Tory schism enabled Russell to continue to govern. The Conservatives were led by Bentinck in the Commons and Stanley in the Lords. In 1847 a small political crisis occurred which removed Bentinck from the leadership and highlighted Disraeli's differences with his own party. In that year's general election, Lionel de Rothschild had been returned for the City of London. As a practising Jew he could not take the oath of allegiance in the prescribed Christian form, and therefore could not take his seat. Lord John Russell, the Whig leader who had succeeded Peel as Prime Minister and like Rothschild was a member for the City of London, proposed in the Commons that the oath should be amended to permit Jews to enter Parliament. Disraeli spoke in favour of the measure, arguing that Christianity was "completed Judaism", and asking the House of Commons "Where is your Christianity if you do not believe in their Judaism?" Russell, and Disraeli's future rival Gladstone, thought it brave of him to speak as he did; the speech was badly received by his own party. The Tories and the Anglican establishment were hostile to the bill. Samuel Wilberforce, Bishop of Oxford, spoke strongly against the measure, and implied that Russell was paying off the Jews for helping elect him. With the exception of Disraeli, every member of the future protectionist cabinet then in Parliament voted against the measure. One who was not yet an MP, Lord John Manners, stood against Rothschild when the latter re-submitted himself for election in 1849. Disraeli, who had attended the protectionists' dinner at the Merchant Taylors' Hall, joined Bentinck in speaking and voting for the bill, although his own speech was a standard one of toleration. The measure was voted down. In the aftermath of the debate Bentinck resigned the leadership and was succeeded by Lord Granby; Disraeli's own speech, thought by many of his own party to be blasphemous, ruled him out for the time being. While these intrigues played out, Disraeli was working with the Bentinck family to secure the necessary financing to purchase Hughenden Manor, in Buckinghamshire. Possession of a country house, and incumbency of a county constituency, were regarded as essential for a Tory with ambitions to lead the party. Disraeli and his wife alternated between Hughenden and several homes in London for the rest of their marriage. The negotiations were complicated by Bentinck's sudden death on 21 September 1848, but Disraeli obtained a loan of £25,000 from Bentinck's brothers Lord Henry Bentinck and Lord Titchfield. Within a month of his appointment Granby resigned the leadership in the Commons, feeling himself inadequate to the post, and the party functioned without a leader in the Commons for the rest of the parliamentary session. At the start of the next session, affairs were handled by a triumvirate of Granby, Disraeli, and John Charles Herries—indicative of the tension between Disraeli and the rest of the party, who needed his talents but mistrusted him. 
This confused arrangement ended with Granby's resignation in 1851; Disraeli effectively ignored the two men regardless. In March 1851, Lord John Russell's government was defeated over a bill to equalise the county and borough franchises, mostly because of divisions among his supporters. He resigned, and the Queen sent for Stanley, who felt that a minority government could do little and would not last long, so Russell remained in office. Disraeli regretted this, hoping for an opportunity, however brief, to show himself capable in office. Stanley, on the other hand, deprecated his inexperienced followers as a reason for not assuming office, "These are not names I can put before the Queen." At the end of June 1851, Stanley's father died, and he succeeded to his title as Earl of Derby. The Whigs were wracked by internal dissensions during the second half of 1851, much of which Parliament spent in recess. Russell dismissed Lord Palmerston from the cabinet, leaving the latter determined to deprive the Prime Minister of office as well. Palmerston did so within weeks of Parliament's reassembly on 4 February 1852, his followers combining with Disraeli's Tories to defeat the government on a Militia Bill, and Russell resigned. Derby had either to take office or risk damage to his reputation and he accepted the Queen's commission as Prime Minister. Palmerston declined any office; Derby had hoped to have him as Chancellor of the Exchequer. Disraeli, his closest ally, was his second choice and accepted, though disclaiming any great knowledge in the financial field. Gladstone refused to join the government. Disraeli may have been attracted to the office by the £5,000 per year salary, which would help pay his debts. Few of the new cabinet had held office before; when Derby tried to inform the Duke of Wellington of the names of the Queen's new ministers, the elderly Duke, who was somewhat deaf, inadvertently branded the new government by incredulously repeating "Who? Who?" In the following weeks, Disraeli served as Leader of the House (with Derby as Prime Minister in the Lords) and as Chancellor. He wrote regular reports on proceedings in the Commons to Victoria, who described them as "very curious" and "much in the style of his books". The Tories could not govern for long as a minority, and Parliament was prorogued on 1 July 1852; Disraeli hoped that they would gain a majority of about 40. Instead, the election later that month had no clear winner, and the Derby government remained in power pending the meeting of Parliament. Disraeli's task as Chancellor was to devise a budget which would satisfy the protectionist elements who supported the Tories, without uniting the free-traders against it. His proposed budget, which he presented to the Commons on 3 December, lowered the taxes on malt and tea, provisions designed to appeal to the working class. To make his budget revenue-neutral, as funds were needed to provide defences against the French, he doubled the house tax and continued the income tax. Disraeli's overall purpose was to enact policies which would benefit the working classes, making his party more attractive to them. Although the budget did not contain protectionist features, the opposition was prepared to destroy it—and Disraeli's career as Chancellor—in part out of revenge for his actions against Peel in 1846. MP Sidney Herbert predicted that the budget would fail because "Jews make no converts". 
Disraeli delivered the budget on 3 December 1852, and prepared to wind up the debate for the government on 16 December—it was customary for the Chancellor to have the last word. A massive defeat for the government was predicted. Disraeli attacked his opponents individually, and then as a force, "I face a Coalition ... This, too, I know, that England does not love coalitions." His speech of three hours was quickly seen as a parliamentary masterpiece. As MPs prepared to divide, Gladstone rose to his feet and began an angry speech, despite the efforts of Tory MPs to shout him down. The interruptions were fewer, as Gladstone gained control of the House, and in the next two hours painted a picture of Disraeli as frivolous and his budget as subversive. The government was defeated by 19 votes, and Derby resigned four days later. He was replaced by the Peelite Earl of Aberdeen, with Gladstone as his Chancellor. Because of Disraeli's unpopularity among the Peelites, no party reconciliation was possible while he remained Tory leader in the House of Commons. With the fall of the government, Disraeli and the Conservatives returned to the opposition benches; he was to spend three-quarters of his 44-year parliamentary career in opposition. Derby was reluctant to seek to unseat the government, fearing a repetition of the 'Who? Who?' ministry, and knowing that despite his lieutenant's strengths, shared dislike of Disraeli was part of what had formed the governing coalition. Disraeli, on the other hand, was anxious to return to office. In the interim he, as Conservative leader in the Commons, opposed the government on all major measures. Disraeli was awarded an honorary degree by Oxford University in June 1853. He had been recommended for it by Lord Derby, the university's Chancellor. The start of the Crimean War the following year caused a lull in party politics; Disraeli spoke patriotically in support. The British military efforts were marked by bungling, and in 1855 a restive Parliament considered a resolution to establish a committee on the conduct of the war. The Aberdeen government chose to make this a motion of confidence; Disraeli led the opposition to defeat the government, 305 to 148. Aberdeen resigned, and the Queen sent for Derby, who to Disraeli's frustration refused to take office. Palmerston was deemed essential to any Whig ministry, and he would not join any he did not head. The Queen reluctantly asked Palmerston to form a government. Under Palmerston, the war went better, and was ended by the Treaty of Paris in early 1856. Disraeli was early to call for peace, but had little influence on events. When a rebellion broke out in India in 1857, Disraeli took a keen interest in affairs, having been a member of a select committee in 1852 which considered how best to rule the subcontinent, and had proposed eliminating the governing role of the British East India Company. After peace was restored, and Palmerston in early 1858 brought in legislation for direct rule of India by the Crown, Disraeli opposed it. Many Conservative MPs refused to follow him and the bill passed the Commons easily. Palmerston's grip on the premiership was weakened by his response to the Orsini affair, in which an attempt was made to assassinate the French Emperor Napoleon III by an Italian revolutionary with a bomb made in Birmingham. 
At the request of the French ambassador, Palmerston put forward amendments to the conspiracy to murder statute, proposing to make creating an infernal device a felony rather than a misdemeanour. He was defeated by 19 votes on the second reading, with many Liberals crossing the aisle against him. He immediately resigned, and Lord Derby returned to office. Derby took office at the head of a purely "Conservative" administration, not in coalition with any other faction. He again offered a place to Gladstone, who declined. Disraeli was once more leader of the House of Commons and returned to the Exchequer. As in 1852, Derby led a minority government, dependent on the division of its opponents for survival. As Leader of the House, Disraeli resumed his regular reports to Queen Victoria, who had requested that he include what she "could not meet in newspapers". During its brief life of just over a year, the Derby government proved moderately progressive. The Government of India Act 1858 ended the role of the East India Company in governing the subcontinent. It also passed the Thames Purification Bill, which funded the construction of much larger sewers for London. Disraeli had supported efforts to allow Jews to sit in Parliament—the oaths required of new members could only be made in good faith by a Christian. Disraeli had a bill passed through the Commons allowing each house of Parliament to determine what oaths its members should take. This was grudgingly agreed to by the House of Lords, with a minority of Conservatives joining with the opposition to pass it. In 1858, Baron Lionel de Rothschild became the first MP to profess the Jewish faith. Faced with a vacancy, Disraeli and Derby tried yet again to bring Gladstone, still nominally a Conservative MP, into the government, hoping to strengthen it. Disraeli wrote a personal letter to Gladstone, asking him to place the good of the party above personal animosity: "Every man performs his office, and there is a Power, greater than ourselves, that disposes of all this." In responding to Disraeli, Gladstone denied that personal feelings played any role in his decisions then and previously whether to accept office, while acknowledging that there were differences between him and Derby "broader than you may have supposed". The Tories pursued a Reform Bill in 1859, which would have resulted in a modest increase to the franchise. The Liberals were healing the breaches between those who favoured Russell and the Palmerston loyalists, and in late March 1859, the government was defeated on a Russell-sponsored amendment. Derby dissolved Parliament, and the ensuing general election resulted in modest Tory gains, but not enough to control the Commons. When Parliament assembled, Derby's government was defeated by 13 votes on an amendment to the Address from the Throne. He resigned, and the Queen reluctantly sent for Palmerston again. After Derby's second ejection from office, Disraeli faced dissension within Conservative ranks from those who blamed him for the defeat, or who felt he was disloyal to Derby—the former Prime Minister warned Disraeli of some MPs seeking his removal from the front bench. Among the conspirators were Lord Robert Cecil, a young Conservative MP who would a quarter century later become Prime Minister as Lord Salisbury; he wrote that having Disraeli as leader in the Commons decreased the Conservatives' chance of holding office. 
When Cecil's father objected, Lord Robert stated, "I have merely put into print what all the country gentlemen were saying in private." Disraeli led a toothless opposition in the Commons—seeing no way of unseating Palmerston, Derby had privately agreed not to seek the government's defeat. Disraeli kept himself informed on foreign affairs, and on what was going on in cabinet, thanks to a source within it. When the American Civil War began in 1861, Disraeli said little publicly, but like most Englishmen expected the South to win. Less reticent were Palmerston, Gladstone (again Chancellor) and Russell, whose statements in support of the South contributed to years of hard feelings in the United States. In 1862, Disraeli met Prussian Count Otto von Bismarck for the first time and said of him, "be careful about that man, he means what he says". The party truce ended in 1864, with Tories outraged over Palmerston's handling of the territorial dispute between the German Confederation and Denmark known as the Schleswig-Holstein Question. Disraeli had little help from Derby, who was ill, but he united the party enough on a no-confidence vote to limit the government to a majority of 18—Tory defections and absentees kept Palmerston in office. Despite rumours about Palmerston's health as he passed his eightieth birthday, he remained personally popular, and the Liberals increased their margin in the July 1865 general election. In the wake of the poor election results, Derby predicted to Disraeli that neither of them would ever hold office again. Political plans were thrown into disarray by Palmerston's death on 18 October 1865. Russell became Prime Minister again, with Gladstone clearly the Liberal Party's leader-in-waiting, and as Leader of the House Disraeli's direct opponent. One of Russell's early priorities was a Reform Bill, but the proposed legislation that Gladstone announced on 12 March 1866 divided his party. The Conservatives and the dissident Liberals repeatedly attacked Gladstone's bill, and in June finally defeated the government; Russell resigned on 26 June. The dissidents were unwilling to serve under Disraeli in the House of Commons, and Derby formed a third Conservative minority government, with Disraeli again as Chancellor. In 1867, the Conservatives introduced a Reform Bill. Without a majority in the Commons, the Conservatives had little choice but to accept amendments that considerably liberalised the legislation, though Disraeli refused to accept any from Gladstone. The Reform Act 1867 passed that August, extending the franchise by 938,427—an increase of 88%—by giving the vote to male householders and male lodgers paying at least £10 for rooms. It eliminated rotten boroughs with fewer than 10,000 inhabitants, and granted constituencies to 15 unrepresented towns, with extra representation to large municipalities such as Liverpool and Manchester. This act was unpopular with the right wing of the Conservative Party, most notably Lord Cranborne (as Robert Cecil was by then known), who resigned from the government and spoke against the bill, accusing Disraeli of "a political betrayal which has no parallel in our Parliamentary annals". Cranborne, however, was unable to lead an effective rebellion against Derby and Disraeli. Disraeli gained wide acclaim and became a hero to his party for the "marvellous parliamentary skill" with which he secured the passage of Reform in the Commons. Derby had long suffered from attacks of gout which sent him to his bed, unable to deal with politics. 
As the new session of Parliament approached in February 1868, he was bedridden at his home, Knowsley Hall, near Liverpool. He was reluctant to resign, reasoning that he was only 68, much younger than either Palmerston or Russell at the end of their premierships; yet he knew that his "attacks of illness would, at no distant period, incapacitate me from the discharge of my public duties", and doctors had warned him that his health required his resignation from office. In late February, with Parliament in session and Derby absent, he wrote to Disraeli asking for confirmation that "you will not shrink from the additional heavy responsibility". Reassured, he wrote to the Queen, resigning and recommending Disraeli as "only he could command the cordial support, en masse, of his present colleagues". Disraeli went to Osborne House on the Isle of Wight, where the Queen asked him to form a government. The monarch wrote to her daughter, Prussian Crown Princess Victoria, "Mr. Disraeli is Prime Minister! A proud thing for a man 'risen from the people' to have obtained!" The new Prime Minister told those who came to congratulate him, "I have climbed to the top of the greasy pole." The Conservatives remained a minority in the House of Commons and the passage of the Reform Bill required the calling of a new election once the new voting register had been compiled. Disraeli's term as Prime Minister, which began in February 1868, would therefore be short unless the Conservatives won the general election. He made only two major changes in the cabinet: he replaced Lord Chelmsford as Lord Chancellor with Lord Cairns, and brought in George Ward Hunt as Chancellor of the Exchequer. Derby had intended to replace Chelmsford once a vacancy in a suitable sinecure developed. Disraeli was unwilling to wait, and Cairns, in his view, was a far stronger minister. Disraeli's first premiership was dominated by the heated debate over the Church of Ireland. Although Ireland was overwhelmingly Roman Catholic, the Protestant Church remained the established church and was funded by direct taxation, which was greatly resented by the Catholic majority. An initial attempt by Disraeli to negotiate with Archbishop Manning the establishment of a Roman Catholic university in Dublin foundered in March when Gladstone moved resolutions to disestablish the Irish Church altogether. The proposal united the Liberals under Gladstone's leadership, while causing divisions among the Conservatives. The Conservatives remained in office because the new electoral register was not yet ready; neither party wished a poll under the old roll. Gladstone began using the Liberal majority in the House of Commons to push through resolutions and legislation. Disraeli's government survived until the December general election, at which the Liberals were returned to power with a majority of about 110. Despite its short life, the first Disraeli government succeeded in passing a number of pieces of legislation of a politically noncontentious sort. It ended public executions, and the Corrupt Practices Act did much to end electoral bribery. It authorised an early version of nationalisation, having the Post Office buy up the telegraph companies. Amendments to the school law, the Scottish legal system, and the railway laws were passed. Disraeli sent the successful expedition against Tewodros II of Ethiopia under Sir Robert Napier. With Gladstone's Liberal majority dominant in the Commons, Disraeli could do little but protest as the government advanced legislation. 
Accordingly, he chose to await Liberal mistakes. Having leisure time as he was not in office, he wrote a new novel, "Lothair" (1870). A work of fiction by a former Prime Minister was a new thing for Britain, and the book became a bestseller. By 1872, there was dissent in the Conservative ranks over the failure to challenge Gladstone and his Liberals. This was quieted as Disraeli took steps to assert his leadership of the party, and as divisions among the Liberals became clear. Public support for Disraeli was shown by cheering at a thanksgiving service that year on the recovery of the Prince of Wales from illness, while Gladstone was met with silence. Disraeli had supported the efforts of party manager John Eldon Gorst to put the administration of the Conservative Party on a modern basis. On Gorst's advice, Disraeli gave a speech to a mass meeting in Manchester that year. To roaring approval, he compared the Liberal front bench to "a range of exhausted volcanoes. Not a flame flickers on a single pallid crest. But the situation is still dangerous. There are occasional earthquakes and ever and again the dark rumbling of the sea." Gladstone, Disraeli stated, dominated the scene and "alternated between a menace and a sigh". At his first departure from 10 Downing Street in 1868, Disraeli had had Victoria create Mary Anne Viscountess of Beaconsfield in her own right in lieu of a peerage for himself. Through 1872 the eighty-year-old peeress was suffering from stomach cancer. She died on 15 December. Urged by a clergyman to turn her thoughts to Jesus Christ in her final days, she said she could not: "You know Dizzy is my J.C." After she died, Gladstone, who always had a liking for Mary Anne, sent her widower a letter of condolence. In 1873, Gladstone brought forward legislation to establish a Catholic university in Dublin. This divided the Liberals, and on 12 March an alliance of Conservatives and Irish Catholics defeated the government by three votes. Gladstone resigned, and the Queen sent for Disraeli, who refused to take office. Without a general election, a Conservative government would be another minority, dependent for survival on the division of its opponents. Disraeli wanted the power a majority would bring, and felt he could gain it later by leaving the Liberals in office now. Gladstone's government struggled on, beset by scandal and unimproved by a reshuffle. As part of that change, Gladstone took on the office of Chancellor, leading to questions as to whether he had to stand for re-election on taking on a second ministry—until the 1920s, MPs becoming ministers, thus taking an office of profit under the Crown, had to seek re-election. In January 1874, Gladstone called a general election, convinced that if he waited longer, he would do worse at the polls. Balloting was spread over two weeks, beginning on 1 February. Disraeli devoted much of his campaign to decrying the Liberal programme of the past five years. As the constituencies voted, it became clear that the result would be a Conservative majority, the first since 1841. In Scotland, where the Conservatives were perennially weak, they increased from seven seats to nineteen. Overall, they won 350 seats to 245 for the Liberals and 57 for the Irish Home Rule League. The Queen sent for Disraeli, and he became Prime Minister for the second time. Disraeli's cabinet of twelve, with six peers and six commoners, was the smallest since Reform. 
Of the peers, five had been in Disraeli's 1868 cabinet; the sixth, Lord Salisbury, was reconciled to Disraeli after negotiation and became Secretary of State for India. Lord Stanley (who had succeeded his father, the former Prime Minister, as Earl of Derby) became Foreign Secretary and Sir Stafford Northcote the Chancellor. In August 1876, Disraeli was elevated to the House of Lords as Earl of Beaconsfield and Viscount Hughenden. The Queen had offered to ennoble him as early as 1868; he had then declined. She did so again in 1874, when he fell ill at Balmoral, but he was reluctant to leave the Commons for a house in which he had no experience. Continued ill-health during his second premiership caused him to contemplate resignation, but his lieutenant, Derby, was unwilling, feeling that he could not manage the Queen. For Disraeli, the Lords, where the debate was less intense, was the alternative to resignation from office. Five days before the end of the 1876 session of Parliament, on 11 August, Disraeli was seen to linger and look around the chamber before departing the Commons. Newspapers reported his ennoblement the following morning. The earldom of Beaconsfield was to have been bestowed on Edmund Burke in 1797, but he had died before receiving it. Beaconsfield, a town near Hughenden, had also given its name to a minor character in "Vivian Grey". Disraeli made various statements about his elevation, writing to Selina, Lady Bradford, on 8 August 1876, "I am quite tired of that place [the Commons]" but when asked by a friend how he liked the Lords, replied, "I am dead; dead but in the Elysian fields." Under the stewardship of Richard Assheton Cross, the Home Secretary, Disraeli's new government enacted many reforms, including the Artisans' and Labourers' Dwellings Improvement Act 1875, which made inexpensive loans available to towns and cities to construct working-class housing. Also enacted were the Public Health Act 1875, modernising sanitary codes throughout the nation, the Sale of Food and Drugs Act (1875), and the Education Act (1876). Disraeli's government also introduced a new Factory Act meant to protect workers, the Conspiracy and Protection of Property Act 1875, which allowed peaceful picketing, and the Employers and Workmen Act (1875) to enable workers to sue employers in the civil courts if they broke legal contracts. As a result of these social reforms, the Liberal-Labour MP Alexander Macdonald told his constituents in 1879, "The Conservative party have done more for the working classes in five years than the Liberals have in fifty." Gladstone in 1870 had sponsored an Order in Council, introducing competitive examination into the Civil Service, diminishing the political aspects of government hiring. Disraeli did not agree, and while he did not seek to reverse the order, his actions often frustrated its intent. For example, Disraeli made political appointments to positions previously given to career civil servants. In this, he was backed by his party, hungry for office and its emoluments after almost thirty years with only brief spells in government. Disraeli gave positions to hard-up Conservative leaders, even—to Gladstone's outrage—creating one office at £2,000 per year. Nevertheless, Disraeli made fewer peers (only 22, and one of those one of Victoria's sons) than had Gladstone—the Liberal leader had arranged for the bestowal of 37 peerages during his just over five years in office. 
As he had in government posts, Disraeli rewarded old friends with clerical positions, making Sydney Turner, son of a good friend of Isaac D'Israeli, Dean of Ripon. He favoured Low Church clergymen for promotion, disliking other movements in Anglicanism for political reasons. In this, he came into disagreement with the Queen, who, out of loyalty to her late husband, Albert, Prince Consort, preferred Broad Church teachings. One controversial appointment had occurred shortly before the 1868 election. When the position of Archbishop of Canterbury fell vacant, Disraeli reluctantly agreed to the Queen's preferred candidate, Archibald Tait, the Bishop of London. To fill Tait's vacant see, Disraeli was urged by many people to appoint Samuel Wilberforce, the former Bishop of Winchester and leading figure in London society. Disraeli disliked Wilberforce, and instead appointed John Jackson, the Bishop of Lincoln. Blake suggested that, on balance, these appointments cost Disraeli more votes than they gained him. Disraeli always considered foreign affairs to be the most critical and most interesting part of statesmanship. Nevertheless, his biographer Robert Blake doubts that his subject had specific ideas about foreign policy when he took office in 1874. He had rarely travelled abroad; since his youthful tour of the Middle East in 1830–1831, he had left Britain only for his honeymoon and three visits to Paris, the last of which was in 1856. As he had criticised Gladstone for a do-nothing foreign policy, he most probably contemplated what actions would reassert Britain's place in Europe. His brief first premiership, and the first year of his second, gave him little opportunity to make his mark in foreign affairs. The Suez Canal, opened in 1869, cut weeks and thousands of miles off the journey between Britain and India; in 1875, approximately 80% of the ships using the canal were British. In the event of another rebellion in India, or of a Russian invasion, the time saved at Suez might be crucial. The canal had been built by French interests, and much of the ownership and many of the bonds remained in French hands, though some of the stock belonged to Isma'il Pasha, the Khedive of Egypt, who was noted for his profligate spending. The canal was losing money, and an attempt by Ferdinand de Lesseps, its builder, to raise the tolls had fallen through when the Khedive threatened to use military force to prevent it; the episode had also attracted Disraeli's attention. The Khedive governed Egypt under the Ottoman Empire; as in the Crimea, the issue of the Canal raised the Eastern Question of what to do about the decaying empire governed from Constantinople. With much of the pre-canal trade and communications between Britain and India passing through the Ottoman Empire, Britain had done its best to prop up the Ottomans against the threat that Russia would take Constantinople, cutting those communications, and giving Russian ships unfettered access to the Mediterranean. The French might also threaten those lines from colonies in Syria. Britain had had the opportunity to purchase shares in the canal but had declined to do so. Disraeli had passed near Suez in his tour of the Middle East in his youth, and on taking office, recognising the British interest in the canal as a gateway to India, he sent the Liberal MP Nathan Rothschild to Paris to enquire about buying de Lesseps's shares. 
On 14 November 1875, the editor of the "Pall Mall Gazette", Frederick Greenwood, learned from London banker Henry Oppenheim that the Khedive was seeking to sell his shares in the Suez Canal Company to a French firm. Greenwood quickly told Lord Derby, the Foreign Secretary, who notified Disraeli. The Prime Minister moved immediately to secure the shares. On 23 November, the Khedive offered to sell the shares for 100,000,000 francs. Rather than seek the aid of the Bank of England, Disraeli asked Lionel de Rothschild to lend the funds. Rothschild did so, and controversially took a commission on the deal. The banker's capital was at risk, for Parliament could have refused to ratify the transaction. The contract for purchase was signed at Cairo on 25 November, and the shares deposited at the British consulate the following day. Disraeli told the Queen, "it is settled; you have it, madam!" The public saw the venture as a daring British statement of its dominance of the seas. Sir Ian Malcolm described the Suez Canal share purchase as "the greatest romance of Mr. Disraeli's romantic career". In the following decades, the security of the Suez Canal, as the pathway to India, became a major focus of British foreign policy. A later Foreign Secretary, Lord Curzon, described the canal in 1909 as "the determining influence of every considerable movement of British power to the east and south of the Mediterranean". Although initially curious about Disraeli when he entered Parliament in 1837, Victoria came to detest him over his treatment of Peel. Over time, her dislike softened, especially as Disraeli took pains to cultivate her. He told Matthew Arnold, "Everybody likes flattery; and, when you come to royalty, you should lay it on with a trowel". Disraeli's biographer, Adam Kirsch, suggests that Disraeli's obsequious treatment of his queen was part flattery, part belief that this was how a queen should be addressed by a loyal subject, and part awe that a middle-class man of Jewish birth should be the companion of a monarch. By the time of his second premiership, Disraeli had built a strong relationship with Victoria, probably closer to her than any of her Prime Ministers except her first, Lord Melbourne. When Disraeli returned as Prime Minister in 1874 and went to kiss hands, he did so literally, on one knee; and, according to Richard Aldous in his book on the rivalry between Disraeli and Gladstone, "for the next six years Victoria and Disraeli would exploit their closeness for mutual advantage." Victoria had long wished to have an imperial title, reflecting Britain's expanding domain. She was irked that Tsar Alexander II, as an emperor, held a higher rank than she did, and was appalled that her daughter, the Prussian Crown Princess, would outrank her when her husband came to the throne. She also saw an imperial title as proclaiming Britain's increased stature in the world. The title "Empress of India" had been used informally with respect to Victoria for some time and she wished to have that title formally bestowed on her. The Queen prevailed upon Disraeli to introduce a Royal Titles Bill, and also told him of her intent to open Parliament in person, which during this time she did only when she wanted something from legislators. Disraeli was cautious in response, as careful soundings of MPs brought a negative reaction, and declined to place such a proposal in the Queen's Speech. Once the desired bill was prepared, Disraeli's handling of it was not adept. 
He neglected to notify either the Prince of Wales or the opposition, and was met by irritation from the prince and a full-scale attack from the Liberals. An old enemy of Disraeli, former Liberal Chancellor Robert Lowe, alleged during the debate in the Commons that two previous Prime Ministers had refused to introduce such legislation for the Queen. Gladstone immediately stated that he was not one of them, and the Queen gave Disraeli leave to quote her as saying she had never approached a Prime Minister with such a proposal. According to Blake, Disraeli "in a brilliant oration of withering invective proceeded to destroy Lowe", who apologised and never held office again. Disraeli said of Lowe that he was the only person in London with whom he would not shake hands, and that "he is in the mud and there I leave him." Fearful of losing, Disraeli was reluctant to bring the bill to a vote in the Commons, but when he eventually did, it passed with a majority of 75. Once the bill was formally enacted, Victoria began signing her letters "Victoria R & I" ("Regina et Imperatrix", that is, Queen and Empress). According to Aldous, "the unpopular Royal Titles Act, however, shattered Disraeli's authority in the House of Commons". In July 1875, Serb populations in Bosnia and Herzegovina, then provinces of the Ottoman Empire, rose in revolt against their Turkish masters, alleging religious persecution and poor administration. The following January, Sultan Abdülaziz agreed to reforms proposed by Hungarian statesman Julius Andrássy, but the rebels, suspecting they might win their freedom, continued their uprising, joined by militants in Serbia and Bulgaria. The Turks suppressed the Bulgarian uprising harshly, and when reports of these actions reached Britain, Disraeli and Derby stated in Parliament that they did not believe them. Disraeli called them "coffee-house babble" and dismissed allegations of torture by the Ottomans since "Oriental people usually terminate their connections with culprits in a more expeditious fashion". Gladstone, who had left the Liberal leadership and retired from public life, was appalled by reports of atrocities in Bulgaria, and in August 1876 penned a hastily written pamphlet arguing that the Turks should be deprived of Bulgaria because of what they had done there. He sent a copy to Disraeli, who called it "vindictive and ill-written ... of all the Bulgarian horrors perhaps the greatest". Gladstone's pamphlet became an immense best-seller and rallied the Liberals to urge that the Ottoman Empire should no longer be a British ally. Disraeli wrote to Lord Salisbury on 3 September, "Had it not been for these unhappy 'atrocities', we should have settled a peace very honourable to England and satisfactory to Europe. Now we are obliged to work from a new point of departure, and dictate to Turkey, who has forfeited all sympathy." In spite of this, Disraeli's policy favoured Constantinople and the territorial integrity of its empire. Disraeli and the cabinet sent Salisbury as lead British representative to the Constantinople Conference, which met in December 1876 and January 1877. In advance of the conference, Disraeli sent Salisbury private word to seek British military occupation of Bulgaria and Bosnia, and British control of the Ottoman Army. Salisbury ignored these instructions, which his biographer, Andrew Roberts, deemed "ludicrous". Nevertheless, the conference failed to reach agreement with the Turks. Parliament opened in February 1877, with Disraeli now in the Lords as Earl of Beaconsfield. 
He spoke there only once in the 1877 session on the Eastern Question, stating on 20 February that there was a need for stability in the Balkans, and that forcing Turkey into territorial concessions would do nothing to secure it. The Prime Minister wanted a deal with the Ottomans whereby Britain would temporarily occupy strategic areas to deter the Russians from war, to be returned on the signing of a peace treaty, but found little support in his cabinet, which favoured partition of the Ottoman Empire. As Disraeli, by then in poor health, continued to battle within the cabinet, Russia invaded Turkey on 21 April, beginning the Russo-Turkish War. The Russians pushed through Ottoman territory, and by December 1877 had captured the strategic Bulgarian town of Plevna; their march on Constantinople seemed inevitable. The war divided the British, but the Russian success caused some to forget the atrocities and call for intervention on the Turkish side. Others hoped for further Russian successes. The fall of Plevna was a major story for weeks in the newspapers, and Disraeli's warnings that Russia was a threat to British interests in the eastern Mediterranean were deemed prophetic. The jingoistic attitude of many Britons increased Disraeli's political support, and the Queen acted to help him as well, showing her favour by visiting him at Hughenden—the first time she had visited the country home of her Prime Minister since the Melbourne ministry. At the end of January 1878, the Ottoman Sultan appealed to Britain to save Constantinople. Amid war fever in Britain, the government asked Parliament to vote £6,000,000 to prepare the Army and Navy for war. Gladstone, who had again involved himself in politics, opposed the measure, but fewer than half his party voted with him. Popular opinion was with Disraeli, though some thought him too soft for not immediately declaring war on Russia. With the Russians close to Constantinople, the Turks yielded and signed the Treaty of San Stefano in March 1878, conceding a Bulgarian state which would cover a large part of the Balkans. It was to be initially occupied by the Russians, and many feared that it would give them a client state close to Constantinople. Other Ottoman possessions in Europe would become independent; additional territory was to be ceded directly to Russia. This was unacceptable to the British, who protested, hoping to get the Russians to agree to attend an international conference which German Chancellor Bismarck proposed to hold at Berlin. The cabinet discussed Disraeli's proposal to position Indian troops at Malta for possible transit to the Balkans and call out reserves. Derby resigned in protest, and Disraeli appointed Salisbury as Foreign Secretary. Amid British preparations for war, the Russians and Turks agreed to discussions at Berlin. In advance of the meeting, confidential negotiations took place between Britain and Russia in April and May 1878. The Russians were willing to make changes to the big Bulgaria, but were determined to retain their new possessions, Bessarabia in Europe and Batum and Kars on the east coast of the Black Sea. To counterbalance this, Britain required a possession in the Eastern Mediterranean where it might base ships and troops, and negotiated with the Ottomans for the cession of Cyprus. Once this was secretly agreed, Disraeli was prepared to allow Russia's territorial gains. The Congress of Berlin was held in June and July 1878; the central relationship in it was that between Disraeli and Bismarck. 
In later years, the German chancellor would show visitors to his office three pictures on the wall: "the portrait of my Sovereign, there on the right that of my wife, and on the left, there, that of Lord Beaconsfield". Disraeli caused an uproar in the congress by making his opening address in English, rather than in French, hitherto accepted as the international language of diplomacy. By one account, the British ambassador in Berlin, Lord Odo Russell, hoping to spare the delegates Disraeli's very poor French accent, told Disraeli that the congress was hoping to hear a speech in the English tongue by one of its masters. Disraeli left much of the detailed work to Salisbury, concentrating his efforts on making it as difficult as possible for the broken-up big Bulgaria to reunite. Disraeli did not have things all his own way: he intended that Batum be demilitarised, but the Russians obtained their preferred language, and in 1886, fortified the town. Nevertheless, the Cyprus Convention ceding the island to Britain was announced during the congress, and again made Disraeli a sensation. Disraeli gained agreement that Turkey should retain enough of its European possessions to safeguard the Dardanelles. By one account, when met with Russian intransigence, Disraeli told his secretary to order a special train to return them home to begin the war. Although Russia yielded, Tsar Alexander II later described the congress as "a European coalition against Russia, under Bismarck". The Treaty of Berlin was signed on 13 July 1878 at the Radziwill Palace in Berlin. Disraeli and Salisbury returned home to heroes' receptions at Dover and in London. At the door of 10 Downing Street, Disraeli received flowers sent by the Queen. There, he told the gathered crowd, "Lord Salisbury and I have brought you back peace—but a peace I hope with honour." The Queen offered him a dukedom, which he declined, though he accepted the Garter on condition that Salisbury also received it. In Berlin, word spread of Bismarck's admiring description of Disraeli: "Der alte Jude, das ist der Mann!" ("The old Jew, that is the man!"). In the weeks after Berlin, Disraeli and the cabinet considered calling a general election to capitalise on the public applause he and Salisbury had received. Parliaments then sat for a seven-year term, and it was the custom not to go to the country until the sixth year unless forced to by events. Only four and a half years had passed since the last general election. Additionally, they did not see any clouds on the horizon that might forecast Conservative defeat if they waited. This decision not to call an election has often been cited as a great mistake on Disraeli's part. Blake, however, pointed out that results in local elections had been moving against the Conservatives, and doubted if Disraeli missed any great opportunity by waiting. As successful invasions of India generally came through Afghanistan, the British had observed and sometimes intervened there since the 1830s, hoping to keep the Russians out. In 1878 the Russians sent a mission to Kabul; it was not rejected by the Afghans, as the British had hoped it would be. The British then proposed to send their own mission, insisting that the Russians be sent away. The Viceroy, Lord Lytton, concealed his plans to issue this ultimatum from Disraeli, and when the Prime Minister insisted he take no action, went ahead anyway. When the Afghans made no answer, the British advanced against them in the Second Anglo-Afghan War, and under Lord Roberts easily defeated them. 
The British installed a new ruler, and left a mission and garrison in Kabul. British policy in South Africa was to encourage federation between the British-run Cape Colony and Natal, and the Boer republics, the Transvaal (annexed by Britain in 1877) and the Orange Free State. The governor of Cape Colony, Sir Bartle Frere, believing that the federation could not be accomplished until the native tribes acknowledged British rule, made demands on the Zulu and their king, Cetewayo, which they were certain to reject. As Zulu troops could not marry until they had washed their spears in blood, they were eager for combat. Frere did not send word to the cabinet of what he had done until the ultimatum was about to expire. Disraeli and the cabinet reluctantly backed him, and in early January 1879 resolved to send reinforcements. Before they could arrive, on 22 January, a Zulu "impi", or army, moving with great speed and stealth, ambushed and destroyed a British encampment in South Africa in the Battle of Isandlwana. Over a thousand British and colonial troops were killed. Word of the defeat did not reach London until 12 February. Disraeli wrote the next day, "the terrible disaster has shaken me to the centre". He reprimanded Frere, but left him in charge, attracting fire from all sides. Disraeli sent General Sir Garnet Wolseley as High Commissioner and Commander in Chief. Before Wolseley could arrive, the British commander in Zululand, Lord Chelmsford, rushed to the Zulu capital and crushed Cetewayo at the Battle of Ulundi on 4 July 1879. On 8 September 1879, Sir Louis Cavagnari, in charge of the mission in Kabul, was killed with his entire staff by rebelling Afghan soldiers. Roberts undertook a successful punitive expedition against the Afghans over the next six weeks. Gladstone, in the 1874 election, had been returned for Greenwich, finishing second behind a Conservative in the two-member constituency, a result he termed more like a defeat than a victory. In December 1878, he was offered the Liberal nomination at the next election for Edinburghshire, a constituency popularly known as Midlothian. The small Scottish electorate was dominated by two noblemen, the Conservative Duke of Buccleuch and the Liberal Earl of Rosebery. The Earl, a friend of both Disraeli and Gladstone who would succeed the latter after his final term as Prime Minister, had journeyed to the United States to view politics there, and was convinced that aspects of American electioneering could be translated to the United Kingdom. On his advice, Gladstone accepted the offer in January 1879, and later that year began his Midlothian campaign, speaking to huge crowds not only in Edinburgh but across Britain, attacking Disraeli. Conservative chances of re-election were damaged by the poor weather and its consequent effects on agriculture. Four consecutive wet summers through 1879 had led to poor harvests in the United Kingdom. In the past, farmers had had the consolation of higher prices at such times, but with bumper crops cheaply transported from the United States, grain prices remained low. Other European nations, faced with similar circumstances, opted for protection, and Disraeli was urged to reinstitute the Corn Laws. He declined, stating that he regarded the matter as settled. Protection would have been highly unpopular among the newly enfranchised urban working classes, as it would have raised their cost of living. Amid an economic slump generally, the Conservatives lost support among farmers. Disraeli's health continued to fail through 1879. 
Owing to his infirmities, he was three-quarters of an hour late for the Lord Mayor's Dinner at the Guildhall in November, at which it is customary for the Prime Minister to speak. Though many commented on how healthy he looked, it took him great effort to appear so, and when he told the audience he expected to speak at the dinner again the following year, attendees chuckled—Gladstone was then in the midst of his campaign. Despite his public confidence, Disraeli recognised that the Conservatives would probably lose the next election, and was already contemplating his Resignation Honours. Despite this pessimism, Conservative hopes were buoyed in early 1880 with successes in by-elections the Liberals had expected to win, concluding with victory in Southwark, normally a Liberal stronghold. The cabinet had resolved to wait before dissolving Parliament; in early March they reconsidered, agreeing to go to the country as soon as possible. Parliament was dissolved on 24 March, and the first borough constituencies began voting a week later. Disraeli took no public part in the electioneering, it being deemed improper for peers to make speeches to influence Commons elections. This meant that the chief Conservatives—Disraeli, Salisbury, and India Secretary Lord Cranbrook—would not be heard from. The election was thought likely to be close. Once returns began to be announced, it became clear that the Conservatives were being decisively beaten. The final result gave the Liberals an absolute majority of about 50. Disraeli refused to cast blame for the defeat, which he understood was likely to be final for him. He wrote to Lady Bradford that it was just as much work to end a government as to form one, without any of the fun. Queen Victoria was bitter at his departure as Prime Minister. Among the honours he arranged before resigning as Prime Minister on 21 April 1880 was one for his private secretary, Montagu Corry, who became Baron Rowton. Returning to Hughenden, Disraeli brooded over his electoral dismissal, but also resumed work on "Endymion", which he had begun in 1872 and laid aside before the 1874 election. The work was rapidly completed and published by November 1880. He carried on a correspondence with Victoria, with letters passed through intermediaries. When Parliament met in January 1881, he served as Conservative leader in the Lords, attempting to act as a moderating influence on Gladstone's legislation. Suffering from asthma and gout, Disraeli went out as little as possible, fearing more serious episodes of illness. In March, he fell ill with bronchitis, and emerged from bed only for a meeting with Salisbury and other Conservative leaders on the 26th. As it became clear that this might be his final sickness, friends and opponents alike came to call. Disraeli declined a visit from the Queen, saying, "She would only ask me to take a message to Albert." Almost blind, when he received the last letter from Victoria of which he was aware on 5 April, he held it momentarily, then had it read to him by Lord Barrington, a Privy Councillor. One card, signed "A Workman", delighted its recipient: "Don't die yet, we can't do without you." Despite the gravity of Disraeli's condition, the doctors concocted optimistic bulletins for public consumption. The Prime Minister, Gladstone, called several times to enquire about his rival's condition, and wrote in his diary, "May the Almighty be near his pillow." There was intense public interest in the former Prime Minister's struggle for life. 
Disraeli had customarily taken the sacrament at Easter; when this day was observed on 17 April, there was discussion among his friends and family about whether he should be given the opportunity, but those against, fearing that he would lose hope, prevailed. On the morning of the following day, Easter Monday, he became incoherent, then comatose. Disraeli's last confirmed words before dying at his home, 19 Curzon Street, in the early morning of 19 April were "I had rather live but I am not afraid to die". The anniversary of Disraeli's death is now commemorated in the United Kingdom as Primrose Day. Disraeli's executors decided against a public procession and funeral, fearing that the crowds gathering to do him honour would be too large. The chief mourners at the service at Hughenden on 26 April were his brother Ralph and nephew Coningsby, to whom Hughenden would eventually pass. Queen Victoria was prostrated with grief, and considered ennobling Ralph or Coningsby as a memorial to Disraeli (as he had no children, his titles became extinct with his death) but decided against it on the ground that their means were too small for a peerage. Protocol forbade her attending Disraeli's funeral (this would not be changed until 1965, when Elizabeth II attended the rites for the former Prime Minister Sir Winston Churchill) but she sent primroses ("his favourite flowers") to the funeral, and visited the burial vault to place a wreath of china blooms four days later. Disraeli is buried with his wife in a vault beneath the Church of St Michael and All Angels which stands in the grounds of his home, Hughenden Manor, accessed from the churchyard. There is also a memorial to him in the chancel in the church, erected in his honour by Queen Victoria. His literary executor was his private secretary, Lord Rowton. The Disraeli vault also contains the body of Sarah Brydges Willyams, the wife of James Brydges Willyams of St Mawgan in Cornwall. Disraeli carried on a long correspondence with Mrs. Willyams, writing frankly about political affairs. At her death in 1865, she left him a large legacy, which helped clear up his debts. His will was proved at £84,000. Disraeli has a memorial in Westminster Abbey. This monument was erected by the nation on the motion of Gladstone in his memorial speech on Disraeli in the House of Commons. Gladstone had absented himself from the funeral, his plea of the press of public business being met with public mockery. His speech was widely anticipated, if only because his dislike for Disraeli was well known, and it caused the Prime Minister much worry. In the event, the speech was a model of its kind, in which he avoided comment on Disraeli's politics, while praising his personal qualities. Disraeli's literary and political career interacted over his lifetime and fascinated Victorian Britain, making him "one of the most eminent figures in Victorian public life", and occasioned a large output of commentary. Critic Shane Leslie noted three decades after his death that "Disraeli's career was a romance such as no Eastern vizier or Western plutocrat could tell. He began as a pioneer in dress and an aesthete of words... Disraeli actually made his novels come true." Blake comments that Disraeli "produced an epic poem, unbelievably bad, and a five-act blank verse tragedy, if possible worse. Further he wrote a discourse on political theory and a political biography, the "Life of Lord George Bentinck", which is excellent ... remarkably fair and accurate." 
But it is on his novels that Disraeli's achievements as a writer are generally judged. They have from the outset divided critical opinion. The writer R. W. Stewart observed that there have always been two criteria for judging Disraeli's novels—one political and the other artistic. The critic Robert O'Kell, concurring, writes, "It is after all, even if you are a Tory of the staunchest blue, impossible to make Disraeli into a first-rate novelist. And it is equally impossible, no matter how much you deplore the extravagances and improprieties of his works, to make him into an insignificant one." Disraeli's early "silver fork" novels "Vivian Grey" (1826) and "The Young Duke" (1831) featured romanticised depictions of aristocratic life (despite his ignorance of it) with character sketches of well-known public figures lightly disguised. In some of his early fiction Disraeli also portrayed himself and what he felt to be his Byronic dual nature: the poet and the man of action. His most autobiographical novel was "Contarini Fleming" (1832), an avowedly serious work that did not sell well. The critic William Kuhn suggests that Disraeli's fiction can be read as "the memoirs he never wrote", revealing the inner life of a politician for whom the norms of Victorian public life appeared to represent a social straitjacket—particularly with regard to what Kuhn sees as the author's "ambiguous sexuality". Of the other novels of the early 1830s, "Alroy" is described by Blake as "profitable but unreadable". "The Rise of Iskander" (1833), "The Infernal Marriage" and "Ixion in Heaven" (1834) made little impact. "Henrietta Temple" (1837) was Disraeli's next major success. It draws on the events of his affair with Henrietta Sykes to tell the story of a debt-ridden young man torn between a mercenary loveless marriage and a passionate love at first sight for the eponymous heroine. "Venetia" (1837) was a minor work, written to raise much-needed cash. In the 1840s Disraeli wrote a trilogy of novels with political themes. With "Coningsby; or, The New Generation" (1844), Disraeli, in Blake's view, "infused the novel genre with political sensibility, espousing the belief that England's future as a world power depended not on the complacent old guard, but on youthful, idealistic politicians." "Coningsby" was followed by "Sybil; or, The Two Nations" (1845), another political novel, which was less idealistic and more clear-eyed than "Coningsby"; the "two nations" of its sub-title referred to the huge economic and social gap between the privileged few and the deprived working classes. The last in Disraeli's political novel trilogy was "Tancred; or, The New Crusade" (1847), promoting the Church of England's role in reviving Britain's flagging spirituality. Disraeli's last completed novels were "Lothair" (1870) and "Endymion" (1880). The first, described by Daniel R Schwarz as 'Disraeli's ideological "Pilgrim's Progress"', is a story of political life with particular regard to the roles of the Anglican and Roman Catholic churches. "Endymion", despite having a Whig as hero, is a last exposition of the author's economic policies and political beliefs. To the last, Disraeli pilloried his enemies in barely disguised caricatures. The character St Barbe in "Endymion" is widely seen as a parody of Thackeray, who had offended Disraeli more than thirty years earlier by lampooning him in "Punch" as 'Codlingsby'. 
Disraeli left an unfinished novel in which the priggish central character, Falconet, is unmistakably a caricature of Gladstone. In the years after Disraeli's death, as Salisbury began his reign of more than twenty years over the Conservatives, the party emphasised the late leader's "One Nation" views, that the Conservatives at root shared the beliefs of the working classes, with the Liberals the party of the urban élite. Disraeli had, for example, stressed the need to improve the lot of the urban labourer. The memory of Disraeli was used by the Conservatives to appeal to the working classes, with whom he was said to have had a rapport. This aspect of his policies has been re-evaluated by historians in the 20th and 21st centuries. In 1972 B. H. Abbott stressed that it was not Disraeli but Lord Randolph Churchill who invented the term "Tory democracy", though it was Disraeli who made it an essential part of Conservative policy and philosophy. In 2007 Parry wrote, "The tory democrat myth did not survive detailed scrutiny by professional historical writing of the 1960s [which] demonstrated that Disraeli had very little interest in a programme of social legislation and was very flexible in handling parliamentary reform in 1867." Despite this, Parry sees Disraeli, rather than Peel, as the founder of the modern Conservative party. The Conservative politician and writer Douglas Hurd wrote in 2013, "[Disraeli] was not a one-nation Conservative—and this was not simply because he never used the phrase. He rejected the concept in its entirety." Disraeli's enthusiastic propagation of the British Empire has also been seen as appealing to working class voters. Before his leadership of the Conservative Party, imperialism had been the province of the Liberals, most notably Palmerston, with the Conservatives murmuring dissent across the aisle. Disraeli made the Conservatives the party that most loudly supported both the Empire and military action to assert its primacy. This came about in part because Disraeli's own views tended that way, in part because he saw advantage in it for the Conservatives, and in part as a reaction against Gladstone, who disliked the expense of empire. Blake argued that Disraeli's imperialism "decisively orientated the Conservative party for many years to come, and the tradition which he started was probably a bigger electoral asset in winning working-class support during the last quarter of the century than anything else". Some historians have commented on a romantic impulse behind Disraeli's approach to Empire and foreign affairs: Abbott writes, "To the mystical Tory concepts of Throne, Church, Aristocracy and People, Disraeli added Empire." Others have identified a strongly pragmatic aspect to his policies. Gladstone's biographer Philip Magnus contrasted Disraeli's grasp of foreign affairs with that of Gladstone, who "never understood that high moral principles, in their application to foreign policy, are more often destructive of political stability than motives of national self-interest." In Parry's view, Disraeli's foreign policy "can be seen as a gigantic castle in the air (as it was by Gladstone), or as an overdue attempt to force the British commercial classes to awaken to the realities of European politics." 
During his lifetime Disraeli's opponents, and sometimes even his friends and allies, questioned whether he sincerely held the views he propounded, or whether they were adopted by him as essential to one who sought to spend his life in politics, and were mouthed by him without conviction. Lord John Manners, in 1843, at the time of Young England, wrote, "could I only satisfy myself that D'Israeli believed all that he said, I should be more happy: his historical views are quite mine, but does he believe them?" Blake (writing in 1966) suggested that it is no more possible to answer that question now than it was then. Nevertheless, Paul Smith, in his journal article on Disraeli's politics, contends that Disraeli's ideas were coherently argued over a political career of nearly half a century, and "it is impossible to sweep them aside as a mere bag of burglar's tools for effecting felonious entry to the British political pantheon." Stanley Weintraub, in his biography of Disraeli, points out that his subject did much to advance Britain towards the 20th century, carrying one of the two great Reform Acts of the 19th despite the opposition of his Liberal rival, Gladstone. "He helped preserve constitutional monarchy by drawing the Queen out of mourning into a new symbolic national role and created the climate for what became 'Tory democracy'. He articulated an imperial role for Britain that would last into World War II and brought an intermittently self-isolated Britain into the concert of Europe." Frances Walsh comments on Disraeli's multifaceted public life: "The debate about his place in the Conservative pantheon has continued since his death. Disraeli fascinated and divided contemporary opinion; he was seen by many, including some members of his own party, as an adventurer and a charlatan and by others as a far-sighted and patriotic statesman. As an actor on the political stage he played many roles: Byronic hero, man of letters, social critic, parliamentary virtuoso, squire of Hughenden, royal companion, European statesman. His singular and complex personality has provided historians and biographers with a particularly stiff challenge." Historical writers have often played Disraeli and Gladstone against each other as great rivals. Roland Quinault, however, cautions us not to exaggerate the confrontation: "they were not direct antagonists for most of their political careers. Indeed initially they were both loyal to the Tory party, the Church and the landed interest. Although their paths diverged over the repeal of the Corn Laws in 1846 and later over fiscal policy more generally, it was not until the later 1860s that their differences over parliamentary reform, Irish and Church policy assumed great partisan significance. Even then their personal relations remained fairly cordial until their dispute over the Eastern Question in the later 1870s." Historian Michael Diamond reports that for British music hall patrons in the 1880s and 1890s, "xenophobia and pride in empire" were reflected in the halls' most popular political heroes: all were Conservatives and Disraeli stood out above all, even decades after his death, while Gladstone was used as a villain. Film historian Roy Armes has argued that historical films helped maintain the political status quo in Britain in the 1920s and 1930s by imposing an establishment viewpoint that emphasized the greatness of monarchy, empire, and tradition. 
The films created "a facsimile world where existing values were invariably validated by events in the film and where all discord could be turned into harmony by an acceptance of the status quo." Steven Fielding has argued that Disraeli was an especially popular film hero: "historical dramas favoured Disraeli over Gladstone and, more substantively, promulgated an essentially deferential view of democratic leadership." Stage and screen actor George Arliss was known for his portrayals of Disraeli, winning the Oscar as best actor for 1929's "Disraeli". Fielding says Arliss "personified the kind of paternalistic, kindly, homely statesmanship that appealed to a significant proportion of the cinema audience...Even workers attending Labour party meetings deferred to leaders with an elevated social background who showed they cared." Major productions featuring Disraeli include: "Disraeli" (1911, UK theatre) - Played by George Arliss; "Disraeli" (1916, UK film) - Played by Dennis Eadie; "Disraeli" (1921, US film) - Played by George Arliss; "Disraeli" (1929, US film) - Played by George Arliss; "Victoria the Great" (UK, 1937) - Played by Derrick De Marney (in youth) / Hugh Miller (older age); "Suez" (US, 1938) - Played by Miles Mander; "The Prime Minister" (UK, 1941) - Played by Sir John Gielgud; "Mr. Gladstone" (UK-Television, 1947) - Played by Sydney Tafler; "The Ghosts of Berkeley Square" (UK, 1947) - Played by Abraham Sofaer; "The Mudlark" (US, 1950) - Played by Sir Alec Guinness; "Disraeli (TV serial)" (UK-Television, 1978) - Played by Ian McShane; "Mrs. Brown" (US, 1997) - Played by Sir Antony Sher. Buckingham Palace Buckingham Palace is the London residence and administrative headquarters of the monarch of the United Kingdom. Located in the City of Westminster, the palace is often at the centre of state occasions and royal hospitality. It has been a focal point for the British people at times of national rejoicing and mourning. Originally known as Buckingham House, the building at the core of today's palace was a large townhouse built for the Duke of Buckingham in 1703 on a site that had been in private ownership for at least 150 years. It was acquired by King George III in 1761 as a private residence for Queen Charlotte and became known as "The Queen's House". During the 19th century it was enlarged, principally by architects John Nash and Edward Blore, who constructed three wings around a central courtyard. Buckingham Palace became the London residence of the British monarch on the accession of Queen Victoria in 1837. The last major structural additions were made in the late 19th and early 20th centuries, including the East front, which contains the well-known balcony on which the royal family traditionally congregates to greet crowds. The palace chapel was destroyed by a German bomb during World War II; the Queen's Gallery was built on the site and opened to the public in 1962 to exhibit works of art from the Royal Collection. The original early 19th-century interior designs, many of which survive, include widespread use of brightly coloured scagliola and blue and pink lapis, on the advice of Sir Charles Long. King Edward VII oversaw a partial redecoration in a "Belle Époque" cream and gold colour scheme. Many smaller reception rooms are furnished in the Chinese regency style with furniture and fittings brought from the Royal Pavilion at Brighton and from Carlton House. The palace has 775 rooms, and the garden is the largest private garden in London. 
The state rooms, used for official and state entertaining, are open to the public each year for most of August and September and on some days in winter and spring. In the Middle Ages, the site of the future palace formed part of the Manor of Ebury (also called Eia). The marshy ground was watered by the river Tyburn, which still flows below the courtyard and south wing of the palace. Where the river was fordable (at Cow Ford), the village of Eye Cross grew. Ownership of the site changed hands many times; owners included Edward the Confessor and his queen consort Edith of Wessex in late Saxon times, and, after the Norman Conquest, William the Conqueror. William gave the site to Geoffrey de Mandeville, who bequeathed it to the monks of Westminster Abbey. In 1531, King Henry VIII acquired the Hospital of St James (later St James's Palace) from Eton College, and in 1536 he took the Manor of Ebury from Westminster Abbey. These transfers brought the site of Buckingham Palace back into royal hands for the first time since William the Conqueror had given it away almost 500 years earlier. Various owners leased it from royal landlords and the freehold was the subject of frenzied speculation during the 17th century. By then, the old village of Eye Cross had long since fallen into decay, and the area was mostly wasteland. Needing money, James I sold off part of the Crown freehold but retained part of the site on which he established a mulberry garden for the production of silk. (This is at the northwest corner of today's palace.) Clement Walker in "Anarchia Anglicana" (1649) refers to "new-erected sodoms and spintries at the Mulberry Garden at S. James's"; this suggests it may have been a place of debauchery. Eventually, in the late 17th century, the freehold was inherited from the property tycoon Sir Hugh Audley by the great heiress Mary Davies. Possibly the first house erected within the site was that of a Sir William Blake, around 1624. The next owner was Lord Goring, who from 1633 extended Blake's house and developed much of today's garden, then known as Goring Great Garden. He did not, however, obtain the freehold interest in the mulberry garden. Unbeknown to Goring, in 1640 the document "failed to pass the Great Seal before King Charles I fled London, which it needed to do for legal execution". It was this critical omission that helped the British royal family regain the freehold under King George III. The improvident Goring defaulted on his rents; Henry Bennet, 1st Earl of Arlington obtained the mansion and was occupying it, now known as Goring House, when it burned down in 1674. Arlington House rose on the site—the location of the southern wing of today's palace—the next year. In 1698, John Sheffield, later the first Duke of Buckingham and Normanby, acquired the lease. The house which forms the architectural core of the palace was built for the first Duke of Buckingham and Normanby in 1703 to the design of William Winde. The style chosen was of a large, three-floored central block with two smaller flanking service wings. Buckingham House was eventually sold by Buckingham's natural son, Sir Charles Sheffield, in 1761 to George III for £21,000. Sheffield's leasehold on the mulberry garden site, the freehold of which was still owned by the royal family, was due to expire in 1774. Under the new Crown ownership, the building was originally intended as a private retreat for King George III's wife, Queen Charlotte, and was accordingly known as The Queen's House. 
Remodelling of the structure began in 1762. In 1775, an Act of Parliament settled the property on Queen Charlotte, in exchange for her rights to Somerset House, and 14 of her 15 children were born there. Some furnishings were transferred from Carlton House, and others had been bought in France after the French Revolution of 1789. While St James's Palace remained the official and ceremonial royal residence, the name "Buckingham-palace" was used from at least 1791. After his accession to the throne in 1820, King George IV continued the renovation with the idea in mind of a small, comfortable home. But while the work was in progress, in 1826, the King decided to modify the house into a palace, with the help of his architect John Nash. The external façade was designed keeping in mind the French neo-classical influence preferred by George IV. The cost of the renovations grew dramatically, and by 1829 the extravagance of Nash's designs resulted in his removal as architect. On the death of George IV in 1830, his younger brother King William IV hired Edward Blore to finish the work. After the destruction of the Palace of Westminster by fire in 1834, William considered converting the palace into the new Houses of Parliament. Buckingham Palace finally became the principal royal residence in 1837, on the accession of Queen Victoria, who was the first monarch to reside there; her predecessor William IV had died before its completion. While the state rooms were a riot of gilt and colour, the necessities of the new palace were somewhat less luxurious. For one thing, it was reported the chimneys smoked so much that the fires had to be allowed to die down, and consequently the court shivered in icy magnificence. Ventilation was so bad that the interior smelled, and when it was decided to install gas lamps, there was a serious worry about the build-up of gas on the lower floors. It was also said that staff were lax and lazy and the palace was dirty. Following the Queen's marriage in 1840, her husband, Prince Albert, concerned himself with a reorganisation of the household offices and staff, and with the design faults of the palace. The problems were all rectified by the close of 1840. However, the builders were to return within the decade. By 1847, the couple had found the palace too small for court life and their growing family, and consequently the new wing, designed by Edward Blore, was built by Thomas Cubitt, enclosing the central quadrangle. The large East Front, facing The Mall, is today the "public face" of Buckingham Palace, and contains the balcony from which the royal family acknowledge the crowds on momentous occasions and after the annual Trooping the Colour. The ballroom wing and a further suite of state rooms were also built in this period, designed by Nash's student Sir James Pennethorne. Before Prince Albert's death, the palace was frequently the scene of musical entertainments, and the greatest contemporary musicians entertained at Buckingham Palace. The composer Felix Mendelssohn is known to have played there on three occasions. Johann Strauss II and his orchestra played there when in England. Strauss's "Alice Polka" was first performed at the palace in 1849 in honour of the queen's daughter, Princess Alice. Under Victoria, Buckingham Palace was frequently the scene of lavish costume balls, in addition to the usual royal ceremonies, investitures and presentations. 
Widowed in 1861, the grief-stricken Queen withdrew from public life and left Buckingham Palace to live at Windsor Castle, Balmoral Castle and Osborne House. For many years the palace was seldom used, even neglected. In 1864, a note was found pinned to the fence of Buckingham Palace, saying: "These commanding premises to be let or sold, in consequence of the late occupant's declining business." Eventually, public opinion persuaded the Queen to return to London, though even then she preferred to live elsewhere whenever possible. Court functions were still held at Windsor Castle, presided over by the sombre Queen habitually dressed in mourning black, while Buckingham Palace remained shuttered for most of the year. The palace measures 108 metres by 120 metres, is 24 metres high and contains over 77,000 square metres of floorspace. The floor area is smaller than the Royal Palace of Madrid, the Papal Palace and Quirinal Palace in Rome, the Louvre in Paris, the Hofburg Palace in Vienna, and the Forbidden City. There are 775 rooms, including 188 staff bedrooms, 92 offices, 78 bathrooms, 52 principal bedrooms, and 19 state rooms. It also has a post office, cinema, swimming pool, doctor's surgery, and jeweller's workshop. The principal rooms are contained on the "piano nobile" behind the west-facing garden façade at the rear of the palace. The centre of this ornate suite of state rooms is the Music Room, its large bow the dominant feature of the façade. Flanking the Music Room are the Blue and the White Drawing Rooms. At the centre of the suite, serving as a corridor to link the state rooms, is the Picture Gallery, which is top-lit. The Gallery is hung with numerous works including some by Rembrandt, van Dyck, Rubens and Vermeer; other rooms leading from the Picture Gallery are the Throne Room and the Green Drawing Room. The Green Drawing Room serves as a huge anteroom to the Throne Room, and is part of the ceremonial route to the throne from the Guard Room at the top of the Grand Staircase. The Guard Room contains white marble statues of Queen Victoria and Prince Albert, in Roman costume, set in a tribune lined with tapestries. These very formal rooms are used only for ceremonial and official entertaining, but are open to the public every summer. Directly underneath the State Apartments are the less grand semi-state apartments. Opening from the Marble Hall, these rooms are used for less formal entertaining, such as luncheon parties and private audiences. Some of the rooms are named and decorated for particular visitors, such as the "1844 Room", decorated in that year for the state visit of Tsar Nicholas I of Russia, and the "1855 Room", in honour of the visit of Emperor Napoleon III of France. At the centre of this suite is the Bow Room, through which thousands of guests pass annually to the Queen's Garden Parties. The Queen and Prince Philip use a smaller suite of rooms in the north wing. Between 1847 and 1850, when Blore was building the new east wing, the Brighton Pavilion was once again plundered of its fittings. As a result, many of the rooms in the new wing have a distinctly oriental atmosphere. The red and blue Chinese Luncheon Room is made up from parts of the Brighton Banqueting and Music Rooms with a large oriental chimney piece designed by Robert Jones and sculpted by Richard Westmacott. It was formerly in the Music Room at the Brighton Pavilion. 
The ornate clock, known as the Kylin Clock, was made in Jingdezhen, Jiangxi Province, China, in the second half of the 18th century; it has a later movement by Benjamin Vulliamy circa 1820. The Yellow Drawing Room has wallpaper supplied in 1817 for the Brighton Saloon, and a chimney piece which is a European vision of how the Chinese chimney piece may appear. It has nodding mandarins in niches and fearsome winged dragons, designed by Robert Jones. At the centre of this wing is the famous balcony with the Centre Room behind its glass doors. This is a Chinese-style saloon enhanced by Queen Mary, who, working with the designer Sir Charles Allom, created a more "binding" Chinese theme in the late 1920s, although the lacquer doors were brought from Brighton in 1873. Running the length of the "piano nobile" of the east wing is the great gallery, modestly known as the Principal Corridor, which runs the length of the eastern side of the quadrangle. It has mirrored doors, and mirrored cross walls reflecting porcelain pagodas and other oriental furniture from Brighton. The Chinese Luncheon Room and Yellow Drawing Room are situated at each end of this gallery, with the Centre Room obviously placed in the centre. The original early 19th-century interior designs, many of which still survive, included widespread use of brightly coloured scagliola and blue and pink lapis, on the advice of Sir Charles Long. King Edward VII oversaw a partial redecoration in a Belle époque cream and gold colour scheme. When paying a state visit to Britain, foreign heads of state are usually entertained by the Queen at Buckingham Palace. They are allocated a large suite of rooms known as the Belgian Suite, situated at the foot of the Minister's Staircase, on the ground floor of the north-facing Garden Wing. The rooms of the suite are linked by narrow corridors; one of them is given extra height and perspective by saucer domes designed by Nash in the style of Soane. A second corridor in the suite has Gothic-influenced cross-over vaulting. The Belgian Rooms themselves were decorated in their present style and named after Prince Albert's uncle Léopold I, first King of the Belgians. In 1936, the suite briefly became the private apartments of the palace when they were occupied by King Edward VIII. Investitures, which include the conferring of knighthoods by dubbing with a sword, and other awards take place in the palace's Ballroom, built in 1854. At 36.6 metres long, 18 metres wide and 13.5 metres high, it is the largest room in the palace. It has replaced the throne room in importance and use. During investitures, the Queen stands on the throne dais beneath a giant, domed velvet canopy, known as a shamiana or a baldachin, that was used at the Delhi Durbar in 1911. A military band plays in the musicians' gallery as award recipients approach the Queen and receive their honours, watched by their families and friends. State banquets also take place in the Ballroom; these formal dinners are held on the first evening of a state visit by a foreign head of state. On these occasions, for up to 170 guests in formal "white tie and decorations", including tiaras, the dining table is laid with the Grand Service, a collection of silver-gilt plate made in 1811 for the Prince of Wales, later George IV. The largest and most formal reception at Buckingham Palace takes place every November when the Queen entertains members of the diplomatic corps. 
On this grand occasion, all the state rooms are in use, as the royal family proceed through them, beginning at the great north doors of the Picture Gallery. As Nash had envisaged, all the large, double-mirrored doors stand open, reflecting the numerous crystal chandeliers and sconces, creating a deliberate optical illusion of space and light. Smaller ceremonies such as the reception of new ambassadors take place in the "1844 Room". Here too, the Queen holds small lunch parties, and often meetings of the Privy Council. Larger lunch parties often take place in the curved and domed Music Room, or the State Dining Room. Since the bombing of the palace chapel in World War II, royal christenings have sometimes taken place in the Music Room. The Queen's first three children were all baptised there. On all formal occasions, the ceremonies are attended by the Yeomen of the Guard in their historic uniforms, and other officers of the court such as the Lord Chamberlain. Formerly, men not wearing military uniform wore knee breeches of an 18th-century design. Women's evening dress included trains and tiaras or feathers in their hair (often both). The dress code governing formal court uniform and dress has progressively relaxed. After World War I, when Queen Mary wished to follow fashion by raising her skirts a few inches from the ground, she requested a lady-in-waiting to shorten her own skirt first to gauge the king's reaction. King George V was horrified, so the queen kept her hemline unfashionably low. Following their accession in 1936, King George VI and his consort, Queen Elizabeth, allowed the hemline of daytime skirts to rise. Today, there is no official dress code. Most men invited to Buckingham Palace in the daytime choose to wear service uniform or lounge suits; a minority wear morning coats, and in the evening, depending on the formality of the occasion, black tie or white tie. Débutantes were aristocratic young ladies making their first entrée into society through presentation to the monarch at court. These occasions, known as "coming out", took place at the palace from the reign of Edward VII. Wearing full court dress, with three ostrich feathers in their hair, débutantes entered, curtsied, and performed a backwards walk and a further curtsey, while manoeuvring a dress train of prescribed length. The ceremony, known as an evening court, corresponded to the "court drawing rooms" of Victoria's reign. After World War II, the ceremony was replaced by less formal afternoon receptions, usually omitting curtsies and court dress. In 1958, the Queen abolished the presentation parties for débutantes, replacing them with Garden Parties, for up to 8,000 invitees in the Garden. They are the largest functions of the year. The boy Jones was an intruder who gained entry to the palace on three occasions between 1838 and 1841. At least 12 people have managed to gain unauthorised entry into the palace or its grounds since 1914, including Michael Fagan, who broke into the palace twice in 1982 and entered the Queen's bedroom on the second occasion. At the time, news media reported that he had a long conversation with the Queen while she waited for security officers to arrive, but in a 2012 interview with "The Independent", Fagan said the Queen ran out of the room and no conversation took place. It was only in 2007 that trespassing on the palace grounds became a specific criminal offence. 
At the rear of the palace is the large and park-like garden, which together with its lake is the largest private garden in London. There, the Queen hosts her annual garden parties each summer, and also holds large functions to celebrate royal milestones, such as jubilees. It covers some 40 acres (16 hectares) and includes a helicopter landing area, a lake, and a tennis court. Adjacent to the palace is the Royal Mews, also designed by Nash, where the royal carriages, including the Gold State Coach, are housed. This rococo gilt coach, designed by Sir William Chambers in 1760, has painted panels by G. B. Cipriani. It was first used for the State Opening of Parliament by George III in 1762 and has been used by the monarch for every coronation since George IV. It was last used for the Golden Jubilee of Elizabeth II. Also housed in the mews are the coach horses used at royal ceremonial processions. The Mall, a ceremonial approach route to the palace, was designed by Sir Aston Webb and completed in 1911 as part of a grand memorial to Queen Victoria. It extends from Admiralty Arch, across St James's Park to the Victoria Memorial. This route is used by the cavalcades and motorcades of visiting heads of state, and by the royal family on state occasions such as the annual Trooping the Colour. In 1901 the accession of Edward VII saw new life breathed into the palace. The new King and his wife Queen Alexandra had always been at the forefront of London high society, and their friends, known as "the Marlborough House Set", were considered to be the most eminent and fashionable of the age. Buckingham Palace—the Ballroom, Grand Entrance, Marble Hall, Grand Staircase, vestibules and galleries redecorated in the Belle époque cream and gold colour scheme they retain today—once again became a setting for entertaining on a majestic scale, though some felt that King Edward's heavy redecorations were at odds with Nash's original work. The last major building work took place during the reign of King George V when, in 1913, Sir Aston Webb redesigned Blore's 1850 East Front to resemble in part Giacomo Leoni's Lyme Park in Cheshire. This new, refaced principal façade (of Portland stone) was designed to be the backdrop to the Victoria Memorial, a large memorial statue of Queen Victoria, placed outside the main gates. George V, who had succeeded Edward VII in 1910, had a more serious personality than his father; greater emphasis was now placed on official entertaining and royal duties than on lavish parties. He arranged a series of command performances featuring jazz musicians such as the Original Dixieland Jazz Band (1919) – the first jazz performance for a head of state – Sidney Bechet, and Louis Armstrong (1932), which earned the palace a nomination in 2009 for a (Kind of) Blue Plaque by the Brecon Jazz Festival as one of the venues making the greatest contribution to jazz music in the United Kingdom. George V's wife Queen Mary was a connoisseur of the arts, and took a keen interest in the Royal Collection of furniture and art, both restoring and adding to it. Queen Mary also had many new fixtures and fittings installed, such as the pair of marble Empire-style chimneypieces by Benjamin Vulliamy, dating from 1810, which the Queen had installed in the ground floor Bow Room, the huge low room at the centre of the garden façade. Queen Mary was also responsible for the decoration of the Blue Drawing Room. 
This room, previously known as the South Drawing Room, has a ceiling designed specially by Nash, coffered with huge gilt console brackets. During World War I, the palace, then the home of King George V and Queen Mary, escaped unscathed. Its more valuable contents were evacuated to Windsor but the royal family remained "in situ". The King imposed rationing at the palace, much to the dismay of his guests and household. To the King's later regret, David Lloyd George persuaded him to go further by ostentatiously locking the wine cellars and refraining from alcohol, to set a good example to the supposedly inebriated working class. The workers continued to imbibe and the King was left unhappy at his enforced abstinence. In 1938, the north-west pavilion, designed by Nash as a conservatory, was converted into a swimming pool. During World War II, the palace was bombed nine times, the most serious and publicised of which resulted in the destruction of the palace chapel in 1940. Coverage of this event was played in cinemas all over the UK to show the common suffering of rich and poor. One bomb fell in the palace quadrangle while King George VI and Queen Elizabeth were in residence, and many windows were blown in and the chapel destroyed. War-time coverage of such incidents was severely restricted, however. The King and Queen were filmed inspecting their bombed home, the smiling Queen, as always, immaculately dressed in a hat and matching coat, seemingly unbothered by the damage around her. It was at this time the Queen famously declared: "I'm glad we have been bombed. Now I can look the East End in the face". The royal family were seen as sharing their subjects' hardship, as "The Sunday Graphic" reported. On 15 September 1940, known as the Battle of Britain Day, an RAF pilot, Ray Holmes of No. 504 Squadron RAF, rammed a German bomber he believed was going to bomb the Palace. Holmes had run out of ammunition and made the quick decision to ram it; he bailed out, and both aircraft crashed. In fact, the Dornier Do 17 bomber was empty. It had already been damaged, two of its crew had been killed and the remainder bailed out. Its pilot, Feldwebel Robert Zehbe, landed, only to die later of wounds suffered during the attack. During the Dornier's descent, it somehow unloaded its bombs, one of which hit the Palace. It then crashed into the forecourt of London Victoria station. The bomber's engine was later exhibited at the Imperial War Museum in London. The British pilot became a King's Messenger after the war, and died at the age of 90 in 2005. On VE Day—8 May 1945—the palace was the centre of British celebrations. The King, Queen, Princess Elizabeth (the future Queen), and Princess Margaret appeared on the balcony, with the palace's blacked-out windows behind them, to the cheers from a vast crowd in the Mall. The damaged Palace was carefully restored after the War by John Mowlem & Co. It was designated a Grade I listed building in 1970. Every year, some 50,000 invited guests are entertained at garden parties, receptions, audiences, and banquets. Three Garden Parties are held in the summer, usually in July. The Forecourt of Buckingham Palace is used for Changing of the Guard, a major ceremony and tourist attraction (daily from April to July; every other day in other months). The palace, like Windsor Castle, is owned by the reigning monarch in right of the Crown. It is not the monarch's personal property, unlike Sandringham House and Balmoral Castle. 
Many of the contents from Buckingham Palace, Windsor Castle, Kensington Palace, and St James's Palace are part of the Royal Collection, held in trust by the Sovereign; they can, on occasion, be viewed by the public at the Queen's Gallery, near the Royal Mews. Unlike the palace and the castle, the purpose-built gallery is open continually and displays a changing selection of items from the collection. It occupies the site of the chapel destroyed by an air raid in World War II. The palace's state rooms have been open to the public during August and September and on selected dates throughout the year since 1993. The money raised in entry fees was originally put towards the rebuilding of Windsor Castle after the 1992 fire devastated many of its state rooms. In the year to 31 March 2017, 580,000 people visited the palace, and 154,000 visited the gallery. Her Majesty's Government is responsible for maintaining the palace in exchange for the profits made by the Crown Estate. In November 2015, the State Dining Room was closed for six months because its ceiling had become potentially dangerous. A 10-year schedule of maintenance work, including new plumbing, wiring, boilers, and radiators, and the installation of solar panels on the roof, has been estimated to cost £369 million and was approved by the prime minister in November 2016. It will be funded by a temporary increase in the Sovereign Grant paid from the income of the Crown Estate and is intended to extend the building's working life by at least 50 years. In March 2017, the House of Commons backed funding for the project by 464 votes to 56. Thus, Buckingham Palace is a symbol and home of the British monarchy, an art gallery, and a tourist attraction. Behind the gilded railings and gates that were completed by the Bromsgrove Guild in 1911 and Webb's famous façade, which has been described in a book published by the Royal Collection Trust as looking "like everybody's idea of a palace", is not only a weekday home of the Queen and Prince Philip but also the London residence of the Duke of York and the Earl and Countess of Wessex. The palace also houses their offices, as well as those of the Princess Royal and Princess Alexandra, and is the workplace of more than 800 people. Blade Runner Blade Runner is a 1982 neo-noir science fiction film directed by Ridley Scott, written by Hampton Fancher and David Peoples, and starring Harrison Ford, Rutger Hauer, Sean Young, and Edward James Olmos. It is a loose adaptation of Philip K. Dick's novel "Do Androids Dream of Electric Sheep?" (1968). The film is set in a dystopian future Los Angeles of 2019, in which synthetic humans known as replicants are bio-engineered by the powerful Tyrell Corporation to work on off-world colonies. When a fugitive group of replicants led by Roy Batty (Hauer) escapes back to Earth, burnt-out cop Rick Deckard (Ford) reluctantly agrees to hunt them down. "Blade Runner" initially underperformed in North American theaters and polarized critics; some praised its thematic complexity and visuals, while others were displeased with its unconventional pacing and plot. It later became an acclaimed cult film regarded as one of the all-time best science fiction movies. Hailed for its production design depicting a "retrofitted" future, "Blade Runner" is a leading example of neo-noir cinema. The soundtrack, composed by Vangelis, was nominated in 1983 for a BAFTA and a Golden Globe as best original score. 
The film has influenced many science fiction films, video games, anime, and television series. It brought the work of Philip K. Dick to the attention of Hollywood, and several later big-budget films were based on his work. In the year after its release, "Blade Runner" won the Hugo Award for Best Dramatic Presentation, and in 1993 it was selected for preservation in the United States National Film Registry by the Library of Congress as "culturally, historically, or aesthetically significant". A sequel, "Blade Runner 2049", was released in October 2017. Seven versions of "Blade Runner" exist as a result of controversial changes made at the request of studio executives. A director's cut was released in 1992 after a strong response to test screenings of a workprint. This, in conjunction with the film's popularity as a video rental, made it one of the earliest movies to be released on DVD. In 2007, Warner Bros. released "The Final Cut", a 25th-anniversary digitally remastered version, and the only version over which Scott retained artistic control. In 2019 Los Angeles, former police officer Rick Deckard is detained by officer Gaff, and brought to his former supervisor, Bryant. Deckard, whose job as a "blade runner" was to track down bioengineered beings known as replicants and "retire" (kill) them, is informed that four are on Earth illegally. Deckard starts to leave, but Bryant ambiguously threatens him, and he stays. The two watch a video of a blade runner named Holden administering the "Voigt-Kampff" test, which is designed to distinguish replicants from humans based on their emotional response to questions. The test subject, Leon, shoots Holden on the second question. Bryant wants Deckard to retire Leon and the other three Tyrell Corporation Nexus-6 replicants: Roy Batty, Zhora, and Pris. Bryant has Deckard meet with Eldon Tyrell so he can administer the test on a Nexus-6 to see if it works. Tyrell expresses his interest in seeing the test fail first and asks him to administer it on his assistant Rachael. After a much longer than standard test, Deckard concludes that Rachael is a replicant who believes she is human. Tyrell explains that she is an experiment who has been given false memories to provide an emotional "cushion". Searching Leon's hotel room, Deckard finds photos and a synthetic snake scale. Roy and Leon investigate a replicant eye-manufacturing laboratory and learn of J. F. Sebastian, a gifted genetic designer who works closely with Tyrell. Deckard returns to his apartment where Rachael is waiting. She tries to prove her humanity by showing him a family photo, but after Deckard reveals that her memories are implants from Tyrell's niece, she leaves his apartment. Meanwhile, Pris locates Sebastian and manipulates him to gain his trust. A photograph from Leon's apartment and the snake scale lead Deckard to a strip club, where Zhora works. After a confrontation and chase, Deckard kills Zhora. Bryant orders him also to retire Rachael, who has disappeared from the Tyrell Corporation. After Deckard spots Rachael in a crowd, he is attacked by Leon, who knocks Deckard's pistol out of his hand, and attempts to kill Deckard, but Rachael uses Deckard's pistol to kill Leon. They return to Deckard's apartment, and, during an intimate discussion, he promises not to track her down; as she abruptly tries to leave, Deckard restrains her, making her kiss him. Arriving at Sebastian's apartment, Roy tells Pris that the others are dead. 
Sympathetic to their plight, Sebastian reveals that because of "Methuselah Syndrome", a genetic premature aging disorder, his life will also be cut short. Sebastian and Roy gain entrance into Tyrell's secure penthouse, where Roy demands more life from his maker. Tyrell tells him that it is impossible. Roy confesses that he has done "questionable things", but Tyrell dismisses this, praising Roy's advanced design, and accomplishments in his short life. Roy kisses Tyrell, then kills him. Sebastian runs for the elevator, followed by Roy, who rides the elevator down alone. Deckard is later told by Bryant that Sebastian was found dead. At Sebastian's apartment, Deckard is ambushed by Pris, but he kills her as Roy returns. Roy's body begins to fail as the end of his lifespan nears. He chases Deckard through the building, ending on the roof. Deckard tries to jump to an adjacent roof, but is left hanging between buildings. Roy makes the jump with ease, and as Deckard's grip loosens, Roy hoists him onto the roof, saving him. Before Roy dies, he delivers a monologue about how his memories "will be lost in time, like tears in rain". Gaff arrives and shouts to Deckard about Rachael: "It's too bad she won't live, but then again, who does?" Deckard returns to his apartment and finds Rachael asleep in his bed. As they leave, Deckard notices an origami unicorn on the floor, a calling card that recalls for him Gaff's earlier statement. Deckard and Rachael leave the apartment block. The film operates on multiple dramatic and narrative levels. It employs some of the conventions of film noir, among them the character of a "femme fatale"; narration by the protagonist (in the original release); chiaroscuro cinematography; and giving the hero a questionable moral outlook – extended to include reflections upon the nature of his own humanity. It is a literate science fiction film, thematically enfolding the philosophy of religion and moral implications of human mastery of genetic engineering in the context of classical Greek drama and hubris. It also draws on Biblical images, such as Noah's flood, and literary sources, such as "Frankenstein". Linguistically, the theme of mortality is subtly reiterated in the chess game between Sebastian and Tyrell, supposedly based on the famous Immortal Game of 1851 (though Scott has said this was merely coincidental). "Blade Runner" delves into the effects of technology on the environment and society by reaching to the past, using literature, religious symbolism, classical dramatic themes, and "film noir" techniques. This tension between past, present, and future is mirrored in the "retrofitted" future depicted in the film, one which is high-tech and gleaming in places but decayed and outdated elsewhere. In an interview with "The Observer" in 2002, director Ridley Scott described the film as "extremely dark, both literally and metaphorically, with an oddly masochistic feel". He also said that he "liked the idea of exploring pain" in the wake of his brother's death: "When he was ill, I used to go and visit him in London, and that was really traumatic for me." A sense of foreboding and paranoia pervades the world of the film: corporate power looms large; the police seem omnipresent; vehicle and warning lights probe into buildings; and the consequences of huge biomedical power over the individual are explored – especially regarding replicants' implanted memories. 
Control over the environment is exercised on a vast scale, and goes hand in hand with the absence of any natural life; for example, artificial animals stand in for their extinct predecessors. This oppressive backdrop explains the frequently referenced migration of humans to "off-world" (extraterrestrial) colonies. The dystopian themes explored in "Blade Runner" are an early example of the expansion of cyberpunk concepts into cinema. Eyes are a recurring motif, as are manipulated images, calling into question the nature of reality and our ability to accurately perceive and remember it. These thematic elements provide an atmosphere of uncertainty for "Blade Runner"s central theme of examining humanity. In order to discover replicants, an empathy test is used, with a number of its questions focused on the treatment of animals – seemingly an essential indicator of one's "humanity". The replicants appear to show compassion and concern for one another and are juxtaposed against human characters who lack empathy, while the mass of humanity on the streets is cold and impersonal. The film goes so far as to question if Deckard might be an android, in the process forcing the audience to re-evaluate what it means to be human. The question of whether Deckard is intended to be a human or a replicant has been an ongoing controversy since the film's release. Both Michael Deeley and Harrison Ford wanted Deckard to be human, while Hampton Fancher preferred ambiguity. Ridley Scott has stated that in his vision, Deckard is a replicant. Deckard's unicorn-dream sequence, inserted into the "Director's Cut" and concomitant with Gaff's parting gift of an origami unicorn, is seen by many as showing that Deckard is a replicant – because Gaff could have retrieved Deckard's implanted memories. The interpretation that Deckard is a replicant is challenged by others who believe the unicorn imagery shows that the characters, whether human or replicant, share the same dreams and recognize their affinity, or that the absence of a decisive answer is crucial to the film's main theme. The film's inherent ambiguity and uncertainty, as well as its textual richness, have permitted viewers to see it from multiple perspectives. Casting the film proved troublesome, particularly for the lead role of Deckard. Screenwriter Hampton Fancher envisioned Robert Mitchum as Deckard and wrote the character's dialogue with Mitchum in mind. Director Ridley Scott and the film's producers spent months meeting and discussing the role with Dustin Hoffman, who eventually departed over differences in vision. Harrison Ford was ultimately chosen for several reasons, including his performance in the "Star Wars" films, Ford's interest in the "Blade Runner" story, and discussions with Steven Spielberg who was finishing "Raiders of the Lost Ark" at the time and strongly praised Ford's work in the film. Following his success in films like "Star Wars" (1977) and "Raiders of the Lost Ark" (1981), Ford was looking for a role with dramatic depth. According to production documents, several actors were considered for the role, including Gene Hackman, Sean Connery, Jack Nicholson, Paul Newman, Clint Eastwood, Tommy Lee Jones, Arnold Schwarzenegger, Al Pacino, and Burt Reynolds. One role that was not difficult to cast was Rutger Hauer as Roy Batty, the violent yet thoughtful leader of the replicants. 
Scott cast Hauer without having met him, based solely on Hauer's performances in Paul Verhoeven's movies Scott had seen ("Katie Tippel", "Soldier of Orange", and "Turkish Delight"). Hauer's portrayal of Batty was regarded by Philip K. Dick as "the perfect Batty – cold, Aryan, flawless". Of the many films Hauer has made, "Blade Runner" is his favorite. As he explained in a live chat in 2001, ""Blade Runner" needs no explanation. It just is. All of the best. There is nothing like it. To be part of a real masterpiece which changed the world's thinking. It's awesome." Hauer rewrote his character's "tears in rain" speech himself and presented the words to Scott on set prior to filming. "Blade Runner" used a number of then-lesser-known actors: Sean Young portrays Rachael, an experimental replicant implanted with the memories of Tyrell's niece, causing her to believe she is human; Nina Axelrod auditioned for the role. Daryl Hannah portrays Pris, a "basic pleasure model" replicant; Stacey Nelkin auditioned for the role, but was given another part in the film, which was ultimately cut before filming. Casting Pris and Rachael was challenging, requiring several screen tests with Morgan Paull playing the role of Deckard. Paull was cast as Deckard's fellow bounty hunter Holden based on his performances in the tests. Brion James portrays Leon Kowalski, a combat and laborer replicant, and Joanna Cassidy portrays Zhora, an assassin replicant. Edward James Olmos portrays Gaff. Olmos used his diverse ethnic background, and personal research, to help create the fictional "Cityspeak" language his character uses in the film. His initial address to Deckard at the noodle bar is partly in Hungarian and means, "Horse dick [bullshit]! No way. You are the Blade ... Blade Runner." M. Emmet Walsh plays Captain Bryant, a hard-drinking, sleazy, and underhanded police veteran typical of the film noir genre. Joe Turkel portrays Dr. Eldon Tyrell, a corporate mogul who built an empire on genetically manipulated humanoid slaves. William Sanderson was cast as J. F. Sebastian, a quiet and lonely genius who provides a compassionate yet compliant portrait of humanity. J. F. sympathizes with the replicants, whom he sees as companions, and he shares their shorter lifespan due to his rapid aging disease. Joe Pantoliano had earlier been considered for the role. James Hong portrays Hannibal Chew, an elderly geneticist specializing in synthetic eyes, and Hy Pyke portrayed the sleazy bar owner Taffey Lewis – in a single take, something almost unheard-of with Scott, whose drive for perfection resulted at times in double-digit takes. Interest in adapting Philip K. Dick's novel "Do Androids Dream of Electric Sheep?" developed shortly after its 1968 publication. Director Martin Scorsese was interested in filming the novel, but never optioned it. Producer Herb Jaffe optioned it in the early 1970s, but Dick was unimpressed with the screenplay written by Herb's son Robert: "Jaffe's screenplay was so terribly done ... Robert flew down to Santa Ana to speak with me about the project. And the first thing I said to him when he got off the plane was, 'Shall I beat you up here at the airport, or shall I beat you up back at my apartment?'" The screenplay by Hampton Fancher was optioned in 1977. Producer Michael Deeley became interested in Fancher's draft and convinced director Ridley Scott to film it. 
Scott had previously declined the project, but after leaving the slow production of "Dune", wanted a faster-paced project to take his mind off his older brother's recent death. He joined the project on February 21, 1980, and managed to push up the promised Filmways financing from US$13 million to $15 million. Fancher's script focused more on environmental issues and less on issues of humanity and religion, which are prominent in the novel, and Scott wanted changes. Fancher found a cinema treatment by William S. Burroughs for Alan E. Nourse's novel "The Bladerunner" (1974), titled "Blade Runner (a movie)". Scott liked the name, so Deeley obtained the rights to the titles. Eventually he hired David Peoples to rewrite the script and Fancher left the job over the issue on December 21, 1980, although he later returned to contribute additional rewrites. Although Filmways had invested over $2.5 million in pre-production, it withdrew its financial backing as the date of commencement of principal photography neared. In ten days Deeley had secured $21.5 million in financing through a three-way deal between The Ladd Company (through Warner Bros.), the Hong Kong-based producer Sir Run Run Shaw and Tandem Productions. Dick became concerned that no one had informed him about the film's production, which added to his distrust of Hollywood. After Dick criticized an early version of Fancher's script in an article written for the Los Angeles "Select TV Guide", the studio sent Dick the Peoples rewrite. Although Dick died shortly before the film's release, he was pleased with the rewritten script and with a 20-minute special effects test reel that was screened for him when he was invited to the studio. Despite his well-known skepticism of Hollywood in principle, Dick enthused to Scott that the world created for the film looked exactly as he had imagined it. He said, "I saw a segment of Douglas Trumbull's special effects for "Blade Runner" on the KNBC-TV news. I recognized it immediately. It was my own interior world. They caught it perfectly." He also approved of the film's script, saying, "After I finished reading the screenplay, I got the novel out and looked through it. The two reinforce each other, so that someone who started with the novel would enjoy the movie and someone who started with the movie would enjoy the novel." The motion picture was dedicated to Dick. Principal photography of "Blade Runner" began on March 9, 1981, and ended four months later. In 1992, Ford revealed, ""Blade Runner" is not one of my favorite films. I tangled with Ridley." Apart from friction with the director, Ford also disliked the voiceovers: "When we started shooting it had been tacitly agreed that the version of the film that we had agreed upon was the version without voiceover narration. It was a nightmare. I thought that the film had worked without the narration. But now I was stuck re-creating that narration. And I was obliged to do the voiceovers for people that did not represent the director's interests." "I went kicking and screaming to the studio to record it." The narration monologues were written by an uncredited Roland Kibbee. In 2006, when Scott was asked "Who's the biggest pain in the arse you've ever worked with?", he replied: "It's got to be Harrison ... he'll forgive me because now I get on with him. Now he's become charming. But he knows a lot, that's the problem. When we worked together it was my first film up and I was the new kid on the block. But we made a good movie." Ford said of Scott in 2000: "I admire his work. 
We had a bad patch there, and I'm over it." In 2006, Ford reflected on the production of the film, saying: "What I remember more than anything else when I see "Blade Runner" is not the 50 nights of shooting in the rain, but the voiceover ... I was still obliged to work for these clowns that came in writing one bad voiceover after another." Ridley Scott confirmed in the summer 2007 issue of "Total Film" that Harrison Ford contributed to the "Blade Runner" Special Edition DVD, and had already recorded his interviews. "Harrison's fully on board", said Scott. The Bradbury Building in downtown Los Angeles served as a filming location, and a Warner Bros. backlot housed the 2019 Los Angeles street sets. Other locations included the Ennis-Brown House and the 2nd Street Tunnel. Test screenings resulted in several changes, including adding a voice-over, a happy ending, and the removal of a Holden hospital scene. The relationship between the filmmakers and the investors was difficult, which culminated in Deeley and Scott being fired but still working on the film. Crew members created T-shirts during filming saying "Yes Guv'nor, My Ass", mocking Scott's unfavorable comparison of U.S. and British crews; Scott responded with a T-shirt of his own reading "Xenophobia Sucks", and the incident became known as the T-shirt war. Ridley Scott credits Edward Hopper's painting "Nighthawks" and the French science fiction comics magazine "Métal Hurlant", to which the artist Jean "Moebius" Giraud contributed, as stylistic mood sources. He also drew on the landscape of "Hong Kong on a very bad day" and the industrial landscape of his one-time home in northeast England. The visual style of the movie is influenced by the work of futurist Italian architect Antonio Sant'Elia. Scott hired Syd Mead as his concept artist; like Scott, he was influenced by "Métal Hurlant". Moebius was offered the opportunity to assist in the pre-production of "Blade Runner", but he declined so that he could work on René Laloux's animated film "Les Maîtres du temps" – a decision that he later regretted. Production designer Lawrence G. Paull and art director David Snyder realized Scott's and Mead's sketches. Douglas Trumbull and Richard Yuricich supervised the special effects for the film, and Mark Stetson served as chief model maker. "Blade Runner" has numerous deep similarities to Fritz Lang's "Metropolis", including a built-up urban environment, in which the wealthy literally live above the workers, dominated by a huge building – the Stadtkrone Tower in "Metropolis" and the Tyrell Building in "Blade Runner". Special effects supervisor David Dryer used stills from "Metropolis" when lining up "Blade Runner"s miniature building shots. The extended end scene in the original theatrical release shows Rachael and Deckard traveling into daylight with pastoral aerial shots filmed by director Stanley Kubrick. Ridley Scott contacted Kubrick about using some of his surplus helicopter aerial photography from "The Shining". "Spinner" is the generic term for the fictional flying cars used in the film. A spinner can be driven as a ground-based vehicle, and take off vertically, hover, and cruise much like vertical take-off and landing (VTOL) aircraft. They are used extensively by the police as patrol cars, and wealthy people can also acquire spinner licenses. 
The vehicle was conceived and designed by Syd Mead, who described the spinner as an "aerodyne" – a vehicle which directs air downward to create lift – though press kits for the film stated that the spinner was propelled by three engines: "conventional internal combustion, jet, and anti-gravity". A spinner is on permanent exhibit at the Science Fiction Museum and Hall of Fame in Seattle, Washington. Mead's conceptual drawings were transformed into 25 vehicles by automobile customizer Gene Winfield; at least two were working ground vehicles, while others were light-weight mockups for crane shots and set decoration for street shots. Two of them ended up at Disney World in Orlando, Florida, but were later destroyed, and a few others remain in private collections. The Voigt-Kampff machine is a fictional interrogation tool, originating from the novel. The Voigt-Kampff is a polygraph-like machine used by blade runners to determine whether an individual is a replicant. It measures bodily functions such as respiration, blush response, heart rate and eye movement in response to questions dealing with empathy. (Tyrell states: "Capillary dilation of the so-called blush response? Fluctuation of the pupil? Involuntary dilation of the iris?") In the film, two replicants – Leon and Rachael – take the test. Deckard tells Tyrell that it usually takes 20 to 30 cross-referenced questions to distinguish a replicant, in contrast with the novel, where it is stated that it takes only six or seven questions to make a determination. In the film, it takes more than a hundred questions to determine that Rachael is a replicant. The "Blade Runner" soundtrack by Vangelis is a dark melodic combination of classic composition and futuristic synthesizers which mirrors the film-noir retro-future envisioned by Ridley Scott. Vangelis, fresh from his Academy Award-winning score for "Chariots of Fire", composed and performed the music on his synthesizers. He also made use of various chimes and the vocals of collaborator Demis Roussos. Another memorable sound is the haunting tenor sax solo "Love Theme" by British saxophonist Dick Morrissey, who performed on many of Vangelis's albums. Ridley Scott also used "Memories of Green" from the Vangelis album "See You Later", an orchestral version of which Scott would later use in his film "Someone to Watch Over Me". Along with Vangelis' compositions and ambient textures, the film's soundscape also features a track by the Japanese ensemble Nipponia – "Ogi no Mato" or "The Folding Fan as a Target" from the Nonesuch Records release "Traditional Vocal and Instrumental Music" – and a track by harpist Gail Laughton from "Harps of the Ancient Temples" on Laurel Records. Although the score was well received by fans and critically acclaimed – nominated in 1983 for a BAFTA and a Golden Globe as best original score – and the film's end titles promised a soundtrack album from Polydor Records, the release of the official soundtrack recording was delayed for over a decade. There are two official releases of the music from "Blade Runner". In the absence of an official album release, the New American Orchestra recorded an orchestral adaptation in 1982 which bore little resemblance to the original. Some of the film tracks would, in 1989, surface on the compilation "Vangelis: Themes", but not until the 1992 release of the "Director's Cut" version would a substantial amount of the film's score see commercial release. 
These delays and poor reproductions led to the production of many bootleg recordings over the years. A bootleg tape surfaced in 1982 at science fiction conventions and became popular given the delay of an official release of the original recordings, and in 1993 "Off World Music, Ltd" created a bootleg CD that would prove more comprehensive than Vangelis' official CD in 1994. A set with three CDs of "Blade Runner"-related Vangelis music was released in 2007. Titled "Blade Runner Trilogy", the first disc contains the same tracks as the 1994 official soundtrack release, the second features previously unreleased music from the movie, and the third disc is all newly composed music from Vangelis, inspired by, and in the spirit of, the movie. The film's special effects are generally recognized to be among the best of all time, using the available (non-digital) technology to the fullest. In addition to matte paintings and models, the techniques employed included multipass exposures. In some scenes, the set was lit, shot, the film rewound, and then rerecorded over with different lighting. In some cases this was done 16 times in all. The cameras were frequently motion controlled using computers. Many effects used techniques which had been developed during the production of "Close Encounters of the Third Kind". "Blade Runner" was released in 1,290 theaters on June 25, 1982. That date was chosen by producer Alan Ladd Jr. because his previous highest-grossing films ("Star Wars" and "Alien") had a similar opening date (May 25) in 1977 and 1979, making the 25th of the month his "lucky day". "Blade Runner" had a reasonably good opening, earning $6.1 million during its first weekend in theaters. The film was released close to other major science-fiction and fantasy releases such as "The Thing", "Star Trek II: The Wrath of Khan", "Conan the Barbarian" and "E.T. the Extra-Terrestrial", which affected its commercial success. Initial reactions among film critics were mixed. Some wrote that the plot took a back seat to the film's special effects, and did not fit the studio's marketing as an action/adventure movie. Others acclaimed its complexity and predicted it would stand the test of time. Negative criticism in the United States cited its slow pace. Sheila Benson from the "Los Angeles Times" called it "Blade Crawler", and Pat Berman in "The State" and "Columbia Record" described it as "science fiction pornography". Pauline Kael praised "Blade Runner" as worthy of a place in film history for its distinctive sci-fi vision, yet criticized the film's lack of development in "human terms". Academics began writing analyses of the film almost as soon as it was released, and the boom in home video formats helped establish a growing cult around the film, which scholars have dissected for its dystopic aspects, its questions regarding "authentic" humanity, its ecofeminist aspects, and its use of conventions from multiple genres. Popular culture began to reassess its impact as a classic several years after it was released. Roger Ebert praised the visuals of both the original and the "Director's Cut" versions and recommended it for that reason; however, he found the human story clichéd and a little thin. He later added "The Final Cut" to his "Great Movies" list. Critics Chris Rodley and Janet Maslin theorized that "Blade Runner" changed cinematic and cultural discourse through its image repertoire, and subsequent influence on films. 
In 2012, "Time" film critic Richard Corliss surgically analyzed the durability, complexity, screenplay, sets and production dynamics from a personal, three-decade perspective. On review aggregator Rotten Tomatoes, "Blade Runner" holds an approval rating of 90% based on 108 reviews, with an average rating of 8.5/10. The site's critical consensus reads, "Misunderstood when it first hit theaters, the influence of Ridley Scott's mysterious, neo-noir "Blade Runner" has deepened with time. A visually remarkable, achingly human sci-fi masterpiece." Metacritic, another review aggregator, assigned the film a weighted average score of 89 out of 100, based on 11 critics, indicating "universal acclaim". Denis Villeneuve, who directed the sequel, "Blade Runner 2049", cites the film as a huge influence for him and many others. "Blade Runner" won or received nominations for the following awards: Several versions of "Blade Runner" have been shown. The original workprint version (1982, 113 minutes) was shown for audience test previews in Denver and Dallas in March 1982. Negative responses to the previews led to the modifications resulting in the U.S. theatrical version. The workprint was shown as a director's cut without Scott's approval at the Los Angeles Fairfax Theater in May 1990, at an AMPAS showing in April 1991, and in September and October 1991 at the Los Angeles NuArt Theater and the San Francisco Castro Theatre. Positive responses pushed the studio to approve work on an official director's cut. A San Diego Sneak Preview was shown only once, in May 1982, and was almost identical to the U.S. theatrical version but contained three extra scenes not shown in any other version, including the 2007 Final Cut. Two versions were shown in the film's 1982 theatrical release: the U.S. theatrical version (117 minutes), known as the original version or "Domestic Cut" (released on Betamax, CED Videodisc and VHS in 1983, and on LaserDisc in 1987), and the "International Cut" (117 minutes), also known as the "Criterion Edition" or "uncut version", which included more violent action scenes than the U.S. version. Although initially unavailable in the U.S. and distributed in Europe and Asia via theatrical and local Warner Home Video Laserdisc releases, the "International Cut" was later released on VHS and Criterion Collection Laserdisc in North America, and re-released in 1992 as a "10th Anniversary Edition". Ridley Scott's "Director's Cut" (1991, 116 minutes) was made available on VHS and Laserdisc in 1993, and on DVD in 1997. Significant changes from the theatrical version include the removal of Deckard's voice-over; the re-insertion of the unicorn sequence, and the removal of the studio-imposed happy ending. Scott provided extensive notes and consultation to Warner Bros. through film preservationist Michael Arick, who was put in charge of creating the "Director's Cut". Scott's "The Final Cut" (2007, 117 minutes) was released by Warner Bros. theatrically on October 5, 2007, and subsequently released on DVD, HD DVD, and Blu-ray Disc in December 2007. This is the only version over which Scott had complete artistic and editorial control. While not initially a success with North American audiences, "Blade Runner" was popular internationally and garnered a cult following. The film's dark style and futuristic designs have served as a benchmark and its influence can be seen in many subsequent science fiction films, video games, anime, and television programs. For example, Ronald D. 
Moore and David Eick, the producers of the re-imagining of "Battlestar Galactica", have both cited "Blade Runner" as one of the major influences for the show. The film was selected for preservation in the United States National Film Registry in 1993 and is frequently taught in university courses. In 2007, it was named the second-most visually influential film of all time by the Visual Effects Society. The film has also been the subject of parody, such as the comics "Blade Bummer" by "Crazy" comics, "Bad Rubber" by Steve Gallacci, and the 2009 three-part "Red Dwarf" miniseries "Back to Earth". "Blade Runner" continues to reflect modern trends and concerns, and an increasing number of critics consider it one of the greatest science fiction films of all time. It was voted the best science fiction film ever made in a 2004 poll of 60 eminent world scientists. "Blade Runner" is also cited as an important influence on both the style and story of the "Ghost in the Shell" film series, which itself has been highly influential on the future-noir genre. "Blade Runner" has been very influential on the cyberpunk movement. It also influenced the cyberpunk derivative biopunk, which revolves around biotechnology and genetic engineering. The dialogue and music in "Blade Runner" have been sampled in music more than those of any other film of the 20th century. The 2009 album "I, Human" by Singaporean band Deus Ex Machina makes numerous references to the genetic engineering and cloning themes from the film, and even features a track titled "Replicant". "Blade Runner" is cited as a major influence on Warren Spector, designer of the video game "Deus Ex", which displays evidence of the film's influence in both its visual rendering and plot. Indeed, the film's look – and in particular its overall darkness, preponderance of neon lights, and opaque visuals – is easier to render than complicated backdrops, making it a popular reference point for video game designers. It has influenced adventure games such as the 2012 graphical text adventure "Cypher"; "Rise of the Dragon"; "Snatcher"; the Tex Murphy series; "Beneath a Steel Sky"; "Bubblegum Crisis" (and its original anime films); the role-playing game "Shadowrun"; the first-person shooter "Perfect Dark"; and the "Syndicate" series of video games. Among the folklore that has developed around the film over the years, there is a belief that "Blade Runner" cursed the companies whose logos were displayed prominently as product placement. While they were market leaders at the time, Atari, Bell, Cuisinart, and Pan Am all experienced setbacks after the film's release. "Blade Runner" has also received numerous media recognitions. Before filming began, "Cinefantastique" magazine commissioned Paul M. Sammon to write an article about "Blade Runner"s production which became the book "Future Noir: The Making of Blade Runner". The book chronicles "Blade Runner"s evolution, focusing on film-set politics, especially the British director's experiences with his first American film crew; about which producer Alan Ladd, Jr. has said, "Harrison wouldn't speak to Ridley and Ridley wouldn't speak to Harrison. By the end of the shoot Ford was 'ready to kill Ridley', said one colleague. He really would have taken him on if he hadn't been talked out of it." "Future Noir" has short cast biographies and quotations about their experiences, and photographs of the film's production and preliminary sketches. 
A second edition of "Future Noir" was published in 2007, and additional materials not in either print edition have been published online. Philip K. Dick refused a $400,000 offer to write a "Blade Runner" novelization, saying: "[I was] told the cheapo novelization would have to appeal to the twelve-year-old audience" and it "would have probably been disastrous to me artistically". He added, "That insistence on my part of bringing out the original novel and not doing the novelization – they were just furious. They finally recognized that there was a legitimate reason for reissuing the novel, even though it cost them money. It was a victory not just of contractual obligations but of theoretical principles." "Do Androids Dream of Electric Sheep?" was eventually reprinted as a tie-in, with the film poster as a cover and the original title in parentheses below the "Blade Runner" title. Additionally, a novelization of the movie entitled "Blade Runner: A Story of the Future" by Les Martin was released in 1982. Archie Goodwin scripted the comic book adaptation, "A Marvel Super Special: Blade Runner", published in September 1982, which was illustrated by Al Williamson, Carlos Garzon, Dan Green, and Ralph Reese, and lettered by Ed King. There are two video games based on the film, both titled "Blade Runner": one from 1985, an action-adventure side-scroller for Commodore 64, Sinclair ZX Spectrum, and Amstrad CPC by CRL Group PLC, marked as based on the music by Vangelis rather than the film itself (due to licensing issues); and another from 1997, a point-and-click adventure by Westwood Studios. The 1997 video game featured new characters and branching storylines based on the "Blade Runner" world. Eldon Tyrell, Gaff, Leon, Rachael, Chew, and J. F. Sebastian appear, and their voice files are recorded by the original actors. The player assumes the role of McCoy, another replicant-hunter working at the same time as Deckard. The PC game featured a non-linear plot, non-player characters that each ran in their own independent AI, and an unusual pseudo-3D engine (which eschewed polygonal solids in favor of voxel elements) that did not require the use of a 3D accelerator card to play the game. The television film (and later series) "Total Recall 2070" was initially planned as a spin-off of the film "Total Recall" (based on Philip K. Dick's short story "We Can Remember It for You Wholesale"), but was produced as a hybrid of "Total Recall" and "Blade Runner". Many similarities between "Total Recall 2070" and "Blade Runner" were noted, as well as apparent influences on the show from Isaac Asimov's "The Caves of Steel" and the TV series "Holmes & Yoyo". The film has been the subject of several documentaries. A direct sequel was released in 2017, titled "Blade Runner 2049", with Ryan Gosling in the starring role. The film entered production in mid-2016, is set decades after the first film, and saw Harrison Ford reprise his role as Rick Deckard. Dick's friend K. W. Jeter wrote three authorized "Blade Runner" novels that continue Rick Deckard's story, attempting to resolve the differences between the film and "Do Androids Dream of Electric Sheep?" These are "" (1995), "" (1996), and "" (2000) "Blade Runner" cowriter David Peoples wrote the 1998 action film "Soldier", which he referred to as a "sidequel" or spiritual successor to the original film; the two are set in a shared universe. 
A bonus feature on the DVD for "Prometheus", the 2012 film by Scott set in the "Alien" and "Predator" universe, states that Eldon Tyrell, CEO of the "Blade Runner" Tyrell Corporation, was the mentor of Guy Pearce's character Peter Weyland. Battle of Bosworth Field The Battle of Bosworth Field (or Battle of Bosworth) was the last significant battle of the Wars of the Roses, the civil war between the Houses of Lancaster and York that extended across England in the latter half of the 15th century. Fought on 22 August 1485, the battle was won by the Lancastrians. Their leader Henry Tudor, Earl of Richmond, by his victory became the first English monarch of the Tudor dynasty. His opponent, Richard III, the last king of the House of York, was killed in the battle. Historians consider Bosworth Field to mark the end of the Plantagenet dynasty, making it a defining moment of English and Welsh history. Richard's reign began in 1483. At the request of his brother Edward IV, Richard was acting as Lord Protector for his young nephew Edward V. Richard had Parliament declare Edward V illegitimate and ineligible for the throne, and took it for himself. Richard lost popularity when the boy and his younger brother disappeared after he incarcerated them in the Tower of London, and his support was further eroded by the popular belief that he was implicated in the death of his wife. Across the English Channel in Brittany, Henry Tudor, a descendant of the greatly diminished House of Lancaster, seized on Richard's difficulties to challenge his claim to the throne. Henry's first attempt to invade England was frustrated by a storm in 1483, but on his second attempt he arrived unopposed on 7 August 1485 on the southwest coast of Wales. Marching inland, Henry gathered support as he made for London. Richard mustered his troops and intercepted Henry's army south of Market Bosworth in Leicestershire. Thomas, Lord Stanley, and Sir William Stanley brought a force to the battlefield, but held back while they decided which side it would be more advantageous to support. Richard divided his army, which outnumbered Henry's, into three groups (or "battles"). One was assigned to the Duke of Norfolk and another to the Earl of Northumberland. Henry kept most of his force together and placed it under the command of the experienced Earl of Oxford. Richard's vanguard, commanded by Norfolk, attacked but struggled against Oxford's men, and some of Norfolk's troops fled the field. Northumberland took no action when signalled to assist his king, so Richard gambled everything on a charge across the battlefield to kill Henry and end the fight. Seeing the king's knights separated from his army, the Stanleys intervened; Sir William led his men to Henry's aid, surrounding and killing Richard. After the battle Henry was crowned king below an oak tree in nearby Stoke Golding, now a residential garden. Henry hired chroniclers to portray his reign favourably; the Battle of Bosworth was popularised to represent the Tudor dynasty as the start of a new age. From the 15th to the 18th centuries the battle was glamorised as a victory of good over evil, and it forms the climax of William Shakespeare's play "Richard III", which provides a focal point for critics in later film adaptations. The exact site of the battle is disputed because of the lack of conclusive data, and memorials have been erected at different locations. In 1974 the Bosworth Battlefield Heritage Centre was built on a site that has since been challenged by several scholars and historians. 
In October 2009 a team of researchers, who had performed geological surveys and archaeological digs in the area from 2003, suggested a location southwest of Ambion Hill. During the 15th century, civil war raged across England as the Houses of York and Lancaster fought each other for the English throne. In 1471 the Yorkists defeated their rivals in the battles of Barnet and Tewkesbury. The Lancastrian King Henry VI and his only son, Edward of Lancaster, died in the aftermath of the Battle of Tewkesbury. Their deaths left the House of Lancaster with no direct claimants to the throne. The Yorkist king, Edward IV, was in complete control of England. He attainted those who refused to submit to his rule, such as Jasper Tudor and his nephew Henry, naming them traitors and confiscating their lands. The Tudors tried to flee to France but strong winds forced them to land in Brittany, which was a semi-independent duchy, where they were taken into the custody of Duke Francis II. Henry's mother, Lady Margaret Beaufort, was a great-granddaughter of John of Gaunt, uncle of King Richard II and father of King Henry IV. The Beauforts were originally bastards; Richard II legitimised them, but Henry IV decreed that their descendants were not eligible to inherit the throne. Henry Tudor, the only remaining Lancastrian noble with a trace of the royal bloodline, had a weak claim to the throne, and Edward regarded him as "a nobody". The Duke of Brittany, however, viewed Henry as a valuable tool to bargain for England's aid in conflicts with France and kept the Tudors under his protection. Edward IV died 12 years after Tewkesbury on 9 April 1483. His 12-year-old elder son succeeded him as King Edward V; the younger son, nine-year-old Richard of Shrewsbury, was next in line to the throne. Edward V was too young to rule and a Royal Council was established to rule the country until the king's coming of age. Some among the council were worried when it became apparent that the Woodvilles, relatives of Edward IV's widow Elizabeth, were plotting to use their control of the young king to dominate the council. Having offended many in their quest for wealth and power, the Woodville family was not popular. To frustrate the Woodvilles' ambitions, Lord Hastings and other members of the council turned to the new king's uncle—Richard, Duke of Gloucester, brother of Edward IV. The courtiers urged Gloucester to assume the role of Protector quickly, as his late brother had previously requested. On 29 April Gloucester, accompanied by a contingent of guards and Henry Stafford, 2nd Duke of Buckingham, took Edward V into custody and arrested several prominent members of the Woodville family. After bringing the young king to London, Gloucester had two of the Woodvilles (the Queen's brother Anthony Woodville, 2nd Earl Rivers, and her younger son by her first marriage, Richard Grey) executed, without trial, on charges of treason. On 13 June Gloucester accused Hastings of plotting with the Woodvilles and had him beheaded. Nine days later Gloucester convinced Parliament to declare the marriage between Edward IV and Elizabeth illegal, rendering their children illegitimate and disqualifying them from the throne. With his brother's children out of the way, he was next in the line of succession and was proclaimed King Richard III on 26 June. The timing and extrajudicial nature of the deeds done to obtain the throne for Richard won him no popularity, and rumours that spoke ill of the new king spread throughout England. 
After they were declared bastards, the two princes were confined in the Tower of London and never seen in public again. Discontent with Richard's actions manifested itself in the summer after he took control of the country, as a conspiracy emerged to displace him from the throne. The rebels were mostly loyalists to Edward IV, who saw Richard as a usurper. Their plans were coordinated by a Lancastrian, Henry's mother Lady Margaret, who was promoting her son as a candidate for the throne. The highest-ranking conspirator was Buckingham. No chronicles tell of the duke's motive in joining the plot, although historian Charles Ross proposes that Buckingham was trying to distance himself from a king who was becoming increasingly unpopular with the people. Michael Jones and Malcolm Underwood suggest that Margaret deceived Buckingham into thinking the rebels supported him to be king. The plan was to stage uprisings within a short time in southern and western England, overwhelming Richard's forces. Buckingham would support the rebels by invading from Wales, while Henry came in by sea. Bad timing and weather wrecked the plot. An uprising in Kent started 10 days prematurely, prompting Richard to muster the royal army and take steps to put down the insurrections. Richard's spies informed him of Buckingham's activities, and the king's men captured and destroyed the bridges across the River Severn. When Buckingham and his army reached the river, they found it swollen and impossible to cross because of a violent storm that broke on 15 October. Buckingham was trapped and had no safe place to retreat; his Welsh enemies seized his home castle after he had set forth with his army. The duke abandoned his plans and fled to Wem, where he was betrayed by his servant and arrested by Richard's men. On 2 November 1483 he was executed. Henry had attempted a landing on 10 October (or 19 October), but his fleet was scattered by a storm. He reached the coast of England (at either Plymouth or Poole) and a group of soldiers hailed him to come ashore. They were, in fact, Richard's men, prepared to capture Henry once he set foot on English soil. Henry was not deceived and returned to Brittany, abandoning the invasion. Without Buckingham or Henry, the rebellion was easily crushed by Richard. The survivors of the failed uprisings fled to Brittany, where they openly supported Henry's claim to the throne. At Christmas, Henry Tudor swore an oath to marry Edward IV's daughter, Elizabeth of York, to unite the warring houses of York and Lancaster. Henry's rising prominence made him a great threat to Richard, and the Yorkist king made several overtures to the Duke of Brittany to surrender the young Lancastrian. Francis refused, holding out for the possibility of better terms from Richard. In mid-1484 Francis was incapacitated by illness and while recuperating, his treasurer Pierre Landais took over the reins of government. Landais reached an agreement with Richard to send back Henry and his uncle in exchange for military and financial aid. John Morton, Bishop of Ely, who was then in Flanders, learned of the scheme and warned the Tudors, who fled to France. The French court allowed them to stay; the Tudors were useful pawns to ensure that Richard's England did not interfere with French plans to annex Brittany. On 16 March 1485 Richard's queen, Anne Neville, died, and rumours spread across the country that she was murdered to pave the way for Richard to marry his niece, Elizabeth. 
The gossip alienated Richard from some of his northern supporters, and upset Henry across the English Channel. The loss of Elizabeth's hand in marriage could unravel the alliance between Henry's supporters who were Lancastrians and those who were loyalists to Edward IV. Anxious to secure his bride, Henry recruited mercenaries formerly in French service to supplement his following of exiles and set sail from France on 1 August. By the 15th century English chivalric ideas of selfless service to the king had been corrupted. Armed forces were mostly raised through musters in individual estates; every able-bodied man had to respond to his lord's call to arms, and each noble had exclusive authority over his militia. Although a king could raise personal militia from his lands, he could only muster a significantly large army through the support of his nobles. Richard, like his predecessors, had to win over these men by granting gifts and maintaining cordial relationships. Powerful nobles could demand greater incentives to remain on the liege's side or else they might turn against him. Three groups, each with its own agenda, stood on Bosworth Field: Richard III and his Yorkist army; his challenger, Henry Tudor, who championed the Lancastrian cause; and the fence-sitting Stanleys. Small and slender, Richard III did not have the robust physique associated with many of his Plantagenet predecessors. However, he enjoyed very rough sports and activities that were considered manly. His performances on the battlefield impressed his brother greatly, and he became Edward's right-hand man. During the 1480s Richard defended the northern borders of England. In 1482 Edward charged him to lead an army into Scotland with the aim of replacing King James III with the Duke of Albany. Richard's army broke through the Scottish defences and occupied the capital, Edinburgh, but Albany decided to give up his claim to the throne in return for the post of Lieutenant General of Scotland. As well as obtaining a guarantee that the Scottish government would concede territories and diplomatic benefits to the English crown, Richard's campaign retook the town of Berwick-upon-Tweed, which the Scots had conquered in 1460. Edward was not satisfied by these gains, which, according to Ross, could have been greater if Richard had been resolute enough to capitalise on the situation while in control of Edinburgh. In her analysis of Richard's character, Christine Carpenter sees him as a soldier who was more used to taking orders than giving them. However, he was not averse to displaying his militaristic streak; on ascending the throne he made known his desire to lead a crusade against "not only the Turks, but all [his] foes". Richard's most loyal subject was John Howard, 1st Duke of Norfolk. The duke had served Richard's brother for many years and had been one of Edward IV's closer confidants. He was a military veteran, having fought in the Battle of Towton in 1461 and served as Hastings' deputy at Calais in 1471. Ross speculates that he may have borne a grudge against Edward for depriving him of a fortune. Norfolk was due to inherit a share of the wealthy Mowbray estate on the death of eight-year-old Anne de Mowbray, the last of her family. However, Edward convinced Parliament to circumvent the law of inheritance and transfer the estate to his younger son, who was married to Anne. Consequently, Howard supported Richard III in deposing Edward's sons, for which he received the dukedom of Norfolk and his original share of the Mowbray estate. 
Henry Percy, 4th Earl of Northumberland, also supported Richard's seizure of the throne of England. The Percys were loyal Lancastrians, but Edward IV eventually won the earl's allegiance. Northumberland had been captured and imprisoned by the Yorkists in 1461, losing his titles and estates; however, Edward released him eight years later and restored his earldom. From that time Northumberland served the Yorkist crown, helping to defend northern England and maintain its peace. Initially the earl was at odds with Richard, as Edward groomed his brother to be the leading power of the north. Northumberland was mollified when he was promised he would be the Warden of the East March, a position that was formerly hereditary for the Percys. He served under Richard during the 1482 invasion of Scotland, and the allure of being in a position to dominate the north of England if Richard went south to assume the crown was his likely motivation for supporting Richard's bid for kingship. However, after becoming king, Richard began moulding his nephew, John de la Pole, 1st Earl of Lincoln, to manage the north, passing over Northumberland for the position. According to Carpenter, although the earl was amply compensated, he despaired of any possibility of advancement under Richard. Henry Tudor was unfamiliar with the arts of war and was a stranger to the land he was trying to conquer. He spent the first fourteen years of his life in Wales and the next fourteen in Brittany and France. Slender but strong and decisive, Henry lacked a penchant for battle and was not much of a warrior; chroniclers such as Polydore Vergil and ambassadors like Pedro de Ayala found him more interested in commerce and finance. Having not fought in any battles, Henry recruited several experienced veterans on whom he could rely for military advice and the command of his armies. John de Vere, 13th Earl of Oxford, was Henry's principal military commander. He was adept in the arts of war. At the Battle of Barnet, he commanded the Lancastrian right wing and routed the division opposing him. However, as a result of confusion over identities, Oxford's group came under friendly fire from the Lancastrian main force and retreated from the field. The earl fled abroad and continued his fight against the Yorkists, raiding shipping and eventually capturing the island fort of St Michael's Mount in 1473. He surrendered after receiving no aid or reinforcement, but in 1484 escaped from prison and joined Henry's court in France, bringing along his erstwhile gaoler Sir James Blount. Oxford's presence raised morale in Henry's camp and troubled Richard III. In the early stages of the Wars of the Roses, the Stanleys of Cheshire had been predominantly Lancastrians. Sir William Stanley, however, was a staunch Yorkist supporter, fighting in the Battle of Blore Heath in 1459 and helping Hastings to put down uprisings against Edward IV in 1471. When Richard took the crown, Sir William showed no inclination to turn against the new king, refraining from joining Buckingham's rebellion, for which he was amply rewarded. Sir William's elder brother, Thomas Stanley, 2nd Baron Stanley, was not as steadfast. By 1485, he had served three kings, namely Henry VI, Edward IV, and Richard III. Lord Stanley's skilled political manoeuvrings—vacillating between opposing sides until it was clear who would be the winner—gained him high positions; he was Henry's chamberlain and Edward's steward. 
His non-committal stance, until the crucial point of a battle, earned him the loyalty of his men, who felt he would not needlessly send them to their deaths. Even though Lord Stanley had served as Edward IV's steward, his relations with the king's brother, the eventual Richard III, were not cordial. The two had conflicts that erupted into violence around March 1470. Furthermore, having taken Lady Margaret as his second wife in June 1472, Stanley was Henry Tudor's stepfather, a relationship which did nothing to win him Richard's favour. Despite these differences, Stanley did not join Buckingham's revolt in 1483. When Richard executed those conspirators who had been unable to flee England, he spared Lady Margaret. However, he declared her titles forfeit and transferred her estates to Stanley's name, to be held in trust for the Yorkist crown. Richard's act of mercy was calculated to reconcile him with Stanley, but it may have been to no avail—Carpenter has identified a further cause of friction in Richard's intention to reopen an old land dispute that involved Thomas Stanley and the Harrington family. Edward IV had ruled the case in favour of Stanley in 1473, but Richard planned to overturn his brother's ruling and give the wealthy estate to the Harringtons. Immediately before the Battle of Bosworth, being wary of Stanley, Richard took Stanley's son, Lord Strange, as hostage to discourage him from joining Henry. Reports of the number of soldiers Henry brought with him vary widely, with different sources giving figures of 1,000, 3,000, 3,600, 4,000 and 5,000 men. This initial force consisted of the English and Welsh exiles who had gathered around Henry, combined with a contingent of mercenaries put at his disposal by Charles VIII of France. The history of John Major (published in 1521) claims that Charles had granted Henry 5,000 men, of whom 1,000 were Scots, headed by Sir Alexander Bruce. No mention of Scottish soldiers was made by subsequent English historians. After the battle, Bruce was given an annuity of £20 by Henry for his "faithful services". How many Frenchmen actually sailed is unknown, but the historian Chris Skidmore estimates over half of Henry's armed fleet was manned by Frenchmen. The Crowland Chronicler also recorded that Henry's troops were "as much French as English". Many of these French mercenaries were from the garrison of Philippe de Crevecoeur, Lord of Esquerdes. Commynes recorded that these included "some 3,000 of the most unruly men in Normandy". Henry's crossing of the English Channel in 1485 was without incident. Thirty ships sailed from Harfleur on 1 August and, with fair winds behind them, landed in his native Wales, at Mill Bay (near Dale) on the north side of Milford Haven on 7 August, easily capturing nearby Dale Castle. His long-awaited arrival had been hailed by contemporary Welsh bards such as Dafydd Ddu and Gruffydd ap Dafydd, who acclaimed him as the true prince and "the youth of Brittany defeating the Saxons" who would bring their country back to glory. Mill Bay had been chosen as it was completely hidden from view, and there was no resistance from the cohort of Richard's men stationed at Dale, where Henry and his men spent the first night. In the morning they marched to Haverfordwest, the county town of Pembrokeshire, 12 miles away, and were received "with the utmost goodwill of all". Here, the Welshman Arnold Butler (who had met Henry in Brittany) announced that "the whole of Pembrokeshire was prepared to serve him". Butler's closest friend was Rhys ap Thomas. 
That afternoon, Henry and his troops headed north towards Cardigan and pitched camp "at the fifth milestone towards Cardigan", where they were joined by Gruffydd Rede with a band of soldiers and John Morgan of Tredegar. The following day, 9 August, they passed through Bwlch-y-gwynt and over the Preseli mountains to Fagwyr Llwyd south of Cilgwyn. Richard's lieutenant in South Wales, Sir Walter Herbert, failed to move against Henry, and two of his officers, Richard Griffith and Evan Morgan, abandoned Richard III to join Henry Tudor with their men. However, the most important defector to Henry in this early stage of the campaign was probably Rhys ap Thomas, who was the leading figure in West Wales. Richard had appointed Rhys Lieutenant in West Wales for his refusal to join Buckingham's rebellion, asking that he surrender his son Gruffydd ap Rhys ap Thomas as surety, although by some accounts Rhys had managed to evade this condition. However, Henry successfully courted Rhys, offering the lieutenancy of all Wales in exchange for his fealty. Henry marched via Aberystwyth while Rhys followed a more southerly route, recruiting a force of Welshmen en route, variously estimated at 500 or 2,000 men, to swell Henry's army when they reunited at Cefn Digoll, Welshpool. By 15 or 16 August, Henry and his men had crossed the English border, making for the town of Shrewsbury. Since 22 June 1485 Richard had been aware of Henry's impending invasion, and had ordered his lords to maintain a high level of readiness. News of Henry's landing reached Richard on 11 August, but it took three to four days for his messengers to notify his lords of their king's mobilisation. On 16 August, the Yorkist army started to gather; Norfolk set off for Leicester, the assembly point, that night. The city of York, a traditional stronghold of Richard's family, asked the king for instructions, and, receiving a reply three days later, sent 80 men to join the king. Simultaneously Northumberland, whose northern territory was the most distant from the capital, had gathered his men and ridden to Leicester. Although London was his goal, Henry did not move directly towards the city. After resting in Shrewsbury, his forces went eastwards and picked up Sir Gilbert Talbot and other English allies, including deserters from Richard's forces. Although its size had increased substantially since the landing, Henry's army was still considerably outnumbered by Richard's forces. Henry's pace through Staffordshire was slow, delaying the confrontation with Richard so that he could gather more recruits to his cause. Henry had been communicating on friendly terms with the Stanleys for some time before setting foot in England, and the Stanleys had mobilised their forces on hearing of Henry's landing. They ranged themselves ahead of Henry's march through the English countryside, meeting twice in secret with Henry as he moved through Staffordshire. At the second of these, at Atherstone in Warwickshire, they conferred "in what sort to arraign battle with King Richard, whom they heard to be not far off". On 21 August, the Stanleys were making camp on the slopes of a hill north of Dadlington, while Henry encamped his army at White Moors to the northwest of their camp. On 20 August, Richard rode from Nottingham to Leicester, joining Norfolk. He spent the night at the Blue Boar inn (demolished 1836). Northumberland arrived the following day. The royal army proceeded westwards to intercept Henry's march on London. 
Passing Sutton Cheney, Richard moved his army towards Ambion Hill—which he thought would be of tactical value—and made camp on it. Richard's sleep was not peaceful and, according to the "Croyland Chronicle", in the morning his face was "more livid and ghastly than usual". The Yorkist army, variously estimated at between 7,500 and 12,000 men, deployed on the hilltop along the ridgeline from west to east. Norfolk's group (or "battle" in the parlance of the time) of spearmen stood on the right flank, protecting the cannon and about 1,200 archers. Richard's group, comprising 3,000 infantry, formed the centre. Northumberland's men guarded the left flank; he had approximately 4,000 men, many of them mounted. Standing on the hilltop, Richard had a wide, unobstructed view of the area. He could see the Stanleys and their 4,000–6,000 men holding positions on and around Dadlington Hill, while to the southwest was Henry's army. Henry's force has been variously estimated at between 5,000 and 8,000 men, his original landing force of exiles and mercenaries having been augmented by the recruits gathered in Wales and the English border counties (in the latter area probably mustered chiefly by the Talbot interest), and by deserters from Richard's army. Historian John Mackie believes that 1,800 French mercenaries, led by Philibert de Chandée, formed the core of Henry's army. John Mair, writing thirty-five years after the battle, claimed that this force contained a significant Scottish component, and this claim is accepted by some modern writers, but Mackie reasons that the French would not have released their elite Scottish knights and archers, and concludes that there were probably few Scottish troops in the army, although he accepts the presence of captains like Bernard Stewart, Lord of Aubigny. Interpreting the vague mentions of the battle in the old texts, historians identified areas near the foot of Ambion Hill as likely sites of the clash and proposed possible scenarios for the engagement. In these reconstructions, Henry began by moving his army towards Ambion Hill, where Richard and his men stood. As Henry's army advanced past the marsh at the southwestern foot of the hill, Richard sent a message to Stanley, threatening to execute his son, Lord Strange, if Stanley did not join the attack on Henry immediately. Stanley replied that he had other sons. Incensed, Richard gave the order to behead Strange but his officers temporised, saying that battle was imminent, and it would be more convenient to carry out the execution afterwards. Henry had also sent messengers to Stanley asking him to declare his allegiance. The reply was evasive—the Stanleys would "naturally" come, after Henry had given orders to his army and arranged them for battle. Henry had no choice but to confront Richard's forces alone. Well aware of his own military inexperience, Henry handed command of his army to Oxford and retired to the rear with his bodyguards. Oxford, seeing the vast line of Richard's army strung along the ridgeline, decided to keep his men together instead of splitting them into the traditional three battles: vanguard, centre, and rearguard. He ordered the troops to stay close to their banners, fearing that they would become enveloped. Individual groups clumped together, forming a single large mass flanked by horsemen on the wings. The Lancastrians were harassed by Richard's cannon as they manoeuvred around the marsh, seeking firmer ground. 
Once Oxford and his men were clear of the marsh, Norfolk's battle and several contingents of Richard's group, under the command of Sir Robert Brackenbury, started to advance. Hails of arrows showered both sides as they closed. Oxford's men proved the steadier in the ensuing hand-to-hand combat; they held their ground and several of Norfolk's men fled the field. Norfolk lost one of his senior officers, Walter Devereux, in this early clash. Recognising that his force was at a disadvantage, Richard signalled for Northumberland to assist, but Northumberland's group showed no signs of movement. Historians such as Horrox and Pugh believe Northumberland chose not to aid his king for personal reasons. Ross doubts the aspersions cast on Northumberland's loyalty, suggesting instead that Ambion Hill's narrow ridge hindered him from joining the battle. The earl would have had to either go through his allies or execute a wide flanking move—near impossible to perform given the standard of drill at the time—to engage Oxford's men. At this juncture Richard saw Henry at some distance behind his main force and decided to end the fight quickly by killing the enemy commander. He led a charge of mounted men around the melee and tore into Henry's group; several accounts state that Richard's force numbered 800–1,000 knights, but Ross says it was more likely that Richard was accompanied only by his household men and closest friends. Richard killed Henry's standard-bearer Sir William Brandon in the initial charge and unhorsed burly John Cheyne, Edward IV's former standard-bearer, with a blow to the head from his broken lance. French mercenaries in Henry's retinue related how the attack had caught them off guard and that Henry sought protection by dismounting and concealing himself among them to present less of a target. Henry made no attempt to engage in combat himself. Oxford had left a small reserve of pike-equipped men with Henry. They slowed the pace of Richard's mounted charge and bought Tudor some critical time. The remainder of Henry's bodyguards surrounded their master and succeeded in keeping him away from the Yorkist king. On seeing Richard embroiled with Henry's men and separated from his main force, William Stanley made his move. He led his men into the fight at Henry's side. Outnumbered, Richard's group was surrounded and gradually pressed back. Richard's force was driven several hundred yards away from Tudor, near to the edge of a marsh. The king's horse lost its footing and toppled into it. Richard gathered himself and rallied his dwindling followers, supposedly refusing to retreat: "God forbid that I retreat one step. I will either win the battle as a king, or die as one." In the fighting Richard's banner man—Sir Percival Thirlwall—lost his legs but held the Yorkist banner aloft until he was killed. It is likely that James Harrington also died in the charge. The king's trusted advisor Richard Ratcliffe was also slain. Polydore Vergil, Henry Tudor's official historian, recorded that "King Richard, alone, was killed fighting manfully in the thickest press of his enemies". Richard had come within a sword's length of Henry Tudor before being surrounded by Sir William Stanley's men and killed. The Burgundian chronicler Jean Molinet says that a Welshman struck the death-blow with a halberd while Richard's horse was stuck in the marshy ground. It was said that the blows were so violent that the king's helmet was driven into his skull. 
The contemporary Welsh poet Guto'r Glyn implies the leading Welsh Lancastrian Rhys ap Thomas, or one of his men, killed the king, writing that he "killed the boar, shaved his head". Analysis of King Richard's skeletal remains found 11 wounds, nine of them to the head; a blade consistent with a halberd had sliced off part of the rear of Richard's skull, suggesting he had lost his helmet. Richard's forces disintegrated as news of his death spread. Northumberland and his men fled north on seeing the king's fate, and Norfolk was killed. After the battle, Richard's circlet is said to have been found and brought to Henry, who was proclaimed king at the top of Crown Hill, near the village of Stoke Golding. According to Vergil, Henry's official historian, Lord Stanley found the circlet. Historians Stanley Chrimes and Sydney Anglo dismiss the legend of the circlet's finding in a hawthorn bush; none of the contemporary sources reported such an event. Ross, however, does not ignore the legend. He argues that the hawthorn bush would not be part of Henry's coat of arms if it did not have a strong relationship to his ascendance. Baldwin points out that a hawthorn bush motif was already used by the House of Lancaster, and Henry merely added the crown. In Vergil's chronicle, 100 of Henry's men, compared to 1,000 of Richard's, died in this battle—a ratio Chrimes believes to be an exaggeration. The bodies of the fallen were brought to St James Church at Dadlington for burial. However, Henry denied any immediate rest for Richard; instead the last Yorkist king's corpse was stripped naked and strapped across a horse. His body was brought to Leicester and openly exhibited to prove that he was dead. Early accounts suggest that this was in the major Lancastrian collegiate foundation, the Church of the Annunciation of Our Lady of the Newarke. After two days, the corpse was interred in a plain tomb, within the church of the Greyfriars. The church was demolished following the friary's dissolution in 1538, and the location of Richard's tomb was long uncertain. On 12 September 2012 archaeologists announced the discovery of a buried skeleton with spinal abnormalities and head injuries under a car park in Leicester, and their suspicions that it might be Richard III. On 4 February 2013, it was announced that DNA testing had convinced Leicester University scientists and researchers "beyond reasonable doubt" that the remains were those of King Richard. On 26 March 2015, these remains were ceremonially buried in Leicester Cathedral. On the following day Richard's tomb was unveiled. Henry dismissed the mercenaries in his force, retaining only a small core of local soldiers to form a "Yeomen of his Garde", and proceeded to establish his rule of England. Parliament reversed his attainder and recorded Richard's kingship as illegal, although the Yorkist king's reign remained officially in the annals of English history. The proclamation of Edward IV's children as illegitimate was also reversed, restoring Elizabeth's status as a royal princess. The marriage of Elizabeth, the heiress to the House of York, to Henry, the master of the House of Lancaster, marked the end of the feud between the two houses and the start of the Tudor dynasty. The royal matrimony, however, was delayed until Henry was crowned king and had established his claim on the throne firmly enough to preclude that of Elizabeth and her kin. 
Henry further convinced Parliament to backdate his reign to the day before the battle, enabling him retrospectively to declare as traitors those who had fought against him at Bosworth Field. Northumberland, who had remained inactive during the battle, was imprisoned but later released and reinstated to pacify the north in Henry's name. The purge of those who fought for Richard occupied Henry's first two years of rule, although later he proved willing to accept those who submitted to him regardless of their former allegiances. Of his supporters, Henry rewarded the Stanleys the most generously. Aside from making William his chamberlain, he bestowed the earldom of Derby upon Lord Stanley along with grants and offices in other estates. Henry rewarded Oxford by restoring to him the lands and titles confiscated by the Yorkists and appointing him as Constable of the Tower and admiral of England, Ireland, and Aquitaine. For his kin, Henry created Jasper Tudor the Duke of Bedford. He returned to his mother the lands and grants stripped from her by Richard, and proved to be a filial son, granting her a place of honour in the palace and faithfully attending to her throughout his reign. Parliament's declaration of Margaret as "femme sole" effectively empowered her; she no longer needed to manage her estates through Stanley. Elton points out that despite his initial largesse, Henry's supporters at Bosworth would only enjoy his special favour for the short term; in later years, he would instead promote those who best served his interests. Like the kings before him, Henry faced dissenters. The first open revolt occurred two years after Bosworth Field; Lambert Simnel claimed to be Edward Plantagenet, 17th Earl of Warwick, who was Edward IV's nephew. The Earl of Lincoln backed him for the throne and led rebel forces in the name of the House of York. The rebel army fended off several attacks by Northumberland's forces, before engaging Henry's army at the Battle of Stoke Field on 16 June 1487. Oxford and Bedford led Henry's men, including several former supporters of Richard III. Henry won this battle easily, but other malcontents and conspiracies would follow. A rebellion in 1489 started with Northumberland's murder; military historian Michael C. C. Adams says that a note left next to Northumberland's body blamed the earl for Richard's death. Contemporary accounts of the Battle of Bosworth can be found in four main sources, one of which is the English "Croyland Chronicle", written by a senior Yorkist chronicler who relied on second-hand information from nobles and soldiers. The other accounts were written by foreigners—Vergil, Jean Molinet, and Diego de Valera. Whereas Molinet was sympathetic to Richard, Vergil was in Henry's service and drew information from the king and his subjects to portray them in a good light. Diego de Valera, whose information Ross regards as unreliable, compiled his work from letters of Spanish merchants. However, other historians have used Valera's work to deduce possibly valuable insights not readily evident in other sources. Ross finds the poem "The Ballad of Bosworth Field" a useful source for ascertaining certain details of the battle. The multitude of different accounts, mostly based on second- or third-hand information, has proved an obstacle to historians as they try to reconstruct the battle. Their common complaint is that, except for its outcome, very few details of the battle are found in the chronicles. 
According to historian Michael Hicks, the Battle of Bosworth is one of the worst-recorded clashes of the Wars of the Roses. Henry tried to present his victory as a new beginning for the country; he hired chroniclers to portray his reign as a "modern age" with its dawn in 1485. Hicks states that the works of Vergil and the blind historian Bernard André, promoted by subsequent Tudor administrations, became the authoritative sources for writers for the next four hundred years. As such, Tudor literature paints a flattering picture of Henry's reign, depicting the Battle of Bosworth as the final clash of the civil war and downplaying the subsequent uprisings. For England the Middle Ages ended in 1485, and English Heritage claims that other than William the Conqueror's successful invasion of 1066, no other year holds more significance in English history. By portraying Richard as a hunchbacked tyrant who usurped the throne by killing his nephews, the Tudor historians attached a sense of myth to the battle: it became an epic clash between good and evil with a satisfying moral outcome. According to Reader Colin Burrow, André was so overwhelmed by the historic significance of the battle that he represented it with a blank page in his "Henry VII" (1502). For Professor Peter Saccio, the battle was indeed a unique clash in the annals of English history, because "the victory was determined, not by those who fought, but by those who delayed fighting until they were sure of being on the winning side." Historians such as Adams and Horrox believe that Richard lost the battle not for any mythic reasons, but because of morale and loyalty problems in his army. Most of the common soldiers found it difficult to fight for a liege whom they distrusted, and some lords believed that their situation might improve if Richard was dethroned. According to Adams, against such duplicities Richard's desperate charge was the only knightly behaviour on the field. As fellow historian Michael Bennet puts it, the attack was "the swan-song of [mediaeval] English chivalry". Adams believes this view was shared at the time by the printer William Caxton, who enjoyed sponsorship from Edward IV and Richard III. Nine days after the battle, Caxton published Thomas Malory's story about chivalry and death by betrayal—"Le Morte d'Arthur"—seemingly as a response to the circumstances of Richard's death. Elton does not believe Bosworth Field has any true significance, pointing out that the 20th-century English public largely ignored the battle until its quincentennial celebration. In his view, the dearth of specific information about the battle—no-one even knows exactly where it took place—demonstrates its insignificance to English society. Elton considers the battle as just one part of Henry's struggles to establish his reign, underscoring his point by noting that the young king had to spend ten more years pacifying factions and rebellions to secure his throne. Mackie asserts that, in hindsight, Bosworth Field is notable as the decisive battle that established a dynasty which would rule unchallenged over England for more than a hundred years. Mackie notes that contemporary historians, wary of the three royal successions during the long Wars of the Roses, considered Bosworth Field just another in a lengthy series of such battles. It was through the works and efforts of Francis Bacon and his successors that the public started to believe the battle had decided their futures by bringing about "the fall of a tyrant". 
William Shakespeare gives prominence to the Battle of Bosworth in his play, "Richard III". It is the "one big battle"; no other fighting scene distracts the audience from this action, represented by a one-on-one sword fight between Henry Tudor and Richard III. Shakespeare uses their duel to bring a climactic end to the play and the Wars of the Roses; he also uses it to champion morality, portraying the "unequivocal triumph of good over evil". Richard, the villainous lead character, has been built up in the battles of Shakespeare's earlier play, "Henry VI, Part 3", as a "formidable swordsman and a courageous military leader"—in contrast to the dastardly means by which he becomes king in "Richard III". Although the Battle of Bosworth has only five sentences to direct it, three scenes and more than four hundred lines precede the action, developing the background and motivations for the characters in anticipation of the battle. Shakespeare's account of the battle was mostly based on chroniclers Edward Hall's and Raphael Holinshed's dramatic versions of history, which were sourced from Vergil's chronicle. However, Shakespeare's attitude towards Richard was shaped by scholar Thomas More, whose writings displayed extreme bias against the Yorkist king. The result of these influences is a script that vilifies the king, and Shakespeare had few qualms about departing from history to incite drama. Margaret of Anjou died in 1482, but Shakespeare had her speak to Richard's mother before the battle to foreshadow Richard's fate and fulfill the prophecy she had given in "Henry VI". Shakespeare exaggerated the cause of Richard's restless night before the battle, imagining it as a haunting by the ghosts of those whom the king had murdered, including Buckingham. Richard is portrayed as suffering a pang of conscience, but as he speaks he regains his confidence and asserts that he will be evil, if that is what it takes to retain his crown. The fight between the two armies is simulated by rowdy noises made off-stage ("alarums" or alarms) while actors walk on-stage, deliver their lines, and exit. To build anticipation for the duel, Shakespeare requests more "alarums" after Richard's councillor, William Catesby, announces that the king is "[enacting] more wonders than a man". Richard punctuates his entrance with the classic line, "A horse, a horse! My kingdom for a horse!" He refuses to withdraw, continuing to slay Henry's doubles in his search for his nemesis. There is no documentary evidence that Henry had five decoys at Bosworth Field; the idea was Shakespeare's invention. He drew inspiration from Henry IV's use of them at the Battle of Shrewsbury (1403) to amplify the perception of Richard's courage on the battlefield. Similarly, the single combat between Henry and Richard is Shakespeare's creation. "The True Tragedy of Richard III", an earlier play by an unknown playwright, shows no sign of staging such an encounter: its stage directions give no hint of visible combat. Despite the dramatic licences taken, Shakespeare's version of the Battle of Bosworth was the model of the event for English textbooks for many years during the 18th and 19th centuries. This glamorised version of history, promulgated in books and paintings and played out on stages across the country, perturbed humorist Gilbert Abbott à Beckett. 
He voiced his criticism in the form of a poem, equating the romantic view of the battle to watching a "fifth-rate production of "Richard III"": shabbily costumed actors fight the Battle of Bosworth on-stage while those with lesser roles lounge at the back, showing no interest in the proceedings. In Laurence Olivier's 1955 film adaptation of "Richard III", the Battle of Bosworth is represented not by a single duel but by a general melee that became the film's most recognised scene and a regular screening at Bosworth Battlefield Heritage Centre. The film depicts the clash between the Yorkist and Lancastrian armies on an open field, focusing on individual characters amidst the savagery of hand-to-hand fighting, and received accolades for the realism portrayed. One reviewer for "The Manchester Guardian" newspaper, however, was not impressed, finding the number of combatants too sparse for the wide plains and a lack of subtlety in Richard's death scene. The means by which Richard is shown to prepare his army for the battle also earned acclaim. As Richard speaks to his men and draws his plans in the sand using his sword, his units appear on-screen, arraying themselves according to the lines that Richard had drawn. Intimately woven together, the combination of pictorial and narrative elements effectively turns Richard into a storyteller, who acts out the plot he has constructed. Shakespearian critic Herbert Coursen extends that imagery: Richard sets himself up as a creator of men, but dies amongst the savagery of his creations. Coursen finds the depiction a contrast to that of Henry V and his "band of brothers". The adaptation of the setting for "Richard III" to a 1930s fascist England in Ian McKellen's 1995 film, however, did not sit well with historians. Adams posits that the original Shakespearian setting for Richard's fate at Bosworth teaches the moral of facing one's fate, no matter how unjust it is, "nobly and with dignity". By overshadowing the dramatic teaching with special effects, McKellen's film reduces its version of the battle to a pyrotechnic spectacle about the death of a one-dimensional villain. Coursen agrees that, in this version, the battle and Richard's end are trite and underwhelming. Officially the site of the battle is deemed by Leicestershire County Council to be in the vicinity of the town of Market Bosworth. The council engaged historian Daniel Williams to research the battle, and in 1974 his findings were used to build the Bosworth Battlefield Heritage Centre and the presentation it houses. Williams's interpretation, however, has since been questioned. Sparked by the battle's quincentenary celebration in 1985, a dispute among historians has led many to doubt the accuracy of Williams's theory. In particular, geological surveys conducted from 2003 to 2009 by the Battlefields Trust, a charitable organisation that protects and studies old English battlefields, show that the southern and eastern flanks of Ambion Hill were solid ground in the 15th century, contrary to Williams's claim that it was a large area of marshland. Landscape archaeologist Glenn Foard, leader of the survey, said the collected soil samples and finds of medieval military equipment suggest that the battle took place southwest of Ambion Hill (52°34′41″N 1°26′02″W), contrary to the popular belief that it was fought near the foot of the hill. 
The Historic Buildings and Monuments Commission for England (popularly referred to as "English Heritage") argues that the battle was named after Market Bosworth because the town was the nearest significant settlement to the battlefield in the 15th century. As explored by Professor Philip Morgan, a battle might initially not be named specifically at all. As time passes, writers of administrative and historical records find it necessary to identify a notable battle, ascribing it a name that is usually toponymical in nature and sourced from combatants or observers. This official name becomes accepted by society and future generations without question. Early records associated the Battle of Bosworth with "Brownehethe", "bellum Miravallenses", "Sandeford" and "Dadlyngton field". The earliest record, a municipal memorandum of 23 August 1485 from York, locates the battle "on the field of Redemore". This is corroborated by a 1485–86 letter that mentions "Redesmore" as its site. According to historian Peter Foss, records did not associate the battle with "Bosworth" until 1510. Foss is named by English Heritage as the principal advocate for "Redemore" as the battle site. He suggests the name is derived from "Hreod Mor", an Anglo-Saxon phrase that means "reedy marshland". Basing his opinion on 13th- and 16th-century church records, he believes "Redemore" was an area of wetland that lay between Ambion Hill and the village of Dadlington, and was close to the Fenn Lanes, a Roman road running east to west across the region. Foard believes this road to be the most probable route that both armies took to reach the battlefield. Williams dismisses the notion of "Redmore" as a specific location, saying that the term refers to a large area of reddish soil; Foss argues that Williams's sources are local stories and flawed interpretations of records. Moreover, he proposes that Williams was influenced by William Hutton's 1788 "The Battle of Bosworth-Field", which Foss blames for introducing the notion that the battle was fought west of Ambion Hill on the north side of the River Sence. Hutton, as Foss suggests, misinterpreted a passage from his source, Raphael Holinshed's 1577 "Chronicle". Holinshed wrote, "King Richard pitched his field on a hill called Anne Beame, refreshed his soldiers and took his rest." Foss believes that Hutton mistook "field" to mean "field of battle", thus creating the idea that the fight took place on Anne Beame (Ambion) Hill. To "[pitch] his field", as Foss clarifies, was a period expression for setting up a camp. Foss brings further evidence for his "Redemore" theory by quoting Edward Hall's 1550 "Chronicle". Hall stated that Richard's army stepped onto a plain after breaking camp the next day. Furthermore, historian William Burton, author of "Description of Leicestershire" (1622), wrote that the battle was "fought in a large, flat, plaine, and spacious ground, three miles [5 km] distant from [Bosworth], between the Towne of Shenton, Sutton [Cheney], Dadlington and Stoke [Golding]". In Foss's opinion both sources are describing an area of flat ground north of Dadlington. English Heritage, responsible for managing England's historic sites, used both theories to designate the site for Bosworth Field. Without preference for either theory, they constructed a single continuous battlefield boundary that encompasses the locations proposed by both Williams and Foss. The region has experienced extensive changes over the years, starting after the battle. 
Holinshed stated in his chronicle that he found firm ground where he expected the marsh to be, and Burton confirmed that by the end of the 16th century, areas of the battlefield were enclosed and had been improved to make them agriculturally productive. Trees were planted on the south side of Ambion Hill, forming Ambion Wood. In the 18th and 19th centuries, the Ashby Canal carved through the land west and south-west of Ambion Hill. Winding alongside the canal at a distance, the Ashby and Nuneaton Joint Railway crossed the area on an embankment. The changes to the landscape were so extensive that when Hutton revisited the region in 1807 after an earlier 1788 visit, he could not readily find his way around. Bosworth Battlefield Heritage Centre was built on Ambion Hill, near Richard's Well. According to legend, Richard III drank from one of the several springs in the region on the day of the battle. In 1788, a local pointed out one of the springs to Hutton as the one mentioned in the legend. A stone structure bearing a commemorative inscription was later built over the location. Northwest of Ambion Hill, just across the northern tributary of the River Sence, a flag and memorial stone mark Richard's Field. Erected in 1973, the site was selected on the basis of Williams's theory. St James's Church at Dadlington is the only structure in the area that is reliably associated with the Battle of Bosworth; the bodies of those killed in the battle were buried there. The very extensive survey carried out (2005–2009) by the Battlefields Trust headed by Glenn Foard led eventually to the discovery of the real location of the core battlefield. This lies about a kilometre further west than the location suggested by Peter Foss. It is in what was at the time of the battle an area of marginal land at the meeting of several township boundaries. There was a cluster of field names suggesting the presence of marshland and heath. Thirty-four lead round shot were discovered as a result of systematic metal detecting (more than the total found previously on all other 15th-century European battlefields), as well as other significant finds, including a small silver gilt badge depicting a boar. Experts believe that the boar badge could indicate the actual site of Richard III's death, since this high-status badge, depicting his personal emblem, was probably worn by a member of his close retinue. A new interpretation of the battle now integrates the historic accounts with the battlefield finds and landscape history. The new site lies on either side of the Fenn Lanes Roman road, close to Fenn Lane Farm, and is some three kilometres to the southwest of Ambion Hill. Based on the round shot scatter, the likely size of Richard III's army, and the topography, Glenn Foard and Anne Curry think that Richard may have lined up his forces on a slight ridge which lies just east of Fox Covert Lane and behind a postulated medieval marsh. Richard's vanguard, commanded by the Duke of Norfolk, was on the right (north) side of Richard's battle line, with the Earl of Northumberland on Richard's left (south) side. Tudor's forces approached along the line of the Roman road and lined up to the west of the present-day Fenn Lane Farm, having marched from the vicinity of Merevale in Warwickshire. The Stanleys were positioned on the south side of the battlefield, on rising ground towards Stoke Golding and Dadlington. The Earl of Oxford turned north to avoid the marsh (and possibly Richard's guns). This manoeuvre put the marsh on Oxford's right. 
He moved to attack Norfolk's vanguard. Norfolk was subsequently killed. Northumberland failed to engage, possibly due to the presence of the Stanleys, whose intentions were unclear, or due to the position of the marsh (or for both reasons). With his situation deteriorating, Richard decided to launch an attack against Henry Tudor, which almost succeeded, but the king's horse became stuck in the marsh, and he was killed. Fen Hole (where the boar badge was found) is believed to be a remnant of the marsh. When Richard began his charge, Sir William Stanley intervened from the vicinity of Stoke Golding. It was here, on what came to be known as Crown Hill (the closest elevated ground to the fighting), that Lord Stanley crowned Henry Tudor after Richard was killed. The windmill close to which the Duke of Norfolk is said to have died (according to the ballad "Lady Bessy") was Dadlington windmill. This has disappeared but is known to have stood at the time of the battle, in the vicinity of Apple Orchard Farm and North Farm, Dadlington. A small cluster of significant finds was made in this area, including a gold livery badge depicting an eagle. The windmill lay between the core battlefield and Richard's camp on Ambion Hill, and the rout of Norfolk's vanguard was in this direction. This also accounts for the large number of dead in Dadlington parish, leading to the setting up of the battle chantry there. Historic England have re-defined the boundaries of the registered Bosworth Battlefield to incorporate the newly identified site. There are hopes that public access to the site may be possible in the future. Battle of Blenheim The Battle of Blenheim (German: "Zweite Schlacht bei Höchstädt"; French: "Bataille de Höchstädt"), fought on 13 August 1704, was a major battle of the War of the Spanish Succession. The overwhelming Allied victory ensured the safety of Vienna from the Franco-Bavarian army, thus preventing the collapse of the Grand Alliance. Louis XIV of France sought to knock the Holy Roman Emperor, Leopold, out of the war by seizing Vienna, the Habsburg capital, and so gain a favourable peace settlement. The dangers to Vienna were considerable: the Elector of Bavaria and Marshal Marsin's forces in Bavaria threatened from the west, and Marshal Vendôme's large army in northern Italy posed a serious danger with a potential offensive through the Brenner Pass. Vienna was also under pressure from Rákóczi's Hungarian revolt from its eastern approaches. Realising the danger, the Duke of Marlborough resolved to alleviate the peril to Vienna by marching his forces south from Bedburg to help maintain Emperor Leopold within the Grand Alliance. A combination of deception and skilled administration – designed to conceal his true destination from friend and foe alike – enabled Marlborough to march unhindered from the Low Countries to the River Danube in five weeks. After securing Donauwörth on the Danube, Marlborough sought to engage the Elector's and Marsin's army before Marshal Tallard could bring reinforcements through the Black Forest. However, with the Franco-Bavarian commanders reluctant to fight until their numbers were deemed sufficient, the Duke enacted a policy of plundering in Bavaria designed to force the issue. 
The tactic proved unsuccessful, but when Tallard arrived to bolster the Elector's army, and Prince Eugene arrived with reinforcements for the Allies, the two armies finally met on the banks of the Danube in and around the small village of Blindheim, from which the English "Blenheim" is derived. Blenheim was one of the battles that altered the course of the war, which until then had been going in favour of Louis' coalition, and it ended French plans to knock the Emperor out of the war. France suffered as many as 38,000 casualties, including the commander-in-chief, Marshal Tallard, who was taken captive to England. Before the 1704 campaign ended, the Allies had taken Landau and the towns of Trier and Trarbach on the Moselle in preparation for the following year's campaign into France itself. The offensive never materialised as the Grand Alliance's army had to depart the Moselle to defend Liège from a French counteroffensive. The war would rage on for another decade. By 1704, the War of the Spanish Succession was in its fourth year. The previous year had been one of success for France and her allies, most particularly on the Danube, where Marshal Villars and the Elector of Bavaria had created a direct threat to Vienna, the Habsburg capital. Vienna had been saved by dissension between the two commanders, leading to the brilliant Villars being replaced by the less dynamic Marshal Marsin. Nevertheless, by 1704, the threat was still real: Rákóczi's Hungarian revolt was already threatening the Empire's eastern approaches, and Marshal Vendôme's forces threatened an invasion from northern Italy. In the courts of Versailles and Madrid, Vienna's fall was confidently anticipated, an event which would almost certainly have led to the collapse of the Grand Alliance. To isolate the Danube from any Allied intervention, Marshal Villeroi's 46,000 troops were expected to pin the 70,000 Dutch and English troops around Maastricht in the Low Countries, while General de Coigny protected Alsace against surprise with a further corps. The only forces immediately available for Vienna's defence were Prince Louis of Baden's force of 36,000 stationed in the Lines of Stollhofen to watch Marshal Tallard at Strasbourg; there was also a weak force of 10,000 men under Field Marshal Count Limburg Styrum observing Ulm. Both the Imperial Austrian Ambassador in London, Count Wratislaw, and the Duke of Marlborough realised the implications of the situation on the Danube. The Dutch, however, who clung to their troops for their country's protection, were against any adventurous military operation as far south as the Danube and would never willingly permit any major weakening of the forces in the Spanish Netherlands. Marlborough, realising the only way to ignore Dutch wishes was by the use of secrecy and guile, set out to deceive his Dutch allies by pretending simply to move his troops to the Moselle – a plan approved by The Hague – but once there, he would slip the Dutch leash and link up with Austrian forces in southern Germany. "My intentions", wrote the Duke from The Hague on 29 April to his governmental confidant, Sidney Godolphin, "are to march with the English to Coblenz and declare that I intend to campaign on the Moselle. But when I come there, to write to the Dutch States that I think it absolutely necessary for the saving of the Empire to march with the troops under my command and to join with those that are in Germany ... in order to make measures with Prince Lewis of Baden for the speedy reduction of the Elector of Bavaria." 
Marlborough's march started on 19 May from Bedburg, north-west of Cologne. The army (assembled by the Duke's brother, General Charles Churchill) consisted of 66 squadrons, 31 battalions and 38 guns and mortars, totalling 21,000 men (16,000 of whom were English troops). This force was to be augmented "en route" such that by the time Marlborough reached the Danube, it would number 40,000 (47 battalions, 88 squadrons). Whilst Marlborough led his army, General Overkirk would maintain a defensive position in the Dutch Republic in case Villeroi mounted an attack. The Duke had assured the Dutch that if the French were to launch an offensive he would return in good time, but Marlborough calculated that as he marched south, the French commander would be drawn after him. In this assumption Marlborough proved correct: Villeroi shadowed the Duke with 30,000 men in 60 squadrons and 42 battalions. The military dangers in such an enterprise were numerous: Marlborough's lines of communication along the Rhine would be hopelessly exposed to French interference, for Louis' generals controlled the left bank of the river and its central reaches. Such a long march would almost certainly involve a high wastage of men and horses through exhaustion and disease. However, Marlborough was convinced of the urgency – "I am very sensible that I take a great deal upon me", he had earlier written to Godolphin, "but should I act otherwise, the Empire would be undone ..." Whilst Allied preparations had progressed, the French were striving to maintain and re-supply Marshal Marsin. Marsin had been operating with the Elector of Bavaria against the Imperial commander, Prince Louis of Baden, and was somewhat isolated from France: his only lines of communication lay through the rocky passes of the Black Forest. However, on 14 May, with considerable skill, Marshal Tallard managed to bring 10,000 reinforcements and vast supplies and munitions through the difficult terrain, whilst outmanoeuvring Baron Thüngen, the Imperial general who sought to block his path. Tallard then returned with his own force to the Rhine, once again side-stepping Thüngen's efforts to intercept him. The whole operation was an outstanding military achievement. On 26 May, Marlborough reached Coblenz, where the Moselle meets the Rhine. If he intended an attack along the Moselle, the Duke would now have to turn west; instead, the following day the army crossed to the right bank of the Rhine (pausing to add 5,000 waiting Hanoverians and Prussians). "There will be no campaign on the Moselle", wrote Villeroi, who had taken up a defensive position on the river, "the English have all gone up into Germany." A second possible objective now occurred to the French – an Allied incursion into Alsace and an attack on the city of Strasbourg. Marlborough skilfully encouraged this apprehension by constructing bridges across the Rhine at Philippsburg, a ruse that not only encouraged Villeroi to come to Tallard's aid in the defence of Alsace, but one that ensured the French plan to march on Vienna remained paralysed by uncertainty. With Villeroi shadowing Marlborough's every move, Marlborough's gamble that the French would not move against the weakened Dutch position in the Netherlands paid off. In any case, Marlborough had promised to return to the Netherlands if a French attack developed there, transferring his troops swiftly down the Rhine on barges. 
Encouraged by this promise (whatever it was worth), the States General agreed to release the Danish contingent of seven battalions and 22 squadrons as a reinforcement. Marlborough reached Ladenburg, in the plain of the Neckar and the Rhine, and there halted for three days to rest his cavalry and allow the guns and infantry to close up. On 6 June he arrived at Wiesloch, south of Heidelberg. The following day, the Allied army swung away from the Rhine towards the hills of the Swabian Jura and the Danube beyond. At last Marlborough's destination was established without doubt. On 10 June, the Duke met for the first time the President of the Imperial War Council, Prince Eugene – accompanied by Count Wratislaw – at the village of Mundelsheim, halfway between the Danube and the Rhine. By 13 June, the Imperial Field Commander, Prince Louis of Baden, had joined them in Großheppach. The three generals commanded a force of nearly 110,000 men. At the conference it was decided that Eugene would return with 28,000 men to the Lines of Stollhofen on the Rhine to keep an eye on Villeroi and Tallard and prevent them going to the aid of the Franco-Bavarian army on the Danube. Meanwhile, Marlborough's and Baden's forces would combine, totalling 80,000 men, for the march on the Danube to seek out the Elector and Marsin before they could be reinforced. Knowing Marlborough's destination, Tallard and Villeroi met at Landau in the Palatinate on 13 June to rapidly construct a plan to save Bavaria, but the rigidity of the French command system was such that any variations from the original plan had to be sanctioned by Versailles. The Count of Mérode-Westerloo, commander of the Flemish troops in Tallard's army, wrote – "One thing is certain: we delayed our march from Alsace for far too long and quite inexplicably." Approval from Louis arrived on 27 June: Tallard was to reinforce Marsin and the Elector on the Danube via the Black Forest, with 40 battalions and 50 squadrons; Villeroi was to pin down the Allies defending the Lines of Stollhofen, or, if the Allies should move all their forces to the Danube, he was to join with Marshal Tallard; and General de Coigny, with 8,000 men, would protect Alsace. On 1 July Tallard's army of 35,000 re-crossed the Rhine at Kehl and began its march. On 22 June, Marlborough's forces linked up with Baden's Imperial forces at Launsheim. The whole distance had been covered in five weeks. Thanks to a carefully planned time-table, the effects of wear and tear had been kept to a minimum. Captain Parker described the march discipline – "As we marched through the country of our Allies, commissars were appointed to furnish us with all manner of necessaries for man and horse ... the soldiers had nothing to do but pitch their tents, boil kettles and lie down to rest." In response to Marlborough's manoeuvres, the Elector and Marsin, conscious of their numerical disadvantage with only 40,000 men, moved their forces to the entrenched camp at Dillingen on the north bank of the Danube. Marlborough could not attack Dillingen because of a lack of siege guns – he was unable to bring any from the Low Countries, and Baden had failed to supply any despite assurances to the contrary. The Allies, nevertheless, needed a base for provisions and a good river crossing. On 2 July, therefore, at the Battle of Schellenberg, Marlborough stormed the fortress of Schellenberg on the heights above the town of Donauwörth. 
Count Jean d'Arco had been sent with 12,000 men from the Franco-Bavarian camp to hold the town and grassy hill, but after a ferocious and bloody battle that inflicted enormous casualties on both sides, the Schellenberg finally succumbed, forcing Donauwörth to surrender shortly afterwards. The Elector, knowing his position at Dillingen was now untenable, took up a position behind the strong fortifications of Augsburg. Tallard's march presented a dilemma for Eugene. If the Allies were not to be outnumbered on the Danube, Eugene realised he must either try to cut Tallard off before he could get there or he must hasten to reinforce Marlborough. Yet if he withdrew from the Rhine to the Danube, Villeroi might also make a move south to link up with the Elector and Marsin. Eugene compromised: leaving 12,000 troops behind guarding the Lines of Stollhofen, he marched off with the rest of his army to forestall Tallard. Lacking in numbers, Eugene could not seriously disrupt Tallard's march, but the French Marshal's progress was proving pitifully slow. Tallard's force had suffered considerably more than Marlborough's troops on their march – many of his cavalry horses were suffering from glanders and the mountain passes were proving tough for the 2,000 wagons of provisions. Local German peasants, angry at French plundering, compounded Tallard's problems, leading Mérode-Westerloo to bemoan – "the enraged peasantry killed several thousand of our men before the army was clear of the Black Forest." Tallard had insisted on besieging the little town of Villingen for six days (16–22 July) but abandoned the enterprise on discovering the approach of Eugene. The Elector in Augsburg was informed on 14 July that Tallard was on his way through the Black Forest. This good news bolstered the Elector's policy of inaction, further encouraging him to wait for the reinforcements. But this reluctance to fight induced Marlborough to undertake a controversial policy of spoliation in Bavaria, burning buildings and crops throughout the rich lands south of the Danube. This had two aims: firstly, to put pressure on the Elector to fight or come to terms before Tallard arrived with reinforcements; and secondly, to ruin Bavaria as a base from which the French and Bavarian armies could attack Vienna, or pursue the Duke into Franconia if, at some stage, he had to withdraw northwards. But this destruction, coupled with a protracted siege of Rain (9–16 July), caused Prince Eugene to lament "... since the Donauwörth action I cannot admire their performances", and later to conclude "If he has to go home without having achieved his objective, he will certainly be ruined." Nevertheless, strategically the Duke had been able to place his numerically stronger forces between the Franco-Bavarian army and Vienna. Marshal Tallard, with 34,000 men, reached Ulm, joining with the Elector and Marsin in Augsburg on 5 August (although Tallard was not impressed to find that the Elector had dispersed his army in response to Marlborough's campaign of ravaging the region). Also on 5 August, Eugene reached Höchstädt, riding that same night to meet with Marlborough at Schrobenhausen. Marlborough knew that another crossing point over the Danube would be required in case Donauwörth fell to the enemy. On 7 August, therefore, the first of Baden's 15,000 Imperial troops (the remainder following two days later) left Marlborough's main force to besiege the heavily defended city of Ingolstadt, farther down the Danube. 
With Eugene's forces at Höchstädt on the north bank of the Danube, and Marlborough's at Rain on the south bank, Tallard and the Elector debated their next move. Tallard preferred to bide his time, replenish supplies and allow Marlborough's Danube campaign to flounder in the colder weeks of autumn; the Elector and Marsin, however, newly reinforced, were keen to push ahead. The French and Bavarian commanders eventually agreed on a plan and decided to attack Eugene's smaller force. On 9 August, the Franco-Bavarian forces began to cross to the north bank of the Danube. On 10 August, Eugene sent an urgent dispatch reporting that he was falling back to Donauwörth – "The enemy have marched. It is almost certain that the whole army is crossing the Danube at Lauingen ... The plain of Dillingen is crowded with troops ... Everything, milord, consists in speed and that you put yourself forthwith in movement to join me tomorrow, without which I fear it will be too late." By a series of brilliant marches Marlborough concentrated his forces on Donauwörth and, by noon on 11 August, the link-up was complete. During 11 August, Tallard pushed forward from the river crossings at Dillingen; by 12 August, the Franco-Bavarian forces were encamped behind the small river Nebel near the village of Blenheim on the plain of Höchstädt. That same day Marlborough and Eugene carried out their own reconnaissance of the French position from the church spire at Tapfheim, and moved their combined forces to Münster, a short distance from the French camp. A French reconnaissance under the Marquis de Silly went forward to probe the enemy, but was driven off by Allied troops who had deployed to cover the pioneers of the advancing army, labouring to bridge the numerous streams in the area and improve the passage leading westwards to Höchstädt. Marlborough quickly moved forward two brigades under the command of General Wilkes and Brigadier Rowe to secure the narrow strip of land between the Danube and the wooded Fuchsberg hill, at the Schwenningen defile. Tallard's army numbered 56,000 men and 90 guns; the army of the Grand Alliance, 52,000 men and 66 guns. Some Allied officers who were acquainted with the superior numbers of the enemy, and aware of their strong defensive position, ventured to remonstrate with Marlborough about the hazards of attacking; but the Duke was resolute – "I know the danger, yet a battle is absolutely necessary, and I rely on the bravery and discipline of the troops, which will make amends for our disadvantages". Marlborough and Eugene decided to risk everything, and agreed to attack on the following day. The battlefield stretched for several kilometres. The extreme right flank of the Franco-Bavarian army was covered by the Danube; to the extreme left flank lay the undulating pine-covered hills of the Swabian Jura. A small stream, the Nebel (the ground on either side of which was soft and marshy and only intermittently fordable), fronted the French line. The French right rested on the village of Blenheim near where the Nebel flows into the Danube; the village itself was surrounded by hedges, fences, enclosed gardens, and meadows. Between Blenheim and the next village of Oberglauheim the fields of wheat had been cut to stubble and were now ideal ground on which to deploy troops. From Oberglauheim to the next hamlet of Lutzingen the terrain of ditches, thickets and brambles was potentially difficult ground for the attackers. 
At 02:00 on 13 August, 40 squadrons were sent forward towards the enemy, followed at 03:00, in eight columns, by the main Allied force pushing over the Kessel. At about 06:00 they reached Schwenningen, a short distance from Blenheim. The English and German troops who had held Schwenningen through the night joined the march, making a ninth column on the left of the army. Marlborough and Eugene made their final plans. The Allied commanders agreed that Marlborough would command 36,000 troops and attack Tallard's force of 33,000 on the left (including capturing the village of Blenheim), whilst Eugene, commanding 16,000 men, would attack the Elector and Marsin's combined forces of 23,000 troops on the right wing; if this attack was pressed hard, the Elector and Marsin would have no troops to send to aid Tallard on their right. Lieutenant-General John Cutts would attack Blenheim in concert with Eugene's attack. With the French flanks busy, Marlborough could cross the Nebel and deliver the fatal blow to the French at their centre. However, Marlborough would have to wait until Eugene was in position before the general engagement could begin. The last thing Tallard expected that morning was to be attacked by the Allies – deceived by intelligence gathered from prisoners taken by de Silly the previous day, and confident in their strong natural position, Tallard and his colleagues were convinced that Marlborough and Eugene were about to retreat north-eastwards towards Nördlingen. Tallard wrote a report to this effect to King Louis that morning, but hardly had he sent the messenger when the Allied army began to appear opposite his camp. "I could see the enemy advancing ever closer in nine great columns", wrote Mérode-Westerloo, " ... filling the whole plain from the Danube to the woods on the horizon." Signal guns were fired to bring in the foraging parties and picquets as the French and Bavarian troops tried to draw into battle-order to face the unexpected threat. At about 08:00 the French artillery on their right wing opened fire, answered by Colonel Blood's batteries. The guns were heard by Baden in his camp before Ingolstadt. "The Prince and the Duke are engaged today to the westward", he wrote to the Emperor. "Heaven bless them." An hour later Tallard, the Elector, and Marsin climbed Blenheim's church tower to finalise their plans. It was settled that the Elector and Marsin would hold the front from the hills to Oberglauheim, whilst Tallard would defend the ground between Oberglauheim and the Danube. The French commanders were, however, divided as to how to utilise the Nebel: Tallard's tactic – opposed by Marsin and the Elector who felt it better to close their infantry right up to the stream itself – was to lure the Allies across before unleashing their cavalry upon them, causing panic and confusion; whilst the enemy was struggling in the marshes, they would be caught in crossfire from Blenheim and Oberglauheim. The plan was sound if all its parts were implemented, but it allowed Marlborough to cross the Nebel without serious interference and fight the battle he had in mind. The Franco-Bavarian commanders deployed their forces. In the village of Lutzingen, Count Maffei positioned five Bavarian battalions with a great battery of 16 guns at the village's edge. In the woods to the left of Lutzingen, seven French battalions under the Marquis de Rozel moved into place. 
Between Lutzingen and Oberglauheim the Elector placed 27 squadrons of cavalry – Count d'Arco commanded 14 Bavarian squadrons and Count Wolframsdorf had 13 more in support nearby. To their right stood Marsin's 40 French squadrons and 12 battalions. The village of Oberglauheim was packed with 14 battalions commanded by the Marquis de Blainville (including the effective Irish Brigade known as the 'Wild Geese'). Six batteries of guns were ranged alongside the village. On the right of these French and Bavarian positions, between Oberglauheim and Blenheim, Tallard deployed 64 French and Walloon squadrons (16 drawn from Marsin) supported by nine French battalions standing near the Höchstädt road. In the cornfield next to Blenheim stood three battalions from the Regiment de Roi. Nine battalions occupied the village itself, commanded by the Marquis de Clérambault. Four battalions stood to the rear and a further 11 were in reserve. These battalions were supported by Hautefeuille's 12 squadrons of dismounted dragoons. By 11:00 Tallard, the Elector, and Marsin were in place. Many of the Allied generals were hesitant to attack such a strong position. The Earl of Orkney later confessed that "had I been asked to give my opinion, I had been against it." Prince Eugene was expected to be in position by 11:00, but due to the difficult terrain and enemy fire, progress was slow. Lord Cutts' column – which by 10:00 had expelled the enemy from two water mills upon the Nebel – had already deployed by the river against Blenheim, enduring severe fire over the next three hours from a heavy six-gun battery posted near the village. The rest of Marlborough's army, waiting in their ranks on the forward slope, were also forced to bear the cannonade from the French artillery, suffering 2,000 casualties before the attack could even start. Meanwhile, engineers repaired a stone bridge across the Nebel, and constructed five additional bridges or causeways across the marsh between Blenheim and Oberglauheim. Marlborough's anxiety was finally allayed when, just past noon, Colonel Cadogan reported that Eugene's Prussian and Danish infantry were in place – the order for the general advance was given. At 13:00, Cutts was ordered to attack the village of Blenheim whilst Prince Eugene was requested to assault Lutzingen on the Allied right flank. Cutts ordered Brigadier-General Archibald Rowe's brigade to attack. The English infantry rose from the edge of the Nebel and marched silently towards Blenheim. John Ferguson's Scottish brigade supported Rowe's left, and moved in perfect order towards the barricades between the village and the river, defended by Hautefeuille's dragoons. As the range closed, the French fired a deadly volley. Rowe had ordered that there should be no firing from his men until he struck his sword upon the palisades, but as he stepped forward to give the signal, he fell mortally wounded. The survivors of the leading companies closed up the gaps in their torn ranks and rushed forward. Small parties penetrated the defences, but repeated French volleys forced the English back towards the Nebel, sustaining heavy casualties. As the attack faltered, eight squadrons of elite Gens d'Armes, commanded by the veteran Swiss officer, Beat-Jacques von Zurlauben, fell upon the English troops, cutting at the exposed flank of Rowe's own regiment. 
However, Wilkes' Hessian brigade, lying nearby in the marshy grass at the water's edge, stood firm and repulsed the Gens d'Armes with steady fire, enabling the English and Hessians to re-order and launch another attack. Although the Allies were again repulsed, these persistent attacks on Blenheim eventually bore fruit, panicking Clérambault into making the worst French error of the day. Without consulting Tallard, Clérambault ordered his reserve battalions into the village, upsetting the balance of the French position and nullifying the French numerical superiority. "The men were so crowded in upon one another", wrote Mérode-Westerloo, "that they couldn't even fire – let alone receive or carry out any orders." Marlborough, spotting this error, now countermanded Cutts' intention to launch a third attack, and ordered him simply to contain the enemy within Blenheim; no more than 5,000 Allied soldiers were able to pen in twice the number of French infantry and dragoons. On the Allied right, Eugene's Prussian and Danish forces were desperately fighting the numerically superior forces of the Elector and Marsin. The Prince of Anhalt-Dessau led forward four brigades across the Nebel to assault the well-fortified position of Lutzingen. Here, the Nebel was less of an obstacle, but the great battery positioned on the edge of the village enjoyed a good field of fire across the open ground stretching to the hamlet of Schwennenbach. As soon as the infantry crossed the stream, they were struck by Maffei's infantry, and salvoes from the Bavarian guns positioned both in front of the village and in enfilade on the wood-line to the right. Despite heavy casualties the Prussians attempted to storm the great battery, whilst the Danes, under Count Scholten, attempted to drive the French infantry out of the copses beyond the village. With the infantry heavily engaged, Eugene's cavalry picked its way across the Nebel. After an initial success, his first line of cavalry, under the Imperial General of Horse, Prince Maximilian of Hanover, were pressed by the second line of Marsin's cavalry, and were forced back across the Nebel in confusion. Nevertheless, the exhausted French were unable to follow up their advantage, and the two cavalry forces tried to regroup and reorder their ranks. However, without cavalry support, and threatened with envelopment, the Prussian and Danish infantry were in turn forced to pull back across the Nebel. Panic gripped some of Eugene's troops as they crossed the stream. Ten infantry colours were lost to the Bavarians, and hundreds of prisoners taken; it was only through the leadership of Eugene and the Prussian Prince that the imperial infantry were prevented from abandoning the field. After rallying his troops near Schwennenbach – well beyond their starting point – Eugene prepared to launch a second attack, led by the second-line squadrons under the Duke of Württemberg-Teck. Yet again they were caught in the murderous cross-fire from the artillery in Lutzingen and Oberglauheim, and were once again thrown back in disarray. The French and Bavarians, however, were almost as disordered as their opponents, and they too were in need of inspiration from their commander, the Elector, who was seen – " ... riding up and down, and inspiring his men with fresh courage." Anhalt-Dessau's Danish and Prussian infantry attacked a second time but could not sustain the advance without proper support. Once again they fell back across the stream. 
Whilst these events around Blenheim and Lutzingen were taking place, Marlborough was preparing to cross the Nebel. The centre, commanded by the Duke's brother, General Charles Churchill, consisted of 18 battalions of infantry arranged in two lines: seven battalions in the front line to secure a foothold across the Nebel, and 11 battalions in the rear providing cover from the Allied side of the stream. Between these two lines of infantry were placed 72 squadrons of cavalry, also in two lines. The first line of foot was to pass the stream first and march as far to the other side as could be conveniently done. This line would then form and cover the passage of the horse, leaving gaps in the line of infantry large enough for the cavalry to pass through and take their position in front. Marlborough ordered the formation forward. Once again Zurlauben's Gens d'Armes charged, looking to rout Lumley's English cavalry, which linked Cutts' column facing Blenheim with Churchill's infantry. As these elite French cavalry attacked, they were faced by five English squadrons under Colonel Francis Palmes. To the consternation of the French, the Gens d'Armes were pushed back in terrible confusion, pursued well beyond the Maulweyer stream that flows through Blenheim. "What? Is it possible?" exclaimed the Elector, "the gentlemen of France fleeing?" Palmes, however, attempted to follow up his success but was repulsed in some confusion by other French cavalry, and musketry fire from the edge of Blenheim. Nevertheless, Tallard was alarmed by the repulse of the elite Gens d'Armes and urgently rode across the field to ask Marsin for reinforcements; but Marsin, hard pressed by Eugene – whose second attack was in full flood – refused. As Tallard consulted with Marsin, more of his infantry was being taken into Blenheim by Clérambault. Fatally, Tallard, aware of the situation, did nothing to rectify this grave mistake, leaving him with just the nine battalions of infantry near the Höchstädt road to oppose the massed enemy ranks in the centre. Zurlauben tried several more times to disrupt the Allies forming on Tallard's side of the stream, his front-line cavalry darting forward down the gentle slope towards the Nebel. But the attacks lacked co-ordination, and the Allied infantry's steady volleys disconcerted the French horsemen. During these skirmishes Zurlauben fell mortally wounded, and died two days later. The time was just after 15:00. The Danish cavalry, under the Duke of Württemberg-Neuenstadt (not to be confused with the Duke of Württemberg who fought with Eugene), had made slow work of crossing the Nebel near Oberglau; harassed by Marsin's infantry near the village, the Danes were driven back across the stream. Count Horn's Dutch infantry managed to push the French back from the water's edge, but it was apparent that before Marlborough could launch his main effort against Tallard, Oberglauheim would have to be secured. Count Horn directed the Prince of Holstein-Beck to take the village, but his two Dutch brigades were cut down by the French and Irish troops, who captured and badly wounded the Prince during the action. The battle was now in the balance. If Holstein-Beck's Dutch column were destroyed, the Allied army would be split in two: Eugene's wing would be isolated from Marlborough's, passing the initiative to the Franco-Bavarian forces now engaged across the whole plain. 
Seeing the opportunity, Marsin ordered his cavalry to change from facing Eugene, and turn towards their right and the open flank of Churchill's infantry drawn up in front of Unterglau. Marlborough (who had crossed the Nebel on a makeshift bridge to take personal control), ordered Hulsen's Hanoverian battalions to support the Dutch infantry. A Dutch cavalry brigade under Averock was also called forward but soon came under pressure from Marsin's more numerous squadrons. Marlborough now requested Eugene to release Count Hendrick Fugger and his Imperial Cuirassier brigade to help repel the French cavalry thrust. Despite his own desperate struggle, the Imperial Prince at once complied, demonstrating the high degree of confidence and mutual co-operation between the two generals. Although the Nebel stream lay between Fugger's and Marsin's squadrons, the French were forced to change front to meet this new threat, thus forestalling the chance for Marsin to strike at Marlborough's infantry. Fugger's cuirassiers charged and, striking at a favourable angle, threw back Marsin's squadrons in disorder. With support from Colonel Blood's batteries, the Hessian, Hanoverian and Dutch infantry – now commanded by Count Berensdorf – succeeded in pushing the French and Irish infantry back into Oberglauheim so that they could not again threaten Churchill's flank as he moved against Tallard. The French commander in the village, the Marquis de Blainville, numbered amongst the heavy casualties. By 16:00, with the Franco-Bavarian troops besieged in Blenheim and Oberglau, the Allied centre of 81 squadrons (nine squadrons had been transferred from Cutts' column), supported by 18 battalions was firmly planted amidst the French line of 64 squadrons and nine battalions of raw recruits. There was now a pause in the battle: Marlborough wanted to concert the attack upon the whole front, and Eugene, after his second repulse, needed time to reorganise. Just after 17:00 all was ready along the Allied front. Marlborough's two lines of cavalry had now moved to the front of the Duke's line of battle, with the two supporting lines of infantry behind them. Mérode-Westerloo attempted to extricate some French infantry crowded in Blenheim, but Clérambault ordered the troops back into the village. The French cavalry exerted themselves once more against the first line – Lumley's English and Scots on the Allied left, and Hompesch's Dutch and German squadrons on the Allied right. Tallard's squadrons, lacking infantry support, were tired and ragged but managed to push the Allied first line back to their infantry support. With the battle still not won, Marlborough had to rebuke one of his cavalry officers who was attempting to leave the field – "Sir, you are under a mistake, the enemy lies that way ..." Now, at the Duke's command, the second Allied line under Cuno Josua von Bülow and Bothmer was ordered forward, and, driving through the centre, the Allies finally put Tallard's tired horse to rout, not without cost. The Prussian Life Dragoons' Colonel, Ludwig von Blumenthal, and his 2nd in command, Lt. Col. von Hacke, fell next to each other. But the charge succeeded and with their cavalry in headlong flight, the remaining nine French infantry battalions fought with desperate valour, trying to form square. But it was futile. The French battalions were overwhelmed by Colonel Blood's close-range artillery and platoon fire. 
Mérode-Westerloo later wrote – "[They] died to a man where they stood, stationed right out in the open plain – supported by nobody." The majority of Tallard's retreating troops headed for Höchstädt but most did not make the safety of the town, plunging instead into the Danube where upwards of 3,000 French horsemen drowned; others were cut down by the pursuing cavalry. The Marquis de Gruignan attempted a counter-attack, but he was easily brushed aside by the triumphant Allies. After a final rally behind his camp's tents, shouting entreaties to stand and fight, Marshal Tallard was caught up in the rout and pushed towards Sonderheim. Surrounded by a squadron of Hessian troops, Tallard surrendered to Lieutenant-Colonel de Boinenburg, the Prince of Hesse-Kassel's "aide-de-camp", and was sent under escort to Marlborough. The Duke welcomed the French commander – "I am very sorry that such a cruel misfortune should have fallen upon a soldier for whom I have the highest regard." With salutes and courtesies, the Marshal was escorted to Marlborough's coach. Meanwhile, the Allies had once again attacked the Bavarian stronghold at Lutzingen. Eugene, however, became exasperated with the performance of his Imperial cavalry whose third attack had failed: he had already shot two of his troopers to prevent a general flight. Then, declaring in disgust that he wished to "fight among brave men and not among cowards", Eugene went into the attack with the Prussian and Danish infantry, as did the Dessauer, waving a regimental colour to inspire his troops. This time the Prussians were able to storm the great Bavarian battery, and overwhelm the guns' crews. Beyond the village, Scholten's Danes defeated the French infantry in a desperate hand-to-hand bayonet struggle. When they saw that the centre had broken, the Elector and Marsin decided the battle was lost and, like the remnants of Tallard's army, fled the battlefield (albeit in better order than Tallard's men). Attempts to organise an Allied force to prevent Marsin's withdrawal failed owing to the exhaustion of the cavalry, and the growing confusion in the field. Marlborough now had to turn his attention from the fleeing enemy to direct Churchill to detach more infantry to storm Blenheim. Orkney's infantry, Hamilton's English brigade and St Paul's Hanoverians moved across the trampled wheat to the cottages. Fierce hand-to-hand fighting gradually forced the French towards the village centre, in and around the walled churchyard which had been prepared for defence. Hay and Ross's dismounted dragoons were also sent, but suffered under a counter-charge delivered by the regiments of Artois and Provence under command of Colonel de la Silvière. Colonel Belville's Hanoverians were fed into the battle to steady the resolve of the dragoons, and once more went to the attack. The Allied progress was slow and hard, and like the defenders, they suffered many casualties. Many of the cottages were now burning, obscuring the field of fire and driving the defenders out of their positions. Hearing the din of battle in Blenheim, Tallard sent a message to Marlborough offering to order the garrison to withdraw from the field. "Inform Monsieur Tallard", replied the Duke, "that, in the position in which he is now, he has no command." Nevertheless, as dusk came the Allied commander was anxious for a quick conclusion. The French infantry fought tenaciously to hold on to their position in Blenheim, but their commander was nowhere to be found. 
Clérambault's insistence on confining his huge force in the village was to seal his fate that day. Realising his tactical mistake had contributed to Tallard's defeat in the centre, Clérambault deserted Blenheim and the 27 battalions defending the village, and reportedly drowned in the Danube while attempting to make his escape. By now Blenheim was under assault from every side by three British generals: Cutts, Churchill, and Orkney. The French had repulsed every attack with heavy slaughter, but many had seen what had happened on the plain and what its consequences to them would be; their army was routed and they were cut off. Orkney, attacking from the rear, now tried a different tactic – "... it came into my head to beat parley", he later wrote, "which they accepted of and immediately their Brigadier de Nouville capitulated with me to be prisoner at discretion and lay down their arms." Threatened by Allied guns, other units followed their example. However, it was not until 21:00 that the Marquis de Blanzac, who had taken charge in Clérambault's absence, reluctantly accepted the inevitability of defeat, and some 10,000 of France's best infantry had laid down their arms. During these events Marlborough was still in the saddle conducting the pursuit of the broken enemy. Pausing for a moment he scribbled on the back of an old tavern bill a note addressed to his wife, Sarah: "I have no time to say more but to beg you will give my duty to the Queen, and let her know her army has had a glorious victory." French losses were immense, with over 27,000 killed, wounded and captured. Moreover, the myth of French invincibility had been destroyed and Louis's hopes of an early and victorious peace had been wrenched from his grasp. Mérode-Westerloo summarised the case against Tallard's army: "The French lost this battle for a wide variety of reasons. For one thing they had too good an opinion of their own ability ... Another point was their faulty field dispositions, and in addition there was rampant indiscipline and inexperience displayed ... It took all these faults to lose so celebrated a battle." It was a hard-fought contest, leading Prince Eugene to observe – "I have not a squadron or battalion which did not charge four times at least." Although the war dragged on for years, the Battle of Blenheim was probably its most decisive victory; Marlborough and Eugene, working indivisibly together, had saved the Habsburg Empire and thereby preserved the Grand Alliance from collapse. Munich, Augsburg, Ingolstadt, Ulm and all remaining territory of Bavaria soon fell to the Allies. By the Treaty of Ilbersheim, signed 7 November 1704, Bavaria was placed under Austrian military rule, allowing the Habsburgs to utilise its resources for the rest of the conflict. The remnants of the Elector of Bavaria's and Marshal Marsin's wing limped back to Strasbourg, losing another 7,000 men through desertion. Despite being offered the chance to remain as ruler of Bavaria (under strict terms of an alliance with Austria), the Elector left his country and family in order to continue the war against the Allies from the Spanish Netherlands where he still held the post of governor-general. Their commander-in-chief that day, Marshal Tallard – who, unlike his subordinates, had not been ransomed or exchanged – was taken to England and imprisoned in Nottingham until his release in 1711. The 1704 campaign lasted considerably longer than usual as the Allies sought to wring out maximum advantage. 
Realising that France was too powerful to be forced to make peace by a single victory, however, Eugene, Marlborough and Baden met to plan their next moves. For the following year the Duke proposed a campaign along the valley of the River Moselle to carry the war deep into France. This required the capture of the major fortress of Landau, which guarded the Rhine, and the towns of Trier and Trarbach on the Moselle itself. Trier was taken on 27 October and Landau fell on 23 November to the Margrave of Baden and Prince Eugene; with the fall of Trarbach on 20 December, the campaign season for 1704 came to an end. Marlborough returned to England on 14 December (O.S.) to the acclamation of Queen Anne and the country. In the first days of January the 110 cavalry standards and the 128 infantry colours that were taken during the battle were borne in procession to Westminster Hall. In February 1705, Queen Anne, who had made Marlborough a Duke in 1702, granted him the Park of Woodstock and promised a sum of £240,000 to build a suitable house as a gift from a grateful crown in recognition of his victory – a victory which British historian Sir Edward Shepherd Creasy considered one of the pivotal battles in history, writing – "Had it not been for Blenheim, all Europe might at this day suffer under the effect of French conquests resembling those of Alexander in extent and those of the Romans in durability." However, military historian John A. Lynn considers this claim unjustified, as Louis XIV never harboured such an objective; the campaign in Bavaria was intended to bring about only a favourable peace settlement, not domination over Europe. Battle of Ramillies The Battle of Ramillies, fought on 23 May 1706, was a battle of the War of the Spanish Succession. For the Grand Alliance – Austria, England, and the Dutch Republic – the battle had followed an indecisive campaign against the Bourbon armies of King Louis XIV of France in 1705. Although the Allies had captured Barcelona that year, they had been forced to abandon their campaign on the Moselle, had stalled in the Spanish Netherlands and suffered defeat in northern Italy. Yet despite his opponents' setbacks, Louis XIV wanted peace, but on reasonable terms. Because of this, as well as to maintain their momentum, the French and their allies took the offensive in 1706. The campaign began well for Louis XIV's generals: in Italy Marshal Vendôme defeated the Austrians at the Battle of Calcinato in April, while in Alsace Marshal Villars forced the Margrave of Baden back across the Rhine. Encouraged by these early gains, Louis XIV urged Marshal Villeroi to go over to the offensive in the Spanish Netherlands and, with victory, gain a 'fair' peace. Accordingly, the French Marshal set off from Leuven ("Louvain") at the head of 60,000 men and marched towards Tienen ("Tirlemont"), as if to threaten Zoutleeuw ("Léau"). Also determined to fight a major engagement, the Duke of Marlborough, commander-in-chief of Anglo-Dutch forces, assembled his army – some 62,000 men – near Maastricht, and marched past Zoutleeuw. With both sides seeking battle, they soon encountered each other on the dry ground between the Mehaigne and Petite Gheete rivers, close to the small village of Ramillies. In less than four hours Marlborough's Dutch, English, and Danish forces overwhelmed Villeroi's and Max Emanuel's Franco-Spanish-Bavarian army. 
The Duke's subtle moves and changes in emphasis during the battle – something his opponents failed to realise until it was too late – caught the French in a tactical vice. With their foe broken and routed, the Allies were able to fully exploit their victory. Town after town fell, including Brussels, Bruges and Antwerp; by the end of the campaign Villeroi's army had been driven from most of the Spanish Netherlands. With Prince Eugene's subsequent success at the Battle of Turin in northern Italy, the Allies had imposed the greatest loss of territory and resources that Louis XIV would suffer during the war. Thus, the year 1706 proved, for the Allies, to be an "annus mirabilis". After their disastrous defeat at Blenheim in 1704, the next year brought the French some respite. The Duke of Marlborough had intended the 1705 campaign – an invasion of France through the Moselle valley – to complete the work of Blenheim and persuade King Louis XIV to make peace but the plan had been thwarted by friend and foe alike. The reluctance of his Dutch allies to see their frontiers denuded of troops for another gamble in Germany had denied Marlborough the initiative but of far greater importance was the Margrave of Baden’s pronouncement that he could not join the Duke in strength for the coming offensive. This was in part due to the sudden switching of troops from the Rhine to reinforce Prince Eugene in Italy and part due to the deterioration of Baden’s health brought on by the re-opening of a severe foot wound he had received at the storming of the Schellenberg the previous year. Marlborough had to cope with the death of Emperor Leopold I in May and the accession of Joseph I, which unavoidably complicated matters for the Grand Alliance. The resilience of the French King and the efforts of his generals, also added to Marlborough’s problems. Marshal Villeroi, exerting considerable pressure on the Dutch commander, Count Overkirk, along the Meuse, took Huy on 10 June before pressing on towards Liège. With Marshal Villars sitting strong on the Moselle, the Allied commander – whose supplies had by now become very short – was forced to call off his campaign on 16 June. "What a disgrace for Marlborough," exulted Villeroi, "to have made false movements without any result!" With Marlborough’s departure north, the French transferred troops from the Moselle valley to reinforce Villeroi in Flanders, while Villars marched off to the Rhine. The Anglo-Dutch forces gained minor compensation for the failed Moselle campaign with the success at Elixheim and the crossing of the Lines of Brabant in the Spanish Netherlands (Huy was also retaken on 11 July) but a chance to bring the French to a decisive engagement eluded Marlborough. The year 1705 proved almost entirely barren for the Duke, whose military disappointments were only partly compensated by efforts on the diplomatic front where, at the courts of Düsseldorf, Frankfurt, Vienna, Berlin and Hanover, Marlborough sought to bolster support for the Grand Alliance and extract promises of prompt assistance for the following year's campaign. On 11 January 1706, Marlborough finally reached London at the end of his diplomatic tour but he had already been planning his strategy for the coming season. 
The first option (although it is debatable to what extent the Duke was committed to such an enterprise) was a plan to transfer his forces from the Spanish Netherlands to northern Italy; once there, he intended to link up with Prince Eugene in order to defeat the French and safeguard Savoy from being overrun. Savoy would then serve as a gateway into France by way of the mountain passes or an invasion with naval support along the Mediterranean coast via Nice and Toulon, in conjunction with redoubled Allied efforts in Spain. It seems that the Duke’s favoured scheme was to return to the Moselle valley (where Marshal Marsin had recently taken command of French forces) and once more attempt an advance into the heart of France. But these decisions soon became academic. Shortly after Marlborough landed in the Dutch Republic on 14 April, news arrived of serious Allied setbacks in the wider war. Determined to show the Grand Alliance that France was still resolute, Louis XIV prepared to launch a double surprise in Alsace and northern Italy. On the latter front Marshal Vendôme defeated the Imperial army at Calcinato on 19 April, pushing the Imperialists back in confusion (French forces were now in a position to prepare for the long-anticipated siege of Turin). In Alsace, Marshal Villars took Baden by surprise and captured Haguenau, driving him back across the Rhine in some disorder, thus creating a threat to Landau. With these reverses, the Dutch refused to contemplate Marlborough's ambitious march to Italy or any plan that denuded their borders of the Duke and their army. In the interest of coalition harmony, Marlborough prepared to campaign in the Low Countries. The Duke left The Hague on 9 May. "God knows I go with a heavy heart," he wrote six days later to his friend and political ally in England, Lord Godolphin, "for I have no hope of doing anything considerable, unless the French do what I am very confident they will not … " – in other words, court battle. On 17 May the Duke concentrated his Dutch and English troops at Tongeren, near Maastricht. The Hanoverians, Hessians and Danes, despite earlier undertakings, found, or invented, pressing reasons for withholding their support. Marlborough wrote an appeal to the Duke of Württemberg, the commander of the Danish contingent – "I send you this express to request your Highness to bring forward by a double march your cavalry so as to join us at the earliest moment …" Additionally, the King "in" Prussia, Frederick I, had kept his troops in quarters behind the Rhine while his personal disputes with Vienna and the States General at The Hague remained unresolved. Nevertheless, the Duke could think of no circumstances in which the French would leave their strong positions and attack his army, even if Villeroi was first reinforced by substantial transfers from Marsin’s command. But in this he had miscalculated. Although Louis XIV wanted peace, he wanted it on reasonable terms; for that, he needed victory in the field and to convince the Allies that his resources were by no means exhausted. Following the successes in Italy and along the Rhine, Louis XIV was now hopeful of similar results in Flanders. Far from standing on the defensive therefore – and unbeknown to Marlborough – Louis XIV was persistently goading his marshal into action. "[Villeroi] began to imagine," wrote St Simon, "that the King doubted his courage, and resolved to stake all at once in an effort to vindicate himself." 
Accordingly, on 18 May, Villeroi set off from Leuven at the head of 70 battalions, 132 squadrons and 62 cannon – comprising an overall force of some 60,000 troops – and crossed the river Dyle to seek battle with the enemy. Spurred on by his growing confidence in his ability to out-general his opponent, and by Versailles’ determination to avenge Blenheim, Villeroi and his generals anticipated success. Neither opponent expected the clash at the exact moment or place where it occurred. The French moved first to Tienen (as if to threaten Zoutleeuw, abandoned by the French in October 1705), before turning southwards, heading for Jodoigne – this line of march took Villeroi’s army towards the narrow aperture of dry ground between the Mehaigne and Petite Gheete rivers close to the small villages of Ramillies and Taviers; but neither commander quite appreciated how far his opponent had travelled. Villeroi still believed (on 22 May) the Allies were a full day’s march away when in fact they had camped near Corswaren waiting for the Danish squadrons to catch up; for his part, Marlborough deemed Villeroi still at Jodoigne when in reality he was now approaching the plateau of Mont St. André with the intention of pitching camp near Ramillies. However, the Prussian infantry was not there. Marlborough wrote to Lord Raby, the English resident at Berlin: "If it should please God to give us victory over the enemy, the Allies will be little obliged to the King [Frederick] for the success." The following day, at 01:00, Marlborough dispatched Cadogan, his Quartermaster-General, with an advanced guard to reconnoitre the same dry ground that Villeroi’s army was now heading toward, country that was well known to the Duke from previous campaigns. Two hours later the Duke followed with the main body: 74 battalions, 123 squadrons, 90 pieces of artillery and 20 mortars, totalling 62,000 troops. At about 08:00, after Cadogan had just passed Merdorp, his force made brief contact with a party of French hussars gathering forage on the edge of the plateau of Jandrenouille. After a brief exchange of shots the French retired and Cadogan's dragoons pressed forward. With a short lift in the mist, Cadogan soon discovered the smartly ordered lines of Villeroi’s advance guard some distance off; a galloper hastened back to warn Marlborough. Two hours later the Duke, accompanied by the Dutch field commander Field Marshal Overkirk, General Daniel Dopff, and the Allied staff, rode up to Cadogan, where on the horizon to the westward he could discern the massed ranks of the French army deploying for battle along the front. Marlborough later told Bishop Burnet that ‘the French army looked the best of any he had ever seen’. The battlefield of Ramillies is very similar to that of Blenheim, for here too there is an immense area of arable land unimpeded by woods or hedges. Villeroi’s right rested on the villages of Franquenée and Taviers, with the river Mehaigne protecting his flank. A large open plain lay between Taviers and Ramillies, but unlike Blenheim, there was no stream to hinder the cavalry. His centre was secured by Ramillies itself, lying on a slight eminence which gave distant views to the north and east. The French left flank was protected by broken country, and by a stream, the Petite Gheete, which runs deep between steep and slippery slopes. On the French side of the stream the ground rises to Offus, the village which, together with Autre-Eglise farther north, anchored Villeroi’s left flank. 
To the west of the Petite Gheete rises the plateau of Mont St. André; a second plain, the plateau of Jandrenouille – upon which the Anglo-Dutch army amassed – rises to the east. At 11:00, the Duke ordered the army to take standard battle formation. On the far right, towards Foulz, the British battalions and squadrons took up their posts in a double line near the Jeuche stream. The centre was formed by the mass of Dutch, German, Protestant Swiss and Scottish infantry – perhaps 30,000 men – facing Offus and Ramillies. Also facing Ramillies Marlborough placed a powerful battery of thirty 24-pounders, dragged into position by a team of oxen; further batteries were positioned overlooking the Petite Gheete. On their left, on the broad plain between Taviers and Ramillies – and where Marlborough thought the decisive encounter must take place – Overkirk drew the 69 squadrons of the Dutch and Danish horse, supported by 19 battalions of Dutch infantry and two artillery pieces. Meanwhile, Villeroi deployed his forces. In Taviers on his right, he placed two battalions of the Greder Suisse Régiment, with a smaller force forward in Franquenée; the whole position was protected by the boggy ground of the Mehaigne river, thus preventing an Allied flanking movement. In the open country between Taviers and Ramillies, he placed 82 squadrons under General de Guiscard supported by several interleaved brigades of French, Swiss and Bavarian infantry. Along the Ramillies–Offus–Autre Eglise ridge-line, Villeroi positioned Walloon and Bavarian infantry, supported by the Elector of Bavaria's 50 squadrons of Bavarian and Walloon cavalry placed behind on the plateau of Mont St. André. Ramillies, Offus and Autre-Eglise were all packed with troops and put in a state of defence, with alleys barricaded and walls loop-holed for muskets. Villeroi also positioned powerful batteries near Ramillies. These guns (some of which were of the three barrelled kind first seen at Elixheim the previous year) enjoyed good arcs of fire, able to fully cover the approaches of the plateau of Jandrenouille over which the Allied infantry would have to pass. Marlborough, however, noticed several important weaknesses in the French dispositions. Tactically, it was imperative for Villeroi to occupy Taviers on his right and Autre-Eglise on his left, but by adopting this posture he had been forced to over-extend his forces. Moreover, this disposition – concave in relation to the Allied army – gave Marlborough the opportunity to form a more compact line, drawn up in a shorter front between the ‘horns’ of the French crescent; when the Allied blow came it would be more concentrated and carry more weight. Additionally, the Duke’s disposition facilitated the transfer of troops across his front far more easily than his foe, a tactical advantage that would grow in importance as the events of the afternoon unfolded. Although Villeroi had the option of enveloping the flanks of the Allied army as they deployed on the plateau of Jandrenouille – threatening to encircle their army – the Duke correctly gauged that the characteristically cautious French commander was intent on a defensive battle along the ridge-line. At 13:00 the batteries went into action; a little later two Allied columns set out from the extremities of their line and attacked the flanks of the Franco-Bavarian army. To the south the Dutch Guards, under the command of Colonel Wertmüller, came forward with their two field guns to seize the hamlet of Franquenée. 
The small Swiss garrison in the village, shaken by the sudden onslaught and unsupported by the battalions to their rear, were soon forced back towards the village of Taviers. Taviers was of particular importance to the Franco-Bavarian position: it protected the otherwise unsupported flank of General de Guiscard’s cavalry on the open plain, while at the same time, it allowed the French infantry to pose a threat to the flanks of the Dutch and Danish squadrons as they came forward into position. But hardly had the retreating Swiss rejoined their comrades in that village when the Dutch Guards renewed their attack. The fighting amongst the alleys and cottages soon deteriorated into a fierce bayonet and clubbing "mêlée", but the superiority in Dutch firepower soon told. The accomplished French officer, Colonel de la Colonie, standing on the plain nearby, remembered – "this village was the opening of the engagement, and the fighting there was almost as murderous as the rest of the battle put together." By about 15:00 the Swiss had been pushed out of the village into the marshes beyond. Villeroi’s right flank fell into chaos and was now open and vulnerable. Alerted to the situation, de Guiscard ordered an immediate attack with 14 squadrons of French dragoons currently stationed in the rear. Two other battalions of the Greder Suisse Régiment were also sent, but the attack was poorly co-ordinated and consequently went in piecemeal. The Anglo-Dutch commanders now sent dismounted Dutch dragoons into Taviers, which, together with the Guards and their field guns, poured concentrated musketry- and canister-fire into the advancing French troops. Colonel d’Aubigni, leading his regiment, fell mortally wounded. As the French ranks wavered, the leading squadrons of Württemberg’s Danish horse – now unhampered by enemy fire from either village – were also sent into the attack and fell upon the exposed flank of the Franco-Swiss infantry and dragoons. De la Colonie, with his Grenadiers Rouge regiment, together with the Cologne Guards who were brigaded with them, was now ordered forward from his post south of Ramillies to support the faltering counter-attack on the village. But on his arrival, all was chaos – "Scarcely had my troops got over when the dragoons and Swiss who had preceded us, came tumbling down upon my battalions in full flight … My own fellows turned about and fled along with them." De La Colonie managed to rally some of his grenadiers, together with the remnants of the French dragoons and Greder Suisse battalions, but it was an entirely peripheral operation, offering only fragile support for Villeroi’s right flank. While the attack on Taviers went on, the Earl of Orkney launched his first line of English across the Petite Gheete in a determined attack against the barricaded villages of Offus and Autre-Eglise on the Allied right. Villeroi, posting himself near Offus, watched anxiously the redcoats' advance, mindful of the counsel he had received on 6 May from Louis XIV – "Have particular care to that part of the line which will endure the first shock of the English troops." Heeding this advice, the French commander began to transfer battalions from his centre to reinforce the left, drawing more foot from the already weakened right to replace them. As the English battalions descended the gentle slope of the Petite Gheete valley, struggling through the boggy stream, they were met by Major General de la Guiche’s disciplined Walloon infantry sent forward from around Offus. 
After concentrated volleys, exacting heavy casualties on the redcoats, the Walloons reformed back to the ridgeline in good order. The English took some time to reform their ranks on the dry ground beyond the stream and press on up the slope towards the cottages and barricades on the ridge. The vigour of the English assault, however, was such that they threatened to break through the line of the villages and out onto the open plateau of Mont St André beyond. This was potentially dangerous for the Allied infantry who would then be at the mercy of the Elector’s Bavarian and Walloon squadrons patiently waiting on the plateau for the order to move. Although Henry Lumley’s English cavalry had managed to cross the marshy ground around the Petite Gheete, it was soon evident to Marlborough that sufficient cavalry support would not be practicable and that the battle could not be won on the Allied right. The Duke, therefore, called off the attack against Offus and Autre-Eglise. To make sure that Orkney obeyed his order to withdraw, Marlborough sent his Quartermaster-General in person with the command. Despite Orkney’s protestations, Cadogan insisted on compliance and, reluctantly, Orkney gave the word for his troops to fall back to their original positions on the edge of the plateau of Jandrenouille. It is still not clear how far Orkney’s advance was planned only as a feint; according to historian David Chandler it is probably more accurate to surmise that Marlborough launched Orkney in a serious probe with a view to sounding out the possibilities of the sector. Nevertheless, the attack had served its purpose. Villeroi had given his personal attention to that wing and strengthened it with large bodies of horse and foot that ought to have been taking part in the decisive struggle south of Ramillies. Meanwhile, the Dutch assault on Ramillies was gaining pace. Marlborough’s younger brother, General of Infantry, Charles Churchill, ordered four brigades of foot to attack the village. The assault consisted of 12 battalions of Dutch infantry commanded by Major Generals Schultz and Spaar; two brigades of Saxons under Count Schulenburg; a Scottish brigade in Dutch service led by the 2nd Duke of Argyle; and a small brigade of Protestant Swiss. The 20 French and Bavarian battalions in Ramillies, supported by the Irish dragoons who had left Ireland in the Flight of the Wild Geese to join Clare's Dragoons and a small brigade of Cologne and Bavarian Guards under the Marquis de Maffei, put up a determined defence, initially driving back the attackers with severe losses as commemorated in the song "Clare's Dragoons":

When on Ramillies' bloody field
The baffled French were forced to yield,
The victor Saxon backward reeled
Before the charge of Clare's Dragoons.
"Viva là, the new brigade!"
"Viva là the old one too!"
"Viva là, the Rose shall fade"
"The Shamrock shine forever new!"

Seeing that Schultz and Spaar were faltering, Marlborough now ordered Orkney’s second-line British and Danish battalions (who had not been used in the assault on Offus and Autre-Eglise) to move south towards Ramillies. Shielded as they were from observation by a slight fold in the land, their commander, Brigadier-General Van Pallandt, ordered the regimental colours to be left in place on the edge of the plateau to convince their opponents they were still in their initial position. 
Therefore, unbeknown to the French who remained oblivious to the Allies’ real strength and intentions on the opposite side of the Petite Gheete, Marlborough was throwing his full weight against Ramillies and the open plain to the south. Villeroi meanwhile, was still moving more reserves of infantry in the opposite direction towards his left flank; crucially, it would be some time before the French commander noticed the subtle change in emphasis of the Allied dispositions. At around 15:30, Overkirk advanced his massed squadrons on the open plain in support of the infantry attack on Ramillies. Overkirk's squadrons – 48 Dutch, supported on their left by 21 Danish – steadily advanced towards the enemy (taking care not to prematurely tire the horses), before breaking into a trot to gain the impetus for their charge. The Marquis de Feuquières writing after the battle described the scene – "They advanced in four lines … As they approached they advanced their second and fourth lines into the intervals of their first and third lines; so that when they made their advance upon us, they formed only one front, without any intermediate spaces." The initial clash favoured the Dutch and Danish squadrons. The disparity of numbers – exacerbated by Villeroi stripping their ranks of infantry to reinforce his left flank – enabled Overkirk's cavalry to throw the first line of French horse back in some disorder towards their second-line squadrons. This line also came under severe pressure and, in turn, was forced back to their third-line of cavalry and the few battalions still remaining on the plain. But these French horsemen were amongst the best in Louis XIV’s army – the "Maison du Roi", supported by four elite squadrons of Bavarian Cuirassiers. Ably led by de Guiscard, the French cavalry rallied, thrusting back the Allied squadrons in successful local counterattacks. On Overkirk’s right flank, close to Ramillies, ten of his squadrons suddenly broke ranks and were scattered, riding headlong to the rear to recover their order, leaving the left flank of the Allied assault on Ramillies dangerously exposed. Notwithstanding the lack of infantry support, de Guiscard threw his cavalry forward in an attempt to split the Allied army in two. A crisis threatened the centre, but from his vantage point Marlborough was at once aware of the situation. The Allied commander now summoned the cavalry on the right wing to reinforce his centre, leaving only the English squadrons in support of Orkney. Thanks to a combination of battle-smoke and favourable terrain, his redeployment went unnoticed by Villeroi who made no attempt to transfer any of his own 50 unused squadrons. While he waited for the fresh reinforcements to arrive, Marlborough flung himself into the "mêlée", rallying some of the Dutch cavalry who were in confusion. But his personal involvement nearly led to his undoing. A number of French horsemen, recognising the Duke, came surging towards his party. Marlborough’s horse tumbled and the Duke was thrown – "Milord Marlborough was rid over," wrote Orkney some time later. It was a critical moment of the battle. "Major-General Murray," recalled one eye witness, " … seeing him fall, marched up in all haste with two Swiss battalions to save him and stop the enemy who were hewing all down in their way." 
Fortunately Marlborough’s newly appointed aide-de-camp, Richard Molesworth, galloped to the rescue, mounted the Duke on his horse and made good their escape, before Murray’s disciplined ranks threw back the pursuing French troopers. After a brief pause, Marlborough’s equerry, Colonel Bringfield (or Bingfield), led up another of the Duke’s spare horses; but while assisting him onto his mount, the unfortunate Bringfield was hit by an errant cannonball that sheared off his head. One account has it that the cannonball flew between the Captain-General’s legs before hitting the unfortunate colonel, whose torso fell at Marlborough’s feet – a moment subsequently depicted in a lurid set of contemporary playing cards. Nevertheless, the danger passed, enabling the Duke to attend to the positioning of the cavalry reinforcements feeding down from his right flank – a change of which Villeroi remained blissfully unaware. The time was about 16:30, and the two armies were in close contact across the whole front: from the skirmishing in the marshes in the south, through the vast cavalry battle on the open plain, to the fierce struggle for Ramillies at the centre, and to the north, where, around the cottages of Offus and Autre-Eglise, Orkney and de la Guiche faced each other across the Petite Gheete ready to renew hostilities. The arrival of the transferring squadrons now began to tip the balance in favour of the Allies. Tired and suffering a growing list of casualties, Guiscard’s squadrons battling on the plain found that their numerical inferiority at last began to tell. After earlier failing to hold or retake Franquenée and Taviers, Guiscard’s right flank had become dangerously exposed and a fatal gap had opened on the right of their line. Taking advantage of this breach, Württemberg’s Danish cavalry now swept forward, wheeling to penetrate the flank of the Maison du Roi whose attention was almost entirely fixed on holding back the Dutch. Sweeping forwards, virtually without resistance, the 21 Danish squadrons reformed behind the French around the area of the Tomb of Ottomond, facing north across the plateau of Mont St André towards the exposed flank of Villeroi’s army. The final Allied reinforcements for the cavalry contest to the south were at last in position; Marlborough’s superiority on the left could no longer be denied, and his fast-moving plan took hold of the battlefield. Now, far too late, Villeroi tried to redeploy his 50 unused squadrons, but a desperate attempt to form line facing south, stretching from Offus to Mont St André, foundered amongst the baggage and tents of the French camp carelessly left there after the initial deployment. The Allied commander ordered his cavalry forward against the now heavily outnumbered French and Bavarian horsemen. De Guiscard’s right flank, without proper infantry support, could no longer resist the onslaught and, turning their horses northwards, they broke and fled in complete disorder. Even the squadrons currently being scrambled together by Villeroi behind Ramillies could not withstand the onslaught. "We had not got forty yards on our retreat," remembered Captain Peter Drake, an Irishman serving with the French, "when the words ‘sauve qui peut’ went through the great part, if not the whole army, and put all to confusion." In Ramillies the Allied infantry, now reinforced by the English troops brought down from the north, at last broke through. 
The Régiment de Picardie stood their ground but were caught between Colonel Borthwick’s Scots-Dutch regiment and the English reinforcements. Borthwick was killed, as was Charles O’Brien, the Irish Viscount Clare in French service, fighting at the head of his regiment. The Marquis de Maffei attempted one last stand with his Bavarian and Cologne Guards, but it proved in vain. Noticing a rush of horsemen fast approaching from the south, he later recalled – " … I went towards the nearest of these squadrons to instruct their officer, but instead of being listened to [I] was immediately surrounded and called upon to ask for quarter." The roads leading north and west were choked with fugitives. Orkney now sent his English troops back across the Petite Gheete stream to once again storm Offus, where de la Guiche’s infantry had begun to drift away in the confusion. To the right of the infantry, Lord John Hay’s ‘Scots Greys’ also picked their way across the stream and charged the Régiment du Roi within Autre-Eglise. "Our dragoons," wrote John Deane, "pushing into the village … made terrible slaughter of the enemy." The Bavarian Horse Grenadiers and the Electoral Guards withdrew and formed a shield about Villeroi and the Elector but were scattered by Lumley’s cavalry. Stuck in the mass of fugitives fleeing the battlefield, the French and Bavarian commanders narrowly escaped capture by General Cornelius Wood who, unaware of their identity, had to content himself with the seizure of two Bavarian Lieutenant-Generals. Far to the south, the remnants of de la Colonie’s brigade headed in the opposite direction towards the French-held fortress of Namur. The retreat became a rout. Individual Allied commanders drove their troops forward in pursuit, allowing their beaten enemy no chance to recover. Soon the Allied infantry could no longer keep up, but their cavalry were off the leash, heading through the gathering night for the crossings on the Dyle river. At last, however, Marlborough called a halt to the pursuit shortly after midnight near Meldert, some distance from the field. "It was indeed a truly shocking sight to see the miserable remains of this mighty army," wrote Captain Drake, "… reduced to a handful." What was left of Villeroi’s army was now broken in spirit; the imbalance of the casualty figures amply demonstrates the extent of the disaster for Louis XIV’s army (see below). In addition, hundreds of French soldiers were fugitives, many of whom would never remuster to the colours. Villeroi also lost 52 artillery pieces and his entire engineer pontoon train. In the words of Marshal Villars, the French defeat at Ramillies was – "The most shameful, humiliating and disastrous of routs." Town after town now succumbed to the Allies. Leuven fell on 25 May 1706; three days later, the Allies entered Brussels, the capital of the Spanish Netherlands. Marlborough realised the great opportunity created by the early victory of Ramillies: "We now have the whole summer before us," wrote the Duke from Brussels to Robert Harley, "and with the blessing of God I shall make the best use of it." Malines, Lierre, Ghent, Alost, Damme, Oudenaarde, Bruges, and on 6 June Antwerp, all subsequently fell to Marlborough’s victorious army and, like Brussels, proclaimed the Austrian candidate for the Spanish throne, the Archduke Charles, as their sovereign. Villeroi was helpless to arrest the process of collapse. 
When Louis XIV learnt of the disaster, he recalled Marshal Vendôme from northern Italy to take command in Flanders; but it would be weeks before the command changed hands. As news spread of the Allies’ triumph, the Prussian, Hessian and Hanoverian contingents, long delayed by their respective rulers, eagerly joined the pursuit of the broken French and Bavarian forces. "This," wrote Marlborough wearily, "I take to be owing to our late success." Meanwhile, Overkirk took the port of Ostend on 4 July, thus opening a direct route to the English Channel for communication and supply, but the Allies were making scant progress against Dendermonde, whose governor, the Marquis de Valée, was stubbornly resisting. Only later, when Cadogan and Churchill went to take charge, did the town’s defences begin to fail. Vendôme formally took over command in Flanders on 4 August; Villeroi would never again receive a major command – "I cannot foresee a happy day in my life save only that of my death." Louis XIV was more forgiving to his old friend – "At our age, Marshal, we must no longer expect good fortune." In the meantime, Marlborough invested the elaborate fortress of Menin which, after a costly siege, capitulated on 22 August. Dendermonde finally succumbed on 6 September, followed by Ath – the last conquest of 1706 – on 2 October. By the time Marlborough had closed down the Ramillies campaign, he had denied the French most of the Spanish Netherlands west of the Meuse and north of the Sambre – it was an unsurpassed operational triumph for the English Duke, but once again it was not decisive, as these gains did not defeat France. The immediate question for the Allies now was how to deal with the Spanish Netherlands, a subject on which the Austrians and the Dutch were diametrically opposed. Emperor Joseph I, acting on behalf of his younger brother King ‘Charles III’, absent in Spain, claimed that reconquered Brabant and Flanders should be placed under the immediate control of a governor named by himself. The Dutch, however, who had supplied the major share of the troops and money to secure the victory (the Austrians had produced nothing of either), claimed the government of the region till the war was over, and that after the peace they should continue to garrison Barrier Fortresses stronger than those which had fallen so easily to Louis XIV’s forces in 1701. Marlborough mediated between the two parties but favoured the Dutch position. To sway the Duke’s opinion, the Emperor offered Marlborough the governorship of the Spanish Netherlands. It was a tempting offer, but in the name of Allied unity, it was one he refused. In the end England and the Dutch Republic took control of the newly won territory for the duration of the war, after which it was to be handed over to the direct rule of ‘Charles III’, subject to the reservation of a Dutch Barrier, the extent and nature of which had yet to be settled. Meanwhile, on the Upper Rhine, Villars had been forced onto the defensive as battalion after battalion had been sent north to bolster collapsing French forces in Flanders; there was now no possibility of his undertaking the re-capture of Landau. Further good news for the Allies arrived from northern Italy where, on 7 September, Prince Eugene had routed a French army before the Piedmontese capital, Turin, driving the Franco-Spanish forces from northern Italy. 
Only from Spain did Louis XIV receive any good news: there, Das Minas and Galway had been forced to retreat from Madrid towards Valencia, allowing Philip V to re-enter his capital on 4 October. All in all, though, the situation had changed considerably, and Louis XIV began to look for ways to end what was fast becoming a ruinous war for France. For Queen Anne also, the Ramillies campaign had one overriding significance – "Now we have God be thanked so hopeful a prospect of peace." Instead of continuing the momentum of victory, however, cracks in Allied unity would enable Louis XIV to reverse some of the major setbacks suffered at Turin and Ramillies. The total number of French casualties cannot be calculated precisely, so complete was the collapse of the Franco-Bavarian army that day. David G. Chandler’s "Marlborough as Military Commander" and "A Guide to the Battlefields of Europe" are consistent with regard to French casualty figures, i.e. 12,000 dead and wounded plus some 7,000 taken prisoner. James Falkner, in "Ramillies 1706: Year of Miracles," also notes 12,000 dead and wounded and states 'up to 10,000' taken prisoner. In "The Collins Encyclopaedia of Military History", Dupuy puts Villeroi’s dead and wounded at 8,000, with a further 7,000 captured. Neil Litten, using French archives, suggests 7,000 killed and wounded and 6,000 captured, with a further 2,000 choosing to desert. John Millner’s memoir – "Compendious Journal" (1733) – is more specific, recording that 12,087 of Villeroi’s army were killed or wounded, with another 9,729 taken prisoner. In "Marlborough", however, Correlli Barnett puts the total casualty figure as high as 30,000 – 15,000 dead and wounded with an additional 15,000 taken captive. Trevelyan estimates Villeroi’s casualties at 13,000, but adds, 'his losses by desertion may have doubled that number'. La Colonie omits a casualty figure in his "Chronicles of an old Campaigner"; but Saint-Simon in his "Memoirs" states 4,000 killed, adding 'many others were wounded and many important persons were taken prisoner'. Voltaire, however, in "Histoire du siècle de Louis XIV" records 'the French lost there twenty thousand men'. Battleship A battleship is a large armored warship with a main battery consisting of large caliber guns. During the late 19th and early 20th centuries the battleship was the most powerful type of warship, and a fleet of battleships was considered vital for any nation that desired to maintain command of the sea. The word "battleship" was coined around 1794 and is a contraction of the phrase "line-of-battle ship", the dominant wooden warship during the Age of Sail. The term came into formal use in the late 1880s to describe a type of ironclad warship, now referred to by historians as pre-dreadnought battleships. In 1906, the commissioning of HMS "Dreadnought" heralded a revolution in battleship design. Subsequent battleship designs, influenced by HMS "Dreadnought", were referred to as "dreadnoughts", though the term eventually became obsolete as they became the only type of battleship in common use. Battleships were a symbol of naval dominance and national might, and for decades the battleship was a major factor in both diplomacy and military strategy. A global arms race in battleship construction began in Europe in the 1890s and culminated at the decisive Battle of Tsushima in 1905, the outcome of which significantly influenced the design of HMS "Dreadnought". The launch of "Dreadnought" in 1906 commenced a new naval arms race. 
Three major fleet actions between steel battleships took place: the decisive battles of the Yellow Sea (1904) and Tsushima (1905) during the Russo-Japanese War, and the inconclusive Battle of Jutland (1916) during the First World War. Jutland was the largest naval battle and the only full-scale clash of battleships in the war, and it was the last major battle fought primarily by battleships in world history. The Naval Treaties of the 1920s and 1930s limited the number of battleships, though technical innovation in battleship design continued. Both the Allied and Axis powers built battleships during World War II, though the increasing importance of the aircraft carrier meant that the battleship played a less important role than had been expected. The value of the battleship has been questioned, even during its heyday. There were few of the decisive fleet battles that battleship proponents expected, and used to justify the vast resources spent on building battlefleets. Despite their huge firepower and protection, battleships were increasingly vulnerable to much smaller and relatively inexpensive weapons: initially the torpedo and the naval mine, and later aircraft and the guided missile. The growing range of naval engagements led to the aircraft carrier replacing the battleship as the leading capital ship during World War II, with the last battleship being launched in 1944. Four battleships were retained by the United States Navy until the end of the Cold War for fire support purposes and were last used in combat during the Gulf War in 1991. The last battleships were stricken from the U.S. Naval Vessel Register in the 2000s. Today, with the exception of the Russian "Kirov"-class battlecruisers, aircraft carriers are the only type of capital ship in service. A ship of the line was a large, unarmored wooden sailing ship which mounted a battery of up to 120 smoothbore guns and carronades. The ship of the line developed gradually over centuries and, apart from growing in size, it changed little between the adoption of line of battle tactics in the early 17th century and the end of the sailing battleship's heyday in the 1830s. From 1794, the alternative term 'line of battle ship' was contracted (informally at first) to 'battle ship' or 'battleship'. The sheer number of guns fired broadside meant a sail battleship could wreck any wooden enemy, holing her hull, knocking down masts, wrecking her rigging, and killing her crew. However, the effective range of the guns was as little as a few hundred yards, so the battle tactics of sailing ships depended in part on the wind. The first major change to the ship of the line concept was the introduction of steam power as an auxiliary propulsion system. Steam power was gradually introduced to the navy in the first half of the 19th century, initially for small craft and later for frigates. The French Navy introduced steam to the line of battle with the 90-gun "Napoléon" in 1850, the first true steam battleship. "Napoléon" was armed as a conventional ship-of-the-line, but her steam engines could propel her regardless of the wind conditions. This was a potentially decisive advantage in a naval engagement. The introduction of steam accelerated the growth in size of battleships. 
France and the United Kingdom were the only countries to develop fleets of wooden steam screw battleships, although several other navies operated small numbers of screw battleships, including Russia (9), the Ottoman Empire (3), Sweden (2), Naples (1), Denmark (1) and Austria (1). The adoption of steam power was only one of a number of technological advances which revolutionized warship design in the 19th century. The ship of the line was overtaken by the ironclad: powered by steam, protected by metal armor, and armed with guns firing high-explosive shells. Guns that fired explosive or incendiary shells were a major threat to wooden ships, and these weapons quickly became widespread after the introduction of 8-inch shell guns as part of the standard armament of French and American line-of-battle ships in 1841. In the Crimean War, six line-of-battle ships and two frigates of the Russian Black Sea Fleet destroyed seven Turkish frigates and three corvettes with explosive shells at the Battle of Sinop in 1853. Later in the war, French ironclad floating batteries used similar weapons against the defenses at the Battle of Kinburn. Nevertheless, wooden-hulled ships stood up comparatively well to shells, as shown in the 1866 Battle of Lissa, where a modern Austrian steam two-decker ranged across a confused battlefield, rammed an Italian ironclad and took 80 hits from Italian ironclads, many of which were shells, but including at least one 300-pound shot at point-blank range. Despite losing her bowsprit and her foremast, and being set on fire, she was ready for action again the very next day. The development of high-explosive shells made the use of iron armor plate on warships necessary. In 1859 France launched "Gloire", the first ocean-going ironclad warship. She had the profile of a ship of the line, cut to one deck due to weight considerations. Although made of wood and reliant on sail for most journeys, "Gloire" was fitted with a propeller, and her wooden hull was protected by a layer of thick iron armor. "Gloire" prompted further innovation from the Royal Navy, anxious to prevent France from gaining a technological lead. The superior armored frigate followed "Gloire" by only 14 months, and both nations embarked on a program of building new ironclads and converting existing screw ships of the line to armored frigates. Within two years, Italy, Austria, Spain and Russia had all ordered ironclad warships, and by the time of the famous clash of USS "Monitor" and CSS "Virginia" at the Battle of Hampton Roads, at least eight navies possessed ironclad ships. Navies experimented with the positioning of guns, in turrets (like the USS "Monitor"), central-batteries or barbettes, or with the ram as the principal weapon. As steam technology developed, masts were gradually removed from battleship designs. By the mid-1870s steel was used as a construction material alongside iron and wood. The French Navy's "Redoutable", laid down in 1873 and launched in 1876, was a central battery and barbette warship which became the first battleship in the world to use steel as the principal building material. The term "battleship" was officially adopted by the Royal Navy in the re-classification of 1892. By the 1890s, there was an increasing similarity between battleship designs, and the type that later became known as the 'pre-dreadnought battleship' emerged. These were heavily armored ships, mounting a mixed battery of guns in turrets, and without sails. 
The typical first-class battleship of the pre-dreadnought era displaced 15,000 to 17,000 tons and carried an armament of four heavy guns in two turrets fore and aft, with a mixed-caliber secondary battery amidships around the superstructure. An early design with superficial similarity to the pre-dreadnought is the British HMS "Devastation" of 1871. The slow-firing main guns were the principal weapons for battleship-to-battleship combat. The intermediate and secondary batteries had two roles. Against major ships, it was thought a 'hail of fire' from quick-firing secondary weapons could distract enemy gun crews by inflicting damage to the superstructure, and they would be more effective against smaller ships such as cruisers. Smaller guns (12-pounders and smaller) were reserved for protecting the battleship against the threat of torpedo attack from destroyers and torpedo boats. The beginning of the pre-dreadnought era coincided with Britain reasserting her naval dominance. For many years previously, Britain had taken naval supremacy for granted. Expensive naval projects were criticised by political leaders of all inclinations. However, in 1888 a war scare with France and the build-up of the Russian navy gave added impetus to naval construction, and the British Naval Defence Act of 1889 laid down a new fleet including eight new battleships. The principle that Britain's navy should be more powerful than the two next most powerful fleets combined was established. This policy was designed to deter France and Russia from building more battleships, but both nations nevertheless expanded their fleets with more and better pre-dreadnoughts in the 1890s. In the last years of the 19th century and the first years of the 20th, the escalation in the building of battleships became an arms race between Britain and Germany. The German naval laws of 1898 and 1900 authorised a fleet of 38 battleships, a vital threat to the balance of naval power. Britain answered with further shipbuilding, but by the end of the pre-dreadnought era, British supremacy at sea had markedly weakened. In 1883, the United Kingdom had 38 battleships, twice as many as France and almost as many as the rest of the world put together. By 1897, Britain's lead was far smaller due to competition from France, Germany, and Russia, as well as the development of pre-dreadnought fleets in Italy, the United States and Japan. The Ottoman Empire, Spain, Sweden, Denmark, Norway, the Netherlands, Chile and Brazil all had second-rate fleets led by armored cruisers, coastal defence ships or monitors. Pre-dreadnoughts continued the technical innovations of the ironclad. Turrets, armor plate, and steam engines were all improved over the years, and torpedo tubes were introduced. A small number of designs, including two American classes, experimented with all or part of the 8-inch intermediate battery superimposed over the 12-inch primary. Results were poor: recoil factors and blast effects resulted in the 8-inch battery being completely unusable, and the inability to train the primary and intermediate armaments on different targets led to significant tactical limitations. Even though such innovative designs saved weight (a key reason for their inception), they proved too cumbersome in practice. In 1906, the British Royal Navy launched the revolutionary HMS "Dreadnought". Created as a result of pressure from Admiral Sir John ("Jackie") Fisher, HMS "Dreadnought" made existing battleships obsolete. 
Combining an "all-big-gun" armament of ten 12-inch (305 mm) guns with unprecedented speed (from steam turbine engines) and protection, she prompted navies worldwide to re-evaluate their battleship building programs. While the Japanese had laid down an all-big-gun battleship in 1904 and the concept of an all-big-gun ship had been in circulation for several years, it had yet to be validated in combat. "Dreadnought" sparked a new arms race, principally between Britain and Germany but reflected worldwide, as the new class of warships became a crucial element of national power. Technical development continued rapidly through the dreadnought era, with steep changes in armament, armor and propulsion. Ten years after "Dreadnought"'s commissioning, much more powerful ships, the super-dreadnoughts, were being built. In the first years of the 20th century, several navies worldwide experimented with the idea of a new type of battleship with a uniform armament of very heavy guns. Admiral Vittorio Cuniberti, the Italian Navy's chief naval architect, articulated the concept of an all-big-gun battleship in 1903. When the "Regia Marina" did not pursue his ideas, Cuniberti wrote an article in "Jane's" proposing an "ideal" future British battleship, a large armored warship of 17,000 tons, armed solely with a single calibre main battery (twelve 12-inch [305 mm] guns), carrying belt armor, and capable of 24 knots (44 km/h). The Russo-Japanese War provided operational experience to validate the "all-big-gun" concept. At the Yellow Sea and Tsushima, pre-dreadnoughts exchanged volleys at ranges of 7,600–12,000 yd (7 to 11 km), beyond the range of the secondary batteries. It is often held that these engagements demonstrated the importance of the heavy gun over its smaller counterparts, though some historians take the view that secondary batteries were just as important as the larger weapons. In Japan, the two battleships of the 1903–04 Programme were the first to be laid down as all-big-gun designs, with eight 12-inch guns. However, the design had armor which was considered too thin, demanding a substantial redesign. The financial pressures of the Russo-Japanese War and the short supply of 12-inch guns, which had to be imported from Britain, meant these ships were completed with a mixed 10- and 12-inch armament. The 1903–04 design also retained traditional triple-expansion steam engines. As early as 1904, Jackie Fisher had been convinced of the need for fast, powerful ships with an all-big-gun armament. If Tsushima influenced his thinking, it was to persuade him of the need to standardise on big guns. Fisher's concerns were submarines and destroyers equipped with torpedoes, then threatening to outrange battleship guns, making speed imperative for capital ships. Fisher's preferred option was his brainchild, the battlecruiser: lightly armored but heavily armed with eight 12-inch guns and propelled by steam turbines. It was to prove this revolutionary technology that "Dreadnought" was designed in January 1905, laid down in October 1905 and sped to completion by 1906. She carried ten 12-inch guns, had an 11-inch armor belt, and was the first large ship powered by turbines. She mounted her guns in five turrets: three on the centerline (one forward, two aft) and two on the wings, giving her at her launch twice the broadside of any other warship. She retained a number of 12-pound (3-inch, 76 mm) quick-firing guns for use against destroyers and torpedo-boats. 
Her armor was heavy enough for her to go head-to-head with any other ship in a gun battle, and conceivably win. "Dreadnought" was to have been followed by three "Invincible"-class battlecruisers, their construction delayed to allow lessons from "Dreadnought" to be used in their design. While Fisher may have intended "Dreadnought" to be the last Royal Navy battleship, the design was so successful he found little support for his plan to switch to a battlecruiser navy. Although there were some problems with the ship (the wing turrets had limited arcs of fire and strained the hull when firing a full broadside, and the top of the thickest armor belt lay below the waterline at full load), the Royal Navy promptly commissioned another six ships to a similar design in the "Bellerophon" and "St Vincent" classes. An American design, "South Carolina", authorized in 1905 and laid down in December 1906, was another of the first dreadnoughts, but she and her sister, "Michigan", were not launched until 1908. Both used triple-expansion engines and had a superior layout of the main battery, dispensing with "Dreadnought"'s wing turrets. They thus retained the same broadside, despite having two fewer guns. In 1897, before the revolution in design brought about by HMS "Dreadnought", the Royal Navy had 62 battleships in commission or building, a lead of 26 over France and 50 over Germany. In 1906, the Royal Navy owned the field with "Dreadnought". The new class of ship prompted an arms race with major strategic consequences. Major naval powers raced to build their own dreadnoughts. Possession of modern battleships was not only seen as vital to naval power, but also, as with nuclear weapons after WWII, represented a nation's standing in the world. Germany, France, Japan, Italy, Austria, and the United States all began dreadnought programmes, while Ottoman Turkey, Argentina, Russia, Brazil, and Chile commissioned dreadnoughts to be built in British and American yards. The battleship, particularly the dreadnought, was the dominant naval weapon of the World War I era. There were few serious challenges at that time. The most significant naval battles of World War I, such as Jutland (May 31, 1916 – June 1, 1916), were fought by battleships and their battlecruiser cousins. By virtue of geography, the Royal Navy was able to use her large battleship and battlecruiser fleet to impose a strict and successful naval blockade of Germany and kept Germany's smaller battleship fleet bottled up in the North Sea: only narrow channels led to the Atlantic Ocean and these were guarded by British forces. Both sides were aware that, because of the greater number of British dreadnoughts, a full fleet engagement would be likely to result in a British victory. The German strategy was therefore to try to provoke an engagement on their terms: either to induce a part of the Grand Fleet to enter battle alone, or to fight a pitched battle near the German coastline, where friendly minefields, torpedo-boats and submarines could be used to even the odds. This did not happen, however, due in large part to the necessity to keep submarines for the Atlantic campaign. Submarines were the only vessels in the Imperial German Navy able to break out and raid British commerce in force, but even though they sank many merchant ships, they could not successfully counter-blockade the United Kingdom; the Royal Navy successfully adopted convoy tactics to combat Germany's submarine counter-blockade and eventually defeated it. 
This was in stark contrast to Britain's successful battleship blockade of Germany, which was a major cause of Germany's economic collapse in 1918. The first two years of war saw the Royal Navy's battleships and battlecruisers regularly "sweep" the North Sea, making sure that no German ships could get in or out. Only a few German surface ships that were already at sea, such as the famous light cruiser "Emden", were able to raid commerce. Even some of those that did manage to get out were hunted down by battlecruisers, as in the Battle of the Falklands, December 7, 1914. The results of sweeping actions in the North Sea were battles such as the Heligoland Bight and Dogger Bank and German raids on the English coast, all of which were attempts by the Germans to lure out portions of the Grand Fleet in an attempt to defeat the Royal Navy in detail. On May 31, 1916, a further attempt to draw British ships into battle on German terms resulted in a clash of the battlefleets in the Battle of Jutland. The German fleet withdrew to port after two short encounters with the British fleet. Less than two months later, the Germans once again attempted to draw portions of the Grand Fleet into battle. The resulting Action of 19 August 1916 proved inconclusive. This reinforced German determination not to engage in a fleet to fleet battle. In the other naval theatres there were no decisive pitched battles. In the Black Sea, engagement between Russian and Ottoman battleships was restricted to skirmishes. In the Baltic Sea, action was largely limited to the raiding of convoys, and the laying of defensive minefields; the only significant clash of battleship squadrons there was the Battle of Moon Sound at which one Russian pre-dreadnought was lost. The Adriatic was in a sense the mirror of the North Sea: the Austro-Hungarian dreadnought fleet remained bottled up by the British and French blockade. And in the Mediterranean, the most important use of battleships was in support of the amphibious assault on Gallipoli. In September 1914, the threat posed to surface ships by German U-boats was confirmed by successful attacks on British cruisers, including the sinking of three British armored cruisers by the German submarine "U-9" in less than an hour. The British super-dreadnought "Audacious" soon followed suit, striking a mine laid by a German U-boat in October 1914 and sinking. The threat that German U-boats posed to British dreadnoughts was enough to cause the Royal Navy to change their strategy and tactics in the North Sea to reduce the risk of U-boat attack. Further near-misses from submarine attacks on battleships and casualties amongst cruisers led to growing concern in the Royal Navy about the vulnerability of battleships. As the war wore on, however, it turned out that whilst submarines did prove to be a very dangerous threat to older pre-dreadnought battleships, as shown in the Dardanelles, where one was caught by a British submarine and two others were torpedoed by "U-21", the threat posed to dreadnought battleships proved to have been largely a false alarm. HMS "Audacious" turned out to be the only dreadnought sunk by a submarine in World War I. While battleships were never intended for anti-submarine warfare, there was one instance of a submarine being sunk by a dreadnought battleship. HMS "Dreadnought" rammed and sank the German submarine "U-29" on March 18, 1915 off the Moray Firth. 
Whilst the escape of the German fleet from the superior British firepower at Jutland was effected by the German cruisers and destroyers successfully turning away the British battleships, the German attempt to rely on U-boat attacks on the British fleet failed. Torpedo boats did have some successes against battleships in World War I, as demonstrated by the sinking of the British pre-dreadnought "Goliath" by a Turkish torpedo boat during the Dardanelles Campaign and the destruction of the Austro-Hungarian dreadnought "Szent István" by Italian motor torpedo boats in June 1918. In large fleet actions, however, destroyers and torpedo boats were usually unable to get close enough to the battleships to damage them. The only battleship sunk in a fleet action by either torpedo boats or destroyers was the obsolescent German pre-dreadnought "Pommern", sunk by destroyers during the night phase of the Battle of Jutland. The German High Seas Fleet, for their part, were determined not to engage the British without the assistance of submarines; and since the submarines were needed more for raiding commercial traffic, the fleet stayed in port for much of the war. The Armistice with Germany required that most of the High Seas Fleet be disarmed and interned in a neutral port; largely because no neutral port could be found, the ships remained in British custody in Scapa Flow, Scotland. The Treaty of Versailles specified that the ships should be handed over to the British. Instead, most of them were scuttled by their German crews on June 21, 1919, just before the signature of the peace treaty. The treaty also limited the German Navy, and prevented Germany from building or possessing any capital ships. For many years, Germany simply had no battleships. The inter-war period saw the battleship subjected to strict international limitations to prevent a costly arms race breaking out. While the victors were not limited by the Treaty of Versailles, many of the major naval powers were crippled after the war. Faced with the prospect of a naval arms race against the United Kingdom and Japan, which would in turn have led to a possible Pacific war, the United States was keen to conclude the Washington Naval Treaty of 1922. This treaty limited the number and size of battleships that each major nation could possess, and required Britain to accept parity with the U.S. and to abandon the British alliance with Japan. The Washington treaty was followed by a series of other naval treaties, including the First Geneva Naval Conference (1927), the First London Naval Treaty (1930), the Second Geneva Naval Conference (1932), and finally the Second London Naval Treaty (1936), which all set limits on major warships. These treaties became effectively obsolete on September 1, 1939, at the beginning of World War II, but the ship classifications that had been agreed upon still apply. The treaty limitations meant that fewer new battleships were launched in 1919–1939 than in 1905–1914. The treaties also inhibited development by imposing upper limits on the weights of ships. Projected British, American, and Japanese designs, all of which continued the trend to larger ships with bigger guns and thicker armor, never got off the drawing board. Those designs which were commissioned during this period were referred to as treaty battleships. As early as 1914, the British Admiral Percy Scott predicted that battleships would soon be made irrelevant by aircraft. By the end of World War I, aircraft had successfully adopted the torpedo as a weapon. 
In 1921, the Italian general and air theorist Giulio Douhet completed a hugely influential treatise on strategic bombing titled "The Command of the Air", which foresaw the dominance of air power over naval units. In the 1920s, General Billy Mitchell of the United States Army Air Corps, believing that air forces had rendered navies around the world obsolete, testified in front of Congress that "1,000 bombardment airplanes can be built and operated for about the price of one battleship" and that a squadron of these bombers could sink a battleship, making for more efficient use of government funds. This infuriated the U.S. Navy, but Mitchell was nevertheless allowed to conduct a careful series of bombing tests alongside Navy and Marine bombers. In 1921, he bombed and sank numerous ships, including the "unsinkable" German World War I battleship "Ostfriesland" and an American pre-dreadnought. Although Mitchell had required "war-time conditions", the ships sunk were obsolete, stationary, defenseless and had no damage control. The sinking of "Ostfriesland" was accomplished by violating an agreement that would have allowed Navy engineers to examine the effects of various munitions: Mitchell's airmen disregarded the rules, and sank the ship within minutes in a coordinated attack. The stunt made headlines, and Mitchell declared, "No surface vessels can exist wherever air forces acting from land bases are able to attack them." While far from conclusive, Mitchell's test was significant because it put proponents of the battleship on the back foot in the debate against naval aviation. Rear Admiral William A. Moffett used public relations against Mitchell to make headway toward expansion of the U.S. Navy's nascent aircraft carrier program. The Royal Navy, United States Navy, and Imperial Japanese Navy extensively upgraded and modernized their World War I–era battleships during the 1930s. Among the new features were an increased tower height and stability for the optical rangefinder equipment (for gunnery control), more armor (especially around turrets) to protect against plunging fire and aerial bombing, and additional anti-aircraft weapons. Some British ships received a large block superstructure nicknamed the "Queen Anne's castle", a form which would also be used in the conning towers of the new fast battleships. External bulges were added to improve buoyancy, counteracting the increase in weight, and to provide underwater protection against mines and torpedoes. The Japanese rebuilt all of their battleships, plus their battlecruisers, with distinctive "pagoda" structures, though one ship received a more modern bridge tower that would influence later designs. Bulges were fitted, including steel tube arrays to improve both underwater and vertical protection along the waterline. The U.S. experimented with cage masts and later tripod masts, though after the Japanese attack on Pearl Harbor some of the most severely damaged ships were rebuilt with tower masts, for an appearance similar to their contemporaries. Radar, which was effective beyond visual range and effective in complete darkness or adverse weather, was introduced to supplement optical fire control. Even when war threatened again in the late 1930s, battleship construction did not regain the level of importance it had held in the years before World War I. The "building holiday" imposed by the naval treaties meant the capacity of dockyards worldwide had shrunk, and the strategic position had changed. 
In Germany, the ambitious Plan Z for naval rearmament was abandoned in favor of a strategy of submarine warfare supplemented by the use of battlecruisers and commerce raiders. In Britain, the most pressing need was for air defenses and convoy escorts to safeguard the civilian population from bombing or starvation, and re-armament construction plans consisted of five ships of the "King George V" class. It was in the Mediterranean that navies remained most committed to battleship warfare. France intended to build six battleships of the "Dunkerque" and "Richelieu" classes, and the Italians four ships. Neither navy built significant aircraft carriers. The U.S. preferred to spend limited funds on aircraft carriers until the "North Carolina" class. Japan, also prioritising aircraft carriers, nevertheless began work on three mammoth "Yamato"s (although the third, "Shinano", was later completed as a carrier) and a planned fourth was cancelled. At the outbreak of the Spanish Civil War, the Spanish navy included only two small dreadnought battleships, "España" and "Jaime I". "España" (originally named "Alfonso XIII"), by then in reserve at the northwestern naval base of El Ferrol, fell into Nationalist hands in July 1936. The crew aboard "Jaime I" remained loyal to the Republic, killed their officers, who apparently supported Franco's attempted coup, and joined the Republican Navy. Thus each side had one battleship; however, the Republican Navy generally lacked experienced officers. The Spanish battleships mainly restricted themselves to mutual blockades, convoy escort duties, and shore bombardment, rarely engaging in direct fighting against other surface units. In April 1937, "España" ran into a mine laid by friendly forces, and sank with little loss of life. In May 1937, "Jaime I" was damaged by Nationalist air attacks and a grounding incident. The ship was forced to go back to port to be repaired. There she was again hit by several aerial bombs. It was then decided to tow the battleship to a more secure port, but during the transport she suffered an internal explosion that caused 300 deaths and her total loss. Several Italian and German capital ships participated in the non-intervention blockade. On May 29, 1937, two Republican aircraft managed to bomb the German pocket battleship "Deutschland" outside Ibiza, causing severe damage and loss of life. "Admiral Scheer" retaliated two days later by bombarding Almería, causing much destruction, and the resulting "Deutschland" incident meant the end of German and Italian support for non-intervention. The obsolete German pre-dreadnought "Schleswig-Holstein" fired the first shots of World War II with the bombardment of the Polish garrison at Westerplatte; and the final surrender of the Japanese Empire took place aboard a United States Navy battleship, "Missouri". Between those two events, it had become clear that aircraft carriers were the new principal ships of the fleet and that battleships now performed a secondary role. Battleships played a part in major engagements in the Atlantic, Pacific and Mediterranean theaters; in the Atlantic, the Germans used their battleships as independent commerce raiders. However, clashes between battleships were of little strategic importance. The Battle of the Atlantic was fought between destroyers and submarines, and most of the decisive fleet clashes of the Pacific war were determined by aircraft carriers. In the first year of the war, armored warships defied predictions that aircraft would dominate naval warfare. "Scharnhorst" and "Gneisenau" surprised and sank the aircraft carrier "Glorious" off western Norway in June 1940. This engagement marked the only time a fleet carrier was sunk by surface gunnery. 
In the attack on Mers-el-Kébir, British battleships opened fire on the French battleships in the harbor near Oran in Algeria with their heavy guns, and later pursued fleeing French ships with planes from aircraft carriers. The subsequent years of the war saw many demonstrations of the maturity of the aircraft carrier as a strategic naval weapon and its potential against battleships. The British air attack on the Italian naval base at Taranto sank one Italian battleship and damaged two more. The same Swordfish torpedo bombers played a crucial role in sinking the German battleship "Bismarck". On December 7, 1941, the Japanese launched a surprise attack on Pearl Harbor. Within a short time, five of eight U.S. battleships were sunk or sinking, with the rest damaged. All three American aircraft carriers were at sea, however, and evaded destruction. The sinking of the British battleship "Prince of Wales" and the battlecruiser "Repulse" demonstrated the vulnerability of a battleship to air attack while at sea without sufficient air cover, settling the argument begun by Mitchell in 1921. Both warships were under way and en route to attack the Japanese amphibious force that had invaded Malaya when they were caught by Japanese land-based bombers and torpedo bombers on December 10, 1941. At many of the early crucial battles of the Pacific, for instance Coral Sea and Midway, battleships were either absent or overshadowed as carriers launched wave after wave of planes into the attack at a range of hundreds of miles. In later battles in the Pacific, battleships primarily performed shore bombardment in support of amphibious landings and provided anti-aircraft defense as escort for the carriers. Even the largest battleships ever constructed, Japan's "Yamato" class, which carried a main battery of nine 18-inch (46 cm) guns and were designed as a principal strategic weapon, were never given a chance to show their potential in the decisive battleship action that figured in Japanese pre-war planning. The last battleship confrontation in history was the Battle of Surigao Strait, on October 25, 1944, in which a numerically and technically superior American battleship group destroyed a lesser Japanese battleship group by gunfire after it had already been devastated by destroyer torpedo attacks. All but one of the American battleships in this confrontation had previously been sunk during the attack on Pearl Harbor and subsequently raised and repaired. When "Mississippi" fired the last salvo of this battle, the last salvo fired by a battleship against another heavy ship, she was "firing a funeral salute to a finished era of naval warfare". In April 1945, during the battle for Okinawa, the world's most powerful battleship, the "Yamato", was sent out on a suicide mission against a massive U.S. force and sunk by overwhelming carrier air attack, with nearly all hands lost. After World War II, several navies retained their existing battleships, but they were no longer strategically dominant military assets. Indeed, it soon became apparent that they were no longer worth the considerable cost of construction and maintenance, and only one new battleship was commissioned after the war, HMS "Vanguard". During the war it had been demonstrated that battleship-on-battleship engagements like Leyte Gulf were the exception and not the rule, and with the growing role of aircraft, engagement ranges were becoming longer and longer, making heavy gun armament irrelevant. 
The armor of a battleship was equally irrelevant in the face of a nuclear attack, as long-range tactical missiles could now be mounted on much smaller Soviet vessels. By the end of the 1950s, smaller vessel classes such as destroyers, which formerly offered no noteworthy opposition to battleships, were now capable of eliminating battleships from outside the range of the ship's heavy guns.

The remaining battleships met a variety of ends. "Arkansas" and "Nagato" were sunk during the testing of nuclear weapons in Operation Crossroads in 1946. Both battleships proved resistant to nuclear air burst but vulnerable to underwater nuclear explosions. The Italian "Giulio Cesare" was taken by the Soviets as reparations and renamed "Novorossiysk"; she was sunk by a leftover German mine in the Black Sea on October 29, 1955. The two "Andrea Doria"-class ships were scrapped in 1956. The French "Lorraine" was scrapped in 1954, "Richelieu" in 1968, and "Jean Bart" in 1970. The United Kingdom's four surviving "King George V"-class ships were scrapped in 1957, and "Vanguard" followed in 1960. All other surviving British battleships had been sold or broken up by 1949. The Soviet Union's "Marat" was scrapped in 1953, "Oktyabrskaya Revolutsiya" in 1957 and "Sevastopol" (back under her original name since 1942) in 1956-7. Brazil's "Minas Geraes" was scrapped in Genoa in 1953, and her sister ship "São Paulo" sank during a storm in the Atlantic "en route" to the breakers in Italy in 1951. Argentina kept its two "Rivadavia"-class ships until 1956 and Chile kept "Almirante Latorre" (formerly HMS "Canada") until 1959. The Turkish battlecruiser "Yavuz" (formerly the German "Goeben", launched in 1911) was scrapped in 1976 after an offer to sell her back to Germany was refused. Sweden had several small coastal-defense battleships, one of which, "Gustav V", survived until 1970. The Soviets scrapped four large incomplete cruisers in the late 1950s, whilst plans to build a number of new "Stalingrad"-class battlecruisers were abandoned following the death of Joseph Stalin in 1953. The three old German battleships "Hessen", "Schleswig-Holstein", and "Schlesien" all met similar ends. "Hessen" was taken over by the Soviet Union and renamed "Tsel". She was scrapped in 1960. "Schleswig-Holstein" was renamed "Borodino", and was used as a target ship until 1960. "Schlesien", too, was used as a target ship. She was broken up between 1952 and 1957.

The "Iowa"-class battleships gained a new lease of life in the U.S. Navy as fire support ships. Radar and computer-controlled gunfire could be aimed with pinpoint accuracy at targets. The U.S. recommissioned all four "Iowa"-class battleships for the Korean War and "New Jersey" for the Vietnam War. These were primarily used for shore bombardment, "New Jersey" firing nearly 6,000 rounds of 16-inch shells and over 14,000 rounds of 5-inch projectiles during her tour on the gunline, seven times more rounds against shore targets in Vietnam than she had fired in the Second World War. As part of Navy Secretary John F. Lehman's effort to build a 600-ship Navy in the 1980s, and in response to the commissioning of "Kirov" by the Soviet Union, the United States recommissioned all four "Iowa"-class battleships. On several occasions, battleships were support ships in carrier battle groups, or led their own battleship battle group. These were modernized to carry Tomahawk (TLAM) missiles, with "New Jersey" seeing action bombarding Lebanon in 1983 and 1984, while "Missouri" and "Wisconsin" fired their 16-inch (406 mm) guns at land targets and launched missiles during Operation Desert Storm in 1991. "Wisconsin" served as the TLAM strike commander for the Persian Gulf, directing the sequence of launches that marked the opening of "Desert Storm", firing a total of 24 TLAMs during the first two days of the campaign. 
The primary threat to the battleships was Iraqi shore-based surface-to-surface missiles; "Missouri" was targeted by two Iraqi Silkworm missiles, with one missing and the other intercepted by the British destroyer "Gloucester". After "Indiana" was stricken in 1962, the four "Iowa"-class ships were the only battleships in commission or reserve anywhere in the world. There was an extended debate when the four "Iowa"-class ships were finally decommissioned in the early 1990s. "Iowa" and "Wisconsin" were maintained to a standard where they could be rapidly returned to service as fire support vessels, pending the development of a superior fire support vessel. These last two battleships were finally stricken from the U.S. Naval Vessel Register in 2006. "The Military Balance" and a Russian survey stated that the U.S. Navy listed one battleship in the reserve (Naval Inactive Fleet/Reserve 2nd Turn) in 2010. "The Military Balance" states that the U.S. Navy listed no battleships in the reserve in 2014. The U.S. Marine Corps believes that the current naval surface fire support gun and missile programs will not be able to provide adequate fire support for an amphibious assault or onshore operations. When the last "Iowa"-class ship was finally stricken from the Naval Vessel Register, no battleships remained in service or in reserve with any navy worldwide.

A number are preserved as museum ships, either afloat or in drydock. The U.S. has eight battleships on display: "Missouri", "New Jersey", "Iowa", "Wisconsin", "Massachusetts", "Texas", "North Carolina", and "Alabama". "Missouri" and "New Jersey" are museums at Pearl Harbor and Camden, New Jersey, respectively. "Iowa" is on display as an educational attraction at the Los Angeles Waterfront in San Pedro, California. "Wisconsin" now serves as a museum ship in Norfolk, Virginia. "Massachusetts", which has the distinction of never having lost a man during service, is on display at the Battleship Cove naval museum in Fall River, Massachusetts. "Texas", the first battleship turned into a museum, is on display at the San Jacinto Battleground State Historic Site, near Houston. "North Carolina" is on display in Wilmington, North Carolina. "Alabama" is on display in Mobile, Alabama. The wreck of "Arizona", sunk during the Pearl Harbor attack in 1941, is designated a historical landmark and national gravesite. The only other 20th-century battleship on display is the Japanese pre-dreadnought "Mikasa". A replica of the Chinese ironclad Dingyuan was built by the Weihai Port Bureau in 2003 and is on display in Weihai, China.

Battleships were the embodiment of sea power. For Alfred Thayer Mahan and his followers, a strong navy was vital to the success of a nation, and control of the seas was vital for the projection of force on land and overseas. Mahan's theory, proposed in "The Influence of Sea Power Upon History, 1660–1783" of 1890, held that the role of the battleship was to sweep the enemy from the seas. While the work of escorting, blockading, and raiding might be done by cruisers or smaller vessels, the presence of the battleship was a potential threat to any convoy escorted by any vessels other than capital ships. This concept of "potential threat" can be further generalized to the mere existence (as opposed to presence) of a powerful fleet tying the opposing fleet down. This concept came to be known as a "fleet in being" – an idle yet mighty fleet forcing others to spend time, resources and effort to actively guard against it. 
Mahan went on to say victory could only be achieved by engagements between battleships, which came to be known as the "decisive battle" doctrine in some navies, while targeting merchant ships (commerce raiding or "guerre de course", as posited by the "Jeune École") could never succeed. Mahan was highly influential in naval and political circles throughout the age of the battleship, calling for a large fleet of the most powerful battleships possible. Mahan's work developed in the late 1880s, and by the end of the 1890s it had acquired much international influence on naval strategy; in the end, it was adopted by many major navies (notably the British, American, German, and Japanese). The strength of Mahanian opinion was important in the development of the battleships arms races, and equally important in the agreement of the Powers to limit battleship numbers in the interwar era. The "fleet in being" suggested battleships could simply by their existence tie down superior enemy resources. This in turn was believed to be able to tip the balance of a conflict even without a battle. This suggested even for inferior naval powers a battleship fleet could have important strategic effect. While the role of battleships in both World Wars reflected Mahanian doctrine, the details of battleship deployment were more complex. Unlike ships of the line, the battleships of the late 19th and early 20th centuries had significant vulnerability to torpedoes and mines—because efficient mines and torpedoes did not exist before that—which could be used by relatively small and inexpensive craft. The "Jeune École" doctrine of the 1870s and 1880s recommended placing torpedo boats alongside battleships; these would hide behind the larger ships until gun-smoke obscured visibility enough for them to dart out and fire their torpedoes. While this tactic was vitiated by the development of smokeless propellant, the threat from more capable torpedo craft (later including submarines) remained. By the 1890s, the Royal Navy had developed the first destroyers, which were initially designed to intercept and drive off any attacking torpedo boats. During the First World War and subsequently, battleships were rarely deployed without a protective screen of destroyers. Battleship doctrine emphasised the concentration of the battlegroup. In order for this concentrated force to be able to bring its power to bear on a reluctant opponent (or to avoid an encounter with a stronger enemy fleet), battlefleets needed some means of locating enemy ships beyond horizon range. This was provided by scouting forces; at various stages battlecruisers, cruisers, destroyers, airships, submarines and aircraft were all used. (With the development of radio, direction finding and traffic analysis would come into play, as well, so even shore stations, broadly speaking, joined the battlegroup.) So for most of their history, battleships operated surrounded by squadrons of destroyers and cruisers. The North Sea campaign of the First World War illustrates how, despite this support, the threat of mine and torpedo attack, and the failure to integrate or appreciate the capabilities of new techniques, seriously inhibited the operations of the Royal Navy Grand Fleet, the greatest battleship fleet of its time. The presence of battleships had a great psychological and diplomatic impact. Similar to possessing nuclear weapons today, the ownership of battleships served to enhance a nation's force projection. 
Even during the Cold War, the psychological impact of a battleship was significant. In 1946, USS "Missouri" was dispatched to deliver the remains of the ambassador from Turkey, and her presence in Turkish and Greek waters staved off a possible Soviet thrust into the Balkan region. In September 1983, when Druze militia in Lebanon's Shouf Mountains fired upon U.S. Marine peacekeepers, the arrival of USS "New Jersey" stopped the firing. Gunfire from "New Jersey" later killed militia leaders. Battleships were the largest and most complex, and hence the most expensive warships of their time; as a result, the value of investment in battleships has always been contested. As the French politician Etienne Lamy wrote in 1879, "The construction of battleships is so costly, their effectiveness so uncertain and of such short duration, that the enterprise of creating an armored fleet seems to leave fruitless the perseverance of a people". The "Jeune École" school of thought of the 1870s and 1880s sought alternatives to the crippling expense and debatable utility of a conventional battlefleet. It proposed what would nowadays be termed a sea denial strategy, based on fast, long-ranged cruisers for commerce raiding and torpedo boat flotillas to attack enemy ships attempting to blockade French ports. The ideas of the "Jeune École" were ahead of their time; it was not until the 20th century that efficient mines, torpedoes, submarines, and aircraft were available that allowed similar ideas to be effectively implemented. The determination of powers such as Germany to build battlefleets with which to confront much stronger rivals has been criticised by historians, who emphasise the futility of investment in a battlefleet that has no chance of matching its opponent in an actual battle. Big Bang The Big Bang theory is the prevailing cosmological model for the universe from the earliest known periods through its subsequent large-scale evolution. The model describes how the universe expanded from a very high-density and high-temperature state, and offers a comprehensive explanation for a broad range of phenomena, including the abundance of light elements, the cosmic microwave background (CMB), large scale structure and Hubble's law. If the known laws of physics are extrapolated to the highest density regime, the result is a singularity which is typically associated with the Big Bang. Physicists are undecided whether this means the universe began from a singularity, or that current knowledge is insufficient to describe the universe at that time. Detailed measurements of the expansion rate of the universe place the Big Bang at around 13.8 billion years ago, which is thus considered the age of the universe. After the initial expansion, the universe cooled sufficiently to allow the formation of subatomic particles, and later simple atoms. Giant clouds of these primordial elements later coalesced through gravity in halos of dark matter, eventually forming the stars and galaxies visible today. Since Georges Lemaître first noted in 1927 that an expanding universe could be traced back in time to an originating single point, scientists have built on his idea of cosmic expansion. The scientific community was once divided between supporters of two different theories, the Big Bang and the Steady State theory, but a wide range of empirical evidence has strongly favored the Big Bang which is now universally accepted. 
In 1929, from analysis of galactic redshifts, Edwin Hubble concluded that galaxies were drifting apart; this is important observational evidence consistent with the hypothesis of an expanding universe. In 1964, the cosmic microwave background radiation was discovered, which was crucial evidence in favor of the Big Bang model, since the theory had predicted the existence of such background radiation throughout the universe before it was observed. More recently, measurements of the redshifts of supernovae indicate that the expansion of the universe is accelerating, an observation attributed to the existence of dark energy. The known physical laws of nature can be used to calculate the characteristics of the universe in detail back in time to an initial state of extreme density and temperature. The Belgian astronomer and Catholic priest Georges Lemaître proposed on theoretical grounds that the universe is expanding, which was observationally confirmed soon afterwards by Edwin Hubble. In 1927 in the "Annales de la Société Scientifique de Bruxelles" ("Annals of the Scientific Society of Brussels") under the title "Un Univers homogène de masse constante et de rayon croissant rendant compte de la vitesse radiale des nébuleuses extragalactiques" ("A homogeneous Universe of constant mass and growing radius accounting for the radial velocity of extragalactic nebulae"), he presented his new idea that the universe is expanding and provided the first observational estimate of what is now known as the Hubble constant. What would later become known as the "Big Bang theory" of the origin of the universe, he called his "hypothesis of the primeval atom" or the "Cosmic Egg". American astronomer Edwin Hubble observed that the distances to faraway galaxies were strongly correlated with their redshifts. This was interpreted to mean that all distant galaxies and clusters are receding away from our vantage point with an apparent velocity proportional to their distance: that is, the farther they are, the faster they move away from us, regardless of direction. Assuming the Copernican principle (that the Earth is not the center of the universe), the only remaining interpretation is that all observable regions of the universe are receding from all others. Since we know that the distance between galaxies increases today, it must mean that in the past galaxies were closer together. The continuous expansion of the universe implies that the universe was denser and hotter in the past. Large particle accelerators can replicate the conditions that prevailed after the early moments of the universe, resulting in confirmation and refinement of the details of the Big Bang model. However, these accelerators can only probe so far into high energy regimes. Consequently, the state of the universe in the earliest instants of the Big Bang expansion is still poorly understood and an area of open investigation and speculation. The first subatomic particles to be formed included protons, neutrons, and electrons. Though simple atomic nuclei formed within the first three minutes after the Big Bang, thousands of years passed before the first electrically neutral atoms formed. The majority of atoms produced by the Big Bang were hydrogen, along with helium and traces of lithium. Giant clouds of these primordial elements later coalesced through gravity to form stars and galaxies, and the heavier elements were synthesized either within stars or during supernovae. 
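The expansion rate in Hubble's relation is today quoted as the Hubble constant, of order 70 km/s per megaparsec. As a rough illustration of what such a number implies (the figures below are round, assumed values used only for illustration, not measurements cited in this article), the law and its associated timescale can be evaluated directly:

    # Illustrative arithmetic with Hubble's law, v = H0 * D, using an assumed
    # round value of the Hubble constant of 70 km/s per megaparsec.
    H0_km_s_Mpc = 70.0              # assumed illustrative value
    D_Mpc = 100.0                   # distance of a hypothetical galaxy, in megaparsecs
    v_km_s = H0_km_s_Mpc * D_Mpc    # recession velocity: about 7,000 km/s

    # The reciprocal of H0 (the "Hubble time") sets the characteristic timescale
    # of the expansion and comes out close to the measured age of the universe.
    km_per_Mpc = 3.086e19
    seconds_per_year = 3.156e7
    hubble_time_years = km_per_Mpc / H0_km_s_Mpc / seconds_per_year   # ~1.4e10 years

    print(v_km_s, hubble_time_years / 1e9)   # ~7000 km/s and ~14 billion years

That 1/H0 lands near the roughly 13.8-billion-year age discussed below is a feature of the standard model rather than an exact identity, since the expansion decelerated early on and has accelerated more recently.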
The Big Bang theory offers a comprehensive explanation for a broad range of observed phenomena, including the abundance of light elements, the CMB, large scale structure, and Hubble's law. The framework for the Big Bang model relies on Albert Einstein's theory of general relativity and on simplifying assumptions such as homogeneity and isotropy of space. The governing equations were formulated by Alexander Friedmann, and similar solutions were worked on by Willem de Sitter. Since then, astrophysicists have incorporated observational and theoretical additions into the Big Bang model, and its parametrization as the Lambda-CDM model serves as the framework for current investigations of theoretical cosmology. The Lambda-CDM model is the current "standard model" of Big Bang cosmology; the consensus is that it is the simplest model that can account for the various measurements and observations relevant to cosmology. Extrapolation of the expansion of the universe backwards in time using general relativity yields an infinite density and temperature at a finite time in the past. This singularity indicates that general relativity is not an adequate description of the laws of physics in this regime. Models based on general relativity alone cannot extrapolate toward the singularity beyond the end of the Planck epoch. This primordial singularity is itself sometimes called "the Big Bang", but the term can also refer to a more generic early hot, dense phase of the universe. In either case, "the Big Bang" as an event is also colloquially referred to as the "birth" of our universe since it represents the point in history where the universe can be verified to have entered into a regime where the laws of physics as we understand them (specifically general relativity and the standard model of particle physics) work. Based on measurements of the expansion using Type Ia supernovae and measurements of temperature fluctuations in the cosmic microwave background, the time that has passed since that event, otherwise known as the "age of the universe", is 13.799 ± 0.021 billion years. The agreement of independent measurements of this age supports the ΛCDM model that describes in detail the characteristics of the universe. Despite being extremely dense at this time, far denser than is usually required to form a black hole, the universe did not re-collapse into a black hole. This may be explained by considering that commonly-used calculations and limits for gravitational collapse are usually based upon objects of relatively constant size, such as stars, and do not apply to rapidly expanding space such as the Big Bang. The earliest phases of the Big Bang are subject to much speculation. In the most common models the universe was filled homogeneously and isotropically with a very high energy density and huge temperatures and pressures and was very rapidly expanding and cooling. Approximately 10⁻³⁷ seconds into the expansion, a phase transition caused a period of cosmic inflation, during which the universe grew exponentially and density fluctuations that occurred because of the uncertainty principle were amplified into the seeds that would later form the large-scale structure of the universe. After inflation stopped, reheating occurred until the universe obtained the temperatures required for the production of a quark–gluon plasma as well as all other elementary particles. 
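The phrase "grew exponentially" can be made concrete. During inflation the scale factor of the universe grows roughly as an exponential of time, and the amount of growth is usually counted in "e-folds"; the figure of about 60 e-folds used below is a commonly assumed illustrative number, not one quoted in this article:

    a(t) \propto e^{H_{\mathrm{inf}}\, t}, \qquad \frac{a_{\mathrm{end}}}{a_{\mathrm{start}}} = e^{N} \approx e^{60} \approx 10^{26}

On that assumption, a region initially far smaller than a proton is stretched to macroscopic size in a tiny fraction of a second, which is how quantum-scale density fluctuations could be imprinted on the largest structures observed today.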
Temperatures were so high that the random motions of particles were at relativistic speeds, and particle–antiparticle pairs of all kinds were being continuously created and destroyed in collisions. At some point, an unknown reaction called baryogenesis violated the conservation of baryon number, leading to a very small excess of quarks and leptons over antiquarks and antileptons—of the order of one part in 30 million. This resulted in the predominance of matter over antimatter in the present universe. The universe continued to decrease in density and fall in temperature, hence the typical energy of each particle was decreasing. Symmetry breaking phase transitions put the fundamental forces of physics and the parameters of elementary particles into their present form. After about 10⁻¹¹ seconds, the picture becomes less speculative, since particle energies drop to values that can be attained in particle accelerators. At about 10⁻⁶ seconds, quarks and gluons combined to form baryons such as protons and neutrons. The small excess of quarks over antiquarks led to a small excess of baryons over antibaryons. The temperature was now no longer high enough to create new proton–antiproton pairs (similarly for neutrons–antineutrons), so a mass annihilation immediately followed, leaving just one in 10¹⁰ of the original protons and neutrons, and none of their antiparticles. A similar process happened at about 1 second for electrons and positrons. After these annihilations, the remaining protons, neutrons and electrons were no longer moving relativistically and the energy density of the universe was dominated by photons (with a minor contribution from neutrinos). A few minutes into the expansion, when the temperature was about a billion (one thousand million) kelvin and the density was about that of air, neutrons combined with protons to form the universe's deuterium and helium nuclei in a process called Big Bang nucleosynthesis. Most protons remained uncombined as hydrogen nuclei. As the universe cooled, the rest mass energy density of matter came to gravitationally dominate that of the photon radiation. After about 379,000 years, the electrons and nuclei combined into atoms (mostly hydrogen); hence the radiation decoupled from matter and continued through space largely unimpeded. This relic radiation is known as the cosmic microwave background radiation. The chemistry of life may have begun shortly after the Big Bang, 13.8 billion years ago, during a habitable epoch when the universe was only 10–17 million years old. Over a long period of time, the slightly denser regions of the nearly uniformly distributed matter gravitationally attracted nearby matter and thus grew even denser, forming gas clouds, stars, galaxies, and the other astronomical structures observable today. The details of this process depend on the amount and type of matter in the universe. The four possible types of matter are known as cold dark matter, warm dark matter, hot dark matter, and baryonic matter. The best measurements available, from Wilkinson Microwave Anisotropy Probe (WMAP), show that the data is well-fit by a Lambda-CDM model in which dark matter is assumed to be cold (warm dark matter is ruled out by early reionization), and is estimated to make up about 23% of the matter/energy of the universe, while baryonic matter makes up about 4.6%. 
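The percentages just quoted are fractions of the total matter/energy budget; cosmologists often work instead with "physical densities" of the form Ω h², where h is the Hubble constant in units of 100 km/s/Mpc. A minimal sketch of the conversion, assuming h ≈ 0.70 (an illustrative value, not one given in this article), shows how the two forms relate and reproduces the numbers used in the next paragraph:

    import math

    h = 0.70                      # assumed reduced Hubble parameter (H0 = 70 km/s/Mpc)

    omega_baryon = 0.046          # baryonic matter as a fraction of critical density (~4.6%)
    omega_cold_dm = 0.23          # cold dark matter as a fraction of critical density (~23%)

    # "Physical" densities, the Omega*h^2 form used in the next paragraph.
    omega_b_h2 = omega_baryon * h**2     # ~0.023
    omega_c_h2 = omega_cold_dm * h**2    # ~0.11

    # For scale, the critical density itself: rho_c = 3 H0^2 / (8 pi G),
    # equivalent to only a few hydrogen atoms per cubic metre.
    G = 6.674e-11                        # m^3 kg^-1 s^-2
    H0 = 70.0 * 1000 / 3.086e22          # s^-1
    rho_crit = 3 * H0**2 / (8 * math.pi * G)   # ~9.2e-27 kg/m^3

    print(round(omega_b_h2, 3), round(omega_c_h2, 2), f"{rho_crit:.1e}")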
In an "extended model" which includes hot dark matter in the form of neutrinos, then if the "physical baryon density" formula_1 is estimated at about 0.023 (this is different from the 'baryon density' formula_2 expressed as a fraction of the total matter/energy density, which as noted above is about 0.046), and the corresponding cold dark matter density Ωh is about 0.11, the corresponding neutrino density Ωh is estimated to be less than 0.0062. Independent lines of evidence from Type Ia supernovae and the CMB imply that the universe today is dominated by a mysterious form of energy known as dark energy, which apparently permeates all of space. The observations suggest 73% of the total energy density of today's universe is in this form. When the universe was very young, it was likely infused with dark energy, but with less space and everything closer together, gravity predominated, and it was slowly braking the expansion. But eventually, after numerous billion years of expansion, the growing abundance of dark energy caused the expansion of the universe to slowly begin to accelerate. Dark energy in its simplest formulation takes the form of the cosmological constant term in Einstein's field equations of general relativity, but its composition and mechanism are unknown and, more generally, the details of its equation of state and relationship with the Standard Model of particle physics continue to be investigated both through observation and theoretically. All of this cosmic evolution after the inflationary epoch can be rigorously described and modeled by the ΛCDM model of cosmology, which uses the independent frameworks of quantum mechanics and Einstein's General Relativity. There is no well-supported model describing the action prior to 10 seconds or so. Apparently a new unified theory of quantum gravitation is needed to break this barrier. Understanding this earliest of eras in the history of the universe is currently one of the greatest unsolved problems in physics. The Big Bang theory depends on two major assumptions: the universality of physical laws and the cosmological principle. The cosmological principle states that on large scales the universe is homogeneous and isotropic. These ideas were initially taken as postulates, but today there are efforts to test each of them. For example, the first assumption has been tested by observations showing that largest possible deviation of the fine structure constant over much of the age of the universe is of order 10. Also, general relativity has passed stringent tests on the scale of the Solar System and binary stars. If the large-scale universe appears isotropic as viewed from Earth, the cosmological principle can be derived from the simpler Copernican principle, which states that there is no preferred (or special) observer or vantage point. To this end, the cosmological principle has been confirmed to a level of 10 via observations of the CMB. The universe has been measured to be homogeneous on the largest scales at the 10% level. General relativity describes spacetime by a metric, which determines the distances that separate nearby points. The points, which can be galaxies, stars, or other objects, are themselves specified using a coordinate chart or "grid" that is laid down over all spacetime. The cosmological principle implies that the metric should be homogeneous and isotropic on large scales, which uniquely singles out the Friedmann–Lemaître–Robertson–Walker metric (FLRW metric). 
This metric contains a scale factor, which describes how the size of the universe changes with time. This enables a convenient choice of a coordinate system to be made, called comoving coordinates. In this coordinate system, the grid expands along with the universe, and objects that are moving only because of the expansion of the universe, remain at fixed points on the grid. While their "coordinate" distance (comoving distance) remains constant, the "physical" distance between two such co-moving points expands proportionally with the scale factor of the universe. The Big Bang is not an explosion of matter moving outward to fill an empty universe. Instead, space itself expands with time everywhere and increases the physical distance between two comoving points. In other words, the Big Bang is not an explosion "in space", but rather an expansion "of space". Because the FLRW metric assumes a uniform distribution of mass and energy, it applies to our universe only on large scales—local concentrations of matter such as our galaxy are gravitationally bound and as such do not experience the large-scale expansion of space. An important feature of the Big Bang spacetime is the presence of particle horizons. Since the universe has a finite age, and light travels at a finite speed, there may be events in the past whose light has not had time to reach us. This places a limit or a "past horizon" on the most distant objects that can be observed. Conversely, because space is expanding, and more distant objects are receding ever more quickly, light emitted by us today may never "catch up" to very distant objects. This defines a "future horizon", which limits the events in the future that we will be able to influence. The presence of either type of horizon depends on the details of the FLRW model that describes our universe. Our understanding of the universe back to very early times suggests that there is a past horizon, though in practice our view is also limited by the opacity of the universe at early times. So our view cannot extend further backward in time, though the horizon recedes in space. If the expansion of the universe continues to accelerate, there is a future horizon as well. English astronomer Fred Hoyle is credited with coining the term "Big Bang" during a 1949 BBC radio broadcast, saying: "These theories were based on the hypothesis that all the matter in the universe was created in one big bang at a particular time in the remote past." It is popularly reported that Hoyle, who favored an alternative "steady state" cosmological model, intended this to be pejorative, but Hoyle explicitly denied this and said it was just a striking image meant to highlight the difference between the two models. The Big Bang theory developed from observations of the structure of the universe and from theoretical considerations. In 1912 Vesto Slipher measured the first Doppler shift of a "spiral nebula" (spiral nebula is the obsolete term for spiral galaxies), and soon discovered that almost all such nebulae were receding from Earth. He did not grasp the cosmological implications of this fact, and indeed at the time it was highly controversial whether or not these nebulae were "island universes" outside our Milky Way. Ten years later, Alexander Friedmann, a Russian cosmologist and mathematician, derived the Friedmann equations from Albert Einstein's equations of general relativity, showing that the universe might be expanding in contrast to the static universe model advocated by Einstein at that time. 
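The Friedmann equations just mentioned are written for the FLRW metric introduced earlier in this section. For reference, its standard textbook form (quoted here as an illustration, not a derivation specific to this article) is

    ds^2 = -c^2\,dt^2 + a^2(t)\left[\frac{dr^2}{1 - k r^2} + r^2\left(d\theta^2 + \sin^2\theta\,d\varphi^2\right)\right]

where a(t) is the scale factor and k = −1, 0, +1 corresponds to negatively curved, flat, or positively curved space. Two comoving points separated by a fixed comoving coordinate distance χ are separated by a physical distance d(t) = a(t)·χ, which is the precise sense in which distances grow with the expansion even though neither point moves through space.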
In 1924 Edwin Hubble's measurement of the great distance to the nearest spiral nebulae showed that these systems were indeed other galaxies. Independently deriving Friedmann's equations in 1927, Georges Lemaître, a Belgian physicist, proposed that the inferred recession of the nebulae was due to the expansion of the universe. In 1931 Lemaître went further and suggested that the evident expansion of the universe, if projected back in time, meant that the further in the past the smaller the universe was, until at some finite time in the past all the mass of the universe was concentrated into a single point, a "primeval atom" where and when the fabric of time and space came into existence. Starting in 1924, Hubble painstakingly developed a series of distance indicators, the forerunner of the cosmic distance ladder, using the Hooker telescope at Mount Wilson Observatory. This allowed him to estimate distances to galaxies whose redshifts had already been measured, mostly by Slipher. In 1929 Hubble discovered a correlation between distance and recession velocity—now known as Hubble's law. Lemaître had already shown that this was expected, given the cosmological principle. In the 1920s and 1930s almost every major cosmologist preferred an eternal steady state universe, and several complained that the beginning of time implied by the Big Bang imported religious concepts into physics; this objection was later repeated by supporters of the steady state theory. This perception was enhanced by the fact that the originator of the Big Bang theory, Georges Lemaître, was a Roman Catholic priest. Arthur Eddington agreed with Aristotle that the universe did not have a beginning in time, viz., that matter is eternal. A beginning in time was "repugnant" to him. Lemaître, however, thought that "If the world has begun with a single quantum, the notions of space and time would altogether fail to have any meaning at the beginning; they would only begin to have a sensible meaning when the original quantum had been divided into a sufficient number of quanta. If this suggestion is correct, the beginning of the world happened a little before the beginning of space and time." During the 1930s other ideas were proposed as non-standard cosmologies to explain Hubble's observations, including the Milne model, the oscillatory universe (originally suggested by Friedmann, but advocated by Albert Einstein and Richard Tolman) and Fritz Zwicky's tired light hypothesis. After World War II, two distinct possibilities emerged. One was Fred Hoyle's steady state model, whereby new matter would be created as the universe seemed to expand. In this model the universe is roughly the same at any point in time. The other was Lemaître's Big Bang theory, advocated and developed by George Gamow, who introduced big bang nucleosynthesis (BBN) and whose associates, Ralph Alpher and Robert Herman, predicted the CMB. Ironically, it was Hoyle who coined the phrase that came to be applied to Lemaître's theory, referring to it as "this "big bang" idea" during a BBC Radio broadcast in March 1949. For a while, support was split between these two theories. Eventually, the observational evidence, most notably from radio source counts, began to favor Big Bang over Steady State. The discovery and confirmation of the CMB in 1964 secured the Big Bang as the best theory of the origin and evolution of the universe. 
Much of the current work in cosmology includes understanding how galaxies form in the context of the Big Bang, understanding the physics of the universe at earlier and earlier times, and reconciling observations with the basic theory. In 1968 and 1970 Roger Penrose, Stephen Hawking, and George F. R. Ellis published papers in which they showed that mathematical singularities were an inevitable initial condition of general relativistic models of the Big Bang. Then, from the 1970s to the 1990s, cosmologists worked on characterizing the features of the Big Bang universe and resolving outstanding problems. In 1981, Alan Guth made a theoretical breakthrough, resolving certain outstanding problems in the Big Bang theory with the introduction of an epoch of rapid expansion in the early universe he called "inflation". Meanwhile, during these decades, two questions in observational cosmology that generated much discussion and disagreement were over the precise values of the Hubble constant and the matter-density of the universe (before the discovery of dark energy, thought to be the key predictor for the eventual fate of the universe). In the mid-1990s, observations of certain globular clusters appeared to indicate that they were about 15 billion years old, which conflicted with most then-current estimates of the age of the universe (and indeed with the age measured today). This issue was later resolved when new computer simulations, which included the effects of mass loss due to stellar winds, indicated a much younger age for globular clusters. While there still remain some questions as to how accurately the ages of the clusters are measured, globular clusters are of interest to cosmology as some of the oldest objects in the universe. Significant progress in Big Bang cosmology has been made since the late 1990s as a result of advances in telescope technology as well as the analysis of data from satellites such as COBE, the Hubble Space Telescope and WMAP. Cosmologists now have fairly precise and accurate measurements of many of the parameters of the Big Bang model, and have made the unexpected discovery that the expansion of the universe appears to be accelerating. The earliest and most direct observational evidence for the validity of the theory is the expansion of the universe according to Hubble's law (as indicated by the redshifts of galaxies), the discovery and measurement of the cosmic microwave background, and the relative abundances of light elements produced by Big Bang nucleosynthesis. More recent evidence includes observations of galaxy formation and evolution, and the distribution of large-scale cosmic structures. These are sometimes called the "four pillars" of the Big Bang theory. Precise modern models of the Big Bang appeal to various exotic physical phenomena that have not been observed in terrestrial laboratory experiments or incorporated into the Standard Model of particle physics. Of these features, dark matter is currently subjected to the most active laboratory investigations. Remaining issues include the cuspy halo problem and the dwarf galaxy problem of cold dark matter. Dark energy is also an area of intense interest for scientists, but it is not clear whether direct detection of dark energy will be possible. Inflation and baryogenesis remain more speculative features of current Big Bang models. Viable, quantitative explanations for such phenomena are still being sought. These are currently unsolved problems in physics. 
Observations of distant galaxies and quasars show that these objects are redshifted—the light emitted from them has been shifted to longer wavelengths. This can be seen by taking a frequency spectrum of an object and matching the spectroscopic pattern of emission lines or absorption lines corresponding to atoms of the chemical elements interacting with the light. These redshifts are uniformly isotropic, distributed evenly among the observed objects in all directions. If the redshift is interpreted as a Doppler shift, the recessional velocity of the object can be calculated. For some galaxies, it is possible to estimate distances via the cosmic distance ladder. When the recessional velocities are plotted against these distances, a linear relationship known as Hubble's law is observed: "v" = "H₀D", where "v" is the recessional velocity, "D" is the distance to the object, and "H₀" is the Hubble constant. Hubble's law has two possible explanations. Either we are at the center of an explosion of galaxies—which is untenable given the Copernican principle—or the universe is uniformly expanding everywhere. This universal expansion was predicted from general relativity by Alexander Friedmann in 1922 and Georges Lemaître in 1927, well before Hubble made his 1929 analysis and observations, and it remains the cornerstone of the Big Bang theory as developed by Friedmann, Lemaître, Robertson, and Walker. The theory requires the relation "v" = "HD" to hold at all times, where "D" is the comoving distance, "v" is the recessional velocity, and "v", "H", and "D" vary as the universe expands (hence we write "H₀" to denote the present-day Hubble "constant"). For distances much smaller than the size of the observable universe, the Hubble redshift can be thought of as the Doppler shift corresponding to the recession velocity "v". However, the redshift is not a true Doppler shift, but rather the result of the expansion of the universe between the time the light was emitted and the time that it was detected. That space is undergoing metric expansion is shown by direct observational evidence of the cosmological principle and the Copernican principle, which together with Hubble's law have no other explanation. Astronomical redshifts are extremely isotropic and homogeneous, supporting the cosmological principle that the universe looks the same in all directions, along with much other evidence. If the redshifts were the result of an explosion from a center distant from us, they would not be so similar in different directions. Measurements of the effects of the cosmic microwave background radiation on the dynamics of distant astrophysical systems in 2000 proved the Copernican principle, that, on a cosmological scale, the Earth is not in a central position. Radiation from the Big Bang was demonstrably warmer at earlier times throughout the universe. Uniform cooling of the CMB over billions of years is explainable only if the universe is experiencing a metric expansion, and excludes the possibility that we are near the unique center of an explosion. In 1964 Arno Penzias and Robert Wilson serendipitously discovered the cosmic background radiation, an omnidirectional signal in the microwave band. Their discovery provided substantial confirmation of the big-bang predictions by Alpher, Herman and Gamow around 1950. Through the 1970s the radiation was found to be approximately consistent with a black body spectrum in all directions; this spectrum has been redshifted by the expansion of the universe, and today corresponds to approximately 2.725 K. 
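A quick way to see why a 2.725 K blackbody shows up as a microwave signal is Wien's displacement law, a general blackbody relation used here purely as an illustration:

    # Peak wavelength of a 2.725 K blackbody via Wien's displacement law.
    wien_b = 2.898e-3            # Wien displacement constant, metre-kelvins
    T_cmb = 2.725                # present-day CMB temperature, kelvin

    peak_wavelength_m = wien_b / T_cmb                 # ~1.06e-3 m
    print(round(peak_wavelength_m * 1000, 2), "mm")    # ~1.06 mm, i.e. in the microwave band

Because the temperature of this radiation scales with redshift as T(z) = T₀(1 + z), the same spectrum peaked at proportionally shorter wavelengths, and higher temperatures, at earlier epochs, which is the basis of the "warmer at earlier times" statement above.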
These findings tipped the balance of evidence in favor of the Big Bang model, and Penzias and Wilson were awarded a Nobel Prize in 1978. The "surface of last scattering" corresponding to emission of the CMB occurs shortly after "recombination", the epoch when neutral hydrogen becomes stable. Prior to this, the universe comprised a hot dense photon-baryon plasma sea where photons were quickly scattered from free charged particles. Around this epoch, the mean free path for a photon becomes long enough to reach the present day and the universe becomes transparent. In 1989, NASA launched the Cosmic Background Explorer satellite (COBE), which made two major advances: in 1990, high-precision spectrum measurements showed that the CMB frequency spectrum is an almost perfect blackbody with no deviations at a level of 1 part in 10⁴, and measured a residual temperature of 2.726 K (more recent measurements have revised this figure down slightly to 2.7255 K); then in 1992, further COBE measurements discovered tiny fluctuations (anisotropies) in the CMB temperature across the sky, at a level of about one part in 10⁵. John C. Mather and George Smoot were awarded the 2006 Nobel Prize in Physics for their leadership in these results. During the following decade, CMB anisotropies were further investigated by a large number of ground-based and balloon experiments. In 2000–2001 several experiments, most notably BOOMERanG, found the shape of the universe to be spatially almost flat by measuring the typical angular size (the size on the sky) of the anisotropies. In early 2003, the first results of the Wilkinson Microwave Anisotropy Probe (WMAP) were released, yielding what were at the time the most accurate values for some of the cosmological parameters. The results disproved several specific cosmic inflation models, but are consistent with the inflation theory in general. The Planck space probe was launched in May 2009. Other ground-based and balloon-based cosmic microwave background experiments are ongoing. Using the Big Bang model it is possible to calculate the concentration of helium-4, helium-3, deuterium, and lithium-7 in the universe as ratios to the amount of ordinary hydrogen. The relative abundances depend on a single parameter, the ratio of photons to baryons. This value can be calculated independently from the detailed structure of CMB fluctuations. The ratios predicted (by mass, not by number) are about 0.25 for ⁴He/H, about 10⁻³ for ²H/H, about 10⁻⁴ for ³He/H and about 10⁻⁹ for ⁷Li/H. The measured abundances all agree at least roughly with those predicted from a single value of the baryon-to-photon ratio. The agreement is excellent for deuterium, close but formally discrepant for ⁴He, and off by a factor of two for ⁷Li; in the latter two cases there are substantial systematic uncertainties. Nonetheless, the general consistency with abundances predicted by Big Bang nucleosynthesis is strong evidence for the Big Bang, as the theory is the only known explanation for the relative abundances of light elements, and it is virtually impossible to "tune" the Big Bang to produce much more or less than 20–30% helium. Indeed, there is no obvious reason outside of the Big Bang that, for example, the young universe (i.e., before star formation, as determined by studying matter supposedly free of stellar nucleosynthesis products) should have more helium than deuterium or more deuterium than ³He, and in constant ratios, too. 
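The quarter-by-mass helium figure has a simple piece of bookkeeping behind it. If essentially all neutrons present at the time of nucleosynthesis end up bound in ⁴He, and the neutron-to-proton ratio is about 1 to 7 (a standard textbook value, assumed here rather than quoted in this article), the predicted helium mass fraction is

    Y_p \approx \frac{2\,(n/p)}{1 + (n/p)} = \frac{2 \times \tfrac{1}{7}}{1 + \tfrac{1}{7}} = \frac{2}{8} = 0.25

in line with the 20–30% helium range that, as noted above, the model cannot easily be tuned away from.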
Detailed observations of the morphology and distribution of galaxies and quasars are in agreement with the current state of the Big Bang theory. A combination of observations and theory suggests that the first quasars and galaxies formed about a billion years after the Big Bang, and since then, larger structures have been forming, such as galaxy clusters and superclusters. Populations of stars have been aging and evolving, so that distant galaxies (which are observed as they were in the early universe) appear very different from nearby galaxies (observed in a more recent state). Moreover, galaxies that formed relatively recently appear markedly different from galaxies formed at similar distances but shortly after the Big Bang. These observations are strong arguments against the steady-state model. Observations of star formation, galaxy and quasar distributions and larger structures agree well with Big Bang simulations of the formation of structure in the universe, and are helping to complete details of the theory. In 2011, astronomers found what they believe to be pristine clouds of primordial gas by analyzing absorption lines in the spectra of distant quasars. Before this discovery, all other astronomical objects had been observed to contain heavy elements that are formed in stars. These two clouds of gas contain no elements heavier than hydrogen and deuterium. Since the clouds of gas have no heavy elements, they likely formed in the first few minutes after the Big Bang, during Big Bang nucleosynthesis. The age of the universe as estimated from the Hubble expansion and the CMB is now in good agreement with other estimates using the ages of the oldest stars, both as measured by applying the theory of stellar evolution to globular clusters and through radiometric dating of individual Population II stars. The prediction that the CMB temperature was higher in the past has been experimentally supported by observations of very low temperature absorption lines in gas clouds at high redshift. This prediction also implies that the amplitude of the Sunyaev–Zel'dovich effect in clusters of galaxies does not depend directly on redshift. Observations have found this to be roughly true, but this effect depends on cluster properties that do change with cosmic time, making precise measurements difficult. Future gravitational-wave observatories might be able to detect primordial gravitational waves, relics of the early universe, up to less than a second after the Big Bang. As with any theory, a number of mysteries and problems have arisen as a result of the development of the Big Bang theory. Some of these mysteries and problems have been resolved while others are still outstanding. Proposed solutions to some of the problems in the Big Bang model have revealed new mysteries of their own. For example, the horizon problem, the magnetic monopole problem, and the flatness problem are most commonly resolved with inflationary theory, but the details of the inflationary universe are still left unresolved and many, including some founders of the theory, say it has been disproven. What follows is a list of the mysterious aspects of the Big Bang theory still under intense investigation by cosmologists and astrophysicists. It is not yet understood why the universe has more matter than antimatter. It is generally assumed that when the universe was young and very hot it was in statistical equilibrium and contained equal numbers of baryons and antibaryons. 
However, observations suggest that the universe, including its most distant parts, is made almost entirely of matter. A process called baryogenesis was hypothesized to account for the asymmetry. For baryogenesis to occur, the Sakharov conditions must be satisfied. These require that baryon number is not conserved, that C-symmetry and CP-symmetry are violated and that the universe depart from thermodynamic equilibrium. All these conditions occur in the Standard Model, but the effects are not strong enough to explain the present baryon asymmetry. Measurements of the redshift–magnitude relation for type Ia supernovae indicate that the expansion of the universe has been accelerating since the universe was about half its present age. To explain this acceleration, general relativity requires that much of the energy in the universe consists of a component with large negative pressure, dubbed "dark energy". Dark energy, though speculative, solves numerous problems. Measurements of the cosmic microwave background indicate that the universe is very nearly spatially flat, and therefore according to general relativity the universe must have almost exactly the critical density of mass/energy. But the mass density of the universe can be measured from its gravitational clustering, and is found to have only about 30% of the critical density. Since theory suggests that dark energy does not cluster in the usual way it is the best explanation for the "missing" energy density. Dark energy also helps to explain two geometrical measures of the overall curvature of the universe, one using the frequency of gravitational lenses, and the other using the characteristic pattern of the large-scale structure as a cosmic ruler. Negative pressure is believed to be a property of vacuum energy, but the exact nature and existence of dark energy remains one of the great mysteries of the Big Bang. Results from the WMAP team in 2008 are in accordance with a universe that consists of 73% dark energy, 23% dark matter, 4.6% regular matter and less than 1% neutrinos. According to theory, the energy density in matter decreases with the expansion of the universe, but the dark energy density remains constant (or nearly so) as the universe expands. Therefore, matter made up a larger fraction of the total energy of the universe in the past than it does today, but its fractional contribution will fall in the far future as dark energy becomes even more dominant. The dark energy component of the universe has been explained by theorists using a variety of competing theories including Einstein's cosmological constant but also extending to more exotic forms of quintessence or other modified gravity schemes. A cosmological constant problem, sometimes called the "most embarrassing problem in physics", results from the apparent discrepancy between the measured energy density of dark energy, and the one naively predicted from Planck units. During the 1970s and the 1980s, various observations showed that there is not sufficient visible matter in the universe to account for the apparent strength of gravitational forces within and between galaxies. This led to the idea that up to 90% of the matter in the universe is dark matter that does not emit light or interact with normal baryonic matter. In addition, the assumption that the universe is mostly normal matter led to predictions that were strongly inconsistent with observations. 
In particular, the universe today is far more lumpy and contains far less deuterium than can be accounted for without dark matter. While dark matter has always been controversial, it is inferred by various observations: the anisotropies in the CMB, galaxy cluster velocity dispersions, large-scale structure distributions, gravitational lensing studies, and X-ray measurements of galaxy clusters. Indirect evidence for dark matter comes from its gravitational influence on other matter, as no dark matter particles have been observed in laboratories. Many particle physics candidates for dark matter have been proposed, and several projects to detect them directly are underway. Additionally, there are outstanding problems associated with the currently favored cold dark matter model which include the dwarf galaxy problem and the cuspy halo problem. Alternative theories have been proposed that do not require a large amount of undetected matter, but instead modify the laws of gravity established by Newton and Einstein; yet no alternative theory has been as successful as the cold dark matter proposal in explaining all extant observations. The horizon problem results from the premise that information cannot travel faster than light. In a universe of finite age this sets a limit—the particle horizon—on the separation of any two regions of space that are in causal contact. The observed isotropy of the CMB is problematic in this regard: if the universe had been dominated by radiation or matter at all times up to the epoch of last scattering, the particle horizon at that time would correspond to about 2 degrees on the sky. There would then be no mechanism to cause wider regions to have the same temperature. A resolution to this apparent inconsistency is offered by inflationary theory in which a homogeneous and isotropic scalar energy field dominates the universe at some very early period (before baryogenesis). During inflation, the universe undergoes exponential expansion, and the particle horizon expands much more rapidly than previously assumed, so that regions presently on opposite sides of the observable universe are well inside each other's particle horizon. The observed isotropy of the CMB then follows from the fact that this larger region was in causal contact before the beginning of inflation. Heisenberg's uncertainty principle predicts that during the inflationary phase there would be quantum thermal fluctuations, which would be magnified to cosmic scale. These fluctuations serve as the seeds of all current structure in the universe. Inflation predicts that the primordial fluctuations are nearly scale invariant and Gaussian, which has been accurately confirmed by measurements of the CMB. If inflation occurred, exponential expansion would push large regions of space well beyond our observable horizon. A related issue to the classic horizon problem arises because in most standard cosmological inflation models, inflation ceases well before electroweak symmetry breaking occurs, so inflation should not be able to prevent large-scale discontinuities in the electroweak vacuum since distant parts of the observable universe were causally separate when the electroweak epoch ended. The magnetic monopole objection was raised in the late 1970s. Grand unified theories predicted topological defects in space that would manifest as magnetic monopoles. 
These objects would be produced efficiently in the hot early universe, resulting in a density much higher than is consistent with observations, given that no monopoles have been found. This problem is also resolved by cosmic inflation, which removes all point defects from the observable universe, in the same way that it drives the geometry to flatness. The flatness problem (also known as the oldness problem) is an observational problem associated with a Friedmann–Lemaître–Robertson–Walker metric (FLRW). The universe may have positive, negative, or zero spatial curvature depending on its total energy density. Curvature is negative if the density is less than the critical density, positive if greater, and zero at the critical density, in which case space is said to be "flat". The problem is that any small departure from the critical density grows with time, and yet the universe today remains very close to flat. Given that a natural timescale for departure from flatness might be the Planck time, 10⁻⁴³ seconds, the fact that the universe has reached neither a heat death nor a Big Crunch after billions of years requires an explanation. For instance, even at the relatively late age of a few minutes (the time of nucleosynthesis), the density of the universe must have been within one part in 10¹⁴ of its critical value, or it would not exist as it does today. Gottfried Wilhelm Leibniz wrote: "Why is there something rather than nothing? The sufficient reason [...] is found in a substance which [...] is a necessary being bearing the reason for its existence within itself." Philosopher of physics Dean Rickles has argued that numbers and mathematics (or their underlying laws) may necessarily exist. Physics may conclude that time did not exist before the Big Bang but "started" with it, and hence that there might be no "beginning", "before", or potentially even "cause", and that the universe has in that sense always existed. Some also argue that nothing cannot exist or that non-existence might never have been an option. Quantum fluctuations, or other laws of physics that may have existed at the start of the Big Bang, could then create the conditions for matter to occur. Before observations of dark energy, cosmologists considered two scenarios for the future of the universe. If the mass density of the universe were greater than the critical density, then the universe would reach a maximum size and then begin to collapse. It would become denser and hotter again, ending with a state similar to that in which it started—a Big Crunch. Alternatively, if the density in the universe were equal to or below the critical density, the expansion would slow down but never stop. Star formation would cease with the consumption of interstellar gas in each galaxy; stars would burn out, leaving white dwarfs, neutron stars, and black holes. Very gradually, collisions between these would result in mass accumulating into larger and larger black holes. The average temperature of the universe would asymptotically approach absolute zero—a Big Freeze. Moreover, if the proton were unstable, then baryonic matter would disappear, leaving only radiation and black holes. Eventually, black holes would evaporate by emitting Hawking radiation. The entropy of the universe would increase to the point where no organized form of energy could be extracted from it, a scenario known as heat death. Modern observations of accelerating expansion imply that more and more of the currently visible universe will pass beyond our event horizon and out of contact with us. 
The eventual outcome of this accelerating expansion is not known. The ΛCDM model of the universe contains dark energy in the form of a cosmological constant. This theory suggests that only gravitationally bound systems, such as galaxies, will remain together, and they too will be subject to heat death as the universe expands and cools. Other explanations of dark energy, called phantom energy theories, suggest that ultimately galaxy clusters, stars, planets, atoms, nuclei, and matter itself will be torn apart by the ever-increasing expansion in a so-called Big Rip. The following is a partial list of popular misconceptions about the Big Bang model: "The Big Bang as the origin of the universe": One of the common misconceptions about the Big Bang model is the belief that it was the origin of the universe. However, the Big Bang model does not describe how the universe came into being. The current conception of the Big Bang model assumes the existence of energy, time, and space, and does not address their origin or the cause of the dense and high-temperature initial state of the universe. "The Big Bang was "small"": It is misleading to visualize the Big Bang by comparing its size to everyday objects. When the size of the universe at the Big Bang is described, it refers to the size of the observable universe, not the entire universe. "Hubble's law violates the special theory of relativity": Hubble's law predicts that galaxies beyond the Hubble distance recede faster than the speed of light. However, special relativity applies only to motion through space; Hubble's law describes velocity that results from expansion "of" space, rather than "through" space. "Doppler redshift vs cosmological redshift": Astronomers often refer to the cosmological redshift as a normal Doppler shift, which is a misconception. Although similar, the cosmological redshift is not identical to the Doppler redshift. The Doppler redshift is based on special relativity, which does not consider the expansion of space. By contrast, the cosmological redshift is based on general relativity, in which the expansion of space is taken into account. Although the two may appear identical for nearby galaxies, interpreting the behavior of distant galaxies through the Doppler redshift can cause confusion. While the Big Bang model is well established in cosmology, it is likely to be refined. The Big Bang theory, built upon the equations of classical general relativity, indicates a singularity at the origin of cosmic time; such an infinite energy density is regarded as impossible in physics. In any case, it is known that the equations are not applicable before the time when the universe cooled down to the Planck temperature, and this conclusion depends on various assumptions, some of which could never be experimentally verified. (Also see Planck epoch.) One proposed refinement to avoid this would-be singularity is to develop a correct treatment of quantum gravity. It is not known what could have preceded the hot dense state of the early universe or how and why it originated, though speculation abounds in the field of cosmogony. Some proposals, each of which entails untested hypotheses, have been advanced; those in the last two categories see the Big Bang as an event in either a much larger and older universe or in a multiverse. As a description of the origin of the universe, the Big Bang has significant bearing on religion and philosophy. As a result, it has become one of the liveliest areas in the discourse between science and religion. 
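Returning to the Hubble's-law and redshift misconceptions above, a rough worked illustration (the present-day value H_0 ≈ 70 km/s/Mpc is assumed here purely for illustration; no figure is given in the text):

\[ v = H_0 d, \qquad d_H = \frac{c}{H_0} \approx \frac{3\times10^5\ \mathrm{km/s}}{70\ \mathrm{km/s/Mpc}} \approx 4.3\ \mathrm{Gpc} \approx 1.4\times 10^{10}\ \text{light-years}, \]

so galaxies farther away than roughly d_H recede, in the expansion-of-space sense, faster than c without contradicting special relativity. The cosmological redshift itself is governed by the growth of the scale factor between emission and observation, 1 + z = a(t_obs)/a(t_emit), rather than by a special-relativistic Doppler formula, although the two coincide closely for nearby galaxies.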
Some believe the Big Bang implies a creator, and some see its mention in their holy books, while others argue that Big Bang cosmology makes the notion of a creator superfluous. Boeing 767 The Boeing 767 is a mid- to large-size, mid- to long-range, wide-body twin-engine jet airliner built by Boeing Commercial Airplanes. It was Boeing's first wide-body twinjet and its first airliner with a two-crew glass cockpit. The aircraft has two turbofan engines, a conventional tail, and, for reduced aerodynamic drag, a supercritical wing design. Designed as a smaller wide-body airliner than earlier aircraft such as the 747, the 767 has a seating capacity for 181 to 375 people, and a design range of , depending on variant. Development of the 767 occurred in tandem with a narrow-body twinjet, the 757, resulting in shared design features which allow pilots to obtain a common type rating to operate both aircraft. The 767 is produced in three fuselage lengths. The original 767-200 entered service in 1982, followed by the 767-300 in 1986 and the 767-400ER, an extended-range (ER) variant, in 2000. The extended-range 767-200ER and 767-300ER models entered service in 1984 and 1988, respectively, while a production freighter version, the 767-300F, debuted in 1995. Conversion programs have modified passenger 767-200 and 767-300 series aircraft for cargo use, while military derivatives include the E-767 surveillance aircraft, the KC-767 and KC-46 aerial tankers, and VIP transports. Engines featured on the 767 include the General Electric CF6, Pratt & Whitney JT9D and PW4000, and Rolls-Royce RB211 turbofans. United Airlines first placed the 767 in commercial service in 1982. The aircraft was initially flown on domestic and transcontinental routes, during which it demonstrated the reliability of its twinjet design. In 1985, the 767 became the first twin-engined airliner to receive regulatory approval for extended overseas flights. The aircraft was then used to expand non-stop service on medium- to long-haul intercontinental routes. In 1986, Boeing initiated studies for a higher-capacity 767, ultimately leading to the development of the 777, a larger wide-body twinjet. In the 1990s, the 767 became the most frequently used airliner for transatlantic flights between North America and Europe. The 767 is the first twinjet wide-body type to reach 1,000 aircraft delivered. As of June 2018, Boeing has received 1,224 orders for the 767 from 74 customers with 1,115 delivered. A total of 764 of these aircraft were in service in July 2017. The most popular variant is the 767-300ER with 583 delivered. Delta Air Lines is the largest operator with 91 aircraft. Competitors have included the Airbus A300, A310, and A330-200. Non-passenger variants of the 767 remain in production as of 2018 while the passenger variant's successor, the 787, entered service in 2011. In 1970, Boeing's 747 became the first wide-body jetliner to enter service. The 747 was the first passenger jet wide enough to feature a twin-aisle cabin. Two years later, the manufacturer began a development study, code-named 7X7, for a new wide-body aircraft intended to replace the 707 and other early generation narrow-body jets. The aircraft would also provide twin-aisle seating, but in a smaller fuselage than the existing 747, McDonnell Douglas DC-10, and Lockheed L-1011 TriStar wide-bodies. 
To defray the high cost of development, Boeing signed risk-sharing agreements with Italian corporation Aeritalia and the Civil Transport Development Corporation (CTDC), a consortium of Japanese aerospace companies. This marked the manufacturer's first major international joint venture, and both Aeritalia and the CTDC received supply contracts in return for their early participation. The initial 7X7 was conceived as a short take-off and landing airliner intended for short-distance flights, but customers were unenthusiastic about the concept, leading to its redefinition as a mid-size, transcontinental-range airliner. At this stage the proposed aircraft featured two or three engines, with possible configurations including over-wing engines and a T-tail. By 1976, a twinjet layout, similar to the one which had debuted on the Airbus A300, became the baseline configuration. The decision to use two engines reflected increased industry confidence in the reliability and economics of new-generation jet powerplants. While airline requirements for new wide-body aircraft remained ambiguous, the 7X7 was generally focused on mid-size, high-density markets. As such, it was intended to transport large numbers of passengers between major cities. Advancements in civil aerospace technology, including high-bypass-ratio turbofan engines, new flight deck systems, aerodynamic improvements, and lighter construction materials were to be applied to the 7X7. Many of these features were also included in a parallel development effort for a new mid-size narrow-body airliner, code-named 7N7, which would become the 757. Work on both proposals proceeded through the airline industry upturn in the late 1970s. In January 1978, Boeing announced a major extension of its Everett factory—which was then dedicated to manufacturing the 747—to accommodate its new wide-body family. In February 1978, the new jetliner received the 767 model designation, and three variants were planned: a 767-100 with 190 seats, a 767-200 with 210 seats, and a trijet 767MR/LR version with 200 seats intended for intercontinental routes. The 767MR/LR was subsequently renamed 777 for differentiation purposes. The 767 was officially launched on July 14, 1978, when United Airlines ordered 30 of the 767-200 variant, followed by 50 more 767-200 orders from American Airlines and Delta Air Lines later that year. The 767-100 was ultimately not offered for sale, as its capacity was too close to the 757's seating, while the 777 trijet was eventually dropped in favor of standardizing around the twinjet configuration. In the late 1970s, operating cost replaced capacity as the primary factor in airliner purchases. As a result, the 767's design process emphasized fuel efficiency from the outset. Boeing targeted a 20 to 30 percent cost saving over earlier aircraft, mainly through new engine and wing technology. As development progressed, engineers used computer-aided design for over a third of the 767's design drawings, and performed 26,000 hours of wind tunnel tests. Design work occurred concurrently with the 757 twinjet, leading Boeing to treat both as almost one program to reduce risk and cost. Both aircraft would ultimately receive shared design features, including avionics, flight management systems, instruments, and handling characteristics. Combined development costs were estimated at $3.5 to $4 billion. 
Early 767 customers were given the choice of Pratt & Whitney JT9D or General Electric CF6 turbofans, marking the first time that Boeing had offered more than one engine option at the launch of a new airliner. Both jet engine models had a maximum output of of thrust. The engines were mounted approximately one-third the length of the wing from the fuselage, similar to previous wide-body trijets. The larger wings were designed using an aft-loaded shape which reduced aerodynamic drag and distributed lift more evenly across their surface span than any of the manufacturer's previous aircraft. The wings provided higher-altitude cruise performance, added fuel capacity, and expansion room for future stretched variants. The initial 767-200 was designed for sufficient range to fly across North America or across the northern Atlantic, and would be capable of operating routes up to . The 767's fuselage width was set midway between that of the 707 and the 747 at . While it was narrower than previous wide-body designs, seven abreast seating with two aisles could be fitted, and the reduced width produced less aerodynamic drag. The fuselage was not wide enough to accommodate two standard LD3 wide-body unit load devices side-by-side, so a smaller container, the LD2, was created specifically for the 767. Using a conventional tail design also allowed the rear fuselage to be tapered over a shorter section, providing for parallel aisles along the full length of the passenger cabin, and eliminating irregular seat rows toward the rear of the aircraft. The 767 was the first Boeing wide-body to be designed with a two-crew digital glass cockpit. Cathode ray tube (CRT) color displays and new electronics replaced the role of the flight engineer by enabling the pilot and co-pilot to monitor aircraft systems directly. Despite the promise of reduced crew costs, United Airlines initially demanded a conventional three-person cockpit, citing concerns about the risks associated with introducing a new aircraft. The carrier maintained this position until July 1981, when a US presidential task force determined that a crew of two was safe for operating wide-body jets. A three-crew cockpit remained as an option and was fitted to the first production models. Ansett Australia ordered 767s with three-crew cockpits due to union demands; it was the only airline to operate 767s so configured. The 767's two-crew cockpit was also applied to the 757, allowing pilots to operate both aircraft after a short conversion course, and adding incentive for airlines to purchase both types. Although nominally similar in control design, flying the 767 feels different from the 757. The 757's controls are heavy, similar to the 727 and 747; the control yoke can be rotated to 90 degrees in each direction. The 767 has far lighter control feel in pitch and roll, and the control yoke has approximately 2/3 the rotation travel. To produce the 767, Boeing formed a network of subcontractors which included domestic suppliers and international contributions from Italy's Aeritalia and Japan's CTDC. The wings and cabin floor were produced in-house, while Aeritalia provided control surfaces, Boeing Vertol made the leading edge for the wings, and Boeing Wichita produced the forward fuselage. The CTDC provided multiple assemblies through its constituent companies, namely Fuji Heavy Industries (wing fairings and gear doors), Kawasaki Heavy Industries (center fuselage), and Mitsubishi Heavy Industries (rear fuselage, doors, and tail). 
Components were integrated during final assembly at the Everett factory. For expedited production of wing spars, the main structural member of aircraft wings, the Everett factory received robotic machinery to automate the process of drilling holes and inserting fasteners. This method of wing construction expanded on techniques developed for the 747. Final assembly of the first aircraft began in July 1979. The prototype aircraft, registered N767BA and equipped with JT9D turbofans, rolled out on August 4, 1981. By this time, the 767 program had accumulated 173 firm orders from 17 customers, including Air Canada, All Nippon Airways, Britannia Airways, Transbrasil, and Trans World Airlines (TWA). On September 26, 1981, the prototype took its maiden flight under the command of company test pilots Tommy Edmonds, Lew Wallick, and John Brit. The maiden flight was largely uneventful, save for the inability to retract the landing gear because of a hydraulic fluid leak. The prototype was used for subsequent flight tests. The 10-month 767 flight test program utilized the first six aircraft built. The first four aircraft were equipped with JT9D engines, while the fifth and sixth were fitted with CF6 engines. The test fleet was largely used to evaluate avionics, flight systems, handling, and performance, while the sixth aircraft was used for route-proving flights. During testing, pilots described the 767 as generally easy to fly, with its maneuverability unencumbered by the bulkiness associated with larger wide-body jets. Following 1,600 hours of flight tests, the JT9D-powered 767-200 received certification from the US Federal Aviation Administration (FAA) and the UK Civil Aviation Authority (CAA) in July 1982. The first delivery occurred on August 19, 1982, to United Airlines. The CF6-powered 767-200 received certification in September 1982, followed by the first delivery to Delta Air Lines on October 25, 1982. The 767 entered service with United Airlines on September 8, 1982. The aircraft's first commercial flight used a JT9D-powered 767-200 on the Chicago-to-Denver route. The CF6-powered 767-200 commenced service three months later with Delta Air Lines. Upon delivery, early 767s were mainly deployed on domestic routes, including US transcontinental services. American Airlines and TWA began flying the 767-200 in late 1982, while Air Canada, China Airlines, and El Al began operating the aircraft in 1983. The aircraft's introduction was relatively smooth, with few operational glitches and greater dispatch reliability than prior jetliners. In its first year, the 767 logged a 96.1 percent dispatch rate, which exceeded the industry average for new aircraft. Operators reported generally favorable ratings for the twinjet's sound levels, interior comfort, and economic performance. Resolved issues were minor and included the recalibration of a leading edge sensor to prevent false readings, the replacement of an evacuation slide latch, and the repair of a tailplane pivot to match production specifications. Seeking to capitalize on its new wide-body's potential for growth, Boeing offered an extended-range model, the 767-200ER, in its first year of service. Ethiopian Airlines placed the first order for the type in December 1982. Featuring increased gross weight and greater fuel capacity, the extended-range model could carry heavier payloads at distances up to , and was targeted at overseas customers. The 767-200ER entered service with El Al Airline on March 27, 1984. 
The type was mainly ordered by international airlines operating medium-traffic, long-distance flights. In the mid-1980s, the 767 spearheaded the growth of twinjet flights across the northern Atlantic under extended-range twin-engine operational performance standards (ETOPS) regulations, the FAA's safety rules governing transoceanic flights by aircraft with two engines. Before the 767, overwater flight paths of twinjets could be no more than 90 minutes away from diversion airports. In May 1985, the FAA granted its first approval for 120-minute ETOPS flights to 767 operators, on an individual airline basis starting with TWA, provided that the operator met flight safety criteria. This allowed the aircraft to fly overseas routes at up to two hours' distance from land. The larger safety margins were permitted because of the improved reliability demonstrated by the twinjet and its turbofan engines. The FAA lengthened the ETOPS time to 180 minutes for CF6-powered 767s in 1989, making the type the first to be certified under the longer duration, and all available engines received approval by 1993. Regulatory approval spurred the expansion of transoceanic 767 flights and boosted the aircraft's sales. Forecasting airline interest in larger-capacity models, Boeing announced the stretched 767-300 in 1983 and the extended-range 767-300ER in 1984. Both models offered a 20 percent passenger capacity increase, while the extended-range version was capable of operating flights up to . Japan Airlines placed the first order for the 767-300 in September 1983. Following its first flight on January 30, 1986, the type entered service with Japan Airlines on October 20, 1986. The 767-300ER completed its first flight on December 9, 1986, but it was not until March 1987 that the first firm order, from American Airlines, was placed. The type entered service with American Airlines on March 3, 1988. The 767-300 and 767-300ER gained popularity after entering service, and came to account for approximately two-thirds of all 767s sold. After the debut of the first stretched 767s, Boeing sought to address airline requests for greater capacity by proposing larger models, including a partial double-deck version informally named the "Hunchback of Mukilteo" (from a town near Boeing's Everett factory) with a 757 body section mounted over the aft main fuselage. In 1986, Boeing proposed the 767-X, a revised model with extended wings and a wider cabin, but received little interest. By 1988, the 767-X had evolved into an all-new twinjet, which revived the 777 designation. Until the 777's 1995 debut, the 767-300 and 767-300ER remained Boeing's second-largest wide-bodies behind the 747. Buoyed by a recovering global economy and ETOPS approval, 767 sales accelerated in the mid-to-late 1980s; 1989 was the most prolific year with 132 firm orders. By the early 1990s, the wide-body twinjet had become its manufacturer's annual best-selling aircraft, despite a slight decrease due to economic recession. During this period, the 767 became the most common airliner for transatlantic flights between North America and Europe. By the end of the decade, 767s crossed the Atlantic more frequently than all other aircraft types combined. The 767 also propelled the growth of point-to-point flights which bypassed major airline hubs in favor of direct routes. Taking advantage of the aircraft's lower operating costs and smaller capacity, operators added non-stop flights to secondary population centers, thereby eliminating the need for connecting flights. 
The increased number of cities receiving non-stop services caused a paradigm shift in the airline industry as point-to-point travel gained prominence at the expense of the traditional hub-and-spoke model. In February 1990, the first 767 equipped with Rolls-Royce RB211 turbofans, a 767-300, was delivered to British Airways. Six months later, the carrier temporarily grounded its entire 767 fleet after discovering cracks in the engine pylons of several aircraft. The cracks were related to the extra weight of the RB211 engines, which are heavier than other 767 engines. During the grounding, interim repairs were conducted to alleviate stress on engine pylon components, and a parts redesign in 1991 prevented further cracks. Boeing also performed a structural reassessment, resulting in production changes and modifications to the engine pylons of all 767s in service. In January 1993, following an order from UPS Airlines, Boeing launched a freighter variant, the 767-300F, which entered service with UPS on October 16, 1995. The 767-300F featured a main deck cargo hold, upgraded landing gear, and strengthened wing structure. In November 1993, the Japanese government launched the first 767 military derivative when it placed orders for the E-767, an Airborne Early Warning and Control (AWACS) variant based on the 767-200ER. The first two E-767s, featuring extensive modifications to accommodate surveillance radar and other monitoring equipment, were delivered in 1998 to the Japan Self-Defense Forces. In November 1995, after abandoning development of a smaller version of the 777, Boeing announced that it was revisiting studies for a larger 767. The proposed 767-400X, a second stretch of the aircraft, offered a 12 percent capacity increase versus the 767-300, and featured an upgraded flight deck, enhanced interior, and greater wingspan. The variant was specifically aimed at Delta Air Lines' pending replacement of its aging Lockheed L-1011 TriStars, and faced competition from the A330-200, a shortened derivative of the Airbus A330. In March 1997, Delta Air Lines launched the 767-400ER when it ordered the type to replace its L-1011 fleet. In October 1997, Continental Airlines also ordered the 767-400ER to replace its McDonnell Douglas DC-10 fleet. The type completed its first flight on October 9, 1999, and entered service with Continental Airlines on September 14, 2000. In the early 2000s, cumulative 767 deliveries approached 900, but new sales declined during an airline industry downturn. In 2001, Boeing dropped plans for a longer-range model, the 767-400ERX, in favor of the proposed Sonic Cruiser, a new jetliner which aimed to fly 15 percent faster while having fuel costs comparable to the 767's. The following year, Boeing announced the KC-767 Tanker Transport, a second military derivative of the 767-200ER. Launched with an order in October 2002 from the Italian Air Force, the KC-767 was intended for the dual role of refueling other aircraft and carrying cargo. The Japanese government became the second customer for the type in March 2003. In May 2003, the United States Air Force (USAF) announced its intent to lease KC-767s to replace its aging KC-135 tankers. The plan was suspended in March 2004 amid a conflict of interest scandal, resulting in multiple US government investigations and the departure of several Boeing officials, including Philip Condit, the company's chief executive officer, and chief financial officer Michael Sears. The first KC-767s were delivered in 2008 to the Japan Self-Defense Forces. 
In late 2002, after airlines expressed reservations about its emphasis on speed over cost reduction, Boeing halted development of the Sonic Cruiser. The following year, the manufacturer announced the 7E7, a mid-size 767 successor made from composite materials which promised to be 20 percent more fuel efficient. The new jetliner was the first stage of a replacement aircraft initiative called the Boeing Yellowstone Project. Customers embraced the 7E7, later renamed 787 Dreamliner, and within two years it had become the fastest-selling airliner in the company's history. In 2005, Boeing opted to continue 767 production despite record Dreamliner sales, citing a need to provide customers waiting for the 787 with a more readily available option. Subsequently, the 767-300ER was offered to customers affected by 787 delays, including All Nippon Airways and Japan Airlines. Some aging 767s, exceeding 20 years in age, were also kept in service past planned retirement dates due to the delays. To extend the operational lives of older aircraft, airlines increased heavy maintenance procedures, including D-check teardowns and inspections for corrosion, a recurring issue on aging 767s. The first 787s entered service with All Nippon Airways in October 2011, 42 months behind schedule. In 2007, the 767 received a production boost when UPS and DHL Aviation placed a combined 33 orders for the 767-300F. Renewed freighter interest led Boeing to consider enhanced versions of the 767-200 and 767-300F with increased gross weights, 767-400ER wing extensions, and 777 avionics. Net orders for the 767 declined from 24 in 2008 to just three in 2010. During the same period, operators upgraded aircraft already in service; in 2008, the first 767-300ER retrofitted with blended winglets from Aviation Partners Incorporated debuted with American Airlines. The manufacturer-sanctioned winglets, at in height, improved fuel efficiency by an estimated 6.5 percent. Other carriers including All Nippon Airways and Delta Air Lines also ordered winglet kits. On February 2, 2011, the 1,000th 767 rolled out, destined for All Nippon Airways. The aircraft was the 91st 767-300ER ordered by the Japanese carrier, and with its completion the 767 became the second wide-body airliner to reach the thousand-unit milestone after the 747. The 1,000th aircraft also marked the last model produced on the original 767 assembly line. Beginning with the 1,001st aircraft, production moved to another area in the Everett factory which occupied about half of the previous floor space. The new assembly line made room for 787 production and aimed to boost manufacturing efficiency by over twenty percent. At the inauguration of its new assembly line, the 767's order backlog numbered approximately 50, only enough for production to last until 2013. Despite the reduced backlog, Boeing officials expressed optimism that additional orders would be forthcoming. On February 24, 2011, the USAF announced its selection of the KC-767 Advanced Tanker, an upgraded variant of the KC-767, for its KC-X fleet renewal program. The selection followed two rounds of tanker competition between Boeing and Airbus parent EADS, and came eight years after the USAF's original 2003 announcement of its plan to lease KC-767s. The tanker order encompassed 179 aircraft and was expected to sustain 767 production past 2013. 
In December 2011, FedEx Express announced a 767-300F order for 27 aircraft to replace its DC-10 freighters, citing the USAF tanker order and Boeing's decision to continue production as contributing factors. FedEx Express agreed to buy 19 more of the −300F variant in June 2012. In June 2015, FedEx said it was accelerating retirements of planes both to reflect demand and to modernize its fleet, recording charges of $276 million. On July 21, 2015, FedEx announced an order for 50 767-300Fs with options on another 50, the largest order for the type. With the announcement, FedEx confirmed that it had firm orders for 106 of the freighters for delivery between 2018 and 2023. In February 2018, UPS announced that it was ordering four more 767-300Fs, increasing its total order to 63, with deliveries scheduled from 2017 through 2022. With its successor, the Boeing New Midsize Airplane, not planned for introduction until at least 2025, and the 787 being much larger, Boeing could restart passenger 767-300ER production to bridge the gap; a demand for 50 to 60 aircraft might need to be satisfied. Needing to replace its 40 767s, United Airlines requested a price quote for other widebodies. In November 2017, Boeing CEO Dennis Muilenburg cited interest beyond military and freighter uses. Boeing Commercial Airplanes VP of marketing Randy Tinseth stated in early 2018 that the company did not intend to resume passenger variant production. In its first-quarter 2018 earnings report, Boeing announced plans to increase production from 2.5 to 3 aircraft per month beginning in January 2020 due to increased demand in the cargo market; at the time, FedEx had 56 on order, UPS had four, and an unidentified customer had three on order. This rate could rise to 3.5 per month in July 2020 and 4 per month in January 2021, then come down to 3 per month in January 2025 and 2 per month that July. The 767 is a low-wing cantilever monoplane with a conventional tail unit featuring a single fin and rudder. The wings are swept at 31.5 degrees and optimized for a cruising speed of Mach 0.8. Each wing features a supercritical airfoil cross-section and is equipped with six-panel leading edge slats, single- and double-slotted flaps, inboard and outboard ailerons, and six spoilers. The airframe further incorporates carbon-fiber-reinforced polymer composite wing surfaces, Kevlar fairings and access panels, plus improved aluminum alloys, which together reduce overall weight by versus preceding aircraft. To distribute the aircraft's weight on the ground, the 767 has a retractable tricycle landing gear with four wheels on each main gear and two for the nose gear. The original wing and gear design accommodated the stretched 767-300 without major changes. The 767-400ER features a larger, more widely spaced main gear with 777 wheels, tires, and brakes. To prevent damage if the tail section contacts the runway surface during takeoff, 767-300 and 767-400ER models are fitted with a retractable tailskid. The 767 has left-side exit doors near the front and rear of the aircraft. In addition to shared avionics and computer technology, the 767 uses the same auxiliary power unit, electric power systems, and hydraulic parts as the 757. A raised cockpit floor and the same forward cockpit windows result in similar pilot viewing angles. Related design and functionality allows 767 pilots to obtain a common type rating to operate the 757 and share the same seniority roster with pilots of either aircraft. 
The original 767 flight deck uses six Rockwell Collins CRT screens to display Electronic flight instrument system (EFIS) and engine indication and crew alerting system (EICAS) information, allowing pilots to handle monitoring tasks previously performed by the flight engineer. The CRTs replace conventional electromechanical instruments found on earlier aircraft. An enhanced flight management system, improved over versions used on early 747s, automates navigation and other functions, while an automatic landing system facilitates CAT IIIb instrument landings in low visibility situations. The 767 became the first aircraft to receive CAT IIIb certification from the FAA for landings with minimum visibility in 1984. On the 767-400ER, the cockpit layout is simplified further with six Rockwell Collins liquid crystal display (LCD) screens, and adapted for similarities with the 777 and the Next Generation 737. To retain operational commonality, the LCD screens can be programmed to display information in the same manner as earlier 767s. In 2012, Boeing and Rockwell Collins launched a further 787-based cockpit upgrade for the 767, featuring three landscape-format LCD screens that can display two windows each. The 767 is equipped with three redundant hydraulic systems for operation of control surfaces, landing gear, and utility actuation systems. Each engine powers a separate hydraulic system, and the third system uses electric pumps. A ram air turbine provides power for basic controls in the event of an emergency. An early form of fly-by-wire is employed for spoiler operation, utilizing electric signaling instead of traditional control cables. The fly-by-wire system reduces weight and allows independent operation of individual spoilers. The 767 features a twin-aisle cabin with a typical configuration of six abreast in business class and seven across in economy. The standard seven abreast, 2–3–2 economy class layout places approximately 87 percent of all seats at a window or aisle. As a result, the aircraft can be largely occupied before center seats need to be filled, and each passenger is no more than one seat from the aisle. It is possible to configure the aircraft with extra seats for up to an eight abreast configuration, but this is less common. The 767 interior introduced larger overhead bins and more lavatories per passenger than previous aircraft. The bins are wider to accommodate garment bags without folding, and strengthened for heavier carry-on items. A single, large galley is installed near the aft doors, allowing for more efficient meal service and simpler ground resupply. Passenger and service doors are an overhead plug type, which retract upwards, and commonly used doors can be equipped with an electric-assist system. In 2000, a 777-style interior, known as the Boeing Signature Interior, debuted on the 767-400ER. Subsequently, adopted for all new-build 767s, the Signature Interior features even larger overhead bins, indirect lighting, and sculpted, curved panels. The 767-400ER also received larger windows derived from the 777. Older 767s can be retrofitted with the Signature Interior. Some operators have adopted a simpler modification known as the Enhanced Interior, featuring curved ceiling panels and indirect lighting with minimal modification of cabin architecture, as well as aftermarket modifications such as the NuLook 767 package by Heath Tecna. The 767 has been produced in three fuselage lengths. These debuted in progressively larger form as the 767-200, 767-300, and 767-400ER. 
Longer-range variants include the 767-200ER and 767-300ER, while cargo models include the 767-300F, a production freighter, and conversions of passenger 767-200 and 767-300 models. When referring to different variants, Boeing and airlines often collapse the model number (767) and the variant designator (e.g. –200 or –300) into a truncated form (e.g. "762" or "763"). A range identifier may be appended after the capacity designator. The International Civil Aviation Organization (ICAO) aircraft type designator system uses a similar numbering scheme, but adds a preceding manufacturer letter; all variants based on the 767-200 and 767-300 are classified under the codes "B762" and "B763", while the 767-400ER receives the designation "B764". The 767-200 was the original model and entered service with United Airlines in 1982. The type has been used primarily by mainline U.S. carriers for domestic routes between major hub centers such as Los Angeles to Washington. The 767-200 was the first aircraft to be used on transatlantic ETOPS flights, beginning with TWA on February 1, 1985, under 90-minute diversion rules. Deliveries for the variant totaled 128 aircraft. There were 53 passenger and freighter conversions of the model in commercial service as of July 2017. The type's competitors included the Airbus A300 and A310. The 767-200 ceased production in the late 1980s, superseded by the extended-range 767-200ER. Some early 767-200s were subsequently upgraded to extended-range specification. In 1998, Boeing began offering 767-200 conversions to 767-200SF (Special Freighter) specification for cargo use, and Israel Aerospace Industries has been licensed to perform cargo conversions since 2005. The conversion process entails the installation of a side cargo door, strengthened main deck floor, and added freight monitoring and safety equipment. The 767-200SF was positioned as a replacement for Douglas DC-8 freighters. A commercial freighter version of the Boeing 767-200 with wings from the -300 series and an updated flight deck was first flown on December 29, 2014. A military tanker variant of the Boeing 767-2C is being developed for the USAF as the KC-46. Boeing is building two aircraft as commercial freighters which will be used to obtain Federal Aviation Administration certification; a further two Boeing 767-2Cs will be modified as military tankers. Boeing does not have customers for the freighter. The 767-200ER was the first extended-range model and entered service with El Al in 1984. The type's increased range is due to an additional center fuel tank and a higher maximum takeoff weight (MTOW) of up to . The type was originally offered with the same engines as the 767-200, while more powerful Pratt & Whitney PW4000 and General Electric CF6 engines later became available. The 767-200ER was the first 767 to complete a non-stop transatlantic journey, and broke the flying distance record for a twinjet airliner on April 17, 1988, with an Air Mauritius flight from Halifax, Nova Scotia, to Port Louis, Mauritius, covering . The 767-200ER has been acquired by international operators seeking smaller wide-body aircraft for long-haul routes such as New York to Beijing. Deliveries of the type totaled 121 with no unfilled orders. As of July 2017, 25 examples of passenger and freighter conversion versions were in airline service. The type's main competitors of the time included the Airbus A300-600R and the A310-300. 
The 767-300, the first stretched version of the aircraft, entered service with Japan Airlines in 1986. The type features a fuselage extension over the 767-200, achieved by additional sections inserted before and after the wings, for an overall length of . Reflecting the growth potential built into the original 767 design, the wings, engines, and most systems were largely unchanged on the 767-300. An optional mid-cabin exit door is positioned ahead of the wings on the left, while more powerful Pratt & Whitney PW4000 and Rolls-Royce RB211 engines later became available. The 767-300's increased capacity has been used on high-density routes within Asia and Europe. Deliveries for the type totaled 104 aircraft with no unfilled orders remaining. As of July 2017, 38 of the variant were in airline service. The type's main competitor was the Airbus A300. The 767-300ER, the extended-range version of the 767-300, entered service with American Airlines in 1988. The type's increased range was made possible by greater fuel tankage and a higher MTOW of . Design improvements allowed the available MTOW to increase to by 1993. Power is provided by Pratt & Whitney PW4000, General Electric CF6, or Rolls-Royce RB211 engines. Typical routes for the type include Los Angeles to Frankfurt. The combination of increased capacity and range offered by the 767-300ER has been particularly attractive to both new and existing 767 operators. It is the most successful version of the aircraft, with more orders placed than all other variants combined. As of November 2017, 767-300ER deliveries stand at 583 with no unfilled orders. There were 412 examples in service as of July 2017. The type's main competitor is the Airbus A330-200. The 767-300F, the production freighter version of the 767-300ER, entered service with UPS Airlines in 1995. The 767-300F can hold up to 24 standard pallets on its main deck and up to 30 LD2 unit load devices on the lower deck, with a total cargo volume of . The freighter has a main deck cargo door and crew exit, while the lower deck features two port-side cargo doors and one starboard cargo door. A general market version with onboard freight-handling systems, refrigeration capability, and crew facilities was delivered to Asiana Airlines on August 23, 1996. As of June 2018, 767-300F deliveries stand at 141 with 71 unfilled orders. Airlines operated 180 examples of the freighter variant and freighter conversions in July 2017. In June 2008, All Nippon Airways took delivery of the first 767-300BCF (Boeing Converted Freighter), a modified passenger-to-freighter model. The conversion work was performed in Singapore by ST Aerospace Services, the first supplier to offer a 767-300BCF program, and involved the addition of a main deck cargo door, strengthened main deck floor, and additional freight monitoring and safety equipment. Since then, Boeing, Israel Aerospace Industries, and Wagner Aeronautical have also offered passenger-to-freighter conversion programs for 767-300 series aircraft. The 767-400ER, the first Boeing wide-body jet resulting from two fuselage stretches, entered service with Continental Airlines in 2000. The type features a stretch over the 767-300, for a total length of . The wingspan is also increased by through the addition of raked wingtips. Other differences include an updated cockpit, redesigned landing gear, and 777-style Signature Interior. Power is provided by uprated Pratt & Whitney PW4000 or General Electric CF6 engines. 
The FAA granted approval for the 767-400ER to operate 180-minute ETOPS flights before it entered service. Because its fuel capacity was not increased over preceding models, the 767-400ER has a range of , less than previous extended-range 767s. This is roughly the distance from Shenzhen to Seattle. No 767-400 version was developed, while a longer-range version, the 767-400ERX, was offered for sale in 2000 before it was cancelled a year later, leaving the 767-400ER as the sole version of the largest 767. Boeing dropped the 767-400ER and the -200ER from its pricing list in 2014. A total of 37 aircraft were delivered to the variant's two airline customers, Continental Airlines (now merged with United Airlines) and Delta Air Lines, with no unfilled orders. All 37 examples of the -400ER were in service in July 2017. One additional example was produced as a military testbed, and later sold as a VIP transport. The type's closest competitor is the Airbus A330-200. Versions of the 767 serve in a number of military and government applications, with responsibilities ranging from airborne surveillance and refueling to cargo and VIP transport. Several military 767s have been derived from the 767-200ER, the longest-range version of the aircraft. Boeing offered the 767-400ERX, a longer-range version of the largest 767 model, in 2000. Introduced concurrently with the 747X, the type was to be powered by the 747X's engines, the Engine Alliance GP7000 and the Rolls-Royce Trent 600. An increased range of was specified. Kenya Airways provisionally ordered three 767-400ERXs to supplement its 767 fleet, but switched the order to the 777-200ER after Boeing cancelled the type's development in 2001. The Northrop Grumman E-10 MC2A was to be a 767-400ER-based replacement for the USAF's 707-based E-3 Sentry AWACS, Northrop Grumman E-8 Joint STARS, and RC-135 SIGINT aircraft. The E-10 MC2A would have included an all-new AWACS system, with a powerful active electronically scanned array (AESA) that was also capable of jamming enemy aircraft or missiles. One 767-400ER aircraft was produced as a testbed for systems integration, but the program was terminated in January 2009 and the prototype was sold to Bahrain as a VIP transport. In July 2017, 744 aircraft were in airline service: 77 -200s, 630 -300s, and 37 -400s, with 65 -300Fs on order; the largest operators were Delta Air Lines (82), UPS Airlines (59, the largest cargo operator), United Airlines (51), All Nippon Airways (37), American Airlines (31), and Japan Airlines (31). The largest customers are Delta Air Lines with 117 orders, FedEx Express (108), All Nippon Airways (96), and United Airlines (82). Delta and United are the only customers of all -200, -300 and -400 passenger variants. In July 2015, FedEx placed a firm order for 50 Boeing 767 freighters with deliveries from 2018 to 2023. Boeing 767 orders and deliveries (cumulative, by year): As of May 2017, the Boeing 767 has been in 45 aviation occurrences, including 16 hull-loss accidents. Six fatal crashes, including three hijackings, have resulted in a total of 851 occupant fatalities. The airliner's first fatal crash, Lauda Air Flight 004, occurred near Bangkok on May 26, 1991, following the in-flight deployment of the left engine thrust reverser on a 767-300ER; none of the 223 aboard survived; as a result of this accident all 767 thrust reversers were deactivated until a redesign was implemented. 
Investigators determined that an electronically controlled valve, common to late-model Boeing aircraft, was to blame. A new locking device was installed on all affected jetliners, including 767s. On October 31, 1999, EgyptAir Flight 990, a 767-300ER, crashed off Nantucket Island, Massachusetts, in international waters, killing all 217 people on board. The US National Transportation Safety Board (NTSB) determined the probable cause to be a deliberate action by the first officer; Egypt disputed this conclusion. On April 15, 2002, Air China Flight 129, a 767-200ER, crashed into a hill amid inclement weather while trying to land at Gimhae International Airport in Busan, South Korea. The crash resulted in the death of 129 of the 166 people on board, and the cause was attributed to pilot error. An early 767 incident was survived by all on board. On July 23, 1983, Air Canada Flight 143, a 767-200, ran out of fuel in flight and had to glide with both engines out for almost to an emergency landing at Gimli, Manitoba. The pilots used the aircraft's ram air turbine to power the hydraulic systems for aerodynamic control. There were no fatalities and only minor injuries. This aircraft was nicknamed the "Gimli Glider" after its landing site. The aircraft, registered C-GAUN, continued flying for Air Canada until its retirement in January 2008. The 767 has been involved in six hijackings, three resulting in loss of life, for a combined total of 282 occupant fatalities. On November 23, 1996, Ethiopian Airlines Flight 961, a 767-200ER, was hijacked and crash-landed in the Indian Ocean near the Comoros Islands after running out of fuel, killing 125 of the 175 persons on board; survivors have been rare among instances of land-based aircraft ditching on water. Two 767s were involved in the September 11 attacks on the World Trade Center in 2001, resulting in the collapse of its two main towers. American Airlines Flight 11, a 767-200ER, crashed into the North Tower, killing all 92 people on board, and United Airlines Flight 175, a 767-200, crashed into the South Tower, killing all 65 on board. In addition, more than 2,600 people were killed in the towers or on the ground. A foiled 2001 shoe bomb plot involving an American Airlines 767-300ER resulted in passengers being required to remove their shoes for scanning at US security checkpoints. On November 1, 2011, LOT Polish Airlines Flight 16, a 767-300ER, safely landed at Warsaw Frederic Chopin Airport in Warsaw, Poland, after a mechanical failure of the landing gear forced an emergency landing with the landing gear up. There were no injuries, but the aircraft involved was damaged and subsequently written off. At the time of the incident, aviation analysts speculated that it may have been the first instance of a complete landing gear failure in the 767's service history. Subsequent investigation determined that while a damaged hose had disabled the aircraft's primary landing gear extension system, an otherwise functional backup system was inoperative due to an accidentally deactivated circuit breaker. In January 2014, the US Federal Aviation Administration issued a directive that ordered inspections of the elevators on more than 400 767s beginning in March 2014; the focus is on fasteners and other parts that can fail and cause the elevators to jam. The issue was first identified in 2000 and has been the subject of several Boeing service bulletins. The inspections and repairs are required to be completed within six years. 
The aircraft has also had multiple occurrences of "uncommanded escape slide inflation" during maintenance or operations, and during flight. In late 2015, the FAA issued a preliminary directive to address the issue. On October 28, 2016, American Airlines Flight 383, a 767-300ER with 161 passengers and 9 crew members, aborted takeoff at Chicago O'Hare Airport following an uncontained failure of the right GE CF6-80C2 engine. The engine failure, which hurled fragments over a considerable distance, caused a fuel leak resulting in a fire under the right wing. Fire and smoke entered the cabin. All passengers and crew evacuated the aircraft, with 20 passengers and one flight attendant sustaining minor injuries while using the evacuation slides. As new 767s roll off the assembly line, older models have been retired and stored or scrapped. One complete aircraft is currently on display: N102DA, the first 767-200 to operate for Delta Air Lines and the twelfth example built. The exhibition aircraft, named "The Spirit of Delta" by the employees who helped purchase it in 1982, underwent restoration at the Delta Air Lines Air Transport Heritage Museum in Atlanta, Georgia. The restoration was completed in 2010. Babe Ruth George Herman "Babe" Ruth Jr. (February 6, 1895 – August 16, 1948) was an American professional baseball player whose career in Major League Baseball (MLB) spanned 22 seasons, from 1914 through 1935. Nicknamed "The Bambino" and "The Sultan of Swat", he began his MLB career as a stellar left-handed pitcher for the Boston Red Sox, but achieved his greatest fame as a slugging outfielder for the New York Yankees. Ruth established many MLB batting (and some pitching) records, including career home runs (714), runs batted in (RBIs) (2,213), bases on balls (2,062), slugging percentage (.690), and on-base plus slugging (OPS) (1.164); the latter two still stand as of 2018. Ruth is regarded as one of the greatest sports heroes in American culture and is considered by many to be the greatest baseball player of all time. In 1936, Ruth was elected into the Baseball Hall of Fame as one of its "first five" inaugural members. At age seven, Ruth was sent to St. Mary's Industrial School for Boys, a reformatory where he learned life lessons and baseball skills from Brother Matthias Boutlier of the Xaverian Brothers, the school's disciplinarian and a capable baseball player. In 1914, Ruth was signed to play minor-league baseball for the Baltimore Orioles but was soon sold to the Red Sox. By 1916, he had built a reputation as an outstanding pitcher who sometimes hit long home runs, a feat unusual for any player in the pre-1920 dead-ball era. Although Ruth twice won 23 games in a season as a pitcher and was a member of three World Series championship teams with the Red Sox, he wanted to play every day and was allowed to convert to an outfielder. With regular playing time, he broke the MLB single-season home run record in 1919. After that season, Red Sox owner Harry Frazee sold Ruth to the Yankees amid controversy. The sale fueled Boston's subsequent 86-year championship drought and popularized the "Curse of the Bambino" superstition. In his 15 years with the Yankees, Ruth helped the team win seven American League (AL) pennants and four World Series championships. 
His big swing led to escalating home run totals that not only drew fans to the ballpark and boosted the sport's popularity but also helped usher in baseball's live-ball era, in which baseball evolved from a low-scoring game of strategy to a sport where the home run was a major factor. As part of the Yankees' vaunted "Murderers' Row" lineup of 1927, Ruth hit 60 home runs, which extended his MLB single-season record by a single home run. Ruth's last season with the Yankees was 1934; he retired from the game the following year, after a short stint with the Boston Braves. During his career, Ruth led the AL in home runs twelve times. Ruth's legendary power and charismatic personality made him a larger-than-life figure during the Roaring Twenties. During his career, he was the target of intense press and public attention for his baseball exploits and off-field penchants for drinking and womanizing. His often reckless lifestyle was tempered by his willingness to do good by visiting children at hospitals and orphanages. After his retirement as a player, he was denied the opportunity to manage a major league club, most likely due to poor behavior during parts of his playing career. In his final years, Ruth made many public appearances, especially in support of American efforts in World War II. In 1946, he became ill with esophageal cancer and died two years later as a result of the disease. George Herman Ruth Jr. was born in 1895 at 216 Emory Street in the Pigtown section of Baltimore, Maryland. Ruth's parents, George Herman Ruth Sr. and Katherine Schamberger, were both of German ancestry. According to the 1880 census, his parents were born in Maryland. His paternal grandparents were from Prussia and Hanover. Ruth Sr. worked a series of jobs that included lightning rod salesman and streetcar operator. The elder Ruth then became a counterman in a family-owned combination grocery and saloon business on Frederick Street. George Ruth Jr. was born in the house of his maternal grandfather, Pius Schamberger, a German immigrant and trade unionist. Only one of young George's seven siblings, his younger sister Mamie, survived infancy. Many details of Ruth's childhood are unknown, including the date of his parents' marriage. As a child, Ruth spoke German. When young George was a toddler, the family moved to 339 South Woodyear Street, not far from the rail yards; by the time the boy was 6, his father had a saloon with an upstairs apartment at 426 West Camden Street. Details are equally scanty about why young George was sent at the age of 7 to St. Mary's Industrial School for Boys, a reformatory and orphanage. As an adult, Babe Ruth reminisced that as a youth he had been running the streets, rarely attending school, and drinking beer when his father was not looking. Some accounts say that following a violent incident at his father's saloon, the city authorities decided that this environment was unsuitable for a small child. George Jr. entered St. Mary's on June 13, 1902. He was recorded as "incorrigible" and spent much of the next twelve years there. Although St. Mary's boys received an education, students were also expected to learn work skills and help operate the school, particularly once the boys turned 12. Ruth became a shirtmaker and was also proficient as a carpenter. He would adjust his own shirt collars, rather than having a tailor do so, even during his well-paid baseball career. The boys, aged 5 to 21, did most work around the facility, from cooking to shoemaking, and renovated St. 
Mary's in 1912. The food was simple, and the Xaverian Brothers who ran the school insisted on strict discipline; corporal punishment was common. Ruth's nickname there was "Niggerlips", as he had large facial features and was darker than most boys at the all-white reformatory. Ruth was sometimes allowed to rejoin his family or was placed at St. James's Home, a supervised residence with work in the community, but he was always returned to St. Mary's. He was rarely visited by his family; his mother died when he was 12 and by some accounts, he was permitted to leave St. Mary's only to attend the funeral. How Ruth came to play baseball there is uncertain: according to one account, his placement at St. Mary's was due in part to repeatedly breaking Baltimore's windows with long hits while playing street ball; by another, he was told to join a team on his first day at St. Mary's by the school's athletic director, Brother Herman, becoming a catcher even though left-handers rarely play that position. During his time there he also played third base and shortstop, again unusual for a left-hander, and was forced to wear mitts and gloves made for right-handers. He was encouraged in his pursuits by the school's Prefect of Discipline, Brother Matthias Boutlier, a native of Nova Scotia. A large man, Brother Matthias was greatly respected by the boys both for his strength and for his fairness. For the rest of his life, Ruth would praise Brother Matthias, and his running and hitting styles closely resembled his teacher's. Ruth stated, "I think I was born as a hitter the first day I ever saw him hit a baseball." The older man became a mentor and role model to George; biographer Robert W. Creamer commented on the closeness between the two: The school's influence remained with Ruth in other ways. He was a lifelong Catholic who would sometimes attend Mass after carousing all night, and he became a well-known member of the Knights of Columbus. He would visit orphanages, schools, and hospitals throughout his life, often avoiding publicity. He was generous to St. Mary's as he became famous and rich, donating money and his presence at fundraisers, and spending $5,000 to buy Brother Matthias a Cadillac in 1926—subsequently replacing it when it was destroyed in an accident. Most of the boys at St. Mary's played baseball in organized leagues at different levels of proficiency. Ruth later estimated that he played 200 games a year as he steadily climbed the ladder of success. Although he played all positions at one time or another (including infield positions generally reserved for right-handers), he gained stardom as a pitcher. According to Brother Matthias, Ruth was standing to one side laughing at the bumbling pitching efforts of fellow students, and Matthias told him to go in and see if he could do better. Ruth had become the best pitcher at St. Mary's, and when he was 18 in 1913, he was allowed to leave the premises to play weekend games on teams that were drawn from the community. He was mentioned in several newspaper articles, for both his pitching prowess and ability to hit long home runs. In early 1914, Ruth signed a professional baseball contract with Jack Dunn, who owned and managed the minor-league Baltimore Orioles, an International League team. The circumstances of Ruth's signing are not known with certainty; historical fact is obscured by stories that cannot all be true. By some accounts, Dunn was urged to attend a game between an all-star team from St. 
Mary's and one from another Xaverian facility, Mount St. Mary's College. Some versions have Ruth running away before the eagerly awaited game, to return in time to be punished, and then pitching St. Mary's to victory as Dunn watched. Others have Washington Senators pitcher Joe Engel, a Mount St. Mary's graduate, pitching in an alumni game after watching a preliminary contest between the college's freshmen and a team from St. Mary's, including Ruth. Engel watched Ruth play, then told Dunn about him at a chance meeting in Washington. Ruth, in his autobiography, stated only that he worked out for Dunn for a half-hour, and was signed. According to biographer Kal Wagenheim, there were legal difficulties to be straightened out as Ruth was supposed to remain at the school until he turned 21. The train journey to spring training in Fayetteville, North Carolina, in early March was likely Ruth's first outside the Baltimore area. The rookie ballplayer was the subject of various pranks by the veterans, who were probably also the source of his famous nickname. There are various accounts of how Ruth came to be called Babe, but most center on his being referred to as "Dunnie's babe" or a variant. "Babe" was at that time a common nickname in baseball, with perhaps the most famous to that point being Pittsburgh Pirates pitcher and 1909 World Series hero Babe Adams, who appeared younger than he was. Ruth made his first appearance as a professional ballplayer in an inter-squad game on March 7, 1914. He played shortstop and pitched the last two innings of a 15–9 victory. In his second at-bat, Ruth hit a long home run to right field; the blast was locally reported to be longer than a legendary shot hit by Jim Thorpe in Fayetteville. Ruth made his first appearance against a team in organized baseball in an exhibition game versus the major-league Philadelphia Phillies. Ruth pitched the middle three innings and gave up two runs in the fourth, but then settled down and pitched scoreless fifth and sixth innings. In a game against the Phillies the following afternoon, Ruth entered during the sixth inning and did not allow a run the rest of the way. The Orioles scored seven runs in the bottom of the eighth inning to overcome a 6–0 deficit, and Ruth was the winning pitcher. Once the regular season began, Ruth was a star pitcher who was also dangerous at the plate. The team performed well, yet received almost no attention from the Baltimore press. A third major league, the Federal League, had begun play, and the local franchise, the Baltimore Terrapins, restored that city to the major leagues for the first time since 1902. Few fans visited Oriole Park, where Ruth and his teammates labored in relative obscurity. Ruth may have been offered a bonus and a larger salary to jump to the Terrapins; when rumors to that effect swept Baltimore, giving Ruth the most publicity he had experienced to date, a Terrapins official denied it, stating it was their policy not to sign players under contract to Dunn. The competition from the Terrapins caused Dunn to sustain large losses. Although by late June the Orioles were in first place, having won over two-thirds of their games, the paid attendance dropped as low as 150. Dunn explored a possible move by the Orioles to Richmond, Virginia, as well as the sale of a minority interest in the club. These possibilities fell through, leaving Dunn with little choice other than to sell his best players to major league teams to raise money. 
He offered Ruth to the reigning World Series champions, Connie Mack's Philadelphia Athletics, but Mack had his own financial problems. The Cincinnati Reds and New York Giants expressed interest in Ruth, but Dunn sold his contract, along with those of pitchers Ernie Shore and Ben Egan, to the Boston Red Sox of the American League (AL) on July 4. The sale price was announced as $25,000, but other reports place the amount at half that, or possibly $8,500 plus the cancellation of a $3,000 loan. Ruth remained with the Orioles for several days while the Red Sox completed a road trip, and reported to the team in Boston on July 11, 1914, arriving with Egan and Shore. Ruth later told the story of how that morning he had met Helen Woodford, who would become his first wife. She was a 16-year-old waitress at Landers Coffee Shop, and Ruth related that she served him when he had breakfast there. Other stories, though, suggested that the meeting occurred on another day, and perhaps under other circumstances. Regardless of when he began to woo his first wife, he won his first game as a pitcher for the Red Sox that afternoon, 4–3, over the Cleveland Naps. His catcher was Bill Carrigan, who was also the Red Sox manager. Shore was given a start by Carrigan the next day; he won that and his second start and thereafter was pitched regularly. Ruth lost his second start, and was thereafter little used. In his major league debut as a batter, Ruth went 0-for-2 against left-hander Willie Mitchell, striking out in his first at bat, before being removed for a pinch hitter in the seventh inning. Ruth was not much noticed by the fans, as Bostonians watched the Red Sox's crosstown rivals, the Braves, begin a legendary comeback that would take them from last place on the Fourth of July to the 1914 World Series championship. Egan was traded to Cleveland after two weeks on the Boston roster. During his time with the Red Sox, Egan had kept an eye on the inexperienced Ruth, much as Dunn had in Baltimore; once he was traded, no one took his place as supervisor. Ruth's new teammates considered him brash, and would have preferred him, as a rookie, to remain quiet and inconspicuous. When Ruth insisted on taking batting practice despite being both a rookie who did not play regularly and a pitcher, he arrived to find his bats sawn in half. His teammates nicknamed him "the Big Baboon", a name the swarthy Ruth, who had disliked the nickname "Niggerlips" at St. Mary's, detested. Ruth had received a raise on promotion to the major leagues, and quickly acquired tastes for fine food, liquor, and women, among other temptations. Manager Carrigan allowed Ruth to pitch two exhibition games in mid-August. Although Ruth won both against minor-league competition, he was not restored to the pitching rotation. It is uncertain why Carrigan did not give Ruth additional opportunities to pitch. There are legends—filmed for the screen in "The Babe Ruth Story" (1948)—that the young pitcher had a habit of signaling his intent to throw a curveball by sticking out his tongue slightly, and that he was easy to hit until this changed. Creamer pointed out that it is common for inexperienced pitchers to display such habits, and that the need to break Ruth of his would not have been a reason not to use him at all. The biographer suggested that Carrigan was unwilling to use Ruth due to poor behavior by the rookie. 
On July 30, 1914, Boston owner Joseph Lannin purchased the minor-league Providence Grays, members of the International League. The Providence team had been owned by several people associated with the Detroit Tigers, including star hitter Ty Cobb, and as part of the transaction, a Providence pitcher was sent to the Tigers. To soothe Providence fans upset at losing a star, Lannin announced that the Red Sox would soon send a replacement to the Grays. This was intended to be Ruth, but his departure for Providence was delayed when Cincinnati Reds owner Garry Herrmann claimed him off waivers. After Lannin wrote to Herrmann explaining that the Red Sox wanted Ruth in Providence so he could develop as a player, and would not release him to a major league club, Herrmann allowed Ruth to be sent to the minors. Carrigan later stated that Ruth was not sent down to Providence to make him a better player, but to help the Grays win the International League pennant (league championship). Ruth joined the Grays on August 18, 1914. After Dunn's deals, the Baltimore Orioles managed to hold on to first place until August 15, after which they continued to fade, leaving the pennant race between Providence and Rochester. Ruth was deeply impressed by Providence manager "Wild Bill" Donovan, previously a star pitcher with a 25–4 win–loss record for Detroit in 1907; in later years, he credited Donovan with teaching him much about pitching. Ruth was often called upon to pitch, in one stretch starting (and winning) four games in eight days. On September 5 at Maple Leaf Park in Toronto, Ruth pitched a one-hit 9–0 victory, and hit his first professional home run, his only one as a minor leaguer, off Ellis Johnson. Recalled to Boston after Providence finished the season in first place, he pitched and won a game for the Red Sox against the New York Yankees on October 2, getting his first major league hit, a double. Ruth finished the season with a record of 2–1 as a major leaguer and 23–8 in the International League (for Baltimore and Providence). Once the season concluded, Ruth married Helen in Ellicott City, Maryland. Creamer speculated that they did not marry in Baltimore, where the newlyweds boarded with George Ruth Sr., to avoid possible interference from those at St. Mary's—neither bride nor groom was yet of age, and Ruth remained on parole from that institution until his 21st birthday. In March 1915, Ruth reported to Hot Springs, Arkansas for his first major league spring training. Despite a relatively successful first season, he was not slated to start regularly for the Red Sox, who already had two stellar left-handed pitchers: the established stars Dutch Leonard, who had broken the record for the lowest earned run average (ERA) in a single season; and Ray Collins, a 20-game winner in both 1913 and 1914. Ruth was ineffective in his first start, taking the loss in the third game of the season. Injuries and ineffective pitching by other Boston pitchers gave Ruth another chance, and after some good relief appearances, Carrigan allowed Ruth another start, and he won a rain-shortened seven-inning game. Ten days later, the manager had him start against the New York Yankees at the Polo Grounds. Ruth took a 3–2 lead into the ninth, but lost the game 4–3 in 13 innings. During that game, Ruth, hitting ninth as was customary for pitchers, hit a massive home run into the upper deck in right field off Jack Warhop. At the time, home runs were rare in baseball, and Ruth's majestic shot awed the crowd. 
The winning pitcher, Warhop, would in August 1915 conclude a major league career of eight seasons, undistinguished but for being the first major league pitcher to give up a home run to Babe Ruth. Carrigan was sufficiently impressed by Ruth's pitching to give him a spot in the starting rotation. Ruth finished the 1915 season 18–8 as a pitcher; as a hitter, he batted .315 and had four home runs. The Red Sox won the AL pennant, but with the pitching staff healthy, Ruth was not called upon to pitch in the 1915 World Series against the Philadelphia Phillies. Boston won in five games; Ruth was used as a pinch hitter in Game Five, but grounded out against Phillies ace Grover Cleveland Alexander. Despite his success as a pitcher, Ruth was acquiring a reputation for long home runs; at Sportsman's Park against the St. Louis Browns, a Ruth hit soared over Grand Avenue, breaking the window of a Chevrolet dealership. In 1916, Ruth drew attention for his pitching as he engaged in repeated duels with the Washington Senators' ace, Walter Johnson. The two met five times during the season, with Ruth winning four and Johnson one (Ruth had a no-decision in Johnson's victory). Two of Ruth's victories were by the score of 1–0, one in a 13-inning game. Of the 1–0 shutout decided without extra innings, AL President Ban Johnson stated, "That was one of the best ball games I have ever seen." For the season, Ruth went 23–12, with a 1.75 ERA and nine shutouts, both of which led the league. Ruth's nine shutouts in 1916 set a league record for left-handers that stood alone until Ron Guidry tied it in 1978. The Red Sox won the pennant and World Series again, this time defeating the Brooklyn Superbas (as the Dodgers were then known) in five games. Ruth started and won Game 2, 2–1, in 14 innings. Until another game of that length was played in 2005, this was the longest World Series game, and Ruth's pitching performance is still the longest postseason complete game victory. Carrigan retired as player and manager after 1916, returning to his native Maine to be a businessman. Ruth, who played under four managers who are in the National Baseball Hall of Fame, always maintained that Carrigan, who is not enshrined there, was the best skipper he ever played for. There were other changes in the Red Sox organization that offseason, as Lannin sold the team to a three-man group headed by New York theatrical promoter Harry Frazee. Jack Barry was hired by Frazee as manager. Ruth went 24–13 with a 2.01 ERA and six shutouts in 1917, but the Sox finished in second place in the league, nine games behind the Chicago White Sox in the standings. On June 23 at Washington, Ruth made a memorable pitching start. When home plate umpire 'Brick' Owens called the first four pitches as balls, Ruth threw a punch at him, and was ejected from the game and later suspended for ten days and fined $100. Ernie Shore was called in to relieve Ruth, and was allowed eight warm-up pitches. The runner who had reached base on the walk was caught stealing, and Shore retired all 26 batters he faced to win the game. Shore's feat was listed as a perfect game for many years; in 1991, Major League Baseball's (MLB) Committee on Statistical Accuracy reclassified it as a combined no-hitter. In 1917, Ruth was little-used as a batter, other than for his plate appearances while pitching, and hit .325 with two home runs. 
The entry of the United States into World War I occurred at the start of the season and overshadowed the sport. Conscription was introduced in September 1917, and most baseball players in the big leagues were of draft age. This included Barry, a player-manager, who joined the Naval Reserve in an attempt to avoid the draft, only to be called up after the 1917 season. Frazee hired International League President Ed Barrow as Red Sox manager. Barrow had spent the previous 30 years in a variety of baseball jobs, though he never played the game professionally. With the major leagues shorthanded due to the war, Barrow had many holes in the Red Sox lineup to fill. Ruth also noticed these vacancies in the lineup; he was dissatisfied in the role of a pitcher who appeared every four or five days and wanted to play every day at another position. Barrow used Ruth at first base and in the outfield during the exhibition season, but he restricted him to pitching as the team moved towards Boston and the season opener. At the time, Ruth was possibly the best left-handed pitcher in baseball, and allowing him to play another position was an experiment that could have backfired. Inexperienced as a manager, Barrow had player Harry Hooper advise him on baseball game strategy. Hooper urged his manager to allow Ruth to play another position when he was not pitching, arguing to Barrow, who had invested in the club, that the crowds were larger on days when Ruth played, as they were attracted by his hitting. Barrow gave in early in May; Ruth promptly hit home runs in four consecutive games (one an exhibition), the last off Walter Johnson. For the first time in his career (disregarding pinch-hitting appearances), Ruth was assigned a place in the batting order higher than ninth. Although Barrow predicted that Ruth would beg to return to pitching the first time he experienced a batting slump, that did not occur. Barrow used Ruth primarily as an outfielder in the war-shortened 1918 season. Ruth hit .300, with 11 home runs, enough to secure him a share of the major league home run title with Tilly Walker of the Philadelphia Athletics. He was still occasionally used as a pitcher, and had a 13–7 record with a 2.22 ERA. In 1918, the Red Sox won their third pennant in four years and faced the Chicago Cubs in the World Series, which began on September 5, the earliest start in World Series history. The season had been shortened because the government had ruled that baseball players who were eligible for the military would have to be inducted or work in critical war industries, such as armaments plants. Ruth pitched and won Game One for the Red Sox, a 1–0 shutout. Before Game Four, Ruth injured his left hand in a fight; he pitched anyway. He gave up seven hits and six walks, but was helped by outstanding fielding behind him and by his own batting efforts, as a fourth-inning triple by Ruth gave his team a 2–0 lead. The Cubs tied the game in the eighth inning, but the Red Sox scored in the bottom of that inning to regain the lead, 3–2. After Ruth gave up a hit and a walk to start the ninth inning, he was relieved on the mound by Joe Bush. To keep Ruth and his bat in the game, he was sent to play left field. Bush retired the side to give Ruth his second win of the Series, and the third and last World Series pitching victory of his career, against no defeats, in three pitching appearances. Ruth's effort gave his team a three-games-to-one lead, and two days later the Red Sox won their third Series in four years, four-games-to-two. 
Before allowing the Cubs to score in Game Four, Ruth had pitched 29⅔ consecutive scoreless World Series innings, a record that stood for more than 40 years until Whitey Ford broke it in 1961, after Ruth's death. Ruth was prouder of that record than he was of any of his batting feats. With the World Series over, Ruth gained exemption from the war draft by accepting a nominal position with a Pennsylvania steel mill. Many industrial establishments took pride in their baseball teams and sought to hire major leaguers. The end of the war in November set Ruth free to play baseball without such contrivances. During the 1919 season, Ruth was used as a pitcher in only 17 of his 130 games and compiled an 8–5 record. Barrow used him as a pitcher mostly in the early part of the season, when the Red Sox manager still had hopes of a second consecutive pennant. By late June, the Red Sox were clearly out of the race, and Barrow had no objection to Ruth concentrating on his hitting, if only because it drew people to the ballpark. Ruth had hit a home run against the Yankees on Opening Day, and another during a month-long batting slump that soon followed. Relieved of his pitching duties, Ruth began an unprecedented spell of slugging home runs, which gave him widespread public and press attention. Even his failures were seen as majestic—one sportswriter noted, "When Ruth misses a swipe at the ball, the stands quiver." Two home runs by Ruth on July 5, and one in each of two consecutive games a week later, raised his season total to 11, tying his career best from 1918. The first record to fall was the AL single-season mark of 16, set by Ralph "Socks" Seybold in 1902. Ruth matched that on July 29, then pulled ahead toward the major league record of 24, set by Buck Freeman in 1899. Ruth reached this on September 8, by which time writers had discovered that Ned Williamson of the 1884 Chicago White Stockings had hit 27—though in a ballpark with an unusually short right field. On September 20, "Babe Ruth Day" at Fenway Park, Ruth won the game with a home run in the bottom of the ninth inning, tying Williamson. He broke the record four days later against the Yankees at the Polo Grounds, and hit one more against the Senators to finish with 29. The home run at Washington made Ruth the first major league player to hit a home run at all eight ballparks in his league. In spite of Ruth's hitting heroics, the Red Sox finished sixth, well behind the league champion White Sox. As an out-of-towner from New York City, Frazee had been regarded with suspicion by Boston's sportswriters and baseball fans when he bought the team. He won them over with success on the field and a willingness to build the Red Sox by purchasing or trading for players. He offered the Senators $60,000 for Walter Johnson, but Washington owner Clark Griffith was unwilling. Even so, Frazee was successful in bringing other players to Boston, especially as replacements for players in the military. This willingness to spend for players helped the Red Sox secure the 1918 title. The 1919 season saw record-breaking attendance, and Ruth's home runs for Boston made him a national sensation. In March 1919 Ruth was reported as having accepted a three-year contract for a total of $27,000, after protracted negotiations. Nevertheless, on December 26, 1919, Frazee sold Ruth's contract to the New York Yankees. 
Not all of the circumstances concerning the sale are known, but brewer and former congressman Jacob Ruppert, the New York team's principal owner, reportedly asked Yankee manager Miller Huggins what the team needed to be successful. "Get Ruth from Boston", Huggins supposedly replied, noting that Frazee was perennially in need of money to finance his theatrical productions. In any event, there was precedent for the Ruth transaction: when Boston pitcher Carl Mays left the Red Sox in a 1919 dispute, Frazee had settled the matter by selling Mays to the Yankees, though over the opposition of AL President Johnson. According to one of Ruth's biographers, Jim Reisler, "why Frazee needed cash in 1919—and large infusions of it quickly—is still, more than 80 years later, a bit of a mystery". The often-told story is that Frazee needed money to finance the musical "No, No, Nanette", which was a Broadway hit and brought Frazee financial security. That play did not open until 1925, however, by which time Frazee had sold the Red Sox. Still, the story may be true in essence: "No, No, Nanette" was based on a Frazee-produced play, "My Lady Friends", which opened in 1919. There were other financial pressures on Frazee, despite his team's success. Ruth, fully aware of baseball's popularity and his role in it, wanted to renegotiate his contract, signed before the 1919 season for $10,000 per year through 1921. He demanded that his salary be doubled, or he would sit out the season and cash in on his popularity through other ventures. Ruth's salary demands were causing other players to ask for more money. Additionally, Frazee still owed Lannin as much as $125,000 from the purchase of the club. Although Ruppert and his co-owner, Colonel Tillinghast Huston, were both wealthy, and had aggressively purchased and traded for players in 1918 and 1919 to build a winning team, Ruppert faced losses in his brewing interests as Prohibition was implemented, and if their team left the Polo Grounds, where the Yankees were the tenants of the New York Giants, building a stadium in New York would be expensive. Nevertheless, when Frazee, who moved in the same social circles as Huston, hinted to the colonel that Ruth was available for the right price, the Yankees owners quickly pursued the purchase. Frazee sold the rights to Babe Ruth for $100,000, the largest sum ever paid for a baseball player. The deal also involved a $350,000 loan from Ruppert to Frazee, secured by a mortgage on Fenway Park. Once it was agreed, Frazee informed Barrow, who, stunned, told the owner that he was getting the worse end of the bargain. Cynics have suggested that Barrow may have played a larger role in the Ruth sale, as less than a year later he became the Yankees' general manager, and in the following years he made a number of purchases of Red Sox players from Frazee. The $100,000 price included $25,000 in cash, and notes for the same amount due November 1 in 1920, 1921, and 1922; Ruppert and Huston assisted Frazee in selling the notes to banks for immediate cash. The transaction was contingent on Ruth signing a new contract, which was quickly accomplished—Ruth agreed to fulfill the remaining two years on his contract, but was given a $20,000 bonus, payable over two seasons. The deal was announced on January 6, 1920. Reaction in Boston was mixed: some fans were embittered at the loss of Ruth; others conceded that the slugger had become difficult to deal with. 
"The New York Times" suggested presciently, "The short right field wall at the Polo Grounds should prove an easy target for Ruth next season and, playing seventy-seven games at home, it would not be surprising if Ruth surpassed his home run record of twenty-nine circuit clouts next Summer." According to Reisler, "The Yankees had pulled off the sports steal of the century." According to Marty Appel in his history of the Yankees, the transaction, "changed the fortunes of two high-profile franchises for decades". The Red Sox, winners of five of the first sixteen World Series, those played between 1903 and 1919, would not win another pennant until 1946, or another World Series until 2004, a drought attributed in baseball superstition to Frazee's sale of Ruth and sometimes dubbed the "Curse of the Bambino". The Yankees, on the other hand, had not won the AL championship prior to their acquisition of Ruth. They won seven AL pennants and four World Series with Ruth, and lead baseball with 40 pennants and 27 World Series titles in their history. When Ruth signed with the Yankees, he completed his transition from a pitcher to a power-hitting outfielder. His fifteen-season Yankee career consisted of over 2,000 games, and Ruth broke many batting records while making only five widely scattered appearances on the mound, winning all of them. At the end of April 1920, the Yankees were 4–7, with the Red Sox leading the league with a 10–2 mark. Ruth had done little, having injured himself swinging the bat. Both situations began to change on May 1, when Ruth hit a tape measure home run that sent the ball completely out of the Polo Grounds, a feat believed to have been previously accomplished only by Shoeless Joe Jackson. The Yankees won, 6–0, taking three out of four from the Red Sox. Ruth hit his second home run on May 2, and by the end of the month had set a major league record for home runs in a month with 11, and promptly broke it with 13 in June. Fans responded with record attendance figures. On May 16, Ruth and the Yankees drew 38,600 to the Polo Grounds, a record for the ballpark, and 15,000 fans were turned away. Large crowds jammed stadiums to see Ruth play when the Yankees were on the road. The home runs kept on coming. Ruth tied his own record of 29 on July 15 and broke it with home runs in both games of a doubleheader four days later. By the end of July, he had 37, but his pace slackened somewhat after that. Nevertheless, on September 4, he both tied and broke the organized baseball record for home runs in a season, snapping Perry Werden's 1895 mark of 44 in the minor Western League. The Yankees played well as a team, battling for the league lead early in the summer, but slumped in August in the AL pennant battle with Chicago and Cleveland. The pennant and the World Series were won by Cleveland, who surged ahead after the Black Sox Scandal broke on September 28 and led to the suspension of many of Chicago's top players, including Shoeless Joe Jackson. The Yankees finished third, but drew 1.2 million fans to the Polo Grounds, the first time a team had drawn a seven-figure attendance. The rest of the league sold 600,000 more tickets, many fans there to see Ruth, who led the league with 54 home runs, 158 runs, and 137 runs batted in (RBIs). In 1920 and afterwards, Ruth was aided in his power hitting by the fact that A.J. Reach Company—the maker of baseballs used in the major leagues—was using a more efficient machine to wind the yarn found within the baseball. 
The new baseballs went into play in 1920 and ushered in the live-ball era; the number of home runs across the major leagues increased by 184 over the previous year. Baseball statistician Bill James pointed out that while Ruth was likely aided by the change in the baseball, there were other factors at work, including the gradual abolition of the spitball (accelerated after the death of Ray Chapman, struck by a pitched ball thrown by Mays in August 1920) and the more frequent use of new baseballs (also a response to Chapman's death). Nevertheless, James theorized that Ruth's 1920 explosion might have happened in 1919, had a full season of 154 games been played rather than 140, had Ruth refrained from pitching 133 innings that season, and had he been playing at any home field other than Fenway Park, where he hit only 9 of 29 home runs. Yankees business manager Harry Sparrow had died early in the 1920 season. Ruppert and Huston hired Barrow to replace him. The two men quickly made a deal with Frazee for New York to acquire some of the players who would be mainstays of the early Yankee pennant-winning teams, including catcher Wally Schang and pitcher Waite Hoyt. The 21-year-old Hoyt became close to Ruth. Ruth hit home runs early and often in the 1921 season, during which he broke Roger Connor's mark for home runs in a career, 138. Each of the almost 600 home runs Ruth hit in his career after that extended his own record. After a slow start, the Yankees were soon locked in a tight pennant race with Cleveland, winners of the 1920 World Series. On September 15, Ruth hit his 55th home run, shattering his year-old single-season record. In late September, the Yankees visited Cleveland and won three out of four games, giving them the upper hand in the race, and they clinched their first pennant a few days later. Ruth finished the regular season with 59 home runs, batting .378 and with a slugging percentage of .846. The Yankees had high expectations when they met the New York Giants in the 1921 World Series, every game of which was played in the Polo Grounds. The Yankees won the first two games with Ruth in the lineup. However, Ruth badly scraped his elbow during Game 2 when he slid into third base (he had walked and stolen both second and third bases). After the game, he was told by the team physician not to play the rest of the series. Despite this advice, he did play in the next three games, and pinch-hit in Game Eight of the best-of-nine series, but the Yankees lost, five games to three. Ruth hit .316, drove in five runs and hit his first World Series home run. After the Series, Ruth and teammates Bob Meusel and Bill Piercy participated in a barnstorming tour in the Northeast. A rule then in force prohibited World Series participants from playing in exhibition games during the offseason, the purpose being to prevent Series participants from replicating the Series and undermining its value. Baseball Commissioner Kenesaw Mountain Landis suspended the trio until May 20, 1922, and fined them their 1921 World Series checks. In August 1922, the rule was changed to allow limited barnstorming for World Series participants, with Landis's permission required. On March 6, 1922, Ruth signed a new contract for three years at $52,000 a year. This was the largest sum ever paid to a ballplayer up to that point, and it represented 40% of the team's player payroll. Despite his suspension, Ruth was named the Yankees' new on-field captain prior to the 1922 season. 
During the suspension, he worked out with the team in the morning and played exhibition games with the Yankees on their off days. He and Meusel returned on May 20 to a sellout crowd at the Polo Grounds, but Ruth batted 0-for-4 and was booed. On May 25, he was thrown out of the game for throwing dust in umpire George Hildebrand's face, then climbed into the stands to confront a heckler. Ban Johnson ordered him fined, suspended, and stripped of his position as team captain. In his shortened season, Ruth appeared in 110 games, batted .315 with 35 home runs, and drove in 99 runs, but the 1922 season was a disappointment in comparison to his two previous dominating years. Despite Ruth's off-year, the Yankees managed to win the pennant and faced the New York Giants in the World Series for the second consecutive year. In the Series, Giants manager John McGraw instructed his pitchers to throw Ruth nothing but curveballs, and he never adjusted. Ruth had just two hits in seventeen at bats, and the Yankees lost to the Giants for the second straight year, by 4–0 (with one tie game). Sportswriter Joe Vila called him "an exploded phenomenon". After the season, Ruth was a guest at an Elks Club banquet, set up by Ruth's agent with Yankee team support. There, each speaker, concluding with future New York mayor Jimmy Walker, censured him for his poor behavior. An emotional Ruth promised reform, and, to the surprise of many, followed through. When he reported to spring training, he was in his best shape as a Yankee. The Yankees' status as tenants of the Giants at the Polo Grounds had become increasingly uneasy, and in 1922, Giants owner Charles Stoneham stated that the Yankees' lease, expiring after that season, would not be renewed. Ruppert and Huston had long contemplated a new stadium, and had taken an option on property at 161st Street and River Avenue in the Bronx. Yankee Stadium was completed in time for the home opener on April 18, 1923, at which the Babe hit the first home run in what was quickly dubbed "the House that Ruth Built". The ballpark was designed with Ruth in mind: although the venue's left-field fence was further from home plate than at the Polo Grounds, Yankee Stadium's right-field fence was closer, making home runs easier to hit for left-handed batters. To spare Ruth's eyes, right field, his defensive position, was not pointed into the afternoon sun, as was traditional; left fielder Meusel was soon suffering headaches from squinting toward home plate. During the 1923 season, the Yankees were never seriously challenged and won the AL pennant by 17 games. Ruth finished the season with a career-high .393 batting average and 41 home runs, which tied Cy Williams for the most in the major leagues that year. Ruth hit a career-high 45 doubles in 1923, and he reached base 379 times, then a major league record. For the third straight year, the Yankees faced the Giants in the World Series, which Ruth dominated. He batted .368, walked eight times, scored eight runs, hit three home runs and slugged 1.000 during the series, as the Yankees christened their new stadium with their first World Series championship, four games to two. In 1924, the Yankees were favored to become the first team to win four consecutive pennants. Plagued by injuries, they found themselves in a battle with the Senators. Although the Yankees won 18 of 22 games at one point in September, the Senators beat them out by two games. Ruth hit .378, winning his only AL batting title, with a league-leading 46 home runs. 
Ruth had kept up his efforts to stay in shape in 1923 and 1924, but by early 1925 he had put on a great deal of weight. His annual visit to Hot Springs, Arkansas, where he exercised and took saunas early in the year, did him no good as he spent much of the time carousing in the resort town. He became ill while there, and suffered relapses during spring training. Ruth collapsed in Asheville, North Carolina, as the team journeyed north. He was put on a train for New York, where he was briefly hospitalized. A rumor circulated that he had died, prompting British newspapers to print a premature obituary. In New York, Ruth collapsed again and was found unconscious in his hotel bathroom. He was taken to a hospital where he suffered multiple convulsions. After sportswriter W. O. McGeehan wrote that Ruth's illness was due to binging on hot dogs and soda pop before a game, it became known as "the bellyache heard 'round the world". However, the exact cause of his ailment has never been confirmed and remains a mystery. Glenn Stout, in his history of the Yankees, notes that the Ruth legend is "still one of the most sheltered in sports"; he suggests that alcohol was at the root of Ruth's illness, pointing to the fact that Ruth remained at St. Vincent's Hospital for six weeks but was allowed to leave, under supervision, for workouts with the team during part of that time. He concludes that the hospitalization was behavior-related. Playing just 98 games, Ruth had his worst season as a Yankee; he finished with a .290 average and 25 home runs. The Yankees finished next to last in the AL with a 69–85 record, their last season with a losing record until 1965. Ruth spent part of the offseason of 1925–26 working out at Artie McGovern's gym, where he got back into shape. Barrow and Huggins had rebuilt the team and surrounded the veteran core with good young players like Tony Lazzeri and Lou Gehrig, but the Yankees were not expected to win the pennant. Ruth returned to his normal production during 1926, when he batted .372 with 47 home runs and 146 RBIs. The Yankees built a 10-game lead by mid-June and coasted to win the pennant by three games. The St. Louis Cardinals had won the National League with the lowest winning percentage for a pennant winner to that point (.578) and the Yankees were expected to win the World Series easily. Although the Yankees won the opener in New York, St. Louis took Games Two and Three. In Game Four, Ruth hit three home runs—the first time this had been done in a World Series game—to lead the Yankees to victory. In the fifth game, Ruth caught a ball as he crashed into the fence. The play was described by baseball writers as a defensive gem. New York took that game, but Grover Cleveland Alexander won Game Six for St. Louis to tie the Series at three games each, then got very drunk. He was nevertheless inserted into Game Seven in the seventh inning and shut down the Yankees to win the game, 3–2, and win the Series. Ruth had hit his fourth home run of the Series earlier in the game and was the only Yankee to reach base off Alexander; he walked in the ninth inning before being thrown out to end the game when he attempted to steal second base. Although Ruth's attempt to steal second is often deemed a baserunning blunder, Creamer pointed out that the Yankees' chances of tying the game would have been greatly improved with a runner in scoring position. The 1926 World Series was also known for Ruth's promise to Johnny Sylvester, a hospitalized 11-year-old boy. 
Sylvester had been injured in a fall from a horse, and a friend of Sylvester's father gave the boy two autographed baseballs signed by Yankees and Cardinals. The friend also relayed a promise from Ruth (who did not know the boy) that he would hit a home run on his behalf. After the Series, Ruth visited the boy in the hospital. When the matter became public, the press greatly inflated it, and by some accounts, Ruth allegedly saved the boy's life by visiting him, emotionally promising to hit a home run, and doing so. The 1927 New York Yankees team is considered one of the greatest squads to ever take the field. Known as Murderers' Row because of the power of its lineup, the team clinched first place on Labor Day, won a then-AL-record 110 games and took the AL pennant by 19 games. There was no suspense in the pennant race, and the nation turned its attention to Ruth's pursuit of his own single-season home run record of 59 round trippers. Ruth was not alone in this chase. Teammate Lou Gehrig proved to be a slugger who was capable of challenging Ruth for his home run crown; he tied Ruth with 24 home runs late in June. Through July and August, the dynamic duo was never separated by more than two home runs. Gehrig took the lead, 45–44, in the first game of a doubleheader at Fenway Park early in September; Ruth responded with two blasts of his own to retake the lead, permanently as it proved—Gehrig finished with 47. Even so, as of September 6, Ruth was still several games off his 1921 pace, and going into the final series against the Senators, had only 57. He hit two in the first game of the series, including one off Paul Hopkins, who was facing his first major league batter, to tie the record. The following day, September 30, he hit his 60th homer in the eighth inning off Tom Zachary to break a 2–2 tie and claim the record. "Sixty! Let's see some son of a bitch try to top that one," Ruth exulted after the game. In addition to his career-high 60 home runs, Ruth batted .356, drove in 164 runs and slugged .772. In the 1927 World Series, the Yankees swept the Pittsburgh Pirates in four games; the National Leaguers were disheartened after watching the Yankees take batting practice before Game One, with ball after ball leaving Forbes Field. According to Appel, "The 1927 New York Yankees. Even today, the words inspire awe ... all baseball success is measured against the '27 team." The following season started off well for the Yankees, who led the league in the early going. But the Yankees were plagued by injuries, erratic pitching and inconsistent play. The Philadelphia Athletics, rebuilding after some lean years, erased the Yankees' big lead and even took over first place briefly in early September. The Yankees, however, regained first place when they beat the Athletics three out of four games in a pivotal series at Yankee Stadium later that month, and clinched the pennant in the final weekend of the season. Ruth's play in 1928 mirrored his team's performance. He got off to a hot start and on August 1, he had 42 home runs. This put him ahead of his 60 home run pace from the previous season. He then slumped over the latter part of the season, hitting just twelve home runs in the last two months. Ruth's batting average also fell to .323, well below his career average. Nevertheless, he ended the season with 54 home runs. 
The Yankees swept the favored Cardinals in four games in the World Series, with Ruth batting .625 and hitting three home runs in Game Four, including one off Alexander. Before the 1929 season, Ruppert (who had bought out Huston in 1923) announced that the Yankees would wear uniform numbers to allow fans at cavernous Yankee Stadium to easily identify the players. The Cardinals and Indians had each experimented with uniform numbers; the Yankees were the first to use them on both home and away uniforms. Ruth batted third and was given number 3. According to a long-standing baseball legend, the Yankees adopted their now-iconic pinstriped uniforms in hopes of making Ruth look slimmer. In truth, though, they had been wearing pinstripes since Ruppert bought the team in 1915. Although the Yankees started well, the Athletics soon proved they were the better team in 1929, splitting two series with the Yankees in the first month of the season, then taking advantage of a Yankee losing streak in mid-May to gain first place. Although Ruth performed well, the Yankees were not able to catch the Athletics—Connie Mack had built another great team. Tragedy struck the Yankees late in the year as manager Huggins died at 51 of erysipelas, a bacterial skin infection, on September 25, only ten days after he had last directed the team. Despite their past differences, Ruth praised Huggins and described him as a "great guy". The Yankees finished second, 18 games behind the Athletics. Ruth hit .345 during the season, with 46 home runs and 154 RBIs. On October 17, the Yankees hired Bob Shawkey as manager; he was their fourth choice. Ruth had politicked for the job of player-manager, but Ruppert and Barrow never seriously considered him for the position. Stout deemed this the first hint Ruth would have no future with the Yankees once he retired as a player. Shawkey, a former Yankees player and teammate of Ruth, would prove unable to command the slugger's respect. On January 7, 1930, salary negotiations between the Yankees and Ruth quickly broke down. Having just concluded a three-year contract at an annual salary of $70,000, Ruth promptly rejected both the Yankees' initial proposal of $70,000 for one year and their 'final' offer of two years at $75,000 per year—the latter figure equaling the annual salary of then-US President Herbert Hoover; instead, Ruth demanded at least $85,000 and three years. When asked why he thought he was "worth more than the President of the United States," Ruth responded: "Say, if I hadn't been sick last summer, I'd have broken hell out of that home run record! Besides, the President gets a four-year contract. I'm only asking for three." Exactly two months later, a compromise was reached, with Ruth settling for two years at an unprecedented $80,000 per year. In 1930, Ruth hit .359 with 49 home runs (his best in any season after 1928) and 153 RBIs, and pitched his first game in nine years, a complete-game victory. Nevertheless, the Athletics won their second consecutive pennant and World Series, as the Yankees finished in third place, sixteen games back. At the end of the season, Shawkey was fired and replaced with Cubs manager Joe McCarthy, though Ruth again unsuccessfully sought the job. McCarthy was a disciplinarian, but chose not to interfere with Ruth, and the slugger for his part did not seek conflict with the manager. The team improved in 1931, but was no match for the Athletics, who won 107 games and finished comfortably in front of the Yankees. Ruth hit .373, with 46 home runs and 163 RBIs. 
He had 31 doubles, his most since 1924. In the 1932 season, the Yankees went 107–47 and won the pennant. Ruth's effectiveness had decreased somewhat, but he still hit .341 with 41 home runs and 137 RBIs. Nevertheless, he was sidelined twice due to injuries during the season. The Yankees faced the Cubs, McCarthy's former team, in the 1932 World Series. There was bad blood between the two teams, as the Yankees resented the Cubs awarding only half a World Series share to Mark Koenig, a former Yankee. The games at Yankee Stadium had not been sellouts; both were won by the home team, with Ruth collecting just two singles but scoring four runs, as he was walked four times by the Cubs pitchers. In Chicago, Ruth was resentful of the hostile crowds that met the Yankees' train and jeered them at the hotel. The crowd for Game Three included New York Governor Franklin D. Roosevelt, the Democratic candidate for president, who sat with Chicago Mayor Anton Cermak. Many in the crowd threw lemons at Ruth, a sign of derision, and others (as well as the Cubs themselves) shouted abuse at Ruth and other Yankees. They were briefly silenced when Ruth hit a three-run home run off Charlie Root in the first inning, but soon revived, and the Cubs tied the score at 4–4 in the fourth inning. When Ruth came to the plate in the top of the fifth, the Chicago crowd and players, led by pitcher Guy Bush, were screaming insults at Ruth. With the count at two balls and one strike, Ruth gestured, possibly in the direction of center field, and after the next pitch (a strike), may have pointed there with one hand. Ruth hit the fifth pitch over the center field fence; estimates were that it traveled nearly 500 feet. Whether or not Ruth intended to indicate where he planned to (and did) hit the ball, the incident has gone down in legend as Babe Ruth's called shot. The Yankees won Game Three, and the following day clinched the Series with another victory. During that game, Bush hit Ruth on the arm with a pitch, causing words to be exchanged and provoking a game-winning Yankee rally. Ruth remained productive in 1933. He batted .301, with 34 home runs, 103 RBIs, and a league-leading 114 walks, as the Yankees finished in second place, seven games behind the Senators. Athletics manager Connie Mack selected him to play right field in the first Major League Baseball All-Star Game, held on July 6, 1933, at Comiskey Park in Chicago. He hit the first home run in the All-Star Game's history, a two-run blast against Bill Hallahan during the third inning, which helped the AL win the game 4–2. During the final game of the 1933 season, as a publicity stunt organized by his team, Ruth was called upon to pitch and threw a complete-game victory against the Red Sox, his final appearance as a pitcher. Despite unremarkable pitching numbers, Ruth had a 5–0 record in five games for the Yankees, raising his career totals to 94–46. In 1934, Ruth played in his last full season with the Yankees. By this time, years of high living were starting to catch up with him. His conditioning had deteriorated to the point that he could no longer field or run. He accepted a pay cut to $35,000 from Ruppert, but he was still the highest-paid player in the major leagues. He could still handle a bat and recorded a .288 batting average with 22 home runs; these were statistics that Reisler described as "merely mortal". Ruth was selected to the AL All-Star team for the second consecutive year, even though he was in the twilight of his career. 
During the game, New York Giants pitcher Carl Hubbell struck out Ruth and four other future Hall-of-Famers consecutively. The Yankees finished second again, seven games behind the Tigers. Although Ruth knew he was nearly finished as a player, he desired to remain in baseball as a manager. He was often spoken of as a possible candidate as managerial jobs opened up, but in 1932, when he was mentioned as a contender for the Red Sox position, Ruth stated that he was not yet ready to leave the field. There were rumors that Ruth was a likely candidate each time the Cleveland Indians, Cincinnati Reds, or Detroit Tigers were looking for a manager, but nothing came of them. Just before the 1934 season, Ruppert offered to make Ruth the manager of the Yankees' top minor-league team, the Newark Bears, but Ruth was talked out of it by his wife, Claire, and his business manager, Christy Walsh. Shortly afterward, Tigers owner Frank Navin made a proposal to Ruppert and Barrow—if the Yankees traded Ruth to Detroit, Navin would name Ruth player-manager. Navin believed Ruth would not only bring a winning attitude to a team that had not finished higher than third since 1923, but would also revive the Tigers' sagging attendance figures. Navin asked Ruth to come to Detroit for an interview. However, Ruth balked, since Walsh had already arranged for him to take part in a celebrity golf tournament in Hawaii. Ruth and Navin negotiated over the phone while Ruth was in Hawaii, but those talks foundered when Navin refused to give Ruth a portion of the Tigers' box office proceeds. Early in the 1934 season, Ruth openly campaigned to become the Yankees manager. However, the Yankee job was never a serious possibility. Ruppert always supported McCarthy, who would remain in his position for another 12 seasons. The relationship between Ruth and McCarthy had been lukewarm at best and Ruth's managerial ambitions further chilled their interpersonal relations. By the end of the season, Ruth hinted that he would retire unless Ruppert named him manager of the Yankees. When the time came, Ruppert wanted his slugger to leave the team without drama or hard feelings. During the 1934–35 offseason, Ruth circled the world with his wife; the trip included a barnstorming tour of the Far East. At his final stop in the United Kingdom before returning home, Ruth was introduced to cricket by Australian player Alan Fairfax, and after having little luck in a cricketer's stance, he stood as a baseball batter and launched some massive shots around the field, destroying the bat in the process. Although Fairfax regretted that he did not have the time to make Ruth a cricketer, Ruth had lost any interest in such a career upon learning that the best batsmen made only about $40 per week. Also during the offseason, Ruppert had been sounding out the other clubs in hopes of finding one that would be willing to take Ruth as a manager and/or a player. However, the only serious offer came from Athletics owner-manager Connie Mack, who gave some thought to stepping down as manager in favor of Ruth. Mack later dropped the idea, though, saying that Ruth's wife would be running the team in a month if Ruth ever took over. While the barnstorming tour was underway, Ruppert began negotiating with Boston Braves owner Judge Emil Fuchs, who wanted Ruth as a gate attraction. The Braves had enjoyed modest recent success, finishing fourth in the National League in both 1933 and 1934, but the team drew poorly at the box office. 
Unable to afford the rent at Braves Field, Fuchs had considered holding dog races there when the Braves were not at home, only to be turned down by Landis. After a series of phone calls, letters, and meetings, the Yankees traded Ruth to the Braves on February 26, 1935. Ruppert had stated that he would not release Ruth to go to another team as a full-time player. For this reason, it was announced that Ruth would become a team vice president and would be consulted on all club transactions, in addition to playing. He was also made assistant manager to Braves skipper Bill McKechnie. In a long letter to Ruth a few days before the press conference, Fuchs promised Ruth a share in the Braves' profits, with the possibility of becoming co-owner of the team. Fuchs also raised the possibility of Ruth succeeding McKechnie as manager, perhaps as early as 1936. Ruppert called the deal "the greatest opportunity Ruth ever had". There was considerable attention as Ruth reported for spring training. He did not hit his first home run of the spring until after the team had left Florida and begun the trip north, in Savannah. He hit two in an exhibition game against the Bears. Amid much press attention, Ruth played his first home game in Boston in over 16 years. Before an opening-day crowd of over 25,000, including five of New England's six state governors, Ruth accounted for all of the Braves' runs in a 4–2 defeat of the New York Giants, hitting a two-run home run, singling to drive in a third run and later in the inning scoring the fourth. Although age and weight had slowed him, he made a running catch in left field that sportswriters deemed the defensive highlight of the game. Ruth had two hits in the second game of the season, but from there things quickly went downhill for both him and the Braves. The season soon settled into a routine of Ruth performing poorly on the few occasions he played at all. As April passed into May, Ruth's physical deterioration became even more pronounced. While he remained productive at the plate early on, he could do little else. His conditioning had become so poor that he could barely trot around the bases. He made so many errors that three Braves pitchers told McKechnie that they would not take the mound if he were in the lineup. Before long, Ruth stopped hitting, too. He grew increasingly annoyed that McKechnie ignored most of his advice. For his part, McKechnie later said that Ruth's huge salary and refusal to stay with the team while on the road made it nearly impossible to enforce discipline. The Braves' deterioration mirrored Ruth's; their Opening Day win was the only time they were above .500 all year. Ruth soon realized that Fuchs had deceived him, and had no intention of making him manager or giving him any significant off-field duties. He later stated that his only duties as vice president consisted of making public appearances and autographing tickets. Ruth also found out that far from giving him a share of the profits, Fuchs wanted him to invest some of "his" money in the team in a last-ditch effort to improve its balance sheet. As it turned out, Fuchs and Ruppert had both known all along that Ruth's non-playing positions were meaningless. By the end of the first month of the season, Ruth concluded he was finished even as a part-time player. As early as May 12, he asked Fuchs to let him retire. Ultimately, Fuchs persuaded Ruth to remain at least until after the Memorial Day doubleheader in Philadelphia. 
In the interim came a western road trip, during which rival teams had scheduled days to honor him. In Chicago and St. Louis, Ruth performed poorly, and his batting average sank to .155, with only two additional home runs for a total of three on the season so far. In the first two games in Pittsburgh, Ruth had only one hit, though a long fly caught by Paul Waner probably would have been a home run in any ballpark other than Forbes Field. Ruth played in the third game of the Pittsburgh series on May 25, 1935, and added one more tale to his playing legend. Ruth went 4-for-4, including three home runs, though the Braves lost the game 11–7. The last two were off Ruth's old Cubs nemesis, Guy Bush. The final home run, both of the game and of Ruth's career, sailed out of the park over the right field upper deck, the first time anyone had hit a fair ball completely out of Forbes Field. Ruth was urged to make this his last game, but he had given his word to Fuchs and played in Cincinnati and Philadelphia. The first game of the doubleheader in Philadelphia—the Braves lost both—was his final major league appearance. Ruth retired on June 2 after an argument with Fuchs. He finished 1935 with a .181 average—easily his worst as a full-time position player—and the final six of his 714 home runs. The Braves, 10–27 when Ruth left, finished 38–115, at .248 the worst winning percentage in modern National League history. Insolvent like his team, Fuchs gave up control of the Braves before the end of the season; the National League took over the franchise at the end of the year. Although Fuchs had given Ruth his unconditional release, no major league team expressed an interest in hiring him in any capacity. Ruth still hoped to be hired as a manager if he could not play anymore, but only one managerial position, Cleveland, became available between Ruth's retirement and the end of the 1937 season. Asked if he had considered Ruth for the job, Indians owner Alva Bradley replied negatively. The writer Creamer believed Ruth was unfairly treated in never being given an opportunity to manage a major league club; he saw no necessary relationship between personal conduct and managerial success, noting that McGraw, Billy Martin, and Bobby Valentine were winners despite character flaws. Team owners and general managers assessed Ruth's flamboyant personal habits as a reason to exclude him from a managerial job; Barrow said of him, "How can he manage other men when he can't even manage himself?" Ruth played a great deal of golf and appeared in a few exhibition baseball games, where he demonstrated a continuing ability to draw large crowds. This appeal contributed to the Dodgers hiring him as first base coach in 1938. When Ruth was hired, Brooklyn general manager Larry MacPhail made it clear that Ruth would not be considered for the manager's job if, as expected, Burleigh Grimes retired at the end of the season. Although much was said about what Ruth could teach the younger players, in practice, his duties were to appear on the field in uniform and encourage base runners—he was not called upon to relay signs. He got along well with everyone except team captain Leo Durocher, who was hired as Grimes' replacement at season's end. Ruth then left his job as a first base coach and would never again work in any capacity in the game of baseball. 
On July 4, 1939, Ruth spoke on Lou Gehrig Appreciation Day at Yankee Stadium as members of the 1927 Yankees and a sellout crowd turned out to honor the first baseman, who was forced into premature retirement by ALS, which would kill him two years later. The next week, Ruth went to Cooperstown, New York, for the formal opening of the Baseball Hall of Fame. Three years earlier, he was one of the first five players elected to the hall. As radio broadcasts of baseball games became popular, Ruth sought a job in that field, arguing that his celebrity and knowledge of baseball would assure large audiences, but he received no offers. During World War II, he made many personal appearances to advance the war effort, including his last appearance as a player at Yankee Stadium, in a 1943 exhibition for the Army-Navy Relief Fund. He hit a long fly ball off Walter Johnson; the blast left the field, curving foul, but Ruth circled the bases anyway. In 1946, he made a final effort to gain a job in baseball when he contacted new Yankees boss MacPhail, but he was sent a rejection letter. Ruth met Helen Woodford (1897–1929), by some accounts, in a coffee shop in Boston where she was a waitress, and they were married as teenagers on October 17, 1914. Although Ruth later claimed to have been married in Elkton, Maryland, records show that they were married at St. Paul's Catholic Church in Ellicott City. They adopted a daughter, Dorothy (1921–1989), in 1921. Ruth and Helen separated around 1925, reportedly due to his repeated infidelities. They appeared in public as a couple for the last time during the 1926 World Series. Helen died in January 1929 at age 31 in a house fire in Watertown, Massachusetts, in a house owned by Edward Kinder, a dentist with whom she had been living as "Mrs. Kinder". In her book, "My Dad, the Babe", Dorothy claimed that she was Ruth's biological child by a mistress named Juanita Jennings. She died in 1989. On April 17, 1929 (only three months after the death of his first wife) Ruth married actress and model Claire Merritt Hodgson (1897–1976) and adopted her daughter Julia. It was the second and final marriage for both parties. By one account, Julia and Dorothy were, through no fault of their own, the reason for the seven-year rift in Ruth's relationship with teammate Lou Gehrig. Sometime in 1932, during a conversation that she assumed was private, Gehrig's mother remarked, "It's a shame [Claire] doesn't dress Dorothy as nicely as she dresses her own daughter." When the comment inevitably got back to Ruth, he angrily told Gehrig to tell his mother to mind her own business. Gehrig, in turn, took offense at what he perceived as Ruth's comment about his mother. The two men reportedly never spoke off the field until they reconciled at Yankee Stadium on Lou Gehrig Appreciation Day, July 4, 1939, which was shortly after Gehrig's retirement from baseball. Although Ruth was married throughout most of his baseball career, when Colonel Huston asked him to tone down his lifestyle, the player said, "I'll promise to go easier on drinking and to get to bed earlier, but not for you, fifty thousand dollars, or two-hundred and fifty thousand dollars will I give up women. They're too much fun." As early as the war years, doctors had cautioned Ruth to take better care of his health, and he grudgingly followed their advice, limiting his drinking and not going on a proposed trip to support the troops in the South Pacific. 
In 1946, Ruth began experiencing severe pain over his left eye and had difficulty swallowing. In November 1946, Ruth entered French Hospital in New York for tests, which revealed that he had an inoperable malignant tumor at the base of his skull and in his neck. The malady was a lesion known as nasopharyngeal carcinoma, or "lymphoepithelioma." His name and fame gave him access to experimental treatments, and he was one of the first cancer patients to receive both drugs and radiation treatment simultaneously. Having lost , he was discharged from the hospital in February and went to Florida to recuperate. He returned to New York and Yankee Stadium after the season started. The new commissioner, Happy Chandler (Judge Landis had died in 1944), proclaimed April 27, 1947, Babe Ruth Day around the major leagues, with the most significant observance to be at Yankee Stadium. A number of teammates and others spoke in honor of Ruth, who briefly addressed the crowd of almost 60,000. Around this time, developments in chemotherapy offered some hope for Ruth. The doctors had not told Ruth that he had cancer because of his family's fear that he might do himself harm. They treated him with teropterin, a folic acid derivative; he may have been the first human subject. Ruth showed dramatic improvement during the summer of 1947, so much so that his case was presented by his doctors at a scientific meeting, without using his name. He was able to travel around the country, doing promotional work for the Ford Motor Company on American Legion Baseball. He appeared again at another day in his honor at Yankee Stadium in September, but was not well enough to pitch in an old-timers game as he had hoped. The improvement was only a temporary remission, and by late 1947, Ruth was unable to help with the writing of his autobiography, "The Babe Ruth Story", which was almost entirely ghostwritten. In and out of the hospital in Manhattan, he left for Florida in February 1948, doing what activities he could. After six weeks he returned to New York to appear at a book-signing party. He also traveled to California to witness the filming of the movie based on the book. On June 5, 1948, a "gaunt and hollowed out" Ruth visited Yale University to donate a manuscript of "The Babe Ruth Story" to its library. At Yale, he met with future president George H. W. Bush, who was the captain of the Yale baseball team. On June 13, Ruth visited Yankee Stadium for the final time in his life, appearing at the 25th-anniversary celebrations of "The House that Ruth Built". By this time he had lost much weight and had difficulty walking. Introduced along with his surviving teammates from 1923, Ruth used a bat as a cane. Nat Fein's photo of Ruth taken from behind, standing near home plate and facing "Ruthville" (right field) became one of baseball's most famous and widely circulated photographs, and won the Pulitzer Prize. Ruth made one final trip on behalf of American Legion Baseball, then entered Memorial Hospital, where he would die. He was never told he had cancer, but before his death, had surmised it. He was able to leave the hospital for a few short trips, including a final visit to Baltimore. On July 26, 1948, Ruth left the hospital to attend the premiere of the film "The Babe Ruth Story". Shortly thereafter, Ruth returned to the hospital for the final time. He was barely able to speak. 
Ruth's condition gradually grew worse; only a few visitors were allowed to see him, one of whom was National League president and future Commissioner of Baseball Ford Frick. "Ruth was so thin it was unbelievable. He had been such a big man and his arms were just skinny little bones, and his face was so haggard", Frick said years later. Thousands of New Yorkers, including many children, stood vigil outside the hospital during Ruth's final days. On August 16, 1948, at 8:01 p.m., Ruth died in his sleep at the age of 53. His open casket was placed on display in the rotunda of Yankee Stadium, where it remained for two days; 77,000 people filed past to pay him tribute. His funeral Mass took place at St. Patrick's Cathedral; a crowd estimated at 75,000 waited outside. Ruth was buried on a hillside in Section 25 at the Gate of Heaven Cemetery in Hawthorne, New York. An epitaph by Cardinal Spellman appears on his headstone. His second wife, Claire Merritt Ruth, would be interred with him 28 years later in 1976. On April 19, 1949, the Yankees unveiled a granite monument in Ruth's honor in center field of Yankee Stadium. The monument was located in the field of play next to a flagpole and similar tributes to Huggins and Gehrig until the stadium was remodeled from 1974 to 1975, which resulted in the outfield fences moving inward and enclosing the monuments from the playing field. This area was known thereafter as Monument Park. Yankee Stadium, "the House that Ruth Built", was replaced after the 2008 season with a new Yankee Stadium across the street from the old one; Monument Park was subsequently moved to the new venue behind the center field fence. Ruth's uniform number 3 has been retired by the Yankees, and he is one of five Yankees players or managers to have a granite monument within the stadium. The Babe Ruth Birthplace Museum is located at 216 Emory Street, a Baltimore row house where Ruth was born, and three blocks west of Oriole Park at Camden Yards, where the AL's Baltimore Orioles play. The property was restored and opened to the public in 1973 by the non-profit Babe Ruth Birthplace Foundation, Inc. Ruth's widow, Claire, his two daughters, Dorothy and Julia, and his sister, Mamie, helped select and install exhibits for the museum. Ruth was the first baseball star to be the subject of overwhelming public adulation. Baseball had been known for star players such as Ty Cobb and "Shoeless Joe" Jackson, but both men had uneasy relations with fans. In Cobb's case, the incidents were sometimes marked by violence. Ruth's biographers agreed that he benefited from the timing of his ascension to "Home Run King". The country had been hit hard by both the war and the 1918 flu pandemic and longed for something to help put these traumas behind it. Ruth also resonated in a country which felt, in the aftermath of the war, that it took second place to no one. Montville argued that Ruth was a larger-than-life figure who was capable of unprecedented athletic feats in the nation's largest city. The Babe became an icon of the significant social changes that marked the early 1920s. In his history of the Yankees, Glenn Stout noted that "Ruth was New York incarnate—uncouth and raw, flamboyant and flashy, oversized, out of scale, and absolutely unstoppable". During his lifetime, Ruth had become a symbol of the United States. During World War II, Japanese soldiers could think of no greater insult than to yell in English, "To hell with Babe Ruth", to anger American soldiers. 
Ruth replied that he hoped that "every Jap that mention[ed] my name gets shot". Creamer recorded that "Babe Ruth transcended sport and moved far beyond the artificial limits of baselines and outfield fences and sports pages". Wagenheim stated, "He appealed to a deeply rooted American yearning for the definitive climax: clean, quick, unarguable." According to Glenn Stout, "Ruth's home runs were exalted, uplifting experience that meant more to fans than any runs they were responsible for. A Babe Ruth home run was an event unto itself, one that meant anything was possible." Ruth's penchant for hitting home runs altered how baseball is played. Prior to 1920, home runs were unusual, and managers tried to win games by getting a runner on base and bringing him around to score through such means as the stolen base, the bunt, and the hit and run. Advocates of what was dubbed "inside baseball", such as Giants manager McGraw, disliked the home run, considering it a blot on the purity of the game. According to sportswriter W. A. Phelon, after the 1920 season, Ruth's breakout performance that season and the response in excitement and attendance, "settled, for all time to come, that the American public is nuttier over the Home Run than the Clever Fielding or the Hitless Pitching. Viva el Home Run and two times viva Babe Ruth, exponent of the home run, and overshadowing star." Bill James noted, "When the owners discovered that the fans "liked" to see home runs, and when the foundations of the games were simultaneously imperiled by disgrace [in the Black Sox Scandal], then there was no turning back." While a few, such as McGraw and Cobb, decried the passing of the old-style play, teams quickly began to seek and develop sluggers. According to contemporary sportswriter Grantland Rice, only two sports figures of the 1920s approached Ruth in popularity—boxer Jack Dempsey and racehorse Man o' War. One of the factors that contributed to Ruth's broad appeal was the uncertainty about his family and early life. Ruth appeared to exemplify the American success story, that even an uneducated, unsophisticated youth, without any family wealth or connections, can do something better than anyone else in the world. Montville noted that "the fog [surrounding his childhood] will make him forever accessible, universal. He will be the patron saint of American possibility." Similarly, the fact that Ruth played in the pre-television era, when a relatively small portion of his fans had the opportunity to see him play allowed his legend to grow through word of mouth and the hyperbole of sports reporters. Reisler noted that recent sluggers who surpassed Ruth's 60-home run mark, such as Mark McGwire and Barry Bonds, generated much less excitement than when Ruth repeatedly broke the single-season home run record in the 1920s. Ruth dominated a relatively small sports world, while Americans of the present era have many sports available to watch. Creamer termed Ruth "a unique figure in the social history of the United States". Ruth has even entered the language: a dominant figure in a field, whether within or outside sports, is often referred to as "the Babe Ruth" of that field. Similarly, "Ruthian" has come to mean in sports, "colossal, dramatic, prodigious, magnificent; with great power." In 2006, Montville noted that more books have been written about Ruth than any other member of the Baseball Hall of Fame. At least five of these books (including Creamer's and Wagenheim's) were written in 1973 and 1974. 
The books were timed to capitalize on the increase in public interest in Ruth as Henry Aaron approached his career home run mark, which he broke on April 8, 1974. As he approached Ruth's record, Aaron stated, "I can't remember a day this year or last when I did not hear the name of Babe Ruth." Montville suggested that Ruth is probably even more popular today than he was when his career home run record was broken by Aaron. The long ball era that Ruth started continues in baseball, to the delight of the fans. Owners build ballparks to encourage home runs, which are featured on "SportsCenter" and "Baseball Tonight" each evening during the season. The questions of performance-enhancing drug use, which dogged later home run hitters such as McGwire and Bonds, do nothing to diminish Ruth's reputation; his overindulgences with beer and hot dogs seem part of a simpler time. In various surveys and rankings, Ruth has been named the greatest baseball player of all time. In 1998, "The Sporting News" ranked him number one on the list of "Baseball's 100 Greatest Players". In 1999, baseball fans named Ruth to the Major League Baseball All-Century Team. He was named baseball's Greatest Player Ever in a ballot commemorating the 100th anniversary of professional baseball in 1969. The Associated Press reported in 1993 that Muhammad Ali was tied with Babe Ruth as the most recognized athletes in America. In a 1999 ESPN poll, he was ranked as the second-greatest U.S. athlete of the century, behind Michael Jordan. In 1983, the United States Postal Service honored Ruth with the issuance of a twenty-cent stamp. Several of the most expensive items of sports memorabilia and baseball memorabilia ever sold at auction are associated with Ruth. As of November 2016, the most expensive piece of sports memorabilia ever sold is Ruth's 1920 Yankees jersey, which sold for $4,415,658 in 2012. The bat with which he hit the first home run at Yankee Stadium is in "The Guinness Book of World Records" as the most expensive baseball bat sold at auction, having fetched $1,265,000 on December 2, 2004. A hat of Ruth's from the 1934 season set a record for a baseball cap when David Wells sold it at auction for $537,278 in 2012. In 2017, Charlie Sheen sold Ruth's 1927 World Series ring for $2,093,927 at auction. It easily broke the record for a championship ring previously set when Julius Erving's 1974 ABA championship ring sold for $460,741 in 2011. One long-term survivor of the craze over Ruth may be the Baby Ruth candy bar. The original company to market the confectionery, the Curtis Candy Company, maintained that the bar was named after Ruth Cleveland, daughter of former president Grover Cleveland. She died in 1904 and the bar was first marketed in 1921, at the height of the craze over the slugger. The slugger later sought to market candy bearing his name; he was refused a trademark because of the Baby Ruth bar. Corporate files from 1921 are no longer extant; the brand has changed hands several times and is now owned by the Nestlé company. The Ruth estate licensed his likeness for use in an advertising campaign for Baby Ruth in 1995. Due to a marketing arrangement, in 2005, the Baby Ruth bar became the official candy bar of Major League Baseball. Montville noted the continuing relevance of Babe Ruth in American culture, more than three-quarters of a century after he last swung a bat in a major league game: Barnard's Star Barnard's Star is a very-low-mass red dwarf about 6 light-years away from Earth in the constellation of Ophiuchus. 
It is the fourth-nearest-known individual star to the Sun (after the three components of the Alpha Centauri system) and the closest star in the Northern Celestial Hemisphere. Despite its proximity, the star has a dim apparent magnitude of +9.5 and is invisible to the unaided eye; it is much brighter in the infrared than in visible light. The star is named after the American astronomer E. E. Barnard. He was not the first to observe the star (it appeared on Harvard University plates in 1888 and 1890), but in 1916 he measured its proper motion (which is a function of its proximity to Earth, not of its actual space velocity) as 10.3 arcseconds per year, which remains the largest proper motion of any star relative to the Sun. It is likely to keep this distinction, as its proximity to the Sun and its high velocity make it unlikely that any star with a faster proper motion remains undiscovered. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalogue and standardize proper names for stars. The WGSN approved the name "Barnard's Star" for this star on 1 February 2017, and it is now included in the List of IAU-approved Star Names. Barnard's Star is among the most studied red dwarfs because of its proximity and favorable location for observation near the celestial equator. Historically, research on Barnard's Star has focused on measuring its stellar characteristics and astrometry, and on refining the limits on possible extrasolar planets. Although Barnard's Star is an ancient star, it still experiences stellar flare events, one being observed in 1998. The star has also been the subject of some controversy. For a decade, from the early 1960s to the early 1970s, Peter van de Kamp argued that there were one or more gas giants in orbit around it. Although the presence of small terrestrial planets around Barnard's Star remains a possibility, Van de Kamp's specific claims of large gas giants were refuted in the mid-1970s. Barnard's Star is a red dwarf of the dim spectral type M4, and it is too faint to see without a telescope. Its apparent magnitude is 9.5. At 7–12 billion years of age, Barnard's Star is considerably older than the Sun, which is 4.5 billion years old, and it might be among the oldest stars in the Milky Way galaxy. Barnard's Star has lost a great deal of rotational energy, and the periodic slight changes in its brightness indicate that it rotates once in 130 days (the Sun rotates in 25). Given its age, Barnard's Star was long assumed to be quiescent in terms of stellar activity. In 1998, astronomers observed an intense stellar flare, showing that Barnard's Star is a flare star. It has the variable star designation V2500 Ophiuchi. In 2003, Barnard's Star presented the first detectable change in the radial velocity of a star caused by its own motion; further variability in its radial velocity has been attributed to stellar activity. The proper motion of Barnard's Star corresponds to a relative lateral speed of 90 km/s. The 10.3 seconds of arc it travels annually amount to a quarter of a degree in a human lifetime, roughly half the angular diameter of the full Moon. The radial velocity of Barnard's Star towards the Sun, measured from its blueshift, is 110 km/s. Combined with its proper motion, this gives a space velocity (actual velocity relative to the Sun) of 142.6 ± 0.2 km/s.
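The space-velocity figure follows from two perpendicular components and can be checked with a short back-of-the-envelope calculation. The Python sketch below assumes the rounded values quoted above, the 1.834-parsec distance quoted later in this article, and the standard conversion factor of 4.74 km/s per (arcsecond per year at one parsec); the small difference from the published 142.6 km/s comes only from rounding the inputs.

import math

# Rounded inputs from the text; the 4.74 factor converts
# (arcsec/yr) x (parsec) into km/s and is a standard constant.
proper_motion = 10.3        # arcseconds per year
radial_velocity = 110.0     # km/s, toward the Sun (from the blueshift)
distance_pc = 1.834         # parsecs

# Tangential (lateral) speed across the line of sight.
v_tangential = 4.74 * proper_motion * distance_pc          # about 90 km/s

# The two components are perpendicular, so the total space
# velocity follows from the Pythagorean theorem.
v_space = math.hypot(v_tangential, radial_velocity)        # about 142 km/s

# Sky drift over an 80-year lifetime, versus the Moon's ~0.5 degree diameter.
drift_degrees = proper_motion * 80 / 3600                  # about 0.23 degrees

print(round(v_tangential), round(v_space), round(drift_degrees, 2))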
Barnard's Star will make its closest approach to the Sun around 11,800 AD, when it will come within about 3.75 light-years. Proxima Centauri is currently the closest star to the Sun, at a distance of 4.24 light-years. Even at its closer pass in 11,800 AD, however, Barnard's Star will not be the nearest star, since by then Proxima Centauri will have moved still closer to the Sun. At the time of its closest pass, Barnard's Star will remain too dim to be seen with the naked eye: it will have brightened by only about one magnitude, to roughly 8.5, still some 2.5 magnitudes short of naked-eye visibility. Barnard's Star has a mass of about 0.14 solar masses and a radius 15% to 20% of that of the Sun. Thus, although Barnard's Star has roughly 150 times the mass of Jupiter, its radius is only 1.5 to 2.0 times Jupiter's, owing to its much higher density. Its effective temperature is 3,100 kelvins, and it has a visual luminosity of 0.0004 solar luminosities. Barnard's Star is so faint that if it were at the same distance from Earth as the Sun is, it would appear only about 100 times brighter than a full moon, comparable to the brightness of the Sun at 80 astronomical units. Barnard's Star has 10–32% of the solar metallicity. Metallicity is the proportion of stellar mass made up of elements heavier than helium; it helps classify stars relative to the galactic population. Barnard's Star seems to be typical of the old, red dwarf population II stars, yet these are also generally metal-poor halo stars. While sub-solar, Barnard's Star's metallicity is higher than that of a halo star and is in keeping with the low end of the metal-rich disk star range; this, plus its high space motion, has led to the designation "intermediate population II star", between a halo and a disk star. For a decade from 1963 to about 1973, a substantial number of astronomers accepted a claim by Peter van de Kamp that he had detected, using astrometry, a perturbation in the proper motion of Barnard's Star consistent with its having one or more planets comparable in mass with Jupiter. Van de Kamp had been observing the star since 1938, attempting, with colleagues at the Swarthmore College observatory, to find minuscule variations of one micrometre in its position on photographic plates consistent with orbital perturbations that would indicate a planetary companion; this involved as many as ten people averaging their results in looking at plates, to avoid systematic individual errors. Van de Kamp's initial suggestion was a single planet of roughly Jupiter mass at a distance of 4.4 AU in a slightly eccentric orbit, and these measurements were apparently refined in a 1969 paper. Later that year, Van de Kamp suggested that there were instead two planets, one of them of about 1.1 Jupiter masses. Other astronomers subsequently repeated Van de Kamp's measurements, and two papers in 1973 undermined the claim of a planet or planets. George Gatewood and Heinrich Eichhorn, at a different observatory and using newer plate-measuring techniques, failed to verify the planetary companion. Another paper, published by John L. Hershey four months earlier and also using the Swarthmore observatory, found that changes in the astrometric field of various stars correlated with the timing of adjustments and modifications that had been carried out on the refractor telescope's objective lens; the claimed planet was thus attributed to an artifact of maintenance and upgrade work.
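The scale of the signal Van de Kamp was claiming can be estimated from the figures above. The Python sketch below assumes a Jupiter-mass companion at 4.4 AU around a 0.14-solar-mass star seen from 1.834 parsecs (the distance quoted later in this article); it is an illustrative order-of-magnitude check, not a reconstruction of Van de Kamp's actual analysis.

import math

M_JUP_IN_MSUN = 9.55e-4     # Jupiter's mass in solar masses
m_planet = 1.0 * M_JUP_IN_MSUN
m_star = 0.14               # solar masses
a_planet = 4.4              # AU, the claimed orbital distance
distance_pc = 1.834         # parsecs to Barnard's Star

# The star circles the system barycentre on a much smaller orbit.
a_star_au = a_planet * m_planet / (m_star + m_planet)          # ~0.03 AU

# Angular size of that wobble as seen from Earth:
# 1 AU at 1 parsec subtends exactly 1 arcsecond.
wobble_arcsec = a_star_au / distance_pc                         # ~0.016 arcsec

# Orbital period from Kepler's third law (years, AU, solar masses).
period_years = math.sqrt(a_planet ** 3 / (m_star + m_planet))   # ~25 yr

print(round(wobble_arcsec, 3), round(period_years, 1))

A wobble of only a couple of hundredths of an arcsecond, unfolding over a couple of decades, is precisely the kind of signal that slowly accumulating instrumental changes can mimic, which is why the lens maintenance identified by Hershey was enough to account for the claimed detection.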
The affair has been discussed as part of a broader scientific review. Van de Kamp never acknowledged any error and published a further claim of two planets' existence as late as 1982; he died in 1995. Wulff Heintz, Van de Kamp's successor at Swarthmore and an expert on double stars, questioned his findings and began publishing criticisms from 1976 onwards. The two men were reported to have become estranged from each other because of this. Although the possibility of planets was not completely ruled out, null results for planetary companions continued throughout the 1980s and 1990s, the latest being based on interferometric work with the Hubble Space Telescope in 1999. By refining the values of a star's motion, the mass and orbital boundaries for possible planets are tightened: in this way astronomers are often able to describe what types of planets cannot orbit a given star. M dwarfs such as Barnard's Star are more easily studied than larger stars in this regard because their lower masses render perturbations more obvious. Gatewood was thus able to show in 1995 that massive planets were impossible around Barnard's Star, in a paper which helped refine the negative certainty regarding planetary objects in general. In 1999, work with the Hubble Space Telescope further excluded large planetary companions with an orbital period of less than 1,000 days (Jupiter's orbital period is 4,332 days), while Kuerster determined in 2003 that within the habitable zone around Barnard's Star, planets are not possible with an "M sin i" value greater than 7.5 times the mass of the Earth, or with a mass greater than 3.1 times the mass of Neptune (much lower than van de Kamp's smallest suggested value). In 2012, a research paper was published that further refined planet mass boundaries for the star. Using radial velocity measurements taken over a period of 25 years from the Lick and Keck Observatories, and applying Monte Carlo analysis for both circular and eccentric orbits, upper mass limits for planets out to 1,000-day orbits were determined. Planets above two Earth masses in orbits of less than 10 days were excluded, and planets of more than ten Earth masses out to a two-year orbit were also confidently ruled out. Even though this research has greatly restricted the possible properties of planets around Barnard's Star, it has not ruled them out completely; terrestrial planets would be difficult to detect. NASA's Space Interferometry Mission, which was to begin searching for extrasolar Earth-like planets, was reported to have chosen Barnard's Star as an early search target; the mission was shut down in 2010. ESA's similar Darwin interferometry mission had the same goal but was stripped of funding in 2007. Barnard's Star was also studied as part of Project Daedalus. Undertaken between 1973 and 1978, the study suggested that rapid, unmanned travel to another star system was possible with existing or near-future technology. Barnard's Star was chosen as a target partly because it was believed to have planets. The theoretical model suggested that a nuclear pulse rocket employing nuclear fusion (specifically, electron bombardment of deuterium and helium-3) and accelerating for four years could achieve a velocity of 12% of the speed of light. The star could then be reached in 50 years, within a human lifetime. Along with detailed investigation of the star and any companions, the interstellar medium would be examined and baseline astrometric readings performed.
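The headline numbers of the Daedalus study are easy to sanity-check. The Python sketch below assumes the "about 6 light-years" distance quoted at the start of this article (taken here as 5.96), the 12%-of-light-speed cruise velocity, and the four-year boost phase; treating the boost as constant acceleration from rest is a simplification of the actual staged design.

distance_ly = 5.96          # assumed distance to Barnard's Star, light-years
cruise_speed = 0.12         # cruise velocity as a fraction of the speed of light
boost_years = 4.0           # boost phase, modelled as constant acceleration

# Working in light-years and years, so the speed of light is 1 ly/yr.
# Distance covered while accelerating from rest (average speed is half cruise).
boost_distance = 0.5 * cruise_speed * boost_years            # ~0.24 ly

# The remaining distance is covered at cruise speed.
coast_years = (distance_ly - boost_distance) / cruise_speed  # ~48 yr

total_years = boost_years + coast_years                      # ~52 yr

# Implied acceleration during the boost phase, in metres per second squared.
C = 2.998e8                 # speed of light, m/s
SECONDS_PER_YEAR = 3.156e7
accel = cruise_speed * C / (boost_years * SECONDS_PER_YEAR)  # ~0.29 m/s^2

print(round(total_years, 1), round(accel, 2))

The total comes out at roughly 50 years, matching the study's figure, and the implied acceleration is only about 3% of Earth's surface gravity.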
The initial Project Daedalus model sparked further theoretical research. In 1980, Robert Freitas suggested a more ambitious plan: a self-replicating spacecraft intended to search for and make contact with extraterrestrial life. Built and launched in Jupiter's orbit, it would reach Barnard's Star in 47 years under parameters similar to those of the original Project Daedalus. Once at the star, it would begin automated self-replication, constructing a factory, initially to manufacture exploratory probes and eventually to create a copy of the original spacecraft after 1,000 years. In 1998, a stellar flare on Barnard's Star was detected from changes in its spectral emission on July 17, during an unrelated search for variations in its proper motion. Four years passed before the flare was fully analyzed, at which point it was suggested that the flare's temperature was 8,000 K, more than twice the normal temperature of the star. Given the essentially random nature of flares, Diane Paulson, one of the authors of that study, noted that "the star would be fantastic for amateurs to observe". The flare was surprising because intense stellar activity is not expected in stars of such age. Flares are not completely understood, but are believed to be caused by strong magnetic fields, which suppress plasma convection and lead to sudden outbursts: strong magnetic fields occur in rapidly rotating stars, while old stars tend to rotate slowly. For Barnard's Star to undergo an event of such magnitude is thus presumed to be a rarity. Research on the star's periodicity, or changes in stellar activity over a given timescale, also suggests it ought to be quiescent; research in 1998 showed weak evidence for periodic variation in the star's brightness, noting only one possible starspot over 130 days. Stellar activity of this sort has created interest in using Barnard's Star as a proxy for understanding similar stars. It is hoped that photometric studies of its X-ray and UV emissions will shed light on the large population of old M dwarfs in the galaxy. Such research has astrobiological implications: given that the habitable zones of M dwarfs are close to the star, any planets would be strongly influenced by stellar flares, winds, and plasma ejection events. Barnard's Star shares much the same neighborhood as the Sun. Its neighbors are generally of red dwarf size, the smallest and most common star type. Its closest neighbor is currently the red dwarf Ross 154, at a distance of 1.66 parsecs (5.41 light-years). The Sun and Alpha Centauri are, respectively, the next closest systems. From Barnard's Star, the Sun would appear on the diametrically opposite side of the sky, in the eastern part of the constellation Monoceros. The absolute magnitude of the Sun is 4.83, and at a distance of 1.834 parsecs it would be a first-magnitude star, as Pollux is from the Earth. Beagle The Beagle is a breed of small hound that is similar in appearance to the much larger foxhound. The beagle is a scent hound, developed primarily for hunting hare. With a great sense of smell and superior tracking instinct, the beagle is employed as a detection dog for prohibited agricultural imports and foodstuffs in quarantine around the world. The beagle is intelligent but single-minded. It is a popular pet due to its size, good temper, and lack of inherited health problems.
Although beagle-type dogs have existed for 2,500 years, the modern breed was developed in Great Britain around the 1830s from several breeds, including the Talbot Hound, the North Country Beagle, the Southern Hound, and possibly the Harrier. Beagles have been depicted in popular culture since Elizabethan times in literature and paintings, and more recently in film, television, and comic books. Snoopy of the comic strip "Peanuts" has been called "the world's most famous beagle". Dogs of similar size and purpose to the modern beagle can be traced in Ancient Greece back to around the 5th century BC. Xenophon, born around 430 BC, in his "Treatise on Hunting" or "Cynegeticus" refers to a hound that hunted hares by scent and was followed on foot. Small hounds are mentioned in the Forest Laws of Canute which exempted them from the ordinance which commanded that all dogs capable of running down a stag should have one foot mutilated. If genuine, these laws would confirm that beagle-type dogs were present in England before 1016, but it is likely the laws were written in the Middle Ages to give a sense of antiquity and tradition to Forest Law. In the 11th century, William the Conqueror brought the Talbot hound to Britain. The Talbot was a predominantly white, slow, deep-throated, scent hound derived from the St. Hubert Hound which had been developed in the 8th century. At some point the English Talbots were crossed with Greyhounds to give them an extra turn of speed. Long extinct, the Talbot strain probably gave rise to the Southern Hound which, in turn, is thought to be an ancestor of the modern-day beagle. From medieval times, "beagle" was used as a generic description for the smaller hounds, though these dogs differed considerably from the modern breed. Miniature breeds of beagle-type dogs were known from the times of Edward II and Henry VII, who both had packs of Glove Beagles, so named since they were small enough to fit on a glove, and Queen Elizabeth I kept a breed known as a Pocket Beagle, which stood at the shoulder. Small enough to fit in a "pocket" or saddlebag, they rode along on the hunt. The larger hounds would run the prey to ground, then the hunters would release the small dogs to continue the chase through underbrush. Elizabeth I referred to the dogs as her "singing beagles" and often entertained guests at her royal table by letting her Pocket Beagles cavort amid their plates and cups. 19th-century sources refer to these breeds interchangeably and it is possible that the two names refer to the same small variety. In George Jesse's "Researches into the History of the British Dog" from 1866, the early 17th-century poet and writer Gervase Markham is quoted referring to the beagle as small enough to sit on a man's hand and to the: Standards for the Pocket Beagle were drawn up as late as 1901; these genetic lines are now extinct, although modern breeders have attempted to recreate the variety. By the 18th century two breeds had been developed for hunting hare and rabbit: the Southern Hound and the North Country Beagle (or Northern Hound). The Southern Hound, a tall, heavy dog with a square head, and long, soft ears, was common from south of the River Trent and probably closely related to the Talbot Hound. Though slow, it had stamina and an excellent scenting ability. The North Country Beagle, possibly a cross between an offshoot of the Talbot stock and a Greyhound, was bred chiefly in Yorkshire and was common in the northern counties. 
It was smaller than the Southern Hound, less heavy-set and with a more pointed muzzle. It was faster than its southern counterpart but its scenting abilities were less well developed. As fox hunting became increasingly popular, numbers of both types of hound diminished. The beagle-type dogs were crossed with larger breeds such as Stag Hounds to produce the modern Foxhound. The beagle-size varieties came close to extinction but some farmers in the South ensured the survival of the prototype breeds by maintaining small rabbit-hunting packs. Reverend Phillip Honeywood established a beagle pack in Essex in the 1830s and it is believed that this pack formed the basis for the modern breed. Although details of the pack's lineage are not recorded it is thought that North Country Beagles and Southern Hounds were strongly represented; William Youatt suspected that Harriers formed a good majority of the beagle's bloodline, but the origin of the Harrier is itself obscure. Honeywood's Beagles were small, standing at about at the shoulder, and pure white according to John Mills (writing in "The Sportsman's Library" in 1845). Prince Albert and Lord Winterton also had Beagle packs around this time, and royal favour no doubt led to some revival of interest in the breed, but Honeywood's pack was regarded as the finest of the three. Although credited with the development of the modern breed, Honeywood concentrated on producing dogs for hunting and it was left to Thomas Johnson to refine the breeding to produce dogs that were both attractive and capable hunters. Two strains were developed: the rough- and smooth-coated varieties. The rough-coated beagle survived until the beginning of the 20th century, and there were even records of one making an appearance at a dog show as late as 1969, but this variety is now extinct, having probably been absorbed into the standard beagle bloodline. In the 1840s, a standard beagle type was beginning to develop; the distinction between the North Country Beagle and Southern Hound had been lost, but there was still a large variation in size, character, and reliability among the emerging packs. In 1856, "Stonehenge" (the pseudonym of John Henry Walsh), writing in the "Manual of British Rural Sports", was still dividing beagles into four varieties: the medium beagle; the dwarf or lapdog beagle; the fox beagle (a smaller, slower version of the Foxhound); and the rough-coated or terrier beagle, which he classified as a cross between any of the other varieties and one of the Scottish terrier breeds. Stonehenge also gives the start of a standard description: By 1887 the threat of extinction was on the wane: there were 18 beagle packs in England. The Beagle Club was formed in 1890 and the first standard drawn up at the same time. The following year the Association of Masters of Harriers and Beagles was formed. Both organisations aimed to further the best interests of the breed, and both were keen to produce a standard type of beagle. By 1902, the number of packs had risen to 44. Beagles were in the United States by the 1840s at the latest, but the first dogs were imported strictly for hunting and were of variable quality. Since Honeywood had only started breeding in the 1830s, it is unlikely these dogs were representative of the modern breed and the description of them as looking like straight-legged Dachshunds with weak heads has little resemblance to the standard. 
Serious attempts at establishing a quality bloodline began in the early 1870s when General Richard Rowett from Illinois imported some dogs from England and began breeding. Rowett's Beagles are believed to have formed the models for the first American standard, drawn up by Rowett, L. H. Twadell, and Norman Ellmore in 1887. The beagle was accepted as a breed by the American Kennel Club (AKC) in 1885. In the 20th century the breed has spread worldwide. On its formation, the Association of Masters of Harriers and Beagles took over the running of a regular show at Peterborough that had started in 1889, and the Beagle Club in the UK held its first show in 1896. The regular showing of the breed led to the development of a uniform type, and the beagle continued to prove a success up until the outbreak of World War I when all shows were suspended. After the war, the breed was again struggling for survival in the UK: the last of the Pocket Beagles was probably lost during this time, and registrations fell to an all-time low. A few breeders (notably Reynalton Kennels) managed to revive interest in the dog and by World War II, the breed was once again doing well. Registrations dropped again after the end of the war but almost immediately recovered. As purebred dogs, beagles have always been more popular in the United States and Canada than in their native country England. The National Beagle Club of America was formed in 1888 and by 1901 a beagle had won a Best in Show title. As in the UK, activity during World War I was minimal, but the breed showed a much stronger revival in the U.S. when hostilities ceased. In 1928 it won a number of prizes at the Westminster Kennel Club's show and by 1939 a beagle – Champion Meadowlark Draughtsman – had captured the title of top-winning American-bred dog for the year. On 12 February 2008, a beagle, K-Run's Park Me In First (Uno), won the Best In Show category at the Westminster Kennel Club show for the first time in the competition's history. In North America they have been consistently in the top-ten most-popular breeds for over 30 years. From 1953 to 1959 the beagle was ranked No. 1 on the list of the American Kennel Club's registered breeds; in 2005 and 2006 it ranked 5th out of the 155 breeds registered. In the UK they are not quite so popular, placing 28th and 30th in the rankings of registrations with the Kennel Club in 2005 and 2006 respectively. In the United States the beagle ranked 4th most popular breed in 2012 and 2013, behind the Labrador Retriever (#1), German Shepherd (#2) and Golden Retriever (#3) breeds. According to the "Oxford English Dictionary", the first mention of the beagle by name in English literature dates from c. 1475 in "The Squire of Low Degree". The origin of the word "beagle" is uncertain, although it has been suggested that the word derives from the French "begueule". It is not known why the black and tan Kerry Beagle, present in Ireland since Celtic times, has the "beagle" description, since at it is significantly taller than the modern day beagle, and in earlier times was even larger. Some writers suggest that the beagle's scenting ability may have come from cross-breeding earlier strains with the Kerry Beagle. Originally used for hunting stags, it is today used for hare and drag hunting. The general appearance of the beagle resembles a miniature Foxhound, but the head is broader and the muzzle shorter, the expression completely different and the legs shorter in proportion to the body. 
Females are slightly smaller than males on average in both height at the withers and weight. Beagles have a smooth, somewhat domed skull with a medium-length, square-cut muzzle and a black (or occasionally liver) gumdrop nose. The jaw is strong and the teeth scissor together, with the upper teeth fitting perfectly over the lower teeth and both sets aligned square to the jaw. The eyes are large, hazel or brown, with a mild hound-like pleading look. The large ears are long, soft and low-set, turning towards the cheeks slightly and rounded at the tips. Beagles have a strong, medium-length neck (which is long enough for them to easily bend to the ground to pick up a scent), with little folding in the skin but some evidence of a dewlap; a broad chest narrowing to a tapered abdomen and waist; and a long, slightly curved tail (known as the "stern") tipped with white. The white tip, known as the flag, has been selectively bred for, as it allows the dog to be easily seen when its head is down following a scent. The tail does not curl over the back, but is held upright when the dog is active. The beagle has a muscular body and a medium-length, smooth, hard coat. The front legs are straight and carried under the body, while the rear legs are muscular and well bent at the stifles. The tricolored beagle (white with large black areas and light brown shading) is the most common. Tricolored beagles occur in a number of shades, from the "Classic Tri" with a jet black saddle (also known as "Blackback"), to the "Dark Tri" (where faint brown markings are intermingled with more prominent black markings), to the "Faded Tri" (where faint black markings are intermingled with more prominent brown markings). Some tricolored dogs have a broken pattern, sometimes referred to as "pied". These dogs have mostly white coats with patches of black and brown hair. Tricolor beagles are almost always born black and white. The white areas are typically set by eight weeks, but the black areas may fade to brown as the puppy matures. (The brown may take between one and two years to fully develop.) Some beagles gradually change color during their lives, and may lose their black markings entirely. Two-color varieties always have a white base color with areas of the second color. Tan and white is the most common two-color variety, but there is a wide range of other colors, including lemon, a very light tan; red, a reddish, almost orange, brown; liver, a darker brown; and black. Liver is not common and is not permitted in some standards; it tends to occur with yellow eyes. Ticked or mottled varieties may be either white or black with different colored flecks ("ticking"), such as the blue-mottled or bluetick beagle, which has spots that appear to be a midnight-blue color, similar to the coloring of the Bluetick Coonhound. Some tricolor beagles also have ticking of various colors in their white areas. Alongside the Bloodhound and Basset Hound, the beagle has one of the best developed senses of smell of any dog. In the 1950s, John Paul Scott and John Fuller began a 13-year study of canine behavior. As part of this research, they tested the scenting abilities of various breeds by putting a mouse in a one-acre field and timing how long it took the dogs to find it. The beagles found it in less than a minute, while Fox Terriers took 15 minutes and Scottish Terriers failed to find it at all.
Beagles are better at ground-scenting (following a trail on the ground) than they are at air-scenting, and for this reason they have been excluded from most mountain rescue teams in favor of collies, which use sight in addition to air-scenting and are more biddable. The long ears and large lips of the beagle probably assist in trapping scents close to the nose. The American Kennel Club recognizes two separate varieties of beagle: the 13-inch variety for hounds standing less than 13 inches, and the 15-inch variety for those between 13 and 15 inches. The Canadian Kennel Club recognizes a single type with an upper limit on height, as do the Kennel Club (UK) and FCI-affiliated clubs, which specify a height range. English and American varieties are sometimes mentioned; however, there is no official recognition from any Kennel Club for this distinction. Beagles fitting the American Kennel Club standard, which disallows animals over 15 inches, are smaller on average than those fitting the Kennel Club standard, which allows greater heights. Pocket Beagles are sometimes advertised for sale, but while the UK Kennel Club originally specified a standard for the Pocket Beagle in 1901, the variety is now not recognized by any Kennel Club. A strain known as Patch Hounds was developed by Willet Randall and his family from 1896 specifically for their rabbit-hunting ability. They trace their bloodline back to Field Champion Patch, but do not necessarily have a patchwork marking. In the 1850s, Stonehenge recommended a cross between a Beagle and a Scottish Terrier as a retriever. He found the crossbreed to be a good worker, silent and obedient, but it had the drawback that it was small and could barely carry a hare. More recently the trend has been for "designer dogs", and one of the most popular has been the Beagle/Pug cross known as a Puggle. Some puppies of this cross are less excitable than a Beagle and have a lower exercise requirement, similar to the Pug parent; but many are highly excitable and require vigorous exercise. The beagle has an even temper and gentle disposition. Described in several breed standards as "merry", beagles are amiable and typically neither aggressive nor timid, although this depends on the individual. They enjoy company, and although they may initially be standoffish with strangers, they are easily won over. They make poor guard dogs for this reason, although their tendency to bark or howl when confronted with the unfamiliar makes them good watch dogs. In a 1985 study conducted by Ben and Lynette Hart, the beagle was given the highest excitability rating, along with the Yorkshire Terrier, Cairn Terrier, Miniature Schnauzer, West Highland White Terrier, and Fox Terrier. Beagles are intelligent but, as a result of being bred for the long chase, are single-minded and determined, which can make them hard to train. They can be difficult to recall once they have picked up a scent, and are easily distracted by smells around them. They do not generally feature in obedience trials; while they are alert, respond well to food-reward training, and are eager to please, they are easily bored or distracted. They are ranked 72nd in Stanley Coren's "The Intelligence of Dogs", as Coren places them among the group with the lowest degree of working/obedience intelligence. Coren's scale, however, does not assess understanding, independence, or creativity. Beagles are excellent with children, and this is one of the reasons they have become popular family pets.
But as beagles are pack animals, they are prone to separation anxiety, a condition which causes them to destroy things when left unattended. Not all beagles will howl, but most will bark when confronted with strange situations, and some will bay (also referred to as "speaking", "giving tongue", or "opening") when they catch the scent of potential quarry. They also generally get along well with other dogs. They are not too demanding with regard to exercise; their inbred stamina means they do not easily tire when exercised, but they also do not need to be worked to exhaustion before they will rest. Regular exercise helps ward off the weight gain to which the breed is prone. The typical longevity of beagles is 12–15 years, which is a common lifespan for dogs of their size. Beagles may be prone to epilepsy, but this can often be controlled with medication. Hypothyroidism and a number of types of dwarfism occur in beagles. Two conditions in particular are unique to the breed: "Funny Puppy", in which the puppy is slow to develop and eventually develops weak legs, a crooked back and although normally healthy, is prone to a range of illnesses; Hip dysplasia, common in Harriers and in some larger breeds, is rarely considered a problem in beagles. Beagles are considered a chondrodystrophic breed, meaning that they are prone to types of disk diseases. In rare cases, beagles may develop immune mediated polygenic arthritis (where the immune system attacks the joints) even at a young age. The symptoms can sometimes be relieved by steroid treatments. Another rare disease in the breed is neonatal cerebellar cortical degeneration. Affected puppies are slow, have lower co-ordination, fall more often and don't have a normal gait. It has an estimated carrier rate of 5% and affected rate of 0.1%. A genetic test is available. Their long floppy ears can mean that the inner ear does not receive a substantial air flow or that moist air becomes trapped, and this can lead to ear infections. Beagles may also be affected by a range of eye problems; two common ophthalmic conditions in beagles are glaucoma and corneal dystrophy. "Cherry eye", a prolapse of the gland of the third eyelid, and distichiasis, a condition in which eyelashes grow into the eye causing irritation, sometimes exist; both these conditions can be corrected with surgery. They can suffer from several types of retinal atrophy. Failure of the nasolacrimal drainage system can cause dry eye or leakage of tears onto the face. As field dogs they are prone to minor injuries such as cuts and sprains, and, if inactive, obesity is a common problem as they will eat whenever food is available and rely on their owners to regulate their weight. When working or running free they are also likely to pick up parasites such as fleas, ticks, harvest mites, and tapeworms, and irritants such as grass seeds can become trapped in their eyes, soft ears, or paws. Beagles may exhibit a behaviour known as reverse sneezing, in which they sound as if they are choking or gasping for breath, but are actually drawing air in through the mouth and nose. The exact cause of this behaviour is not known, but it can be a common occurrence and is not harmful to the dog. Beagles were developed primarily for hunting hare, an activity known as beagling. 
They were seen as ideal hunting companions for the elderly who could follow on horseback without exerting themselves, for young hunters who could keep up with them on ponies, and for the poorer hunters who could not afford to maintain a stable of good hunting horses. Before the advent of the fashion for foxhunting in the 19th century, hunting was an all day event where the enjoyment was derived from the chase rather than the kill. In this setting the tiny beagle was well matched to the hare, as unlike Harriers they would not quickly finish the hunt, but because of their excellent scent-tracking skills and stamina they were almost guaranteed to eventually catch the hare. The beagle packs would run closely together ("so close that they might be covered with a sheet") which was useful in a long hunt, as it prevented stray dogs from obscuring the trail. In thick undergrowth they were also preferred to spaniels when hunting pheasant. With the fashion for faster hunts, the beagle fell out of favour for chasing hare, but was still employed for rabbit hunting. In "Anecdotes of Dogs" (1846), Edward Jesse says: In the United States they appear to have been employed chiefly for hunting rabbits from the earliest imports. Hunting hare with beagles became popular again in Britain in the mid-19th century and continued until it was made illegal in Scotland by the Protection of Wild Mammals (Scotland) Act 2002 and in England and Wales by the Hunting Act 2004. Under this legislation beagles may still pursue rabbits with the landowner's permission. Drag hunting is popular where hunting is no longer permitted or for those owners who do not wish to participate in hunting a live animal, but still wish to exercise their dog's innate skills. The traditional foot pack consists of up to 40 beagles, marshaled by a Huntsman who directs the pack and who is assisted by a variable number of whippers-in whose job is to return straying hounds to the pack. The Master of the Hunt is in overall day-to-day charge of the pack, and may or may not take on the role of Huntsman on the day of the hunt. As hunting with beagles was seen as ideal for young people, many of the British public schools traditionally maintained beagle packs. Protests were lodged against Eton's use of beagles for hunting as early as 1902 but the pack is still in existence today, and a pack used by Imperial College in Wye, Kent was stolen by the Animal Liberation Front in 2001. School and university packs are still maintained by Eton, Marlborough, Wye, Radley, the Royal Agricultural University and Christ Church, Oxford. In addition to organized beagling, beagles have been used for hunting or flushing to guns (often in pairs) a wide range of game including snowshoe hare, cottontail rabbits, game birds, roe deer, red deer, bobcat, coyote, wild boar and foxes, and have even been recorded as being used to hunt stoat. In most of these cases, the beagle is employed as a gun dog, flushing game for hunter's guns. Beagles are used as detection dogs in the Beagle Brigade of the United States Department of Agriculture. These dogs are used to detect food items in luggage being taken into the United States. After trialling several breeds, beagles were chosen because they are relatively small and unintimidating for people who are uncomfortable around dogs, easy to care for, intelligent and work well for rewards. 
They are also used for this purpose in a number of other countries, including by the Ministry of Agriculture and Forestry in New Zealand and the Australian Quarantine and Inspection Service, and in Canada, Japan, and the People's Republic of China. Larger breeds are generally used for detection of explosives, as this often involves climbing over luggage and on large conveyor belts, work for which the smaller Beagle is not suited. Beagles are the dog breed most often used in animal testing, due to their size and passive nature. Beagles are used in a range of research procedures: fundamental biological research, applied human medicine, applied veterinary medicine, and protection of man, animals, or the environment. Of the 8,018 dogs used in testing in the UK in 2004, 7,799 were beagles (97.3%). In the UK, the Animals (Scientific Procedures) Act 1986 gave special status to primates, equids, cats, and dogs, and in 2005 the Animal Procedures Committee (set up by the act) ruled that testing on mice was preferable, even though a greater number of individual animals were involved. In 2005 beagles were involved in less than 0.3% of the total experiments on animals in the UK, but of the 7,670 experiments performed on dogs, 7,406 involved beagles (96.6%). Most dogs are bred specifically for this purpose, by companies such as Harlan. In the UK, companies breeding animals for research must be licensed under the Animals (Scientific Procedures) Act. Testing of cosmetic products on animals is banned in the member states of the European Community, although France protested the ban and has made efforts to have it lifted. It is permitted in the United States but is not mandatory if safety can be ascertained by other methods, and the test species is not specified by the Food and Drug Administration (FDA). When testing the toxicity of food additives, food contaminants, and some drugs and chemicals, the FDA uses beagles and miniature pigs as surrogates for direct human testing. In 2014, Minnesota became the first state to enact a "Beagle freedom" adoption law, mandating that dogs and cats be made available for adoption once they have completed research testing. Anti-vivisection groups have reported on abuse of animals inside testing facilities. In 1997, footage secretly filmed by a freelance journalist inside Huntingdon Life Sciences in the UK showed staff punching and screaming at beagles. Consort Kennels, a UK-based breeder of beagles for testing, closed down in 1997 after pressure from animal rights groups. Although bred for hunting, Beagles are versatile and are nowadays employed in various other roles: in detection, in therapy, and as family pets. Beagles are used as sniffer dogs for termite detection in Australia, and have been mentioned as possible candidates for drug and explosive detection. Because of their gentle nature and unimposing build, they are also frequently used in pet therapy, visiting the sick and elderly in hospital. In June 2006, a trained Beagle assistance dog was credited with saving the life of her owner after using the owner's mobile phone to dial an emergency number. In the aftermath of the 2010 Haiti earthquake, a Beagle search and rescue dog with a Colombian rescue squad was credited with locating the owner of the Hôtel Montana, who was subsequently rescued after spending 100 hours buried in the rubble. Beagles were hired by New York City to help with bedbug detection, although doubts have been raised about the effectiveness of dogs in this role. Beagles have been featured across a wide range of media.
References to the dog appear before the 19th century in works by such writers as William Shakespeare, John Webster, John Dryden, Thomas Tickell, Henry Fielding, and William Cowper, as well as in Alexander Pope's translation of Homer's "Iliad". Beagles have appeared in funny animal comic strips and animated cartoons since the 1950s, with examples including Courage the Cowardly Dog and the "Peanuts" character Snoopy, billed as "the world's most famous Beagle". Former US President Lyndon Baines Johnson had several beagles, and caused an outcry when he picked up one of them by its ears during an official greeting on the White House lawn. The ship on which Charles Darwin made the voyage which provided much of the inspiration for "On the Origin of Species" was named HMS "Beagle" after the breed, and, in turn, lent its name to the ill-fated British Martian lander "Beagle 2". A Canadian-bred 15-inch female Beagle with the registered name of Gr Ch Tashtins Lookin For Trouble and the pet name of "Miss P" won the 2015 Westminster Kennel Club Dog Show. Informational notes a. In this article "Beagle" (with a capital B) is used to distinguish the modern breed from other beagle-type dogs. b. Youatt states that the Southern Hound may have been native to the British Isles and used on hunts by the Ancient Britons. c. The Harts posed a question to a panel of 96 experts, half of whom were veterinary surgeons and the other half dog obedience trial judges. d. The specific references in each of the author's works are as follows: Shakespeare: ""Sir Toby Belch": She's a beagle, true-bred, and one that adores me: what o' that?" "Twelfth Night" (c. 1600) Act II Scene III Webster: ""Mistress Tenterhook": You are a sweet beagle" "Westward Ho" (1607) Act III Scene IV:2 Dryden: "The rest in shape a beagle's whelp throughout, With broader forehead and a sharper snout" "The Cock and the Fox", and again: "About her feet were little beagles seen" in "Palamon and Arcite", both from "Fables, Ancient and Modern" (1700) Tickell: "Here let me trace beneath the purpled morn, The deep-mouth'd beagle, and the sprightly horn" "To a Lady before Marriage" (published posthumously in 1749) Fielding: "'What the devil would you have me do?' cries the Squire, turning to Blifil, 'I can no more turn her, than a beagle can turn an old hare.'" "The History of Tom Jones, a Foundling" (1749) Chapter 7. Cowper: "For persevering chase and headlong leaps, True beagle as the staunchest hound he keeps" "The Progress of Error" (1782) Pope: "Thus on a roe the well-breath'd beagle flies, And rends his hide fresh-bleeding with the dart" "The Iliad of Homer" (1715–20) Book XV:697–8 Bahá'í Faith The Bahá'í Faith is a religion teaching the essential worth of all religions, and the unity and equality of all people. Established by Bahá'u'lláh in 1863, it initially grew in Iran and parts of the Middle East, where it has faced ongoing persecution since its inception. Currently it has between 5 and 7 million adherents, known as Bahá'ís, spread out into most of the world's countries and territories. It grew from the mid-19th-century Bábí religion, whose founder taught that God would soon send a prophet in the manner of Jesus or Muhammad. In 1863, after being banished from his native Iran, Bahá'u'lláh announced that he was this prophet. He was further exiled, spending over a decade in the prison city of Akka in the Ottoman province of Syria, in what is now Israel. 
Following Bahá'u'lláh's death in 1892, leadership of the religion fell to his son `Abdu'l-Bahá (1844–1921), and later his great-grandson Shoghi Effendi (1897–1957). Bahá'ís around the world annually elect local, regional, and national Spiritual Assemblies that govern the affairs of the religion, and every five years the members of all National Spiritual Assemblies elect the Universal House of Justice, the nine-member supreme governing institution of the worldwide Bahá'í community, which sits in Haifa, Israel, near the Shrine of the Báb. Bahá'í teachings are in some ways similar to other monotheistic faiths: God is considered single and all-powerful. However, Bahá'u'lláh taught that religion is orderly and progressively revealed by one God through Manifestations of God who are the founders of major world religions throughout history; Buddha, Moses, Jesus, and Muhammad being the most recent in the period before the Báb and Bahá'u'lláh. As such, Bahá'ís regard the major religions as fundamentally unified in purpose, though varied in social practices and interpretations. There is a similar emphasis on the unity of all people, openly rejecting notions of racism and nationalism. At the heart of Bahá'í teachings is the goal of a unified world order that ensures the prosperity of all nations, races, creeds, and classes. Letters written by Bahá'u'lláh to various individuals, including some heads of state, have been collected and assembled into a canon of Bahá'í scripture that includes works by his son `Abdu'l-Bahá, and also the Báb, who is regarded as Bahá'u'lláh's forerunner. Prominent among Bahá'í literature are the "Kitáb-i-Aqdas", "Kitáb-i-Íqán", "Some Answered Questions", and "The Dawn-Breakers". In English-language use, the word "Bahá'í " is used either as an adjective to refer to the Bahá'í Faith or as a term for a follower of Bahá'u'lláh. The word is not a noun meaning the religion as a whole. It is derived from the Arabic "Bahá'" (), meaning "glory" or "splendor", although the word "Bahá'í " is actually derived from its use as a loan word in Persian, in particular the "'i" suffix is Persian rather than Arabic. The term "Bahaism" (or "Baha'ism") is still used, mainly in a pejorative sense. Three core principles establish a basis for Bahá'í teachings and doctrine: the unity of God, the unity of religion, and the unity of humanity. From these postulates stems the belief that God periodically reveals his will through divine messengers, whose purpose is to transform the character of humankind and to develop, within those who respond, moral and spiritual qualities. Religion is thus seen as orderly, unified, and progressive from age to age. The Bahá'í writings describe a single, personal, inaccessible, omniscient, omnipresent, imperishable, and almighty God who is the creator of all things in the universe. The existence of God and the universe is thought to be eternal, without a beginning or end. Though inaccessible directly, God is nevertheless seen as conscious of creation, with a will and purpose that is expressed through messengers termed Manifestations of God. Bahá'í teachings state that God is too great for humans to fully comprehend, or to create a complete and accurate image of by themselves. Therefore, human understanding of God is achieved through his revelations via his Manifestations. In the Bahá'í religion, God is often referred to by titles and attributes (for example, the All-Powerful, or the All-Loving), and there is a substantial emphasis on monotheism. 
The Bahá'í teachings state that the attributes which are applied to God are used to translate Godliness into human terms and also to help individuals concentrate on their own attributes in worshipping God to develop their potentialities on their spiritual path. According to the Bahá'í teachings the human purpose is to learn to know and love God through such methods as prayer, reflection, and being of service to others. Bahá'í notions of progressive religious revelation result in their accepting the validity of the well known religions of the world, whose founders and central figures are seen as Manifestations of God. Religious history is interpreted as a series of dispensations, where each "manifestation" brings a somewhat broader and more advanced revelation that is rendered as a text of scripture and passed on through history with greater or lesser reliability but at least true in substance, suited for the time and place in which it was expressed. Specific religious social teachings (for example, the direction of prayer, or dietary restrictions) may be revoked by a subsequent manifestation so that a more appropriate requirement for the time and place may be established. Conversely, certain general principles (for example, neighbourliness, or charity) are seen to be universal and consistent. In Bahá'í belief, this process of progressive revelation will not end; it is, however, believed to be cyclical. Bahá'ís do not expect a new manifestation of God to appear within 1000 years of Bahá'u'lláh's revelation. Bahá'í beliefs are sometimes described as syncretic combinations of earlier religious beliefs. Bahá'ís, however, assert that their religion is a distinct tradition with its own scriptures, teachings, laws, and history. While the religion was initially seen as a sect of Islam, most religious specialists now see it as an independent religion, with its religious background in Shi'a Islam being seen as analogous to the Jewish context in which Christianity was established. Muslim institutions and clergy, both Sunni and Shia, consider Bahá'ís to be deserters or apostates from Islam, which has led to Bahá'ís being persecuted. Bahá'ís describe their faith as an independent world religion, differing from the other traditions in its relative age and in the appropriateness of Bahá'u'lláh's teachings to the modern context. Bahá'u'lláh is believed to have fulfilled the messianic expectations of these precursor faiths. The Bahá'í writings state that human beings have a "rational soul", and that this provides the species with a unique capacity to recognize God's status and humanity's relationship with its creator. Every human is seen to have a duty to recognize God through his messengers, and to conform to their teachings. Through recognition and obedience, service to humanity and regular prayer and spiritual practice, the Bahá'í writings state that the soul becomes closer to God, the spiritual ideal in Bahá'í belief. When a human dies, the soul passes into the next world, where its spiritual development in the physical world becomes a basis for judgment and advancement in the spiritual world. Heaven and Hell are taught to be spiritual states of nearness or distance from God that describe relationships in this world and the next, and not physical places of reward and punishment achieved after death. The Bahá'í writings emphasize the essential equality of human beings, and the abolition of prejudice. 
Humanity is seen as essentially one, though highly varied; its diversity of race and culture are seen as worthy of appreciation and acceptance. Doctrines of racism, nationalism, caste, social class, and gender-based hierarchy are seen as artificial impediments to unity. The Bahá'í teachings state that the unification of humanity is the paramount issue in the religious and political conditions of the present world. Shoghi Effendi, the head of the religion from 1921 to 1957, wrote the following summary of what he considered to be the distinguishing principles of Bahá'u'lláh's teachings, which, he said, together with the laws and ordinances of the "Kitáb-i-Aqdas" constitute the bedrock of the Bahá'í Faith: The following principles are frequently listed as a quick summary of the Bahá'í teachings. They are derived from transcripts of speeches given by `Abdu'l-Bahá during his tour of Europe and North America in 1912. The list is not authoritative and a variety of such lists circulate. With specific regard to the pursuit of world peace, Bahá'u'lláh prescribed a world-embracing collective security arrangement for the establishment of a temporary era of peace referred to in the Baha'i teachings as the Lesser Peace. For the establishment of a lasting peace (The Most Great Peace) and the purging of the "overwhelming Corruptions" it is necessary that all the people of the world universally unite under a universal Faith. The Bahá'í teachings speak of a "Greater Covenant", being universal and endless, and a "Lesser Covenant", being unique to each religious dispensation. The Lesser Covenant is viewed as an agreement between a Messenger of God and his followers and includes social practices and the continuation of authority in the religion. At this time Bahá'ís view Bahá'u'lláh's revelation as a binding lesser covenant for his followers; in the Bahá'í writings being firm in the covenant is considered a virtue to work toward. The Greater Covenant is viewed as a more enduring agreement between God and humanity, where a Manifestation of God is expected to come to humanity about every thousand years, at times of turmoil and uncertainty. With unity as an essential teaching of the religion, Bahá'ís follow an administration they believe is divinely ordained, and therefore see attempts to create schisms and divisions as efforts that are contrary to the teachings of Bahá'u'lláh. Schisms have occurred over the succession of authority, but any Bahá'í divisions have had relatively little success and have failed to attract a sizeable following. The followers of such divisions are regarded as Covenant-breakers and shunned, essentially excommunicated. The "canonical texts" are the writings of the Báb, Bahá'u'lláh, `Abdu'l-Bahá, Shoghi Effendi and the Universal House of Justice, and the authenticated talks of `Abdu'l-Bahá. The writings of the Báb and Bahá'u'lláh are considered as divine revelation, the writings and talks of `Abdu'l-Bahá and the writings of Shoghi Effendi as authoritative interpretation, and those of the Universal House of Justice as authoritative legislation and elucidation. Some measure of divine guidance is assumed for all of these texts. 
Some of Bahá'u'lláh's most important writings include the Kitáb-i-Aqdas, literally the "Most Holy Book", which defines many laws and practices for individuals and society, the Kitáb-i-Íqán, literally the "Book of Certitude", which became the foundation of much of Bahá'í belief, the Gems of Divine Mysteries, which includes further doctrinal foundations, and the Seven Valleys and the Four Valleys which are mystical treatises. Although the Bahá'í teachings have a strong emphasis on social and ethical issues, there exist a number of foundational texts that have been described as mystical. The "Seven Valleys" is considered Bahá'u'lláh's "greatest mystical composition." It was written to a follower of Sufism, in the style of `Attar, The Persian Muslim poet, and sets forth the stages of the soul's journey towards God. It was first translated into English in 1906, becoming one of the earliest available books of Bahá'u'lláh to the West. The "Hidden Words" is another book written by Bahá'u'lláh during the same period, containing 153 short passages in which Bahá'u'lláh claims to have taken the basic essence of certain spiritual truths and written them in brief form. The Bahá'í Faith formed from the Persian religion of the Báb, a merchant who began preaching a new interpretation of Shia Islam in 1844. The Báb's claim to divine revelation was rejected by the generality of Islamic clergy in Iran, ending in his public execution by authorities in 1850. The Báb taught that God would soon send a new messenger, and Bahá'ís consider Bahá'u'lláh to be that person. Although they are distinct movements, the Báb is so interwoven into Bahá'í theology and history that Bahá'ís celebrate his birth, death, and declaration as holy days, consider him one of their three central figures (along with Bahá'u'lláh and `Abdu'l-Bahá), and a historical account of the Bábí movement ("The Dawn-Breakers") is considered one of three books that every Bahá'í should "master" and read "over and over again". The Bahá'í community was mostly confined to the Persian and Ottoman empires until after the death of Bahá'u'lláh in 1892, at which time he had followers in 13 countries of Asia and Africa. Under the leadership of his son, `Abdu'l-Bahá, the religion gained a footing in Europe and America, and was consolidated in Iran, where it still suffers intense persecution. After the death of `Abdu'l-Bahá in 1921, the leadership of the Bahá'í community entered a new phase, evolving from a single individual to an administrative order with both elected bodies and appointed individuals. On the evening of 22 May 1844, Siyyid `Alí-Muhammad of Shiraz gained his first convert and took on the title of "the Báb" ( "the Gate"), referring to his later claim to the status of Mahdi of Shi`a Islam. His followers were therefore known as Bábís. As the Báb's teachings spread, which the Islamic clergy saw as a threat, his followers came under increased persecution and torture. The conflicts escalated in several places to military sieges by the Shah's army. The Báb himself was imprisoned and eventually executed in 1850. Bahá'ís see the Báb as the forerunner of the Bahá'í Faith, because the Báb's writings introduced the concept of "He whom God shall make manifest", a Messianic figure whose coming, according to Bahá'ís, was announced in the scriptures of all of the world's great religions, and whom Bahá'u'lláh, the founder of the Bahá'í Faith, claimed to be in 1863. The Báb's tomb, located in Haifa, Israel, is an important place of pilgrimage for Bahá'ís. 
The remains of the Báb were brought secretly from Iran to the Holy Land and eventually interred in the tomb built for them in a spot specifically designated by Bahá'u'lláh. The main written works translated into English of the Báb's are collected in Selections from the Writings of the Báb out of the estimated 135 works. Mírzá Husayn `Alí Núrí was one of the early followers of the Báb, and later took the title of Bahá'u'lláh. Bábís faced a period of persecution that peaked in 1852–53 after a few individuals made a failed attempt to assassinate the Shah. Although they acted alone, the government responded with collective punishment, killing many Bábís. Bahá'u'lláh was put in prison. He claimed that in 1853, while incarcerated in the dungeon of the Síyáh-Chál in Tehran, he received the first intimations that he was the one anticipated by the Báb when he received a visit from the Maid of Heaven. Shortly thereafter he was expelled from Tehran to Baghdad, in the Ottoman Empire; then to Constantinople (now Istanbul); and then to Adrianople (now Edirne). In 1863, at the time of his banishment from Baghdad to Constantinople, Bahá'u'lláh declared his claim to a divine mission to his family and followers. Tensions then grew between him and Subh-i-Azal, the appointed leader of the Bábís who did not recognize Bahá'u'lláh's claim. Throughout the rest of his life Bahá'u'lláh gained the allegiance of most of the Bábís, who came to be known as Bahá'ís. Beginning in 1866, he began declaring his mission as a Messenger of God in letters to the world's religious and secular rulers, including Pope Pius IX, Napoleon III, and Queen Victoria. He produced over 18,000 works in his lifetime, in both Arabic and Persian, of which only 8% have been translated into English. Bahá'u'lláh was banished by Sultan Abdülâziz a final time in 1868 to the Ottoman penal colony of `Akká, in present-day Israel. Towards the end of his life, the strict and harsh confinement was gradually relaxed, and he was allowed to live in a home near `Akká, while still officially a prisoner of that city. He died there in 1892. Bahá'ís regard his resting place at Bahjí as the Qiblih to which they turn in prayer each day. `Abbás Effendi was Bahá'u'lláh's eldest son, known by the title of `Abdu'l-Bahá (Servant of Bahá). His father left a Will that appointed `Abdu'l-Bahá as the leader of the Bahá'í community, and designated him as the "Centre of the Covenant", "Head of the Faith", and the sole authoritative interpreter of Bahá'u'lláh's writings. `Abdu'l-Bahá had shared his father's long exile and imprisonment, which continued until `Abdu'l-Bahá's own release as a result of the Young Turk Revolution in 1908. Following his release he led a life of travelling, speaking, teaching, and maintaining correspondence with communities of believers and individuals, expounding the principles of the Bahá'í Faith. There are over 27,000 extant documents by `Abdu'l-Bahá, mostly letters, of which only a fraction have been translated into English. Among the more well known are "The Secret of Divine Civilization", the "Tablet to Auguste-Henri Forel", and "Some Answered Questions". Additionally notes taken of a number of his talks were published in various volumes like "Paris Talks" during his journeys to the West. Bahá'u'lláh's "Kitáb-i-Aqdas" and "The Will and Testament of `Abdu'l-Bahá" are foundational documents of the Bahá'í administrative order. 
Bahá'u'lláh established the elected Universal House of Justice, and `Abdu'l-Bahá established the appointed hereditary Guardianship and clarified the relationship between the two institutions. In his Will, `Abdu'l-Bahá appointed his eldest grandson, Shoghi Effendi, as the first Guardian of the Bahá'í Faith, serving as head of the religion until his death, for 36 years. Shoghi Effendi throughout his lifetime translated Bahá'í texts; developed global plans for the expansion of the Bahá'í community; developed the Bahá'í World Centre; carried on a voluminous correspondence with communities and individuals around the world; and built the administrative structure of the religion, preparing the community for the election of the Universal House of Justice. He died in 1957 under conditions that did not allow for a successor to be appointed. At local, regional, and national levels, Bahá'ís elect members to nine-person Spiritual Assemblies, which run the affairs of the religion. There are also appointed individuals working at various levels, including locally and internationally, which perform the function of propagating the teachings and protecting the community. The latter do not serve as clergy, which the Bahá'í Faith does not have. The Universal House of Justice, first elected in 1963, remains the successor and supreme governing body of the Bahá'í Faith, and its 9 members are elected every five years by the members of all National Spiritual Assemblies. Any male Bahá'í, 21 years or older, is eligible to be elected to the Universal House of Justice; all other positions are open to male and female Bahá'ís. In 1937, Shoghi Effendi launched a seven-year plan for the Bahá'ís of North America, followed by another in 1946. In 1953, he launched the first international plan, the Ten Year World Crusade. This plan included extremely ambitious goals for the expansion of Bahá'í communities and institutions, the translation of Bahá'í texts into several new languages, and the sending of Bahá'í pioneers into previously unreached nations. He announced in letters during the Ten Year Crusade that it would be followed by other plans under the direction of the Universal House of Justice, which was elected in 1963 at the culmination of the Crusade. The House of Justice then launched a nine-year plan in 1964, and a series of subsequent multi-year plans of varying length and goals followed, guiding the direction of the international Bahá'í community. Annually, on 21 April, the Universal House of Justice sends a ‘Ridván’ message to the worldwide Bahá’í community, which generally gives an update on the progress made concerning the current plan, and provides further guidance for the year to come. The Bahá'ís around the world are currently being encouraged to focus on capacity building through children's classes, youth groups, devotional gatherings, and a systematic study of the religion known as study circles. Further focuses are involvement in social action and participation in the prevalent discourses of society. The years from 2001 until 2021 represent four successive five-year plans, culminating in the centennial anniversary of the passing of `Abdu'l-Bahá. A Bahá'í published document reported 4.74 million Bahá'ís in 1986 growing at a rate of 4.4%. Bahá'í sources since 1991 usually estimate the worldwide Bahá'í population to be above 5 million. The "World Christian Encyclopedia" estimated 7.1 million Bahá'ís in the world in 2000, representing 218 countries, and 7.3 million in 2010 with the same source. 
They further state: "The Baha'i Faith is the only religion to have grown faster in every United Nations region over the past 100 years than the general population; Baha’i was thus the fastest-growing religion between 1910 and 2010, growing at least twice as fast as the population of almost every UN region." This source's only systematic flaw was to consistently have a higher estimate of Christians than other cross-national data sets. From its origins in the Persian and Ottoman Empires, by the early 20th century there were a number of converts in South and South East Asia, Europe, and North America. During the 1950s and 1960s, vast travel teaching efforts brought the religion to almost every country and territory of the world. By the 1990s, Bahá'ís were developing programs for systematic consolidation on a large scale, and the early 21st century saw large influxes of new adherents around the world. The Bahá'í Faith is currently the largest religious minority in Iran, Panama, Belize, and South Carolina; the second largest international religion in Bolivia, Zambia, and Papua New Guinea; and the third largest international religion in Chad and Kenya. According to "The World Almanac and Book of Facts 2004": The Bahá'í Faith is a medium-sized religion and was listed in "The Britannica Book of the Year" (1992–present) as the second most widespread of the world's independent religions in terms of the number of countries represented. According to "Britannica", the Bahá'í Faith (as of 2010) is established in 221 countries and territories and has an estimated seven million adherents worldwide. Additionally, Bahá'ís have self-organized in most of the nations of the world. The Bahá'í religion was ranked by the Foreign Policy magazine as the world's second fastest growing religion by percentage (1.7%) in 2007. The following are a few examples from Bahá'u'lláh's teachings on personal conduct that are required or encouraged of his followers. The following are a few examples from Bahá'u'lláh's teachings on personal conduct that are prohibited or discouraged. While some of the laws from the Kitáb-i-Aqdas are applicable at the present time, others are dependent upon the existence of a predominantly Bahá'í society, such as the punishments for arson or murder. The laws, when not in direct conflict with the civil laws of the country of residence, are binding on every Bahá'í, and the observance of personal laws, such as prayer or fasting, is the sole responsibility of the individual. The purpose of marriage in the Bahá'i faith is mainly to foster spiritual harmony, fellowship and unity between a man and a woman and to provide a stable and loving environment for the rearing of children. The Bahá'í teachings on marriage call it a "fortress for well-being and salvation" and place marriage and the family as the foundation of the structure of human society. Bahá'u'lláh highly praised marriage, discouraged divorce, and required chastity outside of marriage; Bahá'u'lláh taught that a husband and wife should strive to improve the spiritual life of each other. Interracial marriage is also highly praised throughout Bahá'í scripture. Bahá'ís intending to marry are asked to obtain a thorough understanding of the other's character before deciding to marry. Although parents should not choose partners for their children, once two individuals decide to marry, they must receive the consent of all living biological parents, whether they are Bahá'í or not. 
The Bahá'í marriage ceremony is simple; the only compulsory part of the wedding is the reading of the wedding vows prescribed by Bahá'u'lláh which both the groom and the bride read, in the presence of two witnesses. The vows are "We will all, verily, abide by the Will of God." Bahá'u'lláh prohibited a mendicant and ascetic lifestyle. Monasticism is forbidden, and Bahá'ís are taught to practice spirituality while engaging in useful work. The importance of self-exertion and service to humanity in one's spiritual life is emphasised further in Bahá'u'lláh's writings, where he states that work done in the spirit of service to humanity enjoys a rank equal to that of prayer and worship in the sight of God. Most Bahá'í meetings occur in individuals' homes, local Bahá'í centers, or rented facilities. Worldwide, there are currently eight Bahá'í Houses of Worship and a further seven planned as of April 2012. Bahá'í writings refer to an institution called a "Mashriqu'l-Adhkár" (Dawning-place of the Mention of God), which is to form the center of a complex of institutions including a hospital, university, and so on. The first ever Mashriqu'l-Adhkár in `Ishqábád, Turkmenistan, has been the most complete House of Worship. The Bahá'í calendar is based upon the calendar established by the Báb. The year consists of 19 months, each having 19 days, with four or five intercalary days, to make a full solar year. The Bahá'í New Year corresponds to the traditional Persian New Year, called Naw Rúz, and occurs on the vernal equinox, near 21 March, at the end of the month of fasting. Bahá'í communities gather at the beginning of each month at a meeting called a Feast for worship, consultation and socializing. Each of the 19 months is given a name which is an attribute of God; some examples include Bahá’ (Splendour), ‘Ilm (Knowledge), and Jamál (Beauty). The Bahá'í week is familiar in that it consists of seven days, with each day of the week also named after an attribute of God. Bahá'ís observe 11 Holy Days throughout the year, with work suspended on 9 of these. These days commemorate important anniversaries in the history of the religion. The symbols of the religion are derived from the Arabic word Bahá’ ( "splendor" or "glory"), with a numerical value of 9, which is why the most common symbol is the nine-pointed star. The ringstone symbol and calligraphy of the Greatest Name are also often encountered. The former consists of two five-pointed stars interspersed with a stylized Bahá’ whose shape is meant to recall the three onenesses, while the latter is a calligraphic rendering of the phrase Yá Bahá'u'l-Abhá ( "O Glory of the Most Glorious!").The five-pointed star is the official symbol of the Bahá'í Faith, known as the "Haykal" ("temple"). It was initiated and established by the Báb and various works were written in calligraphy shaped into a five-pointed star. Since its inception the Bahá'í Faith has had involvement in socio-economic development beginning by giving greater freedom to women, promulgating the promotion of female education as a priority concern, and that involvement was given practical expression by creating schools, agricultural coops, and clinics. The religion entered a new phase of activity when a message of the Universal House of Justice dated 20 October 1983 was released. Bahá'ís were urged to seek out ways, compatible with the Bahá'í teachings, in which they could become involved in the social and economic development of the communities in which they lived. 
Worldwide in 1979 there were 129 officially recognized Bahá'í socio-economic development projects. By 1987, the number of officially recognized development projects had increased to 1482. Current initiatives of social action include activities in areas like health, sanitation, education, gender equality, arts and media, agriculture, and the environment. Educational projects include schools, which range from village tutorial schools to large secondary schools, and some universities. By 2017 there were an estimated 40,000 small scale projects, 1,400 sustained projects, and 135 Bahá'í inspired organizations. Bahá'u'lláh wrote of the need for world government in this age of humanity's collective life. Because of this emphasis the international Bahá'í community has chosen to support efforts of improving international relations through organizations such as the League of Nations and the United Nations, with some reservations about the present structure and constitution of the UN. The Bahá'í International Community is an agency under the direction of the Universal House of Justice in Haifa, and has consultative status with the following organizations: The Bahá'í International Community has offices at the United Nations in New York and Geneva and representations to United Nations regional commissions and other offices in Addis Ababa, Bangkok, Nairobi, Rome, Santiago, and Vienna. In recent years an Office of the Environment and an Office for the Advancement of Women were established as part of its United Nations Office. The Bahá'í Faith has also undertaken joint development programs with various other United Nations agencies. In the 2000 Millennium Forum of the United Nations a Bahá'í was invited as the only non-governmental speaker during the summit. Bahá'ís continue to be persecuted in Islamic countries, as Islamic leaders do not recognize the Bahá'í Faith as an independent religion, but rather as apostasy from Islam. The most severe persecutions have occurred in Iran, where more than 200 Bahá'ís were executed between 1978 and 1998, and in Egypt. The rights of Bahá'ís have been restricted to greater or lesser extents in numerous other countries, including Afghanistan, Indonesia, Iraq, Morocco, Yemen, and several countries in sub-Saharan Africa. The marginalization of the Iranian Bahá'ís by current governments is rooted in historical efforts by Muslim clergy to persecute the religious minority. When the Báb started attracting a large following, the clergy hoped to stop the movement from spreading by stating that its followers were enemies of God. These clerical directives led to mob attacks and public executions. Starting in the twentieth century, in addition to repression aimed at individual Bahá'ís, centrally directed campaigns that targeted the entire Bahá'í community and its institutions were initiated. In one case in Yazd in 1903 more than 100 Bahá'ís were killed. Bahá'í schools, such as the Tarbiyat boys' and girls' schools in Tehran, were closed in the 1930s and 1940s, Bahá'í marriages were not recognized and Bahá'í texts were censored. During the reign of Mohammad Reza Pahlavi, to divert attention from economic difficulties in Iran and from a growing nationalist movement, a campaign of persecution against the Bahá'ís was instituted. An approved and coordinated anti-Bahá'í campaign (to incite public passion against the Bahá'ís) started in 1955 and it included the spreading of anti-Bahá'í propaganda on national radio stations and in official newspapers. 
In the late 1970s the Shah's regime consistently lost legitimacy due to criticism that it was pro-Western. As the anti-Shah movement gained ground and support, revolutionary propaganda was spread which alleged that some of the Shah's advisors were Bahá'ís. Bahá'ís were portrayed as economic threats, and as supporters of Israel and the West, and societal hostility against the Bahá'ís increased. Since the Islamic Revolution of 1979 Iranian Bahá'ís have regularly had their homes ransacked or have been banned from attending university or from holding government jobs, and several hundred have received prison sentences for their religious beliefs, most recently for participating in study circles. Bahá'í cemeteries have been desecrated and property has been seized and occasionally demolished, including the House of Mírzá Buzurg, Bahá'u'lláh's father. The House of the Báb in Shiraz, one of three sites to which Bahá'ís perform pilgrimage, has been destroyed twice. According to a US panel, attacks on Bahá'ís in Iran increased under Mahmoud Ahmadinejad's presidency. The United Nations Commission on Human Rights revealed an October 2005 confidential letter from Command Headquarters of the Armed Forces of Iran ordering its members to identify Bahá'ís and to monitor their activities. Due to these actions, the Special Rapporteur of the United Nations Commission on Human Rights stated on 20 March 2006, that she "also expresses concern that the information gained as a result of such monitoring will be used as a basis for the increased persecution of, and discrimination against, members of the Bahá'í faith, in violation of international standards. The Special Rapporteur is concerned that this latest development indicates that the situation with regard to religious minorities in Iran is, in fact, deteriorating. On 14 May 2008, members of an informal body known as the "Friends" that oversaw the needs of the Bahá'í community in Iran were arrested and taken to Evin prison. The Friends court case has been postponed several times, but was finally underway on 12 January 2010. Other observers were not allowed in the court. Even the defence lawyers, who for two years have had minimal access to the defendants, had difficulty entering the courtroom. The chairman of the U.S. Commission on International Religious Freedom said that it seems that the government has already predetermined the outcome of the case and is violating international human rights law. Further sessions were held on 7 February 2010, 12 April 2010 and 12 June 2010. On 11 August 2010 it became known that the court sentence was 20 years imprisonment for each of the seven prisoners which was later reduced to ten years. After the sentence, they were transferred to Gohardasht prison. In March 2011 the sentences were reinstated to the original 20 years. On 3 January 2010, Iranian authorities detained ten more members of the Baha'i minority, reportedly including Leva Khanjani, granddaughter of Jamaloddin Khanjani, one of seven Baha'i leaders jailed since 2008 and in February, they arrested his son, Niki Khanjani. The Iranian government claims that the Bahá'í Faith is not a religion, but is instead a political organization, and hence refuses to recognize it as a minority religion. However, the government has never produced convincing evidence supporting its characterization of the Bahá'í community. 
Also, the government's statements that Bahá'ís who recanted their religion would have their rights restored attest to the fact that Bahá'ís are persecuted solely for their religious affiliation. The Iranian government also accuses the Bahá'í Faith of being associated with Zionism because the Bahá'í World Centre is located in Haifa, Israel. These accusations against the Bahá'ís have no basis in historical fact, and the accusations are used by the Iranian government to make the Bahá'ís "scapegoats". In fact it was the Iranian leader Naser al-Din Shah Qajar who banished Bahá'u'lláh from Persia to the Ottoman Empire, and Bahá'u'lláh was later exiled by the Ottoman Sultan, at the behest of the Persian Shah, to territories further away from Iran and finally to Acre in Syria, which only a century later was incorporated into the state of Israel. Bahá'í institutions and community activities have been illegal under Egyptian law since 1960. All Bahá'í community properties, including Bahá'í centers, libraries, and cemeteries, have been confiscated by the government and fatwas have been issued charging Bahá'ís with apostasy. The Egyptian identification card controversy began in the 1990s when the government modernized the electronic processing of identity documents, which introduced a de facto requirement that documents must list the person's religion as Muslim, Christian, or Jewish (the only three religions officially recognized by the government). Consequently, Bahá'ís were unable to obtain government identification documents (such as national identification cards, birth certificates, death certificates, marriage or divorce certificates, or passports) necessary to exercise their rights in their country unless they lied about their religion, which conflicts with Bahá'í religious principle. Without documents, they could not be employed, educated, treated in hospitals, travel outside of the country, or vote, among other hardships. Following a protracted legal process culminating in a court ruling favorable to the Bahá'ís, the interior minister of Egypt released a decree on 14 April 2009, amending the law to allow Egyptians who are not Muslim, Christian, or Jewish to obtain identification documents that list a dash in place of one of the three recognized religions. The first identification cards were issued to two Bahá'ís under the new decree on 8 August 2009. Binary search algorithm In computer science, binary search, also known as half-interval search, logarithmic search, or binary chop, is a search algorithm that finds the position of a target value within a sorted array. Binary search compares the target value to the middle element of the array. If they are not equal, the half in which the target cannot lie is eliminated and the search continues on the remaining half, again taking the middle element to compare to the target value, and repeating this until the target value is found. If the search ends with the remaining half being empty, the target is not in the array. Even though the idea is simple, implementing binary search correctly requires attention to some subtleties about its exit conditions and midpoint calculation. Binary search runs in at worst logarithmic time, making O(log n) comparisons, where n is the number of elements in the array, O is Big O notation, and log is the binary logarithm. Binary search takes constant (O(1)) space, meaning that the space taken by the algorithm is the same for any number of elements in the array. 
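To make the procedure concrete, the following is a minimal sketch in Java of the iterative search just described, assuming a 0-based int array sorted in ascending order; the class and method names are illustrative and not taken from any particular library.

```java
// Minimal iterative binary search sketch; assumes a 0-based int array sorted in ascending order.
public final class BinarySearchSketch {
    // Returns the index of target in the sorted array a, or -1 if it is not present.
    static int binarySearch(int[] a, int target) {
        int l = 0;                  // lower bound of the current search interval
        int r = a.length - 1;       // upper bound of the current search interval
        while (l <= r) {
            int m = l + (r - l) / 2;     // middle element; written this way to avoid overflow of l + r
            if (a[m] < target) {
                l = m + 1;               // target, if present, lies in the upper half
            } else if (a[m] > target) {
                r = m - 1;               // target, if present, lies in the lower half
            } else {
                return m;                // target found
            }
        }
        return -1;                       // interval is empty: target is not in the array
    }

    public static void main(String[] args) {
        int[] sorted = {1, 2, 3, 4, 4, 5, 6, 7};
        System.out.println(binarySearch(sorted, 5));   // 5
        System.out.println(binarySearch(sorted, 9));   // -1
    }
}
```

This sketch compares the middle element against the target in every iteration; the variants discussed below defer that equality check instead.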
Binary search is faster than linear search except for small arrays, but the array must be sorted first. Although specialized data structures designed for fast searching—such as hash tables—can be searched more efficiently, binary search applies to a wider range of problems. There are numerous variations of binary search. In particular, fractional cascading speeds up binary searches for the same value in multiple arrays, efficiently solving a series of search problems in computational geometry and numerous other fields. Exponential search extends binary search to unbounded lists. The binary search tree and B-tree data structures are based on binary search. Binary search works on sorted arrays. Binary search begins by comparing the middle element of the array with the target value. If the target value matches the middle element, its position in the array is returned. If the target value is less than or greater than the middle element, the search continues in the lower or upper half of the array, respectively, eliminating the other half from consideration. Given an array "A" of "n" elements with values or records "A"[0], "A"[1], ..., "A"["n" − 1], sorted such that "A"[0] ≤ "A"[1] ≤ ... ≤ "A"["n" − 1], and target value "T", the following subroutine uses binary search to find the index of "T" in "A". This iterative procedure keeps track of the search boundaries with the two variables "L" and "R": set "L" to 0 and "R" to "n" − 1; while "L" ≤ "R", set "m" (the position of the middle element) to the floor of ("L" + "R") / 2, then set "L" to "m" + 1 if "A"["m"] < "T", set "R" to "m" − 1 if "A"["m"] > "T", and otherwise return "m"; if the loop ends with "L" > "R", the search terminates as unsuccessful. In pseudocode the variable names and types remain the same, the midpoint is computed with a floor function, and a sentinel value such as "unsuccessful" conveys the failure of the search. In the above procedure, the algorithm checks whether the middle element ("m") is equal to the target ("T") in every iteration. Some implementations leave out this check during each iteration. The algorithm would perform this check only when one element is left (when "L" = "R"). This results in a faster comparison loop, as one comparison is eliminated per iteration. However, it requires one more iteration on average. The first implementation to leave out this equality check was published by Hermann Bottenbruch in 1962. However, Bottenbruch's version assumes that there is more than one element in the array. Its subroutine narrows the bounds with a single inequality per iteration and checks for equality only once "L" and "R" have converged. The procedure may return any index whose element is equal to the target value, even if there are duplicate elements in the array. For example, if the array to be searched was [1, 2, 3, 4, 4, 5, 6, 7] and the target was 4, then it would be correct for the algorithm to return either the 4th (index 3) or 5th (index 4) element. The regular procedure would return the 4th element (index 3). However, it is sometimes necessary to find the leftmost element or the rightmost element if the target value is duplicated in the array. In the above example, the 4th element is the leftmost element of the value 4, and the 5th element is the rightmost element. The alternative procedure above will always return the index of the rightmost element if an element is duplicated in the array. To find the leftmost element, the following procedure can be used: set "L" to 0 and "R" to "n"; while "L" < "R", set "m" to the floor of ("L" + "R") / 2, then set "L" to "m" + 1 if "A"["m"] < "T" and set "R" to "m" otherwise; when the loop ends, return "L". If "L" < "n" and "A"["L"] = "T", then "A"["L"] is the leftmost element that equals "T". Even if "T" is not in the array, "L" is the rank of "T" in the array, or the number of elements in the array that are less than "T". To find the rightmost element, the following procedure can be used: set "L" to 0 and "R" to "n"; while "L" < "R", set "m" to the floor of ("L" + "R") / 2, then set "R" to "m" if "A"["m"] > "T" and set "L" to "m" + 1 otherwise; when the loop ends, return "R" − 1. If "R" > 0 and "A"["R" − 1] = "T", then "A"["R" − 1] is the rightmost element that equals "T". 
Even if "T" is not in the array, "n" - "L" is the number of elements in the array that are greater than "T". Where codice_1 is the floor function, the pseudocode for this version is: The above procedure only performs "exact" matches, finding the position of a target value. However, due to the ordered nature of sorted arrays, it is trivial to extend binary search to perform approximate matches. For example, binary search can be used to compute, for a given value, its rank (the number of smaller elements), predecessor (next-smallest element), successor (next-largest element), and nearest neighbor. Range queries seeking the number of elements between two values can be performed with two rank queries. The performance of binary search can be analyzed by reducing the procedure to a binary comparison tree, where the root node is the middle element of the array. The middle element of the lower half is the left child node of the root and the middle element of the upper half is the right child node of the root. The rest of the tree is built in a similar fashion. This model represents binary search; starting from the root node, the left or right subtrees are traversed depending on whether the target value is less or more than the node under consideration, representing the successive elimination of elements. The worst case is formula_1 iterations of the comparison loop, where the formula_2 notation denotes the floor function that rounds its argument to the next-smallest integer and formula_3 is the binary logarithm. The worst case is reached when the search reaches the deepest level of the tree, equivalent to a binary search that has reduced to one element and, in each iteration, always eliminates the smaller subarray out of the two if they are not of equal size. The worst case may also be reached when the target element is not in the array. If formula_4 is one less than a power of two, then this is always the case. Otherwise, the search may perform formula_1, the worst case, or formula_6 iterations, one less than the worst case, depending on whether the search reaches the deepest or second-deepest level of the tree. On average, assuming that each element is equally likely to be searched, binary search makes formula_7 iterations when the target element is in the array. This is approximately equal to formula_8 iterations. When the target element is not in the array, the average case is formula_9 iterations, assuming that the range between and outside elements is equally likely to be searched. In the best case, where the target value is the middle element of the array, its position is returned after one iteration. In terms of iterations, no search algorithm that works only by comparing elements can exhibit better average and worst-case performance than binary search. This is because the comparison tree representing binary search has the fewest levels possible as each level is filled completely with nodes if there are enough nodes. Otherwise, the search algorithm can eliminate few elements in an iteration, increasing the number of iterations required in the average and worst case. This is the case for other search algorithms based on comparisons, as while they may work faster on some target values, the average performance over "all" elements is affected. By dividing the array in half, binary search ensures that the size of both subarrays are as similar as possible. 
Each iteration of the binary search procedure defined above makes one or two comparisons, checking if the middle element is equal to the target. Assuming that each element is equally likely to be searched, each iteration makes 1.5 comparisons on average. A variation of the algorithm checks whether the middle element is equal to the target at the end of the search, eliminating on average half a comparison from each iteration. This slightly cuts the time taken per iteration on most computers, while guaranteeing that the search takes the maximum number of iterations, on average adding one iteration to the search. Because the comparison loop is performed only ⌊log₂("n")⌋ + 1 times in the worst case, the slight increase in comparison loop efficiency does not compensate for the extra iteration for all but enormous "n". Sorted arrays with binary search are a very inefficient solution when insertion and deletion operations are interleaved with retrieval, taking O("n") time for each such operation, and complicating memory use. There are other data structures that support much more efficient insertion and deletion. Binary search can be used to perform exact matching and set membership (determining whether a target value is in a collection of values), but there are data structures that support faster exact matching and set membership. However, unlike many other searching schemes, binary search can be used for efficient approximate matching, usually performing such matches in O(log "n") time regardless of the type or structure of the values themselves. In addition, there are some operations, like finding the smallest and largest element, that can be performed efficiently on a sorted array. For implementing associative arrays, hash tables, a data structure that maps keys to records using a hash function, are generally faster than binary search on a sorted array of records; most implementations require only amortized constant time on average. However, hashing is not useful for approximate matches, such as computing the next-smallest, next-largest, and nearest key, as the only information given on a failed search is that the target is not present in any record. Binary search is ideal for such matches, performing them in logarithmic time. Some operations, like finding the smallest and largest element, can be done efficiently on sorted arrays but not on hash tables. A binary search tree is a binary tree data structure that works based on the principle of binary search. The records of the tree are arranged in sorted order, and each record in the tree can be searched using an algorithm similar to binary search, taking on average logarithmic time. Insertion and deletion also require on average logarithmic time in binary search trees. This can be faster than the linear-time insertion and deletion of sorted arrays, and binary trees retain the ability to perform all the operations possible on a sorted array, including range and approximate queries. However, binary search is usually more efficient for searching as binary search trees will most likely be imperfectly balanced, resulting in slightly worse performance than binary search. This even applies to balanced binary search trees, binary search trees that balance their own nodes, because they rarely produce "optimally"-balanced trees. 
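As a rough sketch of these trade-offs, assuming integer keys, the example below uses convenient Java stand-ins rather than anything prescribed by the text: Arrays.binarySearch on a plain sorted array, HashSet for a hash table, and TreeSet (a red-black tree) for a self-balancing binary search tree.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.TreeSet;

// Sketch contrasting a sorted array, a hash set, and a self-balancing binary search tree.
public final class LookupTradeoffs {
    public static void main(String[] args) {
        int[] sorted = {10, 20, 30, 40, 50};

        // Sorted array: O(log n) exact and approximate matches, but O(n) insertion and deletion.
        int miss = Arrays.binarySearch(sorted, 35);       // negative: 35 is absent
        int insertionPoint = -miss - 1;                   // 3: where 35 would be inserted
        System.out.println(sorted[insertionPoint - 1]);   // 30: predecessor of 35
        System.out.println(sorted[insertionPoint]);       // 40: successor of 35

        // Hash set: amortized O(1) membership and insertion, but exact matching only.
        HashSet<Integer> hashed = new HashSet<>(Arrays.asList(10, 20, 30, 40, 50));
        System.out.println(hashed.contains(35));          // false; no notion of "nearest key"

        // Self-balancing binary search tree (TreeSet is a red-black tree): O(log n) membership,
        // insertion, deletion, and approximate matches alike.
        TreeSet<Integer> tree = new TreeSet<>(hashed);
        tree.add(35);                                     // logarithmic-time insertion
        System.out.println(tree.floor(34));               // 30: next-smallest element
        System.out.println(tree.ceiling(36));             // 40: next-largest element
    }
}
```

The sorted array answers the predecessor and successor queries directly from the insertion point reported by a failed search, which is exactly the approximate-matching ability the hash set lacks.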
Although unlikely, the tree may be severely imbalanced with few internal nodes with two children, resulting in the average and worst-case search time approaching "n" comparisons. Binary search trees take more space than sorted arrays. Binary search trees lend themselves to fast searching in external memory stored in hard disks, as binary search trees can efficiently be structured in filesystems. The B-tree generalizes this method of tree organization; B-trees are frequently used to organize long-term storage such as databases and filesystems. Linear search is a simple search algorithm that checks every record until it finds the target value. Linear search can be done on a linked list, which allows for faster insertion and deletion than an array. Binary search is faster than linear search for sorted arrays except if the array is short, although the array needs to be sorted beforehand. All sorting algorithms based on comparing elements, such as quicksort and merge sort, require at least O("n" log "n") comparisons in the worst case. Unlike linear search, binary search can be used for efficient approximate matching. There are operations such as finding the smallest and largest element that can be done efficiently on a sorted array but not on an unsorted array. A related problem to search is set membership. Any algorithm that does lookup, like binary search, can also be used for set membership. There are other algorithms that are more specifically suited for set membership. A bit array is the simplest, useful when the range of keys is limited. It compactly stores a collection of bits, with each bit representing a single key within the range of keys. Bit arrays are very fast, requiring only O(1) time. The Judy1 type of Judy array handles 64-bit keys efficiently. For approximate results, Bloom filters, another probabilistic data structure based on hashing, store a set of keys by encoding the keys using a bit array and multiple hash functions. Bloom filters are much more space-efficient than bit arrays in most cases and not much slower: with "k" hash functions, membership queries require only O("k") time. However, Bloom filters suffer from false positives. There exist data structures that may improve on binary search in some cases for both searching and other operations available for sorted arrays. For example, searches, approximate matches, and the operations available to sorted arrays can be performed more efficiently than binary search on specialized data structures such as van Emde Boas trees, fusion trees, tries, and bit arrays. However, while these operations can always be done at least efficiently on a sorted array regardless of the keys, such data structures are usually only faster because they exploit the properties of keys with a certain attribute (usually keys that are small integers), and thus will be time or space consuming for keys that lack that attribute. Some structures, such as Judy arrays, use a combination of approaches to mitigate this while retaining efficiency and the ability to perform approximate matching. Uniform binary search stores, instead of the lower and upper bounds, the index of the middle element and the change in the middle element from the current iteration to the next iteration. Each step reduces the change by about half. For example, in an array of eight elements searched over indices 0 to 7, the middle element would be the one at index 3. 
Uniform binary search works on the basis that the difference between the index of the middle element of the array and the indices of the middle elements of the left and right subarrays is the same. In this case, the middle element of the left subarray (indices 0 to 2) is at index 1 and the middle element of the right subarray (indices 4 to 7) is at index 5. Uniform binary search would store the value 2, as both of these indices differ from index 3 by that same amount. To reduce the search space, the algorithm either adds or subtracts this change from the middle element. The main advantage of uniform binary search is that the procedure can store a table of the differences between indices for each iteration of the procedure. Uniform binary search may be faster on systems where it is inefficient to calculate the midpoint, such as on decimal computers. Exponential search extends binary search to unbounded lists. It starts by finding the first element whose index is a power of two and whose value is greater than the target value. Afterwards, it sets that index as the upper bound, and switches to binary search. A search takes ⌊log₂("x")⌋ + 1 iterations of the exponential search phase, followed by a binary search over the resulting range that takes at most roughly the same number of iterations again, where "x" is the position of the target value. Exponential search works on bounded lists, but becomes an improvement over binary search only if the target value lies near the beginning of the array. Instead of calculating the midpoint, interpolation search estimates the position of the target value, taking into account the lowest and highest elements in the array as well as the length of the array. This is only possible if the array elements are numbers. It works on the basis that the midpoint is not the best guess in many cases. For example, if the target value is close to the highest element in the array, it is likely to be located near the end of the array. When the distribution of the array elements is uniform or near uniform, it makes O(log log "n") comparisons. In practice, interpolation search is slower than binary search for small arrays, as interpolation search requires extra computation. Although its time complexity grows more slowly than binary search, this only compensates for the extra computation for large arrays. Fractional cascading is a technique that speeds up binary searches for the same element in multiple sorted arrays. Searching each array separately requires O("k" log "n") time, where "k" is the number of arrays. Fractional cascading reduces this to O("k" + log "n") by storing specific information in each array about each element and its position in the other arrays. Fractional cascading was originally developed to efficiently solve various computational geometry problems, but it also has been applied elsewhere, in domains such as data mining and Internet Protocol routing. Noisy binary search algorithms solve the case where the algorithm cannot reliably compare elements of the array. For each pair of elements, there is a certain probability that the algorithm makes the wrong comparison. Noisy binary search can find the correct position of the target with a given probability that controls the reliability of the yielded position. Classical computers are bounded to the worst case of exactly ⌊log₂("n")⌋ + 1 iterations when performing binary search. Quantum algorithms for binary search are still bounded to a proportion of log₂("n") queries (representing iterations of the classical procedure), but the constant factor is less than one, providing for faster performance on quantum computers. 
Any "exact" quantum binary search procedure—that is, a procedure that always yields the correct result—requires at least formula_28 queries in the worst case, where formula_29 is the natural logarithm. There is an exact quantum binary search procedure that runs in formula_30 queries in the worst case. In comparison, Grover's algorithm is the optimal quantum algorithm for searching an unordered list of elements, and it requires formula_31 queries. In 1946, John Mauchly made the first mention of binary search as part of the Moore School Lectures, a seminal and foundational college course in computing. In 1957, William Wesley Peterson published the first method for interpolation search. Every published binary search algorithm worked only for arrays whose length is one less than a power of two until 1960, when Derrick Henry Lehmer published a binary search algorithm that worked on all arrays. In 1962, Hermann Bottenbruch presented an ALGOL 60 implementation of binary search that placed the comparison for equality at the end, increasing the average number of iterations by one, but reducing to one the number of comparisons per iteration. The uniform binary search was developed by A. K. Chandra of Stanford University in 1971. In 1986, Bernard Chazelle and Leonidas J. Guibas introduced fractional cascading as a method to solve numerous search problems in computational geometry. Although the basic idea of binary search is comparatively straightforward, the details can be surprisingly tricky ... — Donald Knuth When Jon Bentley assigned binary search as a problem in a course for professional programmers, he found that ninety percent failed to provide a correct solution after several hours of working on it, mainly because the incorrect implementations failed to run or returned a wrong answer in rare edge cases. A study published in 1988 shows that accurate code for it is only found in five out of twenty textbooks. Furthermore, Bentley's own implementation of binary search, published in his 1986 book "Programming Pearls", contained an overflow error that remained undetected for over twenty years. The Java programming language library implementation of binary search had the same overflow bug for more than nine years. In a practical implementation, the variables used to represent the indices will often be of fixed size, and this can result in an arithmetic overflow for very large arrays. If the midpoint of the span is calculated as , then the value of may exceed the range of integers of the data type used to store the midpoint, even if "L" and "R" are within the range. If "L" and "R" are nonnegative, this can be avoided by calculating the midpoint as . If the target value is greater than the greatest value in the array, and the last index of the array is the maximum representable value of "L", the value of "L" will eventually become too large and overflow. A similar problem will occur if the target value is smaller than the least value in the array and the first index of the array is the smallest representable value of "R". In particular, this means that "R" must not be an unsigned type if the array starts with index 0. An infinite loop may occur if the exit conditions for the loop are not defined correctly. Once "L" exceeds "R", the search has failed and must convey the failure of the search. 
In addition, the loop must be exited when the target element is found, or in the case of an implementation where this check is moved to the end, checks for whether the search was successful or failed at the end must be in place. Bentley found that most of the programmers who incorrectly implemented binary search made an error in defining the exit conditions. Many languages' standard libraries include binary search routines: Bald eagle The bald eagle ("Haliaeetus leucocephalus", from Greek ἅλς, "hals" "sea", αἰετός "aietos" "eagle", λευκός, "leukos" "white", κεφαλή, "kephalē" "head") is a bird of prey found in North America. A sea eagle, it has two known subspecies and forms a species pair with the white-tailed eagle ("Haliaeetus albicilla"). Its range includes most of Canada and Alaska, all of the contiguous United States, and northern Mexico. It is found near large bodies of open water with an abundant food supply and old-growth trees for nesting. The bald eagle is an opportunistic feeder which subsists mainly on fish, which it swoops down and snatches from the water with its talons. It builds the largest nest of any North American bird and the largest tree nests ever recorded for any animal species, up to deep, wide, and in weight. Sexual maturity is attained at the age of four to five years. Bald eagles are not actually bald; the name derives from an older meaning of the word, "white headed". The adult is mainly brown with a white head and tail. The sexes are identical in plumage, but females are about 25 percent larger than males. The beak is large and hooked. The plumage of the immature is brown. The bald eagle is both the national bird and national animal of the United States of America. The bald eagle appears on its seal. In the late 20th century it was on the brink of extirpation in the contiguous United States. Populations have since recovered and the species was removed from the U.S. government's list of endangered species on July 12, 1995 and transferred to the list of threatened species. It was removed from the List of Endangered and Threatened Wildlife in the Lower 48 States on June 28, 2007. The plumage of an adult bald eagle is evenly dark brown with a white head and tail. The tail is moderately long and slightly wedge-shaped. Males and females are identical in plumage coloration, but sexual dimorphism is evident in the species, in that females are 25% larger than males. The beak, feet and irises are bright yellow. The legs are feather-free, and the toes are short and powerful with large talons. The highly developed talon of the hind toe is used to pierce the vital areas of prey while it is held immobile by the front toes. The beak is large and hooked, with a yellow cere. The adult bald eagle is unmistakable in its native range. The closely related African fish eagle ("H. vocifer") (from far outside the bald eagle's range) also has a brown body, white head and tail, but differs from the bald in having a white chest and black tip to the bill. The plumage of the immature is a dark brown overlaid with messy white streaking until the fifth (rarely fourth, very rarely third) year, when it reaches sexual maturity. Immature bald eagles are distinguishable from the golden eagle ("Aquila chrysaetos"), the only other very large, non-vulturine bird in North America, in that the former has a larger, more protruding head with a larger beak, straighter edged wings which are held flat (not slightly raised) and with a stiffer wing beat and feathers which do not completely cover the legs. 
When seen well, the golden eagle is distinctive in plumage with a more solid warm brown color than an immature bald eagle, with a reddish-golden patch to its nape and (in immature birds) a highly contrasting set of white squares on the wing. Another distinguishing feature of the immature bald eagle over the mature bird is its black, yellow-tipped beak; the mature eagle has a fully yellow beak. The bald eagle has sometimes been considered the largest true raptor (accipitrid) in North America. The only larger species of raptor-like bird is the California condor ("Gymnogyps californianus"), a New World vulture which today is not generally considered a taxonomic ally of true accipitrids. However, the golden eagle, averaging and in wing chord length in its American race ("A. c. canadensis"), is merely lighter in mean body mass and exceeds the bald eagle in mean wing chord length by around . Additionally, the bald eagle's close cousins, the relatively longer-winged but shorter-tailed white-tailed eagle and the overall larger Steller's sea eagle ("H. pelagicus"), may, rarely, wander to coastal Alaska from Asia. The bald eagle has a body length of . Typical wingspan is between and mass is normally between . Females are about 25% larger than males, averaging , and against the males' average weight of . The size of the bird varies by location and generally corresponds with Bergmann's rule, since the species increases in size further away from the Equator and the tropics. For example, eagles from South Carolina average in mass and in wingspan, smaller than their northern counterparts. One field guide in Florida listed similarly small sizes for bald eagles there, at about . The largest eagles are from Alaska, where large females may weigh up to and span across the wings. A pair of surveys of adult weights in Alaska showed that adult females there weighed on average , respectively, and males weighed against immatures which averaged and in the two sexes. R.S. Palmer listed a record from 1876 in Wyoming County, New York of an outsized adult bald eagle that was shot and reportedly scaled . Among standard linear measurements, the wing chord is , the tail is long, and the tarsus is . The culmen reportedly ranges from , while the measurement from the gape to the tip of the bill is . The bill size is unusually variable as Alaskan eagles could be up to twice the bill length of "southern birds" (i.e. from Georgia, Louisiana, Florida), with means in between the sexes of and in culmen length, respectively, from these two areas. The call consists of weak staccato, chirping whistles, "kleek kik ik ik ik", somewhat similar in cadence to a gull's call. The calls of young birds tend to be more harsh and shrill than those of adults. The bald eagle is placed in the genus "Haliaeetus" (sea eagles) and gets both its common and specific scientific names from the distinctive appearance of the adult's head. "Bald" in the English name is derived from the word "piebald", and refers to the white head and tail feathers and their contrast with the darker body. The scientific name is derived from "Haliaeetus", New Latin for "sea eagle" (from the Ancient Greek "haliaetos"), and "leucocephalus", Latinized Ancient Greek for "white head," from λευκος "leukos" ("white") and κεφαλη "kephale" ("head"). The bald eagle was one of the many species originally described by Linnaeus in his 18th century work "Systema Naturae", under the name "Falco leucocephalus". 
There are two recognized subspecies of bald eagle: The bald eagle forms a species pair with the Eurasian white-tailed eagle. This species pair consists of a white-headed and a tan-headed species of roughly equal size; the white-tailed eagle also has overall somewhat paler brown body plumage. The two species fill the same ecological niche in their respective ranges. The pair diverged from other sea eagles at the beginning of the Early Miocene (c. 10 Ma BP) at the latest, but possibly as early as the Early/Middle Oligocene, 28 Ma BP, if the most ancient fossil record is correctly assigned to this genus. The bald eagle's natural range covers most of North America, including most of Canada, all of the continental United States, and northern Mexico. It is the only sea eagle endemic to North America. Occupying varied habitats from the bayous of Louisiana to the Sonoran Desert and the eastern deciduous forests of Quebec and New England, northern birds are migratory, while southern birds are resident, remaining on their breeding territory all year. At minimum population, in the 1950s, it was largely restricted to Alaska, the Aleutian Islands, northern and eastern Canada, and Florida. Today, they are much more common, and nest in every continental state and province in the United States and Canada. Bald eagles will also congregate in certain locations in winter. From November until February, one to two thousand birds winter in Squamish, British Columbia, about halfway between Vancouver and Whistler. The birds primarily gather along the Squamish and Cheakamus Rivers, attracted by the salmon spawning in the area. It has occurred as a vagrant twice in Ireland; a juvenile was shot illegally in Fermanagh on January 11, 1973 (misidentified at first as a white-tailed eagle), and an exhausted juvenile was captured in Kerry on November 15, 1987. The bald eagle occurs during its breeding season in virtually any kind of American wetland habitat such as seacoasts, rivers, large lakes or marshes or other large bodies of open water with an abundance of fish. Studies have shown a preference for bodies of water with a circumference greater than , and lakes with an area greater than are optimal for breeding bald eagles. The bald eagle typically requires old-growth and mature stands of coniferous or hardwood trees for perching, roosting, and nesting. Tree species reportedly is less important to the eagle pair than the tree's height, composition and location. Perhaps of paramount importance for this species is an abundance of comparatively large trees surrounding the body of water. Selected trees must have good visibility, be over tall, an open structure, and proximity to prey. If nesting trees are in standing water such as in a mangrove swamp, the nest can be located fairly low, at as low above the ground. In a more typical tree standing on dry ground, nests may be located from in height. In Chesapeake Bay, nesting trees averaged in diameter and in total height, while in Florida, the average nesting tree stands high and is in diameter. Trees used for nesting in the Greater Yellowstone area average high. Trees or forest used for nesting should have a canopy cover of no more than 60%, and no less than 20%, and be in close proximity to water. Most nests have been found within of open water. The greatest distance from open water recorded for a bald eagle nest was over , in Florida. Bald eagle nests are often very large in order to compensate for size of the birds. 
The largest recorded nest was found in Florida in 1963, and was measured at nearly 10 feet wide and 20 feet deep. In Florida, nesting habitats often consist of mangrove swamps, the shorelines of lakes and rivers, pinelands, seasonally flooded flatwoods, hardwood swamps, and open prairies and pastureland with scattered tall trees. Favored nesting trees in Florida are slash pines ("Pinus elliottii"), longleaf pines ("P. palustris"), loblolly pines ("P. taeda") and cypress trees, except in the southern coastal areas, where mangroves are usually used. In Wyoming, groves of mature cottonwoods or tall pines found along streams and rivers are typical bald eagle nesting habitats. Wyoming eagles may inhabit habitat types ranging from large, old-growth stands of ponderosa pines ("Pinus ponderosa") to narrow strips of riparian trees surrounded by rangeland. In Southeast Alaska, Sitka spruce ("Picea sitchensis") provided 78% of the nesting trees used by eagles, followed by hemlocks ("Tsuga") at 20%. Increasingly, eagles nest in man-made reservoirs stocked with fish. The bald eagle is usually quite sensitive to human activity while nesting, and is found most commonly in areas with minimal human disturbance. It chooses sites more than from low-density human disturbance and more than from medium- to high-density human disturbance. However, bald eagles will occasionally nest in large estuaries or secluded groves within major cities, such as Hardtack Island on the Willamette River in Portland, Oregon or John Heinz National Wildlife Refuge at Tinicum in Philadelphia, Pennsylvania, which are surrounded by a great quantity of human activity. Even more contrary to the usual sensitivity to disturbance, a family of bald eagles moved to the Harlem neighborhood in New York City in 2010. While wintering, bald eagles tend to be less habitat and disturbance sensitive. They will commonly congregate at spots with plentiful perches and waters with plentiful prey and (in Northern climes) partially unfrozen waters. Alternatively, non-breeding or wintering bald eagles, particularly in areas with a lack of human disturbance, spend their time in various upland, terrestrial habitats sometimes quite far away from waterways. In the northern half of North America (especially the interior portion), this terrestrial inhabitance by bald eagles tends to be especially prevalent because unfrozen water may not be accessible. Upland wintering habitats often consist of open habitats with concentrations of medium-sized mammals, such as prairies, meadows or tundra, or open forests with regular carrion access. The bald eagle is a powerful flier, and soars on thermal convection currents. It reaches speeds of when gliding and flapping, and about while carrying fish. Its dive speed is between , though it seldom dives vertically. Regarding its flying abilities, the bald eagle is considered surprisingly maneuverable in flight; bounty hunters shooting from helicopters opined that bald eagles were far more difficult to hunt while flying than golden eagles, as they would turn, double back or dive as soon as approached. Bald eagles have also been recorded catching up to and then swooping under geese in flight, turning over and thrusting their talons into the other bird's breast. It is partially migratory, depending on location. If its territory has access to open water, it remains there year-round, but if the body of water freezes during the winter, making it impossible to obtain food, it migrates to the south or to the coast. 
A number of populations are subject to post-breeding dispersal, mainly in juveniles; Florida eagles, for example, will disperse northwards in the summer. The bald eagle selects migration routes which take advantage of thermals, updrafts, and food resources. During migration, it may ascend in a thermal and then glide down, or may ascend in updrafts created by the wind against a cliff or other terrain. Migration generally takes place during the daytime, usually between the local hours of 8:00 a.m. and 6:00 p.m., when thermals are produced by the sun. The bald eagle is an opportunistic carnivore with the capacity to consume a great variety of prey. Throughout their range, fish often comprise the majority of the eagle's diet. In 20 food habit studies across the species' range, fish comprised 56% of the diet of nesting eagles, birds 28%, mammals 14% and other prey 2%. In Southeast Alaska, fish comprise approximately 66% of the year-around diet of bald eagles and 78% of the prey brought to the nest by the parents. Eagles living in the Columbia River Estuary in Oregon were found to rely on fish for 90% of their dietary intake. In the Pacific Northwest, spawning trout and salmon provide most of the bald eagles' diet from late summer throughout fall. Southeast Alaskan eagles largely prey on pink salmon ("Oncorhynchus gorbuscha"), coho salmon ("O. kisutch") and, more locally, sockeye salmon ("O. nerka"), with chinook salmon ("O. tshawytscha"), due to their large size ( average adult size) probably being taken only as carrion. Also important in the estuaries and shallow coastlines of southern Alaska are Pacific herring ("Clupea pallasii"), Pacific sand lance ("Ammodytes hexapterus") and eulachon ("Thaleichthys pacificus"). In Oregon's Columbia River Estuary, the most significant prey species were largescale suckers ("Catostomus macrocheilus") (17.3% of the prey selected there), American shad ("Alosa sapidissima"; 13%) and common carp ("Cyprinus carpio"; 10.8%). Eagles living in the Chesapeake Bay in Maryland were found to subsist largely on American gizzard shad ("Dorosoma cepedianum"), threadfin shad ("D. petenense") and white bass ("Morone chrysops"). Floridian eagles have been reported to prey on catfish, most prevalently the brown bullhead ("Ameiurus nebulosus") and any species in the genus "Ictalurus" as well as mullet, trout, needlefish, and eels. Wintering eagles on the Platte River in Nebraska preyed mainly on American gizzard shads and common carp. From observation in the Columbia River, 58% of the fish were caught live by the eagle, 24% were scavenged as carcasses and 18% were pirated away from other animals. Even eagles living in relatively arid regions still typically rely primarily on fish as prey. In Sonora (Mexico) and Arizona, 77% and over 73%, respectively, of prey remains at the nests were from fish, largely various catfish and rainbow trout ("Oncorhynchus mykiss"). Prey fish targeted by bald eagles are often quite large. When experimenters offered fish of different sizes in the breeding season around Lake Britton in California, fish measuring were taken 71.8% of the time by parent eagles while fish measuring were chosen only 25% of the time. At nests around Lake Superior, the remains of fish (mostly suckers) were found to average in total length. In the Columbia River estuary, most preyed on by eagles were estimated to measure between in length, and carp flown with (laboriously) were up to in length. 
Benthic fishes such as catfish are usually consumed after they die and float to the surface, though while temporarily swimming in the open they may be more vulnerable to predation than most fish since their eyes focus downwards. Bald eagles also regularly exploit water turbines which produce battered, stunned or dead fish easily consumed. Predators who leave behind scraps of dead fish that they kill, such as brown bears ("Ursus arctos"), gray wolves ("Canis lupus") and red foxes ("Vulpes vulpes"), may be habitually followed in order to scavenge the kills secondarily. Once North Pacific salmon die off after spawning, local bald eagles usually eat salmon carcasses almost exclusively. Eagles in Washington need to consume of fish each day for survival, with adults generally consuming more than juveniles and thus reducing potential energy deficiency and increasing survival during winter. Behind fish, the next most significant prey base for bald eagles is other waterbirds. The contribution of such birds to the eagle's diet is variable, depending on the quantity and availability of fish near the water's surface. Waterbirds can seasonally comprise from 7% to 80% of the prey selection for eagles in certain localities. Exceptionally, in the Greater Yellowstone area, birds were eaten as regularly as fish year-around, with both prey groups comprising 43% of the studied dietary intake. Preferred avian prey includes grebes, alcids, ducks, gulls, coots, herons, egrets, and geese. Bird species most preferred as prey by eagles tend to be medium-sized, such as western grebes ("Aechmophorus occidentalis"), mallards ("Anas platyrhynchos") and American coots ("Fulica americana") as such prey is relatively easy for the much larger eagles to catch and fly with. The American herring gull ("Larus smithsonianus") is the favored avian prey species for eagles living around Lake Superior. Larger waterbirds are occasionally prey as well, with wintering emperor geese ("Chen canagica") and snow geese ("C. caerulescens"), which gather in large groups, sometimes becoming regular prey. Other large waterbirds hunted at least occasionally by bald eagles have included common loons ("Gavia immer"), great black-backed gulls ("Larus marinus"), sandhill cranes ("Grus canadensis"), great blue herons ("Ardea herodias"), Canada geese ("Branta canadensis"), brown pelicans ("Pelecanus occidentalis"), and fledgling American white pelicans ("P. erythrorhynchos"). Colony nesting seabirds, such as alcids, storm petrels, cormorants, northern gannets ("Morus bassanus"), terns and gulls, may be especially vulnerable to predation. Due to easy accessibility and lack of formidable nest defense by such species, bald eagles are capable of preying on such seabirds at all ages, from eggs to mature adults, and can effectively cull large portions of a colony. Along some portions of the North Pacific coastline, bald eagles which had historically preyed mainly on kelp-dwelling fish and supplementally on sea otter ("Enhydra lutris") pups are now preying mainly on seabird colonies since both the fish (possibly due to overfishing) and otters (cause unknown) have had precipitous population declines, causing concern for seabird conservation. Because of this more extensive predation, some biologists have expressed concern that murres are heading for a "conservation collision" due to heavy eagle predation. 
Eagles have been confirmed to attack nocturnally active, burrow-nesting seabird species such as storm petrels and shearwaters by digging out their burrows and feeding on all animals they find inside. If a bald eagle flies close by, waterbirds will often fly away en masse, though in other cases they may seemingly ignore a perched eagle. If such birds are on a colony, this exposes their unprotected eggs and nestlings to scavengers such as gulls. Bird prey may occasionally be attacked in flight, with prey up to the size of Canada geese attacked and killed in mid-air. Unprecedented photographs of a bald eagle unsuccessfully attempting to prey on a much larger adult trumpeter swan ("Cygnus buccinator") in mid-flight were taken recently. While adults often actively prey on waterbirds, congregated wintering waterfowl are frequently exploited for carcasses to scavenge by immature eagles in harsh winter weather. Bald eagles have been recorded as killing other raptors on occasion. In some cases, these may be attacks of competition or kleptoparasitism on rival species but end with the consumption of the victim. Raptorial birds reported to be hunted by these eagles have included large adults of species such as red-tailed hawks ("Buteo jamaicensis"), ospreys ("Pandion haliaetus") and black ("Coragyps atratus") and turkey vultures ("Cathartes aura"). Mammalian prey includes rabbits, hares, ground squirrels, raccoons ("Procyon lotor"), muskrats ("Ondatra zibethicus"), beavers ("Castor canadensis"), and deer fawns. Newborn, dead, sickly or already injured mammals are often targeted. However, more formidable prey such as adult raccoons and subadult beavers are sometimes attacked. In the Chesapeake Bay area, bald eagles are reportedly the main natural predators of raccoons. Where available, seal colonies can provide much food. On Protection Island, Washington, they commonly feed on harbor seal ("Phoca vitulina") afterbirths, still-borns and sickly seal pups. On San Juan Island in Washington, introduced European rabbits ("Oryctolagus cuniculus"), mainly those killed by auto accidents, comprise nearly 60% of the dietary intake of eagles. In landlocked areas of North America, wintering bald eagles may become habitual predators of medium-sized mammals that occur in colonies or local concentrations, such as prairie dogs ("Cynomys") and jackrabbits ("Lepus"). Together with the golden eagle, bald eagles are occasionally accused of preying on livestock, especially sheep ("Ovis aries"). There are a handful of proven cases of lamb predation by bald eagles, some involving specimens weighing up to , but they are much less likely to attack a healthy lamb than a golden eagle and both species prefer native, wild prey and are unlikely to cause any extensive detriment to human livelihoods. There is one case of a bald eagle killing and feeding on an adult, pregnant ewe (then joined in eating the kill by at least 3 other eagles), which, weighing on average over , is much larger than any other known prey taken by this species. Supplemental prey are readily taken given the opportunity. In some areas reptiles may become regular prey, especially warm areas such as Florida where reptile diversity is high. Turtles are perhaps the most regularly hunted type of reptile. In coastal New Jersey, 14 of 20 studied eagle nests included remains of turtles. The main species found were common musk turtles ("Sternotherus odoratus"), diamondback terrapins ("Malaclemys terrapin") and juvenile common snapping turtles ("Chelydra serpentina"). 
In these New Jersey nests, mainly subadults and small adults were taken, ranging in carapace length from . Snakes are also taken occasionally, especially partially aquatic ones, as are amphibians and crustaceans (largely crayfish and crabs). To hunt fish, the eagle swoops down over the water and snatches the fish out of the water with its talons. They eat by holding the fish in one claw and tearing the flesh with the other. Eagles have structures on their toes called spicules that allow them to grasp fish. Ospreys also have this adaptation. Bald eagles have powerful talons and have been recorded flying with a mule deer ("Odocoileus hemionus") fawn. This feat is the record for the heaviest load-carrying ever verified for a flying bird. It has been estimated that the gripping power (pounds per square inch) of the bald eagle is ten times greater than that of a human. Bald eagles can fly with fish at least equal to their own weight, but if the fish is too heavy to lift, the eagle may be dragged into the water. It may swim to safety, in some cases pulling the catch along to the shore as it swims, but some eagles drown or succumb to hypothermia. Many sources claim that bald eagles, like all large eagles, cannot normally take flight carrying prey more than half of their own weight unless aided by favorable wind conditions. On numerous occasions, when large prey such as mature salmon or geese are attacked, eagles have been seen to make contact and then drag the prey in a strenuously labored, low flight over the water to a bank, where they then finish off and dismember the prey. When food is abundant, an eagle can gorge itself by storing up to of food in a pouch in the throat called a crop. Gorging allows the bird to fast for several days if food becomes unavailable. Occasionally, bald eagles may hunt cooperatively when confronting prey, especially relatively large prey such as jackrabbits or herons, with one bird distracting potential prey, while the other comes behind it in order to ambush it. While hunting waterfowl, bald eagles repeatedly fly at a target and cause it to dive repeatedly, hoping to exhaust the victim so it can be caught (white-tailed eagles have been recorded hunting waterfowl in the same way). When hunting concentrated prey, a successful catch often results in the hunting eagle being pursued by other eagles, forcing it to find an isolated perch for consumption if it is able to carry the catch away. Unlike some other eagle species, bald eagles rarely take on evasive or dangerous prey on their own. The species mainly targets prey which is much smaller than itself, with most live fish caught weighing and most waterbirds preyed on weighing . They obtain much of their food as carrion or via a practice known as kleptoparasitism, by which they steal prey away from other predators. Due to their dietary habits, bald eagles are frequently viewed in a negative light by humans. Thanks to their superior foraging ability and experience, adults are generally more likely to hunt live prey than immature eagles, which often obtain their food from scavenging. They are not very selective about the condition or origin of a carcass, whether it was provided by humans, other animals, auto accidents or natural causes, but will avoid eating carrion where disturbances from humans are a regular occurrence. They will scavenge carcasses up to the size of whales, though carcasses of ungulates and large fish are seemingly preferred. 
Bald eagles may also sometimes feed on material scavenged or stolen from campsites and picnics, as well as garbage dumps (dump usage is habitual mainly in Alaska). When competing for food, eagles will usually dominate other fish-eaters and scavengers, aggressively displacing mammals such as coyotes ("Canis latrans") and foxes, and birds such as corvids, gulls, vultures and other raptors. Occasionally, coyotes, bobcats ("Lynx rufus") and domestic dogs ("Canis lupus familiaris") can displace eagles from carrion, usually less confident immature birds, as has been recorded in Maine. Bald eagles are less active, bold predators than golden eagles and get relatively more of their food as carrion and from kleptoparasitism (although it is now generally thought that golden eagles eat more carrion than was previously assumed). However, the two species are roughly equal in size, aggressiveness and physical strength and so competitions can go either way. Neither species is known to be dominant, and the outcome depends on the size and disposition of the individual eagles involved. The bald eagle is thought to be much more numerous in North America than the golden eagle, with the bald species estimated to number at least 150,000 individuals, about twice the number of golden eagles estimated to live in North America. Due to this, bald eagles often outnumber golden eagles at attractive food sources. Despite the potential for contention between these animals, in New Jersey during winter, a golden eagle and numerous bald eagles were observed to hunt snow geese alongside each other without conflict. Similarly, both eagle species have been recorded, via video-monitoring, to feed on gut pills and carcasses of white-tailed deer ("Odocoileus virginianus") in remote forest clearings in the eastern Appalachian Mountains without apparent conflict. Many bald eagles are habitual kleptoparasites, especially in winters when fish are harder to come by. They have been recorded stealing fish from other predators such as ospreys, herons and even otters. They have also been recorded opportunistically pirating birds from peregrine falcons ("Falco peregrinus"), prairie dogs from ferruginous hawks ("Buteo regalis") and even jackrabbits from golden eagles. When they approach scavengers like dogs, gulls or vultures at carrion sites, they often aggressively attack them and try to force them to disgorge their food. Healthy adult bald eagles are not preyed on in the wild and are thus considered apex predators. Bald eagles are sexually mature at four or five years of age. When they are old enough to breed, they often return to the area where they were born. It is thought that bald eagles mate for life. However, if one member of a pair dies or disappears, the survivor will choose a new mate. A pair which has repeatedly failed in breeding attempts may split and look for new mates. Bald eagle courtship involves elaborate, spectacular calls and flight displays. The flight includes swoops, chases, and cartwheels, in which they fly high, lock talons, and free fall, separating just before hitting the ground. Usually, a territory defended by a mature pair will be of waterside habitat. Compared to most other raptors, which mostly nest in April or May, bald eagles are early breeders: nest building or reinforcing is often by mid-February, egg laying is often late February (sometimes during deep snow in the North), and incubation usually runs from mid-March to early May. 
Eggs hatch from mid-April to early May, and the young fledge late June to early July. The nest is the largest of any bird in North America; it is used repeatedly over many years and with new material added each year may eventually be as large as deep, across and weigh ; one nest in Florida was found to be deep, across, and to weigh . This nest is the largest tree nest ever recorded for any animal. Usually nests are used for under five years or so, as they either collapse in storms or break the branches supporting them by their sheer weight. However, one nest in the Midwest was occupied continuously for at least 34 years. The nest is built out of branches, usually in large trees found near water. When breeding where there are no trees, the bald eagle will nest on the ground, as has been recorded largely in areas isolated from terrestrial predators, such as Amchitka Island in Alaska. In Sonora, Mexico, eagles have been observed nesting on top of hecho cactuses ("Pachycereus pecten-aboriginum"). Nests located on cliffs and rock pinnacles have been reported historically in California, Kansas, Nevada, New Mexico and Utah, but currently are verified to occur only in Alaska and Arizona. The eggs average about long, ranging from , and have a breadth of , ranging from . Eggs in Alaska averaged in mass, while in Saskatchewan they averaged . As with their ultimate body size, egg size tends to increase further away from the Equator. Eagles produce between one and three eggs per year, two being typical. Rarely, four eggs have been found in nests but these may be exceptional cases of polygyny. Eagles in captivity have been capable of producing up to seven eggs. It is rare for all three chicks to successfully reach the fledging stage. The oldest chick often bears the advantage of larger size and louder voice, which tends to draw the parents' attention towards it. Occasionally, as is recorded in many large raptorial birds, the oldest sibling attacks and kills its younger sibling(s), especially early in the nesting period when their sizes are most different. However, nearly half of known bald eagles produce two fledglings (more rarely three), unlike in some other "eagle" species such as some in the genus "Aquila", in which a second fledgling is typically observed in less than 20% of nests, despite two eggs typically being laid. Both the male and female take turns incubating the eggs, but the female does most of the sitting. The parent not incubating will hunt for food or look for nesting material during this stage. For the first two to three weeks of the nestling period, at least one adult is at the nest almost 100% of the time. After five to six weeks, the attendance of parents usually drops off considerably (with the parents often perching in trees nearby). A young eaglet can gain up to a day, the fastest growth rate of any North American bird. The young eaglets pick up and manipulate sticks, play tug of war with each other, practice holding things in their talons, and stretch and flap their wings. By eight weeks, the eaglets are strong enough to flap their wings, lift their feet off the nest platform, and rise up in the air. The young fledge at anywhere from 8 to 14 weeks of age, though they will remain close to the nest and be attended to by their parents for a further 6 weeks. Juvenile eagles first start dispersing away from their parents about 8 weeks after they fledge. Variability in departure date is related to the effects of sex and hatching order on growth and development. 
For the next four years, immature eagles wander widely in search of food until they attain adult plumage and are eligible to reproduce. Additionally, as shown by a pair of eagles in Shoal Harbor Migratory Bird Sanctuary located near Sydney, British Columbia on June 9, 2017, bald eagles have recently been recorded occasionally adopting other raptor fledglings into their nests. The pair of eagles in question were recorded carrying a juvenile red-tailed hawk back to their nest, whereupon the chick was accepted into the family by both the parents and the eagles' three fledglings. Whether or not the chick survived remained to be seen at the time, as young bald eagles are known for killing their siblings. However, the aggression of the red-tailed hawk may ensure its survival, as the hawks are well known for their ability to successfully defend against an eagle attack. Six weeks later, however, it was discovered that the hawk, nicknamed "Spunky" by biologists monitoring the nest, had grown to fledgling size and was learning how to hunt, indicating that it successfully survived. The average lifespan of bald eagles in the wild is around 20 years, with the oldest confirmed one having been 38 years of age. In captivity, they often live somewhat longer. In one instance, a captive individual in New York lived for nearly 50 years. As with size, the average lifespan of an eagle population appears to be influenced by its location and access to prey. As they are no longer heavily persecuted, adult mortality is quite low. In one study of Florida eagles, adult bald eagles reportedly had a 100% annual survival rate. In Prince William Sound in Alaska, adults had an annual survival rate of 88% even after the Exxon Valdez oil spill adversely affected eagles in the area. Of 1,428 individuals from across the range necropsied by the National Wildlife Health Center from 1963 to 1984, 329 (23%) eagles died from trauma, primarily impact with wires and vehicles; 309 (22%) died from gunshot; 158 (11%) died from poisoning; 130 (9%) died from electrocution; 68 (5%) died from trapping; 110 (8%) from emaciation; and 31 (2%) from disease; cause of death was undetermined in 293 (20%) of cases. In this study, 68% of mortality was human-caused. Today eagle-shooting is believed to be considerably reduced due to the species' protected status. In one case, an adult eagle investigating a peregrine falcon nest for prey items sustained a concussion from a swooping parent peregrine, and ultimately died days later from it. An early natural history video depicting a cougar ("Puma concolor") ambushing and killing an immature bald eagle feeding at a rabbit carcass is viewable online, although this film may have been staged. Most non-human-related mortality involves nestlings or eggs. Around 50% of eagles survive their first year. However, in the Chesapeake Bay area, 100% of 39 radio-tagged nestlings survived to their first year. Occasionally, nestling or egg fatalities are due to nest collapses, starvation, sibling aggression or inclement weather. Another significant cause of egg and nestling mortality is predation. Eggs and nestlings have been verified to be preyed on by large gulls, corvids (including ravens, crows and magpies), wolverines ("Gulo gulo"), hawks, owls, eagles, bobcats ("Lynx rufus"), American black bears ("Ursus americanus") and raccoons. If food access is low, parental attendance at the nest may be lower because both parents may have to forage, thus resulting in less protection. 
Nestlings are usually exempt from predation by terrestrial carnivores that are poor tree-climbers, but Arctic foxes ("Vulpes lagopus") occasionally snatched nestlings from ground nests on Amchitka Island in Alaska before they were extirpated from the island. The bald eagle will defend its nest fiercely from all comers and has even repelled attacks from bears, having been recorded knocking a black bear out of a tree when the latter tried to climb a tree holding nestlings. Once a common sight in much of the continent, the bald eagle was severely affected in the mid-20th century by a variety of factors, among them the thinning of egg shells attributed to use of the pesticide DDT. Bald eagles, like many birds of prey, were especially affected by DDT due to biomagnification. DDT itself was not lethal to the adult bird, but it interfered with the bird's calcium metabolism, making the bird either sterile or unable to lay healthy eggs. Female eagles laid eggs that were too brittle to withstand the weight of a brooding adult, making it nearly impossible for the eggs to hatch. It is estimated that in the early 18th century, the bald eagle population was 300,000–500,000, but by the 1950s there were only 412 nesting pairs in the 48 contiguous states of the US. Other factors in bald eagle population reductions were a widespread loss of suitable habitat, as well as both legal and illegal shooting. In 1930 a New York City ornithologist wrote that in the state of Alaska in the previous 12 years approximately 70,000 bald eagles had been shot. Many of the hunters killed the bald eagles under the long-held beliefs that bald eagles grabbed young lambs and even children with their talons, yet the birds were innocent of most of these alleged acts of predation (lamb predation is rare, human predation is thought to be non-existent). Later illegal shooting was described as "the leading cause of direct mortality in both adult and immature bald eagles," according to a 1978 report in the Endangered Species Technical Bulletin. In 1984, the National Wildlife Federation listed hunting, power-line electrocution, and collisions in flight as the leading causes of eagle deaths. Bald eagles have also been killed by oil, lead, and mercury pollution, and by human and predator intrusion at nests. The species was first protected in the U.S. and Canada by the 1918 Migratory Bird Treaty, later extended to all of North America. The Bald and Golden Eagle Protection Act, approved by the U.S. Congress in 1940, protected the bald eagle and the golden eagle, prohibiting commercial trapping and killing of the birds. The bald eagle was declared an endangered species in the U.S. in 1967, and amendments to the 1940 act between 1962 and 1972 further restricted commercial uses and increased penalties for violators. Perhaps most significant in the species' recovery, in 1972, DDT was banned from usage in the United States due to the fact that it inhibited the reproduction of many birds. DDT was completely banned in Canada in 1989, though its use had been highly restricted since the late 1970s. With regulations in place and DDT banned, the eagle population rebounded. The bald eagle can be found in growing concentrations throughout the United States and Canada, particularly near large bodies of water. In the early 1980s, the estimated total population was 100,000 individuals, with 110,000–115,000 by 1992; the U.S. 
state with the largest resident population is Alaska, with about 40,000–50,000; the next highest population is in the Canadian province of British Columbia, with 20,000–30,000 in 1992. Obtaining a precise count of the bald eagle population is extremely difficult. The most recent data submitted by individual states was in 2006, when 9,789 breeding pairs were reported. For some time, the stronghold breeding population of bald eagles in the lower 48 states was in Florida, where over a thousand pairs have held on while populations in other states were significantly reduced by DDT use. Today, the contiguous state with the largest number of breeding pairs of eagles is Minnesota with an estimated 1,312 pairs, surpassing Florida's most recent count of 1,166 pairs. Twenty-three, or nearly half, of the 48 contiguous states now have at least 100 breeding pairs of bald eagles. In Washington State, there were only 105 occupied nests in 1980. That number increased by about 30 per year, so that by 2005 there were 840 occupied nests. 2005 was the last year that the Washington Department of Fish and Wildlife counted occupied nests. Further population increases in Washington may be limited by the availability of late winter food, particularly salmon. The bald eagle was officially removed from the U.S. federal government's list of endangered species on July 12, 1995, by the U.S. Fish & Wildlife Service, when it was reclassified from "Endangered" to "Threatened." On July 6, 1999, a proposal was initiated "To Remove the Bald Eagle in the Lower 48 States From the List of Endangered and Threatened Wildlife." It was de-listed on June 28, 2007. It has also been assigned a risk level of Least Concern on the IUCN Red List. In the Exxon Valdez oil spill of 1989, an estimated 247 bald eagles were killed in Prince William Sound, though the local population returned to its pre-spill level by 1995. In some areas, the population has increased such that the eagles are a pest. In December 2016, the U.S. Fish and Wildlife Service proposed quadrupling to 4,200 per year the number of bald eagles that can be killed by the wind electric generation industry without paying a penalty. If issued, the permits would last 30 years, six times the current 5-year permits. Permits are required to keep bald eagles in captivity in the United States. Permits are primarily issued to public educational institutions, and the eagles which they show are permanently injured individuals which cannot be released to the wild. The facilities where eagles are kept must be equipped with adequate caging and facilities, as well as workers experienced in the handling and care of eagles. Bald eagles cannot legally be kept for falconry in the United States. As a rule, the bald eagle is a poor choice for public shows, being timid, prone to becoming highly stressed, and unpredictable in nature. Native American tribes can obtain a "Native American Religious Use" permit to keep non-releasable eagles as well. They use their naturally molted feathers for religious and cultural ceremonies. The bald eagle can be long-lived in captivity if well cared for, but does not breed well even under the best conditions. In Canada, a license is required to keep bald eagles for falconry. The bald eagle is important in various Native American cultures and, as the national bird of the United States, is prominent in seals and logos, coinage, postage stamps, and other items relating to the U.S. federal government. 
The bald eagle is a sacred bird in some North American cultures, and its feathers, like those of the golden eagle, are central to many religious and spiritual customs among Native Americans. Eagles are considered spiritual messengers between gods and humans by some cultures. Many pow wow dancers use the eagle claw as part of their regalia as well. Eagle feathers are often used in traditional ceremonies, particularly in the construction of regalia and as a part of fans, bustles and headdresses. In the Navajo tradition, an eagle feather is represented as a protector; along with the feathers, Navajo medicine men use the leg and wing bones for ceremonial whistles. The Lakota, for instance, give an eagle feather as a symbol of honor to a person who achieves a task. In modern times, it may be given at an event such as a graduation from college. The Pawnee considered eagles as symbols of fertility because their nests are built high off the ground and because they fiercely protect their young. The Choctaw considered the bald eagle, who has direct contact with the upper world of the sun, as a symbol of peace. During the Sun Dance, which is practiced by many Plains Indian tribes, the eagle is represented in several ways. The eagle nest is represented by the fork of the lodge where the dance is held. A whistle made from the wing bone of an eagle is used during the course of the dance. Also during the dance, a medicine man may direct his fan, which is made of eagle feathers, to people who seek to be healed. The medicine man touches the fan to the center pole and then to the patient, in order to transmit power from the pole to the patient. The fan is then held up toward the sky, so that the eagle may carry the prayers for the sick to the Creator. Current eagle feather law stipulates that only individuals of certifiable Native American ancestry enrolled in a federally recognized tribe are legally authorized to obtain or possess bald or golden eagle feathers for religious or spiritual use. The constitutionality of these laws has been questioned by Native American groups on the basis that it violates the First Amendment by affecting their ability to practice their religion freely. The National Eagle Repository, a division of the FWS, exists as a means to receive, process, and store bald and golden eagles which are found dead, and to distribute the eagles, their parts and feathers, to federally recognized Native American tribes for use in religious ceremonies. The bald eagle is the national bird of the United States of America. The founders of the United States were fond of comparing their new republic with the Roman Republic, in which eagle imagery (usually involving the golden eagle) was prominent. On June 20, 1782, the Continental Congress adopted the design for the Great Seal of the United States depicting a bald eagle grasping 13 arrows and an olive branch with its talons. The bald eagle appears on most official seals of the U.S. government, including the presidential seal, the presidential flag, and in the logos of many U.S. federal agencies. Between 1916 and 1945, the presidential flag (but not the seal) showed an eagle facing to its left (the viewer's right), which gave rise to the urban legend that the flag is changed to have the eagle face towards the olive branch in peace, and towards the arrows in wartime. 
Contrary to popular legend, there is no evidence that Benjamin Franklin ever publicly supported the wild turkey ("Meleagris gallopavo"), rather than the bald eagle, as a symbol of the United States. However, in a letter written to his daughter in 1784 from Paris, criticizing the Society of the Cincinnati, he stated his personal distaste for the bald eagle's behavior. In the letter Franklin states: Franklin opposed the creation of the Society because he viewed it, with its hereditary membership, as a noble order unwelcome in the newly independent Republic, contrary to the ideals of Lucius Quinctius Cincinnatus, for whom the Society was named; his reference to the two kinds of birds is interpreted as a satirical comparison between the Society of the Cincinnati and Cincinnatus. Blackbeard Edward Teach or Edward Thatch ( – 22 November 1718), better known as Blackbeard, was an English pirate who operated around the West Indies and the eastern coast of Britain's North American colonies. Little is known about his early life, but he may have been a sailor on privateer ships during Queen Anne's War before settling on the Bahamian island of New Providence, a base for Captain Benjamin Hornigold, whose crew Teach joined around 1716. Hornigold placed him in command of a sloop that he had captured, and the two engaged in numerous acts of piracy. Their numbers were boosted by the addition to their fleet of two more ships, one of which was commanded by Stede Bonnet; but Hornigold retired from piracy towards the end of 1717, taking two vessels with him. Teach captured a French merchant vessel, renamed her "Queen Anne's Revenge", and equipped her with 40 guns. He became a renowned pirate, his nickname derived from his thick black beard and fearsome appearance; he was reported to have tied lit fuses (slow matches) under his hat to frighten his enemies. He formed an alliance of pirates and blockaded the port of Charles Town, South Carolina, ransoming the port's inhabitants. He then ran "Queen Anne's Revenge" aground on a sandbar near Beaufort, North Carolina. He parted company with Bonnet and settled in Bath, North Carolina, also known as Bath Town where he accepted a royal pardon. But he was soon back at sea, where he attracted the attention of Alexander Spotswood, the Governor of Virginia. Spotswood arranged for a party of soldiers and sailors to capture the pirate, which they did on 22 November 1718 following a ferocious battle. Teach and several of his crew were killed by a small force of sailors led by Lieutenant Robert Maynard. Teach was a shrewd and calculating leader who spurned the use of force, relying instead on his fearsome image to elicit the response that he desired from those whom he robbed. Contrary to the modern-day picture of the traditional tyrannical pirate, he commanded his vessels with the consent of their crews and there is no known account of his ever having harmed or murdered those whom he held captive. He was romanticized after his death and became the inspiration for an archetypal pirate in works of fiction across many genres. Little is known about Blackbeard's early life. It is commonly believed that at the time of his death he was between 35 and 40 years old and thus born in about 1680. In contemporary records his name is most often given as Blackbeard, Edward Thatch or Edward Teach; the latter is most often used. Several spellings of his surname exist—Thatch, Thach, Thache, Thack, Tack, Thatche and Theach. 
One early source claims that his surname was Drummond, but the lack of any supporting documentation makes this unlikely. Pirates habitually used fictitious surnames while engaged in piracy, so as not to tarnish the family name, and this makes it unlikely that Teach's real name will ever be known. The 17th-century rise of Britain's American colonies and the rapid 18th-century expansion of the Atlantic slave trade had made Bristol an important international sea port, and Teach was most likely raised in what was then the second-largest city in England. He could almost certainly read and write; he communicated with merchants and when killed had in his possession a letter addressed to him by the Chief Justice and Secretary of the Province of Carolina, Tobias Knight. The author Robert Lee speculated that Teach may therefore have been born into a respectable, wealthy family. He may have arrived in the Caribbean in the last years of the 17th century, on a merchant vessel (possibly a slave ship). The 18th-century author Charles Johnson claimed that Teach was for some time a sailor operating from Jamaica on privateer ships during the War of the Spanish Succession, and that "he had often distinguished himself for his uncommon boldness and personal courage". At what point during the war Teach joined the fighting is, in keeping with the record of most of his life before he became a pirate, unknown. With its history of colonialism, trade and piracy, the West Indies was the setting for many 17th and 18th-century maritime incidents. The privateer-turned-pirate Henry Jennings and his followers decided, early in the 18th century, to use the uninhabited island of New Providence as a base for their operations; it was within easy reach of the Florida Strait and its busy shipping lanes, which were filled with European vessels crossing the Atlantic. New Providence's harbour could easily accommodate hundreds of ships but was too shallow for the Royal Navy's larger vessels to navigate. The author George Woodbury described New Providence as "no city of homes; it was a place of temporary sojourn and refreshment for a literally floating population," continuing, "The only permanent residents were the piratical camp followers, the traders, and the hangers-on; all others were transient." In New Providence, pirates found a welcome respite from the law. Teach was one of those who came to enjoy the island's benefits. Probably shortly after the signing of the Treaty of Utrecht, he moved there from Jamaica, and, along with most privateers once involved in the war, became involved in piracy. Possibly about 1716, he joined the crew of Captain Benjamin Hornigold, a renowned pirate who operated from New Providence's safe waters. In 1716 Hornigold placed Teach in charge of a sloop he had taken as a prize. In early 1717, Hornigold and Teach, each captaining a sloop, set out for the mainland. They captured a boat carrying 120 barrels of flour out of Havana, and shortly thereafter took 100 barrels of wine from a sloop out of Bermuda. A few days later they stopped a vessel sailing from Madeira to Charles Town, South Carolina. Teach and his quartermaster, William Howard, may at this time have struggled to control their crews. By then they had probably developed a taste for Madeira wine, and on 29 September near Cape Charles all they took from the "Betty" of Virginia was her cargo of Madeira, before they scuttled her with the remaining cargo. 
It was during this cruise with Hornigold that the earliest known report of Teach was made, in which he is recorded as a pirate in his own right, in command of a large crew. In a report made by a Captain Mathew Munthe on an anti-piracy patrol for North Carolina, "Thatch" was described as operating "a sloop 6 gunns and about 70 men". In September Teach and Hornigold encountered Stede Bonnet, a landowner and military officer from a wealthy family who had turned to piracy earlier that year. Bonnet's crew of about 70 were reportedly dissatisfied with his command, so with Bonnet's permission, Teach took control of his ship "Revenge". The pirates' flotilla now consisted of three ships; Teach on "Revenge", Teach's old sloop and Hornigold's "Ranger". By October, another vessel had been captured and added to the small fleet. The sloops "Robert" of Philadelphia and "Good Intent" of Dublin were stopped on 22 October 1717, and their cargo holds emptied. As a former British privateer, Hornigold attacked only his old enemies, but for his crew, the sight of British vessels filled with valuable cargo passing by unharmed became too much, and at some point toward the end of 1717 he was demoted. Whether Teach had any involvement in this decision is unknown, but Hornigold quickly retired from piracy. He took "Ranger" and one of the sloops, leaving Teach with "Revenge" and the remaining sloop. The two never met again, and with many other occupants of New Providence, Hornigold accepted the King's pardon from Woodes Rogers in June the following year. On 28 November Teach's two ships attacked a French merchant vessel off the coast of Saint Vincent. They each fired a broadside across its bulwarks, killing several of its crew, and forcing its captain to surrender. The ship was "La Concorde" of Saint-Malo, a large French guineaman carrying a cargo of slaves. Teach and his crews sailed the vessel south along Saint Vincent and the Grenadines to Bequia, where they disembarked her crew and cargo, and converted the ship for their own use. The crew of "La Concorde" were given the smaller of Teach's two sloops, which they renamed "Mauvaise Rencontre" (Bad Meeting), and sailed for Martinique. Teach may have recruited some of their slaves, but the remainder were left on the island and were later recaptured by the returning crew of "Mauvaise Rencontre". Teach immediately renamed "La Concorde" as "Queen Anne's Revenge" and equipped her with 40 guns. By this time Teach had placed his lieutenant Richards in command of Bonnet's "Revenge". In late November, near Saint Vincent, he attacked the "Great Allen". After a lengthy engagement, he forced the large and well-armed merchant ship to surrender. He ordered her to move closer to the shore, disembarked her crew and emptied her cargo holds, and then burned and sank the vessel. The incident was chronicled in the "Boston News-Letter", which called Teach the commander of a "French ship of 32 Guns, a Briganteen of 10 guns and a Sloop of 12 guns." When or where Teach collected the ten gun briganteen is unknown, but by that time he may have been in command of at least 150 men split among three vessels. On 5 December 1717 Teach stopped the merchant sloop "Margaret" off the coast of Crab Island, near Anguilla. Her captain, Henry Bostock, and crew, remained Teach's prisoners for about eight hours, and were forced to watch as their sloop was ransacked. Bostock, who had been held aboard "Queen Anne's Revenge", was returned unharmed to "Margaret" and was allowed to leave with his crew. 
He returned to his base of operations on Saint Christopher Island and reported the matter to Governor Walter Hamilton, who requested that he sign an affidavit about the encounter. Bostock's deposition details Teach's command of two vessels: a sloop and a large French guineaman, Dutch-built, with 36 cannon and a crew of 300 men. The captain believed that the larger ship carried valuable gold dust, silver plate, and "a very fine cup" supposedly taken from the commander of "Great Allen". Teach's crew had apparently informed Bostock that they had destroyed several other vessels, and that they intended to sail to Hispaniola and lie in wait for an expected Spanish armada, supposedly laden with money to pay the garrisons. Bostock also claimed that Teach had questioned him about the movements of local ships, but also that he had seemed unsurprised when Bostock told him of an expected royal pardon from London for all pirates. Bostock's deposition describes Teach as a "tall spare man with a very black beard which he wore very long". It is the first recorded account of Teach's appearance and is the source of his cognomen, Blackbeard. Later descriptions mention that his thick black beard was braided into pigtails, sometimes tied in with small coloured ribbons. Johnson (1724) described him as "such a figure that imagination cannot form an idea of a fury from hell to look more frightful." Whether Johnson's description was entirely truthful or embellished is unclear, but it seems likely that Teach understood the value of appearances; better to strike fear into the hearts of one's enemies than to rely on bluster alone. Teach was tall, with broad shoulders. He wore knee-length boots and dark clothing, topped with a wide hat and sometimes a long coat of brightly coloured silk or velvet. Johnson also described Teach in times of battle as wearing "a sling over his shoulders, with three brace of pistols, hanging in holsters like bandoliers; and stuck lighted matches (slow match) under his hat", the latter apparently to emphasise the fearsome appearance he wished to present to his enemies. Despite his ferocious reputation, though, there are no verified accounts of his ever having murdered or harmed those he held captive. Teach may have used other aliases; on 30 November, the "Monserrat Merchant" encountered two ships and a sloop, commanded by a Captain Kentish and Captain Edwards (the latter a known alias of Stede Bonnet). Teach's movements between late 1717 and early 1718 are not known. He and Bonnet were probably responsible for an attack off Sint Eustatius in December 1717. Henry Bostock claimed to have heard the pirates say they would head toward the Spanish-controlled Samaná Bay in Hispaniola, but a cursory search revealed no pirate activity. Captain Hume reported on 6 February that a "Pyrate Ship of 36 Guns and 250 men, and a Sloop of 10 Guns and 100 men were Said to be Cruizing amongst the Leeward Islands". Hume reinforced his crew with musket-armed soldiers and joined up with another ship to track the two vessels, to no avail, though they discerned that the two ships had sunk a French vessel off St Christopher Island, and reported also that they had last been seen "gone down the North side of Hispaniola". Although no confirmation exists that these two ships were controlled by Teach and Bonnet, author Angus Konstam believes it very likely they were. In March 1718, while taking on water at Turneffe Island east of Belize, both ships spotted the Jamaican logwood cutting sloop "Adventure" making for the harbour.
She was stopped and her captain, Harriot, invited to join the pirates. Harriot and his crew accepted the invitation, and Teach sent over a crew to sail "Adventure" making Israel Hands the captain. They sailed for the Bay of Honduras, where they added another ship and four sloops to their flotilla. On 9 April Teach's enlarged fleet of ships looted and burnt "Protestant Caesar". His fleet then sailed to Grand Cayman where they captured a "small turtler". Teach probably sailed toward Havana, where he may have captured a small Spanish vessel that had left the Cuban port. They then sailed to the wrecks of the 1715 Spanish fleet, off the eastern coast of Florida. There Teach disembarked the crew of the captured Spanish sloop, before proceeding north to the port of Charles Town, South Carolina, attacking three vessels along the way. By May 1718, Teach had awarded himself the rank of Commodore and was at the height of his power. Late that month his flotilla blockaded the port of Charles Town in the Province of South Carolina. All vessels entering or leaving the port were stopped, and as the town had no guard ship, its pilot boat was the first to be captured. Over the next five or six days about nine vessels were stopped and ransacked as they attempted to sail past Charles Town Bar, where Teach's fleet was anchored. One such ship, headed for London with a group of prominent Charles Town citizens which included Samuel Wragg (a member of the Council of the Province of Carolina), was the "Crowley". Her passengers were questioned about the vessels still in port and then locked below decks for about half a day. Teach informed the prisoners that his fleet required medical supplies from the colonial government of South Carolina, and that if none were forthcoming, all prisoners would be executed, their heads sent to the Governor and all captured ships burnt. Wragg agreed to Teach's demands, and a Mr. Marks and two pirates were given two days to collect the drugs. Teach moved his fleet, and the captured ships, to within about five or six leagues from land. Three days later a messenger, sent by Marks, returned to the fleet; Marks's boat had capsized and delayed their arrival in Charles Town. Teach granted a reprieve of two days, but still the party did not return. He then called a meeting of his fellow sailors and moved eight ships into the harbour, causing panic within the town. When Marks finally returned to the fleet, he explained what had happened. On his arrival he had presented the pirates' demands to the Governor and the drugs had been quickly gathered, but the two pirates sent to escort him had proved difficult to find; they had been busy drinking with friends and were finally discovered, drunk. Teach kept to his side of the bargain and released the captured ships and his prisoners—albeit relieved of their valuables, including the fine clothing some had worn. Whilst at Charles Town, Teach learned that Woodes Rogers had left England with several men-of-war, with orders to purge the West Indies of pirates. Teach's flotilla sailed northward along the Atlantic coast and into Topsail Inlet (commonly known as Beaufort Inlet), off the coast of North Carolina. There they intended to careen their ships to scrape their hulls, but "Queen Anne's Revenge" ran aground on a sandbar, cracking her main-mast and severely damaging many of her timbers. Teach ordered several sloops to throw ropes across the flagship in an attempt to free her. 
A sloop commanded by Israel Hands of "Adventure" also ran aground, and both vessels appeared to be damaged beyond repair, leaving only "Revenge" and the captured Spanish sloop. Teach had at some stage learnt of the offer of a royal pardon and probably confided in Bonnet his willingness to accept it. The pardon was open to all pirates who surrendered on or before 5 September 1718, but contained a caveat stipulating that immunity was offered only against crimes committed before 5 January. Although in theory this left Bonnet and Teach at risk of being hanged for their actions at Charles Town Bar, most authorities could waive such conditions. Teach thought that Governor Charles Eden was a man he could trust, but to make sure, he waited to see what would happen to another captain. Bonnet left immediately on a small sailing boat for Bath Town, where he surrendered to Governor Eden, and received his pardon. He then travelled back to Beaufort Inlet to collect the "Revenge" and the remainder of his crew, intending to sail to Saint Thomas Island to receive a commission. Unfortunately for him, Teach had stripped the vessel of its valuables and provisions, and had marooned its crew; Bonnet set out for revenge, but was unable to find him. He and his crew returned to piracy and were captured on 27 September 1718 at the mouth of the Cape Fear River. All but four were tried and hanged in Charles Town. The author Robert Lee surmised that Teach and Hands intentionally ran the ships aground to reduce the fleet's crew complement, increasing their share of the spoils. During the trial of Bonnet's crew, "Revenge"s boatswain Ignatius Pell testified that "the ship was run ashore and lost, which Thatch [Teach] caused to be done." Lee considers it plausible that Teach let Bonnet in on his plan to accept a pardon from Governor Eden. He suggested that Bonnet do the same, and as war between the Quadruple Alliance of 1718 and Spain was threatening, to consider taking a privateer's commission from England. Lee suggests that Teach also offered Bonnet the return of his ship "Revenge". Konstam (2007) proposes a similar idea, explaining that Teach began to see "Queen Anne's Revenge" as something of a liability; while a pirate fleet was anchored, news of this was sent to neighbouring towns and colonies, and any vessels nearby would delay sailing. It was prudent therefore for Teach not to linger for too long, although wrecking the ship was a somewhat extreme measure. Before sailing northward on his remaining sloop to Ocracoke Inlet, Teach marooned about 25 men on a small sandy island about a league from the mainland. He may have done this to stifle any protest they made, if they guessed their captain's plans. Bonnet rescued them two days later. Teach continued on to Bath, where in June 1718—only days after Bonnet had departed with his pardon—he and his much-reduced crew received their pardon from Governor Eden. He settled in Bath, on the eastern side of Bath Creek at Plum Point, near Eden's home. During July and August he travelled between his base in the town and his sloop off Ocracoke. Johnson's account states that he married the daughter of a local plantation owner, although there is no supporting evidence for this. Eden gave Teach permission to sail to St Thomas to seek a commission as a privateer (a useful way of removing bored and troublesome pirates from the small settlement), and Teach was given official title to his remaining sloop, which he renamed "Adventure". 
By the end of August he had returned to piracy, and in the same month the Governor of Pennsylvania issued a warrant for his arrest, but by then Teach was probably operating in Delaware Bay, some distance away. He took two French ships leaving the Caribbean, moved one crew across to the other, and sailed the remaining ship back to Ocracoke. In September he told Eden that he had found the French ship at sea, deserted. A Vice Admiralty Court was quickly convened, presided over by Tobias Knight and the Collector of Customs. The ship was judged as a derelict found at sea, and of its cargo twenty hogsheads of sugar were awarded to Knight and sixty to Eden; Teach and his crew were given what remained in the vessel's hold. Ocracoke Inlet was Teach's favourite anchorage. It was a perfect vantage point from which to view ships travelling between the various settlements of northeast Carolina, and it was from there that Teach first spotted the approaching ship of Charles Vane, another English pirate. Several months earlier Vane had rejected the pardon brought by Woodes Rogers and escaped the men-of-war the English captain brought with him to Nassau. He had also been pursued by Teach's old commander, Benjamin Hornigold, who was by then a pirate hunter. Teach and Vane spent several nights on the southern tip of Ocracoke Island, accompanied by such notorious figures as Israel Hands, Robert Deal and Calico Jack. As it spread throughout the neighbouring colonies, the news of Teach and Vane's impromptu party worried the Governor of Pennsylvania enough to send out two sloops to capture the pirates. They were unsuccessful, but Governor of Virginia Alexander Spotswood was also concerned that the supposedly retired freebooter and his crew were living in nearby North Carolina. Some of Teach's former crew had already moved into several Virginian seaport towns, prompting Spotswood to issue a proclamation on 10 July, requiring all former pirates to make themselves known to the authorities, to give up their arms and not to travel in groups larger than three. As head of a Crown colony, Spotswood viewed the proprietary colony of North Carolina with contempt; he had little faith in the ability of the Carolinians to control the pirates, who he suspected would be back to their old ways, disrupting Virginian commerce, as soon as their money ran out. Spotswood learnt that William Howard, the former quartermaster of "Queen Anne's Revenge", was in the area, and, believing that he might know of Teach's whereabouts, had the pirate and his two slaves arrested. Spotswood had no legal authority to have pirates tried, and as a result, Howard's attorney, John Holloway, brought charges against Captain Brand, aboard whose ship Howard was imprisoned. He also sued on Howard's behalf for damages of £500, claiming wrongful arrest. Spotswood's council claimed that Teach's presence was a crisis and that under a statute of William III, the governor was entitled to try Howard without a jury. The charges referred to several acts of piracy supposedly committed after the pardon's cut-off date, in "a sloop belonging to ye subjects of the King of Spain", but ignored the fact that they took place outside Spotswood's jurisdiction and in a vessel then legally owned. Another charge cited two attacks, one of which was the capture of a slave ship off Charles Town Bar, from which one of Howard's slaves was presumed to have come.
Howard was sent to await trial before a Court of Vice-Admiralty, on the charge of piracy, but Brand and his colleague, Captain Gordon, refused to serve with Holloway present. Incensed, Holloway had no option but to stand down, and was replaced by the Attorney General of Virginia, John Clayton, whom Spotswood described as "an honester man [than Holloway]". Howard was found guilty and sentenced to be hanged, but was saved by a commission from London, which directed Spotswood to pardon all acts of piracy committed by surrendering pirates before 23 July 1718. Spotswood had obtained from Howard valuable information on Teach's whereabouts, and he planned to send his forces across the border into North Carolina to capture him. He gained the support of two men keen to discredit North Carolina's Governor—Edward Moseley and Colonel Maurice Moore. He also wrote to the Lords of Trade, suggesting that the Crown might benefit financially from Teach's capture. Spotswood personally financed the operation, possibly believing that Teach had fabulous treasures hidden away. He ordered Captains Gordon and Brand of HMS "Pearl" and HMS "Lyme" to travel overland to Bath. Lieutenant Robert Maynard of HMS "Pearl" was given command of two commandeered sloops, to approach the town from the sea. An extra incentive for Teach's capture was the offer of a reward from the Assembly of Virginia, over and above any that might be received from the Crown. Maynard took command of the two armed sloops on 17 November. He was given 57 men—33 from HMS "Pearl" and 24 from HMS "Lyme". Maynard and the detachment from HMS "Pearl" took the larger of the two vessels and named her "Jane"; the rest took "Ranger", commanded by one of Maynard's officers, a Mister Hyde. Some from the two ships' civilian crews remained aboard. They sailed from Kecoughtan, along the James River, on 17 November. The two sloops moved slowly, giving Brand's force time to reach Bath. Brand set out for North Carolina six days later, arriving within three miles of Bath on 23 November. Included in Brand's force were several North Carolinians, including Colonel Moore and Captain Jeremiah Vail, sent to put down any local objection to the presence of foreign soldiers. Moore went into the town to see if Teach was there, reporting back that he was not, but that the pirate was expected at "every minute." Brand then went to Governor Eden's home and informed him of his purpose. The next day, Brand sent two canoes down Pamlico River to Ocracoke Inlet, to see if Teach could be seen. They returned two days later and reported on what eventually transpired. Maynard found the pirates anchored on the inner side of Ocracoke Island, on the evening of 21 November. He had ascertained their position from ships he had stopped along his journey, but, unfamiliar with the local channels and shoals, he decided to wait until the following morning to make his attack. He stopped all traffic from entering the inlet—preventing any warning of his presence—and posted a lookout on both sloops to ensure that Teach could not escape to sea. On the other side of the island, Teach was busy entertaining guests and had not set a lookout. With Israel Hands ashore in Bath with about 24 of "Adventure"s sailors, he also had a much-reduced crew. Johnson (1724) reported that the pirate had "no more than twenty-five men on board" and that he "gave out to all the vessels that he spoke with that he had forty". "Thirteen white and six Negroes" was the number later reported by Brand to the Admiralty.
At daybreak, preceded by a small boat taking soundings, Maynard's two sloops entered the channel. The small craft was quickly spotted by "Adventure" and fired at as soon as it was within range of her guns. While the boat made a quick retreat to the "Jane", Teach cut the "Adventure"s anchor cable. His crew hoisted the sails and the "Adventure" manoeuvred to point her starboard guns toward Maynard's sloops, which were slowly closing the gap. Hyde moved "Ranger" to the port side of "Jane" and the Union flag was unfurled on each ship. "Adventure" then turned toward the beach of Ocracoke Island, heading for a narrow channel. What happened next is uncertain. Johnson claimed that there was an exchange of small-arms fire following which "Adventure" ran aground on a sandbar, and Maynard anchored and then lightened his ship to pass over the obstacle. Another version claimed that "Jane" and "Ranger" ran aground, although Maynard made no mention of this in his log. What is certain though is that "Adventure" turned her guns on the two ships and fired. The broadside was devastating; in an instant, Maynard had lost as much as a third of his forces. About 20 on "Jane" were either wounded or killed and 9 on "Ranger". Hyde was dead and his second and third officers either dead or seriously injured. His sloop was so badly damaged that it played no further role in the attack. Again, contemporary accounts of what happened next are confused, but small-arms fire from "Jane" may have cut "Adventure"s jib sheet, causing her to lose control and run onto the sandbar. In the aftermath of Teach's overwhelming attack, "Jane" and "Ranger" may also have been grounded; the battle would have become a race to see who could float their ship first. The lieutenant had kept many of his men below deck and in anticipation of being boarded told them to prepare for close fighting. Teach watched as the gap between the vessels closed, and ordered his men to be ready. The two vessels contacted one another as the "Adventure"s grappling hooks hit their target and several grenades, made from powder and shot-filled bottles and ignited by fuses, broke across the sloop's deck. As the smoke cleared, Teach led his men aboard, buoyant at the sight of Maynard's apparently empty ship, his men firing at the small group formed by the lieutenant and his men at the stern. The rest of Maynard's men then burst from the hold, shouting and firing. The plan to surprise Teach and his crew worked; the pirates were apparently taken aback at the assault. Teach rallied his men and the two groups fought across the deck, which was already slick with blood from those killed or injured by Teach's broadside. Maynard and Teach fired their flintlocks at each other, then threw them away. Teach drew his cutlass and managed to break Maynard's sword. Against superior training and a slight advantage in numbers, the pirates were pushed back toward the bow, allowing the "Jane"s crew to surround Maynard and Teach, who was by then completely isolated. As Maynard drew back to fire once again, Teach moved in to attack him, but was slashed across the neck by one of Maynard's men. Badly wounded, he was then attacked and killed by several more of Maynard's crew. The remaining pirates quickly surrendered. Those left on the "Adventure" were captured by the "Ranger"s crew, including one who planned to set fire to the powder room and blow up the ship. Varying accounts exist of the battle's list of casualties; Maynard reported that 8 of his men and 12 pirates were killed. 
Brand reported that 10 pirates and 11 of Maynard's men were killed. Spotswood claimed ten pirates and ten of the King's men dead. Maynard later examined Teach's body, noting that it had been shot five times and cut about twenty. He also found several items of correspondence, including a letter to the pirate from Tobias Knight. Teach's corpse was thrown into the inlet and his head was suspended from the bowsprit of Maynard's sloop so that the reward could be collected. Lieutenant Maynard remained at Ocracoke for several more days, making repairs and burying the dead. Teach's loot—sugar, cocoa, indigo and cotton—found "in pirate sloops and ashore in a tent where the sloops lay", was sold at auction along with sugar and cotton found in Tobias Knight's barn, for £2,238. Governor Spotswood used a portion of this to pay for the entire operation. The prize money for capturing Teach was to have been about £400, but it was split between the crews of HMS "Lyme" and HMS "Pearl". As Captain Brand and his troops had not been the ones fighting for their lives, Maynard thought this extremely unfair. He lost much of any support he may have had though when it was discovered that he and his crew had helped themselves to about £90 of Teach's booty. The two companies did not receive their prize money for another four years, and despite his bravery Maynard was not promoted; instead, he faded into obscurity. The remainder of Teach's crew and former associates were found by Brand, in Bath, and were transported to Williamsburg, Virginia, where they were jailed on charges of piracy. Several were black, prompting Spotswood to ask his council what could be done about "the Circumstances of these Negroes to exempt them from undergoing the same Tryal as other pirates." Regardless, the men were tried with their comrades in Williamsburg's Capitol building, under admiralty law, on 12 March 1719. No records of the day's proceedings remain, but 14 of the 16 accused were found guilty. Of the remaining two, one proved that he had partaken of the fight out of necessity, having been on Teach's ship only as a guest at a drinking party the night before, and not as a pirate. The other, Israel Hands, was not present at the fight. He claimed that during a drinking session Teach had shot him in the knee, and that he was still covered by the royal pardon. The remaining pirates were hanged, then left to rot in gibbets along Williamsburg's Capitol Landing Road (known for some time after as "Gallows Road"). Governor Eden was certainly embarrassed by Spotswood's invasion of North Carolina, and Spotswood disavowed himself of any part of the seizure. He defended his actions, writing to Lord Carteret, a shareholder of the Province of Carolina, that he might benefit from the sale of the seized property and reminding the Earl of the number of Virginians who had died to protect his interests. He argued for the secrecy of the operation by suggesting that Eden "could contribute nothing to the Success of the Design", and told Eden that his authority to capture the pirates came from the king. Eden was heavily criticised for his involvement with Teach and was accused of being his accomplice. By criticising Eden, Spotswood intended to bolster the legitimacy of his invasion. Lee (1974) concludes that although Spotswood may have thought that the ends justified the means, he had no legal authority to invade North Carolina, to capture the pirates and to seize and auction their goods. Eden doubtless shared the same view. 
As Spotswood had also accused Tobias Knight of being in league with Teach, on 4 April 1719, Eden had Knight brought in for questioning. Israel Hands had, weeks earlier, testified that Knight had been on board the "Adventure" in August 1718, shortly after Teach had brought a French ship to North Carolina as a prize. Four pirates had testified that with Teach, they had visited Knight's home to give him presents. This testimony and the letter found on Teach's body by Maynard appeared compelling, but Knight conducted his defence with competence. Despite being very sick and close to death, he questioned the reliability of Spotswood's witnesses. He claimed that Israel Hands had talked under duress, and that under North Carolinian law, the other witness, an African, was unable to testify. The sugar, he argued, was stored at his house legally, and Teach had visited him only on business, in his official capacity. The board found Knight innocent of all charges. He died later that year. Eden was annoyed that the accusations against Knight arose during a trial in which he played no part. The goods which Brand seized were officially North Carolinian property and Eden considered him a thief. The argument raged back and forth between the colonies until Eden's death on 17 March 1722. His will named one of Spotswood's opponents, John Holloway, a beneficiary. In the same year, Spotswood, who for years had fought his enemies in the House of Burgesses and the Council, was replaced by Hugh Drysdale, once Robert Walpole was convinced to act. Official views on pirates were sometimes quite different from those held by contemporary authors, who often described their subjects as despicable rogues of the sea. Privateers who became pirates were generally considered by the English government to be reserve naval forces, and were sometimes given active encouragement; as far back as 1581 Francis Drake was knighted by Queen Elizabeth, when he returned to England from a round-the-world expedition with plunder worth an estimated £1,500,000. Royal pardons were regularly issued, usually when England was on the verge of war, and the public's opinion of pirates was often favourable, some considering them akin to patriots. Economist Peter Leeson believes that pirates were generally shrewd businessmen, far removed from the modern, romanticised view of them as barbarians. After Woodes Rogers' 1718 landing at New Providence and his ending of the pirate republic, piracy in the West Indies fell into terminal decline. With no easily accessible outlet to fence their stolen goods, pirates were reduced to a subsistence livelihood, and following almost a century of naval warfare between the British, French and Spanish—during which sailors could find easy employment—lone privateers found themselves outnumbered by the powerful ships employed by the British Empire to defend its merchant fleets. The popularity of the slave trade helped bring to an end the frontier condition of the West Indies, and in these circumstances, piracy was no longer able to flourish as it once did. Since the end of this so-called golden age of piracy, Teach and his exploits have become the stuff of lore, inspiring books, films and even amusement park rides. Much of what is known about him can be sourced to Charles Johnson's "A General Historie of the Robberies and Murders of the Most Notorious Pyrates", published in Britain in 1724.
Johnson was a recognised authority on the pirates of his time, and his descriptions of such figures as Anne Bonny and Mary Read were for years required reading for those interested in the subject. Readers were titillated by his stories and a second edition was quickly published, though author Angus Konstam suspects that Johnson's entry on Blackbeard was "coloured a little to make a more sensational story." "A General Historie", though, is generally considered to be a reliable source. Johnson may have been an assumed name. As Johnson's accounts have been corroborated in personal and official dispatches, Lee (1974) considers that whoever he was, he had some access to official correspondence. Konstam speculates further, suggesting that Johnson may have been the English playwright Charles Johnson, the British publisher Charles Rivington, or the writer Daniel Defoe. In his 1951 work "The Great Days of Piracy", author George Woodbury wrote that Johnson is "obviously a pseudonym", continuing "one cannot help suspecting that he may have been a pirate himself." Despite his infamy, Teach was not the most successful of pirates. Henry Every retired a rich man, and Bartholomew Roberts took an estimated five times the amount Teach stole. Treasure hunters have long busied themselves searching for any trace of his rumoured hoard of gold and silver, but nothing found in the numerous sites explored along the east coast of the US has ever been connected to him. Some tales suggest that pirates often killed a prisoner on the spot where they buried their loot, and Teach is no exception in these stories, but the fact that no finds have come to light is not exceptional; buried pirate treasure is often considered a modern myth for which almost no supporting evidence exists. The available records include nothing to suggest that the burial of treasure was a common practice, except in the imaginations of the writers of fictional accounts such as "Treasure Island". Such hoards would necessitate a wealthy owner, and their supposed existence ignores the command structure of a pirate vessel, in which the crew served for a share of the profit. The only pirate ever known to bury treasure was William Kidd; the only treasure so far recovered from Teach's exploits is that taken from the wreckage of what is presumed to be the "Queen Anne's Revenge", which was found in 1996. As of 2009 more than 250,000 artefacts had been recovered. A selection is on public display at the North Carolina Maritime Museum. Various superstitious tales exist of Teach's ghost. Unexplained lights at sea are often referred to as "Teach's light", and some recitals claim that the notorious pirate now roams the afterlife searching for his head, for fear that his friends, and the Devil, will not recognise him. A North Carolinian tale holds that Teach's skull was used as the basis for a silver drinking chalice; a local judge even claimed to have drunk from it one night in the 1930s. The name of Blackbeard has been attached to many local attractions, such as Charleston's Blackbeard's Cove. His name and persona have also featured heavily in literature. He is the main subject of Matilda Douglas's fictional 1835 work "Blackbeard: A page from the colonial history of Philadelphia". Film renditions of his life include "Blackbeard the Pirate" (1952), "Blackbeard's Ghost" (1968), "Blackbeard: Terror at Sea" (2005) and the 2006 Hallmark Channel miniseries "Blackbeard".
Parallels have also been drawn between Johnson's Blackbeard and the character of Captain Jack Sparrow in the 2003 adventure film "Pirates of the Caribbean: The Curse of the Black Pearl". Blackbeard is also portrayed as a central character in two recent TV series. In the short-lived "Crossbones" (2014) he is played by John Malkovich. The British actor Ray Stevenson plays him in seasons three and four of "Black Sails" (2016–2017). Battle of the Nile The Battle of the Nile (also known as the Battle of Aboukir Bay) was a major naval battle fought between the British Royal Navy and the Navy of the French Republic at Aboukir Bay on the Mediterranean coast off the Nile Delta of Egypt from 1 to 3 August 1798. The battle was the climax of a naval campaign that had ranged across the Mediterranean during the previous three months, as a large French convoy sailed from Toulon to Alexandria carrying an expeditionary force under General Napoleon Bonaparte. The British fleet was led in the battle by Rear-Admiral Sir Horatio Nelson; they decisively defeated the French under Vice-Admiral François-Paul Brueys d'Aigalliers. Bonaparte sought to invade Egypt as the first step in a campaign against British India, part of a greater effort to drive Britain out of the French Revolutionary Wars. As Bonaparte's fleet crossed the Mediterranean, it was pursued by a British force under Nelson, who had been sent from the British fleet in the Tagus to learn the purpose of the French expedition and to defeat it. He chased the French for more than two months, on several occasions missing them only by a matter of hours. Bonaparte was aware of Nelson's pursuit and enforced absolute secrecy about his destination. He was able to capture Malta and then land in Egypt without interception by the British naval forces. With the French army ashore, the French fleet anchored in Aboukir Bay, northeast of Alexandria. The French commander, Vice-Admiral Brueys, believed that he had established a formidable defensive position. The British fleet arrived off Egypt on 1 August and discovered Brueys's dispositions, and Nelson ordered an immediate attack. His ships advanced on the French line and split into two divisions as they approached. One cut across the head of the line and passed between the anchored French and the shore, while the other engaged the seaward side of the French fleet. Trapped in a crossfire, the leading French warships were battered into surrender during a fierce three-hour battle, while the centre succeeded in repelling the initial British attack. As British reinforcements arrived, the centre came under renewed assault and, at 22:00, the French flagship "Orient" exploded. With Brueys dead and his vanguard and centre defeated, the rear division of the French fleet attempted to break out of the bay, but only two ships of the line and two frigates escaped from a total of 17 ships engaged. The battle reversed the strategic situation between the two nations' forces in the Mediterranean and entrenched the Royal Navy in the dominant position that it retained for the rest of the war. It also encouraged other European countries to turn against France, and was a factor in the outbreak of the War of the Second Coalition. Bonaparte's army was trapped in Egypt, and Royal Navy dominance off the Syrian coast contributed significantly to the French defeat at the Siege of Acre in 1799, which preceded Bonaparte's return to Europe.
Nelson had been wounded in the battle, and he was proclaimed a hero across Europe and was subsequently made Baron Nelson—although he was privately dissatisfied with his rewards. His captains were also highly praised and went on to form the nucleus of the legendary Nelson's Band of Brothers. The legend of the battle has remained prominent in the popular consciousness, with perhaps the best-known representation being Felicia Hemans' 1826 poem "Casabianca". Napoleon Bonaparte's victories in northern Italy over the Austrian Empire helped secure victory for the French in the War of the First Coalition in 1797, and Great Britain remained the only major European power still at war with the French Republic. The French Directory investigated a number of strategic options to counter British opposition, including projected invasions of Ireland and Britain and the expansion of the French Navy to challenge the Royal Navy at sea. Despite significant efforts, British control of Northern European waters rendered these ambitions impractical in the short term, and the Royal Navy remained firmly in control of the Atlantic Ocean. However, the French navy was dominant in the Mediterranean, following the withdrawal of the British fleet after the outbreak of war between Britain and Spain in 1796. This allowed Bonaparte to propose an invasion of Egypt as an alternative to confronting Britain directly, believing that the British would be too distracted by an imminent Irish uprising to intervene in the Mediterranean. Bonaparte believed that, by establishing a permanent presence in Egypt (nominally part of the neutral Ottoman Empire), the French would obtain a staging point for future operations against British India, possibly in conjunction with the Tipu Sultan of Seringapatam, that might successfully drive the British out of the war. The campaign would sever the chain of communication that connected Britain with India, an essential part of the British Empire whose trade generated the wealth that Britain required to prosecute the war successfully. The French Directory agreed with Bonaparte's plans, although a major factor in their decision was a desire to see the politically ambitious Bonaparte and the fiercely loyal veterans of his Italian campaigns travel as far from France as possible. During the spring of 1798, Bonaparte assembled more than 35,000 soldiers in Mediterranean France and Italy and developed a powerful fleet at Toulon. He also formed the "Commission des Sciences et des Arts", a body of scientists and engineers intended to establish a French colony in Egypt. Napoleon kept the destination of the expedition top secret—most of the army's officers did not know of its target, and Bonaparte did not publicly reveal his goal until the first stage of the expedition was complete. Bonaparte's armada sailed from Toulon on 19 May 1798, making rapid progress through the Ligurian Sea and collecting more ships at Genoa, before sailing southwards along the Sardinian coast and passing Sicily on 7 June. On 9 June, the fleet arrived off Malta, then under the ownership of the Knights of St. John of Jerusalem, ruled by Grand Master Ferdinand von Hompesch zu Bolheim. Bonaparte demanded that his fleet be permitted entry to the fortified harbour of Valletta. When the Knights refused, the French general responded by ordering a large scale invasion of the Maltese Islands, overrunning the defenders after 24 hours of skirmishing. 
The Knights formally surrendered on 12 June and, in exchange for substantial financial compensation, handed the islands and all of their resources over to Bonaparte, including the extensive property of the Roman Catholic Church on Malta. Within a week, Bonaparte had resupplied his ships, and on 19 June, his fleet departed for Alexandria in the direction of Crete, leaving 4,000 men at Valletta under General Claude-Henri Vaubois to ensure French control of the islands. While Bonaparte was sailing to Malta, the Royal Navy re-entered the Mediterranean for the first time in more than a year. Alarmed by reports of French preparations on the Mediterranean coast, Lord Spencer at the Admiralty sent a message to Vice-Admiral Earl St. Vincent, commander of the Mediterranean Fleet based in the Tagus River, to despatch a squadron to investigate. This squadron, consisting of three ships of the line and three frigates, was entrusted to Rear-Admiral Sir Horatio Nelson. Nelson was a highly experienced officer who had been blinded in one eye during fighting in Corsica in 1794 and subsequently commended for his capture of two Spanish ships of the line at the Battle of Cape St. Vincent in February 1797. In July 1797, he lost an arm at the Battle of Santa Cruz de Tenerife and had been forced to return to Britain to recuperate. Returning to the fleet at the Tagus in late April 1798, he was ordered to collect the squadron stationed at Gibraltar and sail for the Ligurian Sea. On 21 May, as Nelson's squadron approached Toulon, it was struck by a fierce gale and Nelson's flagship, HMS "Vanguard", lost its topmasts and was almost wrecked on the Corsican coast. The remainder of the squadron was scattered. The ships of the line sheltered at San Pietro Island off Sardinia; the frigates were blown to the west and failed to return. On 7 June, following hasty repairs to his flagship, a fleet consisting of ten ships of the line and a fourth-rate joined Nelson off Toulon. The fleet, under the command of Captain Thomas Troubridge, had been sent by Earl St. Vincent to reinforce Nelson, with orders that he was to pursue and intercept the Toulon convoy. Although he now had enough ships to challenge the French fleet, Nelson suffered two great disadvantages: He had no intelligence regarding the destination of the French, and no frigates to scout ahead of his force. Striking southwards in the hope of collecting information about French movements, Nelson's ships stopped at Elba and Naples, where the British ambassador, Sir William Hamilton, reported that the French fleet had passed Sicily headed in the direction of Malta. Despite pleas from Nelson and Hamilton, King Ferdinand of Naples refused to lend his frigates to the British fleet, fearing French reprisals. On 22 June, a brig sailing from Ragusa brought Nelson the news that the French had sailed eastwards from Malta on 16 June. After conferring with his captains, the admiral decided that the French target must be Egypt and set off in pursuit. Incorrectly believing the French to be five days ahead rather than two, Nelson insisted on a direct route to Alexandria without deviation. On the evening of 22 June, Nelson's fleet passed the French in the darkness, overtaking the slow invasion convoy without realising how close they were to their target. Making rapid time on a direct route, Nelson reached Alexandria on 28 June and discovered that the French were not there. 
After a meeting with the suspicious Ottoman commander, Sayyid Muhammad Kurayyim, Nelson ordered the British fleet northwards, reaching the coast of Anatolia on 4 July and turning westwards back towards Sicily. Nelson had missed the French by less than a day—the scouts of the French fleet arrived off Alexandria in the evening of 29 June. Concerned by his near encounter with Nelson, Bonaparte ordered an immediate invasion, his troops coming ashore in a poorly managed amphibious operation in which at least 20 drowned. Marching along the coast, the French army stormed Alexandria and captured the city, after which Bonaparte led the main force of his army inland. He instructed his naval commander, Vice-Admiral François-Paul Brueys D'Aigalliers, to anchor in Alexandria harbour, but naval surveyors reported that the channel into the harbour was too shallow and narrow for the larger ships of the French fleet. As a result, the French selected an alternative anchorage at Aboukir Bay, northeast of Alexandria. Nelson's fleet reached Syracuse in Sicily on 19 July and took on essential supplies. There the admiral wrote letters describing the events of the previous months: "It is an old saying, 'the Devil's children have the Devil's luck.' I cannot find, or at this moment learn, beyond vague conjecture where the French fleet are gone to. All my ill fortune, hitherto, has proceeded from want of frigates." Meanwhile, the French were securing Egypt, winning the Battle of the Pyramids. By 24 July, the British fleet was resupplied and, having determined that the French must be somewhere in the Eastern Mediterranean, Nelson sailed again in the direction of the Morea. On 28 July, at Coron, Nelson finally obtained intelligence describing the French attack on Egypt and turned south across the Mediterranean. His scouts, HMS "Alexander" and HMS "Swiftsure", sighted the French transport fleet at Alexandria on the afternoon of 1 August. When Alexandria harbour had proved inadequate for his fleet, Brueys had gathered his captains and discussed their options. Bonaparte had ordered the fleet to anchor in Aboukir Bay, a shallow and exposed anchorage, but had supplemented the orders with the suggestion that, if Aboukir Bay was too dangerous, Brueys could sail north to Corfu, leaving only the transports and a handful of lighter warships at Alexandria. Brueys refused, in the belief that his squadron could provide essential support to the French army on shore, and called his captains aboard his 120-gun flagship "Orient" to discuss their response should Nelson discover the fleet in its anchorage. Despite vocal opposition from Contre-amiral Armand Blanquet, who insisted that the fleet would be best able to respond in open water, the rest of the captains agreed that anchoring in a line of battle inside the bay presented the strongest tactic for confronting Nelson. It is possible that Bonaparte envisaged Aboukir Bay as a temporary anchorage: on 27 July, he expressed the expectation that Brueys had already transferred his ships to Alexandria, and three days later, he issued orders for the fleet to make for Corfu in preparation for naval operations against the Ottoman territories in the Balkans, although Bedouin partisans intercepted and killed the courier carrying the instructions. Aboukir Bay is a broad coastal indentation stretching from the village of Abu Qir in the west to the town of Rosetta in the east, where one of the mouths of the River Nile empties into the Mediterranean.
In 1798, the bay was protected at its western end by extensive rocky shoals which ran into the bay from a promontory guarded by Aboukir Castle. A small fort situated on an island among the rocks protected the shoals. The fort was garrisoned by French soldiers and armed with at least four cannon and two heavy mortars. Brueys had augmented the fort with his bomb vessels and gunboats, anchored among the rocks to the west of the island in a position to give support to the head of the French line. Further shoals ran unevenly to the south of the island and extended across the bay in a rough semicircle some distance from the shore. These shoals were too shallow to permit the passage of larger warships, and so Brueys ordered his thirteen ships of the line to form up in a line of battle following the northeastern edge of the shoals to the south of the island, a position that allowed the ships to disembark supplies from their port sides while covering the landings with their starboard batteries. Orders were issued for each ship to attach strong cables to the bow and stern of their neighbours, which would effectively turn the line into a long battery forming a theoretically impregnable barrier. Brueys positioned a second, inner line of four frigates to the west of the main line, roughly halfway between the line and the shoal. The van of the French line was led by "Guerrier", positioned southeast of Aboukir Island, a short distance from the edge of the shoals that surrounded the island. The line stretched southeast, with the centre bowed seawards away from the shoal. The French ships were spaced at intervals of about 160 yards, making the whole line more than a mile long, with the flagship "Orient" at the centre and two large 80-gun ships anchored on either side. The rear division of the line was under the command of Contre-amiral Pierre-Charles Villeneuve in "Guillaume Tell". In deploying his ships in this way, Brueys hoped that the British would be forced by the shoals to attack his strong centre and rear, allowing his van to use the prevailing northeasterly wind to counterattack the British once they were engaged. However, he had made a serious misjudgement: he had left enough room between "Guerrier" and the shoals for an enemy ship to cut across the head of the French line and proceed between the shoals and the French ships, allowing the unsupported vanguard to be caught in a crossfire by two divisions of enemy ships. Compounding this error, the French only prepared their ships for battle on their starboard (seaward) sides, from which they expected the attack would have to come; their landward port sides were unprepared. The port side gun ports were closed, and the decks on that side were uncleared, with various stored items blocking access to the guns. Brueys' dispositions had a second significant flaw: The 160-yard gaps between ships were large enough for a British ship to push through and break the French line. Furthermore, not all of the French captains had followed Brueys' orders to attach cables to their neighbours' bow and stern, which would have prevented such a manoeuvre. The problem was exacerbated by orders to only anchor at the bow, which allowed the ships to swing with the wind and widened the gaps. It also created areas within the French line not covered by the broadside of any ship. British vessels could anchor in those spaces and engage the French without reply. In addition, the deployment of Brueys' fleet prevented the rear from effectively supporting the van due to the prevailing winds.
A more pressing problem for Brueys was a lack of food and water for the fleet: Bonaparte had unloaded almost all of the provisions carried aboard and no supplies were reaching the ships from the shore. To remedy this, Brueys sent foraging parties of 25 men from each ship along the coast to requisition food, dig wells, and collect water. Constant attacks by Bedouin partisans, however, required escorts of heavily armed guards for each party. Hence, up to a third of the fleet's sailors were away from their ships at any one time. Brueys wrote a letter describing the situation to Minister of Marine Étienne Eustache Bruix, reporting that "Our crews are weak, both in number and quality. Our rigging, in general, out of repair, and I am sure it requires no little courage to undertake the management of a fleet furnished with such tools." Although initially disappointed that the main French fleet was not at Alexandria, Nelson knew from the presence of the transports that they must be nearby. At 14:00 on 1 August, lookouts on HMS "Zealous" reported the French anchored in Aboukir Bay, its signal lieutenant just beating the lieutenant on HMS "Goliath" with the signal, but inaccurately describing 16 French ships of the line instead of 13. At the same time, French lookouts on "Heureux", the ninth ship in the French line, sighted the British fleet approximately nine nautical miles off the mouth of Aboukir Bay. The French initially reported just 11 British ships – "Swiftsure" and "Alexander" were still returning from their scouting operations at Alexandria, and so were to the west of the main fleet, out of sight. Troubridge's ship, HMS "Culloden", was also some distance from the main body, towing a captured merchant ship. At the sight of the French, Troubridge abandoned the vessel and made strenuous efforts to rejoin Nelson. Due to the need for so many sailors to work onshore, Brueys had not deployed any of his lighter warships as scouts, which left him unable to react swiftly to the sudden appearance of the British. As his ships readied for action, Brueys ordered his captains to gather for a conference on "Orient" and hastily recalled his shore parties, although most had still not returned by the start of the battle. To replace them, large numbers of men were taken out of the frigates and distributed among the ships of the line. Brueys also hoped to lure the British fleet onto the shoals at Aboukir Island, sending the brigs "Alerte" and "Railleur" to act as decoys in the shallow waters. By 16:00, "Alexander" and "Swiftsure" were also in sight, although some distance from the main British fleet. Brueys gave orders to abandon the plan to remain at anchor and instead for his line to set sail. Blanquet protested the order on the grounds that there were not enough men aboard the French ships to both sail the ships and man the guns. Nelson gave orders for his leading ships to slow down, to allow the British fleet to approach in a more organised formation. This convinced Brueys that rather than risk an evening battle in confined waters, the British were planning to wait for the following day. He rescinded his earlier order to sail. Brueys may have been hoping that the delay would allow him to slip past the British during the night and thus follow Bonaparte's orders not to engage the British fleet directly if he could avoid it.
Nelson ordered the fleet to slow down at 16:00 to allow his ships to rig "springs" on their anchor cables, a system of attaching the bow anchor that increased stability and allowed his ships to swing their broadsides to face an enemy while stationary. It also increased manoeuvrability and therefore reduced the risk of coming under raking fire. Nelson's plan, shaped through discussion with his senior captains during the return voyage to Alexandria, was to advance on the French and pass down the seaward side of the van and centre of the French line, so that each French ship would face two British ships and the massive "Orient" would be fighting against three. The direction of the wind meant that the French rear division would be unable to join the battle easily and would be cut off from the front portions of the line. To ensure that in the smoke and confusion of a night battle his ships would not accidentally open fire on one another, Nelson ordered that each ship prepare four horizontal lights at the head of their mizzen mast and hoist an illuminated White Ensign, which was different enough from the French tricolour that it would not be mistaken in poor visibility, reducing the risk that British ships might fire on one another in the darkness. As his ship was readied for battle, Nelson held a final dinner with "Vanguard"'s officers, announcing as he rose: "Before this time tomorrow I shall have gained a peerage or Westminster Abbey," in reference to the rewards of victory or the traditional burial place of British military heroes. Shortly after the French order to set sail was abandoned, the British fleet began rapidly approaching once more. Brueys, now expecting to come under attack that night, ordered each of his ships to place springs on their anchor cables and prepare for action. He sent the "Alerte" ahead, which passed close to the leading British ships and then steered sharply to the west over the shoal, in the hope that the ships of the line might follow and become grounded. None of Nelson's captains fell for the ruse and the British fleet continued undeterred. At 17:30, Nelson hailed one of his two leading ships, HMS "Zealous" under Captain Samuel Hood, which had been racing "Goliath" to be the first to fire on the French. The admiral ordered Hood to establish the safest course into the harbour. The British had no charts of the depth or shape of the bay, except a rough sketch map "Swiftsure" had obtained from a merchant captain, an inaccurate British atlas on "Zealous", and a 35-year-old French map aboard "Goliath". Hood replied that he would take careful soundings as he advanced to test the depth of the water, and that, "If you will allow the honour of leading you into battle, I will keep the lead going." Shortly afterwards, Nelson paused to speak with the brig HMS "Mutine", whose commander, Lieutenant Thomas Hardy, had seized some maritime pilots from a small Alexandrine vessel. As "Vanguard" came to a stop, the following ships slowed. This caused a gap to open up between "Zealous" and "Goliath" and the rest of the fleet. To counter this effect, Nelson ordered HMS "Theseus" under Captain Ralph Miller to pass his flagship and join "Zealous" and "Goliath" in the vanguard. By 18:00, the British fleet was again under full sail, "Vanguard" sixth in the line of ten ships as "Culloden" trailed behind to the north and "Alexander" and "Swiftsure" hastened to catch up to the west. 
Following the rapid change from a loose formation to a rigid line of battle both fleets raised their colours; each British ship added additional Union Flags in its rigging in case its main flag was shot away. At 18:20, as "Goliath" and "Zealous" rapidly bore down on them, the leading French ships "Guerrier" and "Conquérant" opened fire. Ten minutes after the French opened fire "Goliath", ignoring fire from the fort to starboard and from "Guerrier" to port, most of which was too high to trouble the ship, crossed the head of the French line. Captain Thomas Foley had noticed as he approached that there was an unexpected gap between "Guerrier" and the shallow water of the shoal. On his own initiative, Foley decided to exploit this tactical error and changed his angle of approach to sail through the gap. As the bow of "Guerrier" came within range, "Goliath" opened fire, inflicting severe damage with a double-shotted raking broadside as the British ship turned to port and passed down the unprepared port side of "Guerrier." Foley's Royal Marines and a company of Austrian grenadiers joined the attack, firing their muskets. Foley had intended to anchor alongside the French ship and engage it closely, but his anchor took too long to descend and his ship passed "Guerrier" entirely. "Goliath" eventually stopped close to the bow of "Conquérant", opening fire on the new opponent and using the unengaged starboard guns to exchange occasional shots with the frigate "Sérieuse" and bomb vessel "Hercule," which were anchored inshore of the battle line. Foley's attack was followed by Hood in "Zealous", who also crossed the French line and successfully anchored next to "Guerrier" in the space Foley had intended, engaging the lead ship's bow from close range. Within five minutes "Guerrier"'s foremast had fallen, to cheers from the crews of the approaching British ships. The speed of the British advance took the French captains by surprise; they were still aboard "Orient" in conference with the admiral when the firing started. Hastily launching their boats, they returned to their vessels. Captain Jean-François-Timothée Trullet of "Guerrier" shouted orders from his barge for his men to return fire on "Zealous". The third British ship into action was HMS "Orion" under Captain Sir James Saumarez, which rounded the engagement at the head of the battle line and passed between the French main line and the frigates that lay closer inshore. As he did so, the frigate "Sérieuse" opened fire on "Orion", wounding two men. The convention in naval warfare of the time was that ships of the line did not attack frigates when there were ships of equal size to engage, but in firing first French Captain Claude-Jean Martin had negated the rule. Saumarez waited until the frigate was at close range before replying. "Orion" needed just one broadside to reduce the frigate to a wreck, and Martin's disabled ship drifted away over the shoal. During the delay this detour caused, two other British ships joined the battle: Theseus, which had been disguised as a first-rate ship, followed Foley's track across "Guerrier"'s bow. Miller steered his ship through the middle of the melee between the anchored British and French ships until he encountered the third French ship, "Spartiate". Anchoring to port, Miller's ship opened fire at close range. HMS "Audacious" under Captain Davidge Gould crossed the French line between "Guerrier" and "Conquérant", anchoring between the ships and raking them both. 
"Orion" then rejoined the action further south than intended, firing on the fifth French ship, "Peuple Souverain," and Admiral Blanquet's flagship, "Franklin". The next three British ships, "Vanguard" in the lead followed by HMS "Minotaur" and HMS "Defence", remained in line of battle formation and anchored on the starboard side of the French line at 18:40. Nelson focused his flagship's fire on "Spartiate", while Captain Thomas Louis in "Minotaur" attacked the unengaged "Aquilon" and Captain John Peyton in "Defence" joined the attack on "Peuple Souverain". With the French vanguard now heavily outnumbered, the following British ships, HMS "Bellerophon" and HMS "Majestic""," passed by the melee and advanced on the so far unengaged French centre. Both ships were soon fighting enemies much more powerful than they and began to take severe damage. Captain Henry Darby on "Bellerophon" missed his intended anchor near "Franklin" and instead found his ship underneath the main battery of the French flagship. Captain George Blagdon Westcott on "Majestic" also missed his station and almost collided with "Heureux", coming under heavy fire from "Tonnant". Unable to stop in time, Westcott's jib boom became entangled with "Tonnant"s shroud. The French suffered too, Admiral Brueys on "Orient" was severely wounded in the face and hand by flying debris during the opening exchange of fire with "Bellerophon". The final ship of the British line, "Culloden" under Troubridge, sailed too close to Aboukir Island in the growing darkness and became stuck fast on the shoal. Despite strenuous efforts from the "Culloden"'s boats, the brig "Mutine" and the 50-gun HMS "Leander" under Captain Thomas Thompson, the ship of the line could not be moved, and the waves drove "Culloden" further onto the shoal, inflicting severe damage to the ship's hull. At 19:00 the identifying lights in the mizzenmasts of the British fleet were lit. By this time, "Guerrier" had been completely dismasted and heavily battered. "Zealous" by contrast was barely touched: Hood had situated "Zealous" outside the arc of most of the French ship's broadsides, and in any case "Guerrier" was not prepared for an engagement on both sides simultaneously, with its port guns blocked by stores. Although their ship was a wreck, the crew of "Guerrier" refused to surrender, continuing to fire the few functional guns whenever possible despite heavy answering fire from "Zealous". In addition to his cannon fire, Hood called up his marines and ordered them to fire volleys of musket shot at the deck of the French ship, driving the crew out of sight but still failing to secure the surrender from Captain Trullet. It was not until 21:00, when Hood sent a small boat to "Guerrier" with a boarding party, that the French ship finally surrendered. "Conquérant" was defeated more rapidly, after heavy broadsides from passing British ships and the close attentions of "Audacious" and "Goliath" brought down all three masts before 19:00. With his ship immobile and badly damaged, the mortally wounded Captain Etienne Dalbarade struck his colours and a boarding party seized control. Unlike "Zealous", these British ships suffered relatively severe damage in the engagement. "Goliath" lost most of its rigging, suffered damage to all three masts and suffered more than 60 casualties. With his opponents defeated, Captain Gould on "Audacious" used the spring on his cable to transfer fire to "Spartiate", the next French ship in line. 
To the west of the battle the battered "Sérieuse" sank over the shoal. Its masts protruded from the water as survivors scrambled into boats and rowed for the shore. The transfer of "Audacious"'s broadside to "Spartiate" meant that Captain Maurice-Julien Emeriau now faced three opponents. Within minutes all three of his ship's masts had fallen, but the battle around "Spartiate" continued until 21:00, when the badly wounded Emeriau ordered his colours struck. Although "Spartiate" was outnumbered, it had been supported by the next in line, "Aquilon", which was the only ship of the French van squadron fighting a single opponent, "Minotaur". Captain Antoine René Thévenard used the spring on his anchor cable to angle his broadside into a raking position across the bow of Nelson's flagship, which consequently suffered more than 100 casualties, including the admiral. At approximately 20:30, an iron splinter fired in a langrage shot from "Spartiate" struck Nelson over his blinded right eye. The wound caused a flap of skin to fall across his face, rendering him temporarily completely blind. Nelson collapsed into the arms of Captain Edward Berry and was carried below. Certain that his wound was fatal, he cried out "I am killed, remember me to my wife", and called for his chaplain, Stephen Comyn. The wound was immediately inspected by "Vanguard"'s surgeon Michael Jefferson, who informed the admiral that it was a simple flesh wound and stitched the skin together. Nelson subsequently ignored Jefferson's instructions to remain inactive, returning to the quarterdeck shortly before the explosion on "Orient" to oversee the closing stages of the battle. Although Thévenard's manoeuvre was successful, it placed his own bow under "Minotaur"'s guns and by 21:25 the French ship was dismasted and battered, Captain Thévenard killed and his junior officers forced to surrender. With his opponent defeated, Captain Thomas Louis then took "Minotaur" south to join the attack on "Franklin". "Defence" and "Orion" attacked the fifth French ship, "Peuple Souverain", from either side and the ship rapidly lost the fore and main masts. Aboard the "Orion", a wooden block was smashed off one of the ship's masts, killing two men before wounding Captain Saumarez in the thigh. On "Peuple Souverain", Captain Pierre-Paul Raccord was badly wounded and ordered his ship's anchor cable cut in an effort to escape the bombardment. "Peuple Souverain" drifted south towards the flagship "Orient", which mistakenly opened fire on the darkened vessel. "Orion" and "Defence" were unable to immediately pursue. "Defence" had lost its fore topmast and an improvised fireship that drifted through the battle narrowly missed "Orion". The origin of this vessel, an abandoned and burning ship's boat laden with highly flammable material, is uncertain, but it may have been launched from "Guerrier" as the battle began. "Peuple Souverain" anchored not far from "Orient", but took no further part in the fighting. The wrecked ship surrendered during the night. "Franklin" remained in combat, but Blanquet had suffered a severe head wound and Captain Gillet had been carried below unconscious with severe wounds. Shortly afterwards, a fire broke out on the quarterdeck after an arms locker exploded, which was eventually extinguished with difficulty by the crew. To the south, HMS "Bellerophon" was in serious trouble as the huge broadside of "Orient" pounded the ship. At 19:50 the mizzenmast and main mast both collapsed and fires broke out simultaneously at several points.
Although the blazes were extinguished, the ship had suffered more than 200 casualties. Captain Darby recognised that his position was untenable and ordered the anchor cables cut at 20:20. The battered ship drifted away from the battle under continued fire from "Tonnant" as the foremast collapsed as well. "Orient" had also suffered significant damage and Admiral Brueys had been struck in the midriff by a cannonball that almost cut him in half. He died fifteen minutes later, remaining on deck and refusing to be carried below. "Orient"'s captain, Luc-Julien-Joseph Casabianca, was also wounded, struck in the face by flying debris and knocked unconscious, while his twelve-year-old son had a leg torn off by a cannonball as he stood beside his father. The most southerly British ship, "Majestic", had become briefly entangled with the 80-gun "Tonnant", and in the resulting battle, suffered heavy casualties. Captain George Blagdon Westcott was among the dead, killed by French musket fire. Lieutenant Robert Cuthbert assumed command and successfully disentangled his ship, allowing the badly damaged "Majestic" to drift further southwards so that by 20:30 it was stationed between "Tonnant" and the next in line, "Heureux", engaging both. To support the centre, Captain Thompson of "Leander" abandoned the futile efforts to drag the stranded "Culloden" off the shoal and sailed down the embattled French line, entering the gap created by the drifting "Peuple Souverain" and opening a fierce raking fire on "Franklin" and "Orient". While the battle raged in the bay, the two straggling British ships made strenuous efforts to join the engagement, focusing on the flashes of gunfire in the darkness. Warned away from the Aboukir shoals by the grounded "Culloden", Captain Benjamin Hallowell in "Swiftsure" passed the melee at the head of the line and aimed his ship at the French centre. Shortly after 20:00, a dismasted hulk was spotted drifting in front of "Swiftsure" and Hallowell initially ordered his men to fire before rescinding the order, concerned for the identity of the strange vessel. Hailing the battered ship, Hallowell received the reply "Bellerophon, going out of action disabled." Relieved that he had not accidentally attacked one of his own ships in the darkness, Hallowell pulled up between "Orient" and "Franklin" and opened fire on them both. "Alexander", the final unengaged British ship, which had followed "Swiftsure", pulled up close to "Tonnant", which had begun to drift away from the embattled French flagship. Captain Alexander Ball then joined the attack on "Orient". At 21:00, the British observed a fire on the lower decks of the "Orient", the French flagship. Identifying the danger this posed to the "Orient", Captain Hallowell directed his gun crews to fire their guns directly into the blaze. Sustained British gun fire spread the flames throughout the ship's stern and prevented all efforts to extinguish them. Within minutes the fire had ascended the rigging and set the vast sails alight. The nearest British ships, "Swiftsure", "Alexander", and "Orion", all stopped firing, closed their gunports, and began edging away from the burning ship in anticipation of the detonation of the enormous ammunition supplies stored on board. In addition, they took crews away from the guns to form fire parties and to soak the sails and decks in seawater to help contain any resulting fires. 
Likewise the French ships "Tonnant", "Heureux", and "Mercure" all cut their anchor cables and drifted southwards away from the burning ship. At 22:00 the fire reached the magazines, and the "Orient" was destroyed by a massive explosion. The concussion of the blast was powerful enough to rip open the seams of the nearest ships, and flaming wreckage landed in a huge circle, much of it flying directly over the surrounding ships into the sea beyond. Falling wreckage started fires on "Swiftsure", "Alexander", and "Franklin", although in each case teams of sailors with water buckets succeeded in extinguishing the flames, despite a secondary explosion on "Franklin". It has never been firmly established how the fire on "Orient" broke out, but one common account is that jars of oil and paint had been left on the poop deck, instead of being properly stowed after painting of the ship's hull had been completed shortly before the battle. Burning wadding from one of the British ships is believed to have floated onto the poop deck and ignited the paint. The fire rapidly spread through the admiral's cabin and into a ready magazine that stored carcass ammunition, which was designed to burn more fiercely in water than in air. Alternatively, Fleet Captain Honoré Ganteaume later reported the cause as an explosion on the quarterdeck, preceded by a series of minor fires on the main deck among the ship's boats. Whatever its origin, the fire spread rapidly through the ship's rigging, unchecked by the fire pumps aboard, which had been smashed by British shot. A second blaze then began at the bow, trapping hundreds of sailors in the ship's waist. Subsequent archaeological investigation found debris scattered over a wide area of seabed and evidence that the ship was wracked by two huge explosions one after the other. Hundreds of men dived into the sea to escape the flames, but fewer than 100 survived the blast. British boats picked up approximately 70 survivors, including the wounded staff officer Léonard-Bernard Motard. A few others, including Ganteaume, managed to reach the shore on rafts. The remainder of the crew, numbering more than 1,000 men, were killed, including Captain Casabianca and his son, Giocante. For ten minutes after the explosion there was no firing; sailors from both sides were either too shocked by the blast or desperately extinguishing fires aboard their own ships to continue the fight. During the lull, Nelson gave orders that boats be sent to pull survivors from the water around the remains of "Orient". At 22:10, "Franklin" restarted the engagement by firing on "Swiftsure". Isolated and battered, Blanquet's ship was soon dismasted and the admiral, suffering a severe head wound, was forced to surrender by the combined firepower of "Swiftsure" and "Defence". More than half of "Franklin"'s crew had been killed or wounded. By midnight only "Tonnant" remained engaged, as Commodore Aristide Aubert Du Petit Thouars continued his fight with "Majestic" and fired on "Swiftsure" when the British ship moved within range. By 03:00, after more than three hours of close quarter combat, "Majestic" had lost its main and mizzen masts while "Tonnant" was a dismasted hulk. Although Captain Du Petit Thouars had lost both legs and an arm he remained in command, insisting on having the tricolour nailed to the mast to prevent it from being struck and giving orders from his position propped up on deck in a bucket of wheat.
Under his guidance, the battered "Tonnant" gradually drifted southwards away from the action to join the southern division under Villeneuve, who failed to bring these ships into effective action. Throughout the engagement the French rear had kept up an arbitrary fire on the battling ships ahead. The only noticeable effect was the smashing of "Timoléon"'s rudder by misdirected fire from the neighbouring "Généreux". As the sun rose at 04:00 on 2 August, firing broke out once again between the French southern division of "Guillaume Tell", "Tonnant", "Généreux" and "Timoléon" and the battered "Alexander" and "Majestic". Although briefly outmatched, the British ships were soon joined by "Goliath" and "Theseus". As Captain Miller manoeuvred his ship into position, "Theseus" briefly came under fire from the frigate "Artémise". Miller turned his ship towards "Artémise", but Captain Pierre-Jean Standelet struck his flag and ordered his men to abandon the frigate. Miller sent a boat under Lieutenant William Hoste to take possession of the empty vessel, but Standelet had set fire to his ship as he left and "Artémise" blew up shortly afterwards. The surviving French ships of the line, covering their retreat with gunfire, gradually pulled to the east away from the shore at 06:00. "Zealous" pursued, and was able to prevent the frigate "Justice" from boarding "Bellerophon", which was anchored at the southern point of the bay undergoing hasty repairs. Two other French ships still flew the tricolour, but neither was in a position to either retreat or fight. When "Heureux" and "Mercure" had cut their anchor cables to escape the exploding "Orient", their crews had panicked and neither captain (both of whom were wounded) had managed to regain control of his ship. As a result, both vessels had drifted onto the shoal. "Alexander", "Goliath", "Theseus" and "Leander" attacked the stranded and defenceless ships, and both surrendered within minutes. The distractions provided by "Heureux", "Mercure" and "Justice" allowed Villeneuve to bring most of the surviving French ships to the mouth of the bay at 11:00. On the dismasted "Tonnant", Commodore Du Petit Thouars was now dead from his wounds and thrown overboard at his own request. As the ship was unable to make the required speed it was driven ashore by its crew. "Timoléon" was too far south to escape with Villeneuve and, in attempting to join the survivors, had also grounded on the shoal. The force of the impact dislodged the ship's foremast. The remaining French vessels – the ships of the line "Guillaume Tell" and "Généreux" and the frigates "Justice" and "Diane" – formed up and stood out to sea, pursued by "Zealous". Despite strenuous efforts, Captain Hood's isolated ship came under heavy fire and was unable to cut off the trailing "Justice" as the French survivors escaped seawards. "Zealous" was struck by a number of French shot and lost one man killed. For the remainder of 2 August Nelson's ships made improvised repairs and boarded and consolidated their prizes. "Culloden" especially required assistance. Troubridge, having finally dragged his ship off the shoal at 02:00, found that he had lost his rudder and was taking on a considerable amount of water every hour. Emergency repairs to the hull and fashioning a replacement rudder from a spare topmast took most of the next two days. On the morning of 3 August, Nelson sent "Theseus" and "Leander" to force the surrender of the grounded "Tonnant" and "Timoléon".
The "Tonnant", its decks crowded with 1,600 survivors from other French vessels, surrendered as the British ships approached while "Timoléon" was set on fire by its remaining crew who then escaped to the shore in small boats. "Timoléon" exploded shortly after midday, the eleventh and final French ship of the line destroyed or captured during the battle. British casualties in the battle were recorded with some accuracy in the immediate aftermath as 218 killed and approximately 677 wounded, although the number of wounded who subsequently died is not known. The ships that suffered most were "Bellerophon" with 201 casualties and "Majestic" with 193. Other than "Culloden" the lightest loss was on "Zealous", which had one man killed and seven wounded. The casualty list included Captain Westcott, five lieutenants and ten junior officers among the dead, and Admiral Nelson, Captains Saumarez, Ball and Darby, and six lieutenants wounded. Other than "Culloden", the only British ships seriously damaged in their hulls were "Bellerophon", "Majestic," and "Vanguard". "Bellerophon" and "Majestic" were the only ships to lose masts: "Majestic" the main and mizzen and "Bellerophon" all three. French casualties are harder to calculate but were significantly higher. Estimates of French losses range from 2,000 to 5,000, with a suggested median point of 3,500, which includes more than 1,000 captured wounded and nearly 2,000 killed, half of whom died on "Orient". In addition to Admiral Brueys killed and Admiral Blanquet wounded, four captains died and seven others were seriously wounded. The French ships suffered severe damage: Two ships of the line and two frigates were destroyed (as well as a bomb vessel scuttled by its crew), and three other captured ships were too battered ever to sail again. Of the remaining prizes, only three were ever sufficiently repaired for frontline service. For weeks after the battle, bodies washed up along the Egyptian coast, decaying slowly in the intense, dry heat. Nelson, who on surveying the bay on the morning of 2 August said, "Victory is not a name strong enough for such a scene", remained at anchor in Aboukir Bay for the next two weeks, preoccupied with recovering from his wound, writing dispatches, and assessing the military situation in Egypt using documents captured on board one of the prizes. Nelson's head wound was recorded as being "three inches long" with "the cranium exposed for one inch". He suffered pain from the injury for the rest of his life and was badly scarred, styling his hair to disguise it as much as possible. As their commander recovered, his men stripped the wrecks of useful supplies and made repairs to their ships and prizes. Throughout the week, Aboukir Bay was surrounded by bonfires lit by Bedouin tribesmen in celebration of the British victory. On 5 August, "Leander" was despatched to Cadiz with messages for Earl St. Vincent carried by Captain Edward Berry. Over the next few days the British landed all but 200 of the captured prisoners on shore under strict terms of parole, although Bonaparte later ordered them to be formed into an infantry unit and added to his army. The wounded officers taken prisoner were held on board "Vanguard", where Nelson regularly entertained them at dinner. Historian Joseph Allen recounts that on one occasion Nelson, whose eyesight was still suffering following his wound, offered toothpicks to an officer who had lost his teeth and then passed a snuff-box to an officer whose nose had been torn off, causing much embarrassment. 
On 8 August the fleet's boats stormed Aboukir Island, which surrendered without a fight. The landing party removed four of the guns and destroyed the rest along with the fort they were mounted in, renaming the island "Nelson's Island". On 10 August, Nelson sent Lieutenant Thomas Duval from "Zealous" with messages to the government in India. Duval travelled across the Middle East overland via camel train to Aleppo and took the East India Company ship "Fly" from Basra to Bombay, acquainting Governor-General of India Viscount Wellesley with the situation in Egypt. On 12 August the frigates HMS "Emerald" under Captain Thomas Moutray Waller and HMS "Alcmene" under Captain George Johnstone Hope, and the sloop HMS "Bonne Citoyenne" under Captain Robert Retalick, arrived off Alexandria. Initially the British mistook the frigate squadron for French warships and "Swiftsure" chased them away. They returned the following day once the error had been realised. The same day as the frigates arrived, Nelson sent "Mutine" to Britain with dispatches, under the command of Lieutenant Thomas Bladen Capel, who had replaced Hardy after the latter's promotion to captain of "Vanguard". On 14 August, Nelson sent "Orion", "Majestic", "Bellerophon", "Minotaur", "Defence", "Audacious", "Theseus", "Franklin", "Tonnant", "Aquilon", "Conquérant", "Peuple Souverain," and "Spartiate" to sea under the command of Saumarez. Many ships had only jury masts and it took a full day for the convoy to reach the mouth of the bay, finally sailing into open water on 15 August. On 16 August the British burned and destroyed the grounded prize "Heureux" as no longer fit for service and on 18 August also burned "Guerrier" and "Mercure". On 19 August, Nelson sailed for Naples with "Vanguard", "Culloden," and "Alexander", leaving Hood in command of "Zealous", "Goliath", "Swiftsure," and the recently joined frigates to watch over French activities at Alexandria. The first message to reach Bonaparte regarding the disaster that had overtaken his fleet arrived on 14 August at his camp on the road between Salahieh and Cairo. The messenger was a staff officer sent by the Governor of Alexandria General Jean Baptiste Kléber, and the report had been hastily written by Admiral Ganteaume, who had subsequently rejoined Villeneuve's ships at sea. One account reports that when he was handed the message, Bonaparte read it without emotion before calling the messenger to him and demanding further details. When the messenger had finished, the French general reportedly announced "Nous n'avons plus de flotte: eh bien. Il faut rester en ces contrées, ou en sortir grands comme les anciens" ("We no longer have a fleet: well, we must either remain in this country or quit it as great as the ancients"). Another story, as told by the general's secretary, Bourienne, claims that Bonaparte was almost overcome by the news and exclaimed "Unfortunate Brueys, what have you done!" Bonaparte later placed much of the blame for the defeat on the wounded Admiral Blanquet, falsely accusing him of surrendering "Franklin" while his ship was undamaged. Protestations from Ganteaume and Minister Étienne Eustache Bruix later reduced the degree of criticism Blanquet faced, but he never again served in a command capacity. Bonaparte's most immediate concern however was with his own officers, who began to question the wisdom of the entire expedition. Inviting his most senior officers to dinner, Bonaparte asked them how they were.
When they replied that they were "marvellous," Bonaparte responded that it was just as well, since he would have them shot if they continued "fostering mutinies and preaching revolt." To quell any uprising among the native inhabitants, Egyptians overheard discussing the battle were threatened with having their tongues cut out. Nelson's first set of dispatches were captured when "Leander" was intercepted and defeated by "Généreux" in a fierce engagement off the western shore of Crete on 18 August 1798. As a result, reports of the battle did not reach Britain until Capel arrived in "Mutine" on 2 October, entering the Admiralty at 11:15 and personally delivering the news to Lord Spencer, who collapsed unconscious when he heard the report. Although Nelson had previously been castigated in the press for failing to intercept the French fleet, rumours of the battle had begun to arrive in Britain from the continent in late September and the news Capel brought was greeted with celebrations right across the country. Within four days Nelson had been elevated to Baron Nelson of the Nile and Burnham Thorpe, a title with which he was privately dissatisfied, believing his actions deserved better reward. King George III made reference to the victory when he addressed the Houses of Parliament on 20 November. Saumarez's convoy of prizes stopped first at Malta, where Saumarez provided assistance to a rebellion on the island among the Maltese population. It then sailed to Gibraltar, arriving on 18 October to the cheers of the garrison. Saumarez wrote that, "We can never do justice to the warmth of their applause, and the praises they all bestowed on our squadron." On 23 October, following the transfer of the wounded to the military hospital and provision of basic supplies, the convoy sailed on towards Lisbon, leaving "Bellerophon" and "Majestic" behind for more extensive repairs. "Peuple Souverain" also remained at Gibraltar: the ship was deemed too badly damaged for the Atlantic voyage to Britain and so was converted to a guardship under the name of HMS "Guerrier". The remaining prizes underwent basic repairs and then sailed for Britain, spending some months at the Tagus and joining with the annual merchant convoy from Portugal in June 1799 under the escort of a squadron commanded by Admiral Sir Alan Gardner, before eventually arriving at Plymouth. Their age and battered state meant that neither "Conquérant" nor "Aquilon" was considered fit for active service in the Royal Navy and both were subsequently hulked, although they had been bought into the service for £20,000 each as HMS "Conquerant" and HMS "Aboukir" to provide a financial reward to the crews that had captured them. Similar sums were also paid out for "Guerrier", "Mercure", "Heureux" and "Peuple Souverain", while the other captured ships were worth considerably more. Constructed of Adriatic oak, "Tonnant" had been built in 1792 and "Franklin" and "Spartiate" were less than a year old. "Tonnant" and "Spartiate", both of which later fought at the Battle of Trafalgar, joined the Royal Navy under their old names while "Franklin", considered to be "the finest two-decked ship in the world", was renamed HMS "Canopus". The total value of the prizes captured at the Nile and subsequently bought into the Royal Navy was estimated at just over £130,000.
Additional awards were presented to the British fleet: Nelson was awarded £2,000 a year for life by the Parliament of Great Britain and £1,000 per annum by the Parliament of Ireland, although the latter was inadvertently discontinued after the Act of Union dissolved the Irish Parliament. Both parliaments gave unanimous votes of thanks, each captain who served in the battle was presented with a specially minted gold medal and the first lieutenant of every ship engaged in the battle was promoted to commander. Troubridge and his men, initially excluded, received equal shares in the awards after Nelson personally interceded for the crew of the stranded "Culloden", even though they did not directly participate in the engagement. The Honourable East India Company presented Nelson with £10,000 in recognition of the benefit his action had on their holdings, and the cities of London, Liverpool and other municipal and corporate bodies made similar awards. Nelson's own captains presented him with a sword and a portrait as "proof of their esteem." Nelson publicly encouraged this close bond with his officers and on 29 September 1798 described them as "We few, we happy few, we band of brothers", echoing William Shakespeare's play "Henry V". From this grew the notion of the Nelsonic Band of Brothers, a cadre of high-quality naval officers that served with Nelson for the remainder of his life. Nearly five decades later the battle was among the actions recognised by a clasp attached to the Naval General Service Medal, awarded upon application to all British participants still living in 1847. Other rewards were bestowed by foreign states, particularly the Ottoman Emperor Selim III, who made Nelson the first Knight Commander of the newly created Order of the Crescent, and presented him with a chelengk, a diamond-studded rose, a sable fur and numerous other valuable presents. Tsar Paul I of Russia sent, among other rewards, a gold box studded with diamonds, and similar gifts in silver arrived from other European rulers. On his return to Naples, Nelson was greeted with a triumphal procession led by King Ferdinand IV and Sir William Hamilton and was introduced for only the third time to Sir William's wife Emma, Lady Hamilton, who fainted violently at the meeting and apparently took several weeks to recover from her injuries. Lauded as a hero by the Neapolitan court, Nelson was later to dabble in Neapolitan politics and become the Duke of Bronté, actions for which he was criticised by his superiors and which damaged his reputation. British general John Moore, who met Nelson in Naples at this time, described him as "covered with stars, medals and ribbons, more like a Prince of Opera than the Conqueror of the Nile." Rumours of a battle first appeared in the French press as early as 7 August, although credible reports did not arrive until 26 August, and even these claimed that Nelson was dead and Bonaparte a British prisoner. When the news became certain, the French press insisted that the defeat was the result both of an overwhelmingly large British force and unspecified "traitors." Among the anti-government journals in France, the defeat was blamed on the incompetence of the French Directory and on supposed lingering Royalist sentiments in the Navy. Villeneuve came under scathing attack on his return to France for his failure to support Brueys during the battle.
In his defence, he pleaded that the wind had been against him and that Brueys had not issued orders for him to counterattack the British fleet. Writing many years later, Bonaparte reflected on what might have been achieved had the French Navy adopted the same tactical principles as the British. By contrast, the British press were jubilant; many newspapers sought to portray the battle as a victory for Britain over anarchy, and the success was used to attack the supposedly pro-republican Whig politicians Charles James Fox and Richard Brinsley Sheridan. There has been extensive historiographical debate over the comparative strengths of the fleets, although they were ostensibly evenly matched in size, each containing 13 ships of the line. However, the loss of "Culloden", the relative sizes of "Orient" and "Leander" and the participation in the action by two of the French frigates and several smaller vessels, as well as the theoretical strength of the French position, leads most historians to the conclusion that the French were marginally more powerful. This is accentuated by the weight of broadside of several of the French ships: "Spartiate", "Franklin", "Orient", "Tonnant" and "Guillaume Tell" were each significantly larger than any individual British ship in the battle. However, inadequate deployment, reduced crews, and the failure of the rear division under Villeneuve to participate meaningfully all contributed to the French defeat. The Battle of the Nile has been called "arguably, the most decisive naval engagement of the great age of sail", and "the most splendid and glorious success which the British Navy gained." Historian and novelist C. S. Forester, writing in 1929, compared the Nile to the great naval actions in history and concluded that "it still only stands rivalled by Tsu-Shima as an example of the annihilation of one fleet by another of approximately equal material force". The effect on the strategic situation in the Mediterranean was immediate, reversing the balance of the conflict and giving the British control at sea that they maintained for the remainder of the war. The destruction of the French Mediterranean fleet allowed the Royal Navy to return to the sea in force, as British squadrons set up blockades off French and allied ports. In particular, British ships cut Malta off from France, aided by the rebellion among the native Maltese population that forced the French garrison to retreat to Valletta and shut the gates. The ensuing Siege of Malta lasted for two years before the defenders were finally starved into surrender. In 1799, British ships harassed Bonaparte's army as it marched east and north through Palestine, and played a crucial part in Bonaparte's defeat at the Siege of Acre, when the barges carrying the siege train were captured and the French storming parties were bombarded by British ships anchored offshore. It was during one of these latter engagements that Captain Miller of "Theseus" was killed in an ammunition explosion. The defeat at Acre forced Bonaparte to retreat to Egypt and effectively ended his efforts to carve an empire in the Middle East. The French general returned to France without his army late in the year, leaving Kléber in command of Egypt. The Ottomans, with whom Bonaparte had hoped to conduct an alliance once his control of Egypt was complete, were encouraged by the Battle of the Nile to go to war against France. This led to a series of campaigns that slowly sapped the strength from the French army trapped in Egypt.
The British victory also encouraged the Austrian Empire and the Russian Empire, both of which were mustering armies as part of a Second Coalition, which declared war on France in 1799. With the Mediterranean undefended, a Russian fleet entered the Ionian Sea, while Austrian armies recaptured much of the Italian territory lost to Bonaparte in the previous war. Without their best general and his veterans, the French suffered a series of defeats and it was not until Bonaparte returned to become First Consul that France once again held a position of strength on mainland Europe. In 1801 a British Expeditionary Force defeated the demoralised remains of the French army in Egypt. The Royal Navy used its dominance in the Mediterranean to invade Egypt without the fear of ambush while anchored off the Egyptian coast. In spite of the overwhelming British victory in the climactic battle, the campaign has sometimes been considered a strategic success for France. Historian Edward Ingram noted that if Nelson had successfully intercepted Bonaparte at sea as ordered, the ensuing battle could have annihilated both the French fleet and the transports. As it was, Bonaparte was free to continue the war in the Middle East and later to return to Europe personally unscathed. The potential of a successful engagement at sea to change the course of history is underscored by the list of French army officers carried aboard the convoy who later formed the core of the generals and marshals under Emperor Napoleon. In addition to Bonaparte himself, Louis-Alexandre Berthier, Auguste de Marmont, Jean Lannes, Joachim Murat, Louis Desaix, Jean Reynier, Antoine-François Andréossy, Jean-Andoche Junot, Louis-Nicolas Davout and Dumas were all passengers on the cramped Mediterranean crossing. The Battle of the Nile remains one of the Royal Navy's most famous victories, and has remained prominent in the British popular imagination, sustained by its depiction in a large number of cartoons, paintings, poems, and plays. One of the best known poems about the battle is "Casabianca", which was written by Felicia Dorothea Hemans in 1826 and gives a fictionalised account of the death of Captain Casabianca's son on "Orient". Monuments were raised, including Cleopatra's Needle in London. Muhammad Ali of Egypt gave the monument in 1819 in recognition of the battle of 1798 and the campaign of 1801, but Great Britain did not erect it on the Victoria Embankment until 1878. Another memorial, the Nile Clumps near Amesbury, consists of stands of beech trees purportedly planted by Lord Queensbury at the behest of Lady Hamilton and Thomas Hardy after Nelson's death. The trees form a plan of the battle; each clump represents the position of a British or French ship. A similar arboreal memorial is thought to have been planted near Alnwick by Nelson's agent Alexander Davison. The composer Joseph Haydn had just completed the Missa in Angustiis (mass for troubled times) after Napoleon Bonaparte had defeated the Austrian army in four major battles. The well-received news of France's defeat at the Nile, however, resulted in the mass gradually acquiring the nickname "Lord Nelson Mass". The title became indelible when, in 1800, Nelson himself visited the Palais Esterházy, accompanied by his mistress, Lady Hamilton, and may have heard the mass performed.
The Royal Navy commemorated the battle with the ship names HMS "Aboukir" and HMS "Nile", and in 1998 commemorated the 200th anniversary of the battle with a visit to Aboukir Bay by the modern frigate HMS "Somerset", whose crew laid wreaths in memory of those who lost their lives in the battle. Although Nelson biographer Ernle Bradford assumed in 1977 that the remains of "Orient" "are almost certainly unrecoverable," the first archaeological investigation into the battle began in 1983, when a French survey team under Jacques Dumas discovered the wreck of the French flagship. Franck Goddio later took over the work, leading a major project to explore the bay in 1998. He found that material was scattered over a wide area of the seabed. In addition to military and nautical equipment, Goddio recovered a large number of gold and silver coins from countries across the Mediterranean, some from the 17th century. It is likely that these were part of the treasure taken from Malta that was lost in the explosion aboard "Orient". In 2000, Italian archaeologist Paolo Gallo led an excavation focusing on ancient ruins on Nelson's Island. It uncovered a number of graves that date from the battle, as well as others buried there during the 1801 invasion. These graves, which included a woman and three children, were relocated in 2005 to a cemetery at Shatby in Alexandria. The reburial was attended by sailors from the modern frigate HMS "Chatham" and a band from the Egyptian Navy, as well as a descendant of the only identified burial, Commander James Russell. Boeing 747 The Boeing 747 is an American wide-body commercial jet airliner and cargo aircraft, often referred to by its original nickname, "Jumbo Jet". Its distinctive "hump" upper deck along the forward part of the aircraft has made it one of the most recognizable aircraft, and it was the first wide-body airplane produced. Manufactured by Boeing's Commercial Airplane unit in the United States, the 747 was originally envisioned to have 150 percent greater capacity than the Boeing 707, a common large commercial aircraft of the 1960s. First flown commercially in 1970, the 747 held the passenger capacity record for 37 years. The four-engine 747 uses a double-deck configuration for part of its length and is available in passenger, freighter and other versions. Boeing designed the 747's hump-like upper deck to serve as a first-class lounge or extra seating, and to allow the aircraft to be easily converted to a cargo carrier by removing seats and installing a front cargo door. Boeing expected supersonic airliners—the development of which was announced in the early 1960s—to render the 747 and other subsonic airliners obsolete, while the demand for subsonic cargo aircraft would remain robust well into the future. Though the 747 was expected to become obsolete after 400 were sold, it exceeded critics' expectations with production surpassing 1,000 in 1993. By June 2018, 1,545 aircraft had been built, with 23 of the 747-8 variants remaining on order. The 747 has been involved in 60 hull losses, resulting in fatalities. The 747-400, the most common variant in service, has a high-subsonic cruise speed of Mach 0.85–0.855 with an intercontinental range of 7,260 nautical miles (8,350 statute miles or 13,450 km). The 747-400 can accommodate 416 passengers in a typical three-class layout, 524 passengers in a typical two-class layout, or 660 passengers in a high-density one-class configuration.
The newest version of the aircraft, the 747-8, is in production and received certification in 2011. Deliveries of the 747-8F freighter version began in October 2011; deliveries of the 747-8I passenger version began in May 2012. In 1963, the United States Air Force started a series of study projects on a very large strategic transport aircraft. Although the C-141 Starlifter was being introduced, they believed that a much larger and more capable aircraft was needed, especially the capability to carry outsized cargo that would not fit in any existing aircraft. These studies led to initial requirements for the CX-Heavy Logistics System (CX-HLS) in March 1964 for an aircraft with a very large load capacity, a cruise speed of Mach 0.75, and a long unrefueled range while carrying a heavy payload. The payload bay had to be wide, high, and long enough to take outsized cargo, with access through doors at the front and rear. Featuring only four engines, the design also required new engine designs with greatly increased power and better fuel economy. In May 1964, airframe proposals arrived from Boeing, Douglas, General Dynamics, Lockheed, and Martin Marietta; engine proposals were submitted by General Electric, Curtiss-Wright, and Pratt & Whitney. After a downselect, Boeing, Douglas, and Lockheed were given additional study contracts for the airframe, along with General Electric and Pratt & Whitney for the engines. All three of the airframe proposals shared a number of features. As the CX-HLS needed to be able to be loaded from the front, a door had to be included where the cockpit usually was. All of the companies solved this problem by moving the cockpit above the cargo area; Douglas had a small "pod" just forward and above the wing, Lockheed used a long "spine" running the length of the aircraft with the wing spar passing through it, while Boeing blended the two, with a longer pod that ran from just behind the nose to just behind the wing. In 1965 Lockheed's aircraft design and General Electric's engine design were selected for the new C-5 Galaxy transport, which was the largest military aircraft in the world at the time. The nose door and raised cockpit concepts would be carried over to the design of the 747. The 747 was conceived while air travel was increasing in the 1960s. The era of commercial jet transportation, led by the enormous popularity of the Boeing 707 and Douglas DC-8, had revolutionized long-distance travel. Even before it lost the CX-HLS contract, Boeing was asked by Juan Trippe, president of Pan American World Airways (Pan Am), one of their most important airline customers, to build a passenger aircraft more than twice the size of the 707. During this time, airport congestion, worsened by increasing numbers of passengers carried on relatively small aircraft, became a problem that Trippe thought could be addressed by a larger new aircraft. In 1965, Joe Sutter was transferred from Boeing's 737 development team to manage the design studies for the new airliner, already assigned the model number 747. Sutter initiated a design study with Pan Am and other airlines, to better understand their requirements. At the time, it was widely thought that the 747 would eventually be superseded by supersonic transport aircraft. Boeing responded by designing the 747 so that it could be adapted easily to carry freight and remain in production even if sales of the passenger version declined. In the freighter role, the clear need was to support the containerized shipping methodologies that were being widely introduced at about the same time.
Standard containers have a square cross-section at the front (slightly higher due to attachment points) and are available in a range of standard lengths. This meant that it would be possible to support a 2-wide 2-high stack of containers two or three ranks deep with a fuselage size similar to the earlier CX-HLS project. In April 1966, Pan Am ordered 25 747-100 aircraft for US$525 million. During the ceremonial 747 contract-signing banquet in Seattle on Boeing's 50th Anniversary, Juan Trippe predicted that the 747 would be "... a great weapon for peace, competing with intercontinental missiles for mankind's destiny". As launch customer, and because of its early involvement before placing a formal order, Pan Am was able to influence the design and development of the 747 to an extent unmatched by a single airline before or since. Ultimately, the high-winged CX-HLS Boeing design was not used for the 747, although technologies developed for their bid had an influence. The original design included a full-length double-deck fuselage with eight-across seating and two aisles on the lower deck and seven-across seating and two aisles on the upper deck. However, concern over evacuation routes and limited cargo-carrying capability caused this idea to be scrapped in early 1966 in favor of a wider single deck design. The cockpit was, therefore, placed on a shortened upper deck so that a freight-loading door could be included in the nose cone; this design feature produced the 747's distinctive "bulge". In early models it was not clear what to do with the small space in the pod behind the cockpit, and this was initially specified as a "lounge" area with no permanent seating. (A different configuration that had been considered in order to keep the flight deck out of the way for freight loading had the pilots below the passengers, and was dubbed the "anteater".) One of the principal technologies that enabled an aircraft as large as the 747 to be drawn up was the high-bypass turbofan engine. The engine technology was thought to be capable of delivering double the power of the earlier turbojets while consuming a third less fuel. General Electric had pioneered the concept but was committed to developing the engine for the C-5 Galaxy and did not enter the commercial market until later. Pratt & Whitney was also working on the same principle and, by late 1966, Boeing, Pan Am and Pratt & Whitney agreed to develop a new engine, designated the JT9D, to power the 747. The project was designed with a new methodology called fault tree analysis, which allowed the effects of a failure of a single part to be studied to determine its impact on other systems. To address concerns about safety and flyability, the 747's design included structural redundancy, redundant hydraulic systems, quadruple main landing gear and dual control surfaces. Additionally, some of the most advanced high-lift devices used in the industry were included in the new design, to allow it to operate from existing airports. These included slats running almost the entire length of the wing, as well as complex three-part slotted flaps along the trailing edge of the wing. The wing's complex three-part flaps increase wing area by 21 percent and lift by 90 percent when fully deployed compared to their non-deployed configuration. Boeing agreed to deliver the first 747 to Pan Am by the end of 1969. The delivery date left 28 months to design the aircraft, which was two-thirds of the normal time. The schedule was so fast-paced that the people who worked on it were given the nickname "The Incredibles".
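The fuselage sizing implied by the container-stacking point at the start of the paragraph above can be sketched with some rough arithmetic. The figures used here are assumptions for illustration only: the nominal 8 ft square container face is drawn from standard intermodal container dimensions, and the clearance allowance is invented; neither appears in the text.

```python
# Illustrative cross-section arithmetic for a 2-wide, 2-high container stack.
# Assumptions (not from the text): standard intermodal containers present a
# roughly 8 ft x 8 ft face, and about 1 ft of handling clearance is left on
# each side of the stack.
CONTAINER_FACE_FT = 8.0
CLEARANCE_FT = 1.0

bay_width_ft = 2 * CONTAINER_FACE_FT + 2 * CLEARANCE_FT   # two containers abreast
bay_height_ft = 2 * CONTAINER_FACE_FT + 2 * CLEARANCE_FT  # two containers stacked

print(f"Clear cargo cross-section needed: ~{bay_width_ft:.0f} ft x {bay_height_ft:.0f} ft")
# ~18 ft x 18 ft, i.e. a cargo cross-section on the same order as the large
# military-freighter fuselages studied for CX-HLS, which is the comparison
# the text draws.
```

Under these assumptions the required cross-section, not the cabin layout, is what drives the fuselage toward CX-HLS-like dimensions for the freighter role.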
Developing the aircraft was such a technical and financial challenge that management was said to have "bet the company" when it started the project. As Boeing did not have a plant large enough to assemble the giant airliner, they chose to build a new plant. The company considered locations in about 50 cities, and eventually decided to build the new plant north of Seattle on a site adjoining a military base at Paine Field near Everett, Washington. It bought the site in June 1966. Developing the 747 had been a major challenge, and building its assembly plant was also a huge undertaking. Boeing president William M. Allen asked Malcolm T. Stamper, then head of the company's turbine division, to oversee construction of the Everett factory and to start production of the 747. To level the site, a huge quantity of earth had to be moved. Time was so short that the 747's full-scale mock-up was built before the factory roof above it was finished. The plant is the largest building by volume ever built, and has been substantially expanded several times to permit construction of other models of Boeing wide-body commercial jets. Before the first 747 was fully assembled, testing began on many components and systems. One important test involved the evacuation of 560 volunteers from a cabin mock-up via the aircraft's emergency chutes. The first full-scale evacuation took two and a half minutes instead of the maximum of 90 seconds mandated by the Federal Aviation Administration (FAA), and several volunteers were injured. Subsequent test evacuations achieved the 90-second goal but caused more injuries. Most problematic was evacuation from the aircraft's upper deck; instead of using a conventional slide, volunteer passengers escaped by using a harness attached to a reel. Tests also involved taxiing such a large aircraft. Boeing built an unusual training device known as "Waddell's Wagon" (named for a 747 test pilot, Jack Waddell) that consisted of a mock-up cockpit mounted on the roof of a truck. While the first 747s were still being built, the device allowed pilots to practice taxi maneuvers from a high upper-deck position. On September 30, 1968, the first 747 was rolled out of the Everett assembly building before the world's press and representatives of the 26 airlines that had ordered the airliner. Over the following months, preparations were made for the first flight, which took place on February 9, 1969, with test pilots Jack Waddell and Brien Wygle at the controls and Jess Wallick at the flight engineer's station. Despite a minor problem with one of the flaps, the flight confirmed that the 747 handled extremely well. The 747 was found to be largely immune to "Dutch roll", a phenomenon that had been a major hazard to the early swept-wing jets. During later stages of the flight test program, flutter testing showed that the wings suffered oscillation under certain conditions. This difficulty was partly solved by reducing the stiffness of some wing components. However, a particularly severe high-speed flutter problem was solved only by inserting depleted uranium counterweights as ballast in the outboard engine nacelles of the early 747s. This measure caused anxiety when these aircraft crashed, for example El Al Flight 1862 at Amsterdam in 1992, which had uranium ballast in the tailplane (horizontal stabilizer). The flight test program was hampered by problems with the 747's JT9D engines.
Difficulties included engine stalls caused by rapid throttle movements and distortion of the turbine casings after a short period of service. The problems delayed 747 deliveries for several months; up to 20 aircraft at the Everett plant were stranded while awaiting engine installation. The program was further delayed when one of the five test aircraft suffered serious damage during a landing attempt at Renton Municipal Airport, site of the company's Renton factory. On December 13, 1969, a test aircraft was being taken to have test equipment removed and a cabin installed when pilot Ralph C. Cokely undershot the airport's short runway. The 747's right outer landing gear was torn off and two engine nacelles were damaged. However, these difficulties did not prevent Boeing from taking a test aircraft to the 28th Paris Air Show in mid-1969, where it was displayed to the public for the first time. The 747 received its FAA airworthiness certificate in December 1969, clearing it for introduction into service. The huge cost of developing the 747 and building the Everett factory meant that Boeing had to borrow heavily from a banking syndicate. During the final months before delivery of the first aircraft, the company had to repeatedly request additional funding to complete the project. Had this been refused, Boeing's survival would have been threatened. The firm's debt exceeded $2 billion, with the $1.2 billion owed to the banks setting a record for all companies. Allen later said, "It was really too large a project for us." Ultimately, the gamble succeeded, and Boeing held a monopoly in very large passenger aircraft production for many years. On January 15, 1970, First Lady of the United States Pat Nixon christened Pan Am's first 747, at Dulles International Airport (later Washington Dulles International Airport) in the presence of Pan Am chairman Najeeb Halaby. Instead of champagne, red, white, and blue water was sprayed on the aircraft. The 747 entered service on January 22, 1970, on Pan Am's New York–London route; the flight had been planned for the evening of January 21, but engine overheating made the original aircraft unusable. Finding a substitute delayed the flight by more than six hours to the following day when Clipper Victor was used. The 747 enjoyed a fairly smooth introduction into service, overcoming concerns that some airports would not be able to accommodate an aircraft that large. Although technical problems occurred, they were relatively minor and quickly solved. After the aircraft's introduction with Pan Am, other airlines that had bought the 747 to stay competitive began to put their own 747s into service. Boeing estimated that half of the early 747 sales were to airlines desiring the aircraft's long range rather than its payload capacity. While the 747 had the lowest potential operating cost per seat, this could only be achieved when the aircraft was fully loaded; costs per seat increased rapidly as occupancy declined. A moderately loaded 747, one with only 70 percent of its seats occupied, used more than 95 percent of the fuel needed by a fully occupied 747. Nonetheless, many flag-carriers purchased the 747 due to its iconic status "even if it made no sense economically" to operate. During the 1970s and 1980s, there were often over 30 regularly scheduled 747s at John F. Kennedy International Airport. The recession of 1969-1970 greatly affected Boeing.
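The sensitivity of per-seat cost to occupancy quoted above can be illustrated with a short calculation. Only the 70 percent occupancy and 95 percent fuel figures come from the text; the seat count is an arbitrary illustrative value and cancels out of the ratio.

```python
# Per-seat fuel arithmetic using the figures quoted above: a 747 with 70% of
# its seats occupied burns more than 95% of the fuel of a fully occupied one.
seats = 400            # illustrative seat count; it cancels out of the ratio
trip_fuel_full = 1.0   # trip fuel for a fully occupied aircraft, normalised to 1

fuel_per_seat_full = trip_fuel_full / seats
fuel_per_seat_70pct = (0.95 * trip_fuel_full) / (0.70 * seats)

increase = fuel_per_seat_70pct / fuel_per_seat_full - 1
print(f"Fuel burned per occupied passenger rises by roughly {increase:.0%}")  # about 36%
```

On these figures, fuel burned per occupied seat is roughly a third higher at 70 percent occupancy than when full, which is why airlines that could not fill the aircraft found it uneconomic to operate.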
For the year and a half after September 1970, Boeing sold only two 747s worldwide, and did not sell any to an American carrier for almost three years. When economic problems in the United States and other countries after the 1973 oil crisis led to reduced passenger traffic, several airlines found they did not have enough passengers to fly the 747 economically, and they replaced them with the smaller and recently introduced McDonnell Douglas DC-10 and Lockheed L-1011 TriStar trijet wide bodies (and later the 767 and A300/A310 twinjets). Having tried replacing coach seats on its 747s with piano bars in an attempt to attract more customers, American Airlines eventually relegated its 747s to cargo service and in 1983 exchanged them with Pan Am for smaller aircraft; Delta Air Lines also removed its 747s from service after several years. Later, Delta acquired 747s again in 2008 as part of its merger with Northwest Airlines, although it retired the 747-400 fleet in December 2017. International flights that bypassed traditional hub airports and landed at smaller cities became more common throughout the 1980s, and this eroded the 747's original market. However, many international carriers continued to use the 747 on Pacific routes. In Japan, 747s on domestic routes were configured to carry close to the maximum passenger capacity. After the initial 747-100, Boeing developed the -100B, a higher maximum takeoff weight (MTOW) variant, and the -100SR (Short Range), with higher passenger capacity. Increased maximum takeoff weight allows aircraft to carry more fuel and have longer range. The -200 model followed in 1971, featuring more powerful engines and a higher MTOW. Passenger, freighter and combination passenger-freighter versions of the -200 were produced. The shortened 747SP (special performance) with a longer range was also developed, and entered service in 1976. The 747 line was further developed with the launch of the 747-300 in 1980. The 300 series resulted from Boeing studies to increase the seating capacity of the 747, during which modifications such as fuselage plugs and extending the upper deck over the entire length of the fuselage were rejected. The first 747-300, completed in 1983, included a stretched upper deck, increased cruise speed, and increased seating capacity. The -300 variant was previously designated 747SUD for stretched upper deck, then 747-200 SUD, followed by 747EUD, before the 747-300 designation was used. Passenger, short range and combination freighter-passenger versions of the 300 series were produced. In 1985, development of the longer range 747-400 began. The variant had a new glass cockpit, which allowed for a cockpit crew of two instead of three, new engines, lighter construction materials, and a redesigned interior. Development cost soared, and production delays occurred as new technologies were incorporated at the request of airlines. Insufficient workforce experience and reliance on overtime contributed to early production problems on the 747-400. The -400 entered service in 1989. In 1991, a record-breaking 1,087 passengers were airlifted aboard a 747 to Israel as part of Operation Solomon. The 747 remained the heaviest commercial aircraft in regular service until the debut of the Antonov An-124 Ruslan in 1982; variants of the 747-400 would surpass the An-124's weight in 2000. 
The Antonov An-225 "Mriya" cargo transport, which debuted in 1988, remains the world's largest aircraft by several measures (including the most accepted measures of maximum takeoff weight and length); one aircraft has been completed and is in service. The Hughes H-4 Hercules is the largest aircraft by wingspan, but it only completed a single flight. Since the arrival of the 747-400, several stretching schemes for the 747 have been proposed. Boeing announced the larger 747-500X and -600X preliminary designs in 1996. The new variants would have cost more than US$5 billion to develop, and interest was not sufficient to launch the program. In 2000, Boeing offered the more modest 747X and 747X Stretch derivatives as alternatives to the Airbus A3XX. However, the 747X family was unable to attract enough interest to enter production. A year later, Boeing switched from the 747X studies to pursue the Sonic Cruiser, and after the Sonic Cruiser program was put on hold, the 787 Dreamliner. Some of the ideas developed for the 747X were used on the 747-400ER, a longer range variant of the 747-400. After several variants were proposed but later abandoned, some industry observers became skeptical of new aircraft proposals from Boeing. However, in early 2004, Boeing announced tentative plans for the 747 Advanced that were eventually adopted. Similar in nature to the 747X, the stretched 747 Advanced used technology from the 787 to modernize the design and its systems. The 747 remained the largest passenger airliner in service until the Airbus A380 began airline service in 2007. On November 14, 2005, Boeing announced it was launching the 747 Advanced as the Boeing 747-8. The last 747-400s were completed in 2009. Most orders of the 747-8 have been for the freighter variant. On February 8, 2010, the 747-8 Freighter made its maiden flight. The first delivery of the 747-8 went to Cargolux in 2011. The 1,500th produced Boeing 747 was delivered in June 2014. In January 2016, Boeing stated it was reducing 747-8 production to six a year beginning in September 2016, incurring a $569 million post-tax charge against its fourth-quarter 2015 profits. At the end of 2015, the company had 20 orders outstanding. On January 29, 2016, Boeing announced that it had begun the preliminary work on the modifications to a commercial 747-8 for the next Air Force One Presidential aircraft, expected to be operational by 2020. On July 12, 2016, Boeing announced that it had finalized terms of acquisition with Volga-Dnepr Group for 20 747-8 freighters, valued at $7.58 billion at list prices. Four aircraft were delivered beginning in 2012. Volga-Dnepr Group is the parent of three major Russian air-freight carriers – Volga-Dnepr Airlines, AirBridgeCargo Airlines and Atran Airlines. The new 747-8 freighters will replace AirBridgeCargo's current 747-400 aircraft and expand the airline's fleet, and will be acquired through a mix of direct purchases and leasing over the next six years, Boeing said. On July 27, 2016, in its quarterly report to the Securities and Exchange Commission, Boeing discussed the potential termination of 747 production due to insufficient demand and market for the aircraft. With a firm order backlog of 21 aircraft and a production rate of six per year, program accounting has been reduced to 1,555 aircraft, and the 747 line could be closed in the third quarter of 2019. 
In October 2016, UPS Airlines ordered 14 -8Fs to add capacity, along with 14 options, which it took in February 2018, bringing the total ordered to 28 and increasing the backlog to 25 – including some for unidentified customers – with deliveries scheduled through 2022. The Boeing 747 is a large, wide-body (two-aisle) airliner with four wing-mounted engines. Its wings have a high sweep angle of 37.5 degrees for a fast, efficient cruise of Mach 0.84 to 0.88, depending on the variant. The sweep also reduces the wingspan, allowing the 747 to use existing hangars. Its seating capacity is over 366 with a 3–4–3 seat arrangement (a cross section of 3 seats, an aisle, 4 seats, another aisle, and 3 seats) in economy class and a 2–3–2 layout in first class on the main deck. The upper deck has a 3–3 seat arrangement in economy class and a 2–2 layout in first class. Raised above the main deck, the cockpit creates a hump. This raised cockpit allows front loading of cargo on freight variants. The upper deck behind the cockpit provides space for a lounge and/or extra seating. The "stretched upper deck" became available as an alternative on the 747-100B variant and later as standard beginning on the 747-300. The upper deck was stretched more on the 747-8. The 747 cockpit roof section also has an escape hatch from which crew can exit in the event of an emergency if they cannot do so through the cabin. The 747's maximum takeoff weight ranges from 735,000 pounds (333,400 kg) for the -100 to 970,000 lb (439,985 kg) for the -8. Its range has increased from 5,300 nautical miles (6,100 mi, 9,800 km) on the -100 to 8,000 nmi (9,200 mi, 14,815 km) on the -8I. The 747 has redundant structures along with four redundant hydraulic systems and four main landing gears each with four wheels; these provide a good spread of support on the ground and safety in case of tire blow-outs. The main gear are redundant so that landing can be performed on two opposing landing gears if the others are not functioning properly. The 747 also has split control surfaces and was designed with sophisticated triple-slotted flaps that minimize landing speeds and allow the 747 to use standard-length runways. For transportation of spare engines, the 747 can accommodate a non-functioning fifth-pod engine under the aircraft's port wing between the inner functioning engine and the fuselage. The 747-100 was the original variant launched in 1966. The 747-200 soon followed, with its launch in 1968. The 747-300 was launched in 1980 and was followed by the 747-400 in 1985. Ultimately, the 747-8 was announced in 2005. Several versions of each variant have been produced, and many of the early variants were in production simultaneously. The International Civil Aviation Organization (ICAO) classifies variants using a shortened code formed by combining the model number and the variant designator (e.g. "B741" for all -100 models). The first 747-100s were built with six upper deck windows (three per side) to accommodate upstairs lounge areas. Later, as airlines began to use the upper deck for premium passenger seating instead of lounge space, Boeing offered a ten-window upper deck as an option. Some early -100s were retrofitted with the new configuration. The -100 was equipped with Pratt & Whitney JT9D-3A engines. No freighter version of this model was developed, but many 747-100s were converted into freighters. A total of 168 747-100s were built; 167 were delivered to customers, while Boeing kept the prototype, "City of Everett". 
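The ICAO shortened-code convention described above can be illustrated with a small lookup table. Only "B741" is confirmed by the text; the remaining entries follow the same model-plus-variant pattern and are listed here as commonly used designators, not figures quoted from the source.

```python
# Minimal sketch of the ICAO type-designator scheme mentioned above.
# "B741" is quoted in the text; the other codes are included for illustration
# and follow the same "B74" + variant-series pattern.
ICAO_DESIGNATORS = {
    "747-100": "B741",
    "747-200": "B742",
    "747-300": "B743",
    "747-400": "B744",
    "747-8":   "B748",
    "747SP":   "B74S",  # special-performance variant
}

def icao_code(variant: str) -> str:
    """Return the ICAO designator for a 747 variant, or 'unknown' if not listed."""
    return ICAO_DESIGNATORS.get(variant, "unknown")

print(icao_code("747-400"))  # -> B744
```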
Responding to requests from Japanese airlines for a high-capacity aircraft to serve domestic routes between major cities, Boeing developed the 747SR as a short-range version of the 747-100 with lower fuel capacity and greater payload capability. With increased economy class seating, up to 498 passengers could be carried in early versions and up to 550 in later models. The 747SR had an economic design life objective of 52,000 flights during 20 years of operation, compared to 24,600 flights in 20 years for the standard 747. The initial 747SR model, the -100SR, had a strengthened body structure and landing gear to accommodate the added stress accumulated from a greater number of takeoffs and landings. Extra structural support was built into the wings, fuselage, and the landing gear along with a 20 percent reduction in fuel capacity. The initial order for the -100SR – four aircraft for Japan Air Lines (JAL, later Japan Airlines) – was announced on October 30, 1972; rollout occurred on August 3, 1973, and the first flight took place on August 31, 1973. The type was certified by the FAA on September 26, 1973, with the first delivery on the same day. The -100SR entered service with JAL, the type's sole customer, on October 7, 1973, and typically operated flights within Japan. Seven -100SRs were built between 1973 and 1975, each with a lower MTOW than the standard -100 and Pratt & Whitney JT9D-7A engines derated in thrust. Following the -100SR, Boeing produced the -100BSR, a 747SR variant with increased takeoff weight capability. Debuting in 1978, the -100BSR also incorporated structural modifications for a high cycle-to-flying hour ratio; a related standard -100B model debuted in 1979. The -100BSR first flew on November 3, 1978, with first delivery to All Nippon Airways (ANA) on December 21, 1978. A total of twenty -100BSRs were produced for ANA and JAL. The -100BSR had a 600,000 lb MTOW and was powered by the same JT9D-7A or General Electric CF6-45 engines used on the -100SR. ANA operated this variant on domestic Japanese routes with 455 or 456 seats until retiring its last aircraft in March 2006. In 1986, two -100BSR SUD models, featuring the stretched upper deck (SUD) of the -300, were produced for JAL. The type's maiden flight occurred on February 26, 1986, with FAA certification and first delivery on March 24, 1986. JAL operated the -100BSR SUD with 563 seats on domestic routes until their retirement in the third quarter of 2006. While only two -100BSR SUDs were produced, in theory, standard -100Bs can be modified to the SUD certification. Overall, twenty-nine 747SRs were built, consisting of seven -100SRs, twenty -100BSRs, and two -100BSR SUDs. The 747-100B model was developed from the -100SR, using its stronger airframe and landing gear design. The type had increased fuel capacity, allowing for longer range with a typical 452-passenger payload, and an increased MTOW was offered. The first -100B order, one aircraft for Iran Air, was announced on June 1, 1978. This aircraft first flew on June 20, 1979, received FAA certification on August 1, 1979, and was delivered the next day. Nine -100Bs were built, one for Iran Air and eight for Saudi Arabian Airlines. Unlike the original -100, the -100B was offered with Pratt & Whitney JT9D-7A, General Electric CF6-50, or Rolls-Royce RB211-524 engines. However, only RB211-524 (Saudia) and JT9D-7A (Iran Air) engines were ordered. The last 747-100B, EP-IAM, was retired by Iran Air in 2014; Iran Air was the last commercial operator of the 747-100 and -100B. 
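The design-life objectives quoted above imply how intensively the SR airframe was expected to be cycled. The back-of-the-envelope calculation below (Python) uses only the numbers in the text; the assumption of year-round daily operation is mine and is purely illustrative.

```python
# Rough utilization implied by the economic design-life objectives quoted above.
DAYS_PER_YEAR = 365

def flights_per_day(total_flights, years=20):
    """Average flights per day over the stated design life, assuming daily operation."""
    return total_flights / (years * DAYS_PER_YEAR)

sr_rate  = flights_per_day(52_000)   # ~7.1 flights per day for the 747SR
std_rate = flights_per_day(24_600)   # ~3.4 flights per day for the standard 747

print(f"747SR:    {sr_rate:.1f} flights/day on average")
print(f"standard: {std_rate:.1f} flights/day on average")
# The SR was expected to fly roughly twice as many cycles per day as a standard
# 747, which is why its body structure and landing gear were reinforced.
```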
The development of the 747SP stemmed from a joint request between Pan American World Airways and Iran Air, which were looking for a high-capacity airliner with enough range to cover Pan Am's New York–Middle Eastern routes and Iran Air's planned Tehran–New York route. The Tehran–New York route, when launched, was the longest non-stop commercial flight in the world. The 747SP is shorter than the 747-100. Fuselage sections were eliminated fore and aft of the wing, and the center section of the fuselage was redesigned to fit mating fuselage sections. The SP's flaps used a simplified single-slotted configuration. The 747SP, compared to earlier variants, had a tapering of the aft upper fuselage into the empennage, a double-hinged rudder, and longer vertical and horizontal stabilizers. Power was provided by Pratt & Whitney JT9D-7(A/F/J/FW) or Rolls-Royce RB211-524 engines. The 747SP was granted a supplemental certificate on February 4, 1976 and entered service with launch customers Pan Am and Iran Air that same year. The aircraft was chosen by airlines wishing to serve major airports with short runways. A total of 45 747SPs were built, with the 44th 747SP delivered on August 30, 1982. In 1987, Boeing re-opened the 747SP production line after five years to build one last 747SP for an order by the United Arab Emirates government. In addition to airline use, one 747SP was modified for the NASA/German Aerospace Center SOFIA experiment. Iran Air is the last civil operator of the type; its final 747SP (EP-IAC) was to be retired in June 2016. While the 747-100 powered by Pratt & Whitney JT9D-3A engines offered enough payload and range for US domestic operations, it was marginal for long, international route sectors. The demand for longer range aircraft with increased payload quickly led to the improved -200, which featured more powerful engines, increased MTOW, and greater range than the -100. A few early -200s retained the three-window configuration of the -100 on the upper deck, but most were built with a ten-window configuration on each side. The 747-200 was produced in passenger (-200B), freighter (-200F), convertible (-200C), and combi (-200M) versions. The 747-200B was the basic passenger version, with increased fuel capacity and more powerful engines; it entered service in February 1971. In its first three years of production, the -200 was equipped with Pratt & Whitney JT9D-7 engines (initially the only engine available). Range with a full passenger load improved as later, more powerful engines became available. Most -200Bs had an internally stretched upper deck, allowing for up to 16 passenger seats. The freighter model, the 747-200F, could be fitted with or without a side cargo door, and had a capacity of 105 tons (95.3 tonnes) and an MTOW of up to 833,000 lb (378,000 kg). It entered service in 1972 with Lufthansa. The convertible version, the 747-200C, could be converted between a passenger and a freighter or used in mixed configurations, and featured removable seats and a nose cargo door. The -200C could also be fitted with an optional side cargo door on the main deck. The combi model, the 747-200M, could carry freight in the rear section of the main deck via a side cargo door. A removable partition on the main deck separated the cargo area at the rear from the passengers at the front. The -200M could carry up to 238 passengers in a three-class configuration with cargo carried on the main deck. The model was also known as the 747-200 Combi. 
As on the -100, a stretched upper deck (SUD) modification was later offered. A total of 10 converted 747-200s were operated by KLM. Union des Transports Aériens (UTA) also had two aircraft converted. After launching the -200 with Pratt & Whitney JT9D-7 engines, on August 1, 1972 Boeing announced that it had reached an agreement with General Electric to certify the 747 with CF6-50 series engines to increase the aircraft's market potential. Rolls-Royce followed 747 engine production with a launch order from British Airways for four aircraft. The option of RB211-524B engines was announced on June 17, 1975. The -200 was the first 747 to provide a choice of powerplant from the three major engine manufacturers. A total of 393 of the 747-200 versions had been built when production ended in 1991. Of these, 225 were -200B, 73 were -200F, 13 were -200C, 78 were -200M, and 4 were military. Many 747-200s remained in operation, although most large carriers had retired them from their fleets and sold them to smaller operators by the early 2000s. Large carriers sped up fleet retirement following the September 11 attacks and the subsequent drop in demand for air travel, scrapping some aircraft and turning others into freighters. Iran Air retired the last passenger 747-200 in May 2016, 36 years after it was delivered. The 747-300 features a longer upper deck than the -200. The stretched upper deck has two emergency exit doors and is the most visible difference between the -300 and previous models. Before being made standard on the 747-300, the stretched upper deck was previously offered as a retrofit, and appeared on two Japanese 747-100SR aircraft. The 747-300 introduced a new straight stairway to the upper deck, instead of a spiral staircase on earlier variants, which creates room above and below for more seats. Minor aerodynamic changes allowed the -300's cruise speed to reach Mach 0.85 compared with Mach 0.84 on the -200 and -100 models, while retaining the same takeoff weight. The -300 could be equipped with the same Pratt & Whitney and Rolls-Royce powerplants as on the -200, as well as updated General Electric CF6-80C2B1 engines. Swissair placed the first order for the 747-300 on June 11, 1980. The variant revived the 747-300 designation, which had been previously used on a design study that did not reach production. The 747-300 first flew on October 5, 1982, and the type's first delivery went to Swissair on March 23, 1983. Besides the passenger model, two other versions (-300M, -300SR) were produced. The 747-300M features cargo capacity on the rear portion of the main deck, similar to the -200M, but with the stretched upper deck it can carry more passengers. The 747-300SR, a short range, high-capacity domestic model, was produced for Japanese markets with a maximum seating for 584. No production freighter version of the 747-300 was built, but Boeing began modifications of used passenger -300 models into freighters in 2000. A total of 81 747-300 series aircraft were delivered, 56 for passenger use, 21 -300M and 4 -300SR versions. In 1985, just two years after the -300 entered service, the type was superseded by the announcement of the more advanced 747-400. The last 747-300 was delivered in September 1990 to Sabena. While some -300 customers continued operating the type, several large carriers replaced their 747-300s with 747-400s. Air France, Air India, Pakistan International Airlines, and Qantas were some of the last major carriers to operate the 747-300. 
On December 29, 2008, Qantas flew its last scheduled 747-300 service, operating from Melbourne to Los Angeles via Auckland. In July 2015, Pakistan International Airlines retired its final 747-300 after 30 years of service. As of July 2017, only five 747-300s remain in commercial service, with Max Air (3), Mahan Air (1) and TransAVIAexport Airlines (1). The 747-400 is an improved model with increased range. It has wingtip extensions and winglets, which improve the type's fuel efficiency by four percent compared to previous 747 versions. The 747-400 introduced a new glass cockpit designed for a flight crew of two instead of three, with a reduction in the number of dials, gauges and knobs from 971 to 365 through the use of electronics. The type also features tail fuel tanks, revised engines, and a new interior. The longer range has been used by some airlines to bypass traditional fuel stops, such as Anchorage. Powerplants include the Pratt & Whitney PW4062, General Electric CF6-80C2, and Rolls-Royce RB211-524. The 747-400 was offered in passenger (-400), freighter (-400F), combi (-400M), domestic (-400D), extended range passenger (-400ER), and extended range freighter (-400ERF) versions. Passenger versions retain the same upper deck as the -300, while the freighter version does not have an extended upper deck. The 747-400D was built for short-range operations with maximum seating for 624. Winglets were not included, but they can be retrofitted. Cruising speed is up to Mach 0.855 on different versions of the 747-400. The passenger version first entered service in February 1989 with launch customer Northwest Airlines on the Minneapolis to Phoenix route. The combi version entered service in September 1989 with KLM, while the freighter version entered service in November 1993 with Cargolux. The 747-400ERF entered service with Air France in October 2002, while the 747-400ER entered service with Qantas, its sole customer, in November 2002. In January 2004, Boeing and Cathay Pacific launched the Boeing 747-400 Special Freighter program, later referred to as the Boeing Converted Freighter (BCF), to modify passenger 747-400s for cargo use. The first 747-400BCF was redelivered in December 2005. In March 2007, Boeing announced that it had no plans to produce further passenger versions of the -400. However, orders for 36 -400F and -400ERF freighters were already in place at the time of the announcement. The last passenger version of the 747-400 was delivered in April 2005 to China Airlines. Some of the last built 747-400s were delivered with Dreamliner livery along with the modern Signature interior from the Boeing 777. A total of 694 of the 747-400 series aircraft were delivered. At various times, the largest 747-400 operator has included Singapore Airlines, Japan Airlines, and British Airways with 31 aircraft. As of July 2017, 370 747-400s remain in service. The 747-400 Dreamlifter (originally called the 747 Large Cargo Freighter or LCF) is a Boeing-designed modification of existing 747-400s to a larger configuration to ferry 787 Dreamliner sub-assemblies. Evergreen Aviation Technologies Corporation of Taiwan was contracted to complete modifications of 747-400s into Dreamlifters in Taoyuan. The aircraft flew for the first time on September 9, 2006, in a test flight. Modification of four aircraft was completed by February 2010. The Dreamlifters have been placed into service transporting sub-assemblies for the 787 program to the Boeing plant in Everett, Washington, for final assembly. 
The aircraft is certified to carry only essential crew and not passengers. Boeing announced a new 747 variant, the 747-8, on November 14, 2005. Referred to as the 747 Advanced prior to its launch, the 747-8 uses the same engine and cockpit technology as the 787, hence the use of the "8". The variant is designed to be quieter, more economical, and more environmentally friendly. The 747-8's fuselage is lengthened from 232 to 251 feet (70.8 to 76.4 m), marking the first stretch variant of the aircraft. Power is supplied by General Electric GEnx-2B67 engines. The 747-8 Freighter, or 747-8F, is derived from the 747-400ERF. The variant has 16 percent more payload capacity than its predecessor, allowing it to carry seven more standard air cargo containers, with a maximum payload capacity of 154 tons (140 tonnes) of cargo. As on previous 747 freighters, the 747-8F features an overhead nose-door and a side-door on the main deck plus a side-door on the lower deck ("belly") to aid loading and unloading. The 747-8F made its maiden flight on February 8, 2010. The variant received its amended type certificate jointly from the FAA and the European Aviation Safety Agency (EASA) on August 19, 2011. The -8F was first delivered to Cargolux on October 12, 2011. The passenger version, named 747-8 Intercontinental or 747-8I, is designed to carry up to 467 passengers in a 3-class configuration and to cruise at Mach 0.855. As a derivative of the already common 747-400, the 747-8 has the economic benefit of similar training and interchangeable parts. The type's first test flight occurred on March 20, 2011. The 747-8 has surpassed the Airbus A340-600 as the world's longest airliner. The first -8I was delivered in May 2012 to Lufthansa. The 747-8 has received 150 total orders, including 103 for the -8F and 47 for the -8I. Boeing has studied a number of 747 variants that have not gone beyond the concept stage. During the late 1960s and early 1970s, Boeing studied the development of a shorter 747 with three engines, to compete with the smaller Lockheed L-1011 TriStar and McDonnell Douglas DC-10. The center engine would have been fitted in the tail with an S-duct intake similar to the L-1011's. Overall, the 747 trijet would have had more payload, range, and passenger capacity than both of them. However, engineering studies showed that a major redesign of the 747 wing would be necessary. Maintaining the same 747 handling characteristics would be important to minimize pilot retraining. Boeing decided instead to pursue a shortened four-engine 747, resulting in the 747SP. Boeing announced the 747 ASB ("Advanced Short Body") in 1986 as a response to the Airbus A340 and the McDonnell Douglas MD-11. This aircraft design would have combined the advanced technology used on the 747-400 with the foreshortened 747SP fuselage. The aircraft was to carry 295 passengers. However, airlines were not interested in the project and it was canceled in 1988 in favor of the 777. Boeing announced the 747-500X and -600X at the 1996 Farnborough Airshow. The proposed models would have combined the 747's fuselage with a new 251 ft (77 m) span wing derived from the 777. Other changes included adding more powerful engines and increasing the number of tires from two to four on the nose landing gear and from 16 to 20 on the main landing gear. 
The 747-500X concept featured a fuselage stretched by 18 ft (5.5 m) to 250 ft (76.2 m) long, and the aircraft was to carry 462 passengers over a range of up to 8,700 nautical miles (10,000 mi, 16,100 km), with a gross weight of over 1.0 Mlb (450 tonnes). The 747-600X concept featured a greater stretch to 279 ft (85 m) with seating for 548 passengers, a range of up to 7,700 nmi (8,900 mi, 14,300 km), and a gross weight of 1.2 Mlb (540 tonnes). A third study concept, the 747-700X, would have combined the wing of the 747-600X with a widened fuselage, allowing it to carry 650 passengers over the same range as a 747-400. The cost of the changes from previous 747 models, in particular the new wing for the 747-500X and -600X, was estimated to be more than US$5 billion. Boeing was not able to attract enough interest to launch the aircraft. As Airbus progressed with its A3XX study, Boeing offered a 747 derivative as an alternative in 2000; a more modest proposal than the previous -500X and -600X that retained the 747's overall wing design and added a segment at the root to increase the span. Power would have been supplied by either the Engine Alliance GP7172 or the Rolls-Royce Trent 600, which were also proposed for the 767-400ERX. A new flight deck based on the 777's would be used. The 747X aircraft was to carry 430 passengers over ranges of up to 8,700 nmi (10,000 mi, 16,100 km). The 747X Stretch would have been lengthened further, allowing it to carry 500 passengers over ranges of up to 7,800 nmi (9,000 mi, 14,500 km). Both would feature an interior based on the 777. Freighter versions of the 747X and 747X Stretch were also studied. Like its predecessor, the 747X family was unable to garner enough interest to justify production, and it was shelved along with the 767-400ERX in March 2001, when Boeing announced the Sonic Cruiser concept. Though the 747X design was less costly than the 747-500X and -600X, it was criticized for not offering a sufficient advance from the existing 747-400. The 747X did not make it beyond the drawing board, but the 747-400X being developed concurrently moved into production to become the 747-400ER. After the end of the 747X program, Boeing continued to study improvements that could be made to the 747. The 747-400XQLR (Quiet Long Range) was meant to have an increased range of 7,980 nmi (9,200 mi, 14,800 km), with improvements to boost efficiency and reduce noise. Improvements studied included raked wingtips similar to those used on the 767-400ER and a sawtooth engine nacelle for noise reduction. Although the 747-400XQLR did not move to production, many of its features were used for the 747 Advanced, which has now been launched as the 747-8. As of July 2017, 488 Boeing 747s are in airline service, with British Airways being the largest operator with 36 747-400s. The last US passenger Boeing 747 was retired from Delta Air Lines in December 2017, having flown for every major American carrier since its 1970 introduction. Delta flew three of its last four aircraft on a farewell tour, from Seattle to Atlanta on December 19, then to Los Angeles and Minneapolis/St Paul on December 20. As the IATA forecast an increase in air freight of 4% to 5% in 2018, fuelled by booming trade in time-sensitive goods, from smartphones to fresh flowers, demand for freighters is strong while passenger 747s are phased out. Of the 1,544 produced, 890 are retired; as of 2018, a small subset of those that had been intended to be parted out are receiving $3 million D-checks before flying again. 
Young -400s are sold for 320 million yuan ($50 million) and Boeing stopped converting freighters, which used to cost nearly $30 million. This comeback helped the airframer's financing arm, Boeing Capital, to shrink its exposure to the 747-8 from $1.07 billion in 2017 to $481 million. Boeing 747 orders and deliveries (cumulative, by year): The 747 has been involved in 146 aviation accidents and incidents, including 61 accidents and hull losses, which resulted in 3,722 fatalities. The last crash was Turkish Airlines Flight 6491 in January 2017. There were also 24 deaths in 32 aircraft hijackings, such as Pan Am Flight 73, in which a Boeing 747-121 was hijacked by four terrorists, resulting in 20 deaths. Few crashes have been attributed to design flaws of the 747. The Tenerife airport disaster resulted from pilot error and communications failure, while the Japan Airlines Flight 123 and China Airlines Flight 611 crashes stemmed from improper aircraft repair. United Airlines Flight 811, which suffered an explosive decompression mid-flight on February 24, 1989, led the National Transportation Safety Board (NTSB) to issue a recommendation that 747-200 cargo doors similar to those on the Flight 811 aircraft be modified. Korean Air Lines Flight 007 was shot down by a Soviet fighter aircraft in 1983 after it had strayed into Soviet territory, causing U.S. President Ronald Reagan to authorize the then-strictly military global positioning system (GPS) for civilian use. Accidents due to design deficiencies included TWA Flight 800, where a 747-100 exploded in mid-air on July 17, 1996, probably due to sparking electrical wires inside the fuel tank; this finding led the FAA to propose a rule, adopted in July 2008 after years of research into solutions, requiring installation of an inerting system in the center fuel tank of most large aircraft. At the time, the new safety system was expected to cost US$100,000 to $450,000 per aircraft and to add a modest amount of weight. El Al Flight 1862 crashed after the fuse pins for an engine broke off shortly after take-off due to metal fatigue. Instead of dropping away from the wing, the engine knocked off the adjacent engine and damaged the wing. As increasing numbers of "classic" 747-100 and 747-200 series aircraft have been retired, some have found their way into museums or other uses. In recent years, some older 747-300s and 747-400s have found their way into museums as well. Upon its retirement from service, the second 747 in the production line was dismantled and shipped to Hopyeong, Namyangju, Gyeonggi-do, South Korea, where it was re-assembled, repainted in a livery similar to that of Air Force One and converted into a restaurant. Originally flown commercially by Pan Am as N747PA, "Clipper Juan T. Trippe", and repaired for service following a tailstrike, it stayed with the airline until its bankruptcy. The restaurant closed by 2009, and the aircraft was scrapped in 2010. A former British Airways 747-200B, G-BDXJ, is parked at the Dunsfold Aerodrome in Surrey, England and has been used as a movie set for productions such as the 2006 James Bond film, "Casino Royale". The plane also appears frequently in the BBC television series "Top Gear", which is filmed at Dunsfold. The "Jumbohostel", using a converted 747-200, opened at Arlanda Airport, Stockholm on January 15, 2009. The wings of a 747 have been recycled as roofs of a house in Malibu, California. 
Following its debut, the 747 rapidly achieved iconic status, appearing in numerous film productions such as the disaster films "Airport 1975" and "Airport '77", as well as "Air Force One", "Die Hard 2", and "Executive Decision". Appearing in over 300 film productions, the 747 is one of the most widely depicted civilian aircraft and is considered by many to be one of the most iconic in film history. The aircraft entered the cultural lexicon as the original "Jumbo Jet", a term coined by the aviation media to describe its size, and was also nicknamed "Queen of the Skies." British Empire The British Empire comprised the dominions, colonies, protectorates, mandates and other territories ruled or administered by the United Kingdom and its predecessor states. It originated with the overseas possessions and trading posts established by England between the late 16th and early 18th centuries. At its height, it was the largest empire in history and, for over a century, was the foremost global power. By 1913, the British Empire held sway over 412 million people, almost a quarter of the world population at the time, and by 1920, it covered about a quarter of the Earth's total land area. As a result, its political, legal, linguistic and cultural legacy is widespread. At the peak of its power, the phrase "the empire on which the sun never sets" was often used to describe the British Empire, because its expanse around the globe meant that the sun was always shining on at least one of its territories. During the Age of Discovery in the 15th and 16th centuries, Portugal and Spain pioneered European exploration of the globe, and in the process established large overseas empires. Envious of the great wealth these empires generated, England, France, and the Netherlands began to establish colonies and trade networks of their own in the Americas and Asia. A series of wars in the 17th and 18th centuries with the Netherlands and France left England and then, following union between England and Scotland in 1707, Great Britain, the dominant colonial power in North America. It then became the dominant power in the Indian subcontinent after the East India Company's conquest of Mughal Bengal at the Battle of Plassey in 1757. The independence of the Thirteen Colonies in North America in 1783 after the American War of Independence caused Britain to lose some of its oldest and most populous colonies. British attention soon turned towards Asia, Africa, and the Pacific. After the defeat of France in the Revolutionary and Napoleonic Wars (1792–1815), Britain emerged as the principal naval and imperial power of the 19th century. Unchallenged at sea, British dominance was later described as "Pax Britannica" ("British Peace"), a period of relative peace in Europe and the world (1815–1914) during which the British Empire became the global hegemon and adopted the role of global policeman. In the early 19th century, the Industrial Revolution began to transform Britain, so that by the time of the Great Exhibition in 1851, the country was described as the "workshop of the world". The British Empire expanded to include most of India, large parts of Africa and many other territories throughout the world. Alongside the formal control that Britain exerted over its own colonies, its dominance of much of world trade meant that it effectively controlled the economies of many regions, such as Asia and Latin America. During the 19th century, Britain's population increased at a dramatic rate, accompanied by rapid urbanisation, which caused significant social and economic stresses. 
To seek new markets and sources of raw materials, the British government under Benjamin Disraeli initiated a period of imperial expansion in Egypt, South Africa, and elsewhere. Canada, Australia, and New Zealand became self-governing dominions. By the start of the 20th century, Germany and the United States had begun to challenge Britain's economic lead. Subsequent military and economic tensions between Britain and Germany were major causes of the First World War, during which Britain relied heavily upon its empire. The conflict placed enormous strain on the military, financial and manpower resources of Britain. Although the British Empire achieved its largest territorial extent immediately after World War I, Britain was no longer the world's pre-eminent industrial or military power. In the Second World War, Britain's colonies in Southeast Asia were occupied by Japan. Despite the final victory of Britain and its allies, the damage to British prestige helped to accelerate the decline of the empire. India, Britain's most valuable and populous possession, achieved independence as part of a larger decolonisation movement in which Britain granted independence to most territories of the empire. The transfer of Hong Kong to China in 1997 marked for many the end of the British Empire. Fourteen overseas territories remain under British sovereignty. After independence, many former British colonies joined the Commonwealth of Nations, a free association of independent states. The United Kingdom is now one of 16 Commonwealth nations, a grouping known informally as the Commonwealth realms, that share a monarch, Queen Elizabeth II. The foundations of the British Empire were laid when England and Scotland were separate kingdoms. In 1496, King Henry VII of England, following the successes of Spain and Portugal in overseas exploration, commissioned John Cabot to lead a voyage to discover a route to Asia via the North Atlantic. Cabot sailed in 1497, five years after the European discovery of America, but he made landfall on the coast of Newfoundland and, mistakenly believing (like Christopher Columbus) that he had reached Asia, made no attempt to found a colony. Cabot led another voyage to the Americas the following year, but nothing was ever heard of his ships again. No further attempts to establish English colonies in the Americas were made until well into the reign of Queen Elizabeth I, during the last decades of the 16th century. In the meantime, the 1533 Statute in Restraint of Appeals had declared "that this realm of England is an Empire". The subsequent Protestant Reformation turned England and Catholic Spain into implacable enemies. In 1562, the English Crown encouraged the privateers John Hawkins and Francis Drake to engage in slave-raiding attacks against Spanish and Portuguese ships off the coast of West Africa with the aim of breaking into the Atlantic slave trade. This effort was rebuffed and later, as the Anglo-Spanish Wars intensified, Elizabeth I gave her blessing to further privateering raids against Spanish ports in the Americas and shipping that was returning across the Atlantic, laden with treasure from the New World. At the same time, influential writers such as Richard Hakluyt and John Dee (who was the first to use the term "British Empire") were beginning to press for the establishment of England's own empire. 
By this time, Spain had become the dominant power in the Americas and was exploring the Pacific Ocean, Portugal had established trading posts and forts from the coasts of Africa and Brazil to China, and France had begun to settle the Saint Lawrence River area, later to become New France. Although England trailed behind other European powers in establishing overseas colonies, it had been engaged during the 16th century in the settlement of Ireland with Protestants from England and Scotland, drawing on precedents dating back to the Norman invasion of Ireland in 1169. Several people who helped establish the Plantations of Ireland also played a part in the early colonisation of North America, particularly a group known as the West Country men. In 1578, Elizabeth I granted a patent to Humphrey Gilbert for discovery and overseas exploration. That year, Gilbert sailed for the Caribbean with the intention of engaging in piracy and establishing a colony in North America, but the expedition was aborted before it had crossed the Atlantic. In 1583, he embarked on a second attempt, on this occasion to the island of Newfoundland whose harbour he formally claimed for England, although no settlers were left behind. Gilbert did not survive the return journey to England, and was succeeded by his half-brother, Walter Raleigh, who was granted his own patent by Elizabeth in 1584. Later that year, Raleigh founded the Roanoke Colony on the coast of present-day North Carolina, but lack of supplies caused the colony to fail. In 1603, James VI, King of Scots, ascended (as James I) to the English throne and in 1604 negotiated the Treaty of London, ending hostilities with Spain. Now at peace with its main rival, English attention shifted from preying on other nations' colonial infrastructures to the business of establishing its own overseas colonies. The British Empire began to take shape during the early 17th century, with the English settlement of North America and the smaller islands of the Caribbean, and the establishment of joint-stock companies, most notably the East India Company, to administer colonies and overseas trade. This period, until the loss of the Thirteen Colonies after the American War of Independence towards the end of the 18th century, has subsequently been referred to by some historians as the "First British Empire". The Caribbean initially provided England's most important and lucrative colonies, but not before several attempts at colonisation failed. An attempt to establish a colony in Guiana in 1604 lasted only two years, and failed in its main objective to find gold deposits. Colonies in St Lucia (1605) and Grenada (1609) also rapidly folded, but settlements were successfully established in St. Kitts (1624), Barbados (1627) and Nevis (1628). The colonies soon adopted the system of sugar plantations successfully used by the Portuguese in Brazil, which depended on slave labour, and—at first—Dutch ships, to sell the slaves and buy the sugar. To ensure that the increasingly healthy profits of this trade remained in English hands, Parliament decreed in 1651 that only English ships would be able to ply their trade in English colonies. This led to hostilities with the United Dutch Provinces—a series of Anglo-Dutch Wars—which would eventually strengthen England's position in the Americas at the expense of the Dutch. In 1655, England annexed the island of Jamaica from the Spanish, and in 1666 succeeded in colonising the Bahamas. 
England's first permanent settlement in the Americas was founded in 1607 in Jamestown, led by Captain John Smith and managed by the Virginia Company. Bermuda was settled and claimed by England as a result of the 1609 shipwreck of the Virginia Company's flagship, and in 1615 was turned over to the newly formed Somers Isles Company. The Virginia Company's charter was revoked in 1624 and direct control of Virginia was assumed by the crown, thereby founding the Colony of Virginia. The London and Bristol Company was created in 1610 with the aim of creating a permanent settlement on Newfoundland, but was largely unsuccessful. In 1620, Plymouth was founded as a haven for Puritan religious separatists, later known as the Pilgrims. Fleeing from religious persecution would become the motive of many English would-be colonists to risk the arduous trans-Atlantic voyage: Maryland was founded as a haven for Roman Catholics (1634), Rhode Island (1636) as a colony tolerant of all religions and Connecticut (1639) for Congregationalists. The Province of Carolina was founded in 1663. With the surrender of Fort Amsterdam in 1664, England gained control of the Dutch colony of New Netherland, renaming it New York. This was formalised in negotiations following the Second Anglo-Dutch War, in exchange for Suriname. In 1681, the colony of Pennsylvania was founded by William Penn. The American colonies were less financially successful than those of the Caribbean, but had large areas of good agricultural land and attracted far larger numbers of English emigrants who preferred their temperate climates. In 1670, Charles II incorporated by royal charter the Hudson's Bay Company (HBC), granting it a monopoly on the fur trade in the area known as Rupert's Land, which would later form a large proportion of the Dominion of Canada. Forts and trading posts established by the HBC were frequently the subject of attacks by the French, who had established their own fur trading colony in adjacent New France. Two years later, the Royal African Company was inaugurated, receiving from King Charles a monopoly of the trade to supply slaves to the British colonies of the Caribbean. From the outset, slavery was the basis of the British Empire in the West Indies. Until the abolition of its slave trade in 1807, Britain was responsible for the transportation of 3.5 million African slaves to the Americas, a third of all slaves transported across the Atlantic. To facilitate this trade, forts were established on the coast of West Africa, such as James Island, Accra and Bunce Island. In the British Caribbean, the percentage of the population of African descent rose from 25% in 1650 to around 80% in 1780, and in the Thirteen Colonies from 10% to 40% over the same period (the majority in the southern colonies). For the slave traders, the trade was extremely profitable, and became a major economic mainstay for such western British cities as Bristol and Liverpool, which formed the third corner of the triangular trade with Africa and the Americas. For the transported, harsh and unhygienic conditions on the slaving ships and poor diets meant that the average mortality rate during the Middle Passage was one in seven. In 1695, the Parliament of Scotland granted a charter to the Company of Scotland, which established a settlement in 1698 on the Isthmus of Panama. Besieged by neighbouring Spanish colonists of New Granada, and afflicted by malaria, the colony was abandoned two years later. 
The Darien scheme was a financial disaster for Scotland—a quarter of Scottish capital was lost in the enterprise—and ended Scottish hopes of establishing its own overseas empire. The episode also had major political consequences, persuading the governments of both England and Scotland of the merits of a union of countries, rather than just crowns. This occurred in 1707 with the Treaty of Union, establishing the Kingdom of Great Britain. At the end of the 16th century, England and the Netherlands began to challenge Portugal's monopoly of trade with Asia, forming private joint-stock companies to finance the voyages—the English, later British, East India Company and the Dutch East India Company, chartered in 1600 and 1602 respectively. The primary aim of these companies was to tap into the lucrative spice trade, an effort focused mainly on two regions: the East Indies archipelago, and an important hub in the trade network, India. There, they competed for trade supremacy with Portugal and with each other. Although England ultimately eclipsed the Netherlands as a colonial power, in the short term the Netherlands' more advanced financial system and the three Anglo-Dutch Wars of the 17th century left it with a stronger position in Asia. Hostilities ceased after the Glorious Revolution of 1688 when the Dutch William of Orange ascended the English throne, bringing peace between the Netherlands and England. A deal between the two nations left the spice trade of the East Indies archipelago to the Netherlands and the textiles industry of India to England, but textiles soon overtook spices in terms of profitability, and by 1720, in terms of sales, the British company had overtaken the Dutch. Peace between England and the Netherlands in 1688 meant that the two countries entered the Nine Years' War as allies, but the conflict—waged in Europe and overseas between France, Spain and the Anglo-Dutch alliance—left the English a stronger colonial power than the Dutch, who were forced to devote a larger proportion of their military budget to the costly land war in Europe. The 18th century saw England (after 1707, Britain) rise to be the world's dominant colonial power, with France becoming its main rival on the imperial stage. The death of Charles II of Spain in 1700 and his bequeathal of Spain and its colonial empire to Philippe of Anjou, a grandson of the King of France, raised the prospect of the unification of France, Spain and their respective colonies, an unacceptable state of affairs for England and the other powers of Europe. In 1701, England, Portugal and the Netherlands sided with the Holy Roman Empire against Spain and France in the War of the Spanish Succession, which lasted until 1714. At the concluding Treaty of Utrecht, Philip renounced his and his descendants' right to the French throne and Spain lost its empire in Europe. The British Empire was territorially enlarged: from France, Britain gained Newfoundland and Acadia, and from Spain, Gibraltar and Menorca. Gibraltar became a critical naval base and allowed Britain to control the Atlantic entry and exit point to the Mediterranean. Spain also ceded the rights to the lucrative "asiento" (permission to sell slaves in Spanish America) to Britain. 
During the middle decades of the 18th century, there were several outbreaks of military conflict on the Indian subcontinent, the Carnatic Wars, as the English East India Company (often known simply as "the Company") and its French counterpart, the French East India Company ("Compagnie française des Indes orientales"), struggled alongside local rulers to fill the vacuum that had been left by the decline of the Mughal Empire. The Battle of Plassey in 1757, in which the British, led by Robert Clive, defeated the Nawab of Bengal and his French allies, left the British East India Company in control of Bengal and as the major military and political power in India. France was left in control of its enclaves but with military restrictions and an obligation to support British client states, ending French hopes of controlling India. In the following decades, the British East India Company gradually increased the size of the territories under its control, either ruling directly or via local rulers under the threat of force from the British Indian Army, the vast majority of which was composed of Indian sepoys. The British and French struggles in India became but one theatre of the global Seven Years' War (1756–1763) involving France, Britain and the other major European powers. The signing of the Treaty of Paris (1763) had important consequences for the future of the British Empire. In North America, France's future as a colonial power effectively ended with the recognition of British claims to Rupert's Land, and the ceding of New France to Britain (leaving a sizeable French-speaking population under British control) and Louisiana to Spain. Spain ceded Florida to Britain. Along with its victory over France in India, the Seven Years' War therefore left Britain as the world's most powerful maritime power. During the 1760s and early 1770s, relations between the Thirteen Colonies and Britain became increasingly strained, primarily because of resentment of the British Parliament's attempts to govern and tax American colonists without their consent. This was summarised at the time by the slogan "No taxation without representation", a perceived violation of the guaranteed Rights of Englishmen. The American Revolution began with rejection of Parliamentary authority and moves towards self-government. In response, Britain sent troops to reimpose direct rule, leading to the outbreak of war in 1775. The following year, in 1776, the United States declared independence. The entry of France into the war in 1778 tipped the military balance in the Americans' favour, and after a decisive defeat at Yorktown in 1781, Britain began negotiating peace terms. American independence was acknowledged at the Peace of Paris in 1783. The loss of such a large portion of British America, at the time Britain's most populous overseas possession, is seen by some historians as the event defining the transition between the "first" and "second" empires, in which Britain shifted its attention away from the Americas to Asia, the Pacific and later Africa. Adam Smith's "Wealth of Nations", published in 1776, had argued that colonies were redundant, and that free trade should replace the old mercantilist policies that had characterised the first period of colonial expansion, dating back to the protectionism of Spain and Portugal. The growth of trade between the newly independent United States and Britain after 1783 seemed to confirm Smith's view that political control was not necessary for economic success. 
The war to the south influenced British policy in Canada, where between 40,000 and 100,000 defeated Loyalists had migrated from the new United States following independence. The 14,000 Loyalists who went to the Saint John and Saint Croix river valleys, then part of Nova Scotia, felt too far removed from the provincial government in Halifax, so London split off New Brunswick as a separate colony in 1784. The Constitutional Act of 1791 created the provinces of Upper Canada (mainly English-speaking) and Lower Canada (mainly French-speaking) to defuse tensions between the French and British communities, and implemented governmental systems similar to those employed in Britain, with the intention of asserting imperial authority and not allowing the sort of popular control of government that was perceived to have led to the American Revolution. Tensions between Britain and the United States escalated again during the Napoleonic Wars, as Britain tried to cut off American trade with France and boarded American ships to impress men into the Royal Navy. The US declared war, the War of 1812, and invaded Canadian territory. In response Britain invaded the US, but the pre-war boundaries were reaffirmed by the 1814 Treaty of Ghent, ensuring Canada's future would be separate from that of the United States. Since 1718, transportation to the American colonies had been a penalty for various offences in Britain, with approximately one thousand convicts transported per year across the Atlantic. Forced to find an alternative location after the loss of the Thirteen Colonies in 1783, the British government turned to the newly discovered lands of Australia. The coast of Australia had been discovered for Europeans by the Dutch explorer Willem Janszoon in 1606 and was named New Holland by the Dutch East India Company, but there was no attempt to colonise it. In 1770 James Cook discovered the eastern coast of Australia while on a scientific voyage to the South Pacific Ocean, claimed the continent for Britain, and named it New South Wales. In 1778, Joseph Banks, Cook's botanist on the voyage, presented evidence to the government on the suitability of Botany Bay for the establishment of a penal settlement, and in 1787 the first shipment of convicts set sail, arriving in 1788. Britain continued to transport convicts to New South Wales until 1840, to Tasmania until 1853 and to Western Australia until 1868. The Australian colonies became profitable exporters of wool and gold, mainly because of gold rushes in the colony of Victoria, making its capital Melbourne for a time the richest city in the world and the second largest city (after London) in the British Empire. During his voyage, Cook also visited New Zealand, first discovered by Dutch explorer Abel Tasman in 1642, and claimed the North and South islands for the British crown in 1769 and 1770 respectively. Initially, interaction between the indigenous Māori population and Europeans was limited to the trading of goods. European settlement increased through the early decades of the 19th century, with numerous trading stations established, especially in the North. In 1839, the New Zealand Company announced plans to buy large tracts of land and establish colonies in New Zealand. On 6 February 1840, Captain William Hobson and around 40 Maori chiefs signed the Treaty of Waitangi. 
This treaty is considered by many to be New Zealand's founding document, but differing interpretations of the Māori and English versions of the text have meant that it continues to be a source of dispute. Britain was challenged again by France under Napoleon, in a struggle that, unlike previous wars, represented a contest of ideologies between the two nations. It was not only Britain's position on the world stage that was at risk: Napoleon threatened to invade Britain itself, just as his armies had overrun many countries of continental Europe. The Napoleonic Wars were therefore ones in which Britain invested large amounts of capital and resources to win. French ports were blockaded by the Royal Navy, which won a decisive victory over a Franco-Spanish fleet at Trafalgar in 1805. Overseas colonies were attacked and occupied, including those of the Netherlands, which was annexed by Napoleon in 1810. France was finally defeated by a coalition of European armies in 1815. Britain was again the beneficiary of peace treaties: France ceded the Ionian Islands, Malta (which it had occupied in 1797 and 1798 respectively), Mauritius, Saint Lucia, and Tobago; Spain ceded Trinidad; and the Netherlands ceded Guyana and the Cape Colony. Britain returned Guadeloupe, Martinique, French Guiana, and Réunion to France, and Java and Suriname to the Netherlands, while gaining control of Ceylon (1795–1815). With the advent of the Industrial Revolution, goods produced by slavery became less important to the British economy. Added to this was the cost of suppressing regular slave rebellions. With support from the British abolitionist movement, Parliament enacted the Slave Trade Act in 1807, which abolished the slave trade in the empire. In 1808, Sierra Leone Colony and Protectorate was designated an official British colony for freed slaves. Parliamentary reform in 1832 saw the influence of the West India Committee decline. The Slavery Abolition Act, passed the following year, abolished slavery in the British Empire on 1 August 1834, finally bringing the Empire into line with the law in the UK (with the exception of St. Helena, Ceylon and the territories administered by the East India Company, though these exclusions were later repealed). Under the Act, slaves were granted full emancipation after a period of four to six years of "apprenticeship". The British government compensated slave-owners. Between 1815 and 1914, a period referred to as Britain's "imperial century" by some historians, around 10 million square miles (26 million km2) of territory and roughly 400 million people were added to the British Empire. Victory over Napoleon left Britain without any serious international rival, other than Russia in Central Asia. Unchallenged at sea, Britain adopted the role of global policeman, a state of affairs later known as the "Pax Britannica", and a foreign policy of "splendid isolation". Alongside the formal control it exerted over its own colonies, Britain's dominant position in world trade meant that it effectively controlled the economies of many countries, such as China, Argentina and Siam, an arrangement that has been described by some historians as an "Informal Empire". British imperial strength was underpinned by the steamship and the telegraph, new technologies invented in the second half of the 19th century, allowing it to control and defend the empire. By 1902, the British Empire was linked together by a network of telegraph cables, called the All Red Line. The East India Company drove the expansion of the British Empire in Asia. 
The Company's army had first joined forces with the Royal Navy during the Seven Years' War, and the two continued to co-operate in arenas outside India: the eviction of the French from Egypt (1799), the capture of Java from the Netherlands (1811), the acquisition of Penang Island (1786), Singapore (1819) and Malacca (1824), and the defeat of Burma (1826). From its base in India, the Company had also been engaged in an increasingly profitable opium export trade to China since the 1730s. This trade, illegal since it was outlawed by the Qing dynasty in 1729, helped reverse the trade imbalances resulting from the British imports of tea, which saw large outflows of silver from Britain to China. In 1839, the confiscation by the Chinese authorities at Canton of 20,000 chests of opium led Britain to attack China in the First Opium War, and resulted in the seizure by Britain of Hong Kong Island, at that time a minor settlement. During the late 18th and early 19th centuries the British Crown began to assume an increasingly large role in the affairs of the Company. A series of Acts of Parliament were passed, including the Regulating Act of 1773, Pitt's India Act of 1784 and the Charter Act of 1813 which regulated the Company's affairs and established the sovereignty of the Crown over the territories that it had acquired. The Company's eventual end was precipitated by the Indian Rebellion in 1857, a conflict that had begun with the mutiny of sepoys, Indian troops under British officers and discipline. The rebellion took six months to suppress, with heavy loss of life on both sides. The following year the British government dissolved the Company and assumed direct control over India through the Government of India Act 1858, establishing the British Raj, where an appointed governor-general administered India and Queen Victoria was crowned the Empress of India. India became the empire's most valuable possession, "the Jewel in the Crown", and was the most important source of Britain's strength. A series of serious crop failures in the late 19th century led to widespread famines on the subcontinent in which it is estimated that over 15 million people died. The East India Company had failed to implement any coordinated policy to deal with the famines during its period of rule. Later, under direct British rule, commissions were set up after each famine to investigate the causes and implement new policies, which took until the early 1900s to have an effect. During the 19th century, Britain and the Russian Empire vied to fill the power vacuums that had been left by the declining Ottoman Empire, Qajar dynasty and Qing Dynasty. This rivalry in Central Asia came to be known as the "Great Game". As far as Britain was concerned, defeats inflicted by Russia on Persia and Turkey demonstrated its imperial ambitions and capabilities and stoked fears in Britain of an overland invasion of India. In 1839, Britain moved to pre-empt this by invading Afghanistan, but the First Anglo-Afghan War was a disaster for Britain. When Russia invaded the Turkish Balkans in 1853, fears of Russian dominance in the Mediterranean and Middle East led Britain and France to invade the Crimean Peninsula to destroy Russian naval capabilities. The ensuing Crimean War (1854–56), which involved new techniques of modern warfare, was the only global war fought between Britain and another imperial power during the "Pax Britannica" and was a resounding defeat for Russia. 
The situation remained unresolved in Central Asia for two more decades, with Britain annexing Baluchistan in 1876 and Russia annexing Kirghizia, Kazakhstan, and Turkmenistan. For a while it appeared that another war would be inevitable, but the two countries reached an agreement on their respective spheres of influence in the region in 1878 and on all outstanding matters in 1907 with the signing of the Anglo-Russian Entente. The destruction of the Russian Navy by the Japanese at the Battle of Port Arthur during the Russo-Japanese War of 1904–05 also limited its threat to the British. The Dutch East India Company had founded the Cape Colony on the southern tip of Africa in 1652 as a way station for its ships travelling to and from its colonies in the East Indies. Britain formally acquired the colony and its large Afrikaner (or Boer) population in 1806, having occupied it in 1795 to prevent its falling into French hands during the Flanders Campaign. British immigration began to rise after 1820, and pushed thousands of Boers, resentful of British rule, northwards to found their own—mostly short-lived—independent republics, during the Great Trek of the late 1830s and early 1840s. In the process the Voortrekkers clashed repeatedly with the British, who had their own agenda with regard to colonial expansion in South Africa and to the various native African polities, including those of the Sotho and the Zulu nations. Eventually the Boers established two republics which had a longer lifespan: the South African Republic or Transvaal Republic (1852–77; 1881–1902) and the Orange Free State (1854–1902). In 1902 Britain occupied both republics, concluding a treaty with the two Boer Republics following the Second Boer War (1899–1902). In 1869 the Suez Canal opened under Napoleon III, linking the Mediterranean with the Indian Ocean. Initially the Canal was opposed by the British; but once opened, its strategic value was quickly recognised and the canal became the "jugular vein of the Empire". In 1875, the Conservative government of Benjamin Disraeli bought the indebted Egyptian ruler Isma'il Pasha's 44% shareholding in the Suez Canal for £4 million. Although this did not grant outright control of the strategic waterway, it did give Britain leverage. Joint Anglo-French financial control over Egypt ended in outright British occupation in 1882. The French were still majority shareholders and attempted to weaken the British position, but a compromise was reached with the 1888 Convention of Constantinople, which made the Canal officially neutral territory. With competitive French, Belgian and Portuguese activity in the lower Congo River region undermining orderly colonisation of tropical Africa, the Berlin Conference of 1884–85 was held to regulate the competition between the European powers in what was called the "Scramble for Africa" by defining "effective occupation" as the criterion for international recognition of territorial claims. The scramble continued into the 1890s, and caused Britain to reconsider its decision in 1885 to withdraw from Sudan. A joint force of British and Egyptian troops defeated the Mahdist Army in 1896, and rebuffed an attempted French invasion at Fashoda in 1898. Sudan was nominally made an Anglo-Egyptian condominium, but a British colony in reality. 
British gains in Southern and East Africa prompted Cecil Rhodes, pioneer of British expansion in Southern Africa, to urge a "Cape to Cairo" railway linking the strategically important Suez Canal to the mineral-rich south of the continent. During the 1880s and 1890s, Rhodes, with his privately owned British South Africa Company, occupied and annexed territories subsequently named after him, Rhodesia. The path to independence for the white colonies of the British Empire began with the 1839 Durham Report, which proposed unification and self-government for Upper and Lower Canada, as a solution to political unrest which had erupted in armed rebellions in 1837. This began with the passing of the Act of Union in 1840, which created the Province of Canada. Responsible government was first granted to Nova Scotia in 1848, and was soon extended to the other British North American colonies. With the passage of the British North America Act, 1867 by the British Parliament, Upper and Lower Canada, New Brunswick and Nova Scotia were formed into the Dominion of Canada, a confederation enjoying full self-government with the exception of international relations. Australia and New Zealand achieved similar levels of self-government after 1900, with the Australian colonies federating in 1901. The term "dominion status" was officially introduced at the Colonial Conference of 1907. The last decades of the 19th century saw concerted political campaigns for Irish home rule. Ireland had been united with Britain into the United Kingdom of Great Britain and Ireland with the Act of Union 1800 after the Irish Rebellion of 1798, and had suffered a severe famine between 1845 and 1852. Home rule was supported by the British Prime minister, William Gladstone, who hoped that Ireland might follow in Canada's footsteps as a Dominion within the empire, but his 1886 Home Rule bill was defeated in Parliament. Although the bill, if passed, would have granted Ireland less autonomy within the UK than the Canadian provinces had within their own federation, many MPs feared that a partially independent Ireland might pose a security threat to Great Britain or mark the beginning of the break-up of the empire. A second Home Rule bill was also defeated for similar reasons. A third bill was passed by Parliament in 1914, but not implemented because of the outbreak of the First World War leading to the 1916 Easter Rising. By the turn of the 20th century, fears had begun to grow in Britain that it would no longer be able to defend the metropole and the entirety of the empire while at the same time maintaining the policy of "splendid isolation". Germany was rapidly rising as a military and industrial power and was now seen as the most likely opponent in any future war. Recognising that it was overstretched in the Pacific and threatened at home by the Imperial German Navy, Britain formed an alliance with Japan in 1902 and with its old enemies France and Russia in 1904 and 1907, respectively. Britain's fears of war with Germany were realised in 1914 with the outbreak of the First World War. Britain quickly invaded and occupied most of Germany's overseas colonies in Africa. In the Pacific, Australia and New Zealand occupied German New Guinea and Samoa respectively. Plans for a post-war division of the Ottoman Empire, which had joined the war on Germany's side, were secretly drawn up by Britain and France under the 1916 Sykes–Picot Agreement. 
This agreement was not divulged to the Sharif of Mecca, whom the British had been encouraging to launch an Arab revolt against their Ottoman rulers, giving the impression that Britain was supporting the creation of an independent Arab state. The British declaration of war on Germany and its allies also committed the colonies and Dominions, which provided invaluable military, financial and material support. Over 2.5 million men served in the armies of the Dominions, as well as many thousands of volunteers from the Crown colonies. The contributions of Australian and New Zealand troops during the 1915 Gallipoli Campaign against the Ottoman Empire had a great impact on the national consciousness at home, and marked a watershed in the transition of Australia and New Zealand from colonies to nations in their own right. The countries continue to commemorate this occasion on Anzac Day. Canadians viewed the Battle of Vimy Ridge in a similar light. The important contribution of the Dominions to the war effort was recognised in 1917 by the British Prime Minister David Lloyd George when he invited each of the Dominion Prime Ministers to join an Imperial War Cabinet to co-ordinate imperial policy. Under the terms of the concluding Treaty of Versailles signed in 1919, the empire reached its greatest extent with the addition of 1.8 million square miles (4.7 million km2) of territory and 13 million new subjects. The colonies of Germany and the Ottoman Empire were distributed to the Allied powers as League of Nations mandates. Britain gained control of Palestine, Transjordan, Iraq, parts of Cameroon and Togoland, and Tanganyika. The Dominions themselves also acquired mandates of their own: the Union of South Africa gained South West Africa (modern-day Namibia), Australia gained New Guinea, and New Zealand Western Samoa. Nauru was made a combined mandate of Britain and the two Pacific Dominions. The changing world order that the war had brought about, in particular the growth of the United States and Japan as naval powers, and the rise of independence movements in India and Ireland, caused a major reassessment of British imperial policy. Forced to choose between alignment with the United States or Japan, Britain opted not to renew its Japanese alliance and instead signed the 1922 Washington Naval Treaty, under which Britain accepted naval parity with the United States. This decision was the source of much debate in Britain during the 1930s as militaristic governments took hold in Germany and Japan, helped in part by the Great Depression, for it was feared that the empire could not survive a simultaneous attack by both nations. The issue of the empire's security was a serious concern in Britain, as it was vital to the British economy. In 1919, the frustrations caused by delays to Irish home rule led the MPs of Sinn Féin, a pro-independence party that had won a majority of the Irish seats in the 1918 British general election, to establish an independent parliament in Dublin, at which Irish independence was declared. The Irish Republican Army simultaneously began a guerrilla war against the British administration. The Anglo-Irish War ended in 1921 with a stalemate and the signing of the Anglo-Irish Treaty, creating the Irish Free State, a Dominion within the British Empire, with effective internal independence but still constitutionally linked with the British Crown. 
Northern Ireland, consisting of six of the 32 Irish counties which had been established as a devolved region under the 1920 Government of Ireland Act, immediately exercised its option under the treaty to retain its existing status within the United Kingdom. A similar struggle began in India when the Government of India Act 1919 failed to satisfy demand for independence. Concerns over communist and foreign plots following the Ghadar Conspiracy ensured that war-time strictures were renewed by the Rowlatt Acts. This led to tension, particularly in the Punjab region, where repressive measures culminated in the Amritsar Massacre. In Britain, public opinion was divided over the morality of the massacre, between those who saw it as having saved India from anarchy and those who viewed it with revulsion. The subsequent Non-Cooperation Movement was called off in March 1922 following the Chauri Chaura incident, and discontent continued to simmer for the next 25 years. In 1922, Egypt, which had been declared a British protectorate at the outbreak of the First World War, was granted formal independence, though it continued to be a British client state until 1954. British troops remained stationed in Egypt until the signing of the Anglo-Egyptian Treaty in 1936, under which it was agreed that the troops would withdraw but continue to occupy and defend the Suez Canal zone. In return, Egypt was assisted in joining the League of Nations. Iraq, a British mandate since 1920, also gained membership of the League in its own right after achieving independence from Britain in 1932. In Palestine, Britain was presented with the problem of mediating between the Arabs and increasing numbers of Jews. The 1917 Balfour Declaration, which had been incorporated into the terms of the mandate, stated that a national home for the Jewish people would be established in Palestine, and Jewish immigration allowed up to a limit that would be determined by the mandatory power. This led to increasing conflict with the Arab population, who openly revolted in 1936. As the threat of war with Germany increased during the 1930s, Britain judged the support of Arabs as more important than the establishment of a Jewish homeland, and shifted to a pro-Arab stance, limiting Jewish immigration and in turn triggering a Jewish insurgency. The right of the Dominions to set their own foreign policy, independent of Britain, was recognised at the 1923 Imperial Conference. Britain's request for military assistance from the Dominions at the outbreak of the Chanak Crisis the previous year had been turned down by Canada and South Africa, and Canada had refused to be bound by the 1923 Treaty of Lausanne. After pressure from the Irish Free State and South Africa, the 1926 Imperial Conference issued the Balfour Declaration of 1926, declaring the Dominions to be "autonomous Communities within the British Empire, equal in status, in no way subordinate one to another" within a "British Commonwealth of Nations". This declaration was given legal substance under the 1931 Statute of Westminster. The parliaments of Canada, Australia, New Zealand, the Union of South Africa, the Irish Free State and Newfoundland were now independent of British legislative control: they could nullify British laws, and Britain could no longer pass laws for them without their consent. Newfoundland reverted to colonial status in 1933, suffering from financial difficulties during the Great Depression. 
The Irish Free State distanced itself further from the British state with the introduction of a new constitution in 1937, making it a republic in all but name. Britain's declaration of war against Nazi Germany in September 1939 included the Crown colonies and India but did not automatically commit the Dominions of Australia, Canada, New Zealand, Newfoundland and South Africa. All soon declared war on Germany, but Ireland chose to remain legally neutral throughout the war. After the Fall of France in June 1940, Britain and the empire stood alone against Germany, until the German invasion of the Soviet Union in June 1941. British Prime Minister Winston Churchill successfully lobbied President Franklin D. Roosevelt for military aid from the United States, but Roosevelt was not yet ready to ask Congress to commit the country to war. In August 1941, Churchill and Roosevelt met and signed the Atlantic Charter, which included the statement that "the rights of all peoples to choose the form of government under which they live" should be respected. This wording was ambiguous as to whether it referred to European countries invaded by Germany and Italy, or the peoples colonised by European nations, and would later be interpreted differently by the British, Americans, and nationalist movements. In December 1941, Japan launched, in quick succession, attacks on British Malaya, the United States naval base at Pearl Harbor, and Hong Kong. Churchill's reaction to the entry of the United States into the war was that Britain was now assured of victory and the future of the empire was safe, but the manner in which British forces were rapidly defeated in the Far East irreversibly harmed Britain's standing and prestige as an imperial power. Most damaging of all was the Fall of Singapore, which had previously been hailed as an impregnable fortress and the eastern equivalent of Gibraltar. The realisation that Britain could not defend its entire empire pushed Australia and New Zealand, which now appeared threatened by Japanese forces, into closer ties with the United States. This resulted in the 1951 ANZUS Pact between Australia, New Zealand and the United States of America. Though Britain and the empire emerged victorious from the Second World War, the effects of the conflict were profound, both at home and abroad. Much of Europe, a continent that had dominated the world for several centuries, was in ruins, and host to the armies of the United States and the Soviet Union, who now held the balance of global power. Britain was left essentially bankrupt, with insolvency only averted in 1946 after the negotiation of a US$4.33 billion loan from the United States, the last instalment of which was repaid in 2006. At the same time, anti-colonial movements were on the rise in the colonies of European nations. The situation was complicated further by the increasing Cold War rivalry of the United States and the Soviet Union. In principle, both nations were opposed to European colonialism. In practice, however, American anti-communism prevailed over anti-imperialism, and therefore the United States supported the continued existence of the British Empire to keep Communist expansion in check. The "wind of change" ultimately meant that the British Empire's days were numbered, and on the whole, Britain adopted a policy of peaceful disengagement from its colonies once stable, non-Communist governments were available to transfer power to. 
This was in contrast to other European powers such as France and Portugal, which waged costly and ultimately unsuccessful wars to keep their empires intact. Between 1945 and 1965, the number of people under British rule outside the UK itself fell from 700 million to five million, three million of whom were in Hong Kong. The pro-decolonisation Labour government, elected at the 1945 general election and led by Clement Attlee, moved quickly to tackle the most pressing issue facing the empire: Indian independence. India's two major political parties—the Indian National Congress (led by Mahatma Gandhi) and the Muslim League (led by Muhammad Ali Jinnah)—had been campaigning for independence for decades, but disagreed as to how it should be implemented. Congress favoured a unified secular Indian state, whereas the League, fearing domination by the Hindu majority, desired a separate Islamic state for Muslim-majority regions. Increasing civil unrest and the mutiny of the Royal Indian Navy during 1946 led Attlee to promise independence no later than 30 June 1948. When the urgency of the situation and risk of civil war became apparent, the newly appointed (and last) Viceroy, Lord Mountbatten, hastily brought forward the date to 15 August 1947. The borders drawn by the British to broadly partition India into Hindu and Muslim areas left tens of millions as minorities in the newly independent states of India and Pakistan. Millions of Muslims subsequently crossed from India to Pakistan and Hindus vice versa, and violence between the two communities cost hundreds of thousands of lives. Burma, which had been administered as part of the British Raj, and Sri Lanka gained their independence the following year in 1948. India, Pakistan and Sri Lanka became members of the Commonwealth, while Burma chose not to join. The British mandate in Palestine, where an Arab majority lived alongside a Jewish minority, presented the British with a similar problem to that of India. The matter was complicated by large numbers of Jewish refugees seeking to be admitted to Palestine following the Holocaust, while Arabs were opposed to the creation of a Jewish state. Frustrated by the intractability of the problem, attacks by Jewish paramilitary organisations and the increasing cost of maintaining its military presence, Britain announced in 1947 that it would withdraw in 1948 and leave the matter to the United Nations to solve. The UN General Assembly subsequently voted for a plan to partition Palestine into a Jewish and an Arab state. Following the surrender of Japan in the Second World War, anti-Japanese resistance movements in Malaya turned their attention towards the British, who had moved to quickly retake control of the colony, valuing it as a source of rubber and tin. The fact that the guerrillas were primarily Malayan-Chinese Communists meant that the British attempt to quell the uprising was supported by the Muslim Malay majority, on the understanding that once the insurgency had been quelled, independence would be granted. The Malayan Emergency, as it was called, began in 1948 and lasted until 1960, but by 1957, Britain felt confident enough to grant independence to the Federation of Malaya within the Commonwealth. In 1963, the 11 states of the federation together with Singapore, Sarawak and North Borneo joined to form Malaysia, but in 1965 Chinese-majority Singapore was expelled from the union following tensions between the Malay and Chinese populations. 
Brunei, which had been a British protectorate since 1888, declined to join the union and maintained its status until independence in 1984. In 1951, the Conservative Party returned to power in Britain, under the leadership of Winston Churchill. Churchill and the Conservatives believed that Britain's position as a world power relied on the continued existence of the empire, with the base at the Suez Canal allowing Britain to maintain its pre-eminent position in the Middle East in spite of the loss of India. However, Churchill could not ignore Gamal Abdul Nasser's new revolutionary government of Egypt that had taken power in 1952, and the following year it was agreed that British troops would withdraw from the Suez Canal zone and that Sudan would be granted self-determination by 1955, with independence to follow. Sudan was granted independence on 1 January 1956. In July 1956, Nasser unilaterally nationalised the Suez Canal. The response of Anthony Eden, who had succeeded Churchill as Prime Minister, was to collude with France to engineer an Israeli attack on Egypt that would give Britain and France an excuse to intervene militarily and retake the canal. Eden infuriated US President Dwight D. Eisenhower by his lack of consultation, and Eisenhower refused to back the invasion. Another of Eisenhower's concerns was the possibility of a wider war with the Soviet Union after it threatened to intervene on the Egyptian side. Eisenhower applied financial leverage by threatening to sell US reserves of the British pound and thereby precipitate a collapse of the British currency. Though the invasion force was militarily successful in its objectives, UN intervention and US pressure forced Britain into a humiliating withdrawal of its forces, and Eden resigned. The Suez Crisis very publicly exposed Britain's limitations to the world and confirmed its decline on the world stage, demonstrating that henceforth it could no longer act without at least the acquiescence, if not the full support, of the United States. The events at Suez wounded British national pride, leading one MP to describe it as "Britain's Waterloo" and another to suggest that the country had become an "American satellite". Margaret Thatcher later described the mindset she believed had befallen Britain's political leaders after Suez, in which they "went from believing that Britain could do anything to an almost neurotic belief that Britain could do nothing", from which Britain did not recover until the successful recapture of the Falkland Islands from Argentina in 1982. While the Suez Crisis caused British power in the Middle East to weaken, it did not collapse. Britain again deployed its armed forces to the region, intervening in Oman (1957), Jordan (1958) and Kuwait (1961), though on these occasions with American approval, as the new Prime Minister Harold Macmillan's foreign policy was to remain firmly aligned with the United States. Britain maintained a military presence in the Middle East for another decade. On 16 January 1968, a few weeks after the devaluation of the pound, Prime Minister Harold Wilson and his Defence Secretary Denis Healey announced that British troops would be withdrawn from major military bases East of Suez, which included the ones in the Middle East, and primarily from Malaysia and Singapore by the end of 1971, instead of 1975 as earlier planned. By that time over 50,000 British military personnel were still stationed in the Far East, including 30,000 in Singapore. 
The British withdrew from Aden in 1967, Bahrain in 1971, and the Maldives in 1976. Macmillan gave a speech in Cape Town, South Africa, in February 1960, in which he spoke of "the wind of change blowing through this continent". Macmillan wished to avoid the same kind of colonial war that France was fighting in Algeria, and under his premiership decolonisation proceeded rapidly. To the three colonies that had been granted independence in the 1950s—Sudan, the Gold Coast and Malaya—were added nearly ten times that number during the 1960s. Britain's remaining colonies in Africa, except for self-governing Southern Rhodesia, were all granted independence by 1968. British withdrawal from the southern and eastern parts of Africa was not a peaceful process. Kenyan independence was preceded by the eight-year Mau Mau Uprising. In Rhodesia, the 1965 Unilateral Declaration of Independence by the white minority resulted in a civil war that lasted until the Lancaster House Agreement of 1979, which set the terms for recognised independence in 1980, when Southern Rhodesia, Britain's last African colony, became the new nation of Zimbabwe. In the Mediterranean, a guerrilla war waged by Greek Cypriots ended in 1960, leading to an independent Cyprus, with the UK retaining the military bases of Akrotiri and Dhekelia. The Mediterranean islands of Malta and Gozo were amicably granted independence from the UK in 1964 and became the country of Malta, though the idea had been raised in 1955 of integration with Britain. Most of the UK's Caribbean territories achieved independence after the departure in 1961 and 1962 of Jamaica and Trinidad from the West Indies Federation, established in 1958 in an attempt to unite the British Caribbean colonies under one government, but which collapsed following the loss of its two largest members. Barbados achieved independence in 1966 and the remainder of the eastern Caribbean islands in the 1970s and 1980s, but Anguilla and the Turks and Caicos Islands opted to revert to British rule after they had already started on the path to independence. The British Virgin Islands, Cayman Islands and Montserrat opted to retain ties with Britain, while Guyana achieved independence in 1966. Britain's last colony on the American mainland, British Honduras, became a self-governing colony in 1964 and was renamed Belize in 1973, achieving full independence in 1981. A dispute with Guatemala over claims to Belize was left unresolved. British territories in the Pacific acquired independence in the 1970s and 1980s, beginning with Fiji in 1970 and ending with the New Hebrides (as Vanuatu) in 1980. Vanuatu's independence was delayed because of political conflict between English and French-speaking communities, as the islands had been jointly administered as a condominium with France. Fiji, Tuvalu, the Solomon Islands and Papua New Guinea chose to become Commonwealth realms. The passage of the British Nationality Act 1981, which reclassified the remaining Crown colonies as "British Dependent Territories" (renamed British Overseas Territories in 2002), meant that, aside from a scattering of islands and outposts, the process of decolonisation that had begun after the Second World War was largely complete. In 1982, Britain's resolve in defending its remaining overseas territories was tested when Argentina invaded the Falkland Islands, acting on a long-standing claim that dated back to the Spanish Empire. 
Britain's ultimately successful military response to retake the islands during the ensuing Falklands War was viewed by many as having contributed to reversing the downward trend in Britain's status as a world power. The same year, the Canadian government severed its last legal link with Britain by patriating the Canadian constitution. The 1982 Canada Act passed by the British parliament ended the need for British involvement in changes to the Canadian constitution. Similarly, the Australia Act 1986 (effective 3 March 1986) severed the constitutional link between Britain and the Australian states, while New Zealand's Constitution Act 1986 (effective 1 January 1987) reformed the constitution of New Zealand to sever its constitutional link with Britain. In 1984, Brunei, Britain's last remaining Asian protectorate, gained its independence. In September 1982 the Prime Minister, Margaret Thatcher, travelled to Beijing to negotiate with the Chinese government on the future of Britain's last major and most populous overseas territory, Hong Kong. Under the terms of the 1842 Treaty of Nanking, Hong Kong Island itself had been ceded to Britain "in perpetuity", but the vast majority of the colony was constituted by the New Territories, which had been acquired under a 99-year lease in 1898, due to expire in 1997. Thatcher, seeing parallels with the Falkland Islands, initially wished to hold Hong Kong and proposed British administration with Chinese sovereignty, though this was rejected by China. A deal was reached in 1984—under the terms of the Sino-British Joint Declaration, Hong Kong would become a special administrative region of the People's Republic of China, maintaining its way of life for at least 50 years. The handover ceremony in 1997 marked for many, including Charles, Prince of Wales, who was in attendance, "the end of Empire". Britain retains sovereignty over 14 territories outside the British Isles, which were renamed the British Overseas Territories in 2002. Three are uninhabited except for transient military or scientific personnel; the remaining eleven are self-governing to varying degrees and are reliant on the UK for foreign relations and defence. The British government has stated its willingness to assist any Overseas Territory that wishes to proceed to independence, where that is an option, and three territories have specifically voted to remain under British sovereignty (Bermuda in 1995, Gibraltar in 2002 and the Falkland Islands in 2013). British sovereignty of several of the overseas territories is disputed by their geographical neighbours: Gibraltar is claimed by Spain, the Falkland Islands and South Georgia and the South Sandwich Islands are claimed by Argentina, and the British Indian Ocean Territory is claimed by Mauritius and Seychelles. The British Antarctic Territory is subject to overlapping claims by Argentina and Chile, while many countries do not recognise any territorial claims in Antarctica. Most former British colonies and protectorates are among the 52 member states of the Commonwealth of Nations, a non-political, voluntary association of equal members, comprising a population of around 2.2 billion people. Sixteen Commonwealth realms voluntarily continue to share the British monarch, Queen Elizabeth II, as their head of state. 
These sixteen nations are distinct and equal legal entities – the United Kingdom, Australia, Canada, New Zealand, Antigua and Barbuda, The Bahamas, Barbados, Belize, Grenada, Jamaica, Papua New Guinea, Saint Kitts and Nevis, Saint Lucia, Saint Vincent and the Grenadines, Solomon Islands and Tuvalu. Decades, and in some cases centuries, of British rule and emigration have left their mark on the independent nations that arose from the British Empire. The empire established the use of English in regions around the world. Today it is the primary language of up to 460 million people and is spoken by about one and a half billion as a first, second or foreign language. The spread of English from the latter half of the 20th century has been helped in part by the cultural and economic influence of the United States, itself originally formed from British colonies. Except in Africa, where nearly all the former colonies have adopted the presidential system, the English parliamentary system has served as the template for the governments of many former colonies, and English common law for legal systems. The British Judicial Committee of the Privy Council still serves as the highest court of appeal for several former colonies in the Caribbean and Pacific. British missionaries, who travelled around the globe often in advance of soldiers and civil servants, spread Protestantism (including Anglicanism) to all continents. The British Empire provided refuge for religiously persecuted continental Europeans for hundreds of years. British colonial architecture, such as in churches, railway stations and government buildings, can be seen in many cities that were once part of the British Empire. Individual and team sports developed in Britain — particularly golf, football, cricket, rugby, netball, lawn bowls, hockey and lawn tennis — were also exported. The British choice of system of measurement, the imperial system, continues to be used in some countries in various ways. The convention of driving on the left-hand side of the road has been retained in much of the former empire. Political boundaries drawn by the British did not always reflect homogeneous ethnicities or religions, contributing to conflicts in formerly colonised areas. The British Empire was also responsible for large migrations of peoples. Millions left the British Isles, with the founding settler populations of the United States, Canada, Australia and New Zealand coming mainly from Britain and Ireland. Tensions remain between the white settler populations of these countries and their indigenous minorities, and between white settler minorities and indigenous majorities in South Africa and Zimbabwe. Settlers in Ireland from Great Britain have left their mark in the form of divided nationalist and unionist communities in Northern Ireland. Millions of people moved to and from British colonies, with large numbers of Indians emigrating to other parts of the empire, such as Malaysia and Fiji, and Chinese people to Malaysia, Singapore and the Caribbean. The demographics of Britain itself were changed after the Second World War owing to immigration to Britain from its former colonies. Balfour Declaration The Balfour Declaration was a public statement issued by the British government during World War I announcing support for the establishment of a "national home for the Jewish people" in Palestine, then an Ottoman region with a minority Jewish population (around 3–5% of the total). 
The declaration was contained in a letter dated 2 November 1917 from the United Kingdom's Foreign Secretary Arthur Balfour to Lord Rothschild, a leader of the British Jewish community, for transmission to the Zionist Federation of Great Britain and Ireland. The text of the declaration was published in the press on 9 November 1917. Immediately following their declaration of war on the Ottoman Empire in November 1914, the British War Cabinet began to consider the future of Palestine; within two months a memorandum was circulated to the Cabinet by a Zionist Cabinet member, Herbert Samuel, proposing the support of Zionist ambitions in order to enlist the support of Jews in the wider war. A committee was established in April 1915 by British Prime Minister H. H. Asquith to determine British policy toward the Ottoman Empire, including Palestine. Asquith, who had favoured post-war reform of the Ottoman Empire, resigned in December 1916; his replacement, David Lloyd George, favoured partition of the Empire. The first negotiations between the British and the Zionists took place at a conference on 7 February 1917 that included Sir Mark Sykes and the Zionist leadership. Subsequent discussions led to Balfour's request, on 19 June, that Rothschild and Chaim Weizmann submit a draft of a public declaration. Further drafts were discussed by the British Cabinet during September and October, with input from Zionist and anti-Zionist Jews but with no representation from the local population in Palestine. By late 1917, in the lead-up to the Balfour Declaration, the wider war had reached a stalemate, with two of Britain's allies not fully engaged: the United States had yet to suffer a casualty, and the Russians were in the midst of a revolution. A stalemate in southern Palestine was broken by the Battle of Beersheba on 31 October 1917. The release of the final declaration was authorised on 31 October; the preceding Cabinet discussion had referenced perceived propaganda benefits amongst the worldwide Jewish community for the Allied war effort. The opening words of the declaration represented the first public expression of support for Zionism by a major political power. The term "national home" had no precedent in international law, and was intentionally vague as to whether a Jewish state was contemplated. The intended boundaries of Palestine were not specified, and the British government later confirmed that the words "in Palestine" meant that the Jewish national home was not intended to cover all of Palestine. The second half of the declaration was added to satisfy opponents of the policy, who had claimed that it would otherwise prejudice the position of the local population of Palestine and encourage antisemitism worldwide by "stamping the Jews as strangers in their native lands". The declaration called for safeguarding the civil and religious rights of the Palestinian Arabs, who composed the vast majority of the local population, and also the rights of the Jewish communities in other countries outside of Palestine. The British government acknowledged in 1939 that the local population's views should have been taken into account, and recognised in 2017 that the declaration should have called for protection of the Palestinian Arabs' political rights. The declaration had many long-lasting consequences. 
It greatly increased popular support for Zionism within Jewish communities worldwide, and became a core component of the British Mandate for Palestine, the founding document of Mandatory Palestine, which later became Israel and the Palestinian territories. As a result, it is considered a principal cause of the ongoing Israeli–Palestinian conflict, often described as the world's most intractable conflict. Controversy remains over a number of areas, such as whether the declaration contradicted earlier promises the British made to the Sharif of Mecca in the McMahon–Hussein correspondence. Early British political support for an increased Jewish presence in the region of Palestine was based upon geopolitical calculations. This support began in the early 1840s and was led by Lord Palmerston, following the occupation of Syria and Palestine by separatist Ottoman governor Muhammad Ali of Egypt. French influence had grown in Palestine and the wider Middle East, and its role as protector of the Catholic communities began to grow, just as Russian influence had grown as protector of the Eastern Orthodox in the same regions. This left Britain without a sphere of influence, and thus a need to find or create their own regional "protégés". These political considerations were supported by a sympathetic evangelical Christian sentiment towards the "restoration of the Jews" to Palestine among elements of the mid-19th-century British political elite – most notably Lord Shaftesbury. The British Foreign Office actively encouraged Jewish emigration to Palestine, exemplified by Charles Henry Churchill's 1841–1842 exhortations to Moses Montefiore, the leader of the British Jewish community. Such efforts were premature, and did not succeed; only 24,000 Jews were living in Palestine on the eve of the emergence of Zionism within the world's Jewish communities in the last two decades of the 19th century. With the geopolitical shakeup occasioned by the outbreak of World War I, the earlier calculations, which had lapsed for some time, led to a renewal of strategic assessments and political bargaining over the Middle and Far East. Zionism arose in the late 19th century in reaction to anti-Semitic and exclusionary nationalist movements in Europe. Romantic nationalism in Central and Eastern Europe had helped to set off the Haskalah, or "Jewish Enlightenment", creating a split in the Jewish community between those who saw Judaism as their religion and those who saw it as their ethnicity or nation. The 1881–1884 anti-Jewish pogroms in the Russian Empire encouraged the growth of the latter identity, resulting in the formation of the Hovevei Zion pioneer organizations, the publication of Leon Pinsker's "Autoemancipation", and the first major wave of Jewish immigration to Palestine – retrospectively named the "First Aliyah". In 1896, Theodor Herzl, a Jewish journalist living in Austria-Hungary, published the foundational text of political Zionism, "Der Judenstaat" ("The Jews' State" or "The State of the Jews"), in which he asserted that the only solution to the "Jewish Question" in Europe, including growing anti-Semitism, was the establishment of a state for the Jews. A year later, Herzl founded the Zionist Organization, which at its first congress called for the establishment of "a home for the Jewish people in Palestine secured under public law". 
Proposed measures to attain that goal included the promotion of Jewish settlement there, the organisation of Jews in the diaspora, the strengthening of Jewish feeling and consciousness, and preparatory steps to attain necessary governmental grants. Herzl died in 1904 without having gained the political standing required to carry out his agenda. Zionist leader Chaim Weizmann, later President of the World Zionist Organisation, moved from Switzerland to the UK in 1904 and met Arthur Balfour – who had just launched his 1905–1906 election campaign after resigning as Prime Minister – in a session arranged by Charles Dreyfus, his Jewish constituency representative. Earlier that year, Balfour had successfully driven the Aliens Act through Parliament with impassioned speeches regarding the need to restrict the wave of immigration into Britain from Jews fleeing the Russian Empire. During this meeting, he asked what Weizmann's objections had been to the 1903 Uganda Scheme that Herzl had supported to provide a portion of British East Africa to the Jewish people as a homeland. The scheme, which had been proposed to Herzl by Joseph Chamberlain, Colonial Secretary in Balfour's Cabinet, following his trip to East Africa earlier in the year, had been subsequently voted down following Herzl's death by the Seventh Zionist Congress in 1905 after two years of heated debate in the Zionist Organization. Weizmann responded that he believed the English are to London as the Jews are to Jerusalem. In January 1914 Weizmann first met Baron Edmond de Rothschild, a member of the French branch of the Rothschild family and a leading proponent of the Zionist movement, in relation to a project to build a Hebrew university in Jerusalem. The Baron was not part of the World Zionist Organization, but had funded the Jewish agricultural colonies of the First Aliyah and transferred them to the Jewish Colonization Association in 1899. This connection was to bear fruit later that year when the Baron's son, James de Rothschild, requested a meeting with Weizmann on 25 November 1914, to enlist him in influencing those deemed to be receptive within the British government to their agenda of a "Jewish State" in Palestine. Through James's wife Dorothy, Weizmann was to meet Rózsika Rothschild, who introduced him to the English branch of the family – in particular her husband Charles and his older brother Walter, a zoologist and former member of parliament (MP). Their father, Nathan Rothschild, 1st Baron Rothschild, head of the English branch of the family, had a guarded attitude towards Zionism, but he died in March 1915 and his title was inherited by Walter. Prior to the declaration, about 8,000 of Britain's 300,000 Jews belonged to a Zionist organisation. Globally, as of 1913 – the latest known date prior to the declaration – the equivalent figure was approximately 1%. The year 1916 marked four centuries since Palestine had become part of the Ottoman Empire, also known as the Turkish Empire. For most of this period, the Jewish population represented a small minority, approximately 3% of the total, with Muslims representing the largest segment of the population, and Christians the second. The Turks began to apply restrictions on Jewish immigration to Palestine in late 1882, in response to the start of the First Aliyah earlier that year. 
Although this immigration was creating a certain amount of tension with the local population, mainly among the merchant and notable classes, in 1901 the Sublime Porte (the Ottoman central government) gave Jews the same rights as Arabs to buy land in Palestine and the percentage of Jews in the population rose to 7% by 1914. At the same time, with growing distrust of the Young Turks – Turkish nationalists who had taken control of the Empire in 1908 – and the Second Aliyah, Arab nationalism was on the rise, and in Palestine anti-Zionism was a unifying characteristic. Historians do not know whether these strengthening forces would still have ultimately resulted in conflict in the absence of the Balfour Declaration. In July 1914 war broke out in Europe between the Triple Entente (Britain, France, and the Russian Empire) and the Central Powers (Germany, Austria-Hungary, and, later that year, the Ottoman Empire). The British Cabinet first discussed Zionism at a meeting on 9 November 1914, four days after Britain's declaration of war on the Ottoman Empire, of which the Mutasarrifate of Jerusalem – often referred to as Palestine – was a component. At the meeting David Lloyd George, then Chancellor of the Exchequer, "referred to the ultimate destiny of Palestine". The Chancellor, whose law firm Lloyd George, Roberts and Co had been engaged a decade before by the Zionist Federation of Great Britain and Ireland to work on the Uganda Scheme, was to become Prime Minister by the time of the declaration, and was ultimately responsible for it. Weizmann's political efforts picked up speed, and on 10 December 1914 he met with Herbert Samuel, a British Cabinet member and a secular Jew who had studied Zionism; Samuel believed Weizmann's demands were too modest. Two days later, Weizmann met Balfour again, for the first time since their initial meeting in 1905; Balfour had been out of government ever since his electoral defeat in 1906, but remained a senior member of the Conservative Party in their role as Official Opposition. A month later, Samuel circulated a memorandum entitled "The Future of Palestine" to his Cabinet colleagues. The memorandum stated: "I am assured that the solution of the problem of Palestine which would be much the most welcome to the leaders and supporters of the Zionist movement throughout the world would be the annexation of the country to the British Empire". Samuel discussed a copy of his memorandum with Nathan Rothschild in February 1915, a month before the latter's death. It was the first time in an official record that enlisting the support of Jews as a war measure had been proposed. Many further discussions followed, including the initial meetings in 1915–16 between Lloyd George, who had been appointed Minister of Munitions in May 1915, and Weizmann, who was appointed as a scientific advisor to the ministry in September 1915. Seventeen years later, in his "War Memoirs", Lloyd George described these meetings as being the "fount and origin" of the declaration; historians have rejected this claim. In late 1915 the British High Commissioner to Egypt, Henry McMahon, exchanged ten letters with Hussein bin Ali, Sharif of Mecca, in which he promised Hussein to recognize Arab independence "in the limits and boundaries proposed by the Sherif of Mecca" in return for Hussein launching a revolt against the Ottoman Empire. The pledge excluded "portions of Syria" lying to the west of "the districts of Damascus, Homs, Hama and Aleppo". 
In the decades after the war, the extent of this coastal exclusion was hotly disputed since Palestine lay to the southwest of Damascus and was not explicitly mentioned. The Arab Revolt was launched on 5 June 1916, on the basis of the "quid pro quo" agreement in the correspondence. However, less than three weeks earlier the governments of the United Kingdom, France, and Russia secretly concluded the Sykes–Picot Agreement, which Balfour described later as a "wholly new method" for dividing the region, after the 1915 agreement "seems to have been forgotten". This Anglo-French treaty was negotiated in late 1915 and early 1916 between Sir Mark Sykes and François Georges-Picot, with the primary arrangements being set out in draft form in a joint memorandum on 5 January 1916. Sykes was a British Conservative MP who had risen to a position of significant influence on Britain's Middle East policy, beginning with his seat on the 1915 De Bunsen Committee and his initiative to create the Arab Bureau. Picot was a French diplomat and former consul-general in Beirut. Their agreement defined the proposed spheres of influence and control in Western Asia should the Triple Entente succeed in defeating the Ottoman Empire during World War I, dividing many Arab territories into British- and French-administered areas. In Palestine, internationalisation was proposed, with the form of administration to be confirmed after consultation with both Russia and Hussein; the January draft noted Christian and Muslim interests, and that "members of the Jewish community throughout the world have a conscientious and sentimental interest in the future of the country." Prior to this point, no active negotiations with Zionists had taken place, but Sykes had been aware of Zionism, was in contact with Moses Gaster – a former President of the English Zionist Federation – and may have seen Samuel's 1915 memorandum. In Sykes' mind, the agreement which bore his name was outdated even before it was signed – in March 1916, he wrote in a private letter: "to my mind the Zionists are now the key of the situation". These wartime initiatives, inclusive of the declaration, are frequently considered together by historians because of the potential, real or imagined, for incompatibility between them, particularly in regard to the disposition of Palestine. In the words of Professor Albert Hourani, founder of the Middle East Centre at St Antony's College, Oxford: "The argument about the interpretation of these agreements is one which is impossible to end, because they were intended to bear more than one interpretation." In terms of British politics, the declaration resulted from the coming into power of Lloyd George and his Cabinet, which had replaced the H. H. Asquith-led Cabinet in December 1916. Whilst both Prime Ministers were Liberals and both governments were wartime coalitions, Lloyd George and Balfour, appointed as his Foreign Secretary, favoured a post-war partition of the Ottoman Empire as a major British war aim, whereas Asquith and his Foreign Secretary, Sir Edward Grey, had favoured its reform. Two days after taking office, Lloyd George told General Robertson, the Chief of the Imperial General Staff, that he wanted a major victory, preferably the capture of Jerusalem, to impress British public opinion, and immediately consulted his War Cabinet about a "further campaign into Palestine when El Arish had been secured." 
Subsequent pressure from Lloyd George, over the reservations of Robertson, resulted in the recapture of the Sinai for British-controlled Egypt, and, with the capture of El Arish in December 1916 and Rafah in January 1917, the arrival of British forces at the southern borders of the Ottoman Empire. Following two unsuccessful attempts to capture Gaza between 26 March and 19 April, a six-month stalemate in Southern Palestine began; the Sinai and Palestine Campaign would not make any progress into Palestine until 31 October 1917. Following the change in government, Sykes was promoted into the War Cabinet Secretariat with responsibility for Middle Eastern affairs. In January 1917, despite having previously built a relationship with Moses Gaster, he began looking to meet other Zionist leaders; by the end of the month he had been introduced to Weizmann and his associate Nahum Sokolow, a journalist and executive of the World Zionist Organization who had moved to Britain at the beginning of the war. On 7 February 1917, Sykes, claiming to be acting in a private capacity, entered into substantive discussions with the Zionist leadership. The previous British correspondence with "the Arabs" was discussed at the meeting; Sokolow's notes record Sykes' description that "The Arabs professed that language must be the measure [by which control of Palestine should be determined] and [by that measure] could claim all Syria and Palestine. Still the Arabs could be managed, particularly if they received Jewish support in other matters." At this point the Zionists were still unaware of the Sykes-Picot Agreement, although they had their suspicions. One of Sykes' goals was the mobilization of Zionism to the cause of British suzerainty in Palestine, so as to have arguments to put to France in support of that objective. During the period of the British War Cabinet discussions leading up to the declaration, the war had reached a period of stalemate. On the Western Front the tide would first turn in favour of the Central Powers in spring 1918, before decisively turning in favour of the Allies from July 1918 onwards. Although the United States declared war on Germany in the spring of 1917, it did not suffer its first casualties until 2 November 1917, at which point President Woodrow Wilson still hoped to avoid dispatching large contingents of troops into the war. The Russian forces were known to be distracted by the ongoing Russian Revolution and the growing support for the Bolshevik faction, but Alexander Kerensky's Provisional Government had remained in the war; Russia only withdrew after the final stage of the revolution on 7 November 1917. Balfour met Weizmann at the Foreign Office on 22 March 1917; two days later, Weizmann described the meeting as being "the first time I had a real business talk with him". Weizmann explained at the meeting that the Zionists had a preference for a British protectorate over Palestine, as opposed to an American, French or international arrangement; Balfour agreed, but warned that "there may be difficulties with France and Italy". The French position in regard to Palestine and the wider Syria region during the lead up to the Balfour Declaration was largely dictated by the terms of the Sykes-Picot Agreement, and was complicated from 23 November 1915 by increasing French awareness of the British discussions with the Sherif of Mecca.
Prior to 1917, the British had led the fighting on the southern border of the Ottoman Empire alone, given their neighbouring Egyptian colony and the French preoccupation with the fighting on the Western Front that was taking place on their own soil. Italy's participation in the war, which began following the April 1915 Treaty of London, did not include involvement in the Middle Eastern sphere until the April 1917 Agreement of Saint-Jean-de-Maurienne; at this conference, Lloyd George had raised the question of a British protectorate of Palestine and the idea "had been very coldly received" by the French and the Italians. In May and June 1917, the French and Italians sent detachments to support the British as they built their reinforcements in preparation for a renewed attack on Palestine. In early April, Sykes and Picot were appointed to act as the chief negotiators once more, this time on a month-long mission to the Middle East for further discussions with the Sherif of Mecca and other Arab leaders. On 3 April 1917, Sykes met with Lloyd George, Curzon and Hankey to receive his instructions in this regard, namely to keep the French onside while "not prejudicing the Zionist movement and the possibility of its development under British auspices, [and not] enter into any political pledges to the Arabs, and particularly none in regard to Palestine". Before travelling to the Middle East, Picot, via Sykes, invited Nahum Sokolow to Paris to educate the French government on Zionism. Sykes, who had prepared the way in correspondence with Picot, arrived a few days after Sokolow; in the meantime Sokolow had met Picot and other French officials, and convinced the French Foreign Office to accept for study a statement of Zionist aims "in regard to facilities of colonization, communal autonomy, rights of language and establishment of a Jewish chartered company." Sykes went on ahead to Italy and had meetings with the British ambassador and British Vatican representative to prepare the way for Sokolow once again. Sokolow was granted an audience with Pope Benedict XV on 6 May 1917. Sokolow's notes of the meeting – the only meeting records known to historians – stated that the Pope expressed general sympathy and support for the Zionist project. On 21 May 1917 Angelo Sereni, president of the Committee of the Jewish Communities, presented Sokolow to Sidney Sonnino, the Italian Minister of Foreign Affairs. He was also received by Paolo Boselli, the Italian prime minister. Sonnino arranged for the secretary general of the ministry to send a letter to the effect that, although he could not express himself on the merits of a program which concerned all the allies, "generally speaking" he was not opposed to the legitimate claims of the Jews. On his return journey, Sokolow met with French leaders again and secured a letter dated 4 June 1917, giving assurances of sympathy towards the Zionist cause by Jules Cambon, head of the political section of the French foreign ministry. This letter was not published, but was deposited at the British Foreign Office. Following the United States' entry into the war on 6 April, the British Foreign Secretary led the Balfour Mission to Washington D.C. and New York, where he spent a month between mid-April and mid-May. During the trip he spent significant time discussing Zionism with Louis Brandeis, a leading Zionist and a close ally of Wilson who had been appointed as a Supreme Court Justice a year previously. 
By 13 June 1917, it was acknowledged by Ronald Graham, head of the Foreign Office's Middle Eastern affairs department, that the three most relevant politicians – the Prime Minister, the Foreign Secretary, and the Parliamentary Under-Secretary of State for Foreign Affairs, Lord Robert Cecil – were all in favour of Britain supporting the Zionist movement; on the same day Weizmann had written to Graham to advocate for a public declaration. Six days later, at a meeting on 19 June, Balfour asked Lord Rothschild and Weizmann to submit a formula for a declaration. Over the next few weeks, a 143-word draft was prepared by the Zionist negotiating committee, but it was considered too specific on sensitive areas by Sykes, Graham and Rothschild. Separately, a very different draft had been prepared by the Foreign Office, described in 1961 by Harold Nicolson – who had been involved in preparing the draft – as proposing a "sanctuary for Jewish victims of persecution". The Foreign Office draft was strongly opposed by the Zionists, and was discarded; no copy of the draft has been found in the Foreign Office archives. Following further discussion, a revised – and at just 46 words in length, much shorter – draft declaration was prepared and sent by Lord Rothschild to Balfour on 18 July. It was received by the Foreign Office, and the matter was brought to the Cabinet for formal consideration. The decision to release the declaration was taken by the British War Cabinet on 31 October 1917. This followed discussion at four War Cabinet meetings (including the 31 October meeting) over the space of the previous two months. In order to aid the discussions, the War Cabinet Secretariat, led by Maurice Hankey and supported by his Assistant Secretaries – primarily Sykes and his fellow Conservative MP and pro-Zionist Leo Amery – solicited outside perspectives to put before the Cabinet. These included the views of government ministers, war allies – notably from President Woodrow Wilson – and in October, formal submissions from six Zionist leaders and four non-Zionist Jews. British officials asked President Wilson for his consent on the matter on two occasions – first on 3 September, when he replied the time was not ripe, and later on 6 October, when he agreed with the release of the declaration. Excerpts from the minutes of these four War Cabinet meetings provide a description of the primary factors that the ministers considered. Declassification of British government archives has allowed scholars to piece together the choreography of the drafting of the declaration; in his widely cited 1961 book, Leonard Stein published four previous drafts of the declaration. The drafting began with Weizmann's guidance to the Zionist drafting team on its objectives in a letter dated 20 June 1917, one day following his meeting with Rothschild and Balfour. He proposed that the declaration from the British government should state: "its conviction, its desire or its intention to support Zionist aims for the creation of a Jewish national home in Palestine; no reference must be made I think to the question of the Suzerain Power because that would land the British into difficulties with the French; it must be a Zionist declaration." A month after the receipt of the much-reduced 12 July draft from Rothschild, Balfour proposed a number of mainly technical amendments.
The two subsequent drafts included much more substantial amendments: the first in a late August draft by Lord Milner – one of the original five members of Lloyd George's War Cabinet as a minister without portfolio – which reduced the geographic scope from all of Palestine to "in Palestine", and the second from Milner and Amery in early October, which added the two "safeguard clauses". Subsequent authors have debated who the "primary author" really was. In his posthumously published 1981 book "The Anglo-American Establishment", Georgetown University history professor Carroll Quigley explained his view that Lord Milner was the primary author of the declaration, and more recently, William D. Rubinstein, Professor of Modern History at Aberystwyth University, Wales, proposed Amery instead. Huneidi wrote that Ormsby-Gore, in a report he prepared for Shuckburgh, claimed authorship, together with Amery, of the final draft form. The agreed version of the declaration, a single sentence of just 67 words, was sent on 2 November 1917 in a short letter from Balfour to Walter Rothschild, for transmission to the Zionist Federation of Great Britain and Ireland. The declaration contained four clauses, of which the first two promised to support "the establishment in Palestine of a national home for the Jewish people", followed by two "safeguard clauses" with respect to "the civil and religious rights of existing non-Jewish communities in Palestine", and "the rights and political status enjoyed by Jews in any other country". The term "national home" was intentionally ambiguous, having no legal value or precedent in international law, such that its meaning was unclear when compared to other terms such as "state". The term was intentionally used instead of "state" because of opposition to the Zionist program within the British Cabinet. According to historian Norman Rose, the chief architects of the declaration contemplated that a Jewish State would emerge in time, while the Palestine Royal Commission concluded that the wording was "the outcome of a compromise between those Ministers who contemplated the ultimate establishment of a Jewish State and those who did not." Interpretation of the wording has been sought in the correspondence leading to the final version of the declaration. An official report to the War Cabinet sent by Sykes on 22 September said that the Zionists did not want "to set up a Jewish Republic or any other form of state in Palestine or in any part of Palestine" but rather preferred some form of protectorate as provided in the Palestine Mandate. A month later, Curzon produced a memorandum circulated on 26 October 1917 where he addressed two questions, the first concerning the meaning of the phrase "a National Home for the Jewish race in Palestine"; he noted that there were different opinions ranging from a fully fledged state to a merely spiritual centre for the Jews. Sections of the British press assumed that a Jewish state was intended even before the Declaration was finalized. In the United States the press began using the terms "Jewish National Home", "Jewish State", "Jewish republic" and "Jewish Commonwealth" interchangeably.
Treaty expert David Hunter Miller, who was at the conference and subsequently compiled a 22-volume compendium of documents, provides a report of the Intelligence Section of the American Delegation to the Paris Peace Conference of 1919 which recommended that "there be established a separate state in Palestine," and that "it will be the policy of the League of Nations to recognize Palestine as a Jewish state, as soon as it is a Jewish state in fact." The report further advised that an independent Palestinian state under a British League of Nations mandate be created. Jewish settlement would be allowed and encouraged in this state and this state's holy sites would be under the control of the League of Nations. Indeed, the Inquiry spoke positively about the possibility of a Jewish state eventually being created in Palestine if the necessary demographics for this were to exist. Historian Matthew Jacobs later wrote that the US approach was hampered by the "general absence of specialist knowledge about the region" and that "like much of the Inquiry's work on the Middle East, the reports on Palestine were deeply flawed" and "presupposed a particular outcome of the conflict". He quotes Miller's verdict on one report on the history and impact of Zionism: it was "absolutely inadequate from any standpoint and must be regarded as nothing more than material for a future report". On 2 December 1917, Lord Robert Cecil assured an audience that the government fully intended that "Judea [was] for the Jews." Yair Auron opines that Cecil, then a deputy Foreign Secretary representing the British Government at a celebratory gathering of the English Zionist Federation, "possibly went beyond his official brief" in saying (he cites Stein) "Our wish is that Arabian countries shall be for the Arabs, Armenia for the Armenians and Judaea for the Jews". The following October Neville Chamberlain, while chairing a Zionist meeting, discussed a "new Jewish State." At the time, Chamberlain was a Member of Parliament for Ladywood, Birmingham; recalling the event in 1939, just after Chamberlain had approved the 1939 White Paper, the Jewish Telegraphic Agency noted that the Prime Minister had "experienced a pronounced change of mind in the 21 years intervening". A year later, on the Declaration's second anniversary, General Jan Smuts said that Britain "would redeem her pledge ... and a great Jewish state would ultimately rise." Churchill expressed a similar view a few months later. At the 22 June 1921 meeting of the Imperial Cabinet, Churchill was asked by Arthur Meighen, the Canadian Prime Minister, about the meaning of the national home. Churchill said "If in the course of many years they become a majority in the country, they naturally would take it over ... pro rata with the Arab. We made an equal pledge that we would not turn the Arab off his land or invade his political and social rights". Responding to Curzon in January 1919, Balfour wrote "Weizmann has never put forward a claim for the Jewish Government of Palestine. Such a claim in my opinion is clearly inadmissible and personally I do not think we should go further than the original declaration which I made to Lord Rothschild". In February 1919, France issued a statement that it would not oppose putting Palestine under British trusteeship and the formation of a Jewish State.
Friedman further notes that France's attitude went on to change; Yehuda Blum, while discussing France's "unfriendly attitude towards the Jewish national movement", notes the content of a report made by Robert Vansittart (a leading member of the British delegation to the Paris Peace Conference) to Curzon in November 1920. Greece's Foreign Minister told the editor of the Salonica Jewish organ Pro-Israel that "the establishment of a Jewish State meets in Greece with full and sincere sympathy ... A Jewish Palestine would become an ally of Greece." In Switzerland, a number of noted historians, including professors Tobler, Forel-Yvorne, and Rogaz, supported the idea of establishing a Jewish state, with one referring to it as "a sacred right of the Jews." In Germany, officials and most of the press took the Declaration to mean a British sponsored state for the Jews. The British government, including Churchill, made it clear that the Declaration did not intend for the whole of Palestine to be converted into a Jewish National Home, "but that such a Home should be founded in Palestine." Emir Faisal, King of Syria and Iraq, made a formal written agreement with Zionist leader Chaim Weizmann, which was drafted by T.E. Lawrence, whereby they would try to establish a peaceful relationship between Arabs and Jews in Palestine. The 3 January 1919 Faisal–Weizmann Agreement was a short-lived agreement for Arab–Jewish cooperation on the development of a Jewish homeland in Palestine. Faisal did treat Palestine differently in his presentation to the Peace Conference on 6 February 1919, saying "Palestine, for its universal character, [should be] left on one side for the mutual consideration of all parties concerned". The agreement was never implemented. A subsequent letter was written in English by Lawrence for Faisal's signature. When the letter was tabled at the Shaw Commission in 1929, Rustam Haidar spoke to Faisal in Baghdad and cabled that Faisal had "no recollection that he wrote anything of the sort". In January 1930, Haidar wrote to a newspaper in Baghdad that Faisal "finds it exceedingly strange that such a matter is attributed to him as he at no time would consider allowing any foreign nation to share in an Arab country". Awni Abd al-Hadi, Faisal's secretary, wrote in his memoirs that he was not aware that a meeting between Frankfurter and Faisal took place and that: "I believe that this letter, assuming that it is authentic, was written by Lawrence, and that Lawrence signed it in English on behalf of Faisal. I believe this letter is part of the false claims made by Chaim Weizmann and Lawrence to lead astray public opinion." According to Allawi, the most likely explanation for the Frankfurter letter is that a meeting took place, a letter was drafted in English by Lawrence, but that its "contents were not entirely made clear to Faisal. He then may or may not have been induced to sign it", since it ran counter to Faisal's other public and private statements at the time. A 1 March interview in Le Matin quoted Faisal as saying: This feeling of respect for other religions dictates my opinion about Palestine, our neighbor. That the unhappy Jews come to reside there and behave as good citizens of this country, our humanity rejoices given that they are placed under a Muslim or Christian government mandated by The League of Nations. If they want to constitute a state and claim sovereign rights in this region, I foresee very serious dangers.
It is to be feared that there will be a conflict between them and the other races. Referring to his 1922 White Paper, Churchill later wrote that "there is nothing in it to prohibit the ultimate establishment of a Jewish State." And in private, many British officials agreed with the Zionists' interpretation that a state would be established when a Jewish majority was achieved. When Chaim Weizmann met with Churchill, Lloyd George and Balfour at Balfour's home in London on 21 July 1921, Lloyd George and Balfour assured Weizmann "that by the Declaration they had always meant an eventual Jewish State," according to Weizmann's minutes of that meeting. Lloyd George stated in 1937 that it was intended that Palestine would become a Jewish Commonwealth if and when Jews "had become a definite majority of the inhabitants", and Leo Amery echoed the same position in 1946. In the UNSCOP report of 1947, the issue of home versus state was subjected to scrutiny, and the report arrived at a conclusion similar to that of Lloyd George. The statement that such a homeland would be found "in Palestine" rather than "of Palestine" was also deliberate. The proposed draft of the declaration contained in Rothschild's 12 July letter to Balfour referred to the principle "that Palestine should be reconstituted as the National Home of the Jewish people." In the final text, following Lord Milner's amendment, the word "reconstituted" was removed and the word "that" was replaced with "in". This text thereby avoided committing the entirety of Palestine as the National Home of the Jewish people, resulting in controversy in future years over the intended scope. This was clarified by the 1922 Churchill White Paper, which stated that "the terms of the declaration referred to do not contemplate that Palestine as a whole should be converted into a Jewish National Home, but that such a Home should be founded 'in Palestine.'" The declaration did not include any geographical boundaries for Palestine. Following the end of the war, three documents – the declaration, the Hussein-McMahon Correspondence and the Sykes-Picot Agreement – became the basis for the negotiations to set the boundaries of Palestine. The declaration's first safeguard clause referred to protecting the civil and religious rights of non-Jews in Palestine. The clause had been drafted together with the second safeguard by Leo Amery in consultation with Lord Milner, with the intention to "go a reasonable distance to meeting the objectors, both Jewish and pro-Arab, without impairing the substance of the proposed declaration". The "non-Jews" constituted 90% of the population of Palestine; in the words of Ronald Storrs, Britain's Military Governor of Jerusalem between 1917 and 1920, the community observed that they had been "not so much as named, either as Arabs, Moslems or Christians, but were lumped together under the negative and humiliating definition of 'Non-Jewish Communities' and relegated to subordinate provisos". The community also noted that there was no reference to protecting their "political status" or political rights, as there was in the subsequent safeguard relating to Jews in other countries.
This protection was frequently contrasted against the commitment to the Jewish community, and over the years a variety of terms were used to refer to these two obligations as a pair; a particularly heated question was whether these two obligations had "equal weight", and in 1930 this equal status was confirmed by the Permanent Mandates Commission and by the British government in the Passfield White Paper. Balfour stated in February 1919 that Palestine was considered an exceptional case in which, referring to the local population, "we deliberately and rightly decline to accept the principle of self-determination," although he considered that the policy provided self-determination to Jews. Avi Shlaim considers this the declaration's "greatest contradiction". This principle of self-determination had been declared on numerous occasions subsequent to the declaration – President Wilson's January 1918 Fourteen Points, McMahon's Declaration to the Seven in June 1918, the November 1918 Anglo-French Declaration, and the June 1919 Covenant of the League of Nations that had established the mandate system. In an August 1919 memo Balfour acknowledged the inconsistency among these statements, and further explained that the British had no intention of consulting the existing population of Palestine. The results of the ongoing American King–Crane Commission of Enquiry consultation of the local population – from which the British had withdrawn – were suppressed for three years until the report was leaked in 1922. Subsequent British governments have acknowledged this deficiency, in particular the 1939 committee led by the Lord Chancellor, Frederic Maugham, which concluded that the government had not been "free to dispose of Palestine without regard for the wishes and interests of the inhabitants of Palestine", and the April 2017 statement by British Foreign Office minister of state Baroness Anelay that the government acknowledged that "the Declaration should have called for the protection of political rights of the non-Jewish communities in Palestine, particularly their right to self-determination." The second safeguard clause was a commitment that nothing should be done which might prejudice the rights of the Jewish communities in other countries outside of Palestine. The original drafts of Rothschild, Balfour, and Milner did not include this safeguard, which was drafted together with the preceding safeguard in early October, in order to reflect opposition from influential members of the Anglo-Jewish community. Lord Rothschild took exception to the proviso on the basis that it presupposed the possibility of a danger to non-Zionists, which he denied. The Conjoint Foreign Committee of the Board of Deputies of British Jews and the Anglo-Jewish Association had published a letter in "The Times" on 24 May 1917 entitled "Views of Anglo-Jewry", signed by the two organisations' presidents, David Lindo Alexander and Claude Montefiore, stating their view that: "the establishment of a Jewish nationality in Palestine, founded on this theory of homelessness, must have the effect throughout the world of stamping the Jews as strangers in their native lands, and of undermining their hard-won position as citizens and nationals of these lands."
This was followed in late August by Edwin Samuel Montagu, an influential anti-Zionist Jew and Secretary of State for India, and the only Jewish member of the British Cabinet, who wrote in a Cabinet memorandum that: "The policy of His Majesty's Government is anti-Semitic in result and will prove a rallying ground for anti-Semites in every country of the world." The text of the declaration was published in the press one week after it was signed, on 9 November 1917. Other related events took place within a short timeframe, the two most relevant being the almost immediate British military capture of Palestine and the leaking of the previously secret Sykes-Picot Agreement. On the military side, both Gaza and Jaffa fell within several days, and Jerusalem was surrendered to the British on 9 December. The publication of the Sykes–Picot Agreement, following the Russian Revolution, in the Bolshevik "Izvestia" and "Pravda" on 23 November 1917 and in the British "Manchester Guardian" on 26 November 1917, represented a dramatic moment for the Allies' Eastern campaign: "the British were embarrassed, the Arabs dismayed and the Turks delighted." The Zionists had been aware of the outlines of the agreement since April and specifically the part relevant to Palestine, following a meeting between Weizmann and Cecil where Weizmann made very clear his objections to the proposed scheme. The declaration represented the first public support for Zionism by a major political power – its publication galvanized Zionism, which finally had obtained an official charter. In addition to its publication in major newspapers, leaflets were circulated throughout Jewish communities. These leaflets were airdropped over Jewish communities in Germany and Austria, as well as the Pale of Settlement, which had been given to the Central Powers following the Russian withdrawal. Weizmann had argued that the declaration would have three effects: it would swing Russia to maintain pressure on Germany's Eastern Front, since Jews had been prominent in the March Revolution of 1917; it would rally the large Jewish community in the United States to press for greater funding for the American war effort, underway since April of that year; and, lastly, that it would undermine German Jewish support for Kaiser Wilhelm II. The declaration spurred an unintended and extraordinary increase in the number of adherents of American Zionism; in 1914 the 200 American Zionist societies comprised a total of 7,500 members, which grew to 30,000 members in 600 societies in 1918 and 149,000 members in 1919. Whilst the British had considered that the declaration reflected a previously established dominance of the Zionist position in Jewish thought, it was the declaration itself that was subsequently responsible for Zionism's legitimacy and leadership. Exactly one month after the declaration was issued, a large-scale celebration took place at the Royal Opera House – speeches were given by leading Zionists as well as members of the British administration including Sykes and Cecil. From 1918 until World War II, Jews in Mandatory Palestine celebrated Balfour Day as an annual national holiday on 2 November. The celebrations included ceremonies in schools and other public institutions and festive articles in the Hebrew press. In August 1919 Balfour approved Weizmann's request to name the first post-war settlement in Mandatory Palestine, "Balfouria", in his honour. It was intended to be a model settlement for future American Jewish activity in Palestine.
Herbert Samuel, the Zionist MP whose 1915 memorandum had framed the start of discussions in the British Cabinet, was asked by Lloyd George on 24 April 1920 to act as the first civil governor of British Palestine, replacing the previous military administration that had ruled the area since the war. Shortly after beginning the role in July 1920, he was invited to read the "haftarah" from Isaiah 40 at the Hurva Synagogue in Jerusalem, which, according to his memoirs, led the congregation of older settlers to feel that the "fulfilment of ancient prophecy might at last be at hand". The local Christian and Muslim community of Palestine, who constituted almost 90% of the population, strongly opposed the declaration. As described by the Palestinian-American philosopher Edward Said in 1979, it was perceived as being made: "(a) by a European power, (b) about a non-European territory, (c) in a flat disregard of both the presence and the wishes of the native majority resident in that territory, and (d) it took the form of a promise about this same territory to another foreign group." According to the 1919 King–Crane Commission, "No British officer, consulted by the Commissioners, believed that the Zionist programme could be carried out except by force of arms." A delegation of the Muslim-Christian Association, headed by Musa al-Husayni, expressed public disapproval on 3 November 1918, one day after the Zionist Commission parade marking the first anniversary of the Balfour Declaration. They handed a petition signed by more than 100 notables to Ronald Storrs, the British military governor. The group also protested the carrying of new "white and blue banners with two inverted triangles in the middle", drawing the attention of the British authorities to the serious consequences of any political implications in raising the banners. Later that month, on the first anniversary of the occupation of Jaffa by the British, the Muslim-Christian Association sent a lengthy memorandum and petition to the military governor protesting once more any formation of a Jewish state. In the broader Arab world, the declaration was seen as a betrayal of the British wartime understandings with the Arabs. The Sharif of Mecca and other Arab leaders considered the declaration a violation of a previous commitment made in the McMahon–Hussein correspondence in exchange for launching the Arab Revolt. Following the publication of the declaration, the British dispatched Commander David George Hogarth to see Hussein in January 1918 bearing the message that the "political and economic freedom" of the Palestinian population was not in question. Hogarth reported that Hussein "would not accept an independent Jewish State in Palestine, nor was I instructed to warn him that such a state was contemplated by Great Britain". Hussein had also learned of the Sykes–Picot Agreement when it was leaked by the new Soviet government in December 1917, but was satisfied by two disingenuous telegrams from Sir Reginald Wingate, who had replaced McMahon as High Commissioner of Egypt, assuring him that the British commitments to the Arabs were still valid and that the Sykes–Picot Agreement was not a formal treaty.
Continuing Arab disquiet over Allied intentions also led during 1918 to the British Declaration to the Seven and the Anglo-French Declaration, the latter promising "the complete and final liberation of the peoples who have for so long been oppressed by the Turks, and the setting up of national governments and administrations deriving their authority from the free exercise of the initiative and choice of the indigenous populations". In 1919, King Hussein refused to ratify the Treaty of Versailles. After February 1920, the British ceased to pay him a subsidy. In August 1920, five days after the signing of the Treaty of Sèvres, which formally recognized the Kingdom of Hejaz, Curzon asked Cairo to procure Hussein's signature to both treaties and agreed to make a payment of £30,000 conditional on signature. Hussein declined and in 1921 stated that he could not be expected to "affix his name to a document assigning Palestine to the Zionists and Syria to foreigners." Following the 1921 Cairo Conference, Lawrence was sent to try to obtain the King's signature to a treaty, a £100,000 annual subsidy being proposed; this attempt also failed. During 1923, the British made one further attempt to settle outstanding issues with Hussein and once again the attempt foundered; Hussein continued in his refusal to recognize the Balfour Declaration or any of the Mandates that he perceived as being his domain. In March 1924, having briefly considered the possibility of removing the offending article from the treaty, the government suspended any further negotiations. The declaration was first endorsed by a foreign government on 27 December 1917, when Serbian Zionist leader and diplomat David Albala announced the support of Serbia's government in exile during a mission to the United States. The French and Italian governments offered their endorsements, on 14 February and 9 May 1918, respectively. At a private meeting in London on 1 December 1918, Lloyd George and French Prime Minister Georges Clemenceau agreed to certain modifications to the Sykes-Picot Agreement, including British control of Palestine. On 25 April 1920, the San Remo conference – an outgrowth of the Paris Peace Conference attended by the prime ministers of Britain, France and Italy, and the United States Ambassador to Italy – established the basic terms for three League of Nations mandates: a French mandate for Syria, and British mandates for Mesopotamia and Palestine. With respect to Palestine, the resolution stated that the British were responsible for putting into effect the terms of the Balfour Declaration. The French and the Italians made clear their dislike of the "Zionist cast of the Palestinian mandate" and objected especially to language that did not safeguard the "political" rights of non-Jews, accepting Curzon's claim that "in the British language all ordinary rights were included in "civil rights"". At the request of France, it was agreed that an undertaking was to be inserted in the mandate's procès-verbal that this would not involve the surrender of the rights hitherto enjoyed by the non-Jewish communities in Palestine. The boundaries of Palestine were left unspecified, to "be determined by the Principal Allied Powers." Three months later, in July 1920, the French defeat of Faisal's Arab Kingdom of Syria precipitated the British need to know "what is the 'Syria' for which the French received a mandate at San Remo?" and "does it include Transjordania?"
– it subsequently decided to pursue a policy of associating Transjordan with the mandated area of Palestine without adding it to the area of the Jewish National Home. In 1922, Congress officially endorsed America's support for the Balfour Declaration through the passage of the Lodge-Fish Resolution, notwithstanding opposition from the State Department. Professor Lawrence Davidson, of West Chester University, whose research focuses on American relations with the Middle East, argues that President Wilson and Congress ignored democratic values in favour of "biblical romanticism" when they endorsed the declaration. He points to an organized pro-Zionist lobby in the United States, which was active at a time when the country's small Arab American community had little political power. The publication of the Balfour Declaration was met with tactical responses from the Central Powers. Two weeks following the declaration, Ottokar Czernin, the Austrian Foreign Minister, gave an interview to Arthur Hantke, President of the Zionist Federation of Germany, promising that his government would influence the Turks once the war was over. On 12 December, the Ottoman Grand Vizier, Talaat Pasha, gave an interview to the German newspaper "Vossische Zeitung" that was published on 31 December and subsequently released in a German-Jewish periodical on 4 January 1918, in which he referred to the declaration as "une blague" (a deception) and promised that under Ottoman rule "all justifiable wishes of the Jews in Palestine would be able to find their fulfilment" subject to the absorptive capacity of the country. This Turkish statement was endorsed by the German Foreign Office on 5 January 1918. On 8 January 1918, a German-Jewish society, the Union of German Jewish Organizations for the Protection of the Rights of the Jews of the East (VJOD), was formed to advocate for further progress for Jews in Palestine. Following the war, the Treaty of Sèvres was signed by the Ottoman Empire on 10 August 1920. The treaty dissolved the Ottoman Empire, requiring Turkey to renounce sovereignty over much of the Middle East. Article 95 of the treaty incorporated the terms of the Balfour Declaration with respect to "the administration of Palestine, within such boundaries as may be determined by the Principal Allied Powers". Since incorporation of the declaration into the Treaty of Sèvres did not affect the legal status of either the declaration or the Mandate, there was also no effect when Sèvres was superseded by the Treaty of Lausanne, which did not include any reference to the declaration. In 1922, German anti-Semitic theorist Alfred Rosenberg, in his primary contribution to Nazi theory on Zionism, "Der Staatsfeindliche Zionismus" ("Zionism, the Enemy of the State"), accused German Zionists of working for a German defeat and supporting Britain and the implementation of the Balfour Declaration, in a version of the stab-in-the-back myth. Adolf Hitler took a similar approach in some of his speeches from 1920 onwards. With the advent of the declaration and the British entry into Jerusalem on 9 December, the Vatican reversed its earlier sympathetic attitude to Zionism and adopted an oppositional stance that was to continue until the early 1990s. The British policy as stated in the declaration was to face numerous challenges to its implementation in the following years.
The first of these was the indirect peace negotiations which took place between Britain and the Ottomans in December 1917 and January 1918 during a pause in the hostilities for the rainy season; although these peace talks were unsuccessful, archival records suggest that key members of the War Cabinet may have been willing to permit leaving Palestine under nominal Turkish sovereignty as part of an overall deal. In October 1919, almost a year after the end of the war, Lord Curzon succeeded Balfour as Foreign Secretary. Curzon had been a member of the 1917 Cabinet that had approved the declaration, and according to British historian Sir David Gilmour, Curzon had been "the only senior figure in the British government at the time who foresaw that its policy would lead to decades of Arab–Jewish hostility". He therefore determined to pursue a policy in line with its "narrower and more prudent rather than the wider interpretation". Following Bonar Law's appointment as Prime Minister in late 1922, Curzon wrote to Law that he regarded the declaration as "the worst" of Britain's Middle East commitments and "a striking contradiction of our publicly declared principles". In August 1920 the report of the Palin Commission, the first in a long line of British Commissions of Inquiry on the question of Palestine during the Mandate period, noted that "The Balfour Declaration... is undoubtedly the starting point of the whole trouble". The conclusion of the report, which was not published, mentioned the Balfour Declaration three times, citing it among "the causes of the alienation and exasperation of the feelings of the population of Palestine". British public and government opinion became increasingly unfavourable to state support for Zionism; even Sykes had begun to change his views in late 1918. In February 1922 Churchill telegraphed Samuel, who had begun his role as High Commissioner for Palestine 18 months earlier, asking for cuts in expenditure. Following the issuance of the Churchill White Paper in June 1922, the House of Lords rejected a Palestine Mandate that incorporated the Balfour Declaration by 60 votes to 25, following a motion issued by Lord Islington. The vote proved to be only symbolic as it was subsequently overruled by a vote in the House of Commons following a tactical pivot and a variety of promises made by Churchill. In February 1923, following the change in government, Cavendish, in a lengthy memorandum for the Cabinet, laid the foundation for a secret review of Palestine policy. His covering note asked for a statement of policy to be made as soon as possible and that the cabinet ought to focus on three questions: (1) whether or not pledges to the Arabs conflict with the Balfour Declaration; (2) if not, whether the new government should continue the policy set down by the old government in the 1922 White Paper; and (3) if not, what alternative policy should be adopted. In June 1923 Stanley Baldwin, who had replaced Bonar Law, set up a cabinet subcommittee to carry out this review. The Cabinet approved the report of this committee on 31 July 1923. Describing it as "nothing short of remarkable", Quigley noted that the government was admitting to itself that its support for Zionism had been prompted by considerations having nothing to do with the merits of Zionism or its consequences for Palestine. As Huneidi noted, "wise or unwise, it is well nigh impossible for any government to extricate itself without a substantial sacrifice of consistency and self-respect, if not honour."
The wording of the declaration was thus incorporated into the British Mandate for Palestine, a legal instrument that created Mandatory Palestine with an explicit purpose of putting the declaration into effect, and was finally formalized in September 1923. Unlike the declaration itself, the Mandate was legally binding on the British government. In June 1924, Britain made its report to the Permanent Mandates Commission for the period July 1920 to the end of 1923 containing nothing of the candour reflected in the internal documents; the documents relating to the 1923 reappraisal stayed secret until the early 1970s. Lloyd George and Balfour remained in government until the collapse of the coalition in October 1922. Under the new Conservative government, attempts were made to identify the background to and motivations for the declaration. A private Cabinet memorandum was produced in January 1923, providing a summary of the then-known Foreign Office and War Cabinet records leading up to the declaration. An accompanying Foreign Office note asserted that the primary authors of the declaration were Balfour, Sykes, Weizmann, and Sokolow, with "perhaps Lord Rothschild as a figure in the background", and that "negotiations seem to have been mainly oral and by means of private notes and memoranda of which only the scantiest records seem to be available." Following the 1936 general strike that was to degenerate into the 1936–1939 Arab revolt in Palestine, the most significant outbreak of violence since the Mandate began, a British Royal Commission – a high-profile public inquiry – was appointed to investigate the causes of the unrest. The Palestine Royal Commission, appointed with significantly broader terms of reference than the previous British inquiries into Palestine, completed its 404-page report after six months of work in June 1937, publishing it a month later. The report began by describing the history of the problem, including a detailed summary of the origins of the Balfour Declaration. Much of this summary relied on Lloyd George's personal testimony; Balfour had died in 1930 and Sykes in 1919. He told the commission that the declaration was made "due to propagandist reasons... In particular Jewish sympathy would confirm the support of American Jewry, and would make it more difficult for Germany to reduce her military commitments and improve her economic position on the eastern front". Two years later, in his "Memoirs", Lloyd George described a total of nine factors motivating his decision as Prime Minister to release the declaration, including the additional reasons that a Jewish presence in Palestine would strengthen Britain's position on the Suez Canal and reinforce the route to their imperial dominion in India. These geopolitical calculations were debated and discussed in the following years. Historians agree that the British believed that expressing support would appeal to Jews in Germany and the United States, given that two of Woodrow Wilson's closest advisors were known to be avid Zionists; they also hoped to encourage support from the large Jewish population in Russia. In addition, the British intended to pre-empt the expected French pressure for an international administration in Palestine. Some historians argue that the British government's decision reflected what James Gelvin, Professor of Middle Eastern History at UCLA, calls "patrician anti-Semitism" in the overestimation of Jewish power in both the United States and Russia.
American Zionism was still in its infancy; in 1914 the Zionist Federation had a small budget of about $5,000 and only 12,000 members, despite an American Jewish population of three million. But the Zionist organizations had recently succeeded, following a show of force within the American Jewish community, in arranging a Jewish congress to debate the Jewish problem as a whole. This affected British and French government estimates of the balance of power within the American Jewish public. Avi Shlaim, Emeritus Professor of International Relations at the University of Oxford, asserts that two main schools of thought have developed on the question of the primary driving force behind the declaration, one presented in 1961 by Leonard Stein, a lawyer and former political secretary to the World Zionist Organization, and the other in 1970 by Mayir Vereté, then Professor of Israeli History at the Hebrew University of Jerusalem. Shlaim states that Stein does not reach any clear-cut conclusions, but that implicit in his narrative is that the declaration resulted primarily from the activity and skill of the Zionists, whereas according to Vereté, it was the work of hard-headed pragmatists motivated by British imperial interests in the Middle East. Much of modern scholarship on the decision to issue the declaration focuses on the Zionist movement and rivalries within it, with a key debate being whether the role of Weizmann was decisive or whether the British were likely to have issued a similar declaration in any event. Danny Gutwein, Professor of Jewish History at the University of Haifa, proposes a twist on an old idea, asserting that Sykes's February 1917 approach to the Zionists was the defining moment, and that it was consistent with the pursuit of the government's wider agenda to partition the Ottoman Empire. The declaration had two indirect consequences: the emergence of a Jewish state and a chronic state of conflict between Arabs and Jews throughout the Middle East. It has been described as the "original sin" with respect both to Britain's failure in Palestine and to wider events in Palestine. The statement also had a significant impact on the traditional anti-Zionism of religious Jews, some of whom saw it as divine providence; this contributed to the growth of religious Zionism amid the larger Zionist movement. Starting in 1920, intercommunal conflict in Mandatory Palestine broke out, which widened into the regional Arab–Israeli conflict, often referred to as the world's "most intractable conflict". The "dual obligation" to the two communities quickly proved to be untenable; the British subsequently concluded that it was impossible for them to pacify the two communities in Palestine by using different messages for different audiences. The Palestine Royal Commission – in making the first official proposal for partition of the region – referred to the requirements as "contradictory obligations", and concluded that the "disease is so deep-rooted that, in our firm conviction, the only hope of a cure lies in a surgical operation". Following the 1936–1939 Arab revolt in Palestine, and as worldwide tensions rose in the buildup to World War II, the British Parliament approved the White Paper of 1939 – their last formal statement of governing policy in Mandatory Palestine – declaring that Palestine should not become a Jewish State and placing restrictions on Jewish immigration.
Whilst the British considered this consistent with the Balfour Declaration's commitment to protect the rights of non-Jews, many Zionists saw it as a repudiation of the declaration. Although this policy lasted until the British surrendered the Mandate in 1948, it served only to highlight the fundamental difficulty for Britain in carrying out the Mandate obligations. Britain's involvement in this became one of the most controversial parts of its Empire's history, and damaged its reputation in the Middle East for generations. According to historian Elizabeth Monroe: "measured by British interests alone, [the declaration was] one of the greatest mistakes in [its] imperial history." The 2010 study by Jonathan Schneer, specialist in modern British history at Georgia Tech, concluded that because the build-up to the declaration was characterized by "contradictions, deceptions, misinterpretations, and wishful thinking", the declaration sowed dragon's teeth and "produced a murderous harvest, and we go on harvesting even today". The foundational stone for modern Israel had been laid, but the prediction that this would lay the groundwork for harmonious Arab-Jewish cooperation proved to be wishful thinking. The document was presented to the British Museum in 1924 by Walter Rothschild; today it is held in the British Library, which separated from the British Museum in 1973, as Additional Manuscripts number 41178. From October 1987 to May 1988 it was lent outside the UK for display in Israel's Knesset. The Israeli government are currently in negotiations to arrange a second loan in 2018, with plans to display the document at Independence Hall in Tel Aviv.
Blue whale The blue whale ("Balaenoptera musculus") is a marine mammal belonging to the baleen whale parvorder, Mysticeti. At up to in length and with a maximum recorded weight of , it is the largest animal known to have ever existed. Long and slender, the blue whale's body can be various shades of bluish-grey dorsally and somewhat lighter underneath. There are at least three distinct subspecies: "B. m. musculus" of the North Atlantic and North Pacific, "B. m. intermedia" of the Southern Ocean and "B. m. brevicauda" (also known as the pygmy blue whale) found in the Indian Ocean and South Pacific Ocean. "B. m. indica", found in the Indian Ocean, may be another subspecies. As with other baleen whales, its diet consists almost exclusively of small crustaceans known as krill. Blue whales were abundant in nearly all the oceans on Earth until the beginning of the twentieth century. For over a century, they were hunted almost to extinction by whalers until protected by the international community in 1966. A 2002 report estimated there were 5,000 to 12,000 blue whales worldwide, in at least five groups. The IUCN estimates that there are probably between 10,000 and 25,000 blue whales worldwide today. Before whaling, the largest population was in the Antarctic, numbering approximately 239,000 (range 202,000 to 311,000). There remain only much smaller (around 2,000) concentrations in each of the eastern North Pacific, Antarctic, and Indian Ocean groups. There are two more groups in the North Atlantic, and at least two in the Southern Hemisphere. As of 2014, the Eastern North Pacific blue whale population had rebounded to nearly its pre-hunting population. Blue whales are rorquals (family Balaenopteridae), a family that includes the humpback whale, the fin whale, Bryde's whale, the sei whale, and the minke whale.
The family Balaenopteridae is believed to have diverged from the other families of the suborder Mysticeti as long ago as the middle Oligocene (about 28 million years ago). The blue whale lineage diverged from the other rorquals during the Miocene, between 7.5 and 10.5 million years ago. However, gene flow between the species appears to have continued beyond that date. The blue whale has the greatest genetic diversity of any baleen whale, and a higher than average diversity among mammals. The blue whale is usually classified as one of eight species in the genus "Balaenoptera"; one authority places it in a separate monotypic genus, "Sibbaldus", but this is not accepted elsewhere. DNA sequencing analysis indicates that the blue whale is phylogenetically closer to the sei whale ("Balaenoptera borealis") and Bryde's whale ("Balaenoptera brydei") than to other "Balaenoptera" species, and closer to the humpback whale ("Megaptera") and the gray whale ("Eschrichtius") than to the minke whales ("Balaenoptera acutorostrata" and "Balaenoptera bonaerensis"). There have been at least 11 documented cases of blue whale-fin whale hybrid adults in the wild. Arnason and Gullberg describe the genetic distance between a blue and a fin as about the same as that between a human and a gorilla. Researchers working off Fiji believe they photographed a hybrid humpback-blue whale; a hybrid has also been identified through DNA analysis of a meat sample found in a Japanese market. The first published description of the blue whale comes from Robert Sibbald's "Phalainologia Nova" (1694). In September 1692, Sibbald found a blue whale that had stranded in the Firth of Forth – a male long – that had "black, horny plates" and "two large apertures approaching a pyramid in shape". The specific name "musculus" is Latin and could mean "muscle", but it can also be interpreted as "little mouse". Carl Linnaeus, who named the species in his seminal "Systema Naturae" of 1758, would have known this and may have intended the ironic double meaning. Herman Melville called this species "sulphur-bottom" in his novel "Moby-Dick" (1851) due to an orange-brown or yellow tinge on the underparts from diatom films on the skin. Other common names for the blue whale have included "Sibbald's rorqual" (after Sibbald, who first described the species), the "great blue whale" and the "great northern rorqual". These names have now fallen into disuse. The first known usage of the term "blue whale" was in Melville's "Moby-Dick", which only mentions it in passing and does not specifically attribute it to the species in question. The name was actually derived from the Norwegian "blåhval", coined by Svend Foyn shortly after he had perfected the harpoon gun; the Norwegian scientist G. O. Sars adopted it as the Norwegian common name in 1874. Authorities classify the species into three or four subspecies: "B. m. musculus", the northern blue whale consisting of the North Atlantic and North Pacific populations, "B. m. intermedia", the southern blue whale of the Southern Ocean, "B. m. brevicauda", the pygmy blue whale found in the Indian Ocean and South Pacific, and the more problematic "B. m. indica", the great Indian rorqual, which is also found in the Indian Ocean and, although described earlier, may be the same subspecies as "B. m. brevicauda". The pygmy blue whale formed from a founder group of Antarctic blue whales about 20,000 years ago, around the Last Glacial Maximum. This is likely because blue whales were driven north by expanding ice, and some have stayed there ever since.
The pygmy blue whale's evolutionarily recent origins cause it to have a relatively low genetic diversity. The blue whale has a long tapering body that appears stretched in comparison with the stockier build of other whales. The head is flat, "U"-shaped and has a prominent ridge running from the blowhole to the top of the upper lip. The front part of the mouth is thick with baleen plates; around 300 plates, each around long, hang from the upper jaw, running back into the mouth. Between 70 and 118 grooves (called ventral pleats) run along the throat parallel to the body length. These pleats assist with evacuating water from the mouth after lunge feeding (see feeding below). The dorsal fin is small; its height averages about , and usually ranges between , though it can be as small as or as large as . It is visible only briefly during the dive sequence. Located around three-quarters of the way along the length of the body, it varies in shape from one individual to another; some only have a barely perceptible lump, but others may have prominent and falcate (sickle-shaped) dorsals. When surfacing to breathe, the blue whale raises its shoulder and blowhole out of the water to a greater extent than other large whales, such as the fin or sei whales. Observers can use this trait to differentiate between species at sea. Some blue whales in the North Atlantic and North Pacific raise their tail fluke when diving. When breathing, the whale emits a vertical single-column spout, typically high, but reaching up to . Its lung capacity is . Blue whales have twin blowholes shielded by a large splashguard. The flippers are long. The upper sides are grey with a thin white border; the lower sides are white. The head and tail fluke are generally uniformly grey. The whale's upper parts, and sometimes the flippers, are usually mottled. The degree of mottling varies substantially from individual to individual. Some may have a uniform slate-grey color, but others demonstrate a considerable variation of dark blues, greys and blacks, all tightly mottled. Blue whales can reach speeds of over short bursts, usually when interacting with other whales, but is a more typical traveling speed. Satellite telemetry of Australian pygmy blue whales migrating to Indonesia has shown that they cover between per day. When feeding, they slow down to . Blue whales typically swim at a depth of about when migrating in order to eliminate drag from surface waves. The deepest confirmed dive is . Blue whales most commonly live alone or with one other individual. It is not known how long traveling pairs stay together. In locations where there is a high concentration of food, as many as 50 blue whales have been seen scattered over a small area. They do not form the large, close-knit groups seen in other baleen species. The blue whale is the largest animal known to have ever lived. Blue whales are difficult to weigh because of their size. They were never weighed whole, but cut into blocks across and weighed by parts. This caused a considerable loss of blood and body fluids, estimated to be about 6% of the total weight. As a whole, blue whales from the Northern Atlantic and Pacific are smaller on average than those from Antarctic waters. Adult weights typically range from . There is some uncertainty about the biggest blue whale ever found, as most data came from blue whales killed in Antarctic waters during the first half of the twentieth century, which were collected by whalers not well-versed in standard zoological measurement techniques. 
The standard measuring technique is to measure in a straight line from the upper jaw to the notch in the tail flukes. This came about because the edges of the tail flukes were typically cut off, and the lower jaw often falls open upon death. Many of the larger whales in the whaling records (especially those over ) were probably measured incorrectly or even deliberately exaggerated. The heaviest weight ever reported was , for a southern hemisphere female in 1947; it is likely that the largest blue whales would have weighed over . The longest whales ever recorded were two females measuring , but in neither of these cases was the piecemeal weight gathered. Possibly the largest recorded male was killed near the South Shetland Islands in 1926 and was measured at . Females are generally a few feet longer than males. However, males may be slightly heavier on average than females of the same length, owing to heavier muscles and bones. Verified measurements rarely exceed . The longest measured by Macintosh and Wheeler (1929) was a female , while the largest male was ; one of the same authors later found a male of and stated that those lengths may be exceeded. The longest whale measured by scientists was long. Lieut. Quentin R. Walsh, USCG, while acting as whaling inspector of the factory ship "Ulysses", verified the measurement of a pregnant blue whale caught in the Antarctic in the 1937–38 season. A male was verified by Japanese scientists in the 1947–48 whaling season. The longest reported in the North Pacific was a female taken by Japanese whalers in 1959, and the longest reported in the North Atlantic was a female caught in the Davis Strait. The average weight of the longest scientifically verified specimens would be calculated to be 176.5 tonnes (194.6 tons), varying from 141 tonnes (155.4 tons) to 211.5 tonnes (233.1 tons) depending on fat condition. One study found that a hypothetical blue whale would be too large to exist in real life, due to metabolic and energy constraints. Due to its large size, several organs of the blue whale are the largest in the animal kingdom. A blue whale's tongue weighs around and, when fully expanded, its mouth is large enough to hold up to of food and water. Despite the size of its mouth, the dimensions of its throat are such that a blue whale cannot swallow an object wider than a beach ball. The heart of an average-sized blue whale weighs and is the largest known in any animal. During the first seven months of its life, a blue whale calf drinks approximately of milk every day. Blue whale calves gain weight quickly, as much as every 24 hours. Even at birth, they weigh up to —the same as a fully grown hippopotamus. Blue whales have proportionally small brains, only about , about 0.007% of their body weight, although with a highly convoluted cerebral cortex. The blue whale penis is the largest of any living organism and set the Guinness World Record as the longest of any animal's. Its reported average length varies, but is usually given as . Blue whales feed almost exclusively on krill, though they also take small numbers of copepods. The species of this zooplankton eaten by blue whales varies from ocean to ocean. 
In the North Atlantic, "Meganyctiphanes norvegica", "Thysanoessa raschii", "Thysanoessa inermis" and "Thysanoessa longicaudata" are the usual food; in the North Pacific, "Euphausia pacifica", "Thysanoessa inermis", "Thysanoessa longipes", "Thysanoessa spinifera", "Nyctiphanes symplex" and "Nematoscelis megalops"; and in the Southern Hemisphere, "Euphausia superba", "Euphausia crystallorophias", "Euphausia valentini", and "Nyctiphanes australis". An adult blue whale can eat up to 40 million krill in a day. The whales always feed in the areas with the highest concentration of krill, sometimes eating up to of krill in a single day. The daily energy requirement of an adult blue whale is in the region of . Their feeding habits are seasonal. Blue whales gorge on krill in the rich waters of the Antarctic before migrating to their breeding grounds in the warmer, less-rich waters nearer the equator. The blue whale can take in up to 90 times as much energy as it expends, allowing it to build up considerable energy reserves. Because krill move, blue whales typically feed at depths of more than during the day and only surface-feed at night. Dive times are typically 10 minutes when feeding, though dives of up to 21 minutes are possible. The whale feeds by lunging forward at groups of krill, taking the animals and a large quantity of water into its mouth. The water is then squeezed out through the baleen plates by pressure from the ventral pouch and tongue. Once the mouth is clear of water, the remaining krill, unable to pass through the plates, are swallowed. The blue whale also incidentally consumes small fish, crustaceans and squid caught up with krill. Mating starts in late autumn and continues to the end of winter. Little is known about mating behaviour or breeding grounds. In the fall, males will follow females for prolonged periods of time. Occasionally, a second male will attempt to displace the first, and the whales will race each other at high speed, ranging from to in New Zealand. This often causes the racing whales to breach, which is rare in blue whales. This racing behavior may even escalate to physical violence between the males. Scientists have observed this behavior in multiple parts of the world, including the Gulf of St. Lawrence in Canada and the South Taranaki Bight in New Zealand. Females typically give birth once every two to three years at the start of the winter after a gestation period of 10 to 12 months. The calf weighs about and is around in length. Blue whale calves drink 380–570 litres (100–150 U.S. gallons) of milk a day. Blue whale milk has an energy content of about 18,300 kJ/kg (4,370 kcal/kg). The calf is weaned after six months, by which time it has doubled in length. The first video of a calf thought to be nursing was filmed on 5 February 2016. Sexual maturity is typically reached at five to ten years of age. In the Northern Hemisphere, whaling records show that males averaged and females at sexual maturity, while in the Southern Hemisphere it was , respectively. In the Northern Hemisphere, as adults, males averaged and females with average calculated weights of 90.5 and 101.5 tonnes (100 and 112 tons), respectively. Blue whales in the eastern North Pacific population were found to be on average shorter, therefore with males averaging and 80.5 tonnes (88.5 tons) and females and 90.5 tonnes (100 tons). Antarctic males averaged and females , averaging 101.5 and 118 tonnes (112 and 130 tons). 
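The milk-intake and milk-energy figures quoted above can be combined into a rough back-of-the-envelope estimate of a calf's daily energy intake. The short Python sketch below is illustrative only; the milk density it assumes (about 1.03 kg per litre) is an assumption for the example, not a figure from this article.

```python
# A minimal sketch (not from the source) of the arithmetic implied by the milk figures
# quoted above: 380-570 litres of milk per day at roughly 18,300 kJ/kg.
ENERGY_KJ_PER_KG = 18_300     # energy content of blue whale milk (from the text)
MILK_DENSITY_KG_PER_L = 1.03  # assumed density of whale milk (illustrative, not from the text)

for litres in (380, 570):
    kilograms = litres * MILK_DENSITY_KG_PER_L
    energy_mj = kilograms * ENERGY_KJ_PER_KG / 1000
    print(f"{litres} L/day is about {kilograms:.0f} kg, or roughly {energy_mj:,.0f} MJ of energy per day")
# Under these assumptions the quoted intake range works out to roughly 7,000-11,000 MJ per day.
```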
Pygmy blue whales average at sexual maturity, with males averaging 21 meters and females 22 meters (69 and 72 feet) when fully grown, averaging 76 and 90 tonnes (83.5 and 99 tons). In the eastern North Pacific, photogrammetric studies have shown sexually mature (but not necessarily fully grown) blue whales today average , and about 65.5 tonnes (72 tons) with the largest found being about . A female washed ashore near Pescadero, California in 1979. The weight of individual blue whales varies significantly according to fat condition. Antarctic blue whales gain 50% of their lean body weight in the summer feeding season, i.e. a blue whale entering the Antarctic weighing 100 tons would leave weighing 150 tons. Pregnant females probably gain 60–65%. The fattened weight is 120% the average weight and the lean weight is 80%. Scientists estimate that blue whales can live for at least 80 years, but since individual records do not date back into the whaling era, this will not be known with certainty for many years. The longest recorded study of a single individual is 34 years, in the eastern North Pacific. The whales' only natural predator is the orca. Studies report that as many as 25% of mature blue whales have scars resulting from orca attacks. The mortality rate of such attacks is unknown. Blue whale strandings are extremely uncommon, and, because of the species' social structure, mass strandings are unheard of. When strandings do occur, they can become the focus of public interest. In 1920, a blue whale washed up near Bragar on the Isle of Lewis in the Outer Hebrides of Scotland. It had been shot by whalers, but the harpoon had failed to explode. As with other mammals, the fundamental instinct of the whale was to try to carry on breathing at all costs, even though this meant beaching to prevent itself from drowning. Two of the whale's bones were erected just off a main road on Lewis and remain a tourist attraction. In June 2015, a female blue whale estimated at and 20 tonnes (22 tons) was stranded on a beach in Maharashtra, India, the first live stranding in the region. Despite efforts by the Albaug forest department and local fishermen, the whale died 10 hours after being stranded. In August 2009, a wounded blue whale was stranded in a bay in Steingrímsfjördur, Iceland. The first rescue attempt failed, as the whale (thought to be over 20 meters long) towed the >20 ton boat back to shore at speeds of up to . The whale was towed to sea after 7 hours by a stronger boat. It is unknown whether it survived. In December 2015, a live blue whale thought to be over long was rescued from a beach in Chile. Another stranded blue whale, thought to be about long, was rescued in India in February 2016. Boats were used in all successful cases. Estimates made by Cummings and Thompson (1971) suggest the source level of sounds made by blue whales are between 155 and 188 decibels when measured relative to a reference pressure of one micropascal at one metre. All blue whale groups make calls at a fundamental frequency between 10 and 40 Hz; the lowest frequency sound a human can typically perceive is 20 Hz. Blue whale calls last between ten and thirty seconds. Blue whales off the coast of Sri Lanka have been repeatedly recorded making "songs" of four notes, lasting about two minutes each, reminiscent of the well-known humpback whale songs. As this phenomenon has not been seen in any other populations, researchers believe it may be unique to the "B. m. brevicauda" (pygmy) subspecies. 
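The source levels quoted above are expressed in decibels relative to a reference pressure of one micropascal at one metre, which relates to an absolute sound pressure through the standard relation p = p_ref * 10^(L/20). The short Python sketch below illustrates that conversion for the 155–188 dB range; it is a worked example of the decibel arithmetic, not a measurement taken from the sources cited.

```python
# A minimal sketch (not from the source): convert an underwater source level in
# dB re 1 uPa at 1 m into an RMS sound pressure in pascals.
def source_level_to_pressure_pa(level_db: float, ref_pressure_pa: float = 1e-6) -> float:
    """Return the RMS pressure (Pa) for a source level given in dB relative to ref_pressure_pa."""
    return ref_pressure_pa * 10 ** (level_db / 20)

for level in (155, 188):
    print(f"{level} dB re 1 uPa @ 1 m -> about {source_level_to_pressure_pa(level):,.0f} Pa RMS")
# 155 dB corresponds to roughly 56 Pa and 188 dB to roughly 2,500 Pa at the 1 m reference distance.
```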
The loudest sustained noise from a blue whale was at 188 dB. The purpose of vocalization is unknown. Richardson "et al." (1995) discuss six possible reasons. Historically, blue whales were not easy to catch or kill. Their speed and power meant that they were rarely pursued by early whalers, who instead targeted sperm and right whales. In 1864, the Norwegian Svend Foyn equipped a steamboat with harpoons specifically designed for catching large whales. The harpoon gun was initially cumbersome and had a low success rate, but Foyn perfected it, and soon several whaling stations were established on the coast of Finnmark in northern Norway. Because of disputes with the local fishermen, the last whaling station in Finnmark was closed down in 1904. Soon, blue whales were being hunted off Iceland (1883), the Faroe Islands (1894), Newfoundland (1898), and Spitsbergen (1903). In 1904–05 the first blue whales were taken off South Georgia. By 1925, with the advent of the stern slipway in factory ships and the use of steam-driven whale catchers, the catch of blue whales, and baleen whales as a whole, in the Antarctic and sub-Antarctic began to increase dramatically. In the 1930–31 season, these ships caught 29,400 blue whales in the Antarctic alone. By the end of World War II, populations had been significantly depleted, and, in 1946, the first quotas restricting international trade in whales were introduced, but they were ineffective because of the lack of differentiation between species. Rare species could be hunted on an equal footing with those found in relative abundance. Arthur C. Clarke, in his 1962 book "Profiles of the Future", was the first prominent intellectual to call attention to the plight of the blue whale. He mentioned its large brain and said, "we do not know the true nature of the entity we are destroying." All of the historical coastal Asian groups were driven to near-extinction in short order by Japanese industrial hunts. Those groups that once migrated along western Japan to the East China Sea were likely wiped out much earlier, as the last catches on Amami Oshima were between the 1910s and the 1930s, and the last known stranding records on the Japanese archipelago, excluding the Ryukyu Islands, were over a half-century ago. Commercial catches continued until 1965, and whaling stations targeting blues were mainly placed along the Hokkaido and Sanriku coasts. Blue whale hunting was banned in 1966 by the International Whaling Commission, and illegal whaling by the Soviet Union finally halted in the 1970s, by which time 330,000 blue whales had been caught in the Antarctic, 33,000 in the rest of the Southern Hemisphere, 8,200 in the North Pacific, and 7,000 in the North Atlantic. The largest original population, in the Antarctic, had been reduced to a mere 360 individuals, about 0.15% of their initial numbers. Blue/fin whale hybrids are still caught by the Icelandic whaling industry. Since the introduction of the whaling ban, studies have examined whether the conservation-reliant global blue whale population is increasing or remaining stable. In the Antarctic, best estimates show an increase of 7.3% per year since the end of illegal Soviet whaling, but numbers remain at under 1% of their original levels. Recovery varies regionally, and the Eastern North Pacific blue whale population (historically a relatively small proportion of the global total) has rebounded to about 2,200 individuals, an estimated 97% of its pre-hunting population. 
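The recovery figures above follow ordinary compound growth. As a purely illustrative calculation of what a sustained 7.3% annual increase would imply, the Python sketch below estimates how long a population starting at roughly 1% of its pre-whaling size would take to reach a chosen target fraction; the starting and target fractions are assumptions for the example, not estimates from the sources cited, and real recovery rates are unlikely to remain constant over such a period.

```python
# A minimal sketch (not from the source) of the compound-growth arithmetic behind
# the Antarctic recovery figures quoted above.
import math

growth_rate = 0.073      # estimated annual increase since the end of illegal whaling (from the text)
start_fraction = 0.01    # roughly 1% of pre-whaling numbers (illustrative assumption)
target_fraction = 0.50   # an illustrative recovery target (assumption)

years = math.log(target_fraction / start_fraction) / math.log(1 + growth_rate)
print(f"At {growth_rate:.1%} per year, growing from {start_fraction:.0%} to "
      f"{target_fraction:.0%} of the original population takes about {years:.0f} years")
# Under these assumptions the answer is roughly 55 years, if the rate were sustained.
```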
The total world population was estimated to be between 5,000 and 12,000 in 2002; there are high levels of uncertainty in available estimates for many areas. A more recent estimate by the IUCN puts the global population at 10,000–25,000. The IUCN Red List counts the blue whale as "endangered", as it has since the list's inception. In the United States, the National Marine Fisheries Service lists them as endangered under the Endangered Species Act. The largest known concentration, consisting of about 2,800 individuals, is the northeast Pacific population of the northern blue whale ("B. m. musculus") subspecies that ranges from Alaska to Costa Rica, but is most commonly seen from California in summer. Infrequently, this population visits the northwest Pacific between Kamchatka and the northern tip of Japan. In the North Atlantic, two stocks of "B. m. musculus" are recognised. The first is found off Greenland, Newfoundland, Nova Scotia and the Gulf of Saint Lawrence. This group is estimated to total about 500. The second, more easterly group is spotted from the Azores in spring to Iceland in July and August; it is presumed the whales follow the Mid-Atlantic Ridge between the two volcanic islands. Beyond Iceland, blue whales have been spotted as far north as Spitsbergen and Jan Mayen, though such sightings are rare. Scientists do not know where these whales spend their winters. The total North Atlantic population is estimated to be between 600 and 1,500. Off Ireland, the first confirmed sightings were made in 2008; since then, the Porcupine Seabight has been regarded as a prominent habitat for the species, along with fin whales. One was sighted off Galicia, Spain, in 2017. Occasionally, blue whales stray into the Baltic Sea. The remnants of two blue whales have been identified in Finland. One 19-meter skeleton was found on the bottom of the Gulf of Finland during the construction of the Nord Stream pipeline. The vertebrae of a second individual were recovered from a field near Pori on the coast of the Bothnian Sea in 1942 and identified in the Copenhagen Zoological Museum by Magnus Degerbøl. Radiocarbon dating revealed the former had wandered into the Littorina Sea over 7000 years ago and perished, while the latter bones were shown to be 800–900 years old. Five or more subpopulations have been suggested, and several of these, mainly in the western North Pacific, have been considered either functionally or virtually extinct. Of the populations that once existed off coastal Japan, the last recorded confirmed stranding was in the 1910s. Today, call types suggest only two populations in the North Pacific. Some scientists consider that historical populations off Japan were driven to extinction by whaling activities, mostly from the Kumanonada Sea off Wakayama, in the , and in the Sea of Hyūga. Nowadays, possible vagrants from either eastern or offshore populations are observed on very rare occasions off Kushiro. There were also small but constant catch records around the Korean Peninsula and in the coastal waters of the Sea of Japan; this species is normally considered not to frequent marginal seas, such as the Sea of Okhotsk, on its usual migrations. Whales were known to migrate further north to eastern Kamchatka, the Gulf of Anadyr, off Abashiri or the southern Sea of Okhotsk, and the Commander Islands. Only three sightings were made in Russia between 1994 and 2004, with one sighted off the east coast of the peninsula in 2009, and the last known occurrence in the eastern Sea of Okhotsk was in 1948. 
In addition, whales have not been confirmed off the Commander Islands for more than 80 years. In 2017, 13 or more whales were observed off Kamchatka and the Commander Islands. Historically, wintering grounds existed off the Hawaiian Archipelago, the Northern Mariana Islands, the Bonin Islands and Ryukyu Islands, the Philippines, Taiwan, the Zhoushan Archipelago, and the South China Sea, including Daya Bay, the Leizhou Peninsula, and Hainan Island, and further south to the Paracel Islands. A stranding was recorded in Wanning in 2005. One whale was sighted off Weizhou Island in 2017. For further status in Chinese and Korean waters, see "Wildlife of China". As of 2014, the eastern North Pacific blue whale population had rebounded to an estimated 2,200 individuals, which is thought to be about 97% of its pre-whaling numbers. In the Southern Hemisphere, there appear to be two distinct subspecies, "B. m. intermedia", the Antarctic blue whale, and the little-studied pygmy blue whale, "B. m. brevicauda", found in Indian Ocean waters. The most recent surveys (midpoint 1998) provided an estimate of 2,280 blue whales in the Antarctic (of which fewer than 1% are likely to be pygmy blue whales). Estimates from a 1996 survey show that 424 pygmy blue whales were in a small area south of Madagascar (the Madagascar Plateau) alone, thus it is likely that numbers in the entire Indian Ocean are in the thousands. If this is true, the global numbers would be much higher than estimates predict. However, the species' slow reproduction rate, along with the impacts of whaling, may hinder population recovery; the total population is predicted to remain at less than 50% of its pre-whaling size by 2100. Several congregating grounds have recently been confirmed in Oceania, such as the Perth Canyon off Rottnest Island, the Great Australian Bight off Portland, and the South Taranaki Bight and waters off Kahurangi Point; the last, discovered in 2007 and confirmed in 2014, possibly represents a distinct population based on haplotypes. Southern blue and pygmy blue females use waters off Western Australia, and coastal areas of the eastern North Island of New Zealand, from Northland waters such as the Bay of Islands and the Hauraki Gulf to the Bay of Plenty in the south, as breeding and calving grounds. Whales off southern and western Australia are known to migrate into tropical coastal waters off Indonesia, the Philippines, and East Timor. (Animals in the Philippines, where whales regularly appear off Bohol, north of the Equator, may originate either from North Pacific populations or from a pygmy blue population in the northern Indian Ocean.) On occasion, whales also migrate past remote islands such as the Cook Islands, and through Chilean pelagic waters adjacent to Easter Island and Isla Salas y Gómez, where the possibility of undiscovered wintering grounds has been considered. Blue whales also migrate through western African waters, such as off Angola and Mauritania, and at least some whales around Iceland are known to migrate to Mauritania. A fourth subspecies, "B. m. indica", was identified by Blyth in 1859 in the northern Indian Ocean, but is now thought to be the same subspecies as "B. m. brevicauda", the pygmy blue whale. Records for Soviet catches seem to indicate that the female adult size is closer to that of the pygmy blue than "B. m. musculus"; the populations of "B. m. indica" and "B. m. brevicauda" appear to be discrete, and the breeding seasons differ by almost six months. 
Along mainland Indian coasts, appearances of whales have been scarce, excluding unconfirmed records; in May 2015 a blue whale was sighted off Kunkeshwar, along with several Bryde's whales, the first record there since the last stranding in Maharashtra in 1914. Migratory patterns of these subspecies are not well known. For example, pygmy blue whales have been recorded in the northern Indian Ocean (Oman, Maldives and Sri Lanka), where they may form a distinct resident population. Furthermore, sightings have been recorded from elsewhere in and adjacent to the Arabian Sea, including the Gulf of Aden, the Persian Gulf, and the coasts of the Bay of Bengal from Bangladesh to Myanmar, as well as within the Strait of Malacca. The first official confirmation within Thailand's EEZ occurred at Trang in 2013. In addition, the population of blue whales occurring off Chile and Peru may also be a distinct subspecies. Some Antarctic blue whales approach the eastern South Atlantic coast in winter, and occasionally, their vocalizations are heard off Peru, Western Australia, and in the northern Indian Ocean. In Chile, the Cetacean Conservation Center, with support from the Chilean Navy, is undertaking extensive research and conservation work on a recently discovered feeding aggregation of the species off the coast of Chiloé Island in the Gulf of Corcovado (Chiloé National Park), where 326 blue whales were spotted in 2007. In this region, it is normal for blue whales to enter fjords. Whales also reach southern Los Lagos, such as the waters off Caleta Zorra, where they live alongside other rorquals. Efforts to calculate the blue whale population more accurately are supported by marine mammalogists at Duke University, who maintain the Ocean Biogeographic Information System—Spatial Ecological Analysis of Megavertebrate Populations (OBIS-SEAMAP), a collation of marine mammal sighting data from around 130 sources. Due to their enormous size, power and speed, adult blue whales have virtually no natural predators. There is one documented case in "National Geographic Magazine" of a blue whale being attacked by orcas off the Baja California Peninsula; the orcas were unable to kill the animal outright during their attack, but the blue whale sustained serious wounds and probably died as a result of them shortly after the attack. In March 2014, a pack of orcas harassed a blue whale off California, with one of them biting the tip of the blue whale's tail fluke. The blue whale attempted to tail-slap the orca and fled at high speed. A similar incident happened on May 18, 2017, in Monterey Bay, with the orcas swimming in a line up to the blue whale's side. The blue whale fled and escaped. Orcas have virtually no chance of killing an adult blue whale, but may nonetheless attack one on occasion. Up to a quarter of the blue whales identified in Baja bear scars from orca attacks. Blue whales may be wounded, sometimes fatally, after colliding with ocean vessels, as well as becoming trapped or entangled in fishing gear. Ship strikes in particular have killed many blue whales in California. In September 2007, three dead blue whales washed up in southern California after being killed by ship strikes. A similar incident occurred in northern California in 2017, and possibly also in 2010. Ship strikes are also a serious problem in Sri Lanka, and scientists believe this problem could be nearly eliminated by moving the shipping lanes 15 nautical miles to the south. 
The ever-increasing amount of ocean noise, including sonar, drowns out the vocalizations produced by whales, which makes it harder for them to communicate. Blue whales stop producing foraging D calls once a mid-frequency sonar is activated, even though the sonar frequency range (1–8 kHz) far exceeds their sound production range (25–100 Hz). Research on blue whales in the Southern California Bight has shown that mid-frequency sonar use interferes with foraging behavior, sometimes causing the whales to abandon their feeding. Human threats to the potential recovery of blue whale populations also include accumulation of polychlorinated biphenyl (PCB) chemicals within the whale's body. With global warming causing glaciers and permafrost to melt rapidly and allowing a large amount of fresh water to flow into the oceans, there are concerns that if the amount of fresh water in the oceans reaches a critical point, there will be a disruption in the thermohaline circulation. Considering that the blue whale's migratory patterns are based on ocean temperature, a disruption in this circulation, which moves warm and cold water around the world, would be likely to have an effect on their migration. The whales summer in the cool, high latitudes, where they feed in krill-abundant waters; they winter in warmer, low latitudes, where they mate and give birth. The change in ocean temperature would also affect the blue whale's food supply. The warming trend and decreased salinity levels would cause a significant shift in krill location and abundance. Before modern whaling began in the late 19th century, blue whales were obscure and very poorly understood. They, along with other large whales, were mythologized and indistinguishable from sea monsters. Even after the first specimens were described, the blue whale went by many names and scientists often described their specimens as new species. Descriptions and illustrations were inaccurate. The taxonomy was eventually resolved, but many mysteries remained. The blue whale rose from relative obscurity in the 1970s, after the release of the album "Songs of the Humpback Whale". The difference between the humpback and the blue whale was not well known to most people, who conflated the species in their minds. To many listeners, the haunting songs seemed like a cry for help. Environmentalists developed the image of a lonely blue whale singing out in an empty ocean, since whalers had nearly hunted the species to extinction. Today, the blue whale is renowned for being the largest animal ever to have lived. In part because of its legendary status, many misconceptions still exist. For example, its size is often exaggerated. Many popular sources give the maximum length as or more. While such lengths were reported in the whaling records, they were not scientifically verified and were probably exaggerated. Virtually all books and articles that mention the blue whale claim that it can reach . This is probably true (the largest verified specimen was ), but it is highly misleading, as the average size is much smaller. The global population was reduced by more than 99% during the 20th century. Most of this decline was in the Antarctic, where the population had been reduced to 360 individuals, or about 0.15% of its original numbers; other populations were not as badly depleted. The Antarctic blue whale population is growing at the relatively rapid rate of about 7.3% per year, but it was hunted to such a low level that it remains at a tiny fraction of pre-whaling numbers. 
The global population still requires protection, but it is not in immediate danger of extinction. Other myths and misconceptions include that the blue whale's arteries can be swum through, that the heartbeat can be heard from 19 miles away, and that the blue whale penis is 16 feet long. The Natural History Museum in London contains a famous mounted skeleton and life-size model of a blue whale, which were both the first of their kind in the world, but have since been replicated at the University of California, Santa Cruz. Similarly, the American Museum of Natural History in New York City has a full-size model in its Milstein Family Hall of Ocean Life. A juvenile blue whale skeleton named KOBO is installed at the New Bedford Whaling Museum in New Bedford, Massachusetts. The Aquarium of the Pacific in Long Beach, California, features a life-size model of a mother blue whale with her calf suspended from the ceiling of its main hall. The Beaty Biodiversity Museum at the University of British Columbia, Canada, houses a display of a blue whale skeleton (skull is cast replica) directly on the main campus boulevard. A real skeleton of a blue whale at the Canadian Museum of Nature in Ottawa was also unveiled in May 2010. The Royal Ontario Museum obtained a blue whale, which died in Newfoundland and Labrador in 2014, and began an exhibition in 2016 to display its skeleton in Toronto. The Museum of Natural History in Gothenburg, Sweden, contains the only stuffed blue whale in the world, with its skeleton mounted beside it. Two skeletons are held in Ukraine: in Odessa and Kherson. The Melbourne Museum and Museum of New Zealand Te Papa Tongarewa both feature a skeleton of the pygmy blue whale. The North Carolina Museum of Natural Sciences in Raleigh, North Carolina, has a mounted skeleton of a blue whale. The South African Museum in Cape Town features a hall devoted to whales and natural history known as the Whale Well. The centrepiece of the exhibit is a suspended mounted blue whale skeleton, from the carcass of a partially mature specimen that washed ashore in the mid 1980s. The skeleton's jaw bones are mounted on the floor of Whale Well to permit visitors direct contact with them, and to walk between them so as to appreciate the size of the animal. Other mounted skeletons include that of a humpback whale and a right whale, together with in-scale composite models of other whales, dolphins and porpoises. The Tokyo National Museum in Ueno Park displays a life-sized model of a blue whale in the front. Several other institutions such as Tokai University and Taiji Whale Museum hold skeletons or skeleton models of Pygmy blue whales, while several churches and buildings in western Japan including Nagasaki Prefecture display the jawbone of captured animals as a gate. Blue whales may be encountered (but rarely) on whale-watching cruises in the Gulf of Maine and are the main attractions along the north shore of the Gulf of Saint Lawrence and in the Saint Lawrence estuary. Blue whales can also be seen off Southern California, starting as early as March and April, with the peak between July and September. More whales have been observed close to shore along with fin whales. In Chile, the Alfaguara project combines conservation measures for the population of blue whales feeding off Chiloé Island with whale watching and other ecotourism activities that bring economic benefits to the local people. Whale-watching, principally blue whales, is also carried out south of Sri Lanka. 
Whales are widely seen close to shore along the coasts of Chile and Peru, occasionally forming mixed groups with fin, sei, and Bryde's whales. In Australia, pygmy blue and Antarctic blue whales have been observed from tours along almost all the coastlines of the continent. Among these, the tours with the highest sighting rates are on the west coast, such as in Geographe Bay, and in the southern bight off Portland; for the latter, special helicopter tours to observe pygmy blues are organized. In New Zealand, whales have been seen in many areas close to shore, most notably around the Northland coast, in the Hauraki Gulf and the Bay of Plenty. Brabham Brabham is the common name for Motor Racing Developments Ltd., a British racing car manufacturer and Formula One racing team. Founded in 1960 by two Australians, driver Jack Brabham and designer Ron Tauranac, the team won four Drivers' and two Constructors' World Championships in its 30-year Formula One history. Jack Brabham's 1966 FIA Drivers' Championship remains the only such achievement using a car bearing the driver's own name. In the 1960s, Brabham was the world's largest manufacturer of open-wheel racing cars for sale to customer teams; by 1970 it had built more than 500 cars. During this period, teams using Brabham cars won championships in Formula Two and Formula Three. Brabham cars also competed in the Indianapolis 500 and in Formula 5000 racing. In the 1970s and 1980s, Brabham introduced such innovations as in-race refuelling, carbon brakes, and hydropneumatic suspension. Its unique Gordon Murray-designed "fan car" won its only race before being withdrawn. The team won two more Formula One Drivers' Championships in the 1980s with Brazilian Nelson Piquet. He won his first championship in 1981 in the ground effect BT49-Ford, and became the first to win a Drivers' Championship with a turbocharged car, in 1983. In 1983 the Brabham BT52, driven by Piquet and Italian Riccardo Patrese and powered by the BMW M12 straight-4 engine, took four of the team's 35 Grand Prix victories. British businessman Bernie Ecclestone owned Brabham during most of the 1970s and 1980s, and later became responsible for administering the commercial aspects of Formula One. Ecclestone sold the team in 1988. Its last owner was the Middlebridge Group, a Japanese engineering firm. Midway through the 1992 season, the team collapsed financially as Middlebridge was unable to make repayments against loans provided by Landhurst Leasing. The case was investigated by the United Kingdom Serious Fraud Office. In 2009, an unsuccessful attempt was made by a German organisation to enter the 2010 Formula One season using the Brabham name. The Brabham team was founded by Jack Brabham and Ron Tauranac, who met in 1951 while both were successfully building and racing cars in their native Australia. Brabham was the more successful driver and went to the United Kingdom in 1955 to further his racing career. There he started driving for the Cooper Car Company works team and by 1958 had progressed with them to Formula One, the highest category of open-wheel racing defined by the Fédération Internationale de l'Automobile (FIA), motor sport's world governing body. In 1959 and 1960, Brabham won the Formula One World Drivers' Championship in Cooper's revolutionary mid-engined cars. Despite their innovation of putting the engine behind the driver, the Coopers and their chief designer, Owen Maddock, were generally resistant to developing their cars. 
Brabham pushed for further advances, and played a significant role in developing Cooper's highly successful 1960 T53 "lowline" car, with input from his friend Tauranac. Brabham was confident he could do better than Cooper, and in late 1959 he asked Tauranac to come to the UK and work with him, initially producing upgrade kits for Sunbeam Rapier and Triumph Herald road cars at his car dealership, Jack Brabham Motors, but with the long-term aim of designing racing cars. Brabham describes Tauranac as "absolutely the only bloke I'd have gone into partnership with". Later, Brabham offered a Coventry-Climax FWE-engined version of the Herald, with and uprated suspension to match the extra power. To meet that aim, Brabham and Tauranac set up Motor Racing Developments Ltd. (MRD), deliberately avoiding the use of either man's name. The new company would compete with Cooper in the market for customer racing cars; as Brabham was still employed by Cooper, Tauranac produced the first MRD car, for the entry level Formula Junior class, in secrecy. Unveiled in the summer of 1961, the "MRD" was soon renamed. Motoring journalist Jabby Crombac pointed out that "[the] way a Frenchman pronounces those initials—written phonetically, 'em air day'—sounded perilously like the French word... "."" Gavin Youl achieved a second-place finish at Goodwood and another at Mallory Park in the MRD-Ford. The cars were subsequently known as Brabhams, with type numbers starting with BT for "Brabham Tauranac". By the 1961 Formula One season, the Lotus and Ferrari teams had developed the mid-engined approach further than Cooper. Brabham had a poor season, scoring only four points, and—having run his own private Coopers in non-championship events during 1961—left the company in 1962 to drive for his own team: the Brabham Racing Organisation, using cars built by Motor Racing Developments. The team was based at Chessington, England and held the British nationality. Motor Racing Developments initially concentrated on making money by building cars for sale to customers in lower formulae, so the new car for the Formula One team was not ready until partway through the 1962 Formula One season. The Brabham Racing Organisation (BRO) started the year fielding a customer Lotus chassis, which was delivered at 3:00 am in order to keep it a secret. Brabham took two points finishes in Lotuses, before the turquoise-liveried Brabham BT3 car made its debut at the 1962 German Grand Prix. It retired with a throttle problem after 9 of the 15 laps, but went on to take a pair of fourth places at the end of the season. From the 1963 season, Brabham was partnered by American driver Dan Gurney, the pair now running in Australia's racing colours of green and gold. Brabham took the team's first win at the non-championship Solitude Grand Prix in 1963. Gurney took the marque's first two wins in the world championship, at the 1964 French and Mexican Grands Prix. Brabham works and customer cars took another three non-championship wins during the 1964 season. The 1965 season was less successful, with no championship wins. Brabham finished third or fourth in the Constructors' Championship for three years running, but poor reliability marred promising performances on several occasions. Motor sport authors Mike Lawrence and David Hodges have said that a lack of resources may have cost the team results, a view echoed by Tauranac. The FIA doubled the Formula One engine capacity limit to 3 litres for the 1966 season and suitable engines were scarce. 
Brabham used engines from Australian engineering firm Repco, which had never produced a Formula One engine before, based on aluminium V8 engine blocks from the defunct American Oldsmobile F85 road car project, and other off-the-shelf parts. Consulting and design engineer Phil Irving (of Vincent Motorcycle fame) was the project engineer responsible for producing the initial version of the engine. Few expected the Brabham-Repcos to be competitive, but the light and reliable cars ran at the front from the start of the season. At the French Grand Prix at Reims-Gueux, Brabham became the first man to win a Formula One world championship race in a car bearing his own name. Only his former teammate, Bruce McLaren, has since matched the achievement. It was the first in a run of four straight wins for the Australian veteran. Brabham won his third title in 1966, becoming the only driver to win the Formula One World Championship in a car carrying his own name ("cf" Surtees, Hill and Fittipaldi Automotive). In 1967, the title went to Brabham's teammate, New Zealander Denny Hulme. Hulme had better reliability through the year, possibly due to Brabham's desire to try new parts first. The Brabham team took the Constructors' World Championship in both years. For 1968, Austrian Jochen Rindt replaced Hulme, who had left to join McLaren. Repco produced a more powerful version of their V8 to maintain competitiveness against Ford's new Cosworth DFV, but it proved very unreliable. Slow communications between the UK and Australia had always made identifying and correcting problems very difficult. The car was fast—Rindt set pole position twice during the season—but Brabham and Rindt finished only three races between them, and ended the year with only ten points. Although Brabham bought Cosworth DFV engines for the 1969 season, Rindt left to join Lotus. His replacement, Jacky Ickx, had a strong second half to the season, winning in Germany and Canada, after Brabham was sidelined by a testing accident. Ickx finished second in the Drivers' Championship, with 37 points to Jackie Stewart's 63. Brabham himself took a couple of pole positions and two top-3 finishes, but did not finish half the races. The team were second in the Constructors' Championship, aided by second places at Monaco and Watkins Glen scored by Piers Courage, driving a Brabham for the Frank Williams Racing Cars privateer squad. Brabham intended to retire at the end of the 1969 season and sold his share in the team to Tauranac. However, Rindt's late decision to remain with Lotus meant that Brabham drove for another year. He took his last win in the opening race of the 1970 season and was competitive throughout the year, although mechanical failures blunted his challenge. Aided by number-two driver Rolf Stommelen, the team came fourth in the Constructors' Championship. Tauranac signed double world champion Graham Hill and young Australian Tim Schenken to drive for the 1971 season. Tauranac designed the unusual 'lobster claw' BT34, featuring twin radiators mounted ahead of the front wheels, a single example of which was built for Hill. Although Hill, no longer a front-runner since his 1969 accident, took his final Formula One win in the non-championship BRDC International Trophy at Silverstone, the team scored only seven championship points. Tauranac, an engineer at heart, started to feel his Formula One budget of around £100,000 was a gamble he could not afford to take on his own and began to look around for an experienced business partner. 
He sold the company for £100,000 at the end of 1971 to British businessman Bernie Ecclestone, Rindt's former manager and erstwhile owner of the Connaught team. Tauranac stayed on to design the cars and run the factory. Tauranac left Brabham early in the 1972 season after Ecclestone changed the way the company was organised without consulting him. Ecclestone has since said "In retrospect, the relationship was never going to work", noting that "[Tauranac and I] both take the view: 'Please be reasonable, do it my way'". The highlights of an aimless year, during which the team ran three different models, were pole position for Argentinian driver Carlos Reutemann at his home race at Buenos Aires and a victory in the non-championship Interlagos Grand Prix. For the 1973 season, Ecclestone promoted the young South African engineer Gordon Murray to chief designer and moved Herbie Blash from the Formula Two programme to become the Formula One team manager. Both would remain with the team for the next 15 years. For 1973, Murray produced the triangular cross-section BT42, with which Reutemann scored two podium finishes and finished seventh in the Drivers' Championship. In the 1974 season, Reutemann took the first three victories of his Formula One career, and Brabham's first since 1970. The team finished a close fifth in the Constructors' Championship, fielding the much more competitive BT44s. After a strong finish to the 1974 season, many observers felt the team were favourites to win the 1975 title. The year started well, with a first win for Brazilian driver Carlos Pace at the Interlagos circuit in his native São Paulo. However, as the season progressed, tyre wear frequently slowed the cars in races. Pace took another two podiums and finished sixth in the championship; while Reutemann had five podium finishes, including a dominant win in the 1975 German Grand Prix, and finished third in the Drivers' Championship. The team likewise ranked third in the Constructors' Championship at the end of the year. While rival teams Lotus and McLaren relied on the Cosworth DFV engine from the late 1960s to the early 1980s, Ecclestone sought a competitive advantage by investigating other options. Despite the success of Murray's Cosworth-powered cars, Ecclestone signed a deal with Italian motor manufacturer Alfa Romeo to use their large and powerful flat-12 engine from the 1976 season. The engines were free, but they rendered the new BT45s, now in red Martini Racing livery, unreliable and overweight. The 1976 and 1977 seasons saw Brabham fall toward the back of the field again. Reutemann negotiated a release from his contract before the end of the 1976 season and signed with Ferrari. Ulsterman John Watson replaced him at Brabham for 1977. Watson lost near certain victory in the French Grand Prix (Dijon) of that year when his car ran low on fuel on the last lap and was passed by Mario Andretti's Lotus, with Watson's second place being the team's best result of the season. The car often showed at the head of races, but the unreliability of the Alfa Romeo engine was a major problem. The team lost Pace early in the 1977 season when he died in a light aircraft accident. For the 1978 season, Murray's BT46 featured several new technologies to overcome the weight and packaging difficulties caused by the Alfa Romeo engines. 
Ecclestone signed the then two-time Formula One world champion Niki Lauda from Ferrari through a deal with Italian dairy products company Parmalat, which met the cost of Lauda ending his Ferrari contract and made up his salary to the £200,000 Ferrari was offering. 1978 was the year of the dominant Lotus 79 "wing car", which used aerodynamic ground effect to stick to the track when cornering, but Lauda won two races in the BT46, one with the controversial "B" or "fan car" version. The partnership with Alfa Romeo ended during the 1979 season, the team's first with young Brazilian driver Nelson Piquet. Murray designed the full-ground effect BT48 around a rapidly developed new Alfa Romeo V12 engine and incorporated an effective "carbon-carbon braking" system—a technology Brabham pioneered in 1976. However, unexpected movement of the car's aerodynamic centre of pressure made its handling unpredictable and the new engine was unreliable. The team dropped to eighth in the Constructors' Championship by the end of the season. Alfa Romeo started testing their own Formula One car during the season, prompting Ecclestone to revert to Cosworth DFV engines, a move Murray described as being "like having a holiday". The new, lighter, Cosworth-powered BT49 was introduced before the end of the year at the Canadian Grand Prix, where after practice Lauda announced his immediate retirement from driving, later saying that he "was no longer getting any pleasure from driving round and round in circles". The team used the BT49 over four seasons. In the 1980 season Piquet scored three wins and the team took third in the Constructors' Championship, with Piquet second in the Drivers' Championship. This season saw the introduction of the blue and white livery that the cars would wear through several changes of sponsor, until the team's demise in 1992. With a better understanding of ground effect, the team further developed the BT49C for the 1981 season, incorporating a hydropneumatic suspension system to avoid ride height limitations intended to reduce downforce. Piquet, who had developed a close working relationship with Murray, took the drivers' title with three wins, albeit amid accusations of cheating. The team finished second in the Constructors' Championship, behind the Williams team. Renault had introduced turbocharged engines to Formula One in 1977. Brabham had tested a BMW four-cylinder M12 turbocharged engine in the summer of 1981. For the 1982 season the team designed a new car, the BT50, around the BMW engine which, like the Repco engine 16 years before, was based on a road car engine block, the BMW M10. Brabham continued to run the Cosworth-powered BT49D in the early part of the season while reliability and driveability issues with the BMW units were resolved. The relationship came close to ending, with the German manufacturer insisting that Brabham use their engine. The turbo car took its first win at the Canadian Grand Prix. The team finished fifth in the Constructors' Championship; in the Drivers' Championship, Riccardo Patrese, who scored the last win of the Brabham-Ford combination at the Monaco Grand Prix, was 10th, and reigning World Champion Piquet a mere 11th. In the 1983 season, Piquet took the championship lead from Renault's Alain Prost at the last race of the year, the South African Grand Prix, to become the first driver to win the Formula One Drivers' World Championship with a turbo-powered car. The team did not win the Constructors' Championship in either 1981 or 1983, despite Piquet's success. 
Riccardo Patrese was the only driver other than Piquet to win a race for Brabham in this period—the drivers in the second car contributed only a fraction of the team's points in each of these championship seasons. Patrese finished ninth in the Drivers' Championship with 13 points, dropping the team behind Ferrari and Renault to third in the Constructors' Championship. Piquet took the team's last wins: two in 1984 by winning the seventh and eighth races of that season, the Canadian Grand Prix and the Detroit Grand Prix, and one in 1985 by winning the French Grand Prix, before reluctantly leaving for the Williams team at the end of the season. After seven years and two world championships, he felt he was worth more than Ecclestone's salary offer for 1986. Piquet finished fifth in 1984 and a mere eighth in 1985 in the respective Drivers' Championships. The 1986 season was a disaster for Brabham. Murray's radical long and low BT55, with its BMW M12 engine tilted over to improve its aerodynamics and lower its centre of gravity, scored only two points. Driver Elio de Angelis became the Formula One team's only fatality when he died in a testing accident at the Paul Ricard circuit. Derek Warwick, who replaced de Angelis, was close to scoring two points for fifth in the British Grand Prix, but a problem on the last lap dropped him out of the points. In August, BMW, after considering running their own in-house team, announced their departure from Formula One at the end of the season. Murray, who had largely taken over the running of the team as Ecclestone became more involved with his role at the Formula One Constructors Association, felt that "the way the team had operated for 15 years broke down". He left Brabham in November to join McLaren. Ecclestone held BMW to their contract for the 1987 season, but the German company would only supply the laydown engine. The upright units, around which Brabham had designed their new car, were sold for use by the Arrows team. Senior figures at Brabham, including Murray, have admitted that by this stage Ecclestone had lost interest in running the team. The 1987 season was only slightly more successful than the previous year—Patrese and Andrea de Cesaris scoring 10 points between them, including two third places at the Belgian Grand Prix and the Mexican Grand Prix. Unable to locate a suitable engine supplier, the team missed the FIA deadline for entry into the 1988 world championship and Ecclestone finally announced the team's withdrawal from Formula One at the Brazilian Grand Prix in April 1988. During the season-ending Australian Grand Prix, Ecclestone announced he had sold MRD to EuroBrun team owner Walter Brun for an unknown price. Brun soon sold the team on, this time to Swiss financier Joachim Luhti, who brought it back into Formula One for the 1989 season. The new Brabham BT58, powered by a V8 engine from Judd (originally another of Jack Brabham's companies), was produced for the 1989 season. Italian driver Stefano Modena, who had made a one-off appearance for the team at the 1987 Australian Grand Prix, drove alongside the more experienced Martin Brundle, who was returning to Formula One after spending 1988 winning the World Sportscar Championship for Jaguar. Modena took the team's last podium: a third place at the Monaco Grand Prix (Brundle, who had only just scraped through pre-qualifying by 0.021 seconds before qualifying a brilliant 4th, had been running third but was forced to stop to replace a flat battery, finally finishing sixth). 
The team also sometimes failed to make the grid: Brundle failed to prequalify at the Canadian Grand Prix and the French Grand Prix. The team finished 9th in the Constructors' Championship at the end of the season. After Luhti's arrest on tax fraud charges in mid-1989, several parties disputed the ownership of the team. Middlebridge Group Limited, a Japanese engineering firm owned by billionaire Koji Nakauchi, was already involved with established Formula 3000 team Middlebridge Racing and gained control of Brabham for the 1990 season. Herbie Blash had returned to run the team in 1989 and continued to do so in 1990. Middlebridge paid for its purchase using £1 million loaned to them by finance company Landhurst Leasing, but the team remained underfunded and would only score a few more points finishes in its last three seasons. Jack Brabham's youngest son, David, raced for the Formula One team for a short time in 1990, including at the season-ending Australian Grand Prix (the first time a Brabham had driven a Brabham car in an Australian Grand Prix since 1968). 1990 was a disastrous year, with Modena's fifth place in the season-opening United States Grand Prix being the only top six finish. The team finished ninth in the Constructors' Championship. Brundle and fellow Briton Mark Blundell scored only three points during the 1991 season. Due to poor results in the first half of 1991, they had to prequalify in the second half of the season; Blundell failed to do so in Japan, as did Brundle in Australia. The team finished 10th in the Constructors' Championship, behind another struggling British team, Lotus. The 1992 season started with Eric van de Poele and Giovanna Amati after Akihiko Nakaya was denied a superlicence. Damon Hill, the son of another former Brabham driver and World Champion, debuted in the team after Amati was dropped when her sponsorship failed to materialise. Amati, the fifth woman to race in Formula One, ended her career with three DNQs. Argentine Sergio Rinland designed the team's final cars around Judd engines, except for 1991 when Yamaha powered the cars. In the 1992 season the cars (which were updated versions of the 1991 car) rarely qualified for races. Hill gave the team its final finish, at the Hungarian Grand Prix, where he crossed the finish line 11th and last, four laps behind the winner, Ayrton Senna. After the end of that race the team ran out of funds and collapsed. Middlebridge Group Limited had been unable to continue making repayments against the £6 million ultimately provided by Landhurst Leasing, which went into administration. The Serious Fraud Office investigated the case. Landhurst's managing directors were found guilty of corruption and imprisoned, having accepted bribes for further loans to Middlebridge. Brabham was one of four teams to leave Formula One that year (cf. March Engineering, Fondmetal and Andrea Moda Formula). Although there was talk of reviving the team for the following year, its assets passed to Landhurst Leasing and were auctioned by the company's receivers in 1993. Among these was the team's old factory in Chessington, which was acquired by Yamaha Motor Sports and used to house Activa Technology Limited, a company run by Herbie Blash that manufactured composite components for race and road cars. The factory was bought by the Carlin DPR GP2 motor racing team in 2006. On 4 June 2009, Franz Hilmer confirmed that he had used the name to lodge an entry for the 2010 Formula One season as a cost-capped team under the new budget cap regulations. 
The Brabham family was not involved and announced that it was seeking legal advice over the use of the name. The team's entry was not accepted, and the Brabham family later obtained legal recognition of their exclusive rights to the Brabham brand. In September 2014, David Brabham—the son of Brabham founder Sir Jack Brabham—announced the reformation of the Brabham Racing team under the name Project Brabham, with plans to enter the 2015 FIA World Endurance Championship and 2015 24 Hours of Le Mans in the LMP2 category using a crowdsourcing business model. The team was also aiming to enter the FIA Formula E Championship and return to Formula One in the future. Brabham cars were also widely used by other teams, and not just in Formula One. Motor Racing Developments (MRD), the company Jack Brabham and Ron Tauranac set up in 1961 to design and build formula racing cars for customer teams, had a large portfolio of activities. Initially, Brabham and Tauranac each held 50 percent of the shares. Tauranac was responsible for design and running the business, while Brabham was the test driver and arranged corporate deals like the Repco engine supply and the use of the MIRA wind tunnel. He also contributed ideas to the design process and often machined parts and helped build the cars. From 1963 to 1965, MRD was not directly involved in Formula One, and often ran works cars in other formulae. A separate company, Jack Brabham's Brabham Racing Organisation (BRO), ran the Formula One works entry. Like other customers, BRO bought its cars from MRD, initially at £3,000 per car, although it did not pay for development parts. Tauranac was unhappy with his distance from the Formula One operation and before the 1966 season suggested that he was no longer interested in producing cars for Formula One under this arrangement. Brabham investigated other chassis suppliers for BRO; however, the two reached an agreement, and from 1966 MRD was much more closely involved in this category. After Jack Brabham sold his shares in MRD to Ron Tauranac at the end of 1969, the works Formula One team was MRD itself. Despite building its first car only in 1961, by the mid-1960s MRD had overtaken established constructors like Cooper to become the largest manufacturer of single-seat racing cars in the world, and by 1970 had built over 500 cars. Of the other Formula One teams which used Brabhams, Frank Williams Racing Cars and the Rob Walker Racing Team were the most successful. The 1965 British Grand Prix saw seven Brabhams compete, only two of them from the works team, and there were usually four or five at championship Grands Prix throughout that season. The firm built scores of cars for the lower formulae each year, peaking with 89 cars in 1966. Brabham had the reputation of providing customers with cars of a standard equal to those used by the works team, which worked "out of the box". The company provided a high degree of support to its customers—including Jack Brabham himself helping customers set up their cars. During this period the cars were usually known as "Repco Brabhams", not because of the Repco engines used in Formula One between 1966 and 1968, but because of a smaller-scale sponsorship deal through which the Australian company had been providing parts to Jack Brabham since his Cooper days. At the end of 1971 Bernie Ecclestone bought MRD. He retained the Brabham brand, as did subsequent owners. 
Although the production of customer cars continued briefly under Ecclestone's ownership, he believed the company needed to focus on Formula One to succeed. The last production customer Brabhams were the Formula Two BT40 and the Formula Three BT41 of 1973, although Ecclestone sold ex-works Formula One BT44Bs to RAM Racing as late as 1976. In 1988 Ecclestone sold Motor Racing Developments to Alfa Romeo. The Formula One team did not compete that year, but Alfa Romeo put the company to use designing and building a prototype "Procar"—a racing car with the silhouette of a large saloon (the Alfa Romeo 164) covering a composite racing car chassis and mid-mounted race engine. This was intended for a racing series for major manufacturers to support Formula One Grands Prix, and was designated the Brabham BT57. Brabham cars competed at the Indianapolis 500 from the mid-1960s to the early 1970s. After an abortive project in 1962, MRD was commissioned in 1964 to build an IndyCar chassis powered by an American Offenhauser engine. The resultant BT12 chassis was raced by Jack Brabham as the "Zink-Urschel Trackburner" at the 1964 event and retired with a fuel tank problem. The car was entered again in 1966, taking a third place for Jim McElreath. From 1968 to 1970, Brabham returned to Indianapolis, at first with a 4.2-litre version of the Repco V8 the team used in Formula One—with which Peter Revson finished fifth in 1969—before reverting to the Offenhauser engine for 1970. The Brabham-Offenhauser combination was entered again in 1971 by J.C. Agajanian, finishing fifth in the hands of Bill Vukovich II. Although no Brabham car ever won at Indianapolis, McElreath won four United States Automobile Club (USAC) races over 1965 and 1966 in the BT12. The "Dean Van Lines Special" in which Mario Andretti won the 1965 USAC national championship was a direct copy of this car, made with permission from Brabham by Andretti's crew chief Clint Brawner. Revson took Brabham's final USAC race win in a BT25 in 1969, using the Repco engine. In the 1960s and early 1970s, drivers who had reached Formula One often continued to compete in Formula Two. In 1966 MRD produced the BT18 for the lower category, with a Honda engine acting as a stressed component. The car was extremely successful, winning 11 consecutive Formula Two races in the hands of the Formula One pairing of Brabham and Hulme. Cars were entered by MRD and not by the Brabham Racing Organisation, avoiding a direct conflict with Repco, their Formula One engine supplier. The first Formula Three Brabham, the BT9, won only four major races in 1964. The BT15 which followed in 1965 was a highly successful design. 58 cars were sold, which won 42 major races. Further developments of the same concept, including wings by the end of the decade, were highly competitive up until 1971. The BT38C of 1972 was Brabham's first production monocoque and the first not designed by Tauranac. Although 40 were ordered, it was less successful than its predecessors. The angular BT41 was the final Formula Three Brabham. Brabham made one car for Formula 5000 racing, the Brabham BT43. Rolled out in late 1973 it was tested in early 1974 by John Watson at Silverstone before making its debut at the Rothmans F5000 Championship Round at Monza on June 30, 1974 driven by Martin Birrane. Former Australian Drivers' Champion Kevin Bartlett used the Chevrolet powered Brabham BT43 to finish 3rd in the 1978 Australian Drivers' Championship including finishing 5th in the 1978 Australian Grand Prix. 
Tauranac did not enjoy designing sports cars and could only spare a small amount of his time from MRD's very successful single-seater business. Only 14 sports car models were built between 1961 and 1972, out of a total production of almost 600 chassis. The BT8A was the only one built in any numbers, and was quite successful in national-level racing in the UK in 1964 and 1965. The design was "stretched" in 1966 to become the one-off BT17, originally fitted with the 4.3-litre version of the Repco engine for Can-Am racing. It was quickly abandoned by MRD after engine reliability problems became evident. Brabham was considered a technically conservative team in the 1960s, chiefly because it persevered with traditional "spaceframe" cars long after Lotus introduced lighter, stiffer "monocoque" chassis to Formula One in 1962. Chief designer Tauranac reasoned that monocoques of the time were not usefully stiffer than well-designed spaceframe chassis, and were harder to repair and less suitable for MRD's customers. His "old fashioned" cars won the Brabham team the 1966 and 1967 championships, and were competitive in Formula One until rule changes forced a move to monocoques in 1970. Despite the perceived conservatism, in 1963 Brabham was the first Formula One team to use a wind tunnel to hone their designs to reduce drag and stop the cars lifting off the ground at speed. The practice became the norm only in the early 1980s, and is possibly the most important factor in the design of modern cars. Towards the end of the 1960s, teams began to exploit aerodynamic downforce to push the cars' tyres down harder on the track and enable them to maintain faster speeds through high-speed corners. At the 1968 Belgian Grand Prix, Brabham were the first, alongside Ferrari, to introduce full-width rear wings to this effect. The team's most fertile period of technical innovation came in the 1970s and 1980s, when Gordon Murray became technical director. During 1976, the team introduced "carbon-carbon brakes" to Formula One, which promised reduced "unsprung weight" and better stopping performance due to carbon's greater coefficient of friction. The initial versions used carbon-carbon composite brake pads and a steel disc faced with carbon "pucks". The technology was not reliable at first; in 1976, Carlos Pace crashed at the Österreichring circuit after heat build-up in the brakes boiled the brake fluid, leaving him with no way of stopping the car. By 1979, Brabham had developed an effective carbon-carbon braking system, combining structural carbon discs with carbon brake pads. By the late 1980s, carbon brakes were used by all competitors in almost all top level motor sports. Although Brabham experimented with airdams and underbody skirts in the mid-1970s, the team, like the rest of the field, did not immediately understand Lotus's development of a ground effect car in 1977. The Brabham BT46B "fan car" of 1978 generated enormous downforce with a fan that sucked air from beneath the car, although its claimed purpose was engine cooling. The car raced only once in the Formula One World Championship—Niki Lauda winning the 1978 Swedish Grand Prix—before a loophole in the regulations was closed by the FIA. Although in 1979 Murray was the first to use lightweight "carbon fibre composite" panels to stiffen Brabham's aluminium alloy monocoques, he echoed his predecessor Tauranac in being the last to switch to the new fully composite monocoques. 
Murray was reluctant to build the entire chassis from composite materials until he understood their behaviour in a crash, an understanding achieved in part through an instrumented crash test of a BT49 chassis. The team did not follow McLaren's 1981 MP4/1 with their own fully composite chassis until the "lowline" BT55 in 1986, making them the last team to do so. This technology is now used in all top-level single-seater racing cars. For the 1981 season the FIA introduced a minimum ride height for the cars, intended to slow them in corners by limiting the downforce created by aerodynamic ground effect. Gordon Murray devised a "hydropneumatic suspension" system for the BT49C, which allowed the car to settle to a much lower ride height at speed. Brabham were accused of cheating by other teams, although Murray believes that the system met the letter of the regulations. No action was taken against the team, and others soon produced systems with similar effects. At the 1982 British Grand Prix, Brabham reintroduced the idea of refuelling and changing the car's tyres during the race, a practice unseen since the 1957 Formula One season, to allow their drivers to sprint away at the start of races on a light fuel load and soft tyres. After studying techniques used at the Indianapolis 500 and in NASCAR racing in the United States, the team were able to refuel and re-tyre the car in 14 seconds in tests ahead of the race. In 1982 Murray felt the tactic did little more than "get our sponsors noticed at races we had no chance of winning", but in 1983 the team made good use of it. Refuelling was banned for 1984 and did not reappear until the 1994 season (it was banned again from 2010 as part of cost-cutting measures), but tyre changes have remained part of Formula One. The fan car and hydropneumatic suspension exploited loopholes in the sporting regulations. In the early 1980s, Brabham was accused of going further and breaking the regulations. During 1981, Piquet's first championship year, rumours circulated of illegal underweight Brabham chassis. Driver Jacques Laffite was among those to claim that the cars were fitted with heavily ballasted bodywork before being weighed at scrutineering. The accusation was denied by Brabham's management. No formal protest was made against the team and no action was taken against them by the sporting authorities. From 1978, Ecclestone was president of the Formula One Constructors Association (FOCA), a body formed by the teams to represent their interests. This left his team open to accusations of having advance warning of rule changes. Ecclestone denies that the team benefited from this, and Murray has noted that, contrary to this view, at the end of 1982 the team had to abandon their new BT51 car, built on the basis that ground effect would be permitted in 1983. Brabham had to design and build a replacement, the BT52, in only three months. At the end of the 1983 season, Renault and Ferrari, both beaten to the Drivers' Championship by Piquet, protested that the Research Octane Number (RON) of the team's fuel was above the legal limit of 102. The FIA declared that a figure of up to 102.9 was permitted under the rules, and that Brabham had not exceeded this limit. 
Chess Chess is a two-player strategy board game played on a chessboard, a checkered gameboard with 64 squares arranged in an 8×8 grid. The game is played by millions of people worldwide. Chess is believed to have originated in India sometime before the 7th century. The game was derived from the Indian game chaturanga, which is also the likely ancestor of the Eastern strategy games xiangqi, janggi, and shogi. Chess reached Europe by the 9th century, due to the Moorish conquest of the Iberian Peninsula. The pieces assumed their current powers in Spain in the late 15th century; the rules were standardized in the 19th century. Play does not involve hidden information. Each player begins with 16 pieces: one king, one queen, two rooks, two knights, two bishops, and eight pawns. Each of the six piece types moves differently, with the most powerful being the queen and the least powerful the pawn. The objective is to "checkmate" the opponent's king by placing it under an inescapable threat of capture. To this end, a player's pieces are used to attack and capture the opponent's pieces, while supporting each other. During the game, play typically involves making exchanges of one piece for an opponent's similar piece, but also finding and engineering opportunities to trade one piece for two, or to get a better position. In addition to checkmate, the game can be won by the opponent's resignation, and there are also several ways a game can end in a draw. The first generally recognized World Chess Champion, Wilhelm Steinitz, claimed his title in 1886. Since 1948, the World Championship has been regulated by the Fédération Internationale des Échecs (FIDE), the game's international governing body. FIDE also awards lifetime master titles to skilled players, the highest of which is grandmaster. Many national chess organizations have a title system of their own. FIDE also organizes the Women's World Championship, the World Junior Championship, the World Senior Championship, the Blitz and Rapid World Championships, and the Chess Olympiad, a popular competition among international teams. FIDE is a member of the International Olympic Committee, which can be considered recognition of chess as a sport. Several national sporting bodies (for example the Spanish "Consejo Superior de Deportes") also recognize chess as a sport. Chess was included in the 2006 and 2010 Asian Games. There is also a Correspondence Chess World Championship and a World Computer Chess Championship. Online chess has opened amateur and professional competition to a wide and varied group of players. Since the second half of the 20th century, computers have been programmed to play chess with increasing success, to the point where the strongest personal computers play at a higher level than the best human players. Since the 1990s, computer analysis has contributed significantly to chess theory, particularly in the endgame. The IBM computer Deep Blue was the first machine to defeat a reigning World Chess Champion in a match when it beat Garry Kasparov in 1997. The rise of strong chess engines runnable on hand-held devices has led to increasing concerns about cheating during tournaments. There are many variants of chess that utilize different rules, pieces, or boards. One of these, Chess960 (originally named "Fischerandom"), includes the regular chess starting position, unchanged, among its 960 different possible start-up positions. Chess960 has gained widespread popularity as well as some FIDE recognition. 
The rules of chess are published by FIDE ("Fédération Internationale des Échecs"), chess's international governing body, in its Handbook. Rules published by national governing bodies, or by unaffiliated chess organizations, commercial publishers, etc., may differ. FIDE's rules were most recently revised in 2017. Chess is played on a square board of eight rows (called ranks and denoted with numbers "1" to "8") and eight columns (called files and denoted with letters "a" to "h"). The colors of the 64 squares alternate and are referred to as "light" and "dark" squares. The chessboard is placed with a light square at the right-hand end of the rank nearest to each player. By convention, the game pieces are divided into white and black sets, and the players are referred to as "White" and "Black" respectively. Each player begins the game with 16 pieces of the specified color, which consist of one king, one queen, two rooks, two bishops, two knights, and eight pawns. The pieces are set out as shown in the diagram and photo, with each queen on a square of its own color, the white queen on a light square and the black queen on a dark one. The player with the white pieces always moves first. After the first move, players alternately move one piece per turn (except for castling, when two pieces are moved). Pieces are moved to either an unoccupied square or one occupied by an opponent's piece, which is captured and removed from play. With the sole exception of "en passant", all pieces capture by moving to the square that the opponent's piece occupies. A player may not make any move that would put or leave the player's own king under attack. A player cannot "pass"; at each turn one must make a legal move (this is the basis for the finesse called zugzwang). If the player to move has no legal move, the game is over; it is either a checkmate (a loss for the player with no legal moves) if the king is under attack, or a stalemate (a draw) if the king is not. Each chess piece has its own way of moving. In the diagrams, the dots mark the squares where the piece can move if there are no intervening pieces of either color. Once in every game, each king is allowed to make a special move, known as "castling". Castling consists of moving the king two squares along the first rank toward a rook (which is on the player's first rank) and then placing the rook on the last square that the king has just crossed. Castling is permissible only if neither the king nor the rook has previously moved, the squares between the king and the rook are unoccupied, the king is not in check, and the king does not cross over or end up on a square attacked by an enemy piece. When a pawn advances two squares from its starting position and there is an opponent's pawn on an adjacent file next to its destination square, then the opponent's pawn can capture it "en passant" (in passing), and move to the square the pawn passed over. This can only be done on the very next move, otherwise the right to do so is forfeited. For example, in the animated diagram, the black pawn advances two squares from g7 to g5, and the white pawn on f5 can take it via "en passant" on g6 (but only on White's next move). When a pawn advances to the eighth rank, as a part of the move it is "promoted" and must be exchanged for the player's choice of queen, rook, bishop, or knight of the same color. Usually, the pawn is chosen to be promoted to a queen, but in some cases another piece is chosen; this is called underpromotion. In the animated diagram, the pawn on c7 can be advanced to the eighth rank and be promoted to an allowed piece. 
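As a rough illustration of the setup conventions just described, the starting arrangement and the square-colouring rule can be written out in a few lines of Python. This sketch is not part of the article; the piece-letter encoding (upper case for White, lower case for Black) is an assumption made for the example.

```python
# Illustrative sketch of the starting position and square colours described above.
FILES = "abcdefgh"
BACK_RANK = ["R", "N", "B", "Q", "K", "B", "N", "R"]  # rook, knight, bishop, queen, king, bishop, knight, rook

def square_colour(file_index: int, rank: int) -> str:
    """Return 'light' or 'dark'; file_index 0 and rank 1 is a1, which is dark."""
    return "light" if (file_index + rank) % 2 == 0 else "dark"

def initial_position() -> dict:
    """Map square names like 'e2' to piece letters; upper case is White, lower case is Black."""
    board = {}
    for i, piece in enumerate(BACK_RANK):
        board[FILES[i] + "1"] = piece          # White back rank
        board[FILES[i] + "2"] = "P"            # White pawns
        board[FILES[i] + "7"] = "p"            # Black pawns
        board[FILES[i] + "8"] = piece.lower()  # Black back rank
    return board

board = initial_position()
# A light square sits at the right-hand end of the rank nearest each player: h1 and a8 are light.
assert square_colour(7, 1) == "light" and square_colour(0, 8) == "light"
# Each queen starts on a square of its own colour: the white queen on light d1, the black queen on dark d8.
assert board["d1"] == "Q" and square_colour(3, 1) == "light"
assert board["d8"] == "q" and square_colour(3, 8) == "dark"
```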
There is no restriction placed on the piece that is chosen on promotion, so it is possible to have more pieces of the same type than at the start of the game (for example, two queens). When a king is under immediate attack by one or two of the opponent's pieces, it is said to be in "check". A response to a check is a legal move if it results in a position where the king is no longer under direct attack (that is, not in check). This can involve capturing the checking piece; interposing a piece between the checking piece and the king (which is possible only if the attacking piece is a queen, rook, or bishop and there is a square between it and the king); or moving the king to a square where it is not under attack. Castling is not a permissible response to a check. The object of the game is to checkmate the opponent; this occurs when the opponent's king is in check, and there is no legal way to remove it from attack. It is illegal for a player to make a move that would put or leave the player's own king in check. In casual games it is common to announce "check" when putting the opponent's king in check, but this is not required by the rules of the game, and is not usually done in tournaments. Games can be won in the following ways: by checkmate, by the opponent's resignation, by winning on time in a timed game, or by forfeit. There are several ways games can end in a draw: by stalemate, by agreement between the players, by threefold repetition of a position, by the fifty-move rule, or when neither player retains sufficient material to deliver checkmate. Chess games may also be played with a time control. If a player's time runs out before the game is completed, the game is automatically lost (provided the opponent has enough pieces left to deliver checkmate). The duration of a game ranges from long (or "classical") games, which can take up to seven hours (even longer if adjournments are permitted), to bullet chess (under 3 minutes per player for the entire game). Intermediate between these are rapid chess games, lasting between 20 minutes and two hours per game, a popular time control in amateur weekend tournaments. Time is controlled using a chess clock that has two displays, one for each player's remaining time. Analog chess clocks have been largely replaced by digital clocks, which allow for time controls with increments. Chess games and positions are recorded using a system of notation, most commonly algebraic chess notation. Abbreviated (or short) algebraic notation generally records moves in the format "abbreviation of the piece moved – file where it moved – rank where it moved". The pieces are identified by their initials. In English, these are K (King), Q (Queen), R (Rook), B (Bishop), and N (Knight; N is used to avoid confusion with King). For example, Qg5 means "queen moves to the g-file and the 5th rank" (that is, to the square g5). Chess literature published in other languages may use different initials to indicate the pieces, or figurine algebraic notation (FAN) may be used to avoid language difficulties. To resolve ambiguities, one more letter or number is added to indicate the file or rank from which the piece moved, e.g. Ngf3 means "knight from the g-file moves to the square f3", and R1e2 means "rook on the first rank moves to e2". The letter "P" for a pawn is not used, so that e4 means "pawn moves to the square e4". If the piece makes a capture, "x" is inserted before the destination square. Thus Bxf3 means "bishop captures on f3". When a pawn makes a capture, the file from which the pawn departed is used in place of a piece initial, and ranks may be omitted if unambiguous. For example, exd5 (pawn on the e-file captures the piece on d5) or exd (pawn on the e-file captures a piece somewhere on the d-file). 
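The abbreviated algebraic notation just described lends itself to a small code sketch. The following simplified Python function is illustrative only (it ignores disambiguation, castling, promotion and check symbols, which are covered separately); the function name and arguments are assumptions made for the example.

```python
# Illustrative sketch: build abbreviated algebraic move text from a piece, its squares and a capture flag.
def san(piece: str, from_square: str, to_square: str, is_capture: bool = False) -> str:
    """piece is one of 'K', 'Q', 'R', 'B', 'N' or 'P' (pawn)."""
    if piece == "P":
        # Pawn moves omit the piece initial; pawn captures are written with the departure file.
        prefix = from_square[0] + "x" if is_capture else ""
    else:
        prefix = piece + ("x" if is_capture else "")
    return prefix + to_square

# Examples matching the text:
assert san("Q", "d1", "g5") == "Qg5"          # queen moves to g5
assert san("B", "g2", "f3", True) == "Bxf3"   # bishop captures on f3
assert san("P", "e2", "e4") == "e4"           # pawn moves to e4
assert san("P", "e4", "d5", True) == "exd5"   # pawn on the e-file captures on d5
```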
Particularly in Germany, some publications have used ":" rather than "x" to indicate a capture, but this is now rare. Some publications omit the capture symbol altogether, so that exd5 would be rendered simply as "ed". If a pawn moves to its last rank, achieving promotion, the piece chosen is indicated after the move, for example e1Q or e1=Q. Castling is indicated by the special notations 0-0 for kingside castling and 0-0-0 for queenside castling. An "en passant" capture is sometimes marked with the notation "e.p." A move that places the opponent's king in check usually has the notation "+" added. (The notation "++" for a double check is considered obsolete.) Checkmate can be indicated by "#". At the end of the game, "1–0" means "White won", "0–1" means "Black won", and "½–½" indicates a draw. Chess moves can be annotated with punctuation marks and other symbols. For example, "!" indicates a good move, "!!" an excellent move, "?" a mistake, "??" a blunder, "!?" an interesting move that may not be best, or "?!" a dubious move not easily refuted. For example, one variation of a simple trap known as the Scholar's mate (see animated diagram) can be recorded: 1.e4 e5 2.Qh5 Nc6 3.Bc4 Nf6 4.Qxf7# 1–0. The text-based Portable Game Notation (PGN), which is understood by chess software, is based on short form English language algebraic notation. Until about 1980, the majority of English language chess publications used a form of descriptive notation. In descriptive notation, files are named according to the piece which occupies the back rank at the start of the game, and each square has two different names depending on whether it is from White's or Black's point of view. For example, the square known as "e3" in algebraic notation is "K3" (King's 3rd) from White's point of view, and "K6" (King's 6th) from Black's point of view. When recording captures, the captured piece is named rather than the square on which it is captured (except to resolve ambiguities). Thus, the same Scholar's mate variation is rendered in descriptive notation as 1. P-K4 P-K4 2. Q-R5 N-QB3 3. B-QB4 N-B3 4. QxBP mate. A few players still prefer descriptive notation, but it is no longer recognized by FIDE. Another system is ICCF numeric notation, recognized by the International Correspondence Chess Federation, though its use is in decline. Squares are identified by numeric coordinates, for example a1 is "11" and h8 is "88". Moves are described by the "from" and "to" squares, and captures are not indicated. For example, the opening move 1.e4 is rendered as 1.5254. Castling is described by the king's move only, for example 5171 for White castling kingside, 5838 for Black castling queenside. Chess strategy consists of setting and achieving long-term positional advantages during the game – for example, where to place different pieces – while tactics concentrate on immediate maneuver. These two parts of the chess-playing process cannot be completely separated, because strategic goals are mostly achieved through tactics, while the tactical opportunities are based on the previous strategy of play. A game of chess is normally divided into three phases: the opening, typically the first 10 moves, when players move their pieces to useful positions for the coming battle; the middlegame; and last the endgame, when most of the pieces are gone, kings typically take a more active part in the struggle, and pawn promotion is often decisive. In chess, tactics in general concentrate on short-term actions – so short-term that they can be calculated in advance by a human player or by a computer. The possible depth of calculation depends on the player's ability. 
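ICCF numeric notation, described above, maps squares and moves onto digits and can be sketched directly in code. This is an illustration only; the function names are arbitrary.

```python
# Illustrative sketch of ICCF numeric notation: a1 is "11", h8 is "88", and a move is "from" plus "to".
FILES = "abcdefgh"

def iccf_square(square: str) -> str:
    """'e4' -> '54' (file number followed by rank number)."""
    return f"{FILES.index(square[0]) + 1}{square[1]}"

def iccf_move(from_square: str, to_square: str) -> str:
    """Moves are described by the 'from' and 'to' squares; captures are not indicated."""
    return iccf_square(from_square) + iccf_square(to_square)

assert iccf_square("a1") == "11" and iccf_square("h8") == "88"
assert iccf_move("e2", "e4") == "5254"   # the opening move 1.e4
assert iccf_move("e1", "g1") == "5171"   # White castling kingside, described by the king's move only
```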
In quiet positions with many possibilities on both sides, a deep calculation is more difficult and may not be practical, while in "tactical" positions with a limited number of forced variations, strong players can calculate long sequences of moves. Simple one-move or two-move tactical actions – threats, exchanges of material, and double attacks – can be combined into more complicated combinations, sequences of tactical maneuvers that are often forced from the point of view of one or both players. Theoreticians describe many elementary tactical methods and typical maneuvers; for example, pins, forks, skewers, batteries, discovered attacks (especially discovered checks), zwischenzugs, deflections, decoys, sacrifices, underminings, overloadings, and interferences. A forced variation that involves a sacrifice and usually results in a tangible gain is called a combination. Brilliant combinations – such as those in the Immortal Game – are considered beautiful and are admired by chess lovers. A common type of chess exercise, aimed at developing players' skills, is to show players a position where a decisive combination is available and challenge them to find it. Chess strategy is concerned with the evaluation of chess positions and with setting up goals and long-term plans for the future play. During the evaluation, players must take into account numerous factors such as the value of the pieces on the board, control of the center and centralization, the pawn structure, king safety, and the control of key squares or groups of squares (for example, diagonals, open files, and dark or light squares). The most basic step in evaluating a position is to count the total value of pieces of both sides. The point values used for this purpose are based on experience; usually pawns are considered worth one point, knights and bishops about three points each, rooks about five points (the value difference between a rook and a bishop or knight being known as the exchange), and queens about nine points. The king is more valuable than all of the other pieces combined, since its checkmate loses the game. But in practical terms, in the endgame the king as a fighting piece is generally more powerful than a bishop or knight but less powerful than a rook. These basic values are then modified by other factors like the position of the piece (e.g. advanced pawns are usually more valuable than those on their initial squares), coordination between pieces (e.g. a pair of bishops usually coordinate better than a bishop and a knight), or the type of position (e.g. knights are generally better in closed positions with many pawns, while bishops are more powerful in open positions). Another important factor in the evaluation of chess positions is the pawn structure (sometimes known as the pawn skeleton), or the configuration of pawns on the chessboard. Since pawns are the least mobile of the chess pieces, the pawn structure is relatively static and largely determines the strategic nature of the position. Weaknesses in the pawn structure, such as isolated, doubled, or backward pawns and holes, once created, are often permanent. Care must therefore be taken to avoid these weaknesses unless they are compensated by another valuable asset (for example, by the possibility of developing an attack). A chess opening is the group of initial moves of a game (the "opening moves"). Recognized sequences of opening moves are referred to as "openings" and have been given names such as the Ruy Lopez or Sicilian Defense. 
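The most basic evaluation step described above, counting material with the conventional point values, can be illustrated with a short sketch. The position encoding (a mapping from squares to piece letters, upper case for White and lower case for Black) is an assumption made for the example.

```python
# Illustrative sketch: material balance using the conventional point values.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}  # the king is not counted

def material_balance(position: dict) -> int:
    """Return White's material minus Black's, in pawns, for a {square: piece} mapping."""
    balance = 0
    for piece in position.values():
        value = PIECE_VALUES.get(piece.upper(), 0)
        balance += value if piece.isupper() else -value
    return balance

# Example: a rook and a pawn (6 points) against a bishop and a knight (6 points) -
# the classic trade-off of "the exchange plus a pawn" versus two minor pieces.
example = {"a1": "R", "e2": "P", "g8": "b", "c6": "n", "e1": "K", "e8": "k"}
assert material_balance(example) == 0
```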
They are catalogued in reference works such as the "Encyclopaedia of Chess Openings". There are dozens of different openings, varying widely in character from quiet (for example, the Réti Opening) to very aggressive (the Latvian Gambit). In some opening lines, the exact sequence considered best for both sides has been worked out to more than 30 moves. Professional players spend years studying openings and continue doing so throughout their careers, as opening theory continues to evolve. The fundamental strategic aims of most openings are similar: rapid development of the pieces, control of the center, king safety, and a sound pawn structure. Most players and theoreticians consider that White, by virtue of the first move, begins the game with a small advantage. This initially gives White the initiative. Black usually strives to neutralize White's advantage and achieve equality, or to develop dynamic counterplay in an unbalanced position. The middlegame is the part of the game which starts after the opening. There is no clear line between the opening and the middlegame, but typically the middlegame will start when most pieces have been developed. (Similarly, there is no clear transition from the middlegame to the endgame; see start of the endgame.) Because the opening theory has ended, players have to form plans based on the features of the position, and at the same time take into account the tactical possibilities of the position. The middlegame is the phase in which most combinations occur. Combinations are a series of tactical moves executed to achieve some gain. Middlegame combinations are often connected with an attack against the opponent's king. Some typical patterns have their own names; for example, Boden's Mate or the Lasker–Bauer combination. Specific plans or strategic themes will often arise from particular groups of openings which result in a specific type of pawn structure. An example is the minority attack, which is the attack of queenside pawns against an opponent who has more pawns on the queenside. The study of openings is therefore connected to the preparation of plans that are typical of the resulting middlegames. Another important strategic question in the middlegame is whether and how to reduce material and transition into an endgame (i.e. simplification). Minor material advantages can generally be transformed into victory only in an endgame, and therefore the stronger side must choose an appropriate way to achieve an ending. Not every reduction of material is good for this purpose; for example, if one side keeps a light-squared bishop and the opponent has a dark-squared one, the transformation into a bishops and pawns ending is usually advantageous for the weaker side only, because an endgame with bishops on opposite colors is likely to be a draw, even with an advantage of a pawn, or sometimes even with a two-pawn advantage. The endgame (also "end game" or "ending") is the stage of the game when there are few pieces left on the board. There are three main strategic differences between earlier stages of the game and the endgame: pawns become more important, as endgame play often revolves around promoting a pawn; the king, which must be kept safe earlier in the game, becomes an active fighting piece; and zugzwang, rarely a factor earlier, is often decisive. Endgames can be classified according to the type of pieces remaining on the board. Basic checkmates are positions in which one side has only a king and the other side has one or two pieces and can checkmate the opposing king, with the pieces working together with their king. For example, king and pawn endgames involve only kings and pawns on one or both sides, and the task of the stronger side is to promote one of the pawns. Other more complicated endings are classified according to pieces on the board other than kings, such as "rook and pawn versus rook" endgames. 
Chess is believed to have originated in Eastern India, c. 280–550, in the Gupta Empire, where its early form in the 6th century was known as "chaturaṅga" (), literally "four divisions" [of the military] – infantry, cavalry, elephants, and chariotry, represented by the pieces that would evolve into the modern pawn, knight, bishop, and rook, respectively. Thence it spread eastward and westward along the Silk Road. The earliest evidence of chess is found in the nearby Sassanid Persia around 600, where the game came to be known by the name "chatrang". Chatrang was taken up by the Muslim world after the Islamic conquest of Persia (633–44), where it was then named "shatranj", with the pieces largely retaining their Persian names. In Spanish "shatranj" was rendered as "ajedrez" ("al-shatranj"), in Portuguese as "xadrez", and in Greek as ζατρίκιον ("zatrikion", which comes directly from the Persian "chatrang"), but in the rest of Europe it was replaced by versions of the Persian "shāh" ("king"), which was familiar as an exclamation and became the English words "check" and "chess". The oldest archaeological chess artifacts, ivory pieces, were excavated in ancient Afrasiab, today's Samarkand, in Uzbekistan, central Asia, and date to about 760, with some of them possibly older. The oldest known chess manual was in Arabic and dates to 840–850, written by al-Adli ar-Rumi (800–870), a renowned Arab chess player, titled "Kitab ash-shatranj" (Book of the chess). This is a lost manuscript, but referenced in later works. The eastern migration of chess, into China and Southeast Asia, has even less documentation than its migration west. The first reference to chess, called "Xiang Qi", in China comes in the "xuán guaì lù" (玄怪录, record of the mysterious and strange) dating to about 800. Alternatively, some contend that chess arose from Chinese chess or one of its predecessors, although this has been contested. The game reached Western Europe and Russia by at least three routes, the earliest being in the 9th century. By the year 1000, it had spread throughout Europe. Introduced into the Iberian Peninsula by the Moors in the 10th century, it was described in a famous 13th-century manuscript covering shatranj, backgammon, and dice named the "Libro de los juegos". Around 1200, the rules of shatranj started to be modified in southern Europe, and around 1475, several major changes made the game essentially as it is known today. These modern rules for the basic moves had been adopted in Italy and Spain. Pawns gained the option of advancing two squares on their first move, while bishops and queens acquired their modern abilities. The queen replaced the earlier vizier chess piece towards the end of the 10th century and by the 15th century had become the most powerful piece; consequently modern chess was referred to as "Queen's Chess" or "Mad Queen Chess". Castling, derived from the "kings leap" usually in combination with a pawn or rook move to bring the king to safety, was introduced. These new rules quickly spread throughout western Europe. The rules concerning stalemate were finalized in the early 19th century. Also in the 19th century, the convention that White moves first was established (formerly either White or Black could move first). Finally the rules around castling were standardized – variations in the castling rules had persisted in Italy until the late 19th century. 
The resulting standard game is sometimes referred to as "Western chess" or "international chess", particularly in Asia where other games of the chess family such as xiangqi are prevalent. Since the 19th century, the only rule changes have been technical in nature, for example establishing the correct procedure for claiming a draw by repetition. Writings about the theory of how to play chess began to appear in the 15th century. The "Repetición de Amores y Arte de Ajedrez" ("Repetition of Love and the Art of Playing Chess") by Spanish churchman Luis Ramirez de Lucena was published in Salamanca in 1497. Lucena and later masters like Portuguese Pedro Damiano, Italians Giovanni Leonardo Di Bona, Giulio Cesare Polerio and Gioachino Greco, and Spanish bishop Ruy López de Segura developed elements of openings and started to analyze simple endgames. The romantic era was characterized by opening gambits (sacrificing pawns or even pieces), daring attacks, and brazen sacrifices. Many elaborate and beautiful but unsound move sequences called "combinations" were played by the masters of the time. The game was played more for art than theory. A profound belief that chess merit resided in the players' genius rather than inherent in the position on the board pervaded chess practice. In the 18th century, the center of European chess life moved from the Southern European countries to France. The two most important French masters were François-André Danican Philidor, a musician by profession, who discovered the importance of pawns for chess strategy, and later Louis-Charles Mahé de La Bourdonnais, who won a famous series of matches with the Irish master Alexander McDonnell in 1834. Centers of chess activity in this period were coffee houses in major European cities like "Café de la Régence" in Paris and "Simpson's Divan" in London. As the 19th century progressed, chess organization developed quickly. Many chess clubs, chess books, and chess journals appeared. There were correspondence matches between cities; for example, the London Chess Club played against the Edinburgh Chess Club in 1824. Chess problems became a regular part of 19th-century newspapers; Bernhard Horwitz, Josef Kling, and Samuel Loyd composed some of the most influential problems. In 1843, von der Lasa published his and Bilguer's "Handbuch des Schachspiels" ("Handbook of Chess"), the first comprehensive manual of chess theory. Chess was occasionally criticised in the 19th century as a waste of time. The first modern chess tournament was organized by Howard Staunton, a leading English chess player, and was held in London in 1851. It was won by the German Adolf Anderssen, who was hailed as the leading chess master. His brilliant, energetic attacking style was typical for the time. Sparkling games like Anderssen's Immortal Game and Evergreen Game or Morphy's "Opera Game" were regarded as the highest possible summit of the chess art. Deeper insight into the nature of chess came with two younger players. American Paul Morphy, an extraordinary chess prodigy, won against all important competitors (except Howard Staunton, who refused to play), including Anderssen, during his short chess career between 1857 and 1863. Morphy's success stemmed from a combination of brilliant attacks and sound strategy; he intuitively knew how to prepare attacks. Prague-born Wilhelm Steinitz beginning in 1873 described how to avoid weaknesses in one's own position and how to create and exploit such weaknesses in the opponent's position. 
The scientific approach and positional understanding of Steinitz revolutionized the game. Steinitz was the first to break a position down into its components. Before Steinitz, players brought their queen out early, did not completely develop their other pieces, and mounted a quick attack on the opposing king, which either succeeded or failed. The level of defense was poor and players did not form any deep plan. In addition to his theoretical achievements, Steinitz founded an important tradition: his triumph over the leading German master Johannes Zukertort in 1886 is regarded as the first official World Chess Championship. Steinitz lost his crown in 1894 to a much younger player, the German mathematician Emanuel Lasker, who maintained this title for 27 years, the longest tenure of all World Champions. After the end of the 19th century, the number of master tournaments and matches held annually quickly grew. Some sources state that in 1914 the title of chess Grandmaster was first formally conferred by Tsar Nicholas II of Russia to Lasker, Capablanca, Alekhine, Tarrasch, and Marshall, but this is a disputed claim. The tradition of awarding such titles was continued by the World Chess Federation (FIDE), founded in 1924 in Paris. In 1927, the Women's World Chess Championship was established; the first to hold the title was Czech-English master Vera Menchik. It took a prodigy from Cuba, José Raúl Capablanca (World Champion 1921–1927), who loved simple positions and endgames, to end the German-speaking dominance in chess; he was undefeated in tournament play for eight years, until 1924. His successor was Russian-French Alexander Alekhine, a strong attacking player who died as the world champion in 1946. He briefly lost the title to Dutch player Max Euwe in 1935 and regained it two years later. Between the world wars, chess was revolutionized by the new theoretical school of so-called hypermodernists like Aron Nimzowitsch and Richard Réti. They advocated controlling the center of the board with distant pieces rather than with pawns, thus inviting opponents to occupy the center with pawns, which become objects of attack. After the death of Alekhine, a new World Champion was sought. FIDE, which has controlled the title since then (except for one interruption), ran a tournament of elite players. The winner of the 1948 tournament, Russian Mikhail Botvinnik, started an era of Soviet dominance in the chess world. Until the end of the Soviet Union, there was only one non-Soviet champion, American Bobby Fischer (champion 1972–1975). Botvinnik revolutionized opening theory. Previously Black strove for equality, to neutralize White's first-move advantage. As Black, Botvinnik strove for the initiative from the beginning. In the previous informal system of World Championships, the current champion decided which challenger he would play for the title and the challenger was forced to seek sponsors for the match. FIDE set up a new system of qualifying tournaments and matches. The world's strongest players were seeded into Interzonal tournaments, where they were joined by players who had qualified from Zonal tournaments. The leading finishers in these Interzonals would go on to the "Candidates" stage, which was initially a tournament, and later a series of knockout matches. The winner of the Candidates would then play the reigning champion for the title. A champion defeated in a match had a right to play a rematch a year later. This system operated on a three-year cycle. 
Botvinnik participated in championship matches over a period of fifteen years. He won the world championship tournament in 1948 and retained the title in tied matches in 1951 and 1954. In 1957, he lost to Vasily Smyslov, but regained the title in a rematch in 1958. In 1960, he lost the title to the 23-year-old Latvian prodigy Mikhail Tal, an accomplished tactician and attacking player. Botvinnik again regained the title in a rematch in 1961. Following the 1961 event, FIDE abolished the automatic right of a deposed champion to a rematch, and the next champion, Armenian Tigran Petrosian, a player renowned for his defensive and positional skills, held the title for two cycles, 1963–1969. His successor, Boris Spassky from Russia (champion 1969–1972), won games in both positional and sharp tactical style. The next championship, the so-called Match of the Century, saw the first non-Soviet challenger since World War II, American Bobby Fischer, who defeated his Candidates opponents by unheard-of margins and clearly won the world championship match. In 1975, however, Fischer refused to defend his title against Soviet Anatoly Karpov when FIDE did not meet his demands, and Karpov obtained the title by default. Fischer modernized many aspects of chess, especially by extensively preparing openings. Karpov defended his title twice against Viktor Korchnoi and dominated the 1970s and early 1980s with a string of tournament successes. Karpov's reign finally ended in 1985 at the hands of Garry Kasparov, another Soviet player from Baku, Azerbaijan. Kasparov and Karpov contested five world title matches between 1984 and 1990; Karpov never won his title back. In 1993, Garry Kasparov and Nigel Short broke with FIDE to organize their own match for the title and formed a competing Professional Chess Association (PCA). From then until 2006, there were two simultaneous World Champions and World Championships: the PCA or Classical champion extending the Steinitzian tradition in which the current champion plays a challenger in a series of many games, and the other following FIDE's new format of many players competing in a tournament to determine the champion. Kasparov lost his Classical title in 2000 to Vladimir Kramnik of Russia. The World Chess Championship 2006, in which Kramnik beat the FIDE World Champion Veselin Topalov, reunified the titles and made Kramnik the undisputed World Chess Champion. In September 2007, he lost the title to Viswanathan Anand of India, who won the championship tournament in Mexico City. Anand defended his title in the revenge match of 2008, 2010 and 2012. In 2013, Magnus Carlsen beat Anand in the 2013 World Chess Championship. He defended his title the following year, again against Anand, and is the reigning world champion. In the Middle Ages and during the Renaissance, chess was a part of noble culture; it was used to teach war strategy and was dubbed the "King's Game". Gentlemen are "to be meanly seene in the play at Chestes", says the overview at the beginning of Baldassare Castiglione's "The Book of the Courtier" (1528, English 1561 by Sir Thomas Hoby), but chess should not be a gentleman's main passion. Castiglione explains it further: And what say you to the game at chestes? It is truely an honest kynde of enterteynmente and wittie, quoth Syr Friderick. 
But me think it hath a fault, whiche is, that a man may be to couning at it, for who ever will be excellent in the playe of chestes, I beleave he must beestowe much tyme about it, and applie it with so much study, that a man may assoone learne some noble scyence, or compase any other matter of importaunce, and yet in the ende in beestowing all that laboure, he knoweth no more but a game. Therfore in this I beleave there happeneth a very rare thing, namely, that the meane is more commendable, then the excellency. Many of the elaborate chess sets used by the aristocracy have been lost, but others partially survive, such as the Lewis chessmen. Chess was often used as a basis of sermons on morality. An example is "Liber de moribus hominum et officiis nobilium sive super ludo scacchorum" ('Book of the customs of men and the duties of nobles or the Book of Chess'), written by an Italian Dominican monk Jacobus de Cessolis c. 1300. This book was one of the most popular of the Middle Ages. The work was translated into many other languages (the first printed edition was published at Utrecht in 1473) and was the basis for William Caxton's "The Game and Playe of the Chesse" (1474), one of the first books printed in English. Different chess pieces were used as metaphors for different classes of people, and human duties were derived from the rules of the game or from visual properties of the chess pieces: The knyght ought to be made alle armed upon an hors in suche wyse that he haue an helme on his heed and a spere in his ryght hande/ and coueryd wyth his sheld/ a swerde and a mace on his lyft syde/ Cladd wyth an hawberk and plates to fore his breste/ legge harnoys on his legges/ Spores on his heelis on his handes his gauntelettes/ his hors well broken and taught and apte to bataylle and couerid with his armes/ whan the knyghtes ben maad they ben bayned or bathed/ that is the signe that they shold lede a newe lyf and newe maners/ also they wake alle the nyght in prayers and orysons vnto god that he wylle gyue hem grace that they may gete that thynge that they may not gete by nature/ The kynge or prynce gyrdeth a boute them a swerde in signe/ that they shold abyde and kepe hym of whom they take theyr dispenses and dignyte. Known in the circles of clerics, students, and merchants, chess entered into the popular culture of Middle Ages. An example is the 209th song of Carmina Burana from the 13th century, which starts with the names of chess pieces, "Roch, pedites, regina..." During the Age of Enlightenment, chess was viewed as a means of self-improvement. Benjamin Franklin, in his article "The Morals of Chess" (1750), wrote: The Game of Chess is not merely an idle amusement; several very valuable qualities of the mind, useful in the course of human life, are to be acquired and strengthened by it, so as to become habits ready on all occasions; for life is a kind of Chess, in which we have often points to gain, and competitors or adversaries to contend with, and in which there is a vast variety of good and ill events, that are, in some degree, the effect of prudence, or the want of it. By playing at Chess then, we may learn: I. Foresight, which looks a little into futurity, and considers the consequences that may attend an action [...] II. Circumspection, which surveys the whole Chess-board, or scene of action: – the relation of the several Pieces, and their situations [...] III. Caution, not to make our moves too hastily [...] 
With these or similar views, chess is taught to children in schools around the world today. Many schools host chess clubs, and there are many scholastic tournaments specifically for children. Tournaments are held regularly in many countries, hosted by organizations such as the United States Chess Federation and the National Scholastic Chess Foundation. Chess is often depicted in the arts; significant works where chess plays a key role range from Thomas Middleton's "A Game at Chess" to "Through the Looking-Glass" by Lewis Carroll, to Vladimir Nabokov's "The Defense", to "The Royal Game" by Stefan Zweig. Chess is featured in films like Ingmar Bergman's "The Seventh Seal" and Satyajit Ray's "The Chess Players". Chess is also present in contemporary popular culture. For example, the characters in "Star Trek" play a futuristic version of the game called "Tri-Dimensional Chess". "Wizard's Chess" is featured in J.K. Rowling's "Harry Potter" series. The hero of "Searching for Bobby Fischer" struggles against adopting the aggressive and misanthropic views of a world chess champion. Chess is used as the core theme in the musical "Chess" by Tim Rice, Björn Ulvaeus, and Benny Andersson. The thriller film "Knight Moves" is about a chess grandmaster who is accused of being a serial killer. "Pawn Sacrifice", starring Tobey Maguire as Bobby Fischer and Liev Schreiber as Boris Spassky, depicts the drama surrounding the 1972 World Chess Championship in Iceland during the Cold War. In 2016 in Saudi Arabia, Grand Mufti Abdul-Aziz ibn Abdullah Al ash-Sheikh issued a religious fatwa declaring that chess is banned for Muslims. He stated that "chess is a waste of time and an opportunity to squander money. It causes enmity and hatred between people." However, this fatwa is not legally binding, and chess remains a popular game in Muslim countries. Chess composition is the art of creating chess problems (also called chess compositions). The creator is known as a chess composer. There are many types of chess problems; the two most important are directmates, in which White must checkmate Black within a specified number of moves against any defence, and endgame studies, in which White must find a way to win or draw from the given position. Chess composition is a distinct branch of chess sport, and tournaments exist for both the composition and solving of chess problems. One of the most famous chess studies was published by Richard Réti on 4 December 1921. It seems impossible to catch the advanced black pawn, while the black king can easily stop the white pawn. The solution is a diagonal advance, which brings the king towards both pawns simultaneously: in one line, 2...h3 is met by 3.Ke7, and the white king can support its pawn; in the others, the white king comes just in time to support his pawn, or to catch the black one. Contemporary chess is an organized sport with structured international and national leagues, tournaments, and congresses. Chess's international governing body is FIDE (Fédération Internationale des Échecs). Most countries have a national chess organization as well (such as the US Chess Federation and English Chess Federation) which in turn is a member of FIDE. FIDE is a member of the International Olympic Committee, but the game of chess has never been part of the Olympic Games; chess does have its own Olympiad, held every two years as a team event. The current World Chess Champion is Magnus Carlsen of Norway. The reigning Women's World Champion is Hou Yifan from China. The world's highest-rated female player, Judit Polgár, has never participated in the Women's World Chess Championship, instead preferring to compete with the leading men and maintaining a ranking among the top male players. 
Other competitions for individuals include the World Junior Chess Championship, the European Individual Chess Championship, and the National Chess Championships. Invitation-only tournaments regularly attract the world's strongest players. Examples include Spain's Linares event, Monte Carlo's Melody Amber tournament, the Dortmund Sparkassen meeting, Sofia's M-tel Masters, and Wijk aan Zee's Tata Steel tournament. Regular team chess events include the Chess Olympiad and the European Team Chess Championship. The World Chess Solving Championship and World Correspondence Chess Championships include both team and individual events. Besides these prestigious competitions, there are thousands of other chess tournaments, matches, and festivals held around the world every year catering to players of all levels. Chess is promoted as a "mind sport" by the Mind Sports Organisation, alongside other mental-skill games such as Contract Bridge, Go, and Scrabble. The best players can be awarded specific lifetime titles by the world chess organization FIDE: All the titles are open to men and women. Separate women-only titles, such as Woman Grandmaster (WGM), are available. Beginning with Nona Gaprindashvili in 1978, a number of women have earned the GM title, and most of the top ten women in 2006 hold the unrestricted GM title. As of August 2011, there are 1363 active grandmasters and 3153 international masters in the world. Top three countries with the largest numbers of grandmasters are Russia, Ukraine, and Germany, with 208, 78, and 76. The country with most grandmasters per capita is Iceland, with 11 GMs and 13 IMs among the population of 310,000. International titles are awarded to composers and solvers of chess problems and to correspondence chess players (by the International Correspondence Chess Federation). National chess organizations may also award titles, usually to the advanced players still under the level needed for international titles; an example is the chess expert title used in the United States. In order to rank players, FIDE, ICCF, and national chess organizations use the Elo rating system developed by Arpad Elo. Elo is a statistical system based on the assumption that the chess performance of each player in his or her games is a random variable. Arpad Elo thought of a player's true skill as the average of that player's performance random variable, and showed how to estimate the average from results of player's games. The US Chess Federation implemented Elo's suggestions in 1960, and the system quickly gained recognition as being both fairer and more accurate than older systems; it was adopted by FIDE in 1970. The highest FIDE rating of all time, 2881, was achieved by Magnus Carlsen on the March 2014 FIDE rating list. Chess has a very extensive literature. In 1913, the chess historian H.J.R. Murray estimated the total number of books, magazines, and chess columns in newspapers to be about 5,000. B.H. Wood estimated the number, as of 1949, to be about 20,000. David Hooper and Kenneth Whyld write that, "Since then there has been a steady increase year by year of the number of new chess publications. No one knows how many have been printed." There are two significant public chess libraries: the John G. White Chess and Checkers Collection at Cleveland Public Library, with over 32,000 chess books and over 6,000 bound volumes of chess periodicals; and the Chess & Draughts collection at the National Library of the Netherlands, with about 30,000 books. 
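The Elo system described earlier in this section rests on two standard formulas: an expected score derived from the rating difference, and a rating update proportional to the difference between the actual and expected scores. The sketch below is illustrative only; the K-factor of 20 is an assumption, since federations apply different values depending on a player's rating and history.

```python
# Illustrative sketch of the standard Elo expected-score and update formulas.
def expected_score(own_rating: float, opponent_rating: float) -> float:
    """Expected score of a player against an opponent, between 0 and 1."""
    return 1.0 / (1.0 + 10 ** ((opponent_rating - own_rating) / 400.0))

def updated_rating(own_rating: float, opponent_rating: float, score: float, k: float = 20.0) -> float:
    """score is 1 for a win, 0.5 for a draw and 0 for a loss; k is the K-factor (assumed 20 here)."""
    return own_rating + k * (score - expected_score(own_rating, opponent_rating))

# A 2400-rated player is expected to score about 0.91 against a 2000-rated player,
# so a win gains little while a loss costs nearly the full K-factor.
print(round(expected_score(2400, 2000), 2))   # ~0.91
print(round(updated_rating(2400, 2000, 1)))   # ~2402
print(round(updated_rating(2400, 2000, 0)))   # ~2382
```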
Grandmaster Lothar Schmid owned the world's largest private collection of chess books and memorabilia. David DeLucia's chess library contains 7,000 to 8,000 chess books, a similar number of autographs (letters, score sheets, manuscripts), and about 1,000 items of "ephemera". Dirk Jan ten Geuzendam opines that DeLucia's collection "is arguably the finest chess collection in the world". The game structure and nature of chess are related to several branches of mathematics. Many combinatorial and topological problems connected to chess have been known for hundreds of years. The number of legal positions in chess is estimated to be about 10^43, and is provably less than 10^47, with a game-tree complexity of approximately 10^123. The game-tree complexity of chess was first calculated by Claude Shannon as 10^120, a number known as the Shannon number. Typically an average position has thirty to forty possible moves, but there may be as few as zero (in the case of checkmate or stalemate) or as many as 218. Chess has inspired many combinatorial puzzles, such as the knight's tour and the eight queens puzzle. One of the most important mathematical challenges of chess is the development of algorithms that can play chess. The idea of creating a chess-playing machine dates to the 18th century; around 1769, the chess-playing automaton called The Turk became famous before being exposed as a hoax. Serious trials based on automata, such as El Ajedrecista, were too complex and limited to be useful. Since the advent of the digital computer in the 1950s, chess enthusiasts, computer engineers and computer scientists have built, with increasing degrees of seriousness and success, chess-playing machines and computer programs. The groundbreaking paper on computer chess, "Programming a Computer for Playing Chess", was published in 1950 by Shannon. He wrote: The chess machine is an ideal one to start with, since: (1) the problem is sharply defined both in allowed operations (the moves) and in the ultimate goal (checkmate); (2) it is neither so simple as to be trivial nor too difficult for satisfactory solution; (3) chess is generally considered to require "thinking" for skillful play; a solution of this problem will force us either to admit the possibility of a mechanized thinking or to further restrict our concept of "thinking"; (4) the discrete structure of chess fits well into the digital nature of modern computers. The Association for Computing Machinery (ACM) held the first major chess tournament for computers, the North American Computer Chess Championship, in September 1970. CHESS 3.0, a chess program from Northwestern University, won the championship. Nowadays, chess programs compete in the World Computer Chess Championship, held annually since 1974. At first considered only a curiosity, the best chess-playing programs have become extremely strong. In 1997, a computer won a chess match using classical time controls against a reigning World Champion for the first time: IBM's Deep Blue beat Garry Kasparov 3½–2½ (it scored two wins, one loss, and three draws). However, the match was controversial, and a computer did not win such a match again until 2006. In 2009, a mobile phone won a category 6 tournament with a performance rating of 2898: chess engine Hiarcs 13, running on the mobile phone HTC Touch HD, won the Copa Mercosur tournament with nine wins and one draw. 
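Shannon's proposal, and the programs that followed it, rest on searching the game tree for the move that leads to the best reachable evaluation. The sketch below is a deliberately simplified illustration of that idea, minimax search in negamax form with alpha-beta pruning, run on a toy game tree rather than on real chess positions; the tree, the function name, and the leaf values are invented for the example and are not taken from any particular engine.

```python
# Simplified negamax search with alpha-beta pruning on a toy game tree.
# A "position" here is a nested list whose leaves are static evaluations
# from the point of view of the side to move at that leaf.

def negamax(node, alpha=float("-inf"), beta=float("inf")):
    """Best score the side to move can force from this node."""
    if not isinstance(node, list):              # leaf: return its static evaluation
        return node
    best = float("-inf")
    for child in node:
        score = -negamax(child, -beta, -alpha)  # the opponent's best is our worst
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:                       # opponent will avoid this line: prune
            break
    return best

# A depth-2 toy tree: the mover has two options, each met by two replies.
print(negamax([[3, 12], [2, 8]]))  # -> 3
```

A real engine replaces the toy tree with a legal-move generator and a handcrafted or learned evaluation function, and adds refinements such as iterative deepening and transposition tables, but the core recursion is the same.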
The best chess programs are now able to consistently beat the strongest human players, to the extent that human-computer matches no longer attract interest from chess players or media. With huge databases of past games and high analytical ability, computers can help players to learn chess and prepare for matches. Internet Chess Servers allow people to find and play opponents all over the world. The presence of computers and modern communication tools has raised concerns regarding cheating during games, most notably the "bathroom controversy" during the 2006 World Championship. In 1913, Ernst Zermelo used chess as a basis for his theory of game strategies, which is considered one of the predecessors of game theory. Zermelo's theorem states that it is possible to solve chess, i.e. to determine with certainty the outcome of a perfectly played game (either white can force a win, or black can force a win, or both sides can force at least a draw); a toy illustration of this backward-induction argument is given after this paragraph. However, according to Claude Shannon, there are about 10^43 legal positions in chess, so it will take an impossibly long time to compute a perfect strategy with any feasible technology. The 11-category, game-theoretical taxonomy of chess includes: two player, no-chance, combinatorial, Markov state (present state is all a player needs to move; although past state led up to that point, knowledge of the sequence of past moves is not required to make the next move, except to take into account en passant and castling, which "do" depend on the past moves), zero sum, symmetric, perfect information, non-cooperative, discrete, extensive form (tree decisions, not payoff matrices), sequential. Generalized chess (played on an n×n board, without the fifty-move rule) is EXPTIME-complete. Some applications of combinatorial game theory to chess endgames were found by Elkies (1996). There is an extensive scientific literature on chess psychology. Alfred Binet and others showed that knowledge and verbal, rather than visuospatial, ability lie at the core of expertise. In his doctoral thesis, Adriaan de Groot showed that chess masters can rapidly perceive the key features of a position. According to de Groot, this perception, made possible by years of practice and study, is more important than the sheer ability to anticipate moves. De Groot showed that chess masters can memorize positions shown for a few seconds almost perfectly. The ability to memorize does not alone account for chess-playing skill, since masters and novices, when faced with random arrangements of chess pieces, had equivalent recall (about half a dozen positions in each case). Rather, it is the ability to recognize patterns, which are then memorized, that distinguishes the skilled players from the novices. When the positions of the pieces were taken from an actual game, the masters had almost total positional recall. More recent research has focused on chess as mental training; the respective roles of knowledge and look-ahead search; brain imaging studies of chess masters and novices; blindfold chess; the role of personality and intelligence in chess skill; gender differences; and computational models of chess expertise. The role of practice and talent in the development of chess and other domains of expertise has led to much recent research. Ericsson and colleagues have argued that deliberate practice is sufficient for reaching high levels of expertise in chess. Recent research indicates that factors other than practice are also important. 
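Zermelo's argument is easiest to see on a game small enough to solve exhaustively. The sketch below applies the same backward-induction reasoning to a hypothetical subtraction game (players alternately take one to three stones; whoever takes the last stone wins), chosen purely for illustration because chess's roughly 10^43 legal positions put exhaustive solving far out of reach; the game and the function name are invented for the example.

```python
# Toy illustration of Zermelo-style backward induction on a small subtraction game.
from functools import lru_cache

@lru_cache(maxsize=None)
def is_winning(stones: int) -> bool:
    """True if the player to move can force a win with this many stones left."""
    # A position is winning iff some legal move leads to a losing position for
    # the opponent; with no stones left, the previous player has already won.
    return any(not is_winning(stones - take)
               for take in (1, 2, 3) if take <= stones)

# Every position is thereby classified as a forced win or a forced loss for the
# player to move; multiples of four turn out to be the losing positions.
print([n for n in range(1, 13) if not is_winning(n)])  # -> [4, 8, 12]
```

The same recursion, applied in principle over all chess positions, is what guarantees that chess has a definite game-theoretic value, even though no feasible computation can determine it.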
For example, Fernand Gobet and colleagues have shown that stronger players started playing chess at a young age and that experts born in the Northern Hemisphere are more likely to have been born in late winter and early spring. Compared to the general population, chess players are more likely to be non-right-handed, though they found no correlation between handedness and skill. Although the link between performance in chess and general intelligence is often assumed, researchers have largely failed to confirm its existence. For example, a 2006 study found no differences in fluid intelligence, as measured by Raven's Progressive Matrices, between strong adult chess players and regular people. There is some evidence of a correlation between performance in chess and intelligence among beginning players. However, performance in chess also relies substantially on one's experience playing the game, and the role of experience may overwhelm the role of intelligence. Chess experts are estimated to have in excess of 10,000 and possibly as many as 300,000 position patterns stored in their memory; long training is necessary to acquire that amount of data. A 2007 study of young chess players in the United Kingdom found that strong players tended to have above-average IQ scores, but, within that group, the correlation between chess skill and IQ was moderately negative, meaning that smarter children tended to achieve a lower level of chess skill. This result was explained by a negative correlation between intelligence and practice in the elite subsample, and by practice having a greater influence on chess skill. There are more than two thousand published chess variants, most of them of relatively recent origin. Prime sources in English describing chess variants and their rules include David Pritchard's encyclopedias, the website "The Chess Variant Pages" created by Hans Bodlaender with various contributors, and the magazine "Variant Chess" published from 1990 (George Jellis) to 2010 (the British Chess Variants Society). In the context of chess variants, regular (i.e. FIDE) chess is commonly referred to as "Western chess", "international chess", "orthodox chess", "orthochess", and "classic chess". Charlie Chaplin Sir Charles Spencer Chaplin (16 April 1889 – 25 December 1977) was an English comic actor, filmmaker, and composer who rose to fame in the era of silent film. Chaplin became a worldwide icon through his screen persona "the Tramp" and is considered one of the most important figures in the history of the film industry. His career spanned more than 75 years, from childhood in the Victorian era until a year before his death in 1977, and encompassed both adulation and controversy. Chaplin's childhood in London was one of poverty and hardship. As his father was absent and his mother struggled financially, he was sent to a workhouse twice before the age of nine. When he was 14, his mother was committed to a mental asylum. Chaplin began performing at an early age, touring music halls and later working as a stage actor and comedian. At 19, he was signed to the prestigious Fred Karno company, which took him to America. Chaplin was scouted for the film industry and began appearing in 1914 for Keystone Studios. He soon developed the Tramp persona and formed a large fan base. Chaplin directed his own films from an early stage and continued to hone his craft as he moved to the Essanay, Mutual, and First National corporations. 
By 1918, he was one of the best-known figures in the world. In 1919, Chaplin co-founded the distribution company United Artists, which gave him complete control over his films. His first feature-length film was "The Kid" (1921), followed by "A Woman of Paris" (1923), "The Gold Rush" (1925), and "The Circus" (1928). He refused to move to sound films in the 1930s, instead producing "City Lights" (1931) and "Modern Times" (1936) without dialogue. Chaplin became increasingly political, and his next film, "The Great Dictator" (1940), satirised Adolf Hitler. The 1940s were a decade marked by controversy for Chaplin, and his popularity declined rapidly. He was accused of communist sympathies, while his involvement in a paternity suit and marriages to much younger women caused scandal. An FBI investigation was opened, and Chaplin was forced to leave the United States and settle in Switzerland. He abandoned the Tramp in his later films, which include "Monsieur Verdoux" (1947), "Limelight" (1952), "A King in New York" (1957), and "A Countess from Hong Kong" (1967). Chaplin wrote, directed, produced, edited, starred in, and composed the music for most of his films. He was a perfectionist, and his financial independence enabled him to spend years on the development and production of a picture. His films are characterised by slapstick combined with pathos, typified in the Tramp's struggles against adversity. Many contain social and political themes, as well as autobiographical elements. In 1972, as part of a renewed appreciation for his work, Chaplin received an Honorary Academy Award for "the incalculable effect he has had in making motion pictures the art form of this century". He continues to be held in high regard, with "The Gold Rush", "City Lights", "Modern Times", and "The Great Dictator" often ranked on industry lists of the greatest films of all time. Charles Spencer Chaplin was born on 16 April 1889 to Hannah Chaplin (born Hannah Harriet Pedlingham Hill) and Charles Chaplin Sr. There is no official record of his birth, although Chaplin believed he was born at East Street, Walworth, in South London. His mother and father had married four years previously, at which time Charles Sr. became the legal carer of Hannah's illegitimate son, Sydney John Hill. At the time of his birth, Chaplin's parents were both music hall entertainers. Hannah, the daughter of a shoemaker, had a brief and unsuccessful career under the stage name Lily Harley, while Charles Sr., a butcher's son, was a popular singer. Although they never divorced, Chaplin's parents were estranged by around 1891. The following year, Hannah gave birth to a third son – George Wheeler Dryden – fathered by the music hall entertainer Leo Dryden. The child was taken by Dryden at six months old, and did not re-enter Chaplin's life for 30 years. Chaplin's childhood was fraught with poverty and hardship, making his eventual trajectory "the most dramatic of all the rags to riches stories ever told" according to his authorised biographer David Robinson. Chaplin's early years were spent with his mother and brother Sydney in the London district of Kennington; Hannah had no means of income, other than occasional nursing and dressmaking, and Chaplin Sr. provided no financial support. As the situation deteriorated, Chaplin was sent to Lambeth Workhouse when he was seven years old. The council housed him at the Central London District School for paupers, which Chaplin remembered as "a forlorn existence". 
He was briefly reunited with his mother 18 months later, before Hannah was forced to readmit her family to the workhouse in July 1898. The boys were promptly sent to Norwood Schools, another institution for destitute children. In September 1898, Hannah was committed to the London County Council Asylum with provision for Croydon, commonly known as Cane Hill Asylum (renamed Cane Hill Mental Hospital in 1930). She had developed a psychosis seemingly brought on by an infection of syphilis and malnutrition. For the two months she was there, Chaplin and his brother Sydney were sent to live with their father, whom the young boys scarcely knew. Charles Sr. was by then a severe alcoholic, and life there was bad enough to provoke a visit from the National Society for the Prevention of Cruelty to Children. Chaplin's father died two years later, at 38 years old, from cirrhosis of the liver. Hannah entered a period of remission but, in May 1903, became ill again. Chaplin, then 14, had the task of taking his mother to the infirmary, from where she was sent back to Cane Hill. He lived alone for several days, searching for food and occasionally sleeping rough, until Sydney – who had enrolled in the Navy two years earlier – returned. Hannah was released from the asylum eight months later, but in March 1905, her illness returned, this time permanently. "There was nothing we could do but accept poor mother's fate", Chaplin later wrote, and she remained in care until her death in 1928. Between his time in the poor schools and his mother succumbing to mental illness, Chaplin began to perform on stage. He later recalled making his first amateur appearance at the age of five years, when he took over from Hannah one night in Aldershot. This was an isolated occurrence, but by the time he was nine Chaplin had, with his mother's encouragement, grown interested in performing. He later wrote: "[she] imbued me with the feeling that I had some sort of talent". Through his father's connections, Chaplin became a member of the Eight Lancashire Lads clog-dancing troupe, with whom he toured English music halls throughout 1899 and 1900. Chaplin worked hard, and the act was popular with audiences, but he was not satisfied with dancing and wished to form a comedy act. In the years Chaplin was touring with the Eight Lancashire Lads, his mother ensured that he still attended school but, by age 13, he had abandoned education. He supported himself with a range of jobs, while nursing his ambition to become an actor. At 14, shortly after his mother's relapse, he registered with a theatrical agency in London's West End. The manager sensed potential in Chaplin, who was promptly given his first role as a newsboy in Harry Arthur Saintsbury's "Jim, a Romance of Cockayne". It opened in July 1903, but the show was unsuccessful and closed after two weeks. Chaplin's comic performance, however, was singled out for praise in many of the reviews. Saintsbury secured a role for Chaplin in Charles Frohman's production of "Sherlock Holmes", where he played Billy the pageboy in three nationwide tours. His performance was so well received that he was called to London to play the role alongside William Gillette, the original Holmes. "It was like tidings from heaven", Chaplin recalled. At 16 years old, Chaplin starred in the play's West End production at the Duke of York's Theatre from October to December 1905. He completed one final tour of "Sherlock Holmes" in early 1906, before leaving the play after more than two-and-a-half years. 
Chaplin soon found work with a new company, and went on tour with his brother – who was also pursuing an acting career – in a comedy sketch called "Repairs". In May 1906, Chaplin joined the juvenile act "Casey's Circus", where he developed popular burlesque pieces and was soon the star of the show. By the time the act finished touring in July 1907, the 18-year-old had become an accomplished comedic performer. He struggled to find more work, however, and a brief attempt at a solo act was a failure. Meanwhile, Sydney Chaplin had joined Fred Karno's prestigious comedy company in 1906 and, by 1908, he was one of their key performers. In February, he managed to secure a two-week trial for his younger brother. Karno was initially wary, and considered Chaplin a "pale, puny, sullen-looking youngster" who "looked much too shy to do any good in the theatre." However, the teenager made an impact on his first night at the London Coliseum and he was quickly signed to a contract. Chaplin began by playing a series of minor parts, eventually progressing to starring roles in 1909. In April 1910, he was given the lead in a new sketch, "Jimmy the Fearless". It was a big success, and Chaplin received considerable press attention. Karno selected his new star to join the section of the company, one that also included Stan Laurel, that toured North America's vaudeville circuit. The young comedian headed the show and impressed reviewers, being described as "one of the best pantomime artists ever seen here". His most successful role was a drunk called the "Inebriate Swell", which drew him significant recognition. The tour lasted 21 months, and the troupe returned to England in June 1912. Chaplin recalled that he "had a disquieting feeling of sinking back into a depressing commonplaceness" and was, therefore, delighted when a new tour began in October. Six months into the second American tour, Chaplin was invited to join the New York Motion Picture Company. A representative who had seen his performances thought he could replace Fred Mace, a star of their Keystone Studios who intended to leave. Chaplin thought the Keystone comedies "a crude mélange of rough and tumble", but liked the idea of working in films and rationalised: "Besides, it would mean a new life." He met with the company and signed a $150-per-week contract in September 1913. Chaplin arrived in Los Angeles, home of the Keystone studio, in early December 1913. His boss was Mack Sennett, who initially expressed concern that the 24-year-old looked too young. He was not used in a picture until late January, during which time Chaplin attempted to learn the processes of filmmaking. The one-reeler "Making a Living" marked his film acting debut and was released on 2 February 1914. Chaplin strongly disliked the picture, but one review picked him out as "a comedian of the first water". For his second appearance in front of the camera, Chaplin selected the costume with which he became identified. He described the process in his autobiography. The film was "Mabel's Strange Predicament", but "the Tramp" character, as it became known, debuted to audiences in "Kid Auto Races at Venice" – shot later than "Mabel's Strange Predicament" but released two days earlier. Chaplin adopted the character as his screen persona and attempted to make suggestions for the films he appeared in. These ideas were dismissed by his directors. 
During the filming of his eleventh picture, "Mabel at the Wheel", he clashed with director Mabel Normand and was almost released from his contract. Sennett kept him on, however, when he received orders from exhibitors for more Chaplin films. Sennett also allowed Chaplin to direct his next film himself after Chaplin promised to pay $1,500 if the film was unsuccessful. "Caught in the Rain", issued 4 May 1914, was Chaplin's directorial debut and was highly successful. Thereafter he directed almost every short film in which he appeared for Keystone, at the rate of approximately one per week, a period which he later remembered as the most exciting time of his career. Chaplin's films introduced a slower form of comedy than the typical Keystone farce, and he developed a large fan base. In November 1914, he had a supporting role in the first feature-length comedy film, "Tillie's Punctured Romance", directed by Sennett and starring Marie Dressler, which was a commercial success and increased his popularity. When Chaplin's contract came up for renewal at the end of the year, he asked for $1,000 a week – an amount Sennett refused as too large. The Essanay Film Manufacturing Company of Chicago sent Chaplin an offer of $1,250 a week with a signing bonus of $10,000. He joined the studio in late December 1914, where he began forming a stock company of regular players, including Leo White, Bud Jamison, Paddy McGuire and Billy Armstrong. He soon recruited a leading lady – Edna Purviance, whom Chaplin met in a cafe and hired on account of her beauty. She went on to appear in 35 films with Chaplin over eight years; the pair also formed a romantic relationship that lasted into 1917. Chaplin asserted a high level of control over his pictures and started to put more time and care into each film. There was a month-long interval between the release of his second production, "A Night Out", and his third, "The Champion". The final seven of Chaplin's 14 Essanay films were all produced at this slower pace. Chaplin also began to alter his screen persona, which had attracted some criticism at Keystone for its "mean, crude, and brutish" nature. The character became more gentle and romantic; "The Tramp" (April 1915) was considered a particular turning point in his development. The use of pathos was developed further with "The Bank", in which Chaplin created a sad ending. Robinson notes that this was an innovation in comedy films, and marked the time when serious critics began to appreciate Chaplin's work. At Essanay, writes film scholar Simon Louvish, Chaplin "found the themes and the settings that would define the Tramp's world." During 1915, Chaplin became a cultural phenomenon. Shops were stocked with Chaplin merchandise, he was featured in cartoons and comic strips, and several songs were written about him. In July, a journalist for "Motion Picture Magazine" wrote that "Chaplinitis" had spread across America. As his fame grew worldwide, he became the film industry's first international star. When the Essanay contract ended in December 1915, Chaplin – fully aware of his popularity – requested a $150,000 signing bonus from his next studio. He received several offers, including Universal, Fox, and Vitagraph, the best of which came from the Mutual Film Corporation at $10,000 a week. A contract was negotiated with Mutual that amounted to $670,000 a year, which Robinson says made Chaplin – at 26 years old – one of the highest paid people in the world. 
The high salary shocked the public and was widely reported in the press. John R. Freuler, the studio president, explained: "We can afford to pay Mr. Chaplin this large sum annually because the public wants Chaplin and will pay for him." Mutual gave Chaplin his own Los Angeles studio to work in, which opened in March 1916. He added two key members to his stock company, Albert Austin and Eric Campbell, and produced a series of elaborate two-reelers: "The Floorwalker", "The Fireman", "The Vagabond", "One A.M.", and "The Count". For "The Pawnshop", he recruited the actor Henry Bergman, who was to work with Chaplin for 30 years. "Behind the Screen" and "The Rink" completed Chaplin's releases for 1916. The Mutual contract stipulated that he release a two-reel film every four weeks, which he had managed to achieve. With the new year, however, Chaplin began to demand more time. He made only four more films for Mutual over the first ten months of 1917: "Easy Street", "The Cure", "The Immigrant", and "The Adventurer". With their careful construction, these films are considered by Chaplin scholars to be among his finest work. Later in life, Chaplin referred to his Mutual years as the happiest period of his career. However, Chaplin also felt that those films became increasingly formulaic over the period of the contract, and he was increasingly dissatisfied with the working conditions that encouraged this. Chaplin was attacked in the British media for not fighting in the First World War. He defended himself, revealing that he would fight for Britain if called and had registered for the American draft, but he was not summoned by either country. Despite this criticism, Chaplin was a favourite with the troops, and his popularity continued to grow worldwide. "Harper's Weekly" reported that the name of Charlie Chaplin was "a part of the common language of almost every country", and that the Tramp image was "universally familiar". In 1917, professional Chaplin imitators were so widespread that he took legal action, and it was reported that nine out of ten men who attended costume parties dressed as the Tramp. The same year, a study by the Boston Society for Psychical Research concluded that Chaplin was "an American obsession". The actress Minnie Maddern Fiske wrote that "a constantly increasing body of cultured, artistic people are beginning to regard the young English buffoon, Charles Chaplin, as an extraordinary artist, as well as a comic genius". Mutual was patient with Chaplin's decreased rate of output, and the contract ended amicably. Given his concern that the scheduling stipulations of his contract were undermining the quality of his films, Chaplin's primary aim in finding a new distributor was independence; Sydney Chaplin, then his business manager, told the press, "Charlie [must] be allowed all the time he needs and all the money for producing [films] the way he wants ... It is quality, not quantity, we are after." In June 1917, Chaplin signed to complete eight films for First National Exhibitors' Circuit in return for $1 million. He chose to build his own studio, situated on five acres of land off Sunset Boulevard, with production facilities of the highest order. It was completed in January 1918, and Chaplin was given freedom over the making of his pictures. "A Dog's Life", released April 1918, was the first film under the new contract. In it, Chaplin demonstrated his increasing concern with story construction and his treatment of the Tramp as "a sort of Pierrot". 
The film was described by Louis Delluc as "cinema's first total work of art". Chaplin then embarked on the Third Liberty Bond campaign, touring the United States for one month to raise money for the Allies of the First World War. He also produced a short propaganda film, donated to the government for fund-raising, called "The Bond". Chaplin's next release was war-based, placing the Tramp in the trenches for "Shoulder Arms". Associates warned him against making a comedy about the war but, as he later recalled: "Dangerous or not, the idea excited me." He spent four months filming the 45-minute-long picture, which was released in October 1918 with great success. After the release of "Shoulder Arms", Chaplin requested more money from First National, which was refused. Frustrated with their lack of concern for quality, and worried about rumours of a possible merger between the company and Famous Players-Lasky, Chaplin joined forces with Douglas Fairbanks, Mary Pickford, and D. W. Griffith to form a new distribution company – United Artists, established in January 1919. The arrangement was revolutionary in the film industry, as it enabled the four partners – all creative artists – to personally fund their pictures and have complete control. Chaplin was eager to start with the new company and offered to buy out his contract with First National. They refused and insisted that he complete the final six films owed. Before the creation of United Artists, Chaplin married for the first time. The 16-year-old actress Mildred Harris had revealed that she was pregnant with his child, and in September 1918, he married her quietly in Los Angeles to avoid controversy. Soon after, the pregnancy was found to be false. Chaplin was unhappy with the union and, feeling that marriage stunted his creativity, struggled over the production of his film "Sunnyside". Harris was by then legitimately pregnant, and on 7 July 1919, gave birth to a son. Norman Spencer Chaplin was born malformed and died three days later. The marriage ended in April 1920, with Chaplin explaining in his autobiography that they were "irreconcilably mismated". The loss of the child, together with his own childhood experiences, is thought to have influenced Chaplin's next film, which turned the Tramp into the caretaker of a young boy. For this new venture, Chaplin also wished to do more than comedy and, according to Louvish, "make his mark on a changed world." Filming on "The Kid" began in August 1919, with four-year-old Jackie Coogan his co-star. It was developing into a long project, so to placate First National, he halted production and quickly filmed "A Day's Pleasure". "The Kid" was in production for nine months until May 1920 and, at 68 minutes, it was Chaplin's longest picture to date. Dealing with issues of poverty and parent–child separation, "The Kid" was one of the earliest films to combine comedy and drama. It was released in January 1921 with instant success, and, by 1924, had been screened in over 50 countries. Chaplin spent five months on his next film, the two-reeler "The Idle Class". Following its September 1921 release, he chose to return to England for the first time in almost a decade. He then worked to fulfil his First National contract, releasing "Pay Day" in February 1922. "The Pilgrim" – his final short film – was delayed by distribution disagreements with the studio, and released a year later. Having fulfilled his First National contract, Chaplin was free to make his first picture as an independent producer. 
In November 1922, he began filming "A Woman of Paris", a romantic drama about ill-fated lovers. Chaplin intended it to be a star-making vehicle for Edna Purviance, and did not appear in the picture himself other than in a brief, uncredited cameo. He wished the film to have a realistic feel and directed his cast to give restrained performances. In real life, he explained, "men and women try to hide their emotions rather than seek to express them". "A Woman of Paris" premiered in September 1923 and was acclaimed for its innovative, subtle approach. The public, however, seemed to have little interest in a Chaplin film without Chaplin, and it was a box office disappointment. The filmmaker was hurt by this failure – he had long wanted to produce a dramatic film and was proud of the result – and soon withdrew "A Woman of Paris" from circulation. Chaplin returned to comedy for his next project. Setting his standards high, he told himself "This next film must be an epic! The Greatest!" Inspired by a photograph of the 1898 Klondike Gold Rush, and later the story of the Donner Party of 1846–47, he made what Geoffrey Macnab calls "an epic comedy out of grim subject matter." In "The Gold Rush", the Tramp is a lonely prospector fighting adversity and looking for love. With Georgia Hale as his new leading lady, Chaplin began filming the picture in February 1924. Its elaborate production, costing almost $1 million, included location shooting in the Truckee mountains with 600 extras, extravagant sets, and special effects. The last scene was shot in May 1925 after 15 months of filming. Chaplin felt "The Gold Rush" was the best film he had made. It opened in August 1925 and became one of the highest-grossing films of the silent era with a U.S. box-office of $5 million. The comedy contains some of Chaplin's most famous sequences, such as the Tramp eating his shoe and the "Dance of the Rolls". Macnab has called it "the quintessential Chaplin film". Chaplin stated at its release, "This is the picture that I want to be remembered by". While making "The Gold Rush", Chaplin married for the second time. Mirroring the circumstances of his first union, Lita Grey was a teenage actress, originally set to star in the film, whose surprise announcement of pregnancy forced Chaplin into marriage. She was 16 and he was 35, meaning Chaplin could have been charged with statutory rape under California law. He therefore arranged a discreet marriage in Mexico on 25 November 1924. Their first son, Charles Spencer Chaplin, Jr., was born on 5 May 1925, followed by Sydney Earl Chaplin on 30 March 1926. It was an unhappy marriage, and Chaplin spent long hours at the studio to avoid seeing his wife. In November 1926, Grey took the children and left the family home. A bitter divorce followed, in which Grey's application – accusing Chaplin of infidelity, abuse, and of harbouring "perverted sexual desires" – was leaked to the press. Chaplin was reported to be in a state of nervous breakdown, as the story became headline news and groups formed across America calling for his films to be banned. Eager to end the case without further scandal, Chaplin's lawyers agreed to a cash settlement of $600,000 – the largest awarded by American courts at that time. His fan base was strong enough to survive the incident, and it was soon forgotten, but Chaplin was deeply affected by it. Before the divorce suit was filed, Chaplin had begun work on a new film, "The Circus". 
He built a story around the idea of walking a tightrope while besieged by monkeys, and turned the Tramp into the accidental star of a circus. Filming was suspended for 10 months while he dealt with the divorce scandal, and it was generally a trouble-ridden production. Finally completed in October 1927, "The Circus" was released in January 1928 to a positive reception. At the 1st Academy Awards, Chaplin was given a special trophy "For versatility and genius in acting, writing, directing and producing "The Circus"". Despite its success, he permanently associated the film with the stress of its production; Chaplin omitted "The Circus" from his autobiography, and struggled to work on it when he recorded the score in his later years. By the time "The Circus" was released, Hollywood had witnessed the introduction of sound films. Chaplin was cynical about this new medium and the technical shortcomings it presented, believing that "talkies" lacked the artistry of silent films. He was also hesitant to change the formula that had brought him such success, and feared that giving the Tramp a voice would limit his international appeal. He, therefore, rejected the new Hollywood craze and began work on a new silent film. Chaplin was nonetheless anxious about this decision and remained so throughout the film's production. When filming began at the end of 1928, Chaplin had been working on the story for almost a year. "City Lights" followed the Tramp's love for a blind flower girl (played by Virginia Cherrill) and his efforts to raise money for her sight-saving operation. It was a challenging production that lasted 21 months, with Chaplin later confessing that he "had worked himself into a neurotic state of wanting perfection". One advantage Chaplin found in sound technology was the opportunity to record a musical score for the film, which he composed himself. Chaplin finished editing "City Lights" in December 1930, by which time silent films were an anachronism. A preview before an unsuspecting public audience was not a success, but a showing for the press produced positive reviews. One journalist wrote, "Nobody in the world but Charlie Chaplin could have done it. He is the only person that has that peculiar something called 'audience appeal' in sufficient quality to defy the popular penchant for movies that talk." Given its general release in January 1931, "City Lights" proved to be a popular and financial success – eventually grossing over $3 million. The British Film Institute cites it as Chaplin's finest accomplishment, and the critic James Agee hails the closing scene as "the greatest piece of acting and the highest moment in movies". "City Lights" became Chaplin's personal favourite of his films and remained so throughout his life. "City Lights" had been a success, but Chaplin was unsure if he could make another picture without dialogue. He remained convinced that sound would not work in his films, but was also "obsessed by a depressing fear of being old-fashioned." In this state of uncertainty, early in 1931, the comedian decided to take a holiday and ended up travelling for 16 months. He spent months travelling Western Europe, including extended stays in France and Switzerland, and spontaneously decided to visit Japan. The day after he arrived in Japan, Prime Minister Inukai Tsuyoshi was assassinated by ultra-nationalists in the May 15 Incident. 
The group's original plan had been to provoke a war with the United States by assassinating Chaplin at a welcome reception organised by the prime minister, but the plan had been foiled due to delayed public announcement of the event's date. In his autobiography, Chaplin recalled that on his return to Los Angeles, "I was confused and without plan, restless and conscious of an extreme loneliness". He briefly considered retiring and moving to China. Chaplin's loneliness was relieved when he met 21-year-old actress Paulette Goddard in July 1932, and the pair began a relationship. He was not ready to commit to a film, however, and focused on writing a serial about his travels (published in "Woman's Home Companion"). The trip had been a stimulating experience for Chaplin, including meetings with several prominent thinkers, and he became increasingly interested in world affairs. The state of labour in America troubled him, and he feared that capitalism and machinery in the workplace would increase unemployment levels. It was these concerns that stimulated Chaplin to develop his new film. "Modern Times" was announced by Chaplin as "a satire on certain phases of our industrial life." Featuring the Tramp and Goddard as they endure the Great Depression, it took ten and a half months to film. Chaplin intended to use spoken dialogue but changed his mind during rehearsals. Like its predecessor, "Modern Times" employed sound effects but almost no speaking. Chaplin's performance of a gibberish song did, however, give the Tramp a voice for the only time on film. After recording the music, Chaplin released "Modern Times" in February 1936. It was his first feature in 15 years to adopt political references and social realism, a factor that attracted considerable press coverage despite Chaplin's attempts to downplay the issue. The film earned less at the box-office than his previous features and received mixed reviews, as some viewers disliked the politicising. Today, "Modern Times" is seen by the British Film Institute as one of Chaplin's "great features," while David Robinson says it shows the filmmaker at "his unrivalled peak as a creator of visual comedy." Following the release of "Modern Times", Chaplin left with Goddard for a trip to the Far East. The couple had refused to comment on the nature of their relationship, and it was not known whether they were married or not. Some time later, Chaplin revealed that they married in Canton during this trip. By 1938, the couple had drifted apart, as both focused heavily on their work, although Goddard was again his leading lady in his next feature film, "The Great Dictator". She eventually divorced Chaplin in Mexico in 1942, citing incompatibility and separation for more than a year. The 1940s saw Chaplin face a series of controversies, both in his work and in his personal life, which changed his fortunes and severely affected his popularity in the United States. The first of these was his growing boldness in expressing his political beliefs. Deeply disturbed by the surge of militaristic nationalism in 1930s world politics, Chaplin found that he could not keep these issues out of his work. Parallels between himself and Adolf Hitler had been widely noted: the pair were born four days apart, both had risen from poverty to world prominence, and Hitler wore the same toothbrush moustache as Chaplin. It was this physical resemblance that supplied the plot for Chaplin's next film, "The Great Dictator", which directly satirised Hitler and attacked fascism. 
Chaplin spent two years developing the script, and began filming in September 1939 – six days after Britain declared war on Germany. He had submitted to using spoken dialogue, partly out of acceptance that he had no other choice, but also because he recognised it as a better method for delivering a political message. Making a comedy about Hitler was seen as highly controversial, but Chaplin's financial independence allowed him to take the risk. "I was determined to go ahead," he later wrote, "for Hitler must be laughed at." Chaplin replaced the Tramp (while wearing similar attire) with "A Jewish Barber", a reference to the Nazi party's belief that he was Jewish. In a dual performance, he also played the dictator "Adenoid Hynkel", who parodied Hitler. "The Great Dictator" spent a year in production and was released in October 1940. The film generated a vast amount of publicity, with a critic for "The New York Times" calling it "the most eagerly awaited picture of the year", and it was one of the biggest money-makers of the era. The ending was unpopular, however, and generated controversy. Chaplin concluded the film with a five-minute speech in which he abandoned his barber character, looked directly into the camera, and pleaded against war and fascism. Charles J. Maland has identified this overt preaching as triggering a decline in Chaplin's popularity, and writes, "Henceforth, no movie fan would ever be able to separate the dimension of politics from [his] star image". "The Great Dictator" received five Academy Award nominations, including Best Picture, Best Original Screenplay and Best Actor. In the mid-1940s, Chaplin was involved in a series of trials that occupied most of his time and significantly affected his public image. The troubles stemmed from his affair with an aspiring actress named Joan Barry, with whom he was involved intermittently between June 1941 and the autumn of 1942. Barry, who displayed obsessive behaviour and was twice arrested after they separated, reappeared the following year and announced that she was pregnant with Chaplin's child. As Chaplin denied the claim, Barry filed a paternity suit against him. The director of the Federal Bureau of Investigation (FBI), J. Edgar Hoover, who had long been suspicious of Chaplin's political leanings, used the opportunity to generate negative publicity about him. As part of a smear campaign to damage Chaplin's image, the FBI named him in four indictments related to the Barry case. Most serious of these was an alleged violation of the Mann Act, which prohibits the transportation of women across state boundaries for sexual purposes. The historian Otto Friedrich has called this an "absurd prosecution" of an "ancient statute", yet if Chaplin was found guilty, he faced 23 years in jail. Three charges lacked sufficient evidence to proceed to court, but the Mann Act trial began in March 1944. Chaplin was acquitted two weeks later. The case was frequently headline news, with "Newsweek" calling it the "biggest public relations scandal since the Fatty Arbuckle murder trial in 1921." Barry's child, Carol Ann, was born in October 1944, and the paternity suit went to court in February 1945. After two arduous trials, in which the prosecuting lawyer accused him of "moral turpitude", Chaplin was declared to be the father. Evidence from blood tests that indicated otherwise was not admissible, and the judge ordered Chaplin to pay child support until Carol Ann turned 21. 
Media coverage of the paternity suit was influenced by the FBI, as information was fed to the prominent gossip columnist Hedda Hopper, and Chaplin was portrayed in an overwhelmingly critical light. The controversy surrounding Chaplin increased when, two weeks after the paternity suit was filed, it was announced that he had married his newest protégée, 18-year-old Oona O'Neill – daughter of the American playwright Eugene O'Neill. Chaplin, then 54, had been introduced to her by a film agent seven months earlier. In his autobiography, Chaplin described meeting O'Neill as "the happiest event of my life", and claimed to have found "perfect love". Chaplin's son, Charles Jr., reported that Oona "worshipped" his father. The couple remained married until Chaplin's death, and had eight children over 18 years: Geraldine Leigh (b. July 1944), Michael John (b. March 1946), Josephine Hannah (b. March 1949), Victoria (b. May 1951), Eugene Anthony (b. August 1953), Jane Cecil (b. May 1957), Annette Emily (b. December 1959), and Christopher James (b. July 1962). Chaplin claimed that the Barry trials had "crippled [his] creativeness", and it was some time before he began working again. In April 1946, he finally began filming a project that had been in development since 1942. "Monsieur Verdoux" was a black comedy, the story of a French bank clerk, Verdoux (Chaplin), who loses his job and begins marrying and murdering wealthy widows to support his family. Chaplin's inspiration for the project came from Orson Welles, who wanted him to star in a film about the French serial killer Henri Désiré Landru. Chaplin decided that the concept would "make a wonderful comedy", and paid Welles $5,000 for the idea. Chaplin again vocalised his political views in "Monsieur Verdoux", criticising capitalism and arguing that the world encourages mass killing through wars and weapons of mass destruction. Because of this, the film met with controversy when it was released in April 1947; Chaplin was booed at the premiere, and there were calls for a boycott. "Monsieur Verdoux" was the first Chaplin release that failed both critically and commercially in the United States. It was more successful abroad, and Chaplin's screenplay was nominated at the Academy Awards. He was proud of the film, writing in his autobiography, ""Monsieur Verdoux" is the cleverest and most brilliant film I have yet made." The negative reaction to "Monsieur Verdoux" was largely the result of changes in Chaplin's public image. Along with the damage from the Joan Barry scandal, he was publicly accused of being a communist. His political activity had heightened during World War II, when he campaigned for the opening of a Second Front to help the Soviet Union and supported various Soviet–American friendship groups. He was also friendly with several suspected communists, and attended functions given by Soviet diplomats in Los Angeles. In the political climate of 1940s America, such activities meant Chaplin was considered, as Larcher writes, "dangerously progressive and amoral." The FBI wanted him out of the country, and launched an official investigation in early 1947. Chaplin denied being a communist, instead calling himself a "peacemonger", but felt the government's effort to suppress the ideology was an unacceptable infringement of civil liberties. Unwilling to be quiet about the issue, he openly protested against the trials of Communist Party members and the activities of the House Un-American Activities Committee. 
Chaplin received a subpoena to appear before HUAC but was not called to testify. As his activities were widely reported in the press, and Cold War fears grew, questions were raised over his failure to take American citizenship. Calls were made for him to be deported; in one extreme and widely published example, Representative John E. Rankin, who helped establish HUAC, told Congress in June 1947: "[Chaplin's] very life in Hollywood is detrimental to the moral fabric of America. [If he is deported] ... his loathsome pictures can be kept from before the eyes of the American youth. He should be deported and gotten rid of at once." Although Chaplin remained politically active in the years following the failure of "Monsieur Verdoux", his next film, about a forgotten vaudeville comedian and a young ballerina in Edwardian London, was devoid of political themes. "Limelight" was heavily autobiographical, alluding not only to Chaplin's childhood and the lives of his parents, but also to his loss of popularity in the United States. The cast included various members of his family, including his five oldest children and his half-brother, Wheeler Dryden. Filming began in November 1951, by which time Chaplin had spent three years working on the story. He aimed for a more serious tone than any of his previous films, regularly using the word "melancholy" when explaining his plans to his co-star Claire Bloom. "Limelight" featured a cameo appearance from Buster Keaton, whom Chaplin cast as his stage partner in a pantomime scene. This marked the only time the comedians worked together. Chaplin decided to hold the world premiere of "Limelight" in London, since it was the setting of the film. As he left Los Angeles, he expressed a premonition that he would not be returning. At New York, he boarded a ship bound for England with his family on 18 September 1952. The next day, attorney general James P. McGranery revoked Chaplin's re-entry permit and stated that he would have to submit to an interview concerning his political views and moral behaviour in order to re-enter the US. Although McGranery told the press that he had "a pretty good case against Chaplin", Maland has concluded, on the basis of the FBI files that were released in the 1980s, that the US government had no real evidence to prevent Chaplin's re-entry. It is likely that he would have gained entry if he had applied for it. However, when Chaplin received a cablegram informing him of the news, he privately decided to cut his ties with the United States. Because all of his property remained in America, Chaplin refrained from saying anything negative about the incident to the press. The scandal attracted vast attention, but Chaplin and his film were warmly received in Europe. In America, the hostility towards him continued, and, although it received some positive reviews, "Limelight" was subjected to a wide-scale boycott. Reflecting on this, Maland writes that Chaplin's fall, from an "unprecedented" level of popularity, "may be the most dramatic in the history of stardom in America". Chaplin did not attempt to return to the United States after his re-entry permit was revoked, and instead sent his wife to settle his affairs. The couple decided to settle in Switzerland and, in January 1953, the family moved into their permanent home: Manoir de Ban, an estate overlooking Lake Geneva in Corsier-sur-Vevey. Chaplin put his Beverly Hills house and studio up for sale in March, and surrendered his re-entry permit in April. 
The next year, his wife renounced her US citizenship and became a British citizen. Chaplin severed the last of his professional ties with the United States in 1955, when he sold the remainder of his stock in United Artists, which had been in financial difficulty since the early 1940s. Chaplin remained a controversial figure throughout the 1950s, especially after he was awarded the International Peace Prize by the communist-led World Peace Council, and after his meetings with Zhou Enlai and Nikita Khrushchev. He began developing his first European film, "A King in New York", in 1954. Casting himself as an exiled king who seeks asylum in the United States, Chaplin included several of his recent experiences in the screenplay. His son, Michael, was cast as a boy whose parents are targeted by the FBI, while Chaplin's character faces accusations of communism. The political satire parodied HUAC and attacked elements of 1950s culture – including consumerism, plastic surgery, and wide-screen cinema. In a review, the playwright John Osborne called it Chaplin's "most bitter" and "most openly personal" film. Chaplin founded a new production company, Attica, and used Shepperton Studios for the shooting. Filming in England proved a difficult experience, as he was used to his own Hollywood studio and familiar crew, and no longer had limitless production time. According to Robinson, this had an effect on the quality of the film. "A King in New York" was released in September 1957, and received mixed reviews. Chaplin banned American journalists from its Paris première and decided not to release the film in the United States. This severely limited its revenue, although it achieved moderate commercial success in Europe. "A King in New York" was not shown in America until 1973. In the last two decades of his career, Chaplin concentrated on re-editing and scoring his old films for re-release, along with securing their ownership and distribution rights. In an interview he granted in 1959, the year of his 70th birthday, Chaplin stated that there was still "room for the Little Man in the atomic age". The first of these re-releases was "The Chaplin Revue" (1959), which included new versions of "A Dog's Life", "Shoulder Arms", and "The Pilgrim". In America, the political atmosphere began to change and attention was once again directed to Chaplin's films instead of his views. In July 1962, "The New York Times" published an editorial stating that "we do not believe the Republic would be in danger if yesterday's unforgotten little tramp were allowed to amble down the gangplank of a steamer or plane in an American port". The same month, Chaplin was invested with the honorary degree of Doctor of Letters by the universities of Oxford and Durham. In November 1963, the Plaza Theater in New York started a year-long series of Chaplin's films, including "Monsieur Verdoux" and "Limelight", which gained excellent reviews from American critics. September 1964 saw the release of Chaplin's memoirs, "My Autobiography", which he had been working on since 1957. The 500-page book, which focused on his early years and personal life, became a worldwide best-seller, despite criticism over the lack of information on his film career. Shortly after the publication of his memoirs, Chaplin began work on "A Countess from Hong Kong" (1967), a romantic comedy based on a script he had written for Paulette Goddard in the 1930s. 
Set on an ocean liner, it starred Marlon Brando as an American ambassador and Sophia Loren as a stowaway found in his cabin. The film differed from Chaplin's earlier productions in several aspects. It was his first to use Technicolor and the widescreen format, while he concentrated on directing and appeared on-screen only in a cameo role as a seasick steward. He also signed a deal with Universal Pictures and appointed his assistant, Jerome Epstein, as the producer. Chaplin was paid a $600,000 director's fee as well as a percentage of the gross receipts. "A Countess from Hong Kong" premiered in January 1967, to unfavourable reviews, and was a box-office failure. Chaplin was deeply hurt by the negative reaction to the film, which turned out to be his last. Chaplin suffered a series of minor strokes in the late 1960s, which marked the beginning of a slow decline in his health. Despite the setbacks, he was soon writing a new film script, "The Freak", a story of a winged girl found in South America, which he intended as a starring vehicle for his daughter, Victoria. His fragile health prevented the project from being realised. In the early 1970s, Chaplin concentrated on re-releasing his old films, including "The Kid" and "The Circus". In 1971, he was made a Commander of the National Order of the Legion of Honour at the Cannes Film Festival. The following year, he was honoured with a special award by the Venice Film Festival. In 1972, the Academy of Motion Picture Arts and Sciences offered Chaplin an Honorary Award, which Robinson sees as a sign that America "wanted to make amends". Chaplin was initially hesitant about accepting but decided to return to the US for the first time in 20 years. The visit attracted a large amount of press coverage and, at the Academy Awards gala, he was given a twelve-minute standing ovation, the longest in the Academy's history. Visibly emotional, Chaplin accepted his award for "the incalculable effect he has had in making motion pictures the art form of this century". Although Chaplin still had plans for future film projects, by the mid-1970s he was very frail. He experienced several further strokes, which made it difficult for him to communicate, and he had to use a wheelchair. His final projects were compiling a pictorial autobiography, "My Life in Pictures" (1974), and scoring "A Woman of Paris" for re-release in 1976. He also appeared in a documentary about his life, "The Gentleman Tramp" (1975), directed by Richard Patterson. In the 1975 New Year Honours, Chaplin was awarded a knighthood by Queen Elizabeth II, though he was too weak to kneel and received the honour in his wheelchair. By October 1977, Chaplin's health had declined to the point that he needed constant care. In the early morning of 25 December 1977, Chaplin died at home after suffering a stroke in his sleep. He was 88 years old. The funeral, on 27 December, was a small and private Anglican ceremony, according to his wishes. Chaplin was interred in the Corsier-sur-Vevey cemetery. Among the film industry's tributes, director René Clair wrote, "He was a monument of the cinema, of all countries and all times ... the most beautiful gift the cinema made to us." Actor Bob Hope declared, "We were lucky to have lived in his time." On 1 March 1978, Chaplin's coffin was dug up and stolen from its grave by two unemployed immigrants, Roman Wardas, from Poland, and Gantcho Ganev, from Bulgaria. The body was held for ransom in an attempt to extort money from Oona Chaplin. 
The pair were caught in a large police operation in May, and Chaplin's coffin was found buried in a field in the nearby village of Noville. It was re-interred in the Corsier cemetery surrounded by reinforced concrete. Chaplin believed his first influence to be his mother, who entertained him as a child by sitting at the window and mimicking passers-by: "it was through watching her that I learned not only how to express emotions with my hands and face, but also how to observe and study people." Chaplin's early years in music hall allowed him to see stage comedians at work; he also attended the Christmas pantomimes at Drury Lane, where he studied the art of clowning through performers like Dan Leno. Chaplin's years with the Fred Karno company had a formative effect on him as an actor and filmmaker. Simon Louvish writes that the company was his "training ground", and it was here that Chaplin learned to vary the pace of his comedy. The concept of mixing pathos with slapstick was learnt from Karno, who also used elements of absurdity that became familiar in Chaplin's gags. From the film industry, Chaplin drew upon the work of the French comedian Max Linder, whose films he greatly admired. In developing the Tramp costume and persona, he was likely inspired by the American vaudeville scene, where tramp characters were common. Chaplin never spoke more than cursorily about his filmmaking methods, claiming such a thing would be tantamount to a magician spoiling his own illusion. Little was known about his working process throughout his lifetime, but research from film historians – particularly the findings of Kevin Brownlow and David Gill that were presented in the three-part documentary "Unknown Chaplin" (1983) – has since revealed his unique working method. Until he began making spoken dialogue films with "The Great Dictator", Chaplin never shot from a completed script. Many of his early films began with only a vague premise – for example "Charlie enters a health spa" or "Charlie works in a pawn shop." He then had sets constructed and worked with his stock company to improvise gags and "business" using them, almost always working the ideas out on film. As ideas were accepted and discarded, a narrative structure would emerge, frequently requiring Chaplin to reshoot an already-completed scene that might have otherwise contradicted the story. From "A Woman of Paris" onward Chaplin began the filming process with a prepared plot, but Robinson writes that every film up to "Modern Times" "went through many metamorphoses and permutations before the story took its final form." Producing films in this manner meant Chaplin took longer to complete his pictures than almost any other filmmaker at the time. If he was out of ideas, he often took a break from the shoot, which could last for days, while keeping the studio ready for when inspiration returned. Delaying the process further was Chaplin's rigorous perfectionism. According to his friend Ivor Montagu, "nothing but perfection would be right" for the filmmaker. Because he personally funded his films, Chaplin was at liberty to strive for this goal and shoot as many takes as he wished. The number was often excessive; for instance, 53 takes for every finished take in "The Kid". For "The Immigrant", a 20-minute short, Chaplin shot 40,000 feet of film – enough for a feature-length film. Describing his working method as "sheer perseverance to the point of madness", Chaplin would be completely consumed by the production of a picture. 
Robinson writes that even in Chaplin's later years, his work continued "to take precedence over everything and everyone else." The combination of story improvisation and relentless perfectionism – which resulted in days of effort and thousands of feet of film being wasted, all at enormous expense – often proved taxing for Chaplin who, in frustration, would lash out at his actors and crew. Chaplin exercised complete control over his pictures, to the extent that he would act out the other roles for his cast, expecting them to imitate him exactly. He personally edited all of his films, trawling through the large amounts of footage to create the exact picture he wanted. As a result of his complete independence, he was identified by the film historian Andrew Sarris as one of the first auteur filmmakers. Chaplin did receive help, notably from his long-time cinematographer Roland Totheroh, brother Sydney Chaplin, and various assistant directors such as Harry Crocker and Charles Reisner. While Chaplin's comedic style is broadly defined as slapstick, it is considered restrained and intelligent, with the film historian Philip Kemp describing his work as a mix of "deft, balletic physical comedy and thoughtful, situation-based gags". Chaplin diverged from conventional slapstick by slowing the pace and exhausting each scene of its comic potential, with more focus on developing the viewer's relationship to the characters. Unlike conventional slapstick comedies, Robinson states that the comic moments in Chaplin's films centre on the Tramp's attitude to the things happening to him: the humour does not come from the Tramp bumping into a tree, but from his lifting his hat to the tree in apology. Dan Kamin writes that Chaplin's "quirky mannerisms" and "serious demeanour in the midst of slapstick action" are other key aspects of his comedy, while the surreal transformation of objects and the employment of in-camera trickery are also common features. Chaplin's silent films typically follow the Tramp's efforts to survive in a hostile world. The character lives in poverty and is frequently treated badly, but remains kind and upbeat; defying his social position, he strives to be seen as a gentleman. As Chaplin said in 1925, "The whole point of the Little Fellow is that no matter how down on his uppers he is, no matter how well the jackals succeed in tearing him apart, he's still a man of dignity." The Tramp defies authority figures and "gives as good as he gets", leading Robinson and Louvish to see him as a representative for the underprivileged – an "everyman turned heroic saviour". Hansmeyer notes that several of Chaplin's films end with "the homeless and lonely Tramp [walking] optimistically ... into the sunset ... to continue his journey". The infusion of pathos is a well-known aspect of Chaplin's work, and Larcher notes his reputation for "[inducing] laughter and tears". Sentimentality in his films comes from a variety of sources, with Louvish pinpointing "personal failure, society's strictures, economic disaster, and the elements." Chaplin sometimes drew on tragic events when creating his films, as in the case of "The Gold Rush" (1925), which was inspired by the fate of the Donner Party. Constance B. Kuriyama has identified serious underlying themes in the early comedies, such as greed ("The Gold Rush") and loss ("The Kid"). Chaplin also touched on controversial issues: immigration ("The Immigrant", 1917); illegitimacy ("The Kid", 1921); and drug use ("Easy Street", 1917). 
He often explored these topics ironically, making comedy out of suffering. Social commentary was a feature of Chaplin's films from early in his career, as he portrayed the underdog in a sympathetic light and highlighted the difficulties of the poor. Later, as he developed a keen interest in economics and felt obliged to publicise his views, Chaplin began incorporating overtly political messages into his films. "Modern Times" (1936) depicted factory workers in dismal conditions, "The Great Dictator" (1940) parodied Adolf Hitler and Benito Mussolini and ended in a speech against nationalism, "Monsieur Verdoux" (1947) criticised war and capitalism, and "A King in New York" (1957) attacked McCarthyism. Several of Chaplin's films incorporate autobiographical elements, and the psychologist Sigmund Freud believed that Chaplin "always plays only himself as he was in his dismal youth". "The Kid" is thought to reflect Chaplin's childhood trauma of being sent into an orphanage, the main characters in "Limelight" (1952) contain elements from the lives of his parents, and "A King in New York" references Chaplin's experiences of being shunned by the United States. Many of his sets, especially in street scenes, bear a strong similarity to Kennington, where he grew up. Stephen M. Weissman has argued that Chaplin's problematic relationship with his mentally ill mother was often reflected in his female characters and the Tramp's desire to save them. Regarding the structure of Chaplin's films, the scholar Gerald Mast sees them as consisting of sketches tied together by the same theme and setting, rather than having a tightly unified storyline. Visually, his films are simple and economic, with scenes portrayed as if set on a stage. His approach to filming was described by the art director Eugène Lourié: "Chaplin did not think in 'artistic' images when he was shooting. He believed that action is the main thing. The camera is there to photograph the actors". In his autobiography, Chaplin wrote, "Simplicity is best ... pompous effects slow up action, are boring and unpleasant ... The camera should not intrude." This approach has prompted criticism, since the 1940s, for being "old fashioned", while the film scholar Donald McCaffrey sees it as an indication that Chaplin never completely understood film as a medium. Kamin, however, comments that Chaplin's comedic talent would not be enough to remain funny on screen if he did not have an "ability to conceive and direct scenes specifically for the film medium". Chaplin developed a passion for music as a child and taught himself to play the piano, violin, and cello. He considered the musical accompaniment of a film to be important, and from "A Woman of Paris" onwards he took an increasing interest in this area. With the advent of sound technology, Chaplin began using a synchronised orchestral soundtrack – composed by himself – for "City Lights" (1931). He thereafter composed the scores for all of his films, and from the late 1950s to his death, he scored all of his silent features and some of his short films. As Chaplin was not a trained musician, he could not read sheet music and needed the help of professional composers, such as David Raksin, Raymond Rasch and Eric James, when creating his scores. Musical directors were employed to oversee the recording process, such as Alfred Newman for "City Lights". 
Although some critics have claimed that credit for his film music should be given to the composers who worked with him, Raksin – who worked with Chaplin on "Modern Times" – stressed Chaplin's creative position and active participation in the composing process. This process, which could take months, would start with Chaplin describing to the composer(s) exactly what he wanted and singing or playing tunes he had improvised on the piano. These tunes were then developed further in close collaboration between the composer(s) and Chaplin. According to film historian Jeffrey Vance, "although he relied upon associates to arrange varied and complex instrumentation, the musical imperative is his, and not a note in a Chaplin musical score was placed there without his assent." Chaplin's compositions produced three popular songs. "Smile", composed originally for "Modern Times" (1936) and later set to lyrics by John Turner and Geoffrey Parsons, was a hit for Nat King Cole in 1954. For "Limelight", Chaplin composed "Terry's Theme", which was popularised by Jimmy Young as "Eternally" (1952). Finally, "This Is My Song", performed by Petula Clark for "A Countess from Hong Kong" (1967), reached number one on the UK and other European charts. Chaplin also received his only competitive Oscar for his composition work, as the "Limelight" theme won an Academy Award for Best Original Score in 1973 following the film's re-release. In 1998, the film critic Andrew Sarris called Chaplin "arguably the single most important artist produced by the cinema, certainly its most extraordinary performer and probably still its most universal icon". He is described by the British Film Institute as "a towering figure in world culture", and was included in "Time" magazine's list of the 100 most important people of the 20th century for the "laughter [he brought] to millions" and because he "more or less invented global recognizability and helped turn an industry into an art". The image of the Tramp has become a part of cultural history; according to Simon Louvish, the character is recognisable to people who have never seen a Chaplin film, and in places where his films are never shown. The critic Leonard Maltin has written of the "unique" and "indelible" nature of the Tramp, and argued that no other comedian matched his "worldwide impact". Praising the character, Richard Schickel suggests that Chaplin's films with the Tramp contain the most "eloquent, richly comedic expressions of the human spirit" in movie history. Memorabilia connected to the character still fetches large sums in auctions: in 2006 a bowler hat and a bamboo cane that were part of the Tramp's costume were bought for $140,000 in a Los Angeles auction. As a filmmaker, Chaplin is considered a pioneer and one of the most influential figures of the early twentieth century. He is often credited as one of the medium's first artists. Film historian Mark Cousins has written that Chaplin "changed not only the imagery of cinema, but also its sociology and grammar" and claims that Chaplin was as important to the development of comedy as a genre as D.W. Griffith was to drama. He was the first to popularise feature-length comedy and to slow down the pace of action, adding pathos and subtlety to it. Although his work is mostly classified as slapstick, Chaplin's drama "A Woman of Paris" (1923) was a major influence on Ernst Lubitsch's film "The Marriage Circle" (1924) and thus played a part in the development of "sophisticated comedy". 
According to David Robinson, Chaplin's innovations were "rapidly assimilated to become part of the common practice of film craft." Filmmakers who cited Chaplin as an influence include Federico Fellini (who called Chaplin "a sort of Adam, from whom we are all descended"), Jacques Tati ("Without him I would never have made a film"), René Clair ("He inspired practically every filmmaker"), Michael Powell, Billy Wilder, Vittorio De Sica, and Richard Attenborough. Russian filmmaker Andrei Tarkovsky praised Chaplin as "the only person to have gone down into cinematic history without any shadow of a doubt. The films he left behind can never grow old." Chaplin also strongly influenced the work of later comedians. Marcel Marceau said he was inspired to become a mime artist after watching Chaplin, while the actor Raj Kapoor based his screen persona on the Tramp. Mark Cousins has also detected Chaplin's comedic style in the French character Monsieur Hulot and the Italian character Totò. In other fields, Chaplin helped inspire the cartoon characters Felix the Cat and Mickey Mouse, and was an influence on the Dada art movement. As one of the founding members of United Artists, Chaplin also had a role in the development of the film industry. Gerald Mast has written that although UA never became a major company like MGM or Paramount Pictures, the idea that directors could produce their own films was "years ahead of its time". In the 21st century, several of Chaplin's films are still regarded as classics and among the greatest ever made. The 2012 "Sight & Sound" poll, which compiles "top ten" ballots from film critics and directors to determine each group's most acclaimed films, saw "City Lights" rank among the critics' top 50, "Modern Times" inside the top 100, and "The Great Dictator" and "The Gold Rush" placed in the top 250. The top 100 films as voted on by directors included "Modern Times" at number 22, "City Lights" at number 30, and "The Gold Rush" at number 91. Every one of Chaplin's features received a vote. In 2007, the American Film Institute named "City Lights" the 11th greatest American film of all time, while "The Gold Rush" and "Modern Times" again ranked in the top 100. Books about Chaplin continue to be published regularly, and he is a popular subject for media scholars and film archivists. Many of Chaplin's films have had DVD and Blu-ray releases. Chaplin's final home, Manoir de Ban in Corsier-sur-Vevey, Switzerland, has been converted into a museum named "Chaplin's World". It opened on 17 April 2016 after 15 years of development, and is described by Reuters as "an interactive museum showcasing the life and works of Charlie Chaplin". On the 128th anniversary of his birth, a record-setting 662 people dressed as the Tramp in an event organised by the museum. Previously, the Museum of the Moving Image in London held a permanent display on Chaplin, and hosted an exhibition dedicated to his life and career in 1988. The London Film Museum hosted an exhibition called "Charlie Chaplin – The Great Londoner", from 2010 until 2013. In London, a statue of Chaplin as the Tramp, sculpted by John Doubleday and unveiled in 1981, is located in Leicester Square. The city also includes a road named after him in central London, "Charlie Chaplin Walk", which is the location of the BFI IMAX. There are nine blue plaques memorialising Chaplin in London, Hampshire, and Yorkshire. The Swiss town of Vevey named a park in his honour in 1980 and erected a statue there in 1982. 
In 2011, two large murals depicting Chaplin on two 14-storey buildings were also unveiled in Vevey. Chaplin has also been honoured by the Irish town of Waterville, where he spent several summers with his family in the 1960s. A statue was erected in 1998; since 2011, the town has been host to the annual Charlie Chaplin Comedy Film Festival, which was founded to celebrate Chaplin's legacy and to showcase new comic talent. In other tributes, a minor planet, 3623 Chaplin – discovered by Soviet astronomer Lyudmila Karachkina in 1981 – is named after Chaplin. Throughout the 1980s, the Tramp image was used by IBM to advertise their personal computers. The centenary of Chaplin's birth in 1989 was marked with several events around the world, and on 15 April 2011, a day before his 122nd birthday, Google celebrated him with a special Google Doodle video on its global and other country-wide homepages. Many countries, spanning six continents, have honoured Chaplin with a postal stamp. Chaplin's legacy is managed on behalf of his children by the Chaplin office, located in Paris. The office represents Association Chaplin, founded by some of his children "to protect the name, image and moral rights" to his body of work, Roy Export SAS, which owns the copyright to most of his films made after 1918, and Bubbles Incorporated S.A., which owns the copyrights to his image and name. Their central archive is held at the archives of Montreux, Switzerland, and scanned versions of its contents, including 83,630 images, 118 scripts, 976 manuscripts, 7,756 letters, and thousands of other documents, are available for research purposes at the Chaplin Research Centre at the Cineteca di Bologna. The photographic archive, which includes approximately 10,000 photographs from Chaplin's life and career, is kept at the Musée de l'Elysée in Lausanne, Switzerland. The British Film Institute has also established the Charles Chaplin Research Foundation, and the first international Charles Chaplin Conference was held in London in July 2005. Chaplin is the subject of a biographical film, "Chaplin" (1992), directed by Richard Attenborough and starring Robert Downey Jr. in the title role. He is also a character in the period drama film "The Cat's Meow" (2001), played by Eddie Izzard, and in the made-for-television movie "The Scarlett O'Hara War" (1980), played by Clive Revill. A television series about Chaplin's childhood, "Young Charlie Chaplin", ran on PBS in 1989, and was nominated for an Emmy Award for Outstanding Children's Program. Chaplin's life has also been the subject of several stage productions. Two musicals, "Little Tramp" and "Chaplin", were produced in the early 1990s. In 2006, Thomas Meehan and Christopher Curtis created another musical, "Limelight: The Story of Charlie Chaplin", which was first performed at the La Jolla Playhouse in San Diego in 2010. It was adapted for Broadway two years later, re-titled "Chaplin – A Musical". Chaplin was portrayed by Robert McClure in both productions. In 2013, two plays about Chaplin premiered in Finland: "Chaplin" at the Svenska Teatern, and "Kulkuri" ("The Tramp") at the Tampere Workers' Theatre. Chaplin has also been characterised in literary fiction. He is the protagonist of Robert Coover's short story "Charlie in the House of Rue" (1980; reprinted in Coover's 1987 collection "A Night at the Movies"), and of Glen David Gold's "Sunnyside" (2009), a historical novel set in the First World War period. 
A day in Chaplin's life in 1909 is dramatised in the chapter entitled "Modern Times" in Alan Moore's "Jerusalem" (2016), a novel set in the author's home town of Northampton, England. Chaplin received many awards and honours, especially later in life. In the 1975 New Year Honours, he was appointed a Knight Commander of the Most Excellent Order of the British Empire (KBE). He was also awarded honorary Doctor of Letters degrees by the University of Oxford and the University of Durham in 1962. In 1965, he and Ingmar Bergman were joint winners of the Erasmus Prize and, in 1971, he was appointed a Commander of the National Order of the Legion of Honour by the French government. From the film industry, Chaplin received a special Golden Lion at the Venice Film Festival in 1972, and a Lifetime Achievement Award from the Lincoln Center Film Society the same year. The latter has since been presented annually to filmmakers as The Chaplin Award. Chaplin was given a star on the Hollywood Walk of Fame in 1972, having been previously excluded because of his political beliefs. Chaplin received three Academy Awards: an Honorary Award for "versatility and genius in acting, writing, directing, and producing "The Circus"" in 1929, a second Honorary Award for "the incalculable effect he has had in making motion pictures the art form of this century" in 1972, and a Best Score award in 1973 for "Limelight" (shared with Ray Rasch and Larry Russell). He was further nominated in the Best Actor, Best Original Screenplay, and Best Picture (as producer) categories for "The Great Dictator", and received another Best Original Screenplay nomination for "Monsieur Verdoux". In 1976, Chaplin was made a Fellow of the British Academy of Film and Television Arts (BAFTA). Six of Chaplin's films have been selected for preservation in the National Film Registry by the United States Library of Congress: "The Immigrant" (1917), "The Kid" (1921), "The Gold Rush" (1925), "City Lights" (1931), "Modern Times" (1936), and "The Great Dictator" (1940). 
Columbia River The Columbia River is the largest river in the Pacific Northwest region of North America. The river rises in the Rocky Mountains of British Columbia, Canada. It flows northwest and then south into the US state of Washington, then turns west to form most of the border between Washington and the state of Oregon before emptying into the Pacific Ocean. The river is long, and its largest tributary is the Snake River. Its drainage basin is roughly the size of France and extends into seven US states and a Canadian province. The fourth-largest river in the United States by volume, the Columbia has the greatest flow of any North American river entering the Pacific. The Columbia and its tributaries have been central to the region's culture and economy for thousands of years. They have been used for transportation since ancient times, linking the region's many cultural groups. The river system hosts many species of anadromous fish, which migrate between freshwater habitats and the saline waters of the Pacific Ocean. These fish—especially the salmon species—provided the core subsistence for native peoples. In the late 18th century, a private American ship became the first non-indigenous vessel to enter the river; it was followed by a British explorer, who navigated past the Oregon Coast Range into the Willamette Valley. In the following decades, fur trading companies used the Columbia as a key transportation route. 
Overland explorers entered the Willamette Valley through the scenic but treacherous Columbia River Gorge, and pioneers began to settle the valley in increasing numbers. Steamships along the river linked communities and facilitated trade; the arrival of railroads in the late 19th century, many running along the river, supplemented these links. Since the late 19th century, public and private sectors have heavily developed the river. To aid ship and barge navigation, locks have been built along the lower Columbia and its tributaries, and dredging has opened, maintained, and enlarged shipping channels. Since the early 20th century, dams have been built across the river for power generation, navigation, irrigation, and flood control. The 14 hydroelectric dams on the Columbia's main stem and many more on its tributaries produce more than 44 percent of total US hydroelectric generation. Production of nuclear power has taken place at two sites along the river. Plutonium for nuclear weapons was produced for decades at the Hanford Site, which is now the most contaminated nuclear site in the US. These developments have greatly altered river environments in the watershed, mainly through industrial pollution and barriers to fish migration. The Columbia begins its journey in the southern Rocky Mountain Trench in British Columbia (BC). Columbia Lake – above sea level – and the adjoining Columbia Wetlands form the river's headwaters. The trench is a broad, deep, and long glacial valley between the Canadian Rockies and the Columbia Mountains in BC. For its first , the Columbia flows northwest along the trench through Windermere Lake and the town of Invermere, a region known in British Columbia as the Columbia Valley, then northwest to Golden and into Kinbasket Lake. Rounding the northern end of the Selkirk Mountains, the river turns sharply south through a region known as the Big Bend Country, passing through Revelstoke Lake and the Arrow Lakes. Revelstoke, the Big Bend, and the Columbia Valley combined are referred to in BC parlance as the Columbia Country. Below the Arrow Lakes, the Columbia passes the cities of Castlegar, located at the Columbia's confluence with the Kootenay River, and Trail, two major population centers of the West Kootenay region. The Pend Oreille River joins the Columbia about north of the US–Canada border. The Columbia enters eastern Washington flowing south and turning to the west at the Spokane River confluence. It marks the southern and eastern borders of the Colville Indian Reservation and the western border of the Spokane Indian Reservation. The river turns south after the Okanogan River confluence, then southeasterly near the confluence with the Wenatchee River in central Washington. This C‑shaped segment of the river is also known as the "Big Bend". During the Missoula Floods 10,000 to 15,000 years ago, much of the floodwater took a more direct route south, forming the ancient river bed known as the Grand Coulee. After the floods, the river found its present course, and the Grand Coulee was left dry. The construction of the Grand Coulee Dam in the mid-20th century impounded the river, forming Lake Roosevelt, from which water was pumped into the dry coulee, forming the reservoir of Banks Lake. The river flows past The Gorge Amphitheatre, a prominent concert venue in the Northwest, then through Priest Rapids Dam, and then through the Hanford Nuclear Reservation. 
Entirely within the reservation is Hanford Reach, the only US stretch of the river that is completely free-flowing, unimpeded by dams and not a tidal estuary. The Snake River and Yakima River join the Columbia in the Tri‑Cities population center. The Columbia makes a sharp bend to the west at the Washington–Oregon border. The river defines that border for the final of its journey. The Deschutes River joins the Columbia near The Dalles. Between The Dalles and Portland, the river cuts through the Cascade Range, forming the dramatic Columbia River Gorge. No other rivers except for the Klamath and Pit River completely breach the Cascades—the other rivers that flow through the range also originate in or very near the mountains. The headwaters and upper course of the Pit River are on the Modoc Plateau; downstream the Pit cuts a canyon through the southern reaches of the Cascades. In contrast, the Columbia cuts through the range nearly a thousand miles from its source in the Rocky Mountains. The gorge is known for its strong and steady winds, scenic beauty, and its role as an important transportation link. The river continues west, bending sharply to the north-northwest near Portland and Vancouver, Washington, at the Willamette River confluence. Here the river slows considerably, dropping sediment that might otherwise form a river delta. Near Longview, Washington and the Cowlitz River confluence, the river turns west again. The Columbia empties into the Pacific Ocean just west of Astoria, Oregon, over the Columbia Bar, a shifting sandbar that makes the river's mouth one of the most hazardous stretches of water to navigate in the world. Because of the danger and the many shipwrecks near the mouth, it acquired a reputation as the "Graveyard of Ships". The Columbia drains an area of about . Its drainage basin covers nearly all of Idaho, large portions of British Columbia, Oregon, and Washington, ultimately all of Montana west of the Continental Divide, and small portions of Wyoming, Utah, and Nevada; the total area is similar to the size of France. Roughly of the river's length and 85 percent of its drainage basin are in the US. The Columbia is the twelfth-longest river and has the sixth-largest drainage basin in the United States. In Canada, where the Columbia flows for and drains , the river ranks 23rd in length, and the Canadian part of its basin ranks 13th in size among Canadian basins. The Columbia shares its name with nearby places, such as British Columbia, as well as with landforms and bodies of water. With an average flow at the mouth of about , the Columbia is the largest river by discharge flowing into the Pacific from North America and is the fourth-largest by volume in the US. The average flow where the river crosses the international border between Canada and the United States is from a drainage basin of . This amounts to about 15 percent of the entire Columbia watershed. The Columbia's highest recorded flow, measured at The Dalles, was in June 1894, before the river was dammed. The lowest flow recorded at The Dalles was on April 16, 1968, and was caused by the initial closure of the John Day Dam, upstream. The Dalles is about from the mouth; the river at this point drains about or about 91 percent of the total watershed. Flow rates on the Columbia are affected by many large upstream reservoirs, many diversions for irrigation, and, on the lower stretches, reverse flow from the tides of the Pacific Ocean. 
The National Ocean Service observes water levels at six tide gauges and issues tide forecasts for twenty-two additional locations along the river between the entrance at the North Jetty and the base of Bonneville Dam, the head of tide. When the rifting of Pangaea, due to the process of plate tectonics, pushed North America away from Europe and Africa and into the Panthalassic Ocean (ancestor to the modern Pacific Ocean), the Pacific Northwest was not part of the continent. As the North American continent moved westward, the Farallon Plate subducted under its western margin. As the plate subducted, it carried along island arcs which were accreted to the North American continent, resulting in the creation of the Pacific Northwest between 150 and 90 million years ago. The general outline of the Columbia Basin was not complete until between 60 and 40 million years ago, but it lay under a large inland sea later subject to uplift. Between 50 and 20 million years ago, from the Eocene through the Miocene epochs, tremendous volcanic eruptions frequently modified much of the landscape traversed by the Columbia. The lower reaches of the ancestral river passed through a valley near where Mount Hood later arose. Carrying sediments from erosion and erupting volcanoes, it built a thick delta that underlies the foothills on the east side of the Coast Range near Vernonia in northwestern Oregon. Between 17 million and 6 million years ago, huge outpourings of flood basalt lava covered the Columbia River Plateau and forced the lower Columbia into its present course. The modern Cascade Range began to uplift 5 to 4 million years ago. Cutting through the uplifting mountains, the Columbia River significantly deepened the Columbia River Gorge. The river and its drainage basin experienced some of the world's greatest known catastrophic floods toward the end of the last ice age. The periodic rupturing of ice dams at Glacial Lake Missoula resulted in the Missoula Floods, with discharges exceeding the combined flow of all the other rivers in the world, dozens of times over thousands of years. The exact number of floods is unknown, but geologists have documented at least 40; evidence suggests that they occurred between about 19,000 and 13,000 years ago. The floodwaters rushed across eastern Washington, creating the channeled scablands, which are a complex network of dry canyon-like channels, or coulees, that are often braided and sharply gouged into the basalt rock underlying the region's deep topsoil. Numerous flat-topped buttes with rich soil stand high above the chaotic scablands. Constrictions at several places caused the floodwaters to pool into large temporary lakes, such as Lake Lewis, in which sediments were deposited. Water depths have been estimated at at Wallula Gap and over modern Portland, Oregon. Sediments were also deposited when the floodwaters slowed in the broad flats of the Quincy, Othello, and Pasco Basins. The floods' periodic inundation of the lower Columbia River Plateau deposited rich sediments; 21st-century farmers in the Willamette Valley "plow fields of fertile Montana soil and clays from Washington's Palouse". Over the last several thousand years a series of large landslides have occurred on the north side of the Columbia River Gorge, sending massive amounts of debris south from Table Mountain and Greenleaf Peak into the gorge near the present site of Bonneville Dam. 
The most recent and significant of these is known as the Bonneville Slide, which formed a massive earthen dam, filling of the river's length. Various studies have placed the date of the Bonneville Slide anywhere between 1060 and 1760 AD; the idea that the landslide debris present today was formed by more than one slide is relatively recent and may explain the large range of estimates. It has been suggested that if the later dates are accurate there may be a link with the 1700 Cascadia earthquake. The pile of debris resulting from the Bonneville Slide blocked the river until rising water finally washed away the sediment. It is not known how long it took the river to break through the barrier; estimates range from several months to several years. Much of the landslide's debris remained, forcing the river about south of its previous channel and forming the Cascade Rapids. In 1938, the construction of Bonneville Dam inundated the rapids as well as the remaining trees that could be used to refine the estimated date of the landslide. In 1980, the eruption of Mount St. Helens deposited large amounts of sediment in the lower Columbia, temporarily reducing the depth of the shipping channel by . Humans have inhabited the Columbia's watershed for more than 15,000 years, with a transition to a sedentary lifestyle based mainly on salmon starting about 3,500 years ago. In 1962, archaeologists found evidence of human activity dating back 11,230 years at the Marmes Rockshelter, near the confluence of the Palouse and Snake rivers in eastern Washington. In 1996 the skeletal remains of a 9,000-year-old prehistoric man (dubbed Kennewick Man) were found near Kennewick, Washington. The discovery rekindled debate in the scientific community over the origins of human habitation in North America and sparked a protracted controversy over whether the scientific or Native American community was entitled to possess and/or study the remains. Many different Native Americans and First Nations peoples have a historical and continuing presence on the Columbia. South of the Canada–US border, the Colville, Spokane, Coeur d'Alene, Yakama, Nez Perce, Cayuse, Palus, Umatilla, Cowlitz, and the Confederated Tribes of Warm Springs live along the US stretch. Along the upper Snake River and Salmon River, the Shoshone Bannock tribes are present. The Sinixt or Lakes people lived on the lower stretch of the Canadian portion, while above that the Shuswap people (Secwepemc in their own language) reckon the whole of the upper Columbia east to the Rockies as part of their territory. The Canadian portion of the Columbia Basin outlines the traditional homelands of the Canadian Kootenay–Ktunaxa. The Chinook tribe, which is not federally recognized and lives near the lower Columbia River, calls it "Wimahl" in the Chinookan language, and it is "Nch’i-Wàna" to the Sahaptin-speaking peoples of its middle course in present-day Washington. The river is known as "swah'netk'qhu" by the Sinixt people, who live in the area of the Arrow Lakes in the river's upper reaches in Canada. All three terms essentially mean "the big river". Oral histories describe the formation and destruction of the Bridge of the Gods, a land bridge that connected the Oregon and Washington sides of the river in the Columbia River Gorge. 
The bridge, which aligns with geological records of the Bonneville Slide, was described in some stories as the result of a battle between gods, represented by Mount Adams and Mount Hood, in their competition for the affection of a goddess, represented by Mount St. Helens. Native American stories about the bridge differ in their details but agree in general that the bridge permitted increased interaction between tribes on the north and south sides of the river. Horses, originally acquired from Spanish New Mexico, spread widely via native trade networks, reaching the Shoshone of the Snake River Plain by 1700. The Nez Perce, Cayuse, and Flathead people acquired their first horses around 1730. Along with horses came aspects of the emerging plains culture, such as equestrian and horse training skills, greatly increased mobility, hunting efficiency, trade over long distances, intensified warfare, the linking of wealth and prestige to horses and war, and the rise of large and powerful tribal confederacies. The Nez Perce and Cayuse kept large herds and made annual long-distance trips to the Great Plains for bison hunting, adopted the plains culture to a significant degree, and became the main conduit through which horses and the plains culture diffused into the Columbia River region. Other peoples acquired horses and aspects of the plains culture unevenly. The Yakama, Umatilla, Palus, Spokane, and Coeur d'Alene maintained sizable herds of horses and adopted some of the plains cultural characteristics, but fishing and fish-related economies remained important. Less affected groups included the Molala, Klickitat, Wenatchi, Okanagan, and Sinkiuse-Columbia peoples, who owned small numbers of horses and adopted few plains culture features. Some groups remained essentially unaffected, such as the Sanpoil and Nespelem people, whose culture remained centered on fishing. Natives of the region encountered foreigners at several times and places during the 18th and 19th centuries. European and American vessels explored the coastal area around the mouth of the river in the late 18th century, trading with local natives. The contact would prove devastating to the Indian tribes; a large portion of their population was wiped out by a smallpox epidemic. Canadian explorer Alexander Mackenzie crossed what is now interior British Columbia in 1793. From 1805 to 1807, the Lewis and Clark Expedition entered the Oregon Country along the Clearwater and Snake rivers, and encountered numerous small settlements of natives. Their records recount tales of hospitable traders who were not above stealing small items from the visitors. They also noted brass teakettles, a British musket, and other artifacts that had been obtained in trade with coastal tribes. From the earliest contact with westerners, the natives of the mid- and lower Columbia were not tribal, but instead congregated in social units no larger than a village, and more often at a family level; these units would shift with the season as people moved about, following the salmon catch up and down the river's tributaries. Sparked by the 1847 Whitman Massacre, a number of violent battles were fought between American settlers and the region's natives. The subsequent Indian Wars, especially the Yakima War, decimated the native population and removed much land from native control. As years progressed, the right of natives to fish along the Columbia became the central issue of contention with the states, commercial fishers, and private property owners. 
The US Supreme Court upheld fishing rights in landmark cases in 1905 and 1918, as well as the 1974 case "United States v. Washington", commonly called the Boldt Decision. Fish were central to the culture of the region's natives, both as sustenance and as part of their religious beliefs. Natives drew fish from the Columbia at several major sites, which also served as trading posts. Celilo Falls, located east of the modern city of The Dalles, was a vital hub for trade and the interaction of different cultural groups, being used for fishing and trading for 11,000 years. Prior to contact with westerners, villages along this stretch may have at times had a population as great as 10,000. The site drew traders from as far away as the Great Plains. The Cascades Rapids of the Columbia River Gorge, and Kettle Falls and Priest Rapids in eastern Washington, were also major fishing and trading sites. In prehistoric times the Columbia's salmon and steelhead runs numbered an estimated annual average of 10 to 16 million fish. In comparison, the largest run since 1938 was in 1986, with 3.2 million fish entering the Columbia. The annual catch by natives has been estimated at . The most important and productive native fishing site was located at Celilo Falls, which was perhaps the most productive inland fishing site in North America. The falls were located at the border between Chinookan- and Sahaptian-speaking peoples and served as the center of an extensive trading network across the Pacific Plateau. Celilo was the oldest continuously inhabited community on the North American continent. Salmon canneries established by white settlers beginning in 1866 had a strong negative impact on the salmon population, and in 1908 US President Theodore Roosevelt observed that the salmon runs were but a fraction of what they had been 25 years prior. As river development continued in the 20th century, each of these major fishing sites was flooded by a dam, beginning with Cascades Rapids in 1938. The development was accompanied by extensive negotiations between natives and US government agencies. The Confederated Tribes of Warm Springs, a coalition of various tribes, adopted a constitution and incorporated after the 1938 completion of the Bonneville Dam flooded Cascades Rapids; the Yakama were slower to do so, organizing a formal government in 1944. Still, in the 1930s, there were natives who lived along the river and fished year round, moving with the fish's migration patterns throughout the seasons. In the 21st century, the Yakama, Nez Perce, Umatilla, and Warm Springs tribes all have treaty fishing rights along the Columbia and its tributaries. In 1957 Celilo Falls was submerged by the construction of The Dalles Dam, and the native fishing community was displaced. The affected tribes received a $26.8 million settlement for the loss of Celilo and other fishing sites submerged by The Dalles Dam. The Confederated Tribes of Warm Springs used part of its $4 million settlement to establish the Kah-Nee-Ta resort south of Mount Hood. Some historians believe that Japanese or Chinese vessels blown off course reached the Northwest Coast long before Europeans—possibly as early as 219 BCE. Historian Derek Hayes claims that "It is a near certainty that Japanese or Chinese people arrived on the northwest coast long before any European." It is unknown whether they landed near the Columbia. 
Evidence exists that Spanish castaways reached the shore in 1679 and traded with the Clatsop; if these were the first Europeans to see the Columbia, they failed to send word home to Spain. In the 18th century, there was strong interest in discovering a Northwest Passage that would permit navigation between the Atlantic (or inland North America) and the Pacific Ocean. Many ships in the area, especially those under Spanish and British command, searched the northwest coast for a large river that might connect to Hudson Bay or the Missouri River. The first documented European discovery of the Columbia River was that of Bruno de Heceta, who in 1775 sighted the river's mouth. On the advice of his officers, he did not explore it, as he was short-staffed and the current was strong. He considered it a bay, and called it "Ensenada de Asunción". Later Spanish maps based on his discovery showed a river, labeled "Rio de San Roque", or an entrance, called "Entrada de Hezeta". Following Heceta's reports, British maritime fur trader Captain John Meares searched for the river in 1788 but concluded that it did not exist. He named Cape Disappointment for the non-existent river, not realizing the cape marks the northern edge of the river's mouth. What happened next would form the basis for decades of both cooperation and dispute between British and American exploration of, and ownership claim to, the region. Royal Navy commander George Vancouver sailed past the mouth in April 1792 and observed a change in the water's color, but he accepted Meares' report and continued on his journey northward. Later that month, Vancouver encountered the American captain Robert Gray at the Strait of Juan de Fuca. Gray reported that he had seen the entrance to the Columbia and had spent nine days trying but failing to enter. On May 12, 1792, Gray returned south and crossed the Columbia Bar, becoming the first explorer to enter the river. Gray's fur trading mission had been financed by Boston merchants, who outfitted him with a private vessel named "Columbia Rediviva"; he named the river after the ship on May 18. Gray spent nine days trading near the mouth of the Columbia, then left without having gone beyond upstream. The farthest point reached was Grays Bay at the mouth of Grays River. Gray's discovery of the Columbia River was later used by the United States to support its claim to the Oregon Country, which was also claimed by Russia, Great Britain, Spain and other nations. In October 1792, Vancouver sent Lieutenant William Robert Broughton, his second-in-command, up the river. Broughton got as far as the Sandy River at the western end of the Columbia River Gorge, about upstream, sighting and naming Mount Hood. Broughton formally claimed the river, its drainage basin, and the nearby coast for Britain. In contrast, Gray had not made any formal claims on behalf of the United States. Because the Columbia was at the same latitude as the headwaters of the Missouri River, there was some speculation that Gray and Vancouver had discovered the long-sought Northwest Passage. A 1798 British map showed a dotted line connecting the Columbia with the Missouri. When the American explorers Meriwether Lewis and William Clark charted the vast, unmapped lands of the American West in their overland expedition (1803–05), they found no passage between the rivers. After crossing the Rocky Mountains, Lewis and Clark built dugout canoes and paddled down the Snake River, reaching the Columbia near the present-day Tri-Cities, Washington. 
They explored a few miles upriver, as far as Bateman Island, before heading down the Columbia, concluding their journey at the river's mouth and establishing Fort Clatsop, a short-lived outpost that they occupied for less than three months. Canadian explorer David Thompson, of the North West Company, spent the winter of 1807–08 at Kootanae House near the source of the Columbia at present-day Invermere, British Columbia. Over the next few years he explored much of the river and its northern tributaries. In 1811 he traveled down the Columbia to the Pacific Ocean, arriving at the mouth just after John Jacob Astor's Pacific Fur Company had founded Astoria. On his return to the north, Thompson explored the one remaining part of the river he had not yet seen, becoming the first European-American to travel the entire length of the river. In 1825 the Hudson's Bay Company (HBC) established Fort Vancouver on the bank of the Columbia, in what is now Vancouver, Washington, as the headquarters of the company's Columbia District, which encompassed everything west of the Rocky Mountains. John McLoughlin, a physician, was appointed Chief Factor of the Columbia District. The HBC reoriented its Columbia District operations toward the Pacific Ocean via the Columbia, which became the region's main trunk route. In the early 1840s Americans began to colonize the Oregon country in large numbers via the Oregon Trail, despite the HBC's efforts to discourage American settlement in the region. For many the final leg of the journey involved travel down the lower Columbia River to Fort Vancouver. This part of the Oregon Trail, from The Dalles to Fort Vancouver, was the trail's most treacherous stretch, which prompted the 1846 construction of the Barlow Road. In the Treaty of 1818 the United States and Britain agreed that both nations were to enjoy equal rights in Oregon Country for 10 years. By 1828, when the so-called "joint occupation" was renewed for an indefinite period, it seemed probable that the lower Columbia River would in time become the border. For years the Hudson's Bay Company successfully maintained control of the Columbia River, and American attempts to gain a foothold were fended off. In the 1830s, American religious missions were established at several locations in the lower Columbia River region. In the 1840s a mass migration of American settlers undermined British control. The Hudson's Bay Company tried to maintain dominance by shifting from the fur trade, which was in sharp decline, to exporting other goods such as salmon and lumber. Colonization schemes were attempted, but failed to match the scale of American settlement. Americans generally settled south of the Columbia, mainly in the Willamette Valley. The Hudson's Bay Company tried to establish settlements north of the river, but nearly all the British colonists moved south to the Willamette Valley. The hope that the British colonists might dilute the American flavor of the valley failed in the face of the overwhelming number of American settlers. These developments rekindled the issue of "joint occupation" and the boundary dispute. While some British interests, especially the Hudson's Bay Company, fought for a boundary along the Columbia River, the Oregon Treaty of 1846 set the boundary at the 49th parallel. The Columbia River became much of the border between the US territories of Oregon and Washington. Oregon became a US state in 1859, Washington in 1889. 
By the turn of the 20th century, the difficulty of navigating the Columbia was seen as an impediment to the economic development of the Inland Empire region east of the Cascades. The dredging and dam building that followed would permanently alter the river, disrupting its natural flow but also providing electricity, irrigation, navigability and other benefits to the region. American captain Robert Gray and British captain George Vancouver, who explored the river in 1792, proved that it was possible to cross the Columbia Bar. Many of the challenges associated with that feat remain today; even with modern engineering alterations to the mouth of the river, the strong currents and shifting sandbar make it dangerous to pass between the river and the Pacific Ocean. The use of steamboats along the river, beginning with the British "Beaver" in 1836 and followed by American vessels in 1850, contributed to the rapid settlement and economic development of the region. Steamboats operated in several distinct stretches of the river: on its lower reaches, from the Pacific Ocean to Cascades Rapids; from the Cascades to Celilo Falls; from Celilo to the confluence with the Snake River; on the Wenatchee Reach of eastern Washington; on British Columbia's Arrow Lakes; and on tributaries like the Willamette, the Snake and Kootenay Lake. The boats, initially powered by burning wood, carried passengers and freight throughout the region for many years. Early railroads served to connect steamboat lines interrupted by waterfalls on the river's lower reaches. In the 1880s, railroads maintained by companies such as the Oregon Railroad and Navigation Company began to supplement steamboat operations as the major transportation links along the river. As early as 1881, industrialists proposed altering the natural channel of the Columbia to improve navigation. Changes to the river over the years have included the construction of jetties at the river's mouth, dredging, and the construction of canals and navigation locks. Today, ocean freighters can travel upriver as far as Portland and Vancouver, and barges can reach as far inland as Lewiston, Idaho. The shifting Columbia Bar makes passage between the river and the Pacific Ocean difficult and dangerous, and numerous rapids along the river hinder navigation. "Pacific Graveyard," a 1964 book by James A. Gibbs, describes the many shipwrecks near the mouth of the Columbia. Jetties, first constructed in 1886, extend the river's channel into the ocean. Strong currents and the shifting sandbar remain a threat to ships entering the river and necessitate continuous maintenance of the jetties. In 1891 the Columbia was dredged to enhance shipping. The channel between the ocean and Portland and Vancouver was deepened from to . "The Columbian" called for the channel to be deepened to as early as 1905, but that depth was not attained until 1976. Cascade Locks and Canal were first constructed in 1896 around the Cascades Rapids, enabling boats to travel safely through the Columbia River Gorge. The Celilo Canal, bypassing Celilo Falls, opened to river traffic in 1915. In the mid-20th century, the construction of dams along the length of the river submerged the rapids beneath a series of reservoirs. An extensive system of locks allowed ships and barges to pass easily from one reservoir to the next. A navigation channel reaching to Lewiston, Idaho, along the Columbia and Snake rivers, was completed in 1975. Among the main commodities are wheat and other grains, mainly for export. 
As of 2016, the Columbia ranked third, behind the Mississippi and Paraná rivers, among the world's largest export corridors for grain. The 1980 eruption of Mount St. Helens caused mudslides in the area, which reduced the Columbia's depth by for a stretch, disrupting Portland's economy. Efforts to maintain and improve the navigation channel have continued to the present day. In 1990 a new round of studies examined the possibility of further dredging on the lower Columbia. The plans were controversial from the start because of economic and environmental concerns. In 1999, Congress authorized deepening the channel between Portland and Astoria from , which would make it possible for large container and grain ships to reach Portland and Vancouver. The project has met opposition because of concerns about stirring up toxic sediment on the riverbed. Portland-based Northwest Environmental Advocates brought a lawsuit against the Army Corps of Engineers, but it was rejected by the Ninth U.S. Circuit Court of Appeals in August 2006. The project includes measures to mitigate environmental damage; for instance, the US Army Corps of Engineers must restore 12 times the area of wetland damaged by the project. In early 2006, the Corps spilled of hydraulic oil into the Columbia, drawing further criticism from environmental organizations. Work on the project began in 2005 and concluded in 2010. The project's cost was estimated at $150 million. The federal government paid 65 percent, Oregon and Washington paid $27 million each, and six local ports also contributed to the cost. In 1902, the United States Bureau of Reclamation was established to aid in the economic development of arid western states. One of its major undertakings was building Grand Coulee Dam to provide irrigation for the of the Columbia Basin Project in central Washington. With the onset of World War II, the focus of dam construction shifted to production of hydroelectricity. Irrigation efforts resumed after the war. River development occurred within the structure of the 1909 International Boundary Waters Treaty between the US and Canada. The United States Congress passed the Rivers and Harbors Act of 1925, which directed the Army Corps of Engineers and the Federal Power Commission to explore the development of the nation's rivers. This prompted agencies to conduct the first formal financial analysis of hydroelectric development; the reports produced by various agencies were presented in House Document 308. Those reports, and subsequent related reports, are referred to as 308 Reports. In the late 1920s, political forces in the Northwestern United States generally favored private development of hydroelectric dams along the Columbia. But the overwhelming victories of gubernatorial candidate George W. Joseph in the 1930 Republican primary, and later of his law partner Julius Meier, were understood to demonstrate strong public support for public ownership of dams. In 1933, President Franklin D. Roosevelt signed a bill that enabled the construction of the Bonneville and Grand Coulee dams as public works projects. The legislation was attributed to the efforts of Oregon Senator Charles McNary, Washington Senator Clarence Dill, and Oregon Congressman Charles Martin, among others. In 1948 floods swept through the Columbia watershed, destroying Vanport, then the second largest city in Oregon, and impacting cities as far north as Trail, British Columbia. 
The flooding prompted the United States Congress to pass the Flood Control Act of 1950, authorizing the federal development of additional dams and other flood control mechanisms. By that time local communities had become wary of federal hydroelectric projects, and sought local control of new developments; a public utility district in Grant County, Washington, ultimately began construction of the dam at Priest Rapids. In the 1960s, the United States and Canada signed the Columbia River Treaty, which focused on flood control and the maximization of downstream power generation. Canada agreed to build dams and provide reservoir storage, and the United States agreed to deliver to Canada one-half of the increase in US downstream power benefits as estimated five years in advance. Canada's obligation was met by building three dams (two on the Columbia, and one on the Duncan River), the last of which was completed in 1973. Today the main stem of the Columbia River has 14 dams, of which three are in Canada and 11 in the US. Four mainstem dams and four lower Snake River dams contain navigation locks to allow ship and barge passage from the ocean as far as Lewiston, Idaho. The river system as a whole has more than 400 dams for hydroelectricity and irrigation. The dams address a variety of demands, including flood control, navigation, stream flow regulation, storage and delivery of stored waters, reclamation of public lands and Indian reservations, and the generation of hydroelectric power. The larger US dams are owned and operated by the federal government (some by the Army Corps of Engineers and some by the Bureau of Reclamation), while the smaller dams are operated by public utility districts and private power companies. The federally operated system is known as the Federal Columbia River Power System, which includes 31 dams on the Columbia and its tributaries. The system has altered the seasonal flow of the river in order to meet higher electricity demands during the winter. At the beginning of the 20th century, roughly 75 percent of the Columbia's flow occurred in the summer, between April and September. By 1980, the summer proportion had been lowered to about 50 percent, essentially eliminating the seasonal pattern. The installation of dams dramatically altered the landscape and ecosystem of the river. At one time, the Columbia was one of the top salmon-producing river systems in the world. Fishing at previously active sites, such as Celilo Falls in the eastern Columbia River Gorge, has declined sharply along the Columbia over the last century, and salmon populations have been dramatically reduced. Fish ladders have been installed at some dam sites to help the fish journey to spawning waters. Chief Joseph Dam has no fish ladders and completely blocks fish migration to the upper half of the Columbia River system. The Bureau of Reclamation's Columbia Basin Project focused on the generally dry region of central Washington known as the Columbia Basin, which features rich loess soil. Several groups developed competing proposals, and in 1933, President Franklin D. Roosevelt authorized the Columbia Basin Project. The Grand Coulee Dam was the project's central component; upon completion, it pumped water up from the Columbia to fill the formerly dry Grand Coulee, forming Banks Lake. 
By 1935, the intended height of the dam was increased from a range between to , a height that would extend the lake impounded by the dam all the way to the Canada–US border; the project had grown from a local New Deal relief measure to a major national project. The project's initial purpose was irrigation, but the onset of World War II created a high demand for electricity, mainly for aluminum production and for the development of nuclear weapons at the Hanford Site. Irrigation began in 1951. The project provides water to more than of fertile but arid land in central Washington, transforming the region into a major agricultural center. Important crops include orchard fruit, potatoes, alfalfa, mint, beans, beets, and wine grapes. Since 1750, the Columbia has experienced six multi-year droughts. The longest, lasting 12 years in the mid-19th century, reduced the river's flow to 20 percent below average. Scientists have expressed concern that a similar drought would have grave consequences in a region so dependent on the Columbia. In 1992–1993, a lesser drought affected farmers, hydroelectric power producers, shippers, and wildlife managers. Many farmers in central Washington build dams on their property for irrigation and to control frost on their crops. The Washington Department of Ecology, using new techniques involving aerial photographs, estimated there may be as many as a hundred such dams in the area, most of which are illegal. Six such dams have failed in recent years, causing hundreds of thousands of dollars of damage to crops and public roads. Fourteen farms in the area have gone through the permitting process to build such dams legally. The Columbia's heavy flow and its large elevation drop over a short distance give it tremendous capacity for hydroelectricity generation. In comparison, the Mississippi drops less than . The Columbia alone possesses one-third of the United States' hydroelectric potential. In 2012, the river and its tributaries accounted for 29 GW of hydroelectric generating capacity, contributing 44 percent of the total hydroelectric generation in the nation. The largest of the 150 hydroelectric projects, the Grand Coulee Dam and the Chief Joseph Dam, are also the largest in the United States. As of 2017, Grand Coulee is the fifth largest hydroelectric plant in the world. Inexpensive hydropower supported the location of a large aluminum industry in the region, because reducing aluminum from bauxite requires large amounts of electricity. Until 2000, the Northwestern United States produced up to 17 percent of the world's aluminum and 40 percent of the aluminum produced in the United States. The commoditization of power in the early 21st century, coupled with drought that reduced the generation capacity of the river, damaged the industry, and by 2001 Columbia River aluminum producers had idled 80 percent of their production capacity. By 2003, the entire United States produced only 15 percent of the world's aluminum, and many smelters along the Columbia had gone dormant or out of business. Power remains relatively inexpensive along the Columbia, and since the mid-2000s several global enterprises have moved server farm operations into the area to avail themselves of cheap power. Downriver of Grand Coulee, each dam's reservoir is closely regulated by the Bonneville Power Administration (BPA), the U.S. Army Corps of Engineers, and various Washington public utility districts to ensure flow, flood control, and power generation objectives are met. 
Increasingly, hydro-power operations are required to meet standards under the US Endangered Species Act and other agreements to manage operations to minimize impacts on salmon and other fish, and some conservation and fishing groups support removing four dams on the lower Snake River, the largest tributary of the Columbia. In 1941, the BPA hired Oklahoma folksinger Woody Guthrie to write songs for a documentary film promoting the benefits of hydropower. In the month he spent traveling the region, Guthrie wrote 26 songs, which have become an important part of the cultural history of the region. The Columbia supports several species of anadromous fish that migrate between the Pacific Ocean and fresh water tributaries of the river. Sockeye salmon, Coho and Chinook (also known as "king") salmon, and steelhead, all of the genus "Oncorhynchus", are ocean fish that migrate up the rivers at the end of their life cycles to spawn. White sturgeon, which take 15 to 25 years to mature, typically migrate between the ocean and the upstream habitat several times during their lives. Salmon populations declined dramatically after the establishment of canneries in 1867. In 1879 it was reported that 545,450 salmon, with an average weight of , were caught in a recent season and mainly canned for export to England. A can weighing could be sold for 8d or 9d. By 1908, there was widespread concern about the decline of salmon and sturgeon. In that year, the people of Oregon passed two laws under their newly instituted program of citizens' initiatives limiting fishing on the Columbia and other rivers. Then in 1948, another initiative banned the use of seine nets (devices already used by Native Americans and refined by later settlers) altogether. Dams interrupt the migration of anadromous fish. Salmon and steelhead return to the streams in which they were born to spawn; where dams prevent their return, entire populations of salmon die. Some of the Columbia and Snake River dams employ fish ladders, which are effective to varying degrees at allowing these fish to travel upstream. Another problem exists for the juvenile salmon headed downstream to the ocean. Previously, this journey would have taken two to three weeks. With river currents slowed by the dams, and the Columbia converted from wild river to a series of slackwater pools, the journey can take several months, which increases the mortality rate. In some cases, the Army Corps of Engineers transports juvenile fish downstream by truck or river barge. The Chief Joseph Dam and several dams on the Columbia's tributaries entirely block migration, and there are no migrating fish on the river above these dams. Sturgeon have different migration habits and can survive without ever visiting the ocean. In many upstream areas cut off from the ocean by dams, sturgeon simply live upstream of the dam. Not all fish have suffered from the modifications to the river; the northern pikeminnow (formerly known as the "squawfish") thrives in the warmer, slower water created by the dams. Research in the mid-1980s found that juvenile salmon were suffering substantially from the predatory pikeminnow, and in 1990, in the interest of protecting salmon, a "bounty" program was established to reward anglers for catching pikeminnow. In 1994, the salmon catch was smaller than usual in the rivers of Oregon, Washington, and British Columbia, causing concern among commercial fishermen, government agencies, and tribal leaders. 
US government intervention, to which the states of Alaska, Idaho, and Oregon objected, included an 11-day closure of an Alaska fishery. In April 1994 the Pacific Fisheries Management Council unanimously approved the strictest regulations in 18 years, banning all commercial salmon fishing for that year from Cape Falcon north to the Canada–US border. In the winter of 1994, the return of coho salmon far exceeded expectations, which was attributed in part to the fishing ban. Also in 1994, United States Secretary of the Interior Bruce Babbitt first proposed the removal of several Pacific Northwest dams because of their impact on salmon spawning. The Northwest Power Planning Council approved a plan that provided more water for fish and less for electricity, irrigation, and transportation. Environmental advocates have called for the removal of certain dams in the Columbia system in the years since. Of the 227 major dams in the Columbia River drainage basin, the four Washington dams on the lower Snake River are often identified for removal, for example in an ongoing lawsuit concerning a Bush administration plan for salmon recovery. These dams and reservoirs limit the recovery of upriver salmon runs to Idaho's Salmon and Clearwater rivers. Historically, the Snake produced over 1.5 million spring and summer Chinook salmon, a number that has dwindled to several thousand in recent years. Idaho Power Company's Hells Canyon dams have no fish ladders (and do not pass juvenile salmon downstream), and thus allow no steelhead or salmon to migrate above Hells Canyon. In 2007, the destruction of the Marmot Dam on the Sandy River was the first dam removal in the system. Other Columbia Basin dams that have been removed include Condit Dam on Washington's White Salmon River and the Milltown Dam on the Clark Fork in Montana. In southeastern Washington, a stretch of the river passes through the Hanford Site, established in 1943 as part of the Manhattan Project. The site served as a plutonium production complex, with nine nuclear reactors and related facilities along the banks of the river. From 1944 to 1971, pump systems drew cooling water from the river and, after treating this water for use by the reactors, returned it to the river. Before being released back into the river, the used water was held in large tanks known as retention basins for up to six hours. Longer-lived isotopes were not affected by this retention, and several terabecquerels entered the river every day. By 1957, the eight plutonium production reactors at Hanford dumped a daily average of 50,000 curies of radioactive material into the Columbia. These releases were kept secret by the federal government until the release of declassified documents in the late 1980s. Radiation was measured downstream as far west as the Washington and Oregon coasts. The nuclear reactors were decommissioned at the end of the Cold War, and the Hanford site is the focus of one of the world's largest environmental cleanup efforts, managed by the Department of Energy under the oversight of the Washington Department of Ecology and the Environmental Protection Agency. Nearby aquifers contain an estimated 270 billion US gallons (1 billion m³) of groundwater contaminated by high-level nuclear waste that has leaked out of Hanford's underground storage tanks. An estimated 1 million US gallons (3,785 m³) of highly radioactive waste is traveling through groundwater toward the Columbia River. This waste is expected to reach the river in 12 to 50 years if cleanup does not proceed on schedule. 
In addition to concerns about nuclear waste, numerous other pollutants are found in the river. These include chemical pesticides, bacteria, arsenic, dioxins, and polychlorinated biphenyls (PCBs). Studies have also found significant levels of toxins in fish and the waters they inhabit within the basin. Accumulation of toxins in fish threatens the survival of fish species, and human consumption of these fish can lead to health problems. Water quality is also an important factor in the survival of other wildlife and plants that grow in the Columbia River drainage basin. The states, Indian tribes, and federal government are all engaged in efforts to restore and improve the water, land, and air quality of the Columbia River drainage basin and have committed to work together to enhance and accomplish critical ecosystem restoration efforts. A number of cleanup efforts are currently underway, including Superfund projects at Portland Harbor, Hanford, and Lake Roosevelt. Timber industry activity further contaminates river water, for example in the increased sediment runoff that results from clearcuts. The Northwest Forest Plan, a piece of federal legislation from 1994, mandated that timber companies consider the environmental impacts of their practices on rivers like the Columbia. On July 1, 2003, Christopher Swain of Portland, Oregon, became the first person to swim the Columbia River's entire length, in an effort to raise public awareness about the river's environmental health. Both natural and anthropogenic processes are involved in the cycling of nutrients in the Columbia River basin. Natural processes in the system include estuarine mixing of fresh and ocean waters, and climate variability patterns such as the Pacific Decadal Oscillation and the El Niño–Southern Oscillation (both climatic cycles that affect the amount of regional snowpack and river discharge). Natural sources of nutrients in the Columbia River include weathering, leaf litter, salmon carcasses, runoff from its tributaries, and ocean estuary exchange. Major anthropogenic impacts on nutrients in the basin are due to fertilizers from agriculture, sewage systems, logging, and the construction of dams. Nutrient dynamics vary across the river basin, from the headwaters to the main river and dams, and finally to the Columbia River estuary and ocean. Upstream in the headwaters, salmon runs are the main source of nutrients. Dams along the river impact nutrient cycling by increasing residence time of nutrients, and reducing the transport of silicate to the estuary, which directly impacts diatoms, a type of phytoplankton. The dams are also a barrier to salmon migration, and can increase the amount of methane locally produced. The Columbia River estuary exports high rates of nutrients into the Pacific Ocean, with the exception of nitrogen, which is delivered into the estuary by ocean upwelling sources. Most of the Columbia's drainage basin (which, at , is about the size of France) lies roughly between the Rocky Mountains on the east and the Cascade Mountains on the west. In the United States and Canada the term watershed is often used to mean drainage basin. The term "Columbia Basin" is used to refer not only to the entire drainage basin but also to subsets of the river's full watershed, such as the relatively flat and unforested area in eastern Washington bounded by the Cascades, the Rocky Mountains, and the Blue Mountains. 
Within the watershed are diverse landforms including mountains, arid plateaus, river valleys, rolling uplands, and deep gorges. Grand Teton National Park lies in the watershed, as do parts of Yellowstone National Park, Glacier National Park, Mount Rainier National Park, and North Cascades National Park. Canadian national parks in the watershed include Kootenay National Park, Yoho National Park, Glacier National Park, and Mount Revelstoke National Park. Hells Canyon, the deepest gorge in North America, and the Columbia Gorge are in the watershed. Vegetation varies widely, ranging from western hemlock and western redcedar in the moist regions to sagebrush in the arid regions. The watershed provides habitat for 609 known fish and wildlife species, including the bull trout, bald eagle, gray wolf, grizzly bear, and Canada lynx. The World Wide Fund for Nature (WWF) divides the waters of the Columbia and its tributaries into three freshwater ecoregions, naming them Columbia Glaciated, Columbia Unglaciated, and Upper Snake. The Columbia Glaciated ecoregion, making up about a third of the total watershed, lies in the north and was covered with ice sheets during the Pleistocene. The ecoregion includes the mainstem Columbia north of the Snake River and tributaries such as the Yakima, Okanagan, Pend Oreille, Clark Fork, and Kootenay rivers. The effects of glaciation include a number of large lakes and a relatively low diversity of freshwater fish. The Upper Snake ecoregion is defined as the Snake River watershed above Shoshone Falls, which totally blocks fish migration. This region has 14 species of fish, many of which are endemic. The Columbia Unglaciated ecoregion makes up the rest of the watershed. It includes the mainstem Columbia below the Snake River and tributaries such as the Salmon, John Day, Deschutes, and lower Snake rivers. Of the three ecoregions it is the richest in terms of freshwater species diversity. There are 35 species of fish, of which four are endemic. There are also high levels of mollusk endemism. In 2016, over eight million people lived within the Columbia's drainage basin. Of this total, about 3.5 million people lived in Oregon, 2.1 million in Washington, 1.7 million in Idaho, half a million in British Columbia, and 0.4 million in Montana. Population in the watershed has been rising for many decades and is projected to rise to about 10 million by 2030. The highest population densities are found west of the Cascade Mountains along the I-5 corridor, especially in the Portland-Vancouver urban area. High densities are also found around Spokane, Washington, and Boise, Idaho. Although much of the watershed is rural and sparsely populated, areas with recreational and scenic values are growing rapidly. The central Oregon county of Deschutes is the fastest-growing in the state. Populations have also been growing just east of the Cascades in central Washington around the city of Yakima and the Tri-Cities area. Projections for the coming decades assume growth throughout the watershed, including the interior. The Canadian part of the Okanagan subbasin is also growing rapidly. Climate varies greatly from place to place within the watershed. Elevation ranges from sea level at the river mouth to more than in the mountains, and temperatures vary with elevation. The highest peak is Mount Rainier, at . High elevations have cold winters and short cool summers; interior regions are subject to great temperature variability and severe droughts. 
Over some of the watershed, especially west of the Cascade Mountains, precipitation maximums occur in winter, when Pacific storms come ashore. Atmospheric conditions block the flow of moisture in summer, which is generally dry except for occasional thunderstorms in the interior. In some of the eastern parts of the watershed, especially shrub-steppe regions with continental climate patterns, precipitation maximums occur in early summer. Annual precipitation varies from more than a year in the Cascades to less than in the interior. Much of the watershed gets less than a year. Several major North American drainage basins and many minor ones share a common border with the Columbia River's drainage basin. To the east, in northern Wyoming and Montana, the Continental Divide separates the Columbia watershed from the Mississippi-Missouri watershed, which empties into the Gulf of Mexico. To the northeast, mostly along the southern border between British Columbia and Alberta, the Continental Divide separates the Columbia watershed from the Nelson-Lake Winnipeg-Saskatchewan watershed, which empties into Hudson Bay. The Mississippi and Nelson watersheds are separated by the Laurentian Divide, which meets the Continental Divide at Triple Divide Peak near the headwaters of the Columbia's Flathead River tributary. This point marks the meeting of three of North America's main drainage patterns: to the Pacific Ocean, to Hudson Bay, and to the Atlantic Ocean via the Gulf of Mexico. Further north along the Continental Divide, a short portion of the combined Continental and Laurentian divides separates the Columbia watershed from the Mackenzie-Slave-Athabasca watershed, which empties into the Arctic Ocean. The Nelson and Mackenzie watersheds are separated by a divide between streams flowing to the Arctic Ocean and those of the Hudson Bay watershed. This divide meets the Continental Divide at Snow Dome (also known as Dome), near the northernmost bend of the Columbia River. To the southeast, in western Wyoming, another divide separates the Columbia watershed from the Colorado–Green watershed, which empties into the Gulf of California. The Columbia, Colorado, and Mississippi watersheds meet at Three Waters Mountain in the Wind River Range. To the south, in Oregon, Nevada, Utah, Idaho, and Wyoming, the Columbia watershed is divided from the Great Basin, whose several watersheds are endorheic, not emptying into any ocean but rather drying up or sinking into sumps. Great Basin watersheds that share a border with the Columbia watershed include Harney Basin, Humboldt River, and Great Salt Lake. The associated triple divide points are Commissary Ridge North, Wyoming, and Sproats Meadow Northwest, Oregon. To the north, mostly in British Columbia, the Columbia watershed borders the Fraser River watershed. To the west and southwest the Columbia watershed borders a number of smaller watersheds that drain to the Pacific Ocean, such as the Klamath River in Oregon and California and the Puget Sound Basin in Washington. The Columbia receives more than 60 significant tributaries. The four largest that empty directly into the Columbia (measured either by discharge or by size of watershed) are the Snake River (mostly in Idaho), the Willamette River (in northwest Oregon), the Kootenay River (mostly in British Columbia), and the Pend Oreille River (mostly in northern Washington and Idaho, also known as the lower part of the Clark Fork). Each of these four averages more than and drains an area of more than . 
The Snake is by far the largest tributary. Its watershed of is larger than the state of Idaho. Its discharge is roughly a third of the Columbia's at the rivers' confluence, but compared to the Columbia upstream of the confluence, the Snake is longer (113%) and has a larger drainage basin (104%). The Pend Oreille River system (including its main tributaries, the Clark Fork and Flathead rivers) is also similar in size to the Columbia at their confluence. Compared to the Columbia River above the two rivers' confluence, the Pend Oreille-Clark-Flathead is nearly as long (about 86%), its basin about three-fourths as large (76%), and its discharge over a third (37%). Cameroon Cameroon, officially the Republic of Cameroon, is a country in Central Africa. It is bordered by Nigeria to the west and north; Chad to the northeast; the Central African Republic to the east; and Equatorial Guinea, Gabon and the Republic of the Congo to the south. Cameroon's coastline lies on the Bight of Biafra, part of the Gulf of Guinea and the Atlantic Ocean. French and English are the official languages of Cameroon. The country is often referred to as "Africa in miniature" for its geological and cultural diversity. Natural features include beaches, deserts, mountains, rainforests, and savannas. The highest point at almost is Mount Cameroon in the Southwest Region of the country, and the largest cities by population are Douala on the Wouri River, its economic capital and main seaport, Yaoundé, its political capital, and Garoua. The country is well known for its native styles of music, particularly makossa and bikutsi, and for its successful national football team. Early inhabitants of the territory included the Sao civilisation around Lake Chad and the Baka hunter-gatherers in the southeastern rainforest. Portuguese explorers reached the coast in the 15th century and named the area "Rio dos Camarões" ("Shrimp River"), which became "Cameroon" in English. Fulani soldiers founded the Adamawa Emirate in the north in the 19th century, and various ethnic groups of the west and northwest established powerful chiefdoms and fondoms. Cameroon became a German colony in 1884 known as Kamerun. After World War I, the territory was divided between France and the United Kingdom as League of Nations mandates. The Union des Populations du Cameroun (UPC) political party advocated independence, but was outlawed by France in the 1950s, leading to the Cameroonian Independence War fought between French and UPC militant forces until early 1971. In 1960, the French-administered part of Cameroon became independent as the Republic of Cameroun under President Ahmadou Ahidjo. The southern part of British Cameroons federated with it in 1961 to form the Federal Republic of Cameroon. The federation was abandoned in 1972. The country was renamed the United Republic of Cameroon in 1972 and the Republic of Cameroon in 1984. Cameroon experiences relatively high political and social stability. This has permitted the development of agriculture, roads, railways and large petroleum and timber industries. Large numbers of Cameroonians live as subsistence farmers. Since 1982 Paul Biya has been President, governing with his Cameroon People's Democratic Movement party. The country has experienced tensions in its English-speaking territories. Politicians in the English-speaking regions have advocated for greater decentralisation and even complete separation or independence (as in the Southern Cameroons National Council) from Cameroon. 
The territory of present-day Cameroon was first settled during the Neolithic Era. The longest continuous inhabitants are groups such as the Baka (Pygmies). From here, Bantu migrations into eastern, southern, and central Africa are believed to have originated about 2,000 years ago. The Sao culture arose around Lake Chad c. AD 500 and gave way to the Kanem and its successor state, the Bornu Empire. Kingdoms, fondoms, and chiefdoms arose in the west. Portuguese sailors reached the coast in 1472. They noted an abundance of the ghost shrimp "Lepidophthalmus turneranus" in the Wouri River and named it "Rio dos Camarões" ("Shrimp River"), which became "Cameroon" in English. Over the following few centuries, European interests regularised trade with the coastal peoples, and Christian missionaries pushed inland. In the early 19th century, Modibo Adama led Fulani soldiers on a jihad in the north against non-Muslim and partially Muslim peoples and established the Adamawa Emirate. Settled peoples who fled the Fulani caused a major redistribution of population. The northern part of Cameroon was an important part of the Arab slave trade network. The Bamum tribe have a writing system, known as Bamum script or Shu Mom. The script was given to them by Sultan Ibrahim Njoya in 1896, and is taught in Cameroon by the Bamum Scripts and Archives Project. Germany began to establish roots in Cameroon in 1868, when the Woermann Company of Hamburg built a warehouse on the estuary of the Wouri River. Later Gustav Nachtigal made a treaty with one of the local kings to annex the region for the German emperor. The German Empire claimed the territory as the colony of Kamerun in 1884 and began a steady push inland. The Germans met resistance from the native people, who did not want them to establish themselves on this land. Under German influence, commercial companies were left to regulate local administration. These concessions relied on the forced labour of Africans to make a profit. The labour was used on banana, rubber, palm oil, and cocoa plantations. They initiated projects to improve the colony's infrastructure, relying on a harsh system of forced labour, which was much criticised by the other colonial powers. With the defeat of Germany in World War I, Kamerun became a League of Nations mandate territory and was split into French Cameroons and British Cameroons in 1919. France integrated the economy of Cameroon with that of France and improved the infrastructure with capital investments and skilled workers, modifying the system of forced labour. The British administered their territory from neighbouring Nigeria. Natives complained that this made them a neglected "colony of a colony". Nigerian migrant workers flocked to Southern Cameroons, ending forced labour altogether but angering the local natives, who felt swamped. The League of Nations mandates were converted into United Nations Trusteeships in 1946, and the question of independence became a pressing issue in French Cameroun. France outlawed the most radical political party, the Union des Populations du Cameroun (UPC), on 13 July 1955. This prompted a long guerrilla war and the assassination of the party's leader, Ruben Um Nyobé. In the more peaceful British Cameroons, the question was whether to reunify with French Cameroun or join Nigeria. On 1 January 1960, French Cameroun gained independence from France under President Ahmadou Ahidjo. 
On 1 October 1961, the formerly British Southern Cameroons united with French Cameroun to form the Federal Republic of Cameroon. Ahidjo used the ongoing war with the UPC to concentrate power in the presidency, continuing with this even after the suppression of the UPC in 1971. His political party, the Cameroon National Union (CNU), became the sole legal political party on 1 September 1966, and in 1972 the federal system of government was abolished in favour of a United Republic of Cameroon, headed from Yaoundé. Ahidjo pursued an economic policy of planned liberalism, prioritising cash crops and petroleum development. The government used oil money to create a national cash reserve, pay farmers, and finance major development projects; however, many initiatives failed when Ahidjo appointed unqualified allies to direct them. Ahidjo stepped down on 4 November 1982 and left power to his constitutional successor, Paul Biya. However, Ahidjo remained in control of the CNU and tried to run the country from behind the scenes until Biya and his allies pressured him into resigning. Biya began his administration by moving toward a more democratic government, but a failed coup d'état nudged him toward the leadership style of his predecessor. An economic crisis took hold from the mid-1980s to the late 1990s as a result of international economic conditions, drought, falling petroleum prices, and years of corruption, mismanagement, and cronyism. Cameroon turned to foreign aid, cut government spending, and privatised industries. With the reintroduction of multi-party politics in December 1990, pressure groups in the former British Southern Cameroons called for greater autonomy, and the Southern Cameroons National Council advocated complete secession as the Republic of Ambazonia. In June 2006, talks resolved a territorial dispute over the Bakassi peninsula. The talks involved President Paul Biya of Cameroon, President Olusegun Obasanjo of Nigeria and UN Secretary General Kofi Annan, and resulted in Cameroonian control of the oil-rich peninsula. The northern portion of the territory was formally handed over to the Cameroonian government in August 2006, and the remainder of the peninsula was left to Cameroon two years later, in 2008. In February 2008, Cameroon experienced its worst violence in 15 years when a transport union strike in Douala escalated into violent protests in 31 municipal areas. In May 2014, in the wake of the Chibok schoolgirl kidnapping, presidents Paul Biya of Cameroon and Idriss Déby of Chad announced that they were waging war on Boko Haram and deployed troops to the Nigerian border.
Since November 2016, Cameroon has been experiencing a series of protests in the English-speaking regions of the country, the Northwest Region and Southwest Region. People were killed and hundreds jailed as a result of these protests. In 2017, Paul Biya's government blocked access to the Internet for months to prevent news of a civil war reaching the wider world. The French-speaking side of Cameroon is trying to take over and eliminate the western English-speaking population by removing common law courts and the legal system. This is contrary to the agreement Paul Biya signed with western Cameroon. Hence, in 2018 there is an active, bloody civil war in the western portion of Cameroon to drive out English-speaking people and seize their rights to offshore oil, on which 86% of Cameroon's GNP depends. Churches, schools, and police stations have been violently attacked by Paul Biya's government without provocation. Entire villages have been burned to the ground in an effort to eliminate the Anglophones. Yet relatively little to nothing has been reported in the Western media. Western Cameroonians reacted by forming militias to defend themselves; these militias desperately need supplies. Paul Biya secured $286 billion in 2018 from the International Monetary Fund (IMF), based in Washington, D.C., to sustain this assault on Western Cameroon, while oil companies such as Royal Dutch Shell exited from drilling platforms. Every day there are gun battles between Biya's attacking military and the defending Western Anglophone forces. Food shipments to Western Cameroon have been interrupted while locals fear their water supplies will be poisoned or limited. In the Northwest region, the roads between Bamenda and surrounding towns have a series of roadblocks and inspection points maintained by Paul Biya's military forces. The President of Cameroon is elected and creates policy, administers government agencies, commands the armed forces, negotiates and ratifies treaties, and declares a state of emergency. The president appoints government officials at all levels, from the prime minister (considered the official head of government) to the provincial governors and divisional officers. The president is selected by popular vote every seven years. There have been two presidents since the independence of Cameroon. The National Assembly makes legislation. The body consists of 180 members who are elected for five-year terms and meet three times per year. Laws are passed on a majority vote. Rarely has the assembly changed or blocked legislation proposed by the president. The 1996 constitution provides for a second house of parliament, the 100-seat Senate, which was established in April 2013 and is headed by a President of the Senate, who is the constitutional successor in case of untimely vacancy of the Presidency of the Republic. The government recognises the authority of traditional chiefs, fons, and lamibe to govern at the local level and to resolve disputes as long as such rulings do not conflict with national law. Cameroon's legal system is largely based on French civil law with common law influences. Although nominally independent, the judiciary falls under the authority of the executive's Ministry of Justice. The president appoints judges at all levels. The judiciary is officially divided into tribunals, the court of appeal, and the supreme court. 
The National Assembly elects the members of a nine-member High Court of Justice that judges high-ranking members of government in the event they are charged with high treason or harming national security. Cameroon is viewed as rife with corruption at all levels of government. In 1997, Cameroon established anti-corruption bureaus in 29 ministries, but only 25% became operational, and in 2012, Transparency International placed Cameroon at number 144 on a list of 176 countries ranked from least to most corrupt. On 18 January 2006, Biya initiated an anti-corruption drive under the direction of the National Anti-Corruption Observatory. There are several areas with high corruption risk in Cameroon, for instance customs, the public health sector, and public procurement. Corruption has worsened despite the existing anti-corruption bureaus; Transparency International ranked Cameroon 153rd on a list of 180 countries in 2017. Human rights organisations accuse police and military forces of mistreating and even torturing criminal suspects, ethnic minorities, homosexuals, and political activists. Prisons are overcrowded with little access to adequate food and medical facilities, and prisons run by traditional rulers in the north are charged with holding political opponents at the behest of the government. However, since the first decade of the 21st century, an increasing number of police and gendarmes have been prosecuted for improper conduct. A video showing Cameroonian soldiers executing blindfolded women and children emerged in 2018. President Biya's Cameroon People's Democratic Movement (CPDM) was the only legal political party until December 1990. Numerous regional political groups have since formed. The primary opposition is the Social Democratic Front (SDF), based largely in the Anglophone region of the country and headed by John Fru Ndi. Biya and his party have maintained control of the presidency and the National Assembly in national elections, which rivals contend were unfair. Human rights organisations allege that the government suppresses the freedoms of opposition groups by preventing demonstrations, disrupting meetings, and arresting opposition leaders and journalists. In particular, English-speaking people are discriminated against; protests often escalate into violent clashes and killings. In 2017, President Biya shut down the Internet in the English-speaking region for 94 days, hampering five million people, including Silicon Mountain startups. Freedom House ranks Cameroon as "not free" in terms of political rights and civil liberties. The last parliamentary elections were held on 30 September 2013. Cameroon is a member of both the Commonwealth of Nations and La Francophonie. Its foreign policy closely follows that of its main ally, France (one of its former colonial rulers). Cameroon relies heavily on France for its defence, although military spending is high in comparison to other sectors of government. President Biya has engaged in a decades-long clash with the government of Nigeria over possession of the oil-rich Bakassi peninsula. Cameroon and Nigeria share a 1,000-mile border and have disputed the sovereignty of the Bakassi peninsula. In 1994 Cameroon petitioned the International Court of Justice to resolve the dispute. The two countries attempted to establish a cease-fire in 1996; however, fighting continued for years. In 2002, the ICJ ruled that the Anglo-German Agreement of 1913 gave sovereignty to Cameroon. 
The ruling called for a withdrawal by both countries and denied the request by Cameroon for compensation due to Nigeria's long-term occupation. By 2004, Nigeria had failed to meet the deadline to hand over the peninsula. A UN-mediated summit in June 2006 facilitated an agreement for Nigeria to withdraw from the region, and both leaders signed the Greentree Agreement. The withdrawal and handover of control was completed by August 2006. The constitution divides Cameroon into 10 semi-autonomous regions, each under the administration of an elected Regional Council. Each region is headed by a presidentially appointed governor. These leaders are charged with implementing the will of the president, reporting on the general mood and conditions of the regions, administering the civil service, keeping the peace, and overseeing the heads of the smaller administrative units. Governors have broad powers: they may order propaganda in their area and call in the army, gendarmes, and police. All local government officials are employees of the central government's Ministry of Territorial Administration, from which local governments also get most of their budgets. The regions are subdivided into 58 divisions. These are headed by presidentially appointed divisional officers. The divisions are further split into sub-divisions, headed by assistant divisional officers. The districts, administered by district heads, are the smallest administrative units. The three northernmost regions are the Far North, North, and Adamawa. Directly south of them are the Centre and East. The South Region lies on the Gulf of Guinea and the southern border. Cameroon's western region is split into four smaller regions: the Littoral and Southwest regions are on the coast, and the Northwest and West regions are in the western grassfields. In 2013, the total adult literacy rate of Cameroon was estimated to be 71.3%. Among youths aged 15–24, the literacy rate was 85.4% for males and 76.4% for females. Most children have access to state-run schools that are cheaper than private and religious facilities. The educational system is a mixture of British and French precedents with most instruction in English or French. Cameroon has one of the highest school attendance rates in Africa. Girls attend school less regularly than boys do because of cultural attitudes, domestic duties, early marriage, pregnancy, and sexual harassment. Although attendance rates are higher in the south, a disproportionate number of teachers are stationed there, leaving northern schools chronically understaffed. In 2013, the primary school enrollment rate was 93.5%. School attendance in Cameroon is also affected by child labor. Indeed, the U.S. Department of Labor Findings on the Worst Forms of Child Labor reported that 56% of children aged 5 to 14 were working children and that almost 53% of children aged 7 to 14 combined work and school. In December 2014, a "List of Goods Produced by Child Labor or Forced Labor" issued by the Bureau of International Labor Affairs mentioned Cameroon among the countries that resorted to child labor in the production of cocoa. The quality of health care is generally low. Life expectancy at birth was estimated at 56 years in 2012, with 48 healthy life years expected. The fertility rate remains high in Cameroon, with an average of 4.8 births per woman and an average mother's age of 19.7 years at first birth. 
In Cameroon, there is only one doctor for every 5,000 people, according to the World Health Organization. In 2014, just 4.1% of total GDP expenditure was allocated to healthcare. Due to financial cuts in the health care system, there are few professionals. Doctors and nurses trained in Cameroon often emigrate because pay is poor while the workload is high. Nurses are unemployed even though their help is needed; some of them volunteer so they will not lose their skills. Outside the major cities, facilities are often dirty and poorly equipped. In 2012, the top three deadly diseases were HIV/AIDS, lower respiratory infection, and diarrheal diseases. Endemic diseases include dengue fever, filariasis, leishmaniasis, malaria, meningitis, schistosomiasis, and sleeping sickness. The HIV/AIDS prevalence rate in 2016 was estimated at 3.8% for those aged 15–49, although a strong stigma against the illness keeps the number of reported cases artificially low. 46,000 children under age 14 were estimated to be living with HIV in 2016. In Cameroon, 58% of those living with HIV know their status, and just 37% receive ARV treatment. In 2016, 29,000 deaths due to AIDS occurred among adults and children. Breast ironing, a traditional practice that is prevalent in Cameroon, may affect girls' health. Female genital mutilation (FGM), while not widespread, is practiced among some populations; according to a 2013 UNICEF report, 1% of women in Cameroon have undergone FGM. Also impacting women's and girls' health, the contraceptive prevalence rate was estimated at just 34.4% in 2014. Traditional healers remain a popular alternative to evidence-based medicine. At , Cameroon is the world's 53rd-largest country. It is slightly larger than the nation of Sweden and the US state of California, and comparable in size to Papua New Guinea. The country is located in Central and West Africa, known as the hinge of Africa, on the Bight of Bonny, part of the Gulf of Guinea and the Atlantic Ocean. Cameroon lies between latitudes 1° and 13°N, and longitudes 8° and 17°E. Cameroon controls 12 nautical miles of the Atlantic Ocean. Tourist literature describes Cameroon as "Africa in miniature" because it exhibits all major climates and vegetation of the continent: coast, desert, mountains, rainforest, and savanna. The country's neighbours are Nigeria and the Atlantic Ocean to the west; Chad to the northeast; the Central African Republic to the east; and Equatorial Guinea, Gabon and the Republic of the Congo to the south. Cameroon is divided into five major geographic zones distinguished by dominant physical, climatic, and vegetative features. The coastal plain extends inland from the Gulf of Guinea and has an average elevation of . Exceedingly hot and humid with a short dry season, this belt is densely forested and includes some of the wettest places on earth, part of the Cross-Sanaga-Bioko coastal forests. The South Cameroon Plateau rises from the coastal plain to an average elevation of . Equatorial rainforest dominates this region, although its alternation between wet and dry seasons makes it less humid than the coast. This area is part of the Atlantic Equatorial coastal forests ecoregion. An irregular chain of mountains, hills, and plateaus known as the Cameroon range extends from Mount Cameroon on the coast—Cameroon's highest point at —almost to Lake Chad at Cameroon's northern border at 13°05'N. 
This region has a mild climate, particularly on the Western High Plateau, although rainfall is high. Its soils are among Cameroon's most fertile, especially around volcanic Mount Cameroon. Volcanism here has created crater lakes. On 21 August 1986, one of these, Lake Nyos, belched carbon dioxide and killed between 1,700 and 2,000 people. This area has been delineated by the World Wildlife Fund as the Cameroonian Highlands forests ecoregion. The southern plateau rises northward to the grassy, rugged Adamawa Plateau. This feature stretches from the western mountain area and forms a barrier between the country's north and south. Its average elevation is , and its average temperature ranges from to with high rainfall between April and October peaking in July and August. The northern lowland region extends from the edge of the Adamawa to Lake Chad with an average elevation of . Its characteristic vegetation is savanna scrub and grass. This is an arid region with sparse rainfall and high median temperatures. Cameroon has four patterns of drainage. In the south, the principal rivers are the Ntem, Nyong, Sanaga, and Wouri. These flow southwestward or westward directly into the Gulf of Guinea. The Dja and Kadéï drain southeastward into the Congo River. In northern Cameroon, the Bénoué River runs north and west and empties into the Niger. The Logone flows northward into Lake Chad, which Cameroon shares with three neighbouring countries. Cameroon's per-capita GDP (Purchasing power parity) was estimated as US$2,300 in 2008, one of the ten highest in sub-Saharan Africa. Major export markets include France, Italy, South Korea, Spain, and the United Kingdom. Cameroon is aiming to become an emerging country by 2035. Cameroon has had a decade of strong economic performance, with GDP growing at an average of 4% per year. During the 2004–2008 period, public debt was reduced from over 60% of GDP to 10% and official reserves quadrupled to over USD 3 billion. Cameroon is part of the Bank of Central African States (of which it is the dominant economy), the Customs and Economic Union of Central Africa (UDEAC) and the Organization for the Harmonization of Business Law in Africa (OHADA). Its currency is the CFA franc. Unemployment was estimated at 4.4% in 2014, and about a third of the population was living below the international poverty threshold of US$1.25 a day in 2009. Since the late 1980s, Cameroon has been following programmes advocated by the World Bank and International Monetary Fund (IMF) to reduce poverty, privatise industries, and increase economic growth. The government has taken measures to encourage tourism in the country. Cameroon's natural resources are very well suited to agriculture and arboriculture. An estimated 70% of the population farms, and agriculture comprised an estimated 19.8% of GDP in 2009. Most agriculture is done at the subsistence scale by local farmers using simple tools. They sell their surplus produce, and some maintain separate fields for commercial use. Urban centres are particularly reliant on peasant agriculture for their foodstuffs. Soils and climate on the coast encourage extensive commercial cultivation of bananas, cocoa, oil palms, rubber, and tea. Inland on the South Cameroon Plateau, cash crops include coffee, sugar, and tobacco. Coffee is a major cash crop in the western highlands, and in the north, natural conditions favour crops such as cotton, groundnuts, and rice. Reliance on agricultural exports makes Cameroon vulnerable to shifts in their prices. 
Livestock are raised throughout the country. Fishing employs 5,000 people and provides over 100,000 tons of seafood each year. Bushmeat, long a staple food for rural Cameroonians, is today a delicacy in the country's urban centres. The commercial bushmeat trade has now surpassed deforestation as the main threat to wildlife in Cameroon. The southern rainforest has vast timber reserves, estimated to cover 37% of Cameroon's total land area. However, large areas of the forest are difficult to reach. Logging, largely handled by foreign-owned firms, provides the government US$60 million a year in taxes, and laws mandate the safe and sustainable exploitation of timber. Nevertheless, in practice, the industry is one of the least regulated in Cameroon. Factory-based industry accounted for an estimated 29.7% of GDP in 2009. More than 75% of Cameroon's industrial strength is located in Douala and Bonabéri. Cameroon possesses substantial mineral resources, but these are not extensively mined (see "Mining in Cameroon"). Petroleum exploitation has fallen since 1986, but this remains a substantial sector, such that dips in prices have a strong effect on the economy. Rapids and waterfalls obstruct the southern rivers, but these sites offer opportunities for hydroelectric development and supply most of Cameroon's energy. The Sanaga River powers the largest hydroelectric station, located at Edéa. The rest of Cameroon's energy comes from oil-powered thermal engines. Much of the country remains without reliable power supplies. Transport in Cameroon is often difficult. Except for several relatively good toll roads that connect major cities (all of them one-lane), roads are poorly maintained and subject to inclement weather, since only 10% of the roadways are tarred. Roadblocks often serve little other purpose than to allow police and gendarmes to collect bribes from travellers. Road banditry has long hampered transport along the eastern and western borders, and since 2005, the problem has intensified in the east as the Central African Republic has further destabilised. Intercity bus services run by multiple private companies connect all major cities. They are the most popular means of transportation, followed by the rail service "Camrail". Rail service runs from Kumba in the west to Bélabo in the east and north to Ngaoundéré. International airports are located in Douala and Yaoundé, with a third under construction in Maroua. Douala is the country's principal seaport. In the north, the Bénoué River is seasonally navigable from Garoua across into Nigeria. Although press freedoms have improved since the first decade of the 21st century, the press is corrupt and beholden to special interests and political groups. Newspapers routinely self-censor to avoid government reprisals. The major radio and television stations are state-run and other communications, such as land-based telephones and telegraphs, are largely under government control. However, cell phone networks and Internet providers have increased dramatically since the first decade of the 21st century and are largely unregulated. The Cameroon Armed Forces (French: "Forces armées camerounaises", FAC) consist of the country's army (French: "Armée de Terre"), the country's navy (French: "Marine Nationale de la République", MNR, which includes naval infantry), the Cameroonian Air Force (French: "Armée de l'Air du Cameroun", AAC), the Fire Fighter Corps, the Rapid Intervention Brigade, and the Gendarmerie. 
Males and females aged 18 to 23 who have graduated from high school are eligible for military service. Those who serve are obliged to four years of service. There is no conscription in Cameroon, but the government makes periodic calls for volunteers. The population total in Cameroon was in . The life expectancy is 56 years (55.9 years for males and 58.6 years for females). According to the latest census, Cameroon still has slightly more women (50.6%) than men (49.4%). Nearly 60% of the population is under age 25. People over 65 years of age account for only 3.2% of the total population. Cameroon's population is almost evenly divided between urban and rural dwellers. Population density is highest in the large urban centres, the western highlands, and the northeastern plain. Douala, Yaoundé, and Garoua are the largest cities. In contrast, the Adamawa Plateau, southeastern Bénoué depression, and most of the South Cameroon Plateau are sparsely populated. According to the World Health Organization, the fertility rate was 4.8 in 2013 with a population growth rate of 2.56%. People from the overpopulated western highlands and the underdeveloped north are moving to the coastal plantation zone and urban centres for employment. Smaller movements are occurring as workers seek employment in lumber mills and plantations in the south and east. Although the national sex ratio is relatively even, these out-migrants are primarily males, which leads to unbalanced ratios in some regions. Both monogamous and polygamous marriage are practised, and the average Cameroonian family is large and extended. In the north, women tend to the home, and men herd cattle or work as farmers. In the south, women grow the family's food, and men provide meat and grow cash crops. Like most societies, Cameroonian society is male-dominated, and violence and discrimination against women are common. Estimates identify anywhere from 230 to 282 different ethnic and linguistic groups in Cameroon. The Adamawa Plateau broadly bisects these into northern and southern divisions. The northern peoples are Sudanese groups, who live in the central highlands and the northern lowlands, and the Fulani, who are spread throughout northern Cameroon. A small number of Shuwa Arabs live near Lake Chad. Southern Cameroon is inhabited by speakers of Bantu and Semi-Bantu languages. Bantu-speaking groups inhabit the coastal and equatorial zones, while speakers of Semi-Bantu languages live in the Western grassfields. Some 5,000 Gyele and Baka Pygmy peoples roam the southeastern and coastal rainforests or live in small, roadside settlements. Nigerians make up the largest group of foreign nationals. In 2007, Cameroon hosted a total population of refugees and asylum seekers of approximately 97,400. Of these, 49,300 were from the Central African Republic (many driven west by war), 41,600 from Chad, and 2,900 from Nigeria. Kidnappings of Cameroonian citizens by Central African bandits have increased since 2005. In the first months of 2014, thousands of refugees fleeing the violence in the Central African Republic arrived in Cameroon. On 4 June 2014, AlertNet reported: Almost 90,000 people have fled to neighbouring Cameroon since December and up to 2,000 a week, mostly women and children, are still crossing the border, the United Nations said. 
"Women and children are arriving in Cameroon in a shocking state, after weeks, sometimes months, on the road, foraging for food," said Ertharin Cousin, executive director of the World Food Programme (WFP). Both English and French are official languages, although French is by far the most understood language (more than 80%). German, the language of the original colonisers, has long since been displaced by French and English. Cameroonian Pidgin English is the lingua franca in the formerly British-administered territories. A mixture of English, French, and Pidgin called Frananglais has been gaining popularity in urban centres since the mid-1970s. The government encourages bilingualism in English and French, and as such, official government documents, new legislation, ballots, among others, are written and provided in both languages. As part of the initiative to encourage bilingualism in Cameroon, six of the eight universities in the country are entirely bilingual. In addition to the colonial languages, there are approximately 250 other languages spoken by nearly 20 million Cameroonians. It is because of this that Cameroon is considered one of the most linguistically diverse countries in the world. In 2017 there were language protests by the anglophone population against perceived oppression by the francophone. The military was deployed against the protesters and people have been killed, hundreds imprisoned and thousands fled the country. Cameroon has a high level of religious freedom and diversity. The predominant faith is Christianity, practised by about two-thirds of the population, while Islam is a significant minority faith, adhered to by about one-fifth. In addition, traditional faiths are practised by many. Muslims are most concentrated in the north, while Christians are concentrated primarily in the southern and western regions, but practitioners of both faiths can be found throughout the country. Large cities have significant populations of both groups. Muslims in Cameroon are divided into Sufis (and Salafis), Shias, and non-denominational Muslims. People from the North-West and South-West provinces, which used to be a part of British Cameroons, have the highest proportion of Protestants. The French-speaking regions of the southern and western regions are largely Catholic. Southern ethnic groups predominantly follow Christian or traditional African animist beliefs, or a syncretic combination of the two. People widely believe in witchcraft, and the government outlaws such practices. Suspected witches are often subject to mob violence. The Islamist jihadist group Ansar al Islam has been reported as operating in North Cameroon. In the northern regions, the locally dominant Fulani ethnic group is mostly Muslim, but the overall population is fairly evenly divided among Muslims, Christians, and followers of indigenous religious beliefs (called "Kirdi" ("pagan") by the Fulani). The Bamum ethnic group of the West Region is largely Muslim. Native traditional religions are practised in rural areas throughout the country but rarely are practised publicly in cities, in part because many indigenous religious groups are intrinsically local in character. The Norwegian Missionary Society first established a mission in Cameroon in the early 1920s. Many of the churches still stands. At the time there were few Christians but now there are many. 
In cooperation with the local Evangelical Lutheran Church of Cameroon (EELC) and the American ELCA, the NMS built several hospitals, among them the Protestant Hospital of Ngaoundéré, as well as high schools and several other institutions. Ngaoundéré once held one of the largest Norwegian contingents anywhere in the world, with over 100 Norwegians living there in the 1980s. Some of these families, among them the Kaldhol, Bjaanes, Stavenjord and Dankel families, have been there for several generations. The neighbourhood was even dubbed "Norvège" ("Norway" in French). Music and dance are an integral part of Cameroonian ceremonies, festivals, social gatherings, and storytelling. Traditional dances are highly choreographed and separate men and women or forbid participation by one sex altogether. The goals of dances range from pure entertainment to religious devotion. Traditionally, music is transmitted orally. In a typical performance, a chorus of singers echoes a soloist. Musical accompaniment may be as simple as clapping hands and stomping feet, but traditional instruments include bells worn by dancers, clappers, drums and talking drums, flutes, horns, rattles, scrapers, stringed instruments, whistles, and xylophones; the exact combination varies with ethnic group and region. Some performers sing complete songs by themselves, accompanied by a harplike instrument. Popular music styles include ambasse bey of the coast, assiko of the Bassa, mangambeu of the Bangangte, and tsamassi of the Bamileke. Nigerian music has influenced Anglophone Cameroonian performers, and Prince Nico Mbarga's highlife hit "Sweet Mother" is the top-selling African record in history. The two most popular styles of music are makossa and bikutsi. Makossa developed in Douala and mixes folk music, highlife, soul, and Congo music. Performers such as Manu Dibango, Francis Bebey, Moni Bilé, and Petit-Pays popularised the style worldwide in the 1970s and 1980s. Bikutsi originated as war music among the Ewondo. Artists such as Anne-Marie Nzié developed it into a popular dance music beginning in the 1940s, and performers such as Mama Ohandja and Les Têtes Brulées popularised it internationally during the 1960s, 1970s, and 1980s. Cuisine varies by region, but a large, one-course, evening meal is common throughout the country. A typical dish is based on cocoyams, maize, cassava (manioc), millet, plantains, potatoes, rice, or yams, often pounded into dough-like fufu. This is served with a sauce, soup, or stew made from greens, groundnuts, palm oil, or other ingredients. Meat and fish are popular but expensive additions, with chicken often reserved for special occasions. Dishes are often quite hot, spiced with salt, red pepper sauce, and Maggi. Cutlery is common, but food is traditionally manipulated with the right hand. Breakfast consists of leftovers of bread and fruit with coffee or tea. Breakfast foods are generally made from wheat flour and include puff-puff (doughnuts), accra banana (made from bananas and flour), bean cakes and many others. Snacks are popular, especially in larger towns where they may be bought from street vendors. Water, palm wine, and millet beer are the traditional mealtime drinks, although beer, soda, and wine have gained popularity. 33 Export beer is the official drink of the national soccer team and one of the most popular brands, alongside Castel, Amstel, and Guinness. Traditional arts and crafts are practised throughout the country for commercial, decorative, and religious purposes. 
Woodcarvings and sculptures are especially common. The high-quality clay of the western highlands is suitable for pottery and ceramics. Other crafts include basket weaving, beadworking, brass and bronze working, calabash carving and painting, embroidery, and leather working. Traditional housing styles make use of locally available materials and vary from the temporary wood-and-leaf shelters of the nomadic Mbororo to the rectangular mud-and-thatch homes of southern peoples. Dwellings made from materials such as cement and tin are increasingly common. Contemporary art is mainly promoted by independent cultural organizations (Doual'art, Africréa) and artist-run initiatives (Art Wash, Atelier Viking, ArtBakery). Cameroonian literature has concentrated on both European and African themes. Colonial-era writers such as Louis-Marie Pouka and Sankie Maimo were educated by European missionary societies and advocated assimilation into European culture as the means to bring Cameroon into the modern world. After World War II, writers such as Mongo Beti and Ferdinand Oyono analysed and criticised colonialism and rejected assimilation. Shortly after independence, filmmakers such as Jean-Paul Ngassa and Thérèse Sita-Bella explored similar themes. In the 1960s, Mongo Beti, Ferdinand Léopold Oyono and other writers explored post-colonialism, problems of African development, and the recovery of African identity. Meanwhile, in the mid-1970s, filmmakers such as Jean-Pierre Dikongué Pipa and Daniel Kamwa dealt with the conflicts between traditional and post-colonial society. Literature and films during the next two decades concentrated more on wholly Cameroonian themes. National policy strongly advocates sport in all forms. Traditional sports include canoe racing and wrestling, and several hundred runners participate in the Mount Cameroon Race of Hope each year. Cameroon is one of the few tropical countries to have competed in the Winter Olympics. Sport in Cameroon is dominated by football. Amateur football clubs abound, organised along ethnic lines or under corporate sponsors. The national team has been one of the most successful in Africa since its strong showing in the 1982 and 1990 FIFA World Cups. Cameroon has won five African Cup of Nations titles and the gold medal at the 2000 Olympics. Cameroon hosted the Women's Africa Cup of Nations in November–December 2016. The women's football team is known as the "Indomitable Lionesses." Chad Chad, officially the Republic of Chad, is a landlocked country in Central Africa. It is bordered by Libya to the north, Sudan to the east, the Central African Republic to the south, Cameroon and Nigeria to the southwest, and Niger to the west. It is the fifth largest country in Africa in terms of area. Chad has several regions: a desert zone in the north, an arid Sahelian belt in the centre and a more fertile Sudanian Savanna zone in the south. Lake Chad, after which the country is named, is the largest wetland in Chad and the second-largest in Africa. The capital N'Djamena is the largest city. Chad's official languages are Arabic and French. Chad is home to over 200 different ethnic and linguistic groups. The most widely practised religion in Chad is Islam (at 55%), followed by Christianity (at 40%). Beginning in the 7th millennium BC, human populations moved into the Chadian basin in great numbers. 
By the end of the 1st millennium AD, a series of states and empires had risen and fallen in Chad's Sahelian strip, each focused on controlling the trans-Saharan trade routes that passed through the region. France conquered the territory by 1920 and incorporated it as part of French Equatorial Africa. In 1960, Chad obtained independence under the leadership of François Tombalbaye. Resentment towards his policies in the Muslim north culminated in the eruption of a long-lasting civil war in 1965. In 1979 the rebels conquered the capital and put an end to the south's hegemony. However, the rebel commanders fought amongst themselves until Hissène Habré defeated his rivals. He was overthrown in 1990 by his general Idriss Déby. Since 2003 the Darfur crisis in Sudan has spilt over the border and destabilised the nation, with hundreds of thousands of Sudanese refugees living in and around camps in eastern Chad. Uneven inclusion in the global political economy as a site for colonial resource extraction (primarily cotton and crude oil), a global economic system that neither promotes nor encourages Chadian industrialization, and a failure to support local agricultural production have meant that the majority of Chadians live in daily uncertainty and hunger. While many political parties are active, power lies firmly in the hands of President Déby and his political party, the Patriotic Salvation Movement. Chad remains plagued by political violence and recurrent attempted coups d'état. Since 2003 crude oil has become the country's primary source of export earnings, superseding the traditional cotton industry. In the 7th millennium BC, ecological conditions in the northern half of Chadian territory favored human settlement, and the region experienced a strong population increase. Some of the most important African archaeological sites are found in Chad, mainly in the Borkou-Ennedi-Tibesti Region; some date to earlier than 2000 BC. For more than 2,000 years, the Chadian Basin has been inhabited by agricultural and sedentary people. The region became a crossroads of civilizations. The earliest of these were the legendary Sao, known from artifacts and oral histories. The Sao fell to the Kanem Empire, the first and longest-lasting of the empires that developed in Chad's Sahelian strip by the end of the 1st millennium AD. Two other states in the region, the Sultanate of Bagirmi and the Wadai Empire, emerged in the 16th and 17th centuries. The power of Kanem and its successors was based on control of the trans-Saharan trade routes that passed through the region. These states, at least tacitly Muslim, never extended their control to the southern grasslands except to raid for slaves. In Kanem, about a third of the population were slaves. French colonial expansion led to the creation of a French military territory in Chad in 1900. By 1920, France had secured full control of the colony and incorporated it as part of French Equatorial Africa. French rule in Chad was characterised by an absence of policies to unify the territory and sluggish modernisation compared to other French colonies. The French primarily viewed the colony as an unimportant source of untrained labour and raw cotton; France introduced large-scale cotton production in 1929. The colonial administration in Chad was critically understaffed and had to rely on the dregs of the French civil service. Only the Sara of the south were governed effectively; French presence in the Islamic north and east was nominal. The educational system was affected by this neglect. 
After World War II, France granted Chad the status of overseas territory and its inhabitants the right to elect representatives to the National Assembly and a Chadian assembly. The largest political party was the Chadian Progressive Party (PPT), based in the southern half of the colony. Chad was granted independence on 11 August 1960 with the PPT's leader, François Tombalbaye, a southern Sara, as its first president. Two years later, Tombalbaye banned opposition parties and established a one-party system. Tombalbaye's autocratic rule and insensitive mismanagement exacerbated inter-ethnic tensions. In 1965, Muslims in the north, led by the National Liberation Front of Chad (FROLINAT), began a civil war. Tombalbaye was overthrown and killed in 1975, but the insurgency continued. In 1979 the rebel factions led by Hissène Habré took the capital, and all central authority in the country collapsed. Armed factions, many from the north's rebellion, contended for power. The disintegration of Chad caused the collapse of France's position in the country. Libya moved to fill the power vacuum and became involved in Chad's civil war. Libya's adventure ended in disaster in 1987; the French-supported president, Hissène Habré, evoked a united response from Chadians of a kind never seen before and forced the Libyan army off Chadian soil. Habré consolidated his dictatorship through a power system that relied on corruption and violence, with thousands of people estimated to have been killed under his rule. The president favoured his own Toubou ethnic group and discriminated against his former allies, the Zaghawa. His general, Idriss Déby, overthrew him in 1990. Attempts to prosecute Habré led to his placement under house arrest in Senegal in 2005; in 2013, Habré was formally charged with war crimes committed during his rule. In May 2016, he was found guilty of human-rights abuses, including rape, sexual slavery, and ordering the killing of 40,000 people, and was sentenced to life in prison. Déby attempted to reconcile the rebel groups and reintroduced multiparty politics. Chadians approved a new constitution by referendum, and in 1996, Déby easily won a competitive presidential election. He won a second term five years later. Oil exploitation began in Chad in 2003, bringing with it hopes that Chad would at last have some chance of peace and prosperity. Instead, internal dissent worsened, and a new civil war broke out. Déby unilaterally modified the constitution to remove the two-term limit on the presidency; this caused an uproar among civil society and opposition parties. In 2006 Déby won a third mandate in elections that the opposition boycotted. Ethnic violence in eastern Chad has increased; the United Nations High Commissioner for Refugees has warned that a genocide like that in Darfur may yet occur in Chad. In 2006 and 2008, rebel forces attempted to take the capital by force, but failed on both occasions. An agreement for the restoration of harmony between Chad and Sudan, signed 15 January 2010, marked the end of a five-year war. The improvement in relations led to the Chadian rebels from Sudan returning home, the opening of the border between the two countries after seven years of closure, and the deployment of a joint force to secure the border. In May 2013, security forces in Chad foiled a coup against President Idriss Déby that had been in preparation for several months. Chad is currently one of the leading partners in a West African coalition in the fight against Boko Haram. 
Chad has also been included on Presidential Proclamation 9645, the expanded version of United States president Donald Trump's Executive Order 13780, which restricts entry into the US by nationals of eight countries, including Chad. This move has angered the Chadian government. Chad is the world's 22nd-largest country by area. It is slightly smaller than Peru and slightly larger than South Africa. Chad is in north central Africa, lying between latitudes 7° and 24°N, and 13° and 24°E. Chad is bounded to the north by Libya, to the east by Sudan, to the west by Niger, Nigeria and Cameroon, and to the south by the Central African Republic. The country's capital lies a great distance from the nearest seaport, Douala, in Cameroon. Because of this distance from the sea and the country's largely desert climate, Chad is sometimes referred to as the "Dead Heart of Africa". The dominant physical structure is a wide basin bounded to the north and east by the Ennedi Plateau and Tibesti Mountains, which include Emi Koussi, a dormant volcano that is the highest point in Chad. Lake Chad, after which the country is named (and which in turn takes its name from the Kanuri word for "lake"), is the remains of an immense lake that occupied much of the Chad Basin 7,000 years ago. Although in the 21st century it covers only a small fraction of its former extent, and its surface area is subject to heavy seasonal fluctuations, the lake is Africa's second largest wetland. The region's tall grasses and extensive marshes make it favourable for birds, reptiles, and large mammals. Chad's major rivers—the Chari, Logone and their tributaries—flow through the southern savannas from the southeast into Lake Chad. Each year a tropical weather system known as the intertropical front crosses Chad from south to north, bringing a wet season that lasts from May to October in the south, and from June to September in the Sahel. Variations in local rainfall create three major geographical zones. The Sahara lies in the country's northern third. Yearly precipitation throughout this belt is minimal; only the occasional spontaneous palm grove survives, all of them south of the Tropic of Cancer. The Sahara gives way to a Sahelian belt in Chad's centre, where precipitation is somewhat higher. In the Sahel, a steppe of thorny bushes (mostly acacias) gradually gives way southward to East Sudanian savanna in Chad's Sudanese zone. Yearly rainfall in this belt is considerably higher than in the Sahel. Chad's animal and plant life correspond to the three climatic zones. In the Saharan region, the only flora is the date-palm groves of the oases. Palms and acacia trees grow in the Sahelian region. The southern, or Sudanic, zone consists of broad grasslands or prairies suitable for grazing. As of 2002, there were at least 134 species of mammals, 509 species of birds (354 species of residents and 155 migrants), and over 1,600 species of plants throughout the country. Elephants, lions, buffalo, hippopotamuses, rhinoceroses, giraffes, antelopes, leopards, cheetahs, hyenas, and many species of snakes are found here, although most large carnivore populations have been drastically reduced since the early 20th century. Elephant poaching, particularly in the south of the country in areas such as Zakouma National Park, is a severe problem. The small group of surviving West African crocodiles in the Ennedi Plateau represents one of the last colonies known in the Sahara today. Extensive deforestation has resulted in loss of trees such as acacias, baobab, date and palm trees. 
This has also caused a loss of natural habitat for wild animals; hunting and livestock farming by expanding human settlements are also major causes. Populations of animals like lions, leopards and rhinos have fallen significantly. Efforts have been made by the Food and Agriculture Organization to improve relations between farmers, agro-pastoralists and pastoralists in the Zakouma National Park (ZNP), Siniaka-Minia, and Aouk reserve in southeastern Chad to promote sustainable development. As part of the national conservation effort, more than 1.2 million trees have been replanted to check the advancement of the desert, which incidentally also helps the local economy by way of financial return from acacia trees, which produce gum arabic, and also from fruit trees. Poaching is a serious problem in the country, particularly of elephants for the profitable ivory industry, and poses a threat to the lives of rangers even in national parks such as Zakouma. Elephants are often massacred in herds in and around the parks by organized poachers. The problem is worsened by the fact that the parks are understaffed and that a number of wardens have been murdered by poachers. Chad's national statistical agency projected the country's 2015 population between 13,630,252 and 13,679,203, with 13,670,084 as its medium projection; based on the medium projection, 3,212,470 people lived in urban areas and 10,457,614 people lived in rural areas. The country's population is young: an estimated 47.3% is under 15. The birth rate is estimated at 42.35 births per 1,000 people, the mortality rate at 16.69. The life expectancy is 52 years. Chad's population is unevenly distributed. Density is very low in the Saharan Borkou-Ennedi-Tibesti Region but much higher in the Logone Occidental Region. In the capital, it is higher still. About half of the nation's population lives in the southern fifth of its territory, making this the most densely populated region. Urban life is concentrated in the capital, whose population is mostly engaged in commerce. The other major towns are Sarh, Moundou, Abéché and Doba, which are considerably smaller but growing rapidly in population and economic activity. Since 2003, 230,000 Sudanese refugees have fled to eastern Chad from war-ridden Darfur. With the 172,600 Chadians displaced by the civil war in the east, this has generated increased tensions among the region's communities. Polygamy is common, with 39% of women living in such unions. This is sanctioned by law, which automatically permits polygamy unless spouses specify that this is unacceptable upon marriage. Although violence against women is prohibited, domestic violence is common. Female genital mutilation is also prohibited, but the practice is widespread and deeply rooted in tradition; 45% of Chadian women undergo the procedure, with the highest rates among Arabs, Hadjarai, and Ouaddaians (90% or more). Lower percentages were reported among the Sara (38%) and the Toubou (2%). Women lack equal opportunities in education and training, making it difficult for them to compete for the relatively few formal-sector jobs. Although property and inheritance laws based on the French code do not discriminate against women, local leaders adjudicate most inheritance cases in favour of men, according to traditional practice. Chad has more than 200 distinct ethnic groups, which create diverse social structures. 
The colonial administration and independent governments have attempted to impose a national society, but for most Chadians the local or regional society remains the most important influence outside the immediate family. Nevertheless, Chad's people may be classified according to the geographical region in which they live. In the south live sedentary people such as the Sara, the nation's main ethnic group, whose essential social unit is the lineage. In the Sahel sedentary peoples live side-by-side with nomadic ones, such as the Arabs, the country's second major ethnic group. The north is inhabited by nomads, mostly Toubous. Chad's official languages are Arabic and French, but over 100 languages and dialects are spoken. Due to the important role played by itinerant Arab traders and settled merchants in local communities, Chadian Arabic has become a lingua franca. Chad submitted an application to join the Arab League as a member state on 25 March 2014, which is still pending. Chad is a religiously diverse country. Estimates from Pew Research Center in 2010 found that 55.7% of the population was Muslim, while 22.5% was Catholic and a further 17.6% was Protestant. Among Muslims, 48% professed to be Sunni, 21% Shia, 4% Ahmadi and 23% just Muslim. A small proportion of the population continues to practice indigenous religions. Animism includes a variety of ancestor and place-oriented religions whose expression is highly specific. Islam is expressed in diverse ways; for example, 55% of Muslim Chadians belong to Sufi orders. Christianity arrived in Chad with French and American missionaries; as with Chadian Islam, it syncretises aspects of pre-Christian religious beliefs. Muslims are largely concentrated in northern and eastern Chad, and animists and Christians live primarily in southern Chad and Guéra. The constitution provides for a secular state and guarantees religious freedom; different religious communities generally co-exist without problems. The majority of Muslims in the country are adherents of a moderate branch of mystical Islam (Sufism). Its most common expression is the Tijaniyah, an order followed by 35% of Chadian Muslims, which incorporates some local African religious elements. A small minority of the country's Muslims hold more fundamentalist practices, which, in some cases, may be associated with Saudi-oriented Salafi movements. Roman Catholics represent the largest Christian denomination in the country. Most Protestants, including the Nigeria-based "Winners' Chapel", are affiliated with various evangelical Christian groups. Members of the Bahá'í and Jehovah's Witnesses religious communities are also present in the country. Both faiths were introduced after independence in 1960 and therefore are considered to be "new" religions in the country. Chad is home to foreign missionaries representing both Christian and Islamic groups. Itinerant Muslim preachers, primarily from Sudan, Saudi Arabia, and Pakistan, also visit. Saudi Arabian funding generally supports social and educational projects and extensive mosque construction. Chad's constitution provides for a strong executive branch headed by a president who dominates the political system. The president has the power to appoint the prime minister and the cabinet, and exercises considerable influence over appointments of judges, generals, provincial officials and heads of Chad's para-statal firms. In cases of grave and immediate threat, the president, in consultation with the National Assembly, may declare a state of emergency. 
The president is directly elected by popular vote for a five-year term; in 2005 constitutional term limits were removed, allowing a president to remain in power beyond the previous two-term limit. Most of Déby's key advisers are members of the Zaghawa ethnic group, although southern and opposition personalities are represented in government. Chad's legal system is based on French civil law and Chadian customary law where the latter does not interfere with public order or constitutional guarantees of equality. Despite the constitution's guarantee of judicial independence, the president names most key judicial officials. The legal system's highest jurisdictions, the Supreme Court and the Constitutional Council, have become fully operational since 2000. The Supreme Court is made up of a chief justice, named by the president, and 15 councillors, appointed for life by the president and the National Assembly. The Constitutional Court is headed by nine judges elected to nine-year terms. It has the power to review legislation, treaties and international agreements prior to their adoption. The National Assembly makes legislation. The body consists of 155 members elected for four-year terms who meet three times per year. The Assembly holds regular sessions twice a year, starting in March and October, and can hold special sessions when called by the prime minister. Deputies elect a National Assembly president every two years. The president must sign or reject newly passed laws within 15 days. The National Assembly must approve the prime minister's plan of government and may force the prime minister to resign through a majority vote of no confidence. However, if the National Assembly rejects the executive branch's programme twice in one year, the president may disband the Assembly and call for new legislative elections. In practice, the president exercises considerable influence over the National Assembly through his party, the Patriotic Salvation Movement (MPS), which holds a large majority. Until the legalisation of opposition parties in 1992, Déby's MPS was the sole legal party in Chad. Since then, 78 registered political parties have become active. In 2005, opposition parties and human rights organisations supported the boycott of the constitutional referendum that allowed Déby to stand for re-election for a third term amid reports of widespread irregularities in voter registration and government censorship of independent media outlets during the campaign. Correspondents judged the 2006 presidential elections a mere formality, as the opposition deemed the polls a farce and boycotted them. Déby faces armed opposition from groups who are deeply divided by leadership clashes but united in their intention to overthrow him. These forces stormed the capital on 13 April 2006, but were ultimately repelled. Chad's greatest foreign influence is France, which maintains 1,000 soldiers in the country. Déby relies on the French to help repel the rebels, and France gives the Chadian army logistical and intelligence support for fear of a complete collapse of regional stability. Nevertheless, Franco-Chadian relations were soured by the granting of oil drilling rights to the American Exxon company in 1999. Chad is listed as a failed state by the Fund for Peace (FFP). In 2007 Chad had the seventh highest score on the Failed States Index. Since then the trend has been upwards each year. Chad had the fourth highest score (behind Sudan) on the Failed States Index of 2012 and, in the most recent index, is ranked fifth. 
Corruption is rife at all levels; Transparency International's Corruption Perceptions Index for 2005 named Chad (tied with Bangladesh) as the most corrupt country in the world. Chad's ranking on the index has improved only marginally in recent years. Since its first inclusion on the index in 2004, Chad's best score has been 2/10, for 2011. Critics of President Déby have accused him of cronyism and tribalism. Since 2012 Chad has been divided into 23 regions. The subdivision of Chad into regions came about in 2003 as part of the decentralisation process, when the government abolished the previous 14 prefectures. Each region is headed by a presidentially appointed governor. Prefects administer the 61 departments within the regions. The departments are divided into 200 sub-prefectures, which are in turn composed of 446 cantons. The cantons are scheduled to be replaced by "communautés rurales", but the legal and regulatory framework has not yet been completed. The constitution provides for decentralised government to compel local populations to play an active role in their own development. To this end, the constitution declares that each administrative subdivision be governed by elected local assemblies, but no local elections have taken place, and communal elections scheduled for 2005 have been repeatedly postponed. The army has over 30,350 active personnel, with some 3,000,000 people fit for military service. Military spending has fluctuated widely in recent history in response to local conditions, especially the 2005–2010 civil war and instability in neighbouring countries. In 2009, while in civil war, Chad spent 4.2% of GDP on defense, which fell to 1.6% of GDP in 2011 before rising to 2.0% of GDP in 2013, when Chad began its military intervention in northern Mali, as it worked with France and other African nations to restore Mali's sovereignty over territory in the north. There have been numerous rebel groups in Chad throughout the last few decades. In 2007, a peace treaty was signed that integrated United Front for Democratic Change (FUC) soldiers into the Chadian Army. The Movement for Justice and Democracy in Chad (MDJT) also clashed with government forces in 2003 in an attempt to overthrow President Idriss Déby. In addition, there have been various conflicts with Khartoum's Janjaweed rebels in eastern Chad, who killed civilians by use of helicopter gunships. Presently, the Union of Resistance Forces (UFR) is a rebel group that continues to battle the government of Chad. In 2010, the UFR reportedly had a force estimated at 6,000 men and 300 vehicles. The Gendarmerie Nationale serves as Chad's national police force. In Chad, homosexual acts are illegal and can be punished by 15 to 20 years in prison. In December 2016, Chad passed a law criminalising both male and female same-sex sexual activity by a vote of 111 to 1. In southern Chad, bitter conflicts over land are becoming more and more common. They frequently turn violent. Long-standing community culture is being eroded – and so are the livelihoods of many farmers. The United Nations' Human Development Index ranks Chad as the seventh poorest country in the world, with 80% of the population living below the poverty line. The GDP (purchasing power parity) per capita was estimated as US$1,651 in 2009. Chad is part of the Bank of Central African States, the Customs and Economic Union of Central Africa (UDEAC) and the Organization for the Harmonization of Business Law in Africa (OHADA). Chad's currency is the CFA franc. 
In the 1960s, the mining industry of Chad produced sodium carbonate, or natron. There have also been reports of gold-bearing quartz in the Biltine Prefecture. However, years of civil war have scared away foreign investors; those who left Chad between 1979 and 1982 have only recently begun to regain confidence in the country's future. In 2000, major direct foreign investment in the oil sector began, boosting the country's economic prospects. Over 80% of Chad's population relies on subsistence farming and livestock raising for its livelihood. The crops grown and the locations of herds are determined by the local climate. In the southernmost 10% of the territory lies the nation's most fertile cropland, with rich yields of sorghum and millet. In the Sahel only the hardier varieties of millet grow, and these with much lower yields than in the south. On the other hand, the Sahel is ideal pastureland for large herds of commercial cattle and for goats, sheep, donkeys and horses. The Sahara's scattered oases support only some dates and legumes. Chad's cities face serious difficulties of municipal infrastructure; only 48% of urban residents have access to potable water and only 2% to basic sanitation. Before the development of the oil industry, cotton dominated industry and the labour market, and accounted for approximately 80% of export earnings. Cotton remains a primary export, although exact figures are not available. Rehabilitation of Cotontchad, a major cotton company weakened by a decline in world cotton prices, has been financed by France, the Netherlands, the European Union, and the International Bank for Reconstruction and Development (IBRD). The parastatal is now expected to be privatised. Other than cotton, cattle and gum arabic are the dominant exports. If Chad can maintain a semblance of stability, foreign investment will eventually return, but even 24 years after the last successful coup, which brought President Idriss Déby to power, investors remain wary of investing in Chad. According to the United Nations, Chad has been affected by a humanitarian crisis since at least 2001. Chad hosts over 280,000 refugees from Sudan's Darfur region and over 55,000 from the Central African Republic, as well as over 170,000 internally displaced persons. In February 2008, in the aftermath of the battle of N'Djamena, UN Under-Secretary-General for Humanitarian Affairs John Holmes expressed "extreme concern" that the crisis would have a negative effect on the ability of humanitarians to deliver life-saving assistance to half a million beneficiaries, most of whom – according to him – heavily rely on humanitarian aid for their survival. UN spokesperson Maurizio Giuliano stated to "The Washington Post": "If we do not manage to provide aid at sufficient levels, the humanitarian crisis might become a humanitarian catastrophe". In addition, organizations such as Save the Children have suspended activities due to killings of aid workers. Civil war crippled the development of transport infrastructure; in 1987, Chad had only a limited network of paved roads. Successive road rehabilitation projects expanded the network by 2004. Nevertheless, the road network remains limited; roads are often unusable for several months of the year. With no railways of its own, Chad depends heavily on Cameroon's rail system for the transport of Chadian exports and imports to and from the seaport of Douala. Chad had an estimated 59 airports, only 9 of which had paved runways. 
An international airport serves the capital and provides regular nonstop flights to Paris and several African cities. At the beginning of the 20th century, a railway system was in development near Lake Chad. In the 21st century, Chad and the China Civil Engineering Construction Corp agreed to a $7 billion contract to build additional railway infrastructure. Rail links to Libya and Sudan have also been proposed. Chad's energy sector has had years of mismanagement by the parastatal Chad Water and Electric Society (STEE), which provides power for 15% of the capital's citizens and covers only 1.5% of the national population. Most Chadians burn biomass fuels such as wood and animal manure for power. ExxonMobil leads a consortium of Chevron and Petronas that has invested $3.7 billion to develop oil reserves estimated at one billion barrels in southern Chad. Oil production began in 2003 with the completion of a pipeline (financed in part by the World Bank) that links the southern oilfields to terminals on the Atlantic coast of Cameroon. As a condition of its assistance, the World Bank insisted that 80% of oil revenues be spent on development projects. In January 2006 the World Bank suspended its loan programme when the Chadian government passed laws reducing this amount. On 14 July 2006, the World Bank and Chad signed a memorandum of understanding under which the Government of Chad commits 70% of its spending to priority poverty reduction programmes. The telecommunication system is basic and expensive, with fixed telephone services provided by the state telephone company SotelTchad. Only 14,000 fixed telephone lines serve all of Chad, one of the lowest telephone density rates in the world. Gateway Communications, a pan-African wholesale connectivity and telecommunications provider, also has a presence in Chad. In September 2013, Chad's Ministry for Posts and Information & Communication Technologies (PNTIC) announced that the country would be seeking a partner for fiber optic technology. In September 2010 the penetration rate was estimated at 24.3% over a population estimate of 10.7 million. Chad is ranked last in the World Economic Forum's Network Readiness Index (NRI) – an indicator for determining the development level of a country's information and communication technologies. Chad ranked number 148 out of 148 overall in the 2014 NRI ranking, down from 142 in 2013. Chad's television audience is limited to N'Djamena. The only television station is the state-owned Télé Tchad. Radio has a far greater reach, with 13 private radio stations. Newspapers are limited in quantity and distribution, and circulation figures are small due to transportation costs, low literacy rates, and poverty. While the constitution defends liberty of expression, the government has regularly restricted this right, and at the end of 2006 began to enact a system of prior censorship on the media. Educators face considerable challenges due to the nation's dispersed population and a certain degree of reluctance on the part of parents to send their children to school. Although attendance is compulsory, only 68 percent of boys attend primary school, and more than half of the population is illiterate. Higher education is provided at the University of N'Djamena. At 33 percent, Chad has one of the lowest literacy rates in Sub-Saharan Africa. In 2013, the U.S. Department of Labor's Findings on the Worst Forms of Child Labor in Chad reported that school attendance of children aged 5 to 14 was as low as 39%. 
This can also be related to the issue of child labor, as the report stated that 53% of children aged 5 to 14 were working children and that 30% of children aged 7 to 14 combined work and school. A more recent DOL report listed cattle herding as a major agricultural activity that employed underage children. Because of its great variety of peoples and languages, Chad possesses a rich cultural heritage. The Chadian government has actively promoted Chadian culture and national traditions by opening the Chad National Museum and the Chad Cultural Centre. Six national holidays are observed throughout the year, and movable holidays include the Christian holiday of Easter Monday and the Muslim holidays of Eid ul-Fitr, Eid ul-Adha, and Eid Milad Nnabi. The music of Chad includes a number of unusual instruments such as the "kinde", a type of bow harp; the "kakaki", a long tin horn; and the "hu hu", a stringed instrument that uses calabashes as loudspeakers. Other instruments and their combinations are more linked to specific ethnic groups: the Sara prefer whistles, balafones, harps and "kodjo" drums; and the Kanembu combine the sounds of drums with those of flute-like instruments. The music group Chari Jazz formed in 1964 and initiated Chad's modern music scene. Later, more renowned groups such as African Melody and International Challal attempted to mix modernity and tradition. Popular groups such as Tibesti have clung more closely to their heritage by drawing on "sai", a traditional style of music from southern Chad. The people of Chad have customarily disdained modern music. However, since 1995 greater interest has developed, fostering the distribution of CDs and audio cassettes featuring Chadian artists. Piracy and a lack of legal protections for artists' rights remain obstacles to the further development of the Chadian music industry. Millet is the staple food throughout Chad. It is used to make balls of paste that are dipped in sauces. In the north this dish is known as "alysh"; in the south, as "biya". Fish is popular; it is generally prepared and sold either as "salanga" (sun-dried and lightly smoked "Alestes" and "Hydrocynus") or as "banda" (smoked large fish). "Carcaje" is a popular sweet red tea extracted from hibiscus leaves. Alcoholic beverages, though absent in the north, are popular in the south, where people drink millet beer, known as "billi-billi" when brewed from red millet, and as "coshate" when from white millet. As in other Sahelian countries, literature in Chad has seen an economic, political and spiritual drought that has affected its best known writers. Chadian authors have been forced to write from exile or expatriate status and have generated literature dominated by themes of political oppression and historical discourse. Since 1962, 20 Chadian authors have written some 60 works of fiction. Among the most internationally renowned writers are Joseph Brahim Seïd, Baba Moustapha, Antoine Bangui and Koulsy Lamko. In 2003 Chad's sole literary critic, Ahmat Taboye, published an anthology of Chadian literature to further knowledge of the country's literature internationally and among youth and to make up for Chad's lack of publishing houses and promotional structure. The development of a Chadian film industry was hampered by the devastation of civil war and by the lack of cinemas, of which there is only one in the whole country. The first Chadian feature film, the docudrama "Bye Bye Africa", was made in 1999 by Mahamat Saleh Haroun. 
His later film "Abouna" was critically acclaimed, and his "Daratt" won the Grand Special Jury Prize at the 63rd Venice International Film Festival. The 2010 feature film "A Screaming Man" won the Jury Prize at the 2010 Cannes Film Festival, making Haroun the first Chadian director to enter, as well as win, an award in the main Cannes competition. Issa Serge Coelo directed Chad's two other films, "Daresalam" and "". Football is Chad's most popular sport. The country's national team is closely followed during international competitions and Chadian footballers have played for French teams. Basketball and freestyle wrestling are widely practiced, the latter in a form in which the wrestlers put on traditional animal hides and cover themselves with dust. Caspar David Friedrich Caspar David Friedrich (5 September 1774 – 7 May 1840) was a 19th-century German Romantic landscape painter, generally considered the most important German artist of his generation. He is best known for his mid-period allegorical landscapes which typically feature contemplative figures silhouetted against night skies, morning mists, barren trees or Gothic or megalithic ruins. His primary interest as an artist was the contemplation of nature, and his often symbolic and anti-classical work seeks to convey a subjective, emotional response to the natural world. Friedrich's paintings characteristically set a human presence in diminished perspective amid expansive landscapes, reducing the figures to a scale that, according to the art historian Christopher John Murray, directs "the viewer's gaze towards their metaphysical dimension". Friedrich was born in the town of Greifswald on the Baltic Sea in what was at the time Swedish Pomerania. He studied in Copenhagen until 1798, before settling in Dresden. He came of age during a period when, across Europe, a growing disillusionment with materialistic society was giving rise to a new appreciation of spirituality. This shift in ideals was often expressed through a reevaluation of the natural world, as artists such as Friedrich, J. M. W. Turner and John Constable sought to depict nature as a "divine creation, to be set against the artifice of human civilization". Friedrich's work brought him renown early in his career, and contemporaries such as the French sculptor David d'Angers spoke of him as a man who had discovered "the tragedy of landscape". Nevertheless, his work fell from favour during his later years, and he died in obscurity. As Germany moved towards modernisation in the late 19th century, a new sense of urgency characterised its art, and Friedrich's contemplative depictions of stillness came to be seen as the products of a bygone age. The early 20th century brought a renewed appreciation of his work, beginning in 1906 with an exhibition of thirty-two of his paintings and sculptures in Berlin. By the 1920s his paintings had been discovered by the Expressionists, and in the 1930s and early 1940s Surrealists and Existentialists frequently drew ideas from his work. The rise of Nazism in the early 1930s again saw a resurgence in Friedrich's popularity, but this was followed by a sharp decline as his paintings were, by association with the Nazi movement, interpreted as having a nationalistic aspect. It was not until the late 1970s that Friedrich regained his reputation as an icon of the German Romantic movement and a painter of international importance. Caspar David Friedrich was born on 5 September 1774, in Greifswald, Swedish Pomerania, on the Baltic coast of Germany. 
The sixth of ten children, he was brought up in the strict Lutheran creed of his father Adolf Gottlieb Friedrich, a candle-maker and soap boiler. Records of the family's financial circumstances are contradictory; while some sources indicate the children were privately tutored, others record that they were raised in relative poverty. Caspar David was familiar with death from an early age. His mother, Sophie Dorothea Bechly, died in 1781 when he was just seven. A year later, his sister Elisabeth died, while a second sister, Maria, succumbed to typhus in 1791. Arguably the greatest tragedy of his childhood happened in 1787 when his brother Johann Christoffer died: at the age of thirteen, Caspar David witnessed his younger brother fall through the ice of a frozen lake and drown. Some accounts suggest that Johann Christoffer perished while trying to rescue Caspar David, who was also in danger on the ice. Friedrich began his formal study of art in 1790 as a private student of artist Johann Gottfried Quistorp at the University of Greifswald in his home city, at which the art department is now named "Caspar-David-Friedrich-Institut" in his honour. Quistorp took his students on outdoor drawing excursions; as a result, Friedrich was encouraged to sketch from life at an early age. Through Quistorp, Friedrich met and was subsequently influenced by the theologian Ludwig Gotthard Kosegarten, who taught that nature was a revelation of God. Quistorp introduced Friedrich to the work of the German 17th-century artist Adam Elsheimer, whose works often included religious subjects dominated by landscape, and nocturnal subjects. During this period he also studied literature and aesthetics with the Swedish professor Thomas Thorild. Four years later Friedrich entered the prestigious Academy of Copenhagen, where he began his education by making copies of casts from antique sculptures before proceeding to drawing from life. Living in Copenhagen afforded the young painter access to the Royal Picture Gallery's collection of 17th-century Dutch landscape painting. At the Academy he studied under teachers such as Christian August Lorentzen and the landscape painter Jens Juel. These artists were inspired by the "Sturm und Drang" movement and represented a midpoint between the dramatic intensity and expressive manner of the budding Romantic aesthetic and the waning neo-classical ideal. Mood was paramount, and influence was drawn from such sources as the Icelandic legend of Edda, the poems of Ossian and Norse mythology. Friedrich settled permanently in Dresden in 1798. During this early period, he experimented in printmaking with etchings and designs for woodcuts which his furniture-maker brother cut. By 1804 he had produced 18 etchings and four woodcuts; they were apparently made in small numbers and only distributed to friends. Despite these forays into other media, he gravitated toward working primarily with ink, watercolour and sepias. With the exception of a few early pieces, he did not work extensively with oils until his reputation was more established. Landscapes were his preferred subject, inspired by frequent trips, beginning in 1801, to the Baltic coast, Bohemia, the Krkonoše and the Harz Mountains. Mostly based on the landscapes of northern Germany, his paintings depict woods, hills, harbors, morning mists and other light effects based on a close observation of nature. 
These works were modeled on sketches and studies of scenic spots, such as the cliffs on Rügen, the surroundings of Dresden and the river Elbe. He executed his studies almost exclusively in pencil, even providing topographical information, yet the subtle atmospheric effects characteristic of Friedrich's mid-period paintings were rendered from memory. These effects took their strength from the depiction of light, and of the illumination of sun and moon on clouds and water: optical phenomena peculiar to the Baltic coast that had never before been painted with such an emphasis. His reputation as an artist was established when he won a prize in 1805 at the Weimar competition organised by Johann Wolfgang von Goethe. At the time, the Weimar competition tended to draw mediocre and now-forgotten artists presenting derivative mixtures of neo-classical and pseudo-Greek styles. The poor quality of the entries began to prove damaging to Goethe's reputation, so when Friedrich entered two sepia drawings—"Procession at Dawn" and "Fisher-Folk by the Sea"—the poet responded enthusiastically and wrote, "We must praise the artist's resourcefulness in this picture fairly. The drawing is well done, the procession is ingenious and appropriate... his treatment combines a great deal of firmness, diligence and neatness... the ingenious watercolour... is also worthy of praise." Friedrich completed the first of his major paintings in 1808, at the age of 34. "Cross in the Mountains", today known as the "Tetschen Altar", is an altarpiece panel said to have been commissioned for a family chapel in Tetschen, Bohemia. The panel depicts a cross in profile at the top of a mountain, alone, and surrounded by pine trees. Controversially, for the first time in Christian art, an altarpiece had showcased a landscape. According to art historian Linda Siegel, Friedrich's design was the "logical climax of many earlier drawings of his which depicted a cross in nature's world." Although the altarpiece was generally coldly received, it was Friedrich's first painting to receive wide publicity. The artist's friends publicly defended the work, while art critic Basilius von Ramdohr published a long article challenging Friedrich's use of landscape in a religious context. He rejected the idea that landscape painting could convey explicit meaning, writing that it would be "a veritable presumption, if landscape painting were to sneak into the church and creep onto the altar". Friedrich responded with a programme describing his intentions in 1809, comparing the rays of the evening sun to the light of the Holy Father. This statement marked the only time Friedrich recorded a detailed interpretation of his own work, and the painting was among the few commissions the artist ever received. Following the purchase of two of his paintings by the Prussian Crown Prince, Friedrich was elected a member of the Berlin Academy in 1810. Yet in 1816, he sought to distance himself from Prussian authority and applied that June for Saxon citizenship. The move was not expected; the Saxon government was pro-French, while Friedrich's paintings were seen as generally patriotic and distinctly anti-French. Nevertheless, with the aid of his Dresden-based friend Graf Vitzthum von Eckstädt, Friedrich attained citizenship, and in 1818, membership in the Saxon Academy with a yearly dividend of 150 thalers. 
Although he had hoped to receive a full professorship, it was never awarded him as, according to the German Library of Information, "it was felt that his painting was too personal, his point of view too individual to serve as a fruitful example to students." Politics too may have played a role in stalling his career: Friedrich's decidedly Germanic subjects and costuming frequently clashed with the era's prevailing pro-French attitudes. On 21 January 1818, Friedrich married Caroline Bommer, the twenty-five-year-old daughter of a dyer from Dresden. The couple had three children, with their first, Emma, arriving in 1820. Physiologist and painter Carl Gustav Carus notes in his biographical essays that marriage did not impact significantly on either Friedrich's life or personality, yet his canvasses from this period, including "Chalk Cliffs on Rügen"—painted after his honeymoon—display a new sense of levity, while his palette is brighter and less austere. Human figures appear with increasing frequency in the paintings of this period, which Siegel interprets as a reflection that "the importance of human life, particularly his family, now occupies his thoughts more and more, and his friends, his wife, and his townspeople appear as frequent subjects in his art." Around this time, he found support from two sources in Russia. In 1820, the Grand Duke Nikolai Pavlovich, at the behest of his wife Alexandra Feodorovna, visited Friedrich's studio and returned to Saint Petersburg with a number of his paintings, an exchange that began a patronage that continued for many years. Not long thereafter, the poet Vasily Zhukovsky, tutor to Alexander II, met Friedrich in 1821 and found in him a kindred spirit. For decades Zhukovsky helped Friedrich both by purchasing his work himself and by recommending his art to the royal family; his assistance toward the end of Friedrich's career proved invaluable to the ailing and impoverished artist. Zhukovsky remarked that his friend's paintings "please us by their precision, each of them awakening a memory in our mind." Friedrich was acquainted with Philipp Otto Runge, another leading German painter of the Romantic period. He was also a friend of Georg Friedrich Kersting, and painted him at work in his unadorned studio, and of the Norwegian painter Johan Christian Clausen Dahl (1788–1857). Dahl was close to Friedrich during the artist's final years, and he expressed dismay that to the art-buying public, Friedrich's pictures were only "curiosities". While the poet Zhukovsky appreciated Friedrich's psychological themes, Dahl praised the descriptive quality of Friedrich's landscapes, commenting that "artists and connoisseurs saw in Friedrich's art only a kind of mystic, because they themselves were only looking out for the mystic... They did not see Friedrich's faithful and conscientious study of nature in everything he represented". During this period Friedrich frequently sketched memorial monuments and sculptures for mausoleums, reflecting his obsession with death and the afterlife; he even created designs for some of the funerary art in Dresden's cemeteries. Some of these works were lost in the fire that destroyed Munich's Glass Palace (1931) and later in the 1945 bombing of Dresden. Friedrich's reputation steadily declined over the final fifteen years of his life. As the ideals of early Romanticism passed from fashion, he came to be viewed as an eccentric and melancholy character, out of touch with the times. Gradually his patrons fell away. 
By 1820, he was living as a recluse and was described by friends as the "most solitary of the solitary". Towards the end of his life he lived in relative poverty. He became isolated and spent long periods of the day and night walking alone through woods and fields, often beginning his strolls before sunrise. In June 1835, Friedrich suffered his first stroke, which left him with minor limb paralysis and greatly reduced his ability to paint. As a result, he was unable to work in oil; instead he was limited to watercolour, sepia and reworking older compositions. Although his vision remained strong, he had lost the full strength of his hand. Yet he was able to produce a final 'black painting', "Seashore by Moonlight" (1835–36), described by Vaughan as the "darkest of all his shorelines, in which richness of tonality compensates for the lack of his former finesse". Symbols of death appeared in his other work from this period. Soon after his stroke, the Russian royal family purchased a number of his earlier works, and the proceeds allowed him to travel to Teplitz—in today's Czech Republic—to recover. During the mid-1830s, Friedrich began a series of portraits and he returned to observing himself in nature. As the art historian William Vaughan has observed, however, "He can see himself as a man greatly changed. He is no longer the upright, supportive figure that appeared in "Two Men Contemplating the Moon" in 1819. He is old and stiff... he moves with a stoop". By 1838, he was capable only of working in a small format. He and his family were living in poverty and grew increasingly dependent for support on the charity of friends. Friedrich died in Dresden on 7 May 1840, and was buried in Dresden's Trinitatis-Friedhof (Trinity Cemetery) east of the city centre (the entrance to which he had painted some 15 years earlier). The simple flat gravestone lies north-west of the central roundel within the main avenue. By the time of his death, his reputation and fame were waning, and his passing was little noticed within the artistic community. His artwork had certainly been acknowledged during his lifetime, but not widely. While the close study of landscape and an emphasis on the spiritual elements of nature were commonplace in contemporary art, his work was too original and personal to be well understood. By 1838, his work no longer sold or received attention from critics; the Romantic movement had been moving away from the early idealism that the artist had helped found. After his death, Carl Gustav Carus wrote a series of articles which paid tribute to Friedrich's transformation of the conventions of landscape painting. However, Carus' articles placed Friedrich firmly in his time, and did not place the artist within a continuing tradition. Only one of his paintings had been reproduced as a print, and that was produced in very few copies. The visualisation and portrayal of landscape in an entirely new manner was Friedrich's key innovation. He sought not just to explore the blissful enjoyment of a beautiful view, as in the classic conception, but rather to examine an instant of sublimity, a reunion with the spiritual self through the contemplation of nature. Friedrich was instrumental in transforming landscape in art from a backdrop subordinated to human drama to a self-contained emotive subject. Friedrich's paintings commonly employed the "Rückenfigur"—a person seen from behind, contemplating the view. 
The viewer is encouraged to place himself in the position of the "Rückenfigur", by which means he experiences the sublime potential of nature, understanding that the scene is as perceived and idealised by a human. Friedrich created the notion of a landscape full of romantic feeling—"die romantische Stimmungslandschaft". His art details a wide range of geographical features, such as rock coasts, forests, and mountain scenes. He often used the landscape to express religious themes. During his time, most of the best-known paintings were viewed as expressions of a religious mysticism. Friedrich said, "The artist should paint not only what he sees before him, but also what he sees within him. If, however, he sees nothing within him, then he should also refrain from painting that which he sees before him. Otherwise, his pictures will be like those folding screens behind which one expects to find only the sick or the dead." Expansive skies, storms, mist, forests, ruins and crosses bearing witness to the presence of God are frequent elements in Friedrich's landscapes. Though death finds symbolic expression in boats that move away from shore—a Charon-like motif—and in the poplar tree, it is referenced more directly in paintings like "The Abbey in the Oakwood" (1808–10), in which monks carry a coffin past an open grave, toward a cross, and through the portal of a church in ruins. He was one of the first artists to portray winter landscapes in which the land is rendered as stark and dead. Friedrich's winter scenes are solemn and still—according to the art historian Hermann Beenken, Friedrich painted winter scenes in which "no man has yet set his foot. The theme of nearly all the older winter pictures had been less winter itself than life in winter. In the 16th and 17th centuries, it was thought impossible to leave out such motifs as the crowd of skaters, the wanderer... It was Friedrich who first felt the wholly detached and distinctive features of a natural life. Instead of many tones, he sought the one; and so, in his landscape, he subordinated the composite chord into one single basic note". Bare oak trees and tree stumps, such as those in "" (c. 1822), "Man and Woman Contemplating the Moon" (c. 1824), and "Willow Bush under a Setting Sun" (c. 1835), are recurring elements of Friedrich's paintings, symbolizing death. Countering the sense of despair are Friedrich's symbols for redemption: the cross and the clearing sky promise eternal life, and the slender moon suggests hope and the growing closeness of Christ. In his paintings of the sea, anchors often appear on the shore, also indicating a spiritual hope. German literature scholar Alice Kuzniar finds in Friedrich's painting a temporality—an evocation of the passage of time—that is rarely highlighted in the visual arts. For example, in "The Abbey in the Oakwood", the movement of the monks away from the open grave and toward the cross and the horizon imparts Friedrich's message that the final destination of man's life lies beyond the grave. With dawn and dusk constituting prominent themes of his landscapes, Friedrich's own later years were characterized by a growing pessimism. His work becomes darker, revealing a fearsome monumentality. "The Wreck of the Hope"—also known as "The Polar Sea" or "The Sea of Ice" (1823–24)—perhaps best summarizes Friedrich's ideas and aims at this point, though in such a radical way that the painting was not well received. 
Completed in 1824, it depicted a grim subject, a shipwreck in the Arctic Ocean; "the image he produced, with its grinding slabs of travertine-colored floe ice chewing up a wooden ship, goes beyond documentary into allegory: the frail bark of human aspiration crushed by the world's immense and glacial indifference." Friedrich's written commentary on aesthetics was limited to a collection of aphorisms set down in 1830, in which he explained the need for the artist to match natural observation with an introspective scrutiny of his own personality. His best-known remark advises the artist to "close your bodily eye so that you may see your picture first with the spiritual eye. Then bring to the light of day that which you have seen in the darkness so that it may react upon others from the outside inwards." He rejected the overreaching portrayals of nature in its "totality", as found in the work of contemporary painters like Adrian Ludwig Richter (1803–84) and Joseph Anton Koch (1768–1839). Both Friedrich's life and art have at times been perceived as marked by an overwhelming sense of loneliness. Art historians and some of his contemporaries attribute such interpretations to the losses he suffered during his youth and to the bleak outlook of his adulthood, while Friedrich's pale and withdrawn appearance helped reinforce the popular notion of the "taciturn man from the North". Friedrich suffered depressive episodes in 1799, 1803–1805, around 1813, in 1816, and between 1824 and 1826. There are noticeable thematic shifts in the works he produced during these episodes, which see the emergence of such motifs and symbols as vultures, owls, graveyards and ruins. From 1826 these motifs became a permanent feature of his output, while his use of color became darker and more muted. Carus wrote that Friedrich "is surrounded by a thick, gloomy cloud of spiritual uncertainty", though the noted art historian and curator Hubertus Gassner disagrees with such notions, seeing in Friedrich's work a positive and life-affirming subtext inspired by Freemasonry and religion. Reflecting Friedrich's patriotism and resentment during the 1813 French occupation of the dominion of Pomerania, motifs from German folklore became increasingly prominent in his work. An anti-French German nationalist, Friedrich used motifs from his native landscape to celebrate Germanic culture, customs and mythology. He was impressed by the anti-Napoleonic poetry of Ernst Moritz Arndt and Theodor Körner, and the patriotic literature of Adam Müller and Heinrich von Kleist. Moved by the deaths of three friends killed in battle against France, as well as by Kleist's 1808 drama "Die Hermannsschlacht", Friedrich undertook a number of paintings in which he intended to convey political symbols solely by means of the landscape—a first in the history of art. In "" (1812), a dilapidated monument inscribed "Arminius" invokes the Germanic chieftain, a symbol of nationalism, while the four tombs of fallen heroes stand slightly ajar, freeing their spirits for eternity. Two French soldiers appear as small figures before a cave, lower and deep in a grotto surrounded by rock, as if farther from heaven. A second political painting, "" (c. 1813), depicts a lost French soldier dwarfed by a dense forest, while on a tree stump a raven is perched—a prophet of doom, symbolizing the anticipated defeat of France. Alongside other Romantic painters, Friedrich helped position landscape painting as a major genre within Western art. 
Of his contemporaries, Friedrich's style most influenced the painting of Johan Christian Dahl (1788–1857). Among later generations, Arnold Böcklin (1827–1901) was strongly influenced by his work, and the substantial presence of Friedrich's works in Russian collections influenced many Russian painters, in particular Arkhip Kuindzhi (c. 1842–1910) and Ivan Shishkin (1832–98). Friedrich's spirituality anticipated American painters such as Albert Pinkham Ryder (1847–1917), Ralph Blakelock (1847–1919), the painters of the Hudson River School and the New England Luminists. At the turn of the 20th century, Friedrich was rediscovered by the Norwegian art historian Andreas Aubert (1851–1913), whose writing initiated modern Friedrich scholarship, and by the Symbolist painters, who valued his visionary and allegorical landscapes. The Norwegian Symbolist Edvard Munch (1863–1944) would have seen Friedrich's work during a visit to Berlin in the 1880s. Munch's 1899 print "The Lonely Ones" echoes Friedrich's "Rückenfigur (back figure)", although in Munch's work the focus has shifted away from the broad landscape and toward the sense of dislocation between the two melancholy figures in the foreground. Friedrich's landscapes exercised a strong influence on the work of German artist Max Ernst (1891–1976), and as a result other Surrealists came to view Friedrich as a precursor to their movement. In 1934, the Belgian painter René Magritte (1898–1967) paid tribute in his work "The Human Condition", which directly echoes motifs from Friedrich's art in its questioning of perception and the role of the viewer. A few years later, the Surrealist journal "Minotaure" featured Friedrich in a 1939 article by critic Marie Landsberger, thereby exposing his work to a far wider circle of artists. The influence of "The Wreck of the Hope" (or "The Sea of Ice") is evident in the 1940–41 painting "Totes Meer" by Paul Nash (1889–1946), a fervent admirer of Ernst. Friedrich's work has been cited as an inspiration by other major 20th-century artists, including Mark Rothko (1903–70), Gerhard Richter (b. 1932), Gotthard Graubner and Anselm Kiefer (b. 1945). Friedrich's Romantic paintings have also been singled out by writer Samuel Beckett (1906–89), who, standing before "Man and Woman Contemplating the Moon", said "This was the source of "Waiting for Godot", you know." In his 1961 article "The Abstract Sublime", originally published in ARTnews, the art historian Robert Rosenblum compared the Romantic landscape paintings of Friedrich and Turner with the Abstract Expressionist paintings of Mark Rothko. Rosenblum specifically describes Friedrich's 1809 painting "The Monk by the Sea", Turner's "The Evening Star" and Rothko's 1954 "Light, Earth and Blue" as revealing affinities of vision and feeling. According to Rosenblum, "Rothko, like Friedrich and Turner, places us on the threshold of those shapeless infinities discussed by the aestheticians of the Sublime. The tiny monk in the Friedrich and the fisher in the Turner establish a poignant contrast between the infinite vastness of a pantheistic God and the infinite smallness of His creatures. In the abstract language of Rothko, such literal detail—a bridge of empathy between the real spectator and the presentation of a transcendental landscape—is no longer necessary; we ourselves are the monk before the sea, standing silently and contemplatively before these huge and soundless pictures as if we were looking at a sunset or a moonlit night." 
Friedrich's work lay in near-oblivion for decades, especially after his friends had died. By 1890, however, the symbolism in his work began to ring true with the artistic mood of the day, especially in central Europe. Despite this renewed interest and an acknowledgment of his originality, his lack of regard for "painterly effect" and his thinly rendered surfaces jarred with the theories of the time. During the 1930s, Friedrich's work was used in the promotion of Nazi ideology, which attempted to fit the Romantic artist within the nationalistic "Blut und Boden". It took decades for Friedrich's reputation to recover from this association with Nazism. His reliance on symbolism and the fact that his work fell outside the narrow definitions of modernism contributed to his fall from favour. In 1949, art historian Kenneth Clark wrote that Friedrich "worked in the frigid technique of his time, which could hardly inspire a school of modern painting", and suggested that the artist was trying to express in painting what is best left to poetry. Clark's dismissal of Friedrich reflected the damage the artist's reputation sustained during the late 1930s. Friedrich's reputation suffered further damage when his imagery was adopted, within the horror and fantasy genres, by a number of Hollywood directors such as Walt Disney, who built on the work of such German cinema masters as Fritz Lang and F. W. Murnau. His rehabilitation was slow, but enhanced through the writings of such critics and scholars as Werner Hofmann, Helmut Börsch-Supan and Sigrid Hinz, who successfully rebutted the political associations ascribed to his work and placed it within a purely art-historical context. By the 1970s, he was again being exhibited in major galleries across the world, as he found favour with a new generation of critics and art historians. Today, his international reputation is well established. He is a national icon in his native Germany, and highly regarded by art historians and art connoisseurs across the Western world. He is generally viewed as a figure of great psychological complexity, and according to Vaughan, "a believer who struggled with doubt, a celebrator of beauty haunted by darkness. In the end, he transcends interpretation, reaching across cultures through the compelling appeal of his imagery. He has truly emerged as a butterfly—hopefully one that will never again disappear from our sight". Friedrich was a prolific artist who produced more than 500 attributed works. In line with the Romantic ideals of his time, he intended his paintings to function as pure aesthetic statements, so he was cautious that the titles given to his work were not overly descriptive or evocative. It is likely that some of today's more literal titles, such as "The Stages of Life", were not given by the artist himself, but were instead adopted during one of the revivals of interest in Friedrich. Complications arise when dating Friedrich's work, in part because he often did not directly name or date his canvases. He kept a carefully detailed notebook on his output, however, which has been used by scholars to tie paintings to their completion dates. Courtney Love Courtney Michelle Love (born July 9, 1964) is an American singer, songwriter, actress, and visual artist. A notable presence in the punk and grunge scenes of the 1990s, Love has had a career spanning four decades. She rose to prominence as the lead vocalist of the alternative rock band Hole, which she formed in 1989. 
Love has drawn public attention for her uninhibited live performances and confrontational lyrics, as well as her highly publicized personal life following her marriage to Nirvana frontman Kurt Cobain. The daughter of countercultural parents, Love had an itinerant childhood. Born in San Francisco, she was raised primarily in Portland, Oregon, where she played in a series of short-lived bands and was active in the local punk scene. After being interned in a juvenile hall, she spent a year abroad living in Dublin and Liverpool before returning to the United States and being cast in two films by British director Alex Cox. She formed Hole in 1989 in Los Angeles, and received attention from underground rock press for the group's 1991 debut album, produced by Kim Gordon. Hole's second release, "Live Through This" (1994), gave her high-profile renown with critical accolades and multi-platinum sales. In 1995, Love returned to acting, earning a Golden Globe Award nomination for her performance as Althea Leasure in Miloš Forman's "The People vs. Larry Flynt" (1996), which established her as a mainstream actress. The following year, Hole's third album, "Celebrity Skin" (1998), was nominated for three Grammy Awards. Love continued to work as an actress into the early 2000s, appearing in big-budget pictures such as "Man on the Moon" (1999) and "Trapped" (2002), before releasing her first solo album, "America's Sweetheart", in 2004. The next years were marked by publicity surrounding Love's legal troubles and drug addiction, which resulted in a mandatory lockdown rehabilitation sentence in 2005 while she was writing a second solo album. That project became "Nobody's Daughter", released in 2010 as a Hole album but without the former Hole lineup. Between 2014 and 2015, Love released two solo singles and returned to acting in the network series "Sons of Anarchy" and "Empire". Love has also had endeavors in writing; she co-created and co-wrote three volumes of a manga, "Princess Ai", between 2004 and 2006, and wrote a memoir, "" (2006). In 2012, she premiered an exhibit of mixed media visual art, "And She's Not Even Pretty". Love was born Courtney Michelle Harrison on July 9, 1964 at Saint Francis Memorial Hospital in San Francisco, California, the first child of psychotherapist Linda Carroll (née Risi) and Hank Harrison, a publisher and road manager for the Grateful Dead. Love's godfather is the founding Grateful Dead bassist Phil Lesh. Her mother, who was adopted at birth and raised by a prominent Italian-Catholic family in San Francisco, was later revealed to be the biological daughter of novelist Paula Fox; Love's maternal great-grandmother was screenwriter Elsie Fox. According to Love, she was named after Courtney Farrell, the protagonist of Pamela Moore's 1956 novel "Chocolates for Breakfast". She is of Cuban, English, German, Irish, and Welsh descent. Love spent her early years in the Haight-Ashbury district of San Francisco until her parents' 1969 divorce, spurred by her mother's allegations that her father had fed her LSD when she was a toddler. Though he denied the claim, full custody of Love was awarded to her mother. In 1970, Carroll relocated with Love to the rural community of Marcola, Oregon where they lived along the Mohawk River while she completed her psychology degree at the University of Oregon. There, Love was adopted by her then-stepfather, Frank Rodriguez. He and her mother had two daughters and a son who died in infancy of a heart defect when Love was ten; they also adopted a boy. 
Love attended a Montessori school in Eugene, where she struggled academically and had trouble making friends. When she was nine, a psychologist noted that she exhibited signs of autism. In 1972, Love's mother divorced Rodriguez, remarried, and moved the family to Nelson, New Zealand. There, she enrolled Love at Nelson College for Girls, from which she was soon expelled. In 1973, she was sent back to live in the United States, where she was raised in Portland, Oregon, by her former stepfather and other family friends. During this time, her mother gave birth to two of Love's other half-brothers. At age fourteen, she was arrested for shoplifting a T-shirt from a Woolworth's, and was sent to Hillcrest Correctional Facility, a juvenile hall in Salem, Oregon. She was subsequently placed in foster care until she became legally emancipated at age sixteen. She supported herself by working illegally as a topless dancer at Mary's Club in downtown Portland, adopting the last name "Love" to conceal her identity, a name she later kept as her surname. She also worked various odd jobs, including picking berries at a farm in Troutdale, Oregon, and as a disc jockey at a gay disco. During this time, she enrolled at Portland State University, studying English and philosophy. Love has said that she "didn't have a lot of social skills," and that she learned them while frequenting gay clubs and spending time with drag queens. In 1981, she was granted a small trust fund that had been left by her adoptive grandparents, which she used to travel to Dublin, Ireland, where her biological father was living. While there, she enrolled in courses at Trinity College, studying theology for two semesters. She would later receive honorary patronage from Trinity's University Philosophical Society in 2010. After leaving Trinity, Love relocated to Liverpool, where she became acquainted with musician Julian Cope and his band, The Teardrop Explodes, and briefly lived in his house. "They kind of took me in", she recalled. "I was sort of a mascot; I would get them coffee or tea during rehearsal." In Cope's autobiography "Head-On", Love is referred to as "the adolescent." After spending a year abroad, Love returned to Portland: "I thought that [going to the United Kingdom] was my peak life experience," she said in 2011, "—that nothing else [would] happen to me again." In 1983, she took short-lived jobs working as an erotic dancer in Japan and later Taiwan, but was deported after the club was shut down by the government. Love began several music projects in the 1980s, first forming Sugar Babylon (later Sugar Babydoll) in Portland with her friends Ursula Wehr and Robin Barbur. In 1982, Love attended a Faith No More concert in San Francisco and convinced the members to let her join as a singer. The group recorded material with Love as a vocalist, but she was subsequently kicked out of the band. According to the Faith No More keyboardist Roddy Bottum, who remained Love's friend in the years after, the band wanted a "male energy." She later formed the Pagan Babies with friend Kat Bjelland, whom she met at the Satyricon club in Portland in 1984. As Love later reflected, "The best thing that ever happened to me in a way, was Kat." Love asked Bjelland to start a band with her as a guitarist, and the two moved to San Francisco in June 1985, where they recruited bassist Jennifer Finch and drummer Janis Tanaka. 
According to Bjelland, "[Courtney] didn't play an instrument at the time" aside from keyboards, so Bjelland would transcribe Love's musical ideas on guitar for her. The group played several house shows and recorded one 4-track demo before disbanding in late 1985. After Pagan Babies, Love moved to Minneapolis, where Bjelland had formed the group Babes in Toyland, and briefly worked as a concert promoter before returning to California. Deciding to shift her focus to acting, Love enrolled at the San Francisco Art Institute and studied film under experimental director George Kuchar. Love featured in one of his short films, titled "Club Vatican". In 1985 she submitted an audition tape for the role of Nancy Spungen in the Sid Vicious biopic "Sid and Nancy" (1986), and was given a minor supporting role by director Alex Cox. After filming "Sid and Nancy" in New York City, she worked at a peep show in Times Square and squatted at the ABC No Rio social center and Pyramid Club in the East Village. The same year, Cox cast her in a leading role in his film "Straight to Hell" (1987), a spaghetti western starring Joe Strummer and Grace Jones filmed in Spain in 1986. The film caught the attention of Andy Warhol, who featured Love in an episode of "Andy Warhol's Fifteen Minutes" with Robbie Nevil. She also had a part in the 1988 Ramones music video for "I Wanna Be Sedated," appearing as a bride among dozens of party guests. In 1988, Love aborted her acting career and left New York, returning to the West Coast, citing the "celebutante" fame she'd attained as the central reason. She returned to stripping in the small town of McMinnville, Oregon, where she was recognized by customers at the bar. This prompted Love to go into isolation, so she relocated to Anchorage, Alaska, where she lived for three months to "gather her thoughts," supporting herself by working at a strip club frequented by local fishermen. "I decided to move to Alaska because I needed to get my shit together and learn how to work", she said in retrospect. "So I went on this sort of vision quest. I got rid of all my earthly possessions. I had my bad little strip clothes and some big sweaters, and I moved into a trailer with a bunch of other strippers." At the end of 1988, Love taught herself to play guitar and relocated to Los Angeles, where she placed an ad in a local music zine: "I want to start a band. My influences are Big Black, Sonic Youth, and Fleetwood Mac." Love recruited lead guitarist Eric Erlandson; Lisa Roberts, her neighbor, as bassist; and drummer Caroline Rue, whom she met at a Gwar concert. Love named the band Hole after a line from Euripides' "Medea" ("There is a hole that pierces right through me") as well as a conversation she had had with her mother, in which she told her that she couldn't live her life "with a hole running through her." Love continued to work at strip clubs in Hollywood (including Jumbo's Clown Room and the Seventh Veil) in the band's formative stages, saving money to purchase backline equipment and a touring van, and rehearsed at a studio in Hollywood that was loaned to her by the Red Hot Chili Peppers. Hole played their first show in November 1989 at Raji's, a rock club in central Hollywood. The band's debut single, "Retard Girl", was issued in April 1990 through the Long Beach indie label Sympathy for the Record Industry, and was given airtime by Rodney Bingenheimer's show on local rock station KROQ. That fall, the band appeared on the cover of "Flipside", a Los Angeles-based punk fanzine. 
In early 1991, the band released their second single, "Dicknail", through Sub Pop Records. With no wave, noise rock and grindcore bands being major influences on Love, Hole's first studio album, "Pretty on the Inside", captured a particularly abrasive sound and contained disturbing lyrics, described by "Q" magazine as "confrontational [and] genuinely uninhibited." The record was released in September 1991 on Caroline Records, produced by Kim Gordon of Sonic Youth with assistant production from Gumball's Don Fleming; Love and Gordon had initially met when Hole opened for Sonic Youth during their promotional tour for "Goo" at the Whisky a Go Go in November 1990. In early 1991, Love sent Gordon a personal letter asking her to produce the record for the band, to which she agreed. Though Love would later say it was "unlistenable" and "[un]melodic," the album was generally well received by indie and punk rock critics and was labeled one of the twenty best albums of the year by "Spin" magazine. It also gained a following in the United Kingdom, charting at number 59 on the UK Albums Chart, and its lead single, "Teenage Whore", entered the country's indie chart at number one. The underlying feminist slant of some of the album's songs led many to mistakenly tag the band as part of the riot grrrl movement, with which Love did not associate. The band toured in support of the record, headlining with Mudhoney in Europe; in the United States, they opened for the Smashing Pumpkins, and performed at CBGB in New York City. After the release of "Pretty on the Inside", Love began dating Nirvana frontman Kurt Cobain and became pregnant. During Love's pregnancy, Hole recorded a cover of "Over the Edge" for a Wipers tribute album, and recorded their fourth single, "Beautiful Son", which was released in April 1993. Love and Cobain married in February 1992 and, after the birth of their daughter Frances Bean Cobain, relocated to Carnation, Washington and then to Seattle. On September 8, 1993, Love and Cobain made their only public performance together at the Rock Against Rape benefit in Hollywood, performing two acoustic duets of "Pennyroyal Tea" and "Where Did You Sleep Last Night." Love also performed electric versions of two new Hole songs, "Doll Parts" and "Miss World," both written for the band's upcoming second album. In October 1993, Hole recorded their second album, "Live Through This", in Atlanta. The album featured a new lineup with bassist Kristen Pfaff and drummer Patty Schemel. "Live Through This" was released on Geffen's subsidiary label DGC in April 1994, four days after Cobain was found dead of suicide in their Seattle home. Two months later, in June 1994, Pfaff died of a heroin overdose; Love recruited Melissa Auf der Maur for the band's impending tour. Love, who was rarely seen in public in the months preceding the tour, split her time between two Washington homes, Atlanta, the Paramount Hotel in New York City, and the Namgyal Buddhist Monastery in New York. "Live Through This" was a commercial and critical success, hitting platinum RIAA certification in April 1995 and receiving numerous critical accolades. The success of the record combined with Cobain's suicide resulted in a high level of publicity for Love, and she was featured on Barbara Walters' "10 Most Fascinating People" in 1995. 
At Hole's performance on August 26, 1994 at the Reading Festival— Love's first public performance following her husband's death— she appeared onstage with outstretched arms, mimicking crucifixion. An MTV review of their performance referred to it as "by turns macabre, frightening and inspirational." John Peel wrote in "The Guardian" that Love's disheveled appearance "would have drawn whistles of astonishment in Bedlam", and that her performance "verged on the heroic ... Love steered her band through a set which dared you to pity either her recent history or that of the band ... the band teetered on the edge of chaos, generating a tension which I cannot remember having felt before from any stage." The band performed a series of riotous concerts during the tour, with Love frequently appearing hysterical onstage, flashing crowds, stage diving, and getting into fights with audience members. One journalist reported that at the band's show in Boston in December 1994, "Love interrupted the music and talked about her deceased husband Kurt Cobain, and also broke out into Tourette syndrome-like rants. The music was great, but the raving was vulgar and offensive, and prompted some of the audience to shout back at her." In retrospect, Love said she "couldn't remember much" of the shows as she was using drugs heavily at the time. On Valentine's Day 1995, Hole performed a well-reviewed acoustic set on "MTV Unplugged" at the Brooklyn Academy of Music, and they continued to tour late into the year, concluding their world tour with an appearance at the 1995 "MTV Video Music Awards", where they were nominated for Best Alternative Video for "Doll Parts". After Hole's world tour concluded in 1996, Love made a return to acting, first in small roles in the Jean-Michel Basquiat biopic "Basquiat" and the drama "Feeling Minnesota" (1996), before landing the co-starring role of Larry Flynt's wife Althea in Miloš Forman's critically acclaimed 1996 film "The People vs. Larry Flynt". Despite Columbia Pictures' reluctance to hire Love due to her troubled past, she received critical acclaim for her performance in the film after its release in December 1996, earning a Golden Globe nomination for Best Actress, and a New York Film Critics Circle Award for Best Supporting Actress. Critic Roger Ebert called her work in the film "quite a performance; Love proves she is not a rock star pretending to act, but a true actress." She won several other awards from various film critic associations for the film. During this time, she also modeled for Versace advertisements and appeared in "Vogue Italia". In late 1997, Hole released a compilation album, "My Body, the Hand Grenade", featuring rare singles and B-sides, and an EP titled "The First Session" which consisted of the band's first recording session in 1990. In September 1998, Hole released their third studio album, "Celebrity Skin", which marked something of a transformation for Love, featuring a stark power pop sound as opposed to the group's earlier punk rock influences. Love divulged her ambition of making an album where "art meets commerce ... there are no compromises made, it has commercial appeal, and it sticks to [our] original vision." She said she was influenced by Neil Young, Fleetwood Mac, and My Bloody Valentine when writing the album. Smashing Pumpkins frontman Billy Corgan co-wrote several songs on the album. "Celebrity Skin" was well received by critics; "Rolling Stone" called it "accessible, fiery and intimate—often at the same time ... 
a basic guitar record that's anything but basic". "Celebrity Skin" went multi-platinum, and topped "Best of Year" lists at "Spin" and "The Village Voice". The album garnered the band their only No. 1 hit single on the Modern Rock Tracks chart with the title track "Celebrity Skin". The band promoted the album through MTV performances and at the 1998 Billboard Music Awards, and were subsequently nominated for three Grammy Awards at the 41st Grammy Awards ceremony. Hole toured with Marilyn Manson on the Beautiful Monsters Tour in 1999, but dropped out after nine dates following a dispute between Love and Manson over production costs, and because Hole was forced to open for Manson under an agreement with Interscope Records. Hole resumed touring with Imperial Teen. Love later claimed that the band also left the tour because of the sexualized treatment of teenage female audience members by Manson and Korn (with whom Hole had also toured in Australia). Love told interviewers at 99X.FM in Atlanta: "What I really don't like—there are certain girls that like us, or like me, who are really messed up... and they do not need to be—they're very young—and they do not need to be taken and raped, or filmed having enema contests... going out into the audience and picking up fourteen and fifteen-year-old girls who obviously cut themselves, and then having to see them in the morning... it's just uncool." Before the release and promotion of "Celebrity Skin", Love and Fender designed a low-priced Squier brand guitar, called Vista Venus. The instrument's shape was inspired by the guitars of Mercury (a little-known independent manufacturer), the Stratocaster, and Rickenbacker's solid-body models; it had a single-coil and a humbucker pickup and was available in 6-string and 12-string versions. In an early 1999 interview, Love said about the Venus: "I wanted a guitar that sounded really warm and pop, but which required just one box to go dirty ... And something that could also be your first band guitar. I didn't want it all teched out. I wanted it real simple, with just one pickup switch." In 1999, Love was awarded an Orville H. Gibson award for Best Female Rock Guitarist. During this time, she also starred opposite Jim Carrey as his longtime partner Lynne Margulies in the Andy Kaufman biopic "Man on the Moon" (1999), which was followed by a role as William S. Burroughs's wife Joan Vollmer in "Beat" (2000) alongside Kiefer Sutherland. After touring for "Celebrity Skin" finished, Auf der Maur left the band to tour with The Smashing Pumpkins; Hole's touring drummer Samantha Maloney left soon after. Love and Erlandson released the single "Be A Man"—an outtake from the "Celebrity Skin" sessions—for the soundtrack of the Oliver Stone film "Any Given Sunday" (1999). The group became dormant for the following two years. In 2000, Love was cast as the lead in John Carpenter's sci-fi horror film "Ghosts of Mars", but backed out of the role after injuring her foot. The part instead went to Natasha Henstridge. In the following years, Love starred in several additional films, including "Julie Johnson" (2001), in which she played Lili Taylor's lesbian lover and for which she won an Outstanding Actress award at L.A.'s Outfest, and the thriller "Trapped" (2002), alongside Kevin Bacon and Charlize Theron. With Hole in disarray, Love began a "punk rock femme supergroup" called Bastard in March 2001, enlisting Schemel, Veruca Salt co-frontwoman Louise Post, and bassist Gina Crosley. 
"She was like, 'Listen, you guys: I've been in my Malibu, manicure, movie star world for two years, alright? I wanna make a record. And let's leave all that grunge shit behind us, eh?'" recalled Post. "We were being so improvisational, and singing together, and with a trust developing between us. It was the shit." The group recorded a demo tape, but by September 2001, Post and Crosley had left the band, with Post citing "unhealthy and unprofessional working conditions." In May 2002, Hole officially announced their breakup amid continuing litigation with Universal Music Group over their record contract. In 2002, Love began composing an album with songwriter Linda Perry of 4 Non Blondes, titled "America's Sweetheart". Love signed with Virgin Records in July 2003, and began recording the album in France shortly after. A total of 32 songs were recorded during these sessions. "America's Sweetheart" was released in February 2004, and received mixed reviews. Charles Aaron of "Spin" called it a "jaw-dropping act of artistic will and a fiery, proper follow-up to 1994's "Live Through This"" and awarded it eight out of ten stars, while Amy Phillips of "The Village Voice" wrote: "[Love is] willing to act out the dream of every teenage brat who ever wanted to have a glamorous, high-profile hissyfit, and she turns those egocentric nervous breakdowns into art. Sure, the art becomes less compelling when you've been pulling the same stunts for a decade. But, honestly, is there anybody out there who fucks up better?" The album sold less than 100,000 copies. Love has publicly expressed her regret over the record, reasoning that her drug issues at the time were to blame: ""America's Sweetheart" was my one true piece of shit. It has no cohesive thread. I just hate it," she commented in 2014. Shortly after the record was released, she told Kurt Loder on "TRL": "I cannot exist as a solo artist. It's a joke." After the release of "America's Sweetheart", Love collaborated on a semi-autobiographical manga titled "Princess Ai" (Japanese: プリンセス·アイ物語), which she co-wrote with Stu Levy. The manga was illustrated by Misaho Kujiradou and Ai Yazawa, and was released in three volumes in both the United States and Japan between 2004 and 2006. In 2005, Love was ordered into lockdown rehab by a California judge after a series of legal issues and controlled substance charges. After her release in 2006, she published a memoir, "", and started recording what would become her second solo album, "How Dirty Girls Get Clean", collaborating again with Perry and Smashing Pumpkins vocalist/guitarist Billy Corgan. Love had written several songs, including an anti-cocaine song titled "Loser Dust", during her time in rehab in 2005. She told "Billboard": "My hand-eye coordination was so bad [after the drug use], I didn't even know chords anymore. It was like my fingers were frozen. And I wasn't allowed to make noise [in rehab] ... I never thought I would work again." Tracks and demos for the album (planned for release in 2008) were leaked on the internet in 2006, and a documentary entitled "The Return of Courtney Love", detailing the making of the album, aired on the British television network More4 in the fall of that year. A rough acoustic version of "Never Go Hungry Again", recorded during an interview for "The Times" in November, was also released. Incomplete audio clips of the song "Samantha", originating from an interview with NPR, were distributed on the internet in 2007. 
In June 2009, "NME" published an article detailing Love's plan to reunite Hole and release a new album, "Nobody's Daughter". In response, former Hole guitarist Eric Erlandson stated in "Spin" magazine that contractually no reunion could take place without his involvement; therefore "Nobody's Daughter" would remain Love's solo record, as opposed to a "Hole" record. Love responded to Erlandson's comments in a Twitter post, claiming "he's out of his mind, Hole is my band, my name, and my Trademark". "Nobody's Daughter" was released worldwide as a Hole album on April 27, 2010. For the new line-up, Love recruited guitarist Micko Larkin, Shawn Dailey (bass guitar), and Stu Fisher (drums, percussion). "Nobody's Daughter" featured material written and recorded for Love's unfinished solo album, "How Dirty Girls Get Clean", including "Pacific Coast Highway", "Letter to God", "Samantha", and "Never Go Hungry", although they were re-produced in the studio with Larkin and engineer Michael Beinhorn. The album's subject matter was largely centered on Love's tumultuous life between 2003 and 2007, and featured a polished folk rock sound, and more acoustic guitar work than previous Hole albums. The first single from "Nobody's Daughter" was "Skinny Little Bitch", released to promote the album in March 2010. The album received mixed reviews. Robert Sheffield of "Rolling Stone" gave the album three out of five, saying Love "worked hard on these songs, instead of just babbling a bunch of druggy bullshit and assuming people would buy it, the way she did on her 2004 flop, "America's Sweetheart"". Sal Cinquemani of "Slant Magazine" also gave the album three out of five: "It's Marianne Faithfull's substance-ravaged voice that comes to mind most often while listening to songs like 'Honey' and 'For Once in Your Life'. The latter track is, in fact, one of Love's most raw and vulnerable vocal performances to date ... the song offers a rare glimpse into the mind of a woman who, for the last 15 years, has been as famous for being a rock star as she's been for being a victim." Love and the band toured internationally from 2010 into late 2012 promoting the record, with their pre-release shows in London and at South by Southwest receiving critical acclaim. After 2012, Love decided to drop the Hole name and perform as a solo artist. In May 2012, Love debuted an art collection at Fred Torres Collaborations in New York titled ""And She's Not Even Pretty"", which contained over forty drawings and paintings by Love composed in ink, colored pencil, pastels, and watercolors. Later in the year, she collaborated with Michael Stipe on the track "Rio Grande" for Johnny Depp's sea shanty album "", and in 2013, co-wrote and contributed vocals on "Rat A Tat" from Fall Out Boy's album "Save Rock and Roll"; she also appeared in the music video for the track. After solo performances in December 2012 and January 2013, Love appeared in advertisements for Yves Saint Laurent alongside Kim Gordon and Ariel Pink. Love completed a solo tour of North America in mid-2013, which was purported to be in promotion of an upcoming solo album; however, it was ultimately dubbed a "greatest hits" tour, and featured songs from Love's and Hole's back catalogue. Love told "Billboard" at the time that she had recorded eight songs in the studio. On April 22, 2014, Love debuted the song "You Know My Name" on BBC Radio 6 to promote her tour of the United Kingdom. 
It was released as a double A-side single with the song "Wedding Day" on May 4, 2014, on her own label Cherry Forever Records via Kobalt Label Services. The tracks were produced by Michael Beinhorn, and feature Tommy Lee on drums. In an interview with the BBC, Love revealed that she and former Hole guitarist Eric Erlandson had reconciled, and had been rehearsing new material together, along with former bassist Melissa Auf der Maur and drummer Patty Schemel, though she did not confirm a reunion of the band. On May 1, 2014, in an interview with "Pitchfork", Love commented further on the possibility of Hole reuniting, saying: "I'm not going to commit to it happening, because we want an element of surprise. There's a lot of "i"s to be dotted and "t"s to be crossed." In 2013 and 2014, Love worked with rock journalist Anthony Bozza to co-write her memoir, to be titled "Girl with the Most Cake". Bozza submitted a manuscript to Love in 2014 that he was quite pleased with, later calling it "possibly the greatest thing I'll ever do with anybody." Love, however, disliked it, and said in an interview that the text made her sound like she was "jacked on coffee and sugar in a really bad mood." Love refused to pay Bozza for the work he had done, and he sued her in 2015 for nonpayment. In 2014, Love was cast in supporting parts in several television series, including the FX series "Sons of Anarchy", "Revenge", and Lee Daniels' network series "Empire", in which she played the recurring guest role of Elle Dallas. The track "Walk Out on Me" featuring Love was included on the "" album, which debuted at number 1 on the Billboard 200. Alexis Petridis of "The Guardian" praised the track, saying: "The idea of Courtney Love singing a ballad with a group of gospel singers seems faintly terrifying ... the reality is brilliant. Love's voice fits the careworn lyrics, effortlessly summoning the kind of ravaged darkness that Lana Del Rey nearly ruptures herself trying to conjure up." In addition to television acting, Love collaborated with theater producer Todd Almond, starring in "Kansas City Choir Boy", a collaborative "pop opera" which was staged at the Manhattan arts center Here during its annual "Prototype" festival in January 2015. Charles Isherwood of "The New York Times" praised her performance, noting a "soft-edged and bewitching" stage presence, adding: "Her voice, never the most supple or rangy of instruments, retains the singular sound that made her an electrifying front woman for the band Hole: a single sustained note can seem to simultaneously contain a plea, a wound and a threat." The show toured later in the year, with performances in Boston and Los Angeles. In early 2015, Love joined Lana Del Rey on her Endless Summer Tour, performing as an opener on the tour's eight West Coast shows. During her tenure on Del Rey's tour, Love debuted a new single, "Miss Narcissist", released on Wavves' independent label Ghost Ramp. She was also cast in a supporting role in James Franco's film "The Long Home", based on William Gay's novel of the same name, marking her first film role in over ten years. In January 2016, Love released a clothing line in collaboration with Sophia Amoruso titled "Love, Courtney", featuring eighteen pieces reflecting Love's style over the course of her career. In November 2016, she began filming the pilot for "A Midsummer's Nightmare", a Shakespeare anthology series adapted for Lifetime. 
She then starred as Kitty Menendez in "", a biopic television film based on the lives of Lyle and Erik Menendez, which premiered on Lifetime in June 2017. The same year, she was cast in Justin Kelly's biopic "JT LeRoy", opposite Laura Dern, Kristen Stewart, Diane Kruger, and Kelvin Harrison Jr.. In March 2018, Love appeared in the music video for Marilyn Manson's "Tattooed in Reverse," which she followed with an April 5 guest-judge appearance on "RuPaul's Drag Race". Love has been candid about her diverse musical influences, the earliest being Patti Smith, The Runaways, and The Pretenders, artists she discovered while in juvenile hall at age fifteen. As a child, her first exposure to music was records that her parents retrieved each month through Columbia Record Club. The first record Love owned was Leonard Cohen's "Songs of Leonard Cohen" (1967), which she obtained from her mother: "He was so lyric-conscious and morbid, and I was a pretty morbid kid," she recalled. As a teenager, she named Flipper, Kate Bush, Soft Cell, Joni Mitchell, Laura Nyro, Lou Reed, and Dead Kennedys among her favorite artists. She has also spoken of her appreciation for new wave and post-punk bands she became acquainted with while living as a teenager in the United Kingdom, such as Echo and the Bunnymen, The Smiths, Siouxsie and the Banshees, Television, Bauhaus, and Joy Division. While in Dublin at age fifteen, Love attended a Virgin Prunes concert, an event she credited as being a pivotal influence: "I had never seen so much sex, snarl, poetry, evil, restraint, grace, filth, raw power and the very essence of rock and roll," she recalled. "[I had seen] U2 [who] gave me lashes of love and inspiration, and a few nights later the Virgin Prunes fuckedmeup." Decades later, in 2009, Love introduced the band's frontman Gavin Friday at a Carnegie Hall event, and performed a song with him. Love's diverse genre interests were illustrated in a 1991 interview with "Flipside", in which she stated: "There's a part of me that wants to have a grindcore band and another that wants to have a Raspberries-type pop band." Discussing the abrasive sound of Hole's debut album, she said she felt she had to "catch up with all my hip peers who'd gone all indie on me, and who made fun of me for liking R.E.M. and The Smiths." She has also embraced the influence of experimental artists and punk rock groups, including Sonic Youth, Swans, Big Black, Diamanda Galás, the Germs, and The Stooges. While writing "Celebrity Skin", she drew influence from Neil Young and My Bloody Valentine. She has also cited her contemporary PJ Harvey as an influence, saying: "The one rock star that makes me know I'm shit is Polly Harvey. I'm nothing next to the purity that she experiences." In 2014, she named "Bitter Sweet Symphony" by The Verve as one of her favorite songs. Literature and poetry have often been a major influence on her songwriting; Love said she had "always wanted to be a poet, but there was no money in it." She has named the works of T.S. Eliot and Charles Baudelaire as influential, and referenced works by Dante Rossetti, William Shakespeare, Rudyard Kipling, and Anne Sexton in her lyrics. Musically, Love's work with Hole and her solo efforts have been characterized as alternative rock; Hole's early material, however, was described by critics as being stylistically closer to grindcore and aggressive punk rock. 
"Spin"s October 1991 review of Hole's first album noted Love's layering of harsh and abrasive riffs buried more sophisticated musical arrangements. In 1998, she stated that Hole had "always been a pop band. We always had a subtext of pop. I always talked about it, if you go back ... what'll sound like some weird Sonic Youth tuning back then to you was sounding like the Raspberries to me, in my demented pop framework." Love's lyrical content is composed from a female's point of view, and her lyrics have been described as "literate and mordant" and noted by scholars for "articulating a third-wave feminist consciousness." Simon Reynolds, in reviewing Hole's debut album, noted: "Ms. Love's songs explore the full spectrum of female emotions, from vulnerability to rage. The songs are fueled by adolescent traumas, feelings of disgust about the body, passionate friendships with women and the desire to escape domesticity. Her lyrical style could be described as emotional nudism." Journalist and critic Kim France, in critiquing Love's lyrics, referred to her as a "dark genius" and likened her work to that of Anne Sexton. According to a 2014 interview, lyrics have remained the most important component of songwriting for Love: "I want it to look just as good on the page as it would if it was in a poetry book". A great deal of her songwriting has been diaristic in nature. Common themes present in Love's songs during her early career included body image, rape, suicide, conformity, elitism, pregnancy, prostitution, and death. In a 1991 interview with Everett True, she said: "I try to place [beautiful imagery] next to fucked up imagery, because that's how I view things ... I sometimes feel that no one's taken the time to write about certain things in rock, that there's a certain female point of view that's never been given space." Critics have noted that Love's later musical work is more lyrically introspective. "Celebrity Skin" and "America's Sweetheart" are lyrically centered on celebrity life, Hollywood, and drug addiction, while continuing Love's interest in vanity and body image. "Nobody's Daughter" was lyrically reflective of Love's past relationships and her struggle for sobriety, with the majority of its lyrics written while she was in rehab in 2006. Love possesses a contralto vocal range, and her vocal style has been described as "raw and distinctive." According to Love, she never wanted to be a singer, but rather aspired to be a skilled guitarist: "I'm such a lazy bastard though that I never did that," she said. "I was always the only person with the nerve to sing, and so I got stuck with it." She has been regularly noted by critics for her husky vocals as well as her "banshee[-like]" screaming abilities. Her vocals have been compared to those of Johnny Rotten, and David Fricke of "Rolling Stone" described them as "lung-busting" and "a corrosive, lunatic wail." Upon the release of Hole's 2010 album, "Nobody's Daughter", Amanda Petrusich of "Pitchfork" compared Love's raspy, unpolished vocals to those of Bob Dylan. She has played a variety of Fender guitars throughout her career, including a Jaguar and a vintage 1965 Jazzmaster; the latter was purchased by the Hard Rock Cafe and is on display in New York City. Between 1989 and 1991, Love primarily played a Rickenbacker 425 because she "preferred the 3/4 neck," but she destroyed the guitar onstage at a 1991 concert opening for The Smashing Pumpkins. 
In the mid-1990s, she often played a guitar made by Mercury, an obscure company that manufactured custom guitars, as well as a Univox Hi-Flier. Fender's Vista Venus, designed by Love in 1998, was partially inspired by Rickenbacker guitars as well as her Mercury. During her 2010 and more recent tours, Love has played a Rickenbacker 360 onstage. Her setup has included Fender tube gear, Matchless, Ampeg, Silvertone, and a solid-state 1976 Randall Commander. Love has referred to herself as "a shit guitar player," further commenting in a 2014 interview: "I can still write a song, but [the guitar playing] sounds like shit ... I used to be a good rhythm player but I am no longer dependable." Throughout her career, she has also garnered a reputation for unpredictable live shows. In the 1990s, her performances with Hole were characterized by confrontational behavior, with Love stage diving, smashing guitars or throwing them into the audience, wandering into the crowd at the end of sets, and engaging in sometimes incoherent rants. Critics and journalists have noted Love for her comical, often stream-of-consciousness-like stage banter. A 2010 review of a live performance noted that Love's onstage "one-liners [were] worthy of the Comedy Store." Love's candidness about her struggles with drug addiction, along with her public legal issues, has made her the subject of significant media interest over the course of her career. Journalist Neil Strauss, commenting on her in 1995, noted that she had "acquired a strange distinction reserved for Presidents, major felons and celebrity widows: every word she said and wrote became newsworthy." Several journalists have compared her to Yoko Ono, with some branding her the "Yoko Ono of Generation X", likening her relationship with Cobain to that of John Lennon and Ono. Commenting on her relationship with the media in 2014, "The Guardian" referred to Love as "a PR team's worst nightmare: brash, unforgiving, cocky, lovable and talented—which also makes her the ideal stage personality." Her first major media exposure was a 1992 profile of her and her husband Kurt Cobain for "Vanity Fair" by journalist Lynn Hirschberg, entitled "Strange Love." After being asked to participate in a cover story for the magazine, Love was urged by her manager to accept the request. In the year prior, Love had been addicted to heroin along with Cobain, and the profile, published in September 1992, painted the couple in an unflattering light and suggested that Love had been addicted to heroin during her pregnancy. The article ultimately resulted in an investigation by the Los Angeles Department of Children and Family Services, and custody of Love and Cobain's newborn daughter, Frances, was temporarily awarded to Love's sister, Jaimee. Love claimed she was misquoted by Hirschberg, and asserted that she had immediately quit using heroin during her first trimester after she discovered she was pregnant. Love would later claim that the publication of the article had serious implications for her marriage as well as Cobain's mental state, suggesting it was a factor in his suicide. After Cobain committed suicide in April 1994, public interest in Love heightened; simultaneously, her erratic onstage behavior and various legal troubles during Hole's 1994–1995 world tour compounded the media coverage of her. She would later say that she retained little memory of this time period, blaming the fact that she had been using large quantities of heroin and Rohypnol at the time. 
In January 1995, she was arrested in Melbourne for disrupting a Qantas Airways flight after getting into an argument with a stewardess. On July 4, 1995, at the Lollapalooza Festival in George, Washington, she punched musician Kathleen Hanna in the face after alleging she had made a joke about her daughter. Love was charged with assault, to which she pleaded guilty, and was sentenced to anger management classes. In November 1995, two male teenagers attempted to sue Love for allegedly punching them during a Hole concert they attended in Orlando, Florida in March 1995. The judge ultimately dismissed the case on grounds that the teens "weren't exposed to any greater amount of violence than could reasonably be expected at an alternative rock concert." In 1996, Love went through rehabilitation and quit using heroin at the insistence of director Miloš Forman, who had cast her in a leading role in "The People vs. Larry Flynt". She was ordered to take multiple urine tests under the supervision of Columbia Pictures while filming the movie, and passed all of them. The film saw Love nominated for a Golden Globe award, and during this time she maintained what the media noted as a more decorous public image, though she would attract attention in May 1998 after punching "Los Angeles Times" journalist Belissa Cohen in the face at a party; the suit was settled out of court for an undisclosed sum. In February 2003, she was banned from Virgin Airlines by founder Richard Branson after being arrested at Heathrow Airport for disrupting a flight. In October of that year, in the midst of what Love would later admit was a cocaine and prescription drug addiction, she was arrested in Los Angeles after breaking several windows of her producer and then-boyfriend James Barber's home, and was charged with being under the influence of a controlled substance; the ordeal resulted in her temporarily losing custody of her daughter. On March 17, 2004, Love appeared on the "Late Show with David Letterman" to promote her recent solo album. Her appearance drew media coverage when, during the interview segment, she lifted her shirt multiple times, flashed Letterman, and stood on his desk. A "New York Times" article noted: "The episode was not altogether surprising for Ms. Love, 39, whose most public moments have veered from extreme pathos—like the time she read the suicide note of her famous husband, Kurt Cobain, on MTV—to angry feminism to catfights to incoherent ranting." Hours later, in the early morning of March 18, Love was arrested in Manhattan for allegedly striking a 23-year-old male fan with a microphone stand during a small concert she performed at an East Village venue. She was released within hours of the incident, enabling her to perform a concert scheduled for the following evening at the Bowery Ballroom. Four days later, on March 22, she called in multiple times to "The Howard Stern Show", claiming in broadcast conversations with Stern that the incident had not occurred, and that actress Natasha Lyonne, who was at the concert, was told by the alleged victim that he had been paid $10,000 to file a false claim leading to Love's arrest. On July 9, 2004, Love's 40th birthday, she was arrested for failing to make a court appearance for the March 2004 charges, and taken to Bellevue Hospital, allegedly incoherent, where she was placed on a 72-hour watch. According to police, she was believed to be a potential "danger to herself," but was deemed mentally sound and released to a rehab facility two days later. 
Amidst public criticism and press coverage, comedian Margaret Cho published an opinion piece on her official website in defense of Love, titled "Courtney [Love] Deserves Better from Feminists," arguing that negative associations of Love with her drug and personal problems (including from feminists) overshadowed discussion of her music and, more importantly, her personal well-being. Love would ultimately plead guilty in October 2004 to disorderly conduct over the alleged striking of the audience member. Her appearance as a roaster on the "Comedy Central Roast" of Pamela Anderson in August 2005 brought Love further media attention due to her appearing visibly intoxicated and disheveled. One review of the program noted that Love "acted as if she belonged in a [psychiatric] institution." Six days after the show's airing, she was sentenced to a 28-day lockdown rehab program for allegedly being under the influence of a controlled substance, violating her probation. To avoid jail time, she accepted an additional 180-day rehab sentence in September 2005. In November 2005, after successfully completing the program, Love was discharged from the rehab center under the provision that she complete further outpatient rehab. In interviews in the following years, she would admit to having been dealing with addictions to prescription drugs, cocaine, and crack cocaine during this time. She has stated she has been sober since completing rehabilitation in 2007, and cited her Buddhist practice as integral to her sobriety. In 2009, fashion designer Dawn Simorangkir brought a libel suit against Love concerning a defamatory post Love made on her Twitter account, which was settled for $450,000. Six years later, Simorangkir filed another lawsuit against Love for further defamatory Twitter posts, and Love paid a further $350,000 in recompense. A similar suit was brought against Love in 2014 by her former attorney Rhonda Holmes, who also accused Love of online defamation and sought $8 million in damages. It was the first case of alleged Twitter-based libel in U.S. history to make it to trial. The jury, however, found in Love's favor. In 1989, Love married James Moreland (vocalist of The Leaving Trains) in Las Vegas, but has said that Moreland was a transvestite and that their marriage was "a joke," ending in an annulment filed by Love several months later. After forming Hole in 1989, Love and bandmate Eric Erlandson had a romantic relationship for over a year, and she also briefly dated Billy Corgan of the rock band The Smashing Pumpkins in 1991, with whom she has maintained a volatile friendship over the years. Her most documented romantic relationship was with Kurt Cobain. It is uncertain when they first met, and there are varying accounts of how they came to know one another. Journalist Michael Azerrad states that the two met in 1989 at the Satyricon nightclub in Portland, Oregon, though Cobain biographer Charles Cross has claimed the date was actually February 12, 1990, and that Cobain playfully wrestled Love to the floor after she commented to him in passing that he looked like Dave Pirner of Soul Asylum. According to Love, she first met him at a Dharma Bums show in Portland, while Love's bandmate Eric Erlandson stated that both he and Love were formally introduced to Cobain in a parking lot after a Butthole Surfers concert at the Hollywood Palladium on May 17, 1991. 
Sometime in late 1991, Love and Cobain became reacquainted through Jennifer Finch, one of Love's longtime friends and former bandmates. After dating for several months, Love and Cobain were married on Waikiki Beach in Honolulu, Hawaii on February 24, 1992. Love wore a satin and lace dress once owned by actress Frances Farmer, and Cobain wore plaid pajamas. Six months later, on August 18, the couple's only child, a daughter, Frances Bean Cobain, was born. In April 1994, Cobain died of a self-inflicted gunshot wound in their Seattle home while Love was in rehab in Los Angeles. During their marriage, and after Cobain's death, Love became something of a hate-figure among some of Cobain's fans. After his cremation, Love divided portions of Cobain's ashes; she kept some in a teddy bear and some in an urn. Another portion of his ashes was taken by Love to the Namgyal Buddhist Monastery in Ithaca, New York in 1994, where they were ceremonially blessed by Buddhist monks and mixed into clay which was made into memorial sculptures. After Cobain's death, between 1996 and 1999, Love dated her "The People vs. Larry Flynt" co-star Edward Norton, and was also linked to comedian Steve Coogan in the early 2000s. Love has practiced several religions, including Catholicism, Episcopalianism and New Age religions, but has said that Buddhism is the "most transcendent" path for her. She has studied and practiced both Tibetan and Nichiren Buddhism since 1989. She has also openly discussed suffering from depression and drug addiction throughout her life, as well as self-harming. Love is a supporter of the Democratic Party. In 2000, she gave a speech at the Million Mom March to advocate stricter mental health evaluation for gun ownership in the United States. Love has also advocated for LGBT rights throughout her career: In 1997, she used her award speech at the "MTV Fashion Awards" to advocate for acceptance of the LGBT community. Love identifies as a feminist. In 2000, Love publicly advocated for reform of the record industry in a personal letter published by "Salon". In the letter, Love said: "It's not piracy when kids swap music over the Internet using Napster or Gnutella or Freenet or iMesh or beaming their CDs into a My.MP3.com or MyPlay.com music locker. It's piracy when those guys that run those companies make side deals with the cartel lawyers and label heads so that they can be 'the label's' friend', and not the artists." In a subsequent interview with Carrie Fisher, she said that she was interested in starting a union for recording artists, and also discussed race relations in the music industry, advocating for record companies to "put money back into the black community [whom] white people have been stealing from for years." In 1993, Love and husband Kurt Cobain performed an acoustic set together at the Rock Against Rape benefit in Los Angeles, which raised awareness and provided resources for victims of sexual abuse. She has also contributed to amfAR's AIDS research benefits and held live musical performances at their events. In 2009, Love performed a benefit concert for the RED Campaign at Carnegie Hall alongside Laurie Anderson, Rufus Wainwright, and Scarlett Johansson, with proceeds going to AIDS research. In May 2011, she attended Mariska Hargitay's Joyful Heart Foundation event for victims of child abuse, rape, and domestic violence, donating six of her husband Kurt Cobain's personal vinyl records for auction. 
Love has also supported the Sophie Lancaster Foundation as well as LGBT youth charities, specifically with the Los Angeles Gay and Lesbian Center, where she has taken part in several of the center's "An Evening with Women" events. The proceeds of the event help provide food and shelter for homeless youth; services for seniors; legal assistance; domestic violence services; health and mental health services, and cultural arts programs. Love participated with Linda Perry for the event in 2012, and performed alongside Aimee Mann and comedian Wanda Sykes. Speaking on her collaboration on the event, Love said: "Seven thousand kids in Los Angeles a year go out on the street, and forty percent of those kids are gay, lesbian, or transgendered. They come out to their parents, and become homeless... for whatever reason, I don't really know why, but gay men have a lot of foundations—I've played many of them—but the lesbian side of it doesn't have as much money and/or donors, so we're excited that this has grown to cover women and women's affairs." Love has had a notable impact on female-fronted alternative acts and performers. She has been cited as influential on young female instrumentalists in particular, having once infamously proclaimed: "I want every girl in the world to pick up a guitar and start screaming... I strap on that motherfucking guitar and you cannot fuck with me. That's my feeling." In "The Electric Guitar: A History of an American Icon", it is noted: With over 3 million records sold in the United States alone, Hole became one of the most successful rock bands of all time fronted by a woman. VH1 ranked Love 69 in their list of "The 100 Greatest Women in Music History" in 2012. In 2015, the "Phoenix New Times" declared Love the number one greatest female rock star of all time, writing: "To build a perfect rock star, there are several crucial ingredients: musical talent, physical attractiveness, tumultuous relationships, substance abuse, and public meltdowns, just to name a few. These days, Love seems to have rebounded from her epic tailspin and has leveled out in a slightly more normal manner, but there's no doubt that her life to date is the type of story people wouldn't believe in a novel or a movie." Among the alternative musicians who have cited Love as an influence are Scout Niblett; Brody Dalle of The Distillers; Dee Dee Penny of Dum Dum Girls; and Nine Black Alps. Contemporary female pop artists Lana Del Rey, Avril Lavigne, Tove Lo, and Sky Ferreira have also cited Love as an influence. Love has frequently been recognized as the most high-profile contributor of feminist music during the 1990s, and for "subverting [the] mainstream expectations of how a woman should look, act, and sound." According to music journalist Maria Raha, "Hole was the highest-profile female-fronted band of the '90s to openly and directly sing about feminism." Patti Smith, a major influence of Love's, also praised her, saying: "I hate genderizing things ... [but] when I heard Hole, I was amazed to hear a girl sing like that. Janis Joplin was her own thing; she was into Big Mama Thornton and Bessie Smith. But what Courtney Love does, I'd never heard a girl do that." She has also been noted as a gay icon since the mid-1990s, and has jokingly referred to her fanbase as consisting of "females, gay guys, and a few advanced, evolved heterosexual men." Love's aesthetic image, particularly in the early 1990s, also became influential, and was dubbed "kinderwhore" by critics and media. 
The subversive fashion mainly consisted of vintage babydoll dresses accompanied by smeared makeup and red lipstick; MTV reporter Kurt Loder described Love as looking like "a debauched rag doll" onstage. Love later said she had been influenced by the fashion of Chrissy Amphlett of the Divinyls. Love has been depicted in popular culture across various mediums: Artist Barbara Kruger used one of Love's quotes on her New York City bus project, and the indie pop punk band The Muffs named their second album "Blonder and Blonder" (1995) after a quote by Love, while a recording of her talking about a stolen dress appears as the track "Love" on the band's 2000 compilation album "Hamburger". She was also the basis of the character "Courtney" in Michael Hornburg's 1995 novel "Bongwater"; the novel, set in Portland, Oregon, is based on Hornburg's teenage years living there, where he had known her. The novel was adapted as the 1998 film "Bongwater", in which the character was renamed "Serena", played by Alicia Witt. In December 1995, Love was parodied by Molly Shannon in a "Saturday Night Live" skit entitled "The Courtney Love Show," in which Shannon (as Love) recklessly interviews Julie Andrews (portrayed by Christine Baranski). In 1999, Love was depicted in "The Simpsons" episode "" appearing on a Wheaties cereal box. There is also a band named after her. Hole discography Courtney Love discography Filmography Bibliography Californium Californium is a radioactive chemical element with symbol Cf and atomic number 98. The element was first synthesized in 1950 at the Lawrence Berkeley National Laboratory (then the University of California Radiation Laboratory), by bombarding curium with alpha particles (helium-4 ions). It is an actinide element, the sixth transuranium element to be synthesized, and has the second-highest atomic mass of all the elements that have been produced in amounts large enough to see with the unaided eye (after einsteinium). The element was named after the university and the state of California. Two crystalline forms exist for californium under normal pressure: one above and one below . A third form exists at high pressure. Californium slowly tarnishes in air at room temperature. Compounds of californium are dominated by the +3 oxidation state. The most stable of californium's twenty known isotopes is californium-251, which has a half-life of 898 years. This short half-life means the element is not found in significant quantities in the Earth's crust. Californium-252, with a half-life of about 2.64 years, is the most common isotope used and is produced at the Oak Ridge National Laboratory in the United States and the Research Institute of Atomic Reactors in Russia. Californium is one of the few transuranium elements that have practical applications. Most of these applications exploit the property of certain isotopes of californium to emit neutrons. For example, californium can be used to help start up nuclear reactors, and it is employed as a source of neutrons when studying materials using neutron diffraction and neutron spectroscopy. Californium can also be used in nuclear synthesis of higher mass elements; oganesson (element 118) was synthesized by bombarding californium-249 atoms with calcium-48 ions. Users of californium must take into account radiological concerns and the element's ability to disrupt the formation of red blood cells by bioaccumulating in skeletal tissue. Californium is a silvery white actinide metal with a melting point of and an estimated boiling point of . 
The pure metal is malleable and is easily cut with a razor blade. Californium metal starts to vaporize above when exposed to a vacuum. Below californium metal is either ferromagnetic or ferrimagnetic (it acts like a magnet), between 48 and 66 K it is antiferromagnetic (an intermediate state), and above it is paramagnetic (external magnetic fields can make it magnetic). It forms alloys with lanthanide metals but little is known about them. The element has two crystalline forms under 1 standard atmosphere of pressure: a double-hexagonal close-packed form dubbed alpha (α) and a face-centered cubic form designated beta (β). The α form exists below 600–800 °C with a density of 15.10 g/cm³ and the β form exists above 600–800 °C with a density of 8.74 g/cm³. At 48 GPa of pressure the β form changes into an orthorhombic crystal system due to delocalization of the atom's 5f electrons, which frees them to bond. The bulk modulus of a material is a measure of its resistance to uniform pressure. Californium's bulk modulus is , which is similar to trivalent lanthanide metals but smaller than more familiar metals, such as aluminium (70 GPa). Californium exhibits oxidation states of 4, 3, or 2. It typically forms eight or nine bonds to surrounding atoms or ions. Its chemical properties are predicted to be similar to other primarily 3+ valence actinide elements and the element dysprosium, which is the lanthanide above californium in the periodic table. The element slowly tarnishes in air at room temperature, with the rate increasing when moisture is added. Californium reacts when heated with hydrogen, nitrogen, or a chalcogen (oxygen family element); reactions with dry hydrogen and aqueous mineral acids are rapid. Californium is only water-soluble as the californium(III) cation. Attempts to reduce or oxidize the +3 ion in solution have failed. The element forms a water-soluble chloride, nitrate, perchlorate, and sulfate and is precipitated as a fluoride, oxalate, or hydroxide. Californium is the heaviest actinide to exhibit covalent properties, as is observed in the californium borate. Twenty radioisotopes of californium have been characterized, the most stable being californium-251 with a half-life of 898 years, californium-249 with a half-life of 351 years, californium-250 with a half-life of 13.08 years, and californium-252 with a half-life of 2.645 years. All the remaining isotopes have half-lives shorter than a year, and the majority of these have half-lives shorter than 20 minutes. The isotopes of californium range in mass number from 237 to 256. Californium-249 is formed from the beta decay of berkelium-249, and most other californium isotopes are made by subjecting berkelium to intense neutron radiation in a nuclear reactor. Although californium-251 has the longest half-life, its production yield is only 10% due to its tendency to collect neutrons (high neutron capture) and its tendency to interact with other particles (high neutron cross-section). Californium-252 is a very strong neutron emitter, which makes it extremely radioactive and harmful. Californium-252 undergoes alpha decay 96.9% of the time to form curium-248 while the remaining 3.1% of decays are spontaneous fission. One microgram (µg) of californium-252 emits 2.3 million neutrons per second, an average of 3.7 neutrons per spontaneous fission. Most of the other isotopes of californium decay to isotopes of curium (atomic number 96) via alpha decay. 
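As a rough cross-check of the neutron-output figure quoted above for californium-252, the rate per microgram can be estimated from the half-life, the spontaneous-fission branch, and the average neutron multiplicity given in this section. The short Python sketch below is illustrative only; the constants are those quoted in the text rather than values taken from a nuclear data library.

```python
import math

# Constants quoted in the text for californium-252 (illustrative values)
HALF_LIFE_YEARS = 2.645          # half-life of Cf-252
SF_BRANCH = 0.031                # 3.1% of decays are spontaneous fission
NEUTRONS_PER_FISSION = 3.7       # average neutrons per spontaneous fission
MOLAR_MASS = 252.0               # g/mol
AVOGADRO = 6.022e23              # atoms per mole
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def neutrons_per_second(mass_grams: float) -> float:
    """Estimate the neutron emission rate of a fresh Cf-252 sample."""
    atoms = mass_grams / MOLAR_MASS * AVOGADRO
    decay_constant = math.log(2) / (HALF_LIFE_YEARS * SECONDS_PER_YEAR)
    decays_per_second = atoms * decay_constant
    fissions_per_second = decays_per_second * SF_BRANCH
    return fissions_per_second * NEUTRONS_PER_FISSION

# One microgram of Cf-252:
print(f"{neutrons_per_second(1e-6):.2e} neutrons per second")
# roughly 2.3e+06, matching the figure quoted in the text
```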
Californium was first synthesized at the University of California Radiation Laboratory in Berkeley, by the physics researchers Stanley G. Thompson, Kenneth Street, Jr., Albert Ghiorso, and Glenn T. Seaborg on or about February 9, 1950. It was the sixth transuranium element to be discovered; the team announced its discovery on March 17, 1950. To produce californium, a microgram-sized target of curium-242 () was bombarded with 35 MeV-alpha particles () in the cyclotron at Berkeley, which produced californium-245 () plus one free neutron (). Only about 5,000 atoms of californium were produced in this experiment, and these atoms had a half-life of 44 minutes. The discoverers named the new element after the university and the state. This was a break from the convention used for elements 95 to 97, which drew inspiration from how the elements directly above them in the periodic table were named. However, the element directly above element 98 in the periodic table, dysprosium, has a name that simply means "hard to get at" so the researchers decided to set aside the informal naming convention. They added that "the best we can do is to point out [that] ... searchers a century ago found it difficult to get to California." Weighable quantities of californium were first produced by the irradiation of plutonium targets at the Materials Testing Reactor at the National Reactor Testing Station in eastern Idaho; and these findings were reported in 1954. The high spontaneous fission rate of californium-252 was observed in these samples. The first experiment with californium in concentrated form occurred in 1958. The isotopes californium-249 to californium-252 were isolated that same year from a sample of plutonium-239 that had been irradiated with neutrons in a nuclear reactor for five years. Two years later, in 1960, Burris Cunningham and James Wallman of the Lawrence Radiation Laboratory of the University of California created the first californium compounds—californium trichloride, californium oxychloride, and californium oxide—by treating californium with steam and hydrochloric acid. The High Flux Isotope Reactor (HFIR) at the Oak Ridge National Laboratory (ORNL) in Oak Ridge, Tennessee, started producing small batches of californium in the 1960s. By 1995, the HFIR nominally produced of californium annually. Plutonium supplied by the United Kingdom to the United States under the 1958 US-UK Mutual Defence Agreement was used for californium production. The Atomic Energy Commission sold californium-252 to industrial and academic customers in the early 1970s for $10 per microgram and an average of of californium-252 were shipped each year from 1970 to 1990. Californium metal was first prepared in 1974 by Haire and Baybarz who reduced californium(III) oxide with lanthanum metal to obtain microgram amounts of sub-micrometer thick films. Traces of californium can be found near facilities that use the element in mineral prospecting and in medical treatments. The element is fairly insoluble in water, but it adheres well to ordinary soil; and concentrations of it in the soil can be 500 times higher than in the water surrounding the soil particles. Fallout from atmospheric nuclear testing prior to 1980 contributed a small amount of californium to the environment. Californium isotopes with mass numbers 249, 252, 253, and 254 have been observed in the radioactive dust collected from the air after a nuclear explosion. 
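Written in standard nuclear notation, the Berkeley bombardment described above (a curium-242 target struck by accelerated helium-4 ions) corresponds to the reaction below; the notation is supplied here for clarity and balances both mass number (242 + 4 = 245 + 1) and atomic number (96 + 2 = 98):

\[ {}^{242}_{96}\mathrm{Cm} + {}^{4}_{2}\mathrm{He} \rightarrow {}^{245}_{98}\mathrm{Cf} + {}^{1}_{0}\mathrm{n} \]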
Californium is not a major radionuclide at United States Department of Energy legacy sites since it was not produced in large quantities. Californium was once believed to be produced in supernovas, as their decay matches the 60-day half-life of californium-254. However, subsequent studies failed to demonstrate any californium spectra, and supernova light curves are now thought to follow the decay of nickel-56. The transuranic elements from americium to fermium, including californium, occurred naturally in the natural nuclear fission reactor at Oklo, but no longer do so. Californium is produced in nuclear reactors and particle accelerators. Californium-250 is made by bombarding berkelium-249 with neutrons, forming berkelium-250 via neutron capture (n,γ), which, in turn, quickly beta decays (β⁻) to californium-250 in the following reaction: berkelium-249 (n,γ) berkelium-250 → californium-250 + β⁻. Bombardment of californium-250 with neutrons produces californium-251 and californium-252. Prolonged irradiation of americium, curium, and plutonium with neutrons produces milligram amounts of californium-252 and microgram amounts of californium-249. As of 2006, curium isotopes 244 to 248 are irradiated by neutrons in special reactors to produce primarily californium-252 with lesser amounts of isotopes 249 to 255. Microgram quantities of californium-252 are available for commercial use through the U.S. Nuclear Regulatory Commission. Only two sites produce californium-252: the Oak Ridge National Laboratory in the United States, and the Research Institute of Atomic Reactors in Dimitrovgrad, Russia. As of 2003, the two sites produce 0.25 grams and 0.025 grams of californium-252 per year, respectively. Three californium isotopes with significant half-lives are produced, requiring a total of 15 neutron captures by uranium-238 without nuclear fission or alpha decay occurring during the process. Californium-253 is at the end of a production chain that starts with uranium-238 and includes several isotopes of plutonium, americium, curium, berkelium, and the californium isotopes 249 to 253. Californium-252 has a number of specialized applications as a strong neutron emitter, and each microgram of fresh californium produces 139 million neutrons per minute. This property makes californium useful as a neutron startup source for some nuclear reactors and as a portable (non-reactor based) neutron source for neutron activation analysis to detect trace amounts of elements in samples. Neutrons from californium are employed as a treatment of certain cervical and brain cancers where other radiation therapy is ineffective. It has been used in educational applications since 1969 when the Georgia Institute of Technology received a loan of 119 µg of californium-252 from the Savannah River Plant. It is also used with online elemental coal analyzers and bulk material analyzers in the coal and cement industries. Neutron penetration into materials makes californium useful in detection instruments such as fuel rod scanners; neutron radiography of aircraft and weapons components to detect corrosion, bad welds, cracks and trapped moisture; and in portable metal detectors. Neutron moisture gauges use californium-252 to find water and petroleum layers in oil wells, as a portable neutron source for gold and silver prospecting for on-the-spot analysis, and to detect ground water movement. The major uses of californium-252 in 1982 were, in order of use, reactor start-up (48.3%), fuel rod scanning (25.3%), and activation analysis (19.4%). 
By 1994 most californium-252 was used in neutron radiography (77.4%), with fuel rod scanning (12.1%) and reactor start-up (6.9%) as important but distant secondary uses. Californium-251 has a very small calculated critical mass of about , high lethality, and a relatively short period of toxic environmental irradiation. The low critical mass of californium led to some exaggerated claims about possible uses for the element. In October 2006, researchers announced that three atoms of oganesson (element 118) had been identified at the Joint Institute for Nuclear Research in Dubna, Russia, as the product of bombardment of californium-249 with calcium-48, making it the heaviest element ever synthesized. The target for this experiment contained about 10 mg of californium-249 deposited on a titanium foil of 32 cm² area. Californium has also been used to produce other transuranium elements; for example, element 103 (later named lawrencium) was first synthesized in 1961 by bombarding californium with boron nuclei. Californium that bioaccumulates in skeletal tissue releases radiation that disrupts the body's ability to form red blood cells. The element plays no natural biological role in any organism due to its intense radioactivity and low concentration in the environment. Californium can enter the body from ingesting contaminated food or drinks or by breathing air with suspended particles of the element. Once in the body, only 0.05% of the californium will reach the bloodstream. About 65% of that californium will be deposited in the skeleton, 25% in the liver, and the rest in other organs, or excreted, mainly in urine. Half of the californium deposited in the skeleton and liver is gone in 50 and 20 years, respectively. Californium in the skeleton adheres to bone surfaces before slowly migrating throughout the bone. The element is most dangerous if taken into the body. In addition, californium-249 and californium-251 can cause tissue damage externally, through gamma ray emission. Ionizing radiation emitted by californium on bone and in the liver can cause cancer. Caesium Caesium (British spelling and IUPAC spelling) or cesium (American spelling) is a chemical element with symbol Cs and atomic number 55. It is a soft, silvery-gold alkali metal with a melting point of 28.5 °C (83.3 °F), which makes it one of only five elemental metals that are liquid at or near room temperature. Caesium has physical and chemical properties similar to those of rubidium and potassium. The most reactive of all metals, it is pyrophoric and reacts with water even at −116 °C (−177 °F). It is the least electronegative element, with a value of 0.79 on the Pauling scale. It has only one stable isotope, caesium-133. Caesium is mined mostly from pollucite, while the radioisotopes, especially caesium-137, a fission product, are extracted from waste produced by nuclear reactors. The German chemist Robert Bunsen and physicist Gustav Kirchhoff discovered caesium in 1860 by the newly developed method of flame spectroscopy. The first small-scale applications for caesium were as a "getter" in vacuum tubes and in photoelectric cells. In 1967, the International System of Units fixed the definition of the second in terms of a specific hyperfine transition frequency in the emission spectrum of caesium-133; the metre is in turn defined by reference to the second and the fixed speed of light. Since then, caesium has been widely used in highly accurate atomic clocks. 
Since the 1990s, the largest application of the element has been as caesium formate for drilling fluids, but it has a range of applications in the production of electricity, in electronics, and in chemistry. The radioactive isotope caesium-137 has a half-life of about 30 years and is used in medical applications, industrial gauges, and hydrology. Nonradioactive caesium compounds are only mildly toxic, but the pure metal's tendency to react explosively with water means that caesium is considered a hazardous material, and the radioisotopes present a significant health and ecological hazard in the environment. Caesium is the softest element (it has a hardness of 0.2 Mohs). It is a very ductile, pale metal, which darkens in the presence of trace amounts of oxygen. When in the presence of mineral oil (where it is best kept during transport), it loses its metallic lustre and takes on a duller, grey appearance. It has a melting point of , making it one of the few elemental metals that are liquid near room temperature. Mercury is the only elemental metal with a known melting point lower than caesium. In addition, the metal has a rather low boiling point, , the lowest of all metals other than mercury. Its compounds burn with a blue or violet colour. Caesium forms alloys with the other alkali metals, gold, and mercury (amalgams). At temperatures below , it does not alloy with cobalt, iron, molybdenum, nickel, platinum, tantalum, or tungsten. It forms well-defined intermetallic compounds with antimony, gallium, indium, and thorium, which are photosensitive. It mixes with all the other alkali metals (except lithium); the alloy with a molar distribution of 41% caesium, 47% potassium, and 12% sodium has the lowest melting point of any known metal alloy, at . A few amalgams have been studied: is black with a purple metallic lustre, while CsHg is golden-coloured, also with a metallic lustre. The golden colour of caesium comes from the decreasing frequency of light required to excite electrons of the alkali metals as the group is descended. For lithium through rubidium this frequency is in the ultraviolet, but for caesium it enters the blue–violet end of the spectrum; in other words, the plasmonic frequency of the alkali metals becomes lower from lithium to caesium. Thus caesium transmits and partially absorbs violet light preferentially while other colours (having lower frequency) are reflected; hence it appears yellowish. Caesium metal is highly reactive and very pyrophoric. It ignites spontaneously in air, and reacts explosively with water even at low temperatures, more so than the other alkali metals (first group of the periodic table). It reacts with solid water at temperatures as low as . Because of this high reactivity, caesium metal is classified as a hazardous material. It is stored and shipped in dry, saturated hydrocarbons such as mineral oil. It can be handled only under inert gas, such as argon. However, a caesium-water explosion is often less powerful than a sodium-water explosion with a similar amount of sodium. This is because caesium explodes instantly upon contact with water, leaving little time for hydrogen to accumulate. Caesium can be stored in vacuum-sealed borosilicate glass ampoules. In quantities of more than about 100 grams (3.5 oz), caesium is shipped in hermetically sealed, stainless steel containers. The chemistry of caesium is similar to that of other alkali metals, in particular rubidium, the element above caesium in the periodic table. 
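The explosive reaction with water described above is the ordinary alkali-metal hydrolysis, which liberates hydrogen gas; written as a balanced equation (the equation itself does not appear in the text):

\[ 2\,\mathrm{Cs} + 2\,\mathrm{H_2O} \rightarrow 2\,\mathrm{CsOH} + \mathrm{H_2}\uparrow \]

It is the rapid release of this hydrogen, together with the heat of reaction, that drives the ignition described above.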
As expected for an alkali metal, the only common oxidation state is +1. Some small differences arise from the fact that it has a higher atomic mass and is more electropositive than other (nonradioactive) alkali metals. Caesium is the most electropositive chemical element. The caesium ion is also larger and less "hard" than those of the lighter alkali metals. Most caesium compounds contain the element as the cation Cs⁺, which binds ionically to a wide variety of anions. One noteworthy exception is the caeside anion (Cs⁻), and others are the several suboxides (see section on oxides below). Salts of Cs⁺ are usually colourless unless the anion itself is coloured. Many of the simple salts are hygroscopic, but less so than the corresponding salts of lighter alkali metals. The phosphate, acetate, carbonate, halides, oxide, nitrate, and sulfate salts are water-soluble. Double salts are often less soluble, and the low solubility of caesium aluminium sulfate is exploited in refining Cs from ores. The double salts with antimony (such as ), bismuth, cadmium, copper, iron, and lead are also poorly soluble. Caesium hydroxide (CsOH) is hygroscopic and strongly basic. It rapidly etches the surface of semiconductors such as silicon. CsOH has previously been regarded by chemists as the "strongest base", reflecting the relatively weak attraction between the large Cs⁺ ion and OH⁻; it is indeed the strongest Arrhenius base, but a number of compounds that do not dissolve in water, such as "n"-butyllithium and sodium amide, are more basic. A stoichiometric mixture of caesium and gold will react to form yellow caesium auride (CsAu) upon heating. The auride anion here behaves as a pseudohalogen. The compound reacts violently with water, yielding caesium hydroxide, metallic gold, and hydrogen gas; in liquid ammonia it can be reacted with a caesium-specific ion exchange resin to produce tetramethylammonium auride. The analogous platinum compound, red caesium platinide (Cs₂Pt), contains the platinide ion that behaves as a pseudochalcogen. Like all metal cations, Cs⁺ forms complexes with Lewis bases in solution. Because of its large size, Cs⁺ usually adopts coordination numbers greater than 6, the number typical for the smaller alkali metal cations. This difference is apparent in the 8-coordination of CsCl. This high coordination number and softness (tendency to form covalent bonds) are properties exploited in separating Cs⁺ from other cations in the remediation of nuclear wastes, where radioactive Cs⁺ must be separated from large amounts of nonradioactive K⁺. Caesium fluoride (CsF) is a hygroscopic white solid that is widely used in organofluorine chemistry as a source of fluoride anions. Caesium fluoride has the halite structure, which means that the Cs⁺ and F⁻ pack in a cubic closest packed array as do Na⁺ and Cl⁻ in sodium chloride. Notably, caesium and fluorine have the lowest and highest electronegativities, respectively, among all the known elements. Caesium chloride (CsCl) crystallizes in the simple cubic crystal system. Also called the "caesium chloride structure", this structural motif is composed of a primitive cubic lattice with a two-atom basis, each with an eightfold coordination; the chloride atoms lie upon the lattice points at the edges of the cube, while the caesium atoms lie in the holes in the centre of the cubes. This structure is shared with CsBr and CsI, and many other compounds that do not contain Cs. In contrast, most other alkali halides have the sodium chloride (NaCl) structure. 
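One common rationalization of the eightfold coordination in the caesium chloride structure is the classical radius-ratio rule. The sketch below applies that rule using the ionic radii quoted in the next paragraph (174 pm for Cs⁺ and 181 pm for Cl⁻); the 0.732 threshold is the standard textbook criterion for eightfold coordination and is an assumption not stated in this article.

```python
import math

# Ionic radii quoted in the following paragraph (picometres)
R_CS_PLUS = 174.0
R_CL_MINUS = 181.0

# Classical radius-ratio threshold for eightfold coordination
# (textbook value, not taken from this article)
THRESHOLD_8_COORDINATION = math.sqrt(3) - 1   # about 0.732

ratio = R_CS_PLUS / R_CL_MINUS
print(f"radius ratio Cs+/Cl- = {ratio:.2f}")
if ratio >= THRESHOLD_8_COORDINATION:
    print("ratio exceeds ~0.732, consistent with eightfold (CsCl-type) coordination")
else:
    print("ratio below ~0.732, sixfold (NaCl-type) coordination would be expected")
```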
The CsCl structure is preferred because Cs⁺ has an ionic radius of 174 pm and Cl⁻ one of 181 pm. More so than the other alkali metals, caesium forms numerous binary compounds with oxygen. When caesium burns in air, the superoxide CsO₂ is the main product. The "normal" caesium oxide (Cs₂O) forms yellow-orange hexagonal crystals, and is the only oxide of the anti-CdCl₂ type. It vaporizes at , and decomposes to caesium metal and the peroxide at temperatures above . In addition to the superoxide and the ozonide, several brightly coloured suboxides have also been studied, including CsO and a dark-green phase; one of them may be heated in a vacuum to generate another oxide. Binary compounds with sulfur, selenium, and tellurium also exist. Caesium has 39 known isotopes, ranging in mass number (i.e. number of nucleons in the nucleus) from 112 to 151. Several of these are synthesized from lighter elements by the slow neutron capture process (s-process) inside old stars and by the r-process in supernova explosions. The only stable caesium isotope is caesium-133, with 78 neutrons. Although it has a large nuclear spin (7/2+), nuclear magnetic resonance studies can use this isotope at a resonating frequency of 11.7 MHz. The radioactive caesium-135 has a very long half-life of about 2.3 million years, longest of all radioactive isotopes of caesium. Caesium-137 and caesium-134 have half-lives of 30 and two years, respectively. Caesium-137 decomposes to the short-lived barium-137m by beta decay, and then to nonradioactive barium, while caesium-134 transforms into barium-134 directly. The isotopes with mass numbers of 129, 131, 132 and 136 have half-lives between a day and two weeks, while most of the other isotopes have half-lives from a few seconds to fractions of a second. At least 21 metastable nuclear isomers exist. Other than caesium-134m (with a half-life of just under 3 hours), all are very unstable and decay with half-lives of a few minutes or less. The isotope caesium-135 is one of the long-lived fission products of uranium produced in nuclear reactors. However, this fission product yield is reduced in most reactors because the predecessor, xenon-135, is a potent neutron poison and frequently transmutes to stable xenon-136 before it can decay to caesium-135. The beta decay from caesium-137 to barium-137m is accompanied by a strong emission of gamma radiation. Caesium-137 and strontium-90 are the principal medium-lived products of nuclear fission, and the prime sources of radioactivity from spent nuclear fuel after several years of cooling, lasting several hundred years. Those two isotopes are the largest source of residual radioactivity in the area of the Chernobyl disaster. Because of the low capture rate, disposing of caesium-137 through neutron capture is not feasible, and the only current solution is to allow it to decay over time. Almost all caesium produced from nuclear fission comes from the beta decay of originally more neutron-rich fission products, passing through various isotopes of iodine and xenon. Because iodine and xenon are volatile and can diffuse through nuclear fuel or air, radioactive caesium is often created far from the original site of fission. With nuclear weapons testing in the 1950s through the 1980s, caesium-137 was released into the atmosphere and returned to the surface of the earth as a component of radioactive fallout. It is a ready marker of the movement of soil and sediment from those times. Caesium is a relatively rare element estimated to average 3 parts per million in the Earth's crust. It is the 45th most abundant element and the 36th among the metals. 
Nevertheless, it is more abundant than such elements as antimony, cadmium, tin, and tungsten, and two orders of magnitude more abundant than mercury and silver; it is 3.3% as abundant as rubidium, with which it is closely associated chemically. Due to its large ionic radius, caesium is one of the "incompatible elements". During magma crystallization, caesium is concentrated in the liquid phase and crystallizes last. Therefore, the largest deposits of caesium are zone pegmatite ore bodies formed by this enrichment process. Because caesium does not substitute for potassium as readily as rubidium, the alkali evaporite minerals sylvite (KCl) and carnallite () may contain only 0.002% caesium. Consequently, Cs is found in few minerals. Percentage amounts of caesium may be found in beryl () and avogadrite (), up to 15 wt% Cs₂O in the closely related mineral pezzottaite (Cs(Be₂Li)Al₂Si₆O₁₈), up to 8.4 wt% Cs₂O in the rare mineral londonite (), and less in the more widespread rhodizite. The only economically important ore for caesium is pollucite, which is found in a few places around the world in zoned pegmatites, associated with the more commercially important lithium minerals, lepidolite and petalite. Within the pegmatites, the large grain size and the strong separation of the minerals results in high-grade ore for mining. One of the world's most significant and richest sources of caesium is the Tanco Mine at Bernic Lake in Manitoba, Canada, estimated to contain 350,000 metric tons of pollucite ore, representing more than two-thirds of the world's reserve base. Although the stoichiometric content of caesium in pollucite is 42.6%, pure pollucite samples from this deposit contain only about 34% caesium, while the average content is 24 wt%. Commercial pollucite contains more than 19% caesium. The Bikita pegmatite deposit in Zimbabwe is mined for its petalite, but it also contains a significant amount of pollucite. Another notable source of pollucite is in the Karibib Desert, Namibia. At the present rate of world mine production of 5 to 10 metric tons per year, reserves will last for thousands of years. Mining and refining pollucite ore is a selective process and is conducted on a smaller scale than for most other metals. The ore is crushed, hand-sorted, but not usually concentrated, and then ground. Caesium is then extracted from pollucite primarily by three methods: acid digestion, alkaline decomposition, and direct reduction. In the acid digestion, the silicate pollucite rock is dissolved with strong acids, such as hydrochloric (HCl), sulfuric (H₂SO₄), hydrobromic (HBr), or hydrofluoric (HF) acids. With hydrochloric acid, a mixture of soluble chlorides is produced, and the insoluble chloride double salts of caesium are precipitated as caesium antimony chloride (), caesium iodine chloride (), or caesium hexachlorocerate (). After separation, the pure precipitated double salt is decomposed, and pure CsCl is precipitated by evaporating the water. The sulfuric acid method yields the insoluble double salt directly as caesium alum (). The aluminium sulfate component is converted to insoluble aluminium oxide by roasting the alum with carbon, and the resulting product is leached with water to yield a solution. Roasting pollucite with calcium carbonate and calcium chloride yields insoluble calcium silicates and soluble caesium chloride. Leaching with water or dilute ammonia () yields a dilute chloride (CsCl) solution. 
This solution can be evaporated to produce caesium chloride or transformed into caesium alum or caesium carbonate. Though not commercially feasible, the ore can be directly reduced with potassium, sodium, or calcium in vacuum to produce caesium metal. Most of the mined caesium (as salts) is directly converted into caesium formate (HCOOCs) for applications such as oil drilling. To supply the developing market, Cabot Corporation built a production plant in 1997 at the Tanco mine near Bernic Lake in Manitoba, with a capacity of per year of caesium formate solution. The primary smaller-scale commercial compounds of caesium are caesium chloride and nitrate. Alternatively, caesium metal may be obtained from the purified compounds derived from the ore. Caesium chloride and the other caesium halides can be reduced at with calcium or barium, and caesium metal distilled from the result. In the same way, the aluminate, carbonate, or hydroxide may be reduced by magnesium. The metal can also be isolated by electrolysis of fused caesium cyanide (CsCN). Exceptionally pure and gas-free caesium can be produced by thermal decomposition of caesium azide, which can be produced from aqueous caesium sulfate and barium azide. In vacuum applications, caesium dichromate can be reacted with zirconium to produce pure caesium metal without other gaseous products. The price of 99.8% pure caesium (metal basis) in 2009 was about US$10 per gram ($280 per ounce), but the compounds are significantly cheaper. In 1860, Robert Bunsen and Gustav Kirchhoff discovered caesium in the mineral water from Dürkheim, Germany. Because of the bright blue lines in the emission spectrum, they derived the name from the Latin word "caesius", meaning sky-blue. Caesium was the first element to be discovered with a spectroscope, which had been invented by Bunsen and Kirchhoff only a year previously. To obtain a pure sample of caesium, of mineral water had to be evaporated to yield of concentrated salt solution. The alkaline earth metals were precipitated either as sulfates or oxalates, leaving the alkali metal in the solution. After conversion to the nitrates and extraction with ethanol, a sodium-free mixture was obtained. From this mixture, the lithium was precipitated by ammonium carbonate. Potassium, rubidium, and caesium form insoluble salts with chloroplatinic acid, but these salts show a slight difference in solubility in hot water, and the less-soluble caesium and rubidium hexachloroplatinate ((Cs,Rb)₂PtCl₆) were obtained by fractional crystallization. After reduction of the hexachloroplatinate with hydrogen, caesium and rubidium were separated by the difference in solubility of their carbonates in alcohol. The process yielded of rubidium chloride and of caesium chloride from the initial 44,000 litres of mineral water. From the caesium chloride, the two scientists estimated the atomic weight of the new element at 123.35 (compared to the currently accepted one of 132.9). They tried to generate elemental caesium by electrolysis of molten caesium chloride, but instead of a metal, they obtained a blue homogeneous substance which "neither under the naked eye nor under the microscope showed the slightest trace of metallic substance"; as a result, they assigned it as a subchloride (). In reality, the product was probably a colloidal mixture of the metal and caesium chloride. The electrolysis of the aqueous solution of chloride with a mercury cathode produced a caesium amalgam which readily decomposed under the aqueous conditions. 
The pure metal was eventually isolated by the German chemist Carl Setterberg while working on his doctorate with Kekulé and Bunsen. In 1882, he produced caesium metal by electrolysing caesium cyanide, avoiding the problems with the chloride. Historically, the most important use for caesium has been in research and development, primarily in chemical and electrical fields. Very few applications existed for caesium until the 1920s, when it came into use in radio vacuum tubes, where it had two functions: as a getter, it removed excess oxygen after manufacture, and as a coating on the heated cathode, it increased the electrical conductivity. Caesium was not recognized as a high-performance industrial metal until the 1950s. Applications for nonradioactive caesium included photoelectric cells, photomultiplier tubes, optical components of infrared spectrophotometers, catalysts for several organic reactions, crystals for scintillation counters, and magnetohydrodynamic power generators. Caesium also was, and still is, used as a source of positive ions in secondary ion mass spectrometry (SIMS). Since 1967, the International System of Units has based the primary unit of time, the second, on the properties of caesium. The International System of Units (SI) defines the second as the duration of 9,192,631,770 cycles at the microwave frequency of the spectral line corresponding to the transition between two hyperfine energy levels of the ground state of caesium-133. The 13th General Conference on Weights and Measures of 1967 defined a second as: "the duration of 9,192,631,770 cycles of microwave light absorbed or emitted by the hyperfine transition of caesium-133 atoms in their ground state undisturbed by external fields". The largest present-day use of nonradioactive caesium is in caesium formate drilling fluids for the extractive oil industry. Aqueous solutions of caesium formate (HCOOCs)—made by reacting caesium hydroxide with formic acid—were developed in the mid-1990s for use as oil well drilling and completion fluids. The function of a drilling fluid is to lubricate drill bits, to bring rock cuttings to the surface, and to maintain pressure on the formation during drilling of the well. Completion fluids assist the emplacement of control hardware after drilling but prior to production by maintaining the pressure. The high density of the caesium formate brine (up to 2.3 g/cm³, or 19.2 pounds per gallon), coupled with the relatively benign nature of most caesium compounds, reduces the requirement for toxic high-density suspended solids in the drilling fluid—a significant technological, engineering and environmental advantage. Unlike the components of many other heavy liquids, caesium formate is relatively environment-friendly. Caesium formate brine can be blended with potassium and sodium formates to decrease the density of the fluids to that of water (1.0 g/cm³, or 8.3 pounds per gallon). Furthermore, it is biodegradable and may be recycled, which is important in view of its high cost (about $4,000 per barrel in 2001). Alkali formates are safe to handle and do not damage the producing formation or downhole metals as corrosive alternative high-density brines (such as zinc bromide solutions) sometimes do; they also require less cleanup and reduce disposal costs. Caesium-based atomic clocks use the electromagnetic transitions in the hyperfine structure of caesium-133 atoms as a reference point. 
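The accuracy figures for caesium clocks quoted in the next paragraph follow from simple arithmetic on the fractional frequency error. The sketch below is a minimal illustration; the 2 × 10⁻¹⁴ value used here is one end of the range quoted in that paragraph and is chosen purely as an example.

```python
SECONDS_PER_DAY = 86_400
SECONDS_PER_YEAR = 365.25 * SECONDS_PER_DAY

def drift(fractional_error: float):
    """Convert a fractional frequency error into accumulated time error."""
    error_per_day = fractional_error * SECONDS_PER_DAY            # seconds gained/lost per day
    years_to_one_second = 1.0 / (fractional_error * SECONDS_PER_YEAR)  # years to drift by 1 s
    return error_per_day, years_to_one_second

per_day, years = drift(2e-14)
print(f"{per_day * 1e9:.1f} ns per day, one second in {years / 1e6:.1f} million years")
# roughly 1.7 ns per day and one second in about 1.6 million years, consistent
# with the "2 nanoseconds per day" and "1.4 million years" figures quoted below
```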
The first accurate caesium clock was built by Louis Essen in 1955 at the National Physical Laboratory in the UK. Caesium clocks have improved over the past half-century and are regarded as "the most accurate realization of a unit that mankind has yet achieved." These clocks measure frequency with an error of 2 to 3 parts in 10¹⁴, corresponding to an accuracy of 2 nanoseconds per day, or one second in 1.4 million years. The latest versions are more accurate than 1 part in 10¹⁵, about 1 second in 20 million years. The caesium standard is the primary standard for standards-compliant time and frequency measurements. Caesium clocks regulate the timing of cell phone networks and the Internet. It is currently (2018) being proposed by the International Committee for Weights and Measures (CIPM) that the second, symbol s, the SI unit of time, be defined using the fixed numerical value of the caesium frequency Δν, the unperturbed ground-state hyperfine transition frequency of the caesium-133 atom. Caesium vapour thermionic generators are low-power devices that convert heat energy to electrical energy. In the two-electrode vacuum tube converter, caesium neutralizes the space charge near the cathode and enhances the current flow. Caesium is also important for its photoemissive properties, converting light to electron flow. It is used in photoelectric cells because caesium-based cathodes, such as the intermetallic compound , have a low threshold voltage for emission of electrons. The range of photoemissive devices using caesium includes optical character recognition devices, photomultiplier tubes, and video camera tubes. Nevertheless, germanium, rubidium, selenium, silicon, tellurium, and several other elements can be substituted for caesium in photosensitive materials. Caesium iodide (CsI), bromide (CsBr) and caesium fluoride (CsF) crystals are employed for scintillators in scintillation counters widely used in mineral exploration and particle physics research to detect gamma and X-ray radiation. Being a heavy element, caesium provides good stopping power with better detection. Caesium compounds may provide a faster response (CsF) and be less hygroscopic (CsI). Caesium vapour is used in many common magnetometers. The element is used as an internal standard in spectrophotometry. Like other alkali metals, caesium has a great affinity for oxygen and is used as a "getter" in vacuum tubes. Other uses of the metal include high-energy lasers, vapour glow lamps, and vapour rectifiers. The high density of the caesium ion makes solutions of caesium chloride, caesium sulfate, and caesium trifluoroacetate () useful in molecular biology for density gradient ultracentrifugation. This technology is used primarily in the isolation of viral particles, subcellular organelles and fractions, and nucleic acids from biological samples. Relatively few chemical applications use caesium. Doping with caesium compounds enhances the effectiveness of several metal-ion catalysts for chemical synthesis, such as acrylic acid, anthraquinone, ethylene oxide, methanol, phthalic anhydride, styrene, methyl methacrylate monomers, and various olefins. It is also used in the catalytic conversion of sulfur dioxide into sulfur trioxide in the production of sulfuric acid. Caesium fluoride enjoys a niche use in organic chemistry as a base and as an anhydrous source of fluoride ion. Caesium salts sometimes replace potassium or sodium salts in organic synthesis, such as cyclization, esterification, and polymerization. 
Caesium has also been used in thermoluminescent radiation dosimetry: when exposed to radiation, it acquires crystal defects that, when heated, revert with emission of light proportionate to the received dose. Thus, measuring the light pulse with a photomultiplier tube can allow the accumulated radiation dose to be quantified. Caesium-137 is a radioisotope commonly used as a gamma-emitter in industrial applications. Its advantages include a half-life of roughly 30 years, its availability from the nuclear fuel cycle, and having barium-137 as a stable end product. The high water solubility is a disadvantage which makes it incompatible with large pool irradiators for food and medical supplies. It has been used in agriculture, cancer treatment, and the sterilization of food, sewage sludge, and surgical equipment. Radioactive isotopes of caesium in radiation devices were used in the medical field to treat certain types of cancer, but emergence of better alternatives and the use of water-soluble caesium chloride in the sources, which could create wide-ranging contamination, gradually put some of these caesium sources out of use. Caesium-137 has been employed in a variety of industrial measurement gauges, including moisture, density, levelling, and thickness gauges. It has also been used in well logging devices for measuring the electron density of the rock formations, which is analogous to the bulk density of the formations. Caesium-137 has been used in hydrologic studies analogous to those with tritium. As a daughter product of fission bomb testing from the 1950s through the mid-1980s, caesium-137 was released into the atmosphere, where it was absorbed readily into solution. Known year-to-year variation within that period allows correlation with soil and sediment layers. Caesium-134, and to a lesser extent caesium-135, have also been used in hydrology to measure the caesium output by the nuclear power industry. While they are less prevalent than either caesium-133 or caesium-137, these bellwether isotopes are produced solely from anthropogenic sources. Caesium and mercury were used as a propellant in early ion engines designed for spacecraft propulsion on very long interplanetary or extraplanetary missions. The fuel was ionized by contact with a charged tungsten electrode. But corrosion by caesium on spacecraft components has pushed development in the direction of inert gas propellants, such as xenon, which are easier to handle in ground-based tests and do less potential damage to the spacecraft. Xenon was used in the experimental spacecraft Deep Space 1 launched in 1998. Nevertheless, field-emission electric propulsion thrusters that accelerate liquid metal ions such as caesium have been built. Caesium nitrate is used as an oxidizer and pyrotechnic colorant to burn silicon in infrared flares, such as the LUU-19 flare, because it emits much of its light in the near infrared spectrum. Caesium has been used to reduce the radar signature of exhaust plumes in the SR-71 Blackbird military aircraft. Caesium and rubidium have been added as a carbonate to glass because they reduce electrical conductivity and improve stability and durability of fibre optics and night vision devices. Caesium fluoride or caesium aluminium fluoride is used in fluxes formulated for brazing aluminium alloys that contain magnesium. Magnetohydrodynamic (MHD) power-generating systems were researched, but failed to gain widespread acceptance. 
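The fallout-dating and gauging applications described above rest on straightforward half-life arithmetic for caesium-137. The sketch below is a minimal illustration using the roughly 30-year half-life quoted earlier in this article; the value is approximate and used here only for demonstration.

```python
def remaining_fraction(years_elapsed: float, half_life_years: float = 30.0) -> float:
    """Fraction of an initial caesium-137 inventory left after a given time."""
    return 0.5 ** (years_elapsed / half_life_years)

# Example: material deposited 60 years ago (two half-lives) retains
# roughly a quarter of its original activity.
print(f"{remaining_fraction(60):.2f}")   # ~0.25
```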
Caesium metal has also been considered as the working fluid in high-temperature Rankine cycle turboelectric generators. Caesium salts have been evaluated as antishock reagents following the administration of arsenical drugs. Because of their effect on heart rhythms, however, they are less likely to be used than potassium or rubidium salts. They have also been used to treat epilepsy. Caesium-133 can be laser cooled and used to probe fundamental and technological problems in quantum physics. It has a particularly convenient Feshbach spectrum to enable studies of ultracold atoms requiring tunable interactions. Nonradioactive caesium compounds are only mildly toxic and nonradioactive caesium is not a significant environmental hazard. Because biochemical processes can mistake caesium for potassium and substitute it, excess caesium can lead to hypokalemia, arrhythmia, and acute cardiac arrest, but such amounts would not ordinarily be encountered in natural sources. The median lethal dose (LD50) for caesium chloride in mice is 2.3 g per kilogram, which is comparable to the LD50 values of potassium chloride and sodium chloride. The principal use of nonradioactive caesium is as caesium formate in petroleum drilling fluids, because it is much less toxic than alternatives, though it is more costly. Caesium metal is one of the most reactive elements and is highly explosive in the presence of water. The hydrogen gas produced by the reaction is heated by the thermal energy released at the same time, causing ignition and a violent explosion. This can occur with other alkali metals, but caesium is so potent that this explosive reaction can be triggered even by cold water. It is highly pyrophoric: the autoignition temperature of caesium is −116 °C, and it ignites explosively in air to form caesium hydroxide and various oxides. Caesium hydroxide is a very strong base, and will rapidly corrode glass. The isotopes 134 and 137 are present in the biosphere in small amounts from human activities, differing by location. Radiocaesium does not accumulate in the body as readily as other fission products (such as radioiodine and radiostrontium). About 10% of absorbed radiocaesium washes out of the body relatively quickly in sweat and urine. The remaining 90% has a biological half-life between 50 and 150 days. Radiocaesium follows potassium and tends to accumulate in plant tissues, including fruits and vegetables. Plants vary widely in the absorption of caesium, sometimes displaying great resistance to it. It is also well-documented that mushrooms from contaminated forests accumulate radiocaesium (caesium-137) in the fungal sporocarps. Accumulation of caesium-137 in lakes has been a major concern since the Chernobyl disaster. Experiments with dogs showed that a single dose of 3.8 millicuries (140 MBq, 4.1 μg of caesium-137) per kilogram is lethal within three weeks; smaller amounts may cause infertility and cancer. The International Atomic Energy Agency and other sources have warned that radioactive materials, such as caesium-137, could be used in radiological dispersion devices, or "dirty bombs". Cleveland Cleveland is a major city in the U.S. state of Ohio, and the county seat of Cuyahoga County. The city proper has a population of 388,072, making it the 51st-largest city in the United States, and the second-largest city in Ohio. Greater Cleveland is ranked as the 32nd-largest metropolitan area in the US, with 2,055,612 people in 2016. 
The city anchors the Cleveland–Akron–Canton Combined Statistical Area, which had a population of 3,515,646 in 2010 and is ranked 15th in the United States. The city is located on the southern shore of Lake Erie, approximately west of the Ohio-Pennsylvania state border. It was founded by European Americans in 1796 near the mouth of the Cuyahoga River. It became a manufacturing center due to its location on both the river and the lake shore, as well as being connected to numerous canals and railroad lines. Cleveland's economy relies on diversified sectors such as manufacturing, financial services, healthcare, and biomedicals. Cleveland is also home to the Rock and Roll Hall of Fame. Cleveland residents are called "Clevelanders". The city has many nicknames, the oldest of which in contemporary use is "The Forest City". Cleveland was named on July 22, 1796, when surveyors of the Connecticut Land Company laid out Connecticut's Western Reserve into townships and a capital city. They named it "Cleaveland" after their leader, General Moses Cleaveland. Cleaveland oversaw design of the plan for what would become the modern downtown area, centered on Public Square, before returning home, never again to visit Ohio. The first settler in Cleaveland was Lorenzo Carter, who built a cabin on the banks of the Cuyahoga River. The Village of Cleaveland was incorporated on December 23, 1814. In spite of the nearby swampy lowlands and harsh winters, its waterfront location proved to be an advantage, giving access to Great Lakes trade. The area began rapid growth after the 1832 completion of the Ohio and Erie Canal. This key link between the Ohio River and the Great Lakes connected the city to the Atlantic Ocean via the Erie Canal and Hudson River, and later via the St. Lawrence Seaway. Its products could reach markets on the Gulf of Mexico via the Mississippi River. Growth continued with added railroad links. Cleveland incorporated as a city in 1836. In 1836, the city, then located only on the eastern bank of the Cuyahoga River, nearly erupted into open warfare with neighboring Ohio City over a bridge connecting the two. Ohio City remained an independent municipality until its annexation by Cleveland in 1854. The city's prime geographic location as a transportation hub on the Great Lakes has played an important role in its development as a commercial center. Cleveland serves as a destination for iron ore shipped from Minnesota, along with coal transported by rail. In 1870, John D. Rockefeller founded Standard Oil in Cleveland. In 1885, he moved its headquarters to New York City, which had become a center of finance and business. Cleveland emerged in the early 20th century as an important American manufacturing center. Its businesses included automotive companies such as Peerless, People's, Jordan, Chandler, and Winton, maker of the first car driven across the U.S. Other manufacturers located in Cleveland produced steam-powered cars, which included White and Gaeth, as well as the electric car company Baker. Because of its significant growth, Cleveland was known as the "Sixth City" of the US during this period. By 1920, due in large part to the city's economic prosperity, Cleveland became the nation's fifth-largest city. The city counted Progressive Era politicians such as the populist Mayor Tom L. Johnson among its leaders. Its industrial jobs had attracted waves of European immigrants from southern and eastern Europe, as well as both black and white migrants from the rural South. 
In commemoration of the centennial of Cleveland's incorporation as a city, the Great Lakes Exposition debuted in June 1936 along the Lake Erie shore north of downtown. Conceived as a way to energize the city after the Great Depression, it drew four million visitors in its first season, and seven million by the end of its second and final season in September 1937. The exposition was housed on grounds that are now used by the Great Lakes Science Center, the Rock and Roll Hall of Fame, and Burke Lakefront Airport, among others. Following World War II, Cleveland continued to enjoy a prosperous economy. In sports, the Indians won the 1948 World Series; the hockey team, the Barons, became champions of the American Hockey League; and the Browns dominated professional football in the 1950s. Along with the track and boxing champions it produced, Cleveland was dubbed the "City of Champions" in sports at this time. Businesses proclaimed that Cleveland was the "best location in the nation". In 1940, non-Hispanic whites represented 90.2% of Cleveland's population. Wealthy patrons supported development of the city's cultural institutions, such as the art museum and orchestra. The city's population reached its peak of 914,808, and in 1949 Cleveland was named an All-America City for the first time. By the 1960s, the economy slowed, and residents sought new housing in the suburbs, reflecting the national trend of suburban growth that followed the construction of subsidized highways. In the 1950s and 1960s, African Americans worked in numerous cities to gain constitutional rights and relief from racial discrimination. As change lagged despite federal laws to enforce rights, social and racial unrest occurred in Cleveland and numerous other industrial cities. In Cleveland, the Hough Riots erupted from July 18 to 23, 1966. The Glenville Shootout took place from July 23 to 25, 1968. In November 1967, Cleveland became the first major American city to elect a black mayor, Carl Stokes (who served from 1968 to 1971). Industrial restructuring, particularly in the railroad and steel industries, resulted in the loss of numerous jobs in Cleveland and the region, and the city suffered economically. In December 1978, Cleveland became the first major American city since the Great Depression to enter into a financial default on federal loans. By the beginning of the 1980s, several factors, including changes in international free trade policies, inflation, and the savings and loan crisis, contributed to the recession that severely affected cities like Cleveland. While unemployment during the period peaked in 1983, Cleveland's rate of 13.8% was higher than the national average due to the closure of several steel production centers. In the later 20th century, the metropolitan area began a gradual economic recovery under mayors George Voinovich and Michael R. White. Redevelopment within the city limits has been strongest in the downtown area near the Gateway Sports and Entertainment Complex—consisting of Progressive Field and Quicken Loans Arena—and near North Coast Harbor, including the Rock and Roll Hall of Fame, FirstEnergy Stadium, and the Great Lakes Science Center. Cleveland was hailed in 2007 by local media as the "Comeback City". Economic development of the inner-city neighborhoods and improvement of the school systems have been municipal priorities. In 1999, Cleveland was identified as an emerging global city. 
Since the turn of the 21st century, the city has improved infrastructure, developed a more diversified economy, gained a national reputation in medical fields, and invested in the arts. Cleveland is generally considered to be an example of revitalization of an older industrial city. The city's goals include additional neighborhood revitalization and increased funding for public education. In 2009, Cleveland was chosen to host the 2014 Gay Games, becoming the fourth city in the United States to host this international event. On July 8, 2014, Cleveland was chosen to be the host city of the 2016 Republican National Convention. According to the United States Census Bureau, the city has a total area of , of which is land and is water. The shore of Lake Erie is above sea level; however, the city lies on a series of irregular bluffs roughly parallel to the lake. In Cleveland these bluffs are cut principally by the Cuyahoga River, Big Creek, and Euclid Creek. The land rises quickly from the lake shore. Public Square, less than inland, sits at an elevation of , and Hopkins Airport, inland from the lake, is at an elevation of . Cleveland's downtown architecture is diverse. Many of the city's government and civic buildings, including City Hall, the Cuyahoga County Courthouse, the Cleveland Public Library, and Public Auditorium, are clustered around an open mall and share a common neoclassical architecture. Built in the early 20th century, they are the result of the 1903 Group Plan. They constitute one of the most complete examples of City Beautiful design in the United States. The Terminal Tower, dedicated in 1930, was the tallest building in North America outside New York City until 1964 and the tallest in the city until 1991. It is a prototypical Beaux-Arts skyscraper. The two newer skyscrapers on Public Square, Key Tower (currently the tallest building in Ohio) and 200 Public Square, combine elements of Art Deco architecture with postmodern designs. Another of Cleveland's architectural treasures is The Arcade (sometimes called the Old Arcade), a five-story arcade built in 1890 and renovated in 2001 as a Hyatt Regency Hotel. Cleveland's landmark ecclesiastical architecture includes the historic Old Stone Church in downtown Cleveland and the onion-domed St. Theodosius Russian Orthodox Cathedral in Tremont, along with myriad ethnically inspired Roman Catholic churches. Running east from Public Square through University Circle is Euclid Avenue, which was known for its prestige and elegance as a residential street. In the late 1880s, writer Bayard Taylor described it as "the most beautiful street in the world". Known as "Millionaire's Row", Euclid Avenue was world-renowned as the home of such major figures as John D. Rockefeller, Mark Hanna, and John Hay. Downtown Cleveland is centered on Public Square and includes a wide range of districts. It contains the traditional Financial District and Civic Center, as well as the Cleveland Theater District, which is home to Playhouse Square Center. Mixed-use neighborhoods, such as the Flats and the Warehouse District, are occupied by industrial and office buildings as well as restaurants and bars. The number of downtown housing units, in the form of condominiums, lofts, and apartments, has been on the increase since 2000. Recent developments include the revival of the Flats, the Euclid Corridor Project, and the developments along East 4th Street. 
Cleveland residents geographically define themselves in terms of whether they live on the east or west side of the Cuyahoga River. The east side includes the neighborhoods of Buckeye-Shaker, Central, Collinwood, Corlett, Euclid-Green, Fairfax, Forest Hills, Glenville, Payne/Goodrich-Kirtland Park, Hough, Kinsman, Lee Harvard/Seville-Miles, Mount Pleasant, Nottingham, St. Clair-Superior, Union-Miles Park, University, Little Italy, and Woodland Hills. The west side includes the neighborhoods of Brooklyn Centre, Clark-Fulton, Detroit-Shoreway, Cudell, Edgewater, Ohio City, Tremont, Old Brooklyn, Stockyards, West Boulevard, and the four neighborhoods colloquially known as West Park: Kamm's Corners, Jefferson, Puritas-Longmead, and Riverside. Three neighborhoods in the Cuyahoga Valley are sometimes referred to as the south side: Industrial Valley/Duck Island, Slavic Village (North and South Broadway), and Tremont. Several inner-city neighborhoods have begun to gentrify in recent years. Areas on both the west side (Ohio City, Tremont, Detroit-Shoreway, and Edgewater) and the east side (Collinwood, Hough, Fairfax, and Little Italy) have been successful in attracting increasing numbers of creative class members, which in turn is spurring new residential development. Furthermore, a live-work zoning overlay for the city's near east side has facilitated the transformation of old industrial buildings into loft spaces for artists. Cleveland's older, inner-ring suburbs include Bedford, Bedford Heights, Brook Park, Brooklyn, Brooklyn Heights, Cleveland Heights, Cuyahoga Heights, East Cleveland, Euclid, Fairview Park, Garfield Heights, Lakewood, Linndale, Maple Heights, Newburgh Heights, Parma, Parma Heights, Shaker Heights, Solon, South Euclid, University Heights and Warrensville Heights. Many of the suburbs are members of the Northeast Ohio First Suburbs Consortium. Typical of the Great Lakes region, Cleveland has a continental climate with four distinct seasons and lies in the humid continental (Köppen "Dfa") zone. Summers are warm to hot and humid while winters are cold and snowy. The Lake Erie shoreline runs very close to due east–west from the mouth of the Cuyahoga west to Sandusky, but at the mouth of the Cuyahoga it turns sharply northeast. This feature is the principal contributor to the lake effect snow that is typical in Cleveland (especially on the city's East Side) from mid-November until the surface of Lake Erie freezes, usually in late January or early February. The lake effect also causes a relative differential in geographical snowfall totals across the city: while Hopkins Airport, on the city's far West Side, has only reached of snowfall in a season three times since record-keeping for snow began in 1893, seasonal totals approaching or exceeding are not uncommon as the city ascends into the Heights on the east, where the region known as the 'Snow Belt' begins. Extending from the city's East Side and its suburbs, the Snow Belt reaches up the Lake Erie shore as far as Buffalo. The all-time record high in Cleveland of was established on June 25, 1988, and the all-time record low of was set on January 19, 1994. On average, July is the warmest month with a mean temperature of , and January, with a mean temperature of , is the coldest. Normal yearly precipitation based on the 30-year average from 1981 to 2010 is . The least precipitation occurs on the western side and directly along the lake, and the most occurs in the eastern suburbs. 
Parts of Geauga County to the east receive over of liquid precipitation annually. As of the census of 2010, there were 396,698 people, 167,490 households, and 89,821 families residing in the city. The population density was . There were 207,536 housing units at an average density of . The racial makeup of the city was 53.3% African American, 37.3% White, 0.3% Native American, 1.8% Asian, 4.4% from other races, and 2.8% from two or more races. Hispanic or Latino residents of any race were 10.0% of the population. There were 167,490 households of which 29.7% had children under the age of 18 living with them, 22.4% were married couples living together, 25.3% had a female householder with no husband present, 6.0% had a male householder with no wife present, and 46.4% were non-families. 39.5% of all households were made up of individuals and 10.7% had someone living alone who was 65 years of age or older. The average household size was 2.29 and the average family size was 3.11. The median age in the city was 35.7 years. 24.6% of residents were under the age of 18; 11% were between the ages of 18 and 24; 26.1% were from 25 to 44; 26.3% were from 45 to 64; and 12% were 65 years of age or older. The gender makeup of the city was 48.0% male and 52.0% female. As of the census of 2000, there were 478,403 people, 190,638 households, and 111,904 families residing in the city. The population density was . There were 215,856 housing units at an average density of . The racial makeup of the city was 51.0% African American, 41.5% White, 0.3% Native American, 1.3% Asian, 0.0% Pacific Islander, 3.6% from other races, and 2.2% from two or more races. Hispanics or Latinos of any race were 7.3% of the population. Ethnic groups include Germans (15.2%), Irish (10.9%), English (8.7%), Italian (5.6%), Poles (3.2%), and French (3.0%). Out of the total population, 4.5% were foreign born; of these, 41.2% were born in Europe, 29.1% in Asia, 22.4% in Latin America, 5.0% in Africa, and 1.9% in Northern America. There are also substantial communities of Slovaks, Hungarians, French, Slovenes, Czechs, Ukrainians, Arabs, Dutch, Scottish, Russian, Scotch Irish, Croats, Macedonians, Puerto Ricans, West Indians, Romanians, Lithuanians, and Greeks. The presence of Hungarians within Cleveland proper was, at one time, so great that the city boasted the highest concentration of Hungarians in the world outside of Budapest. The availability of jobs attracted African Americans from the South. Between 1920 and 1960, the black population of Cleveland increased from 35,000 to 251,000. Out of 190,638 households, 29.9% had children under the age of 18 living with them, 28.5% were married couples living together, 24.8% had a female householder with no husband present, and 41.3% were nonfamilies. 35.2% of all households were made up of individuals and 11.1% had someone living alone who was 65 years of age or older. The average household size was 2.44 and the average family size was 3.19. The age distribution of the population shows 28.5% under the age of 18, 9.5% from 18 to 24, 30.4% from 25 to 44, 19.0% from 45 to 64, and 12.5% who were 65 years of age or older. The median age was 33 years. For every 100 females, there were 90.0 males. For every 100 females age 18 and over, there were 85.2 males. The median income for a household in the city was US$25,928, and the median income for a family was $30,286. Males had a median income of $30,610 versus $24,214 for females. The per capita income for the city was $14,291. 
26.3% of the population and 22.9% of families were below the poverty line. Out of the total population, 37.6% of those under the age of 18 and 16.8% of those 65 and older were living below the poverty line. Of Cleveland residents age 5 and older, 88.4% (337,658) spoke English at home as a primary language, while 7.1% (27,262) spoke Spanish, 0.6% (2,200) Arabic, and 0.5% (1,960) Chinese. In addition, 0.9% (3,364) spoke a Slavic language (1,279 Polish, 679 Serbo-Croatian, and 485 Russian). In total, 11.6% (44,148) of Cleveland's population age 5 and older spoke a language other than English. Cleveland's location on the Cuyahoga River and Lake Erie has been key to its growth. The Ohio and Erie Canal coupled with rail links helped the city become an important business center. Steel and many other manufactured goods emerged as leading industries. The city has since diversified its economy beyond its manufacturing sector. Cleveland is home to the corporate headquarters of many large companies such as Applied Industrial Technologies, Cliffs Natural Resources, Forest City Enterprises, NACCO Industries, Sherwin-Williams Company, and KeyCorp. NASA maintains a facility in Cleveland, the Glenn Research Center. Jones Day, one of the largest law firms in the US, was founded in Cleveland. In 2007, Cleveland's commercial real estate market experienced a rebound, with a record pace of purchases and a housing vacancy rate of 10%. The Cleveland Clinic is the city's largest private employer with a workforce of over 37,000. It is regarded as being among America's best hospitals, with top ratings published in "U.S. News & World Report". Cleveland's healthcare sector also includes University Hospitals of Cleveland, a renowned center for cancer treatment, MetroHealth medical center, and the insurance company Medical Mutual of Ohio. Cleveland is also noted in the fields of biotechnology and fuel cell research, led by Case Western Reserve University, the Cleveland Clinic, and University Hospitals of Cleveland. The city is among the top recipients of investment for biotech start-ups and research. Case Western Reserve, the Clinic, and University Hospitals have recently announced plans to build a large biotechnology research center and incubator on the site of the former Mount Sinai Hospital, creating a research campus to stimulate biotech startup companies that can be spun off from research conducted in the city. City leaders promoted growth of the technology sector in the first decade of the 21st century. Mayor Jane L. Campbell appointed a "tech czar" to recruit technology companies to the downtown office market, offering connections to the high-speed fiber networks that run underneath downtown streets in several "high-tech offices" focused on the Euclid Avenue area. Cleveland State University hired a technology transfer officer to cultivate technology transfers from CSU research to marketable ideas and companies in the Cleveland area. The university also appointed a vice president for economic development. Case Western Reserve University participated in technology initiatives such as the OneCommunity project, a high-speed fiber optic network linking the area's research centers, intended to stimulate growth. In mid-2005, Cleveland was named an Intel "Worldwide Digital Community", along with Corpus Christi, Texas, Philadelphia, and Taipei. The designation added about $12 million in marketing to expand regional technology partnerships, create a city-wide Wi-Fi network, and develop a tech economy. 
In addition to this Intel initiative, in January 2006 a New York-based think tank, the Intelligent Community Forum, selected Cleveland as the sole American city among its seven finalists for the "Intelligent Community of the Year" award. The group announced it nominated the city for its OneCommunity network with potential broadband applications. OneCommunity collaborated with Cisco Systems to deploy a wireless network starting in September 2006. Cleveland is home to Playhouse Square Center, the second largest performing arts center in the United States behind New York City's Lincoln Center. Playhouse Square includes the State, Palace, Allen, Hanna, and Ohio theaters within what is known as the Cleveland Theater District. Playhouse Square's resident performing arts companies include Cleveland Play House, Cleveland State University Department of Theatre and Dance, and Great Lakes Theater Festival. The center hosts various Broadway musicals, special concerts, speaking engagements, and other events throughout the year. One Playhouse Square, now the headquarters for Cleveland's public broadcasters, was initially used as the broadcast studios of WJW (AM), where disc jockey Alan Freed first popularized the term "rock and roll". Cleveland gained a strong reputation in rock music in the 1960s and 70s as a key breakout market for nationally promoted acts and performers. The city hosted the "World Series of Rock" at Cleveland Municipal Stadium, a series of notable high-attendance concerts. Located between Playhouse Square and University Circle is Karamu House, a well-known African American performing and fine arts center, founded in the 1920s. Cleveland is home to the Cleveland Orchestra, widely considered one of the world's finest orchestras, and often referred to as the finest in the United States. It is one of the "Big Five" major orchestras in the United States. The Orchestra plays at Severance Hall in University Circle during the winter and at Blossom Music Center in Cuyahoga Falls during the summer. The city is also home to the Cleveland Pops Orchestra, the Cleveland Youth Orchestra, and the Cleveland Youth Wind Symphony. Polka music has a long history of popularity in the city, past and present, with a subgenre called Cleveland-style polka named after it, and Cleveland is home to the Polka Hall of Fame. This is due in part to the success of Frankie Yankovic, a Cleveland native considered "America's Polka King"; the square at the intersection of Waterloo Rd. and East 152nd St. in Cleveland, not far from where Yankovic grew up, was named in his honor. There are two main art museums in Cleveland. The Cleveland Museum of Art is a major American art museum, with a collection that includes more than 40,000 works of art ranging over 6,000 years, from ancient masterpieces to contemporary pieces. Museum of Contemporary Art Cleveland showcases established and emerging artists, particularly from the Cleveland area, through hosting and producing temporary exhibitions. The Gordon Square Arts District on Detroit Avenue, in the Detroit-Shoreway neighborhood, is the location of the Capitol Theatre and an Off-Off-Broadway Playhouse, the Cleveland Public Theatre. Each spring, the campus of Cleveland State University hosts the Cleveland Thyagaraja Festival, the largest South Indian classical music festival next to Chennai's December Season. Cleveland has served as the setting for several major studio and independent films. 
Players from the 1948 Cleveland Indians, winners of the World Series, appear in "The Kid from Cleveland" (1949). Cleveland Municipal Stadium features prominently in both that film and "The Fortune Cookie" (1966); written and directed by Billy Wilder, the picture marked Walter Matthau and Jack Lemmon's first on-screen collaboration and features gameday footage of the 1965 Cleveland Browns. Director Jules Dassin's first American film in nearly twenty years, "Up Tight!" (1968) is set in Cleveland immediately following the assassination of Martin Luther King, Jr. In "F.I.S.T." (1978), set in 1930s Cleveland, Sylvester Stallone leads a local labor union. Paul Simon chose Cleveland as the opening for his only venture into filmmaking, "One-Trick Pony" (1980); Simon spent six weeks filming concert scenes at the Cleveland Agora. The boxing-match-turned-riot near the start of "Raging Bull" (1980) takes place at the Cleveland Arena in 1941. Clevelander Jim Jarmusch's critically acclaimed and independently produced "Stranger Than Paradise" (1984)—a deadpan comedy about two New Yorkers who travel to Florida by way of Cleveland—was a favorite of the Cannes Film Festival, winning the Caméra d'Or. The cult-classic mockumentary "This Is Spinal Tap" (1984) includes a memorable scene where the parody band gets lost backstage just before performing at a Cleveland rock concert (origin of the phrase "Hello, Cleveland!"). "Howard the Duck" (1986), George Lucas' heavily criticized adaptation of the Marvel comic of the same name, begins with the title character crashing into Cleveland after drifting in outer space. Michael J. Fox and Joan Jett play the sibling leads of a Cleveland rock group in "Light of Day" (1987); directed by Paul Schrader, much of the film was shot in the city. Both "Major League" (1989) and "Major League II" (1994) reflected the struggles of the Cleveland Indians during the 1960s, 1970s, and 1980s. Kevin Bacon stars in "Telling Lies in America" (1997), the semi-autobiographical tale of Clevelander Joe Eszterhas, a former reporter for "The Plain Dealer". Cleveland serves as the setting for fictitious insurance giant Great Benefit in "The Rainmaker" (1997); in the film, Key Tower doubles as the firm's main headquarters. A group of Cleveland teenagers try to scam their way into a Kiss concert in "Detroit Rock City" (1999), and several key scenes from director Cameron Crowe's "Almost Famous" (2000) are set in Cleveland. "Antwone Fisher" (2002) recounts the real-life story of the Cleveland native. Brothers Joe and Anthony Russo—native Clevelanders and Case Western Reserve University alumni—filmed their comedy "Welcome to Collinwood" (2002) entirely on location in the city. "American Splendor" (2003)—the biographical film of Harvey Pekar, author of the autobiographical comic of the same name—was also filmed on location throughout Cleveland, as was "The Oh in Ohio" (2006). Much of "The Rocker" (2008) is set in the city, and Cleveland native Nathaniel Ayers' life story is told in "The Soloist" (2009). "Kill the Irishman" (2011) follows the real-life turf war in 1970s Cleveland between Irish mobster Danny Greene and the Cleveland crime family. More recently, the teenage comedy "Fun Size" (2012) takes place in and around Cleveland on Halloween night, and the film "Draft Day" (2014) follows Kevin Costner as general manager for the Cleveland Browns. Cleveland has often doubled for other locations in film. 
The wedding and reception scenes in "The Deer Hunter" (1978), while set in the small Pittsburgh suburb of Clairton, were actually shot in the Cleveland neighborhood of Tremont; U.S. Steel also permitted the production to film in one of its Cleveland mills. Francis Ford Coppola produced "The Escape Artist" (1982), much of which was shot in Downtown Cleveland near City Hall and the Cuyahoga County Courthouse, as well as the Flats. "A Christmas Story" (1983) was set in Indiana, but drew many of its external shots—including the Parker family home, the downtown Christmas parade and Higbee's department store Santa scenes—from Cleveland. Much of "Double Dragon" (1994) and "Happy Gilmore" (1996) were also shot in Cleveland, and the opening shots of "Air Force One" (1997) were filmed in and above Severance Hall. A complex chase scene in "Spider-Man 3" (2007), though set in New York City, was actually filmed along Cleveland's Euclid Avenue. Downtown's East 9th Street also doubled for New York in the climax of "The Avengers" (2012); in addition, the production shot on Cleveland's Public Square as a fill-in for Stuttgart, Germany. More recently, "" (2013), "Miss Meadows" (2014) and "" (2014) were each filmed in Cleveland. Future productions in the Cleveland area are the responsibility of the Greater Cleveland Film Commission. In television, the city is the setting for the popular network sitcom "The Drew Carey Show", starring Cleveland native Drew Carey. Real-life crime series "Cops", "Crime 360", and "The First 48" regularly film in Cleveland and other U.S. cities. "Hot in Cleveland", a comedy airing on TV Land, premiered on June 16, 2010. The American modernist poet Hart Crane was born in nearby Garrettsville, Ohio in 1899. His adolescence was divided between Cleveland and Akron before he moved to New York City in 1916. Aside from factory work during the First World War, he served as a reporter for "The Plain Dealer" for a short period before achieving recognition in the Modernist literary scene. A diminutive memorial park is dedicated to Crane along the left bank of the Cuyahoga in Cleveland. In University Circle, a historical marker sits at the location of his Cleveland childhood house on E. 115th Street near the Euclid Avenue intersection. On the Case Western Reserve University campus, a statue of him stands behind the Kelvin Smith Library. Langston Hughes, preeminent poet of the Harlem Renaissance and child of an itinerant couple, lived in Cleveland as a teenager and attended Central High School in the 1910s. He wrote for the school newspaper and started writing his early plays, poems and short stories while living in Cleveland. The African-American avant garde poet Russell Atkins also lived in Cleveland. Cleveland was the home of Joe Shuster and Jerry Siegel, who created the comic book character Superman in 1932. Both attended Glenville High School, and their early collaborations resulted in the creation of "The Man of Steel". D. A. Levy wrote: "Cleveland: The Rectal Eye Visions". Mystery author Richard Montanari's first three novels, "Deviant Way", "The Violet Hour", and "Kiss of Evil" are set in Cleveland. Mystery writer Les Roberts's "Milan Jacovich" series is also set in Cleveland. Author and Ohio resident James Renner set his debut novel, "The Man from Primrose Lane", in present-day Cleveland. Harlan Ellison, noted author of speculative fiction, was born in Cleveland in 1934; his family subsequently moved to the nearby suburb of Painesville, though Ellison moved back to Cleveland in 1949. 
As a youngster, he published a series of short stories appearing in the "Cleveland News"; he also performed in a number of productions for the Cleveland Play House. The Cleveland State University Poetry Center serves as an academic center for poetry. Cleveland continues to have a thriving literary and poetry community, with regular poetry readings at bookstores, coffee shops, and various other venues. Cleveland is the site of the Anisfield-Wolf Book Award, established by poet and philanthropist Edith Anisfield Wolf in 1935, which recognizes books that have made important contributions to the understanding of racism and human diversity. Presented by the Cleveland Foundation, it remains the only American book prize focusing on works that address racism and diversity. In an early Gay and Lesbian Studies anthology titled "Lavender Culture", a short piece by John Kelsey, "The Cleveland Bar Scene in the Forties", discusses the gay and lesbian culture in Cleveland and the unique experiences of amateur female impersonators that existed alongside the New York and San Francisco LGBT subcultures. Cleveland's melting pot of immigrant groups and their various culinary traditions have long played an important role in defining the local cuisine. Examples of these can particularly be found in neighborhoods such as Little Italy, Slavic Village, and Tremont. Local mainstays of Cleveland's cuisine include an abundance of Polish and Central European contributions, such as kielbasa, stuffed cabbage and pierogies. Cleveland also has plenty of corned beef, with nationally renowned Slyman's, on the near East Side, a perennial winner of various accolades from "Esquire Magazine", including being named the best corned beef sandwich in America in 2008. Other famed sandwiches include the Cleveland original Polish Boy, a local favorite found at many barbecue and soul food restaurants. With its blue-collar roots well intact, and plenty of Lake Erie perch available, the tradition of Friday night fish fries remains alive and thriving in Cleveland, particularly in church-based settings and during the season of Lent. Ohio City is home to a growing brewery district, which includes Great Lakes Brewing Company (Ohio's oldest microbrewery), Market Garden Brewery next to the historic West Side Market, and Platform Beer Company. Cleveland is noted in the world of celebrity food culture. Famous local figures include chef Michael Symon and food writer Michael Ruhlman, both of whom achieved local and national attention for their contributions in the culinary world. On November 11, 2007, Symon helped bring the city into the spotlight when he was named "The Next Iron Chef" on the Food Network. In 2007, Ruhlman collaborated with Anthony Bourdain on an episode of his "" focusing on Cleveland's restaurant scene. The national food press—including publications "Gourmet", "Food & Wine", "Esquire" and "Playboy"—has heaped praise on several Cleveland spots for awards including 'best new restaurant', 'best steakhouse', 'best farm-to-table programs' and 'great new neighborhood eateries'. In early 2008, the "Chicago Tribune" ran a feature article in its 'Travel' section proclaiming Cleveland America's "hot new dining city". east of downtown Cleveland is University Circle, a concentration of cultural, educational, and medical institutions, including the Cleveland Botanical Garden, Case Western Reserve University, University Hospitals, Severance Hall, the Cleveland Museum of Art, the Cleveland Museum of Natural History, and the Western Reserve Historical Society. 
A 2011 study by Walk Score ranked Cleveland the 17th most walkable of the fifty largest U.S. cities. Cleveland is home to the I. M. Pei-designed Rock and Roll Hall of Fame on the Lake Erie waterfront at North Coast Harbor downtown. Neighboring attractions include Cleveland Browns Stadium, the Great Lakes Science Center, the Steamship Mather Museum, and the , a World War II submarine. Cleveland has an attraction for fans of "A Christmas Story": the A Christmas Story House and Museum, where visitors can see props, costumes, rooms, photos and other materials related to the Jean Shepherd film. Cleveland is home to many festivals throughout the year. Cultural festivals such as the annual Feast of the Assumption in the Little Italy neighborhood, the Harvest Festival in the Slavic Village neighborhood, and the more recent Cleveland Asian Festival in the Asia Town neighborhood are popular events. Vendors at the West Side Market in Ohio City offer many different ethnic foods for sale. Cleveland hosts an annual parade on Saint Patrick's Day that brings hundreds of thousands to the streets of downtown. Fashion Week Cleveland, the city's annual fashion event, is the third-largest fashion show of its kind in the United States. In addition to the cultural festivals, Cleveland hosted the CMJ Rock Hall Music Fest, which featured both established artists and up-and-coming national and local acts, but the festival was discontinued in 2007 due to financial and manpower costs to the Rock Hall. The annual Ingenuity Fest, Notacon and TEDxCLE conferences focus on the combination of art and technology. The Cleveland International Film Festival has been held annually since 1977, and it drew a record 66,476 people in March 2009. Cleveland also hosts an annual holiday display lighting and celebration, dubbed Winterfest, which is held downtown at the city's historic hub, Public Square. Cleveland also has the Jack Cleveland Casino. Phase I opened on May 14, 2012, on Public Square, in the historic former Higbee's Building at Tower City Center. Phase II will open along the bend of the Cuyahoga River behind Tower City Center. The new Greater Cleveland Aquarium is on the west bank of the Cuyahoga River near Downtown. Cleveland's current major professional sports teams include the Cleveland Indians (Major League Baseball), Cleveland Browns (National Football League), and Cleveland Cavaliers (National Basketball Association). Local sporting facilities include Progressive Field, FirstEnergy Stadium, Quicken Loans Arena and the Wolstein Center. The Cleveland Indians won the World Series in 1920 and 1948. They also won the American League pennant, making the World Series in the 1954, 1995, 1997, and 2016 seasons. Between 1995 and 2001, Progressive Field (then known as Jacobs Field) sold out 455 consecutive games, a Major League Baseball record until it was broken in 2008. Historically, the Browns have been among the winningest franchises in American football history, winning eight titles during a short period of time—1946, 1947, 1948, 1949, 1950, 1954, 1955, and 1964. The Browns have never played in a Super Bowl, getting close five times by making it to the NFL/AFC Championship Game in 1968, 1969, 1986, , and . Former owner Art Modell's relocation of the Browns after the 1995 season (to Baltimore, creating the Ravens) caused tremendous heartbreak and resentment among local fans. Cleveland mayor Michael R. 
White worked with the NFL and Commissioner Paul Tagliabue to bring back the Browns beginning in the 1999 season, retaining all team history. In earlier NFL history, the Cleveland Bulldogs won the NFL Championship in 1924, and the Cleveland Rams won the NFL Championship in 1945 before relocating to Los Angeles. The Cavaliers won the Eastern Conference in 2007, 2015, 2016, 2017 and 2018, but were defeated in the NBA Finals by the San Antonio Spurs and then by the Golden State Warriors. In 2016, the Cavs won their first NBA Championship, finally defeating the Golden State Warriors. Afterwards, an estimated 1.3 million people attended a parade held in the Cavs' honor on June 22, 2016. This was the first time the city had planned for a championship parade in 50 years. In basketball, the Cleveland Rosenblums dominated the original American Basketball League, winning three of the first five championships (1926, 1929, 1930), and the Cleveland Pipers, owned by George Steinbrenner, won the American Basketball League championship in 1962. A notable Cleveland athlete is Jesse Owens, who grew up in the city after moving from Alabama when he was nine. He participated in the 1936 Summer Olympics in Berlin, where he achieved international fame by winning four gold medals. A statue commemorating his achievement can be found in Downtown Cleveland at Fort Washington Park. Cleveland State University alumnus and area native Stipe Miocic won the UFC World Heavyweight Championship at UFC 198 in 2016. Miocic defended his heavyweight title at UFC 203, the first UFC championship fight held in the city of Cleveland, and again at UFC 211 and UFC 220. The AHL Cleveland Monsters won the 2016 Calder Cup, becoming the first Cleveland professional sports team to win a league championship since the 1964 Cleveland Barons. The city is also host to the Cleveland Gladiators of the Arena Football League, Cleveland Fusion of the Women's Football Alliance and Cleveland SC of the National Premier Soccer League. Collegiately, the NCAA Division I Cleveland State Vikings have 16 varsity sports and are nationally known for their men's basketball team. The NCAA Division III Case Western Reserve Spartans have 19 varsity sports and are best known for their football team. The headquarters of the Mid-American Conference (MAC) are located in Cleveland. The conference also stages both its men's and women's basketball tournaments at Quicken Loans Arena. Several chess championships have taken place in Cleveland. The second American Chess Congress, a predecessor of the current U.S. Championship, was held in 1871 and won by George Henry Mackenzie. The 1921 and 1957 U.S. Open Chess Championships also took place in the city and were won by Edward Lasker and Bobby Fischer, respectively. The Cleveland Open is currently held annually. The Cleveland Marathon has been hosted annually since 1978. Cleveland is home to four of the parks in the countywide Cleveland Metroparks system: Washington Park, Brookside Park, and parts of the Rocky River and Washington Reservations. Known locally as the "Emerald Necklace", the Olmsted-inspired Metroparks encircle Cuyahoga County. Included in the system is the Cleveland Metroparks Zoo. Located in Big Creek valley, the zoo contains one of the largest collections of primates in North America. In addition to the Metroparks system, the Cleveland Lakefront State Park district provides public access to Lake Erie. 
This cooperative between the City of Cleveland and the State of Ohio contains six parks: Edgewater Park, located on the city's near west side between the Shoreway and the lake; East 55th Street Marina; Euclid Beach Park; and Gordon Park. The Cleveland Public Parks District is the municipal body that oversees the city's neighborhood parks, the largest of which is the historic Rockefeller Park, notable for its late-19th century historical landmark bridges and Cultural Gardens. Cleveland's position as a center of manufacturing established it as a hotbed of union activity early in its history. While other parts of Ohio, particularly Cincinnati and the southern portion of the state, have historically supported the Republican Party, Cleveland commonly provides the strongest support in the state for the Democrats. At the local level, elections are nonpartisan. However, Democrats still dominate every level of government. Cleveland is split between two congressional districts. Most of the western part of the city is in the 9th District, represented by Marcy Kaptur. Most of the eastern part of the city, as well as most of downtown, is in the 11th District, represented by Marcia Fudge. Both are Democrats. During the 2004 presidential election, although George W. Bush carried Ohio by 2.1%, John Kerry carried Cuyahoga County 66.6%–32.9%, his largest margin in any Ohio county. The city of Cleveland supported Kerry over Bush by the even larger margin of 83.3%–15.8%. The city of Cleveland operates on the mayor–council (strong mayor) form of government. The mayor is the chief executive of the city, and the office has been held by Frank G. Jackson since 2006. Previous mayors of Cleveland include progressive Democrat Tom L. Johnson, World War I era War Secretary and founder of BakerHostetler law firm Newton D. Baker, United States Supreme Court Justice Harold Hitz Burton, Republican Senator and two-term Ohio Governor George V. Voinovich, former United States Representative Dennis Kucinich of Ohio's 10th congressional district, Frank J. Lausche, and Carl B. Stokes, the first African American mayor of a major American city. The state of Ohio lost two Congressional seats as a result of the 2010 Census, which affects Cleveland's districts in the northeast part of the state. Between about 1935 and 1938, the Cleveland Torso Murderer killed and dismembered at least a dozen and perhaps as many as twenty people in the area. No arrest was ever made. From 2002 to 2014, Ariel Castro held three women captive in his home in Cleveland. Police became aware of the crime when one of the women escaped. Castro was sentenced to life in prison plus one thousand years, but committed suicide in prison. Based on the Morgan Quitno Press 2008 national crime rankings, Cleveland ranked as the 7th most dangerous city in the nation among US cities with a population of 100,000 to 500,000 and the 11th most dangerous overall. Violent crime from 2005 to 2006 was mostly unchanged nationwide, but increased more than 10% in Cleveland. The murder rate dropped 30% in Cleveland, but was still far above the national average. Property crime from 2005 to 2006 was virtually unchanged across the country and in Cleveland, with larceny-theft down by 7% but burglaries up almost 14%. In September 2009, the local police arrested Anthony Sowell, who was known in press reports as the Cleveland Strangler. He was convicted of eleven murders as well as other crimes and sentenced to death. 
In October 2010, Cleveland had two neighborhoods appear on ABC News's list of 'America's 25 Most Dangerous Neighborhoods': both in sections just blocks apart in the city's Central neighborhood on the East Side. Ranked 21st was an area in the vicinity of Quincy Avenue and E. 40th Street, while an area near E. 55th Street and Scovill Avenue ranked 2nd in the nation, just behind a section of the Englewood neighborhood in Chicago, which ranked 1st. A study in 1971–72 found that although Cleveland's crime rate was significantly lower than that of other large urban areas, most Cleveland residents feared crime. In the 1980s, gang activity was on the rise, associated with crack cocaine. A task force was formed and was partially successful at reducing gang activity by a combination of removing gang-related graffiti and persuading news outlets not to name gangs in their reporting. The distribution of crime in Cleveland is highly uneven. Relatively few crimes take place in downtown Cleveland's business district, but the perception of crime in the downtown has been pointed to by the Greater Cleveland Growth Association (now the Greater Cleveland Partnership) as damaging to the city's economy. More affluent areas of Cleveland and its suburbs have lower rates of violent crime than areas of lower socioeconomic status. Statistically speaking, higher incidences of violent crimes have been noted in some parts of Cleveland with higher populations of African Americans. A study of the relationship between employment access and crime in Cleveland found a strong inverse relationship, with the highest crime rates in areas of the city that had the lowest access to jobs. Furthermore, this relationship was found to be strongest with respect to economic crimes. A study of public housing in Cleveland found that criminals tend to live in areas of higher affluence and move into areas of lower affluence to commit crimes. In 2012, Cleveland recorded 84 murders, 3,252 robberies, and 9,740 burglaries. In 2014, the United States Department of Justice (DOJ) published a report that investigated the use of force by the Cleveland Police Department from 2010 to 2013. The Justice Department found a pattern of excessive force including the use of firearms, tasers, fists, and chemical spray that unnecessarily escalated nonviolent situations, including against the mentally ill and people who were already restrained. As a result of the Justice Department report, the city of Cleveland has agreed to a consent decree to revise its policies and implement new independent oversight over the police force. On May 26, 2015, the City of Cleveland and the DOJ released a 105-page agreement addressing concerns about Cleveland Division of Police (CDP) use-of-force policies and practices. The agreement follows a two-year Department of Justice investigation, prompted by a request from Cleveland Mayor Frank Jackson, to determine whether the CDP engaged in a pattern or practice of the use of excessive force in violation of the Fourth Amendment of the United States Constitution and the Violent Crime Control and Law Enforcement Act (1994), 42 U.S.C. § 14141 ("Section 14141"). Under Section 14141, the Department of Justice is granted authority to seek declaratory or equitable relief to remedy a pattern or practice of conduct by law enforcement officers that deprives individuals of rights, privileges, or immunities secured by the Constitution or federal law. U.S. Attorney General Eric Holder and U.S. 
Attorney Steven Dettelbach announced the findings of the DOJ investigation in Cleveland on December 4, 2014. After reviewing nearly 600 use-of-force incidents from 2010 to 2013 and conducting thousands of interviews, the investigators found systemic deficiencies, including insufficient accountability mechanisms, inadequate training, ineffective policies, and inadequate community engagement. At the same time as the announcement of the investigation findings, the City of Cleveland and the Department of Justice issued a Joint Statement of Principles agreeing to begin negotiations with the intention of reaching a court-enforceable settlement agreement. The details of the settlement agreement, or consent decree, were released on May 26, 2015. The agreement mandates sweeping changes in training for recruits and seasoned officers, programs to identify and support troubled officers, updated technology and data management practices, and an independent monitor to ensure that the goals of the decree are met. The agreement is not an admission or evidence of liability, nor is it an admission by the city, CDP, or its officers and employees that they have engaged in unconstitutional, illegal, or otherwise improper activities or conduct. Once approved by a federal judge, the consent decree becomes binding and will be implemented. The Cleveland Consent Decree is divided into 15 divisions, with 462 enumerated items. At least some of the provisions have been identified as unique to Cleveland. On June 12, 2015, Chief U.S. District Judge Solomon Oliver Jr. approved and signed the consent decree. The signing of the agreement starts the clock for numerous deadlines that must be met in an effort to improve the department's handling of use-of-force incidents. Cleveland is served by the firefighters of the Cleveland Division of Fire. The fire department operates out of 22 active fire stations, located throughout the city in five battalions. Each battalion is commanded by a battalion chief, who reports to an on-duty assistant chief. The Division of Fire operates a fire apparatus fleet of twenty-two engine companies, eight ladder companies, three tower companies, two task force rescue squad companies, a hazardous materials ("haz-mat") unit, and numerous other special, support, and reserve units. The current Chief of Department is Patrick Kelly. Cleveland EMS is operated by the city as its own department; however, a merger between the fire and EMS departments is in progress. Cleveland EMS units are now based out of most of the city's fire stations. City officials are currently negotiating with Cleveland Fire and EMS to form a new union contract that will merge the two systems entirely. No set projection for a full merger has been established. Neither the fire nor the EMS unions have yet been able to come to an agreement with city officials on terms of a merger. The Cleveland Metropolitan School District is the largest K–12 district in the state of Ohio, with 127 schools and an enrollment of 55,567 students during the 2006–2007 academic year. It is the only district in Ohio that is under direct control of the mayor, who appoints a school board. Approximately of Cleveland, adjacent to the Shaker Square neighborhood, is part of the Shaker Heights City School District. The area has been part of the Shaker school district since the 1920s; these Cleveland residents pay the same school taxes as Shaker residents and vote in the Shaker school board elections. 
Cleveland is home to a number of colleges and universities. Most prominent among these is Case Western Reserve University, a world-renowned research and teaching institution located in University Circle. A private university with several prominent graduate programs, CWRU was ranked 37th in the nation in 2012 by "U.S. News & World Report". University Circle also contains the Cleveland Institute of Art and the Cleveland Institute of Music. Cleveland State University (CSU), based in Downtown Cleveland, is the city's public four-year university. In addition to CSU, downtown hosts the metropolitan campus of Cuyahoga Community College, the county's two-year higher education institution. Ohio Technical College is also based in Cleveland. Cleveland's primary daily newspaper is "The Plain Dealer". Defunct major newspapers include the "Cleveland Press", an afternoon publication which printed its last edition on June 17, 1982; and the "Cleveland News", which ceased publication in 1960. Additional newspaper coverage includes the "News-Herald", which serves the smaller suburbs on the east side; the Thursdays-only "Sun Post-Herald", which serves a few neighborhoods on the city's west side; and the "Call and Post", a weekly newspaper that primarily serves the city's African-American community. The city is also served by "Cleveland Magazine", a regional culture magazine published monthly; "Crain's Cleveland Business", a weekly business newspaper; "Cleveland Jewish News", a weekly Jewish newspaper; and "Cleveland Scene", a free alternative weekly paper which absorbed its competitor, the "Cleveland Free Times", in 2008. In addition, nationally distributed rock magazine "Alternative Press" was founded in Cleveland in 1985, and the publication's headquarters remain in the city. Combined with nearby Akron and Canton, Cleveland is ranked as the 19th-largest television market by Nielsen Media Research (–14). The market is served by 10 full-power stations, including WEWS-TV (ABC), WJW (Fox), WKYC (NBC), WOIO (CBS), WVIZ (PBS), WUAB (The CW and MyNetworkTV), WVPX-TV (Ion), WQHS-DT (Univision), WDLI-TV (TBN), and the independent WBNX-TV. "The Mike Douglas Show", a nationally syndicated daytime talk show, began in Cleveland in 1961 on KYW-TV (now WKYC), while "The Morning Exchange" on WEWS-TV served as the model for "Good Morning America". Tim Conway and Ernie Anderson first established themselves in Cleveland while working together at KYW-TV and later WJW-TV (now WJW). Anderson both created and performed as the immensely popular Cleveland horror host Ghoulardi on WJW-TV's "Shock Theater", and was later succeeded by the long-running late night duo Big Chuck and Lil' John. Cleveland is directly served by 31 AM and FM radio stations, 22 of which are licensed to the city. Commercial FM music stations are frequently the highest rated stations in the market: WAKS (contemporary hit radio), WDOK (adult contemporary), WENZ (mainstream urban), WHLK (adult hits), WGAR-FM (country), WMJI (classic hits), WMMS (active rock/hot talk; Indians and Cavaliers FM flagship), WNCX (classic rock; Browns co-flagship), WQAL (hot adult contemporary), and WZAK (urban adult contemporary). WCPN public radio functions as the local NPR affiliate, and sister station WCLV airs a classical music format. College radio stations include WBWC (Baldwin Wallace University), WCSB (Cleveland State University), WJCU (John Carroll University), and WRUW-FM (Case Western Reserve University). 
News/talk station WTAM serves as the AM flagship for both the Cleveland Cavaliers and Cleveland Indians. WKNR and WWGK cover sports via ESPN Radio, while WKRK-FM covers sports via CBS Sports Radio (WKNR and WKRK-FM are also co-flagship stations for the Cleveland Browns). As WJW (AM), WKNR was once the home of Alan Freed, the Cleveland disc jockey credited with first using and popularizing the term "rock and roll" to describe the music genre. News/talk station WHK was one of the first radio stations to broadcast in the United States and the first in Ohio; its former sister station, rock station WMMS, dominated Cleveland radio in the 1970s and 1980s and was at that time one of the highest rated radio stations in the country. In 1972, WMMS program director Billy Bass coined the phrase "The Rock and Roll Capital of the World" to describe Cleveland. In 1987, "Playboy" named WMMS DJ Kid Leo (Lawrence Travagliante) "The Best Disc Jockey in the Country". Cleveland is home to several major hospital systems, two of which are in University Circle. Most notable is the world-renowned Cleveland Clinic, which is supplemented by University Hospitals and its Rainbow Babies & Children's Hospital. Additionally, the MetroHealth System, which operates the level one trauma center for northeast Ohio, has various locations throughout Greater Cleveland. Cleveland's Global Center for Health Innovation opened with display space for healthcare companies from across the world. The city of Cleveland has a higher than average percentage of households without a car. In 2016, 23.7 percent of Cleveland households lacked a car, while the national average was 8.7 percent. Cleveland averaged 1.19 cars per household in 2016, compared to a national average of 1.8. Cleveland Hopkins International Airport is the city's major airport and an international airport that formerly served as a main hub for United Airlines. It holds the distinction of having the first airport-to-downtown rapid transit connection in North America, established in 1968. In 1930, the airport was the site of the first airfield lighting system and the first air traffic control tower. Originally known as Cleveland Municipal Airport, it was the first municipally owned airport in the country. Cleveland Hopkins is a significant regional air freight hub hosting FedEx Express, UPS Airlines, United States Postal Service, and major commercial freight carriers. In addition to Hopkins, Cleveland is served by Burke Lakefront Airport, on the north shore of downtown between Lake Erie and the Shoreway. Burke is primarily a commuter and business airport. The Port of Cleveland, located at the Cuyahoga River's mouth, is a major bulk freight terminal on Lake Erie, receiving many of the raw materials used by the region's manufacturing industries. Amtrak, the national passenger rail system, provides service to Cleveland via the "Capitol Limited" and "Lake Shore Limited" routes, which stop at Cleveland Lakefront Station. Cleveland has also been identified as a hub for the proposed Ohio Hub project, which would bring high-speed rail to Ohio. Cleveland hosts several inter-modal freight railroad terminals. There have been several proposals for commuter rail in Cleveland, including an ongoing (as of January 2011) study into a Sandusky–Cleveland line. Cleveland has a bus and rail mass transit system operated by the Greater Cleveland Regional Transit Authority (RTA). The rail portion is officially called the RTA Rapid Transit, but local residents refer to it as "The Rapid". 
It consists of three light rail lines, known as the Blue, Green, and Waterfront lines, and a heavy rail line, the Red Line. In 2008, RTA completed the HealthLine, a bus rapid transit line, for which naming rights were purchased by the Cleveland Clinic and University Hospitals. It runs along Euclid Avenue from downtown through University Circle, ending at the Louis Stokes Station at Windermere in East Cleveland. In 2007, the American Public Transportation Association named Cleveland's mass transit system the best in North America. Cleveland is the only metropolitan area in the Western Hemisphere whose rail rapid transit system has only one central-city rapid transit station (Tower City–Public Square). During construction of the Red Line rapid transit line in the 1950s, the citizens of Cleveland voted to build the Downtown Distributor Subway, which would have provided a number of center-city stations. The plan was quashed by highway-promoting County Engineer Albert S. Porter, and the full development and growth of central Cleveland has since been significantly impeded by the resulting inaccessibility. National intercity bus service is provided at a Greyhound station, located just behind the Playhouse Square theater district. Megabus provides service to Cleveland and has a stop at the Stephanie Tubbs Jones Transit Center on the east side of downtown. Akron Metro, Brunswick Transit Alternative, Laketran, Lorain County Transit, and Medina County Transit provide connecting bus service to the Greater Cleveland Regional Transit Authority. Geauga County Transit and Portage Area Regional Transportation Authority (PARTA) also offer connecting bus service in their neighboring areas. Cleveland's road system consists of numbered streets running roughly north–south, and named avenues, which run roughly east–west. The numbered streets are designated "east" or "west", depending on where they lie in relation to Ontario Street, which bisects Public Square. The numbered street system extends beyond the city limits into some suburbs on both the west and east sides. The named avenues that lie both on the east side of the Cuyahoga River and west of Ontario Street receive a "west" designation on street signage. The two downtown avenues which span the Cuyahoga change names on the west side of the river. Superior Avenue becomes Detroit Avenue on the west side, and Carnegie Avenue becomes Lorain Avenue. The bridges that make these connections are often called the Detroit–Superior Bridge and the Lorain–Carnegie Bridge. Three two-digit Interstate highways serve Cleveland directly. Interstate 71 begins just southwest of downtown and is the major route from downtown Cleveland to the airport. I-71 runs through the southwestern suburbs and eventually connects Cleveland with Columbus and Cincinnati. Interstate 77 begins in downtown Cleveland and runs almost due south through the southern suburbs. I-77 sees the least traffic of the three interstates, although it does connect Cleveland to Akron. Interstate 90 connects the two sides of Cleveland, and is the northern terminus for both I-71 and I-77. Running due east–west through the west side suburbs, I-90 turns northeast at the junction with I-71 and I-490, and is known as the Innerbelt through downtown. At the junction with the Shoreway, I-90 makes a 90-degree turn known in the area as Dead Man's Curve, then continues northeast, entering Lake County near the eastern split with Ohio State Route 2. 
Cleveland is also served by two three-digit Interstates: Interstate 480, which enters Cleveland briefly at a few points, and Interstate 490, which connects I-77 with the junction of I-90 and I-71 just south of downtown. Two other limited-access highways serve Cleveland. The Cleveland Memorial Shoreway carries State Route 2 along its length, and at varying points also carries US 6, US 20 and I-90. The Jennings Freeway (State Route 176) connects I-71 just south of I-90 to I-480 near the suburbs of Parma and Brooklyn Heights. A third highway, the Berea Freeway (State Route 237 in part), connects I-71 to the airport, and forms part of the boundary between Cleveland and Brook Park. In 2011, Walk Score ranked Cleveland the seventeenth most walkable of the fifty largest cities in the United States. Walk Score has since ranked Cleveland the sixteenth most walkable US city, with a Walk Score of 57, a Transit Score of 47, and a Bike Score of 51. Cleveland's most walkable and transit-friendly areas can be found in the Downtown, Ohio City, Detroit-Shoreway, University Circle, and Buckeye-Shaker Square neighborhoods. Cleveland is home to the Consulate General of the Republic of Slovenia, and the city has twenty-two sister cities. In addition, Northeast Ohio's Jewish community has an unofficial supportive relationship with the State of Israel. Calvin Coolidge John Calvin Coolidge Jr. (July 4, 1872 – January 5, 1933) was an American politician and the 30th President of the United States (1923–1929). A Republican lawyer from New England, born in Vermont, Coolidge worked his way up the ladder of Massachusetts state politics, eventually becoming governor. His response to the Boston Police Strike of 1919 thrust him into the national spotlight and gave him a reputation as a man of decisive action. Soon after, he was elected Vice President of the United States in 1920, and succeeded to the presidency upon the sudden death of Warren G. Harding in 1923. Elected in his own right in 1924, he gained a reputation as a small-government conservative and also as a man who said very little, albeit with a rather dry sense of humor. Coolidge restored public confidence in the White House after the scandals of his predecessor's administration, and left office with considerable popularity. As a Coolidge biographer wrote: "He embodied the spirit and hopes of the middle class, could interpret their longings and express their opinions. That he did represent the genius of the average is the most convincing proof of his strength". In scholarly rankings conducted between 1948 and 2018, Coolidge has been rated as a below-average president. He is praised by advocates of smaller government and "laissez-faire", while supporters of an active central government generally view him less favorably, though most praise his stalwart support of racial equality. John Calvin Coolidge Jr. was born in Plymouth Notch, Windsor County, Vermont, on July 4, 1872, the only U.S. president to be born on Independence Day. He was the elder of the two children of John Calvin Coolidge Sr. (1845–1926) and Victoria Josephine Moor (1846–85). Coolidge Senior engaged in many occupations and developed a statewide reputation as a prosperous farmer, storekeeper, and public servant. He held various local offices, including justice of the peace and tax collector, and served in the Vermont House of Representatives as well as the Vermont Senate. Coolidge's mother was the daughter of a Plymouth Notch farmer. 
She was chronically ill and died, perhaps from tuberculosis, when Coolidge was twelve years old. His younger sister, Abigail Grace Coolidge (1875–1890), died at the age of fifteen, probably of appendicitis, when Coolidge was eighteen. Coolidge's father married a Plymouth schoolteacher in 1891, and lived to the age of eighty. Coolidge's family had deep roots in New England; his earliest American ancestor, John Coolidge, emigrated from Cottenham, Cambridgeshire, England, around 1630 and settled in Watertown, Massachusetts. Another ancestor, Edmund Rice, arrived at Watertown in 1638. Coolidge's great-great-grandfather, also named John Coolidge, was an American military officer in the Revolutionary War and one of the first selectmen of the town of Plymouth Notch. His grandfather Calvin Galusha Coolidge served in the Vermont House of Representatives. Coolidge was also a descendant of Samuel Appleton, who settled in Ipswich and led the colony during King Philip's War. Coolidge attended Black River Academy and then St. Johnsbury Academy, before enrolling at Amherst College, where he distinguished himself in the debating class. As a senior he joined the fraternity Phi Gamma Delta and graduated "cum laude". While at Amherst, Coolidge was profoundly influenced by philosophy professor Charles Edward Garman, a Congregational mystic with a neo-Hegelian philosophy. Coolidge explained Garman's ethics forty years later: [T]here is a standard of righteousness that might does not make right, that the end does not justify the means, and that expediency as a working principle is bound to fail. The only hope of perfecting human relationships is in accordance with the law of service under which men are not so solicitous about what they shall get as they are about what they shall give. Yet people are entitled to the rewards of their industry. What they earn is theirs, no matter how small or how great. But the possession of property carries the obligation to use it in a larger service... At his father's urging after graduation, Coolidge moved to Northampton, Massachusetts, to become a lawyer. To avoid the cost of law school, Coolidge followed the common practice of apprenticing with a local law firm, Hammond & Field, and reading law with them. John C. Hammond and Henry P. Field, both Amherst graduates, introduced Coolidge to law practice in the county seat of Hampshire County. In 1897, Coolidge was admitted to the Massachusetts bar, becoming a country lawyer. With his savings and a small inheritance from his grandfather, Coolidge opened his own law office in Northampton in 1898. He practiced commercial law, believing that he served his clients best by staying out of court. As his reputation as a hard-working and diligent attorney grew, local banks and other businesses began to retain his services. In 1903, Coolidge met Grace Anna Goodhue, a University of Vermont graduate and teacher at Northampton's Clarke School for the Deaf. They married on October 4, 1905, at 2:30 p.m. in a small ceremony which took place in the parlor of Grace's family's house, following a vain effort at postponement by Grace's mother; she was never enamored with Coolidge, nor he with her. The newlyweds went on a honeymoon trip to Montreal, originally planned for two weeks but cut short by a week at Coolidge's request. After 25 years he wrote of Grace, "for almost a quarter of a century she has borne with my infirmities and I have rejoiced in her graces". The Coolidges had two sons: John (September 7, 1906 – May 31, 2000) and Calvin Jr. 
(April 13, 1908 – July 7, 1924). The death of Calvin Jr. at age 16 from blood poisoning brought on by an infected blister "hurt [Coolidge] terribly," according to son John. John became a railroad executive, helped to start the Coolidge Foundation, and was instrumental in creating the President Calvin Coolidge State Historic Site. Coolidge was frugal, and when it came to securing a home, he insisted upon renting. Henry Field had a pew in Edwards Congregational Church, and Grace was a member, but Coolidge never formally joined the congregation. The Republican Party was dominant in New England at the time, and Coolidge followed the example of Hammond and Field by becoming active in local politics. In 1896, Coolidge campaigned for Republican presidential candidate William McKinley, and the next year he was selected to be a member of the Republican City Committee. In 1898, he won election to the City Council of Northampton, placing second in a ward where the top three candidates were elected. The position offered no salary but provided Coolidge invaluable political experience. In 1899, he declined renomination, running instead for City Solicitor, a position elected by the City Council. He was elected for a one-year term in 1900, and reelected in 1901. This position gave Coolidge more experience as a lawyer and paid a salary of $600. In 1902, the city council selected a Democrat for city solicitor, and Coolidge returned to private practice. Soon thereafter, however, the clerk of courts for the county died, and Coolidge was chosen to replace him. The position paid well, but it barred him from practicing law, so he remained at the job for only one year. In 1904, Coolidge suffered his sole defeat at the ballot box, losing an election to the Northampton school board. When told that some of his neighbors voted against him because he had no children in the schools he would govern, Coolidge replied, "Might give me time!" In 1906, the local Republican committee nominated Coolidge for election to the state House of Representatives. He won a close victory over the incumbent Democrat, and reported to Boston for the 1907 session of the Massachusetts General Court. In his freshman term, Coolidge served on minor committees and, although he usually voted with the party, was known as a Progressive Republican, voting in favor of such measures as women's suffrage and the direct election of Senators. While in Boston, Coolidge became an ally, and then a liegeman, of then U.S. Senator Winthrop Murray Crane who controlled the western faction of the Massachusetts Republican Party; Crane's party rival in the east of the commonwealth was U.S. Senator Henry Cabot Lodge. Coolidge forged another key strategic alliance with Guy Currier, who had served in both state houses and had the social distinction, wealth, personal charm and broad circle of friends which Coolidge lacked, and which would have a lasting impact on his political career. In 1907, he was elected to a second term, and in the 1908 session Coolidge was more outspoken, though not in a leadership position. Instead of vying for another term in the State House, Coolidge returned home to his growing family and ran for mayor of Northampton when the incumbent Democrat retired. He was well liked in the town, and defeated his challenger by a vote of 1,597 to 1,409. During his first term (1910 to 1911), he increased teachers' salaries and retired some of the city's debt while still managing to effect a slight tax decrease. 
He was renominated in 1911, and defeated the same opponent by a slightly larger margin. In 1911, the State Senator for the Hampshire County area retired and successfully encouraged Coolidge to run for his seat for the 1912 session; Coolidge defeated his Democratic opponent by a large margin. At the start of that term, he became chairman of a committee to arbitrate the "Bread and Roses" strike by the workers of the American Woolen Company in Lawrence, Massachusetts. After two tense months, the company agreed to the workers' demands, in a settlement proposed by the committee. A major issue affecting Massachusetts Republicans that year was the party split between the progressive wing, which favored Theodore Roosevelt, and the conservative wing, which favored William Howard Taft. Although he favored some progressive measures, Coolidge refused to leave the Republican party. When the new Progressive Party declined to run a candidate in his state senate district, Coolidge won reelection against his Democratic opponent by an increased margin. In the 1913 session, Coolidge won wide recognition for navigating the Western Trolley Act to passage; the act connected Northampton with a dozen similar industrial communities in western Massachusetts. Coolidge intended to retire after his second term, as was the custom, but when the President of the State Senate, Levi H. Greenwood, considered running for Lieutenant Governor, Coolidge decided to run again for the Senate in the hopes of being elected as its presiding officer. Although Greenwood later decided to run for reelection to the Senate, he was defeated primarily due to his opposition to women's suffrage; Coolidge, who favored the women's vote, won his own re-election and, with Crane's help, assumed the presidency of a closely divided Senate. After his election in January 1914, Coolidge delivered a published and frequently quoted speech entitled "Have Faith in Massachusetts", which summarized his philosophy of government. Coolidge's speech was well received, and he attracted some admirers on its account; towards the end of the term, many of them were proposing his name for nomination to lieutenant governor. After winning reelection to the Senate by an increased margin in the 1914 elections, Coolidge was reelected unanimously to be President of the Senate. Coolidge's supporters, led by fellow Amherst alumnus Frank Stearns, encouraged him again to run for lieutenant governor. Stearns, an executive with the Boston department store R. H. Stearns, became another key ally, and began a publicity campaign on Coolidge's behalf before he announced his candidacy at the end of the 1915 legislative session. Coolidge entered the primary election for lieutenant governor and was nominated to run alongside gubernatorial candidate Samuel W. McCall. Coolidge was the leading vote-getter in the Republican primary, and balanced the Republican ticket by adding a western presence to McCall's eastern base of support. McCall and Coolidge won the 1915 election to their respective one-year terms, with Coolidge defeating his opponent by more than 50,000 votes. In Massachusetts, the lieutenant governor does not preside over the state Senate, as is the case in many other states; nevertheless, as lieutenant governor, Coolidge was a deputy governor functioning as administrative inspector and was a member of the governor's council. He was also chairman of the finance committee and the pardons committee. 
As a full-time elected official, Coolidge discontinued his law practice in 1916, though his family continued to live in Northampton. McCall and Coolidge were both reelected in 1916 and again in 1917. When McCall decided that he would not stand for a fourth term, Coolidge announced his intention to run for governor. Coolidge was unopposed for the Republican nomination for Governor of Massachusetts in 1918. He and his running mate, Channing Cox, a Boston lawyer and Speaker of the Massachusetts House of Representatives, ran on the previous administration's record: fiscal conservatism, a vague opposition to Prohibition, support for women's suffrage, and support for American involvement in World War I. The issue of the war proved divisive, especially among Irish and German Americans. Coolidge was elected by a margin of 16,773 votes over his opponent, Richard H. Long, the smallest margin of victory of any of his statewide campaigns. In 1919, in reaction to a plan of the policemen of the Boston Police Department to register with a union, Police Commissioner Edwin U. Curtis announced that such an act would not be tolerated. In August of that year, the American Federation of Labor issued a charter to the Boston Police Union. Curtis declared the union's leaders were guilty of insubordination and would be relieved of duty, but indicated he would cancel their suspension if the union was dissolved by September 4. The mayor of Boston, Andrew Peters, convinced Curtis to delay his action for a few days, but with no results, and Curtis suspended the union leaders on September 8. The following day, about three-quarters of the policemen in Boston went on strike. Coolidge, tacitly but fully in support of Curtis' position, closely monitored the situation but initially deferred to the local authorities. He anticipated that only a resulting measure of lawlessness could sufficiently prompt the public to understand and appreciate the controlling principle – that a policeman does not strike. That night and the next, there was sporadic violence and rioting in the unruly city. Peters, concerned about sympathy strikes by the firemen and others, called up some units of the Massachusetts National Guard stationed in the Boston area pursuant to an old and obscure legal authority, and relieved Curtis of duty. Coolidge, sensing that the severity of the circumstances now required his intervention, conferred with Crane's operative, William Butler, and then acted. He called up more units of the National Guard, restored Curtis to office, and took personal control of the police force. Curtis proclaimed that all of the strikers were fired from their jobs, and Coolidge called for a new police force to be recruited. That night Coolidge received a telegram from AFL leader Samuel Gompers. "Whatever disorder has occurred", Gompers wrote, "is due to Curtis's order in which the right of the policemen has been denied…" Coolidge publicly answered Gompers's telegram, denying any justification whatsoever for the strike, and his response launched him into the national consciousness. Newspapers across the nation picked up on Coolidge's statement and he became the newest hero to opponents of the strike. In the midst of the First Red Scare, many Americans were terrified of the spread of communist revolutions like those that had taken place in Russia, Hungary, and Germany. While Coolidge had lost some friends among organized labor, conservatives across the nation had seen a rising star. 
Although he usually acted with deliberation, the Boston police strike gave him a national reputation as a decisive leader, and as a strict enforcer of law and order. Coolidge and Cox were renominated for their respective offices in 1919. By this time Coolidge's supporters (especially Stearns) had publicized his actions in the Police Strike around the state and the nation and some of Coolidge's speeches were published in book form. He faced the same opponent as in 1918, Richard Long, but this time Coolidge defeated him by 125,101 votes, more than seven times his margin of victory from a year earlier. His actions in the police strike, combined with the massive electoral victory, led to suggestions that Coolidge run for president in 1920. By the time Coolidge was inaugurated on January 2, 1919, the First World War had ended, and Coolidge pushed the legislature to give a $100 bonus to Massachusetts veterans. He also signed a bill reducing the work week for women and children from fifty-four hours to forty-eight, saying, "We must humanize the industry, or the system will break down." He signed into law a budget that kept the tax rates the same, while trimming $4 million from expenditures, thus allowing the state to retire some of its debt. Coolidge also wielded the veto pen as governor. His most publicized veto prevented a 50% increase in legislators' pay. Although Coolidge was personally opposed to Prohibition, in May 1920 he vetoed a bill that would have allowed the sale in Massachusetts of beer or wine of 2.75% alcohol or less, in violation of the Eighteenth Amendment to the United States Constitution. "Opinions and instructions do not outmatch the Constitution," he said in his veto message. "Against it, they are void." At the 1920 Republican National Convention, most of the delegates were selected by state party conventions, not primaries. As such, the field was divided among many local favorites. Coolidge was one such candidate, and while he placed as high as sixth in the voting, the powerful party bosses running the convention, primarily the party's U.S. Senators, never considered him seriously. After ten ballots, the bosses and then the delegates settled on Senator Warren G. Harding of Ohio as their nominee for president. When the time came to select a vice presidential nominee, the bosses also made and announced their decision on whom they wanted – Sen. Irvine Lenroot of Wisconsin – and then prematurely departed after his name was put forth, relying on the rank and file to confirm their decision. A delegate from Oregon, Wallace McCamant, having read "Have Faith in Massachusetts", proposed Coolidge for vice president instead. The suggestion caught on quickly with the delegates, who were eager for an act of independence from the absent bosses, and Coolidge was unexpectedly nominated. The Democrats nominated another Ohioan, James M. Cox, for president and the Assistant Secretary of the Navy, Franklin D. Roosevelt, for vice president. The question of the United States joining the League of Nations was a major issue in the campaign, as was the unfinished legacy of Progressivism. Harding ran a "front-porch" campaign from his home in Marion, Ohio, but Coolidge took to the campaign trail in the Upper South, New York, and New England – his audiences carefully limited to those familiar with Coolidge and those placing a premium upon concise and short speeches. 
On November 2, 1920, Harding and Coolidge were victorious in a landslide, winning more than 60 percent of the popular vote and carrying every state outside the South. They also won in Tennessee, the first time a Republican ticket had won a Southern state since Reconstruction. The U.S. vice presidency did not carry many official duties, but Coolidge was invited by President Harding to attend cabinet meetings, making him the first vice president to do so. He gave a number of unremarkable speeches around the country. As the U.S. vice president, Coolidge and his vivacious wife Grace were invited to quite a few parties, where the legend of "Silent Cal" was born. It is from this time that most of the jokes and anecdotes involving Coolidge originate. Although Coolidge was known to be a skilled and effective public speaker, in private he was a man of few words and was commonly referred to as "Silent Cal". A possibly apocryphal story has it that a matron, seated next to him at a dinner, said to him, "I made a bet today that I could get more than two words out of you." He replied, "You lose." Dorothy Parker, upon learning that Coolidge had died, reportedly remarked, "How can they tell?" Coolidge often seemed uncomfortable among fashionable Washington society; when asked why he continued to attend so many of their dinner parties, he replied, "Got to eat somewhere." Alice Roosevelt Longworth, a leading Republican wit, underscored Coolidge's silence and his dour personality: "When he wished he were elsewhere, he pursed his lips, folded his arms, and said nothing. He looked then precisely as though he had been weaned on a pickle." As president, Coolidge's reputation as a quiet man continued. "The words of a President have an enormous weight," he would later write, "and ought not to be used indiscriminately." Coolidge was aware of his stiff reputation; indeed, he cultivated it. "I think the American people want a solemn ass as a President," he once told Ethel Barrymore, "and I think I will go along with them." Some historians would later suggest that Coolidge's image was created deliberately as a campaign tactic, while others believe his withdrawn and quiet behavior to be natural, deepening after the death of his son in 1924. On August 2, 1923, President Harding died unexpectedly in San Francisco while on a speaking tour of the western United States. Vice President Coolidge was in Vermont visiting his family home, which had neither electricity nor a telephone, when he received word by messenger of Harding's death. The new president dressed, said a prayer, and came downstairs to greet the reporters who had assembled. His father, a notary public and justice of the peace, administered the oath of office in the family's parlor by the light of a kerosene lamp at 2:47 a.m. on August 3, 1923; President Coolidge then went back to bed. Coolidge returned to Washington the next day, and was sworn in again by Justice Adolph A. Hoehling Jr. of the Supreme Court of the District of Columbia, to forestall any questions about the authority of a state official to administer a federal oath. This second oath-taking remained a secret until it was revealed by Harry M. Daugherty in 1932, and confirmed by Hoehling. When Hoehling confirmed Daugherty's story, he indicated that Daugherty, then serving as United States Attorney General, asked him to administer the oath without fanfare at the Willard Hotel. 
According to Hoehling, he did not question Daugherty's reason for requesting a second oath-taking but assumed it was to resolve any doubt about whether the first swearing-in was valid. The nation initially did not know what to make of Coolidge, who had maintained a low profile in the Harding administration; many had even expected him to be replaced on the ballot in 1924. Coolidge believed that those of Harding's men under suspicion were entitled to every presumption of innocence, taking a methodical approach to the scandals, principally the Teapot Dome scandal, while others clamored for rapid punishment of those they presumed guilty. Coolidge thought the Senate investigations of the scandals would suffice; this was affirmed by the resulting resignations of those involved. He personally intervened in demanding the resignation of Attorney General Harry M. Daugherty after he refused to cooperate with the congressional probe. He then set about to confirm that no loose ends remained in the administration, arranging for a full briefing on the wrongdoing. Harry A. Slattery reviewed the facts with him, Harlan F. Stone analyzed the legal aspects for him and Senator William E. Borah assessed and presented the political factors. Coolidge addressed Congress when it reconvened on December 6, 1923, giving a speech that supported many of Harding's policies, including Harding's formal budgeting process, the enforcement of immigration restrictions and arbitration of coal strikes ongoing in Pennsylvania. Coolidge's speech was the first presidential speech to be broadcast over the radio. The Washington Naval Treaty was proclaimed just one month into Coolidge's term, and was generally well received in the country. In May 1924, the World War I veterans' World War Adjusted Compensation Act or "Bonus Bill" was passed over his veto. Coolidge signed the Immigration Act later that year, which was aimed at restricting southern and eastern European immigration, but appended a signing statement expressing his unhappiness with the bill's specific exclusion of Japanese immigrants. Just before the Republican Convention began, Coolidge signed into law the Revenue Act of 1924, which reduced the top marginal tax rate from 58% to 46%, as well as personal income tax rates across the board, increased the estate tax and bolstered it with a new gift tax. On June 2, 1924, Coolidge signed the act granting citizenship to all Native Americans born in the United States. By that time, two-thirds of the people were already citizens, having gained it through marriage, military service (veterans of World War I were granted citizenship in 1919), or the land allotments that had earlier taken place. The Republican Convention was held on June 10–12, 1924, in Cleveland, Ohio; Coolidge was nominated on the first ballot. The convention nominated Frank Lowden of Illinois for vice president on the second ballot, but he declined; former Brigadier General Charles G. Dawes was nominated on the third ballot and accepted. The Democrats held their convention the next month in New York City. The convention soon deadlocked, and after 103 ballots, the delegates finally agreed on a compromise candidate, John W. Davis, with Charles W. Bryan nominated for vice president. The Democrats' hopes were buoyed when Robert M. La Follette Sr., a Republican senator from Wisconsin, split from the GOP to form a new Progressive Party. Many believed that the split in the Republican party, like the one in 1912, would allow a Democrat to win the presidency. 
After the conventions and the death of his younger son Calvin, Coolidge became withdrawn; he later said that "when he [the son] died, the power and glory of the Presidency went with him." Even as he mourned, Coolidge ran his standard campaign, not mentioning his opponents by name or maligning them, and delivering speeches on his theory of government, including several that were broadcast over radio. It was the most subdued campaign since 1896, partly because of Coolidge's grief, but also because of his naturally non-confrontational style. The other candidates campaigned in a more modern fashion, but despite the split in the Republican party, the results were similar to those of 1920. Coolidge and Dawes won every state outside the South except Wisconsin, La Follette's home state. Coolidge won the election with 382 electoral votes and the popular vote by 2.5 million over his opponents' combined total. During Coolidge's presidency, the United States experienced a period of rapid economic growth known as the "Roaring Twenties." He left the administration's industrial policy in the hands of his activist Secretary of Commerce, Herbert Hoover, who energetically used government auspices to promote business efficiency and develop airlines and radio. Coolidge disdained regulation, and demonstrated this by appointing commissioners to the Federal Trade Commission and the Interstate Commerce Commission who did little to restrict the activities of businesses under their jurisdiction. The regulatory state under Coolidge was, as one biographer described it, "thin to the point of invisibility." Historian Robert Sobel offers some context of Coolidge's laissez-faire ideology, based on the prevailing understanding of federalism during his presidency: "As Governor of Massachusetts, Coolidge supported wages and hours legislation, opposed child labor, imposed economic controls during World War I, favored safety measures in factories, and even worker representation on corporate boards. Did he support these measures while president? No, because in the 1920s, such matters were considered the responsibilities of state and local governments." Coolidge adopted the taxation policies of his Secretary of the Treasury, Andrew Mellon, who advocated "scientific taxation" — the notion that lowering taxes will increase, rather than decrease, government receipts. Congress agreed, and tax rates were reduced in Coolidge's term. In addition to federal tax cuts, Coolidge proposed reductions in federal expenditures and retiring of the federal debt. Coolidge's ideas were shared by the Republicans in Congress, and in 1924, Congress passed the Revenue Act of 1924, which reduced income tax rates and eliminated all income taxation for some two million people. They reduced taxes again by passing the Revenue Acts of 1926 and 1928, all the while continuing to keep spending down so as to reduce the overall federal debt. By 1927, only the wealthiest 2% of taxpayers paid any federal income tax. Federal spending remained flat during Coolidge's administration, allowing one-fourth of the federal debt to be retired in total. State and local governments saw considerable growth, however, surpassing the federal budget in 1927. Perhaps the most contentious issue of Coolidge's presidency was relief for farmers. Some in Congress proposed a bill designed to fight falling agricultural prices by allowing the federal government to purchase crops to sell abroad at lower prices. Agriculture Secretary Henry C. 
Wallace and other administration officials favored the bill when it was introduced in 1924, but rising prices convinced many in Congress that the bill was unnecessary, and it was defeated just before the elections that year. In 1926, with farm prices falling once more, Senator Charles L. McNary and Representative Gilbert N. Haugen—both Republicans—proposed the McNary–Haugen Farm Relief Bill. The bill proposed a federal farm board that would purchase surplus production in high-yield years and hold it (when feasible) for later sale or sell it abroad. Coolidge opposed McNary-Haugen, declaring that agriculture must stand "on an independent business basis," and said that "government control cannot be divorced from political control." Instead of manipulating prices, he favored Herbert Hoover's proposal to increase profitability by modernizing agriculture. Secretary Mellon wrote a letter denouncing the McNary-Haugen measure as unsound and likely to cause inflation, and it was defeated. After McNary-Haugen's defeat, Coolidge supported a less radical measure, the Curtis-Crisp Act, which would have created a federal board to lend money to farm co-operatives in times of surplus; the bill did not pass. In February 1927, Congress took up the McNary-Haugen bill again, this time narrowly passing it, and Coolidge vetoed it. In his veto message, he expressed the belief that the bill would do nothing to help farmers, benefiting only exporters and expanding the federal bureaucracy. Congress did not override the veto, but it passed the bill again in May 1928 by an increased majority; again, Coolidge vetoed it. "Farmers never have made much money," said Coolidge, the Vermont farmer's son. "I do not believe we can do much about it." Coolidge has often been criticized for his actions during the Great Mississippi Flood of 1927, the worst natural disaster to hit the Gulf Coast until Hurricane Katrina in 2005. Although he did eventually name Secretary Hoover to a commission in charge of flood relief, scholars argue that Coolidge overall showed a lack of interest in federal flood control. Coolidge did not believe that personally visiting the region after the floods would accomplish anything, and thought that it would be seen as mere political grandstanding. He also did not want to incur the federal spending that flood control would require; he believed property owners should bear much of the cost. On the other hand, Congress wanted a bill that would place the federal government completely in charge of flood mitigation. When Congress passed a compromise measure in 1928, Coolidge declined to take credit for it and signed the bill in private on May 15. According to one biographer, Coolidge was "devoid of racial prejudice," but rarely took the lead on civil rights. Coolidge disliked the Ku Klux Klan and no Klansman is known to have received an appointment from him. In the 1924 presidential election his opponents (Robert La Follette and John Davis), and his running mate Charles Dawes, often attacked the Klan but Coolidge avoided the subject. Coolidge spoke in favor of the civil rights of African-Americans, saying in his first State of the Union address that their rights were "just as sacred as those of any other citizen" under the U.S. Constitution and that it was a "public and a private duty to protect those rights." Coolidge repeatedly called for laws to make lynching a federal crime (it was already a state crime). Congress refused to pass any such legislation. 
On June 2, 1924, Coolidge signed the Indian Citizenship Act, which granted U.S. citizenship to all American Indians living on reservations (Indians off reservations had long been citizens). On June 6, 1924, Coolidge delivered a commencement address at historically black, non-segregated Howard University, in which he thanked and commended African-Americans for their rapid advances in education and their contributions to U.S. society over the years, as well as their eagerness to render their services as soldiers in the World War, all while being faced with discrimination and prejudices at home. In a speech in October 1924, Coolidge stressed tolerance of differences as an American value and thanked immigrants for their contributions to U.S. society, saying that they have "contributed much to making our country what it is." He stated that although the diversity of peoples had been a detrimental source of conflict and tension in Europe, in the United States it was, peculiarly, a "harmonious" benefit for the country. Coolidge further stated the United States should assist and help immigrants who come to the country, and urged immigrants to reject "race hatreds" and "prejudices". Although not an isolationist, Coolidge was reluctant to enter into foreign alliances. He considered the 1920 Republican victory as a rejection of the Wilsonian position that the United States should join the League of Nations. While not completely opposed to the idea, Coolidge believed the League, as then constituted, did not serve American interests, and he did not advocate membership. He spoke in favor of the United States joining the Permanent Court of International Justice (World Court), provided that the nation would not be bound by advisory decisions. In 1926, the Senate eventually approved joining the Court (with reservations). The League of Nations accepted the reservations, but it suggested some modifications of its own. The Senate failed to act; the United States never joined the World Court. Coolidge's primary initiative was the Kellogg–Briand Pact of 1928, named for Coolidge's Secretary of State, Frank B. Kellogg, and French foreign minister Aristide Briand. The treaty, ratified in 1929, committed signatories—the United States, the United Kingdom, France, Germany, Italy, and Japan—to "renounce war, as an instrument of national policy in their relations with one another." The treaty did not achieve its intended result—the outlawry of war—but it did provide the founding principle for international law after World War II. Coolidge continued the previous administration's policy of withholding recognition of the Soviet Union. He also continued the United States' support for the elected government of Mexico against the rebels there, lifting the arms embargo on that country. He sent Dwight Morrow to Mexico as the American ambassador. The United States' occupation of Nicaragua and Haiti continued under his administration, but Coolidge withdrew American troops from the Dominican Republic in 1924. Coolidge led the U.S. delegation to the Sixth International Conference of American States, January 15–17, 1928, in Havana, Cuba. This was the only international trip Coolidge made during his presidency. There, he extended an olive branch to Latin American leaders embittered over America's interventionist policies in Central America and the Caribbean. He remained the only sitting president to have visited Cuba for the next 88 years, until Barack Obama did so in 2016. 
In the summer of 1927, Coolidge vacationed in the Black Hills of South Dakota, where he engaged in horseback riding and fly fishing and attended rodeos. He made Custer State Park his "summer White House." While on vacation, Coolidge surprisingly issued a terse statement that he would not seek a second full term as president: "I do not choose to run for President in 1928." After allowing the reporters to take that in, Coolidge elaborated. "If I take another term, I will be in the White House till 1933 … Ten years in Washington is longer than any other man has had it—too long!" In his memoirs, Coolidge explained his decision not to run: "The Presidential office takes a heavy toll of those who occupy it and those who are dear to them. While we should not refuse to spend and be spent in the service of our country, it is hazardous to attempt what we feel is beyond our strength to accomplish." After leaving office, he and Grace returned to Northampton, where he wrote his memoirs. The Republicans retained the White House in 1928, when Herbert Hoover won in a landslide. Coolidge had been reluctant to endorse Hoover as his successor; on one occasion he remarked that "for six years that man has given me unsolicited advice—all of it bad." Even so, Coolidge had no desire to split the party by publicly opposing the nomination of the popular commerce secretary. Although a few of Harding's cabinet appointees were scandal-tarred, Coolidge initially retained all of them, out of an ardent conviction that as successor to a deceased elected president he was obligated to retain Harding's counselors and policies until the next election. He kept Harding's able speechwriter Judson T. Welliver; Stuart Crawford replaced Welliver in November 1925. Coolidge appointed C. Bascom Slemp, a Virginia Congressman and experienced federal politician, to work jointly with Edward T. Clark, a Massachusetts Republican organizer whom he retained from his vice-presidential staff, as Secretaries to the President (a position equivalent to the modern White House Chief of Staff). Perhaps the most powerful person in Coolidge's Cabinet was Secretary of the Treasury Andrew Mellon, who controlled the administration's financial policies and was regarded by many, including House Minority Leader John Nance Garner, as more powerful than Coolidge himself. Secretary of Commerce Herbert Hoover also held a prominent place in Coolidge's Cabinet, in part because Coolidge found value in Hoover's ability to win positive publicity with his pro-business proposals. Secretary of State Charles Evans Hughes directed Coolidge's foreign policy until he resigned in 1925 following Coolidge's re-election. He was replaced by Frank B. Kellogg, who had previously served as a Senator and as the ambassador to Great Britain. Coolidge made two other appointments following his re-election, with William M. Jardine taking the position of Secretary of Agriculture and John G. Sargent becoming Attorney General. Coolidge did not have a vice president during his first term, but Charles Dawes became vice president during Coolidge's second term, and Dawes and Coolidge clashed over farm policy and other issues. Coolidge appointed one justice to the Supreme Court of the United States, Harlan Fiske Stone, in 1925. Stone was Coolidge's fellow Amherst alumnus, a Wall Street lawyer and conservative Republican. Stone was serving as dean of Columbia Law School when Coolidge appointed him attorney general in 1924 to restore the reputation of the office, which had been tarnished by Harding's Attorney General, Harry M. 
Daugherty. It does not appear that Coolidge considered appointing anyone other than Stone, although Stone himself had urged Coolidge to appoint Benjamin N. Cardozo. Stone proved to be a firm believer in judicial restraint and was regarded as one of the court's three liberal justices who would often vote to uphold New Deal legislation. President Franklin D. Roosevelt later appointed Stone to be chief justice. Coolidge nominated 17 judges to the United States Courts of Appeals, and 61 judges to the United States district courts. He appointed judges to various specialty courts as well, including Genevieve R. Cline, who became the first woman named to the federal judiciary when Coolidge placed her on the United States Customs Court in 1928. Coolidge also signed the Judiciary Act of 1925 into law, allowing the Supreme Court more discretion over its workload. After his presidency, Coolidge retired to the modest rented house on residential Massasoit Street in Northampton before moving to a more spacious home, "The Beeches." He kept a Hacker runabout boat on the Connecticut River and was often observed on the water by local boating enthusiasts. During this period, he also served as chairman of the Non-Partisan Railroad Commission, an entity created by several banks and corporations to survey the country's long-term transportation needs and make recommendations for improvements. He was the honorary president of the American Foundation for the Blind, a director of New York Life Insurance Company, president of the American Antiquarian Society, and a trustee of Amherst College. Coolidge published his autobiography in 1929 and wrote a syndicated newspaper column, "Calvin Coolidge Says," from 1930 to 1931. Faced with looming defeat in the 1932 presidential election, some Republicans spoke of rejecting Herbert Hoover as their party's nominee, and instead drafting Coolidge to run, but the former president made it clear that he was not interested in running again, and that he would publicly repudiate any effort to draft him, should it come about. Hoover was renominated, and Coolidge made several radio addresses in support of him. Hoover then lost the general election to Coolidge's 1920 vice presidential Democratic opponent Franklin D. Roosevelt in a landslide. Coolidge died suddenly from coronary thrombosis at "The Beeches," at 12:45 p.m., January 5, 1933. Shortly before his death, Coolidge confided to an old friend: "I feel I no longer fit in with these times." Coolidge is buried in Plymouth Notch Cemetery, Plymouth Notch, Vermont. The nearby family home is maintained as one of the original buildings on the Calvin Coolidge Homestead District site. The State of Vermont dedicated a new visitors' center nearby to mark Coolidge's 100th birthday on July 4, 1972. Despite his reputation as a quiet and even reclusive politician, Coolidge made use of the new medium of radio and made radio history several times while president. He made himself available to reporters, giving 520 press conferences, meeting with reporters more regularly than any president before or since. Coolidge's second inauguration was the first presidential inauguration broadcast on radio. On December 6, 1923, he was the first president whose address to Congress was broadcast on radio. Coolidge signed the Radio Act of 1927, which assigned regulation of radio to the newly created Federal Radio Commission. On August 11, 1924, Theodore W. 
Case, using the Phonofilm sound-on-film process he developed for Lee DeForest, filmed Coolidge on the White House lawn, making Coolidge the first president to appear in a sound film. The title of the DeForest film was "President Coolidge, Taken on the White House Grounds". When Charles Lindbergh arrived in Washington on a U.S. Navy ship after his celebrated 1927 trans-Atlantic flight, President Coolidge welcomed him back to the U.S.; a sound-on-film record of the event exists. Claudio Monteverdi Claudio Giovanni Antonio Monteverdi (baptized 15 May 1567 – 29 November 1643) was an Italian composer, string player and choirmaster. A composer of both secular and sacred music, and a pioneer in the development of opera, he is considered a crucial transitional figure between the Renaissance and the Baroque periods of music history. Born in Cremona, where he undertook his first musical studies and compositions, Monteverdi developed his career first at the court of Mantua (c. 1590–1613) and then until his death in the Republic of Venice, where he was "maestro di capella" at the basilica of San Marco. His surviving letters give insight into the life of a professional musician in Italy of the period, including problems of income, patronage and politics. Much of Monteverdi's output, including many stage works, has been lost. His surviving music includes nine books of madrigals, large-scale sacred works such as his "Vespro della Beata Vergine" ("Vespers") of 1610, and three complete operas. His opera "L'Orfeo" (1607) is the earliest of the genre still widely performed; towards the end of his life he wrote works for the commercial theatre in Venice, including "Il ritorno d'Ulisse in patria" and "L'incoronazione di Poppea". While he worked extensively in the tradition of earlier Renaissance polyphony, such as in his madrigals, he undertook great developments in form and melody, and began to employ the basso continuo technique, distinctive of the Baroque. No stranger to controversy, he defended his sometimes novel techniques as elements of a "seconda pratica", contrasting with the more orthodox earlier style which he termed the "prima pratica". Largely forgotten during the eighteenth and much of the nineteenth centuries, his works enjoyed a rediscovery around the beginning of the twentieth century. He is now established both as a significant influence in European musical history and as a composer whose works are regularly performed and recorded. Monteverdi is usually described as an "Italian" composer, even though in his lifetime the concept of "Italy" existed only as a geographical entity. Although the inhabitants of the peninsula shared much in common in terms of history, culture and language, in political terms the people experienced various layers of (mostly foreign) authority and jurisdiction. In the first instance they were subject to the local rulers of their city-states, powerful families such as the Gonzagas and the Medicis. Above them were the imperial powers – in the sixteenth century primarily Spain – and also the authority of the Habsburgs of Vienna, in their role as Holy Roman Emperors, guardians of the Catholic faith. Monteverdi was baptised in the church of SS Nazaro e Celso, Cremona, on 15 May 1567. The register records his name as "Claudio Zuan Antonio", the son of "Messer Baldasar Mondeverdo". He was the first child of the apothecary Baldassare Monteverdi and his first wife Maddalena (née Zignani); they had married early the previous year. 
Claudio's brother Giulio Cesare Monteverdi (b. 1573) was also to become a musician; there were two other brothers and two sisters from Baldassare's marriage to Maddalena and his subsequent marriage in 1576 or 1577. Cremona lay under the jurisdiction of Milan, a Spanish possession, so that Monteverdi was technically born a Spanish subject. Cremona was close to the border of the Republic of Venice, and not far from the lands controlled by Mantua, in both of which states Monteverdi was later to establish his career. There is no clear record of Monteverdi's early musical training, or evidence that (as is sometimes claimed) he was a member of the Cathedral choir or studied at Cremona University. Monteverdi's first published work, a set of motets for three voices ("Sacred Songs"), was issued in Venice in 1582, when he was only fifteen years old. In this, and his other initial publications, he describes himself as the pupil of Marc'Antonio Ingegneri, who was from 1581 (and possibly from 1576) to 1592 the "maestro di cappella" at Cremona Cathedral. The musicologist Tim Carter deduces that Ingegneri "gave him a solid grounding in counterpoint and composition", and that Monteverdi would also have studied playing instruments of the viol family and singing. Monteverdi's first publications also give evidence of his connections beyond Cremona, even in his early years. His second published work, "Madrigali spirituali" (Spiritual Madrigals, 1583), was printed at Brescia. His next works (his first published secular compositions) were sets of five-part madrigals, according to his biographer Paolo Fabbri, "the inevitable proving ground for any composer of the second half of the sixteenth century ... the secular genre "par excellence"". The first book of madrigals (Venice, 1587) was dedicated to Count Marco Verità of Verona; the second book of madrigals (Venice, 1590) was dedicated to the President of the Senate of Milan, Giacomo Ricardi, for whom he had played the viola da braccio in 1587. In the dedication of his second book of madrigals, Monteverdi had described himself as a player of the "vivuola" (which could mean either viola da gamba or viola da braccio). In 1590 or 1591 he entered the service of Duke Vincenzo I Gonzaga of Mantua; he recalled in his dedication to the Duke of his third book of madrigals (Venice, 1592) that "the most noble exercise of the "vivuola" opened to me the fortunate way into your service." In the same dedication he compares his instrumental playing to "flowers" and his compositions to "fruit" which, as it matures, "can more worthily and more perfectly serve you", indicating his intentions to establish himself as a composer. Duke Vincenzo was keen to establish his court as a musical centre, and sought to recruit leading musicians. When Monteverdi arrived in Mantua, the "maestro di capella" at the court was the Flemish musician Giaches de Wert. Other notable musicians at the court during this period included the composer and violinist Salomone Rossi, Rossi's sister the singer Madama Europa, and the tenor Francesco Rasi. Monteverdi married the court singer Claudia de Cattaneis in 1599; they were to have three children, two sons (Francesco, b. 1601, and Massimiliano, b. 1604), and a daughter who died soon after birth in 1603. Monteverdi's brother Giulio Cesare joined the court musicians in 1602. 
When Wert died in 1596, his post was given to Benedetto Pallavicino, but Monteverdi was clearly highly regarded by Vincenzo and accompanied him on his military campaigns in Hungary (1595) and also on a visit to Flanders in 1599. Here at the town of Spa he is reported by his brother Giulio Cesare as encountering, and bringing back to Italy, the "canto alla francese". (The meaning of this, literally "song in the French style", is debatable, but may refer to the French-influenced poetry of Gabriello Chiabrera, some of which was set by Monteverdi in his "Scherzi musicali", and which departs from the traditional Italian style of lines of 9 or 11 syllables). Monteverdi may have been a member of Vincenzo's entourage at Florence in 1600 for the marriage of Maria de' Medici and Henry IV of France, at which celebrations Jacopo Peri's opera "Euridice" (the earliest surviving opera) was premiered. On the death of Pallavicino in 1601 Monteverdi was confirmed as the new "maestro di cappella". At the turn of the 17th century, Monteverdi found himself the target of musical controversy. The influential Bolognese theorist Giovanni Maria Artusi attacked Monteverdi's music (without naming the composer) in his work "L'Artusi, overo Delle imperfettioni della moderna musica (Artusi, or On the imperfections of modern music)" of 1600, followed by a sequel in 1603. Artusi cited extracts from Monteverdi's works not yet published (they later formed parts of his fourth and fifth books of madrigals of 1603 and 1605), condemning their use of harmony and their innovations in use of musical modes, compared to orthodox polyphonic practice of the sixteenth century. Artusi attempted to correspond with Monteverdi on these issues; the composer refused to respond, but found a champion in a pseudonymous supporter, "L'Ottuso Academico" ("The Obtuse Academic"). Eventually Monteverdi replied in the preface to the fifth book of madrigals that his duties at court prevented him from a detailed reply; but in a note to "the studious reader", he claimed that he would shortly publish a response, "Seconda Pratica, overo Perfettione della Moderna Musica (The Second Style, or Perfection of Modern Music)." This work never appeared, but a later publication by Claudio's brother Giulio Cesare made it clear that the "seconda pratica" which Monteverdi defended was not seen by him as radical change or his own invention, but was an evolution from previous styles ("prima pratica") which was complementary to them. This debate seems in any case to have raised the composer's profile, leading to reprints of his earlier books of madrigals. Some of his madrigals were published in Copenhagen in 1605 and 1606, and the poet Tommaso Stigliani published a eulogy of him in his 1605 poem "O sirene de' fiumi". The composer of madrigal comedies and theorist Adriano Banchieri wrote in 1609: "I must not neglect to mention the most noble of composers, Monteverdi ... his expressive qualities are truly deserving of the highest commendation, and we find in them countless examples of matchless declamation ... enhanced by comparable harmonies." The modern music historian Massimo Ossi has placed the Artusi issue in the context of Monteverdi's artistic development: "If the controversy seems to define Monteverdi's historical position, it also seems to have been about stylistic developments that by 1600 Monteverdi had already outgrown".
The non-appearance of Monteverdi's promised explanatory treatise may have been a deliberate ploy, since by 1608, by Monteverdi's reckoning, Artusi had become fully reconciled to modern trends in music, and the "seconda pratica" was by then well established; Monteverdi had no need to revisit the issue. On the other hand, letters to Giovanni Battista Doni of 1632 show that Monteverdi was still preparing a defence of the "seconda pratica", in a treatise entitled "Melodia"; he may still have been working on this at the time of his death ten years later. In 1606 Vincenzo's heir Francesco commissioned from Monteverdi the opera "L'Orfeo", to a libretto by Alessandro Striggio, for the Carnival season of 1607. It was given two performances in February and March 1607; the singers included, in the title role, Rasi, who had sung in the first performance of "Euridice" witnessed by Vincenzo in 1600. This was followed in 1608 by the opera "L'Arianna" (libretto by Ottavio Rinuccini), intended for the celebration of the marriage of Francesco to Margherita of Savoy. All the music for this opera is lost apart from "Ariadne's Lament", which became extremely popular. To this period also belongs the ballet entertainment "Il ballo delle ingrate". The strain of the hard work Monteverdi had been putting into these and other compositions was exacerbated by personal tragedies. His wife died in September 1607 and the young singer Caterina Martinelli, intended for the title role of "Arianna", died of smallpox in March 1608. Monteverdi also resented his increasingly poor financial treatment by the Gonzagas. He retired to Cremona in 1608 to convalesce, and wrote a bitter letter to Vincenzo's minister Annibale Chieppio in November of that year seeking (unsuccessfully) "an honourable dismissal". Although the Duke increased Monteverdi's salary and pension, and Monteverdi returned to continue his work at the court, he began to seek patronage elsewhere. After publishing his "Vespers" in 1610, which were dedicated to Pope Paul V, he visited Rome, ostensibly hoping to place his son Francesco at a seminary, but apparently also seeking alternative employment. In the same year he may also have visited Venice, where a large collection of his church music was being printed, with a similar intention. Duke Vincenzo died on 18 February 1612. When Francesco succeeded him, court intrigues and cost-cutting led to the dismissal of Monteverdi and his brother Giulio Cesare, who both returned, almost penniless, to Cremona. Despite Francesco's own death from smallpox in December 1612, Monteverdi was unable to return to favour with his successor, his brother Cardinal Ferdinando Gonzaga. In 1613, following the death of Giulio Cesare Martinengo, Monteverdi auditioned for his post as "maestro" at the basilica of San Marco in Venice, for which he submitted music for a Mass. He was appointed in August 1613, and given 50 ducats for his expenses (of which he was robbed, together with his other belongings, by highwaymen at Sanguinetto on his return to Cremona). Martinengo had been ill for some time before his death and had left the music of San Marco in a fragile state. The choir had been neglected and the administration overlooked. When Monteverdi arrived to take up his post, his principal responsibility was to recruit, train, discipline and manage the musicians of San Marco (the "cappella"), who amounted to about 30 singers and six instrumentalists; the numbers could be increased for major events.
Among the recruits to the choir was Francesco Cavalli, who joined in 1616 at the age of 14; he was to remain connected with San Marco throughout his life, and was to develop a close association with Monteverdi. Monteverdi also sought to expand the repertory, including not only the traditional "a cappella" repertoire of Roman and Flemish composers, but also examples of the modern style which he favoured, including the use of continuo and other instruments. Apart from this he was of course expected to compose music for all the major feasts of the church. This included a new mass each year for Holy Cross Day and Christmas Eve, cantatas in honour of the Venetian Doge, and numerous other works (many of which are lost). Monteverdi was also free to obtain income by providing music for other Venetian churches and for other patrons, and was frequently commissioned to provide music for state banquets. The Procurators of San Marco, to whom Monteverdi was directly responsible, showed their satisfaction with his work in 1616 by raising his annual salary from 300 ducats to 400. The relative freedom which the Republic of Venice afforded him, compared to the problems of court politics in Mantua, is reflected in Monteverdi's letters to Striggio, particularly his letter of 13 March 1620, when he rejects an invitation to return to Mantua, extolling his present position and finances in Venice, and referring to the pension which Mantua still owes him. Nonetheless, remaining a Mantuan citizen, he accepted commissions from the new Duke Ferdinando, who had formally renounced his position as Cardinal in 1616 to take on the duties of state. These included the "balli" "Tirsi e Clori" (1616) and "Apollo" (1620), an opera "Andromeda" (1620) and an "intermedio", "Le nozze di Tetide", for the marriage of Ferdinando with Caterina de' Medici (1617). Most of these compositions were extensively delayed in creation – partly, as shown by surviving correspondence, through the composer's unwillingness to prioritise them, and partly because of constant changes in the court's requirements. They are now lost, apart from "Tirsi e Clori", which was included in the seventh book of madrigals (published 1619) and dedicated to the Duchess Caterina, for which the composer received a pearl necklace from the Duchess. A subsequent major commission, the opera "La finta pazza Licori", to a libretto by Giulio Strozzi, was completed for Ferdinando's successor Vincenzo II, who succeeded to the dukedom in 1626. Because of the latter's illness (he died in 1627), it was never performed, and it is now also lost. Monteverdi also received commissions from other Italian states and from their communities in Venice. These included, for the Milanese community in 1620, music for the Feast of St. Charles Borromeo, and for the Florentine community a Requiem Mass for Cosimo II de' Medici (1621). Monteverdi acted on behalf of Paolo Giordano II, Duke of Bracciano, to arrange publication of works by the Cremona musician Francesco Petratti. Among Monteverdi's private Venetian patrons was the nobleman Girolamo Mocenigo, at whose home was premiered in 1624 the dramatic entertainment "Il combattimento di Tancredi e Clorinda" based on an episode from Torquato Tasso's "La Gerusalemme liberata". In 1627 Monteverdi received a major commission from Odoardo Farnese, Duke of Parma, for a series of works, and gained leave from the Procurators to spend time there during 1627 and 1628. Monteverdi's musical direction received the attention of foreign visitors.
The Dutch diplomat and musician Constantijn Huygens, attending a Vespers service at the church of SS. Giovanni and Lucia, wrote that he "heard the most perfect music I had ever heard in my life. It was directed by the most famous Claudio Monteverdi ... who was also the composer and was accompanied by four theorbos, two cornettos, two bassoons, one "basso de viola" of huge size, organs and other instruments ...". Monteverdi wrote a mass, and provided other musical entertainment, for the visit to Venice in 1625 of the Crown Prince Władysław of Poland, who may have sought to revive attempts made a few years previously to lure Monteverdi to Warsaw. He also provided chamber music for Wolfgang Wilhelm, Count Palatine of Neuburg, when the latter was paying an incognito visit to Venice in July 1625. Correspondence of Monteverdi in 1625 and 1626 with the Mantuan courtier Ercole Marigliani reveals an interest in alchemy, which apparently Monteverdi had taken up as a hobby. He discusses experiments to transform lead into gold, the problems of obtaining mercury, and mentions commissioning special vessels for his experiments from the glassworks at Murano. Despite his generally satisfactory situation in Venice, Monteverdi experienced personal problems from time to time. He was on one occasion – probably because of his wide network of contacts – the subject of an anonymous denunciation to the Venetian authorities alleging that he supported the Habsburgs. He was also subject to anxieties about his children. His son Francesco, while a student of law at Padua in 1619, was, in Monteverdi's opinion, spending too much time on music, and he therefore moved him to the University of Bologna. This did not have the required result, and it seems that Monteverdi resigned himself to Francesco having a musical career – he joined the choir of San Marco in 1623. His other son Massimiliano, who graduated in medicine, was arrested by the Inquisition in Mantua in 1627 for reading forbidden literature. Monteverdi was obliged to sell the necklace he had received from Duchess Caterina to pay for his son's (eventually successful) defence. Monteverdi wrote at the time to Striggio seeking his help, and fearing that Massimiliano might be subject to torture; it seems that Striggio's intervention was helpful. Money worries at this time also led Monteverdi to visit Cremona to secure for himself a church canonry. A series of disturbing events troubled Monteverdi's world in the period around 1630. Mantua was invaded by Habsburg armies in 1630, which besieged the plague-stricken town, and after its fall in July looted its treasures, and dispersed the artistic community. The plague was carried to Mantua's ally Venice by an embassy led by Monteverdi's confidant Striggio, and over a period of 16 months led to over 45,000 deaths, leaving Venice's population in 1633 at just above 100,000, the lowest level for about 150 years. Among the plague victims was Monteverdi's assistant at San Marco, and a notable composer in his own right, Alessandro Grandi. The plague and the after-effects of war had an inevitable deleterious effect on the economy and artistic life of Venice. Monteverdi's younger brother Giulio Cesare also died at this time, probably from the plague. By this time Monteverdi was in his sixties, and his rate of composition seems to have slowed down.
He had written a setting of Strozzi's "Proserpina rapita (The Abduction of Proserpina)", now lost except for one vocal trio, for a Mocenigo wedding in 1630, and produced a Mass for deliverance from the plague for San Marco which was performed in November 1631. His set of "Scherzi musicali" was published in Venice in 1632. In 1631, Monteverdi was admitted to the tonsure, and was ordained deacon, and later priest, in 1632. Although these ceremonies took place in Venice, he was nominated as a member of the clergy of Cremona; this may imply that he intended to retire there. The opening of the opera house of San Cassiano in 1637, the first public opera house in Europe, stimulated the city's musical life and coincided with a new burst of the composer's activity. 1638 saw the publication of Monteverdi's eighth book of madrigals and a revision of the "Ballo delle ingrate". The eighth book contains a "ballo", "Volgendo il ciel", which may have been composed for the Holy Roman Emperor, Ferdinand III, to whom the book is dedicated. The years 1640–1641 saw the publication of the extensive collection of church music, "Selva morale e spirituale". Among other commissions, Monteverdi wrote music in 1637 and 1638 for Strozzi's "Accademia degli Unisoni" in Venice, and in 1641 a ballet, "La vittoria d'Amore", for the court of Piacenza. Monteverdi was still not entirely free from his responsibilities for the musicians at San Marco. He wrote to complain about one of his singers to the Procurators on 9 June 1637: "I, Claudio Monteverdi ... come humbly ... to set forth to you how Domenicato Aldegati ... a bass, yesterday morning ... at the time of the greatest concourse of people ... spoke these exact words ... 'The Director of Music comes from a brood of cut-throat bastards, a thieving, fucking, he-goat ... and I shit on him and whoever protects him ...'" Monteverdi's contribution to opera at this period is notable. He revised his earlier opera "L'Arianna" in 1640 and wrote three new works for the commercial stage, "Il ritorno d'Ulisse in patria" ("The Return of Ulysses to his Homeland", 1640, first performed in Bologna with Venetian singers), "Le nozze d'Enea e Lavinia" ("The Marriage of Aeneas and Lavinia", 1641, music now lost), and "L'incoronazione di Poppea" ("The Coronation of Poppea", 1643). The introduction to the printed scenario of "Le nozze d'Enea", by an unknown author, acknowledges that Monteverdi is to be credited for the rebirth of theatrical music and that "he will be sighed for in later ages, for his compositions will surely outlive the ravages of time." In his last surviving letter (20 August 1643), Monteverdi, already ill, was still hoping for the settlement of the long-disputed pension from Mantua, and asked the Doge of Venice to intervene on his behalf. He died in Venice on 29 November 1643, after paying a brief visit to Cremona, and is buried in the Church of the Frari. He was survived by his sons: Massimiliano died in 1661, Francesco after 1677. There is a consensus among music historians that a period extending from the mid-15th century to around 1625, characterised in Lewis Lockwood's phrase by "substantial unity of outlook and language", should be identified as the period of "Renaissance music". Musical literature has also defined the succeeding period (covering music from approximately 1580 to 1750) as the era of "Baroque music".
It is in the late-16th to early-17th-century overlap of these periods that much of Monteverdi's creativity flourished; he stands as a transitional figure between the Renaissance and the Baroque. In the Renaissance era, music had developed as a formal discipline, a "pure science of relationships" in the words of Lockwood. In the Baroque era it became a form of aesthetic expression, increasingly used to adorn religious, social and festive celebrations in which, in accordance with Plato's ideal, the music was subordinated to the text. Solo singing with instrumental accompaniment, or monody, acquired greater significance towards the end of the 16th century, replacing polyphony as the principal means of dramatic musical expression. This was the changing world in which Monteverdi was active. Percy Scholes in his "Oxford Companion to Music" describes the "new music" thus: "[Composers] discarded the choral polyphony of the madrigal style as barbaric, and set dialogue or soliloquy for single voices, imitating more or less the inflexions of speech and accompanying the voice by playing mere supporting chords. Short choruses were interspersed, but they too were homophonic rather than polyphonic." Ingegneri, Monteverdi's first tutor, was a master of the "musica reservata" vocal style, which involved the use of chromatic progressions and word-painting; Monteverdi's early compositions were grounded in this style. Ingegneri was a traditional Renaissance composer, "something of an anachronism" according to Arnold, but Monteverdi also studied the work of more "modern" composers such as Luca Marenzio, Luzzasco Luzzaschi, and a little later, Giaches de Wert, from whom he would learn the art of expressing passion. He was a precocious and productive student, as indicated by his youthful publications of 1582–83. Mark Ringer writes that "these teenaged efforts reveal palpable ambition matched with a convincing mastery of contemporary style", but at this stage they display their creator's competence rather than any striking originality. Geoffrey Chew classifies them as "not in the most modern vein for the period", acceptable but out-of-date. Chew rates the "Canzonette" collection of 1584 much more highly than the earlier juvenilia: "These brief three-voice pieces draw on the airy, modern style of the villanellas of Marenzio, [drawing on] a substantial vocabulary of text-related madrigalisms". The canzonetta form was much used by composers of the day as a technical exercise, and is a prominent element in Monteverdi's first book of madrigals published in 1587. In this book, the playful, pastoral settings again reflect the style of Marenzio, while Luzzaschi's influence is evident in Monteverdi's use of dissonance. The second book (1590) begins with a setting modelled on Marenzio of a modern verse, Torquato Tasso's "Non si levav' ancor", and concludes with a text from 50 years earlier: Pietro Bembo's "Cantai un tempo". Monteverdi set the latter to music in an archaic style reminiscent of the long-dead Cipriano de Rore. Between them is "Ecco mormorar l'onde", strongly influenced by de Wert and hailed by Chew as the great masterpiece of the second book. A thread common throughout these early works is Monteverdi's use of the technique of "imitatio", a general practice among composers of the period whereby material from earlier or contemporary composers was used as models for their own work.
Monteverdi continued to use this procedure well beyond his apprentice years, a factor that in some critics' eyes has compromised his reputation for originality. Monteverdi's first fifteen years of service in Mantua are bracketed by his publications of the third book of madrigals in 1592 and the fourth and fifth books in 1603 and 1605. Between 1592 and 1603 he made minor contributions to other anthologies. How much he composed in this period is a matter of conjecture; his many duties in the Mantuan court may have limited his opportunities, but several of the madrigals that he published in the fourth and fifth books were written and performed during the 1590s, some figuring prominently in the Artusi controversy. The third book shows strongly the increased influence of Wert, by that time Monteverdi's direct superior as "maestro di cappella" at Mantua. Two poets dominate the collection: Tasso, whose lyrical poetry had figured prominently in the second book but is here represented through the more epic, heroic verses from "Gerusalemme liberata", and Giovanni Battista Guarini, whose verses had appeared sporadically in Monteverdi's earlier publications, but form around half of the contents of the third book. Wert's influence is reflected in Monteverdi's forthrightly modern approach, and his expressive and chromatic settings of Tasso's verses. Of the Guarini settings Chew writes: "The epigrammatic style ... closely matches a poetic and musical ideal of the period ... [and] often depends on strong, final cadential progressions, with or without the intensification provided by chains of suspended dissonances". Chew cites the setting of "Stracciami pur il core" as "a prime example of Monteverdi's irregular dissonance practice". Tasso and Guarini were both regular visitors to the Mantuan court; Monteverdi's association with them and his absorption of their ideas may have helped lay the foundations of his own approach to the musical dramas that he would create a decade later. As the 1590s progressed, Monteverdi moved closer towards the form that he would identify in due course as the "seconda pratica". Claude V. Palisca quotes the madrigal "Ohimè, se tanto amate", published in the fourth book but written before 1600 – it is among the works attacked by Artusi – as a typical example of the composer's developing powers of invention. In this madrigal Monteverdi again departs from the established practice in the use of dissonance, by means of a vocal ornament Palisca describes as "échappé". Monteverdi's daring use of this device is, says Palisca, "like a forbidden pleasure". In this and in other settings the poet's images were supreme, even at the expense of musical consistency. The fourth book includes madrigals to which Artusi objected on the grounds of their "modernism". However, Ossi describes it as "an anthology of disparate works firmly rooted in the 16th century", closer in nature to the third book than to the fifth. There is evidence of the composer's familiarity with the works of Carlo Gesualdo, and with composers of the school of Ferrara such as Luzzaschi; the book was dedicated to a Ferrarese musical society, the "Accademici Intrepidi". The fifth book looks more to the future; for example, Monteverdi employs the "concertato" style with basso continuo (a device that was to become a typical feature in the emergent Baroque era), and includes a "sinfonia" (instrumental interlude) in the final piece.
He presents his music through complex counterpoint and daring harmonies, although at times combining the expressive possibilities of the new music with traditional polyphony. In Monteverdi's final five years' service in Mantua he completed the operas "L'Orfeo" (1607) and "L'Arianna" (1608), and wrote quantities of sacred music, including the "Messa in illo tempore" (1610) and also the collection known as "Vespro della Beata Vergine" which is often referred to as "Monteverdi's "Vespers"" (1610). He also published "Scherzi musicali a tre voci" (1607), settings of verses composed since 1599 and dedicated to the Gonzaga heir, Francesco. The vocal trio in the "Scherzi" comprises two sopranos and a bass, accompanied by simple instrumental ritornellos. According to Bowers the music "reflected the modesty of the prince's resources; it was, nevertheless, the earliest publication to associate voices and instruments in this particular way". "L'Orfeo" opens with a brief trumpet toccata. The prologue of La musica (a figure representing music) is introduced with a ritornello by the strings, repeated often to represent the "power of music" – one of the earliest examples of an operatic leitmotif. Act 1 presents a pastoral idyll, the buoyant mood of which continues into Act 2. The confusion and grief which follow the news of Euridice's death are musically reflected by harsh dissonances and the juxtaposition of keys. The music remains in this vein until the act ends with the consoling sounds of the ritornello. Act 3 is dominated by Orfeo's aria "Possente spirto e formidabil nume" by which he attempts to persuade Caronte to allow him to enter Hades. Monteverdi's vocal embellishments and virtuoso accompaniment provide what Tim Carter has described as "one of the most compelling visual and aural representations" in early opera. In Act 4 the warmth of Proserpina's singing on behalf of Orfeo is retained until Orfeo fatally "looks back". The brief final act, which sees Orfeo's rescue and metamorphosis, is framed by the final appearance of the ritornello and by a lively moresca that brings the audience back to their everyday world. Throughout the opera Monteverdi makes innovative use of polyphony, extending the rules beyond the conventions which composers normally observed in fidelity to Palestrina. He combines elements of the traditional 16th-century madrigal with the new monodic style where the text dominates the music and sinfonias and instrumental ritornellos illustrate the action. The music for "L'Arianna" is lost except for the "Lamento d'Arianna", which was published in the sixth book in 1614 as a five-voice madrigal; a separate monodic version was published in 1623. In its operatic context the lament depicts Arianna's various emotional reactions to her abandonment: sorrow, anger, fear, self-pity, desolation and a sense of futility. Throughout, indignation and anger are punctuated by tenderness, until a descending line brings the piece to a quiet conclusion. The musicologist Suzanne Cusick writes that Monteverdi "creat[ed] the lament as a recognizable genre of vocal chamber music and as a standard scene in opera ... that would become crucial, almost genre-defining, to the full-scale public operas of 17th-century Venice". Cusick observes how Monteverdi is able to match in music the "rhetorical and syntactical gestures" in the text of Ottavio Rinuccini.
The opening repeated words "Lasciatemi morire" (Let me die) are accompanied by a dominant seventh chord which Ringer describes as "an unforgettable chromatic stab of pain". Ringer suggests that the lament defines Monteverdi's innovative creativity in a manner similar to that in which the Prelude and the Liebestod in "Tristan und Isolde" announced Wagner's discovery of new expressive frontiers. Rinuccini's full libretto, which has survived, was set in modern times by Alexander Goehr ("Arianna", 1995), including a version of Monteverdi's "Lament". The "Vespro della Beata Vergine", Monteverdi's first published sacred music since the "Madrigali spirituali" of 1583, consists of 14 components: an introductory versicle and response, five psalms interspersed with five "sacred concertos" (Monteverdi's term), a hymn, and two Magnificat settings. Collectively these pieces fulfil the requirements for a Vespers service on any feast day of the Virgin. Monteverdi employs many musical styles; the more traditional features, such as cantus firmus, falsobordone and Venetian canzone, are mixed with the latest madrigal style, including echo effects and chains of dissonances. Some of the musical features used are reminiscent of "L'Orfeo", written slightly earlier for similar instrumental and vocal forces. In this work the "sacred concertos" fulfil the role of the antiphons which divide the psalms in regular Vespers services. Their non-liturgical character has led writers to question whether they should be within the service, or indeed whether this was Monteverdi's intention. In some versions of Monteverdi's "Vespers" (for example, those of Denis Stevens) the concertos are replaced with antiphons associated with the Virgin, although John Whenham in his analysis of the work argues that the collection as a whole should be regarded as a single liturgical and artistic entity. All the psalms, and the Magnificat, are based on melodically limited and repetitious Gregorian chant psalm tones, around which Monteverdi builds a range of innovative textures. This concertato style challenges the traditional cantus firmus, and is most evident in the "Sonata sopra Sancta Maria", written for eight string and wind instruments plus basso continuo, and a single soprano voice. Monteverdi uses modern rhythms, frequent metre changes and constantly varying textures; yet, according to John Eliot Gardiner, "for all the virtuosity of its instrumental writing and the evident care which has gone into the combinations of timbre", Monteverdi's chief concern was resolving the proper combination of words and music. The actual musical ingredients of the Vespers were not novel to Mantua – concertato had been used by Lodovico Grossi da Viadana, a former choirmaster at the cathedral of Mantua, while the "Sonata sopra" had been anticipated by Archangelo Crotti in his "Sancta Maria" published in 1608. It is, writes Denis Arnold, Monteverdi's mixture of the various elements that makes the music unique. Arnold adds that the Vespers achieved fame and popularity only after their 20th-century rediscovery; they were not particularly regarded in Monteverdi's time. During his years in Venice Monteverdi published his sixth (1614), seventh (1619) and eighth (1638) books of madrigals. The sixth book consists of works written before the composer's departure from Mantua. 
Hans Redlich sees it as a transitional work, containing Monteverdi's last madrigal compositions in the manner of the "prima pratica", together with music which is typical of the new style of expression which Monteverdi had displayed in the dramatic works of 1607–08. The central theme of the collection is loss; the best-known work is the five-voice version of "Lamento d'Arianna", which, says Massimo Ossi, gives "an object lesson in the close relationship between monodic recitative and counterpoint". The book contains Monteverdi's first settings of verses by Giambattista Marino, and two settings of Petrarch which Ossi considers the most extraordinary pieces in the volume, providing some "stunning musical moments". While Monteverdi had looked backwards in the sixth book, he moved forward in the seventh book from the traditional concept of the madrigal, and from monody, in favour of chamber duets. There are exceptions, such as the two solo "lettere amorose" (love letters) "Se i languidi miei sguardi" and "Se pur destina e vole", written to be performed in "genere rapresentativo" – acted as well as sung. Of the duets which are the main features of the volume, Chew highlights "Ohimé, dov'è il mio ben, dov'è il mio core", a romanesca in which two high voices express dissonances above a repetitive bass pattern. The book also contains large-scale ensemble works, and the ballet "Tirsi e Clori". This was the height of Monteverdi's "Marino period"; six of the pieces in the book are settings of the poet's verses. As Carter puts it, Monteverdi "embraced Marino's madrigalian kisses and love-bites with ... the enthusiasm typical of the period". Some commentators have opined that the composer should have had better poetic taste. The eighth book, subtitled "Madrigali guerrieri, et amorosi ..." ("Madrigals of war and love"), is structured in two symmetrical halves, one for "war" and one for "love". Each half begins with a six-voice setting, followed by an equally large-scale Petrarch setting, then a series of duets mainly for tenor voices, and concludes with a theatrical number and a final ballet. The "war" half contains several items written as tributes to the emperor Ferdinand III, who had succeeded to the Habsburg throne in 1637. Many of Monteverdi's familiar poets – Strozzi, Rinuccini, Tasso, Marino, Guarini – are represented in the settings. It is difficult to gauge when many of the pieces were composed, although the ballet "Mascherata dell' ingrate" that ends the book dates back to 1608 and the celebration of the Gonzaga-Savoy marriage. The "Combattimento di Tancredi e Clorinda", centrepiece of the "war" settings, had been written and performed in Venice in 1624; on its publication in the eighth book, Monteverdi explicitly linked it to his concept of "concitato genere" (otherwise "stile concitato" – "aroused style") that would "fittingly imitate the utterance and the accents of a brave man who is engaged in warfare", and implied that since he had originated this style, others had begun to copy it. The work employed for the first time instructions for the use of pizzicato string chords, and also evocations of fanfares and other sounds of combat. The critic Andrew Clements describes the eighth book as "a statement of artistic principles and compositional authority", in which Monteverdi "shaped and expanded the madrigal form to accommodate what he wanted to do ... the pieces collected in Book Eight make up a treasury of what music in the first half of the 17th century could possibly express."
During this period of his Venetian residency Monteverdi composed quantities of sacred music. Numerous motets and other short works were included in anthologies by local publishers such as Giulio Cesare Bianchi (a former student of Monteverdi) and Lorenzo Calvi, and others were published elsewhere in Italy and Austria. The range of styles in the motets is broad, from simple strophic arias with string accompaniment to full-scale declamations with an alleluia finale. Monteverdi retained emotional and political attachments to the Mantuan court and wrote for it, or undertook to write, large amounts of stage music including at least four operas. The ballet "Tirsi e Clori" survives through its inclusion in the seventh book, but the rest of the Mantuan dramatic music is lost. Many of the missing manuscripts may have disappeared in the wars that overcame Mantua in 1630. The most significant aspect of their loss, according to Carter, is the extent to which they might have provided musical links between Monteverdi's early Mantuan operas and those he wrote in Venice after 1638: "Without these links ... it is hard to produce a coherent account of his development as a composer for the stage". Likewise, Janet Beat regrets that the 30-year gap hampers the study of how opera orchestration developed during those critical early years. Apart from the madrigal books, Monteverdi's only published collection during this period was the volume of "Scherzi musicali" in 1632. For unknown reasons, the composer's name does not appear on the inscription, the dedication being signed by the Venetian printer Bartolomeo Magni; Carter surmises that the recently ordained Monteverdi may have wished to keep his distance from this secular collection. It mixes strophic continuo songs for solo voice with more complex works which employ continuous variation over repeated bass patterns. Chew selects the chaconne for two tenors, "Zefiro torna e di soavi accenti", as the outstanding item in the collection: "[T]he greater part of this piece consists of repetitions of a bass pattern which ensures tonal unity of a simple kind, owing to its being framed as a simple cadence in a G major tonal type: over these repetitions, inventive variations unfold in virtuoso passage-work". The last years of Monteverdi's life were much occupied with opera for the Venetian stage. Richard Taruskin, in his "Oxford History of Western Music", gives his chapter on this topic the title "Opera from Monteverdi to Monteverdi." This wording, originally proposed humorously by the Italian music historian Nino Pirrotta, is interpreted seriously by Taruskin as indicating that Monteverdi is significantly responsible for the transformation of the opera genre from a private entertainment of the nobility (as with "Orfeo" in 1607), to what became a major commercial genre, as exemplified by his opera "L'incoronazione di Poppea" (1643). His two surviving operatic works of this period, "Il ritorno d'Ulisse in patria" and "L'incoronazione", are held by Arnold to be the first "modern" operas; "Il ritorno" is the first Venetian opera to depart from what Ellen Rosand terms "the mythological pastoral".
However, David Johnson in "The North American Review" warns audiences not to expect immediate affinity with Mozart, Verdi or Puccini: "You have to submit yourself to a much slower pace, to a much more chaste conception of melody, to a vocal style that is at first merely like dry declamation and only on repeated hearings begins to assume an extraordinary eloquence." "Il ritorno", says Carter, is clearly influenced by Monteverdi's earlier works. Penelope's lament in Act I is close in character to the lament from "L'Arianna", while the martial episodes recall "Il combattimento". "Stile concitato" is prominent in the fight scenes and in the slaying of Penelope's suitors. In "L'incoronazione", Monteverdi represents moods and situations by specific musical devices: triple metre stands for the language of love; arpeggios demonstrate conflict; "stile concitato" represents rage. There is continuing debate about how much of the extant "L'incoronazione" music is Monteverdi's original, and how much is the work of others (there are, for instance, traces of music by Francesco Cavalli). The "Selva morale e spirituale" of 1641, and the posthumous "Messa et salmi" published in 1650 (which was edited by Cavalli), are selections of the sacred music that Monteverdi wrote for San Marco during his 30-year tenure – much else was likely written but not published. The "Selva morale" volume opens with a series of madrigal settings on moral texts, dwelling on themes such as "the transitory nature of love, earthly rank and achievement, even existence itself". They are followed by a Mass in conservative style ("stile antico"), the high point of which is an extended seven-voice "Gloria". Scholars believe that this might have been written to celebrate the end of the 1631 plague. The rest of the volume is made up of numerous psalm settings, two Magnificats and three "Salve Reginas". The "Messa et salmi" volume includes a "stile antico" Mass for four voices, a polyphonic setting of the psalm "Laetatus Sum", and a version of the Litany of Loreto that Monteverdi had originally published in 1620. The posthumous ninth book of madrigals was published in 1651, a miscellany dating back to the early 1630s, some items being repeats of previously published pieces, such as the popular duet "O sia tranquillo il mare" from 1638. The book includes a trio for three sopranos, "Come dolce oggi l'auretta", which is the only surviving music from the 1630 lost opera "Proserpina rapita". In his lifetime Monteverdi enjoyed considerable status among musicians and the public. This is evidenced by the scale of his funeral rites: "[W]ith truly royal pomp a catafalque was erected in the Chiesa de Padrini Minori de Frari, decorated all in mourning, but surrounded with so many candles that the church resembled a night sky luminous with stars". This glorification was transitory; Carter writes that in Monteverdi's day, music rarely survived beyond the circumstances of its initial performance and was quickly forgotten along with its creator. In this regard Monteverdi fared better than most. His operatic works were revived in several cities in the decade following his death; according to Severo Bonini writing in 1651, every musical household in Italy possessed a copy of the "Lamento d'Arianna". The German composer Heinrich Schütz, who had studied in Venice under Giovanni Gabrieli shortly before Monteverdi's arrival there, possessed a copy of "Il combattimento" and himself took up elements of the "stile concitato".
On his second visit to Venice in 1628–1629, Arnold believes, Schütz absorbed the concepts of "basso continuo" and expressiveness of word-setting, but he opines that Schütz was more directly influenced by the style of the younger generation of Venetian composers, including Grandi and Giovanni Rovetta (the eventual successor to Monteverdi at San Marco). Schütz published a first book of Symphoniae sacrae, settings of biblical texts in the style of "seconda pratica", in Venice in 1629. "Es steh Gott auf", from his Symphoniae sacrae II, published in Dresden in 1647, contains specific quotations from Monteverdi. After the 1650s, Monteverdi's name quickly disappears from contemporary accounts, his music generally forgotten except for the "Lamento", prototype of a genre that would endure well into the 18th century. Interest in Monteverdi revived in the late 18th and early 19th centuries among music scholars in Germany and Italy, although he was still regarded as essentially a historical curiosity. Wider interest in the music itself began in 1881, when Robert Eitner published a shortened version of the "Orfeo" score. Around this time Kurt Vogel scored the madrigals from the original manuscripts, but more critical interest was shown in the operas, following the discovery of the "L'incoronazione" manuscript in 1888 and that of "Il ritorno" in 1904. Largely through the efforts of Vincent d'Indy, all three operas were staged in one form or another during the first quarter of the 20th century: "L'Orfeo" in May 1911, "L'incoronazione" in February 1913 and "Il ritorno" in May 1925. The Italian nationalist poet Gabriele D'Annunzio lauded Monteverdi and in his novel "Il fuoco" (1900) wrote of ""il divino Claudio" ... what a heroic soul, purely Italian in its essence!" His vision of Monteverdi as the true founder of Italian musical lyricism was adopted by musicians who worked with the regime of Benito Mussolini (1922–1945), including Gian Francesco Malipiero, Luigi Dallapiccola, and Mario Labroca, who contrasted Monteverdi with the decadence of the music of Richard Strauss, Claude Debussy and Igor Stravinsky. In the years after the Second World War the operas began to be performed in the major opera houses, and eventually were established in the general repertory. The resuscitation of Monteverdi's sacred music took longer; he did not benefit from the Catholic Church's 19th-century revival of Renaissance music in the way that Palestrina did, perhaps, as Carter suggests, because Monteverdi was viewed chiefly as a secular composer. It was not until 1932 that the 1610 "Vespers" were published in a modern edition, followed by Redlich's revision two years later. Modern editions of the "Selva morales" and "Missa e Salmi" volumes were published respectively in 1940 and 1942. The revival of public interest in Monteverdi's music gathered pace in the second half of the 20th century, reaching full spate in the general early-music revival of the 1970s, during which time the emphasis turned increasingly towards "authentic" performance using historical instruments. The magazine "Gramophone" notes over 30 recordings of the "Vespers" between 1976 and 2011, and 27 of "Il combattimento di Tancredi e Clorinda" between 1971 and 2013. Monteverdi's surviving operas are today regularly performed; the website Operabase notes 555 performances of the operas in 149 productions worldwide in the seasons 2011–2016, ranking Monteverdi at 30th position for all composers, and 8th for Italian opera composers. In 1985 Manfred H.
Stattkus published an index to Monteverdi's works, the Stattkus-Verzeichnis (revised in 2006), giving each composition an "SV" number, to be used for cataloguing and references. Monteverdi is lauded by modern critics as "the most significant composer in late Renaissance and early Baroque Italy"; "one of the principal composers in the history of Western music"; and, routinely, as the first great opera composer. These assessments reflect a contemporary perspective, since his music was largely unknown to the composers that followed him for three centuries after his death. It is, as Redlich and others have pointed out, the composers of the late 20th and 21st centuries that have identified with Monteverdi and sought to make his music a basis for their own. Possibly, as Chew suggests, they are attracted by Monteverdi's reputation as "a Modern, a breaker of rules, against the Ancients, those who deferred to ancient authority" – although the composer was, essentially, a pragmatist, "showing what can only be described as an opportunistic and eclectic willingness to use whatever lay to hand for the purpose". In a letter dated 16 October 1633 Monteverdi appears to endorse the view of himself as a "modern": "I would rather be moderately praised for the new style than greatly praised for the ordinary". However, Chew, in his final summation, sees the composer historically as facing both ways, willing to use modern techniques while at the same time protective of his status as a competent composer in the "stile antico". Thus, says Chew, "his achievement was both retrospective and progressive". Monteverdi represents the late Renaissance era while simultaneously summing up much of the early Baroque. "And in one respect in particular, his achievement was enduring: the effective projection of human emotions in music, in a way adequate for theatre as well as for chamber music." Cell nucleus In cell biology, the nucleus (pl. "nuclei"; from Latin "nucleus" or "nuculeus", meaning "kernel" or "seed") is a membrane-enclosed organelle found in eukaryotic cells. Eukaryotes usually have a single nucleus, but a few cell types, such as mammalian red blood cells, have no nuclei, and a few others including osteoclasts have many. Cell nuclei contain most of the cell's genetic material, organized as multiple long linear DNA molecules in a complex with a large variety of proteins, such as histones, to form chromosomes. The genes within these chromosomes are the cell's nuclear genome and are structured in such a way as to promote cell function. The nucleus maintains the integrity of genes and controls the activities of the cell by regulating gene expression—the nucleus is, therefore, the control center of the cell. The main structures making up the nucleus are the nuclear envelope, a double membrane that encloses the entire organelle and isolates its contents from the cellular cytoplasm, and the nuclear matrix (which includes the nuclear lamina), a network within the nucleus that adds mechanical support, much like the cytoskeleton, which supports the cell as a whole. Because the nuclear envelope is impermeable to large molecules, nuclear pores are required to regulate nuclear transport of molecules across the envelope. The pores cross both nuclear membranes, providing a channel through which larger molecules must be actively transported by carrier proteins while allowing free movement of small molecules and ions.
Movement of large molecules such as proteins and RNA through the pores is required for both gene expression and the maintenance of chromosomes. Although the interior of the nucleus does not contain any membrane-bound subcompartments, its contents are not uniform, and a number of "sub-nuclear bodies" exist, made up of unique proteins, RNA molecules, and particular parts of the chromosomes. The best-known of these is the nucleolus, which is mainly involved in the assembly of ribosomes. After being produced in the nucleolus, ribosomes are exported to the cytoplasm where they translate mRNA. The nucleus was the first organelle to be discovered. What is most likely the oldest preserved drawing dates back to the early microscopist Antonie van Leeuwenhoek (1632–1723). He observed a "lumen", the nucleus, in the red blood cells of salmon. Unlike mammalian red blood cells, those of other vertebrates still contain nuclei. The nucleus was also described by Franz Bauer in 1804 and in more detail in 1831 by Scottish botanist Robert Brown in a talk at the Linnean Society of London. Brown was studying orchids under microscope when he observed an opaque area, which he called the "areola" or "nucleus", in the cells of the flower's outer layer. He did not suggest a potential function. In 1838, Matthias Schleiden proposed that the nucleus plays a role in generating cells, thus he introduced the name "cytoblast" (cell builder). He believed that he had observed new cells assembling around "cytoblasts". Franz Meyen was a strong opponent of this view, having already described cells multiplying by division and believing that many cells would have no nuclei. The idea that cells can be generated de novo, by the "cytoblast" or otherwise, contradicted work by Robert Remak (1852) and Rudolf Virchow (1855) who decisively propagated the new paradigm that cells are generated solely by cells ("Omnis cellula e cellula"). The function of the nucleus remained unclear. Between 1877 and 1878, Oscar Hertwig published several studies on the fertilization of sea urchin eggs, showing that the nucleus of the sperm enters the oocyte and fuses with its nucleus. This was the first time it was suggested that an individual develops from a (single) nucleated cell. This was in contradiction to Ernst Haeckel's theory that the complete phylogeny of a species would be repeated during embryonic development, including generation of the first nucleated cell from a "monerula", a structureless mass of primordial mucus ("Urschleim"). Therefore, the necessity of the sperm nucleus for fertilization was discussed for quite some time. However, Hertwig confirmed his observation in other animal groups, including amphibians and molluscs. Eduard Strasburger produced the same results for plants in 1884. This paved the way to assign the nucleus an important role in heredity. In 1873, August Weismann postulated the equivalence of the maternal and paternal germ "cells" for heredity. The function of the nucleus as carrier of genetic information became clear only later, after mitosis was discovered and the Mendelian rules were rediscovered at the beginning of the 20th century; the chromosome theory of heredity was therefore developed. The nucleus is the largest cellular organelle in animal cells. In mammalian cells, the average diameter of the nucleus is approximately 6 micrometres (µm), which occupies about 10% of the total cell volume. 
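The quoted dimensions can be sanity-checked with simple geometry. The sketch below is only a back-of-envelope illustration: it assumes a perfectly spherical nucleus and cell, which real cells are not, and uses the approximate 6 µm and 10% figures given above.

    from math import pi

    # Back-of-envelope check of the figures quoted above (illustrative only;
    # real nuclei and cells are not perfect spheres).
    nucleus_diameter_um = 6.0                                        # ~6 micrometres
    nucleus_volume = (4 / 3) * pi * (nucleus_diameter_um / 2) ** 3   # ~113 cubic micrometres

    cell_volume = nucleus_volume / 0.10                              # if the nucleus is ~10% of cell volume
    cell_diameter_um = 2 * (3 * cell_volume / (4 * pi)) ** (1 / 3)   # ~13 micrometres

    print(f"nuclear volume ~{nucleus_volume:.0f} um^3")
    print(f"implied cell volume ~{cell_volume:.0f} um^3, diameter ~{cell_diameter_um:.1f} um")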
The viscous liquid within it is called nucleoplasm (or karyolymph), and is similar in composition to the cytosol found outside the nucleus. It appears as a dense, roughly spherical or irregular organelle. The composition by dry weight of the nucleus is approximately: DNA 9%, RNA 1%, Histone Protein 11%, Residual Protein 14%, Acidic Proteins 65%. In some types of white blood cells, specifically most granulocytes, the nucleus is lobated and can be present as a bi-lobed, tri-lobed or multi-lobed organelle. The nuclear envelope, otherwise known as the nuclear membrane, consists of two cellular membranes, an inner and an outer membrane, arranged parallel to one another and separated by 10 to 50 nanometres (nm). The nuclear envelope completely encloses the nucleus and separates the cell's genetic material from the surrounding cytoplasm, serving as a barrier to prevent macromolecules from diffusing freely between the nucleoplasm and the cytoplasm. The outer nuclear membrane is continuous with the membrane of the rough endoplasmic reticulum (RER), and is similarly studded with ribosomes. The space between the membranes is called the perinuclear space and is continuous with the RER lumen. Nuclear pores, which provide aqueous channels through the envelope, are composed of multiple proteins, collectively referred to as nucleoporins. The pores are about 125 million daltons in molecular weight and consist of around 50 (in yeast) to several hundred proteins (in vertebrates). The pores are 100 nm in total diameter; however, the gap through which molecules freely diffuse is only about 9 nm wide, due to the presence of regulatory systems within the center of the pore. This size selectively allows the passage of small water-soluble molecules while preventing larger molecules, such as nucleic acids and larger proteins, from inappropriately entering or exiting the nucleus. These large molecules must be actively transported into the nucleus instead. The nucleus of a typical mammalian cell will have about 3000 to 4000 pores throughout its envelope, each of which contains an eightfold-symmetric ring-shaped structure at a position where the inner and outer membranes fuse. Attached to the ring is a structure called the "nuclear basket" that extends into the nucleoplasm, and a series of filamentous extensions that reach into the cytoplasm. Both structures serve to mediate binding to nuclear transport proteins. Most proteins, ribosomal subunits, and some DNAs are transported through the pore complexes in a process mediated by a family of transport factors known as karyopherins. Those karyopherins that mediate movement into the nucleus are also called importins, whereas those that mediate movement out of the nucleus are called exportins. Most karyopherins interact directly with their cargo, although some use adaptor proteins. Steroid hormones such as cortisol and aldosterone, as well as other small lipid-soluble molecules involved in intercellular signaling, can diffuse through the cell membrane and into the cytoplasm, where they bind nuclear receptor proteins that are trafficked into the nucleus. There they serve as transcription factors when bound to their ligand; in the absence of a ligand, many such receptors function as histone deacetylases that repress gene expression.
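The size-based selectivity described above can be caricatured as a simple rule: cargo small enough to pass the roughly 9 nm channel diffuses freely, while larger cargo must be carried by a karyopherin (an importin on the way in, an exportin on the way out). The sketch below is a toy illustration only; the 40 kDa cutoff is a commonly cited approximation rather than a figure from this article, and real transport also depends on localization signals recognized by the carriers.

    # Toy classifier for nuclear pore transport (illustrative assumptions only).
    PASSIVE_LIMIT_KDA = 40.0   # assumed rough cutoff for free diffusion through the pore

    def transport_mode(mass_kda: float, direction: str) -> str:
        # Return how a cargo of the given mass would be moved "in" or "out" of the nucleus.
        if mass_kda <= PASSIVE_LIMIT_KDA:
            return "passive diffusion through the pore"
        carrier = "importin" if direction == "in" else "exportin"
        return f"active transport by a karyopherin ({carrier})"

    print(transport_mode(0.5, "in"))      # an ion or small metabolite
    print(transport_mode(120.0, "in"))    # a large protein entering the nucleus
    print(transport_mode(1500.0, "out"))  # e.g. a ribosomal subunit being exported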
In animal cells, two networks of intermediate filaments provide the nucleus with mechanical support: The nuclear lamina forms an organized meshwork on the internal face of the envelope, while less organized support is provided on the cytosolic face of the envelope. Both systems provide structural support for the nuclear envelope and anchoring sites for chromosomes and nuclear pores. The nuclear lamina is composed mostly of lamin proteins. Like all proteins, lamins are synthesized in the cytoplasm and later transported to the nucleus interior, where they are assembled before being incorporated into the existing network of nuclear lamina. Lamins found on the cytosolic face of the membrane, such as emerin and nesprin, bind to the cytoskeleton to provide structural support. Lamins are also found inside the nucleoplasm where they form another regular structure, known as the "nucleoplasmic veil", that is visible using fluorescence microscopy. The actual function of the veil is not clear, although it is excluded from the nucleolus and is present during interphase. Lamin structures that make up the veil, such as LEM3, bind chromatin and disrupting their structure inhibits transcription of protein-coding genes. Like the components of other intermediate filaments, the lamin monomer contains an alpha-helical domain used by two monomers to coil around each other, forming a dimer structure called a coiled coil. Two of these dimer structures then join side by side, in an antiparallel arrangement, to form a tetramer called a "protofilament". Eight of these protofilaments form a lateral arrangement that is twisted to form a ropelike "filament". These filaments can be assembled or disassembled in a dynamic manner, meaning that changes in the length of the filament depend on the competing rates of filament addition and removal. Mutations in lamin genes leading to defects in filament assembly cause a group of rare genetic disorders known as "laminopathies". The most notable laminopathy is the family of diseases known as progeria, which causes the appearance of premature aging in its sufferers. The exact mechanism by which the associated biochemical changes give rise to the aged phenotype is not well understood. The cell nucleus contains the majority of the cell's genetic material in the form of multiple linear DNA molecules organized into structures called chromosomes. Each human cell contains roughly two meters of DNA. During most of the cell cycle these are organized in a DNA-protein complex known as chromatin, and during cell division the chromatin can be seen to form the well-defined chromosomes familiar from a karyotype. A small fraction of the cell's genes are located instead in the mitochondria. There are two types of chromatin. Euchromatin is the less compact DNA form, and contains genes that are frequently expressed by the cell. The other type, heterochromatin, is the more compact form, and contains DNA that is infrequently transcribed. This structure is further categorized into "facultative" heterochromatin, consisting of genes that are organized as heterochromatin only in certain cell types or at certain stages of development, and "constitutive" heterochromatin that consists of chromosome structural components such as telomeres and centromeres. During interphase the chromatin organizes itself into discrete individual patches, called "chromosome territories". 
Active genes, which are generally found in the euchromatic region of the chromosome, tend to be located towards the chromosome's territory boundary. Antibodies to certain types of chromatin organization, in particular, nucleosomes, have been associated with a number of autoimmune diseases, such as systemic lupus erythematosus. These are known as anti-nuclear antibodies (ANA) and have also been observed in concert with multiple sclerosis as part of general immune system dysfunction. As in the case of progeria, the role played by the antibodies in inducing the symptoms of autoimmune diseases is not obvious. The nucleolus is a discrete densely stained structure found in the nucleus. It is not surrounded by a membrane, and is sometimes called a "suborganelle". It forms around tandem repeats of rDNA, DNA coding for ribosomal RNA (rRNA). These regions are called nucleolar organizer regions (NOR). The main roles of the nucleolus are to synthesize rRNA and assemble ribosomes. The structural cohesion of the nucleolus depends on its activity, as ribosomal assembly in the nucleolus results in the transient association of nucleolar components, facilitating further ribosomal assembly, and hence further association. This model is supported by observations that inactivation of rDNA results in intermingling of nucleolar structures. In the first step of ribosome assembly, a protein called RNA polymerase I transcribes rDNA, which forms a large pre-rRNA precursor. This is cleaved into the subunits 5.8S, 18S, and 28S rRNA. The transcription, post-transcriptional processing, and assembly of rRNA occurs in the nucleolus, aided by small nucleolar RNA (snoRNA) molecules, some of which are derived from spliced introns from messenger RNAs encoding genes related to ribosomal function. The assembled ribosomal subunits are the largest structures passed through the nuclear pores. When observed under the electron microscope, the nucleolus can be seen to consist of three distinguishable regions: the innermost "fibrillar centers" (FCs), surrounded by the "dense fibrillar component" (DFC), which in turn is bordered by the "granular component" (GC). Transcription of the rDNA occurs either in the FC or at the FC-DFC boundary, and, therefore, when rDNA transcription in the cell is increased, more FCs are detected. Most of the cleavage and modification of rRNAs occurs in the DFC, while the latter steps involving protein assembly onto the ribosomal subunits occur in the GC. Besides the nucleolus, the nucleus contains a number of other non-membrane-delineated bodies. These include Cajal bodies, Gemini of coiled bodies, polymorphic interphase karyosomal association (PIKA), promyelocytic leukaemia (PML) bodies, paraspeckles, and splicing speckles. Although little is known about a number of these domains, they are significant in that they show that the nucleoplasm is not a uniform mixture, but rather contains organized functional subdomains. Other subnuclear structures appear as part of abnormal disease processes. For example, the presence of small intranuclear rods has been reported in some cases of nemaline myopathy. This condition typically results from mutations in actin, and the rods themselves consist of mutant actin as well as other cytoskeletal proteins. A nucleus typically contains between 1 and 10 compact structures called Cajal bodies or coiled bodies (CB), whose diameter measures between 0.2 µm and 2.0 µm depending on the cell type and species. 
When seen under an electron microscope, they resemble balls of tangled thread and are dense foci of distribution for the protein coilin. CBs are involved in a number of different roles relating to RNA processing, specifically small nucleolar RNA (snoRNA) and small nuclear RNA (snRNA) maturation, and histone mRNA modification. Similar to Cajal bodies are Gemini of Cajal bodies, or gems, whose name is derived from the Gemini constellation in reference to their close "twin" relationship with CBs. Gems are similar in size and shape to CBs, and in fact are virtually indistinguishable under the microscope. Unlike CBs, gems do not contain small nuclear ribonucleoproteins (snRNPs), but do contain a protein called survival of motor neuron (SMN) whose function relates to snRNP biogenesis. Gems are believed to assist CBs in snRNP biogenesis, though it has also been suggested from microscopy evidence that CBs and gems are different manifestations of the same structure. Later ultrastructural studies have shown gems to be twins of Cajal bodies with the difference being in the coilin component; Cajal bodies are SMN positive and coilin positive, and gems are SMN positive and coilin negative. PIKA domains, or polymorphic interphase karyosomal associations, were first described in microscopy studies in 1991. Their function remains unclear, though they were not thought to be associated with active DNA replication, transcription, or RNA processing. They have been found to often associate with discrete domains defined by dense localization of the transcription factor PTF, which promotes transcription of small nuclear RNA (snRNA). Promyelocytic leukaemia bodies (PML bodies) are spherical bodies found scattered throughout the nucleoplasm, measuring around 0.1–1.0 µm. They are known by a number of other names, including nuclear domain 10 (ND10), Kremer bodies, and PML oncogenic domains. PML bodies are named after one of their major components, the promyelocytic leukemia protein (PML). They are often seen in the nucleus in association with Cajal bodies and cleavage bodies. PML bodies belong to the nuclear matrix, an ill-defined super-structure of the nucleus proposed to anchor and regulate many nuclear functions, including DNA replication, transcription and epigenetic silencing. The PML protein is the key organizer of these domains, recruiting an ever-growing number of proteins whose only common known feature to date is their ability to be SUMOylated. Yet pml-/- mice (which have their PML gene deleted) cannot assemble nuclear bodies but develop normally and live well, demonstrating that PML bodies are dispensable for most basic biological functions. Speckles are subnuclear structures that are enriched in pre-messenger RNA splicing factors and are located in the interchromatin regions of the nucleoplasm of mammalian cells. At the fluorescence-microscope level they appear as irregular, punctate structures, which vary in size and shape, and when examined by electron microscopy they are seen as clusters of interchromatin granules. Speckles are dynamic structures, and both their protein and RNA-protein components can cycle continuously between speckles and other nuclear locations, including active transcription sites. Studies on the composition, structure and behaviour of speckles have provided a model for understanding the functional compartmentalization of the nucleus and the organization of the gene-expression machinery, including the splicing snRNPs and other splicing proteins necessary for pre-mRNA processing. 
Because of a cell's changing requirements, the composition and location of these bodies change according to mRNA transcription and regulation via phosphorylation of specific proteins. The splicing speckles are also known as nuclear speckles (nuclear specks), splicing factor compartments (SF compartments), interchromatin granule clusters (IGCs), and B snurposomes. B snurposomes are found in the amphibian oocyte nuclei and in "Drosophila melanogaster" embryos. B snurposomes appear alone or attached to the Cajal bodies in the electron micrographs of the amphibian nuclei. IGCs function as storage sites for the splicing factors. Discovered by Fox et al. in 2002, paraspeckles are irregularly shaped compartments in the nucleus' interchromatin space. First documented in HeLa cells, where there are generally 10–30 per nucleus, paraspeckles are now known to also exist in all human primary cells, transformed cell lines, and tissue sections. Their name is derived from their distribution in the nucleus; the "para" is short for parallel and the "speckles" refers to the splicing speckles to which they are always in close proximity. Paraspeckles are dynamic structures that are altered in response to changes in cellular metabolic activity. They are transcription-dependent, and in the absence of RNA Pol II transcription the paraspeckle disappears and all of its associated protein components (PSP1, p54nrb, PSP2, CFI(m)68, and PSF) form a crescent-shaped perinucleolar cap in the nucleolus. This phenomenon is demonstrated during the cell cycle. In the cell cycle, paraspeckles are present during interphase and during all of mitosis except for telophase. During telophase, when the two daughter nuclei are formed, there is no RNA Pol II transcription, so the protein components instead form a perinucleolar cap. Perichromatin fibrils are visible only under the electron microscope. They are located next to the transcriptionally active chromatin and are hypothesized to be the sites of active pre-mRNA processing. Clastosomes are small nuclear bodies (0.2–0.5 µm) described as having a thick ring shape due to the peripheral capsule around these bodies. This name is derived from the Greek "klastos" (broken) and "soma" (body). Clastosomes are not typically present in normal cells, making them hard to detect. They form under high proteolysis conditions within the nucleus and degrade once there is a decrease in activity or if cells are treated with proteasome inhibitors. The scarcity of clastosomes in cells indicates that they are not required for proteasome function. Osmotic stress has also been shown to cause the formation of clastosomes. These nuclear bodies contain catalytic and regulatory subunits of the proteasome and its substrates, indicating that clastosomes are sites for degrading proteins. The nucleus provides a site for genetic transcription that is segregated from the location of translation in the cytoplasm, allowing levels of gene regulation that are not available to prokaryotes. The main function of the cell nucleus is to control gene expression and mediate the replication of DNA during the cell cycle. The nucleus is an organelle found in eukaryotic cells. Inside its fully enclosed nuclear membrane, it contains the majority of the cell's genetic material. This material is organized as DNA molecules, along with a variety of proteins, to form chromosomes. The nuclear envelope allows the nucleus to control its contents, and separate them from the rest of the cytoplasm where necessary. 
This is important for controlling processes on either side of the nuclear membrane. In most cases where a cytoplasmic process needs to be restricted, a key participant is removed to the nucleus, where it interacts with transcription factors to downregulate the production of certain enzymes in the pathway. This regulatory mechanism occurs in the case of glycolysis, a cellular pathway for breaking down glucose to produce energy. Hexokinase is an enzyme responsible for the first step of glycolysis, forming glucose-6-phosphate from glucose. At high concentrations of fructose-6-phosphate, a molecule made later from glucose-6-phosphate, a regulator protein removes hexokinase to the nucleus, where it forms a transcriptional repressor complex with nuclear proteins to reduce the expression of genes involved in glycolysis. In order to control which genes are being transcribed, the cell separates some transcription factor proteins responsible for regulating gene expression from physical access to the DNA until they are activated by other signaling pathways. This prevents even low levels of inappropriate gene expression. For example, in the case of NF-κB-controlled genes, which are involved in most inflammatory responses, transcription is induced in response to a signal pathway such as that initiated by the signaling molecule TNF-α, which binds to a cell membrane receptor, resulting in the recruitment of signalling proteins and, eventually, the activation of the transcription factor NF-κB. A nuclear localisation signal on the NF-κB protein allows it to be transported through the nuclear pore and into the nucleus, where it stimulates the transcription of the target genes. The compartmentalization allows the cell to prevent translation of unspliced mRNA. Eukaryotic mRNA contains introns that must be removed before being translated to produce functional proteins. The splicing is done inside the nucleus before the mRNA can be accessed by ribosomes for translation. Without the nucleus, ribosomes would translate newly transcribed (unprocessed) mRNA, resulting in malformed and nonfunctional proteins. Gene expression first involves transcription, in which DNA is used as a template to produce RNA. In the case of genes encoding proteins, the RNA produced from this process is messenger RNA (mRNA), which then needs to be translated by ribosomes to form a protein. As ribosomes are located outside the nucleus, the mRNA produced needs to be exported. Since the nucleus is the site of transcription, it also contains a variety of proteins that either directly mediate transcription or are involved in regulating the process. These proteins include helicases, which unwind the double-stranded DNA molecule to facilitate access to it; RNA polymerases, which bind to the DNA promoter to synthesize the growing RNA molecule; topoisomerases, which change the amount of supercoiling in DNA, helping it wind and unwind; and a large variety of transcription factors that regulate expression. Newly synthesized mRNA molecules are known as primary transcripts or pre-mRNA. They must undergo post-transcriptional modification in the nucleus before being exported to the cytoplasm; mRNA that appears in the cytoplasm without these modifications is degraded rather than used for protein translation. The three main modifications are 5' capping, 3' polyadenylation, and RNA splicing. While in the nucleus, pre-mRNA is associated with a variety of proteins in complexes known as heterogeneous ribonucleoprotein particles (hnRNPs). 
Addition of the 5' cap occurs co-transcriptionally and is the first step in post-transcriptional modification. The 3' poly-adenine tail is only added after transcription is complete. RNA splicing, carried out by a complex called the spliceosome, is the process by which introns, or regions of DNA that do not code for protein, are removed from the pre-mRNA and the remaining exons connected to re-form a single continuous molecule. This process normally occurs after 5' capping and 3' polyadenylation but can begin before synthesis is complete in transcripts with many exons. Many pre-mRNAs, including those encoding antibodies, can be spliced in multiple ways to produce different mature mRNAs that encode different protein sequences. This process is known as alternative splicing, and allows production of a large variety of proteins from a limited amount of DNA. The entry and exit of large molecules from the nucleus is tightly controlled by the nuclear pore complexes. Although small molecules can enter the nucleus without regulation, macromolecules such as RNA and proteins require association with karyopherins called importins to enter the nucleus and exportins to exit. "Cargo" proteins that must be translocated from the cytoplasm to the nucleus contain short amino acid sequences known as nuclear localization signals, which are bound by importins, while those transported from the nucleus to the cytoplasm carry nuclear export signals bound by exportins. The ability of importins and exportins to transport their cargo is regulated by GTPases, enzymes that hydrolyze the molecule guanosine triphosphate (GTP) to release energy. The key GTPase in nuclear transport is Ran, which can bind either GTP or GDP (guanosine diphosphate), depending on whether it is located in the nucleus or the cytoplasm. Whereas importins depend on RanGTP to dissociate from their cargo, exportins require RanGTP in order to bind to their cargo. Nuclear import depends on the importin binding its cargo in the cytoplasm and carrying it through the nuclear pore into the nucleus. Inside the nucleus, RanGTP acts to separate the cargo from the importin, allowing the importin to exit the nucleus and be reused. Nuclear export is similar, as the exportin binds the cargo inside the nucleus in a process facilitated by RanGTP, exits through the nuclear pore, and separates from its cargo in the cytoplasm. Specialized export proteins exist for translocation of mature mRNA and tRNA to the cytoplasm after post-transcriptional modification is complete. This quality-control mechanism is important due to these molecules' central role in protein translation. Mis-expression of a protein due to incomplete excision of introns or mis-incorporation of amino acids could have negative consequences for the cell; thus, incompletely modified RNA that reaches the cytoplasm is degraded rather than used in translation. During its lifetime, a nucleus may be broken down or destroyed, either in the process of cell division or as a consequence of apoptosis (the process of programmed cell death). During these events, the structural components of the nucleus (the envelope and lamina) can be systematically degraded. In most cells, the disassembly of the nuclear envelope marks the end of the prophase of mitosis. However, this disassembly of the nucleus is not a universal feature of mitosis and does not occur in all cells. Some unicellular eukaryotes (e.g., yeasts) undergo so-called closed mitosis, in which the nuclear envelope remains intact. 
In closed mitosis, the daughter chromosomes migrate to opposite poles of the nucleus, which then divides in two. The cells of higher eukaryotes, however, usually undergo open mitosis, which is characterized by breakdown of the nuclear envelope. The daughter chromosomes then migrate to opposite poles of the mitotic spindle, and new nuclei reassemble around them. At a certain point during the cell cycle in open mitosis, the cell divides to form two cells. In order for this process to be possible, each of the new daughter cells must have a full set of genes, a process requiring replication of the chromosomes as well as segregation of the separate sets. This occurs by the replicated chromosomes, the sister chromatids, attaching to microtubules, which in turn are attached to different centrosomes. The sister chromatids can then be pulled to separate locations in the cell. In many cells, the centrosome is located in the cytoplasm, outside the nucleus; the microtubules would be unable to attach to the chromatids in the presence of the nuclear envelope. Therefore, in the early stages of the cell cycle, beginning in prophase and lasting until around prometaphase, the nuclear membrane is dismantled. Likewise, during the same period, the nuclear lamina is also disassembled, a process regulated by phosphorylation of the lamins by protein kinases such as the CDC2 protein kinase. Towards the end of the cell cycle, the nuclear membrane is reformed, and around the same time, the nuclear lamina is reassembled by dephosphorylation of the lamins. However, in dinoflagellates, the nuclear envelope remains intact, the centrosomes are located in the cytoplasm, and the microtubules come in contact with chromosomes, whose centromeric regions are incorporated into the nuclear envelope (the so-called closed mitosis with extranuclear spindle). In many other protists (e.g., ciliates, sporozoans) and fungi, the centrosomes are intranuclear, and their nuclear envelope also does not disassemble during cell division. Apoptosis is a controlled process in which the cell's structural components are destroyed, resulting in death of the cell. Changes associated with apoptosis directly affect the nucleus and its contents, for example, in the condensation of chromatin and the disintegration of the nuclear envelope and lamina. The destruction of the lamin networks is controlled by specialized apoptotic proteases called caspases, which cleave the lamin proteins and, thus, degrade the nucleus' structural integrity. Lamin cleavage is sometimes used as a laboratory indicator of caspase activity in assays for early apoptotic activity. Cells that express mutant caspase-resistant lamins are deficient in nuclear changes related to apoptosis, suggesting that lamins play a role in initiating the events that lead to apoptotic degradation of the nucleus. Inhibition of lamin assembly itself is an inducer of apoptosis. The nuclear envelope acts as a barrier that prevents both DNA and RNA viruses from entering the nucleus. Some viruses require access to proteins inside the nucleus in order to replicate and/or assemble. DNA viruses, such as herpesviruses, replicate and assemble in the cell nucleus, and exit by budding through the inner nuclear membrane. This process is accompanied by disassembly of the lamina on the nuclear face of the inner membrane. Initially, it was suspected that immunoglobulins in general, and autoantibodies in particular, do not enter the nucleus. Now there is a body of evidence that under pathological conditions (e.g. 
lupus erythematosus) IgG can enter the nucleus. Most eukaryotic cell types usually have a single nucleus, but some have no nuclei, while others have several. This can result from normal development, as in the maturation of mammalian red blood cells, or from faulty cell division. An anucleated cell contains no nucleus and is, therefore, incapable of dividing to produce daughter cells. The best-known anucleated cell is the mammalian red blood cell, or erythrocyte, which also lacks other organelles such as mitochondria, and serves primarily as a transport vessel to ferry oxygen from the lungs to the body's tissues. Erythrocytes mature through erythropoiesis in the bone marrow, where they lose their nuclei, organelles, and ribosomes. The nucleus is expelled during the process of differentiation from an erythroblast to a reticulocyte, which is the immediate precursor of the mature erythrocyte. The presence of mutagens may induce the release of some immature "micronucleated" erythrocytes into the bloodstream. Anucleated cells can also arise from flawed cell division in which one daughter lacks a nucleus and the other has two nuclei. In flowering plants, this condition occurs in sieve tube elements. Multinucleated cells contain multiple nuclei. Most acantharean species of protozoa and some fungi in mycorrhizae have naturally multinucleated cells. Other examples include the intestinal parasites in the genus "Giardia", which have two nuclei per cell. In humans, skeletal muscle cells, called myocytes, become multinucleated during development as they fuse into a syncytium; the resulting arrangement of nuclei near the periphery of the cells allows maximal intracellular space for myofibrils. Other multinucleate cells in humans are osteoclasts, a type of bone cell. Multinucleated and binucleated cells can also be abnormal in humans; for example, cells arising from the fusion of monocytes and macrophages, known as giant multinucleated cells, sometimes accompany inflammation and are also implicated in tumor formation. A number of dinoflagellates are known to have two nuclei. Unlike other multinucleated cells, these nuclei contain two distinct lineages of DNA: one from the dinoflagellate and the other from a symbiotic diatom. The mitochondria and the plastids of the diatom somehow remain functional. As the major defining characteristic of the eukaryotic cell, the nucleus' evolutionary origin has been the subject of much speculation. Four major hypotheses have been proposed to explain the existence of the nucleus, although none have yet earned widespread support. The first model, known as the "syntrophic model", proposes that a symbiotic relationship between archaea and bacteria created the nucleus-containing eukaryotic cell. (Organisms of the Archaea and Bacteria domains have no cell nucleus.) It is hypothesized that the symbiosis originated when ancient archaea, similar to modern methanogenic archaea, invaded and lived within bacteria similar to modern myxobacteria, eventually forming the early nucleus. This theory is analogous to the accepted theory for the origin of eukaryotic mitochondria and chloroplasts, which are thought to have developed from a similar endosymbiotic relationship between proto-eukaryotes and aerobic bacteria. The archaeal origin of the nucleus is supported by observations that archaea and eukarya have similar genes for certain proteins, including histones. 
Observations that myxobacteria are motile, can form multicellular complexes, and possess kinases and G proteins similar to eukarya, support a bacterial origin for the eukaryotic cell. A second model proposes that proto-eukaryotic cells evolved from bacteria without an endosymbiotic stage. This model is based on the existence of modern planctomycetes bacteria that possess a nuclear structure with primitive pores and other compartmentalized membrane structures. A similar proposal states that a eukaryote-like cell, the chronocyte, evolved first and phagocytosed archaea and bacteria to generate the nucleus and the eukaryotic cell. The most controversial model, known as "viral eukaryogenesis", posits that the membrane-bound nucleus, along with other eukaryotic features, originated from the infection of a prokaryote by a virus. The suggestion is based on similarities between eukaryotes and viruses such as linear DNA strands, mRNA capping, and tight binding to proteins (analogizing histones to viral envelopes). One version of the proposal suggests that the nucleus evolved in concert with phagocytosis to form an early cellular "predator". Another variant proposes that eukaryotes originated from early archaea infected by poxviruses, on the basis of observed similarity between the DNA polymerases in modern poxviruses and eukaryotes. It has been suggested that the unresolved question of the evolution of sex could be related to the viral eukaryogenesis hypothesis. A more recent proposal, the "exomembrane hypothesis", suggests that the nucleus instead originated from a single ancestral cell that evolved a second exterior cell membrane; the interior membrane enclosing the original cell then became the nuclear membrane and evolved increasingly elaborate pore structures for passage of internally synthesized cellular components such as ribosomal subunits. Claude Debussy Achille-Claude Debussy (22 August 1862 – 25 March 1918) was a French composer. He was seen, during his lifetime and afterwards, as the first Impressionist composer, although he vigorously rejected the term. He was among the most influential composers of the late 19th and early 20th centuries. Born to a family of modest means and little cultural involvement, Debussy showed enough musical talent to be admitted at the age of ten to France's leading music college, the Conservatoire de Paris. He originally studied the piano, but found his vocation in innovative composition, despite the disapproval of the Conservatoire's conservative professors. He took many years to develop his mature style, and was nearly 40 before achieving international fame in 1902 with the only opera he completed, "Pelléas et Mélisande". Debussy's orchestral works include "Prélude à l'après-midi d'un faune" (1894), "Nocturnes" (1897–99) and "Images" (1905–1912). His music was to a considerable extent a reaction against Wagner and the German musical tradition. He regarded the classical symphony as obsolete and sought an alternative in his "symphonic sketches", "La mer" (1903–1905). His piano works include two books of Préludes and two of Études. Throughout his career he wrote "mélodies" based on a wide variety of poetry, including his own. He was greatly influenced by the Symbolist poetic movement of the later 19th century. A small number of works, including the early "La Damoiselle élue" and the late "Le Martyre de saint Sébastien", have important parts for chorus. 
In his final years he focused on chamber music, completing three of a planned six sonatas for different combinations of instruments. With early influences including Russian and far-eastern music, Debussy developed his own style in the use of harmony and orchestral colouring, derided, and unsuccessfully resisted, by much of the musical establishment of the day. His works have strongly influenced a wide range of composers, including Béla Bartók, Olivier Messiaen, George Benjamin and the jazz pianist and composer Bill Evans. Debussy's life was cut short by cancer. He died at his home in Paris at the age of 55 after a composing career of a little more than 30 years. Debussy was born on 22 August 1862 in Saint-Germain-en-Laye, Seine-et-Oise, on the north-west fringes of Paris. He was the eldest of the five children of Manuel-Achille Debussy and his wife, Victorine, "née" Manoury. Debussy senior ran a china shop and his wife was a seamstress. The shop was unsuccessful, and closed in 1864; the family moved to Paris, first living with Victorine's mother, in Clichy, and, from 1868, in their own apartment in the Rue Saint-Honoré. Manuel worked in a printing factory. In 1870, to escape the Siege of Paris during the Franco-Prussian War, Debussy's pregnant mother took him and his sister Adéle, to their paternal aunt's home in Cannes, where they remained until the following year. During his stay in Cannes, the seven-year-old Debussy had his first piano lessons; his aunt paid for him to study with an Italian musician, Jean Cerutti. Manuel Debussy remained in Paris and joined the forces of the Commune; after its defeat by French government troops in 1871 he was sentenced to four years' imprisonment, although he was only made to serve one year. Among his fellow Communard prisoners was his friend Charles de Sivry, a musician. Sivry's mother, Antoinette Mauté de Fleurville, gave piano lessons, and at his instigation the young Debussy became one of her pupils. Debussy's talents soon became evident, and in 1872, aged ten, he was admitted to the Conservatoire de Paris, where he remained a student for the next eleven years. He first joined the piano class of Antoine François Marmontel, and studied solfège with Albert Lavignac and, later, composition with Ernest Guiraud, harmony with Émile Durand, and organ with César Franck. The course included music history and theory studies with Louis-Albert Bourgault-Ducoudray, but it is not certain that Debussy, who was apt to skip classes, actually attended these. At the Conservatoire, Debussy initially made good progress. Marmontel said of him "A charming child, a truly artistic temperament; much can be expected of him". Another teacher was less impressed: Emile Durand wrote in a report "Debussy would be an excellent pupil if he were less sketchy and less cavalier." A year later he described Debussy as "desperately careless". In July 1874 Debussy received the award of "deuxième accessit" for his performance as soloist in the first movement of Chopin's Second Piano Concerto at the Conservatoire's annual competition. He was a fine pianist and an outstanding sight reader, who could have had a professional career had he wished, but he was only intermittently diligent in his studies. He advanced to "premier accessit" in 1875 and second prize in 1877, but failed at the competitions in 1878 and 1879. These failures made him ineligible to continue in the Conservatoire's piano classes, but he remained a student for harmony, solfège and, later, composition. 
With Marmontel's help Debussy secured a summer vacation job in 1879 as resident pianist at the Château de Chenonceau, where he rapidly acquired a taste for luxury that was to remain with him all his life. His first compositions date from this period, two settings of poems by Alfred de Musset: "Ballade à la lune" and "Madrid, princesse des Espagnes". The following year he secured a job as pianist in the household of Nadezhda von Meck, the patroness of Tchaikovsky. He travelled with her family for the summers of 1880 to 1882, staying at various places in France, Switzerland and Italy, as well as at her home in Moscow. He composed his Piano Trio in G major for von Meck's ensemble, and made a transcription for piano duet of three dances from Tchaikovsky's "Swan Lake". At the end of 1880 Debussy, while continuing in his studies at the Conservatoire, was engaged as accompanist for Marie Moreau-Sainti's singing class; he took this role for four years. Among the members of the class was Marie Vasnier; Debussy was greatly taken with her, and she inspired him to compose: he wrote 27 songs dedicated to her during their seven-year relationship. She was the wife of Henri Vasnier, a prominent civil servant, and much younger than her husband. She soon became Debussy's mistress as well as his muse. Whether Vasnier was content to tolerate his wife's affair with the young student or was simply unaware of it is not clear, but he and Debussy remained on excellent terms, and he continued to encourage the composer in his career. At the Conservatoire, Debussy incurred the disapproval of the faculty, particularly his composition teacher, Guiraud, for his failure to follow the orthodox rules of composition then prevailing. Nevertheless, in 1884 Debussy won France's most prestigious musical award, the Prix de Rome, with his cantata "L'enfant prodigue". The Prix carried with it a residence at the Villa Medici, the French Academy in Rome, to further the winner's studies. Debussy was there from January 1885 to March 1887, with three or possibly four absences of several weeks when he returned to France, chiefly to see Marie Vasnier. Initially Debussy found the artistic atmosphere of the Villa Medici stifling, the company boorish, the food bad, and the accommodation "abominable". Neither did he delight in Italian opera, as he found the operas of Donizetti and Verdi not to his taste. He was much more impressed by the music of the 16th-century composers Palestrina and Lassus, which he heard at Santa Maria dell'Anima: "The only church music I will accept." He was often depressed and unable to compose, but he was inspired by Franz Liszt, who visited the students and played for them. In June 1885, Debussy wrote of his desire to follow his own way, saying, "I am sure the Institute would not approve, for, naturally it regards the path which it ordains as the only right one. But there is no help for it! I am too enamoured of my freedom, too fond of my own ideas!" Debussy finally composed four pieces that were submitted to the Academy: the symphonic ode "Zuleima" (based on a text by Heinrich Heine); the orchestral piece "Printemps"; the cantata "La Damoiselle élue" (1887–1888), the first piece in which the stylistic features of his later music began to emerge; and the "Fantaisie" for piano and orchestra, which was heavily based on Franck's music and was eventually withdrawn by Debussy. The Academy chided him for writing music that was "bizarre, incomprehensible and unperformable". 
Although Debussy's works showed the influence of Jules Massenet, the latter concluded, "He is an enigma." During his years in Rome Debussy composed – not for the Academy – most of his Verlaine cycle, "Ariettes oubliées", which made little impact at the time but was successfully republished in 1903 after the composer had become well known. A week after his return to Paris in 1887, Debussy heard the first act of Wagner's "Tristan und Isolde" at the Concerts Lamoureux, and judged it "decidedly the finest thing I know". In 1888 and 1889 he went to the annual festivals of Wagner's operas at Bayreuth. He responded positively to Wagner's sensuousness, mastery of form, and striking harmonies, and was briefly influenced by them, but, unlike some other French composers of his generation, he concluded that there was no future in attempting to adopt and develop Wagner's style. He commented in 1903 that Wagner was "a beautiful sunset that was mistaken for a dawn". In 1889, at the Paris Exposition Universelle, Debussy first heard Javanese gamelan music. The gamelan scales, melodies, rhythms, and ensemble textures appealed to him, and echoes of them are heard in "Pagodes" in his piano suite "Estampes". He also attended two concerts of Rimsky-Korsakov's music, conducted by the composer. This too made an impression on him, and its harmonic freedom and non-Teutonic tone colours influenced his own developing musical style. Marie Vasnier ended her liaison with Debussy soon after his final return from Rome, although they remained on good enough terms for him to dedicate to her one more song, "Mandoline", in 1890. Later in 1890 Debussy met Erik Satie, who proved a kindred spirit in his experimental approach to composition. Both were bohemians, enjoying the same café society and struggling to stay afloat financially. In the same year Debussy began a relationship with Gabrielle (Gaby) Dupont, a tailor's daughter from Lisieux; in July 1893 they began living together. Debussy continued to compose songs, piano pieces and other works, some of which were publicly performed, but his music made only a modest impact, although his fellow composers recognised his potential by electing him to the committee of the Société Nationale de Musique in 1893. His String Quartet was premiered by the Ysaÿe string quartet at the Société Nationale in the same year. In May 1893 Debussy attended a theatrical event that was of key importance to his later career – the premiere of Maurice Maeterlinck's play "Pelléas et Mélisande", which he immediately determined to turn into an opera. He travelled to Maeterlinck's home in Ghent in November to secure his consent to an operatic adaptation. In February 1894 Debussy completed the first draft of Act I of his operatic version of "Pelléas et Mélisande", and worked to complete the work for most of the year. While still living with Dupont, he had an affair with the singer Thérèse Roger, and in 1894 he announced their engagement. His behaviour was widely condemned; anonymous letters circulated denouncing his treatment of both women, as well as his financial irresponsibility and debts. The engagement was broken off, and several of Debussy's friends and supporters disowned him, including Ernest Chausson, hitherto one of his strongest supporters. In terms of musical recognition, Debussy made a step forward in December 1894, when the symphonic poem "Prélude à l'après-midi d'un faune", based on Stéphane Mallarmé's poem, was premiered at a concert of the Société Nationale. 
The following year he completed the first draft of "Pelléas" and began efforts to get it staged. In May 1898 he made his first contacts with André Messager and Albert Carré, respectively the musical director and general manager of the Opéra-Comique, Paris, about presenting the opera. Debussy abandoned Dupont for her friend Marie-Rosalie Texier, known as "Lilly", whom he married in October 1899, after threatening suicide if she refused him. She was affectionate, practical, straightforward, and well liked by Debussy's friends and associates, but he became increasingly irritated by her intellectual limitations and lack of musical sensitivity. The marriage lasted barely five years. In 1900 Debussy began attending meetings of Les Apaches ("The Hooligans") an informal group of innovative young artists, poets, critics, and musicians who had adopted their collective title to represent their status as "artistic outcasts". The membership was fluid, but at various times included Maurice Ravel, Ricardo Viñes, Igor Stravinsky and Manuel de Falla. In the same year the first two of Debussy's three orchestral "Nocturnes" were first performed. Although they did not make any great impact with the public they were well reviewed by musicians including Paul Dukas, Alfred Bruneau and Pierre de Bréville. The complete set was given the following year. Like many other composers of the time, Debussy supplemented his income by teaching and writing. For most of 1901 he had a sideline as music critic of "La Revue Blanche", adopting the pen name "Monsieur Croche". He expressed trenchant views on composers ("I hate sentimentality – his name is Camille Saint-Saëns"), institutions (on the Paris Opéra: "A stranger would take it for a railway station, and, once inside, would mistake it for a Turkish bath"), conductors ("Nikisch is a unique virtuoso, so much so that his virtuosity seems to make him forget the claims of good taste"), musical politics ("The English actually think that a musician can manage an opera house successfully!"), and audiences ("their almost drugged expression of boredom, indifference and even stupidity"). He later collected his criticisms with a view to their publication as a book; it was published after his death as "Monsieur Croche, Antidilettante". In January 1902 rehearsals began at the Opéra-Comique for the opening of "Pelléas et Mélisande". For three months, Debussy attended rehearsals practically every day. In February there was conflict between Maeterlinck on the one hand and Debussy, Messager and Carré on the other about the casting of Mélisande. The author wanted his mistress, Georgette Leblanc, to sing the role, and was incensed when she was passed over in favour of the Scottish soprano Mary Garden. The opera opened on 30 April 1902, and although the first-night audience was divided between admirers and sceptics, the work quickly became a success. It made Debussy a well-known name in France and abroad; "The Times" commented that the opera had "provoked more discussion than any work of modern times, excepting, of course, those of Richard Strauss". The Apaches, led by Ravel (who attended every one of the 14 performances in the first run), were loud in their support; the conservative faculty of the Conservatoire tried in vain to stop its students from seeing the opera. The vocal score was published in early May, and the full orchestral score in 1904. 
In 1903 there was public recognition of Debussy's stature when he was appointed a Chevalier of the Légion d'honneur, but his social standing suffered a great blow when another turn in his private life caused a scandal the following year. Among his pupils was Raoul Bardac, son of Emma, the wife of a Parisian banker, Sigismond Bardac. Raoul introduced his teacher to his mother, to whom Debussy quickly became greatly attracted. She was a sophisticate, a brilliant conversationalist, an accomplished singer, and relaxed about marital fidelity, having been the mistress and muse of Gabriel Fauré a few years earlier. After despatching Lilly to her parental home at Bichain in Villeneuve-la-Guyard on 15 July 1904, Debussy took Emma away, staying incognito in Jersey and then at Pourville in Normandy. He wrote to his wife on 11 August from Dieppe, telling her that their marriage was over, but still making no mention of Bardac. When he returned to Paris he set up home on his own, taking a flat in a different arrondissement. On 14 October, five days before their fifth wedding anniversary, Lilly Debussy attempted suicide, shooting herself in the chest with a revolver; she survived, although the bullet remained lodged in her vertebrae for the rest of her life. The ensuing scandal caused Bardac's family to disown her, and Debussy lost many good friends including Dukas and Messager. His relations with Ravel, never close, were further strained when the latter joined other former friends of Debussy in contributing to a fund to support the deserted Lilly. The Bardacs divorced in May 1905. Finding the hostility in Paris intolerable, Debussy and Emma (now pregnant) went to England. They stayed at the Grand Hotel, Eastbourne, in July and August, where Debussy corrected the proofs of his symphonic sketches "La mer", celebrating his divorce on 2 August. After a brief visit to London, the couple returned to Paris in September, buying a house in a courtyard development off the Avenue du Bois de Boulogne (now Avenue Foch), Debussy's home for the rest of his life. In October 1905 "La mer", Debussy's most substantial orchestral work, was premiered in Paris by the Orchestre Lamoureux under the direction of Camille Chevillard; the reception was mixed. Some praised the work, but Pierre Lalo, critic of "Le Temps", hitherto an admirer of Debussy, wrote, "I do not hear, I do not see, I do not smell the sea". In the same month the composer's only child was born at their home. Claude-Emma, affectionately known as "Chouchou", was a musical inspiration to the composer (she was the dedicatee of his "Children's Corner" suite). She outlived her father by scarcely a year, succumbing to the diphtheria epidemic of 1919. Mary Garden said, "I honestly don't know if Debussy ever loved anybody really. He loved his music – and perhaps himself. I think he was wrapped up in his genius", but biographers are agreed that whatever his relations with lovers and friends, Debussy was devoted to his daughter. Debussy and Emma Bardac eventually married in 1908, their troubled union enduring for the rest of his life. The following year began well when, at Fauré's invitation, Debussy became a member of the governing council of the Conservatoire. His success in London was consolidated in April 1909, when he conducted "Prélude à l'après-midi d'un faune" and the "Nocturnes" at the Queen's Hall; in May he was present at the first London production of "Pelléas et Mélisande", at Covent Garden. 
In the same year, Debussy was diagnosed with colorectal cancer, from which he was to die nine years later. Debussy's works began to feature increasingly in concert programmes at home and overseas. In 1910 Gustav Mahler conducted the "Nocturnes" and "Prélude à l'après-midi d'un faune" in New York in successive months. In the same year, visiting Budapest, Debussy commented that his works were better known there than in Paris. In 1912 Sergei Diaghilev commissioned a new ballet score, "Jeux". That, and the three "Images", premiered the following year, were the composer's last orchestral works. "Jeux" was unfortunate in its timing: two weeks after the premiere, in March 1913, Diaghilev presented the first performance of Stravinsky's "The Rite of Spring", a sensational event that monopolised discussion in musical circles, and effectively sidelined "Jeux" along with Fauré's "Pénélope", which had opened a week before. In 1915 Debussy underwent one of the earliest colostomy operations. It achieved only a temporary respite, and occasioned him considerable frustration ("There are mornings when the effort of dressing seems like one of the twelve labours of Hercules"). He also had a fierce enemy at this period in the form of Camille Saint-Saëns, who in a letter to Fauré condemned Debussy's "En blanc et noir": "It's incredible, and the door of the Institut [de France] must at all costs be barred against a man capable of such atrocities." Saint-Saëns had been a member of the Institut since 1881: Debussy never became one. His health continued to decline; he gave his final concert (the premiere of his Violin Sonata) on 14 September 1917 and became bedridden in early 1918. Debussy died on 25 March 1918 at his home. The First World War was still raging and Paris was under German aerial and artillery bombardment. The military situation did not permit the honour of a public funeral with ceremonious graveside orations. The funeral procession made its way through deserted streets to a temporary grave at Père Lachaise Cemetery as the German guns bombarded the city. Debussy's body was reinterred the following year in the small Passy Cemetery sequestered behind the Trocadéro, fulfilling his wish to rest "among the trees and the birds"; his wife and daughter are buried with him. In a survey of Debussy's oeuvre shortly after the composer's death, the critic Ernest Newman wrote, "It would be hardly too much to say that Debussy spent a third of his life in the discovery of himself, a third in the free and happy realisation of himself, and the final third in the partial, painful loss of himself". Later commentators have rated some of the late works more highly than Newman and other contemporaries did, but much of the music for which Debussy is best known is from the middle years of his career. The analyst David Cox wrote in 1974 that Debussy, admiring Wagner’s attempts to combine all the creative arts, "created a new, instinctive, dreamlike world of music, lyrical and pantheistic, contemplative and objective – a kind of art, in fact, which seemed to reach out into all aspects of experience." In 1988 the composer and scholar Wilfrid Mellers wrote of Debussy: Debussy did not give his works opus numbers, apart from his String Quartet op. 10 in G minor (also the only work where the composer's title included a key). 
His works were catalogued and indexed by the musicologist François Lesure in 1977 (revised in 2003) and their Lesure number ("L" followed by a number) is sometimes used as a suffix to their title in concert programmes and recordings. Debussy's musical development was slow, and as a student he was adept enough to produce for his teachers at the Conservatoire works that would conform to their conservative precepts. His friend Georges Jean-Aubry commented that Debussy "admirably imitated Massenet's melodic turns of phrase" in the cantata "L'enfant prodigue" (1884) which won him the Prix de Rome. A more characteristically Debussian work from his early years is "La Damoiselle élue", recasting the traditional form for oratorios and cantatas, using a chamber orchestra and a small body of choral tone and using new or long-neglected scales and harmonies. His early "mélodies", inspired by Marie Vasnier, are more virtuosic in character than his later works in the genre, with extensive wordless "vocalise"; from the "Ariettes oubliées" (1885–1887) onwards he developed a more restrained style. He wrote his own poems for the "Proses lyriques" (1892–1893) but, in the view of the musical scholar Robert Orledge, "his literary talents were not on a par with his musical imagination". The musicologist Jacques-Gabriel Prod'homme wrote that together with "La Damoiselle élue", the "Ariettes oubliées" and the "Cinq poèmes de Charles Baudelaire" (1889) show "the new, strange way which the young musician will hereafter follow". Newman concurred: "There is a good deal of Wagner, especially of "Tristan", in the idiom. But the work as a whole is distinctive, and the first in which we get a hint of the Debussy we were to know later – the lover of vague outlines, of half-lights, of mysterious consonances and dissonances of colour, the apostle of languor, the exclusivist in thought and in style." During the next few years Debussy developed his personal style, without, at this stage, breaking sharply away from French musical traditions. Much of his music from this period is on a small scale, such as the "Deux arabesques", "Valse romantique", "Suite bergamasque", and the first set of "Fêtes galantes". Newman remarked that like Chopin, the Debussy of this period appears as a liberator from Germanic styles of composition – offering instead "an exquisite, pellucid style" capable of conveying "not only gaiety and whimsicality but emotion of a deeper sort". In a 2004 study, Mark DeVoto comments that Debussy's early works are harmonically no more adventurous than existing music by Fauré; in a 2007 book about the piano works, Margery Halford observes that "Deux arabesques" (1888–1891) and "Rêverie" (1890) have "the fluidity and warmth of Debussy's later style" but are not harmonically innovative. Halford cites the popular "Clair de Lune" (1890) from the "Suite Bergamasque" as a transitional work pointing towards the composer's mature style. Musicians from Debussy's time onwards have regarded "Prélude à l'après-midi d'un faune" (1894) as his first orchestral masterpiece. Newman considered it "completely original in idea, absolutely personal in style, and logical and coherent from first to last, without a superfluous bar or even a superfluous note"; Pierre Boulez observed, "Modern music was awakened by "Prélude à l'après-midi d'un faune"". Most of the major works for which Debussy is best known were written between the mid-1890s and the mid-1900s. 
They include the String Quartet (1893), "Pelléas et Mélisande" (1893–1902), the "Nocturnes for Orchestra" (1899) and "La mer" (1903–1905). The suite "Pour le piano" (1894–1901) is, in Halford's view, one of the first examples of the mature Debussy as a composer for the piano: "a major landmark ... and an enlargement of the use of piano sonorities". In the String Quartet (1893), the gamelan sonorities Debussy had heard four years earlier are recalled in the pizzicatos and cross-rhythms of the scherzo. Debussy's biographer Edward Lockspeiser comments that this movement shows the composer's rejection of "the traditional dictum that string instruments should be predominantly lyrical". The work influenced Ravel, whose own String Quartet, written ten years later, has noticeably Debussian features. The academic and journalist Stephen Walsh calls "Pelléas et Mélisande" (begun 1893, staged 1902) "a key work for the 20th century". The composer Olivier Messiaen was fascinated by its "extraordinary harmonic qualities and ... transparent instrumental texture". The opera is composed in what Alan Blyth describes as a sustained and heightened recitative style, with "sensuous, intimate" vocal lines. It influenced composers as different as Stravinsky and Puccini. Orledge describes the "Nocturnes" as exceptionally varied in texture, "ranging from the Musorgskian start of 'Nuages', through the approaching brass band procession in 'Fêtes', to the wordless female chorus in 'Sirènes'". Orledge considers the last a pre-echo of the marine textures of "La mer". "Estampes" for piano (1903) gives impressions of exotic locations, with further echoes of the gamelan in its pentatonic structures. Debussy believed that since Beethoven, the traditional symphonic form had become formulaic, repetitive and obsolete. The three-part, cyclic symphony by César Franck (1888) was more to his liking, and its influence can be found in "La mer" (1905); this uses a quasi-symphonic form, its three sections making up a giant sonata-form movement with, as Orledge observes, a cyclic theme, in the manner of Franck. The central "Jeux de vagues" section has the function of a symphonic development section leading into the final "Dialogue du vent et de la mer", "a powerful essay in orchestral colour and sonority" (Orledge) which reworks themes from the first movement. The reviews were sharply divided. Some critics thought the treatment less subtle and less mysterious than his previous works, and even a step backward; others praised its "power and charm", its "extraordinary verve and brilliant fantasy", and its strong colours and definite lines. Of the later orchestral works, "Images" (1905–1912) is better known and more often played than "Jeux" (1913). The former follows the tripartite form established in the "Nocturnes" and "La mer", but differs in employing traditional British and French folk tunes, and in making the central movement, "Ibéria", far longer than the outer ones, and subdividing it into three parts, all inspired by scenes from Spanish life. Although considering "Images" "the pinnacle of Debussy's achievement as a composer for orchestra", Trezise notes a contrary view that the accolade belongs to the ballet score "Jeux". The latter failed as a ballet because of what Jann Pasler describes as a banal scenario, and the score was neglected for some years. 
Recent analysts have found it a link between traditional continuity and thematic growth within a score and the desire to create discontinuity in a way mirrored in later 20th century music. In this piece, Debussy abandoned the whole-tone scale he had often favoured previously in favour of the octatonic scale with what the Debussy scholar François Lesure describes as its tonal ambiguities. Among the late piano works are two books of "Préludes" (1909–10, 1911–13), short pieces that depict a wide range of subjects. Lesure comments that they range from the frolics of minstrels at Eastbourne in 1905 and the American acrobat "General Lavine" "to dead leaves and the sounds and scents of the evening air". "En blanc et noir" (In white and black, 1915), a three-movement work for two pianos, is a predominantly sombre piece, reflecting the war and national danger. The "Études" (1915) for piano have divided opinion. Writing soon after Debussy's death, Newman found them laboured – "a strange last chapter in a great artist's life"; Lesure, writing eighty years later rates them among Debussy's greatest late works: "Behind a pedagogic exterior, these 12 pieces explore abstract intervals, or – in the last five – the sonorities and timbres peculiar to the piano." In 1914 Debussy started work on a planned set of six sonatas for various instruments. His fatal illness prevented him from completing the set, but those for cello and piano (1915), flute, viola and harp (1915), and violin and piano (1917 – his last completed work) are all concise, three-movement pieces, more diatonic in nature than some of his other late works. "Le Martyre de saint Sébastien" (1911), originally a five-act musical play to a text by Gabriele D'Annunzio that took nearly five hours in performance, was not a success, and the music is now more often heard in a concert (or studio) adaptation with narrator, or as an orchestral suite of "Fragments symphoniques". Debussy enlisted the help of André Caplet in orchestrating and arranging the score. Two late stage works, the ballets "Khamma" (1912) and "La boîte à joujoux" (1913), were left with the orchestration incomplete, and were completed by Charles Koechlin and Caplet, respectively. The application of the term "Impressionist" to Debussy and the music he influenced has been much debated, both in the composer's lifetime and subsequently. The analyst Richard Langham Smith writes that Impressionism was originally a term coined to describe a style of late 19th-century French painting, typically scenes suffused with reflected light in which the emphasis is on the overall impression rather than outline or clarity of detail, as in works by Monet, Pissarro, Renoir and others. Langham Smith writes that the term became transferred to the compositions of Debussy and others which were "concerned with the representation of landscape or natural phenomena, particularly the water and light imagery dear to Impressionists, through subtle textures suffused with instrumental colour". Among painters, Debussy particularly admired Turner, but also drew inspiration from Whistler. With the latter in mind the composer wrote to the violinist Eugène Ysaÿe in 1894 describing the orchestral "Nocturnes" as "an experiment in the different combinations that can be obtained from one colour – what a study in grey would be in painting." 
Debussy strongly objected to the use of the word "Impressionism" for his (or anybody else's) music, but it has continually been attached to him since the assessors at the Conservatoire first applied it, opprobriously, to his early work "Printemps". Langham Smith comments that Debussy wrote many piano pieces with titles evocative of nature – "Reflets dans l'eau" (1905), "Les Sons et les parfums tournent dans l'air du soir" (1910) and "Brouillards" (1913) – and suggests that the Impressionist painters' use of brush-strokes and dots is paralleled in the music of Debussy. Although Debussy said that anyone using the term (whether about painting or music) was an imbecile, some Debussy scholars have taken a less absolutist line. Lockspeiser calls "La mer" "the greatest example of an orchestral Impressionist work", and more recently in "The Cambridge Companion to Debussy" Nigel Simeone comments, "It does not seem unduly far-fetched to see a parallel in Monet's seascapes". In this context may be placed Debussy's pantheistic eulogy to Nature, expressed in a 1911 interview with Henry Malherbe. In contrast to the "impressionistic" characterisation of Debussy's music, several writers have suggested that he structured at least some of his music on rigorous mathematical lines. In 1983 the pianist and scholar Roy Howat published a book contending that certain of Debussy's works are proportioned using mathematical models, even while using an apparent classical structure such as sonata form. Howat suggests that some of Debussy's pieces can be divided into sections that reflect the golden ratio, which is approximated by ratios of consecutive numbers in the Fibonacci sequence. Simon Trezise, in his 1994 book "Debussy: La Mer", finds the intrinsic evidence "remarkable," with the caveat that no written or reported evidence suggests that Debussy deliberately sought such proportions. Lesure broadly concurs, endorsing Howat's conclusions while expressing no opinion on Debussy's conscious intentions. Debussy wrote "We must agree that the beauty of a work of art will always remain a mystery [...] we can never be absolutely sure "how it's made". We must at all costs preserve this magic which is peculiar to music and to which music, by its nature, is of all the arts the most receptive". Nevertheless, there are many indicators of the sources and elements of Debussy's idiom. Writing in 1958, the critic Rudolph Reti summarized six features of Debussy's music, which he asserted "established a new concept of tonality in European music": the frequent use of lengthy pedal points – "not merely bass pedals in the actual sense of the term, but sustained 'pedals' in any voice"; glittering passages and webs of figurations which distract from occasional absence of tonality; frequent use of parallel chords which are "in essence not harmonies at all, but rather 'chordal melodies', enriched unisons", described by some writers as non-functional harmonies; bitonality, or at least bitonal chords; use of the whole-tone and pentatonic scales; and unprepared modulations, "without any harmonic bridge". Reti concludes that Debussy's achievement was the synthesis of monophonic-based "melodic tonality" with harmonies, albeit different from those of "harmonic tonality". In 1889, Debussy held conversations with his former teacher Guiraud, which included exploration of harmonic possibilities at the piano. The discussion, and Debussy's chordal keyboard improvisations, were noted by a younger pupil of Guiraud, Maurice Emmanuel.
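Returning to Howat's golden-ratio observation above, the idea can be illustrated with a short calculation (an editorial aside, not part of Howat's own analysis, and the 55-bar span below is purely hypothetical). The golden ratio is
\[ \varphi = \frac{1+\sqrt{5}}{2} \approx 1.618, \]
and ratios of consecutive Fibonacci numbers converge on it, for example \( \tfrac{8}{5} = 1.6 \), \( \tfrac{13}{8} = 1.625 \), \( \tfrac{34}{21} \approx 1.619 \). A hypothetical 55-bar span whose climax falls at bar 34 would thus sit at \( 34/55 \approx 0.618 \) of its length, very close to the golden section, which is the kind of proportion Howat identifies in Debussy's scores.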
The chord sequences played by Debussy include some of the elements identified by Reti. They may also indicate the influence on Debussy of Satie's 1887 "Trois Sarabandes". A further improvisation by Debussy during this conversation included a sequence of whole-tone harmonies which may have been inspired by the music of Glinka or Rimsky-Korsakov which was becoming known in Paris at this time. During the conversation, Debussy told Guiraud, "There is no theory. You have only to listen. Pleasure is the law!" – although he also conceded, "I feel free because I have been through the mill, and I don't write in the fugal style because I know it." Among French predecessors, Chabrier was an important influence on Debussy (as he was on Ravel and Poulenc); Howat has written that Chabrier's piano music such as "Sous-bois" and "Mauresque" in the "Pièces pittoresques" explored new sound-worlds of which Debussy made effective use 30 years later. Lesure finds traces of Gounod and Massenet in some of Debussy's early songs, and remarks that it may have been from the Russians – Tchaikovsky, Balakirev, Rimsky-Korsakov, Borodin and Mussorgsky – that Debussy acquired his taste for "ancient and oriental modes and for vivid colorations, and a certain disdain for academic rules". Lesure also considers that Mussorgsky's opera "Boris Godunov" directly influenced Debussy's "Pelléas et Mélisande". In the music of Palestrina, Debussy found what he called "a perfect whiteness", and he felt that although Palestrina's musical forms had a "strict manner", they were more to his taste than the rigid rules prevailing among 19th-century French composers and teachers. He drew inspiration from what he called Palestrina's "harmony created by melody", finding an arabesque-like quality in the melodic lines. For Chopin's piano music Debussy professed his "respectful gratitude". He was torn between dedicating his own Études to Chopin or to François Couperin, whom he also admired as a model of form, seeing himself as heir to their mastery of the genre. Howat cautions against the assumption that Debussy's Ballade (1891) and Nocturne (1892) are influenced by Chopin – in Howat's view they owe more to Debussy's early Russian models – but Chopin's influence is found in other early works such as the Deux arabesques (1889–1891). In 1914 the publisher A. Durand & fils began publishing scholarly new editions of the works of major composers, and Debussy undertook the supervision of the editing of Chopin's music. Although Debussy was in no doubt of Wagner's stature, he was only briefly influenced by him in his compositions, after "La damoiselle élue" and the "Cinq poèmes de Baudelaire" (both begun in 1887). According to Pierre Louÿs, Debussy "did not see 'what anyone can do beyond Tristan'," although he admitted that it was sometimes difficult to avoid "the ghost of old Klingsor, alias Richard Wagner, appearing at the turning of a bar". After Debussy's short Wagnerian phase, he started to become interested in non-Western music and its unfamiliar approaches to composition. The piano piece "Golliwogg's Cakewalk", from the 1908 suite "Children's Corner", contains a parody of music from the introduction to "Tristan", in which, in the opinion of the musicologist Lawrence Kramer, Debussy escapes the shadow of the older composer and "smilingly relativizes Wagner into insignificance". A contemporary influence was Erik Satie, whom Nichols calls Debussy's "most faithful friend" amongst French musicians.
Debussy's orchestration in 1896 of Satie's "Gymnopédies" (which had been written in 1887) "put their composer on the map" according to the musicologist Richard Taruskin, and the Sarabande from Debussy's "Pour le piano" (1901) "shows that [Debussy] knew Satie's "Trois Sarabandes" at a time when only a personal friend of the composer could have known them." (They were not published until 1911). Debussy's interest in the popular music of his time is evidenced not only by the "Golliwogg's Cakewalk" and other piano pieces featuring rag-time, such as "The Little Nigar" (Debussy's spelling) (1909), but by the slow waltz "La plus que lente" ("The more than slow"), based on the style of the gipsy violinist at a Paris hotel (to whom he gave the manuscript of the piece). In addition to the composers who influenced his own compositions, Debussy held strong views about several others. He was for the most part enthusiastic about Richard Strauss and Stravinsky, respectful of Mozart, and in awe of Bach, whom he called the "good God of music" ("le Bon Dieu de la musique"). His relationship to Beethoven was complex; he was said to refer to him as "le vieux sourd" (the old deaf one) and asked one young pupil not to play Beethoven's music for "it is like somebody dancing on my grave"; but he believed that Beethoven had profound things to say, yet did not know how to say them, "because he was imprisoned in a web of incessant restatement and of German aggressiveness." He was not in sympathy with Schubert or Mendelssohn, the latter being described as a "facile and elegant notary". With the advent of the First World War, Debussy became ardently patriotic in his musical opinions. Writing to Stravinsky, he asked "How could we not have foreseen that these men were plotting the destruction of our art, just as they had planned the destruction of our country?" In 1915 he complained that "since Rameau we have had no purely French tradition [...] We tolerated overblown orchestras, tortuous forms [...] we were about to give the seal of approval to even more suspect naturalizations when the sound of gunfire put a sudden stop to it all." Taruskin writes that some have seen this as a reference to the composers Gustav Mahler and Arnold Schoenberg, both born Jewish. In 1912 Debussy had remarked to his publisher of the opera "Ariane et Barbe-bleue" by the (also Jewish) composer Paul Dukas, "You're right, [it] is a masterpiece – but it's not a masterpiece of French music." Despite his lack of formal schooling, Debussy read widely and found inspiration in literature. Lesure writes, "The development of free verse in poetry and the disappearance of the subject or model in painting influenced him to think about issues of musical form." Debussy was influenced by the Symbolist poets. These writers, who included Verlaine, Mallarmé, Maeterlinck and Rimbaud, reacted against the realism, naturalism, objectivity and formal conservatism that prevailed in the 1870s. They favoured poetry using suggestion rather than direct statement; the literary scholar Chris Baldick writes that they evoked "subjective moods through the use of private symbols, while avoiding the description of external reality or the expression of opinion". Debussy was much in sympathy with the Symbolists' desire to bring poetry closer to music, became friendly with several leading exponents, and set many Symbolist works throughout his career. Debussy's literary inspirations were mostly French, but he did not overlook foreign writers.
As well as Maeterlinck for "Pelléas et Mélisande", he drew on Shakespeare and Charles Dickens for two of his Préludes for piano – "La Danse de Puck" (Book 1, 1910) and "Hommage à S. Pickwick Esq. P.P.M.P.C." (Book 2, 1913). He set Dante Gabriel Rossetti's "The Blessed Damozel" in his early cantata, "La Damoiselle élue" (1888). He wrote incidental music for "King Lear" and planned an opera based on "As You Like It", but abandoned that once he turned his attention to setting Maeterlinck's play. In 1890 he began work on an orchestral piece inspired by Poe's "The Fall of the House of Usher" and later sketched the libretto for an opera, "La chute de la maison Usher". Another project inspired by Poe – an operatic version of "The Devil in the Belfry" – did not progress beyond sketches. French writers whose words he set include Paul Bourget, Alfred de Musset, Théodore de Banville, Leconte de Lisle, Théophile Gautier, Paul Verlaine, François Villon, and Mallarmé – the last of whom also provided Debussy with the inspiration for one of his most popular orchestral pieces, "Prélude à l'après-midi d'un faune". Debussy is widely regarded as one of the most influential composers of the 20th century. Roger Nichols writes that "if one omits Schoenberg [...] a list of 20th-century composers influenced by Debussy is practically a list of 20th-century composers "tout court"." Bartók first encountered Debussy's music in 1907 and later said that "Debussy's great service to music was to reawaken among all musicians an awareness of harmony and its possibilities". Not only Debussy's use of whole-tone scales, but also his style of word-setting in "Pelléas et Mélisande", were the subject of study by Leoš Janáček while he was writing his 1921 opera "Káťa Kabanová". Stravinsky was more ambivalent about Debussy's music (he thought "Pelléas" "a terrible bore ... in spite of many wonderful pages") but the two composers knew each other and Stravinsky's "Symphonies d'instruments à vent" (1920) was written as a memorial for Debussy. In the aftermath of the Great War, the young French composers of Les Six reacted against what they saw as the poetic, mystical quality of Debussy's music in favour of something more hard-edged. Their sympathiser and self-appointed spokesman, Jean Cocteau, wrote in 1918: "Enough of "nuages", waves, aquariums, "ondines" and nocturnal perfumes," pointedly alluding to the titles of pieces by Debussy. Later generations of French composers had a much more positive relationship with his music. Messiaen was given a score of "Pelléas et Mélisande" as a boy and said that it was "a revelation, love at first sight" and "probably the most decisive influence I have been subject to". Boulez also discovered Debussy's music at a young age and said that it gave him his first sense of what modernity in music could mean. Among contemporary composers George Benjamin has described "Prélude à l'après-midi d'un faune" as "the definition of perfection"; he has conducted "Pelléas et Mélisande" and the critic Rupert Christiansen detects the influence of the work in Benjamin's opera "Written on Skin" (2012). Others have made orchestrations of some of the piano and vocal works, including John Adams's version of four of the Baudelaire songs ("Le Livre de Baudelaire", 1994), Robin Holloway's of "En blanc et noir" (2002), and Colin Matthews's of both books of "Préludes" (2001–2006).
The pianist Stephen Hough believes that Debussy's influence also extends to jazz and suggests that "Reflets dans l'eau" can be heard in the harmonies of Bill Evans. In 1904, Debussy played the piano accompaniment for Mary Garden in recordings for the Compagnie française du Gramophone of four of his songs: three "mélodies" from the Verlaine cycle "Ariettes oubliées" – "Il pleure dans mon coeur", "L'ombre des arbres" and "Green" – and "Mes longs cheveux", from Act III of "Pelléas et Mélisande". He made a set of piano rolls for the Welte-Mignon company in 1913. They contain fourteen of his pieces: "D'un cahier d'esquisses", "La plus que lente", "La soirée dans Grenade", all six movements of "Children's Corner", and five of the "Préludes": "Danseuses de Delphes", "Le vent dans la plaine", "La cathédrale engloutie", "La danse de Puck" and "Minstrels". The 1904 and 1913 sets have been transferred to compact disc. Contemporaries of Debussy who made recordings of his music included the pianists Ricardo Viñes (in "Poissons d'or" from "Images" and "La soirée dans Grenade" from "Estampes"); Alfred Cortot (numerous solo pieces as well as the Violin Sonata with Jacques Thibaud and the "Chansons de Bilitis" with Maggie Teyte); and Marguerite Long ("Jardins sous la pluie" and "Arabesques"). Singers in Debussy's mélodies or excerpts from "Pelléas et Mélisande" included Jane Bathori, Claire Croiza, Charles Panzéra and Ninon Vallin; and among the conductors in the major orchestral works were Ernest Ansermet, Désiré-Émile Inghelbrecht, Pierre Monteux and Arturo Toscanini, and in the "Petite Suite", Henri Büsser, who had prepared the orchestration for Debussy. Many of these early recordings have been reissued on CD. In more recent times Debussy's output has been extensively recorded. In 2018, to mark the centenary of the composer's death, Warner Classics, with contributions from other companies, issued a 33-CD set that is claimed to include all the music Debussy wrote. Canis Major Canis Major is a constellation in the southern celestial hemisphere. In the second century, it was included in Ptolemy's 48 constellations, and is counted among the 88 modern constellations. Its name is Latin for "greater dog" in contrast to Canis Minor, the "lesser dog"; both figures are commonly represented as following the constellation of Orion the hunter through the sky. The Milky Way passes through Canis Major and several open clusters lie within its borders, most notably M41. Canis Major contains Sirius, the brightest star in the night sky, known as the "dog star". It is bright because of its proximity to the Solar System. In contrast, the other bright stars of the constellation are stars of great distance and high luminosity. At magnitude 1.5, Epsilon Canis Majoris (Adhara) is the second-brightest star of the constellation and the brightest source of extreme ultraviolet radiation in the night sky. Next in brightness are the yellow-white supergiant Delta (Wezen) at 1.8, the blue-white giant Beta (Mirzam) at 2.0, blue-white supergiants Eta (Aludra) at 2.4 and Omicron at 3.0, and white spectroscopic binary Zeta (Furud), also at 3.0. The red hypergiant VY Canis Majoris is one of the largest stars known, while the neutron star RX J0720.4-3125 has a radius of a mere 5 km.
In ancient Mesopotamia, Sirius, named KAK.SI.DI by the Babylonians, was seen as an arrow aiming towards Orion, while the southern stars of Canis Major and a part of Puppis were viewed as a bow, named BAN in the "Three Stars Each" tablets, dating to around 1100 BC. In the later compendium of Babylonian astronomy and astrology titled "MUL.APIN", the arrow, Sirius, was also linked with the warrior Ninurta, and the bow with Ishtar, daughter of Enlil. Ninurta was linked to the later deity Marduk, who was said to have slain the ocean goddess Tiamat with a great bow, and worshipped as the principal deity in Babylon. The Ancient Greeks replaced the bow and arrow depiction with that of a dog. In Greek mythology, Canis Major represented the dog Laelaps, a gift from Zeus to Europa; or sometimes the hound of Procris, Diana's nymph; or the one given by Aurora to Cephalus, so famed for its speed that Zeus elevated it to the sky. It was also considered to represent one of Orion's hunting dogs, pursuing Lepus the Hare or helping Orion fight Taurus the Bull; and is referred to in this way by Aratos, Homer and Hesiod. The ancient Greeks refer only to one dog, but by Roman times, Canis Minor appears as Orion's second dog. Alternative names include Canis Sequens and Canis Alter. Canis Syrius was the name used in the 1521 "Alfonsine tables". The Roman myth refers to Canis Major as "Custos Europae", the dog guarding Europa but failing to prevent her abduction by Jupiter in the form of a bull, and as "Janitor Lethaeus", "the watchdog". In medieval Arab astronomy, the constellation became "al-Kalb al-Akbar", "the Greater Dog", transcribed as "Alcheleb Alachbar" by the 17th-century writer Edmund Chilmead. Islamic scholar Abū Rayḥān al-Bīrūnī referred to Orion as "Kalb al-Jabbār", "the Dog of the Giant". Among the Merazig of Tunisia, shepherds note six constellations that mark the passage of the dry, hot season. One of them, called "Merzem", includes the stars of Canis Major and Canis Minor and is the herald of two weeks of hot weather. In Chinese astronomy, the modern constellation of Canis Major is located in the Vermilion Bird, where its stars were classified in several separate asterisms. The Military Market was a circular pattern of stars containing Nu, Beta, Xi¹ and Xi², and some stars from Lepus. The Wild Cockerel was at the centre of the Military Market, although it is uncertain which stars depicted it. Schlegel reported that the stars Omicron and Pi Canis Majoris might have represented it, while Beta or Nu have also been proposed. Sirius was the Celestial Wolf, denoting invasion and plunder. Southeast of the Wolf was the asterism of the celestial Bow and Arrow, which was interpreted as containing Delta, Epsilon, Eta and Kappa Canis Majoris and Delta Velorum. Alternatively, the arrow was depicted by Omicron and Eta and aimed at Sirius (the Wolf), while the bow comprised Kappa, Epsilon, Sigma, Delta and 164 Canis Majoris, and Pi and Omicron Puppis. Both the Māori people and the people of the Tuamotus recognized the figure of Canis Major as a distinct entity, though it was sometimes absorbed into other constellations. One Māori constellation, known by names translated as "The Assembly of Sirius", included both Canis Minor and Canis Major, along with some surrounding stars. Related was another figure, known as the Mirror, formed from an undefined group of stars in Canis Major.
The Māori had two names for Sirius itself, corresponding to two of the names for the constellation, though one of these names was also applied to other stars in various Māori groups and other Polynesian cosmologies. The Tuamotu people called Canis Major by a name translated as "the abiding assemblage". The Tharumba people of the Shoalhaven River saw three stars of Canis Major as Bat and his two wives, Mrs Brown Snake and Mrs Black Snake; tired of following their husband around, the women try to bury him while he is hunting a wombat down its hole. He spears them, and all three are placed in the sky as the constellation. To the Boorong people of Victoria, Sigma Canis Majoris was Unurgunite (which has since become the name of this star), and its flanking stars Delta and Epsilon were his two wives. The moon, the "native cat", sought to lure the further wife (Epsilon) away, but Unurgunite assaulted him and he has been wandering the sky ever since. Canis Major is a constellation in the Southern Hemisphere's summer (or Northern Hemisphere's winter) sky, bordered by Monoceros (which lies between it and Canis Minor) to the north, Puppis to the east and southeast, Columba to the southwest, and Lepus to the west. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is 'CMa'. The official constellation boundaries, as set by Eugène Delporte in 1930, are defined by a quadrilateral; in the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between −11.03° and −33.25°. Covering 380 square degrees or 0.921% of the sky, it ranks 43rd of the 88 currently recognized constellations in size. Canis Major is a prominent constellation because of its many bright stars. These include Sirius (Alpha Canis Majoris), the brightest star in the night sky, as well as three other stars above magnitude 2.0. Furthermore, two other stars are thought to have previously outshone all others in the night sky—Adhara (Epsilon Canis Majoris) shone at −3.99 around 4.7 million years ago, and Mirzam (Beta Canis Majoris) peaked at −3.65 around 4.42 million years ago. Another, NR Canis Majoris, will be brightest at magnitude −0.88 in about 2.87 million years' time. The German cartographer Johann Bayer used the Greek letters Alpha through Omicron to label the most prominent stars in the constellation, labelling three adjacent stars as Nu and two further pairs as Xi and Omicron, while subsequent observers designated further stars in the southern parts of the constellation that were hard to discern from Central Europe. Bayer's countryman Johann Elert Bode later added Sigma, Tau and Omega; the French astronomer Nicolas Louis de Lacaille added lettered stars a to k (though none are in use today). John Flamsteed numbered 31 stars, with 3 Canis Majoris being placed by Lacaille into Columba as Delta Columbae (Flamsteed had not recognised Columba as a distinct constellation). He also labelled two stars—his 10 and 13 Canis Majoris—as Kappa¹ and Kappa² respectively, but subsequent cartographers such as Francis Baily and John Bevis dropped the fainter former star, leaving Kappa² as the sole Kappa. Flamsteed's designations Nu¹, Nu², Nu³, Xi¹, Xi², Omicron¹ and Omicron² have all remained in use. Sirius is the brightest star in the night sky at apparent magnitude −1.46 and one of the closest stars to Earth at a distance of 8.6 light-years. Its name comes from the Greek word for "scorching" or "searing".
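The sky fraction quoted above (380 square degrees, or 0.921%) can be checked against the total area of the celestial sphere, about 41,253 square degrees (a rough editorial check, not a figure given in the source):
\[ 4\pi \left(\frac{180}{\pi}\right)^{2} \approx 41{,}253 \ \text{square degrees}, \qquad \frac{380}{41{,}253} \approx 0.92\%. \]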
Sirius is also a binary star; its companion Sirius B is a white dwarf with a magnitude of 8.4—10,000 times fainter than Sirius A to observers on Earth. The two orbit each other every 50 years. Their closest approach last occurred in 1993 and they will be at their greatest separation between 2020 and 2025. Sirius was the basis for the ancient Egyptian calendar. The star marked the Great Dog's mouth on Bayer's star atlas. Flanking Sirius are Beta and Gamma Canis Majoris. Also called Mirzam or Murzim, Beta is a blue-white Beta Cephei variable star of magnitude 2.0, which varies by a few hundredths of a magnitude over a period of six hours. Mirzam is 500 light-years from Earth, and its traditional name means "the announcer", referring to its position as the "announcer" of Sirius, as it rises a few minutes before Sirius does. Gamma, also known as Muliphein, is a fainter star of magnitude 4.12, in reality a blue-white bright giant of spectral type B8IIe located 441 light-years from Earth. Iota Canis Majoris, lying between Sirius and Gamma, is another star that has been classified as a Beta Cephei variable, varying from magnitude 4.36 to 4.40 over a period of 1.92 hours. It is a remote blue-white supergiant star of spectral type B3Ib, around 46,000 times as luminous as the Sun and, at 2500 light-years distant, 300 times further away than Sirius. Epsilon, Omicron, Delta, and Eta Canis Majoris were called "Al Adzari" ("the virgins") in medieval Arabic tradition. Marking the dog's right thigh on Bayer's atlas is Epsilon Canis Majoris, also known as Adhara. At magnitude 1.5, it is the second-brightest star in Canis Major and the 23rd-brightest star in the sky. It is a blue-white supergiant of spectral type B2Iab, around 404 light-years from Earth. This star is one of the brightest known extreme ultraviolet sources in the sky. It is a binary star; the secondary is of magnitude 7.4. Its traditional name means "the virgins", having been transferred from the group of stars to Epsilon alone. Nearby is Delta Canis Majoris, also called Wezen. It is a yellow-white supergiant of spectral type F8Iab and magnitude 1.84, around 1605 light-years from Earth. With a traditional name meaning "the weight", Wezen is 17 times as massive and 50,000 times as luminous as the Sun. If located in the centre of the Solar System, it would extend out to Earth as its diameter is 200 times that of the Sun. Only around 10 million years old, Wezen has stopped fusing hydrogen in its core. Its outer envelope is beginning to expand and cool, and in the next 100,000 years it will become a red supergiant as its core fuses heavier and heavier elements. Once it has a core of iron, it will collapse and explode as a supernova. Nestled between Adhara and Wezen lies Sigma Canis Majoris, known as Unurgunite to the Boorong and Wotjobaluk people, a red supergiant of spectral type K7Ib that varies irregularly between magnitudes 3.43 and 3.51. Also called Aludra, Eta Canis Majoris is a blue-white supergiant of spectral type B5Ia with a luminosity 176,000 times and diameter around 80 times that of the Sun. Classified as an Alpha Cygni type variable star, Aludra varies in brightness from magnitude 2.38 to 2.48 over a period of 4.7 days. It is located 1120 light-years away. To the west of Adhara lies 3.0-magnitude Zeta Canis Majoris or Furud, around 362 light-years distant from Earth. It is a spectroscopic binary, whose components orbit each other every 1.85 years, the combined spectrum indicating a main star of spectral type B2.5V.
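The "10,000 times fainter" figure quoted above for Sirius B is roughly what the magnitude scale implies (an approximate editorial check using the two magnitudes given in the text):
\[ \Delta m = 8.4 - (-1.46) = 9.86, \qquad \frac{F_{A}}{F_{B}} = 10^{0.4\,\Delta m} = 10^{3.94} \approx 9{,}000, \]
which is of the same order as the quoted factor of 10,000.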
Between these stars and Sirius lie Omicron¹, Omicron², and Pi Canis Majoris. Omicron² is a supergiant star about 21 times as massive as the Sun. Only 7 million years old, it has exhausted the supply of hydrogen at its core and is now processing helium. It is an Alpha Cygni variable that undergoes periodic non-radial pulsations, which cause its brightness to cycle from magnitude 2.93 to 3.08 over a 24.44-day interval. Omicron¹ is an orange K-type supergiant of spectral type K2.5Iab that is an irregular variable star, varying between apparent magnitudes 3.78 and 3.99. Around 18 times as massive as the Sun, it shines with 65,000 times its luminosity. North of Sirius lie Theta and Mu Canis Majoris, Theta being the most northerly star with a Bayer designation in the constellation. Around 8 billion years old, it is an orange giant of spectral type K4III that is about as massive as the Sun but has expanded to 30 times the Sun's diameter. Mu is a multiple star system located around 1244 light-years distant, its components discernible in a small telescope as a 5.3-magnitude yellow-hued and 7.1-magnitude bluish star. The brighter star is a giant of spectral type K2III, while the companion is a main sequence star of spectral type B9.5V. Nu¹ Canis Majoris is a yellow-hued giant star of magnitude 5.7, 278 light-years away; it is at the threshold of naked-eye visibility. It has a companion of magnitude 8.1. At the southern limits of the constellation lie Kappa and Lambda Canis Majoris. Although of similar spectra and lying near each other as viewed from Earth, they are unrelated. Kappa is a Gamma Cassiopeiae variable of spectral type B2Vne, which brightened by 50% between 1963 and 1978, from magnitude 3.96 or so to 3.52. It is around 659 light-years distant. Lambda is a blue-white B-type main sequence dwarf with an apparent magnitude of 4.48 located around 423 light-years from Earth. It is 3.7 times as wide as the Sun and 5.5 times as massive, and shines with 940 times its luminosity. Canis Major is also home to many variable stars. EZ Canis Majoris is a Wolf–Rayet star of spectral type WN4 that varies between magnitudes 6.71 and 6.95 over a period of 3.766 days; the cause of its variability is unknown but thought to be related to its stellar wind and rotation. VY Canis Majoris is a remote red hypergiant located approximately 3,800 light-years away from Earth. It is a candidate for being the largest star known and is also one of the most luminous, with an estimated radius of 600 to 2,200 times the Sun's and a luminosity of 60,000 to 560,000 times that of the Sun. However, estimates of its size, mass and luminosity have varied; it was observed in 2011 using interferometry with the Very Large Telescope, yielding a radius of only 1,420 ± 120 solar radii, and a luminosity of only 270,000 times that of the Sun, corresponding to a surface temperature of around 3,490 K (and hence spectral type M4Ia). Its current mass has been revised to 17 ± 8 solar masses, having shed material from an initial 15–35 solar masses. W Canis Majoris is a type of red giant known as a carbon star—a semiregular variable, it ranges between magnitudes 6.27 and 7.09 over a period of 160 days. A cool star, it has a surface temperature of around 2,900 K and a radius 234 times that of the Sun, its distance estimated at 1,444–1,450 light-years from Earth. At the other extreme in size is RX J0720.4-3125, a neutron star with a radius of around 5 km. Exceedingly faint, it has an apparent magnitude of 26.6.
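The 2011 interferometric figures quoted above for VY Canis Majoris are mutually consistent under the Stefan–Boltzmann law (a rough editorial check, assuming a solar effective temperature of about 5,772 K, which is not stated in the text):
\[ \frac{L}{L_\odot} = \left(\frac{R}{R_\odot}\right)^{2}\left(\frac{T}{T_\odot}\right)^{4} \approx 1420^{2}\times\left(\frac{3490}{5772}\right)^{4} \approx 2.7\times10^{5}, \]
in agreement with the quoted luminosity of roughly 270,000 times that of the Sun.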
Its spectrum and temperature appear to be mysteriously changing over several years. The nature of the changes is unclear, but it is possible they were caused by an event such as the star's absorption of an accretion disc. Tau Canis Majoris is a Beta Lyrae-type eclipsing multiple star system that varies from magnitude 4.32 to 4.37 over 1.28 days. Its four main component stars are hot O-type stars, with a combined mass 80 times that of the Sun and shining with 500,000 times its luminosity, but little is known of their individual properties. A fifth component is a magnitude 10 star. The system is only 5 million years old. UW Canis Majoris is another Beta Lyrae-type star 3000 light-years from Earth; it is an eclipsing binary that ranges in magnitude from a minimum of 5.3 to a maximum of 4.8. It has a period of 4.4 days; its components are two massive hot blue stars, one a blue supergiant of spectral type O7.5-8 Iab, while its companion is a slightly cooler, less evolved and less luminous supergiant of spectral type O9.7Ib. The stars are 200,000 and 63,000 times as luminous as the Sun. However, the fainter star is the more massive, at 19 solar masses to the primary's 16. R Canis Majoris is another eclipsing binary that varies from magnitude 5.7 to 6.34 over 1.13 days, with a third star orbiting these two every 93 years. The shortness of the orbital period and the low mass ratio between the two main components make this an unusual Algol-type system. Seven star systems have been found to have planets. Nu² Canis Majoris is an ageing orange giant of spectral type K1III and apparent magnitude 3.91, located around 64 light-years away. Around 1.5 times as massive and 11 times as luminous as the Sun, it is orbited over a period of 763 days by a planet 2.6 times as massive as Jupiter. HD 47536 is likewise an ageing orange giant found to have a planetary system—echoing the fate of the Solar System in a few billion years as the Sun ages and becomes a giant. Conversely, HD 45364 is a star 107 light-years distant that is a little smaller and cooler than the Sun, of spectral type G8V, which has two planets discovered in 2008. With orbital periods of 228 and 342 days, the planets have a 3:2 orbital resonance, which helps stabilise the system. HD 47186 is another sunlike star with two planets; the inner—HD 47186 b—takes four days to complete an orbit and has been classified as a Hot Neptune, while the outer—HD 47186 c—has an eccentric 3.7-year period orbit and has a similar mass to Saturn. HD 43197 is a sunlike star around 183 light-years distant that has a Jupiter-size planet with an eccentric orbit. Z Canis Majoris is a star system a mere 300,000 years old composed of two pre-main-sequence stars—an FU Orionis star and a Herbig Ae/Be star, which has brightened episodically by two magnitudes to magnitude 8 in 1987, 2000, 2004 and 2008. The more massive Herbig Ae/Be star is enveloped in an irregular, roughly spherical cocoon of dust. The cocoon has a hole in it through which light shines that covers an angle of 5 to 10 degrees of its circumference. Both stars are surrounded by a large envelope of in-falling material left over from the original cloud that formed the system. Both stars are emitting jets of material, that of the Herbig Ae/Be star being much larger—11.7 light-years long.
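The 3:2 resonance noted above for the HD 45364 planets follows directly from the two quoted periods (a simple arithmetic check):
\[ \frac{342\ \text{days}}{228\ \text{days}} = 1.5 = \frac{3}{2}, \qquad 3\times228 = 2\times342 = 684\ \text{days}, \]
so the inner planet completes three orbits in the same time the outer planet completes two.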
Meanwhile, FS Canis Majoris is another star with infra-red emissions indicating a compact shell of dust, but it appears to be a main-sequence star that has absorbed material from a companion. These stars are thought to be significant contributors to interstellar dust. The band of the Milky Way goes through Canis Major, with only patchy obscurement by interstellar dust clouds. It is bright in the northeastern corner of the constellation, as well as in a triangular area between Adhara, Wezen and Aludra, with many stars visible in binoculars. Canis Major boasts several open clusters. The only Messier object is M41 (NGC 2287), an open cluster with a combined visual magnitude of 4.5, around 2300 light-years from Earth. Located 4 degrees south of Sirius, it contains contrasting blue, yellow and orange stars and covers an area the apparent size of the full moon—in reality around 25 light-years in diameter. Its most luminous stars have already evolved into giants. The brightest is a 6.3-magnitude star of spectral type K3. Located in the field is 12 Canis Majoris, though this star is only 670 light-years distant. NGC 2360, known as Caroline's Cluster after its discoverer Caroline Herschel, is an open cluster located 3.5 degrees west of Muliphein and has a combined apparent magnitude of 7.2. Around 15 light-years in diameter, it is located 3700 light-years away from Earth, and has been dated to around 2.2 billion years old. NGC 2362 is a small, compact open cluster, 5200 light-years from Earth. It contains about 60 stars, of which Tau Canis Majoris is the brightest member. Located around 3 degrees northeast of Wezen, it covers an area around 12 light-years in diameter, though the stars appear huddled around Tau when seen through binoculars. It is a very young open cluster, as its member stars are only a few million years old. Lying 2 degrees southwest of NGC 2362 is NGC 2354, a fainter open cluster of magnitude 6.5, with around 15 member stars visible with binoculars. Located around 30' northeast of NGC 2360, NGC 2359 (Thor's Helmet or the Duck Nebula) is a relatively bright emission nebula in Canis Major, with an approximate magnitude of 10, lying 10,000 light-years from Earth. The nebula is shaped by HD 56925, an unstable Wolf–Rayet star embedded within it. In 2003, an overdensity of stars in the region was announced to be the Canis Major Dwarf, the closest satellite galaxy to Earth. However, there remains debate over whether it represents a disrupted dwarf galaxy or in fact a variation in the thin and thick disk and spiral arm populations of the Milky Way. Investigation of the area yielded only ten RR Lyrae variables—consistent with the Milky Way's halo and thick disk populations rather than a separate dwarf spheroidal galaxy. On the other hand, a globular cluster in Puppis, NGC 2298—which appears to be part of the Canis Major dwarf system—is extremely metal-poor, suggesting it did not arise from the Milky Way's thick disk, and instead is of extragalactic origin. NGC 2207 and IC 2163 are a pair of face-on interacting spiral galaxies located 125 million light-years from Earth. About 40 million years ago, the two galaxies had a close encounter and are now moving farther apart; nevertheless, the smaller IC 2163 will eventually be incorporated into NGC 2207. As the interaction continues, gas and dust will be perturbed, sparking extensive star formation in both galaxies.
Supernovae have been observed in NGC 2207 in 1975 (the type Ia SN 1975A), 1999 (the type Ib SN 1999ec), 2003 (the type Ib SN 2003H), and 2013 (the type II SN 2013ai). Located 16 million light-years distant, ESO 489-056 is an irregular dwarf and low-surface-brightness galaxy that has one of the lowest metallicities known. Canis Minor Canis Minor is a small constellation in the northern celestial hemisphere. In the second century, it was included as an asterism, or pattern, of two stars in Ptolemy's 48 constellations, and it is counted among the 88 modern constellations. Its name is Latin for "lesser dog", in contrast to Canis Major, the "greater dog"; both figures are commonly represented as following the constellation of Orion the hunter. Canis Minor contains only two stars brighter than the fourth magnitude, Procyon (Alpha Canis Minoris), with a magnitude of 0.34, and Gomeisa (Beta Canis Minoris), with a magnitude of 2.9. The constellation's dimmer stars were noted by Johann Bayer, who named eight stars including Alpha and Beta, and John Flamsteed, who numbered fourteen. Procyon is the seventh-brightest star in the night sky, as well as one of the closest. A yellow-white main sequence star, it has a white dwarf companion. Gomeisa is a blue-white main sequence star. Luyten's Star is a ninth-magnitude red dwarf and the Solar System's next closest stellar neighbour in the constellation after Procyon. The fourth-magnitude HD 66141, which has evolved into an orange giant towards the end of its life cycle, was discovered to have a planet in 2012. There are two faint deep-sky objects within the constellation's borders. The 11 Canis-Minorids are a meteor shower that can be seen in early December. Though strongly associated with the Classical Greek uranographic tradition, Canis Minor originates from ancient Mesopotamia. Procyon and Gomeisa were called "MASH.TAB.BA" or "twins" in the "Three Stars Each" tablets, dating to around 1100 BC. In the later "MUL.APIN", this name was also applied to the pairs of Pi³ and Pi⁴ Orionis and Zeta and Xi Orionis. The meaning of "MASH.TAB.BA" evolved as well, becoming the twin deities Lulal and Latarak, who are on the opposite side of the sky from "Papsukal", the True Shepherd of Heaven in Babylonian mythology. Canis Minor was also given the name "DAR.LUGAL", which translates to "the star which stands behind it", in the "MUL.APIN"; the constellation represents a rooster. This name may have also referred to the constellation Lepus. "DAR.LUGAL" was also denoted "DAR.MUŠEN" and "DAR.LUGAL.MUŠEN" in Babylonia. Canis Minor was then called "tarlugallu" in Akkadian astronomy. Canis Minor was one of the original 48 constellations formulated by Ptolemy in his second-century Almagest, in which it was defined as a specific pattern (asterism) of stars; Ptolemy identified only two stars and hence no depiction was possible. The Ancient Greeks called the constellation προκύων/"Procyon", "coming before the dog", rendered into Latin as "Antecanis", "Praecanis", or variations thereof, by Cicero and others. Roman writers also appended the descriptors "parvus", "minor" or "minusculus" ("small" or "lesser", for its faintness), "septentrionalis" ("northerly", for its position in relation to Canis Major), "primus" (rising "first") or "sinister" (rising to the "left") to its name "Canis".
In Greek mythology, Canis Minor was sometimes connected with the Teumessian Fox, a beast turned into stone with its hunter, Laelaps, by Zeus, who placed them in heaven as Canis Major (Laelaps) and Canis Minor (Teumessian Fox). Eratosthenes accompanied the Little Dog with Orion, while Hyginus linked the constellation with Maera, a dog owned by Icarius of Athens. On discovering the latter's death, the dog and Icarius' daughter Erigone took their lives and all three were placed in the sky—Erigone as Virgo and Icarius as Boötes. As a reward for his faithfulness, the dog was placed along the "banks" of the Milky Way, which the ancients believed to be a heavenly river, where he would never suffer from thirst. The medieval Arabic astronomers maintained the depiction of Canis Minor ("al-Kalb al-Asghar" in Arabic) as a dog; in his Book of the Fixed Stars, Abd al-Rahman al-Sufi included a diagram of the constellation with a canine figure superimposed. There was one slight difference between the Ptolemaic vision of Canis Minor and the Arabic; al-Sufi claims Mirzam, now assigned to Orion, as part of both Canis Minor—the collar of the dog—and its modern home. The Arabic names for both Procyon and Gomeisa alluded to their proximity and resemblance to Sirius, though they were not direct translations of the Greek; Procyon was called "ash-Shi'ra ash-Shamiya", the "Syrian Sirius" and Gomeisa was called "ash-Shira al-Ghamisa", the Sirius with bleary eyes. Among the Merazig of Tunisia, shepherds note six constellations that mark the passage of the dry, hot season. One of them, called "Merzem", includes the stars of Canis Minor and Canis Major and is the herald of two weeks of hot weather. The ancient Egyptians thought of this constellation as Anubis, the jackal god. Alternative names have been proposed: Johann Bayer in the early 17th century termed the constellation "Fovea" "The Pit", and "Morus" "Sycamine Tree". Seventeenth-century German poet and author Philippus Caesius linked it to the dog of Tobias from the Apocrypha. Richard A. Proctor gave the constellation the name "Felis" "the Cat" in 1870 (contrasting with Canis Major, which he had abbreviated to "Canis" "the Dog"), explaining that he sought to shorten the constellation names to make them more manageable on celestial charts. Occasionally, Canis Minor is confused with Canis Major and given the name "Canis Orionis" ("Orion's Dog"). In Chinese astronomy, the stars corresponding to Canis Minor lie in the Vermilion Bird of the South (南方朱雀, "Nán Fāng Zhū Què"). Procyon, Gomeisa and Eta Canis Minoris form an asterism known as Nánhé, the Southern River. With its counterpart, the Northern River Beihe (Castor and Pollux), Nánhé was also associated with a gate or sentry. Along with Zeta and 8 Cancri, 6 Canis Minoris and 11 Canis Minoris formed the asterism "Shuiwei", which literally means "water level". Combined with additional stars in Gemini, Shuiwei represented an official who managed floodwaters or a marker of the water level. Neighboring Korea recognized four stars in Canis Minor as part of a different constellation, "the position of the water". This constellation was located in the Red Bird, the southern portion of the sky. 
Polynesian peoples often did not recognize Canis Minor as a constellation, but they saw Procyon as significant and often named it; in the Tuamotu Archipelago it was known as "Hiro", meaning "twist as a thread of coconut fiber", and "Kopu-nui-o-Hiro" ("great paunch of Hiro"), which was either a name for the modern figure of Canis Minor or an alternative name for Procyon. Other names included "Vena" (after a goddess), on Mangaia and "Puanga-hori" (false "Puanga", the name for Rigel), in New Zealand. In the Society Islands, Procyon was called "Ana-tahua-vahine-o-toa-te-manava", literally "Aster the priestess of brave heart", figuratively the "pillar for elocution". The Wardaman people of the Northern Territory in Australia gave Procyon and Gomeisa the names "Magum" and "Gurumana", describing them as humans who were transformed into gum trees in the dreamtime. Although their skin had turned to bark, they were able to speak with a human voice by rustling their leaves. The Aztec calendar was related to their cosmology. The stars of Canis Minor were incorporated along with some stars of Orion and Gemini into an asterism associated with the day called "Water". Lying directly south of Gemini's bright stars Castor and Pollux, Canis Minor is a small constellation bordered by Monoceros to the south, Gemini to the north, Cancer to the northeast, and Hydra to the east. It does not border Canis Major; Monoceros is in between the two. Covering 183 square degrees, Canis Minor ranks seventy-first of the 88 constellations in size. It appears prominently in the southern sky during the Northern Hemisphere's winter. The constellation boundaries, as set by Eugène Delporte in 1930, are defined by a polygon of 14 sides. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between and . Most visible in the evening sky from January to March, Canis Minor is most prominent at 10 PM during mid-February. It is then seen earlier in the evening until July, when it is only visible after sunset before setting itself, and rising in the morning sky before dawn. The constellation's three-letter abbreviation, as adopted by the International Astronomical Union in 1922, is "CMi". Canis Minor contains only two stars brighter than fourth magnitude. At magnitude 0.34, Procyon, or Alpha Canis Minoris, is the seventh-brightest star in the night sky, as well as one of the closest. Its name means "before the dog" or "preceding the dog" in Greek, as it rises an hour before the "Dog Star", Sirius, of Canis Major. It is a binary star system, consisting of a yellow-white main sequence star of spectral type F5 IV-V, named Procyon A, and a faint white dwarf companion of spectral type DA, named Procyon B. Procyon B, which orbits the more massive star every 41 years, is of magnitude 10.7. Procyon A is 1.4 times the Sun's mass, while its smaller companion is 0.6 times as massive as the Sun. The system is from Earth, the shortest distance to a northern-hemisphere star of the first magnitude. Gomeisa, or Beta Canis Minoris, with a magnitude of 2.89, is the second-brightest star in Canis Minor. Lying from the Solar System, it is a blue-white main sequence star of spectral class B8 Ve. Although fainter to Earth observers, it is much brighter than Procyon, and is 250 times as luminous and three times as massive as the Sun. 
Although its variations are slight, Gomeisa is classified as a shell star (Gamma Cassiopeiae variable), with a maximum magnitude of 2.84 and a minimum magnitude of 2.92. It is surrounded by a disk of gas which it heats and causes to emit radiation. Johann Bayer used the Greek letters Alpha to Eta to label the most prominent eight stars in the constellation, designating two stars as Delta (named Delta¹ and Delta²). John Flamsteed numbered fourteen stars, discerning a third star he named Delta³; his star 12 Canis Minoris was not found subsequently. In Bayer's 1603 work "Uranometria", Procyon is located on the dog's belly, and Gomeisa on its neck. Gamma, Epsilon and Eta Canis Minoris lie nearby, marking the dog's neck, crown and chest respectively. With an apparent magnitude of 4.34, Gamma Canis Minoris is an orange K-type giant of spectral class K3-III C. Its colour is obvious when seen through binoculars. It is a multiple system, consisting of the spectroscopic binary Gamma A and three optical companions, Gamma B, magnitude 13; Gamma C, magnitude 12; and Gamma D, magnitude 10. The two components of Gamma A orbit each other every 389.2 days, with an eccentric orbit that takes their separation between 2.3 and 1.4 astronomical units (AU). Epsilon Canis Minoris is a yellow bright giant of spectral class G6.5IIb and magnitude 4.99. It has 13 times the diameter and 750 times the luminosity of the Sun. Eta Canis Minoris is a giant of spectral class F0III and magnitude 5.24, which has a yellowish hue when viewed through binoculars as well as a faint companion of magnitude 11.1. Located 4 arcseconds from the primary, the companion star is actually around 440 AU from the main star and takes around 5000 years to orbit it. Near Procyon, three stars share the name Delta Canis Minoris. Delta¹ is a yellow-white F-type giant of magnitude 5.25. About 360 times as luminous and 3.75 times as massive as the Sun, it is expanding and cooling as it ages, having spent much of its life as a main sequence star of spectrum B6V. Also known as 8 Canis Minoris, Delta² is an F-type main-sequence star of spectral type F2V and magnitude 5.59. The last of the trio, Delta³ (also known as 9 Canis Minoris), is a white main sequence star of spectral type A0Vnn and magnitude 5.83. These stars mark the paws of the Lesser Dog's left hind leg, while magnitude 5.13 Zeta marks the right. Zeta is a blue-white bright giant of spectral type B8II. Lying approximately 264 light-years (81 parsecs) away with an apparent magnitude of 4.39, HD 66141 is 6.8 billion years old and has evolved into an orange giant of spectral type K2III with a diameter around 22 times that of the Sun, and weighing 1.1 solar masses. It is 174 times as luminous as the Sun, with an absolute magnitude of −0.15. HD 66141 was mistakenly named 13 Puppis, as its celestial coordinates were recorded incorrectly when catalogued and it was hence thought to lie in the constellation of Puppis; Bode gave it the name Lambda Canis Minoris, which is now obsolete. The orange giant is orbited by a planet, HD 66141b, which was detected in 2012 by measuring the star's radial velocity. The planet has a mass around 6 times that of Jupiter and a period of 480 days. BC Canis Minoris is a red giant of spectral type M4III.
It is a semiregular variable star that varies between a maximum magnitude of 6.14 and minimum magnitude of 6.42. Periods of 27.7, 143.3 and 208.3 days have been recorded in its pulsations. AZ, AD and BI Canis Minoris are Delta Scuti variables—short period (six hours at most) pulsating stars that have been used as standard candles and as subjects to study astroseismology. AZ is of spectral type F0III, and ranges between magnitudes 6.44 and 6.51 over a period of 2.3 hours. AD has a spectral type of F2III, and has a maximum magnitude of 9.21 and minimum of 9.51, with a period of approximately 2.95 hours. BI is of spectral type F2 with an apparent magnitude varying around 9.19 and a period of approximately 2.91 hours. At least three red giants are Mira variables in Canis Minor. S Canis Minoris, of spectral type M7e, is the brightest, ranging from magnitude 6.6 to 13.2 over a period of 332.94 days. V Canis Minoris ranges from magnitude 7.4 to 15.1 over a period of 366.1 days. Similar in magnitude is R Canis Minoris, which has a maximum of 7.3, but a significantly brighter minimum of 11.6. An S-type star, it has a period of 337.8 days. YZ Canis Minoris is a red dwarf of spectral type M4.5V and magnitude 11.2, roughly three times the size of Jupiter and from Earth. It is a flare star, emitting unpredictable outbursts of energy for mere minutes, which might be much more powerful analogues of solar flares. Luyten's Star (GJ 273) is a red dwarf star of spectral type M3.5V and close neighbour of the Solar System. Its visual magnitude of 9.9 renders it too faint to be seen with the naked eye, even though it is only away. Fainter still is PSS 544-7, an eighteenth-magnitude red dwarf around 20 percent the mass of the Sun, located from Earth. First noticed in 1991, it is thought to be a cannonball star, shot out of a star cluster and now moving rapidly through space directly away from the galactic disc. The WZ Sagittae-type dwarf nova DY CMi (also known as VSX J074727.6+065050) flared up to magnitude 11.4 over January and February 2008 before dropping eight magnitudes to around 19.5 over approximately 80 days. It is a remote binary star system where a white dwarf and low mass star orbit each other close enough for the former star to draw material off the latter and form an accretion disc. This material builds up until it erupts dramatically. The Milky Way passes through much of Canis Minor, yet it has few deep-sky objects. William Herschel recorded four objects in his 1786 work "Catalogue of Nebulae and Clusters of Stars", including two he mistakenly believed were star clusters. NGC 2459 is a group of five thirteenth- and fourteenth-magnitude stars that appear to lie close together in the sky but are not related. A similar situation has occurred with NGC 2394, also in Canis Minor. This is a collection of fifteen unrelated stars of ninth-magnitude and fainter. Herschel also observed three faint galaxies, two of which are interacting with each other. NGC 2508 is a lenticular galaxy of thirteenth-magnitude, estimated at 205 million light-years (63 million parsecs) distance with a diameter of 80 thousand light-years (25 thousand parsecs). Named as a single object by Herschel, NGC 2402 is actually a pair of near-adjacent galaxies that appear to be interacting with each other. Only of fourteenth- and fifteenth-magnitudes respectively, the elliptical and spiral galaxy are thought to be approximately 245 million light-years distant, and each measure 55,000 light-years in diameter. 
The 11 Canis-Minorids, also called the Beta Canis Minorids, are a meteor shower that arises near the fifth-magnitude star 11 Canis Minoris; they were discovered in 1964 by Keith Hindley, who investigated their trajectory and proposed a common origin with the comet D/1917 F1 Mellish. However, this conclusion has been refuted subsequently, as the number of orbits analysed was low and their trajectories too disparate to confirm a link. They last from 4 to 15 December, peaking over 10 and 11 December. Corona Borealis Corona Borealis is a small constellation in the Northern Celestial Hemisphere. It is one of the 48 constellations listed by the 2nd-century astronomer Ptolemy, and remains one of the 88 modern constellations. Its brightest stars form a semicircular arc. Its Latin name, inspired by its shape, means "northern crown". In classical mythology Corona Borealis generally represented the crown given by the god Dionysus to the Cretan princess Ariadne and set by him in the heavens. Other cultures likened the pattern to a circle of elders, an eagle's nest, a bear's den, or even a smokehole. Ptolemy also listed a southern counterpart, Corona Australis, with a similar pattern. The brightest star is the magnitude 2.2 Alpha Coronae Borealis. The yellow supergiant R Coronae Borealis is the prototype of a rare class of giant stars—the R Coronae Borealis variables—that are extremely hydrogen deficient, and thought to result from the merger of two white dwarfs. T Coronae Borealis, also known as the Blaze Star, is another unusual type of variable star known as a recurrent nova. Normally of magnitude 10, it last flared up to magnitude 2 in 1946. ADS 9731 and Sigma Coronae Borealis are multiple star systems with six and five components respectively. Five star systems have been found to have Jupiter-sized exoplanets. Abell 2065 is a highly concentrated galaxy cluster one billion light-years from the Solar System containing more than 400 members, and is itself part of the larger Corona Borealis Supercluster. Covering 179 square degrees and hence 0.433% of the sky, Corona Borealis ranks 73rd of the 88 modern constellations by area. Its position in the Northern Celestial Hemisphere means that the whole constellation is visible to observers north of 50°S. It is bordered by Boötes to the north and west, Serpens Caput to the south, and Hercules to the east. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is 'CrB'. The official constellation boundaries, as set by Eugène Delporte in 1930, are defined by a polygon of eight segments. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between 39.71° and 25.54°. It has a counterpart—Corona Australis—in the Southern Celestial Hemisphere. The seven stars that make up the constellation's distinctive crown-shaped pattern are all fourth-magnitude stars except for the brightest of them, Alpha Coronae Borealis. The other six stars are Theta, Beta, Gamma, Delta, Epsilon and Iota Coronae Borealis. The German cartographer Johann Bayer gave twenty stars in Corona Borealis Bayer designations from Alpha to Upsilon in his 1603 star atlas "Uranometria". Zeta Coronae Borealis was noted to be a double star by later astronomers and its components designated Zeta¹ and Zeta². John Flamsteed did likewise with Nu Coronae Borealis; classed by Bayer as a single star, it was noted to be two close stars by Flamsteed.
He named them 20 and 21 Coronae Borealis in his catalogue, alongside the designations Nu¹ and Nu² respectively. Chinese astronomers deemed nine stars to make up the asterism, adding Pi and Rho Coronae Borealis. Within the constellation's borders, there are 37 stars brighter than or equal to apparent magnitude 6.5. Alpha Coronae Borealis (officially named Alphecca by the IAU, but sometimes also known as Gemma) appears as a blue-white star of magnitude 2.2. In fact, it is an Algol-type eclipsing binary that varies by 0.1 magnitude with a period of 17.4 days. The primary is a white main-sequence star of spectral type A0V that is 2.91 times the mass of the Sun and 57 times as luminous, and is surrounded by a debris disk out to a radius of around 60 astronomical units (AU). The secondary companion is a yellow main-sequence star of spectral type G5V that is a little smaller than the Sun, at 0.9 times its diameter. Lying 75±0.5 light-years from Earth, Alphecca is believed to be a member of the Ursa Major Moving Group of stars that have a common motion through space. Located 112±3 light-years away, Beta Coronae Borealis or Nusakan is a spectroscopic binary system whose two components are separated by 10 AU and orbit each other every 10.5 years. The brighter component is a rapidly oscillating Ap star, pulsating with a period of 16.2 minutes. Of spectral type A5V with a surface temperature of around 7980 K, it has around , 2.6 solar radii, and . The smaller star is of spectral type F2V with a surface temperature of around 6750 K, and has around , , and between 4 and . Near Nusakan is Theta Coronae Borealis, a binary system that shines with a combined magnitude of 4.13, located 380±20 light-years distant. The brighter component, Theta Coronae Borealis A, is a blue-white star that spins extremely rapidly—at a rate of around 393 km per second. A Be star, it is surrounded by a debris disk. Flanking Alpha to the east is Gamma Coronae Borealis, yet another binary star system, whose components orbit each other every 92.94 years and are roughly as far apart from each other as the Sun and Neptune. The brighter component has been classed as a Delta Scuti variable star, though this view is not universal. The components are main sequence stars of spectral types B9V and A3V. Located 170±2 light-years away, 4.06-magnitude Delta Coronae Borealis is a yellow giant star of spectral type G3.5III that is around and has swollen to . It has a surface temperature of 5180 K. For most of its existence, Delta Coronae Borealis was a blue-white main-sequence star of spectral type B before it ran out of hydrogen fuel in its core. Its luminosity and spectrum suggest it has just crossed the Hertzsprung gap, having finished burning core hydrogen and just begun burning hydrogen in a shell that surrounds the core. Zeta Coronae Borealis is a double star with two blue-white components 6.3 arcseconds apart that can be readily separated at 100x magnification. The primary is of magnitude 5.1 and the secondary is of magnitude 6.0. Nu Coronae Borealis is an optical double, whose components are a similar distance from Earth but have different radial velocities, hence are assumed to be unrelated. The primary, Nu¹ Coronae Borealis, is a red giant of spectral type M2III and magnitude 5.2, lying 640±30 light-years distant, and the secondary, Nu² Coronae Borealis, is an orange-hued giant star of spectral type K5III and magnitude 5.4, estimated to be 590±30 light-years away.
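The claim that the components of Gamma Coronae Borealis are roughly as far apart as the Sun and Neptune can be reproduced from the 92.94-year period with Kepler's third law, provided a total system mass is assumed; the sketch below adopts a combined mass of about 5 solar masses for the B9V and A3V pair, a plausible but assumed figure that is not stated in the text:

    # Illustrative sketch of Kepler's third law in solar units: a^3 = M_total * P^2,
    # with a in AU, P in years and M_total in solar masses.
    period_years = 92.94        # orbital period quoted above
    total_mass_suns = 5.0       # assumed combined mass of the B9V + A3V pair (not from the text)
    semi_major_axis_au = (total_mass_suns * period_years ** 2) ** (1.0 / 3.0)
    print(round(semi_major_axis_au))  # ~35 AU, comparable to Neptune's ~30 AU distance from the Sun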
Sigma Coronae Borealis, on the other hand, is a true multiple star system (unlike the optical pair Nu Coronae Borealis) that can be divided by small amateur telescopes. It is actually a complex system composed of two stars around as massive as the Sun that orbit each other every 1.14 days, orbited by a third Sun-like star every 726 years. The fourth and fifth components are a binary red dwarf system that is 14,000 AU distant from the other three stars. ADS 9731 is an even rarer multiple system in the constellation, composed of six stars, two of which are spectroscopic binaries. Corona Borealis is home to two remarkable variable stars. T Coronae Borealis is a cataclysmic variable star also known as the Blaze Star. Normally placid around magnitude 10—it has a minimum of 10.2 and maximum of 9.9—it brightens to magnitude 2 in a period of hours, caused by a nuclear chain reaction and the subsequent explosion. T Coronae Borealis is one of a handful of stars called recurrent novae, which include T Pyxidis and U Scorpii. An outburst of T Coronae Borealis was first recorded in 1866; its second recorded outburst was in February 1946. T Coronae Borealis is a binary star with a red-hued giant primary and a white dwarf secondary, the two stars orbiting each other over a period of approximately 8 months. R Coronae Borealis is a yellow-hued variable supergiant star, over 7000 light-years from Earth, and prototype of a class of stars known as R Coronae Borealis variables. Normally of magnitude 6, its brightness periodically drops as low as magnitude 15 and then slowly increases over the next several months. These declines in magnitude come about as dust that has been ejected from the star obscures it. Direct imaging with the Hubble Space Telescope shows extensive dust clouds out to a radius of around 2000 AU from the star, corresponding with a stream of fine dust (composed of grains 5 nm in diameter) associated with the star's stellar wind and coarser dust (composed of grains with a diameter of around 0.14 µm) ejected periodically. There are several other variables of reasonable brightness for amateur astronomers to observe, including three Mira-type long-period variables: S Coronae Borealis ranges between magnitudes 5.8 and 14.1 over a period of 360 days. Located around 1946 light-years distant, it shines with a luminosity 16,643 times that of the Sun and has a surface temperature of 3033 K. One of the reddest stars in the sky, V Coronae Borealis is a cool star with a surface temperature of 2877 K that shines with a luminosity 102,831 times that of the Sun and is a remote 8810 light-years distant from Earth. Varying between magnitudes 6.9 and 12.6 over a period of 357 days, it is located near the junction of the border of Corona Borealis with Hercules and Boötes. Located 1.5° northeast of Tau Coronae Borealis, W Coronae Borealis ranges between magnitudes 7.8 and 14.3 over a period of 238 days. Another red giant, RR Coronae Borealis is an M3-type semiregular variable star that varies between magnitudes 7.3 and 8.2 over 60.8 days. RS Coronae Borealis is yet another semiregular variable red giant, which ranges between magnitudes 8.7 and 11.6 over 332 days. It is unusual in that it is a red star with a high proper motion (greater than 50 milliarcseconds a year). Meanwhile, U Coronae Borealis is an Algol-type eclipsing binary star system whose magnitude varies between 7.66 and 8.79 over a period of 3.45 days. TY Coronae Borealis is a pulsating white dwarf (of ZZ Ceti type), which is around 70% as massive as the Sun, yet has only 1.1% of its diameter.
Discovered in 1990, UW Coronae Borealis is a low-mass X-ray binary system composed of a star less massive than the Sun and a neutron star surrounded by an accretion disk that draws material from the companion star. It varies in brightness in an unusually complex manner: the two stars orbit each other every 111 minutes, yet there is another cycle of 112.6 minutes, which corresponds to the orbit of the disk around the degenerate star. The beat period of 5.5 days indicates the time the accretion disk—which is asymmetrical—takes to precess around the star. Extrasolar planets have been confirmed in five star systems, four of which were found by the radial velocity method. The spectrum of Epsilon Coronae Borealis was analysed for seven years, from 2005 to 2012, revealing a planet around 6.7 times as massive as Jupiter orbiting every 418 days at an average distance of around 1.3 AU. Epsilon itself is an orange giant of spectral type K2III that has swollen to and . Kappa Coronae Borealis is a spectral type K1IV orange subgiant nearly twice as massive as the Sun; around it lie a dust debris disk and one planet with a period of 3.4 years. This planet's mass is estimated at . The dimensions of the debris disk indicate it is likely there is a second substellar companion. Omicron Coronae Borealis is a K-type clump giant with one confirmed planet with a mass of that orbits every 187 days—one of the two least massive planets known around clump giants. HD 145457 is an orange giant of spectral type K0III found to have one planet of . Discovered by the Doppler method in 2010, it takes 176 days to complete an orbit. XO-1 is a magnitude 11 yellow main-sequence star located approximately light-years away, of spectral type G1V with a mass and radius similar to the Sun. In 2006 the hot Jupiter exoplanet XO-1b was discovered orbiting XO-1 by the transit method using the XO Telescope. Roughly the size of Jupiter, it completes an orbit around its star every three days. The discovery of a Jupiter-sized planetary companion was announced in 1997 via analysis of the radial velocity of Rho Coronae Borealis, a yellow main-sequence star and solar analog of spectral type G0V, around 57 light-years distant from Earth. More accurate measurement of data from the Hipparcos satellite subsequently showed it instead to be a low-mass star somewhere between 100 and 200 times the mass of Jupiter. Possible stable planetary orbits in the habitable zone were calculated for the binary star Eta Coronae Borealis, which is composed of two stars—yellow main sequence stars of spectral type G1V and G3V respectively—similar in mass and spectrum to the Sun. No planet has been found, but a brown dwarf companion about 63 times as massive as Jupiter with a spectral type of L8 was discovered at a distance of 3640 AU from the pair in 2001. Corona Borealis contains few galaxies observable with amateur telescopes. NGC 6085 and 6086 are a faint spiral and elliptical galaxy respectively, close enough to each other to be seen in the same visual field through a telescope. Abell 2142 is a huge (six million light-year diameter), X-ray luminous galaxy cluster that is the result of an ongoing merger between two galaxy clusters. It has a redshift of 0.0909 (meaning it is moving away from us at 27,250 km/s) and a visual magnitude of 16.0. It is about 1.2 billion light-years away. Another galaxy cluster in the constellation, RX J1532.9+3021, is approximately 3.9 billion light-years from Earth.
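Two of the figures in this paragraph can be reproduced with simple arithmetic: the 5.5-day beat period of UW Coronae Borealis follows from its two nearby periods (1/P_beat = 1/P_orbit − 1/P_disk), and Abell 2142's quoted 27,250 km/s follows from the low-redshift approximation v ≈ cz. The sketch below is a rough check rather than anything taken from the cited sources:

    # Rough, illustrative checks using only figures quoted above.
    p_orbit_min, p_disk_min = 111.0, 112.6   # UW CrB: orbital period and disk-related period (minutes)
    beat_period_days = 1.0 / (1.0 / p_orbit_min - 1.0 / p_disk_min) / (60 * 24)
    print(round(beat_period_days, 1))        # ~5.4 days, close to the quoted beat period of 5.5 days

    c_km_s = 299792.458                      # speed of light in km/s
    print(round(c_km_s * 0.0909))            # ~27,250 km/s for Abell 2142's redshift of 0.0909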
At the center of RX J1532.9+3021 is a large elliptical galaxy containing one of the most massive and most powerful supermassive black holes yet discovered. Abell 2065 is a highly concentrated galaxy cluster containing more than 400 members, the brightest of which are 16th magnitude; the cluster is more than one billion light-years from Earth. On a larger scale still, Abell 2065, along with Abell 2061, Abell 2067, Abell 2079, Abell 2089, and Abell 2092, make up the Corona Borealis Supercluster. Another galaxy cluster, Abell 2162, is a member of the Hercules Superclusters. In Greek mythology, Corona Borealis was linked to the legend of Theseus and the Minotaur. It was generally considered to represent a crown given by Dionysus to Ariadne, the daughter of Minos of Crete, after she had been abandoned by the Athenian prince Theseus. When she wore the crown at her marriage to Dionysus, he placed it in the heavens to commemorate their wedding. An alternate version has the besotted Dionysus give the crown to Ariadne, who in turn gives it to Theseus after he arrives in Crete to kill the Minotaur, which the Cretans had demanded tribute from Athens to feed. The hero uses the crown's light to escape the labyrinth after disposing of the creature, and Dionysus later sets it in the heavens. The Latin author Hyginus linked it to a crown or wreath worn by Bacchus (Dionysus) to disguise his appearance when first approaching Mount Olympus and revealing himself to the gods, having been previously hidden as yet another child of Jupiter's trysts with a mortal, in this case Semele. Corona Borealis was one of the 48 constellations mentioned in the "Almagest" of classical astronomer Ptolemy. In Welsh mythology, it was called Caer Arianrhod, "the Castle of the Silver Circle", and was the heavenly abode of the Lady Arianrhod. To the ancient Balts, Corona Borealis was known as "Darželis", the "flower garden". The Arabs called the constellation Alphecca (a name later given to Alpha Coronae Borealis), which means "separated" or "broken up", a reference to the resemblance of the stars of Corona Borealis to a loose string of jewels. This was also interpreted as a broken dish. Among the Bedouins, the constellation was known as "the dish/bowl of the poor people". The Skidi, a Native American people, saw the stars of Corona Borealis as representing a council of stars whose chief was Polaris. The constellation also symbolised the smokehole over a fireplace, which conveyed their messages to the gods, as well as how chiefs should come together to consider matters of importance. The Shawnee people saw the stars as the "Heavenly Sisters", who descended from the sky every night to dance on earth. Alphecca signifies the youngest and most comely sister, who was seized by a hunter who transformed into a field mouse to get close to her. They married, though she later returned to the sky, with her heartbroken husband and son following later. The Mi'kmaq of eastern Canada saw Corona Borealis as "Mskegwǒm", the den of the celestial bear (Alpha, Beta, Gamma and Delta Ursae Majoris). Polynesian peoples often recognized Corona Borealis; the people of the Tuamotus named it "Na Kaua-ki-tokerau" and probably "Te Hetu". The constellation was likely called "Kaua-mea" in Hawaii, "Rangawhenua" in New Zealand, and "Te Wale-o-Awitu" in the Cook Islands atoll of Pukapuka. Its name in Tonga was uncertain; it was either called "Ao-o-Uvea" or "Kau-kupenga".
In Australian Aboriginal astronomy, the constellation is called "womera" ("the boomerang") due to the shape of the stars. The Wailwun people of northwestern New South Wales saw Corona Borealis as "mullion wollai", meaning "eagle's nest", with Altair and Vega—each called "mullion"—the pair of eagles accompanying it. The Wardaman people of northern Australia held the constellation to be a gathering point where Men's Law, Women's Law and the Law of both sexes come together to consider matters of existence. Corona Borealis was renamed Corona Firmiana in honour of the Archbishop of Salzburg in the 1730 Atlas "Mercurii Philosophicii Firmamentum Firminianum Descriptionem" by Corbinianus Thomas, but this was not taken up by subsequent cartographers. The constellation was featured as a main plot ingredient in the short story "Hypnos" by H. P. Lovecraft, published in 1923; it is the object of fear of one of the protagonists in the story. The Finnish band Cadacross released an album titled "Corona Borealis" in 2002. Corona Australis Corona Australis is a constellation in the Southern Celestial Hemisphere. Its Latin name means "southern crown", and it is the southern counterpart of Corona Borealis, the northern crown. It is one of the 48 constellations listed by the 2nd-century astronomer Ptolemy, and it remains one of the 88 modern constellations. The Ancient Greeks saw Corona Australis as a wreath rather than a crown and associated it with Sagittarius or Centaurus. Other cultures have likened the pattern to a turtle, an ostrich nest, a tent, or even a hut belonging to a rock hyrax. Although fainter than its northern counterpart, the oval- or horseshoe-shaped pattern of its brighter stars renders it distinctive. Alpha and Beta Coronae Australis are the two brightest stars with an apparent magnitude of around 4.1. Epsilon Coronae Australis is the brightest example of a W Ursae Majoris variable in the southern sky. Lying alongside the Milky Way, Corona Australis contains one of the closest star-forming regions to the Solar System—a dusty dark nebula known as the Corona Australis Molecular Cloud, lying about 430 light years away. Within it are stars at the earliest stages of their lifespan. The variable stars R and TY Coronae Australis light up parts of the nebula, which varies in brightness accordingly. The name of the constellation was entered as "Corona Australis" when the International Astronomical Union (IAU) established the 88 modern constellations in 1922. In 1932, the name was instead recorded as "Corona Austrina" when the IAU's commission on notation approved a list of four-letter abbreviations for the constellations. The four-letter abbreviations were repealed in 1955. The IAU presently uses "Corona Australis" exclusively. Corona Australis is a small constellation bordered by Sagittarius to the north, Scorpius to the west, Telescopium to the south, and Ara to the southwest. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is 'CrA'. The official constellation boundaries, as set by Eugène Delporte in 1930, are defined by a polygon of four segments. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between −36.77° and −45.52°. Covering 128 square degrees, Corona Australis culminates at midnight around the 30th of June and ranks 80th in area.
Visible only at latitudes south of 53° north, Corona Australis cannot be seen from the British Isles as it lies too far south, but it can be seen from southern Europe and readily from the southern United States. While not a bright constellation, Corona Australis is nonetheless distinctive due to its easily identifiable pattern of stars, which has been described as horseshoe- or oval-shaped. Though it has no stars brighter than 4th magnitude, it still has 21 stars visible to the unaided eye (brighter than magnitude 5.5). Nicolas Louis de Lacaille used the Greek letters Alpha through to Lambda to label the most prominent eleven stars in the constellation, designating two stars as Eta and omitting Iota altogether. Mu Coronae Australis, a yellow star of spectral type G5.5III and apparent magnitude 5.21, was labelled by Johann Elert Bode and retained by Benjamin Gould, who deemed it bright enough to warrant naming. The only star in the constellation to have received a name is Alfecca Meridiana, or Alpha CrA. The name combines the Arabic name of the constellation with the Latin for "southern". In Arabic, "Alfecca" means "break", and refers to the shape of both Corona Australis and Corona Borealis. Also called simply "Meridiana", it is a white main sequence star located 125 light years away from Earth, with an apparent magnitude of 4.10 and spectral type A2Va. A rapidly rotating star, it spins at almost 200 km per second at its equator, making a complete revolution in around 14 hours. Like the star Vega, it has excess infrared radiation, which indicates it may be ringed by a disk of dust. It is currently a main-sequence star, but will eventually evolve into a white dwarf; for now, it has a luminosity 31 times that of the Sun, and a radius and mass of 2.3 times the Sun's. Beta Coronae Australis is an orange giant 474 light years from Earth. Its spectral type is K0II, and it is of apparent magnitude 4.11. Since its formation, it has evolved from a B-type star to a K-type star. Its luminosity class places it as a bright giant; its luminosity is 730 times that of the Sun, making it one of the highest-luminosity K0-type stars visible to the naked eye. At 100 million years old, it has a radius of 43 solar radii and a mass of between 4.5 and 5 solar masses. Alpha and Beta are so similar as to be indistinguishable in brightness to the naked eye. Some of the more prominent double stars include Gamma Coronae Australis—a pair of yellowish white stars 58 light years away from Earth, which orbit each other every 122 years. Widening since 1990, the two stars can be seen as separate with a 100 mm aperture telescope; they are separated by 1.3 arcseconds at an angle of 61 degrees. They have a combined visual magnitude of 4.2; each component is an F8V dwarf star with a magnitude of 5.01. Epsilon Coronae Australis is an eclipsing binary belonging to a class of stars known as W Ursae Majoris variables. These star systems are known as contact binaries as the component stars are so close together they touch. Varying by a quarter of a magnitude around an average apparent magnitude of 4.83 every seven hours, the star system lies 98 light years away. Its spectral type is F4VFe-0.8+. At the southern end of the crown asterism are the stars Eta¹ and Eta² Coronae Australis, which form an optical double. Of magnitude 5.1 and 5.5, they are separable with the naked eye and are both white.
Kappa Coronae Australis is an easily resolved optical double—the components are of apparent magnitudes 6.3 and 5.6 and are about 1000 and 150 light years away respectively. They appear at an angle of 359 degrees, separated by 21.6 arcseconds. Kappa² is actually the brighter of the pair and is more bluish white, with a spectral type of B9V, while Kappa¹ is of spectral type A0III. Lying 202 light years away, Lambda Coronae Australis is a double star splittable in small telescopes. The primary is a white star of spectral type A2Vn and magnitude of 5.1, while the companion star has a magnitude of 9.7. The two components are separated by 29.2 arcseconds at an angle of 214 degrees. Zeta Coronae Australis is a rapidly rotating main-sequence star with an apparent magnitude of 4.8, 221.7 light years from Earth. The star has blurred lines in its hydrogen spectrum due to its rotation. Its spectral type is B9V. Theta Coronae Australis lies further to the west, a yellow giant of spectral type G8III and apparent magnitude 4.62. Corona Australis harbours RX J1856.5-3754, an isolated neutron star that is thought to lie 140 (±40) parsecs, or 460 (±130) light years, away, with a diameter of 14 km. It was once suspected to be a strange star, but this has been discounted. In the north of the constellation is the Corona Australis Molecular Cloud, a dark molecular cloud with many embedded reflection nebulae, including NGC 6729, NGC 6726–7, and IC 4812. A star-forming region of around , it contains Herbig–Haro objects (protostars) and some very young stars. About 430 light years (130 parsecs) away, it is one of the closest star-forming regions to the Solar System. The related NGC 6726 and 6727, along with unrelated NGC 6729, were first recorded by Johann Friedrich Julius Schmidt in 1865. The Coronet cluster, about 554 light years (170 parsecs) away at the edge of the Gould Belt, is also used in studying star and protoplanetary disk formation. R Coronae Australis is an irregular variable star ranging from magnitudes 9.7 to 13.9. Blue-white, it is of spectral type B5IIIpe. A very young star, it is still accumulating interstellar material. It is obscured by, and illuminates, the surrounding nebula, NGC 6729, which brightens and darkens with it. The nebula is often compared to a comet for its appearance in a telescope, as its length is five times its width. S Coronae Australis is a G-class dwarf in the same field as R and is a T Tauri star. Nearby, another young variable star, TY Coronae Australis, illuminates another nebula: reflection nebula NGC 6726–7. TY Coronae Australis ranges irregularly between magnitudes 8.7 and 12.4, and the brightness of the nebula varies with it. Blue-white, it is of spectral type B8e. The largest young stars in the region, R, S, T, TY and VV Coronae Australis, are all ejecting jets of material which cause surrounding dust and gas to coalesce and form Herbig–Haro objects, many of which have been identified nearby. Lying adjacent to the nebulosity is the globular cluster known as NGC 6723, which is actually in the neighbouring constellation of Sagittarius and is much farther away. Near Epsilon and Gamma Coronae Australis is Bernes 157, a dark nebula and star forming region. It is a large nebula, 55 by 18 arcminutes, that possesses several stars around magnitude 13. These stars have been dimmed by up to 8 magnitudes by its dust clouds. IC 1297 is a planetary nebula of apparent magnitude 10.7, which appears as a green-hued roundish object in higher-powered amateur instruments.
The nebula surrounds the variable star RU Coronae Australis, which has an average apparent magnitude of 12.9 and is a WC class Wolf–Rayet star. IC 1297 is small, at only 7 arcseconds in diameter; it has been described as "a square with rounded edges" in the eyepiece, elongated in the north-south direction. Descriptions of its color encompass blue, blue-tinged green, and green-tinged blue. Corona Australis' location near the Milky Way means that galaxies are uncommonly seen. NGC 6768 is a magnitude 11.2 object 35′ south of IC 1297. It is made up of two galaxies merging, one of which is an elongated elliptical galaxy of classification E4 and the other a lenticular galaxy of classification S0. IC 4808 is a galaxy of apparent magnitude 12.9 located on the border of Corona Australis with the neighbouring constellation of Telescopium and 3.9 degrees west-southwest of Beta Sagittarii. However, amateur telescopes will only show a suggestion of its spiral structure. It is 1.9 arcminutes by 0.8 arcminutes. The central area of the galaxy does appear brighter in an amateur instrument, which shows it to be tilted northeast-southwest. Southeast of Theta and southwest of Eta lies the open cluster ESO 281-SC24, which is composed of the yellow 9th magnitude star GSC 7914 178 1 and five 10th to 11th magnitude stars. Halfway between Theta Coronae Australis and Theta Scorpii is the dense globular cluster NGC 6541. Described as between magnitude 6.3 and magnitude 6.6, it is visible in binoculars and small telescopes. Around 22000 light years away, it is around 100 light years in diameter. It is estimated to be around 14 billion years old. NGC 6541 appears 13.1 arcminutes in diameter and is somewhat resolvable in large amateur instruments; a 12-inch telescope reveals approximately 100 stars but the core remains unresolved. The Corona Australids are a meteor shower that takes place between 14 and 18 March each year, peaking around 16 March. This meteor shower does not have a high peak hourly rate. In 1953 and 1956, observers noted a maximum of 6 meteors per hour and 4 meteors per hour respectively; in 1955 the shower was "barely resolved". However, in 1992, astronomers detected a peak rate of 45 meteors per hour. The Corona Australids' rate varies from year to year. At only six days, the shower's duration is particularly short, and its meteoroids are small; the stream is devoid of large meteoroids. The Corona Australids were first seen with the unaided eye in 1935 and first observed with radar in 1955. Corona Australid meteors have an entry velocity of 45 kilometers per second. In 2006, a shower originating near Beta Coronae Australis was designated as the Beta Coronae Australids. They appear in May, the same month as a nearby shower known as the May Microscopids, but the two showers have different trajectories and are unlikely to be related. Corona Australis may have been recorded by ancient Mesopotamians in the MUL.APIN, as a constellation called MA.GUR ("The Bark"). However, this constellation, adjacent to SUHUR.MASH ("The Goat-Fish", modern Capricornus), may instead have been modern Epsilon Sagittarii. As a part of the southern sky, MA.GUR was one of the fifteen "stars of Ea". In the 3rd century BC, the Greek didactic poet Aratus wrote of, but did not name the constellation, instead calling the two crowns Στεφάνοι ("Stephanoi"). The Greek astronomer Ptolemy described the constellation in the 2nd century AD, though with the inclusion of Alpha Telescopii, since transferred to Telescopium. 
Ascribing 13 stars to the constellation, he named it Στεφάνος νοτιος ("Stephanos notios"), "Southern Wreath", while other authors associated it with either Sagittarius (having fallen off his head) or Centaurus; with the former, it was called "Corona Sagittarii". Similarly, the Romans called Corona Australis the "Golden Crown of Sagittarius". It was known as "Parvum Coelum" ("Canopy", "Little Sky") in the 5th century. The 18th-century French astronomer Jérôme Lalande gave it the names "Sertum Australe" ("Southern Garland") and "Orbiculus Capitis", while German poet and author Philippus Caesius called it "Corolla" ("Little Crown") or "Spira Australis" ("Southern Coil"), and linked it with the Crown of Eternal Life from the New Testament. Seventeenth-century celestial cartographer Julius Schiller linked it to the Diadem of Solomon. Sometimes, Corona Australis was not the wreath of Sagittarius but arrows held in his hand. Corona Australis has been associated with the myth of Bacchus and Stimula. Jupiter had impregnated Stimula, causing Juno to become jealous. Juno convinced Stimula to ask Jupiter to appear in his full splendor, which the mortal woman could not handle, causing her to burn. After Bacchus, Stimula's unborn child, became an adult and the god of wine, he honored his deceased mother by placing a wreath in the sky. In Chinese astronomy, the stars of Corona Australis are located within the Black Tortoise of the North (北方玄武, "Běi Fāng Xuán Wǔ"). The constellation itself was known as "ti'en pieh" ("Heavenly Turtle") and during the Western Zhou period, marked the beginning of winter. However, precession over time has meant that the "Heavenly River" (Milky Way) became the more accurate marker to the ancient Chinese and hence supplanted the turtle in this role. Arabic names for Corona Australis include "Al Ķubbah" "the Tortoise", "Al Ĥibā" "the Tent" or "Al Udḥā al Na'ām" "the Ostrich Nest". It was later given the name "Al Iklīl al Janūbiyyah", which the European authors Chilmead, Riccioli and Caesius transliterated as Alachil Elgenubi, Elkleil Elgenubi and Aladil Algenubi respectively. The ǀXam speaking San people of South Africa knew the constellation as "≠nabbe ta !nu" "house of branches"—owned originally by the Dassie (rock hyrax), and the star pattern depicting people sitting in a semicircle around a fire. The indigenous Boorong people of northwestern Victoria saw it as "Won", a boomerang thrown by "Totyarguil" (Altair). The Aranda people of Central Australia saw Corona Australis as a coolamon carrying a baby, which was accidentally dropped to earth by a group of sky-women dancing in the Milky Way. The impact of the coolamon created Gosses Bluff crater, 175 km west of Alice Springs. The Torres Strait Islanders saw Corona Australis as part of a larger constellation encompassing part of Sagittarius and the tip of Scorpius's tail; the Pleiades and Orion were also associated. This constellation was Tagai's canoe, crewed by the Pleiades, called the "Usiam", and Orion, called the "Seg". The myth of Tagai says that he was in charge of this canoe, but his crewmen consumed all of the supplies onboard without asking permission. Enraged, Tagai bound the Usiam with a rope and tied them to the side of the boat, then threw them overboard. Scorpius's tail represents a suckerfish, while Eta Sagittarii and Theta Coronae Australis mark the bottom of the canoe. On the island of Futuna, the figure of Corona Australis was called "Tanuma" and in the Tuamotus, it was called "Na Kaua-ki-Tonga". 
Caelum Caelum is a faint constellation in the southern sky, introduced in the 1750s by Nicolas Louis de Lacaille and counted among the 88 modern constellations. Its name means "chisel" in Latin, and it was formerly known as Caelum Scalptorium ("the engravers' chisel"); it is a rare word, unrelated to the far more common Latin "caelum", meaning "sky, heaven, atmosphere". It is the eighth-smallest constellation, and subtends a solid angle of around 0.038 steradians, just less than that of Corona Australis. Due to its small size and location away from the plane of the Milky Way, Caelum is a rather barren constellation, with few objects of interest. The constellation's brightest star, Alpha Caeli, is only of magnitude 4.45, and only one other star, (Gamma) γ Caeli, is brighter than magnitude 5. Other notable objects in Caelum are RR Caeli, a binary star with one known planet approximately away; X Caeli, a Delta Scuti variable that forms an optical double with γ Caeli; and HE0450-2958, a Seyfert galaxy that at first appeared as just a jet, with no host galaxy visible. Caelum was first introduced in the eighteenth century by Nicolas Louis de Lacaille, a French astronomer who introduced thirteen other southern constellations at the same time. Lacaille gave the constellation the French name "Burin", which was originally Latinized to "Caelum Scalptorium" ("The Engravers' Chisel"). Francis Baily shortened this name to "Caelum", as suggested by John Herschel. In Lacaille's original chart, the constellation was shown both as a burin and an échoppe, although it has come to be recognized simply as a chisel. Johann Elert Bode stated the name as plural with a singular possessor, "Caela Scalptoris" – in German "die Grabstichel" ("the Engraver's Chisels") – but this did not stick. Caelum is bordered by Dorado and Pictor to the south, Horologium and Eridanus to the east, Lepus to the north, and Columba to the west. Covering only 125 square degrees, it ranks 81st of the 88 modern constellations in size. It appears prominently in the southern sky during the Southern Hemisphere's summer, and the whole constellation is visible for at least part of the year to observers south of latitude 41°N. Its main asterism consists of four stars, and twenty stars in total are brighter than magnitude 6.5. The constellation's boundaries, as set by Eugène Delporte in 1930, are defined by a 12-sided polygon. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between and . The International Astronomical Union (IAU) adopted the three-letter abbreviation "Cae" for the constellation in 1922. Caelum is a faint constellation: it has no star brighter than magnitude 4 and only two stars brighter than magnitude 5. Lacaille gave six stars Bayer designations, labeling them Alpha (α) to Zeta (ζ) in 1756, but omitted Epsilon (ε) and designated two adjacent stars as Gamma (γ). Bode extended the designations to Rho (ρ) for other stars, but most of these have fallen out of use. Caelum is too far south for any of its stars to bear Flamsteed designations. The brightest star, (Alpha) α Caeli, is a double star, containing an F-type main-sequence star of magnitude 4.45 and a red dwarf of magnitude 12.5, from Earth. (Beta) β Caeli, another F-type star of magnitude 5.05, is further away, being located from Earth. Unlike α, β Caeli is a subgiant star, slightly evolved from the main sequence.
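The quoted solid angle of about 0.038 steradians is consistent with Caelum's 125 square degrees, since one square degree is (π/180)² steradians; the same conversion also squares with the earlier statements that Corona Australis covers slightly more sky (128 square degrees) and that Corona Borealis's 179 square degrees amount to roughly 0.43% of the celestial sphere (taken here as about 41,253 square degrees). A quick, purely illustrative check:

    import math

    def square_degrees_to_steradians(area_sq_deg):
        # One square degree equals (pi/180)^2 steradians.
        return area_sq_deg * (math.pi / 180.0) ** 2

    print(round(square_degrees_to_steradians(125), 3))   # Caelum: ~0.038 sr
    print(round(square_degrees_to_steradians(128), 3))   # Corona Australis: ~0.039 sr, slightly larger
    print(round(179 / 41253 * 100, 3))                   # Corona Borealis: ~0.434% of the sky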
(Delta) δ Caeli, which like β Caeli is of magnitude 5.05, is a B-type subgiant and is much farther from Earth, at . (Gamma) γ Caeli is a double star with a red giant primary of magnitude 4.58 and a secondary of magnitude 8.1. The primary is from Earth. The two components are difficult to resolve with small amateur telescopes because of their difference in visual magnitude and their close separation. This star system forms an optical double with the unrelated X Caeli (previously named γ Caeli), a Delta Scuti variable located from Earth. These are a class of short-period (six hours at most) pulsating stars that have been used as standard candles and as subjects to study astroseismology. X Caeli itself is also a binary star, specifically a contact binary, meaning that the stars are so close that they share envelopes. The only other variable star in Caelum visible to the naked eye is RV Caeli, a pulsating red giant of spectral type M1III, which varies between magnitudes 6.44 and 6.56. Three other stars in Caelum are still occasionally referred to by their Bayer designations, although they are only on the edge of naked-eye visibility. (Nu) ν Caeli is another double star, containing a white giant of magnitude 6.07 and a star of magnitude 10.66, with unknown spectral type. The system is approximately away. (Lambda) λ Caeli, at magnitude 6.24, is much redder and farther away, being a red giant around from Earth. (Zeta) ζ Caeli is even fainter, being only of magnitude 6.36. This star, located away, is a K-type subgiant of spectral type K1. The other twelve naked-eye stars in Caelum, including RV Caeli, are no longer referred to by Bode's Bayer designations. One of the nearest stars in Caelum is the eclipsing binary star RR Caeli, at a distance of . This star system consists of a dim red dwarf and a white dwarf. Despite its closeness to the Earth, the system's apparent magnitude is only 14.40 due to the faintness of its components, and thus it cannot be easily seen with amateur equipment. In 2012, the system was found to contain a giant planet, and there is evidence for a second substellar body. The system is a post-common-envelope binary and is losing angular momentum over time, which will eventually cause mass transfer from the red dwarf to the white dwarf. In approximately 9–20 billion years, this will cause the system to become a cataclysmic variable. Due to its small size and location away from the plane of the Milky Way, Caelum is rather devoid of deep-sky objects, and contains no Messier objects. The only deep-sky object in Caelum to receive much attention is HE0450-2958, an unusual Seyfert galaxy. Originally, the jet's host galaxy proved elusive, and the jet appeared to be emanating from nothing. Although it has been suggested that the object is an ejected supermassive black hole, the host is now agreed to be a small galaxy that is difficult to see due to light from the jet and a nearby starburst galaxy. Carolina Panthers The Carolina Panthers are a professional American football team based in Charlotte, North Carolina. The Panthers compete in the National Football League (NFL) as a member club of the league's National Football Conference (NFC) South division. The team is headquartered at Bank of America Stadium in uptown Charlotte, which also serves as the team's home field. They are one of the few NFL teams to own the stadium they play in, which is legally registered as Panthers Stadium, LLC.
The Panthers are supported throughout the Carolinas; although the team has played its home games in Charlotte since 1996, it played home games at Memorial Stadium in Clemson, South Carolina during its first season. The team hosts its annual training camp at Wofford College in Spartanburg, South Carolina. The head coach is Ron Rivera. The Panthers were announced as the league's 29th franchise in 1993, and began play in 1995 under original owner and founder Jerry Richardson. The Panthers played well in their first two years, finishing in 1995 (an all-time best for an NFL expansion team's first season) and 12–4 the following year, winning the NFC West before ultimately losing to the eventual Super Bowl champion Green Bay Packers in the NFC Championship Game. They did not have another winning season until 2003, when they won the NFC Championship Game and reached Super Bowl XXXVIII, losing 32–29 to the New England Patriots. After recording playoff appearances in 2005 and 2008, the team failed to record another playoff appearance until 2013, the first of three consecutive NFC South titles. After losing in the divisional round to the San Francisco 49ers in 2013 and the Seattle Seahawks in 2014, the Panthers returned to the Super Bowl in 2015, but lost to the Denver Broncos. The Panthers have reached the playoffs seven times, advancing to four NFC Championship Games and two Super Bowls. They have won six division titles, one in the NFC West and five in the NFC South. The Carolina Panthers are legally registered as Panther Football, LLC and are controlled by David Tepper, whose purchase of the team from founder Jerry Richardson was unanimously approved by league owners on May 22, 2018. The club is worth approximately US$1.56 billion, according to "Forbes". On December 15, 1987, entrepreneur Jerry Richardson announced his bid for an NFL expansion franchise in the Carolinas. A North Carolina native, Richardson was a former wide receiver for the Baltimore Colts who had used his 1959 league championship bonus to co-found the Hardee's restaurant chain, later becoming president and CEO of TW Services. Richardson drew his inspiration to pursue an NFL franchise from George Shinn, who had made a successful bid for an expansion National Basketball Association (NBA) team in Charlotte, the Charlotte Hornets. Richardson founded Richardson Sports, a partnership consisting of himself and his family; a number of businessmen from North and South Carolina were also recruited as limited partners. Richardson looked at four potential locations for a stadium, ultimately choosing uptown Charlotte. In choosing the team name, the Richardsons did not run focus groups with potential fans. Their intention had always been the 'Panthers'; Jerry Richardson began driving a car with the license plate 'PNTHRS' near the end of 1989. To highlight the demand for professional football in the Carolinas, Richardson Sports held preseason games around the area from 1989 to 1991. The first two games were held at Carter–Finley Stadium in Raleigh, North Carolina, and Kenan Memorial Stadium in Chapel Hill, North Carolina, while the third and final game was held at Williams-Brice Stadium in Columbia, South Carolina. The matchups were between existing NFL teams. In 1991, the group formally filed an application for the open expansion spot, and on October 26, 1993, the 28 NFL owners unanimously named the Carolina Panthers as the 29th member of the NFL.
The Panthers first competed in the 1995 NFL season; they were one of two expansion teams to begin play that year, the other being the Jacksonville Jaguars. The Panthers were put in the NFC West to increase the size of that division to five teams; there were already two other southeastern teams in the division, the Atlanta Falcons and the New Orleans Saints. Former Pittsburgh Steelers defensive coordinator Dom Capers was named the first head coach. The team finished its inaugural season , the best performance ever from a first-year expansion team. They performed even better in their second season, finishing with a record and winning the NFC West division, as well as securing a first-round bye. The Panthers beat the defending Super Bowl champions Dallas Cowboys in the divisional round before losing the NFC Championship Game to the eventual Super Bowl champions, the Green Bay Packers. The team managed only a finish in 1997 and slipped to in 1998, leading to Capers' dismissal as head coach. The Panthers hired former San Francisco 49ers head coach George Seifert to replace Capers, and he led the team to an record in 1999. The team finished in 2000 and fell to in 2001, winning their first game but losing their last 15. This performance tied the NFL record for most losses in a single season and it broke the record held by the winless 1976 Buccaneers for most consecutive losses in a single season (both records have since been broken by the 2008 Lions), leading the Panthers to fire Seifert. After the NFL's expansion to 32 teams in 2002, the Panthers were relocated from the NFC West to the newly created NFC South division. The Panthers' rivalries with the Falcons and Saints were maintained, and they would be joined by the Tampa Bay Buccaneers. New York Giants defensive coordinator John Fox was hired to replace Seifert and led the team to a finish in 2002. Although the team's defense gave up very few yards, ranking the second-best in the NFL in yards conceded, they were hindered by an offense that ranked as the second-worst in the league in yards gained. The Panthers improved to in the 2003 regular season, winning the NFC South and making it to Super Bowl XXXVIII before losing to the New England Patriots, 32–29, in what was immediately hailed by sportswriter Peter King as the "Greatest Super Bowl of all time". King felt the game "was a wonderful championship battle, full of everything that makes football dramatic, draining, enervating, maddening, fantastic, exciting" and praised, among other things, the unpredictability, coaching, and conclusion. The game is still viewed as one of the best Super Bowls of all time, and in the opinion of Charlotte-based NPR reporter Scott Jagow, the Panthers' Super Bowl appearance represented the arrival of Charlotte onto the national scene. Following a start in 2004, the Panthers rebounded to win six of their last seven games despite losing 14 players for the season due to injury. They lost their last game to New Orleans, finishing the 2004 season at . Had they won the game, the Panthers would have made the playoffs. The team improved to in 2005, finishing second in the division behind Tampa Bay and clinching a playoff berth as a wild-card. In the first round of the playoffs, the Panthers went on the road to face the New York Giants, beating them 23–0 for the NFL's first playoff shutout against a home team since 1980. 
The following week, they beat Chicago 29–21 on the road, but lost key players Julius Peppers, a defensive end, and DeShaun Foster, a running back, who were both injured during the game. The Panthers were then defeated 34–14 by the Seattle Seahawks in the NFC Championship Game, ending their season. Although the Panthers went into the 2006 season as favorites to win the NFC South, they finished with a disappointing record. The team finished the 2007 season with a record after losing quarterback Jake Delhomme early in the season due to an elbow injury. In 2008, the Panthers rebounded with a regular season record, winning the NFC South and securing a first-round bye. They were eliminated in the divisional round of the playoffs, losing 33–13 to the eventual NFC Champion Arizona Cardinals after Delhomme turned the ball over six times. Delhomme's struggles carried over into the 2009 season, where he threw 18 interceptions in the first 11 games before breaking a finger in his throwing hand. The Panthers were at a record before Delhomme's season-ending injury, and his backup, Matt Moore, led the team to a finish to the season for an overall record. In 2010, after releasing Delhomme in the offseason, the Panthers finished with a league-worst () record; their offense was the worst in the league. John Fox's contract expired after the season ended, and the team did not retain him or his staff. The team hired Ron Rivera to replace Fox as head coach and drafted Auburn's Heisman Trophy-winning quarterback Cam Newton with the first overall pick in the 2011 NFL Draft. The Panthers opened the 2011 season , but finished with a record, and Newton was awarded the AP Offensive Rookie of the Year award after setting the NFL record for most rushing touchdowns from a quarterback (14) in a single season and becoming the first rookie NFL quarterback to throw for over 4,000 yards in a single season. He also was the first rookie quarterback to rush for over 500 yards in a single season. In 2012, the Panthers again opened the season poorly, losing five out of their first six games, leading longtime general manager Marty Hurney to be fired in response. The team slid to a record before winning five of their last six games, resulting in a record. This strong finish helped save Rivera's job. The Panthers had a winning season the following year, finishing with a record and winning their third NFC South title and another playoff bye, but they were beaten by the 49ers in the Divisional Round. In 2014, the Panthers opened the season with two wins, but after 12 games sat at due in part to a seven-game winless streak. A four-game winning streak to end the season secured the team their second consecutive NFC South championship and playoff berth, despite a losing record of . The Panthers defeated the Arizona Cardinals, 27–16, in the wild card round to advance to the divisional playoffs, where they lost to eventual NFC champion Seattle, 31–17. The 2015 season saw the Panthers start the season and finish the season , which tied for the best regular-season record in NFC history. The Panthers also secured their third consecutive NFC South championship, as well as their first overall top-seeded playoff berth. 
In the 2015–16 playoffs, the Panthers defeated the Seattle Seahawks in the NFC Divisional playoffs, 31–24, after shutting them out in the first half, 31–0, and the Arizona Cardinals, 49–15 (the highest score in NFC Championship history), in the NFC Championship Game to advance to Super Bowl 50, their first Super Bowl appearance since the 2003 season. The Panthers lost a defensive struggle to the AFC Champion Denver Broncos, 24–10. In the 2016 season, the Panthers regressed from their 15–1 record of 2015, posting a 6–10 record and a last-place finish in the NFC South, missing the playoffs for the first time since 2012, and losing the division title to the second-seeded Falcons, who went on to represent the NFC in Super Bowl LI. In the 2017 season, the Panthers finished with an 11–5 record and the #5 seed, and lost to the New Orleans Saints 31–26 in the Wild Card Round, their first loss in that round in franchise history. On May 16, 2018, David Tepper, formerly a minority owner of the Pittsburgh Steelers, finalized an agreement to purchase the Panthers. The sale price was nearly $2.3 billion, a record. The agreement was approved by the league owners on May 22, 2018. The sale officially closed on July 9, 2018. The shape of the Panthers logo was designed to mimic the outline of both North Carolina and South Carolina. The Panthers changed their logo and logotype in 2012, the first such change in team history. According to the team, the changes were designed to give their logo an "aggressive, contemporary look" as well as to give it a more three-dimensional feel. The primary tweaks were made in the eye and mouth, where the features, particularly the muscular brow and fangs, are more pronounced, creating a more menacing look. The revised logo has a darker shade of blue over the black logo, compared to the old design, which had teal on top of black. By the time they had been announced as the 29th NFL team in October 1993, the Panthers' logo and helmet design had already been finalized, but the uniform design was still in development. After discussion, the Panthers organization decided on jerseys colored white, black, and blue, and pants colored white and silver. The exact tone of blue, which they decided would be "process blue" (a shade lighter than Duke's and darker than North Carolina's), was the most difficult color to choose. The team's uniform has remained largely the same since its creation, with only minor alterations such as changing the sock color of the team's black uniforms from blue to black and changing the team's shoes from white to black. Richardson, a self-described traditionalist, said that no major uniform changes would be made in his lifetime. The Panthers have three main jersey colors: black, white, and blue. Their blue jerseys, designated their alternate uniforms, are the newest and were introduced in 2002. NFL regulations allow the team to use the blue jersey up to two times in any given season. In all other games, the team must wear either their white or black jerseys; in NFL games, the home team decides whether to wear a dark or white jersey, while the away team wears the opposite. Usually the Panthers opt for white or blue when the weather is expected to be hot and for black when the weather is expected to be cold. The Panthers typically pair their white jerseys with white pants, while the black and blue jerseys are paired with silver pants; there have only been a few exceptions to these combinations.
The first such instance was in 1998, when the team paired their white jerseys with silver pants in a game against the Indianapolis Colts. The second instance was in 2012 during a game against the Denver Broncos, when they paired their black jerseys with new black pants; this created an all-black uniform, with the exception of blue socks and silver helmets. The decision to wear blue socks was made by team captain Steve Smith, who felt the blue socks gave the uniforms a more distinct appearance compared with other teams that have all-black uniforms. The all-black uniforms won the "Greatest Uniform in NFL History" contest, a fan-voted contest run by NFL.com in July 2013. In July 2013, the team's equipment manager, Jackie Miles, said the Panthers intended to use the all-black uniform more in the future. The Panthers wore the all-black uniform three times the following season, once each in the preseason and regular season, and the third time during the home divisional round playoff game vs the 49ers. During the Panthers' 2015 Thanksgiving Day game against the Dallas Cowboys, they debuted an all-blue uniform as part of Nike's "Color Rush" series. The team's uniform did not change significantly after Nike became the NFL's jersey supplier in 2012, but the collar was altered to honor former Panthers player and coach Sam Mills by featuring the phrase "Keep Pounding". Nike had conceived the idea, and the team supported the concept as a way to expose newer fans to the legacy of Mills, who died of cancer in 2005. Mills had introduced the phrase, which has since become a team slogan, in a speech that he gave to the players and coaches prior to their 2003 playoff game against Dallas; in the speech, Mills compared his fight against cancer with the team's on-field battle, saying "When I found out I had cancer, there were two things I could do – quit or keep pounding. I'm a fighter. I kept pounding. You're fighters, too. Keep pounding!" The Panthers played their first season at Memorial Stadium in Clemson, South Carolina, as their facility in uptown Charlotte was still under construction. Ericsson Stadium, called Bank of America Stadium since 2004, opened in the summer of 1996. Bank of America Stadium is owned entirely by the Panthers, making them one of the few teams in the NFL to own the facility they play in. The stadium was specially designed by HOK Sports Facilities Group for football and also serves as the headquarters and administrative offices of the Panthers. On some days the stadium offers public tours for a fee. Private tours for groups are offered for a fee seven days a week, though there are some exceptions, and such tours must be arranged in advance. Two bronze panther statues flank each of the stadium's three main entrances; they are the largest sculptures ever commissioned in the United States. The names of the team's original PSL owners are engraved on the base of each statue. The two people in the Panthers Hall of Honor, team executive Mike McCormack and linebacker Sam Mills, are honored with life-sized bronze statues outside the stadium. Mills, in addition to being the only player in the Hall of Honor, is the only player to have had his jersey number (#51) retired by the Panthers as of 2016. The Panthers have three open-air fields next to Bank of America Stadium where they currently hold their practices; during the 1995 season, when the team played their home games in South Carolina, the team held their practices at Winthrop University in Rock Hill, South Carolina. 
Because the practice fields, along with the stadium, are located in uptown Charlotte, the fields are directly visible from skyscrapers as well as from a four-story condominium located across the street. According to Mike Cranston, a running joke held that the Panthers' division rivals had pooled their resources to purchase a room on the building's top floor, and that a fire at the condominium had been caused by the Panthers organization. To prevent onlookers from seeing the field while the team is practicing, the team has added "strategically planted trees and a tarp over the ... fence surrounding the fields". Additionally, they employ a security team to watch for and chase away any people who stop alongside the fence surrounding the field. In the event of bad weather, the team moves their practices to an indoor sports facility about 10 miles from the stadium. The team does not own this facility. The Panthers have hosted their annual training camp at Wofford College in Spartanburg, South Carolina, since 1995. The Panthers are supported in both North Carolina and South Carolina; South Carolina Governor Nikki Haley declared July 30, 2012, "Carolina Panthers Day" in her state, saying that "when it comes to professional teams, the Carolina Panthers are the team that South Carolina calls their own". During the 2016 NFC Championship and Super Bowl, the hashtag #OneCarolina was used by college and professional sports teams from North Carolina and South Carolina to show unified support for the Panthers. Pat Yasinskas of ESPN.com observed that while there is "a bit of a wine-and-cheese atmosphere at Panthers games ... there is a strong core of die-hard fans who bring energy to Bank of America Stadium. Charlotte lives and dies with the Panthers because there aren't a lot of other options in the sports world". "Sports Illustrated" graded the Panthers as having the 10th highest "NFL Fan Value Experience" in 2007, attributing much of the fan atmosphere to the team's newness when compared to the established basketball fanbase. They also observed that the stadium has scattered parking lots, each of which has a different tailgating style. Some have fried chicken, pork, or Carolina-style barbecue, while others have live bands and televisions. Pickup football games in the parking lots are common, but fans tend to "behave themselves", in part due to blue laws that prevent the sale of alcohol before noon on Sundays. The Carolina Panthers have sold out all home games since December 2002, and their home attendance has ranked in the NFL's top ten since 2006. Sir Purr, an anthropomorphic black cat who wears a jersey numbered '00', has been the Panthers' mascot since their first season. During games, Sir Purr provides sideline entertainment through skits and "silly antics". The mascot participates in a number of community events year-round, including a monthly visit to the patients at Levine Children's Hospital. Sir Purr also hosts the annual Mascot Bowl, an event which pits pro and college mascots against each other during halftime at a selected Panthers home game. The team's cheerleaders are the Carolina TopCats, a group of 24 women who lead cheers and entertain fans at home games. The TopCats participate in both corporate and charity events. The team's drumline is PurrCussion, an ensemble of snare, tenor, and bass drummers as well as cymbal players. PurrCussion performs for fans outside the stadium and introduces players prior to home games; it consists of drummers from across the Carolinas.
Starting with the 2012 season, the Panthers introduced the Keep Pounding Drum, inspired by the aforementioned motivational speech by Sam Mills before the team's 2003 playoff game against the Cowboys. Prior to each home game, an honorary drummer hits the six-foot-tall drum four times to signify the four quarters of an American football game. According to the team, the drummers "come from a variety of backgrounds and occupations, but all have overcome a great trial or adversity that has not only made them strong but also pushes them to make others around them stronger". Drummers have included current and former Panthers players, military veterans, Make-A-Wish children, and athletes from other sports, including NBA MVP and Charlotte native Stephen Curry, US women's national soccer team players Whitney Engen and Heather O'Reilly, and seven-time NASCAR Cup Series champion Jimmie Johnson. During the Panthers' inaugural season, the team played its official fight song, "Stand and Cheer", before each home game. The song was pulled in 1999 due to negative fan reaction and returned in 2006; it remains the team's official fight song, though the team does not typically play it before home games. In recent seasons the team has played Neil Diamond's "Sweet Caroline" and Chairmen of the Board's "I'd Rather Be In Carolina" immediately after home victories. A "keep pounding" chant, introduced during the 2015 season, starts before the opening kickoff of each home game. As prompted by the video boards, one side of the stadium shouts "keep" and the other side replies with "pounding". The chant is similar to ones that take place at college football games. The Carolina Panthers support a variety of non-profits in North and South Carolina through the Carolina Panthers Charities. Four annual scholarships are awarded to student athletes through the Carolina Panthers Graduate Scholarship and the Carolina Panthers Players Sam Mills Memorial Scholarship programs. Carolina Panthers Charities also offers grants to non-profits that support education, athletics, and human services in the community. The Panthers and Fisher Athletic have provided six equipment grants to high school football teams in the Carolinas each year since 2010. Carolina Panthers Charities raises funds at three annual benefits: the Countdown to Kickoff Luncheon, the team's first public event each season; Football 101, an educational workshop for fans; and the Weekend Warrior Flag Football Tournament, a two-day non-contact flag football tournament. Another annual benefit is Taste of the Panthers, a gourmet food tasting which raises funds for Second Harvest Food Bank of Metrolina. In 2003, the Panthers and Carolinas HealthCare Foundation established the Keep Pounding Fund, a fundraising initiative to support cancer research and patient support programs. The Panthers community has raised more than $1.4 million for the fund through direct donations, charity auctions, blood drives, and an annual 5K stadium run. The Panthers and Levine Children's Hospital coordinate monthly hospital visits and VIP game-day experiences for terminally ill or hospitalized children. In addition to these team-specific efforts, the Panthers participate in a number of regular initiatives promoted by the NFL and USA Football, the league's youth football development partner. 
These include USA Football Month, held throughout August to encourage and promote youth football; A Crucial Catch, the league's Breast Cancer Awareness Month program; Salute to Service, held throughout November to support military families and personnel; and PLAY 60, which encourages young NFL fans to be active for at least 60 minutes each day. Radio coverage is provided by flagship station WBT (1110 AM) and through the Carolina Panthers Radio Network, with affiliates throughout the Carolinas, Georgia, and Virginia. The Panthers' radio broadcasting team is led by Mick Mixon, Eugene Robinson, and Jim Szoke. The radio network broadcasts pre-game coverage, games with commentary, and post-game wrap-ups. It also live-broadcasts "Panther Talk", a weekly event at Bank of America Stadium which offers fans a chance to meet a player and ask questions of the staff. National broadcasting and cable television networks cover regular season games, as well as some preseason games. Locally, Fox owned and operated station WJZY airs most regular-season games, while any home games against an AFC team air on CBS affiliate WBTV. Any appearances on Monday Night Football are simulcast on ABC affiliate WSOC-TV, while any late-season appearances on Thursday Night Football are simulcast on WBTV. Sunday night and some Thursday night games are aired on NBC affiliate WCNC-TV. All preseason games and team specials are televised by the Carolina Panthers Television Network on flagship station WCCB in Charlotte and fourteen affiliate stations throughout the Carolinas, Georgia, Virginia, and Tennessee. The television broadcasting team consists of play-by-play commentator Mike Morgan, color analyst and former Panthers player Mike Rucker, and sideline reporter Pete Yanity. The network also hosts "The Panthers Huddle", a weekly show focusing on the Panthers' upcoming opponent. "Panthers Gameday", the Panthers' postgame show, is hosted by sports anchor Russ Owens and former Panthers lineman Kevin Donnalley on WCNC-TV. The Panthers also offer game broadcasts in Spanish on an eight-station network fronted by WGSP-FM in Pageland, South Carolina, as well as additional radio affiliates in Mexico. Jaime Moreno provides the play-by-play while his nephew, Luis Moreno Jr., is the color commentator. They have become popular even among English-speaking Panther fans for their high-energy, colorful announcing style. The Panthers have developed heated rivalries with the three fellow members of the NFC South (the Atlanta Falcons, Tampa Bay Buccaneers, and New Orleans Saints). The team's fiercest rivals are the Falcons and Buccaneers. The Falcons are a natural geographic rival for the Panthers, as Atlanta is only 230 miles (370 kilometers) south on I-85. The two teams have played each other twice a year since the Panthers' inception, and games between the two teams feature large contingents of Panthers fans at Atlanta's Mercedes-Benz Stadium and large contingents of Falcons fans at Bank of America Stadium. The Panthers' rivalry with Tampa Bay has been described as the most intense in the NFC South. The rivalry originated in 2002 with the formation of the NFC South, but became particularly heated before the 2003 season with verbal bouts between players on the two teams. It escalated further when the Panthers went to Tampa Bay and beat them in what ESPN.com writer Pat Yasinskas described as "one of the most physical contests in recent memory". 
The rivalry has resulted in a number of severe injuries for players on both teams, some of which were caused by foul play. One of these plays, an illegal hit on Tampa Bay punt returner Clifton Smith, sparked a brief melee between the teams in 2009. During their time in the NFC West, the Panthers began developing a rivalry with the San Francisco 49ers. This rivalry faded after the NFL moved the Panthers out of the NFC West. The Panthers also have a newer rivalry with the Seattle Seahawks, dating to the 2005 NFC Championship Game, which the Seahawks won 34–14. The rivalry started up again in 2012, when the Panthers lost a close regular season home game to a Seattle Seahawks team led by rookie quarterback Russell Wilson, 16–12. In the 2013 season, the Panthers opened the season at home versus Seattle. They again lost a close game, with the final score 12–7. The Seahawks would go on to win Super Bowl XLVIII. In the 2014 season, once more at Bank of America Stadium, the Seahawks defeated the Panthers in week eight, 13–9. In the divisional round of the playoffs, the Panthers faced Seattle in Seattle, notorious for being a tough opposing field to play in, and lost 31–17. The Seahawks would go on to lose Super Bowl XLIX. In the 2015 season, the teams faced off in Seattle, where the Panthers won another close game, 27–23. In the divisional round of the playoffs, the Panthers faced Seattle at Bank of America Stadium, where they had yet to beat a Russell Wilson-led Seahawks team. By halftime they led 31–0, but the Seahawks rebounded and scored 24 unanswered points before the Panthers were able to seal the victory, 31–24. The Panthers would go on to lose Super Bowl 50. In the 2016 season, the teams met in Seattle, where the Panthers were beaten, 40–7. Since the 2012 season, Carolina is 2–5 overall against Seattle and 1–1 in the playoffs. The rivalry aspect stems from how close the majority of the games have been and the fact that they played each other seven times between 2012 and 2016, at least once a year. The teams did not face each other during the 2017 season. The Carolina Panthers Hall of Honor was established in 1997 to honor individuals for their contributions to the Carolina Panthers organization. Each inductee is honored with a life-sized bronze statue outside of Bank of America Stadium's North Entrance, while the name of each original PSL owner is written on the black granite base of each of the six panther statues. A rule added in the mid-2000s by the Panthers organization requires all potential inductees to be retired for at least five years before they are eligible for induction. Nominees for the Pro Football Hall of Fame, which "honor[s] individuals who have made outstanding contributions to professional football", are determined by a 46-member selection committee. At least 80% of voters must approve the nominee for him to be inducted. Jerry Richardson was the founder and original owner of the Carolina Panthers. Richardson and his family owned about 48% of the team, with the remaining 52% owned by a group of 14 limited partners. Richardson paid $206 million for the rights to start the team in 1993; according to "Forbes", the Panthers were worth approximately $1 billion as of 2012. The magazine ranked the Carolina Panthers as the 16th-most valuable NFL team and the 23rd-most valuable sports team in the world. 
Mike McCormack, a Hall of Fame lineman for the Cleveland Browns and former coach and executive for the Seattle Seahawks, was the Panthers' first team president, serving in that role from 1994 until his retirement in 1997; McCormack was inducted as the first person in the Carolina Panthers Hall of Honor later that year. Jerry Richardson's son, Mark, was appointed as the team's second president in 1997 and served in that role until he stepped down in 2009. His brother Jon, who had been president of Bank of America Stadium, stepped down at the same time. The resignations of Mark and Jon Richardson were unexpected, as it was thought that the two would eventually take over the team from their father. Mark Richardson was replaced by Danny Morrison, who had previously served as the athletic director of both Texas Christian University and Wofford College, Richardson's alma mater. On May 16, 2018, David Tepper, formerly a minority owner of the Pittsburgh Steelers, finalized an agreement to purchase the Carolina Panthers for nearly $2.3 billion, a record at the time. The agreement was approved by the league owners on May 22, 2018. The Carolina Panthers have had four head coaches. Dom Capers was the head coach from 1995 to 1998 and led the team to one playoff appearance. Counting playoff games, he finished with a record of 31–35 (.470). George Seifert coached the team from 1999 to 2001, recording 16 wins and 32 losses (.333); he is the only head coach in team history not to have led the team to a playoff appearance. John Fox, the team's longest-tenured head coach, led the team from 2002 to 2010 and coached the team to three playoff appearances, including Super Bowl XXXVIII, which the Panthers lost. Including playoff games, Fox ended his tenure with a 78–74 (.513) record, making him the only Panthers coach to finish his tenure with the team with a winning record. Ron Rivera, the team's current head coach, has held the position since 2011 and has led the team to four playoff appearances, including Super Bowl 50, which the Panthers also lost. Counting playoff games, he has a career record of 67–51–1 (.567). Statistically, Rivera has the highest winning percentage of any Panthers head coach. Since they began playing football in 1995, the Panthers have been to four NFC Championship Games; they lost two (1996 and 2005) and won two (2003 and 2015). The Panthers have won six division championships: the NFC West championship in 1996 and the NFC South championship in 2003, 2008, 2013, 2014, and 2015. They are the first and only team to win the NFC South back-to-back and have won the NFC South more times than any other team in the division. They have finished as runners-up in their division six times, finishing second in the NFC West in 1997 and 1999 and second in the NFC South in 2005, 2006, 2007, and 2012. They have qualified for the playoffs eight times, most recently in 2017. Kicker John Kasay is the team's career points leader. Kasay scored 1,482 points during his 16 seasons (1995–2010) with the Panthers. Quarterback Cam Newton, who has played for the Panthers since 2011, is the career passing leader, having thrown for 20,257 yards over his six seasons with the team. Running back Jonathan Stewart is the team's career rushing leader, having rushed for 6,868 yards during his tenure with the team (2008–2018). Wide receiver Steve Smith, the team's leading receiver, recorded 12,197 receiving yards during his 13-year (2001–2013) tenure with the team. 
Chrono Trigger Chrono Trigger is a role-playing video game developed and published by Square, originally released for the Super Nintendo Entertainment System in 1995. Square re-released a ported version by Tose in Japan for the PlayStation in 1999, later repackaged with a "Final Fantasy IV" port as "Final Fantasy Chronicles" in 2001 for the North American market. A slightly enhanced "Chrono Trigger", again ported by Tose, was released for the Nintendo DS in North America and Japan in 2008, and in PAL regions in 2009. "Chrono Trigger" was a critical and commercial success upon release, and is frequently cited as one of the best video games of all time. "Nintendo Power" magazine described aspects of "Chrono Trigger" as revolutionary, including its multiple endings, plot-related sidequests focusing on character development, unique battle system, and detailed graphics. "Chrono Trigger" was the third best-selling game of 1995, shipping over two million copies by March 2003. The Nintendo DS version sold 790,000 copies by March 2009. "Chrono Trigger" was also ported to i-mode mobile phones, Virtual Console, the PlayStation Network, iOS devices, Android devices, and Microsoft Windows. "Chrono Trigger" features standard role-playing video game gameplay. The player controls the protagonist and his companions in the game's two-dimensional fictional world, consisting of various forests, cities, and dungeons. Navigation occurs via an overworld map, depicting the landscape from a scaled-down overhead view. Areas such as forests, cities, and similar places are depicted as more realistic scaled-down maps, in which players can converse with locals to procure items and services, solve puzzles and challenges, or encounter enemies. "Chrono Trigger"'s gameplay deviates from that of traditional Japanese RPGs in that, rather than appearing in random encounters, many enemies are openly visible on field maps or lie in wait to ambush the party. Contact with enemies on a field map initiates a battle that occurs directly on the map rather than on a separate battle screen. Players and enemies may use physical or magical attacks to wound targets during battle, and players may use items to heal or protect themselves. Each character and enemy has a certain number of hit points; successful attacks reduce that character's hit points, which can be restored with potions and spells. When a playable character loses all hit points, they faint; if all the player's characters fall in battle, the game ends and must be restored from a previously saved chapter, except in specific storyline-related battles that allow or force the player to lose. Between battles, the player can equip their characters with weapons, armor, helmets, and accessories that provide special effects (such as increased attack power or defense against magic), and various consumable items can be used both in and out of battles. Items and equipment can be purchased in shops or found on field maps, often in treasure chests. By exploring new areas and fighting enemies, players progress through "Chrono Trigger"'s story. "Chrono Trigger" uses an Active Time Battle system—a staple of Square's "Final Fantasy" game series designed by Hiroyuki Ito for "Final Fantasy IV"—named "Active Time Battle 2.0." Each character can take action in battle once a personal timer dependent on the character's speed statistic counts down to zero. Magic and special physical techniques are handled through a system called "Techs." 
Techs deplete a character's magic points (a numerical meter similar to hit points), and often have special areas of effect; some spells damage huddled monsters, while others can harm enemies spread in a line. Enemies often change positions during battle, creating opportunities for tactical Tech use. A unique feature of "Chrono Trigger"'s Tech system is that numerous cooperative techniques exist. Each character receives eight personal Techs which can be used in conjunction with others' to create Double and Triple Techs for greater effect. For instance, Crono's sword-spinning "Cyclone" Tech can be combined with Lucca's "Flame Toss" to create "Flame Whirl". When characters with compatible Techs have enough magic points available to perform their techniques, the game automatically displays the combo as an option. "Chrono Trigger" features several other distinct gameplay traits, including time travel. Players have access to seven eras of the game world's history, and past actions affect future events. Throughout history, players find new allies, complete side quests, and search for keynote villains. Time travel is accomplished via portals and pillars of light called "time gates", as well as a time machine named "Epoch". The game contains thirteen unique endings; the ending the player receives depends on when and how they reach and complete the game's final battle. "Chrono Trigger DS" features a new ending that can be accessed from the End of Time upon completion of the final extra dungeon and optional final boss. "Chrono Trigger" also introduces a New Game Plus option; after completing the game, the player may begin a new game with the same character levels, techniques, and equipment (excluding money) with which they ended the previous playthrough. However, certain items central to the storyline are removed and must be found again, such as the sword Masamune. Square has since employed the New Game Plus concept in later titles, including "Chrono Cross", "Parasite Eve", "Vagrant Story", "Final Fantasy X-2", and "". "Chrono Trigger" takes place in a world similar to Earth, with eras such as the prehistoric age, in which primitive humans and dinosaurs share the earth; the Middle Ages, replete with knights, monsters, and magic; and the post-apocalyptic future, where destitute humans and sentient robots struggle to survive. The characters frequently travel through time to obtain allies, gather equipment, and learn information to help them in their quest. The party also gains access to the End of Time (represented as year ∞), which serves as a hub to travel back to other time periods. The party eventually acquires a time-machine vehicle known as the "Wings of Time," nicknamed the "Epoch." The vehicle is capable of time travel between any time period without first having to travel to the End of Time. "Chrono Trigger"'s six playable characters (plus one optional character) come from different eras of history. "Chrono Trigger" begins in AD 1000 with Crono, Marle and Lucca. Crono is the silent protagonist, characterized as a fearless young man who wields a katana in battle. Marle (Princess Nadia) lives in Guardia Castle; though sheltered, at heart she's a princess who enjoys hiding her royal identity. Lucca is a friend of Crono's and a mechanical genius; her home is filled with laboratory equipment and machinery. From the era of AD 2300 comes Robo, or Prometheus (designation R-66Y), a robot with near-human personality created to assist humans. 
Lying dormant in the future, Robo is found and repaired by Lucca, and joins the group out of gratitude. The fiercely confident Ayla dwells in 65,000,000 BC. Unmatched in raw strength, Ayla is the chief of Ioka Village and leads her people in war against a species of humanoid reptiles known as Reptites. The last two playable characters are Frog and Magus. Frog originated in AD 600. He is a former squire once named Glenn, who was turned into an anthropomorphic frog by Magus, who also killed his friend Cyrus. Chivalrous but mired in regret, Frog dedicates his life to protecting Leene, the queen of Guardia, and avenging Cyrus. Meanwhile, Guardia in AD 600 is in a state of conflict against the Mystics (known as Fiends in the US/DS port), a race of demons and intelligent animals who wage war against humanity under the leadership of Magus, a powerful sorcerer. Magus's seclusion conceals a long-lost past; he was formerly known as Janus, the young prince of the Kingdom of Zeal, which was destroyed by Lavos in 12,000 BC. The incident sent him forward through time, and as he ages, he plots revenge against Lavos and broods over the fate of his sister, Schala. Lavos, the game's main antagonist who awakens and ravages the world in AD 1999, is an extraterrestrial, parasitic creature that harvests DNA and the Earth's energy for its own growth. In AD 1000, Crono and Marle watch Lucca and her father demonstrate her new teleporter at the Millennial Fair. When Marle volunteers to be teleported, her pendant interferes with the device and creates a time portal into which she is drawn. After Crono and Lucca separately recreate the portal and find themselves in AD 600, they find Marle only to see her vanish before their eyes. Lucca realizes that this time period's kingdom has mistaken Marle for an ancestor of hers who had been kidnapped, thus putting off the recovery effort for her ancestor and creating a grandfather paradox. Crono and Lucca, with the help of Frog, restore history to normal by recovering the kidnapped queen. After the three part ways with Frog and return to the present, Crono is arrested on charges of kidnapping the princess and sentenced to death by the current chancellor of Guardia. Lucca and Marle help Crono to flee, haphazardly using another time portal to escape their pursuers. This portal lands them in AD 2300, where they learn that an advanced civilization has been wiped out by a giant creature known as Lavos that appeared in 1999. The three vow to find a way to prevent the future destruction of their world. After meeting and repairing Robo, Crono and his friends find Gaspar, an old sage at the End of Time, who helps them acquire magical powers and travel through time by way of several pillars of light. Research in AD 1000 tells them about Magus summoning Lavos into the world. To repair Frog's sword, the Masamune, they travel to prehistoric times and meet Ayla. After returning to AD 600, they challenge Magus, believing him to be the source of Lavos; after the battle, a summoning spell causes a time gate that throws Crono and his friends to the past. Back in prehistory, Ayla joins the group to battle the Reptites and witness the origin of Lavos. They learn that Lavos was an alien being that arrived on the planet millions of years in the past, and began to absorb DNA and energy from every living creature before arising and razing the planet's surface in 1999 so that it could spawn a new generation. 
Entering a gate created by the newly-landed Lavos, they go to BC 12,000, an ice age, where the party finds the Kingdom of Zeal, who recently discovered Lavos and seeks to drain its power to achieve immortality through the Mammon Machine. Zeal's leader, Queen Zeal, imprisons Crono and friends on orders of the Prophet, a mysterious figure who has recently begun advising the queen. Though Zeal's daughter Schala frees them, the Prophet forces her to banish them from the realm and seal the time gate they used to travel to the Dark Ages. They return next to AD 2300 to find a time machine called the Wings of Time (or "Epoch"), which can access any time period without using a time gate. They travel back to BC 12000 to stop Zeal from activating the Mammon Machine in the Ocean Palace. Lavos awakens, disturbed by the Mammon Machine; the Prophet reveals himself to be Magus and unsuccessfully tries to kill the creature. The party is beaten, and a broken Crono stands up to Lavos before being vaporized by a powerful blast. Schala transports the rest of the party back to the surface before Lavos destroys the Ocean palace and the Kingdom of Zeal. Crono's friends awaken in a village and find Magus, who confesses that he used to be Prince Janus of Zeal. In his memories, it is revealed that the disaster at the Ocean Palace scattered the Gurus of Zeal across time and sent him to the Middle Ages. Janus took the alias of Magus and gained a cult of followers while plotting to summon and kill Lavos in revenge for the death of his sister, Schala. When Lavos appeared after his battle with Crono and his allies, he was cast back to the time of Zeal and presented himself to them as a prophet. At this point, Magus is either killed by the party, killed in a duel with Frog, or spared and convinced to join the party. As Crono's friends depart, the ruined Ocean Palace rises into the air as the Black Omen. The group turns to Gaspar for help, and he gives them a "Chrono Trigger," an egg-shaped device that allows the group to replace Crono just before the moment of death with a Dopple Doll. Crono and his friends then gather power by helping people across time with Gaspar's instructions. Their journeys involve defeating the remnants of the Mystics, stopping Robo's maniacal AI creator, addressing Frog's feelings towards Cyrus, locating and charging up the mythical Sun Stone, retrieving the Rainbow Shell, and helping restore a forest destroyed by a desert monster. The group enters the Black Omen and defeats Queen Zeal, then successfully battles Lavos, saving the future of their world. If Magus joined the party, he departs to search for Schala. Crono's mother accidentally enters the time gate at the fair before it closes, prompting Crono, Marle, and Lucca to set out in the Epoch to find her while fireworks light up the night sky. Alternatively, if the party used the Epoch to break Lavos's outer shell, Marle will help her father hang Nadia's bell at the festival and accidentally get carried away by several balloons. Crono jumps on to help her, but cannot bring them down to earth. Hanging on in each other's arms, the pair travel through the cloudy, moonlit sky. "Chrono Trigger DS" added two new scenarios to the game. In the first, Crono and his friends can help a "lost sanctum" of Reptites, who reward powerful items and armor. The second scenario adds ties to "Trigger"'s sequel, "Chrono Cross". 
In a New Game +, the group can explore several temporal distortions to combat shadow versions of Crono, Marle, and Lucca, and to fight Dalton, who promises in defeat to raise an army in the town of Porre to destroy the Kingdom of Guardia. The group can then fight the Dream Devourer, a prototypical form of the "Time Devourer"—a fusion of Schala and Lavos seen in "Chrono Cross". A version of Magus pleads with Schala to resist; though she recognizes him as her brother, she refuses to be helped and sends him away. Schala subsequently erases his memories and Magus awakens in a forest, determined to find what he had lost. "Chrono Trigger" was conceived in 1992 by Hironobu Sakaguchi, producer and creator of the "Final Fantasy" series; Yuji Horii, director and creator of the "Dragon Quest" series; and Akira Toriyama, creator of the "Dragon Ball" comics series. Traveling to the United States to research computer graphics, the three decided to create something that "no one had done before". After spending over a year considering the difficulties of developing a new game, they received a call from Kazuhiko Aoki, who offered to produce. The four met and spent four days brainstorming ideas for the game. Square convened 50–60 developers, including scenario writer Masato Kato, whom Square designated story planner. Development started in early 1993. An uncredited Square employee suggested that the team develop a time travel-themed game, which Kato initially opposed, fearing repetitive, dull gameplay. Kato and Horii then met several hours per day during the first year of development to write the game's plot. Square intended to license the work under the "Seiken Densetsu" franchise and gave it the working title of "Maru Island"; Hiromichi Tanaka (the future director of "Chrono Cross") monitored Toriyama's early designs. The team hoped to release it on Nintendo's planned Super Famicom Disk Drive; when Nintendo canceled the project, Square reoriented the game for release on a Super Famicom cartridge and rebranded it as "Chrono Trigger". Tanaka credited the ROM cartridge platform for enabling seamless transition to battles on the field map. Aoki ultimately produced "Chrono Trigger", while director credits were attributed to Akihiko Matsui, Yoshinori Kitase and Takashi Tokita. Toriyama designed the game's aesthetic, including characters, monsters, vehicles, and the look of each era. Masato Kato also contributed character ideas and designs. Kato planned to feature Gaspar as a playable character and Toriyama sketched him, but he was cut early in development. The development staff studied the drawings of Toriyama to approximate his style. Sakaguchi and Horii supervised; Sakaguchi was responsible for the game's overall system and contributed several monster ideas. Other notable designers include Tetsuya Takahashi, the graphic director, and Yasuyuki Honne, Tetsuya Nomura, and Yusuke Naora, who worked as field graphic artists. Yasuhika Kamata programmed graphics, and cited Ridley Scott's visual work in the film "Alien" as an inspiration for the game's lighting. Kamata made the game's luminosity and color choice lay between that of "Secret of Mana" and the "Final Fantasy" series. Features originally intended to be used in "Secret of Mana" or "Final Fantasy IV", also under development at the same time, were appropriated by the "Chrono Trigger" team. 
Yuji Horii, a fan of time travel fiction (such as the TV series "The Time Tunnel"), fostered a theme of time travel in his general story outline of "Chrono Trigger" with input from Akira Toriyama. Horii liked the scenario of the grandfather paradox surrounding Marle. Concerning story planning, Horii commented, "If there's a fairground, I just write that there's a fairground; I don't write down any of the details. Then the staff brainstorm and come up with a variety of attractions to put in." Sakaguchi contributed some minor elements, including the character Gato; he liked Marle's drama and reconciliation with her father. Masato Kato subsequently edited and completed the outline by writing the majority of the game's story, including all the events of the 12,000 BC era. He took pains to avoid what he described as "a long string of errands ... [such as] 'do this', 'take this', 'defeat these monsters', or 'plant this flag'." Kato and other developers held a series of meetings to ensure continuity, usually attended by around 30 personnel. Kato and Horii initially proposed Crono's death, though they intended he stay dead; the party would have retrieved an earlier, living version of him to complete the quest. Square deemed the scenario too depressing and asked that Crono be brought back to life later in the story. Kato also devised the system of multiple endings because he could not branch the story out to different paths. Yoshinori Kitase and Takashi Tokita then wrote various subplots. They also devised an "Active Time Event Logic" system, "where you can move your character around during scenes, even when an NPC is talking to you", and with players "talking to different people and steering the conversation in different directions", allowing each scene to "have many permutations." Kato became friends with composer Yasunori Mitsuda during development, and they would collaborate on several future projects. Katsuhisa Higuchi programmed the battle system, which hosted combat on the map without transition to a special battleground as most previous Square games had done. Higuchi noted extreme difficulty in loading battles properly without slow-downs or a brief, black loading screen. The game's use of animated monster sprites consumed much more memory than previous "Final Fantasy" games, which used static enemy graphics. Hironobu Sakaguchi likened the development of "Chrono Trigger" to "play[ing] around with Toriyama's universe," citing the inclusion of humorous sequences in the game that would have been "impossible with something like "Final Fantasy"." When Square Co. suggested a non-human player character, developers created Frog by adapting one of Toriyama's sketches. The team created the End of Time to help players with hints, worrying that they might become stuck and need to consult a walkthrough. The game's testers had previously complained that "Chrono Trigger" was too difficult; as Horii explained, "It's because we know too much. The developers think the game's just right; that they're being too soft. They're thinking from their own experience. The puzzles were the same. Lots of players didn't figure out things we thought they'd get easily." Sakaguchi later cited the unusual desire of beta testers to play the game a second time or "travel through time again" as an affirmation of the New Game + feature: "Wherever we could, we tried to make it so that a slight change in your behavior caused subtle differences in people's reactions, even down to the smallest details ... 
I think the second playthrough will hold a whole new interest." The game's reuse of locations due to time traveling made bug-fixing difficult, as corrections would cause unintended consequences in other eras. "Chrono Trigger" was scored primarily by Yasunori Mitsuda, with contributions from veteran "Final Fantasy" composer Nobuo Uematsu, and one track composed by Noriko Matsueda. A sound programmer at the time, Mitsuda was unhappy with his pay and threatened to leave Square if he could not compose music. Hironobu Sakaguchi suggested he score "Chrono Trigger", remarking, "maybe your salary will go up." Mitsuda composed new music and drew on a personal collection of pieces composed over the previous two years. He reflected, "I wanted to create music that wouldn't fit into any established genre ... music of an imaginary world. The game's director, Masato Kato, was my close friend, and so I'd always talk with him about the setting and the scene before going into writing." Mitsuda slept in his studio several nights, and attributed certain pieces—such as the game's ending theme, "To Far Away Times"—to inspiring dreams. He later attributed this song to an idea he was developing before "Chrono Trigger", reflecting that the tune was made in dedication to "a certain person with whom [he] wanted to share a generation". He also tried to use leitmotifs of the "Chrono Trigger" main theme to create a sense of consistency in the soundtrack. Mitsuda wrote each tune to be around two minutes long before repeating, unusual for Square's games at the time. Mitsuda suffered a hard drive crash that lost around forty in-progress tracks. After Mitsuda contracted stomach ulcers, Uematsu joined the project to compose ten pieces and finish the score. Mitsuda returned to watch the ending with the staff before the game's release, crying upon seeing the finished scene. At the time of the game's release, the number of tracks and sound effects was unprecedented—the soundtrack spanned three discs in its 1995 commercial pressing. Square also released a one-disc acid jazz arrangement called ""The Brink of Time"" by Guido that year. "The Brink of Time" came about because Mitsuda wanted to do something that no one else was doing, and he noted that acid jazz and its related genres were uncommon in the Japanese market. Mitsuda considers "Chrono Trigger" a landmark title which helped mature his talent. While Mitsuda later held that the title piece was "rough around the edges", he maintains that it had "significant influence on [his] life as a composer". In 1999, Square produced another one-disc soundtrack to complement the PlayStation release of "Trigger", featuring orchestral tracks used in cut scenes. Tsuyoshi Sekito composed four new pieces for the game's bonus features which weren't included on the soundtrack. Some fans were displeased by Mitsuda's absence in creating the port, whose instruments sometimes aurally differed from the original game's. Mitsuda arranged versions of music from the "Chrono" series for Play! video game music concerts, presenting the main theme, "Frog's Theme", and "To Far Away Times". He worked with Square Enix to ensure that the music for the Nintendo DS would sound closer to the Super NES version. Mitsuda encouraged feedback about the game's soundtrack from contemporary children (who he thought would expect "full symphonic scores blaring out of the speakers"). 
Fans who preordered "Chrono Trigger DS" received a special music disc containing two orchestral arrangements of "Chrono Trigger" music directed by Natsumi Kameoka; Square Enix also held a random prize drawing for two signed copies of "Chrono Trigger" sheet music. Mitsuda expressed difficulty in selecting the tunes for the orchestral medley, eventually picking a tune from each era and certain character themes. Music from the game was performed live by the Tokyo Symphony Orchestra in 1996 at the Orchestral Game Concert in Tokyo, Japan. A suite of music including "Chrono Trigger" is part of Play! A Video Game Symphony, a world-touring symphonic concert series of video game music; Mitsuda was in attendance for the concert's world premiere in Chicago on May 27, 2006. His suite of "Chrono" music, comprising "Reminiscence", "Chrono Trigger", "Chrono Cross~Time's Scar", "Frog's Theme", and "To Far Away Times", was performed. Mitsuda has also appeared with the Eminence Symphony Orchestra as a special guest. Video Games Live has also featured medleys from "Chrono Trigger" and "Chrono Cross". A medley of music from "Chrono Trigger" made up one of the four suites of the "Symphonic Fantasies" concerts in September 2009, which were produced by the creators of the Symphonic Game Music Concert series and conducted by Arnie Roth. Square Enix re-released the game's soundtrack, along with a video interview with Mitsuda, in July 2009. The team planned to release "Chrono Trigger" in late 1994, but the release was pushed back to the following year. Early alpha versions of "Chrono Trigger" were demonstrated at the 1994 and 1995 V-Jump festivals in Japan. A few months prior to the game's release, Square shipped a beta version to magazine reviewers and game stores for review. This unfinished build of the game, dated November 17, 1994, contains unused music tracks, locations, and other features changed or removed from the final release—such as a dungeon named "Singing Mountain" and its eponymous tune. Some names also differed; the character Soysaw (Slash in the US version) was known as Wiener, while Mayonnay (Flea in the US version) was named Ketchappa. The ROM image for this early version was eventually uploaded to the internet, prompting fans to explore and document the game's differences, including two unused world map NPC character sprites and presumed additional sprites for certain non-player characters. Around the game's release, Yuji Horii commented that "Chrono Trigger" "went beyond [the development team's] expectations", and Hironobu Sakaguchi congratulated the game's graphic artists and field designers. Sakaguchi intended to perfect the "sense of dancing you get from exploring Toriyama's worlds" in the event that they would make a sequel. "Chrono Trigger" used a 32-megabit ROM cartridge with battery-backed RAM for saved games, lacking special on-cartridge coprocessors. The Japanese release of "Chrono Trigger" included art for the game's ending and running counts of items in the player's status menu. Developers created the North American version before adding these features to the original build, inadvertently leaving in vestiges of "Chrono Trigger"'s early development (such as the piece "Singing Mountain"). Hironobu Sakaguchi asked translator Ted Woolsey to localize "Chrono Trigger" for English audiences and gave him roughly thirty days to work. Lacking the help of a modern translation team, he memorized scenarios and looked at drafts of commercial player's guides to put dialogue in context. 
Woolsey later reflected that he would have preferred two-and-a-half months, and blamed his rushed schedule on the prevailing attitude in Japan that games were children's toys rather than serious works. Some of his work was cut due to space constraints, though he still considered "Trigger" "one of the most satisfying games [he] ever worked on or played". Nintendo of America censored certain dialogue, including references to breastfeeding, consumption of alcohol, and religion. The original SNES edition of "Chrono Trigger" was released on the Wii download service Virtual Console in Japan on April 26, 2011, in the US on May 16, 2011, and in Europe on May 20, 2011. Previously, in April 2008, a "Nintendo Power" reader poll had identified "Chrono Trigger" as the third-most wanted game for the Virtual Console. It went on to receive a perfect score of 10 out of 10 from IGN. Square released an enhanced port of "Chrono Trigger" developed by Tose in Japan for the Sony PlayStation in 1999. Square timed its release before that of "Chrono Cross", the 1999 sequel to "Chrono Trigger", to familiarize new players with the story leading up to it. This version included anime cutscenes created by original character designer Akira Toriyama's Bird Studio and animated by Toei Animation, as well as several bonus features, accessible after achieving various endings in the game. Scenarist Masato Kato attended planning meetings at Bird Studio to discuss how the ending cutscenes would illustrate subtle ties to "Chrono Cross". The port was later released in North America in 2001—along with a newly translated version of "Final Fantasy IV"—under the package title "Final Fantasy Chronicles". Reviewers criticized "Chronicles" for its lengthy load times and an absence of new in-game features. This same iteration was also re-released as a downloadable game on the PlayStation Network on October 4, 2011, for the PlayStation 3, PlayStation Vita, and PlayStation Portable. On July 2, 2008, Square Enix announced that they were officially planning to bring "Chrono Trigger" to the Nintendo DS handheld platform. Composer Yasunori Mitsuda was pleased with the project, exclaiming "finally!" after receiving the news from Square Enix and maintaining, "it's still a very deep, very high-quality game even when you play it today. I'm very interested in seeing what kids today think about it when they play it." Square retained Masato Kato to oversee the port, and Tose to program it. Kato explained, "I wanted it to be based on the original Super NES release rather than the PlayStation version. I thought we should look at the additional elements from the PlayStation version, re-examine and re-work them to make it a complete edition. That's how it struck me and I told the staff so later on." Square Enix touted the game by displaying Akira Toriyama's original art at the 2008 Tokyo Game Show. The DS re-release contains all of the bonus material from the PlayStation port, as well as other enhancements. The added features include a more accurate and revised translation by Tom Slattery, a dual-screen mode which clears the top screen of all menus, a self-completing map screen, and a default "run" option. It also featured the option to choose between two control schemes: one mirroring the original SNES controls, and the other making use of the DS's touch screen. 
Masato Kato participated in development, overseeing the addition of the monster-battling Arena, two new areas, the Lost Sanctum and the Dimensional Vortex, and a new ending that further foreshadows the events of "Chrono Cross". One of the areas within the Vortex uses the "Singing Mountain" song that was featured on the original "Chrono Trigger" soundtrack. These new dungeons met with mixed reviews; GameSpot called them "frustrating" and "repetitive", while IGN noted that "the extra quests in the game connect extremely well." It was a nominee for "Best RPG for the Nintendo DS" in IGN's 2008 video game awards. The Nintendo DS version of "Chrono Trigger" was the 22nd best-selling game of 2008 in Japan. A cellphone version was released in Japan on i-mode distribution service on August 25, 2011. An iOS version was released on December 8, 2011. This version is based on the Nintendo DS version, with graphics optimized for iOS. The game was later released for Android on October 29, 2012. An update incorporating most of the features of the Windows version—including the reintroduction of the animated cutscenes, which had been absent from the initial mobile release—was released on February 27, 2018. Square Enix released "Chrono Trigger" without an announcement for Microsoft Windows via Steam on February 27, 2018. This version includes all content from the Nintendo DS port, the improved graphics from the mobile device releases, support for mouse and keyboard controls, and autosave features, along with additional content such as wallpapers and music. The PC port received negative reception due to its inferior graphical quality, additional glitches, UI adapted for touchscreens, and failure to properly adapt the control scheme for keyboards and controllers. In response, Square Enix provided various UI updates and other improvements over the next few months to address the complaints. The game was a bestseller in Japan. The game's SNES and PS1 iterations have shipped more than 2.36 million copies in Japan and 290,000 abroad. The first two million copies sold in Japan were delivered in only two months, and the game ended 1995 as the third best-selling game of the year behind "" and "". The game was met with substantial success upon release in North America, and its rerelease on the PlayStation as part of the "Final Fantasy Chronicles" package topped the NPD TRSTS PlayStation sales charts for over six weeks. This version was later re-released again in 2003 as part of Sony's Greatest Hits line. "Chrono Trigger DS" has sold 490,000 copies in Japan, 240,000 in North America and 60,000 in Europe as of March 2009. "Chrono Trigger" garnered much critical praise in addition to its brisk sales. "Famicom Tsūshin" gave "Chrono Trigger" first an 8 out of 10 and later a 9 out of 10 in their Reader Cross Review. "Nintendo Power" compared it favorably with "Secret of Mana", "Final Fantasy", and "", citing improved graphics, sound, story and gameplay over past RPG titles. "GamePro" praised the varied gameplay, the humor, the ability to replay the game with previously built-up characters, and the graphics, which they said far exceed even those of "Final Fantasy VI". They commented that combat is easier and more simplistic than in most RPGs, but argued that "Most players would choose an easier RPG of this caliber over a hundred more complicated, but less developed, fantasy role-playing adventures." They gave the game a perfect 5 out of 5 in all four categories: graphics, sound, control, and funfactor. 
"Electronic Gaming Monthly" gave it their "Game of the Month" award, with their four reviewers praising the graphics, story, and music. "Chrono Trigger" won multiple awards from "Electronic Gaming Monthly"s 1995 video game awards, including Best Role-Playing Game, Best Music in a Cartridge-Based Game, and Best Super NES Game. "Official U.S. PlayStation Magazine" described "Trigger" as "original and extremely captivating", singling out its graphics, sound and story as particularly impressive. IGN commented that "it may be filled with every imaginable console RPG cliché, but "Chrono Trigger" manages to stand out among the pack" with "a [captivating] story that doesn't take itself too serious " and "one of the best videogame soundtracks ever produced". Other reviewers (such as the staff of RPGFan and RPGamer) have criticized the game's short length and relative ease compared to its peers. Victoria Earl of Gamasutra praised the game design for balancing "developer control with player freedom using designed mechanics and a modular approach to narrative." Overall, critics lauded "Chrono Trigger" for its "fantastic yet not overly complex" story, simple but innovative gameplay, and high replay value afforded by multiple endings. Online score aggregator GameRankings lists the original Super NES version as the 2nd highest scoring RPG and 24th highest scoring game ever reviewed. In 2009, Guinness World Records listed it as the 32nd most influential video game in history. "Nintendo Power" listed the ending to "Chrono Trigger" as one of the greatest endings in Nintendo history, due to over a dozen endings that players can experience. Tom Hall drew inspiration from "Chrono Trigger" and other console games in creating "Anachronox", and used the campfire scene to illustrate the dramatic depth of Japanese RPGs. "Chrono Trigger" is frequently listed among the greatest video games of all time. It has placed highly on all six of multimedia website IGN's "top 100 games of all time" lists—4th in 2002, 6th in early 2005, 13th in late 2005, 2nd in 2006, 18th in 2007, and 2nd in 2008. "Game Informer" called it its 15th favourite game in 2001. Its staff thought that it was the best non-"Final Fantasy" title Square had produced at the time. GameSpot included "Chrono Trigger" in "The Greatest Games of All Time" list released in April 2006, and it also appeared as 28th on an "All Time Top 100" list in a poll conducted by Japanese magazine "Famitsu" the same year. In 2004, "Chrono Trigger" finished runner up to "Final Fantasy VII" in the inaugural GameFAQs video game battle. In 2008, readers of Dengeki Online voted it the eighth best game ever made. "Nintendo Power"'s twentieth anniversary issue named it the fifth best Super NES game. In 2012, it came 32nd place on GamesRadar's "100 best games of all time" list, and 1st place on its "Best JRPGs" list. GamesRadar named "Chrono Trigger" the 2nd best Super NES game of all time, behind "Super Metroid". In contrast to the critical acclaim of "Chrono Trigger"'s original SNES release, the 2018 Microsoft Windows port of "Chrono Trigger" was critically panned. Grievances noted by reviewers included tiling errors on textures, the addition of aesthetically-intrusive sprite filters, an unattractive GUI carried over from the 2011 mobile release, a lack of graphic customization options, and the inability to remap controls. 
In describing the port, "Forbes" commented: "From pretty awful graphical issues, such as tiling textures and quite a painful menu system, this port really doesn't do this classic game justice." "USGamer" characterized the Windows release as carrying "all the markings of a project farmed out to the lowest bidder. It's a shrug in Square-Enix's mind, seemingly not worth the money or effort necessary for a half-decent port." In a Twitter post detailing his experiences with the Windows version, indie developer Fred Wood derisively compared the port to "someone's first attempt at an RPG Maker game", a comment which was republished across numerous articles addressing the poor quality of the rerelease. "Chrono Trigger" inspired several related releases; the first were three titles released for the Satellaview on July 31, 1995. They included "Chrono Trigger: Jet Bike Special", a racing video game based on a minigame from the original; "Chrono Trigger: Character Library", featuring profiles on characters and monsters from the game; and "Chrono Trigger: Music Library", a collection of music from the game's soundtrack. The contents of "Character Library" and "Music Library" were later included as extras in the PlayStation rerelease of "Chrono Trigger". Production I.G created a 16-minute OVA entitled "", which was shown at the Japanese V-Jump Festival on July 31, 1996. There have been two notable attempts by "Chrono Trigger" fans to unofficially remake parts of the game for PC with a 3D graphics engine. "Chrono Resurrection", an attempt at remaking ten small interactive cut scenes from "Chrono Trigger", and "Chrono Trigger Remake Project", which sought to remake the entire game, were forcibly terminated by Square Enix by way of a cease-and-desist order. Another group of fans created a sequel via a ROM hack of "Chrono Trigger" called "", developed from 2004 to 2009; although feature-length and virtually finished, it too was terminated through a cease-and-desist letter days before its May 2009 release. The letter also banned the dissemination of existing "Chrono Trigger" ROM hacks and documentation. After the cease and desist was issued, an incomplete version of the game was leaked in May 2009, though due to the early state of the game, playability was limited. This was followed by a more complete ROM leak in January 2011, which allowed the game to be played from beginning to end. Square released a fourth Satellaview game in 1996, named "Radical Dreamers". Having thought that "Trigger" ended with "unfinished business", scenarist Masato Kato wrote and directed the game. "Dreamers" functioned as a side story to "Chrono Trigger", resolving a loose subplot from its predecessor. A short, text-based game relying on minimal graphics and atmospheric music, it never received an official release outside Japan—though it was translated into English by fans in April 2003. Square planned to release "Radical Dreamers" as an easter egg in the PlayStation edition of "Chrono Trigger", but Kato was unhappy with his work and halted its inclusion. Square released "Chrono Cross" for the Sony PlayStation in 1999. "Cross" is a sequel to "Chrono Trigger" featuring a new setting and cast of characters. Presenting a theme of parallel worlds, the story followed the protagonist Serge—a teenage boy thrust into an alternate reality in which he died years earlier. With the help of a thief named Kid, Serge endeavors to discover the truth behind his apparent death and obtain the Frozen Flame, a mythical artifact. 
Regarded by writer and director Masato Kato as an effort to "redo "Radical Dreamers" properly", "Chrono Cross" borrowed certain themes, scenarios, characters, and settings from "Dreamers". Yasunori Mitsuda also adapted certain songs from "Radical Dreamers" while scoring "Cross". "Radical Dreamers" was consequently removed from the series' main continuity, considered an alternate dimension. "Chrono Cross" shipped 1.5 million copies and was almost universally praised by critics. There are no plans for a new title, despite a statement from Hironobu Sakaguchi in 2001 that the developers of "Chrono Cross" wanted to make a new "Chrono" game. The same year, Square applied for a trademark for the names "Chrono Break" in the United States and "Chrono Brake" in Japan. However, the United States trademark was dropped in 2003. Director Takashi Tokita mentioned "Chrono Trigger 2" in a 2003 interview which has not been translated into English. Yuji Horii expressed no interest in returning to the Chrono franchise in 2005, while Hironobu Sakaguchi remarked in April 2007 that his creation "Blue Dragon" was an "extension of [Chrono Trigger]." During a Cubed³ interview on February 1, 2007, Square Enix's Senior Vice President Hiromichi Tanaka said that although no sequel is currently planned, some sort of sequel is still possible if the "Chrono Cross" developers can be reunited. Yasunori Mitsuda has expressed interest in scoring a new game, but warned that "there are a lot of politics involved" with the series. He stressed that Masato Kato should participate in development. The February 2008 issue of "Game Informer" ranked the "Chrono" series eighth among the "Top Ten Sequels in Demand", naming the games "steadfast legacies in the Square Enix catalogue" and asking, "what's the damn holdup?!" In Electronic Gaming Monthly's June 2008 "Retro Issue", writer Jeremy Parish cited "Chrono" as the franchise video game fans would be most thrilled to see a sequel to. In the first May 2009 issue of "Famitsu", "Chrono Trigger" placed 14th out of 50 in a vote of most-wanted sequels by the magazine's readers. At E3 2009, Square Enix Senior Vice President Shinji Hashimoto remarked, "If people want a sequel, they should buy more!" In July 2010, Feargus Urquhart, replying to an interview question about what franchises he would like to work on, said that "if [he] could come across everything that [he] played", he would choose a "Chrono Trigger" game. At the time, Urquhart's company Obsidian Entertainment was making "Dungeon Siege III" for Square Enix. Urquhart said: "You make RPGs, we make RPGs, it would be great to see what we could do together. And they really wanted to start getting into Western RPGs. And, so it kind of all ended up fitting together." Yoshinori Kitase revealed in 2011 that he used the time travel mechanics of "Chrono Trigger" as a starting point for those of "Final Fantasy XIII-2". Candide Candide, ou l'Optimisme, is a French satire first published in 1759 by Voltaire, a philosopher of the Age of Enlightenment. The novella has been widely translated, with English versions titled Candide: or, All for the Best (1759); Candide: or, The Optimist (1762); and Candide: Optimism (1947). It begins with a young man, Candide, who is living a sheltered life in an Edenic paradise and being indoctrinated with Leibnizian optimism by his mentor, Professor Pangloss. 
The work describes the abrupt cessation of this lifestyle, followed by Candide's slow and painful disillusionment as he witnesses and experiences great hardships in the world. Voltaire concludes with Candide, if not rejecting Leibnizian optimism outright, advocating a deeply practical precept, "we must cultivate our garden", in lieu of the Leibnizian mantra of Pangloss, "all is for the best" in the "best of all possible worlds". "Candide" is characterized by its tone as well as by its erratic, fantastical, and fast-moving plot. A picaresque novel with a story similar to that of a more serious coming-of-age narrative ("Bildungsroman"), it parodies many adventure and romance clichés, the struggles of which are caricatured in a tone that is bitter and matter-of-fact. Still, the events discussed are often based on historical happenings, such as the Seven Years' War and the 1755 Lisbon earthquake. As philosophers of Voltaire's day contended with the problem of evil, so does Candide in this short novel, albeit more directly and humorously. Voltaire ridicules religion, theologians, governments, armies, philosophies, and philosophers. Through Candide, he assaults Leibniz and his optimism. As Voltaire predicted, "Candide" has enjoyed both great success and great scandal. Immediately after its secretive publication, the book was widely banned because it contained religious blasphemy, political sedition, and intellectual hostility hidden under a thin veil of naïveté. However, with its sharp wit and insightful portrayal of the human condition, the novel has since inspired many later authors and artists to mimic and adapt it. Today, "Candide" is recognized as Voltaire's "magnum opus" and is often listed as part of the Western canon. It is among the most frequently taught works of French literature. The British poet and literary critic Martin Seymour-Smith listed "Candide" as one of the 100 most influential books ever written. A number of historical events inspired Voltaire to write "Candide", most notably the publication of Leibniz's "Monadology", a short metaphysical treatise, the Seven Years' War, and the 1755 Lisbon earthquake. Both of the latter catastrophes are frequently referred to in "Candide" and are cited by scholars as reasons for its composition. The 1755 Lisbon earthquake, tsunami, and resulting fires of All Saints' Day had a strong influence on theologians of the day and on Voltaire, who was himself disillusioned by them. The earthquake had an especially large effect on the contemporary doctrine of optimism, a philosophical system which implies that such events should not occur. Optimism is founded on the theodicy of Gottfried Wilhelm Leibniz and says all is for the best because God is a benevolent deity. This concept is often put into the form, "all is for the best in the best of all possible worlds" (Fr. "Tout est pour le mieux dans le meilleur des mondes possibles"). Philosophers had trouble fitting the horrors of this earthquake into their optimistic world view. Voltaire actively rejected Leibnizian optimism after the natural disaster, convinced that if this were the best possible world, it should surely be better than it is. In both "Candide" and "Poème sur le désastre de Lisbonne" ("Poem on the Lisbon Disaster"), Voltaire attacks this optimist belief. He makes use of the Lisbon earthquake in both "Candide" and his "Poème" to argue this point, sarcastically describing the catastrophe as one of the most horrible disasters "in the best of all possible worlds". 
Immediately after the earthquake, unreliable rumours circulated around Europe, sometimes overestimating the severity of the event. Ira Wade, a noted expert on Voltaire and "Candide", has analyzed which sources Voltaire might have referenced in learning of the event. Wade speculates that Voltaire's primary source for information on the Lisbon earthquake was the 1755 work "Relation historique du Tremblement de Terre survenu à Lisbonne" by Ange Goudar. Apart from such events, contemporaneous stereotypes of the German personality may have been a source of inspiration for the text, as they were for "Simplicius Simplicissimus", a 1669 satirical picaresque novel written by Hans Jakob Christoffel von Grimmelshausen and inspired by the Thirty Years' War. The protagonist of this novel, who was supposed to embody stereotypically German characteristics, is quite similar to the protagonist of "Candide." These stereotypes, according to Voltaire biographer Alfred Owen Aldridge, include "extreme credulousness or sentimental simplicity", two of Candide's and Simplicius's defining qualities. Aldridge writes, "Since Voltaire admitted familiarity with fifteenth-century German authors who used a bold and buffoonish style, it is quite possible that he knew "Simplicissimus" as well." A satirical and parodic precursor of "Candide", Jonathan Swift's "Gulliver's Travels" (1726) is one of "Candide"'s closest literary relatives. This satire tells the story of "a gullible ingenue", Gulliver, who (like Candide) travels to several "remote nations" and is hardened by the many misfortunes which befall him. As evidenced by similarities between the two books, Voltaire probably drew upon "Gulliver's Travels" for inspiration while writing "Candide". Other probable sources of inspiration for "Candide" are "Télémaque" (1699) by François Fénelon and "Cosmopolite" (1753) by Louis-Charles Fougeret de Monbron. "Candide"'s parody of the "bildungsroman" is probably based on "Télémaque", which includes the prototypical parody of the tutor on whom Pangloss may have been partly based. Likewise, Monbron's protagonist undergoes a disillusioning series of travels similar to those of Candide. Born François-Marie Arouet, Voltaire (1694–1778), by the time of the Lisbon earthquake, was already a well-established author, known for his satirical wit. He had been made a member of the Académie Française in 1746. He was a deist, a strong proponent of religious freedom, and a critic of tyrannical governments. "Candide" became part of his large, diverse body of philosophical, political and artistic works expressing these views. More specifically, it was a model for the eighteenth- and early nineteenth-century novels called the "contes philosophiques". This genre, of which Voltaire was one of the founders, included previous works of his such as "Zadig" and "Micromegas". It is unknown exactly when Voltaire wrote "Candide", but scholars estimate that it was primarily composed in late 1758 and begun as early as 1757. Voltaire is believed to have written a portion of it while living at Les Délices near Geneva and also while visiting Charles Théodore, the Elector Palatine, at Schwetzingen for three weeks in the summer of 1758. Despite solid evidence for these claims, a popular legend persists that Voltaire wrote "Candide" in three days. This idea is probably based on a misreading of the 1885 work "La Vie intime de Voltaire aux Délices et à Ferney" by Lucien Perey (real name: Clara Adèle Luce Herpin) and Gaston Maugras. 
The evidence indicates strongly that Voltaire did not rush or improvise "Candide", but worked on it over a significant period of time, possibly even a whole year. "Candide" is mature and carefully developed, not impromptu, as the intentionally choppy plot and the aforementioned myth might suggest. There is only one extant manuscript of "Candide" that was written before the work's 1759 publication; it was discovered in 1956 by Wade and since named the "La Vallière Manuscript". It is believed to have been sent, chapter by chapter, by Voltaire to the Duke and Duchess La Vallière in the autumn of 1758. The manuscript was sold to the Bibliothèque de l'Arsenal in the late eighteenth century, where it remained undiscovered for almost two hundred years. The "La Vallière Manuscript", the most original and authentic of all surviving copies of "Candide", was probably dictated by Voltaire to his secretary, Jean-Louis Wagnière, then edited directly. In addition to this manuscript, there is believed to have been another, one copied by Wagnière for the Elector Charles-Théodore, who hosted Voltaire during the summer of 1758. The existence of this copy was first postulated by Norman L. Torrey in 1929. If it exists, it remains undiscovered. Voltaire published "Candide" simultaneously in five countries no later than 15 January 1759, although the exact date is uncertain. Seventeen versions of "Candide" from 1759, in the original French, are known today, and there has been great controversy over which is the earliest. More versions were published in other languages: "Candide" was translated once into Italian and thrice into English that same year. The complicated science of calculating the relative publication dates of all of the versions of "Candide" is described at length in Wade's article "The First Edition of "Candide": A Problem of Identification". The publication process was extremely secretive, probably the "most clandestine work of the century", because of the book's obviously illicit and irreverent content. The greatest number of copies of "Candide" were published concurrently in Geneva by Cramer, in Amsterdam by Marc-Michel Rey, in London by Jean Nourse, and in Paris by Lambert. "Candide" underwent one major revision after its initial publication, in addition to some minor ones. In 1761, a version of "Candide" was published that included, along with several minor changes, a major addition by Voltaire to the twenty-second chapter, a section that had been thought weak by the Duke de La Vallière. The English title of this edition was "Candide, or Optimism, Translated from the German of Dr. Ralph. With the additions found in the Doctor's pocket when he died at Minden, in the Year of Grace 1759." The last edition of "Candide" authorised by Voltaire was the one included in Cramer's 1775 edition of his complete works, known as "l'édition encadrée", in reference to the border or frame around each page. Voltaire strongly opposed the inclusion of illustrations in his works, as he stated in a 1778 letter to the writer and publisher Charles Joseph Panckoucke. Despite this protest, two sets of illustrations for "Candide" were produced by the French artist Jean-Michel Moreau le Jeune. The first version was done, at Moreau's own expense, in 1787 and included in Kehl's publication of that year, "Oeuvres Complètes de Voltaire". Four images were drawn by Moreau for this edition and were engraved by Pierre-Charles Baquoy. 
The second version, in 1803, consisted of seven drawings by Moreau which were transposed by multiple engravers. The twentieth-century modern artist Paul Klee stated that it was while reading "Candide" that he discovered his own artistic style. Klee illustrated the work, and his drawings were published in a 1920 version edited by Kurt Wolff. "Candide" contains thirty episodic chapters, which may be grouped into two main schemes: one consists of two divisions, separated by the protagonist's hiatus in El Dorado; the other consists of three parts, each defined by its geographical setting. By the former scheme, the first half of "Candide" constitutes the rising action and the last part the resolution. This view is supported by the strong theme of travel and quest, reminiscent of adventure and picaresque novels, which tend to employ such a dramatic structure. By the latter scheme, the thirty chapters may be grouped into three parts each comprising ten chapters and defined by locale: I–X are set in Europe, XI–XX are set in the Americas, and XXI–XXX are set in Europe and the Ottoman Empire. The plot summary that follows uses this second format and includes Voltaire's additions of 1761. The tale of "Candide" begins in the castle of the Baron Thunder-ten-Tronckh in Westphalia, home to: the Baron's daughter, Lady Cunégonde; his bastard nephew, Candide; a tutor, Pangloss; a chambermaid, Paquette; and the rest of the Baron's family. The protagonist, Candide, is romantically attracted to Cunégonde. He is a young man of "the most unaffected simplicity" ("l'esprit le plus simple"), whose face is "the true index of his mind" ("sa physionomie annonçait son âme"). Dr. Pangloss, professor of ""métaphysico-théologo-cosmolonigologie"" (English: "metaphysico-theologo-cosmoronology") and self-proclaimed optimist, teaches his pupils that they live in the "best of all possible worlds" and that "all is for the best". All is well in the castle until Cunégonde sees Pangloss sexually engaged with Paquette in some bushes. Encouraged by this show of affection, Cunégonde drops her handkerchief next to Candide, enticing him to kiss her. For this infraction, Candide is evicted from the castle, at which point he is captured by Bulgar (Prussian) recruiters and coerced into military service, where he is flogged, nearly executed, and forced to participate in a major battle between the Bulgars and the Avars (an allegory representing the Prussians and the French). Candide eventually escapes the army and makes his way to Holland where he is given aid by Jacques, an Anabaptist, who strengthens Candide's optimism. Soon after, Candide finds his master Pangloss, now a beggar with syphilis. Pangloss reveals he was infected with this disease by Paquette and shocks Candide by relating how Castle Thunder-ten-Tronckh was destroyed by Bulgars, that Cunégonde and her whole family were killed, and that Cunégonde was raped before her death. Pangloss is cured of his illness by Jacques, losing one eye and one ear in the process, and the three set sail to Lisbon. In Lisbon's harbor, they are overtaken by a vicious storm which destroys the boat. Jacques attempts to save a sailor, and in the process is thrown overboard. The sailor makes no move to help the drowning Jacques, and Candide is in a state of despair until Pangloss explains to him that Lisbon harbor was created in order for Jacques to drown. 
Only Pangloss, Candide, and the "brutish sailor" who let Jacques drown survive the wreck and reach Lisbon, which is promptly hit by an earthquake, tsunami and fire that kill tens of thousands. The sailor leaves in order to loot the rubble while Candide, injured and begging for help, is lectured on the optimistic view of the situation by Pangloss. The next day, Pangloss discusses his optimistic philosophy with a member of the Portuguese Inquisition, and he and Candide are arrested for heresy, set to be tortured and killed in an "auto-da-fé" set up to appease God and prevent another disaster. Candide is flogged and sees Pangloss hanged, but another earthquake intervenes and he escapes. He is approached by an old woman, who leads him to a house where Lady Cunégonde waits, alive. Candide is surprised: Pangloss had told him that Cunégonde had been raped and disemboweled. She had been, but Cunégonde points out that people survive such things. However, her rescuer sold her to a Jewish merchant, Don Issachar, who was then threatened by a corrupt Grand Inquisitor into sharing her (Don Issachar gets Cunégonde on Mondays, Wednesdays, and the sabbath day). Her owners arrive, find her with another man, and Candide kills them both. Candide and the two women flee the city, heading to the Americas. Along the way, Cunégonde falls into self-pity, complaining of all the misfortunes that have befallen her. The old woman reciprocates by revealing her own tragic life: born the daughter of Pope Urban X and the Princess of Palestrina, she was raped and enslaved by African pirates, witnessed violent civil wars in Morocco under the bloodthirsty King Moulay Ismaïl (during which her mother was drawn and quartered), suffered further slavery and famine, nearly died from a plague in Algiers, and had a buttock cut off to feed starving Janissaries during the Russian siege of Azov. After traversing the whole of the Russian Empire, she eventually became a servant of Don Issachar and met Cunégonde. The trio arrives in Buenos Aires, where Governor Don Fernando d'Ibarra y Figueroa y Mascarenes y Lampourdos y Souza asks to marry Cunégonde. Just then, an alcalde (a Spanish magistrate) arrives, pursuing Candide for killing the Grand Inquisitor. Leaving the women behind, Candide flees to Paraguay with his practical and heretofore unmentioned manservant, Cacambo. At a border post on the way to Paraguay, Cacambo and Candide speak to the commandant, who turns out to be Cunégonde's unnamed brother. He explains that after his family was slaughtered, the Jesuits' preparation for his burial revived him, and he has since joined the order. When Candide proclaims he intends to marry Cunégonde, her brother attacks him, and Candide runs him through with his rapier. After lamenting all the people (mainly priests) he has killed, he and Cacambo flee. In their flight, Candide and Cacambo come across two naked women being chased and bitten by a pair of monkeys. Candide, seeking to protect the women, shoots and kills the monkeys, but is informed by Cacambo that the monkeys and women were probably lovers. Cacambo and Candide are captured by the Oreillons, or Orejones, members of the Inca nobility who widened the lobes of their ears and who are depicted here as the fictional inhabitants of the area. Mistaking Candide for a Jesuit by his robes, the Oreillons prepare to cook Candide and Cacambo; however, Cacambo convinces the Oreillons that Candide killed a Jesuit to procure the robe. 
Cacambo and Candide are released and travel for a month on foot and then down a river by canoe, living on fruits and berries. After a few more adventures, Candide and Cacambo wander into El Dorado, a geographically isolated utopia where the streets are covered with precious stones, there exist no priests, and all of the king's jokes are funny. Candide and Cacambo stay a month in El Dorado, but Candide is still in pain without Cunégonde, and expresses to the king his wish to leave. The king points out that this is a foolish idea, but generously helps them do so. The pair continue their journey, now accompanied by one hundred red pack sheep carrying provisions and incredible sums of money, which they slowly lose or have stolen over the next few adventures. Candide and Cacambo eventually reach Suriname, where they split up: Cacambo travels to Buenos Aires to retrieve Lady Cunégonde, while Candide prepares to travel to Europe to await the two. Candide's remaining sheep are stolen, and Candide is fined heavily by a Dutch magistrate for petulance over the theft. Before leaving Suriname, Candide feels in need of companionship, so he interviews a number of local men who have been through various ill-fortunes and settles on a man named Martin. This companion, Martin, is a Manichaean scholar based on the real-life pessimist Pierre Bayle, who was a chief opponent of Leibniz. For the remainder of the voyage, Martin and Candide argue about philosophy, Martin painting the entire world as occupied by fools. Candide, however, remains an optimist at heart, since it is all he knows. After a detour to Bordeaux and Paris, they arrive in England and see an admiral (based on Admiral Byng) being shot for not killing enough of the enemy. Martin explains that Britain finds it necessary to shoot an admiral from time to time "pour l'encouragement des autres" (to encourage the others). Candide, horrified, arranges for them to leave Britain immediately. Upon their arrival in Venice, Candide and Martin meet Paquette, the chambermaid who infected Pangloss with his syphilis. She is now a prostitute, and is spending her time with a Theatine monk, Brother Giroflée. Although both appear happy on the surface, they reveal their despair: Paquette has led a miserable existence as a sexual object, and the monk detests the religious order in which he was indoctrinated. Candide gives two thousand piastres to Paquette and one thousand to Brother Giroflée. Candide and Martin visit the Lord Pococurante, a noble Venetian. That evening, Cacambo—now a slave—arrives and informs Candide that Cunégonde is in Constantinople. Prior to their departure, Candide and Martin dine with six strangers who had come for the Carnival of Venice. These strangers are revealed to be dethroned kings: the Ottoman Sultan Ahmed III, Emperor Ivan VI of Russia, Charles Edward Stuart (an unsuccessful pretender to the English throne), Augustus III of Poland, Stanisław Leszczyński, and Theodore of Corsica. On the way to Constantinople, Cacambo reveals that Cunégonde—now horribly ugly—currently washes dishes on the banks of the Propontis as a slave for a Transylvanian prince by the name of Rákóczi. After arriving at the Bosphorus, they board a galley where, to Candide's surprise, he finds Pangloss and Cunégonde's brother among the rowers. Candide buys their freedom and further passage at steep prices. 
The baron and Pangloss relate how they survived, but despite the horrors he has been through, Pangloss's optimism remains unshaken: "I still hold to my original opinions, because, after all, I'm a philosopher, and it wouldn't be proper for me to recant, since Leibniz cannot be wrong, and since pre-established harmony is the most beautiful thing in the world, along with the plenum and subtle matter." Candide, the baron, Pangloss, Martin, and Cacambo arrive at the banks of the Propontis, where they rejoin Cunégonde and the old woman. Cunégonde has indeed become hideously ugly, but Candide nevertheless buys their freedom and marries Cunégonde to spite her brother, who forbids Cunégonde from marrying anyone but a baron of the Empire (he is secretly sold back into slavery). Paquette and Brother Giroflée—having squandered their three thousand piastres—are reconciled with Candide on a small farm ("une petite métairie") which he just bought with the last of his finances. One day, the protagonists seek out a dervish known as a great philosopher of the land. Pangloss asks him why Man is made to suffer so, and what they all ought to do. The dervish responds by asking rhetorically why Pangloss is concerned about the existence of evil and good. The dervish describes human beings as mice on a ship sent by a king to Egypt; their comfort does not matter to the king. The dervish then slams his door on the group. Returning to their farm, Candide, Pangloss, and Martin meet a Turk whose philosophy is to devote his life only to simple work and not concern himself with external affairs. He and his four children cultivate a small area of land, and the work keeps them "free of three great evils: boredom, vice, and poverty." Candide, Pangloss, Martin, Cunégonde, Paquette, Cacambo, the old woman, and Brother Giroflée all set to work on this "commendable plan" ("louable dessein") on their farm, each exercising his or her own talents. Candide ignores Pangloss's insistence that all turned out for the best by necessity, instead telling him "we must cultivate our garden" ("il faut cultiver notre jardin"). As Voltaire himself described it, the purpose of "Candide" was to "bring amusement to a small number of men of wit". The author achieves this goal by combining his sharp wit with a fun parody of the classic adventure-romance plot. Candide is confronted with horrible events described in painstaking detail so often that it becomes humorous. Literary theorist Frances K. Barasch described Voltaire's matter-of-fact narrative as treating topics such as mass death "as coolly as a weather report". The fast-paced and improbable plot—in which characters narrowly escape death repeatedly, for instance—allows for compounding tragedies to befall the same characters over and over again. In the end, "Candide" is primarily, as described by Voltaire's biographer Ian Davidson, "short, light, rapid and humorous". Behind the playful façade of "Candide" which has amused so many, there lies very harsh criticism of contemporary European civilization which angered many others. European governments such as France, Prussia, Portugal and England are each attacked ruthlessly by the author: the French and Prussians for the Seven Years' War, the Portuguese for their Inquisition, and the British for the execution of John Byng. Organised religion, too, is harshly treated in "Candide". For example, Voltaire mocks the Jesuit order of the Roman Catholic Church. 
Aldridge provides a characteristic example of such anti-clerical passages for which the work was banned: while in Paraguay, Cacambo remarks, "[The Jesuits] are masters of everything, and the people have no money at all …". Here, Voltaire suggests the Christian mission in Paraguay is taking advantage of the local population. Voltaire depicts the Jesuits holding the indigenous peoples as slaves while they claim to be helping them. The main method of "Candide"'s satire is to contrast ironically great tragedy and comedy. The story does not invent or exaggerate evils of the world—it displays real ones starkly, allowing Voltaire to simplify subtle philosophies and cultural traditions, highlighting their flaws. Thus "Candide" derides optimism, for instance, with a deluge of horrible, historical (or at least plausible) events with no apparent redeeming qualities. A simple example of the satire of "Candide" is seen in the treatment of the historic event witnessed by Candide and Martin in Portsmouth harbour. There, the duo spy an anonymous admiral, supposed to represent John Byng, being executed for failing to properly engage a French fleet. The admiral is blindfolded and shot on the deck of his own ship, merely "to encourage the others" (Fr. ""pour encourager les autres""). This depiction of military punishment trivializes Byng's death. The dry, pithy explanation "to encourage the others" thus satirises a serious historical event in characteristically Voltairian fashion. For its classic wit, this phrase has become one of the more often quoted from "Candide". Voltaire depicts the worst of the world and his pathetic hero's desperate effort to fit it into an optimistic outlook. Almost all of "Candide" is a discussion of various forms of evil: its characters rarely find even temporary respite. There is at least one notable exception: the episode of El Dorado, a fantastic village in which the inhabitants are simply rational, and their society is just and reasonable. The positivity of El Dorado may be contrasted with the pessimistic attitude of most of the book. Even in this case, the bliss of El Dorado is fleeting: Candide soon leaves the village to seek Cunégonde, whom he eventually marries only out of a sense of obligation. Another element of the satire focuses on what William F. Bottiglia, author of many published works on "Candide", calls the "sentimental foibles of the age" and Voltaire's attack on them. Flaws in European culture are highlighted as "Candide" parodies adventure and romance clichés, mimicking the style of a picaresque novel. A number of archetypal characters thus have recognisable manifestations in Voltaire's work: Candide is supposed to be the drifting rogue of low social class, Cunégonde the sex interest, Pangloss the knowledgeable mentor and Cacambo the skilful valet. As the plot unfolds, readers find that Candide is no rogue, Cunégonde becomes ugly and Pangloss is a stubborn fool. The characters of "Candide" are unrealistic, two-dimensional, mechanical, and even marionette-like; they are simplistic and stereotypical. As the initially naïve protagonist eventually comes to a mature conclusion—however noncommittal—the novella is a "bildungsroman", if not a very serious one. Gardens are thought by many critics to play a critical symbolic role in "Candide". The first location commonly identified as a garden is the castle of the Baron, from which Candide and Cunégonde are evicted much in the same fashion as Adam and Eve are evicted from the Garden of Eden in the Book of Genesis. 
Cyclically, the main characters of "Candide" conclude the novel in a garden of their own making, one which might represent celestial paradise. The third most prominent "garden" is El Dorado, which may be a false Eden. Other possibly symbolic gardens include the Jesuit pavilion, the garden of Pococurante, Cacambo's garden, and the Turk's garden. These gardens are probably references to the Garden of Eden, but it has also been proposed, by Bottiglia, for example, that the gardens refer also to the "Encyclopédie", and that Candide's conclusion to cultivate "his garden" symbolises Voltaire's great support for this endeavour. Candide and his companions, as they find themselves at the end of the novella, are in a very similar position to Voltaire's tightly knit philosophical circle which supported the "Encyclopédie": the main characters of "Candide" live in seclusion to "cultivate [their] garden", just as Voltaire suggested his colleagues leave society to write. In addition, there is evidence in the epistolary correspondence of Voltaire that he had elsewhere used the metaphor of gardening to describe writing the "Encyclopédie". Another interpretative possibility is that Candide cultivating "his garden" suggests his engaging only in necessary occupations, such as feeding himself and fighting boredom. This is analogous to Voltaire's own view on gardening: he was himself a gardener at his estates in Les Délices and Ferney, and he often wrote in his correspondence that gardening was an important pastime of his own, it being an extraordinarily effective way to keep busy. "Candide" satirises various philosophical and religious theories that Voltaire had previously criticised. Primary among these is Leibnizian optimism (sometimes called "Panglossianism" after its fictional proponent), which Voltaire ridicules with descriptions of seemingly endless calamity. Voltaire demonstrates a variety of irredeemable evils in the world, leading many critics to contend that Voltaire's treatment of evil—specifically the theological problem of its existence—is the focus of the work. Heavily referenced in the text are the Lisbon earthquake, disease, and the sinking of ships in storms. Also, war, thievery, and murder—evils of human design—are explored as extensively in "Candide" as are environmental ills. Bottiglia notes Voltaire is "comprehensive" in his enumeration of the world's evils. He is unrelenting in attacking Leibnizian optimism. Fundamental to Voltaire's attack is Candide's tutor Pangloss, a self-proclaimed follower of Leibniz and a teacher of his doctrine. Ridicule of Pangloss's theories thus ridicules Leibniz himself, and Pangloss's reasoning is silly at best. For example, Pangloss's first teachings of the narrative absurdly mix up cause and effect. Following such flawed reasoning even more doggedly than Candide, Pangloss defends optimism. Whatever their horrendous fortune, Pangloss reiterates "all is for the best" (Fr. "Tout est pour le mieux") and proceeds to "justify" the evil event's occurrence. A characteristic example of such theodicy is found in Pangloss's explanation of why it is good that syphilis exists. Candide, the impressionable and incompetent student of Pangloss, often tries to justify evil, fails, invokes his mentor and eventually despairs. It is by these failures that Candide is painfully cured (as Voltaire would see it) of his optimism. This critique of Voltaire's seems to be directed almost exclusively at Leibnizian optimism. 
"Candide" does not ridicule Voltaire's contemporary Alexander Pope, a later optimist of slightly different convictions. "Candide" does not discuss Pope's optimistic principle that "all is right", but Leibniz's that states, "this is the best of all possible worlds". However subtle the difference between the two, "Candide" is unambiguous as to which is its subject. Some critics conjecture that Voltaire meant to spare Pope this ridicule out of respect, although Voltaire's "Poème" may have been written as a more direct response to Pope's theories. This work is similar to "Candide" in subject matter, but very different from it in style: the "Poème" embodies a more serious philosophical argument than "Candide". The conclusion of the novel, in which Candide finally dismisses his tutor's optimism, leaves unresolved what philosophy the protagonist is to accept in its stead. This element of "Candide" has been written about voluminously, perhaps above all others. The conclusion is enigmatic and its analysis is contentious. Voltaire develops no formal, systematic philosophy for the characters to adopt. The conclusion of the novel may be thought of not as a philosophical alternative to optimism, but as a prescribed practical outlook (though "what" it prescribes is in dispute). Many critics have concluded that one minor character or another is portrayed as having the right philosophy. For instance, a number believe that Martin is treated sympathetically, and that his character holds Voltaire's ideal philosophy—pessimism. Others disagree, citing Voltaire's negative descriptions of Martin's principles and the conclusion of the work in which Martin plays little part. Within debates attempting to decipher the conclusion of "Candide" lies another primary "Candide" debate. This one concerns the degree to which Voltaire was advocating a pessimistic philosophy, by which Candide and his companions give up hope for a better world. Critics argue that the group's reclusion on the farm signifies Candide and his companions' loss of hope for the rest of the human race. This view is to be compared to a reading that presents Voltaire as advocating a melioristic philosophy and a precept committing the travellers to improving the world through metaphorical gardening. This debate, and others, focuses on the question of whether or not Voltaire was prescribing passive retreat from society, or active industrious contribution to it. Separate from the debate about the text's conclusion is the "inside/outside" controversy. This argument centers on the matter of whether or not Voltaire was actually prescribing anything. Roy Wolper, professor emeritus of English, argues in a revolutionary 1969 paper that "Candide" does not necessarily speak for its author; that the work should be viewed as a narrative independent of Voltaire's history; and that its message is entirely (or mostly) "inside" it. This point of view, the "inside", specifically rejects attempts to find Voltaire's "voice" in the many characters of "Candide" and his other works. Indeed, writers have seen Voltaire as speaking through at least Candide, Martin, and the Turk. Wolper argues that "Candide" should be read with a minimum of speculation as to its meaning in Voltaire's personal life. His article ushered in a new era of Voltaire studies, causing many scholars to look at the novel differently. 
Critics such as Lester Crocker, Henry Stavan, and Vivienne Mylne find too many similarities between "Candide"'s point of view and that of Voltaire to accept the "inside" view; they support the "outside" interpretation. They believe that Candide's final decision is the same as Voltaire's, and see a strong connection between the development of the protagonist and his author. Some scholars who support the "outside" view also believe that the isolationist philosophy of the Old Turk closely mirrors that of Voltaire. Others see a strong parallel between Candide's gardening at the conclusion and the gardening of the author. Martine Darmon Meyer argues that the "inside" view fails to see the satirical work in context, and that denying that "Candide" is primarily a mockery of optimism (a matter of historical context) is a "very basic betrayal of the text". Though Voltaire did not openly admit to having written the controversial "Candide" until 1768 (until then he signed with a pseudonym: "Monsieur le docteur Ralph", or "Doctor Ralph"), his authorship of the work was hardly disputed. Immediately after publication, the work and its author were denounced by both secular and religious authorities, because the book openly derides government and church alike. It was because of such polemics that Omer-Louis-François Joly de Fleury, who was Advocate General to the Parisian parliament when "Candide" was published, found parts of "Candide" to be "contrary to religion and morals". Despite much official indictment, soon after its publication, "Candide"'s irreverent prose was being quoted. "Let us eat a Jesuit", for instance, became a popular phrase for its reference to a humorous passage in "Candide". By the end of February 1759, the Grand Council of Geneva and the administrators of Paris had banned "Candide". "Candide" nevertheless succeeded in selling twenty thousand to thirty thousand copies by the end of the year in over twenty editions, making it a best seller. The Duke de La Vallière speculated near the end of January 1759 that "Candide" might have been the fastest-selling book ever. In 1762, "Candide" was listed in the "Index Librorum Prohibitorum", the Roman Catholic Church's list of prohibited books. Bannings of "Candide" lasted into the twentieth century in the United States, where it has long been considered a seminal work of Western literature. At least once, "Candide" was temporarily barred from entering America: in February 1929, a US customs official in Boston prevented a number of copies of the book, deemed "obscene", from reaching a Harvard University French class. "Candide" was admitted in August of the same year; however by that time the class was over. In an interview soon after "Candide"'s detention, the official who confiscated the book explained the office's decision to ban it, "But about 'Candide,' I'll tell you. For years we've been letting that book get by. There were so many different editions, all sizes and kinds, some illustrated and some plain, that we figured the book must be all right. Then one of us happened to read it. It's a filthy book". "Candide" is the most widely read of Voltaire's many works, and it is considered one of the great achievements of Western literature. However, "Candide" is not necessarily considered a true "classic". According to Bottiglia, "The physical size of "Candide", as well as Voltaire's attitude toward his fiction, precludes the achievement of artistic dimension through plenitude, autonomous '3D' vitality, emotional resonance, or poetic exaltation. 
"Candide", then, cannot in quantity or quality, measure up to the supreme classics." Bottiglia instead calls it a miniature classic, though others are more forgiving of its size. As the only work of Voltaire which has remained popular up to the present day, "Candide" is listed in Harold Bloom's "". It is included in the Encyclopædia Britannica collection "Great Books of the Western World". "Candide" has influenced modern writers of black humour such as Céline, Joseph Heller, John Barth, Thomas Pynchon, Kurt Vonnegut, and Terry Southern. Its parody and picaresque methods have become favourites of black humorists. Charles Brockden Brown, an early American novelist, may have been directly affected by Voltaire, whose work he knew well. Mark Kamrath, professor of English, describes the strength of the connection between "Candide" and "Edgar Huntly; or, Memoirs of a Sleep-Walker" (1799): "An unusually large number of parallels...crop up in the two novels, particularly in terms of characters and plot." For instance, the protagonists of both novels are romantically involved with a recently orphaned young woman. Furthermore, in both works the brothers of the female lovers are Jesuits, and each is murdered (although under different circumstances). Some twentieth-century novels that may have been influenced by "Candide" are dystopian science-fiction works. Armand Mattelart, a French critic, sees "Candide" in Aldous Huxley's "Brave New World", George Orwell's "Nineteen Eighty-Four" and Yevgeny Zamyatin's "We", three canonical works of the genre. Specifically, Mattelart writes that in each of these works, there exist references to "Candide"'s popularisation of the phrase "the best of all possible worlds". He cites as evidence, for example, that the French version of "Brave New World" was entitled "Le Meilleur des mondes" (En. literally "The best of worlds"). Readers of "Candide" often compare it with certain works of the modern genre the Theatre of the Absurd. Haydn Mason, a Voltaire scholar, sees in "Candide" a few similarities to this brand of literature. For instance, he notes commonalities of "Candide" and "Waiting for Godot" (1952). In both of these works, and in a similar manner, friendship provides emotional support for characters when they are confronted with harshness of their existences. However, Mason qualifies, "the "conte" must not be seen as a forerunner of the 'absurd' in modern fiction. Candide's world has many ridiculous and meaningless elements, but human beings are not totally deprived of the ability to make sense out of it." John Pilling, biographer of Beckett, does state that "Candide" was an early and powerful influence on Beckett's thinking. The American alternative rock band Bloodhound Gang refer to "Candide" in their song "Take the Long Way Home", from the American edition of their 1999 album "Hooray for Boobies". In 1760, one year after Voltaire published "Candide", a sequel was published with the name "Candide, ou l'optimisme, seconde partie". This work is attributed both to Thorel de Campigneulles, a writer unknown today, and Henri Joseph Du Laurens, who is suspected of having habitually plagiarised Voltaire. The story continues in this sequel with Candide having new adventures in the Ottoman Empire, Persia, and Denmark. "Part II" has potential use in studies of the popular and literary receptions of "Candide", but is almost certainly apocryphal. 
In total, by the year 1803, at least ten imitations of "Candide" or continuations of its story were published by authors other than Voltaire. The operetta "Candide" was originally conceived by playwright Lillian Hellman, as a play with incidental music. Leonard Bernstein, the American composer and conductor who wrote the music, was so excited about the project that he convinced Hellman to do it as a "comic operetta". Many lyricists worked on the show, including James Agee, Dorothy Parker, John Latouche, Richard Wilbur, Leonard and Felicia Bernstein, Stephen Sondheim and Hellman. Hershy Kay orchestrated all the pieces except for the overture, which Bernstein did himself. "Candide" first opened on Broadway as a musical on 1 December 1956. The premiere production was directed by Tyrone Guthrie and conducted by Samuel Krachmalnick. While this production was a box office flop, the music was highly praised, and an original cast album was made. The album gradually became a cult hit, but Hellman's libretto was criticised as being too serious an adaptation of Voltaire's novel. "Candide" found greater popularity seventeen years later with a new libretto by Hugh Wheeler. "Candido" (1977) is a book by Leonardo Sciascia. It was at least partly based on Voltaire's "Candide", although the actual influence of "Candide" on "Candido" is a hotly debated topic. A number of theories on the matter have been proposed. Proponents of one say that "Candido" is very similar to "Candide", only with a happy ending; supporters of another claim that Voltaire provided Sciascia with only a starting point from which to work, and that the two books are quite distinct. The BBC produced a television adaptation in 1973, with Ian Ogilvy as Candide and Frank Finlay as Voltaire himself, acting as the narrator. Nedim Gürsel wrote his 2001 novel "Le voyage de Candide à Istanbul" about a minor passage in "Candide" during which its protagonist meets Ahmed III, the deposed Turkish sultan. This chance meeting on a ship from Venice to Istanbul is the setting of Gürsel's book. Terry Southern, in writing his popular novel "Candy" with Mason Hoffenberg, adapted "Candide" for a modern audience and changed the protagonist from male to female. "Candy" deals with the rejection of a sort of optimism which the author sees in women's magazines of the modern era; "Candy" also parodies pornography and popular psychology. This adaptation of "Candide" was itself adapted for the cinema by director Christian Marquand in 1968. In addition to the above, "Candide" was made into a number of minor films and theatrical adaptations throughout the twentieth century. For a list of these, see "Voltaire: Candide ou L'Optimisme et autres contes" (1989) with preface and commentaries by Pierre Malandain. In May 2009, a play called "Optimism", based on "Candide", opened at the CUB Malthouse Theatre in Melbourne. It followed the basic storyline of "Candide", incorporating anachronisms, music and stand-up comedy from comedian Frank Woodley. It toured Australia and played at the Edinburgh International Festival. In 2010, the Icelandic writer Óttar M. Norðfjörð published a rewriting and modernisation of "Candide", entitled "Örvitinn; eða hugsjónamaðurinn". 
Cane toad The cane toad ("Rhinella marina"), also known as the giant neotropical toad or marine toad, is a large, terrestrial true toad native to South and mainland Central America, but which has been introduced to various islands throughout Oceania and the Caribbean, as well as Northern Australia. It is the world's largest toad. It is a member of the genus "Rhinella", but was formerly in the genus "Bufo", which includes many different true toad species found throughout Central and South America. The cane toad is a prolific breeder; females lay single-clump spawns with thousands of eggs. Its reproductive success is partly because of opportunistic feeding: it has a diet, unusual among anurans, of both dead and living matter. Adults average in length; the largest recorded specimen had a snout-vent length of . The cane toad is an old species. A fossil toad (specimen UCMP 41159) from the La Venta fauna of the late Miocene of Colombia is indistinguishable from modern cane toads from northern South America. It was discovered in a floodplain deposit, which suggests the "R. marina" habitat preferences have long been for open areas. The cane toad has poison glands, and the tadpoles are highly toxic to most animals if ingested. Because of its voracious appetite, the cane toad has been introduced to many regions of the Pacific and the Caribbean islands as a method of agricultural pest control. The species derives its common name from its use against the cane beetle ("Dermolepida albohirtum"). The cane toad is now considered a pest and an invasive species in many of its introduced regions; of particular concern is its toxic skin, which kills many animals, both wild and domesticated. Cane toads are particularly dangerous to dogs. Historically, the cane toads were used to eradicate pests from sugarcane, giving rise to their common name. The cane toad has many other common names, including "giant toad" and "marine toad"; the former refers to its size, and the latter to the binomial name, "R. marina". It was one of many species described by Linnaeus in his 18th-century work "Systema Naturae" (1758). Linnaeus based the specific epithet "marina" on an illustration by Dutch zoologist Albertus Seba, who mistakenly believed the cane toad to inhabit both terrestrial and marine environments. Other common names include "giant neotropical toad", "Dominican toad", "giant marine toad", and "South American cane toad". In Trinidadian English, they are commonly called "crapaud", the French word for toad. "Rhinella" is now considered to constitute a distinct genus of its own, thus changing the scientific name of the cane toad. In this case, the specific name "marinus" (masculine) changes to "marina" (feminine) to conform with the rules of gender agreement as set out by the International Code of Zoological Nomenclature, changing the binomial name from "Bufo marinus" to "Rhinella marina"; the binomial "Rhinella marinus" was subsequently introduced as a synonym through misspelling by Pramuk, Robertson, Sites, and Noonan (2008). Though controversial (with many traditional herpetologists still using "Bufo marinus"), the binomial "Rhinella marina" is gaining in acceptance with such bodies as the IUCN, the Encyclopaedia of Life, and Amphibian Species of the World, and increasing numbers of scientific publications are adopting its usage. In Australia, the adults may be confused with large native frogs from the genera "Limnodynastes", "Cyclorana", and "Mixophyes". 
These species can be distinguished from the cane toad by the absence of large parotoid glands behind their eyes and the lack of a ridge between the nostril and the eye. Cane toads have been confused with the giant burrowing frog ("Heleioporus australiacus"), because both are large and warty in appearance; however, the latter can be readily distinguished from the former by its vertical pupils and its silver-grey (as opposed to gold) irises. Juvenile cane toads may be confused with species of the genus "Uperoleia", but their adult colleagues can be distinguished by the lack of bright colouring on the groin and thighs. In the United States, the cane toad closely resembles many bufonid species. In particular, it could be confused with the southern toad ("Bufo terrestris"), which can be distinguished by the presence of two bulbs in front of the parotoid glands. The cane toad is very large; the females are significantly longer than males, reaching an average length of , with a maximum of . Larger toads tend to be found in areas of lower population density. They have a life expectancy of 10 to 15 years in the wild, and can live considerably longer in captivity, with one specimen reportedly surviving for 35 years. The skin of the cane toad is dry and warty. It has distinct ridges above the eyes, which run down the snout. Individual cane toads can be grey, yellowish, red-brown, or olive-brown, with varying patterns. A large parotoid gland lies behind each eye. The ventral surface is cream-coloured and may have blotches in shades of black or brown. The pupils are horizontal and the irises golden. The toes have a fleshy webbing at their base, and the fingers are free of webbing. Typically, juvenile cane toads have smooth, dark skin, although some specimens have a red wash. Juveniles lack the adults' large parotoid glands, so they are usually less poisonous. The tadpoles are small and uniformly black, and are bottom-dwellers, tending to form schools. Tadpoles range from in length. The common name "marine toad" and the scientific name "Rhinella marina" suggest a link to marine life, but the adult cane toad is entirely terrestrial, only venturing to fresh water to breed. However, laboratory experiments suggest that tadpoles can tolerate salt concentrations equivalent to 15% of seawater (~5.4‰), and recent field observations found living tadpoles and toadlets at salinities of 27.5‰ on Coiba Island, Panama. The cane toad inhabits open grassland and woodland, and has displayed a "distinct preference" for areas modified by humans, such as gardens and drainage ditches. In their native habitats, the toads can be found in subtropical forests, although dense foliage tends to limit their dispersal. The cane toad begins life as an egg, which is laid as part of long strings of jelly in water. A female lays 8,000–25,000 eggs at once and the strings can stretch up to in length. The black eggs are covered by a membrane and their diameter is about . The rate at which an egg grows into a tadpole increases with temperature. Tadpoles typically hatch within 48 hours, but the period can vary from 14 hours to almost a week. This process usually involves thousands of tadpoles—which are small, black, and have short tails—forming into groups. Between 12 and 60 days are needed for the tadpoles to develop into juveniles, with four weeks being typical. Similarly to their adult counterparts, eggs and tadpoles are toxic to many animals. When they emerge, toadlets typically are about in length, and grow rapidly. 
While the rate of growth varies by region, time of year, and gender, an average initial growth rate of per day is seen, followed by an average rate of per day. Growth typically slows once the toads reach sexual maturity. This rapid growth is important for their survival; in the period between metamorphosis and subadulthood, the young toads lose the toxicity that protected them as eggs and tadpoles, but have yet to fully develop the parotoid glands that produce bufotoxin. Because they lack this key defence, only an estimated 0.5% of cane toads reach adulthood. As with rates of growth, the point at which the toads become sexually mature varies across different regions. In New Guinea, sexual maturity is reached by female toads with a snout–vent length between , while toads in Panama achieve maturity when they are between in length. In tropical regions, such as their native habitats, breeding occurs throughout the year, but in subtropical areas, breeding occurs only during warmer periods that coincide with the onset of the wet season. The cane toad is estimated to have a critical thermal maximum of and a minimum of around . The ranges can change due to adaptation to the local environment. The cane toad has a high tolerance to water loss; some can withstand a 52.6% loss of body water, allowing them to survive outside tropical environments. Most frogs identify prey by movement, and vision appears to be the primary method by which the cane toad detects prey; however, the cane toad can also locate food using its sense of smell. They eat a wide range of material; in addition to the normal prey of small rodents, reptiles, other amphibians, birds, and even bats and a range of invertebrates, they also eat plants, dog food, and household refuse. The skin of the adult cane toad is toxic, as well as the enlarged parotoid glands behind the eyes, and other glands across their backs. When the toads are threatened, their glands secrete a milky-white fluid known as bufotoxin. Components of bufotoxin are toxic to many animals; even human deaths have been recorded due to the consumption of cane toads. Bufotenin, one of the chemicals excreted by the cane toad, is classified as a class-1 drug under Australian law, alongside heroin and cannabis. The effects of bufotenin are thought to be similar to those of mild poisoning; the stimulation, which includes mild hallucinations, lasts for less than an hour. As the cane toad excretes bufotenin in small amounts, and other toxins in relatively large quantities, toad licking could result in serious illness or death. In addition to releasing toxin, the cane toad is capable of inflating its lungs, puffing up, and lifting its body off the ground to appear taller and larger to a potential predator. Poisonous sausages containing toad meat are being trialled in the Kimberley (Western Australia) to try to protect native animals from cane toads' deadly impact. The Western Australian Department of Environment and Conservation has been working with the University of Sydney to develop baits to train native animals not to eat the toads. By blending bits of toad with a nausea-inducing chemical, the baits train the animals to stay away from the amphibians. Researcher David Pearson says trials run in laboratories and in remote parts of the Kimberley region of WA are looking promising, although the baits will not solve the cane toad problem altogether. 
Many species prey on the cane toad and its tadpoles in its native habitat, including the broad-snouted caiman ("Caiman latirostris"), the banded cat-eyed snake ("Leptodeira annulata"), eels (family Anguillidae), various species of killifish, the rock flagtail ("Kuhlia rupestris"), some species of catfish (order Siluriformes), some species of ibis (subfamily Threskiornithinae), and "Paraponera clavata" (bullet ants). Predators outside the cane toad's native range include the whistling kite ("Haliastur sphenurus"), the rakali ("Hydromys chrysogaster"), the black rat ("Rattus rattus") and the water monitor ("Varanus salvator"). The tawny frogmouth ("Podargus strigoides") and the Papuan frogmouth ("Podargus papuensis") have been reported as feeding on cane toads; some Australian crows ("Corvus" spp.) have also learned strategies allowing them to feed on cane toads, such as using their beak to flip toads onto their back. Opossums of the genus "Didelphis" likely can eat cane toads with impunity. Meat ants are unaffected by the cane toads' toxins, and therefore are able to kill them. The cane toad's normal response to attack is to stand still and let its toxin kill the attacker, which allows the ants to attack and eat the toad. The cane toad is native to the Americas, and its range stretches from the Rio Grande Valley in South Texas to the central Amazon and southeastern Peru, and some of the continental islands near Venezuela (such as Trinidad and Tobago). This area encompasses both tropical and semiarid environments. The density of the cane toad is significantly lower within its native distribution than in places where it has been introduced. In South America, the density was recorded to be 20 adults per 100 m (109 yd) of shoreline, 1 to 2% of the density in Australia. The cane toad has been introduced to many regions of the world—particularly the Pacific—for the biological control of agricultural pests. These introductions have generally been well documented, and the cane toad may be one of the most studied of any introduced species. Before the early 1840s, the cane toad had been introduced into Martinique and Barbados, from French Guiana and Guyana. An introduction to Jamaica was made in 1844 in an attempt to reduce the rat population. Despite its failure to control the rodents, the cane toad was introduced to Puerto Rico in the early 20th century in the hope that it would counter a beetle infestation ravaging the sugarcane plantations. The Puerto Rican scheme was successful and halted the economic damage caused by the beetles, prompting scientists in the 1930s to promote it as an ideal solution to agricultural pests. As a result, many countries in the Pacific region emulated the lead of Puerto Rico and introduced the toad in the 1930s. There are introduced populations in Australia, Florida, Papua New Guinea, the Philippines, the Ogasawara, Ishigaki Island and the Daitō Islands of Japan, most Caribbean islands, Fiji and many other Pacific islands, including Hawaii. Since then, the cane toad has become a pest in many host countries, and poses a serious threat to native animals. Following the apparent success of the cane toad in eating the beetles threatening the sugarcane plantations of Puerto Rico, and the fruitful introductions into Hawaii and the Philippines, a strong push was made for the cane toad to be released in Australia to negate the pests ravaging the Queensland cane fields. As a result, 102 toads were collected from Hawaii and brought to Australia. 
After an initial release in August 1935, the Commonwealth Department of Health decided to ban future introductions until a study was conducted into the feeding habits of the toad. The study was completed in 1936 and the ban lifted, when large-scale releases were undertaken; by March 1937, 62,000 toadlets had been released into the wild. The toads became firmly established in Queensland, increasing exponentially in number and extending their range into the Northern Territory and New South Wales. In 2010, one was found on the far western coast in Broome, Western Australia. However, the toad was generally unsuccessful in reducing the targeted grey-backed cane beetles ("Dermolepida albohirtum"), in part because the cane fields provided insufficient shelter for the predators during the day, in part because the beetles live at the tops of sugar cane – and cane toads are not good climbers. Since its original introduction, the cane toad has had a particularly marked effect on Australian biodiversity. The population of a number of native predatory reptiles has declined, such as the varanid lizards "Varanus mertensi", "V. mitchelli", and "V. panoptes", the land snakes "Pseudechis australis" and "Acanthophis antarcticus", and the crocodile species "Crocodylus johnstoni"; in contrast, the population of the agamid lizard "Amphibolurus gilberti"—known to be a prey item of "V. panoptes"—has increased. The cane toad was introduced to various Caribbean islands to counter a number of pests infesting local crops. While it was able to establish itself on some islands, such as Barbados, Jamaica, and Puerto Rico, other introductions, such as in Cuba before 1900 and in 1946, and on the islands of Dominica and Grand Cayman, were unsuccessful. The earliest recorded introductions were to Barbados and Martinique. The Barbados introductions were focused on the biological control of pests damaging the sugarcane crops, and while the toads became abundant, they have done even less to control the pests than in Australia. The toad was introduced to Martinique from French Guiana before 1944 and became established. Today, they reduce the mosquito and mole cricket populations. A third introduction to the region occurred in 1884, when toads appeared in Jamaica, reportedly imported from Barbados to help control the rodent population. While they had no significant effect on the rats, they nevertheless became well established. Other introductions include the release on Antigua—possibly before 1916, although this initial population may have died out by 1934 and been reintroduced at a later date— and Montserrat, which had an introduction before 1879 that led to the establishment of a solid population, which was apparently sufficient to survive the Soufrière Hills volcano eruption in 1995. In 1920, the cane toad was introduced into Puerto Rico to control the populations of white-grub ("Phyllophaga" spp.), a sugarcane pest. Before this, the pests were manually collected by humans, so the introduction of the toad eliminated labor costs. A second group of toads was imported in 1923, and by 1932, the cane toad was well established. The population of white-grubs dramatically decreased, and this was attributed to the cane toad at the annual meeting of the International Sugar Cane Technologists in Puerto Rico. However, there may have been other factors. The six-year period after 1931—when the cane toad was most prolific, and the white-grub saw dramatic decline—saw the highest-ever rainfall for Puerto Rico. 
Nevertheless, the cane toad was assumed to have controlled the white-grub; this view was reinforced by a "Nature" article titled "Toads save sugar crop", and this led to large-scale introductions throughout many parts of the Pacific. The cane toad has been spotted in Carriacou and Dominica, the latter appearance occurring in spite of the failure of the earlier introductions. On September 8, 2013, the cane toad was also discovered on the island of New Providence in the Bahamas. The cane toad was first introduced deliberately into the Philippines in 1930 as a biological control agent of pests in sugarcane plantations. This was done after the success of the experimental introductions into Puerto Rico. It subsequently became the most ubiquitous amphibian in the islands. It still retains the common name of "bakî" or "kamprag" in the Visayan languages, a corruption of 'American frog', referring to its origins. It is also commonly known as "bullfrog" in Philippine English. The cane toad was introduced into Fiji to combat insects that infested sugarcane plantations. The introduction of the cane toad to the region was first suggested in 1933, following the successes in Puerto Rico and Hawaii. After considering the possible side effects, the national government of Fiji decided to release the toad in 1953, and 67 specimens were subsequently imported from Hawaii. Once the toads were established, a 1963 study concluded, as the toad's diet included both harmful and beneficial invertebrates, it was considered "economically neutral". Today, the cane toad can be found on all major islands in Fiji, although they tend to be smaller than their counterparts in other regions. The cane toad was introduced into New Guinea to control the hawk moth larvae eating sweet potato crops. The first release occurred in 1937 using toads imported from Hawaii, with a second release the same year using specimens from the Australian mainland. Evidence suggests a third release in 1938, consisting of toads being used for human pregnancy tests—many species of toad were found to be effective for this task, and were employed for about 20 years after the discovery was announced in 1948. Initial reports argued the toads were effective in reducing the levels of cutworms and sweet potato yields were thought to be improving. As a result, these first releases were followed by further distributions across much of the region, although their effectiveness on other crops, such as cabbages, has been questioned; when the toads were released at Wau, the cabbages provided insufficient shelter and the toads rapidly left the immediate area for the superior shelter offered by the forest. A similar situation had previously arisen in the Australian cane fields, but this experience was either unknown or ignored in New Guinea. The cane toad has since become abundant in rural and urban areas. The cane toad naturally exists in South Texas, but attempts (both deliberate and accidental) have been made to introduce the species to other parts of the country. These include introductions to Florida and to the islands of Hawaii, as well as largely unsuccessful introductions to Louisiana. Initial releases into Florida failed. Attempted introductions before 1936 and 1944, intended to control sugarcane pests, were unsuccessful as the toads failed to proliferate. Later attempts failed in the same way. 
However, the toad gained a foothold in the state after an accidental release by an importer at Miami International Airport in 1957, and deliberate releases by animal dealers in 1963 and 1964 established the toad in other parts of Florida. Today, the cane toad is well established in the state, from the Keys to north of Tampa, and it is gradually extending further northward. In Florida, the toad is regarded as a threat to native species and pets; so much so that the Florida Fish and Wildlife Conservation Commission recommends that residents kill them. Around 150 cane toads were introduced to Oahu in Hawaii in 1932, and the population swelled to 105,517 after 17 months. The toads were sent to the other islands, and more than 100,000 toads were distributed by July 1934; eventually over 600,000 were transported. Other than its use as a biological control for pests, the cane toad has been employed in a number of commercial and noncommercial applications. Traditionally, within the toad's natural range in South America, the Embera-Wounaan would "milk" the toads for their toxin, which was then employed as an arrow poison. The toxins may have been used as an entheogen by the Olmec people. The toad has been hunted as a food source in parts of Peru, and eaten after the careful removal of the skin and parotoid glands. When properly prepared, the meat of the toad is considered healthy and a source of omega-3 fatty acids. More recently, the toad's toxins have been used in a number of new ways: bufotenin has been used in Japan as an aphrodisiac and a hair restorer, and in cardiac surgery in China to lower the heart rates of patients. New research has suggested that the cane toad's poison may have some applications in treating prostate cancer. Other modern applications of the cane toad include pregnancy testing, as pets, laboratory research, and the production of leather goods. Pregnancy testing was conducted in the mid-20th century by injecting urine from a woman into a male toad's lymph sacs, and if spermatozoa appeared in the toad's urine, the patient was deemed to be pregnant. The tests using toads were faster than those employing mammals; the toads were easier to raise, and, although the initial 1948 discovery employed "Bufo arenarum" for the tests, it soon became clear that a variety of anuran species were suitable, including the cane toad. As a result, toads were employed in this task for around 20 years. As a laboratory animal, the cane toad is regarded as ideal; they are plentiful, and easy and inexpensive to maintain and handle. The use of the cane toad in experiments started in the 1950s, and by the end of the 1960s, large numbers were being collected and exported to high schools and universities. Since then, a number of Australian states have introduced or tightened importation regulations. Even dead toads have value. Cane toad skin has been made into leather and novelty items; stuffed cane toads, posed and accessorised, have found a home in the tourist market, and attempts have been made to produce fertiliser from their bodies. Cane toads pose a serious threat to native species when introduced to a new ecosystem. The cane toad is classified as an invasive species in over 20 countries, and there are multiple reports of its arrival in a new area being followed by a decline in local biodiversity. The best-documented region of the cane toad's invasion, and of its subsequent effect on native species, is Australia, where multiple surveys and observations of the toad's spread have been completed.
The best way to illustrate this effect is through the plight of the northern quoll, as well as Mertens' water monitor, a large lizard native to northern Australia. Two sites were chosen to study the effects of cane toads on the northern quoll: one at the Mary River ranger station, in the southern region of Kakadu National Park, and the other at the north end of the park. A third site, at the East Alligator ranger station, served as a control, where the cane toads would not interact with the northern quoll population. Monitoring of the quoll population began at the Mary River ranger station using radio tracking in 2002, months before the first cane toads arrived at the site. After the arrival of the cane toads, the population of northern quolls in the Mary River site plummeted between October and December 2002, and by March 2003, the northern quoll appeared to be extinct in this section of the park, as no northern quolls were caught in the trapping trips in the following two months. In contrast, the population of northern quolls in the control site at the East Alligator ranger station remained relatively constant, showing no signs of decline. The evidence from Kakadu National Park is compelling not only because the quoll population plummeted just months after the arrival of the cane toad, but also because 31% of mortalities within the quoll population in the Mary River region were attributed to lethal toxic ingestion, and there were no signs of disease, parasite infestation, or any other obvious changes at the site that could have caused such a rapid decline. The most obvious piece of evidence supporting the hypothesis that the invasion of the cane toads caused the local extinction of the northern quoll is that the closely monitored population of the control group, in the absence of cane toads, showed no signs of decline. In the case of Mertens' water monitor, only one region was monitored, but over the course of 18 months. This region is located 70 kilometers south of Darwin, at the Manton Dam Recreation Area. Within the Manton Dam Recreation Area, 14 sites were set up to survey the population of water monitors, measuring abundance and site occupancy at each one. Seven surveys were conducted, each running for four weeks and including 16 visits to each site, with each site sampled twice per day on two consecutive days in each of the four weeks. Site visits took place between 7:30 and 10:30 AM and between 4:00 and 7:00 PM, when "Varanus mertensi" can be seen basking on the shore or wrapped around a tree branch close to shore. The whole project lasted from December 2004 to May 2006, and yielded a total of 194 sightings of "Varanus mertensi" in 1568 site visits. Of the seven surveys, abundance was highest during the second survey, which took place in February 2005, two months into the project. Following this measurement, the abundance declined over the next four surveys, before dropping sharply after the second-to-last survey in February 2006. In the final survey, taken in May 2006, only two "Varanus mertensi" were observed. Cane toads were first recorded in the study region during the second survey, in February 2005, which was also when water monitor abundance was at its highest over the course of the study.
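The survey effort reported above is internally consistent, and the arithmetic is easy to check. The following minimal sketch (plain Python; all figures are taken from the survey description above) confirms that 14 sites, each visited 16 times in each of the 7 surveys, gives the reported 1568 site visits, and shows the resulting sighting rate.

# Quick arithmetic check of the Manton Dam survey effort described above.
sites = 14                     # shoreline survey sites at Manton Dam
surveys = 7                    # surveys between December 2004 and May 2006
visits_per_site = 2 * 2 * 4    # twice a day, on two consecutive days, in each of four weeks = 16

total_visits = sites * visits_per_site * surveys
sightings = 194                # total Varanus mertensi sightings reported

print(total_visits)                        # 1568, matching the reported figure
print(round(sightings / total_visits, 3))  # about 0.124 sightings per site visit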
Cane toad numbers stayed low for the first year after their arrival, and then rose sharply to a peak in the last survey, in May 2006. Comparing the two populations side by side shows clearly that the arrival of the cane toads had an immediate negative impact on "Varanus mertensi": the monitor population began to drop in February 2005, which was when the first cane toads entered the Manton Dam Recreation Area. At the end of the study, scattered populations of water monitors remained in the upper sites of Manton Dam, which suggests that local extinctions occurred at certain shoreline sites, but that a complete extinction of the population did not occur. Cottingley Fairies The Cottingley Fairies appear in a series of five photographs taken by Elsie Wright (1901–1988) and Frances Griffiths (1907–1986), two young cousins who lived in Cottingley, near Bradford in England. In 1917, when the first two photographs were taken, Elsie was 16 years old and Frances was 9. The pictures came to the attention of writer Sir Arthur Conan Doyle, who used them to illustrate an article on fairies he had been commissioned to write for the Christmas 1920 edition of "The Strand Magazine". Doyle, as a spiritualist, was enthusiastic about the photographs, and interpreted them as clear and visible evidence of psychic phenomena. Public reaction was mixed; some accepted the images as genuine, others believed that they had been faked. Interest in the Cottingley Fairies gradually declined after 1921. Both girls married and lived abroad for a time after they grew up, yet the photographs continued to hold the public imagination. In 1966 a reporter from the "Daily Express" newspaper traced Elsie, who had by then returned to the UK. Elsie left open the possibility that she believed she had photographed her thoughts, and the media once again became interested in the story. In the early 1980s Elsie and Frances admitted that the photographs were faked, using cardboard cutouts of fairies copied from a popular children's book of the time, but Frances maintained that the fifth and final photograph was genuine. The photographs and two of the cameras used are on display in the National Science and Media Museum in Bradford, England. In mid-1917 nine-year-old Frances Griffiths and her mother—both newly arrived in the UK from South Africa—were staying with Frances' aunt, Elsie Wright's mother, in the village of Cottingley in West Yorkshire; Elsie was then 16 years old. The two girls often played together beside the beck (stream) at the bottom of the garden, much to their mothers' annoyance, because they frequently came back with wet feet and clothes. Frances and Elsie said they only went to the beck to see the fairies, and to prove it, Elsie borrowed her father's camera, a Midg quarter-plate. The girls returned about 30 minutes later, "triumphant". Elsie's father, Arthur, was a keen amateur photographer, and had set up his own darkroom. The picture on the photographic plate he developed showed Frances behind a bush in the foreground, on which four fairies appeared to be dancing. Knowing his daughter's artistic ability, and that she had spent some time working in a photographer's studio, he dismissed the figures as cardboard cutouts. Two months later the girls borrowed his camera again, and this time returned with a photograph of Elsie sitting on the lawn holding out her hand to a gnome.
Exasperated by what he believed to be "nothing but a prank", and convinced that the girls must have tampered with his camera in some way, Arthur Wright refused to lend it to them again. His wife Polly, however, believed the photographs to be authentic. Towards the end of 1918, Frances sent a letter to Johanna Parvin, a friend in Cape Town, South Africa, where Frances had lived for most of her life, enclosing the photograph of herself with the fairies. On the back she wrote "It is funny, I never used to see them in Africa. It must be too hot for them there." The photographs became public in mid-1919, after Elsie's mother attended a meeting of the Theosophical Society in Bradford. The lecture that evening was on "fairy life", and at the end of the meeting Polly Wright showed the two fairy photographs taken by her daughter and niece to the speaker. As a result, the photographs were displayed at the society's annual conference in Harrogate, held a few months later. There they came to the attention of a leading member of the society, Edward Gardner. One of the central beliefs of theosophy is that humanity is undergoing a cycle of evolution, towards increasing "perfection", and Gardner recognised the potential significance of the photographs for the movement: Gardner sent the prints along with the original glass-plate negatives to Harold Snelling, a photography expert. Snelling's opinion was that "the two negatives are entirely genuine, unfaked photographs ... [with] no trace whatsoever of studio work involving card or paper models". He did not go so far as to say that the photographs showed fairies, stating only that "these are straight forward photographs of whatever was in front of the camera at the time". Gardner had the prints "clarified" by Snelling, and new negatives produced, "more conducive to printing", for use in the illustrated lectures he gave around the UK. Snelling supplied the photographic prints which were available for sale at Gardner's lectures. Author and prominent spiritualist Sir Arthur Conan Doyle learned of the photographs from the editor of the spiritualist publication "Light". Doyle had been commissioned by "The Strand Magazine" to write an article on fairies for their Christmas issue, and the fairy photographs "must have seemed like a godsend" according to broadcaster and historian Magnus Magnusson. Doyle contacted Gardner in June 1920 to determine the background to the photographs, and wrote to Elsie and her father to request permission from the latter to use the prints in his article. Arthur Wright was "obviously impressed" that Doyle was involved, and gave his permission for publication, but he refused payment on the grounds that, if genuine, the images should not be "soiled" by money. Gardner and Doyle sought a second expert opinion from the photographic company Kodak. Several of the company's technicians examined the enhanced prints, and although they agreed with Snelling that the pictures "showed no signs of being faked", they concluded that "this could not be taken as conclusive evidence ... that they were authentic photographs of fairies". Kodak declined to issue a certificate of authenticity. Gardner believed that the Kodak technicians might not have examined the photographs entirely objectively, observing that one had commented "after all, as fairies couldn't be true, the photographs must have been faked somehow". The prints were also examined by another photographic company, Ilford, who reported unequivocally that there was "some evidence of faking". 
Gardner and Doyle, perhaps rather optimistically, interpreted the results of the three expert evaluations as two in favour of the photographs' authenticity and one against. Doyle also showed the photographs to the physicist and pioneering psychical researcher Sir Oliver Lodge, who believed the photographs to be fake. He suggested that a troupe of dancers had masqueraded as fairies, and expressed doubt as to their "distinctly 'Parisienne' hairstyles". Doyle was preoccupied with organising an imminent lecture tour of Australia, and in July 1920, sent Gardner to meet the Wright family. Frances was by then living with her parents in Scarborough, but Elsie's father told Gardner that he had been so certain the photographs were fakes that while the girls were away he searched their bedroom and the area around the beck (stream), looking for scraps of pictures or cutouts, but found nothing "incriminating". Gardner believed the Wright family to be honest and respectable. To place the matter of the photographs' authenticity beyond doubt, he returned to Cottingley at the end of July with two Kodak Cameo cameras and 24 secretly marked photographic plates. Frances was invited to stay with the Wright family during the school summer holiday so that she and Elsie could take more pictures of the fairies. Gardner described his briefing in his 1945 "Fairies: A Book of Real Fairies": Until 19 August the weather was unsuitable for photography. Because Frances and Elsie insisted that the fairies would not show themselves if others were watching, Elsie's mother was persuaded to visit her sister's for tea, leaving the girls alone. In her absence the girls took several photographs, two of which appeared to show fairies. In the first, "Frances and the Leaping Fairy", Frances is shown in profile with a winged fairy close by her nose. The second, "Fairy offering Posy of Harebells to Elsie", shows a fairy either hovering or tiptoeing on a branch, and offering Elsie a flower. Two days later the girls took the last picture, "Fairies and Their Sun-Bath". The plates were packed in cotton wool and returned to Gardner in London, who sent an "ecstatic" telegram to Doyle, by then in Melbourne. Doyle wrote back: Doyle's article in the December 1920 issue of "The Strand" contained two higher-resolution prints of the 1917 photographs, and sold out within days of publication. To protect the girls' anonymity, Frances and Elsie were called Alice and Iris respectively, and the Wright family was referred to as the "Carpenters". An enthusiastic and committed spiritualist, Doyle hoped that if the photographs convinced the public of the existence of fairies then they might more readily accept other psychic phenomena. He ended his article with the words: Early press coverage was "mixed", generally a combination of "embarrassment and puzzlement". The historical novelist and poet Maurice Hewlett published a series of articles in the literary journal "John O' London's Weekly", in which he concluded: "And knowing children, and knowing that Sir Arthur Conan Doyle has legs, I decide that the Miss Carpenters have pulled one of them." The Sydney newspaper "Truth" on 5 January 1921 expressed a similar view; "For the true explanation of these fairy photographs what is wanted is not a knowledge of occult phenomena but a knowledge of children." Some public figures were more sympathetic. Margaret McMillan, the educational and social reformer, wrote: "How wonderful that to these dear children such a wonderful gift has been vouchsafed."
The novelist Henry De Vere Stacpoole decided to take the fairy photographs and the girls at face value. In a letter to Gardner he wrote: "Look at Alice's [Frances'] face. Look at Iris's [Elsie's] face. There is an extraordinary thing called Truth which has 10 million faces and forms – it is God's currency and the cleverest coiner or forger can't imitate it." Major John Hall-Edwards, a keen photographer and pioneer of medical X-ray treatments in Britain, was a particularly vigorous critic: Doyle used the later photographs in 1921 to illustrate a second article in "The Strand", in which he described other accounts of fairy sightings. The article formed the foundation for his 1922 book "The Coming of the Fairies". As before, the photographs were received with mixed credulity. Sceptics noted that the fairies "looked suspiciously like the traditional fairies of nursery tales" and that they had "very fashionable hairstyles". Gardner made a final visit to Cottingley in August 1921. He again brought cameras and photographic plates for Frances and Elsie, but was accompanied by the clairvoyant Geoffrey Hodson. Although neither of the girls claimed to see any fairies, and there were no more photographs, "on the contrary, he [Hodson] saw them [fairies] everywhere" and wrote voluminous notes on his observations. By now Elsie and Frances were tired of the whole fairy business. Years later Elsie looked at a photograph of herself and Frances taken with Hodson and said: "Look at that, fed up with fairies." Both Elsie and Frances later admitted that they "played along" with Hodson "out of mischief", and that they considered him "a fake". Public interest in the Cottingley Fairies gradually subsided after 1921. Elsie and Frances eventually married and lived abroad for many years. In 1966, a reporter from the "Daily Express" newspaper traced Elsie, who was by then back in England. She admitted in an interview given that year that the fairies might have been "figments of my imagination", but left open the possibility she believed that she had somehow managed to photograph her thoughts. The media subsequently became interested in Frances and Elsie's photographs once again. BBC television's "Nationwide" programme investigated the case in 1971, but Elsie stuck to her story: "I've told you that they're photographs of figments of our imagination, and that's what I'm sticking to". Elsie and Frances were interviewed by journalist Austin Mitchell in September 1976, for a programme broadcast on Yorkshire Television. When pressed, both women agreed that "a rational person doesn't see fairies", but they denied having fabricated the photographs. In 1978 the magician and scientific sceptic James Randi and a team from the Committee for the Scientific Investigation of Claims of the Paranormal examined the photographs, using a "computer enhancement process". They concluded that the photographs were fakes, and that strings could be seen supporting the fairies. Geoffrey Crawley, editor of the "British Journal of Photography", undertook a "major scientific investigation of the photographs and the events surrounding them", published between 1982 and 1983, "the first major postwar analysis of the affair". He also concluded that the pictures were fakes. In 1983, the cousins admitted in an article published in the magazine "The Unexplained" that the photographs had been faked, although both maintained that they really had seen fairies. 
Elsie had copied illustrations of dancing girls from a popular children's book of the time, "Princess Mary's Gift Book", published in 1914, and drew wings on them. They said they had then cut out the cardboard figures and supported them with hatpins, disposing of their props in the beck once the photograph had been taken. But the cousins disagreed about the fifth and final photograph, which Doyle in his "The Coming of the Fairies" described in this way: Elsie maintained it was a fake, just like all the others, but Frances insisted that it was genuine. In an interview given in the early 1980s Frances said: Both Frances and Elsie claimed to have taken the fifth photograph. In a letter published in "The Times" newspaper on 9 April 1983, Geoffrey Crawley explained the discrepancy by suggesting that the photograph was "an unintended double exposure of fairy cutouts in the grass", and thus "both ladies can be quite sincere in believing that they each took it". In a 1985 interview on Yorkshire Television's "Arthur C. Clarke's World of Strange Powers", Elsie said that she and Frances were too embarrassed to admit the truth after fooling Doyle, the author of Sherlock Holmes: "Two village kids and a brilliant man like Conan Doyle – well, we could only keep quiet." In the same interview Frances said: "I never even thought of it as being a fraud – it was just Elsie and I having a bit of fun and I can't understand to this day why they were taken in – they wanted to be taken in." Frances died in 1986, and Elsie in 1988. Prints of their photographs of the fairies, along with a few other items including a first edition of Doyle's book "The Coming of the Fairies", were sold at auction in London for £21,620 in 1998. That same year, Geoffrey Crawley sold his Cottingley Fairy material to the National Museum of Film, Photography and Television in Bradford (now the National Science and Media Museum), where it is on display. The collection included prints of the photographs, two of the cameras used by the girls, watercolours of fairies painted by Elsie, and a nine-page letter from Elsie admitting to the hoax. The glass photographic plates were bought for £6,000 by an unnamed buyer at a London auction held in 2001. Frances' daughter, Christine Lynch, appeared in an episode of the television programme "Antiques Roadshow" in Belfast, broadcast on BBC One in January 2009, with the photographs and one of the cameras given to the girls by Doyle. Christine told the expert, Paul Atterbury, that she believed, as her mother had done, that the fairies in the fifth photograph were genuine. Atterbury estimated the value of the items at between £25,000 and £30,000. The first edition of Frances' memoirs was published a few months later, under the title "Reflections on the Cottingley Fairies". The book contains correspondence, sometimes "bitter", between Elsie and Frances. In one letter, dated 1983, Frances wrote: The 1997 films "" and "Photographing Fairies" were inspired by the events surrounding the Cottingley Fairies. The photographs were parodied in a 1994 book written by Terry Jones and Brian Froud, "Lady Cottington's Pressed Fairy Book". In 2017 a further two fairy photographs were presented as evidence that the girls' parents were part of the conspiracy. Dating from 1917 and 1918, both photographs are poorly executed copies of two of the original fairy photographs. One was published in 1918 in "The Sphere" newspaper, which was before the originals had been seen by anyone outside the girls' immediate family. 
Ceawlin of Wessex Ceawlin (also spelled Ceaulin and Caelin, died "ca." 593) was a King of Wessex. He may have been the son of Cynric of Wessex and the grandson of Cerdic of Wessex, whom the "Anglo-Saxon Chronicle" represents as the leader of the first group of Saxons to come to the land which later became Wessex. Ceawlin was active during the last years of the Anglo-Saxon expansion, with little of southern England remaining in the control of the native Britons by the time of his death. The chronology of Ceawlin's life is highly uncertain. The historical accuracy and dating of many of the events in the later "Anglo-Saxon Chronicle" have been called into question, and his reign is variously listed as lasting seven, seventeen, or thirty-two years. The "Chronicle" records several battles of Ceawlin's between the years 556 and 592, including the first record of a battle between different groups of Anglo-Saxons, and indicates that under Ceawlin Wessex acquired significant territory, some of which was later to be lost to other Anglo-Saxon kingdoms. Ceawlin is also named as one of the eight ""bretwaldas"", a title given in the "Chronicle" to eight rulers who had overlordship over southern Britain, although the extent of Ceawlin's control is not known. Ceawlin died in 593, having been deposed the year before, possibly by his successor, Ceol. He is recorded in various sources as having two sons, Cutha and Cuthwine, but the genealogies in which this information is found are known to be unreliable. The history of the sub-Roman period in Britain is poorly sourced and the subject of a number of important disagreements among historians. It appears, however, that in the fifth century raids on Britain by continental peoples developed into migrations. The newcomers included Angles, Saxons, Jutes, and Frisians. These peoples captured territory in the east and south of England, but at about the end of the fifth century, a British victory at the battle of Mons Badonicus halted the Anglo-Saxon advance for fifty years. Near the year 550, however, the British began to lose ground once more, and within twenty-five years, it appears that control of almost all of southern England was in the hands of the invaders. The peace following the battle of Mons Badonicus is attested partly by Gildas, a monk, who wrote "De Excidio et Conquestu Britanniae" or "On the Ruin and Conquest of Britain" during the middle of the sixth century. This essay is a polemic against corruption and Gildas provides little in the way of names and dates. He appears, however, to state that peace had lasted from the year of his birth to the time he was writing. The "Anglo-Saxon Chronicle" is the other main source that bears on this period, in particular in an entry for the year 827 that records a list of the kings who bore the title ""bretwalda"", or "Britain-ruler". That list shows a gap in the early sixth century that matches Gildas's version of events. Ceawlin's reign belongs to the period of Anglo-Saxon expansion at the end of the sixth century. Though there are many unanswered questions about the chronology and activities of the early West Saxon rulers, it is clear that Ceawlin was one of the key figures in the final Anglo-Saxon conquest of southern Britain. The two main written sources for early West Saxon history are the "Anglo-Saxon Chronicle" and the West Saxon Genealogical Regnal List. The "Chronicle" is a set of annals which were compiled near the year 890, during the reign of King Alfred the Great of Wessex. 
The older entries were assembled from earlier annals that no longer survive, as well as from saga material that might have been transmitted orally. The "Chronicle" dates the arrival of the future "West Saxons" in Britain to 495, when Cerdic and his son, Cynric, land at "Cerdices ora", or Cerdic's shore. Almost twenty annals describing Cerdic's campaigns and those of his descendants appear interspersed through the next hundred years of entries in the "Chronicle". Although these annals provide most of what is known about Ceawlin, the historicity of many of the entries is uncertain. The West Saxon Genealogical Regnal List is a list of rulers of Wessex, including the lengths of their reigns. It survives in several forms, including as a preface to the [B] manuscript of the "Chronicle". As with the "Chronicle", the list was compiled during the reign of Alfred the Great, and both the list and the "Chronicle" are influenced by the desire of their writers to use a single line of descent to trace the lineage of the Kings of Wessex through Cerdic to Gewis, the legendary eponymous ancestor of the West Saxons, who is made to descend from Woden. The result served the political purposes of the scribe, but is riddled with contradictions for historians. The contradictions may be seen clearly by calculating dates by different methods from the various sources. The first event in West Saxon history whose date can be regarded as reasonably certain is the baptism of Cynegils, which occurred in the late 630s, perhaps as late as 640. The "Chronicle" dates Cerdic's arrival to 495, but adding up the lengths of the reigns as given in the West Saxon Genealogical Regnal List leads to the conclusion that Cerdic's reign might have started in 532, a difference of 37 years. Neither 495 nor 532 may be treated as reliable, however; the latter date relies on the presumptions that the Regnal List is correct in presenting the Kings of Wessex as having succeeded one another, with no omitted kings and no joint kingships, and that the durations of the reigns are correct as given. None of these presumptions may be made safely. The sources also are inconsistent on the length of Ceawlin's reign. The "Chronicle" gives it as thirty-two years, from 560 to 592, but the Regnal Lists disagree: different versions give it as seven or seventeen years. A recent detailed study of the Regnal List dates the arrival of the West Saxons in England to 538, and favours seven years as the most likely length of Ceawlin's reign, with dates of 581–588 proposed. The sources do agree that Ceawlin is the son of Cynric, and he usually is named as the father of Cuthwine. There is one discrepancy in this case: the entry for 685 in the [A] version of the "Chronicle" assigns Ceawlin a son, Cutha, but in the 855 entry in the same manuscript, Cutha is listed as the son of Cuthwine. Cutha also is named as Ceawlin's brother in the [E] and [F] versions of the "Chronicle", in the 571 and 568 entries, respectively. Whether Ceawlin is a descendant of Cerdic is a matter of debate. Subgroupings of different West Saxon lineages give the impression of separate groups, of which Ceawlin's line is one. Some of the problems in the Wessex genealogies may have come about because of efforts to integrate Ceawlin's line with the other lineages: it was very important to the West Saxons to be able to trace their ancestors back to Cerdic.
Another reason for doubting the literal nature of these early genealogies is that the names of several early members of the dynasty do not appear to have Germanic etymologies, as would be expected for the names of leaders of an apparently Anglo-Saxon dynasty. The name Ceawlin is one of the names that do not have convincing Anglo-Saxon etymologies; it seems more likely to be of native British origin. The earliest sources do not use the term "West Saxon". According to Bede's "Ecclesiastical History of the English People", the term is interchangeable with the Gewisse. The term "West Saxon" appears only in the late seventh century, after the reign of Cædwalla. Ultimately, the kingdom of Wessex occupied the southwest of England, but the initial stages in this expansion are not apparent from the sources. Cerdic's landing, whenever it is to be dated, seems to have been near the Isle of Wight, and the annals record the conquest of the island in 530. In 534, according to the "Chronicle", Cerdic died and his son Cynric took the throne; the "Chronicle" adds that "they gave the Isle of Wight to their nephews, Stuf and Wihtgar". These records are in direct conflict with Bede, who states that the Isle of Wight was settled by Jutes, not Saxons; the archaeological record is somewhat in favour of Bede on this. Subsequent entries in the "Chronicle" give details of some of the battles by which the West Saxons won their kingdom. Ceawlin's campaigns are not given as near the coast. They range along the Thames valley and beyond, as far as Surrey in the east and the mouth of the Severn in the west. Ceawlin clearly is part of the West Saxon expansion, but the military history of the period is difficult to understand. In what follows the dates are as given in the "Chronicle", although as noted above, these are earlier than those now thought accurate. The first record of a battle fought by Ceawlin is in 556, when he and his father, Cynric, fought the native Britons at "Beran byrg", or Bera's Stronghold. This now is identified as Barbury Castle, an Iron Age hill fort in Wiltshire, near Swindon. Cynric would have been king of Wessex at this time. The first battle Ceawlin fought as king is dated by the "Chronicle" to 568, when he and Cutha fought with Æthelberht, the king of Kent. The entry says "Here Ceawlin and Cutha fought against Aethelberht and drove him into Kent; and they killed two ealdormen, Oslaf and Cnebba, on Wibbandun." The location of "Wibbandun", which can be translated as "Wibba's Mount", has not been identified definitely; it was at one time thought to be Wimbledon, but this now is known to be incorrect. This battle is notable as the first recorded conflict between the invading peoples: previous battles recorded in the "Chronicle" are between the Anglo-Saxons and the native Britons. There are multiple examples of joint kingship in Anglo-Saxon history, and this may be another: it is not clear what Cutha's relationship to Ceawlin is, but it certainly is possible he was also a king. The annal for 577, below, is another possible example. The annal for 571 reads: "Here Cuthwulf fought against the Britons at Bedcanford, and took four settlements: Limbury and Aylesbury, Benson and Eynsham; and in the same year he passed away." Cuthwulf's relationship with Ceawlin is unknown, but the alliteration common to Anglo-Saxon royal families suggests Cuthwulf may be part of the West Saxon royal line. The location of the battle itself is unidentified.
It has been suggested that it was Bedford, but what is known of the early history of Bedford's name does not support this. This battle is of interest because it is surprising that an area so far east should still be in Briton hands this late: there is ample archaeological evidence of early Saxon and Anglian presence in the Midlands, and historians generally have interpreted Gildas's "De Excidio" as implying that the Britons had lost control of this area by the mid-sixth century. One possible explanation is that this annal records a reconquest of land that was lost to the Britons in the campaigns ending in the battle of Mons Badonicus. The annal for 577 reads "Here Cuthwine and Ceawlin fought against the Britons, and they killed three kings, Coinmail and Condidan and Farinmail, in the place which is called Dyrham, and took three cities: Gloucester and Cirencester and Bath." This entry is all that is known of these Briton kings; their names are in an archaic form that makes it very likely that this annal derives from a much older written source. The battle itself has long been regarded as a key moment in the Saxon advance, since in reaching the Bristol Channel, the West Saxons divided the Britons west of the Severn from land communication with those in the peninsula to the south of the Channel. Wessex almost certainly lost this territory to Penda of Mercia in 628, when the "Chronicle" records that "Cynegils and Cwichelm fought against Penda at Cirencester and then came to an agreement." It is possible that when Ceawlin and Cuthwine took Bath, they found the Roman baths still operating to some extent. Nennius, a ninth-century historian, mentions a "Hot Lake" in the land of the Hwicce, which was along the Severn, and adds "It is surrounded by a wall, made of brick and stone, and men may go there to bathe at any time, and every man can have the kind of bath he likes. If he wants, it will be a cold bath; and if he wants a hot bath, it will be hot". Bede also describes hot baths in the geographical introduction to the "Ecclesiastical History" in terms very similar to those of Nennius. Wansdyke, an early medieval defensive linear earthwork, runs from south of Bristol to near Marlborough, Wiltshire, passing not far from Bath. It probably was built in the fifth or sixth centuries, perhaps by Ceawlin. Ceawlin's last recorded victory is in 584. The entry reads "Here Ceawlin and Cutha fought against the Britons at the place which is named Fethan leag, and Cutha was killed; and Ceawlin took many towns and countless war-loot, and in anger he turned back to his own [territory]." There is a wood named "Fethelée" mentioned in a twelfth-century document that relates to Stoke Lyne, in Oxfordshire, and it now is thought that the battle of Fethan leag must have been fought in this area. The phrase "in anger he turned back to his own" probably indicates that this annal is drawn from saga material, as perhaps are all of the early Wessex annals. It also has been used to argue that perhaps Ceawlin did not win the battle and that the chronicler chose not to record the outcome fully – a king does not usually come home "in anger" after taking "many towns and countless war-loot". It may be that Ceawlin's overlordship of the southern Britons came to an end with this battle. About 731, Bede, a Northumbrian monk and chronicler, wrote a work called the "Ecclesiastical History of the English People".
The work was not primarily a secular history, but Bede provides much information about the history of the Anglo-Saxons, including a list early in the history of seven kings who, he said, held "imperium" over the other kingdoms south of the Humber. The usual translation for "imperium" is "overlordship". Bede names Ceawlin as the second on the list, although he spells it "Caelin", and adds that he was "known in the speech of his own people as Ceaulin". Bede also makes it clear that Ceawlin was not a Christian—Bede mentions a later king, Æthelberht of Kent, as "the first to enter the kingdom of heaven". The "Anglo-Saxon Chronicle," in an entry for the year 827, repeats Bede's list, adds Egbert of Wessex, and also mentions that they were known as "bretwalda", or "Britain-ruler". A great deal of scholarly attention has been given to the meaning of this word. It has been described as a term "of encomiastic poetry", but there also is evidence that it implied a definite role of military leadership. Bede says that these kings had authority "south of the Humber", but the span of control, at least of the earlier bretwaldas, likely was less than this. In Ceawlin's case the range of control is hard to determine accurately, but Bede's inclusion of Ceawlin in the list of kings who held "imperium", and the list of battles he is recorded as having won, indicate an energetic and successful leader who, from a base in the upper Thames valley, dominated much of the surrounding area and held overlordship over the southern Britons for some period. Despite Ceawlin's military successes, the northern conquests he made could not always be retained: Mercia took much of the upper Thames valley, and the north-eastern towns won in 571 were among territory subsequently under the control of Kent and Mercia at different times. Bede's concept of the power of these overlords also must be regarded as the product of his eighth-century viewpoint. When the "Ecclesiastical History" was written, Æthelbald of Mercia dominated the English south of the Humber, and Bede's view of the earlier kings was doubtless strongly coloured by the state of England at that time. For the earlier "bretwaldas", such as Ælle and Ceawlin, there must be some element of anachronism in Bede's description. It also is possible that Bede only meant to refer to power over Anglo-Saxon kingdoms, not the native Britons. Ceawlin is the second king in Bede's list. All the subsequent bretwaldas followed more or less consecutively, but there is a long gap, perhaps fifty years, between Ælle of Sussex, the first bretwalda, and Ceawlin. The lack of gaps between the overlordships of the later bretwaldas has been used to make an argument for Ceawlin's dates matching the later entries in the "Chronicle" with reasonable accuracy. According to this analysis, the next bretwalda, Æthelberht of Kent, must already have been a dominant king by the time Pope Gregory the Great wrote to him in 601, since Gregory would not have written to an underking. Ceawlin defeated Æthelberht in 568 according to the "Chronicle". Æthelberht's dates are a matter of debate, but recent scholarly consensus has his reign starting no earlier than 580. The 568 date for the battle at Wibbandun is thought to be unlikely because of the assertion in various versions of the West Saxon Genealogical Regnal List that Ceawlin's reign lasted either seven or seventeen years.
If this battle is placed near the year 590, before Æthelberht has established himself as a powerful king, then the subsequent annals relating to Ceawlin's defeat and death may be reasonably close to the correct date. In any case, the battle with Æthelberht is unlikely to have been more than a few years on either side of 590. The gap between Ælle and Ceawlin, on the other hand, has been taken as supporting evidence for the story told by Gildas in "De Excidio" of a peace lasting a generation or more following a Briton victory at Mons Badonicus. Æthelberht of Kent succeeds Ceawlin on the list of bretwaldas, but the reigns may overlap somewhat: recent evaluations give Ceawlin a likely reign of 581–588, and place Æthelberht's accession near to the year 589, but these analyses are no more than scholarly guesses. Ceawlin's eclipse in 592, probably by Ceol, may have been the occasion for Æthelberht to rise to prominence; Æthelberht very likely was the dominant Anglo-Saxon king by 597. Æthelberht's rise may have been earlier: the 584 annal, even if it records a victory, is the last victory of Ceawlin's in the "Chronicle", and the period after that may have been one of Æthelberht's ascent and Ceawlin's decline. Ceawlin lost the throne of Wessex in 592. The annal for that year reads, in part: "Here there was great slaughter at Woden's Barrow, and Ceawlin was driven out." Woden's Barrow is a tumulus, now called Adam's Grave, at Alton Priors, Wiltshire. No details of his opponent are given. The medieval chronicler William of Malmesbury, writing in about 1120, says that it was "the Angles and the British conspiring together". Alternatively, it may have been Ceol, who is supposed to have been the next king of Wessex, ruling for six years according to the West Saxon Genealogical Regnal List. According to the "Anglo-Saxon Chronicle", Ceawlin died the following year. The relevant part of the annal reads: "Here Ceawlin and Cwichelm and Crida perished." Nothing more is known of Cwichelm and Crida, although they may have been members of the Wessex royal house – their names fit the alliterative pattern common to royal houses of the time. According to the Regnal List, Ceol was a son of Cutha, who was a son of Cynric; and Ceolwulf, his brother, reigned for seventeen years after him. It is possible that some fragmentation of control among the West Saxons occurred at Ceawlin's death: Ceol and Ceolwulf may have been based in Wiltshire, as opposed to the upper Thames valley. This split also may have contributed to Æthelberht's ability to rise to dominance in southern England. The West Saxons remained influential in military terms, however: the "Chronicle" and Bede record continued military activity against Essex and Sussex within twenty or thirty years of Ceawlin's death. Comet Shoemaker–Levy 9 Comet Shoemaker–Levy 9 (formally designated D/1993 F2) was a comet that broke apart in July 1992 and collided with Jupiter in July 1994, providing the first direct observation of an extraterrestrial collision of Solar System objects. This generated a large amount of coverage in the popular media, and the comet was closely observed by astronomers worldwide. The collision provided new information about Jupiter and highlighted its possible role in reducing space debris in the inner Solar System. The comet was discovered by astronomers Carolyn and Eugene M. Shoemaker and David Levy in 1993. Shoemaker–Levy 9 had been captured by Jupiter and was orbiting the planet at the time.
It was located on the night of March 24 in a photograph taken with the Schmidt telescope at the Palomar Observatory in California. It was the first comet observed to be orbiting a planet, and had probably been captured by Jupiter around 20–30 years earlier. Calculations showed that its unusual fragmented form was due to a previous closer approach to Jupiter in July 1992. At that time, the orbit of Shoemaker–Levy 9 passed within Jupiter's Roche limit, and Jupiter's tidal forces had acted to pull apart the comet. The comet was later observed as a series of fragments ranging up to in diameter. These fragments collided with Jupiter's southern hemisphere between July 16 and 22, 1994 at a speed of approximately (Jupiter's escape velocity) or . The prominent scars from the impacts were more easily visible than the Great Red Spot and persisted for many months. While conducting a program of observations designed to uncover near-Earth objects, the Shoemakers and Levy discovered Comet Shoemaker–Levy 9 on the night of March 24, 1993 in a photograph taken with the Schmidt telescope at the Palomar Observatory in California. The comet was thus a serendipitous discovery, but one that quickly overshadowed the results from their main observing program. Comet Shoemaker–Levy 9 was the ninth periodic comet (a comet whose orbital period is 200 years or less) discovered by the Shoemakers and Levy, hence its name. It was their eleventh comet discovery overall including their discovery of two non-periodic comets, which use a different nomenclature. The discovery was announced in IAU Circular 5725 on March 27, 1993. The discovery image gave the first hint that comet Shoemaker–Levy 9 was an unusual comet, as it appeared to show multiple nuclei in an elongated region about 50 arcseconds long and 10 arcseconds wide. Brian G. Marsden of the Central Bureau for Astronomical Telegrams noted that the comet lay only about 4 degrees from Jupiter as seen from Earth, and that although this could of course be a line of sight effect, its apparent motion in the sky suggested that it was physically close to it. Because of this, he suggested that the Shoemakers and David Levy had discovered the fragments of a comet that had been disrupted by Jupiter's gravity. Orbital studies of the new comet soon revealed that it was orbiting Jupiter rather than the Sun, unlike all other comets known at the time. Its orbit around Jupiter was very loosely bound, with a period of about 2 years and an apoapsis (the point in the orbit farthest from the planet) of . Its orbit around the planet was highly eccentric ("e" = 0.9986). Tracing back the comet's orbital motion revealed that it had been orbiting Jupiter for some time. It is likely that it was captured from a solar orbit in the early 1970s, although the capture may have occurred as early as the mid-1960s. Several other observers found images of the comet in precovery images obtained before March 24, including Kin Endate from a photograph exposed on March 15, S. Otomo on March 17, and a team led by Eleanor Helin from images on March 19. No precovery images dating back to earlier than March 1993 have been found. Before the comet was captured by Jupiter, it was probably a short-period comet with an aphelion just inside Jupiter's orbit, and a perihelion interior to the asteroid belt. The volume of space within which an object can be said to orbit Jupiter is defined by Jupiter's Hill sphere (also called the Roche sphere). 
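To give a sense of the scales involved, the following back-of-envelope sketch (plain Python) estimates Jupiter's Hill radius and the size of a two-year orbit about the planet. The planetary constants used are standard approximate values, and the two-year period and eccentricity of 0.9986 are the figures quoted above; the result is illustrative rather than a reconstruction of the comet's actual orbital elements.

# Rough scale of Jupiter's Hill sphere versus a loosely bound 2-year orbit.
# Planetary constants are standard approximate values; illustrative only.
from math import pi

AU = 1.496e11              # metres
a_jupiter = 5.20 * AU      # Jupiter's mean distance from the Sun
mass_ratio = 9.55e-4       # Jupiter mass / Sun mass (about 1/1047)
GM_jupiter = 1.267e17      # m^3 s^-2

# Hill radius: r_H ~ a * (m / 3M)^(1/3)
r_hill = a_jupiter * (mass_ratio / 3.0) ** (1.0 / 3.0)

# Kepler's third law for a 2-year orbit about Jupiter: a^3 = GM * T^2 / (4 pi^2)
period = 2.0 * 365.25 * 86400.0
a_comet = (GM_jupiter * period ** 2 / (4.0 * pi ** 2)) ** (1.0 / 3.0)
apojove = a_comet * (1.0 + 0.9986)

print(r_hill / 1e9)     # about 53 (million km): Jupiter's Hill radius
print(apojove / 1e9)    # about 47 (million km): loosely bound, yet inside the Hill sphere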
When the comet passed Jupiter in the late 1960s or early 1970s, it happened to be near its aphelion, and found itself slightly within Jupiter's Hill sphere. Jupiter's gravity nudged the comet towards it. Because the comet's motion with respect to Jupiter was very small, it fell almost straight toward Jupiter, which is why it ended up on a Jove-centric orbit of very high eccentricity; that is to say, the ellipse was nearly flattened out. The comet had apparently passed extremely close to Jupiter on July 7, 1992, just over above its cloud tops—a smaller distance than Jupiter's radius of , and well within the orbit of Jupiter's innermost moon Metis and the planet's Roche limit, inside which tidal forces are strong enough to disrupt a body held together only by gravity. Although the comet had approached Jupiter closely before, the July 7 encounter seemed to be by far the closest, and the fragmentation of the comet is thought to have occurred at this time. Each fragment of the comet was denoted by a letter of the alphabet, from "fragment A" through to "fragment W", a practice already established from previously observed broken-up comets. More exciting for planetary astronomers was that the best orbital calculations suggested that the comet would pass within of the center of Jupiter, a distance smaller than the planet's radius, meaning that there was an extremely high probability that SL9 would collide with Jupiter in July 1994. Studies suggested that the train of nuclei would plow into Jupiter's atmosphere over a period of about five days. The discovery that the comet was likely to collide with Jupiter caused great excitement within the astronomical community and beyond, as astronomers had never before seen two significant Solar System bodies collide. Intense studies of the comet were undertaken, and as its orbit became more accurately established, the possibility of a collision became a certainty. The collision would provide a unique opportunity for scientists to look inside Jupiter's atmosphere, as the collisions were expected to cause eruptions of material from the layers normally hidden beneath the clouds. Astronomers estimated that the visible fragments of SL9 ranged in size from a few hundred metres (around ) to across, suggesting that the original comet may have had a nucleus up to across—somewhat larger than Comet Hyakutake, which became very bright when it passed close to the Earth in 1996. One of the great debates in advance of the impact was whether the effects of the impact of such small bodies would be noticeable from Earth, apart from a flash as they disintegrated like giant meteors. The most optimistic prediction was that large, asymmetric ballistic fireballs would rise above the limb of Jupiter and into sunlight to be visible from Earth. Other suggested effects of the impacts were seismic waves travelling across the planet, an increase in stratospheric haze on the planet due to dust from the impacts, and an increase in the mass of the Jovian ring system. However, given that observing such a collision was completely unprecedented, astronomers were cautious with their predictions of what the event might reveal. Anticipation grew as the predicted date for the collisions approached, and astronomers trained terrestrial telescopes on Jupiter. Several space observatories did the same, including the Hubble Space Telescope, the ROSAT X-ray-observing satellite, and significantly the "Galileo" spacecraft, then on its way to a rendezvous with Jupiter scheduled for 1995.
Although the impacts took place on the side of Jupiter hidden from Earth, "Galileo", then about 1.6 AU from the planet, was able to see the impacts as they occurred. Jupiter's rapid rotation brought the impact sites into view for terrestrial observers a few minutes after the collisions. Two other satellites made observations at the time of the impact: the "Ulysses" spacecraft, primarily designed for solar observations, was pointed towards Jupiter from its distant location, and the "Voyager 2" probe, by then on its way out of the Solar System following its encounter with Neptune in 1989, was programmed to look for radio emission in the 1–390 kHz range. The first impact occurred at 20:13 UTC on July 16, 1994, when fragment A of the nucleus entered Jupiter's southern hemisphere at a speed of about 60 km/s. Instruments on "Galileo" detected a fireball that reached a peak temperature of about 24,000 K, compared to the typical Jovian cloudtop temperature of about 130 K, before expanding and cooling rapidly over the following 40 seconds. The plume from the fireball quickly reached a height of over 3,000 km. A few minutes after the impact fireball was detected, "Galileo" measured renewed heating, probably due to ejected material falling back onto the planet. Earth-based observers detected the fireball rising over the limb of the planet shortly after the initial impact. Despite published predictions, astronomers had not expected to see the fireballs from the impacts and did not have any idea in advance how visible the other atmospheric effects of the impacts would be from Earth. Observers soon saw a huge dark spot after the first impact. The spot was visible even in very small telescopes, and was about 6,000 km (one Earth radius) across. This and subsequent dark spots were thought to have been caused by debris from the impacts, and were markedly asymmetric, forming crescent shapes in front of the direction of impact. Over the next six days, 21 distinct impacts were observed, with the largest coming on July 18 at 07:33 UTC when fragment G struck Jupiter. This impact created a giant dark spot over 12,000 km across, and was estimated to have released an energy equivalent to 6,000,000 megatons of TNT (600 times the world's nuclear arsenal). Two impacts 12 hours apart on July 19 created impact marks of similar size to that caused by fragment G, and impacts continued until July 22, when fragment W struck the planet. Observers hoped that the impacts would give them a first glimpse of Jupiter beneath the cloud tops, as lower material was exposed by the comet fragments punching through the upper atmosphere. Spectroscopic studies revealed absorption lines in the Jovian spectrum due to diatomic sulfur (S₂) and carbon disulfide (CS₂), the first detection of either in Jupiter, and only the second detection of S₂ in any astronomical object. Other molecules detected included ammonia (NH₃) and hydrogen sulfide (H₂S). The amount of sulfur implied by the quantities of these compounds was much greater than the amount that would be expected in a small cometary nucleus, showing that material from within Jupiter was being revealed. Oxygen-bearing molecules such as sulfur dioxide were not detected, to the surprise of astronomers. As well as these molecules, emission from heavy atoms such as iron, magnesium and silicon was detected, with abundances consistent with what would be found in a cometary nucleus. 
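As a rough back-of-envelope check (not from the source) on the fragment G figure quoted above, the stated 6,000,000 megatons of TNT can be converted to joules and, for an assumed impact speed of about 60 km/s and an assumed cometary density of 600 kg/m3, into an implied impactor mass and size.

import math

MEGATON_J = 4.184e15              # joules per megaton of TNT
energy_j = 6.0e6 * MEGATON_J      # the quoted 6,000,000 Mt for fragment G

v = 60e3                          # m/s, roughly Jupiter's escape velocity
mass = 2 * energy_j / v ** 2      # invert E = 1/2 m v^2

rho = 600.0                       # kg/m^3, assumed cometary density
radius = (3 * mass / (4 * math.pi * rho)) ** (1.0 / 3.0)

print(f"energy ~ {energy_j:.1e} J")
print(f"implied mass ~ {mass:.1e} kg, i.e. a body about {2 * radius / 1e3:.1f} km across")
print(f"implied world arsenal: 6,000,000 Mt / 600 = {6.0e6 / 600:,.0f} Mt")

The implied body of a few kilometres across sits at or above the upper end of the fragment sizes quoted earlier, which mainly reflects how uncertain the assumed density and the energy estimate are; the 6,000,000 Mt / 600 ratio also shows that the comparison in the text presupposes a world nuclear arsenal of about 10,000 Mt.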
Although a substantial amount of water was detected spectroscopically, it was not as much as predicted beforehand, meaning that either the water layer thought to exist below the clouds was thinner than predicted, or that the cometary fragments did not penetrate deeply enough. As predicted beforehand, the collisions generated enormous waves that swept across Jupiter at speeds of about 450 m/s and were observed for over two hours after the largest impacts. The waves were thought to be travelling within a stable layer acting as a waveguide, and some scientists thought the stable layer must lie within the hypothesised tropospheric water cloud. However, other evidence seemed to indicate that the cometary fragments had not reached the water layer, and the waves were instead propagating within the stratosphere. Radio observations revealed a sharp increase in continuum emission at a wavelength of 21 cm after the largest impacts, which peaked at 120% of the normal emission from the planet. This was thought to be due to synchrotron radiation, caused by the injection of relativistic electrons—electrons with velocities near the speed of light—into the Jovian magnetosphere by the impacts. About an hour after fragment K entered Jupiter, observers recorded auroral emission near the impact region, as well as at the antipode of the impact site with respect to Jupiter's strong magnetic field. The cause of these emissions was difficult to establish due to a lack of knowledge of Jupiter's internal magnetic field and of the geometry of the impact sites. One possible explanation was that upwardly accelerating shock waves from the impact accelerated charged particles enough to cause auroral emission, a phenomenon more typically associated with fast-moving solar wind particles striking a planetary atmosphere near a magnetic pole. Some astronomers had suggested that the impacts might have a noticeable effect on the Io torus, a torus of high-energy particles connecting Jupiter with the highly volcanic moon Io. High-resolution spectroscopic studies found that variations in the ion density, rotational velocity, and temperatures at the time of impact and afterwards were within the normal limits. Several models were devised to compute the density and size of Shoemaker–Levy 9. Its average density was calculated to be about 0.5 g/cm³; the breakup of a much less dense comet would not have resembled the observed string of objects. The size of the parent comet was calculated to be about 1.8 km in diameter. These predictions were among the few that were actually confirmed by subsequent observation. One of the surprises of the impacts was the small amount of water revealed compared to prior predictions. Before the impact, models of Jupiter's atmosphere had indicated that the break-up of the largest fragments would occur at atmospheric pressures of anywhere from 30 kilopascals to a few tens of megapascals (from 0.3 to a few hundred bar), with some predictions that the comet would penetrate a layer of water and create a bluish shroud over that region of Jupiter. Astronomers did not observe large amounts of water following the collisions, and later impact studies found that fragmentation and destruction of the cometary fragments in an 'airburst' probably occurred at much higher altitudes than previously expected, with even the largest fragments being destroyed when the pressure reached about 250 kPa, well above the expected depth of the water layer. The smaller fragments were probably destroyed before they even reached the cloud layer. 
The visible scars from the impacts could be seen on Jupiter for many months. They were extremely prominent, and observers described them as even more easily visible than the Great Red Spot. A search of historical observations revealed that the spots were probably the most prominent transient features ever seen on the planet, and that although the Great Red Spot is notable for its striking color, no spots of the size and darkness of those caused by the SL9 impacts had ever been recorded before. Spectroscopic observers found that ammonia and carbon disulfide persisted in the atmosphere for at least fourteen months after the collisions, with a considerable amount of ammonia being present in the stratosphere as opposed to its normal location in the troposphere. Counterintuitively, the atmospheric temperature dropped to normal levels much more quickly at the larger impact sites than at the smaller sites: at the larger impact sites, temperatures were elevated over a wide region, but dropped back to normal levels within a week of the impact. At smaller sites, temperatures higher than the surroundings persisted for almost two weeks. Global stratospheric temperatures rose immediately after the impacts, then fell to below pre-impact temperatures 2–3 weeks afterwards, before rising slowly to normal temperatures. SL9 is not unique in having orbited Jupiter for a time; five comets (including 82P/Gehrels, 147P/Kushida–Muramatsu, and 111P/Helin–Roman–Crockett) are known to have been temporarily captured by the planet. Cometary orbits around Jupiter are unstable, as they will be highly elliptical and likely to be strongly perturbed by the Sun's gravity at apojove (the furthest point on the orbit from the planet). By far the most massive planet in the Solar System, Jupiter can capture objects relatively frequently, but the size of SL9 makes it a rarity: one post-impact study estimated that comets 0.3 km in diameter impact the planet once in approximately 500 years and those 1.6 km in diameter do so just once in every 6,000 years. There is very strong evidence that comets have previously been fragmented and collided with Jupiter and its satellites. During the Voyager missions to the planet, planetary scientists identified 13 crater chains on Callisto and three on Ganymede, the origin of which was initially a mystery. Crater chains seen on the Moon often radiate from large craters, and are thought to be caused by secondary impacts of the original ejecta, but the chains on the Jovian moons did not lead back to a larger crater. The impact of SL9 strongly implied that the chains were due to trains of disrupted cometary fragments crashing into the satellites. On July 19, 2009, exactly 15 years after the SL9 impacts, a new black spot about the size of the Pacific Ocean appeared in Jupiter's southern hemisphere. Thermal infrared measurements showed the impact site was warm, and spectroscopic analysis detected the production of excess hot ammonia and silica-rich dust in the upper regions of Jupiter's atmosphere. Scientists concluded that another impact event had occurred, but this time a more compact and structurally stronger object, probably a small undiscovered asteroid, was the cause. The impact of SL9 highlighted Jupiter's role as a "cosmic vacuum cleaner" (or, in deference to the ancients' planetary correspondences to the major organs in the human body, a "cosmic liver") for the inner Solar System (the so-called Jupiter Barrier). 
The planet's strong gravitational influence leads to many small comets and asteroids colliding with the planet, and the rate of cometary impacts on Jupiter is thought to be between 2000-8000 times higher than the rate on Earth. The extinction of the dinosaurs at the end of the Cretaceous period is generally thought to have been caused by the Cretaceous–Paleogene impact event, which created the Chicxulub crater, demonstrating that impacts are a serious threat to life on Earth. Astronomers have speculated that without Jupiter to mop up potential impactors, extinction events might have been more frequent on Earth, and complex life might not have been able to develop. This is part of the argument used in the Rare Earth hypothesis. In 2009, it was shown that the presence of a smaller planet at Jupiter's position in the Solar System might increase the impact rate of comets on the Earth significantly. A planet of Jupiter's mass still seems to provide increased protection against asteroids, but the total effect on all orbital bodies within the Solar System is unclear. Computer simulations in 2016 have continued to erode the theory. Comet Hale–Bopp Comet Hale–Bopp (formally designated C/1995 O1) is a comet that was perhaps the most widely observed of the 20th century, and one of the brightest seen for many decades. Hale–Bopp was discovered on July 23, 1995, separately by Alan Hale and Thomas Bopp prior to it becoming naked-eye visible on Earth. Although predicting the maximum apparent brightness of new comets with any degree of certainty is difficult, Hale–Bopp met or exceeded most predictions when it passed perihelion on April 1, 1997. It was visible to the naked eye for a record 18 months, twice as long as the previous record holder, the Great Comet of 1811. Accordingly, Hale–Bopp was dubbed the Great Comet of 1997. The comet was discovered independently on July 23, 1995, by two observers, Alan Hale and Thomas Bopp, both in the United States. Hale had spent many hundreds of hours searching for comets without success, and was tracking known comets from his driveway in New Mexico when he chanced upon Hale–Bopp just after midnight. The comet had an apparent magnitude of 10.5 and lay near the globular cluster M70 in the constellation of Sagittarius. Hale first established that there was no other deep-sky object near M70, and then consulted a directory of known comets, finding that none were known to be in this area of the sky. Once he had established that the object was moving relative to the background stars, he emailed the Central Bureau for Astronomical Telegrams, the clearing house for astronomical discoveries. Bopp did not own a telescope. He was out with friends near Stanfield, Arizona, observing star clusters and galaxies when he chanced across the comet while at the eyepiece of his friend's telescope. He realized he might have spotted something new when, like Hale, he checked his star maps to determine if any other deep-sky objects were known to be near M70, and found that there were none. He alerted the Central Bureau for Astronomical Telegrams through a Western Union telegram. Brian G. Marsden, who had run the bureau since 1968, laughed, "Nobody sends telegrams anymore. I mean, by the time that telegram got here, Alan Hale had already e-mailed us three times with updated coordinates." The following morning, it was confirmed that this was a new comet, and it was given the designation C/1995 O1. The discovery was announced in International Astronomical Union circular 6187. 
Hale–Bopp's orbital position was calculated as 7.2 astronomical units (AU) from the Sun, placing it between Jupiter and Saturn and by far the greatest distance from Earth at which a comet had been discovered by amateurs. Most comets at this distance are extremely faint, and show no discernible activity, but Hale–Bopp already had an observable coma. An image taken at the Anglo-Australian Telescope in 1993 was found to show the then-unnoticed comet some 13 AU from the Sun, a distance at which most comets are essentially unobservable. (Halley's Comet was more than 100 times fainter at the same distance from the Sun.) Analysis indicated later that its comet nucleus was 60±20 kilometres in diameter, approximately six times the size of Halley. Its great distance and surprising activity indicated that comet Hale–Bopp might become very bright indeed when it reached perihelion in 1997. However, comet scientists were wary – comets can be extremely unpredictable, and many have large outbursts at great distance only to diminish in brightness later. Comet Kohoutek in 1973 had been touted as a 'comet of the century' and turned out to be unspectacular. Hale–Bopp became visible to the naked eye in May 1996, and although its rate of brightening slowed considerably during the latter half of that year, scientists were still cautiously optimistic that it would become very bright. It was too closely aligned with the Sun to be observable during December 1996, but when it reappeared in January 1997 it was already bright enough to be seen by anyone who looked for it, even from large cities with light-polluted skies. The Internet was a growing phenomenon at the time, and numerous websites that tracked the comet's progress and provided daily images from around the world became extremely popular. The Internet played a large role in encouraging the unprecedented public interest in comet Hale–Bopp. As the comet approached the Sun, it continued to brighten, shining at 2nd magnitude in February, and showing a growing pair of tails, the blue gas tail pointing straight away from the Sun and the yellowish dust tail curving away along its orbit. On March 9, a solar eclipse in China, Mongolia and eastern Siberia allowed observers there to see the comet in the daytime. Hale–Bopp had its closest approach to Earth on March 22, 1997, at a distance of 1.315 AU. As it passed perihelion on April 1, 1997, the comet developed into a spectacular sight. It shone brighter than any star in the sky except Sirius, and its dust tail stretched 40–45 degrees across the sky. The comet was visible well before the sky got fully dark each night, and while many great comets are very close to the Sun as they pass perihelion, comet Hale–Bopp was visible all night to northern hemisphere observers. After its perihelion passage, the comet moved into the southern celestial hemisphere. The comet was much less impressive to southern hemisphere observers than it had been in the northern hemisphere, but southerners were able to see the comet gradually fade from view during the second half of 1997. The last naked-eye observations were reported in December 1997, which meant that the comet had remained visible without aid for 569 days, or about 18 and a half months. The previous record had been set by the Great Comet of 1811, which was visible to the naked eye for about 9 months. The comet continued to fade as it receded, but is still being tracked by astronomers. 
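The brightness comparisons above use the astronomical magnitude scale, in which a factor of 100 in brightness corresponds to exactly 5 magnitudes. A minimal sketch of the relation (not from the source):

import math

def delta_mag(brightness_ratio):
    """Magnitude difference corresponding to a given brightness ratio."""
    return 2.5 * math.log10(brightness_ratio)

print(delta_mag(100))                 # 5.0 magnitudes for a factor of 100
print(10 ** ((10.5 - 2.0) / 2.5))     # ~2500: discovery (mag 10.5) to mag 2

So "more than 100 times fainter" means Halley's Comet would have appeared at least 5 magnitudes dimmer at the same distance, and the climb from the discovery magnitude of 10.5 to 2nd magnitude corresponds to a brightening of roughly 2,500 times.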
In October 2007, 10 years after perihelion and at a distance of 25.7 AU from the Sun, the comet was still active, as indicated by the detection of the CO-driven coma. Herschel Space Observatory images taken in 2010 suggest comet Hale–Bopp is covered in a fresh frost layer. Hale–Bopp was again detected in December 2010 when it was 30.7 AU away from the Sun, and on August 7, 2012, at a distance of 33.2 AU from the Sun. Astronomers expect that the comet will remain observable with large telescopes until perhaps 2020, by which time it will be nearing magnitude 30. By this time it will become very difficult to distinguish the comet from the large numbers of distant galaxies of similar brightness. The comet likely made its previous perihelion 4,200 years ago, in July 2215 BCE. The estimated closest approach to Earth was 1.4 AU, and it may have been observed in ancient Egypt during the 6th-dynasty reign of the Pharaoh Pepi II (reigned c. 2247 – c. 2216 BCE). Pepi's pyramid at Saqqara contains a text referring to an "nhh-star" as a companion of the pharaoh in the heavens, where "nhh" is the hieroglyph for long hair. Hale–Bopp may have had a near collision with Jupiter in early June 2215 BCE, which probably caused a dramatic change in its orbit, and 2215 BCE may have been its first passage through the inner Solar System. The comet's current orbit is almost perpendicular to the plane of the ecliptic, so further close approaches to planets will be rare. However, in April 1996 the comet passed within 0.77 AU of Jupiter, close enough for its orbit to be measurably affected by the planet's gravity. The comet's orbit was shortened considerably to a period of roughly 2,533 years, and it will next return to the inner Solar System around the year 4385. Its greatest distance from the Sun (aphelion) will be about 370 AU, reduced from about 525 AU. The estimated probability of Hale–Bopp's striking Earth in future passages through the inner Solar System is remote, about 2.5×10⁻⁹ per orbit. However, given that the comet nucleus is around 60 km in diameter, the consequences of such an impact would be apocalyptic. Weissman conservatively estimates the diameter at 35 km; an estimated density of 0.6 g/cm³ then gives a cometary mass of 1.3×10¹⁹ g. At a probable impact velocity of 52.5 km/s, the impact energy can be calculated as 1.9×10³² ergs, or 4.4×10⁹ megatons of TNT, about 44 times the estimated energy of the K–T impact event. Over many orbits, the cumulative effect of gravitational perturbations on comets with high orbital inclinations and small perihelion distances is generally to reduce the perihelion distance to very small values. Hale–Bopp has about a 15% chance of eventually becoming a sungrazing comet through this process. Comet Hale–Bopp was observed intensively by astronomers during its perihelion passage, and several important advances in cometary science resulted from these observations. The dust production rate of the comet was very high (up to 2.0×10⁶ kg/s), which may have made the inner coma optically thick. Based on the properties of the dust grains—high temperature, high albedo and strong 10 μm silicate emission feature—the astronomers concluded the dust grains are smaller than observed in any other comet. Hale–Bopp showed the highest ever linear polarization detected for any comet. Such polarization is the result of solar radiation getting scattered by the dust particles in the coma of the comet and depends on the nature of the grains. 
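The impact-energy figures quoted above follow from straightforward arithmetic. The Python sketch below (not from the source) reproduces them from the stated 35 km diameter, 0.6 g/cm3 density and 52.5 km/s velocity, and adds a Kepler's-third-law check on the shortened orbit; the roughly 0.9 AU perihelion distance used there is an assumption, as the text only gives the new aphelion of about 370 AU.

import math

# Mass of a 35 km sphere at 0.6 g/cm^3 (600 kg/m^3)
diameter = 35e3                                              # m
rho = 600.0                                                  # kg/m^3
mass = rho * (4.0 / 3.0) * math.pi * (diameter / 2) ** 3     # ~1.3e16 kg

# Kinetic energy at the stated 52.5 km/s impact velocity
v = 52.5e3                                                   # m/s
energy_j = 0.5 * mass * v ** 2                               # ~1.9e25 J
print(f"mass   ~ {mass * 1e3:.1e} g")                        # ~1.3e19 g, as quoted
print(f"energy ~ {energy_j * 1e7:.1e} erg = {energy_j / 4.184e15:.1e} Mt of TNT")

# Kepler's third law check on the shortened orbit (aphelion ~370 AU,
# assumed perihelion ~0.9 AU): period in years = a^(3/2) with a in AU.
a = (370.0 + 0.9) / 2.0
print(f"period ~ {a ** 1.5:.0f} years")                      # close to the quoted ~2,533

Both results land close to the figures in the text: about 1.9×10³² ergs (4.4×10⁹ Mt) for the impact energy, and a period of roughly 2,500 years for the shortened orbit.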
It further confirms that the dust grains in the coma of comet Hale–Bopp were smaller than inferred in any other comet. One of the most remarkable discoveries was that the comet had a third type of tail. In addition to the well-known gas and dust tails, Hale–Bopp also exhibited a faint sodium tail, only visible with powerful instruments with dedicated filters. Sodium emission had been previously observed in other comets, but had not been shown to come from a tail. Hale–Bopp's sodium tail consisted of neutral atoms (not ions), and extended to some 50 million kilometres in length. The source of the sodium appeared to be the inner coma, although not necessarily the nucleus. There are several possible mechanisms for generating a source of sodium atoms, including collisions between dust grains surrounding the nucleus, and "sputtering" of sodium from dust grains by ultraviolet light. It is not yet established which mechanism is primarily responsible for creating Hale–Bopp's sodium tail, and the narrow and diffuse components of the tail may have different origins. While the comet's dust tail roughly followed the path of the comet's orbit and the gas tail pointed almost directly away from the Sun, the sodium tail appeared to lie between the two. This implies that the sodium atoms are driven away from the comet's head by radiation pressure. The abundance of deuterium in comet Hale–Bopp in the form of heavy water was found to be about twice that of Earth's oceans. If Hale–Bopp's deuterium abundance is typical of all comets, this implies that although cometary impacts are thought to be the source of a significant amount of the water on Earth, they cannot be the only source. Deuterium was also detected in many other hydrogen compounds in the comet. The ratio of deuterium to normal hydrogen was found to vary from compound to compound, which astronomers believe suggests that cometary ices were formed in interstellar clouds, rather than in the solar nebula. Theoretical modelling of ice formation in interstellar clouds suggests that comet Hale–Bopp formed at temperatures of around 25–45 kelvins. Spectroscopic observations of Hale–Bopp revealed the presence of many organic chemicals, several of which had never been detected in comets before. These complex molecules may exist within the cometary nucleus, or might be synthesised by reactions in the comet. Hale–Bopp was the first comet where the noble gas argon was detected. Noble gases are chemically inert and highly volatile, and since different noble elements have different sublimation temperatures, they can be used for probing the temperature histories of the cometary ices. Krypton has a sublimation temperature of 16–20 K and was found to be depleted more than 25 times relative to the solar abundance, while argon with its higher sublimation temperature was enriched relative to the solar abundance. Together these observations indicate that the interior of Hale–Bopp has always been colder than 35–40 K, but has at some point been warmer than 20 K. Unless the solar nebula was much colder and richer in argon than generally believed, this suggests that the comet formed beyond Neptune in the Kuiper belt region and then migrated outward to the Oort cloud. Comet Hale–Bopp's activity and outgassing were not spread uniformly over its nucleus, but instead came from several specific jets. Observations of the material streaming away from these jets allowed astronomers to measure the rotation period of the comet, which was found to be about 11 hours 46 minutes. 
In 1997 a paper was published that hypothesised the existence of a binary nucleus to explain the observed pattern of comet Hale–Bopp's dust emission in October 1995. The paper was based on theoretical analysis, and did not claim an observational detection of the proposed satellite nucleus, but estimated that it would have a diameter of about 30 km, with the main nucleus being about 70 km across, and would orbit in about three days at a distance of about 180 km. This analysis appeared to be supported by observations made in 1996 with the Hubble Space Telescope's Wide Field and Planetary Camera 2, which had taken images of the comet that seemed to reveal the satellite. Although observations using adaptive optics in late 1997 and early 1998 showed a double peak in the brightness of the nucleus, controversy still exists over whether such observations can only be explained by a binary nucleus. The discovery of the satellite was not confirmed by other observations. Also, while comets have been observed to break up before, no case of a stable binary nucleus had been found until a subsequent discovery. Given the very small mass of this comet, the orbit of the binary nucleus would be easily disrupted by the gravity of the Sun and planets. In November 1996 amateur astronomer Chuck Shramek (1950–2000) of Houston, Texas, took a CCD image of the comet, which showed a fuzzy, slightly elongated object nearby. When his computer sky-viewing program did not identify the star, Shramek called the Art Bell radio program "Coast to Coast AM" to announce that he had discovered a "Saturn-like object" following Hale–Bopp. UFO enthusiasts, such as remote viewing proponent Courtney Brown, soon concluded that there was an alien spacecraft following the comet. Several astronomers, including Alan Hale, claimed the object was simply an 8.5-magnitude star, SAO141894, which did not appear on Shramek's computer program because the user preferences were set incorrectly. Later, Art Bell even claimed to have obtained an image of the object from an anonymous astrophysicist who was about to confirm its discovery. However, astronomers Olivier Hainaut and David Tholen of the University of Hawaii stated that the alleged photo was an altered copy of one of their own comet images. A few months later, in March 1997, 39 members of the cult Heaven's Gate committed mass suicide with the intention of teleporting to a spaceship they believed was flying behind the comet. Nancy Lieder, a self-proclaimed contactee who claims to receive messages from aliens through an implant in her brain, stated that Hale–Bopp was a fiction designed to distract the population from the coming arrival of "Nibiru" or "Planet X", a giant planet whose close passage would disrupt the Earth's rotation, causing global cataclysm. Although Lieder's original date for the apocalypse, May 2003, passed without incident, predictions of the imminent arrival of Nibiru continued to circulate on various conspiracy websites, most of which tied it to the 2012 phenomenon. Its lengthy period of visibility and extensive coverage in the media meant that Hale–Bopp was probably the most-observed comet in history, making a far greater impact on the general public than the return of Halley's Comet in 1986, and certainly seen by a greater number of people than witnessed any of Halley's previous appearances. For instance, 69% of Americans had seen Hale–Bopp by April 9, 1997. 
Hale–Bopp was a record-breaking comet—the farthest comet from the Sun discovered by amateurs, with the largest well-measured cometary nucleus known after 95P/Chiron, and it was visible to the naked eye for twice as long as the previous record-holder. It was also brighter than magnitude 0 for eight weeks, longer than any other recorded comet. Gene Shoemaker and his wife Carolyn, both famous for co-discovering comet Shoemaker–Levy 9, were involved in a car crash after photographing the comet. Gene died in the crash and his ashes were sent to the Moon along with an image of comet Hale–Bopp, "the last comet that the Shoemakers observed together". Constantine II of Scotland Constantine, son of Áed (Medieval Gaelic: "Constantín mac Áeda"; Modern Gaelic: "Còiseam mac Aoidh"; known in most modern regnal lists as Constantine II; died 952) was an early King of Scotland, a kingdom known then by the Gaelic name "Alba". The Kingdom of Alba, a name which first appears in Constantine's lifetime, was situated in modern-day Scotland. The core of the kingdom was formed by the lands around the River Tay. Its southern limit was the River Forth, northwards it extended towards the Moray Firth and perhaps to Caithness, while its western limits are uncertain. Constantine's grandfather Kenneth I of Scotland (Cináed mac Ailpín, died 858) was the first of the family recorded as a king, but as king of the Picts. This change of title, from king of the Picts to king of Alba, is part of a broader transformation of Pictland, and the origins of the Kingdom of Alba are traced to Constantine's lifetime. His reign, like those of his predecessors, was dominated by the actions of Viking rulers in the British Isles, particularly the Uí Ímair ("the grandsons of Ímar", i.e. of Ivar the Boneless). During Constantine's reign the rulers of the southern kingdoms of Wessex and Mercia, later the Kingdom of England, extended their authority northwards into the disputed kingdoms of Northumbria. At first allied with the southern rulers against the Vikings, Constantine in time came into conflict with them. King Æthelstan was successful in securing Constantine's submission in 927 and 934, but the two again fought when Constantine, allied with the Strathclyde Britons and the Viking king of Dublin, invaded Æthelstan's kingdom in 937, only to be defeated at the great battle of Brunanburh. In 943 Constantine abdicated the throne and retired to the Céli Dé (Culdee) monastery of St Andrews, where he died in 952. He was succeeded by his predecessor's son Malcolm I (Máel Coluim mac Domnaill). Constantine's reign of 43 years, exceeded in Scotland only by that of King William the Lion before the Union of the Crowns in 1603, is believed to have played a defining part in the gaelicisation of Pictland, in which his patronage of the Irish Céli Dé monastic reformers was a significant factor. During his reign the words "Scots" and "Scotland" are first used to mean part of what is now Scotland. The earliest evidence for the ecclesiastical and administrative institutions which would last until the Davidian Revolution also appears at this time. Compared to neighbouring Ireland and Anglo-Saxon England, few records of 9th- and 10th-century events in Scotland survive. The main local source from the period is the "Chronicle of the Kings of Alba", a list of kings from Kenneth MacAlpin (died 858) to Kenneth II (Cináed mac Maíl Coluim, died 995). The list survives in the Poppleton Manuscript, a 13th-century compilation. 
Originally simply a list of kings with reign lengths, the other details contained in the Poppleton Manuscript version were added in the 10th and 12th centuries. In addition to this, later king lists survive. The earliest genealogical records of the descendants of Kenneth MacAlpin may date from the end of the 10th century, but their value lies more in their context, and the information they provide about the interests of those for whom they were compiled, than in the unreliable claims they contain. For narrative history the principal sources are the "Anglo-Saxon Chronicle" and the Irish annals. The evidence from charters created in the Kingdom of England provides occasional insight into events in northern Britain. While Scandinavian sagas describe events in 10th-century Britain, their value as sources of historical narrative, rather than documents of social history, is disputed. Mainland European sources rarely concern themselves with affairs in Britain, and even less commonly with events in northern Britain, but the life of Saint Cathróe of Metz, a work of hagiography written in Germany at the end of the 10th century, provides plausible details of the saint's early life in north Britain. While the sources for north-eastern Britain, the lands of the kingdom of Northumbria and the former Pictland, are limited and late, those for the areas on the Irish Sea and Atlantic coasts—the modern regions of north-west England and all of northern and western Scotland—are non-existent, and archaeology and toponymy are of primary importance. The dominant kingdom in eastern Scotland before the Viking Age was the northern Pictish kingdom of Fortriu on the shores of the Moray Firth. By the 9th century, the Gaels of Dál Riata (Dalriada) were subject to the kings of Fortriu of the family of Constantín mac Fergusa (Constantine son of Fergus). Constantín's family dominated Fortriu after 789 and perhaps, if Constantín was a kinsman of Óengus I of the Picts (Óengus son of Fergus), from around 730. The dominance of Fortriu came to an end in 839 with a defeat by Viking armies reported by the "Annals of Ulster" in which King Uen of Fortriu and his brother Bran, Constantín's nephews, together with the king of Dál Riata, Áed mac Boanta, "and others almost innumerable" were killed. These deaths led to a period of instability lasting a decade as several families attempted to establish their dominance in Pictland. By around 848 Kenneth MacAlpin had emerged as the winner. Later national myth made Kenneth MacAlpin the creator of the kingdom of Scotland, the founding of which was dated from 843, the year in which he was said to have destroyed the Picts and inaugurated a new era. The historical record for 9th-century Scotland is meagre, but the Irish annals and the 10th-century "Chronicle of the Kings of Alba" agree that Kenneth was a Pictish king, and call him "king of the Picts" at his death. The same style is used of Kenneth's brother Donald I (Domnall mac Ailpín) and sons Constantine I (Constantín mac Cináeda) and Áed (Áed mac Cináeda). The kingdom ruled by Kenneth's descendants—older works used the name House of Alpin to describe them but descent from Kenneth was the defining factor, Irish sources referring to "Clann Cináeda meic Ailpín" ("the Clan of Kenneth MacAlpin")—lay to the south of the previously dominant kingdom of Fortriu, centred in the lands around the River Tay. The extent of Kenneth's nameless kingdom is uncertain, but it certainly extended from the Firth of Forth in the south to the Mounth in the north. 
Whether it extended beyond the mountainous spine of north Britain—Druim Alban—is unclear. The core of the kingdom was similar to the old counties of Mearns, Forfar, Perth, Fife, and Kinross. Among the chief ecclesiastical centres named in the records are Dunkeld, probably seat of the bishop of the kingdom, and "Cell Rígmonaid" (modern St Andrews). Kenneth's son Constantine died in 876, probably killed fighting against a Viking army which had come north from Northumbria in 874. According to the king lists, he was counted the 70th and last king of the Picts in later times. In 899 Alfred the Great, king of Wessex, died leaving his son Edward the Elder as ruler of Britain south of the River Thames and his daughter Æthelflæd and son-in-law Æthelred ruling the western, English part of Mercia. The situation in the Danish kingdoms of eastern Britain is less clear. King Eohric was probably ruling in East Anglia, but no dates can reliably be assigned to the successors of Guthfrith of York in Northumbria. It is known that Guthfrith was succeeded by Sigurd and Cnut, although whether these men ruled jointly or one after the other is uncertain. Northumbria may have been divided by this time between the Viking kings in York and the local rulers, perhaps represented by Eadulf, based at Bamburgh who controlled the lands from the River Tyne or River Tees to the Forth in the north. In Ireland, Flann Sinna, married to Constantine's aunt Máel Muire, was dominant. The years around 900 represented a period of weakness among the Vikings and Norse-Gaels of Dublin. They are reported to have been divided between two rival leaders. In 894 one group left Dublin, perhaps settling on the Irish Sea coast of Britain between the River Mersey and the Firth of Clyde. The remaining Dubliners were expelled in 902 by Flann Sinna's son-in-law Cerball mac Muirecáin, and soon afterwards appeared in western and northern Britain. To the south-west of Constantine's lands lay the kingdom of Strathclyde. This extended north into the Lennox, east to the River Forth, and south into the Southern Uplands. In 900 it was probably ruled by King Dyfnwal. The situation of the Gaelic kingdoms of Dál Riata in western Scotland is uncertain. No kings are known by name after Áed mac Boanta. The Frankish "Annales Bertiniani" may record the conquest of the Inner Hebrides, the seaward part of Dál Riata, by Northmen in 849. In addition to these, the arrival of new groups of Vikings from northern and western Europe was still commonplace. Whether there were Viking or Norse-Gael kingdoms in the Western Isles or the Northern Isles at this time is debated. Áed, Constantine's father, succeeded Constantine's uncle and namesake Constantine I in 876 but was killed in 878. Áed's short reign is glossed as being of no importance by most king lists. Although the date of his birth is nowhere recorded, Constantine II cannot have been born any later than the year after his father's death, that is 879. His name may suggest that he was born rather earlier, during the reign of his uncle Constantine I. After Áed's death, there is a two-decade gap until the death of Donald II (Domnall mac Constantín) in 900 during which nothing is reported in the Irish annals. The entry for the reign between Áed and Donald II is corrupt in the "Chronicle of the Kings of Alba", and in this case the "Chronicle" is at variance with every other king list. 
According to the "Chronicle", Áed was followed by Eochaid, a grandson of Kenneth MacAlpin, who is somehow connected with Giric, but all other lists say that Giric ruled after Áed and make great claims for him. Giric is not known to have been a kinsman of Kenneth's, although it has been suggested that he was related to him by marriage. The major changes in Pictland which began at about this time have been associated by Alex Woolf and Archie Duncan with Giric's reign. Woolf suggests that Constantine and his cousin Donald may have passed Giric's reign in exile in Ireland, where their aunt Máel Muire was wife of two successive High Kings of Ireland, Áed Findliath and Flann Sinna. Giric died in 889. If he had been in exile, Constantine may then have returned to Pictland, where his cousin Donald II became king. Donald's reputation is suggested by the epithet "dásachtach", a word used of violent madmen and mad bulls, attached to him in the 11th-century writings of Flann Mainistrech, and echoed by his description in the "Prophecy of Berchán" as "the rough one who will think relics and psalms of little worth". Wars with the Viking kings in Britain and Ireland continued during Donald's reign and he was probably killed fighting yet more Vikings at Dunnottar in the Mearns in 900. Constantine succeeded him as king. The earliest event recorded in the "Chronicle of the Kings of Alba" in Constantine's reign is an attack by Vikings and the plundering of Dunkeld "and all Albania" in his third year. This is the first use of the word Albania, the Latin form of the Old Irish "Alba", in the "Chronicle", which until then describes the lands ruled by the descendants of Cináed as Pictavia. These Northmen may have been some of those who were driven out of Dublin in 902, but could also have been the same group who had defeated Domnall in 900. The "Chronicle" states that the Northmen were killed in "Srath Erenn", which is confirmed by the "Annals of Ulster", which record the death of Ímar grandson of Ímar and many others at the hands of the men of Fortriu in 904. This Ímar was the first of the Uí Ímair, that is the grandsons of Ímar, to be reported; three more grandsons of Ímar appear later in Constantine's reign. The "Fragmentary Annals of Ireland" contain an account of the battle, and this attributes the defeat of the Norsemen to the intercession of Saint Columba following fasting and prayer. An entry in the "Chronicon Scotorum" under the year 904 may possibly contain a corrupted reference to this battle. The next event reported by the "Chronicle of the Kings of Alba" is dated to 906. This records that: King Constantine and Bishop Cellach met at the "Hill of Belief" near the royal city of Scone and pledged themselves that the laws and disciplines of the faith, and the laws of churches and gospels, should be kept "pariter cum Scottis". The meaning of this entry, and its significance, have been the subject of debate. The phrase "pariter cum Scottis" in the Latin text of the "Chronicle" has been translated in several ways. William Forbes Skene and Alan Orr Anderson proposed that it should be read as "in conformity with the customs of the Gaels", relating it to the claims in the king lists that Giric liberated the church from secular oppression and adopted Irish customs. It has been read as "together with the Gaels", suggesting either public participation or the presence of Gaels from the western coasts as well as the people of the east coast. 
Finally, it is suggested that it was the ceremony which followed "the custom of the Gaels" and not the agreements. The idea that this gathering agreed to uphold Irish laws governing the church has suggested that it was an important step in the gaelicisation of the lands east of Druim Alban. Others have proposed that the ceremony in some way endorsed Constantine's kingship, prefiguring later royal inaugurations at Scone. Alternatively, if Bishop Cellach was appointed by Giric, it may be that the gathering was intended to heal a rift between king and church. Following the events at Scone, there is little of substance reported for a decade. A story in the "Fragmentary Annals of Ireland", perhaps referring to events some time after 911, claims that Queen Æthelflæd, who ruled in Mercia, allied with the Irish and northern rulers against the Norsemen on the Irish Sea coasts of Northumbria. The "Annals of Ulster" record the defeat of an Irish fleet from the kingdom of Ulaid by Vikings "on the coast of England" at about this time. In this period the "Chronicle of the Kings of Alba" reports the death of Cormac mac Cuilennáin, king of Munster, in the eighth year of Constantine's reign. This is followed by an undated entry which was formerly read as "In his time Domnall [i.e. Dyfnwal], king of the [Strathclyde] Britons died, and Domnall son of Áed was elected". This was thought to record the election of a brother of Constantine named Domnall to the kingship of the Britons of Strathclyde and was seen as early evidence of the domination of Strathclyde by the kings of Alba. The entry in question is now read as "...Dyfnwal... and Domnall son of Áed king of Ailech died", this Domnall, a son of Áed Findliath, having died in 915. Finally, the deaths of Flann Sinna and Niall Glúndub are recorded. There are more reports of Viking fleets in the Irish Sea from 914 onwards. By 916 fleets under Sihtric Cáech and Ragnall, said to be grandsons of Ímar (that is, they belonged to the same Uí Ímair kindred as the Ímar who was killed in 904), were very active in Ireland. Sihtric inflicted a heavy defeat on the armies of Leinster and retook Dublin in 917. The following year Ragnall appears to have returned across the Irish Sea intent on establishing himself as king at York. The only precisely dated event in the summer of 918 is the death of Queen Æthelflæd on 12 June 918 at Tamworth, Staffordshire. Æthelflæd had been negotiating with the Northumbrians to obtain their submission, but her death put an end to this and her successor, her brother Edward the Elder, was occupied with securing control of Mercia. The northern part of Northumbria, and perhaps the whole kingdom, had probably been ruled by Ealdred son of Eadulf since 913. Faced with Ragnall's invasion, Ealdred came north seeking assistance from Constantine. The two advanced south to face Ragnall, and this led to a battle somewhere on the banks of the River Tyne, probably at Corbridge where Dere Street crosses the river. The Battle of Corbridge appears to have been indecisive; the "Chronicle of the Kings of Alba" is alone in giving Constantine the victory. The report of the battle in the "Annals of Ulster" says that none of the kings or mormaers among the men of Alba were killed. This is the first surviving use of the word mormaer; other than the knowledge that Constantine's kingdom had its own bishop or bishops and royal villas, this is the only hint as to the institutions of the kingdom. After Corbridge, Ragnall enjoyed only a short respite. 
In the south, Alfred's son Edward had rapidly secured control of Mercia and had a burh constructed at Bakewell in the Peak District from which his armies could easily strike north. An army from Dublin led by Ragnall's kinsman Sihtric struck at north-western Mercia in 919, but in 920 or 921 Edward met with Ragnall and other kings. The "Anglo-Saxon Chronicle" states that these kings "chose Edward as father and lord". Among the other kings present were Constantine, Ealdred son of Eadwulf, and the king of Strathclyde, Owain ap Dyfnwal. Here, again, a new term appears in the record, the "Anglo-Saxon Chronicle" for the first time using the word "scottas", from which Scots derives, to describe the inhabitants of Constantine's kingdom in its report of these events. Edward died in 924. His realms appear to have been divided, with the West Saxons recognising Ælfweard while the Mercians chose Æthelstan, who had been raised at Æthelflæd's court. Ælfweard died within weeks of his father, and Æthelstan was inaugurated as king of all of Edward's lands in 925. By 926 Sihtric had evidently acknowledged Æthelstan as over-king, adopting Christianity and marrying a sister of Æthelstan at Tamworth. Within the year he may have abandoned his new faith and repudiated his wife, but before Æthelstan and he could fight, Sihtric died suddenly in 927. His kinsman, perhaps brother, Gofraid, who had remained as his deputy in Dublin, came from Ireland to take power in York, but failed. Æthelstan moved quickly, seizing much of Northumbria. In less than a decade, the kingdom of the English had become by far the greatest power in Britain and Ireland, perhaps stretching as far north as the Firth of Forth. John of Worcester's chronicle suggests that Æthelstan faced opposition from Constantine, from Owain, and from the Welsh kings. William of Malmesbury writes that Gofraid, together with Sihtric's young son Olaf Cuaran, fled north and received refuge from Constantine, which led to war with Æthelstan. A meeting at Eamont Bridge on 12 July 927 was sealed by an agreement that Constantine, Owain, Hywel Dda, and Ealdred would "renounce all idolatry": that is, they would not ally with the Viking kings. William states that Æthelstan stood godfather to a son of Constantine, probably Indulf (Ildulb mac Constantín), during the conference. Æthelstan followed up his advances in the north by securing the recognition of the Welsh kings. For the next seven years, the record of events in the north is blank. Æthelstan's court was attended by the Welsh kings, but not by Constantine or Owain. This absence of record means that Æthelstan's reasons for marching north against Constantine in 934 are unclear. Æthelstan's campaign is reported in brief by the "Anglo-Saxon Chronicle", and later chroniclers such as John of Worcester, William of Malmesbury, Henry of Huntingdon, and Symeon of Durham add detail to that bald account. Æthelstan's army began gathering at Winchester by May 934 and had reached Nottingham by early June. He was accompanied by many leaders, including the Welsh kings Hywel Dda, Idwal Foel, and Morgan ab Owain. From Mercia the army went north, stopping at Chester-le-Street, before resuming the march accompanied by a fleet of ships. Owain was defeated, and Symeon states that the army went as far north as Dunnottar and Fortriu, while the fleet is said to have raided Caithness, by which a much larger area, including Sutherland, is probably intended. 
It is unlikely that Constantine's personal authority extended so far north, and while the attacks may have been directed at his allies, they may also have been simple looting expeditions. The "Annals of Clonmacnoise" state that "the Scottish men compelled [Æthelstan] to return without any great victory", while Henry of Huntingdon claims that the English faced no opposition. A negotiated settlement may have ended matters: according to John of Worcester, a son of Constantine was given as a hostage to Æthelstan and Constantine himself accompanied the English king on his return south. He witnessed a charter with Æthelstan at Buckingham on 13 September 934 in which he is described as "subregulus", that is, a king acknowledging Æthelstan's overlordship. The following year, Constantine was again in England at Æthelstan's court, this time at Cirencester, where he appears as a witness, the first of several subject kings, followed by Owain and Hywel Dda, who subscribed to the diploma. At Christmas of 935, Owain was once more at Æthelstan's court along with the Welsh kings, but Constantine was not. His return to England less than two years later would be in very different circumstances. Following his disappearance from Æthelstan's court after 935, there is no further report of Constantine until 937. In that year, together with Owain and Olaf Guthfrithson of Dublin, Constantine invaded England. The resulting battle of Brunanburh—"Dún Brunde"—is reported in the "Annals of Ulster" as follows: a great battle, lamentable and terrible was cruelly fought... in which fell uncounted thousands of the Northmen.  ...And on the other side, a multitude of Saxons fell; but Æthelstan, the king of the Saxons, obtained a great victory. The battle was remembered in England a generation later as "the Great Battle". When reporting the battle, the "Anglo-Saxon Chronicle" abandons its usual terse style in favour of a heroic poem vaunting the great victory. In this the "hoary" Constantine, by now around 60 years of age, is said to have lost a son in the battle, a claim which the "Chronicle of the Kings of Alba" confirms. The "Annals of Clonmacnoise" give his name as Cellach. For all its fame, the site of the battle is uncertain and several sites have been advanced, with Bromborough on the Wirral the most favoured location. Brunanburh, for all that it had been a famous and bloody battle, settled nothing. On 27 October 939 Æthelstan, the "pillar of the dignity of the western world" in the words of the "Annals of Ulster", died; he was buried at Malmesbury. He was succeeded by his brother Edmund, then aged 18. Æthelstan's empire, seemingly made safe by the victory of Brunanburh, collapsed in little more than a year from his death when Amlaíb (Olaf Guthfrithson) returned from Ireland and seized Northumbria and the Mercian Danelaw. Edmund spent the remainder of Constantine's reign rebuilding the empire. For Constantine's last years as king there is only the meagre record of the "Chronicle of the Kings of Alba". The death of Æthelstan is reported, as are two others. The first of these, in 938, is that of Dubacan, mormaer of Angus or son of the mormaer. Unlike the report of 918, on this occasion the title mormaer is attached to a geographical area, but it is unknown whether the Angus of 938 was in any way similar to the later mormaerdom or earldom. The second death, entered with that of Æthelstan, is that of Eochaid mac Ailpín, who may, from his name, have been a kinsman of Constantine. By the early 940s Constantine was an old man, perhaps more than 70 years of age. 
The kingdom of Alba was too new to be said to have a customary rule of succession, but Pictish and Irish precedents favoured an adult successor descended from Kenneth MacAlpin. Constantine's surviving son Indulf, probably baptised in 927, would have been too young to be a serious candidate for the kingship in the early 940s, and the obvious heir was Constantine's nephew, Malcolm I. As Malcolm was born no later than 901, by the 940s he was no longer a young man, and may have been impatient. Willingly or not—the 11th-century "Prophecy of Berchán", a verse history in the form of a supposed prophecy, states that it was not a voluntary decision—Constantine abdicated in 943 and entered a monastery, leaving the kingdom to Malcolm. Although his retirement may have been involuntary, the "Life" of Cathróe of Metz and the "Prophecy of Berchán" portray Constantine as a devout king. The monastery which Constantine retired to, and where he is said to have been abbot, was probably that of St Andrews. This had been refounded in his reign and given to the reforming Céli Dé (Culdee) movement. The Céli Dé were subsequently to be entrusted with many monasteries throughout the kingdom of Alba until replaced in the 12th century by new orders imported from France. Seven years later the "Chronicle of the Kings of Alba" says:[Malcolm I] plundered the English as far as the river Tees, and he seized a multitude of people and many herds of cattle: and the Scots called this the raid of Albidosorum, that is, Nainndisi. But others say that Constantine made this raid, asking of the king, Malcolm, that the kingship should be given to him for a week's time, so that he could visit the English. In fact, it was Malcolm who made the raid, but Constantine incited him, as I have said. Woolf suggests that the association of Constantine with the raid is a late addition, one derived from a now-lost saga or poem. Constantine's death in 952 is recorded by the Irish annals, who enter it among ecclesiastics. His son Indulf would become king on Malcolm's death. The last of Constantine's certain descendants to be king in Alba was a great-grandson, Constantine III (Constantín mac Cuiléin). Another son had died at Brunanburh, and, according to John of Worcester, Amlaíb mac Gofraid was married to a daughter of Constantine. It is possible that Constantine had other children, but like the name of his wife, or wives, this has not been recorded. The form of kingdom which appeared in Constantine's reign continued in much the same way until the Davidian Revolution in the 12th century. As with his ecclesiastical reforms, his political legacy was the creation of a new form of Scottish kingship that lasted for two centuries after his death. Cricket World Cup The ICC Cricket World Cup is the international championship of One Day International (ODI) cricket. The event is organised by the sport's governing body, the International Cricket Council (ICC), every four years, with preliminary qualification rounds leading up to a finals tournament. The tournament is one of the world's most viewed sporting events and is considered the "flagship event of the international cricket calendar" by the ICC. The first World Cup was organised in England in June 1975, with the first ODI cricket match having been played only four years earlier. 
However, a separate Women's Cricket World Cup had been held two years before the first men's tournament, and a tournament involving multiple international teams had been held as early as 1912, when a triangular tournament of Test matches was played between Australia, England and South Africa. The first three World Cups were held in England. From the 1987 tournament onwards, hosting has been shared between countries under an unofficial rotation system, with fourteen ICC members having hosted at least one match in the tournament. The World Cup is open to all members of the International Cricket Council (ICC), although the highest-ranking teams receive automatic qualification. The remaining teams are determined via the World Cricket League and the ICC World Cup Qualifier. A total of twenty teams have competed in the eleven editions of the tournament, with fourteen competing in the latest edition in 2015; the next edition in 2019 will have only ten teams. Australia has won the tournament five times; the West Indies and India have each won it twice, while Pakistan and Sri Lanka have each won it once. The best performance by a non-full-member team came when Kenya made the semi-finals of the 2003 tournament. In terms of viewership and crowd attendance, the tournament is the world's third-biggest sporting event, behind only the FIFA World Cup and the Summer Olympics, and just ahead of the Rugby World Cup. The first international cricket match was played between Canada and the United States, on 24 and 25 September 1844. However, the first credited Test match was played in 1877 between Australia and England, and the two teams competed regularly for The Ashes in subsequent years. South Africa was admitted to Test status in 1889. Representative cricket teams were selected to tour each other, resulting in bilateral competition. Cricket was also included as an Olympic sport at the 1900 Paris Games, where Great Britain defeated France to win the gold medal. This was the only appearance of cricket at the Summer Olympics. The first multilateral competition at international level was the 1912 Triangular Tournament, a Test cricket tournament played in England between all three Test-playing nations at the time: England, Australia and South Africa. The event was not a success: the summer was exceptionally wet, making play difficult on damp uncovered pitches, and attendances were poor, attributed to a "surfeit of cricket". Since then, international Test cricket has generally been organised as bilateral series: a multilateral Test tournament was not organised again until the triangular Asian Test Championship in 1999. The number of nations playing Test cricket increased gradually over time, with the addition of West Indies in 1928, New Zealand in 1930, India in 1932, and Pakistan in 1952. However, international cricket continued to be played as bilateral Test matches over three, four or five days. In the early 1960s, English county cricket teams began playing a shortened version of cricket which only lasted for one day. Starting in 1962 with a four-team knockout competition known as the Midlands Knock-Out Cup, and continuing with the inaugural Gillette Cup in 1963, one-day cricket grew in popularity in England. A national Sunday League was formed in 1969. The first One-Day International match was played on the fifth day of a rain-aborted Test match between England and Australia at Melbourne in 1971, to fill the time available and as compensation for the frustrated crowd. It was a forty-over game with eight balls per over. 
In the late 1970s, Kerry Packer established the rival World Series Cricket (WSC) competition. It introduced many of the now commonplace features of One Day International cricket, including coloured uniforms, matches played at night under floodlights with a white ball and dark sight screens, and, for television broadcasts, multiple camera angles, effects microphones to capture sounds from the players on the pitch, and on-screen graphics. The first of the matches with coloured uniforms was the WSC Australians in wattle gold versus WSC West Indians in coral pink, played at VFL Park in Melbourne on 17 January 1979. The success and popularity of the domestic one-day competitions in England and other parts of the world, as well as the early One-Day Internationals, prompted the ICC to consider organising a Cricket World Cup. The inaugural Cricket World Cup was hosted in 1975 by England, the only nation able to put forward the resources to stage an event of such magnitude at the time. The 1975 tournament started on 7 June. The first three events were held in England and officially known as the Prudential Cup after the sponsors Prudential plc. The matches consisted of 60 six-ball overs per team, played during the daytime in traditional form, with the players wearing cricket whites and using red cricket balls. Eight teams participated in the first tournament: Australia, England, India, New Zealand, Pakistan, and the West Indies (the six Test nations at the time), together with Sri Lanka and a composite team from East Africa. One notable omission was South Africa, who were banned from international cricket due to apartheid. The tournament was won by the West Indies, who defeated Australia by 17 runs in the final at Lord's. The 1979 World Cup saw the introduction of the ICC Trophy competition to select non-Test playing teams for the World Cup, with Sri Lanka and Canada qualifying. The West Indies won a second consecutive World Cup tournament, defeating the hosts England by 92 runs in the final. At a meeting which followed the World Cup, the International Cricket Conference agreed to make the competition a quadrennial event. The 1983 event was hosted by England for a third consecutive time. By this stage, Sri Lanka had become a Test-playing nation, and Zimbabwe qualified through the ICC Trophy. A fielding circle was introduced, away from the stumps. Four fieldsmen needed to be inside it at all times. The teams faced each other twice, before moving into the knock-outs. India, an outsider, quoted at 66–1 to win by bookmakers before the competition began, were crowned champions after upsetting the West Indies by 43 runs in the final. India and Pakistan jointly hosted the 1987 tournament, the first time that the competition was held outside England. The games were reduced from 60 to 50 overs per innings, the current standard, because of the shorter daylight hours in the Indian subcontinent compared with England's summer. Australia won the championship by defeating England by 7 runs in the final, the closest margin in World Cup final history. The 1992 World Cup, held in Australia and New Zealand, introduced many changes to the game, such as coloured clothing, white balls, day/night matches, and a change to the fielding restriction rules. The South African cricket team participated in the event for the first time, following the fall of the apartheid regime and the end of the international sports boycott. 
Pakistan overcame a dismal start in the tournament to eventually defeat England by 22 runs in the final and emerge as winners. The 1996 championship was held in the Indian subcontinent for a second time, with the inclusion of Sri Lanka as host for some of its group stage matches. In the semi-final, Sri Lanka, heading towards a crushing victory over India at Eden Gardens after the hosts lost eight wickets while scoring 120 runs in pursuit of 252, were awarded victory by default after crowd unrest broke out in protest against the Indian performance. Sri Lanka went on to win their maiden championship by defeating Australia by seven wickets in the final at Lahore. In 1999 the event was hosted by England, with some matches also being held in Scotland, Ireland, Wales and the Netherlands. Twelve teams contested the World Cup. Australia qualified for the semi-finals after reaching their target in their Super 6 match against South Africa in the final over of the match. They then proceeded to the final after a tied semi-final, also against South Africa, in which a mix-up between South African batsmen Lance Klusener and Allan Donald saw Donald drop his bat and be left stranded mid-pitch, where he was run out. In the final, Australia dismissed Pakistan for 132 and then reached the target in less than 20 overs and with eight wickets in hand. South Africa, Zimbabwe and Kenya hosted the 2003 World Cup. The number of teams participating in the event increased from twelve to fourteen. Kenya's victories over Sri Lanka and Zimbabwe, among others – and a forfeit by the New Zealand team, which refused to play in Kenya because of security concerns – enabled Kenya to reach the semi-finals, the best result by an associate. In the final, Australia made 359 runs for the loss of two wickets, the largest ever total in a final, defeating India by 125 runs. In 2007 the tournament was hosted by the West Indies and expanded to sixteen teams. Following Pakistan's upset loss to World Cup debutants Ireland in the group stage, Pakistani coach Bob Woolmer was found dead in his hotel room. Jamaican police had initially launched a murder investigation into Woolmer's death but later confirmed that he died of heart failure. Australia defeated Sri Lanka in the final by 53 runs (D/L) in farcical light conditions, extending their undefeated run in the World Cup to 29 matches and winning three straight championships. India, Sri Lanka and Bangladesh together hosted the 2011 Cricket World Cup. Pakistan were stripped of their hosting rights following the terrorist attack on the Sri Lankan cricket team in 2009, with the games originally scheduled for Pakistan redistributed to the other host countries. The number of teams participating in the World Cup dropped back to fourteen. Australia lost their final group stage match against Pakistan on 19 March 2011, ending an unbeaten streak of 35 World Cup matches, which had begun on 23 May 1999. India won their second World Cup title by beating Sri Lanka by 6 wickets in the final in Mumbai, and became the first country to win the final on home soil. Australia and New Zealand jointly hosted the 2015 Cricket World Cup. The number of participants remained at fourteen. Ireland was the most successful Associate nation, with a total of three wins in the tournament. New Zealand beat South Africa in a thrilling first semi-final to qualify for their maiden World Cup final. Australia defeated New Zealand by seven wickets in the final at Melbourne to lift the World Cup for the fifth time. 
The Test-playing nations qualify automatically for the World Cup main event, while the other teams have to qualify through a series of preliminary qualifying tournaments. A new qualifying format was introduced for the 2015 Cricket World Cup. The top two teams of the 2011–13 ICC World Cricket League Championship qualify directly. The remaining six teams join the third and fourth-placed teams of the 2011 ICC World Cricket League Division Two and the top two teams of the 2013 ICC World Cricket League Division Three in the World Cup Qualifier to decide the remaining two places. Qualifying tournaments were introduced for the second World Cup, where two of the eight places in the finals were awarded to the leading teams in the ICC Trophy. The number of teams selected through the ICC Trophy has varied over the years. The World Cricket League (administered by the International Cricket Council) is the qualification system provided to allow the Associate and Affiliate members of the ICC more opportunities to qualify. The name "ICC Trophy" has been changed to "ICC World Cup Qualifier". Under the current qualifying process (the World Cricket League), all Associate and Affiliate members of the ICC are able to qualify for the World Cup. Associate and Affiliate members must play between two and five stages in the ICC World Cricket League to qualify for the World Cup finals, depending on the Division in which they start the qualifying process. The format of the Cricket World Cup has changed greatly over the course of its history. Each of the first four tournaments was played by eight teams, divided into two groups of four. The competition consisted of two stages, a group stage and a knock-out stage. The four teams in each group played each other in the round-robin group stage, with the top two teams in each group progressing to the semi-finals. The winners of the semi-finals played against each other in the final. With South Africa returning in the fifth tournament in 1992 as a result of the end of the apartheid boycott, nine teams played each other once in the group phase, and the top four teams progressed to the semi-finals. The tournament was further expanded in 1996, with two groups of six teams. The top four teams from each group progressed to quarter-finals and semi-finals. A distinct format was used for the 1999 and 2003 World Cups. The teams were split into two pools, with the top three teams in each pool advancing to the "Super 6". The "Super 6" teams played the three other teams that advanced from the other group. As they advanced, the teams carried their points forward from previous matches against other teams advancing alongside them, giving them an incentive to perform well in the group stages. The top four teams from the "Super 6" stage progressed to the semi-finals, with the winners playing in the final. The format used in the 2007 World Cup involved 16 teams allocated into four groups of four. Within each group, the teams played each other in a round-robin format. Teams earned points for wins and half-points for ties. The top two teams from each group moved forward to the "Super 8" round. The "Super 8" teams played the other six teams that progressed from the different groups. Teams earned points in the same way as in the group stage, but carried their points forward from previous matches against the other teams who qualified from the same group to the "Super 8" stage. 
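To make the carry-forward rule concrete, the following minimal sketch (in Python, with invented team names and results; the helper function and the 2/1/0 points values are illustrative assumptions, not taken from any official ICC source) shows how a Super 8-style starting table could be computed: a team entering the second stage keeps only the group-stage points it earned against the other side(s) that qualified from its own group, while points earned against eliminated teams are discarded.

    # Sketch of the 2007-style "Super 8" carry-forward rule (illustrative only).
    # Assumed scoring: win = 2 points, tie/no-result = 1, loss = 0.

    WIN, TIE, LOSS = 2, 1, 0

    def carried_points(group_results, qualifiers):
        """group_results: list of (team, opponent, points) from the group stage.
        qualifiers: the set of teams that advanced from that group."""
        carried = {team: 0 for team in sorted(qualifiers)}
        for team, opponent, points in group_results:
            # Only results between two qualifying teams are carried forward.
            if team in qualifiers and opponent in qualifiers:
                carried[team] += points
        return carried

    # Hypothetical four-team group in which A and B qualify.
    group_results = [
        ("A", "B", WIN), ("B", "A", LOSS),
        ("A", "C", WIN), ("C", "A", LOSS),
        ("A", "D", WIN), ("D", "A", LOSS),
        ("B", "C", WIN), ("C", "B", LOSS),
        ("B", "D", TIE), ("D", "B", TIE),
        ("C", "D", WIN), ("D", "C", LOSS),
    ]

    print(carried_points(group_results, {"A", "B"}))
    # {'A': 2, 'B': 0}: A keeps only its win over B, while B's win and tie
    # against the eliminated sides do not carry forward.

The same idea applies to the 1999 and 2003 "Super 6": only results against the other teams advancing from a team's own pool contributed to its starting total in the next stage.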
The top four teams from the "Super 8" round advanced to the semi-finals, and the winners of the semi-finals played in the final. The format used in the 2011 and 2015 World Cups featured two groups of seven teams, each playing in a round-robin format. The top four teams from each group proceeded to the knockout stage, consisting of quarter-finals, semi-finals and ultimately the final. It is proposed that in the 2019 World Cup the number of teams participating will go down to ten, with all the teams playing each other once in a round-robin format before entering the semi-finals; this would be similar to the format used in the 1992 World Cup. The ICC Cricket World Cup Trophy is presented to the winners of the World Cup. The current trophy was created for the 1999 championships, and was the first permanent prize in the tournament's history. Prior to this, different trophies were made for each World Cup. The trophy was designed and produced in London by a team of craftsmen from Garrard & Co over a period of two months. The current trophy is made from silver and gilt, and features a golden globe held up by three silver columns. The columns, shaped as stumps and bails, represent the three fundamental aspects of cricket: batting, bowling and fielding, while the globe characterises a cricket ball. The seam is tilted to symbolise the axial tilt of the Earth. It stands 60 centimetres high and weighs approximately 11 kilograms. The names of the previous winners are engraved on the base of the trophy, with space for a total of twenty inscriptions. The ICC keeps the original trophy. A replica, differing only in the inscriptions, is permanently awarded to the winning team. The tournament is the world's third largest, with only the FIFA World Cup and the Summer Olympics exceeding it. The 2011 Cricket World Cup final was televised in over 200 countries to over 2.2 billion television viewers. Television rights, mainly for the 2011 and 2015 World Cups, were sold for over US$1.1 billion, and sponsorship rights were sold for a further US$500 million. The 2003 Cricket World Cup matches were attended by 626,845 people, while the 2007 Cricket World Cup sold more than 672,000 tickets. The 2015 World Cup sold over 1.1 million tickets, which was a record. Successive World Cup tournaments have generated increasing media attention as One-Day International cricket has become more established. The 2003 World Cup in South Africa was the first to sport a mascot, Dazzler the zebra. An orange mongoose known as "Mello" was the mascot for the 2007 Cricket World Cup. Stumpy, a blue elephant, was the mascot for the 2011 World Cup. On 13 February, the opening of the 2015 tournament was celebrated with a Google Doodle. The International Cricket Council's executive committee votes for the hosts of the tournament after examining the bids made by the nations keen to hold a Cricket World Cup. England hosted the first three competitions. The ICC decided that England should host the first tournament because it was ready to devote the resources required to organise the inaugural event. India volunteered to host the third Cricket World Cup, but most ICC members preferred England, as the longer period of daylight in England in June meant that a match could be completed in one day. The 1987 Cricket World Cup was held in India and Pakistan, the first hosted outside England. 
Many of the tournaments have been jointly hosted by nations from the same geographical region, such as South Asia in 1987, 1996 and 2011, Australasia in 1992 and 2015, Southern Africa in 2003 and the West Indies in 2007. Twenty nations have qualified for the Cricket World Cup at least once. Seven teams have competed in every tournament, five of which have won the title. The West Indies won the first two tournaments; Australia has won five titles, India two, and Pakistan and Sri Lanka one each. The West Indies (1975 and 1979) and Australia (1999, 2003 and 2007) are the only teams to have won consecutive titles. Australia has played in seven of the eleven finals (1975, 1987, 1996, 1999, 2003, 2007, 2015). England has yet to win the World Cup, but has been runners-up three times (1979, 1987, 1992). The best result by a non-Test playing nation is the semi-final appearance by Kenya in the 2003 tournament, while the best result by a non-Test playing team on their debut is Ireland's run to the Super 8 (second round) in 2007. Sri Lanka, as a co-host of the 1996 Cricket World Cup, was the first host to win the tournament, though the final was held in Pakistan. India won in 2011 as host and was the first team to win a final played in their own country. Australia repeated the feat in 2015. England is the only other host to have made the final, in 1979. Other countries which have achieved or equalled their best World Cup results while co-hosting the tournament are New Zealand, as finalists in 2015; Zimbabwe, who reached the Super Six in 2003; and Kenya, as semi-finalists in 2003. In 1987, co-hosts India and Pakistan both reached the semi-finals, but were eliminated by Australia and England respectively. Australia in 1992, England in 1999, South Africa in 2003, and Bangladesh in 2011 have been the host teams that were eliminated in the first round. An overview of the teams' performances in every World Cup follows. Before the 1992 World Cup, South Africa was banned due to apartheid. Until the 1987 World Cup, rankings were determined by number of wins, followed by run-rate; from the 1992 World Cup onwards, they have been determined by number of points, followed by head-to-head performance and then net run-rate. The table below provides an overview of the performances of teams over past World Cups, as of the end of the group stage of the 2015 tournament. Teams are sorted by best performance, then by appearances, total number of wins, total number of games, and alphabetical order. Since 1992, one player has been declared "Man of the Tournament" at the end of the World Cup finals; there were no Man of the Tournament awards before 1992, but Man of the Match awards have always been given for individual matches. Winning the Man of the Match award in the final is particularly noteworthy, as it indicates the player deemed to have played the biggest part in the World Cup final. To date the award has always gone to a member of the winning side. 

Chagas disease Chagas disease, also known as American trypanosomiasis, is a tropical parasitic disease caused by the protist "Trypanosoma cruzi". It is spread mostly by insects known as Triatominae, or "kissing bugs". The symptoms change over the course of the infection. In the early stage, symptoms are typically either not present or mild, and may include fever, swollen lymph nodes, headaches, or local swelling at the site of the bite. 
After 8–12 weeks, individuals enter the chronic phase of the disease, which in 60–70% of cases never produces further symptoms. The other 30 to 40% of people develop further symptoms 10 to 30 years after the initial infection, including enlargement of the ventricles of the heart in 20 to 30%, leading to heart failure. An enlarged esophagus or an enlarged colon may also occur in 10% of people. "T. cruzi" is commonly spread to humans and other mammals by the blood-sucking "kissing bugs" of the subfamily Triatominae. These insects are known by a number of local names, including "vinchuca" in Argentina, Bolivia, Chile and Paraguay, "barbeiro" (the barber) in Brazil, "pito" in Colombia, "chinche" in Central America, and "chipo" in Venezuela. The disease may also be spread through blood transfusion, organ transplantation, eating food contaminated with the parasites, and by vertical transmission (from a mother to her fetus). Diagnosis of early disease is by finding the parasite in the blood using a microscope. Chronic disease is diagnosed by finding antibodies for "T. cruzi" in the blood. Prevention mostly involves eliminating kissing bugs and avoiding their bites. This may involve the use of insecticides or bed-nets. Other preventive efforts include screening blood used for transfusions. A vaccine has not been developed as of 2017. Early infections are treatable with the medication benznidazole or nifurtimox. Medication nearly always results in a cure if given early, but becomes less effective the longer a person has had Chagas disease. When used in chronic disease, medication may delay or prevent the development of end-stage symptoms. Benznidazole and nifurtimox cause temporary side effects in up to 40% of people, including skin disorders, brain toxicity, and digestive system irritation. It is estimated that 6.6 million people, mostly in Mexico, Central America and South America, have Chagas disease as of 2015. In 2015, Chagas was estimated to result in 8,000 deaths. Most people with the disease are poor, and most do not realize they are infected. Large-scale population movements have increased the areas where Chagas disease is found, and these include many European countries and the United States. These areas had also seen an increase in cases in the years up to 2014. The disease was first described in 1909 by the Brazilian physician Carlos Chagas, after whom it is named. Chagas disease is classified as a neglected tropical disease. It affects more than 150 other animals. The human disease occurs in two stages: an acute stage, which occurs shortly after an initial infection, and a chronic stage that develops over many years. The acute phase lasts for the first few weeks or months of infection. It usually goes unnoticed because it is symptom-free or exhibits only mild symptoms that are not unique to Chagas disease. These can include fever, fatigue, body aches, muscle pain, headache, rash, loss of appetite, diarrhea, nausea, and vomiting. The signs on physical examination can include mild enlargement of the liver or spleen, swollen glands, and local swelling (a chagoma) where the parasite entered the body. The most recognized marker of acute Chagas disease is called Romaña's sign, which includes swelling of the eyelids on the side of the face near the bite wound or where the bug feces were deposited or accidentally rubbed into the eye. Rarely, young children or adults may die from the acute disease due to severe inflammation/infection of the heart muscle (myocarditis) or brain (meningoencephalitis). 
The acute phase can also be severe in people with weakened immune systems. If symptoms develop during the acute phase, they usually resolve spontaneously within three to eight weeks in approximately 90% of individuals. Although the symptoms resolve, even with treatment the infection persists and enters a chronic phase. Of individuals with chronic Chagas disease, 60–80% will never develop symptoms (called "indeterminate" chronic Chagas disease), while the remaining 20–40% will develop life-threatening heart and/or digestive disorders during their lifetime (called "determinate" chronic Chagas disease). In 10% of individuals, the disease progresses directly from the acute form to a symptomatic clinical form of chronic Chagas disease. The symptomatic (determinate) chronic stage affects the nervous system, digestive system and heart. About two-thirds of people with chronic symptoms have cardiac damage, including dilated cardiomyopathy, which causes heart rhythm abnormalities and may result in sudden death. About one-third of patients go on to develop digestive system damage, resulting in dilation of the digestive tract (megacolon and megaesophagus), accompanied by severe weight loss. Swallowing difficulties (secondary achalasia) may be the first symptom of digestive disturbances and may lead to malnutrition. Between 20% and 50% of individuals with intestinal involvement also exhibit cardiac involvement. Up to 10% of chronically infected individuals develop neuritis that results in altered tendon reflexes and sensory impairment. Isolated cases exhibit central nervous system involvement, including dementia, confusion, chronic encephalopathy and sensory and motor deficits. The clinical manifestations of Chagas disease are due to cell death in the target tissues that occurs during the infective cycle, by sequentially inducing an inflammatory response, cellular lesions, and fibrosis. For example, intracellular amastigotes destroy the intramural neurons of the autonomic nervous system in the intestine and heart, leading to megaintestine and heart aneurysms, respectively. If left untreated, Chagas disease can be fatal, in most cases due to heart muscle damage. In Chagas-endemic areas, the main mode of transmission is through an insect vector called a triatomine bug. A triatomine becomes infected with "T. cruzi" by feeding on the blood of an infected person or animal. During the day, triatomines hide in crevices in the walls and roofs. The bugs emerge at night, when the inhabitants are sleeping. Because they tend to feed on people's faces, triatomine bugs are also known as "kissing bugs". After they bite and ingest blood, they defecate on the person. Triatomines pass "T. cruzi" parasites (called trypomastigotes) in feces left near the site of the bite wound. Scratching the site of the bite causes the trypomastigotes to enter the host through the wound, or through intact mucous membranes, such as the conjunctiva. Once inside the host, the trypomastigotes invade cells, where they differentiate into intracellular amastigotes. The amastigotes multiply by binary fission and differentiate into trypomastigotes, which are then released into the bloodstream. This cycle is repeated in each newly infected cell. Replication resumes only when the parasites enter another cell or are ingested by another vector. Dense vegetation (such as that of tropical rainforests) and urban habitats are not ideal for the establishment of the human transmission cycle. 
However, in regions where the sylvatic habitat and its fauna are thinned by economic exploitation and human habitation, such as in newly deforested areas, piassava palm culture areas, and some parts of the Amazon region, a human transmission cycle may develop as the insects search for new food sources. "T. cruzi" can also be transmitted through blood transfusions. With the exception of blood derivatives (such as fractionated antibodies), all blood components are infective. The parasite remains viable at 4 °C for at least 18 days, or up to 250 days when kept at room temperature. It is unclear whether "T. cruzi" can be transmitted through frozen-thawed blood components. Other modes of transmission include organ transplantation, breast milk, and accidental laboratory exposure. Chagas disease can also be spread congenitally (from a pregnant woman to her baby) through the placenta, and accounts for approximately 13% of stillbirths in parts of Brazil. Oral transmission is an unusual route of infection, but has been described. In 1991, farm workers in the state of Paraíba, Brazil, were infected by eating contaminated food; transmission has also occurred via contaminated açaí palm fruit juice and garapa. A 2007 outbreak in 103 Venezuelan schoolchildren was attributed to contaminated guava juice. Chagas disease is a growing problem in Europe, because the majority of cases with chronic infection are asymptomatic and because of migration from Latin America. The presence of "T. cruzi" is diagnostic of Chagas disease. It can be detected by microscopic examination of fresh anticoagulated blood, or its buffy coat, for motile parasites, or by preparation of thin and thick blood smears stained with Giemsa for direct visualization of parasites. Microscopically, "T. cruzi" can be confused with "Trypanosoma rangeli", which is not known to be pathogenic in humans. Isolation of "T. cruzi" can occur by inoculation into mice, by culture in specialized media (for example, NNN or LIT), or by xenodiagnosis, where uninfected Reduviidae bugs are fed on the patient's blood and their gut contents are examined for parasites. Various immunoassays for "T. cruzi" are available and can be used to distinguish among strains (zymodemes of "T. cruzi" with divergent pathogenicities). These tests include complement fixation, indirect hemagglutination, indirect fluorescence assays, radioimmunoassays, and ELISA. Alternatively, diagnosis and strain identification can be made using the polymerase chain reaction (PCR). There is currently no vaccine against Chagas disease. Prevention is generally focused on decreasing the numbers of the insect that spreads it ("Triatoma") and decreasing their contact with humans. This is done by using sprays and paints containing insecticides (synthetic pyrethroids), and by improving housing and sanitary conditions in rural areas. For urban dwellers, spending vacations and camping out in the wilderness or sleeping at hostels or mud houses in endemic areas can be dangerous; a mosquito net is recommended. A number of potential vaccines are currently being tested. Vaccination with "Trypanosoma rangeli" has produced positive results in animal models. More recently, the potential of DNA vaccines for immunotherapy of acute and chronic Chagas disease is being tested by several research groups. 
Blood transfusion was formerly the second-most common mode of transmission for Chagas disease, but the development and implementation of blood bank screening tests has dramatically reduced this risk in the 21st century. Blood donations in all endemic Latin American countries undergo Chagas screening, and testing is expanding in countries, such as France, Spain and the United States, that have significant or growing populations of immigrants from endemic areas. In Spain, donors are evaluated with a questionnaire to identify individuals at risk of Chagas exposure for screening tests. The US FDA has approved two Chagas tests, including one approved in April 2010, and has published guidelines that recommend testing of all donated blood and tissue products. While these tests are not required in US, an estimated 75–90% of the blood supply is currently tested for Chagas, including all units collected by the American Red Cross, which accounts for 40% of the U.S. blood supply. The Chagas Biovigilance Network reports current incidents of Chagas-positive blood products in the United States, as reported by labs using the screening test approved by the FDA in 2007. There are two approaches to treating Chagas disease: antiparasitic treatment, to kill the parasite; and symptomatic treatment, to manage the symptoms and signs of the infection. Management uniquely involves addressing selective incremental failure of the parasympathetic nervous system. Autonomic disease imparted by Chagas may eventually result in megaesophagus, megacolon and accelerated dilated cardiomyopathy. The mechanisms that explain why Chagas targets the parasympathetic autonomic nervous system and spares the sympathetic autonomic nervous system remain poorly understood. Antiparasitic treatment is most effective early in the course of infection, but is not limited to cases in the acute phase. Drugs of choice include azole or nitro derivatives, such as benznidazole or nifurtimox. Both agents are limited in their capacity to completely eliminate "T. cruzi" from the body (parasitologic cure), especially in chronically infected patients, and resistance to these drugs has been reported. Studies suggest antiparasitic treatment leads to parasitological cure in more than 90% of infants but only about 60–85% of adults treated in the first year of acute phase Chagas disease. Children aged six to 12 years with chronic disease have a cure rate of about 60% with benznidazole. While the rate of cure declines the longer an adult has been infected with Chagas, treatment with benznidazole has been shown to slow the onset of heart disease in adults with chronic Chagas infections. Treatment of chronic infection in women prior to or during pregnancy does not appear to reduce the probability the disease will be passed on to the infant. Likewise, it is unclear whether prophylactic treatment of chronic infection is beneficial in persons who will undergo immunosuppression (for example, organ transplant recipients) or in persons who are already immunosuppressed (for example, those with HIV infection). In the chronic stage, treatment involves managing the clinical manifestations of the disease. For example, pacemakers and medications for irregular heartbeats, such as the anti-arrhythmia drug amiodarone, may be life saving for some patients with chronic cardiac disease, while surgery may be required for megaintestine. The disease cannot be cured in this phase, however. 
Chronic heart disease caused by Chagas disease is now a common reason for heart transplantation surgery. Until recently, however, Chagas disease was considered a contraindication for the procedure, since the heart damage could recur as the parasite was expected to seize the opportunity provided by the immunosuppression that follows surgery. Chagas disease affects 8 to 10 million people living in endemic Latin American countries, with an additional 300,000–400,000 living in nonendemic countries, including Spain and the United States. An estimated 41,200 new cases occur annually in endemic countries, and 14,400 infants are born with congenital Chagas disease annually. In 2010 it resulted in approximately 10,300 deaths, up from 9,300 in 1990. The disease is present in 18 countries on the American continents, ranging from the southern United States to northern Argentina. Chagas exists in two different ecological zones. In the Southern Cone region, the main vector lives in and around human homes. In Central America and Mexico, the main vector species lives both inside dwellings and in uninhabited areas. In both zones, Chagas occurs almost exclusively in rural areas, where triatomines breed and feed on the more than 150 species from 24 families of domestic and wild mammals, as well as humans, that are the natural reservoirs of "T. cruzi". Although Triatominae bugs feed on them, birds appear to be immune to infection and therefore are not considered to be a "T. cruzi" reservoir. Even when colonies of insects are eradicated from a house and surrounding domestic animal shelters, they can re-emerge from plants or animals that are part of the ancient, sylvatic (referring to wild animals) infection cycle. This is especially likely in zones with mixed open savannah, with clumps of trees interspersed by human habitation. The primary wildlife reservoirs for "Trypanosoma cruzi" in the United States include opossums, raccoons, armadillos, squirrels, woodrats, and mice. Opossums are particularly important as reservoirs, because the parasite can complete its life cycle in the anal glands of this animal without having to re-enter the insect vector. Recorded prevalence of the disease in opossums in the U.S. ranges from 8.3% to 37.5%. Studies on raccoons in the Southeast have yielded infection rates ranging from 15.5% to 47%. Armadillo prevalence studies, described in Louisiana, range from a low of 1.1% to 28.8%. Additionally, small rodents, including squirrels, mice, and rats, are important in the sylvatic transmission cycle because of their importance as bloodmeal sources for the insect vectors. A Texas study revealed 17.3% "T. cruzi" prevalence in 75 specimens representing four separate small rodent species. Chronic Chagas disease remains a major health problem in many Latin American countries, despite the effectiveness of hygienic and preventive measures, such as eliminating the transmitting insects. However, several landmarks have been achieved in the fight against it in Latin America, including a reduction by 72% of the incidence of human infection in children and young adults in the countries of the Southern Cone Initiative, and at least three countries (Uruguay in 1997, Chile in 1999, and Brazil in 2006) have been certified free of vectorial and transfusional transmission. In Argentina, vectorial transmission has been interrupted in 13 of the 19 endemic provinces, and major progress toward this goal has also been made in both Paraguay and Bolivia. 
Screening of donated blood, blood components, and solid organ donors, as well as donors of cells, tissues, and cell and tissue products for "T. cruzi" is mandated in all Chagas-endemic countries and has been implemented. Approximately 300,000 infected people live in the United States, which is likely the result of immigration from Latin American countries, and there have been 23 cases acquired from kissing bugs in the United States reported between 1955 and 2014. With increased population movements, the possibility of transmission by blood transfusion became more substantial in the United States. Transfusion blood and tissue products are now actively screened in the U.S., thus addressing and minimizing this risk. The disease was named after the Brazilian physician and epidemiologist Carlos Chagas, who first described it in 1909. The disease was not seen as a major public health problem in humans until the 1960s (the outbreak of Chagas disease in Brazil in the 1920s went widely ignored). Dr Chagas discovered that the intestines of Triatomidae (now Reduviidae: Triatominae) harbored a flagellate protozoan, a new species of the genus "Trypanosoma", and was able to demonstrate experimentally that it could be transmitted to marmoset monkeys that were bitten by the infected bug. Later studies showed squirrel monkeys were also vulnerable to infection. Chagas named the pathogenic parasite as "Trypanosoma cruzi" and later that year as "Schizotrypanum cruzi", both honoring Oswaldo Cruz, the noted Brazilian physician and epidemiologist who successfully fought epidemics of yellow fever, smallpox, and bubonic plague in Rio de Janeiro and other cities in the beginning of the 20th century. Chagas was also the first to unknowingly discover and illustrate the parasitic fungal genus "Pneumocystis", later infamously linked to PCP ("Pneumocystis" pneumonia in AIDS victims). Confusion between the two pathogens' life-cycles led him to briefly recognize his genus "Schizotrypanum", but following the description of "Pneumocystis" by others as an independent genus, Chagas returned to the use of the name "Trypanosoma cruzi". In Argentina, the disease is known as "mal de Chagas-Mazza", in honor of Salvador Mazza, the Argentine physician who in 1926 began investigating the disease and over the years became the principal researcher of this disease in the country. Mazza produced the first scientific confirmation of the existence of "Trypanosoma cruzi" in Argentina in 1927, eventually leading to support from local and European medical schools and Argentine government policy makers. It has been hypothesized that Charles Darwin might have suffered from Chagas disease as a result of a bite of the so-called great black bug of the Pampas ("vinchuca") (see Charles Darwin's illness). The episode was reported by Darwin in his diaries of the Voyage of the Beagle as occurring in March 1835 to the east of the Andes near Mendoza. Darwin was young and generally in good health, though six months previously he had been ill for a month near Valparaiso, but in 1837, almost a year after he returned to England, he began to suffer intermittently from a strange group of symptoms, becoming incapacitated for much of the rest of his life. Attempts to test Darwin's remains at Westminster Abbey by using modern PCR techniques were met with a refusal by the Abbey's curator. Several experimental treatments have shown promise in animal models. 
These include inhibitors of oxidosqualene cyclase and squalene synthase, cysteine protease inhibitors, dermaseptins collected from frogs in the genus "Phyllomedusa" ("P. oreades" and "P. distincta"), the sesquiterpene lactone dehydroleucodine (DhL), which affects the growth of cultured epimastigote-phase "Trypanosoma cruzi", inhibitors of purine uptake, and inhibitors of enzymes involved in trypanothione metabolism. It is hoped that new drug targets may be revealed following the sequencing of the "T. cruzi" genome. Chagas disease has a serious economic impact on the United States and the world. The cost of treatment in the United States alone, where the disease is not indigenous, is estimated to be $900 million annually, which includes hospitalization and medical devices such as pacemakers. The global cost is estimated at $7 billion. In one study megazol appeared more active against Chagas disease than benznidazole, but it has not been studied in humans. A Chagas vaccine (TcVac3) has been found to be effective in mice, with plans for studies in dogs; it is hoped that it will be commercially available by 2018. 

Cannon A cannon (plural: "cannon" or "cannons") is a type of gun classified as artillery that launches a projectile using propellant. In the past, gunpowder was the primary propellant before the invention of smokeless powder in the 19th century. Cannon vary in caliber, range, mobility, rate of fire, angle of fire, and firepower; different forms of cannon combine and balance these attributes in varying degrees, depending on their intended use on the battlefield. The word "cannon" is derived from several languages, in which the original definition can usually be translated as "tube", "cane", or "reed". In the modern era, the term "cannon" has fallen into decline, replaced by "guns" or "artillery" if not a more specific term such as "mortar" or "howitzer", except in the field of aerial warfare, where it is often used as shorthand for autocannon. The earliest known depiction of cannon appeared in Song dynasty China as early as the 12th century; however, solid archaeological and documentary evidence of cannon does not appear until the 13th century. In 1288 Yuan dynasty troops are recorded to have used hand cannons in combat, and the earliest extant cannon bearing a date of production comes from the same period. Evidence of cannon next appeared in Europe: by 1326 depictions of cannon had appeared there, and recorded usage began almost immediately. By the end of the 14th century cannon were widespread throughout Eurasia. Cannon were used primarily as anti-infantry weapons until around 1374, when cannon were recorded to have breached walls for the first time in Europe. Cannon featured prominently as siege weapons, and ever larger pieces appeared. In 1464 a 16,000 kg cannon known as the Great Turkish Bombard was created in the Ottoman Empire. Cannon as field artillery became more important after 1453 with the introduction of the limber, which greatly improved cannon maneuverability and mobility. European cannon reached their longer, lighter, more accurate, and more efficient "classic form" around 1480. This classic European cannon design stayed relatively consistent in form with minor changes until the 1750s. "Cannon" is derived from the Old Italian word "cannone", meaning "large tube", which came from Latin "canna", in turn originating from the Greek κάννα ("kanna"), "reed", and then generalised to mean any hollow tube-like object; cognate with Akkadian "qanu(m)" and Hebrew "qāneh", "tube, reed". 
The word has been used to refer to a gun since 1326 in Italy, and 1418 in England. Both "cannons" and "cannon" are correct and in common usage, with one or the other having preference in different parts of the English-speaking world. "Cannons" is more common in North America and Australia, while "cannon" as the plural is more common in the United Kingdom. The cannon may have appeared as early as the 12th century in China, and was probably a parallel development or evolution of the fire-lance, a short-ranged anti-personnel weapon combining a gunpowder-filled tube and a polearm of some sort. Co-viative projectiles such as iron scraps or porcelain shards were placed in fire-lance barrels at some point, and eventually the paper and bamboo materials of fire-lance barrels were replaced by metal. The earliest known depiction of a cannon is a sculpture from the Dazu Rock Carvings in Sichuan dated to 1128; however, the earliest archaeological samples and textual accounts do not appear until the 13th century. The primary extant specimens of cannon from the 13th century are the Wuwei Bronze Cannon dated to 1227, the Heilongjiang hand cannon dated to 1288, and the Xanadu Gun dated to 1298. However, only the Xanadu gun contains an inscription bearing a date of production, so it is considered the earliest confirmed extant cannon. The Xanadu Gun is 34.7 cm in length and weighs 6.2 kg. The other cannon are dated using contextual evidence. The Heilongjiang hand cannon is also considered by some to be the oldest firearm, since it was unearthed near the area where the History of Yuan reports a battle took place involving hand cannon. According to the History of Yuan, in 1288, a Jurchen commander by the name of Li Ting led troops armed with hand cannon into battle against the rebel prince Nayan. Chen Bingying argues there were no guns before 1259, while Dang Shoushan believes the Wuwei gun and other Western Xia era samples point to the appearance of guns by 1220, and Stephen Haw goes even further by stating that guns were developed as early as 1200. Sinologist Joseph Needham and renaissance siege expert Thomas Arnold provide a more conservative estimate of around 1280 for the appearance of the "true" cannon. Whether or not any of these are correct, it seems likely that the gun was born sometime during the 13th century. References to cannon proliferated throughout China in the following centuries. Cannon featured in literary pieces. In 1341 Xian Zhang wrote a poem called "The Iron Cannon Affair", describing a cannonball fired from an eruptor which could "pierce the heart or belly when striking a man or horse, and even transfix several persons at once." By the 1350s the cannon was used extensively in Chinese warfare. In 1358 the Ming army failed to take a city due to its garrison's use of cannon; however, they would themselves use cannon, in the thousands, later on during the siege of Suzhou in 1366. During the Ming dynasty cannon were used in riverine warfare at the Battle of Lake Poyang. One shipwreck in Shandong had a cannon dated to 1377 and an anchor dated to 1372. From the 13th to 15th centuries, cannon-armed Chinese ships also travelled throughout Southeast Asia. The first of the Western cannon to be introduced were breech-loaders in the early 16th century, which the Chinese began producing themselves by 1523 and improved upon later. During the 1593 Siege of Pyongyang, 40,000 Ming troops deployed a variety of cannon against Japanese troops. 
Despite their defensive advantage and the use of arquebuses by Japanese soldiers, the Japanese were at a severe disadvantage due to their lack of cannon. Throughout the Japanese invasions of Korea (1592–98), the Ming-Joseon coalition used artillery widely in land and naval battles, including on the turtle ships of Yi Sun-sin. According to Ivan Petlin, the first Russian envoy to Beijing, in September 1619 the city was armed with large cannon with cannonballs weighing more than 30 kg. His general observation was that the Chinese were militarily capable and had firearms. Outside of China, the earliest texts to mention gunpowder are Roger Bacon's "Opus Majus" (1267) and "Opus Tertium", in what has been interpreted as references to firecrackers. In the early 20th century, a British artillery officer proposed that another work tentatively attributed to Bacon, "Epistola de Secretis Operibus Artis et Naturae, et de Nullitate Magiae", also known as "Opus Minor", dated to 1247, contained an encrypted formula for gunpowder hidden in the text. These claims have been disputed by science historians. In any case, the formula itself is not useful for firearms or even firecrackers, burning slowly and producing mostly smoke. There is a record of a gun in Europe dating to 1322 being discovered in the nineteenth century, but the artifact has since been lost. The earliest known European depiction of a gun appeared in 1326 in a manuscript by Walter de Milemete, although not necessarily drawn by him, known as "De Nobilitatibus, sapientii et prudentiis regum" (Concerning the Majesty, Wisdom, and Prudence of Kings), which displays a gun with a large arrow emerging from it and its user lowering a long stick to ignite the gun through the touch hole. In the same year, another similar illustration showed a darker gun being set off by a group of knights, which also featured in another work of de Milemete's, "De secretis secretorum Aristotelis". On 11 February of that same year, the Signoria of Florence appointed two officers to obtain "canones de mettallo" and ammunition for the town's defense. In the following year a document from the Turin area recorded that a certain amount was paid "for the making of a certain instrument or device made by Friar Marcello for the projection of pellets of lead." A reference from 1331 describes an attack mounted by two Germanic knights on Cividale del Friuli, using gunpowder weapons of some sort. The 1320s seem to have been the takeoff point for guns in Europe according to most modern military historians. Scholars suggest that the lack of gunpowder weapons in a well-traveled Venetian's catalogue for a new crusade in 1321 implies that guns were unknown in Europe up until this point, further solidifying the 1320 mark; however, more evidence in this area may be forthcoming in the future. The oldest extant cannon in Europe is a small bronze example unearthed in Loshult, Scania, in southern Sweden. It dates from the early-to-mid 14th century, and is currently in the Swedish History Museum in Stockholm. Early cannon in Europe often shot arrows and were known by an assortment of names such as "pot-de-fer", "tonnoire", "ribaldis", and "büszenpyle". The "ribaldis", which shot large arrows and simplistic grapeshot, were first mentioned in the English Privy Wardrobe accounts during preparations for the Battle of Crécy, between 1345 and 1346. 
The Florentine Giovanni Villani recounts their destructiveness, indicating that by the end of the battle, "the whole plain was covered by men struck down by arrows and cannon balls." Similar cannon were also used at the Siege of Calais (1346–47), although it was not until the 1380s that the "ribaudekin" clearly became mounted on wheels. The Battle of Crécy, which pitted the English against the French in 1346, featured the early use of cannon, which helped the longbowmen repulse a large group of crossbowmen deployed by the French. The English originally intended to use the cannon against cavalry sent to attack their archers, thinking that the loud noises produced by their cannon would panic the advancing horses and kill the knights atop them. Early cannon could also be used for more than simply killing men and scaring horses. English cannon were used defensively during the siege of the castle of Breteuil to launch fire onto an advancing belfry. In this way cannon could be used to burn down siege equipment before it reached the fortifications. Shooting fire from cannon could also be used offensively, as another battle involved the setting of a castle ablaze with similar methods. The particular incendiary used in these cannon was most likely a gunpowder mixture. This is one area where early Chinese and European cannon share a similarity, as both were used to shoot fire. Another aspect of early European cannon is that they were rather small, dwarfed by the bombards which would come later. In fact, it is possible that the cannon used at Crécy were capable of being moved rather quickly, as an anonymous chronicle notes the guns being used to attack the French camp, indicating that they would have been mobile enough to press the attack. These smaller cannon would eventually give way to larger, wall-breaching guns by the end of the 1300s. Solid documentary evidence of cannon appeared in the Middle East around the 1360s; however, some historians consider the arrival of the cannon there to be far earlier. According to historian Ahmad Y. al-Hassan, during the Battle of Ain Jalut in 1260, the Mamluks used cannon against the Mongols. He claims that this was "the first cannon in history" and used a gunpowder formula almost identical to the ideal composition for explosive gunpowder. He also argues that this was not known in China or Europe until much later. Hassan further claims that the earliest textual evidence of cannon is from the Middle East, based on earlier originals which report hand-held cannon being used by the Mamluks at the Battle of Ain Jalut in 1260. However, Hassan's claims have been refuted by other historians such as David Ayalon, Iqtidar Alam Khan, Joseph Needham, Tonio Andrade, and Gabor Ágoston. Khan argues that it was the Mongols who introduced gunpowder to the Islamic world, and believes cannon only reached Mamluk Egypt in the 1370s. According to Needham, the term "midfa", dated to textual sources from 1342 to 1352, did not refer to true hand-guns or bombards, and contemporary accounts of a metal-barrel cannon in the Islamic world do not occur until 1365. Similarly, Andrade dates the textual appearance of cannon in Middle Eastern sources to the 1360s. Gabor Ágoston and David Ayalon believe the Mamluks had certainly used siege cannon by the 1360s, but earlier uses of cannon in the Islamic world are vague, with a possible appearance in the Emirate of Granada by the 1320s and 1330s, though the evidence is inconclusive. 
Ibn Khaldun reported the use of cannon as siege machines by the Marinid sultan Abu Yaqub Yusuf at the siege of Sijilmasa in 1274. The passage by Ibn Khaldun on the Marinid siege of Sijilmasa in 1274 reads as follows: "[The Sultan] installed siege engines … and gunpowder engines …, which project small balls of iron. These balls are ejected from a chamber … placed in front of a kindling fire of gunpowder; this happens by a strange property which attributes all actions to the power of the Creator." However, the source is not contemporary, having been written a century later, around 1382. Its interpretation has been rejected as anachronistic by most historians, who urge caution regarding claims of Islamic firearms use in the 1204–1324 period; as Ágoston and Peter Purton note, late medieval Arabic texts used the same word, "naft", for gunpowder as they did for an earlier incendiary, naphtha. Needham believes Ibn Khaldun was speaking of fire lances rather than hand cannon. The Ottoman Empire in particular made good use of cannon as siege artillery. Sixty-eight super-sized bombards were used by Mehmed the Conqueror to capture Constantinople in 1453. Jim Bradbury argues that Urban, a Hungarian cannon engineer, introduced this cannon from Central Europe to the Ottoman realm; according to Paul Hammer, however, it could have been introduced from other Islamic countries which had earlier used cannon. These cannon could fire heavy stone balls a mile, and the sound of their blast could reportedly be heard from a great distance. Shkodëran historian Marin Barleti discusses Turkish bombards at length in his book "De obsidione Scodrensi" (1504), describing the 1478–79 siege of Shkodra, in which eleven bombards and two mortars were employed. The Ottomans also used cannon to sink ships which attempted to prevent them from crossing the Bosporus strait. Ottoman cannon also proved effective at stopping crusaders at Varna in 1444 and Kosovo in 1448, despite the presence of European cannon in the former case. The similar Dardanelles Guns (named for their location) were created by Munir Ali in 1464 and were still in use during the Anglo-Turkish War (1807–09). These were cast in bronze in two parts, the chase (the barrel) and the breech, which combined weighed 18.4 tonnes. The two parts were screwed together using levers to facilitate moving it. Fathullah Shirazi, a Persian inhabitant of India who worked for Akbar in the Mughal Empire, developed a volley gun in the 16th century. While there is evidence of cannon in Iran as early as 1405, they were not widespread. This changed following the increased use of firearms by Shah Ismail I, and the Iranian army used 500 cannon by the 1620s, probably captured from the Ottomans or acquired by allies in Europe. By 1443 Iranians were also making some of their own cannon, as Mir Khawand wrote of a 1,200 kg metal piece being made by an Iranian "rekhtagar", which was most likely a cannon. Due to the difficulties of transporting cannon in mountainous terrain, their use was less common compared with their use in Europe. In Southeast Asia, the Mongol invasion of Java in 1293 may have brought firearms technology to the Nusantara archipelago. By the 1300s, the Majapahit fleet was already using a breech-loading cannon known as the cetbang as a naval weapon. The Korean kingdom of Joseon started producing gunpowder in 1374 and cannon by 1377. 
Cannon appeared in Đại Việt by 1390 at the latest. Japan did not acquire a cannon until 1510, when a monk brought one back from China, and did not produce any in appreciable numbers. Documentary evidence of cannon in Russia does not appear until 1382, and they were used only in sieges, often by the defenders. It was not until 1475, when Ivan III established the first Russian cannon foundry in Moscow, that they began to produce cannon natively. Later on, large cannon known as bombards, ranging from three to five feet in length, were used by Dubrovnik and Kotor in defence during the later 14th century. The first bombards were made of iron, but bronze became more prevalent as it was recognized as more stable and capable of propelling heavier stone shot. Around the same period, the Byzantine Empire began to accumulate its own cannon to face the Ottoman Empire, starting with medium-sized cannon of 10 in calibre. The earliest reliable recorded use of artillery in the region was against the Ottoman siege of Constantinople in 1396, forcing the Ottomans to withdraw. The Ottomans acquired their own cannon and laid siege to the Byzantine capital again in 1422. By 1453, the Ottomans used 68 Hungarian-made cannon for the 55-day bombardment of the walls of Constantinople, "hurling the pieces everywhere and killing those who happened to be nearby." The largest of their cannon was the Great Turkish Bombard, which required an operating crew of 200 men and 70 oxen, and 10,000 men to transport it. Gunpowder made the formerly devastating Greek fire obsolete, and with the final fall of Constantinople—which was protected by what were once the strongest walls in Europe—on 29 May 1453, "it was the end of an era in more ways than one." While previous smaller guns could burn down structures with fire, larger cannon were so effective that engineers were forced to develop stronger castle walls to prevent their keeps from falling. Cannon were not used only to batter down walls, however: fortifications also began using cannon as defensive instruments, as in India, where the fort of Raicher had gun ports built into its walls to accommodate the use of defensive cannon. Niccolò Machiavelli, in his book "The Art of War", believed that field artillery forced an army to take up a defensive posture, and that this opposed a more ideal offensive stance. Machiavelli's concerns can be seen in the criticisms of Portuguese mortars being used in India during the sixteenth century, as lack of mobility was one of the key problems with the design. In Russia the early cannon were again placed in forts as a defensive tool. Cannon were also difficult to move around in certain types of terrain, with mountains providing a great obstacle for them; for these reasons, offensives conducted with cannon would be difficult to pull off in places such as Iran. By the 16th century, cannon were made in a great variety of lengths and bore diameters, but the general rule was that the longer the barrel, the longer the range. Some cannon made during this time had exceptionally long barrels and were extremely heavy. Consequently, large amounts of gunpowder were needed to allow them to fire stone balls several hundred yards. By mid-century, European monarchs began to classify cannon to reduce the confusion. Henry II of France opted for six sizes of cannon, but others settled for more; the Spanish used twelve sizes, and the English sixteen. 
Better powder had been developed by this time as well. The finely ground powder used by the first bombards was replaced by a "corned" variety of coarse grains. This coarse powder had pockets of air between grains, allowing fire to travel through and ignite the entire charge quickly and uniformly. The end of the Middle Ages saw the construction of larger, more powerful cannon, as well as their spread throughout the world. As they were not effective at breaching the newer fortifications resulting from the development of cannon, siege engines—such as siege towers and trebuchets—became less widely used. However, wooden "battery-towers" took on a similar role to siege towers in the gunpowder age—such as that used at the Siege of Kazan in 1552, which could hold ten large-calibre cannon, in addition to 50 lighter pieces. Another notable effect of cannon on warfare during this period was the change in conventional fortifications. Niccolò Machiavelli wrote, "There is no wall, whatever its thickness that artillery will not destroy in only a few days." Although castles were not immediately made obsolete by cannon, their use and importance on the battlefield rapidly declined. Instead of majestic towers and merlons, the walls of new fortresses were thick, angled, and sloped, while towers became low and stout; increasing use was also made of earth and brick in breastworks and redoubts. These new defences became known as bastion forts, after their characteristic shape, which attempted to force any advance towards them directly into the firing line of the guns. A few of these featured cannon batteries, such as the House of Tudor's Device Forts in England. Bastion forts soon replaced castles in Europe and, eventually, those in the Americas as well. By the end of the 15th century, several technological advancements made cannon more mobile. Wheeled gun carriages and trunnions became common, and the invention of the limber further facilitated transportation. As a result, field artillery became more viable, and began to see more widespread use, often alongside the larger cannon intended for sieges. Better gunpowder, cast-iron projectiles (replacing stone), and the standardisation of calibres meant that even relatively light cannon could be deadly. In "The Art of War", Niccolò Machiavelli observed that "It is true that the arquebuses and the small artillery do much more harm than the heavy artillery." This was the case at the Battle of Flodden, in 1513: the English field guns outfired the Scottish siege artillery, firing two or three times as many rounds. Despite the increased manoeuvrability, however, cannon were still the slowest component of the army: a heavy English cannon required 23 horses to transport, while a culverin needed nine. Even with this many animals pulling, they still moved at a walking pace. Due to their relatively slow speed, lack of organisation, and undeveloped tactics, the combination of pike and shot still dominated the battlefields of Europe. Innovations continued, notably the German invention of the mortar, a thick-walled, short-barrelled gun that blasted shot upward at a steep angle. Mortars were useful for sieges, as they could hit targets behind walls or other defences. The mortar found more use with the Dutch, who learnt to fire bombs filled with powder from it. Setting the bomb fuse was a problem. "Single firing" was first used to ignite the fuse, where the bomb was placed with the fuse down against the cannon's propellant. 
This often resulted in the fuse being blown into the bomb, causing it to blow up as it left the mortar. Because of this, "double firing" was tried, where the gunner lit the fuse and then the touch hole. This, however, required considerable skill and timing, and was especially dangerous if the gun misfired, leaving a lighted bomb in the barrel. Not until 1650 was it accidentally discovered that double-lighting was superfluous, as the heat of firing would light the fuse. Gustavus Adolphus of Sweden emphasised the use of light cannon and mobility in his army, and created new formations and tactics that revolutionised artillery. He discontinued using all 12 pounder—or heavier—cannon as field artillery, preferring, instead, to use cannon that could be manned by only a few men. One obsolete type of gun, the "leatheren", was replaced by 4 pounder and 9 pounder demi-culverins. These could be operated by three men, and pulled by only two horses. Adolphus's army was also the first to use a cartridge that contained both powder and shot, which sped up reloading and increased the rate of fire. Finally, against infantry he pioneered the use of canister shot – essentially a tin can filled with musket balls. Until then there was no more than one cannon for every thousand infantrymen on the battlefield, but Gustavus Adolphus increased the number of cannon sixfold. Each regiment was assigned two pieces, though he often arranged them into batteries instead of distributing them piecemeal. He used these batteries to break his opponent's infantry line, while his cavalry would outflank their heavy guns. At the Battle of Breitenfeld, in 1631, Adolphus proved the effectiveness of the changes made to his army by defeating Johann Tserclaes, Count of Tilly. Although severely outnumbered, the Swedes were able to fire between three and five times as many volleys of artillery, and their infantry's linear formations helped ensure they did not lose any ground. Battered by cannon fire, and low on morale, Tilly's men broke ranks and fled. In England, cannon were used to besiege various fortified buildings during the English Civil War. Nathaniel Nye is recorded as testing a Birmingham cannon in 1643 and experimenting with a saker in 1645. From 1645 he was the master gunner to the Parliamentarian garrison at Evesham, and in 1646 he successfully directed the artillery at the Siege of Worcester, detailing his experiences in his 1647 book "The Art of Gunnery". Believing that war was as much a science as an art, his explanations focused on triangulation, arithmetic, theoretical mathematics, and cartography, as well as practical considerations such as the ideal specification for gunpowder or slow matches. His book acknowledged mathematicians such as Robert Recorde and Marcus Jordanus, as well as earlier military writers on artillery such as Niccolò Fontana Tartaglia and Thomas (or Francis) Malthus (author of "A Treatise on Artificial Fire-Works"). Around this time also came the idea of aiming the cannon to hit a target. Gunners controlled the range of their cannon by measuring the angle of elevation, using a "gunner's quadrant". Cannon did not have sights; therefore, even with measuring tools, aiming was still largely guesswork. In the latter half of the 17th century, the French engineer Sébastien Le Prestre de Vauban introduced a more systematic and scientific approach to attacking gunpowder fortresses, in a time when many field commanders "were notorious dunces in siegecraft." 
Careful sapping forward, supported by enfilading ricochets, was a key feature of this system, and it even allowed Vauban to calculate the length of time a siege would take. He was also a prolific builder of bastion forts, and did much to popularize the idea of "depth in defence" in the face of cannon. These principles were followed into the mid-19th century, when changes in armaments necessitated greater depth of defence than Vauban had provided for. It was only in the years prior to World War I that new works began to break radically away from his designs. The lower tier of 17th-century English ships of the line were usually equipped with demi-cannon, guns that fired a solid shot, and could weigh up to . Demi-cannon were capable of firing these heavy metal balls with such force that they could penetrate more than a metre of solid oak, from a distance of , and could dismast even the largest ships at close range. Full cannon fired a shot, but were discontinued by the 18th century, as they were too unwieldy. By the end of the 18th century, principles long adopted in Europe specified the characteristics of the Royal Navy's cannon, as well as the acceptable defects and their severity. The United States Navy tested guns by measuring them, firing them two or three times—termed "proof by powder"—and using pressurized water to detect leaks. The carronade was adopted by the Royal Navy in 1779; the lower muzzle velocity of the round shot when fired from this cannon was intended to create more wooden splinters when hitting the structure of an enemy vessel, as they were believed to be more deadly than the ball by itself. The carronade was much shorter, and weighed a third to a quarter as much as the equivalent long gun; for example, a 32-pounder carronade weighed less than a ton, compared with a 32-pounder long gun, which weighed over 3 tons. The guns were, therefore, easier to handle, and also required less than half as much gunpowder, allowing fewer men to crew them. Carronades were manufactured in the usual naval gun calibres, but were not counted in a ship of the line's rated number of guns. As a result, the classification of Royal Navy vessels in this period can be misleading, as they often carried more cannon than were listed. Cannon were crucial in Napoleon's rise to power, and continued to play an important role in his army in later years. During the French Revolution, the unpopularity of the Directory led to riots and rebellions. When over 25,000 royalists led by General Danican assaulted Paris, Paul Barras was appointed to defend the capital; outnumbered five to one and disorganised, the Republicans were desperate. When Napoleon arrived, he reorganised the defences but realised that without cannon the city could not be held. He ordered Joachim Murat to bring the guns from the Sablons artillery park; the major and his cavalry fought their way to the recently captured cannon, and brought them back to Napoleon. When Danican's poorly trained men attacked, on 13 Vendémiaire in the French Republican calendar then in use – 5 October 1795 – Napoleon ordered his cannon to fire grapeshot into the mob, an act that became known as the "whiff of grapeshot". The slaughter effectively ended the threat to the new government, while at the same time making Bonaparte a famous—and popular—public figure. 
Among the first generals to recognise that artillery was not being used to its full potential, Napoleon often massed his cannon into batteries and introduced several changes into the French artillery, improving it significantly and making it among the finest in Europe. Such tactics were successfully used by the French, for example, at the Battle of Friedland, when sixty-six guns fired a total of 3,000 roundshot and 500 rounds of grapeshot, inflicting severe casualties on the Russian forces, whose losses numbered over 20,000 killed and wounded in total. At the Battle of Waterloo—Napoleon's final battle—the French army had many more artillery pieces than either the British or Prussians. As the battlefield was muddy, recoil caused cannon to bury themselves into the ground after firing, resulting in slow rates of fire, as more effort was required to move them back into an adequate firing position; also, roundshot did not ricochet with as much force from the wet earth. Despite the drawbacks, sustained artillery fire proved deadly during the engagement, especially during the French cavalry attack. The British infantry, having formed infantry squares, took heavy losses from the French guns, while their own cannon fired at the cuirassiers and lancers when they fell back to regroup. Eventually, the French ceased their assault, after taking heavy losses from the British cannon and musket fire. In the 1810s and 1820s, greater emphasis was placed on the accuracy of long-range gunfire, and less on the weight of a broadside. The carronade, although initially very successful and widely adopted, disappeared from the Royal Navy in the 1850s after the development of wrought-iron-jacketed steel cannon by William Armstrong and Joseph Whitworth. Nevertheless, carronades were used in the American Civil War. Western cannon during the 19th century became larger, more destructive, more accurate, and could fire at longer range. One example is the American wrought-iron, muzzle-loading rifle, or Griffen gun (usually called the 3-inch Ordnance Rifle), used during the American Civil War, which had an effective range of over . Another is the smoothbore 12-pounder Napoleon, which originated in France in 1853 and was widely used by both sides in the American Civil War. This cannon was renowned for its sturdiness, reliability, firepower, flexibility, relatively light weight, and range of . The practice of rifling—cutting spiralling grooves inside the cannon's barrel—was applied to artillery more frequently by 1855, as it gave cannon projectiles gyroscopic stability, which improved their accuracy. One of the earliest rifled cannon was the breech-loading Armstrong Gun—also invented by William Armstrong—which boasted significantly improved range, accuracy, and power compared with earlier weapons. The projectile fired from the Armstrong gun could reportedly pierce through a ship's side and explode inside the enemy vessel, causing increased damage and casualties. The British military adopted the Armstrong gun, and was impressed; the Duke of Cambridge even declared that it "could do everything but speak." Despite being significantly more advanced than its predecessors, the Armstrong gun was rejected soon after its integration, in favour of the muzzle-loading pieces that had been in use before. 
While both types of gun were effective against wooden ships, neither had the capability to pierce the armour of ironclads; due to reports of slight problems with the breeches of the Armstrong gun, and their higher cost, the older muzzle-loaders were selected to remain in service instead. Realising that iron was more difficult to pierce with breech-loaded cannon, Armstrong designed rifled muzzle-loading guns, which proved successful; "The Times" reported: "even the fondest believers in the invulnerability of our present ironclads were obliged to confess that against such artillery, at such ranges, their plates and sides were almost as penetrable as wooden ships." The superior cannon of the Western world brought them tremendous advantages in warfare. For example, in the First Opium War in China, during the 19th century, British battleships bombarded the coastal areas and fortifications from afar, safe from the reach of the Chinese cannon. Similarly, the shortest war in recorded history, the Anglo-Zanzibar War of 1896, was brought to a swift conclusion by shelling from British cruisers. The cynical attitude towards recruited infantry in the face of ever more powerful field artillery is the source of the term "cannon fodder", first used by François-René de Chateaubriand in 1814; however, the concept of regarding soldiers as nothing more than "food for powder" was mentioned by William Shakespeare as early as 1598, in "Henry IV, Part 1". Cannon in the 20th and 21st centuries are usually divided into sub-categories and given separate names. Some of the most widely used types of modern cannon are howitzers, mortars, guns, and autocannon, although a few superguns—extremely large, custom-designed cannon—have also been constructed. Nuclear artillery was experimented with, but was abandoned as impractical. Modern artillery is used in a variety of roles, depending on its type. According to NATO, the general role of artillery is to provide fire support, which is defined as "the application of fire, coordinated with the manoeuvre of forces to destroy, neutralize, or suppress the enemy." When referring to cannon, the term "gun" is often used incorrectly. In military usage, a gun is a cannon with a high muzzle velocity and a flat trajectory, useful for hitting the sides of targets such as walls, as opposed to howitzers or mortars, which have lower muzzle velocities and fire indirectly, lobbing shells up and over obstacles to hit the target from above. By the early 20th century, infantry weapons had become more powerful, forcing most artillery away from the front lines. Despite the change to indirect fire, cannon proved highly effective during World War I, directly or indirectly causing over 75% of casualties. The onset of trench warfare after the first few months of World War I greatly increased the demand for howitzers, as they were better suited to hitting targets in trenches. Furthermore, their shells carried more explosives than those of guns, and caused considerably less barrel wear. The German army had the advantage here, as it began the war with many more howitzers than the French. World War I also saw the use of the Paris Gun, the longest-ranged gun ever fired. This calibre gun was used by the Germans against Paris and could hit targets more than away. The Second World War sparked new developments in cannon technology. Among them were sabot rounds, hollow-charge projectiles, and proximity fuses, all of which increased the effectiveness of cannon against specific targets. 
The proximity fuse emerged on the battlefields of Europe in late December 1944. Used to great effect in anti-aircraft projectiles, proximity fuses were fielded in both the European and Pacific Theatres of Operations; they were particularly useful against V-1 flying bombs and kamikaze planes. Although widely used in naval warfare and in anti-air guns, both the British and Americans feared that unexploded proximity fuses would be reverse engineered, which led them to limit their use in continental battles. During the Battle of the Bulge, however, the fuses became known as the American artillery's "Christmas present" for the German army because of their effectiveness against German personnel in the open, where they frequently dispersed attacks. Anti-tank guns were also tremendously improved during the war: in 1939, the British used primarily 2 pounder and 6 pounder guns. By the end of the war, 17 pounders had proven much more effective against German tanks, and 32 pounders had entered development. Meanwhile, German tanks were continuously upgraded with better main guns, in addition to other improvements. For example, the Panzer III was originally designed with a 37 mm gun, but was mass-produced with a 50 mm cannon. To counter the threat of the Russian T-34s, another, more powerful 50 mm gun was introduced, only to give way to a larger 75 mm cannon, which was in a fixed mount as the StuG III, the most-produced German World War II armoured fighting vehicle of any type. Despite the improved guns, production of the Panzer III was ended in 1943, as the tank still could not match the T-34, and was replaced by the Panzer IV and Panther tanks. In 1944, the 8.8 cm KwK 43, and many variations of it, entered service with the Wehrmacht, and was used as both a tank main gun and as the PaK 43 anti-tank gun. One of the most powerful guns to see service in World War II, it was capable of destroying any Allied tank at very long ranges. Despite being designed to fire at trajectories with a steep angle of descent, howitzers can be fired directly, as was done by the 11th Marine Regiment at the Battle of Chosin Reservoir, during the Korean War. Two field batteries fired directly upon a battalion of Chinese infantry; the Marines were forced to brace themselves against their howitzers, as they had no time to dig them in. The Chinese infantry took heavy casualties, and were forced to retreat. The tendency to create larger calibre cannon during the World Wars has since reversed. The United States Army, for example, sought a lighter, more versatile howitzer to replace their ageing pieces. As it could be towed, the M198 was selected to be the successor to the World War II–era cannon used at the time, and entered service in 1979. Still in use today, the M198 is, in turn, being slowly replaced by the M777 Ultralightweight howitzer, which weighs nearly half as much and can be more easily moved. Although land-based artillery pieces such as the M198 are powerful, long-ranged, and accurate, naval guns have not been neglected, despite being much smaller than in the past and, in some cases, having been replaced by cruise missiles. However, the "Zumwalt"-class destroyer's planned armament includes the Advanced Gun System (AGS), a pair of 155 mm guns, which fire the Long Range Land-Attack Projectile. The warhead, which weighs , has a circular error probable of , and will be mounted on a rocket to increase the effective range to , further than that of the Paris Gun. The AGS's barrels will be water cooled, and will fire 10 rounds per minute, per gun. 
The combined firepower from both turrets will give a "Zumwalt"-class destroyer firepower equivalent to that of 18 conventional M198 howitzers. Cannon were re-integrated as a main armament in United States Navy ships because satellite-guided munitions fired from a gun are less expensive than cruise missiles but offer similar guidance capability. Autocannons have an automatic firing mode, similar to that of a machine gun. They have mechanisms to automatically load their ammunition, and therefore have a higher rate of fire than artillery, often approaching, or, in the case of rotary autocannons, even surpassing the firing rate of a machine gun. While there is no minimum bore for autocannons, they are generally larger than machine guns, typically 20 mm or greater since World War II, and are usually capable of using explosive ammunition, even if it is not always used. Machine guns, in contrast, are usually too small to use explosive ammunition. Most nations use rapid-fire cannon on light vehicles, replacing a more powerful, but heavier, tank gun. A typical autocannon is the 25 mm "Bushmaster" chain gun, mounted on the LAV-25 and M2 Bradley armoured vehicles. Autocannons may be capable of a very high rate of fire, but ammunition is heavy and bulky, limiting the amount carried. For this reason, both the 25 mm Bushmaster and the 30 mm RARDEN are deliberately designed with relatively low rates of fire. The typical rate of fire for a modern autocannon ranges from 90 to 1,800 rounds per minute. Systems with multiple barrels, such as a rotary autocannon, can have rates of fire of more than several thousand rounds per minute. The fastest of these is the GSh-6-23, which has a rate of fire of over 10,000 rounds per minute. Autocannons are often found in aircraft, where they replaced machine guns, and as shipboard anti-aircraft weapons, as they provide greater destructive power than machine guns. The first documented installation of a cannon on an aircraft was on the Voisin Canon in 1911, displayed at the Paris Exposition that year. By World War I, all of the major powers were experimenting with aircraft-mounted cannon; however, their low rate of fire and great size and weight precluded any of them from being anything other than experimental. The most successful (or least unsuccessful) was the SPAD 12 Ca.1, with a single 37mm Puteaux mounted to fire between the cylinder banks and through the propeller boss of the aircraft's Hispano-Suiza 8C. The pilot (by necessity an ace) had to manually reload each round. The first autocannon were developed during World War I as anti-aircraft guns, and one of these – the Coventry Ordnance Works "COW 37 mm gun" – was installed in an aircraft, but the war ended before it could be given a field trial and it never became standard equipment in a production aircraft. Later trials had it fixed at a steep upward angle in both the Vickers Type 161 and the Westland C.O.W. Gun Fighter, an idea that would return later. During this period autocannons became available and several fighters of the German "Luftwaffe" and the Imperial Japanese Navy Air Service were fitted with 20mm cannon. They continued to be installed as an adjunct to machine guns rather than as a replacement, as the rate of fire was still too low and the complete installation too heavy. There was some debate in the RAF as to whether the greater number of possible rounds fired from a machine gun, or the smaller number of explosive rounds from a cannon, was preferable. 
Improvements in rate of fire during the war allowed the cannon to displace the machine gun almost entirely. Cannon were more effective against armour, so they were increasingly used during the course of World War II, and newer fighters such as the Hawker Tempest usually carried two or four, versus the six .50 Browning machine guns of US aircraft or the eight to twelve M1919 Browning machine guns on earlier British aircraft. The Hispano-Suiza HS.404, Oerlikon 20 mm cannon, MG FF, and their numerous variants became among the most widely used autocannon in the war. Cannon, as with machine guns, were generally fixed to fire forwards (mounted in the wings, in the nose or fuselage, or in a pannier under either), or were mounted in gun turrets on heavier aircraft. Both the Germans and Japanese mounted cannon to fire upwards and forwards for use against heavy bombers, with the Germans calling guns so installed "Schräge Musik", derived from the German colloquialism for jazz music (the German word "schräg" means slanted or oblique). Before the Vietnam War, the high speeds aircraft were attaining led to a move to remove the cannon, owing to the mistaken belief that they would be useless in a dogfight; combat experience during the Vietnam War, however, showed conclusively that despite advances in missiles there was still a need for them. Nearly all modern fighter aircraft are armed with an autocannon, and they are also commonly found on ground-attack aircraft. One of the most powerful examples is the 30mm GAU-8/A Avenger Gatling-type rotary cannon, mounted exclusively on the Fairchild Republic A-10 Thunderbolt II. The Lockheed AC-130 gunship (a converted transport) can carry a 105mm howitzer as well as a variety of autocannon ranging up to 40mm. Both are used in the close air support role. Cannon in general have the form of a truncated cone with an internal cylindrical bore for holding an explosive charge and a projectile. The thickest, strongest, and closed part of the cone is located near the explosive charge. As any explosive charge will dissipate in all directions equally, the thickest portion of the cannon is useful for containing and directing this force. The backward motion of the cannon as its projectile leaves the bore is termed its recoil, and the effectiveness of the cannon can be measured in terms of how much this response can be diminished, though diminishing recoil by increasing the overall mass of the cannon means decreased mobility (a trade-off illustrated below). Field artillery cannon in Europe and the Americas were initially made most often of bronze, though later forms were constructed of cast iron and eventually steel. Bronze has several characteristics that made it preferable as a construction material: although it is relatively expensive, does not always alloy well, and can result in a final product that is "spongy about the bore", bronze is more flexible than iron and therefore less prone to bursting when exposed to high pressure. Cast iron cannon are less expensive and generally more durable than bronze, and withstand being fired more times without deteriorating; however, they have a tendency to burst without having shown any previous weakness or wear, which makes them more dangerous to operate. The older and more stable forms of cannon were muzzle-loading, as opposed to breech-loading—in order to be used they had to have their ordnance packed down the bore through the muzzle rather than inserted through the breech. 
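The trade-off between recoil and mobility can be sketched with an idealized, textbook momentum balance; the symbols below are purely illustrative and are not figures from any historical source. If a cannon of mass $M$ fires a ball of mass $m$ at muzzle velocity $v$, conservation of momentum gives an approximate recoil speed of
\[ V \approx \frac{m\,v}{M}, \]
so, other things being equal, doubling the mass of the piece roughly halves its recoil speed, at the cost of a gun that is twice as heavy to move. Real guns depart from this simple estimate because the propellant gases also carry momentum and friction with the carriage and ground absorbs part of the recoil.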
The following terms refer to the components or aspects of a classical western cannon (c. 1850) as illustrated here. In what follows, the words "near", "close", and "behind" will refer to those parts towards the thick, closed end of the piece, and "far", "front", "in front of", and "before" to the thinner, open end. The main body of a cannon consists of three basic extensions: the foremost and the longest is called the "chase", the middle portion is the "reinforce", and the closest and briefest portion is the "cascabel" or "cascable". To pack a muzzle-loading cannon, first gunpowder is poured down the bore. This is followed by a layer of wadding (often nothing more than paper), and then the cannonball itself. A certain amount of windage allows the ball to fit down the bore, though the greater the windage the less efficient the propulsion of the ball when the gunpowder is ignited. To fire the cannon, the fuse located in the vent is lit, quickly burning down to the gunpowder, which then explodes violently, propelling wadding and ball down the bore and out of the muzzle. A small portion of exploding gas also escapes through the vent, but this does not dramatically affect the total force exerted on the ball. Any large, smoothbore, muzzle-loading gun—used before the advent of breech-loading, rifled guns—may be referred to as a cannon, though once standardised names were assigned to different-sized cannon, the term specifically referred to a gun designed to fire a shot, as distinct from a demi-cannon – , culverin – , or demi-culverin – . "Gun" specifically refers to a type of cannon that fires projectiles at high speeds, and usually at relatively low angles; they have been used in warships, and as field artillery. The term "cannon" is also used for autocannon, a modern repeating weapon firing explosive projectiles. Cannon have been used extensively in fighter aircraft since World War II, and in place of machine guns on land vehicles. In the 1770s, cannon operation worked as follows: each cannon would be manned by two gunners, six soldiers, and four officers of artillery. The right gunner was to prime the piece and load it with powder, and the left gunner would fetch the powder from the magazine and be ready to fire the cannon at the officer's command. Three soldiers stood on each side of the cannon, to ram and sponge the cannon and hold the ladle. The second soldier on the left was tasked with providing 50 bullets. Before loading, the cannon would be cleaned with a wet sponge to extinguish any smouldering material from the last shot, since fresh powder could be set off prematurely by lingering ignition sources. The powder was then added, followed by wadding of paper or hay, and the ball was placed in and rammed down. After ramming, the cannon would be aimed with the elevation set using a quadrant and a plummet. At 45 degrees, the ball had the utmost range: about ten times the gun's level range. Any angle above a horizontal line was called random-shot. Wet sponges were used to cool the pieces every ten or twelve rounds. During the Napoleonic Wars, a British gun team consisted of five gunners: one to aim the piece, one to clean the bore with a damp sponge to quench any remaining embers before a fresh charge was introduced, and another to load the gun with a bag of powder and then the projectile. The fourth gunner pressed his thumb on the vent hole to prevent a draught that might fan a flame. With the charge loaded, the fourth would prick the bagged charge through the vent hole, and fill the vent with powder. 
On command, the fifth gunner would fire the piece with a slowmatch. When a cannon had to be abandoned, such as in a retreat or surrender, the touch hole of the cannon would be plugged flush with an iron spike, disabling the cannon (at least until metal boring tools could be used to remove the plug). This was called "spiking the cannon". A gun was said to be "honeycombed" when the surface of the bore had cavities, or holes in it, caused either by corrosion or casting defects. Historically, logs or poles have been used as decoys to mislead the enemy as to the strength of an emplacement. The "Quaker Gun trick" was used by Colonel William Washington's Continental Army during the American Revolutionary War; in 1780, approximately 100 Loyalists surrendered to them, rather than face bombardment. During the American Civil War, Quaker guns were also used by the Confederates, to compensate for their shortage of artillery. The decoy cannon were painted black at the "muzzle", and positioned behind fortifications to delay Union attacks on those positions. On occasion, real gun carriages were used to complete the deception. Cannon sounds have sometimes been used in classical pieces with a military theme. One of the best known examples of such a piece is Pyotr Ilyich Tchaikovsky's "1812 Overture". The overture is to be performed using an artillery section together with the orchestra, resulting in noise levels high enough that musicians are required to wear ear protection. The cannon fire simulates Russian artillery bombardments of the Battle of Borodino, a critical battle in Napoleon's invasion of Russia, whose defeat the piece celebrates. When the overture was first performed, the cannon were fired by an electric current triggered by the conductor. However, the overture was not recorded with real cannon fire until Mercury Records and conductor Antal Doráti's 1958 recording with the Minnesota Orchestra. Cannon fire is also frequently used annually in presentations of the "1812" on the American Independence Day, a tradition started by Arthur Fiedler of the Boston Pops in 1974. The hard rock band AC/DC also used cannon in their song "For Those About to Rock (We Salute You)", and in live shows replica Napoleonic cannon and pyrotechnics were used to perform the piece. Cannon recovered from the sea are often extensively damaged from exposure to salt water; because of this, electrolytic reduction treatment is required to forestall the process of corrosion. The cannon is then washed in deionized water to remove the electrolyte, and is treated in tannic acid, which prevents further rust and gives the metal a bluish-black colour. After this process, cannon on display may be protected from oxygen and moisture by a wax sealant. A coat of polyurethane may also be painted over the wax sealant, to prevent the wax-coated cannon from attracting dust in outdoor displays. In 2011, archaeologists announced that six cannon recovered from a river in Panama, which could have belonged to the legendary pirate Henry Morgan, were being studied and could eventually be displayed after undergoing restoration. Charles I of England Charles I (19 November 1600 – 30 January 1649) was monarch of the three kingdoms of England, Scotland, and Ireland from 27 March 1625 until his execution in 1649. Charles was born into the House of Stuart as the second son of King James VI of Scotland, but after his father inherited the English throne in 1603, he moved to England, where he spent much of the rest of his life. 
He became heir apparent to the thrones of England, Scotland and Ireland on the death of his elder brother, Henry Frederick, Prince of Wales, in 1612. An unsuccessful and unpopular attempt to marry him to the Spanish Habsburg princess Maria Anna culminated in an eight-month visit to Spain in 1623 that demonstrated the futility of the marriage negotiations. Two years later, he married the Bourbon princess Henrietta Maria of France instead. After his succession, Charles quarrelled with the Parliament of England, which sought to curb his royal prerogative. Charles believed in the divine right of kings and thought he could govern according to his own conscience. Many of his subjects opposed his policies, in particular the levying of taxes without parliamentary consent, and perceived his actions as those of a tyrannical absolute monarch. His religious policies, coupled with his marriage to a Roman Catholic, generated the antipathy and mistrust of Reformed groups such as the English Puritans and Scottish Covenanters, who thought his views were too Catholic. He supported high church Anglican ecclesiastics, such as Richard Montagu and William Laud, and failed to aid Protestant forces successfully during the Thirty Years' War. His attempts to force the Church of Scotland to adopt high Anglican practices led to the Bishops' Wars, strengthened the position of the English and Scottish parliaments and helped precipitate his own downfall. From 1642, Charles fought the armies of the English and Scottish parliaments in the English Civil War. After his defeat in 1645, he surrendered to a Scottish force that eventually handed him over to the English Parliament. Charles refused to accept his captors' demands for a constitutional monarchy, and temporarily escaped captivity in November 1647. Re-imprisoned on the Isle of Wight, Charles forged an alliance with Scotland, but by the end of 1648 Oliver Cromwell's New Model Army had consolidated its control over England. Charles was tried, convicted, and executed for high treason in January 1649. The monarchy was abolished and a republic called the Commonwealth of England was declared. The monarchy was restored to Charles's son, Charles II, in 1660. The second son of King James VI of Scotland and Anne of Denmark, Charles was born in Dunfermline Palace, Fife, on 19 November 1600. At a Protestant ceremony in the Chapel Royal at Holyrood Palace in Edinburgh on 23 December 1600, he was baptised by David Lindsay, Bishop of Ross, and created Duke of Albany, the traditional title of the second son of the King of Scotland, with the subsidiary titles of Marquess of Ormond, Earl of Ross and Lord Ardmannoch. James VI was the first cousin twice removed of Queen Elizabeth I of England, and when she died childless in March 1603, he became King of England as James I. Charles was a weak and sickly infant, and while his parents and older siblings left for England in April and early June that year, due to his fragile health, he remained in Scotland with his father's friend Lord Fyvie, appointed as his guardian. By 1604, when Charles was three-and-a-half, he was able to walk the length of the great hall at Dunfermline Palace without assistance, and it was decided that he was strong enough to make the journey to England to be reunited with his family. In mid-July 1604, Charles left Dunfermline for England where he was to spend most of the rest of his life. 
In England, Charles was placed under the charge of Elizabeth, Lady Carey, the wife of courtier Sir Robert Carey, who put him in boots made of Spanish leather and brass to help strengthen his weak ankles. His speech development was also slow, and he retained a stammer, or hesitant speech, for the rest of his life. In January 1605, Charles was created Duke of York, as is customary in the case of the English sovereign's second son, and made a Knight of the Bath. Thomas Murray, a presbyterian Scot, was appointed as a tutor. Charles learnt the usual subjects of classics, languages, mathematics and religion. In 1611, he was made a Knight of the Garter. Eventually, Charles apparently conquered his physical infirmity, which might have been caused by rickets. He became an adept horseman and marksman, and took up fencing. Even so, his public profile remained low in contrast to that of his physically stronger and taller elder brother, Henry Frederick, Prince of Wales, whom Charles adored and attempted to emulate. However, in early November 1612, Henry died at the age of 18 of what is suspected to have been typhoid (or possibly porphyria). Charles, who turned 12 two weeks later, became heir apparent. As the eldest surviving son of the sovereign, Charles automatically gained several titles (including Duke of Cornwall and Duke of Rothesay). Four years later, in November 1616, he was created Prince of Wales and Earl of Chester. In 1613, his sister Elizabeth married Frederick V, Elector Palatine, and moved to Heidelberg. In 1617, the Habsburg Archduke Ferdinand of Austria, a Catholic, was elected king of Bohemia. The following year, the Bohemians rebelled, defenestrating the Catholic governors. In August 1619, the Bohemian diet chose as their monarch Frederick V, who was leader of the Protestant Union, while Ferdinand was elected Holy Roman Emperor in the imperial election. Frederick's acceptance of the Bohemian crown in defiance of the emperor marked the beginning of the turmoil that would develop into the Thirty Years' War. The conflict, originally confined to Bohemia, spiralled into a wider European war, which the English Parliament and public quickly grew to see as a polarised continental struggle between Catholics and Protestants. In 1620, Charles's brother-in-law, Frederick V, was defeated at the Battle of White Mountain near Prague and his hereditary lands in the Electoral Palatinate were invaded by a Habsburg force from the Spanish Netherlands. James, however, had been seeking marriage between the new Prince of Wales and Ferdinand's niece, Habsburg princess Maria Anna of Spain, and began to see the Spanish match as a possible diplomatic means of achieving peace in Europe. Unfortunately for James, negotiation with Spain proved generally unpopular, both with the public and with James's court. The English Parliament was actively hostile towards Spain and Catholicism, and thus, when called by James in 1621, the members hoped for an enforcement of recusancy laws, a naval campaign against Spain, and a Protestant marriage for the Prince of Wales. James's Lord Chancellor, Francis Bacon, was impeached before the House of Lords for corruption. The impeachment was the first since 1459 without the king's official sanction in the form of a bill of attainder. The incident set an important precedent as the process of impeachment would later be used against Charles and his supporters: the Duke of Buckingham, Archbishop Laud, and the Earl of Strafford. 
James insisted that the House of Commons be concerned exclusively with domestic affairs, while the members protested that they had the privilege of free speech within the Commons' walls, demanding war with Spain and a Protestant Princess of Wales. Charles, like his father, considered the discussion of his marriage in the Commons impertinent and an infringement of his father's royal prerogative. In January 1622, James dissolved Parliament, angry at what he perceived as the members' impudence and intransigence. Charles and the Duke of Buckingham, James's favourite and a man who had great influence over the prince, travelled incognito to Spain in February 1623 to try to reach agreement on the long-pending Spanish match. In the end, however, the trip was an embarrassing failure. The Infanta thought Charles was little more than an infidel, and the Spanish at first demanded that he convert to Roman Catholicism as a condition of the match. The Spanish insisted on toleration of Catholics in England and the repeal of the penal laws, which Charles knew would never be agreed by Parliament, and that the Infanta remain in Spain for a year after any wedding to ensure that England complied with all the terms of the treaty. A personal quarrel erupted between Buckingham and the Count of Olivares, the Spanish chief minister, and so Charles conducted the ultimately futile negotiations personally. When Charles returned to London in October, without a bride and to a rapturous and relieved public welcome, he and Buckingham pushed a reluctant King James to declare war on Spain. With the encouragement of his Protestant advisers, James summoned the English Parliament in 1624 so that he could request subsidies for a war. Charles and Buckingham supported the impeachment of the Lord Treasurer, Lionel Cranfield, 1st Earl of Middlesex, who opposed war on grounds of cost and who quickly fell in much the same manner as Bacon had. James told Buckingham he was a fool, and presciently warned his son that he would live to regret the revival of impeachment as a parliamentary tool. An under-funded makeshift army under Ernst von Mansfeld set off to recover the Palatinate, but it was so poorly provisioned that it never advanced beyond the Dutch coast. By 1624, James was growing ill, and as a result was finding it difficult to control Parliament. By the time of his death in March 1625, Charles and the Duke of Buckingham had already assumed "de facto" control of the kingdom. With the failure of the Spanish match, Charles and Buckingham turned their attention to France. On 1 May 1625 Charles was married by proxy to the fifteen-year-old French princess Henrietta Maria in front of the doors of the Notre Dame de Paris. Charles had seen Henrietta Maria in Paris while en route to Spain. The married couple met in person on 13 June 1625 in Canterbury. Charles delayed the opening of his first Parliament until after the marriage was consummated, to forestall any opposition. Many members of the Commons were opposed to the king's marriage to a Roman Catholic, fearing that Charles would lift restrictions on Catholic recusants and undermine the official establishment of the reformed Church of England. Although he told Parliament that he would not relax religious restrictions, he promised to do exactly that in a secret marriage treaty with his brother-in-law Louis XIII of France. Moreover, the treaty loaned to the French seven English naval ships that would be used to suppress the Protestant Huguenots at La Rochelle in September 1625. 
Charles was crowned on 2 February 1626 at Westminster Abbey, but without his wife at his side because she refused to participate in a Protestant religious ceremony. Distrust of Charles's religious policies increased with his support of a controversial anti-Calvinist ecclesiastic, Richard Montagu, who was in disrepute among the Puritans. In his pamphlet "A New Gag for an Old Goose" (1624), a reply to the Catholic pamphlet "A New Gag for the New Gospel", Montagu argued against Calvinist predestination, the doctrine that salvation and damnation were preordained by God. Anti-Calvinists – known as Arminians – believed that human beings could influence their own fate through the exercise of free will. Arminian divines had been one of the few sources of support for Charles's proposed Spanish marriage. With the support of King James, Montagu produced another pamphlet, entitled "Appello Caesarem", in 1625 shortly after the old king's death and Charles's accession. To protect Montagu from the stricture of Puritan members of Parliament, Charles made the cleric one of his royal chaplains, increasing many Puritans' suspicions that Charles favoured Arminianism as a clandestine attempt to aid the resurgence of Catholicism. Rather than direct involvement in the European land war, the English Parliament preferred a relatively inexpensive naval attack on Spanish colonies in the New World, hoping for the capture of the Spanish treasure fleets. Parliament voted to grant a subsidy of £140,000, which was an insufficient sum for Charles's war plans. Moreover, the House of Commons limited its authorisation for royal collection of tonnage and poundage (two varieties of customs duties) to a period of one year, although previous sovereigns since Henry VI had been granted the right for life. In this manner, Parliament could delay approval of the rates until after a full-scale review of customs revenue. The bill made no progress in the House of Lords past its first reading. Although no Parliamentary Act for the levy of tonnage and poundage was obtained, Charles continued to collect the duties. A poorly conceived and executed naval expedition against Spain under the leadership of Buckingham went badly, and the House of Commons began proceedings for the impeachment of the duke. In May 1626, Charles nominated Buckingham as Chancellor of Cambridge University in a show of support, and had two members who had spoken against Buckingham – Dudley Digges and Sir John Eliot – arrested at the door of the House. The Commons was outraged by the imprisonment of two of their members, and after about a week in custody, both were released. On 12 June 1626, the Commons launched a direct protestation attacking Buckingham, stating, "We protest before your Majesty and the whole world that until this great person be removed from intermeddling with the great affairs of state, we are out of hope of any good success; and do fear that any money we shall or can give will, through his misemployment, be turned rather to the hurt and prejudice of this your kingdom than otherwise, as by lamentable experience we have found those large supplies formerly and lately given." Despite Parliament's protests, however, Charles refused to dismiss his friend, dismissing Parliament instead. Meanwhile, domestic quarrels between Charles and Henrietta Maria were souring the early years of their marriage. 
Disputes over her jointure, appointments to her household, and the practice of her religion culminated in the king expelling the vast majority of her French attendants in August 1626. Despite Charles's agreement to provide the French with English ships as a condition of marrying Henrietta Maria, in 1627 he launched an attack on the French coast to defend the Huguenots at La Rochelle. The action, led by Buckingham, was ultimately unsuccessful. Buckingham's failure to protect the Huguenots – and his retreat from Saint-Martin-de-Ré – spurred Louis XIII's siege of La Rochelle and furthered the English Parliament's and people's detestation of the duke. Charles provoked further unrest by trying to raise money for the war through a "forced loan": a tax levied without parliamentary consent. In November 1627, the test case in the King's Bench, the "Five Knights' Case", found that the king had a prerogative right to imprison without trial those who refused to pay the forced loan. Summoned again in March 1628, on 26 May Parliament adopted a Petition of Right, calling upon the king to acknowledge that he could not levy taxes without Parliament's consent, not impose martial law on civilians, not imprison them without due process, and not quarter troops in their homes. Charles assented to the petition on 7 June, but by the end of the month he had prorogued Parliament and re-asserted his right to collect customs duties without authorisation from Parliament. On 23 August 1628, Buckingham was assassinated. Charles was deeply distressed. According to Edward Hyde, 1st Earl of Clarendon, he "threw himself upon his bed, lamenting with much passion and with abundance of tears". He remained grieving in his room for two days. In contrast, the public rejoiced at Buckingham's death, which accentuated the gulf between the court and the nation, and between the Crown and the Commons. Although the death of Buckingham effectively ended the war with Spain and eliminated his leadership as an issue, it did not end the conflicts between Charles and Parliament. It did, however, coincide with an improvement in Charles's relationship with his wife, and by November 1628 their old quarrels were at an end. Perhaps Charles's emotional ties were transferred from Buckingham to Henrietta Maria. She became pregnant for the first time, and the bond between them grew ever stronger. Together, they embodied an image of virtue and family life, and their court became a model of formality and morality. Partly inspired by his visit to the Spanish court in 1623, Charles became a passionate and knowledgeable art collector, amassing one of the finest art collections ever assembled. In Spain, he sat for a sketch by Velázquez, and acquired works by Titian and Correggio, among others. In England, his commissions included the ceiling of the Banqueting House, Whitehall, by Rubens and paintings by other artists from the Low Countries such as van Honthorst, Mytens, and van Dyck. His close associates, including the Duke of Buckingham and the Earl of Arundel, shared his interest and have been dubbed the Whitehall Group. In 1627 and 1628, Charles purchased the entire collection of the Duke of Mantua, which included work by Titian, Correggio, Raphael, Caravaggio, del Sarto and Mantegna. His collection grew further to encompass Bernini, Bruegel, da Vinci, Holbein, Hollar, Tintoretto and Veronese, and self-portraits by both Dürer and Rembrandt. By Charles's death, there were an estimated 1,760 paintings, most of which were sold and dispersed by Parliament. 
In January 1629, Charles opened the second session of the English Parliament, which had been prorogued in June 1628, with a moderate speech on the tonnage and poundage issue. Members of the House of Commons began to voice opposition to Charles's policies in light of the case of John Rolle, a Member of Parliament whose goods had been confiscated for failing to pay tonnage and poundage. Many MPs viewed the imposition of the tax as a breach of the Petition of Right. When Charles ordered a parliamentary adjournment on 2 March, members held the Speaker, Sir John Finch, down in his chair so that the ending of the session could be delayed long enough for resolutions against Catholicism, Arminianism and tonnage and poundage to be read out and acclaimed by the chamber. The provocation was too much for Charles, who dissolved Parliament and had nine parliamentary leaders, including Sir John Eliot, imprisoned over the matter, thereby turning the men into martyrs and giving popular cause to their protest. Shortly after the prorogation, without the means in the foreseeable future to raise funds from Parliament for a European war, or the influence of Buckingham, Charles made peace with France and Spain. The following eleven years, during which Charles ruled England without a Parliament, are referred to as the personal rule or the "eleven years' tyranny". Ruling without Parliament was not exceptional, and was supported by precedent. Only Parliament, however, could legally raise taxes, and without it Charles's capacity to acquire funds for his treasury was limited to his customary rights and prerogatives. A large fiscal deficit had arisen in the reigns of Elizabeth I and James I. Notwithstanding Buckingham's short-lived campaigns against both Spain and France, there was little financial capacity for Charles to wage wars overseas. Throughout his reign, Charles was obliged to rely primarily on volunteer forces for defence and on diplomatic efforts to support his sister, Elizabeth, and his foreign policy objective of the restoration of the Palatinate. England was still the least taxed country in Europe, with no official excise and no regular direct taxation. To raise revenue without reconvening Parliament, Charles resurrected an all-but-forgotten law called the "Distraint of Knighthood", in abeyance for over a century, which required any man who earned £40 or more from land each year to present himself at the king's coronation to be knighted. Relying on this old statute, Charles fined individuals who had failed to attend his coronation in 1626. The chief tax imposed by Charles was a feudal levy known as ship money, which proved even more unpopular, and lucrative, than tonnage and poundage before it. Previously, collection of ship money had been authorised only during wars, and only on coastal regions. Charles, however, argued that there was no legal bar to collecting the tax for defence during peacetime and throughout the whole of the kingdom. Ship money, paid directly to the Treasury of the Navy, provided between £150,000 and £200,000 annually between 1634 and 1638, after which yields declined. Opposition to ship money steadily grew, but the 12 common law judges of England declared that the tax was within the king's prerogative, though some of them had reservations. The prosecution of John Hampden for non-payment in 1637–38 provided a platform for popular protest, and the judges found against Hampden only by the narrow margin of 7–5. 
The king also derived money through the granting of monopolies, despite a statute forbidding such action, which, though inefficient, raised an estimated £100,000 a year in the late 1630s. One such monopoly was for soap, pejoratively referred to as "popish soap" because some of its backers were Catholics. Charles also raised funds from the Scottish nobility, at the price of considerable acrimony, by the Act of Revocation (1625), whereby all gifts of royal or church land made to the nobility since 1540 were revoked, with continued ownership being subject to an annual rent. In addition, the boundaries of the royal forests in England were restored to their ancient limits as part of a scheme to maximise income by exploiting the land and fining land users within the reasserted boundaries for encroachment. The focus of the programme was disafforestation and sale of forest lands for conversion to pasture and arable farming, or in the case of the Forest of Dean, development for the iron industry. Disafforestation frequently caused riots and disturbances including those known as the Western Rising. Against the background of this unrest, Charles faced bankruptcy in mid-1640. The City of London, preoccupied with its own grievances, refused to make any loans to the king, as did foreign powers. In this extremity, in July Charles seized silver bullion worth £130,000 held in trust at the mint in the Tower of London, promising its later return at 8% interest to its owners. In August, after the East India Company refused to grant a loan, Lord Cottington seized the company's stock of pepper and spices and sold it for £60,000 (far below its market value), promising to refund the money with interest later. Throughout Charles's reign, the issue of how far the English Reformation should progress was constantly in the forefront of political debate. Arminian theology emphasised clerical authority and the individual's ability to reject or accept salvation, which opponents viewed as heretical and a potential vehicle for the reintroduction of Roman Catholicism. Puritan reformers thought Charles was too sympathetic to the teachings of Arminianism, which they considered irreligious, and opposed his desire to move the Church of England in a more traditional and sacramental direction. In addition, his Protestant subjects followed the European war closely and grew increasingly dismayed by Charles's diplomacy with Spain and his failure to support the Protestant cause abroad effectively. In 1633, Charles appointed William Laud Archbishop of Canterbury. They initiated a series of reforms aimed at ensuring religious uniformity by restricting non-conformist preachers, insisting the liturgy be celebrated as prescribed by the Book of Common Prayer, organising the internal architecture of English churches to emphasise the sacrament of the altar, and re-issuing King James's Declaration of Sports, which permitted secular activities on the sabbath. The Feoffees for Impropriations, an organisation that bought benefices and advowsons so that Puritans could be appointed to them, was dissolved. Laud prosecuted those who opposed his reforms in the Court of High Commission and the Star Chamber, the two most powerful courts in the land. The courts became feared for their censorship of opposing religious views and unpopular among the propertied classes for inflicting degrading punishments on gentlemen. 
For example, in 1637 William Prynne, Henry Burton and John Bastwick were pilloried, whipped, mutilated by cropping, and imprisoned indefinitely for publishing anti-episcopal pamphlets. When Charles attempted to impose his religious policies in Scotland, he faced numerous difficulties. Although born in Scotland, Charles had become estranged from his northern kingdom; his first visit since early childhood was for his Scottish coronation in 1633. To the dismay of the Scots, who had removed many traditional rituals from their liturgical practice, Charles insisted that the coronation be conducted in the Anglican rite. In 1637, the king ordered the use of a new prayer book in Scotland that was almost identical to the English Book of Common Prayer, without consulting either the Scottish Parliament or the Kirk. Although it had been written, under Charles's direction, by Scottish bishops, many Scots resisted it, seeing the new prayer book as a vehicle for introducing Anglicanism to Scotland. On 23 July, riots erupted in Edinburgh on the first Sunday of the prayer book's use, and unrest spread throughout the Kirk. The public began to mobilise around a reaffirmation of the National Covenant, whose signatories pledged to uphold the reformed religion of Scotland and reject any innovations that were not authorised by Kirk and Parliament. When the General Assembly of the Church of Scotland met in November 1638, it condemned the new prayer book, abolished episcopal church government, and adopted presbyterian government by elders and deacons. Charles perceived the unrest in Scotland as a rebellion against his authority, precipitating the First Bishops' War in 1639. Charles did not seek subsidies from the English Parliament to wage war, but instead raised an army without parliamentary aid and marched to Berwick-upon-Tweed, on the border of Scotland. Charles's army did not engage the Covenanters, as the king feared the defeat of his forces, whom he believed to be significantly outnumbered by the Scots. In the Treaty of Berwick, Charles regained custody of his Scottish fortresses and secured the dissolution of the Covenanters' interim government, albeit with the decisive concession that both the Scottish Parliament and the General Assembly of the Scottish Church would be summoned. The military failure in the First Bishops' War caused a financial and diplomatic crisis for Charles that deepened when his efforts to raise funds from Spain, while simultaneously continuing his support for his Palatine relatives, led to the public humiliation of the Battle of the Downs, where the Dutch destroyed a Spanish bullion fleet off the coast of Kent in sight of the impotent English navy. Charles continued peace negotiations with the Scots in a bid to gain time before launching a new military campaign. Because of his financial weakness, he was forced to call Parliament into session in an attempt to raise funds for such a venture. Both English and Irish parliaments were summoned in the early months of 1640. In March 1640, the Irish Parliament duly voted a subsidy of £180,000 and promised to raise an army 9,000 strong by the end of May. In the English general election in March, however, court candidates fared badly, and Charles's dealings with the English Parliament in April quickly reached stalemate.
The earls of Northumberland and Strafford attempted to broker a compromise whereby the king would agree to forfeit ship money in exchange for £650,000 (although the cost of the coming war was estimated at around £1 million). Nevertheless, this alone was insufficient to produce consensus in the Commons. The Parliamentarians' calls for further reforms were ignored by Charles, who still retained the support of the House of Lords. Despite the protests of Northumberland, the Short Parliament (as it came to be known) was dissolved in May 1640, less than a month after it assembled. By this stage Strafford, Lord Deputy of Ireland since 1632, had emerged as Charles's right-hand man and together with Laud, pursued a policy of "Thorough" that aimed to make central royal authority more efficient and effective at the expense of local or anti-government interests. Although originally a critic of the king, Strafford defected to royal service in 1628 (due in part to Buckingham's persuasion), and had since emerged, alongside Laud, as the most influential of Charles's ministers. Bolstered by the failure of the English Short Parliament, the Scottish Parliament declared itself capable of governing without the king's consent, and in August 1640 the Covenanter army moved into the English county of Northumberland. Following the illness of the earl of Northumberland, who was the king's commander-in-chief, Charles and Strafford went north to command the English forces, despite Strafford being ill himself with a combination of gout and dysentery. The Scottish soldiery, many of whom were veterans of the Thirty Years' War, had far greater morale and training compared to their English counterparts. They met virtually no resistance until reaching Newcastle upon Tyne, where they defeated the English forces at the Battle of Newburn and occupied the city, as well as the neighbouring county of Durham. As demands for a parliament grew, Charles took the unusual step of summoning a great council of peers. By the time it met, on 24 September at York, Charles had resolved to follow the almost universal advice to call a parliament. After informing the peers that a parliament would convene in November, he asked them to consider how he could acquire funds to maintain his army against the Scots in the meantime. They recommended making peace. A cessation of arms, although not a final settlement, was negotiated in the humiliating Treaty of Ripon, signed in October 1640. The treaty stated that the Scots would continue to occupy Northumberland and Durham and be paid £850 per day until peace was restored and the English Parliament recalled, which would be required to raise sufficient funds to pay the Scottish forces. Consequently, Charles summoned what later became known as the Long Parliament. Once again, Charles's supporters fared badly at the polls. Of the 493 members of the Commons returned in November, over 350 were opposed to the king. The Long Parliament proved just as difficult for Charles as had the Short Parliament. It assembled on 3 November 1640 and quickly began proceedings to impeach the king's leading counsellors of high treason. Strafford was taken into custody on 10 November; Laud was impeached on 18 December; John Finch, now Lord Keeper of the Great Seal, was impeached the following day, and he consequently fled to the Hague with Charles's permission on 21 December. 
To prevent the king from dissolving it at will, Parliament passed the Triennial Act, which required Parliament to be summoned at least once every three years, and permitted the Lord Keeper and 12 peers to summon Parliament if the king failed to do so. The Act was coupled with a subsidy bill, and so to secure the latter, Charles grudgingly granted royal assent in February 1641. Strafford had become the principal target of the Parliamentarians, particularly John Pym, and he went on trial for high treason on 22 March 1641. However, the key allegation by Sir Henry Vane that Strafford had threatened to use the Irish army to subdue England was not corroborated, and on 10 April Pym's case collapsed. Pym and his allies immediately launched a bill of attainder, which simply declared Strafford guilty and pronounced the sentence of death. Charles assured Strafford that "upon the word of a king you shall not suffer in life, honour or fortune", and the attainder could not succeed if Charles withheld assent. Furthermore, many members and most peers were opposed to the attainder, not wishing, in the words of one, to "commit murder with the sword of justice". However, increased tensions, and an attempted coup by royalist army officers in support of Strafford in which Charles was involved, began to sway the issue. The Commons passed the bill on 20 April by a large margin (204 in favour, 59 opposed, and 230 abstained), and the Lords acquiesced (by 26 votes to 19, with 79 absent) in May. On 3 May, Parliament's Protestation attacked the "wicked counsels" of Charles's "arbitrary and tyrannical government"; while those who signed the petition undertook to defend the king's "person, honour and estate", they also swore to preserve "the true reformed religion", parliament, and the "rights and liberties of the subjects". Charles, fearing for the safety of his family in the face of unrest, assented reluctantly to Strafford's attainder on 9 May after consulting his judges and bishops. Strafford was beheaded three days later. In addition, in early May, Charles assented to an unprecedented Act that forbade the dissolution of the English Parliament without its consent. In the following months, ship money, fines in distraint of knighthood and excise without parliamentary consent were declared unlawful, and the Courts of Star Chamber and High Commission were abolished. All remaining forms of taxation were legalised and regulated by the Tonnage and Poundage Act. The House of Commons also launched bills attacking bishops and episcopacy, but these failed in the Lords. Charles had made important concessions in England, and temporarily improved his position in Scotland by securing the favour of the Scots on a visit from August to November 1641, during which he conceded to the official establishment of presbyterianism. However, following an attempted royalist coup in Scotland, known as "The Incident", Charles's credibility was significantly undermined. In Ireland, the population was split into three main socio-political groups: the Gaelic Irish, who were Catholic; the Old English, who were descended from medieval Normans and were also predominantly Catholic; and the New English, who were Protestant settlers from England and Scotland aligned with the English Parliament and the Covenanters. Strafford's administration had improved the Irish economy and boosted tax revenue, but had done so by heavy-handedly imposing order.
He had trained up a large Catholic army in support of the king and had weakened the authority of the Irish Parliament, while continuing to confiscate land from Catholics for Protestant settlement at the same time as promoting a Laudian Anglicanism that was anathema to presbyterians. As a result, all three groups had become disaffected. Strafford's impeachment provided a new departure for Irish politics whereby all sides joined together to present evidence against him. In a similar manner to the English Parliament, the Old English members of the Irish Parliament argued that while opposed to Strafford they remained loyal to Charles. They argued that the king had been led astray by malign counsellors, and that, moreover, a viceroy such as Strafford could emerge as a despotic figure instead of ensuring that the king was directly involved in governance. Strafford's fall from power weakened Charles's influence in Ireland. The dissolution of the Irish army was unsuccessfully demanded three times by the English Commons during Strafford's imprisonment, until Charles was eventually forced through lack of money to disband the army at the end of Strafford's trial. Disputes concerning the transfer of land ownership from native Catholic to settler Protestant, particularly in relation to the plantation of Ulster, coupled with resentment at moves to ensure the Irish Parliament was subordinate to the Parliament of England, sowed the seeds of rebellion. When armed conflict arose between the Gaelic Irish and New English, in late October 1641, the Old English sided with the Gaelic Irish while simultaneously professing their loyalty to the king. In November 1641, the House of Commons passed the Grand Remonstrance, a long list of grievances against actions by Charles's ministers committed since the beginning of his reign (that were asserted to be part of a grand Catholic conspiracy of which the king was an unwitting member), but it was in many ways a step too far by Pym and passed by only 11 votes – 159 to 148. Furthermore, the Remonstrance had very little support in the House of Lords, which the Remonstrance attacked. The tension was heightened by news of the Irish rebellion, coupled with inaccurate rumours of Charles's complicity. Throughout November, a series of alarmist pamphlets published stories of atrocities in Ireland, which included massacres of New English settlers by the native Irish who could not be controlled by the Old English lords. Rumours of "papist" conspiracies circulated in England, and English anti-Catholic opinion was strengthened, damaging Charles's reputation and authority. The English Parliament distrusted Charles's motivations when he called for funds to put down the Irish rebellion; many members of the Commons suspected that forces raised by Charles might later be used against Parliament itself. Pym's Militia Bill was intended to wrest control of the army from the king, but it did not have the support of the Lords, let alone Charles. Instead, the Commons passed the bill as an ordinance, which they claimed did not require royal assent. The Militia Ordinance appears to have prompted more members of the Lords to support the king. In an attempt to strengthen his position, Charles generated great antipathy in London, which was already fast falling into lawlessness, when he placed the Tower of London under the command of Colonel Thomas Lunsford, an infamous, albeit efficient, career officer. 
When rumours reached Charles that Parliament intended to impeach his wife for supposedly conspiring with the Irish rebels, the king decided to take drastic action. Charles suspected, probably correctly, that some members of the English Parliament had colluded with the invading Scots. On 3 January 1642, Charles directed Parliament to give up five members of the Commons – Pym, John Hampden, Denzil Holles, William Strode and Sir Arthur Haselrig – and one peer – Lord Mandeville – on the grounds of high treason. When Parliament refused, it was possibly Henrietta Maria who persuaded Charles to arrest the five members by force, which Charles intended to carry out personally. However, news of the warrant reached Parliament ahead of him, and the wanted men slipped away by boat shortly before Charles entered the House of Commons with an armed guard on 4 January. Having displaced the Speaker, William Lenthall, from his chair, the king asked him where the MPs had fled. Lenthall, on his knees, famously replied, "May it please your Majesty, I have neither eyes to see nor tongue to speak in this place but as the House is pleased to direct me, whose servant I am here." Charles abjectly declared "all my birds have flown", and was forced to retire, empty-handed. The botched arrest attempt was politically disastrous for Charles. No English sovereign had ever entered the House of Commons, and his unprecedented invasion of the chamber to arrest its members was considered a grave breach of parliamentary privilege. In one stroke Charles destroyed his supporters' efforts to portray him as a defence against innovation and disorder. Parliament quickly seized London, and Charles fled the capital for Hampton Court Palace on 10 January, moving two days later to Windsor Castle. After sending his wife and eldest daughter to safety abroad in February, he travelled northwards, hoping to seize the military arsenal at Hull. To his dismay, he was rebuffed by the town's Parliamentary governor, Sir John Hotham, who refused him entry in April, and Charles was forced to withdraw. In mid-1642, both sides began to arm. Charles raised an army using the medieval method of commission of array, and Parliament called for volunteers for its militia. Following futile negotiations, Charles raised the royal standard in Nottingham on 22 August 1642. At the start of the First English Civil War, Charles's forces controlled roughly the Midlands, Wales, the West Country and northern England. He set up his court at Oxford. Parliament controlled London, the south-east and East Anglia, as well as the English navy. After a few skirmishes, the opposing forces met in earnest at Edgehill, on 23 October 1642. Charles's nephew Prince Rupert of the Rhine disagreed with the battle strategy of the royalist commander Lord Lindsey, and Charles sided with Rupert. Lindsey resigned, leaving Charles to assume overall command assisted by Lord Forth. Rupert's cavalry successfully charged through the parliamentary ranks, but instead of swiftly returning to the field, rode off to plunder the parliamentary baggage train. Lindsey, acting as a colonel, was wounded and bled to death without medical attention. The battle ended inconclusively as the daylight faded. In his own words, the experience of battle had left Charles "exceedingly and deeply grieved". He regrouped at Oxford, turning down Rupert's suggestion of an immediate attack on London. 
After a week, he set out for the capital on 3 November, capturing Brentford on the way while simultaneously continuing to negotiate with civic and parliamentary delegations. At Turnham Green on the outskirts of London, the royalist army met resistance from the city militia, and faced with a numerically superior force, Charles ordered a retreat. He overwintered in Oxford, strengthening the city's defences and preparing for the next season's campaign. Peace talks between the two sides collapsed in April. The war continued indecisively over the next couple of years, and Henrietta Maria returned to Britain for 17 months from February 1643. After Rupert captured Bristol in July 1643, Charles visited the port city and laid siege to Gloucester, further up the river Severn. His plan to undermine the city walls failed due to heavy rain, and on the approach of a parliamentary relief force, Charles lifted the siege and withdrew to Sudeley Castle. The parliamentary army turned back towards London, and Charles set off in pursuit. The two armies met at Newbury, Berkshire, on 20 September. Just as at Edgehill, the battle stalemated at nightfall, and the armies disengaged. In January 1644, Charles summoned a Parliament at Oxford, which was attended by about 40 peers and 118 members of the Commons; all told, the Oxford Parliament, which sat until March 1645, was supported by the majority of peers and about a third of the Commons. Charles became disillusioned by the assembly's ineffectiveness, calling it a "mongrel" in private letters to his wife. In 1644, Charles remained in the southern half of England while Rupert rode north to relieve Newark and York, which were under threat from parliamentary and Scottish Covenanter armies. Charles was victorious at the battle of Cropredy Bridge in late June, but the royalists in the north were defeated at the battle of Marston Moor just a few days later. The king continued his campaign in the south, encircling and disarming the parliamentary army of the Earl of Essex. Returning northwards to his base at Oxford, he fought at Newbury for a second time before the winter closed in; the battle ended indecisively. Attempts to negotiate a settlement over the winter, while both sides re-armed and re-organised, were again unsuccessful. At the battle of Naseby on 14 June 1645, Rupert's horsemen again mounted a successful charge against the flank of Parliament's New Model Army, but Charles's troops elsewhere on the field were pushed back by the opposing forces. Charles, attempting to rally his men, rode forward but as he did so, Lord Carnwath seized his bridle and pulled him back, fearing for the king's safety. Carnwath's action was misinterpreted by the royalist soldiers as a signal to move back, leading to a collapse of their position. The military balance tipped decisively in favour of Parliament. There followed a series of defeats for the royalists, and then the Siege of Oxford, from which Charles escaped (disguised as a servant) in April 1646. He put himself into the hands of the Scottish presbyterian army besieging Newark, and was taken northwards to Newcastle upon Tyne. After nine months of negotiations, the Scots finally arrived at an agreement with the English Parliament: in exchange for £100,000, and the promise of more money in the future, the Scots withdrew from Newcastle and delivered Charles to the parliamentary commissioners in January 1647. 
Parliament held Charles under house arrest at Holdenby House in Northamptonshire until Cornet George Joyce took him by threat of force from Holdenby on 3 June in the name of the New Model Army. By this time, mutual suspicion had developed between Parliament, which favoured army disbandment and presbyterianism, and the New Model Army, which was primarily officered by congregationalist Independents, who sought a greater political role. Charles was eager to exploit the widening divisions, and apparently viewed Joyce's actions as an opportunity rather than a threat. He was taken first to Newmarket, at his own suggestion, and then transferred to Oatlands and subsequently Hampton Court, while more ultimately fruitless negotiations took place. By November, he determined that it would be in his best interests to escape – perhaps to France, Southern England or to Berwick-upon-Tweed, near the Scottish border. He fled Hampton Court on 11 November, and from the shores of Southampton Water made contact with Colonel Robert Hammond, Parliamentary Governor of the Isle of Wight, whom he apparently believed to be sympathetic. Hammond, however, confined Charles in Carisbrooke Castle and informed Parliament that Charles was in his custody. From Carisbrooke, Charles continued to try to bargain with the various parties. In direct contrast to his previous conflict with the Scottish Kirk, on 26 December 1647 he signed a secret treaty with the Scots. Under the agreement, called the "Engagement", the Scots undertook to invade England on Charles's behalf and restore him to the throne on condition that presbyterianism be established in England for three years. The royalists rose in May 1648, igniting the Second Civil War, and as agreed with Charles, the Scots invaded England. Uprisings in Kent, Essex, and Cumberland, and a rebellion in South Wales, were put down by the New Model Army, and with the defeat of the Scots at the Battle of Preston in August 1648, the royalists lost any chance of winning the war. Charles's only recourse was to return to negotiations, which were held at Newport on the Isle of Wight. On 5 December 1648, Parliament voted by 129 to 83 to continue negotiating with the king, but Oliver Cromwell and the army opposed any further talks with someone they viewed as a bloody tyrant and were already taking action to consolidate their power. Hammond was replaced as Governor of the Isle of Wight on 27 November, and placed in the custody of the army the following day. In Pride's Purge on 6 and 7 December, the members of Parliament out of sympathy with the military were arrested or excluded by Colonel Thomas Pride, while others stayed away voluntarily. The remaining members formed the Rump Parliament. It was effectively a military coup. Charles was moved to Hurst Castle at the end of 1648, and thereafter to Windsor Castle. In January 1649, the Rump House of Commons indicted him on a charge of treason, which was rejected by the House of Lords. The idea of trying a king was a novel one. The Chief Justices of the three common law courts of England – Henry Rolle, Oliver St John and John Wilde – all opposed the indictment as unlawful. The Rump Commons declared itself capable of legislating alone, passed a bill creating a separate court for Charles's trial, and declared the bill an act without the need for royal assent. The High Court of Justice established by the Act consisted of 135 commissioners, but many either refused to serve or chose to stay away. 
Only 68 (all firm Parliamentarians) attended Charles's trial on charges of high treason and "other high crimes" that began on 20 January 1649 in Westminster Hall. John Bradshaw acted as President of the Court, and the prosecution was led by the Solicitor General, John Cook. Charles was accused of treason against England by using his power to pursue his personal interest rather than the good of the country. The charge stated that he, "for accomplishment of such his designs, and for the protecting of himself and his adherents in his and their wicked practices, to the same ends hath traitorously and maliciously levied war against the present Parliament, and the people therein represented", and that the "wicked designs, wars, and evil practices of him, the said Charles Stuart, have been, and are carried on for the advancement and upholding of a personal interest of will, power, and pretended prerogative to himself and his family, against the public interest, common right, liberty, justice, and peace of the people of this nation." Reflecting the modern concept of command responsibility, the indictment held him "guilty of all the treasons, murders, rapines, burnings, spoils, desolations, damages and mischiefs to this nation, acted and committed in the said wars, or occasioned thereby." An estimated 300,000 people, or 6% of the population, died during the war. Over the first three days of the trial, whenever Charles was asked to plead, he refused, stating his objection with the words: "I would know by what power I am called hither, by what lawful authority...?" He claimed that no court had jurisdiction over a monarch, that his own authority to rule had been given to him by God and by the traditional laws of England, and that the power wielded by those trying him was only that of force of arms. Charles insisted that the trial was illegal. The court, by contrast, challenged the doctrine of sovereign immunity and proposed that "the King of England was not a person, but an office whose every occupant was entrusted with a limited power to govern 'by and according to the laws of the land and not otherwise'." At the end of the third day, Charles was removed from the court, which then heard over 30 witnesses against the king in his absence over the next two days, and on 26 January condemned him to death. The following day, the king was brought before a public session of the commission, declared guilty, and sentenced. Fifty-nine of the commissioners signed Charles's death warrant. Charles's beheading was scheduled for Tuesday, 30 January 1649. Two of his children remained in England under the control of the Parliamentarians: Elizabeth and Henry. They were permitted to visit him on 29 January, and he bade them a tearful farewell. The following morning, he called for two shirts to prevent the cold weather from causing any noticeable shivers that the crowd could have mistaken for fear: "the season is so sharp as probably may make me shake, which some observers may imagine proceeds from fear. I would have no such imputation." He walked under guard from St James's Palace, where he had been confined, to the Palace of Whitehall, where an execution scaffold was erected in front of the Banqueting House. Charles was separated from spectators by large ranks of soldiers, and his last speech reached only those with him on the scaffold.
He blamed his fate on his failure to prevent the execution of his loyal servant Strafford: "An unjust sentence that I suffered to take effect, is punished now by an unjust sentence on me." He declared that he had desired the liberty and freedom of the people as much as any, "but I must tell you that their liberty and freedom consists in having government ... It is not their having a share in the government; that is nothing appertaining unto them. A subject and a sovereign are clean different things." He continued, "I shall go from a corruptible to an incorruptible Crown, where no disturbance can be." At about 2:00 p.m., Charles put his head on the block after saying a prayer and signalled the executioner when he was ready by stretching out his hands; he was then beheaded with one clean stroke. According to observer Philip Henry, a moan "as I never heard before and desire I may never hear again" rose from the assembled crowd, some of whom then dipped their handkerchiefs in the king's blood as a memento. The executioner was masked and disguised, and there is debate over his identity. The commissioners approached Richard Brandon, the common hangman of London, but he refused, at least at first, despite being offered £200. It is possible he relented and undertook the commission after being threatened with death, but there are others who have been named as potential candidates, including George Joyce, William Hulet and Hugh Peters. The clean strike, confirmed by an examination of the king's body at Windsor in 1813, suggests that the execution was carried out by an experienced headsman. It was common practice for the severed head of a traitor to be held up and exhibited to the crowd with the words "Behold the head of a traitor!" Although Charles's head was exhibited, the words were not used, possibly because the executioner did not want his voice recognised. On the day after the execution, the king's head was sewn back onto his body, which was then embalmed and placed in a lead coffin. The commission refused to allow Charles's burial at Westminster Abbey, so his body was conveyed to Windsor on the night of 7 February. He was buried in private in the Henry VIII vault alongside the coffins of Henry VIII and Henry's third wife, Jane Seymour, in St George's Chapel, Windsor Castle, on 9 February 1649. The king's son, Charles II, later planned for an elaborate royal mausoleum to be erected in Hyde Park, London, but it was never built. Ten days after Charles's execution, on the day of his interment, a memoir purporting to be written by the king appeared for sale. This book, the "Eikon Basilike" (Greek for the "Royal Portrait"), contained an "apologia" for royal policies, and it proved an effective piece of royalist propaganda. John Milton wrote a Parliamentary rejoinder, the "Eikonoklastes" ("The Iconoclast"), but the response made little headway against the pathos of the royalist book. Anglicans and royalists fashioned an image of martyrdom, and in the Convocations of Canterbury and York of 1660 King Charles the Martyr was added to the Church of England's liturgical calendar. High church Anglicans held special services on the anniversary of his death. Churches, such as those at Falmouth and Tunbridge Wells, and Anglican devotional societies such as the Society of King Charles the Martyr, were founded in his honour. With the monarchy overthrown, England became a republic or "Commonwealth". The House of Lords was abolished by the Rump Commons, and executive power was assumed by a Council of State. 
All significant military opposition in Britain and Ireland was extinguished by the forces of Oliver Cromwell in the Third English Civil War and the Cromwellian conquest of Ireland. Cromwell forcibly disbanded the Rump Parliament in 1653, thereby establishing the Protectorate with himself as Lord Protector. Upon his death in 1658, he was briefly succeeded by his ineffective son, Richard. Parliament was reinstated, and the monarchy was restored to Charles I's eldest son, Charles II, in 1660. In the words of John Philipps Kenyon, "Charles Stuart is a man of contradictions and controversy". Revered by high Tories who considered him a saintly martyr, he was condemned by Whig historians, such as Samuel Rawson Gardiner, who thought him duplicitous and delusional. In recent decades, most historians have criticised him, the main exception being Kevin Sharpe, who offered a more sympathetic view of Charles that has not been widely adopted. While Sharpe argued that the king was a dynamic man of conscience, Professor Barry Coward thought Charles "was the most incompetent monarch of England since Henry VI", a view shared by Ronald Hutton, who called him "the worst king we have had since the Middle Ages". Archbishop William Laud, who was beheaded by Parliament during the war, described Charles as "A mild and gracious prince who knew not how to be, or how to be made, great." Charles was more sober and refined than his father, but he was intransigent. He deliberately pursued unpopular policies that ultimately brought ruin on himself. Both Charles and James were advocates of the divine right of kings, but while James's ambitions concerning absolute prerogative were tempered by compromise and consensus with his subjects, Charles believed that he had no need to compromise or even to explain his actions. He thought he was answerable only to God. "Princes are not bound to give account of their actions," he wrote, "but to God alone". The official style of Charles I as king in England was "Charles, by the Grace of God, King of England, Scotland, France and Ireland, Defender of the Faith, etc." The style "of France" was only nominal, and was used by every English monarch from Edward III to George III, regardless of the amount of French territory actually controlled. The authors of his death warrant referred to him as "Charles Stuart, King of England". As Duke of York, Charles bore the royal arms of the kingdom differenced by a label Argent of three points, each bearing three torteaux Gules. The Prince of Wales bore the royal arms differenced by a plain label Argent of three points. As king, Charles bore the royal arms undifferenced: Quarterly, I and IV Grandquarterly, Azure three fleurs-de-lis Or (for France) and Gules three lions passant guardant in pale Or (for England); II Or a lion rampant within a tressure flory-counter-flory Gules (for Scotland); III Azure a harp Or stringed Argent (for Ireland). In Scotland, the Scottish arms were placed in the first and fourth quarters with the English and French arms in the second quarter. Charles had nine children, two of whom eventually succeeded as king, and two of whom died at or shortly after birth.

Chelsea F.C. Chelsea Football Club is a professional football club in London, England, that competes in the Premier League. Founded in 1905, the club has played its home games at Stamford Bridge ever since. Chelsea won the First Division title in 1955, followed by various cup competitions between 1965 and 1971.
The past two decades have seen sustained success, with the club winning 23 trophies since 1997. In total, the club have won 28 major trophies: six league titles, eight FA Cups, five League Cups, four FA Community Shields, one UEFA Champions League, two UEFA Cup Winners' Cups, one UEFA Europa League and one UEFA Super Cup. Chelsea's regular kit colours are royal blue shirts and shorts with white socks. The club's crest has been changed several times in attempts to re-brand the club and modernise its image. The current crest, featuring a ceremonial lion rampant regardant holding a staff, is a modification of the one introduced in the early 1950s. The club have the sixth-highest average all-time attendance in English football, with an average of 41,280 for the 2017–18 season. Since 2003, Chelsea have been owned by Russian billionaire Roman Abramovich. In 2018, they were ranked by "Forbes" magazine as the seventh most valuable football club in the world, at £1.54 billion ($2.06 billion), and in the 2016–17 season they were the eighth highest-earning football club in the world, with earnings of €428 million. In 1904, Gus Mears acquired the Stamford Bridge athletics stadium with the aim of turning it into a football ground. An offer to lease it to nearby Fulham was turned down, so Mears opted to found his own club to use the stadium. As there was already a team named Fulham in the borough, the name of the adjacent borough of Chelsea was chosen for the new club; names like "Kensington FC", "Stamford Bridge FC" and "London FC" were also considered. Chelsea were founded on 10 March 1905 at The Rising Sun pub (now The Butcher's Hook), opposite the present-day main entrance to the ground on Fulham Road, and were elected to the Football League shortly afterwards. The club won promotion to the First Division in their second season, and yo-yoed between the First and Second Divisions in their early years. They reached the 1915 FA Cup Final, where they lost to Sheffield United at Old Trafford, and finished third in the First Division in 1920, the club's best league campaign to that point. Chelsea attracted large crowds and had a reputation for signing big-name players, but success continued to elude the club in the inter-war years. Former Arsenal and England centre-forward Ted Drake became manager in 1952 and proceeded to modernise the club. He removed the club's Chelsea pensioner crest, improved the youth set-up and training regime, rebuilt the side with shrewd signings from the lower divisions and amateur leagues, and led Chelsea to their first major trophy success – the League championship – in 1954–55. The following season saw UEFA create the European Champions' Cup, but after objections from The Football League and the FA, Chelsea were persuaded to withdraw from the competition before it started. Chelsea failed to build on this success, and spent the remainder of the 1950s in mid-table. Drake was dismissed in 1961 and replaced by player-coach Tommy Docherty. Docherty built a new team around the group of talented young players emerging from the club's youth set-up, and Chelsea challenged for honours throughout the 1960s, enduring several near-misses. They were on course for a treble of League, FA Cup and League Cup going into the final stages of the 1964–65 season, winning the League Cup but faltering late on in the other two. In three seasons the side were beaten in three major semi-finals and were FA Cup runners-up. Under Docherty's successor, Dave Sexton, Chelsea won the FA Cup in 1970, beating Leeds United 2–1 in a final replay.
Chelsea took their first European honour, a UEFA Cup Winners' Cup triumph, the following year, with another replayed win, this time over Real Madrid in Athens. The late 1970s and the 1980s were a turbulent period for Chelsea. An ambitious redevelopment of Stamford Bridge threatened the financial stability of the club, star players were sold and the team were relegated. Further problems were caused by a notorious hooligan element among the support, which was to plague the club throughout the decade. In 1982, Chelsea were, at the nadir of their fortunes, acquired by Ken Bates for the nominal sum of £1, although by now the Stamford Bridge freehold had been sold to property developers, meaning the club faced losing their home. On the pitch, the team had fared little better, coming close to relegation to the Third Division for the first time, but in 1983 manager John Neal put together an impressive new team for minimal outlay. Chelsea won the Second Division title in 1983–84 and established themselves in the top division, before being relegated again in 1988. The club bounced back immediately by winning the Second Division championship in 1988–89. After a long-running legal battle, Bates reunited the stadium freehold with the club in 1992 by doing a deal with the banks of the property developers, who had been bankrupted by a market crash. Chelsea's form in the new Premier League was unconvincing, although they did reach the 1994 FA Cup Final under Glenn Hoddle. It was not until the appointment of Ruud Gullit as player-manager in 1996 that their fortunes changed. He added several top international players to the side, as the club won the FA Cup in 1997 and established themselves as one of England's top sides again. Gullit was replaced by Gianluca Vialli, who led the team to victory in the League Cup, the UEFA Cup Winners' Cup and the UEFA Super Cup in 1998 and the FA Cup in 2000, as well as to the club's first appearance in the UEFA Champions League. Vialli was sacked in favour of Claudio Ranieri, who guided Chelsea to the 2002 FA Cup Final and Champions League qualification in 2002–03. In June 2003, Bates sold Chelsea to Russian billionaire Roman Abramovich for £140 million. Over £100 million was spent on new players, but Ranieri was unable to deliver any trophies, and was replaced by José Mourinho. Under Mourinho, Chelsea became the fifth English team to win back-to-back league championships since the Second World War (2004–05 and 2005–06), in addition to winning an FA Cup (2007) and two League Cups (2005 and 2007). After a poor start to the 2007–08 season, Mourinho was replaced by Avram Grant, who led the club to their first UEFA Champions League final, which they lost on penalties to Manchester United. Luiz Felipe Scolari took over from Grant, but was sacked after seven months following poor results. Guus Hiddink then took over the club on an interim basis while continuing to manage the Russian national football team. Hiddink guided Chelsea to another FA Cup success, after which he left the club to return full-time to the Russian managerial position. In 2009–10, his successor Carlo Ancelotti led them to their first Premier League and FA Cup "Double", with the team becoming the first English top-flight club to score 100 league goals in a season since 1963. In 2012, caretaker manager Roberto Di Matteo led Chelsea to their seventh FA Cup, and their first UEFA Champions League title, beating Bayern Munich 4–3 on penalties to become the first London club to win the trophy.
In 2013, interim manager Rafael Benítez guided Chelsea to victory in the UEFA Europa League against Benfica, making them the first club to hold two major European titles simultaneously, and one of five clubs – and the first British club, later followed by Manchester United – to have won all three of UEFA's major club competitions. In the summer of 2013, Mourinho returned as manager, leading Chelsea to League Cup success in March 2015, and their fifth league title two months later. Mourinho was sacked after four months of the following season, with the club having lost nine of their first 16 games and sitting only one point above the relegation zone. Two years later, under new coach Antonio Conte, Chelsea won their sixth English title. Chelsea have only had one home ground, Stamford Bridge, where they have played since the team's foundation. It was officially opened on 28 April 1877, and for the first 28 years of its existence it was used almost exclusively by the London Athletic Club as an arena for athletics meetings and not at all for football. In 1904 the ground was acquired by businessman Gus Mears and his brother Joseph, who had also purchased nearby land (formerly a large market garden) with the aim of staging football matches on the now 12.5-acre (51,000 m²) site. Stamford Bridge was designed for the Mears family by the noted football architect Archibald Leitch, who had also designed Ibrox, Craven Cottage and Hampden Park. Most football clubs were founded first, and then sought grounds in which to play, but Chelsea were founded for Stamford Bridge. Starting with an open bowl-like design and one covered terrace, Stamford Bridge had an original capacity of around 100,000. The early 1930s saw the construction of a terrace on the southern part of the ground with a roof that covered around one fifth of the stand. It eventually became known as the "Shed End", the home of Chelsea's most loyal and vocal supporters, particularly during the 1960s, 70s and 80s. The exact origins of the name are unclear, but the fact that the roof looked like a corrugated iron shed roof played a part. In the early 1970s, the club's owners announced a modernisation of Stamford Bridge with plans for a state-of-the-art 50,000 all-seater stadium. Work began on the East Stand in 1972, but the project was beset with problems and was never completed; the cost brought the club close to bankruptcy, culminating in the freehold being sold to property developers. Following a long legal battle, it was not until the mid-1990s that Chelsea's future at the stadium was secured and renovation work resumed. The north, west and southern parts of the ground were converted into all-seater stands and moved closer to the pitch, a process completed by 2001. When Stamford Bridge was redeveloped in the Bates era, many additional features were added to the complex, including two hotels, apartments, bars, restaurants, the Chelsea Megastore, and an interactive visitor attraction called Chelsea World of Sport. The intention was that these facilities would provide extra revenue to support the football side of the business, but they were less successful than hoped, and before the Abramovich takeover in 2003 the debt taken on to finance them was a major burden on the club. Soon after the takeover a decision was taken to drop the "Chelsea Village" brand and refocus on Chelsea as a football club. However, the stadium is sometimes still referred to as part of "Chelsea Village" or "The Village".
The Stamford Bridge freehold, the pitch, the turnstiles and Chelsea's naming rights are now owned by Chelsea Pitch Owners, a non-profit organisation in which fans are the shareholders. The CPO was created to ensure the stadium could never again be sold to developers. As a condition for using the Chelsea FC name, the club have to play their first-team matches at Stamford Bridge, which means that if the club moves to a new stadium, they may have to change their name. Chelsea's training ground is located in Cobham, Surrey. The club moved to Cobham in 2004, and the new training facilities there were completed in 2007; their previous training ground in Harlington was taken over by QPR in 2005. Stamford Bridge has been used for a variety of other sporting events since 1905. It hosted the FA Cup Final from 1920 to 1922, has held ten FA Cup Semi-finals (most recently in 1978), ten FA Charity Shield matches (the last in 1970), and three England international matches, the last in 1932; it was also the venue for an unofficial "Victory International" in 1946. The 2013 UEFA Women's Champions League Final was played at Stamford Bridge. In October 1905 it hosted a rugby union match between the All Blacks and Middlesex, and in 1914 hosted a baseball match between the touring New York Giants and the Chicago White Sox. It was the venue for a boxing match between world flyweight champion Jimmy Wilde and Joe Conn in 1918. The running track was used for dirt track racing between 1928 and 1932, greyhound racing from 1933 to 1968, and Midget car racing in 1948. In 1980, Stamford Bridge hosted the first international floodlit cricket match in the UK, between Essex and the West Indies. It was also the home stadium of the London Monarchs American Football team for the 1997 season. The current club ownership have stated that a larger stadium is necessary in order for Chelsea to stay competitive with rival clubs who have significantly larger stadia, such as Arsenal and Manchester United. Owing to its location next to a main road and two railway lines, fans can only enter the ground via the Fulham Road exits, which places constraints on expansion due to health and safety regulations. The club have consistently affirmed their desire to keep Chelsea at their current home, but have nonetheless been linked with a move to various nearby sites, including the Earls Court Exhibition Centre, Battersea Power Station and the Chelsea Barracks. In October 2011, a proposal from the club to buy back the freehold to the land on which Stamford Bridge sits was voted down by Chelsea Pitch Owners shareholders. In May 2012, the club made a formal bid to purchase Battersea Power Station, with a view to developing the site into a new stadium, but lost out to a Malaysian consortium. The club subsequently announced plans to redevelop Stamford Bridge into a 60,000-seater stadium. On 11 January 2017, it was announced that Hammersmith and Fulham council had given the go-ahead for the new 60,000-seat stadium to be built. However, on 31 May 2018, the club released a statement via their website stating that "Chelsea Football Club announces today that it has put its new stadium project on hold. No further pre-construction design and planning work will occur." The statement went on to elaborate that "The decision was made due to the current unfavourable investment climate." Chelsea have had four main crests, all of which underwent minor variations.
The first, adopted when the club was founded, was the image of a Chelsea pensioner, the army veterans who reside at the nearby Royal Hospital Chelsea. This contributed to the club's original "pensioner" nickname, and remained for the next half-century, though it never appeared on the shirts. When Ted Drake became Chelsea manager in 1952, he began to modernise the club. Believing the Chelsea pensioner crest to be old-fashioned, he insisted that it be replaced. A stop-gap badge which comprised the initials C.F.C. was adopted for a year. In 1953, the club crest was changed to an upright blue lion looking backwards and holding a staff. It was based on elements in the coat of arms of the Metropolitan Borough of Chelsea with the "lion rampant regardant" taken from the arms of then club president Viscount Chelsea and the staff from the Abbots of Westminster, former Lords of the Manor of Chelsea. It also featured three red roses, to represent England, and two footballs. This was the first Chelsea crest to appear on the shirts, in the early 1960s. In 1986, with Ken Bates now owner of the club, Chelsea's crest was changed again as part of another attempt to modernise and because the old rampant lion badge could not be trademarked. The new badge featured a more naturalistic non-heraldic lion, in white and not blue, standing over the C.F.C. initials. This lasted for the next 19 years, with some modifications such as the use of different colours, including red from 1987 to 1995, and yellow from 1995 until 1999, before the white returned. With the new ownership of Roman Abramovich, and the club's centenary approaching, combined with demands from fans for the popular 1950s badge to be restored, it was decided that the crest should be changed again in 2005. The new crest was officially adopted for the start of the 2005–06 season and marked a return to the older design, used from 1953 to 1986, featuring a blue heraldic lion holding a staff. For the centenary season this was accompanied by the words '100 YEARS' and 'CENTENARY 2005–2006' on the top and bottom of the crest respectively. Chelsea have always worn blue shirts, although they originally used the paler eton blue, which was taken from the racing colours of then club president, Earl Cadogan, and was worn with white shorts and dark blue or black socks. The light blue shirts were replaced by a royal blue version in around 1912. In the 1960s Chelsea manager Tommy Docherty changed the kit again, switching to blue shorts (which have remained ever since) and white socks, believing it made the club's colours more modern and distinctive, since no other major side used that combination; this kit was first worn during the 1964–65 season. Since then Chelsea have always worn white socks with their home kit apart from a short spell from 1985 to 1992, when blue socks were reintroduced. Chelsea's away colours are usually all yellow or all white with blue trim. More recently, the club have had a number of black or dark blue away kits which alternate every year. As with most teams, they have also had some more unusual ones. At Docherty's behest, in the 1966 FA Cup semi-final they wore blue and black stripes, based on Inter Milan's kit. In the mid-1970s, the away strip was a red, white and green kit inspired by the Hungarian national side of the 1950s. Other memorable away kits include an all jade strip worn from 1986–89, red and white diamonds from 1990–92, graphite and tangerine from 1994–96, and luminous yellow from 2007–08. 
The graphite and tangerine strip often appears in lists of the worst football kits ever. Chelsea are among the most widely supported football clubs in the world. They have the sixth highest average all-time attendance in English football and regularly attract over 40,000 fans to Stamford Bridge; they were the seventh best-supported Premier League team in the 2013–14 season, with an average gate of 41,572. Chelsea's traditional fanbase comes from all over the Greater London area including working-class parts such as Hammersmith and Battersea, wealthier areas like Chelsea and Kensington, and from the home counties. There are also numerous official supporters clubs in the United Kingdom and all over the world. Between 2007 and 2012, Chelsea were ranked fourth worldwide in annual replica kit sales, with an average of 910,000. Chelsea's official Twitter account has 9.8 million followers as of September 2017. At matches, Chelsea fans sing chants such as "Carefree" (to the tune of "Lord of the Dance", whose lyrics were probably written by supporter Mick Greenaway), "Ten Men Went to Mow", "We All Follow the Chelsea" (to the tune of "Land of Hope and Glory"), "Zigga Zagga", and the celebratory "Celery", with the latter often resulting in fans ritually throwing celery. The vegetable was banned inside Stamford Bridge after an incident involving Arsenal midfielder Cesc Fàbregas at the 2007 League Cup Final. During the 1970s and 1980s in particular, Chelsea supporters were associated with football hooliganism. The club's "football firm", originally known as the Chelsea Shed Boys, and subsequently as the Chelsea Headhunters, were nationally notorious for football violence, alongside hooligan firms from other clubs such as West Ham United's Inter City Firm and Millwall's Bushwackers, before, during and after matches. The increase of hooligan incidents in the 1980s led chairman Ken Bates to propose erecting an electric fence to deter them from invading the pitch, a proposal that the Greater London Council rejected. Since the 1990s, there has been a marked decline in crowd trouble at matches, as a result of stricter policing, CCTV in grounds and the advent of all-seater stadia. In 2007, the club launched the 'Back to the Shed' campaign to improve the atmosphere at home matches, with notable success. According to Home Office statistics, 126 Chelsea fans were arrested for football-related offences during the 2009–10 season, the third highest in the division, and 27 banning orders were issued, the fifth-highest in the division. Chelsea have long-standing rivalries with North London clubs Arsenal and Tottenham Hotspur. A strong rivalry with Leeds United dates back to several heated and controversial matches in the 1960s and 1970s, particularly the 1970 FA Cup Final. More recently a rivalry with Liverpool has grown following repeated clashes in cup competitions. Chelsea's fellow West London sides Brentford, Fulham and Queens Park Rangers are generally not considered major rivals, as matches have only taken place intermittently due to the clubs often being in separate divisions. A 2004 survey by Planetfootball.com found that Chelsea fans consider their main rivalries to be with (in descending order): Arsenal, Tottenham Hotspur and Manchester United. In the same survey, fans of Arsenal, Fulham, Leeds United, QPR, Tottenham, and West Ham United named Chelsea as one of their three main rivals. 
In a 2008 poll conducted by the Football Fans Census, Chelsea fans named Liverpool, Arsenal and Manchester United as their most disliked clubs. In the same survey, "Chelsea" was the top answer to the question "Which other English club do you dislike the most?" A 2012 survey, conducted among 1200 supporters of the top four league divisions across the country, found that many clubs’ main rivals had changed since 2003 and reported that Chelsea fans consider Tottenham to be their main rival, above Arsenal and Manchester United. Chelsea's highest appearance-maker is ex-captain Ron Harris, who played in 795 competitive games for the club between 1961 and 1980. The record for a Chelsea goalkeeper is held by Harris's contemporary, Peter Bonetti, who made 729 appearances (1959–79). With 103 caps (101 while at the club), Frank Lampard of England is Chelsea's most capped international player. Frank Lampard is Chelsea's all-time top goalscorer, with 211 goals in 648 games (2001–2014); he passed Bobby Tambling's longstanding record of 202 in May 2013. Seven other players have also scored over 100 goals for Chelsea: George Hilsdon (1906–12), George Mills (1929–39), Roy Bentley (1948–56), Jimmy Greaves (1957–61), Peter Osgood (1964–74 and 1978–79), Kerry Dixon (1983–92) and Didier Drogba (2004–12 and 2014–2015). Greaves holds the record for the most goals scored in one season (43 in 1960–61). Chelsea's biggest winning scoreline in a competitive match is 13–0, achieved against Jeunesse Hautcharage in the Cup Winners' Cup in 1971. The club's biggest top-flight win was an 8–0 victory against Wigan Athletic in 2010, which was matched in 2012 against Aston Villa. Chelsea's biggest loss was an 8–1 reverse against Wolverhampton Wanderers in 1953. Officially, Chelsea's highest home attendance is 82,905 for a First Division match against Arsenal on 12 October 1935. However, an estimated crowd of over 100,000 attended a friendly match against Soviet team Dynamo Moscow on 13 November 1945. The modernisation of Stamford Bridge during the 1990s and the introduction of all-seater stands mean that neither record will be broken for the foreseeable future. The current legal capacity of Stamford Bridge is 41,663. Every starting player in Chelsea's 57 games of the 2013–14 season was a full international – a new club record. Chelsea hold the English record for the fewest goals conceded during a league season (15), the highest number of clean sheets overall in a Premier League season (25) (both set during the 2004–05 season), and the most consecutive clean sheets from the start of a league season (6, set during the 2005–06 season). The club's 21–0 aggregate victory over Jeunesse Hautcharage in the UEFA Cup Winners' Cup in 1971 remains a record in European competition. Chelsea hold the record for the longest streak of unbeaten matches at home in the English top flight, which lasted 86 matches from 20 March 2004 to 26 October 2008. They secured the record on 12 August 2007, beating the previous record of 63 matches unbeaten set by Liverpool between 1978 and 1980. Chelsea's streak of eleven consecutive away league wins, set between 5 April 2008 and 6 December 2008, is also a record for the English top flight. Their £50 million purchase of Fernando Torres from Liverpool in January 2011 was the record transfer fee paid by a British club until Ángel Di María signed for Manchester United in August 2014 for £59.7 million. 
Chelsea, along with Arsenal, were the first clubs to play with shirt numbers, on 25 August 1928 in their match against Swansea Town. They were the first English side to travel by aeroplane to a domestic away match, when they visited Newcastle United on 19 April 1957, and the first First Division side to play a match on a Sunday, when they faced Stoke City on 27 January 1974. On 26 December 1999, Chelsea became the first British side to field an entirely foreign starting line-up (no British or Irish players) in a Premier League match against Southampton. In May 2007, Chelsea were the first team to win the FA Cup at the new Wembley Stadium, having also been the last to win it at the old Wembley. They were the first English club to be ranked No. 1 under UEFA's five-year coefficient system in the 21st century. They were the first team in Premier League history, and the first team in the English top flight since their great rivals Tottenham Hotspur in 1962–63, to score at least 100 goals in a single season, reaching the milestone on the final day of the 2009–10 season. Chelsea are the only London club to win the UEFA Champions League, after beating Bayern Munich in the 2012 final. Upon winning the 2012–13 UEFA Europa League, Chelsea became the first English club to win all four European trophies and the only club to hold the Champions League and the Europa League at the same time. Chelsea Football Club were founded by Gus Mears in 1905. After his death in 1912, his descendants continued to own the club until 1982, when Ken Bates bought the club from Mears' great-nephew Brian Mears for £1. Bates bought a controlling stake in the club and floated Chelsea on the AIM stock exchange in March 1996. In July 2003, Roman Abramovich purchased just over 50% of Chelsea Village plc's share capital, including Bates' 29.5% stake, for £30 million and over the following weeks bought out most of the remaining 12,000 shareholders at 35 pence per share, completing a £140 million takeover. Other shareholders at the time of the takeover included the Matthew Harding estate (21%), BSkyB (9.9%) and various anonymous offshore trusts. After passing the 90% share threshold, Abramovich took the club back into private hands, delisting it from the AIM on 22 August 2003. He also took on responsibility for the club's debt of £80 million, quickly paying most of it. Thereafter, Abramovich changed the ownership name to Chelsea FC plc, whose ultimate parent company is Fordstam Limited, which is controlled by him. Chelsea are additionally funded by Abramovich via interest-free soft loans channelled through his holding company Fordstam Limited. The loans stood at £709 million in December 2009, when they were all converted to equity by Abramovich, leaving the club themselves debt-free, although the debt remains with Fordstam. Since 2008, the club have had no external debt. Chelsea did not turn a profit in the first nine years of Abramovich's ownership, and made record losses of £140m in June 2005. In November 2012, Chelsea announced a profit of £1.4 million for the year ending 30 June 2012, the first time the club had made a profit under Abramovich's ownership. This was followed by a loss in 2013 and then their highest ever profit of £18.4 million for the year to June 2014. 
Chelsea have been described as a global brand; a 2012 report by Brand Finance ranked Chelsea fifth among football brands and valued the club's brand value at US$398 million – an increase of 27% from the previous year, also valuing them at US$10 million more than the sixth best brand, London rivals Arsenal – and gave the brand a strength rating of AA (very strong). In 2016, "Forbes" magazine ranked Chelsea the seventh most valuable football club in the world, at £1.15 billion ($1.66 billion). As of 2016, Chelsea are ranked eighth in the Deloitte Football Money League with an annual commercial revenue of £322.59 million. Chelsea's kit has been manufactured by Nike since July 2017. Previously, the kit was manufactured by Adidas, which was originally contracted to supply the club's kit from 2006 to 2018. The partnership was extended in October 2010 in a deal worth £160 million over eight years. This deal was again extended in June 2013 in a deal worth £300 million over another ten years. In May 2016, Adidas announced that by mutual agreement, the kit sponsorship would end six years early on 30 June 2017. Chelsea had to pay £40m in compensation to Adidas. In October 2016, Nike was announced as the new kit sponsor, in a deal worth £900m over 15 years, until 2032. Previously, the kit was manufactured by Umbro (1975–81), Le Coq Sportif (1981–86), The Chelsea Collection (1986–87), Umbro (1987–2006), and Adidas (2006–2017). Chelsea's first shirt sponsor was Gulf Air, agreed during the 1983–84 season. The club were then sponsored by Grange Farms, Bai Lin Tea and Simod before a long-term deal was signed with Commodore International in 1989; Amiga, an offshoot of Commodore, also appeared on the shirts. Chelsea were subsequently sponsored by Coors beer (1994–97), Autoglass (1997–2001), Emirates (2001–05), Samsung Mobile (2005–08) and Samsung (2008–15). Chelsea's current shirt sponsor is the Yokohama Rubber Company. Worth £40 million per year, the deal is second in English football to Chevrolet's £50 million-per-year sponsorship of Manchester United. After sleeve sponsorship was first permitted in the English top flight, Chelsea had Alliance Tyres as their first sleeve sponsor in the 2017–18 season. For the 2018–19 season, they have Hyundai Motor Company as the new sleeve sponsor. The club has a variety of other sponsors, which include Carabao, Delta Air Lines, Beats by Dre, Singha, EA Sports, Rexona, Hublot, Ericsson, William Hill, Levy Restaurants, Wipro, Grand Royal Whisky, Bangkok Bank, Guangzhou R&F, Mobinil, IndusInd Bank, and Ole777. In 1930, Chelsea featured in one of the earliest football films, "The Great Game". One-time Chelsea centre forward Jack Cock, who by then was playing for Millwall, was the star of the film, and several scenes were shot at Stamford Bridge, including on the pitch, in the boardroom and in the dressing rooms. It included guest appearances by then-Chelsea players Andrew Wilson, George Mills, and Sam Millington. Owing to the notoriety of the Chelsea Headhunters, a football firm associated with the club, Chelsea have also featured in films about football hooliganism, including 2004's "The Football Factory". Chelsea also appear in the Hindi film "Jhoom Barabar Jhoom". In April 2011, Montenegrin comedy series "Nijesmo mi od juče" made an episode in which Chelsea play against FK Sutjeska Nikšić in a UEFA Champions League qualifier. 
Up until the 1950s, the club had a long-running association with the music halls; their underachievement often provided material for comedians such as George Robey. It culminated in comedian Norman Long's release of a comic song in 1933, ironically titled "On the Day That Chelsea Went and Won the Cup", the lyrics of which describe a series of bizarre and improbable occurrences on the hypothetical day when Chelsea finally won a trophy. In Alfred Hitchcock's 1935 film "The 39 Steps", Mr Memory claims that Chelsea last won the Cup in 63 BC, "in the presence of the Emperor Nero." Scenes in a 1980 episode of "Minder" were filmed during a real match at Stamford Bridge between Chelsea and Preston North End with Terry McCann (played by Dennis Waterman) standing on the terraces. The song "Blue is the Colour" was released as a single in the build-up to the 1972 League Cup Final, with all members of Chelsea's first team squad singing; it reached number five in the UK Singles Chart. The song has since been adopted as an anthem by a number of other sports teams around the world, including the Vancouver Whitecaps (as "White is the Colour") and the Saskatchewan Roughriders (as "Green is the Colour"). In the build-up to the 1997 FA Cup Final, the song "Blue Day", performed by Suggs and members of the Chelsea squad, reached number 22 in the UK charts. Bryan Adams, a fan of Chelsea, dedicated the song "We're Gonna Win" from the album "18 Til I Die" to the club. Chelsea also operate a women's football team, Chelsea Football Club Women, formerly known as Chelsea Ladies. They have been affiliated to the men's team since 2004 and are part of the club's Community Development programme. They play their home games at Kingsmeadow, the home ground of the EFL League One club AFC Wimbledon. The club were promoted to the Premier Division for the first time in 2005 as Southern Division champions and won the Surrey County Cup nine times between 2003 and 2013. In 2010, Chelsea Ladies were one of the eight founder members of the FA Women's Super League. In 2015, Chelsea Ladies won the FA Women's Cup for the first time, beating Notts County Ladies at Wembley Stadium, and a month later clinched their first FA WSL title to complete a league and cup double. John Terry, former captain of the Chelsea men's team, is the President of Chelsea Women. Chelsea FC plc is the company which owns Chelsea Football Club. The ultimate parent company of Chelsea FC plc is Fordstam Limited and the ultimate controlling party of Fordstam Limited is Roman Abramovich. On 22 October 2014, Chelsea announced that Ron Gourlay, after ten successful years at the club, including five as Chief Executive, was leaving Chelsea to pursue new business opportunities. On 27 October 2014, Chelsea announced that Christian Purslow was joining the club to run global commercial activities, and that they did not expect to announce any other senior appointments in the near future, with chairman Bruce Buck and director Marina Granovskaia assuming executive responsibilities. Guy Laurence was appointed as the club's Chief Executive on 11 January 2018, filling the vacancy following the departure of Michael Emenalo. 
Upon winning the 2012–13 UEFA Europa League, Chelsea became the fourth club in history to have won the "European Treble" of European Cup/UEFA Champions League, European Cup Winners' Cup/UEFA Cup Winners' Cup, and UEFA Cup/UEFA Europa League after Juventus, Ajax and Bayern Munich. Chelsea are the first English club to have won all three major UEFA trophies. Carousel (musical) Carousel is the second musical by the team of Richard Rodgers (music) and Oscar Hammerstein II (book and lyrics). The 1945 work was adapted from Ferenc Molnár's 1909 play "Liliom", transplanting its Budapest setting to the Maine coastline. The story revolves around carousel barker Billy Bigelow, whose romance with millworker Julie Jordan comes at the price of both their jobs. He participates in a robbery to provide for Julie and their unborn child; after it goes tragically wrong, he is given a chance to make things right. A secondary plot line deals with millworker Carrie Pipperidge and her romance with ambitious fisherman Enoch Snow. The show includes the well-known songs "If I Loved You", "June Is Bustin' Out All Over" and "You'll Never Walk Alone". Richard Rodgers later wrote that "Carousel" was his favorite of all his musicals. Following the spectacular success of the first Rodgers and Hammerstein musical, "Oklahoma!" (1943), the pair sought to collaborate on another piece, knowing that any resulting work would be compared with "Oklahoma!", most likely unfavorably. They were initially reluctant to seek the rights to "Liliom"; Molnár had refused permission for the work to be adapted in the past, and the original ending was considered too depressing for the musical theatre. After acquiring the rights, the team created a work with lengthy sequences of music and made the ending more hopeful. The musical required considerable modification during out-of-town tryouts, but once it opened on Broadway on April 19, 1945, it was an immediate hit with both critics and audiences. "Carousel" initially ran for 890 performances and duplicated its success in the West End in 1950. Though it has never achieved as much commercial success as "Oklahoma!", the piece has been repeatedly revived, recorded several times and was filmed in 1956. A production by Nicholas Hytner enjoyed success in 1992 in London, in 1994 in New York and on tour. Another Broadway revival opened in 2018. In 1999, "Time" magazine named "Carousel" the best musical of the 20th century. Ferenc Molnár's Hungarian-language drama, "Liliom", premiered in Budapest in 1909. The audience was puzzled by the work, and it lasted only thirty-odd performances before being withdrawn, the first shadow on Molnár's successful career as a playwright. "Liliom" was not presented again until after World War I. When it reappeared on the Budapest stage, it was a tremendous hit. Except for the ending, the plots of "Liliom" and "Carousel" are very similar. Andreas Zavocky (nicknamed Liliom, the Hungarian word for "lily", a slang term for "tough guy"), a carnival barker, falls in love with Julie Zeller, a servant girl, and they begin living together. With both discharged from their jobs, Liliom is discontented and contemplates leaving Julie, but decides not to do so on learning that she is pregnant. A subplot involves Julie's friend Marie, who has fallen in love with Wolf Biefeld, a hotel porter—after the two marry, he becomes the owner of the hotel. 
Desperate to make money so that he, Julie and their child can escape to America and a better life, Liliom conspires with lowlife Ficsur to commit a robbery, but it goes badly, and Liliom stabs himself. He dies, and his spirit is taken to heaven's police court. As Ficsur suggested while the two waited to commit the crime, would-be robbers like them do not come before God Himself. Liliom is told by the magistrate that he may go back to Earth for one day to attempt to redeem the wrongs he has done to his family, but must first spend sixteen years in a fiery purgatory. On his return to Earth, Liliom encounters his daughter, Louise, who like her mother is now a factory worker. Saying that he knew her father, he tries to give her a star he stole from the heavens. When Louise refuses to take it, he strikes her. Not realizing who he is, Julie confronts him, but finds herself unable to be angry with him. Liliom is ushered off to his fate, presumably Hell, and Louise asks her mother if it is possible to feel a hard slap as if it was a kiss. Julie reminiscently tells her daughter that it is very possible for that to happen. An English translation of "Liliom" was credited to Benjamin "Barney" Glazer, though there is a story that the actual translator, uncredited, was Rodgers' first major partner Lorenz Hart. The Theatre Guild presented it in New York City in 1921, with Joseph Schildkraut as Liliom, and the play was a success, running 300 performances. A 1940 revival, with Burgess Meredith and Ingrid Bergman, was seen by both Hammerstein and Rodgers. Glazer, in introducing the English translation of "Liliom", wrote of the play's appeal: And where in modern dramatic literature can such pearls be matched—Julie incoherently confessing to her dead lover the love she had always been ashamed to tell; Liliom crying out to the distant carousel the glad news that he is to be a father; the two thieves gambling for the spoils of their prospective robbery; Marie and Wolf posing for their portrait while the broken-hearted Julie stands looking after the vanishing Liliom, the thieves' song ringing in her ears; the two policemen grousing about pay and pensions while Liliom lies bleeding to death; Liliom furtively proffering his daughter the star he has stolen for her in heaven. ... The temptation to count the whole scintillating string is difficult to resist. In the 1920s and 1930s, Rodgers and Hammerstein both became well known for creating Broadway hits with other partners. Rodgers, with Lorenz Hart, had produced a string of over two dozen musicals, including such popular successes as "Babes in Arms" (1937), "The Boys from Syracuse" (1938) and "Pal Joey" (1940). Some of Rodgers' work with Hart broke new ground in musical theatre: "On Your Toes" was the first use of ballet to sustain the plot (in the "Slaughter on Tenth Avenue" scene), while "Pal Joey" flouted Broadway tradition by presenting a knave as its hero. Hammerstein had written or co-written the words for such hits as "Rose-Marie" (1924), "The Desert Song" (1926), "The New Moon" (1927) and "Show Boat" (1927). Though less productive in the 1930s, he wrote material for musicals and films, sharing an Oscar with Jerome Kern for their song "The Last Time I Saw Paris", which was included in the 1941 film "Lady Be Good". By the early 1940s, Hart had sunk into alcoholism and emotional turmoil, becoming unreliable and prompting Rodgers to approach Hammerstein to ask if he would consider working with him. 
Hammerstein was eager to do so, and their first collaboration was "Oklahoma!" (1943). Thomas Hischak states, in his "The Rodgers and Hammerstein Encyclopedia", that "Oklahoma!" is "the single most influential work in the American musical theatre. In fact, the history of the Broadway musical can accurately be divided into what came before "Oklahoma!" and what came after it." An innovation for its time in integrating song, character, plot and dance, "Oklahoma!" would serve, according to Hischak, as "the model for Broadway shows for decades", and proved a huge popular and financial success. Once it was well-launched, what to do as an encore was a daunting challenge for the pair. Film producer Samuel Goldwyn saw "Oklahoma!" and advised Rodgers to shoot himself, which according to Rodgers "was Sam's blunt but funny way of telling me that I'd never create another show as good as "Oklahoma!"" As they considered new projects, Hammerstein wrote, "We're such fools. No matter what we do, everyone is bound to say, 'This is not another "Oklahoma!"' " "Oklahoma!" had been a struggle to finance and produce. Hammerstein and Rodgers met weekly in 1943 with Theresa Helburn and Lawrence Langner of the Theatre Guild, producers of the blockbuster musical, who together formed what they termed "the Gloat Club". At one such luncheon, Helburn and Langner proposed to Rodgers and Hammerstein that they turn Molnár's "Liliom" into a musical. Both men refused—they had no feeling for the Budapest setting and thought that the unhappy ending was unsuitable for musical theatre. In addition, given the unstable wartime political situation, they might need to change the setting from Hungary while in rehearsal. At the next luncheon, Helburn and Langner again proposed "Liliom", suggesting that they move the setting to Louisiana and make Liliom a Creole. Rodgers and Hammerstein played with the idea over the next few weeks, but decided that Creole dialect, filled with "zis" and "zose", would sound corny and would make it difficult to write effective lyrics. A breakthrough came when Rodgers, who owned a house in Connecticut, proposed a New England setting. Hammerstein wrote of this suggestion in 1945, I began to see an attractive ensemble—sailors, whalers, girls who worked in the mills up the river, clambakes on near-by islands, an amusement park on the seaboard, things people could do in crowds, people who were strong and alive and lusty, people who had always been depicted on the stage as thin-lipped puritans—a libel I was anxious to refute ... as for the two leading characters, Julie with her courage and inner strength and outward simplicity seemed more indigenous to Maine than to Budapest. Liliom is, of course, an international character, indigenous to nowhere. Rodgers and Hammerstein were also concerned about what they termed "the tunnel" of Molnár's second act—a series of gloomy scenes leading up to Liliom's suicide—followed by a dark ending. They also felt it would be difficult to set Liliom's motivation for the robbery to music. Molnár's opposition to having his works adapted was also an issue; he had famously turned down Giacomo Puccini when the great composer wished to transform "Liliom" into an opera, stating that he wanted the piece to be remembered as his, not Puccini's. In 1937, Molnár, who had recently emigrated to the United States, had declined another offer from Kurt Weill to adapt the play into a musical. 
The pair continued to work on the preliminary ideas for a "Liliom" adaptation while pursuing other projects in late 1943 and early 1944—writing the film musical "State Fair" and producing "I Remember Mama" on Broadway. Meanwhile, the Theatre Guild took Molnár to see "Oklahoma!" Molnár stated that if Rodgers and Hammerstein could adapt "Liliom" as beautifully as they had modified "Green Grow the Lilacs" into "Oklahoma!", he would be pleased to have them do it. The Guild obtained the rights from Molnár in October 1943. The playwright received one percent of the gross and $2,500 for "personal services". The duo insisted, as part of the contract, that Molnár permit them to make changes in the plot. At first, the playwright refused, but eventually yielded. Hammerstein later stated that if this point had not been won, "we could never have made "Carousel"." In seeking to establish through song Liliom's motivation for the robbery, Rodgers remembered that he and Hart had a similar problem in "Pal Joey". Rodgers and Hart had overcome the problem with a song that Joey sings to himself, "I'm Talking to My Pal". This inspired "Soliloquy". Both partners later told a story that "Soliloquy" was only intended to be a song about Liliom's dreams of a son, but that Rodgers, who had two daughters, insisted that Liliom consider that Julie might have a girl. However, the notes taken at their meeting of December 7, 1943 state: "Mr. Rodgers suggested a fine musical number for the end of the scene where Liliom discovers he is to be a father, in which he sings first with pride of the growth of a boy, and then suddenly realizes it might be a girl and changes completely." Hammerstein and Rodgers returned to the "Liliom" project in mid-1944. Hammerstein was uneasy as he worked, fearing that no matter what they did, Molnár would disapprove of the results. "Green Grow the Lilacs" had been a little-known work; "Liliom" was a theatrical standard. Molnár's text also contained considerable commentary on the Hungarian politics of 1909 and the rigidity of that society. A dismissed carnival barker who hits his wife, attempts a robbery and commits suicide seemed an unlikely central character for a musical comedy. Hammerstein decided to use the words and story to make the audience sympathize with the lovers. He also built up the secondary couple, who are incidental to the plot in "Liliom"; they became Enoch Snow and Carrie Pipperidge. "This Was a Real Nice Clambake" was repurposed from a song, "A Real Nice Hayride", written for "Oklahoma!" but not used. Molnár's ending was unsuitable, and after a couple of false starts, Hammerstein conceived the graduation scene that ends the musical. According to Frederick Nolan in his book on the team's works: "From that scene the song "You'll Never Walk Alone" sprang almost naturally." In spite of Hammerstein's simple lyrics for "You'll Never Walk Alone", Rodgers had great difficulty in setting it to music. Rodgers explained his rationale for the changed ending, "Liliom" was a tragedy about a man who cannot learn to live with other people. The way Molnár wrote it, the man ends up hitting his daughter and then having to go back to purgatory, leaving his daughter helpless and hopeless. We couldn't accept that. The way we ended "Carousel" it may still be a tragedy but it's a hopeful one because in the final scene it is clear that the child has at last learned how to express herself and communicate with others. 
When the pair decided to make "This Was a Real Nice Clambake" into an ensemble number, Hammerstein realized he had no idea what a clambake was like, and researched the matter. Based on his initial findings, he wrote the line, "First came codfish chowder". However, further research convinced him the proper term was "codhead chowder", a term unfamiliar to many playgoers. He decided to keep it as "codfish". When the song proceeded to discuss the lobsters consumed at the feast, Hammerstein wrote the line "We slit 'em down the back/And peppered 'em good". He was grieved to hear from a friend that lobsters are always slit down the front. The lyricist sent a researcher to a seafood restaurant and heard back that lobsters are always slit down the back. Hammerstein concluded that there is disagreement about which side of a lobster is the back. One error not caught involved the song "June Is Bustin' Out All Over", in which sheep are depicted as seeking to mate in late spring—they actually do so in the winter. Whenever this was brought to Hammerstein's attention, he told his informant that 1873 was a special year, in which sheep mated in the spring. Rodgers decided early on to dispense with an overture, feeling that the music was hard to hear over the banging of seats as latecomers settled themselves. In his autobiography, Rodgers complained that only the brass section can be heard during an overture because there are never enough strings in a musical's small orchestra. He determined to force the audience to concentrate from the beginning by opening with a pantomime scene accompanied by what became known as "The Carousel Waltz". The pantomime paralleled one in the Molnár play, which was also used to introduce the characters and situation to the audience. Author Ethan Mordden described the effectiveness of this opening: Other characters catch our notice—Mr. Bascombe, the pompous mill owner, Mrs. Mullin, the widow who runs the carousel and, apparently, Billy; a dancing bear; an acrobat. But what draws us in is the intensity with which Julie regards Billy—the way she stands frozen, staring at him, while everyone else at the fair is swaying to the rhythm of Billy's spiel. And as Julie and Billy ride together on the swirling carousel, and the stage picture surges with the excitement of the crowd, and the orchestra storms to a climax, and the curtain falls, we realize that R & H have not only skipped the overture "and" the opening number but the exposition as well. They have plunged into the story, right into the middle of it, in the most intense first scene any musical ever had. The casting for "Carousel" began when "Oklahoma!"'s production team, including Rodgers and Hammerstein, was seeking a replacement for the part of Curly (the male lead in "Oklahoma!"). Lawrence Langner had heard, through a relative, of a California singer named John Raitt, who might be suitable for the part. Langner went to hear Raitt, then urged the others to bring him to New York for an audition. Raitt asked to sing "Largo al factotum", Figaro's aria from "The Barber of Seville", to warm up. The warmup was sufficient to convince the producers that not only had they found a Curly, they had found a Liliom (or Billy Bigelow, as the part was renamed). Theresa Helburn made another California discovery, Jan Clayton, a singer/actress who had made a few minor films for MGM. She was brought east and successfully auditioned for the part of Julie. The producers sought to cast unknowns. 
Though many had played in previous Hammerstein or Rodgers works, only one, Jean Casto (cast as carousel owner Mrs. Mullin, and a veteran of "Pal Joey"), had ever played on Broadway before. It proved harder to cast the ensemble than the leads, due to the war—Rodgers told his casting director, John Fearnley, that the sole qualification for a dancing boy was that he be alive. Rodgers and Hammerstein reassembled much of the creative team that had made "Oklahoma!" a success, including director Rouben Mamoulian and choreographer Agnes de Mille. Miles White was the costume designer while Jo Mielziner (who had not worked on "Oklahoma!") was the scenic and lighting designer. Even though "Oklahoma!" orchestrator Russell Bennett had informed Rodgers that he was unavailable to work on "Carousel" due to a radio contract, Rodgers insisted he do the work in his spare time. He orchestrated "The Carousel Waltz" and "(When I Marry) Mister Snow" before finally being replaced by Don Walker. A new member of the creative team was Trude Rittmann, who arranged the dance music. Rittmann initially felt that Rodgers mistrusted her because she was a woman, and found him difficult to work with, but the two worked together on Rodgers' shows until the 1970s. Rehearsals began in January 1945; either Rodgers or Hammerstein was always present. Raitt was presented with the lyrics for "Soliloquy" on a five-foot long sheet of paper—the piece ran nearly eight minutes. Staging such a long solo number presented problems, and Raitt later stated that he felt that they were never fully addressed. At some point during rehearsals, Molnár came to see what they had done to his play. There are a number of variations on the story. As Rodgers told it, while watching rehearsals with Hammerstein, the composer spotted Molnár in the rear of the theatre and whispered the news to his partner. Both sweated through an afternoon of rehearsal in which nothing seemed to go right. At the end, the two walked to the back of the theatre, expecting an angry reaction from Molnár. Instead, the playwright said enthusiastically, "What you have done is so beautiful. And you know what I like best? The ending!" Hammerstein wrote that Molnár became a regular attendee at rehearsals after that. Like most of the pair's works, "Carousel" contains a lengthy ballet, "Billy Makes a Journey", in the second act, as Billy looks down to the Earth from "Up There" and observes his daughter. In the original production the ballet was choreographed by de Mille. It began with Billy looking down from heaven at his wife in labor, with the village women gathered for a "birthing". The ballet involved every character in the play, some of whom spoke lines of dialogue, and contained a number of subplots. The focus was on Louise, played by Bambi Linn, who at first almost soars in her dance, expressing the innocence of childhood. She is teased and mocked by her schoolmates, and Louise becomes attracted to the rough carnival people, who symbolize Billy's world. A youth from the carnival attempts to seduce Louise, as she discovers her own sexuality, but he decides she is more girl than woman, and he leaves her. After Julie comforts her, Louise goes to a children's party, where she is shunned. The carnival people reappear and form a ring around the children's party, with Louise lost between the two groups. At the end, the performers form a huge carousel with their bodies. The play opened for tryouts in New Haven, Connecticut on March 22, 1945. 
The first act was well-received; the second act was not. Casto recalled that the second act finished about 1:30 a.m. The staff immediately sat down for a two-hour conference. Five scenes, half the ballet, and two songs were cut from the show as the result. John Fearnley commented, "Now I see why these people have hits. I never witnessed anything so brisk and brave in my life." De Mille said of this conference, "not three minutes had been wasted pleading for something cherished. Nor was there any idle joking. ... We cut and cut and cut and then we went to bed." By the time the company left New Haven, de Mille's ballet was down to forty minutes. A major concern with the second act was the effectiveness of the characters He and She (later called by Rodgers "Mr. and Mrs. God"), before whom Billy appeared after his death. Mr. and Mrs. God were depicted as a New England minister and his wife, seen in their parlor. The couple was still part of the show at the Boston opening. Rodgers said to Hammerstein, "We've got to get God out of that parlor". When Hammerstein inquired where he should put the deity, Rodgers replied, "I don't care where you put Him. Put Him on a ladder for all I care, only get Him out of that parlor!" Hammerstein duly put Mr. God (renamed the Starkeeper) atop a ladder, and Mrs. God was removed from the show. Rodgers biographer Meryle Secrest terms this change a mistake, leading to a more fantastic afterlife, which was later criticized by "The New Republic" as "a Rotarian atmosphere congenial to audiences who seek not reality but escape from reality, not truth but escape from truth". Hammerstein wrote that Molnár's advice, to combine two scenes into one, was key to pulling together the second act and represented "a more radical departure from the original than any change we had made". A reprise of "If I Loved You" was added in the second act, which Rodgers felt needed more music. Three weeks of tryouts in Boston followed the brief New Haven run, and the audience there gave the musical a warm reception. An even shorter version of the ballet was presented the final two weeks in Boston, but on the final night there, de Mille expanded it back to forty minutes, and it brought the house down, causing both Rodgers and Hammerstein to embrace her. Two young female millworkers in 1873 Maine visit the town's carousel after work. One of them, Julie Jordan, attracts the attention of the barker, Billy Bigelow ("The Carousel Waltz"). When Julie lets Billy put his arm around her during the ride, Mrs. Mullin, the widowed owner of the carousel, tells Julie never to return. Julie and her friend, Carrie Pipperidge, argue with Mrs. Mullin. Billy arrives and, seeing that Mrs. Mullin is jealous, mocks her; he is fired from his job. Billy, unconcerned, invites Julie to join him for a drink. As he goes to get his belongings, Carrie presses Julie about her feelings toward him, but Julie is evasive ("You're a Queer One, Julie Jordan"). Carrie has a beau too, fisherman Enoch Snow ("(When I Marry) Mister Snow"), to whom she is newly engaged. Billy returns for Julie as the departing Carrie warns that staying out late means the loss of Julie's job. Mr. Bascombe, owner of the mill, happens by along with a policeman, and offers to escort Julie to her home, but she refuses and is fired. Left alone, she and Billy talk about what life might be like if they were in love, but neither quite confesses to the growing attraction they feel for each other ("If I Loved You"). 
Over a month passes, and preparations for the summer clambake are under way ("June Is Bustin' Out All Over"). Julie and Billy, now married, live at Julie's cousin Nettie's spa. Julie confides in Carrie that Billy, frustrated over being unemployed, hit her. Carrie has happier news—she is engaged to Enoch, who enters as she discusses him ("(When I Marry) Mister Snow (Reprise)"). Billy arrives with his ne'er-do-well whaler friend, Jigger. The former barker is openly rude to Enoch and Julie, then leaves with Jigger, followed by a distraught Julie. Enoch tells Carrie that he expects to become rich selling herring and to have a large family, larger perhaps than Carrie is comfortable having ("When the Children Are Asleep"). Jigger and his shipmates, joined by Billy, then sing about life on the sea ("Blow High, Blow Low"). The whaler tries to recruit Billy to help with a robbery, but Billy declines, as the victim—Julie's former boss, Mr. Bascombe—might have to be killed. Mrs. Mullin enters and tries to tempt Billy back to the carousel (and to her). He would have to abandon Julie; a married barker cannot evoke the same sexual tension as one who is single. Billy reluctantly mulls it over as Julie arrives and the others leave. She tells him that she is pregnant, and Billy is overwhelmed with happiness, ending all thoughts of returning to the carousel. Once alone, Billy imagines the fun he will have with Bill Jr.—until he realizes that his child might be a girl, and reflects soberly that "you've got to be a "father" to a girl" ("Soliloquy"). Determined to provide financially for his future child, whatever the means, Billy decides to be Jigger's accomplice. The whole town leaves for the clambake. Billy, who had earlier refused to go, agrees to join in, to Julie's delight, as he realizes that being seen at the clambake is integral to his and Jigger's alibi ("Act I Finale"). Everyone reminisces about the huge meal and much fun ("This Was a Real Nice Clambake"). Jigger tries to seduce Carrie; Enoch walks in at the wrong moment, and declares that he is finished with her ("Geraniums In the Winder"), as Jigger jeers ("There's Nothin' So Bad for a Woman"). The girls try to comfort Carrie, but for Julie all that matters is that "he's your feller and you love him" ("What's the Use of Wond'rin'?"). Julie sees Billy trying to sneak away with Jigger and, trying to stop him, feels the knife hidden in his shirt. She begs him to give it to her, but he refuses and leaves to commit the robbery. As they wait, Jigger and Billy gamble with cards. They stake their shares of the anticipated robbery spoils. Billy loses: his participation is now pointless. Unknown to Billy and Jigger, Mr. Bascombe, the intended victim, has already deposited the mill's money. The robbery fails: Bascombe pulls a gun on Billy while Jigger escapes. Billy stabs himself with his knife; Julie arrives just in time for him to say his last words to her and die. Julie strokes his hair, finally able to tell him that she loved him. Carrie and Enoch, reunited by the crisis, attempt to console Julie; Nettie arrives and gives Julie the resolve to keep going despite her despair ("You'll Never Walk Alone"). Billy's defiant spirit ("The Highest Judge of All") is taken Up There to see the Starkeeper, a heavenly official. The Starkeeper tells Billy that the good he did in life was not enough to get into heaven, but so long as there is a person alive who remembers him, he can return for a day to try to do good to redeem himself. 
He informs Billy that fifteen years have passed on Earth since the former barker's suicide, and suggests that Billy can get himself into heaven if he helps his daughter, Louise. He helps Billy look down from heaven to see her (instrumental ballet: "Billy Makes a Journey"). Louise has grown up to be lonely and bitter. The local children ostracize her because her father was a thief and a wife-beater. In the dance, a young ruffian, much like her father at that age, flirts with her and abandons her as too young. The dance concludes, and Billy is anxious to return to Earth and help his daughter. He steals a star to take with him, as the Starkeeper pretends not to notice. Outside Julie's cottage, Carrie describes her visit to New York with the now-wealthy Enoch. Carrie's husband and their many children enter to fetch her—the family must get ready for the high school graduation later that day. Enoch Jr., the oldest son, remains behind to talk with Louise, as Billy and the Heavenly Friend escorting him enter, invisible to the other characters. Louise confides in Enoch Jr. that she plans to run away from home with an acting troupe. He says that he will stop her by marrying her, but that his father will think her an unsuitable match. Louise is outraged: each insults the other's father, and Louise orders Enoch Jr. to go away. Billy, able to make himself visible at will, reveals himself to the sobbing Louise, pretending to be a friend of her father. He offers her a gift—the star he stole from heaven. She refuses it and, frustrated, he slaps her hand. He makes himself invisible, and Louise tells Julie what happened, stating that the slap miraculously felt like a kiss, not a blow—and Julie understands her perfectly. Louise retreats to the house, as Julie notices the star that Billy dropped; she picks it up and seems to feel Billy's presence ("If I Loved You (Reprise)"). Billy invisibly attends Louise's graduation, hoping for one last chance to help his daughter and redeem himself. The beloved town physician, Dr. Seldon (who resembles the Starkeeper) advises the graduating class not to rely on their parents' success or be held back by their failure (words directed at Louise). Seldon prompts everyone to sing an old song, "You'll Never Walk Alone". Billy, still invisible, whispers to Louise, telling her to believe Seldon's words, and when she tentatively reaches out to another girl, she learns she does not have to be an outcast. Billy goes to Julie, telling her at last that he loved her. As his widow and daughter join in the singing, Billy is taken to his heavenly reward. The original Broadway production opened at the Majestic Theatre on April 19, 1945. The dress rehearsal the day before had gone badly, and the pair feared the new work would not be well received. One successful last-minute change was to have de Mille choreograph the pantomime. The movement of the carnival crowd in the pantomime had been entrusted to Mamoulian, and his version was not working. Rodgers had injured his back the previous week, and he watched the opening from a stretcher propped in a box behind the curtain. Sedated with morphine, he could see only part of the stage. As he could not hear the audience's applause and laughter, he assumed the show was a failure. It was not until friends congratulated him later that evening that he realized that the curtain had been met by wild applause. 
Bambi Linn, who played Louise, was so enthusiastically received by the audience during her ballet that she was forced to break character, when she next appeared, and bow. Rodgers' daughter Mary caught sight of her friend, Stephen Sondheim, both teenagers then, across several rows; both had eyes wet with tears. The original production ran for 890 performances, closing on May 24, 1947. The original cast included John Raitt (Billy), Jan Clayton (Julie), Jean Darling (Carrie), Eric Mattson (Enoch Snow), Christine Johnson (Nettie Fowler), Murvyn Vye (Jigger), Bambi Linn (Louise) and Russell Collins (Starkeeper). In December 1945, Clayton left to star in the Broadway revival of "Show Boat" and was replaced by Iva Withers; Raitt was replaced by Henry Michel in January 1947; Darling was replaced by Margot Moser. After closing on Broadway, the show went on a national tour for two years. It played for five months in Chicago alone, visited twenty states and two Canadian cities, and played to nearly two million people. The touring company had a four-week run at New York City Center in January 1949. Following the City Center run, the show was moved back to the Majestic Theatre in the hopes of filling the theatre until "South Pacific" opened in early April. However, ticket sales were mediocre, and the show closed almost a month early. The musical premiered in the West End, London, at the Theatre Royal, Drury Lane on June 7, 1950. The production was restaged by Jerome Whyte, with a cast that included Stephen Douglass (Billy), Iva Withers (Julie) and Margot Moser (Carrie). "Carousel" ran in London for 566 performances, remaining there for over a year and a half. "Carousel" was revived in 1954 and 1957 at City Center, presented by the New York City Center Light Opera Company. Both times, the production featured Barbara Cook, though she played Carrie in 1954 and Julie in 1957 (playing alongside Howard Keel as Billy). The production was then taken to Belgium to be performed at the 1958 Brussels World's Fair, with David Atkinson as Billy, Ruth Kobart as Nettie, and Clayton reprising the role of Julie, which she had originated. In August 1965, Rodgers and the Music Theater of Lincoln Center produced "Carousel" for 47 performances. John Raitt reprised the role of Billy, with Jerry Orbach as Jigger and Reid Shelton as Enoch Snow. The roles of the Starkeeper and Dr. Seldon were played by Edward Everett Horton in his final stage appearance. The following year, New York City Center Light Opera Company brought "Carousel" back to City Center for 22 performances, with Bruce Yarnell as Billy and Constance Towers as Julie. Nicholas Hytner directed a new production of "Carousel" in 1992, at London's Royal National Theatre, with choreography by Sir Kenneth MacMillan and designs by Bob Crowley. In this staging, the story begins at the mill, where Julie and Carrie work, with the music slowed down to emphasize the drudgery. After work ends, they move to the shipyards and then to the carnival. As they proceed on a revolving stage, carnival characters appear, and at last the carousel is assembled onstage for the girls to ride. Louise is seduced by the ruffian boy during her Act 2 ballet, set around the ruins of a carousel. 
Michael Hayden played Billy not as a large, gruff man, but as a frustrated smaller one, a time bomb waiting to explode. Hayden, Joanna Riding (Julie) and Janie Dee (Carrie) all won Olivier Awards for their performances. Patricia Routledge played Nettie. Enoch and Carrie were cast as an interracial couple whose eight children, according to the review in "The New York Times", looked like "a walking United Colors of Benetton ad". Clive Rowe, as Enoch, was nominated for an Olivier Award. The production's limited run from December 1992 through March 1993 was a sellout. It re-opened at the Shaftesbury Theatre in London in September 1993, presented by Cameron Mackintosh, where it continued until May 1994. The Hytner production moved to New York's Vivian Beaumont Theater, where it opened on March 24, 1994, and ran for 322 performances. This won five Tony Awards, including best musical revival, as well as awards for Hytner, MacMillan, Crowley and Audra McDonald (as Carrie). The cast also included Sally Murphy as Julie, Shirley Verrett as Nettie, Fisher Stevens as Jigger and Eddie Korbich as Enoch. One change made from the London to the New York production was to have Billy strike Louise across the face, rather than on the hand. According to Hayden, "He does the one unpardonable thing, the thing we can't forgive. It's a challenge for the audience to like him after that." The Hytner "Carousel" was presented in Japan in May 1995. A U.S. national tour with a scaled-down production began in February 1996 in Houston and closed in May 1997 in Providence, Rhode Island. Producers sought to feature young talent on the tour, with Patrick Wilson as Billy and Sarah Uriarte Berry, and later Jennifer Laura Thompson, as Julie. A revival opened at London's Savoy Theatre on December 2, 2008, after a week of previews, starring Jeremiah James (Billy), Alexandra Silber (Julie) and Lesley Garrett (Nettie). The production received warm to mixed reviews. It closed in June 2009, a month early. Michael Coveney, writing in "The Independent", admired Rodgers' music but stated, "Lindsay Posner's efficient revival doesn't hold a candle to the National Theatre 1992 version". A production at Theater Basel, Switzerland, in 2016 to 2017, with German dialogue, was directed by Alexander Charim and choreographed by Teresa Rotemberg. Bryony Dwyer, Christian Miedl and Cheryl Studer starred, respectively, as Julie Jordan, Billy Bigelow and Nettie Fowler. A semi-staged revival by the English National Opera opened at the London Coliseum in 2017. The production was directed by Lonny Price, conducted by David Charles Abell, and starred Alfie Boe as Billy, Katherine Jenkins as Julie and Nicholas Lyndhurst as the Starkeeper. The production received mixed to positive reviews. The third Broadway revival began previews in February 2018 at the Imperial Theatre, with an official opening on April 12. It stars Jessie Mueller, Joshua Henry, Renée Fleming, Lindsay Mendez and Alexander Gemignani. The production is directed by Jack O'Brien and choreographed by Justin Peck. The song "There's Nothin' So Bad for a Woman" was cut from this revival. Ben Brantley wrote in "The New York Times", "The tragic inevitability of "Carousel" has seldom come across as warmly or as chillingly as it does in this vividly reimagined revival. ... [W]ith thoughtful and powerful performances by Mr. Henry and Ms. Mueller, the love story at the show's center has never seemed quite as ill-starred or, at the same time, as sexy. ... [T]he Starkeeper ... 
assumes new visibility throughout, taking on the role of Billy's angelic supervisor." Brantley strongly praised the choreography, all the performances and the designers. He was unconvinced, however, by the "mother-daughter dialogue that falls so abrasively on contemporary ears", where Julie tries to justify loving an abusive man, and other scenes in Act 2, particularly those set in heaven, and the optimism of the final scene. Most of the reviewers agreed that the choreography and performances (especially the singing) are excellent, characterizing the production as sexy and sumptuous, but felt that O'Brien's direction does little to help the show deal with modern sensibilities about men's treatment of women, instead indulging in nostalgia. A film version of the musical was made in 1956, starring Gordon MacRae and Shirley Jones. It follows the musical's story fairly closely, although a prologue, set in the Starkeeper's heaven, was added. The film was released only a few months after the film version of "Oklahoma!" It garnered some good reviews, and the soundtrack recording was a best seller. As the same stars appeared in both pictures, however, the two films were often compared, generally to the disadvantage of "Carousel". Thomas Hischak, in "The Rodgers and Hammerstein Encyclopedia", later wondered "if the smaller number of "Carousel" stage revivals is the product of this often-lumbering [film] musical". There was also an abridged (100-minute) 1967 network television version that starred Robert Goulet, with choreography by Edward Villella. The New York Philharmonic presented a staged concert version of the musical from February 28 to March 2, 2013, at Avery Fisher Hall. Kelli O'Hara played Julie, with Nathan Gunn as Billy, Stephanie Blythe as Nettie, Jessie Mueller as Carrie, Jason Danieley as Enoch, Shuler Hensley as Jigger, John Cullum as the Starkeeper, and Kate Burton as Mrs. Mullin. Tiler Peck danced the role of Louise to choreography by Warren Carlyle. The production was directed by John Rando. Charles Isherwood of "The New York Times" wrote, "this is as gorgeously sung a production of this sublime 1945 Broadway musical as you are ever likely to hear." It was broadcast as part of the PBS "Live from Lincoln Center" series, premiering on April 26, 2013. Rodgers designed "Carousel" to be an almost continuous stream of music, especially in Act 1. In later years, Rodgers was asked if he had considered writing an opera. He stated that he had been sorely tempted to, but saw "Carousel" in operatic terms. He remembered, "We came very close to opera in the Majestic Theatre. ... There's much that is operatic in the music." Rodgers uses music in "Carousel" in subtle ways to differentiate characters and tell the audience of their emotional state. In "You're a Queer One, Julie Jordan", the music for the placid Carrie is characterized by even eighth-note rhythms, whereas the emotionally restless Julie's music is marked by dotted eighths and sixteenths; this rhythm will characterize her throughout the show. When Billy whistles a snatch of the song, he selects Julie's dotted notes rather than Carrie's. Reflecting the close association in the music between Julie and the as-yet unborn Louise, when Billy sings in "Soliloquy" of his daughter, who "gets hungry every night", he uses Julie's dotted rhythms. Such rhythms also characterize Julie's Act 2 song, "What's the Use of Wond'rin'". 
The stable love between Enoch and Carrie is strengthened by her willingness to let Enoch not only plan his entire life, but hers as well. This is reflected in "When the Children Are Asleep", where the two sing in close harmony, but Enoch musically interrupts his intended's turn at the chorus with the words "Dreams that won't be interrupted". Rodgers biographer Geoffrey Block, in his book on the Broadway musical, points out that though Billy may strike his wife, he allows her musical themes to become a part of him and never interrupts her music. Block suggests that, as reprehensible as Billy may be for his actions, Enoch requiring Carrie to act as "the little woman", and his having nine children with her (more than she had found acceptable in "When the Children Are Asleep"), can be considered to be even more abusive. The twelve-minute "bench scene", in which Billy and Julie get to know each other and which culminates with "If I Loved You", according to Hischak, "is considered the most completely integrated piece of music-drama in the American musical theatre". The scene is almost entirely drawn from Molnár and is one extended musical piece; Stephen Sondheim described it as "probably the single most important moment in the revolution of contemporary musicals". "If I Loved You" has been recorded many times, by such diverse artists as Frank Sinatra, Barbra Streisand, Sammy Davis Jr., Mario Lanza and Chad and Jeremy. The D-flat major theme that dominates the music for the second act ballet seems like a new melody to many audience members. It is, however, a greatly expanded development of a theme heard during "Soliloquy" at the line "I guess he'll call me 'The old man' ". When the pair discussed the song that would become "Soliloquy", Rodgers improvised at the piano to give Hammerstein an idea of how he envisioned the song. When Hammerstein presented his collaborator with the lyrics after two weeks of work (Hammerstein always wrote the words first, then Rodgers would write the melodies), Rodgers wrote the music for the eight-minute song in two hours. "What's the Use of Wond'rin' ", one of Julie's songs, worked well in the show but was never as popular on the radio or for recording, and Hammerstein believed that the lack of popularity was because he had concluded the final line, "And all the rest is talk", with a hard consonant, which does not allow the singer a vocal climax. Irving Berlin later stated that "You'll Never Walk Alone" had the same sort of effect on him as the 23rd Psalm. When singer Mel Tormé told Rodgers that "You'll Never Walk Alone" had made him cry, Rodgers nodded impatiently. "You're supposed to." The frequently recorded song has become a widely accepted hymn. The cast recording of "Carousel", like many Broadway albums, proved popular in Liverpool, and in 1963 the Brian Epstein-managed band Gerry and the Pacemakers had a number-one hit with the song. At the time, the top ten hits were played before Liverpool F.C. home matches; even after "You'll Never Walk Alone" dropped out of the top ten, fans continued to sing it, and it has become closely associated with the soccer team and the city of Liverpool. A BBC program, "Soul Music", ranked it alongside "Silent Night" and "Abide With Me" in terms of its emotional impact and iconic status. The cast album of the 1945 Broadway production was issued on 78s, and the score was significantly cut—as was the 1950 London cast recording. 
Theatre historian John Kenrick notes of the 1945 recording that a number of songs had to be abridged to fit the 78 format, but that there is a small part of "Soliloquy" found on no other recording, as Rodgers cut it from the score immediately after the studio recording was made. A number of songs were cut for the 1956 film, but two of the deleted numbers had been recorded and were ultimately retained on the soundtrack album. The expanded CD version of the soundtrack, issued in 2001, contains all of the singing recorded for the film, including the cut portions, and nearly all of the dance music. The recording of the 1965 Lincoln Center revival featured Raitt reprising the role of Billy. Studio recordings of "Carousel"'s songs were released in 1956 (with Robert Merrill as Billy, Patrice Munsel as Julie, and Florence Henderson as Carrie), 1962 and 1987. The 1987 version featured a mix of opera and musical stars, including Samuel Ramey, Barbara Cook and Sarah Brightman. Kenrick recommends the 1962 studio recording for its outstanding cast, including Alfred Drake, Roberta Peters, Claramae Turner, Lee Venora, and Norman Treigle. Both the London (1993) and New York (1994) cast albums of the Hytner production contain portions of dialogue that, according to Hischak, speak to the power of Michael Hayden's portrayal of Billy. Kenrick judges the 1994 recording the best all-around performance of "Carousel" on disc, despite uneven singing by Hayden, due to Sally Murphy's Julie and the strong supporting cast (calling Audra McDonald the best Carrie he has heard). The Stratford Festival issued a recording in 2015. The musical received almost unanimous rave reviews after its opening in 1945. According to Hischak, reviews were not as exuberant as for "Oklahoma!" as the critics were not taken by surprise this time. John Chapman of the "Daily News" termed it "one of the finest musical plays I have ever seen and I shall remember it always". "The New York Times"'s reviewer, Lewis Nichols, stated that "Richard Rodgers and Oscar Hammerstein 2d, who can do no wrong, have continued doing no wrong in adapting "Liliom" into a musical play. Their "Carousel" is on the whole delightful." Wilella Waldorf of the "New York Post", however, complained, ""Carousel" seemed to us a rather long evening. The "Oklahoma!" formula is becoming a bit monotonous and so are Miss de Mille's ballets. All right, go ahead and shoot!" "Dance Magazine" gave Linn plaudits for her role as Louise, stating, "Bambi doesn't come on until twenty minutes before eleven, and for the next forty minutes, she practically holds the audience in her hand". Howard Barnes in the "New York Herald Tribune" also applauded the dancing: "It has waited for Miss de Mille to come through with peculiarly American dance patterns for a musical show to become as much a dance as a song show." When the musical returned to New York in 1949, "The New York Times" reviewer Brooks Atkinson described "Carousel" as "a conspicuously superior musical play ... "Carousel", which was warmly appreciated when it opened, seems like nothing less than a masterpiece now." In 1954, when "Carousel" was revived at City Center, Atkinson discussed the musical in his review: "Carousel" has no comment to make on anything of topical importance. The theme is timeless and universal: the devotion of two people who love each other through thick and thin, complicated in this case by the wayward personality of the man, who cannot fulfill the responsibilities he has assumed.  ... 
Billy is a bum, but "Carousel" recognizes the decency of his motives and admires his independence. There are no slick solutions in "Carousel". Stephen Sondheim noted the duo's ability to take the innovations of "Oklahoma!" and apply them to a serious setting: ""Oklahoma!" is about a picnic, "Carousel" is about life and death." Critic Eric Bentley, on the other hand, wrote that "the last scene of "Carousel" is an impertinence: I refuse to be lectured to by a musical comedy scriptwriter on the education of children, the nature of the good life, and the contribution of the American small town to the salvation of souls." "New York Times" critic Frank Rich said of the 1992 London production: "What is remarkable about Mr. Hytner's direction, aside from its unorthodox faith in the virtues of simplicity and stillness, is its ability to make a 1992 audience believe in Hammerstein's vision of redemption, which has it that a dead sinner can return to Earth to do godly good." The Hytner production in New York was hailed by many critics as a grittier "Carousel", which they deemed more appropriate for the 1990s. Clive Barnes of the "New York Post" called it a "defining "Carousel"—hard-nosed, imaginative, and exciting." Critic Michael Billington has commented that "lyrically ["Carousel"] comes perilously close to acceptance of the inevitability of domestic violence." BroadwayWorld.com stated in 2013 that "Carousel" is now "considered somewhat controversial in terms of its attitudes on domestic violence" because Julie chooses to stay with Billy despite the abuse; actress Kelli O'Hara noted that the domestic violence that Julie "chooses to deal with – is a real, existing and very complicated thing. And exploring it is an important part of healing it." Rodgers considered "Carousel" his favorite of all his musicals and wrote, "it affects me deeply every time I see it performed". In 1999, "Time" magazine, in its "Best of the Century" list, named "Carousel" the Best Musical of the 20th century, writing that Rodgers and Hammerstein "set the standards for the 20th century musical, and this show features their most beautiful score and the most skillful and affecting example of their musical storytelling". Hammerstein's grandson, Oscar Andrew Hammerstein, in his book about his family, suggested that the wartime situation made "Carousel"'s ending especially poignant to its original viewers, "Every American grieved the loss of a brother, son, father, or friend ... the audience empathized with [Billy's] all-too-human efforts to offer advice, to seek forgiveness, to complete an unfinished life, and to bid a proper good-bye from beyond the grave." Author and composer Ethan Mordden agreed with that perspective: If "Oklahoma!" developed the moral argument for sending American boys overseas, "Carousel" offered consolation to those wives and mothers whose boys would only return in spirit. The meaning lay not in the tragedy of the present, but in the hope for a future where no one walks alone. "Note: The Tony Awards were not established until 1947, and so "Carousel" was not eligible to win any Tonys at its premiere. Chrono Cross The story of "Chrono Cross" focuses on a teenage boy named Serge and a theme of parallel worlds. Faced with an alternate reality in which he died as a child, Serge endeavors to discover the truth of the two worlds' divergence. The flashy thief Kid and many other characters assist him in his travels around the tropical archipelago El Nido. 
Struggling to uncover his past and find the mysterious Frozen Flame, Serge is chiefly challenged by Lynx, a shadowy antagonist working to apprehend him. Upon its release in Japan in 1999 and North America in 2000, "Chrono Cross" received critical acclaim, earning a perfect 10.0 score from GameSpot. Strong worldwide shipments led to a Greatest Hits re-release and continued life in Japan as part of the Ultimate Hits series. "Chrono Cross" was later re-released for the PlayStation Network in Japan in July 2011, and in North America four months later. "Chrono Cross" features standard role-playing video game gameplay with some differences. Players advance the game by controlling the protagonist Serge through the game's world, primarily on foot and by boat. Navigation between areas is conducted via an overworld map, much like "Chrono Trigger's", depicting the landscape from a scaled-down overhead view. Around the island world are villages, outdoor areas, and dungeons, through which the player moves in three dimensions. Locations such as cities and forests are represented by more realistically scaled field maps, in which players can converse with locals to procure items and services, solve puzzles and challenges, or encounter enemies. Like "Chrono Trigger", the game features no random encounters; enemies are openly visible on field maps or lie in wait to ambush the party. Touching a monster switches the perspective to a battle screen, in which players can physically attack, use "Elements", defend, or run away from the enemy. Battles are turn-based, allowing the player unlimited time to select an action from the available menu. For both the playable characters and the computer-controlled enemies, each attack reduces their number of hit points (a numerically based life bar), which can be restored through some Elements. When a playable character loses all hit points, he or she faints. If all the player's characters fall in battle, the game ends and must be restored from a previously saved chapter—except for specific storyline-related battles that allow the player to lose. "Chrono Cross"'s developers aimed to break new ground in the genre, and the game features several innovations. For example, players can run away from all conflicts, including boss fights and the final battle. The Element system of "Chrono Cross" handles all magic, consumable items, and character-specific abilities. Elements unleash magic effects upon the enemy or party and must be equipped for use, much like the materia of 1997's "Final Fantasy VII". Elements can be purchased from shops or found in treasure chests littered throughout areas. Once acquired, they are allocated to a grid whose size and shape are unique to each character. They are ranked according to eight tiers; certain high-level Elements can only be assigned on equivalent tiers in a character's grid. As the game progresses, the grid expands, allowing more Elements to be equipped and higher tiers to be accessed. Elements are divided into six paired oppositional types, or "colors", each with a natural effect. Red (fire/magma) opposes Blue (water/ice), Green (wind/flora) opposes Yellow (earth/lightning), and White (light/cosmos) opposes Black (darkness/gravity). Each character and enemy has an innate color, enhancing the power of same-color Elements while making them weak against Elements of the opposite color. "Chrono Cross" also features a "field effect", which keeps track of Element colors used in the upper corner of the battle screen. 
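The paired color scheme lends itself to a simple illustration. The Python sketch below encodes the three oppositional pairs described above and applies them to a damage modifier; the multiplier values are hypothetical placeholders chosen for illustration, not figures taken from the game.

```python
# Illustrative sketch of the Element color relationships described above.
# The damage modifiers are assumed values, used only to show the idea.
OPPOSITES = {
    "Red": "Blue", "Blue": "Red",          # fire/magma vs. water/ice
    "Green": "Yellow", "Yellow": "Green",  # wind/flora vs. earth/lightning
    "White": "Black", "Black": "White",    # light/cosmos vs. darkness/gravity
}

def color_modifier(user_innate: str, element_color: str, target_innate: str) -> float:
    """Return a rough damage multiplier for an Element cast at a target."""
    modifier = 1.0
    if element_color == user_innate:               # same-color Elements are strengthened
        modifier *= 1.5                            # assumed bonus
    if element_color == OPPOSITES[target_innate]:  # targets are weak to their opposite color
        modifier *= 1.5                            # assumed weakness multiplier
    elif element_color == target_innate:           # and resist their own color
        modifier *= 0.5
    return modifier

# Example: a Red-innate character casting a Red Element at a Blue-innate enemy
print(color_modifier("Red", "Red", "Blue"))   # 2.25 under these assumed numbers
```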
If the field is purely one color, characters are able to unleash a powerful summon Element at the cost of one of the player's stars. The field will also enhance the power of Elements of the colors present, while weakening Elements of the opposite colors. Characters also innately learn some special techniques ("Techs") that are unique to each character but otherwise act like Elements. As in "Chrono Trigger", characters can combine certain Techs to make more powerful Double or Triple Techs. Consumable Elements may be used to restore hit points or heal status ailments after battle. Another innovative aspect of "Chrono Cross" is its stamina bar. At the beginning of a battle, each character has seven points of stamina. When a character attacks or uses an Element, stamina is decreased in proportion to the potency of the action. Stamina slowly recovers when the character defends or when other characters perform actions in battle. Characters with stamina below one point must wait to take action. Use of an Element reduces the user's stamina bar by seven stamina points; this often means that the user's stamina gauge falls into the negative and the character must wait longer than usual to recover. With each battle, players can enhance statistics such as strength and defense. However, no system of experience points exists; after four or five upgrades, statistics remain static until players defeat a boss. This adds a star to a running count shown on the status screen, which allows for another few rounds of statistical increases. Players can equip characters with weapons, armor, helmets, and accessories for use in battle; for example, the "Power Seal" upgrades attack power. Items and equipment may be purchased or found on field maps, often in treasure chests. Unlike Elements, weapons and armor cannot merely be purchased with money; instead, the player must obtain base materials—such as copper, bronze, or bone—for a blacksmith to forge for a fee. The items can later be disassembled into their original components at no cost. The existence of two major parallel dimensions, like the time periods in "Chrono Trigger", plays a significant role in the game. Players must go back and forth between the worlds to recruit party members, obtain items, and advance the plot. Much of the population of either world has counterparts in the other; some party members can even visit their other versions. The player must often search for items or places found exclusively in one world. Events in one dimension sometimes have an impact in the other—for instance, cooling scorched ground on an island in one world allows vegetation to grow in the other world. This system assists the presentation of certain themes, including the questioning of the importance of one's past decisions and humanity's role in destroying the environment. Rounding out the notable facets of "Chrono Cross"'s gameplay are the New Game+ option and multiple endings. As in "Chrono Trigger", players who have completed the game may choose to start the game over using data from the previous session. Character levels, learned techniques, equipment, and items gathered copy over, while acquired money and some story-related items are discarded. On a New Game+, players can access twelve endings. Scenes viewed depend on players' progress in the game before the final battle, which can be fought at any time in a New Game+ file. "Chrono Cross" features a diverse cast of 45 party members. 
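The stamina mechanic described above can be approximated in a few lines. In the sketch below, the starting value of seven and the flat seven-point Element cost come from the description; the per-attack costs and the recovery amounts are assumptions made only to round out the example.

```python
# A minimal sketch of the Chrono Cross stamina bar, under assumed cost/recovery values.
class Combatant:
    def __init__(self, name: str):
        self.name = name
        self.stamina = 7.0          # every character starts a battle with seven points

    def can_act(self) -> bool:
        return self.stamina >= 1.0  # below one point, the character must wait

    def attack(self, potency: float) -> None:
        # stronger physical attacks cost more stamina (assumed 1-3 point range)
        self.stamina -= potency

    def use_element(self) -> None:
        self.stamina -= 7.0         # Elements always cost seven points, so the
                                    # gauge often dips below zero

    def recover(self, amount: float = 1.0) -> None:
        # stamina creeps back while defending or while allies take their turns
        self.stamina = min(7.0, self.stamina + amount)

serge = Combatant("Serge")
serge.use_element()
print(serge.stamina, serge.can_act())   # 0.0 False -> Serge must wait to recover
```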
Each character is outfitted with an innate Element affinity and three unique special abilities that are learned over time. If taken to the world opposite their own, characters react to their counterparts (if available). Many characters tie in to crucial plot events. Since it is impossible to obtain all 45 characters in one playthrough, players must replay the game to witness everything. Through use of the New Game+ feature, players can ultimately obtain all characters on one save file. Serge, the game's protagonist, is a 17-year-old boy with blue hair who lives in the fishing village of Arni. One day, he slips into an alternate world in which he drowned ten years before. Determined to find the truth behind the incident, he follows a predestined course that leads him to save the world. He is assisted by Kid, a feisty, skilled thief who seeks the mythical Frozen Flame. Portrayed as willful and tomboyish due to her rough, thieving past, she helps Serge sneak into Viper Manor in order to obtain the Frozen Flame. Kid vows to find and defeat Lynx, an anthropomorphic panther who burned down her adopted mother's orphanage. Lynx, a cruel agent of the supercomputer FATE, is bent on finding Serge and using his body as part of a greater plan involving the Frozen Flame. Lynx travels with Harle, a mysterious, playful girl dressed like a harlequin. Harle was sent by the Dragon God to shadow Lynx and one day steal the Frozen Flame from Chronopolis, a task she painfully fulfills despite being smitten with Serge. To accomplish this goal, Harle helps Lynx manipulate the Acacia Dragoons, the powerful militia governing the islands of El Nido. As the Dragoons maintain order, they contend with Fargo, a former Dragoon turned pirate captain who holds a grudge against their leader, General Viper. Though tussling with Serge initially, the Acacia Dragoons—whose ranks include the fierce warriors Karsh, Zoah, Marcy, and Glenn—later assist him when the militaristic nation of Porre invades the archipelago. The invasion brings Norris and Grobyc to the islands, a kindhearted commander of an elite force and a prototype cyborg soldier, respectively, as they too seek the Frozen Flame. "Chrono Cross" begins with Serge located in El Nido, a tropical archipelago inhabited by ancient natives, mainland colonists, and beings called Demi-humans. Serge slips into an alternate dimension in which he drowned on the beach ten years prior, and meets the thief Kid. As his adventure proceeds from here, Serge is able to recruit a multitude of allies to his cause. While assisting Kid in a heist at Viper Manor to steal the Frozen Flame, he learns that ten years before the present, the universe split into two dimensions—one in which Serge lived, and one in which he perished. Through Kid's Astral Amulet charm, Serge travels between the dimensions. At Fort Dragonia, through the use of a Dragonian artifact called the Dragon Tear, Lynx switches bodies with Serge. Unaware of the switch, Kid confides in Lynx, who stabs her as the real Serge helplessly watches. Lynx boasts of his victory and banishes Serge to a strange realm called the Temporal Vortex. He takes Kid under his wing, brainwashing her to believe the real Serge (in Lynx's body) is her enemy. Serge escapes with help from Harle, although his new body turns him into a stranger in his own world, with all the allies he had gained up to that point abandoning him due to his new appearance. 
Discovering that his new body prevents him from traveling across the dimensions, he sets out to regain his former body and learn more of the universal split that occurred ten years earlier, gaining a new band of allies along the way. He travels to a forbidden lagoon known as the Dead Sea—a wasteland frozen in time, dotted with futuristic ruins. At the center, he locates a man named Miguel and presumably "Home" world's Frozen Flame. Charged with guarding the Dead Sea by an entity named FATE, Miguel and three visions of Crono, Marle, and Lucca from "Chrono Trigger" explain that Serge's existence dooms "Home" world's future to destruction at the hands of Lavos. To prevent Serge from obtaining the Frozen Flame, FATE destroys the Dead Sea. Able to return to "Another" world, Serge allies with the Acacia Dragoons against Porre and locates that dimension's Dragon Tear, allowing him to return to his human form. He then enters the Sea of Eden, "Another" world's physical equivalent of the Dead Sea, finding a temporal research facility from the distant future called Chronopolis. Lynx and Kid are inside; Serge defeats Lynx and the supercomputer FATE, allowing the six Dragons of El Nido to steal the Frozen Flame and retire to Terra Tower, a massive structure raised from the sea floor. Kid falls into a coma, and Harle bids the party goodbye to fly with the Dragons. Serge regroups his party and tends to Kid, who remains comatose. Continuing his adventure, he obtains and cleanses the corrupted Masamune sword from "Chrono Trigger". He then uses the Dragon relics and shards of the Dragon Tears to create the mythic Element Chrono Cross. The spiritual power of the Masamune later allows him to lift Kid from her coma. At Terra Tower, the prophet of time, revealed to be Belthasar from "Chrono Trigger", visits him with visions of Crono, Marle, and Lucca. Serge learns that the time research facility Chronopolis created El Nido thousands of years ago after a catastrophic experimental failure drew it to the past. The introduction of a temporally foreign object in history caused the planet to pull in a counterbalance from a different dimension. This was Dinopolis, a city of Dragonians—parallel-universe descendants of "Chrono Trigger"'s Reptites. The institutions warred and Chronopolis subjugated the Dragonians. Humans captured their chief creation—the Dragon God, an entity capable of controlling nature. Chronopolis divided this entity into six pieces and created an Elements system. FATE then terraformed an archipelago, erased the memories of most of Chronopolis's staff, and sent them to inhabit and populate its new paradise. Thousands of years later, a panther demon attacked a three-year-old Serge. His father took him to find assistance at Marbule, but Serge's boat blew off course due to a raging magnetic storm caused by Schala. Schala, the princess of the Kingdom of Zeal, had long ago accidentally fallen to a place known as the Darkness Beyond Time and begun merging with Lavos, the chief antagonist of "Chrono Trigger". Schala's storm nullified Chronopolis's defenses and allowed Serge to contact the Frozen Flame; approaching it healed Serge but corrupted his father. A circuit in Chronopolis then designated Serge "Arbiter", which by extension prevented FATE from using the Frozen Flame. The Dragons were aware of this situation and, under the storm's cover, created a seventh Dragon named Harle, who manipulated Lynx to steal the Frozen Flame for the Dragons. 
After Serge returned home, FATE sent Lynx to kill Serge, hoping that it would release the Arbiter lock. Ten years after Serge drowned, the thief Kid—presumably on Belthasar's orders—went back in time to save Serge and split the dimensions. FATE, locked out of the Frozen Flame again, knew that Serge would one day cross to "Another" world and prepared to apprehend him. Lynx switched bodies with Serge to dupe the biological check of Chronopolis on the Frozen Flame. Belthasar then reveals that these events were part of a plan he had orchestrated named Project Kid. Serge continues to the top of Terra Tower and defeats the Dragon God. Continuing to the beach where the split in dimensions had occurred, Serge finds apparitions of Crono, Marle, and Lucca once more. They reveal that Belthasar's plan was to empower Serge to free Schala from melding with Lavos, lest they evolve into the "Time Devourer", a creature capable of destroying spacetime. Lucca explains that Kid is Schala's clone, sent to the modern age to take part in Project Kid. Serge uses a Time Egg—given to him by Belthasar—to enter the Darkness Beyond Time and vanquish the Time Devourer, separating Schala from Lavos and restoring the dimensions to one. Thankful, Schala muses on evolution and the struggle of life and returns Serge to his home, noting that he will forget the entire adventure. She then seemingly records the experience in her diary, noting she will always be searching for Serge in this life and beyond, signing the entry as Schala "Kid" Zeal, implying that she and Kid have merged and become whole again. A wedding photo of Kid and an obscured male sits on the diary's desk. Scenes then depict a real-life Kid searching for someone in a modern city, a sequence intended to make players entertain the possibility that their own Kid is searching for them. The ambiguous ending leaves the events of the characters' lives following the game up to interpretation. "Chrono Cross" employs story arcs, characters, and themes from "Radical Dreamers", a Satellaview side story to "Chrono Trigger" released in Japan. An illustrated text adventure, "Radical Dreamers" was created to wrap up an unresolved plot line of "Chrono Trigger". Though it borrows from "Radical Dreamers" in its exposition, "Chrono Cross" is not a remake of "Radical Dreamers", but a larger effort to fulfill that game's purpose; the plots of the games are irreconcilable. To resolve continuity issues and acknowledge "Radical Dreamers", the developers of "Chrono Cross" suggested the game happened in a parallel dimension. A notable difference between the two games is that Magus—present in "Radical Dreamers" as Gil—is absent from "Chrono Cross". Director Masato Kato originally planned for Magus to appear in disguise as Guile, but scrapped the idea due to plot difficulties. In the DS version of "Chrono Trigger", Kato teases the possibility of an amnesiac Magus. Square began planning "Chrono Cross" immediately after the release of "Xenogears" in 1998 (which itself was originally conceived as a sequel to the SNES game). "Chrono Trigger"'s scenario director Masato Kato had brainstormed ideas for a sequel as early as 1996, following the release of "Radical Dreamers". Square's managers selected a team, appointed Hiromichi Tanaka producer, and asked Kato to direct and develop a new "Chrono" game in the spirit of "Radical Dreamers". Kato thought "Dreamers" was released in a "half-finished state", and wanted to continue the story of the character Kid. Kato and Tanaka decided to produce an indirect sequel. 
They acknowledged that Square would soon re-release "Chrono Trigger" as part of "Final Fantasy Chronicles", which would give players a chance to catch up on the story of "Trigger" before playing "Cross". Kato thought that using a different setting and cast for "Chrono Cross" would allow players unfamiliar with "Chrono Trigger" to play "Cross" without becoming confused. The "Chrono Cross" team decided against integrating heavy use of time travel into the game, as they thought it would be "rehashing and cranking up the volume of the last game". Masato Kato cited the belief "there's no use in making something similar to before", and noted, "we're not so weak nor cheap as to try to make something exactly the same as "Trigger" ... Accordingly, "Chrono Cross" is not "Chrono Trigger 2". It doesn't simply follow on from "Trigger", but is another, different "Chrono" that interlaces with "Trigger"." Kato and Tanaka further explained their intentions after the game's release. Full production began on "Chrono Cross" in mid-1998. The "Chrono Cross" team reached 80 members at its peak, with additional personnel of 10–20 cut-scene artists and 100 quality assurance testers. The team felt pressure to live up to the work of "Chrono Trigger"'s "Dream Team" development group, which included famous Japanese manga artist Akira Toriyama. Kato and Tanaka hired Nobuteru Yūki for character design and Yasuyuki Honne for art direction and concept art. The event team originally envisioned a short game, and planned a system by which players would befriend any person in a town for alliance in battle. Developers brainstormed traits and archetypes during the character-creation process, originally planning 64 characters with unique endings that could vary in three different ways per character. Kato described the character creation process: "Take Pierre, for example: we started off by saying we wanted a wacko fake hero like Tata from "Trigger". We also said things like 'we need at least one powerful mom', 'no way we're gonna go without a twisted brat', and so on and so forth." As production continued, the length of "Cross" increased, leading the event team to reduce the number of characters to 45 and scrap most of the alternate endings. Developers humorously named the character Pip "Tsumaru" in Japanese (which means "packed") as a pun on their attempts to pack as much content into the game as possible. To avoid the burden of writing unique, accented dialogue for several characters, team member Kiyoshi Yoshii coded a system that produces accents by modifying basic text for certain characters. Art director Nobuteru Yūki initially wanted the characters to appear in a more "chibi" format with diminutive proportions. The game world's fusion of high technology and ethnic, tribal atmospheres proved challenging for him at first; he later recalled striving to harmonize the time period's level of technology, especially as reflected in characters' garb. The "Chrono Cross" team devised an original battle system using a stamina bar and Elements. Kato planned the system around allowing players to avoid repetitive gameplay (also known as "grinding") to gain combat experience. Hiromichi Tanaka likened the Elements system to card games, hoping players would feel a sense of complete control in battle. The team programmed each battle motion manually instead of performing motion capture. Developers strove to include tongue-in-cheek humor in the battle system's techniques and animations to distance the game from the "Final Fantasy" franchise. 
Masato Kato planned for the game's setting to feature a small archipelago, for fear that players would become confused traveling in large areas with respect to parallel worlds. He hoped El Nido would still impart a sense of grand scale, and the development team pushed hardware limitations in creating the game's world. To create field maps, the team modeled locations in 3D, then chose the best angle for 2D rendering. The programmers of "Chrono Cross" did not use any existing Square programs or routines to code the game, instead writing new, proprietary systems. Other innovations included variable-frame rate code for fast-forward and slow-motion gameplay (awarded as a bonus for completing the game) and a "CD-read swap" system to allow quick data retrieval. Masato Kato directed and wrote the main story, leaving sub-plots and minor character events to other staff. The event team sometimes struggled to mesh their work on the plot due to the complexity of the parallel worlds concept. Masato Kato confirmed that "Cross" featured a central theme of parallel worlds, as well as the fate of Schala, which he was previously unable to expound upon in "Chrono Trigger". Concerning the ending sequences showing Kid searching for someone in a modern city, he hoped to make players realize that alternate futures and possibilities may exist in their own lives, and that this realization would "not ... stop with the game". He later added, "Paraphrasing one novelist's favorite words, what's important is not the message or theme, but how it is portrayed as a game. Even in Cross, it was intentionally made so that the most important question was left unanswered." Kato described the finished story as "ole' boy-meets-girl type of story" with sometimes-shocking twists. Kato rode his motorcycle to relieve the stress of the game's release schedule. He continued refining event data during the final stages of development while the rest of the team undertook debugging and quality control work. Square advertised the game by releasing a short demo of the first chapter with purchases of "Legend of Mana". The North American version of "Cross" required three months of translation and two months of debugging before release. Richard Honeywood translated, working with Kato to rewrite certain dialogue for ease of comprehension in English. He also added instances of wordplay and alliteration to compensate for difficult Japanese jokes. To streamline translation for all 45 playable characters, Honeywood created his own version of the accent generator which needed to be more robust than the simple verbal tics of the Japanese cast. Although the trademark "Chrono Cross" was registered in the European Union, the game was not released in Europe. "Chrono Cross" was scored by freelance video game music composer Yasunori Mitsuda, who previously worked on "Chrono Trigger". Director Masato Kato personally commissioned Mitsuda's involvement, citing a need for the "Chrono sound". Kato envisioned a "Southeast Asian feel, mixed with the foreign tastes and the tones of countries such as Greece"; Mitsuda centered his work around old world cultural influences, including Mediterranean, Fado, Celtic, and percussive African music. Mitsuda cited visual inspiration for songs: "All of my subjects are taken from scenery. I love artwork." To complement the theme of parallel worlds, he gave "Another" and "Home" respectively dark and bright moods, and hoped players would feel the emotions of "'burning soul,' 'lonely world,' and 'unforgettable memories'". 
Mitsuda and Kato planned music samples and sound effects with the philosophy of "a few sounds with a lot of content". "Xenogears" contributor Tomohiko Kira played guitar on the beginning and ending themes. Noriko Mitose, selected by Masato Kato, sang the ending song—"Radical Dreamers – The Unstolen Jewel". Ryo Yamazaki, a synthesizer programmer for Square Enix, helped Mitsuda transfer his ideas to the PlayStation's sound capabilities; Mitsuda was happy to accomplish even half of what he envisioned. Certain songs were ported from the score of "Radical Dreamers", such as "Gale", "Frozen Flame", and "Viper Mansion". Other entries in the soundtrack contain leitmotifs from "Chrono Trigger" and "Radical Dreamers". The melody of "Far Promise ~ Dream Shore" features prominently in "The Dream That Time Dreams" and "Voyage ~ Another World". Masato Kato faced internal opposition in hiring Noriko Mitose. Dungeons & Dragons Dungeons & Dragons (abbreviated as D&D or DnD) is a fantasy tabletop role-playing game (RPG) originally designed by Gary Gygax and Dave Arneson. It was first published in 1974 by Tactical Studies Rules, Inc. (TSR). The game has been published by Wizards of the Coast (now a subsidiary of Hasbro) since 1997. It was derived from miniature wargames, with a variation of "Chainmail" serving as the initial rule system. The publication of "D&D" is commonly recognized as the beginning of modern role-playing games and the role-playing game industry. "D&D" departs from traditional wargaming by assigning each player a specific character to play instead of a military formation. These characters embark upon imaginary adventures within a fantasy setting. A Dungeon Master serves as the game's referee and storyteller while maintaining the setting in which the adventures occur, and playing the role of the inhabitants. The characters form a party that interacts with the setting's inhabitants, and each other. Together they solve dilemmas, engage in battles, and gather treasure and knowledge. In the process the characters earn experience points in order to rise in levels, and become increasingly powerful over a series of sessions. The early success of "Dungeons & Dragons" led to a proliferation of similar game systems. Despite the competition, "D&D" has remained the market leader in the role-playing game industry. In 1977, the game was split into two branches: the relatively rules-light game system of basic "Dungeons & Dragons" and the more structured, rules-heavy game system of "Advanced Dungeons & Dragons" (abbreviated as "AD&D"). "AD&D" 2nd Edition was published in 1989. In 2000, a new system was released as "Dungeons & Dragons" 3rd edition. These rules formed the basis of the d20 System, which is available under the Open Game License (OGL) for use by other publishers. "Dungeons & Dragons" version 3.5 was released in June 2003, with a (non-OGL) 4th edition in June 2008. A 5th edition was released during the second half of 2014. "Dungeons & Dragons" has remained the best-known and best-selling role-playing game, with an estimated 20 million people having played the game and more than US$1 billion in book and equipment sales. The game has been supplemented by many pre-made adventures as well as commercial campaign settings suitable for use by regular gaming groups. "Dungeons & Dragons" is known beyond the game for other "D&D"-branded products, references in popular culture, and some of the controversies that have surrounded it, particularly a moral panic in the 1980s falsely linking it to Satanism and suicide. 
The game has won multiple awards and has been translated into many languages beyond the original English. "Dungeons & Dragons" is a structured yet open-ended role-playing game. It is normally played indoors with the participants seated around a tabletop. Typically, each player controls only a single character, which represents an individual in a fictional setting. When working together as a group, these player characters (PCs) are often described as a "party" of adventurers, with each member often having their own area of specialty which contributes to the success of the whole. During the course of play, each player directs the actions of their character and their interactions with other characters in the game. This activity is performed through the verbal impersonation of the characters by the players, while employing a variety of social and other useful cognitive skills, such as logic, basic mathematics and imagination. A game often continues over a series of meetings to complete a single adventure, and longer into a series of related gaming adventures, called a "campaign". The results of the party's choices and the overall storyline for the game are determined by the Dungeon Master (DM) according to the rules of the game and the DM's interpretation of those rules. The DM selects and describes the various non-player characters (NPCs) that the party encounters, the settings in which these interactions occur, and the outcomes of those encounters based on the players' choices and actions. Encounters often take the form of battles with "monsters" – a generic term used in "D&D" to describe potentially hostile beings such as animals, aberrant beings, or mythical creatures. The game's extensive rules – which cover diverse subjects such as social interactions, magic use, combat, and the effect of the environment on PCs – help the DM to make these decisions. The DM may choose to deviate from the published rules or make up new ones if they feel it is necessary. The most recent versions of the game's rules are detailed in three core rulebooks: The "Player's Handbook", the "Dungeon Master's Guide" and the "Monster Manual". The only items required to play the game are the rulebooks, a character sheet for each player, and a number of polyhedral dice. Many players also use miniature figures on a grid map as a visual aid, particularly during combat. Some editions of the game presume such usage. Many optional accessories are available to enhance the game, such as expansion rulebooks, pre-designed adventures and various campaign settings. Before the game begins, each player creates their player character and records the details (described below) on a character sheet. First, a player determines their character's ability scores, which consist of Strength, Constitution, Dexterity, Intelligence, Wisdom, and Charisma. Each edition of the game has offered differing methods of determining these statistics. The player then chooses a race (species) such as human or elf, a character class (occupation) such as fighter or wizard, an alignment (a moral and ethical outlook), and other features to round out the character's abilities and backstory, which have varied in nature through differing editions. During the game, players describe their PC's intended actions, such as punching an opponent or picking a lock, and converse with the DM, who then describes the result or response. Trivial actions, such as picking up a letter or opening an unlocked door, are usually automatically successful. 
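As a concrete illustration of the character-creation step described above, the following sketch generates a set of six ability scores using one traditional dice method (roll four six-sided dice and keep the three highest). The exact procedure varies by edition, so this is an assumed example rather than the canonical rule.

```python
# Illustrative ability-score generation using the "4d6, drop the lowest" method.
import random

ABILITIES = ["Strength", "Constitution", "Dexterity", "Intelligence", "Wisdom", "Charisma"]

def roll_ability_score() -> int:
    rolls = sorted(random.randint(1, 6) for _ in range(4))
    return sum(rolls[1:])           # drop the lowest of the four dice

def generate_character() -> dict:
    return {ability: roll_ability_score() for ability in ABILITIES}

print(generate_character())
# e.g. {'Strength': 14, 'Constitution': 11, 'Dexterity': 16, ...}
```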
The outcomes of more complex or risky actions are determined by rolling dice. Factors contributing to the outcome include the character's ability scores, skills and the difficulty of the task. In circumstances where a character does not have control of an event, such as when a trap or magical effect is triggered or a spell is cast, a saving throw can be used to determine whether the resulting damage is reduced or avoided. In this case the odds of success are influenced by the character's class, levels and ability scores. As the game is played, each PC changes over time and generally increases in capability. Characters gain (or sometimes lose) experience, skills and wealth, and may even alter their alignment or gain additional character classes. The key way characters progress is by earning experience points (XP), which happens when they defeat an enemy or accomplish a difficult task. Acquiring enough XP allows a PC to advance a level, which grants the character improved class features, abilities and skills. XP can be lost in some circumstances, such as encounters with creatures that drain life energy, or by use of certain magical powers that come with an XP cost. Hit points (HP) are a measure of a character's vitality and health and are determined by the class, level and constitution of each character. They can be temporarily lost when a character sustains wounds in combat or otherwise comes to harm, and loss of HP is the most common way for a character to die in the game. Death can also result from the loss of key ability scores or character levels. When a PC dies, it is often possible for the dead character to be resurrected through magic, although some penalties may be imposed as a result. If resurrection is not possible or not desired, the player may instead create a new PC to resume playing the game. A typical "Dungeons & Dragons" game consists of an "adventure", which is roughly equivalent to a single story. The DM can either design an adventure on their own, or follow one of the many pre-made adventures (also known as "modules") that have been published throughout the history of "Dungeons & Dragons". Published adventures typically include a background story, illustrations, maps and goals for PCs to achieve. Some include location descriptions and handouts. Although a small adventure entitled "Temple of the Frog" was included in the "Blackmoor" rules supplement in 1975, the first stand-alone "D&D" module published by TSR was 1978's "Steading of the Hill Giant Chief", written by Gygax. A linked series of adventures is commonly referred to as a "campaign". The locations where these adventures occur, such as a city, country, planet or an entire fictional universe, are referred to as "campaign settings" or "world". "D&D" settings are based in various fantasy genres and feature different levels and types of magic and technology. Popular commercially published campaign settings for "Dungeons & Dragons" include Greyhawk, Dragonlance, Forgotten Realms, Mystara, Spelljammer, Ravenloft, Dark Sun, Planescape, Birthright, and Eberron. Alternatively, DMs may develop their own fictional worlds to use as campaign settings. The wargames from which "Dungeons & Dragons" evolved used miniature figures to represent combatants. "D&D" initially continued the use of miniatures in a fashion similar to its direct precursors. The original "D&D" set of 1974 required the use of the "Chainmail" miniatures game for combat resolution. 
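The dice-based action resolution described at the start of this passage can be sketched in a few lines. The example below assumes the d20-style check popularized by the 3rd edition's d20 System, in which a twenty-sided die roll plus modifiers derived from ability scores and skills is compared against a difficulty number; the particular values are illustrative.

```python
# A minimal sketch of a d20-style check, as used from the 3rd edition onward.
import random

def ability_modifier(score: int) -> int:
    # 3rd-edition-style conversion: a score of 10-11 gives +0, each 2 points is +/-1
    return (score - 10) // 2

def check(ability_score: int, skill_bonus: int, difficulty_class: int) -> bool:
    roll = random.randint(1, 20)
    total = roll + ability_modifier(ability_score) + skill_bonus
    return total >= difficulty_class

# Example: picking a lock with Dexterity 16 and a +4 skill bonus against DC 15
print(check(ability_score=16, skill_bonus=4, difficulty_class=15))
```

Saving throws work the same way in this style of resolution, with the modifier drawn from the character's class, level, and the relevant ability score.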
By the publication of the 1977 game editions, combat was mostly resolved verbally. Thus miniatures were no longer required for game play, although some players continued to use them as a visual reference. In the 1970s, numerous companies began to sell miniature figures specifically for "Dungeons & Dragons" and similar games. Licensed miniature manufacturers who produced official figures include Grenadier Miniatures (1980–1983), Citadel Miniatures (1984–1986), Ral Partha, and TSR itself. Most of these miniatures used the 25 mm scale. Periodically, "Dungeons & Dragons" has returned to its wargaming roots with supplementary rules systems for miniatures-based wargaming. Supplements such as "Battlesystem" (1985 & 1989) and a new edition of "Chainmail" (2001) provided rule systems to handle battles between armies by using miniatures. An immediate predecessor of "Dungeons & Dragons" was a set of medieval miniature rules written by Jeff Perren. These were expanded by Gary Gygax, whose additions included a fantasy supplement, before the game was published as "Chainmail". When Dave Wesely entered the Army in 1970, his friend and fellow Napoleonics wargamer Dave Arneson began a medieval variation of Wesely's Braunstein games, where players control individuals instead of armies. Arneson used "Chainmail" to resolve combat. As play progressed, Arneson added such innovations as character classes, experience points, level advancement, armor class, and others. Having partnered previously with Gygax on "Don't Give Up the Ship!", Arneson introduced Gygax to his Blackmoor game and the two then collaborated on developing "The Fantasy Game", the role-playing game (RPG) that became "Dungeons & Dragons", with the final writing and preparation of the text being done by Gygax. The name was chosen by Gygax's two-year-old daughter Cindy — upon being presented with a number of choices of possible names, she exclaimed, "Oh Daddy, I like Dungeons and Dragons best!", although less prevalent versions of the story gave credit to his wife Mary Jo. Many "Dungeons & Dragons" elements appear in hobbies of the mid-to-late 20th century. For example, character-based role playing can be seen in improvisational theatre. Game-world simulations were well developed in wargaming. Fantasy milieux specifically designed for gaming could be seen in Glorantha's board games among others. Ultimately, however, "Dungeons & Dragons" represents a unique blending of these elements. The world of "D&D" was influenced by world mythology, history, pulp fiction, and contemporary fantasy novels. The importance of J. R. R. Tolkien's "The Lord of the Rings" and "The Hobbit" as an influence on "D&D" is controversial. The presence in the game of halflings, elves, half-elves, dwarves, orcs, rangers, and the like, draw comparisons to these works. The resemblance was even closer before the threat of copyright action from Tolkien Enterprises prompted the name changes of hobbit to 'halfling', ent to 'treant', and balrog to 'balor'. For many years, Gygax played down the influence of Tolkien on the development of the game. However, in an interview in 2000, he acknowledged that Tolkien's work had a "strong impact". The "D&D" magic system, in which wizards memorize spells that are used up once cast and must be re-memorized the next day, was heavily influenced by the "Dying Earth" stories and novels of Jack Vance. 
The original alignment system (which grouped all characters and creatures into 'Law', 'Neutrality' and 'Chaos') was derived from the novel "Three Hearts and Three Lions" by Poul Anderson. A troll described in this work influenced the "D&D" definition of that monster. Other influences include the works of Robert E. Howard, Edgar Rice Burroughs, A. Merritt, H. P. Lovecraft, Fritz Leiber, L. Sprague de Camp, Fletcher Pratt, Roger Zelazny, and Michael Moorcock. Monsters, spells, and magic items used in the game have been inspired by hundreds of individual works such as A. E. van Vogt's "Black Destroyer", Coeurl (the Displacer Beast), Lewis Carroll's "Jabberwocky" (vorpal sword) and the Book of Genesis (the clerical spell 'Blade Barrier' was inspired by the "flaming sword which turned every way" at the gates of Eden). "Dungeons & Dragons" has gone through several revisions. Parallel versions and inconsistent naming practices can make it difficult to distinguish between the different editions. The original "Dungeons & Dragons", now referred to as "OD&D", was a small box set of three booklets published in 1974. It was amateurish in production and assumed the player was familiar with wargaming. Nevertheless, it grew rapidly in popularity, first among wargamers and then expanding to a more general audience of college and high school students. Roughly 1,000 copies of the game were sold in the first year followed by 3,000 in 1975, and much more in the following years. This first set went through many printings and was supplemented with several official additions, such as the original Greyhawk and Blackmoor supplements (both 1975), as well as magazine articles in TSR's official publications and many fanzines. In early 1977, TSR created the first element of a two-pronged strategy that would divide "D&D" for nearly two decades. A "Dungeons & Dragons Basic Set" boxed edition was introduced that cleaned up the presentation of the essential rules, made the system understandable to the general public, and was sold in a package that could be stocked in toy stores. Later in 1977, the first part of "Advanced Dungeons & Dragons" ("AD&D") was published, which brought together the various published rules, options and corrections, then expanded them into a definitive, unified game for hobbyist gamers. TSR marketed them as an introductory game for new players and a more complex game for experienced ones; the "Basic Set" directed players who exhausted the possibilities of that game to switch to the advanced rules. As a result of this parallel development, the basic game included many rules and concepts which contradicted comparable ones in "AD&D". John Eric Holmes, the editor of the basic game, preferred a lighter tone with more room for personal improvisation. "AD&D", on the other hand, was designed to create a tighter, more structured game system than the loose framework of the original game. Between 1977 and 1979, three hardcover rulebooks, commonly referred to as the "core rulebooks", were released: the "Player's Handbook" (PHB), the "Dungeon Master's Guide" (DMG), and the "Monster Manual" (MM). Several supplementary books were published throughout the 1980s, notably "Unearthed Arcana" (1985) that included a large number of new rules. Confusing matters further, the original "D&D" boxed set remained in publication until 1979, since it remained a healthy seller for TSR. 
In the 1980s, the rules for "Advanced Dungeons & Dragons" and "basic" "Dungeons & Dragons" remained separate, each developing along different paths. In 1981, the basic version of "Dungeons & Dragons" was revised by Tom Moldvay to make it even more novice-friendly. It was promoted as a continuation of the original "D&D" tone, whereas "AD&D" was promoted as advancement of the mechanics. An accompanying "Expert Set", originally written by David "Zeb" Cook, allowed players to continue using the simpler ruleset beyond the early levels of play. In 1983, revisions of those sets by Frank Mentzer were released, revising the presentation of the rules to a more tutorial format. These were followed by "Companion" (1983), "Master" (1985), and "Immortals" (1986) sets. Each set covered game play for more powerful characters than the previous. The first four sets were compiled in 1991 as a single hardcover book, the "Dungeons & Dragons Rules Cyclopedia", which was released alongside a new introductory boxed set. "Advanced Dungeons & Dragons 2nd Edition" was published in 1989, again as three core rulebooks; the primary designer was David "Zeb" Cook. The "Monster Manual" was replaced by the "Monstrous Compendium", a loose-leaf binder that was subsequently replaced by the hardcover "Monstrous Manual" in 1993. In 1995, the core rulebooks were slightly revised, although still referred to by TSR as the 2nd Edition, and a series of "Player's Option" manuals were released as optional rulebooks. The release of "AD&D 2nd Edition" deliberately excluded some aspects of the game that had attracted negative publicity. References to demons and devils, sexually suggestive artwork, and playable, evil-aligned character types – such as assassins and half-orcs – were removed. The edition moved away from a theme of 1960s and 1970s "sword and sorcery" fantasy fiction to a mixture of medieval history and mythology. The rules underwent minor changes, including the addition of non-weapon proficiencies – skill-like abilities that originally appeared in 1st Edition supplements. The game's magic spells were divided into schools and spheres. A major difference was the promotion of various game settings beyond that of traditional fantasy. This included blending fantasy with other genres, such as horror (Ravenloft), science fiction (Spelljammer), and apocalyptic (Dark Sun), as well as alternative historical and non-European mythological settings. In 1997, a near-bankrupt TSR was purchased by Wizards of the Coast. Following three years of development, "Dungeons & Dragons" 3rd edition was released in 2000. The new release folded the Basic and Advanced lines back into a single unified game. It was the largest revision of the "D&D" rules to date, and served as the basis for a multi-genre role-playing system designed around 20-sided dice, called the d20 System. The 3rd Edition rules were designed to be internally consistent and less restrictive than previous editions of the game, allowing players more flexibility to create the characters they wanted to play. Skills and feats were introduced into the core rules to encourage further customization of characters. The new rules standardized the mechanics of action resolution and combat. In 2003, "Dungeons & Dragons v.3.5" was released as a revision of the 3rd Edition rules. This release incorporated hundreds of rule changes, mostly minor, and expanded the core rulebooks. 
In early 2005, Wizards of the Coast's R&D team started to develop "Dungeons & Dragons 4th Edition", prompted mainly by the feedback obtained from the "D&D" playing community and a desire to make the game faster, more intuitive, and with a better play experience than under the 3rd Edition. The new game was developed through a number of design phases spanning from May 2005 until its release. "Dungeons & Dragons 4th Edition" was announced at Gen Con in August 2007, and the initial three core books were released June 6, 2008. 4th Edition streamlined the game into a simplified form and introduced numerous rules changes. Many character abilities were restructured into "Powers". These altered the spell-using classes by adding abilities that could be used at will, per encounter, or per day. Likewise, non-magic-using classes were provided with parallel sets of options. Software tools, including player character and monster building programs, became a major part of the game. On January 9, 2012, Wizards of the Coast announced that it was working on a 5th edition of the game. The company planned to take suggestions from players and let them playtest the rules. Public playtesting began on May 24, 2012. At Gen Con 2012 in August, Mike Mearls, lead developer for 5th Edition, said that Wizards of the Coast had received feedback from more than 75,000 playtesters, but that the entire development process would take two years, adding, "I can't emphasize this enough ... we're very serious about taking the time we need to get this right." The release of the 5th Edition, coinciding with "D&D"s 40th anniversary, occurred in the second half of 2014. The game had more than three million players around the world by 1981, and copies of the rules were selling at a rate of about 750,000 per year by 1984. Beginning with a French language edition in 1982, "Dungeons & Dragons" has been translated into many languages beyond the original English. By 2004, consumers had spent more than US$1 billion on "Dungeons & Dragons" products and the game had been played by more than 20 million people. As many as six million people played the game in 2007. The various editions of "Dungeons & Dragons" have won many Origins Awards, including "All Time Best Roleplaying Rules of 1977", "Best Roleplaying Rules of 1989", and "Best Roleplaying Game of 2000" for the three flagship editions of the game. Both "Dungeons & Dragons" and "Advanced Dungeons & Dragons" are Origins Hall of Fame Games inductees as they were deemed sufficiently distinct to merit separate inclusion on different occasions. The independent "Games" magazine placed "Dungeons & Dragons" on their "Games 100" list from 1980 through 1983, then entered the game into the magazine's Hall of Fame in 1984. "Advanced Dungeons & Dragons" was ranked 2nd in the 1996 reader poll of "Arcane" magazine to determine the 50 most popular roleplaying games of all time. Eric Goldberg reviewed "Dungeons & Dragons" in "Ares Magazine" #1, rating it a 6 out of 9. Goldberg commented that ""Dungeons and Dragons" is an impressive achievement based on the concept alone, and also must be credited with cementing the marriage between the fantasy genre and gaming." "Dungeons & Dragons" was the first modern role-playing game and it established many of the conventions that have dominated the genre. Particularly notable are the use of dice as a game mechanic, character record sheets, use of numerical attributes and gamemaster-centered group dynamics. 
Within months of "Dungeons & Dragons"'s release, new role-playing game writers and publishers began releasing their own role-playing games, with most of these being in the fantasy genre. Some of the earliest other role-playing games inspired by "D&D" include "Tunnels & Trolls" (1975), "Empire of the Petal Throne" (1975), and "Chivalry & Sorcery" (1976). The role-playing movement initiated by "D&D" would lead to the release of the science fiction game "Traveller" (1977), the fantasy game "RuneQuest" (1978), and subsequent game systems such as Chaosium's "Call of Cthulhu" (1981), "Champions" (1982), "GURPS" (1986), and "Vampire: The Masquerade" (1991). "Dungeons & Dragons" and the games it influenced fed back into the genre's origin – miniatures wargames – with combat strategy games like "Warhammer Fantasy Battles". "D&D" also had a large impact on modern video games. Director Jon Favreau credits "Dungeons & Dragons" with giving him "... a really strong background in imagination, storytelling, understanding how to create tone and a sense of balance." Early in the game's history, TSR took no action against small publishers' production of "D&D" compatible material, and even licensed Judges Guild to produce "D&D" materials for several years, such as "City State of the Invincible Overlord." This attitude changed in the mid-1980s when TSR took legal action to try to prevent others from publishing compatible material. This angered many fans and led to resentment among the other gaming companies. Although TSR took legal action against several publishers in an attempt to restrict third-party usage, it never brought any court cases to completion, instead settling out of court in every instance. TSR itself ran afoul of intellectual property law in several cases. With the launch of "Dungeons & Dragons"'s 3rd Edition, Wizards of the Coast made the d20 System available under the Open Game License (OGL) and d20 System trademark license. Under these licenses, authors were free to use the d20 System when writing games and game supplements. The OGL and d20 Trademark License made possible new games, some based on licensed products like "Star Wars", and new versions of older games, such as "Call of Cthulhu". With the release of the fourth edition, Wizards of the Coast introduced its Game System License, which represented a significant restriction compared to the very open policies embodied by the OGL. In part as a response to this, some publishers who had previously produced materials in support of the "D&D" product line (such as Paizo Publishing with its "Pathfinder Roleplaying Game") decided to continue supporting the 3rd Edition rules, thereby competing directly with Wizards of the Coast. Others, such as Kenzer & Company, returned to the practice of publishing unlicensed supplements, arguing that copyright law does not allow Wizards of the Coast to restrict third-party usage. During the 2000s, there was a trend towards reviving and recreating older editions of "D&D", known as the Old School Revival, which produced game systems based on earlier editions of "D&D". "Castles & Crusades" (2004), by Troll Lord Games, is a reimagining of early editions that streamlines rules drawn from the OGL. This in turn inspired the creation of "retro-clones", games which more closely recreate the original rule sets, using material placed under the OGL along with non-copyrightable mechanical aspects of the older rules to create a new presentation of the games. 
Alongside the publication of the fifth edition, Wizards of the Coast established a two-pronged licensing approach. The core of the fifth edition rules has been made available under the OGL, while publishers and independent creators have also been given the opportunity to create licensed materials directly for Dungeons & Dragons and associated properties like the Forgotten Realms under a program called the DM's Guild. The DM's Guild does not function under the OGL, but uses a community agreement intended to foster liberal cooperation among content creators. At various times in its history, "Dungeons & Dragons" has received negative publicity, in particular from some Christian groups, for alleged promotion of such practices as devil worship, witchcraft, suicide, and murder, and for the presence of naked breasts in drawings of female humanoids in the original "AD&D" manuals (mainly monsters such as harpies, succubi, etc.). These controversies led TSR to remove many potentially controversial references and artwork when releasing the 2nd Edition of "AD&D". Many of these references, including the use of the names "devils" and "demons", were reintroduced in the 3rd edition. The moral panic over the game led to problems for fans of "D&D" who faced social ostracism, unfair treatment, and false association with the occult and Satanism, regardless of an individual fan's actual religious affiliation and beliefs. "Dungeons & Dragons" has been the subject of rumors regarding players having difficulty separating fantasy from reality, even leading to psychotic episodes. The most notable of these was the saga of James Dallas Egbert III, the facts of which were fictionalized in the novel "Mazes and Monsters" and later made into a TV movie in 1982 starring Tom Hanks. The game was blamed for some of the actions of Chris Pritchard, who was convicted in 1990 of murdering his stepfather. Research by various psychologists, starting with Armando Simon, has concluded that no harmful effects are related to the playing of "D&D". The game's commercial success was a factor that led to lawsuits regarding distribution of royalties between original creators Gygax and Arneson. Gygax later became embroiled in a political struggle for control of TSR, which culminated in a court battle and Gygax's decision to sell his ownership interest in the company in 1985. "D&D"'s commercial success has led to many other related products, including "Dragon" Magazine, "Dungeon" Magazine, an animated television series, a film series, an official role-playing soundtrack, novels, and computer games such as the MMORPG "Dungeons & Dragons Online". Hobby and toy stores sell dice, miniatures, adventures, and other game aids related to "D&D" and its game offspring. "D&D" grew in popularity through the late 1970s and 1980s. Numerous games, films, and cultural references based on "D&D" or "D&D"-like fantasies, characters or adventures have been ubiquitous since the end of the 1970s. "D&D" players are (sometimes pejoratively) portrayed as the epitome of geekdom, and have become the basis of much geek and gamer humor and satire. Famous "D&D" players include Pulitzer Prize-winning author Junot Díaz, professional basketball player Tim Duncan, comedian Stephen Colbert, and actors Vin Diesel and Robin Williams. "D&D" and its fans have been the subject of spoof films, including "Fear of Girls". 
DNA Deoxyribonucleic acid (DNA) is a molecule composed of two chains (made of nucleotides) which coil around each other to form a double helix carrying the genetic instructions used in the growth, development, functioning and reproduction of all known living organisms and many viruses. DNA and ribonucleic acid (RNA) are nucleic acids; alongside proteins, lipids and complex carbohydrates (polysaccharides), nucleic acids are one of the four major types of macromolecules that are essential for all known forms of life. The two DNA strands are also known as polynucleotides since they are composed of simpler monomeric units called nucleotides. Each nucleotide is composed of one of four nitrogen-containing nucleobases (cytosine [C], guanine [G], adenine [A] or thymine [T]), a sugar called deoxyribose, and a phosphate group. The nucleotides are joined to one another in a chain by covalent bonds between the sugar of one nucleotide and the phosphate of the next, resulting in an alternating sugar-phosphate backbone. The nitrogenous bases of the two separate polynucleotide strands are bound together, according to base pairing rules (A with T and C with G), with hydrogen bonds to make double-stranded DNA. The complementary nitrogenous bases are divided into two groups, pyrimidines and purines. In DNA, the pyrimidines are thymine and cytosine; the purines are adenine and guanine. DNA stores biological information. The DNA backbone is resistant to cleavage, and both strands of the double-stranded structure store the same biological information. This information is replicated when the two strands separate. A large part of DNA (more than 98% for humans) is non-coding, meaning that these sections do not serve as patterns for protein sequences. The two strands of DNA run in opposite directions to each other and are thus antiparallel. Attached to each sugar is one of four types of nucleobases (informally, "bases"). It is the sequence of these four nucleobases along the backbone that encodes genetic information. RNA strands are created using DNA strands as a template in a process called transcription. Under the genetic code, these RNA strands are translated to specify the sequence of amino acids within proteins in a process called translation. Within eukaryotic cells, DNA is organized into long structures called chromosomes. Before typical cell division these chromosomes are duplicated in the process of DNA replication, providing a complete set of chromosomes for each daughter cell. Eukaryotic organisms (animals, plants, fungi and protists) store most of their DNA inside the cell nucleus and some of their DNA in organelles, such as mitochondria or chloroplasts. In contrast, prokaryotes (bacteria and archaea) store their DNA only in the cytoplasm. Within eukaryotic chromosomes, chromatin proteins such as histones compact and organize DNA. These compact structures guide the interactions between DNA and other proteins, helping control which parts of the DNA are transcribed. DNA was first isolated by Friedrich Miescher in 1869. Its molecular structure was first identified by James Watson and Francis Crick at the Cavendish Laboratory within the University of Cambridge in 1953; their model-building efforts were guided by X-ray diffraction data acquired by Raymond Gosling, a post-graduate student of Rosalind Franklin. DNA is used by researchers as a molecular tool to explore physical laws and theories, such as the ergodic theorem and the theory of elasticity. 
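Because of the base pairing rules just described (A with T, C with G), the sequence of one strand fully determines the sequence of its partner. A minimal Python sketch of that rule, computing the reverse complement (the partner strand read in its own 5′→3′ direction); the example sequence is invented for illustration.

```python
# Watson-Crick pairing rules: A<->T, C<->G.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(strand: str) -> str:
    """Return the complementary strand, read 5'->3' (i.e. reversed)."""
    return "".join(COMPLEMENT[base] for base in reversed(strand.upper()))

seq = "ATGCGTTA"                     # made-up example sequence
print(reverse_complement(seq))       # -> TAACGCAT
```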
The unique material properties of DNA have made it an attractive molecule for material scientists and engineers interested in micro- and nano-fabrication. Among notable advances in this field are DNA origami and DNA-based hybrid materials. DNA is a long polymer made from repeating units called nucleotides. The structure of DNA is dynamic along its length, being capable of coiling into tight loops, and other shapes. In all species it is composed of two helical chains, bound to each other by hydrogen bonds. Both chains are coiled round the same axis, and have the same pitch of 34 ångströms (3.4 nanometres). The pair of chains has a radius of 10 ångströms (1.0 nanometre). According to another study, when measured in a different solution, the DNA chain measured 22 to 26 ångströms wide (2.2 to 2.6 nanometres), and one nucleotide unit measured 3.3 Å (0.33 nm) long. Although each individual nucleotide repeating unit is very small, DNA polymers can be very large molecules containing millions to hundreds of millions of nucleotides. For instance, the DNA in the largest human chromosome, chromosome number 1, consists of approximately 220 million base pairs and would be 85 mm long if straightened. In living organisms, DNA does not usually exist as a single molecule, but instead as a pair of molecules that are held tightly together. These two long strands entwine like vines, in the shape of a double helix. The nucleotide contains both a segment of the backbone of the molecule (which holds the chain together) and a nucleobase (which interacts with the other DNA strand in the helix). A nucleobase linked to a sugar is called a nucleoside and a base linked to a sugar and one or more phosphate groups is called a nucleotide. A polymer comprising multiple linked nucleotides (as in DNA) is called a polynucleotide. The backbone of the DNA strand is made from alternating phosphate and sugar residues. The sugar in DNA is 2-deoxyribose, which is a pentose (five-carbon) sugar. The sugars are joined together by phosphate groups that form phosphodiester bonds between the third and fifth carbon atoms of adjacent sugar rings, which are known as the 3′ and 5′ carbons, the prime symbol being used to distinguish these carbon atoms from those of the base to which the deoxyribose forms a glycosidic bond. When imagining DNA, each phosphoryl is normally considered to "belong" to the nucleotide whose 5′ carbon forms a bond therewith. Any DNA strand therefore normally has one end at which there is a phosphoryl attached to the 5′ carbon of a ribose (the 5′ phosphoryl) and another end at which there is a free hydroxyl attached to the 3′ carbon of a ribose (the 3′ hydroxyl). The orientation of the 3′ and 5′ carbons along the sugar-phosphate backbone confers directionality (sometimes called polarity) to each DNA strand. In a double helix, the direction of the nucleotides in one strand is opposite to their direction in the other strand: the strands are "antiparallel". The asymmetric ends of DNA strands are said to have a directionality of "five prime" (5′) and "three prime" (3′), with the 5′ end having a terminal phosphate group and the 3′ end a terminal hydroxyl group. One major difference between DNA and RNA is the sugar, with the 2-deoxyribose in DNA being replaced by the alternative pentose sugar ribose in RNA. The DNA double helix is stabilized primarily by two forces: hydrogen bonds between nucleotides and base-stacking interactions among aromatic nucleobases. 
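The figures above (a rise of roughly 3.3–3.4 Å per nucleotide and about 10 base pairs per helical turn) let the stretched-out length of a molecule be estimated directly from its base-pair count. A minimal Python sketch under those assumptions; the 5,000 bp plasmid is just an illustrative size, not a value from the text.

```python
# Estimate the stretched-out length of a B-DNA double helix from its
# base-pair count, using the figures quoted above:
#   rise per base pair ~0.34 nm (3.4 Å), ~10 base pairs per helical turn.
RISE_PER_BP_NM = 0.34      # axial rise per base pair, nanometres
BP_PER_TURN = 10           # base pairs per full helical turn

def helix_dimensions(n_bp: int) -> tuple[float, float]:
    """Return (length in micrometres, number of helical turns)."""
    length_um = n_bp * RISE_PER_BP_NM / 1000.0   # nm -> um
    turns = n_bp / BP_PER_TURN
    return length_um, turns

# Example: a hypothetical 5,000 bp plasmid (an assumed, illustrative size)
length_um, turns = helix_dimensions(5_000)
print(f"{length_um:.1f} um long, about {turns:.0f} helical turns")
# -> roughly 1.7 um and 500 turns
```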
In the aqueous environment of the cell, the conjugated bonds of nucleotide bases align perpendicular to the axis of the DNA molecule, minimizing their interaction with the solvation shell. The four bases found in DNA are adenine (A), cytosine (C), guanine (G) and thymine (T). These four bases are attached to the sugar-phosphate backbone to form the complete nucleotide; adenosine monophosphate is one example. Adenine pairs with thymine and guanine pairs with cytosine, and these pairings are conventionally written as A-T and G-C base pairs. The nucleobases are classified into two types: the purines, A and G, which are fused five- and six-membered heterocyclic compounds, and the pyrimidines, the six-membered rings C and T. A fifth pyrimidine nucleobase, uracil (U), usually takes the place of thymine in RNA and differs from thymine by lacking a methyl group on its ring. In addition to RNA and DNA, many artificial nucleic acid analogues have been created to study the properties of nucleic acids, or for use in biotechnology. Uracil is not usually found in DNA, occurring only as a breakdown product of cytosine. However, in several bacteriophages ("Bacillus subtilis" bacteriophages PBS1 and PBS2 and the "Yersinia" bacteriophage piR1-37), thymine has been replaced by uracil. Another phage, Staphylococcal phage S6, has been identified with a genome where thymine has been replaced by uracil. 5-hydroxymethyldeoxyuridine (hm5dU) is also known to replace thymidine in several genomes, including the "Bacillus" phages SPO1, ϕe, SP8, H1, 2C and SP82. Another modified uracil, 5-dihydroxypentauracil, has also been described. Base J (beta-d-glucopyranosyloxymethyluracil), a modified form of uracil, is also found in several organisms: the flagellates "Diplonema" and "Euglena", and all the kinetoplastid genera. Biosynthesis of J occurs in two steps: in the first step, a specific thymidine in DNA is converted into hydroxymethyldeoxyuridine; in the second, HOMedU is glycosylated to form J. Proteins that bind specifically to this base have been identified. These proteins appear to be distant relatives of the Tet1 oncogene that is involved in the pathogenesis of acute myeloid leukemia. J appears to act as a termination signal for RNA polymerase II. In 1976, the bacteriophage S-2L, which infects species of the genus "Synechocystis", was found to have all the adenosine bases within its genome replaced by 2,6-diaminopurine. In 2016, deoxyarchaeosine was found to be present in the genomes of several bacteria and the "Escherichia" phage 9g. Modified bases also occur in DNA. The first of these to be recognised was 5-methylcytosine, which was found in the genome of "Mycobacterium tuberculosis" in 1925. The complete replacement of cytosine by 5-glycosylhydroxymethylcytosine in T-even phages (T2, T4 and T6) was observed in 1953. In the genomes of the Xanthomonas oryzae bacteriophage Xp12 and halovirus FH, the full complement of cytosine has been replaced by 5-methylcytosine. N6-methyladenine was discovered to be present in DNA in 1955. N6-carbamoyl-methyladenine was described in 1975. 7-methylguanine was described in 1976. N4-methylcytosine in DNA was described in 1983. In 1985, 5-hydroxycytosine was found in the genomes of the Rhizobium phages RL38JI and N17. α-putrescinylthymine occurs in both the genomes of the "Delftia" phage ΦW-14 and the "Bacillus" phage SP10. α-glutamylthymidine is found in the Bacillus phage SP01 and 5-dihydroxypentyluracil is found in the Bacillus phage SP15. The reason for the presence of these non-canonical bases in DNA is not known. 
It seems likely that at least part of the reason for their presence in bacterial viruses (phages) is to avoid the restriction enzymes present in bacteria. This enzyme system acts at least in part as a molecular immune system protecting bacteria from infection by viruses. This does not appear to be the entire story. Four modifications to the cytosine residues in human DNA have been reported. These modifications are the addition of methyl (CH₃–), hydroxymethyl (CH₂OH–), formyl (CHO–) and carboxyl (COOH–) groups. These modifications are thought to have regulatory functions. Uracil is found in the centromeric regions of at least two human chromosomes (6 and 11). Seventeen non-canonical bases are known to occur in DNA. Most of these are modifications of the canonical bases plus uracil. Twin helical strands form the DNA backbone. Another double helix may be found tracing the spaces, or grooves, between the strands. These voids are adjacent to the base pairs and may provide a binding site. As the strands are not symmetrically located with respect to each other, the grooves are unequally sized. One groove, the major groove, is 22 Å wide and the other, the minor groove, is 12 Å wide. The width of the major groove means that the edges of the bases are more accessible in the major groove than in the minor groove. As a result, proteins such as transcription factors that can bind to specific sequences in double-stranded DNA usually make contact with the sides of the bases exposed in the major groove. This situation varies in unusual conformations of DNA within the cell (see below), but the major and minor grooves are always named to reflect the differences in size that would be seen if the DNA is twisted back into the ordinary B form. In a DNA double helix, each type of nucleobase on one strand bonds with just one type of nucleobase on the other strand. This is called complementary base pairing. Here, purines form hydrogen bonds to pyrimidines, with adenine bonding only to thymine in two hydrogen bonds, and cytosine bonding only to guanine in three hydrogen bonds. This arrangement of two nucleotides binding together across the double helix is called a Watson-Crick base pair. Another type of base pairing is Hoogsteen base pairing, where two hydrogen bonds form between guanine and cytosine. As hydrogen bonds are not covalent, they can be broken and rejoined relatively easily. The two strands of DNA in a double helix can thus be pulled apart like a zipper, either by a mechanical force or high temperature. As a result of this base pair complementarity, all the information in the double-stranded sequence of a DNA helix is duplicated on each strand, which is vital in DNA replication. This reversible and specific interaction between complementary base pairs is critical for all the functions of DNA in living organisms. The two types of base pairs form different numbers of hydrogen bonds, AT forming two hydrogen bonds, and GC forming three hydrogen bonds. DNA with high GC-content is more stable than DNA with low GC-content. As noted above, most DNA molecules are actually two polymer strands, bound together in a helical fashion by noncovalent bonds; this double-stranded (dsDNA) structure is maintained largely by the intrastrand base stacking interactions, which are strongest for G,C stacks. The two strands can come apart – a process known as melting – to form two single-stranded DNA (ssDNA) molecules. 
Melting occurs at high temperature, low salt and high pH (low pH also melts DNA, but since DNA is unstable due to acid depurination, low pH is rarely used). The stability of the dsDNA form depends not only on the GC-content (% G,C base pairs) but also on sequence (since stacking is sequence specific) and also length (longer molecules are more stable). The stability can be measured in various ways; a common way is the "melting temperature", which is the temperature at which 50% of the ds molecules are converted to ss molecules; melting temperature is dependent on ionic strength and the concentration of DNA. As a result, it is both the percentage of GC base pairs and the overall length of a DNA double helix that determine the strength of the association between the two strands of DNA. Long DNA helices with a high GC-content have stronger-interacting strands, while short helices with high AT content have weaker-interacting strands. In biology, parts of the DNA double helix that need to separate easily, such as the TATAAT Pribnow box in some promoters, tend to have a high AT content, making the strands easier to pull apart. In the laboratory, the strength of this interaction can be measured by finding the temperature necessary to break the hydrogen bonds, their melting temperature (also called the Tm value). When all the base pairs in a DNA double helix melt, the strands separate and exist in solution as two entirely independent molecules. These single-stranded DNA molecules have no single common shape, but some conformations are more stable than others. 
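The dependence of duplex stability on GC-content can be made concrete with a quick calculation. A minimal Python sketch: it computes the GC fraction of a sequence and applies the Wallace rule (roughly 2 °C per A/T pair plus 4 °C per G/C pair), a crude rule of thumb that is only meaningful for short oligonucleotides and ignores the salt and DNA-concentration effects mentioned above; the sequence itself is an invented example.

```python
# GC-content and a rough melting-temperature estimate for a short oligonucleotide.
# The Wallace rule (2 degC per A/T, 4 degC per G/C) is only a rule of thumb for
# short primers; real Tm also depends on ionic strength and DNA concentration.
def gc_fraction(seq: str) -> float:
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def wallace_tm(seq: str) -> float:
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2.0 * at + 4.0 * gc

primer = "ATGCGCGCTAAT"   # invented 12-mer
print(f"GC fraction: {gc_fraction(primer):.2f}")         # 0.50
print(f"Approximate Tm: {wallace_tm(primer):.0f} degC")   # 2*6 + 4*6 = 36 degC
```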
DNA exists in many possible conformations that include A-DNA, B-DNA, and Z-DNA forms, although only B-DNA and Z-DNA have been directly observed in functional organisms. The conformation that DNA adopts depends on the hydration level, DNA sequence, the amount and direction of supercoiling, chemical modifications of the bases, the type and concentration of metal ions, and the presence of polyamines in solution. The first published reports of A-DNA X-ray diffraction patterns (and also of B-DNA) used analyses based on Patterson transforms that provided only a limited amount of structural information for oriented fibers of DNA. An alternative analysis was then proposed by Wilkins "et al.", in 1953, for the "in vivo" B-DNA X-ray diffraction-scattering patterns of highly hydrated DNA fibers in terms of squares of Bessel functions. In the same journal, James Watson and Francis Crick presented their molecular modeling analysis of the DNA X-ray diffraction patterns to suggest that the structure was a double helix. Although the "B-DNA form" is most common under the conditions found in cells, it is not a well-defined conformation but a family of related DNA conformations that occur at the high hydration levels present in living cells. Their corresponding X-ray diffraction and scattering patterns are characteristic of molecular paracrystals with a significant degree of disorder. Compared to B-DNA, the A-DNA form is a wider right-handed spiral, with a shallow, wide minor groove and a narrower, deeper major groove. The A form occurs under non-physiological conditions in partly dehydrated samples of DNA, while in the cell it may be produced in hybrid pairings of DNA and RNA strands, and in enzyme-DNA complexes. Segments of DNA where the bases have been chemically modified by methylation may undergo a larger change in conformation and adopt the Z form. Here, the strands turn about the helical axis in a left-handed spiral, the opposite of the more common B form. These unusual structures can be recognized by specific Z-DNA binding proteins and may be involved in the regulation of transcription. For many years exobiologists have proposed the existence of a shadow biosphere, a postulated microbial biosphere of Earth that uses radically different biochemical and molecular processes than currently known life. One of the proposals was the existence of lifeforms that use arsenic instead of phosphorus in DNA. In 2010, this possibility was reported in the bacterium GFAJ-1, though the research was disputed, and evidence now suggests that the bacterium actively prevents the incorporation of arsenic into its DNA backbone and other biomolecules. At the ends of the linear chromosomes are specialized regions of DNA called telomeres. The main function of these regions is to allow the cell to replicate chromosome ends using the enzyme telomerase, as the enzymes that normally replicate DNA cannot copy the extreme 3′ ends of chromosomes. These specialized chromosome caps also help protect the DNA ends, and stop the DNA repair systems in the cell from treating them as damage to be corrected. In human cells, telomeres are usually lengths of single-stranded DNA containing several thousand repeats of a simple TTAGGG sequence. These guanine-rich sequences may stabilize chromosome ends by forming structures of stacked sets of four-base units, rather than the usual base pairs found in other DNA molecules. 
Here, four guanine bases form a flat plate and these flat four-base units then stack on top of each other, to form a stable G-quadruplex structure. These structures are stabilized by hydrogen bonding between the edges of the bases and chelation of a metal ion in the centre of each four-base unit. Other structures can also be formed, with the central set of four bases coming from either a single strand folded around the bases, or several different parallel strands, each contributing one base to the central structure. In addition to these stacked structures, telomeres also form large loop structures called telomere loops, or T-loops. Here, the single-stranded DNA curls around in a long circle stabilized by telomere-binding proteins. At the very end of the T-loop, the single-stranded telomere DNA is held onto a region of double-stranded DNA by the telomere strand disrupting the double-helical DNA and base pairing to one of the two strands. This triple-stranded structure is called a displacement loop or D-loop. In DNA, fraying occurs when non-complementary regions exist at the end of an otherwise complementary double-strand of DNA. However, branched DNA can occur if a third strand of DNA is introduced and contains adjoining regions able to hybridize with the frayed regions of the pre-existing double-strand. Although the simplest example of branched DNA involves only three strands of DNA, complexes involving additional strands and multiple branches are also possible. Branched DNA can be used in nanotechnology to construct geometric shapes, see the section on uses in technology below. The expression of genes is influenced by how the DNA is packaged in chromosomes, in a structure called chromatin. Base modifications can be involved in packaging, with regions that have low or no gene expression usually containing high levels of methylation of cytosine bases. DNA packaging and its influence on gene expression can also occur by covalent modifications of the histone protein core around which DNA is wrapped in the chromatin structure or else by remodeling carried out by chromatin remodeling complexes (see Chromatin remodeling). There is, further, crosstalk between DNA methylation and histone modification, so they can coordinately affect chromatin and gene expression. For one example, cytosine methylation produces 5-methylcytosine, which is important for X-inactivation of chromosomes. The average level of methylation varies between organisms – the worm "Caenorhabditis elegans" lacks cytosine methylation, while vertebrates have higher levels, with up to 1% of their DNA containing 5-methylcytosine. Despite the importance of 5-methylcytosine, it can deaminate to leave a thymine base, so methylated cytosines are particularly prone to mutations. Other base modifications include adenine methylation in bacteria, the presence of 5-hydroxymethylcytosine in the brain, and the glycosylation of uracil to produce the "J-base" in kinetoplastids. DNA can be damaged by many sorts of mutagens, which change the DNA sequence. Mutagens include oxidizing agents, alkylating agents and also high-energy electromagnetic radiation such as ultraviolet light and X-rays. The type of DNA damage produced depends on the type of mutagen. For example, UV light can damage DNA by producing thymine dimers, which are cross-links between pyrimidine bases. On the other hand, oxidants such as free radicals or hydrogen peroxide produce multiple forms of damage, including base modifications, particularly of guanosine, and double-strand breaks. 
A typical human cell contains about 150,000 bases that have suffered oxidative damage. Of these oxidative lesions, the most dangerous are double-strand breaks, as these are difficult to repair and can produce point mutations, insertions, deletions from the DNA sequence, and chromosomal translocations. These mutations can cause cancer. Because of inherent limits in the DNA repair mechanisms, if humans lived long enough, they would all eventually develop cancer. Naturally occurring DNA damage, caused by normal cellular processes that produce reactive oxygen species, the hydrolytic activities of cellular water, and the like, also occurs frequently. Although most of this damage is repaired, some may remain in any cell despite the action of repair processes. This remaining DNA damage accumulates with age in mammalian postmitotic tissues. This accumulation appears to be an important underlying cause of aging. Many mutagens fit into the space between two adjacent base pairs; this is called "intercalation". Most intercalators are aromatic and planar molecules; examples include ethidium bromide, acridines, daunomycin, and doxorubicin. For an intercalator to fit between base pairs, the bases must separate, distorting the DNA strands by unwinding of the double helix. This inhibits both transcription and DNA replication, causing toxicity and mutations. As a result, DNA intercalators may be carcinogens, and in the case of thalidomide, a teratogen. Others such as benzo["a"]pyrene diol epoxide and aflatoxin form DNA adducts that induce errors in replication. Nevertheless, due to their ability to inhibit DNA transcription and replication, other similar toxins are also used in chemotherapy to inhibit rapidly growing cancer cells. DNA usually occurs as linear chromosomes in eukaryotes, and circular chromosomes in prokaryotes. The set of chromosomes in a cell makes up its genome; the human genome has approximately 3 billion base pairs of DNA arranged into 46 chromosomes. The information carried by DNA is held in the sequence of pieces of DNA called genes. Transmission of genetic information in genes is achieved via complementary base pairing. For example, in transcription, when a cell uses the information in a gene, the DNA sequence is copied into a complementary RNA sequence through the attraction between the DNA and the correct RNA nucleotides. Usually, this RNA copy is then used to make a matching protein sequence in a process called translation, which depends on the same interaction between RNA nucleotides. Alternatively, a cell may simply copy its genetic information in a process called DNA replication. The details of these functions are covered in other articles; here the focus is on the interactions between DNA and other molecules that mediate the function of the genome. Genomic DNA is tightly and orderly packed in a process called DNA condensation, to fit within the small available volume of the cell. In eukaryotes, DNA is located in the cell nucleus, with small amounts in mitochondria and chloroplasts. In prokaryotes, the DNA is held within an irregularly shaped body in the cytoplasm called the nucleoid. The genetic information in a genome is held within genes, and the complete set of this information in an organism is called its genotype. A gene is a unit of heredity and is a region of DNA that influences a particular characteristic in an organism. 
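Transcription, as described above, is simply complementary base pairing applied to an RNA copy. A minimal Python sketch of that step, assuming the input is the template strand read 3′→5′ so the RNA comes out 5′→3′; the sequence is an invented example.

```python
# Transcription as complementary base pairing: the RNA is built by pairing
# each template-strand base with its RNA partner (A->U, T->A, G->C, C->G).
DNA_TO_RNA = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template_3to5: str) -> str:
    """Given the template strand read 3'->5', return the mRNA read 5'->3'."""
    return "".join(DNA_TO_RNA[base] for base in template_3to5.upper())

template = "TACGGATCT"          # invented template strand, written 3'->5'
print(transcribe(template))     # -> AUGCCUAGA
```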
Genes contain an open reading frame that can be transcribed, and regulatory sequences such as promoters and enhancers, which control transcription of the open reading frame. In many species, only a small fraction of the total sequence of the genome encodes protein. For example, only about 1.5% of the human genome consists of protein-coding exons, with over 50% of human DNA consisting of non-coding repetitive sequences. The reasons for the presence of so much noncoding DNA in eukaryotic genomes and the extraordinary differences in genome size, or "C-value", among species, represent a long-standing puzzle known as the "C-value enigma". However, some DNA sequences that do not code protein may still encode functional non-coding RNA molecules, which are involved in the regulation of gene expression. Some noncoding DNA sequences play structural roles in chromosomes. Telomeres and centromeres typically contain few genes but are important for the function and stability of chromosomes. An abundant form of noncoding DNA in humans is the pseudogene, a copy of a gene that has been disabled by mutation. These sequences are usually just molecular fossils, although they can occasionally serve as raw genetic material for the creation of new genes through the process of gene duplication and divergence. A gene is a sequence of DNA that contains genetic information and can influence the phenotype of an organism. Within a gene, the sequence of bases along a DNA strand defines a messenger RNA sequence, which then defines one or more protein sequences. The relationship between the nucleotide sequences of genes and the amino-acid sequences of proteins is determined by the rules of translation, known collectively as the genetic code. The genetic code consists of three-letter 'words' called "codons" formed from a sequence of three nucleotides (e.g. ACT, CAG, TTT). In transcription, the codons of a gene are copied into messenger RNA by RNA polymerase. This RNA copy is then decoded by a ribosome that reads the RNA sequence by base-pairing the messenger RNA to transfer RNA, which carries amino acids. Since there are 4 bases read in 3-letter combinations, there are 64 possible codons (4³ combinations). These encode the twenty standard amino acids, giving most amino acids more than one possible codon. There are also three 'stop' or 'nonsense' codons signifying the end of the coding region; these are the TAA, TGA, and TAG codons. Cell division is essential for an organism to grow, but, when a cell divides, it must replicate the DNA in its genome so that the two daughter cells have the same genetic information as their parent. The double-stranded structure of DNA provides a simple mechanism for DNA replication. Here, the two strands are separated and then each strand's complementary DNA sequence is recreated by an enzyme called DNA polymerase. This enzyme makes the complementary strand by finding the correct base through complementary base pairing and bonding it onto the original strand. As DNA polymerases can only extend a DNA strand in a 5′ to 3′ direction, different mechanisms are used to copy the antiparallel strands of the double helix. In this way, the base on the old strand dictates which base appears on the new strand, and the cell ends up with a perfect copy of its DNA. Naked extracellular DNA (eDNA), most of it released by cell death, is nearly ubiquitous in the environment. Its concentration in soil may be as high as 2 μg/L, and its concentration in natural aquatic environments may be as high as 88 μg/L. 
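The codon logic above (non-overlapping triplets, 4³ = 64 possibilities, stop codons ending the reading) can be sketched in a few lines of Python. The codon table below is deliberately partial, just enough to decode a short invented message; a complete table would list all 64 codons.

```python
# Reading a messenger RNA sequence as non-overlapping codons (triplets) and
# translating it with a deliberately partial codon table (a full table has 64 entries).
PARTIAL_CODON_TABLE = {
    "AUG": "Met",  # also the usual start codon
    "GCU": "Ala",
    "AAA": "Lys",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna: str) -> list[str]:
    peptide = []
    for i in range(0, len(mrna) - 2, 3):          # step through the codons
        codon = mrna[i:i + 3]
        amino_acid = PARTIAL_CODON_TABLE.get(codon, "?")
        if amino_acid == "STOP":                  # a stop codon ends the reading
            break
        peptide.append(amino_acid)
    return peptide

print(4 ** 3)                          # 64 possible codons
print(translate("AUGGCUAAAUAA"))       # -> ['Met', 'Ala', 'Lys']
```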
Various possible functions have been proposed for eDNA: it may be involved in horizontal gene transfer; it may provide nutrients; and it may act as a buffer to recruit or titrate ions or antibiotics. Extracellular DNA acts as a functional extracellular matrix component in the biofilms of several bacterial species. It may act as a recognition factor to regulate the attachment and dispersal of specific cell types in the biofilm; it may contribute to biofilm formation; and it may contribute to the biofilm's physical strength and resistance to biological stress. Cell-free fetal DNA is found in the blood of the mother, and can be sequenced to determine a great deal of information about the developing fetus. All the functions of DNA depend on interactions with proteins. These protein interactions can be non-specific, or the protein can bind specifically to a single DNA sequence. Enzymes can also bind to DNA and of these, the polymerases that copy the DNA base sequence in transcription and DNA replication are particularly important. Structural proteins that bind DNA are well-understood examples of non-specific DNA-protein interactions. Within chromosomes, DNA is held in complexes with structural proteins. These proteins organize the DNA into a compact structure called chromatin. In eukaryotes, this structure involves DNA binding to a complex of small basic proteins called histones, while in prokaryotes multiple types of proteins are involved. The histones form a disk-shaped complex called a nucleosome, which contains two complete turns of double-stranded DNA wrapped around its surface. These non-specific interactions are formed through basic residues in the histones, making ionic bonds to the acidic sugar-phosphate backbone of the DNA, and are thus largely independent of the base sequence. Chemical modifications of these basic amino acid residues include methylation, phosphorylation, and acetylation. These chemical changes alter the strength of the interaction between the DNA and the histones, making the DNA more or less accessible to transcription factors and changing the rate of transcription. Other non-specific DNA-binding proteins in chromatin include the high-mobility group proteins, which bind to bent or distorted DNA. These proteins are important in bending arrays of nucleosomes and arranging them into the larger structures that make up chromosomes. A distinct group of DNA-binding proteins is the DNA-binding proteins that specifically bind single-stranded DNA. In humans, replication protein A is the best-understood member of this family and is used in processes where the double helix is separated, including DNA replication, recombination, and DNA repair. These binding proteins seem to stabilize single-stranded DNA and protect it from forming stem-loops or being degraded by nucleases. In contrast, other proteins have evolved to bind to particular DNA sequences. The most intensively studied of these are the various transcription factors, which are proteins that regulate transcription. Each transcription factor binds to one particular set of DNA sequences and activates or inhibits the transcription of genes that have these sequences close to their promoters. The transcription factors do this in two ways. Firstly, they can bind the RNA polymerase responsible for transcription, either directly or through other mediator proteins; this locates the polymerase at the promoter and allows it to begin transcription. 
Alternatively, transcription factors can bind enzymes that modify the histones at the promoter. This changes the accessibility of the DNA template to the polymerase. As these DNA targets can occur throughout an organism's genome, changes in the activity of one type of transcription factor can affect thousands of genes. Consequently, these proteins are often the targets of the signal transduction processes that control responses to environmental changes or cellular differentiation and development. The specificity of these transcription factors' interactions with DNA comes from the proteins making multiple contacts to the edges of the DNA bases, allowing them to "read" the DNA sequence. Most of these base-interactions are made in the major groove, where the bases are most accessible. Nucleases are enzymes that cut DNA strands by catalyzing the hydrolysis of the phosphodiester bonds. Nucleases that hydrolyse nucleotides from the ends of DNA strands are called exonucleases, while endonucleases cut within strands. The most frequently used nucleases in molecular biology are the restriction endonucleases, which cut DNA at specific sequences. For instance, the EcoRV enzyme recognizes the 6-base sequence 5′-GATATC-3′ and cuts it at the centre of the site. In nature, these enzymes protect bacteria against phage infection by digesting the phage DNA when it enters the bacterial cell, acting as part of the restriction modification system. In technology, these sequence-specific nucleases are used in molecular cloning and DNA fingerprinting. Enzymes called DNA ligases can rejoin cut or broken DNA strands. Ligases are particularly important in lagging strand DNA replication, as they join together the short segments of DNA produced at the replication fork into a complete copy of the DNA template. They are also used in DNA repair and genetic recombination. Topoisomerases are enzymes with both nuclease and ligase activity. These proteins change the amount of supercoiling in DNA. Some of these enzymes work by cutting the DNA helix and allowing one section to rotate, thereby reducing its level of supercoiling; the enzyme then seals the DNA break. Other types of these enzymes are capable of cutting one DNA helix and then passing a second strand of DNA through this break, before rejoining the helix. Topoisomerases are required for many processes involving DNA, such as DNA replication and transcription. Helicases are proteins that are a type of molecular motor. They use the chemical energy in nucleoside triphosphates, predominantly adenosine triphosphate (ATP), to break hydrogen bonds between bases and unwind the DNA double helix into single strands. These enzymes are essential for most processes where enzymes need to access the DNA bases. Polymerases are enzymes that synthesize polynucleotide chains from nucleoside triphosphates. The sequence of their products is created based on existing polynucleotide chains, which are called "templates". These enzymes function by repeatedly adding a nucleotide to the 3′ hydroxyl group at the end of the growing polynucleotide chain. As a consequence, all polymerases work in a 5′ to 3′ direction. In the active site of these enzymes, the incoming nucleoside triphosphate base-pairs to the template: this allows polymerases to accurately synthesize the complementary strand of their template. Polymerases are classified according to the type of template that they use. In DNA replication, DNA-dependent DNA polymerases make copies of DNA polynucleotide chains. 
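A restriction digest is easy to mimic in code: scan a sequence for every occurrence of the recognition site and cut there. A minimal Python sketch using the EcoRV site quoted above (5′-GATATC-3′), with the cut placed at the centre of the site as described; the input sequence is invented for illustration.

```python
# Simulate a simple restriction digest: find every EcoRV recognition site
# (5'-GATATC-3') and cut the sequence at the centre of each site.
SITE = "GATATC"
CUT_OFFSET = 3          # EcoRV cuts between GAT and ATC, leaving blunt ends

def digest(seq: str, site: str = SITE, cut_offset: int = CUT_OFFSET) -> list[str]:
    seq = seq.upper()
    fragments, last_cut, pos = [], 0, seq.find(site)
    while pos != -1:
        cut = pos + cut_offset
        fragments.append(seq[last_cut:cut])
        last_cut = cut
        pos = seq.find(site, pos + 1)
    fragments.append(seq[last_cut:])
    return fragments

dna = "AAAGATATCTTTTGATATCGG"      # invented sequence containing two EcoRV sites
print(digest(dna))                  # -> ['AAAGAT', 'ATCTTTTGAT', 'ATCGG']
```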
To preserve biological information, it is essential that the sequence of bases in each copy are precisely complementary to the sequence of bases in the template strand. Many DNA polymerases have a proofreading activity. Here, the polymerase recognizes the occasional mistakes in the synthesis reaction by the lack of base pairing between the mismatched nucleotides. If a mismatch is detected, a 3′ to 5′ exonuclease activity is activated and the incorrect base removed. In most organisms, DNA polymerases function in a large complex called the replisome that contains multiple accessory subunits, such as the DNA clamp or helicases. RNA-dependent DNA polymerases are a specialized class of polymerases that copy the sequence of an RNA strand into DNA. They include reverse transcriptase, which is a viral enzyme involved in the infection of cells by retroviruses, and telomerase, which is required for the replication of telomeres. For example, HIV reverse transcriptase is an enzyme for AIDS virus replication. Telomerase is an unusual polymerase because it contains its own RNA template as part of its structure. It synthesizes telomeres at the ends of chromosomes. Telomeres prevent fusion of the ends of neighboring chromosomes and protect chromosome ends from damage. Transcription is carried out by a DNA-dependent RNA polymerase that copies the sequence of a DNA strand into RNA. To begin transcribing a gene, the RNA polymerase binds to a sequence of DNA called a promoter and separates the DNA strands. It then copies the gene sequence into a messenger RNA transcript until it reaches a region of DNA called the terminator, where it halts and detaches from the DNA. As with human DNA-dependent DNA polymerases, RNA polymerase II, the enzyme that transcribes most of the genes in the human genome, operates as part of a large protein complex with multiple regulatory and accessory subunits. A DNA helix usually does not interact with other segments of DNA, and in human cells, the different chromosomes even occupy separate areas in the nucleus called "chromosome territories". This physical separation of different chromosomes is important for the ability of DNA to function as a stable repository for information, as one of the few times chromosomes interact is in chromosomal crossover which occurs during sexual reproduction, when genetic recombination occurs. Chromosomal crossover is when two DNA helices break, swap a section and then rejoin. Recombination allows chromosomes to exchange genetic information and produces new combinations of genes, which increases the efficiency of natural selection and can be important in the rapid evolution of new proteins. Genetic recombination can also be involved in DNA repair, particularly in the cell's response to double-strand breaks. The most common form of chromosomal crossover is homologous recombination, where the two chromosomes involved share very similar sequences. Non-homologous recombination can be damaging to cells, as it can produce chromosomal translocations and genetic abnormalities. The recombination reaction is catalyzed by enzymes known as recombinases, such as RAD51. The first step in recombination is a double-stranded break caused by either an endonuclease or damage to the DNA. A series of steps catalyzed in part by the recombinase then leads to joining of the two helices by at least one Holliday junction, in which a segment of a single strand in each helix is annealed to the complementary strand in the other helix. 
The Holliday junction is a tetrahedral junction structure that can be moved along the pair of chromosomes, swapping one strand for another. The recombination reaction is then halted by cleavage of the junction and re-ligation of the released DNA. Only strands of like polarity exchange DNA during recombination. There are two types of cleavage: east-west cleavage and north-south cleavage. The north-south cleavage nicks both strands of DNA, while the east-west cleavage leaves one strand of DNA intact. The formation of a Holliday junction during recombination makes genetic diversity possible, allows genes to be exchanged between chromosomes, and permits the expression of wild-type viral genomes. DNA contains the genetic information that allows all modern living things to function, grow and reproduce. However, it is unclear how long in the 4-billion-year history of life DNA has performed this function, as it has been proposed that the earliest forms of life may have used RNA as their genetic material. RNA may have acted as the central part of early cell metabolism as it can both transmit genetic information and carry out catalysis as part of ribozymes. This ancient RNA world, where nucleic acid would have been used for both catalysis and genetics, may have influenced the evolution of the current genetic code based on four nucleotide bases. This would have occurred because the number of different bases in such an organism is a trade-off between a small number of bases increasing replication accuracy and a large number of bases increasing the catalytic efficiency of ribozymes. However, there is no direct evidence of ancient genetic systems, as recovery of DNA from most fossils is impossible because DNA survives in the environment for less than one million years, and slowly degrades into short fragments in solution. Claims for older DNA have been made, most notably a report of the isolation of a viable bacterium from a salt crystal 250 million years old, but these claims are controversial. Building blocks of DNA (adenine, guanine, and related organic molecules) may have been formed extraterrestrially in outer space. Complex DNA and RNA organic compounds of life, including uracil, cytosine, and thymine, have also been formed in the laboratory under conditions mimicking those found in outer space, using starting chemicals, such as pyrimidine, found in meteorites. Pyrimidine, like polycyclic aromatic hydrocarbons (PAHs), the most carbon-rich chemical found in the universe, may have been formed in red giants or in interstellar cosmic dust and gas clouds. Methods have been developed to purify DNA from organisms, such as phenol-chloroform extraction, and to manipulate it in the laboratory, such as restriction digests and the polymerase chain reaction. Modern biology and biochemistry make intensive use of these techniques in recombinant DNA technology. Recombinant DNA is a man-made DNA sequence that has been assembled from other DNA sequences. Such sequences can be transformed into organisms in the form of plasmids or, in the appropriate format, by using a viral vector. The genetically modified organisms produced can be used to make products such as recombinant proteins, used in medical research, or be grown in agriculture. Forensic scientists can use DNA in blood, semen, skin, saliva or hair found at a crime scene to identify the matching DNA of an individual, such as a perpetrator. This process is formally termed DNA profiling, but may also be called "genetic fingerprinting". 
In DNA profiling, the lengths of variable sections of repetitive DNA, such as short tandem repeats and minisatellites, are compared between people. This method is usually an extremely reliable technique for identifying matching DNA. However, identification can be complicated if the scene is contaminated with DNA from several people. DNA profiling was developed in 1984 by British geneticist Sir Alec Jeffreys, and first used in forensic science to convict Colin Pitchfork in the 1988 Enderby murders case. The development of forensic science, and the ability now to obtain genetic matching on minute samples of blood, skin, saliva, or hair, has led to the re-examination of many cases. Evidence can now be uncovered that was scientifically impossible at the time of the original examination. Combined with the removal of the double jeopardy law in some places, this can allow cases to be reopened where prior trials have failed to produce sufficient evidence to convince a jury. People charged with serious crimes may be required to provide a sample of DNA for matching purposes. The most obvious defense to DNA matches obtained forensically is to claim that cross-contamination of evidence has occurred. This has resulted in meticulous, strict handling procedures for new cases of serious crime. DNA profiling is also used successfully to positively identify victims of mass casualty incidents, bodies or body parts in serious accidents, and individual victims in mass war graves, via matching to family members. DNA profiling is also used in DNA paternity testing to determine if someone is the biological parent or grandparent of a child, with the probability of parentage typically being 99.99% when the alleged parent is biologically related to the child. Normally, paternity testing occurs after birth, but new methods make it possible to test paternity while a mother is still pregnant. Deoxyribozymes, also called DNAzymes or catalytic DNA, were first discovered in 1994. They are mostly single-stranded DNA sequences isolated from a large pool of random DNA sequences through a combinatorial approach called in vitro selection or systematic evolution of ligands by exponential enrichment (SELEX). DNAzymes catalyze a variety of chemical reactions, including RNA-DNA cleavage, RNA-DNA ligation, amino acid phosphorylation and dephosphorylation, and carbon-carbon bond formation. DNAzymes can enhance the catalytic rate of chemical reactions up to 100,000,000,000-fold over the uncatalyzed reaction. The most extensively studied class of DNAzymes is the RNA-cleaving type, which has been used to detect different metal ions and to design therapeutic agents. Several metal-specific DNAzymes have been reported, including the GR-5 DNAzyme (lead-specific), the CA1-3 DNAzymes (copper-specific), the 39E DNAzyme (uranyl-specific) and the NaA43 DNAzyme (sodium-specific). The NaA43 DNAzyme, which is reported to be more than 10,000-fold selective for sodium over other metal ions, was used to make a real-time sodium sensor in living cells. Bioinformatics involves the development of techniques to store, data mine, search and manipulate biological data, including DNA nucleic acid sequence data. These have led to widely applied advances in computer science, especially string searching algorithms, machine learning, and database theory. String searching or matching algorithms, which find an occurrence of a sequence of letters inside a larger sequence of letters, were developed to search for specific sequences of nucleotides. 
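DNA profiling, as described above, compares how many times a short motif is tandemly repeated at a given locus, and that count can be obtained with exactly the kind of string matching the paragraph mentions. A minimal Python sketch that finds the longest tandem run of a repeat motif in a sequence; the motif "GATA" and the sequence are illustrative only, not a real forensic locus.

```python
# Count the longest tandem run of a short repeat motif in a sequence,
# the kind of measurement compared between individuals in DNA profiling.
def longest_tandem_repeat(seq: str, motif: str) -> int:
    seq, motif = seq.upper(), motif.upper()
    best = 0
    for start in range(len(seq)):
        count, pos = 0, start
        while seq.startswith(motif, pos):   # extend the run motif by motif
            count += 1
            pos += len(motif)
        best = max(best, count)
    return best

# Illustrative only: "GATA" stands in for an STR motif, the sequence is invented.
sample = "TTGATAGATAGATAGATACCGATAGG"
print(longest_tandem_repeat(sample, "GATA"))   # -> 4
```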
The DNA sequence may be aligned with other DNA sequences to identify homologous sequences and locate the specific mutations that make them distinct. These techniques, especially multiple sequence alignment, are used in studying phylogenetic relationships and protein function. Data sets representing entire genomes' worth of DNA sequences, such as those produced by the Human Genome Project, are difficult to use without the annotations that identify the locations of genes and regulatory elements on each chromosome. Regions of DNA sequence that have the characteristic patterns associated with protein- or RNA-coding genes can be identified by gene finding algorithms, which allow researchers to predict the presence of particular gene products and their possible functions in an organism even before they have been isolated experimentally. Entire genomes may also be compared, which can shed light on the evolutionary history of a particular organism and permit the examination of complex evolutionary events. DNA nanotechnology uses the unique molecular recognition properties of DNA and other nucleic acids to create self-assembling branched DNA complexes with useful properties. DNA is thus used as a structural material rather than as a carrier of biological information. This has led to the creation of two-dimensional periodic lattices (both tile-based and using the "DNA origami" method) and three-dimensional structures in the shapes of polyhedra. Nanomechanical devices and algorithmic self-assembly have also been demonstrated, and these DNA structures have been used to template the arrangement of other molecules such as gold nanoparticles and streptavidin proteins. Because DNA collects mutations over time, which are then inherited, it contains historical information, and, by comparing DNA sequences, geneticists can infer the evolutionary history of organisms, their phylogeny. This field of phylogenetics is a powerful tool in evolutionary biology. If DNA sequences within a species are compared, population geneticists can learn the history of particular populations. This can be used in studies ranging from ecological genetics to anthropology. In a paper published in "Nature" in January 2013, scientists from the European Bioinformatics Institute and Agilent Technologies proposed a mechanism to use DNA's ability to code information as a means of digital data storage. The group was able to encode 739 kilobytes of data into DNA code, synthesize the actual DNA, then sequence the DNA and decode the information back to its original form, with a reported 100% accuracy. The encoded information consisted of text files and audio files. A prior experiment was published in August 2012. It was conducted by researchers at Harvard University, where the text of a 54,000-word book was encoded in DNA. Moreover, in living cells, the storage can be made active by enzymes. Light-gated protein domains fused to DNA processing enzymes are suitable for that task "in vitro". Fluorescent exonucleases can transmit the output according to the nucleotide they have read. DNA was first isolated by the Swiss physician Friedrich Miescher who, in 1869, discovered a microscopic substance in the pus of discarded surgical bandages. As it resided in the nuclei of cells, he called it "nuclein". In 1878, Albrecht Kossel isolated the non-protein component of "nuclein", nucleic acid, and later isolated its five primary nucleobases. 
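The digital-storage idea described above rests on mapping bits to bases. The published schemes are considerably more elaborate (they add error correction and avoid long runs of the same base), but a deliberately simplified Python sketch using a plain 2-bits-per-base mapping shows the round trip; the mapping and message are assumptions for illustration, not the scheme used in the cited studies.

```python
# A deliberately simplified bits-to-bases round trip (2 bits per base).
# Real DNA data-storage schemes add error correction and avoid homopolymer runs.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

message = b"DNA"                  # illustrative payload
strand = encode(message)
print(strand)                     # 12 bases encoding 3 bytes
print(decode(strand) == message)  # True: the round trip is lossless
```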
In 1909, Phoebus Levene identified the base, sugar, and phosphate nucleotide unit of RNA (then named "yeast nucleic acid"). In 1929, Levene identified deoxyribose sugar in "thymus nucleic acid" (DNA). Levene suggested that DNA consisted of a string of four nucleotide units linked together through the phosphate groups (the "tetranucleotide hypothesis"). Levene thought the chain was short and the bases repeated in a fixed order. In 1927, Nikolai Koltsov proposed that inherited traits would be transmitted via a "giant hereditary molecule" made up of "two mirror strands that would replicate in a semi-conservative fashion using each strand as a template". In 1928, Frederick Griffith discovered in his experiment that traits of the "smooth" form of "Pneumococcus" could be transferred to the "rough" form of the same bacteria by mixing killed "smooth" bacteria with the live "rough" form. This system provided the first clear suggestion that DNA carries genetic information. In 1933, while studying virgin sea urchin eggs, Jean Brachet suggested that DNA is found in the cell nucleus and that RNA is present exclusively in the cytoplasm. At the time, "yeast nucleic acid" (RNA) was thought to occur only in plants, while "thymus nucleic acid" (DNA) only in animals. The latter was thought to be a tetramer, with the function of buffering cellular pH. In 1937, William Astbury produced the first X-ray diffraction patterns that showed that DNA had a regular structure. In 1943, Oswald Avery, along with coworkers Colin MacLeod and Maclyn McCarty, identified DNA as the transforming principle, supporting Griffith's suggestion (the Avery–MacLeod–McCarty experiment). DNA's role in heredity was confirmed in 1952 when Alfred Hershey and Martha Chase, in the Hershey–Chase experiment, showed that DNA is the genetic material of the T2 phage. Late in 1951, Francis Crick started working with James Watson at the Cavendish Laboratory within the University of Cambridge. In 1953, Watson and Crick suggested what is now accepted as the first correct double-helix model of DNA structure in the journal "Nature". Their double-helix molecular model of DNA was then based on one X-ray diffraction image (labeled as "Photo 51") taken by Rosalind Franklin and Raymond Gosling in May 1952, and the information that the DNA bases are paired. On 28 February 1953 Crick interrupted patrons' lunchtime at The Eagle pub in Cambridge to announce that he and Watson had "discovered the secret of life". Earlier, in February 1953, Linus Pauling and Robert Corey had proposed a model for nucleic acids containing three intertwined chains, with the phosphates near the axis, and the bases on the outside. Experimental evidence supporting the Watson and Crick model was published in a series of five articles in the same issue of "Nature". Of these, Franklin and Gosling's paper was the first publication of their own X-ray diffraction data and original analysis method that partly supported the Watson and Crick model; this issue also contained an article on DNA structure by Maurice Wilkins and two of his colleagues, whose analysis and "in vivo" B-DNA X-ray patterns also supported the presence "in vivo" of the double-helical DNA configurations as proposed by Crick and Watson for their double-helix molecular model of DNA in the prior two pages of "Nature". In 1962, after Franklin's death, Watson, Crick, and Wilkins jointly received the Nobel Prize in Physiology or Medicine; Nobel Prizes are awarded only to living recipients. 
A debate continues about who should receive credit for the discovery. In an influential presentation in 1957, Crick laid out the central dogma of molecular biology, which foretold the relationship between DNA, RNA, and proteins, and articulated the "adaptor hypothesis". Final confirmation of the replication mechanism that was implied by the double-helical structure followed in 1958 through the Meselson–Stahl experiment. Further work by Crick and coworkers showed that the genetic code was based on non-overlapping triplets of bases, called codons, allowing Har Gobind Khorana, Robert W. Holley, and Marshall Warren Nirenberg to decipher the genetic code. These findings represent the birth of molecular biology.

Diamond Diamond is a solid form of carbon with a diamond cubic crystal structure. At room temperature and pressure it is metastable and graphite is the stable form, but diamond almost never converts to graphite. Diamond is renowned for its superlative physical qualities, most of which originate from the strong covalent bonding between its atoms. In particular, it has the highest hardness and thermal conductivity of any bulk material. Those properties determine the major industrial applications of diamond in cutting and polishing tools and the scientific applications in diamond knives and diamond anvil cells. Because of its extremely rigid lattice, diamond can be contaminated by very few types of impurities, such as boron and nitrogen. Small amounts of defects or impurities (about one per million of lattice atoms) color diamond blue (boron), yellow (nitrogen), brown (lattice defects), green (radiation exposure), purple, pink, orange or red. Diamond also has relatively high optical dispersion (ability to disperse light of different colors). Most natural diamonds have ages between 1 billion and 3.5 billion years. Most were formed deep within the Earth's mantle, although a few have come from even greater depths. Under high pressure and temperature, carbon-containing fluids dissolved minerals and replaced them with diamonds. Much more recently (tens to hundreds of millions of years ago), they were carried to the surface in volcanic eruptions and deposited in igneous rocks known as kimberlites and lamproites. Synthetic diamonds can be produced by a high-pressure, high-temperature (HPHT) method which approximately simulates the conditions in the Earth's mantle. An alternative, and completely different, growth technique is chemical vapor deposition (CVD). Diamond simulants are non-diamond materials that resemble real diamonds in appearance and many properties. These include cubic zirconia and silicon carbide. Special gemological techniques have been developed to distinguish natural diamonds, synthetic diamonds, and diamond simulants. Because so many factors determine the value of a diamond, there is no listed per-unit price for diamond, unlike the listed per-unit prices for precious metals. The name "diamond" is derived from the ancient Greek "αδάμας" ("adámas"), "proper", "unalterable", "unbreakable", "untamed", from ἀ- (a-), "un-" + "δαμάω" ("damáō"), "I overpower", "I tame". Diamonds are thought to have been first recognized and mined in India, where significant alluvial deposits of the stone could be found many centuries ago along the rivers Penner, Krishna and Godavari. Diamonds have been known in India for at least 3,000 years but most likely 6,000 years. Diamonds have been treasured as gemstones since their use as religious icons in ancient India. 
Their usage in engraving tools also dates to early human history. The popularity of diamonds has risen since the 19th century because of increased supply, improved cutting and polishing techniques, growth in the world economy, and innovative and successful advertising campaigns. In 1772, the French scientist Antoine Lavoisier used a lens to concentrate the rays of the sun on a diamond in an atmosphere of oxygen, and showed that the only product of the combustion was carbon dioxide, proving that diamond is composed of carbon. Later in 1797, the English chemist Smithson Tennant repeated and expanded that experiment. By demonstrating that burning diamond and graphite releases the same amount of gas, he established the chemical equivalence of these substances. The most familiar uses of diamonds today are as gemstones used for adornment, and as industrial abrasives for cutting hard materials. The dispersion of white light into spectral colors is the primary gemological characteristic of gem diamonds. In the 20th century, experts in gemology developed methods of grading diamonds and other gemstones based on the characteristics most important to their value as a gem. Four characteristics, known informally as the "four Cs", are now commonly used as the basic descriptors of diamonds: these are "carat" (its weight), "cut" (quality of the cut is graded according to proportions, symmetry and polish), "color" (how close to white or colorless; for fancy diamonds how intense is its hue), and "clarity" (how free is it from inclusions). A large, flawless diamond is known as a paragon. Diamonds are extremely rare, with concentrations of at most parts per billion in source rock. Before the 20th century, most diamonds were found in alluvial deposits. Loose diamonds are also found along existing and ancient shorelines, where they tend to accumulate because of their size and density. Rarely, they have been found in glacial till (notably in Wisconsin and Indiana), but these deposits are not of commercial quality. These types of deposit were derived from localized igneous intrusions through weathering and transport by wind or water. Most diamonds come from the Earth's mantle, and most of this section discusses those diamonds. However, there are other sources. Some blocks of the crust, or terranes, have been buried deep enough as the crust thickened so they experienced ultra-high-pressure metamorphism. These have evenly distributed "microdiamonds" that show no sign of transport by magma. In addition, when meteorites strike the ground, the shock wave can produce high enough temperatures and pressures for "microdiamonds" and "nanodiamonds" to form. Impact-type microdiamonds can be used as an indicator of ancient impact craters. Popigai crater in Russia may have the world's largest diamond deposit, estimated at trillions of carats, and formed by an asteroid impact. A common misconception is that diamonds are formed from highly compressed coal. Coal is formed from buried prehistoric plants, and most diamonds that have been dated are far older than the first land plants. It is possible that diamonds can form from coal in subduction zones, but diamonds formed in this way are rare, and the carbon source is more likely carbonate rocks and organic carbon in sediments, rather than coal. Diamonds are far from evenly distributed over the Earth. 
A rule of thumb known as Clifford's rule states that they are almost always found in kimberlites on the oldest part of cratons, the stable cores of continents with typical ages of 2.5 billion years or more. However, there are exceptions. The Argyle diamond mine in Australia, the largest producer of diamonds by weight in the world, is located in a "mobile belt", also known as an "orogenic belt", a weaker zone surrounding the central craton that has undergone compressional tectonics. Instead of kimberlite, the host rock is lamproite. Lamproites with diamonds that are not economically viable are also found in the United States, India and Australia. In addition, diamonds in the Wawa belt of the Superior province in Canada and microdiamonds in the island arc of Japan are found in a type of rock called lamprophyre. Kimberlites can be found in narrow (1–4 meters) dikes and sills, and in pipes with diameters that range from about 75 meters to 1.5 kilometers. Fresh rock is dark bluish green to greenish gray, but after exposure rapidly turns brown and crumbles. Kimberlite is a hybrid rock with a chaotic mixture of small minerals and rock fragments (clasts) up to the size of watermelons: a mixture of xenocrysts and xenoliths (minerals and rocks carried up from the lower crust and mantle), pieces of surface rock, altered minerals such as serpentine, and new minerals that crystallized during the eruption. The texture varies with depth. The composition forms a continuum with carbonatites, but the latter have too much oxygen for carbon to exist in a pure form. Instead, it is locked up in the mineral calcite (CaCO3). All three of the diamond-bearing rocks (kimberlite, lamproite and lamprophyre) lack certain minerals (melilite and kalsilite) that are incompatible with diamond formation. In kimberlite, olivine is large and conspicuous, while lamproite has Ti-phlogopite and lamprophyre has biotite and amphibole. They are all derived from magma types that erupt rapidly from small amounts of melt, are rich in volatiles and magnesium oxide, and are less oxidizing than more common mantle melts such as basalt. These characteristics allow the melts to carry diamonds to the surface before they dissolve. Kimberlite pipes can be difficult to find. They weather quickly (within a few years after exposure) and tend to have lower topographic relief than the surrounding rock. If they are visible in outcrops, the diamonds are never visible because they are so rare. In any case, kimberlites are often covered with vegetation, sediments, soils or lakes. In modern searches, geophysical methods such as aeromagnetic surveys, electrical resistivity and gravimetry help identify promising regions to explore. This is aided by isotopic dating and modeling of the geological history. Then surveyors must go to the area and collect samples, looking for kimberlite fragments or "indicator minerals". The latter have compositions that reflect the conditions where diamonds form, such as extreme melt depletion or high pressures in eclogites. However, indicator minerals can be misleading; a better approach is geothermobarometry, where the compositions of minerals are analyzed as if they were in equilibrium with mantle minerals. Finding kimberlites requires persistence, and only a small fraction contain diamonds that are commercially viable. The only major discoveries since about 1980 have been in Canada. Since existing mines have lifetimes of as little as 25 years, there could be a shortage of new diamonds in the future.
Diamonds are dated by analyzing inclusions using the decay of radioactive isotopes. Depending on the elemental abundances, one can look at the decay of rubidium to strontium, samarium to neodymium, uranium to lead, or rhenium to osmium, or use the argon-40/argon-39 method. Those found in kimberlites have ages ranging from 1 to 3.5 billion years, and there can be multiple ages in the same kimberlite, indicating multiple episodes of diamond formation. The kimberlites themselves are much younger. Most of them have ages between tens of millions and 300 million years, although there are some older exceptions (Argyle, Premier and Wawa). Thus, the kimberlites formed independently of the diamonds and served only to transport them to the surface. Kimberlites are also much younger than the cratons they have erupted through. The reason for the lack of older kimberlites is unknown, but it suggests there was some change in mantle chemistry or tectonics. No kimberlite has erupted in human history. Most gem-quality diamonds come from depths of 150 to 250 kilometers in the lithosphere. Such depths occur below cratons in "mantle keels", the thickest part of the lithosphere. These regions have high enough pressure and temperature to allow diamonds to form, and they are not convecting, so diamonds can be stored for billions of years until a kimberlite eruption samples them. Host rocks in a mantle keel include harzburgite and lherzolite, two types of peridotite. The most dominant rock type in the upper mantle, peridotite is an igneous rock consisting mostly of the minerals olivine and pyroxene; it is low in silica and high in magnesium. However, diamonds in peridotite rarely survive the trip to the surface. Another common source that does keep diamonds intact is eclogite, a metamorphic rock that typically forms from basalt as an oceanic plate plunges into the mantle at a subduction zone. A smaller fraction of diamonds (about 150 have been studied) come from depths of 330–660 kilometers, a region that includes the transition zone. They formed in eclogite but are distinguished from diamonds of shallower origin by inclusions of majorite (a form of garnet with excess silicon). A similar proportion of diamonds comes from the lower mantle at depths between 660 and 800 kilometers. Diamond is thermodynamically stable at high pressures and temperatures, with the phase transition from graphite occurring at greater temperatures as the pressure increases. Thus, underneath continents it becomes stable at temperatures of 950 degrees Celsius and pressures of 4.5 gigapascals, corresponding to depths of 150 kilometers or greater. In subduction zones, which are colder, it becomes stable at temperatures of 800 degrees Celsius and pressures of 3.5 gigapascals. At depths greater than 240 km, iron-nickel metal phases are present and carbon is likely to be either dissolved in them or in the form of carbides. Thus, the deeper origin of some diamonds may reflect unusual growth environments. In 2018, the first known natural samples of a phase of ice called Ice VII were found as inclusions in diamond samples. The inclusions formed at depths between 400 and 800 kilometers, straddling the upper and lower mantle, and provide evidence for water-rich fluid at these depths. The amount of carbon in the mantle is not well constrained, but its concentration is estimated at 0.5 to 1 parts per thousand. It has two stable isotopes, carbon-12 and carbon-13, in a ratio of approximately 99:1 by mass.
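For reference, the isotopic ages and the pressure-to-depth correspondence quoted above both follow from standard relations; the figures below are a generic worked sketch, using typical textbook values for the decay constant and mantle density rather than numbers taken from any particular study of diamonds:

```latex
% Radiometric age from measured parent (P) and radiogenic daughter (D) abundances,
% where \lambda is the decay constant of the parent isotope
% (for the systems listed above, \lambda is roughly of order 10^{-12} to 10^{-10} per year).
t = \frac{1}{\lambda}\,\ln\!\left(1 + \frac{D}{P}\right)

% Lithostatic pressure-to-depth estimate, assuming a mean overburden density of ~3300 kg/m^3:
h \approx \frac{P}{\rho g}
  = \frac{4.5\times10^{9}\,\mathrm{Pa}}{3300\,\mathrm{kg\,m^{-3}}\times 9.8\,\mathrm{m\,s^{-2}}}
  \approx 1.4\times10^{5}\,\mathrm{m} \approx 140\,\mathrm{km}
% consistent with the "150 kilometers or greater" figure quoted above.
```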
The carbon-13 to carbon-12 ratio has a wide range in meteorites, which implies that it was probably also broad in the early Earth. It can also be altered by surface processes like photosynthesis. The fraction is generally compared to a standard sample using the ratio δ13C, the deviation of a sample's carbon-13 to carbon-12 ratio from that of the standard, expressed in parts per thousand. Common rocks from the mantle such as basalts, carbonatites and kimberlites have ratios between -8 and -2. On the surface, organic sediments have an average of -25 while carbonates have an average of 0. Populations of diamonds from different sources have distributions of δ13C that vary markedly. Peridotitic diamonds are mostly within the typical mantle range; eclogitic diamonds have values from -40 to +3, although the peak of the distribution is in the mantle range. This variability implies that they are not formed from carbon that is "primordial" (having resided in the mantle since the Earth formed). Instead, they are the result of tectonic processes, although (given the ages of diamonds) not necessarily the same tectonic processes that act in the present. Diamonds in the mantle form through a "metasomatic" process in which a C-O-H-N-S fluid or melt dissolves minerals in a rock and replaces them with new minerals. (The vague term C-O-H-N-S is commonly used because the exact composition is not known.) Diamonds form from this fluid either by reduction of oxidized carbon (e.g., CO2 or carbonate) or by oxidation of a reduced phase such as methane. Using probes such as polarized light, photoluminescence and cathodoluminescence, a series of growth zones can be identified in diamonds. The characteristic pattern in diamonds from the lithosphere involves a nearly concentric series of zones with very thin oscillations in luminescence and alternating episodes where the carbon is resorbed by the fluid and then grown again. Diamonds from below the lithosphere have a more irregular, almost polycrystalline texture, reflecting the higher temperatures and pressures as well as the transport of the diamonds by convection. Geological evidence supports a model in which kimberlite magma rose at 4–20 meters per second, creating an upward path by hydraulic fracturing of the rock. As the pressure decreases, a vapor phase exsolves from the magma, and this helps to keep the magma fluid. At the surface, the initial eruption explodes out through fissures at high speeds (over 200 meters per second). Then, at lower pressures, the rock is eroded, forming a pipe and producing fragmented rock (breccia). As the eruption wanes, there is a pyroclastic phase, and then metamorphism and hydration produce serpentinites. Although diamonds on Earth are rare, they are very common in space. In meteorites, about 3 percent of the carbon is in the form of nanodiamonds, having diameters of a few nanometers. Sufficiently small diamonds can form in the cold of space because their lower surface energy makes them more stable than graphite. The isotopic signatures of some nanodiamonds indicate they were formed outside the Solar System in stars. High-pressure experiments predict that large quantities of diamonds condense from methane into a "diamond rain" on the ice giant planets Uranus and Neptune. Some extrasolar planets may be almost entirely composed of diamond. Diamonds may exist in carbon-rich stars, particularly white dwarfs. One theory for the origin of carbonado, the toughest form of diamond, is that it originated in a white dwarf or supernova. Diamonds formed in stars may have been the first minerals.
A diamond is a transparent crystal of tetrahedrally bonded carbon atoms in a covalent network lattice (sp3-hybridized) that crystallizes into the diamond lattice, a variation of the face-centered cubic structure. Diamonds have been adapted for many uses because of the material's exceptional physical characteristics. Most notable are its extreme hardness and thermal conductivity (900–2,320 W/(m·K)), as well as its wide bandgap and high optical dispersion. Above about 1,700 °C in a vacuum or oxygen-free atmosphere, diamond converts to graphite; in air, the transformation starts at roughly 700 °C. Diamond's ignition point is 720–800 °C in oxygen and 850–1,000 °C in air. Naturally occurring diamonds have a density ranging from 3.15 to 3.53 g/cm3, with pure diamond close to 3.52 g/cm3. The chemical bonds that hold the carbon atoms in diamonds together are weaker than those in graphite. In diamonds, the bonds form an inflexible three-dimensional lattice, whereas in graphite, the atoms are tightly bonded into sheets, which can slide easily over one another, making the overall structure weaker. In a diamond, each carbon atom is surrounded by four neighboring carbon atoms, forming a tetrahedrally shaped unit. Diamonds occur most often as euhedral or rounded octahedra and twinned octahedra known as "macles". As diamond's crystal structure has a cubic arrangement of the atoms, they have many facets that belong to a cube, octahedron, rhombic dodecahedron, tetrakis hexahedron or disdyakis dodecahedron. The crystals can have rounded-off and unexpressive edges and can be elongated. Diamonds (especially those with rounded crystal faces) are commonly found coated in "nyf", an opaque gum-like skin. Some diamonds have opaque fibers. They are referred to as "opaque" if the fibers grow from a clear substrate or "fibrous" if they occupy the entire crystal. Their colors range from yellow to green or gray, sometimes with cloud-like white to gray impurities. Their most common shape is cuboidal, but they can also form octahedra, dodecahedra, macles or combined shapes. The structure is the result of numerous impurities with sizes between 1 and 5 microns. These diamonds probably formed in kimberlite magma and sampled the volatiles. Diamonds can also form polycrystalline aggregates. There have been attempts to classify them into groups with names such as boart, ballas, stewartite and framesite, but there is no widely accepted set of criteria. Carbonado, a type in which the diamond grains were sintered (fused without melting by the application of heat and pressure), is black in color and tougher than single-crystal diamond. It has never been observed in a volcanic rock. There are many theories for its origin, including formation in a star, but no consensus. Diamond is the hardest known natural material on both the Vickers scale and the Mohs scale. Diamond's great hardness relative to other materials has been known since antiquity, and is the source of its name. Diamond hardness depends on its purity, crystalline perfection and orientation: hardness is higher for flawless, pure crystals oriented to the <111> direction (along the longest diagonal of the cubic diamond lattice). Therefore, whereas it might be possible to scratch some diamonds with other materials, such as boron nitride, the hardest diamonds can only be scratched by other diamonds and nanocrystalline diamond aggregates. The hardness of diamond contributes to its suitability as a gemstone. Because it can only be scratched by other diamonds, it maintains its polish extremely well.
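To make the geometry of the tetrahedral network described above concrete, the nearest-neighbor carbon-to-carbon distance in the diamond cubic structure follows directly from the cubic lattice constant, which is about 0.357 nm for diamond; the relation below is a standard textbook result, included here only as a worked illustration:

```latex
% Nearest-neighbor C-C bond length in the diamond cubic lattice with lattice constant a
d_{\mathrm{C-C}} = \frac{\sqrt{3}}{4}\,a \approx \frac{\sqrt{3}}{4}\times 0.357\,\mathrm{nm} \approx 0.154\,\mathrm{nm}
```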
Unlike many other gems, diamond is well-suited to daily wear because of its resistance to scratching; this perhaps contributes to its popularity as the preferred gem in engagement or wedding rings, which are often worn every day. The hardest natural diamonds mostly originate from the Copeton and Bingara fields located in the New England area in New South Wales, Australia. These diamonds are generally small, perfect to semiperfect octahedra, and are used to polish other diamonds. Their hardness is associated with the crystal growth form, which is single-stage crystal growth. Most other diamonds show more evidence of multiple growth stages, which produce inclusions, flaws, and defect planes in the crystal lattice, all of which affect their hardness. It is possible to treat regular diamonds under a combination of high pressure and high temperature to produce diamonds that are harder than the diamonds used in hardness gauges. Somewhat related to hardness is another mechanical property, "toughness", which is a material's ability to resist breakage from forceful impact. The toughness of natural diamond has been measured as 7.5–10 MPa·m^1/2. This value is good compared to other ceramic materials, but poor compared to most engineering materials such as engineering alloys, which typically exhibit toughnesses over 100 MPa·m^1/2. As with any material, the macroscopic geometry of a diamond contributes to its resistance to breakage. Diamond has a cleavage plane and is therefore more fragile in some orientations than others. Diamond cutters use this attribute to cleave some stones prior to faceting. "Impact toughness" is one of the main indexes used to measure the quality of synthetic industrial diamonds. Used in so-called diamond anvil experiments to create high-pressure environments, diamonds are able to withstand crushing pressures in excess of 600 gigapascals (6 million atmospheres). Other specialized applications also exist or are being developed, including use as semiconductors: some blue diamonds are natural semiconductors, in contrast to most diamonds, which are excellent electrical insulators. The conductivity and blue color originate from the boron impurity. Boron substitutes for carbon atoms in the diamond lattice, donating a hole into the valence band. Substantial conductivity is commonly observed in nominally undoped diamond grown by chemical vapor deposition. This conductivity is associated with hydrogen-related species adsorbed at the surface, and it can be removed by annealing or other surface treatments. Diamonds are naturally lipophilic and hydrophobic, which means the diamond's surface cannot be wetted by water, but can be easily wetted and stuck by oil. This property can be utilized to extract diamonds using oil when making synthetic diamonds. However, when diamond surfaces are chemically modified with certain ions, they are expected to become so hydrophilic that they can stabilize multiple layers of water ice at human body temperature. The surface of diamonds is partially oxidized. The oxidized surface can be reduced by heat treatment under hydrogen flow; that is to say, this heat treatment partially removes oxygen-containing functional groups. However, diamond (sp3-bonded carbon) is unstable against high temperature under atmospheric pressure: above a threshold temperature, the structure gradually converts to sp2-bonded carbon, so such heat treatments must be carried out below that temperature. Diamonds are not very reactive. At room temperature, diamonds do not react with any chemical reagents, including strong acids and bases.
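The figure of roughly 6 million atmospheres quoted above for diamond-anvil experiments is simply the 600 GPa value re-expressed in standard atmospheres; as a quick worked conversion:

```latex
% 600 GPa expressed in standard atmospheres (1 atm = 1.013 x 10^5 Pa)
\frac{600\times10^{9}\ \mathrm{Pa}}{1.013\times10^{5}\ \mathrm{Pa/atm}} \approx 5.9\times10^{6}\ \mathrm{atm}
```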
A diamond's surface can only be oxidized in air at temperatures above about 850 °C. Diamond also reacts with fluorine gas above about 700 °C. Diamond has a wide bandgap of 5.5 eV, corresponding to the deep ultraviolet wavelength of 225 nanometers. This means that pure diamond should transmit visible light and appear as a clear colorless crystal. Colors in diamond originate from lattice defects and impurities. The diamond crystal lattice is exceptionally strong, and only atoms of nitrogen, boron and hydrogen can be introduced into diamond during growth at significant concentrations (up to atomic percent levels). Transition metals nickel and cobalt, which are commonly used for growth of synthetic diamond by high-pressure high-temperature techniques, have been detected in diamond as individual atoms; the maximum concentration is 0.01% for nickel and even less for cobalt. Virtually any element can be introduced into diamond by ion implantation. Nitrogen is by far the most common impurity found in gem diamonds and is responsible for the yellow and brown color in diamonds. Boron is responsible for the blue color. Color in diamond has two additional sources: irradiation (usually by alpha particles), which causes the color in green diamonds, and plastic deformation of the diamond crystal lattice. Plastic deformation is the cause of color in some brown and perhaps pink and red diamonds. In order of increasing rarity, yellow diamond is followed by brown, colorless, then by blue, green, black, pink, orange, purple, and red. "Black", or carbonado, diamonds are not truly black, but rather contain numerous dark inclusions that give the gems their dark appearance. Colored diamonds contain impurities or structural defects that cause the coloration, while pure or nearly pure diamonds are transparent and colorless. Most diamond impurities replace a carbon atom in the crystal lattice, known as a carbon flaw. The most common impurity, nitrogen, causes a slight to intense yellow coloration depending upon the type and concentration of nitrogen present. The Gemological Institute of America (GIA) classifies low-saturation yellow and brown diamonds as diamonds in the "normal color range", and applies a grading scale from "D" (colorless) to "Z" (light yellow). Diamonds of a different color, such as blue, are called "fancy colored" diamonds and fall under a different grading scale. In 2008, the Wittelsbach Diamond, a blue diamond once belonging to the King of Spain, fetched over US$24 million at a Christie's auction. In May 2009, a blue diamond fetched the highest price per carat ever paid for a diamond when it was sold at auction for 10.5 million Swiss francs (6.97 million euros, or US$9.5 million at the time). That record was, however, beaten the same year: a vivid pink diamond was sold for $10.8 million in Hong Kong on December 1, 2009. Diamonds can be identified by their high thermal conductivity. Their high refractive index is also indicative, but other materials have similar refractivity. Diamonds cut glass, but this does not positively identify a diamond because other materials, such as quartz, also lie above glass on the Mohs scale and can also cut it. Diamonds can scratch other diamonds, but this can result in damage to one or both stones. Hardness tests are infrequently used in practical gemology because of their potentially destructive nature.
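The correspondence between the 5.5 eV bandgap and the 225 nm absorption edge mentioned earlier in this passage is just the usual photon energy–wavelength relation, shown here as a quick check (using the common shortcut hc ≈ 1240 eV·nm):

```latex
% Photon energy corresponding to diamond's ~225 nm deep-ultraviolet absorption edge
E_g = \frac{hc}{\lambda} \approx \frac{1240\ \mathrm{eV\cdot nm}}{225\ \mathrm{nm}} \approx 5.5\ \mathrm{eV}
```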
The extreme hardness and high value of diamond mean that gems are typically polished slowly, using painstaking traditional techniques and greater attention to detail than is the case with most other gemstones; these tend to result in extremely flat, highly polished facets with exceptionally sharp facet edges. Diamonds also possess an extremely high refractive index and fairly high dispersion. Taken together, these factors affect the overall appearance of a polished diamond, and most diamantaires still rely upon skilled use of a loupe (magnifying glass) to identify diamonds "by eye". The diamond industry can be separated into two distinct categories: one dealing with gem-grade diamonds and another for industrial-grade diamonds. Both markets value diamonds differently. A large trade in gem-grade diamonds exists. Although most gem-grade diamonds are sold newly polished, there is a well-established market for resale of polished diamonds (e.g. pawnbroking, auctions, second-hand jewelry stores, diamantaires, bourses, etc.). One hallmark of the trade in gem-quality diamonds is its remarkable concentration: wholesale trade and diamond cutting are limited to just a few locations; in 2003, 92% of the world's diamonds were cut and polished in Surat, India. Other important centers of diamond cutting and trading are the Antwerp diamond district in Belgium, where the International Gemological Institute is based, London, the Diamond District in New York City, the Diamond Exchange District in Tel Aviv, and Amsterdam. One contributory factor is the geological nature of diamond deposits: several large primary kimberlite-pipe mines each account for significant portions of market share (such as the Jwaneng mine in Botswana, a single large-pit mine that alone produces a substantial share of annual world output). Secondary alluvial diamond deposits, on the other hand, tend to be fragmented amongst many different operators because they can be dispersed over many hundreds of square kilometers (e.g., alluvial deposits in Brazil). The production and distribution of diamonds are largely consolidated in the hands of a few key players, and concentrated in traditional diamond trading centers, the most important being Antwerp, where 80% of all rough diamonds, 50% of all cut diamonds and more than 50% of all rough, cut and industrial diamonds combined are handled. This makes Antwerp a de facto "world diamond capital". The city of Antwerp also hosts the Antwerpsche Diamantkring, created in 1929 to become the first and biggest diamond bourse dedicated to rough diamonds. Another important diamond center is New York City, where almost 80% of the world's diamonds are sold, including auction sales. The De Beers company, as the world's largest diamond mining company, holds a dominant position in the industry, and has done so since soon after its founding in 1888 by the British imperialist Cecil Rhodes. De Beers is currently the world's largest operator of diamond production facilities (mines) and distribution channels for gem-quality diamonds. The Diamond Trading Company (DTC) is a subsidiary of De Beers and markets rough diamonds from De Beers-operated mines. De Beers and its subsidiaries own mines that produce some 40% of annual world diamond production. For most of the 20th century over 80% of the world's rough diamonds passed through De Beers, but by 2001–2009 the figure had decreased to around 45%, and by 2013 the company's market share had further decreased to around 38% in value terms and even less by volume.
De Beers sold off the vast majority of its diamond stockpile in the late 1990s – early 2000s and the remainder largely represents working stock (diamonds that are being sorted before sale). This was well documented in the press but remains little known to the general public. As a part of reducing its influence, De Beers withdrew from purchasing diamonds on the open market in 1999 and ceased, at the end of 2008, purchasing Russian diamonds mined by the largest Russian diamond company Alrosa. As of January 2011, De Beers states that it only sells diamonds from the following four countries: Botswana, Namibia, South Africa and Canada. Alrosa had to suspend their sales in October 2008 due to the global energy crisis, but the company reported that it had resumed selling rough diamonds on the open market by October 2009. Apart from Alrosa, other important diamond mining companies include BHP Billiton, which is the world's largest mining company; Rio Tinto Group, the owner of the Argyle (100%), Diavik (60%), and Murowa (78%) diamond mines; and Petra Diamonds, the owner of several major diamond mines in Africa. Further down the supply chain, members of The World Federation of Diamond Bourses (WFDB) act as a medium for wholesale diamond exchange, trading both polished and rough diamonds. The WFDB consists of independent diamond bourses in major cutting centers such as Tel Aviv, Antwerp, Johannesburg and other cities across the USA, Europe and Asia. In 2000, the WFDB and The International Diamond Manufacturers Association established the World Diamond Council to prevent the trading of diamonds used to fund war and inhumane acts. WFDB's additional activities include sponsoring the World Diamond Congress every two years, as well as the establishment of the "International Diamond Council" (IDC) to oversee diamond grading. Once purchased by Sightholders (which is a trademark term referring to the companies that have a three-year supply contract with DTC), diamonds are cut and polished in preparation for sale as gemstones ('industrial' stones are regarded as a by-product of the gemstone market; they are used for abrasives). The cutting and polishing of rough diamonds is a specialized skill that is concentrated in a limited number of locations worldwide. Traditional diamond cutting centers are Antwerp, Amsterdam, Johannesburg, New York City, and Tel Aviv. Recently, diamond cutting centers have been established in China, India, Thailand, Namibia and Botswana. Cutting centers with lower cost of labor, notably Surat in Gujarat, India, handle a larger number of smaller carat diamonds, while smaller quantities of larger or more valuable diamonds are more likely to be handled in Europe or North America. The recent expansion of this industry in India, employing low cost labor, has allowed smaller diamonds to be prepared as gems in greater quantities than was previously economically feasible. Diamonds prepared as gemstones are sold on diamond exchanges called "bourses". There are 28 registered diamond bourses in the world. Bourses are the final tightly controlled step in the diamond supply chain; wholesalers and even retailers are able to buy relatively small lots of diamonds at the bourses, after which they are prepared for final sale to the consumer. Diamonds can be sold already set in jewelry, or sold unset ("loose"). 
According to the Rio Tinto Group, in 2002 the diamonds produced and released to the market were valued at US$9 billion as rough diamonds, US$14 billion after being cut and polished, US$28 billion in wholesale diamond jewelry, and US$57 billion in retail sales. Mined rough diamonds are converted into gems through a multi-step process called "cutting". Diamonds are extremely hard, but also brittle and can be split by a single blow. Therefore, diamond cutting is traditionally considered a delicate procedure requiring skill, scientific knowledge, tools and experience. Its final goal is to produce a faceted jewel in which the specific angles between the facets optimize the diamond's luster, that is, its dispersion of white light, while the number and area of facets determine the weight of the final product. The weight reduction upon cutting is significant and can be of the order of 50%. Several possible shapes are considered, but the final decision is often determined not only by scientific but also by practical considerations. For example, the diamond might be intended for display or for wear, in a ring or a necklace, set alone or surrounded by other gems of a certain color and shape. Some shapes are considered classical, such as round, pear, marquise, oval, and hearts and arrows diamonds; others are proprietary cuts produced by particular companies, for example Phoenix, Cushion and Sole Mio diamonds. The most time-consuming part of the cutting is the preliminary analysis of the rough stone: it must address a large number of issues, carries much responsibility, and can therefore last years in the case of unique diamonds. After initial cutting, the diamond is shaped in numerous stages of polishing. Unlike cutting, which is a quick but high-stakes operation, polishing removes material by gradual erosion and is extremely time-consuming. The associated technique is well developed; it is considered routine and can be performed by technicians. After polishing, the diamond is reexamined for possible flaws, either remaining or induced by the process. Those flaws are concealed through various diamond enhancement techniques, such as repolishing, crack filling, or clever arrangement of the stone in the jewelry. Remaining non-diamond inclusions are removed through laser drilling and filling of the voids produced. Marketing has significantly affected the image of diamond as a valuable commodity. N. W. Ayer & Son, the advertising firm retained by De Beers in the mid-20th century, succeeded in reviving the American diamond market and created new markets in countries where no diamond tradition had existed before. N. W. Ayer's marketing included product placement, advertising focused on the diamond product itself rather than the De Beers brand, and associations with celebrities and royalty. By not advertising the De Beers brand, De Beers was effectively advertising its competitors' diamond products as well, but this was not a concern because De Beers dominated the diamond market throughout the 20th century. De Beers' market share dipped temporarily to second place in the global market, below Alrosa, in the aftermath of the global economic crisis of 2008, falling to less than 29% in terms of carats mined (rather than sold). The campaign lasted for decades but was effectively discontinued by early 2011. De Beers still advertises diamonds, but the advertising now mostly promotes its own brands, or licensed product lines, rather than completely "generic" diamond products.
The campaign was perhaps best captured by the slogan "a diamond is forever". This slogan is now being used by De Beers Diamond Jewelers, a jewelry firm which is a 50%/50% joint venture between the De Beers mining company and LVMH, the luxury goods conglomerate. Brown-colored diamonds constituted a significant part of the diamond production, and were predominantly used for industrial purposes. They were seen as worthless for jewelry (not even being assessed on the diamond color scale). After the development of the Argyle diamond mine in Australia in 1986, and subsequent marketing efforts, brown diamonds have become acceptable gems. The change was mostly due to the numbers: the Argyle mine's output makes up about one-third of global production of natural diamonds, and 80% of Argyle diamonds are brown. Industrial diamonds are valued mostly for their hardness and thermal conductivity, making many of the gemological characteristics of diamonds, such as the four Cs, irrelevant for most applications. About 80% of mined diamonds are unsuitable for use as gemstones and are used industrially. In addition to mined diamonds, synthetic diamonds found industrial applications almost immediately after their invention in the 1950s; a far larger quantity of synthetic diamond is produced annually for industrial use, 90% of which (as of 2014) is produced in China. Approximately 90% of diamond grinding grit is currently of synthetic origin. The boundary between gem-quality diamonds and industrial diamonds is poorly defined and partly depends on market conditions (for example, if demand for polished diamonds is high, some lower-grade stones will be polished into low-quality or small gemstones rather than being sold for industrial use). Within the category of industrial diamonds, there is a sub-category comprising the lowest-quality, mostly opaque stones, which are known as bort. Industrial use of diamonds has historically been associated with their hardness, which makes diamond the ideal material for cutting and grinding tools. As the hardest known naturally occurring material, diamond can be used to polish, cut, or wear away any material, including other diamonds. Common industrial applications of this property include diamond-tipped drill bits and saws, and the use of diamond powder as an abrasive. Less expensive industrial-grade diamonds, known as bort, with more flaws and poorer color than gems, are used for such purposes. Diamond is not suitable for machining ferrous alloys at high speeds, as carbon is soluble in iron at the high temperatures created by high-speed machining, leading to greatly increased wear on diamond tools compared to alternatives. Specialized applications include use in laboratories as containment for high-pressure experiments (see diamond anvil cell), high-performance bearings, and limited use in specialized windows. With the continuing advances being made in the production of synthetic diamonds, future applications are becoming feasible. The high thermal conductivity of diamond makes it suitable as a heat sink for integrated circuits in electronics. Approximately 130 million carats of diamonds are mined annually, with a total value of nearly US$9 billion, and a much larger quantity is synthesized annually. Roughly 49% of diamonds originate from Central and Southern Africa, although significant sources of the mineral have been discovered in Canada, India, Russia, Brazil, and Australia.
They are mined from kimberlite and lamproite volcanic pipes, which can bring diamond crystals, originating from deep within the Earth where high pressures and temperatures enable them to form, to the surface. The mining and distribution of natural diamonds are subjects of frequent controversy, such as concerns over the sale of "blood diamonds" or "conflict diamonds" by African paramilitary groups. The diamond supply chain is controlled by a limited number of powerful businesses, and is also highly concentrated in a small number of locations around the world. Only a very small fraction of the diamond ore consists of actual diamonds. The ore is crushed, during which care is required not to destroy larger diamonds, and then sorted by density. Today, diamonds are located in the diamond-rich density fraction with the help of X-ray fluorescence, after which the final sorting steps are done by hand. Before the use of X-rays became commonplace, the separation was done with grease belts; diamonds have a stronger tendency to stick to grease than the other minerals in the ore. Historically, diamonds were found only in alluvial deposits in the Guntur and Krishna districts of the Krishna River delta in southern India. India led the world in diamond production from the time of their discovery in approximately the 9th century BC to the mid-18th century AD, but the commercial potential of these sources had been exhausted by the late 18th century, and at that time India was eclipsed by Brazil, where the first non-Indian diamonds were found in 1725. Currently, one of the most prominent Indian mines is located at Panna. Diamond extraction from primary deposits (kimberlites and lamproites) started in the 1870s after the discovery of the Diamond Fields in South Africa. Production has increased over time, and a large cumulative total has been mined since that date. Twenty percent of that amount has been mined in the last five years, and during the last 10 years, nine new mines have started production; four more are waiting to be opened soon. Most of these mines are located in Canada, Zimbabwe and Angola, with one in Russia. In the U.S., diamonds have been found in Arkansas, Colorado, New Mexico, Wyoming, and Montana. In 2004, the discovery of a microscopic diamond in the U.S. led to the January 2008 bulk-sampling of kimberlite pipes in a remote part of Montana. The Crater of Diamonds State Park in Arkansas is open to the public, and is the only mine in the world where members of the public can dig for diamonds. Today, most commercially viable diamond deposits are in Russia (mostly in the Sakha Republic, for example the Mir pipe and the Udachnaya pipe), Botswana, Australia (Northern and Western Australia) and the Democratic Republic of the Congo. In 2005, Russia produced almost one-fifth of the global diamond output, according to the British Geological Survey. Australia boasts the richest diamantiferous pipe, with production from the Argyle diamond mine reaching peak levels of 42 metric tons per year in the 1990s. There are also commercial deposits being actively mined in the Northwest Territories of Canada and Brazil. Diamond prospectors continue to search the globe for diamond-bearing kimberlite and lamproite pipes. In some of the more politically unstable central African and west African countries, revolutionary groups have taken control of diamond mines, using proceeds from diamond sales to finance their operations. Diamonds sold through this process are known as "conflict diamonds" or "blood diamonds".
In response to public concerns that their diamond purchases were contributing to war and human rights abuses in central and western Africa, the United Nations, the diamond industry and diamond-trading nations introduced the Kimberley Process in 2002. The Kimberley Process aims to ensure that conflict diamonds do not become intermixed with the diamonds not controlled by such rebel groups. This is done by requiring diamond-producing countries to provide proof that the money they make from selling the diamonds is not used to fund criminal or revolutionary activities. Although the Kimberley Process has been moderately successful in limiting the number of conflict diamonds entering the market, some still find their way in. According to the International Diamond Manufacturers Association, conflict diamonds constitute 2–3% of all diamonds traded. Two major flaws still hinder the effectiveness of the Kimberley Process: (1) the relative ease of smuggling diamonds across African borders, and (2) the violent nature of diamond mining in nations that are not in a technical state of war and whose diamonds are therefore considered "clean". The Canadian Government has set up a body known as the Canadian Diamond Code of Conduct to help authenticate Canadian diamonds. This is a stringent tracking system of diamonds and helps protect the "conflict free" label of Canadian diamonds. Synthetic diamonds are diamonds manufactured in a laboratory, as opposed to diamonds mined from the Earth. The gemological and industrial uses of diamond have created a large demand for rough stones. This demand has been satisfied in large part by synthetic diamonds, which have been manufactured by various processes for more than half a century. However, in recent years it has become possible to produce gem-quality synthetic diamonds of significant size. It is possible to make colorless synthetic gemstones that, on a molecular level, are identical to natural stones and so visually similar that only a gemologist with special equipment can tell the difference. The majority of commercially available synthetic diamonds are yellow and are produced by so-called "high-pressure high-temperature" (HPHT) processes. The yellow color is caused by nitrogen impurities. Other colors may also be reproduced, such as blue, green or pink, which result from the addition of boron or from irradiation after synthesis. Another popular method of growing synthetic diamond is chemical vapor deposition (CVD). The growth occurs under low pressure (below atmospheric pressure). It involves feeding a mixture of gases (typically in a ratio of 1 part methane to 99 parts hydrogen) into a chamber and splitting them into chemically active radicals in a plasma ignited by microwaves, hot filament, arc discharge, welding torch or laser. This method is mostly used for coatings, but can also produce single crystals several millimeters in size. As of 2010, nearly all 5,000 million carats (1,000 tonnes) of synthetic diamonds produced per year are for industrial use. Around 50% of the 133 million carats of natural diamonds mined per year end up in industrial use. Mining companies' expenses average $40 to $60 per carat for natural colorless diamonds, while synthetic manufacturers' expenses average $2,500 per carat for synthetic, gem-quality colorless diamonds. However, a purchaser is more likely to encounter a synthetic when looking for a fancy-colored diamond because nearly all synthetic diamonds are fancy-colored, while only 0.01% of natural diamonds are.
A diamond simulant is a non-diamond material that is used to simulate the appearance of a diamond, and may be referred to as diamante. Cubic zirconia is the most common. The gemstone moissanite (silicon carbide) can be treated as a diamond simulant, though more costly to produce than cubic zirconia. Both are produced synthetically. Diamond enhancements are specific treatments performed on natural or synthetic diamonds (usually those already cut and polished into a gem), which are designed to better the gemological characteristics of the stone in one or more ways. These include laser drilling to remove inclusions, application of sealants to fill cracks, treatments to improve a white diamond's color grade, and treatments to give fancy color to a white diamond. Coatings are increasingly used to give a diamond simulant such as cubic zirconia a more "diamond-like" appearance. One such substance is diamond-like carbon—an amorphous carbonaceous material that has some physical properties similar to those of the diamond. Advertising suggests that such a coating would transfer some of these diamond-like properties to the coated stone, hence enhancing the diamond simulant. Techniques such as Raman spectroscopy should easily identify such a treatment. Early diamond identification tests included a scratch test relying on the superior hardness of diamond. This test is destructive, as a diamond can scratch another diamond, and is rarely used nowadays. Instead, diamond identification relies on its superior thermal conductivity. Electronic thermal probes are widely used in the gemological centers to separate diamonds from their imitations. These probes consist of a pair of battery-powered thermistors mounted in a fine copper tip. One thermistor functions as a heating device while the other measures the temperature of the copper tip: if the stone being tested is a diamond, it will conduct the tip's thermal energy rapidly enough to produce a measurable temperature drop. This test takes about 2–3 seconds. Whereas the thermal probe can separate diamonds from most of their simulants, distinguishing between various types of diamond, for example synthetic or natural, irradiated or non-irradiated, etc., requires more advanced, optical techniques. Those techniques are also used for some diamonds simulants, such as silicon carbide, which pass the thermal conductivity test. Optical techniques can distinguish between natural diamonds and synthetic diamonds. They can also identify the vast majority of treated natural diamonds. "Perfect" crystals (at the atomic lattice level) have never been found, so both natural and synthetic diamonds always possess characteristic imperfections, arising from the circumstances of their crystal growth, that allow them to be distinguished from each other. Laboratories use techniques such as spectroscopy, microscopy and luminescence under shortwave ultraviolet light to determine a diamond's origin. They also use specially made instruments to aid them in the identification process. Two screening instruments are the "DiamondSure" and the "DiamondView", both produced by the DTC and marketed by the GIA. Several methods for identifying synthetic diamonds can be performed, depending on the method of production and the color of the diamond. CVD diamonds can usually be identified by an orange fluorescence. D-J colored diamonds can be screened through the Swiss Gemmological Institute's Diamond Spotter. 
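The thermal-probe separation described above is, at heart, a simple threshold test on how quickly a stone conducts heat away from the tip. The sketch below is a toy illustration only: the function name, the idea of working directly from an estimated thermal conductivity, and the threshold values are all assumptions chosen for the example, not a description of any commercial instrument.

```python
# Toy illustration of the decision logic behind a thermal-conductivity test.
# All numeric thresholds are assumed, order-of-magnitude values for the example only.

def classify_stone(thermal_conductivity_w_per_mk: float) -> str:
    """Classify a stone from an estimated thermal conductivity in W/(m*K)."""
    DIAMOND_LIKE = 1000.0    # assumed: diamond conducts heat far better than glass or cubic zirconia
    MOISSANITE_LIKE = 200.0  # assumed: silicon carbide also conducts well, so it passes this test

    if thermal_conductivity_w_per_mk >= DIAMOND_LIKE:
        return "diamond or moissanite: needs optical follow-up"
    if thermal_conductivity_w_per_mk >= MOISSANITE_LIKE:
        return "possibly moissanite or another good thermal conductor"
    return "simulant such as cubic zirconia or glass"

if __name__ == "__main__":
    for reading in (2200.0, 350.0, 2.0):
        print(f"{reading:7.1f} W/(m*K) -> {classify_stone(reading)}")
```

In practice, as the text notes, the probe reports a temperature drop at the copper tip over a couple of seconds rather than a conductivity value, which is why moissanite, another good thermal conductor, must be ruled out by optical techniques.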
Stones in the D-Z color range can be examined through the DiamondSure UV/visible spectrometer, a tool developed by De Beers. Similarly, natural diamonds usually have minor imperfections and flaws, such as inclusions of foreign material, that are not seen in synthetic diamonds. Screening devices based on diamond type detection can be used to make a distinction between diamonds that are certainly natural and diamonds that are potentially synthetic. Those potentially synthetic diamonds require more investigation in a specialized lab. Examples of commercial screening devices are D-Screen (WTOCD / HRD Antwerp) and Alpha Diamond Analyzer (Bruker / HRD Antwerp). Occasionally large thefts of diamonds take place. In February 2013 armed robbers carried out a raid at Brussels Airport and escaped with gems estimated to be worth $50m (£32m; 37m euros). The gang broke through a perimeter fence and raided the cargo hold of a Swiss-bound plane. The gang have since been arrested and large amounts of cash and diamonds recovered. The identification of stolen diamonds presents a set of difficult problems. Rough diamonds will have a distinctive shape depending on whether their source is a mine or from an alluvial environment such as a beach or river—alluvial diamonds have smoother surfaces than those that have been mined. Determining the provenance of cut and polished stones is much more complex. The Kimberley Process was developed to monitor the trade in rough diamonds and prevent their being used to fund violence. Before exporting, rough diamonds are certificated by the government of the country of origin. Some countries, such as Venezuela, are not party to the agreement. The Kimberley Process does not apply to local sales of rough diamonds within a country. Diamonds may be etched by laser with marks invisible to the naked eye. Lazare Kaplan, a US-based company, developed this method. However, whatever is marked on a diamond can readily be removed. Dinosaur Dinosaurs are a diverse group of reptiles of the clade Dinosauria. They first appeared during the Triassic period, between 243 and 233.23 million years ago, although the exact origin and timing of the evolution of dinosaurs is the subject of active research. They became the dominant terrestrial vertebrates after the Triassic–Jurassic extinction event 201 million years ago; their dominance continued through the Jurassic and Cretaceous periods. Reverse genetic engineering and the fossil record both demonstrate that birds are modern feathered dinosaurs, having evolved from earlier theropods during the late Jurassic Period. As such, birds were the only dinosaur lineage to survive the Cretaceous–Paleogene extinction event 66 million years ago. Dinosaurs can therefore be divided into "avian dinosaurs", or birds; and "non-avian dinosaurs", which are all dinosaurs other than birds. This article deals primarily with non-avian dinosaurs. Dinosaurs are a varied group of animals from taxonomic, morphological and ecological standpoints. Birds, at over 10,000 living species, are the most diverse group of vertebrates besides perciform fish. Using fossil evidence, paleontologists have identified over 500 distinct genera and more than 1,000 different species of non-avian dinosaurs. Dinosaurs are represented on every continent by both extant species (birds) and fossil remains. Through the first half of the 20th century, before birds were recognized to be dinosaurs, most of the scientific community believed dinosaurs to have been sluggish and cold-blooded. 
Most research conducted since the 1970s, however, has indicated that all dinosaurs were active animals with elevated metabolisms and numerous adaptations for social interaction. Some were herbivorous, others carnivorous. Evidence suggests that egg-laying and nest-building are additional traits shared by all dinosaurs, avian and non-avian alike. While dinosaurs were ancestrally bipedal, many extinct groups included quadrupedal species, and some were able to shift between these stances. Elaborate display structures such as horns or crests are common to all dinosaur groups, and some extinct groups developed skeletal modifications such as bony armor and spines. While the dinosaurs' modern-day surviving avian lineage (birds) are generally small due to the constraints of flight, many prehistoric dinosaurs (non-avian and avian) were large-bodied: the largest sauropod dinosaurs are estimated to have reached lengths of nearly 40 meters and heights of about 18 meters, and were the largest land animals of all time. Still, the idea that non-avian dinosaurs were uniformly gigantic is a misconception based in part on preservation bias, as large, sturdy bones are more likely to last until they are fossilized. Many dinosaurs were quite small: "Xixianykus", for example, was only about 50 centimeters long. Since the first dinosaur fossils were recognized in the early 19th century, mounted fossil dinosaur skeletons have been major attractions at museums around the world, and dinosaurs have become an enduring part of world culture. The large sizes of some dinosaur groups, as well as their seemingly monstrous and fantastic nature, have ensured dinosaurs' regular appearance in best-selling books and films, such as "Jurassic Park". Persistent public enthusiasm for the animals has resulted in significant funding for dinosaur science, and new discoveries are regularly covered by the media. The taxon Dinosauria was formally named in 1841 by paleontologist Sir Richard Owen, who used it to refer to the "distinct tribe or sub-order of Saurian Reptiles" that were then being recognized in England and around the world. The term is derived from the Ancient Greek "δεινός" ("deinos"), meaning "terrible" or "fearfully great", and "σαῦρος" ("sauros"), meaning "lizard" or "reptile". Though the taxonomic name has often been interpreted as a reference to dinosaurs' teeth, claws, and other fearsome characteristics, Owen intended it merely to evoke their size and majesty. Other prehistoric animals, including mosasaurs, ichthyosaurs, pterosaurs, plesiosaurs, and "Dimetrodon", while often popularly conceived of as dinosaurs, are not taxonomically classified as dinosaurs. Under phylogenetic nomenclature, dinosaurs are usually defined as the group consisting of the most recent common ancestor (MRCA) of "Triceratops" and Neornithes, and all its descendants. It has also been suggested that Dinosauria be defined with respect to the MRCA of "Megalosaurus" and "Iguanodon", because these were two of the three genera cited by Richard Owen when he recognized the Dinosauria. Both definitions result in the same set of animals being defined as dinosaurs: "Dinosauria = Ornithischia + Saurischia", encompassing ankylosaurians (armored herbivorous quadrupeds), stegosaurians (plated herbivorous quadrupeds), ceratopsians (herbivorous quadrupeds with horns and frills), ornithopods (bipedal or quadrupedal herbivores including "duck-bills"), theropods (mostly bipedal carnivores and birds), and sauropodomorphs (mostly large herbivorous quadrupeds with long necks and tails). Birds are now recognized as being the sole surviving lineage of theropod dinosaurs.
In traditional taxonomy, birds were considered a separate class that had evolved from dinosaurs, a distinct superorder. However, a majority of contemporary paleontologists concerned with dinosaurs reject the traditional style of classification in favor of phylogenetic taxonomy; this approach requires that, for a group to be natural, all descendants of members of the group must be included in the group as well. Birds are thus considered to be dinosaurs and dinosaurs are, therefore, not extinct. Birds are classified as belonging to the subgroup Maniraptora, which are coelurosaurs, which are theropods, which are saurischians, which are dinosaurs. Research by Matthew Baron, David B. Norman, and Paul M. Barrett in 2017 suggested a radical revision of dinosaurian systematics. Phylogenetic analysis by Baron "et al." recovered the Ornithischia as being closer to the Theropoda than the Sauropodomorpha, as opposed to the traditional union of theropods with sauropodomorphs. They resurrected the clade Ornithoscelida to refer to the group containing Ornithischia and Theropoda. Dinosauria itself was re-defined as the last common ancestor of "Triceratops horridus", "Passer domesticus", and "Diplodocus carnegii", and all of its descendants, to ensure that sauropods and kin remain included as dinosaurs. Using one of the above definitions, dinosaurs can be generally described as archosaurs with hind limbs held erect beneath the body. Many prehistoric animal groups are popularly conceived of as dinosaurs, such as ichthyosaurs, mosasaurs, plesiosaurs, pterosaurs, and pelycosaurs (especially "Dimetrodon"), but are not classified scientifically as dinosaurs, and none had the erect hind limb posture characteristic of true dinosaurs. Dinosaurs were the dominant terrestrial vertebrates of the Mesozoic, especially the Jurassic and Cretaceous periods. Other groups of animals were restricted in size and niches; mammals, for example, rarely exceeded the size of a domestic cat, and were generally rodent-sized carnivores of small prey. Dinosaurs have always been an extremely varied group of animals; according to a 2006 study, over 500 non-avian dinosaur genera have been identified with certainty so far, and the total number of genera preserved in the fossil record has been estimated at around 1,850, nearly 75% of which remain to be discovered. An earlier study predicted that about 3,400 dinosaur genera existed, including many that would not have been preserved in the fossil record. By September 17, 2008, 1,047 different species of dinosaurs had been named. In 2016, the number of dinosaur species that existed in the Mesozoic era was estimated at 1,543–2,468. Their diets are varied: some are herbivorous, others carnivorous, including seed-eaters, fish-eaters, insectivores, and omnivores. While dinosaurs were ancestrally bipedal (as are all modern birds), some prehistoric species were quadrupeds, and others, such as "Anchisaurus" and "Iguanodon", could walk just as easily on two or four legs. Cranial modifications like horns and crests are common dinosaurian traits, and some extinct species had bony armor. Although known for large size, many Mesozoic dinosaurs were human-sized or smaller, and modern birds are generally small in size. Dinosaurs today inhabit every continent, and fossils show that they had achieved global distribution by at least the early Jurassic period.
Modern birds inhabit most available habitats, from terrestrial to marine, and there is evidence that some non-avian dinosaurs (such as "Microraptor") could fly or at least glide, and others, such as spinosaurids, had semiaquatic habits. While recent discoveries have made it more difficult to present a universally agreed-upon list of dinosaurs' distinguishing features, nearly all dinosaurs discovered so far share certain modifications to the ancestral archosaurian skeleton, or are clear descendants of older dinosaurs showing these modifications. Although some later groups of dinosaurs featured further modified versions of these traits, they are considered typical for Dinosauria; the earliest dinosaurs had them and passed them on to their descendants. Such modifications, originating in the most recent common ancestor of a certain taxonomic group, are called the synapomorphies of such a group. A detailed assessment of archosaur interrelations by Sterling Nesbitt confirmed or found the following twelve unambiguous synapomorphies, some previously known: Nesbitt found a number of further potential synapomorphies, and discounted a number of synapomorphies previously suggested. Some of these are also present in silesaurids, which Nesbitt recovered as a sister group to Dinosauria, including a large anterior trochanter, metatarsals II and IV of subequal length, reduced contact between ischium and pubis, the presence of a cnemial crest on the tibia and of an ascending process on the astragalus, and many others. A variety of other skeletal features are shared by dinosaurs. However, because they are either common to other groups of archosaurs or were not present in all early dinosaurs, these features are not considered to be synapomorphies. For example, as diapsids, dinosaurs ancestrally had two pairs of temporal fenestrae (openings in the skull behind the eyes), and as members of the diapsid group Archosauria, had additional openings in the snout and lower jaw. Additionally, several characteristics once thought to be synapomorphies are now known to have appeared before dinosaurs, or were absent in the earliest dinosaurs and independently evolved by different dinosaur groups. These include an elongated scapula, or shoulder blade; a sacrum composed of three or more fused vertebrae (three are found in some other archosaurs, but only two are found in "Herrerasaurus"); and a perforate acetabulum, or hip socket, with a hole at the center of its inside surface (closed in "Saturnalia", for example). Another difficulty of determining distinctly dinosaurian features is that early dinosaurs and other archosaurs from the late Triassic are often poorly known and were similar in many ways; these animals have sometimes been misidentified in the literature. Dinosaurs stand with their hind limbs erect in a manner similar to most modern mammals, but distinct from most other reptiles, whose limbs sprawl out to either side. This posture is due to the development of a laterally facing recess in the pelvis (usually an open socket) and a corresponding inwardly facing distinct head on the femur. Their erect posture enabled early dinosaurs to breathe easily while moving, which likely permitted stamina and activity levels that surpassed those of "sprawling" reptiles. Erect limbs probably also helped support the evolution of large size by reducing bending stresses on limbs. 
Some non-dinosaurian archosaurs, including rauisuchians, also had erect limbs but achieved this by a "pillar erect" configuration of the hip joint, where instead of having a projection from the femur insert on a socket on the hip, the upper pelvic bone was rotated to form an overhanging shelf. Dinosaurs diverged from their archosaur ancestors during the middle to late Triassic period, roughly 20 million years after the Permian–Triassic extinction event wiped out an estimated 95% of all life on Earth. Radiometric dating of the rock formation that contained fossils from the early dinosaur genus "Eoraptor" at 231.4 million years old establishes its presence in the fossil record at this time. Paleontologists think that "Eoraptor" resembles the common ancestor of all dinosaurs; if this is true, its traits suggest that the first dinosaurs were small, bipedal predators. The discovery of primitive, dinosaur-like ornithodirans such as "Marasuchus" and "Lagerpeton" in Argentinian Middle Triassic strata supports this view; analysis of recovered fossils suggests that these animals were indeed small, bipedal predators. Dinosaurs may have appeared as early as 243 million years ago, as evidenced by remains of the genus "Nyasasaurus" from that period, though known fossils of these animals are too fragmentary to tell if they are dinosaurs or very close dinosaurian relatives. Recently, it has been determined that "Staurikosaurus" from the Santa Maria Formation dates to 233.23 Ma, making it older in geologic age than "Eoraptor". When dinosaurs appeared, they were not the dominant terrestrial animals. The terrestrial habitats were occupied by various types of archosauromorphs and therapsids, like cynodonts and rhynchosaurs. Their main competitors were the pseudosuchia, such as aetosaurs, ornithosuchids and rauisuchians, which were more successful than the dinosaurs. Most of these other animals became extinct in the Triassic, in one of two events. First, at about 215 million years ago, a variety of basal archosauromorphs, including the protorosaurs, became extinct. This was followed by the Triassic–Jurassic extinction event (about 200 million years ago), that saw the end of most of the other groups of early archosaurs, like aetosaurs, ornithosuchids, phytosaurs, and rauisuchians. Rhynchosaurs and dicynodonts survived (at least in some areas) at least as late as early-mid Norian and early Rhaetian, respectively, and the exact date of their extinction is uncertain. These losses left behind a land fauna of crocodylomorphs, dinosaurs, mammals, pterosaurians, and turtles. The first few lines of early dinosaurs diversified through the Carnian and Norian stages of the Triassic, possibly by occupying the niches of the groups that became extinct. Also notably, there was a heightened rate of extinction during the Carnian Pluvial Event. Dinosaur evolution after the Triassic follows changes in vegetation and the location of continents. In the late Triassic and early Jurassic, the continents were connected as the single landmass Pangaea, and there was a worldwide dinosaur fauna mostly composed of coelophysoid carnivores and early sauropodomorph herbivores. Gymnosperm plants (particularly conifers), a potential food source, radiated in the late Triassic. Early sauropodomorphs did not have sophisticated mechanisms for processing food in the mouth, and so must have employed other means of breaking down food farther along the digestive tract. 
The general homogeneity of dinosaurian faunas continued into the middle and late Jurassic, where most localities had predators consisting of ceratosaurians, spinosauroids, and carnosaurians, and herbivores consisting of stegosaurian ornithischians and large sauropods. Examples of this include the Morrison Formation of North America and Tendaguru Beds of Tanzania. Dinosaurs in China show some differences, with specialized sinraptorid theropods and unusual, long-necked sauropods like "Mamenchisaurus". Ankylosaurians and ornithopods were also becoming more common, but prosauropods had become extinct. Conifers and pteridophytes were the most common plants. Sauropods, like the earlier prosauropods, were not oral processors, but ornithischians were evolving various means of dealing with food in the mouth, including potential cheek-like organs to keep food in the mouth, and jaw motions to grind food. Another notable evolutionary event of the Jurassic was the appearance of true birds, descended from maniraptoran coelurosaurians. By the early Cretaceous and the ongoing breakup of Pangaea, dinosaurs were becoming strongly differentiated by landmass. The earliest part of this time saw the spread of ankylosaurians, iguanodontians, and brachiosaurids through Europe, North America, and northern Africa. These were later supplemented or replaced in Africa by large spinosaurid and carcharodontosaurid theropods, and rebbachisaurid and titanosaurian sauropods, also found in South America. In Asia, maniraptoran coelurosaurians like dromaeosaurids, troodontids, and oviraptorosaurians became the common theropods, and ankylosaurids and early ceratopsians like "Psittacosaurus" became important herbivores. Meanwhile, Australia was home to a fauna of basal ankylosaurians, hypsilophodonts, and iguanodontians. The stegosaurians appear to have gone extinct at some point in the late early Cretaceous or early late Cretaceous. A major change in the early Cretaceous, which would be amplified in the late Cretaceous, was the evolution of flowering plants. At the same time, several groups of dinosaurian herbivores evolved more sophisticated ways to orally process food. Ceratopsians developed a method of slicing with teeth stacked on each other in batteries, and iguanodontians refined a method of grinding with tooth batteries, taken to its extreme in hadrosaurids. Some sauropods also evolved tooth batteries, best exemplified by the rebbachisaurid "Nigersaurus". There were three general dinosaur faunas in the late Cretaceous. In the northern continents of North America and Asia, the major theropods were tyrannosaurids and various types of smaller maniraptoran theropods, with a predominantly ornithischian herbivore assemblage of hadrosaurids, ceratopsians, ankylosaurids, and pachycephalosaurians. In the southern continents that had made up the now-splitting Gondwana, abelisaurids were the common theropods, and titanosaurian sauropods the common herbivores. Finally, in Europe, dromaeosaurids, rhabdodontid iguanodontians, nodosaurid ankylosaurians, and titanosaurian sauropods were prevalent. Flowering plants were greatly radiating, with the first grasses appearing by the end of the Cretaceous. Grinding hadrosaurids and shearing ceratopsians became extremely diverse across North America and Asia. Theropods were also radiating as herbivores or omnivores, with therizinosaurians and ornithomimosaurians becoming common. 
The Cretaceous–Paleogene extinction event, which occurred approximately 66 million years ago at the end of the Cretaceous period, caused the extinction of all dinosaur groups except for the neornithine birds. Some other diapsid groups, such as crocodilians, sebecosuchians, turtles, lizards, snakes, sphenodontians, and choristoderans, also survived the event. The surviving lineages of neornithine birds, including the ancestors of modern ratites, ducks and chickens, and a variety of waterbirds, diversified rapidly at the beginning of the Paleogene period, entering ecological niches left vacant by the extinction of Mesozoic dinosaur groups such as the arboreal enantiornithines, aquatic hesperornithines, and even the larger terrestrial theropods (in the form of "Gastornis", eogruiids, bathornithids, ratites, geranoidids, mihirungs, and "terror birds"). It is often cited that mammals out-competed the neornithines for dominance of most terrestrial niches but many of these groups co-existed with rich mammalian faunas for most of the Cenozoic. Terror birds and bathornithids occupied carnivorous guilds alongside predatory mammals, and ratites are still fairly successful as mid-sized herbivores; eogruiids similarly lasted from the Eocene to Pliocene, only becoming extinct very recently after over 20 million years of co-existence with many mammal groups. Dinosaurs belong to a group known as archosaurs, which also includes modern crocodilians. Within the archosaur group, dinosaurs are differentiated most noticeably by their gait. Dinosaur legs extend directly beneath the body, whereas the legs of lizards and crocodilians sprawl out to either side. Collectively, dinosaurs as a clade are divided into two primary branches, Saurischia and Ornithischia. Saurischia includes those taxa sharing a more recent common ancestor with birds than with Ornithischia, while Ornithischia includes all taxa sharing a more recent common ancestor with "Triceratops" than with Saurischia. Anatomically, these two groups can be distinguished most noticeably by their pelvic structure. Early saurischians—"lizard-hipped", from the Greek "sauros" (σαῦρος) meaning "lizard" and "ischion" (ἰσχίον) meaning "hip joint"—retained the hip structure of their ancestors, with a pubis bone directed cranially, or forward. This basic form was modified by rotating the pubis backward to varying degrees in several groups ("Herrerasaurus", therizinosauroids, dromaeosaurids, and birds). Saurischia includes the theropods (exclusively bipedal and with a wide variety of diets) and sauropodomorphs (long-necked herbivores which include advanced, quadrupedal groups). By contrast, ornithischians—"bird-hipped", from the Greek "ornitheios" (ὀρνίθειος) meaning "of a bird" and "ischion" (ἰσχίον) meaning "hip joint"—had a pelvis that superficially resembled a bird's pelvis: the pubic bone was oriented caudally (rear-pointing). Unlike birds, the ornithischian pubis also usually had an additional forward-pointing process. Ornithischia includes a variety of species which were primarily herbivores. (NB: the terms "lizard hip" and "bird hip" are misnomers – birds evolved from dinosaurs with "lizard hips".) The following is a simplified classification of dinosaur groups based on their evolutionary relationships, and organized based on the list of Mesozoic dinosaur species provided by Holtz (2007). A more detailed version can be found at Dinosaur classification. The dagger (†) is used to signify groups with no living members. 
Knowledge about dinosaurs is derived from a variety of fossil and non-fossil records, including fossilized bones, feces, trackways, gastroliths, feathers, impressions of skin, internal organs and soft tissues. Many fields of study contribute to our understanding of dinosaurs, including physics (especially biomechanics), chemistry, biology, and the earth sciences (of which paleontology is a sub-discipline). Two topics of particular interest and study have been dinosaur size and behavior. Current evidence suggests that dinosaur average size varied through the Triassic, early Jurassic, late Jurassic and Cretaceous periods. Predatory theropod dinosaurs, which occupied most terrestrial carnivore niches during the Mesozoic, most often fall into the category when sorted by estimated weight into categories based on order of magnitude, whereas recent predatory carnivoran mammals peak in the category. The mode of Mesozoic dinosaur body masses is between one and ten metric tonnes. This contrasts sharply with the size of Cenozoic mammals, estimated by the National Museum of Natural History as about . The sauropods were the largest and heaviest dinosaurs. For much of the dinosaur era, the smallest sauropods were larger than anything else in their habitat, and the largest were an order of magnitude more massive than anything else that has since walked the Earth. Giant prehistoric mammals such as "Paraceratherium" (the largest land mammal ever) were dwarfed by the giant sauropods, and only modern whales approach or surpass them in size. There are several proposed advantages for the large size of sauropods, including protection from predation, reduction of energy use, and longevity, but it may be that the most important advantage was dietary. Large animals are more efficient at digestion than small animals, because food spends more time in their digestive systems. This also permits them to subsist on food with lower nutritive value than smaller animals. Sauropod remains are mostly found in rock formations interpreted as dry or seasonally dry, and the ability to eat large quantities of low-nutrient browse would have been advantageous in such environments. Scientists will probably never be certain of the largest and smallest dinosaurs to have ever existed. This is because only a tiny percentage of animals ever fossilize, and most of these remain buried in the earth. Few of the specimens that are recovered are complete skeletons, and impressions of skin and other soft tissues are rare. Rebuilding a complete skeleton by comparing the size and morphology of bones to those of similar, better-known species is an inexact art, and reconstructing the muscles and other organs of the living animal is, at best, a process of educated guesswork. The tallest and heaviest dinosaur known from good skeletons is "Giraffatitan brancai" (previously classified as a species of "Brachiosaurus"). Its remains were discovered in Tanzania between 1907 and 1912. Bones from several similar-sized individuals were incorporated into the skeleton now mounted and on display at the Museum für Naturkunde Berlin; this mount is tall and long, and would have belonged to an animal that weighed between and  kilograms ( and  lb). The longest complete dinosaur is the long "Diplodocus", which was discovered in Wyoming in the United States and displayed in Pittsburgh's Carnegie Natural History Museum in 1907. The longest dinosaur known from good fossil material is the "Patagotitan": the skeleton mount in the American Museum of Natural History is long. 
The Carmen Funes Museum has an "Argentinosaurus" reconstructed skeleton mount long. There were larger dinosaurs, but knowledge of them is based entirely on a small number of fragmentary fossils. Most of the largest herbivorous specimens on record were discovered in the 1970s or later, and include the massive "Argentinosaurus", which may have weighed to  kilograms (90 to 110 short tons) and reached length of ; some of the longest were the long "Diplodocus hallorum" (formerly "Seismosaurus"), the long "Supersaurus" and long "Patagotitan"; and the tallest, the tall "Sauroposeidon", which could have reached a sixth-floor window. The heaviest and longest dinosaur may have been "Amphicoelias fragillimus", known only from a now lost partial vertebral neural arch described in 1878. Extrapolating from the illustration of this bone, the animal may have been long and weighed kg ( lb). However, as no further evidence of sauropods of this size has been found, and the discoverer, Edward Cope, had made typographic errors before, it is likely to have been an extreme overestimation. As of 2018, "Argentinosaurus" and "Patagotitan" are considered by paleontologists as the largest dinosaurs known from reasonable remains. The largest carnivorous dinosaur was "Spinosaurus", reaching a length of , and weighing 7–20.9 tonnes (7.7–23 short tons). Other large carnivorous theropods included "Giganotosaurus", "Carcharodontosaurus" and "Tyrannosaurus". "Therizinosaurus" and "Deinocheirus" were among the tallest of the theropods. The largest Ornithischian dinosaur was probably the hadrosaurid "Shantungosaurus" which measured and weighed about . The smallest dinosaur known is the bee hummingbird, with a length of only and mass of around . The smallest known non-avialan dinosaurs were about the size of pigeons and were those theropods most closely related to birds. For example, "Anchiornis huxleyi" is currently the smallest non-avialan dinosaur described from an adult specimen, with an estimated weight of 110 grams and a total skeletal length of . The smallest herbivorous non-avialan dinosaurs included "Microceratus" and "Wannanosaurus", at about long each. Many modern birds are highly social, often found living in flocks. There is general agreement that some behaviors that are common in birds, as well as in crocodiles (birds' closest living relatives), were also common among extinct dinosaur groups. Interpretations of behavior in fossil species are generally based on the pose of skeletons and their habitat, computer simulations of their biomechanics, and comparisons with modern animals in similar ecological niches. The first potential evidence for herding or flocking as a widespread behavior common to many dinosaur groups in addition to birds was the 1878 discovery of 31 "Iguanodon bernissartensis", ornithischians that were then thought to have perished together in Bernissart, Belgium, after they fell into a deep, flooded sinkhole and drowned. Other mass-death sites have been discovered subsequently. Those, along with multiple trackways, suggest that gregarious behavior was common in many early dinosaur species. Trackways of hundreds or even thousands of herbivores indicate that duck-bills (hadrosaurids) may have moved in great herds, like the American bison or the African Springbok. Sauropod tracks document that these animals traveled in groups composed of several different species, at least in Oxfordshire, England, although there is no evidence for specific herd structures. 
Congregating into herds may have evolved for defense, for migratory purposes, or to provide protection for young. There is evidence that many types of slow-growing dinosaurs, including various theropods, sauropods, ankylosaurians, ornithopods, and ceratopsians, formed aggregations of immature individuals. One example is a site in Inner Mongolia that has yielded the remains of over 20 "Sinornithomimus", from one to seven years old. This assemblage is interpreted as a social group that was trapped in mud. The interpretation of dinosaurs as gregarious has also extended to depicting carnivorous theropods as pack hunters working together to bring down large prey. However, this lifestyle is uncommon among modern birds, crocodiles, and other reptiles, and the taphonomic evidence suggesting mammal-like pack hunting in such theropods as "Deinonychus" and "Allosaurus" can also be interpreted as the results of fatal disputes between feeding animals, as is seen in many modern diapsid predators. The crests and frills of some dinosaurs, like the marginocephalians, theropods and lambeosaurines, may have been too fragile to be used for active defense, and so they were likely used for sexual or aggressive displays, though little is known about dinosaur mating and territorialism. Head wounds from bites suggest that theropods, at least, engaged in active aggressive confrontations. From a behavioral standpoint, one of the most valuable dinosaur fossils was discovered in the Gobi Desert in 1971. It included a "Velociraptor" attacking a "Protoceratops", providing evidence that dinosaurs did indeed attack each other. Additional evidence for attacking live prey is the partially healed tail of an "Edmontosaurus", a hadrosaurid dinosaur; the tail is damaged in such a way that shows the animal was bitten by a tyrannosaur but survived. Cannibalism amongst some species of dinosaurs was confirmed by tooth marks found in Madagascar in 2003, involving the theropod "Majungasaurus". Comparisons between the scleral rings of dinosaurs and modern birds and reptiles have been used to infer daily activity patterns of dinosaurs. Although it has been suggested that most dinosaurs were active during the day, these comparisons have shown that small predatory dinosaurs such as dromaeosaurids, "Juravenator", and "Megapnosaurus" were likely nocturnal. Large and medium-sized herbivorous and omnivorous dinosaurs such as ceratopsians, sauropodomorphs, hadrosaurids, ornithomimosaurs may have been cathemeral, active during short intervals throughout the day, although the small ornithischian "Agilisaurus" was inferred to be diurnal. Based on current fossil evidence from dinosaurs such as "Oryctodromeus", some ornithischian species seem to have led a partially fossorial (burrowing) lifestyle. Many modern birds are arboreal (tree climbing), and this was also true of many Mesozoic birds, especially the enantiornithines. While some early bird-like species may have already been arboreal as well (including dromaeosaurids such as "Microraptor") most non-avialan dinosaurs seem to have relied on land-based locomotion. A good understanding of how dinosaurs moved on the ground is key to models of dinosaur behavior; the science of biomechanics, pioneered by Robert McNeill Alexander, has provided significant insight in this area. 
For example, studies of the forces exerted by muscles and gravity on dinosaurs' skeletal structure have investigated how fast dinosaurs could run, whether diplodocids could create sonic booms via whip-like tail snapping, and whether sauropods could float. Modern birds are known to communicate using visual and auditory signals, and the wide diversity of visual display structures among fossil dinosaur groups, such as horns, frills, crests, sails and feathers, suggests that visual communication has always been important in dinosaur biology. Reconstructions of the plumage color of "Anchiornis huxleyi" suggest the importance of color in visual communication in non-avian dinosaurs. The evolution of dinosaur vocalization is less certain. Paleontologist Phil Senter suggests that non-avian dinosaurs relied mostly on visual displays and possibly non-vocal acoustic sounds like hissing, jaw grinding or clapping, splashing and wing beating (possible in winged maniraptoran dinosaurs). He states they were unlikely to have been capable of vocalizing, since their closest relatives, crocodilians and birds, use different means to vocalize, the former via the larynx and the latter through the unique syrinx, suggesting that these vocal organs evolved independently and that their common ancestor was mute. The earliest known remains of a syrinx, an organ with enough mineral content to fossilize, were found in a specimen of the duck-like "Vegavis iaai" dated to 69–66 million years ago, and this organ is unlikely to have existed in non-avian dinosaurs. However, in contrast to Senter, the researchers have suggested that dinosaurs could vocalize and that the syrinx-based vocal system of birds evolved from a larynx-based one, rather than the two systems evolving independently. A 2016 study suggests that dinosaurs produced closed-mouth vocalizations like cooing, which occur in both crocodilians and birds as well as other reptiles. Such vocalizations evolved independently in extant archosaurs numerous times, following increases in body size. The crests of the Lambeosaurini and nasal chambers of ankylosaurids have been suggested to function in vocal resonance, though Senter states that the presence of resonance chambers in some dinosaurs is not necessarily evidence of vocalization, as modern snakes have such chambers, which intensify their hisses. All dinosaurs lay amniotic eggs with hard shells made mostly of calcium carbonate. Eggs are usually laid in a nest. Most species create somewhat elaborate nests, which can be cups, domes, plates, beds, scrapes, mounds, or burrows. Some species of modern bird have no nests; the cliff-nesting common guillemot lays its eggs on bare rock, and male emperor penguins keep eggs between their body and feet. Primitive birds and many non-avialan dinosaurs often laid eggs in communal nests, with males primarily incubating the eggs. While modern birds have only one functional oviduct and lay one egg at a time, more primitive birds and dinosaurs had two oviducts, like crocodiles. Some non-avialan dinosaurs, such as "Troodon", exhibited iterative laying, where the adult might lay a pair of eggs every one or two days, and then ensured simultaneous hatching by delaying brooding until all eggs were laid. When laying eggs, females grow a special type of bone between the hard outer bone and the marrow of their limbs. This medullary bone, which is rich in calcium, is used to make eggshells.
The discovery of medullary bone in a "Tyrannosaurus rex" skeleton provided evidence of this tissue in extinct dinosaurs and, for the first time, allowed paleontologists to establish the sex of a fossil dinosaur specimen. Further research has found medullary bone in the carnosaur "Allosaurus" and the ornithopod "Tenontosaurus". Because the line of dinosaurs that includes "Allosaurus" and "Tyrannosaurus" diverged from the line that led to "Tenontosaurus" very early in the evolution of dinosaurs, this suggests that the production of medullary tissue is a general characteristic of all dinosaurs. Another widespread trait among modern birds (but see below in regard to fossil groups and extant megapodes) is parental care for young after hatching. Jack Horner's 1978 discovery of a "Maiasaura" ("good mother lizard") nesting ground in Montana demonstrated that parental care continued long after hatching among ornithopods. A specimen of the Mongolian oviraptorid "Citipati osmolskae" was discovered in a chicken-like brooding position in 1993, which may indicate that they had begun using an insulating layer of feathers to keep the eggs warm. A dinosaur embryo (pertaining to the prosauropod "Massospondylus") was found without teeth, indicating that some parental care was required to feed the young dinosaurs. Trackways have also confirmed parental behavior among ornithopods from the Isle of Skye in northwestern Scotland. However, there is ample evidence of superprecociality among many dinosaur species, particularly theropods. For instance, non-ornithuromorph birds have been abundantly demonstrated to have had slow growth rates, megapode-like egg-burying behavior and the ability to fly soon after hatching. Both "Tyrannosaurus rex" and "Troodon formosus" had juveniles that were clearly superprecocial and likely occupied different ecological niches than the adults. Superprecociality has been inferred for sauropods. Because both modern crocodilians and birds have four-chambered hearts (albeit modified in crocodilians), it is likely that this is a trait shared by all archosaurs, including all dinosaurs. While all modern birds have high metabolisms and are "warm blooded" (endothermic), a vigorous debate has been ongoing since the 1960s regarding how far back in the dinosaur lineage this trait extends. Scientists disagree as to whether non-avian dinosaurs were endothermic, ectothermic, or some combination of both. After non-avian dinosaurs were discovered, paleontologists first posited that they were ectothermic. This supposed "cold-bloodedness" was used to imply that the ancient dinosaurs were relatively slow, sluggish organisms, even though many modern reptiles are fast and light-footed despite relying on external sources of heat to regulate their body temperature. The idea of dinosaurs as ectothermic and sluggish remained a prevalent view until Robert T. "Bob" Bakker, an early proponent of dinosaur endothermy, published an influential paper on the topic in 1968. Modern evidence indicates that even non-avian dinosaurs and birds thrived in cooler temperate climates, and that at least some early species must have regulated their body temperature by internal biological means (aided by the animals' bulk in large species and feathers or other body coverings in smaller species). Evidence of endothermy in Mesozoic dinosaurs includes the discovery of polar dinosaurs in Australia and Antarctica as well as analysis of blood-vessel structures within fossil bones that are typical of endotherms.
Scientific debate continues regarding the specific ways in which dinosaur temperature regulation evolved. In saurischian dinosaurs, higher metabolisms were supported by the evolution of the avian respiratory system, characterized by an extensive system of air sacs that extended the lungs and invaded many of the bones in the skeleton, making them hollow. Early avian-style respiratory systems with air sacs may have been capable of sustaining higher activity levels than those of mammals of similar size and build. In addition to providing a very efficient supply of oxygen, the rapid airflow would have been an effective cooling mechanism, which is essential for animals that are active but too large to get rid of all the excess heat through their skin. Like other reptiles, dinosaurs are primarily uricotelic; that is, their kidneys extract nitrogenous wastes from the bloodstream and excrete them as uric acid, instead of urea or ammonia, via the ureters into the intestine. In most living species, uric acid is excreted along with feces as a semisolid waste. However, at least some modern birds (such as hummingbirds) can be facultatively ammonotelic, excreting most of the nitrogenous wastes as ammonia. They also excrete creatine, rather than creatinine like mammals. This material, as well as the output of the intestines, emerges from the cloaca. In addition, many species regurgitate pellets, and fossil pellets that may have come from dinosaurs are known from as long ago as the Cretaceous period. The possibility that dinosaurs were the ancestors of birds was first suggested in 1868 by Thomas Henry Huxley. After the work of Gerhard Heilmann in the early 20th century, the theory of birds as dinosaur descendants was abandoned in favor of the idea of their being descendants of generalized thecodonts, with the key piece of evidence being the supposed lack of clavicles in dinosaurs. However, as later discoveries showed, clavicles (or a single fused wishbone, which derived from separate clavicles) were not actually absent; they had been found as early as 1924 in "Oviraptor", but misidentified as an interclavicle. In the 1970s, John Ostrom revived the dinosaur–bird theory, which gained momentum in the coming decades with the advent of cladistic analysis, and a great increase in the discovery of small theropods and early birds. Of particular note have been the fossils of the Yixian Formation, where a variety of theropods and early birds have been found, often with feathers of some type. Birds share over a hundred distinct anatomical features with theropod dinosaurs, which are now generally accepted to have been their closest ancient relatives. They are most closely allied with maniraptoran coelurosaurs. A minority of scientists, most notably Alan Feduccia and Larry Martin, have proposed other evolutionary paths, including revised versions of Heilmann's basal archosaur proposal, or that maniraptoran theropods are the ancestors of birds but themselves are not dinosaurs, only convergent with dinosaurs. Feathers are one of the most recognizable characteristics of modern birds, and a trait that was shared by many other dinosaur groups. Based on the current distribution of fossil evidence, it appears that feathers were an ancestral dinosaurian trait, though one that may have been selectively lost in some species. Direct fossil evidence of feathers or feather-like structures has been discovered in a diverse array of species in many non-avian dinosaur groups, both among saurischians and ornithischians.
Simple, branched, feather-like structures are known from heterodontosaurids, primitive neornithischians and theropods, and primitive ceratopsians. Evidence for true, vaned feathers similar to the flight feathers of modern birds has been found only in the theropod subgroup Maniraptora, which includes oviraptorosaurs, troodontids, dromaeosaurids, and birds. Feather-like structures known as pycnofibres have also been found in pterosaurs, suggesting the possibility that feather-like filaments may have been common in the bird lineage and evolved before the appearance of dinosaurs themselves. Research into the genetics of American alligators has also revealed that crocodylian scutes do possess feather-keratins during embryonic development, but these keratins are not expressed by the animals before hatching. "Archaeopteryx" was the first fossil found that revealed a potential connection between dinosaurs and birds. It is considered a transitional fossil, in that it displays features of both groups. Brought to light just two years after Darwin's seminal "The Origin of Species", its discovery spurred the nascent debate between proponents of evolutionary biology and creationism. This early bird is so dinosaur-like that, without a clear impression of feathers in the surrounding rock, at least one specimen was mistaken for "Compsognathus". Since the 1990s, a number of additional feathered dinosaurs have been found, providing even stronger evidence of the close relationship between dinosaurs and modern birds. Most of these specimens were unearthed in the lagerstätte of the Yixian Formation, Liaoning, northeastern China, which was part of an island continent during the Cretaceous. Though feathers have been found in only a few locations, it is possible that non-avian dinosaurs elsewhere in the world were also feathered. The lack of widespread fossil evidence for feathered non-avian dinosaurs may be because delicate features like skin and feathers are not often preserved by fossilization and thus are absent from the fossil record. The description of feathered dinosaurs has not been without controversy; perhaps the most vocal critics have been Alan Feduccia and Theagarten Lingham-Soliar, who have proposed that some purported feather-like fossils are the result of the decomposition of collagenous fiber that underlaid the dinosaurs' skin, and that maniraptoran dinosaurs with vaned feathers were not actually dinosaurs, but convergent with dinosaurs. However, their views have for the most part not been accepted by other researchers, to the point that the scientific nature of Feduccia's proposals has been questioned. In 2016, it was reported that a dinosaur tail with feathers had been found enclosed in amber. The fossil is about 99 million years old. Because feathers are often associated with birds, feathered dinosaurs are often touted as the missing link between birds and dinosaurs. However, the multiple skeletal features also shared by the two groups represent another important line of evidence for paleontologists. Areas of the skeleton with important similarities include the neck, pubis, wrist (semi-lunate carpal), arm and pectoral girdle, furcula (wishbone), and breast bone. Comparison of bird and dinosaur skeletons through cladistic analysis strengthens the case for the link. Large meat-eating dinosaurs had a complex system of air sacs similar to those found in modern birds, according to a 2005 investigation led by Patrick M. O'Connor. 
The lungs of theropod dinosaurs (carnivores that walked on two legs and had bird-like feet) likely pumped air into hollow sacs in their skeletons, as is the case in birds. "What was once formally considered unique to birds was present in some form in the ancestors of birds", O'Connor said. In 2008, scientists described "Aerosteon riocoloradensis", the skeleton of which supplies the strongest evidence to date of a dinosaur with a bird-like breathing system. CT-scanning of "Aerosteon"'s fossil bones revealed evidence for the existence of air sacs within the animal's body cavity. Fossils of the troodonts "Mei" and "Sinornithoides" demonstrate that some dinosaurs slept with their heads tucked under their arms. This behavior, which may have helped to keep the head warm, is also characteristic of modern birds. Several deinonychosaur and oviraptorosaur specimens have also been found preserved on top of their nests, likely brooding in a bird-like manner. The ratio between egg volume and body mass of adults among these dinosaurs suggest that the eggs were primarily brooded by the male, and that the young were highly precocial, similar to many modern ground-dwelling birds. Some dinosaurs are known to have used gizzard stones like modern birds. These stones are swallowed by animals to aid digestion and break down food and hard fibers once they enter the stomach. When found in association with fossils, gizzard stones are called gastroliths. The discovery that birds are a type of dinosaur showed that dinosaurs in general are not, in fact, extinct as is commonly stated. However, all non-avian dinosaurs, estimated to have been 628-1078 species, as well as many groups of birds did suddenly become extinct approximately 66 million years ago. It has been suggested that because small mammals, squamata and birds occupied the ecological niches suited for small body size, non-avian dinosaurs never evolved a diverse fauna of small-bodied species, which led to their downfall when large-bodied terrestrial tetrapods were hit by the mass extinction event. Many other groups of animals also became extinct at this time, including ammonites (nautilus-like mollusks), mosasaurs, plesiosaurs, pterosaurs, and many groups of mammals. Significantly, the insects suffered no discernible population loss, which left them available as food for other survivors. This mass extinction is known as the Cretaceous–Paleogene extinction event. The nature of the event that caused this mass extinction has been extensively studied since the 1970s; at present, several related theories are supported by paleontologists. Though the consensus is that an impact event was the primary cause of dinosaur extinction, some scientists cite other possible causes, or support the idea that a confluence of several factors was responsible for the sudden disappearance of dinosaurs from the fossil record. The asteroid collision theory, which was brought to wide attention in 1980 by Walter Alvarez and colleagues, links the extinction event at the end of the Cretaceous period to a bolide impact approximately 66 million years ago. Alvarez "et al." proposed that a sudden increase in iridium levels, recorded around the world in the period's rock stratum, was direct evidence of the impact. The bulk of the evidence now suggests that a bolide wide hit in the vicinity of the Yucatán Peninsula (in southeastern Mexico), creating the approximately Chicxulub Crater and triggering the mass extinction. 
Scientists are not certain whether dinosaurs were thriving or declining before the impact event. Some scientists propose that the meteorite impact caused a long and unnatural drop in Earth's atmospheric temperature, while others claim that it would have instead created an unusual heat wave. The consensus among scientists who support this theory is that the impact caused extinctions both directly (by heat from the meteorite impact) and also indirectly (via a worldwide cooling brought about when matter ejected from the impact crater reflected thermal radiation from the sun). Although the speed of extinction cannot be deduced from the fossil record alone, various models suggest that the extinction was extremely rapid, being down to hours rather than years. Before 2000, arguments that the Deccan Traps flood basalts caused the extinction were usually linked to the view that the extinction was gradual, as the flood basalt events were thought to have started around 68 million years ago and lasted for over 2 million years. However, there is evidence that two thirds of the Deccan Traps were created in only 1 million years about 66 million years ago, and so these eruptions would have caused a fairly rapid extinction, possibly over a period of thousands of years, but still longer than would be expected from a single impact event. The Deccan Traps in India could have caused extinction through several mechanisms, including the release into the air of dust and sulfuric aerosols, which might have blocked sunlight and thereby reduced photosynthesis in plants. In addition, Deccan Trap volcanism might have resulted in carbon dioxide emissions, which would have increased the greenhouse effect when the dust and aerosols cleared from the atmosphere. Before the mass extinction of the dinosaurs, the release of volcanic gases during the formation of the Deccan Traps "contributed to an apparently massive global warming. Some data point to an average rise in temperature of in the last half million years before the impact [at Chicxulub]." In the years when the Deccan Traps theory was linked to a slower extinction, Luis Alvarez (who died in 1988) replied that paleontologists were being misled by sparse data. While his assertion was not initially well-received, later intensive field studies of fossil beds lent weight to his claim. Eventually, most paleontologists began to accept the idea that the mass extinctions at the end of the Cretaceous were largely or at least partly due to a massive Earth impact. However, even Walter Alvarez has acknowledged that there were other major changes on Earth even before the impact, such as a drop in sea level and massive volcanic eruptions that produced the Indian Deccan Traps, and these may have contributed to the extinctions. Non-avian dinosaur remains are occasionally found above the Cretaceous–Paleogene boundary. In 2001, paleontologists Zielinski and Budahn reported the discovery of a single hadrosaur leg-bone fossil in the San Juan Basin, New Mexico, and described it as evidence of Paleocene dinosaurs. The formation in which the bone was discovered has been dated to the early Paleocene epoch, approximately 64.5 million years ago. If the bone was not re-deposited into that stratum by weathering action, it would provide evidence that some dinosaur populations may have survived at least a half million years into the Cenozoic Era. 
Other evidence includes the finding of dinosaur remains in the Hell Creek Formation up to above the Cretaceous–Paleogene boundary, representing  years of elapsed time. Similar reports have come from other parts of the world, including China. Many scientists, however, dismissed the supposed Paleocene dinosaurs as re-worked, that is, washed out of their original locations and then re-buried in much later sediments. Direct dating of the bones themselves has supported the later date, with U–Pb dating methods resulting in a precise age of 64.8 ± 0.9 million years ago. If correct, the presence of a handful of dinosaurs in the early Paleocene would not change the underlying facts of the extinction. Dinosaur fossils have been known for millennia, although their true nature was not recognized. The Chinese, whose modern word for dinosaur is "kǒnglóng" (恐龍, or "terrible dragon"), considered them to be dragon bones and documented them as such. For example, "Hua Yang Guo Zhi", a book written by Chang Qu during the Western Jin Dynasty (265–316), reported the discovery of dragon bones at Wucheng in Sichuan Province. Villagers in central China have long unearthed fossilized "dragon bones" for use in traditional medicines, a practice that continues today. In Europe, dinosaur fossils were generally believed to be the remains of giants and other biblical creatures. Scholarly descriptions of what would now be recognized as dinosaur bones first appeared in the late 17th century in England. Part of a bone, now known to have been the femur of a "Megalosaurus", was recovered from a limestone quarry at Cornwell near Chipping Norton, Oxfordshire, in 1676. The fragment was sent to Robert Plot, Professor of Chemistry at the University of Oxford and first curator of the Ashmolean Museum, who published a description in his "Natural History of Oxfordshire" in 1677. He correctly identified the bone as the lower extremity of the femur of a large animal, and recognized that it was too large to belong to any known species. He therefore concluded it to be the thigh bone of a giant human similar to those mentioned in the Bible. In 1699, Edward Lhuyd, a friend of Sir Isaac Newton, was responsible for the first published scientific treatment of what would now be recognized as a dinosaur when he described and named a sauropod tooth, "Rutellum implicatum", that had been found in Caswell, near Witney, Oxfordshire. Between 1815 and 1824, the Rev William Buckland, a professor of geology at Oxford, collected more fossilized bones of "Megalosaurus" and became the first person to describe a dinosaur in a scientific journal. The second dinosaur genus to be identified, "Iguanodon", was discovered in 1822 by Mary Ann Mantell – the wife of English geologist Gideon Mantell. Gideon Mantell recognized similarities between his fossils and the bones of modern iguanas. He published his findings in 1825. The study of these "great fossil lizards" soon became of great interest to European and American scientists, and in 1842 the English paleontologist Richard Owen coined the term "dinosaur". He recognized that the remains that had been found so far, "Iguanodon", "Megalosaurus" and "Hylaeosaurus", shared a number of distinctive features, and so decided to present them as a distinct taxonomic group. With the backing of Prince Albert, the husband of Queen Victoria, Owen established the Natural History Museum, London, to display the national collection of dinosaur fossils and other biological and geological exhibits. 
In 1858, William Parker Foulke discovered the first known American dinosaur, in marl pits in the small town of Haddonfield, New Jersey. (Although fossils had been found before, their nature had not been correctly discerned.) The creature was named "Hadrosaurus foulkii". It was an extremely important find: "Hadrosaurus" was one of the first nearly complete dinosaur skeletons found (the first was in 1834, in Maidstone, England), and it was clearly a bipedal creature. This was a revolutionary discovery as, until that point, most scientists had believed dinosaurs walked on four feet, like other lizards. Foulke's discoveries sparked a wave of dinosaur mania in the United States. Dinosaur mania was exemplified by the fierce rivalry between Edward Drinker Cope and Othniel Charles Marsh, both of whom raced to be the first to find new dinosaurs in what came to be known as the Bone Wars. The feud probably originated when Marsh publicly pointed out that Cope's reconstruction of an "Elasmosaurus" skeleton was flawed: Cope had inadvertently placed the plesiosaur's head at what should have been the animal's tail end. The fight between the two scientists lasted for over 30 years, ending in 1897 when Cope died after spending his entire fortune on the dinosaur hunt. Marsh 'won' the contest primarily because he was better funded through a relationship with the US Geological Survey. Unfortunately, many valuable dinosaur specimens were damaged or destroyed due to the pair's rough methods: for example, their diggers often used dynamite to unearth bones (a method modern paleontologists would find appalling). Despite their unrefined methods, the contributions of Cope and Marsh to paleontology were vast: Marsh unearthed 86 new species of dinosaur and Cope discovered 56, a total of 142 new species. Cope's collection is now at the American Museum of Natural History in New York, while Marsh's is on display at the Peabody Museum of Natural History at Yale University. After 1897, the search for dinosaur fossils extended to every continent, including Antarctica. The first Antarctic dinosaur to be discovered, the ankylosaurid "Antarctopelta oliveroi", was found on James Ross Island in 1986, although it was 1994 before an Antarctic species, the theropod "Cryolophosaurus ellioti", was formally named and described in a scientific journal. Current dinosaur "hot spots" include southern South America (especially Argentina) and China. China in particular has produced many exceptional feathered dinosaur specimens due to the unique geology of its dinosaur beds, as well as an ancient arid climate particularly conducive to fossilization. The field of dinosaur research has enjoyed a surge in activity that began in the 1970s and is ongoing. This was triggered, in part, by John Ostrom's discovery of "Deinonychus", an active predator that may have been warm-blooded, in marked contrast to the then-prevailing image of dinosaurs as sluggish and cold-blooded. Vertebrate paleontology has become a global science. Major new dinosaur discoveries have been made by paleontologists working in previously unexploited regions, including India, South America, Madagascar, Antarctica, and most significantly China (the amazingly well-preserved feathered dinosaurs in China have further consolidated the link between dinosaurs and their living descendants, modern birds). The widespread application of cladistics, which rigorously analyzes the relationships between biological organisms, has also proved tremendously useful in classifying dinosaurs. 
Cladistic analysis, among other modern techniques, helps to compensate for an often incomplete and fragmentary fossil record. One of the best examples of soft-tissue impressions in a fossil dinosaur was discovered in Pietraroia, Italy. The discovery was reported in 1998, and described the specimen of a small, very young coelurosaur, "Scipionyx samniticus". The fossil includes portions of the intestines, colon, liver, muscles, and windpipe of this immature dinosaur. In the March 2005 issue of "Science", the paleontologist Mary Higby Schweitzer and her team announced the discovery of flexible material resembling actual soft tissue inside a 68-million-year-old "Tyrannosaurus rex" leg bone from the Hell Creek Formation in Montana. After recovery, the tissue was rehydrated by the science team. When the fossilized bone was treated over several weeks to remove mineral content from the fossilized bone-marrow cavity (a process called demineralization), Schweitzer found evidence of intact structures such as blood vessels, bone matrix, and connective tissue (bone fibers). Scrutiny under the microscope further revealed that the putative dinosaur soft tissue had retained fine structures (microstructures) even at the cellular level. The exact nature and composition of this material, and the implications of Schweitzer's discovery, are not yet clear. In 2009, a team including Schweitzer announced that, using even more careful methodology, they had duplicated their results by finding similar soft tissue in a duck-billed dinosaur, "Brachylophosaurus canadensis", found in the Judith River Formation of Montana. This included even more detailed tissue, down to preserved bone cells that seem even to have visible remnants of nuclei and what seem to be red blood cells. Among other materials found in the bone was collagen, as in the "Tyrannosaurus" bone. The type of collagen an animal has in its bones varies according to its DNA and, in both cases, this collagen was of the same type found in modern chickens and ostriches. The extraction of ancient DNA from dinosaur fossils has been reported on two separate occasions; upon further inspection and peer review, however, neither of these reports could be confirmed. However, a functional peptide involved in the vision of a theoretical dinosaur has been inferred using analytical phylogenetic reconstruction methods on gene sequences of related modern species such as reptiles and birds. In addition, several proteins, including hemoglobin, have putatively been detected in dinosaur fossils. In 2015, researchers reported finding structures similar to blood cells and collagen fibers, preserved in the bone fossils of six Cretaceous dinosaur specimens, which are approximately 75 million years old. By human standards, dinosaurs were creatures of fantastic appearance and often enormous size. As such, they have captured the popular imagination and become an enduring part of human culture. Entry of the word "dinosaur" into the common vernacular reflects the animals' cultural importance: in English, "dinosaur" is commonly used to describe anything that is impractically large, obsolete, or bound for extinction. Public enthusiasm for dinosaurs first developed in Victorian England, where in 1854, three decades after the first scientific descriptions of dinosaur remains, a menagerie of lifelike dinosaur sculptures were unveiled in London's Crystal Palace Park. The Crystal Palace dinosaurs proved so popular that a strong market in smaller replicas soon developed. 
In subsequent decades, dinosaur exhibits opened at parks and museums around the world, ensuring that successive generations would be introduced to the animals in an immersive and exciting way. Dinosaurs' enduring popularity, in its turn, has resulted in significant public funding for dinosaur science, and has frequently spurred new discoveries. In the United States, for example, the competition between museums for public attention led directly to the Bone Wars of the 1880s and 1890s, during which a pair of feuding paleontologists made enormous scientific contributions. The popular preoccupation with dinosaurs has ensured their appearance in literature, film, and other media. Beginning in 1852 with a passing mention in Charles Dickens's "Bleak House", dinosaurs have been featured in large numbers of fictional works. Jules Verne's 1864 novel "Journey to the Center of the Earth", Sir Arthur Conan Doyle's 1912 book "The Lost World", the iconic 1933 film "King Kong", the 1954 film "Godzilla" and its many sequels, and the best-selling 1990 novel "Jurassic Park" by Michael Crichton and its 1993 film adaptation are just a few notable examples of dinosaur appearances in fiction. Authors of general-interest non-fiction works about dinosaurs, including some prominent paleontologists, have often sought to use the animals as a way to educate readers about science in general. Dinosaurs are ubiquitous in advertising; numerous companies have referenced dinosaurs in printed or televised advertisements, either in order to sell their own products or in order to characterize their rivals as slow-moving, dim-witted, or obsolete. Major depressive disorder Major depressive disorder (MDD), also known simply as depression, is a mental disorder characterized by at least two weeks of low mood that is present across most situations. It is often accompanied by low self-esteem, loss of interest in normally enjoyable activities, low energy, and pain without a clear cause. People may also occasionally have false beliefs or see or hear things that others cannot. Some people have periods of depression separated by years in which they are normal, while others nearly always have symptoms present. Major depressive disorder can negatively affect a person's personal, work, or school life as well as sleeping, eating habits, and general health. Between 2–7% of adults with major depression die by suicide, and up to 60% of people who die by suicide had depression or another mood disorder. The cause is believed to be a combination of genetic, environmental, and psychological factors. Risk factors include a family history of the condition, major life changes, certain medications, chronic health problems, and substance abuse. About 40% of the risk appears to be related to genetics. The diagnosis of major depressive disorder is based on the person's reported experiences and a mental status examination. There is no laboratory test for major depression. Testing, however, may be done to rule out physical conditions that can cause similar symptoms. Major depression should be differentiated from sadness, which is a normal part of life and is less severe. The United States Preventive Services Task Force (USPSTF) recommends screening for depression among those over the age of 12, while a prior Cochrane review found that the routine use of screening questionnaires has little effect on detection or treatment. Typically, people are treated with counseling and antidepressant medication.
Medication appears to be effective, but the effect may only be significant in the most severely depressed. It is unclear whether medications affect the risk of suicide. Types of counseling used include cognitive behavioral therapy (CBT) and interpersonal therapy. If other measures are not effective, electroconvulsive therapy (ECT) may be tried. Hospitalization may be necessary in cases with a risk of harm to self and may occasionally occur against a person's wishes. Major depressive disorder affected approximately 216 million people (3% of the world's population) in 2015. The percentage of people who are affected at one point in their life varies from 7% in Japan to 21% in France. Lifetime rates are higher in the developed world (15%) compared to the developing world (11%). It causes the second most years lived with disability, after low back pain. The most common time of onset is in a person's 20s and 30s. Females are affected about twice as often as males. The American Psychiatric Association added "major depressive disorder" to the "Diagnostic and Statistical Manual of Mental Disorders" (DSM-III) in 1980. It was a split of the previous depressive neurosis in the DSM-II, which also encompassed the conditions now known as dysthymia and adjustment disorder with depressed mood. Those currently or previously affected may be stigmatized. Major depression significantly affects a person's family and personal relationships, work or school life, sleeping and eating habits, and general health. Its impact on functioning and well-being has been compared to that of other chronic medical conditions such as diabetes. A person having a major depressive episode usually exhibits a very low mood, which pervades all aspects of life, and an inability to experience pleasure in activities that were formerly enjoyed. Depressed people may be preoccupied with, or ruminate over, thoughts and feelings of worthlessness, inappropriate guilt or regret, helplessness, hopelessness, and self-hatred. In severe cases, depressed people may have symptoms of psychosis. These symptoms include delusions or, less commonly, hallucinations, usually unpleasant. Other symptoms of depression include poor concentration and memory (especially in those with melancholic or psychotic features), withdrawal from social situations and activities, reduced sex drive, irritability, and thoughts of death or suicide. Insomnia is common among the depressed. In the typical pattern, a person wakes very early and cannot get back to sleep. Hypersomnia, or oversleeping, can also happen. Some antidepressants may also cause insomnia due to their stimulating effect. A depressed person may report multiple physical symptoms such as fatigue, headaches, or digestive problems; physical complaints are the most common presenting problem in developing countries, according to the World Health Organization's criteria for depression. Appetite often decreases, with resulting weight loss, although increased appetite and weight gain occasionally occur. Family and friends may notice that the person's behavior is either agitated or lethargic. Older depressed people may have cognitive symptoms of recent onset, such as forgetfulness, and a more noticeable slowing of movements. Depression often coexists with physical disorders common among the elderly, such as stroke, other cardiovascular diseases, Parkinson's disease, and chronic obstructive pulmonary disease. 
Depressed children may often display an irritable mood rather than a depressed mood, and show varying symptoms depending on age and situation. Most lose interest in school and show a decline in academic performance. They may be described as clingy, demanding, dependent, or insecure. Diagnosis may be delayed or missed when symptoms are interpreted as normal moodiness. Major depression frequently co-occurs with other psychiatric problems. The 1990–92 "National Comorbidity Survey" (US) reports that half of those with major depression also have lifetime anxiety and its associated disorders such as generalized anxiety disorder. Anxiety symptoms can have a major impact on the course of a depressive illness, with delayed recovery, increased risk of relapse, greater disability and increased suicide attempts. There are increased rates of alcohol and drug abuse and particularly dependence, and around a third of individuals diagnosed with attention deficit hyperactivity disorder (ADHD) develop comorbid depression. Post-traumatic stress disorder and depression often co-occur. Depression may also coexist with ADHD, complicating the diagnosis and treatment of both. Depression is also frequently comorbid with alcohol abuse and personality disorders. Depression and pain often co-occur. One or more pain symptoms are present in 65% of depressed patients, and anywhere from 5 to 85% of patients with pain will be suffering from depression, depending on the setting; there is a lower prevalence in general practice, and higher in specialty clinics. The diagnosis of depression is often delayed or missed, and the outcome can worsen if the depression is noticed but completely misunderstood. Depression is also associated with a 1.5- to 2-fold increased risk of cardiovascular disease, independent of other known risk factors, and is itself linked directly or indirectly to risk factors such as smoking and obesity. People with major depression are less likely to follow medical recommendations for treating and preventing cardiovascular disorders, which further increases their risk of medical complications. In addition, cardiologists may not recognize underlying depression that complicates a cardiovascular problem under their care. The cause of major depressive disorder is unknown. The biopsychosocial model proposes that biological, psychological, and social factors all play a role in causing depression. The diathesis–stress model specifies that depression results when a preexisting vulnerability, or diathesis, is activated by stressful life events. The preexisting vulnerability can be either genetic, implying an interaction between nature and nurture, or schematic, resulting from views of the world learned in childhood. Childhood abuse, whether physical, sexual or psychological, is a risk factor for depression, among other psychiatric issues that co-occur such as anxiety and drug abuse. Childhood trauma also correlates with severity of depression, lack of response to treatment and length of illness. However, some are more susceptible to developing mental illness such as depression after trauma, and various genes have been suggested to control susceptibility. The short allele of the 5-HTTLPR, the serotonin transporter gene's promoter region, has been associated with increased risk of depression. However, since the 1990s results have been inconsistent, with three recent reviews finding an effect and two finding none. 
Other genes that have been linked to a gene-environment interaction include CRHR1, FKBP5 and BDNF, the first two of which are related to the stress reaction of the HPA axis, and the latter of which is involved in neurogenesis. A 2018 study found 44 chromosomal regions that were linked to MDD. Depression may also come secondary to a chronic or terminal medical condition such as HIV/AIDS or asthma, and may be labeled "secondary depression". It is unknown whether the underlying diseases induce depression through an effect on quality of life or through shared etiologies (such as degeneration of the basal ganglia in Parkinson's disease or immune dysregulation in asthma). Depression may also be iatrogenic (the result of healthcare), such as drug-induced depression. Therapies associated with depression include interferon therapy, beta-blockers, isotretinoin, contraceptives, cardiac agents, anticonvulsants, antimigraine drugs, antipsychotics, and hormonal agents such as gonadotropin-releasing hormone agonists. Drug abuse in early age is also associated with increased risk of developing depression later in life. Depression that occurs after childbirth is called postpartum depression and is thought to be the result of hormonal changes associated with pregnancy and childbirth. Seasonal affective disorder, a type of depression associated with seasonal changes in sunlight, is thought to be the result of decreased sunlight. The pathophysiology of depression is not yet understood, but the current theories center around monoaminergic systems, the circadian rhythm, immunological dysfunction, HPA axis dysfunction and structural or functional abnormalities of emotional circuits. The monoamine theory, derived from the efficacy of monoaminergic drugs in treating depression, was the dominant theory until recently. The theory postulates that insufficient activity of monoamine neurotransmitters is the primary cause of depression. Evidence for the monoamine theory comes from multiple areas. First, acute depletion of tryptophan, a necessary precursor of serotonin, a monoamine, can cause depression in those in remission or relatives of depressed patients; this suggests that decreased serotonergic neurotransmission is important in depression. Second, the correlation between depression risk and polymorphisms in the 5-HTTLPR, the promoter region of the serotonin transporter gene, suggests a link. Third, decreased size of the locus coeruleus, decreased activity of tyrosine hydroxylase, increased density of alpha-2 adrenergic receptors, and evidence from rat models suggest decreased adrenergic neurotransmission in depression. Furthermore, decreased levels of homovanillic acid, altered response to dextroamphetamine, responses of depressive symptoms to dopamine receptor agonists, decreased dopamine receptor D1 binding in the striatum, and polymorphism of dopamine receptor genes implicate dopamine in depression. Lastly, increased activity of monoamine oxidase, which degrades monoamines, has been associated with depression. However, this theory is inconsistent with the fact that serotonin depletion does not cause depression in healthy persons, the fact that antidepressants instantly increase levels of monoamines but take weeks to work, and the existence of atypical antidepressants which can be effective despite not targeting this pathway. 
One proposed explanation for the therapeutic lag, and further support for the deficiency of monoamines, is a desensitization of self-inhibition in raphe nuclei by the increased serotonin mediated by antidepressants. However, disinhibition of the dorsal raphe has been proposed to occur as a result of "decreased" serotonergic activity in tryptophan depletion, resulting in a depressed state mediated by increased serotonin. Further countering the monoamine hypothesis is the fact that rats with lesions of the dorsal raphe are not more depressive than controls, the finding of increased jugular 5-HIAA in depressed patients that normalized with SSRI treatment, and the preference for carbohydrates in depressed patients. Already limited, the monoamine hypothesis has been further oversimplified when presented to the general public. Immune system abnormalities have been observed, including increased levels of cytokines involved in generating sickness behavior (which overlaps with depression). The effectiveness of nonsteroidal anti-inflammatory drugs (NSAIDs) and cytokine inhibitors in treating depression, and normalization of cytokine levels after successful treatment, further suggest immune system abnormalities in depression. HPA axis abnormalities have been suggested in depression given the association of CRHR1 with depression and the increased frequency of dexamethasone test non-suppression in depressed patients. However, this abnormality is not adequate as a diagnostic tool, because its sensitivity is only 44%. These stress-related abnormalities have been hypothesized to be the cause of hippocampal volume reductions seen in depressed patients. Furthermore, a meta-analysis yielded decreased dexamethasone suppression, and increased response to psychological stressors. Further abnormal results have been observed with the cortisol awakening response, with an increased response being associated with depression. Theories unifying neuroimaging findings have been proposed. The first model proposed is the "Limbic Cortical Model", which involves hyperactivity of the ventral paralimbic regions and hypoactivity of frontal regulatory regions in emotional processing. Another model, the "Cortico-Striatal Model", suggests that abnormalities of the prefrontal cortex in regulating striatal and subcortical structures result in depression. Another model proposes hyperactivity of salience structures in identifying negative stimuli, and hypoactivity of cortical regulatory structures resulting in a negative emotional bias and depression, consistent with emotional bias studies. A diagnostic assessment may be conducted by a suitably trained general practitioner, or by a psychiatrist or psychologist, who records the person's current circumstances, biographical history, current symptoms, and family history. The broad clinical aim is to formulate the relevant biological, psychological, and social factors that may be impacting on the individual's mood. The assessor may also discuss the person's current ways of regulating mood (healthy or otherwise) such as alcohol and drug use. The assessment also includes a mental state examination, which is an assessment of the person's current mood and thought content, in particular the presence of themes of hopelessness or pessimism, self-harm or suicide, and an absence of positive thoughts or plans. Specialist mental health services are rare in rural areas, and thus diagnosis and management are left largely to primary-care clinicians. 
This issue is even more marked in developing countries. The mental health examination may include the use of a rating scale such as the Hamilton Rating Scale for Depression, the Beck Depression Inventory, or the Suicide Behaviors Questionnaire-Revised. The score on a rating scale alone is insufficient to diagnose depression to the satisfaction of the DSM or ICD, but it provides an indication of the severity of symptoms for a time period, so a person who scores above a given cut-off point can be more thoroughly evaluated for a depressive disorder diagnosis. Several rating scales are used for this purpose. Primary-care physicians and other non-psychiatrist physicians are more likely than psychiatrists to underrecognize and undertreat depression, in part because of the physical symptoms that often accompany depression and because of the many potential patient, provider, and system barriers to recognition. A review found that non-psychiatrist physicians miss about two-thirds of cases, though this has improved somewhat in more recent studies. Before diagnosing a major depressive disorder, in general a doctor performs a medical examination and selected investigations to rule out other causes of symptoms. These include blood tests measuring TSH and thyroxine to exclude hypothyroidism; basic electrolytes and serum calcium to rule out a metabolic disturbance; and a full blood count including ESR to rule out a systemic infection or chronic disease. Adverse affective reactions to medications and alcohol misuse are often ruled out as well. Testosterone levels may be evaluated to diagnose hypogonadism, a cause of depression in men. Vitamin D levels might be evaluated, as low levels of vitamin D have been associated with greater risk for depression. Subjective cognitive complaints appear in older depressed people, but they can also be indicative of the onset of a dementing disorder, such as Alzheimer's disease. Cognitive testing and brain imaging can help distinguish depression from dementia. A CT scan can exclude brain pathology in those with psychotic, rapid-onset or otherwise unusual symptoms. In general, investigations are not repeated for a subsequent episode unless there is a medical indication. No biological tests confirm major depression. Biomarkers of depression have been sought to provide an objective method of diagnosis. There are several potential biomarkers, including brain-derived neurotrophic factor (BDNF) and various functional MRI techniques. One study developed a decision tree model for interpreting a series of fMRI scans taken during various activities. In their subjects, the authors of that study were able to achieve a sensitivity of 80% and a specificity of 87%, corresponding to a negative predictive value of 98% and a positive predictive value of 32% (positive and negative likelihood ratios of 6.15 and 0.23, respectively). However, much more research is needed before these tests could be used clinically. The most widely used criteria for diagnosing depressive conditions are found in the American Psychiatric Association's revised fourth edition of the "Diagnostic and Statistical Manual of Mental Disorders" (DSM-IV-TR), and the World Health Organization's "International Statistical Classification of Diseases and Related Health Problems" (ICD-10), which uses the name "depressive episode" for a single episode and "recurrent depressive disorder" for repeated episodes. 
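The likelihood ratios and predictive values quoted above for the fMRI decision-tree study are related by standard diagnostic-test arithmetic. As a sketch, assuming a depression prevalence of roughly 7% in the study sample (an assumption chosen here only so that the reported figures cohere; it is not a number taken from the study):

\[
\mathrm{LR}^{+} = \frac{\text{sensitivity}}{1-\text{specificity}} = \frac{0.80}{1-0.87} \approx 6.15,
\qquad
\mathrm{LR}^{-} = \frac{1-\text{sensitivity}}{\text{specificity}} = \frac{0.20}{0.87} \approx 0.23
\]
\[
\mathrm{PPV} = \frac{0.80 \times 0.07}{0.80 \times 0.07 + 0.13 \times 0.93} \approx 0.32,
\qquad
\mathrm{NPV} = \frac{0.87 \times 0.93}{0.87 \times 0.93 + 0.20 \times 0.07} \approx 0.98
\]

The contrast between the high negative and low positive predictive value illustrates one reason such tests remain far from clinical use: when the condition is uncommon in the tested population, even a reasonably sensitive and specific classifier produces many false positives.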
The latter system is typically used in European countries, while the former is used in the US and many other non-European nations, and the authors of both have worked towards conforming one with the other. Both DSM-IV-TR and ICD-10 mark out typical (main) depressive symptoms. ICD-10 defines three typical depressive symptoms (depressed mood, anhedonia, and reduced energy), at least two of which should be present to establish a diagnosis of depressive disorder. According to DSM-IV-TR, there are two main depressive symptoms: depressed mood and anhedonia. At least one of these must be present to make a diagnosis of major depressive episode. Major depressive disorder is classified as a mood disorder in DSM-IV-TR. The diagnosis hinges on the presence of single or recurrent major depressive episodes. Further qualifiers are used to classify both the episode itself and the course of the disorder. The category Depressive Disorder Not Otherwise Specified is diagnosed if the depressive episode's manifestation does not meet the criteria for a major depressive episode. The ICD-10 system does not use the term "major depressive disorder" but lists very similar criteria for the diagnosis of a depressive episode (mild, moderate or severe); the term "recurrent" may be added if there have been multiple episodes without mania. A major depressive episode is characterized by the presence of a severely depressed mood that persists for at least two weeks. Episodes may be isolated or recurrent and are categorized as mild (few symptoms in excess of minimum criteria), moderate, or severe (marked impact on social or occupational functioning). An episode with psychotic features, commonly referred to as "psychotic depression", is automatically rated as severe. If the patient has had an episode of mania or markedly elevated mood, a diagnosis of bipolar disorder is made instead. Depression without mania is sometimes referred to as "unipolar" because the mood remains at one emotional state or "pole". DSM-IV-TR excludes cases where the symptoms are a result of bereavement, although it is possible for normal bereavement to evolve into a depressive episode if the mood persists and the characteristic features of a major depressive episode develop. The criteria have been criticized because they do not take into account any other aspects of the personal and social context in which depression can occur. In addition, some studies have found little empirical support for the DSM-IV cut-off criteria, indicating they are a diagnostic convention imposed on a continuum of depressive symptoms of varying severity and duration. Excluded are a range of related diagnoses, including dysthymia, which involves a chronic but milder mood disturbance; recurrent brief depression, consisting of briefer depressive episodes; minor depressive disorder, whereby only some symptoms of major depression are present; and adjustment disorder with depressed mood, which denotes low mood resulting from a psychological response to an identifiable event or stressor. The DSM-IV-TR recognizes five further subtypes of MDD, called "specifiers", in addition to noting the length, severity and presence of psychotic features. In 2016, the United States Preventive Services Task Force (USPSTF) recommended screening in adult populations, with evidence that it increases the detection of people with depression and that, with proper treatment, outcomes are improved. They recommend screening in those between the ages of 12 and 18 as well. 
A Cochrane review from 2005 found screening programs do not significantly improve detection rates, treatment, or outcome. To confirm major depressive disorder as the most likely diagnosis, other potential diagnoses must be considered, including dysthymia, adjustment disorder with depressed mood, or bipolar disorder. Dysthymia is a chronic, milder mood disturbance in which a person reports a low mood almost daily over a span of at least two years. The symptoms are not as severe as those for major depression, although people with dysthymia are vulnerable to secondary episodes of major depression (sometimes referred to as "double depression"). Adjustment disorder with depressed mood is a mood disturbance appearing as a psychological response to an identifiable event or stressor, in which the resulting emotional or behavioral symptoms are significant but do not meet the criteria for a major depressive episode. Bipolar disorder, also known as "manic–depressive disorder", is a condition in which depressive phases alternate with periods of mania or hypomania. Although depression is currently categorized as a separate disorder, there is ongoing debate because individuals diagnosed with major depression often experience some hypomanic symptoms, indicating a mood disorder continuum. Further differential diagnoses involve chronic fatigue syndrome. Other disorders need to be ruled out before diagnosing major depressive disorder. They include depressions due to physical illness, medications, and substance abuse. Depression due to physical illness is diagnosed as a Mood disorder due to a general medical condition. This condition is determined based on history, laboratory findings, or physical examination. When the depression is caused by a medication, drug of abuse, or exposure to a toxin, it is then diagnosed as a specific mood disorder (previously called "Substance-induced mood disorder" in the DSM-IV-TR). Preventative efforts may result in decreases in rates of the condition of between 22 and 38%. Eating large amounts of fish may also reduce the risk. Behavioral interventions, such as interpersonal therapy and cognitive-behavioral therapy, are effective at preventing new onset depression. Because such interventions appear to be most effective when delivered to individuals or small groups, it has been suggested that they may be able to reach their large target audience most efficiently through the Internet. However, an earlier meta-analysis found preventive programs with a competence-enhancing component to be superior to behavior-oriented programs overall, and found behavioral programs to be particularly unhelpful for older people, for whom social support programs were uniquely beneficial. In addition, the programs that best prevented depression comprised more than eight sessions, each lasting between 60 and 90 minutes, were provided by a combination of lay and professional workers, had a high-quality research design, reported attrition rates, and had a well-defined intervention. The Netherlands mental health care system provides preventive interventions, such as the "Coping with Depression" course (CWD) for people with sub-threshold depression. The course is claimed to be the most successful of psychoeducational interventions for the treatment and prevention of depression (both for its adaptability to various populations and its results), with a risk reduction of 38% in major depression and an efficacy as a treatment comparing favorably to other psychotherapies. 
The three most common treatments for depression are psychotherapy, medication, and electroconvulsive therapy. Psychotherapy is the treatment of choice (over medication) for people under 18. The UK National Institute for Health and Care Excellence (NICE) 2004 guidelines indicate that antidepressants should not be used for the initial treatment of mild depression, because the risk-benefit ratio is poor. The guidelines recommend that antidepressant treatment in combination with psychosocial interventions should be considered for certain groups of patients. They further note that antidepressant treatment should be continued for at least six months to reduce the risk of relapse, and that SSRIs are better tolerated than tricyclic antidepressants. American Psychiatric Association treatment guidelines recommend that initial treatment should be individually tailored based on factors including severity of symptoms, co-existing disorders, prior treatment experience, and patient preference. Options may include pharmacotherapy, psychotherapy, exercise, electroconvulsive therapy (ECT), transcranial magnetic stimulation (TMS) or light therapy. Antidepressant medication is recommended as an initial treatment choice in people with mild, moderate, or severe major depression, and should be given to all patients with severe depression unless ECT is planned. There is evidence that collaborative care by a team of health care practitioners produces better results than routine single-practitioner care. Treatment options are much more limited in developing countries, where access to mental health staff, medication, and psychotherapy is often difficult. Development of mental health services is minimal in many countries; depression is viewed as a phenomenon of the developed world despite evidence to the contrary, and not as an inherently life-threatening condition. A 2014 Cochrane review found insufficient evidence to determine the effectiveness of psychological versus medical therapy in children. Physical exercise is recommended for management of mild depression, and has a moderate effect on symptoms. Exercise has also been found to be effective for (unipolar) major depression. It is equivalent to the use of medications or psychological therapies in most people. In older people it does appear to decrease depression. Exercise may be recommended to people who are willing, motivated, and physically healthy enough to participate in an exercise program as treatment. There is a small amount of evidence that skipping a night's sleep may improve depressive symptoms, with the effects usually showing up within a day. This effect is usually temporary. Besides sleepiness, this method can cause mania or hypomania as a side effect. In observational studies, smoking cessation has benefits in depression as large as or larger than those of medications. Besides exercise, sleep and diet may play a role in depression, and interventions in these areas may be an effective add-on to conventional methods. Psychotherapy can be delivered to individuals, groups, or families by mental health professionals. A 2015 review found that cognitive behavioral therapy appears to be similar to antidepressant medication in terms of effect. A 2012 review found psychotherapy to be better than no treatment but not other treatments. With more complex and chronic forms of depression, a combination of medication and psychotherapy may be used. 
A 2014 Cochrane review found that work-directed interventions combined with clinical interventions helped to reduce sick days taken by people with depression. There is moderate-quality evidence that psychological therapies are a useful addition to standard antidepressant treatment of treatment-resistant depression in the short term. Psychotherapy has been shown to be effective in older people. Successful psychotherapy appears to reduce the recurrence of depression even after it has been terminated or replaced by occasional booster sessions. Cognitive behavioral therapy (CBT) currently has the most research evidence for the treatment of depression in children and adolescents, and CBT and interpersonal psychotherapy (IPT) are preferred therapies for adolescent depression. In people under 18, according to the National Institute for Health and Clinical Excellence, medication should be offered only in conjunction with a psychological therapy, such as CBT, interpersonal therapy, or family therapy. Cognitive behavioral therapy has also been shown to reduce the number of sick days taken by people with depression, when used in conjunction with primary care. The most-studied form of psychotherapy for depression is CBT, which teaches clients to challenge self-defeating but enduring ways of thinking (cognitions) and to change counterproductive behaviors. Research beginning in the mid-1990s suggested that CBT could perform as well as or better than antidepressants in patients with moderate to severe depression. CBT may be effective in depressed adolescents, although its effects on severe episodes are not definitively known. Several variables predict success for cognitive behavioral therapy in adolescents: higher levels of rational thoughts, less hopelessness, fewer negative thoughts, and fewer cognitive distortions. CBT is particularly beneficial in preventing relapse. Cognitive behavioral therapy and occupational programs (including modification of work activities and assistance) have been shown to be effective in reducing sick days taken by workers with depression. Several variants of cognitive behavior therapy have been used in those with depression, the most notable being rational emotive behavior therapy and mindfulness-based cognitive therapy. Mindfulness-based stress reduction programs may reduce depression symptoms. Mindfulness programs also appear to be a promising intervention in youth. Psychoanalysis is a school of thought, founded by Sigmund Freud, which emphasizes the resolution of unconscious mental conflicts. Psychoanalytic techniques are used by some practitioners to treat clients presenting with major depression. A more widely practiced therapy, called psychodynamic psychotherapy, is in the tradition of psychoanalysis but less intensive, meeting once or twice a week. It also tends to focus more on the person's immediate problems, and has an additional social and interpersonal focus. In a meta-analysis of three controlled trials of Short Psychodynamic Supportive Psychotherapy, this modification was found to be as effective as medication for mild to moderate depression. Conflicting results have arisen from studies that look at the effectiveness of antidepressants in people with acute, mild to moderate depression. Stronger evidence supports the usefulness of antidepressants in the treatment of depression that is chronic (dysthymia) or severe. 
While small benefits were found, researchers Irving Kirsch and Thomas Moore state they may be due to issues with the trials rather than a true effect of the medication. In a later publication, Kirsch concluded that the overall effect of new-generation antidepressant medication is below recommended criteria for clinical significance. Similar results were obtained in a meta-analysis by Fournier. A review commissioned by the National Institute for Health and Care Excellence concluded that there is strong evidence that SSRIs have greater efficacy than placebo on achieving a 50% reduction in depression scores in moderate and severe major depression, and that there is some evidence for a similar effect in mild depression. Similarly, a Cochrane systematic review of clinical trials of the generic tricyclic antidepressant amitriptyline concluded that there is strong evidence that its efficacy is superior to placebo. In 2014 the U.S. FDA published a systematic review of all antidepressant maintenance trials submitted to the agency between 1985 and 2012. The authors concluded that maintenance treatment reduced the risk of relapse by 52% compared to placebo, and that this effect was primarily due to recurrent depression in the placebo group rather than a drug withdrawal effect. To find the most effective antidepressant medication with minimal side-effects, the dosages can be adjusted, and if necessary, combinations of different classes of antidepressants can be tried. Response rates to the first antidepressant administered range from 50% to 75%, and it can take at least six to eight weeks from the start of medication to remission. Antidepressant medication treatment is usually continued for 16 to 20 weeks after remission, to minimize the chance of recurrence, and even up to one year of continuation is recommended. People with chronic depression may need to take medication indefinitely to avoid relapse. Selective serotonin reuptake inhibitors (SSRIs) are the primary medications prescribed, owing to their relatively mild side-effects, and because they are less toxic in overdose than other antidepressants. People who do not respond to one SSRI can be switched to another antidepressant, and this results in improvement in almost 50% of cases. Another option is to switch to the atypical antidepressant bupropion. Venlafaxine, an antidepressant with a different mechanism of action, may be modestly more effective than SSRIs. However, venlafaxine is not recommended in the UK as a first-line treatment because of evidence suggesting its risks may outweigh benefits, and it is specifically discouraged in children and adolescents. For child and adolescent depression, fluoxetine is recommended if medication is used. Fluoxetine, however, appears to have only a slight benefit in children, while other antidepressants have not been shown to be effective. Medications are not recommended in children with mild disease. There is also insufficient evidence to determine effectiveness in those with depression complicated by dementia. Any antidepressant can cause low blood sodium levels, although this has been reported more often with SSRIs. It is not uncommon for SSRIs to cause or worsen insomnia; the sedating antidepressant mirtazapine can be used in such cases. Irreversible monoamine oxidase inhibitors, an older class of antidepressants, have been plagued by potentially life-threatening dietary and drug interactions. They are still used only rarely, although newer and better-tolerated agents of this class have been developed. 
The safety profile is different with reversible monoamine oxidase inhibitors, such as moclobemide, where the risk of serious dietary interactions is negligible and dietary restrictions are less strict. For children, adolescents, and probably young adults between 18 and 24 years old, there is a higher risk of both suicidal ideation and suicidal behavior in those treated with SSRIs. For adults, it is unclear whether SSRIs affect the risk of suicidality. One review found no connection; another, an increased risk; and a third, no risk in those 25–65 years old and a decreased risk in those older than 65. A black box warning was introduced in the United States in 2007 on SSRIs and other antidepressant medications due to the increased risk of suicide in patients younger than 24 years old. Similar precautionary notice revisions were implemented by the Japanese Ministry of Health. There is some evidence that omega-3 fatty acid fish oil supplements containing a high ratio of eicosapentaenoic acid (EPA) to docosahexaenoic acid (DHA) are effective in the treatment, but not the prevention, of major depression. However, a Cochrane review determined there was insufficient high-quality evidence to suggest omega-3 fatty acids were effective in depression. There is limited evidence that vitamin D supplementation is of value in alleviating the symptoms of depression in individuals who are vitamin D deficient. There is some preliminary evidence that COX-2 inhibitors have a beneficial effect on major depression. Lithium appears effective at lowering the risk of suicide in those with bipolar disorder and unipolar depression to nearly the same levels as the general population. There is a narrow range of effective and safe dosages of lithium, so close monitoring may be needed. Low-dose thyroid hormone may be added to existing antidepressants to treat persistent depression symptoms in people who have tried multiple courses of medication. Limited evidence suggests stimulants, such as amphetamine and modafinil, may be effective in the short term, or as add-on therapy. Also, it is suggested that folate supplementation may have a role in depression management. Electroconvulsive therapy (ECT) is a standard psychiatric treatment in which seizures are electrically induced in patients to provide relief from psychiatric illnesses. ECT is used with informed consent as a last line of intervention for major depressive disorder. A round of ECT is effective for about 50% of people with treatment-resistant major depressive disorder, whether it is unipolar or bipolar. Follow-up treatment is still poorly studied, but about half of people who respond relapse within twelve months. Aside from effects in the brain, the general physical risks of ECT are similar to those of brief general anesthesia. Immediately following treatment, the most common adverse effects are confusion and memory loss. ECT is considered one of the least harmful treatment options available for severely depressed pregnant women. A usual course of ECT involves multiple administrations, typically given two or three times per week, until the patient is no longer suffering symptoms. ECT is administered under anesthetic with a muscle relaxant. Electroconvulsive therapy can differ in its application in three ways: electrode placement, frequency of treatments, and the electrical waveform of the stimulus. These three forms of application have significant differences in both adverse side effects and symptom remission. 
After treatment, drug therapy is usually continued, and some patients receive maintenance ECT. ECT appears to work in the short term via an anticonvulsant effect mostly in the frontal lobes, and longer term via neurotrophic effects primarily in the medial temporal lobe. Transcranial magnetic stimulation (TMS) or deep transcranial magnetic stimulation is a noninvasive method used to stimulate small regions of the brain. TMS was approved by the FDA for treatment-resistant major depressive disorder in 2008, and as of 2014 evidence supports that it is probably effective. The American Psychiatric Association, the Canadian Network for Mood and Anxiety Treatments, and the Royal Australian and New Zealand College of Psychiatrists have endorsed rTMS for treatment-resistant major depressive disorder. Bright light therapy reduces depression symptom severity, with benefit found for both seasonal affective disorder and nonseasonal depression and an effect similar to that of conventional antidepressants. For non-seasonal depression, adding light therapy to the standard antidepressant treatment was not effective. In a second analysis of non-seasonal depression, where light was used mostly in combination with antidepressants or wake therapy, a moderate effect was found, with response better than control treatment in high-quality studies, in studies that applied morning light treatment, and in people who respond to total or partial sleep deprivation. Both analyses noted poor quality, short duration, and small size of most of the reviewed studies. There is insufficient evidence for Reiki and dance movement therapy in depression. Major depressive episodes often resolve over time whether or not they are treated. Outpatients on a waiting list show a 10–15% reduction in symptoms within a few months, with approximately 20% no longer meeting the full criteria for a depressive disorder. The median duration of an episode has been estimated to be 23 weeks, with the highest rate of recovery in the first three months. Studies have shown that 80% of those suffering from their first major depressive episode will suffer from at least one more during their life, with a lifetime average of four episodes. Other general population studies indicate that around half those who have an episode recover (whether treated or not) and remain well, while the other half will have at least one more, and around 15% of those experience chronic recurrence. Studies recruiting from selective inpatient sources suggest lower recovery and higher chronicity, while studies of mostly outpatients show that nearly all recover, with a median episode duration of 11 months. Around 90% of those with severe or psychotic depression, most of whom also meet criteria for other mental disorders, experience recurrence. Recurrence is more likely if symptoms have not fully resolved with treatment. Current guidelines recommend continuing antidepressants for four to six months after remission to prevent relapse. Evidence from many randomized controlled trials indicates continuing antidepressant medications after recovery can reduce the chance of relapse by 70% (41% on placebo vs. 18% on antidepressant). The preventive effect probably lasts for at least the first 36 months of use. Those people experiencing repeated episodes of depression require ongoing treatment in order to prevent more severe, long-term depression. In some cases, people must take medications for long periods of time or for the rest of their lives. 
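The 70% maintenance figure quoted above is consistent with a reduction in the odds of relapse rather than in the raw percentage risk; a sketch of the arithmetic from the two arm rates, assuming the pooled effect was expressed as an odds ratio (as is common in such meta-analyses):

\[
\mathrm{OR} = \frac{0.18/(1-0.18)}{0.41/(1-0.41)} = \frac{0.22}{0.69} \approx 0.32
\]

That is roughly a 70% reduction in the odds of relapse, while the absolute risk falls by 23 percentage points (from 41% to 18%).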
Cases in which the outcome is poor are associated with inappropriate treatment, severe initial symptoms that may include psychosis, early age of onset, a greater number of previous episodes, incomplete recovery after one year, pre-existing severe mental or medical disorder, and family dysfunction. Depressed individuals have a shorter life expectancy than those without depression, in part because depressed patients are at risk of dying by suicide. However, they also have a higher rate of dying from other causes, being more susceptible to medical conditions such as heart disease. Up to 60% of people who die by suicide have a mood disorder such as major depression, and the risk is especially high if a person has a marked sense of hopelessness or has both depression and borderline personality disorder. The lifetime risk of suicide associated with a diagnosis of major depression in the US is estimated at 3.4%, which averages two highly disparate figures of almost 7% for men and 1% for women (although suicide attempts are more frequent in women). The estimate is substantially lower than a previously accepted figure of 15%, which had been derived from older studies of hospitalized patients. Depression is often associated with unemployment and poverty. Major depression is currently the leading cause of disease burden in North America and other high-income countries, and the fourth-leading cause worldwide. By 2030, it is predicted to be the second-leading cause of disease burden worldwide after HIV, according to the World Health Organization. Delay or failure in seeking treatment after relapse and the failure of health professionals to provide treatment are two barriers to reducing disability. Major depressive disorder affected approximately 216 million people in 2015 (3% of the global population). The percentage of people who are affected at one point in their life varies from 7% in Japan to 21% in France. In most countries the number of people who have depression during their lives falls within an 8–18% range. In North America, the probability of having a major depressive episode within a year-long period is 3–5% for males and 8–10% for females. Major depression is about twice as common in women as in men, although it is unclear why this is so and whether unaccounted-for factors contribute to the difference. The relative increase in occurrence is related to pubertal development rather than chronological age, reaches adult ratios between the ages of 15 and 18, and appears associated with psychosocial more than hormonal factors. Depression is a major cause of disability worldwide. People are most likely to develop their first depressive episode between the ages of 30 and 40, and there is a second, smaller peak of incidence between ages 50 and 60. The risk of major depression is increased with neurological conditions such as stroke, Parkinson's disease, or multiple sclerosis, and during the first year after childbirth. It is also more common after cardiovascular illnesses, and is related more to a poor outcome than to a good one. Studies conflict on the prevalence of depression in the elderly, but most data suggest there is a reduction in this age group. Depressive disorders are more commonly observed in urban than in rural populations, and prevalence is higher in groups with greater socioeconomic adversity, such as the homeless. 
The Ancient Greek physician Hippocrates described a syndrome of melancholia as a distinct disease with particular mental and physical symptoms; he characterized all "fears and despondencies, if they last a long time" as being symptomatic of the ailment. It was a similar but far broader concept than today's depression; prominence was given to a clustering of the symptoms of sadness, dejection, and despondency, and often fear, anger, delusions and obsessions were included. The term "depression" itself was derived from the Latin verb "deprimere", "to press down". From the 14th century, "to depress" meant to subjugate or to bring down in spirits. It was used in 1665 in English author Richard Baker's "Chronicle" to refer to someone having "a great depression of spirit", and by English author Samuel Johnson in a similar sense in 1753. The term also came into use in physiology and economics. An early usage referring to a psychiatric symptom was by French psychiatrist Louis Delasiauve in 1856, and by the 1860s it was appearing in medical dictionaries to refer to a physiological and metaphorical lowering of emotional function. Since Aristotle, melancholia had been associated with men of learning and intellectual brilliance, a hazard of contemplation and creativity. The newer concept abandoned these associations and, through the 19th century, became more associated with women. Although "melancholia" remained the dominant diagnostic term, "depression" gained increasing currency in medical treatises and was a synonym by the end of the century; German psychiatrist Emil Kraepelin may have been the first to use it as the overarching term, referring to different kinds of melancholia as "depressive states". Sigmund Freud likened the state of melancholia to mourning in his 1917 paper "Mourning and Melancholia". He theorized that objective loss, such as the loss of a valued relationship through death or a romantic break-up, results in subjective loss as well; the depressed individual has identified with the object of affection through an unconscious, narcissistic process called the "libidinal cathexis" of the ego. Such loss results in severe melancholic symptoms more profound than mourning; not only is the outside world viewed negatively but the ego itself is compromised. The patient's decline in self-perception is revealed in his belief in his own blameworthiness, inferiority, and unworthiness. He also emphasized early life experiences as a predisposing factor. Adolf Meyer put forward a mixed social and biological framework emphasizing "reactions" in the context of an individual's life, and argued that the term "depression" should be used instead of "melancholia". The first version of the DSM (DSM-I, 1952) contained "depressive reaction" and the DSM-II (1968) "depressive neurosis", defined as an excessive reaction to internal conflict or an identifiable event, and also included a depressive type of manic-depressive psychosis within Major affective disorders. In the mid-20th century, researchers theorized that depression was caused by a chemical imbalance in neurotransmitters in the brain, a theory based on observations made in the 1950s of the effects of reserpine and isoniazid in altering monoamine neurotransmitter levels and affecting depressive symptoms. The chemical imbalance theory has never been proven. The term "unipolar" (along with the related term "bipolar") was coined by the neurologist and psychiatrist Karl Kleist, and subsequently used by his disciples Edda Neele and Karl Leonhard. 
The term "Major depressive disorder" was introduced by a group of US clinicians in the mid-1970s as part of proposals for diagnostic criteria based on patterns of symptoms (called the "Research Diagnostic Criteria", building on earlier Feighner Criteria), and was incorporated into the DSM-III in 1980. To maintain consistency the ICD-10 used the same criteria, with only minor alterations, but using the DSM diagnostic threshold to mark a "mild depressive episode", adding higher threshold categories for moderate and severe episodes. The ancient idea of "melancholia" still survives in the notion of a melancholic subtype. The new definitions of depression were widely accepted, albeit with some conflicting findings and views. There have been some continued empirically based arguments for a return to the diagnosis of melancholia. There has been some criticism of the expansion of coverage of the diagnosis, related to the development and promotion of antidepressants and the biological model since the late 1950s. The term "depression" is used in a number of different ways. It is often used to mean this syndrome but may refer to other mood disorders or simply to a low mood. People's conceptualizations of depression vary widely, both within and among cultures. "Because of the lack of scientific certainty," one commentator has observed, "the debate over depression turns on questions of language. What we call it—'disease,' 'disorder,' 'state of mind'—affects how we view, diagnose, and treat it." There are cultural differences in the extent to which serious depression is considered an illness requiring personal professional treatment, or is an indicator of something else, such as the need to address social or moral problems, the result of biological imbalances, or a reflection of individual differences in the understanding of distress that may reinforce feelings of powerlessness, and emotional struggle. The diagnosis is less common in some countries, such as China. It has been argued that the Chinese traditionally deny or somatize emotional depression (although since the early 1980s, the Chinese denial of depression may have modified). Alternatively, it may be that Western cultures reframe and elevate some expressions of human distress to disorder status. Australian professor Gordon Parker and others have argued that the Western concept of depression "medicalizes" sadness or misery. Similarly, Hungarian-American psychiatrist Thomas Szasz and others argue that depression is a metaphorical illness that is inappropriately regarded as an actual disease. There has also been concern that the DSM, as well as the field of descriptive psychiatry that employs it, tends to reify abstract phenomena such as depression, which may in fact be social constructs. American archetypal psychologist James Hillman writes that depression can be healthy for the soul, insofar as "it brings refuge, limitation, focus, gravity, weight, and humble powerlessness." Hillman argues that therapeutic attempts to eliminate depression echo the Christian theme of resurrection, but have the unfortunate effect of demonizing a soulful state of being. Historical figures were often reluctant to discuss or seek treatment for depression due to social stigma about the condition, or due to ignorance of diagnosis or treatments. Nevertheless, analysis or interpretation of letters, journals, artwork, writings, or statements of family and friends of some historical personalities has led to the presumption that they may have had some form of depression. 
People who may have had depression include English author Mary Shelley, American-British writer Henry James, and American president Abraham Lincoln. Some well-known contemporary people with possible depression include Canadian songwriter Leonard Cohen and American playwright and novelist Tennessee Williams. Some pioneering psychologists, such as Americans William James and John B. Watson, dealt with their own depression. There has been a continuing discussion of whether neurological disorders and mood disorders may be linked to creativity, a discussion that goes back to Aristotelian times. British literature gives many examples of reflections on depression. English philosopher John Stuart Mill experienced a several-months-long period of what he called "a dull state of nerves", when one is "unsusceptible to enjoyment or pleasurable excitement; one of those moods when what is pleasure at other times, becomes insipid or indifferent". He quoted English poet Samuel Taylor Coleridge's "Dejection" as a perfect description of his case: "A grief without a pang, void, dark and drear, / A drowsy, stifled, unimpassioned grief, / Which finds no natural outlet or relief / In word, or sigh, or tear." English writer Samuel Johnson used the term "the black dog" in the 1780s to describe his own depression, and it was subsequently popularized by former British Prime Minister Sir Winston Churchill, who also suffered from depression. Social stigma of major depression is widespread, and contact with mental health services reduces this only slightly. Public opinions on treatment differ markedly from those of health professionals; alternative treatments are held to be more helpful than pharmacological ones, which are viewed poorly. In the UK, the Royal College of Psychiatrists and the Royal College of General Practitioners conducted a joint Five-year Defeat Depression campaign to educate and reduce stigma from 1992 to 1996; a MORI study conducted afterwards showed a small positive change in public attitudes to depression and treatment. Trials are looking at the effects of botulinum toxins on depression. The idea is that the drug makes the person frown less, and that this interrupts the negative facial feedback from the face. In 2015, however, it emerged that the partly positive effects observed until then could have been placebo effects. MDD has been studied using MRI; scans of patients with depression have revealed a number of differences in brain structure compared to those who are not depressed. Meta-analyses of neuroimaging studies in major depression reported that, compared to controls, depressed patients had increased volume of the lateral ventricles and adrenal gland and smaller volumes of the basal ganglia, thalamus, hippocampus, and frontal lobe (including the orbitofrontal cortex and gyrus rectus). Hyperintensities have been associated with patients with a late age of onset, and have led to the development of the theory of vascular depression. Depression is especially common among those over 65 years of age and increases in frequency beyond that age. In addition, the risk of depression increases with the age and frailty of the individual. Depression is one of the most important factors that negatively affect quality of life in adults as well as the elderly. Both symptoms and treatment among the elderly differ from those of the rest of the adult population. As with many other diseases, it is common for the elderly not to present with classical depressive symptoms. 
Diagnosis and treatment are further complicated in that the elderly are often simultaneously treated with a number of other drugs and often have other concurrent diseases. Treatment differs in that studies of SSRIs have shown a lesser and often inadequate effect among the elderly, while other drugs with clearer effects have adverse effects that can be especially difficult to manage in this group. Duloxetine is an SNRI with a documented effect on recurring depression among the elderly, but it has adverse effects in the form of dizziness, dryness of the mouth, diarrhea, and constipation. Problem-solving therapy was, as of 2015, the only psychological therapy with a proven effect, and can be likened to a simpler form of cognitive behavioral therapy. However, elderly people with depression are seldom offered any psychological treatment, and the evidence on which other treatments are effective is incomplete. Electroconvulsive therapy (ECT) has been used as a treatment for the elderly, and registry studies suggest it is effective, although less so among the elderly than among the rest of the adult population. The balance of risks and benefits of treating depression among the elderly is not entirely clear. Pending more evidence on how treatment of depression among the elderly is best designed, it is important to follow up treatment results and to reconsider the choice of treatment if it does not help. Models of depression in animals for the purpose of study include iatrogenic depression models (such as drug-induced models), forced swim tests, tail suspension tests, and learned helplessness models. Criteria frequently used to assess depression in animals include expression of despair, neurovegetative changes, and anhedonia, as many other depressive criteria, such as guilt and suicidality, are untestable in animals. Dartmouth College Dartmouth College is a private Ivy League research university in Hanover, New Hampshire, United States. Established in 1769 by Eleazar Wheelock, it is the ninth-oldest institution of higher education in the United States and one of the nine colonial colleges chartered before the American Revolution. Although founded as a school to educate Native Americans in Christian theology and the English way of life, Dartmouth primarily trained Congregationalist ministers throughout its early history before it gradually secularized, emerging at the turn of the 20th century from relative obscurity into national prominence. Following a liberal arts curriculum, the university provides undergraduate instruction in 40 academic departments and interdisciplinary programs, including 57 majors in the humanities, social sciences, natural sciences, and engineering, and enables students to design specialized concentrations or engage in dual degree programs. Dartmouth comprises five constituent schools: the original undergraduate college, the Geisel School of Medicine, the Thayer School of Engineering, the Tuck School of Business, and the Guarini School of Graduate and Advanced Studies. The university also has affiliations with the Dartmouth–Hitchcock Medical Center, the Rockefeller Institute for Public Policy, and the Hopkins Center for the Arts. With a student enrollment of about 6,400, Dartmouth is the smallest university in the Ivy League. Undergraduate admission is highly competitive, with an acceptance rate of 8.7% for the Class of 2022. 
Situated on a hill above the Connecticut River, Dartmouth's 269-acre main campus is in the rural Upper Valley region of New England. The university functions on a quarter system, operating year-round on four ten-week academic terms. Dartmouth is known for its undergraduate focus, strong Greek culture, and wide array of enduring campus traditions. Its 34 varsity sports teams compete intercollegiately in the Ivy League conference of the NCAA Division I. Dartmouth is consistently included among the highest-ranked universities in the United States by several institutional rankings, and has been cited as a leading university for undergraduate teaching and research by "U.S. News & World Report". In 2018, the Carnegie Classification of Institutions of Higher Education listed Dartmouth as the only "majority-undergraduate," "arts-and-sciences focused," "doctoral university" in the country that has "some graduate coexistence" and "very high research activity." In a "New York Times" corporate study, Dartmouth graduates were shown to be among the most sought-after and valued in the world. The university has produced many prominent alumni, including 170 members of the U.S. Senate and the U.S. House of Representatives, 24 U.S. governors, 10 billionaire alumni, 10 U.S. Cabinet secretaries, 3 Nobel Prize laureates, 2 U.S. Supreme Court justices, and a U.S. vice president. Other notable alumni include 79 Rhodes Scholars, 26 Marshall Scholarship recipients, 13 Pulitzer Prize winners, and numerous MacArthur Genius fellows, Fulbright Scholars, CEOs and founders of Fortune 500 corporations, high-ranking U.S. diplomats, scholars in academia, literary and media figures, professional athletes, and Olympic medalists. Dartmouth was founded by Eleazar Wheelock, a Congregational minister from Columbia, Connecticut, who had sought to establish a school to train Native Americans as Christian missionaries. Wheelock's ostensible inspiration for such an establishment resulted from his relationship with Mohegan Indian Samson Occom. Occom became an ordained minister after studying under Wheelock from 1743 to 1747, and later moved to Long Island to preach to the Montauks. Wheelock founded Moor's Indian Charity School in 1755. The Charity School proved somewhat successful, but additional funding was necessary to continue the school's operations, and Wheelock sought the help of friends to raise money. The first major donation to the school was given in 1762 by Dr. John Phillips, who would go on to found Phillips Exeter Academy. Occom, accompanied by the Reverend Nathaniel Whitaker, traveled to England in 1766 to raise money from churches. With these funds, they established a trust to help Wheelock. The head of the trust was a Methodist named William Legge, 2nd Earl of Dartmouth. Although the fund provided Wheelock ample financial support for the Charity School, Wheelock initially had trouble recruiting Indians to the institution, primarily because its location was far from tribal territories. In seeking to expand the school into a college, Wheelock relocated it to Hanover, in the Province of New Hampshire. The move from Connecticut followed a lengthy and sometimes frustrating effort to find resources and secure a charter. The Royal Governor of New Hampshire, John Wentworth, provided the land upon which Dartmouth would be built and, on December 13, 1769, issued a royal charter in the name of King George III establishing the College. 
That charter created a college "for the education and instruction of Youth of the Indian Tribes in this Land in reading, writing & all parts of Learning which shall appear necessary and expedient for civilizing & christianizing Children of Pagans as well as in all liberal Arts and Sciences and also of English Youth and any others." The reference to educating Native American youth was included to connect Dartmouth to the Charity School and enable use of the Charity School's unspent trust funds. Named for William Legge, 2nd Earl of Dartmouth—an important supporter of Eleazar Wheelock's earlier efforts but who, in fact, opposed creation of the College and never donated to it—Dartmouth is the nation's ninth-oldest college and the last institution of higher learning established under Colonial rule. The College granted its first degrees in 1771. Given the limited success of the Charity School, however, Wheelock intended his new college as one primarily for whites. Occom, disappointed with Wheelock's departure from the school's original goal of Indian Christianization, went on to form his own community of New England Indians called Brothertown Indians in New York. In 1819, Dartmouth College was the subject of the historic Dartmouth College case, which challenged New Hampshire's 1816 attempt to amend the college's charter to make the school a public university. An institution called Dartmouth University occupied the college buildings and began operating in Hanover in 1817, though the college continued teaching classes in rented rooms nearby. Daniel Webster, an alumnus of the class of 1801, presented the College's case to the Supreme Court, which found the amendment of Dartmouth's charter to be an illegal impairment of a contract by the state and reversed New Hampshire's takeover of the college. Webster concluded his peroration with the famous words: "It is, Sir, as I have said, a small college. And yet there are those who love it." In 1866, the New Hampshire College of Agriculture and the Mechanic Arts was incorporated in Hanover, in connection with Dartmouth College. The institution was officially associated with Dartmouth and was directed by Dartmouth's president. The new college was moved to Durham, New Hampshire, in 1891, and later became known as the University of New Hampshire. Dartmouth emerged onto the national academic stage at the turn of the 20th century. Prior to this period, the college had clung to traditional methods of instruction and was relatively poorly funded. Under President William Jewett Tucker (1893–1909), Dartmouth underwent a major revitalization of facilities, faculty, and the student body, following large gifts such as the $10,000 given by Dartmouth alumnus and law professor John Ordronaux. Twenty new structures replaced antiquated buildings, while the student body and faculty both expanded threefold. Tucker is often credited with having "refounded Dartmouth" and bringing it into national prestige. Presidents Ernest Fox Nichols (1909–16) and Ernest Martin Hopkins (1916–45) continued Tucker's trend of modernization, further improving campus facilities and introducing selective admissions in the 1920s. In 1945, Hopkins was the subject of considerable controversy, as he openly admitted to Dartmouth's practice of using racial quotas to deny Jews entry into the university. John Sloan Dickey, serving as president from 1945 until 1970, strongly emphasized the liberal arts, particularly public policy and international relations. 
During World War II, Dartmouth was one of 131 colleges and universities nationally that took part in the V-12 Navy College Training Program, which offered students a path to a navy commission. In 1970, longtime professor of mathematics and computer science John George Kemeny became president of Dartmouth. Kemeny oversaw several major changes at the college. Dartmouth, which had been a men's institution, began admitting women as full-time students and undergraduate degree candidates in 1972 amid much controversy. At about the same time, the college adopted its "Dartmouth Plan" of academic scheduling, permitting the student body to increase in size within the existing facilities. In 1988, the lyrics of Dartmouth's alma mater were changed from "Men of Dartmouth" to "Dear old Dartmouth". During the 1990s, the college saw a major academic overhaul under President James O. Freedman and a controversial (and ultimately unsuccessful) 1999 initiative to encourage the school's single-sex Greek houses to go coed. The first decade of the 21st century saw the commencement of the $1.3 billion Campaign for the Dartmouth Experience, the largest capital fundraising campaign in the college's history, which surpassed $1 billion in 2008. The middle and late years of that decade also saw extensive campus construction, with the erection of two new housing complexes, full renovation of two dormitories, and a forthcoming dining hall, life sciences center, and visual arts center. In 2004, Booz Allen Hamilton selected Dartmouth College as a model of institutional endurance "whose record of endurance has had implications and benefits for all American organizations, both academic and commercial," citing "Trustees of Dartmouth College v. Woodward" and Dartmouth's successful self-reinvention in the late 19th century. Since the election of a number of petition-nominated trustees to the Board of Trustees starting in 2004, the role of alumni in Dartmouth governance has been the subject of ongoing conflict. President James Wright announced his retirement in February 2008 and was replaced by Harvard University professor and physician Jim Yong Kim on July 1, 2009. In May 2010, Dartmouth joined the Matariki Network of Universities (MNU) together with Durham University (UK), Queen's University (Canada), University of Otago (New Zealand), University of Tübingen (Germany), University of Western Australia (Australia) and Uppsala University (Sweden). Dartmouth's close association with and involvement in the development of the downhill skiing industry is featured in the 2010 book "Passion for Skiing" as well as the 2013 documentary based on the book, "Passion for Snow". Dartmouth, a liberal arts institution, offers a four-year Bachelor of Arts and an ABET-accredited Bachelor of Engineering degree to undergraduate students. The college has 39 academic departments offering 56 major programs, while students are free to design special majors or engage in dual majors. For the graduating class of 2017, the most popular majors were economics, government, computer science, engineering sciences, and history. The Government Department, whose prominent professors include Stephen Brooks, Richard Ned Lebow, and William Wohlforth, was ranked the top solely undergraduate political science program in the world by researchers at the London School of Economics in 2003. 
The Economics Department, whose prominent professors include David Blanchflower and Andrew Samwick, also holds the distinction of being the top-ranked bachelor's-only economics program in the world. In order to graduate, a student must complete 35 total courses, eight to ten of which are typically part of a chosen major program. Other requirements for graduation include the completion of ten "distributive requirements" in a variety of academic fields, proficiency in a foreign language, and completion of a writing class and first-year seminar in writing. Many departments offer honors programs requiring students seeking that distinction to engage in "independent, sustained work," culminating in the production of a thesis. In addition to the courses offered in Hanover, Dartmouth offers 57 different off-campus programs, including Foreign Study Programs, Language Study Abroad programs, and Exchange Programs. Through the Graduate Studies program, Dartmouth grants doctorate and master's degrees in 19 Arts & Sciences graduate programs. Although the first graduate degree, a PhD in classics, was awarded in 1885, many of the current PhD programs have only existed since the 1960s. Furthermore, Dartmouth is home to three professional schools: the Geisel School of Medicine (established 1797), the Thayer School of Engineering (1867)—which also serves as the undergraduate department of engineering sciences—and the Tuck School of Business (1900). With these professional schools and graduate programs, conventional American usage would accord Dartmouth the label of "Dartmouth University"; however, for historical and nostalgic reasons (such as "Dartmouth College v. Woodward"), the school uses the name "Dartmouth College" to refer to the entire institution. Dartmouth employs a total of 607 tenured or tenure-track faculty members, including the highest proportion of female tenured professors among the Ivy League universities. Faculty members have been at the forefront of such major academic developments as the Dartmouth Workshop, the Dartmouth Time Sharing System, Dartmouth BASIC, and Dartmouth ALGOL 30. In 2005, sponsored project awards to Dartmouth faculty research amounted to $169 million. Dartmouth serves as the host institution of the University Press of New England, a university press founded in 1970 that is supported by a consortium of schools that also includes Brandeis University, the University of New Hampshire, Northeastern University, Tufts University and the University of Vermont. Dartmouth was ranked 11th among undergraduate programs at national universities by "U.S. News & World Report" in its 2018 rankings. "U.S. News" has also highlighted Dartmouth's strength in undergraduate education, ranking it first in undergraduate teaching at national universities every year from 2009 through 2013; it was ranked 2nd in this area in the 2018 rankings. The institution also ranked 5th in the High School Counselor Rankings in 2018. The college ranks 7th in "The Wall Street Journal"'s ranking of top feeder schools. The 2017 Academic Ranking of World Universities (ARWU) placed Dartmouth among the 71st–99th best universities in the nation, alongside institutions such as Georgetown University and the University of Notre Dame. ARWU also ranks Dartmouth among the 76–100 best schools in the world for Business Administration and 101–150 for Management and Psychology. 
In "Forbes" 2016 rankings of colleges, Dartmouth ranked 17th overall in the combined liberal arts college and national universities ranking and 2nd in "grateful graduates", with a financial grade of A+. The 2006 Carnegie Foundation classification listed Dartmouth as the only "majority-undergraduate", "arts-and-sciences focus[ed]", "research university" in the country that also had "some graduate coexistence" and "very high research activity." For its graduate programs, "U.S. News" ranks Dartmouth's MBA program 9th overall and 6th for management. Among its other highly ranked graduate offerings, the school is ranked 40th in computer science, 29th in medicine for primary care, and 37th in medicine for research. Its global ranking places is at 242nd. Undergraduate admission to Dartmouth College is characterized by the Carnegie Foundation and "U.S. News & World Report" as "most selective." The "Princeton Review", in its 2018 edition, gave the university an admissions selectivity rating of 98 out of 99. For the freshman class entering Fall 2018, Dartmouth received 22,033 applications of which 1,925 were accepted for an 8.7% admissions rate. Of those admitted students who reported class rank, a record 46.3% were valedictorian or salutatorian, with 97% ranking in the top decile of their class. The admitted students’ academic profile showed an all-time high SAT average score of 1497, while the average composite ACT score remained at 33. More than 51% identified as being students of color, 15% are among the first generation in their families to matriculate to college, 11% are international students, and 9% are legacies. Additionally, for the 2016–2017 academic year, Dartmouth received 685 transfer applications of which 5.1% were accepted, with an average SAT composite score of 1490, average composite ACT score of 34, and average college GPA of about 3.85. Dartmouth meets 100% of students' demonstrated financial need in order to attend the College, and currently admits all students, with the exception of internationals, on a need-blind basis. Dartmouth guarantees to meet 100% of the demonstrated need of every admitted student who applies for financial aid at the time of admission. Dartmouth practices need-blind admissions for all applicants who are U.S. citizens, permanent residents, and undocumented students in the U.S. These applicants are admitted to the college without regard to their financial circumstances. For international students, financial need is taken into consideration as one of many factors at the time of admission. At Dartmouth, free tuition is provided for students from families with total incomes of $100,000 or less and possessing typical assets. In 2015, $88.8 million in need-based scholarships were awarded to Dartmouth students. Dartmouth functions on a quarter system, operating year-round on four ten-week academic terms. The Dartmouth Plan (or simply "D-Plan") is an academic scheduling system that permits the customization of each student's academic year. All undergraduates are required to be in residence for the fall, winter, and spring terms of their freshman and senior years, as well as the summer term of their sophomore year. However, students may petition to alter this plan so that they may be off during their freshman, senior, or sophomore summer terms. During all terms, students are permitted to choose between studying on-campus, studying at an off-campus program, or taking a term off for vacation, outside internships, or research projects. 
The typical course load is three classes per term, and students will generally enroll in classes for 12 total terms over the course of their academic career. The D-Plan was instituted in the early 1970s at the same time that Dartmouth began accepting female undergraduates. It was initially devised as a plan to increase the enrollment without enlarging campus accommodations, and has been described as "a way to put 4,000 students into 3,000 beds." Although new dormitories have been built since, the number of students has also increased and the D-Plan remains in effect. It was modified in the 1980s in an attempt to reduce the problems caused by the lack of social and academic continuity. Dartmouth is governed by a Board of Trustees comprising the college president ("ex officio"), the state governor ("ex officio"), 13 trustees nominated and elected by the board (called "charter trustees"), and eight trustees nominated by alumni and elected by the board ("alumni trustees"). The nominees for alumni trustee are determined by a poll of the members of the Association of Alumni of Dartmouth College, selecting from among names put forward by the Alumni Council or by alumni petition. Although the board elected its members from the two sources of nominees in equal proportions between 1891 and 2007, the board decided in 2007 to add several new members, all charter trustees. In the controversy that followed the decision, the Association of Alumni filed a lawsuit, although it later withdrew the action. In 2008, the Board added five new charter trustees. Dartmouth College is situated in the rural town of Hanover, New Hampshire, located in the Upper Valley along the Connecticut River in New England. Its campus is centered on a "Green", a former field of pine trees cleared in 1771. Dartmouth is the largest private landowner in the town of Hanover, and its total landholdings and facilities are worth an estimated $434 million. In addition to its campus in Hanover, Dartmouth owns land on Mount Moosilauke in the White Mountains and a tract of land in northern New Hampshire known as the Second College Grant. Dartmouth's campus buildings vary in age from Wentworth and Thornton Halls of the 1820s (the oldest surviving buildings constructed by the college) to new dormitories and mathematics facilities completed in 2006. Most of Dartmouth's buildings are designed in the Georgian colonial architecture style, a theme which has been preserved in recent architectural additions. The College has actively sought to reduce carbon emissions and energy usage on campus, earning it a grade of A- from the Sustainable Endowments Institute on its College Sustainability Report Card 2008. A notable feature of the Dartmouth campus is its many trees, which (despite Dutch elm disease) include some 200 American elms. The college's creative and performing arts facility is the Hopkins Center for the Arts ("the Hop"). Opened in 1962, the Hop houses the College's drama, music, film, and studio arts departments, as well as a woodshop, pottery studio, and jewelry studio, which are open for use by students and faculty. The building was designed by the famed architect Wallace Harrison, who would later design the similar-looking façade of Manhattan's Metropolitan Opera House at Lincoln Center. Its facilities include two theaters and one 900-seat auditorium. The Hop is also the location of all student mailboxes ("Hinman boxes") and the Courtyard Café dining facility. 
The Hop is connected to the Hood Museum of Art, arguably North America's oldest museum in continuous operation, and the Loew Auditorium, where films are screened. In addition to its 19 graduate programs in the arts and sciences, Dartmouth is home to three separate graduate schools. The Geisel School of Medicine is located in a complex on the north side of campus and includes laboratories, classrooms, offices, and a biomedical library. The Dartmouth–Hitchcock Medical Center, located several miles to the south in Lebanon, New Hampshire, contains a 396-bed teaching hospital for the Medical School. The Thayer School of Engineering and the Tuck School of Business are both located at the end of Tuck Mall, west of the center of campus and near the Connecticut River. The Thayer School comprises two buildings; Tuck has seven academic and administrative buildings, as well as several common areas. The two graduate schools share a library, the Feldberg Business & Engineering Library. Dartmouth's nine libraries are all part of the collective Dartmouth College Library, which comprises 2.48 million volumes and 6 million total resources, including videos, maps, sound recordings, and photographs. Its specialized libraries include the Biomedical Libraries, Evans Map Room, Feldberg Business & Engineering Library, Jones Media Center, Kresge Physical Sciences Library, Paddock Music Library, Rauner Special Collections Library, and Sherman Art Library. Baker-Berry Library is the main library at Dartmouth, consisting of a merger of the Baker Memorial Library (opened 1928) and the Berry Library (completed 2002). Located on the northern side of the Green, Baker's tower is an iconic symbol of the College. Dartmouth's original sports field was the Green, where students played cricket and old division football during the 19th century. Today, two of Dartmouth's athletic facilities are located in the southeast corner of campus. The center of athletic life is the Alumni Gymnasium, which includes the Karl Michael Competition Pool and the Spaulding Pool, a state-of-the-art fitness center, a weight room, and a 1/13th-mile (123 m) indoor track. Attached to Alumni Gymnasium is the Berry Sports Center, which contains basketball and volleyball courts (Leede Arena), as well as the Kresge Fitness Center. Behind the Alumni Gymnasium is Memorial Field, a 15,600-seat stadium overlooking Dartmouth's football field and track. The nearby Thompson Arena, designed by Italian engineer Pier Luigi Nervi and constructed in 1975, houses Dartmouth's ice rink. Also visible from Memorial Field is the Nathaniel Leverone Fieldhouse, home to the indoor track. The new softball field, Dartmouth Softball Park, was constructed in 2012, sharing parking facilities with Thompson Arena and replacing Sachem Field, located over a mile from campus, as the primary softball facility. Dartmouth's other athletic facilities in Hanover include the Friends of Dartmouth Rowing Boathouse and the old rowing house storage facility (both located along the Connecticut River), the Hanover Country Club, Dartmouth's oldest remaining athletic facility (established in 1899), and the Corey Ford Rugby Clubhouse. The college also maintains the Dartmouth Skiway, a skiing facility located over two mountains near the Hanover campus in Lyme Center, New Hampshire, that serves as the winter practice grounds for the Dartmouth ski team, which is a perennial contender for the NCAA Division I championship. 
Beginning in the fall term of 2016, Dartmouth placed all undergraduate students in one of six House communities, similar to residential colleges: Allen House, East Wheelock House, North Park House, School House, South House, and West House. These operate alongside independent Living Learning Communities. Previously, Dartmouth had nine residential communities located throughout campus, rather than ungrouped dormitories or residential colleges. The dormitories varied in design from modern to traditional Georgian styles, and room arrangements ranged from singles to quads and apartment suites. Since 2006, the college has guaranteed housing for students during their freshman and sophomore years. More than 3,000 students elect to live in housing provided by the college. Campus meals are served by Dartmouth Dining Services, which operates 11 dining establishments around campus. Four of them are located at the center of campus in the Class of 1953 Commons, formerly Thayer Dining Hall. The Collis Center is the center of student life and programming, serving as what would be generically termed the "student union" or "campus center." It contains a café, study space, common areas, and a number of administrative departments, including the Academic Skills Centre. Robinson Hall, next door to both Collis and Thayer, contains the offices of a number of student organizations including the Dartmouth Outing Club and "The Dartmouth" daily newspaper. In 2006, "The Princeton Review" ranked Dartmouth third in its "Quality of Life" category, and sixth for having the "Happiest Students." Athletics and participation in the Greek system are the most popular campus activities. In all, Dartmouth offers more than 350 organizations, teams, and sports. The school is also home to a variety of longstanding traditions and celebrations and has a loyal alumni network; Dartmouth ranked #2 in "The Princeton Review" in 2006 for Best Alumni Network. In 2014, Dartmouth College was the third highest in the nation in "total of reports of rape" on its main campus, with 42 reports of rape. The "Washington Post" attributed the high number of rape reports to the fact that a growing number of sexual assault victims feel comfortable enough to report sexual assaults that would have gone unreported in previous years. In 2015, the "Huffington Post" reported that Dartmouth College had the highest rate of bystander intervention of any college surveyed, with 57.7% of Dartmouth students reporting that they would take some sort of action if they saw someone acting in a "sexually violent or harassing manner," compared to 45.5% of students nationally. Dartmouth's more than 200 student organizations and clubs cover a wide range of interests. In 2007, the college hosted eight academic groups, 17 cultural groups, two honor societies, 30 "issue-oriented" groups, 25 performing groups, 12 pre-professional groups, 20 publications, and 11 recreational groups. Notable student groups include the nation's largest and oldest collegiate outdoors club, the Dartmouth Outing Club, which includes the nationally recognized Big Green Bus; the campus's oldest a cappella group, The Dartmouth Aires; the controversial conservative newspaper "The Dartmouth Review"; and "The Dartmouth", arguably the nation's oldest university newspaper. "The Dartmouth" describes itself as "America's Oldest College Newspaper, Founded 1799." 
Partially because of Dartmouth's rural, isolated location, the Greek system, dating from the 1840s, is one of the most popular social outlets for students. Dartmouth is home to 32 recognized Greek houses: 17 fraternities, 12 sororities, and three coeducational organizations. In 2007, roughly 70% of eligible students belonged to a Greek organization; since 1987, students have not been permitted to join Greek organizations until their sophomore year. Dartmouth College was among the first institutions of higher education to desegregate fraternity houses in the 1950s, and was involved in the movement to create coeducational Greek houses in the 1970s. In the early first decade of the 21st century, campus-wide debate focused on a Board of Trustees recommendation that Greek organizations become "substantially coeducational"; this attempt to change the Greek system eventually failed. The fraternities have an extensive history of hazing and alcohol abuse, leading to police raids and accusations of sexual harassment. Dartmouth also has a number of secret societies, which are student- and alumni-led organizations often focused on preserving the history of the college and initiating service projects. Most prominent among them is the Sphinx society, housed in a conspicuous Egyptian tomb-like building near the center of campus. The Sphinx has been the subject of numerous rumors as to its facilities, practices, and membership. The college has an additional classification of social/residential organizations known as undergraduate societies. Approximately 20% of students participate in a varsity sport, and nearly 80% participate in some form of club, varsity, intramural, or other athletics. In 2007, Dartmouth College fielded 34 intercollegiate varsity teams: 16 for men, 16 for women, and coeducational sailing and equestrian programs. Dartmouth's athletic teams compete in the National Collegiate Athletic Association (NCAA) Division I eight-member Ivy League conference; some teams also participate in the Eastern College Athletic Conference (ECAC). As is mandatory for the members of the Ivy League, Dartmouth College does not offer athletic scholarships. In addition to the traditional American team sports (football, basketball, baseball, and ice hockey), Dartmouth competes at the varsity level in many other sports including track and field, softball, squash, sailing, tennis, rowing, soccer, skiing, and lacrosse. The college also offers 26 club and intramural sports such as fencing, rugby, water polo, figure skating, boxing, volleyball, ultimate frisbee, and cricket, leading to a 75% participation rate in athletics among the undergraduate student body. The Dartmouth Fencing Team, despite being entirely self-coached, won the USACFC club national championship in 2014. The Dartmouth Men's Rugby Team, founded in 1951, has been ranked among the best collegiate teams in that sport, winning, for example, the Ivy Rugby Conference every year between 2008 and 2015. The figure skating team won the national championship five straight times from 2004 through 2008. In addition to the academic requirements for graduation, Dartmouth requires every undergraduate to complete a swim test and three terms of physical education. It is often pointed out that the charter of Dartmouth College, granted to Eleazar Wheelock in 1769, proclaims that the institution was created "for the education and instruction of Youth of the Indian Tribes in this Land in reading, writing and all parts of Learning... 
as well as in all liberal Arts and Sciences; and also of English Youth and any others." However, Wheelock primarily intended the college to educate White youth, and the few Native students who attended Dartmouth experienced much difficulty in an institution ostensibly dedicated to their education. The funds for the Charity School for Native Americans that preceded Dartmouth College were raised primarily by the efforts of a Native American named Samson Occom, and at least some of those funds were used to help found the college. The college graduated only 19 Native Americans during its first two hundred years. In 1970, the college established Native American academic and social programs as part of a "new dedication to increasing Native American enrollment." Since then, Dartmouth has graduated over 1,000 Native American students from over 200 different tribes, more than the other seven Ivy League universities combined. Dartmouth is well known for its fierce school spirit and many traditions. The college functions on a quarter system, and one weekend each term is set aside as a traditional celebratory event, known on campus as "big weekends" or "party weekends". In the fall term, Homecoming (officially called Dartmouth Night) is marked by a bonfire on the Green constructed by the freshman class. Winter term is celebrated by Winter Carnival, a tradition started in 1911 by the Dartmouth Outing Club to promote winter sports. The tradition, the oldest of its kind in the United States, subsequently caught on at other New England colleges. In the spring, Green Key is a weekend mostly devoted to campus parties and celebration. The summer term was formerly marked by Tubestock, an unofficial tradition in which the students used wooden rafts and inner tubes to float on the Connecticut River. Begun in 1986, Tubestock was ended in 2006 by town ordinance. The Class of 2008, during their summer term on campus in 2006, replaced the defunct Tubestock with Fieldstock. This new celebration includes a barbecue, live music, and the revival of the 1970s and 1980s tradition of racing homemade chariots around the Green. Unlike Tubestock, Fieldstock is funded and supported by the College. Another longstanding tradition is the four-day, student-run Dartmouth Outing Club trips for incoming freshmen, begun in 1935. Each trip concludes at the Moosilauke Ravine Lodge. In 2011, over 96% of freshmen elected to participate. Dartmouth's motto, chosen by Eleazar Wheelock, is "Vox clamantis in deserto". The Latin motto is literally translated as "A calling voice in the wilderness", but is more often rendered as "A voice crying out in the wilderness". The phrase appears five times in the Bible and is a reference to the college's location on what was once the frontier of European settlement. Richard Hovey's "Men of Dartmouth" was elected as the best of Dartmouth's songs in 1896, and became the school's official song in 1926. The song was retitled "Alma Mater" in the 1980s when its lyrics were changed to refer to women as well as men. Dartmouth's 1769 royal charter required the creation of a seal for use on official documents and diplomas. The college's founder Eleazar Wheelock designed a seal for his college bearing a striking resemblance to the seal of the Society for the Propagation of the Gospel, a missionary society founded in London in 1701, in order to maintain the illusion that his college was more for mission work than for higher education. Engraved by a Boston silversmith, the seal was ready by the commencement of 1773. 
The trustees officially accepted the seal on August 25, 1773. On October 28, 1926, the trustees affirmed the charter's reservation of the seal for official corporate documents alone. The College Publications Committee commissioned noted typographer William Addison Dwiggins to create a line drawing version of the seal in 1940 that saw widespread use. Dwiggins' design was modified during 1957 to change the date from "1770" to "1769", to accord with the date of the college charter. The trustees commissioned a new set of dies with a date of "1769" to replace the old dies, by then badly worn after almost two hundred years of use. The 1957 design continues to be used under trademark number 2305032. Also on October 28, 1926, the trustees approved a "Dartmouth College Shield" for general use. Artist and engraver W. Parke Johnson designed this emblem on the basis of the shield that is depicted at the center of the original seal. This design does not survive. On June 9, 1944, the trustees approved another coat of arms based on the shield part of the seal, this one by Canadian artist and designer Thoreau MacDonald. That design was used widely and, like Dwiggins' seal, had its date changed from "1770" to "1769" around 1958. That version continues to be used under trademark registration number 3112676 and others. College designer John Scotford made a stylized version of the shield during the 1960s, but it did not see the success of MacDonald's design. The shield appears to have been used as the basis of the shield of Dartmouth Medical School, and it has been reproduced in sizes as small as 20 micrometers across. The design has appeared on Rudolph Ruzicka's Bicentennial Medal (Philadelphia Mint, 1969) and elsewhere. Dartmouth has never had an official mascot. The nickname "The Big Green," originating in the 1860s, is based on students' adoption of a shade of forest green ("Dartmouth Green") as the school's official color in 1866. Beginning in the 1920s, the Dartmouth College athletic teams were known by their unofficial nickname "the Indians", a moniker that probably originated among sports journalists. This unofficial mascot and team name was used until the early 1970s, when its use came under criticism. In 1974, the Trustees declared the "use of the [Indian] symbol in any form to be inconsistent with present institutional and academic objectives of the College in advancing Native American education." Some alumni and students, as well as the conservative student newspaper, "The Dartmouth Review", have sought to return the Indian symbol to prominence, but have never succeeded in doing so. Various student initiatives have been undertaken to adopt a mascot, but none has become "official." One proposal devised by the college humor magazine the "Dartmouth Jack-O-Lantern" was Keggy the Keg, an anthropomorphic beer keg who makes occasional appearances at college sporting events. Despite student enthusiasm for Keggy, the mascot has received approval from only the student government. In November 2006, the student government attempted to revive the "Dartmoose" as a potential replacement amid renewed controversy surrounding the former unofficial Indian mascot. Dartmouth's alumni are known for their devotion to the college. Most start by giving to the Senior Class Gift. According to a 2008 article in "The Wall Street Journal" based on data from payscale.com, Dartmouth graduates also earn higher median salaries at least 10 years after graduation than alumni of any other American university surveyed. 
By 2008, Dartmouth had graduated 238 classes of students and had over 60,000 living alumni in a variety of fields. Nelson A. Rockefeller, 41st Vice President of the United States and 49th Governor of New York, graduated "cum laude" from Dartmouth with a degree in economics in 1930. Over 164 Dartmouth graduates have served in the United States Senate and United States House of Representatives, such as Massachusetts statesman Daniel Webster. Cabinet members of American presidents include Attorney General Amos T. Akerman, Secretary of Defense James V. Forrestal, Secretary of Labor Robert Reich, former Secretary of the Treasury Henry Paulson, and former Secretary of the Treasury Timothy Geithner. C. Everett Koop was the Surgeon General of the United States under President Ronald Reagan. Two Dartmouth alumni have served as justices on the Supreme Court of the United States: Salmon P. Chase and Levi Woodbury. Eugene Norman Veasey (class of 1954) served as the Chief Justice of Delaware. The 46th and current Governor of Pennsylvania, Tom Wolf, and the 42nd and current Governor of Illinois, businessman Bruce Rauner, are also Dartmouth alumni. In literature and journalism, Dartmouth has produced thirteen Pulitzer Prize winners: Thomas M. Burton, Richard Eberhart, Dan Fagin, Paul Gigot, Frank Gilroy, Jake Hooker, Nigel Jaquiss, Joseph Rago, Martin J. Sherwin, David K. Shipler, David Shribman, and Justin Harvey Smith. Other authors and media personalities include ABC Senior White House correspondent Jake Tapper, novelist and founding editor of "The Believer" Heidi Julavits, "Dean of rock critics" Robert Christgau, National Book Award winners Louise Erdrich and Phil Klay, novelist/screenwriter Budd Schulberg, political analyst Dinesh D'Souza, radio talk show host Laura Ingraham, commentator Mort Kondracke, and journalist James Panero. Norman Maclean, a former professor at the University of Chicago and author of "A River Runs Through It and Other Stories", graduated from Dartmouth in 1924. Theodor Geisel, better known as children's author Dr. Seuss, was a member of the class of 1925. In the area of religion and theology, Dartmouth alumni include priests and ministers Ebenezer Porter, Jonathan Clarkson Gibbs, Caleb Sprague Henry, Arthur Whipple Jenks, Solomon Spalding, and Joseph Tracy; and rabbis Marshall Meyer, Arnold Resnicoff, and David E. Stern. Hyrum Smith, brother of Mormon Prophet Joseph Smith, attended the college in his teens and later became Patriarch of the LDS Church. Dartmouth alumni in academia include Stuart Kauffman and Jeffrey Weeks, both recipients of MacArthur Fellowships (commonly called "genius grants"). Dartmouth has also graduated three Nobel Prize winners: Owen Chamberlain (Physics, 1959), K. Barry Sharpless (Chemistry, 2001), and George Davis Snell (Physiology or Medicine, 1980). Educators include founder and first president of Bates College, Oren Burbank Cheney (1839); the current chancellor of the University of California, San Diego, Marye Anne Fox (PhD in chemistry, 1974); founding president of Vassar College Milo Parker Jewett; founder and first president of Kenyon College Philander Chase; first professor of Wabash College Caleb Mills; and former president of Union College Charles Augustus Aiken. Nine of Dartmouth's 17 presidents were alumni of the College. 
Dartmouth alumni serving as CEOs or company presidents and executives include Charles Alfred Pillsbury, founder of the Pillsbury Company and patriarch of the Pillsbury family; Sandy Alderson (San Diego Padres); John Donahoe (eBay); Louis V. Gerstner, Jr. (IBM); Charles E. Haldeman (Putnam Investments); Donald J. Hall, Sr. (Hallmark Cards); Jeffrey R. Immelt (General Electric); Gail Koziara Boudreaux (United Health Care); Grant Tinker (NBC); and Brian Goldner (Hasbro). In film, entertainment, and television, Dartmouth is represented by Budd Schulberg, Academy Award-winning screenwriter of "On the Waterfront"; Michael Phillips, who won the Academy Award for best picture as co-producer of "The Sting"; Rachel Dratch, a cast member of "Saturday Night Live"; Shonda Rhimes, creator of "Grey's Anatomy", "Private Practice", and "Scandal"; Chris Meledandri, executive producer of "Ice Age", "Horton Hears a Who!", and "Despicable Me"; and Fred Rogers, the title character of "Mister Rogers' Neighborhood". Other notable film and television figures include Sarah Wayne Callies ("Prison Break"), Emmy Award winner Michael Moriarty, Andrew Shue of "Melrose Place", Aisha Tyler of "Friends" and "24", Connie Britton of "Spin City", "The West Wing" and "Friday Night Lights", Mindy Kaling of "The Office" and "The Mindy Project", and David Harbour of "Stranger Things". A number of Dartmouth alumni have found success in professional sports. In baseball, Dartmouth alumni include All-Star and three-time Gold Glove winner and manager Brad Ausmus, All-Star reliever Mike Remlinger, and pitcher Kyle Hendricks. Professional football players include former Miami Dolphins quarterback Jay Fiedler, linebacker Reggie Williams, three-time Pro Bowler Nick Lowery, quarterback Jeff Kemp, and Tennessee Titans tight end Casey Cramer, as well as Miami Dolphins defensive coordinator Matt Burke. Dartmouth has also produced a number of Olympic competitors. Adam Nelson won the silver medal in the shot put at the 2000 Sydney Olympics and the gold medal at the 2004 Athens Olympics, to go along with his gold medal at the 2005 World Championships in Athletics in Helsinki. Kristin King and Sarah Parsons were members of the United States' 2006 bronze medal-winning ice hockey team. Cherie Piper, Gillian Apps, and Katie Weatherston were among Canada's ice hockey gold medalists in 2006. Dick Durrance and Tim Caldwell competed for the United States in skiing in the 1936 and 1976 Winter Olympics, respectively. Arthur Shaw, Earl Thomson, Edwin Myers, Marc Wright, Adam Nelson, Gerry Ashworth, and Vilhjálmur Einarsson have all won medals in track and field events. Former heavyweight rower Dominic Seiterle is a member of the Canadian national rowing team and won a gold medal at the 2008 Summer Olympics in the men's 8+ event. Dartmouth College has appeared in or been referenced by a number of popular media. Most notably, the 1978 comedy film "National Lampoon's Animal House" was co-written by Chris Miller '63, and is based loosely on a series of stories he wrote about his fraternity days at Dartmouth. In a CNN interview, John Landis said the movie was "based on Chris Miller's real fraternity at Dartmouth", Alpha Delta Phi. Dartmouth's Winter Carnival tradition was the subject of the 1939 film "Winter Carnival", starring Ann Sheridan and written by Budd Schulberg '36 and F. Scott Fitzgerald. Dodo The dodo ("Raphus cucullatus") is an extinct flightless bird that was endemic to the island of Mauritius, east of Madagascar in the Indian Ocean. 
The dodo's closest genetic relative was the also-extinct Rodrigues solitaire, the two forming the subfamily Raphinae of the family of pigeons and doves. The closest living relative of the dodo is the Nicobar pigeon. A white dodo was once thought to have existed on the nearby island of Réunion, but this is now thought to have been confusion based on the Réunion ibis and paintings of white dodos. Subfossil remains show the dodo stood about 1 metre (3 ft 3 in) tall; estimates of its weight in the wild vary considerably. The dodo's appearance in life is evidenced only by drawings, paintings, and written accounts from the 17th century. Because these vary considerably, and because only some illustrations are known to have been drawn from live specimens, its exact appearance in life remains unresolved, and little is known about its behaviour. Though the dodo has historically been considered fat and clumsy, it is now thought to have been well-adapted for its ecosystem. It has been depicted with brownish-grey plumage, yellow feet, a tuft of tail feathers, a grey, naked head, and a black, yellow, and green beak. It used gizzard stones to help digest its food, which is thought to have included fruits, and its main habitat is believed to have been the woods in the drier coastal areas of Mauritius. One account states its clutch consisted of a single egg. It is presumed that the dodo became flightless because of the ready availability of abundant food sources and a relative absence of predators on Mauritius. The first recorded mention of the dodo was by Dutch sailors in 1598. In the following years, the bird was hunted by sailors and invasive species, while its habitat was being destroyed. The last widely accepted sighting of a dodo was in 1662. Its extinction was not immediately noticed, and some considered it to be a mythical creature. In the 19th century, research was conducted on a small quantity of remains of four specimens that had been brought to Europe in the early 17th century. Among these is a dried head, the only soft tissue of the dodo that remains today. Since then, a large amount of subfossil material has been collected on Mauritius, mostly from the Mare aux Songes swamp. The extinction of the dodo within less than a century of its discovery called attention to the previously unrecognised problem of human involvement in the disappearance of entire species. The dodo achieved widespread recognition from its role in the story of "Alice's Adventures in Wonderland", and it has since become a fixture in popular culture, often as a symbol of extinction and obsolescence. Early scientists variously declared the dodo a small ostrich, a rail, an albatross, or a vulture. In 1842, Danish zoologist Johannes Theodor Reinhardt proposed that dodos were ground pigeons, based on studies of a dodo skull he had discovered in the collection of the Natural History Museum of Denmark. This view was met with ridicule, but was later supported by English naturalists Hugh Edwin Strickland and Alexander Gordon Melville in their 1848 monograph "The Dodo and Its Kindred", which attempted to separate myth from reality. After dissecting the preserved head and foot of the specimen at the Oxford University Museum and comparing them with the few remains then available of the extinct Rodrigues solitaire ("Pezophaps solitaria"), they concluded that the two were closely related. Strickland stated that although not identical, these birds shared many distinguishing features of the leg bones, otherwise known only in pigeons. 
Strickland and Melville established that the dodo was anatomically similar to pigeons in many features. They pointed to the very short keratinous portion of the beak, with its long, slender, naked basal part. Other pigeons also have bare skin around their eyes, almost reaching their beak, as in dodos. The forehead was high in relation to the beak, and the nostril was located low on the middle of the beak and surrounded by skin, a combination of features shared only with pigeons. The legs of the dodo were generally more similar to those of terrestrial pigeons than of other birds, both in their scales and in their skeletal features. Depictions of the large crop hinted at a relationship with pigeons, in which this feature is more developed than in other birds. Pigeons generally have very small clutches, and the dodo is said to have laid a single egg. Like pigeons, the dodo lacked the vomer and septum of the nostrils, and it shared details in the mandible, the zygomatic bone, the palate, and the hallux. The dodo differed from other pigeons mainly in the small size of the wings and the large size of the beak in proportion to the rest of the cranium. Throughout the 19th century, several species were classified as congeneric with the dodo, including the Rodrigues solitaire and the Réunion solitaire, as "Didus solitarius" and "Raphus solitarius", respectively ("Didus" and "Raphus" being names for the dodo genus used by different authors of the time). An atypical 17th-century description of a dodo and bones found on Rodrigues, now known to have belonged to the Rodrigues solitaire, led Abraham Dee Bartlett to name a new species, "Didus nazarenus", in 1852. Based on solitaire remains, it is now a synonym of that species. Crude drawings of the red rail of Mauritius were also misinterpreted as dodo species, named "Didus broeckii" and "Didus herberti". For many years the dodo and the Rodrigues solitaire were placed in a family of their own, the Raphidae (formerly Dididae), because their exact relationships with other pigeons were unresolved. Each was also placed in its own monotypic family (Raphidae and Pezophapidae, respectively), as it was thought that they had evolved their similarities independently. Osteological and DNA analysis has since led to the dissolution of the family Raphidae, and the dodo and solitaire are now placed in their own subfamily, Raphinae, within the family Columbidae. In 2002, American geneticist Beth Shapiro and colleagues analysed the DNA of the dodo for the first time. Comparison of mitochondrial cytochrome "b" and 12S rRNA sequences isolated from a tarsal of the Oxford specimen and a femur of a Rodrigues solitaire confirmed their close relationship and their placement within the Columbidae. The genetic evidence was interpreted as showing the Southeast Asian Nicobar pigeon ("Caloenas nicobarica") to be their closest living relative, followed by the crowned pigeons ("Goura") of New Guinea, and the superficially dodo-like tooth-billed pigeon ("Didunculus strigirostris") from Samoa (its scientific name refers to its dodo-like beak). This clade consists of generally ground-dwelling island endemic pigeons. A cladogram based on Shapiro et al. (2002) shows the dodo's closest relationships within the Columbidae; a similar cladogram was published in 2007, inverting the placement of "Goura" and "Didunculus" and including the pheasant pigeon ("Otidiphaps nobilis") and the thick-billed ground pigeon ("Trugon terrestris") at the base of the clade. 
The DNA used in these studies was obtained from the Oxford specimen, and since this material is degraded, and no usable DNA has been extracted from subfossil remains, these findings still need to be independently verified. Based on behavioural and morphological evidence, Jolyon C. Parish proposed that the dodo and Rodrigues solitaire should be placed in the subfamily Gourinae along with the "Goura" pigeons and others, in agreement with the genetic evidence. In 2014, DNA of the only known specimen of the recently extinct spotted green pigeon ("Caloenas maculata") was analysed, and it was found to be a close relative of the Nicobar pigeon, and thus also of the dodo and Rodrigues solitaire. The 2002 study indicated that the ancestors of the dodo and the solitaire diverged around the Paleogene-Neogene boundary. The Mascarene Islands (Mauritius, Réunion, and Rodrigues) are of volcanic origin and are less than 10 million years old. Therefore, the ancestors of both birds probably remained capable of flight for a considerable time after the separation of their lineage. The Nicobar and spotted green pigeon were placed at the base of a lineage leading to the Raphinae, which indicates the flightless raphines had ancestors that were able to fly, were semi-terrestrial, and inhabited islands. This in turn supports the hypothesis that the ancestors of those birds reached the Mascarene islands by island hopping from South Asia. The lack of mammalian herbivores competing for resources on these islands allowed the solitaire and the dodo to attain very large sizes and flightlessness. Despite the dodo's divergent skull morphology and adaptations for larger size, many features of its skeleton remained similar to those of smaller, flying pigeons. Another large, flightless pigeon, the Viti Levu giant pigeon ("Natunaornis gigoura"), was described in 2001 from subfossil material from Fiji. It was only slightly smaller than the dodo and the solitaire, and it too is thought to have been related to the crowned pigeons. One of the original names for the dodo was the Dutch "Walghvogel", first used in the journal of Vice Admiral Wybrand van Warwijck, who visited Mauritius during the Second Dutch Expedition to Indonesia in 1598. "Walghe" means "tasteless", "insipid", or "sickly", and "vogel" means "bird". The name was translated into German as "Walchstök" or "Walchvögel" by Jakob Friedlib. The original Dutch report, titled "Waarachtige Beschryving", was lost, but the English translation survived. Another account from that voyage, perhaps the first to mention the dodo, states that the Portuguese referred to them as penguins. The meaning may not have been derived from "penguin" (the Portuguese referred to them as "fotilicaios" at the time), but from "pinion", a reference to the small wings. The crew of the Dutch ship "Gelderland" referred to the bird as "Dronte" (meaning "swollen") in 1602, a name that is still used in some languages. This crew also called them "griff-eendt" and "kermisgans", in reference to fowl fattened for the Kermesse festival in Amsterdam, which was held the day after they anchored on Mauritius. The etymology of the word "dodo" is unclear. Some ascribe it to the Dutch word "dodoor" for "sluggard", but it is more probably related to "Dodaars", which means either "fat-arse" or "knot-arse", referring to the knot of feathers on the hind end. The first record of the word "Dodaars" is in Captain Willem Van West-Zanen's journal in 1602. 
The English writer Sir Thomas Herbert was the first to use the word "dodo" in print in his 1634 travelogue, claiming it was referred to as such by the Portuguese, who had visited Mauritius in 1507. Another Englishman, Emmanuel Altham, had used the word in a 1628 letter, in which he also claimed the origin was Portuguese. The name "dodar" was introduced into English at the same time as dodo, but was only used until the 18th century. As far as is known, the Portuguese never mentioned the bird. Nevertheless, some sources still state that the word "dodo" derives from the Portuguese word "doudo" (currently "doido"), meaning "fool" or "crazy". It has also been suggested that "dodo" was an onomatopoeic approximation of the bird's call, a two-note pigeon-like sound resembling "doo-doo". The Latin name "cucullatus" ("hooded") was first used by Juan Eusebio Nieremberg in 1635 as "Cygnus cucullatus", in reference to Carolus Clusius's 1605 depiction of a dodo. In his 18th-century classic work "Systema Naturae", Carl Linnaeus used "cucullatus" as the specific name, but combined it with the genus name "Struthio" (ostrich). Mathurin Jacques Brisson coined the genus name "Raphus" (referring to the bustards) in 1760, resulting in the current name "Raphus cucullatus". In 1766, Linnaeus coined the new binomial "Didus ineptus" (meaning "inept dodo"). This has become a synonym of the earlier name because of nomenclatural priority. As no complete dodo specimens exist, its external appearance, such as plumage and colouration, is hard to determine. Illustrations and written accounts of encounters with the dodo between its discovery and its extinction (1598–1662) are the primary evidence for its external appearance. According to most representations, the dodo had greyish or brownish plumage, with lighter primary feathers and a tuft of curly light feathers high on its rear end. The head was grey and naked, the beak green, black and yellow, and the legs were stout and yellowish, with black claws. A study of the few remaining feathers on the Oxford specimen head showed that they were pennaceous rather than plumaceous (downy) and most similar to those of other pigeons. Subfossil remains and remnants of the birds that were brought to Europe in the 17th century show that dodos were very large birds, standing up to about 1 metre (3 ft 3 in) tall. The bird was sexually dimorphic; males were larger and had proportionally longer beaks. Weight estimates have varied from study to study. In 1993, Bradley C. Livezey proposed separate weight estimates for males and females, with males the heavier. Also in 1993, Andrew C. Kitchener attributed the high contemporary weight estimates and the roundness of dodos depicted in Europe to these birds having been overfed in captivity; he estimated that weights in the wild were considerably lower, and that only fattened captive birds reached the higher figures. A 2011 estimate by Angst and colleagues gave a still lower average weight. This has also been questioned, and there is still controversy over weight estimates. A 2016 study, based on CT scans of composite skeletons, produced another estimate of the dodo's weight. It has also been suggested that the weight depended on the season, and that individuals were fat during cool seasons, but less so during hot. The skull of the dodo differed greatly from those of other pigeons, especially in being more robust, the bill having a hooked tip, and in having a short cranium compared to the jaws. The upper bill was nearly twice as long as the cranium, which was short compared to those of its closest pigeon relatives. 
The openings of the bony nostrils were elongated along the length of the beak, and they contained no bony septum. The cranium (excluding the beak) was wider than it was long, and the frontal bone formed a dome-shape, with the highest point above the hind part of the eye sockets. The skull sloped downwards at the back. The eye sockets occupied much of the hind part of the skull. The sclerotic rings inside the eye were formed by eleven ossicles (small bones), similar to the amount in other pigeons. The mandible was slightly curved, and each half had a single fenestra (opening), as in other pigeons. The dodo had about nineteen presynsacral vertebrae (those of the neck and thorax, including three fused into a notarium), sixteen synsacral vertebrae (those of the lumbar region and sacrum), six free tail (caudal) vertebrae, and a pygostyle. The neck had well-developed areas for muscle and ligament attachment, probably to support the heavy skull and beak. On each side, it had six ribs, four of which articulated with the sternum through sternal ribs. The sternum was large, but small in relation to the body compared to those of much smaller pigeons that are able to fly. The sternum was highly pneumatic, broad, and relatively thick in cross-section. The bones of the pectoral girdle, shoulder blades, and wing bones were reduced in size compared to those of flighted pigeon, and were more gracile compared to those of the Rodrigues solitaire, but none of the individual skeletal components had disappeared. The carpometacarpus of the dodo was more robust than that of the solitaire, however. The pelvis was wider than that of the solitaire and other relatives, yet was comparable to the proportions in some smaller, flighted pigeons. Most of the leg bones were more robust than those of extant pigeons and the solitaire, but the length proportions were little different. Many of the skeletal features that distinguish the dodo and the Rodrigues solitaire, its closest relative, from pigeons have been attributed to their flightlessness. The pelvic elements were thicker than those of flighted pigeons to support the higher weight, and the pectoral region and the small wings were paedomorphic, meaning that they were underdeveloped and retained juvenile features. The skull, trunk and pelvic limbs were peramorphic, meaning that they changed considerably with age. The dodo shared several other traits with the Rodrigues solitaire, such as features of the skull, pelvis, and sternum, as well as their large size. It differed in other aspects, such as being more robust and shorter than the solitaire, having a larger skull and beak, a rounded skull roof, and smaller orbits. The dodo's neck and legs were proportionally shorter, and it did not possess an equivalent to the knob present on the solitaire's wrists. Most contemporary descriptions of the dodo are found in ship's logs and journals of the Dutch East India Company vessels that docked in Mauritius when the Dutch Empire ruled the island. These records were used as guides for future voyages. Few contemporary accounts are reliable, as many seem to be based on earlier accounts, and none were written by scientists. 
One of the earliest accounts, from van Warwijck's 1598 journal, describes the bird as follows: One of the most detailed descriptions is by Sir Thomas Herbert in "A Relation of Some Yeares Travaille into Afrique and the Greater Asia" from 1634: The travel journal of the Dutch ship "Gelderland" (1601–1603), rediscovered in the 1860s, contains the only known sketches of living or recently killed specimens drawn on Mauritius. They have been attributed to the professional artist Joris Joostensz Laerle, who also drew other now-extinct Mauritian birds, and to a second, less refined artist. Apart from these sketches, it is unknown how many of the twenty or so 17th-century illustrations of the dodos were drawn from life or from stuffed specimens, which affects their reliability. All post-1638 depictions appear to be based on earlier images, around the time reports mentioning dodos became rarer. Differences in the depictions led authors such as Anthonie Cornelis Oudemans and Masauji Hachisuka to speculate about sexual dimorphism, ontogenic traits, seasonal variation, and even the existence of different species, but these theories are not accepted today. Because details such as markings of the beak, the form of the tail feathers, and colouration vary from account to account, it is impossible to determine the exact morphology of these features, whether they signal age or sex, or if they even reflect reality. Dodo specialist Julian Hume argued that the nostrils of the living dodo would have been slits, as seen in the "Gelderland", Cornelis Saftleven, Crocker Art Gallery, and Ustad Mansur images. According to this claim, the gaping nostrils often seen in paintings indicate that taxidermy specimens were used as models. Most depictions show that the wings were held in an extended position, unlike flighted pigeons, but similar to ratites such as the ostrich and kiwi. The traditional image of the dodo is of a very fat and clumsy bird, but this view may be exaggerated. The general opinion of scientists today is that many old European depictions were based on overfed captive birds or crudely stuffed specimens. It has also been suggested that the images might show dodos with puffed feathers, as part of display behaviour. The Dutch painter Roelant Savery was the most prolific and influential illustrator of the dodo, having made at least ten depictions, often showing it in the lower corners. A famous painting of his from 1626, now called "Edwards's Dodo" as it was once owned by the ornithologist George Edwards, has since become the standard image of a dodo. It is housed in the Natural History Museum, London. The image shows a particularly fat bird and is the source for many other dodo illustrations. An Indian Mughal painting rediscovered in St. Petersburg in the 1950s shows a dodo along with native Indian birds. It depicts a slimmer, brownish bird, and its discoverer A. Iwanow and dodo specialist Julian Hume regard it as one of the most accurate depictions of the living dodo; the surrounding birds are clearly identifiable and depicted with appropriate colouring. It is believed to be from the 17th century and has been attributed to artist Ustad Mansur. The bird depicted probably lived in the menagerie of Mughal Emperor Jahangir, located in Surat, where English traveller Peter Mundy also claimed to have seen two dodos sometime between 1628 and 1633. In 2014, another Indian illustration of a dodo was reported, but it was found to be derivative of an 1836 German illustration. 
Little is known of the behaviour of the dodo, as most contemporary descriptions are very brief. Based on weight estimates, it has been suggested the male could reach the age of 21, and the female 17. Studies of the cantilever strength of its leg bones indicate that it could run quite fast. The legs were robust and strong to support the bulk of the bird, and also made it agile and manoeuvrable in the dense, pre-human landscape. Though the wings were small, well-developed muscle scars on the bones show that they were not completely vestigial, and may have been used for display behaviour and balance; extant pigeons also use their wings for such purposes. Unlike the Rodrigues solitaire, there is no evidence that the dodo used its wings in intraspecific combat. Though some dodo bones have been found with healed fractures, it had weak pectoral muscles and more reduced wings in comparison. The dodo may instead have used its large, hooked beak in territorial disputes. Since Mauritius receives more rainfall and has less seasonal variation than Rodrigues, which would have affected the availability of resources on the island, the dodo would have less reason to evolve aggressive territorial behaviour. The Rodrigues solitaire was therefore probably the more aggressive of the two. The preferred habitat of the dodo is unknown, but old descriptions suggest that it inhabited the woods on the drier coastal areas of south and west Mauritius. This view is supported by the fact that the Mare aux Songes swamp, where most dodo remains have been excavated, is close to the sea in south-eastern Mauritius. Such a limited distribution across the island could well have contributed to its extinction. A 1601 map from the "Gelderland" journal shows a small island off the coast of Mauritius where dodos were caught. Julian Hume has suggested this island was l'île aux Benitiers in Tamarin Bay, on the west coast of Mauritius. Subfossil bones have also been found inside caves in highland areas, indicating that it once occurred on mountains. Work at the Mare aux Songes swamp has shown that its habitat was dominated by tambalacoque and "Pandanus" trees and endemic palms. The near-coastal placement and wetness of the Mare aux Songes led to a high diversity of plant species, whereas the surrounding areas were drier. Many endemic species of Mauritius became extinct after the arrival of humans, so the ecosystem of the island is badly damaged and hard to reconstruct. Before humans arrived, Mauritius was entirely covered in forests, but very little remains of them today, because of deforestation. The surviving endemic fauna is still seriously threatened. The dodo lived alongside other recently extinct Mauritian birds such as the flightless red rail, the broad-billed parrot, the Mascarene grey parakeet, the Mauritius blue pigeon, the Mauritius owl, the Mascarene coot, the Mauritian shelduck, the Mauritian duck, and the Mauritius night heron. Extinct Mauritian reptiles include the saddle-backed Mauritius giant tortoise, the domed Mauritius giant tortoise, the Mauritian giant skink, and the Round Island burrowing boa. The small Mauritian flying fox and the snail "Tropidophora carinata" lived on Mauritius and Réunion, but vanished from both islands. Some plants, such as "Casearia tinifolia" and the palm orchid, have also become extinct. A 1631 Dutch letter (long thought lost, but rediscovered in 2017) is the only account of the dodo's diet, and also mentions that it used its beak for defence. 
The document uses word-play to refer to the animals described, with dodos presumably being an allegory for wealthy mayors: In addition to fallen fruits, the dodo probably subsisted on nuts, seeds, bulbs, and roots. It has also been suggested that the dodo might have eaten crabs and shellfish, like its relatives the crowned pigeons. Its feeding habits must have been versatile, since captive specimens were probably given a wide range of food on the long sea journeys. Oudemans suggested that as Mauritius has marked dry and wet seasons, the dodo probably fattened itself on ripe fruits at the end of the wet season to survive the dry season, when food was scarce; contemporary reports describe the bird's "greedy" appetite. France Staub suggested that they mainly fed on palm fruits, and he attempted to correlate the fat-cycle of the dodo with the fruiting regime of the palms. Skeletal elements of the upper jaw appear to have been rhynchokinetic (movable in relation to each other), which must have affected its feeding behaviour. In extant birds, such as frugivorous (fruit-eating) pigeons, kinetic premaxillae help with consuming large food items. The beak also appears to have been able to withstand high force loads, which indicates a diet of hard food. In 2016, the first 3D endocast was made from the brain of the dodo; examination found that though the brain was similar to that of other pigeons in most respects, the dodo had a comparatively large olfactory bulb. This gave the dodo a good sense of smell, which may have aided in locating fruit and small prey. Several contemporary sources state that the dodo used gastroliths (gizzard stones) to aid digestion. The English writer Sir Hamon L'Estrange witnessed a live bird in London and described it as follows: It is not known how the young were fed, but related pigeons provide crop milk. Contemporary depictions show a large crop, which was probably used to add space for food storage and to produce crop milk. It has been suggested that the maximum size attained by the dodo and the solitaire was limited by the amount of crop milk they could produce for their young during early growth. In 1973, the tambalacoque, also known as the dodo tree, was thought to be dying out on Mauritius, to which it is endemic. There were supposedly only 13 specimens left, all estimated to be about 300 years old. Stanley Temple hypothesised that it depended on the dodo for its propagation, and that its seeds would germinate only after passing through the bird's digestive tract. He claimed that the tambalacoque was now nearly coextinct because of the disappearance of the dodo. Temple overlooked reports from the 1940s that found that tambalacoque seeds germinated, albeit very rarely, without being abraded during digestion. Others have contested his hypothesis and suggested that the decline of the tree was exaggerated, or that its seeds were also distributed by other extinct animals such as "Cylindraspis" tortoises, fruit bats or the broad-billed parrot. According to Wendy Strahm and Anthony Cheke, two experts in the ecology of the Mascarene Islands, the tree, while rare, has germinated since the demise of the dodo and numbers several hundred, not 13 as claimed by Temple, thereby discrediting Temple's view that the tree's survival depended solely on the dodo. It has been suggested that the broad-billed parrot may have depended on dodos and "Cylindraspis" tortoises to eat palm fruits and excrete their seeds, which became food for the parrots. 
"Anodorhynchus" macaws depended on now-extinct South American megafauna in the same way, but now rely on domesticated cattle for this service. As it was flightless and terrestrial and there were no mammalian predators or other kinds of natural enemy on Mauritius, the dodo probably nested on the ground. The account by François Cauche from 1651 is the only description of the egg and the call: Cauche's account is problematic, since it also mentions that the bird he was describing had three toes and no tongue, unlike dodos. This led some to believe that Cauche was describing a new species of dodo (""Didus nazarenus""). The description was most probably mingled with that of a cassowary, and Cauche's writings have other inconsistencies. A mention of a "young ostrich" taken on board a ship in 1617 is the only other reference to a possible juvenile dodo. An egg claimed to be that of a dodo is stored in the museum of East London, South Africa. It was donated by Marjorie Courtenay-Latimer, whose great aunt had received it from a captain who claimed to have found it in a swamp on Mauritius. In 2010, the curator of the museum proposed using genetic studies to determine its authenticity. It may instead be an aberrant ostrich egg. Because of the possible single-egg clutch and the bird's large size, it has been proposed that the dodo was K-selected, meaning that it produced a low number of altricial offspring, which required parental care until they matured. Some evidence, including the large size and the fact that tropical and frugivorous birds have slower growth rates, indicates that the bird may have had a protracted development period. The fact that no juvenile dodos have been found in the Mare aux Songes swamp may indicate that they produced little offspring, that they matured rapidly, that the breeding grounds were far away from the swamp, or that the risk of miring was seasonal. A 2017 study examined the histology of thin-sectioned dodo bones, modern Mauritian birds, local ecology, and contemporary accounts, to recover information about the life history of the dodo. The study suggested that dodos bred around August, after having potentially fattened themselves, corresponding with the fat and thin cycles of many vertebrates of Mauritius. The chicks grew rapidly, reaching robust, almost adult, sizes, and sexual maturity before Austral summer or the cyclone season. Adult dodos which had just bred moulted after Austral summer, around March. The feathers of the wings and tail were replaced first, and the moulting would have completed at the end of July, in time for the next breeding season. Different stages of moulting may also account for inconsistencies in contemporary descriptions of dodo plumage. Mauritius had previously been visited by Arab vessels in the Middle Ages and Portuguese ships between 1507 and 1513, but was settled by neither. No records of dodos by these are known, although the Portuguese name for Mauritius, "Cerne (swan) Island", may have been a reference to dodos. The Dutch Empire acquired Mauritius in 1598, renaming it after Maurice of Nassau, and it was used for the provisioning of trade vessels of the Dutch East India Company henceforward. The earliest known accounts of the dodo were provided by Dutch travelers during the Second Dutch Expedition to Indonesia, led by admiral Jacob van Neck in 1598. They appear in reports published in 1601, which also contain the first published illustration of the bird. 
Since the first sailors to visit Mauritius had been at sea for a long time, their interest in these large birds was mainly culinary. The 1602 journal by Willem Van West-Zanen of the ship "Bruin-Vis" mentions that 24–25 dodos were hunted for food, which were so large that two could scarcely be consumed at mealtime, their remains being preserved by salting. An illustration made for the 1648 published version of this journal, showing the killing of dodos, a dugong, and possibly Mascarene grey parakeets, was captioned with a Dutch poem, here in Hugh Strickland's 1848 translation: Some early travellers found dodo meat unsavoury, and preferred to eat parrots and pigeons; others described it as tough but good. Some hunted dodos only for their gizzards, as this was considered the most delicious part of the bird. Dodos were easy to catch, but hunters had to be careful not to be bitten by their powerful beaks. The appearance of the dodo and the red rail led Peter Mundy to speculate, 230 years before Charles Darwin's theory of evolution: The dodo was found interesting enough that living specimens were sent to Europe and the East. The number of transported dodos that reached their destinations alive is uncertain, and it is unknown how they relate to contemporary depictions and the few non-fossil remains in European museums. Based on a combination of contemporary accounts, paintings, and specimens, Julian Hume has inferred that at least eleven transported dodos reached their destinations alive. Hamon L'Estrange's description of a dodo that he saw in London in 1638 is the only account that specifically mentions a live specimen in Europe. In 1626 Adriaen van de Venne drew a dodo that he claimed to have seen in Amsterdam, but he did not mention if it were alive, and his depiction is reminiscent of Savery's "Edwards's Dodo". Two live specimens were seen by Peter Mundy in Surat, India, between 1628 and 1634, one of which may have been the individual painted by Ustad Mansur around 1625. In 1628, Emmanuel Altham visited Mauritius and sent a letter to his brother in England: Whether the dodo survived the journey is unknown, and the letter was destroyed by fire in the 19th century. The earliest known picture of a dodo specimen in Europe is from a collection of paintings depicting animals in the royal menagerie of Emperor Rudolph II in Prague. This collection includes paintings of other Mauritian animals as well, including a red rail. The dodo, which may be a juvenile, seems to have been dried or embalmed, and had probably lived in the emperor's zoo for a while together with the other animals. That whole stuffed dodos were present in Europe indicates they had been brought alive and died there; it is unlikely that taxidermists were on board the visiting ships, and spirits were not yet used to preserve biological specimens. Most tropical specimens were preserved as dried heads and feet. One dodo was reportedly sent as far as Nagasaki, Japan in 1647, but it was long unknown whether it arrived. Contemporary documents first published in 2014 proved the story, and showed that it had arrived alive. It was meant as a gift, and, despite its rarity, was considered of equal value to a white deer and a bezoar stone. It is the last recorded live dodo in captivity. Like many animals that evolved in isolation from significant predators, the dodo was entirely fearless of humans. This fearlessness and its inability to fly made the dodo easy prey for sailors. 
Although some scattered reports describe mass killings of dodos for ships' provisions, archaeological investigations have found scant evidence of human predation. Bones of at least two dodos were found in caves at Baie du Cap that sheltered fugitive slaves and convicts in the 17th century, which would not have been easily accessible to dodos because of the high, broken terrain. The human population on Mauritius (an area of ) never exceeded 50 people in the 17th century, but they introduced other animals, including dogs, pigs, cats, rats, and crab-eating macaques, which plundered dodo nests and competed for the limited food resources. At the same time, humans destroyed the forest habitat of the dodos. The impact of the introduced animals on the dodo population, especially the pigs and macaques, is today considered more severe than that of hunting. Rats were perhaps not much of a threat to the nests, since dodos would have been used to dealing with local land crabs. It has been suggested that the dodo may already have been rare or localised before the arrival of humans on Mauritius, since it would have been unlikely to become extinct so rapidly if it had occupied all the remote areas of the island. A 2005 expedition found subfossil remains of dodos and other animals killed by a flash flood. Such mass mortalities would have further jeopardised a species already in danger of becoming extinct. Yet the fact that the dodo survived hundreds of years of volcanic activity and climatic changes shows the bird was resilient within its ecosystem. Some controversy surrounds the date of its extinction. The last widely accepted record of a dodo sighting is the 1662 report by shipwrecked mariner Volkert Evertsz of the Dutch ship "Arnhem", who described birds caught on a small islet off Mauritius, now suggested to be Amber Island: The dodos on this islet may not necessarily have been the last members of the species. The last claimed sighting of a dodo was reported in the hunting records of Isaac Johannes Lamotius in 1688. Statistical analysis of these records by Roberts and Solow gives a new estimated extinction date of 1693, with a 95% confidence interval of 1688–1715. The authors also pointed out that because the last sighting before 1662 was in 1638, the dodo was probably already quite rare by the 1660s, and thus a disputed report from 1674 by an escaped slave cannot be dismissed out of hand. Cheke pointed out that some descriptions after 1662 use the names "Dodo" and "Dodaers" when referring to the red rail, indicating that they had been transferred to it after the disappearance of the dodo itself. Cheke therefore points to the 1662 description as the last credible observation. A 1668 account by English traveller John Marshall, who used the names "Dodo" and "Red Hen" interchangeably for the red rail, mentioned that the meat was "hard", which echoes the description of the meat in the 1681 account. Even the 1662 account has been questioned by the writer Errol Fuller, as the reaction to distress cries matches what was described for the red rail. Until this explanation was proposed, a description of "dodos" from 1681 was thought to be the last account, and that date still has proponents. Recently accessible Dutch manuscripts indicate that no dodos were seen by settlers in 1664–1674. It is unlikely the issue will ever be resolved, unless late reports mentioning the name alongside a physical description are rediscovered. 
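The Roberts and Solow figure is obtained by fitting a statistical model to the series of recorded sighting dates rather than taking the last sighting at face value. Their published analysis used optimal linear estimation; the sketch below is a simpler estimator in the same spirit (a uniform-sighting bound of the Strauss-Sadler/Solow type), shown only to illustrate how a sighting record yields a point estimate and an upper confidence bound. The function name and the example years are illustrative placeholders, not the actual Lamotius records or the authors' method.

```python
# A minimal sketch of a sighting-record extinction estimate, in the spirit of
# the Roberts & Solow analysis cited above.  This is NOT their optimal linear
# estimation method; it is the simpler uniform-sighting bound, and the
# sighting years below are illustrative placeholders only.

def extinction_bound(sighting_years, alpha=0.05):
    """Return (last_sighting, upper_bound) for the extinction year.

    The first sighting is taken as the start of the record; the n later
    sightings are assumed uniform between it and the unknown extinction
    year T.  Then P(observed span <= x) = (x / (T - t0))**n, which gives
    the one-sided bound  T <= t0 + span * alpha**(-1/n)  at level 1 - alpha.
    """
    years = sorted(sighting_years)
    t0, tn = years[0], years[-1]
    n = len(years) - 1                      # sightings after the first
    span = tn - t0
    upper = t0 + span * alpha ** (-1.0 / n)
    return tn, upper

# Hypothetical 17th-century sighting years (placeholders, not real data):
last, upper = extinction_bound([1598, 1601, 1611, 1628, 1638, 1662])
print(f"last sighting {last}, 95% upper bound on extinction ~{upper:.0f}")
```

The essential point such estimators capture is that the true extinction date almost certainly lies some years after the last confirmed sighting, which is why the 1693 estimate postdates the 1688 hunting records.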
The IUCN Red List accepts Cheke's rationale for choosing the 1662 date, taking all subsequent reports to refer to red rails. In any case, the dodo was probably extinct by 1700, about a century after its discovery in 1598. The Dutch left Mauritius in 1710, but by then the dodo and most of the large terrestrial vertebrates there had become extinct. Even though the rareness of the dodo was already reported in the 17th century, its extinction was not recognised until the 19th century. This was partly because, for religious reasons, extinction was not believed possible until later proved so by Georges Cuvier, and partly because many scientists doubted that the dodo had ever existed. It seemed altogether too strange a creature, and many believed it a myth. The bird was first used as an example of human-induced extinction in "Penny Magazine" in 1833, and it has since been referred to as an "icon" of extinction. The only extant remains of dodos taken to Europe in the 17th century are a dried head and foot in the Oxford University Museum of Natural History, a foot once housed in the British Museum but now lost, a skull in the University of Copenhagen Zoological Museum, and an upper jaw and leg bones in the National Museum, Prague. The last two were rediscovered and identified as dodo remains in the mid-19th century. Several stuffed dodos were also mentioned in old museum inventories, but none are known to have survived. Apart from these remains, a dried foot, which belonged to the Dutch professor Pieter Pauw, was mentioned by Carolus Clusius in 1605. Its provenance is unknown, and it is now lost, but it may have been collected during the Van Neck voyage. The only known soft tissue remains, the Oxford head (specimen OUM 11605) and foot, belonged to the last known stuffed dodo, which was first mentioned as part of the Tradescant collection in 1656 and was moved to the Ashmolean Museum in 1659. It has been suggested that this might be the remains of the bird that Hamon L'Estrange saw in London, the bird sent by Emmanuel Altham, or a donation by Thomas Herbert. Since the remains do not show signs of having been mounted, the specimen might instead have been preserved as a study skin. In 2018, it was reported that scans of the Oxford dodo's head showed that its skin and bone contained lead shot, pellets which were used to hunt birds in the 17th century. This indicates that the Oxford dodo was shot either before being transported to Britain, or some time after arriving. The circumstances of its killing are unknown, and the pellets are to be examined to identify where the lead was mined. Many sources state that the Ashmolean Museum burned the stuffed dodo around 1755 because of severe decay, saving only the head and leg. Statute 8 of the museum states "That as any particular grows old and perishing the keeper may remove it into one of the closets or other repository; and some other to be substituted." The deliberate destruction of the specimen is now believed to be a myth; it was removed from exhibition to preserve what remained of it. This remaining soft tissue has since degraded further; the head was dissected by Strickland and Melville, separating the skin from the skull in two halves. The foot is in a skeletal state, with only scraps of skin and tendons. Very few feathers remain on the head. It is probably a female, as the foot is 11% smaller and more gracile than the London foot, yet appears to be fully grown. 
The specimen was exhibited at the Oxford museum from at least the 1860s and until 1998, where-after it was mainly kept in storage to prevent damage. Casts of the head can today be found in many museums worldwide. The dried London foot, first mentioned in 1665, and transferred to the British Museum in the 18th century, was displayed next to Savery's "Edwards's Dodo" painting until the 1840s, and it too was dissected by Strickland and Melville. It was not posed in a standing posture, which suggests that it was severed from a fresh specimen, not a mounted one. By 1896 it was mentioned as being without its integuments, and only the bones are believed to remain today, though its present whereabouts are unknown. The Copenhagen skull (specimen ZMUC 90-806) is known to have been part of the collection of Bernardus Paludanus in Enkhuizen until 1651, when it was moved to the museum in Gottorf Castle, Schleswig. After the castle was occupied by Danish forces in 1702, the museum collection was assimilated into the Royal Danish collection. The skull was rediscovered by J. T. Reinhardt in 1840. Based on its history, it may be the oldest known surviving remains of a dodo brought to Europe in the 17th century. It is shorter than the Oxford skull, and may have belonged to a female. It was mummified, but the skin has perished. The front part of a skull (specimen NMP P6V-004389, a syntype of this species) in the National Museum of Prague was found in 1850 among the remains of the Böhmisches Museum. Other elements supposedly belonging to this specimen have been listed in the literature, but it appears only the partial skull was ever present. It may be what remains of one of the stuffed dodos known to have been at the menagerie of Emperor Rudolph II, possibly the specimen painted by Hoefnagel or Savery there. Until 1860, the only known dodo remains were the four incomplete 17th-century specimens. Philip Burnard Ayres found the first subfossil bones in 1860, which were sent to Richard Owen at the British Museum, who did not publish the findings. In 1863, Owen requested the Mauritian Bishop Vincent Ryan to spread word that he should be informed if any dodo bones were found. In 1865, George Clark, the government schoolmaster at Mahébourg, finally found an abundance of subfossil dodo bones in the swamp of Mare aux Songes in Southern Mauritius, after a 30-year search inspired by Strickland and Melville's monograph. In 1866, Clark explained his procedure to "The Ibis", an ornithology journal: he had sent his coolies to wade through the centre of the swamp, feeling for bones with their feet. At first they found few bones, until they cut away herbage that covered the deepest part of the swamp, where they found many fossils. The swamp yielded the remains of over 300 dodos, but very few skull and wing bones, possibly because the upper bodies were washed away or scavenged while the lower body was trapped. The situation is similar to many finds of moa remains in New Zealand marshes. Most dodo remains from the Mare aux Songes have a medium to dark brown colouration. Clark's reports about the finds rekindled interest in the bird. Sir Richard Owen and Alfred Newton both wanted to be first to describe the post-cranial anatomy of the dodo, and Owen bought a shipment of dodo bones originally meant for Newton, which led to rivalry between the two. Owen described the bones in "Memoir on the Dodo" in October 1866, but erroneously based his reconstruction on the "Edwards's Dodo" painting by Savery, making it too squat and obese. 
In 1869 he received more bones and corrected its stance, making it more upright. Newton moved his focus to the Réunion solitaire instead. The remaining bones not sold to Owen or Newton were auctioned off or donated to museums. In 1889, Théodor Sauzier was commissioned to explore the "historical souvenirs" of Mauritius and find more dodo remains in the Mare aux Songes. He was successful, and also found remains of other extinct species. In 2005, after a hundred years of neglect, a part of the Mare aux Songes swamp was excavated by an international team of researchers (International Dodo Research Project). To prevent malaria, the British had covered the swamp with hard core during their rule over Mauritius, which had to be removed. Many remains were found, including bones of at least 17 dodos in various stages of maturity (though no juveniles), and several bones obviously from the skeleton of one individual bird, which had been preserved in their natural position. These findings were made public in December 2005 in the Naturalis museum in Leiden. 63% of the fossils found in the swamp belonged to turtles of the extinct genus "Cylindraspis", and 7.1% belonged to dodos, which had been deposited over the course of several centuries, about 4,000 years ago. Subsequent excavations suggested that dodos and other animals became mired in the Mare aux Songes while trying to reach water during a long period of severe drought about 4,200 years ago. Furthermore, cyanobacteria thrived in the conditions created by the excrements of animals gathered around the swamp, which died of intoxication, dehydration, trampling, and miring. Though many small skeletal elements were found during the recent excavations of the swamp, few were found during the 19th century, probably owing to the employment of less refined methods when collecting. Louis Etienne Thirioux, an amateur naturalist at Port Louis, also found many dodo remains around 1900 from several locations. They included the first articulated specimen, which is the first subfossil dodo skeleton found outside the Mare aux Songes, and the only remains of a juvenile specimen, a now lost tarsometatarsus. The former specimen was found in 1904 in a cave near Le Pouce mountain, and is the only known complete skeleton of an individual dodo. Thirioux donated the specimen to the Museum Desjardins (now Natural History Museum at Mauritius Institute). Thirioux's heirs sold a second mounted composite skeleton (composed of at least two skeletons, with a mainly reconstructed skull) to the Durban Museum of Natural Science in South Africa in 1918. Together, these two skeletons represent the most completely known dodo remains, including bone elements previously unrecorded (such as knee-caps and various wing bones). Though some contemporary writers noted the importance of Thirioux's specimens, they were not scientifically studied, and were largely forgotten until 2011, when sought out by a group of researchers. The mounted skeletons were laser scanned, from which 3-D models were reconstructed, which became the basis of a 2016 monograph about the osteology of the dodo. In 2006, explorers discovered a complete skeleton of a dodo in a lava cave in Mauritius. This was only the second associated skeleton of an individual specimen ever found, and the only one in recent times. Worldwide, 26 museums have significant holdings of dodo material, almost all found in the Mare aux Songes. 
The Natural History Museum, American Museum of Natural History, Cambridge University Museum of Zoology, the Senckenberg Museum, and others have almost complete skeletons, assembled from the dissociated subfossil remains of several individuals. In 2011, a wooden box containing dodo bones from the Edwardian era was rediscovered at the Grant Museum at University College London during preparations for a move. They had been stored with crocodile bones until then. The supposed "white dodo" (or "solitaire") of Réunion is now considered an erroneous conjecture based on contemporary reports of the Réunion ibis and 17th-century paintings of white, dodo-like birds by Pieter Withoos and Pieter Holsteyn that surfaced in the 19th century. The confusion began when Willem Ysbrandtszoon Bontekoe, who visited Réunion around 1619, mentioned fat, flightless birds that he referred to as "Dod-eersen" in his journal, though without mentioning their colouration. When the journal was published in 1646, it was accompanied by an engraving of a dodo from Savery's "Crocker Art Gallery sketch". A white, stocky, and flightless bird was first mentioned as part of the Réunion fauna by Chief Officer J. Tatton in 1625. Sporadic mentions were subsequently made by Sieur Dubois and other contemporary writers. Baron Edmond de Sélys Longchamps coined the name "Raphus solitarius" for these birds in 1848, as he believed the accounts referred to a species of dodo. When 17th-century paintings of white dodos were discovered by 19th-century naturalists, it was assumed they depicted these birds. Anthonie Cornelis Oudemans suggested that the discrepancy between the paintings and the old descriptions was that the paintings showed females, and that the species was therefore sexually dimorphic. Some authors also believed the birds described were of a species similar to the Rodrigues solitaire, as it was referred to by the same name, or even that there were white species of both dodo and solitaire on the island. The Pieter Withoos painting, which was discovered first, appears to be based on an earlier painting by Pieter Holsteyn, three versions of which are known to have existed. According to Hume, Cheke, and Valledor de Lozoya, it appears that all depictions of white dodos were based on Roelant Savery's 1611 painting "Landscape with Orpheus and the animals", or on copies of it. The painting shows a whitish specimen and was apparently based on a stuffed specimen then in Prague; a "walghvogel" described as having a "dirty off-white colouring" was mentioned in an inventory of specimens in the Prague collection of the Holy Roman Emperor Rudolf II, to whom Savery was contracted at the time (1607–1611). Savery's several later images all show greyish birds, possibly because he had by then seen another specimen. Cheke and Hume believe the painted specimen was white, owing to albinism. Valledor de Lozoya has instead suggested that the light plumage was a juvenile trait, a result of bleaching of old taxidermy specimens, or simply artistic license. In 1987, scientists described fossils of a recently extinct species of ibis from Réunion with a relatively short beak, "Borbonibis latipes", before a connection to the solitaire reports had been made. Cheke suggested to one of the authors, Francois Moutou, that the fossils may have been of the Réunion solitaire, and this suggestion was published in 1995. The ibis was reassigned to the genus "Threskiornis", now combined with the specific epithet "solitarius" from the binomial "R. solitarius". 
Birds of this genus are also white and black with slender beaks, fitting the old descriptions of the Réunion solitaire. No fossil remains of dodo-like birds have ever been found on the island. The dodo's significance as one of the best-known extinct animals and its singular appearance led to its use in literature and popular culture as a symbol of an outdated concept or object, as in the expression "dead as a dodo," which has come to mean unquestionably dead or obsolete. Similarly, the phrase "to go the way of the dodo" means to become extinct or obsolete, to fall out of common usage or practice, or to become a thing of the past. "Dodo" is also a slang term for a stupid, dull-witted person, as it was supposedly stupid and easily caught. The dodo appears frequently in works of popular fiction, and even before its extinction, it was featured in European literature, as symbol for exotic lands, and of gluttony, due to its apparent fatness. In 1865, the same year that George Clark started to publish reports about excavated dodo fossils, the newly vindicated bird was featured as a character in Lewis Carroll's "Alice's Adventures in Wonderland". It is thought that he included the dodo because he identified with it and had adopted the name as a nickname for himself because of his stammer, which made him accidentally introduce himself as "Do-do-dodgson", his legal surname. Carroll and the girl who served as inspiration for Alice, Alice Liddell, had enjoyed visiting the Oxford museum to see the dodo remains there. The book's popularity made the dodo a well-known icon of extinction. The dodo is used as a mascot for many kinds of products, especially in Mauritius. It appears as a supporter on the coat of arms of Mauritius, on Mauritius coins, is used as a watermark on all Mauritian rupee banknotes, and features as the background of the Mauritian immigration form. A smiling dodo is the symbol of the Brasseries de Bourbon, a popular brewer on Réunion, whose emblem displays the white species once thought to have lived there. The dodo is used to promote the protection of endangered species by environmental organisations, such as the Durrell Wildlife Conservation Trust and the Durrell Wildlife Park. The Center for Biological Diversity gives an annual 'Rubber Dodo Award', to "those who have done the most to destroy wild places, species and biological diversity". In 2011, the nephiline spider "Nephilengys dodo", which inhabits the same woods as the dodo once did, was named after the bird to raise awareness of the urgent need for protection of the Mauritius biota. Two species of ant from Mauritius have been named after the dodo: "Pseudolasius dodo" in 1946 and "Pheidole dodo" in 2013. A species of isopod from a coral reef off Réunion was named "Hansenium dodo" in 1991. The name dodo has been used by scientists naming genetic elements, honoring the dodo's flightless nature. A fruitfly gene within a region of a chromosome required for flying ability was named "dodo". In addition, a defective transposable element family from "Phytophthora infestans" was named "DodoPi" as it contained mutations that eliminated the element's ability to jump to new locations in a chromosome. In 2009, a previously unpublished 17th-century Dutch illustration of a dodo went for sale at Christie's and was expected to sell for £6,000. It is unknown whether the illustration was based on a specimen or on a previous image. It sold for £44,450. 
The poet Hilaire Belloc included the following poem about the dodo in his "Bad Child's Book of Beasts" from 1896: Dubnium Dubnium is a synthetic chemical element with symbol Db and atomic number 105. Dubnium is highly radioactive: the most stable known isotope, dubnium-268, has a half-life of about 28 hours. This greatly limits the extent of research on dubnium. Dubnium does not occur naturally on Earth and is produced artificially. The Soviet Joint Institute for Nuclear Research (JINR) claimed the first discovery of the element in 1968, followed by the American Lawrence Berkeley Laboratory in 1970. Both teams proposed their names for the new element and used them without formal approval. The long-standing dispute was resolved in 1993 by an official investigation of the discovery claims by the IUPAC/IUPAP Joint Working Party, resulting in credit for the discovery being officially shared between both teams. The element was formally named "dubnium" in 1997 after the town of Dubna, the site of the JINR. Theoretical research establishes dubnium as a member of group 5 in the 6d series of transition metals, placing it under vanadium, niobium, and tantalum. Dubnium should share most properties, such as its valence electron configuration and having a dominant +5 oxidation state, with the other group 5 elements, with a few anomalies due to relativistic effects. A limited investigation of dubnium chemistry has confirmed this. Solution chemistry experiments have revealed that dubnium often behaves more like niobium rather than tantalum, breaking periodic trends. Uranium, element 92, is the heaviest element to occur in significant quantity in nature; heavier elements can only be produced practically by synthesis. The first synthesis of a new element—neptunium, element 93—was achieved in 1940 by a team of researchers in the United States. In the following years, American scientists synthesized the elements up to mendelevium, element 101, which was synthesized in 1955. From element 102, the priority of discoveries was contested between American and Soviet physicists. Their rivalry resulted in a race for new elements and credit for their discoveries, later named the Transfermium Wars. The first report of the discovery of element 105 came from the Joint Institute for Nuclear Research (JINR) in Dubna, Moscow Oblast, Russian SFSR, Soviet Union, in April 1968. The scientists bombarded Am with a beam of Ne ions, and reported 9.4 MeV (with a half-life of 0.1–3 seconds) and 9.7 MeV ("t" > 0.05 s) alpha activities followed by alpha activities similar to those of either 103 or 103. Based on prior theoretical predictions, the two activity lines were assigned to 105 and 105, respectively. After observing the alpha decays of element 105, the researchers aimed to observe spontaneous fission (SF) of the element and study the resulting fission fragments. They published a paper in February 1970, reporting multiple examples of two such activities, with half-lives of 14 ms and . They assigned the former activity to Am and ascribed the latter activity to an isotope of element 105. They suggested that it was unlikely that this activity could come from a transfer reaction instead of element 105, because the yield ratio for this reaction was significantly lower than that of the Am-producing transfer reaction, in accordance with theoretical predictions. 
To establish that this activity was not from a (Ne,"x"n) reaction, the researchers bombarded a Am target with O ions; reactions producing 103 and 103 showed very little SF activity (matching the established data), and the reaction producing heavier 103 and 103 produced no SF activity at all, in line with theoretical data. The researchers concluded that the activities observed came from SF of element 105. In April 1970, a team at Lawrence Berkeley Laboratory (LBL), in Berkeley, California, United States, claimed to have synthesized element 105 by bombarding californium-249 with nitrogen-15 ions, with an alpha activity of 9.1 MeV. To ensure this activity was not from a different reaction, the team attempted other reactions: bombarding Cf with N, Pb with N, and Hg with N. They stated no such activity was found in those reactions. The characteristics of the daughter nuclei matched those of 103, implying that the parent nuclei were of 105. These results did not confirm the JINR findings regarding the 9.4 MeV or 9.7 MeV alpha decay of 105, leaving only 105 as a possibly produced isotope. JINR then attempted another experiment to create element 105, published in a report in May 1970. They claimed that they had synthesized more nuclei of element 105 and that the experiment confirmed their previous work. According to the paper, the isotope produced by JINR was probably 105, or possibly 105. This report included an initial chemical examination: the thermal gradient version of the gas-chromatography method was applied to demonstrate that the chloride of what had formed from the SF activity nearly matched that of niobium pentachloride, rather than hafnium tetrachloride. The team identified a 2.2-second SF activity in a volatile chloride portraying eka-tantalum properties, and inferred that the source of the SF activity must have been element 105. In June 1970, JINR made improvements on their first experiment, using a purer target and reducing the intensity of transfer reactions by installing a collimator before the catcher. This time, they were able to find 9.1 MeV alpha activities with daughter isotopes identifiable as either 103 or 103, implying that the original isotope was either 105 or 105. JINR did not propose a name after their first report claiming synthesis of element 105, which would have been the usual practice. This led LBL to believe that JINR did not have enough experimental data to back their claim. After collecting more data, JINR proposed the name "nielsbohrium" (Ns) in honor of the Danish nuclear physicist Niels Bohr, a founder of the theories of atomic structure and quantum theory. When LBL first announced their synthesis of element 105, they proposed that the new element be named "hahnium" (Ha) after the German chemist Otto Hahn, the "father of nuclear chemistry", thus creating an element naming controversy. In the early 1970s, both teams reported synthesis of the next element, element 106, but did not suggest names. JINR suggested establishing an international committee to clarify the discovery criteria. This proposal was accepted in 1974 and a neutral joint group formed. Neither team showed interest in resolving the conflict through a third party, so the leading scientists of LBL—Albert Ghiorso and Glenn Seaborg—traveled to Dubna in 1975 and met with the leading scientists of JINR—Georgy Flerov, Yuri Oganessian, and others— to try to resolve the conflict internally and render the neutral joint group unnecessary; after two hours of discussions, this failed. 
The joint neutral group never assembled to assess the claims and the conflict remained unsolved. In 1979, IUPAC suggested systematic element names to be used as placeholders until permanent names were established; under it, element 105 would be "unnilpentium", from the Latin roots "un-" and "nil-" and the Greek root "pent-" (meaning "one", "zero", and "five", respectively, the digits of the atomic number). Both teams ignored it as they did not wish to weaken their outstanding claims. In 1981, the Gesellschaft für Schwerionenforschung (GSI; "Society for Heavy Ion Research") in Darmstadt, West Germany, claimed synthesis of element 107; their report came out five years after the first report from JINR but with greater precision, making a more solid claim on discovery. GSI acknowledged JINR's efforts by suggesting the name "nielsbohrium" for the new element. JINR did not suggest a new name for element 105, stating it was more important to determine its discoverers first. In 1985, the International Union of Pure and Applied Chemistry (IUPAC) and the International Union of Pure and Applied Physics (IUPAP) formed a Joint Working Party (JWP) to assess discoveries and establish final names for the controversial elements. The party held meetings with delegates from the three competing institutes; in 1990, they established criteria on recognition of an element, and in 1991, they finished the work on assessing discoveries and disbanded. These results were published in 1993. According to the report, the first definitely successful experiment was the April 1970 LBL experiment, closely followed by the June 1970 JINR experiment, so credit for the discovery of the element should be shared between the two teams. LBL said that the input from JINR was overrated in the review. They claimed JINR was only able to unambiguously demonstrate the synthesis of element 105 a year after they did. JINR and GSI endorsed the report. In 1994, IUPAC published a recommendation on naming the disputed elements. For element 105, they proposed "joliotium" (Jl) after the French physicist Frédéric Joliot-Curie, a contributor to the development of nuclear physics and chemistry; this name was originally proposed by the Soviet team for element 102, which by then had long been called nobelium. This recommendation was criticized by the American scientists for several reasons. Firstly, their suggestions were scrambled: the names "rutherfordium" and "hahnium", originally suggested by Berkeley for elements 104 and 105, were respectively reassigned to elements 106 and 108. Secondly, elements 104 and 105 were given names favored by JINR, despite earlier recognition of LBL as an equal co-discoverer for both of them. Thirdly and most importantly, IUPAC rejected the name "seaborgium" for element 106, having just approved a rule that an element could not be named after a living person, even though the 1993 report had given the LBL team the sole credit for its discovery. In 1995, IUPAC abandoned the controversial rule and established a committee of national representatives aimed at finding a compromise. They suggested "seaborgium" for element 106 in exchange for the removal of all the other American proposals, except for the established name "lawrencium" for element 103. The equally entrenched name "nobelium" for element 102 was replaced by "flerovium" after Georgy Flerov, following the recognition by the 1993 report that that element had been first synthesized in Dubna. This was rejected by American scientists and the decision was retracted. 
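The systematic placeholder names mentioned above follow a purely mechanical digit-to-root scheme, so "unnilpentium" can be generated directly from the digits 1-0-5. The sketch below shows that scheme; the digit roots and the two letter-elision rules are the standard IUPAC ones, while the function itself is only an illustration.

```python
# Sketch of IUPAC's systematic placeholder naming, as used for "unnilpentium"
# (element 105) before "dubnium" was adopted.  Digit roots are the standard
# ones; the function is only illustrative.
ROOTS = ["nil", "un", "bi", "tri", "quad", "pent", "hex", "sept", "oct", "enn"]

def systematic_name(atomic_number: int) -> str:
    stem = "".join(ROOTS[int(digit)] for digit in str(atomic_number))
    name = stem + "ium"
    # Standard elisions: "enn" before "nil" drops an n; "bi"/"tri" before "ium" drops an i.
    return name.replace("nnn", "nn").replace("iium", "ium")

print(systematic_name(105))   # unnilpentium
print(systematic_name(106))   # unnilhexium
```

The provisional symbol is formed analogously from the initial letters of the roots, giving Unp for element 105.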
The name "flerovium" was later used for element 114. In 1996, IUPAC held another meeting, reconsidered all names in hand, and accepted another set of recommendations; it was approved and published in 1997. Element 105 was named "dubnium" (Db), after Dubna in Russia, the location of the JINR; the American suggestions were used for elements 102, 103, 104, and 106. The name "dubnium" had been used for element 104 in the previous IUPAC recommendation. The American scientists "reluctantly" approved this decision. IUPAC pointed out that the Berkeley laboratory had already been recognized several times, in the naming of berkelium, californium, and americium, and that the acceptance of the names "rutherfordium" and "seaborgium" for elements 104 and 106 should be offset by recognizing JINR's contributions to the discovery of elements 104, 105, and 106. Dubnium, having an atomic number of 105, is a superheavy element; like all elements with such high atomic numbers, it is very unstable. The longest-lasting known isotope of dubnium, Db, has a half-life of around a day. No stable isotopes have been seen, and a 2012 calculation by JINR suggested that the half-lives of all dubnium isotopes would not significantly exceed a day. Dubnium can only be obtained by artificial production. The short half-life of dubnium limits experimentation. This is exacerbated by the fact that the most stable isotopes are the hardest to synthesize. Elements with a lower atomic number have stable isotopes with a lower neutron-to-proton ratio than those with higher atomic number, meaning that the target and beam nuclei that could be employed to create the superheavy element have fewer neutrons than needed to form these most stable isotopes. (Different techniques based on rapid neutron capture and transfer reactions are being considered as of the 2010s, but those based on the collision of a large and small nucleus still dominate research in the area.) Only a few atoms of Db can be produced in each experiment, and thus the measured lifetimes vary significantly during the process. During three experiments, 23 atoms were created in total, with a resulting half-life of . The second most stable isotope, Db, has been produced in even smaller quantities: three atoms in total, with lifetimes of 33.4 h, 1.3 h, and 1.6 h. These two are the heaviest isotopes of dubnium to date, and both were produced as a result of decay of the heavier nuclei Mc and Ts rather than directly, because the experiments that yielded them were originally designed in Dubna for Ca beams. For its mass, Ca has by far the greatest neutron excess of all practically stable nuclei, both quantitative and relative, which correspondingly helps synthesize superheavy nuclei with more neutrons, but this gain is compensated by the decreased likelihood of fusion for high atomic numbers. According to the periodic law, dubnium should belong to group 5, with vanadium, niobium, and tantalum. Several studies have investigated the properties of element 105 and found that they generally agreed with the predictions of periodic law. Significant deviations may nevertheless occur, due to relativistic effects, which dramatically change physical properties on both atomic and macroscopic scales. 
These properties have remained challenging to measure for several reasons: the difficulties of production of superheavy atoms, the low rates of production, which only allow for microscopic scales, requirements for a radiochemistry laboratory to test the atoms, short half-lives of those atoms, and the presence of many unwanted activities apart from those of synthesis of superheavy atoms. So far, studies have only been performed on single atoms. A direct relativistic effect is that as the atomic numbers of elements increase, the innermost electrons begin to revolve faster around the nucleus as a result of an increase of electromagnetic attraction between an electron and the nucleus. Similar effects have been found for the outermost s orbitals (and p ones, though in dubnium they are not occupied): for example, the 7s orbital contracts by 25% in size and is stabilized by 2.6 eV. A more indirect effect is that the contracted s and p orbitals shield the charge of the nucleus more effectively, leaving less for the outer d and f electrons, which therefore move in larger orbitals. Dubnium is greatly affected by this: unlike the previous group 5 members, its 7s electrons are slightly more difficult to extract than its 6d electrons. Another effect is the spin–orbit interaction, particularly spin–orbit splitting, which splits the 6d subshell—the azimuthal quantum number ℓ of a d shell is 2—into two subshells, with four of the ten orbitals having their ℓ lowered to 3/2 and six raised to 5/2. All ten energy levels are raised; four of them are lower than the other six. (The three 6d electrons normally occupy the lowest energy levels, 6d.) A singly ionized atom of dubnium (Db) should lose a 6d electron compared to a neutral atom; the doubly (Db) or triply (Db) ionized atoms of dubnium should eliminate 7s electrons, unlike its lighter homologs. Despite the changes, dubnium is still expected to have five valence electrons; 7p energy levels have not been shown to influence dubnium and its properties. As the 6d orbitals of dubnium are more destabilized than the 5d ones of tantalum, and Db is expected to have two 6d, rather than 7s, electrons remaining, the resulting +3 oxidation state is expected to be unstable and even rarer than that of tantalum. The ionization potential of dubnium in its maximum +5 oxidation state should be slightly lower than that of tantalum and the ionic radius of dubnium should increase compared to tantalum; this has a significant effect on dubnium's chemistry. Atoms of dubnium in the solid state should arrange themselves in a body-centered cubic configuration, like the previous group 5 elements. The predicted density of dubnium is 29 g/cm³. Computational chemistry is simplest in gas-phase chemistry, in which interactions between molecules may be ignored as negligible. Multiple authors have researched dubnium pentachloride; calculations show it to be consistent with the periodic laws by exhibiting the properties of a compound of a group 5 element. For example, the molecular orbital levels indicate that dubnium uses three 6d electron levels as expected. Compared to its tantalum analog, dubnium pentachloride is expected to show increased covalent character: a decrease in the effective charge on an atom and an increase in the overlap population (between orbitals of dubnium and chlorine). Calculations of solution chemistry indicate that the maximum oxidation state of dubnium, +5, will be more stable than those of niobium and tantalum and the +3 and +4 states will be less stable. 
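The size of the direct relativistic effect described above can be gauged with a textbook back-of-the-envelope estimate: for a hydrogen-like 1s electron the orbital speed is roughly Zα times the speed of light, so at Z = 105 the innermost electrons are strongly relativistic and their orbitals contract appreciably. The snippet below performs only this crude estimate; it is not a relativistic (Dirac-Fock) atomic-structure calculation, and the 25% and 2.6 eV figures quoted above come from such calculations, not from this formula.

```python
import math

# Back-of-the-envelope size of the direct relativistic effect at Z = 105.
# For a hydrogen-like 1s electron, v/c ~ Z * alpha; the Lorentz factor gamma
# then gives the relativistic mass increase, and the Bohr-model 1s radius
# shrinks by roughly a factor 1/gamma.  A crude estimate only.
ALPHA = 1 / 137.036            # fine-structure constant
Z = 105                        # dubnium

v_over_c = Z * ALPHA
gamma = 1 / math.sqrt(1 - v_over_c ** 2)
contraction = 1 - 1 / gamma    # fractional shrinkage of the 1s radius

print(f"v/c ≈ {v_over_c:.2f}, gamma ≈ {gamma:.2f}, 1s radius smaller by ≈ {contraction:.0%}")
```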
The tendency towards hydrolysis of cations with the highest oxidation state should continue to decrease within group 5 but is still expected to be quite rapid. Complexation of dubnium is expected to follow group 5 trends in its richness. Calculations for hydroxo-chlorido complexes have shown a reversal in the trends of complex formation and extraction of group 5 elements, with dubnium being more prone to form them than tantalum. Experimental results of the chemistry of dubnium date back to 1974 and 1976. JINR researchers used a thermochromatographic system and concluded that the volatility of dubnium bromide was less than that of niobium bromide and about the same as that of hafnium bromide. It is not certain that the detected fission products confirmed that the parent was indeed element 105. These results may imply that dubnium behaves more like hafnium than niobium. The next studies on the chemistry of dubnium were conducted in 1988, in Berkeley. They examined whether the most stable oxidation state of dubnium in aqueous solution was +5. Dubnium was fumed twice and washed with concentrated nitric acid; sorption of dubnium on glass cover slips was then compared with that of the group 5 elements niobium and tantalum and the group 4 elements zirconium and hafnium produced under similar conditions. The group 5 elements are known to sorb on glass surfaces; the group 4 elements do not. Dubnium was confirmed as a group 5 member. Surprisingly, the behavior on extraction from mixed nitric and hydrofluoric acid solution into methyl isobutyl ketone differed between dubnium, tantalum, and niobium. Dubnium did not extract and its behavior resembled niobium more closely than tantalum, indicating that complexing behavior could not be predicted purely from simple extrapolations of trends within a group in the periodic table. This prompted further exploration of the chemical behavior of complexes of dubnium. Various labs jointly conducted thousands of repetitive chromatographic experiments between 1988 and 1993. All group 5 elements and protactinium were extracted from concentrated hydrochloric acid; after mixing with lower concentrations of hydrogen chloride, small amounts of hydrogen fluoride were added to start selective re-extraction. Dubnium showed behavior different from that of tantalum but similar to that of niobium and its pseudohomolog protactinium at concentrations of hydrogen chloride below 12 moles per liter. This similarity to the two elements suggested that dubnium had formed the same type of anionic complex as niobium and protactinium. After extraction experiments of dubnium from hydrogen bromide into diisobutyl carbinol (2,6-dimethylheptan-4-ol), a specific extractant for protactinium, with subsequent elutions with the hydrogen chloride/hydrogen fluoride mix as well as hydrogen chloride, dubnium was found to be less prone to extraction than either protactinium or niobium. This was explained as an increasing tendency to form non-extractable complexes of multiple negative charges. Further experiments in 1992 confirmed the stability of the +5 state: Db(V) was shown to be extractable from cation-exchange columns with α-hydroxyisobutyrate, like the group 5 elements and protactinium; Db(III) and Db(IV) were not. In 1998 and 1999, new predictions suggested that dubnium would extract nearly as well as niobium and better than tantalum from halide solutions, which was later confirmed. The first isothermal gas chromatography experiments were performed in 1992 with 262Db (half-life 35 seconds). 
The volatilities for niobium and tantalum were similar within error limits, but dubnium appeared to be significantly less volatile. It was postulated that traces of oxygen in the system might have led to formation of an oxyhalide of dubnium, which was predicted to be less volatile than the corresponding halide. Later experiments in 1996 showed that group 5 chlorides were more volatile than the corresponding bromides, with the exception of tantalum, presumably due to formation of an oxychloride. Later volatility studies of chlorides of dubnium and niobium as a function of controlled partial pressures of oxygen showed that formation of oxychlorides and general volatility are dependent on concentrations of oxygen. The oxychlorides were shown to be less volatile than the chlorides. In 2004–05, researchers from Dubna and Livermore identified a new dubnium isotope, 268Db, as a fivefold alpha decay product of the newly created element 115. This new isotope proved to be long-lived enough to allow further chemical experimentation, with a half-life of over a day. In the 2004 experiment, a thin layer with dubnium was removed from the surface of the target and dissolved in aqua regia with tracers and a lanthanum carrier, from which various +3, +4, and +5 species were precipitated on adding ammonium hydroxide. The precipitate was washed and dissolved in hydrochloric acid, where it converted to nitrate form and was then dried on a film and counted. Mostly containing a +5 species, which was immediately assigned to dubnium, it also had a +4 species; based on that result, the team decided that additional chemical separation was needed. In 2005, the experiment was repeated, with the final product being hydroxide rather than nitrate precipitate, which was processed further in both Livermore (based on reverse phase chromatography) and Dubna (based on anion exchange chromatography). The +5 species was effectively isolated; dubnium appeared three times in tantalum-only fractions and never in niobium-only fractions. It was noted that these experiments were insufficient to draw conclusions about the general chemical profile of dubnium. In 2009, at the JAEA tandem accelerator in Japan, dubnium was processed in nitric and hydrofluoric acid solution, at concentrations where niobium and tantalum form different anionic fluoride complexes. Dubnium's behavior was close to that of niobium but not tantalum; it was thus deduced that dubnium formed the same type of complex as niobium. From the available information, it was concluded that dubnium often behaved like niobium, sometimes like protactinium, but rarely like tantalum. Dmitri Shostakovich Dmitri Dmitriyevich Shostakovich (25 September 1906 – 9 August 1975) was a Russian composer and pianist. He is regarded as one of the major composers of the 20th century. Shostakovich achieved fame in the Soviet Union under the patronage of Soviet chief of staff Mikhail Tukhachevsky, but later had a complex and difficult relationship with the government. Nevertheless, he received accolades and state awards and served in the Supreme Soviet of the RSFSR (1947–1962) and the Supreme Soviet of the Soviet Union (from 1962 until his death). A polystylist, Shostakovich developed a hybrid voice, combining a variety of different musical techniques into his works. His music is characterized by sharp contrasts, elements of the grotesque, and ambivalent tonality; the composer was also heavily influenced by the neo-classical style pioneered by Igor Stravinsky, and (especially in his symphonies) by the late Romanticism associated with Gustav Mahler. Shostakovich's orchestral works include 15 symphonies and six concerti. 
His chamber output includes 15 string quartets, a piano quintet, two piano trios, and two pieces for string octet. His solo piano works include two sonatas, an early set of preludes, and a later set of 24 preludes and fugues. Other works include three operas, several song cycles, ballets, and a substantial quantity of film music; especially well known is "The Second Waltz", Op. 99, music to the film "The First Echelon" (1955–1956), as well as the suites of music composed for "The Gadfly". Born at Podolskaya street in Saint Petersburg, Russia, Shostakovich was the second of three children of Dmitri Boleslavovich Shostakovich and Sofiya Vasilievna Kokoulina. Shostakovich's paternal grandfather, originally surnamed Szostakowicz, was of Polish Roman Catholic descent (his family roots trace to the region of the town of Vileyka in today's Belarus), but his immediate forebears came from Siberia. A Polish revolutionary in the January Uprising of 1863–4, Bolesław Szostakowicz would be exiled to Narym (near Tomsk) in 1866 in the crackdown that followed Dmitri Karakozov's assassination attempt on Tsar Alexander II. When his term of exile ended, Szostakowicz decided to remain in Siberia. He eventually became a successful banker in Irkutsk and raised a large family. His son, Dmitri Boleslavovich Shostakovich, the composer's father, was born in exile in Narim in 1875 and studied physics and mathematics in Saint Petersburg University, graduating in 1899. He then went to work as an engineer under Dmitri Mendeleev at the Bureau of Weights and Measures in Saint Petersburg. In 1903 he married another Siberian transplant to the capital, Sofiya Vasilievna Kokoulina, one of six children born to a Russian Siberian native. Their son, Dmitri Dmitriyevich Shostakovich, displayed significant musical talent after he began piano lessons with his mother at the age of nine. On several occasions he displayed a remarkable ability to remember what his mother had played at the previous lesson, and would get "caught in the act" of playing the previous lesson's music while pretending to read different music placed in front of him. In 1918 he wrote a funeral march in memory of two leaders of the Kadet party, murdered by Bolshevik sailors. In 1919, at the age of thirteen, he was allowed to enter the Petrograd Conservatory, then headed by Alexander Glazunov, who monitored Shostakovich's progress closely and promoted him. Shostakovich studied piano with Leonid Nikolayev after a year in the class of Elena Rozanova, composition with Maximilian Steinberg, and counterpoint and fugue with Nikolay Sokolov, with whom he became friends. Shostakovich also attended Alexander Ossovsky's history of music classes. Steinberg tried to guide Shostakovich in the path of the great Russian composers, but was disappointed to see him 'wasting' his talent and imitating Igor Stravinsky and Sergei Prokofiev. Shostakovich also suffered for his perceived lack of political zeal, and initially failed his exam in Marxist methodology in 1926. His first major musical achievement was the First Symphony (premiered 1926), written as his graduation piece at the age of nineteen. It was this work which brought him to the attention of Mikhail Tukhachevsky, who helped Shostakovich to find accommodation and work in Moscow, and sent a driver around in "a very stylish automobile" to take him to a concert. 
After graduation, Shostakovich initially embarked on a dual career as concert pianist and composer, but his dry style of playing was often unappreciated (his American biographer, Laurel Fay, comments on his "emotional restraint" and "riveting rhythmic drive"). He nevertheless won an "honorable mention" at the First International Chopin Piano Competition in Warsaw in 1927. He attributed the disappointment at the competition to suffering from appendicitis and the jury being all-Polish. He later had his appendix removed in April 1927. After the competition Shostakovich met the conductor Bruno Walter, who was so impressed by the composer's First Symphony that he conducted it at its Berlin premiere later that year. Leopold Stokowski was equally impressed and gave the work its U.S. premiere the following year in Philadelphia and also made the work's first recording. Thereafter, Shostakovich concentrated on composition and soon limited his performances primarily to those of his own works. In 1927 he wrote his Second Symphony (subtitled "To October"), a patriotic piece with a great pro-Soviet choral finale. Owing to its experimental nature, as with the subsequent Third Symphony, the pieces were not critically acclaimed with the enthusiasm granted to the First. The year 1927 also marked the beginning of Shostakovich's relationship with Ivan Sollertinsky, who remained his closest friend until the latter's death in 1944. Sollertinsky introduced the composer to the music of Gustav Mahler, which had a strong influence on his music from the Fourth Symphony onwards. While writing the Second Symphony, Shostakovich also began work on his satirical opera "The Nose", based on the story by Nikolai Gogol. In June 1929, against the composer's own wishes, the opera was given a concert performance; it was ferociously attacked by the Russian Association of Proletarian Musicians (RAPM). Its stage premiere on 18 January 1930 opened to generally poor reviews and widespread incomprehension amongst musicians. In the late 1920s and early 1930s, Shostakovich worked at TRAM, a proletarian youth theatre. Although he did little work in this post, it shielded him from ideological attack. Much of this period was spent writing his opera, "Lady Macbeth of the Mtsensk District", which was first performed in 1934. It was immediately successful, on both popular and official levels. It was described as "the result of the general success of Socialist construction, of the correct policy of the Party", and as an opera that "could have been written only by a Soviet composer brought up in the best tradition of Soviet culture". Shostakovich married his first wife, Nina Varzar, in 1932. Initial difficulties led to a divorce in 1935, but the couple soon remarried when Nina became pregnant with their first child, Galina. On 17 January 1936, Joseph Stalin paid a rare visit to the opera for a performance of a new work, "Quiet Flows the Don", based on the novel by Mikhail Sholokhov, by the little known composer Ivan Dzerzhinsky, who was called to Stalin's box at the end of the performance and told that his work had "considerable ideological-political value". On 26 January, Stalin revisited the opera, accompanied by Vyacheslav Molotov, Andrei Zhdanov and Anastas Mikoyan, to hear "Lady Macbeth of the Mtsensk District." He and his entourage left without speaking to anyone. Shostakovich had been forewarned by a friend that he should postpone a planned concert tour in Arkhangelsk to be present at that performance. 
Eyewitness accounts testify that Shostakovich was "white as a sheet" when he went to take his bow after the third act. In letters written to his friend Ivan Sollertinsky, Shostakovich recounted the horror with which he watched as Stalin shuddered every time the brass and percussion played too loudly. Equally horrifying was the way Stalin and his companions laughed at the love-making scene between Sergei and Katerina. The next day, Shostakovich left for Arkhangelsk, and was there when he heard on 28 January that "Pravda" had published a tirade entitled "Muddle Instead of Music", complaining that the opera was a "deliberately dissonant, muddled stream of sounds...(that) quacks, hoots, pants and gasps." This was the signal for a nationwide campaign, during which even Soviet music critics who had praised the opera were forced to recant in print, saying they "failed to detect the shortcomings of "Lady Macbeth" as pointed out by "Pravda"". There was, however, resistance from those who admired Shostakovich, including Sollertinsky, who turned up at a composers' meeting in Leningrad called to denounce the opera and praised it instead. Two other speakers supported him. When Shostakovich returned to Leningrad, he had a telephone call from the commander of the Leningrad Military District, who had been asked by Marshal Tukhachevsky to make sure that he was all right. When the writer Isaac Babel was under arrest four years later, he told his interrogators that "it was common ground for us to proclaim the genius of the slighted Shostakovich." On 6 February, Shostakovich was again attacked in "Pravda", this time for his light comic ballet, "The Limpid Stream", which was denounced because "it jangles and expresses nothing" and did not give an accurate picture of peasant life on the collective farm. Fearful that he was about to be arrested, Shostakovich secured an appointment with the Chairman of the USSR State Committee on Culture, Platon Kerzhentsev, who reported to Stalin and Molotov that he had instructed the composer to "reject formalist errors and in his art attain something that could be understood by the broad masses", and that Shostakovich had admitted being in the wrong and had asked for a meeting with Stalin, which was not granted. As a result of this campaign, commissions began to fall off, and Shostakovich's income fell by about three-quarters. His Fourth Symphony was due to receive its premiere on 11 December 1936, but official intervention prevented it, and the symphony was not performed for 25 years, until 30 December 1961. "Lady Macbeth of the Mtsensk District" was also suppressed. A bowdlerised version was eventually performed under a new title, "Katerina Izmailova", on 8 January 1963. The anti-Shostakovich campaign also served as a signal to artists working in other fields, including art, architecture, the theatre and cinema, with the writer Mikhail Bulgakov, the director Sergei Eisenstein, and the theatre director Vsevolod Meyerhold among the prominent targets. More widely, 1936 marked the beginning of the Great Terror, in which many of the composer's friends and relatives were imprisoned or killed. 
These included Marshal Tukhachevsky (shot months after his arrest); his brother-in-law Vsevolod Frederiks (a distinguished physicist, who was eventually released but died before he got home); his close friend Nikolai Zhilyayev (a musicologist who had taught Tukhachevsky; shot shortly after his arrest); his mother-in-law, the astronomer Sofiya Mikhaylovna Varzar (sent to a camp in Karaganda); his friend the Marxist writer Galina Serebryakova (20 years in camps); his uncle Maxim Kostrykin (died); and his colleagues Boris Kornilov and Adrian Piotrovsky (executed). His only consolation in this period was the birth of his daughter Galina in 1936; his son Maxim was born two years later. The publication of the "Pravda" editorials coincided with the composition of Shostakovich's Fourth Symphony. The work marked a great shift in style for the composer, owing to the substantial influence of Gustav Mahler and a number of Western-style elements. The symphony gave Shostakovich compositional trouble, as he attempted to reform his style into a new idiom. The composer was well into the work when the fatal articles appeared. Despite this, Shostakovich continued to compose the symphony and planned a premiere at the end of 1936. Rehearsals began that December, but after a number of rehearsals Shostakovich, for reasons still debated today, decided to withdraw the symphony from the public. A number of his friends and colleagues, such as Isaak Glikman, have suggested that it was in fact an official ban which Shostakovich was persuaded to present as a voluntary withdrawal. Whatever the case, it seems possible that this action saved the composer's life: during this time Shostakovich feared for himself and his family. Yet Shostakovich did not repudiate the work; it retained its designation as his Fourth Symphony. A piano reduction was published in 1946, and the work was finally premiered in 1961, well after Stalin's death. During 1936 and 1937, in order to maintain as low a profile as possible between the Fourth and Fifth symphonies, Shostakovich mainly composed film music, a genre favored by Stalin and lacking in dangerous personal expression. The composer's response to his denunciation was the Fifth Symphony of 1937, which was musically more conservative than his earlier works. Premiered on 21 November 1937 in Leningrad, it was a phenomenal success. The Fifth drove many to tears and welling emotions. Later, Shostakovich's purported memoir, "Testimony", stated: "I'll never believe that a man who understood nothing could feel the Fifth Symphony. Of course they understood, they understood what was happening around them and they understood what the Fifth was about." The success put Shostakovich in good standing once again. Music critics and the authorities alike, including those who had earlier accused Shostakovich of formalism, claimed that he had learned from his mistakes and had become a true Soviet artist. In a newspaper article published under Shostakovich's name, the Fifth Symphony was characterized as "A Soviet artist's creative response to just criticism." The composer Dmitry Kabalevsky, who had been among those who disassociated themselves from Shostakovich when the "Pravda" article was published, praised the Fifth Symphony and congratulated Shostakovich for "not having given in to the seductive temptations of his previous 'erroneous' ways." It was also at this time that Shostakovich composed the first of his string quartets. 
His chamber works allowed him to experiment and express ideas which would have been unacceptable in his more public symphonic pieces. In September 1937 he began to teach composition at the Leningrad Conservatory, which provided some financial security but interfered with his own creative work. In 1939, before Soviet forces attempted to invade Finland, the Party Secretary of Leningrad Andrei Zhdanov commissioned a celebratory piece from Shostakovich, entitled "Suite on Finnish Themes", to be performed as the marching bands of the Red Army paraded through the Finnish capital, Helsinki. The Winter War was a bitter experience for the Red Army, the parade never happened, and Shostakovich would never lay claim to the authorship of this work. It was not performed until 2001. After the outbreak of war between the Soviet Union and Germany in 1941, Shostakovich initially remained in Leningrad. He tried to enlist for the military but was turned away because of his poor eyesight. To compensate, he became a volunteer for the Leningrad Conservatory's firefighter brigade and delivered a radio broadcast to the Soviet people. The photograph for which he posed was published in newspapers throughout the country. His greatest and most famous wartime contribution was the Seventh Symphony. The composer wrote the first three movements in Leningrad and completed the work in Kuibyshev (now Samara), where he and his family had been evacuated. It remains unclear whether Shostakovich really conceived the idea of the symphony with the siege of Leningrad in mind. It was officially claimed as a representation of the people of Leningrad's brave resistance to the German invaders and an authentic piece of patriotic art at a time when morale needed boosting. The symphony was premiered by the Bolshoi Theatre orchestra in Kuibyshev and was soon performed abroad in London and the United States. The most compelling performance was the Leningrad premiere by the Radio Orchestra in the besieged city. The orchestra had only fourteen musicians left, so the conductor Karl Eliasberg had to recruit anyone who could play a musical instrument to perform the symphony. The family moved to Moscow in spring 1943. At the time of the Eighth Symphony's premiere, the tide had turned for the Red Army. As a consequence, the public, and most importantly the authorities, wanted another triumphant piece from the composer. Instead, they got the Eighth Symphony, perhaps the ultimate in sombre and violent expression within Shostakovich's output. In order to preserve the image of Shostakovich (a vital bridge to the people of the Union and to the West), the government assigned the name "Stalingrad" to the symphony, giving it the appearance of a mourning of the dead in the bloody Battle of Stalingrad. However, the symphony did not escape criticism. Its composer is reported to have said: "When the Eighth was performed, it was openly declared counter-revolutionary and anti-Soviet. They said, 'Why did Shostakovich write an optimistic symphony at the beginning of the war and a tragic one now? At the beginning we were retreating and now we're attacking, destroying the Fascists. And Shostakovich is acting tragic, that means he's on the side of the fascists.'" The work was unofficially but effectively banned until 1956. The Ninth Symphony (1945), in contrast, was much lighter in tone. Gavriil Popov wrote that it was "splendid in its joie de vivre, gaiety, brilliance, and pungency!" By 1946, however, it was the subject of criticism. 
Israel Nestyev asked whether it was the right time for "a light and amusing interlude between Shostakovich's significant creations, a temporary rejection of great, serious problems for the sake of playful, filigree-trimmed trifles." The "New York World-Telegram" of 27 July 1946 was similarly dismissive: "The Russian composer should not have expressed his feelings about the defeat of Nazism in such a childish manner". Shostakovich continued to compose chamber music, notably his Second Piano Trio (Op. 67), dedicated to the memory of Sollertinsky, with a bittersweet, Jewish-themed "totentanz" finale. In 1948, Shostakovich, along with many other composers, was again denounced for formalism in the Zhdanov decree. Andrei Zhdanov, Chairman of the RSFSR Supreme Soviet, accused Shostakovich and other composers (such as Sergei Prokofiev and Aram Khachaturian) of writing inappropriate and formalist music. This was part of an ongoing anti-formalism campaign intended to root out all Western compositional influence as well as any perceived "non-Russian" output. The conference resulted in the publication of the Central Committee’s Decree "On V. Muradeli’s opera "The Great Friendship"," which was targeted towards all Soviet composers and demanded that they write only "proletarian" music, or music for the masses. The accused composers, including Shostakovich, were summoned to make public apologies in front of the committee. Most of Shostakovich's works were banned, and his family had privileges withdrawn. Yuri Lyubimov says that at this time "he waited for his arrest at night out on the landing by the lift, so that at least his family wouldn't be disturbed." The consequences of the decree for composers were harsh. Shostakovich was among those who were dismissed from the Conservatory altogether. For Shostakovich, the loss of money was perhaps the largest blow. Others still in the Conservatory experienced an atmosphere that was thick with suspicion. No one wanted his work to be understood as formalist, so many resorted to accusing their colleagues of writing or performing anti-proletarian music. In the next few years, Shostakovich composed three categories of work: film music to pay the rent, official works aimed at securing official rehabilitation, and serious works "for the desk drawer". The latter included the Violin Concerto No. 1 and the song cycle "From Jewish Folk Poetry". The cycle was written at a time when the postwar anti-Semitic campaign was already under way, with widespread arrests, including that of I. Dobrushin and Yiditsky, the compilers of the book from which Shostakovich took his texts. The restrictions on Shostakovich's music and living arrangements were eased in 1949, when Stalin decided that the Soviets needed to send artistic representatives to the Cultural and Scientific Congress for World Peace in New York City, and that Shostakovich should be among them. For Shostakovich, it was a humiliating experience culminating in a New York press conference where he was expected to read a prepared speech. Nicolas Nabokov, who was present in the audience, witnessed Shostakovich starting to read "in a nervous and shaky voice" before he had to break off "and the speech was continued in English by a suave radio baritone". Fully aware that Shostakovich was not free to speak his mind, Nabokov publicly asked the composer whether he supported the then recent denunciation of Igor Stravinsky's music in the Soviet Union. 
Shostakovich, who was a great admirer of Stravinsky and had been influenced by his music, had no alternative but to answer in the affirmative. Nabokov did not hesitate to publish that this demonstrated that Shostakovich was "not a free man, but an obedient tool of his government." Shostakovich never forgave Nabokov for this public humiliation. That same year Shostakovich was obliged to compose the cantata "Song of the Forests", which praised Stalin as the "great gardener". In 1951 the composer was made a deputy to the Supreme Soviet of the RSFSR. Stalin's death in 1953 was the biggest step towards Shostakovich's rehabilitation as a creative artist, which was marked by his Tenth Symphony. It features a number of musical quotations and codes (notably the DSCH and Elmira motifs, Elmira Nazirova being a pianist and composer who had studied under Shostakovich in the year prior to his dismissal from the Moscow Conservatoire), the meaning of which is still debated, whilst the savage second movement, according to "Testimony", is intended as a musical portrait of Stalin himself. The Symphony ranks alongside the Fifth and Seventh as one of his most popular works. 1953 also saw a stream of premieres of the "desk drawer" works. During the forties and fifties, Shostakovich had close relationships with two of his pupils: Galina Ustvolskaya and Elmira Nazirova. In the background to all this remained Shostakovich's first, open marriage to Nina Varzar until her death in 1954. He taught Ustvolskaya from 1937 to 1947. The nature of their relationship is far from clear: Mstislav Rostropovich described it as "tender". Ustvolskaya rejected a proposal of marriage from him after Nina's death. Shostakovich's daughter, Galina, recalled her father consulting her and Maxim about the possibility of Ustvolskaya becoming their stepmother. Ustvolskaya's friend, Viktor Suslin, said that she had been "deeply disappointed" in Shostakovich by the time of her graduation in 1947. The relationship with Nazirova seems to have been one-sided, expressed largely through his letters to her, and can be dated to around 1953 to 1956. He married his second wife, Komsomol activist Margarita Kainova, in 1956; the couple proved ill-matched, and divorced three years later. In 1954, Shostakovich wrote the Festive Overture, Op. 96, which was used as the theme music for the 1980 Summer Olympics. In addition, his "Theme from the film "Pirogov", Opus 76a: Finale" was played as the cauldron was lit at the 2004 Summer Olympics in Athens, Greece. In 1959, Shostakovich appeared on stage in Moscow at the end of a concert performance of his Fifth Symphony, congratulating Leonard Bernstein and the New York Philharmonic Orchestra for their performance (part of a concert tour of the Soviet Union). Later that year, Bernstein and the New York Philharmonic recorded the symphony in Boston for Columbia Records. The year 1960 marked another turning point in Shostakovich's life: he joined the Communist Party. The government wanted to appoint him General Secretary of the Composers' Union, but in order to hold that position he was required to attain Party membership. It was understood that Nikita Khrushchev, the First Secretary of the Communist Party from 1953 to 1964, was looking for support from the leading ranks of the intelligentsia in an effort to create a better relationship with the Soviet Union's artists. This event has been interpreted variously as a show of commitment, a mark of cowardice, the result of political pressure, or as his free decision. 
On the one hand, the apparat was undoubtedly less repressive than it had been before Stalin's death. On the other, his son recalled that the event reduced Shostakovich to tears, and he later told his wife Irina that he had been blackmailed. Lev Lebedinsky has said that the composer was suicidal. Once he joined the Party, several articles denouncing individualism in music were published in "Pravda" under his name, though he did not actually write them. In addition, in joining the Party, Shostakovich committed himself to finally writing the homage to Lenin that he had promised before. His Twelfth Symphony, which portrays the Bolshevik Revolution and was completed in 1961, was dedicated to Vladimir Lenin and called "The Year 1917." Around this time, his health also began to deteriorate. Shostakovich's musical response to these personal crises was the Eighth String Quartet, composed in only three days. He subtitled the piece, "To the victims of fascism and war", ostensibly in memory of the Dresden fire bombing that took place in 1945. Yet, like the Tenth Symphony, this quartet incorporates quotations from several of his past works and his musical monogram: Shostakovich confessed to his friend Isaak Glikman "I started thinking that if some day I die, nobody is likely to write a work in memory of me, so I had better write one myself." Several of Shostakovich's colleagues, including Natalya Vovsi-Mikhoels and the cellist Valentin Berlinsky, were also aware of the Eighth Quartet's biographical intent. Peter J. Rabinowitz has also pointed to covert references to Richard Strauss's "Metamorphosen" in the Eighth Quartet. In 1962 he married for the third time, to Irina Supinskaya. In a letter to Glikman, he wrote "her only defect is that she is 27 years old. In all other respects she is splendid: clever, cheerful, straightforward and very likeable." According to Galina Vishnevskaya, who knew the Shostakoviches well, this marriage was a very happy one: "It was with her that Dmitri Dmitriyevich finally came to know domestic peace... Surely, she prolonged his life by several years." In November he made his only venture into conducting, directing a couple of his own works in Gorky; otherwise he declined to conduct, citing nerves and ill health as his reasons. That year saw Shostakovich again turn to the subject of anti-Semitism in his Thirteenth Symphony (subtitled "Babi Yar"). The symphony sets a number of poems by Yevgeny Yevtushenko, the first of which commemorates a massacre of Ukrainian Jews during the Second World War. Opinions are divided over how great a risk this was: the poem had been published in Soviet media, and was not banned, but it remained controversial. After the symphony's premiere, Yevtushenko was forced to add a stanza to his poem which said that Russians and Ukrainians had died alongside the Jews at Babi Yar. In 1965 Shostakovich raised his voice in defense of poet Joseph Brodsky, who was sentenced to five years of exile and hard labor. Shostakovich co-signed protests together with Yevtushenko and fellow Soviet artists Kornei Chukovsky, Anna Akhmatova, Samuil Marshak, and the French philosopher Jean-Paul Sartre. After the protests the sentence was commuted, and Brodsky returned to Leningrad. 
In 1964 Shostakovich composed the music for the Russian film "Hamlet", which was favourably reviewed by "The New York Times": "But the lack of this aural stimulation – of Shakespeare's eloquent words – is recompensed in some measure by a splendid and stirring musical score by Dmitri Shostakovich. This has great dignity and depth, and at times an appropriate wildness or becoming levity". In later life, Shostakovich suffered from chronic ill health, but he resisted giving up cigarettes and vodka. Beginning in 1958 he suffered from a debilitating condition that particularly affected his right hand, eventually forcing him to give up piano playing; in 1965 it was diagnosed as poliomyelitis. He also suffered heart attacks the following year and again in 1971, and several falls in which he broke both his legs; in 1967 he wrote in a letter: "Target achieved so far: 75% (right leg broken, left leg broken, right hand defective). All I need to do now is wreck the left hand and then 100% of my extremities will be out of order." A preoccupation with his own mortality permeates Shostakovich's later works, among them the later quartets and the Fourteenth Symphony of 1969 (a song cycle based on a number of poems on the theme of death). This piece also finds Shostakovich at his most extreme with musical language, with twelve-tone themes and dense polyphony used throughout. Shostakovich dedicated this score to his close friend Benjamin Britten, who conducted its Western premiere at the 1970 Aldeburgh Festival. The Fifteenth Symphony of 1971 is, by contrast, melodic and retrospective in nature, quoting Wagner, Rossini and the composer's own Fourth Symphony. Shostakovich died of lung cancer on 9 August 1975. A civic funeral was conducted; he was interred in the Novodevichy Cemetery, Moscow. Even before his death he had been commemorated with the naming of the Shostakovich Peninsula on Alexander Island, Antarctica. He was survived by his third wife, Irina; his daughter, Galina; and his son, Maxim, a pianist and conductor who was the dedicatee and first performer of some of his father's works. Shostakovich himself left behind several recordings of his own piano works, while other noted interpreters of his music include his friends Emil Gilels, Mstislav Rostropovich, Tatiana Nikolayeva, Maria Yudina, David Oistrakh, and members of the Beethoven Quartet. His last work was his Viola Sonata, which was first performed on 28 December 1975, four months after his death. Shostakovich's musical influence on later composers outside the former Soviet Union has been relatively slight, although Alfred Schnittke took up his eclecticism and his contrasts between the dynamic and the static, and some of André Previn's music shows clear links to Shostakovich's style of orchestration. His influence can also be seen in some Nordic composers, such as Lars-Erik Larsson. Many of his Russian contemporaries, and his pupils at the Leningrad Conservatory were strongly influenced by his style (including German Okunev, Boris Tishchenko, whose 5th Symphony of 1978 is dedicated to Shostakovich's memory, Sergei Slonimsky, and others). Shostakovich's conservative idiom has grown increasingly popular with audiences both within and beyond Russia, as the avant-garde has declined in influence and debate about his political views has developed. Shostakovich's works are broadly tonal and in the Romantic tradition, but with elements of atonality and chromaticism. In some of his later works (e.g., the Twelfth Quartet), he made use of tone rows. 
His output is dominated by his cycles of symphonies and string quartets, each totaling fifteen works. The symphonies are distributed fairly evenly throughout his career, while the quartets are concentrated towards the latter part. Among the most popular are the Fifth and Seventh Symphonies and the Eighth and Fifteenth Quartets. Other works include the operas "Lady Macbeth of Mtsensk", "The Nose" and the unfinished "The Gamblers" based on the comedy of Nikolai Gogol; six concertos (two each for piano, violin and cello); two piano trios; and a large quantity of film music. Shostakovich's music shows the influence of many of the composers he most admired: Bach in his fugues and passacaglias; Beethoven in the late quartets; Mahler in the symphonies and Berg in his use of musical codes and quotations. Among Russian composers, he particularly admired Modest Mussorgsky, whose operas "Boris Godunov" and "Khovanshchina" he re-orchestrated; Mussorgsky's influence is most prominent in the wintry scenes of "Lady Macbeth" and the Eleventh Symphony, as well as in his satirical works such as "Rayok". Prokofiev's influence is most apparent in the earlier piano works, such as the first sonata and first concerto. The influence of Russian church and folk music is very evident in his works for unaccompanied choir of the 1950s. Shostakovich's relationship with Stravinsky was profoundly ambivalent; as he wrote to Glikman, "Stravinsky the composer I worship. Stravinsky the thinker I despise." He was particularly enamoured of the Symphony of Psalms, presenting a copy of his own piano version of it to Stravinsky when the latter visited the USSR in 1962. (The meeting of the two composers was not very successful, however; observers commented on Shostakovich's extreme nervousness and Stravinsky's "cruelty" to him.) Many commentators have noted the disjunction between the experimental works before the 1936 denunciation and the more conservative ones that followed; the composer told Flora Litvinova, "without 'Party guidance' ... I would have displayed more brilliance, used more sarcasm, I could have revealed my ideas openly instead of having to resort to camouflage." Articles published by Shostakovich in 1934 and 1935 cited Berg, Schoenberg, Krenek, Hindemith, "and especially Stravinsky" among his influences. Key works of the earlier period are the First Symphony, which combined the academicism of the conservatory with his progressive inclinations; "The Nose" ("The most uncompromisingly modernist of all his stage-works"); "Lady Macbeth." which precipitated the denunciation; and the Fourth Symphony, described in Grove's Dictionary as "a colossal synthesis of Shostakovich's musical development to date". The Fourth Symphony was also the first in which the influence of Mahler came to the fore, prefiguring the route Shostakovich was to take to secure his rehabilitation, while he himself admitted that the preceding two were his least successful. In the years after 1936, Shostakovich's symphonic works were outwardly musically conservative, regardless of any subversive political content. During this time he turned increasingly to chamber works, a field that permitted the composer to explore different and often darker ideas without inviting external scrutiny. While his chamber works were largely tonal, they gave Shostakovich an outlet for sombre reflection not welcomed in his more public works. 
This is most apparent in the late chamber works, which portray what is described in Grove's Dictionary as a "world of purgatorial numbness"; in some of these he included the use of tone rows, although he treated these as melodic themes rather than serially. Vocal works are also a prominent feature of his late output, setting texts often concerned with love, death and art. Even before the Stalinist anti-Semitic campaigns in the late 1940s and early 1950s, Shostakovich showed an interest in Jewish themes. He was intrigued by Jewish music's "ability to build a jolly melody on sad intonations". Examples of works that included Jewish themes are the Fourth String Quartet (1949), the First Violin Concerto (1948), and the "Four Monologues on Pushkin Poems" (1952), as well as the Piano Trio in E minor (1944). He was further inspired to write on Jewish themes when he examined Moisei Beregovski's thesis on Jewish folk music in 1946. In 1948, Shostakovich acquired a book of Jewish folk songs, and from this he composed the song cycle "From Jewish Folk Poetry". He initially wrote eight songs that were meant to represent the hardships of being Jewish in the Soviet Union. However, in order to disguise this, Shostakovich added three more songs meant to portray the good life Jews supposedly enjoyed under the Soviet regime. Despite his efforts to hide the real meaning in the work, the Union of Composers refused to approve his music in 1949 under the pressure of the anti-Semitism that gripped the country. "From Jewish Folk Poetry" could not be performed until after Stalin's death in March 1953, along with all the other works that were forbidden. Throughout his compositions, Shostakovich demonstrated a controlled use of musical quotation. The device was common among earlier composers, but Shostakovich developed it into a defining characteristic of his music. Rather than quoting other composers, Shostakovich preferred to quote himself, and musicologists have traced the connections this creates between his individual works. One such example is the main theme of Katerina's aria, "Seryozha, khoroshiy moy", from the fourth act of "Lady Macbeth of the Mtsensk District". First presented in the opera, it accompanies Katerina as she reunites with her lover Sergei. The aria's beauty comes as a breath of fresh air amid the intense, overbearing tone of the scene, suiting the dialogue as Katerina visits her lover in prison. The beauty of this theme is made tragic when Sergei, blaming Katerina for his incarceration, betrays her and finds a new lover. More than 25 years later, Shostakovich quoted this same theme in his Eighth String Quartet. In the midst of this quartet's oppressive and somber themes, the only light and cheerful moment comes when the cello introduces the Seryozha theme about three minutes into the fourth movement. Fittingly, given the quartet's dedication, this quotation uses Katerina's hope amid misery to represent the hope of those oppressed by fascism. The theme emerges once again in his Fourteenth String Quartet. As in the Eighth, the cello introduces it, but for an entirely different purpose: the Fourteenth, the last in Shostakovich's "quartet of quartets", serves to honor the cellist of the Beethoven String Quartet, Sergei Shirinsky. 
Rather than reflecting the original theme's intentions, this quotation serves as a dedication to Sergei the cellist, similar to how Katerina praises her lover Sergei. In 2004, the musicologist Olga Digonskaya discovered a trove of Shostakovich manuscripts at the Glinka State Central Museum of Musical Culture, Moscow. In a cardboard file were some "300 pages of musical sketches, pieces and scores" in the hand of Shostakovich. "A composer friend bribed Shostakovich's housemaid to regularly deliver the contents of Shostakovich's office waste bin to him, instead of taking it to the garbage. Some of those cast-offs eventually found their way into the Glinka. ... The Glinka archive 'contained a huge number of pieces and compositions which were completely unknown or could be traced quite indirectly,' Digonskaya said." Among these were Shostakovich's piano and vocal sketches for a prologue to an opera, "Orango" (1932). They have been orchestrated by the British composer Gerard McBurney and this work was premiered in December 2011 by the Los Angeles Philharmonic. According to Shostakovich scholar Gerard McBurney, opinion is divided on whether his music is "of visionary power and originality, as some maintain, or, as others think, derivative, trashy, empty and second-hand". William Walton, his British contemporary, described him as "the greatest composer of the 20th century". Musicologist David Fanning concludes in Grove's Dictionary that, "Amid the conflicting pressures of official requirements, the mass suffering of his fellow countrymen, and his personal ideals of humanitarian and public service, he succeeded in forging a musical language of colossal emotional power." Some modern composers have been critical. Pierre Boulez dismissed Shostakovich's music as "the second, or even third of Mahler". The Romanian composer and Webern disciple Philip Gershkovich called Shostakovich "a hack in a trance". A related complaint is that Shostakovich's style is vulgar and strident: Stravinsky wrote of "Lady Macbeth": "brutally hammering ... and monotonous". English composer and musicologist Robin Holloway described his music as "battleship-grey in melody and harmony, factory-functional in structure; in content all rhetoric and coercion." In the 1980s, the Finnish conductor and composer Esa-Pekka Salonen was critical of Shostakovich and refused to conduct his music. For instance, he said in 1987: Shostakovich is in many ways a polar counter-force for Stravinsky. [...] When I have said that the 7th symphony of Shostakovich is a dull and unpleasant composition, people have responded: "Yes, yes, but think of the background of that symphony." Such an attitude does no good to anyone. However, Salonen has since performed and recorded several of Shostakovich's works, including the Piano Concertos Nos. 1 and 2 (1999), the Violin Concerto No. 1 (2010), the Prologue to "Orango" and the Symphony No. 4 (2012). It is certainly true that Shostakovich borrows extensively from the material and styles both of earlier composers and of popular music; the vulgarity of "low" music is a notable influence on this "greatest of eclectics". McBurney traces this to the avant-garde artistic circles of the early Soviet period in which Shostakovich moved early in his career, and argues that these borrowings were a deliberate technique to allow him to create "patterns of contrast, repetition, exaggeration" that gave his music the large-scale structure it required. 
Shostakovich was in many ways an obsessive man: according to his daughter he was "obsessed with cleanliness". He synchronised the clocks in his apartment and regularly sent cards to himself to test how well the postal service was working. Elizabeth Wilson's "Shostakovich: A Life Remembered" (1994 edition) indexes 26 references to his nervousness. Mikhail Druskin remembers that even as a young man the composer was "fragile and nervously agile". Yuri Lyubimov comments, "The fact that he was more vulnerable and receptive than other people was no doubt an important feature of his genius." In later life, Krzysztof Meyer recalled, "his face was a bag of tics and grimaces." In his lighter moods, sport was one of his main recreations, although he preferred spectating or umpiring to participating (he was a qualified football referee). His favourite football club was Zenit Leningrad, which he would watch regularly. He also enjoyed playing card games, particularly patience. He was fond of satirical writers such as Gogol, Chekhov and Mikhail Zoshchenko. The influence of the latter in particular is evident in his letters, which include wry parodies of Soviet officialese. Zoshchenko himself noted the contradictions in the composer's character: "he is ... frail, fragile, withdrawn, an infinitely direct, pure child ... [but he is also] hard, acid, extremely intelligent, strong perhaps, despotic and not altogether good-natured (although cerebrally good-natured)." Shostakovich was a great wit, and there are many musical jokes in his works. For example, listeners well-versed in classical music know that the famous "Tristan chord" in the overture to Wagner's opera "Tristan and Isolde" resolves in surprising ways so many times that they come to expect the unexpected. In the last movement of his last symphony (No. 15), Shostakovich plays with this expectation: he quotes several passages from Wagner, then introduces the opening melody of the Tristan overture and resolves it in the most banal, predictable way possible, which, against the expectations set by Wagner's version, sounds comic. He was diffident by nature: Flora Litvinova has said he was "completely incapable of saying 'No' to anybody." This meant he was easily persuaded to sign official statements, including a denunciation of Andrei Sakharov in 1973; on the other hand he was willing to try to help constituents in his capacities as chairman of the Composers' Union and Deputy to the Supreme Soviet. Oleg Prokofiev commented that "he tried to help so many people that ... less and less attention was paid to his pleas." When asked if he believed in God, Shostakovich said "No, and I am very sorry about it." Shostakovich's response to official criticism and, more importantly, the question of whether he used music as a form of covert dissidence, are matters of dispute. He outwardly conformed to government policies and positions, reading speeches and putting his name to articles expressing the government line. But it is evident he disliked many aspects of the regime, as confirmed by his family, his letters to Isaak Glikman, and the satirical cantata "Rayok", which ridiculed the "anti-formalist" campaign and was kept hidden until after his death. He was a close friend of Marshal of the Soviet Union Mikhail Tukhachevsky, who was executed in 1937 during the Great Purge. It is also uncertain to what extent Shostakovich expressed his opposition to the state in his music. 
The revisionist view was put forth by Solomon Volkov in the 1979 book "Testimony", which was claimed to be Shostakovich's memoirs dictated to Volkov. The book alleged that many of the composer's works contained coded anti-government messages, that would place Shostakovich in a tradition of Russian artists outwitting censorship that goes back at least to the early 19th century poet Alexander Pushkin. It is known that he incorporated many quotations and motifs in his work, most notably his signature DSCH theme. His longtime collaborator Yevgeny Mravinsky said that "Shostakovich very often explained his intentions with very specific images and connotations." The revisionist perspective has subsequently been supported by his children, Maxim and Galina, and many Russian musicians, although Maxim Shostakovich stated in 1981 that Volkov's book was not his father's memories. Volkov has further argued, both in "Testimony" and in "Shostakovich and Stalin", that Shostakovich adopted the role of the "yurodivy" or holy fool in his relations with the government. Other prominent revisionists are Ian MacDonald, whose book "The New Shostakovich" put forward further revisionist interpretations of his music, and Elizabeth Wilson, whose "Shostakovich: A Life Remembered" provides testimony from many of the composer's acquaintances. Musicians and scholars including Laurel Fay and Richard Taruskin contest the authenticity and debate the significance of "Testimony", alleging that Volkov compiled it from a combination of recycled articles, gossip, and possibly some information direct from the composer. Fay documents these allegations in her 2002 article 'Volkov's "Testimony" reconsidered', showing that the only pages of the original "Testimony" manuscript that Shostakovich had signed and verified are word-for-word reproductions of earlier interviews given by the composer, none of which are controversial. (Against this, it has been pointed out by Allan B. Ho and Dmitry Feofanov that at least two of the signed pages contain controversial material: for instance, "on the first page of chapter 3, where [Shostakovich] notes that the plaque that reads 'In this house lived [Vsevolod] Meyerhold' should also say 'And in this house his wife was brutally murdered'.") In May 1958, during a visit to Paris, Shostakovich recorded his two piano concertos with André Cluytens, as well as some short piano works. These were issued by EMI on an LP, reissued by Seraphim Records on LP, and eventually digitally remastered and released on CD. Shostakovich recorded the two concertos in stereo in Moscow for Melodiya. Shostakovich also played the piano solos in recordings of the Cello Sonata, Op. 40 with cellist Daniil Shafran and also with Mstislav Rostropovich; the Violin Sonata, Op. 134, with violinist David Oistrakh; and the Piano Trio, Op. 67 with violinist David Oistrakh and cellist Miloš Sádlo. There is also a short sound film of Shostakovich as soloist in a 1930s concert performance of the closing moments of his first piano concerto. A colour film of Shostakovich supervising one of his operas, from his last year, was also made. A major achievement was EMI's recording of the original, unexpurgated opera "Lady Macbeth of Mtsensk". There was at least one recording of the cleaned-up version, "Katerina Ismailova", that Shostakovich had made to satisfy Soviet censorship. 
But when conductor Mstislav Rostropovich and his wife, soprano Galina Vishnevskaya were finally allowed to emigrate to the West, the composer begged them to record the full original score, which they did in 1979. It features Vishnevskaya as Katerina, Nicolai Gedda as Sergei, Dimiter Petkov as Boris Ismailov and a brilliant supporting cast under Rostropovich's direction. Austria: Decoration for Services to the Republic of Austria in Silver (1967)
Belgium: Member of the Royal Academy of Science, Letters and Fine Arts of Belgium (1960); Denmark: Léonie Sonning Music Prize (1974); Finland: Wihuri Sibelius Prize (1958); United Kingdom: Gold Medal of the Royal Philharmonic Society (1966); United States: Oscar nomination for "Khovanshchina", Best Score (Musical) in 1961. Diocletian Diocletian, born Diocles (22 December 244 – 3 December 311), was a Roman emperor from 284 to 305. Born to a family of low status in Dalmatia, Diocletian rose through the ranks of the military to become Roman cavalry commander to the Emperor Carus. After the deaths of Carus and his son Numerian on campaign in Persia, Diocletian was proclaimed emperor. The title was also claimed by Carus' other surviving son, Carinus, but Diocletian defeated him in the Battle of the Margus. Diocletian's reign stabilized the empire and marks the end of the Crisis of the Third Century. He appointed fellow officer Maximian as Augustus, co-emperor, in 286. Diocletian reigned in the Eastern Empire, and Maximian reigned in the Western Empire. Diocletian delegated further on 1 March 293, appointing Galerius and Constantius as Caesars, junior co-emperors, under himself and Maximian respectively. Under this 'tetrarchy', or "rule of four", each emperor would rule over a quarter-division of the empire. Diocletian secured the empire's borders and purged it of all threats to his power. He defeated the Sarmatians and Carpi during several campaigns between 285 and 299, the Alamanni in 288, and usurpers in Egypt between 297 and 298. Galerius, aided by Diocletian, campaigned successfully against Sassanid Persia, the empire's traditional enemy. In 299 he sacked their capital, Ctesiphon. Diocletian led the subsequent negotiations and achieved a lasting and favourable peace. Diocletian separated and enlarged the empire's civil and military services and reorganized the empire's provincial divisions, establishing the largest and most bureaucratic government in the history of the empire. He established new administrative centres in Nicomedia, Mediolanum, Sirmium, and Trier, closer to the empire's frontiers than the traditional capital at Rome. Building on third-century trends towards absolutism, he styled himself an autocrat, elevating himself above the empire's masses with imposing forms of court ceremonies and architecture. Bureaucratic and military growth, constant campaigning, and construction projects increased the state's expenditures and necessitated a comprehensive tax reform. From at least 297 on, imperial taxation was standardized, made more equitable, and levied at generally higher rates. Not all of Diocletian's plans were successful: the Edict on Maximum Prices (301), his attempt to curb inflation via price controls, was counterproductive and quickly ignored. Although effective while he ruled, Diocletian's tetrarchic system collapsed after his abdication under the competing dynastic claims of Maxentius and Constantine, sons of Maximian and Constantius respectively. The Diocletianic Persecution (303–11), the empire's last, largest, and bloodiest official persecution of Christianity, failed to eliminate Christianity in the empire; indeed, after 324, Christianity became the empire's preferred religion under its first Christian emperor, Constantine. 
In spite of these failures and challenges, Diocletian's reforms fundamentally changed the structure of Roman imperial government and helped stabilize the empire economically and militarily, enabling the empire to remain essentially intact for another 150 years despite being near the brink of collapse in Diocletian's youth. Weakened by illness, Diocletian left the imperial office on 1 May 305, and became the first Roman emperor to abdicate the position voluntarily. He lived out his retirement in his palace on the Dalmatian coast, tending to his vegetable gardens. His palace eventually became the core of the modern-day city of Split in Croatia. Diocletian was born near Salona in Dalmatia (Solin in modern Croatia), some time around 244. His parents gave him the Greek name Diocles, or possibly Diocles Valerius. The modern historian Timothy Barnes takes his official birthday, 22 December, as his actual birthdate. Other historians are not so certain. Diocles' parents were of low status, and writers critical of him claimed that his father was a scribe or a freedman of the senator Anullinus, or even that Diocles was a freedman himself. The first forty years of his life are mostly obscure. The Byzantine chronicler Joannes Zonaras states that he was "Dux Moesiae", a commander of forces on the lower Danube. The often-unreliable "Historia Augusta" states that he served in Gaul, but this account is not corroborated by other sources and is ignored by modern historians of the period. The first time Diocletian's whereabouts are accurately established, in 282, the new Emperor Carus made him commander of the "Protectores domestici", the elite cavalry force directly attached to the Imperial household – a post that earned him the honour of a consulship in 283. As such, he took part in Carus' subsequent Persian campaign. Carus's death, amid a successful war with Persia and in mysterious circumstances – he was believed (perhaps as a result of later Diocletianic propaganda) to have been struck by lightning – left his sons Numerian and Carinus as the new "Augusti". Carinus quickly made his way to Rome from his post in Gaul as imperial commissioner and arrived there by January 284, becoming legitimate Emperor in the West. Numerian lingered in the East. The Roman withdrawal from Persia was orderly and unopposed. The Sassanid king Bahram II could not field an army against them as he was still struggling to establish his authority. By March 284, Numerian had only reached Emesa (Homs) in Syria; by November, only Asia Minor. In Emesa he was apparently still alive and in good health: he issued the only extant rescript in his name there, but after he left the city, his staff, including the prefect (Numerian's father-in-law, and as such the dominant influence in the Emperor's entourage) Aper, reported that he suffered from an inflammation of the eyes. He travelled in a closed coach from then on. When the army reached Bithynia, some of the soldiers smelled an odor emanating from the coach. They opened its curtains and inside they found Numerian dead. Both Eutropius and Aurelius Victor describe Numerian's death as an assassination. Aper officially broke the news in Nicomedia (İzmit) in November. Numerian's generals and tribunes called a council for the succession, and chose Diocles as Emperor, in spite of Aper's attempts to garner support. On 20 November 284, the army of the east gathered on a hill outside Nicomedia.
The army unanimously saluted Diocles as their new augustus, and he accepted the purple imperial vestments. He raised his sword to the light of the sun and swore an oath disclaiming responsibility for Numerian's death. He asserted that Aper had killed Numerian and concealed it. In full view of the army, Diocles drew his sword and killed Aper. According to the "Historia Augusta", he quoted from Virgil while doing so. Soon after Aper's death, Diocles changed his name to the more Latinate "Diocletianus", in full Gaius Aurelius Valerius Diocletianus. After his accession, Diocletian and Lucius Caesonius Bassus were named as consuls and assumed the "fasces" in place of Carinus and Numerianus. Bassus was a member of a senatorial family from Campania, a former consul and proconsul of Africa, chosen by Probus for signal distinction. He was skilled in areas of government where Diocletian presumably had no experience. Diocletian's elevation of Bassus as consul symbolized his rejection of Carinus' government in Rome, his refusal to accept second-tier status to any other emperor, and his willingness to continue the long-standing collaboration between the empire's senatorial and military aristocracies. It also tied his success to that of the Senate, whose support he would need in his advance on Rome. Diocletian was not the only challenger to Carinus' rule: the usurper M. Aurelius Julianus, Carinus' "corrector Venetiae", took control of northern Italy and Pannonia after Diocletian's accession. Julianus struck coins at the mint of Siscia (Sisak, Croatia) declaring himself emperor and promising freedom. It was all good publicity for Diocletian, and it aided in his portrayal of Carinus as a cruel and oppressive tyrant. Julianus' forces were weak, however, and were handily dispersed when Carinus' armies moved from Britain to northern Italy. As leader of the united East, Diocletian was clearly the greater threat. Over the winter of 284–85, Diocletian advanced west across the Balkans. In the spring, some time before the end of May, his armies met Carinus' across the river Margus (Great Morava) in Moesia. In modern accounts, the site has been located between the Mons Aureus (Seone, west of Smederevo) and Viminacium, near modern Belgrade, Serbia. Despite having the stronger army, Carinus held the weaker position. His rule was unpopular, and it was later alleged that he had mistreated the Senate and seduced his officers' wives. It is possible that Flavius Constantius, the governor of Dalmatia and Diocletian's associate in the household guard, had already defected to Diocletian in the early spring. When the Battle of the Margus began, Carinus' prefect Aristobulus also defected. In the course of the battle, Carinus was killed by his own men. Following Diocletian's victory, both the western and the eastern armies acclaimed him augustus. Diocletian exacted an oath of allegiance from the defeated army and departed for Italy. Diocletian may have become involved in battles against the Quadi and Marcomanni immediately after the Battle of the Margus. He eventually made his way to northern Italy and set up an imperial government, but it is not known whether he visited the city of Rome at this time.
There is a contemporary issue of coins suggestive of an imperial "adventus" (arrival) for the city, but some modern historians state that Diocletian avoided the city, and that he did so on principle, as the city and its Senate were no longer politically relevant to the affairs of the empire and needed to be taught as much. Diocletian dated his reign from his elevation by the army, not the date of his ratification by the Senate, following the practice established by Carus, who had declared the Senate's ratification a useless formality. However, Diocletian was to offer proof of his deference towards the Senate by retaining Aristobulus as ordinary consul and colleague for 285 (one of the few instances during the Late Empire in which an emperor admitted a "privatus" as his colleague) and by creating senior senators Vettius Aquilinus and Junius Maximus ordinary consuls for the following year – for Maximus, it was his second consulship. Nevertheless, if Diocletian ever did enter Rome shortly after his accession, he did not stay long; he is attested back in the Balkans by 2 November 285, on campaign against the Sarmatians. Diocletian replaced the prefect of Rome with his consular colleague Bassus. Most officials who had served under Carinus, however, retained their offices under Diocletian. In an act of "clementia" noted by the epitomator Aurelius Victor as unusual, Diocletian did not kill or depose Carinus' traitorous praetorian prefect and consul Ti. Claudius Aurelius Aristobulus, but confirmed him in both roles. He later gave him the proconsulate of Africa and the post of urban prefect for 295. The other figures who retained their offices might have also betrayed Carinus. The assassinations of Aurelian and Probus demonstrated that sole rulership was dangerous to the stability of the empire. Conflict boiled in every province, from Gaul to Syria, Egypt to the lower Danube. It was too much for one person to control, and Diocletian needed a lieutenant. At some time in 285 at Mediolanum (Milan), Diocletian raised his fellow-officer Maximian to the office of caesar, making him co-emperor. The concept of dual rulership was nothing new to the Roman Empire. Augustus, the first Emperor, had nominally shared power with his colleagues, and more formal offices of Co-Emperor had existed from Marcus Aurelius on. Most recently, Emperor Carus and his sons had ruled together, albeit unsuccessfully. Diocletian was in a less comfortable position than most of his predecessors, as he had a daughter, Valeria, but no sons. His co-ruler had to be from outside his family, raising the question of trust. Some historians state that Diocletian adopted Maximian as his "filius Augusti", his "Augustan son", upon his appointment to the throne, following the precedent of some previous Emperors. This argument has not been universally accepted. The relationship between Diocletian and Maximian was quickly couched in religious terms. Around 287 Diocletian assumed the title "Iovius", and Maximian assumed the title "Herculius". The titles were probably meant to convey certain characteristics of their associated leaders. Diocletian, in Jovian style, would take on the dominating roles of planning and commanding; Maximian, in Herculian mode, would act as Jupiter's heroic subordinate. For all their religious connotations, the emperors were not "gods" in the tradition of the Imperial cult—although they may have been hailed as such in Imperial panegyrics. Instead, they were seen as the gods' representatives, effecting their will on earth.
The shift from military acclamation to divine sanctification took the power to appoint emperors away from the army. Religious legitimization elevated Diocletian and Maximian above potential rivals in a way military power and dynastic claims could not. After his acclamation, Maximian was dispatched to fight the rebel Bagaudae, insurgent peasants of Gaul. Diocletian returned to the East, progressing slowly. By 2 November, he had only reached Civitas Iovia (Botivo, near Ptuj, Slovenia). In the Balkans during the autumn of 285, he encountered a tribe of Sarmatians who demanded assistance. The Sarmatians requested that Diocletian either help them recover their lost lands or grant them pasturage rights within the empire. Diocletian refused and fought a battle with them, but was unable to secure a complete victory. The nomadic pressures of the European Plain remained and could not be solved by a single war; soon the Sarmatians would have to be fought again. Diocletian wintered in Nicomedia. There may have been a revolt in the eastern provinces at this time, as he brought settlers from Asia to populate emptied farmlands in Thrace. He visited Syria Palaestina the following spring. His stay in the East saw diplomatic success in the conflict with Persia: in 287, Bahram II granted him precious gifts, declared open friendship with the Empire, and invited Diocletian to visit him. Roman sources insist that the act was entirely voluntary. Around the same time, perhaps in 287, Persia relinquished claims on Armenia and recognized Roman authority over territory to the west and south of the Tigris. The western portion of Armenia was incorporated into the empire and made a province. Tiridates III, Arsacid claimant to the Armenian throne and Roman client, had been disinherited and forced to take refuge in the empire after the Persian conquest of 252–53. In 287, he returned to lay claim to the eastern half of his ancestral domain and encountered no opposition. Bahram II's gifts were widely recognized as symbolic of a victory in the ongoing conflict with Persia, and Diocletian was hailed as the "founder of eternal peace". The events might have represented a formal end to Carus' eastern campaign, which probably ended without an acknowledged peace. At the conclusion of discussions with the Persians, Diocletian re-organized the Mesopotamian frontier and fortified the city of Circesium (Buseire, Syria) on the Euphrates. Maximian's campaigns were not proceeding as smoothly. The Bagaudae had been easily suppressed, but Carausius, the man he had put in charge of operations against Saxon and Frankish pirates on the Saxon Shore, had, according to literary sources, begun keeping the goods seized from the pirates for himself. Maximian issued a death-warrant for his larcenous subordinate. Carausius fled the Continent, proclaimed himself Augustus, and agitated Britain and northwestern Gaul into open revolt against Maximian and Diocletian. Far more probable, according to the available archaeological evidence, is that Carausius had held some important military post in Britain and already had a firm base of power in both Britain and northern Gaul (a coin hoard found in Rouen proves that he controlled that mainland area at the beginning of his rebellion), and that he profited from the lack of legitimacy of the central government.
Carausius strove to have his legitimacy as a junior emperor acknowledged by Diocletian: in his coinage (of far better quality than the official one, especially his silver pieces) he extolled the "concord" between him and the central power (PAX AVGGG, "the Peace of the three Augusti", read one 290 bronze piece, displaying, on the other side, Carausius together with Diocletian and Maximian, with the caption CARAVSIVS ET FRATRES SVI, "Carausius and his brothers"). However, Diocletian could not allow a breakaway regional usurper following in Postumus's footsteps to enter the imperial college of his own accord. So Carausius had to go. Spurred by the crisis, on 1 April 286, Maximian took up the title of Augustus. His appointment is unusual in that it was impossible for Diocletian to have been present to witness the event. It has even been suggested that Maximian usurped the title and was only later recognized by Diocletian in hopes of avoiding civil war. This suggestion is unpopular, as it is clear that Diocletian meant for Maximian to act with a certain amount of independence. It may be posited, however, that Diocletian felt the need to bind Maximian closer to him, by making him his empowered associate, in order to avoid the possibility of having him strike some sort of deal with Carausius. Maximian realized that he could not immediately suppress the rogue commander, so in 287 he campaigned solely against tribes beyond the Rhine instead. As Carausius was allied to the Franks, Maximian's campaigns could be seen as an effort to deny the separatist emperor in Britain a basis of support on the mainland. The following spring, as Maximian prepared a fleet for an expedition against Carausius, Diocletian returned from the East to meet Maximian. The two emperors agreed on a joint campaign against the Alamanni. Diocletian invaded Germania through Raetia while Maximian progressed from Mainz. Each emperor burned crops and food supplies as he went, destroying the Germans' means of sustenance. The two men added territory to the empire and allowed Maximian to continue preparations against Carausius without further disturbance. On his return to the East, Diocletian managed what was probably another rapid campaign against the resurgent Sarmatians. No details survive, but inscriptions indicate that Diocletian took the title "Sarmaticus Maximus" after 289. In the East, Diocletian engaged in diplomacy with desert tribes in the regions between Rome and Persia. He might have been attempting to persuade them to ally themselves with Rome, thus reviving the old, Rome-friendly, Palmyrene sphere of influence, or simply attempting to reduce the frequency of their incursions. No details survive for these events. Some of the princes of these states were Persian client kings, a disturbing fact in light of increasing tensions with the Sassanids. In the West, Maximian lost the fleet built in 288 and 289, probably in the early spring of 290. The panegyrist who refers to the loss suggests that its cause was a storm, but this might simply have been an attempt to conceal an embarrassing military defeat. Diocletian broke off his tour of the Eastern provinces soon thereafter. He returned with haste to the West, reaching Emesa by 10 May 290, and Sirmium on the Danube by 1 July 290. Diocletian met Maximian in Milan in the winter of 290–91, either in late December 290 or January 291. The meeting was undertaken with a sense of solemn pageantry.
The emperors spent most of their time in public appearances. It has been surmised that the ceremonies were arranged to demonstrate Diocletian's continuing support for his faltering colleague. A deputation from the Roman Senate met with the emperors, renewing its infrequent contact with the Imperial office. The choice of Milan over Rome further snubbed the capital's pride. But then it was already a long established practice that Rome itself was only a ceremonial capital, as the actual seat of the Imperial administration was determined by the needs of defense. Long before Diocletian, Gallienus (r. 253–68) had chosen Milan as the seat of his headquarters. If the panegyric detailing the ceremony implied that the true center of the empire was not Rome, but where the emperor sat ("...the capital of the empire appeared to be there, where the two emperors met"), it simply echoed what had already been stated by the historian Herodian in the early third century: "Rome is where the emperor is". During the meeting, decisions on matters of politics and war were probably made in secret. The Augusti would not meet again until 303. Some time after his return, and before 293, Diocletian transferred command of the war against Carausius from Maximian to Flavius Constantius, a former Governor of Dalmatia and a man of military experience stretching back to Aurelian's campaigns against Zenobia (272–73). He was Maximian's praetorian prefect in Gaul, and the husband to Maximian's daughter, Theodora. On 1 March 293 at Milan, Maximian gave Constantius the office of caesar. In the spring of 293, in either Philippopolis (Plovdiv, Bulgaria) or Sirmium, Diocletian would do the same for Galerius, husband to Diocletian's daughter Valeria, and perhaps Diocletian's Praetorian Prefect. Constantius was assigned Gaul and Britain. Galerius was initially assigned Syria, Palestine, Egypt, and responsibility for the eastern borderlands. This arrangement is called the tetrarchy, from a Greek term meaning "rulership by four". The Tetrarchic Emperors were more or less sovereign in their own lands, and they travelled with their own imperial courts, administrators, secretaries, and armies. They were joined by blood and marriage; Diocletian and Maximian now styled themselves as brothers. The senior Co-Emperors formally adopted Galerius and Constantius as sons in 293. These relationships implied a line of succession. Galerius and Constantius would become Augusti after the departure of Diocletian and Maximian. Maximian's son Maxentius and Constantius' son Constantine would then become Caesars. In preparation for their future roles, Constantine and Maxentius were taken to Diocletian's court in Nicomedia. Just before his creation as Caesar, Constantius moved to cut Carausius off from his base of support in Gaul, recovering Boulogne after a hotly fought siege. This success led to Carausius being murdered and replaced by his aide Allectus, who held out in his British stronghold for a further three years, until a two-pronged naval invasion resulted in his defeat and death at the hands of Constantius' praetorian prefect Julius Asclepiodotus, during a land battle somewhere near Farnham. Constantius himself, after disembarking in the south east, delivered London from a looting party of Frankish deserters in Allectus' pay, something that allowed him to assume the role of liberator of Britain.
A famous commemorative medallion depicts a personification of London supplying the victorious Constantius on horseback; on it he describes himself as "redditor lucis aeternae", 'restorer of the eternal light' (viz. of Rome). The suppression of this threat to the Tetrarchs' legitimacy allowed both Constantius and Maximian to concentrate on outside threats: by 297 Constantius was back on the Rhine and Maximian was engaged in a full-scale African campaign against Frankish pirates and nomads, eventually making a triumphal entry into Carthage on 10 March 298. However, Maximian's failure to deal with Carausius and Allectus on his own had jeopardized the position of Maxentius as putative heir to his father's post as Augustus of the West, with Constantius' son Constantine appearing as a rival claimant. Diocletian spent the spring of 293 travelling with Galerius from Sirmium (Sremska Mitrovica, Serbia) to Byzantium (Istanbul, Turkey). Diocletian then returned to Sirmium, where he would remain for the following winter and spring. He campaigned against the Sarmatians again in 294, probably in the autumn, and won a victory against them. The Sarmatians' defeat kept them from the Danube provinces for a long time. Meanwhile, Diocletian built forts north of the Danube, at Aquincum (Budapest, Hungary), Bononia (Vidin, Bulgaria), Ulcisia Vetera, Castra Florentium, Intercisa (Dunaújváros, Hungary), and Onagrinum (Begeč, Serbia). The new forts became part of a new defensive line called the "Ripa Sarmatica". In 295 and 296 Diocletian campaigned in the region again, and won a victory over the Carpi in the summer of 296. Later during both 299 and 302, as Diocletian was then residing in the East, it was Galerius' turn to campaign victoriously on the Danube. By the end of his reign, Diocletian had secured the entire length of the Danube, provided it with forts, bridgeheads, highways, and walled towns, and sent fifteen or more legions to patrol the region; an inscription at Sexaginta Prista on the Lower Danube extolled the restored "tranquilitas" of the region. The defense came at a heavy cost, but was a significant achievement in an area difficult to defend. Galerius, meanwhile, was engaged during 291–293 in disputes in Upper Egypt, where he suppressed a regional uprising. He would return to Syria in 295 to fight the revanchist Persian empire. Diocletian's attempts to bring the Egyptian tax system in line with Imperial standards stirred discontent, and a revolt swept the region after Galerius' departure. The usurper L. Domitius Domitianus declared himself Augustus in July or August 297. Much of Egypt, including Alexandria, recognized his rule. Diocletian moved into Egypt to suppress him, first putting down rebels in the Thebaid in the autumn of 297, then moving on to besiege Alexandria. Domitianus died in December 297, by which time Diocletian had secured control of the Egyptian countryside. Alexandria, however, whose defense was organized under Domitianus' former "corrector" Aurelius Achilleus, was to hold out until a later date, probably March 298. Bureaucratic affairs were completed during Diocletian's stay: a census took place, and Alexandria, in punishment for its rebellion, lost the ability to mint independently. Diocletian's reforms in the region, combined with those of Septimius Severus, brought Egyptian administrative practices much closer to Roman standards. Diocletian travelled south along the Nile the following summer, visiting Oxyrhynchus and Elephantine.
In Nubia, he made peace with the Nobatae and Blemmyes tribes. Under the terms of the peace treaty Rome's borders moved north to Philae and the two tribes received an annual gold stipend. Diocletian left Africa quickly after the treaty, moving from Upper Egypt in September 298 to Syria in February 299. He met with Galerius in Mesopotamia. In 294, Narseh, a son of Shapur who had been passed over for the Sassanid succession, came to power in Persia. Narseh eliminated Bahram III, a young man installed in the wake of Bahram II's death in 293. In early 294, Narseh sent Diocletian the customary package of gifts between the empires, and Diocletian responded with an exchange of ambassadors. Within Persia, however, Narseh was destroying every trace of his immediate predecessors from public monuments. He sought to identify himself with the warlike kings Ardashir (r. 226–41) and Shapur I (r. 241–72), who had defeated and imprisoned Emperor Valerian (r. 253–260) following his failed invasion of the Sasanian Empire. Narseh declared war on Rome in 295 or 296. He appears to have first invaded western Armenia, where he seized the lands delivered to Tiridates in the peace of 287. Narseh moved south into Roman Mesopotamia in 297, where he inflicted a severe defeat on Galerius in the region between Carrhae (Harran, Turkey) and Callinicum (Raqqa, Syria) (and thus, the historian Fergus Millar notes, probably somewhere on the Balikh River). Diocletian may or may not have been present at the battle, but he quickly divested himself of all responsibility. In a public ceremony at Antioch, the official version of events was clear: Galerius was responsible for the defeat; Diocletian was not. Diocletian publicly humiliated Galerius, forcing him to walk for a mile at the head of the Imperial caravan, still clad in the purple robes of the Emperor. Galerius was reinforced, probably in the spring of 298, by a new contingent collected from the empire's Danubian holdings. Narseh did not advance from Armenia and Mesopotamia, leaving Galerius to lead the offensive in 298 with an attack on northern Mesopotamia via Armenia. It is unclear if Diocletian was present to assist the campaign; he might have returned to Egypt or Syria. Narseh retreated to Armenia to fight Galerius' force, to Narseh's disadvantage; the rugged Armenian terrain was favorable to Roman infantry, but not to Sassanid cavalry. In two battles, Galerius won major victories over Narseh. During the second encounter, Roman forces seized Narseh's camp, his treasury, his harem, and his wife. Galerius continued moving down the Tigris, and took the Persian capital Ctesiphon before returning to Roman territory along the Euphrates. Narseh sent an ambassador to Galerius to plead for the return of his wives and children in the course of the war, but Galerius dismissed him. Serious peace negotiations began in the spring of 299. The "magister memoriae" (secretary) of Diocletian and Galerius, Sicorius Probus, was sent to Narseh to present terms. The conditions of the resulting Peace of Nisibis were heavy: Armenia returned to Roman domination, with the fort of Ziatha as its border; Caucasian Iberia would pay allegiance to Rome under a Roman appointee; Nisibis, now under Roman rule, would become the sole conduit for trade between Persia and Rome; and Rome would exercise control over the five satrapies between the Tigris and Armenia: Ingilene, Sophanene (Sophene), Arzanene (Aghdznik), Corduene (Carduene), and Zabdicene (near modern Hakkâri, Turkey). 
These regions included the passage of the Tigris through the Anti-Taurus range; the Bitlis pass, the quickest southerly route into Persian Armenia; and access to the Tur Abdin plateau. A stretch of land containing the later strategic strongholds of Amida (Diyarbakır, Turkey) and Bezabde came under firm Roman military occupation. With these territories, Rome would have an advance station north of Ctesiphon, and would be able to slow any future advance of Persian forces through the region. Many cities east of the Tigris came under Roman control, including Tigranokert, Saird, Martyropolis, Balalesa, Moxos, Daudia, and Arzan – though under what status is unclear. At the conclusion of the peace, Tiridates regained both his throne and the entirety of his ancestral claim. Rome secured a wide zone of cultural influence, which led to a wide diffusion of Syriac Christianity from a center at Nisibis in later decades, and the eventual Christianization of Armenia. At the conclusion of the Peace of Nisibis, Diocletian and Galerius returned to Syrian Antioch. At some time in 299, the emperors took part in a ceremony of sacrifice and divination in an attempt to predict the future. The haruspices were unable to read the entrails of the sacrificed animals and blamed Christians in the Imperial household. The emperors ordered all members of the court to perform a sacrifice to purify the palace. The emperors sent letters to the military command, demanding the entire army perform the required sacrifices or face discharge. Diocletian was conservative in matters of religion, a man faithful to the traditional Roman pantheon and understanding of demands for religious purification, but Eusebius, Lactantius and Constantine state that it was Galerius, not Diocletian, who was the prime supporter of the purge, and its greatest beneficiary. Galerius, even more devoted and passionate than Diocletian, saw political advantage in the politics of persecution. He was willing to break with a government policy of inaction on the issue. Antioch was Diocletian's primary residence from 299 to 302, while Galerius swapped places with his Augustus on the Middle and Lower Danube. Diocletian visited Egypt once, over the winter of 301–2, and issued a grain dole in Alexandria. Following some public disputes with Manicheans, Diocletian ordered that the leading followers of Mani be burnt alive along with their scriptures. In a 31 March 302 rescript from Alexandria, he declared that low-status Manicheans must be executed by the blade, and high-status Manicheans must be sent to work in the quarries of Proconnesus (Marmara Island, Turkey) or the mines of Phaeno in southern Palestine. All Manichean property was to be seized and deposited in the imperial treasury. Diocletian found much to be offended by in Manichean religion: its novelty, its alien origins, the way it corrupted the morals of the Roman race, and its inherent opposition to long-standing religious traditions. Manichaeanism was also supported by Persia at the time, compounding religious dissent with international politics. Excepting Persian support, the reasons he disliked Manichaeanism were at least equally applicable to his next target, Christianity. Diocletian returned to Antioch in the autumn of 302. He ordered that the deacon Romanus of Caesarea have his tongue removed for defying the order of the courts and interrupting official sacrifices. Romanus was then sent to prison, where he was executed on 17 November 303. 
Diocletian believed that Romanus of Caesarea was arrogant, and he left the city for Nicomedia in the winter, accompanied by Galerius. According to Lactantius, Diocletian and Galerius entered into an argument over imperial policy towards Christians while wintering at Nicomedia in 302. Diocletian argued that forbidding Christians from the bureaucracy and military would be sufficient to appease the gods, but Galerius pushed for extermination. The two men sought the advice of the oracle of Apollo at Didyma. The oracle responded that the impious on Earth hindered Apollo's ability to provide advice; Eusebius, rhetorically, records the oracle as referring instead to "the just on Earth". These impious, Diocletian was informed by members of the court, could only refer to the Christians of the empire. At the behest of his court, Diocletian acceded to demands for universal persecution. On 23 February 303, Diocletian ordered that the newly built church at Nicomedia be razed. He demanded that its scriptures be burned, and seized its precious stores for the treasury. The next day, Diocletian's first "Edict against the Christians" was published. The edict ordered the destruction of Christian scriptures and places of worship across the empire, and prohibited Christians from assembling for worship. Before the end of February, a fire destroyed part of the Imperial palace. Galerius convinced Diocletian that the culprits were Christians, conspirators who had plotted with the eunuchs of the palace. An investigation was commissioned, but no responsible party was found. Executions followed anyway, and the palace eunuchs Dorotheus and Gorgonius were executed. One individual, Peter Cubicularius, was stripped, raised high, and scourged. Salt and vinegar were poured in his wounds, and he was slowly boiled over an open flame. The executions continued until at least 24 April 303, when six individuals, including the bishop Anthimus, were decapitated. A second fire occurred sixteen days after the first. Galerius left the city for Rome, declaring Nicomedia unsafe. Diocletian would soon follow. Although further persecutionary edicts followed, compelling the arrest of the Christian clergy and universal acts of sacrifice, they were ultimately unsuccessful; most Christians escaped punishment, and pagans too were generally unsympathetic to the persecution. The martyrs' sufferings strengthened the resolve of their fellow Christians. Constantius and Maximian did not apply the later persecutionary edicts, and left the Christians of the West unharmed. Galerius rescinded the edict in 311, announcing that the persecution had failed to bring Christians back to traditional religion. The temporary apostasy of some Christians, and the surrendering of scriptures, during the persecution played a major role in the subsequent Donatist controversy. Within twenty-five years of the persecution's inauguration, the Christian Emperor Constantine would rule the empire alone. He would reverse the consequences of the edicts, and return all confiscated property to Christians. Under Constantine's rule, Christianity would become the empire's preferred religion. Diocletian was demonized by his Christian successors: Lactantius intimated that Diocletian's ascendancy heralded the apocalypse, and in Serbian mythology, Diocletian is remembered as Dukljan, the adversary of God. Diocletian entered the city of Rome in the early winter of 303.
On 20 November, he celebrated, with Maximian, the twentieth anniversary of his reign ("vicennalia"), the tenth anniversary of the tetrarchy ("decennalia"), and a triumph for the war with Persia. Diocletian soon grew impatient with the city, as the Romans acted towards him with what Edward Gibbon, following Lactantius, calls "licentious familiarity". The Roman people did not give enough deference to his supreme authority; they expected him to act the part of an aristocratic ruler, not a monarchic one. On 20 December 303, Diocletian cut short his stay in Rome and left for the north. He did not even perform the ceremonies investing him with his ninth consulate; he did them in Ravenna on 1 January 304 instead. There are suggestions in the "Panegyrici Latini" and Lactantius' account that Diocletian arranged plans for his and Maximian's future retirement from power while in Rome. Maximian, according to these accounts, swore to uphold Diocletian's plan in a ceremony in the Temple of Jupiter. From Ravenna, Diocletian left for the Danube. There, possibly in Galerius' company, he took part in a campaign against the Carpi. He contracted a minor illness while on campaign, but his condition quickly worsened and he chose to travel in a litter. In the late summer he left for Nicomedia. On 20 November 304, he appeared in public to dedicate the opening of the circus beside his palace. He collapsed soon after the ceremonies. Over the winter of 304–5 he kept within his palace at all times. Rumours alleging that Diocletian's death was merely being kept secret until Galerius could come to assume power spread through the city. On 13 December, it appeared that he had finally died. The city was sent into mourning, from which it recovered after public declarations that Diocletian was still alive. When Diocletian reappeared in public on 1 March 305, he was emaciated and barely recognizable. Galerius arrived in the city later in March. According to Lactantius, he came armed with plans to reconstitute the tetrarchy, force Diocletian to step down, and fill the Imperial office with men compliant to his will. Through coercion and threats, he eventually convinced Diocletian to comply with his plan. Lactantius also claims that he had done the same to Maximian at Sirmium. On 1 May 305, Diocletian called an assembly of his generals, traditional companion troops, and representatives from distant legions. They met at the same hill outside Nicomedia where Diocletian had been proclaimed emperor. In front of a statue of Jupiter, his patron deity, Diocletian addressed the crowd. With tears in his eyes, he told them of his weakness, his need for rest, and his will to resign. He declared that he needed to pass the duty of empire on to someone stronger. He thus became the first Roman emperor to voluntarily abdicate his title. Most in the crowd believed they knew what would follow; Constantine and Maxentius, the only adult sons of a reigning emperor, men who had long been preparing to succeed their fathers, would be granted the title of caesar. Constantine had travelled through Palestine at the right hand of Diocletian, and was present at the palace in Nicomedia in 303 and 305. It is likely that Maxentius received the same treatment. In Lactantius' account, when Diocletian announced that he was to resign, the entire crowd turned to face Constantine. It was not to be: Severus and Maximinus were declared caesars. Maximinus appeared and took Diocletian's robes. On the same day, Severus received his robes from Maximian in Milan.
Constantius succeeded Maximian as augustus of the West, but Constantine and Maxentius were entirely ignored in the transition of power. This did not bode well for the future security of the tetrarchic system. Diocletian retired to his homeland, Dalmatia. He moved into the expansive Diocletian's Palace, a heavily fortified compound located by the small town of Spalatum on the shores of the Adriatic Sea, and near the large provincial administrative center of Salona. The palace is preserved in great part to this day and forms the historic core of Split, the second-largest city of modern Croatia. Maximian retired to villas in Campania or Lucania. Their homes were distant from political life, but Diocletian and Maximian were close enough to remain in regular contact with each other. Galerius assumed the consular "fasces" in 308 with Diocletian as his colleague. In the autumn of 308, Galerius again conferred with Diocletian at Carnuntum (Petronell-Carnuntum, Austria). Diocletian and Maximian were both present on 11 November 308, to see Galerius appoint Licinius to be augustus in place of Severus, who had died at the hands of Maxentius. He ordered Maximian, who had attempted to return to power after his retirement, to step down permanently. At Carnuntum people begged Diocletian to return to the throne, to resolve the conflicts that had arisen through Constantine's rise to power and Maxentius' usurpation. Diocletian's reply: "If you could show the cabbage that I planted with my own hands to your emperor, he definitely wouldn't dare suggest that I replace the peace and happiness of this place with the storms of a never-satisfied greed." He lived on for four more years, spending his days in his palace gardens. He saw his tetrarchic system fail, torn by the selfish ambitions of his successors. He heard of Maximian's third claim to the throne, his forced suicide, and his "damnatio memoriae". In his own palace, statues and portraits of his former companion emperor were torn down and destroyed. Deep in despair and illness, Diocletian may have committed suicide. He died on 3 December 312. Diocletian saw his work as that of a restorer, a figure of authority whose duty it was to return the empire to peace, to recreate stability and justice where barbarian hordes had destroyed it. He arrogated, regimented and centralized political authority on a massive scale. In his policies, he enforced an Imperial system of values on diverse and often unreceptive provincial audiences. In the Imperial propaganda from the period, recent history was perverted and minimized in the service of the theme of the tetrarchs as "restorers". Aurelian's achievements were ignored, the revolt of Carausius was backdated to the reign of Gallienus, and it was implied that the tetrarchs engineered Aurelian's defeat of the Palmyrenes; the period between Gallienus and Diocletian was effectively erased. The history of the empire before the tetrarchy was portrayed as a time of civil war, savage despotism, and imperial collapse. In those inscriptions that bear their names, Diocletian and his companions are referred to as "restorers of the whole world", men who succeeded in "defeating the nations of the barbarians, and confirming the tranquility of their world". Diocletian was written up as the "founder of eternal peace". The theme of restoration was conjoined to an emphasis on the uniqueness and accomplishments of the tetrarchs themselves. 
The cities where emperors lived frequently in this period—Milan, Trier, Arles, Sirmium, Serdica, Thessaloniki, Nicomedia and Antioch—were treated as alternate imperial seats, to the exclusion of Rome and its senatorial elite. A new style of ceremony was developed, emphasizing the distinction of the emperor from all other persons. The quasi-republican ideals of Augustus' "primus inter pares" were abandoned for all but the tetrarchs themselves. Diocletian took to wearing a gold crown and jewels, and forbade the use of purple cloth to all but the emperors. His subjects were required to prostrate themselves in his presence ("adoratio"); the most fortunate were allowed the privilege of kissing the hem of his robe ("proskynesis", προσκύνησις). Circuses and basilicas were designed to keep the face of the emperor perpetually in view, and always in a seat of authority. The emperor became a figure of transcendent authority, a man beyond the grip of the masses. His every appearance was stage-managed. This style of presentation was not new—many of its elements were first seen in the reigns of Aurelian and Severus—but it was only under the tetrarchs that it was refined into an explicit system. In keeping with his move from an ideology of republicanism to one of autocracy, Diocletian's council of advisers, his "consilium", differed from those of earlier emperors. He destroyed the Augustan illusion of imperial government as a cooperative affair between emperor, army, and senate. In its place he established an effectively autocratic structure, a shift later epitomized in the institution's name: it would be called a "consistorium", not a council. Diocletian regulated his court by distinguishing separate departments ("scrinia") for different tasks. From this structure came the offices of different "magistri", like the "magister officiorum" ("Master of Offices"), and associated secretariats. These were men suited to dealing with petitions, requests, correspondence, legal affairs, and foreign embassies. Within his court Diocletian maintained a permanent body of legal advisers, men with significant influence on his re-ordering of juridical affairs. There were also two finance ministers, dealing with the separate bodies of the public treasury and the private domains of the emperor, and the praetorian prefect, the most significant of these officials. Diocletian's reduction of the Praetorian Guards to the level of a simple city garrison for Rome lessened the military powers of the prefect – although a prefect like Asclepiodotus was still a trained general – but the office retained much civil authority. The prefect kept a staff of hundreds and managed affairs in all segments of government: in taxation, administration, jurisprudence, and minor military commands, the praetorian prefect was often second only to the emperor himself. Altogether, Diocletian effected a large increase in the number of bureaucrats at the government's command; Lactantius was to claim that there were now more men using tax money than there were paying it. The historian Warren Treadgold estimates that under Diocletian the number of men in the civil service doubled from 15,000 to 30,000. The classicist Roger S. Bagnall estimated that there was one bureaucrat for every 5,000–10,000 people in Egypt, based on 400 or 800 bureaucrats for 4 million inhabitants (no one knows the population of the province in 300 AD; Strabo 300 years earlier put it at 7.5 million, excluding Alexandria).
(By comparison, the ratio in 12th-century Song dynasty China was one bureaucrat for every 15,000 people.) Jones estimated 30,000 bureaucrats for an empire of 50–65 million inhabitants, which works out to approximately 1,667 or 2,167 inhabitants per imperial official as averages empire-wide. The actual numbers of officials and ratios per inhabitant varied, of course, per diocese depending on the number of provinces and population within a diocese. Provincial and diocesan paid officials (there were unpaid supernumeraries) numbered about 13–15,000 based on their staff establishments as set by law. The other 50% were with the emperor(s) in his or their "comitatus", with the praetorian prefects, with the grain supply officials in the capital (later, the capitals, Rome and Constantinople), Alexandria, and Carthage, and with officials from the central offices located in the provinces. To avoid the possibility of local usurpations, to facilitate a more efficient collection of taxes and supplies, and to ease the enforcement of the law, Diocletian doubled the number of provinces from fifty to almost one hundred. The provinces were grouped into twelve dioceses, each governed by an appointed official called a "vicarius", or "deputy of the praetorian prefects". Some of the provincial divisions required revision, and were modified either soon after 293 or early in the fourth century. Rome herself (including her environs, as defined by a perimeter of fixed radius around the city itself) was not under the authority of the praetorian prefect, as she was to be administered by a city prefect of senatorial rank – the sole prestigious post with actual power reserved exclusively for senators, except for some governors in Italy with the titles of corrector and the proconsuls of Asia and Africa. The dissemination of imperial law to the provinces was facilitated under Diocletian's reign, because Diocletian's reform of the Empire's provincial structure meant that there were now a greater number of governors ("praesides") ruling over smaller regions and smaller populations. Diocletian's reforms shifted the governors' main function to that of the presiding official in the lower courts: whereas in the early Empire military and judicial functions had been the responsibility of the governor, and procurators had supervised taxation, under the new system "vicarii" and governors were responsible for justice and taxation, and a new class of "duces" ("dukes"), acting independently of the civil service, had military command. These dukes sometimes administered two or three of the new provinces created by Diocletian, and had forces ranging from two thousand to more than twenty thousand men. In addition to their roles as judges and tax collectors, governors were expected to maintain the postal service ("cursus publicus") and ensure that town councils fulfilled their duties. This curtailment of governors' powers as the Emperors' representatives may have lessened the political dangers of an all-too-powerful class of Imperial delegates, but it also severely limited governors' ability to oppose local landed elites, especially those of senatorial status, who, although with reduced opportunities for office holding, retained wealth, social prestige, and personal connections – especially in relatively peaceful regions without a great military presence. On one occasion, Diocletian had to exhort a proconsul of Africa not to fear the consequences of treading on the toes of the local magnates of senatorial rank.
If a governor of senatorial rank himself felt these pressures, one can imagine the difficulties faced by a mere "praeses". That accounts for the strained relationship between the central power and local elites: sometime during 303, an attempted military sedition in Seleucia Pieria and Antioch led Diocletian to exact a bloody retribution on both cities by putting to death a number of their council members for failing their duties of keeping order in their jurisdiction. As with most emperors, much of Diocletian's daily routine rotated around legal affairs—responding to appeals and petitions, and delivering decisions on disputed matters. Rescripts, authoritative interpretations issued by the emperor in response to demands from disputants in both public and private cases, were a common duty of second- and third-century emperors. Diocletian was awash in paperwork, and was nearly incapable of delegating his duties. It would have been seen as a dereliction of duty to ignore them. In the "nomadic" imperial courts of the later Empire, one can track the progress of the imperial retinue through the locations from which particular rescripts were issued – the presence of the Emperor was what allowed the system to function. Whenever the imperial court settled in one of the capitals, there was a glut of petitions, as in late 294 in Nicomedia, where Diocletian kept winter quarters. Admittedly, Diocletian's praetorian prefects—Afranius Hannibalianus, Julius Asclepiodotus, and Aurelius Hermogenianus—aided in regulating the flow and presentation of such paperwork, but the deep legalism of Roman culture kept the workload heavy. Emperors in the forty years preceding Diocletian's reign had not managed these duties so effectively, and their output in attested rescripts is low. Diocletian, by contrast, was prodigious in his affairs: there are around 1,200 rescripts in his name still surviving, and these probably represent only a small portion of the total issue. The sharp increase in the number of edicts and rescripts produced under Diocletian's rule has been read as evidence of an ongoing effort to realign the whole Empire on terms dictated by the imperial center. Under the governance of the jurists Gregorius, Aurelius Arcadius Charisius, and Hermogenianus, the imperial government began issuing official books of precedent, collecting and listing all the rescripts that had been issued from the reign of Hadrian (r. 117–38) to the reign of Diocletian. The Codex Gregorianus includes rescripts up to 292, which the Codex Hermogenianus updated with a comprehensive collection of rescripts issued by Diocletian in 293 and 294. Although the very act of codification was a radical innovation, given the precedent-based design of the Roman legal system, the jurists were generally conservative, and constantly looked to past Roman practice and theory for guidance. They were probably given more free rein over their codes than the later compilers of the "Codex Theodosianus" (438) and "Codex Justinianus" (529) would have. Gregorius and Hermogenianus' codices lack the rigid structuring of later codes, and were not published in the name of the emperor, but in the names of their compilers. Their official character, however, was clear in that both collections were subsequently acknowledged by courts as authoritative records of imperial legislation up to the date of their publication and regularly updated. After Diocletian's reform of the provinces, governors were called "iudex", or judge.
The governor became responsible for his decisions first to his immediate superiors, and then to the more distant office of the emperor. It was most likely at this time that judicial records became verbatim accounts of what was said in trial, making it easier to determine bias or improper conduct on the part of the governor. With these records and the Empire's universal right of appeal, Imperial authorities probably had a great deal of power to enforce behavior standards for their judges. In spite of Diocletian's attempts at reform, the provincial restructuring was far from clear, especially when citizens appealed the decisions of their governors. Proconsuls, for example, were often both judges of first instance and appeal, and the governors of some provinces took appellate cases from their neighbors. It soon became impossible to avoid taking some cases to the emperor for arbitration and judgment. Diocletian's reign marks the end of the classical period of Roman law. Where Diocletian's system of rescripts shows an adherence to classical tradition, Constantine's law is full of Greek and eastern influences. It is archaeologically difficult to distinguish Diocletian's fortifications from those of his successors and predecessors. The Devil's Dykes, for example, the Danubian earthworks traditionally attributed to Diocletian, cannot even be securely dated to a particular century. The most that can be said about built structures under Diocletian's reign is that he rebuilt and strengthened forts at the Upper Rhine frontier (where he followed the works built under Probus along the Lake Constance-Basel and the Rhine–Iller–Danube line), on the Danube – where a new line of forts on the far side of the river, the "Ripa Sarmatica", was added to older, rehabilitated fortresses – in Egypt, and on the frontier with Persia. Beyond that, much discussion is speculative, and reliant on the broad generalizations of written sources. Diocletian and the tetrarchs had no consistent plan for frontier advancement, and records of raids and forts built across the frontier are likely to indicate only temporary claims. The "Strata Diocletiana", built after the Persian Wars, which ran from the Euphrates north of Palmyra south towards northeast Arabia in the general vicinity of Bostra, is the classic Diocletianic frontier system, consisting of an outer road followed by tightly spaced forts – defensible hard-points manned by small garrisons – followed by further fortifications in the rear. In an attempt to resolve the difficulty and slowness of transmitting orders to the frontier, the new capitals of the tetrarchic era were all much closer to the empire's frontiers than Rome had been: Trier sat on the Moselle, a tributary of the Rhine; Sirmium and Serdica were close to the Danube; Thessaloniki was on the route leading eastward; and Nicomedia and Antioch were important points in dealings with Persia. Lactantius criticized Diocletian for an excessive increase in troop sizes, declaring that "each of the four [tetrarchs] strove to have a far larger number of troops than previous emperors had when they were governing the state alone". The fifth-century pagan Zosimus, by contrast, praised Diocletian for keeping troops on the borders, rather than keeping them in the cities, as Constantine was held to have done.
Both these views had some truth to them, despite the biases of their authors: Diocletian and the tetrarchs did greatly expand the army, and the growth was mostly in frontier regions, where the increased effectives of the new Diocletianic legions seem to have been mostly spread across a network of strongholds. Nevertheless, it is difficult to establish the precise details of these shifts given the weakness of the sources. The army expanded to about 580,000 men from a 285 strength of 390,000, of which 310,000 men were stationed in the East, most of whom manned the Persian frontier. The navy's forces increased from approximately 45,000 men to approximately 65,000 men. Diocletian's expansion of the army and civil service meant that the empire's tax burden grew. Since military upkeep took the largest portion of the imperial budget, any reforms here would be especially costly. The proportion of the adult male population, excluding slaves, serving in the army increased from roughly 1 in 25 to 1 in 15, an increase judged excessive by some modern commentators. Official troop allowances were kept to low levels, and the mass of troops often resorted to extortion or the taking of civilian jobs. Arrears became the norm for most troops. Many were even given payment in kind in place of their salaries. Were he unable to pay for his enlarged army, there would likely be civil conflict, potentially open revolt. Diocletian was led to devise a new system of taxation. In the early empire (30 BC – AD 235) the Roman government paid for what it needed in gold and silver. The coinage was stable. Requisition, forced purchase, was used to supply armies on the march. During the third century crisis (235–285), the government resorted to requisition rather than payment in debased coinage, since it could never be sure of the value of money. Requisition was nothing more or less than seizure. Diocletian made requisition into a tax. He introduced an extensive new tax system based on heads ("capita") and land ("iugera") – with one iugerum equal to approximately 0.65 acres – and tied to a new, regular census of the empire's population and wealth. Census officials traveled throughout the empire, assessed the value of labor and land for each landowner, and joined the landowners' totals together to make citywide totals of "capita" and "iuga". The "iugum" was not a consistent measure of land, but varied according to the type of land and crop, and the amount of labor necessary for sustenance. The "caput" was not consistent either: women, for instance, were often valued at half a "caput", and sometimes at other values. Cities provided animals, money, and manpower in proportion to their "capita", and grain in proportion to their "iuga". Most taxes were due each year on 1 September, and levied from individual landowners by "decuriones" (decurions). These decurions, analogous to city councilors, were responsible for paying from their own pocket what they failed to collect. Diocletian's reforms also increased the number of financial officials in the provinces: more "rationales" and "magistri privatae" are attested under Diocletian's reign than before. These officials represented the interests of the fisc, which collected taxes in gold, and the Imperial properties. Fluctuations in the value of the currency made collection of taxes in kind the norm, although these could be converted into coin. Rates shifted to take inflation into account. In 296, Diocletian issued an edict reforming census procedures.
This edict introduced a general five-year census for the whole empire, replacing prior censuses that had operated at different speeds throughout the empire. The new censuses would keep up with changes in the values of "capita" and "iuga". Italy, which had long been exempt from taxes, was included in the tax system from 290/291 as a diocese. The city of Rome itself, however, remained exempt; the "regions" (i.e., provinces) south of Rome (generally called "suburbicarian", as opposed to the northern "annonaria" region) seem to have been relatively less taxed, in what probably was a sop offered to the great senatorial families and their landed properties. Diocletian's edicts emphasized the common liability of all taxpayers. Records of all taxes were made public. The position of "decurion", member of the city council, had been an honor sought by wealthy aristocrats and the middle classes who displayed their wealth by paying for city amenities and public works. Decurions were made liable for any shortfall in the amount of tax collected. Many tried to find ways to escape the obligation. Aurelian's attempt to reform the currency had failed; the denarius was dead. Diocletian restored the three-metal coinage and issued better quality pieces. The new system consisted of five coins: the "aureus"/"solidus", a gold coin weighing, like its predecessors, one-sixtieth of a pound; the "argenteus", a coin weighing one ninety-sixth of a pound and containing ninety-five percent pure silver; the "follis", sometimes referred to as the "laureatus" A, a copper coin with added silver struck at the rate of thirty-two to the pound; the "radiatus", a small copper coin struck at the rate of 108 to the pound, with no added silver; and a coin known today as the "laureatus" B, a smaller copper coin struck at the rate of 192 to the pound. Since the nominal values of these new issues were lower than their intrinsic worth as metals, the state was minting these coins at a loss. This practice could be sustained only by requisitioning precious metals from private citizens in exchange for state-minted coin (of a far lower value than the price of the precious metals requisitioned). By 301, however, the system was in trouble, strained by a new bout of inflation. Diocletian therefore issued his "Edict on Coinage", an act re-tariffing all debts so that the "nummus", the most common coin in circulation, would be worth half as much. In the edict, preserved in an inscription from the city of Aphrodisias in Caria (near Geyre, Turkey), it was declared that all debts contracted before 1 September 301 must be repaid at the old standards, while all debts contracted after that date would be repaid at the new standards. It appears that the edict was made in an attempt to preserve the current price of gold and to keep the Empire's coinage on silver, Rome's traditional metal currency. This edict risked giving further momentum to inflationary trends, as had happened after Aurelian's currency reforms. The government's response was to issue a price freeze. The "Edict on Maximum Prices" ("Edictum De Pretiis Rerum Venalium") was issued two to three months after the coinage edict, somewhere between 20 November and 10 December 301. The best-preserved Latin inscription surviving from the Greek East, the edict survives in many versions, on materials as varied as wood, papyrus, and stone. 
In the edict, Diocletian declared that the current pricing crisis resulted from the unchecked greed of merchants, and had resulted in turmoil for the mass of common citizens. The language of the edict calls on the people's memory of their benevolent leaders, and exhorts them to enforce the provisions of the edict, and thereby restore perfection to the world. The edict goes on to list in detail over one thousand goods and accompanying retail prices not to be exceeded. Penalties are laid out for various pricing transgressions. In the most basic terms, the edict was ignorant of the law of supply and demand: it ignored the fact that prices might vary from region to region according to product availability, and it ignored the impact of transportation costs on the retail price of goods. In the judgment of the historian David Potter, the edict was "an act of economic lunacy". The fact that the edict began with a long rhetorical preamble betrays at the same time a moralizing stance as well as a weak grasp of economics – perhaps simply the wishful thinking that criminalizing a practice was enough to stop it. There is no consensus about how effectively the edict was enforced. Supposedly, inflation, speculation, and monetary instability continued, and a black market arose to trade in goods forced out of official markets. The edict's penalties were applied unevenly across the empire (some scholars believe they were applied only in Diocletian's domains), widely resisted, and eventually dropped, perhaps within a year of the edict's issue. Lactantius has written of the perverse accompaniments to the edict: of goods withdrawn from the market, of brawls over minute variations in price, of the deaths that came when its provisions were enforced. His account may be true, but it seems to modern historians exaggerated and hyperbolic, and the impact of the law is recorded in no other ancient source. Partly in response to economic pressures and in order to protect the vital functions of the state, Diocletian restricted social and professional mobility. Peasants became tied to the land in a way that presaged later systems of land tenure, and workers such as bakers, armourers, public entertainers and workers in the mint had their occupations made hereditary. Soldiers' children were also forcibly enrolled, something that followed spontaneous tendencies among the rank-and-file, but also expressed increasing difficulties in recruitment. The historian A.H.M. Jones observed that "It is perhaps Diocletian's greatest achievement that he reigned twenty-one years and then abdicated voluntarily, and spent the remaining years of his life in peaceful retirement." Diocletian was one of the few emperors of the third and fourth centuries to die naturally, and the first in the history of the empire to retire voluntarily. Once he retired, however, his tetrarchic system collapsed. Without the guiding hand of Diocletian, the empire fell into civil wars. Stability emerged after the defeat of Licinius by Constantine in 324. Under the Christian Constantine, Diocletian was maligned. Constantine's rule, however, validated Diocletian's achievements and the autocratic principle he represented: the borders remained secure, in spite of Constantine's large expenditure of forces during his civil wars; the bureaucratic transformation of Roman government was completed; and Constantine took Diocletian's court ceremonies and made them even more extravagant. Constantine ignored those parts of Diocletian's rule that did not suit him. 
Diocletian's policy of preserving a stable silver coinage was abandoned, and the gold "solidus" became the empire's primary currency instead. Diocletian's persecution of Christians was repudiated and changed to a policy of toleration and then favoritism. Christianity eventually became the official religion in 380. Constantine would claim to have the same close relationship with the Christian God as Diocletian claimed to have with Jupiter. Most importantly, Diocletian's tax system and administrative reforms lasted, with some modifications, until the advent of the Muslims in the 630s. The combination of state autocracy and state religion was instilled in much of Europe, particularly in the lands which adopted Orthodox Christianity. In addition to his administrative and legal impact on history, the Emperor Diocletian is considered to be the founder of the city of Split in modern-day Croatia. The city itself grew around the heavily fortified Diocletian's Palace, which the emperor had built in anticipation of his retirement. The "Era of Martyrs" (Latin: "anno martyrum" or AM), also known as the "Diocletian era" (Latin: "anno Diocletiani"), is a method of numbering years used by the Church of Alexandria beginning in the 4th century AD and by the Coptic Orthodox Church of Alexandria from the 5th century to the present. In this system of counting, the beginning of Diocletian's reign in 284 was used as the epoch, making Diocletian's first year in power Year 1 of that calendar. Western Christians were aware of this count but did not use it; Dionysius Exiguus replaced the anno Diocletiani era with his anno Domini era because he did not wish to continue the memory of a tyrant who persecuted Christians. The anno Domini era became dominant in the Latin West but was not used in the Greek East until modern times.
Domitian Domitian (24 October 51 – 18 September 96 AD) was Roman emperor from 81 to 96. He was the younger brother of Titus and the son of Vespasian, his two predecessors on the throne, and the last member of the Flavian dynasty. During his reign, the authoritarian nature of his rule put him at sharp odds with the Senate, whose powers he drastically curtailed. Domitian had a minor and largely ceremonial role during the reigns of his father and brother. After the death of his brother, Domitian was declared emperor by the Praetorian Guard. His 15-year reign was the longest since that of Tiberius. As emperor, Domitian strengthened the economy by revaluing the Roman coinage, expanded the border defenses of the empire, and initiated a massive building program to restore the damaged city of Rome. Significant wars were fought in Britain, where his general Agricola attempted to conquer Caledonia (Scotland), and in Dacia, where Domitian was unable to procure a decisive victory against king Decebalus. Domitian's government exhibited strong authoritarian characteristics; he saw himself as the new Augustus, an enlightened despot destined to guide the Roman Empire into a new era of brilliance. Religious, military, and cultural propaganda fostered a cult of personality, and by nominating himself perpetual censor, he sought to control public and private morals. As a consequence, Domitian was popular with the people and army, but considered a tyrant by members of the Roman Senate. 
Domitian's reign came to an end in 96 when he was assassinated by court officials. He was succeeded the same day by his advisor Nerva. After his death, Domitian's memory was condemned to oblivion by the Roman Senate, while senatorial authors such as Tacitus, Pliny the Younger, and Suetonius propagated the view of Domitian as a cruel and paranoid tyrant. Modern revisionists instead have characterized Domitian as a ruthless but efficient autocrat whose cultural, economic, and political programs provided the foundation of the peaceful second century. Domitian was born in Rome on 24 October 51, the youngest son of Titus Flavius Vespasianus (commonly known as Vespasian) and Flavia Domitilla Major. He had an older sister, Domitilla the Younger, and an older brother, also named Titus Flavius Vespasianus. Decades of civil war during the 1st century BC had contributed greatly to the demise of the old aristocracy of Rome, which a new Italian nobility gradually replaced in prominence during the early part of the 1st century. One such family, the Flavians, or "gens Flavia", rose from relative obscurity to prominence in just four generations, acquiring wealth and status under the emperors of the Julio-Claudian dynasty. Domitian's great-grandfather, Titus Flavius Petro, had served as a centurion under Pompey during Caesar's civil war. His military career ended in disgrace when he fled the battlefield at the Battle of Pharsalus in 48 BC. Nevertheless, Petro managed to improve his status by marrying the extremely wealthy Tertulla, whose fortune guaranteed the upward mobility of Petro's son Titus Flavius Sabinus I, Domitian's grandfather. Sabinus himself amassed further wealth and possible equestrian status through his services as tax collector in Asia and banker in Helvetia (modern Switzerland). By marrying Vespasia Polla he allied the Flavian family to the more prestigious "gens Vespasia", ensuring the elevation of his sons Titus Flavius Sabinus II and Vespasian to senatorial rank. The political career of Vespasian included the offices of quaestor, aedile, and praetor, and culminated in a consulship in 51, the year of Domitian's birth. As a military commander, Vespasian gained early renown by participating in the Roman invasion of Britain in 43. Nevertheless, ancient sources allege poverty for the Flavian family at the time of Domitian's upbringing, even claiming Vespasian had fallen into disrepute under the emperors Caligula (37–41) and Nero (54–68). Modern historians have refuted these claims, suggesting these stories later circulated under Flavian rule as part of a propaganda campaign to diminish success under the less reputable Emperors of the Julio-Claudian dynasty and to maximize achievements under Emperor Claudius (41–54) and his son Britannicus. By all appearances, the Flavians enjoyed high imperial favour throughout the 40s and 60s. While Titus received a court education in the company of Britannicus, Vespasian pursued a successful political and military career. Following a prolonged period of retirement during the 50s, he returned to public office under Nero, serving as proconsul of the Africa Province in 63, and accompanying the emperor during an official tour of Greece in 66. The same year the Jews of the Judaea Province revolted against the Roman Empire in what is now known as the First Jewish-Roman War. Vespasian was assigned to lead the Roman army against the insurgents, with Titus, who had completed his military education by this time, in charge of a legion. 
Of the three Flavian emperors, Domitian would rule the longest, despite the fact that his youth and early career were largely spent in the shadow of his older brother. Titus had gained military renown during the First Jewish–Roman War. After their father Vespasian became emperor in 69 following the civil war known as the Year of the Four Emperors, Titus held a great many offices, while Domitian received honours, but no responsibilities. By the time he was 16 years old, Domitian's mother and sister had long since died, while his father and brother were continuously active in the Roman military, commanding armies in Germania and Judaea. For Domitian, this meant that a significant part of his adolescence was spent in the absence of his near relatives. During the Jewish–Roman wars, he was likely taken under the care of his uncle Titus Flavius Sabinus II, at the time serving as city prefect of Rome, or possibly even Marcus Cocceius Nerva, a loyal friend of the Flavians and the future successor to Domitian. He received the education of a young man of the privileged senatorial class, studying rhetoric and literature. In his biography in the "Lives of the Twelve Caesars", Suetonius attests to Domitian's ability to quote important poets and writers such as Homer or Virgil on appropriate occasions, and describes him as a learned and educated adolescent, with elegant conversation. His first published works included poetry, as well as writings on law and administration. Unlike his brother Titus, Domitian was not educated at court. Whether he received formal military training is not recorded, but according to Suetonius, he displayed considerable marksmanship with the bow and arrow. A detailed description of Domitian's appearance and character is provided by Suetonius, who devotes a substantial part of his biography to his personality. Domitian was allegedly extremely sensitive regarding his baldness, which he disguised in later life by wearing wigs. According to Suetonius, he even wrote a book on the subject of hair care. With regard to Domitian's personality, however, the account of Suetonius alternates sharply between portraying Domitian as the emperor-tyrant, a man both physically and intellectually lazy, and the intelligent, refined personality drawn elsewhere. Historian Brian Jones concludes in "The Emperor Domitian" that assessing the true nature of Domitian's personality is inherently complicated by the bias of the surviving sources. Common threads nonetheless emerge from the available evidence. He appears to have lacked the natural charisma of his brother and father. He was prone to suspicion, displayed an odd, sometimes self-deprecating sense of humour, and often communicated in cryptic ways. This ambiguity of character was further exacerbated by his remoteness, and as he grew older, he increasingly displayed a preference for solitude, which may have stemmed from his isolated upbringing. Indeed, by the age of eighteen nearly all of his closest relatives had died through war or disease. Having spent the greater part of his early life in the twilight of Nero's reign, his formative years would have been strongly influenced by the political turmoil of the 60s, culminating with the civil war of 69, which brought his family to power. On 9 June 68, amid growing opposition of the Senate and the army, Nero committed suicide and with him the Julio-Claudian dynasty came to an end. 
Chaos ensued, leading to a year of brutal civil war known as the Year of the Four Emperors, during which the four most influential generals in the Roman Empire—Galba, Otho, Vitellius and Vespasian—successively vied for imperial power. News of Nero's death reached Vespasian as he was preparing to besiege the city of Jerusalem. Almost simultaneously the Senate had declared Galba, then governor of Hispania Tarraconensis (modern northern Spain), as Emperor of Rome. Rather than continue his campaign, Vespasian decided to await further orders and send Titus to greet the new Emperor. Before reaching Italy, Titus learnt that Galba had been murdered and replaced by Otho, the governor of Lusitania (modern Portugal). At the same time Vitellius and his armies in Germania had risen in revolt and prepared to march on Rome, intent on overthrowing Otho. Not wanting to risk being taken hostage by one side or the other, Titus abandoned the journey to Rome and rejoined his father in Judaea. Otho and Vitellius realized the potential threat posed by the Flavian faction. With four legions at his disposal, Vespasian commanded a strength of nearly 80,000 soldiers. His position in Judaea further granted him the advantage of being nearest to the vital province of Egypt, which controlled the grain supply to Rome. His brother Titus Flavius Sabinus II, as city prefect, commanded the entire city garrison of Rome. Tensions among the Flavian troops ran high but so long as either Galba or Otho remained in power, Vespasian refused to take action. When Otho was defeated by Vitellius at the First Battle of Bedriacum, the armies in Judaea and Egypt took matters into their own hands and declared Vespasian emperor on 1 July 69. Vespasian accepted and entered an alliance with Gaius Licinius Mucianus, the governor of Syria, against Vitellius. A strong force drawn from the Judaean and Syrian legions marched on Rome under the command of Mucianus, while Vespasian travelled to Alexandria, leaving Titus in charge of ending the Jewish rebellion. In Rome, Domitian was placed under house arrest by Vitellius, as a safeguard against Flavian aggression. Support for the old emperor waned as more legions around the empire pledged their allegiance to Vespasian. On 24 October 69, the forces of Vitellius and Vespasian (under Marcus Antonius Primus) met at the Second Battle of Bedriacum, which ended in a crushing defeat for the armies of Vitellius. In despair, Vitellius attempted to negotiate a surrender. Terms of peace, including a voluntary abdication, were agreed upon with Titus Flavius Sabinus II but the soldiers of the Praetorian Guard—the imperial bodyguard—considered such a resignation disgraceful and prevented Vitellius from carrying out the treaty. On the morning of 18 December, the emperor appeared to deposit the imperial insignia at the Temple of Concord but at the last minute retraced his steps to the Imperial palace. In the confusion, the leading men of the state gathered at Sabinus' house, proclaiming Vespasian as Emperor, but the multitude dispersed when Vitellian cohorts clashed with the armed escort of Sabinus, who was forced to retreat to the Capitoline Hill. During the night, he was joined by his relatives, including Domitian. The armies of Mucianus were nearing Rome but the besieged Flavian party did not hold out for longer than a day. On 19 December, Vitellianists burst onto the Capitol and in a skirmish, Sabinus was captured and executed. 
Domitian managed to escape by disguising himself as a worshipper of Isis and spent the night in safety with one of his father's supporters, Cornelius Primus. By the afternoon of 20 December, Vitellius was dead, his armies having been defeated by the Flavian legions. With nothing more to be feared, Domitian came forward to meet the invading forces; he was universally saluted by the title of "Caesar", and the mass of troops conducted him to his father's house. The following day, 21 December, the Senate proclaimed Vespasian emperor of the Roman Empire. Although the war had officially ended, a state of anarchy and lawlessness prevailed in the first days following the demise of Vitellius. Order was properly restored by Mucianus in early 70, but Vespasian did not enter Rome until September of that year. In the meantime, Domitian acted as the representative of the Flavian family in the Roman Senate. He received the title of "Caesar" and was appointed praetor with consular power. The ancient historian Tacitus describes Domitian's first speech in the Senate as brief and measured, at the same time noting his ability to elude awkward questions. Domitian's authority was merely nominal, however, foreshadowing what was to be his role for at least ten more years. By all accounts, Mucianus held the real power in Vespasian's absence, and he was careful to ensure that Domitian, still only eighteen years old, did not overstep the boundaries of his function. Mucianus also maintained strict control over the young Caesar's entourage, promoting away Flavian generals such as Arrius Varus and Antonius Primus and replacing them with more reliable men such as Arrecinus Clemens. Equally curtailed by Mucianus were Domitian's military ambitions. The civil war of 69 had severely destabilized the provinces, leading to several local uprisings such as the Batavian revolt in Gaul. Batavian auxiliaries of the Rhine legions, led by Gaius Julius Civilis, had rebelled with the aid of a faction of Treveri under the command of Julius Classicus. Seven legions were sent from Rome, led by Vespasian's brother-in-law Quintus Petillius Cerialis. Although the revolt was quickly suppressed, exaggerated reports of disaster prompted Mucianus to depart the capital with reinforcements of his own. Domitian eagerly sought the opportunity to attain military glory and joined the other officers with the intention of commanding a legion of his own. According to Tacitus, Mucianus was not keen on this prospect, but since he considered Domitian a liability in any capacity that was entrusted to him, he preferred to keep him close at hand rather than in Rome. When news arrived of Cerialis' victory over Civilis, Mucianus tactfully dissuaded Domitian from pursuing further military endeavours. Domitian then wrote to Cerialis personally, suggesting he hand over command of his army, but once again he was snubbed. With the return of Vespasian in late September, Domitian's political role was rendered all but obsolete, and he withdrew from government, devoting his time to arts and literature. Where his political and military career had ended in disappointment, Domitian's private affairs were more successful. In 70 Vespasian attempted to arrange a dynastic marriage between his youngest son and the daughter of Titus, Julia Flavia, but Domitian was adamant in his love for Domitia Longina, going so far as to persuade her husband, Lucius Aelius Lamia, to divorce her so that Domitian could marry her himself. 
Despite its initial recklessness, the alliance was very prestigious for both families. Domitia Longina was the younger daughter of Gnaeus Domitius Corbulo, a respected general and honoured politician who had distinguished himself for his leadership in Armenia. Following the failed Pisonian conspiracy against Nero in 65, he had been forced to commit suicide. The new marriage not only re-established ties to senatorial opposition, but also served the broader Flavian propaganda of the time, which sought to diminish Vespasian's political success under Nero. Instead connections to Claudius and Britannicus were emphasised, and Nero's victims, or those otherwise disadvantaged by him, rehabilitated. In 80, Domitia and Domitian's only attested son was born. It is not known what the boy's name was, but he died in childhood in 83. Shortly following his accession as Emperor, Domitian bestowed the honorific title of "Augusta" upon Domitia, while their son was deified, appearing as such on the reverse of coin types from this period. Nevertheless, the marriage appears to have faced a significant crisis in 83. For reasons unknown, Domitian briefly exiled Domitia, and then soon recalled her, either out of love or due to rumours that he was carrying on a relationship with his niece Julia Flavia. Jones argues that most likely he did so for her failure to produce an heir. By 84, Domitia had returned to the palace, where she lived for the remainder of Domitian's reign without incident. Little is known of Domitia's activities as Empress, or how much influence she wielded in Domitian's government, but it seems her role was limited. From Suetonius, we know that she at least accompanied the Emperor to the amphitheatre, while the Jewish writer Josephus speaks of benefits he received from her. It is not known whether Domitian had other children, but he did not marry again. Despite allegations by Roman sources of adultery and divorce, the marriage appears to have been happy. Prior to becoming Emperor, Domitian's role in the Flavian government was largely ceremonial. In June 71, Titus returned triumphant from the war in Judaea. Ultimately, the rebellion had claimed the lives of over 1 million people, a majority of whom were Jewish. The city and temple of Jerusalem were completely destroyed, its most valuable treasures carried off by the Roman army, and nearly 100,000 people were captured and enslaved. For his victory, the Senate awarded Titus a Roman triumph. On the day of the festivities, the Flavian family rode into the capital, preceded by a lavish parade that displayed the spoils of the war. The family procession was headed by Vespasian and Titus, while Domitian, riding a magnificent white horse, followed with the remaining Flavian relatives. Leaders of the Jewish resistance were executed in the Forum Romanum, after which the procession closed with religious sacrifices at the Temple of Jupiter. A triumphal arch, the Arch of Titus, was erected at the south-east entrance to the Forum to commemorate the successful end of the war. Yet the return of Titus further highlighted the comparative insignificance of Domitian, both militarily and politically. As the eldest and most experienced of Vespasian's sons, Titus shared tribunician power with his father, received seven consulships, the censorship, and was given command of the Praetorian Guard; powers that left no doubt he was the designated heir to the Empire. 
As a second son, Domitian held honorary titles, such as "Caesar" or "Princeps Iuventutis", and several priesthoods, including those of "augur", "pontifex", "frater arvalis", "magister frater arvalium", and "sacerdos collegiorum omnium", but no office with "imperium". He held six consulships during Vespasian's reign but only one of these, in 73, was an ordinary consulship. The other five were less prestigious suffect consulships, which he held in 71, 75, 76, 77 and 79 respectively, usually replacing his father or brother in mid-January. While ceremonial, these offices no doubt gained Domitian valuable experience in the Roman Senate, and may have contributed to his later reservations about its relevance. Under Vespasian and Titus, non-Flavians were virtually excluded from the important public offices. Mucianus himself all but disappeared from historical records during this time, and it is believed he died sometime between 75 and 77. Real power was unmistakably concentrated in the hands of the Flavian faction; the weakened Senate only maintained the facade of democracy. Because Titus effectively acted as co-emperor with his father, no abrupt change in Flavian policy occurred when Vespasian died on 23 June, 79. Titus assured Domitian that full partnership in the government would soon be his, but neither tribunician power nor "imperium" of any kind was conferred upon him during Titus' brief reign. Two major disasters struck during 79 and 80. On 24 August 79, Mount Vesuvius erupted, burying the surrounding cities of Pompeii and Herculaneum under metres of ash and lava; the following year, a fire broke out in Rome that lasted three days and that destroyed a number of important public buildings. Consequently, Titus spent much of his reign coordinating relief efforts and restoring damaged property. On 13 September 81 after barely two years in office, he unexpectedly died of fever during a trip to the Sabine territories. Ancient authors have implicated Domitian in the death of his brother, either by directly accusing him of murder, or implying he left the ailing Titus for dead, even alleging that during his lifetime, Domitian was openly plotting against his brother. It is difficult to assess the factual veracity of these statements given the known bias of the surviving sources. Brotherly affection was likely at a minimum, but this was hardly surprising, considering that Domitian had barely seen Titus after the age of seven. Whatever the nature of their relationship, Domitian seems to have displayed little sympathy when his brother lay dying, instead making for the Praetorian camp where he was proclaimed emperor. The following day, 14 September, the Senate confirmed Domitian's powers, granting tribunician power, the office of Pontifex Maximus, and the titles of "Augustus" ("venerable"), and "Pater Patriae" ("father of the country"). As Emperor, Domitian quickly dispensed with the republican facade his father and brother had maintained during their reign. By moving the centre of government (more or less formally) to the imperial court, Domitian openly rendered the Senate's powers obsolete. In his view, the Roman Empire was to be governed as a divine monarchy with himself as the benevolent despot at its head. In addition to exercising absolute political power, Domitian believed the emperor's role encompassed every aspect of daily life, guiding the Roman people as a cultural and moral authority. 
To usher in the new era, he embarked on ambitious economic, military and cultural programs with the intention of restoring the Empire to the splendour it had seen under the Emperor Augustus. Despite these grand designs, Domitian was determined to govern the Empire conscientiously and scrupulously. He became personally involved in all branches of the administration: edicts were issued governing the smallest details of everyday life and law, while taxation and public morals were rigidly enforced. According to Suetonius, the imperial bureaucracy never ran more efficiently than under Domitian, whose exacting standards and suspicious nature maintained historically low corruption among provincial governors and elected officials. Although he made no pretence regarding the significance of the Senate under his absolute rule, those senators he deemed unworthy were expelled from the Senate, and in the distribution of public offices he rarely favoured family members, a policy that stood in contrast to the nepotism practiced by Vespasian and Titus. Above all, however, Domitian valued loyalty and malleability in those he assigned to strategic posts, qualities he found more often in men of the equestrian order than in members of the Senate or his own family, whom he regarded with suspicion, and promptly removed from office if they disagreed with imperial policy. The reality of Domitian's autocracy was further highlighted by the fact that, more than any emperor since Tiberius, he spent significant periods of time away from the capital. Although the Senate's power had been in decline since the fall of the Republic, under Domitian the seat of power was no longer even in Rome, but rather wherever the Emperor was. Until the completion of the Flavian Palace on the Palatine Hill, the imperial court was situated at Alba or Circeii, and sometimes even farther afield. Domitian toured the European provinces extensively, and spent at least three years of his reign in Germania and Illyricum, conducting military campaigns on the frontiers of the Empire. For his personal use, he was active in constructing many monumental buildings, including the Villa of Domitian, a vast and sumptuous palace situated 20 km outside Rome in the Alban Hills. In Rome itself he built the Palace of Domitian on the Palatine Hill. Six other villas are linked with Domitian at Tusculum, Antium, Caieta, Circei, Anxur and Baiae. Only the one at Circei has been identified today; its remains can be visited by the Lago di Paola. The Stadium of Domitian was dedicated in 86 AD as a gift to the people of Rome, part of an Imperial building programme following the damage or destruction of most of the buildings on the Field of Mars by fire in 80 AD. It was Rome's first permanent venue for competitive athletics, and its site is today occupied by the Piazza Navona. Domitian's tendency towards micromanagement was nowhere more evident than in his financial policy. The question of whether Domitian left the Roman Empire in debt or with a surplus at the time of his death has been fiercely debated. The evidence points to a balanced economy for the greater part of Domitian's reign. Upon his accession he revalued the Roman currency dramatically. He increased the silver purity of the denarius from 90% to 98%, with the actual silver weight increasing from 2.87 grams to 3.26 grams. A financial crisis in 85 forced a devaluation of the silver purity and weight to 93.5% and 3.04 grams respectively. 
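These purity and weight figures can be read together with a back-of-the-envelope check (an illustration only; the assumption that the quoted silver weight equals the purity multiplied by the coin's total weight is mine, not a claim from the sources):
\[
\begin{aligned}
\text{under Vespasian and Titus:}\quad & 2.87\ \text{g} \div 0.90 \approx 3.19\ \text{g total weight}\\
\text{under Domitian, 82--85:}\quad & 3.26\ \text{g} \div 0.98 \approx 3.33\ \text{g total weight}\\
\text{after the devaluation of 85:}\quad & 3.04\ \text{g} \div 0.935 \approx 3.25\ \text{g total weight}
\end{aligned}
\]
Even on this rough reading, the devalued coin of 85 still carried more silver (3.04 grams) than the denarius Domitian had inherited (2.87 grams).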
Nevertheless, the new values were still higher than the levels that Vespasian and Titus had maintained during their reigns. Domitian's rigorous taxation policy ensured that this standard was sustained for the following eleven years. Coinage from this era displays a highly consistent degree of quality, including meticulous attention to Domitian's titulature and refined artwork on the reverse portraits. Jones estimates Domitian's annual income at more than 1.2 billion sestertii, of which over one-third would presumably have been spent maintaining the Roman army. The other major expense was the extensive reconstruction of Rome. At the time of Domitian's accession the city was still suffering from the damage caused by the Great Fire of 64, the civil war of 69 and the fire in 80. Much more than a renovation project, Domitian's building program was intended to be the crowning achievement of an Empire-wide cultural renaissance. Around fifty structures were erected, restored or completed, achievements second only to those of Augustus. Among the most important new structures were an odeon, a stadium, and an expansive palace on the Palatine Hill known as the Flavian Palace, which was designed by Domitian's master architect Rabirius. The most important building Domitian restored was the Temple of Jupiter on the Capitoline Hill, said to have been covered with a gilded roof. Among those completed were the Temple of Vespasian and Titus, the Arch of Titus and the Colosseum, to which he added a fourth level and finished the interior seating area. In order to appease the people of Rome, an estimated 135 million sestertii was spent on donatives, or "congiaria", throughout Domitian's reign. The Emperor also revived the practice of public banquets, which had been reduced to a simple distribution of food under Nero, while he invested large sums in entertainment and games. In 86 he founded the Capitoline Games, a quadrennial contest comprising athletic displays, chariot racing, and competitions for oratory, music and acting. Domitian himself supported the travel of competitors from all corners of the Empire to Rome and distributed the prizes. Innovations were also introduced into the regular gladiatorial games, such as naval contests, nighttime battles, and female and dwarf gladiator fights. Lastly, he added two new factions to the chariot races, Gold and Purple, to race against the existing White, Red, Green and Blue factions. The military campaigns undertaken during Domitian's reign were generally defensive in nature, as the Emperor rejected the idea of expansionist warfare. His most significant military contribution was the development of the Limes Germanicus, which encompassed a vast network of roads, forts and watchtowers constructed along the Rhine river to defend the Empire. Nevertheless, several important wars were fought in Gaul, against the Chatti, and across the Danube frontier against the Suebi, the Sarmatians, and the Dacians. The conquest of Britain continued under the command of Gnaeus Julius Agricola, who expanded the Roman Empire as far as Caledonia, or modern-day Scotland. Domitian also founded a new legion in 82, the Legio I Minervia, to fight against the Chatti. Domitian is also named in the easternmost evidence of a Roman presence, a rock inscription near Boyukdash mountain in present-day Azerbaijan. As judged by the carved titles of Caesar, Augustus and Germanicus, the related march took place between 84 and 96 AD. 
Domitian's administration of the Roman army was characterized by the same fastidious involvement he exhibited in other branches of the government. His competence as a military strategist was criticized by his contemporaries however. Although he claimed several triumphs, these were largely propaganda manoeuvres. Tacitus derided Domitian's victory against the Chatti as a "mock triumph", and criticized his decision to retreat in Britain following the conquests of Agricola. Nevertheless, Domitian appears to have been very popular among the soldiers, spending an estimated three years of his reign among the army on campaigns—more than any emperor since Augustus—and raising their pay by one-third. While the army command may have disapproved of his tactical and strategic decisions, the loyalty of the common soldier was unquestioned. Once Emperor, Domitian immediately sought to attain his long delayed military glory. As early as 82, or possibly 83, he went to Gaul, ostensibly to conduct a census, and suddenly ordered an attack on the Chatti. For this purpose, a new legion was founded, Legio I Minervia, which constructed some 75 kilometres (46 mi) of roads through Chattan territory to uncover the enemy's hiding places. Although little information survives of the battles fought, enough early victories were apparently achieved for Domitian to be back in Rome by the end of 83, where he celebrated an elaborate triumph and conferred upon himself the title of "Germanicus". Domitian's supposed victory was much scorned by ancient authors, who described the campaign as "uncalled for", and a "mock triumph". The evidence lends some credence to these claims, as the Chatti would later play a significant role during the revolt of Saturninus in 89. One of the most detailed reports of military activity under the Flavian dynasty was written by Tacitus, whose biography of his father-in-law Gnaeus Julius Agricola largely concerns the conquest of Britain between 77 and 84. Agricola arrived c. 77 as governor of Roman Britain, immediately launching campaigns into Caledonia (modern Scotland). In 82 Agricola crossed an unidentified body of water and defeated peoples unknown to the Romans until then. He fortified the coast facing Ireland, and Tacitus recalls that his father-in-law often claimed the island could be conquered with a single legion and a few auxiliaries. He had given refuge to an exiled Irish king whom he hoped he might use as the excuse for conquest. This conquest never happened, but some historians believe that the crossing referred to was in fact a small-scale exploratory or punitive expedition to Ireland. Turning his attention from Ireland, the following year Agricola raised a fleet and pushed beyond the Forth into Caledonia. To aid the advance, a large legionary fortress was constructed at Inchtuthil. In the summer of 84, Agricola faced the armies of the Caledonians, led by Calgacus, at the Battle of Mons Graupius. Although the Romans inflicted heavy losses on the enemy, two-thirds of the Caledonian army escaped and hid in the Scottish marshes and Highlands, ultimately preventing Agricola from bringing the entire British island under his control. In 85, Agricola was recalled to Rome by Domitian, having served for more than six years as governor, longer than normal for consular legates during the Flavian era. Tacitus claims that Domitian ordered his recall because Agricola's successes outshone the Emperor's own modest victories in Germania. 
The relationship between Agricola and the Emperor is unclear: on the one hand, Agricola was awarded triumphal decorations and a statue; on the other, he never again held a civil or military post in spite of his experience and renown. He was offered the governorship of the province of Africa but declined it, either due to ill health or, as Tacitus claims, the machinations of Domitian. Not long after Agricola's recall from Britain, the Roman Empire entered into war with the Kingdom of Dacia in the East. Reinforcements were needed, and in 87 or 88, Domitian ordered a large-scale strategic withdrawal of troops in the British province. The fortress at Inchtuthil was dismantled and the Caledonian forts and watchtowers abandoned, moving the Roman frontier some 120 kilometres (75 mi) further south. The army command may have resented Domitian's decision to retreat, but to him the Caledonian territories never represented anything more than a loss to the Roman treasury. The most significant threat the Roman Empire faced during the reign of Domitian arose from the northern provinces of Illyricum, where the Suebi, the Sarmatians and the Dacians continuously harassed Roman settlements along the Danube river. Of these, the Sarmatians and the Dacians posed the most formidable threat. In approximately 84 or 85 the Dacians, led by King Decebalus, crossed the Danube into the province of Moesia, wreaking havoc and killing the Moesian governor Oppius Sabinus. Domitian quickly launched a counteroffensive, personally travelling to the region accompanied by a large force commanded by his praetorian prefect Cornelius Fuscus. Fuscus successfully drove the Dacians back across the border in mid-85, prompting Domitian to return to Rome and celebrate his second triumph. The victory proved short-lived, however: as early as 86, Fuscus embarked on an ill-fated expedition into Dacia. Fuscus was killed, and the battle standard of the Praetorian Guard was lost. The loss of the battle standard, or "aquila", was indicative of a crushing defeat and a serious affront to Roman national pride. Domitian returned to Moesia in August 86. He divided the province into Lower Moesia and Upper Moesia, and transferred three additional legions to the Danube. In 87, the Romans invaded Dacia once more, this time under the command of Tettius Julianus, and finally defeated Decebalus in late 88 at the same site where Fuscus had previously perished. An attack on the Dacian capital Sarmizegetusa was forestalled when new troubles arose on the German frontier in 89. In order to avoid having to fight a war on two fronts, Domitian agreed to terms of peace with Decebalus, negotiating free access of Roman troops through the Dacian region while granting Decebalus an annual subsidy of 8 million sesterces. Contemporary authors severely criticized this treaty, which was considered shameful to the Romans and left the deaths of Sabinus and Fuscus unavenged. For the remainder of Domitian's reign Dacia remained a relatively peaceful client kingdom, but Decebalus used the Roman money to fortify his defenses. Domitian probably wanted a new war against the Dacians, and reinforced Upper Moesia with two more cavalry units brought from Syria and with at least five cohorts brought from Pannonia. Trajan continued Domitian's policy and added two more units to the auxiliary forces of Upper Moesia, and then he used the build-up of troops for his Dacian wars. Eventually the Romans achieved a decisive victory against Decebalus in 106. 
Again, the Roman army sustained heavy losses, but Trajan succeeded in capturing Sarmizegetusa and, importantly, annexed the Dacian gold and silver mines. Domitian firmly believed in the traditional Roman religion, and personally saw to it that ancient customs and morals were observed throughout his reign. In order to justify the divine nature of the Flavian rule, Domitian emphasized connections with the chief deity Jupiter, perhaps most significantly through the impressive restoration of the Temple of Jupiter on the Capitoline Hill. A small chapel dedicated to "Jupiter Conservator" was also constructed near the house where Domitian had fled to safety on 20 December 69. Later in his reign, he replaced it with a more expansive building, dedicated to "Jupiter Custos". The goddess he worshipped the most zealously, however, was Minerva. Not only did he keep a personal shrine dedicated to her in his bedroom, but she also regularly appeared on his coinage, in four different attested reverse types, and he founded a legion, Legio I Minervia, in her name. Domitian also revived the practice of the imperial cult, which had fallen somewhat out of use under Vespasian. Significantly, his first act as Emperor was the deification of his brother Titus. Upon their deaths, his infant son and his niece Julia Flavia were likewise enrolled among the gods. With regard to the emperor himself as a religious figure, both Suetonius and Cassius Dio allege that Domitian officially gave himself the title of "Dominus et Deus" ("Lord and God"). However, he rejected the title of "Dominus" during his reign, and since he issued no official documentation or coinage to this effect, historians such as Brian Jones contend that such phrases were addressed to Domitian by flatterers who wished to earn favors from the emperor. To foster the worship of the imperial family, he erected a dynastic mausoleum on the site of Vespasian's former house on the Quirinal, and completed the Temple of Vespasian and Titus, a shrine dedicated to the worship of his deified father and brother. To memorialize the military triumphs of the Flavian family, he ordered the construction of the Templum Divorum and the Templum Fortuna Redux, and completed the Arch of Titus. Construction projects such as these constituted only the most visible part of Domitian's religious policy, which also concerned itself with the fulfilment of religious law and public morals. In 85, he nominated himself perpetual censor, the office that held the task of supervising Roman morals and conduct. Once again, Domitian acquitted himself of this task dutifully, and with care. He renewed the "Lex Iulia de Adulteriis Coercendis", under which adultery was punishable by exile. From the list of jurors he struck an equestrian who had divorced his wife and taken her back, while an ex-quaestor was expelled from the Senate for acting and dancing. Domitian also heavily prosecuted corruption among public officials, removing jurors if they accepted bribes and rescinding legislation when a conflict of interest was suspected. He ensured that libellous writings, especially those directed against himself, were punishable by exile or death. Actors were likewise regarded with suspicion, as their performances provided an opportunity for satire at the expense of the government. Consequently, he forbade mimes from appearing on stage in public. In 87, Vestal Virgins were found to have broken their sacred vows of lifelong public chastity. 
As the Vestals were regarded as daughters of the community, this offense essentially constituted incest. Accordingly, those found guilty of any such transgression were condemned to death, either by a manner of their choosing, or according to the ancient fashion, which dictated that Vestals should be buried alive. Foreign religions were tolerated insofar as they did not interfere with public order, or could be assimilated with the traditional Roman religion. The worship of Egyptian deities in particular flourished under the Flavian dynasty, to an extent not seen again until the reign of Commodus. Veneration of Serapis and Isis, who were identified with Jupiter and Minerva respectively, was especially prominent. 4th century writings by Eusebius maintain that Jews and Christians were heavily persecuted toward the end of Domitian's reign. The Book of Revelation is thought by some to have been written during this period. Although Jews were heavily taxed, no contemporary authors mention trials or executions based on religious offenses other than those within the Roman religion. On 1 January 89, the governor of Germania Superior, Lucius Antonius Saturninus, and his two legions at Mainz, Legio XIV Gemina and Legio XXI Rapax, revolted against the Roman Empire with the aid of the Germanic Chatti people. The precise cause for the rebellion is uncertain, although it appears to have been planned well in advance. The Senatorial officers may have disapproved of Domitian's military strategies, such as his decision to fortify the German frontier rather than attack, as well as his recent retreat from Britain, and finally the disgraceful policy of appeasement towards Decebalus. At any rate, the uprising was strictly confined to Saturninus' province, and quickly detected once the rumour spread across the neighbouring provinces. The governor of Germania Inferior, Aulus Bucius Lappius Maximus, moved to the region at once, assisted by the procurator of Rhaetia, Titus Flavius Norbanus. From Spain, Trajan was summoned, while Domitian himself came from Rome with the Praetorian Guard. By a stroke of luck, a thaw prevented the Chatti from crossing the Rhine and coming to Saturninus' aid. Within twenty-four days the rebellion was crushed, and its leaders at Mainz savagely punished. The mutinous legions were sent to the front in Illyricum, while those who had assisted in their defeat were duly rewarded. Lappius Maximus received the governorship of the province of Syria, a second consulship in May 95, and finally a priesthood, which he still held in 102. Titus Flavius Norbanus may have been appointed to the prefecture of Egypt, but almost certainly became prefect of the Praetorian Guard by 94, with Titus Petronius Secundus as his colleague. Domitian opened the year following the revolt by sharing the consulship with Marcus Cocceius Nerva, suggesting the latter had played a part in uncovering the conspiracy, perhaps in a fashion similar to the one he played during the Pisonian conspiracy under Nero. Although little is known about the life and career of Nerva before his accession as Emperor in 96, he appears to have been a highly adaptable diplomat, surviving multiple regime changes and emerging as one of the Flavians' most trusted advisors. His consulship may therefore have been intended to emphasize the stability and status quo of the regime. The revolt had been suppressed and the Empire returned to order. 
Since the fall of the Republic, the authority of the Roman Senate had largely eroded under the quasi-monarchical system of government established by Augustus, known as the Principate. The Principate allowed the existence of a "de facto" dictatorial regime, while maintaining the formal framework of the Roman Republic. Most Emperors upheld the public facade of democracy, and in return the Senate implicitly acknowledged the Emperor's status as a "de facto" monarch. Some rulers handled this arrangement with less subtlety than others. Domitian was not so subtle. From the outset of his reign, he stressed the reality of his autocracy. He disliked aristocrats and had no fear of showing it, withdrawing every decision-making power from the Senate, and instead relying on a small set of friends and equestrians to control the important offices of state. The dislike was mutual. After Domitian's assassination, the senators of Rome rushed to the Senate house, where they immediately passed a motion condemning his memory to oblivion. Under the rulers of the Nervan-Antonian dynasty, senatorial authors published histories that elaborated on the view of Domitian as a tyrant. Nevertheless, the evidence suggests that Domitian did make concessions toward senatorial opinion. Whereas his father and brother had concentrated consular power largely in the hands of the Flavian family, Domitian admitted a surprisingly large number of provincials and potential opponents to the consulship, allowing them to head the official calendar by opening the year as ordinary consuls. Whether this was a genuine attempt to reconcile with hostile factions in the Senate cannot be ascertained. By offering the consulship to potential opponents, Domitian may have wanted to compromise these senators in the eyes of their supporters. When their conduct proved unsatisfactory, they were almost invariably brought to trial and exiled or executed, and their property was confiscated. Both Tacitus and Suetonius speak of escalating persecutions toward the end of Domitian's reign, identifying a point of sharp increase around 93, or sometime after the failed revolt of Saturninus in 89. At least twenty senatorial opponents were executed, including Domitia Longina's former husband Lucius Aelius Lamia and three of Domitian's own family members, Titus Flavius Sabinus, Titus Flavius Clemens and Marcus Arrecinus Clemens. Some of these men were executed as early as 83 or 85, however, lending little credit to Tacitus' notion of a "reign of terror" late in Domitian's reign. According to Suetonius, some were convicted for corruption or treason, others on trivial charges, which Domitian justified through his suspicion. Jones compares the executions of Domitian to those under Emperor Claudius (41–54), noting that Claudius executed around 35 senators and 300 equestrians, and yet was still deified by the Senate and regarded as one of the good Emperors of history. Domitian was apparently unable to gain support among the aristocracy, despite attempts to appease hostile factions with consular appointments. His autocratic style of government accentuated the Senate's loss of power, while his policy of treating patricians and even family members as equals to all Romans earned him their contempt. Domitian was assassinated on 18 September 96 in a conspiracy by court officials. A highly detailed account of the plot and the assassination is provided by Suetonius. 
He alleges that Domitian's chamberlain Parthenius played the main role in the plot, citing the recent execution of Domitian's secretary Epaphroditus as his primary motive. The act itself was carried out by a freedman of Parthenius named Maximus and by Stephanus, a steward of Domitian's niece Flavia Domitilla. According to Suetonius, a number of omens had foretold Domitian's death. Several days prior to the assassination, Minerva had appeared to the emperor in a dream. She announced that she had been disarmed by Jupiter and could no longer give Domitian her protection. According to an auspice he had received, the Emperor believed that his death would be at midday. As a result, he was always restless around that time. On the day of the assassination, Domitian was distressed and repeatedly asked a servant to tell him what time it was. The servant, who was himself one of the plotters, lied to the emperor, telling him that it was already late in the afternoon. Apparently put at ease, the Emperor went to his desk to sign some decrees. Stephanus, who had been feigning an injury to his arm for several days and wearing a bandage to allow him to carry a concealed dagger, suddenly appeared and struck at the emperor. During the attack, Stephanus and Domitian struggled on the floor, in the course of which Stephanus was stabbed by the emperor and died shortly afterward. Domitian's body was carried away on a common bier and unceremoniously cremated by his nurse Phyllis. Later, she took the emperor's ashes to the Flavian Temple and mingled them with those of his niece, Julia. He was 44 years old. As had been foretold, his death came at midday. Cassius Dio, writing nearly a hundred years after the assassination, suggests that the assassination was improvised, while Suetonius implies it was a well-organized conspiracy, citing Stephanus' feigned injury and claiming that the doors to the servants' quarters had been locked prior to the attack and that a sword Domitian kept concealed beneath his pillow as a last line of personal protection against a would-be assassin had also been removed beforehand. Dio included Domitia Longina among the conspirators, but in light of her attested devotion to Domitian, even years after her husband had died, her involvement in the plot seems highly unlikely. The precise involvement of the Praetorian Guard is unclear. One of the guard's commanders, Titus Petronius Secundus, was almost certainly aware of the plot. The other, Titus Flavius Norbanus, the former governor of Raetia, was a member of Domitian's family. The "Fasti Ostienses", the Ostian Calendar, records that the same day the Senate proclaimed Marcus Cocceius Nerva emperor. Despite his political experience, this was a remarkable choice. Nerva was old and childless, and had spent much of his career out of the public light, prompting both ancient and modern authors to speculate on his involvement in Domitian's assassination. According to Cassius Dio, the conspirators approached Nerva as a potential successor prior to the assassination, suggesting that he was at least aware of the plot. Nerva does not appear in Suetonius' version of the events, but this may be understandable, since Suetonius' works were published under Nerva's direct successors Trajan and Hadrian. To suggest the dynasty owed its accession to murder would have been less than sensitive. On the other hand, Nerva lacked widespread support in the Empire, and his record as a known Flavian loyalist would not have recommended him to the conspirators. 
The precise facts have been obscured by history, but modern historians believe Nerva was proclaimed Emperor solely on the initiative of the Senate, within hours after the news of the assassination broke. The decision may have been hasty so as to avoid civil war, but neither the Senate nor Nerva appears to have been involved in the conspiracy. The Senate nonetheless rejoiced at the death of Domitian, and immediately following Nerva's accession as Emperor, passed "damnatio memoriae" on his memory: his coins and statues were melted, his arches were torn down and his name was erased from all public records. Domitian and, over a century later, Publius Septimius Geta were the only emperors known to have officially received a "damnatio memoriae", though others may have received "de facto" ones. In many instances, existing portraits of Domitian, such as those found on the Cancelleria Reliefs, were simply recarved to fit the likeness of Nerva, which allowed quick production of new images and recycling of previous material. Yet the order of the Senate was only partially executed in Rome, and wholly disregarded in most of the provinces outside Italy. According to Suetonius, the people of Rome met the news of Domitian's death with indifference, but the army was much grieved, calling for his deification immediately after the assassination and even rioting in several provinces. As a compensation measure, the Praetorian Guard demanded the execution of Domitian's assassins, which Nerva refused. Instead he merely dismissed Titus Petronius Secundus, and replaced him with a former commander, Casperius Aelianus. Dissatisfaction with this state of affairs continued to loom over Nerva's reign, and ultimately erupted into a crisis in October 97, when members of the Praetorian Guard, led by Casperius Aelianus, laid siege to the Imperial Palace and took Nerva hostage. He was forced to submit to their demands, agreeing to hand over those responsible for Domitian's death and even giving a speech thanking the rebellious Praetorians. Titus Petronius Secundus and Parthenius were sought out and killed. Nerva was unharmed in this assault, but his authority was damaged beyond repair. Shortly thereafter he announced the adoption of Trajan as his successor, and with this decision all but abdicated. The classic view of Domitian is usually negative, since most of the ancient sources came from the senatorial or aristocratic class, with which Domitian had notoriously difficult relations. Furthermore, contemporary historians such as Pliny the Younger, Tacitus and Suetonius all wrote their accounts of his reign after it had ended and his memory had been condemned to oblivion. The work of Domitian's court poets Martial and Statius constitutes virtually the only literary evidence concurrent with his reign. Just as unsurprising as the hostility of post-Domitianic historians is the adulation of Martial and Statius, whose poems praise Domitian's achievements as equalling those of the gods. The most extensive account of the life of Domitian to survive was written by the historian Suetonius, who was born during the reign of Vespasian, and published his works under Emperor Hadrian (117–138). His "De Vita Caesarum" is the source of much of what is known of Domitian. Although his text is predominantly negative, it neither exclusively condemns nor praises Domitian, and asserts that his rule started well, but gradually declined into terror. 
The biography is problematic, however, in that it appears to contradict itself with regards to Domitian's rule and personality, at the same time presenting him as a conscientious, moderate man, and as a decadent libertine. According to Suetonius, Domitian wholly feigned his interest in arts and literature, and never bothered to acquaint himself with classic authors. Other passages, alluding to Domitian's love of epigrammatic expression, suggest that he was in fact familiar with classic writers, while he also patronized poets and architects, founded artistic Olympics, and personally restored the library of Rome at great expense after it had burned down. "De Vita Caesarum" is also the source of several outrageous stories regarding Domitian's marriage life. According to Suetonius, Domitia Longina was exiled in 83 because of an affair with a famous actor named Paris. When Domitian found out, he allegedly murdered Paris in the street and promptly divorced his wife, with Suetonius further adding that once Domitia was exiled, Domitian took Julia as his mistress, who later died during a failed abortion. Modern historians consider this highly implausible however, noting that malicious rumours such as those concerning Domitia's alleged infidelity were eagerly repeated by post-Domitianic authors, and used to highlight the hypocrisy of a ruler publicly preaching a return to Augustan morals, while privately indulging in excesses and presiding over a corrupt court. Nevertheless, the account of Suetonius has dominated imperial historiography for centuries. Although Tacitus is usually considered to be the most reliable author of this era, his views on Domitian are complicated by the fact that his father-in-law, Gnaeus Julius Agricola, may have been a personal enemy of the Emperor. In his biographical work "Agricola", Tacitus maintains that Agricola was forced into retirement because his triumph over the Caledonians highlighted Domitian's own inadequacy as a military commander. Several modern authors such as Dorey have argued the opposite: that Agricola was in fact a close friend of Domitian, and that Tacitus merely sought to distance his family from the fallen dynasty once Nerva was in power. Tacitus' major historical works, including "The Histories" and Agricola's biography, were all written and published under Domitian's successors Nerva (96–98) and Trajan (98–117). Unfortunately, the part of Tacitus' "Histories" dealing with the reign of the Flavian dynasty is almost entirely lost. His views on Domitian survive through brief comments in its first five books, and the short but highly negative characterization in "Agricola" in which he severely criticizes Domitian's military endeavours. Nevertheless, Tacitus admits his debt to the Flavians with regard to his own public career. Other influential 2nd century authors include Juvenal and Pliny the Younger, the latter of whom was a friend of Tacitus and in 100 delivered his famous "Panegyricus Traiani" before Trajan and the Roman Senate, exalting the new era of restored freedom while condemning Domitian as a tyrant. Juvenal savagely satirized the Domitianic court in his "Satires", depicting the Emperor and his entourage as corrupt, violent and unjust. As a consequence, the anti-Domitianic tradition was already well established by the end of the 2nd century, and by the 3rd century, even expanded upon by early Church historians, who identified Domitian as an early persecutor of Christians, such as in the Acts of John. 
Over the course of the 20th century, Domitian's military, administrative and economic policies were re-evaluated. Hostile views of Domitian had been propagated until archeological and numismatic advances brought renewed attention to his reign, and necessitated a revision of the literary tradition established by Tacitus and Pliny. It would be nearly a hundred years after Stéphane Gsell's 1894 "Essai sur le règne de l'empereur Domitien" however, before any new, book-length studies were published. The first of these was Jones' 1992 "The Emperor Domitian". He concludes that Domitian was a ruthless but efficient autocrat. For the majority of his reign, there was no widespread dissatisfaction with his policies. His harshness was limited to a highly vocal minority, who exaggerated his despotism in favor of the Nervan-Antonian dynasty that followed. His foreign policy was realistic, rejecting expansionist warfare and negotiating peace at a time when Roman military tradition dictated aggressive conquest. Persecution of religious minorities, such as Jews and Christians, was non-existent. In 1930, Ronald Syme argued for a complete reassessment of Domitian's financial policy, which had been largely viewed as a disaster. His economic program, which was rigorously efficient, maintained the Roman currency at a standard it would never again achieve. Domitian's government nonetheless exhibited totalitarian characteristics. As Emperor, he saw himself as the new Augustus, an enlightened despot destined to guide the Roman Empire into a new era of Flavian renaissance. Using religious, military and cultural propaganda, he fostered a cult of personality. He deified three of his family members and erected massive structures to commemorate the Flavian achievements. Elaborate triumphs were celebrated in order to boost his image as a warrior-emperor, but many of these were either unearned or premature. By nominating himself perpetual censor, he sought to control public and private morals. He became personally involved in all branches of the government and successfully prosecuted corruption among public officials. The dark side of his censorial power involved a restriction in freedom of speech, and an increasingly oppressive attitude toward the Roman Senate. He punished libel with exile or death and, due to his suspicious nature, increasingly accepted information from informers to bring false charges of treason if necessary. Despite his vilification by contemporary historians, Domitian's administration provided the foundation for the Principate of the peaceful 2nd century. His successors Nerva and Trajan were less restrictive, but in reality their policies differed little from his. Much more than a "gloomy coda to the...1st century", the Roman Empire prospered between 81 and 96, in a reign that Theodor Mommsen described as a somber but intelligent despotism. Taiko Taiko have a mythological origin in Japanese folklore, but historical records suggest that taiko were introduced to Japan through Korean and Chinese cultural influence as early as the 6th century CE. Some taiko are similar to instruments originating from India. Archaeological evidence also supports the view that taiko were present in Japan during the 6th century in the Kofun period. Their function has varied throughout history, ranging from communication, military action, theatrical accompaniment, and religious ceremony to both festival and concert performances. 
In modern times, taiko have also played a central role in social movements for minorities both within and outside Japan. "Kumi-daiko" performance, characterized by an ensemble playing on different drums, was developed in 1951 through the work of Daihachi Oguchi and has continued with groups such as Kodo. Other performance styles, such as "hachijō-daiko", have also emerged from specific communities in Japan. "Kumi-daiko" performance groups are active not only in Japan, but also in the United States, Australia, Canada, and Brazil. Taiko performance consists of many components in technical rhythm, form, stick grip, clothing, and the particular instrumentation. Ensembles typically use different types of barrel-shaped "nagadō-daiko" as well as smaller "shime-daiko". Many groups accompany the drums with vocals, strings, and woodwind instruments. The origin of the instruments is unclear, though there have been many suggestions. Historical accounts, of which the earliest date from 588 CE, note that young Japanese men traveled to Korea to study the kakko, a drum that originated in South China. This study and appropriation of Chinese instruments may have influenced the emergence of taiko. Certain court music styles, especially gigaku and gagaku, arrived in Japan through both Korea and China. In both traditions, dancers were accompanied by several instruments that included drums similar to taiko. Certain percussive patterns and terminology in togaku, an early dance and music style in Japan, in addition to physical features of the kakko, also reflect influence from both China and India on drum use in gagaku performance. Archaeological evidence shows that taiko were used in Japan as early as the 6th century CE, during the latter part of the Kofun period, and were likely used for communication, in festivals, and in other rituals. This evidence was substantiated by the discovery of haniwa statues in the Sawa District of Gunma Prefecture. Two of these figures are depicted playing drums; one of them, wearing skins, is equipped with a barrel-shaped drum hung from his shoulder and uses a stick to play the drum at hip height. This statue is titled "Man Beating the Taiko" and is considered the oldest evidence of taiko performance in Japan. Similarities between the playing style demonstrated by this haniwa and known music traditions in Korea and China further suggest influences from these regions. The "Nihon Shoki", the second oldest book of Japanese classical history, contains a mythological story describing the origin of taiko. The myth tells how Amaterasu, who had sealed herself inside a cave in anger, was beckoned out by an elder goddess Ame-no-Uzume when others had failed. Ame-no-Uzume accomplished this by emptying out a barrel of sake and dancing furiously on top of it. Historians regard her performance as the mythological creation of taiko music. In feudal Japan, taiko were often used to motivate troops, call out orders or announcements, and set a marching pace; marches were usually set to six paces per beat of the drum. During the 16th-century Warring States period, specific drum calls were used to communicate orders for retreating and advancing. Other rhythms and techniques were detailed in period texts. According to the war chronicle "Gunji Yoshū", nine sets of five beats would summon an ally to battle, while nine sets of three beats, sped up three or four times, was the call to advance and pursue an enemy. 
Sixteenth-century folklore about the legendary 6th-century Emperor Keitai offers a story that he obtained a large drum from China, which he named . The Emperor was thought to have used it to both encourage his own army and intimidate his enemies. Taiko have been incorporated into Japanese theatre for rhythmic needs, general atmosphere, and, in certain settings, decoration. In the kabuki play "The Tale of Shiroishi and the Taihei Chronicles", scenes in the pleasure quarters are accompanied by taiko to create dramatic tension. Noh theatre also features taiko, where performance consists of highly specific rhythmic patterns. The school of drumming, for example, contains 65 basic patterns in addition to 25 special patterns; these patterns are categorized in several classes. Differences between these patterns include changes in tempo, accent, dynamics, pitch, and function in the theatrical performance. Patterns are also often connected together in progressions. Taiko continue to be used in gagaku, a classical music tradition typically performed at the Tokyo Imperial Palace in addition to local temples and shrines. In gagaku, one component of the art form is traditional dance, which is guided in part by the rhythm set by the taiko. Taiko have played an important role in many local festivals across Japan. They are also used to accompany religious ritual music. In kagura, which generically describes music and dances stemming from Shinto practices, taiko frequently appear alongside other performers during local festivals. In Buddhist traditions, taiko are used for ritual dances that are a part of the Bon Festival. Taiko, along with other instruments, are featured atop towers that are adorned with red-and-white cloth and serve to provide rhythms for the dancers, who circle around the performers. In addition to the instruments, the term "taiko" also refers to the performance itself, and commonly to one style called "kumi-daiko", or ensemble-style playing (as opposed to festival performances, rituals, or theatrical use of the drums). "Kumi-daiko" was developed by Daihachi Oguchi in 1951. He is considered a master performer and helped transform taiko performance from its roots in traditional settings in festivals and shrines. Oguchi was trained as a jazz musician in Nagano, and at one point, a relative gave him an old piece of written taiko music. Unable to read the traditional and esoteric notation, Oguchi found help to transcribe the piece, and on his own added rhythms and transformed the work to accommodate multiple taiko players on different-sized instruments. Each instrument served a specific purpose that established present-day conventions in "kumi-daiko" performance. Oguchi's ensemble, Osuwa Daiko, incorporated these alterations and other drums into their performances. They also devised novel pieces that were intended for non-religious performances. Several other groups emerged in Japan through the 1950s and 1960s. Oedo Sukeroku Daiko was formed in Tokyo in 1959 under Seidō Kobayashi, and has been referred to as the first taiko group to have toured professionally. Globally, "kumi-daiko" performance became more visible during the 1964 Summer Olympics in Tokyo, when it was featured during the Festival of Arts event. "Kumi-daiko" was also developed through the leadership of Den Tagayasu, who gathered young men who were willing to devote their entire lifestyle to taiko playing and took them to Sado Island for training, where he and his family had settled in 1968. 
Den chose the island based on a desire to reinvigorate the folk arts in Japan, particularly taiko; he became inspired by a drumming tradition unique to Sado called , which required considerable strength to play well. Den called the group "Za Ondekoza" or Ondekoza for short, and implemented a rigorous set of exercises for its members, including long-distance running. In 1975, Ondekoza was the first taiko group to tour in the United States. Their first performance occurred just after the group finished running the Boston Marathon while wearing their traditional uniforms. In 1981, some members of Ondekoza split from Den and formed another group called Kodo under the leadership of Eitetsu Hayashi. Kodo continued to use Sado Island for rigorous training and communal living, and went on to popularize taiko through frequent touring and collaborations with other musical performers. Kodo is one of the most recognized taiko groups both in Japan and worldwide. Estimates of the number of active taiko groups in Japan run as high as 5,000, but more conservative assessments place the number closer to 800, based on membership in the Nippon Taiko Foundation, the largest national organization of taiko groups. Some pieces that emerged from early "kumi-daiko" groups and continue to be performed include "Yatai-bayashi" from Ondekoza, from Osuwa Daiko, and from Kodo. Taiko have been developed into a broad range of percussion instruments that are used in both Japanese folk and classical musical traditions. An early classification system based on shape and tension was advanced by Francis Taylor Piggott in 1909. Taiko are generally classified based on the construction process, or the specific context in which the drum is used, but some are not classified, such as the toy den-den daiko. With few exceptions, taiko have a drum shell with heads on both sides of the body, and a sealed resonating cavity. The head may be fastened to the shell using a number of different systems, such as ropes. Taiko may be either tunable or non-tunable depending on the system used. Taiko are categorized into three types based on construction process. "Byō-uchi-daiko" are constructed with the drumhead nailed to the body. "Shime-daiko" are classically constructed with the skin placed over iron or steel rings, which are then tightened with ropes. Contemporary "shime-daiko" are tensioned using bolt or turnbuckle systems attached to the drum body. "Tsuzumi" are also rope-tensioned drums, but have a distinct hourglass shape and their skins are made of deerskin. "Byō-uchi-daiko" were historically made only using a single piece of wood; they continue to be made in this manner, but are also constructed from staves of wood. Larger drums can be made using a single piece of wood, but at a much greater cost due to the difficulty in finding appropriate trees. The preferred wood is the Japanese zelkova or "keyaki", but a number of other woods, and even wine barrels, have been used to create taiko. "Byō-uchi-daiko" cannot be tuned. The typical "byō-uchi-daiko" is the "nagadō-daiko", an elongated drum that is roughly shaped like a wine barrel. "Nagadō-daiko" are available in a variety of sizes, and their head diameter is traditionally measured in shaku (units of roughly 30 cm). Head diameters range from . "Ko-daiko" are the smallest of these drums and are usually about in diameter. The "chū-daiko" is a medium-sized "nagadō-daiko" ranging from , and weighing about . "Ō-daiko" vary in size, and are often as large as in diameter. 
Some "ō-daiko" are difficult to move due to their size, and therefore permanently remain inside the performance space, such as temple or shrine. "Ō-daiko" means "large drum" and for a given ensemble, the term refers to their largest drum. The other type of "byō-uchi-daiko" is called a and describes any drum constructed such that the head diameter is greater than the length of the body. "Shime-daiko" are a set of smaller, roughly snare drum-sized instrument that are tunable. The tensioning system usually consists of hemp cords or rope, but bolt or turnbuckle systems have been used as well. , sometimes referred to as "taiko" in the context of theater, have thinner heads than other kinds of shime-daiko. The head includes a patch of deerskin placed in the center, and in performance, drum strokes are generally restricted to this area. The is a heavier type of "shime-daiko". They are available in sizes 1–5, and are named according to their number: "namitsuke" (1), "nichō-gakke" (2), "sanchō-gakke" (3), "yonchō-gakke" (4), and "gochō-gakke" (5). The "namitsuke" has the thinnest skins and the shortest body in terms of height; thickness and tension of skins, as well as body height, increase toward the "gochō-gakke". The head diameters of all "shime-daiko" sizes are around . "Okedō-daiko" or simply "okedō", are a type of "shime-daiko" that are stave-constructed using narrower strips of wood, have a tube-shaped frame. Like other "shime-daiko", drum heads are attached by metal hoops and fastened by rope or cords. "Okedō" can be played using the same drumsticks (called "bachi") as "shime-daiko", but can also be hand-played. "Okedō" come in short- and long-bodied types. "Tsuzumi" are a class of hourglass-shaped drums. The drum body is shaped on a spool and the inner body carved by hand. Their skins can be made from cowhide, horsehide, or deerskin. While the "ō-tsuzumi" skins are made from cowhide, "ko-tsuzumi" are made from horsehide. While some classify "tsuzumi" as a type of taiko, others have described them as a drum entirely separate from taiko. Taiko can also be categorized by the context in which they are used. The "miya-daiko", for instance, is constructed in the same manner as other "byō-uchi-daiko", but is distinguished by an ornamental stand and is used for ceremonial purposes at Buddhist temples. The (a "ko-daiko") and (a "nagadō-daiko" with a cigar-shaped body) are used in sumo and festivals respectively. Several drums, categorized as "gagakki", are used in the Japanese theatrical form, gagaku. The lead instrument of the ensemble is the kakko, which is a smaller "shime-daiko" with heads made of deerskin, and is placed horizontally on a stand during performance. A "tsuzumi", called the "san-no-tsuzumi" is another small drum in gagaku that is placed horizontally and struck with a thin stick. are the largest drums of the ensemble, and have heads that are about in diameter. During performance, the drum is placed on a tall pedestals and surrounded by a rim decoratively painted with flames and adorned with mystical figures such as wyverns. "Dadaiko" are played while standing, and are usually only played on the downbeat of the music. The is a smaller drum that produces a lower sound, its head measuring about in diameter. It is used in ensembles that accompany bugaku, a traditional dance performed at the Tokyo Imperial Palace and in religious contexts. "Tsuri-daiko" are suspended on a small stand, and are played sitting down. 
"Tsuri-daiko" performers typically use shorter mallets covered in leather knobs instead of bachi. They can be played simultaneously by two performers; while one performer plays on the head, another performer uses bachi on the body of the drum. The larger "ō-tsuzumi" and smaller "ko-tsuzumi" are used in the opening and dances of Noh theater. Both drums are struck using the fingers; players can also adjust pitch by manually applying pressure to the ropes on the drum. The color of the cords of these drums also indicates the skill of the musician: Orange and red for amateur players, light blue for performers with expertise, and lilac for masters of the instrument. "Nagauta-shime daiko" or "uta daiko" are also featured in Noh performance. Many taiko in Noh are also featured in kabuki performance and are used in a similar manner. In addition to the "ō-tsuzumi", "ko-tsuzumi", and "nagauta-shime daiko", Kabuki performances make use of the larger "ō-daiko" offstage to help set the atmosphere for different scenes. Taiko construction has several stages, including making and shaping of the drum body (or shell), preparing the drum skin, and tuning the skin to the drumhead. Variations in the construction process often occur in the latter two parts of this process. Historically, "byō-uchi-daiko" were crafted from trunks of the Japanese zelkova tree that were dried out over years, using techniques to prevent splitting. A master carpenter then carved out the rough shape of the drum body with a chisel; the texture of the wood after carving softened the tone of the drum. In contemporary times, taiko are carved out on a large lathe using wood staves or logs that can be shaped to fit drum bodies of various sizes. Drumheads can be left to air-dry over a period of years, but some companies use large, smoke-filled warehouses to hasten the drying process. After drying is complete, the inside of the drum is worked with a deep-grooved chisel and sanded. Lastly, handles are placed onto the drum. These are used to carry smaller drums and they serve an ornamental purpose for larger drums. The skins or heads of taiko are generally made from cowhide from Holstein cows aged about three or four years. Skins also come from horses, and bull skin is preferred for larger drums. Thinner skins are preferred for smaller taiko, and thicker skins are used for larger ones. On some drumheads, a patch of deer skin placed in the center serves as the target for many strokes during performance. Before fitting it to the drum body the hair is removed from the hide by soaking it in a river or stream for about a month; winter months are preferred as colder temperatures better facilitate hair removal. To stretch the skin over the drum properly, one process requires the body to be held on a platform with several hydraulic jacks underneath it. The edges of the cowhide are secured to an apparatus below the jacks, and the jacks stretch the skin incrementally to precisely apply tension across the drumhead. Other forms of stretching use rope or cords with wooden dowels or an iron wheel to create appropriate tension. Small tension adjustments can be made during this process using small pieces of bamboo that twist around the ropes. Particularly large drumheads are sometimes stretched by having several workers, clad in stockings, hop rhythmically atop it, forming a circle along the edge. After the skin has dried, tacks, called "byō", are added to the appropriate drums to secure it; "chū-daiko" require about 300 of them for each side. 
After the body and skin have been finished, excess hide is cut off and the drum can be stained as needed. Several companies specialize in the production of taiko. One such company, Miyamoto Unosuke Shoten in Tokyo, which created drums exclusively for the Emperor of Japan, has been making taiko since 1861. The Asano Taiko Corporation is another major taiko-producing organization, and has been producing taiko for over 400 years. The family-owned business started in Mattō, Ishikawa, and, aside from military equipment, made taiko for Noh theater and later expanded to making instruments for festivals during the Meiji period. Asano currently maintains an entire complex of large buildings referred to as Asano Taiko Village, and the company reports producing up to 8000 drums each year. As of 2012, there is approximately one major taiko production company in each prefecture of Japan, with some regions having several companies. Of the manufacturers in Naniwa, Taikoya Matabē is one of the most successful and is thought to have brought considerable recognition to the community and attracted many drum makers there. Umetsu Daiko, a company that operates in Hakata, has been producing taiko since 1821. Taiko performance styles vary widely across groups in terms of the number of performers, repertoire, instrument choices, and stage techniques. Nevertheless, a number of early groups have had broad influence on the tradition. For instance, many pieces developed by Ondekoza and Kodo are considered standard in many taiko groups. Kata is a term used to describe the posture and movement associated with taiko performance. The term is used in martial arts in a similar way: for example, both traditions include the idea that the hara is the center of being. Author Sean Bender argues that kata is the primary feature that distinguishes different taiko groups from one another, and is a key factor in judging the quality of performance. For this reason, many practice rooms intended for taiko contain mirrors to provide visual feedback to players. An important part of kata in taiko is keeping the body stabilized while performing, which can be accomplished by maintaining a wide, low stance, with the left knee bent over the toes and the right leg kept straight. It is important that the hips face the drum and the shoulders are relaxed. Some teachers note a tendency to rely on the upper body while playing, and emphasize the importance of the holistic use of the body during performance. Some groups in Japan, particularly those active in Tokyo, also emphasize the importance of the lively and spirited "iki" aesthetic. In taiko, it refers to very specific kinds of movement while performing that evoke the sophistication stemming from the mercantile and artisan classes active during the Edo period (1603–1868). The sticks for playing taiko are called "bachi", and are made in various sizes and from different kinds of wood such as white oak, bamboo, and Japanese magnolia. "Bachi" are also held in a number of different styles. In "kumi-daiko", it is common for a player to hold their sticks in a relaxed manner between the V-shape of the index finger and thumb, which points to the player. There are other grips that allow performers to play much more technically difficult rhythms, such as the "shime" grip, which is similar to a matched grip: the "bachi" are gripped at the back end, and the fulcrum rests between the performer's index finger and thumb, while the other fingers remain relaxed and slightly curled around the stick. 
Performance in some groups is also guided by principles based on Zen Buddhism. For instance, among other concepts, the San Francisco Taiko Dojo is guided by an emphasis on communication, respect, and harmony. The way the "bachi" are held can also be significant; for some groups, "bachi" represent a spiritual link between the body and the sky. Some physical parts of taiko, like the drum body, its skin, and the tacks, also hold symbolic significance in Buddhism. "Kumi-daiko" groups consist primarily of percussive instruments, where each of the drums plays a specific role. Of the different kinds of taiko, the most common in groups is the "nagadō-daiko". "Chū-daiko" are common in taiko groups and represent the main rhythm of the group, whereas "shime-daiko" set and change tempo. "Ō-daiko" provide a steady, underlying pulse and serve as a counter-rhythm to the other parts. It is common for performances to begin with a single-stroke roll called an "oroshi". The player starts slowly, leaving considerable space between strikes, gradually shortening the interval between hits, until the drummer is playing a rapid roll of hits. Oroshi are also played as a part of theatrical performance, such as in Noh theater. Drums are not the only instruments played in the ensemble; other Japanese instruments are also used. Other kinds of percussion instruments include the , a hand-sized gong played with a small mallet. In kabuki, the shamisen, a plucked string instrument, often accompanies taiko during the theatrical performance. "Kumi-daiko" performances can also feature woodwinds such as the shakuhachi and the shinobue. Voiced calls or shouts called kakegoe and kiai are also common in taiko performance. They are used as encouragement to other players or as cues for a transition or change in dynamics, such as an increase in tempo. In contrast, the philosophical concept of ma, which loosely describes the space between drum strikes, is also important in shaping rhythmic phrases and creating appropriate contrast. There is a wide variety of traditional clothing that players wear during taiko performance. Common in many "kumi-daiko" groups is the use of the happi, a decorative, thin-fabric coat, and traditional headbands called hachimaki. Tabi, , and are also typical. During his time with the group Ondekoza, Eitetsu Hayashi suggested that a loincloth called a fundoshi be worn when performing for French fashion designer Pierre Cardin, who saw Ondekoza perform in 1975. The Japanese group Kodo has sometimes worn fundoshi for its performances. Taiko performance is generally taught orally and through demonstration. Historically, general patterns for taiko were written down, such as in the 1512 encyclopedia called the "Taigensho", but written scores for taiko pieces are generally unavailable. One reason for the adherence to an oral tradition is that, from group to group, the rhythmic patterns in a given piece are often performed differently. Furthermore, ethnomusicologist William P. Malm observed that Japanese players within a group could not usefully predict one another using written notation, and instead did so through listening. In Japan, printed parts are not used during lessons. Instead, patterns of onomatopoeia called kuchi shōga, which convey the rhythm and timbre of drum strikes for a particular piece, are taught orally from teacher to student. 
For example, "don" represents a single strike to the center of the drum, whereas another syllable represents two successive strikes, first by the right and then the left, which together last the same amount of time as one "don" strike. Some taiko pieces, such as "Yatai-bayashi", include patterns that are difficult to represent in Western musical notation. The exact words used can also differ from region to region. More recently, Japanese publications have emerged in an attempt to standardize taiko performance. The Nippon Taiko Foundation was formed in 1979; its primary goals were to foster good relations among taiko groups in Japan and to both publicize and teach how to perform taiko. Daihachi Oguchi, the leader of the Foundation, wrote "Japan Taiko" with other teachers in 1994 out of concern that correct form in performance would degrade over time. The instructional publication described the different drums used in "kumi-daiko" performance, methods of gripping, correct form, and suggestions on instrumentation. The book also contains practice exercises and transcribed pieces from Oguchi's group, Osuwa Daiko. While there were similar textbooks published before 1994, this publication had much more visibility due to the Foundation's scope. The system of fundamentals "Japan Taiko" put forward was not widely adopted, because taiko performance varied substantially across Japan. An updated 2001 publication from the Foundation, called the , describes regional variations that depart from the main techniques taught in the textbook. The creators of the text maintained that mastering a set of prescribed basics should be compatible with learning local traditions. Aside from "kumi-daiko" performance, a number of folk traditions that use taiko have been recognized in different regions in Japan. Some of these include from Sado Island, from the town of Kokura, and from Iwate Prefecture. A variety of folk dances originating from Okinawa, known collectively as eisa, often make use of the taiko. Some performers use drums while dancing, and, generally speaking, perform in one of two styles: groups on the Yokatsu Peninsula and on Hamahiga Island use small, single-sided drums called "pāranku", whereas groups near the city of Okinawa generally use "shime-daiko". Use of "shime-daiko" over "pāranku" has spread throughout the island, and is considered the dominant style. Small "nagadō-daiko", referred to as "ō-daiko" within the tradition, are also used and are worn in front of the performer. These drum dances are not limited to Okinawa and have appeared in places containing Okinawan communities, such as São Paulo, Hawaii, and large cities on the Japanese mainland. "Hachijō-daiko" is a taiko tradition originating on the island of Hachijō-jima. Two styles of "Hachijō-daiko" emerged and have been popularized among residents: an older tradition based on a historical account, and a newer tradition influenced by mainland groups and practiced by the majority of the islanders. The "Hachijō-daiko" tradition was documented as early as 1849, based on a journal kept by an exile named Kakuso Kizan. He mentioned some of its unique features, such as "a taiko is suspended from a tree while women and children gathered around", and observed that a player used either side of the drum while performing. Illustrations from Kizan's journal show features of "Hachijō-daiko". These illustrations also featured women performing, which is unusual as taiko performance elsewhere during this period was typically reserved for men. 
Teachers of the tradition have noted that the majority of its performers were women; one estimate asserts that female performers outnumbered males by three to one. The first style of "Hachijō-daiko" is thought to descend directly from the style reported by Kizan. This style is called "Kumaoji-daiko", named after its creator Okuyama Kumaoji, a central performer of the style. "Kumaoji-daiko" has two players on a single drum, one of whom, called the , provides the underlying beat. The other player, called the , builds on this rhythmical foundation with unique and typically improvised rhythms. While there are specific types of underlying rhythms, the accompanying player is free to express an original musical beat. "Kumaoji-daiko" also features an unusual positioning for taiko: the drums are sometimes suspended from ropes, and historically were sometimes hung from trees. The contemporary style of "Hachijō-daiko" is called "shin-daiko", which differs from "Kumaoji-daiko" in multiple ways. For instance, while the lead and accompanying roles are still present, "shin-daiko" performances use larger drums exclusively on stands. "Shin-daiko" emphasizes a more powerful sound, and consequently, performers use larger bachi made out of stronger wood. Looser clothing is worn by "shin-daiko" performers compared to the kimono worn by "Kumaoji-daiko" performers; the looser clothing in "shin-daiko" allows performers to adopt more open stances and larger movements with the legs and arms. Rhythms used for the accompanying "shita-byōshi" role can also differ. One type of rhythm, called "yūkichi", is found in both styles, but is always played faster in "shin-daiko". Another type of rhythm, called "honbadaki", is unique to "shin-daiko" and also contains a song which is performed in standard Japanese. Miyake-style taiko has spread amongst groups through Kodo, and is formally known as . The word "miyake" comes from Miyake-jima, part of the Izu Islands, and the word "Kamitsuki" refers to the village where the tradition came from. Miyake-style taiko came out of performances for — a traditional festival held annually in July on Miyake Island since 1820 honoring the deity Gozu Tennō. In this festival, players perform on taiko while portable shrines are carried around town. The style itself is characterized in a number of ways. A "nagadō-daiko" is typically set low to the ground and played by two performers, one on each side; instead of sitting, performers stand and hold a stance that is also very low to the ground, almost to the point of kneeling. Taiko groups in Australia began forming in the 1990s. The first group, called Ataru Taru Taiko, was formed in 1995 by Paulene Thomas, Harold Gent, and Kaomori Kamei. TaikOz was later formed by percussionist Ian Cleworth and Riley Lee, a former Ondekoza member, and has been performing in Australia since 1997. They are known for their work in generating interest in performing taiko among Australian audiences, such as by developing a complete education program with both formal and informal classes, and have a strong fan base. Cleworth and other members of the group have developed several original pieces. The introduction of "kumi-daiko" performance in Brazil can be traced back to the 1970s and 1980s in São Paulo. Tangue Setsuko founded an eponymous taiko dojo, Brazil's first taiko group; Setsuo Kinoshita later formed the group Wadaiko Sho. Brazilian groups have combined native and African drumming techniques with taiko performance. 
One such piece developed by Kinoshita is called "Taiko de Samba", which emphasizes both Brazilian and Japanese aesthetics in percussion traditions. Taiko was also popularized in Brazil from 2002 through the work of Yukihisa Oda, a Japanese native who visited Brazil several times through the Japan International Cooperation Agency. The Brazilian Association of Taiko (ABT) suggests that there are about 150 taiko groups in Brazil and that about 10–15% of players are non-Japanese; Izumo Honda, coordinator of a large annual festival in São Paulo, estimated that about 60% of all taiko performers in Brazil are women. Taiko emerged in the United States in the late 1960s. The first group, San Francisco Taiko Dojo, was formed in 1968 by Seiichi Tanaka, a postwar immigrant who studied taiko in Japan and brought the styles and teachings to the US. A year later, a few members of Senshin Buddhist Temple in Los Angeles led by its minister Masao Kodani initiated another group called Kinnara Taiko. San Jose Taiko later formed in 1973 in Japantown, San Jose, under Roy and PJ Hirabayashi. Taiko started to branch out to the eastern US in the late 1970s. This included formation of Denver Taiko in 1976 and Soh Daiko in New York City in 1979. Many of these early groups lacked the resources to equip each member with a drum and resorted to makeshift percussion materials such as rubber tires or creating taiko out of wine barrels. Japanese-Canadian taiko began in 1979 with Katari Taiko, and was inspired by the San Jose Taiko group. Its early membership was predominantly female. Katari Taiko and future groups were thought to represent an opportunity for younger, third-generation Japanese Canadians to explore their roots, redevelop a sense of ethnic community, and expand taiko into other musical traditions. There are no official counts or estimates of the number of active taiko groups in the United States or Canada, as there is no governing body for taiko groups in either country. Unofficial estimates have been made. In 1989, there were as many as 30 groups in the US and Canada, seven of which were in California. One estimate suggested that around 120 groups were active in the US and Canada as of 2001, many of which could be traced to the San Francisco Taiko Dojo; later estimates in 2005 and 2006 suggested there were about 200 groups in the United States alone. The Cirque du Soleil shows "Mystère" in Las Vegas and "Dralion" have featured taiko performance. Taiko performance has also been featured in commercial productions such as the 2005 Mitsubishi Eclipse ad campaign, and in events such as the 2009 Academy Awards and 2011 Grammy Awards. From 2005 to 2006, the Japanese American National Museum held an exhibition called "Big Drum: Taiko in the United States". The exhibition covered several topics related to taiko in the United States, such as the formation of performance groups, their construction using available materials, and social movements. Visitors were able to play smaller drums. Certain peoples have used taiko to advance social or cultural movements, both within Japan and elsewhere in the world. Taiko performance has frequently been viewed as an art form dominated by men. Historians of taiko argue that its performance comes from masculine traditions. Those who developed ensemble-style taiko in Japan were men, and through the influence of Ondekoza, the ideal taiko player was epitomized in images of the masculine peasant class, particularly through the character Muhōmatsu in the 1958 film "Rickshaw Man". 
Masculine roots have also been attributed to a perceived capacity for "spectacular bodily performance", in which women's bodies are sometimes judged as unable to meet the physical demands of playing. Before the 1980s, it was uncommon for Japanese women to perform on traditional instruments, including taiko, as their participation had been systematically restricted; an exception was the San Francisco Taiko Dojo under the guidance of Grandmaster Seiichi Tanaka, who was the first to admit women to the art form. In Ondekoza and in the early performances of Kodo, women performed only dance routines, either during or between taiko performances. Thereafter, female participation in "kumi-daiko" started to rise dramatically, and by the 1990s, women's representation equaled, and possibly exceeded, that of men. While the proportion of women in taiko has become substantial, some have expressed concern that women still do not perform in the same roles as their male counterparts and that taiko performance continues to be a male-dominated profession. For instance, a member of Kodo was informed by the director of the group's apprentice program that women were permitted to play, but could only play "as women". Other women in the apprentice program recognized a gender disparity in performance roles, such as in what pieces they were allowed to perform, or in physical terms based on a male standard. Female taiko performance has also served as a response to gendered stereotypes of Japanese women as being quiet, subservient, or femmes fatales. Through performance, some groups believe they are helping to redefine not only the role of women in taiko, but how women are perceived more generally. Those involved in the construction of taiko, particularly those working with leather or animal skins, are usually considered part of the burakumin, a marginalized minority class in Japanese society. Prejudice against this class dates back to the Tokugawa period in terms of legal discrimination and treatment as social outcasts. Although official discrimination ended with the Tokugawa era, the burakumin have continued to face social discrimination, such as scrutiny by employers or in marriage arrangements. Drum makers have used their trade and success as a means to advocate for an end to discriminatory practices against their class. The , representing the contributions of burakumin, is found in Naniwa Ward in Osaka, home to a large proportion of burakumin. Among other features, the road contains taiko-shaped benches representing their traditions in taiko manufacturing and leatherworking, and their influence on national culture. The road ends at the Osaka Human Rights Museum, which exhibits the history of systematic discrimination against the burakumin. The road and museum were developed in part due to an advocacy campaign led by the Buraku Liberation League and a taiko group of younger performers called . Taiko performance was an important part of cultural development by third-generation Japanese residents in North America, who are called "sansei". During World War II, second-generation Japanese residents, called "nisei", faced internment in the United States and in Canada on the basis of their race. During and after the war, Japanese residents were discouraged from activities such as speaking Japanese or forming ethnic communities. Subsequently, sansei could not engage in Japanese culture and instead were raised to assimilate into more normative activities. 
There were also prevailing stereotypes of Japanese people, which sansei sought to escape or subvert. During the 1960s in the United States, the civil rights movement influenced sansei to reexamine their heritage by engaging in Japanese culture in their communities; one such approach was through taiko performance. Groups such as San Jose Taiko were organized to fulfill a need for solidarity and to have a medium to express their experiences as Japanese-Americans. Later generations have adopted taiko in programs or workshops established by sansei; social scientist Hideyo Konagaya remarks that this attraction to taiko among other Japanese art forms may be due to its accessibility and energetic nature. Konagaya has also argued that the resurgence of taiko in the United States and Japan was differently motivated: in Japan, performance was meant to represent the need to recapture sacred traditions, while in the United States it was meant to be an explicit representation of masculinity and power in Japanese-American men. A number of performers and groups, including several early leaders, have been recognized for their contributions to taiko performance. Daihachi Oguchi was best known for developing "kumi-daiko" performance. Oguchi founded the first "kumi-daiko" group, Osuwa Daiko, in 1951, and facilitated the popularization of taiko performance groups in Japan. Seidō Kobayashi is the leader of the Tokyo-based taiko group Oedo Sukeroku Taiko as of December 2014. Kobayashi founded the group in 1959, and it was the first taiko group to tour professionally. Kobayashi is considered a master performer of taiko. He is also known for asserting intellectual control of the group's performance style, which has influenced performance for many groups, particularly in North America. In 1968, Seiichi Tanaka founded the San Francisco Taiko Dojo; he is regarded as the Grandfather of Taiko and the primary developer of taiko performance in the United States. He received a 2001 National Heritage Fellowship awarded by the National Endowment for the Arts and, since 2013, has been the only taiko professional presented with the Order of the Rising Sun, 5th Order: Gold and Silver Rays, by Emperor Akihito of Japan, in recognition of his contributions to the fostering of US-Japan relations as well as the promotion of Japanese cultural understanding in the United States. In 1969, Den Tagayasu founded Ondekoza, a group well known for making taiko performance internationally visible and for its artistic contributions to the tradition. Den was also known for developing a communal living and training facility for Ondekoza on Sado Island in Japan, which had a reputation for its intensity and broad education programs in folklore and music. Performers and groups beyond the early practitioners have also been noted. Eitetsu Hayashi is best known for his solo performance work. Hayashi joined Ondekoza when he was 19 and, after parting from the group, helped found Kodo, one of the best known and most influential taiko performance groups in the world. Hayashi soon left the group to begin a solo career and has performed in venues such as Carnegie Hall, where in 1984 he was the first featured taiko performer. He was awarded the 47th Education Minister's Art Encouragement Prize, a national award, in 1997, as well as the 8th Award for the Promotion of Traditional Japanese Culture from the Japan Arts Foundation in 2001. 
David Bowie David Robert Jones (8 January 1947 – 10 January 2016), known professionally as David Bowie (), was an English singer, songwriter and actor who is often considered to be one of the most influential musicians of the 20th century. He was a leading figure in popular music and was acclaimed by critics and fellow musicians for his innovative work, particularly for his work during the 1970s. His career was marked by reinvention and visual presentation, with his music and stagecraft significantly influencing popular music. During his lifetime, his record sales, estimated at 140 million albums worldwide, made him one of the world's best-selling music artists. In the UK, he was awarded ten platinum album certifications, eleven gold and eight silver, releasing eleven number-one albums. In the US, he received five platinum and nine gold certifications. He was inducted into the Rock and Roll Hall of Fame in 1996. Born in Brixton, South London, Bowie developed an interest in music as a child, eventually studying art, music, and design before embarking on a professional career as a musician in 1963. "Space Oddity" became his first top-five entry on the UK Singles Chart after its release in July 1969. After a period of experimentation, he re-emerged in 1972 during the glam rock era with his flamboyant and androgynous alter ego Ziggy Stardust. The character was spearheaded by the success of his single "Starman" and album "The Rise and Fall of Ziggy Stardust and the Spiders from Mars", which won him widespread popularity. In 1975, Bowie's style shifted radically towards a sound he characterised as "plastic soul", initially alienating many of his UK devotees but garnering him his first major US crossover success with the number-one single "Fame" and the album "Young Americans". In 1976, Bowie starred in the cult film "The Man Who Fell to Earth", directed by Nicolas Roeg, and released "Station to Station". The following year, he further confounded musical expectations with the electronic-inflected album "Low" (1977), the first of three collaborations with Brian Eno that would come to be known as the "Berlin Trilogy". ""Heroes"" (1977) and "Lodger" (1979) followed; each album reached the UK top five and received lasting critical praise. After uneven commercial success in the late 1970s, Bowie had UK number ones with the 1980 single "Ashes to Ashes", its parent album "Scary Monsters (and Super Creeps)", and "Under Pressure", a 1981 collaboration with Queen. He then reached his commercial peak in 1983 with "Let's Dance", with its title track topping both UK and US charts. Throughout the 1990s and 2000s, Bowie continued to experiment with musical styles, including industrial and jungle. He also continued acting; his roles included Major Jack Celliers in "Merry Christmas, Mr. Lawrence" (1983), Jareth the Goblin King in "Labyrinth" (1986), Pontius Pilate in "The Last Temptation of Christ" (1988), and Nikola Tesla in "The Prestige" (2006), among other film and television appearances and cameos. He stopped concert touring after 2004 and his last live performance was at a charity event in 2006. In 2013, Bowie returned from a decade-long recording hiatus with the release of "The Next Day." He remained musically active until he died of liver cancer two days after the release of his final album, "Blackstar" (2016). Bowie was born David Robert Jones on 8 January 1947 in Brixton, London. His mother, Margaret Mary "Peggy" (née Burns; 1913–2001), was born at Shorncliffe Army Camp near Cheriton, Kent. 
Her paternal grandparents were Irish immigrants who had settled in Manchester. She worked as a waitress at a cinema in Royal Tunbridge Wells. His father, Haywood Stenton "John" Jones (1912–1969), was from Doncaster, which was then in the West Riding of Yorkshire, and worked as a promotions officer for the children's charity Barnardo's. The family lived at 40 Stansfield Road, on the boundary between Brixton and Stockwell in the south London borough of Lambeth. Bowie attended Stockwell Infants School until he was six years old, acquiring a reputation as a gifted and single-minded child—and a defiant brawler. In 1953, Bowie moved with his family to Bromley. Two years later, he started attending Burnt Ash Junior School. His voice was considered "adequate" by the school choir, and he demonstrated above-average abilities in playing the recorder. At the age of nine, he showed strikingly imaginative dancing during the newly introduced music and movement classes: teachers called his interpretations "vividly artistic" and his poise "astonishing" for a child. The same year, his interest in music was further stimulated when his father brought home a collection of American 45s by artists including the Teenagers, the Platters, Fats Domino, Elvis Presley, and Little Richard. Upon listening to Little Richard's song "Tutti Frutti", Bowie would later say that he had "heard God". Presley's impact on Bowie was likewise emphatic: "I saw a cousin of mine dance to 'Hound Dog' and I had never seen her get up and be moved so much by anything. It really impressed me, the power of the music. I started getting records immediately after that." By the end of the following year, he had taken up the ukulele and tea-chest bass, begun to participate in skiffle sessions with friends, and had started to play the piano; meanwhile, his stage presentation of numbers by both Presley and Chuck Berry—complete with gyrations in tribute to the original artists—to his local Wolf Cub group was described as "mesmerizing ... like someone from another planet". After taking his eleven-plus exam at the conclusion of his Burnt Ash Junior education, Bowie went to Bromley Technical High School. It was an unusual technical school, as biographer Christopher Sandford noted. Bowie studied art, music, and design, including layout and typesetting. After his half-brother Terry Burns introduced him to modern jazz, his enthusiasm for players like Charles Mingus and John Coltrane led his mother to give him a Grafton saxophone in 1961. He was soon receiving lessons from baritone saxophonist Ronnie Ross. He received a serious injury at school in 1962 when his friend George Underwood punched him in the left eye during a fight over a girl. After a series of operations during a four-month hospitalisation, his doctors determined that the damage could not be fully repaired and Bowie was left with faulty depth perception and a permanently dilated pupil, which gave a false impression of a change in the iris' colour; the eye would later become one of Bowie's most recognisable features. Despite their altercation, Bowie remained good friends with Underwood, who went on to create the artwork for Bowie's early albums. In 1962, at the age of 15, Bowie formed his first band, the Konrads. Playing guitar-based rock and roll at local youth gatherings and weddings, the Konrads had a varying line-up of between four and eight members, Underwood among them. 
When Bowie left the technical school the following year, he informed his parents of his intention to become a pop star. His mother promptly arranged his employment as an electrician's mate. Frustrated by his bandmates' limited aspirations, Bowie left the Konrads and joined another band, the King Bees. He wrote to the newly successful washing-machine entrepreneur John Bloom inviting him to "do for us what Brian Epstein has done for the Beatles—and make another million." Bloom did not respond to the offer, but his referral to Dick James's partner Leslie Conn led to Bowie's first personal management contract. Conn quickly began to promote Bowie. The singer's debut single, "Liza Jane", credited to Davie Jones with the King Bees, was not commercially successful. Dissatisfied with the King Bees and their repertoire of Howlin' Wolf and Willie Dixon covers, Bowie quit the band less than a month later to join the Mannish Boys, another blues outfit, who incorporated folk and soul—"I used to dream of being their Mick Jagger", Bowie was to recall. Their cover of Bobby Bland's "I Pity the Fool" was no more successful than "Liza Jane", and Bowie soon moved on again to join the Lower Third, a blues trio strongly influenced by The Who. "You've Got a Habit of Leaving" fared no better, signaling the end of Conn's contract. Declaring that he would exit the pop music world "to study mime at Sadler's Wells", Bowie nevertheless remained with the Lower Third. His new manager, Ralph Horton, later instrumental in his transition to solo artist, soon witnessed Bowie's move to yet another group, the Buzz, yielding the singer's fifth unsuccessful single release, "Do Anything You Say". While with the Buzz, Bowie also joined the Riot Squad; their recordings, which included one of Bowie's original songs and material by The Velvet Underground, went unreleased. Ken Pitt, introduced by Horton, took over as Bowie's manager. Dissatisfied with his stage name as Davy (and Davie) Jones, which in the mid-1960s invited confusion with Davy Jones of The Monkees, Bowie renamed himself after the 19th-century American pioneer James Bowie and the knife he had popularised. His April 1967 solo single, "The Laughing Gnome", using speeded-up thus high-pitched vocals, failed to chart. Released six weeks later, his album debut, "David Bowie", an amalgam of pop, psychedelia, and music hall, met the same fate. It was his last release for two years. Bowie met dancer Lindsay Kemp in 1967 and enrolled in his dance class at the London Dance Centre. He commented in 1972 that meeting Kemp was when his interest in image "really blossomed". "He lived on his emotions, he was a wonderful influence. His day-to-day life was the most theatrical thing I had ever seen, ever. It was everything I thought Bohemia probably was. I joined the circus." Studying the dramatic arts under Kemp, from avant-garde theatre and mime to commedia dell'arte, Bowie became immersed in the creation of personae to present to the world. Satirising life in a British prison, meanwhile, the Bowie-penned "Over the Wall We Go" became a 1967 single for Oscar; another Bowie composition, "Silly Boy Blue", was released by Billy Fury the following year. In January 1968, Kemp choreographed a dance scene for a BBC play, "The Pistol Shot", in the Theatre 625 series, and used Bowie with a dancer, Hermione Farthingale; the pair began dating, and moved into a London flat together. 
Playing acoustic guitar, Farthingale formed a group with Bowie and guitarist John Hutchinson; between September 1968 and early 1969 the trio gave a small number of concerts combining folk, Merseybeat, poetry, and mime. Bowie and Farthingale broke up in early 1969 when she went to Norway to take part in a film, "Song of Norway"; this affected him, and several songs, such as "Letter to Hermione" and "Life on Mars?" reference her, and for the video accompanying "Where Are We Now?", he wore a T-shirt with the words "m/s Song of Norway". They were last together in January 1969 for the filming of "Love You till Tuesday", a 30-minute film that was not released until 1984: intended as a promotional vehicle, it featured performances from Bowie's repertoire, including "Space Oddity", which had not been released when the film was made. After the break-up with Farthingale, Bowie moved in with Mary Finnigan as her lodger. During this period he appeared in a Lyons Maid ice cream commercial, and was rejected for another by Kit Kat. In February and March 1969, he undertook a short tour with Marc Bolan's duo Tyrannosaurus Rex, as third on the bill, performing a mime act. On 11 July 1969, "Space Oddity" was released five days ahead of the Apollo 11 launch, and reached the top five in the UK. Continuing the divergence from rock and roll and blues begun by his work with Farthingale, Bowie joined forces with Finnigan, Christina Ostrom and Barrie Jackson to run a folk club on Sunday nights at the Three Tuns pub in Beckenham High Street. Influenced by the Arts Lab movement, this developed into the Beckenham Arts Lab, and became extremely popular. The Arts Lab hosted a free festival in a local park, the subject of his song "Memory of a Free Festival". Bowie's second album followed in November; originally issued in the UK as "David Bowie", it caused some confusion with its predecessor of the same name, and the early US release was instead titled "Man of Words/Man of Music"; it was reissued internationally in 1972 by RCA Records as "Space Oddity". Featuring philosophical post-hippie lyrics on peace, love, and morality, its acoustic folk rock occasionally fortified by harder rock, the album was not a commercial success at the time of its release. Bowie met Angela Barnett in April 1969. They married within a year. Her impact on him was immediate, and her involvement in his career far-reaching, leaving manager Ken Pitt with limited influence which he found frustrating. Having established himself as a solo artist with "Space Oddity", Bowie began to sense a lacking: "a full-time band for gigs and recording—people he could relate to personally". The shortcoming was underlined by his artistic rivalry with Marc Bolan, who was at the time acting as his session guitarist. A band was duly assembled. John Cambridge, a drummer Bowie met at the Arts Lab, was joined by Tony Visconti on bass and Mick Ronson on electric guitar. Known as the Hype, the bandmates created characters for themselves and wore elaborate costumes that prefigured the glam style of the Spiders from Mars. After a disastrous opening gig at the London Roundhouse, they reverted to a configuration presenting Bowie as a solo artist. Their initial studio work was marred by a heated disagreement between Bowie and Cambridge over the latter's drumming style. Matters came to a head when an enraged Bowie accused the drummer of the disturbance, exclaiming "You're fucking up my album." Cambridge summarily quit and was replaced by Mick Woodmansey. 
Not long after, the singer fired his manager and replaced him with Tony Defries. This resulted in years of litigation that concluded with Bowie having to pay Pitt compensation. The studio sessions continued and resulted in Bowie's third album, "The Man Who Sold the World" (1970), which contained references to schizophrenia, paranoia, and delusion. Characterised by the heavy rock sound of his new backing band, it was a marked departure from the acoustic guitar and folk rock style established by "Space Oddity". To promote it in the US, Mercury Records financed a coast-to-coast publicity tour across America in which Bowie, between January and February 1971, was interviewed by radio stations and the media. Exploiting his androgynous appearance, the original cover of the UK version unveiled two months later depicted the singer wearing a dress: taking the garment with him, he wore it during interviews—to the approval of critics, including "Rolling Stone"s John Mendelsohn who described him as "ravishing, almost disconcertingly reminiscent of Lauren Bacall"  – and in the street, to mixed reaction including laughter and, in the case of one male pedestrian, producing a gun and telling Bowie to "kiss my ass". During the tour, Bowie's observation of two seminal American proto-punk artists led him to develop a concept that eventually found form in the Ziggy Stardust character: a melding of the persona of Iggy Pop with the music of Lou Reed, producing "the ultimate pop idol". A girlfriend recalled his "scrawling notes on a cocktail napkin about a crazy rock star named Iggy or Ziggy", and on his return to England he declared his intention to create a character "who looks like he's landed from Mars". The "Stardust" surname was a tribute to the "Legendary Stardust Cowboy", whose record he was given during the tour. Bowie would later cover "I Took a Trip on a Gemini Space Ship" on 2002's "Heathen". "Hunky Dory" (1971) found Visconti, Bowie's producer and bassist, supplanted in both roles by Ken Scott and Trevor Bolder respectively. The album saw the partial return of the fey pop singer of "Space Oddity", with light fare such as "Kooks", a song written for his son, Duncan Zowie Haywood Jones, born on 30 May. (His parents chose "his kooky name"—he was known as Zowie for the next 12 years—after the Greek word "zoe", life.) Elsewhere, the album explored more serious subjects, and found Bowie paying unusually direct homage to his influences with "Song for Bob Dylan", "Andy Warhol", and "Queen Bitch", a Velvet Underground pastiche. It was not a significant commercial success at the time. Dressed in a striking costume, his hair dyed reddish-brown, Bowie launched his Ziggy Stardust stage show with the Spiders from Mars—Ronson, Bolder, and Woodmansey—at the Toby Jug pub in Tolworth on 10 February 1972. The show was hugely popular, catapulting him to stardom as he toured the UK over the next six months and creating, as described by Buckley, a "cult of Bowie" that was "unique—its influence lasted longer and has been more creative than perhaps almost any other force within pop fandom." "The Rise and Fall of Ziggy Stardust and the Spiders from Mars" (1972), combining the hard rock elements of "The Man Who Sold the World" with the lighter experimental rock and pop of "Hunky Dory", was released in June. "Starman", issued as an April single ahead of the album, was to cement Bowie's UK breakthrough: both single and album charted rapidly following his July "Top of the Pops" performance of the song. 
The album, which remained in the chart for two years, was soon joined there by the six-month-old "Hunky Dory". At the same time, the non-album single "John, I'm Only Dancing" and "All the Young Dudes", a song he wrote and produced for Mott the Hoople, were successful in the UK. The Ziggy Stardust Tour continued to the United States. Bowie contributed backing vocals, keyboards, and guitar to Lou Reed's 1972 solo breakthrough "Transformer", co-producing the album with Mick Ronson. The following year, Bowie co-produced and mixed The Stooges album "Raw Power" alongside Iggy Pop. His own "Aladdin Sane" (1973) topped the UK chart, his first number-one album. Described by Bowie as "Ziggy goes to America", it contained songs he wrote while travelling to and across the US during the earlier part of the Ziggy tour, which now continued to Japan to promote the new album. "Aladdin Sane" spawned the UK top five singles "The Jean Genie" and "Drive-In Saturday". Bowie's love of acting led to his total immersion in the characters he created for his music. "Offstage I'm a robot. Onstage I achieve emotion. It's probably why I prefer dressing up as Ziggy to being David." With satisfaction came severe personal difficulties: after acting the same role over an extended period, he found it impossible to separate Ziggy Stardust—and, later, the Thin White Duke—from his own character offstage. Ziggy, Bowie said, "wouldn't leave me alone for years. That was when it all started to go sour ... My whole personality was affected. It became very dangerous. I really did have doubts about my sanity." His later Ziggy shows, which included songs from both "Ziggy Stardust" and "Aladdin Sane", were ultra-theatrical affairs filled with shocking stage moments, such as Bowie stripping down to a sumo wrestling loincloth or simulating oral sex with Ronson's guitar. Bowie toured and gave press conferences as Ziggy before a dramatic and abrupt on-stage "retirement" at London's Hammersmith Odeon on 3 July 1973. Footage from the final show was later released in the 1983 concert film "Ziggy Stardust and the Spiders from Mars". After breaking up the Spiders from Mars, Bowie attempted to move on from his Ziggy persona. His back catalogue was now highly sought after: "The Man Who Sold the World" had been re-released in 1972 along with "Space Oddity". "Life on Mars?", from "Hunky Dory", was released in June 1973 and peaked at No. 3 on the UK Singles Chart. Entering the same chart in September, Bowie's novelty record from 1967, "The Laughing Gnome", reached No. 6. "Pin Ups", a collection of covers of his 1960s favourites, followed in October, producing a UK No. 3 hit in his version of the McCoys' "Sorrow" and itself peaking at number one, making David Bowie the best-selling act of 1973 in the UK. It brought the total number of Bowie albums concurrently on the UK chart to six. Bowie moved to the US in 1974, initially staying in New York City before settling in Los Angeles. "Diamond Dogs" (1974), parts of which found him heading towards soul and funk, was the product of two distinct ideas: a musical based on a wild future in a post-apocalyptic city, and setting George Orwell's "1984" to music. The album went to number one in the UK and No. 5 in the US, spawning the hits "Rebel Rebel" and "Diamond Dogs". To promote it, Bowie launched the Diamond Dogs Tour, visiting cities in North America between June and December 1974. 
Choreographed by Toni Basil, and lavishly produced with theatrical special effects, the high-budget stage production was filmed by Alan Yentob. The resulting documentary, "Cracked Actor", featured a pasty and emaciated Bowie: the tour coincided with the singer's slide from heavy cocaine use into addiction, producing severe physical debilitation, paranoia, and emotional problems. He later commented that the accompanying live album, "David Live", ought to have been titled "David Bowie Is Alive and Well and Living Only in Theory". "David Live" nevertheless solidified Bowie's status as a superstar, charting at No. 2 in the UK and No. 8 in the US. It also spawned a UK No. 10 hit in Bowie's cover of Eddie Floyd's "Knock on Wood". After a break in Philadelphia, where Bowie recorded new material, the tour resumed with a new emphasis on soul. The fruit of the Philadelphia recording sessions was "Young Americans" (1975). Biographer Christopher Sandford writes, "Over the years, most British rockers had tried, one way or another, to become black-by-extension. Few had succeeded as Bowie did now." The album's sound, which the singer identified as "plastic soul", constituted a radical shift in style that initially alienated many of his UK devotees. "Young Americans" yielded Bowie's first US number one, "Fame", co-written with John Lennon, who contributed backing vocals, and Carlos Alomar. Lennon called Bowie's work "great, but it's just rock'n'roll with lipstick on". Earning the distinction of being one of the first white artists to appear on the US variety show "Soul Train", Bowie mimed "Fame", as well as "Golden Years", his November single, which was originally offered to Elvis Presley, who declined it. "Young Americans" was a commercial success in both the US and the UK, and a re-issue of the 1969 single "Space Oddity" became Bowie's first number-one hit in the UK a few months after "Fame" achieved the same in the US. Despite his by now well established superstardom, Bowie, in the words of Sandford, "for all his record sales (over a million copies of "Ziggy Stardust" alone), existed essentially on loose change." In 1975, in a move echoing Ken Pitt's acrimonious dismissal five years earlier, Bowie fired his manager. At the culmination of the ensuing months-long legal dispute, he watched, as described by Sandford, "millions of dollars of his future earnings being surrendered" in what were "uniquely generous terms for Defries", then "shut himself up in West 20th Street, where for a week his howls could be heard through the locked attic door." Michael Lippman, Bowie's lawyer during the negotiations, became his new manager; Lippman in turn was awarded substantial compensation when Bowie fired him the following year. "Station to Station" (1976), produced by Bowie and Harry Maslin, introduced a new Bowie persona, "The Thin White Duke" of its title-track. Visually, the character was an extension of Thomas Jerome Newton, the extraterrestrial being he portrayed in the film "The Man Who Fell to Earth" the same year. Developing the funk and soul of "Young Americans", "Station to Station"'s synthesizer-heavy arrangements prefigured the krautrock-influenced music of his next releases. The extent to which drug addiction was now affecting Bowie was made public when Russell Harty interviewed the singer for his London Weekend Television talk show in anticipation of the album's supporting tour. 
Shortly before the satellite-linked interview was scheduled to commence, the death of the Spanish dictator Francisco Franco was announced. Bowie was asked to relinquish the satellite booking, to allow the Spanish Government to put out a live newsfeed. This he refused to do, and his interview went ahead. In the ensuing lengthy conversation with Harty, Bowie was incoherent and looked "disconnected". His sanity—by his own later admission—had become twisted from cocaine; he overdosed several times during the year, and was withering physically to an alarming degree. "Station to Station"'s January 1976 release was followed in February by a 3½-month concert tour of Europe and North America. Featuring a starkly lit set, the Isolar – 1976 Tour, with its iconic colour newsprint concert programme, highlighted songs from the album, including the dramatic and lengthy title track, the ballads "Wild Is the Wind" and "Word on a Wing", and the funkier "TVC 15" and "Stay". The core band that coalesced to record this album and tour—rhythm guitarist Carlos Alomar, bassist George Murray, and drummer Dennis Davis—continued as a stable unit for the remainder of the 1970s. The tour was highly successful but mired in political controversy. Bowie was quoted in Stockholm as saying that "Britain could benefit from a Fascist leader", and was detained by customs on the Russian/Polish border for possessing Nazi paraphernalia. Matters came to a head in London in May in what became known as the "Victoria Station incident". Arriving in an open-top Mercedes convertible, Bowie waved to the crowd in a gesture that some alleged was a Nazi salute, which was captured on camera and published in "NME". Bowie said the photographer simply caught him in mid-wave. He later blamed his pro-fascism comments and his behaviour during the period on his addictions and the character of the Thin White Duke. "I was out of my mind, totally crazed. The main thing I was functioning on was mythology ... that whole thing about Hitler and Rightism ... I'd discovered King Arthur". According to playwright Alan Franks, writing later in "The Times", "he was indeed 'deranged'. He had some very bad experiences with hard drugs." Bowie's cocaine addiction, which had motivated these controversies, had much to do with his time living in Los Angeles, a city which alienated him. Discussing his flirtations with fascism in a 1980 interview with "NME", Bowie explained that Los Angeles was "where it had all happened. The fucking place should be wiped off the face of the Earth. To be anything to do with rock and roll and go and live in Los Angeles is, I think, just heading for disaster. It really is." After recovering from addiction, Bowie apologized for these statements, and throughout the 1980s and '90s criticized racism in European politics and the American music industry. Nevertheless, Bowie's comments on fascism, as well as Eric Clapton's alcohol-fuelled denunciations of Pakistani immigrants in 1976, contributed to the establishment of Rock Against Racism. Bowie moved to Switzerland in 1976, purchasing a chalet in the hills to the north of Lake Geneva. In the new environment, his cocaine use decreased and he found time for other pursuits outside his musical career. He devoted more time to his painting, and produced a number of post-modernist pieces. When on tour, he took to sketching in a notebook, and photographing scenes for later reference. 
Visiting galleries in Geneva and the Brücke Museum in Berlin, Bowie became, in the words of biographer Christopher Sandford, "a prolific producer and collector of contemporary art. ... Not only did he become a well-known patron of expressionist art: locked in Clos des Mésanges he began an intensive self-improvement course in classical music and literature, and started work on an autobiography." Before the end of 1976, Bowie's interest in the burgeoning German music scene, as well as his drug addiction, prompted him to move to West Berlin to clean up and revitalise his career. There he was often seen riding a bicycle between his apartment on Hauptstraße in Schöneberg and Hansa Tonstudio, the recording studio he used, located on Köthener Straße in Kreuzberg, near the Berlin Wall. While working with Brian Eno and sharing an apartment with Iggy Pop, he began to focus on minimalist, ambient music for the first of three albums, co-produced with Tony Visconti, that became known as his Berlin Trilogy. During the same period, Iggy Pop, with Bowie as a co-writer and musician, completed his solo album debut "The Idiot" and its follow-up "Lust for Life", touring the UK, Europe, and the US in March and April 1977. The album "Low" (1977), partly influenced by the Krautrock sound of Kraftwerk and Neu!, evinced a move away from narration in Bowie's songwriting to a more abstract musical form in which lyrics were sporadic and optional. Although he completed the album in November 1976, it took his unsettled record company another three months to release it. It received considerable negative criticism upon its release—a release which RCA, anxious to maintain the established commercial momentum, had not welcomed, and which Bowie's former manager, Tony Defries, who still maintained a significant financial interest in the singer's affairs, had tried to prevent. Despite these forebodings, "Low" yielded the UK No. 3 single "Sound and Vision", and its own performance surpassed that of "Station to Station" in the UK chart, where it reached No. 2. Leading contemporary composer Philip Glass described "Low" as "a work of genius" in 1992, when he used it as the basis for his "Symphony No. 1 "Low""; subsequently, Glass used Bowie's next album as the basis for his 1996 "Symphony No. 4 "Heroes"". Glass has praised Bowie's gift for creating "fairly complex pieces of music, masquerading as simple pieces". Also in 1977, London Records released "Starting Point", a ten-song LP containing recordings from Bowie's Deram period (1966–67). Echoing "Low"'s minimalist, instrumental approach, the second of the trilogy, ""Heroes"" (1977), incorporated pop and rock to a greater extent, seeing Bowie joined by guitarist Robert Fripp. Like "Low", ""Heroes"" evinced the zeitgeist of the Cold War, symbolised by the divided city of Berlin. Incorporating ambient sounds from a variety of sources including white noise generators, synthesisers and koto, the album was another hit, reaching No. 3 in the UK. Its title track, though only reaching No. 24 in the UK singles chart, gained lasting popularity, and within months had been released in both German and French. Towards the end of the year, Bowie performed the song for Marc Bolan's television show "Marc", and again two days later for Bing Crosby's final CBS television Christmas special, when he joined Crosby in "Peace on Earth/Little Drummer Boy", a version of "The Little Drummer Boy" with a new, contrapuntal verse. Five years later, the duet proved a worldwide seasonal hit, charting in the UK at No. 
3 on Christmas Day, 1982. After completing "Low" and ""Heroes"", Bowie spent much of 1978 on the Isolar II world tour, bringing the music of the first two Berlin Trilogy albums to almost a million people during 70 concerts in 12 countries. By now he had broken his drug addiction; biographer David Buckley writes that Isolar II was "Bowie's first tour for five years in which he had probably not anaesthetised himself with copious quantities of cocaine before taking the stage. ... Without the oblivion that drugs had brought, he was now in a healthy enough mental condition to want to make friends." Recordings from the tour made up the live album "Stage", released the same year. The final piece in what Bowie called his "triptych", "Lodger" (1979), eschewed the minimalist, ambient nature of the other two, making a partial return to the drum- and guitar-based rock and pop of his pre-Berlin era. The result was a complex mixture of new wave and world music, in places incorporating Hijaz non-Western scales. Some tracks were composed using Eno and Peter Schmidt's Oblique Strategies cards: "Boys Keep Swinging" entailed band members swapping instruments, "Move On" used the chords from Bowie's early composition "All the Young Dudes" played backwards, and "Red Money" took backing tracks from "Sister Midnight", a piece previously composed with Iggy Pop. The album was recorded in Switzerland. Ahead of its release, RCA's Mel Ilberman stated, "It would be fair to call it Bowie's "Sergeant Pepper" ... a concept album that portrays the Lodger as a homeless wanderer, shunned and victimized by life's pressures and technology." As described by biographer Christopher Sandford, "The record dashed such high hopes with dubious choices, and production that spelt the end—for fifteen years—of Bowie's partnership with Eno." "Lodger" reached No. 4 in the UK and No. 20 in the US, and yielded the UK hit singles "Boys Keep Swinging" and "DJ". Towards the end of the year, Bowie and Angie initiated divorce proceedings, and after months of court battles the marriage was ended in early 1980. "Scary Monsters and Super Creeps" (1980) produced the number-one hit "Ashes to Ashes", featuring the textural work of guitar-synthesist Chuck Hammer and revisiting the character of Major Tom from "Space Oddity". The song gave international exposure to the underground New Romantic movement when Bowie visited the London club "Blitz"—the main New Romantic hangout—to recruit several of the regulars (including Steve Strange of the band Visage) to act in the accompanying video, renowned as one of the most innovative of all time. While "Scary Monsters" used principles established by the Berlin albums, it was considered by critics to be far more direct musically and lyrically. The album's hard rock edge included conspicuous guitar contributions from Robert Fripp, Chuck Hammer, and Pete Townshend. As "Ashes to Ashes" hit number one on the UK charts, Bowie opened a three-month run on Broadway on 24 September, starring as John Merrick in "The Elephant Man". Bowie paired with Queen in 1981 for a one-off single release, "Under Pressure". The duet was a hit, becoming Bowie's third UK number-one single. Bowie was given the lead role in the BBC's 1982 televised adaptation of Bertolt Brecht's play "Baal". Coinciding with its transmission, a five-track EP of songs from the play, recorded earlier in Berlin, was released as "David Bowie in Bertolt Brecht's Baal". 
In March 1982, the month before Paul Schrader's film "Cat People" came out, Bowie's title song, "Cat People (Putting Out Fire)", was released as a single, becoming a minor US hit and entering the UK Top 30. Bowie reached his peak of popularity and commercial success in 1983 with "Let's Dance". Co-produced by Chic's Nile Rodgers, the album went platinum in both the UK and the US. Its three singles became Top 20 hits in both countries, where its title track reached number one. "Modern Love" and "China Girl" each made No. 2 in the UK, accompanied by a pair of "absorbing" promotional videos that biographer David Buckley said "activated key archetypes in the pop world. 'Let's Dance', with its little narrative surrounding the young Aborigine couple, targeted 'youth', and 'China Girl', with its bare-bummed (and later partially censored) beach lovemaking scene (a homage to the film "From Here to Eternity"), was sufficiently sexually provocative to guarantee heavy rotation on MTV". Stevie Ray Vaughan was the guest guitarist who played the solo on "Let's Dance", although the video depicts Bowie miming the part. By 1983, Bowie had emerged as one of the most important video artists of the day. "Let's Dance" was followed by the Serious Moonlight Tour, during which Bowie was accompanied by guitarist Earl Slick and backing vocalists Frank and George Simms. The world tour lasted six months and was extremely popular. At the 1984 MTV Video Music Awards, Bowie received two awards, including the inaugural Video Vanguard Award. "Tonight" (1984), another dance-oriented album, found Bowie collaborating with Tina Turner and, once again, Iggy Pop. It included a number of cover songs, among them the 1966 Beach Boys hit "God Only Knows". The album bore the transatlantic Top 10 hit "Blue Jean", itself the inspiration for "Jazzin' for Blue Jean", a short film that won Bowie a Grammy Award for Best Short Form Music Video. Bowie performed at Wembley Stadium in 1985 for Live Aid, a multi-venue benefit concert for Ethiopian famine relief. During the event, the video for Bowie's fundraising duet with Mick Jagger, "Dancing in the Street", was premiered; the single quickly went to number one on release. The same year, Bowie worked with the Pat Metheny Group to record "This Is Not America" for the soundtrack of "The Falcon and the Snowman". Released as a single, the song became a Top 40 hit in the UK and US. Bowie was given a role in the 1986 film "Absolute Beginners". It was poorly received by critics, but Bowie's theme song, also named "Absolute Beginners", rose to No. 2 in the UK charts. He also appeared as Jareth, the Goblin King, in the 1986 Jim Henson film "Labyrinth", for which he wrote five songs. His final solo album of the decade was 1987's "Never Let Me Down", on which he ditched the light sound of his previous two albums, instead offering harder rock with an industrial/techno dance edge. Peaking at No. 6 in the UK, the album yielded the hits "Day-In, Day-Out" (his 60th single), "Time Will Crawl", and "Never Let Me Down". Bowie later described it as his "nadir", calling it "an awful album". Supporting "Never Let Me Down", and preceded by nine promotional press shows, the 86-concert Glass Spider Tour commenced on 30 May. Bowie's backing band included Peter Frampton on lead guitar. 
Contemporary critics maligned the tour as overproduced, saying it pandered to the stadium rock trends of the day in its special effects and dancing, although years after the tour's conclusion, critics acknowledged that it influenced how other artists, including Britney Spears, Madonna, and U2, performed concerts. Bowie shelved his solo career in 1989, retreating to the relative anonymity of band membership for the first time since the early 1970s. A hard-rocking quartet, Tin Machine came into being after Bowie began to work experimentally with guitarist Reeves Gabrels. The line-up was completed by Tony and Hunt Sales, whom Bowie had known since the late 1970s for their contribution, on bass and drums respectively, to Iggy Pop's 1977 album "Lust for Life". Although he intended Tin Machine to operate as a democracy, Bowie dominated, both in songwriting and in decision-making. The band's album debut, "Tin Machine" (1989), was initially popular, though its politicised lyrics did not find universal approval: Bowie described one song as "a simplistic, naive, radical, laying-it-down about the emergence of Neo-Nazis"; in the view of biographer Christopher Sandford, "It took nerve to denounce drugs, fascism and TV ... in terms that reached the literary level of a comic book." EMI complained of "lyrics that preach" as well as "repetitive tunes" and "minimalist or no production". The album nevertheless reached No. 3 and went gold in the UK. Tin Machine's first world tour was a commercial success, but there was growing reluctance—among fans and critics alike—to accept Bowie's presentation as merely a band member. A series of Tin Machine singles failed to chart, and Bowie, after a disagreement with EMI, left the label. Like his audience and his critics, Bowie himself became increasingly disaffected with his role as just one member of a band. Tin Machine began work on a second album, but Bowie put the venture on hold and made a return to solo work. Performing his early hits during the seven-month Sound+Vision Tour, he found commercial success and acclaim once again. In October 1990, a decade after his divorce from Angie, Bowie and Somali-born supermodel Iman were introduced by a mutual friend. Bowie recalled, "I was naming the children the night we met ... it was absolutely immediate." They married in 1992. Tin Machine resumed work that October, but their audience and critics, ultimately left disappointed by the first album, showed little interest in a second. "Tin Machine II"'s arrival was marked by a widely publicised and ill-timed conflict over the cover art: after production had begun, the new record label, Victory, deemed the depiction of four ancient nude Kouroi statues, judged by Bowie to be "in exquisite taste", "a show of wrong, obscene images", requiring air-brushing and patching to render the figures sexless. Tin Machine toured again, but after the 1992 live album "Tin Machine Live: Oy Vey, Baby" failed commercially, the band drifted apart, and Bowie, though he continued to collaborate with Gabrels, resumed his solo career. On 20 April 1992, Bowie appeared at The Freddie Mercury Tribute Concert, following the Queen singer's death the previous year. As well as performing "Heroes" and "All the Young Dudes", he was joined on "Under Pressure" by Annie Lennox, who took Mercury's vocal part; during his appearance, Bowie knelt and recited the Lord's Prayer at Wembley Stadium. Four days later, Bowie and Iman were married in Switzerland. 
Intending to move to Los Angeles, they flew in to search for a suitable property, but found themselves confined to their hotel, under curfew: the 1992 Los Angeles riots began the day they arrived. They settled in New York instead. In 1993, Bowie released his first solo offering since his Tin Machine departure, the soul-, jazz-, and hip-hop-influenced "Black Tie White Noise". Making prominent use of electronic instruments, the album, which reunited Bowie with "Let's Dance" producer Nile Rodgers, confirmed Bowie's return to popularity, hitting the number-one spot on the UK charts and spawning three Top 40 hits, including the Top 10 single "Jump They Say". Bowie explored new directions on "The Buddha of Suburbia" (1993), ostensibly a soundtrack album of his music composed for the BBC television adaptation of Hanif Kureishi's novel. Only the title track had been used in the television adaptation, although some of his themes for it were also present on the album. It contained some of the new elements introduced in "Black Tie White Noise", and also signalled a move towards alternative rock. The album was a critical success but received a low-key release and only made No. 87 in the UK charts. Reuniting Bowie with Eno, the quasi-industrial "Outside" (1995) was originally conceived as the first volume in a non-linear narrative of art and murder. Featuring characters from a short story written by Bowie, the album achieved UK and US chart success, and yielded three Top 40 UK singles. In a move that provoked mixed reaction from both fans and critics, Bowie chose Nine Inch Nails as his tour partner for the Outside Tour. Visiting cities in Europe and North America between September 1995 and February 1996, the tour saw the return of Gabrels as Bowie's guitarist. On 7 January 1997, Bowie celebrated his 50th birthday with a concert at Madison Square Garden, New York, at which he was joined in playing his songs, and those of his guests, by Lou Reed, Dave Grohl and the Foo Fighters, Robert Smith of the Cure, Billy Corgan of the Smashing Pumpkins, Black Francis of the Pixies, and Sonic Youth. The previous year, on 17 January 1996, Bowie had been inducted into the Rock and Roll Hall of Fame. Incorporating experiments in British jungle and drum 'n' bass, "Earthling" (1997) was a critical and commercial success in the UK and the US, and two singles from the album – "Little Wonder" and "Dead Man Walking" – became UK Top 40 hits. Bowie's song "I'm Afraid of Americans", from the Paul Verhoeven film "Showgirls", was re-recorded for the album and remixed by Trent Reznor for a single release. The heavy rotation of the accompanying video, which also featured Reznor, contributed to the song's 16-week stay in the US "Billboard" Hot 100. Bowie received a star on the Hollywood Walk of Fame on 12 February 1997. The Earthling Tour took in Europe and North America between June and November 1997. In November 1997, Bowie performed on the BBC's Children in Need charity single "Perfect Day", which reached number one in the UK. Bowie reunited with Visconti in 1998 to record "(Safe in This) Sky Life" for "The Rugrats Movie". Although the track was edited out of the final cut, it was later re-recorded and released as "Safe" on the B-side of Bowie's 2002 single "Everyone Says 'Hi'". The reunion led to other collaborations, including a limited-edition single version of Placebo's track "Without You I'm Nothing", co-produced by Visconti, with Bowie's harmonised vocal added to the original recording. 
Bowie Bonds, an early example of celebrity bonds, were asset-backed securities backed by the current and future revenues of the 25 albums (287 songs) that Bowie recorded before 1990. Bowie Bonds were pioneered by rock and roll investment banker David Pullman. Issued in 1997, the bonds were bought for US$55 million by the Prudential Insurance Company of America. The bonds paid an interest rate of 7.9% and had an average life of ten years, a higher rate of return than a 10-year Treasury note (at the time, 6.37%). Royalties from the 25 albums generated the cash flow that secured the bonds' interest payments. Prudential also received guarantees from Bowie's label, EMI Records, which had recently signed a $30m deal with Bowie. By forfeiting ten years' worth of royalties, Bowie received a payment of US$55 million up front. Bowie used this income to buy songs owned by his former manager. Bowie's combined catalogue of albums covered by this agreement sold more than 1 million copies annually at the time of the agreement. By March 2004, Moody's Investors Service had lowered the bonds from an A3 rating (the seventh highest rating) to Baa3, one notch above junk status. The downgrade was prompted by lower-than-expected revenue "due to weakness in sales for recorded music" and by the fact that the issue was guaranteed by an unnamed company. Nonetheless, the bonds were liquidated in 2007 as originally planned, without default, and the rights to the income from the songs reverted to Bowie. In September 1998, Bowie launched an Internet service provider, BowieNet, developed in conjunction with Robert Goodale and Ron Roy. Subscribers to the dial-up service were offered exclusive content, as well as a BowieNet email address and Internet access. The service was closed by 2006. Bowie, with Reeves Gabrels, created the soundtrack for "Omikron: The Nomad Soul", a 1999 computer game in which he and Iman also voiced characters based on their likenesses. Released the same year and containing re-recorded tracks from "Omikron", his album "Hours" featured a song with lyrics by the winner of his "Cyber Song Contest" Internet competition, Alex Grant. Making extensive use of live instruments, the album marked Bowie's exit from heavy electronica. Sessions for the planned album "Toy", intended to feature new versions of some of Bowie's earliest pieces as well as three new songs, commenced in 2000, but the album was never released. Bowie and Visconti continued their collaboration, producing a new album of completely original songs instead: the result of the sessions was the 2002 album "Heathen". On 25 June 2000, Bowie made his second appearance at the Glastonbury Festival in England, playing 30 years after his first. On 27 June, Bowie performed a concert at BBC Radio Theatre in London, which was released as part of the compilation album "Bowie at the Beeb", alongside BBC recording sessions from 1968 to 1972. Bowie and Iman's daughter was born on 15 August. In October 2001, Bowie opened the Concert for New York City, a charity event to benefit the victims of the September 11 attacks, with a minimalist performance of Simon & Garfunkel's "America", followed by a full band performance of "Heroes". 2002 saw the release of "Heathen", and, during the second half of the year, the Heathen Tour. Taking place in Europe and North America, the tour opened at London's annual "Meltdown" festival, for which Bowie was that year appointed artistic director. Among the acts he selected for the festival were Philip Glass, Television, and the Dandy Warhols. 
As well as songs from the new album, the tour featured material from Bowie's "Low" era. "Reality" (2003) followed, and its accompanying world tour, the A Reality Tour, with an estimated attendance of 722,000, grossed more than any other in 2004. Onstage in Oslo, Norway, on 18 June, Bowie was hit in the eye with a lollipop thrown by a fan; a week later he suffered chest pain while performing at the Hurricane Festival in Scheeßel, Germany. Originally thought to be a pinched nerve in his shoulder, the pain was later diagnosed as an acutely blocked coronary artery, requiring an emergency angioplasty in Hamburg. The remaining 14 dates of the tour were cancelled. That same year, his interest in Buddhism led him to support the Tibetan cause by performing at a concert to support the New York Tibet House. In the years following his recuperation from the heart attack, Bowie reduced his musical output, making only one-off appearances on stage and in the studio. He sang in a duet of his 1971 song "Changes" with Butterfly Boucher for the 2004 animated film "Shrek 2". During a relatively quiet 2005, he recorded the vocals for the song "(She Can) Do That", co-written with Brian Transeau, for the film "Stealth". He returned to the stage on 8 September 2005, appearing with Arcade Fire for the US nationally televised event Fashion Rocks, and performed with the Canadian band for the second time a week later during the CMJ Music Marathon. He contributed backing vocals on TV on the Radio's song "Province" for their album "Return to Cookie Mountain", made a commercial with Snoop Dogg for XM Satellite Radio and joined with Lou Reed on Danish alt-rockers Kashmir's 2005 album "No Balance Palace". Bowie was awarded the Grammy Lifetime Achievement Award on 8 February 2006. In April, he announced, "I'm taking a year off—no touring, no albums." He made a surprise guest appearance at David Gilmour's 29 May concert at the Royal Albert Hall in London. The event was recorded, and a selection of songs on which he had contributed joint vocals were subsequently released. He performed again in November, alongside Alicia Keys, at the Black Ball, a benefit event for Keep a Child Alive at the Hammerstein Ballroom in New York. The performance marked the last time Bowie performed his music on stage. Bowie was chosen to curate the 2007 High Line Festival, selecting musicians and artists for the Manhattan event, including electronic pop duo AIR, surrealist photographer Claude Cahun, and English comedian Ricky Gervais. Bowie performed on Scarlett Johansson's 2008 album of Tom Waits covers, "Anywhere I Lay My Head". On the 40th anniversary of the July 1969 moon landing—and Bowie's accompanying commercial breakthrough with "Space Oddity"—EMI released the individual tracks from the original eight-track studio recording of the song, in a 2009 contest inviting members of the public to create a remix. "A Reality Tour", a double album of live material from the 2003 concert tour, was released in January 2010. In late March 2011, "Toy", Bowie's previously unreleased album from 2001, was leaked onto the internet, containing material used for "Heathen" and most of its single B-sides, as well as unheard new versions of his early back catalogue. On 8 January 2013, his 66th birthday, his website announced a new album, to be titled "The Next Day" and scheduled for release 8 March for Australia, 12 March for the United States, and 11 March for the rest of the world. 
Bowie's first studio album in a decade, "The Next Day" contains 14 songs plus 3 bonus tracks. His website acknowledged the length of his hiatus. Record producer Tony Visconti said 29 tracks were recorded for the album, some of which could appear on Bowie's next record, which he might start work on later in 2013. The announcement was accompanied by the immediate release of a single, "Where Are We Now?", written and recorded by Bowie in New York and produced by longtime collaborator Visconti. A music video for "Where Are We Now?" was released onto Vimeo the same day, directed by New York artist Tony Oursler. The single topped the UK iTunes Chart within hours of its release, and debuted in the UK Singles Chart at No. 6, his first single to enter the Top 10 for two decades (since "Jump They Say" in 1993). A second video, "The Stars (Are Out Tonight)", was released 25 February. Directed by Floria Sigismondi, it stars Bowie and Tilda Swinton as a married couple. On 1 March, the album was made available to stream for free through iTunes. "The Next Day" debuted at No. 1 on the UK Albums Chart, was his first album to achieve that position since "Black Tie White Noise" (1993), and was the fastest-selling album of 2013 at the time. The music video for the song "The Next Day" created some controversy, initially being removed from YouTube for terms-of-service violation, then restored with a warning recommending viewing only by those 18 or over. According to "The Times", Bowie ruled out ever giving an interview again. An exhibition of Bowie artefacts, called "David Bowie Is", was organised by the Victoria and Albert Museum in London, and shown there in 2013. The London exhibit was visited by 311,956 people, making it one of the most successful exhibitions ever staged at the museum. Later that year the exhibition began a world tour, starting in Toronto and including stops in Chicago, Paris, Melbourne, and Groningen. Bowie was featured in a cameo vocal in the Arcade Fire song "Reflektor". A poll carried out by BBC History Magazine, in October 2013, named Bowie as the best-dressed Briton in history. At the 2014 Brit Awards on 19 February, Bowie became the oldest recipient of a Brit Award in the ceremony's history when he won the award for Best British Male, which was collected on his behalf by Kate Moss. His speech read: "I'm completely delighted to have a Brit for being the best male – but I am, aren't I Kate? Yes. I think it's a great way to end the day. Thank you very, very much and Scotland stay with us." Bowie's reference to the forthcoming September 2014 Scottish independence referendum garnered a significant reaction throughout the UK on social media. On 18 July, Bowie indicated that future music would be forthcoming, though he was vague about details. New information was released in September 2014 regarding his next compilation album, "Nothing Has Changed", which was released in November. The album featured rare tracks and old material from his catalogue in addition to a new song titled "Sue (Or in a Season of Crime)". In May 2015, "Let's Dance" was announced to be reissued as a yellow vinyl single on 16 July 2015 in conjunction with the "David Bowie is" exhibition at the Australian Centre for the Moving Image in Melbourne, Australia. In August 2015, it was announced that Bowie was writing songs for a Broadway musical based on the "SpongeBob SquarePants" cartoon series. Bowie wrote and recorded the opening title song to the television series "The Last Panthers", which aired in November 2015. 
The theme that was used for "The Last Panthers" was also the title track for his January 2016 release "Blackstar" which is said to take cues from his earlier krautrock influenced work. According to "The Times": ""Blackstar" may be the oddest work yet from Bowie". On 7 December 2015, Bowie's musical "Lazarus" debuted in New York. His last public appearance was at opening night of the production. "Blackstar" was released on 8 January 2016, Bowie's 69th birthday, and was met with critical acclaim. Following his death on 10 January, producer Tony Visconti revealed that Bowie had planned the album to be his swan song, and a "parting gift" for his fans before his death. Several reporters and critics subsequently noted that most of the lyrics on the album seem to revolve around his impending death, with CNN noting that the album "reveals a man who appears to be grappling with his own mortality". Visconti later said that Bowie had been planning a post-"Blackstar" album, and had written and recorded demo versions of five songs in his final weeks, suggesting that Bowie believed he had a few months left. The day following his death, online viewing of Bowie's music skyrocketed, breaking the record for Vevo's most viewed artist in a single day. On 15 January, "Blackstar" debuted at number one on the UK Albums Chart; nineteen of his albums were in the UK Top 100 Albums Chart, and thirteen singles were in the UK Top 100 Singles Chart. "Blackstar" also debuted at number one on album charts around the world, including Australia, France, Germany, Italy, New Zealand, and the US "Billboard" 200. As of 11 January 2016, more than 1.3 million people had visited the "David Bowie Is" exhibition, making it the most successful exhibition ever staged by the Victoria and Albert Museum in terms of worldwide attendance. The museum stated that the exhibition would continue to tour, with confirmed travel to Japan in 2017. At the 59th Annual Grammy Awards, Bowie won all five nominated awards: Best Rock Performance; Best Alternative Music Album; Best Engineered Album, Non-Classical; Best Recording Package; and Best Rock Song. The wins marked Bowie's first ever in musical categories. An EP, "No Plan", was released on 8 January 2017, which would have been Bowie's 70th birthday. Apart from "Lazarus", the EP includes three songs that Bowie had recorded during the "Blackstar" sessions, but were left off the album and subsequently appeared on the soundtrack album for the "Lazarus" musical in October 2016. A music video for the title track was also released. Since January 2016, Bowie has sold 5 million units in the United Kingdom alone. The beginnings of Bowie's acting career predate his commercial breakthrough as a musician. Studying avant-garde theatre and mime under Lindsay Kemp, he was given the role of Cloud in Kemp's 1967 theatrical production "Pierrot in Turquoise" (later made into the 1970 television film "The Looking Glass Murders"). In the black-and-white short "The Image" (1969), he played a ghostly boy who emerges from a troubled artist's painting to haunt him. The same year, the film of Leslie Thomas's 1966 comic novel "The Virgin Soldiers" saw Bowie make a brief appearance as an extra. In 1976 he earned acclaim for his first major film role, portraying Thomas Jerome Newton, an alien from a dying planet, in "The Man Who Fell to Earth", directed by Nicolas Roeg. 
"Just a Gigolo" (1979), an Anglo-German co-production directed by David Hemmings, saw Bowie in the lead role as Prussian officer Paul von Przygodski, who, returning from World War I, is discovered by a Baroness (Marlene Dietrich) and put into her Gigolo Stable. Bowie played Joseph Merrick in the Broadway theatre production "The Elephant Man", which he undertook wearing no stage make-up, and which earned high praise for his expressive performance. He played the part 157 times between 1980 and 1981. "Christiane F. – We Children from Bahnhof Zoo", a 1981 biographical film focusing on a young girl's drug addiction in West Berlin, featured Bowie in a cameo appearance as himself at a concert in Germany. Its soundtrack album, "Christiane F." (1981), featured much material from his Berlin Trilogy albums. Bowie starred in "The Hunger" (1983), with Catherine Deneuve and Susan Sarandon. In Nagisa Oshima's film the same year, "Merry Christmas, Mr. Lawrence", based on Laurens van der Post's novel "The Seed and the Sower", Bowie played Major Jack Celliers, a prisoner of war in a Japanese internment camp. Bowie had a cameo in "Yellowbeard", a 1983 pirate comedy created by Monty Python members, and a small part as Colin, the hitman in the 1985 film "Into the Night". He declined to play the villain Max Zorin in the James Bond film "A View to a Kill" (1985). "Absolute Beginners" (1986), a rock musical film adapted from Colin MacInnes's book of the same name about life in late 1950s London, featured Bowie's music and presented him with a minor acting role. The same year, Jim Henson's dark fantasy "Labyrinth" found him with the part of Jareth, the king of the goblins. Two years later, he played Pontius Pilate in Martin Scorsese's 1988 film "The Last Temptation of Christ". Bowie portrayed a disgruntled restaurant employee opposite Rosanna Arquette in "The Linguini Incident" (1991), and the mysterious FBI agent Phillip Jeffries in David Lynch's "" (1992). He took a small but pivotal role as Andy Warhol in "Basquiat", artist/director Julian Schnabel's 1996 biopic of Jean-Michel Basquiat, and co-starred in Giovanni Veronesi's Spaghetti Western "Il Mio West" (1998, released as "Gunslinger's Revenge" in the US in 2005) as the most feared gunfighter in the region. He played the ageing gangster Bernie in Andrew Goth's "Everybody Loves Sunshine" (1999), and appeared in the television horror series of "The Hunger". In "Mr. Rice's Secret" (2000), he played the title role as the neighbour of a terminally ill 12-year-old, and the following year appeared as himself in "Zoolander". Bowie portrayed physicist Nikola Tesla in Christopher Nolan's film "The Prestige" (2006), which was about the bitter rivalry between two magicians in the late 19th century. In the same year, he voice-acted in the animated film "Arthur and the Invisibles" as the powerful villain Maltazard and appeared as himself in an episode of the Ricky Gervais television series "Extras". In 2007, he lent his voice to the character Lord Royal Highness in the "SpongeBob's Atlantis SquarePantis" television film. In the 2008 film "August", directed by Austin Chick, he played a supporting role as Ogilvie, alongside Josh Hartnett and Rip Torn, with whom he had worked in the 1976 film "The Man Who Fell to Earth". 
In a 2017 interview with "Consequence of Sound", director Denis Villeneuve revealed that he had intended to cast Bowie in "Blade Runner 2049" as the lead villain, Niander Wallace, but that, after Bowie's death in January 2016, he had been forced to look for talent with similar "rock star" qualities. He eventually cast Jared Leto, actor and lead singer of Thirty Seconds to Mars. Talking about the casting process, Villeneuve said: "Our first thought [for the character] had been David Bowie, who had influenced "Blade Runner" in many ways. When we learned the sad news, we looked around for someone like that. He [Bowie] embodied the Blade Runner spirit." Bowie's songs and stagecraft brought a new dimension to popular music in the early 1970s, strongly influencing both its immediate forms and its subsequent development. Bowie was a pioneer of glam rock, according to music historians Schinder and Schwartz, who credited Marc Bolan and Bowie with creating the genre. At the same time, he inspired the innovators of the punk rock music movement. While punk musicians were "noisily reclaiming the three-minute pop song in a show of public defiance", biographer David Buckley wrote, "Bowie almost completely abandoned traditional rock instrumentation." Bowie's record company promoted his unique status in popular music with the slogan, "There's old wave, there's new wave, and there's David Bowie". Musicologist James Perone credited him with having "brought sophistication to rock music", and critical reviews frequently acknowledged the intellectual depth of his work and influence. Human League founder Martyn Ware remarked that Bowie had lived his life "as though he were an art installation". As described by John Peel, "The one distinguishing feature about early-70s progressive rock was that it didn't progress. Before Bowie came along, people didn't want too much change". Buckley called the era "bloated, self-important, leather-clad, self-satisfied"; then Bowie "subverted the whole notion of what it was to be a rock star". Buckley called Bowie "both star and icon. The vast body of work he has produced ... has created perhaps the biggest cult in popular culture. ... His influence has been unique in popular culture—he has permeated and altered more lives than any comparable figure." Through continual reinvention, his influence broadened and extended. Biographer Thomas Forget added, "Because he has succeeded in so many different styles of music, it is almost impossible to find a popular artist today that has not been influenced by David Bowie." In a 2000 poll of fellow music stars conducted by "NME", Bowie was voted the "most influential artist of all time". Alexis Petridis of "The Guardian" wrote that Bowie was confirmed by 1980 to be "the most important and influential artist since the Beatles". Neil McCormick of "The Daily Telegraph" stated that Bowie had "one of the supreme careers in popular music, art and culture of the 20th century" and "he was too inventive, too mercurial, too strange for all but his most devoted fans to keep up with". The BBC's Mark Easton argued that Bowie provided fuel for "the creative powerhouse that Britain has become" by challenging future generations "to aim high, to be ambitious and provocative, to take risks". Easton concluded that Bowie had "changed the way the world sees Britain. And the way Britain sees itself". 
Annie Zaleski of "Alternative Press" wrote, "Every band or solo artist who's decided to rip up their playbook and start again owes a debt to Bowie". In 2016, he was dubbed "The Greatest Rock Star Ever" by "Rolling Stone" magazine. Numerous figures from the music industry whose careers Bowie had influenced paid tribute to him following his death; panegyrics on Twitter (tweets about him peaked at 20,000 a minute an hour after his death) also came from outside the entertainment industry and pop culture, such as those from the Vatican, namely Cardinal Gianfranco Ravasi, who quoted "Space Oddity", and the Federal Foreign Office, which thanked Bowie for his part in the fall of the Berlin Wall and referenced "Heroes". On 7 January 2017 the BBC broadcast the 90-minute documentary "David Bowie: The Last Five Years", taking a detailed look at Bowie's last albums, "The Next Day" and "Blackstar", and his play "Lazarus". On 8 January 2017, which would have been Bowie's 70th birthday, a charity concert in his birthplace of Brixton was hosted by the actor Gary Oldman, a close friend. A David Bowie walking tour through Brixton was also launched, and other events marking his birthday weekend included concerts in New York, Los Angeles, Sydney, and Tokyo. On 6 February 2018 the maiden flight of the SpaceX Falcon Heavy rocket carried Elon Musk's personal Tesla Roadster and a mannequin affectionately named Starman into space. "Space Oddity" and "Life on Mars?" were looping on the car's sound system during the launch. From the time of his earliest recordings in the 1960s, Bowie employed a wide variety of musical styles. His early compositions and performances were strongly influenced by rock and rollers like Little Richard and Elvis Presley, and also the wider world of show business. He particularly strove to emulate the British musical theatre singer-songwriter and actor Anthony Newley, whose vocal style he frequently adopted, and made prominent use of for his 1967 debut release, "David Bowie" (to the disgust of Newley himself, who destroyed the copy he received from Bowie's publisher). Bowie's music hall fascination continued to surface sporadically alongside such diverse styles as hard rock and heavy metal, soul, psychedelic folk, and pop. Musicologist James Perone observes Bowie's use of octave switches for different repetitions of the same melody, exemplified in his commercial breakthrough single, "Space Oddity", and later in the song "Heroes", to dramatic effect; Perone notes that "in the lowest part of his vocal register ... his voice has an almost crooner-like richness." Voice instructor Jo Thompson describes Bowie's vocal vibrato technique as "particularly deliberate and distinctive". Schinder and Schwartz call him "a vocalist of extraordinary technical ability, able to pitch his singing to particular effect." Here, too, as in his stagecraft and songwriting, the singer's role playing is evident: historiographer Michael Campbell says that Bowie's lyrics "arrest our ear, without question. But Bowie continually shifts from person to person as he delivers them ... His voice changes dramatically from section to section." In a 2014 analysis of 77 "top" artists' vocal ranges, Bowie was 8th, just behind Christina Aguilera and just ahead of Paul McCartney. 
In addition to the guitar, Bowie also played a variety of keyboards, including piano, Mellotron, Chamberlin, and synthesizers; harmonica; alto and baritone saxophones; stylophone; viola; cello; koto (in the "Heroes" track "Moss Garden"); thumb piano; drums (on the "Heathen" track "Cactus"), and various percussion instruments. Bowie was also a painter and artist. One of his paintings sold at auction in late 1990 for $500, and the cover for his 1995 album "Outside" is a close-up of a self-portrait (from a series of five) he painted that same year. His first solo show was at The Gallery, Cork Street in 1995, entitled 'New Afro/Pagan and Work: 1975–1995'. He was invited to join the editorial board of the journal "Modern Painters" in 1998, and participated in the art hoax later that year. In 1998 during an interview with Michael Kimmelman for The New York Times he said "Art was, seriously, the only thing I'd ever wanted to own.", subsequently in 1999, in an interview for the BBC, he said "The only thing I buy obsessively and addictively is art". His art collection, which included works by Damien Hirst, Frank Auerbach, Henry Moore, and Jean-Michel Basquiat among others, was valued at over £10m in mid-2016. After his death his family decided to sell most of the collection because they "didn't have the space" to store it. On 10 and 11 November three auctions were held at Sotheby's in London, first with 47 lots and second with 208 paintings, drawings, and sculptures, third with 100 design lots. The items on sale represented about 65 percent of the collection. Exhibition of the works in the auction attracted 51,470 visitors, the auction itself was attended by 1,750 bidders, with over 1,000 more bidding online. The auctions has overall sale total £32.9 million (app. $41.5 million), while the highest-selling item, Jean-Michel Basquiat's graffiti-inspired painting "Air Power", sold for £7.09 million. Bowie married his first wife, Mary Angela Barnett on 19 March 1970 at Bromley Register Office in Bromley, London. Their son Duncan, born on 30 May 1971, was at first known as Zowie. Bowie and Angela divorced on 8 February 1980 in Switzerland. On 24 April 1992, Bowie married Somali-American model Iman in a private ceremony in Lausanne. The wedding was later solemnised on 6 June in Florence. They had one daughter, Alexandria "Lexi" Zahra Jones, born in August 2000. The couple resided primarily in New York City and London, as well as owning an apartment in Sydney's Elizabeth Bay. and Britannia Bay House on the island of Mustique, now renamed Mandalay Estate. Bowie declared himself gay in an interview with Michael Watts for a 1972 issue of "Melody Maker", coinciding with his campaign for stardom as Ziggy Stardust. According to Buckley, "If Ziggy confused both his creator and his audience, a big part of that confusion centred on the topic of sexuality." In a September 1976 interview with "Playboy", Bowie said, "It's true—I am a bisexual. But I can't deny that I've used that fact very well. I suppose it's the best thing that ever happened to me." According to his first wife, Angie, Bowie had a relationship with Mick Jagger. In a 1983 interview with "Rolling Stone", Bowie said his public declaration of bisexuality was "the biggest mistake I ever made" and "I was always a closet heterosexual." On other occasions, he said his interest in homosexual and bisexual culture had been more a product of the times and the situation in which he found himself than of his own feelings. 
"Blender" asked Bowie in 2002 whether he still believed his public declaration was his biggest mistake. After a long pause, he said, "I don't think it was a mistake in Europe, but it was a lot tougher in America. I had no problem with people knowing I was bisexual. But I had no inclination to hold any banners nor be a representative of any group of people." Bowie said he wanted to be a songwriter and performer rather than a headline for his bisexuality, and in "puritanical" America, "I think it stood in the way of so much I wanted to do." Buckley wrote that Bowie "mined sexual intrigue for its ability to shock", and was probably "never gay, nor even consistently actively bisexual", instead experimenting "out of a sense of curiosity and a genuine allegiance with the 'transgressional'." Biographer Christopher Sandford said, according to Mary Finnigan—with whom Bowie had an affair in 1969—the singer and his first wife Angie "created their bisexual fantasy". Sandford wrote that Bowie "made a positive fetish of repeating the quip that he and his wife had met while 'fucking the same bloke' ... Gay sex was always an anecdotal and laughing matter. That Bowie's actual tastes swung the other way is clear from even a partial tally of his affairs with women." The BBC's Mark Easton wrote in 2016 that Britain was "far more tolerant of difference" and that gay rights, such as same-sex marriage, and gender equality would not have "enjoyed the broad support they do today without Bowie's androgynous challenge all those years ago". Over the years, Bowie made numerous references to religions and to his evolving spirituality. Beginning in 1967, he showed an interest in Buddhism; after a few months' study at Tibet House in London, he was told by a Lama, "You don't want to be Buddhist... You should follow music." By 1975, Bowie admitted, "I felt totally, absolutely alone. And I probably was alone because I pretty much had abandoned God." After Bowie married Iman in a private ceremony in 1992, he said they knew that their "real marriage, sanctified by God, had to happen in a church in Florence". Earlier that year, he knelt on stage at The Freddie Mercury Tribute Concert and recited the Lord's Prayer before a television audience. In 1993, Bowie said he had an "undying" belief in the "unquestionable" existence of God. Interviewed in 2005, Bowie said whether God exists "is not a question that can be answered... I'm not quite an atheist and it worries me. There's that little bit that holds on: 'Well, I'm "almost" an atheist. Give me a couple of months... I've nearly got it right. In his will, Bowie stipulated that he be cremated and his ashes scattered in Bali "in accordance with the Buddhist rituals". "Questioning my spiritual life has always been germane" to Bowie's songwriting. "Station to Station" is "very much concerned with the Stations of the Cross"; the song specifically references Kabbalah. Bowie called the album "extremely dark... the nearest album to a magick treatise that I've written". "Earthling" showed "the abiding need in me to vacillate between atheism or a kind of gnosticism... What I need is to find a balance, spiritually, with the way I live and my demise." Released shortly before his death, "Lazarus"—from his final album, "Blackstar"—began with the words, "Look up here, I'm in Heaven". 
In 1976, speaking as The Thin White Duke, Bowie's persona at the time, and "at least partially tongue-in-cheek", he made statements that expressed support for fascism and perceived admiration for Adolf Hitler in interviews with "Playboy", "NME", and a Swedish publication. Bowie was quoted as saying: "Britain is ready for a fascist leader... I think Britain could benefit from a fascist leader. After all, fascism is really nationalism... I believe very strongly in fascism, people have always responded with greater efficiency under a regimental leadership." He was also quoted as saying: "Adolf Hitler was one of the first rock stars" and "You've got to have an extreme right front come up and sweep everything off its feet and tidy everything up." Bowie later retracted these comments in an interview with "Melody Maker" in October 1977, blaming them on mental instability caused by his drug problems at the time, saying: "I was out of my mind, totally, completely crazed." In the 1980s and 1990s, Bowie's public statements shifted sharply towards anti-racism and anti-fascism. In an interview with MTV in 1983, Bowie criticised the channel for not providing enough coverage of black musicians, and the music videos for "China Girl" and "Let's Dance" were described by Bowie as a "very simple, very direct" statement against racism. The album "Tin Machine" took a more direct stance against fascism and Neo-Nazism, and was criticised for being too preachy. In 2016, filmmaker and activist Michael Moore said he had wanted to use "Panic in Detroit" for his 1998 documentary "The Big One"; denied at first, he was given the rights after calling Bowie personally. "I've read stuff since his death saying that he wasn't that political and he stayed away from politics. But that wasn't the conversation that I had with him." On 10 January 2016, two days after his 69th birthday and the release of the album "Blackstar", Bowie died from liver cancer in his New York City apartment. He had been diagnosed 18 months earlier but had not made the news of his illness public. The Belgian theatre director Ivo van Hove, who had worked with the singer on his Off-Broadway musical "Lazarus", explained that Bowie was unable to attend rehearsals due to the progression of the disease. He noted that Bowie had kept working during the illness. Bowie's producer Tony Visconti wrote: Following Bowie's death, fans gathered at impromptu street shrines. At the mural of Bowie in his birthplace of Brixton, south London, which shows him in his "Aladdin Sane" character, fans laid flowers and sang his songs. Other memorial sites included Berlin, Los Angeles, and outside his apartment in New York. After news of his death, sales of his albums and singles soared. Bowie had insisted that he did not want a funeral, and according to his death certificate he was cremated in New Jersey on 12 January. As he wished in his will, his ashes were scattered in Bali, Indonesia. Bowie's 1969 commercial breakthrough, the song "Space Oddity", won him an Ivor Novello Special Award For Originality. For his performance in the 1976 science fiction film "The Man Who Fell to Earth", he won a Saturn Award for Best Actor. In the ensuing decades he was honoured with numerous awards for his music and its accompanying videos, receiving, among others, six Grammy Awards and four Brit Awards—winning Best British Male Artist twice; the award for Outstanding Contribution to Music in 1996; and the Brits Icon award for his "lasting impact on British culture", given posthumously in 2016. 
In 1999, Bowie was made a Commander of the Ordre des Arts et des Lettres by the French government. He received an honorary doctorate from Berklee College of Music the same year. He declined the royal honour of Commander of the Order of the British Empire (CBE) in 2000, and turned down a knighthood in 2003. Bowie later stated "I would never have any intention of accepting anything like that. I seriously don't know what it's for. It's not what I spent my life working for." "The Telegraph" in 2016 estimated Bowie's total worldwide sales at 140 million records. In the United Kingdom, he was awarded 9 platinum, 11 gold, and 8 silver albums, and in the United States, 5 platinum and 9 gold. In 2003, six of Bowie's albums appeared on "Rolling Stone"s list of the 500 Greatest Albums of All Time. In 2004, four of Bowie's songs appeared on the "Rolling Stone" list of the 500 Greatest Songs of All Time. Additionally, four of Bowie's songs are included in The Rock and Roll Hall of Fame's 500 Songs that Shaped Rock and Roll. In the BBC's 2002 poll of the 100 Greatest Britons, he was ranked 29. In 2004, "Rolling Stone" magazine ranked him 39th on their list of the 100 Greatest Rock Artists of All Time. Bowie was inducted into the Rock and Roll Hall of Fame on 17 January 1996 and named a member of the Science Fiction and Fantasy Hall of Fame in June 2013. In 2016, "Rolling Stone" proclaimed Bowie "the greatest rock star ever". In 2008, the spider "Heteropoda davidbowie" was named in Bowie's honour. On 5 January 2015, a main-belt asteroid was named 342843 Davidbowie. On 13 January 2016, Belgian amateur astronomers at MIRA Public Observatory created a "Bowie asterism" of seven stars which had been in the vicinity of Mars at the time of Bowie's death; the "constellation" forms the lightning bolt on Bowie's face from the cover of his "Aladdin Sane" album. On 25 March 2018 a statue of Bowie was unveiled in Aylesbury, Buckinghamshire, the town where he debuted Ziggy Stardust. The statue features a likeness of Bowie in 2002 accompanied with his alter egos, with Ziggy at the front. D. B. Cooper D. B. Cooper is a media epithet popularly used to refer to an unidentified man who hijacked a Boeing 727 aircraft in the northwest United States, in the airspace between Portland, Oregon, and Seattle, Washington, on the afternoon of Wednesday, He extorted $200,000 in ransom () and parachuted to an uncertain fate. Despite an extensive manhunt and protracted FBI investigation, the perpetrator has never been located or identified. It remains the only unsolved case of air piracy in commercial aviation history. Available evidence and a preponderance of expert opinion suggested from the beginning that Cooper probably did not survive his high-risk jump, but his remains were never recovered. The FBI nevertheless maintained an active investigation for 45 years after the hijacking. Despite a case file that grew to over 60 volumes over that time period, no definitive conclusions have been reached regarding Cooper's true identity or whereabouts if he survived the jump. The suspect purchased his airline ticket using the alias Dan Cooper, but because of a news media miscommunication he became known in popular lore as "D. B. Cooper". Numerous theories of widely varying plausibility have been proposed over the years by investigators, reporters, and amateur enthusiasts. A young boy discovered a small cache of ransom bills along the banks of the Columbia River in February 1980. 
The find triggered renewed interest but ultimately only deepened the mystery, and the great majority of the ransom remains unrecovered. The FBI officially suspended active investigation of the case in July 2016, but the agency continues to request that any physical evidence that might emerge related to the parachutes or the ransom money be submitted for analysis. On Thanksgiving eve, November 24, 1971, a middle-aged man carrying a black attaché case approached the flight counter of Northwest Orient Airlines at Portland International Airport. He identified himself as "Dan Cooper" and used cash to purchase a one-way ticket on Flight 305, a 30-minute trip north to Seattle. Cooper boarded the aircraft, a Boeing 727-100 (FAA registration N467US), and took seat 18C (18E by one account, 15D by another) in the rear of the passenger cabin. He lit a cigarette and ordered a bourbon and soda. Fellow passengers described him as a man in his mid-forties, between and tall. He wore a black lightweight raincoat, loafers, a dark suit, a neatly pressed white collared shirt, a black clip-on tie, and a mother of pearl tie pin. Flight 305 was from Washington D.C. to Seattle, with stops in Minneapolis, Great Falls, Missoula, Spokane, and Portland. It was approximately one-third full when it left Portland on schedule at 2:50 p.m. PST. Shortly after takeoff, Cooper handed a note to Florence Schaffner, the flight attendant situated nearest to him in a jump seat attached to the aft stair door. Schaffner assumed that the note contained a lonely businessman's phone number and dropped it unopened into her purse. Cooper leaned toward her and whispered, "Miss, you'd better look at that note. I have a bomb." The note was printed in neat, all-capital letters with a felt-tip pen. Its exact wording is unknown, because Cooper later reclaimed it, but Schaffner recalled that the note said that Cooper had a bomb in his briefcase. After Schaffner read the note, Cooper told her to sit beside him. Schaffner did as requested, then quietly asked to see the bomb. Cooper opened his briefcase long enough for her to glimpse eight red cylinders ("four on top of four") attached to wires coated with red insulation, and a large cylindrical battery. After closing the briefcase, he stated his demands: $200,000 in "negotiable American currency"; four parachutes (two primary and two reserve); and a fuel truck standing by in Seattle to refuel the aircraft upon arrival. Schaffner conveyed Cooper's instructions to the pilots in the cockpit; when she returned, Cooper was wearing dark sunglasses. The pilot, William Scott, contacted Seattle–Tacoma Airport air traffic control, which in turn informed local and federal authorities. The 36 other passengers were given false information that their arrival in Seattle would be delayed because of a "minor mechanical difficulty". Northwest Orient's president, Donald Nyrop, authorized payment of the ransom and ordered all employees to cooperate fully with the hijacker's demands. The aircraft circled Puget Sound for approximately two hours to allow Seattle police and the FBI sufficient time to assemble Cooper's parachutes and ransom money, and to mobilize emergency personnel. Schaffner recalled that Cooper appeared familiar with the local terrain; at one point he remarked, "Looks like Tacoma down there," as the aircraft flew above it. He also correctly mentioned that McChord Air Force Base was only a 20-minute drive (at that time) from Seattle-Tacoma Airport. 
Schaffner described him as calm, polite, and well-spoken, not at all consistent with the stereotypes (enraged, hardened criminals or "take-me-to-Cuba" political dissidents) popularly associated with air piracy at the time. Tina Mucklow, another flight attendant, agreed. "He wasn't nervous," she told investigators. "He seemed rather nice. He was never cruel or nasty. He was thoughtful and calm all the time." He ordered a second bourbon and water, paid his drink tab (and attempted to give Schaffner the change), and offered to request meals for the flight crew during the stop in Seattle. FBI agents assembled the ransom money from several Seattle-area banks – 10,000 unmarked 20-dollar bills, most with serial numbers beginning with the letter "L" indicating issuance by the Federal Reserve Bank of San Francisco, and most from the 1963A or 1969 series – and made a microfilm photograph of each of them. Cooper rejected the military-issue parachutes offered by McChord AFB personnel, instead demanding civilian parachutes with manually operated ripcords. Seattle police obtained them from a local skydiving school. At 5:24 p.m. PST, Cooper was informed that his demands had been met, and at 5:39 p.m. the aircraft landed at Seattle-Tacoma Airport. It was more than an hour after sunset and Cooper instructed Scott to taxi the jet to an isolated, brightly lit section of the tarmac and close each window shade in the cabin to deter police snipers. Northwest Orient's Seattle operations manager, Al Lee, approached the aircraft in street clothes to avoid the possibility that Cooper might mistake his airline uniform for that of a police officer. He delivered the cash-filled knapsack and parachutes to Mucklow via the aft stairs. Once the delivery was completed, Cooper ordered all passengers, Schaffner, and senior flight attendant Alice Hancock to leave the plane. During refueling, Cooper outlined his flight plan to the cockpit crew: a southeast course toward Mexico City at the minimum airspeed possible without stalling the aircraft – approximately  – at a maximum altitude. He further specified that the landing gear remain deployed in the takeoff/landing position, the wing flaps be lowered 15 degrees, and the cabin remain unpressurized. Copilot William Rataczak informed Cooper that the aircraft's range was limited to approximately under the specified flight configuration, which meant that a second refueling would be necessary before entering Mexico. Cooper and the crew discussed options and agreed on Reno, Nevada, as the refueling With the plane's rear exit door open and its staircase extended, Cooper directed the pilot to take off. Northwest's home office objected, on grounds that it was unsafe to take off with the aft staircase deployed. Cooper countered that it was indeed safe, but he would not argue the point; he would lower it once they were airborne. An FAA official requested a face-to-face meeting with Cooper aboard the aircraft, which was denied. The refueling process was delayed because of a vapor lock in the fuel tanker truck's pumping mechanism, and Cooper became suspicious; but he allowed a replacement tanker truck to continue the refueling—and a third after the second ran dry. At approximately 7:40 p.m., the Boeing 727 took off with only five people onboard: Cooper, pilot Scott, flight attendant Mucklow, copilot Rataczak, and flight engineer H. E. Anderson. 
Two F-106 fighter aircraft were scrambled from McChord Air Force Base and followed behind the airliner, one above it and one below, out of Cooper's view. A Lockheed T-33 trainer, diverted from an unrelated Air National Guard mission, also shadowed the 727 before running low on fuel and turning back near the Oregon–California state line. Overall there were five planes in total trailing the hijacked plane. Not a single one of them reportedly saw him jump and none of them could pinpoint a location where he could have landed. After takeoff, Cooper told Mucklow to join the rest of the crew in the cockpit and remain there with the door closed. As she complied, Mucklow observed Cooper tying something around his waist. At approximately 8:00 p.m., a warning light flashed in the cockpit, indicating that the aft airstair apparatus had been activated. The crew's offer of assistance via the aircraft's intercom system was curtly refused. The crew soon noticed a subjective change of air pressure, indicating that the aft door was open. At approximately 8:13 p.m., the aircraft's tail section sustained a sudden upward movement, significant enough to require trimming to bring the plane back to level flight. At approximately 10:15 p.m., the aircraft's aft airstair was still deployed when Scott and Rataczak landed the 727 at Reno Airport. FBI agents, state troopers, sheriff's deputies, and Reno police surrounded the jet, as it had not yet been determined with certainty that Cooper was no longer aboard, but an armed search quickly confirmed his absence. FBI agents recovered 66 unidentified latent fingerprints aboard the airliner. The agents also found Cooper's black clip-on tie, his tie clip and two of the four parachutes, one of which had been opened and two shroud (suspension) lines cut from its canopy. Authorities interviewed eyewitnesses in Portland, Seattle, and Reno, and all those who personally interacted with Cooper. A series of composite sketches was developed. Local police and FBI agents immediately began questioning possible suspects. An Oregon man named D. B. Cooper who had a minor police record was one of the first persons of interest in the case. He was contacted by Portland police on the off-chance that the hijacker had used his real name or the same alias in a previous crime. He was quickly ruled out as a suspect, but a local reporter named James Long, rushing to meet an imminent deadline, confused the eliminated suspect's name with the pseudonym used by the hijacker. A wire service reporter (Clyde Jabin of UPI by most accounts, Joe Frazier of the AP by others) republished the error, followed by numerous other media sources; the moniker "D. B. Cooper" became lodged in the public's collective memory. A precise search area was difficult to define, as even small differences in estimates of the aircraft's speed, or the environmental conditions along the flight path (which varied significantly by location and altitude), changed Cooper's projected landing point considerably. An important variable was the length of time he remained in free fall before pulling his ripcord—if indeed he succeeded in opening a parachute at all. Neither of the Air Force fighter pilots saw anything exit the airliner, either visually or on radar, nor did they see a parachute open; but at night, with extremely limited visibility and cloud cover obscuring any ground lighting below, an airborne human figure clad entirely in black clothing could easily have gone undetected. 
The T-33 pilots never made visual contact with the 727 at all. In order to conduct an experimental re-creation, Scott piloted the aircraft used in the hijacking in the same flight configuration. FBI agents, pushing a sled out of the open airstair, were able to reproduce the upward motion of the tail section described by the flight crew at 8:13 p.m. Based on this experiment, it was concluded that 8:13 p.m. was the most likely jump time. At that moment the aircraft was flying through a heavy rainstorm over the Lewis River in southwestern Washington. Initial extrapolations placed Cooper's landing zone within an area on the southernmost outreach of Mount St. Helens, a few miles southeast of Ariel, Washington, near Lake Merwin, an artificial lake formed by a dam on the Lewis River. Search efforts focused on Clark and Cowlitz counties, encompassing the terrain immediately south and north, respectively, of the Lewis River in southwest Washington. FBI agents and sheriff's deputies from those counties searched large areas of the mountainous wilderness on foot and by helicopter. Door-to-door searches of local farmhouses were also carried out. Other search parties ran patrol boats along Lake Merwin and Yale Lake, the reservoir immediately to its east. No trace of Cooper, nor any of the equipment presumed to have left the aircraft with him, was found. The FBI also coordinated an aerial search, using fixed-wing aircraft and helicopters from the Oregon Army National Guard, along the entire flight path (known as Victor 23 in standard aviation terminology but "Vector 23" in most Cooper from Seattle to Reno. While numerous broken treetops and several pieces of plastic and other objects resembling parachute canopies were sighted and investigated, nothing relevant to the hijacking was found. Shortly after the spring thaw in early 1972, teams of FBI agents aided by some 200 Army soldiers from Fort Lewis, along with Air Force personnel, National Guardsmen, and civilian volunteers, conducted another thorough ground search of Clark and Cowlitz counties for eighteen days in March, and then an additional eighteen days in April. Electronic Explorations Company, a marine salvage firm, used a submarine to search the depths of Lake Merwin. Two local women stumbled upon a skeleton in an abandoned structure in Clark County; it was later identified as the remains of a female teenager who had been abducted and murdered several weeks before. Ultimately, the search operation—arguably the most extensive, and intensive, in U.S. history—uncovered no significant material evidence related to the hijacking. A month after the hijacking, the FBI distributed lists of the ransom serial numbers to financial institutions, casinos, race tracks, and other businesses that routinely conducted significant cash transactions, and to law enforcement agencies around the world. Northwest Orient offered a reward of 15% of the recovered money, to a maximum of $25,000. In early 1972, U.S. Attorney General John N. Mitchell released the serial numbers to the general public. In 1972, two men used counterfeit 20-dollar bills printed with Cooper serial numbers to swindle $30,000 from a "Newsweek" reporter named Karl Fleming in exchange for an interview with a man they falsely claimed was the hijacker. In early 1973, with the ransom money still missing, "The Oregon Journal" republished the serial numbers and offered $1,000 to the first person to turn in a ransom bill to the newspaper or any FBI field office. 
In Seattle, the "Post-Intelligencer" made a similar offer with a $5,000 reward. The offers remained in effect until Thanksgiving 1974, and though there were several near-matches, no genuine bills were found. In 1975 Northwest Orient's insurer, Global Indemnity Co., complied with an order from the Minnesota Supreme Court and paid the airline's $180,000 claim on the ransom money. Subsequent analyses indicated that the original landing zone estimate was inaccurate: Scott, who was flying the aircraft manually because of Cooper's speed and altitude demands, later determined that his flight path was significantly farther east than initially assumed. Additional data from a variety of sources—in particular Continental Airlines pilot Tom Bohan, who was flying four minutes behind Flight 305—indicated that the wind direction factored into drop zone calculations had been wrong, possibly by as much as 80 degrees. This and other supplemental data suggested that the actual drop zone was probably south-southeast of the original estimate, in the drainage area of the Washougal River. On July 8, 2016, the FBI announced that it was suspending active investigation of the Cooper case, citing a need to focus its investigative resources and manpower on issues of higher and more urgent priority. Local field offices will continue to accept any legitimate physical evidence—related specifically to the parachutes or the ransom money—that may emerge in the future. The 60-volume case file compiled over the 45-year course of the investigation will be preserved for historical purposes at FBI headquarters in Washington, D.C.. On the FBI website, there is currently a 28 part packets full of evidence gathered over the years. All the evidence is open to the public to read. The official physical description of Cooper has remained unchanged and is considered reliable. Flight attendants Schaffner and Mucklow, who spent the most time with Cooper, were interviewed on the same night in separate cities, and gave nearly identical descriptions: to tall, , mid-40s, with close-set piercing brown eyes and swarthy skin. In 1978, a placard printed with instructions for lowering the aft stairs of a 727 was found by a deer hunter near a logging road about east of Castle Rock, Washington, well north of Lake Merwin, but within Flight 305's basic flight path. In February 1980, eight-year-old Brian Ingram was vacationing with his family on the Columbia River at a beachfront known as Tina (or Tena) Bar, about downstream from Vancouver, Washington and southwest of Ariel. The child uncovered three packets of the ransom cash as he raked the sandy riverbank to build a campfire. The bills were significantly disintegrated, but still bundled in rubber bands. FBI technicians confirmed that the money was indeed a portion of the ransom—two packets of 100 twenty-dollar bills each, and a third packet of 90, all arranged in the same order as when given to Cooper. In 1986, after protracted negotiations, the recovered bills were divided equally between Ingram and Northwest Orient's insurer; the FBI retained fourteen examples as evidence. Ingram sold fifteen of his bills at auction in 2008 for about $37,000. To date, none of the 9,710 remaining bills have turned up anywhere in the world. Their serial numbers remain available online for public search. The Columbia River ransom money and the airstair instruction placard remain the only bona fide physical evidence from the hijacking ever found outside the aircraft. 
In 2017, a group of volunteer investigators uncovered what they believe to be “potential evidence, what appears to be a decades-old parachute strap" in the Pacific Northwest. This was followed later in August 2017 with a piece of foam, suspected of being part of Cooper's backpack. In late 2007, the FBI announced that a partial DNA profile had been obtained from three organic samples found on Cooper's clip-on tie in 2001, though they later acknowledged that there is no evidence that the hijacker was the source of the sample material. "The tie had two small DNA samples, and one large sample," said Special Agent Fred Gutt. "It's difficult to draw firm conclusions from these samples." The Bureau also made public a file of previously unreleased evidence, including Cooper's 1971 plane ticket (price: $20.00, paid in cash), and posted previously unreleased composite sketches and fact sheets, along with a request to the general public for information which might lead to Cooper's positive identification. They also disclosed that Cooper chose the older of the two primary parachutes supplied to him, rather than the technically superior professional sport parachute; and that from the two reserve parachutes, he selected a "dummy"—an unusable unit with an inoperative ripcord intended for classroom demonstrations, although it had clear markings identifying it to any experienced skydiver as non-functional. (He cannibalized the other, functional reserve parachute, possibly using its shrouds to tie the money bag shut, and to secure the bag to his body as witnessed by Mucklow) The FBI stressed that inclusion of the dummy reserve parachute, one of four obtained in haste from a Seattle skydiving school, was accidental. In March 2009, the FBI disclosed that Tom Kaye, a paleontologist from the Burke Museum of Natural History and Culture in Seattle, had assembled a team of "citizen sleuths", including scientific illustrator Carol Abraczinskas and metallurgist Alan Stone. The group, eventually known as the Cooper Research Team, reinvestigated important components of the case using GPS, satellite imagery, and other technologies unavailable in 1971. While little new information was gained regarding the buried ransom money or Cooper's landing zone, they were able to find and analyze hundreds of minute particles on Cooper's tie using electron microscopy. "Lycopodium" spores (likely from a pharmaceutical product) were identified, as well as fragments of bismuth and aluminum. In November 2011, Kaye announced that particles of pure (unalloyed) titanium had also been found on the tie. He explained that titanium, which was much rarer in the 1970s than in the 2010s was at that time found only in metal fabrication or production facilities, or at chemical companies using it (combined with aluminum) to store extremely corrosive substances. The findings suggested that Cooper may have been a chemist or a metallurgist, or possibly an engineer or manager (the only employees who wore ties in such facilities at that time) in a metal or chemical manufacturing plant, or at a company that recovered scrap metal from those types of factories. In January 2017, Kaye reported that rare earth minerals such as cerium and strontium sulfide had also been identified among particles from the tie. One of the rare applications for such elements in the 1970s was Boeing's supersonic transport development project, suggesting the possibility that Cooper was a Boeing employee. 
Other possible sources of the material included plants that manufactured cathode ray tubes, such as the Portland firms Teledyne and Tektronix. Over the 45-year span of its active investigation, the FBI periodically made public some of its working hypotheses and tentative conclusions, drawn from witness testimony and the scarce physical evidence. Cooper appeared to be familiar with the Seattle area and may have been an Air Force veteran, based on testimony that he recognized the city of Tacoma from the air as the jet circled Puget Sound, and his accurate comment to Mucklow that McChord AFB was approximately 20 minutes' driving time from the Seattle-Tacoma Airport—a detail most civilians would not know, or comment upon. His financial situation was very likely desperate, as extortionists and other criminals who steal large amounts of money nearly always do so, according to experts, because they need it urgently; otherwise, the crime is not worth the considerable risk. A minority opinion is that Cooper was "a thrill seeker" who made the jump "just to prove it could be done." Agents theorized that Cooper took his alias from a popular Belgian comic book series of the 1970s featuring the fictional hero Dan Cooper, a Royal Canadian Air Force test pilot who took part in numerous heroic adventures, including parachuting. (One cover from the series, reproduced on the FBI web site, depicts test pilot Cooper skydiving in full paratrooper regalia.) Because the Dan Cooper comics were never translated into English, nor imported to the U.S., they speculated that he may have encountered them during a tour of duty in Europe. The Cooper Research Team (see Ongoing investigation) suggested the alternative possibility that Cooper was Canadian, and found the comics in Canada, where they were also sold. They noted his specific demand for "negotiable American currency", a phrase seldom if ever used by American citizens; since witnesses stated that Cooper had no distinguishable accent, Canada would be his most likely country of origin if he were not a U.S. citizen. Evidence suggested that Cooper was knowledgeable about technique, aircraft, and the terrain. He demanded four parachutes to force the assumption that he might compel one or more hostages to jump with him, thus ensuring he would not be deliberately supplied with sabotaged equipment. He chose a 727-100 aircraft because it was ideal for a bail-out escape, due not only to its aft airstair but also the high, aftward placement of all three engines, which allowed a reasonably safe jump without risk of immediate incineration by jet exhaust. It had "single-point fueling" capability, a recent innovation that allowed all tanks to be refueled rapidly through a single fuel port. It also had the ability (unusual for a commercial jet airliner) to remain in slow, low-altitude flight without stalling; and Cooper knew how to control its air speed and altitude without entering the cockpit, where he could have been overpowered by the three pilots. In addition, Cooper was familiar with important details, such as the appropriate flap setting of 15 degrees (which was unique to that aircraft), and the typical refueling time. He knew that the aft airstair could be lowered during flight—a fact never disclosed to civilian flight crews, since there was no situation on a passenger flight that would make it necessary—and that its operation, by a single switch in the rear of the cabin, could not be overridden from the cockpit. 
Some of this knowledge was virtually unique to CIA paramilitary units. In addition to planning his escape, Cooper retrieved the note and wore dark glasses, which indicated that he had a certain level of sophistication in avoiding the things that had aided the identification of the perpetrator of the best-known case of a ransom: the Lindbergh kidnapping. It is not clear how he could have reasonably expected to ever spend the money, fence it at a discount or otherwise profit. While Cooper made the familiar-from-fiction demand of non-sequentially numbered small bills, mass publicity over the Lindbergh case had long made it public knowledge that even with 1930s technology, getting non-sequential bills in a ransom was no defense against the numbers being logged and used to track down a perpetrator. In the Lindbergh case, fencing what he could as hot money and being very careful with what he did personally pass, the perpetrator had been caught through the ransom money nonetheless, with identification and handwriting evidence only brought in at the trial. Although unconscionably perilous by the high safety, training and equipment standards of skydivers, whether Cooper's jump was virtually suicidal is a matter of dispute. The author of an overview and comparison of World War II aircrew bail-outs with Cooper's drop asserts a probability for his survival, and suggests that like copycat Martin McNally, Cooper lost the ransom during descent. The mystery of how the ransom could have been washed into Tena Bar from any Cooper jump area remains. The Tena Bar find anomalies led one local journalist to suggest Cooper, knowing that he could never spend it, dumped the ransom. According to Kaye's research team, Cooper's meticulous planning may also have extended to the timing of his operation, and even his choice of attire. "The FBI searched but couldn't find anyone who disappeared that weekend," Kaye wrote, suggesting that the perpetrator may have returned to his normal occupation. "If you were planning on going 'back to work on Monday', then you would need as much time as possible to get out of the woods, find transportation and get home. The very best time for this is in front of a four-day weekend, which is the timing Dan Cooper chose for his crime." Furthermore, "if he was planning ahead, he knew he had to hitchhike out of the woods, and it is much easier to get picked up in a suit and tie than in old blue jeans." The Bureau were more skeptical, concluding that Cooper lacked crucial skydiving skills and experience. "We originally thought Cooper was an experienced jumper, perhaps even a paratrooper," said Special Agent Larry Carr, leader of the investigative team from 2006 until its dissolution in 2016. "We concluded after a few years this was simply not true. No experienced parachutist would have jumped in the pitch-black night, in the rain, with a 200-mile-an-hour wind in his face, wearing loafers and a trench coat. It was simply too risky. He also missed that his reserve 'chute was only for training, and had been sewn shut—something a skilled skydiver would have checked." He also failed to bring or request a helmet, chose to jump with the older and technically inferior of the two primary parachutes supplied to him, and jumped into a wind chill without proper protection against the extreme cold. The FBI speculated from the beginning that Cooper did not survive his jump. 
"Diving into the wilderness without a plan, without the right equipment, in such terrible conditions, he probably never even got his 'chute open," said Carr. Even if he did land safely, agents contended that survival in the mountainous terrain at the onset of winter would have been all but impossible without an accomplice at a predetermined landing point. This would have required a precisely timed jump—necessitating, in turn, cooperation from the flight crew. There is no evidence that Cooper requested or received any such help from the crew, nor that he had any clear idea where he was when he jumped into the stormy, overcast darkness. The 1980 cash discovery launched several new rounds of conjecture and ultimately raised more questions than it answered. Initial statements by investigators and scientific consultants were founded on the assumption that the bundled bills washed freely into the Columbia River from one of its many connecting tributaries. An Army Corps of Engineers hydrologist noted that the bills had disintegrated in a "rounded" fashion and were matted together, indicating that they had been deposited by river action, as opposed to having been deliberately buried. That conclusion, if correct, supported the opinion that Cooper had not landed near Lake Merwin nor any tributary of the Lewis River, which feeds into the Columbia well downstream from Tina Bar. It also lent credence to supplemental speculation (see Later developments above) that placed the drop zone near the Washougal River, which merges with the Columbia upstream from the discovery site. But the "free floating" hypothesis presented its own difficulties; it did not explain the ten bills missing from one packet, nor was there a logical reason that the three packets would have remained together after separating from the rest of the money. Physical evidence was incompatible with geologic evidence: the FBI's chief investigator, Himmelsbach, observed that free-floating bundles would have had to wash up on the bank "within a couple of years" of the hijacking; otherwise the rubber bands would have long since deteriorated, an observation confirmed experimentally by the Cooper Research Team (see #Subsequent FBI disclosures below). Geological evidence suggested that the bills arrived at Tina Bar well after 1974, the year of a Corps of Engineers dredging operation on that stretch of the river. Geologist Leonard Palmer of Portland State University found two distinct layers of sand and sediment between the clay deposited on the riverbank by the dredge and the sand layer in which the bills were buried, indicating that the bills arrived long after dredging had been completed. The Cooper Research Team later challenged Palmer's conclusion, citing evidence that the clay layer was a natural deposit. That finding, if true, favors an arrival time of less than one year after the event (based on the rubber band experiment), but does not help to explain how the bundles got to Tina Bar, nor from where they came. Retired FBI chief investigator Ralph Himmelsbach wrote in his 1986 book, "I have to confess, if I [were] going to look for Cooper, I would head for the Washougal." The Washougal Valley and its surroundings have been searched repeatedly by private individuals and groups in subsequent years; to date, no discoveries directly traceable to the hijacking have been reported. Some investigators have speculated that the 1980 eruption of Mount St. Helens may have obliterated any remaining physical clues. Alternative theories were advanced. 
Some surmised that the money had been found at a distant location by someone (or possibly even a wild animal), carried to the riverbank, and reburied there. The sheriff of Cowlitz County proposed that Cooper accidentally dropped a few bundles on the airstair, which then blew off the aircraft and fell into the Columbia River. One local newspaper editor theorized that Cooper, knowing he could never spend the money, dumped it in the river, or buried portions of it at Tena Bar (and possibly elsewhere) himself. No hypothesis offered to date satisfactorily explains all of the existing evidence. In 1976, discussion arose over impending expiration of the statute of limitations on the hijacking. Most published legal analyses agreed that it would make little difference, as interpretation of the statute varies considerably from case to case and court to court, and a prosecutor could argue that Cooper had forfeited immunity on any of several valid technical grounds. The question was rendered irrelevant in November when a Portland grand jury returned an indictment "in absentia" against "John Doe, "aka" Dan Cooper" for air piracy and violation of the Hobbs Act. The indictment formally initiated prosecution that can be continued, should the hijacker be apprehended, at any time in the future. Between 1971 and 2016, the FBI processed over a thousand "serious suspects", which included assorted publicity seekers and deathbed confessors, but nothing more than circumstantial evidence could be found to implicate any of them, all being linked by no more than conjecture or crackpot claims of responsibility: In 2003, a Minnesota resident named Lyle Christiansen watched a television documentary about the Cooper hijacking and became convinced that his late brother Kenneth was Cooper. After repeated futile attempts to convince first the FBI, and then the author and film director Nora Ephron (who he hoped would make a movie about the case), he contacted a private investigator in New York City. In 2010 the detective, Skipp Porteous, published a book postulating that Christiansen was the hijacker. The following year, an episode of the History series "Brad Meltzer's Decoded" also summarized the circumstantial evidence linking Christiansen to the Cooper case. Christiansen enlisted in the army in 1944 and was trained as a paratrooper. The war had ended by the time he was deployed in 1945, but he made occasional training jumps while stationed in Japan with occupation forces in the late 1940s. After leaving the Army, he joined Northwest Orient in 1954 as a mechanic in the South Pacific, and subsequently became a flight attendant, and then a purser, based in Seattle. Christiansen was 45 years old at the time of the hijacking, but he was shorter (5 ft 8 in or 173 cm), thinner (150 pounds or 68 kg), and lighter complected than eyewitness descriptions. Christiansen smoked (as did the hijacker), and displayed a particular fondness for bourbon (Cooper's preferred beverage). He was also left-handed. (Evidence photos of Cooper's black tie show the tie clip applied from the left side, suggesting a left-handed wearer.) Schaffner told a reporter that photos of Christiansen fit her memory of the hijacker's appearance more closely than those of other suspects she had been shown, but could not conclusively identify him. (Mucklow, who had the most contact with Cooper, has never granted a press interview.) Christiansen reportedly had purchased a house with cash a few months after the hijacking. 
While dying of cancer in 1994, he told Lyle, "There is something you should know, but I cannot tell you." Lyle said he never pressed his brother to explain. After Christiansen's death, family members discovered gold coins and a valuable stamp collection, along with over $200,000 in bank accounts. They also found a folder of Northwest Orient news clippings which began about the time he was hired in the 1950s, and stopped just prior to the date of the hijacking, despite the fact that the hijacking was by far the most momentous news event in the airline's history. Christiansen continued to work part-time for the airline for many years after 1971, but apparently never clipped another Northwest news story. Research by internet web sleuths would later uncover proof that Christiansen did not pay cash for the house he bought after the hijacking, but instead had a mortgage on the house and took 17 years to pay it off. The same search would also uncover proof that Christiansen had sold off almost two dozen acres of land for $17,000 per acre in the mid 90's, thus accounting for the large sum of money in his account at the time of his death. Despite the publicity generated by Porteous's book and the 2011 television documentary, the FBI is standing by its position that Christiansen cannot be considered a prime suspect. It cites a poor match to eyewitness physical descriptions, a level of skydiving expertise above that predicted by their suspect profile, and an absence of direct incriminating evidence. Coffelt was a con man, ex-convict, and purported government informant who claimed to have been the chauffeur and confidante of Abraham Lincoln's last undisputed descendant, great-grandson Robert Todd Lincoln Beckwith. In 1972 he began claiming he was Cooper, and attempted through an intermediary, a former cellmate named James Brown, to sell his story to a Hollywood production company. He said he landed near Mount Hood, about southeast of Ariel, injuring himself and losing the ransom money in the process. Photos of Coffelt bear a resemblance to the composite drawings, although he was in his mid-fifties in 1971. He was reportedly in Portland on the day of the hijacking, and sustained leg injuries around that time which were consistent with a skydiving mishap. Coffelt's account was reviewed by the FBI, which concluded that it differed in significant details from information that had not been made public, and was therefore a fabrication. Brown, undeterred, continued peddling the story long after Coffelt died in 1975. Multiple media venues, including the CBS news program "60 Minutes", considered and rejected it. In a 2008 book about Lincoln's descendants, author Charles Lachman revisited Coffelt's tale, although it had been discredited thirty-six years before. L.D. Cooper, a leather worker and Korean War veteran, was proposed as a suspect in July 2011 by his niece, Marla Cooper. As an eight-year-old, she recalled Cooper and another uncle planning something "very mischievous", involving the use of "expensive walkie-talkies", at her grandmother's house in Sisters, Oregon, south of Portland. The next day flight 305 was hijacked; and though the uncles ostensibly were turkey hunting, L.D. Cooper came home wearing a bloody shirt—the result, he said, of an auto accident. Later, she said, her parents came to believe that L.D. Cooper was the hijacker. 
She also recalled that her uncle, who died in 1999, was obsessed with the Canadian comic book hero Dan Cooper (see Theories and conjectures), and "had one of his comic books thumbtacked to his wall"—although he was not a skydiver or paratrooper. In August, "New York" magazine published an alternative witness sketch, reportedly based on a description by Flight 305 eyewitness Robert Gregory, depicting horn-rimmed sunglasses, a "russet"-colored suit jacket with wide lapels, and marcelled hair. The article notes that L.D. Cooper had wavy hair that looked marcelled (as did Duane Weber). The FBI announced that no fingerprints had been found on a guitar strap made by L.D. Cooper. One week later, they added that his DNA did not match the partial DNA profile obtained from the hijacker's tie, but acknowledged, once again, that there is no certainty that the hijacker was the source of the organic material obtained from the tie. The Bureau has made no further public comment. Barbara Dayton, a recreational pilot and University of Washington librarian who was born Robert Dayton in 1926, served in the U.S. Merchant Marine and then the Army during World War II. After discharge, Dayton worked with explosives in the construction industry and aspired to a professional airline career, but could not obtain a commercial pilot's license. Dayton underwent gender reassignment surgery in 1969 and changed her name to Barbara. She claimed to have staged the Cooper hijacking two years later, disguised as a man, in order to "get back" at the airline industry and the FAA, whose insurmountable rules and conditions had prevented her from becoming an airline pilot. She said she hid the ransom money in a cistern near her landing point in Woodburn, a suburban area south of Portland. Eventually she recanted her entire story, ostensibly after learning that she could still be charged with the hijacking. The FBI has never commented publicly on Dayton, who died in 2002. Gossett was a Marine Corps, Army, and Army Air Forces veteran who saw action in Korea and Vietnam. His military experience included advanced jump training and wilderness survival. After retiring from military service in 1973, he worked as an ROTC instructor, taught military law at Weber State University in Ogden, Utah, and hosted a radio talk show in Salt Lake City which featured discussions about the paranormal. He died in 2003. Gossett was widely known to be obsessed with the Cooper hijacking. He amassed a voluminous collection of Cooper-related news articles, and told one of his wives that he knew enough about the case to "write the epitaph for D.B. Cooper". Late in his life he reportedly told three of his sons, a retired Utah judge, and a friend in the Salt Lake City public defender's office that he had committed the hijacking. Photos of Gossett taken circa 1971 bear a close resemblance to the most widely circulated Cooper composite drawing. According to Galen Cook, a lawyer who has collected information related to Gossett for years, Gossett once showed his sons a key to a Vancouver, British Columbia safe deposit box which, he claimed, contained the long-missing ransom money. Gossett's eldest son, Greg, said that his father, a compulsive gambler who was always "strapped for cash", showed him "wads of cash" just before Christmas 1971, weeks after the Cooper hijacking. He speculated that Gossett gambled the money away in Las Vegas. 
In 1988, Gossett changed his name to "Wolfgang" and became a Catholic priest, which Cook and others interpreted as an effort to disguise his identity. Other circumstantial evidence includes testimony that Cook claims to have obtained from William Mitchell, a passenger on the hijacked aircraft, regarding a mysterious "physical detail" (which he will not divulge) common to the hijacker and Gossett. Cook also claims to have found "possible links" to Gossett in each of four letters signed by "D.B. Cooper" and mailed to three newspapers within days after the hijacking, although there is no evidence that the actual hijacker created or mailed any of the letters. The FBI has no direct evidence implicating Gossett, and cannot even reliably place him in the Pacific Northwest at the time of the hijacking. "There is not one link to the D.B. Cooper case," said Special Agent Carr, "other than the statements [Gossett] made to someone." Robert Richard Lepsy was a 33-year-old grocery store manager and married father of four from Grayling, Michigan who disappeared in October 1969. His vehicle was found three days later at a local airport, and a man matching Lepsy's description was reportedly seen boarding a flight to Mexico. Authorities concluded that Lepsy had left voluntarily and closed their investigation. Two years after the Cooper hijacking, family members noted that Lepsy's physical features resembled those in the Cooper composite drawings, and asserted that Cooper's clothing was described as very similar to Lepsy's grocery store uniform. Lepsy was declared legally dead in 1976. One of Lepsy's daughters submitted a DNA sample to the FBI in 2011, with unknown results. Though Lepsy was proposed as a Cooper suspect in a 2014 book, there is no record of public comment on him from the FBI. List was an accountant, World War II and Korean War veteran who murdered his wife, three teenage children, and 85-year-old mother in Westfield, New Jersey, fifteen days before the Cooper hijacking, withdrew $200,000 from his mother's bank account, and disappeared. He came to the attention of the Cooper task force due to the timing of his disappearance, multiple matches to the hijacker's description, and the reasoning that "a fugitive accused of mass murder has nothing to lose." After his capture in 1989, List admitted to murdering his family, but denied any involvement in the Cooper hijacking. While his name continues to crop up in Cooper articles and documentaries, no substantive evidence implicates him, and the FBI no longer considers him a suspect. He died in prison in 2008. Theodore E. Mayfield was a Special Forces veteran, pilot, competitive skydiver, and skydiving instructor who served time in 1994 for negligent homicide after two of his students died when their parachutes failed to open. Later, he was found indirectly responsible for thirteen additional skydiving deaths due to faulty equipment and training. His criminal record also included armed robbery and transportation of stolen aircraft. In 2010, he was sentenced to three years' probation for piloting a plane 26 years after losing his pilot's license and rigging certificates. He was suggested repeatedly as a suspect early in the investigation, according to FBI Agent Ralph Himmelsbach, who knew Mayfield from a prior dispute at a local airport. He was ruled out, based partly on the fact that he called Himmelsbach less than two hours after Flight 305 landed in Reno to volunteer advice on standard skydiving practices and possible landing zones. 
In 2006, two amateur researchers named Daniel Dvorak and Matthew Myers proposed Mayfield as a suspect once again, asserting that they had assembled a convincing circumstantial case that would be detailed in a forthcoming book. They theorized that Mayfield called Himmelsbach not to offer advice, but to establish an alibi; and they challenged Himmelsbach's conclusion that Mayfield could not possibly have found a phone in time to call the FBI less than four hours after jumping into the wilderness at night. Mayfield denied any involvement, and repeated a previous assertion that the FBI called "him" five times while the hijacking was still in progress to ask about parachutes, local skydivers, and skydiving techniques. (Himmelsbach said the FBI never called Mayfield.) Mayfield further charged that Dvorak and Myers asked him to play along with their theory, and "we'll all make a lot of money". Dvorak and Myers called any inference of collusion a "blatant lie". Dvorak died in 2007, and the promised book was never published. The FBI offered no comment beyond Himmelsbach's original statement that Mayfield, who died in 2015, was ruled out as a suspect early on. McCoy was an Army veteran who served two tours of duty in Vietnam, first as a demolition expert, and later, with the Green Berets as a helicopter pilot. After his military service he became a warrant officer in the Utah National Guard and an avid recreational skydiver, with aspirations, he said, of becoming a Utah State Trooper. On April 7, 1972, McCoy staged the best-known of the so-called "copycat" hijackings (see below). He boarded United Airlines' Flight 855 (a Boeing 727 with aft stairs) in Denver, Colorado, and brandishing what later proved to be a paperweight resembling a hand grenade and an unloaded handgun, he demanded four parachutes and $500,000. After delivery of the money and parachutes at San Francisco International Airport, McCoy ordered the aircraft back into the sky and bailed out over Provo, Utah, leaving behind his handwritten hijacking instructions and his fingerprints on a magazine he had been reading. He was arrested on April 9 with the ransom cash in his possession, and after trial and conviction, received a 45-year sentence. Two years later he escaped from Lewisburg Federal Penitentiary with several accomplices by crashing a garbage truck through the main gate. Tracked down three months later in Virginia Beach, McCoy was killed in a shootout with FBI agents. In their 1991 book, "D.B. Cooper: The Real McCoy", parole officer Bernie Rhodes and former FBI agent Russell Calame asserted that they had identified McCoy as Cooper. They cited obvious similarities in the two hijackings, claims by McCoy's family that the tie and mother-of-pearl tie clip left on the plane belonged to McCoy, and McCoy's own refusal to admit or deny that he was Cooper. A proponent of their theory was the FBI agent who killed McCoy. "When I shot Richard McCoy," he said, "I shot D. B. Cooper at the same time." While there is no reasonable doubt that McCoy committed the Denver hijacking, the FBI does not consider him a suspect in the Cooper case because of significant mismatches in age and description; a level of skydiving skill well above that thought to be possessed by the hijacker; and credible evidence that McCoy was in Las Vegas on the day of the Portland hijacking, and at home in Utah the day after, having Thanksgiving dinner with his family. 
Robert Wesley Rackstraw is a retired pilot and ex-convict who served on an army helicopter crew and other units during the Vietnam War. He came to the attention of the Cooper task force in February 1978, after he was arrested in Iran and deported to the U.S. to face explosive possession and check kiting charges. Several months later, while released on bail, Rackstraw attempted to fake his own death by radioing a false mayday call and telling controllers that he was bailing out of a rented plane over Monterey Bay. Police later arrested him in Fullerton on an additional charge of forging federal pilot certificates; the plane he claimed to have ditched was found, repainted, in a nearby hangar. Cooper investigators noted his physical resemblance to Cooper composite sketches (although he was only 28 in 1971), military parachute training, and criminal record, but eliminated him as a suspect in 1979 after no direct evidence of his involvement could be found. In 2016, Rackstraw re-emerged as a suspect in a History Channel program and a book. On September 8, 2016, Thomas J. Colbert, an author of the book "The Last Master Outlaw", filed a lawsuit to compel the FBI to release its Cooper case file under the Freedom of Information Act. The suit alleges that the FBI suspended active investigation of the Cooper case "in order to undermine the theory that Rackstraw is D.B. Cooper so as to prevent embarrassment for the bureau's failure to develop evidence sufficient to prosecute him for the crime." In January 2018, a small cold case documentary team reported that they had obtained a letter originally written in December 1971 and sent to "The New York Times", the "Los Angeles Times", "The Seattle Times" and "The Washington Post", with numerous numbers and letters written on it. The team, led by Tom and Dawna Colbert, says that the codes were deciphered and matched to three units Rackstraw was a part of while in the Army, and the FBI refused to acknowledge the findings because "it would have to admit that amateur sleuths had cracked a case the bureau couldn't". One of the Flight 305 flight attendants reportedly "did not find any similarities" between photos of Rackstraw from the 1970s and her recollection of Cooper's appearance. Rackstraw's attorney called the renewed allegations "the stupidest thing I've ever heard", and Rackstraw himself told People.com, "It's a lot of [expletive], and they know it is." The FBI declined further comment. Rackstraw stated in a 2017 phone interview that he lost his job over the 2016 investigations. A June 2018 article circulated claiming private investigators "decoded" a previously publicly unknown letter on file with the FBI, which purportedly includes a disguised confession. Duane L. Weber was a World War II Army veteran who served time in at least six prisons from 1945 to 1968 for burglary and forgery. He was proposed as a suspect by his widow, based primarily on a deathbed confession: Three days before he died in 1995, Weber told his wife, "I am Dan Cooper." The name meant nothing to her, she said; but months later, a friend told her of its significance in the hijacking. She went to her local library to research D.B. Cooper, found Max Gunther's book, and discovered notations in the margins in her husband's handwriting. She then recalled, in retrospect, that Weber once had a nightmare during which he talked in his sleep about jumping from a plane, leaving his fingerprints on the "aft stairs". 
He also reportedly told her that an old knee injury had been incurred by "jumping out of a plane". Like the hijacker, Weber drank bourbon and chain smoked. Other circumstantial evidence included a 1979 trip to Seattle and the Columbia River, during which Weber took a walk alone along the river bank in the Tina Bar area; four months later Brian Ingram made his ransom cash discovery in the same area. The FBI eliminated Weber as an active suspect in July 1998 when his fingerprints did not match any of those processed in the hijacked plane, and no other direct evidence could be found to implicate him. Later, his DNA also failed to match the samples recovered from Cooper's tie, though the bureau has since conceded that it cannot be certain that the organic material on the tie came from Cooper. Walter R. Reca (born Walter R. Peca) was a Michigan native, a military veteran, and an original member of the Michigan Parachute Team. He was proposed as a suspect by his best friend Carl Laurin, a former commercial airline pilot and expert parachutist himself; the claim was later verified by Grand Rapids, Michigan-based publisher Principia Media and announced at a press conference on May 17, 2018. In 2008, Reca confessed to being D.B. Cooper to Laurin in a series of recorded phone calls. Laurin had suspected his friend Walter since the night of the hijacking and, in 1998, began a 20-year investigation to uncover the truth about that infamous evening. In a notarized letter, Reca gave Laurin permission to share his story after his death; Reca died in 2014, aged 80. He also allowed Laurin to tape their phone conversations about the crime over a six-week period in late 2008. In the more than three hours of recordings, Reca gave new details about the hijacking that the public had not heard before. He also confessed to his niece, Lisa Story. Notably, Reca described a man he had run into within an hour of his jump. Laurin, an experienced pilot, used his years of training to determine the location of the jump. He was positive that Walt, aka D.B. Cooper, had landed near Cle Elum, Washington. According to written testimony, Jeff Osiadacz, a Cle Elum, Washington native, was driving his dump truck near Cle Elum on the night of November 24, 1971, when he saw a man walking down the side of the road in the inclement weather. He assumed the man's car had broken down and that he was walking to get assistance, but he did not have room in his truck to pick him up. He continued toward his destination, the Teanaway Junction Café just outside of Cle Elum. After Osiadacz ordered coffee, the man from the side of the road entered the café looking like a "drowned rat", according to Osiadacz. The man sat next to him and asked whether Osiadacz could give his friend directions if he called him on the phone. Osiadacz agreed and spoke with Reca's friend, giving him directions to the café. Shortly after that, Osiadacz left for the Grange Hall to play in a band. Reca offered to pay for his coffee, and the two men amicably parted. Laurin began his search for the witness after Reca described the landscape he had seen on his way to the drop zone (two bridges and some distinctive lights), the exterior and interior of the café, and his encounter with Osiadacz. He described Osiadacz in detail, recalling that he was wearing western gear and had a guitar case. He dubbed him "Cowboy". 
Laurin consulted a map to find these particular landmarks and began making phone calls about the "Cowboy who had driven a dump truck." Remarkably, Laurin was put in contact with Osiadacz, who recalled meeting Reca that night, described what he was wearing and what he looked like, and later confirmed his identity after seeing a photo Laurin sent him. In addition to the taped confession, Laurin also has Reca's written confession and the long underwear that Reca wore under his black pants during the hijacking. Laurin and Principia Media were introduced in 2016, and the investigation took a turn when Principia Media consulted Joe Koenig, a Certified Fraud Examiner and forensic linguist with 45 years of investigative experience, including a role as lead investigator in the Jimmy Hoffa case. Koenig evaluated the more than three hours of audio tapes to assess Laurin's and Reca's communication patterns and talking rhythms, looking for changes that would indicate someone was not being truthful; he found none. He evaluated all documents, including passports, identification cards, photographs, and newspaper clippings, found no evidence of tampering or manipulation, and deemed all documentation authentic and contemporaneous. After comparing the Principia Media team's documents to the available FBI records, he found no discrepancies that eliminated Reca as a suspect. He found it particularly significant that Osiadacz's statement of events on the night of November 24, 1971 was identical to the account that Reca had given five years earlier. Koenig publicly stated at the Principia Media press conference on May 17, 2018, that he believes that Walter R. Reca is D.B. Cooper. On July 19, 2018, Principia Media released a four-part documentary on iTunes, Google Play and Amazon detailing its investigation. Cooper was not the first to attempt air piracy for personal gain. In early November 1971, for example, a Canadian man named Paul Joseph Cini hijacked an Air Canada DC-8 over Montana, but was overpowered by the crew when he put down his shotgun to strap on the parachute he had brought with him. Cooper's apparent success inspired a flurry of imitators, mostly during 1972. In all, 15 hijackings similar to Cooper's—all unsuccessful—were attempted in 1972. With the advent of universal luggage searches in 1973 (see Airport security), the general incidence of hijackings dropped dramatically. There were no further notable Cooper imitators until July 11, 1980, when Glenn K. Tripp seized Northwest Flight 608 at Seattle-Tacoma Airport, demanding $600,000 ($100,000 by an independent account), two parachutes, and the assassination of his boss. After a 10-hour standoff, he was apprehended, but on January 21, 1983—while still on probation—he hijacked the same Northwest flight, this time en route, and demanded to be flown to Afghanistan. When the plane landed in Portland, he was shot and killed by FBI agents. The Cooper hijacking marked the beginning of the end for unfettered and unscrutinized commercial airline travel. Despite the initiation of the federal Sky Marshal Program the previous year, 31 hijackings were committed in U.S. airspace in 1972; 19 of them were for the specific purpose of extorting money and most of the rest were attempts to reach Cuba. In 15 of the extortion cases the hijackers also demanded parachutes. In early 1973 the FAA began requiring airlines to search all passengers and their bags. 
Amid multiple lawsuits charging that such searches violated Fourth Amendment protections against search and seizure, federal courts ruled that they were acceptable when applied universally, and when limited to searches for weapons and explosives. In contrast to the 31 hijackings in 1972, only two were attempted in 1973, both by psychiatric patients, one of whom intended to crash the airliner into the White House to kill President Nixon. In the wake of multiple "copycat" hijackings in 1972, the FAA required that all Boeing 727 aircraft be fitted with a device, later dubbed the "Cooper vane", that prevents lowering of the aft airstair during flight. As a direct result of the hijacking, the installation of peepholes was mandated in all cockpit doors. This made it possible for the cockpit crew to observe people in the passenger cabin without having to open the cockpit door. In 1978 the hijacked 727-100 aircraft was sold by Northwest to Piedmont Airlines where it was re-registered N838N and continued in domestic carrier service. In 1984 it was purchased by the now-defunct charter company Key Airlines, re-registered N29KA, and incorporated into the Air Force's civilian charter fleet that shuttled workers between Nellis Air Force Base and the Tonopah Test Range during the top-secret F-117 Nighthawk development program. In 1996 the aircraft was scrapped for parts in a Memphis boneyard. In late April 2013, Earl Cossey – the owner of the skydiving school that furnished the four parachutes that were given to Cooper – was found dead in his home in Woodinville, a suburb of Seattle. His death was ruled a homicide due to blunt-force trauma to the head. The perpetrator remains unknown. Conspiracy theorists immediately began pointing out possible links to the Cooper case, but authorities responded that they have no reason to believe that any such link exists. Woodinville officials later announced that burglary was most likely the motive for the crime. Himmelsbach famously called Cooper a "rotten sleazy crook" but his bold and adventurous crime inspired a cult following that was expressed in song, film, and literature. Restaurants and bowling alleys in the Pacific Northwest hold regular Cooper-themed promotions and sell tourist souvenirs. A "Cooper Day" celebration has been held at the Ariel General Store and Tavern each November since 1974 with the exception of 2015, the year its owner, Dona Elliot, died. Cooper has appeared in the storylines of the television series "Prison Break", "The Blacklist", "NewsRadio", and "Numb3rs", the 2004 film "Without a Paddle", and a book titled "The Vesuvius Prophecy", based on "The 4400" TV series. Earth Earth is the third planet from the Sun and the only astronomical object known to harbor life. According to radiometric dating and other sources of evidence, Earth formed over 4.5 billion years ago. Earth's gravity interacts with other objects in space, especially the Sun and the Moon, Earth's only natural satellite. Earth revolves around the Sun in 365.26 days, a period known as an Earth year. During this time, Earth rotates about its axis about 366.26 times. Earth's axis of rotation is tilted with respect to its orbital plane, producing seasons on Earth. The gravitational interaction between Earth and the Moon causes ocean tides, stabilizes Earth's orientation on its axis, and gradually slows its rotation. Earth is the densest planet in the Solar System and the largest of the four terrestrial planets. 
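The relationship between the 365.26 solar days and 366.26 rotations quoted above is standard spherical astronomy rather than anything stated here: over one orbit Earth makes exactly one more turn relative to the stars than the number of solar days it experiences, so
\[ N_{\text{rotations}} = N_{\text{solar days}} + 1 = 365.26 + 1 = 366.26, \]
and the rotation period relative to the stars is correspondingly shorter than 24 hours:
\[ 86\,400\ \text{s} \times \frac{365.26}{366.26} \approx 86\,164\ \text{s} \approx 23\ \text{h}\ 56\ \text{min}\ 4\ \text{s}. \]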
Earth's lithosphere is divided into several rigid tectonic plates that migrate across the surface over periods of many millions of years. About 71% of Earth's surface is covered with water, mostly by oceans. The remaining 29% is land consisting of continents and islands that together have many lakes, rivers and other sources of water that contribute to the hydrosphere. The majority of Earth's polar regions are covered in ice, including the Antarctic ice sheet and the sea ice of the Arctic ice pack. Earth's interior remains active with a solid iron inner core, a liquid outer core that generates the Earth's magnetic field, and a convecting mantle that drives plate tectonics. Within the first billion years of Earth's history, life appeared in the oceans and began to affect the Earth's atmosphere and surface, leading to the proliferation of aerobic and anaerobic organisms. Some geological evidence indicates that life may have arisen as much as 4.1 billion years ago. Since then, the combination of Earth's distance from the Sun, physical properties, and geological history has allowed life to evolve and thrive. In the history of the Earth, biodiversity has gone through long periods of expansion, occasionally punctuated by mass extinction events. Over 99% of all species that ever lived on Earth are extinct. Estimates of the number of species on Earth today vary widely; most species have not been described. Over 7.6 billion humans live on Earth and depend on its biosphere and natural resources for their survival. Humans have developed diverse societies and cultures; politically, the world has about 200 sovereign states. The modern English word "Earth" developed from a wide variety of Middle English forms, which derived from an Old English noun most often spelled "eorðe". It has cognates in every Germanic language, and their proto-Germanic root has been reconstructed as *"erþō". In its earliest appearances, "eorðe" was already being used to translate the many senses of Latin "terra" and Greek "gē": the ground, its soil, dry land, the human world, the surface of the world (including the sea), and the globe itself. As with Terra and Gaia, Earth was a personified goddess in Germanic paganism: the Angles were listed by Tacitus as among the devotees of Nerthus, and later Norse mythology included Jörð, a giantess often given as the mother of Thor. Originally, "earth" was written in lowercase, and from early Middle English, its definite sense as "the globe" was expressed as "the earth". By Early Modern English, many nouns were capitalized, and "the earth" became (and often remained) "the Earth", particularly when referenced along with other heavenly bodies. More recently, the name is sometimes simply given as "Earth", by analogy with the names of the other planets. House styles now vary: Oxford spelling recognizes the lowercase form as the most common, with the capitalized form an acceptable variant. Another convention capitalizes "Earth" when appearing as a name (e.g. "Earth's atmosphere") but writes it in lowercase when preceded by "the" (e.g. "the atmosphere of the earth"). It almost always appears in lowercase in colloquial expressions such as "what on earth are you doing?" The oldest material found in the Solar System is dated to about 4.57 billion years ago (Bya). By about 4.54 Bya the primordial Earth had formed. The bodies in the Solar System formed and evolved with the Sun. 
In theory, a solar nebula partitions a volume out of a molecular cloud by gravitational collapse, which begins to spin and flatten into a circumstellar disk, and then the planets grow out of that disk with the Sun. A nebula contains gas, ice grains, and dust (including primordial nuclides). According to nebular theory, planetesimals formed by accretion, with the primordial Earth taking 10– (Mys) to form. A subject of research is the formation of the Moon, some 4.53 Bya. A leading hypothesis is that it was formed by accretion from material loosed from Earth after a Mars-sized object, named Theia, hit Earth. In this view, the mass of Theia was approximately 10 percent of Earth, it hit Earth with a glancing blow and some of its mass merged with Earth. Between approximately 4.1 and , numerous asteroid impacts during the Late Heavy Bombardment caused significant changes to the greater surface environment of the Moon and, by inference, to that of Earth. Earth's atmosphere and oceans were formed by volcanic activity and outgassing. Water vapor from these sources condensed into the oceans, augmented by water and ice from asteroids, protoplanets, and comets. In this model, atmospheric "greenhouse gases" kept the oceans from freezing when the newly forming Sun had only 70% of its current luminosity. By , Earth's magnetic field was established, which helped prevent the atmosphere from being stripped away by the solar wind. A crust formed when the molten outer layer of Earth cooled to form a solid. The two models that explain land mass propose either a steady growth to the present-day forms or, more likely, a rapid growth early in Earth history followed by a long-term steady continental area. Continents formed by plate tectonics, a process ultimately driven by the continuous loss of heat from Earth's interior. Over the period of hundreds of millions of years, the supercontinents have assembled and broken apart. Roughly (Mya), one of the earliest known supercontinents, Rodinia, began to break apart. The continents later recombined to form Pannotia , then finally Pangaea, which also broke apart . The present pattern of ice ages began about and then intensified during the Pleistocene about . High-latitude regions have since undergone repeated cycles of glaciation and thaw, repeating about every . The last continental glaciation ended ago. Chemical reactions led to the first self-replicating molecules about four billion years ago. A half billion years later, the last common ancestor of all current life arose. The evolution of photosynthesis allowed the Sun's energy to be harvested directly by life forms. The resultant molecular oxygen () accumulated in the atmosphere and due to interaction with ultraviolet solar radiation, formed a protective ozone layer () in the upper atmosphere. The incorporation of smaller cells within larger ones resulted in the development of complex cells called eukaryotes. True multicellular organisms formed as cells within colonies became increasingly specialized. Aided by the absorption of harmful ultraviolet radiation by the ozone layer, life colonized Earth's surface. Among the earliest fossil evidence for life is microbial mat fossils found in 3.48 billion-year-old sandstone in Western Australia, biogenic graphite found in 3.7 billion-year-old metasedimentary rocks in Western Greenland, and remains of biotic material found in 4.1 billion-year-old rocks in Western Australia. 
The earliest direct evidence of life on Earth is contained in 3.45 billion-year-old Australian rocks showing fossils of microorganisms. During the Neoproterozoic, much of Earth might have been covered in ice. This hypothesis has been termed "Snowball Earth", and it is of particular interest because it preceded the Cambrian explosion, when multicellular life forms significantly increased in complexity. Following the Cambrian explosion, there have been five mass extinctions. The most recent such event was about 66 million years ago, when an asteroid impact triggered the extinction of the non-avian dinosaurs and other large reptiles, but spared some small animals such as mammals, which at the time resembled shrews. Mammalian life has diversified over the past , and several million years ago an African ape-like animal such as "Orrorin tugenensis" gained the ability to stand upright. This facilitated tool use and encouraged communication that provided the nutrition and stimulation needed for a larger brain, which led to the evolution of humans. The development of agriculture, and then civilization, led to humans having an influence on Earth and the nature and quantity of other life forms that continues to this day. Earth's expected long-term future is tied to that of the Sun. Over the next , solar luminosity will increase by 10%, and over the next by 40%. The Earth's increasing surface temperature will accelerate the inorganic carbon cycle, reducing CO2 concentration to levels lethally low for plants ( for C4 photosynthesis) in approximately . The lack of vegetation will result in the loss of oxygen in the atmosphere, making animal life impossible. After another billion years all surface water will have disappeared and the mean global temperature will reach . From that point, the Earth is expected to be habitable for another , possibly up to if nitrogen is removed from the atmosphere. Even if the Sun were eternal and stable, 27% of the water in the modern oceans will descend to the mantle in one billion years, due to reduced steam venting from mid-ocean ridges. The Sun will evolve to become a red giant in about . Models predict that the Sun will expand to roughly , about 250 times its present radius. Earth's fate is less clear. As a red giant, the Sun will lose roughly 30% of its mass, so, without tidal effects, Earth will move to a more distant orbit from the Sun when the star reaches its maximum radius. Most, if not all, remaining life will be destroyed by the Sun's increased luminosity (peaking at about 5,000 times its present level). A 2008 simulation indicates that Earth's orbit will eventually decay due to tidal effects and drag, causing it to enter the Sun's atmosphere and be vaporized. The shape of Earth is approximately oblate spheroidal. Due to rotation, the Earth is flattened at the poles and bulging around the equator. The diameter of the Earth at the equator is larger than the pole-to-pole diameter. Thus the point on the surface farthest from Earth's center of mass is the summit of the equatorial Chimborazo volcano in Ecuador. The average diameter of the reference spheroid is . Local topography deviates from this idealized spheroid, although on a global scale these deviations are small compared to Earth's radius: The maximum deviation of only 0.17% is at the Mariana Trench ( below local sea level), whereas Mount Everest ( above local sea level) represents a deviation of 0.14%. 
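The two deviation percentages just quoted can be checked with commonly cited figures that do not appear in this text (a mean radius of roughly 6,371 km, about 8.8 km of elevation for Mount Everest, and about 10.9 km of depth for the Challenger Deep):
\[ \frac{8.8\ \text{km}}{6371\ \text{km}} \approx 0.0014 \approx 0.14\%, \qquad \frac{10.9\ \text{km}}{6371\ \text{km}} \approx 0.0017 \approx 0.17\%. \]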
In geodesy, the exact shape that Earth's oceans would adopt in the absence of land and perturbations such as tides and winds is called the geoid. More precisely, the geoid is the surface of gravitational equipotential at mean sea level. Earth's mass is approximately (5,970 Yg). It is composed mostly of iron (32.1%), oxygen (30.1%), silicon (15.1%), magnesium (13.9%), sulfur (2.9%), nickel (1.8%), calcium (1.5%), and aluminium (1.4%), with the remaining 1.2% consisting of trace amounts of other elements. Due to mass segregation, the core region is estimated to be primarily composed of iron (88.8%), with smaller amounts of nickel (5.8%), sulfur (4.5%), and less than 1% trace elements. The most common rock constituents of the crust are nearly all oxides: chlorine, sulfur, and fluorine are the important exceptions to this and their total amount in any rock is usually much less than 1%. Over 99% of the crust is composed of 11 oxides, principally silica, alumina, iron oxides, lime, magnesia, potash, and soda. Earth's interior, like that of the other terrestrial planets, is divided into layers by their chemical or physical (rheological) properties. The outer layer is a chemically distinct silicate solid crust, which is underlain by a highly viscous solid mantle. The crust is separated from the mantle by the Mohorovičić discontinuity. The thickness of the crust varies from about under the oceans to for the continents. The crust and the cold, rigid, top of the upper mantle are collectively known as the lithosphere, and it is of the lithosphere that the tectonic plates are composed. Beneath the lithosphere is the asthenosphere, a relatively low-viscosity layer on which the lithosphere rides. Important changes in crystal structure within the mantle occur at below the surface, spanning a transition zone that separates the upper and lower mantle. Beneath the mantle, an extremely low viscosity liquid outer core lies above a solid inner core. The Earth's inner core might rotate at a slightly higher angular velocity than the remainder of the planet, advancing by 0.1–0.5° per year. The radius of the inner core is about one fifth of that of Earth. Earth's internal heat comes from a combination of residual heat from planetary accretion (about 20%) and heat produced through radioactive decay (80%). The major heat-producing isotopes within Earth are potassium-40, uranium-238, and thorium-232. At the center, the temperature may be up to , and the pressure could reach . Because much of the heat is provided by radioactive decay, scientists postulate that early in Earth's history, before isotopes with short half-lives were depleted, Earth's heat production was much higher. At approximately , twice the present-day heat would have been produced, increasing the rates of mantle convection and plate tectonics, and allowing the production of uncommon igneous rocks such as komatiites that are rarely formed today. The mean heat loss from Earth is , for a global heat loss of . A portion of the core's thermal energy is transported toward the crust by mantle plumes, a form of convection consisting of upwellings of higher-temperature rock. These plumes can produce hotspots and flood basalts. More of the heat in Earth is lost through plate tectonics, by mantle upwelling associated with mid-ocean ridges. The final major mode of heat loss is through conduction through the lithosphere, the majority of which occurs under the oceans because the crust there is much thinner than that of the continents. 
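The statement that early Earth produced far more radiogenic heat follows from the decay law; the half-lives used below are standard reference values, and the choice of three billion years ago is only illustrative, not a figure from this text. Looking back a time \(t\), an isotope's heat output scales as
\[ P(-t) = P_0 \, 2^{\,t/t_{1/2}}, \]
so roughly three billion years ago potassium-40 (half-life about 1.25 billion years) was about \(2^{3/1.25} \approx 5\) times more abundant, uranium-238 (about 4.47 billion years) about \(2^{3/4.47} \approx 1.6\) times, and thorium-232 (about 14 billion years) only about 1.2 times; weighted by their present-day contributions, this is broadly consistent with the roughly doubled total heat production mentioned above.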
Earth's mechanically rigid outer layer, the lithosphere, is divided into tectonic plates. These plates are rigid segments that move relative to each other at one of three types of boundaries: At convergent boundaries, two plates come together; at divergent boundaries, two plates are pulled apart; and at transform boundaries, two plates slide past one another laterally. Along these plate boundaries, earthquakes, volcanic activity, mountain-building, and oceanic trench formation can occur. The tectonic plates ride on top of the asthenosphere, the solid but less-viscous part of the upper mantle that can flow and move along with the plates. As the tectonic plates migrate, oceanic crust is subducted under the leading edges of the plates at convergent boundaries. At the same time, the upwelling of mantle material at divergent boundaries creates mid-ocean ridges. The combination of these processes recycles the oceanic crust back into the mantle. Due to this recycling, most of the ocean floor is less than old. The oldest oceanic crust is located in the Western Pacific and is estimated to be old. By comparison, the oldest dated continental crust is . The seven major plates are the Pacific, North American, Eurasian, African, Antarctic, Indo-Australian, and South American. Other notable plates include the Arabian Plate, the Caribbean Plate, the Nazca Plate off the west coast of South America and the Scotia Plate in the southern Atlantic Ocean. The Australian Plate fused with the Indian Plate between . The fastest-moving plates are the oceanic plates, with the Cocos Plate advancing at a rate of and the Pacific Plate moving . At the other extreme, the slowest-moving plate is the Eurasian Plate, progressing at a typical rate of . The total surface area of Earth is about . Of this, 70.8%, or , is below sea level and covered by ocean water. Below the ocean's surface are much of the continental shelf, mountains, volcanoes, oceanic trenches, submarine canyons, oceanic plateaus, abyssal plains, and a globe-spanning mid-ocean ridge system. The remaining 29.2%, or , not covered by water has terrain that varies greatly from place to place and consists of mountains, deserts, plains, plateaus, and other landforms. Tectonics and erosion, volcanic eruptions, flooding, weathering, glaciation, the growth of coral reefs, and meteorite impacts are among the processes that constantly reshape the Earth's surface over geological time. The continental crust consists of lower density material such as the igneous rocks granite and andesite. Less common is basalt, a denser volcanic rock that is the primary constituent of the ocean floors. Sedimentary rock is formed from the accumulation of sediment that becomes buried and compacted together. Nearly 75% of the continental surfaces are covered by sedimentary rocks, although they form about 5% of the crust. The third form of rock material found on Earth is metamorphic rock, which is created from the transformation of pre-existing rock types through high pressures, high temperatures, or both. The most abundant silicate minerals on Earth's surface include quartz, feldspars, amphibole, mica, pyroxene and olivine. Common carbonate minerals include calcite (found in limestone) and dolomite. The elevation of the land surface varies from the low point of at the Dead Sea, to a maximum altitude of at the top of Mount Everest. The mean height of land above sea level is about . 
The pedosphere is the outermost layer of Earth's continental surface and is composed of soil and subject to soil formation processes. The total arable land is 10.9% of the land surface, with 1.3% being permanent cropland. Close to 40% of Earth's land surface is used for agriculture, or an estimated of cropland and of pastureland. The abundance of water on Earth's surface is a unique feature that distinguishes the "Blue Planet" from other planets in the Solar System. Earth's hydrosphere consists chiefly of the oceans, but technically includes all water surfaces in the world, including inland seas, lakes, rivers, and underground waters down to a depth of . The deepest underwater location is Challenger Deep of the Mariana Trench in the Pacific Ocean with a depth of . The mass of the oceans is approximately 1.35×10^18 metric tons, or about 1/4400 of Earth's total mass. The oceans cover an area of with a mean depth of , resulting in an estimated volume of . If all of Earth's crustal surface were at the same elevation as a smooth sphere, the depth of the resulting world ocean would be . About 97.5% of the water is saline; the remaining 2.5% is fresh water. Most fresh water, about 68.7%, is present as ice in ice caps and glaciers. The average salinity of Earth's oceans is about 35 grams of salt per kilogram of sea water (3.5% salt). Most of this salt was released from volcanic activity or extracted from cool igneous rocks. The oceans are also a reservoir of dissolved atmospheric gases, which are essential for the survival of many aquatic life forms. Sea water has an important influence on the world's climate, with the oceans acting as a large heat reservoir. Shifts in the oceanic temperature distribution can cause significant weather shifts, such as the El Niño–Southern Oscillation. The atmospheric pressure at Earth's sea level averages , with a scale height of about . A dry atmosphere is composed of 78.084% nitrogen, 20.946% oxygen, 0.934% argon, and trace amounts of carbon dioxide and other gaseous molecules. Water vapor content varies between 0.01% and 4% but averages about 1%. The height of the troposphere varies with latitude, ranging between at the poles to at the equator, with some variation resulting from weather and seasonal factors. Earth's biosphere has significantly altered its atmosphere. Oxygenic photosynthesis evolved , forming the primarily nitrogen–oxygen atmosphere of today. This change enabled the proliferation of aerobic organisms and, indirectly, the formation of the ozone layer due to the subsequent conversion of atmospheric oxygen into ozone. The ozone layer blocks ultraviolet solar radiation, permitting life on land. Other atmospheric functions important to life include transporting water vapor, providing useful gases, causing small meteors to burn up before they strike the surface, and moderating temperature. This last phenomenon is known as the greenhouse effect: trace molecules within the atmosphere serve to capture thermal energy emitted from the ground, thereby raising the average temperature. Water vapor, carbon dioxide, methane, nitrous oxide, and ozone are the primary greenhouse gases in the atmosphere. Without this heat-retention effect, the average surface temperature would be , in contrast to the current , and life on Earth probably would not exist in its current form. In May 2017, glints of light, seen as twinkling from an orbiting satellite a million miles away, were found to be reflected light from ice crystals in the atmosphere. 
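The greenhouse warming described above can be illustrated with the standard zero-dimensional radiative-balance estimate; the solar constant of about 1361 W/m², the Bond albedo of about 0.3, and the Stefan–Boltzmann constant \(\sigma\) are common reference values, not figures from this text:
\[ T_{\text{eff}} = \left[\frac{S(1-A)}{4\sigma}\right]^{1/4} \approx \left[\frac{1361 \times 0.7}{4 \times 5.67\times10^{-8}}\right]^{1/4} \approx 255\ \text{K} \approx -18\ ^{\circ}\text{C}, \]
about 33 °C colder than the observed mean surface temperature of roughly 288 K (15 °C); the difference is supplied by the greenhouse gases listed above.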
Earth's atmosphere has no definite boundary, slowly becoming thinner and fading into outer space. Three-quarters of the atmosphere's mass is contained within the first of the surface. This lowest layer is called the troposphere. Energy from the Sun heats this layer, and the surface below, causing expansion of the air. This lower-density air then rises and is replaced by cooler, higher-density air. The result is atmospheric circulation that drives the weather and climate through redistribution of thermal energy. The primary atmospheric circulation bands consist of the trade winds in the equatorial region below 30° latitude and the westerlies in the mid-latitudes between 30° and 60°. Ocean currents are also important factors in determining climate, particularly the thermohaline circulation that distributes thermal energy from the equatorial oceans to the polar regions. Water vapor generated through surface evaporation is transported by circulatory patterns in the atmosphere. When atmospheric conditions permit an uplift of warm, humid air, this water condenses and falls to the surface as precipitation. Most of the water is then transported to lower elevations by river systems and usually returned to the oceans or deposited into lakes. This water cycle is a vital mechanism for supporting life on land and is a primary factor in the erosion of surface features over geological periods. Precipitation patterns vary widely, ranging from several meters of water per year to less than a millimeter. Atmospheric circulation, topographic features, and temperature differences determine the average precipitation that falls in each region. The amount of solar energy reaching Earth's surface decreases with increasing latitude. At higher latitudes, the sunlight reaches the surface at lower angles, and it must pass through thicker columns of the atmosphere. As a result, the mean annual air temperature at sea level decreases by about per degree of latitude from the equator. Earth's surface can be subdivided into specific latitudinal belts of approximately homogeneous climate. Ranging from the equator to the polar regions, these are the tropical (or equatorial), subtropical, temperate and polar climates. This latitudinal rule has several anomalies: The commonly used Köppen climate classification system has five broad groups (humid tropics, arid, humid middle latitudes, continental and cold polar), which are further divided into more specific subtypes. The Köppen system rates regions of terrain based on observed temperature and precipitation. The highest air temperature ever measured on Earth was in Furnace Creek, California, in Death Valley, in 1913. The lowest air temperature ever directly measured on Earth was at Vostok Station in 1983, but satellites have used remote sensing to measure temperatures as low as in East Antarctica. These temperature records are only measurements made with modern instruments from the 20th century onwards and likely do not reflect the full range of temperature on Earth. Above the troposphere, the atmosphere is usually divided into the stratosphere, mesosphere, and thermosphere. Each layer has a different lapse rate, defining the rate of change in temperature with height. Beyond these, the exosphere thins out into the magnetosphere, where the geomagnetic fields interact with the solar wind. Within the stratosphere is the ozone layer, a component that partially shields the surface from ultraviolet light and thus is important for life on Earth. 
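For reference, the lapse rate mentioned above is the rate at which temperature falls with altitude; the dry adiabatic value below is a textbook figure computed from standard constants (g of about 9.81 m/s² and a specific heat of air of about 1005 J per kg per K), not from this text:
\[ \Gamma \equiv -\frac{dT}{dz}, \qquad \Gamma_{\text{dry}} = \frac{g}{c_p} \approx \frac{9.81}{1005}\ \text{K/m} \approx 9.8\ \text{K/km}. \]
In the troposphere temperature generally decreases with height, while in the stratosphere absorption of ultraviolet light by the ozone layer makes temperature increase with height instead.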
The Kármán line, defined as 100 km above Earth's surface, is a working definition for the boundary between the atmosphere and outer space. Thermal energy causes some of the molecules at the outer edge of the atmosphere to increase their velocity to the point where they can escape from Earth's gravity. This causes a slow but steady loss of the atmosphere into space. Because unfixed hydrogen has a low molecular mass, it can achieve escape velocity more readily, and it leaks into outer space at a greater rate than other gases. The leakage of hydrogen into space contributes to the shifting of Earth's atmosphere and surface from an initially reducing state to its current oxidizing one. Photosynthesis provided a source of free oxygen, but the loss of reducing agents such as hydrogen is thought to have been a necessary precondition for the widespread accumulation of oxygen in the atmosphere. Hence the ability of hydrogen to escape from the atmosphere may have influenced the nature of life that developed on Earth. In the current, oxygen-rich atmosphere most hydrogen is converted into water before it has an opportunity to escape. Instead, most of the hydrogen loss comes from the destruction of methane in the upper atmosphere. The gravity of Earth is the acceleration that is imparted to objects due to the distribution of mass within the Earth. Near the Earth's surface, gravitational acceleration is approximately . Local differences in topography, geology, and deeper tectonic structure cause local and broad, regional differences in the Earth's gravitational field, known as gravitational anomalies. The main part of Earth's magnetic field is generated in the core, the site of a dynamo process that converts the kinetic energy of thermally and compositionally driven convection into electrical and magnetic field energy. The field extends outwards from the core, through the mantle, and up to Earth's surface, where it is, approximately, a dipole. The poles of the dipole are located close to Earth's geographic poles. At the equator of the magnetic field, the magnetic-field strength at the surface is , with global magnetic dipole moment of . The convection movements in the core are chaotic; the magnetic poles drift and periodically change alignment. This causes secular variation of the main field and field reversals at irregular intervals averaging a few times every million years. The most recent reversal occurred approximately 700,000 years ago. The extent of Earth's magnetic field in space defines the magnetosphere. Ions and electrons of the solar wind are deflected by the magnetosphere; solar wind pressure compresses the dayside of the magnetosphere, to about 10 Earth radii, and extends the nightside magnetosphere into a long tail. Because the velocity of the solar wind is greater than the speed at which waves propagate through the solar wind, a supersonic bowshock precedes the dayside magnetosphere within the solar wind. Charged particles are contained within the magnetosphere; the plasmasphere is defined by low-energy particles that essentially follow magnetic field lines as Earth rotates; the ring current is defined by medium-energy particles that drift relative to the geomagnetic field, but with paths that are still dominated by the magnetic field, and the Van Allen radiation belt are formed by high-energy particles whose motion is essentially random, but otherwise contained by the magnetosphere. 
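The surface-gravity and hydrogen-escape statements above can be tied together with two standard formulas; the gravitational constant, Earth's mass, and Earth's mean radius used here are the usual reference values, none of which appear in this text:
\[ g = \frac{GM}{R^2} \approx 9.8\ \text{m/s}^2, \qquad v_{\text{esc}} = \sqrt{\frac{2GM}{R}} \approx 11.2\ \text{km/s}. \]
Because the typical thermal speed of a gas particle scales as \(\sqrt{kT/m}\), a hydrogen atom moves about four times faster than an oxygen atom at the same temperature, so far more of hydrogen's high-speed tail exceeds the escape speed; this is the sense in which light, unfixed hydrogen leaks away more readily.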
During magnetic storms and substorms, charged particles can be deflected from the outer magnetosphere and especially the magnetotail, directed along field lines into Earth's ionosphere, where atmospheric atoms can be excited and ionized, causing the aurora. Earth's rotation period relative to the Sun—its mean solar day—is of mean solar time (). Because Earth's solar day is now slightly longer than it was during the 19th century due to tidal deceleration, each day varies between longer. Earth's rotation period relative to the fixed stars, called its "stellar day" by the International Earth Rotation and Reference Systems Service (IERS), is of mean solar time (UT1), or Earth's rotation period relative to the precessing or moving mean vernal equinox, misnamed its "sidereal day", is of mean solar time (UT1) . Thus the sidereal day is shorter than the stellar day by about 8.4 ms. The length of the mean solar day in SI seconds is available from the IERS for the periods 1623–2005 and 1962–2005. Apart from meteors within the atmosphere and low-orbiting satellites, the main apparent motion of celestial bodies in Earth's sky is to the west at a rate of 15°/h = 15'/min. For bodies near the celestial equator, this is equivalent to an apparent diameter of the Sun or the Moon every two minutes; from Earth's surface, the apparent sizes of the Sun and the Moon are approximately the same. Earth orbits the Sun at an average distance of about every 365.2564 mean solar days, or one sidereal year. This gives an apparent movement of the Sun eastward with respect to the stars at a rate of about 1°/day, which is one apparent Sun or Moon diameter every 12 hours. Due to this motion, on average it takes 24 hours—a solar day—for Earth to complete a full rotation about its axis so that the Sun returns to the meridian. The orbital speed of Earth averages about , which is fast enough to travel a distance equal to Earth's diameter, about , in seven minutes, and the distance to the Moon, , in about 3.5 hours. The Moon and Earth orbit a common barycenter every 27.32 days relative to the background stars. When combined with the Earth–Moon system's common orbit around the Sun, the period of the synodic month, from new moon to new moon, is 29.53 days. Viewed from the celestial north pole, the motion of Earth, the Moon, and their axial rotations are all counterclockwise. Viewed from a vantage point above the north poles of both the Sun and Earth, Earth orbits in a counterclockwise direction about the Sun. The orbital and axial planes are not precisely aligned: Earth's axis is tilted some 23.44 degrees from the perpendicular to the Earth–Sun plane (the ecliptic), and the Earth–Moon plane is tilted up to ±5.1 degrees against the Earth–Sun plane. Without this tilt, there would be an eclipse every two weeks, alternating between lunar eclipses and solar eclipses. The Hill sphere, or the sphere of gravitational influence, of the Earth is about in radius. This is the maximum distance at which the Earth's gravitational influence is stronger than the more distant Sun and planets. Objects must orbit the Earth within this radius, or they can become unbound by the gravitational perturbation of the Sun. Earth, along with the Solar System, is situated in the Milky Way and orbits about 28,000 light-years from its center. It is about 20 light-years above the galactic plane in the Orion Arm. The axial tilt of the Earth is approximately 23.439281° with the axis of its orbit plane, always pointing towards the Celestial Poles. 
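The orbital-speed claims above are internally consistent; the average Earth–Sun distance of about 1.496×10^8 km and the equatorial diameter of about 12,742 km are standard values not given in this text:
\[ v \approx \frac{2\pi \times 1.496\times10^{8}\ \text{km}}{365.25 \times 86\,400\ \text{s}} \approx 29.8\ \text{km/s}, \qquad \frac{12\,742\ \text{km}}{29.8\ \text{km/s}} \approx 430\ \text{s} \approx 7\ \text{min}, \]
and at that speed the roughly 384,400 km to the Moon is covered in about 3.6 hours, matching the figure of about 3.5 hours quoted above.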
Due to Earth's axial tilt, the amount of sunlight reaching any given point on the surface varies over the course of the year. This causes the seasonal change in climate, with summer in the Northern Hemisphere occurring when the Tropic of Cancer is facing the Sun, and winter taking place when the Tropic of Capricorn in the Southern Hemisphere faces the Sun. During the summer, the day lasts longer, and the Sun climbs higher in the sky. In winter, the climate becomes cooler and the days shorter. In northern temperate latitudes, the Sun rises north of true east during the summer solstice, and sets north of true west, reversing in the winter. The Sun rises south of true east in the summer for the southern temperate zone and sets south of true west. Above the Arctic Circle, an extreme case is reached where there is no daylight at all for part of the year, up to six months at the North Pole itself, a polar night. In the Southern Hemisphere, the situation is exactly reversed, with the South Pole oriented opposite the direction of the North Pole. Six months later, this pole will experience a midnight sun, a day of 24 hours, again reversing with the South Pole. By astronomical convention, the four seasons can be determined by the solstices—the points in the orbit of maximum axial tilt toward or away from the Sun—and the equinoxes, when the direction of the tilt and the direction to the Sun are perpendicular. In the Northern Hemisphere, winter solstice currently occurs around 21 December; summer solstice is near 21 June, spring equinox is around 20 March and autumnal equinox is about 22 or 23 September. In the Southern Hemisphere, the situation is reversed, with the summer and winter solstices exchanged and the spring and autumnal equinox dates swapped. The angle of Earth's axial tilt is relatively stable over long periods of time. Its axial tilt does undergo nutation; a slight, irregular motion with a main period of 18.6 years. The orientation (rather than the angle) of Earth's axis also changes over time, precessing around in a complete circle over each 25,800 year cycle; this precession is the reason for the difference between a sidereal year and a tropical year. Both of these motions are caused by the varying attraction of the Sun and the Moon on Earth's equatorial bulge. The poles also migrate a few meters across Earth's surface. This polar motion has multiple, cyclical components, which collectively are termed quasiperiodic motion. In addition to an annual component to this motion, there is a 14-month cycle called the Chandler wobble. Earth's rotational velocity also varies in a phenomenon known as length-of-day variation. In modern times, Earth's perihelion occurs around 3 January, and its aphelion around 4 July. These dates change over time due to precession and other orbital factors, which follow cyclical patterns known as Milankovitch cycles. The changing Earth–Sun distance causes an increase of about 6.9% in solar energy reaching Earth at perihelion relative to aphelion. Because the Southern Hemisphere is tilted toward the Sun at about the same time that Earth reaches the closest approach to the Sun, the Southern Hemisphere receives slightly more energy from the Sun than does the northern over the course of a year. This effect is much less significant than the total energy change due to the axial tilt, and most of the excess energy is absorbed by the higher proportion of water in the Southern Hemisphere. 
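The 6.9% figure above follows from the inverse-square dependence of insolation on distance; the orbital eccentricity of about 0.0167 is a standard value not stated in this text:
\[ \frac{S_{\text{perihelion}}}{S_{\text{aphelion}}} = \left(\frac{1+e}{1-e}\right)^{2} \approx \left(\frac{1.0167}{0.9833}\right)^{2} \approx 1.069, \]
i.e. about 6.9% more solar energy arrives at perihelion in early January than at aphelion in early July.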
A study from 2016 suggested that Planet Nine tilted all Solar System planets, including Earth's, by about six degrees. A planet that can sustain life is termed habitable, even if life did not originate there. Earth provides liquid water—an environment where complex organic molecules can assemble and interact, and sufficient energy to sustain metabolism. The distance of Earth from the Sun, as well as its orbital eccentricity, rate of rotation, axial tilt, geological history, sustaining atmosphere, and magnetic field all contribute to the current climatic conditions at the surface. A planet's life forms inhabit ecosystems, whose total is sometimes said to form a "biosphere". Earth's biosphere is thought to have begun evolving about . The biosphere is divided into a number of biomes, inhabited by broadly similar plants and animals. On land, biomes are separated primarily by differences in latitude, height above sea level and humidity. Terrestrial biomes lying within the Arctic or Antarctic Circles, at high altitudes or in extremely arid areas are relatively barren of plant and animal life; species diversity reaches a peak in humid lowlands at equatorial latitudes. In July 2016, scientists reported identifying a set of 355 genes from the last universal common ancestor (LUCA) of all organisms living on Earth. Earth has resources that have been exploited by humans. Those termed non-renewable resources, such as fossil fuels, only renew over geological timescales. Large deposits of fossil fuels are obtained from Earth's crust, consisting of coal, petroleum, and natural gas. These deposits are used by humans both for energy production and as feedstock for chemical production. Mineral ore bodies have also been formed within the crust through a process of ore genesis, resulting from actions of magmatism, erosion, and plate tectonics. These bodies form concentrated sources for many metals and other useful elements. Earth's biosphere produces many useful biological products for humans, including food, wood, pharmaceuticals, oxygen, and the recycling of many organic wastes. The land-based ecosystem depends upon topsoil and fresh water, and the oceanic ecosystem depends upon dissolved nutrients washed down from the land. In 1980, of Earth's land surface consisted of forest and woodlands, was grasslands and pasture, and was cultivated as croplands. The estimated amount of irrigated land in 1993 was . Humans also live on the land by using building materials to construct shelters. Large areas of Earth's surface are subject to extreme weather such as tropical cyclones, hurricanes, or typhoons that dominate life in those areas. From 1980 to 2000, these events caused an average of 11,800 human deaths per year. Many places are subject to earthquakes, landslides, tsunamis, volcanic eruptions, tornadoes, sinkholes, blizzards, floods, droughts, wildfires, and other calamities and disasters. Many localized areas are subject to human-made pollution of the air and water, acid rain and toxic substances, loss of vegetation (overgrazing, deforestation, desertification), loss of wildlife, species extinction, soil degradation, soil depletion and erosion. There is a scientific consensus linking human activities to global warming due to industrial carbon dioxide emissions. This is predicted to produce changes such as the melting of glaciers and ice sheets, more extreme temperature ranges, significant changes in weather and a global rise in average sea levels. 
Cartography, the study and practice of map-making, and geography, the study of the lands, features, inhabitants and phenomena on Earth, have historically been the disciplines devoted to depicting Earth. Surveying, the determination of locations and distances, and to a lesser extent navigation, the determination of position and direction, have developed alongside cartography and geography, providing and suitably quantifying the requisite information. Earth's human population reached approximately seven billion on 31 October 2011. Projections indicate that the world's human population will reach 9.2 billion in 2050. Most of the growth is expected to take place in developing nations. Human population density varies widely around the world, but a majority live in Asia. By 2020, 60% of the world's population is expected to be living in urban, rather than rural, areas. It is estimated that one-eighth of Earth's surface is suitable for humans to live on – three-quarters of Earth's surface is covered by oceans, leaving one-quarter as land. Half of that land area is desert (14%), high mountains (27%), or other unsuitable terrains. The northernmost permanent settlement in the world is Alert, on Ellesmere Island in Nunavut, Canada (82°28′N). The southernmost is the Amundsen–Scott South Pole Station, in Antarctica, almost exactly at the South Pole (90°S). Independent sovereign nations claim the planet's entire land surface, except for some parts of Antarctica, a few land parcels along the Danube river's western bank, and the unclaimed area of Bir Tawil between Egypt and Sudan. There are 193 sovereign states that are member states of the United Nations, plus two observer states and 72 dependent territories and states with limited recognition. Earth has never had a sovereign government with authority over the entire globe, although some nation-states have striven for world domination and failed. The United Nations is a worldwide intergovernmental organization that was created with the goal of intervening in the disputes between nations, thereby avoiding armed conflict. The U.N. serves primarily as a forum for international diplomacy and international law. When the consensus of the membership permits, it provides a mechanism for armed intervention. The first human to orbit Earth was Yuri Gagarin on 12 April 1961. In total, about 487 people have visited outer space and reached orbit, and, of these, twelve have walked on the Moon. Normally, the only humans in space are those on the International Space Station. The station's crew, made up of six people, is usually replaced every six months. The farthest that humans have traveled from Earth is about 400,000 km, achieved during the Apollo 13 mission in 1970. The Moon is a relatively large, terrestrial, planet-like natural satellite, with a diameter about one-quarter of Earth's. It is the largest moon in the Solar System relative to the size of its planet, although Charon is larger relative to the dwarf planet Pluto. The natural satellites of other planets are also referred to as "moons", after Earth's. The gravitational attraction between Earth and the Moon causes tides on Earth. The same effect on the Moon has led to its tidal locking: its rotation period is the same as the time it takes to orbit Earth. As a result, it always presents the same face to the planet. As the Moon orbits Earth, different parts of its face are illuminated by the Sun, leading to the lunar phases; the dark part of the face is separated from the light part by the solar terminator.
Due to their tidal interaction, the Moon recedes from Earth at a rate of approximately 38 mm per year. Over millions of years, these tiny modifications—and the lengthening of Earth's day by about 23 µs/yr—add up to significant changes. During the Devonian period (approximately 400 million years ago), for example, there were 400 days in a year, with each day lasting 21.8 hours (a rough consistency check of these figures is sketched after this paragraph). The Moon may have dramatically affected the development of life by moderating the planet's climate. Paleontological evidence and computer simulations show that Earth's axial tilt is stabilized by tidal interactions with the Moon. Some theorists think that without this stabilization against the torques applied by the Sun and planets to Earth's equatorial bulge, the rotational axis might be chaotically unstable, exhibiting chaotic changes over millions of years, as appears to be the case for Mars. Viewed from Earth, the Moon is just far enough away to have almost the same apparent-sized disk as the Sun. The angular sizes (or solid angles) of these two bodies match because, although the Sun's diameter is about 400 times as large as the Moon's, it is also 400 times more distant. This allows total and annular solar eclipses to occur on Earth. The most widely accepted theory of the Moon's origin, the giant-impact hypothesis, states that it formed from the collision of a Mars-size protoplanet called Theia with the early Earth. This hypothesis explains (among other things) the Moon's relative lack of iron and volatile elements and the fact that its composition is nearly identical to that of Earth's crust. Earth has at least five co-orbital asteroids, including 3753 Cruithne. A trojan asteroid companion, 2010 TK7, is librating around the leading Lagrange triangular point, L4, in Earth's orbit around the Sun. The tiny near-Earth asteroid 2006 RH120 makes close approaches to the Earth–Moon system roughly every twenty years. During these approaches, it can orbit Earth for brief periods of time. At the most recent count, there were 1,738 operational, human-made satellites orbiting Earth. There are also inoperative satellites, including Vanguard 1, the oldest satellite currently in orbit, and over 16,000 pieces of tracked space debris. Earth's largest artificial satellite is the International Space Station. The standard astronomical symbol of Earth consists of a cross circumscribed by a circle, representing the four corners of the world. Human cultures have developed many views of the planet. Earth is sometimes personified as a deity. In many cultures it is a mother goddess that is also the primary fertility deity, and by the mid-20th century, the Gaia Principle described Earth's environments and life as a single self-regulating organism leading to broad stabilization of the conditions of habitability. Creation myths in many religions involve the creation of Earth by a supernatural deity or deities. Scientific investigation has resulted in several culturally transformative shifts in people's view of the planet. Initial belief in a flat Earth was gradually displaced in the Greek colonies of southern Italy during the late 6th century BC by the idea of a spherical Earth, which was attributed to both the philosophers Pythagoras and Parmenides. By the end of the 5th century BC, the sphericity of Earth was universally accepted among Greek intellectuals. Earth was generally believed to be the center of the universe until the 16th century, when scientists first conclusively demonstrated that it was a moving object, comparable to the other planets in the Solar System.
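The Devonian figures quoted above can be checked roughly against the present-day rate of day lengthening. The Python sketch below naively treats the 23 µs/yr rate as constant over about 400 million years, which is only an approximation, so close but not exact agreement with the quoted values is expected.

# Accumulate ~23 microseconds of day lengthening per year over ~400 million years
# (assumed constant for this rough check), then see how many such days fit in a year.
lengthening_per_year = 23e-6            # seconds added to the day per year
years_since_devonian = 400e6
day_then = 24 * 3600 - lengthening_per_year * years_since_devonian  # seconds
print(day_then / 3600)                  # about 21.4 hours, close to the quoted 21.8

year_seconds = 365.25 * 24 * 3600       # length of the year, essentially unchanged
print(year_seconds / day_then)          # about 409 days, in the range of the quoted 400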
Due to the efforts of influential Christian scholars and clerics such as James Ussher, who sought to determine the age of Earth through analysis of genealogies in Scripture, Westerners before the 19th century generally believed Earth to be a few thousand years old at most. It was only during the 19th century that geologists realized Earth's age was at least many millions of years. Lord Kelvin used thermodynamics to estimate the age of Earth to be between 20 million and 400 million years in 1864, sparking a vigorous debate on the subject; it was only when radioactivity and radioactive dating were discovered in the late 19th and early 20th centuries that a reliable mechanism for determining Earth's age was established, proving the planet to be billions of years old. The perception of Earth shifted again in the 20th century when humans first viewed it from orbit, and especially with photographs of Earth returned by the Apollo program. (The radius of Earth's gravitational sphere of influence, its Hill sphere, is approximately "a"("m"/3"M")^(1/3), where "m" is the mass of Earth, "a" is an astronomical unit, and "M" is the mass of the Sun; the radius in AU is therefore about 0.01.) Evolution Evolution is change in the heritable characteristics of biological populations over successive generations. Evolutionary processes give rise to biodiversity at every level of biological organisation, including the levels of species, individual organisms, and molecules. Repeated formation of new species (speciation), change within species (anagenesis), and loss of species (extinction) throughout the evolutionary history of life on Earth are demonstrated by shared sets of morphological and biochemical traits, including shared DNA sequences. These shared traits are more similar among species that share a more recent common ancestor, and can be used to reconstruct a biological "tree of life" based on evolutionary relationships (phylogenetics), using both existing species and fossils. The fossil record includes a progression from early biogenic graphite, to microbial mat fossils, to fossilised multicellular organisms. Existing patterns of biodiversity have been shaped both by speciation and by extinction. In the mid-19th century, Charles Darwin formulated the scientific theory of evolution by natural selection, published in his book "On the Origin of Species" (1859). Evolution by natural selection is a process first demonstrated by the observation that often, more offspring are produced than can possibly survive. This is followed by three observable facts about living organisms: 1) traits vary among individuals with respect to morphology, physiology, and behaviour (phenotypic variation), 2) different traits confer different rates of survival and reproduction (differential fitness), and 3) traits can be passed from generation to generation (heritability of fitness). Thus, in successive generations members of a population are replaced by progeny of parents better adapted to survive and reproduce in the biophysical environment in which natural selection takes place. This teleonomy is the quality whereby the process of natural selection creates and preserves traits that are seemingly fitted for the functional roles they perform. The processes by which the changes occur, from one generation to another, are called evolutionary processes or mechanisms. The four most widely recognised evolutionary processes are natural selection (including sexual selection), genetic drift, mutation and gene migration due to genetic admixture. Natural selection and genetic drift sort variation; mutation and gene migration create variation.
Consequences of selection can include meiotic drive (unequal transmission of certain alleles), nonrandom mating and genetic hitchhiking. In the early 20th century the modern evolutionary synthesis integrated classical genetics with Darwin's theory of evolution by natural selection through the discipline of population genetics. The importance of natural selection as a cause of evolution was accepted into other branches of biology. Moreover, previously held notions about evolution, such as orthogenesis, evolutionism, and other beliefs about innate "progress" within the largest-scale trends in evolution, became obsolete. Scientists continue to study various aspects of evolutionary biology by forming and testing hypotheses, constructing mathematical models of theoretical biology and biological theories, using observational data, and performing experiments in both the field and the laboratory. All life on Earth shares a common ancestor known as the last universal common ancestor (LUCA), which lived approximately 3.5–3.8 billion years ago. A December 2017 report stated that 3.45 billion-year-old Australian rocks once contained microorganisms, the earliest direct evidence of life on Earth. Nonetheless, these microorganisms should not be assumed to be the earliest life on Earth; a study in 2015 found "remains of biotic life" from 4.1 billion years ago in ancient rocks in Western Australia. In July 2016, scientists reported identifying a set of 355 genes from the LUCA of all organisms living on Earth. More than 99 percent of all species that ever lived on Earth are estimated to be extinct. Estimates of Earth's current species range from 10 to 14 million, of which about 1.9 million are estimated to have been named and 1.6 million documented in a central database to date. More recently, in May 2016, scientists reported that 1 trillion species are estimated to be on Earth currently, with only one-thousandth of one percent described. In terms of practical application, an understanding of evolution has been instrumental to developments in numerous scientific and industrial fields, including agriculture, human and veterinary medicine, and the life sciences in general. Discoveries in evolutionary biology have made a significant impact not just in the traditional branches of biology but also in other academic disciplines, including biological anthropology and evolutionary psychology. Evolutionary computation, a sub-field of artificial intelligence, involves the application of Darwinian principles to problems in computer science. The proposal that one type of organism could descend from another type goes back to some of the first pre-Socratic Greek philosophers, such as Anaximander and Empedocles. Such proposals survived into Roman times. The poet and philosopher Lucretius followed Empedocles in his masterwork "De rerum natura" ("On the Nature of Things"). In contrast to these materialistic views, Aristotelianism considered all natural things as actualisations of fixed natural possibilities, known as forms. This was part of a medieval teleological understanding of nature in which all things have an intended role to play in a divine cosmic order. Variations of this idea became the standard understanding of the Middle Ages and were integrated into Christian learning, but Aristotle did not demand that real types of organisms always correspond one-for-one with exact metaphysical forms and specifically gave examples of how new types of living things could come to be.
In the 17th century, the new method of modern science rejected the Aristotelian approach. It sought explanations of natural phenomena in terms of physical laws that were the same for all visible things and that did not require the existence of any fixed natural categories or divine cosmic order. However, this new approach was slow to take root in the biological sciences, the last bastion of the concept of fixed natural types. John Ray applied one of the previously more general terms for fixed natural types, "species," to plant and animal types, but he strictly identified each type of living thing as a species and proposed that each species could be defined by the features that perpetuated themselves generation after generation. The biological classification introduced by Carl Linnaeus in 1735 explicitly recognised the hierarchical nature of species relationships, but still viewed species as fixed according to a divine plan. Other naturalists of this time speculated on the evolutionary change of species over time according to natural laws. In 1751, Pierre Louis Maupertuis wrote of natural modifications occurring during reproduction and accumulating over many generations to produce new species. Georges-Louis Leclerc, Comte de Buffon suggested that species could degenerate into different organisms, and Erasmus Darwin proposed that all warm-blooded animals could have descended from a single microorganism (or "filament"). The first full-fledged evolutionary scheme was Jean-Baptiste Lamarck's "transmutation" theory of 1809, which envisaged spontaneous generation continually producing simple forms of life that developed greater complexity in parallel lineages with an inherent progressive tendency, and postulated that on a local level these lineages adapted to the environment by inheriting changes caused by their use or disuse in parents. (The latter process was later called Lamarckism.) These ideas were condemned by established naturalists as speculation lacking empirical support. In particular, Georges Cuvier insisted that species were unrelated and fixed, their similarities reflecting divine design for functional needs. In the meantime, Ray's ideas of benevolent design had been developed by William Paley into the "Natural Theology or Evidences of the Existence and Attributes of the Deity" (1802), which proposed complex adaptations as evidence of divine design and which was admired by Charles Darwin. The crucial break from the concept of constant typological classes or types in biology came with the theory of evolution through natural selection, which was formulated by Charles Darwin in terms of variable populations. Partly influenced by "An Essay on the Principle of Population" (1798) by Thomas Robert Malthus, Darwin noted that population growth would lead to a "struggle for existence" in which favorable variations prevailed as others perished. In each generation, many offspring fail to survive to an age of reproduction because of limited resources. This could explain the diversity of plants and animals from a common ancestry through the working of natural laws in the same way for all types of organism. Darwin developed his theory of "natural selection" from 1838 onwards and was writing up his "big book" on the subject when Alfred Russel Wallace sent him a version of virtually the same theory in 1858. Their separate papers were presented together at an 1858 meeting of the Linnean Society of London. 
At the end of 1859, Darwin's publication of his "abstract" as "On the Origin of Species" explained natural selection in detail and in a way that led to an increasingly wide acceptance of Darwin's concepts of evolution at the expense of alternative theories. Thomas Henry Huxley applied Darwin's ideas to humans, using paleontology and comparative anatomy to provide strong evidence that humans and apes shared a common ancestry. Some were disturbed by this since it implied that humans did not have a special place in the universe. The mechanisms of reproductive heritability and the origin of new traits remained a mystery. Towards this end, Darwin developed his provisional theory of pangenesis. In 1865, Gregor Mendel reported that traits were inherited in a predictable manner through the independent assortment and segregation of elements (later known as genes). Mendel's laws of inheritance eventually supplanted most of Darwin's pangenesis theory. August Weismann made the important distinction between germ cells that give rise to gametes (such as sperm and egg cells) and the somatic cells of the body, demonstrating that heredity passes through the germ line only. Hugo de Vries connected Darwin's pangenesis theory to Weismann's germ/soma cell distinction and proposed that Darwin's pangenes were concentrated in the cell nucleus and when expressed they could move into the cytoplasm to change the cell's structure. De Vries was also one of the researchers who made Mendel's work well-known, believing that Mendelian traits corresponded to the transfer of heritable variations along the germline. To explain how new variants originate, de Vries developed a mutation theory that led to a temporary rift between the biometricians, who defended Darwinian gradual evolution, and those who allied with de Vries's mutationism. In the 1930s, pioneers in the field of population genetics, such as Ronald Fisher, Sewall Wright and J. B. S. Haldane, set the foundations of evolution onto a robust statistical philosophy. The apparent contradiction between Darwin's theory, genetic mutations, and Mendelian inheritance was thus resolved. In the 1920s and 1930s the so-called modern synthesis connected natural selection and population genetics, based on Mendelian inheritance, into a unified theory that applied generally to any branch of biology. The modern synthesis explained patterns observed across species in populations, through fossil transitions in palaeontology, and complex cellular mechanisms in developmental biology. The publication of the structure of DNA by James Watson and Francis Crick, with the contribution of Rosalind Franklin, in 1953 demonstrated a physical mechanism for inheritance. Molecular biology improved our understanding of the relationship between genotype and phenotype. Advancements were also made in phylogenetic systematics, mapping the transition of traits into a comparative and testable framework through the publication and use of evolutionary trees. In 1973, evolutionary biologist Theodosius Dobzhansky penned that "nothing in biology makes sense except in the light of evolution," because it has brought to light the relations of what first seemed disjointed facts in natural history into a coherent explanatory body of knowledge that describes and predicts many observable facts about life on this planet. Since then, the modern synthesis has been further extended to explain biological phenomena across the full and integrative scale of the biological hierarchy, from genes to species.
One extension, known as evolutionary developmental biology and informally called "evo-devo," emphasises how changes between generations (evolution) act on patterns of change within individual organisms (development). Since the beginning of the 21st century and in light of discoveries made in recent decades, some biologists have argued for an extended evolutionary synthesis, which would account for the effects of non-genetic inheritance modes, such as epigenetics, parental effects, ecological inheritance and cultural inheritance, and evolvability. Evolution in organisms occurs through changes in heritable traits—the inherited characteristics of an organism. In humans, for example, eye colour is an inherited characteristic and an individual might inherit the "brown-eye trait" from one of their parents. Inherited traits are controlled by genes and the complete set of genes within an organism's genome (genetic material) is called its genotype. The complete set of observable traits that make up the structure and behaviour of an organism is called its phenotype. These traits come from the interaction of its genotype with the environment. As a result, many aspects of an organism's phenotype are not inherited. For example, suntanned skin comes from the interaction between a person's genotype and sunlight; thus, suntans are not passed on to people's children. However, some people tan more easily than others, due to differences in genotypic variation; a striking example is people with the inherited trait of albinism, who do not tan at all and are very sensitive to sunburn. Heritable traits are passed from one generation to the next via DNA, a molecule that encodes genetic information. DNA is a long biopolymer composed of four types of bases. The sequence of bases along a particular DNA molecule specifies the genetic information, in a manner similar to a sequence of letters spelling out a sentence. Before a cell divides, the DNA is copied, so that each of the resulting two cells will inherit the DNA sequence. Portions of a DNA molecule that specify a single functional unit are called genes; different genes have different sequences of bases. Within cells, the long strands of DNA form condensed structures called chromosomes. The specific location of a DNA sequence within a chromosome is known as a locus. If the DNA sequence at a locus varies between individuals, the different forms of this sequence are called alleles. DNA sequences can change through mutations, producing new alleles. If a mutation occurs within a gene, the new allele may affect the trait that the gene controls, altering the phenotype of the organism. However, while this simple correspondence between an allele and a trait works in some cases, most traits are more complex and are controlled by quantitative trait loci (multiple interacting genes). Recent findings have confirmed important examples of heritable changes that cannot be explained by changes to the sequence of nucleotides in the DNA. These phenomena are classed as epigenetic inheritance systems. DNA methylation marking chromatin, self-sustaining metabolic loops, gene silencing by RNA interference and the three-dimensional conformation of proteins (such as prions) are areas where epigenetic inheritance systems have been discovered at the organismic level. Developmental biologists suggest that complex interactions in genetic networks and communication among cells can lead to heritable variations that may underlie some of the mechanics in developmental plasticity and canalisation.
Heritability may also occur at even larger scales. For example, ecological inheritance through the process of niche construction is defined by the regular and repeated activities of organisms in their environment. This generates a legacy of effects that modify and feed back into the selection regime of subsequent generations. Descendants inherit genes plus environmental characteristics generated by the ecological actions of ancestors. Other examples of heritability in evolution that are not under the direct control of genes include the inheritance of cultural traits and symbiogenesis. An individual organism's phenotype results from both its genotype and the influence from the environment it has lived in. A substantial part of the phenotypic variation in a population is caused by genotypic variation. The modern evolutionary synthesis defines evolution as the change over time in this genetic variation. The frequency of a particular allele will rise or fall relative to other forms of that gene. Variation disappears when a new allele reaches the point of fixation—when it either disappears from the population or replaces the ancestral allele entirely. Natural selection will only cause evolution if there is enough genetic variation in a population. Before the discovery of Mendelian genetics, one common hypothesis was blending inheritance. But with blending inheritance, genetic variance would be rapidly lost, making evolution by natural selection implausible. The Hardy–Weinberg principle provides the solution to how variation is maintained in a population with Mendelian inheritance. The frequencies of alleles (variations in a gene) will remain constant in the absence of selection, mutation, migration and genetic drift. Variation comes from mutations in the genome, reshuffling of genes through sexual reproduction and migration between populations (gene flow). Despite the constant introduction of new variation through mutation and gene flow, most of the genome of a species is identical in all individuals of that species. However, even relatively small differences in genotype can lead to dramatic differences in phenotype: for example, chimpanzees and humans differ in only about 5% of their genomes. Mutations are changes in the DNA sequence of a cell's genome. When mutations occur, they may alter the product of a gene, or prevent the gene from functioning, or have no effect. Based on studies in the fly "Drosophila melanogaster", it has been suggested that if a mutation changes a protein produced by a gene, this will probably be harmful, with about 70% of these mutations having damaging effects, and the remainder being either neutral or weakly beneficial. Mutations can involve large sections of a chromosome becoming duplicated (usually by genetic recombination), which can introduce extra copies of a gene into a genome. Extra copies of genes are a major source of the raw material needed for new genes to evolve. This is important because most new genes evolve within gene families from pre-existing genes that share common ancestors. For example, the human eye uses four genes to make structures that sense light: three for colour vision and one for night vision; all four are descended from a single ancestral gene. New genes can be generated from an ancestral gene when a duplicate copy mutates and acquires a new function.
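The Hardy–Weinberg principle mentioned above can be made concrete for a single locus with two alleles: if the allele frequencies are p and q = 1 − p, random mating yields genotype frequencies p², 2pq and q², and recomputing the allele frequency from those genotypes returns exactly p. The minimal Python sketch below uses an arbitrary illustrative starting frequency; the point is only that, with no selection, mutation, migration or drift, the frequency does not move.

# One locus, two alleles A and a, with frequencies p and q = 1 - p.
p = 0.3  # arbitrary illustrative starting frequency of allele A
for generation in range(1, 6):
    q = 1 - p
    # Random mating gives Hardy-Weinberg genotype proportions p^2, 2pq and q^2.
    freq_AA, freq_Aa, freq_aa = p * p, 2 * p * q, q * q
    # Count the allele frequency back from the genotype frequencies.
    p = freq_AA + freq_Aa / 2
    print(f"generation {generation}: p = {p:.3f}, genotype aa = {freq_aa:.3f}")
# p stays at 0.300 every generation: with no selection, mutation, migration or
# drift, allele (and genotype) frequencies do not change.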
The generation of new genes from ancestral ones is easier once a gene has been duplicated, because duplication increases the redundancy of the system; one gene in the pair can acquire a new function while the other copy continues to perform its original function. Other types of mutations can even generate entirely new genes from previously noncoding DNA. The generation of new genes can also involve small parts of several genes being duplicated, with these fragments then recombining to form new combinations with new functions. When new genes are assembled from shuffling pre-existing parts, domains act as modules with simple independent functions, which can be mixed together to produce new combinations with new and complex functions. For example, polyketide synthases are large enzymes that make antibiotics; they contain up to one hundred independent domains that each catalyse one step in the overall process, like a step in an assembly line. In asexual organisms, genes are inherited together, or "linked", as they cannot mix with genes of other organisms during reproduction. In contrast, the offspring of sexual organisms contain random mixtures of their parents' chromosomes that are produced through independent assortment. In a related process called homologous recombination, sexual organisms exchange DNA between two matching chromosomes. Recombination and reassortment do not alter allele frequencies, but instead change which alleles are associated with each other, producing offspring with new combinations of alleles. Sex usually increases genetic variation and may increase the rate of evolution. The two-fold cost of sex was first described by John Maynard Smith. The first cost is that in sexually dimorphic species only one of the two sexes can bear young. (This cost does not apply to hermaphroditic species, like most plants and many invertebrates.) The second cost is that any individual who reproduces sexually can only pass on 50% of its genes to any individual offspring, with even less passed on as each new generation passes. Yet sexual reproduction is the more common means of reproduction among eukaryotes and multicellular organisms. The Red Queen hypothesis has been used to explain the significance of sexual reproduction as a means to enable continual evolution and adaptation in response to coevolution with other species in an ever-changing environment. Gene flow is the exchange of genes between populations and between species. It can therefore be a source of variation that is new to a population or to a species. Gene flow can be caused by the movement of individuals between separate populations of organisms, as might be caused by the movement of mice between inland and coastal populations, or the movement of pollen between heavy metal tolerant and heavy metal sensitive populations of grasses. Gene transfer between species includes the formation of hybrid organisms and horizontal gene transfer. Horizontal gene transfer is the transfer of genetic material from one organism to another organism that is not its offspring; this is most common among bacteria. In medicine, this contributes to the spread of antibiotic resistance, as a bacterium that acquires resistance genes can rapidly transfer them to other species. Horizontal transfer of genes from bacteria to eukaryotes such as the yeast "Saccharomyces cerevisiae" and the adzuki bean weevil "Callosobruchus chinensis" has occurred. An example of larger-scale transfer is the eukaryotic bdelloid rotifers, which have received a range of genes from bacteria, fungi and plants.
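The two-fold cost of sex described above has a simple numerical core: a sexual female "spends" half of her offspring on sons, so the number of daughters, and hence of future mothers, grows roughly half as fast per generation as in an otherwise identical asexual lineage. The Python sketch below is a toy illustration; the brood size and generation count are arbitrary assumptions, not figures from the text.

# Toy comparison of an asexual (all-daughter) lineage with a sexual lineage in
# which only half of each brood are daughters; everything else is held equal.
brood_size = 4        # assumed offspring per female, purely illustrative
generations = 5

asexual_females = 1.0
sexual_females = 1.0
for _ in range(generations):
    asexual_females *= brood_size      # every offspring is a reproducing daughter
    sexual_females *= brood_size / 2   # only half of the offspring are daughters

print(asexual_females, sexual_females)  # 1024.0 vs 32.0: the per-generation cost compounds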
Viruses can also carry DNA between organisms, allowing transfer of genes even across biological domains. Large-scale gene transfer has also occurred between the ancestors of eukaryotic cells and bacteria, during the acquisition of chloroplasts and mitochondria. It is possible that eukaryotes themselves originated from horizontal gene transfers between bacteria and archaea. From a Neo-Darwinian perspective, evolution occurs when there are changes in the frequencies of alleles within a population of interbreeding organisms. An example is the allele for black colour in a population of moths becoming more common. Mechanisms that can lead to changes in allele frequencies include natural selection, genetic drift, genetic hitchhiking, mutation and gene flow. Evolution by means of natural selection is the process by which traits that enhance survival and reproduction become more common in successive generations of a population. It has often been called a "self-evident" mechanism because it necessarily follows from three simple facts: more offspring are produced than can possibly survive; traits vary among individuals, leading to differential rates of survival and reproduction; and these trait differences are heritable. Together, these conditions produce competition between organisms for survival and reproduction. Consequently, organisms with traits that give them an advantage over their competitors are more likely to pass on their traits to the next generation than those with traits that do not confer an advantage. The central concept of natural selection is the evolutionary fitness of an organism. Fitness is measured by an organism's ability to survive and reproduce, which determines the size of its genetic contribution to the next generation. However, fitness is not the same as the total number of offspring: instead fitness is indicated by the proportion of subsequent generations that carry an organism's genes. For example, if an organism could survive well and reproduce rapidly, but its offspring were all too small and weak to survive, this organism would make little genetic contribution to future generations and would thus have low fitness. If an allele increases fitness more than the other alleles of that gene, then with each generation this allele will become more common within the population. These traits are said to be "selected for". Examples of traits that can increase fitness are enhanced survival and increased fecundity. Conversely, the lower fitness caused by having a less beneficial or deleterious allele results in this allele becoming rarer—it is "selected against". Importantly, the fitness of an allele is not a fixed characteristic; if the environment changes, previously neutral or harmful traits may become beneficial and previously beneficial traits become harmful. However, even if the direction of selection does reverse in this way, traits that were lost in the past may not re-evolve in an identical form (see Dollo's law). However, a re-activation of dormant genes, as long as they have not been eliminated from the genome and were only suppressed perhaps for hundreds of generations, can lead to the re-occurrence of traits thought to be lost, such as hind legs in dolphins, teeth in chickens, wings in wingless stick insects, and tails and additional nipples in humans. "Throwbacks" such as these are known as atavisms. Natural selection within a population for a trait that can vary across a range of values, such as height, can be categorised into three different types. The first is directional selection, which is a shift in the average value of a trait over time—for example, organisms slowly getting taller.
Secondly, disruptive selection is selection for extreme trait values and often results in two different values becoming most common, with selection against the average value. This would be when either short or tall organisms had an advantage, but not those of medium height. Finally, in stabilising selection there is selection against extreme trait values on both ends, which causes a decrease in variance around the average value and less diversity. This would, for example, cause organisms to eventually have a similar height. A special case of natural selection is sexual selection, which is selection for any trait that increases mating success by increasing the attractiveness of an organism to potential mates. Traits that evolved through sexual selection are particularly prominent among males of several animal species. Although sexually favoured, traits such as cumbersome antlers, mating calls, large body size and bright colours often attract predation, which compromises the survival of individual males. This survival disadvantage is balanced by higher reproductive success in males that show these hard-to-fake, sexually selected traits. Natural selection most generally makes nature the measure against which individuals and individual traits are more or less likely to survive. "Nature" in this sense refers to an ecosystem, that is, a system in which organisms interact with every other element, physical as well as biological, in their local environment. Eugene Odum, a founder of ecology, defined an ecosystem as: "Any unit that includes all of the organisms...in a given area interacting with the physical environment so that a flow of energy leads to clearly defined trophic structure, biotic diversity, and material cycles (i.e., exchange of materials between living and nonliving parts) within the system..." Each population within an ecosystem occupies a distinct niche, or position, with distinct relationships to other parts of the system. These relationships involve the life history of the organism, its position in the food chain and its geographic range. This broad understanding of nature enables scientists to delineate specific forces which, together, comprise natural selection. Natural selection can act at different levels of organisation, such as genes, cells, individual organisms, groups of organisms and species. Selection can act at multiple levels simultaneously. An example of selection occurring below the level of the individual organism is provided by genes called transposons, which can replicate and spread throughout a genome. Selection at a level above the individual, such as group selection, may allow the evolution of cooperation, as discussed below. In addition to being a major source of variation, mutation may also function as a mechanism of evolution when there are different probabilities at the molecular level for different mutations to occur, a process known as mutation bias. If two genotypes, for example one with the nucleotide G and another with the nucleotide A in the same position, have the same fitness, but mutation from G to A happens more often than mutation from A to G, then genotypes with A will tend to evolve. Different insertion vs. deletion mutation biases in different taxa can lead to the evolution of different genome sizes. Developmental or mutational biases have also been observed in morphological evolution.
For example, according to the phenotype-first theory of evolution, mutations can eventually cause the genetic assimilation of traits that were previously induced by the environment. Mutation bias effects are superimposed on other processes. If selection favors one of two mutations, but there is no extra advantage to having both, then the mutation that occurs the most frequently is the one that is most likely to become fixed in a population. Mutations leading to the loss of function of a gene are much more common than mutations that produce a new, fully functional gene. Most loss of function mutations are selected against. But when selection is weak, mutation bias towards loss of function can affect evolution. For example, pigments are no longer useful when animals live in the darkness of caves, and tend to be lost. This kind of loss of function can occur because of mutation bias, and/or because the function had a cost, and once the benefit of the function disappeared, natural selection leads to the loss. Loss of sporulation ability in "Bacillus subtilis" during laboratory evolution appears to have been caused by mutation bias, rather than natural selection against the cost of maintaining sporulation ability. When there is no selection for loss of function, the speed at which loss evolves depends more on the mutation rate than it does on the effective population size, indicating that it is driven more by mutation bias than by genetic drift. In parasitic organisms, mutation bias leads to selection pressures as seen in "Ehrlichia". Mutations are biased towards antigenic variants in outer-membrane proteins. Genetic drift is the change in allele frequency from one generation to the next that occurs because alleles are subject to sampling error. As a result, when selective forces are absent or relatively weak, allele frequencies tend to "drift" upward or downward randomly (in a random walk). This drift halts when an allele eventually becomes fixed, either by disappearing from the population, or replacing the other alleles entirely. Genetic drift may therefore eliminate some alleles from a population due to chance alone. Even in the absence of selective forces, genetic drift can cause two separate populations that began with the same genetic structure to drift apart into two divergent populations with different sets of alleles. It is usually difficult to measure the relative importance of selection and neutral processes, including drift. The comparative importance of adaptive and non-adaptive forces in driving evolutionary change is an area of current research. The neutral theory of molecular evolution proposed that most evolutionary changes are the result of the fixation of neutral mutations by genetic drift. Hence, in this model, most genetic changes in a population are the result of constant mutation pressure and genetic drift. This form of the neutral theory is now largely abandoned, since it does not seem to fit the genetic variation seen in nature. However, a more recent and better-supported version of this model is the nearly neutral theory, where a mutation that would be effectively neutral in a small population is not necessarily neutral in a large population. Other alternative theories propose that genetic drift is dwarfed by other stochastic forces in evolution, such as genetic hitchhiking, also known as genetic draft. The time for a neutral allele to become fixed by genetic drift depends on population size, with fixation occurring more rapidly in smaller populations.
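The contrast between selection and drift described above can be made concrete with a small simulation. The Python sketch below follows an allele in a finite population using binomial sampling each generation (a Wright–Fisher-style model); with the selection coefficient set to zero the frequency wanders and may fix or be lost by chance alone, and with a positive coefficient it is pushed upwards. The population size, selection coefficient and starting frequency are illustrative assumptions, not values from the text.

import random

def simulate(population_size, selection, start_frequency=0.2, generations=1000, seed=2):
    """Track one allele's frequency: a deterministic selection step reweights it,
    then binomial sampling of a finite population adds random drift."""
    rng = random.Random(seed)
    p = start_frequency
    for _ in range(generations):
        p = p * (1 + selection) / (p * (1 + selection) + (1 - p))  # selection step
        count = sum(1 for _ in range(population_size) if rng.random() < p)  # drift step
        p = count / population_size
        if p in (0.0, 1.0):  # allele lost or fixed
            break
    return p

print("drift only     :", simulate(population_size=100, selection=0.0))
print("with selection :", simulate(population_size=100, selection=0.05))
# The drift-only frequency wanders and may be fixed or lost purely by chance;
# with a 5% advantage the allele is far more likely to rise toward fixation,
# and drift matters less in larger populations.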
The number of individuals in a population is not the critical factor; rather, it is a measure known as the effective population size. The effective population is usually smaller than the total population since it takes into account factors such as the level of inbreeding and the stage of the lifecycle in which the population is the smallest. The effective population size may not be the same for every gene in the same population. Recombination allows alleles on the same strand of DNA to become separated. However, the rate of recombination is low (approximately two events per chromosome per generation). As a result, genes close together on a chromosome may not always be shuffled away from each other and genes that are close together tend to be inherited together, a phenomenon known as linkage. This tendency is measured by finding how often two alleles occur together on a single chromosome compared to expectations, which is called their linkage disequilibrium. A set of alleles that is usually inherited in a group is called a haplotype. This can be important when one allele in a particular haplotype is strongly beneficial: natural selection can drive a selective sweep that will also cause the other alleles in the haplotype to become more common in the population; this effect is called genetic hitchhiking or genetic draft. Genetic draft caused by the fact that some neutral genes are genetically linked to others that are under selection can be partially captured by an appropriate effective population size. Gene flow involves the exchange of genes between populations and between species. The presence or absence of gene flow fundamentally changes the course of evolution. Due to the complexity of organisms, any two completely isolated populations will eventually evolve genetic incompatibilities through neutral processes, as in the Bateson–Dobzhansky–Muller model, even if both populations remain essentially identical in terms of their adaptation to the environment. If genetic differentiation between populations develops, gene flow between populations can introduce traits or alleles which are disadvantageous in the local population and this may lead to organisms within these populations evolving mechanisms that prevent mating with genetically distant populations, eventually resulting in the appearance of new species. Thus, exchange of genetic information between individuals is fundamentally important for the development of the "Biological Species Concept" (BSC). During the development of the modern synthesis, Sewall Wright developed his shifting balance theory, which regarded gene flow between partially isolated populations as an important aspect of adaptive evolution. However, recently there has been substantial criticism of the importance of the shifting balance theory. Evolution influences every aspect of the form and behaviour of organisms. Most prominent are the specific behavioural and physical adaptations that are the outcome of natural selection. These adaptations increase fitness by aiding activities such as finding food, avoiding predators or attracting mates. Organisms can also respond to selection by cooperating with each other, usually by aiding their relatives or engaging in mutually beneficial symbiosis. In the longer term, evolution produces new species through splitting ancestral populations of organisms into new groups that cannot or will not interbreed. These outcomes of evolution are distinguished based on time scale as macroevolution versus microevolution.
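Linkage disequilibrium, mentioned above, has a compact definition: D = p(AB) − p(A)·p(B), the difference between how often two alleles are actually found together on one chromosome and how often they would co-occur if they were combined at random. A minimal Python sketch with made-up haplotype counts, used purely for illustration:

# Made-up counts of the four possible two-locus haplotypes in a sample of chromosomes.
counts = {"AB": 50, "Ab": 10, "aB": 10, "ab": 30}
total = sum(counts.values())

p_AB = counts["AB"] / total                  # frequency of the A-B haplotype
p_A = (counts["AB"] + counts["Ab"]) / total  # frequency of allele A at the first locus
p_B = (counts["AB"] + counts["aB"]) / total  # frequency of allele B at the second locus

D = p_AB - p_A * p_B  # linkage disequilibrium; 0 would mean the alleles associate at random
print(f"p(A) = {p_A:.2f}, p(B) = {p_B:.2f}, p(AB) = {p_AB:.2f}, D = {D:.2f}")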
Macroevolution refers to evolution that occurs at or above the level of species, in particular speciation and extinction, whereas microevolution refers to smaller evolutionary changes within a species or population, in particular shifts in allele frequency and adaptation. In general, macroevolution is regarded as the outcome of long periods of microevolution. Thus, the distinction between micro- and macroevolution is not a fundamental one—the difference is simply the time involved. However, in macroevolution, the traits of the entire species may be important. For instance, a large amount of variation among individuals allows a species to rapidly adapt to new habitats, lessening the chance of it going extinct, while a wide geographic range increases the chance of speciation, by making it more likely that part of the population will become isolated. In this sense, microevolution and macroevolution might involve selection at different levels—with microevolution acting on genes and organisms, versus macroevolutionary processes such as species selection acting on entire species and affecting their rates of speciation and extinction. A common misconception is that evolution has goals, long-term plans, or an innate tendency for "progress", as expressed in beliefs such as orthogenesis and evolutionism; in reality, however, evolution has no long-term goal and does not necessarily produce greater complexity. Although complex species have evolved, they occur as a side effect of the overall number of organisms increasing, and simple forms of life still remain more common in the biosphere. For example, the overwhelming majority of species are microscopic prokaryotes, which form about half the world's biomass despite their small size, and constitute the vast majority of Earth's biodiversity. Simple organisms have therefore been the dominant form of life on Earth throughout its history and continue to be the main form of life up to the present day, with complex life only appearing more diverse because it is more noticeable. Indeed, the evolution of microorganisms is particularly important to modern evolutionary research, since their rapid reproduction allows the study of experimental evolution and the observation of evolution and adaptation in real time. Adaptation is the process that makes organisms better suited to their habitat. The term adaptation may also refer to a trait that is important for an organism's survival, for example the adaptation of horses' teeth to the grinding of grass. By using the term "adaptation" for the evolutionary process and "adaptive trait" for the product (the bodily part or function), the two senses of the word may be distinguished. Adaptations are produced by natural selection. Theodosius Dobzhansky distinguished between adaptation as a process, adaptedness as the resulting state of being adapted, and the adaptive trait as the feature that enables or enhances an organism's chances of surviving and reproducing. Adaptation may cause either the gain of a new feature, or the loss of an ancestral feature. An example that shows both types of change is bacterial adaptation to antibiotic selection, with genetic changes causing antibiotic resistance either by modifying the target of the drug or by increasing the activity of transporters that pump the drug out of the cell.
Other striking examples are the bacteria "Escherichia coli" evolving the ability to use citric acid as a nutrient in a long-term laboratory experiment, "Flavobacterium" evolving a novel enzyme that allows these bacteria to grow on the by-products of nylon manufacturing, and the soil bacterium "Sphingobium" evolving an entirely new metabolic pathway that degrades the synthetic pesticide pentachlorophenol. An interesting but still controversial idea is that some adaptations might increase the ability of organisms to generate genetic diversity and adapt by natural selection (increasing organisms' evolvability). Adaptation occurs through the gradual modification of existing structures. Consequently, structures with similar internal organisation may have different functions in related organisms. This is the result of a single ancestral structure being adapted to function in different ways. The bones within bat wings, for example, are very similar to those in mouse feet and primate hands, due to the descent of all these structures from a common mammalian ancestor. However, since all living organisms are related to some extent, even organs that appear to have little or no structural similarity, such as arthropod, squid and vertebrate eyes, or the limbs and wings of arthropods and vertebrates, can depend on a common set of homologous genes that control their assembly and function; this is called deep homology. During evolution, some structures may lose their original function and become vestigial structures. Such structures may have little or no function in a current species, yet have a clear function in ancestral species, or other closely related species. Examples include pseudogenes, the non-functional remains of eyes in blind cave-dwelling fish, wings in flightless birds, the presence of hip bones in whales and snakes, and sexual traits in organisms that reproduce via asexual reproduction. Examples of vestigial structures in humans include wisdom teeth, the coccyx, the vermiform appendix, and behavioural vestiges such as goose bumps and primitive reflexes. However, many traits that appear to be simple adaptations are in fact exaptations: structures originally adapted for one function, but which coincidentally became somewhat useful for some other function in the process. One example is the African lizard "Holaspis guentheri", which developed an extremely flat head for hiding in crevices, as can be seen by looking at its near relatives. However, in this species, the head has become so flattened that it assists in gliding from tree to tree—an exaptation. Within cells, molecular machines such as the bacterial flagella and protein sorting machinery evolved by the recruitment of several pre-existing proteins that previously had different functions. Another example is the recruitment of enzymes from glycolysis and xenobiotic metabolism to serve as structural proteins called crystallins within the lenses of organisms' eyes. An area of current investigation in evolutionary developmental biology is the developmental basis of adaptations and exaptations. This research addresses the origin and evolution of embryonic development and how modifications of development and developmental processes produce novel features. These studies have shown that evolution can alter development to produce new structures, such as the embryonic bone structures that develop into the jaw in other animals but instead form part of the middle ear in mammals.
It is also possible for structures that have been lost in evolution to reappear due to changes in developmental genes, such as a mutation in chickens causing embryos to grow teeth similar to those of crocodiles. It is now becoming clear that most alterations in the form of organisms are due to changes in a small set of conserved genes. Interactions between organisms can produce both conflict and cooperation. When the interaction is between pairs of species, such as a pathogen and a host, or a predator and its prey, these species can develop matched sets of adaptations. Here, the evolution of one species causes adaptations in a second species. These changes in the second species then, in turn, cause new adaptations in the first species. This cycle of selection and response is called coevolution. An example is the production of tetrodotoxin in the rough-skinned newt and the evolution of tetrodotoxin resistance in its predator, the common garter snake. In this predator-prey pair, an evolutionary arms race has produced high levels of toxin in the newt and correspondingly high levels of toxin resistance in the snake. Not all co-evolved interactions between species involve conflict. Many cases of mutually beneficial interactions have evolved. For instance, an extreme form of cooperation exists between plants and the mycorrhizal fungi that grow on their roots and aid the plant in absorbing nutrients from the soil. This is a reciprocal relationship as the plants provide the fungi with sugars from photosynthesis. Here, the fungi actually grow inside plant cells, allowing them to exchange nutrients with their hosts, while sending signals that suppress the plant immune system. Coalitions between organisms of the same species have also evolved. An extreme case is the eusociality found in social insects, such as bees, termites and ants, where sterile insects feed and guard the small number of organisms in a colony that are able to reproduce. On an even smaller scale, the somatic cells that make up the body of an animal limit their reproduction so they can maintain a stable organism, which then supports a small number of the animal's germ cells to produce offspring. Here, somatic cells respond to specific signals that instruct them whether to grow, remain as they are, or die. If cells ignore these signals and multiply inappropriately, their uncontrolled growth causes cancer. Such cooperation within species may have evolved through the process of kin selection, which is where one organism acts to help raise a relative's offspring. This activity is selected for because if the "helping" individual contains alleles which promote the helping activity, it is likely that its kin will also contain these alleles and thus those alleles will be passed on. Other processes that may promote cooperation include group selection, where cooperation provides benefits to a group of organisms. Speciation is the process where a species diverges into two or more descendant species. There are multiple ways to define the concept of "species." The choice of definition is dependent on the particularities of the species concerned. For example, some species concepts apply more readily toward sexually reproducing organisms while others lend themselves better toward asexual organisms. Despite the diversity of species concepts, they can be placed into one of three broad philosophical approaches: interbreeding, ecological and phylogenetic.
The "Biological Species Concept" (BSC) is a classic example of the interbreeding approach. Defined by evolutionary biologist Ernst Mayr in 1942, the BSC states that "species are groups of actually or potentially interbreeding natural populations, which are reproductively isolated from other such groups." Despite its wide and long-term use, the BSC like others is not without controversy, for example because these concepts cannot be applied to prokaryotes, and this is called the species problem. Some researchers have attempted a unifying monistic definition of species, while others adopt a pluralistic approach and suggest that there may be different ways to logically interpret the definition of a species. Barriers to reproduction between two diverging sexual populations are required for the populations to become new species. Gene flow may slow this process by spreading the new genetic variants also to the other populations. Depending on how far two species have diverged since their most recent common ancestor, it may still be possible for them to produce offspring, as with horses and donkeys mating to produce mules. Such hybrids are generally infertile. In this case, closely related species may regularly interbreed, but hybrids will be selected against and the species will remain distinct. However, viable hybrids are occasionally formed and these new species can either have properties intermediate between their parent species, or possess a totally new phenotype. The importance of hybridisation in producing new species of animals is unclear, although cases have been seen in many types of animals, with the gray tree frog being a particularly well-studied example. Speciation has been observed multiple times under both controlled laboratory conditions (see laboratory experiments of speciation) and in nature. In sexually reproducing organisms, speciation results from reproductive isolation followed by genealogical divergence. There are four primary geographic modes of speciation. The most common in animals is allopatric speciation, which occurs in populations initially isolated geographically, such as by habitat fragmentation or migration. Selection under these conditions can produce very rapid changes in the appearance and behaviour of organisms. As selection and drift act independently on populations isolated from the rest of their species, separation may eventually produce organisms that cannot interbreed. The second mode of speciation is peripatric speciation, which occurs when small populations of organisms become isolated in a new environment. This differs from allopatric speciation in that the isolated populations are numerically much smaller than the parental population. Here, the founder effect causes rapid speciation after an increase in inbreeding increases selection on homozygotes, leading to rapid genetic change. The third mode is parapatric speciation. This is similar to peripatric speciation in that a small population enters a new habitat, but differs in that there is no physical separation between these two populations. Instead, speciation results from the evolution of mechanisms that reduce gene flow between the two populations. Generally this occurs when there has been a drastic change in the environment within the parental species' habitat. One example is the grass "Anthoxanthum odoratum", which can undergo parapatric speciation in response to localised metal pollution from mines. Here, plants evolve that have resistance to high levels of metals in the soil. 
Selection against interbreeding with the metal-sensitive parental population produced a gradual change in the flowering time of the metal-resistant plants, which eventually resulted in complete reproductive isolation. Selection against hybrids between the two populations may cause reinforcement, which is the evolution of traits that promote mating within a species, as well as character displacement, which is when two species become more distinct in appearance. Finally, in sympatric speciation, species diverge without geographic isolation or changes in habitat. This form is rare since even a small amount of gene flow may remove genetic differences between parts of a population. Generally, sympatric speciation in animals requires the evolution of both genetic differences and nonrandom mating to allow reproductive isolation to evolve. One type of sympatric speciation involves crossbreeding of two related species to produce a new hybrid species. This is not common in animals as animal hybrids are usually sterile. This is because during meiosis the homologous chromosomes from each parent are from different species and cannot successfully pair. However, it is more common in plants because plants often double their number of chromosomes to form polyploids. This allows the chromosomes from each parental species to form matching pairs during meiosis, since each parent's chromosomes are represented by a pair already. An example of such a speciation event is when the plant species "Arabidopsis thaliana" and "Arabidopsis arenosa" crossbred to give the new species "Arabidopsis suecica". This happened about 20,000 years ago, and the speciation process has been repeated in the laboratory, which allows the study of the genetic mechanisms involved in this process. Indeed, chromosome doubling within a species may be a common cause of reproductive isolation, as half the doubled chromosomes will be unmatched when breeding with undoubled organisms. Speciation events are important in the theory of punctuated equilibrium, which accounts for the pattern in the fossil record of short "bursts" of evolution interspersed with relatively long periods of stasis, where species remain relatively unchanged. In this theory, speciation and rapid evolution are linked, with natural selection and genetic drift acting most strongly on organisms undergoing speciation in novel habitats or small populations. As a result, the periods of stasis in the fossil record correspond to the parental population, while the organisms undergoing speciation and rapid evolution are found in small populations or geographically restricted habitats and are therefore rarely preserved as fossils. Extinction is the disappearance of an entire species. Extinction is not an unusual event, as species regularly appear through speciation and disappear through extinction. Nearly all animal and plant species that have lived on Earth are now extinct, and extinction appears to be the ultimate fate of all species. These extinctions have happened continuously throughout the history of life, although the rate of extinction spikes in occasional mass extinction events. The Cretaceous–Paleogene extinction event, during which the non-avian dinosaurs became extinct, is the most well-known, but the earlier Permian–Triassic extinction event was even more severe, with approximately 96% of all marine species driven to extinction. The Holocene extinction event is an ongoing mass extinction associated with humanity's expansion across the globe over the past few thousand years.
Present-day extinction rates are 100–1000 times greater than the background rate and up to 30% of current species may be extinct by the mid 21st century. Human activities are now the primary cause of the ongoing extinction event; global warming may further accelerate it in the future. The role of extinction in evolution is not very well understood and may depend on which type of extinction is considered. The causes of the continuous "low-level" extinction events, which form the majority of extinctions, may be the result of competition between species for limited resources (the competitive exclusion principle). If one species can out-compete another, this could produce species selection, with the fitter species surviving and the other species being driven to extinction. The intermittent mass extinctions are also important, but instead of acting as a selective force, they drastically reduce diversity in a nonspecific manner and promote bursts of rapid evolution and speciation in survivors. The Earth is about 4.54 billion years old. The earliest undisputed evidence of life on Earth dates from at least 3.5 billion years ago, during the Eoarchean Era after a geological crust started to solidify following the earlier molten Hadean Eon. Microbial mat fossils have been found in 3.48 billion-year-old sandstone in Western Australia. Other early physical evidence of a biogenic substance is graphite in 3.7 billion-year-old metasedimentary rocks discovered in Western Greenland as well as "remains of biotic life" found in 4.1 billion-year-old rocks in Western Australia. Commenting on the Australian findings, Stephen Blair Hedges wrote, "If life arose relatively quickly on Earth, then it could be common in the universe." More than 99 percent of all species, amounting to over five billion species, that ever lived on Earth are estimated to be extinct. Estimates on the number of Earth's current species range from 10 million to 14 million, of which about 1.9 million are estimated to have been named and 1.6 million documented in a central database to date, leaving at least 80 percent not yet described. Highly energetic chemistry is thought to have produced a self-replicating molecule around 4 billion years ago, and half a billion years later the last common ancestor of all life existed. The current scientific consensus is that the complex biochemistry that makes up life came from simpler chemical reactions. The beginning of life may have included self-replicating molecules such as RNA and the assembly of simple cells. All organisms on Earth are descended from a common ancestor or ancestral gene pool. Current species are a stage in the process of evolution, with their diversity the product of a long series of speciation and extinction events. The common descent of organisms was first deduced from four simple facts about organisms: First, they have geographic distributions that cannot be explained by local adaptation. Second, the diversity of life is not a set of completely unique organisms, but organisms that share morphological similarities. Third, vestigial traits with no clear purpose resemble functional ancestral traits and finally, that organisms can be classified using these similarities into a hierarchy of nested groups—similar to a family tree. However, modern research has suggested that, due to horizontal gene transfer, this "tree of life" may be more complicated than a simple branching tree since some genes have spread independently between distantly related species. 
Past species have also left records of their evolutionary history. Fossils, along with the comparative anatomy of present-day organisms, constitute the morphological, or anatomical, record. By comparing the anatomies of both modern and extinct species, paleontologists can infer the lineages of those species. However, this approach is most successful for organisms that had hard body parts, such as shells, bones or teeth. Further, as prokaryotes such as bacteria and archaea share a limited set of common morphologies, their fossils do not provide information on their ancestry. More recently, evidence for common descent has come from the study of biochemical similarities between organisms. For example, all living cells use the same basic set of nucleotides and amino acids. The development of molecular genetics has revealed the record of evolution left in organisms' genomes: dating when species diverged through the molecular clock produced by mutations. For example, these DNA sequence comparisons have revealed that humans and chimpanzees share 98% of their genomes and analysing the few areas where they differ helps shed light on when the common ancestor of these species existed. Prokaryotes inhabited the Earth from approximately 3–4 billion years ago. No obvious changes in morphology or cellular organisation occurred in these organisms over the next few billion years. The eukaryotic cells emerged between 1.6–2.7 billion years ago. The next major change in cell structure came when bacteria were engulfed by eukaryotic cells, in a cooperative association called endosymbiosis. The engulfed bacteria and the host cell then underwent coevolution, with the bacteria evolving into either mitochondria or hydrogenosomes. Another engulfment of cyanobacterial-like organisms led to the formation of chloroplasts in algae and plants. The history of life was that of the unicellular eukaryotes, prokaryotes and archaea until about 610 million years ago when multicellular organisms began to appear in the oceans in the Ediacaran period. The evolution of multicellularity occurred in multiple independent events, in organisms as diverse as sponges, brown algae, cyanobacteria, slime moulds and myxobacteria. In January 2016, scientists reported that, about 800 million years ago, a minor genetic change in a single molecule called GK-PID may have allowed organisms to go from a single cell organism to one of many cells. Soon after the emergence of these first multicellular organisms, a remarkable amount of biological diversity appeared over approximately 10 million years, in an event called the Cambrian explosion. Here, the majority of types of modern animals appeared in the fossil record, as well as unique lineages that subsequently became extinct. Various triggers for the Cambrian explosion have been proposed, including the accumulation of oxygen in the atmosphere from photosynthesis. About 500 million years ago, plants and fungi colonised the land and were soon followed by arthropods and other animals. Insects were particularly successful and even today make up the majority of animal species. Amphibians first appeared around 364 million years ago, followed by early amniotes and birds around 155 million years ago (both from "reptile"-like lineages), mammals around 129 million years ago, homininae around 10 million years ago and modern humans around 250,000 years ago. 
However, despite the evolution of these large animals, smaller organisms similar to the types that evolved early in this process continue to be highly successful and dominate the Earth, with the majority of both biomass and species being prokaryotes. Concepts and models used in evolutionary biology, such as natural selection, have many applications. Artificial selection is the intentional selection of traits in a population of organisms. This has been used for thousands of years in the domestication of plants and animals. More recently, such selection has become a vital part of genetic engineering, with selectable markers such as antibiotic resistance genes being used to manipulate DNA. Proteins with valuable properties have evolved by repeated rounds of mutation and selection (for example modified enzymes and new antibodies) in a process called directed evolution. Understanding the changes that have occurred during an organism's evolution can reveal the genes needed to construct parts of the body, genes which may be involved in human genetic disorders. For example, the Mexican tetra is an albino cavefish that lost its eyesight during evolution. Breeding together different populations of this blind fish produced some offspring with functional eyes, since different mutations had occurred in the isolated populations that had evolved in different caves. This helped identify genes required for vision and pigmentation. Many human diseases are not static phenomena, but capable of evolution. Viruses, bacteria, fungi and cancers evolve to be resistant to host immune defences, as well as pharmaceutical drugs. These same problems occur in agriculture with pesticide and herbicide resistance. It is possible that we are facing the end of the effective life of most available antibiotics; predicting the evolution and evolvability of our pathogens, and devising strategies to slow or circumvent it, requires deeper knowledge of the complex forces driving evolution at the molecular level. In computer science, simulations of evolution using evolutionary algorithms and artificial life started in the 1960s and were extended with simulation of artificial selection. Artificial evolution became a widely recognised optimisation method as a result of the work of Ingo Rechenberg in the 1960s. He used evolution strategies to solve complex engineering problems. Genetic algorithms in particular became popular through the writing of John Henry Holland. Practical applications also include automatic evolution of computer programmes. Evolutionary algorithms are now used to solve multi-dimensional problems more efficiently than software produced by human designers and also to optimise the design of systems; a minimal sketch of such an algorithm follows this paragraph. In the 19th century, particularly after the publication of "On the Origin of Species" in 1859, the idea that life had evolved was an active source of academic debate centred on the philosophical, social and religious implications of evolution. Today, the modern evolutionary synthesis is accepted by a vast majority of scientists. However, evolution remains a contentious concept for some theists. While various religions and denominations have reconciled their beliefs with evolution through concepts such as theistic evolution, there are creationists who believe that evolution is contradicted by the creation myths found in their religions and who raise various objections to evolution.
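The evolutionary algorithms referred to above all share the same loop of variation followed by selection. The sketch below illustrates that loop in the spirit of Holland's genetic algorithms; the bit-string target, population size, mutation rate and truncation selection are illustrative choices for this example, not details taken from any particular system described in the text.

```python
import random

# Minimal genetic-algorithm sketch: evolve a bit string toward an all-ones target.
# Population size, mutation rate and generation count are arbitrary illustrative values.
TARGET_LEN, POP_SIZE, MUT_RATE, GENERATIONS = 32, 50, 0.02, 200

def fitness(genome):
    return sum(genome)                      # number of 1s; maximum is TARGET_LEN

def mutate(genome):
    return [bit ^ (random.random() < MUT_RATE) for bit in genome]

def crossover(a, b):
    cut = random.randrange(1, TARGET_LEN)   # single-point crossover
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(TARGET_LEN)] for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == TARGET_LEN:
        break
    parents = population[: POP_SIZE // 2]   # truncation selection: keep the fitter half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print(f"generation {gen}: best fitness {fitness(population[0])}/{TARGET_LEN}")
```

The same skeleton generalises to engineering problems by swapping in a different genome encoding and fitness function.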
As had been demonstrated by responses to the publication of "Vestiges of the Natural History of Creation" in 1844, the most controversial aspect of evolutionary biology is the implication of human evolution that humans share common ancestry with apes and that the mental and moral faculties of humanity have the same types of natural causes as other inherited traits in animals. In some countries, notably the United States, these tensions between science and religion have fuelled the current creation–evolution controversy, a religious conflict focusing on politics and public education. While other scientific fields such as cosmology and Earth science also conflict with literal interpretations of many religious texts, evolutionary biology experiences significantly more opposition from religious literalists. The teaching of evolution in American secondary school biology classes was uncommon in most of the first half of the 20th century. The Scopes Trial decision of 1925 caused the subject to become very rare in American secondary biology textbooks for a generation, but it was gradually re-introduced later and became legally protected with the 1968 "Epperson v. Arkansas" decision. Since then, the competing religious belief of creationism was legally disallowed in secondary school curricula in various decisions in the 1970s and 1980s, but it returned in pseudoscientific form as intelligent design (ID), to be excluded once again in the 2005 "Kitzmiller v. Dover Area School District" case. Enzyme Enzymes are macromolecular biological catalysts. Enzymes accelerate chemical reactions. The molecules upon which enzymes may act are called substrates and the enzyme converts the substrates into different molecules known as products. Almost all metabolic processes in the cell need enzyme catalysis in order to occur at rates fast enough to sustain life. Metabolic pathways depend upon enzymes to catalyze individual steps. The study of enzymes is called "enzymology" and a new field of pseudoenzyme analysis has recently grown up, recognising that during evolution, some enzymes have lost the ability to carry out biological catalysis, which is often reflected in their amino acid sequences and unusual 'pseudocatalytic' properties. Enzymes are known to catalyze more than 5,000 biochemical reaction types. Most enzymes are proteins, although a few are catalytic RNA molecules. The latter are called ribozymes. Enzymes' specificity comes from their unique three-dimensional structures. Like all catalysts, enzymes increase the reaction rate by lowering its activation energy. Some enzymes can make their conversion of substrate to product occur many millions of times faster. An extreme example is orotidine 5'-phosphate decarboxylase, which allows a reaction that would otherwise take millions of years to occur in milliseconds. Chemically, enzymes are like any catalyst and are not consumed in chemical reactions, nor do they alter the equilibrium of a reaction. Enzymes differ from most other catalysts by being much more specific. Enzyme activity can be affected by other molecules: inhibitors are molecules that decrease enzyme activity, and activators are molecules that increase activity. Many therapeutic drugs and poisons are enzyme inhibitors. An enzyme's activity decreases markedly outside its optimal temperature and pH. Some enzymes are used commercially, for example, in the synthesis of antibiotics.
Some household products use enzymes to speed up chemical reactions: enzymes in biological washing powders break down protein, starch or fat stains on clothes, and enzymes in meat tenderizer break down proteins into smaller molecules, making the meat easier to chew. By the late 17th and early 18th centuries, the digestion of meat by stomach secretions and the conversion of starch to sugars by plant extracts and saliva were known but the mechanisms by which these occurred had not been identified. French chemist Anselme Payen was the first to discover an enzyme, diastase, in 1833. A few decades later, when studying the fermentation of sugar to alcohol by yeast, Louis Pasteur concluded that this fermentation was caused by a vital force contained within the yeast cells called "ferments", which were thought to function only within living organisms. He wrote that "alcoholic fermentation is an act correlated with the life and organization of the yeast cells, not with the death or putrefaction of the cells." In 1877, German physiologist Wilhelm Kühne (1837–1900) first used the term "enzyme", which comes from Greek ἔνζυμον, "leavened" or "in yeast", to describe this process. The word "enzyme" was used later to refer to nonliving substances such as pepsin, and the word "ferment" was used to refer to chemical activity produced by living organisms. Eduard Buchner submitted his first paper on the study of yeast extracts in 1897. In a series of experiments at the University of Berlin, he found that sugar was fermented by yeast extracts even when there were no living yeast cells in the mixture. He named the enzyme that brought about the fermentation of sucrose "zymase". In 1907, he received the Nobel Prize in Chemistry for "his discovery of cell-free fermentation". Following Buchner's example, enzymes are usually named according to the reaction they carry out: the suffix "-ase" is combined with the name of the substrate (e.g., lactase is the enzyme that cleaves lactose) or to the type of reaction (e.g., DNA polymerase forms DNA polymers). The biochemical identity of enzymes was still unknown in the early 1900s. Many scientists observed that enzymatic activity was associated with proteins, but others (such as Nobel laureate Richard Willstätter) argued that proteins were merely carriers for the true enzymes and that proteins "per se" were incapable of catalysis. In 1926, James B. Sumner showed that the enzyme urease was a pure protein and crystallized it; he did likewise for the enzyme catalase in 1937. The conclusion that pure proteins can be enzymes was definitively demonstrated by John Howard Northrop and Wendell Meredith Stanley, who worked on the digestive enzymes pepsin (1930), trypsin and chymotrypsin. These three scientists were awarded the 1946 Nobel Prize in Chemistry. The discovery that enzymes could be crystallized eventually allowed their structures to be solved by x-ray crystallography. This was first done for lysozyme, an enzyme found in tears, saliva and egg whites that digests the coating of some bacteria; the structure was solved by a group led by David Chilton Phillips and published in 1965. This high-resolution structure of lysozyme marked the beginning of the field of structural biology and the effort to understand how enzymes work at an atomic level of detail. An enzyme's name is often derived from its substrate or the chemical reaction it catalyzes, with the word ending in "-ase". Examples are lactase, alcohol dehydrogenase and DNA polymerase. 
Different enzymes that catalyze the same chemical reaction are called isozymes. The International Union of Biochemistry and Molecular Biology has developed a nomenclature for enzymes, the EC numbers; each enzyme is described by a sequence of four numbers preceded by "EC", which stands for "Enzyme Commission". The first number broadly classifies the enzyme based on its mechanism. The top-level classes are oxidoreductases (EC 1), transferases (EC 2), hydrolases (EC 3), lyases (EC 4), isomerases (EC 5) and ligases (EC 6). These sections are subdivided by other features such as the substrate, products, and chemical mechanism. An enzyme is fully specified by four numerical designations. For example, hexokinase (EC 2.7.1.1) is a transferase (EC 2) that adds a phosphate group (EC 2.7) to a hexose sugar, a molecule containing an alcohol group (EC 2.7.1). Enzymes are generally globular proteins, acting alone or in larger complexes. The sequence of the amino acids specifies the structure, which in turn determines the catalytic activity of the enzyme. Although structure determines function, a novel enzymatic activity cannot yet be predicted from structure alone. Enzyme structures unfold (denature) when heated or exposed to chemical denaturants and this disruption to the structure typically causes a loss of activity. Enzyme denaturation is normally linked to temperatures above a species' normal level; as a result, enzymes from bacteria living in volcanic environments such as hot springs are prized by industrial users for their ability to function at high temperatures, allowing enzyme-catalysed reactions to be operated at a very high rate. Enzymes are usually much larger than their substrates. Sizes range from just 62 amino acid residues, for the monomer of 4-oxalocrotonate tautomerase, to over 2,500 residues in the animal fatty acid synthase. Only a small portion of their structure (around 2–4 amino acids) is directly involved in catalysis: the catalytic site. This catalytic site is located next to one or more binding sites where residues orient the substrates. The catalytic site and binding site together comprise the enzyme's active site. The remaining majority of the enzyme structure serves to maintain the precise orientation and dynamics of the active site. In some enzymes, no amino acids are directly involved in catalysis; instead, the enzyme contains sites to bind and orient catalytic cofactors. Enzyme structures may also contain allosteric sites where the binding of a small molecule causes a conformational change that increases or decreases activity. A small number of RNA-based biological catalysts called ribozymes exist, which again can act alone or in complex with proteins. The most common of these is the ribosome, which is a complex of protein and catalytic RNA components. Enzymes must bind their substrates before they can catalyse any chemical reaction. Enzymes are usually very specific as to the substrates they bind and the chemical reactions they catalyse. Specificity is achieved by binding pockets with complementary shape, charge and hydrophilic/hydrophobic characteristics to the substrates. Enzymes can therefore distinguish between very similar substrate molecules to be chemoselective, regioselective and stereospecific. Some of the enzymes showing the highest specificity and accuracy are involved in the copying and expression of the genome. Some of these enzymes have "proof-reading" mechanisms. Here, an enzyme such as DNA polymerase catalyzes a reaction in a first step and then checks that the product is correct in a second step.
This two-step process results in average error rates of less than 1 error in 100 million reactions in high-fidelity mammalian polymerases. Similar proofreading mechanisms are also found in RNA polymerase, aminoacyl tRNA synthetases and ribosomes. Conversely, some enzymes display enzyme promiscuity, having broad specificity and acting on a range of different physiologically relevant substrates. Many enzymes possess small side activities which arose fortuitously (i.e. neutrally), which may be the starting point for the evolutionary selection of a new function. To explain the observed specificity of enzymes, in 1894 Emil Fischer proposed that both the enzyme and the substrate possess specific complementary geometric shapes that fit exactly into one another. This is often referred to as "the lock and key" model. This early model explains enzyme specificity, but fails to explain the stabilization of the transition state that enzymes achieve. In 1958, Daniel Koshland suggested a modification to the lock and key model: since enzymes are rather flexible structures, the active site is continuously reshaped by interactions with the substrate as the substrate interacts with the enzyme. As a result, the substrate does not simply bind to a rigid active site; the amino acid side-chains that make up the active site are molded into the precise positions that enable the enzyme to perform its catalytic function. In some cases, such as glycosidases, the substrate molecule also changes shape slightly as it enters the active site. The active site continues to change until the substrate is completely bound, at which point the final shape and charge distribution is determined. Induced fit may enhance the fidelity of molecular recognition in the presence of competition and noise via the conformational proofreading mechanism. Enzymes can accelerate reactions in several ways, all of which lower the activation energy (ΔG‡, the Gibbs free energy of activation). Enzymes may use several of these mechanisms simultaneously. For example, proteases such as trypsin perform covalent catalysis using a catalytic triad, stabilise charge build-up on the transition states using an oxyanion hole, and complete the hydrolysis using an oriented water substrate. Enzymes are not rigid, static structures; instead they have complex internal dynamic motions – that is, movements of parts of the enzyme's structure such as individual amino acid residues, groups of residues forming a protein loop or unit of secondary structure, or even an entire protein domain. These motions give rise to a conformational ensemble of slightly different structures that interconvert with one another at equilibrium. Different states within this ensemble may be associated with different aspects of an enzyme's function. For example, different conformations of the enzyme dihydrofolate reductase are associated with the substrate binding, catalysis, cofactor release, and product release steps of the catalytic cycle. Allosteric sites are pockets on the enzyme, distinct from the active site, that bind to molecules in the cellular environment. These molecules then cause a change in the conformation or dynamics of the enzyme that is transduced to the active site and thus affects the reaction rate of the enzyme. In this way, allosteric interactions can either inhibit or activate enzymes. Allosteric interactions with metabolites upstream or downstream in an enzyme's metabolic pathway cause feedback regulation, altering the activity of the enzyme according to the flux through the rest of the pathway.
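The feedback regulation just described can be made concrete with a toy simulation in which the first enzyme of a pathway is progressively inhibited by the pathway's end product; all rate constants below are arbitrary illustrative values, not measurements from the text.

```python
# Toy simulation of end-product (feedback) inhibition of the first enzyme in a pathway.
# Production slows as the end product P accumulates, so P settles at a steady level
# instead of growing without bound.  All rate constants are arbitrary assumptions.

def simulate(steps=2000, dt=0.01, v_uninhibited=10.0, k_inhibition=2.0, k_consumption=1.0):
    p = 0.0
    for _ in range(steps):
        production = v_uninhibited / (1 + p / k_inhibition)   # feedback-inhibited step
        consumption = k_consumption * p                        # downstream use of P
        p += (production - consumption) * dt
    return p

print(f"steady-state end-product level ~ {simulate():.2f}")
```

When production exactly balances consumption the product level stops changing, which is the stable set point that negative feedback provides.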
Some enzymes do not need additional components to show full activity. Others require non-protein molecules called cofactors to be bound for activity. Cofactors can be either inorganic (e.g., metal ions and iron-sulfur clusters) or organic compounds (e.g., flavin and heme). These cofactors serve many purposes; for instance, metal ions can help in stabilizing nucleophilic species within the active site. Organic cofactors can be either coenzymes, which are released from the enzyme's active site during the reaction, or prosthetic groups, which are tightly bound to an enzyme. Organic prosthetic groups can be covalently bound (e.g., biotin in enzymes such as pyruvate carboxylase). An example of an enzyme that contains a cofactor is carbonic anhydrase, which uses a zinc cofactor bound as part of its active site. These tightly bound ions or molecules are usually found in the active site and are involved in catalysis. For example, flavin and heme cofactors are often involved in redox reactions. Enzymes that require a cofactor but do not have one bound are called "apoenzymes" or "apoproteins". An enzyme together with the cofactor(s) required for activity is called a "holoenzyme" (or haloenzyme). The term "holoenzyme" can also be applied to enzymes that contain multiple protein subunits, such as the DNA polymerases; here the holoenzyme is the complete complex containing all the subunits needed for activity. Coenzymes are small organic molecules that can be loosely or tightly bound to an enzyme. Coenzymes transport chemical groups from one enzyme to another. Examples include NADH, NADPH and adenosine triphosphate (ATP). Some coenzymes, such as flavin mononucleotide (FMN), flavin adenine dinucleotide (FAD), thiamine pyrophosphate (TPP), and tetrahydrofolate (THF), are derived from vitamins. These coenzymes cannot be synthesized by the body "de novo" and closely related compounds (vitamins) must be acquired from the diet. The chemical groups carried include the hydride ion (carried by NADH and NADPH), the phosphate group (carried by ATP) and one-carbon groups (carried by tetrahydrofolate). Since coenzymes are chemically changed as a consequence of enzyme action, it is useful to consider coenzymes to be a special class of substrates, or second substrates, which are common to many different enzymes. For example, about 1000 enzymes are known to use the coenzyme NADH. Coenzymes are usually continuously regenerated and their concentrations maintained at a steady level inside the cell. For example, NADPH is regenerated through the pentose phosphate pathway and "S"-adenosylmethionine by methionine adenosyltransferase. This continuous regeneration means that small amounts of coenzymes can be used very intensively. For example, the human body turns over its own weight in ATP each day. As with all catalysts, enzymes do not alter the position of the chemical equilibrium of the reaction. In the presence of an enzyme, the reaction runs in the same direction as it would without the enzyme, just more quickly. For example, carbonic anhydrase catalyzes the interconversion of carbon dioxide and water with bicarbonate and protons in either direction, depending on the concentration of its reactants. The rate of a reaction is dependent on the activation energy needed to form the transition state which then decays into products. Enzymes increase reaction rates by lowering the energy of the transition state. First, binding forms a low energy enzyme-substrate complex (ES). Secondly, the enzyme stabilises the transition state such that it requires less energy to achieve compared to the uncatalyzed reaction (ES‡). Finally, the enzyme-product complex (EP) dissociates to release the products.
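The effect of lowering the transition-state energy can be quantified with the standard Arrhenius relation k ∝ exp(−Ea/RT), which is textbook kinetics rather than anything specific to this article; the 40 kJ/mol barrier reduction used below is an illustrative assumption, not a value taken from the text.

```python
import math

# How much lowering the activation barrier speeds a reaction, via the Arrhenius
# relation k ~ exp(-Ea / RT).  The 40 kJ/mol barrier reduction is an assumption
# chosen only to illustrate the exponential sensitivity of the rate.
R = 8.314          # gas constant, J/(mol*K)
T = 310.0          # K, roughly body temperature

def rate_enhancement(delta_ea_j_per_mol: float) -> float:
    """Factor by which the rate increases when the barrier drops by delta_ea."""
    return math.exp(delta_ea_j_per_mol / (R * T))

print(f"{rate_enhancement(40_000):.2e}-fold faster")   # roughly a few million-fold
```

Because the dependence is exponential, even modest stabilisation of the transition state translates into very large rate accelerations.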
Enzymes can couple two or more reactions, so that a thermodynamically favorable reaction can be used to "drive" a thermodynamically unfavourable one so that the combined energy of the products is lower than that of the substrates. For example, the hydrolysis of ATP is often used to drive other chemical reactions. Enzyme kinetics is the investigation of how enzymes bind substrates and turn them into products. The rate data used in kinetic analyses are commonly obtained from enzyme assays. In 1913 Leonor Michaelis and Maud Leonora Menten proposed a quantitative theory of enzyme kinetics, which is referred to as Michaelis–Menten kinetics. The major contribution of Michaelis and Menten was to think of enzyme reactions in two stages. In the first, the substrate binds reversibly to the enzyme, forming the enzyme-substrate complex. This is sometimes called the Michaelis-Menten complex in their honor. The enzyme then catalyzes the chemical step in the reaction and releases the product. This work was further developed by G. E. Briggs and J. B. S. Haldane, who derived kinetic equations that are still widely used today. Enzyme rates depend on solution conditions and substrate concentration. To find the maximum speed of an enzymatic reaction, the substrate concentration is increased until a constant rate of product formation is seen; the result is a saturation curve. Saturation happens because, as substrate concentration increases, more and more of the free enzyme is converted into the substrate-bound ES complex. At the maximum reaction rate (Vmax) of the enzyme, all the enzyme active sites are bound to substrate, and the amount of ES complex is the same as the total amount of enzyme. Vmax is only one of several important kinetic parameters. The amount of substrate needed to achieve a given rate of reaction is also important. This is given by the Michaelis-Menten constant (Km), which is the substrate concentration required for an enzyme to reach one-half its maximum reaction rate; generally, each enzyme has a characteristic Km for a given substrate. Another useful constant is kcat, also called the "turnover number", which is the number of substrate molecules handled by one active site per second. The efficiency of an enzyme can be expressed in terms of kcat/Km. This is also called the specificity constant and incorporates the rate constants for all steps in the reaction up to and including the first irreversible step. Because the specificity constant reflects both affinity and catalytic ability, it is useful for comparing different enzymes against each other, or the same enzyme with different substrates. The theoretical maximum for the specificity constant is called the diffusion limit and is about 10⁸ to 10⁹ M⁻¹s⁻¹. At this point every collision of the enzyme with its substrate will result in catalysis, and the rate of product formation is not limited by the reaction rate but by the diffusion rate. Enzymes with this property are called "catalytically perfect" or "kinetically perfect". Examples of such enzymes are triose-phosphate isomerase, carbonic anhydrase, acetylcholinesterase, catalase, fumarase, β-lactamase, and superoxide dismutase. The turnover of such enzymes can reach several million reactions per second. But most enzymes are far from perfect: the average values of kcat/Km and kcat are about 10⁵ M⁻¹s⁻¹ and 10 s⁻¹, respectively.
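The saturation behaviour and kinetic constants described above follow from the Michaelis–Menten rate law, v = Vmax·[S] / (Km + [S]). The sketch below evaluates it with placeholder values for Vmax and Km (not measurements from the text) to show the half-maximal rate at [S] = Km and saturation at high substrate concentration.

```python
# Michaelis-Menten rate law: v = Vmax * [S] / (Km + [S]).
# Vmax, Km and the substrate concentrations below are placeholder values.

def mm_rate(s: float, vmax: float, km: float) -> float:
    """Initial reaction rate at substrate concentration s."""
    return vmax * s / (km + s)

vmax, km = 100.0, 2.5            # e.g. umol/min and mM (illustrative units)
for s in (0.1, 2.5, 25.0, 250.0):
    v = mm_rate(s, vmax, km)
    print(f"[S] = {s:6.1f} mM -> v = {v:6.1f}  ({100 * v / vmax:.0f}% of Vmax)")
# At [S] = Km the rate is exactly half of Vmax; at high [S] it saturates near Vmax.
```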
Michaelis–Menten kinetics relies on the law of mass action, which is derived from the assumptions of free diffusion and thermodynamically driven random collision. Many biochemical or cellular processes deviate significantly from these conditions, because of macromolecular crowding and constrained molecular movement. More recent, complex extensions of the model attempt to correct for these effects. Enzyme reaction rates can be decreased by various types of enzyme inhibitors. A competitive inhibitor and substrate cannot bind to the enzyme at the same time. Often competitive inhibitors strongly resemble the real substrate of the enzyme. For example, the drug methotrexate is a competitive inhibitor of the enzyme dihydrofolate reductase, which catalyzes the reduction of dihydrofolate to tetrahydrofolate. Methotrexate closely resembles dihydrofolate in structure. This type of inhibition can be overcome with high substrate concentration. In some cases, the inhibitor can bind to a site other than the binding-site of the usual substrate and exert an allosteric effect to change the shape of the usual binding-site. A non-competitive inhibitor binds to a site other than where the substrate binds. The substrate still binds with its usual affinity and hence Km remains the same. However, the inhibitor reduces the catalytic efficiency of the enzyme, so that Vmax is reduced. In contrast to competitive inhibition, non-competitive inhibition cannot be overcome with high substrate concentration. An uncompetitive inhibitor cannot bind to the free enzyme, only to the enzyme-substrate complex; hence, these types of inhibitors are most effective at high substrate concentration. In the presence of the inhibitor, the enzyme-substrate complex is inactive. This type of inhibition is rare. A mixed inhibitor binds to an allosteric site and the binding of the substrate and the inhibitor affect each other. The enzyme's function is reduced but not eliminated when bound to the inhibitor. This type of inhibitor does not follow the Michaelis-Menten equation. An irreversible inhibitor permanently inactivates the enzyme, usually by forming a covalent bond to the protein. Penicillin and aspirin are common drugs that act in this manner. In many organisms, inhibitors may act as part of a feedback mechanism. If an enzyme produces too much of one substance in the organism, that substance may act as an inhibitor for the enzyme at the beginning of the pathway that produces it, causing production of the substance to slow down or stop when there is a sufficient amount. This is a form of negative feedback. Major metabolic pathways such as the citric acid cycle make use of this mechanism. Since inhibitors modulate the function of enzymes, they are often used as drugs. Many such drugs are reversible competitive inhibitors that resemble the enzyme's native substrate, similar to methotrexate above; other well-known examples include statins used to treat high cholesterol, and protease inhibitors used to treat retroviral infections such as HIV. A common example of an irreversible inhibitor that is used as a drug is aspirin, which inhibits the COX-1 and COX-2 enzymes that produce the inflammation messenger prostaglandin. Other enzyme inhibitors are poisons. For example, the poison cyanide is an irreversible enzyme inhibitor that combines with the copper and iron in the active site of the enzyme cytochrome c oxidase and blocks cellular respiration.
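A standard way to summarise how these reversible inhibitor classes alter the apparent kinetic constants is through the factor α = 1 + [I]/Ki. This formalism is conventional enzyme kinetics rather than something stated in the passage, and the Vmax, Km, [I] and Ki values below are placeholders chosen only to show the qualitative differences.

```python
# Apparent Michaelis-Menten constants in the presence of a reversible inhibitor,
# using the standard factor alpha = 1 + [I]/Ki.  Competitive inhibition raises the
# apparent Km; non-competitive lowers the apparent Vmax; uncompetitive lowers both.
# Ki, [I], Vmax and Km below are placeholder values.

def apparent_constants(vmax, km, i_conc, ki, mode):
    alpha = 1 + i_conc / ki
    if mode == "competitive":
        return vmax, km * alpha
    if mode == "noncompetitive":
        return vmax / alpha, km
    if mode == "uncompetitive":
        return vmax / alpha, km / alpha
    raise ValueError(mode)

for mode in ("competitive", "noncompetitive", "uncompetitive"):
    v_app, km_app = apparent_constants(100.0, 2.5, i_conc=5.0, ki=5.0, mode=mode)
    print(f"{mode:15s}  Vmax(app) = {v_app:5.1f}   Km(app) = {km_app:4.2f}")
```

This matches the behaviour described above: only competitive inhibition can be outcompeted by raising the substrate concentration, because it leaves Vmax untouched.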
Enzymes serve a wide variety of functions inside living organisms. They are indispensable for signal transduction and cell regulation, often via kinases and phosphatases. They also generate movement, with myosin hydrolyzing ATP to generate muscle contraction, and also transport cargo around the cell as part of the cytoskeleton. Other ATPases in the cell membrane are ion pumps involved in active transport. Enzymes are also involved in more exotic functions, such as luciferase generating light in fireflies. Viruses can also contain enzymes for infecting cells, such as the HIV integrase and reverse transcriptase, or for viral release from cells, like the influenza virus neuraminidase. An important function of enzymes is in the digestive systems of animals. Enzymes such as amylases and proteases break down large molecules (starch or proteins, respectively) into smaller ones, so they can be absorbed by the intestines. Starch molecules, for example, are too large to be absorbed from the intestine, but enzymes hydrolyze the starch chains into smaller molecules such as maltose and eventually glucose, which can then be absorbed. Different enzymes digest different food substances. In ruminants, which have herbivorous diets, microorganisms in the gut produce another enzyme, cellulase, to break down the cellulose cell walls of plant fiber. Several enzymes can work together in a specific order, creating metabolic pathways. In a metabolic pathway, one enzyme takes the product of another enzyme as a substrate. After the catalytic reaction, the product is then passed on to another enzyme. Sometimes more than one enzyme can catalyze the same reaction in parallel; this can allow more complex regulation, with, for example, a low constant activity provided by one enzyme but an inducible high activity from a second enzyme. Enzymes determine what steps occur in these pathways. Without enzymes, metabolism would not progress through the same steps and could not be regulated to serve the needs of the cell. Most central metabolic pathways are regulated at a few key steps, typically through enzymes whose activity involves the hydrolysis of ATP. Because this reaction releases so much energy, other reactions that are thermodynamically unfavorable can be coupled to ATP hydrolysis, driving the overall series of linked metabolic reactions. There are five main ways that enzyme activity is controlled in the cell. Enzymes can be either activated or inhibited by other molecules. For example, the end product(s) of a metabolic pathway are often inhibitors for one of the first enzymes of the pathway (usually the first irreversible step, called the committed step), thus regulating the amount of end product made by the pathways. Such a regulatory mechanism is called a negative feedback mechanism, because the amount of the end product produced is regulated by its own concentration. Negative feedback mechanisms can effectively adjust the rate of synthesis of intermediate metabolites according to the demands of the cells. This helps with the effective allocation of materials and energy economy, and it prevents the excess manufacture of end products. Like other homeostatic devices, the control of enzymatic action helps to maintain a stable internal environment in living organisms. Examples of post-translational modification include phosphorylation, myristoylation and glycosylation.
For example, in the response to insulin, the phosphorylation of multiple enzymes, including glycogen synthase, helps control the synthesis or degradation of glycogen and allows the cell to respond to changes in blood sugar. Another example of post-translational modification is the cleavage of the polypeptide chain. Chymotrypsin, a digestive protease, is produced in inactive form as chymotrypsinogen in the pancreas and transported in this form to the gut, where it is activated. This stops the enzyme from digesting the pancreas or other tissues before it reaches the gut. This type of inactive precursor to an enzyme is known as a zymogen or proenzyme. Enzyme production (transcription and translation of enzyme genes) can be enhanced or diminished by a cell in response to changes in the cell's environment. This form of gene regulation is called enzyme induction. For example, bacteria may become resistant to antibiotics such as penicillin because enzymes called beta-lactamases are induced that hydrolyse the crucial beta-lactam ring within the penicillin molecule. Another example comes from enzymes in the liver called cytochrome P450 oxidases, which are important in drug metabolism. Induction or inhibition of these enzymes can cause drug interactions. Enzyme levels can also be regulated by changing the rate of enzyme degradation. The opposite of enzyme induction is enzyme repression. Enzymes can be compartmentalized, with different metabolic pathways occurring in different cellular compartments. For example, fatty acids are synthesized by one set of enzymes in the cytosol, endoplasmic reticulum and Golgi and used by a different set of enzymes as a source of energy in the mitochondrion, through β-oxidation. In addition, trafficking of the enzyme to different compartments may change the degree of protonation (cytoplasm neutral and lysosome acidic) or oxidative state [e.g., oxidized (periplasm) or reduced (cytoplasm)], which in turn affects enzyme activity. In multicellular eukaryotes, cells in different organs and tissues have different patterns of gene expression and therefore have different sets of enzymes (known as isozymes) available for metabolic reactions. This provides a mechanism for regulating the overall metabolism of the organism. For example, hexokinase, the first enzyme in the glycolysis pathway, has a specialized form called glucokinase expressed in the liver and pancreas that has a lower affinity for glucose yet is more sensitive to glucose concentration. This enzyme is involved in sensing blood sugar and regulating insulin production. Since the tight control of enzyme activity is essential for homeostasis, any malfunction (mutation, overproduction, underproduction or deletion) of a single critical enzyme can lead to a genetic disease. The malfunction of just one type of enzyme out of the thousands of types present in the human body can be fatal. An example of a fatal genetic disease due to enzyme insufficiency is Tay-Sachs disease, in which patients lack the enzyme hexosaminidase. One example of enzyme deficiency is the most common type of phenylketonuria. Many different single amino acid mutations in the enzyme phenylalanine hydroxylase, which catalyzes the first step in the degradation of phenylalanine, result in build-up of phenylalanine and related products. Some mutations are in the active site, directly disrupting binding and catalysis, but many are far from the active site and reduce activity by destabilising the protein structure, or affecting correct oligomerisation.
This can lead to intellectual disability if the disease is untreated. Another example is pseudocholinesterase deficiency, in which the body's ability to break down choline ester drugs is impaired. Oral administration of enzymes can be used to treat some functional enzyme deficiencies, such as pancreatic insufficiency and lactose intolerance. Another way enzyme malfunctions can cause disease comes from germline mutations in genes coding for DNA repair enzymes. Defects in these enzymes cause cancer because cells are less able to repair mutations in their genomes. This causes a slow accumulation of mutations and results in the development of cancers. An example of such a hereditary cancer syndrome is xeroderma pigmentosum, which causes the development of skin cancers in response to even minimal exposure to ultraviolet light. Enzymes are used in the chemical industry and other industrial applications when extremely specific catalysts are required. Enzymes in general are limited in the number of reactions they have evolved to catalyze and also by their lack of stability in organic solvents and at high temperatures. As a consequence, protein engineering is an active area of research and involves attempts to create new enzymes with novel properties, either through rational design or "in vitro" evolution. These efforts have begun to be successful, and a few enzymes have now been designed "from scratch" to catalyze reactions that do not occur in nature. Elephant Elephants are large mammals of the family Elephantidae and the order Proboscidea. Three species are currently recognised: the African bush elephant ("Loxodonta africana"), the African forest elephant ("L. cyclotis"), and the Asian elephant ("Elephas maximus"). Elephants are scattered throughout sub-Saharan Africa, South Asia, and Southeast Asia. Elephantidae is the only surviving family of the order Proboscidea; other, now extinct, members of the order include deinotheres, gomphotheres, mammoths, and mastodons. All elephants have several distinctive features, the most notable of which is a long trunk (also called a proboscis), used for many purposes, particularly breathing, lifting water, and grasping objects. Their incisors grow into tusks, which can serve as weapons and as tools for moving objects and digging. Elephants' large ear flaps help to control their body temperature. Their pillar-like legs can carry their great weight. African elephants have larger ears and concave backs while Asian elephants have smaller ears and convex or level backs. Elephants are herbivorous and can be found in different habitats including savannahs, forests, deserts, and marshes. They prefer to stay near water. They are considered to be a keystone species due to their impact on their environments. Other animals tend to keep their distance from elephants while predators, such as lions, tigers, hyenas, and wild dogs, usually target only young elephants (or "calves"). Elephants have a fission–fusion society in which multiple family groups come together to socialise. Females ("cows") tend to live in family groups, which can consist of one female with her calves or several related females with offspring. The groups are led by an individual known as the matriarch, often the oldest cow. Males ("bulls") leave their family groups when they reach puberty and may live alone or with other males.
Adult bulls mostly interact with family groups when looking for a mate and enter a state of increased testosterone and aggression known as musth, which helps them gain dominance and reproductive success. Calves are the centre of attention in their family groups and rely on their mothers for as long as three years. Elephants can live up to 70 years in the wild. They communicate by touch, sight, smell, and sound; elephants use infrasound, and seismic communication over long distances. Elephant intelligence has been compared with that of primates and cetaceans. They appear to have self-awareness and show empathy for dying or dead individuals of their kind. African elephants are listed as vulnerable by the International Union for Conservation of Nature (IUCN) while the Asian elephant is classed as endangered. One of the biggest threats to elephant populations is the ivory trade, as the animals are poached for their ivory tusks. Other threats to wild elephants include habitat destruction and conflicts with local people. Elephants are used as working animals in Asia. In the past, they were used in war; today, they are often controversially put on display in zoos, or exploited for entertainment in circuses. Elephants are highly recognisable and have been featured in art, folklore, religion, literature, and popular culture. The word "elephant" is based on the Latin "elephas" (genitive "elephantis") ("elephant"), which is the Latinised form of the Greek ἐλέφας ("elephas") (genitive ἐλέφαντος ("elephantos")), probably from a non-Indo-European language, likely Phoenician. It is attested in Mycenaean Greek as "e-re-pa" (genitive "e-re-pa-to") in Linear B syllabic script. As in Mycenaean Greek, Homer used the Greek word to mean ivory, but after the time of Herodotus, it also referred to the animal. The word "elephant" appears in Middle English as "olyfaunt" (c.1300) and was borrowed from Old French "oliphant" (12th century). "Loxodonta", the generic name for the African elephants, is Greek for "oblique-sided tooth". Elephants belong to the family Elephantidae, the sole remaining family within the order Proboscidea which belongs to the superorder Afrotheria. Their closest extant relatives are the sirenians (dugongs and manatees) and the hyraxes, with which they share the clade Paenungulata within the superorder Afrotheria. Elephants and sirenians are further grouped in the clade Tethytheria. Three species of elephants are recognised; the African bush elephant ("Loxodonta africana") and forest elephant ("Loxodonta cyclotis") of sub-Saharan Africa, and the Asian elephant ("Elephas maximus") of South and Southeast Asia. African elephants have larger ears, a concave back, more wrinkled skin, a sloping abdomen, and two finger-like extensions at the tip of the trunk. Asian elephants have smaller ears, a convex or level back, smoother skin, a horizontal abdomen that occasionally sags in the middle and one extension at the tip of the trunk. The looped ridges on the molars are narrower in the Asian elephant while those of the African are more diamond-shaped. The Asian elephant also has dorsal bumps on its head and some patches of depigmentation on its skin. In general, African elephants are larger than their Asian cousins. Swedish zoologist Carl Linnaeus first described the genus "Elephas" and an elephant from Sri Lanka (then known as Ceylon) under the binomial "Elephas maximus" in 1758. In 1798, Georges Cuvier classified the Indian elephant under the binomial "Elephas indicus". 
Dutch zoologist Coenraad Jacob Temminck described the Sumatran elephant in 1847 under the binomial "Elephas sumatranus". English zoologist Frederick Nutter Chasen classified all three as subspecies of the Asian elephant in 1940. Asian elephants vary geographically in their colour and amount of depigmentation. The Sri Lankan elephant ("Elephas maximus maximus") inhabits Sri Lanka, the Indian elephant ("E. m. indicus") is native to mainland Asia (on the Indian subcontinent and Indochina), and the Sumatran elephant ("E. m. sumatranus") is found in Sumatra. One disputed subspecies, the Borneo elephant, lives in northern Borneo and is smaller than all the other subspecies. It has larger ears, a longer tail, and straighter tusks than the typical elephant. Sri Lankan zoologist Paules Edward Pieris Deraniyagala described it in 1950 under the trinomial "Elephas maximus borneensis", taking as his type an illustration in "National Geographic". It was subsequently subsumed under either "E. m. indicus" or "E. m. sumatranus". Results of a 2003 genetic analysis indicate its ancestors separated from the mainland population about 300,000 years ago. A 2008 study found that Borneo elephants are not indigenous to the island but were brought there before 1521 by the Sultan of Sulu from Java, where elephants are now extinct. The African elephant was first named by German naturalist Johann Friedrich Blumenbach in 1797 as "Elephas africanus". The genus "Loxodonta" was named by Frédéric Cuvier in 1825. Cuvier spelled it "Loxodonte," but in 1827 an anonymous author romanised the spelling to "Loxodonta"; the International Code of Zoological Nomenclature recognises this as the proper authority. In 1942, 18 subspecies of African elephant were recognised by Henry Fairfield Osborn, but further morphological data has reduced the number of classified subspecies, and by the 1990s, only two were recognised, the savannah or bush elephant ("L. a. africana") and the forest elephant ("L. a. cyclotis"), the latter having been named in 1900 by German zoologist Paul Matschie. Forest elephants have smaller and more rounded ears and thinner and straighter tusks than bush elephants, and are limited in range to the forested areas of western and Central Africa. A 2000 study argued for the elevation of the two forms into separate species ("L. africana" and "L. cyclotis" respectively) based on differences in skull morphology. DNA studies published in 2001 and 2007 also suggested they were distinct species while studies in 2002 and 2005 concluded that they were the same species. Further studies (2010, 2011, 2015) have supported African savannah and forest elephants' status as separate species. The two species are believed to have diverged 6 million years ago and have been completely genetically isolated for the past 500,000 years. In 2017, DNA sequence analysis showed that "L. cyclotis" is more closely related to the extinct "Palaeoloxodon antiquus" than it is to "L. africana", possibly undermining the genus Loxodonta as a whole. Some evidence suggests that elephants of western Africa are a separate species, although this is disputed. The pygmy elephants of the Congo Basin, which have been suggested to be a separate species ("Loxodonta pumilio"), are probably forest elephants whose small size and/or early maturity are due to environmental conditions. Over 185 extinct members and three major evolutionary radiations of the order Proboscidea have been recorded.
The earliest proboscids, the African "Eritherium" and "Phosphatherium" of the late Paleocene, heralded the first radiation. The Eocene included "Numidotherium", "Moeritherium," and "Barytherium" from Africa. These animals were relatively small and aquatic. Later on, genera such as "Phiomia" and "Palaeomastodon" arose; the latter likely inhabited forests and open woodlands. Proboscidean diversity declined during the Oligocene. One notable species of this epoch was "Eritreum melakeghebrekristosi" of the Horn of Africa, which may have been an ancestor to several later species. The beginning of the Miocene saw the second diversification, with the appearance of the deinotheres and the mammutids. The former were related to "Barytherium" and lived in Africa and Eurasia, while the latter may have descended from "Eritreum" and spread to North America. The second radiation was represented by the emergence of the gomphotheres in the Miocene, which likely evolved from "Eritreum" and originated in Africa, spreading to every continent except Australia and Antarctica. Members of this group included "Gomphotherium" and "Platybelodon". The third radiation started in the late Miocene and led to the arrival of the elephantids, which descended from, and slowly replaced, the gomphotheres. The African "Primelephas gomphotheroides" gave rise to "Loxodonta", "Mammuthus," and "Elephas". "Loxodonta" branched off earliest around the Miocene and Pliocene boundary while "Mammuthus" and "Elephas" diverged later during the early Pliocene. "Loxodonta" remained in Africa while "Mammuthus" and "Elephas" spread to Eurasia, and the former reached North America. At the same time, the stegodontids, another proboscidean group descended from gomphotheres, spread throughout Asia, including the Indian subcontinent, China, southeast Asia, and Japan. Mammutids continued to evolve into new species, such as the American mastodon. At the beginning of the Pleistocene, elephantids experienced a high rate of speciation. The Pleistocene also saw the arrival of "Palaeoloxodon namadicus", the largest terrestrial mammal of all time. "Loxodonta atlantica" became the most common species in northern and southern Africa but was replaced by "Elephas iolensis" later in the Pleistocene. Only when "Elephas" disappeared from Africa did "Loxodonta" become dominant once again, this time in the form of the modern species. "Elephas" diversified into new species in Asia, such as "E. hysudricus" and "E. platycephus"; the latter the likely ancestor of the modern Asian elephant. "Mammuthus" evolved into several species, including the well-known woolly mammoth. Interbreeding appears to have been common among elephantid species, with some hybridisation which in some cases led to species with three ancestral genetic components, such as the straight-tusked elephants. In the Late Pleistocene, most proboscidean species vanished during the Quaternary glaciation which killed off 50% of genera weighing over worldwide. Proboscideans experienced several evolutionary trends, such as an increase in size, which led to many giant species that stood up to tall. As with other megaherbivores, including the extinct sauropod dinosaurs, the large size of elephants likely developed to allow them to survive on vegetation with low nutritional value. Their limbs grew longer and the feet shorter and broader. The feet were originally plantigrade and developed into a digitigrade stance with cushion pads and the sesamoid bone providing support. 
Early proboscideans developed longer mandibles and smaller craniums while more derived ones developed shorter mandibles, which shifted the head's centre of gravity. The skull grew larger, especially the cranium, while the neck shortened to provide better support for the skull. The increase in size led to the development and elongation of the mobile trunk to provide reach. The number of premolars, incisors and canines decreased. The cheek teeth (molars and premolars) became larger and more specialized, especially after elephants started to switch from C3 plants to C4 grasses, which caused their teeth to undergo a three-fold increase in tooth height as well as substantial multiplication of lamellae after about five million years ago. Only in the last million years or so did they return to a diet mainly consisting of C3 trees and shrubs. The upper second incisors grew into tusks, which varied in shape from straight, to curved (either upward or downward), to spiralled, depending on the species. Some proboscideans developed tusks from their lower incisors. Elephants retain certain features from their aquatic ancestry, such as their middle ear anatomy and the internal testes of the males. There has been some debate over the relationship of "Mammuthus" to "Loxodonta" or "Elephas". Some DNA studies suggest "Mammuthus" is more closely related to the former while others point to the latter. However, analysis of the complete mitochondrial genome profile of the woolly mammoth (sequenced in 2005) supports "Mammuthus" being more closely related to "Elephas". Morphological evidence supports "Mammuthus" and "Elephas" as sister taxa while comparisons of protein albumin and collagen have concluded that all three genera are equally related to each other. Some scientists believe a cloned mammoth embryo could one day be implanted in an Asian elephant's womb. Several species of proboscideans lived on islands and experienced insular dwarfism. This occurred primarily during the Pleistocene when some elephant populations became isolated by fluctuating sea levels, although dwarf elephants did exist earlier in the Pliocene. These elephants likely grew smaller on islands due to a lack of large or viable predator populations and limited resources. By contrast, small mammals such as rodents develop gigantism in these conditions. Dwarf proboscideans are known to have lived in Indonesia, the Channel Islands of California, and several islands of the Mediterranean. "Elephas celebensis" of Sulawesi is believed to have descended from "Elephas planifrons". "Elephas falconeri" of Malta and Sicily was only and had probably evolved from the straight-tusked elephant. Other descendants of the straight-tusked elephant existed in Cyprus. Dwarf elephants of uncertain descent lived in Crete, the Cyclades, and the Dodecanese while dwarf mammoths are known to have lived in Sardinia. The Columbian mammoth colonised the Channel Islands and evolved into the pygmy mammoth. This species reached a height of and weighed . A population of small woolly mammoths survived on Wrangel Island, now north of the Siberian coast, as recently as 4,000 years ago. After their discovery in 1993, they were considered dwarf mammoths. This classification has been re-evaluated, and since the Second International Mammoth Conference in 1999, these animals are no longer considered to be true "dwarf mammoths". Elephants are the largest living terrestrial animals. 
The average male African bush elephant is tall at the shoulder and has a body mass of , whereas the average female is tall at the shoulder and has a mass of . Asian elephants are smaller, with males being tall at the shoulder and on average, and females tall at the shoulder and on average. African forest elephants are the smallest extant species, on average being tall at the shoulder and . Male African elephants are typically 23% taller than females, whereas male Asian elephants are only around 15% taller than females. The skeleton of the elephant is made up of 326–351 bones. The vertebrae are connected by tight joints, which limit the backbone's flexibility. African elephants have 21 pairs of ribs, while Asian elephants have 19 or 20 pairs. An elephant's skull is resilient enough to withstand the forces generated by the leverage of the tusks and head-to-head collisions. The back of the skull is flattened and spread out, creating arches that protect the brain in every direction. The skull contains air cavities (sinuses) that reduce the weight of the skull while maintaining overall strength. These cavities give the inside of the skull a honeycomb-like appearance. The cranium is particularly large and provides enough room for the attachment of muscles to support the entire head. The lower jaw is solid and heavy. Because of the size of the head, the neck is relatively short to provide better support. Lacking a lacrimal apparatus, the eye relies on the harderian gland to keep it moist. A durable nictitating membrane protects the eye globe. The animal's field of vision is compromised by the location and limited mobility of the eyes. Elephants are considered dichromats and they can see well in dim light but not in bright light. The core body temperature averages , similar to that of a human. Like all mammals, an elephant can raise or lower its temperature a few degrees from the average in response to extreme environmental conditions. Elephant ears have thick bases with thin tips. The ear flaps, or pinnae, contain numerous blood vessels called capillaries. Warm blood flows into the capillaries, helping to release excess body heat into the environment. This occurs when the pinnae are still, and the animal can enhance the effect by flapping them. Larger ear surfaces contain more capillaries, and more heat can be released. Of all the elephants, African bush elephants live in the hottest climates and have the largest ear flaps. Elephants are capable of hearing at low frequencies and are most sensitive at 1 kHz. The trunk, or proboscis, is a fusion of the nose and upper lip, although in early fetal life, the upper lip and trunk are separated. The trunk is elongated and specialised to become the elephant's most important and versatile appendage. It contains up to 150,000 separate muscle fascicles, with no bone and little fat. These paired muscles consist of two major types: superficial (surface) and internal. The former are divided into dorsals, ventrals, and laterals while the latter are divided into transverse and radiating muscles. The muscles of the trunk connect to a bony opening in the skull. The nasal septum is composed of tiny muscle units that stretch horizontally between the nostrils. Cartilage divides the nostrils at the base. As a muscular hydrostat, the trunk moves by precisely coordinated muscle contractions. The muscles work both with and against each other. A unique proboscis nerve – formed by the maxillary and facial nerves – runs along both sides of the trunk. 
Elephant trunks have multiple functions, including breathing, olfaction, touching, grasping, and sound production. The animal's sense of smell may be four times as sensitive as that of a bloodhound. The trunk's ability to make powerful twisting and coiling movements allows it to collect food, wrestle with other elephants, and lift up to . It can be used for delicate tasks, such as wiping an eye and checking an orifice, and is capable of cracking a peanut shell without breaking the seed. With its trunk, an elephant can reach items at heights of up to and dig for water under mud or sand. Individuals may show lateral preference when grasping with their trunks: some prefer to twist them to the left, others to the right. Elephants can suck up water both to drink and to spray on their bodies. An adult Asian elephant is capable of holding of water in its trunk. They will also spray dust or grass on themselves. When underwater, the elephant uses its trunk as a snorkel. The African elephant has two finger-like extensions at the tip of the trunk that allow it to grasp and bring food to its mouth. The Asian elephant has only one, and relies more on wrapping around a food item and squeezing it into its mouth. Asian elephants have more muscle coordination and can perform more complex tasks. Losing the trunk would be detrimental to an elephant's survival, although in rare cases, individuals have survived with shortened ones. One elephant has been observed to graze by kneeling on its front legs, raising its hind legs and taking in grass with its lips. Floppy trunk syndrome is a condition of trunk paralysis in African bush elephants caused by the degradation of the peripheral nerves and muscles beginning at the tip. Elephants usually have 26 teeth: the incisors, known as the tusks, 12 deciduous premolars, and 12 molars. Unlike most mammals, which grow baby teeth and then replace them with a single permanent set of adult teeth, elephants are polyphyodonts that have cycles of tooth rotation throughout their lives. The chewing teeth are replaced six times in a typical elephant's lifetime. Teeth are not replaced by new ones emerging from the jaws vertically as in most mammals. Instead, new teeth grow in at the back of the mouth and move forward to push out the old ones. The first chewing tooth on each side of the jaw falls out when the elephant is two to three years old. The second set of chewing teeth falls out at four to six years old. The third set falls out at 9–15 years of age, and set four lasts until 18–28 years of age. The fifth set of teeth falls out in the early 40s. The sixth (and usually final) set must last the elephant the rest of its life. Elephant teeth have loop-shaped dental ridges, which are thicker and more diamond-shaped in African elephants. The tusks of an elephant are modified second incisors in the upper jaw. They replace deciduous milk teeth at 6–12 months of age and grow continuously at about a year. A newly developed tusk has a smooth enamel cap that eventually wears off. The dentine is known as ivory and its cross-section consists of crisscrossing line patterns, known as "engine turning", which create diamond-shaped areas. As a piece of living tissue, a tusk is relatively soft; it is as hard as the mineral calcite. Much of the tusk can be seen outside; the rest is in a socket in the skull. At least one-third of the tusk contains the pulp and some have nerves stretching to the tip. Thus it would be difficult to remove it without harming the animal. 
When removed, ivory begins to dry up and crack if not kept cool and moist. Tusks serve multiple purposes. They are used for digging for water, salt, and roots; debarking or marking trees; and for moving trees and branches when clearing a path. When fighting, they are used to attack and defend, and to protect the trunk. Like humans, who are typically right- or left-handed, elephants are usually right- or left-tusked. The dominant tusk, called the master tusk, is generally more worn down, as it is shorter with a rounder tip. For the African elephants, tusks are present in both males and females, and are around the same length in both sexes, reaching up to , but those of males tend to be thicker. In earlier times, elephant tusks weighing over 200 pounds (more than 90 kg) were not uncommon, though it is rare today to see any over . In the Asian species, only the males have large tusks. Female Asians have very small tusks, or none at all. Tuskless males exist and are particularly common among Sri Lankan elephants. Asian males can have tusks as long as Africans', but they are usually slimmer and lighter; the largest recorded was long and weighed . Hunting for elephant ivory in Africa and Asia has led to natural selection for shorter tusks and tusklessness. An elephant's skin is generally very tough, at thick on the back and parts of the head. The skin around the mouth, anus, and inside of the ear is considerably thinner. Elephants typically have grey skin, but African elephants look brown or reddish after wallowing in coloured mud. Asian elephants have some patches of depigmentation, particularly on the forehead and ears and the areas around them. Calves have brownish or reddish hair, especially on the head and back. As elephants mature, their hair darkens and becomes sparser, but dense concentrations of hair and bristles remain on the end of the tail as well as the chin, genitals and the areas around the eyes and ear openings. Normally the skin of an Asian elephant is covered with more hair than its African counterpart. An elephant uses mud as a sunscreen, protecting its skin from ultraviolet light. Although tough, an elephant's skin is very sensitive. Without regular mud baths to protect it from burning, insect bites and moisture loss, an elephant's skin suffers serious damage. After bathing, the elephant will usually use its trunk to blow dust onto its body and this dries into a protective crust. Elephants have difficulty releasing heat through the skin because of their low surface-area-to-volume ratio, which is many times smaller than that of a human. They have even been observed lifting up their legs, presumably in an effort to expose their soles to the air. To support the animal's weight, an elephant's limbs are positioned more vertically under the body than in most other mammals. The long bones of the limbs have cancellous bone in place of medullary cavities. This strengthens the bones while still allowing haematopoiesis. Both the front and hind limbs can support an elephant's weight, although 60% is borne by the front. Since the limb bones are placed on top of each other and under the body, an elephant can stand still for long periods of time without using much energy. Elephants are incapable of rotating their front legs, as the ulna and radius are fixed in pronation; the "palm" of the manus faces backward. The pronator quadratus and the pronator teres are either reduced or absent. 
The circular feet of an elephant have soft tissues or "cushion pads" beneath the manus or pes, which distribute the weight of the animal. They appear to have a sesamoid, an extra "toe" similar in placement to a giant panda's extra "thumb", that also helps in weight distribution. As many as five toenails can be found on both the front and hind feet. Elephants can move both forwards and backwards, but cannot trot, jump, or gallop. They use only two gaits when moving on land: the walk and a faster gait similar to running. In walking, the legs act as pendulums, with the hips and shoulders rising and falling while the foot is planted on the ground. With no "aerial phase", the fast gait does not meet all the criteria of running, although the elephant uses its legs much like other running animals, with the hips and shoulders falling and then rising while the feet are on the ground. Fast-moving elephants appear to 'run' with their front legs, but 'walk' with their hind legs and can reach a top speed of . At this speed, most other quadrupeds are well into a gallop, even accounting for leg length. Spring-like kinetics could explain the difference between the motion of elephants and other animals. During locomotion, the cushion pads expand and contract, and reduce both the pain and noise that would come from a very heavy animal moving. Elephants are capable swimmers. They have been recorded swimming for up to six hours without touching the bottom, and have travelled as far as at a stretch and at speeds of up to . The brain of an elephant weighs compared to for a human brain. While the elephant brain is larger overall, it is proportionally smaller. At birth, an elephant's brain already weighs 30–40% of its adult weight. The cerebrum and cerebellum are well developed, and the temporal lobes are so large that they bulge out laterally. The throat of an elephant appears to contain a pouch where it can store water for later use. The heart of an elephant weighs . It has a double-pointed apex, an unusual trait among mammals. In addition, the ventricles separate near the top of the heart, a trait they share with sirenians. When standing, the elephant's heart beats approximately 30 times per minute. Unlike many other animals, the heart rate speeds up by 8 to 10 beats per minute when the elephant is lying down. The blood vessels in most of the body are wide and thick and can withstand high blood pressures. The lungs are attached to the diaphragm, and breathing relies mainly on the diaphragm rather than the expansion of the ribcage. Connective tissue exists in place of the pleural cavity. This may allow the animal to deal with the pressure differences when its body is underwater and its trunk is breaking the surface for air, although this explanation has been questioned. Another possible function for this adaptation is that it helps the animal suck up water through the trunk. Elephants inhale mostly through the trunk, although some air goes through the mouth. They have a hindgut fermentation system, and their large and small intestines together reach in length. The majority of an elephant's food intake goes undigested despite the process lasting up to a day. A male elephant's testes are located internally near the kidneys. The elephant's penis can reach a length of and a diameter of at the base. It is S-shaped when fully erect and has a Y-shaped orifice. The female has a well-developed clitoris at up to . The vulva is located between the hind legs instead of near the tail as in most mammals. 
Determining pregnancy status can be difficult due to the animal's large abdominal cavity. The female's mammary glands occupy the space between the front legs, which puts the suckling calf within reach of the female's trunk. Elephants have a unique organ, the temporal gland, located on both sides of the head. This organ is associated with sexual behaviour, and males secrete a fluid from it when in musth. Females have also been observed with secretions from the temporal glands. The African bush elephant can be found in habitats as diverse as dry savannahs, deserts, marshes, and lake shores, and in elevations from sea level to mountain areas above the snow line. Forest elephants mainly live in equatorial forests but will enter gallery forests and ecotones between forests and savannahs. Asian elephants prefer areas with a mix of grasses, low woody plants, and trees, primarily inhabiting dry thorn-scrub forests in southern India and Sri Lanka and evergreen forests in Malaya. Elephants are herbivorous and will eat leaves, twigs, fruit, bark, grass and roots. They are born with sterile intestines and require bacteria obtained from their mother's feces to digest vegetation. African elephants are mostly browsers while Asian elephants are mainly grazers. They can consume as much as of food and of water in a day. Elephants tend to stay near water sources. Major feeding bouts take place in the morning, afternoon and night. At midday, elephants rest under trees and may doze off while standing. Sleeping occurs at night while the animal is lying down. Elephants average 3–4 hours of sleep per day. Both males and family groups typically move a day, but distances as far as have been recorded in the Etosha region of Namibia. Elephants go on seasonal migrations in search of food, water, minerals, and mates. At Chobe National Park, Botswana, herds travel to visit the river when the local waterholes dry up. Because of their large size, elephants have a huge impact on their environments and are considered keystone species. Their habit of uprooting trees and undergrowth can transform savannah into grasslands; when they dig for water during drought, they create waterholes that can be used by other animals. They can enlarge waterholes when they bathe and wallow in them. At Mount Elgon, elephants excavate caves that are used by ungulates, hyraxes, bats, birds and insects. Elephants are important seed dispersers; African forest elephants ingest and defecate seeds, with either no effect or a positive effect on germination. The seeds are typically dispersed in large amounts over great distances. In Asian forests, large seeds require giant herbivores like elephants and rhinoceros for transport and dispersal. This ecological niche cannot be filled by the next largest herbivore, the tapir. Because most of the food elephants eat goes undigested, their dung can provide food for other animals, such as dung beetles and monkeys. Elephants can have a negative impact on ecosystems. At Murchison Falls National Park in Uganda, the overabundance of elephants has threatened several species of small birds that depend on woodlands. Their weight can compact the soil, which causes the rain to run off, leading to erosion. Elephants typically coexist peacefully with other herbivores, which will usually stay out of their way. Some aggressive interactions between elephants and rhinoceros have been recorded. At Aberdare National Park, Kenya, a rhino attacked an elephant calf and was killed by the other elephants in the group. 
At Hluhluwe–Umfolozi Game Reserve, South Africa, introduced young orphan elephants went on a killing spree that claimed the lives of 36 rhinos during the 1990s but ended with the introduction of older males. The size of adult elephants makes them nearly invulnerable to predators, though there are rare reports of adult elephants falling prey to tigers. Calves may be preyed on by lions, spotted hyenas, and wild dogs in Africa and tigers in Asia. The lions of Savuti, Botswana, have adapted to hunting juvenile elephants during the dry season, and a pride of 30 lions has been recorded killing juvenile individuals between the ages of four and eleven years. Elephants appear to distinguish between the growls of larger predators like tigers and smaller predators like leopards (which have not been recorded killing calves); they react to leopards less fearfully and more aggressively. Elephants tend to have high numbers of parasites, particularly nematodes, compared to other herbivores. This is due to lower predation pressures that would otherwise kill off many of the individuals with significant parasite loads. Female elephants spend their entire lives in tight-knit matrilineal family groups, some of which are made up of more than ten members, including three pairs of mothers with offspring, and are led by the matriarch, who is often the eldest female. She remains leader of the group until her death or until she no longer has the energy for the role; a study on zoo elephants showed that when the matriarch died, the levels of faecal corticosterone ('stress hormone') dramatically increased in the surviving elephants. When her tenure is over, the matriarch's eldest daughter takes her place; this occurs even if her sister is present. The older matriarchs tend to be more effective decision-makers. The social circle of the female elephant does not necessarily end with the small family unit. In the case of elephants in Amboseli National Park, Kenya, a female's life involves interaction with other families, clans, and subpopulations. Families may associate and bond with each other, forming what are known as bond groups. These are typically made of two family groups, which may be closely related due to previously being part of the same family group which split after becoming too large for the available resources. During the dry season, elephant families may cluster together and form another level of social organisation known as the clan. Groups within these clans do not form strong bonds, but they defend their dry-season ranges against other clans. There are typically nine groups in a clan. The Amboseli elephant population is further divided into the "central" and "peripheral" subpopulations. Some elephant populations in India and Sri Lanka have similar basic social organisations. There appear to be cohesive family units and loose aggregations. They have been observed to have "nursing units" and "juvenile-care units". In southern India, elephant populations may contain family groups, bond groups and possibly clans. Family groups tend to be small, consisting of one or two adult females and their offspring. A group containing more than two adult females plus offspring is known as a "joint family". Malay elephant populations have even smaller family units, and do not have any social organisation higher than a family or bond group. Groups of African forest elephants typically consist of one adult female with one to three offspring. These groups appear to interact with each other, especially at forest clearings. 
The social life of the adult male is very different. As he matures, a male spends more time at the edge of his group and associates with outside males or even other families. At Amboseli, young males spend over 80% of their time away from their families when they are 14–15. When males permanently leave, they either live alone or with other males. The former is typical of bulls in dense forests. Asian males are usually solitary, but occasionally form groups of two or more individuals; the largest consisted of seven bulls. Larger bull groups consisting of over 10 members occur only among African bush elephants, the largest of which numbered up to 144 individuals. These elephants can be quite sociable when not competing for dominance or mates, and will form long-term relationships. A dominance hierarchy exists among males, whether they range socially or solitarily. Dominance depends on the age, size and sexual condition, and when in groups, males follow the lead of the dominant bull. Young bulls may seek out the company and leadership of older, more experienced males, whose presence appears to control their aggression and prevent them from exhibiting "deviant" behaviour. Adult males and females come together for reproduction. Bulls associate with family groups if an oestrous cow is present. Adult males enter a state of increased testosterone known as musth. In a population in southern India, males first enter musth at the age of 15, but it is not very intense until they are older than 25. At Amboseli, bulls under 24 do not go into musth, while half of those aged 25–35 and all those over 35 do. Young bulls appear to enter musth during the dry season (January–May), while older bulls go through it during the wet season (June–December). The main characteristic of a bull's musth is a fluid secreted from the temporal gland that runs down the side of his face. He may urinate with his penis still in his sheath, which causes the urine to spray on his hind legs. Behaviours associated with musth include walking with the head held high and swinging, picking at the ground with the tusks, marking, rumbling and waving only one ear at a time. This can last from a day to four months. Males become extremely aggressive during musth. Size is the determining factor in agonistic encounters when the individuals have the same condition. In contests between musth and non-musth individuals, musth bulls win the majority of the time, even when the non-musth bull is larger. A male may stop showing signs of musth when he encounters a musth male of higher rank. Those of equal rank tend to avoid each other. Agonistic encounters typically consist of threat displays, chases, and minor sparring with the tusks. Serious fights are rare. Elephants are polygynous breeders, and copulations are most frequent during the peak of the wet season. A cow in oestrus releases chemical signals (pheromones) in her urine and vaginal secretions to signal her readiness to mate. A bull will follow a potential mate and assess her condition with the flehmen response, which requires the male to collect a chemical sample with his trunk and bring it to the vomeronasal organ. The oestrous cycle of a cow lasts 14–16 weeks with a 4–6-week follicular phase and an 8- to 10-week luteal phase. While most mammals have one surge of luteinizing hormone during the follicular phase, elephants have two. 
The first (or anovulatory) surge could signal to males that the female is in oestrus by changing her scent, but ovulation does not occur until the second (or ovulatory) surge. Fertility rates in cows decline around 45–50 years of age. Bulls engage in a behaviour known as mate-guarding, where they follow oestrous females and defend them from other males. Most mate-guarding is done by musth males, and females actively seek to be guarded by them, particularly older ones. Thus these bulls have more reproductive success. Musth appears to signal to females the condition of the male, as weak or injured males do not have normal musths. For a young female, the approach of an older bull can be intimidating, so her relatives stay nearby to provide support and reassurance. During copulation, the male lays his trunk over the female's back. The penis is very mobile, being able to move independently of the pelvis. Prior to mounting, it curves forward and upward. Copulation lasts about 45 seconds and does not involve pelvic thrusting or ejaculatory pause. Elephant sperm must swim close to to reach the egg. By comparison, human sperm has to swim around only . Homosexual behaviour is frequent in both sexes. As in heterosexual interactions, this involves mounting. Male elephants sometimes stimulate each other by playfighting and "championships" may form between old bulls and younger males. Female same-sex behaviours have been documented only in captivity, where they are known to masturbate one another with their trunks. Gestation in elephants typically lasts around two years with interbirth intervals usually lasting four to five years. Births tend to take place during the wet season. Calves are born tall and weigh around . Typically, only a single young is born, but twins sometimes occur. The relatively long pregnancy is maintained by five corpora lutea (as opposed to one in most mammals) and gives the foetus more time to develop, particularly the brain and trunk. As such, newborn elephants are precocial and quickly stand and walk to follow their mother and family herd. A new calf is usually the centre of attention for herd members. Adults and most of the other young will gather around the newborn, touching and caressing it with their trunks. For the first few days, the mother is intolerant of other herd members near her young. Alloparenting – where a calf is cared for by someone other than its mother – takes place in some family groups. Allomothers are typically two to twelve years old. When a predator is near, the family group gathers together with the calves in the centre. For the first few days, the newborn is unsteady on its feet, and needs the support of its mother. It relies on touch, smell, and hearing, as its eyesight is poor. It has little precise control over its trunk, which wiggles around and may cause it to trip. By its second week of life, the calf can walk more firmly and has more control over its trunk. After its first month, a calf can pick up, hold, and put objects in its mouth, but cannot suck water through the trunk and must drink directly through the mouth. It is still dependent on its mother and keeps close to her. For its first three months, a calf relies entirely on milk from its mother for nutrition, after which it begins to forage for vegetation and can use its trunk to collect water. At the same time, improvements in lip and leg coordination occur. Calves continue to suckle at the same rate as before until their sixth month, after which they become more independent when feeding. 
By nine months, mouth, trunk and foot coordination is perfected. After a year, a calf's abilities to groom, drink, and feed itself are fully developed. It still needs its mother for nutrition and protection from predators for at least another year. Suckling bouts tend to last 2–4 min/hr for a calf younger than a year and it continues to suckle until it reaches three years of age or older. Suckling after two years may serve to maintain growth rate, body condition and reproductive ability. Play behaviour in calves differs between the sexes; females run or chase each other while males play-fight. The former are sexually mature by the age of nine years while the latter become mature around 14–15 years. Adulthood starts at about 18 years of age in both sexes. Elephants have long lifespans, reaching 60–70 years of age. Lin Wang, a captive male Asian elephant, lived for 86 years. Touching is an important form of communication among elephants. Individuals greet each other by stroking or wrapping their trunks; the latter also occurs during mild competition. Older elephants use trunk-slaps, kicks, and shoves to discipline younger ones. Individuals of any age and sex will touch each other's mouths, temporal glands, and genitals, particularly during meetings or when excited. This allows individuals to pick up chemical cues. Touching is especially important for mother–calf communication. When moving, elephant mothers will touch their calves with their trunks or feet when side-by-side or with their tails if the calf is behind them. If a calf wants to rest, it will press against its mother's front legs and when it wants to suckle, it will touch her breast or leg. Visual displays mostly occur in agonistic situations. Elephants will try to appear more threatening by raising their heads and spreading their ears. They may add to the display by shaking their heads and snapping their ears, as well as throwing dust and vegetation. They are usually bluffing when performing these actions. Excited elephants may raise their trunks. Submissive ones will lower their heads and trunks, as well as flatten their ears against their necks, while those that accept a challenge will position their ears in a V shape. Elephants produce several sounds, usually through the larynx, though some may be modified by the trunk. Perhaps the most well known call is the trumpet which is made by blowing through the trunk. Trumpeting is made during excitement, distress or aggression. Fighting elephants may roar or squeal, and wounded ones may bellow. Rumbles are produced during mild arousal and some appear to be infrasonic. Infrasonic calls are important, particularly for long-distance communication, in both Asian and African elephants. For Asian elephants, these calls have a frequency of 14–24 Hz, with sound pressure levels of 85–90 dB and last 10–15 seconds. For African elephants, calls range from 15–35 Hz with sound pressure levels as high as 117 dB, allowing communication for many kilometres, with a possible maximum range of around . At Amboseli, several different infrasonic calls have been identified. A greeting rumble is emitted by members of a family group after having been separated for several hours. Contact calls are soft, unmodulated sounds made by individuals that have been separated from their group and may be responded to with a "contact answer" call that starts out loud, but becomes softer. A "let's go" soft rumble is emitted by the matriarch to signal to the other herd members that it is time to move to another spot. 
Bulls in musth emit a distinctive, low-frequency pulsated rumble nicknamed the "motorcycle". Musth rumbles may be answered by the "female chorus", a low-frequency, modulated chorus produced by several cows. A loud postcopulatory call may be made by an oestrous cow after mating. When a cow has mated, her family may produce calls of excitement known as the "mating pandemonium". Elephants are known to communicate with seismics, vibrations produced by impacts on the earth's surface or acoustical waves that travel through it. They appear to rely on their leg and shoulder bones to transmit the signals to the middle ear. When detecting seismic signals, the animals lean forward and put more weight on their larger front feet; this is known as the "freezing behaviour". Elephants possess several adaptations suited for seismic communication. The cushion pads of the feet contain cartilaginous nodes and have similarities to the acoustic fat found in marine mammals like toothed whales and sirenians. A unique sphincter-like muscle around the ear canal constricts the passageway, thereby dampening acoustic signals and allowing the animal to hear more seismic signals. Elephants appear to use seismics for a number of purposes. An individual running or mock charging can create seismic signals that can be heard at great distances. When detecting the seismics of an alarm call signalling danger from predators, elephants enter a defensive posture and family groups will pack together. Seismic waveforms produced by locomotion appear to travel distances of up to while those from vocalisations travel . Elephants exhibit mirror self-recognition, an indication of self-awareness and cognition that has also been demonstrated in some apes and dolphins. One study of a captive female Asian elephant suggested the animal was capable of learning and distinguishing between several visual and some acoustic discrimination pairs. This individual was even able to score a high accuracy rating when re-tested with the same visual pairs a year later. Elephants are among the species known to use tools. An Asian elephant has been observed modifying branches and using them as flyswatters. Tool modification by these animals is not as advanced as that of chimpanzees. Elephants are popularly thought of as having an excellent memory. This could have a factual basis; they possibly have cognitive maps to allow them to remember large-scale spaces over long periods of time. Individuals appear to be able to keep track of the current location of their family members. Scientists debate the extent to which elephants feel emotion. They appear to show interest in the bones of their own kind, regardless of whether they are related. As with chimps and dolphins, a dying or dead elephant may elicit attention and aid from others, including those from other groups. This has been interpreted as expressing "concern"; however, others would dispute such an interpretation as being anthropomorphic; the "Oxford Companion to Animal Behaviour" (1987) advised that "one is well advised to study the behaviour rather than attempting to get at any underlying emotion". African elephants were listed as vulnerable by the International Union for Conservation of Nature (IUCN) in 2008, with no independent assessment of the conservation status of the two forms. In 1979, Africa had an estimated minimum population of 1.3 million elephants, with a possible upper limit of 3.0 million. 
By 1989, the population was estimated to be 609,000; with 277,000 in Central Africa, 110,000 in eastern Africa, 204,000 in southern Africa, and 19,000 in western Africa. About 214,000 elephants were estimated to live in the rainforests, fewer than had previously been thought. From 1977 to 1989, elephant populations declined by 74% in East Africa. After 1987, losses in elephant numbers accelerated, and savannah populations from Cameroon to Somalia experienced a decline of 80%. African forest elephants had a total loss of 43%. Population trends in southern Africa were mixed, with anecdotal reports of losses in Zambia, Mozambique and Angola while populations grew in Botswana and Zimbabwe and were stable in South Africa. Conversely, studies in 2005 and 2007 found populations in eastern and southern Africa were increasing by an average annual rate of 4.0%. Due to the vast areas involved, assessing the total African elephant population remains difficult and involves an element of guesswork. The IUCN estimates a total of around 440,000 individuals for 2012. African elephants receive at least some legal protection in every country where they are found, but 70% of their range exists outside protected areas. Successful conservation efforts in certain areas have led to high population densities. As of 2008, local numbers were controlled by contraception or translocation. Large-scale cullings ceased in 1988, when Zimbabwe abandoned the practice. In 1989, the African elephant was listed under Appendix I by the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES), making trade illegal. Appendix II status (which allows restricted trade) was given to elephants in Botswana, Namibia, and Zimbabwe in 1997 and South Africa in 2000. In some countries, sport hunting of the animals is legal; Botswana, Cameroon, Gabon, Mozambique, Namibia, South Africa, Tanzania, Zambia, and Zimbabwe have CITES export quotas for elephant trophies. In June 2016, the First Lady of Kenya, Margaret Kenyatta, helped launch the East Africa Grass-Root Elephant Education Campaign Walk, organised by elephant conservationist Jim Nyamu. The event was conducted to raise awareness of the value of elephants and rhinos, to help mitigate human-elephant conflicts, and to promote anti-poaching activities. In 2008, the IUCN listed the Asian elephant as endangered due to a 50% population decline over the past 60–75 years while CITES lists the species under Appendix I. Asian elephants once ranged from Syria and Iraq (the subspecies "Elephas maximus asurus"), to China (up to the Yellow River) and Java. It is now extinct in these areas, and the current range of Asian elephants is highly fragmented. The total population of Asian elephants is estimated to be around 40,000–50,000, although this may be a loose estimate. It is likely that around half of the population is in India. Although Asian elephants are declining in numbers overall, particularly in Southeast Asia, the population in the Western Ghats appears to be increasing. The poaching of elephants for their ivory, meat and hides has been one of the major threats to their existence. Historically, numerous cultures made ornaments and other works of art from elephant ivory, and its use rivalled that of gold. The ivory trade contributed to the African elephant population decline in the late 20th century. 
This prompted international bans on ivory imports, starting with the United States in June 1989, and followed by bans in other North American countries, western European countries, and Japan. Around the same time, Kenya destroyed all its ivory stocks. CITES approved an international ban on ivory that went into effect in January 1990. Following the bans, unemployment rose in India and China, where the ivory industry was important economically. By contrast, Japan and Hong Kong, which were also part of the industry, were able to adapt and were not badly affected. Zimbabwe, Botswana, Namibia, Zambia, and Malawi wanted to continue the ivory trade and were allowed to, since their local elephant populations were healthy, but only if their supplies were from elephants that had been culled or died of natural causes. The ban allowed the elephant to recover in parts of Africa. In January 2012, 650 elephants in Bouba Njida National Park, Cameroon, were killed by Chadian raiders. This has been called "one of the worst concentrated killings" since the ivory ban. Asian elephants are potentially less vulnerable to the ivory trade, as females usually lack tusks. Still, members of the species have been killed for their ivory in some areas, such as Periyar National Park in India. China was the biggest market for poached ivory but announced it would phase out the legal domestic manufacture and sale of ivory products in May 2015, and in September 2015, China and the United States said "they would enact a nearly complete ban on the import and export of ivory", citing the threat of extinction. Other threats to elephants include habitat destruction and fragmentation. The Asian elephant lives in areas with some of the highest human populations. Because they need larger amounts of land than other sympatric terrestrial mammals, they are the first to be affected by human encroachment. In extreme cases, elephants may be confined to small islands of forest among human-dominated landscapes. Elephants cannot coexist with humans in agricultural areas due to their size and food requirements. Elephants commonly trample and consume crops, which contributes to conflicts with humans, and both elephants and humans have died by the hundreds as a result. Mitigating these conflicts is important for conservation. One proposed solution is the provision of "urban corridors" which allow the animals access to key areas. Elephants have been working animals since at least the Indus Valley Civilization and continue to be used in modern times. There were 13,000–16,500 working elephants employed in Asia in 2000. These animals are typically captured from the wild when they are 10–20 years old, when they can be trained quickly and easily and will have a longer working life. They were traditionally captured with traps and lassos, but since 1950, tranquillisers have been used. Individuals of the Asian species have often been trained as working animals. Asian elephants perform tasks such as hauling loads into remote areas, moving logs to rivers and roads, transporting tourists around national parks, pulling wagons, and leading religious processions. In northern Thailand, the animals are used to digest coffee beans for Black Ivory coffee. They are valued over mechanised tools because they can work in relatively deep water, require relatively little maintenance, need only vegetation and water as fuel, and can be trained to memorise specific tasks. Elephants can be trained to respond to over 30 commands. 
Musth bulls can be difficult and dangerous to work with and are chained and semi-starved until the condition passes. In India, many working elephants are alleged to have been subject to abuse. They and other captive elephants are thus protected under The Prevention of Cruelty to Animals Act of 1960. In both Myanmar and Thailand, deforestation and other economic factors have resulted in sizable populations of unemployed elephants resulting in health problems for the elephants themselves as well as economic and safety problems for the people amongst whom they live. The practice of working elephants has also been attempted in Africa. The taming of African elephants in the Belgian Congo began by decree of Leopold II of Belgium during the 19th century and continues to the present with the Api Elephant Domestication Centre. Historically, elephants were considered formidable instruments of war. They were equipped with armour to protect their sides, and their tusks were given sharp points of iron or brass if they were large enough. War elephants were trained to grasp an enemy soldier and toss him to the person riding on them or to pin the soldier to the ground and impale him. One of the earliest references to war elephants is in the Indian epic "Mahabharata" (written in the 4th century BC, but said to describe events between the 11th and 8th centuries BC). They were not used as much as horse-drawn chariots by either the Pandavas or Kauravas. During the Magadha Kingdom (which began in the 6th century BC), elephants began to achieve greater cultural importance than horses, and later Indian kingdoms used war elephants extensively; 3,000 of them were used in the Nandas (5th and 4th centuries BC) army while 9,000 may have been used in the Mauryan army (between the 4th and 2nd centuries BC). The "Arthashastra" (written around 300 BC) advised the Mauryan government to reserve some forests for wild elephants for use in the army, and to execute anyone who killed them. From South Asia, the use of elephants in warfare spread west to Persia and east to Southeast Asia. The Persians used them during the Achaemenid Empire (between the 6th and 4th centuries BC) while Southeast Asian states first used war elephants possibly as early as the 5th century BC and continued to the 20th century. Alexander the Great trained his foot soldiers to injure the animals and cause them to panic during wars with both the Persians and Indians. Ptolemy, who was one of Alexander's generals, used corps of Asian elephants during his reign as the ruler of Egypt (which began in 323 BC). His son and successor Ptolemy II (who began his rule in 285 BC) obtained his supply of elephants further south in Nubia. From then on, war elephants were employed in the Mediterranean and North Africa throughout the classical period. The Greek king Pyrrhus used elephants in his attempted invasion of Rome in 280 BC. While they frightened the Roman horses, they were not decisive and Pyrrhus ultimately lost the battle. The Carthaginian general Hannibal took elephants across the Alps during his war with the Romans and reached the Po Valley in 217 BC with all of them alive, but they later succumbed to disease. Elephants were historically kept for display in the menageries of Ancient Egypt, China, Greece, and Rome. The Romans in particular pitted them against humans and other animals in gladiator events. In the modern era, elephants have traditionally been a major part of zoos and circuses around the world. In circuses, they are trained to perform tricks. 
The most famous circus elephant was probably Jumbo (1861 – 15 September 1885), who was a major attraction in the Barnum & Bailey Circus. These animals do not reproduce well in captivity, due to the difficulty of handling musth bulls and limited understanding of female oestrous cycles. Asian elephants were always more common than their African counterparts in modern zoos and circuses. After CITES listed the Asian elephant under Appendix I in 1975, the number of African elephants in zoos increased in the 1980s, although the import of Asians continued. Subsequently, the US received many of its captive African elephants from Zimbabwe, which had an overabundance of the animals. As of 2000, around 1,200 Asian and 700 African elephants were kept in zoos and circuses. The largest captive population is in North America, which has an estimated 370 Asian and 350 African elephants. About 380 Asians and 190 Africans are known to exist in Europe, and Japan has around 70 Asians and 67 Africans. Keeping elephants in zoos has met with some controversy. Proponents of zoos argue that they offer researchers easy access to the animals and provide money and expertise for preserving their natural habitats, as well as safekeeping for the species. Critics claim that the animals in zoos are under physical and mental stress. Elephants have been recorded displaying stereotypical behaviours in the form of swaying back and forth, trunk swaying, or route tracing. This has been observed in 54% of individuals in UK zoos. Elephants in European zoos appear to have shorter lifespans than their wild counterparts, at only 17 years, although other studies suggest that zoo elephants live as long as those in the wild. The use of elephants in circuses has also been controversial; the Humane Society of the United States has accused circuses of mistreating and distressing their animals. In testimony to a US federal court in 2009, Barnum & Bailey Circus CEO Kenneth Feld acknowledged that circus elephants are struck behind their ears, under their chins and on their legs with metal-tipped prods, called bull hooks or ankus. Feld stated that these practices are necessary to protect circus workers and acknowledged that an elephant trainer was reprimanded for using an electric shock device, known as a hot shot or electric prod, on an elephant. Despite this, he denied that any of these practices harm elephants. Some trainers have tried to train elephants without the use of physical punishment. Ralph Helfer is known to have relied on gentleness and reward when training his animals, including elephants and lions. Ringling Bros. and Barnum & Bailey Circus retired its touring elephants in May 2016. Elephants can exhibit bouts of aggressive behaviour and engage in destructive actions against humans. In Africa, groups of adolescent elephants damaged homes in villages after cullings in the 1970s and 1980s. Because of the timing, these attacks have been interpreted as vindictive. In parts of India, male elephants regularly enter villages at night, destroying homes and killing people. Elephants killed around 300 people between 2000 and 2004 in Jharkhand while in Assam, 239 people were reportedly killed between 2001 and 2006. Local people have reported their belief that some elephants were drunk during their attacks, although officials have disputed this explanation. Purportedly drunk elephants attacked an Indian village a second time in December 2002, killing six people, which led to the killing of about 200 elephants by locals. 
In many cultures, elephants represent strength, power, wisdom, longevity, stamina, leadership, sociability, nurturance and loyalty. Several cultural references emphasise the elephant's size and exotic uniqueness. For instance, a "white elephant" is a byword for something expensive, useless, and bizarre. The expression "elephant in the room" refers to an obvious truth that is ignored or otherwise unaddressed. The story of the blind men and an elephant teaches that reality may be viewed from different perspectives. Elephants have been represented in art since Paleolithic times. Africa, in particular, contains many rock paintings and engravings of the animals, especially in the Sahara and southern Africa. In Asia, the animals are depicted as motifs in Jain, Hindu and Buddhist shrines and temples. Elephants were often difficult to portray by people with no first-hand experience with them. The ancient Romans, who kept the animals in captivity, depicted anatomically accurate elephants on mosaics in Tunisia and Sicily. At the beginning of the Middle Ages, when Europeans had little to no access to the animals, elephants were portrayed more like fantasy creatures. They were often depicted with horse- or bovine-like bodies with trumpet-like trunks and tusks like a boar; some were even given hooves. Elephants were commonly featured in motifs by the stonemasons of the Gothic churches. As more elephants began to be sent to European kings as gifts during the 15th century, depictions of them became more accurate, including one made by Leonardo da Vinci. Despite this, some Europeans continued to portray them in a more stylised fashion. Max Ernst's 1921 surrealist painting, "The Elephant Celebes," depicts an elephant as a silo with a trunk-like hose protruding from it. Elephants have been the subject of religious beliefs. The Mbuti people of central Africa believe that the souls of their dead ancestors reside in elephants. Similar ideas existed among other African tribes, who believed that their chiefs would be reincarnated as elephants. During the 10th century AD, the people of Igbo-Ukwu, near the Niger Delta, buried their leaders with elephant tusks. The animals' religious importance is only totemic in Africa but is much more significant in Asia. In Sumatra, elephants have been associated with lightning. Likewise in Hinduism, they are linked with thunderstorms as Airavata, the father of all elephants, represents both lightning and rainbows. One of the most important Hindu deities, the elephant-headed Ganesha, is ranked equal with the supreme gods Shiva, Vishnu, and Brahma. Ganesha is associated with writers and merchants and it is believed that he can give people success as well as grant them their desires. In Buddhism, Buddha is said to have been a white elephant reincarnated as a human. According to Buddhist mythology, Gautama Buddha's mother, Maya, dreamt that a white elephant entered her womb. The astrologers of the king's court interpreted this dream as the impending birth of a great person who would either become a 'Chakravartin' (conqueror of the world) or a great sage. In Islamic tradition, the year 570, when Muhammad was born, is known as the Year of the Elephant. Elephants were thought to be religious themselves by the Romans, who believed that they worshipped the sun and stars. Elephants are ubiquitous in Western popular culture as emblems of the exotic, especially since – as with the giraffe, hippopotamus and rhinoceros – there are no similar animals familiar to Western audiences. 
The use of the elephant as a symbol of the US Republican Party began with an 1874 cartoon by Thomas Nast. As characters, elephants are most common in children's stories, in which they are generally cast as models of exemplary behaviour. They are typically surrogates for humans with ideal human values. Many stories tell of isolated young elephants returning to a close-knit community, such as "The Elephant's Child" from Rudyard Kipling's "Just So Stories", Disney's "Dumbo," and Kathryn and Byron Jackson's "The Saggy Baggy Elephant". Other elephant heroes given human qualities include Jean de Brunhoff's Babar, David McKee's Elmer, and Dr. Seuss's Horton. Elvis Presley Elvis Aaron Presley (January 8, 1935 – August 16, 1977) was an American singer and actor. Regarded as one of the most significant cultural icons of the 20th century, he is often referred to as the "King of Rock and Roll" or simply "the King". Presley was born in Tupelo, Mississippi, and relocated to Memphis, Tennessee, with his family when he was 13 years old. His music career began there in 1954, recording at Sun Records with producer Sam Phillips, who wanted to bring the sound of African American music to a wider audience. Accompanied by guitarist Scotty Moore and bassist Bill Black, Presley was a pioneer of rockabilly, an uptempo, backbeat-driven fusion of country music and rhythm and blues. In 1955, drummer D. J. Fontana joined to complete the lineup of Presley's classic quartet and RCA Victor acquired his contract in a deal arranged by Colonel Tom Parker, who would manage the singer for more than two decades. Presley's first RCA single, "Heartbreak Hotel", was released in January 1956 and became a number one hit in the United States. With a series of successful network television appearances and chart-topping records, he became the leading figure of the newly popular sound of rock and roll. His energized interpretations of songs and sexually provocative performance style, combined with a singularly potent mix of influences across color lines during a transformative era in race relations, made him enormously popular—and controversial. In November 1956, Presley made his film debut in "Love Me Tender". Drafted into military service in 1958, Presley relaunched his recording career two years later with some of his most commercially successful work. He held few concerts however, and guided by Parker, proceeded to devote much of the 1960s to making Hollywood movies and soundtrack albums, most of them critically derided. In 1968, following a seven-year break from live performances, he returned to the stage in the acclaimed television comeback special "Elvis", which led to an extended Las Vegas concert residency and a string of highly profitable tours. In 1973, Presley gave the first concert by a solo artist to be broadcast around the world, "Aloha from Hawaii". Years of prescription drug abuse severely compromised his health, and he died suddenly in 1977 at his Graceland estate at the age of 42. Presley is one of the most celebrated and influential musicians of the 20th century. Commercially successful in many genres, including pop, country, blues, and gospel, he is the best-selling solo artist in the history of recorded music. He won three competitive Grammys, received the Grammy Lifetime Achievement Award at age 36, and has been inducted into multiple music halls of fame. 
Elvis Presley was born on January 8, 1935, in Tupelo, Mississippi, to Gladys Love Presley ("née" Smith) in the two-room shotgun house built by his father, Vernon Elvis Presley, in preparation for the birth. Jesse Garon Presley, his identical twin brother, was delivered 35 minutes before him, stillborn. Presley became close to both parents and formed an especially close bond with his mother. The family attended an Assembly of God church, where he found his initial musical inspiration. Presley's ancestry was primarily a Western European mix: On his mother's side he was Scots-Irish, with some French Norman. Gladys and the rest of the family apparently believed that her great-great-grandmother, Morning Dove White, was Cherokee; the biography by Elaine Dundy supports the idea, but at least one genealogy researcher has contested it on multiple grounds. Vernon's forebears were of German or Scottish origin. Gladys was regarded by relatives and friends as the dominant member of the small family. Vernon moved from one odd job to the next, evincing little ambition. The family often relied on help from neighbors and government food assistance. In 1938, they lost their home after Vernon was found guilty of altering a check written by his landowner and sometime employer. He was jailed for eight months, while Gladys and Elvis moved in with relatives. In September 1941, Presley entered first grade at East Tupelo Consolidated, where his teachers regarded him as "average". He was encouraged to enter a singing contest after impressing his schoolteacher with a rendition of Red Foley's country song "Old Shep" during morning prayers. The contest, held at the Mississippi–Alabama Fair and Dairy Show on October 3, 1945, was his first public performance. The ten-year-old Presley was dressed as a cowboy; he stood on a chair to reach the microphone and sang "Old Shep". He recalled placing fifth. A few months later, Presley received his first guitar for his birthday; he had hoped for something else—by different accounts, either a bicycle or a rifle. Over the following year, he received basic guitar lessons from two of his uncles and the new pastor at the family's church. Presley recalled, "I took the guitar, and I watched people, and I learned to play a little bit. But I would never sing in public. I was very shy about it." In September 1946, Presley entered a new school, Milam, for sixth grade; he was regarded as a loner. The following year, he began bringing his guitar to school on a daily basis. He played and sang during lunchtime, and was often teased as a "trashy" kid who played hillbilly music. By then, the family was living in a largely African-American neighborhood. Presley was a devotee of Mississippi Slim's show on the Tupelo radio station WELO. He was described as "crazy about music" by Slim's younger brother, who was one of Presley's classmates and often took him into the station. Slim supplemented Presley's guitar tuition by demonstrating chord techniques. When his protégé was twelve years old, Slim scheduled him for two on-air performances. Presley was overcome by stage fright the first time, but succeeded in performing the following week. In November 1948, the family moved to Memphis, Tennessee. After residing for nearly a year in rooming houses, they were granted a two-bedroom apartment in the public housing complex known as the Lauderdale Courts. Enrolled at L. C. Humes High School, Presley received only a C in music in eighth grade. 
When his music teacher told him that he had no aptitude for singing, he brought in his guitar the next day and sang a recent hit, "Keep Them Cold Icy Fingers Off Me", in an effort to prove otherwise. A classmate later recalled that the teacher "agreed that Elvis was right when he said that she didn't appreciate his kind of singing". He was usually too shy to perform openly, and was occasionally bullied by classmates who viewed him as a "mama's boy". In 1950, he began practicing guitar regularly under the tutelage of Lee Denson, a neighbor two-and-a-half years his senior. They and three other boys—including two future rockabilly pioneers, brothers Dorsey and Johnny Burnette—formed a loose musical collective that played frequently around the Courts. That September, he began working as an usher at Loew's State Theater. Other jobs followed: Precision Tool, Loew's again, and MARL Metal Products. During his junior year, Presley began to stand out more among his classmates, largely because of his appearance: he grew out his sideburns and styled his hair with rose oil and Vaseline. In his free time, he would head down to Beale Street, the heart of Memphis's thriving blues scene, and gaze longingly at the wild, flashy clothes in the windows of Lansky Brothers. By his senior year, he was wearing those clothes. Overcoming his reticence about performing outside the Lauderdale Courts, he competed in Humes's Annual "Minstrel" show in April 1953. Singing and playing guitar, he opened with "Till I Waltz Again with You", a recent hit for Teresa Brewer. Presley recalled that the performance did much for his reputation: "I wasn't popular in school ... I failed music—only thing I ever failed. And then they entered me in this talent show ... when I came onstage I heard people kind of rumbling and whispering and so forth, 'cause nobody knew I even sang. It was amazing how popular I became after that." Presley, who received no formal music training and could not read music, studied and played by ear. He also frequented record stores that provided jukeboxes and listening booths to customers. He knew all of Hank Snow's songs, and he loved records by other country singers such as Roy Acuff, Ernest Tubb, Ted Daffan, Jimmie Rodgers, Jimmie Davis, and Bob Wills. The Southern gospel singer Jake Hess, one of his favorite performers, was a significant influence on his ballad-singing style. He was a regular audience member at the monthly All-Night Singings downtown, where many of the white gospel groups that performed reflected the influence of African American spiritual music. He adored the music of black gospel singer Sister Rosetta Tharpe. Like some of his peers, he may have attended blues venues—of necessity, in the segregated South, only on nights designated for exclusively white audiences. He certainly listened to the regional radio stations, such as WDIA-AM, that played "race records": spirituals, blues, and the modern, backbeat-heavy sound of rhythm and blues. Many of his future recordings were inspired by local African American musicians such as Arthur Crudup and Rufus Thomas. B.B. King recalled that he had known Presley before he was popular, when they both used to frequent Beale Street. By the time he graduated from high school in June 1953, Presley had already singled out music as his future. In August 1953, Presley checked into the offices of Sun Records. He aimed to pay for a few minutes of studio time to record a two-sided acetate disc: "My Happiness" and "That's When Your Heartaches Begin". 
He later claimed that he intended the record as a gift for his mother, or that he was merely interested in what he "sounded like", although there was a much cheaper, amateur record-making service at a nearby general store. Biographer Peter Guralnick argued that he chose Sun in the hope of being discovered. Asked by receptionist Marion Keisker what kind of singer he was, Presley responded, "I sing all kinds." When she pressed him on who he sounded like, he repeatedly answered, "I don't sound like nobody." After he recorded, Sun boss Sam Phillips asked Keisker to note down the young man's name, which she did along with her own commentary: "Good ballad singer. Hold." In January 1954, Presley cut a second acetate at Sun Records—"I'll Never Stand In Your Way" and "It Wouldn't Be the Same Without You"—but again nothing came of it. Not long after, he failed an audition for a local vocal quartet, the Songfellows. He explained to his father, "They told me I couldn't sing." Songfellow Jim Hamill later claimed that he was turned down because he did not demonstrate an ear for harmony at the time. In April, Presley began working for the Crown Electric company as a truck driver. His friend Ronnie Smith, after playing a few local gigs with him, suggested he contact Eddie Bond, leader of Smith's professional band, which had an opening for a vocalist. Bond rejected him after a tryout, advising Presley to stick to truck driving "because you're never going to make it as a singer". Phillips, meanwhile, was always on the lookout for someone who could bring to a broader audience the sound of the black musicians on whom Sun focused. As Keisker reported, "Over and over I remember Sam saying, 'If I could find a white man who had the Negro sound and the Negro feel, I could make a billion dollars.'" In June, he acquired a demo recording by Jimmy Sweeney of a ballad, "Without You", that he thought might suit the teenage singer. Presley came by the studio, but was unable to do it justice. Despite this, Phillips asked Presley to sing as many numbers as he knew. He was sufficiently affected by what he heard to invite two local musicians, guitarist Winfield "Scotty" Moore and upright bass player Bill Black, to work something up with Presley for a recording session. The session, held the evening of July 5, proved entirely unfruitful until late in the night. As they were about to abort and go home, Presley took his guitar and launched into a 1946 blues number, Arthur Crudup's "That's All Right". Moore recalled, "All of a sudden, Elvis just started singing this song, jumping around and acting the fool, and then Bill picked up his bass, and he started acting the fool, too, and I started playing with them. Sam, I think, had the door to the control booth open ... he stuck his head out and said, 'What are you doing?' And we said, 'We don't know.' 'Well, back up,' he said, 'try to find a place to start, and do it again.'" Phillips quickly began taping; this was the sound he had been looking for. Three days later, popular Memphis DJ Dewey Phillips played "That's All Right" on his "Red, Hot, and Blue" show. Listeners began phoning in, eager to find out who the singer really was. The interest was such that Phillips played the record repeatedly during the remaining two hours of his show. Interviewing Presley on-air, Phillips asked him what high school he attended in order to clarify his color for the many callers who had assumed that he was black. 
During the next few days, the trio recorded a bluegrass number, Bill Monroe's "Blue Moon of Kentucky", again in a distinctive style and employing a jury-rigged echo effect that Sam Phillips dubbed "slapback". A single was pressed with "That's All Right" on the A side and "Blue Moon of Kentucky" on the reverse. The trio played publicly for the first time on July 17 at the Bon Air club—Presley still sporting his child-size guitar. At the end of the month, they appeared at the Overton Park Shell, with Slim Whitman headlining. A combination of his strong response to rhythm and nervousness at playing before a large crowd led Presley to shake his legs as he performed: his wide-cut pants emphasized his movements, causing young women in the audience to start screaming. Moore recalled, "During the instrumental parts, he would back off from the mike and be playing and shaking, and the crowd would just go wild". Black, a natural showman, whooped and rode his bass, hitting double licks that Presley would later remember as "really a wild sound, like a jungle drum or something". Soon after, Moore and Black left their old band, the Starlite Wranglers, to play with Presley regularly, and DJ/promoter Bob Neal became the trio's manager. From August through October, they played frequently at the Eagle's Nest club and returned to Sun Studio for more recording sessions, and Presley quickly grew more confident on stage. According to Moore, "His movement was a natural thing, but he was also very conscious of what got a reaction. He'd do something one time and then he would expand on it real quick." Presley made what would be his only appearance on Nashville's "Grand Ole Opry" stage on October 2; after a polite audience response, "Opry" manager Jim Denny told Phillips that his singer was "not bad" but did not suit the program. In November 1954, Presley performed on "Louisiana Hayride"—the "Opry"'s chief, and more adventurous, rival. The Shreveport-based show was broadcast to 198 radio stations in 28 states. Presley had another attack of nerves during the first set, which drew a muted reaction. A more composed and energetic second set inspired an enthusiastic response. House drummer D. J. Fontana brought a new element, complementing Presley's movements with accented beats that he had mastered playing in strip clubs. Soon after the show, the "Hayride" engaged Presley for a year's worth of Saturday-night appearances. Trading in his old guitar for $8 (and seeing it promptly dispatched to the garbage), he purchased a Martin instrument for $175, and his trio began playing in new locales, including Houston, Texas, and Texarkana, Arkansas. Founded in Texas in 1937, the Southern Maid Donut Flour Company sponsored "Louisiana Hayride" for several years, and many young entertainers sang its praises, including Presley, whose lifelong love of doughnuts may have begun with his only product commercial: a radio jingle for Southern Maid Donuts on November 6, 1954. Presley made his first television appearance on the KSLA-TV broadcast of "Louisiana Hayride". Soon after, he failed an audition for "Arthur Godfrey's Talent Scouts" on the CBS television network. By early 1955, Presley's regular "Hayride" appearances, constant touring, and well-received record releases had made him a regional star, from Tennessee to West Texas. In January, Neal signed a formal management contract with Presley and brought the singer to the attention of Colonel Tom Parker, whom he considered the best promoter in the music business.
Parker—who claimed to be from West Virginia (he was actually Dutch)—had acquired an honorary colonel's commission from country singer turned Louisiana governor Jimmie Davis. Having successfully managed top country star Eddy Arnold, Parker was working with the new number-one country singer, Hank Snow. Parker booked Presley on Snow's February tour. When the tour reached Odessa, Texas, a 19-year-old Roy Orbison saw Presley for the first time: "His energy was incredible, his instinct was just amazing. ... I just didn't know what to make of it. There was just no reference point in the culture to compare it." By August, Sun had released ten sides credited to "Elvis Presley, Scotty and Bill"; on the latest recordings, the trio were joined by a drummer. Some of the songs, like "That's All Right", were in what one Memphis journalist described as the "R&B idiom of negro field jazz"; others, like "Blue Moon of Kentucky", were "more in the country field", "but there was a curious blending of the two different musics in both". This blend of styles made it difficult for Presley's music to find radio airplay. According to Neal, many country-music disc jockeys would not play it because he sounded too much like a black artist and none of the rhythm-and-blues stations would touch him because "he sounded too much like a hillbilly." The blend came to be known as rockabilly. At the time, Presley was variously billed as "The King of Western Bop", "The Hillbilly Cat", and "The Memphis Flash". Presley renewed Neal's management contract in August 1955, simultaneously appointing Parker as his special adviser. The group maintained an extensive touring schedule throughout the second half of the year. Neal recalled, "It was almost frightening, the reaction that came to Elvis from the teenaged boys. So many of them, through some sort of jealousy, would practically hate him. There were occasions in some towns in Texas when we'd have to be sure to have a police guard because somebody'd always try to take a crack at him. They'd get a gang and try to waylay him or something." The trio became a quartet when "Hayride" drummer Fontana joined as a full member. In mid-October, they played a few shows in support of Bill Haley, whose "Rock Around the Clock" track had been a number-one hit the previous year. Haley observed that Presley had a natural feel for rhythm, and advised him to sing fewer ballads. At the Country Disc Jockey Convention in early November, Presley was voted the year's most promising male artist. Several record companies had by now shown interest in signing him. After three major labels made offers of up to $25,000, Parker and Phillips struck a deal with RCA Victor on November 21 to acquire Presley's Sun contract for an unprecedented $40,000. Presley, at 20, was still a minor, so his father signed the contract. Parker arranged with the owners of Hill & Range Publishing, Jean and Julian Aberbach, to create two entities, Elvis Presley Music and Gladys Music, to handle all the new material recorded by Presley. Songwriters were obliged to forgo one third of their customary royalties in exchange for having him perform their compositions. By December, RCA had begun to heavily promote its new singer, and before month's end had reissued many of his Sun recordings. On January 10, 1956, Presley made his first recordings for RCA in Nashville. 
Extending the singer's by now customary backup of Moore, Black, and Fontana, RCA enlisted pianist Floyd Cramer, guitarist Chet Atkins, and three background singers, including Gordon Stoker of the popular Jordanaires quartet, to fill out the sound. The session produced the moody, unusual "Heartbreak Hotel", released as a single on January 27. Parker finally brought Presley to national television, booking him on CBS's "Stage Show" for six appearances over two months. The program, produced in New York, was hosted on alternate weeks by big band leaders and brothers Tommy and Jimmy Dorsey. After his first appearance, on January 28, Presley stayed in town to record at RCA's New York studio. The sessions yielded eight songs, including a cover of Carl Perkins’s rockabilly anthem "Blue Suede Shoes". In February, Presley's "I Forgot to Remember to Forget", a Sun recording initially released the previous August, reached the top of the "Billboard" country chart. Neal's contract was terminated, and, on March 2, Parker became Presley's manager. RCA released Presley's self-titled debut album on March 23. Joined by five previously unreleased Sun recordings, its seven recently recorded tracks were of a broad variety. There were two country songs and a bouncy pop tune. The others would centrally define the evolving sound of rock and roll: "Blue Suede Shoes"—"an improvement over Perkins' in almost every way", according to critic Robert Hilburn—and three R&B numbers that had been part of Presley's stage repertoire for some time, covers of Little Richard, Ray Charles, and The Drifters. As described by Hilburn, these "were the most revealing of all. Unlike many white artists ... who watered down the gritty edges of the original R&B versions of songs in the '50s, Presley reshaped them. He not only injected the tunes with his own vocal character but also made guitar, not piano, the lead instrument in all three cases." It became the first rock and roll album to top the "Billboard" chart, a position it held for 10 weeks. While Presley was not an innovative guitarist like Moore or contemporary African American rockers Bo Diddley and Chuck Berry, cultural historian Gilbert B. Rodman argued that the album's cover image, "of Elvis having the time of his life on stage "with a guitar in his hands" played a crucial role in positioning the guitar ... as the instrument that best captured the style and spirit of this new music." On April 3, Presley made the first of two appearances on NBC's "Milton Berle Show". His performance, on the deck of the USS "Hancock" in San Diego, California, prompted cheers and screams from an audience of sailors and their dates. A few days later, a flight taking Presley and his band to Nashville for a recording session left all three badly shaken when an engine died and the plane almost went down over Arkansas. Twelve weeks after its original release, "Heartbreak Hotel" became Presley's first number one pop hit. In late April, Presley began a two-week residency at the New Frontier Hotel and Casino on the Las Vegas Strip. The shows were poorly received by the conservative, middle-aged hotel guests—"like a jug of corn liquor at a champagne party," wrote a critic for "Newsweek." Amid his Vegas tenure, Presley, who had serious acting ambitions, signed a seven-year contract with Paramount Pictures. He began a tour of the Midwest in mid-May, taking in 15 cities in as many days. 
He had attended several shows by Freddie Bell and the Bellboys in Vegas and was struck by their cover of "Hound Dog", a hit in 1953 for blues singer Big Mama Thornton by songwriters Jerry Leiber and Mike Stoller. It became the new closing number of his act. After a show in La Crosse, Wisconsin, an urgent message on the letterhead of the local Catholic diocese's newspaper was sent to FBI director J. Edgar Hoover. It warned that "Presley is a definite danger to the security of the United States. ... [His] actions and motions were such as to rouse the sexual passions of teenaged youth. ... After the show, more than 1,000 teenagers tried to gang into Presley's room at the auditorium. ... Indications of the harm Presley did just in La Crosse were the two high school girls ... whose abdomen and thigh had Presley's autograph." The second "Milton Berle Show" appearance came on June 5 at NBC's Hollywood studio, amid another hectic tour. Berle persuaded Presley to leave his guitar backstage, advising, "Let 'em see you, son." During the performance, Presley abruptly halted an uptempo rendition of "Hound Dog" with a wave of his arm and launched into a slow, grinding version accentuated with energetic, exaggerated body movements. Presley's gyrations created a storm of controversy. Television critics were outraged: Jack Gould of "The New York Times" wrote, "Mr. Presley has no discernible singing ability. ... His phrasing, if it can be called that, consists of the stereotyped variations that go with a beginner's aria in a bathtub. ... His one specialty is an accented movement of the body ... primarily identified with the repertoire of the blond bombshells of the burlesque runway." Ben Gross of the New York "Daily News" opined that popular music "has reached its lowest depths in the 'grunt and groin' antics of one Elvis Presley. ... Elvis, who rotates his pelvis ... gave an exhibition that was suggestive and vulgar, tinged with the kind of animalism that should be confined to dives and bordellos". Ed Sullivan, whose own variety show was the nation's most popular, declared him "unfit for family viewing". To Presley's displeasure, he soon found himself being referred to as "Elvis the Pelvis", which he called "one of the most childish expressions I ever heard, comin' from an adult." The Berle shows drew such high ratings that Presley was booked for a July 1 appearance on NBC's "Steve Allen Show" in New York. Allen, no fan of rock and roll, introduced a "new Elvis" in a white bow tie and black tails. Presley sang "Hound Dog" for less than a minute to a basset hound wearing a top hat and bow tie. As described by television historian Jake Austen, "Allen thought Presley was talentless and absurd ... [he] set things up so that Presley would show his contrition". Allen, for his part, later wrote that he found Presley's "strange, gangly, country-boy charisma, his hard-to-define cuteness, and his charming eccentricity intriguing" and simply worked the singer into the customary "comedy fabric" of his program. Just before the final rehearsal for the show, Presley told a reporter, "I'm holding down on this show. I don't want to do anything to make people dislike me. I think TV is important so I'm going to go along, but I won't be able to give the kind of show I do in a personal appearance." Presley would refer back to the Allen show as the most ridiculous performance of his career. Later that night, he appeared on "Hy Gardner Calling", a popular local TV show. 
Pressed on whether he had learned anything from the criticism to which he was being subjected, Presley responded, "No, I haven't, I don't feel like I'm doing anything wrong. ... I don't see how any type of music would have any bad influence on people when it's only music. ... I mean, how would rock 'n' roll music make anyone rebel against their parents?" The next day, Presley recorded "Hound Dog", along with "Any Way You Want Me" and "Don't Be Cruel". The Jordanaires sang harmony, as they had on "The Steve Allen Show"; they would work with Presley through the 1960s. A few days later, Presley made an outdoor concert appearance in Memphis, at which he announced, "You know, those people in New York are not gonna change me none. I'm gonna show you what the real Elvis is like tonight." In August, a judge in Jacksonville, Florida, ordered Presley to tame his act. Throughout the following performance, he largely kept still, except for wiggling his little finger suggestively in mockery of the order. The single pairing "Don't Be Cruel" with "Hound Dog" ruled the top of the charts for 11 weeks—a mark that would not be surpassed for 36 years. Recording sessions for Presley's second album took place in Hollywood during the first week of September. Leiber and Stoller, the writers of "Hound Dog," contributed "Love Me." Allen's show with Presley had, for the first time, beaten CBS's "Ed Sullivan Show" in the ratings. Sullivan, despite his June pronouncement, booked the singer for three appearances for an unprecedented $50,000. The first, on September 9, 1956, was seen by approximately 60 million viewers—a record 82.6 percent of the television audience. Actor Charles Laughton hosted the show, filling in while Sullivan was recovering from a car accident. Presley appeared in two segments that night from CBS Television City in Los Angeles. According to Elvis legend, Presley was shot only from the waist up. Watching clips of the Allen and Berle shows with his producer, Sullivan had opined that Presley "got some kind of device hanging down below the crotch of his pants–so when he moves his legs back and forth you can see the outline of his cock. ... I think it's a Coke bottle. ... We just can't have this on a Sunday night. This is a family show!" Sullivan publicly told "TV Guide", "As for his gyrations, the whole thing can be controlled with camera shots." In fact, Presley was shown head-to-toe in the first and second shows. Though the camerawork was relatively discreet during his debut, with leg-concealing closeups when he danced, the studio audience reacted in customary style: screaming. Presley's performance of his forthcoming single, the ballad "Love Me Tender", prompted a record-shattering million advance orders. More than any other single event, it was this first appearance on "The Ed Sullivan Show" that made Presley a national celebrity of barely precedented proportions. Accompanying Presley's rise to fame, a cultural shift was taking place that he both helped inspire and came to symbolize. Igniting the "biggest pop craze since Glenn Miller and Frank Sinatra ... Presley brought rock'n'roll into the mainstream of popular culture", writes historian Marty Jezer. "As Presley set the artistic pace, other artists followed. ... Presley, more than anyone else, gave the young a belief in themselves as a distinct and somehow unified generation—the first in America ever to feel the power of an integrated youth culture." The audience response at Presley's live shows became increasingly fevered. 
Moore recalled, "He'd start out, 'You ain't nothin' but a Hound Dog,' and they'd just go to pieces. They'd always react the same way. There'd be a riot every time." At the two concerts he performed in September at the Mississippi-Alabama Fair and Dairy Show, 50 National Guardsmen were added to the police security to assure that the crowd would not cause a ruckus. "Elvis", Presley's second album, was released in October and quickly rose to number one on the billboard. The album includes "Old Shep", which he sang at the talent show in 1945, and which now marked the first time he played piano on an RCA session. According to Guralnick, one can hear "in the halting chords and the somewhat stumbling rhythm both the unmistakable emotion and the equally unmistakable valuing of emotion over technique." Assessing the musical and cultural impact of Presley's recordings from "That's All Right" through "Elvis", rock critic Dave Marsh wrote that "these records, more than any others, contain the seeds of what rock & roll was, has been and most likely what it may foreseeably become." Presley returned to the Sullivan show at its main studio in New York, hosted this time by its namesake, on October 28. After the performance, crowds in Nashville and St. Louis burned him in effigy. His first motion picture, "Love Me Tender", was released on November 21. Though he was not top billed, the film's original title—"The Reno Brothers"—was changed to capitalize on his latest number one record: "Love Me Tender" had hit the top of the charts earlier that month. To further take advantage of Presley's popularity, four musical numbers were added to what was originally a straight acting role. The film was panned by the critics but did very well at the box office. Presley would receive top billing on every subsequent film he made. On December 4, Presley dropped into Sun Records where Carl Perkins and Jerry Lee Lewis were recording and had an impromptu jam session, along with Johnny Cash. Though Phillips no longer had the right to release any Presley material, he made sure that the session was captured on tape. The results, none officially released for 25 years, became known as the "Million Dollar Quartet" recordings. The year ended with a front-page story in "The Wall Street Journal" reporting that Presley merchandise had brought in $22 million on top of his record sales, and "Billboard"s declaration that he had placed more songs in the top 100 than any other artist since records were first charted. In his first full year at RCA, one of the music industry's largest companies, Presley had accounted for over 50 percent of the label's singles sales. Presley made his third and final "Ed Sullivan Show" appearance on January 6, 1957—on this occasion indeed shot only down to the waist. Some commentators have claimed that Parker orchestrated an appearance of censorship to generate publicity. In any event, as critic Greil Marcus describes, Presley "did not tie himself down. Leaving behind the bland clothes he had worn on the first two shows, he stepped out in the outlandish costume of a pasha, if not a harem girl. From the make-up over his eyes, the hair falling in his face, the overwhelmingly sexual cast of his mouth, he was playing Rudolph Valentino in "The Sheik", with all stops out." To close, displaying his range and defying Sullivan's wishes, Presley sang a gentle black spiritual, "Peace in the Valley". At the end of the show, Sullivan declared Presley "a real decent, fine boy". 
Two days later, the Memphis draft board announced that Presley would be classified 1-A and would probably be drafted sometime that year. Each of the three Presley singles released in the first half of 1957 went to number one: "Too Much", "All Shook Up", and "(Let Me Be Your) Teddy Bear". Already an international star, he was attracting fans even where his music was not officially released. Under the headline "Presley Records a Craze in Soviet", "The New York Times" reported that pressings of his music on discarded X-ray plates were commanding high prices in Leningrad. Between film shoots and recording sessions, Presley also found time to purchase an 18-room mansion eight miles (13 km) south of downtown Memphis for himself and his parents: Graceland. "Loving You"—the soundtrack to his second film, released in July—was Presley's third straight number one album. The title track was written by Leiber and Stoller, who were then retained to write four of the six songs recorded at the sessions for "Jailhouse Rock", Presley's next film. The songwriting team effectively produced the "Jailhouse" sessions and developed a close working relationship with Presley, who came to regard them as his "good-luck charm". "He was fast," said Leiber. "Any demo you gave him he knew by heart in ten minutes." The title track was yet another number one hit, as was the "Jailhouse Rock" EP. Presley undertook three brief tours during the year, continuing to generate a crazed audience response. A Detroit newspaper suggested that "the trouble with going to see Elvis Presley is that you're liable to get killed." In Philadelphia, Pennsylvania, Villanova students pelted him with eggs; in Vancouver, Canada, the crowd rioted after the end of the show, destroying the stage. Frank Sinatra, who had famously inspired the swooning of teenage girls in the 1940s, condemned the new musical phenomenon. In a magazine article, he decried rock and roll as "brutal, ugly, degenerate, vicious. ... It fosters almost totally negative and destructive reactions in young people. It smells phoney and false. It is sung, played and written, for the most part, by cretinous goons. ... This rancid-smelling aphrodisiac I deplore." Asked for a response, Presley said, "I admire the man. He has a right to say what he wants to say. He is a great success and a fine actor, but I think he shouldn't have said it. ... This is a trend, just the same as he faced when he started years ago." Leiber and Stoller were again in the studio for the recording of "Elvis' Christmas Album". Toward the end of the session, they wrote a song on the spot at Presley's request: "Santa Claus Is Back in Town", an innuendo-laden blues. The holiday release stretched Presley's string of number one albums to four and would become the best-selling Christmas album ever in the United States, with eventual sales of over 20 million worldwide. After the session, Moore and Black—drawing only modest weekly salaries, sharing in none of Presley's massive financial success—resigned. Though they were brought back on a per diem basis a few weeks later, it was clear that they had not been part of Presley's inner circle for some time. On December 20, Presley received his draft notice. He was granted a deferment to finish the forthcoming "King Creole", in which $350,000 had already been invested by Paramount and producer Hal Wallis. A couple of weeks into the new year, "Don't", another Leiber and Stoller tune, became Presley's tenth number one seller.
It had been only 21 months since "Heartbreak Hotel" had brought him to the top for the first time. Recording sessions for the "King Creole" soundtrack were held in Hollywood in mid-January 1958. Leiber and Stoller provided three songs and were again on hand, but it would be the last time they and Presley worked closely together. As Stoller recalled, Presley's manager and entourage sought to wall him off: "He was removed. ... They kept him separate." A brief soundtrack session on February 11 marked another ending—it was the final occasion on which Black was to perform with Presley. He died in 1965. On March 24, 1958, Presley was drafted into the U.S. Army as a private at Fort Chaffee, near Fort Smith, Arkansas. His arrival was a major media event. Hundreds of people descended on Presley as he stepped from the bus; photographers then accompanied him into the fort. Presley announced that he was looking forward to his military stint, saying that he did not want to be treated any differently from anyone else: "The Army can do anything it wants with me." Presley commenced basic training at Fort Hood, Texas. During a two-week leave in early June, he recorded five songs in Nashville. In early August, his mother was diagnosed with hepatitis, and her condition rapidly worsened. Presley, granted emergency leave to visit her, arrived in Memphis on August 12. Two days later, she died of heart failure, aged 46. Presley was devastated; their relationship had remained extremely close—even into his adulthood, they would use baby talk with each other and Presley would address her with pet names. After training, Presley joined the 3rd Armored Division in Friedberg, Germany, on October 1. While on maneuvers, Presley was introduced to amphetamines by a sergeant. He became "practically evangelical about their benefits", not only for energy but for "strength" and weight loss as well, and many of his friends in the outfit joined him in indulging. The Army also introduced Presley to karate, which he studied seriously, training with Jürgen Seydel; it became a lifelong interest that he later incorporated into his live performances. Fellow soldiers have attested to Presley's wish to be seen as an able, ordinary soldier, despite his fame, and to his generosity. He donated his Army pay to charity, purchased TV sets for the base, and bought an extra set of fatigues for everyone in his outfit. While in Friedberg, Presley met 14-year-old Priscilla Beaulieu. They would eventually marry after a seven-and-a-half-year courtship. In her autobiography, Priscilla said that Presley was concerned that his 24-month spell as a G.I. would ruin his career. In Special Services, he would have been able to give musical performances and remain in touch with the public, but Parker had convinced him that to gain popular respect, he should serve his country as a regular soldier. Media reports echoed Presley's concerns about his career, but RCA producer Steve Sholes and Freddy Bienstock of Hill and Range had carefully prepared for his two-year hiatus. Armed with a substantial amount of unreleased material, they kept up a regular stream of successful releases. Between his induction and discharge, Presley had ten top 40 hits, including "Wear My Ring Around Your Neck", the best-selling "Hard Headed Woman", and "One Night" in 1958, and "(Now and Then There's) A Fool Such as I" and the number one "A Big Hunk o' Love" in 1959.
RCA also generated four albums compiling old material during this period, most successfully "Elvis' Golden Records" (1958), which hit number three on the LP chart. Presley returned to the United States on March 2, 1960, and was honorably discharged three days later with the rank of sergeant. The train that carried him from New Jersey to Tennessee was mobbed all the way, and Presley was called upon to appear at scheduled stops to please his fans. On the night of March 20, he entered RCA's Nashville studio to cut tracks for a new album along with a single, "Stuck on You", which was rushed into release and swiftly became a number one hit. Another Nashville session two weeks later yielded a pair of his best-selling singles, the ballads "It's Now or Never" and "Are You Lonesome Tonight?", along with the rest of "Elvis Is Back!" The album features several songs described by Greil Marcus as full of Chicago blues "menace, driven by Presley's own super-miked acoustic guitar, brilliant playing by Scotty Moore, and demonic sax work from Boots Randolph. Elvis's singing wasn't sexy, it was pornographic." As a whole, the record "conjured up the vision of a performer who could be all things", in the words of music historian John Robertson: "a flirtatious teenage idol with a heart of gold; a tempestuous, dangerous lover; a gutbucket blues singer; a sophisticated nightclub entertainer; [a] raucous rocker". Released only days after recording was complete, it reached number two on the album chart. Presley returned to television on May 12 as a guest on "The Frank Sinatra Timex Special"—ironic for both stars, given Sinatra's not-so-distant excoriation of rock and roll. Also known as "Welcome Home Elvis", the show had been taped in late March, the only time all year Presley performed in front of an audience. Parker secured an unheard-of $125,000 fee for eight minutes of singing. The broadcast drew an enormous viewership. "G.I. Blues", the soundtrack to Presley's first film since his return, was a number one album in October. His first LP of sacred material, "His Hand in Mine", followed two months later. It reached number 13 on the U.S. pop chart and number 3 in the UK, remarkable figures for a gospel album. In February 1961, Presley performed two shows for a benefit event in Memphis, on behalf of 24 local charities. During a luncheon preceding the event, RCA presented him with a plaque certifying worldwide sales of over 75 million records. A 12-hour Nashville session in mid-March yielded nearly all of Presley's next studio album, "Something for Everybody". As described by John Robertson, it exemplifies the Nashville sound, the restrained, cosmopolitan style that would define country music in the 1960s. Presaging much of what was to come from Presley himself over the next half-decade, the album is largely "a pleasant, unthreatening pastiche of the music that had once been Elvis's birthright". It would be his sixth number one LP. Another benefit concert, raising money for a Pearl Harbor memorial, was staged on March 25 in Hawaii. It was to be Presley's last public performance for seven years. Parker had by now pushed Presley into a heavy filmmaking schedule, focused on formulaic, modestly budgeted musical comedies. Presley at first insisted on pursuing more serious roles, but when two films in a more dramatic vein—"Flaming Star" (1960) and "Wild in the Country" (1961)—were less commercially successful, he reverted to the formula. Among the 27 movies he made during the 1960s, there were a few further exceptions.
His films were almost universally panned; critic Andrew Caine dismissed them as a "pantheon of bad taste". Nonetheless, they were virtually all profitable. Hal Wallis, who produced nine of them, declared, "A Presley picture is the only sure thing in Hollywood." Of Presley's films in the 1960s, 15 were accompanied by soundtrack albums and another 5 by soundtrack EPs. The movies' rapid production and release schedules—he frequently starred in three a year—affected his music. According to Jerry Leiber, the soundtrack formula was already evident before Presley left for the Army: "three ballads, one medium-tempo [number], one up-tempo, and one break blues boogie". As the decade wore on, the quality of the soundtrack songs grew "progressively worse". Julie Parrish, who appeared in "Paradise, Hawaiian Style" (1966), says that he disliked many of the songs chosen for his films. The Jordanaires' Gordon Stoker describes how Presley would retreat from the studio microphone: "The material was so bad that he felt like he couldn't sing it." Most of the movie albums featured a song or two from respected writers such as the team of Doc Pomus and Mort Shuman. But by and large, according to biographer Jerry Hopkins, the numbers seemed to be "written on order by men who never really understood Elvis or rock and roll". Regardless of the songs' quality, it has been argued that Presley generally sang them well, with commitment. Critic Dave Marsh heard the opposite: "Presley isn't trying, probably the wisest course in the face of material like 'No Room to Rumba in a Sports Car' and 'Rock-a-Hula Baby'." In the first half of the decade, three of Presley's soundtrack albums were ranked number one on the pop charts, and a few of his most popular songs came from his films, such as "Can't Help Falling in Love" (1961) and "Return to Sender" (1962). ("Viva Las Vegas", the title track to the 1964 film, was a minor hit as a B-side, and became truly popular only later.) But, as with artistic merit, the commercial returns steadily diminished. During a five-year span—1964 through 1968—Presley had only one top-ten hit: "Crying in the Chapel" (1965), a gospel number recorded back in 1960. As for non-movie albums, between the June 1962 release of "Pot Luck" and the November 1968 release of the soundtrack to the television special that signaled his comeback, only one LP of new material by Presley was issued: the gospel album "How Great Thou Art" (1967). It won him his first Grammy Award, for Best Sacred Performance. As Marsh described, Presley was "arguably the greatest white gospel singer of his time [and] really the last rock & roll artist to make gospel as vital a component of his musical personality as his secular songs". Shortly before Christmas 1966, more than seven years since they first met, Presley proposed to Priscilla Beaulieu. They were married on May 1, 1967, in a brief ceremony in their suite at the Aladdin Hotel in Las Vegas. The flow of formulaic movies and assembly-line soundtracks rolled on. It was not until October 1967, when the "Clambake" soundtrack LP registered record low sales for a new Presley album, that RCA executives recognized a problem. "By then, of course, the damage had been done", as historians Connie Kirchberg and Marc Hendrickx put it. "Elvis was viewed as a joke by serious music lovers and a has-been to all but his most loyal fans." Presley's only child, Lisa Marie, was born on February 1, 1968, during a period when he had grown deeply unhappy with his career.
Of the eight Presley singles released between January 1967 and May 1968, only two charted in the top 40, and none higher than number 28. His forthcoming soundtrack album, "Speedway", would rank at number 82 on the "Billboard" chart. Parker had already shifted his plans to television, where Presley had not appeared since the Sinatra Timex show in 1960. He maneuvered a deal with NBC that committed the network to both finance a theatrical feature and broadcast a Christmas special. Recorded in late June in Burbank, California, the special, called simply "Elvis", aired on December 3, 1968. Later known as the '68 Comeback Special, the show featured lavishly staged studio productions as well as songs performed with a band in front of a small audience—Presley's first live performances since 1961. The live segments saw Presley dressed in tight black leather, singing and playing guitar in an uninhibited style reminiscent of his early rock and roll days. Director and co-producer Steve Binder had worked hard to produce a show that was far from the hour of Christmas songs Parker had originally planned. The show, NBC's highest rated that season, captured 42 percent of the total viewing audience. Jon Landau of "Eye" magazine remarked, "There is something magical about watching a man who has lost himself find his way back home. He sang with the kind of power people no longer expect of rock 'n' roll singers. He moved his body with a lack of pretension and effort that must have made Jim Morrison green with envy." Dave Marsh calls the performance one of "emotional grandeur and historical resonance". By January 1969, the single "If I Can Dream", written for the special, reached number 12. The soundtrack album rose into the top ten. According to friend Jerry Schilling, the special reminded Presley of what "he had not been able to do for years, being able to choose the people; being able to choose what songs and not being told what had to be on the soundtrack. ... He was out of prison, man." Binder said of Presley's reaction, "I played Elvis the 60-minute show, and he told me in the screening room, 'Steve, it's the greatest thing I've ever done in my life. I give you my word I will never sing a song I don't believe in.'" Buoyed by the experience of the Comeback Special, Presley engaged in a prolific series of recording sessions at American Sound Studio, which led to the acclaimed "From Elvis in Memphis". Released in June 1969, it was his first secular, non-soundtrack album from a dedicated period in the studio in eight years. As described by Dave Marsh, it is "a masterpiece in which Presley immediately catches up with pop music trends that had seemed to pass him by during the movie years. He sings country songs, soul songs and rockers with real conviction, a stunning achievement." The album featured the hit single "In the Ghetto", issued in April, which reached number three on the pop chart—Presley's first non-gospel top ten hit since "Bossa Nova Baby" in 1963. Further hit singles were culled from the American Sound sessions: "Suspicious Minds", "Don't Cry Daddy", and "Kentucky Rain". Presley was keen to resume regular live performing. Following the success of the Comeback Special, offers came in from around the world. The London Palladium offered Parker $28,000 for a one-week engagement. He responded, "That's fine for me, now how much can you get for Elvis?" In May, the brand new International Hotel in Las Vegas, boasting the largest showroom in the city, announced that it had booked Presley.
He was scheduled to perform 57 shows over four weeks beginning July 31. Moore, Fontana, and the Jordanaires declined to participate, afraid of losing the lucrative session work they had in Nashville. Presley assembled new, top-notch accompaniment, led by guitarist James Burton and including two gospel groups, The Imperials and Sweet Inspirations. Costume designer Bill Belew, responsible for the intense leather styling of the Comeback Special, created a new stage look for Presley, inspired by the singer's passion for karate. Nonetheless, he was nervous: his only previous Las Vegas engagement, in 1956, had been dismal. Parker, who intended to make Presley's return the show business event of the year, oversaw a major promotional push. For his part, hotel owner Kirk Kerkorian arranged to send his own plane to New York to fly in rock journalists for the debut performance. Presley took to the stage without introduction. The audience of 2,200, including many celebrities, gave him a standing ovation before he sang a note and another after his performance. A third followed his encore, "Can't Help Falling in Love" (a song that would be his closing number for much of the 1970s). At a press conference after the show, when a journalist referred to him as "The King", Presley gestured toward Fats Domino, who was taking in the scene. "No," Presley said, "that's the real king of rock and roll." The next day, Parker's negotiations with the hotel resulted in a five-year contract for Presley to play each February and August, at an annual salary of $1 million. "Newsweek" commented, "There are several unbelievable things about Elvis, but the most incredible is his staying power in a world where meteoric careers fade like shooting stars." "Rolling Stone" called Presley "supernatural, his own resurrection." In November, Presley's final non-concert movie, "Change of Habit", opened. The double album "From Memphis to Vegas/From Vegas to Memphis" came out the same month; the first LP consisted of live performances from the International, the second of more cuts from the American Sound sessions. "Suspicious Minds" reached the top of the charts—Presley's first U.S. pop number one in over seven years, and his last. Cassandra Peterson, later television's Elvira, met Presley during this period in Las Vegas, where she was working as a showgirl. She recalled of their encounter, "He was so anti-drug when I met him. I mentioned to him that I smoked marijuana, and he was just appalled. He said, 'Don't ever do that again.'" Presley was not only deeply opposed to recreational drugs, he also rarely drank. Several of his family members had been alcoholics, a fate he intended to avoid. Presley returned to the International early in 1970 for the first of the year's two month-long engagements, performing two shows a night. Recordings from these shows were issued on the album "On Stage". In late February, Presley performed six attendance-record-breaking shows at the Houston Astrodome. In April, the single "The Wonder of You" was issued—a number one hit in the UK, it topped the U.S. adult contemporary chart as well. MGM filmed rehearsal and concert footage at the International during August for the documentary "Elvis: That's the Way It Is". Presley was performing in a jumpsuit, which would become a trademark of his live act. During this engagement, he was threatened with murder unless $50,000 was paid. Presley had been the target of many threats since the 1950s, often without his knowledge.
The FBI took the threat seriously, and security was stepped up for the next two shows. Presley went onstage with a Derringer in his right boot and a .45 pistol in his waistband, but the concerts went ahead without incident. The album, "That's the Way It Is", produced to accompany the documentary and featuring both studio and live recordings, marked a stylistic shift. As music historian John Robertson noted, "The authority of Presley's singing helped disguise the fact that the album stepped decisively away from the American-roots inspiration of the Memphis sessions towards a more middle-of-the-road sound. With country put on the back burner, and soul and R&B left in Memphis, what was left was very classy, very clean white pop—perfect for the Las Vegas crowd, but a definite retrograde step for Elvis." After the end of his International engagement on September 7, Presley embarked on a week-long concert tour, largely of the South, his first since 1958. Another week-long tour, of the West Coast, followed in November. On December 21, 1970, Presley engineered a meeting with President Richard Nixon at the White House, where he expressed his patriotism and explained how he believed he could reach out to the hippies to help combat the drug culture he and the president abhorred. He asked Nixon for a Bureau of Narcotics and Dangerous Drugs badge, to add to similar items he had begun collecting and to signify official sanction of his patriotic efforts. Nixon, who apparently found the encounter awkward, expressed a belief that Presley could send a positive message to young people and that it was therefore important that he "retain his credibility". Presley told Nixon that The Beatles, whose songs he regularly performed in concert during the era, exemplified what he saw as a trend of anti-Americanism. (Presley and his friends had had a four-hour get-together with The Beatles five years earlier.) On hearing reports of the meeting, Paul McCartney later said that he "felt a bit betrayed. ... The great joke was that we were taking [illegal] drugs, and look what happened to him", a reference to Presley's early death, linked to prescription drug abuse. The U.S. Junior Chamber of Commerce named Presley one of its annual Ten Most Outstanding Young Men of the Nation on January 16, 1971. Not long after, the City of Memphis named the stretch of Highway 51 South on which Graceland is located "Elvis Presley Boulevard". The same year, Presley became the first rock and roll singer to be awarded the Lifetime Achievement Award (then known as the Bing Crosby Award) by the National Academy of Recording Arts and Sciences, the Grammy Award organization. Three new, non-movie Presley studio albums were released in 1971, as many as had come out over the previous eight years. Best received by critics was "Elvis Country", a concept record that focused on genre standards. The biggest seller was "Elvis Sings the Wonderful World of Christmas", "the truest statement of all", according to Greil Marcus. "In the midst of ten painfully genteel Christmas songs, every one sung with appalling sincerity and humility, one could find Elvis tom-catting his way through six blazing minutes of 'Merry Christmas Baby,' a raunchy old Charles Brown blues. ... If [Presley's] sin was his lifelessness, it was his sinfulness that brought him to life". MGM again filmed Presley in April 1972, this time for "Elvis on Tour", which went on to win the Golden Globe Award for Best Documentary Film that year.
His gospel album "He Touched Me", released that month, would earn him his second competitive Grammy Award, for Best Inspirational Performance. A 14-date tour commenced with an unprecedented four consecutive sold-out shows at New York's Madison Square Garden. The evening concert on July 10 was recorded and issued in LP form a week later. "Elvis: As Recorded at Madison Square Garden" became one of Presley's biggest-selling albums. After the tour, the single "Burning Love" was released—Presley's last top ten hit on the U.S. pop chart. "The most exciting single Elvis has made since 'All Shook Up'", wrote rock critic Robert Christgau. "Who else could make 'It's coming closer, the flames are now licking my body' sound like an assignation with James Brown's backup band?" Presley and his wife, meanwhile, had become increasingly distant, barely cohabiting. In 1971, an affair he had with Joyce Bova resulted—unbeknownst to him—in her pregnancy and an abortion. He often raised the possibility of her moving into Graceland, saying that he was likely to leave Priscilla. The Presleys separated on February 23, 1972, after Priscilla disclosed her relationship with Mike Stone, a karate instructor Presley had recommended to her. Priscilla related that when she told him, Presley "grabbed ... and forcefully made love to" her, declaring, "This is how a real man makes love to his woman." She later stated in an interview that she regretted her choice of words in describing the incident, and said it had been an overstatement. Five months later, Presley's new girlfriend, Linda Thompson, a songwriter and one-time Memphis beauty queen, moved in with him. Presley and his wife filed for divorce on August 18. According to Joe Moscheo of the Imperials, the failure of Presley's marriage "was a blow from which he never recovered." At a rare press conference that June, a reporter had asked Presley whether he was satisfied with his image. Presley replied, "Well, the image is one thing and the human being another ... it's very hard to live up to an image." In January 1973, Presley performed two benefit concerts for the Kui Lee Cancer Fund in connection with a groundbreaking TV special, "Aloha from Hawaii", which would be the first concert by a solo artist to be aired globally. The first show served as a practice run and backup should technical problems affect the live broadcast two days later. On January 14, "Aloha from Hawaii" aired live via satellite to prime-time audiences in Japan, South Korea, Thailand, the Philippines, Australia, and New Zealand, as well as to U.S. servicemen based across Southeast Asia. In Japan, where it capped a nationwide Elvis Presley Week, it smashed viewing records. The next night, it was simulcast to 28 European countries, and in April an extended version finally aired in the U.S., where it won a 57 percent share of the TV audience. Over time, Parker's claim that it was seen by one billion or more people would be broadly accepted, but that figure appeared to have been sheer invention. Presley's stage costume became the most recognized example of the elaborate concert garb with which his latter-day persona became closely associated. As described by Bobbie Ann Mason, "At the end of the show, when he spreads out his American Eagle cape, with the full stretched wings of the eagle studded on the back, he becomes a god figure." The accompanying double album, released in February, went to number one and eventually sold over 5 million copies in the United States. It proved to be Presley's last U.S. number one pop album during his lifetime.
At a midnight show the same month, four men rushed onto the stage in an apparent attack. Security men came to Presley's defense, and the singer's karate instinct took over as he ejected one invader from the stage himself. Following the show, he became obsessed with the idea that the men had been sent by Mike Stone to kill him. Though they were shown to have been only overexuberant fans, he raged, "There's too much pain in me ... Stone [must] die." His outbursts continued with such intensity that a physician was unable to calm him, despite administering large doses of medication. After another two full days of raging, Red West, his friend and bodyguard, felt compelled to get a price for a contract killing and was relieved when Presley decided, "Aw hell, let's just leave it for now. Maybe it's a bit heavy." Presley's divorce was finalized on October 9, 1973. Thereafter, his life went into serious decline. Twice during the year, he overdosed on barbiturates, spending three days in a coma in his hotel suite after the first incident. Towards the end of 1973, he was hospitalized, semi-comatose from the effects of pethidine addiction. According to his primary care physician, Dr. George C. Nichopoulos, Presley "felt that by getting [drugs] from a doctor, he wasn't the common everyday junkie getting something off the street". Since his comeback, he had staged more live shows with each passing year, and 1973 saw 168 concerts, his busiest schedule ever. Despite his failing health, in 1974, he undertook another intensive touring schedule. Presley's condition declined precipitously in September. Keyboardist Tony Brown remembered the singer's arrival at a University of Maryland concert: "He fell out of the limousine, to his knees. People jumped to help, and he pushed them away like, 'Don't help me.' He walked on stage and held onto the mike for the first thirty minutes like it was a post. Everybody's looking at each other like, Is the tour gonna happen?" Guitarist John Wilkinson recalled, "He was all gut. He was slurring. He was so fucked up. ... It was obvious he was drugged. It was obvious there was something terribly wrong with his body. It was so bad the words to the songs were barely intelligible. ... I remember crying. He could barely get through the introductions." Wilkinson recounted that a few nights later in Detroit, "I watched him in his dressing room, just draped over a chair, unable to move. So often I thought, 'Boss, why don't you just cancel this tour and take a year off ...?' I mentioned something once in a guarded moment. He patted me on the back and said, 'It'll be all right. Don't you worry about it.'" Presley continued to play to sellout crowds. As cultural critic Marjorie Garber describes, he was now widely seen as a garish pop crooner: "in effect he had become Liberace. Even his fans were now middle-aged matrons and blue-haired grandmothers." On July 13, 1976, Vernon Presley—who had become deeply involved in his son's financial affairs—fired "Memphis Mafia" bodyguards Red West (Presley's friend since the 1950s), Sonny West, and David Hebler, citing the need to "cut back on expenses". Presley was in Palm Springs at the time, and some suggested that the singer was too cowardly to face the three himself. Another associate of Presley's, John O'Grady, argued that the bodyguards were dropped because their rough treatment of fans had prompted too many lawsuits. 
However, Presley's stepbrother, David Stanley, claimed that the bodyguards were fired because they were becoming more outspoken about Presley's drug dependency. RCA, which had enjoyed a steady stream of product from Presley for over a decade, grew anxious as his interest in spending time in the studio waned. After a December 1973 session that produced 18 songs, enough for almost two albums, he did not enter the studio in 1974. Parker sold RCA on another concert record, "Elvis Recorded Live on Stage in Memphis". Recorded on March 20, it included a version of "How Great Thou Art" that would win Presley his third and final competitive Grammy Award. (All three of his competitive Grammy wins—out of 14 total nominations—were for gospel recordings.) Presley returned to the studio in Hollywood in March 1975, but Parker's attempts to arrange another session toward the end of the year were unsuccessful. In 1976, RCA sent a mobile studio to Graceland that made possible two full-scale recording sessions at Presley's home. Even in that comfortable context, the recording process became a struggle for him. For all the concerns of his label and manager, in studio sessions between July 1973 and October 1976, Presley recorded virtually the entire contents of six albums. Though he was no longer a major presence on the pop charts, five of those albums entered the top five of the country chart, and three went to number one: "Promised Land" (1975), "From Elvis Presley Boulevard, Memphis, Tennessee" (1976), and "Moody Blue" (1977). The story was similar with his singles—there were no major pop hits, but Presley was a significant force not just in the country market but on adult contemporary radio as well. Eight studio singles from this period released during his lifetime were top ten hits on one or both charts, four in 1974 alone. "My Boy" was a number one adult contemporary hit in 1975, and "Moody Blue" topped the country chart and reached the second spot on the adult contemporary chart in 1976. Perhaps his most critically acclaimed recording of the era came that year, with what Greil Marcus described as his "apocalyptic attack" on the soul classic "Hurt". "If he felt the way he sounded", Dave Marsh wrote of Presley's performance, "the wonder isn't that he had only a year left to live but that he managed to survive that long." Presley and Linda Thompson split in November 1976, and he took up with a new girlfriend, Ginger Alden. He proposed to Alden and gave her an engagement ring two months later, though several of his friends later claimed that he had no serious intention of marrying again. Journalist Tony Scherman wrote that by early 1977, "Presley had become a grotesque caricature of his sleek, energetic former self. Hugely overweight, his mind dulled by the pharmacopoeia he daily ingested, he was barely able to pull himself through his abbreviated concerts." In Alexandria, Louisiana, the singer was on stage for less than an hour, and "was impossible to understand". On March 31, Presley failed to perform in Baton Rouge, unable to get out of his hotel bed; a total of four shows had to be canceled and rescheduled. Despite the accelerating deterioration of his health, he stuck to most touring commitments. According to Guralnick, fans "were becoming increasingly voluble about their disappointment, but it all seemed to go right past Presley, whose world was now confined almost entirely to his room and his spiritualism books." 
A cousin, Billy Smith, recalled how Presley would sit in his room and chat for hours, sometimes recounting favorite Monty Python sketches and his own past escapades, but more often gripped by paranoid obsessions that reminded Smith of Howard Hughes. "Way Down", Presley's last single issued during his career, was released on June 6. That month, CBS filmed two concerts for a TV special, "Elvis in Concert", to be aired in October. In the first, shot in Omaha on June 19, Presley's voice, Guralnick writes, "is almost unrecognizable, a small, childlike instrument in which he talks more than sings most of the songs, casts about uncertainly for the melody in others, and is virtually unable to articulate or project". Two days later, in Rapid City, South Dakota, "he looked healthier, seemed to have lost a little weight, and sounded better, too", though by the conclusion of the performance, his face was "framed in a helmet of blue-black hair from which sweat sheets down over pale, swollen cheeks". His final concert was held in Indianapolis at Market Square Arena, on June 26. A book co-written by the three bodyguards fired the previous year was published on August 1. It was the first exposé to detail Presley's years of drug misuse. He was devastated by the book and tried unsuccessfully to halt its release by offering money to the publishers. By this point, he suffered from multiple ailments: glaucoma, high blood pressure, liver damage, and an enlarged colon, each magnified—and possibly caused—by drug abuse. On the evening of August 16, 1977, Presley was scheduled to fly out of Memphis to begin another tour. That afternoon, Ginger Alden discovered him in an unresponsive state on the bathroom floor. According to her eyewitness account, "Elvis looked as if his entire body had completely frozen in a seated position while using the commode and then had fallen forward, in that fixed position, directly in front of it. [...] It was clear that, from the time whatever hit him to the moment he had landed on the floor, Elvis hadn't moved." Attempts to revive him failed, and his death was officially pronounced at 3:30 p.m. at the Baptist Memorial Hospital. President Jimmy Carter issued a statement that credited Presley with having "permanently changed the face of American popular culture". Thousands of people gathered outside Graceland to view the open casket. One of Presley's cousins, Billy Mann, accepted $18,000 to secretly photograph the corpse; the picture appeared on the cover of the "National Enquirer"s biggest-selling issue ever. Alden struck a $105,000 deal with the "Enquirer" for her story, but settled for less when she broke her exclusivity agreement. Presley left her nothing in his will. Presley's funeral was held at Graceland on Thursday, August 18. Outside the gates, a car plowed into a group of fans, killing two women and critically injuring a third. About 80,000 people lined the processional route to Forest Hill Cemetery, where Presley was buried next to his mother. Within a few days, "Way Down" topped the country and UK pop charts. Following an attempt to steal the singer's body in late August, the remains of both Presley and his mother were reburied in Graceland's Meditation Garden on October 2. While an autopsy, undertaken the same day Presley died, was still in progress, Memphis medical examiner Dr. Jerry Francisco announced that the immediate cause of death was cardiac arrest. Asked if drugs were involved, he declared that "drugs played no role in Presley's death". 
In fact, "drug use was heavily implicated" in Presley's death, writes Guralnick. The pathologists conducting the autopsy thought it possible, for instance, that he had suffered "anaphylactic shock brought on by the codeine pills he had gotten from his dentist, to which he was known to have had a mild allergy". A pair of lab reports filed two months later strongly suggested that polypharmacy was the primary cause of death; one reported "fourteen drugs in Elvis' system, ten in significant quantity". In 1979, forensic pathologist Cyril Wecht conducted a review of the reports and concluded that a combination of central nervous system depressants had resulted in Presley's accidental death. Forensic historian and pathologist Michael Baden viewed the situation as complicated: "Elvis had had an enlarged heart for a long time. That, together with his drug habit, caused his death. But he was difficult to diagnose; it was a judgment call." The competence and ethics of two of the centrally involved medical professionals were seriously questioned. Dr. Francisco had offered a cause of death before the autopsy was complete; claimed the underlying ailment was cardiac arrhythmia, a condition that can be determined only in someone who is still alive; and denied drugs played any part in Presley's death before the toxicology results were known. Allegations of a cover-up were widespread. While a 1981 trial of Presley's main physician, Dr. George Nichopoulos, exonerated him of criminal liability for the singer's death, the facts were startling: "In the first eight months of 1977 alone, he had [prescribed] more than 10,000 doses of sedatives, amphetamines, and narcotics: all in Elvis's name." His license was suspended for three months. It was permanently revoked in the 1990s after the Tennessee Medical Board brought new charges of over-prescription. In 1994, the Presley autopsy report was reopened. Dr. Joseph Davis, who had conducted thousands of autopsies as Miami-Dade County coroner, declared at its completion, "There is nothing in any of the data that supports a death from drugs. In fact, everything points to a sudden, violent heart attack." However, in the last years some apparent new evidence came out about his death. More recent research has revealed that it was only Dr. Francisco who told the news people that Elvis apparently died of heart failure. In fact, the doctors "could say nothing with confidence until they got the results back from the laboratories, if then. That would be a matter of weeks." One of the examiners, Dr. E. Eric Muirhead "could not believe his ears. Francisco had not only presumed to speak for the hospital's team of pathologists, he had announced a conclusion that they had not reached." "Early on, a meticulous dissection of the body ... confirmed [that] Elvis was chronically ill with diabetes, glaucoma, and constipation. As they proceeded, the doctors saw evidence that his body had been wracked over a span of years by a large and constant stream of drugs. They had also studied his hospital records, which included two admissions for drug detoxification and methadone treatments." Therefore, Frank Coffey is of the opinion that a plausible cause of Elvis' death is "a phenomenon called the Valsalva maneuver (essentially straining on the toilet leading to heart stoppage—plausible because Elvis suffered constipation, a common reaction to drug use)..." In similar terms, Dr. 
Dan Warlick, who was present at the autopsy, "believes Presley's chronic constipation—the result of years of prescription drug abuse and high-fat, high-cholesterol gorging—brought on what's known as Valsalva's maneuver. Put simply, the strain of attempting to defecate compressed the singer's abdominal aorta, shutting down his heart." However, in 2013, Dr. Forest Tennant, who had testified as a defense witness in Nichopoulos's trial, described his own analysis of all of Presley's available medical records. He concluded that Presley's "drug abuse had led to falls, head trauma, and overdoses that damaged his brain", and that his death was due in part to a toxic reaction to codeine—exacerbated by an undetected liver enzyme defect—which can cause sudden cardiac arrhythmia. DNA analysis in 2014 of a hair sample purported to be Presley's found evidence of genetic variants that can lead to glaucoma, migraines, and obesity; a crucial variant associated with the heart-muscle disease hypertrophic cardiomyopathy was also identified. Between 1977 and 1981, six of Presley's posthumously released singles were top ten country hits. Graceland was opened to the public in 1982. Attracting over half a million visitors annually, it is the second most-visited home in the United States, after the White House. It was declared a National Historic Landmark in 2006. Presley has been inducted into five music halls of fame: the Rock and Roll Hall of Fame (1986), the Country Music Hall of Fame (1998), the Gospel Music Hall of Fame (2001), the Rockabilly Hall of Fame (2007), and the Memphis Music Hall of Fame (2012). In 1984, he received the W. C. Handy Award from the Blues Foundation and the Academy of Country Music's first Golden Hat Award. In 1987, he received the American Music Awards' Award of Merit. A Junkie XL remix of Presley's "A Little Less Conversation" (credited as "Elvis Vs JXL") was used in a Nike advertising campaign during the 2002 FIFA World Cup. It topped the charts in over 20 countries, and was included in a compilation of Presley's number one hits, "ELV1S", that was also an international success. The album returned Presley to the "Billboard" summit for the first time in almost three decades. In 2003, a remix of "Rubberneckin'", a 1969 recording of Presley's, topped the U.S. sales chart, as did a 50th-anniversary re-release of "That's All Right" the following year. The latter was an outright hit in Britain, debuting at number three on the pop chart; it also made the top ten in Canada. In 2005, another three reissued singles, "Jailhouse Rock", "One Night"/"I Got Stung", and "It's Now or Never", went to number one in the UK. They were part of a campaign that saw the re-release of all 18 of Presley's previous chart-topping UK singles. The first, "All Shook Up", came with a collectors' box that made it ineligible to chart again; each of the other 17 reissues hit the British top five. In 2005, as well, "Forbes" named Presley the top-earning deceased celebrity for the fifth straight year, with a gross income of $45 million. He placed second in 2006, returned to the top spot the next two years, and ranked fourth in 2009. The following year, he was ranked second, with his highest annual income ever—$60 million—spurred by the celebration of his 75th birthday and the launch of Cirque du Soleil's "Viva Elvis" show in Las Vegas. In November 2010, "Viva Elvis: The Album" was released, setting his voice to newly recorded instrumental tracks. 
As of mid-2011, there were an estimated 15,000 licensed Presley products, and he was again the second-highest-earning deceased celebrity. Six years later, he ranked fourth with earnings of $35 million, up $8 million from 2016 due in part to the opening of a new entertainment complex, Elvis Presley's Memphis, and a hotel, The Guest House at Graceland. For much of his adult life, Presley, with his rise from poverty to riches and massive fame, had seemed to epitomize the American Dream. In his final years, and even more so after his death and the revelations about its circumstances, he also became a symbol of excess and gluttony. Increasing attention, for instance, was paid to his appetite for the rich, heavy Southern cooking of his upbringing, foods such as chicken-fried steak and biscuits and gravy. In particular, his love of calorie-laden fried peanut butter, banana, and (sometimes) bacon sandwiches, now known as "Elvis sandwiches", came to stand for this aspect of his persona. But the Elvis sandwich represents more than just unhealthy overindulgence—as media and culture scholar Robert Thompson describes, the unsophisticated treat also signifies Presley's enduring all-American appeal: "He wasn't only the king, he was one of us." Since 1977, there have been numerous alleged sightings of Presley. A long-standing conspiracy theory among some fans is that he faked his death. Adherents cite alleged discrepancies in the death certificate, reports of a wax dummy in his original coffin, and accounts of Presley planning a diversion so he could retire in peace. An unusually large number of fans have domestic shrines devoted to Presley and journey to sites with which he is connected, however faintly. Every August 16, the anniversary of his death, thousands of people gather outside Graceland and celebrate his memory with a candlelight ritual. "With Elvis, it is not just his music that has survived death", writes Ted Harrison. "He himself has been raised, like a medieval saint, to a figure of cultic status. It is as if he has been canonized by acclamation." Presley's earliest musical influence came from gospel. His mother recalled that from the age of two, at the Assembly of God church in Tupelo attended by the family, "he would slide down off my lap, run into the aisle and scramble up to the platform. There he would stand looking at the choir and trying to sing with them." In Memphis, Presley frequently attended all-night gospel singings at the Ellis Auditorium, where groups such as the Statesmen Quartet led the music in a style that, Guralnick suggests, sowed the seeds of Presley's future stage act. As a teenager, Presley had wide-ranging musical interests, and he was deeply informed about African American musical idioms as well as white ones (see "Teenage life in Memphis"). Though he never had any formal training, he was blessed with a remarkable memory, and his musical knowledge was already considerable by the time he made his first professional recordings in 1954 at the age of 19. When Jerry Leiber and Mike Stoller met him two years later, they were astonished at his encyclopedic understanding of the blues, and, as Stoller put it, "He certainly knew a lot more than we did about country music and gospel music." At a press conference the following year, he proudly declared, "I know practically every religious song that's ever been written." Presley was a central figure in the development of rockabilly, according to music historians. 
"Rockabilly crystallized into a recognizable style in 1954 with Elvis Presley's first release, on the Sun label", writes Craig Morrison. Paul Friedlander describes the defining elements of rockabilly, which he similarly characterizes as "essentially ... an Elvis Presley construction": "the raw, emotive, and slurred vocal style and emphasis on rhythmic feeling [of] the blues with the string band and strummed rhythm guitar [of] country". In "That's All Right", the Presley trio's first record, Scotty Moore's guitar solo, "a combination of Merle Travis–style country finger-picking, double-stop slides from acoustic boogie, and blues-based bent-note, single-string work, is a microcosm of this fusion." While Katherine Charlton likewise calls Presley "rockabilly's originator", Carl Perkins has explicitly stated that "[Sam] Phillips, Elvis, and I didn't create rockabilly." and, according to Michael Campbell, "Bill Haley recorded the first big rockabilly hit." In Moore's view, too, "It had been there for quite a while, really. Carl Perkins was doing basically the same sort of thing up around Jackson, and I know for a fact Jerry Lee Lewis had been playing that kind of music ever since he was ten years old." At RCA, Presley's rock and roll sound grew distinct from rockabilly with group chorus vocals, more heavily amplified electric guitars and a tougher, more intense manner. While he was known for taking songs from various sources and giving them a rockabilly/rock and roll treatment, he also recorded songs in other genres from early in his career, from the pop standard "Blue Moon" at Sun to the country ballad "How's the World Treating You?" on his second LP to the blues of "Santa Claus Is Back In Town". In 1957, his first gospel record was released, the four-song EP "Peace in the Valley". Certified as a million seller, it became the top-selling gospel EP in recording history. Presley would record gospel periodically for the rest of his life. After his return from military service in 1960, Presley continued to perform rock and roll, but the characteristic style was substantially toned down. His first post-Army single, the number one hit "Stuck on You", is typical of this shift. RCA publicity materials referred to its "mild rock beat"; discographer Ernst Jorgensen calls it "upbeat pop". The number five "She's Not You" (1962) "integrates the Jordanaires so completely, it's practically doo-wop". The modern blues/R&B sound captured with great success on "Elvis Is Back!" was essentially abandoned for six years until such 1966–67 recordings as "Down in the Alley" and "Hi-Heel Sneakers". The singer's output during most of the 1960s emphasized pop music, often in the form of ballads such as "Are You Lonesome Tonight?", a number one in 1960. "It's Now or Never", which also topped the chart that year, was a classically influenced variation of pop based on the Neapolitan "’O sole mio" and concluding with a "full-voiced operatic cadence". These were both dramatic numbers, but most of what Presley recorded for his many film soundtracks was in a much lighter vein. While Presley performed several of his classic ballads for the '68 Comeback Special, the sound of the show was dominated by aggressive rock and roll. He would record few new straight-ahead rock and roll songs thereafter; as he explained, they were "hard to find". A significant exception was "Burning Love", his last major hit on the pop charts. 
Like his work of the 1950s, Presley's subsequent recordings reworked pop and country songs, but in markedly different permutations. His stylistic range now began to embrace a more contemporary rock sound as well as soul and funk. Much of "Elvis In Memphis", as well as "Suspicious Minds", cut at the same sessions, reflected his new rock and soul fusion. In the mid-1970s, many of his singles found a home on country radio, the field where he first became a star. The developmental arc of Presley's singing voice, as described by critic Dave Marsh, goes from "high and thrilled in the early days, [to] lower and perplexed in the final months." Marsh credits Presley with the introduction of the "vocal stutter" on 1955's "Baby Let's Play House." When on "Don't Be Cruel", Presley "slides into a 'mmmmm' that marks the transition between the first two verses," he shows "how masterful his relaxed style really is." Marsh describes the vocal performance on "Can't Help Falling in Love" as one of "gentle insistence and delicacy of phrasing", with the line "Shall I stay" pronounced as if "the words are fragile as crystal". Jorgensen calls the 1966 recording of "How Great Thou Art" "an extraordinary fulfillment of his vocal ambitions", as Presley "crafted for himself an ad-hoc arrangement in which he took every part of the four-part vocal, from [the] bass intro to the soaring heights of the song's operatic climax", becoming "a kind of one-man quartet". Guralnick finds "Stand By Me" from the same gospel sessions "a beautifully articulated, almost nakedly yearning performance," but, by contrast, feels that Presley reaches beyond his powers on "Where No One Stands Alone", resorting "to a kind of inelegant bellowing to push out a sound" that Jake Hess of the Statesmen Quartet had in his command. Hess himself thought that while others might have voices the equal of Presley's, "he had that certain something that everyone searches for all during their lifetime." Guralnick attempts to pinpoint that something: "The warmth of his voice, his controlled use of both vibrato technique and natural falsetto range, the subtlety and deeply felt conviction of his singing were all qualities recognizably belonging to his talent but just as recognizably not to be achieved without sustained dedication and effort." Marsh praises his 1968 reading of "U.S. Male", "bearing down on the hard guy lyrics, not sending them up or overplaying them but tossing them around with that astonishingly tough yet gentle assurance that he brought to his Sun records." The performance on "In the Ghetto" is, according to Jorgensen, "devoid of any of his characteristic vocal tricks or mannerisms", instead relying on the exceptional "clarity and sensitivity of his voice". Guralnick describes the song's delivery as one of "almost translucent eloquence ... so quietly confident in its simplicity". On "Suspicious Minds", Guralnick hears essentially the same "remarkable mixture of tenderness and poise", but supplemented with "an expressive quality somewhere between stoicism (at suspected infidelity) and anguish (over impending loss)". Music critic Henry Pleasants observes that "Presley has been described variously as a baritone and a tenor. An extraordinary compass ... and a very wide range of vocal color have something to do with this divergence of opinion." He identifies Presley as a high baritone, calculating his range as two octaves and a third, "from the baritone low G to the tenor high B, with an upward extension in falsetto to at least a D-flat. 
Presley's best octave is in the middle, D-flat to D-flat, granting an extra full step up or down." In Pleasants' view, his voice was "variable and unpredictable" at the bottom, "often brilliant" at the top, with the capacity for "full-voiced high Gs and As that an opera baritone might envy". Scholar Lindsay Waters, who figures Presley's range as two-and-a-quarter octaves, emphasizes that "his voice had an emotional range from tender whispers to sighs down to shouts, grunts, grumbles, and sheer gruffness that could move the listener from calmness and surrender, to fear. His voice can not be measured in octaves, but in decibels; even that misses the problem of how to measure delicate whispers that are hardly audible at all." Presley was always "able to duplicate the open, hoarse, ecstatic, screaming, shouting, wailing, reckless sound of the black rhythm-and-blues and gospel singers", writes Pleasants, and also demonstrated a remarkable ability to assimilate many other vocal styles. When Dewey Phillips first aired "That's All Right" on Memphis's WHBQ, many listeners who contacted the station by phone and telegram to ask for it again assumed that its singer was black. From the beginning of his national fame, Presley expressed respect for African American performers and their music, and disregard for the norms of segregation and racial prejudice then prevalent in the South. Interviewed in 1956, he recalled how in his childhood he would listen to blues musician Arthur Crudup—the originator of "That's All Right"—"bang his box the way I do now, and I said if I ever got to the place where I could feel all old Arthur felt, I'd be a music man like nobody ever saw." "The Memphis World", an African American newspaper, reported that Presley, "the rock 'n' roll phenomenon", "cracked Memphis's segregation laws" by attending the local amusement park on what was designated as its "colored night". Such statements and actions led Presley to be generally hailed in the black community during the early days of his stardom. By contrast, many white adults, according to "Billboard"s Arnold Shaw, "did not like him, and condemned him as depraved. Anti-negro prejudice doubtless figured in adult antagonism. Regardless of whether parents were aware of the Negro sexual origins of the phrase 'rock 'n' roll', Presley impressed them as the visual and aural embodiment of sex." Despite the largely positive view of Presley held by African Americans, a rumor spread in mid-1957 that he had at some point announced, "The only thing Negroes can do for me is buy my records and shine my shoes." A journalist with the national African American weekly "Jet", Louie Robinson, pursued the story. On the set of "Jailhouse Rock", Presley granted Robinson an interview, though he was no longer dealing with the mainstream press. He denied making such a statement or holding in any way to its racist view: "I never said anything like that, and people who know me know that I wouldn't have said it. ... A lot of people seem to think I started this business. But rock 'n' roll was here a long time before I came along. Nobody can sing that kind of music like colored people. Let's face it: I can't sing like Fats Domino can. I know that." Robinson found no evidence that the remark had ever been made, and on the contrary elicited testimony from many individuals indicating that Presley was anything but racist. 
Blues singer Ivory Joe Hunter, who had heard the rumor before he visited Graceland one evening, reported of Presley, "He showed me every courtesy, and I think he's one of the greatest." Though the rumored remark was wholly discredited at the time, it was still being used against Presley decades later. The identification of Presley with racism—either personally or symbolically—was expressed most famously in the lyrics of the 1989 rap hit "Fight the Power", by Public Enemy: "Elvis was a hero to most / But he never meant shit to me / Straight-up racist that sucker was / Simple and plain". The persistence of such attitudes was fueled by resentment over the fact that Presley, whose musical and visual performance idiom owed much to African American sources, achieved the cultural acknowledgement and commercial success largely denied his black peers. Into the 21st century, the notion that Presley had "stolen" black music still found adherents. Notable among African American entertainers expressly rejecting this view was Jackie Wilson, who argued, "A lot of people have accused Elvis of stealing the black man's music, when in fact, almost every black solo entertainer copied his stage mannerisms from Elvis." And throughout his career, Presley plainly acknowledged his debt. Addressing his '68 Comeback Special audience, he said, "Rock 'n' roll music is basically gospel or rhythm and blues, or it sprang from that. People have been adding to it, adding instruments to it, experimenting with it, but it all boils down to [that]." Nine years earlier, he had said, "Rock 'n' roll has been around for many years. It used to be called rhythm and blues." Presley's physical attractiveness and sexual appeal were widely acknowledged. "He was once beautiful, astonishingly beautiful", in the words of critic Mark Feeney. Television director Steve Binder, no fan of Presley's music before he oversaw the '68 Comeback Special, reported, "I'm straight as an arrow and I got to tell you, you stop, whether you're male or female, to look at him. He was that good looking. And if you never knew he was a superstar, it wouldn't make any difference; if he'd walked in the room, you'd know somebody special was in your presence." His performance style, as much as his physical beauty, was responsible for Presley's eroticized image. Writing in 1970, critic George Melly described him as "the master of the sexual simile, treating his guitar as both phallus and girl". In his Presley obituary, Lester Bangs credited him as "the man who brought overt blatant vulgar sexual frenzy to the popular arts in America". Ed Sullivan's declaration that he perceived a soda bottle in Presley's trousers was echoed by rumors involving a similarly positioned toilet roll tube or lead bar. While Presley was marketed as an icon of heterosexuality, some cultural critics have argued that his image was ambiguous. In 1959, "Sight and Sound"s Peter John Dyer described his onscreen persona as "aggressively bisexual in appeal". Brett Farmer places the "orgasmic gyrations" of the title dance sequence in "Jailhouse Rock" within a lineage of cinematic musical numbers that offer a "spectacular eroticization, if not homoeroticization, of the male image". In the analysis of Yvonne Tasker, "Elvis was an ambivalent figure who articulated a peculiar feminised, objectifying version of white working-class masculinity as aggressive sexual display." 
Reinforcing Presley's image as a sex symbol were the reports of his dalliances with various Hollywood stars and starlets, from Natalie Wood in the 1950s to Connie Stevens and Ann-Margret in the 1960s to Candice Bergen and Cybill Shepherd in the 1970s. June Juanico of Memphis, one of Presley's early girlfriends, later blamed Parker for encouraging him to choose his dating partners with publicity in mind. Presley never grew comfortable with the Hollywood scene, and most of these relationships were insubstantial. Once he became Presley's manager, Colonel Tom Parker insisted on exceptionally tight control over his client's career. Early on, he and his Hill and Range allies, the brothers Jean and Julian Aberbach, perceived the close relationship that developed between Presley and songwriters Jerry Leiber and Mike Stoller as a serious threat to that control. Parker effectively ended the relationship, deliberately or not, with the new contract he sent Leiber in early 1958. Leiber thought there was a mistake—the sheet of paper was blank except for Parker's signature and a line on which to enter his. "There's no mistake, boy, just sign it and return it", Parker directed. "Don't worry, we'll fill it in later." Leiber declined, and Presley's fruitful collaboration with the writing team was over. Other respected songwriters lost interest in or simply avoided writing for Presley because of the requirement that they surrender a third of their usual royalties. By 1967, Parker's contracts with Presley gave him 50 percent of most of the singer's earnings from recordings, films, and merchandise. Beginning in February 1972, he took a third of the profit from live appearances; a January 1976 agreement entitled him to half of that as well. Priscilla Presley noted that "Elvis detested the business side of his career. He would sign a contract without even reading it." Presley's friend Marty Lacker regarded Parker as a "hustler and a con artist. He was only interested in 'now money'—get the buck and get gone." Lacker was instrumental in convincing Presley to record with Memphis producer Chips Moman and his handpicked musicians at American Sound Studio in early 1969. The American Sound sessions represented a significant departure from the control customarily exerted by Hill and Range. Moman still had to deal with the publisher's staff on site, whose song suggestions he regarded as unacceptable. He was on the verge of quitting, until Presley ordered the Hill and Range personnel out of the studio. Although RCA executive Joan Deary was later full of praise for the producer's song choices and the quality of the recordings, Moman, to his fury, received neither credit on the records nor royalties for his work. Throughout his entire career, Presley performed in only three venues outside the United States—all of them in Canada, during brief tours there in 1957. In 1968, he remarked, "Before too long I'm going to make some personal appearance tours. I'll probably start out here in this country and after that, play some concerts abroad, probably starting in Europe. I want to see some places I've never seen before." Rumors that he would play overseas for the first time were fueled in 1974 by a million-dollar bid for an Australian tour. Parker was uncharacteristically reluctant, prompting those close to Presley to speculate about the manager's past and the reasons for his evident unwillingness to apply for a passport. 
After Presley's death, it was revealed that Parker was born Andreas Cornelis van Kuijk in the Netherlands; having immigrated illegally to the U.S., he had reason to fear that if he left the country, he would not be allowed back in again. Parker ultimately squelched any notions Presley had of working abroad, claiming that foreign security was poor and the venues unsuitable for a star of his magnitude. Parker arguably exercised tightest control over Presley's film career. Hal Wallis said, "I'd rather try and close a deal with the devil" than with Parker. Fellow movie producer Sam Katzman described him as "the biggest con artist in the world". In 1957, Robert Mitchum asked Presley to costar with him in "Thunder Road", which Mitchum was producing and writing. According to George Klein, one of his oldest friends, Presley was also offered starring roles in "West Side Story" and "Midnight Cowboy". In 1974, Barbra Streisand approached Presley to star with her in the remake of "A Star is Born". In each case, any ambitions the singer may have had to play such parts were thwarted by his manager's negotiating demands or flat refusals. In Lacker's description, "The only thing that kept Elvis going after the early years was a new challenge. But Parker kept running everything into the ground." The prevailing attitude may have been summed up best by the response Leiber and Stoller received when they brought a serious film project for Presley to Parker and the Hill and Range owners for their consideration. In Leiber's telling, Jean Aberbach warned them to never again "try to interfere with the business or artistic workings of the process known as Elvis Presley". In the early 1960s, the circle of friends with whom Presley constantly surrounded himself until his death came to be known as the "Memphis Mafia". "Surrounded by the[ir] parasitic presence", as journalist John Harris puts it, "it was no wonder that as he slid into addiction and torpor, no-one raised the alarm: to them, Elvis was the bank, and it had to remain open." Tony Brown, who played piano for Presley regularly in the last two years of the singer's life, observed his rapidly declining health and the urgent need to address it: "But we all knew it was hopeless because Elvis was surrounded by that little circle of people ... all those so-called friends". In the Memphis Mafia's defense, Marty Lacker has said, "[Presley] was his own man. ... If we hadn't been around, he would have been dead a lot earlier." Larry Geller became Presley's hairdresser in 1964. Unlike others in the Memphis Mafia, he was interested in spiritual questions and recalls how, from their first conversation, Presley revealed his secret thoughts and anxieties: "I mean there "has" to be a purpose ... there's got to be a reason ... why I was chosen to be Elvis Presley. ... I swear to God, no one knows how lonely I get. And how empty I really feel." Thereafter, Geller supplied him with books on religion and mysticism, which the singer read voraciously. Presley would be preoccupied by such matters for much of his life, taking trunkloads of books with him on tour. Presley's rise to national attention in 1956 transformed the field of popular music and had a huge effect on the broader scope of popular culture. As the catalyst for the cultural revolution that was rock and roll, he was central not only to defining it as a musical genre but in making it a touchstone of youth culture and rebellious attitude. 
With its racially mixed origins—repeatedly affirmed by Presley—rock and roll's occupation of a central position in mainstream American culture facilitated a new acceptance and appreciation of black culture. In this regard, Little Richard said of Presley, "He was an integrator. Elvis was a blessing. They wouldn't let black music through. He opened the door for black music." Al Green agreed: "He broke the ice for all of us." President Jimmy Carter remarked on his legacy in 1977: "His music and his personality, fusing the styles of white country and black rhythm and blues, permanently changed the face of American popular culture. His following was immense, and he was a symbol to people the world over of the vitality, rebelliousness, and good humor of his country." Presley also heralded the vastly expanded reach of celebrity in the era of mass communication: at the age of 21, within a year of his first appearance on American network television, he was one of the most famous people in the world. Presley's name, image, and voice are instantly recognizable around the globe. He has inspired a legion of impersonators. In polls and surveys, he is recognized as one of the most important popular music artists and influential Americans. "Elvis Presley is the greatest cultural force in the twentieth century", said composer and conductor Leonard Bernstein. "He introduced the beat to everything and he changed everything—music, language, clothes. It's a whole new social revolution—the sixties came from it." In the words of John Lennon, "Nothing really affected me until Elvis." Bob Dylan described the sensation of first hearing Presley as "like busting out of jail". On the 25th anniversary of Presley's death, "The New York Times" asserted, "All the talentless impersonators and appalling black velvet paintings on display can make him seem little more than a perverse and distant memory. But before Elvis was camp, he was its opposite: a genuine cultural force. ... Elvis's breakthroughs are underappreciated because in this rock-and-roll age, his hard-rocking music and sultry style have triumphed so completely." Not only Presley's achievements, but his failings as well, are seen by some cultural observers as adding to the power of his legacy, as in this description by Greil Marcus: Elvis Presley is a supreme figure in American life, one whose presence, no matter how banal or predictable, brooks no real comparisons. ... The cultural range of his music has expanded to the point where it includes not only the hits of the day, but also patriotic recitals, pure country gospel, and really dirty blues. ... Elvis has emerged as a great "artist," a great "rocker," a great "purveyor of schlock," a great "heart throb," a great "bore," a great "symbol of potency," a great "ham," a great "nice person," and, yes, a great American. To this day, Presley remains the best-selling solo artist, with estimated sales ranging from 600 million to 1 billion records. Presley holds the records for the most songs charting in "Billboard"s top 40 (114) and top 100 (151 according to chart statistician Joel Whitburn; 138 according to Presley historian Adam Victor). Presley's rankings for top ten and number one hits vary depending on how the double-sided "Hound Dog/Don't Be Cruel" and "Don't/I Beg of You" singles, which precede the inception of "Billboard"s unified Hot 100 chart, are analyzed. According to Whitburn's analysis, Presley holds the top ten record with 38, tying with Madonna; per "Billboard"s current assessment, he ranks second with 36. 
Whitburn and "Billboard" concur that the Beatles hold the record for most number one hits with 20, and that Mariah Carey is second with 18. Whitburn has Presley also with 18, and thus tied for second; "Billboard" has him third with 17. Presley retains the record for cumulative weeks at number one: alone at 80, according to Whitburn and the Rock and Roll Hall of Fame; tied with Carey at 79, according to "Billboard". He holds the records for most British number one hits with 21, and top ten hits with 76. As an album artist, Presley is credited by "Billboard" with the record for the most albums charting in the "Billboard" 200: 129, far ahead of second-place Frank Sinatra's 82. He also holds the record for most time spent at number one on the "Billboard" 200: 67 weeks. In 2015 and 2016, two albums setting Presley's vocals against music by the Royal Philharmonic Orchestra, "If I Can Dream" and "The Wonder of You", both reached number one in the UK. This gave him a new record for number one UK albums by a solo artist with 13, and extended his record for longest span between number one albums by anybody—Presley had first topped the British chart in 1956 with his self-titled debut. As of 2018, the Recording Industry Association of America (RIAA) credits Presley with 136 million certified album sales in the U.S., third all time behind the Beatles and Garth Brooks. He holds the records for most gold albums (106, more than twice as many as second-place Barbra Streisand's 51), most platinum albums (63), and most multi-platinum albums (27). His total of 197 album certification awards (including one diamond award), far outpaces the Beatles' second-best 122. He has the most gold singles (54) and the fourth-most platinum singles (27, behind Rihanna, Taylor Swift, and Chris Brown). A vast number of recordings have been issued under Presley's name. The total number of his original master recordings has been variously calculated as 665 and 711. His career began and he was most successful during an era when singles were the primary commercial medium for pop music. In the case of his albums, the distinction between "official" studio records and other forms is often blurred. For most of the 1960s, his recording career focused on soundtrack albums. In the 1970s, his most heavily promoted and best-selling LP releases tended to be concert albums. This summary discography lists only the albums and singles that reached the top of one or more of the following charts: the main U.S. "Billboard" pop chart; the "Billboard" country chart, the genre chart with which he was most identified (there was no country album chart before 1964); and the official British pop chart. In the U.S., Presley also had five or six number-one R&B singles and seven number-one adult contemporary singles; in 1964, his "Blue Christmas" topped the Christmas singles chart during a period when "Billboard" did not rank holiday singles in its primary pop chart. Between 2015 and 2017, three albums featuring Presley's vocals and newly recorded orchestral music reached the top of the "Billboard" classical chart. He also had number-one hits in many countries beside the U.S and United Kingdom. TV concert specials Ernest Hemingway Ernest Miller Hemingway (July 21, 1899 – July 2, 1961) was an American novelist, short story writer, and journalist. 
His economical and understated style—which he termed the Iceberg Theory—had a strong influence on 20th-century fiction, while his adventurous lifestyle and his public image brought him admiration from later generations. Hemingway produced most of his work between the mid-1920s and the mid-1950s, and won the Nobel Prize in Literature in 1954. He published seven novels, six short-story collections, and two non-fiction works. Three of his novels, four short story collections, and three non-fiction works were published posthumously. Many of his works are considered classics of American literature. Hemingway was raised in Oak Park, Illinois. After high school, he reported for a few months for "The Kansas City Star", before leaving for the Italian Front to enlist as an ambulance driver in World War I. In 1918, he was seriously wounded and returned home. His wartime experiences formed the basis for his novel "A Farewell to Arms" (1929). In 1921, he married Hadley Richardson, the first of what would be four wives. The couple moved to Paris, where he worked as a foreign correspondent and fell under the influence of the modernist writers and artists of the 1920s "Lost Generation" expatriate community. His debut novel, "The Sun Also Rises", was published in 1926. After his 1927 divorce from Richardson, Hemingway married Pauline Pfeiffer; they divorced after he returned from the Spanish Civil War, where he had been a journalist. He based "For Whom the Bell Tolls" (1940) on his experience there. Martha Gellhorn became his third wife in 1940; they separated after he met Mary Welsh in London during World War II. He was present at the Normandy landings and the liberation of Paris. Shortly after the publication of "The Old Man and the Sea" (1952), Hemingway went on safari to Africa, where he was almost killed in two successive plane crashes that left him in pain or ill-health for much of the rest of his life. Hemingway maintained permanent residences in Key West, Florida (in the 1930s) and Cuba (in the 1940s and 1950s). In 1959, he bought a house in Ketchum, Idaho, where, in mid-1961 he shot himself in the head. Ernest Miller Hemingway was born on July 21, 1899, in Oak Park, Illinois, a suburb of Chicago. His father, Clarence Edmonds Hemingway, was a physician, and his mother, Grace Hall Hemingway, was a musician. Both were well-educated and well-respected in Oak Park, a conservative community about which resident Frank Lloyd Wright said, "So many churches for so many good people to go to." For a short period after their marriage, Clarence and Grace Hemingway lived with Grace's father, Ernest Hall, their first son's namesake. Later, Ernest Hemingway would say that he disliked his name, which he "associated with the naive, even foolish hero of Oscar Wilde's play "The Importance of Being Earnest"". The family eventually moved into a seven-bedroom home in a respectable neighborhood with a music studio for Grace and a medical office for Clarence. Hemingway's mother frequently performed in concerts around the village. As an adult, Hemingway professed to hate his mother, although biographer Michael S. Reynolds points out that Hemingway mirrored her energy and enthusiasm. Her insistence that he learn to play the cello became a "source of conflict", but he later admitted the music lessons were useful to his writing, as is evident in the "contrapuntal structure" of "For Whom the Bell Tolls". The family spent summers at Windemere on Walloon Lake, near Petoskey, Michigan. 
Hemingway's father taught him to hunt, fish, and camp in the woods and lakes of Northern Michigan as a young boy. These early experiences in nature instilled a passion for outdoor adventure and living in remote or isolated areas. From 1913 until 1917, Hemingway attended Oak Park and River Forest High School. He took part in a number of sports such as boxing, track and field, water polo, and football. He excelled in English classes, and with his sister Marcelline, performed in the school orchestra for two years. During his junior year he had a journalism class, structured "as though the classroom were a newspaper office," with better writers submitting pieces to the school newspaper, "The Trapeze". Hemingway and Marcelline both submitted pieces; Hemingway's first piece, published in January 1916, was about a local performance by the Chicago Symphony Orchestra. He edited the "Trapeze" and the "Tabula" (the yearbook), imitating the language of sportswriters, taking the pen name Ring Lardner, Jr.—a nod to Ring Lardner of the "Chicago Tribune" whose byline was "Line O'Type." Like Mark Twain, Stephen Crane, Theodore Dreiser, and Sinclair Lewis, Hemingway was a journalist before becoming a novelist. After leaving high school he went to work for "The Kansas City Star" as a cub reporter. Although he stayed there for only six months, he relied on the "Star"s style guide as a foundation for his writing: "Use short sentences. Use short first paragraphs. Use vigorous English. Be positive, not negative." Early in 1918, Hemingway responded to a Red Cross recruitment effort in Kansas City and signed on to become an ambulance driver in Italy. He left New York in May and arrived in Paris as the city was under bombardment from German artillery. By June, he was at the Italian Front. It was probably around this time that he first met John Dos Passos, with whom he had a rocky relationship for decades. On his first day in Milan, he was sent to the scene of a munitions factory explosion, where rescuers retrieved the shredded remains of female workers. He described the incident in his non-fiction book "Death in the Afternoon": "I remember that after we searched quite thoroughly for the complete dead we collected fragments." A few days later, he was stationed at Fossalta di Piave. On July 8, he was seriously wounded by mortar fire, having just returned from the canteen bringing chocolate and cigarettes for the men at the front line. Despite his wounds, Hemingway assisted Italian soldiers to safety, for which he received the Italian Silver Medal of Bravery. Still only 18, Hemingway said of the incident: "When you go to war as a boy you have a great illusion of immortality. Other people get killed; not you ... Then when you are badly wounded the first time you lose that illusion and you know it can happen to you." He sustained severe shrapnel wounds to both legs, underwent an immediate operation at a distribution center, and spent five days at a field hospital before he was transferred for recuperation to the Red Cross hospital in Milan. He spent six months at the hospital, where he met and formed a strong friendship with "Chink" Dorman-Smith that lasted for decades and shared a room with future American foreign service officer, ambassador, and author Henry Serrano Villard. While recuperating, he fell in love for the first time with Agnes von Kurowsky, a Red Cross nurse seven years his senior. 
By the time of his release and return to the United States in January 1919, Agnes and Hemingway had decided to marry within a few months in America. However, in March, she wrote that she had become engaged to an Italian officer. Biographer Jeffrey Meyers states in his book "Hemingway: A Biography" that Hemingway was devastated by Agnes's rejection, and in future relationships, he followed a pattern of abandoning a wife before she abandoned him. Hemingway returned home early in 1919 to a time of readjustment. Not yet 20 years old, he had gained from the war a maturity that was at odds with living at home without a job and with the need for recuperation. As Reynolds explains, "Hemingway could not really tell his parents what he thought when he saw his bloody knee. He could not say how scared he was in another country with surgeons who could not tell him in English if his leg was coming off or not." In September, he took a fishing and camping trip with high school friends to the back-country of Michigan's Upper Peninsula. The trip became the inspiration for his short story "Big Two-Hearted River", in which the semi-autobiographical character Nick Adams takes to the country to find solitude after returning from war. A family friend offered him a job in Toronto, and with nothing else to do, he accepted. Late that year he began as a freelancer and staff writer for the "Toronto Star Weekly". He returned to Michigan the following June and then moved to Chicago in September 1920 to live with friends, while still filing stories for the "Toronto Star". In Chicago, he worked as an associate editor of the monthly journal "Cooperative Commonwealth", where he met novelist Sherwood Anderson. When St. Louis native Hadley Richardson came to Chicago to visit the sister of Hemingway's roommate, Hemingway became infatuated and later claimed, "I knew she was the girl I was going to marry." Hadley, red-haired, with a "nurturing instinct," was eight years older than Hemingway. Despite the age difference, Hadley, who had grown up with an overprotective mother, seemed less mature than usual for a young woman her age. Bernice Kert, author of "The Hemingway Women", claims Hadley was "evocative" of Agnes, but that Hadley had a childishness that Agnes lacked. The two corresponded for a few months and then decided to marry and travel to Europe. They wanted to visit Rome, but Sherwood Anderson convinced them to visit Paris instead, writing letters of introduction for the young couple. They were married on September 3, 1921; two months later, Hemingway was hired as foreign correspondent for the "Toronto Star", and the couple left for Paris. Of Hemingway's marriage to Hadley, Meyers claims: "With Hadley, Hemingway achieved everything he had hoped for with Agnes: the love of a beautiful woman, a comfortable income, a life in Europe." Carlos Baker, Hemingway's first biographer, believes that while Anderson suggested Paris because "the monetary exchange rate" made it an inexpensive place to live, more importantly it was where "the most interesting people in the world" lived. In Paris, Hemingway met American writer and art collector Gertrude Stein, Irish novelist James Joyce, American poet Ezra Pound (who "could help a young writer up the rungs of a career") and other writers. The Hemingway of the early Paris years was a "tall, handsome, muscular, broad-shouldered, brown-eyed, rosy-cheeked, square-jawed, soft-voiced young man." 
He and Hadley lived in a small walk-up at 74 rue du Cardinal Lemoine in the Latin Quarter, and he worked in a rented room in a nearby building. Stein, who was the bastion of modernism in Paris, became Hemingway's mentor and godmother to his son Jack; she introduced him to the expatriate artists and writers of the Montparnasse Quarter, whom she referred to as the "Lost Generation"—a term Hemingway popularized with the publication of "The Sun Also Rises". A regular at Stein's salon, Hemingway met influential painters such as Pablo Picasso, Joan Miró, and Juan Gris. He eventually withdrew from Stein's influence and their relationship deteriorated into a literary quarrel that spanned decades. Ezra Pound met Hemingway by chance at Sylvia Beach's bookshop Shakespeare and Company in 1922. The two toured Italy in 1923 and lived on the same street in 1924. They forged a strong friendship, and in Hemingway, Pound recognized and fostered a young talent. Pound introduced Hemingway to James Joyce, with whom Hemingway frequently embarked on "alcoholic sprees". During his first 20 months in Paris, Hemingway filed 88 stories for the "Toronto Star" newspaper. He covered the Greco-Turkish War, where he witnessed the burning of Smyrna, and wrote travel pieces such as "Tuna Fishing in Spain" and "Trout Fishing All Across Europe: Spain Has the Best, Then Germany". Hemingway was devastated on learning that Hadley had lost a suitcase filled with his manuscripts at the Gare de Lyon as she was traveling to Geneva to meet him in December 1922. The following September, the couple returned to Toronto, where their son John Hadley Nicanor was born on October 10, 1923. During their absence, Hemingway's first book, "Three Stories and Ten Poems", was published. Two of the stories it contained were all that remained after the loss of the suitcase, and the third had been written early the previous year in Italy. Within months a second volume, "in our time" (without capitals), was published. The small volume included six vignettes and a dozen stories Hemingway had written the previous summer during his first visit to Spain, where he discovered the thrill of the "corrida". He missed Paris, considered Toronto boring, and wanted to return to the life of a writer, rather than live the life of a journalist. Hemingway, Hadley and their son (nicknamed Bumby) returned to Paris in January 1924 and moved into a new apartment on the rue Notre-Dame des Champs. Hemingway helped Ford Madox Ford edit "The Transatlantic Review", which published works by Pound, John Dos Passos, Baroness Elsa von Freytag-Loringhoven, and Stein, as well as some of Hemingway's own early stories such as "Indian Camp". When "In Our Time" was published in 1925, the dust jacket bore comments from Ford. "Indian Camp" received considerable praise; Ford saw it as an important early story by a young writer, and critics in the United States praised Hemingway for reinvigorating the short story genre with his crisp style and use of declarative sentences. Six months earlier, Hemingway had met F. Scott Fitzgerald, and the pair formed a friendship of "admiration and hostility". Fitzgerald had published "The Great Gatsby" the same year: Hemingway read it, liked it, and decided his next work had to be a novel. With his wife Hadley, Hemingway first visited the Festival of San Fermín in Pamplona, Spain, in 1923, where he became fascinated by bullfighting. It is at this time that he began to be referred to as "Papa." 
The Hemingways returned to Pamplona in 1924 and a third time in June 1925; that year they brought with them a group of American and British expatriates: Hemingway's Michigan boyhood friend Bill Smith, Donald Ogden Stewart, Lady Duff Twysden (recently divorced), her lover Pat Guthrie, and Harold Loeb. A few days after the fiesta ended, on his birthday (July 21), he began to write the draft of what would become "The Sun Also Rises", finishing eight weeks later. A few months later, in December 1925, the Hemingways left to spend the winter in Schruns, Austria, where Hemingway began revising the manuscript extensively. Pauline Pfeiffer joined them in January and against Hadley's advice, urged Hemingway to sign a contract with Scribner's. He left Austria for a quick trip to New York to meet with the publishers, and on his return, during a stop in Paris, began an affair with Pfeiffer, before returning to Schruns to finish the revisions in March. The manuscript arrived in New York in April; he corrected the final proof in Paris in August 1926, and Scribner's published the novel in October. "The Sun Also Rises" epitomized the post-war expatriate generation, received good reviews, and is "recognized as Hemingway's greatest work". Hemingway himself later wrote to his editor Max Perkins that the "point of the book" was not so much about a generation being lost, but that "the earth abideth forever"; he believed the characters in "The Sun Also Rises" may have been "battered" but were not lost. Hemingway's marriage to Hadley deteriorated as he was working on "The Sun Also Rises". In early 1926, Hadley became aware of his affair with Pfeiffer, who came to Pamplona with them that July. On their return to Paris, Hadley asked for a separation; in November she formally requested a divorce. They split their possessions while Hadley accepted Hemingway's offer of the proceeds from "The Sun Also Rises". The couple were divorced in January 1927, and Hemingway married Pfeiffer in May. Pfeiffer, who was from a wealthy Catholic Arkansas family, had moved to Paris to work for "Vogue" magazine. Before their marriage, Hemingway converted to Catholicism. They honeymooned in Le Grau-du-Roi, where he contracted anthrax, and he planned his next collection of short stories, "Men Without Women", which was published in October 1927, and included his boxing story "Fifty Grand". "Cosmopolitan" magazine editor-in-chief Ray Long praised "Fifty Grand", calling it, "one of the best short stories that ever came to my hands ... the best prize-fight story I ever read ... a remarkable piece of realism." By the end of the year Pauline, who was pregnant, wanted to move back to America. John Dos Passos recommended Key West, and they left Paris in March 1928. Hemingway suffered a severe injury in their Paris bathroom when he pulled a skylight down on his head thinking he was pulling on a toilet chain. This left him with a prominent forehead scar, which he carried for the rest of his life. When Hemingway was asked about the scar, he was reluctant to answer. After his departure from Paris, Hemingway "never again lived in a big city". Hemingway and Pauline traveled to Kansas City, where their son Patrick was born on June 28, 1928. Pauline had a difficult delivery, which Hemingway fictionalized in "A Farewell to Arms". After Patrick's birth, Pauline and Hemingway traveled to Wyoming, Massachusetts, and New York. 
In the winter, he was in New York with Bumby, about to board a train to Florida, when he received a cable telling him that his father had killed himself. Hemingway was devastated, having earlier written to his father telling him not to worry about financial difficulties; the letter arrived minutes after the suicide. He realized how Hadley must have felt after her own father's suicide in 1903, and he commented, "I'll probably go the same way." Upon his return to Key West in December, Hemingway worked on the draft of "A Farewell to Arms" before leaving for France in January. He had finished it in August but delayed the revision. The serialization in "Scribner's Magazine" was scheduled to begin in May, but as late as April, Hemingway was still working on the ending, which he may have rewritten as many as seventeen times. The completed novel was published on September 27. Biographer James Mellow believes "A Farewell to Arms" established Hemingway's stature as a major American writer and displayed a level of complexity not apparent in "The Sun Also Rises". In Spain in mid-1929, Hemingway researched his next work, "Death in the Afternoon". He wanted to write a comprehensive treatise on bullfighting, explaining the "toreros" and "corridas" complete with glossaries and appendices, because he believed bullfighting was "of great tragic interest, being literally of life and death." During the early 1930s, Hemingway spent his winters in Key West and summers in Wyoming, where he found "the most beautiful country he had seen in the American West" and hunted deer, elk, and grizzly bear. He was joined there by Dos Passos and in November 1930, after bringing Dos Passos to the train station in Billings, Montana, Hemingway broke his arm in a car accident. The surgeon tended the compound spiral fracture and bound the bone with kangaroo tendon. Hemingway was hospitalized for seven weeks, with Pauline tending to him; the nerves in his writing hand took as long as a year to heal, during which time he suffered intense pain. His third son, Gregory Hancock Hemingway, was born a year later on November 12, 1931, in Kansas City. Pauline's uncle bought the couple a house in Key West with a carriage house, the second floor of which was converted into a writing studio. Its location across the street from the lighthouse made it easy for Hemingway to find after a long night of drinking. While in Key West, Hemingway frequented the local bar Sloppy Joe's. He invited friends—including Waldo Peirce, Dos Passos, and Max Perkins—to join him on fishing trips and on an all-male expedition to the Dry Tortugas. Meanwhile, he continued to travel to Europe and to Cuba, and—although in 1933 he wrote of Key West, "We have a fine house here, and kids are all well"—Mellow believes he "was plainly restless". In 1933, Hemingway and Pauline went on safari to East Africa. The 10-week trip provided material for "Green Hills of Africa", as well as for the short stories "The Snows of Kilimanjaro" and "The Short Happy Life of Francis Macomber". The couple visited Mombasa, Nairobi, and Machakos in Kenya; then moved on to Tanganyika Territory, where they hunted in the Serengeti, around Lake Manyara, and west and southeast of present-day Tarangire National Park. Their guide was the noted "white hunter" Philip Percival who had guided Theodore Roosevelt on his 1909 safari. 
During these travels, Hemingway contracted amoebic dysentery that caused a prolapsed intestine, and he was evacuated by plane to Nairobi, an experience reflected in "The Snows of Kilimanjaro". On Hemingway's return to Key West in early 1934, he began work on "Green Hills of Africa", which he published in 1935 to mixed reviews. Hemingway bought a boat in 1934, named it the "Pilar", and began sailing the Caribbean. In 1935 he first arrived at Bimini, where he spent a considerable amount of time. During this period he also worked on "To Have and Have Not", published in 1937 while he was in Spain, the only novel he wrote during the 1930s. In 1937, Hemingway agreed to report on the Spanish Civil War for the North American Newspaper Alliance (NANA), arriving in Spain in March with Dutch filmmaker Joris Ivens. Ivens was filming "The Spanish Earth", a propaganda film in support of the Republican side. He wanted Hemingway to replace John Dos Passos as screenwriter, since Dos Passos had left the project when his friend José Robles was arrested and later executed. The incident changed Dos Passos' initially positive opinion of the leftist republicans, creating a rift between him and Hemingway, who later spread a rumor that Dos Passos left Spain out of cowardice. Journalist and writer Martha Gellhorn, whom Hemingway had met in Key West the previous Christmas (1936), joined him in Spain. Like Hadley, Martha was a St. Louis native, and like Pauline, she had worked for "Vogue" in Paris. Of Martha, Kert explains, "she never catered to him the way other women did". Late in 1937, while in Madrid with Martha, Hemingway wrote his only play, "The Fifth Column", as the city was being bombarded by Francoist forces. He returned to Key West for a few months, then back to Spain twice in 1938, where he was present at the Battle of the Ebro, the last republican stand, and he was among the British and American journalists who were some of the last to leave the battle as they crossed the river. In early 1939, Hemingway crossed to Cuba in his boat to live in the Hotel Ambos Mundos in Havana. This was the separation phase of a slow and painful split from Pauline, which had begun when Hemingway met Martha Gellhorn. Martha soon joined him in Cuba, and they almost immediately rented "Finca Vigia" ("Lookout Farm"), a property outside Havana. Pauline and the children left Hemingway that summer, after the family was reunited during a visit to Wyoming, and when Hemingway's divorce from Pauline was finalized, he and Martha were married on November 20, 1940, in Cheyenne, Wyoming. As he had after his divorce from Hadley, he changed locations, moving his primary summer residence to Ketchum, Idaho, just outside the newly built resort of Sun Valley, and his winter residence to Cuba. Hemingway, who had been disgusted when a Parisian friend allowed his cats to eat from the table, became enamored of cats in Cuba, keeping dozens of them on the property. Gellhorn inspired him to write his most famous novel, "For Whom the Bell Tolls," which he started in March 1939 and finished in July 1940. It was published in October 1940. Consistent with his pattern of moving around while working on a manuscript, he wrote "For Whom the Bell Tolls" in Cuba, Wyoming, and Sun Valley. "For Whom the Bell Tolls" became a Book-of-the-Month Club choice, sold half a million copies within months, was nominated for a Pulitzer Prize, and as Meyers describes it, "triumphantly re-established Hemingway's literary reputation". 
In January 1941, Martha was sent to China on assignment for "Collier's" magazine. Hemingway went with her, sending in dispatches for the newspaper "PM", but in general he disliked China. A 2009 book suggests that during that period he may have been recruited to work for Soviet intelligence agents under the name "Agent Argo". They returned to Cuba before the declaration of war by the United States that December, when he convinced the Cuban government to help him refit the "Pilar", which he intended to use to ambush German submarines off the coast of Cuba. From May 1944 to March 1945, Hemingway was in London and Europe. When Hemingway first arrived in London, he met "Time" magazine correspondent Mary Welsh, with whom he became infatuated. Martha had been forced to cross the Atlantic in a ship filled with explosives because Hemingway refused to help her get a press pass on a plane, and she arrived in London to find Hemingway hospitalized with a concussion from a car accident. Unsympathetic to his plight, she accused him of being a bully and told him that she was "through, absolutely finished". The last time that Hemingway saw Martha was in March 1945 as he was preparing to return to Cuba, and their divorce was finalized later that same year. Meanwhile, he had asked Mary Welsh to marry him on their third meeting. Hemingway was present at the Normandy Landings wearing a large head bandage but, according to Meyers, he was considered "precious cargo" and not allowed ashore. The landing craft came within sight of Omaha Beach before coming under enemy fire and turning back. Hemingway later wrote in "Collier's" that he could see "the first, second, third, fourth and fifth waves of [landing troops] lay where they had fallen, looking like so many heavily laden bundles on the flat pebbly stretch between the sea and first cover". Mellow explains that, on that first day, none of the correspondents were allowed to land and Hemingway was returned to the "Dorothea Dix". Late in July, he attached himself to "the 22nd Infantry Regiment commanded by Col. Charles 'Buck' Lanham, as it drove toward Paris", and Hemingway became de facto leader to a small band of village militia in Rambouillet outside of Paris. Of Hemingway's exploits, Paul Fussell, historian and critic of the literature of the two world wars, remarks: "Hemingway got into considerable trouble playing infantry captain to a group of Resistance people that he gathered because a correspondent is not supposed to lead troops, even if he does it well." This was in fact in contravention of the Geneva Convention, and Hemingway was brought up on formal charges; he said that he "beat the rap" by claiming that he only offered advice. On August 25, he was present at the liberation of Paris although, contrary to the Hemingway legend, he was not the first into the city, nor did he liberate the Ritz. In Paris, he visited Sylvia Beach and Pablo Picasso with Mary Welsh, who joined him there; in a spirit of happiness, he forgave Gertrude Stein. Later that year, he was present at heavy fighting in the Battle of Hürtgen Forest. On December 17, 1944, a feverish and ill Hemingway had himself driven to Luxembourg to cover what was later called The Battle of the Bulge. As soon as he arrived, however, Lanham handed him to the doctors, who hospitalized him with pneumonia; by the time that he recovered a week later, most of the fighting in this battle was over. In 1947, Hemingway was awarded a Bronze Star for his bravery during World War II. 
He was recognized for his valor, having been "under fire in combat areas in order to obtain an accurate picture of conditions", with the commendation that "through his talent of expression, Mr. Hemingway enabled readers to obtain a vivid picture of the difficulties and triumphs of the front-line soldier and his organization in combat". Hemingway said he "was out of business as a writer" from 1942 to 1945 during his residence in Cuba. In 1946 he married Mary, who had an ectopic pregnancy five months later. The Hemingway family suffered a series of accidents and health problems in the years following the war: in a 1945 car accident, he "smashed his knee" and sustained another "deep wound on his forehead"; Mary broke first her right ankle and then her left in successive skiing accidents. A 1947 car accident left Patrick with a head wound and severely ill. Hemingway sank into depression as his literary friends began to die: in 1939 William Butler Yeats and Ford Madox Ford; in 1940 Scott Fitzgerald; in 1941 Sherwood Anderson and James Joyce; in 1946 Gertrude Stein; and the following year in 1947, Max Perkins, Hemingway's long-time Scribner's editor and friend. During this period, he suffered from severe headaches, high blood pressure, weight problems, and eventually diabetes—much of which was the result of previous accidents and many years of heavy drinking. Nonetheless, in January 1946, he began work on "The Garden of Eden", finishing 800 pages by June. During the post–war years, he also began work on a trilogy tentatively titled "The Land", "The Sea" and "The Air", which he wanted to combine in one novel titled "The Sea Book". However, both projects stalled, and Mellow says that Hemingway's inability to continue was "a symptom of his troubles" during these years. In 1948, Hemingway and Mary traveled to Europe, staying in Venice for several months. While there, Hemingway fell in love with the then 19-year-old Adriana Ivancich. The platonic love affair inspired the novel "Across the River and into the Trees", written in Cuba during a time of strife with Mary, and published in 1950 to negative reviews. The following year, furious at the critical reception of "Across the River and Into the Trees", he wrote the draft of "The Old Man and the Sea" in eight weeks, saying that it was "the best I can write ever for all of my life". "The Old Man and the Sea" became a book-of-the-month selection, made Hemingway an international celebrity, and won the Pulitzer Prize in May 1952, a month before he left for his second trip to Africa. In 1954, while in Africa, Hemingway was almost fatally injured in two successive plane crashes. He chartered a sightseeing flight over the Belgian Congo as a Christmas present to Mary. On their way to photograph Murchison Falls from the air, the plane struck an abandoned utility pole and "crash landed in heavy brush". Hemingway's injuries included a head wound, while Mary broke two ribs. The next day, attempting to reach medical care in Entebbe, they boarded a second plane that exploded at take-off, with Hemingway suffering burns and another concussion, this one serious enough to cause leaking of cerebral fluid. They eventually arrived in Entebbe to find reporters covering the story of Hemingway's death. He briefed the reporters and spent the next few weeks recuperating and reading his erroneous obituaries. 
Despite his injuries, Hemingway accompanied Patrick and his wife on a planned fishing expedition in February, but pain caused him to be irascible and difficult to get along with. When a bushfire broke out, he was again injured, sustaining second degree burns on his legs, front torso, lips, left hand and right forearm. Months later in Venice, Mary reported to friends the full extent of Hemingway's injuries: two cracked discs, a kidney and liver rupture, a dislocated shoulder and a broken skull. The accidents may have precipitated the physical deterioration that was to follow. After the plane crashes, Hemingway, who had been "a thinly controlled alcoholic throughout much of his life, drank more heavily than usual to combat the pain of his injuries." In October 1954, Hemingway received the Nobel Prize in Literature. He modestly told the press that Carl Sandburg, Isak Dinesen and Bernard Berenson deserved the prize, but he gladly accepted the prize money. Mellow claims Hemingway "had coveted the Nobel Prize", but when he won it, months after his plane accidents and the ensuing worldwide press coverage, "there must have been a lingering suspicion in Hemingway's mind that his obituary notices had played a part in the academy's decision." Because he was suffering pain from the African accidents, he decided against traveling to Stockholm. Instead he sent a speech to be read, defining the writer's life. From the end of 1955 to early 1956, Hemingway was bedridden. He was told to stop drinking to mitigate liver damage, advice he initially followed but then disregarded. In October 1956, he returned to Europe and met Basque writer Pio Baroja, who was seriously ill and died weeks later. During the trip, Hemingway became sick again and was treated for "high blood pressure, liver disease, and arteriosclerosis". In November 1956, while staying in Paris, he was reminded of trunks he had stored in the Ritz Hotel in 1928 and never retrieved. Upon re-claiming and opening the trunks, Hemingway discovered they were filled with notebooks and writing from his Paris years. Excited about the discovery, when he returned to Cuba in early 1957, he began to shape the recovered work into his memoir "A Moveable Feast". By 1959 he ended a period of intense activity: he finished "A Moveable Feast" (scheduled to be released the following year); brought "True at First Light" to 200,000 words; added chapters to "The Garden of Eden"; and worked on "Islands in the Stream". The last three were stored in a safe deposit box in Havana, as he focused on the finishing touches for "A Moveable Feast". Author Michael Reynolds claims it was during this period that Hemingway slid into depression, from which he was unable to recover. The Finca Vigia became crowded with guests and tourists, as Hemingway, beginning to become unhappy with life there, considered a permanent move to Idaho. In 1959 he bought a home overlooking the Big Wood River, outside Ketchum, and left Cuba—although he apparently remained on easy terms with the Castro government, telling "The New York Times" he was "delighted" with Castro's overthrow of Batista. He was in Cuba in November 1959, between returning from Pamplona and traveling west to Idaho, and the following year for his 60th birthday; however, that year he and Mary decided to leave after hearing the news that Castro wanted to nationalize property owned by Americans and other foreign nationals. 
On July 25, 1960, the Hemingways left Cuba for the last time, leaving art and manuscripts in a bank vault in Havana. After the 1961 Bay of Pigs Invasion, the Finca Vigia was expropriated by the Cuban government, complete with Hemingway's collection of "four to six thousand books". Through the end of the 1950s, Hemingway continued to rework the material that would be published as "A Moveable Feast". In mid-1959, he visited Spain to research a series of bullfighting articles commissioned by "Life" magazine. "Life" wanted only 10,000 words, but the manuscript grew out of control. For the first time in his life unable to organize his writing, he asked A. E. Hotchner to travel to Cuba to help him. Hotchner helped him trim the "Life" piece down to 40,000 words, and Scribner's agreed to a full-length book version ("The Dangerous Summer") of almost 130,000 words. Hotchner found Hemingway to be "unusually hesitant, disorganized, and confused", and suffering badly from failing eyesight. During the summer of 1960, he set up a small office in his New York City apartment and attempted to work. He left New York City for good soon after. He then traveled alone to Spain to be photographed for the front cover of the "Life" magazine piece. A few days later, he was reported in the news to be seriously ill and on the verge of dying, which panicked Mary until she received a cable from him telling her, "Reports false. Enroute Madrid. Love Papa." However, he was seriously ill and believed himself to be on the verge of a breakdown. He was lonely and took to his bed for days, retreating into silence, despite having had the first installments of "The Dangerous Summer" published in "Life" in September 1960 to good reviews. In October, he left Spain for New York, where he refused to leave Mary's apartment on the pretext that he was being watched. She quickly took him to Idaho, where George Saviers (a Sun Valley physician) met them at the train. At this time, Hemingway was constantly worried about money and his safety. He worried about his taxes and that he would never return to Cuba to retrieve the manuscripts he had left there in a bank vault. He became paranoid, thinking the FBI was actively monitoring his movements in Ketchum. The FBI had, in fact, opened a file on him during World War II, when he used the "Pilar" to patrol the waters off Cuba, and J. Edgar Hoover had an agent in Havana watch Hemingway during the 1950s. By the end of November, Mary was at her wits' end, and Saviers suggested Hemingway go to the Mayo Clinic in Minnesota; Hemingway may have believed he was to be treated there for hypertension. The FBI knew Hemingway was at the Mayo Clinic, as an agent later documented in a letter written in January 1961. In an attempt to maintain anonymity, Hemingway was checked in at the Mayo Clinic under Saviers's name. Meyers writes that "an aura of secrecy surrounds Hemingway's treatment at the Mayo" but confirms he was treated with electroconvulsive therapy as many as 15 times in December 1960 and was "released in ruins" in January 1961. Reynolds was able to access Hemingway's records at the Mayo, which indicated that the combination of medications given to Hemingway may have created the depressive state for which he was treated. Three months after Hemingway was released from the Mayo Clinic, when he was back in Ketchum in April 1961, Mary "found Hemingway holding a shotgun" in the kitchen one morning. 
She called Saviers, who sedated him and admitted him to the Sun Valley Hospital; from there he was returned to the Mayo Clinic for more electroshock treatments. He was released in late June and arrived home in Ketchum on June 30. Two days later, in the early morning hours of July 2, 1961, Hemingway "quite deliberately" shot himself with his favorite shotgun. He had unlocked the basement storeroom where his guns were kept, gone upstairs to the front entrance foyer of their Ketchum home, and according to Mellow, shot himself with the "double-barreled shotgun that he had used so often it might have been a friend". Mary called the Sun Valley Hospital, and a doctor quickly arrived at the house and determined Hemingway "had died of a self-inflicted wound to the head". Mary was sedated and taken to the hospital, returning home the next day, where she cleaned the house and saw to the funeral and travel arrangements. Bernice Kert writes that at that time it "did not seem to her a conscious lie when she told the press Ernest's death had been 'accidental'." In a press interview five years later, Mary Hemingway confirmed that her husband had shot himself. Family and friends flew to Ketchum for the funeral, officiated by the local Catholic priest, who believed Hemingway's death had been accidental. Of the funeral (during which an altar boy fainted at the head of the casket), Hemingway's brother Leicester wrote: "It seemed to me Ernest would have approved of it all." He is buried in the Ketchum cemetery. Hemingway's behavior during his final years had been similar to that of his father before he killed himself; his father may have had the genetic disease hemochromatosis, in which the inability to metabolize iron culminates in mental and physical deterioration. Medical records made available in 1991 confirm that Hemingway had been diagnosed with hemochromatosis in early 1961. His sister Ursula and his brother Leicester also killed themselves. In addition to being affected by his physical ailments, Hemingway's health was compromised by his having been a heavy drinker for most of his life. In 1966, a memorial to Ernest Hemingway was placed just north of Sun Valley, above Trail Creek. At its base is inscribed a eulogy Hemingway wrote for a friend several decades earlier: …"Now he will be a part of them forever." "The New York Times" wrote in 1926 of Hemingway's first novel, "No amount of analysis can convey the quality of "The Sun Also Rises". It is a truly gripping story, told in a lean, hard, athletic narrative prose that puts more literary English to shame." "The Sun Also Rises" is written in the spare, tight prose that made Hemingway famous, and, according to James Nagel, "changed the nature of American writing." In 1954, when Hemingway was awarded the Nobel Prize for Literature, it was for "his mastery of the art of narrative, most recently demonstrated in "The Old Man and the Sea", and for the influence that he has exerted on contemporary style." Henry Louis Gates believes Hemingway's style was fundamentally shaped "in reaction to [his] experience of world war". After World War I, he and other modernists "lost faith in the central institutions of Western civilization" by reacting against the elaborate style of 19th-century writers and by creating a style "in which meaning is established through dialogue, through action, and silences—a fiction in which nothing crucial—or at least very little—is stated explicitly." 
Because he began as a writer of short stories, Baker believes Hemingway learned to "get the most from the least, how to prune language, how to multiply intensities and how to tell nothing but the truth in a way that allowed for telling more than the truth." Hemingway called his style the Iceberg Theory: the facts float above water; the supporting structure and symbolism operate out of sight. The concept of the iceberg theory is sometimes referred to as the "theory of omission". Hemingway believed the writer could describe one thing (such as Nick Adams fishing in "The Big Two-Hearted River") though an entirely different thing occurs below the surface (Nick Adams concentrating on fishing to the extent that he does not have to think about anything else). Paul Smith writes that Hemingway's first stories, collected as "In Our Time", showed he was still experimenting with his writing style. He avoided complicated syntax. About 70 percent of the sentences are simple sentences—a childlike syntax without subordination. Jackson Benson believes Hemingway used autobiographical details as framing devices about life in general—not only about his life. For example, Benson postulates that Hemingway used his experiences and drew them out with "what if" scenarios: "what if I were wounded in such a way that I could not sleep at night? What if I were wounded and made crazy, what would happen if I were sent back to the front?" Writing in "The Art of the Short Story", Hemingway explains: "A few things I have found to be true. If you leave out important things or events that you know about, the story is strengthened. If you leave or skip something because you do not know it, the story will be worthless. The test of any story is how very good the stuff that you, not your editors, omit." The simplicity of the prose is deceptive. Zoe Trodd believes Hemingway crafted skeletal sentences in response to Henry James's observation that World War I had "used up words". Hemingway offers a "multi-focal" photographic reality. His iceberg theory of omission is the foundation on which he builds. The syntax, which lacks subordinating conjunctions, creates static sentences. The photographic "snapshot" style creates a collage of images. Many types of internal punctuation (colons, semicolons, dashes, parentheses) are omitted in favor of short declarative sentences. The sentences build on each other, as events build to create a sense of the whole. Multiple strands exist in one story; an "embedded text" bridges to a different angle. He also uses other cinematic techniques of "cutting" quickly from one scene to the next; or of "splicing" a scene into another. Intentional omissions allow the reader to fill the gap, as though responding to instructions from the author, and create three-dimensional prose. Hemingway habitually used the word "and" in place of commas. This use of polysyndeton may serve to convey immediacy. Hemingway's polysyndetonic sentence—or in later works his use of subordinate clauses—uses conjunctions to juxtapose startling visions and images. Benson compares them to haikus. Many of Hemingway's followers misinterpreted his lead and frowned upon all expression of emotion; Saul Bellow satirized this style as "Do you have emotions? Strangle them." However, Hemingway's intent was not to eliminate emotion, but to portray it more scientifically. 
Hemingway thought it would be easy, and pointless, to describe emotions; he sculpted collages of images in order to grasp "the real thing, the sequence of motion and fact which made the emotion and which would be as valid in a year or in ten years or, with luck and if you stated it purely enough, always". This use of an image as an objective correlative is characteristic of Ezra Pound, T. S. Eliot, James Joyce, and Proust. Hemingway's letters refer to Proust's "Remembrance of Things Past" several times over the years, and indicate he read the book at least twice. The popularity of Hemingway's work depends on its themes of love, war, wilderness and loss, all of which are strongly evident in the body of work. These are recurring themes in American literature. Critic Leslie Fiedler sees the theme he defines as "The Sacred Land"—the American West—extended in Hemingway's work to include mountains in Spain, Switzerland and Africa, and to the streams of Michigan. The American West is given a symbolic nod with the naming of the "Hotel Montana" in "The Sun Also Rises" and "For Whom the Bell Tolls". According to Stoltzfus and Fiedler, in Hemingway's work, nature is a place for rebirth and rest; and it is where the hunter or fisherman might experience a moment of transcendence when they kill their prey. Nature is where men exist without women: men fish; men hunt; men find redemption in nature. Although Hemingway does write about sports, such as fishing, Carlos Baker notes the emphasis is more on the athlete than the sport. At its core, much of Hemingway's work can be viewed in the light of American naturalism, evident in detailed descriptions such as those in "Big Two-Hearted River". Fiedler believes Hemingway inverts the American literary theme of the evil "Dark Woman" versus the good "Light Woman". The dark woman—Brett Ashley of "The Sun Also Rises"—is a goddess; the light woman—Margot Macomber of "The Short Happy Life of Francis Macomber"—is a murderess. Robert Scholes admits that early Hemingway stories, such as "A Very Short Story", present "a male character favorably and a female unfavorably". According to Rena Sanderson, early Hemingway critics lauded his male-centric world of masculine pursuits, and the fiction divided women into "castrators or love-slaves". Feminist critics attacked Hemingway as "public enemy number one", although more recent re-evaluations of his work "have given new visibility to Hemingway's female characters (and their strengths) and have revealed his own sensitivity to gender issues, thus casting doubts on the old assumption that his writings were one-sidedly masculine." Nina Baym believes that Brett Ashley and Margot Macomber "are the two outstanding examples of Hemingway's 'bitch women'." The theme of women and death is evident in stories as early as "Indian Camp". The theme of death permeates Hemingway's work. Young believes the emphasis in "Indian Camp" was not so much on the woman who gives birth or the father who kills himself, but on Nick Adams who witnesses these events as a child, and becomes a "badly scarred and nervous young man". Hemingway sets the events in "Indian Camp" that shape the Adams persona. Young believes "Indian Camp" holds the "master key" to "what its author was up to for some thirty-five years of his writing career". 
Stoltzfus considers Hemingway's work to be more complex with a representation of the truth inherent in existentialism: if "nothingness" is embraced, then redemption is achieved at the moment of death. Those who face death with dignity and courage live an authentic life. Francis Macomber dies happy because the last hours of his life are authentic; the bullfighter in the corrida represents the pinnacle of a life lived with authenticity. In his paper "The Uses of Authenticity: Hemingway and the Literary Field", Timo Müller writes that Hemingway's fiction is successful because the characters live an "authentic life", and the "soldiers, fishers, boxers and backwoodsmen are among the archetypes of authenticity in modern literature". The theme of emasculation is prevalent in Hemingway's work, most notably in "The Sun Also Rises". Emasculation, according to Fiedler, is a result of a generation of wounded soldiers; and of a generation in which women such as Brett gained emancipation. This also applies to the minor character Frances Clyne, Cohn's girlfriend at the beginning of the book. Her character supports the theme not only because the idea is introduced early in the novel, but also because of the impact she has on Cohn at the start of the book, despite appearing only a few times. Baker believes Hemingway's work emphasizes the "natural" versus the "unnatural". In "Alpine Idyll" the "unnaturalness" of skiing in the high-country late-spring snow is juxtaposed against the "unnaturalness" of the peasant who allowed his wife's dead body to linger too long in the shed during the winter. The skiers and peasant retreat to the valley, to the "natural" spring, for redemption. Susan Beegel has written that some more recent critics—writing through the lens of a more modern social and cultural context several decades after Hemingway's death, and more than half a century after his novels were first published—have characterized the social era portrayed in his fiction as misogynistic and homophobic. In her 1996 essay, "Critical Reception", Beegel analyzed four decades of Hemingway criticism and found that "critics interested in multiculturalism", particularly in the 1980s, simply ignored Hemingway, although some "apologetics" have been written. Typical, according to Beegel, is an analysis of Hemingway's 1926 novel, "The Sun Also Rises", in which a critic contended: "Hemingway never lets the reader forget that Cohn is a Jew, not an unattractive character who happens to be a Jew but a character who is unattractive because he is a Jew." Also during the 1980s, according to Beegel, criticism was published that focused on investigating the "horror of homosexuality" and the "racism" typical of the social era portrayed in Hemingway's fiction. In an overall assessment of Hemingway's work Beegel has written: "Throughout his remarkable body of fiction, he tells the truth about human fear, guilt, betrayal, violence, cruelty, drunkenness, hunger, greed, apathy, ecstasy, tenderness, love and lust." Hemingway's legacy to American literature is his style: writers who came after him emulated it or avoided it. After his reputation was established with the publication of "The Sun Also Rises", he became the spokesperson for the post–World War I generation, having established a style to follow. His books were burned in Berlin in 1933, "as being a monument of modern decadence", and disavowed by his parents as "filth". 
Reynolds asserts the legacy is that "[Hemingway] left stories and novels so starkly moving that some have become part of our cultural heritage." Benson believes the details of Hemingway's life have become a "prime vehicle for exploitation", resulting in a Hemingway industry. Hemingway scholar Hallengren believes the "hard-boiled style" and the machismo must be separated from the author himself. Benson agrees, describing him as introverted and private as J. D. Salinger, although Hemingway masked his nature with braggadocio. During World War II, Salinger met and corresponded with Hemingway, whom he acknowledged as an influence. In a letter to Hemingway, Salinger claimed their talks "had given him his only hopeful minutes of the entire war" and jokingly "named himself national chairman of the Hemingway Fan Clubs." The extent of Hemingway's influence is seen in the tributes and echoes of his fiction in popular culture. A minor planet, discovered in 1978 by Soviet astronomer Nikolai Chernykh, was named for him (3656 Hemingway); Ray Bradbury wrote "The Kilimanjaro Device", with Hemingway transported to the top of Mount Kilimanjaro; the 1993 motion picture "Wrestling Ernest Hemingway", about the friendship of two retired men, Irish and Cuban, in a seaside town in Florida, starred Robert Duvall, Richard Harris, Shirley MacLaine, Sandra Bullock, and Piper Laurie. The influence is evident with the many restaurants named "Hemingway"; and the proliferation of bars called "Harry's" (a nod to the bar in "Across the River and Into the Trees"). A line of Hemingway furniture, promoted by Hemingway's son Jack (Bumby), has pieces such as the "Kilimanjaro" bedside table, and a "Catherine" slip-covered sofa. Montblanc offers a Hemingway fountain pen, and a line of Hemingway safari clothes has been created. The International Imitation Hemingway Competition was created in 1977 to publicly acknowledge his influence and the comically misplaced efforts of lesser authors to imitate his style. Entrants are encouraged to submit one "really good page of really bad Hemingway" and winners are flown to Italy to Harry's Bar. In 1965, Mary Hemingway established the Hemingway Foundation and in the 1970s she donated her husband's papers to the John F. Kennedy Library. In 1980, a group of Hemingway scholars gathered to assess the donated papers, subsequently forming the Hemingway Society, "committed to supporting and fostering Hemingway scholarship." Almost exactly 35 years after Hemingway's death, on July 1, 1996, his granddaughter Margaux Hemingway died in Santa Monica, California. Margaux was a supermodel and actress, co-starring with her younger sister Mariel in the 1976 movie "Lipstick". Her death was later ruled a suicide, making her "the fifth person in four generations of her family to commit suicide." Three houses associated with Hemingway are listed on the U.S. National Register of Historic Places: the Ernest Hemingway Cottage on Walloon Lake, Michigan, designated in 1968; the Ernest Hemingway House in Key West, designated in 1968; and the Ernest and Mary Hemingway House in Ketchum, designated in 2015. His boyhood home, in Oak Park, Illinois, is a museum and archive dedicated to Hemingway. In 2012, he was inducted into the Chicago Literary Hall of Fame. Electron The electron is a subatomic particle, symbol e⁻ or β⁻, whose electric charge is negative one elementary charge. 
Electrons belong to the first generation of the lepton particle family, and are generally thought to be elementary particles because they have no known components or substructure. The electron has a mass that is approximately 1/1836 that of the proton. Quantum mechanical properties of the electron include an intrinsic angular momentum (spin) of a half-integer value, expressed in units of the reduced Planck constant, "ħ". As it is a fermion, no two electrons can occupy the same quantum state, in accordance with the Pauli exclusion principle. Like all elementary particles, electrons exhibit properties of both particles and waves: they can collide with other particles and can be diffracted like light. The wave properties of electrons are easier to observe with experiments than those of other particles like neutrons and protons because electrons have a lower mass and hence a longer de Broglie wavelength for a given energy. Electrons play an essential role in numerous physical phenomena, such as electricity, magnetism, chemistry and thermal conductivity, and they also participate in gravitational, electromagnetic and weak interactions. Since an electron has charge, it has a surrounding electric field, and if that electron is moving relative to an observer, it will generate a magnetic field. Electromagnetic fields produced from other sources will affect the motion of an electron according to the Lorentz force law. Electrons radiate or absorb energy in the form of photons when they are accelerated. Laboratory instruments are capable of trapping individual electrons as well as electron plasma by the use of electromagnetic fields. Special telescopes can detect electron plasma in outer space. Electrons are involved in many applications such as electronics, welding, cathode ray tubes, electron microscopes, radiation therapy, lasers, gaseous ionization detectors and particle accelerators. Interactions involving electrons with other subatomic particles are of interest in fields such as chemistry and nuclear physics. The attractive Coulomb force between the positive protons within atomic nuclei and the negative electrons outside them binds the two together into atoms. Ionization or differences in the proportions of negative electrons versus positive nuclei changes the binding energy of an atomic system. The exchange or sharing of the electrons between two or more atoms is the main cause of chemical bonding. In 1838, British natural philosopher Richard Laming first hypothesized the concept of an indivisible quantity of electric charge to explain the chemical properties of atoms. Irish physicist George Johnstone Stoney named this charge 'electron' in 1891, and J. J. Thomson and his team of British physicists identified it as a particle in 1897. Electrons can also participate in nuclear reactions, such as nucleosynthesis in stars, where they are known as beta particles. Electrons can be created through beta decay of radioactive isotopes and in high-energy collisions, for instance when cosmic rays enter the atmosphere. The antiparticle of the electron is called the positron; it is identical to the electron except that it carries electrical and other charges of the opposite sign. When an electron collides with a positron, both particles can be annihilated, producing gamma ray photons. The ancient Greeks noticed that amber attracted small objects when rubbed with fur. Along with lightning, this phenomenon is one of humanity's earliest recorded experiences with electricity. 
In his 1600 treatise "De Magnete", the English scientist William Gilbert coined the New Latin term "electrica" to refer to substances that, like amber, attract small objects after being rubbed. Both "electric" and "electricity" are derived from the Latin "electrum" (also the root of the alloy of the same name), which came from the Greek word for amber, ἤλεκτρον ("ēlektron"). In the early 1700s, Francis Hauksbee and French chemist Charles François du Fay independently discovered what they believed were two kinds of frictional electricity—one generated from rubbing glass, the other from rubbing resin. From this, du Fay theorized that electricity consists of two electrical fluids, "vitreous" and "resinous", that are separated by friction, and that neutralize each other when combined. American scientist Ebenezer Kinnersley later also independently reached the same conclusion. A decade later Benjamin Franklin proposed that electricity was not from different types of electrical fluid, but a single electrical fluid showing an excess (+) or deficit (-). He gave them the modern charge nomenclature of positive and negative respectively. Franklin thought of the charge carrier as being positive, but he did not correctly identify which situation was a surplus of the charge carrier, and which situation was a deficit. Between 1838 and 1851, British natural philosopher Richard Laming developed the idea that an atom is composed of a core of matter surrounded by subatomic particles that had unit electric charges. Beginning in 1846, German physicist Wilhelm Weber theorized that electricity was composed of positively and negatively charged fluids, and their interaction was governed by the inverse square law. After studying the phenomenon of electrolysis in 1874, Irish physicist George Johnstone Stoney suggested that there existed a "single definite quantity of electricity", the charge of a monovalent ion. He was able to estimate the value of this elementary charge "e" by means of Faraday's laws of electrolysis. However, Stoney believed these charges were permanently attached to atoms and could not be removed. In 1881, German physicist Hermann von Helmholtz argued that both positive and negative charges were divided into elementary parts, each of which "behaves like atoms of electricity". Stoney initially coined the term "electrolion" in 1881. Ten years later, he switched to "electron" to describe these elementary charges, writing in 1894: "... an estimate was made of the actual amount of this most remarkable fundamental unit of electricity, for which I have since ventured to suggest the name "electron"". A 1906 proposal to change to "electrion" failed because Hendrik Lorentz preferred to keep "electron". The word "electron" is a combination of the words "electric" and "ion". The suffix -"on", which is now used to designate other subatomic particles, such as a proton or neutron, is in turn derived from electron. The German physicist Johann Wilhelm Hittorf studied electrical conductivity in rarefied gases: in 1869, he discovered a glow emitted from the cathode that increased in size with decrease in gas pressure. In 1876, the German physicist Eugen Goldstein showed that the rays from this glow cast a shadow, and he dubbed the rays cathode rays. During the 1870s, the English chemist and physicist Sir William Crookes developed the first cathode ray tube to have a high vacuum inside. He then showed that the luminescence rays appearing within the tube carried energy and moved from the cathode to the anode. 
Furthermore, by applying a magnetic field, he was able to deflect the rays, thereby demonstrating that the beam behaved as though it were negatively charged. In 1879, he proposed that these properties could be explained by what he termed 'radiant matter'. He suggested that this was a fourth state of matter, consisting of negatively charged molecules that were being projected with high velocity from the cathode. The German-born British physicist Arthur Schuster expanded upon Crookes' experiments by placing metal plates parallel to the cathode rays and applying an electric potential between the plates. The field deflected the rays toward the positively charged plate, providing further evidence that the rays carried negative charge. By measuring the amount of deflection for a given level of current, in 1890 Schuster was able to estimate the charge-to-mass ratio of the ray components. However, this produced a value that was more than a thousand times greater than what was expected, so little credence was given to his calculations at the time. In 1892 Hendrik Lorentz suggested that the mass of these particles (electrons) could be a consequence of their electric charge. In 1896, the British physicist J. J. Thomson, with his colleagues John S. Townsend and H. A. Wilson, performed experiments indicating that cathode rays really were unique particles, rather than waves, atoms or molecules as was believed earlier. Thomson made good estimates of both the charge "e" and the mass "m", finding that cathode ray particles, which he called "corpuscles," had perhaps one thousandth of the mass of the least massive ion known: hydrogen. He showed that their charge-to-mass ratio, "e"/"m", was independent of cathode material. He further showed that the negatively charged particles produced by radioactive materials, by heated materials and by illuminated materials were universal. The name electron was again proposed for these particles by the Irish physicist George Johnstone Stoney, and the name has since gained universal acceptance. While studying naturally fluorescing minerals in 1896, the French physicist Henri Becquerel discovered that they emitted radiation without any exposure to an external energy source. These radioactive materials became the subject of much interest by scientists, including the New Zealand physicist Ernest Rutherford who discovered they emitted particles. He designated these particles alpha and beta, on the basis of their ability to penetrate matter. In 1900, Becquerel showed that the beta rays emitted by radium could be deflected by an electric field, and that their mass-to-charge ratio was the same as for cathode rays. This evidence strengthened the view that electrons existed as components of atoms. The electron's charge was more carefully measured by the American physicists Robert Millikan and Harvey Fletcher in their oil-drop experiment of 1909, the results of which were published in 1911. This experiment used an electric field to prevent a charged droplet of oil from falling as a result of gravity. This device could measure the electric charge from as few as 1–150 ions with an error margin of less than 0.3%. Comparable experiments had been done earlier by Thomson's team, using clouds of charged water droplets generated by electrolysis, and in 1911 by Abram Ioffe, who independently obtained the same result as Millikan using charged microparticles of metals, then published his results in 1913. 
However, oil drops were more stable than water drops because of their slower evaporation rate, and thus more suited to precise experimentation over longer periods of time. Around the beginning of the twentieth century, it was found that under certain conditions a fast-moving charged particle caused a condensation of supersaturated water vapor along its path. In 1911, Charles Wilson used this principle to devise his cloud chamber so he could photograph the tracks of charged particles, such as fast-moving electrons. By 1914, experiments by physicists Ernest Rutherford, Henry Moseley, James Franck and Gustav Hertz had largely established the structure of an atom as a dense nucleus of positive charge surrounded by lower-mass electrons. In 1913, Danish physicist Niels Bohr postulated that electrons resided in quantized energy states, with their energies determined by the angular momentum of the electron's orbit about the nucleus. The electrons could move between those states, or orbits, by the emission or absorption of photons of specific frequencies. By means of these quantized orbits, he accurately explained the spectral lines of the hydrogen atom. However, Bohr's model failed to account for the relative intensities of the spectral lines and it was unsuccessful in explaining the spectra of more complex atoms. Chemical bonds between atoms were explained by Gilbert Newton Lewis, who in 1916 proposed that a covalent bond between two atoms is maintained by a pair of electrons shared between them. Later, in 1927, Walter Heitler and Fritz London gave the full explanation of the electron-pair formation and chemical bonding in terms of quantum mechanics. In 1919, the American chemist Irving Langmuir elaborated on Lewis's static model of the atom and suggested that all electrons were distributed in successive "concentric (nearly) spherical shells, all of equal thickness". In turn, he divided the shells into a number of cells each of which contained one pair of electrons. With this model Langmuir was able to qualitatively explain the chemical properties of all elements in the periodic table, which were known to largely repeat themselves according to the periodic law. In 1924, Austrian physicist Wolfgang Pauli observed that the shell-like structure of the atom could be explained by a set of four parameters that defined every quantum energy state, as long as each state was occupied by no more than a single electron. This prohibition against more than one electron occupying the same quantum energy state became known as the Pauli exclusion principle. The physical mechanism to explain the fourth parameter, which had two distinct possible values, was provided by the Dutch physicists Samuel Goudsmit and George Uhlenbeck. In 1925, they suggested that an electron, in addition to the angular momentum of its orbit, possesses an intrinsic angular momentum and magnetic dipole moment. This is analogous to the rotation of the Earth on its axis as it orbits the Sun. The intrinsic angular momentum became known as spin, and explained the previously mysterious splitting of spectral lines observed with a high-resolution spectrograph; this phenomenon is known as fine structure splitting. In his 1924 dissertation "Recherches sur la théorie des quanta" (Research on Quantum Theory), French physicist Louis de Broglie hypothesized that all matter can be represented as a de Broglie wave in the manner of light. That is, under the appropriate conditions, electrons and other matter would show properties of either particles or waves. 
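As a rough numerical illustration of the de Broglie relation just described, the following Python sketch compares the non-relativistic wavelength λ = h/√(2mE) of an electron and a neutron carrying the same kinetic energy; the constants are rounded standard values, and the 1 eV energy is chosen purely for illustration. It shows why the much lighter electron has a far longer wavelength, and hence more easily observed wave behavior, as noted earlier.

```python
# Illustrative sketch: non-relativistic de Broglie wavelength, lambda = h / p,
# with momentum p = sqrt(2 * m * E) for a particle of mass m and kinetic energy E.
# Constants are rounded standard values; the 1 eV energy is an arbitrary example.

import math

H = 6.626e-34           # Planck constant, J*s
EV = 1.602e-19          # one electronvolt, J
M_ELECTRON = 9.109e-31  # electron mass, kg
M_NEUTRON = 1.675e-27   # neutron mass, kg

def de_broglie_wavelength(mass_kg: float, kinetic_energy_ev: float) -> float:
    """Return the non-relativistic de Broglie wavelength in metres."""
    momentum = math.sqrt(2.0 * mass_kg * kinetic_energy_ev * EV)
    return H / momentum

if __name__ == "__main__":
    energy_ev = 1.0  # same kinetic energy for both particles
    lam_e = de_broglie_wavelength(M_ELECTRON, energy_ev)
    lam_n = de_broglie_wavelength(M_NEUTRON, energy_ev)
    print(f"electron wavelength: {lam_e:.2e} m")   # roughly 1.2e-9 m
    print(f"neutron wavelength:  {lam_n:.2e} m")   # roughly 2.9e-11 m
    print(f"ratio: {lam_e / lam_n:.0f}")           # about sqrt(m_n / m_e), ~43
```

Because the wavelength scales as 1/√m at fixed energy, the electron's wavelength comes out roughly 43 times longer than the neutron's, which is the quantitative content of the earlier statement that lower mass means a longer de Broglie wavelength for a given energy.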
The corpuscular properties of a particle are demonstrated when it is shown to have a localized position in space along its trajectory at any given moment. The wave-like nature of light is displayed, for example, when a beam of light is passed through parallel slits thereby creating interference patterns. In 1927, George Paget Thomson discovered that the interference effect was produced when a beam of electrons was passed through thin metal foils, and the American physicists Clinton Davisson and Lester Germer observed the same effect in the reflection of electrons from a crystal of nickel. De Broglie's prediction of a wave nature for electrons led Erwin Schrödinger to postulate a wave equation for electrons moving under the influence of the nucleus in the atom. In 1926, this equation, the Schrödinger equation, successfully described how electron waves propagated. Rather than yielding a solution that determined the location of an electron over time, this wave equation could be used to predict the probability of finding an electron near a position, especially a position near where the electron was bound in space, for which the electron wave equations did not change in time. This approach led to a second formulation of quantum mechanics (the first by Heisenberg in 1925), and solutions of Schrödinger's equation, like Heisenberg's, provided derivations of the energy states of an electron in a hydrogen atom that were equivalent to those that had been derived first by Bohr in 1913, and that were known to reproduce the hydrogen spectrum. Once spin and the interaction between multiple electrons were describable, quantum mechanics made it possible to predict the configuration of electrons in atoms with atomic numbers greater than hydrogen. In 1928, building on Wolfgang Pauli's work, Paul Dirac produced a model of the electron – the Dirac equation, consistent with relativity theory, by applying relativistic and symmetry considerations to the Hamiltonian formulation of the quantum mechanics of the electromagnetic field. In order to resolve some problems within his relativistic equation, Dirac developed in 1930 a model of the vacuum as an infinite sea of particles with negative energy, later dubbed the Dirac sea. This led him to predict the existence of a positron, the antimatter counterpart of the electron. This particle was discovered in 1932 by Carl Anderson, who proposed calling standard electrons "negatons" and using "electron" as a generic term to describe both the positively and negatively charged variants. In 1947, Willis Lamb, working in collaboration with graduate student Robert Retherford, found that certain quantum states of the hydrogen atom, which should have the same energy, were shifted in relation to each other; the difference came to be called the Lamb shift. About the same time, Polykarp Kusch, working with Henry M. Foley, discovered that the magnetic moment of the electron is slightly larger than predicted by Dirac's theory. This small difference was later called the anomalous magnetic dipole moment of the electron. This difference was later explained by the theory of quantum electrodynamics, developed by Sin-Itiro Tomonaga, Julian Schwinger and Richard Feynman in the late 1940s. With the development of the particle accelerator during the first half of the twentieth century, physicists began to delve deeper into the properties of subatomic particles. The first successful attempt to accelerate electrons using electromagnetic induction was made in 1942 by Donald Kerst.
His initial betatron reached energies of 2.3 MeV, while subsequent betatrons achieved 300 MeV. In 1947, synchrotron radiation was discovered with a 70 MeV electron synchrotron at General Electric. This radiation was caused by the acceleration of electrons through a magnetic field as they moved near the speed of light. With a beam energy of 1.5 GeV, the first high-energy particle collider was ADONE, which began operations in 1968. This device accelerated electrons and positrons in opposite directions, effectively doubling the energy of their collision when compared to striking a static target with an electron. The Large Electron–Positron Collider (LEP) at CERN, which was operational from 1989 to 2000, achieved collision energies of 209 GeV and made important measurements for the Standard Model of particle physics. Individual electrons can now be easily confined in ultra-small CMOS transistors operated at cryogenic temperatures over a range of −269 °C (4 K) to about −258 °C (15 K). The electron wavefunction spreads in a semiconductor lattice and negligibly interacts with the valence band electrons, so it can be treated in the single particle formalism, by replacing its mass with the effective mass tensor. In the Standard Model of particle physics, electrons belong to the group of subatomic particles called leptons, which are believed to be fundamental or elementary particles. Electrons have the lowest mass of any charged lepton (or electrically charged particle of any type) and belong to the first generation of fundamental particles. The second and third generation contain charged leptons, the muon and the tau, which are identical to the electron in charge, spin and interactions, but are more massive. Leptons differ from the other basic constituent of matter, the quarks, by their lack of strong interaction. All members of the lepton group are fermions, because they all have half-odd integer spin; the electron has spin 1/2. The invariant mass of an electron is approximately 9.109×10⁻³¹ kilograms, or 5.486×10⁻⁴ atomic mass units. On the basis of Einstein's principle of mass–energy equivalence, this mass corresponds to a rest energy of 0.511 MeV. The ratio between the mass of a proton and that of an electron is about 1836. Astronomical measurements show that the proton-to-electron mass ratio has held the same value, as is predicted by the Standard Model, for at least half the age of the universe. Electrons have an electric charge of −1.602×10⁻¹⁹ coulombs, the magnitude of which is used as a standard unit of charge for subatomic particles and is also called the elementary charge. This elementary charge has been measured with a relative standard uncertainty of a few parts in 10⁸ or better. Within the limits of experimental accuracy, the electron charge is identical to the charge of a proton, but with the opposite sign. As the symbol "e" is used for the elementary charge, the electron is commonly symbolized by e⁻, where the minus sign indicates the negative charge. The positron is symbolized by e⁺ because it has the same properties as the electron but with a positive rather than negative charge. The electron has an intrinsic angular momentum or spin of "ħ"/2. This property is usually stated by referring to the electron as a spin-1/2 particle. For such particles the spin magnitude is (√3/2)"ħ", while the result of the measurement of a projection of the spin on any axis can only be ±"ħ"/2. In addition to spin, the electron has an intrinsic magnetic moment along its spin axis. It is approximately equal to one Bohr magneton, "μ"B = "e""ħ"/(2"m"ₑ), which is a physical constant equal to about 9.274×10⁻²⁴ joules per tesla.
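As a quick numerical check of the figures quoted above, the rest energy, the proton-to-electron mass ratio and the Bohr magneton can be recomputed from rounded constants; this is only an illustrative sketch, not a source of reference values:

```python
# Numerical check of the electron constants quoted above (rounded values).
m_e  = 9.109e-31    # electron mass, kg
m_p  = 1.6726e-27   # proton mass, kg
e    = 1.602e-19    # elementary charge, C
c    = 2.998e8      # speed of light, m/s
hbar = 1.055e-34    # reduced Planck constant, J*s

rest_energy_J   = m_e * c ** 2
rest_energy_MeV = rest_energy_J / e / 1e6
print(rest_energy_MeV)        # ~0.511 MeV

print(m_p / m_e)              # ~1836, the proton-to-electron mass ratio

bohr_magneton = e * hbar / (2 * m_e)
print(bohr_magneton)          # ~9.27e-24 J/T
```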
The orientation of the spin with respect to the momentum of the electron defines the property of elementary particles known as helicity. The electron has no known substructure and it is assumed to be a point particle with a point charge and no spatial extent. In classical physics, the angular momentum and magnetic moment of an object depend upon its physical dimensions. Hence, the concept of a dimensionless electron possessing these properties contrasts with experimental observations in Penning traps, which point to a finite non-zero radius of the electron. A possible explanation of this paradoxical situation is given below in the "Virtual particles" subsection by taking into consideration the Foldy-Wouthuysen transformation. The issue of the radius of the electron is a challenging problem of modern theoretical physics. The admission of the hypothesis of a finite radius of the electron is incompatible with the premises of the theory of relativity. On the other hand, a point-like electron (zero radius) generates serious mathematical difficulties due to the self-energy of the electron tending to infinity. Observation of a single electron in a Penning trap suggests the upper limit of the particle's radius to be 10⁻²² meters. An upper bound of the electron radius of 10⁻¹⁸ meters can be derived using the uncertainty relation in energy. There "is" also a physical constant called the "classical electron radius", with the much larger value of 2.8179×10⁻¹⁵ m, greater than the radius of the proton. However, the terminology comes from a simplistic calculation that ignores the effects of quantum mechanics; in reality, the so-called classical electron radius has little to do with the true fundamental structure of the electron. There are elementary particles that spontaneously decay into less massive particles. An example is the muon, with a mean lifetime of 2.2×10⁻⁶ seconds, which decays into an electron, a muon neutrino and an electron antineutrino. The electron, on the other hand, is thought to be stable on theoretical grounds: the electron is the least massive particle with non-zero electric charge, so its decay would violate charge conservation. The experimental lower bound for the electron's mean lifetime is 6.6×10²⁸ years, at a 90% confidence level. As with all particles, electrons can act as waves. This is called the wave–particle duality and can be demonstrated using the double-slit experiment. The wave-like nature of the electron allows it to pass through two parallel slits simultaneously, rather than just one slit as would be the case for a classical particle. In quantum mechanics, the wave-like property of one particle can be described mathematically as a complex-valued function, the wave function, commonly denoted by the Greek letter psi ("ψ"). When the absolute value of this function is squared, it gives the probability that a particle will be observed near a location—a probability density. Electrons are identical particles because they cannot be distinguished from each other by their intrinsic physical properties. In quantum mechanics, this means that a pair of interacting electrons must be able to swap positions without an observable change to the state of the system. The wave function of fermions, including electrons, is antisymmetric, meaning that it changes sign when two electrons are swapped; that is, "ψ"("r"₁, "r"₂) = −"ψ"("r"₂, "r"₁), where the variables "r"₁ and "r"₂ correspond to the first and second electrons, respectively. Since the absolute value is not changed by a sign swap, this corresponds to equal probabilities.
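A toy numerical sketch can make the antisymmetry property concrete. The single-particle states phi_a and phi_b below are arbitrary illustrative functions, not anything from the article; the point is only that swapping the two coordinates flips the sign, and that putting both electrons at the same position or in the same state gives zero amplitude:

```python
# Toy illustration of wave-function antisymmetry for two identical fermions.
import math

def phi_a(x):
    return math.exp(-x ** 2)          # hypothetical single-particle state a

def phi_b(x):
    return x * math.exp(-x ** 2)      # hypothetical single-particle state b

def psi(x1, x2, state1=phi_a, state2=phi_b):
    """Antisymmetrized two-electron wave function (unnormalized)."""
    return state1(x1) * state2(x2) - state1(x2) * state2(x1)

print(psi(0.3, 1.2), psi(1.2, 0.3))   # equal magnitude, opposite sign
print(psi(0.7, 0.7))                  # zero: both electrons at the same position
print(psi(0.3, 1.2, phi_a, phi_a))    # zero: both electrons in the same state (Pauli exclusion)
```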
Bosons, such as the photon, have symmetric wave functions instead. In the case of antisymmetry, solutions of the wave equation for interacting electrons result in a zero probability that each pair will occupy the same location or state. This is responsible for the Pauli exclusion principle, which precludes any two electrons from occupying the same quantum state. This principle explains many of the properties of electrons. For example, it causes groups of bound electrons to occupy different orbitals in an atom, rather than all overlapping each other in the same orbit. In a simplified picture, every photon spends some time as a combination of a virtual electron plus its antiparticle, the virtual positron, which rapidly annihilate each other shortly thereafter. The combination of the energy variation needed to create these particles, and the time during which they exist, fall under the threshold of detectability expressed by the Heisenberg uncertainty relation, Δ"E" · Δ"t" ≥ "ħ". In effect, the energy needed to create these virtual particles, Δ"E", can be "borrowed" from the vacuum for a period of time, Δ"t", so that their product is no more than the reduced Planck constant, "ħ". Thus, for a virtual electron, Δ"t" is at most about 1.3×10⁻²¹ seconds. While an electron–positron virtual pair is in existence, the Coulomb force from the ambient electric field surrounding an electron causes a created positron to be attracted to the original electron, while a created electron experiences a repulsion. This causes what is called vacuum polarization. In effect, the vacuum behaves like a medium having a dielectric permittivity greater than unity. Thus the effective charge of an electron is actually smaller than its true value, and the charge decreases with increasing distance from the electron. This polarization was confirmed experimentally in 1997 using the Japanese TRISTAN particle accelerator. Virtual particles cause a comparable shielding effect for the mass of the electron. The interaction with virtual particles also explains the small (about 0.1%) deviation of the intrinsic magnetic moment of the electron from the Bohr magneton (the anomalous magnetic moment). The extraordinarily precise agreement of this predicted difference with the experimentally determined value is viewed as one of the great achievements of quantum electrodynamics. The apparent paradox (mentioned above in the properties subsection) of a point particle electron having intrinsic angular momentum and magnetic moment can be explained by the formation of virtual photons in the electric field generated by the electron. These photons cause the electron to shift about in a jittery fashion (known as zitterbewegung), which results in a net circular motion with precession. This motion produces both the spin and the magnetic moment of the electron. In atoms, this creation of virtual photons explains the Lamb shift observed in spectral lines. An electron generates an electric field that exerts an attractive force on a particle with a positive charge, such as the proton, and a repulsive force on a particle with a negative charge. The strength of this force in the nonrelativistic approximation is determined by Coulomb's inverse square law. When an electron is in motion, it generates a magnetic field. The Ampère-Maxwell law relates the magnetic field to the mass motion of electrons (the current) with respect to an observer. This property of induction supplies the magnetic field that drives an electric motor.
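The "at most" figure for Δ"t" quoted above follows directly from the uncertainty relation if Δ"E" is taken as the electron rest energy; the short sketch below, using rounded constants purely for illustration, reproduces it:

```python
# Rough estimate of the time window allowed by Delta_E * Delta_t ~ hbar
# for a virtual electron, taking Delta_E equal to the electron rest energy.
hbar = 1.055e-34   # reduced Planck constant, J*s
m_e  = 9.109e-31   # electron mass, kg
c    = 2.998e8     # speed of light, m/s

delta_E = m_e * c ** 2       # ~8.2e-14 J (0.511 MeV)
delta_t = hbar / delta_E
print(delta_t)               # ~1.3e-21 s
```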
The electromagnetic field of an arbitrary moving charged particle is expressed by the Liénard–Wiechert potentials, which are valid even when the particle's speed is close to that of light (relativistic). When an electron is moving through a magnetic field, it is subject to the Lorentz force that acts perpendicularly to the plane defined by the magnetic field and the electron velocity. This centripetal force causes the electron to follow a helical trajectory through the field at a radius called the gyroradius. The acceleration from this curving motion induces the electron to radiate energy in the form of synchrotron radiation. The energy emission in turn causes a recoil of the electron, known as the Abraham–Lorentz–Dirac force, which creates a friction that slows the electron. This force is caused by a back-reaction of the electron's own field upon itself. Photons mediate electromagnetic interactions between particles in quantum electrodynamics. An isolated electron at a constant velocity cannot emit or absorb a real photon; doing so would violate conservation of energy and momentum. Instead, virtual photons can transfer momentum between two charged particles. This exchange of virtual photons, for example, generates the Coulomb force. Energy emission can occur when a moving electron is deflected by a charged particle, such as a proton. The acceleration of the electron results in the emission of Bremsstrahlung radiation. An inelastic collision between a photon (light) and a solitary (free) electron is called Compton scattering. This collision results in a transfer of momentum and energy between the particles, which modifies the wavelength of the photon by an amount called the Compton shift. The maximum magnitude of this wavelength shift is "h"/("m"ₑ"c"), which is known as the Compton wavelength. For an electron, it has a value of 2.43×10⁻¹² m. When the wavelength of the light is long (for instance, the wavelength of visible light is 0.4–0.7 μm) the wavelength shift becomes negligible. Such interaction between the light and free electrons is called Thomson scattering or linear Thomson scattering. The relative strength of the electromagnetic interaction between two charged particles, such as an electron and a proton, is given by the fine-structure constant. This value is a dimensionless quantity formed by the ratio of two energies: the electrostatic energy of attraction (or repulsion) at a separation of one Compton wavelength, and the rest energy of the charge. It is given by "α" ≈ 7.297×10⁻³, which is approximately equal to 1/137. When electrons and positrons collide, they annihilate each other, giving rise to two or more gamma ray photons. If the electron and positron have negligible momentum, a positronium atom can form before annihilation results in two or three gamma ray photons totalling 1.022 MeV. On the other hand, a high-energy photon can transform into an electron and a positron by a process called pair production, but only in the presence of a nearby charged particle, such as a nucleus. In the theory of electroweak interaction, the left-handed component of the electron's wavefunction forms a weak isospin doublet with the electron neutrino. This means that during weak interactions, electron neutrinos behave like electrons. Either member of this doublet can undergo a charged current interaction by emitting or absorbing a W boson and be converted into the other member. Charge is conserved during this reaction because the W boson also carries a charge, canceling out any net change during the transmutation.
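Both the Compton wavelength and the fine-structure constant quoted above can be recomputed from a handful of rounded constants; the sketch below is only a numerical illustration:

```python
# Numerical check of the electron's Compton wavelength h/(m_e c)
# and the fine-structure constant alpha (rounded constants).
import math

h    = 6.626e-34    # Planck constant, J*s
hbar = 1.055e-34    # reduced Planck constant, J*s
m_e  = 9.109e-31    # electron mass, kg
c    = 2.998e8      # speed of light, m/s
e    = 1.602e-19    # elementary charge, C
eps0 = 8.854e-12    # vacuum permittivity, F/m

compton_wavelength = h / (m_e * c)
print(compton_wavelength)          # ~2.43e-12 m

alpha = e ** 2 / (4 * math.pi * eps0 * hbar * c)
print(alpha, 1 / alpha)            # ~7.3e-3, i.e. roughly 1/137
```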
Charged current interactions are responsible for the phenomenon of beta decay in a radioactive atom. Both the electron and electron neutrino can undergo a neutral current interaction via a Z⁰ boson exchange, and this is responsible for neutrino-electron elastic scattering. An electron can be "bound" to the nucleus of an atom by the attractive Coulomb force. A system of one or more electrons bound to a nucleus is called an atom. If the number of electrons is different from the nucleus's electrical charge, such an atom is called an ion. The wave-like behavior of a bound electron is described by a function called an atomic orbital. Each orbital has its own set of quantum numbers such as energy, angular momentum and projection of angular momentum, and only a discrete set of these orbitals exist around the nucleus. According to the Pauli exclusion principle each orbital can be occupied by up to two electrons, which must differ in their spin quantum number. Electrons can transfer between different orbitals by the emission or absorption of photons with an energy that matches the difference in energy between those orbitals. Other methods of orbital transfer include collisions with particles, such as electrons, and the Auger effect. To escape the atom, the energy of the electron must be increased above its binding energy to the atom. This occurs, for example, with the photoelectric effect, where an incident photon exceeding the atom's ionization energy is absorbed by the electron. The orbital angular momentum of electrons is quantized. Because the electron is charged, it produces an orbital magnetic moment that is proportional to the angular momentum. The net magnetic moment of an atom is equal to the vector sum of orbital and spin magnetic moments of all electrons and the nucleus. The magnetic moment of the nucleus is negligible compared with that of the electrons. The magnetic moments of the electrons that occupy the same orbital (so-called paired electrons) cancel each other out. The chemical bond between atoms occurs as a result of electromagnetic interactions, as described by the laws of quantum mechanics. The strongest bonds are formed by the sharing or transfer of electrons between atoms, allowing the formation of molecules. Within a molecule, electrons move under the influence of several nuclei, and occupy molecular orbitals, much as they can occupy atomic orbitals in isolated atoms. A fundamental factor in these molecular structures is the existence of electron pairs. These are electrons with opposed spins, allowing them to occupy the same molecular orbital without violating the Pauli exclusion principle (much like in atoms). Different molecular orbitals have different spatial distributions of electron density. For instance, in bonded pairs (i.e. in the pairs that actually bind atoms together) electrons can be found with the maximal probability in a relatively small volume between the nuclei. By contrast, in non-bonded pairs electrons are distributed in a large volume around nuclei. If a body has more or fewer electrons than are required to balance the positive charge of the nuclei, then that object has a net electric charge. When there is an excess of electrons, the object is said to be negatively charged. When there are fewer electrons than the number of protons in nuclei, the object is said to be positively charged. When the number of electrons and the number of protons are equal, their charges cancel each other and the object is said to be electrically neutral.
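Converting an orbital energy difference into a photon frequency or wavelength uses only "E" = "hν" = "hc"/"λ". The sketch below illustrates this with two example energies: the 10.2 eV hydrogen 2p→1s (Lyman-alpha) gap and a generic 2 eV gap typical of visible-light transitions; both values are used purely as illustrations:

```python
# Converting an orbital energy difference into a photon wavelength (E = h*c/lambda).
h  = 6.626e-34     # Planck constant, J*s
c  = 2.998e8       # speed of light, m/s
eV = 1.602e-19     # joules per electronvolt

def photon_wavelength(delta_E_eV):
    """Wavelength (in metres) of a photon carrying the given energy."""
    return h * c / (delta_E_eV * eV)

print(photon_wavelength(10.2))   # ~1.2e-7 m (about 122 nm, ultraviolet)
print(photon_wavelength(2.0))    # ~6.2e-7 m (visible red light)
```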
A macroscopic body can develop an electric charge through rubbing, by the triboelectric effect. Independent electrons moving in vacuum are termed "free" electrons. Electrons in metals also behave as if they were free. In reality the particles that are commonly termed electrons in metals and other solids are quasi-electrons—quasiparticles, which have the same electrical charge, spin, and magnetic moment as real electrons but might have a different mass. When free electrons—both in vacuum and metals—move, they produce a net flow of charge called an electric current, which generates a magnetic field. Likewise a current can be created by a changing magnetic field. These interactions are described mathematically by Maxwell's equations. At a given temperature, each material has an electrical conductivity that determines the value of electric current when an electric potential is applied. Examples of good conductors include metals such as copper and gold, whereas glass and Teflon are poor conductors. In any dielectric material, the electrons remain bound to their respective atoms and the material behaves as an insulator. Most semiconductors have a variable level of conductivity that lies between the extremes of conduction and insulation. On the other hand, metals have an electronic band structure containing partially filled electronic bands. The presence of such bands allows electrons in metals to behave as if they were free or delocalized electrons. These electrons are not associated with specific atoms, so when an electric field is applied, they are free to move like a gas (called Fermi gas) through the material much like free electrons. Because of collisions between electrons and atoms, the drift velocity of electrons in a conductor is on the order of millimeters per second. However, the speed at which a change of current at one point in the material causes changes in currents in other parts of the material, the velocity of propagation, is typically about 75% of light speed. This occurs because electrical signals propagate as a wave, with the velocity dependent on the dielectric constant of the material. Metals make relatively good conductors of heat, primarily because the delocalized electrons are free to transport thermal energy between atoms. However, unlike electrical conductivity, the thermal conductivity of a metal is nearly independent of temperature. This is expressed mathematically by the Wiedemann–Franz law, which states that the ratio of thermal conductivity to the electrical conductivity is proportional to the temperature. The thermal disorder in the metallic lattice increases the electrical resistivity of the material, producing a temperature dependence for electric current. When cooled below a point called the critical temperature, materials can undergo a phase transition in which they lose all resistivity to electric current, in a process known as superconductivity. In BCS theory, this behavior is modeled by pairs of electrons entering a quantum state known as a Bose–Einstein condensate. These Cooper pairs have their motion coupled to nearby matter via lattice vibrations called phonons, thereby avoiding the collisions with atoms that normally create electrical resistance. (Cooper pairs have a radius of roughly 100 nm, so they can overlap each other.) However, the mechanism by which higher temperature superconductors operate remains uncertain. 
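The small drift velocity mentioned above can be estimated from "v"d = "I"/("n""e""A"); the wire diameter and current in the sketch below are hypothetical round numbers, chosen only to show the order of magnitude:

```python
# Order-of-magnitude estimate of electron drift velocity in a copper wire,
# v_d = I / (n * e * A).
import math

n = 8.5e28      # free-electron density of copper, electrons per m^3
e = 1.602e-19   # elementary charge, C
I = 1.0         # current, A (hypothetical)
d = 1.0e-3      # wire diameter, m (hypothetical 1 mm wire)

A = math.pi * (d / 2) ** 2        # cross-sectional area, m^2
v_drift = I / (n * e * A)
print(v_drift)                    # ~9e-5 m/s; larger currents or thinner wires
                                  # push this into the millimetre-per-second range
```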
Electrons inside conducting solids, which are quasi-particles themselves, when tightly confined at temperatures close to absolute zero, behave as though they had split into three other quasiparticles: spinons, orbitons and holons. The first carries the spin and magnetic moment, the second carries the orbital location, and the third carries the electrical charge. According to Einstein's theory of special relativity, as an electron's speed approaches the speed of light, from an observer's point of view its relativistic mass increases, thereby making it more and more difficult to accelerate it from within the observer's frame of reference. The speed of an electron can approach, but never reach, the speed of light in a vacuum, "c". However, when relativistic electrons—that is, electrons moving at a speed close to "c"—are injected into a dielectric medium such as water, where the local speed of light is significantly less than "c", the electrons temporarily travel faster than light in the medium. As they interact with the medium, they generate a faint light called Cherenkov radiation. The effects of special relativity are based on a quantity known as the Lorentz factor, defined as γ = 1/√(1 − "v"²/"c"²), where "v" is the speed of the particle. The kinetic energy "K" of an electron moving with velocity "v" is "K" = (γ − 1)"m"ₑ"c"², where "m"ₑ is the mass of the electron. For example, the Stanford linear accelerator can accelerate an electron to roughly 51 GeV. Since an electron behaves as a wave, at a given velocity it has a characteristic de Broglie wavelength. This is given by "λ" = "h"/"p" where "h" is the Planck constant and "p" is the momentum. For the 51 GeV electron above, the wavelength is about 2.4×10⁻¹⁷ m, small enough to explore structures well below the size of an atomic nucleus. The Big Bang theory is the most widely accepted scientific theory to explain the early stages in the evolution of the Universe. For the first millisecond of the Big Bang, the temperatures were over 10 billion kelvins and photons had mean energies over a million electronvolts. These photons were sufficiently energetic that they could react with each other to form pairs of electrons and positrons. Likewise, positron-electron pairs annihilated each other and emitted energetic photons: e⁺ + e⁻ ↔ γ + γ. An equilibrium between electrons, positrons and photons was maintained during this phase of the evolution of the Universe. After 15 seconds had passed, however, the temperature of the universe dropped below the threshold where electron-positron formation could occur. Most of the surviving electrons and positrons annihilated each other, releasing gamma radiation that briefly reheated the universe. For reasons that remain uncertain, during the annihilation process there was an excess in the number of particles over antiparticles. Hence, about one electron for every billion electron-positron pairs survived. This excess matched the excess of protons over antiprotons, in a condition known as baryon asymmetry, resulting in a net charge of zero for the universe. The surviving protons and neutrons began to participate in reactions with each other—in the process known as nucleosynthesis, forming isotopes of hydrogen and helium, with trace amounts of lithium. This process peaked after about five minutes. Any leftover neutrons underwent negative beta decay with a half-life of about a thousand seconds, releasing a proton and electron in the process. For about the next 300,000–400,000 years, the excess electrons remained too energetic to bind with atomic nuclei.
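For the 51 GeV electron mentioned above, both the Lorentz factor and the de Broglie wavelength follow from the formulas just given; the rounded-constant sketch below reproduces the quoted orders of magnitude:

```python
# Lorentz factor and de Broglie wavelength of a 51 GeV electron
# (illustrative, rounded constants).
h  = 6.626e-34      # Planck constant, J*s
c  = 2.998e8        # speed of light, m/s
eV = 1.602e-19      # joules per electronvolt
m_e_c2_eV = 0.511e6 # electron rest energy, eV

E_eV  = 51e9                    # total energy, eV
gamma = E_eV / m_e_c2_eV
print(gamma)                    # ~1.0e5

# For an ultrarelativistic electron, p ~ E/c, so lambda = h/p ~ h*c/E.
wavelength = h * c / (E_eV * eV)
print(wavelength)               # ~2.4e-17 m, far smaller than an atomic nucleus
```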
What followed is a period known as recombination, when neutral atoms were formed and the expanding universe became transparent to radiation. Roughly one million years after the Big Bang, the first generation of stars began to form. Within a star, stellar nucleosynthesis results in the production of positrons from the fusion of atomic nuclei. These antimatter particles immediately annihilate with electrons, releasing gamma rays. The net result is a steady reduction in the number of electrons, and a matching increase in the number of neutrons. However, the process of stellar evolution can result in the synthesis of radioactive isotopes. Selected isotopes can subsequently undergo negative beta decay, emitting an electron and antineutrino from the nucleus. An example is the cobalt-60 (⁶⁰Co) isotope, which decays to form nickel-60 (⁶⁰Ni). At the end of its lifetime, a star with more than about 20 solar masses can undergo gravitational collapse to form a black hole. According to classical physics, these massive stellar objects exert a gravitational attraction that is strong enough to prevent anything, even electromagnetic radiation, from escaping past the Schwarzschild radius. However, quantum mechanical effects are believed to potentially allow the emission of Hawking radiation at this distance. Electrons (and positrons) are thought to be created at the event horizon of these stellar remnants. When a pair of virtual particles (such as an electron and positron) is created in the vicinity of the event horizon, random spatial positioning might result in one of them appearing on the exterior; this process is called quantum tunnelling. The gravitational potential of the black hole can then supply the energy that transforms this virtual particle into a real particle, allowing it to radiate away into space. In exchange, the other member of the pair is given negative energy, which results in a net loss of mass-energy by the black hole. The rate of Hawking radiation increases with decreasing mass, eventually causing the black hole to evaporate away until, finally, it explodes. Cosmic rays are particles traveling through space with high energies. Energies as high as about 3×10²⁰ eV have been recorded. When these particles collide with nucleons in the Earth's atmosphere, a shower of particles is generated, including pions. More than half of the cosmic radiation observed from the Earth's surface consists of muons. The particle called a muon is a lepton produced in the upper atmosphere by the decay of a pion. A muon, in turn, can decay to form an electron or positron. Remote observation of electrons requires detection of their radiated energy. For example, in high-energy environments such as the corona of a star, free electrons form a plasma that radiates energy due to Bremsstrahlung radiation. Electron gas can undergo plasma oscillation, which consists of waves caused by synchronized variations in electron density, and these produce energy emissions that can be detected by using radio telescopes. The frequency of a photon is proportional to its energy. As a bound electron transitions between different energy levels of an atom, it absorbs or emits photons at characteristic frequencies. For instance, when atoms are irradiated by a source with a broad spectrum, distinct absorption lines appear in the spectrum of transmitted radiation. Each element or molecule displays a characteristic set of spectral lines, such as the hydrogen spectral series.
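The hydrogen series mentioned above are described by the Rydberg formula, 1/"λ" = "R"(1/"n"₁² − 1/"n"₂²); the sketch below evaluates the first few visible (Balmer) lines as an illustration:

```python
# Wavelengths of the first Balmer lines of hydrogen from the Rydberg formula.
R = 1.097e7   # Rydberg constant, 1/m

def hydrogen_line(n1, n2):
    """Wavelength in metres of the photon emitted in the n2 -> n1 transition."""
    return 1.0 / (R * (1.0 / n1 ** 2 - 1.0 / n2 ** 2))

for n2 in (3, 4, 5):
    print(n2, hydrogen_line(2, n2))   # ~656 nm, ~486 nm, ~434 nm (visible Balmer lines)
```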
Spectroscopic measurements of the strength and width of these lines allow the composition and physical properties of a substance to be determined. In laboratory conditions, the interactions of individual electrons can be observed by means of particle detectors, which allow measurement of specific properties such as energy, spin and charge. The development of the Paul trap and Penning trap allows charged particles to be contained within a small region for long durations. This enables precise measurements of the particle properties. For example, in one instance a Penning trap was used to contain a single electron for a period of 10 months. The magnetic moment of the electron was measured to a precision of eleven digits, which, in 1980, was a greater accuracy than for any other physical constant. The first video images of an electron's energy distribution were captured by a team at Lund University in Sweden in February 2008. The scientists used extremely short flashes of light, called attosecond pulses, which allowed an electron's motion to be observed for the first time. The distribution of the electrons in solid materials can be visualized by angle-resolved photoemission spectroscopy (ARPES). This technique employs the photoelectric effect to measure the reciprocal space—a mathematical representation of periodic structures that is used to infer the original structure. ARPES can be used to determine the direction, speed and scattering of electrons within the material. Electron beams are used in welding. They allow very high energy densities to be concentrated across a narrow focus diameter and usually require no filler material. This welding technique must be performed in a vacuum to prevent the electrons from interacting with the gas before reaching their target, and it can be used to join conductive materials that would otherwise be considered unsuitable for welding. Electron-beam lithography (EBL) is a method of etching semiconductors at resolutions smaller than a micrometer. This technique is limited by high costs, slow performance, the need to operate the beam in a vacuum and the tendency of the electrons to scatter in solids. The last problem limits the resolution to about 10 nm. For this reason, EBL is primarily used for the production of small numbers of specialized integrated circuits. Electron beam processing is used to irradiate materials in order to change their physical properties or sterilize medical and food products. Under intensive irradiation, electron beams can fluidise or quasi-melt glasses without a significant increase in temperature: for example, intensive electron radiation causes a decrease in viscosity of many orders of magnitude and a stepwise decrease in its activation energy. Linear particle accelerators generate electron beams for treatment of superficial tumors in radiation therapy. Electron therapy can treat such skin lesions as basal-cell carcinomas because an electron beam only penetrates to a limited depth before being absorbed, typically up to 5 cm for electron energies in the range 5–20 MeV. An electron beam can be used to supplement the treatment of areas that have been irradiated by X-rays. Particle accelerators use electric fields to propel electrons and their antiparticles to high energies. These particles emit synchrotron radiation as they pass through magnetic fields. The dependency of the intensity of this radiation upon spin polarizes the electron beam—a process known as the Sokolov–Ternov effect. Polarized electron beams can be useful for various experiments.
Synchrotron radiation can also cool the electron beams to reduce the momentum spread of the particles. Once the particles have been accelerated to the required energies, electron and positron beams are collided; particle detectors observe the resulting energy emissions, which particle physics studies. Low-energy electron diffraction (LEED) is a method of bombarding a crystalline material with a collimated beam of electrons and then observing the resulting diffraction patterns to determine the structure of the material. The required energy of the electrons is typically in the range 20–200 eV. The reflection high-energy electron diffraction (RHEED) technique uses the reflection of a beam of electrons fired at various low angles to characterize the surface of crystalline materials. The beam energy is typically in the range 8–20 keV and the angle of incidence is 1–4°. The electron microscope directs a focused beam of electrons at a specimen. Some electrons change their properties, such as direction of movement, angle, relative phase and energy, as the beam interacts with the material. Microscopists can record these changes in the electron beam to produce atomically resolved images of the material. In blue light, conventional optical microscopes have a diffraction-limited resolution of about 200 nm. By comparison, electron microscopes are limited by the de Broglie wavelength of the electron. This wavelength, for example, is equal to 0.0037 nm for electrons accelerated across a 100,000-volt potential. The Transmission Electron Aberration-Corrected Microscope is capable of sub-0.05 nm resolution, which is more than enough to resolve individual atoms. This capability makes the electron microscope a useful laboratory instrument for high resolution imaging. However, electron microscopes are expensive instruments that are costly to maintain. Two main types of electron microscopes exist: transmission and scanning. Transmission electron microscopes function like overhead projectors, with a beam of electrons passing through a slice of material and then being projected by lenses onto a photographic slide or a charge-coupled device. Scanning electron microscopes raster a finely focused electron beam, as in a TV set, across the studied sample to produce the image. Magnifications range from 100× to 1,000,000× or higher for both microscope types. The scanning tunneling microscope uses quantum tunneling of electrons from a sharp metal tip into the studied material and can produce atomically resolved images of its surface. In the free-electron laser (FEL), a relativistic electron beam passes through a pair of undulators that contain arrays of dipole magnets whose fields point in alternating directions. The electrons emit synchrotron radiation that coherently interacts with the same electrons to strongly amplify the radiation field at the resonance frequency. FELs can emit coherent, high-brilliance electromagnetic radiation with a wide range of frequencies, from microwaves to soft X-rays. These devices are used in manufacturing, communication, and in medical applications, such as soft tissue surgery. Electrons are important in cathode ray tubes, which have been extensively used as display devices in laboratory instruments, computer monitors and television sets. In a photomultiplier tube, every photon striking the photocathode initiates an avalanche of electrons that produces a detectable current pulse.
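The 0.0037 nm wavelength quoted above for a 100,000-volt potential includes the relativistic correction; the sketch below, using rounded constants purely for illustration, shows both the non-relativistic estimate and the corrected value:

```python
# De Broglie wavelength of electrons accelerated through a 100,000 V potential,
# with and without the relativistic correction.
import math

h   = 6.626e-34    # Planck constant, J*s
m_e = 9.109e-31    # electron mass, kg
e   = 1.602e-19    # elementary charge, C
c   = 2.998e8      # speed of light, m/s
V   = 100e3        # accelerating potential, volts

# Non-relativistic estimate: lambda = h / sqrt(2 m e V)
lam_nonrel = h / math.sqrt(2 * m_e * e * V)
# With the relativistic correction: lambda = h / sqrt(2 m e V * (1 + e V / (2 m c^2)))
lam_rel = h / math.sqrt(2 * m_e * e * V * (1 + e * V / (2 * m_e * c ** 2)))

print(lam_nonrel * 1e9, "nm")   # ~0.0039 nm
print(lam_rel * 1e9, "nm")      # ~0.0037 nm, matching the figure quoted above
```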
Vacuum tubes use the flow of electrons to manipulate electrical signals, and they played a critical role in the development of electronics technology. However, they have been largely supplanted by solid-state devices such as the transistor. Edgar Allan Poe Edgar Allan Poe (; born Edgar Poe; January 19, 1809 – October 7, 1849) was an American writer, editor, and literary critic. Poe is best known for his poetry and short stories, particularly his tales of mystery and the macabre. He is widely regarded as a central figure of Romanticism in the United States and American literature as a whole, and he was one of the country's earliest practitioners of the short story. Poe is generally considered the inventor of the detective fiction genre and is further credited with contributing to the emerging genre of science fiction. He was the first well-known American writer to try to earn a living through writing alone, resulting in a financially difficult life and career. Poe was born in Boston, the second child of two actors. His father abandoned the family in 1810, and his mother died the following year. Thus orphaned, the child was taken in by John and Frances Allan of Richmond, Virginia. They never formally adopted him, but Poe was with them well into young adulthood. Tension developed later as John Allan and Edgar repeatedly clashed over debts, including those incurred by gambling, and the cost of secondary education for the young man. Poe attended the University of Virginia but left after a year due to lack of money. Poe quarreled with Allan over the funds for his education and enlisted in the Army in 1827 under an assumed name. It was at this time that his publishing career began, albeit humbly, with the anonymous collection "Tamerlane and Other Poems" (1827), credited only to "a Bostonian". With the death of Frances Allan in 1829, Poe and Allan reached a temporary rapprochement. However, Poe later failed as an officer cadet at West Point, declaring a firm wish to be a poet and writer, and he ultimately parted ways with John Allan. Poe switched his focus to prose and spent the next several years working for literary journals and periodicals, becoming known for his own style of literary criticism. His work forced him to move among several cities, including Baltimore, Philadelphia, and New York City. In Richmond in 1836, he married Virginia Clemm, his 13-year-old cousin. In January 1845, Poe published his poem "The Raven" to instant success. His wife died of tuberculosis two years after its publication. For years, he had been planning to produce his own journal "The Penn" (later renamed "The Stylus"), though he died before it could be produced. Poe died in Baltimore on October 7, 1849, at age 40; the cause of his death is unknown and has been variously attributed to alcohol, "brain congestion", cholera, drugs, heart disease, rabies, suicide, tuberculosis, and other agents. Poe and his works influenced literature in the United States and around the world, as well as in specialized fields such as cosmology and cryptography. Poe and his work appear throughout popular culture in literature, music, films, and television. A number of his homes are dedicated museums today. The Mystery Writers of America present an annual award known as the Edgar Award for distinguished work in the mystery genre. He was born Edgar Poe in Boston on January 19, 1809, the second child of English-born actress Elizabeth Arnold Hopkins Poe and actor David Poe Jr.. 
He had an elder brother William Henry Leonard Poe, and a younger sister Rosalie Poe. Their grandfather David Poe Sr. had emigrated from County Cavan, Ireland, to America around the year 1750. Edgar may have been named after a character in William Shakespeare's "King Lear", a play that the couple were performing in 1809. His father abandoned their family in 1810, and his mother died a year later from consumption (pulmonary tuberculosis). Poe was then taken into the home of John Allan, a successful Scottish merchant in Richmond, Virginia who dealt in a variety of goods, including tobacco, cloth, wheat, tombstones, and slaves. The Allans served as a foster family and gave him the name "Edgar Allan Poe", though they never formally adopted him. The Allan family had Poe baptized in the Episcopal Church in 1812. John Allan alternately spoiled and aggressively disciplined his foster son. The family sailed to Britain in 1815, and Poe attended the grammar school for a short period in Irvine, Scotland (where John Allan was born) before rejoining the family in London in 1816. There he studied at a boarding school in Chelsea until summer 1817. He was subsequently entered at the Reverend John Bransby's Manor House School at Stoke Newington, then a suburb north of London. Poe moved with the Allans back to Richmond, Virginia in 1820. In 1824, Poe served as the lieutenant of the Richmond youth honor guard as Richmond celebrated the visit of the Marquis de Lafayette. In March 1825, John Allan's uncle and business benefactor William Galt, said to be one of the wealthiest men in Richmond, died, leaving Allan several acres of real estate. The inheritance was estimated at $750,000. By summer 1825, Allan celebrated his expansive wealth by purchasing a two-story brick home named Moldavia. Poe may have become engaged to Sarah Elmira Royster before he registered at the one-year-old University of Virginia in February 1826 to study ancient and modern languages. The university, in its infancy, was established on the ideals of its founder Thomas Jefferson. It had strict rules against gambling, horses, guns, tobacco, and alcohol, but these rules were generally ignored. Jefferson had enacted a system of student self-government, allowing students to choose their own studies, make their own arrangements for boarding, and report all wrongdoing to the faculty. The unique system was still in chaos, and there was a high dropout rate. During his time there, Poe lost touch with Royster and also became estranged from his foster father over gambling debts. Poe claimed that Allan had not given him sufficient money to register for classes, purchase texts, and procure and furnish a dormitory. Allan did send additional money and clothes, but Poe's debts increased. Poe gave up on the university after a year, but did not feel welcome returning to Richmond, especially when he learned that his sweetheart Royster had married Alexander Shelton. He traveled to Boston in April 1827, sustaining himself with odd jobs as a clerk and newspaper writer. At some point, he started using the pseudonym Henri Le Rennet. Poe was unable to support himself, so he enlisted in the United States Army as a private on May 27, 1827, using the name "Edgar A. Perry". He claimed that he was even though he was 18. He first served at Fort Independence in Boston Harbor for five dollars a month. That same year, he released his first book, a 40-page collection of poetry titled "Tamerlane and Other Poems", attributed with the byline "by a Bostonian". 
Only 50 copies were printed, and the book received virtually no attention. Poe's regiment was posted to Fort Moultrie in Charleston, South Carolina and traveled by ship on the brig "Waltham" on November 8, 1827. Poe was promoted to "artificer", an enlisted tradesman who prepared shells for artillery, and had his monthly pay doubled. He served for two years and attained the rank of Sergeant Major for Artillery (the highest rank that a noncommissioned officer could achieve); he then sought to end his five-year enlistment early. He revealed his real name and his circumstances to his commanding officer, Lieutenant Howard. Howard would only allow Poe to be discharged if he reconciled with John Allan and wrote a letter to Allan, who was unsympathetic. Several months passed and pleas to Allan were ignored; Allan may not have written to Poe even to make him aware of his foster mother's illness. Frances Allan died on February 28, 1829, and Poe visited the day after her burial. Perhaps softened by his wife's death, John Allan agreed to support Poe's attempt to be discharged in order to receive an appointment to the United States Military Academy at West Point. Poe finally was discharged on April 15, 1829, after securing a replacement to finish his enlisted term for him. Before entering West Point, Poe moved back to Baltimore for a time to stay with his widowed aunt Maria Clemm, her daughter Virginia Eliza Clemm (Poe's first cousin), his brother Henry, and his invalid grandmother Elizabeth Cairnes Poe. Meanwhile, Poe published his second book "Al Aaraaf, Tamerlane and Minor Poems" in Baltimore in 1829. Poe traveled to West Point and matriculated as a cadet on July 1, 1830. In October 1830, John Allan married his second wife Louisa Patterson. The marriage and bitter quarrels with Poe over the children born to Allan out of affairs led to the foster father finally disowning Poe. Poe decided to leave West Point by purposely getting court-martialed. On February 8, 1831, he was tried for gross neglect of duty and disobedience of orders for refusing to attend formations, classes, or church. Poe tactically pleaded not guilty to induce dismissal, knowing that he would be found guilty. He left for New York in February 1831 and released a third volume of poems, simply titled "Poems." The book was financed with help from his fellow cadets at West Point, many of whom donated 75 cents to the cause, raising a total of $170. They may have been expecting verses similar to the satirical ones that Poe had been writing about commanding officers. It was printed by Elam Bliss of New York, labeled as "Second Edition," and including a page saying, "To the U.S. Corps of Cadets this volume is respectfully dedicated". The book once again reprinted the long poems "Tamerlane" and "Al Aaraaf" but also six previously unpublished poems, including early versions of "To Helen", "Israfel", and "The City in the Sea". He returned to Baltimore to his aunt, brother, and cousin in March 1831. His elder brother Henry had been in ill health, in part due to problems with alcoholism, and he died on August 1, 1831. After his brother's death, Poe began more earnest attempts to start his career as a writer. He chose a difficult time in American publishing to do so. He was the first well-known American to try to live by writing alone and was hampered by the lack of an international copyright law. Publishers often produced unauthorized copies of British works rather than paying for new work by Americans. 
The industry was also particularly hurt by the Panic of 1837. There was a booming growth in American periodicals around this time period, fueled in part by new technology, but many did not last beyond a few issues and publishers often refused to pay their writers, or paid them much later than they promised. Throughout his attempts to live as a writer, Poe repeatedly had to resort to humiliating pleas for money and other assistance. After his early attempts at poetry, Poe had turned his attention to prose. He placed a few stories with a Philadelphia publication and began work on his only drama "Politian". The "Baltimore Saturday Visiter" awarded Poe a prize in October 1833 for his short story "MS. Found in a Bottle". The story brought him to the attention of John P. Kennedy, a Baltimorean of considerable means. He helped Poe place some of his stories, and introduced him to Thomas W. White, editor of the "Southern Literary Messenger" in Richmond. Poe became assistant editor of the periodical in August 1835, but was discharged within a few weeks for having been caught drunk by his boss. Returning to Baltimore, Poe obtained a license to marry his cousin Virginia on September 22, 1835, though it is unknown if they were married at that time. He was 26 and she was 13. He was reinstated by White after promising good behavior, and went back to Richmond with Virginia and her mother. He remained at the "Messenger" until January 1837. During this period, Poe claimed that its circulation increased from 700 to 3,500. He published several poems, book reviews, critiques, and stories in the paper. On May 16, 1836, he and Virginia Clemm held a Presbyterian wedding ceremony at their Richmond boarding house, with a witness falsely attesting Clemm's age as 21. "The Narrative of Arthur Gordon Pym of Nantucket" was published and widely reviewed in 1838. In the summer of 1839, Poe became assistant editor of "Burton's Gentleman's Magazine". He published numerous articles, stories, and reviews, enhancing his reputation as a trenchant critic which he had established at the "Southern Literary Messenger". Also in 1839, the collection "Tales of the Grotesque and Arabesque" was published in two volumes, though he made little money from it and it received mixed reviews. Poe left "Burton's" after about a year and found a position as assistant at "Graham's Magazine". In June 1840, Poe published a prospectus announcing his intentions to start his own journal called "The Stylus". Originally, Poe intended to call the journal "The Penn", as it would have been based in Philadelphia. In the June 6, 1840 issue of Philadelphia's "Saturday Evening Post", Poe bought advertising space for his prospectus: ""Prospectus of the Penn Magazine, a Monthly Literary journal to be edited and published in the city of Philadelphia by Edgar A. Poe."" The journal was never produced before Poe's death. Around this time, he attempted to secure a position within the Tyler administration, claiming that he was a member of the Whig Party. He hoped to be appointed to the Custom House in Philadelphia with help from President Tyler's son Robert, an acquaintance of Poe's friend Frederick Thomas. Poe failed to show up for a meeting with Thomas to discuss the appointment in mid-September 1842, claiming to have been sick, though Thomas believed that he had been drunk. Though he was promised an appointment, all positions were filled by others. 
One evening in January 1842, Virginia showed the first signs of consumption, now known as tuberculosis, while singing and playing the piano. Poe described it as breaking a blood vessel in her throat. She only partially recovered. Poe began to drink more heavily under the stress of Virginia's illness. He left "Graham's" and attempted to find a new position, for a time angling for a government post. He returned to New York where he worked briefly at the "Evening Mirror" before becoming editor of the "Broadway Journal" and, later, sole owner. There he alienated himself from other writers by publicly accusing Henry Wadsworth Longfellow of plagiarism, though Longfellow never responded. On January 29, 1845, his poem "The Raven" appeared in the "Evening Mirror" and became a popular sensation. It made Poe a household name almost instantly, though he was paid only $9 for its publication. It was concurrently published in "The American Review" under the pseudonym "Quarles". The "Broadway Journal" failed in 1846. Poe moved to a cottage in Fordham, New York, in what is now the Bronx. That home, since relocated to the southeast corner of the Grand Concourse and Kingsbridge Road, is now known as the Poe Cottage. Nearby he befriended the Jesuits at St. John's College, now Fordham University. Virginia died at the cottage on January 30, 1847. Biographers and critics often suggest that Poe's frequent theme of the "death of a beautiful woman" stems from the repeated loss of women throughout his life, including his wife. Poe was increasingly unstable after his wife's death. He attempted to court the poet Sarah Helen Whitman, who lived in Providence, Rhode Island. Their engagement failed, purportedly because of Poe's drinking and erratic behavior. There is also strong evidence that Whitman's mother intervened and did much to derail their relationship. Poe then returned to Richmond and resumed a relationship with his childhood sweetheart Sarah Elmira Royster. On October 3, 1849, Poe was found delirious on the streets of Baltimore, "in great distress, and... in need of immediate assistance", according to Joseph W. Walker who found him. He was taken to the Washington Medical College, where he died on Sunday, October 7, 1849, at 5:00 in the morning. Poe was never coherent long enough to explain how he came to be in his dire condition and, oddly, was wearing clothes that were not his own. He is said to have repeatedly called out the name "Reynolds" on the night before his death, though it is unclear to whom he was referring. Some sources say that Poe's final words were "Lord help my poor soul". All medical records have been lost, including his death certificate. Newspapers at the time reported Poe's death as "congestion of the brain" or "cerebral inflammation", common euphemisms for deaths from disreputable causes such as alcoholism. The actual cause of death remains a mystery. Speculation has included "delirium tremens", heart disease, epilepsy, syphilis, meningeal inflammation, cholera, and rabies. One theory dating from 1872 suggests that Poe's death was the result of cooping, a form of electoral fraud in which citizens were forced to vote for a particular candidate, sometimes leading to violence and even murder. The day that Edgar Allan Poe was buried, a long obituary appeared in the "New York Tribune" signed "Ludwig". It was soon published throughout the country. The piece began, "Edgar Allan Poe is dead. He died in Baltimore the day before yesterday. This announcement will startle many, but few will be grieved by it."
"Ludwig" was soon identified as Rufus Wilmot Griswold, an editor, critic, and anthologist who had borne a grudge against Poe since 1842. Griswold somehow became Poe's literary executor and attempted to destroy his enemy's reputation after his death. Rufus Griswold wrote a biographical article of Poe called "Memoir of the Author", which he included in an 1850 volume of the collected works. Griswold depicted Poe as a depraved, drunken, drug-addled madman and included Poe's letters as evidence. Many of his claims were either lies or distorted half-truths. For example, it is now known that Poe was not a drug addict. Griswold's book was denounced by those who knew Poe well, but it became a popularly accepted one. This occurred in part because it was the only full biography available and was widely reprinted, and in part because readers thrilled at the thought of reading works by an "evil" man. Letters that Griswold presented as proof of this depiction of Poe were later revealed as forgeries. Poe's best known fiction works are Gothic, a genre that he followed to appease the public taste. His most recurring themes deal with questions of death, including its physical signs, the effects of decomposition, concerns of premature burial, the reanimation of the dead, and mourning. Many of his works are generally considered part of the dark romanticism genre, a literary reaction to transcendentalism which Poe strongly disliked. He referred to followers of the transcendental movement as "Frog-Pondians", after the pond on Boston Common, and ridiculed their writings as "metaphor—run mad," lapsing into "obscurity for obscurity's sake" or "mysticism for mysticism's sake". Poe once wrote in a letter to Thomas Holley Chivers that he did not dislike Transcendentalists, "only the pretenders and sophists among them". Beyond horror, Poe also wrote satires, humor tales, and hoaxes. For comic effect, he used irony and ludicrous extravagance, often in an attempt to liberate the reader from cultural conformity. "Metzengerstein" is the first story that Poe is known to have published and his first foray into horror, but it was originally intended as a burlesque satirizing the popular genre. Poe also reinvented science fiction, responding in his writing to emerging technologies such as hot air balloons in "The Balloon-Hoax". Poe wrote much of his work using themes aimed specifically at mass-market tastes. To that end, his fiction often included elements of popular pseudosciences, such as phrenology and physiognomy. Poe's writing reflects his literary theories, which he presented in his criticism and also in essays such as "The Poetic Principle". He disliked didacticism and allegory, though he believed that meaning in literature should be an undercurrent just beneath the surface. Works with obvious meanings, he wrote, cease to be art. He believed that work of quality should be brief and focus on a specific single effect. To that end, he believed that the writer should carefully calculate every sentiment and idea. Poe describes his method in writing "The Raven" in the essay "The Philosophy of Composition", and he claims to have strictly followed this method. It has been questioned whether he really followed this system, however. T. S. Eliot said: "It is difficult for us to read that essay without reflecting that if Poe plotted out his poem with such calculation, he might have taken a little more pains over it: the result hardly does credit to the method." 
Biographer Joseph Wood Krutch described the essay as "a rather highly ingenious exercise in the art of rationalization". During his lifetime, Poe was mostly recognized as a literary critic. Fellow critic James Russell Lowell called him "the most discriminating, philosophical, and fearless critic upon imaginative works who has written in America", suggesting—rhetorically—that he occasionally used prussic acid instead of ink. Poe's caustic reviews earned him the reputation of being a "tomahawk man". A favorite target of Poe's criticism was Boston's acclaimed poet Henry Wadsworth Longfellow, who was often defended by his literary friends in what was later called "The Longfellow War". Poe accused Longfellow of "the heresy of the didactic", writing poetry that was preachy, derivative, and thematically plagiarized. Poe correctly predicted that Longfellow's reputation and style of poetry would decline, concluding, "We grant him high qualities, but deny him the Future". Poe was also known as a writer of fiction and was one of the first American authors of the 19th century to become more popular in Europe than in the United States. Poe is particularly respected in France, in part due to early translations by Charles Baudelaire. Baudelaire's translations became definitive renditions of Poe's work throughout Europe. Poe's early detective fiction tales featuring C. Auguste Dupin laid the groundwork for future detectives in literature. Sir Arthur Conan Doyle said, "Each [of Poe's detective stories] is a root from which a whole literature has developed... Where was the detective story until Poe breathed the breath of life into it?" The Mystery Writers of America have named their awards for excellence in the genre the "Edgars". Poe's work also influenced science fiction, notably Jules Verne, who wrote a sequel to Poe's novel "The Narrative of Arthur Gordon Pym of Nantucket" called "An Antarctic Mystery", also known as "The Sphinx of the Ice Fields". Science fiction author H. G. Wells noted, "'Pym' tells what a very intelligent mind could imagine about the south polar region a century ago." In 2013, "The Guardian" cited "The Narrative of Arthur Gordon Pym of Nantucket" as one of the greatest novels ever written in the English language, and noted its influence on later authors such as Henry James, Arthur Conan Doyle, B. Traven, and David Morrell. Like those of many famous artists, Poe's works have spawned imitators. One trend among imitators of Poe has been claims by clairvoyants or psychics to be "channeling" poems from Poe's spirit. One of the most notable of these was Lizzie Doten, who published "Poems from the Inner Life" in 1863, in which she claimed to have "received" new compositions by Poe's spirit. The compositions were re-workings of famous Poe poems such as "The Bells", but reflected a new, positive outlook. Even so, Poe has received not only praise, but criticism as well. This is partly because of the negative perception of his personal character and its influence upon his reputation. William Butler Yeats was occasionally critical of Poe and once called him "vulgar". Transcendentalist Ralph Waldo Emerson reacted to "The Raven" by saying, "I see nothing in it", and derisively referred to Poe as "the jingle man". Aldous Huxley wrote that Poe's writing "falls into vulgarity" by being "too poetical"—the equivalent of wearing a diamond ring on every finger. It is believed that only 12 copies of Poe's first book, "Tamerlane and Other Poems", have survived. 
In December 2009, one copy sold at Christie's, New York, for $662,500, a record price paid for a work of American literature. "Eureka", an essay written in 1848, included a cosmological theory that presaged the Big Bang theory by 80 years, as well as the first plausible solution to Olbers' paradox. Poe eschewed the scientific method in "Eureka" and instead wrote from pure intuition. For this reason, he considered it a work of art, not science, but insisted that it was still true and considered it to be his career masterpiece. Even so, "Eureka" is full of scientific errors. In particular, Poe's suggestions ignored Newtonian principles regarding the density and rotation of planets. Poe had a keen interest in cryptography. He had placed a notice of his abilities in the Philadelphia paper "Alexander's Weekly (Express) Messenger", inviting submissions of ciphers, which he proceeded to solve. In July 1841, Poe published an essay called "A Few Words on Secret Writing" in "Graham's Magazine". Capitalizing on public interest in the topic, he wrote "The Gold-Bug", incorporating ciphers as an essential part of the story. Poe's success with cryptography relied not so much on his deep knowledge of that field (his method was limited to the simple substitution cryptogram) as on his knowledge of the magazine and newspaper culture. His keen analytical abilities, which were so evident in his detective stories, allowed him to see that the general public was largely ignorant of the methods by which a simple substitution cryptogram can be solved, and he used this to his advantage. The sensation that Poe created with his cryptography stunts played a major role in popularizing cryptograms in newspapers and magazines. Poe had an influence on cryptography beyond increasing public interest during his lifetime. William Friedman, America's foremost cryptologist, was heavily influenced by Poe. Friedman's initial interest in cryptography came from reading "The Gold-Bug" as a child, an interest that he later put to use in deciphering Japan's PURPLE code during World War II. The historical Edgar Allan Poe has appeared as a fictionalized character, often representing the "mad genius" or "tormented artist" and exploiting his personal struggles. Many such depictions also blend in with characters from his stories, suggesting that Poe and his characters share identities. Often, fictional depictions of Poe use his mystery-solving skills in such novels as "The Poe Shadow" by Matthew Pearl. No childhood home of Poe is still standing, including the Allan family's Moldavia estate. The oldest standing home in Richmond, the Old Stone House, is in use as the Edgar Allan Poe Museum, though Poe never lived there. The collection includes many items that Poe used during his time with the Allan family, and also features several rare first printings of Poe works. 13 West Range is the dorm room that Poe is believed to have used while studying at the University of Virginia in 1826; it is preserved and available for visits. Its upkeep is now overseen by a group of students and staff known as the Raven Society. The earliest surviving home in which Poe lived is in Baltimore, preserved as the Edgar Allan Poe House and Museum. Poe is believed to have lived in the home at the age of 23 when he first lived with Maria Clemm and Virginia (as well as his grandmother and possibly his brother William Henry Leonard Poe). It is open to the public and is also the home of the Edgar Allan Poe Society. 
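As an aside on the "simple substitution cryptogram" mentioned above: in such a cipher each letter is consistently replaced by another, and the usual attack is letter-frequency analysis. A minimal illustrative sketch in Python follows; the key and sample text are arbitrary examples introduced here, not anything associated with Poe.

    # Illustration of a simple substitution cryptogram: each letter of the alphabet maps to another letter.
    # The key below is an arbitrary example for demonstration purposes only.
    import string
    from collections import Counter

    key = str.maketrans(string.ascii_uppercase, "QWERTYUIOPASDFGHJKLZXCVBNM")

    plaintext = "THE GOLD BUG"
    ciphertext = plaintext.translate(key)
    print(ciphertext)  # "ZIT UGSR WXU"

    # Frequency analysis: in longer texts, the most common cipher letters usually stand for
    # common plaintext letters such as E or T, which is how such cryptograms are typically solved.
    print(Counter(c for c in ciphertext if c.isalpha()).most_common(3))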
Of the several homes that Poe, his wife Virginia, and his mother-in-law Maria rented in Philadelphia, only the last house has survived. The Spring Garden home, where the author lived in 1843–1844, is today preserved by the National Park Service as the Edgar Allan Poe National Historic Site. Poe's final home is preserved as the Edgar Allan Poe Cottage in the Bronx. In Boston, a commemorative plaque on Boylston Street is several blocks away from the actual location of Poe's birth. The house which was his birthplace at 62 Carver Street no longer exists, and the street has since been renamed "Charles Street South". A "square" at the intersection of Broadway, Fayette, and Carver Streets had once been named in his honor, but it disappeared when the streets were rearranged. In 2009, the intersection of Charles and Boylston Streets (two blocks north of his birthplace) was designated "Edgar Allan Poe Square". In March 2014, fundraising was completed for construction of a permanent memorial sculpture at this location. The winning design by Stefanie Rocknak depicts a life-sized Poe striding against the wind, accompanied by a flying raven; his suitcase lid has fallen open, leaving a "paper trail" of literary works embedded in the sidewalk behind him. The public unveiling on October 5, 2014, was attended by former US poet laureate Robert Pinsky. Other Poe landmarks include a building on the Upper West Side where Poe temporarily lived when he first moved to New York. A plaque suggests that Poe wrote "The Raven" here. The bar still stands where legend says that Poe was last seen drinking before his death, in Fell's Point in Baltimore. The drinking establishment is now known as "The Horse You Came In On", and local lore insists that a ghost whom they call "Edgar" haunts the rooms above. Early daguerreotypes of Poe continue to arouse great interest among literary historians. For decades, every January 19, a bottle of cognac and three roses were left at Poe's original grave marker by an unknown visitor affectionately referred to as the "Poe Toaster". On August 15, 2007, Sam Porpora, a former historian at the Westminster Church in Baltimore where Poe is buried, claimed that he had started the tradition in 1949. Porpora said that the tradition began in order to raise money and enhance the profile of the church. His story has not been confirmed, and some details which he gave to the press are factually inaccurate. The Poe Toaster's last appearance was on January 19, 2009, the day of Poe's bicentennial. 
European Parliament The European Parliament (EP) is the directly elected parliamentary institution of the European Union (EU). Together with the Council of the European Union (the Council) and the European Commission, it exercises the legislative function of the EU. The Parliament is composed of 751 members, who represent the second-largest democratic electorate in the world (after the Parliament of India) and the largest trans-national democratic electorate in the world (375 million eligible voters in 2009). It has been directly elected every five years by universal suffrage since 1979. However, voter turnout at European Parliament elections has fallen consecutively at each election since that date, and has been under 50% since 1999. Voter turnout in 2014 stood at 42.54% of all European voters. 
Although the European Parliament has legislative power that the Council and Commission do not possess, it does not formally possess legislative initiative, as most national parliaments of European Union member states do. The Parliament is the "first institution" of the EU (mentioned first in the treaties, having ceremonial precedence over all authority at European level), and shares equal legislative and budgetary powers with the Council (except in a few areas where the special legislative procedures apply). It likewise has equal control over the EU budget. Finally, the European Commission, the executive body of the EU, is accountable to Parliament. In particular, Parliament elects the President of the Commission, and approves (or rejects) the appointment of the Commission as a whole. It can subsequently force the Commission as a body to resign by adopting a motion of censure. The President of the European Parliament (Parliament's speaker) is Antonio Tajani (EPP), elected in January 2017. He presides over a multi-party chamber, the two largest groups being the Group of the European People's Party (EPP) and the Progressive Alliance of Socialists and Democrats (S&D). The last union-wide elections were the 2014 elections. The European Parliament has three places of work – Brussels (Belgium), the city of Luxembourg (Luxembourg) and Strasbourg (France). Luxembourg is home to the administrative offices (the "General Secretariat"). Meetings of the whole Parliament ("plenary sessions") take place in Strasbourg and in Brussels. Committee meetings are held in Brussels. The Parliament, like the other institutions, was not designed in its current form when it first met on 10 September 1952. One of the oldest common institutions, it began as the "Common Assembly" of the European Coal and Steel Community (ECSC). It was a consultative assembly of 78 appointed parliamentarians drawn from the national parliaments of member states, having no legislative powers. The change since its foundation was highlighted by Professor David Farrell of the University of Manchester: "For much of its life, the European Parliament could have been justly labeled a 'multi-lingual talking shop'." Its development since its foundation shows how the European Union's structures have evolved without a clear "master plan". Some, such as Tom Reid of the "Washington Post", said of the union: "nobody would have deliberately designed a government as complex and as redundant as the EU". Even the Parliament's two seats, which have switched several times, are a result of various agreements or lack of agreements. Although most MEPs would prefer to be based just in Brussels, at John Major's 1992 Edinburgh summit, France engineered a treaty amendment to maintain Parliament's plenary seat permanently at Strasbourg. The body was not mentioned in the original Schuman Declaration. It was assumed or hoped that difficulties with the British would be resolved to allow the Council of Europe's Assembly to perform the task. A separate Assembly was introduced during negotiations on the Treaty as an institution which would counterbalance and monitor the executive while providing democratic legitimacy. The wording of the ECSC Treaty demonstrated the leaders' desire for more than a normal consultative assembly by using the term "representatives of the people" and allowed for direct election. Its early importance was highlighted when the Assembly was given the task of drawing up the draft treaty to establish a European Political Community. 
By this document, the Ad Hoc Assembly was established on 13 September 1952 with extra members, but after the failure of the proposed European Defence Community the project was dropped. Despite this, the European Economic Community and Euratom were established in 1958 by the Treaties of Rome. The Common Assembly was shared by all three communities (which had separate executives) and it renamed itself the "European Parliamentary Assembly". The first meeting, held on 19 March 1958 after the Assembly had been set up in Luxembourg, elected Schuman as its president, and on 13 May it rearranged itself to sit according to political ideology rather than nationality. This is seen as the birth of the modern European Parliament, with Parliament's 50th anniversary celebrations being held in March 2008 rather than 2002. The body's name was changed to the current "European Parliament" in 1962, and the three communities merged their remaining organs as the European Communities in 1967. In 1970 the Parliament was granted power over areas of the Communities' budget, which were expanded to the whole budget in 1975. Under the Treaties of Rome, the Parliament should have become elected. However, the Council was required to agree a uniform voting system beforehand, which it failed to do. The Parliament threatened to take the Council to the European Court of Justice; this led to a compromise whereby the Council would agree to elections, but the issue of voting systems would be put off until a later date. In 1979, its members were directly elected for the first time. This sets it apart from similar institutions, such as the Parliamentary Assembly of the Council of Europe or the Pan-African Parliament, which are appointed. After that first election, the parliament held its first session on 11 July 1979, electing Simone Veil MEP as its president. Veil was also the first female president of the Parliament since it was formed as the Common Assembly. As an elected body, the Parliament began to draft proposals addressing the functioning of the EU. For example, in 1984, inspired by its previous work on the Political Community, it drafted the "draft Treaty establishing the European Union" (also known as the 'Spinelli Plan' after its rapporteur Altiero Spinelli MEP). Although it was not adopted, many ideas were later implemented by other treaties. Furthermore, the Parliament began holding votes on proposed Commission Presidents from the 1980s, before it was given any formal right to veto. Since it became an elected body, the membership of the European Parliament has simply expanded whenever new nations have joined (the membership was also adjusted upwards in 1994 after German reunification). Following this, the Treaty of Nice imposed a cap of 732 on the number of members to be elected. Like the other institutions, the Parliament's seat was not yet fixed. The provisional arrangements placed Parliament in Strasbourg, while the Commission and Council had their seats in Brussels. In 1985 the Parliament, wishing to be closer to these institutions, built a second chamber in Brussels and moved some of its work there despite protests from some states. A final agreement was eventually reached by the European Council in 1992. It stated that the Parliament would retain its formal seat in Strasbourg, where twelve sessions a year would be held, but with all other parliamentary activity in Brussels. This two-seat arrangement was contested by the Parliament, but was later enshrined in the Treaty of Amsterdam. 
To this day the institution's locations are a source of contention. The Parliament gained more powers from successive treaties, namely through the extension of the ordinary legislative procedure (then called the codecision procedure), and in 1999, the Parliament forced the resignation of the Santer Commission. The Parliament had refused to approve the Community budget over allegations of fraud and mismanagement in the Commission. The two main parties took on a government-opposition dynamic for the first time during the crisis, which ended in the Commission resigning en masse, the first such forced resignation, in the face of an impending censure from the Parliament. In 2004, following the largest trans-national election in history, despite the European Council choosing a President from the largest political group (the EPP), the Parliament again exerted pressure on the Commission. During the Parliament's hearings of the proposed Commissioners, MEPs raised doubts about some nominees, with the Civil Liberties committee rejecting Rocco Buttiglione from the post of Commissioner for Justice, Freedom and Security over his views on homosexuality. That was the first time the Parliament had ever voted against an incoming Commissioner and, despite Barroso's insistence upon Buttiglione, the Parliament forced Buttiglione to be withdrawn. A number of other Commissioners also had to be withdrawn or reassigned before Parliament allowed the Barroso Commission to take office. Along with the extension of the ordinary legislative procedure, the Parliament's democratic mandate has given it greater control over legislation relative to the other institutions. In voting on the Bolkestein directive in 2006, the Parliament voted by a large majority for over 400 amendments that changed the fundamental principle of the law. In 2007, for the first time, Justice Commissioner Franco Frattini included Parliament in talks on the second Schengen Information System even though MEPs only needed to be consulted on parts of the package. After that experiment, Frattini indicated he would like to include Parliament in all justice and criminal matters, informally pre-empting the new powers they could gain as part of the Treaty of Lisbon. Between 2007 and 2009, a special working group on parliamentary reform implemented a series of changes to modernise the institution, such as more speaking time for rapporteurs, increased committee co-operation and other efficiency reforms. The Lisbon Treaty finally came into force on 1 December 2009, granting Parliament powers over the entire EU budget, making Parliament's legislative powers equal to the Council's in nearly all areas and linking the appointment of the Commission President to Parliament's own elections. Despite some calls for the parties to put forward candidates beforehand, only the EPP (which had re-secured its position as the largest party) had one, re-endorsing Barroso. Barroso gained the support of the European Council for a second term and secured majority support from the Parliament in September 2009. The Parliament voted 382 in favour and 219 against (with 117 abstentions), with the support of the European People's Party, European Conservatives and Reformists and the Alliance of Liberals and Democrats for Europe. The liberals gave support after Barroso gave them a number of concessions; the liberals had previously joined the socialists' call for a delayed vote (the EPP had wanted to approve Barroso in July of that year). 
Once Barroso put forward the candidates for his next Commission, another opportunity to gain concessions arose. Bulgarian nominee Rumiana Jeleva was forced to step down by Parliament due to concerns over her experience and financial interests. She had the support only of the EPP, which began to retaliate against left-wing candidates before Jeleva gave in and was replaced (setting back the final vote further). Before the final vote, Parliament demanded a number of concessions as part of a future working agreement under the new Lisbon Treaty. Under the deal, Parliament's President will attend high-level Commission meetings. Parliament will have a seat in the EU's Commission-led international negotiations and have a right to information on agreements. However, Parliament secured only an observer seat. Parliament also did not secure a say over the appointment of delegation heads and special representatives for foreign policy, although they will appear before Parliament after they have been appointed by the High Representative. One major internal demand was a pledge from the Commission that it would put forward legislation when Parliament requests it. Barroso considered this an infringement on the Commission's powers but did agree to respond within three months. Most requests are already responded to positively. During the setting up of the European External Action Service (EEAS), Parliament used its control over the EU budget to influence the shape of the EEAS. MEPs had aimed at getting greater oversight over the EEAS by linking it to the Commission and having political deputies to the High Representative. MEPs did not manage to get everything they demanded; however, they got broader financial control over the new body. The Parliament and Council have been compared to the two chambers of a bicameral legislature. However, there are some differences from national legislatures; for example, neither the Parliament nor the Council has the power of legislative initiative (except for the fact that the Council has the power in some intergovernmental matters). In Community matters, this is a power uniquely reserved for the European Commission (the executive). Therefore, while Parliament can amend and reject legislation, it needs the Commission to draft a bill before anything can become law. The value of such a power has been questioned on the grounds that, in the national legislatures of the member states, 85% of initiatives introduced without executive support fail to become law. Yet it has been argued by former Parliament president Hans-Gert Pöttering that, as the Parliament does have the right to ask the Commission to draft such legislation, and as the Commission is following Parliament's proposals more and more, Parliament does have a "de facto" right of legislative initiative. The Parliament also has a great deal of indirect influence, through non-binding resolutions and committee hearings, as a "pan-European soapbox" with the ear of thousands of Brussels-based journalists. There is also an indirect effect on foreign policy; the Parliament must approve all development grants, including those overseas. For example, the support for post-war Iraq reconstruction, or incentives for the cessation of Iranian nuclear development, must be supported by the Parliament. Parliamentary support was also required for the transatlantic passenger data-sharing deal with the United States. 
Finally, Parliament holds a non-binding vote on new EU treaties but cannot veto them. However, when Parliament threatened to vote down the Nice Treaty, the Belgian and Italian Parliaments said they would veto the treaty on the European Parliament's behalf. With each new treaty, the powers of the Parliament, in terms of its role in the Union's legislative procedures, have expanded. The procedure which has slowly become dominant is the "ordinary legislative procedure" (previously named "codecision procedure"), which provides an equal footing between Parliament and Council. In particular, under the procedure, the Commission presents a proposal to Parliament and the Council, which can become law only if both agree on a text; they do so (or not) through successive readings, up to a maximum of three. In its first reading, Parliament may send amendments to the Council, which can either adopt the text with those amendments or send back a "common position". Parliament may then approve that position, reject the text by an absolute majority (causing it to fail), or adopt further amendments, also by an absolute majority. If the Council does not approve these, then a "Conciliation Committee" is formed. The Committee is composed of the Council members plus an equal number of MEPs who seek to agree a compromise. Once a position is agreed, it has to be approved by Parliament by a simple majority. This is also aided by Parliament's mandate as the only directly democratic institution, which has given it leeway to have greater control over legislation than other institutions, for example over its changes to the Bolkestein directive in 2006. The few other areas that operate the "special legislative procedures" are justice and home affairs, budget and taxation, and certain aspects of other policy areas, such as the fiscal aspects of environmental policy. In these areas, the Council or the Parliament decides law alone. The procedure also depends upon which type of institutional act is being used. The strongest act is a regulation, an act or law which is directly applicable in its entirety. Then there are directives, which bind member states to certain goals which they must achieve. They do this through their own laws and hence have room to manoeuvre in deciding upon them. A decision is an instrument which is focused at a particular person or group and is directly applicable. Institutions may also issue recommendations and opinions, which are merely non-binding declarations. There is a further document which does not follow normal procedures: the "written declaration", which is similar to an early day motion used in the Westminster system. It is a document proposed by up to five MEPs on a matter within the EU's activities, used to launch a debate on that subject. Once posted outside the entrance to the hemicycle, the declaration can be signed by members, and if a majority do so it is forwarded to the President and announced to the plenary before being forwarded to the other institutions and formally noted in the minutes. The legislative branch officially holds the Union's budgetary authority, with powers gained through the Budgetary Treaties of the 1970s and the Lisbon Treaty. The EU budget is subject to a form of the ordinary legislative procedure with a single reading, giving Parliament power over the entire budget (before 2009, its influence was limited to certain areas) on an equal footing to the Council. 
If there is a disagreement between them, it is taken to a conciliation committee as it is for legislative proposals. If the joint conciliation text is not approved, the Parliament may adopt the budget definitively. The Parliament is also responsible for discharging the implementation of previous budgets based on the annual report of the European Court of Auditors. It has refused to approve the budget only twice, in 1984 and in 1998. On the latter occasion it led to the resignation of the Santer Commission, highlighting how the budgetary power gives Parliament a great deal of leverage over the Commission. Parliament also makes extensive use of its budgetary and other powers elsewhere; for example, in the setting up of the European External Action Service, Parliament has a de facto veto over its design, as it has to approve the budgetary and staff changes. The President of the European Commission is proposed by the European Council on the basis of the European elections to Parliament. That proposal has to be approved by the Parliament (by a simple majority), which "elects" the President according to the treaties. Following the approval of the Commission President, the members of the Commission are proposed by the President in agreement with the member states. Each Commissioner comes before a relevant parliamentary committee hearing covering the proposed portfolio. They are then, as a body, approved or rejected by the Parliament. In practice, the Parliament has never voted against a President or his Commission, but it did seem likely when the Barroso Commission was put forward. The resulting pressure forced the proposal to be withdrawn and changed to be more acceptable to parliament. That pressure was seen as an important sign by some of the evolving nature of the Parliament and its ability to make the Commission accountable, rather than being a rubber stamp for candidates. Furthermore, in voting on the Commission, MEPs also voted along party lines, rather than national lines, despite frequent pressure from national governments on their MEPs. This cohesion and willingness to use the Parliament's power ensured greater attention from national leaders, other institutions and the public—who previously gave the lowest ever turnout for the Parliament's elections. The Parliament also has the power to censure the Commission by a two-thirds majority, which forces the resignation of the entire Commission from office. As with approval, this power has never been used, but it was threatened against the Santer Commission, which subsequently resigned of its own accord. There are a few other controls, such as: the requirement of the Commission to submit reports to the Parliament and answer questions from MEPs; the requirement of the President-in-office of the Council to present its programme at the start of the presidency; the obligation on the President of the European Council to report to Parliament after each of its meetings; the right of MEPs to make requests for legislation and policy to the Commission; and the right to question members of those institutions (e.g. "Commission Question Time" every Tuesday). At present, MEPs may ask a question on any topic whatsoever, although in July 2008 MEPs voted to limit questions to those within the EU's mandate and to ban offensive or personal questions. The Parliament also has other powers of general supervision, mainly granted by the Maastricht Treaty. 
The Parliament has the power to set up a Committee of Inquiry, for example over mad cow disease or CIA detention flights—the former led to the creation of the European veterinary agency. The Parliament can call on other institutions to answer questions and, if necessary, take them to court if they break EU law or treaties. Furthermore, it has powers over the appointment of the members of the Court of Auditors and the president and executive board of the European Central Bank. The ECB president is also obliged to present an annual report to the parliament. The Parliament elects the European Ombudsman, who deals with public complaints against all the institutions. Petitions can also be brought forward by any EU citizen on a matter within the EU's sphere of activities. The Committee on Petitions hears cases, some 1500 each year, sometimes presented by the citizens themselves at the Parliament. While the Parliament attempts to resolve the issue as a mediator, it does resort to legal proceedings if necessary to resolve the citizen's dispute. The parliamentarians are known in English as Members of the European Parliament (MEPs). They are elected every five years by universal adult suffrage and sit according to political allegiance; about a third are women. Before 1979 they were appointed by their national parliaments. Under the Lisbon Treaty, seats are allocated to each state according to population, and the maximum number of members is set at 751 (however, as the President cannot vote while in the chair, there will be only 750 voting members at any one time). The seats are distributed according to "degressive proportionality", i.e., the larger the state, the more citizens are represented per MEP. As a result, Maltese and Luxembourgish voters have roughly ten times more influence per voter than citizens of the six large countries. Germany (80.9 million inhabitants) has 96 seats (previously 99 seats), i.e. one seat for 843,000 inhabitants. Malta (0.4 million inhabitants) has 6 seats, i.e. one seat for 70,000 inhabitants. The new system implemented under the Lisbon Treaty, including revising the seat allocation well before elections, was intended to avoid political horse-trading when the allocations have to be revised to reflect demographic changes. Pursuant to this apportionment, the constituencies are formed. In six EU member states (Belgium, France, Ireland, Italy, Poland, and the United Kingdom), the national territory is divided into a number of constituencies. In the remaining member states, the whole country forms a single constituency. All member states hold elections to the European Parliament using various forms of proportional representation. Due to the delay in ratifying the Lisbon Treaty, the seventh parliament was elected under the lower Nice Treaty cap. A small-scale treaty amendment was ratified on 29 November 2011. This amendment brought in transitional provisions to allow the 18 additional MEPs created under the Lisbon Treaty to be elected or appointed before the 2014 election. Under the Lisbon Treaty reforms, Germany was the only state to lose members, from 99 to 96. However, these seats were not removed until the 2014 election. Before 2009, members received the same salary as members of their national parliament. However, from 2009 a new members' statute came into force, after years of attempts, which gives all members equal monthly pay (8,020.53 euro each in 2014), subject to a European Union tax; the pay can also be taxed nationally. 
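The "degressive proportionality" figures quoted above follow from straightforward division of population by seat count; a minimal sketch in Python, using the approximate populations given in the text (the quoted one-seat-per-70,000 figure for Malta reflects rounding):

    # Rough illustration of degressive proportionality using the figures quoted above.
    # Populations are the approximate values given in the text; actual allocations are fixed by treaty.
    seats = {"Germany": (80_900_000, 96), "Malta": (400_000, 6)}

    for state, (population, n_seats) in seats.items():
        print(f"{state}: one seat per {population / n_seats:,.0f} inhabitants")

    # Germany works out to roughly one seat per 843,000 inhabitants and Malta to roughly one per 67,000,
    # so smaller states are represented by proportionally more MEPs per citizen.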
MEPs are entitled to a pension, paid by Parliament, from the age of 63. Members are also entitled to allowances for office costs and subsistence, and travelling expenses, based on actual cost. Besides their pay, members are granted a number of privileges and immunities. To ensure their free movement to and from the Parliament, they are accorded by their own states the facilities accorded to senior officials travelling abroad and, by other state governments, the status of visiting foreign representatives. When in their own state, they have all the immunities accorded to national parliamentarians, and, in other states, they have immunity from detention and legal proceedings. However, immunity cannot be claimed when a member is found committing a criminal offence, and the Parliament also has the right to strip a member of their immunity. MEPs in Parliament are organised into seven different parliamentary groups, including thirty non-attached members known as "non-inscrits". The two largest groups are the European People's Party (EPP) and the Socialists & Democrats (S&D). These two groups have dominated the Parliament for much of its life, continuously holding between 50 and 70 percent of the seats between them. No single group has ever held a majority in Parliament. As a result of being broad alliances of national parties, European group parties are very decentralised and hence have more in common with parties in federal states like Germany or the United States than with parties in unitary states like the majority of the EU states. Nevertheless, the European groups were actually more cohesive than their US counterparts between 2004 and 2009. Groups are often based on a single European political party such as the socialist group (before 2009). However, they can, like the liberal group, include more than one European party as well as national parties and independents. For a group to be recognised, it needs 25 MEPs from seven different countries. Once recognised, groups receive financial subsidies from the parliament and guaranteed seats on committees, creating an incentive for the formation of groups. However, some controversy occurred with the establishment of the short-lived Identity, Tradition, Sovereignty (ITS) due to its ideology; the members of the group were far-right, so there were concerns about public funds going towards such a group. There were attempts to change the rules to block the formation of ITS, but they never came to fruition. The group was, however, blocked from gaining leading positions on committees — traditionally (by agreement, not a rule) shared among all parties. When this group engaged in infighting, leading to the withdrawal of some members, its size fell below the threshold for recognition, causing its collapse. Given that the Parliament does not form the government in the traditional sense of a parliamentary system, its politics have developed along more consensual lines rather than majority rule of competing parties and coalitions. Indeed, for much of its life it has been dominated by a grand coalition of the European People's Party and the Party of European Socialists. The two major parties tend to co-operate to find a compromise between their two groups, leading to proposals endorsed by huge majorities. However, this does not always produce agreement, and each may instead try to build other alliances, the EPP normally with other centre-right or right-wing groups and the PES with centre-left or left-wing groups. Sometimes, the Liberal Group is then in the pivotal position. 
There are also occasions where very sharp party political divisions have emerged, for example over the resignation of the Santer Commission. When the initial allegations against the Commission emerged, they were directed primarily against Édith Cresson and Manuel Marín, both socialist members. When the parliament was considering refusing to discharge the Community budget, President Jacques Santer stated that a no vote would be tantamount to a vote of no confidence. The Socialist group supported the Commission and saw the issue as an attempt by the EPP to discredit their party ahead of the 1999 elections. Socialist leader Pauline Green MEP attempted a vote of confidence, and the EPP put forward counter-motions. During this period the two parties took on similar roles to a government-opposition dynamic, with the Socialists supporting the executive and the EPP renouncing its previous coalition support and voting it down. Politicisation such as this has been increasing, as Simon Hix of the London School of Economics noted in 2007. During the fifth term, 1999 to 2004, there was a break in the grand coalition resulting in a centre-right coalition between the Liberal and People's parties. This was reflected in the Presidency of the Parliament, with the terms being shared between the EPP and the ELDR, rather than the EPP and Socialists. In the following term the liberal group grew to hold 88 seats, the largest number of seats held by any third party in Parliament. Elections have taken place directly in every member state every five years since 1979; there have been eight elections since then. When a nation joins mid-term, a by-election is held to elect its representatives. This has happened six times, most recently when Croatia joined in 2013. Elections take place across four days according to local custom and, apart from having to be proportional, the electoral system is chosen by the member state. This includes the allocation of sub-national constituencies; while most member states have a national list, some, like the UK and France, divide their allocation between regions. Seats are allocated to member states according to their population; since 2014 no state has had more than 96 seats, and none fewer than 6, to maintain proportionality. The most recent Union-wide elections to the European Parliament were the European elections of 2014, held from 22 to 25 May 2014. They were the largest simultaneous transnational elections ever held anywhere in the world. The eighth term of Parliament started on 1 July 2014. The proportion of MEPs elected in 2009 who were female was 35%; in 1979 it was just 16.5%. There have been a number of proposals designed to attract greater public attention to the elections. One such innovation in the 2014 elections was that the pan-European political parties fielded "candidates" for president of the Commission, the so-called "Spitzenkandidaten" (German, "leading candidates" or "top candidates"). However, European Union governance is based on a mixture of intergovernmental and supranational features: the President of the European Commission is nominated by the European Council, representing the governments of the member states, and there is no obligation for them to nominate the successful "candidate". The Lisbon Treaty merely states that they should take account of the results of the elections when choosing whom to nominate. 
The so-called "Spitzenkandidaten" were Jean-Claude Juncker for the European People's Party, Martin Schulz for the Party of European Socialists, Guy Verhofstadt for the Alliance of Liberals and Democrats for Europe Party, Ska Keller and José Bové jointly for the European Green Party and Alexis Tsipras for the Party of the European Left. Turnout has dropped consistently at every election since the first, and from 1999 it has been below 50%. In 2007 both Bulgaria and Romania elected their MEPs in by-elections, having joined at the beginning of 2007. The Bulgarian and Romanian elections saw two of the lowest turnouts for European elections, just 28.6% and 28.3% respectively. In England, Scotland and Wales, EP elections were originally held for a constituency MEP on a first-past-the-post basis. In 1999 the system was changed to a form of proportional representation where a large group of candidates would stand for a post within a very large regional constituency. One could vote for a party, but not a candidate (unless that party had a single candidate). Each year the activities of the Parliament cycle between committee weeks, where reports are discussed in committees and interparliamentary delegations meet; political group weeks, for members to discuss work within their political groups; and session weeks, where members spend 3½ days in Strasbourg for part-sessions. In addition, six two-day part-sessions are organised in Brussels throughout the year. Four weeks are allocated as constituency weeks to allow members to do exclusively constituency work. Finally, no meetings are planned during the summer weeks. The Parliament has the power to meet without being convened by another authority. Its meetings are partly controlled by the treaties but are otherwise up to Parliament according to its own "Rules of Procedure" (the regulations governing the parliament). During sessions, members may speak after being called on by the President. Members of the Council or Commission may also attend and speak in debates. Partly due to the need for translation, and the politics of consensus in the chamber, debates tend to be calmer and more polite than, say, the Westminster system. Voting is conducted primarily by a show of hands, which may be checked on request by electronic voting. Votes of MEPs are not recorded in either case, however; that only occurs when there is a roll-call ballot. This is required for the final votes on legislation and also whenever a political group or 30 MEPs request it. The number of roll-call votes has increased with time. Votes can also be held by completely secret ballot (for example, when the president is elected). All recorded votes, along with minutes and legislation, are recorded in the Official Journal of the European Union and can be accessed online. Votes usually do not follow a debate, but rather they are grouped with other due votes on specific occasions, usually at noon on Tuesdays, Wednesdays or Thursdays. This is because the length of the vote is unpredictable and if it continues for longer than allocated it can disrupt other debates and meetings later in the day. Members are arranged in a hemicycle according to their political groups (in the Common Assembly, prior to 1958, members sat alphabetically), which are ordered mainly from left to right, but some smaller groups are placed towards the outer ring of the Parliament. All desks are equipped with microphones, headphones for translation and electronic voting equipment. 
The leaders of the groups sit on the front benches at the centre, and in the very centre is a podium for guest speakers. The remaining half of the circular chamber is primarily composed of the raised area where the President and staff sit. Further benches are provided between the sides of this area and the MEPs; these are taken up by the Council on the far left and the Commission on the far right. Both the Brussels and Strasbourg hemicycles roughly follow this layout, with only minor differences. The hemicycle design is a compromise between the different parliamentary systems. The British-based system has the different groups directly facing each other, while the French-based system is a semicircle (and the traditional German system had all members in rows facing a rostrum for speeches). Although the design is mainly based on a semicircle, the opposite ends of the spectrum do still face each other. With access to the chamber limited, entrance is controlled by ushers who aid MEPs in the chamber (for example in delivering documents). The ushers can also occasionally act as a form of police in enforcing the President's authority, for example in ejecting an MEP who is disrupting the session (although this is rare). The first head of protocol in the Parliament was French, so many of the duties in the Parliament are based on the French model first developed following the French Revolution. The 180 ushers are highly visible in the Parliament, dressed in black tails and wearing a silver chain, and are recruited in the same manner as the European civil service. The President is allocated a personal usher. The President is essentially the speaker of the Parliament and presides over the plenary when it is in session. The President's signature is required for all acts adopted by co-decision, including the EU budget. The President is also responsible for representing the Parliament externally, including in legal matters, and for the application of the rules of procedure. He or she is elected for two-and-a-half-year terms, meaning two elections per parliamentary term. The President is currently Antonio Tajani MEP of the EPP. In most countries, the protocol of the head of state comes before all others; however, in the EU the Parliament is listed as the first institution, and hence the protocol of its president comes before any other European, or national, protocol. The gifts given to numerous visiting dignitaries depend upon the President. President Josep Borrell MEP of Spain gave his counterparts a crystal cup created by an artist from Barcelona who had engraved upon it parts of the Charter of Fundamental Rights, among other things. A number of notable figures have been President of the Parliament and its predecessors. The first President was Paul-Henri Spaak MEP, one of the founding fathers of the Union. Other founding fathers include Alcide de Gasperi MEP and Robert Schuman MEP. The two female Presidents were Simone Veil MEP in 1979 (first President of the elected Parliament) and Nicole Fontaine MEP in 1999, both Frenchwomen. The previous president, Jerzy Buzek, was the first East-Central European to lead an EU institution; a former Prime Minister of Poland, he rose out of the Solidarity movement that helped overthrow communism in the Eastern Bloc. During the election of a President, the previous President (or, if he or she is unable to do so, one of the previous Vice-Presidents) presides over the chamber. 
Prior to 2009, the oldest member fulfilled this role, but the rule was changed to prevent far-right French MEP Jean-Marie Le Pen taking the chair. Below the President, there are 14 Vice-Presidents who chair debates when the President is not in the chamber. There are a number of other bodies and posts responsible for the running of parliament besides these speakers. The two main bodies are the Bureau, which is responsible for budgetary and administration issues, and the Conference of Presidents, which is a governing body composed of the presidents of each of the parliament's political groups. Looking after the financial and administrative interests of members are five Quaestors. The European Parliament budget was EUR 1.756 billion. A 2008 report on the Parliament's finances highlighted certain overspending and mis-payments. Despite some MEPs calling for the report to be published, Parliamentary authorities had refused until an MEP broke confidentiality and leaked it. The Parliament has 20 Standing Committees consisting of 25 to 71 MEPs each (reflecting the political make-up of the whole Parliament), each with a chair, a bureau and a secretariat. They meet twice a month in public to draw up, amend and adopt legislative proposals and reports to be presented to the plenary. The rapporteurs for a committee are supposed to present the view of the committee, although notably this has not always been the case. In the events leading to the resignation of the Santer Commission, the rapporteur went against the Budgetary Control Committee's narrow vote to discharge the budget, and urged the Parliament to reject it. Committees can also set up sub-committees (e.g. the Subcommittee on Human Rights) and temporary committees to deal with a specific topic (e.g. on extraordinary rendition). The chairs of the Committees co-ordinate their work through the "Conference of Committee Chairmen". When co-decision was introduced, it increased the Parliament's powers in a number of areas, but most notably those covered by the Committee on the Environment, Public Health and Food Safety. Previously this committee was considered by MEPs as a "Cinderella committee"; however, as it gained a new importance, it became more professional and rigorous, attracting increasing attention to its work. The nature of the committees differs from that of their national counterparts: although smaller in comparison to those of the United States Congress, the European Parliament's committees are unusually large by European standards, with between eight and twelve dedicated members of staff and three to four support staff. Considerable administration, archives and research resources are also at the disposal of the whole Parliament when needed. Delegations of the Parliament are formed in a similar manner and are responsible for relations with parliaments outside the EU. There are 34 delegations, each made up of around 15 MEPs; the chairpersons of the delegations also cooperate in a conference, as the committee chairs do. They include "interparliamentary delegations" (maintaining relations with parliaments outside the EU), "joint parliamentary committees" (maintaining relations with parliaments of states which are candidates or associates of the EU), the delegation to the ACP EU Joint Parliamentary Assembly and the delegation to the Euro-Mediterranean Parliamentary Assembly. 
MEPs also participate in other international activities such as the Euro-Latin American Parliamentary Assembly, the Transatlantic Legislators' Dialogue and through election observation in third countries. The Intergroups in the European Parliament are informal fora which gather MEPs from various political groups around any topic. They do not express the view of the European Parliament. They serve a double purpose: to address topics which cut across several committees, and to do so in a less formal manner. Their daily secretariat can be run either through the office of MEPs or through interest groups, be they corporate lobbies or NGOs. The favored access to MEPs which the organization running the secretariat enjoys can be one explanation for the proliferation of Intergroups in the 1990s. They are now strictly regulated and financial support, direct or otherwise (via secretariat staff, for example), must be officially specified in a declaration of financial interests. Intergroups are also established or renewed at the beginning of each legislature through a specific process. Indeed, the proposal for the constitution or renewal of an Intergroup must be supported by at least 3 political groups, whose support is limited to a specific number of proposals in proportion to their size (for example, for the legislature 2014-2019, the EPP or S&D political groups could support 22 proposals whereas the Greens/EFA or the EFDD political groups could support only 7). Speakers in the European Parliament are entitled to speak in any of the 24 official languages of the European Union, ranging from French and German to Maltese and Irish. Simultaneous interpreting is offered in all plenary sessions, and all final texts of legislation are translated. With twenty-four languages, the European Parliament is the most multilingual parliament in the world and the biggest employer of interpreters in the world (employing 350 full-time interpreters and 400 freelancers when there is higher demand). Citizens may also address the Parliament in Basque, Catalan, Valencian and Galician. Usually a language is interpreted from a foreign tongue into an interpreter's native tongue. Due to the large number of languages, some being minor ones, since 1995 interpreting is sometimes done the opposite way, out of an interpreter's native tongue (the "retour" system). In addition, a speech in a minor language may be interpreted through a third language for lack of interpreters ("relay" interpreting), for example when interpreting out of Estonian into Maltese. Due to the complexity of the issues, interpretation is not word for word. Instead, interpreters have to convey the political meaning of a speech, regardless of their own views. This requires detailed understanding of the politics and terms of the Parliament, involving a great deal of preparation beforehand (e.g. reading the documents in question). Difficulty can often arise when MEPs use profanities, jokes and word play or speak too fast. While some see speaking their native language as an important part of their identity, and can speak more fluently in debates, interpretation and its cost have been criticised by some. A 2006 report by Alexander Stubb MEP highlighted that by using only English, French and German, costs could be reduced from €118,000 per day (for the 21 languages then used—Romanian, Bulgarian and Croatian having not yet been included) to €8,900 per day. 
Some see the ideal single language as being English due to its widespread usage, although there has been a small-scale campaign to make French the reference language for all legal texts, on the grounds that it is clearer and more precise for legal purposes. Because the proceedings are translated into all of the official EU languages, they have been used to make a multilingual corpus known as Europarl. It is widely used to train statistical machine translation systems. According to the European Parliament website, the annual parliament budget for 2016 was €1.838 billion. According to a European Parliament study prepared in 2013, the Strasbourg seat costs €103 million, and according to the Court of Auditors an additional €5 million is related to travel expenses caused by having two seats. As a comparison, the German lower house of parliament (Bundestag) is estimated to cost €517 million in total for 2018. This is for a parliament with 709 members. The British House of Commons reported total annual costs in 2016-2017 of £249 million (€279 million). It had 650 seats. According to "The Economist", the European Parliament costs more than the British, French and German parliaments combined. A quarter of the costs is estimated to relate to translation and interpretation (c. €460 million), and the double seats are estimated to add a further €180 million a year. For a like-for-like comparison, these two cost blocks can be excluded. The resulting costs of c. €1.2 billion a year are still more than double the German Bundestag's costs or more than four times the costs of the British House of Commons. On 2 July 2018, MEPs rejected proposals to tighten the rules around the General Expenditure Allowance (GEA). The Parliament is based in three different cities, with numerous buildings. A protocol attached to the Treaty of Amsterdam requires that 12 plenary sessions be held in Strasbourg (none in August but two in September), which is the Parliament's official seat, while extra part-sessions as well as committee meetings are held in Brussels. Luxembourg hosts the Secretariat of the European Parliament. The European Parliament is the only assembly in the world with more than one meeting place and one of the few that does not have the power to decide its own location. The Strasbourg seat is seen as a symbol of reconciliation between France and Germany, the Strasbourg region having been fought over by the two countries in the past. However, the cost and inconvenience of having two seats is questioned. While Strasbourg is the official seat, and sits alongside the Council of Europe, Brussels is home to nearly all other major EU institutions, with the majority of Parliament's work being carried out there. Critics have described the two-seat arrangement as a "travelling circus", and there is a strong movement to establish Brussels as the sole seat. This is because the other political institutions (the Commission, Council and European Council) are located there, and hence Brussels is treated as the 'capital' of the EU. This movement has received strong backing from numerous figures, including the First Vice-President of the Commission, who stated that "something that was once a very positive symbol of the EU reuniting France and Germany has now become a negative symbol—of wasting money, bureaucracy and the insanity of the Brussels institutions". 
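As a side note on the Europarl corpus mentioned above: the publicly released corpus is distributed as pairs of line-aligned text files, one file per language. A minimal sketch of how such sentence pairs might be read for machine translation work follows; the file names are assumptions in the style of the public release, not something specified in this article, and should be adjusted to the local download.

    # A minimal sketch of reading line-aligned Europarl sentence pairs for machine translation work.
    # The file names below follow the naming style of the public Europarl release and are assumptions here.
    from itertools import islice

    def read_parallel(src_path, tgt_path, limit=None):
        # Yield (source, target) sentence pairs from two line-aligned files.
        with open(src_path, encoding="utf-8") as src, open(tgt_path, encoding="utf-8") as tgt:
            pairs = ((s.strip(), t.strip()) for s, t in zip(src, tgt))
            yield from islice(pairs, limit)

    if __name__ == "__main__":
        # Example: print the first five English-French pairs (hypothetical local file names).
        for en, fr in read_parallel("europarl-v7.fr-en.en", "europarl-v7.fr-en.fr", limit=5):
            print(en, "|||", fr)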
The Green Party has also noted the environmental cost in a study led by Jean Lambert MEP and Caroline Lucas MEP: in addition to the extra €200 million spent on the second seat, the arrangement produces over 20,268 tonnes of additional carbon dioxide, undermining the environmental stance of the institution and the Union. The campaign is further backed by a million-strong online petition started by Cecilia Malmström MEP. In August 2014, an assessment by the European Court of Auditors calculated that relocating the Strasbourg seat of the European Parliament to Brussels would save €113.8 million per year. In 2006, there were allegations of irregularity in the charges made by the city of Strasbourg on buildings the Parliament rented, which further harmed the case for the Strasbourg seat. Most MEPs prefer Brussels as a single base: a poll of MEPs found 89% of respondents wanting a single seat and 81% preferring Brussels, while another, more academic, survey found 68% support. In July 2011, an absolute majority of MEPs voted in favour of a single seat. In early 2011, the Parliament had voted to scrap one of the Strasbourg sessions by holding two within a single week. The mayor of Strasbourg officially reacted by stating "we will counter-attack by upturning the adversary's strength to our own profit, as a judoka would do." However, as Parliament's seat is now fixed by the treaties, it can only be changed by the Council acting unanimously, meaning that France could veto any move. The former French President Nicolas Sarkozy stated that the Strasbourg seat is "non-negotiable" and that France has no intention of surrendering the only EU institution on French soil. Given France's declared intention to veto any relocation to Brussels, some MEPs have advocated civil disobedience by refusing to take part in the monthly exodus to Strasbourg.
Over the last few years, European institutions have committed to promoting transparency, openness, and the availability of information about their work. In particular, transparency is regarded as pivotal to the action of European institutions and as a general principle of EU law, to be applied to the activities of EU institutions in order to strengthen the Union's democratic foundation. The general principles of openness and transparency are reaffirmed in Article 8 A, point 3, of the Treaty of Lisbon and Article 10.3 of the Maastricht Treaty respectively, which state that "every citizen shall have the right to participate in the democratic life of the Union. Decisions shall be taken as openly and as closely as possible to the citizen". Furthermore, both treaties acknowledge the value of dialogue between citizens, representative associations, civil society, and European institutions. Article 17 of the Treaty on the Functioning of the European Union (TFEU) lays the juridical foundation for an open, transparent dialogue between European institutions and churches, religious associations, and non-confessional and philosophical organisations. In July 2014, at the beginning of the 8th term, then President of the European Parliament Martin Schulz tasked Antonio Tajani, then Vice-President, with implementing the dialogue with the religious and confessional organisations included in Article 17. In this framework, the European Parliament hosts high-level conferences on inter-religious dialogue, with a focus on current issues and in relation to parliamentary work.
The office of European Parliament Mediator for International Parental Child Abduction was established in 1987 on the initiative of British politician and MEP Charles Henry Plumb, with the goal of helping minor children of international couples who are victims of parental abduction. The Mediator seeks negotiated solutions in the best interest of the minor when a minor is abducted by a parent following the separation of the couple, whether married or unmarried. Since its establishment, the office has been held by Mairead McGuinness (since 2014), Roberta Angelilli (2009-2014), Evelyne Gebhardt (2004-2009), Mary Banotti (1995-2004), and Marie-Claude Vayssade (1987-1994). The Mediator's main task is to assist parents in finding a solution in the minor's best interest through mediation, a form of dispute resolution that serves as an alternative to litigation. The Mediator acts at the request of a citizen and, after evaluating the request, starts a mediation process aimed at reaching an agreement. Once signed by both parties and the Mediator, the agreement is official. The agreement has the nature of a private contract between the parties. In drafting the agreement, the European Parliament offers the parties the legal support necessary to reach a sound, lawful agreement based on legality and equity. The agreement can be ratified by the competent national courts and can also lay the foundation for consensual separation or divorce.
The European Parliamentary Research Service (EPRS) is the European Parliament's in-house research department and think tank. It provides Members of the European Parliament - and, where appropriate, parliamentary committees - with independent, objective and authoritative analysis of, and research on, policy issues relating to the European Union, in order to assist them in their parliamentary work. It is also designed to increase Members' and EP committees' capacity to scrutinise and oversee the European Commission and other EU executive bodies. EPRS aims to provide a comprehensive range of products and services, backed by specialist internal expertise and knowledge sources in all policy fields, thereby empowering Members and committees through knowledge and contributing to the Parliament's effectiveness and influence as an institution. In undertaking this work, the EPRS supports and promotes parliamentary outreach to the wider public, including dialogue with relevant stakeholders in the EU's system of multi-level governance. All publications by EPRS are publicly available on the EP Think Tank platform.
The European Parliament periodically commissions opinion polls and studies on public opinion trends in Member States to survey citizens' perceptions and expectations of its work and of the overall activities of the European Union. Topics include citizens' perceptions of the European Parliament's role, their knowledge of the institution, their sense of belonging to the European Union, and their opinions on European elections, European integration, identity, citizenship and political values, as well as current issues such as climate change, the economy and politics. Eurobarometer analyses seek to provide an overall picture of national situations, regional specificities, socio-demographic cleavages, and historical trends. Annually, the European Parliament awards four prizes to individuals and organisations that have distinguished themselves in the areas of human rights, film, youth projects, and European participation and citizenship.
With the Sakharov Prize for Freedom of Thought, created in 1988, the European Parliament supports human rights by honouring individuals who contribute to promoting human rights worldwide, thereby raising awareness of human rights violations. Priorities include: protection of human rights and fundamental liberties, with particular focus on freedom of expression; protection of minority rights; compliance with international law; and development of democracy and authentic rule of law. The European Charlemagne Youth Prize seeks to encourage youth participation in the European integration process. It is awarded by the European Parliament and the Foundation of the International Charlemagne Prize of Aachen to youth projects aimed at nurturing common European identity and European citizenship. The European Citizens' Prize is awarded by the European Parliament to activities and actions carried out by citizens and associations to promote integration between the citizens of EU member states and transnational cooperation projects in the EU. Since 2007, the LUX Prize has been awarded by the European Parliament to films dealing with current topics of European public interest that encourage reflection on Europe and its future. Over time, the LUX Prize has become a prestigious cinema award which supports European film and production, including outside the EU.
Emma Goldman
Emma Goldman (June 27, 1869 – May 14, 1940) was an anarchist political activist and writer. She played a pivotal role in the development of anarchist political philosophy in North America and Europe in the first half of the 20th century. Born in Kovno, Russian Empire (now Kaunas, Lithuania) to a Jewish family, Goldman emigrated to the United States in 1885. Attracted to anarchism after the Haymarket affair, Goldman became a writer and a renowned lecturer on anarchist philosophy, women's rights, and social issues, attracting crowds of thousands. She and anarchist writer Alexander Berkman, her lover and lifelong friend, planned to assassinate industrialist and financier Henry Clay Frick as an act of propaganda of the deed. Frick survived the attempt on his life in 1892, and Berkman was sentenced to 22 years in prison. Goldman was imprisoned several times in the years that followed, for "inciting to riot" and illegally distributing information about birth control. In 1906, Goldman founded the anarchist journal "Mother Earth". In 1917, Goldman and Berkman were sentenced to two years in jail for conspiring to "induce persons not to register" for the newly instituted draft. After their release from prison, they were arrested—along with hundreds of others—and deported to Russia. Initially supportive of that country's October Revolution, which brought the Bolsheviks to power, Goldman reversed her opinion in the wake of the Kronstadt rebellion and denounced the Soviet Union for its violent repression of independent voices. In 1923, she published a book about her experiences, "My Disillusionment in Russia". While living in England, Canada, and France, she wrote an autobiography called "Living My Life". After the outbreak of the Spanish Civil War, she traveled to Spain to support the anarchist revolution there. She died in Toronto on May 14, 1940, aged 70. During her life, Goldman was lionized as a freethinking "rebel woman" by admirers, and denounced by detractors as an advocate of politically motivated murder and violent revolution.
Her writing and lectures spanned a wide variety of issues, including prisons, atheism, freedom of speech, militarism, capitalism, marriage, free love, and homosexuality. Although she distanced herself from first-wave feminism and its efforts toward women's suffrage, she developed new ways of incorporating gender politics into anarchism. After decades of obscurity, Goldman gained iconic status in the 1970s, when feminist and anarchist scholars revived popular interest in her life.
Emma Goldman's Orthodox Jewish family lived in Kovno in the Russian Empire, which is now the Lithuanian city of Kaunas. Goldman's mother Taube Bienowitch had been married before to a man with whom she had two daughters—Helena in 1860 and Lena in 1862. When her first husband died of tuberculosis, Taube was devastated. Goldman later wrote: "Whatever love she had had died with the young man to whom she had been married at the age of fifteen." Taube's second marriage was arranged by her family and was, as Goldman put it, "mismated from the first". Her second husband, Abraham Goldman, invested Taube's inheritance in a business that quickly failed. The ensuing hardship combined with the emotional distance of husband and wife to make the household a tense place for the children. When Taube became pregnant, Abraham hoped desperately for a son; a daughter, he believed, would serve as one more sign of failure. They eventually had three sons, but their first child together was Emma. Emma Goldman was born on June 27, 1869. Her father used violence to punish his children, beating them when they disobeyed him. He used a whip on Emma, the most rebellious of them. Her mother provided little comfort, rarely calling on Abraham to tone down his beatings. Goldman later speculated that her father's furious temper was at least partly a result of sexual frustration. Goldman's relationships with her elder half-sisters, Helena and Lena, were a study in contrasts. Helena, the oldest, provided the comfort they lacked from their mother; she filled Goldman's childhood with "whatever joy it had". Lena, however, was distant and uncharitable. The three sisters were joined by brothers Louis (who died at the age of six), Herman (born in 1872), and Moishe (born in 1879). When Emma was a young girl, the Goldman family moved to the village of Papilė, where her father ran an inn. While her sisters worked, she became friends with a servant named Petrushka, who excited her "first erotic sensations". Later in Papilė she witnessed a peasant being whipped with a knout in the street. This event traumatized her and contributed to her lifelong distaste for violent authority. At the age of seven, Goldman moved with her family to the Prussian city of Königsberg (then part of the German Empire), and she enrolled in a Realschule. One teacher punished disobedient students—targeting Goldman in particular—by beating their hands with a ruler. Another teacher tried to molest his female students and was fired when Goldman fought back. She found a sympathetic mentor in her German-language teacher, who loaned her books and took her to an opera. A passionate student, Goldman passed the exam for admission into a gymnasium, but her religion teacher refused to provide a certificate of good behavior and she was unable to attend. The family moved to the Russian capital of Saint Petersburg, where her father opened one unsuccessful store after another.
Their poverty forced the children to work, and Goldman took an assortment of jobs, including one in a corset shop. As a teenager, Goldman begged her father to allow her to return to school, but instead he threw her French book into the fire and shouted: "Girls do not have to learn much! All a Jewish daughter needs to know is how to prepare gefilte fish, cut noodles fine, and give the man plenty of children." Goldman nevertheless pursued an education on her own, and soon began to study the political turmoil around her, particularly the Nihilists responsible for assassinating Alexander II of Russia. The ensuing turmoil intrigued Goldman, although she did not fully understand it at the time. When she read Chernyshevsky's novel, "What Is to Be Done?" (1863), she found a role model in the protagonist Vera, who adopts a Nihilist philosophy and escapes her repressive family to live freely and organize a sewing cooperative. The book enthralled Goldman and remained a source of inspiration throughout her life. Her father, meanwhile, continued to insist on a domestic future for her, and he tried to arrange for her to be married at the age of fifteen. They fought about the issue constantly; he complained that she was becoming a "loose" woman, and she insisted that she would marry for love alone. At the corset shop, she was forced to fend off unwelcome advances from Russian officers and other men. One persistent suitor took her into a hotel room and committed what Goldman called "violent contact"; two biographers call it rape. She was stunned by the experience, overcome by "shock at the discovery that the contact between man and woman could be so brutal and painful." Goldman felt that the encounter forever soured her interactions with men. In 1885, Helena made plans to emigrate to the United States to join their sister Lena and Lena's husband in Rochester, New York. Goldman wanted to join her, but their father refused to allow it. Despite Helena's offer to pay for the trip, Abraham turned a deaf ear to their pleas. Desperate, Goldman threatened to throw herself into the Neva River if she could not go. He finally agreed, and on December 29, 1885, Helena and Emma arrived at New York City's Castle Garden, the entry point for immigrants. They settled upstate, living in the Rochester home which Lena had made with her husband Samuel. Fleeing the rising antisemitism of Saint Petersburg, their parents and brothers joined them a year later. Goldman began working as a seamstress, sewing overcoats for more than ten hours a day, earning two and a half dollars a week. She asked for a raise and was denied; she quit and took work at a smaller shop nearby. At her new job, Goldman met a fellow worker named Jacob Kershner, who shared her love for books, dancing, and traveling, as well as her frustration with the monotony of factory work. After four months, they married in February 1887. Once he moved in with Goldman's family, however, their relationship faltered. On their wedding night she discovered that he was impotent; they became emotionally and physically distant. Before long he became jealous and suspicious. She, meanwhile, was becoming more engaged with the political turmoil around her—particularly the fallout of the 1886 Haymarket affair in Chicago and the anti-authoritarian political philosophy of anarchism. Less than a year after the wedding, they were divorced; he begged her to return and threatened to poison himself if she did not. They reunited, but after three months she left once again.
Her parents considered her behavior "loose" and refused to allow Goldman into their home. Carrying her sewing machine in one hand and a bag with five dollars in the other, she left Rochester and headed southeast to New York City. On her first day in the city, Goldman met two men who would forever change her life. At Sachs's Café, a gathering place for radicals, she was introduced to Alexander Berkman, an anarchist who invited her to a public speech that evening. They went to hear Johann Most, editor of a radical publication called "Freiheit" and an advocate of "propaganda of the deed"—the use of violence to instigate change. She was impressed by his fiery oration, and he took her under his wing, training her in methods of public speaking. He encouraged her vigorously, telling her that she was "to take my place when I am gone." One of her first public talks in support of "the Cause" was in Rochester. After convincing Helena not to tell their parents of her speech, Goldman found her mind a blank once on stage; then, as she later recounted, the words suddenly began to flow. Excited by the experience, Goldman refined her public persona during subsequent engagements. Quickly, however, she found herself arguing with Most over her independence. After a momentous speech in Cleveland, she felt as though she had become "a parrot repeating Most's views" and resolved to express her own views on stage. Upon her return to New York, Most was furious and told her: "Who is not with me is against me!" She left "Freiheit" and joined with another publication, "Die Autonomie". Meanwhile, she had begun a friendship with Berkman, whom she affectionately called Sasha. Before long they became lovers and moved into a communal apartment with his cousin Modest "Fedya" Stein and Goldman's friend, Helen Minkin, on 42nd Street. Although their relationship had numerous difficulties, Goldman and Berkman would share a close bond for decades, united by their anarchist principles and commitment to personal equality. In 1892, Goldman joined with Berkman and Stein in opening an ice cream shop in Worcester, Massachusetts. After only a few months of operating the shop, however, Goldman and Berkman were deflected from the venture by their involvement in the Homestead Strike, one of the first political moments that brought them together. In June 1892, a steel plant in Homestead, Pennsylvania owned by Andrew Carnegie became the focus of national attention when talks between the Carnegie Steel Company and the Amalgamated Association of Iron and Steel Workers (AA) broke down. The factory's manager was Henry Clay Frick, a fierce opponent of the union. When a final round of talks failed at the end of June, management closed the plant and locked out the workers, who immediately went on strike. Strikebreakers were brought in and the company hired Pinkerton guards to protect them. On July 6, a fight broke out between 300 Pinkerton guards and a crowd of armed union workers. During the twelve-hour gunfight, seven guards and nine strikers were killed. When a majority of the nation's newspapers expressed support for the strikers, Goldman and Berkman resolved to assassinate Frick, an action they expected would inspire the workers to revolt against the capitalist system. Berkman chose to carry out the assassination, and ordered Goldman to stay behind in order to explain his motives after he went to jail. He would be in charge of the deed; she of the propaganda.
Berkman tried and failed to make a bomb, then set off for Pittsburgh to buy a gun and a suit of decent clothes. Goldman, meanwhile, decided to help fund the scheme through prostitution. Remembering the character of Sonya in Fyodor Dostoevsky's novel "Crime and Punishment" (1866), she mused: "She had become a prostitute in order to support her little brothers and sisters...Sensitive Sonya could sell her body; why not I?" Once on the street, she caught the eye of a man who took her into a saloon, bought her a beer, gave her ten dollars, informed her she did not have "the knack," and told her to quit the business. She was "too astounded for speech". She wrote to Helena, claiming illness, and asked her for fifteen dollars. On July 23, Berkman gained access to Frick's office with a concealed handgun and shot Frick three times, then stabbed him in the leg. A group of workers—far from joining in his "attentat"—beat Berkman unconscious, and he was carried away by the police. Berkman was convicted of attempted murder and sentenced to 22 years in prison. Goldman suffered during his long absence. Convinced Goldman was involved in the plot, police raided her apartment and—finding no evidence—pressured her landlord into evicting her. Worse, the "attentat" had failed to rouse the masses: workers and anarchists alike condemned Berkman's action. Johann Most, their former mentor, lashed out at Berkman and the assassination attempt. Furious at these attacks, Goldman brought a toy horsewhip to a public lecture and demanded, onstage, that Most explain his betrayal. He dismissed her, whereupon she struck him with the whip, broke it on her knee, and hurled the pieces at him. She later regretted her assault, confiding to a friend: "At the age of twenty-three, one does not reason." When the Panic of 1893 struck in the following year, the United States suffered one of its worst economic crises. By year's end, the unemployment rate was higher than 20%, and "hunger demonstrations" sometimes gave way to riots. Goldman began speaking to crowds of frustrated men and women in New York City. On August 21, she spoke to a crowd of nearly 3,000 people in Union Square, where she encouraged unemployed workers to take immediate action. Her exact words are unclear: undercover agents insist she ordered the crowd to "take everything ... by force", while Goldman later recounted this message: "Well then, demonstrate before the palaces of the rich; demand work. If they do not give you work, demand bread. If they deny you both, take bread." Later in court, Detective-Sergeant Charles Jacobs offered yet another version of her speech. A week later Goldman was arrested in Philadelphia and returned to New York City for trial, charged with "inciting to riot". During the train ride, Jacobs offered to drop the charges against her if she would inform on other radicals in the area. She responded by throwing a glass of ice water in his face. As she awaited trial, Goldman was visited by Nellie Bly, a reporter for the "New York World." She spent two hours talking to Goldman, and wrote a positive article about the woman she described as a "modern Joan of Arc." Despite this positive publicity, the jury was persuaded by Jacobs' testimony and frightened by Goldman's politics. The assistant District Attorney questioned Goldman about her anarchism, as well as her atheism; the judge spoke of her as "a dangerous woman". She was sentenced to one year in the Blackwell's Island Penitentiary. 
Once inside, she suffered an attack of rheumatism and was sent to the infirmary; there she befriended a visiting doctor and began studying medicine. She also read dozens of books, including works by the American activist-writers Ralph Waldo Emerson and Henry David Thoreau; novelist Nathaniel Hawthorne; poet Walt Whitman; and philosopher John Stuart Mill. When Goldman was released after ten months, a raucous crowd of nearly 3,000 people greeted her at the Thalia Theater in New York City. She soon became swamped with requests for interviews and lectures. To make money, Goldman decided to pursue the medical work she had studied in prison. However, her preferred fields of specialization—midwifery and massage—were not available to nursing students in the US. She sailed to Europe, lecturing in London, Glasgow, and Edinburgh. She met with renowned anarchists such as Errico Malatesta, Louise Michel, and Peter Kropotkin. In Vienna, she received two diplomas and put them immediately to use back in the US. Alternating between lectures and midwifery, she conducted the first cross-country tour by an anarchist speaker. In November 1899 she returned to Europe, where she met the anarchist Hippolyte Havel, with whom she went to France and helped organize the International Anarchist Congress on the outskirts of Paris. On September 6, 1901, Leon Czolgosz, an unemployed factory worker and registered Republican with a history of mental illness, shot US President William McKinley twice during a public reception in Buffalo, New York. McKinley was hit in the breastbone and stomach, and died eight days later. Czolgosz was arrested and interrogated around the clock. During interrogation he claimed to be an anarchist and said he had been inspired to act after attending a speech given by Goldman. The authorities used this as a pretext to charge Goldman with planning McKinley's assassination. They tracked her to a residence in Chicago she shared with Havel, as well as Mary and Abe Isaak, an anarchist couple. Goldman was arrested, along with Isaak, Havel, and ten other anarchists. Earlier, Czolgosz had tried but failed to become friends with Goldman and her companions. During a talk in Cleveland, Czolgosz had approached Goldman and asked her advice on which books he should read. In July 1901, he had appeared at the Isaak house, asking a series of unusual questions. They assumed he was an infiltrator, like a number of police agents sent to spy on radical groups. They had remained distant from him, and Abe Isaak sent a notice to associates warning of "another spy". Although Czolgosz repeatedly denied Goldman's involvement, the police held her in close custody, subjecting her to what she called the "third degree". She explained her housemates' distrust of Czolgosz, and it became clear that she had not had any significant contact with the attacker. No evidence was found linking Goldman to the attack, and she was released after two weeks of detention. Before McKinley died, Goldman offered to provide nursing care, referring to him as "merely a human being". Czolgosz, despite considerable evidence of mental illness, was convicted of murder and executed. Throughout her detention and after her release, Goldman steadfastly refused to condemn Czolgosz's actions, standing virtually alone in doing so. Friends and supporters—including Berkman—urged her to quit his cause. But Goldman defended Czolgosz as a "supersensitive being" and chastised other anarchists for abandoning him.
She was vilified in the press as the "high priestess of anarchy", while many newspapers declared the anarchist movement responsible for the murder. In the wake of these events, socialism gained support over anarchism among US radicals. McKinley's successor, Theodore Roosevelt, declared his intent to crack down "not only against anarchists, but against all active and passive sympathizers with anarchists". After Czolgosz was executed, Goldman withdrew from the world. Scorned by her fellow anarchists, vilified by the press, and separated from her love, Berkman, she retreated into anonymity and nursing. "It was bitter and hard to face life anew," she wrote later. Using the name E. G. Smith, she vanished from public life and took on a series of private nursing jobs. When the US Congress passed the Anarchist Exclusion Act, however, a new wave of activism rose to oppose it, pulling Goldman back into the movement. A coalition of people and organizations across the left end of the political spectrum opposed the law on grounds that it violated freedom of speech, and she had the nation's ear once again. When an English anarchist named John Turner was arrested under the Anarchist Exclusion Act and threatened with deportation, Goldman joined forces with the Free Speech League to champion his cause. The league enlisted the aid of attorneys Clarence Darrow and Edgar Lee Masters, who took Turner's case to the US Supreme Court. Although Turner and the League lost, Goldman considered it a propaganda victory. She had returned to anarchist activism, but it was taking its toll on her. "I never felt so weighed down," she wrote to Berkman. "I fear I am forever doomed to remain public property and to have my life worn out through the care for the lives of others." In 1906, Goldman decided to start a publication, "a place of expression for the young idealists in arts and letters". "Mother Earth" was staffed by a cadre of radical activists, including Hippolyte Havel, Max Baginski, and Leonard Abbott. In addition to publishing original works by its editors and anarchists around the world, "Mother Earth" reprinted selections from a variety of writers. These included the French philosopher Pierre-Joseph Proudhon, Russian anarchist Peter Kropotkin, German philosopher Friedrich Nietzsche, and British writer Mary Wollstonecraft. Goldman wrote frequently about anarchism, politics, labor issues, atheism, sexuality, and feminism. On May 18 of the same year, Alexander Berkman was released from prison. Carrying a bouquet of roses, Goldman met him on the train platform and found herself "seized by terror and pity" as she beheld his gaunt, pale form. Neither was able to speak; they returned to her home in silence. For weeks, he struggled to readjust to life on the outside; a speaking tour ended in failure, and in Cleveland he purchased a revolver with the intent of killing himself. He returned to New York, however, and learned that Goldman had been arrested with a group of activists meeting to reflect on Czolgosz. Invigorated anew by this violation of freedom of assembly, he declared, "My resurrection has come!" and set about securing their release. Berkman took the helm of "Mother Earth" in 1907, while Goldman toured the country to raise funds to keep it operating. Editing the magazine was a revitalizing experience for Berkman; his relationship with Goldman faltered, however, and he had an affair with a 15-year-old anarchist named Becky Edelsohn.
Goldman was pained by his rejection of her, but considered it a consequence of his prison experience. Later that year she served as a delegate from the US to the International Anarchist Congress of Amsterdam. Anarchists and syndicalists from around the world gathered to sort out the tension between the two ideologies, but no decisive agreement was reached. Goldman returned to the US and continued speaking to large audiences. For the next ten years, Goldman traveled around the country nonstop, delivering lectures and agitating for anarchism. The coalitions formed in opposition to the Anarchist Exclusion Act had given her an appreciation for reaching out to those of other political positions. When the US Justice Department sent spies to observe, they reported the meetings as "packed". Writers, journalists, artists, judges, and workers from across the spectrum spoke of her "magnetic power", her "convincing presence", her "force, eloquence, and fire". In the spring of 1908, Goldman met and fell in love with Ben Reitman, the so-called "Hobo doctor." Having grown up in Chicago's tenderloin district, Reitman spent several years as a drifter before earning a medical degree from the College of Physicians and Surgeons of Chicago. As a doctor, he attended to people suffering from poverty and illness, particularly venereal diseases. He and Goldman began an affair. They shared a commitment to free love, but while Reitman took a variety of lovers, Goldman did not. She tried to reconcile her feelings of jealousy with a belief in freedom of the heart, but found it difficult. Two years later, Goldman began feeling frustrated with lecture audiences. She yearned to "reach the few who really want to learn, rather than the many who come to be amused". She collected a series of speeches and items she had written for "Mother Earth" and published a book called "Anarchism and Other Essays." Covering a wide variety of topics, the book tried to represent "the mental and soul struggles of twenty-one years". In addition to a comprehensive look at anarchism and its criticisms, it includes essays on patriotism, women's suffrage, marriage, and prisons. When Margaret Sanger, an advocate of access to contraception, coined the term "birth control" and disseminated information about various methods in the June 1914 issue of her magazine "The Woman Rebel," she received aggressive support from Goldman, who had already been active in efforts to increase birth control access for several years. Sanger herself was arrested under the Comstock Law, which prohibited the dissemination of "obscene, lewd, or lascivious articles"—including information relating to birth control. Although they later split from Sanger over charges of insufficient support, Goldman and Reitman distributed copies of Sanger's pamphlet "Family Limitation" (along with a similar essay of Reitman's). In 1915 Goldman conducted a nationwide speaking tour, in part to raise awareness about contraception options. Although the nation's attitude toward the topic seemed to be liberalizing, Goldman was arrested on February 11, 1916, as she was about to give another public lecture on how to use contraceptives, and was charged with violating the Comstock Law. Refusing to pay a $100 fine, Goldman spent two weeks in a prison workhouse, which she saw as an "opportunity" to reconnect with those rejected by society.
Although US President Woodrow Wilson was re-elected in 1916 under the slogan "He kept us out of the war", at the start of his second term he announced that Germany's continued deployment of unrestricted submarine warfare was sufficient cause for the US to enter World War I. Shortly afterward, Congress passed the Selective Service Act of 1917, which required all males aged 21–30 to register for military conscription. Goldman saw the decision as an exercise in militarist aggression, driven by capitalism. She declared in "Mother Earth" her intent to resist conscription, and to oppose US involvement in the war. To this end, she and Berkman organized the No Conscription League of New York, which proclaimed: "We oppose conscription because we are internationalists, antimilitarists, and opposed to all wars waged by capitalistic governments." The group became a vanguard for anti-draft activism, and chapters began to appear in other cities. When police began raiding the group's public events to find young men who had not registered for the draft, however, Goldman and others focused their efforts on distributing pamphlets and other written work. In the midst of the nation's patriotic fervor, many elements of the political left refused to support the League's efforts. The Women's Peace Party, for example, ceased its opposition to the war once the US entered it. The Socialist Party of America took an official stance against US involvement, but supported Wilson in most of his activities. On June 15, 1917, Goldman and Berkman were arrested during a raid of their offices which yielded "a wagon load of anarchist records and propaganda" for the authorities. "The New York Times" reported that Goldman asked to change into a more appropriate outfit, and emerged in a gown of "royal purple". The pair were charged with conspiracy to "induce persons not to register" under the newly enacted Espionage Act, and were held on US$25,000 bail each. Defending herself and Berkman during their trial, Goldman invoked the First Amendment, asking how the government could claim to fight for democracy abroad while suppressing free speech at home:
We say that if America has entered the war to make the world safe for democracy, she must first make democracy safe in America. How else is the world to take America seriously, when democracy at home is daily being outraged, free speech suppressed, peaceable assemblies broken up by overbearing and brutal gangsters in uniform; when free press is curtailed and every independent opinion gagged? Verily, poor as we are in democracy, how can we give of it to the world?
However, the jury found Goldman and Berkman guilty. Judge Julius Marshuetz Mayer imposed the maximum sentence: two years' imprisonment, a $10,000 fine each, and the possibility of deportation after their release from prison. As she was transported to Missouri State Penitentiary, Goldman wrote to a friend: "Two years imprisonment for having made an uncompromising stand for one's ideal. Why that is a small price." In prison, she was again assigned to work as a seamstress, under the eye of a "miserable gutter-snipe of a 21-year-old boy paid to get results". She met the socialist Kate Richards O'Hare, who had also been imprisoned under the Espionage Act. Although they differed on political strategy—Kate O'Hare believed in voting to achieve state power—the two women came together to agitate for better conditions among prisoners.
Goldman also met and became friends with Gabriella Segata Antolini, an anarchist and follower of Luigi Galleani. Antolini had been arrested transporting a satchel filled with dynamite on a Chicago-bound train. She had refused to cooperate with authorities, and was sent to prison for 14 months. Working together to make life better for the other inmates, the three women became known as "The Trinity". Goldman was released on September 27, 1919. Goldman and Berkman were released from prison during the United States' Red Scare of 1919–20, when public anxiety about wartime pro-German activities had morphed into a pervasive fear of Bolshevism and the prospect of an imminent radical revolution. Attorney General Alexander Mitchell Palmer and J. Edgar Hoover, head of the US Department of Justice's General Intelligence Division, were intent on using the Anarchist Exclusion Act and its 1918 expansion to deport any non-citizens they could identify as advocates of anarchy or revolution. "Emma Goldman and Alexander Berkman," Hoover wrote while they were in prison, "are, beyond doubt, two of the most dangerous anarchists in this country and return to the community will result in undue harm." At her deportation hearing on October 27, Goldman refused to answer questions about her beliefs on the grounds that her American citizenship invalidated any attempt to deport her under the Anarchist Exclusion Act, which could be enforced only against non-citizens of the US. She presented a written statement instead: "Today so-called aliens are deported. Tomorrow native Americans will be banished. Already some patrioteers are suggesting that native American sons to whom democracy is a sacred ideal should be exiled." Louis Post at the Department of Labor, which had ultimate authority over deportation decisions, determined that the revocation of the American citizenship of her husband, Jacob Kershner, in 1908 after his conviction had revoked hers as well. After initially promising a court fight, she decided not to appeal his ruling. The Labor Department included Goldman and Berkman among 249 aliens it deported "en masse," mostly people with only vague associations with radical groups who had been swept up in government raids in November. "Buford", a ship the press nicknamed the "Soviet Ark," sailed from the Army's New York Port of Embarkation on December 21. Some 58 enlisted men and four officers provided security on the journey, and pistols were distributed to the crew. Most of the press approved enthusiastically. The Cleveland "Plain Dealer" wrote: "It is hoped and expected that other vessels, larger, more commodious, carrying similar cargoes, will follow in her wake." The ship landed her charges in Hanko, Finland on Saturday, January 17, 1920. Upon arrival in Finland, authorities there conducted the deportees to the Russian frontier under a flag of truce. Goldman initially viewed the Bolshevik revolution in a positive light. She wrote in "Mother Earth" that despite its dependence on Communist government, it represented "the most fundamental, far-reaching and all-embracing principles of human freedom and of economic well-being". By the time she neared Europe, however, she expressed fears about what was to come. She was worried about the ongoing Russian Civil War and the possibility of being seized by anti-Bolshevik forces. The state, anti-capitalist though it was, also posed a threat. "I could never in my life work within the confines of the State," she wrote to her niece, "Bolshevist or otherwise."
She quickly discovered that her fears were justified. Days after returning to Petrograd (Saint Petersburg), she was shocked to hear a party official refer to free speech as a "bourgeois superstition". As she and Berkman traveled around the country, they found repression, mismanagement, and corruption instead of the equality and worker empowerment they had dreamed of. Those who questioned the government were demonized as counter-revolutionaries, and workers labored under severe conditions. They met with Vladimir Lenin, who assured them that government suppression of press liberties was justified. He told them: "There can be no free speech in a revolutionary period." Berkman was more willing to forgive the government's actions in the name of "historical necessity", but he eventually joined Goldman in opposing the Soviet state's authority. In March 1921, strikes erupted in Petrograd when workers took to the streets demanding better food rations and more union autonomy. Goldman and Berkman felt a responsibility to support the strikers, stating: "To remain silent now is impossible, even criminal." The unrest spread to the port town of Kronstadt, where the government ordered a military response to suppress striking soldiers and sailors. In the ensuing Kronstadt rebellion, approximately 1,000 rebel sailors and soldiers were killed and two thousand more were arrested; many were later executed. In the wake of these events, Goldman and Berkman decided there was no future in the country for them. "More and more", she wrote, "we have come to the conclusion that we can do nothing here. And as we can not keep up a life of inactivity much longer we have decided to leave." In December 1921, they left the country and went to the Latvian capital city of Riga. The US commissioner in that city wired officials in Washington DC, who began requesting information from other governments about the couple's activities. After a short trip to Stockholm, they moved to Berlin for several years; during this time Goldman agreed to write a series of articles about her time in Russia for Joseph Pulitzer's newspaper, the "New York World." These were later collected and published in book form as "My Disillusionment in Russia" (1923) and "My Further Disillusionment in Russia" (1924). The publishers added these titles to attract attention; Goldman protested, albeit in vain. Goldman found it difficult to acclimate to the German leftist community in Berlin. Communists despised her outspokenness about Soviet repression; liberals derided her radicalism. While Berkman remained in Berlin helping Russian exiles, Goldman moved to London in September 1924. Upon her arrival, the novelist Rebecca West arranged a reception dinner in her honor, attended by philosopher Bertrand Russell, novelist H. G. Wells, and more than 200 other guests. When she spoke of her dissatisfaction with the Soviet government, the audience was shocked. Some left the gathering; others berated her for prematurely criticizing the Communist experiment. Later, in a letter, Russell declined to support her efforts at systemic change in the Soviet Union and ridiculed her anarchist idealism. In 1925, the spectre of deportation loomed again, but a Scottish anarchist named James Colton offered to marry her and provide British citizenship. Although they were only distant acquaintances, she accepted and they were married on June 27, 1925. Her new status gave her peace of mind, and allowed her to travel to France and Canada.
Life in London was stressful for Goldman; she wrote to Berkman: "I am awfully tired and so lonely and heartsick. It is a dreadful feeling to come back here from lectures and find not a kindred soul, no one who cares whether one is dead or alive." She worked on analytical studies of drama, expanding on the work she had published in 1914. But the audiences were "awful," and she never finished her second book on the subject. Goldman traveled to Canada in 1927, just in time to receive news of the impending executions of Italian anarchists Nicola Sacco and Bartolomeo Vanzetti in Boston. Angered by the many irregularities of the case, she saw it as another travesty of justice in the US. She longed to join the mass demonstrations in Boston; memories of the Haymarket affair overwhelmed her, compounded by her isolation. "Then," she wrote, "I had my life before me to take up the cause for those killed. Now I have nothing." In 1928, she began writing her autobiography, with the support of a group of American admirers, including journalist H. L. Mencken, poet Edna St. Vincent Millay, novelist Theodore Dreiser and art collector Peggy Guggenheim, who raised $4,000 for her. She secured a cottage in the French coastal city of Saint-Tropez and spent two years recounting her life. Berkman offered sharply critical feedback, which she eventually incorporated at the price of a strain on their relationship. Goldman intended the book, "Living My Life," as a single volume for a price the working class could afford (she urged no more than $5.00); her publisher Alfred A. Knopf, however, released it as two volumes sold together for $7.50. Goldman was furious, but unable to force a change. Due in large part to the Great Depression, sales were sluggish despite keen interest from libraries around the US. Critical reviews were generally enthusiastic; "The New York Times", "The New Yorker", and "Saturday Review of Literature" all listed it as one of the year's top non-fiction books. In 1933, Goldman received permission to lecture in the United States under the condition that she speak only about drama and her autobiography—but not current political events. She returned to New York on February 2, 1934, to generally positive press coverage—except from Communist publications. Soon she was surrounded by admirers and friends, besieged with invitations to talks and interviews. Her visa expired in May, and she went to Toronto in order to file another request to visit the US. However, this second attempt was denied. She stayed in Canada, writing articles for US publications. In February and March 1936, Berkman underwent a pair of prostate gland operations. Recuperating in Nice and cared for by his companion, Emmy Eckstein, he missed Goldman's sixty-seventh birthday in Saint-Tropez in June. She wrote to him in sadness, but he never read the letter; in the middle of the night she received a call that Berkman was in great distress. She left for Nice immediately, but when she arrived that morning she found that he had shot himself and was nearly comatose and paralyzed. He died later that evening. In July 1936, the Spanish Civil War started after an attempted "coup d'état" by parts of the Spanish Army against the government of the Second Spanish Republic. At the same time, the Spanish anarchists, fighting against the Nationalist forces, started an anarchist revolution.
Goldman was invited to Barcelona and in an instant, as she wrote to her niece, "the crushing weight that was pressing down on my heart since Sasha's death left me as by magic". She was welcomed by the Confederación Nacional del Trabajo (CNT) and Federación Anarquista Ibérica (FAI) organizations, and for the first time in her life lived in a community run by and for anarchists, according to true anarchist principles. "In all my life", she wrote later, "I have not met with such warm hospitality, comradeship and solidarity." After touring a series of collectives in the province of Huesca, she told a group of workers: "Your revolution will destroy forever [the notion] that anarchism stands for chaos." She began editing the weekly "CNT-FAI Information Bulletin" and responded to English-language mail. Goldman began to worry about the future of Spain's anarchism when the CNT-FAI joined a coalition government in 1937—against the core anarchist principle of abstaining from state structures—and, more distressingly, made repeated concessions to Communist forces in the name of uniting against fascism. She wrote that cooperating with Communists in Spain was "a denial of our comrades in Stalin's concentration camps". Russia, meanwhile, refused to send weapons to anarchist forces, and disinformation campaigns were being waged against the anarchists across Europe and the US. Her faith in the movement unshaken, Goldman returned to London as an official representative of the CNT-FAI. Delivering lectures and giving interviews, Goldman enthusiastically supported the Spanish anarcho-syndicalists. She wrote regularly for "Spain and the World", a biweekly newspaper focusing on the civil war. In May 1937, however, Communist-led forces attacked anarchist strongholds and broke up agrarian collectives. Newspapers in England and elsewhere accepted the timeline of events offered by the Second Spanish Republic at face value. British journalist George Orwell, present for the crackdown, wrote: "[T]he accounts of the Barcelona riots in May ... beat everything I have ever seen for lying." Goldman returned to Spain in September, but the CNT-FAI appeared to her like people "in a burning house". Worse, anarchists and other radicals around the world refused to support their cause. The Nationalist forces declared victory in Spain just before she returned to London. Frustrated by England's repressive atmosphere—which she called "more fascist than the fascists"—she returned to Canada in 1939. Her service to the anarchist cause in Spain was not forgotten, however. On her seventieth birthday, the former Secretary-General of the CNT-FAI, Mariano Vázquez, sent a message to her from Paris, praising her for her contributions and naming her as "our spiritual mother". She called it "the most beautiful tribute I have ever received". As the events preceding World War II began to unfold in Europe, Goldman reiterated her opposition to wars waged by governments. "[M]uch as I loathe Hitler, Mussolini, Stalin and Franco", she wrote to a friend, "I would not support a war against them and for the democracies which, in the last analysis, are only Fascist in disguise." She felt that Britain and France had missed their opportunity to oppose fascism, and that the coming war would only result in "a new form of madness in the world". On Saturday, February 17, 1940, Goldman suffered a debilitating stroke. She became paralyzed on her right side, and although her hearing was unaffected, she could not speak. 
As one friend described it: "Just to think that here was Emma, the greatest orator in America, unable to utter one word." For three months she improved slightly, receiving visitors and, on one occasion, gesturing to her address book to signal that a friend might find friendly contacts during a trip to Mexico. She suffered another stroke on May 8, however, and on May 14 she died in Toronto, aged 70. The US Immigration and Naturalization Service allowed her body to be brought back to the United States. She was buried in German Waldheim Cemetery (now named Forest Home Cemetery) in Forest Park, Illinois, a western suburb of Chicago, near the graves of those executed after the Haymarket affair. The bas-relief on her grave marker was created by sculptor Jo Davidson. Goldman spoke and wrote extensively on a wide variety of issues. While she rejected orthodoxy and fundamentalist thinking, she was an important contributor to several fields of modern political philosophy. She was influenced by many diverse thinkers and writers, including Mikhail Bakunin, Henry David Thoreau, Peter Kropotkin, Ralph Waldo Emerson, Nikolai Chernyshevsky, and Mary Wollstonecraft. Another philosopher who influenced Goldman was Friedrich Nietzsche. In her autobiography, she wrote: "Nietzsche was not a social theorist, but a poet, a rebel, and innovator. His aristocracy was neither of birth nor of purse; it was the spirit. In that respect Nietzsche was an anarchist, and all true anarchists were aristocrats." Anarchism was central to Goldman's view of the world, and she is today considered one of the most important figures in the history of anarchism. First drawn to it during the persecution of anarchists after the 1886 Haymarket affair, she wrote and spoke regularly on behalf of anarchism. In the title essay of her book "Anarchism and Other Essays", she wrote:
Anarchism, then, really stands for the liberation of the human mind from the dominion of religion; the liberation of the human body from the dominion of property; liberation from the shackles and restraint of government. Anarchism stands for a social order based on the free grouping of individuals for the purpose of producing real social wealth; an order that will guarantee to every human being free access to the earth and full enjoyment of the necessities of life, according to individual desires, tastes, and inclinations.
Goldman's anarchism was intensely personal. She believed it was necessary for anarchist thinkers to live their beliefs, demonstrating their convictions with every action and word. "I don't care if a man's theory for tomorrow is correct," she once wrote. "I care if his spirit of today is correct." Anarchism and free association were to her logical responses to the confines of government control and capitalism. "It seems to me that "these" are the new forms of life," she wrote, "and that they will take the place of the old, not by preaching or voting, but by living them." At the same time, she believed that the movement on behalf of human liberty must be staffed by liberated humans. While dancing among fellow anarchists one evening, she was chided by an associate for her carefree demeanor. In her autobiography, Goldman wrote:
I told him to mind his own business, I was tired of having the Cause constantly thrown in my face. I did not believe that a Cause which stood for a beautiful ideal, for anarchism, for release and freedom from conventions and prejudice, should demand denial of life and joy.
I insisted that our Cause could not expect me to behave as a nun and that the movement should not be turned into a cloister. If it meant that, I did not want it. "I want freedom, the right to self-expression, everybody's right to beautiful, radiant things."
In her political youth, Goldman held targeted violence to be a legitimate means of revolutionary struggle, believing at the time that the use of violence, while distasteful, could be justified by the social benefits it might bring. She advocated propaganda of the deed—"attentat", or violence carried out to encourage the masses to revolt. She supported her partner Alexander Berkman's attempt to kill industrialist Henry Clay Frick, and even begged him to allow her to participate. She believed that Frick's actions during the Homestead strike were reprehensible and that his murder would produce a positive result for working people. "Yes," she wrote later in her autobiography, "the end in this case justified the means." While she never gave explicit approval of Leon Czolgosz's assassination of US President William McKinley, she defended his ideals and believed actions like his were a natural consequence of repressive institutions. As she wrote in "The Psychology of Political Violence": "the accumulated forces in our social and economic life, culminating in an act of violence, are similar to the terrors of the atmosphere, manifested in storm and lightning." Her experiences in Russia led her to qualify her earlier belief that revolutionary ends might justify violent means. In the afterword to "My Disillusionment in Russia", she wrote: "There is no greater fallacy than the belief that aims and purposes are one thing, while methods and tactics are another... The means employed become, through individual habit and social practice, part and parcel of the final purpose..." In the same chapter, however, Goldman affirmed that "Revolution is indeed a violent process," and noted that violence was the "tragic inevitability of revolutionary upheavals..." Some misinterpreted her comments on the Bolshevik terror as a rejection of all militant force, but Goldman corrected this in the preface to the first US edition of "My Disillusionment in Russia":
The argument that destruction and terror are part of revolution I do not dispute. I know that in the past every great political and social change necessitated violence...Black slavery might still be a legalized institution in the United States but for the militant spirit of the John Browns. I have never denied that violence is inevitable, nor do I gainsay it now. Yet it is one thing to employ violence in combat, as a means of defense. It is quite another thing to make a principle of terrorism, to institutionalize it, to assign it the most vital place in the social struggle. Such terrorism begets counter-revolution and in turn itself becomes counter-revolutionary.
Goldman saw the militarization of Soviet society not as a result of armed resistance per se, but of the statist vision of the Bolsheviks, writing that "an insignificant minority bent on creating an absolute State is necessarily driven to oppression and terrorism." Goldman believed that the economic system of capitalism was incompatible with human liberty. "The only demand that property recognizes," she wrote in "Anarchism and Other Essays", "is its own gluttonous appetite for greater wealth, because wealth means power; the power to subdue, to crush, to exploit, the power to enslave, to outrage, to degrade."
She also argued that capitalism dehumanized workers, "turning the producer into a mere particle of a machine, with less will and decision than his master of steel and iron." Originally opposed to anything less than complete revolution, Goldman was challenged during one talk by an elderly worker in the front row. In her autobiography, she wrote: He said that he understood my impatience with such small demands as a few hours less a day, or a few dollars more a week... But what were men of his age to do? They were not likely to live to see the ultimate overthrow of the capitalist system. Were they also to forgo the release of perhaps two hours a day from the hated work? That was all they could hope to see realized in their lifetime. Goldman realized that smaller efforts for improvement such as higher wages and shorter hours could be part of a social revolution. Goldman viewed the state as essentially and inevitably a tool of control and domination. As a result, Goldman believed that voting was useless at best and dangerous at worst. Voting, she wrote, provided an illusion of participation while masking the true structures of decision-making. Instead, Goldman advocated targeted resistance in the form of strikes, protests, and "direct action against the invasive, meddlesome authority of our moral code". She maintained an anti-voting position even when many anarcho-syndicalists in 1930s Spain voted for the formation of a liberal republic. Goldman wrote that any power anarchists wielded as a voting bloc should instead be used to strike across the country. She disagreed with the movement for women's suffrage, which demanded the right of women to vote. In her essay "Woman Suffrage", she ridicules the idea that women's involvement would infuse the democratic state with a more just orientation: "As if women have not sold their votes, as if women politicians cannot be bought!" She agreed with the suffragists' assertion that women are equal to men, but disagreed that their participation alone would make the state more just. "To assume, therefore, that she would succeed in purifying something which is not susceptible of purification, is to credit her with supernatural powers." Goldman was also a passionate critic of the prison system, critiquing both the treatment of prisoners and the social causes of crime. Goldman viewed crime as a natural outgrowth of an unjust economic system, and in her essay "Prisons: A Social Crime and Failure", she quoted liberally from the 19th-century authors Fyodor Dostoevsky and Oscar Wilde on prisons, and wrote: Year after year the gates of prison hells return to the world an emaciated, deformed, will-less, shipwrecked crew of humanity, with the Cain mark on their foreheads, their hopes crushed, all their natural inclinations thwarted. With nothing but hunger and inhumanity to greet them, these victims soon sink back into crime as the only possibility of existence. Goldman was a committed war resister, believing that wars were fought by the state on behalf of capitalists. She was particularly opposed to the draft, viewing it as one of the worst of the state's forms of coercion, and was one of the founders of the No-Conscription League—for which she was ultimately arrested (1917), imprisoned and deported (1919). Goldman was routinely surveilled, arrested, and imprisoned for her speech and organizing activities in support of workers and various strikes, access to birth control, and in opposition to World War I. 
As a result, she became active in the early 20th century free speech movement, seeing freedom of expression as a fundamental necessity for achieving social change. Her outspoken championship of her ideals, in the face of persistent arrests, inspired Roger Baldwin, one of the founders of the American Civil Liberties Union. Goldman's and Reitman's experiences in the San Diego free speech fight (1912) were notorious examples of state and capitalist repression of the Industrial Workers of the World's campaign of free speech fights. Although she was hostile to the suffragist goals of first-wave feminism, Goldman advocated passionately for the rights of women, and is today heralded as a founder of anarcha-feminism, which challenges patriarchy as a hierarchy to be resisted alongside state power and class divisions. In 1897, she wrote: "I demand the independence of woman, her right to support herself; to live for herself; to love whomever she pleases, or as many as she pleases. I demand freedom for both sexes, freedom of action, freedom in love and freedom in motherhood." A nurse by training, Goldman was an early advocate for educating women concerning contraception. Like many feminists of her time, she saw abortion as a tragic consequence of social conditions, and birth control as a positive alternative. Goldman was also an advocate of free love, and a strong critic of marriage. She saw early feminists as confined in their scope and bounded by social forces of Puritanism and capitalism. She wrote: "We are in need of unhampered growth out of old traditions and habits. The movement for women's emancipation has so far made but the first step in that direction." Goldman was also an outspoken critic of prejudice against homosexuals. Her belief that social liberation should extend to gay men and lesbians was virtually unheard of at the time, even among anarchists. As German sexologist Magnus Hirschfeld wrote, "she was the first and only woman, indeed the first and only American, to take up the defense of homosexual love before the general public." In numerous speeches and letters, she defended the right of gay men and lesbians to love as they pleased and condemned the fear and stigma associated with homosexuality. As Goldman wrote in a letter to Hirschfeld, "It is a tragedy, I feel, that people of a different sexual type are caught in a world which shows so little understanding for homosexuals and is so crassly indifferent to the various gradations and variations of gender and their great significance in life." A committed atheist, Goldman viewed religion as another instrument of control and domination. Her essay "The Philosophy of Atheism" quoted Bakunin at length on the subject and added: Consciously or unconsciously, most theists see in gods and devils, heaven and hell, reward and punishment, a whip to lash the people into obedience, meekness and contentment... The philosophy of Atheism expresses the expansion and growth of the human mind. The philosophy of theism, if we can call it a philosophy, is static and fixed. In essays like "The Hypocrisy of Puritanism" and a speech entitled "The Failure of Christianity", Goldman made more than a few enemies among religious communities by attacking their moralistic attitudes and efforts to control human behavior. She blamed Christianity for "the perpetuation of a slave society", arguing that it dictated individuals' actions on Earth and offered poor people a false promise of a plentiful future in heaven. 
She was also critical of Zionism, which she saw as another failed experiment in state control. Goldman was well known during her life, described as—among other things—"the most dangerous woman in America". After her death and through the middle part of the 20th century, her fame faded. Scholars and historians of anarchism viewed her as a great speaker and activist, but did not regard her as a philosophical or theoretical thinker on par with, for instance, Kropotkin. In 1970, Dover Press reissued Goldman's biography, "Living My Life", and in 1972, feminist writer Alix Kates Shulman issued a collection of Goldman's writing and speeches, "Red Emma Speaks". These works brought Goldman's life and writings to a larger audience, and she was in particular lionized by the women's movement of the late 20th century. In 1973, Shulman was asked by a printer friend for a quotation by Goldman for use on a T-shirt. She sent him the selection from "Living My Life" about "the right to self-expression, everybody's right to beautiful, radiant things", recounting that she had been admonished "that it did not behoove an agitator to dance". The printer created a statement based on these sentiments that has become one of Goldman's most famous quotations, even though she probably never said or wrote it as such: "If I can't dance I don't want to be in your revolution." Variations of this saying have appeared on thousands of T-shirts, buttons, posters, bumper stickers, coffee mugs, hats, and other items. The women's movement of the 1970s that "rediscovered" Goldman was accompanied by a resurgent anarchist movement, beginning in the late 1960s, which also reinvigorated scholarly attention to earlier anarchists. The growth of feminism also initiated some reevaluation of Goldman's philosophical work, with scholars pointing out the significance of Goldman's contributions to anarchist thought in her time. Goldman's belief in the value of aesthetics, for example, can be seen in the later influences of anarchism and the arts. Similarly, Goldman is now given credit for significantly influencing and broadening the scope of activism on issues of sexual liberty, reproductive rights, and freedom of expression. Goldman has been depicted in numerous works of fiction over the years, including Warren Beatty's 1981 film "Reds", in which she was portrayed by Maureen Stapleton, who won an Academy Award for her performance. Goldman has also been a character in two Broadway musicals, "Ragtime" and "Assassins". Plays depicting Goldman's life include Howard Zinn's play, "Emma"; Martin Duberman's "Mother Earth" (1991); Jessica Litwak's "Emma Goldman: Love, Anarchy, and Other Affairs" (Goldman's relationship with Berkman and her arrest in connection with McKinley's assassination); Lynn Rogoff's "Love Ben, Love Emma" (Goldman's relationship with Reitman); and Carol Bolt's "Red Emma". Ethel Mannin's 1941 novel "Red Rose" is also based on Goldman's life. Goldman has been honored by a number of organizations named in her memory. The Emma Goldman Clinic, a women's health center located in Iowa City, Iowa, selected Goldman as a namesake "in recognition of her challenging spirit." Red Emma's Bookstore Coffeehouse, an infoshop in Baltimore, Maryland, adopted her name out of their belief "in the ideas and ideals that she fought for her entire life: free speech, sexual and racial equality and independence, the right to organize in our jobs and in our own lives, ideas and ideals that we continue to fight for, even today".
Paul Gailiunas and his late wife Helen Hill co-wrote the anarchist song "Emma Goldman", which was performed and released by the band Piggy: The Calypso Orchestra of the Maritimes in 1999. The song was later performed by Gailiunas' new band The Troublemakers and released on their 2004 album "Here Come The Troublemakers". UK punk band Martha's song "Goldman's Detective Agency" reimagines Goldman as a private detective investigating police and political corruption. Goldman was a prolific writer, penning countless pamphlets and articles on a diverse range of subjects. She authored six books, including an autobiography, "Living My Life", and a biography of fellow anarchist Voltairine de Cleyre. Æthelberht of Kent Æthelberht (also Æthelbert, Aethelberht, Aethelbert or Ethelbert; Old English Æðelberht; 550 – 24 February 616) was King of Kent from about 589 until his death. The eighth-century monk Bede, in his "Ecclesiastical History of the English People", lists him as the third king to hold "imperium" over other Anglo-Saxon kingdoms. In the late ninth-century "Anglo-Saxon Chronicle", he is referred to as a bretwalda, or "Britain-ruler". He was the first English king to convert to Christianity. Æthelberht was the son of Eormenric, succeeding him as king, according to the "Chronicle". He married Bertha, the Christian daughter of Charibert, king of the Franks, thus building an alliance with the most powerful state in contemporary Western Europe; the marriage probably took place before he came to the throne. Bertha's influence may have led to Pope Gregory I's decision to send Augustine as a missionary from Rome. Augustine landed on the Isle of Thanet in east Kent in 597. Shortly thereafter, Æthelberht converted to Christianity, churches were established, and wider-scale conversion to Christianity began in the kingdom. He provided the new church with land in Canterbury, thus establishing one of the foundation stones of what ultimately became the Anglican Communion. Æthelberht's law for Kent, the earliest written code in any Germanic language, instituted a complex system of fines; the law code is preserved in the Textus Roffensis. Kent was rich, with strong trade ties to the continent, and Æthelberht may have instituted royal control over trade. Coinage probably began circulating in Kent during his reign for the first time since the Anglo-Saxon invasion. He later came to be regarded as a saint for his role in establishing Christianity among the Anglo-Saxons. His feast day was originally 24 February but was changed to 25 February. In the fifth century, raids on Britain by continental peoples had developed into full-scale migrations. The newcomers are known to have included Angles, Saxons, Jutes and Frisians, and there is evidence of other groups as well. These groups captured territory in the east and south of England, but at about the end of the fifth century, a British victory at the battle of Mount Badon (Mons Badonicus) halted the Anglo-Saxon advance for fifty years. From about 550, however, the British began to lose ground once more, and within twenty-five years it appears that control of almost all of southern England was in the hands of the invaders. Anglo-Saxons probably conquered Kent before Mons Badonicus. There is both documentary and archaeological evidence that Kent was primarily colonised by Jutes, from the southern part of the Jutland peninsula. According to legend, the brothers Hengist and Horsa landed in 449 as mercenaries for a British king, Vortigern.
After a rebellion over pay and Horsa's death in battle, Hengist established the Kingdom of Kent. Some historians now think the underlying story of a rebelling mercenary force may be accurate; most now date the founding of the kingdom of Kent to the middle of the fifth century, which is consistent with the legend. This early date, only a few decades after the departure of the Romans, also suggests that more of Roman civilization may have survived into Anglo-Saxon rule in Kent than in other areas. Overlordship was a central feature of Anglo-Saxon politics, which began before Æthelberht's time; kings were described as overlords as late as the ninth century. The Anglo-Saxon invasion may have involved military coordination of different groups within the invaders, with a leader who had authority over many different groups; Ælle of Sussex may have been such a leader. Once the new states began to form, conflicts among them began. Tribute from dependents could lead to wealth. A weaker state also might ask or pay for the protection of a stronger neighbour against a warlike third state. Sources for this period in Kentish history include the "Ecclesiastical History of the English People", written in 731 by Bede, a Northumbrian monk. Bede was interested primarily in England's Christianization. Since Æthelberht was the first Anglo-Saxon king to convert to Christianity, Bede provides more substantial information about him than about any earlier king. One of Bede's correspondents was Albinus, abbot of the monastery of St. Peter and St. Paul (subsequently renamed St. Augustine's) in Canterbury. The "Anglo-Saxon Chronicle", a collection of annals assembled c. 890 in the kingdom of Wessex, mentions several events in Kent during Æthelberht's reign. Further mention of events in Kent occurs in the late sixth-century history of the Franks by Gregory of Tours. This is the earliest surviving source to mention any Anglo-Saxon kingdom. Some of Pope Gregory the Great's letters concern the mission of St. Augustine to Kent in 597; these letters also mention the state of Kent and its relationships with neighbours. Other sources include regnal lists of the kings of Kent and early charters (land grants by kings to their followers or to the church). Although no originals survive from Æthelberht's reign, later copies exist. A law code from Æthelberht's reign also survives. According to Bede, Æthelberht was descended directly from Hengist. Bede gives the line of descent as follows: "Ethelbert was son of Irminric, son of Octa, and after his grandfather Oeric, surnamed Oisc, the kings of the Kentish folk are commonly known as Oiscings. The father of Oeric was Hengist." An alternative form of this genealogy, found in the "Historia Brittonum" among other places, reverses the position of Octa and Oisc in the lineage. The first of these names that can be placed historically with reasonable confidence is Æthelberht's father, whose name now usually is spelled Eormenric. The only direct written reference to Eormenric is in Kentish genealogies, but Gregory of Tours does mention that Æthelberht's father was the king of Kent, though Gregory gives no date. Eormenric's name provides a hint of connections to the kingdom of the Franks, across the English Channel; the element "Eormen" was rare in names of the Anglo-Saxon aristocracy, but much more common among Frankish nobles.
One other member of Æthelberht's family is known: his sister, Ricole, who is recorded by both Bede and the "Anglo-Saxon Chronicle" as the mother of Sæberht, king of the East Saxons (i.e. Essex). The dates of Æthelberht's birth and accession to the throne of Kent are both matters of debate. Bede, the earliest source to give dates, is thought to have drawn his information from correspondence with Albinus. Bede states that when Æthelberht died in 616 he had reigned for fifty-six years, placing his accession in 560. Bede also says that Æthelberht died twenty-one years after his baptism. Augustine's mission from Rome is known to have arrived in 597, and according to Bede, it was this mission that converted Æthelberht. Hence Bede's dates are inconsistent. The "Anglo-Saxon Chronicle", an important source for early dates, is inconsistent with Bede and also has inconsistencies among different manuscript versions. Putting together the different dates in the "Chronicle" for birth, death and length of reign, it appears that Æthelberht's reign was thought to have been either 560–616 or 565–618 but that the surviving sources have confused the two traditions. It is possible that Æthelberht was converted to Christianity before Augustine's arrival. Æthelberht's wife was a Christian and brought a Frankish bishop with her, to attend her at court, so Æthelberht would have had knowledge of Christianity before the mission reached Kent. It also is possible that Bede had the date of Æthelberht's death wrong; if, in fact, Æthelberht died in 618, this would be consistent with his baptism in 597, which is in accord with the tradition that Augustine converted the king within a year of his arrival. Gregory of Tours, in his "Historia Francorum", writes that Bertha, daughter of Charibert, king of the Franks, married the son of the king of Kent. Bede says that Æthelberht received Bertha "from her parents". If Bede is interpreted literally, the marriage would have had to take place before 567, when Charibert died. The traditions for Æthelberht's reign, then, would imply that Æthelberht married Bertha before either 560 or 565. The extreme length of Æthelberht's reign also has been regarded with skepticism by historians; it has been suggested that he died in the fifty-sixth year of his life, rather than the fifty-sixth year of his reign. This would place the year of his birth approximately at 560, and he would not then have been able to marry until the mid 570s. According to Gregory of Tours, Charibert was king when he married Ingoberg, Bertha's mother, which places that marriage no earlier than 561. It therefore is unlikely that Bertha was married much before about 580. These later dates for Bertha and Æthelberht also solve another possible problem: Æthelberht's daughter, Æthelburh, seems likely to have been Bertha's child, but the earlier dates would have Bertha aged sixty or so at Æthelburh's likely birthdate. Gregory, however, also says that he thinks that Ingoberg was seventy years old in 589; and this would make her about forty when she married Charibert. This is possible, but seems unlikely, especially as Charibert seems to have had a preference for younger women, again according to Gregory's account. This would imply an earlier birth date for Bertha.
On the other hand, Gregory refers to Æthelberht at the time of his marriage to Bertha simply as "a man of Kent", and in the 589 passage concerning Ingoberg's death, which was written in about 590 or 591, he refers to Æthelberht as "the son of the king of Kent". If this does not simply reflect Gregory's ignorance of Kentish affairs, which seems unlikely given the close ties between Kent and the Franks, then some assert that Æthelberht's reign cannot have begun before 589. While all of the contradictions above cannot be reconciled, the most probable dates that may be drawn from available data place Æthelberht's birth at approximately 560 and, perhaps, his marriage to Bertha at 580. His reign is most likely to have begun in 589 or 590. The later history of Kent shows clear evidence of a system of joint kingship, with the kingdom being divided into east Kent and west Kent, although it appears that there generally was a dominant king. This evidence is less clear for the earlier period, but there are early charters, known to be forged, which nevertheless imply that Æthelberht ruled as joint king with his son, Eadbald. It may be that Æthelberht was king of east Kent and Eadbald became king of west Kent; the east Kent king seems generally to have been the dominant ruler later in Kentish history. Whether or not Eadbald became a joint king with Æthelberht, there is no question that Æthelberht had authority throughout the kingdom. The division into two kingdoms is most likely to date back to the sixth century; east Kent may have conquered west Kent and preserved the institutions of kingship as a subkingdom. This was a common pattern in Anglo-Saxon England, as the more powerful kingdoms absorbed their weaker neighbours. An unusual feature of the Kentish system was that only sons of kings appeared to be legitimate claimants to the throne, although this did not eliminate all strife over the succession. The main towns of the two kingdoms were Rochester, for west Kent, and Canterbury, for east Kent. Bede does not state that Æthelberht had a palace in Canterbury, but he does refer to Canterbury as Æthelberht's "metropolis", and it is clear that it is Æthelberht's seat. There are many indications of close relations between Kent and the Franks. Æthelberht's marriage to Bertha certainly connected the two courts, although not as equals: the Franks would have thought of Æthelberht as an under-king. There is no record that Æthelberht ever accepted a continental king as his overlord and, as a result, historians are divided on the true nature of the relationship. Evidence for an explicit Frankish overlordship of Kent comes from a letter written by Pope Gregory the Great to Theuderic, king of Orléans, and Theudebert, king of Metz. The letter concerned Augustine's mission to Kent in 597, and in it Gregory says that he believes "that you wish your subjects in every respect to be converted to that faith in which you, their kings and lords, stand". It may be that this is a papal compliment, rather than a description of the relationship between the kingdoms. It also has been suggested that Liudhard, Bertha's chaplain, was intended as a representative of the Frankish church in Kent, which also could be interpreted as evidence of overlordship. A possible reason for the willingness of the Franks to connect themselves with the Kentish court is the fact that a Frankish king, Chilperic I, is recorded as having conquered a people known as the Euthiones during the mid-sixth century. 
If, as seems likely from the name, these people were the continental remnants of the Jutish invaders of Kent, then it may be that the marriage was intended as a unifying political move, reconnecting different branches of the same people. Another perspective on the marriage may be gained by considering that it is likely that Æthelberht was not yet king at the time he and Bertha were wed: it may be that Frankish support for him, acquired via the marriage, was instrumental in gaining the throne for him. Regardless of the political relationship between Æthelberht and the Franks, there is abundant evidence of strong connections across the English Channel. There was a luxury trade between Kent and the Franks, and burial artefacts found include clothing, drink, and weapons that reflect Frankish cultural influence. The Kentish burials have a greater range of imported goods than those of the neighbouring Anglo-Saxon regions, which is not surprising given Kent's easier access to trade across the English Channel. In addition, the grave goods are both richer and more numerous in Kentish graves, implying that material wealth was derived from that trade. Frankish influences also may be detected in the social and agrarian organization of Kent. Other cultural influences may be seen in the burials as well, so it is not necessary to presume that there was direct settlement by the Franks in Kent. In his "Ecclesiastical History", Bede includes his list of seven kings who held "imperium" over the other kingdoms south of the Humber. The usual translation for "imperium" is "overlordship". Bede names Æthelberht as the third on the list, after Ælle of Sussex and Ceawlin of Wessex. The anonymous annalist who composed one of the versions of the "Anglo-Saxon Chronicle" repeated Bede's list of seven kings in a famous entry under the year 827, with one additional king, Egbert of Wessex. The "Chronicle" also records that these kings held the title "bretwalda", or "Britain-ruler". The exact meaning of "bretwalda" has been the subject of much debate; it has been described as a term "of encomiastic poetry", but there also is evidence that it implied a definite role of military leadership. The prior "bretwalda", Ceawlin, is recorded by the "Anglo-Saxon Chronicle" as having fought Æthelberht in 568. The entry states that Æthelberht lost the battle and was driven back to Kent. The dating of the entries concerning the West Saxons in this section of the "Chronicle" is thought to be unreliable and a recent analysis suggests that Ceawlin's reign is more likely to have been approximately 581–588, rather than the dates of 560–592 that are given in the "Chronicle". The battle was at "Wibbandun", which may be translated as Wibba's Mount; it is not known where this was. At some point Ceawlin ceased to hold the title of "bretwalda", perhaps after a battle at Stoke Lyne, in Oxfordshire, which the "Chronicle" dates to 584, some eight years before he was deposed in 592 (again using the "Chronicle's" unreliable dating). Æthelberht certainly was a dominant ruler by 601, when Gregory the Great wrote to him; in the letter, Gregory urges Æthelberht to spread Christianity among those kings and peoples subject to him, implying some level of overlordship. If the battle of Wibbandun was fought c. 590, as has been suggested, then Æthelberht must have gained his position as overlord at some time in the 590s.
This dating for Wibbandun is slightly inconsistent with the proposed dates of 581–588 for Ceawlin's reign, but those dates are not thought to be precise, merely the most plausible given the available data. In addition to the evidence of the "Chronicle" that Æthelberht was accorded the title of bretwalda, there is evidence of his domination in several of the southern kingdoms of the Heptarchy. In Essex, Æthelberht appears to have been in a position to exercise authority shortly after 604, when his intervention helped in the conversion of King Sæberht of Essex, his nephew, to Christianity. It was Æthelberht, and not Sæberht, who built and endowed St. Paul's in London, where St Paul's Cathedral now stands. Further evidence is provided by Bede, who explicitly describes Æthelberht as Sæberht's overlord. Bede describes Æthelberht's relationship with Rædwald, king of East Anglia, in a passage that is not completely clear in meaning. It seems to imply that Rædwald retained ducatus, or military command of his people, even while Æthelberht held imperium. This implies that being a bretwalda usually included holding the military command of other kingdoms and also that it was more than that, since Æthelberht is bretwalda despite Rædwald's control of his own troops. Rædwald was converted to Christianity while in Kent but did not abandon his pagan beliefs; this, together with the fact that he retained military independence, implies that Æthelberht's overlordship of East Anglia was much weaker than his influence with the East Saxons. An alternative interpretation, however, is that the passage in Bede should be translated as "Rædwald, king of the East Angles, who while Æthelberht lived, even conceded to him the military leadership of his people"; if this is Bede's intent, then East Anglia firmly was under Æthelberht's overlordship. There is no evidence that Æthelberht's influence in other kingdoms was enough for him to convert any other kings to Christianity, although this is partly due to the lack of sources—nothing is known of Sussex's history, for example, for almost all of the seventh and eighth centuries. Æthelberht was able to arrange a meeting in 602 in the Severn valley, on the northwestern borders of Wessex, however, and this may be an indication of the extent of his influence in the west. No evidence survives showing Kentish domination of Mercia, but it is known that Mercia was independent of Northumbria, so it is quite plausible that it was under Kentish overlordship. The native Britons had converted to Christianity under Roman rule. The Anglo-Saxon invasions separated the British church from European Christianity for centuries, so the church in Rome had no presence or authority in Britain, and in fact, Rome knew so little about the British church that it was unaware of any schism in customs. However, Æthelberht would have known something about the Roman church from his Frankish wife, Bertha, who had brought a bishop, Liudhard, with her across the Channel, and for whom Æthelberht built a chapel, St Martin's. In 596, Pope Gregory the Great sent Augustine, prior of the monastery of St. Andrew in Rome, to England as a missionary, and in 597, a group of nearly forty monks, led by Augustine, landed on the Isle of Thanet in Kent. According to Bede, Æthelberht was sufficiently distrustful of the newcomers to insist on meeting them under the open sky, to prevent them from performing sorcery. The monks impressed Æthelberht, but he was not converted immediately.
He agreed to allow the mission to settle in Canterbury and permitted them to preach. It is not known when Æthelberht became a Christian. It is possible, despite Bede's account, that he already was a Christian before Augustine's mission arrived. It is likely that Liudhard and Bertha pressed Æthelberht to consider becoming a Christian before the arrival of the mission, and it is also likely that a condition of Æthelberht's marriage to Bertha was that Æthelberht would consider conversion. Conversion via the influence of the Frankish court would have been seen as an explicit recognition of Frankish overlordship, however, so it is possible that Æthelberht's delay of his conversion until it could be accomplished via Roman influence might have been an assertion of independence from Frankish control. It also has been argued that Augustine's hesitation—he turned back to Rome, asking to be released from the mission—is an indication that Æthelberht was a pagan at the time Augustine was sent. At the latest, Æthelberht must have converted before 601, since that year Gregory wrote to him as a Christian king. An old tradition records that Æthelberht converted on 1 June, in the summer of the year that Augustine arrived. Through Æthelberht's influence, Sæberht, king of Essex, also was converted, but there were limits to the effectiveness of the mission. The entire Kentish court did not convert: Eadbald, Æthelberht's son and heir, was a pagan at his accession. Rædwald, king of East Anglia, was only partly converted (apparently while at Æthelberht's court) and retained a pagan shrine next to the new Christian altar. Augustine also was unsuccessful in gaining the allegiance of the British clergy. Some time after the arrival of Augustine's mission, perhaps in 602 or 603, Æthelberht issued a set of laws, in ninety sections. These laws are by far the earliest surviving code composed in any of the Germanic countries, and they were almost certainly among the first documents written down in Anglo-Saxon, as literacy would have arrived in England with Augustine's mission. The only surviving early manuscript, the "Textus Roffensis", dates from the twelfth century, and it now resides in the Medway Studies Centre in Strood, Kent. Æthelberht's code makes reference to the church in the very first item, which enumerates the compensation required for the property of a bishop, a deacon, a priest, and so on; but overall, the laws seem remarkably uninfluenced by Christian principles. Bede asserted that they were composed "after the Roman manner", but there is little discernible Roman influence either. In subject matter, the laws have been compared to the Lex Salica of the Franks, but it is not thought that Æthelberht based his new code on any specific previous model. The laws are concerned with setting and enforcing the penalties for transgressions at all levels of society; the severity of the fine depended on the social rank of the victim. The king had a financial interest in enforcement, for part of the fines would come to him in many cases, but the king also was responsible for law and order, and avoiding blood feuds by enforcing the rules on compensation for injury was part of the way the king maintained control. Æthelberht's laws are mentioned by Alfred the Great, who compiled his own laws, making use of the prior codes created by Æthelberht, as well as those of Offa of Mercia and Ine of Wessex.
One of Æthelberht's laws seems to preserve a trace of a very old custom: the third item in the code states that "If the king is drinking at a man's home, and anyone commits any evil deed there, he is to pay twofold compensation." This probably refers to the ancient custom of a king traveling the country, being hosted, and being provided for by his subjects wherever he went. The king's servants retained these rights for centuries after Æthelberht's time. Items 77–81 in the code have been interpreted as a description of a woman's financial rights after a divorce or legal separation. These clauses define how much of the household goods a woman could keep in different circumstances, depending, for example, on whether she kept custody of the children. It has recently been suggested, however, that it would be more correct to interpret these clauses as referring to women who are widowed, rather than divorced. There is little documentary evidence about the nature of trade in Æthelberht's Kent. It is known that the kings of Kent had established royal control of trade by the late seventh century, but it is not known how early this control began. There is archaeological evidence suggesting that the royal influence predates any of the written sources. It has been suggested that one of Æthelberht's achievements was to take control of trade away from the aristocracy and to make it a royal monopoly. The continental trade provided Kent access to luxury goods which gave it an advantage in trading with the other Anglo-Saxon nations, and the revenue from trade was important in itself. Kentish manufacture before 600 included glass beakers and jewellery. Kentish jewellers were highly skilled, and before the end of the sixth century they gained access to gold. Goods from Kent are found in cemeteries across the Channel and as far away as the mouth of the Loire. It is not known what Kent traded for all of this wealth, although it seems likely that there was a flourishing slave trade. It may well be that this wealth was the foundation of Æthelberht's strength, although his overlordship and the associated right to demand tribute would have brought wealth in its turn. It may have been during Æthelberht's reign that the first coins were minted in England since the departure of the Romans: none bear his name, but it is thought likely that the first coins predate the end of the sixth century. These early coins were gold, and probably were the shillings ("scillingas" in Old English) that are mentioned in Æthelberht's laws. The coins are also known to numismatists as "thrymsas". Æthelberht died on 24 February 616 and was succeeded by his son, Eadbald, who was not a Christian—Bede says he had been converted but went back to his pagan faith, although he ultimately did become a Christian king. Eadbald outraged the church by marrying his stepmother, which was contrary to Church law, and by refusing to accept baptism. Sæberht of the East Saxons also died at approximately this time, and he was succeeded by his three sons, none of whom were Christian. A subsequent revolt against Christianity and the expulsion of the missionaries from Kent may have been a reaction to Kentish overlordship after Æthelberht's death as much as a pagan opposition to Christianity. In addition to Eadbald, it is possible that Æthelberht had another son, Æthelwald.
The evidence for this is a papal letter to Justus, archbishop of Canterbury from 619 to 625, that refers to a king named Aduluald, who is apparently different from Audubald, which refers to Eadbald. There is no agreement among modern scholars on how to interpret this: "Aduluald" might be intended as a representation of "Æthelwald", and hence an indication of another king, perhaps a sub-king of west Kent; or it may be merely a scribal error which should be read as referring to Eadbald. Æthelberht was later regarded as a saint for his role in establishing Christianity among the Anglo-Saxons. His feast day was originally 24 February but was changed to 25 February. In the 2004 edition of the Roman Martyrology, he is listed under his date of death, 24 February, with the citation: 'King of Kent, converted by St Augustine, bishop, the first leader of the English people to do so'. The Roman Catholic Archdiocese of Southwark, which contains Kent, commemorates him on 25 February. He is honoured together with his wife Bertha on the liturgical calendar of The Episcopal Church on 27 May. European Commission The European Commission (EC) is an institution of the European Union, responsible for proposing legislation, implementing decisions, upholding the EU treaties and managing the day-to-day business of the EU. Commissioners swear an oath at the European Court of Justice in Luxembourg, pledging to respect the treaties and to be completely independent in carrying out their duties during their mandate. The Commission operates as a cabinet government, with 28 members of the Commission (informally known as "commissioners"). There is one member per member state, but members are bound by their oath of office to represent the general interest of the EU as a whole rather than their home state. One of the 28 is the Commission President (currently Jean-Claude Juncker) proposed by the European Council and elected by the European Parliament. The Council of the European Union then nominates the other 27 members of the Commission in agreement with the nominated President, and the 28 members as a single body are then subject to a vote of approval by the European Parliament. The current Commission is the Juncker Commission, which took office in late 2014, following the European Parliament elections in May of the same year. The term "Commission" is variously used, either in the narrow sense of the 28-member "College of Commissioners" (or "College") or to also include the administrative body of about 32,000 European civil servants who are split into departments called directorates-general and services. The procedural languages of the Commission are English, French and German. The Members of the Commission and their "cabinets" (immediate teams) are based in the Berlaymont building in Brussels. The European Commission derives from one of the five key institutions created in the supranational European Community system, following the proposal of Robert Schuman, French Foreign Minister, on 9 May 1950. Originating in 1951 as the High Authority in the European Coal and Steel Community, the Commission has undergone numerous changes in power and composition under various presidents, involving three Communities. The first Commission originated in 1951 as the nine-member "High Authority" under President Jean Monnet (see Monnet Authority). The High Authority was the supranational administrative executive of the new European Coal and Steel Community (ECSC).
It took office first on 10 August 1952 in Luxembourg. In 1958, the Treaties of Rome established two new communities alongside the ECSC: the European Economic Community (EEC) and the European Atomic Energy Community (Euratom). However, their executives were called "Commissions" rather than "High Authorities". The reason for the change in name was the new relationship between the executives and the Council. Some states, such as France, expressed reservations over the power of the High Authority, and wished to limit it by giving more power to the Council rather than the new executives. Louis Armand led the first Commission of Euratom. Walter Hallstein led the first Commission of the EEC, holding the first formal meeting on 16 January 1958 at the Château of Val-Duchesse. It achieved agreement on a contentious cereal price accord, as well as making a positive impression upon third countries when it made its international debut at the Kennedy Round of General Agreement on Tariffs and Trade (GATT) negotiations. Hallstein began the consolidation of European law and started to have a notable impact on national legislation. Little heed was taken of his administration at first but, with help from the European Court of Justice, his Commission stamped its authority solidly enough to allow future Commissions to be taken more seriously. In 1965, however, accumulating differences between the French government of Charles de Gaulle and the other member states on various subjects (British entry, direct elections to Parliament, the Fouchet Plan and the budget) triggered the "empty chair" crisis, ostensibly over proposals for the Common Agricultural Policy. Although the institutional crisis was solved the following year, it cost Etienne Hirsch his presidency of Euratom and later Walter Hallstein the EEC presidency, despite his otherwise being viewed as the most 'dynamic' leader until Jacques Delors. The three bodies, collectively named the European Executives, co-existed until 1 July 1967 when, under the Merger Treaty, they were combined into a single administration under President Jean Rey. Owing to the merger, the Rey Commission saw a temporary increase to 14 members—although subsequent Commissions were reduced back to nine, following the formula of one member for small states and two for larger states. The Rey Commission completed the Community's customs union in 1968, and campaigned for a more powerful, elected, European Parliament. Despite Rey being the first President of the combined communities, Hallstein is seen as the first President of the modern Commission. The Malfatti and Mansholt Commissions followed with work on monetary co-operation and the first enlargement to the north in 1973. With that enlargement, the Commission's membership increased to thirteen under the Ortoli Commission (the United Kingdom as a large member was granted two Commissioners), which dealt with the enlarged community during economic and international instability at that time. The external representation of the Community took a step forward when President Roy Jenkins, recruited to the presidency in January 1977 from his role as Home Secretary of the United Kingdom's Labour government, became the first President to attend a G8 summit on behalf of the Community. Following the Jenkins Commission, Gaston Thorn's Commission oversaw the Community's enlargement to the south, in addition to beginning work on the Single European Act.
The Commission headed by Jacques Delors was seen as giving the Community a sense of direction and dynamism. Delors and his team are also considered the "founding fathers of the euro". The "International Herald Tribune" noted the work of Delors at the end of his second term in 1992: "Mr. Delors rescued the European Community from the doldrums. He arrived when Europessimism was at its worst. Although he was a little-known former French finance minister, he breathed life and hope into the EC and into the dispirited Brussels Commission. In his first term, from 1985 to 1988, he rallied Europe to the call of the single market, and when appointed to a second term he began urging Europeans toward the far more ambitious goals of economic, monetary and political union". The successor to Delors was Jacques Santer. As a result of a fraud and corruption scandal, the entire Santer Commission was forced by the Parliament to resign in 1999; a central role was played by Édith Cresson. These frauds were revealed by an internal auditor, Paul van Buitenen. That was the first time a Commission had been forced to resign "en masse", and represented a shift of power towards the Parliament. However, the Santer Commission did carry out work on the Treaty of Amsterdam and the euro. In response to the scandal, the European Anti-Fraud Office (OLAF) was created. Following Santer, Romano Prodi took office. The Amsterdam Treaty had increased the Commission's powers and Prodi was dubbed by the press as something akin to a Prime Minister. Powers were strengthened again; the Treaty of Nice, signed in 2001, gave the Presidents more power over the composition of their Commissions. José Manuel Barroso became President in 2004: the Parliament once again asserted itself in objecting to the proposed membership of the Barroso Commission. Owing to this opposition, Barroso was forced to reshuffle his team before taking office. The Barroso Commission was also the first full Commission since the enlargement in 2004 to 25 members; the number of Commissioners at the end of the Prodi Commission had reached 30. As a result of the increase in the number of states, the Amsterdam Treaty triggered a reduction in the number of Commissioners to one per state, rather than two for the larger states. Allegations of fraud and corruption were again raised in 2004 by former chief auditor Jules Muis. A Commission officer, Guido Strack, reported alleged fraud and abuses in his department in the years 2002–2004 to OLAF, and was fired as a result. In 2008, Paul van Buitenen (the former auditor known from the Santer Commission scandal) accused the European Anti-Fraud Office (OLAF) of a lack of independence and effectiveness. Barroso's first Commission term expired on 31 October 2009. Under the Treaty of Nice, the first Commission to be appointed after the number of member states reached 27 would have to be reduced to "less than the number of Member States". The exact number of Commissioners was to be decided by a unanimous vote of the European Council, and membership would rotate equally between member states. Following the accession of Romania and Bulgaria in January 2007, this clause took effect for the next Commission. The Treaty of Lisbon, which came into force on 1 December 2009, mandated a reduction of the number of commissioners to two-thirds of member-states from 2014 unless the Council decided otherwise. Membership would rotate equally and no member state would have more than one Commissioner.
However, the treaty was rejected by voters in Ireland in 2008, with one main concern being the loss of their Commissioner. Hence a guarantee given for a rerun of the vote was that the Council would use its power to amend the number of Commissioners upwards. However, according to the treaties it still has to be fewer than the total number of members, thus it was proposed that the member state that does not get a Commissioner would get the post of High Representative – the so-called 26+1 formula. This guarantee (which may find its way into the next treaty amendment, probably in an accession treaty) contributed to the Irish approving the treaty in a second referendum in 2009. Lisbon also combined the posts of European Commissioner for External Relations with the Council's High Representative for the Common Foreign and Security Policy. This post, also a Vice-President of the Commission, would chair the Council of the European Union's foreign affairs meetings as well as the Commission's external relations duties. The treaty further provides that the most recent European elections should be "taken into account" when appointing the Commission, although the President is still proposed by the European Council; the European Parliament "elects" the Commission, rather than "approves" it as under the Treaty of Nice. In 2014, Jean-Claude Juncker became President of the European Commission. Juncker appointed his previous campaign director and head of the transition team, Martin Selmayr, as his chief of cabinet. During the Juncker presidency Selmayr has been described as "the most powerful EU chief of staff ever." The Commission was set up from the start to act as an independent supranational authority separate from governments; it has been described as "the only body paid to think European". The members are proposed by their member state governments, one from each. However, they are bound to act independently – free from other influences such as those governments which appointed them. This is in contrast to the Council of the European Union, which represents governments, the European Parliament, which represents citizens, the Economic and Social Committee, which represents organised civil society, and the Committee of the Regions, which represents local and regional authorities. The Commission has several responsibilities: to develop medium-term strategies; to draft legislation and arbitrate in the legislative process; to represent the EU in trade negotiations; to make rules and regulations, for example in competition policy; to draw up the budget of the European Union; and to scrutinise the implementation of the treaties and legislation. The rules of procedure of the European Commission set out the Commission's operation and organisation. Before the Treaty of Lisbon came into force, the executive power of the EU was held by the Council: it conferred on the Commission such powers for it to exercise. However, the Council was allowed to withdraw these powers, exercise them directly, or impose conditions on their use. This aspect has been changed by the Treaty of Lisbon, after which the Commission exercises its powers just by virtue of the treaties. The Commission's powers are more restricted than those of most national executives, in part due to its lack of power over areas like foreign policy – that power is held by the European Council, which some analysts have described as another executive.
Considering that under the Treaty of Lisbon, the European Council has become a formal institution with the power of appointing the Commission, it could be said that the two bodies hold the executive power of the EU (the European Council also holds individual national executive powers). However, it is the Commission that currently holds executive powers over the European Union. The governmental powers of the Commission have been such that some, including former Belgian Prime Minister Guy Verhofstadt, have suggested changing its name to the "European Government", calling the present name of the Commission "ridiculous". The Commission differs from the other institutions in that it alone has legislative initiative in the EU. Only the Commission can make formal proposals for legislation: they cannot originate in the legislative branches. Under the Treaty of Lisbon, no legislative act is allowed in the field of the Common Foreign and Security Policy. In the other fields the Council and Parliament are able to request legislation; in most cases the Commission initiates the basis of these proposals. This monopoly is designed to ensure coordinated and coherent drafting of EU law. This monopoly has been challenged by some who claim the Parliament should also have the right, with most national parliaments holding the right in some respects. However, the Council and Parliament may request the Commission to draft legislation, though the Commission does have the power to refuse to do so as it did in 2008 over transnational collective conventions. Under the Lisbon Treaty, EU citizens are also able to request the Commission to legislate in an area via a petition carrying one million signatures, but this is not binding. The Commission's powers in proposing law have usually centred on economic regulation. It has put forward a large number of regulations based on a "precautionary principle". This means that pre-emptive regulation takes place if there is a credible hazard to the environment or human health: for example on tackling climate change and restricting genetically modified organisms. This is opposed to weighting regulations for their effect on the economy. Thus, the Commission often proposes stricter legislation than other countries. Owing to the size of the European market, this has made EU legislation an important influence in the global market. Recently the Commission has moved into creating European criminal law. In 2006, a toxic waste spill off the coast of Côte d'Ivoire, from a European ship, prompted the Commission to look into legislation against toxic waste. Some EU states at that time did not even have a crime against shipping toxic waste; this led the Commissioners Franco Frattini and Stavros Dimas to put forward the idea of "ecological crimes". Their right to propose criminal law was challenged in the European Court of Justice but upheld. As of 2007, the only other criminal law proposals which have been brought forward are on the intellectual property rights directive, and on an amendment to the 2002 counter-terrorism framework decision, outlawing terrorism‑related incitement, recruitment (especially via the internet) and training. Once legislation is passed by the Council and Parliament, it is the Commission's responsibility to ensure it is implemented. It does this through the member states or through its agencies. 
In adopting the necessary technical measures, the Commission is assisted by committees made up of representatives of member states and of the public and private lobbies (a process known in jargon as "comitology"). Furthermore, the Commission is responsible for the implementation of the EU budget, ensuring, along with the Court of Auditors, that EU funds are correctly spent. In particular the Commission has a duty to ensure the treaties and law are upheld, potentially by taking member states or other institutions to the Court of Justice in a dispute. In this role it is known informally as the "guardian of the treaties". Finally, the Commission provides some external representation for the Union, alongside the member states and the Common Foreign and Security Policy, representing the Union in bodies such as the World Trade Organisation. It is also usual for the President to attend meetings of the G8. The Commission is composed of a college of "Commissioners" of 28 members, including the President and vice-presidents. Even though each member is appointed by a national government, one per state, they do not represent their state in the Commission. In practice, however, they do occasionally press for their national interest. Once proposed, the President delegates portfolios among each of the members. The power of a Commissioner largely depends upon their portfolio, and can vary over time. For example, the Education Commissioner has been growing in importance, in line with the rise in the importance of education and culture in European policy-making. Another example is the Competition Commissioner, who holds a highly visible position with global reach. Before the Commission can assume office, the college as a whole must be approved by the Parliament. Commissioners are supported by their personal cabinet who give them political guidance, while the Civil Service (the DGs, see below) deal with technical preparation. The President of the Commission is first proposed by the European Council taking into account the latest Parliamentary elections; that candidate can then be elected by the European Parliament or not. If not, the European Council shall propose another candidate within one month. The candidate has often been a leading national politician, but this is not a requirement. In 2009 (as with 2004), the Lisbon Treaty was not in force and Barroso was not "elected" by the Parliament, but rather nominated by the European Council; in any case, the centre-right parties of the EU pressured for a candidate from their own ranks. In the end, a centre-right candidate was chosen: José Manuel Barroso of the European People's Party. There are further criteria influencing the choice of the candidate, including: which area of Europe the candidate comes from, favoured as Southern Europe in 2004; the candidate's political influence, credible yet not overpowering members; language, proficiency in French considered necessary by France; and degree of integration, their state being a member of both the eurozone and the Schengen Agreement. In 2004, this system produced a number of candidates and was thus criticised by some MEPs: following the drawn-out selection, the ALDE group leader Graham Watson described the procedure as a "Justus Lipsius carpet market" producing only the "lowest common denominator"; while Green-EFA co-leader Daniel Cohn-Bendit asked Barroso after his first speech "If you are the best candidate, why were you not the first?" 
Following the election of the President, and the appointment of the High Representative by the European Council, each Commissioner is nominated by their member state (except for those states who provided the President and High Representative) in consultation with the Commission President, although he holds no hard power to force a change in candidate. However, the more capable the candidate is, the more likely the Commission President will assign them a powerful portfolio, the distribution of which is entirely at his discretion. The President's team is then subject to hearings at the European Parliament, which will question them and then vote on their suitability as a whole. If members of the team are found to be unsuitable, the President must then reshuffle the team or request a new candidate from the member state, or risk the whole Commission being voted down. As Parliament cannot vote against individual Commissioners, there is usually a compromise whereby the worst candidates are removed but minor objections are put aside so the Commission can take office. Once the team is approved by parliament, it is formally put into office by the European Council. Following their appointment, the President appoints a number of Vice-Presidents (the High Representative is mandated to be one of them) from among the commissioners. For the most part, the position grants little extra power to Vice-Presidents, except for the first Vice-President, who stands in for the President when he is away. The European Parliament can dissolve the Commission as a whole following a vote of no-confidence, but only the President can request the resignation of an individual Commissioner. However, individual Commissioners, by request of the Council or Commission, can be compelled to retire on account of a breach of obligation(s) and if so ruled by the European Court of Justice (Art. 245 and 247, Treaty on the Functioning of the European Union). The Barroso Commission took office in late 2004 after being delayed by objections from the Parliament, which forced a reshuffle. In 2007 the Commission increased from 25 to 27 members with the accession of Romania and Bulgaria, each of which appointed its own Commissioner. With the increasing size of the Commission, Barroso adopted a more presidential style of control over the college, which earned him some criticism. However, under Barroso, the Commission began to lose ground to the larger member states as countries such as France, the UK and Germany sought to sideline its role. This has increased with the creation of the President of the European Council under the Treaty of Lisbon. There has also been a greater degree of politicisation within the Commission. The Commission is divided into departments known as Directorates-General (DGs) that can be likened to departments or ministries. Each covers a specific policy area, such as agriculture or justice and citizens' rights, or an internal service, such as human resources or translation, and is headed by a director-general who is responsible to a commissioner. A commissioner's portfolio can be supported by numerous DGs; they prepare proposals for the commissioner, and if approved by a majority of commissioners the proposals go forward to the Parliament and Council for consideration. The Commission's civil service is headed by a Secretary General, currently Alexander Italianer. The rules of procedure of the European Commission set out the Commission's operation and organisation. 
There has been criticism from a number of people that the highly fragmented DG structure wastes a considerable amount of time in turf wars as the different departments and Commissioners compete with each other. Furthermore, the DGs can exercise considerable control over a Commissioner, who has little time to learn to assert control over their staff. According to figures published by the Commission, 23,803 persons were employed by the Commission as officials and temporary agents in September 2012. In addition to these, 9,230 "external staff" (e.g. contractual agents, detached national experts, young experts and trainees) were employed. The single largest DG is the Directorate-General for Translation, with a 2,309-strong staff, while the largest group by nationality is Belgian (18.7%), probably due to a majority (17,664) of staff being based in the country. Communication with the press is handled by the Directorate-General Communication. The Commission's chief spokesperson is Pia Ahrenkilde Hansen, who takes the midday press briefings, commonly known as the "Midday Presser". It takes place every weekday in the Commission's press room at the Berlaymont, where journalists may ask questions of Commission officials on any topic and legitimately expect to get an "on the record" answer for live TV. Such a situation is unique in the world. It has been noted by one researcher that the press releases issued by the Commission are uniquely political. A release often goes through several stages of drafting, which emphasises the role of the Commission and is used "for justifying the EU and the commission", increasing its length and complexity. Where multiple departments are involved, a press release can also be a source of competition between areas of the Commission and between Commissioners themselves. This also leads to an unusually high number of press releases, 1,907 for 2006, and is seen as a unique product of the EU's political set-up. The number of Commission press releases shows a decreasing trend: 1,768 press releases were published in 2010 and 1,589 in 2011. There is a larger press corps in Brussels than in Washington, D.C.; in 2007, media outlets in every Union member-state had a Brussels correspondent. However, following the global downturn, the press corps in Brussels shrank by a third by 2010. There is one journalist covering EU news for Latvia and none for Lithuania. Although there has been a worldwide cut in journalists, the considerable volume of press releases and operations such as Europe by Satellite and EuroparlTV lead many news organisations to believe they can cover the EU from these sources and news agencies. In the face of high-level criticism, the Commission shut down Presseurop on 20 December 2013. While the commission is the executive branch, the candidates are chosen individually by the 28 national governments, which means it is not possible for a commission member or president to be removed by a direct election. Rather, the legitimacy of the commission is mainly drawn from the vote of approval that is required from the European Parliament, along with its power to dismiss the body, which, in turn, raises concerns about the relatively low turnout (less than 50%) in elections for the European Parliament since 1999. 
While that figure may be higher than that of some national elections, including the off-year elections of the United States Congress, the fact that there are no elections for the position of commission president calls the position's legitimacy into question in the eyes of some. The fact that the commission can directly decide (albeit with oversight from specially formed 'comitology committees') on the shape and character of implementing legislation further raises concerns about democratic legitimacy. Even though democratic structures and methods are developing, they are not mirrored by the creation of a European civil society. The Treaty of Lisbon may go some way to resolving the deficit by creating greater democratic controls on the Commission, including enshrining the procedure of linking elections to the selection of the Commission president. An alternative viewpoint is that electoral pressures undermine the Commission's role as an independent regulator, considering it akin to institutions such as independent central banks, which deal with technical areas of policy. In addition, some defenders of the Commission point out that legislation must be approved by the Council in all areas (the ministers of member states) and the European Parliament in some areas before it can be adopted; thus the amount of legislation which is adopted in any one country without the approval of its government is limited. In 2009 the European ombudsman published statistics of citizens' complaints against EU institutions, with most of them filed against the Commission (66%) and concerning lack of transparency (36%). In 2010 the Commission was sued for blocking access to documents on EU biofuel policy. This happened after media accused the Commission of blocking scientific evidence against biofuel subsidies. Lack of transparency, unclear lobbyist relations, conflicts of interest and excessive spending by the Commission were highlighted in a number of reports by internal and independent auditing organisations. It has also been criticised on IT-related issues, particularly with regard to Microsoft. The European Commission has an Action Plan to enhance preparedness against chemical, biological, radiological and nuclear (CBRN) security risks as part of its anti-terrorism package released in October 2017. In recent times Europe has seen an increased threat level of CBRN attacks. As such, the European Commission's preparedness plan is important, said Steven Neville Chatfield, a director for the Centre for Emergency Preparedness and Response in the United Kingdom's Health Protection Agency. For the first time, the European Commission proposed that medical preparedness for CBRN attack threats be a high priority. "The European Commission's (EC) Action Plan to enhance preparedness against CBRN security risks is part of its anti-terrorism package released in October 2017, a strategy aimed at better protecting the more than 511 million citizens across the 28 member states of the European Union (EU)." The Commission is primarily based in Brussels, with the President's office and the Commission's meeting room on the 13th floor of the Berlaymont building. The Commission also operates out of numerous other buildings in Brussels and Luxembourg. When the Parliament is meeting in Strasbourg, the Commissioners also meet there in the Winston Churchill building to attend the Parliament's debates. Edward VI of England Edward VI (12 October 1537 – 6 July 1553) was King of England and Ireland from 28 January 1547 until his death. 
He was crowned on 20 February at the age of nine. Edward was the son of Henry VIII and Jane Seymour, and England's first monarch to be raised as a Protestant. During his reign, the realm was governed by a regency council because he never reached his majority. The council was first led by his uncle Edward Seymour, 1st Duke of Somerset (1547–1549), and then by John Dudley, 1st Earl of Warwick (1550–1553), from 1551 Duke of Northumberland. Edward's reign was marked by economic problems and social unrest that in 1549 erupted into riot and rebellion. An expensive war with Scotland, at first successful, ended with military withdrawal from Scotland and Boulogne-sur-Mer in exchange for peace. The transformation of the Church of England into a recognisably Protestant body also occurred under Edward, who took great interest in religious matters. Although his father, Henry VIII, had severed the link between the Church and Rome, he had never permitted the renunciation of Catholic doctrine or ceremony. It was during Edward's reign that Protestantism was established for the first time in England with reforms that included the abolition of clerical celibacy and the Mass, and the imposition of compulsory services in English. In February 1553, at age 15, Edward fell ill. When his sickness was discovered to be terminal, he and his Council drew up a "Devise for the Succession", to prevent the country's return to Catholicism. Edward named his first cousin once removed, Lady Jane Grey, as his heir, excluding his half-sisters, Mary and Elizabeth. This decision was disputed following Edward's death, and Jane was deposed by Mary nine days after becoming queen. During her reign, Mary reversed Edward's Protestant reforms, which nonetheless became the basis of the Elizabethan Religious Settlement of 1559. Edward was born on 12 October 1537 in his mother's room inside Hampton Court Palace, in Middlesex. He was the son of King Henry VIII by his third wife, Jane Seymour. Throughout the realm, the people greeted the birth of a male heir, "whom we hungered for so long", with joy and relief. "Te Deums" were sung in churches, bonfires lit, and "their was shott at the Tower that night above two thousand gonnes". Queen Jane, appearing to recover quickly from the birth, sent out personally signed letters announcing the birth of "a Prince, conceived in most lawful matrimony between my Lord the King's Majesty and us". Edward was christened on 15 October, with his half-sisters, the 21-year-old Lady Mary as godmother and the 4-year-old Lady Elizabeth carrying the chrisom; and the Garter King of Arms proclaimed him as Duke of Cornwall and Earl of Chester. The Queen, however, fell ill on 23 October from presumed postnatal complications, and died the following night. Henry VIII wrote to Francis I of France that "Divine Providence ... hath mingled my joy with bitterness of the death of her who brought me this happiness". Edward was a healthy baby who suckled strongly from the outset. His father was delighted with him; in May 1538, Henry was observed "dallying with him in his arms ... and so holding him in a window to the sight and great comfort of the people". That September, the Lord Chancellor, Lord Audley, reported Edward's rapid growth and vigour; and other accounts describe him as a tall and merry child. The tradition that Edward VI was a sickly boy has been challenged by more recent historians. 
At the age of four, he fell ill with a life-threatening "quartan fever", but, despite occasional illnesses and poor eyesight, he enjoyed generally good health until the last six months of his life. Edward was initially placed in the care of Margaret Bryan, "lady mistress" of the prince's household. She was succeeded by Blanche Herbert, Lady Troy. Until the age of six, Edward was brought up, as he put it later in his "Chronicle", "among the women". The formal royal household established around Edward was, at first, under Sir William Sidney, and later Sir Richard Page, stepfather of Edward Seymour's wife, Anne Stanhope. Henry demanded exacting standards of security and cleanliness in his son's household, stressing that Edward was "this whole realm's most precious jewel". Visitors described the prince, who was lavishly provided with toys and comforts, including his own troupe of minstrels, as a contented child. From the age of six, Edward began his formal education under Richard Cox and John Cheke, concentrating, as he recalled himself, on "learning of tongues, of the scripture, of philosophy, and all liberal sciences". He received tuition from Elizabeth's tutor, Roger Ascham, and Jean Belmain, learning French, Spanish and Italian. In addition, he is known to have studied geometry and learned to play musical instruments, including the lute and the virginals. He collected globes and maps and, according to coinage historian C. E. Challis, developed a grasp of monetary affairs that indicated a high intelligence. Edward's religious education is assumed to have favoured the reforming agenda. His religious establishment was probably chosen by Archbishop Thomas Cranmer, a leading reformer. Both Cox and Cheke were "reformed" Catholics or Erasmians and later became Marian exiles. By 1549, Edward had written a treatise on the pope as Antichrist and was making informed notes on theological controversies. Many aspects of Edward's religion were essentially Catholic in his early years, including celebration of the mass and reverence for images and relics of the saints. Both Edward's sisters were attentive to their brother and often visited him – on one occasion, Elizabeth gave him a shirt "of her own working". Edward "took special content" in Mary's company, though he disapproved of her taste for foreign dances; "I love you most", he wrote to her in 1546. In 1543, Henry invited his children to spend Christmas with him, signalling his reconciliation with his daughters, whom he had previously illegitimised and disinherited. The following spring, he restored them to their place in the succession with a Third Succession Act, which also provided for a regency council during Edward's minority. This unaccustomed family harmony may have owed much to the influence of Henry's new wife, Catherine Parr, of whom Edward soon became fond. He called her his "most dear mother" and in September 1546 wrote to her: "I received so many benefits from you that my mind can hardly grasp them." Other children were brought to play with Edward, including the granddaughter of Edward's chamberlain, Sir William Sidney, who in adulthood recalled the prince as "a marvellous sweet child, of very mild and generous condition". Edward was educated with sons of nobles, "appointed to attend upon him" in what was a form of miniature court. Among these, Barnaby Fitzpatrick, son of an Irish peer, became a close and lasting friend. 
Edward was more devoted to his schoolwork than his classmates and seems to have outshone them, motivated to do his "duty" and compete with his sister Elizabeth's academic prowess. Edward's surroundings and possessions were regally splendid: his rooms were hung with costly Flemish tapestries, and his clothes, books, and cutlery were encrusted with precious jewels and gold. Like his father, Edward was fascinated by military arts, and many of his portraits show him wearing a gold dagger with a jewelled hilt, in imitation of Henry. Edward's "Chronicle" enthusiastically details English military campaigns against Scotland and France, and adventures such as John Dudley's near capture at Musselburgh in 1547. On 1 July 1543, Henry VIII signed the Treaty of Greenwich with the Scots, sealing the peace with Edward's betrothal to the seven-month-old Mary, Queen of Scots. The Scots were in a weak bargaining position after their defeat at Solway Moss the previous November, and Henry, seeking to unite the two realms, stipulated that Mary be handed over to him to be brought up in England. When the Scots repudiated the treaty in December 1543 and renewed their alliance with France, Henry was enraged. In April 1544, he ordered Edward's uncle, Edward Seymour, Earl of Hertford, to invade Scotland and "put all to fire and sword, burn Edinburgh town, so razed and defaced when you have sacked and gotten what ye can of it, as there may remain forever a perpetual memory of the vengeance of God lightened upon [them] for their falsehood and disloyalty". Seymour responded with the most savage campaign ever launched by the English against the Scots. The war, which continued into Edward's reign, has become known as "The Rough Wooing". The nine-year-old Edward wrote to his father and stepmother on 10 January 1547 from Hertford thanking them for his new year's gift of their portraits from life. By 28 January 1547, Henry VIII was dead. Those close to the throne, led by Edward Seymour and William Paget, agreed to delay the announcement of the king's death until arrangements had been made for a smooth succession. Seymour and Sir Anthony Browne, the Master of the Horse, rode to collect Edward from Hertford and brought him to Enfield, where Lady Elizabeth was living. He and Elizabeth were then told of the death of their father and heard a reading of the will. The Lord Chancellor, Thomas Wriothesley, announced Henry's death to parliament on 31 January, and general proclamations of Edward's succession were ordered. The new king was taken to the Tower of London, where he was welcomed with "great shot of ordnance in all places there about, as well out of the Tower as out of the ships". The following day, the nobles of the realm made their obeisance to Edward at the Tower, and Seymour was announced as Protector. Henry VIII was buried at Windsor on 16 February, in the same tomb as Jane Seymour, as he had wished. Edward VI was crowned at Westminster Abbey four days later on Sunday 20 February. The ceremonies were shortened, because of the "tedious length of the same which should weary and be hurtsome peradventure to the King's majesty, being yet of tender age", and also because the Reformation had rendered some of them inappropriate. On the eve of the coronation, Edward progressed on horseback from the Tower to the Palace of Westminster through thronging crowds and pageants, many based on the pageants for a previous boy king, Henry VI. 
He laughed at a Spanish tightrope walker who "tumbled and played many pretty toys" outside St Paul's Cathedral. At the coronation service, Cranmer affirmed the royal supremacy and called Edward a second Josiah, urging him to continue the reformation of the Church of England, "the tyranny of the Bishops of Rome banished from your subjects, and images removed". After the service, Edward presided at a banquet in Westminster Hall, where, he recalled in his "Chronicle", he dined with his crown on his head. Henry VIII's will named sixteen executors, who were to act as Edward's Council until he reached the age of eighteen. These executors were supplemented by twelve men "of counsail" who would assist the executors when called on. The final state of Henry VIII's will has been the subject of controversy. Some historians suggest that those close to the king manipulated either him or the will itself to ensure a share-out of power to their benefit, both material and religious. In this reading, the composition of the Privy Chamber shifted towards the end of 1546 in favour of the reforming faction. In addition, two leading conservative Privy Councillors were removed from the centre of power. Stephen Gardiner was refused access to Henry during his last months. Thomas Howard, 3rd Duke of Norfolk, found himself accused of treason; the day before the king's death his vast estates were seized, making them available for redistribution, and he spent the whole of Edward's reign in the Tower of London. Other historians have argued that Gardiner's exclusion was based on non-religious matters, that Norfolk was not noticeably conservative in religion, that conservatives remained on the Council, and that the radicalism of such men as Sir Anthony Denny, who controlled the dry stamp that replicated the king's signature, is debatable. Whatever the case, Henry's death was followed by a lavish hand-out of lands and honours to the new power group. The will contained an "unfulfilled gifts" clause, added at the last minute, which allowed Henry's executors to freely distribute lands and honours to themselves and the court, particularly to Edward Seymour, 1st Earl of Hertford, the new king's uncle who became Lord Protector of the Realm, Governor of the King's Person, and Duke of Somerset. In fact, Henry VIII's will did not provide for the appointment of a Protector. It entrusted the government of the realm during his son's minority to a Regency Council that would rule collectively, by majority decision, with "like and equal charge". Nevertheless, a few days after Henry's death, on 4 February, the executors chose to invest almost regal power in Edward Seymour, now Duke of Somerset. Thirteen out of the sixteen (the others being absent) agreed to his appointment as Protector, which they justified as their joint decision "by virtue of the authority" of Henry's will. Somerset may have done a deal with some of the executors, who almost all received hand-outs. He is known to have done so with William Paget, private secretary to Henry VIII, and to have secured the support of Sir Anthony Browne of the Privy Chamber. Somerset's appointment was in keeping with historical precedent, and his eligibility for the role was reinforced by his military successes in Scotland and France. In March 1547, he secured letters patent from King Edward granting him the almost monarchical right to appoint members to the Privy Council himself and to consult them only when he wished. 
In the words of historian Geoffrey Elton, "from that moment his autocratic system was complete". He proceeded to rule largely by proclamation, calling on the Privy Council to do little more than rubber-stamp his decisions. Somerset's takeover of power was smooth and efficient. The imperial ambassador, François van der Delft, reported that he "governs everything absolutely", with Paget operating as his secretary, though he predicted trouble from John Dudley, Viscount Lisle, who had recently been raised to Earl of Warwick in the share-out of honours. In fact, in the early weeks of his Protectorate, Somerset was challenged only by the Chancellor, Thomas Wriothesley, whom the Earldom of Southampton had evidently failed to buy off, and by his own brother. Wriothesley, a religious conservative, objected to Somerset's assumption of monarchical power over the Council. He then found himself abruptly dismissed from the chancellorship on charges of selling off some of his offices to delegates. Somerset faced less manageable opposition from his younger brother Thomas Seymour, who has been described as a "worm in the bud". As King Edward's uncle, Thomas Seymour demanded the governorship of the king's person and a greater share of power. Somerset tried to buy his brother off with a barony, an appointment to the Lord Admiralship, and a seat on the Privy Council—but Thomas was bent on scheming for power. He began smuggling pocket money to King Edward, telling him that Somerset held the purse strings too tight, making him a "beggarly king". He also urged him to throw off the Protector within two years and "bear rule as other kings do"; but Edward, schooled to defer to the Council, failed to co-operate. In the Spring of 1547, using Edward's support to circumvent Somerset's opposition, Thomas Seymour secretly married Henry VIII's widow Catherine Parr, whose Protestant household included the 11-year-old Lady Jane Grey and the 13-year-old Lady Elizabeth. In summer 1548, a pregnant Catherine Parr discovered Thomas Seymour embracing Lady Elizabeth. As a result, Elizabeth was removed from Catherine Parr's household and transferred to Sir Anthony Denny's. That September, Catherine Parr died shortly after childbirth, and Thomas Seymour promptly resumed his attentions to Elizabeth by letter, planning to marry her. Elizabeth was receptive, but, like Edward, unready to agree to anything unless permitted by the Council. In January 1549, the Council had Thomas Seymour arrested on various charges, including embezzlement at the Bristol mint. King Edward, whom Seymour was accused of planning to marry to Lady Jane Grey, himself testified about the pocket money. Lack of clear evidence for treason ruled out a trial, so Seymour was condemned instead by an Act of Attainder and beheaded on 20 March 1549. Somerset's only undoubted skill was as a soldier, which he had proven on expeditions to Scotland and in the defence of Boulogne-sur-Mer in 1546. From the first, his main interest as Protector was the war against Scotland. After a crushing victory at the Battle of Pinkie Cleugh in September 1547, he set up a network of garrisons in Scotland, stretching as far north as Dundee. His initial successes, however, were followed by a loss of direction, as his aim of uniting the realms through conquest became increasingly unrealistic. The Scots allied with France, who sent reinforcements for the defence of Edinburgh in 1548. The Queen of Scots was moved to France, where she was betrothed to the Dauphin. 
The cost of maintaining the Protector's massive armies and his permanent garrisons in Scotland also placed an unsustainable burden on the royal finances. A French attack on Boulogne in August 1549 at last forced Somerset to begin a withdrawal from Scotland. During 1548, England was subject to social unrest. After April 1549, a series of armed revolts broke out, fuelled by various religious and agrarian grievances. The two most serious rebellions, which required major military intervention to put down, were in Devon and Cornwall and in Norfolk. The first, sometimes called the Prayer Book Rebellion, arose from the imposition of Protestantism, and the second, led by a tradesman called Robert Kett, mainly from the encroachment of landlords on common grazing ground. A complex aspect of the social unrest was that the protesters believed they were acting legitimately against enclosing landlords with the Protector's support, convinced that the landlords were the lawbreakers. The same justification for outbreaks of unrest was voiced throughout the country, not only in Norfolk and the west. The origin of the popular view of Somerset as sympathetic to the rebel cause lies partly in his series of sometimes liberal, often contradictory, proclamations, and partly in the uncoordinated activities of the commissions he sent out in 1548 and 1549 to investigate grievances about loss of tillage, encroachment of large sheep flocks on common land, and similar issues. Somerset's commissions were led by an evangelical M.P. called John Hales, whose socially liberal rhetoric linked the issue of enclosure with Reformation theology and the notion of a godly commonwealth. Local groups often assumed that the findings of these commissions entitled them to act against offending landlords themselves. King Edward wrote in his "Chronicle" that the 1549 risings began "because certain commissions were sent down to pluck down enclosures". Whatever the popular view of Somerset, the disastrous events of 1549 were taken as evidence of a colossal failure of government, and the Council laid the responsibility at the Protector's door. In July 1549, Paget wrote to Somerset: "Every man of the council have misliked your proceedings ... would to God, that, at the first stir you had followed the matter hotly, and caused justice to be ministered in solemn fashion to the terror of others ...". The sequence of events that led to Somerset's removal from power has often been called a "coup d'état". By 1 October 1549, Somerset had been alerted that his rule faced a serious threat. He issued a proclamation calling for assistance, took possession of the king's person, and withdrew for safety to the fortified Windsor Castle, where Edward wrote, "Me thinks I am in prison". Meanwhile, a united Council published details of Somerset's government mismanagement. They made clear that the Protector's power came from them, not from Henry VIII's will. On 11 October, the Council had Somerset arrested and brought the king to Richmond. Edward summarised the charges against Somerset in his "Chronicle": "ambition, vainglory, entering into rash wars in mine youth, negligent looking on Newhaven, enriching himself of my treasure, following his own opinion, and doing all by his own authority, etc." In February 1550, John Dudley, Earl of Warwick, emerged as the leader of the Council and, in effect, as Somerset's successor. 
Although Somerset was released from the Tower and restored to the Council, he was executed for felony in January 1552 after scheming to overthrow Dudley's regime. Edward noted his uncle's death in his "Chronicle": "the duke of Somerset had his head cut off upon Tower Hill between eight and nine o'clock in the morning". Historians contrast the efficiency of Somerset's takeover of power, in which they detect the organising skills of allies such as Paget, the "master of practices", with the subsequent ineptitude of his rule. By autumn 1549, his costly wars had lost momentum, the crown faced financial ruin, and riots and rebellions had broken out around the country. Until recent decades, Somerset's reputation with historians was high, in view of his many proclamations that appeared to back the common people against a rapacious landowning class. More recently, however, he has often been portrayed as an arrogant and aloof ruler, lacking in political and administrative skills. In contrast, Somerset's successor John Dudley, Earl of Warwick, made Duke of Northumberland in 1551, was once regarded by historians merely as a grasping schemer who cynically elevated and enriched himself at the expense of the crown. Since the 1970s, the administrative and economic achievements of his regime have been recognised, and he has been credited with restoring the authority of the royal Council and returning the government to an even keel after the disasters of Somerset's protectorate. The Earl of Warwick's rival for leadership of the new regime was Thomas Wriothesley, 1st Earl of Southampton, whose conservative supporters had allied with Dudley's followers to create a unanimous Council, which they, and observers such as the Holy Roman Emperor Charles V's ambassador, expected to reverse Somerset's policy of religious reform. Warwick, on the other hand, pinned his hopes on the king's strong Protestantism and, claiming that Edward was old enough to rule in person, moved himself and his people closer to the king, taking control of the Privy Chamber. Paget, accepting a barony, joined Warwick when he realised that a conservative policy would not bring the emperor onto the English side over Boulogne. Southampton prepared a case for executing Somerset, aiming to discredit Warwick through Somerset's statements that he had done all with Warwick's co-operation. As a counter-move, Warwick convinced parliament to free Somerset, which it did on 14 January 1550. Warwick then had Southampton and his followers purged from the Council after winning the support of Council members in return for titles, and was made Lord President of the Council and great master of the king's household. Although not called a Protector, he was now clearly the head of the government. As Edward was growing up, he was able to understand more and more government business. However, his actual involvement in decisions has long been a matter of debate, and during the 20th century, historians have presented the whole gamut of possibilities, "balanc[ing] an articulate puppet against a mature, precocious, and essentially adult king", in the words of Stephen Alford. A special "Counsel for the Estate" was created when Edward was fourteen. Edward chose the members himself. In the weekly meetings with this Council, Edward was "to hear the debating of things of most importance". A major point of contact with the king was the Privy Chamber, and there Edward worked closely with William Cecil and William Petre, the Principal Secretaries. 
The king's greatest influence was in matters of religion, where the Council followed the strongly Protestant policy that Edward favoured. The Duke of Northumberland's mode of operation was very different from Somerset's. Careful to make sure he always commanded a majority of councillors, he encouraged a working council and used it to legitimatise his authority. Lacking Somerset's blood-relationship with the king, he added members to the Council from his own faction in order to control it. He also added members of his family to the royal household. He saw that to achieve personal dominance, he needed total procedural control of the Council. In the words of historian John Guy, "Like Somerset, he became quasi-king; the difference was that he managed the bureaucracy on the pretence that Edward had assumed full sovereignty, whereas Somerset had asserted the right to near-sovereignty as Protector". Warwick's war policies were more pragmatic than Somerset's, and they have earned him criticism for weakness. In 1550, he signed a peace treaty with France that agreed to withdrawal from Boulogne and recalled all English garrisons from Scotland. In 1551, Edward was betrothed to Elisabeth of Valois, King Henry II's daughter. In practice, he realised that England could no longer support the cost of wars. At home, he took measures to police local unrest. To forestall future rebellions, he kept permanent representatives of the crown in the localities, including lords lieutenant, who commanded military forces and reported back to central government. Working with William Paulet and Walter Mildmay, Warwick tackled the disastrous state of the kingdom's finances. However, his regime first succumbed to the temptations of a quick profit by further debasing the coinage. The economic disaster that resulted caused Warwick to hand the initiative to the expert Thomas Gresham. By 1552, confidence in the coinage was restored, prices fell, and trade at last improved. Though a full economic recovery was not achieved until Elizabeth's reign, its origins lay in the Duke of Northumberland's policies. The regime also cracked down on widespread embezzlement of government finances, and carried out a thorough review of revenue collection practices, which has been called "one of the more remarkable achievements of Tudor administration". In the matter of religion, the regime of Northumberland followed the same policy as that of Somerset, supporting an increasingly vigorous programme of reform. Although Edward VI's practical influence on government was limited, his intense Protestantism made a reforming administration obligatory; his succession was managed by the reforming faction, who continued in power throughout his reign. The man Edward trusted most, Thomas Cranmer, Archbishop of Canterbury, introduced a series of religious reforms that revolutionised the English church from one that—while rejecting papal supremacy—remained essentially Catholic, to one that was institutionally Protestant. The confiscation of church property that had begun under Henry VIII resumed under Edward—notably with the dissolution of the chantries—to the great monetary advantage of the crown and the new owners of the seized property. Church reform was therefore as much a political as a religious policy under Edward VI. By the end of his reign, the church had been financially ruined, with much of the property of the bishops transferred into lay hands. 
The religious convictions of both Somerset and Northumberland have proved elusive for historians, who are divided on the sincerity of their Protestantism. There is less doubt, however, about the religious fervour of King Edward, who was said to have read twelve chapters of scripture daily and enjoyed sermons, and was commemorated by John Foxe as a "godly imp". Edward was depicted during his life and afterwards as a new Josiah, the biblical king who destroyed the idols of Baal. He could be priggish in his anti-Catholicism and once asked Catherine Parr to persuade Lady Mary "to attend no longer to foreign dances and merriments which do not become a most Christian princess". Edward's biographer Jennifer Loach cautions, however, against accepting too readily the pious image of Edward handed down by the reformers, as in John Foxe's influential "Acts and Monuments", where a woodcut depicts the young king listening to a sermon by Hugh Latimer. In the early part of his life, Edward conformed to the prevailing Catholic practices, including attendance at mass: but he became convinced, under the influence of Cranmer and the reformers among his tutors and courtiers, that "true" religion should be imposed in England. The English Reformation advanced under pressure from two directions: from the traditionalists on the one hand and the zealots on the other, who led incidents of iconoclasm (image-smashing) and complained that reform did not go far enough. Reformed doctrines were made official, such as justification by faith alone and communion for laity as well as clergy in both kinds, of bread and wine. The Ordinal of 1550 replaced the divine ordination of priests with a government-run appointment system, authorising ministers to preach the gospel and administer the sacraments rather than, as before, "to offer sacrifice and celebrate mass both for the living and the dead". Cranmer set himself the task of writing a uniform liturgy in English, detailing all weekly and daily services and religious festivals, to be made compulsory in the first Act of Uniformity of 1549. The "Book of Common Prayer" of 1549, intended as a compromise, was attacked by traditionalists for dispensing with many cherished rituals of the liturgy, such as the elevation of the bread and wine, while some reformers complained about the retention of too many "popish" elements, including vestiges of sacrificial rites at communion. The prayer book was also opposed by many senior Catholic clerics, including Stephen Gardiner, Bishop of Winchester, and Edmund Bonner, Bishop of London, who were both imprisoned in the Tower and, along with others, deprived of their sees. After 1551, the Reformation advanced further, with the approval and encouragement of Edward, who began to exert more personal influence in his role as Supreme Head of the church. The new changes were also a response to criticism from such reformers as John Hooper, Bishop of Gloucester, and the Scot John Knox, who was employed as a minister in Newcastle upon Tyne under the Duke of Northumberland and whose preaching at court prompted the king to oppose kneeling at communion. Cranmer was also influenced by the views of the continental reformer Martin Bucer, who died in England in 1551, by Peter Martyr, who was teaching at Oxford, and by other foreign theologians. The progress of the Reformation was further speeded by the consecration of more reformers as bishops. 
In the winter of 1551–52, Cranmer rewrote the "Book of Common Prayer" in less ambiguous reformist terms, revised canon law, and prepared a doctrinal statement, the Forty-two Articles, to clarify the practice of the reformed religion, particularly in the divisive matter of the communion service. Cranmer's formulation of the reformed religion, finally divesting the communion service of any notion of the real presence of God in the bread and the wine, effectively abolished the mass. According to Elton, the publication of Cranmer's revised prayer book in 1552, supported by a second Act of Uniformity, "marked the arrival of the English Church at Protestantism". The prayer book of 1552 remains the foundation of the Church of England's services. However, Cranmer was unable to implement all these reforms once it became clear in spring 1553 that King Edward, upon whom the whole Reformation in England depended, was dying. In February 1553, Edward VI became ill, and by June, after several improvements and relapses, he was in a hopeless condition. The king's death and the succession of his Catholic half-sister Mary would jeopardise the English Reformation, and Edward's Council and officers had many reasons to fear it. Edward himself opposed Mary's succession, not only on religious grounds but also on those of legitimacy and male inheritance, which also applied to Elizabeth. He composed a draft document, headed "My devise for the succession", in which he undertook to change the succession, most probably inspired by his father Henry VIII's precedent. He passed over the claims of his half-sisters and, at last, settled the Crown on his first cousin once removed, the 16-year-old Lady Jane Grey, who on 25 May 1553 had married Lord Guilford Dudley, a younger son of the Duke of Northumberland. In his document Edward provided, in case of "lack of issue of my body", for the succession of male heirs only, that is, Jane Grey's mother's male heirs, Jane's, or her sisters'. As his death approached and possibly persuaded by Northumberland, he altered the wording so that Jane and her sisters themselves should be able to succeed. Yet Edward conceded Jane's right only as an exception to male rule, demanded by reality, an example not to be followed if Jane or her sisters had only daughters. In the final document both Mary and Elizabeth were excluded because of bastardy; since both had been declared bastards under Henry VIII and never made legitimate again, this reason could be advanced for both sisters. The provisions to alter the succession directly contravened Henry VIII's Third Succession Act of 1543 and have been described as bizarre and illogical. In early June, Edward personally supervised the drafting of a clean version of his devise by lawyers, to which he lent his signature "in six several places." Then, on 15 June he summoned high-ranking judges to his sickbed, commanding them on their allegiance "with sharp words and angry countenance" to prepare his devise as letters patent and announced that he would have these passed in parliament. His next measure was to have leading councillors and lawyers sign a bond in his presence, in which they agreed faithfully to perform Edward's will after his death. A few months later, Chief Justice Edward Montagu recalled that when he and his colleagues had raised legal objections to the devise, Northumberland had threatened them "trembling for anger, and ... further said that he would fight in his shirt with any man in that quarrel". 
Montagu also overheard a group of lords standing behind him conclude "if they refused to do that, they were traitors". At last, on 21 June, the devise was signed by over a hundred notables, including councillors, peers, archbishops, bishops, and sheriffs; many of them later claimed that they had been bullied into doing so by Northumberland, although in the words of Edward's biographer Jennifer Loach, "few of them gave any clear indication of reluctance at the time". It was now common knowledge that Edward was dying, and foreign diplomats suspected that some scheme to debar Mary was under way. France found the prospect of the emperor's cousin on the English throne disagreeable and engaged in secret talks with Northumberland, indicating support. The diplomats were certain that the overwhelming majority of the English people backed Mary, but nevertheless believed that Queen Jane would be successfully established. For centuries, the attempt to alter the succession was mostly seen as a one-man-plot by the Duke of Northumberland. Since the 1970s, however, many historians have attributed the inception of the "devise" and the insistence on its implementation to the king's initiative. Diarmaid MacCulloch has made out Edward's "teenage dreams of founding an evangelical realm of Christ", while David Starkey has stated that "Edward had a couple of co-operators, but the driving will was his". Among other members of the Privy Chamber, Northumberland's intimate Sir John Gates has been suspected of suggesting to Edward to change his devise so that Lady Jane Grey herself—not just any sons of hers—could inherit the Crown. Whatever the degree of his contribution, Edward was convinced that his word was law and fully endorsed disinheriting his half-sisters: "barring Mary from the succession was a cause in which the young King believed." Edward became ill during January 1553 with a fever and cough that gradually worsened. The imperial ambassador, Jean Scheyfve, reported that "he suffers a good deal when the fever is upon him, especially from a difficulty in drawing his breath, which is due to the compression of the organs on the right side". Edward felt well enough in early April to take the air in the park at Westminster and to move to Greenwich, but by the end of the month he had weakened again. By 7 May he was "much amended", and the royal doctors had no doubt of his recovery. A few days later the king was watching the ships on the Thames, sitting at his window. However, he relapsed, and on 11 June Scheyfve, who had an informant in the king's household, reported that "the matter he ejects from his mouth is sometimes coloured a greenish yellow and black, sometimes pink, like the colour of blood". Now his doctors believed he was suffering from "a suppurating tumour" of the lung and admitted that Edward's life was beyond recovery. Soon, his legs became so swollen that he had to lie on his back, and he lost the strength to resist the disease. To his tutor John Cheke he whispered, "I am glad to die". Edward made his final appearance in public on 1 July, when he showed himself at his window in Greenwich Palace, horrifying those who saw him by his "thin and wasted" condition. During the next two days, large crowds arrived hoping to see the king again, but on 3 July, they were told that the weather was too chilly for him to appear. Edward died at the age of 15 at Greenwich Palace at 8pm on 6 July 1553. 
According to John Foxe's legendary account of his death, his last words were: "I am faint; Lord have mercy upon me, and take my spirit". He was buried in the Henry VII Lady Chapel at Westminster Abbey on 8 August 1553, with reformed rites performed by Thomas Cranmer. The procession was led by "a grett company of chylderyn in ther surples" and watched by Londoners "wepyng and lamenting"; the funeral chariot, draped in cloth of gold, was topped by an effigy of Edward, with crown, sceptre, and garter. Edward's burial place was unmarked until as late as 1966, when an inscribed stone was laid in the chapel floor by Christ's Hospital school to commemorate their founder. The inscription reads as follows: "In Memory Of King Edward VI Buried In This Chapel This Stone Was Placed Here By Christ's Hospital In Thanksgiving For Their Founder 7 October 1966". The cause of Edward VI's death is not certain. As with many royal deaths in the 16th century, rumours of poisoning abounded, but no evidence has been found to support these. The Duke of Northumberland, whose unpopularity was underlined by the events that followed Edward's death, was widely believed to have ordered the imagined poisoning. Another theory held that Edward had been poisoned by Catholics seeking to bring Mary to the throne. The surgeon who opened Edward's chest after his death found that "the disease whereof his majesty died was the disease of the lungs". The Venetian ambassador reported that Edward had died of consumption—in other words, tuberculosis—a diagnosis accepted by many historians. Skidmore believes that Edward contracted tuberculosis after a bout of measles and smallpox in 1552 that suppressed his natural immunity to the disease. Loach suggests instead that his symptoms were typical of acute bronchopneumonia, leading to a "suppurating pulmonary infection" or lung abscess, septicaemia, and kidney failure. Lady Mary was last seen by Edward in February, and was kept informed about the state of her brother's health by Northumberland and through her contacts with the imperial ambassadors. Aware of Edward's imminent death, she left Hunsdon House, near London, and sped to her estates around Kenninghall in Norfolk, where she could count on the support of her tenants. Northumberland sent ships to the Norfolk coast to prevent her escape or the arrival of reinforcements from the continent. He delayed the announcement of the king's death while he gathered his forces, and Jane Grey was taken to the Tower on 10 July. On the same day, she was proclaimed queen in the streets of London, to murmurings of discontent. The Privy Council received a message from Mary asserting her "right and title" to the throne and commanding that the Council proclaim her queen, as she had already proclaimed herself. The Council replied that Jane was queen by Edward's authority and that Mary, by contrast, was illegitimate and supported only by "a few lewd, base people". Northumberland soon realised that he had miscalculated drastically, not least in failing to secure Mary's person before Edward's death. Although many of those who rallied to Mary were conservatives hoping for the defeat of Protestantism, her supporters also included many for whom her lawful claim to the throne overrode religious considerations. 
Northumberland was obliged to relinquish control of a nervous Council in London and launch an unplanned pursuit of Mary into East Anglia, from where news was arriving of her growing support, which included a number of nobles and gentlemen and "innumerable companies of the common people". On 14 July Northumberland marched out of London with three thousand men, reaching Cambridge the next day; meanwhile, Mary rallied her forces at Framlingham Castle in Suffolk, gathering an army of nearly twenty thousand by 19 July. It now dawned on the Privy Council that it had made a terrible mistake. Led by the Earl of Arundel and the Earl of Pembroke, on 19 July the Council publicly proclaimed Mary as queen; Jane's nine-day reign came to an end. The proclamation triggered wild rejoicing throughout London. Stranded in Cambridge, Northumberland proclaimed Mary himself—as he had been commanded to do by a letter from the Council. William Paget and the Earl of Arundel rode to Framlingham to beg Mary's pardon, and Arundel arrested Northumberland on 24 July. Northumberland was beheaded on 22 August, shortly after renouncing Protestantism. His recantation dismayed his daughter-in-law, Jane, who followed him to the scaffold on 12 February 1554, after her father's involvement in Wyatt's rebellion. Although Edward reigned for only six years and died at the age of 15, his reign made a lasting contribution to the English Reformation and the structure of the Church of England. The last decade of Henry VIII's reign had seen a partial stalling of the Reformation, a drifting back to more conservative values. By contrast, Edward's reign saw radical progress in the Reformation. In those six years, the Church transferred from an essentially Roman Catholic liturgy and structure to one that is usually identified as Protestant. In particular, the introduction of the Book of Common Prayer, the Ordinal of 1550, and Cranmer's Forty-two Articles formed the basis for English Church practices that continue to this day. Edward himself fully approved these changes, and though they were the work of reformers such as Thomas Cranmer, Hugh Latimer, and Nicholas Ridley, backed by Edward's determinedly evangelical Council, the fact of the king's religion was a catalyst in the acceleration of the Reformation during his reign. Queen Mary's attempts to undo the reforming work of her brother's reign faced major obstacles. Despite her belief in the papal supremacy, she ruled constitutionally as the Supreme Head of the English Church, a contradiction under which she bridled. She found herself entirely unable to restore the vast number of ecclesiastical properties handed over or sold to private landowners. Although she burned a number of leading Protestant churchmen, many reformers either went into exile or remained subversively active in England during her reign, producing a torrent of reforming propaganda that she was unable to stem. Nevertheless, Protestantism was not yet "printed in the stomachs" of the English people, and had Mary lived longer, her Catholic reconstruction might have succeeded, leaving Edward's reign, rather than hers, as a historical aberration. On Mary's death in 1558, the English Reformation resumed its course, and most of the reforms instituted during Edward's reign were reinstated in the Elizabethan Religious Settlement. 
Queen Elizabeth replaced Mary's councillors and bishops with ex-Edwardians, such as William Cecil, Northumberland's former secretary, and Richard Cox, Edward's old tutor, who preached an anti-Catholic sermon at the opening of parliament in 1559. Parliament passed an Act of Uniformity the following spring that restored, with modifications, Cranmer's prayer book of 1552; and the Thirty-nine Articles of 1563 were largely based on Cranmer's Forty-two Articles. The theological developments of Edward's reign provided a vital source of reference for Elizabeth's religious policies, though the internationalism of the Edwardian Reformation was never revived. Enid Blyton Enid Mary Blyton (11 August 1897 – 28 November 1968) was an English children's writer whose books have been among the world's best-sellers since the 1930s, selling more than 600 million copies. Blyton's books are still enormously popular, and have been translated into 90 languages; her first book, "Child Whispers", a 24-page collection of poems, was published in 1922. She wrote on a wide range of topics including education, natural history, fantasy, mystery, and biblical narratives and is best remembered today for her Noddy, Famous Five, and Secret Seven series. Following the commercial success of her early novels such as "Adventures of the Wishing-Chair" (1937) and "The Enchanted Wood" (1939), Blyton went on to build a literary empire, sometimes producing fifty books a year in addition to her prolific magazine and newspaper contributions. Her writing was unplanned and sprang largely from her unconscious mind; she typed her stories as events unfolded before her. The sheer volume of her work and the speed with which it was produced led to rumours that Blyton employed an army of ghost writers, a charge she vigorously denied. Blyton's work became increasingly controversial among literary critics, teachers and parents from the 1950s onwards, because of the alleged unchallenging nature of her writing and the themes of her books, particularly the Noddy series. Some libraries and schools banned her works, which the BBC had refused to broadcast from the 1930s until the 1950s because they were perceived to lack literary merit. Her books have been criticised as being elitist, sexist, racist, xenophobic and at odds with the more liberal environment emerging in post-war Britain, but they have continued to be best-sellers since her death in 1968. Blyton felt she had a responsibility to provide her readers with a strong moral framework, so she encouraged them to support worthy causes. In particular, through the clubs she set up or supported, she encouraged and organised them to raise funds for animal and paediatric charities. The story of Blyton's life was dramatised in a BBC film entitled "Enid", featuring Helena Bonham Carter in the title role and first broadcast in the United Kingdom on BBC Four in 2009. There have also been several adaptations of her books for stage, screen and television. Enid Blyton was born on 11 August 1897 in East Dulwich, South London, the eldest of the three children of Thomas Carey Blyton (1870–1920), a cutlery salesman, and his wife Theresa Mary (née Harrison; 1874–1950). Enid's younger brothers, Hanly (1899–1983) and Carey (1902–1976), were born after the family had moved to a semi-detached villa in Beckenham, then a village in Kent. A few months after her birth Enid almost died from whooping cough, but was nursed back to health by her father, whom she adored. 
Thomas Blyton ignited Enid's interest in nature; in her autobiography she wrote that he "loved flowers and birds and wild animals, and knew more about them than anyone I had ever met". He also passed on his interest in gardening, art, music, literature and the theatre, and the pair often went on nature walks, much to the disapproval of Enid's mother, who showed little interest in her daughter's pursuits. Enid was devastated when he left the family shortly after her thirteenth birthday to live with another woman. Enid and her mother did not have a good relationship, and she did not attend either of her parents' funerals. From 1907 to 1915 Blyton attended St Christopher's School in Beckenham, where she enjoyed physical activities and became school tennis champion and captain of lacrosse. She was not so keen on all the academic subjects but excelled in writing, and in 1911 she entered Arthur Mee's children's poetry competition. Mee offered to print her verses, encouraging her to produce more. Blyton's mother considered her efforts at writing to be a "waste of time and money", but she was encouraged to persevere by Mabel Attenborough, the aunt of a school friend. Blyton's father taught her to play the piano, which she mastered well enough for him to believe that she might follow in his sister's footsteps and become a professional musician. Blyton considered enrolling at the Guildhall School of Music, but decided she was better suited to becoming a writer. After finishing school in 1915 as head girl, she moved out of the family home to live with her friend Mary Attenborough, before going to stay with George and Emily Hunt at Seckford Hall near Woodbridge in Suffolk. Seckford Hall, with its allegedly haunted room and secret passageway provided inspiration for her later writing. At Woodbridge Congregational Church Blyton met Ida Hunt, who taught at Ipswich High School, and suggested that she train as a teacher. Blyton was introduced to the children at the nursery school, and recognising her natural affinity with them she enrolled in a National Froebel Union teacher training course at the school in September 1916. By this time she had almost ceased contact with her family. Blyton's manuscripts had been rejected by publishers on many occasions, which only made her more determined to succeed: "it is partly the struggle that helps you so much, that gives you determination, character, self-reliance – all things that help in any profession or trade, and most certainly in writing". In March 1916 her first poems were published in "Nash's Magazine". She completed her teacher training course in December 1918, and the following month obtained a teaching appointment at Bickley Park School, a small independent establishment for boys in Bickley, Kent. Two months later Blyton received a teaching certificate with distinctions in zoology and principles of education, 1st class in botany, geography, practice and history of education, child hygiene and class teaching and 2nd class in literature and elementary mathematics. In 1920 she moved to Southernhay in Hook Road Surbiton as nursery governess to the four sons of architect Horace Thompson and his wife Gertrude, with whom Blyton spent four happy years. Owing to a shortage of schools in the area her charges were soon joined by the children of neighbours, and a small school developed at the house. In 1920 Blyton relocated to Chessington, and began writing in her spare time. 
The following year she won the "Saturday Westminster Review" writing competition with her essay "On the Popular Fallacy that to the Pure All Things are Pure". Publications such as "The Londoner", "Home Weekly" and "The Bystander" began to show an interest in her short stories and poems. Blyton's first book, "Child Whispers", a 24-page collection of poems, was published in 1922. It was illustrated by a schoolfriend, Phyllis Chase, who collaborated on several of her early works. Also in that year Blyton began writing in annuals for Cassell and George Newnes, and her first piece of writing, "Peronel and his Pot of Glue", was accepted for publication in "Teachers' World". Her success was boosted in 1923 when her poems were published alongside those of Rudyard Kipling, Walter de la Mare and G. K. Chesterton in a special issue of "Teachers' World". Blyton's educational texts were quite influential in the 1920s and '30s, her most sizeable works being the three-volume "The Teacher's Treasury" (1926), the six-volume "Modern Teaching" (1928), the ten-volume "Pictorial Knowledge" (1930), and the four-volume "Modern Teaching in the Infant School" (1932). In July 1923 Blyton published "Real Fairies", a collection of thirty-three poems written especially for the book with the exception of "Pretending", which had appeared earlier in "Punch" magazine. The following year she published "The Enid Blyton Book of Fairies", illustrated by Horace J. Knowles, and in 1926 the "Book of Brownies". Several books of plays appeared in 1927, including "A Book of Little Plays" and "The Play's the Thing" with the illustrator Alfred Bestall. In the 1930s Blyton developed an interest in writing stories related to various myths, including those of ancient Greece and Rome; "The Knights of the Round Table", "Tales of Ancient Greece" and "Tales of Robin Hood" were published in 1930. In "Tales of Ancient Greece" Blyton retold sixteen well-known ancient Greek myths, but used the Latin rather than the Greek names of deities and invented conversations between the characters. "The Adventures of Odysseus", "Tales of the Ancient Greeks and Persians" and "Tales of the Romans" followed in 1934. The first of twenty-eight books in Blyton's Old Thatch series, "The Talking Teapot and Other Tales", was published in 1934, the same year as the first book in her Brer Rabbit series, "Brer Rabbit Retold" (the character of Brer Rabbit originally featured in the Uncle Remus stories by Joel Chandler Harris). Her first serial story and first full-length book, "Adventures of the Wishing-Chair", followed in 1937. "The Enchanted Wood", the first book in the Faraway Tree series, published in 1939, is about a magic tree inspired by the Norse mythology that had fascinated Blyton as a child. According to Blyton's daughter Gillian, the inspiration for the magic tree came from "thinking up a story one day and suddenly she was walking in the enchanted wood and found the tree. In her imagination she climbed up through the branches and met Moon-Face, Silky, the Saucepan Man and the rest of the characters. She had all she needed." As in the Wishing-Chair series, these fantasy books typically involve children being transported into a magical world in which they meet fairies, goblins, elves, pixies and other mythological creatures. Blyton's first full-length adventure novel, "The Secret Island", was published in 1938, featuring the characters of Jack, Mike, Peggy and Nora. 
Described by "The Glasgow Herald" as a "Robinson Crusoe-style adventure on an island in an English lake", "The Secret Island" was a lifelong favourite of Gillian's and spawned the Secret series. The following year Blyton released her first book in the Circus series and her initial book in the Amelia Jane series, "Naughty Amelia Jane!" According to Gillian the main character was based on a large handmade doll given to her by her mother on her third birthday. During the 1940s Blyton became a prolific author, her success enhanced by her "marketing, publicity and branding that was far ahead of its time". In 1940 Blyton published two books – "Three Boys and a Circus" and "Children of Kidillin" – under the pseudonym of Mary Pollock (middle name plus first married name), in addition to the eleven published under her own name that year. So popular were Pollock's books that one reviewer was prompted to observe that "Enid Blyton had better look to her laurels". But Blyton's readers were not so easily deceived and many complained about the subterfuge to her and her publisher, with the result that all six books published under the name of Mary Pollock – two in 1940 and four in 1943 – were reissued under Blyton's name. Later in 1940 Blyton published the first of her boarding school story books and the first novel in the Naughtiest Girl series, "The Naughtiest Girl in the School", which followed the exploits of the mischievous schoolgirl Elizabeth Allen at the fictional Whyteleafe School. The first of her six novels in the St. Clare's series, "The Twins at St. Clare's", appeared the following year, featuring the twin sisters Patricia and Isabel O'Sullivan. In 1942 Blyton released the first book in the Mary Mouse series, "Mary Mouse and the Dolls' House", about a mouse exiled from her mousehole who becomes a maid at a dolls' house. Twenty-three books in the series were produced between 1942 and 1964; 10,000 copies were sold in 1942 alone. The same year, Blyton published the first novel in the Famous Five series, "Five on a Treasure Island", with illustrations by Eileen Soper. Its popularity resulted in twenty-one books between then and 1963, and the characters of Julian, Dick, Anne, George (Georgina) and Timmy the dog became household names in Britain. Matthew Grenby, author of "Children's Literature", states that the five were involved with "unmasking hardened villains and solving serious crimes", although the novels were "hardly 'hard-boiled' thrillers". Blyton based the character of Georgina, a tomboy she described as "short-haired, freckled, sturdy, and snub-nosed" and "bold and daring, hot-tempered and loyal", on herself. Blyton had an interest in biblical narratives, and retold Old and New Testament stories. "The Land of Far-Beyond" (1942) is a Christian parable along the lines of John Bunyan's "The Pilgrim's Progress" (1698), with contemporary children as the main characters. In 1943 she published "The Children's Life of Christ", a collection of fifty-nine short stories related to the life of Jesus, with her own slant on popular biblical stories, from the Nativity and the Three Wise Men through to the trial, the crucifixion and the resurrection. "Tales from the Bible" was published the following year, followed by "The Boy with the Loaves and Fishes" in 1948. 
The first book of Blyton's Five Find-Outers series, "The Mystery of the Burnt Cottage", was published in 1943, as was the second book in the Faraway series, "The Magic Faraway Tree", which in 2003 was voted 66th in the BBC's Big Read poll to find the UK's favourite book. Several of Blyton's works during this period have seaside themes; "John Jolly by the Sea" (1943), a picture book intended for younger readers, was published in a booklet format by Evans Brothers. Other books with a maritime theme include "The Secret of Cliff Castle" and "Smuggler Ben", both attributed to Mary Pollock in 1943; "The Island of Adventure", the first in the Adventure series of eight novels from 1944 onwards; and various novels of the Famous Five series such as "Five on a Treasure Island" (1942), "Five on Kirrin Island Again" (1947) and "Five Go Down to the Sea" (1953). Capitalising on her success, with a loyal and ever-growing readership, Blyton produced a new edition of many of her series such as the Famous Five, the Five Find-Outers and St. Clare's every year in addition to many other novels, short stories and books. In 1946 Blyton launched the first in the Malory Towers series of six books based around the schoolgirl Darrell Rivers, "First Term at Malory Towers", which became extremely popular, particularly with girls. The first book in Blyton's Barney Mysteries series, "The Rockingdown Mystery", was published in 1949, as was the first of her fifteen Secret Seven novels. The Secret Seven Society consists of Peter, his sister Janet, and their friends Colin, George, Jack, Pam and Barbara, who meet regularly in a shed in the garden to discuss peculiar events in their local community. Blyton rewrote the stories so they could be adapted into cartoons, which appeared in "Mickey Mouse Weekly" in 1951 with illustrations by George Brook. The French author Evelyne Lallemand continued the series in the 1970s, producing an additional twelve books, nine of which were translated into English by Anthea Bell between 1983 and 1987. Blyton's Noddy, about a little wooden boy from Toyland, first appeared in the "Sunday Graphic" on 5 June 1949, and in November that year "Noddy Goes to Toyland", the first of at least two dozen books in the series, was published. The idea was conceived by one of Blyton's publishers, Sampson, Low, Marston and Company, who in 1949 arranged a meeting between Blyton and the Dutch illustrator Harmsen van der Beek. Despite having to communicate via an interpreter, he provided some initial sketches of how Toyland and its characters would be represented. Four days after the meeting Blyton sent the text of the first two Noddy books to her publisher, to be forwarded to van der Beek. The Noddy books became one of her most successful and best-known series, and were hugely popular in the 1950s. An extensive range of sub-series, spin-offs and strip books were produced throughout the decade, including "Noddy's Library", "Noddy's Garage of Books", "Noddy's Castle of Books", "Noddy's Toy Station of Books" and "Noddy's Shop of Books". In 1950 Blyton established the company Darrell Waters Ltd to manage her affairs. By the early 1950s she had reached the peak of her output, often publishing more than fifty books a year, and she remained extremely prolific throughout much of the decade. 
By 1955 Blyton had written her fourteenth Famous Five novel, "Five Have Plenty of Fun", her fifteenth Mary Mouse book, "Mary Mouse in Nursery Rhyme Land", her eighth book in the Adventure series, "The River of Adventure", and her seventh Secret Seven novel, "Secret Seven Win Through". She completed the sixth and final book of the Malory Towers series, "Last Term at Malory Towers", in 1951. Blyton published several further books featuring the character of Scamp the terrier, following on from "The Adventures of Scamp", a novel she had released in 1943 under the pseudonym of Mary Pollock. "Scamp Goes on Holiday" (1952) and "Scamp and Bimbo", "Scamp at School", "Scamp and Caroline" and "Scamp Goes to the Zoo" (1954) were illustrated by Pierre Probst. She introduced the character of Bom, a stylish toy drummer dressed in a bright red coat and helmet, alongside Noddy in "TV Comic" in July 1956. A book series began the same year with "Bom the Little Toy Drummer", featuring illustrations by R. Paul-Hoye, and followed with "Bom and His Magic Drumstick" (1957), "Bom Goes Adventuring" and "Bom Goes to Ho Ho Village" (1958), "Bom and the Clown" and "Bom and the Rainbow" (1959) and "Bom Goes to Magic Town" (1960). In 1958 she produced two annuals featuring the character, the first of which included twenty short stories, poems and picture strips. Many of Blyton's series, including Noddy and The Famous Five, continued to be successful in the 1960s; by 1962, 26 million copies of Noddy had been sold. Blyton concluded several of her long-running series in 1963, publishing the last books of The Famous Five ("Five Are Together Again") and The Secret Seven ("Fun for the Secret Seven"); she also produced three more Brer Rabbit books with the illustrator Grace Lodge: "Brer Rabbit Again", "Brer Rabbit Book", and "Brer Rabbit's a Rascal". In 1962 many of her books were among the first to be published by Armada Books in paperback, making them more affordable to children. After 1963 Blyton's output was generally confined to short stories and books intended for very young readers, such as "Learn to Count with Noddy" and "Learn to Tell Time with Noddy" in 1965, and "Stories for Bedtime" and the Sunshine Picture Story Book collection in 1966. Her declining health and a falling off in readership among older children have been put forward as the principal reasons for this change in trend. Blyton published her last book in the Noddy series, "Noddy and the Aeroplane", in February 1964. In May the following year she published "Mixed Bag", a song book with music written by her nephew Carey, and in August she released her last full-length books, "The Man Who Stopped to Help" and "The Boy Who Came Back". Blyton cemented her reputation as a children's writer when in 1926 she took over the editing of "Sunny Stories", a magazine that typically included the re-telling of legends, myths, stories and other articles for children. That same year she was given her own column in "Teachers' World", entitled "From my Window". Three years later she began contributing a weekly page in the magazine, in which she published letters from her fox terrier dog Bobs. They proved to be so popular that in 1933 they were published in book form as "Letters from Bobs", and sold ten thousand copies in the first week. Her most popular feature was "Round the Year with Enid Blyton", which consisted of forty-eight articles covering aspects of natural history such as weather, pond life, how to plant a school garden and how to make a bird table. 
Among Blyton's other nature projects was her monthly "Country Letter" feature that appeared in "The Nature Lover" magazine in 1935. "Sunny Stories" was renamed "Enid Blyton's Sunny Stories" in January 1937, and served as a vehicle for the serialisation of Blyton's books. Her first Naughty Amelia Jane story, about an anti-heroine based on a doll owned by her daughter Gillian, was published in the magazine. Blyton stopped contributing in 1952, and it closed down the following year, shortly before the appearance of the new fortnightly "Enid Blyton Magazine", written entirely by Blyton. The first edition appeared on 18 March 1953, and the magazine ran until September 1959. Noddy made his first appearance in the "Sunday Graphic" in 1949, the same year as Blyton's first daily Noddy strip for the London "Evening Standard". It was illustrated by van der Beek until his death in 1953. Blyton worked in a wide range of fictional genres, from fairy tales to animal, nature, detective, mystery, and circus stories, but she often "blurred the boundaries" in her books, and encompassed a range of genres even in her short stories. In a 1958 article published in "The Author", she wrote that there were a "dozen or more different types of stories for children", and she had tried them all, but her favourites were those with a family at their centre. Blyton described her writing technique in a letter to the psychologist Peter McKellar. In another letter to McKellar she describes how in just five days she wrote the 60,000-word book "The River of Adventure", the eighth in her Adventure series, by listening to what she referred to as her "under-mind", which she contrasted with her "upper conscious mind". Blyton was unwilling to conduct any research or planning before beginning work on a new book; according to Druce, this, coupled with the lack of variety in her life, almost inevitably created the danger that she would unconsciously plagiarise the books she had read, including her own, and she clearly did so. Gillian has recalled that her mother "never knew where her stories came from", but that she used to talk about them "coming from her 'mind's eye'", as did William Wordsworth and Charles Dickens. Blyton had "thought it was made up of every experience she'd ever had, everything she's seen or heard or read, much of which had long disappeared from her conscious memory" but never knew the direction her stories would take. Blyton further explained in her autobiography that "If I tried to think out or invent the whole book, I could not do it. For one thing, it would bore me and for another, it would lack the 'verve' and the extraordinary touches and surprising ideas that flood out from my imagination." Blyton's daily routine varied little over the years. She usually began writing soon after breakfast, with her portable typewriter on her knee and her favourite red Moroccan shawl nearby; she believed that the colour red acted as a "mental stimulus" for her. Stopping only for a short lunch break, she continued writing until five o'clock, by which time she would usually have produced 6,000–10,000 words. A 2000 article in "The Malay Mail" considers the children in Blyton's books to have "lived in a world shaped by the realities of post-war austerity", enjoying freedom without the political correctness of today, which offers modern readers of Blyton's novels a form of escapism. 
Brandon Robshaw of "The Independent" refers to the Blyton universe as "crammed with colour and character", "self-contained and internally consistent", noting that Blyton exemplifies a strong mistrust of adults and figures of authority in her works, creating a world in which children govern. Gillian noted that in her mother's adventure, detective and school stories for older children, "the hook is the strong storyline with plenty of cliffhangers, a trick she acquired from her years of writing serialised stories for children's magazines. There is always a strong moral framework in which bravery and loyalty are (eventually) rewarded". Blyton herself wrote that "my love of children is the whole foundation of all my work". Victor Watson, Assistant Director of Research at Homerton College, Cambridge, believes that Blyton's works reveal an "essential longing and potential associated with childhood", and notes how the opening pages of "The Mountain of Adventure" present a "deeply appealing ideal of childhood". He argues that Blyton's work differs from that of many other authors in its approach, describing the narrative of The Famous Five series, for instance, as "like a powerful spotlight, it seeks to illuminate, to explain, to demystify. It takes its readers on a roller-coaster story in which the darkness is always banished; everything puzzling, arbitrary, evocative is either dismissed or explained". Watson further notes how Blyton often used minimalist visual descriptions and introduced a few careless phrases such as "gleamed enchantingly" to appeal to her young readers. From the mid-1950s rumours began to circulate that Blyton had not written all the books attributed to her, a charge she found particularly distressing. She published an appeal in her magazine asking children to let her know if they heard such stories and, after one mother informed her that she had attended a parents' meeting at her daughter's school during which a young librarian had repeated the allegation, Blyton decided in 1955 to begin legal proceedings. The librarian was eventually forced to make a public apology in open court early the following year, but the rumours that Blyton operated "a 'company' of ghost writers" persisted, as some found it difficult to believe that one woman working alone could produce such a volume of work. Blyton felt a responsibility to provide her readers with a positive moral framework, and she encouraged them to support worthy causes. Her view, expressed in a 1957 article, was that children should help animals and other children rather than adults. Blyton and the members of the children's clubs she promoted via her magazines raised a great deal of money for various charities; according to Blyton, membership of her clubs meant "working for others, for no reward". The largest of the clubs she was involved with was the Busy Bees, the junior section of the People's Dispensary for Sick Animals, which Blyton had actively supported since 1933. The club had been set up by Maria Dickin in 1934, and after Blyton publicised its existence in the "Enid Blyton Magazine" it attracted 100,000 members in three years. Such was Blyton's popularity among children that after she became Queen Bee in 1952 more than 20,000 additional members were recruited in her first year in office. The Enid Blyton Magazine Club was formed in 1953. 
Its primary objective was to raise funds to help those children with cerebral palsy who attended a centre in Cheyne Walk, in Chelsea, London, by furnishing an on-site hostel among other things. The Famous Five series gathered such a following that readers asked Blyton if they might form a fan club. She agreed, on condition that it serve a useful purpose, and suggested that it could raise funds for the Shaftesbury Society Babies' Home in Beaconsfield, on whose committee she had served since 1948. The club was established in 1952, and provided funds for equipping a Famous Five Ward at the home, a paddling pool, sun room, summer house, playground, birthday and Christmas celebrations, and visits to the pantomime. By the late 1950s Blyton's clubs had a membership of 500,000, and raised £35,000 in the six years of the "Enid Blyton Magazine"'s run. By 1974 the Famous Five Club had a membership of 220,000, and was growing at the rate of 6,000 new members a year. The Beaconsfield home it was set up to support closed in 1967, but the club continued to raise funds for other paediatric charities, including an Enid Blyton bed at Great Ormond Street Hospital and a mini-bus for disabled children at Stoke Mandeville Hospital. Blyton capitalised upon her commercial success as an author by negotiating agreements with jigsaw puzzle and games manufacturers from the late 1940s onwards; by the early 1960s some 146 different companies were involved in merchandising Noddy alone. In 1948 Bestime released four jigsaw puzzles featuring her characters, and the first Enid Blyton board game appeared, "Journey Through Fairyland", created by BGL. The first card game, Faraway Tree, appeared from Pepys in 1950. In 1954 Bestime released the first four jigsaw puzzles of the Secret Seven, and the following year a Secret Seven card game appeared. Bestime released the Little Noddy Car Game in 1953 and the Little Noddy Leap Frog Game in 1955, and in 1956 American manufacturer Parker Brothers released Little Noddy's Taxi Game, a board game which features Noddy driving about town, picking up various characters. Bestime released its Plywood Noddy Jigsaws series in 1957 and a Noddy jigsaw series featuring cards appeared from 1963, with illustrations by Robert Lee. Arrow Games became the chief producer of Noddy jigsaws in the late 1970s and early 1980s. Whitman manufactured four new Secret Seven jigsaw puzzles in 1975, and produced four new Malory Towers ones two years later. In 1979 the company released a Famous Five adventure board game, Famous Five Kirrin Island Treasure. Stephen Thraves wrote eight Famous Five adventure game books, published by Hodder & Stoughton in the 1980s. The first adventure game book of the series, "The Wreckers' Tower Game", was published in October 1984. On 28 August 1924 Blyton married Major Hugh Alexander Pollock, DSO (1888–1971) at Bromley Register Office, without inviting her family. They married shortly after he divorced from his first wife, with whom he had two sons. Pollock was editor of the book department in the publishing firm of George Newnes, which became her regular publisher. It was he who requested that Blyton write a book about animals, "The Zoo Book", which was completed in the month before they married. They initially lived in a flat in Chelsea before moving to Elfin Cottage in Beckenham in 1926, and then to Old Thatch in Bourne End (called Peterswood in her books) in 1929. 
Blyton's first daughter, Gillian, was born on 15 July 1931, and after a miscarriage in 1934, she gave birth to a second daughter, Imogen, on 27 October 1935. In 1938 Blyton and her family moved to a house in Beaconsfield, which was named Green Hedges by Blyton's readers following a competition in her magazine. By the mid-1930s Pollock had withdrawn increasingly from public life and become a secret alcoholic, possibly because his meetings as a publisher with Winston Churchill had revived the trauma he had suffered during the First World War. With the outbreak of the Second World War, he became involved in the Home Guard. Pollock renewed his acquaintance with Ida Crowe, a writer nineteen years his junior whom he had met years earlier, and offered her a job as secretary at his posting, a Home Guard training centre at Denbies, a Gothic mansion in Surrey belonging to Lord Ashcombe; the two entered into a romantic relationship. Blyton's marriage to Pollock had been troubled for years, and according to Crowe's memoir, Blyton began a series of affairs, including a lesbian relationship with one of the children's nannies. In 1941 Blyton met Kenneth Fraser Darrell Waters, a London surgeon with whom she began a serious affair. Pollock discovered the liaison and threatened to initiate divorce proceedings against Blyton. Because she feared that exposure of her adultery would ruin her public image, it was ultimately agreed that Blyton would instead file for divorce against Pollock. According to Crowe's memoir, Blyton promised that if he admitted to infidelity she would allow him parental access to their daughters; but after the divorce he was forbidden to contact them, and Blyton ensured he was subsequently unable to find work in publishing. Pollock, having married Crowe on 26 October 1943, eventually resumed his heavy drinking and was forced to petition for bankruptcy in 1950. Blyton and Darrell Waters married at the City of Westminster Register Office on 20 October 1943. She changed the surname of her daughters to Darrell Waters and publicly embraced her new role as a happily married and devoted doctor's wife. After discovering she was pregnant in the spring of 1945, Blyton miscarried five months later, following a fall from a ladder. The baby would have been Darrell Waters's first child and it would also have been the son for which both of them longed. Her love of tennis included playing naked, with nude tennis "a common practice in those days among the more louche members of the middle classes". Blyton's health began to deteriorate in 1957, when during a round of golf she started to complain of feeling faint and breathless, and by 1960 she was displaying signs of dementia. Her agent George Greenfield recalled that it was "unthinkable" for the "most famous and successful of children's authors with her enormous energy and computer-like memory" to be losing her mind and suffering from what is now known as Alzheimer's disease in her mid-sixties. Blyton's situation was worsened by her husband's declining health throughout the 1960s; he suffered from severe arthritis in his neck and hips and from deafness, and became increasingly ill-tempered and erratic until his death on 15 September 1967. The story of Blyton's life was dramatised in a BBC film entitled "Enid", which aired in the United Kingdom on BBC Four on 16 November 2009. 
Helena Bonham Carter, who played the title role, described Blyton as "a complete workaholic, an achievement junkie and an extremely canny businesswoman" who "knew how to brand herself, right down to the famous signature". During the months following her husband's death Blyton became increasingly ill, and moved into a nursing home three months before her death. She died at the Greenways Nursing Home, Hampstead, North London, on 28 November 1968, aged 71. A memorial service was held at St James's Church, Piccadilly, and she was cremated at Golders Green Crematorium, where her ashes remain. Blyton's home, Green Hedges, was auctioned on 26 May 1971 and demolished in 1973; the site is now occupied by houses and a street named Blyton Close. An English Heritage blue plaque commemorates Blyton at Hook Road in Chessington, where she lived from 1920 to 1924. In 2014 a plaque recording her time as a Beaconsfield resident from 1938 until her death in 1968 was unveiled in the town hall gardens, next to small iron figures of Noddy and Big Ears. Since her death and the publication of her daughter Imogen's 1989 autobiography, "A Childhood at Green Hedges", Blyton has emerged as an emotionally immature, unstable and often malicious figure. Imogen considered her mother to be "arrogant, insecure, pretentious, very skilled at putting difficult or unpleasant things out of her mind, and without a trace of maternal instinct. As a child, I viewed her as a rather strict authority. As an adult I pitied her." Blyton's eldest daughter Gillian remembered her rather differently however, as "a fair and loving mother, and a fascinating companion". The Enid Blyton Trust for Children was established in 1982 with Imogen as its first chairman, and in 1985 it established the National Library for the Handicapped Child. "Enid Blyton's Adventure Magazine" began publication in September 1985, and on 14 October 1992 the BBC began publishing "Noddy Magazine" and released the Noddy CD-Rom in October 1996. The first Enid Blyton Day was held at Rickmansworth on 6 March 1993, and in October 1996 the Enid Blyton award, The Enid, was given to those who have made outstanding contributions towards children. The Enid Blyton Society was formed in early 1995, to provide "a focal point for collectors and enthusiasts of Enid Blyton" through its thrice-annual "Enid Blyton Society Journal", its annual Enid Blyton Day, and its website. On 16 December 1996 Channel 4 broadcast a documentary about Blyton, "Secret Lives". To celebrate her centenary in 1997 exhibitions were put on at the London Toy & Model Museum (now closed), Hereford and Worcester County Museum and Bromley Library, and on 9 September the Royal Mail issued centenary stamps. The London-based entertainment and retail company Trocadero plc purchased Blyton's Darrell Waters Ltd in 1995 for £14.6 million and established a subsidiary, Enid Blyton Ltd, to handle all intellectual properties, character brands and media in Blyton's works. The group changed its name to Chorion in 1998, but after financial difficulties in 2012 sold its assets. Hachette UK acquired from Chorion world rights in the Blyton estate in March 2013, including The Famous Five series but excluding the rights to Noddy, which had been sold to DreamWorks Classics (formerly Classic Media, now a subsidiary of DreamWorks Animation) in 2012. 
Blyton's granddaughter, Sophie Smallwood, wrote a new Noddy book to celebrate the character's 60th birthday, 46 years after the last book was published; "Noddy and the Farmyard Muddle" (2009) was illustrated by Robert Tyndall. In February 2011, the manuscript of a previously unknown Blyton novel, "Mr Tumpy's Caravan", was discovered by the archivist at Seven Stories, National Centre for Children's Books in a collection of papers belonging to Blyton's daughter Gillian, purchased by Seven Stories in 2010 following her death. It was initially thought to belong to a comic strip collection of the same name published in 1949, but it appears to be unrelated and is believed to be something written in the 1930s, which had been rejected by a publisher. In a 1982 survey of 10,000 eleven-year-old children Blyton was voted their most popular writer. She is the world's fourth most-translated author, behind Agatha Christie, Jules Verne and William Shakespeare with her books being translated into 90 languages. From 2000 to 2010, Blyton was listed as a Top Ten author, selling almost 8 million copies (worth £31.2 million) in the UK alone. In 2003 "The Magic Faraway Tree" was voted 66 in the BBC's Big Read. In the 2008 Costa Book Awards, Blyton was voted Britain's best-loved author. Her books continue to be very popular among children in Commonwealth nations such as India, Pakistan, Sri Lanka, Singapore, Malta, New Zealand, and Australia, and around the world. They have also seen a surge of popularity in China, where they are "big with every generation". In March 2004 Chorion and the Chinese publisher Foreign Language Teaching and Research Press negotiated an agreement over the Noddy franchise, which included bringing the character to an animated series on television, with a potential audience of a further 95 million children under the age of five. Chorion spent around £10 million digitising Noddy, and as of 2002 had made television agreements with at least 11 countries worldwide. Novelists influenced by Blyton include the crime writer Denise Danks, whose fictional detective Georgina Powers is based on George from the Famous Five. Peter Hunt's "A Step off the Path" (1985) is also influenced by the Famous Five, and the St. Clare's and Malory Towers series provided the inspiration for Jacqueline Wilson's "Double Act" (1996) and Adèle Geras's Egerton Hall trilogy (1990–92) respectively. Blyton's range of plots and settings has been described as limited and continually recycled. Responding to claims that her moral views were "dependably predictable", Blyton commented that "most of you could write down perfectly correctly all the things that I believe in and stand for – you have found them in my books, and a writer's books are always a faithful reflection of himself". Many of her books were critically assessed by teachers and librarians, deemed unfit for children to read, and removed from syllabuses and public libraries. From the 1930s to the 1950s the BBC operated a "de facto" ban on dramatising Blyton's books for radio, considering her to be a "second-rater" whose work was without literary merit. The children's literary critic Margery Fisher likened Blyton's books to "slow poison", and Jean E. Sutcliffe of the BBC's schools broadcast department wrote of Blyton's ability to churn out "mediocre material", noting that "her capacity to do so amounts to genius ... anyone else would have died of boredom long ago". 
Michael Rosen, Children's Laureate from 2007 until 2009, wrote that "I find myself flinching at occasional bursts of snobbery and the assumed level of privilege of the children and families in the books." The children's author Anne Fine presented an overview of the concerns about Blyton's work and responses to them on BBC Radio 4 in November 2008, in which she noted the "drip, drip, drip of disapproval" associated with the books. Blyton's response to her critics was that she was uninterested in the views of anyone over the age of 12, claiming that half the attacks on her work were motivated by jealousy and the rest came from "stupid people who don't know what they're talking about because they've never read any of my books". Although Blyton's works have been banned from more public libraries than those of any other author, there is no evidence that the popularity of her books ever suffered, and by 1990 she was still described as being very widely read. Although some criticised her in the 1950s for the volume of work she produced, Blyton astutely capitalised on being considered a more "savoury" English alternative to what was seen by contemporaries as an invasion by American culture in the form of Disney and comics. Some librarians felt that Blyton's restricted use of language, a conscious product of her teaching background, was prejudicial to an appreciation of more literary qualities. In a scathing article published in "Encounter" in 1958, the journalist Colin Welch remarked that it was "hard to see how a diet of Miss Blyton could help with the 11-plus or even with the Cambridge English Tripos", but reserved his harshest criticism for Blyton's Noddy, describing him as an "unnaturally priggish ... sanctimonious ... witless, spiritless, snivelling, sneaking doll." The author and educational psychologist Nicholas Tucker notes that it was common to see Blyton cited as people's favourite or least favourite author according to their age, and argues that her books create an "encapsulated world for young readers that simply dissolves with age, leaving behind only memories of excitement and strong identification". Fred Inglis considers Blyton's books to be technically easy to read, but to also be "emotionally and cognitively easy". He mentions that the psychologist Michael Woods believed that Blyton was different from many other older authors writing for children in that she seemed untroubled by presenting them with a world that differed from reality. Woods surmised that Blyton "was a child, she thought as a child, and wrote as a child ... the basic feeling is essentially pre-adolescent ... Enid Blyton has no moral dilemmas ... Inevitably Enid Blyton was labelled by rumour a child-hater. If true, such a fact should come as no surprise to us, for as a child herself all other children can be nothing but rivals for her." Inglis argues though that Blyton was clearly devoted to children and put an enormous amount of energy into her work, with a powerful belief in "representing the crude moral diagrams and garish fantasies of a readership". Blyton's daughter Imogen has stated that she "loved a relationship with children through her books", but real children were an intrusion, and there was no room for intruders in the world that Blyton occupied through her writing. Accusations of racism in Blyton's books were first made by Lena Jeger in a "Guardian" article published in 1966, in which she was critical of Blyton's "The Little Black Doll", published a few months earlier. 
Sambo, the black doll of the title, is hated by his owner and the other toys owing to his "ugly black face", and runs away. A shower of rain washes his face clean, after which he is welcomed back home with his now pink face. Jamaica Kincaid also considers the Noddy books to be "deeply racist" because of the blonde children and the black golliwogs. In Blyton's 1944 novel "The Island of Adventure", a black servant named Jo-Jo is very intelligent, but is particularly cruel to the children. Accusations of xenophobia were also made. As George Greenfield observed, "Enid was very much part of that between-the-wars middle class which believed that foreigners were untrustworthy or funny or sometimes both". The publisher Macmillan conducted an internal assessment of Blyton's "The Mystery That Never Was", submitted to them at the height of her fame in 1960. The review was carried out by the author and books editor Phyllis Hartnoll, in whose view "There is a faint but unattractive touch of old-fashioned xenophobia in the author's attitude to the thieves; they are 'foreign' ... and this seems to be regarded as sufficient to explain their criminality." Macmillan rejected the manuscript, but it was published by William Collins in 1961, and then again in 1965 and 1983. Blyton's depictions of boys and girls are considered by many critics to be sexist. In a "Guardian" article published in 2005 Lucy Mangan proposed that The Famous Five series depicts a power struggle between Julian, Dick and George (Georgina), in which the female characters either act like boys or are talked down to, as when Dick lectures George: "it's really time you gave up thinking you're as good as a boy". To address criticisms levelled at Blyton's work, some later editions have been altered to reflect more liberal attitudes towards issues such as race, gender and the treatment of children; modern reprints of the Noddy series substitute teddy bears or goblins for golliwogs, for instance. The golliwogs who steal Noddy's car and dump him naked in the Dark Wood in "Here Comes Noddy Again" are replaced by goblins in the 1986 revision, who strip Noddy only of his shoes and hat and return at the end of the story to apologise. "The Faraway Tree"'s Dame Slap, who made regular use of corporal punishment, was changed to Dame Snap, who no longer did so, and the names of Dick and Fanny in the same series were changed to Rick and Frannie. Characters in the Malory Towers and St. Clare's series are no longer spanked or threatened with a spanking, but are instead scolded. References to George's short hair making her look like a boy were removed in revisions to "Five on a Hike Together", reflecting the idea that girls need not have long hair to be considered feminine or normal. A line in which Anne of The Famous Five states that boys cannot wear pretty dresses or like girls' dolls was also removed. In "The Adventurous Four", the names of the young twin girls were changed from Jill and Mary to Pippa and Zoe. In 2010 Hodder, the publisher of the Famous Five series, announced its intention to update the language used in the books, of which it sold more than half a million copies a year. The changes, which Hodder described as "subtle", mainly affect the dialogue rather than the narrative. For instance, "school tunic" becomes "uniform", "mother and father" and "mother and daddy" (the latter used by young female characters and deemed sexist) become "mum and dad", "bathing" is replaced by "swimming", and "jersey" by "jumper". 
Some commentators see the changes as necessary to encourage modern readers, whereas others regard them as unnecessary and patronising. In 2016 Hodder's parent company Hachette announced that it would abandon the revisions as, based on feedback, they had not been a success. In 1954 Blyton adapted Noddy for the stage, producing the "Noddy in Toyland" pantomime in just two or three weeks. The production was staged at the 2,660-seat Stoll Theatre in Kingsway, London, at Christmas. Its popularity resulted in the show running during the Christmas season for five or six years. Blyton was delighted with its reception by children in the audience, and attended the theatre three or four times a week. TV adaptations of Noddy since 1954 include one in the 1970s narrated by Richard Briers. In 1955 a stage play based on the Famous Five was produced, and in January 1997 the King's Head Theatre embarked on a six-month tour of the UK with "The Famous Five Musical", to commemorate Blyton's centenary. On 21 November 1998 "The Secret Seven Save the World" was first performed at the Sherman Theatre in Cardiff. There have also been several film and television adaptations of the Famous Five: by the Children's Film Foundation in 1957 and 1964, Southern Television in 1978–79, and Zenith Productions in 1995–97. The series was also adapted for the German film "Fünf Freunde", directed by Mike Marzuk and released in 2011. The Comic Strip, a group of British comedians, produced two extreme parodies of the Famous Five for Channel 4 television: "Five Go Mad in Dorset", broadcast in 1982, and "Five Go Mad on Mescalin", broadcast the following year. A third in the series, "Five Go to Rehab", was broadcast on Sky in 2012. Blyton's "The Faraway Tree" series of books has also been adapted to television and film. On 27 October 1997 the BBC began broadcasting an animated series called "The Enchanted Lands", based on the series. It was announced in October 2014 that a deal had been signed with publishers Hachette for "The Faraway Tree" series to be adapted into a live-action film by director Sam Mendes' production company. Marlene Johnson, head of children's books at Hachette, said: "Enid Blyton was a passionate advocate of children's storytelling, and The Magic Faraway Tree is a fantastic example of her creative imagination." Seven Stories, the National Centre for Children's Books in Newcastle upon Tyne, holds the largest public collection of Blyton's papers and typescripts. The Seven Stories collection contains a significant number of Blyton's typescripts, including the previously unpublished novel, "Mr Tumpy's Caravan", as well as personal papers and diaries. The purchase of the material in 2010 was made possible by special funding from the Heritage Lottery Fund, the MLA/V&A Purchase Grant Fund, and two private donations. Enrico Fermi Enrico Fermi (29 September 1901 – 28 November 1954) was an Italian and naturalized American physicist and the creator of the world's first nuclear reactor, the Chicago Pile-1. He has been called the "architect of the nuclear age" and the "architect of the atomic bomb". He was one of the very few physicists in history to excel both theoretically and experimentally. Fermi held several patents related to the use of nuclear power, and was awarded the 1938 Nobel Prize in Physics for his work on induced radioactivity by neutron bombardment and the discovery of transuranic elements. 
He made significant contributions to the development of quantum theory, nuclear and particle physics, and statistical mechanics. Fermi's first major contribution was to statistical mechanics. After Wolfgang Pauli announced his exclusion principle in 1925, Fermi followed with a paper in which he applied the principle to an ideal gas, employing a statistical formulation now known as Fermi–Dirac statistics. Today, particles that obey the exclusion principle are called "fermions". Later Pauli postulated the existence of an uncharged invisible particle emitted along with an electron during beta decay, to satisfy the law of conservation of energy. Fermi took up this idea, developing a model that incorporated the postulated particle, which he named the "neutrino". His theory, later referred to as Fermi's interaction and still later as weak interaction, described one of the four fundamental forces of nature. Through experiments inducing radioactivity with recently discovered neutrons, Fermi discovered that slow neutrons were more easily captured than fast ones, and developed the Fermi age equation to describe this. After bombarding thorium and uranium with slow neutrons, he concluded that he had created new elements; although he was awarded the Nobel Prize for this discovery, the new elements were subsequently revealed to be fission products. Fermi left Italy in 1938 to escape new Italian Racial Laws that affected his Jewish wife, Laura Capon. He emigrated to the United States, where he worked on the Manhattan Project during World War II. Fermi led the team that designed and built Chicago Pile-1, which went critical on 2 December 1942, demonstrating the first artificial self-sustaining nuclear chain reaction. He was on hand when the X-10 Graphite Reactor at Oak Ridge, Tennessee, went critical in 1943, and when the B Reactor at the Hanford Site did so the next year. At Los Alamos he headed F Division, part of which worked on Edward Teller's thermonuclear "Super" bomb. He was present at the Trinity test on 16 July 1945, where he used his Fermi method to estimate the bomb's yield. After the war, Fermi served under J. Robert Oppenheimer on the General Advisory Committee, which advised the Atomic Energy Commission on nuclear matters and policy. Following the detonation of the first Soviet fission bomb in August 1949, he strongly opposed the development of a hydrogen bomb on both moral and technical grounds. He was among the scientists who testified on Oppenheimer's behalf at the 1954 hearing that resulted in the denial of the latter's security clearance. Fermi did important work in particle physics, especially related to pions and muons, and he speculated that cosmic rays arose through material being accelerated by magnetic fields in interstellar space. Many awards, concepts, and institutions are named after Fermi, including the Enrico Fermi Award, the Enrico Fermi Institute, the Fermi National Accelerator Laboratory, the Fermi Gamma-ray Space Telescope, the Enrico Fermi Nuclear Generating Station, and the synthetic element fermium, making him one of 16 scientists who have elements named after them. Enrico Fermi was born in Rome, Italy, on 29 September 1901. He was the third child of Alberto Fermi, a division head in the Ministry of Railways, and Ida de Gattis, an elementary school teacher. His sister, Maria, was two years older than him, and his brother Giulio was a year older. 
After the two boys were sent to a rural community to be wet-nursed, Enrico rejoined his family in Rome when he was two and a half. Although he was baptised a Roman Catholic in accordance with his grandparents' wishes, his family was not particularly religious; Enrico was an agnostic throughout his adult life. As a young boy he shared the same interests as his brother Giulio, building electric motors and playing with electrical and mechanical toys. Giulio died during the administration of an anesthetic for an operation on a throat abscess in 1915. Maria died in an airplane crash near Milan in 1959. One of Fermi's first sources for his study of physics was a book he found at the local market at "Campo de' Fiori" in Rome. Published in 1840, the 900-page "Elementorum physicae mathematicae" was written in Latin by the Jesuit Father Andrea Caraffa, a professor at the Collegio Romano. It covered mathematics, classical mechanics, astronomy, optics, and acoustics, insofar as these disciplines were understood when the book was written. Fermi befriended another scientifically inclined student, Enrico Persico, and together the two worked on scientific projects such as building gyroscopes and trying to accurately measure the acceleration of Earth's gravity. Fermi's interest in physics was further encouraged by his father's colleague Adolfo Amidei, who gave him several books on physics and mathematics, which he read and assimilated quickly. Fermi graduated from high school in July 1918 and, at Amidei's urging, applied to the "Scuola Normale Superiore" in Pisa. Having lost one son, his parents were reluctant to let him move away from home for four years while attending it, but in the end they acquiesced. The school provided free lodging for students, but candidates had to take a difficult entrance exam that included an essay. The given theme was "Specific characteristics of Sounds". The 17-year-old Fermi chose to derive and solve the partial differential equation for a vibrating rod, applying Fourier analysis in the solution. The examiner, Professor Giulio Pittarelli from the Sapienza University of Rome, interviewed Fermi and praised him, saying that he would become an outstanding physicist in the future. Fermi achieved first place in the classification of the entrance exam. During his years at the "Scuola Normale Superiore", Fermi teamed up with a fellow student named Franco Rasetti, with whom he would indulge in light-hearted pranks and who would later become Fermi's close friend and collaborator. In Pisa, Fermi was advised by the director of the physics laboratory, Luigi Puccianti, who acknowledged that there was little that he could teach Fermi, and frequently asked Fermi to teach him something instead. Fermi's knowledge of quantum physics reached such a high level that Puccianti asked him to organize seminars on the topic. During this time Fermi learned tensor calculus, a mathematical technique invented by Gregorio Ricci and Tullio Levi-Civita that was needed to demonstrate the principles of general relativity. Fermi initially chose mathematics as his major, but soon switched to physics. He remained largely self-taught, studying general relativity, quantum mechanics, and atomic physics. In September 1920, Fermi was admitted to the Physics department. Since there were only three students in the department—Fermi, Rasetti, and Nello Carrara—Puccianti let them freely use the laboratory for whatever purposes they chose. 
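For readers who want the mathematics behind the entrance-exam anecdote above, a sketch in standard textbook form (the Euler–Bernoulli equation for a vibrating rod, not a quotation from Fermi's exam script) is

$$ EI\,\frac{\partial^{4} u}{\partial x^{4}} + \rho A\,\frac{\partial^{2} u}{\partial t^{2}} = 0, $$

where $u(x,t)$ is the transverse displacement, $E$ is Young's modulus, $I$ the second moment of area of the cross-section, $\rho$ the density and $A$ the cross-sectional area. Separating variables and expanding in the rod's normal modes gives a Fourier-type solution $u(x,t) = \sum_n X_n(x)\,(a_n \cos\omega_n t + b_n \sin\omega_n t)$, with the coefficients fixed by the rod's initial shape and velocity; this is the kind of Fourier analysis the passage describes.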
Fermi decided that they should research X-ray crystallography, and the three worked to produce a Laue photograph—an X-ray photograph of a crystal. During 1921, his third year at the university, Fermi published his first scientific works in the Italian journal "Nuovo Cimento". The first was entitled "On the dynamics of a rigid system of electrical charges in translational motion". A sign of things to come was that the mass was expressed as a tensor—a mathematical construct commonly used to describe something moving and changing in three-dimensional space. In classical mechanics, mass is a scalar quantity, but in relativity it changes with velocity. The second paper was "On the electrostatics of a uniform gravitational field of electromagnetic charges and on the weight of electromagnetic charges". Using general relativity, Fermi showed that a charge has a weight equal to U/c², where U is the electrostatic energy of the system and c is the speed of light. The first paper seemed to point out a contradiction between the electrodynamic theory and the relativistic one concerning the calculation of the electromagnetic masses, as the former predicted a value of 4/3 U/c². Fermi addressed this the next year in a paper "Concerning a contradiction between the electrodynamic and the relativistic theory of electromagnetic mass", in which he showed that the apparent contradiction was a consequence of relativity. This paper was sufficiently well-regarded that it was translated into German and published in the German scientific journal "Physikalische Zeitschrift" in 1922. That year, Fermi submitted his article "On the phenomena occurring near a world line" to the Italian journal "I Rendiconti dell'Accademia dei Lincei". In this article he examined the Principle of Equivalence, and introduced the so-called "Fermi coordinates". He proved that on a world line close to the time line, space behaves as if it were a Euclidean space. Fermi submitted his thesis, "A theorem on probability and some of its applications", to the "Scuola Normale Superiore" in July 1922, and received his laurea at the unusually young age of 20. The thesis was on X-ray diffraction images. Theoretical physics was not yet considered a discipline in Italy, and the only thesis that would have been accepted was one on experimental physics. For this reason, Italian physicists were slow to embrace new ideas such as relativity coming from Germany. Since Fermi was quite at home in the lab doing experimental work, this did not pose insurmountable problems for him. While writing the appendix for the Italian edition of the book "Fundamentals of Einstein Relativity" by August Kopff in 1923, Fermi was the first to point out that hidden inside the famous Einstein equation (E = mc²) was an enormous amount of nuclear potential energy to be exploited. "It does not seem possible, at least in the near future", he wrote, "to find a way to release these dreadful amounts of energy—which is all to the good because the first effect of an explosion of such a dreadful amount of energy would be to smash into smithereens the physicist who had the misfortune to find a way to do it." In 1924 Fermi was initiated into Freemasonry in the Masonic Lodge "Adriano Lemmi" of the Grand Orient of Italy. Fermi decided to travel abroad, and spent a semester studying under Max Born at the University of Göttingen, where he met Werner Heisenberg and Pascual Jordan. 
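To make the electromagnetic-mass discrepancy mentioned above explicit, here is the comparison in modern notation (not Fermi's own). Relativity assigns the electrostatic field energy $U$ of a charge an inertia

$$ m_{\mathrm{rel}} = \frac{U}{c^{2}}, $$

whereas the classical electromagnetic self-energy calculation gives

$$ m_{\mathrm{em}} = \frac{4}{3}\,\frac{U}{c^{2}}. $$

As the passage notes, Fermi argued that the extra factor of 4/3 is an artefact of how the classical calculation is set up, so the apparent contradiction disappears when the problem is treated consistently within relativity.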
Fermi then studied in Leiden with Paul Ehrenfest from September to December 1924 on a fellowship from the Rockefeller Foundation obtained through the intercession of the mathematician Vito Volterra. Here Fermi met Hendrik Lorentz and Albert Einstein, and became good friends with Samuel Goudsmit and Jan Tinbergen. From January 1925 to late 1926, Fermi taught mathematical physics and theoretical mechanics at the University of Florence, where he teamed up with Rasetti to conduct a series of experiments on the effects of magnetic fields on mercury vapour. He also participated in seminars at the Sapienza University of Rome, giving lectures on quantum mechanics and solid state physics. When lecturing on the new quantum mechanics, Fermi would often remark on the remarkable accuracy of the Schrödinger equation's predictions, saying, "It has no business to fit so well!" After Wolfgang Pauli announced his exclusion principle in 1925, Fermi responded with a paper "On the quantisation of the perfect monoatomic gas", in which he applied the exclusion principle to an ideal gas. The paper was especially notable for Fermi's statistical formulation, which describes the distribution of particles in systems of many identical particles that obey the exclusion principle. This was independently developed soon after by the British physicist Paul Dirac, who also showed how it was related to the Bose–Einstein statistics. Accordingly, it is now known as Fermi–Dirac statistics. Following Dirac, particles that obey the exclusion principle are today called "fermions", while those that do not are called "bosons". Professorships in Italy were granted by competition for a vacant chair, the applicants being rated on their publications by a committee of professors. Fermi applied for a chair of mathematical physics at the University of Cagliari on Sardinia, but was narrowly passed over in favour of Giovanni Giorgi. In 1926, at the age of 24, he applied for a professorship at the Sapienza University of Rome. This was a new chair, one of the first three in theoretical physics in Italy, that had been created by the Minister of Education at the urging of Professor Orso Mario Corbino, who was the University's professor of experimental physics, the Director of the Institute of Physics, and a member of Benito Mussolini's cabinet. Corbino, who also chaired the selection committee, hoped that the new chair would raise the standard and reputation of physics in Italy. The committee chose Fermi ahead of Enrico Persico and Aldo Pontremoli, and Corbino helped Fermi recruit his team, which was soon joined by notable students such as Edoardo Amaldi, Bruno Pontecorvo, Ettore Majorana and Emilio Segrè, and by Franco Rasetti, whom Fermi had appointed as his assistant. They were soon nicknamed the "Via Panisperna boys" after the street where the Institute of Physics was located. Fermi married Laura Capon, a science student at the University, on 19 July 1928. They had two children: Nella, born in January 1931, and Giulio, born in February 1936. On 18 March 1929, Fermi was appointed a member of the Royal Academy of Italy by Mussolini, and on 27 April he joined the Fascist Party. He later opposed Fascism when the 1938 racial laws were promulgated by Mussolini in order to bring Italian Fascism ideologically closer to German National Socialism. These laws threatened Laura, who was Jewish, and put many of Fermi's research assistants out of work. 
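The statistical formulation mentioned above is, in its standard modern form rather than the notation of Fermi's 1926 paper, the Fermi–Dirac occupation number for a single-particle state of energy ε:

```latex
% Mean occupation of a single-particle state of energy \epsilon for identical
% particles obeying the exclusion principle (standard modern form).
\[
  \bar{n}(\epsilon) \;=\; \frac{1}{e^{(\epsilon - \mu)/k_{\mathrm B} T} + 1},
\]
% where \mu is the chemical potential, k_B is Boltzmann's constant and T the
% temperature; the "+1" in the denominator (a "-1" would give Bose-Einstein
% statistics) caps the occupation at one particle per state.
```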
During their time in Rome, Fermi and his group made important contributions to many practical and theoretical aspects of physics. In 1928, he published his "Introduction to Atomic Physics", which provided Italian university students with an up-to-date and accessible text. Fermi also conducted public lectures and wrote popular articles for scientists and teachers in order to spread knowledge of the new physics as widely as possible. Part of his teaching method was to gather his colleagues and graduate students together at the end of the day and go over a problem, often from his own research. A sign of success was that foreign students now began to come to Italy. The most notable of these was the German physicist Hans Bethe, who came to Rome as a Rockefeller Foundation fellow, and collaborated with Fermi on a 1932 paper, "On the Interaction between Two Electrons". At this time, physicists were puzzled by beta decay, in which an electron was emitted from the atomic nucleus. To satisfy the law of conservation of energy, Pauli postulated the existence of an invisible particle with no charge and little or no mass that was also emitted at the same time. Fermi took up this idea, which he developed in a tentative paper in 1933, and then a longer paper the next year that incorporated the postulated particle, which Fermi called a "neutrino". His theory, later referred to as Fermi's interaction, and still later as the theory of the weak interaction, described one of the four fundamental forces of nature. The neutrino was detected after his death, and his interaction theory showed why it was so difficult to detect. When he submitted his paper to the British journal "Nature", that journal's editor turned it down because it contained speculations which were "too remote from physical reality to be of interest to readers". Thus Fermi saw the theory published in Italian and German before it was published in English. In the introduction to his 1968 English translation, the physicist Fred L. Wilson noted the theory's lasting significance. In January 1934, Irène Joliot-Curie and Frédéric Joliot announced that they had bombarded elements with alpha particles and induced radioactivity in them. By March, Fermi's assistant Gian-Carlo Wick had provided a theoretical explanation using Fermi's theory of beta decay. Fermi decided to switch to experimental physics, using the neutron, which James Chadwick had discovered in 1932. In March 1934, Fermi wanted to see if he could induce radioactivity with Rasetti's polonium-beryllium neutron source. Neutrons had no electric charge, and so would not be deflected by the positively charged nucleus. This meant that they needed much less energy to penetrate the nucleus than charged particles, and so would not require a particle accelerator, which the Via Panisperna boys did not have. Fermi decided to replace the polonium-beryllium neutron source with a radon-beryllium one, which he created by filling a glass bulb with beryllium powder, evacuating the air, and then adding 50 mCi of radon gas, supplied by Giulio Cesare Trabacchi. This created a much stronger neutron source, the effectiveness of which declined with the 3.8-day half-life of radon. He knew that this source would also emit gamma rays, but, on the basis of his theory, he believed that this would not affect the results of the experiment. He started by bombarding platinum, an element with a high atomic number that was readily available, without success. 
He turned to aluminium, which emitted an alpha particle and produced sodium, which then decayed into magnesium by beta particle emission. He tried lead, without success, and then fluorine in the form of calcium fluoride, which emitted an alpha particle and produced nitrogen, decaying into oxygen by beta particle emission. In all, he induced radioactivity in 22 different elements. Fermi rapidly reported the discovery of neutron-induced radioactivity in the Italian journal "La Ricerca Scientifica" on 25 March 1934. The natural radioactivity of thorium and uranium made it hard to determine what was happening when these elements were bombarded with neutrons but, after correctly eliminating the presence of elements lighter than uranium but heavier than lead, Fermi concluded that they had created new elements, which he called hesperium and ausonium. The chemist Ida Noddack criticised this work, suggesting that some of the experiments could have produced lighter elements than lead rather than new, heavier elements. Her suggestion was not taken seriously at the time because her team had not carried out any experiments with uranium, and its claim to have discovered masurium (technetium) was disputed. At that time, fission was thought to be improbable if not impossible on theoretical grounds. While physicists expected elements with higher atomic numbers to form from neutron bombardment of lighter elements, nobody expected neutrons to have enough energy to split a heavier atom into two light element fragments in the manner that Noddack suggested. The Via Panisperna boys also noticed some unexplained effects. The experiment seemed to work better on a wooden table than a marble table top. Fermi remembered that Joliot-Curie and Chadwick had noted that paraffin wax was effective at slowing neutrons, so he decided to try that. When neutrons were passed through paraffin wax, they induced a hundred times as much radioactivity in silver compared with when it was bombarded without the paraffin. Fermi guessed that this was due to the hydrogen atoms in the paraffin. Those in wood similarly explained the difference between the wooden and the marble table tops. This was confirmed by repeating the effect with water. He concluded that collisions with hydrogen atoms slowed the neutrons. The lower the atomic number of the nucleus it collides with, the more energy a neutron loses per collision, and therefore the fewer collisions that are required to slow a neutron down by a given amount. Fermi realised that this induced more radioactivity because slow neutrons were more easily captured than fast ones. He developed a diffusion equation to describe this, which became known as the Fermi age equation. In 1938 Fermi received the Nobel Prize in Physics at the age of 37 for his "demonstrations of the existence of new radioactive elements produced by neutron irradiation, and for his related discovery of nuclear reactions brought about by slow neutrons". After Fermi received the prize in Stockholm, he did not return home to Italy, but rather continued to New York City with his family in December 1938, where they applied for permanent residency. The decision to move to America and become U.S. citizens was due primarily to the racial laws in Italy. Fermi arrived in New York City on 2 January 1939. He was immediately offered posts at five universities, and accepted a post at Columbia University, where he had already given summer lectures in 1936. 
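The reasoning about slow neutrons can be made concrete with the standard textbook quantity ξ, the average logarithmic energy loss per elastic collision. The sketch below uses conventional illustrative energies (a 2 MeV fission neutron slowed to a 0.025 eV thermal neutron); these are textbook round numbers, not figures taken from Fermi's 1934 work.

```python
import math

def xi(A):
    """Average logarithmic energy loss per elastic collision
    with a nucleus of mass number A (standard textbook formula)."""
    if A == 1:
        return 1.0                                   # hydrogen, the limiting case
    alpha = ((A - 1) / (A + 1)) ** 2
    return 1 + alpha * math.log(alpha) / (1 - alpha)

def collisions_to_thermalise(A, e_start=2.0e6, e_end=0.025):
    """Average number of collisions to slow a neutron from e_start to e_end (eV)."""
    return math.log(e_start / e_end) / xi(A)

for name, A in [("hydrogen (paraffin, water)", 1), ("carbon (graphite)", 12), ("uranium", 238)]:
    print(f"{name:28s} ~{collisions_to_thermalise(A):6.0f} collisions")
# Hydrogen needs roughly 18 collisions, carbon roughly 115, uranium over 2,000:
# the lighter the nucleus, the faster a neutron is slowed, which is why paraffin
# (and the hydrogen in a wooden table top) made such a difference.
```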
He received the news that in December 1938, the German chemists Otto Hahn and Fritz Strassmann had detected the element barium after bombarding uranium with neutrons, which Lise Meitner and her nephew Otto Frisch correctly interpreted as the result of nuclear fission. Frisch confirmed this experimentally on 13 January 1939. The news of Meitner and Frisch's interpretation of Hahn and Strassmann's discovery crossed the Atlantic with Niels Bohr, who was to lecture at Princeton University. Isidor Isaac Rabi and Willis Lamb, two Columbia University physicists working at Princeton, found out about it and carried it back to Columbia. Rabi said he told Fermi, but Fermi later gave the credit to Lamb. Noddack was proven right after all. Fermi had dismissed the possibility of fission on the basis of his calculations, but he had not taken into account the binding energy that would appear when a nuclide with an odd number of neutrons absorbed an extra neutron. For Fermi, the news came as a profound embarrassment, as the transuranic elements that he had partly been awarded the Nobel Prize for discovering had not been transuranic elements at all, but fission products. He added a footnote to this effect to his Nobel Prize acceptance speech. The scientists at Columbia decided that they should try to detect the energy released in the nuclear fission of uranium when bombarded by neutrons. On 25 January 1939, in the basement of Pupin Hall at Columbia, an experimental team including Fermi conducted the first nuclear fission experiment in the United States. The other members of the team were Herbert L. Anderson, Eugene T. Booth, John R. Dunning, G. Norris Glasoe, and Francis G. Slack. The next day, the Fifth Washington Conference on Theoretical Physics began in Washington, D.C. under the joint auspices of George Washington University and the Carnegie Institution of Washington. There, the news on nuclear fission was spread even further, fostering many more experimental demonstrations. The French scientists Hans von Halban, Lew Kowarski, and Frédéric Joliot-Curie had demonstrated that uranium bombarded by neutrons emitted more neutrons than it absorbed, suggesting the possibility of a chain reaction; Fermi and Anderson did so too a few weeks later. Leó Szilárd obtained a supply of uranium oxide from the Canadian radium producer Eldorado Gold Mines Limited, allowing Fermi and Anderson to conduct experiments with fission on a much larger scale. Fermi and Szilárd collaborated on a design of a device to achieve a self-sustaining nuclear reaction—a nuclear reactor. Owing to the rate of absorption of neutrons by the hydrogen in water, it was unlikely that a self-sustaining reaction could be achieved with natural uranium and water as a neutron moderator. Fermi suggested, based on his work with neutrons, that the reaction could be achieved with uranium oxide blocks and graphite as a moderator instead of water. This would reduce the neutron capture rate, and in theory make a self-sustaining chain reaction possible. Szilárd came up with a workable design: a pile of uranium oxide blocks interspersed with graphite bricks. Szilárd, Anderson, and Fermi published a paper on "Neutron Production in Uranium". But their work habits and personalities were different, and Fermi had trouble working with Szilárd. Fermi was among the first to warn military leaders about the potential impact of nuclear energy, giving a lecture on the subject at the Navy Department on 18 March 1939. 
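The argument about moderators and neutron capture comes down to the effective multiplication factor k, the average number of neutrons from one fission that go on to cause another. The sketch below is purely illustrative; the starting population and the k values are arbitrary round numbers, not data from the Columbia experiments.

```python
# Illustrative-only sketch of why k, the effective neutron multiplication factor,
# decides everything: each fission generation grows or shrinks geometrically.
def neutron_population(k, start=1000, generations=20):
    """Neutron count per fission generation for a given multiplication factor k."""
    n = float(start)
    history = []
    for _ in range(generations):
        history.append(round(n))
        n *= k
    return history

print("k = 0.99 (subcritical):  ", neutron_population(0.99)[-1])  # dwindles away
print("k = 1.00 (critical):     ", neutron_population(1.00)[-1])  # self-sustaining
print("k = 1.01 (supercritical):", neutron_population(1.01)[-1])  # keeps growing
# Excess capture of neutrons (for example by the hydrogen in water) pushes k
# below 1, which is why a graphite-moderated uranium-oxide lattice was proposed.
```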
The response fell short of what he had hoped for, although the Navy agreed to provide $1,500 towards further research at Columbia. Later that year, Szilárd, Eugene Wigner, and Edward Teller sent the famous letter signed by Einstein to U.S. President Roosevelt, warning that Nazi Germany was likely to build an atomic bomb. In response, Roosevelt formed the Advisory Committee on Uranium to investigate the matter. The Advisory Committee on Uranium provided money for Fermi to buy graphite, and he built a pile of graphite bricks on the seventh floor of the Pupin Hall laboratory. By August 1941, he had six tons of uranium oxide and thirty tons of graphite, which he used to build a still larger pile in Schermerhorn Hall at Columbia. The S-1 Section of the Office of Scientific Research and Development, as the Advisory Committee on Uranium was now known, met on 18 December 1941, with the U.S. now engaged in World War II, making its work urgent. Most of the effort sponsored by the Committee had been directed at producing enriched uranium, but Committee member Arthur Compton determined that a feasible alternative was plutonium, which could be mass-produced in nuclear reactors by the end of 1944. He decided to concentrate the plutonium work at the University of Chicago. Fermi reluctantly moved, and his team became part of the new Metallurgical Laboratory there. The possible results of a self-sustaining nuclear reaction were unknown, so it seemed inadvisable to build the first nuclear reactor on the University of Chicago campus in the middle of the city. Compton found a location in the Argonne Woods Forest Preserve, southwest of Chicago. Stone & Webster was contracted to develop the site, but the work was halted by an industrial dispute. Fermi then persuaded Compton that he could build the reactor in the squash court under the stands of the university's Stagg Field. Construction of the pile began on 6 November 1942, and Chicago Pile-1 went critical on 2 December. The shape of the pile was intended to be roughly spherical, but as work proceeded Fermi calculated that criticality could be achieved without finishing the entire pile as planned. This experiment was a landmark in the quest for energy, and it was typical of Fermi's approach. Every step was carefully planned, every calculation meticulously done. When the first self-sustained nuclear chain reaction was achieved, Compton made a coded phone call to James B. Conant, the chairman of the National Defense Research Committee. To continue the research where it would not pose a public health hazard, the reactor was disassembled and moved to the Argonne Woods site. There Fermi directed experiments on nuclear reactions, revelling in the opportunities provided by the reactor's abundant production of free neutrons. The laboratory soon branched out from physics and engineering into using the reactor for biological and medical research. Initially, Argonne was run by Fermi as part of the University of Chicago, but it became a separate entity with Fermi as its director in May 1944. When the air-cooled X-10 Graphite Reactor at Oak Ridge went critical on 4 November 1943, Fermi was on hand just in case something went wrong. The technicians woke him early so that he could see it happen. Getting X-10 operational was another milestone in the plutonium project. It provided data on reactor design, training for DuPont staff in reactor operation, and produced the first small quantities of reactor-bred plutonium. 
Fermi became an American citizen in July 1944, the earliest date the law allowed. In September 1944, Fermi inserted the first uranium fuel slug into the B Reactor at the Hanford Site, the production reactor designed to breed plutonium in large quantities. Like X-10, it had been designed by Fermi's team at the Metallurgical Laboratory, and built by DuPont, but it was much larger, and was water-cooled. Over the next few days, 838 tubes were loaded, and the reactor went critical. Shortly after midnight on 27 September, the operators began to withdraw the control rods to initiate production. At first all appeared to be well, but around 03:00, the power level started to drop and by 06:30 the reactor had shut down completely. The Army and DuPont turned to Fermi's team for answers. The cooling water was investigated to see if there was a leak or contamination. The next day the reactor suddenly started up again, only to shut down once more a few hours later. The problem was traced to neutron poisoning from xenon-135, a fission product with a half-life of 9.2 hours. Fortunately, DuPont had deviated from the Metallurgical Laboratory's original design in which the reactor had 1,500 tubes arranged in a circle, and had added 504 tubes to fill in the corners. The scientists had originally considered this over-engineering a waste of time and money, but Fermi realized that if all 2,004 tubes were loaded, the reactor could reach the required power level and efficiently produce plutonium. In mid-1944, Robert Oppenheimer persuaded Fermi to join his Project Y at Los Alamos, New Mexico. Arriving in September, Fermi was appointed an associate director of the laboratory, with broad responsibility for nuclear and theoretical physics, and was placed in charge of F Division, which was named after him. F Division had four branches: F-1 Super and General Theory under Teller, which investigated the "Super" (thermonuclear) bomb; F-2 Water Boiler under L. D. P. King, which looked after the "water boiler" aqueous homogeneous research reactor; F-3 Super Experimentation under Egon Bretscher; and F-4 Fission Studies under Anderson. Fermi observed the Trinity test on 16 July 1945, and conducted an experiment to estimate the bomb's yield by dropping strips of paper into the blast wave. He paced off the distance they were blown by the explosion, and calculated the yield as ten kilotons of TNT; the actual yield was about 18.6 kilotons. Along with Oppenheimer, Compton, and Ernest Lawrence, Fermi was part of the scientific panel that advised the Interim Committee on target selection. The panel agreed with the committee that atomic bombs would be used without warning against an industrial target. Like others at the Los Alamos Laboratory, Fermi found out about the atomic bombings of Hiroshima and Nagasaki from the public address system in the technical area. Fermi did not believe that atomic bombs would deter nations from starting wars, nor did he think that the time was ripe for world government. He therefore did not join the Association of Los Alamos Scientists. Fermi became the Charles H. Swift Distinguished Professor of Physics at the University of Chicago on 1 July 1945, although he did not depart the Los Alamos Laboratory with his family until 31 December 1945. He was elected a member of the U.S. National Academy of Sciences in 1945. The Metallurgical Laboratory became the Argonne National Laboratory on 1 July 1946, the first of the national laboratories established by the Manhattan Project. 
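The xenon-135 episode can be illustrated with a simple decay calculation using the 9.2-hour half-life quoted above; the sketch ignores the continued production of xenon-135 from iodine-135 after shutdown, so it is indicative only, not a model of the B Reactor.

```python
import math

HALF_LIFE_H = 9.2                     # xenon-135 half-life in hours (as quoted above)
LAMBDA = math.log(2) / HALF_LIFE_H    # decay constant per hour

def fraction_remaining(hours):
    """Fraction of an initial xenon-135 inventory left after `hours`
    (radioactive decay only, ignoring further production from iodine-135)."""
    return math.exp(-LAMBDA * hours)

for t in (6, 12, 24):
    print(f"after {t:2d} h: {fraction_remaining(t):.0%} of the xenon-135 remains")
# With the reactor shut down, the neutron poison decays away over roughly a day,
# consistent with the pile unexpectedly coming back to life the next day.
```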
The short distance between Chicago and Argonne allowed Fermi to work at both places. At Argonne he continued experimental physics, investigating neutron scattering with Leona Marshall. He also discussed theoretical physics with Maria Mayer, helping her develop insights into spin–orbit coupling that would lead to her receiving the Nobel Prize. The Manhattan Project was replaced by the Atomic Energy Commission (AEC) on 1 January 1947. Fermi served on the AEC General Advisory Committee, an influential scientific committee chaired by Robert Oppenheimer. He also liked to spend a few weeks of each year at the Los Alamos National Laboratory, where he collaborated with Nicholas Metropolis, and with John von Neumann on Rayleigh–Taylor instability, the science of what occurs at the border between two fluids of different densities. Following the detonation of the first Soviet fission bomb in August 1949, Fermi, along with Isidor Rabi, wrote a strongly worded report for the committee, opposing the development of a hydrogen bomb on moral and technical grounds. Nonetheless, Fermi continued to participate in work on the hydrogen bomb at Los Alamos as a consultant. Along with Stanislaw Ulam, he calculated that not only would the amount of tritium needed for Teller's model of a thermonuclear weapon be prohibitive, but a fusion reaction could still not be assured to propagate even with this large quantity of tritium. Fermi was among the scientists who testified on Oppenheimer's behalf at the 1954 security hearing that resulted in the denial of Oppenheimer's security clearance. In his later years, Fermi continued teaching at the University of Chicago. His PhD students in the post-war period included Owen Chamberlain, Geoffrey Chew, Jerome Friedman, Marvin Goldberger, Tsung-Dao Lee, Arthur Rosenfeld and Sam Treiman. Jack Steinberger was a graduate student. Fermi conducted important research in particle physics, especially related to pions and muons. He made the first predictions of pion-nucleon resonance, relying on statistical methods, since he reasoned that exact answers were not required when the theory was wrong anyway. In a paper co-authored with Chen Ning Yang, he speculated that pions might actually be composite particles. The idea was elaborated by Shoichi Sakata. It has since been supplanted by the quark model, in which the pion is a composite of quarks, vindicating the approach of treating it as a composite particle. Fermi wrote a paper "On the Origin of Cosmic Radiation" in which he proposed that cosmic rays arose through material being accelerated by magnetic fields in interstellar space, which led to a difference of opinion with Teller. Fermi examined the issues surrounding magnetic fields in the arms of a spiral galaxy. He mused about what is now referred to as the "Fermi paradox": the contradiction between the presumed probability of the existence of extraterrestrial life and the fact that contact has not been made. Toward the end of his life, Fermi questioned his faith in society at large to make wise choices about nuclear technology. Fermi underwent an exploratory operation in Billings Memorial Hospital on 9 October 1954, after which he returned home. Fifty days later, he died of stomach cancer at the age of 53 in his home in Chicago, and was interred at Oak Woods Cemetery. 
Fermi received numerous awards in recognition of his achievements, including the Matteucci Medal in 1926, the Nobel Prize for Physics in 1938, the Hughes Medal in 1942, the Franklin Medal in 1947, and the Rumford Prize in 1953. He was awarded the Medal for Merit in 1946 for his contribution to the Manhattan Project. Fermi was elected a Foreign Member of the Royal Society (FRS) in 1950. The Basilica of Santa Croce, Florence, known as the "Temple of Italian Glories" for its many graves of artists, scientists and prominent figures in Italian history, has a plaque commemorating Fermi. In 1999, "Time" named Fermi on its list of the top 100 persons of the twentieth century. Fermi was widely regarded as an unusual case of a 20th-century physicist who excelled both theoretically and experimentally. The historian of physics, C. P. Snow, wrote that "if Fermi had been born a few years earlier, one could well imagine him discovering Rutherford's atomic nucleus, and then developing Bohr's theory of the hydrogen atom. If this sounds like hyperbole, anything about Fermi is likely to sound like hyperbole". Fermi was known as an inspiring teacher, and was noted for his attention to detail, simplicity, and careful preparation of his lectures. Later, his lecture notes were transcribed into books. His papers and notebooks are today in the University of Chicago. Victor Weisskopf noted how Fermi "always managed to find the simplest and most direct approach, with the minimum of complication and sophistication." Fermi's ability and success stemmed as much from his appraisal of the art of the possible, as from his innate skill and intelligence. He disliked complicated theories, and while he had great mathematical ability, he would never use it when the job could be done much more simply. He was famous for getting quick and accurate answers to problems that would stump other people. Later on, his method of getting approximate and quick answers through back-of-the-envelope calculations became informally known as the "Fermi method", and is widely taught. Fermi was fond of pointing out that Alessandro Volta, working in his laboratory, could have had no idea where the study of electricity would lead. Fermi is generally remembered for his work on nuclear power and nuclear weapons, especially the creation of the first nuclear reactor, and the development of the first atomic and hydrogen bombs. His scientific work has stood the test of time. This includes his theory of beta decay, his work with non-linear systems, his discovery of the effects of slow neutrons, his study of pion-nucleon collisions, and his Fermi–Dirac statistics. His speculation that a pion was not a fundamental particle pointed the way towards the study of quarks and leptons. Many things bear Fermi's name. These include the Fermilab particle accelerator and physics lab in Batavia, Illinois, which was renamed in his honor in 1974, and the Fermi Gamma-ray Space Telescope, which was named after him in 2008, in recognition of his work on cosmic rays. Three nuclear reactor installations have been named after him: the Fermi 1 and Fermi 2 nuclear power plants in Newport, Michigan, the Enrico Fermi Nuclear Power Plant at Trino Vercellese in Italy, and the RA-1 Enrico Fermi research reactor in Argentina. A synthetic element isolated from the debris of the 1952 Ivy Mike nuclear test was named fermium, in honor of Fermi's contributions to the scientific community. This makes him one of 16 scientists who have elements named after them. 
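The "Fermi method" mentioned above is easiest to see in the classic classroom exercise of estimating the number of piano tuners in Chicago. Every number in the sketch below is an assumed round figure chosen for illustration, not a measured quantity:

```python
# Classic "Fermi problem" sketch; every input is a deliberately rough, assumed
# round number. The point is that multiplying defensible guesses lands within
# an order of magnitude of the truth.
population = 3_000_000            # people in Chicago (assumed)
people_per_household = 3          # assumed household size
piano_fraction = 0.10             # assumed share of households owning a piano
tunings_per_piano_per_year = 1    # assumed
tunings_per_tuner_per_day = 4     # assumed
working_days_per_year = 250       # assumed

pianos = population / people_per_household * piano_fraction
tunings_needed_per_year = pianos * tunings_per_piano_per_year
tunings_per_tuner_per_year = tunings_per_tuner_per_day * working_days_per_year

estimate = tunings_needed_per_year / tunings_per_tuner_per_year
print(f"Estimated piano tuners in Chicago: ~{estimate:.0f}")
# Roughly 100: the right order of magnitude, which is all a Fermi estimate promises.
```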
Since 1956, the United States Atomic Energy Commission has named its highest honor, the Fermi Award, after him. Recipients of the award include well-known scientists like Otto Hahn, Robert Oppenheimer, Edward Teller and Hans Bethe. Edward Elgar Sir Edward William Elgar, 1st Baronet (2 June 1857 – 23 February 1934), was an English composer, many of whose works have entered the British and international classical concert repertoire. Among his best-known compositions are orchestral works including the "Enigma Variations", the "Pomp and Circumstance Marches", concertos for violin and cello, and two symphonies. He also composed choral works, including "The Dream of Gerontius", chamber music and songs. He was appointed Master of the King's Musick in 1924. Although Elgar is often regarded as a typically English composer, most of his musical influences were not from England but from continental Europe. He felt himself to be an outsider, not only musically, but socially. In musical circles dominated by academics, he was a self-taught composer; in Protestant Britain, his Roman Catholicism was regarded with suspicion in some quarters; and in the class-conscious society of Victorian and Edwardian Britain, he was acutely sensitive about his humble origins even after he achieved recognition. He nevertheless married the daughter of a senior British army officer. She inspired him both musically and socially, but he struggled to achieve success until his forties, when after a series of moderately successful works his "Enigma Variations" (1899) became immediately popular in Britain and overseas. He followed the Variations with a choral work, "The Dream of Gerontius" (1900), based on a Roman Catholic text that caused some disquiet in the Anglican establishment in Britain, but it became, and has remained, a core repertory work in Britain and elsewhere. His later full-length religious choral works were well received but have not entered the regular repertory. In his fifties, Elgar composed a symphony and a violin concerto that were immensely successful. His second symphony and his cello concerto did not gain immediate public popularity and took many years to achieve a regular place in the concert repertory of British orchestras. Elgar's music came, in his later years, to be seen as appealing chiefly to British audiences. His stock remained low for a generation after his death. It began to revive significantly in the 1960s, helped by new recordings of his works. Some of his works have, in recent years, been taken up again internationally, but the music continues to be played more in Britain than elsewhere. Elgar has been described as the first composer to take the gramophone seriously. Between 1914 and 1925, he conducted a series of acoustic recordings of his works. The introduction of the moving-coil microphone in 1923 made far more accurate sound reproduction possible, and Elgar made new recordings of most of his major orchestral works and excerpts from "The Dream of Gerontius". Edward Elgar was born in the small village of Lower Broadheath, outside Worcester, England. His father, William Henry Elgar (1821–1906), was raised in Dover and had been apprenticed to a London music publisher. In 1841 William moved to Worcester, where he worked as a piano tuner and set up a shop selling sheet music and musical instruments. In 1848 he married Ann Greening (1822–1902), daughter of a farm worker. Edward was the fourth of their seven children. 
Ann Elgar had converted to Roman Catholicism shortly before Edward's birth, and he was baptised and brought up as a Roman Catholic, to the disapproval of his father. William Elgar was a violinist of professional standard and held the post of organist of St. George's Roman Catholic Church, Worcester, from 1846 to 1885. At his instigation, masses by Cherubini and Hummel were first heard at the Three Choirs Festival by the orchestra in which he played the violin. All the Elgar children received a musical upbringing. By the age of eight, Elgar was taking piano and violin lessons, and his father, who tuned the pianos at many grand houses in Worcestershire, would sometimes take him along, giving him the chance to display his skill to important local figures. Elgar's mother was interested in the arts and encouraged his musical development. He inherited from her a discerning taste for literature and a passionate love of the countryside. His friend and biographer W. H. "Billy" Reed wrote that Elgar's early surroundings had an influence that "permeated all his work and gave to his whole life that subtle but none the less true and sturdy English quality". He began composing at an early age; for a play written and acted by the Elgar children when he was about ten, he wrote music that forty years later he rearranged with only minor changes and orchestrated as the suites titled "The Wand of Youth". Until he was fifteen, Elgar received a general education at Littleton (now Lyttleton) House school, near Worcester. However, his only formal musical training beyond piano and violin lessons from local teachers consisted of more advanced violin studies with Adolf Pollitzer, during brief visits to London in 1877–78. Elgar said, "my first music was learnt in the Cathedral ... from books borrowed from the music library, when I was eight, nine or ten." He worked through manuals of instruction on organ playing and read every book he could find on the theory of music. He later said that he had been most helped by Hubert Parry's articles in the "Grove Dictionary of Music and Musicians". Elgar began to learn German, in the hope of going to the Leipzig Conservatory for further musical studies, but his father could not afford to send him. Years later, a profile in "The Musical Times" considered that his failure to get to Leipzig was fortunate for Elgar's musical development: "Thus the budding composer escaped the dogmatism of the schools." However, it was a disappointment to Elgar that on leaving school in 1872 he went not to Leipzig but to the office of a local solicitor as a clerk. He did not find an office career congenial, and for fulfilment he turned not only to music but to literature, becoming a voracious reader. Around this time, he made his first public appearances as a violinist and organist. After a few months, Elgar left the solicitor to embark on a musical career, giving piano and violin lessons and working occasionally in his father's shop. He was an active member of the Worcester Glee club, along with his father, and he accompanied singers, played the violin, composed and arranged works, and conducted for the first time. Pollitzer believed that, as a violinist, Elgar had the potential to be one of the leading soloists in the country, but Elgar himself, having heard leading virtuosi at London concerts, felt his own violin playing lacked a full enough tone, and he abandoned his ambitions to be a soloist. 
At twenty-two he took up the post of conductor of the attendants' band at the Worcester and County Lunatic Asylum in Powick, a few miles from Worcester. The band consisted of piccolo, flute, clarinet, two cornets, euphonium, three or four first and a similar number of second violins, occasional viola, cello, double bass and piano. Elgar coached the players and wrote and arranged their music, including quadrilles and polkas, for the unusual combination of instruments. "The Musical Times" wrote, "This practical experience proved to be of the greatest value to the young musician. ... He acquired a practical knowledge of the capabilities of these different instruments. ... He thereby got to know intimately the tone colour, the ins and outs of these and many other instruments." He held the post for five years, from 1879, travelling to Powick once a week. Another post he held in his early days was professor of the violin at the Worcester College for the Blind Sons of Gentlemen. Although rather solitary and introspective by nature, Elgar thrived in Worcester's musical circles. He played in the violins at the Worcester and Birmingham Festivals, and one great experience was to play Dvořák's Symphony No. 6 and "Stabat Mater" under the composer's baton. Elgar regularly played the bassoon in a wind quintet, alongside his brother Frank, an oboist (and conductor who ran his own wind band). Elgar arranged numerous pieces by Mozart, Beethoven, Haydn, and others for the quintet, honing his arranging and compositional skills. On his first trips abroad, Elgar visited Paris in 1880 and Leipzig in 1882. He heard Saint-Saëns play the organ at the Madeleine and attended concerts by first-rate orchestras. In 1882 he wrote, "I got pretty well dosed with Schumann (my ideal!), Brahms, Rubinstein & Wagner, so had no cause to complain." In Leipzig he visited a friend, Helen Weaver, who was a student at the Conservatoire. They became engaged in the summer of 1883, but for unknown reasons the engagement was broken off the next year. Elgar was greatly distressed, and some of his later cryptic dedications of romantic music may have alluded to Helen and his feelings for her. Throughout his life, Elgar was often inspired by close women friends; Helen Weaver was succeeded by Mary Lygon, Dora Penny, Julia Worthington, Alice Stuart Wortley and finally Vera Hockman, who enlivened his old age. In 1882, seeking more professional orchestral experience, Elgar was employed to play violin in Birmingham with William Stockley's Orchestra, with which he played at every concert for the next seven years and where he later claimed he "learned all the music I know". On 13 December 1883 he took part with Stockley in a performance at Birmingham Town Hall of one of his first works for full orchestra, the "Sérénade mauresque" – the first time one of his compositions had been performed by a professional orchestra. Stockley had invited him to conduct the piece but later recalled "he declined, and, further, insisted upon playing in his place in the orchestra. The consequence was that he had to appear, fiddle in hand, to acknowledge the genuine and hearty applause of the audience." Elgar often went to London in an attempt to get his works published, but this period in his life found him frequently despondent and low on money. He wrote to a friend in April 1884, "My prospects are about as hopeless as ever ... I am not wanting in energy I think, so sometimes I conclude that 'tis want of ability. ... I have no money – not a cent." 
When Elgar was 29, he took on a new pupil, Caroline Alice Roberts, daughter of the late Major-General Sir Henry Roberts, and published author of verse and prose fiction. Eight years older than Elgar, Alice became his wife three years later. Elgar's biographer Michael Kennedy writes, "Alice's family was horrified by her intention to marry an unknown musician who worked in a shop and was a Roman Catholic. She was disinherited." They were married on 8 May 1889, at Brompton Oratory. From then until her death, she acted as his business manager and social secretary, dealt with his mood swings, and was a perceptive musical critic. She did her best to gain him the attention of influential society, though with limited success. In time, he would learn to accept the honours given him, realising that they mattered more to her and her social class and recognising what she had given up to further his career. In her diary, she wrote, "The care of a genius is enough of a life work for any woman." As an engagement present, Elgar dedicated his short violin-and-piano piece "Salut d'Amour" to her. With Alice's encouragement, the Elgars moved to London to be closer to the centre of British musical life, and Elgar started devoting his time to composition. Their only child, Carice Irene, was born at their home in West Kensington on 14 August 1890. Her name, revealed in Elgar's dedication of "Salut d'Amour", was a contraction of her mother's names Caroline and Alice. Elgar took full advantage of the opportunity to hear unfamiliar music. In the days before miniature scores and recordings were available, it was not easy for young composers to get to know new music. Elgar took every chance to do so at the Crystal Palace concerts. He and Alice attended day after day, hearing music by a wide range of composers. Among these were masters of orchestration from whom he learned much, such as Berlioz and Richard Wagner. His own compositions, however, made little impact on London's musical scene. August Manns conducted Elgar's orchestral version of "Salut d'amour" and the Suite in D at the Crystal Palace, and two publishers accepted some of Elgar's violin pieces, organ voluntaries, and part songs. Some tantalising opportunities seemed to be within reach but vanished unexpectedly. For example, an offer from the Royal Opera House, Covent Garden, to run through some of his works was withdrawn at the last second when Sir Arthur Sullivan arrived unannounced to rehearse some of his own music. Sullivan was horrified when Elgar later told him what had happened. Elgar's only important commission while in London came from his home city: the Worcester Festival Committee invited him to compose a short orchestral work for the 1890 Three Choirs Festival. The result is described by Diana McVeagh in the "Grove Dictionary of Music and Musicians", as "his first major work, the assured and uninhibited "Froissart"." Elgar conducted the first performance in Worcester in September 1890. For lack of other work, he was obliged to leave London in 1891 and return with his wife and child to Worcestershire, where he could earn a living conducting local musical ensembles and teaching. They settled in Alice's former home town, Great Malvern. During the 1890s, Elgar gradually built up a reputation as a composer, chiefly of works for the great choral festivals of the English Midlands. 
"The Black Knight" (1892) and "King Olaf" (1896), both inspired by Longfellow, "The Light of Life" (1896) and "Caractacus" (1898) were all modestly successful, and he obtained a long-standing publisher in Novello and Co. Other works of this decade included the "Serenade for Strings" (1892) and "Three Bavarian Dances" (1897). Elgar was of enough consequence locally to recommend the young composer Samuel Coleridge-Taylor to the Three Choirs Festival for a concert piece, which helped establish the younger man's career. Elgar was catching the attention of prominent critics, but their reviews were polite rather than enthusiastic. Although he was in demand as a festival composer, he was only just getting by financially and felt unappreciated. In 1898, he said he was "very sick at heart over music" and hoped to find a way to succeed with a larger work. His friend August Jaeger tried to lift his spirits: "A day's attack of the blues ... will not drive away your desire, your necessity, which is to exercise those creative faculties which a kind providence has given you. Your time of universal recognition will come." In 1899, that prediction suddenly came true. At the age of forty-two, Elgar produced the "Enigma Variations", which were premiered in London under the baton of the eminent German conductor Hans Richter. In Elgar's own words, "I have sketched a set of Variations on an original theme. The Variations have amused me because I've labelled them with the nicknames of my particular friends ... that is to say I've written the variations each one to represent the mood of the 'party' (the person) ... and have written what I think they would have written – if they were asses enough to compose". He dedicated the work "To my friends pictured within". Probably the best known variation is "Nimrod", depicting Jaeger. Purely musical considerations led Elgar to omit variations depicting Arthur Sullivan and Hubert Parry, whose styles he tried but failed to incorporate in the variations. The large-scale work was received with general acclaim for its originality, charm and craftsmanship, and it established Elgar as the pre-eminent British composer of his generation. The work is formally titled "Variations on an Original Theme"; the word "Enigma" appears over the first six bars of music, which led to the familiar version of the title. The enigma is that, although there are fourteen variations on the "original theme", there is another overarching theme, never identified by Elgar, which he said "runs through and over the whole set" but is never heard. Later commentators have observed that although Elgar is today regarded as a characteristically English composer, his orchestral music and this work in particular share much with the Central European tradition typified at the time by the work of Richard Strauss. The "Enigma Variations" were well received in Germany and Italy, and remain to the present day a worldwide concert staple. Elgar's biographer Basil Maine commented, "When Sir Arthur Sullivan died in 1900 it became apparent to many that Elgar, although a composer of another build, was his true successor as first musician of the land." Elgar's next major work was eagerly awaited. For the Birmingham Triennial Music Festival of 1900, he set Cardinal John Henry Newman's poem "The Dream of Gerontius" for soloists, chorus and orchestra. Richter conducted the premiere, which was marred by a poorly prepared chorus, which sang badly. Critics recognised the mastery of the piece despite the defects in performance. 
It was performed in Düsseldorf, Germany, in 1901 and again in 1902, conducted by Julius Buths, who also conducted the European premiere of the "Enigma Variations" in 1901. The German press was enthusiastic. "The Cologne Gazette" said, "In both parts we meet with beauties of imperishable value. ... Elgar stands on the shoulders of Berlioz, Wagner, and Liszt, from whose influences he has freed himself until he has become an important individuality. He is one of the leaders of musical art of modern times." "The Düsseldorfer Volksblatt" wrote, "A memorable and epoch-making first performance! Since the days of Liszt nothing has been produced in the way of oratorio ... which reaches the greatness and importance of this sacred cantata." Richard Strauss, then widely viewed as the leading composer of his day, was so impressed that in Elgar's presence he proposed a toast to the success of "the first English progressive musician, Meister Elgar." Performances in Vienna, Paris and New York followed, and "The Dream of Gerontius" soon became equally admired in Britain. According to Kennedy, "It is unquestionably the greatest British work in the oratorio form ... [it] opened a new chapter in the English choral tradition and liberated it from its Handelian preoccupation." Elgar, as a Roman Catholic, was much moved by Newman's poem about the death and redemption of a sinner, but some influential members of the Anglican establishment disagreed. His colleague Charles Villiers Stanford complained that the work "stinks of incense". The Dean of Gloucester banned "Gerontius" from his cathedral in 1901, and at Worcester the following year, the Dean insisted on expurgations before allowing a performance. Elgar is probably best known for the first of the five "Pomp and Circumstance Marches", which were composed between 1901 and 1930. It is familiar to the millions of television viewers around the world who watch the Last Night of the Proms each year, where it is traditionally performed. When the theme of the slower middle section (technically called the "trio") of the first march came into his head, he told his friend Dora Penny, "I've got a tune that will knock 'em – will knock 'em flat". When the first march was played in 1901 at a London Promenade Concert, it was conducted by Henry J. Wood, who later wrote that the audience "rose and yelled ... the one and only time in the history of the Promenade concerts that an orchestral item was accorded a double encore." To mark the coronation of Edward VII, Elgar was commissioned to set A. C. Benson's "Coronation Ode" for a gala concert at the Royal Opera House in June 1902. The approval of the king was confirmed, and Elgar began work. The contralto Clara Butt had persuaded him that the trio of the first "Pomp and Circumstance" march could have words fitted to it, and Elgar invited Benson to do so. Elgar incorporated the new vocal version into the Ode. The publishers of the score recognised the potential of the vocal piece, "Land of Hope and Glory", and asked Benson and Elgar to make a further revision for publication as a separate song. It was immensely popular and is now considered an unofficial British national anthem. In the United States, the trio, known simply as "Pomp and Circumstance" or "The Graduation March", has been adopted since 1905 for virtually all high school and university graduations. In March 1904 a three-day festival of Elgar's works was presented at Covent Garden, an honour never before given to any English composer. 
"The Times" commented, "Four or five years ago if any one had predicted that the Opera-house would be full from floor to ceiling for the performance of an oratorio by an English composer he would probably have been supposed to be out of his mind." The king and queen attended the first concert, at which Richter conducted "The Dream of Gerontius", and returned the next evening for the second, the London premiere of "The Apostles" (first heard the previous year at the Birmingham Festival). The final concert of the festival, conducted by Elgar, was primarily orchestral, apart for an excerpt from "Caractacus" and the complete "Sea Pictures" (sung by Clara Butt). The orchestral items were "Froissart", the "Enigma Variations", "Cockaigne", the first two (at that time the only two) "Pomp and Circumstance" marches, and the premiere of a new orchestral work, "In the South", inspired by a holiday in Italy. Elgar was knighted at Buckingham Palace on 5 July 1904. The following month, he and his family moved to Plâs Gwyn, a large house on the outskirts of Hereford, overlooking the River Wye, where they lived until 1911. Between 1902 and 1914, Elgar was, in Kennedy's words, at the zenith of popularity. He made four visits to the US, including one conducting tour, and earned considerable fees from the performance of his music. Between 1905 and 1908, he held the post of Peyton Professor of Music at the University of Birmingham. He had accepted the post reluctantly, feeling that a composer should not head a school of music. He was not at ease in the role, and his lectures caused controversy, with his attacks on the critics and on English music in general: "Vulgarity in the course of time may be refined. Vulgarity often goes with inventiveness ... but the commonplace mind can never be anything but commonplace. An Englishman will take you into a large room, beautifully proportioned, and will point out to you that it is white – all over white – and somebody will say, 'What exquisite taste'. You know in your own mind, in your own soul, that it is not taste at all, that it is the want of taste, that is mere evasion. English music is white, and evades everything." He regretted the controversy and was glad to hand on the post to his friend Granville Bantock in 1908. His new life as a celebrity was a mixed blessing to the highly strung Elgar, as it interrupted his privacy, and he often was in ill-health. He complained to Jaeger in 1903, "My life is one continual giving up of little things which I love." Both W. S. Gilbert and Thomas Hardy sought to collaborate with Elgar in this decade. Elgar refused, but would have collaborated with George Bernard Shaw had Shaw been willing. Elgar's principal composition in 1905 was the "Introduction and Allegro for Strings", dedicated to Samuel Sanford, professor at Yale University. Elgar visited America in that year to conduct his music and to accept a doctorate from Yale. His next large-scale work was the sequel to "The Apostles" – the oratorio "The Kingdom" (1906). It was well received but did not catch the public imagination as "The Dream of Gerontius" had done and continued to do. Among keen Elgarians, however, "The Kingdom" was sometimes preferred to the earlier work: Elgar's friend Frank Schuster told the young Adrian Boult: "compared with "The Kingdom", "Gerontius" is the work of a raw amateur." As Elgar approached his fiftieth birthday, he began work on his first symphony, a project that had been in his mind in various forms for nearly ten years. 
His First Symphony (1908) was a national and international triumph. Within weeks of the premiere it was performed in New York under Walter Damrosch, Vienna under Ferdinand Löwe, St. Petersburg under Alexander Siloti, and Leipzig under Arthur Nikisch. There were performances in Rome, Chicago, Boston, Toronto and fifteen British towns and cities. In just over a year, it received a hundred performances in Britain, America and continental Europe. The Violin Concerto (1910) was commissioned by Fritz Kreisler, one of the leading international violinists of the time. Elgar wrote it during the summer of 1910, with occasional help from W. H. Reed, the leader of the London Symphony Orchestra, who helped the composer with advice on technical points. Elgar and Reed formed a firm friendship, which lasted for the rest of Elgar's life. Reed's biography, "Elgar As I Knew Him" (1936), records many details of Elgar's methods of composition. The work was presented by the Royal Philharmonic Society, with Kreisler and the London Symphony Orchestra, conducted by the composer. Reed recalled, "the Concerto proved to be a complete triumph, the concert a brilliant and unforgettable occasion." So great was the impact of the concerto that Kreisler's rival Eugène Ysaÿe spent much time with Elgar going through the work. There was great disappointment when contractual difficulties prevented Ysaÿe from playing it in London. The Violin Concerto was Elgar's last popular triumph. The following year he presented his Second Symphony in London, but was disappointed at its reception. Unlike the First Symphony, it ends not in a blaze of orchestral splendour but quietly and contemplatively. Reed, who played at the premiere, later wrote that Elgar was recalled to the platform several times to acknowledge the applause, "but missed that unmistakable note perceived when an audience, even an English audience, is thoroughly roused or worked up, as it was after the Violin Concerto or the First Symphony." Elgar asked Reed, "What is the matter with them, Billy? They sit there like a lot of stuffed pigs." The work was, by normal standards, a success, with twenty-seven performances within three years of its premiere, but it did not achieve the international "furore" of the First Symphony. In June 1911, as part of the celebrations surrounding the coronation of King George V, Elgar was appointed to the Order of Merit, an honour limited to twenty-four holders at any time. The following year, the Elgars moved back to London, to a large house in Netherhall Gardens, Hampstead, designed by Norman Shaw. There Elgar composed his last two large-scale works of the pre-war era, the choral ode, "The Music Makers" (for the Birmingham Festival, 1912) and the symphonic study "Falstaff" (for the Leeds Festival, 1913). Both were received politely but without enthusiasm. Even the dedicatee of "Falstaff", the conductor Landon Ronald, confessed privately that he could not "make head or tail of the piece," while the musical scholar Percy Scholes wrote of "Falstaff" that it was a "great work" but, "so far as public appreciation goes, a comparative failure." When World War I broke out, Elgar was horrified at the prospect of the carnage, but his patriotic feelings were nonetheless aroused. He composed "A Song for Soldiers", which he later withdrew. He signed up as a special constable in the local police and later joined the Hampstead Volunteer Reserve of the army. 
He composed patriotic works, "Carillon", a recitation for speaker and orchestra in honour of Belgium, and "Polonia", an orchestral piece in honour of Poland. "Land of Hope and Glory", already popular, became still more so, and Elgar wished in vain to have new, less nationalistic, words sung to the tune. Elgar's other compositions during the war included incidental music for a children's play, "The Starlight Express" (1915); a ballet, "The Sanguine Fan" (1917); and "The Spirit of England" (1915–17, to poems by Laurence Binyon), three choral settings very different in character from the romantic patriotism of his earlier years. His last large-scale composition of the war years was "The Fringes of the Fleet", settings of verses by Rudyard Kipling, performed with great popular success around the country, until Kipling for unexplained reasons objected to their performance in theatres. Elgar conducted a recording of the work for the Gramophone Company. Towards the end of the war, Elgar was in poor health. His wife thought it best for him to move to the countryside, and she rented 'Brinkwells', a house near Fittleworth in Sussex, from the painter Rex Vicat Cole. There Elgar recovered his strength and, in 1918 and 1919, he produced four large-scale works. The first three of these were chamber pieces: the Violin Sonata in E minor, the Piano Quintet in A minor, and the String Quartet in E minor. On hearing the work in progress, Alice Elgar wrote in her diary, "E. writing wonderful new music". All three works were well received. "The Times" wrote, "Elgar's sonata contains much that we have heard before in other forms, but as we do not at all want him to change and be somebody else, that is as it should be." The quartet and quintet were premiered at the Wigmore Hall on 21 May 1919. "The Manchester Guardian" wrote, "This quartet, with its tremendous climaxes, curious refinements of dance-rhythms, and its perfect symmetry, and the quintet, more lyrical and passionate, are as perfect examples of chamber music as the great oratorios were of their type." By contrast, the remaining work, the Cello Concerto in E minor, had a disastrous premiere, at the opening concert of the London Symphony Orchestra's 1919–20 season in October 1919. Apart from the Elgar work, which the composer conducted, the rest of the programme was conducted by Albert Coates, who overran his rehearsal time at the expense of Elgar's. Lady Elgar wrote, "that brutal selfish ill-mannered bounder ... that brute Coates went on rehearsing." The critic of "The Observer", Ernest Newman, wrote, "There have been rumours about during the week of inadequate rehearsal. Whatever the explanation, the sad fact remains that never, in all probability, has so great an orchestra made so lamentable an exhibition of itself. ... The work itself is lovely stuff, very simple – that pregnant simplicity that has come upon Elgar's music in the last couple of years – but with a profound wisdom and beauty underlying its simplicity." Elgar attached no blame to his soloist, Felix Salmond, who played for him again later, including at the inaugural concert of the City of Birmingham Orchestra (later City of Birmingham Symphony Orchestra), which Elgar conducted. In contrast with the First Symphony and its hundred performances in just over a year, the Cello Concerto did not have a second performance in London for more than a year. Although in the 1920s Elgar's music was no longer in fashion, his admirers continued to present his works when possible. 
Reed singles out a performance of the Second Symphony in March 1920 conducted by "a young man almost unknown to the public", Adrian Boult, for bringing "the grandeur and nobility of the work" to a wider public. Also in 1920, Landon Ronald presented an all-Elgar concert at the Queen's Hall. Alice Elgar wrote with enthusiasm about the reception of the symphony, but this was one of the last times she heard Elgar's music played in public. After a short illness, she died of lung cancer on 7 April 1920, at the age of seventy-two. Elgar was devastated by the loss of his wife. With no public demand for new works, and deprived of Alice's constant support and inspiration, he allowed himself to be deflected from composition. His daughter later wrote that Elgar inherited from his father a reluctance to "settle down to work on hand but could cheerfully spend hours over some perfectly unnecessary and entirely unremunerative undertaking", a trait that became stronger after Alice's death. For much of the rest of his life, Elgar indulged himself in his several hobbies. Throughout his life he was a keen amateur chemist, sometimes using a laboratory in his back garden. He even patented the "Elgar Sulphuretted Hydrogen Apparatus" in 1908. He enjoyed football, supporting Wolverhampton Wanderers F.C., for whom he composed an anthem, "He Banged the Leather for Goal", and in his later years he frequently attended horse races. His protégés, the conductor Malcolm Sargent and the violinist Yehudi Menuhin, both recalled rehearsals with Elgar at which he swiftly satisfied himself that all was well and then went off to the races. In his younger days, Elgar had been an enthusiastic cyclist, buying Royal Sunbeam bicycles for himself and his wife in 1903 (he named his "Mr. Phoebus"). As an elderly widower, he enjoyed being driven about the countryside by his chauffeur. In November and December 1923, he took a voyage to Brazil, journeying up the Amazon to Manaus, where he was impressed by its opera house, the Teatro Amazonas. Almost nothing is recorded about Elgar's activities or the events that he encountered during the trip, which gave the novelist James Hamilton-Paterson considerable latitude when writing "Gerontius", a fictional account of the journey. After Alice's death, Elgar sold the Hampstead house, and after living for a short time in a flat in St James's in the heart of London, he moved back to Worcestershire, to the village of Kempsey, where he lived from 1923 to 1927. He did not wholly abandon composition in these years. He made large-scale symphonic arrangements of works by Bach and Handel and wrote his "Empire March" and the eight songs of "Pageant of Empire" for the 1924 British Empire Exhibition. Shortly after these were published, he was appointed Master of the King's Musick on 13 May 1924, following the death of Sir Walter Parratt. From 1926 onwards, Elgar made a series of recordings of his own works. Described by the music writer Robert Philip as "the first composer to take the gramophone seriously", he had already recorded much of his music by the early acoustic-recording process for His Master's Voice (HMV) from 1914 onwards, but the introduction of electrical microphones in 1925 transformed the gramophone from a novelty into a realistic medium for reproducing orchestral and choral music. Elgar was the first composer to take full advantage of this technological advance. 
Fred Gaisberg of HMV, who produced Elgar's recordings, set up a series of sessions to capture on disc the composer's interpretations of his major orchestral works, including the "Enigma Variations", "Falstaff", the first and second symphonies, and the cello and violin concertos. For most of these, the orchestra was the LSO, but the "Variations" were played by the Royal Albert Hall Orchestra. Later in the series of recordings, Elgar also conducted two newly founded orchestras, Boult's BBC Symphony Orchestra and Sir Thomas Beecham's London Philharmonic Orchestra. Elgar's recordings were released on 78-rpm discs by both HMV and RCA Victor. After World War II, the 1932 recording of the Violin Concerto with the teenage Menuhin as soloist remained available on 78 and later on LP, but the other recordings were out of the catalogues for some years. When they were reissued by EMI on LP in the 1970s, they caused surprise to many by their fast tempi, in contrast to the slower speeds adopted by many conductors in the years since Elgar's death. The recordings were reissued on CD in the 1990s. In November 1931, Elgar was filmed by Pathé for a newsreel depicting a recording session of "Pomp and Circumstance March No. 1" at the opening of EMI's Abbey Road Studios in London. It is believed to be the only surviving sound film of Elgar, who makes a brief remark before conducting the London Symphony Orchestra, asking the musicians to "play this tune as though you've never heard it before." A memorial plaque to Elgar at Abbey Road was unveiled on 24 June 1993. A late piece of Elgar's, the "Nursery Suite", was an early example of a studio premiere: its first performance was in the Abbey Road studios. For this work, dedicated to the wife and daughters of the Duke of York, Elgar once again drew on his youthful sketch-books. In his final years, Elgar experienced a musical revival. The BBC organised a festival of his works to celebrate his seventy-fifth birthday, in 1932. He flew to Paris in 1933 to conduct the Violin Concerto for Menuhin. While in France, he visited his fellow composer Frederick Delius at his house at Grez-sur-Loing. He was sought out by younger musicians such as Adrian Boult, Malcolm Sargent and John Barbirolli, who championed his music when it was out of fashion. He began work on an opera, "The Spanish Lady", and accepted a commission from the BBC to compose a Third Symphony. His final illness, however, prevented their completion. He fretted about the unfinished works. He asked Reed to ensure that nobody would "tinker" with the sketches and attempt a completion of the symphony, but at other times he said, "If I can't complete the Third Symphony, somebody will complete it – or write a better one." After Elgar's death, Percy M. Young, in co-operation with the BBC and Elgar's daughter Carice, produced a version of "The Spanish Lady", which was issued on CD. The Third Symphony sketches were elaborated by the composer Anthony Payne into a complete score in 1998. Inoperable colorectal cancer was discovered during an operation on 8 October 1933. He told his consulting doctor, Arthur Thomson, that he had no faith in an afterlife: "I believe there is nothing but complete oblivion." Elgar died on 23 February 1934 at the age of seventy-six and was buried next to his wife at St. Wulstan's Roman Catholic Church in Little Malvern. Elgar was contemptuous of folk music and had little interest in or respect for the early English composers, calling William Byrd and his contemporaries "museum pieces". 
Of later English composers, he regarded Purcell as the greatest, and he said that he had learned much of his own technique from studying Hubert Parry's writings. The continental composers who most influenced Elgar were Handel, Dvořák and, to some degree, Brahms. In Elgar's chromaticism, the influence of Wagner is apparent, but Elgar's individual style of orchestration owes much to the clarity of nineteenth-century French composers, Berlioz, Massenet, Saint-Saëns and, particularly, Delibes, whose music Elgar played and conducted at Worcester and greatly admired. Elgar began composing when still a child, and all his life he drew on his early sketchbooks for themes and inspiration. The habit of assembling his compositions, even large-scale ones, from scraps of themes jotted down randomly remained throughout his life. His early adult works included violin and piano pieces, music for the wind quintet in which he and his brother played between 1878 and 1881, and music of many types for the Powick Asylum band. Diana McVeagh in "Grove's Dictionary" finds many embryonic Elgarian touches in these pieces, but few of them are regularly played, except "Salut d'Amour" and (as arranged decades later into "The Wand of Youth" Suites) some of the childhood sketches. Elgar's sole work of note during his first spell in London in 1889–91, the overture "Froissart", was a romantic-bravura piece, influenced by Mendelssohn and Wagner, but also showing further Elgarian characteristics. Orchestral works composed during the subsequent years in Worcestershire include the "Serenade for Strings" and "Three Bavarian Dances". In this period and later, Elgar wrote songs and partsongs. W. H. Reed expressed reservations about these pieces, but praised the partsong "The Snow", for female voices, and "Sea Pictures", a cycle of five songs for contralto and orchestra which remains in the repertory. Elgar's principal large-scale early works were for chorus and orchestra for the Three Choirs and other festivals. These were "The Black Knight", "King Olaf", "The Light of Life", "The Banner of St George" and "Caractacus". He also wrote a "Te Deum" and "Benedictus" for the Hereford Festival. Of these, McVeagh comments favourably on his lavish orchestration and innovative use of leitmotifs, but less favourably on the qualities of his chosen texts and the patchiness of his inspiration. McVeagh makes the point that, because these works of the 1890s were for many years little known (and performances remain rare), the mastery of his first great success, the "Enigma Variations", appeared to be a sudden transformation from mediocrity to genius, but in fact his orchestral skills had been building up throughout the decade. Elgar's best-known works were composed within the twenty-one years between 1899 and 1920. Most of them are orchestral. Reed wrote, "Elgar's genius rose to its greatest height in his orchestral works" and quoted the composer as saying that, even in his oratorios, the orchestral part is the most important. The "Enigma Variations" made Elgar's name nationally. The variation form was ideal for him at this stage of his career, when his comprehensive mastery of orchestration was still in contrast to his tendency to write his melodies in short, sometimes rigid, phrases. His next orchestral works, "Cockaigne", a concert-overture (1900–1901), the first two "Pomp and Circumstance" marches (1901), and the gentle "Dream Children" (1902), are all short: the longest of them, "Cockaigne", lasting less than fifteen minutes. 
"In the South" (1903–1904), although designated by Elgar as a concert-overture, is, according to Kennedy, really a tone poem and the longest continuous piece of purely orchestral writing Elgar had essayed. He wrote it after setting aside an early attempt to compose a symphony. The work reveals his continuing progress in writing sustained themes and orchestral lines, although some critics, including Kennedy, find that in the middle part "Elgar's inspiration burns at less than its brightest." In 1905 Elgar completed the "Introduction and Allegro for Strings". This work is based, unlike much of Elgar's earlier writing, not on a profusion of themes but on only three. Kennedy called it a "masterly composition, equalled among English works for strings only by Vaughan Williams's "Tallis Fantasia"." Nevertheless, at less than a quarter of an hour, it was not by contemporary standards a lengthy composition. Gustav Mahler's Seventh Symphony, composed at the same time, runs for well over an hour. During the next four years, however, Elgar composed three major concert pieces, which, though shorter than comparable works by some of his European contemporaries, are among the most substantial such works by an English composer. These were his First Symphony, Violin Concerto, and Second Symphony, which all play for between forty-five minutes and an hour. McVeagh says of the symphonies that they "rank high not only in Elgar's output but in English musical history. Both are long and powerful, without published programmes, only hints and quotations to indicate some inward drama from which they derive their vitality and eloquence. Both are based on classical form but differ from it to the extent that ... they were considered prolix and slackly constructed by some critics. Certainly the invention in them is copious; each symphony would need several dozen music examples to chart its progress." Elgar's Violin Concerto and Cello Concerto, in the view of Kennedy, "rank not only among his finest works, but among the greatest of their kind". They are, however, very different from each other. The Violin Concerto, composed in 1909 as Elgar reached the height of his popularity, and written for the instrument dearest to his heart, is lyrical throughout and rhapsodical and brilliant by turns. The Cello Concerto, composed a decade later, immediately after World War I, seems, in Kennedy's words, "to belong to another age, another world ... the simplest of all Elgar's major works ... also the least grandiloquent." Between the two concertos came Elgar's symphonic study "Falstaff", which has divided opinion even among Elgar's strongest admirers. Donald Tovey viewed it as "one of the immeasurably great things in music", with power "identical with Shakespeare's", while Kennedy criticises the work for "too frequent reliance on sequences" and an over-idealised depiction of the female characters. Reed thought that the principal themes show less distinction than some of Elgar's earlier works. Elgar himself thought "Falstaff" the highest point of his purely orchestral work. The major works for voices and orchestra of the twenty-one years of Elgar's middle period are three large-scale works for soloists, chorus and orchestra: "The Dream of Gerontius" (1900), and the oratorios "The Apostles" (1903) and "The Kingdom" (1906); and two shorter odes, the "Coronation Ode" (1902) and "The Music Makers" (1912). 
The first of the odes, as a "pièce d'occasion", has rarely been revived after its initial success, with the culminating "Land of Hope and Glory". The second is, for Elgar, unusual in that it contains several quotations from his earlier works, as Richard Strauss quoted himself in "Ein Heldenleben". The choral works were all successful, although the first, "Gerontius", was and remains the best-loved and most performed. On the manuscript Elgar wrote, quoting John Ruskin, "This is the best of me; for the rest, I ate, and drank, and slept, loved and hated, like another. My life was as the vapour, and is not; but this I saw, and knew; this, if anything of mine, is worth your memory." All three of the large-scale works follow the traditional model with sections for soloists, chorus and both together. Elgar's distinctive orchestration, as well as his melodic inspiration, lifts them to a higher level than most of their British predecessors. Elgar's other works of his middle period include incidental music for "Grania and Diarmid", a play by George Moore and W. B. Yeats (1901), and for "The Starlight Express", a play based on a story by Algernon Blackwood (1916). Of the former, Yeats called Elgar's music "wonderful in its heroic melancholy". Elgar also wrote a number of songs during his peak period, of which Reed observes, "it cannot be said that he enriched the vocal repertory to the same extent as he did that of the orchestra." After the Cello Concerto, Elgar completed no more large-scale works. He made arrangements of works by Bach, Handel and Chopin, in distinctively Elgarian orchestration, and once again turned his youthful notebooks to use for the "Nursery Suite" (1931). His other compositions of this period have not held a place in the regular repertory. For most of the rest of the twentieth century, it was generally agreed that Elgar's creative impulse ceased after his wife's death. Anthony Payne's elaboration of the sketches for Elgar's Third Symphony led to a reconsideration of this supposition. Elgar left the opening of the symphony complete in full score, and those pages, along with others, show Elgar's orchestration changed markedly from the richness of his pre-war work. "The Gramophone" described the opening of the new work as something "thrilling ... unforgettably gaunt". Payne also subsequently produced a performing version of the sketches for a sixth "Pomp and Circumstance March", premiered at the Proms in August 2006. Elgar's sketches for a piano concerto dating from 1913 were elaborated by the composer Robert Walker and first performed in August 1997 by the pianist David Owen Norris. The realisation has since been extensively revised. Views of Elgar's stature have varied in the decades since his music came to prominence at the beginning of the twentieth century. Richard Strauss, as noted, hailed Elgar as a progressive composer; even the hostile reviewer in "The Observer", unimpressed by the thematic material of the First Symphony in 1908, called the orchestration "magnificently modern". Hans Richter rated Elgar as "the greatest modern composer" in any country, and Richter's colleague Arthur Nikisch considered the First Symphony "a masterpiece of the first order" to be "justly ranked with the great symphonic models – Beethoven and Brahms." By contrast, the critic W. J. Turner, in the mid-twentieth century, wrote of Elgar's "Salvation Army symphonies," and Herbert von Karajan called the "Enigma Variations" "second-hand Brahms". Elgar's immense popularity was not long-lived. 
After the success of his First Symphony and Violin Concerto, his Second Symphony and Cello Concerto were politely received but without the earlier wild enthusiasm. His music was identified in the public mind with the Edwardian era, and after the First World War he no longer seemed a progressive or modern composer. In the early 1920s, even the First Symphony had only one London performance in more than three years. Henry Wood and younger conductors such as Boult, Sargent and Barbirolli championed Elgar's music, but in the recording catalogues and the concert programmes of the middle of the century his works were not well represented. In 1924, the music scholar Edward J. Dent wrote an article for a German music journal in which he identified four features of Elgar's style that gave offence to a section of English opinion (namely, Dent indicated, the academic and snobbish section): "too emotional", "not quite free from vulgarity", "pompous", and "too deliberately noble in expression". This article was reprinted in 1930 and caused controversy. In the later years of the century there was, in Britain at least, a revival of interest in Elgar's music. The features that had offended austere taste in the inter-war years were seen from a different perspective. In 1955, the reference book "The Record Guide" had written of the Edwardian background against which Elgar's career reached its height; by the 1960s, a less severe view was being taken of the Edwardian era. In 1966 the critic Frank Howes wrote that Elgar reflected the last blaze of opulence, expansiveness and full-blooded life, before World War I swept so much away. In Howes's view, there was a touch of vulgarity in both the era and Elgar's music, but "a composer is entitled to be judged by posterity for his best work. ... Elgar is historically important for giving to English music a sense of the orchestra, for expressing what it felt like to be alive in the Edwardian age, for conferring on the world at least four unqualified masterpieces, and for thereby restoring England to the comity of musical nations." In 1967 the critic and analyst David Cox considered the question of the supposed Englishness of Elgar's music. Cox noted that Elgar disliked folk-songs and never used them in his works, opting for an idiom that was essentially German, leavened by a lightness derived from French composers including Berlioz and Gounod. How then, asked Cox, could Elgar be "the most English of composers"? Cox found the answer in Elgar's own personality, which "could use the alien idioms in such a way as to make of them a vital form of expression that was his and his alone. And the personality that comes through in the music is English." This point about Elgar's transmuting his influences had been touched on before. In 1930 "The Times" wrote, "When Elgar's first symphony came out, someone attempted to prove that its main tune on which all depends was like the Grail theme in Parsifal. ... but the attempt fell flat because everyone else, including those who disliked the tune, had instantly recognized it as typically 'Elgarian', while the Grail theme is as typically Wagnerian." As for Elgar's "Englishness", his fellow-composers recognised it: Richard Strauss and Stravinsky made particular reference to it, and Sibelius called him "the personification of the true English character in music ... a noble personality and a born aristocrat". Among Elgar's admirers there is disagreement about which of his works are to be regarded as masterpieces. 
The "Enigma Variations" are generally counted among them. "The Dream of Gerontius" has also been given high praise by Elgarians, and the Cello Concerto is similarly rated. Many rate the Violin Concerto equally highly, but some do not. Sackville-West omitted it from the list of Elgar masterpieces in "The Record Guide", and in a long analytical article in "The Musical Quarterly", Daniel Gregory Mason criticised the first movement of the concerto for a "kind of sing-songiness ... as fatal to noble rhythm in music as it is in poetry." "Falstaff" also divides opinion. It has never been a great popular favourite, and Kennedy and Reed identify shortcomings in it. In a "Musical Times" 1957 centenary symposium on Elgar led by Vaughan Williams, by contrast, several contributors share Eric Blom's view that "Falstaff" is the greatest of all Elgar's works. The two symphonies divide opinion even more sharply. Mason rates the Second poorly for its "over-obvious rhythmic scheme", but calls the First "Elgar's masterpiece. ... It is hard to see how any candid student can deny the greatness of this symphony." However, in the 1957 centenary symposium, several leading admirers of Elgar express reservations about one or both symphonies. In the same year, Roger Fiske wrote in "The Gramophone", "For some reason few people seem to like the two Elgar symphonies equally; each has its champions and often they are more than a little bored by the rival work." The critic John Warrack wrote, "There are no sadder pages in symphonic literature than the close of the First Symphony's Adagio, as horn and trombones twice softly intone a phrase of utter grief", whereas to Michael Kennedy, the movement is notable for its lack of anguished yearning and "angst" and is marked instead by a "benevolent tranquillity." Despite the fluctuating critical assessment of the various works over the years, Elgar's major works taken as a whole have in the twenty-first century recovered strongly from their neglect in the 1950s. "The Record Guide" in 1955 could list only one currently available recording of the First Symphony, none of the Second, one of the Violin Concerto, two of the Cello Concerto, two of the "Enigma Variations", one of "Falstaff", and none of "The Dream of Gerontius". Since then there have been multiple recordings of all the major works. More than thirty recordings have been made of the First Symphony since 1955, for example, and more than a dozen of "The Dream of Gerontius". Similarly, in the concert hall, Elgar's works, after a period of neglect, are once again frequently programmed. The Elgar Society's website, in its diary of forthcoming performances, lists performances of Elgar's works by orchestras, soloists and conductors across Europe, North America and Australia. Elgar was knighted in 1904, and in 1911 he was appointed a member of the Order of Merit. In 1920 he received the Cross of Commander of the Belgian Order of the Crown; in 1924 he was made Master of the King's Musick; the following year he received the Gold Medal of the Royal Philharmonic Society; and in 1928 he was appointed a Knight Commander of the Royal Victorian Order (KCVO). Between 1900 and 1931, Elgar received honorary degrees from the Universities of Cambridge, Durham, Leeds, Oxford, Yale (USA), Aberdeen, Western Pennsylvania (USA), Birmingham and London. 
Foreign academies of which he was made a member were Regia Accademia di Santa Cecilia, Rome; Accademia del Reale Istituto Musicale, Florence; Académie des Beaux Arts, Paris; Institut de France; and the American Academy. In 1931 he was created a Baronet, of Broadheath in the County of Worcester. In 1933 he was promoted within the Royal Victorian Order to Knight Grand Cross (GCVO). In Kennedy's words, he "shamelessly touted" for a peerage, but in vain. In "Who's Who" after World War I, he claimed to have been awarded "several Imperial Russian and German decorations (lapsed)". Elgar was offered, but declined, the office of Mayor of Hereford (despite not being a member of its city council) when he lived in the city in 1905. The same year he was made an honorary Freeman of the city of Worcester. The house in Lower Broadheath where Elgar was born is now the Elgar Birthplace Museum, devoted to his life and work. Elgar's daughter, Carice, helped to found the museum in 1936 and bequeathed to it much of her collection of Elgar's letters and documents on her death in 1970. Carice left Elgar manuscripts to musical colleges: "The Black Knight" to Trinity College of Music; "King Olaf" to the Royal Academy of Music; "The Music Makers" to Birmingham University; the Cello Concerto to the Royal College of Music; "The Kingdom" to the Bodleian Library; and other manuscripts to the British Museum. The Elgar Society, dedicated to the composer and his works, was formed in 1951. The University of Birmingham's Special Collections contain an archive of letters written by Elgar. Elgar's statue at the end of Worcester High Street stands facing the cathedral, only yards from where his father's shop once stood. Another statue of the composer, by Rose Garrard, is at the top of Church Street in Malvern, overlooking the town and giving visitors an opportunity to stand next to the composer in the shadow of the Malvern Hills that he so often regarded. In September 2005, a third statue, sculpted by Jemma Pearson, was unveiled near Hereford Cathedral in honour of his many musical and other associations with that city. It depicts Elgar with his bicycle. From 1999 until early 2007, new Bank of England twenty-pound notes featured a portrait of Elgar. The decision to remove his image generated controversy, particularly because 2007 was the 150th anniversary of Elgar's birth. From 2007 the Elgar notes were phased out, ceasing to be legal tender on 30 June 2010. There are around 65 roads in the UK named after Elgar, including six in the counties of Herefordshire and Worcestershire. Elgar had three locomotives named in his honour. Elgar's life and music have inspired works of literature, including the novel "Gerontius" and several plays. "Elgar's Rondo", a 1993 stage play by David Pownall, depicts the dead Jaeger offering ghostly advice on Elgar's musical development. Pownall also wrote a radio play, "Elgar's Third" (1994); another Elgar-themed radio play is Alick Rowe's "The Dorabella Variation" (2003). David Rudkin's "Penda's Fen" (1974), a BBC television "Play for Today", deals with themes including sex and adolescence, spying, and snobbery, with Elgar's music, chiefly "The Dream of Gerontius", as its background. In one scene, a ghostly Elgar whispers the secret of the "Enigma" tune to the youthful central character, with an injunction not to reveal it. 
"Elgar on the Journey to Hanley", a novel by Keith Alldritt (1979), tells of the composer's attachment to Dora Penny, later Mrs Powell, (depicted as "Dorabella" in the "Enigma Variations"), and covers the fifteen years from their first meeting in the mid-1890s to the genesis of the Violin Concerto when, in the novel, Dora has been supplanted in Elgar's affections by Alice Stuart-Wortley. Perhaps the best-known work depicting Elgar is Ken Russell's 1962 BBC television film "Elgar", made when the composer was still largely out of fashion. This hour-long film contradicted the view of Elgar as a jingoistic and bombastic composer, and evoked the more pastoral and melancholy side of his character and music. The following have been selected as representative of Elgar's works, based on quality, significance and popularity. Notes References Elizabeth I of England Elizabeth I (7 September 1533 – 24 March 1603) was Queen of England and Ireland from 17 November 1558 until her death on 24 March 1603. Sometimes called The Virgin Queen, Gloriana or Good Queen Bess, Elizabeth was the last monarch of the House of Tudor. Elizabeth was the daughter of Henry VIII and Anne Boleyn, his second wife, who was executed two-and-a-half years after Elizabeth's birth. Anne's marriage to Henry VIII was annulled, and Elizabeth was declared illegitimate. Her half-brother, Edward VI, ruled until his death in 1553, bequeathing the crown to Lady Jane Grey and ignoring the claims of his two half-sisters, Elizabeth and the Roman Catholic Mary, in spite of statute law to the contrary. Edward's will was set aside and Mary became queen, deposing Lady Jane Grey. During Mary's reign, Elizabeth was imprisoned for nearly a year on suspicion of supporting Protestant rebels. In 1558, Elizabeth succeeded her half-sister to the throne and set out to rule by good counsel. She depended heavily on a group of trusted advisers, led by William Cecil, 1st Baron Burghley. One of her first actions as queen was the establishment of an English Protestant church, of which she became the Supreme Governor. This Elizabethan Religious Settlement was to evolve into the Church of England. It was expected that Elizabeth would marry and produce an heir to continue the Tudor line; she never did, despite numerous courtships. She was eventually succeeded by her first cousin twice removed, James VI of Scotland. She had earlier been responsible for the imprisonment and execution of James's mother, Mary, Queen of Scots. In government, Elizabeth was more moderate than her father and half-siblings had been. One of her mottoes was ""video et taceo"" ("I see but say nothing"). In religion, she was relatively tolerant and avoided systematic persecution. After the pope declared her illegitimate in 1570 and released her subjects from obedience to her, several conspiracies threatened her life, all of which were defeated with the help of her ministers' secret service. Elizabeth was cautious in foreign affairs, manoeuvring between the major powers of France and Spain. She only half-heartedly supported a number of ineffective, poorly resourced military campaigns in the Netherlands, France, and Ireland. By the mid-1580s, England could no longer avoid war with Spain. England's defeat of the Spanish Armada in 1588 associated Elizabeth with one of the greatest military victories in English history. As she grew older, Elizabeth became celebrated for her virginity. A cult grew around her which was celebrated in the portraits, pageants, and literature of the day. 
Elizabeth's reign became known as the Elizabethan era. The period is famous for the flourishing of English drama, led by playwrights such as William Shakespeare and Christopher Marlowe, and for the seafaring prowess of English adventurers such as Francis Drake. Some historians depict Elizabeth as a short-tempered, sometimes indecisive ruler, who enjoyed more than her share of luck. Towards the end of her reign, a series of economic and military problems weakened her popularity. Elizabeth is acknowledged as a charismatic performer and a dogged survivor in an era when government was ramshackle and limited, and when monarchs in neighbouring countries faced internal problems that jeopardised their thrones. After the short reigns of her half-siblings, her 44 years on the throne provided welcome stability for the kingdom and helped forge a sense of national identity. Elizabeth was born at Greenwich Palace and was named after both her grandmothers, Elizabeth of York and Elizabeth Howard. She was the second child of Henry VIII of England born in wedlock to survive infancy. Her mother was Henry's second wife, Anne Boleyn. At birth, Elizabeth was the heir presumptive to the throne of England. Her older half-sister, Mary, had lost her position as a legitimate heir when Henry annulled his marriage to Mary's mother, Catherine of Aragon, to marry Anne, with the intent to sire a male heir and ensure the Tudor succession. She was baptised on 10 September 1533; Archbishop Thomas Cranmer, the Marquess of Exeter, the Duchess of Norfolk and the Dowager Marchioness of Dorset stood as her godparents. A canopy was carried at the ceremony over the three-day old child by her uncle Viscount Rochford, Lord Hussey, Lord Thomas Howard, and Lord Howard of Effingham. Elizabeth was two years and eight months old when her mother was beheaded on 19 May 1536, four months after Catherine of Aragon's death from natural causes. Elizabeth was declared illegitimate and deprived of her place in the royal succession. Eleven days after Anne Boleyn's execution, Henry married Jane Seymour, who died shortly after the birth of their son, Edward, in 1537. From his birth, Edward was undisputed heir apparent to the throne. Elizabeth was placed in his household and carried the chrisom, or baptismal cloth, at his christening. Elizabeth's first governess (or Lady Mistress), Margaret Bryan, wrote that she was "as toward a child and as gentle of conditions as ever I knew any in my life". Catherine Champernowne, better known by her later, married name of Catherine "Kat" Ashley, was appointed as Elizabeth's governess in 1537, and she remained Elizabeth's friend until her death in 1565. Champernowne taught Elizabeth four languages: French, Flemish, Italian and Spanish. By the time William Grindal became her tutor in 1544, Elizabeth could write English, Latin, and Italian. Under Grindal, a talented and skilful tutor, she also progressed in French and Greek. After Grindal died in 1548, Elizabeth received her education under Roger Ascham, a sympathetic teacher who believed that learning should be engaging. By the time her formal education ended in 1550, Elizabeth was one of the best educated women of her generation. At the end of her life, Elizabeth was also believed to speak Welsh, Cornish, Scottish and Irish in addition to the languages mentioned above. The Venetian ambassador stated in 1603 that she "possessed [these] languages so thoroughly that each appeared to be her native tongue". 
Historian Mark Stoyle suggests that she was probably taught Cornish by William Killigrew, Groom of the Privy Chamber and later Chamberlain of the Exchequer. Henry VIII died in 1547 and Elizabeth's half-brother, Edward VI, became king at age nine. Catherine Parr, Henry's widow, soon married Thomas Seymour, 1st Baron Seymour of Sudeley, Edward VI's uncle and the brother of the Lord Protector, Edward Seymour, 1st Duke of Somerset. The couple took Elizabeth into their household at Chelsea. There Elizabeth experienced an emotional crisis that some historians believe affected her for the rest of her life. Thomas Seymour, approaching age 40 but having charm and "a powerful sex appeal", engaged in romps and horseplay with the 14-year-old Elizabeth. These included entering her bedroom in his nightgown, tickling her and slapping her on the buttocks. Parr, rather than confront her husband over his inappropriate activities, joined in. Twice she accompanied him in tickling Elizabeth, and once held her while he cut her black gown "into a thousand pieces." However, after Parr discovered the pair in an embrace, she ended this state of affairs. In May 1548, Elizabeth was sent away. However, Thomas Seymour continued scheming to control the royal family and tried to have himself appointed the governor of the King's person. When Parr died after childbirth on 5 September 1548, he renewed his attentions towards Elizabeth, intent on marrying her. The details of his former behaviour towards Elizabeth emerged, and for his brother and the king's council, this was the last straw. In January 1549, Seymour was arrested on suspicion of plotting to marry Elizabeth and overthrow the Lord Protector. Elizabeth, living at Hatfield House, would admit nothing. Her stubbornness exasperated her interrogator, Sir Robert Tyrwhitt, who reported, "I do see it in her face that she is guilty". Seymour was beheaded on 20 March 1549. Edward VI died on 6 July 1553, aged 15. His will swept aside the Succession to the Crown Act 1543, excluded both Mary and Elizabeth from the succession, and instead declared as his heir Lady Jane Grey, granddaughter of Henry VIII's younger sister, Mary. Jane was proclaimed queen by the Privy Council, but her support quickly crumbled, and she was deposed after nine days. On 3 August 1553, Mary rode triumphantly into London, with Elizabeth at her side. The show of solidarity between the sisters did not last long. Mary, a devout Catholic, was determined to crush the Protestant faith in which Elizabeth had been educated, and she ordered that everyone attend Catholic Mass; Elizabeth had to outwardly conform. Mary's initial popularity ebbed away in 1554 when she announced plans to marry Philip of Spain, the son of Holy Roman Emperor Charles V and an active Catholic. Discontent spread rapidly through the country, and many looked to Elizabeth as a focus for their opposition to Mary's religious policies. In January and February 1554, Wyatt's rebellion broke out; it was soon suppressed. Elizabeth was brought to court, and interrogated regarding her role, and on 18 March, she was imprisoned in the Tower of London. Elizabeth fervently protested her innocence. Though it is unlikely that she had plotted with the rebels, some of them were known to have approached her. Mary's closest confidant, Charles V's ambassador Simon Renard, argued that her throne would never be safe while Elizabeth lived; and the Chancellor, Stephen Gardiner, worked to have Elizabeth put on trial. 
Elizabeth's supporters in the government, including Lord Paget, convinced Mary to spare her sister in the absence of hard evidence against her. Instead, on 22 May, Elizabeth was moved from the Tower to Woodstock, where she was to spend almost a year under house arrest in the charge of Sir Henry Bedingfield. Crowds cheered her all along the way. On 17 April 1555, Elizabeth was recalled to court to attend the final stages of Mary's apparent pregnancy. If Mary and her child died, Elizabeth would become queen. If, on the other hand, Mary gave birth to a healthy child, Elizabeth's chances of becoming queen would recede sharply. When it became clear that Mary was not pregnant, no one believed any longer that she could have a child. Elizabeth's succession seemed assured. King Philip, who ascended the Spanish throne in 1556, acknowledged the new political reality and cultivated his sister-in-law. She was a better ally than the chief alternative, Mary, Queen of Scots, who had grown up in France and was betrothed to the Dauphin of France. When his wife fell ill in 1558, King Philip sent the Count of Feria to consult with Elizabeth. This interview was conducted at Hatfield House, where she had returned to live in October 1555. By October 1558, Elizabeth was already making plans for her government. On 6 November, Mary recognised Elizabeth as her heir. On 17 November 1558, Mary died and Elizabeth succeeded to the throne. Elizabeth became queen at the age of 25, and declared her intentions to her Council and other peers who had come to Hatfield to swear allegiance. The speech contains the first record of her adoption of the mediaeval political theology of the sovereign's "two bodies": the body natural and the body politic: My lords, the law of nature moves me to sorrow for my sister; the burden that is fallen upon me makes me amazed, and yet, considering I am God's creature, ordained to obey His appointment, I will thereto yield, desiring from the bottom of my heart that I may have assistance of His grace to be the minister of His heavenly will in this office now committed to me. And as I am but one body naturally considered, though by His permission a body politic to govern, so shall I desire you all ... to be assistant to me, that I with my ruling and you with your service may make a good account to Almighty God and leave some comfort to our posterity on earth. I mean to direct all my actions by good advice and counsel. As her triumphal progress wound through the city on the eve of the coronation ceremony, she was welcomed wholeheartedly by the citizens and greeted by orations and pageants, most with a strong Protestant flavour. Elizabeth's open and gracious responses endeared her to the spectators, who were "wonderfully ravished". The following day, 15 January 1559, a date chosen by her astrologer John Dee, Elizabeth was crowned and anointed by Owen Oglethorpe, the Catholic bishop of Carlisle, in Westminster Abbey. She was then presented for the people's acceptance, amidst a deafening noise of organs, fifes, trumpets, drums, and bells. Although Elizabeth was welcomed as queen in England, the country was still in a state of anxiety over the perceived Catholic threat at home and overseas, as well as the choice of whom she would marry. Elizabeth's personal religious convictions have been much debated by scholars. She was a Protestant, but kept Catholic symbols (such as the crucifix), and downplayed the role of sermons in defiance of a key Protestant belief. 
In terms of public policy she favoured pragmatism in dealing with religious matters. The question of her legitimacy was a key concern: although she was technically illegitimate under both Protestant and Catholic law, the illegitimacy retroactively declared by the English church was a less serious bar than never having been legitimate at all, as the Catholics claimed of her. For this reason alone, it was never in serious doubt that Elizabeth would embrace Protestantism. Elizabeth and her advisers perceived the threat of a Catholic crusade against heretical England. Elizabeth therefore sought a Protestant solution that would not offend Catholics too greatly while addressing the desires of English Protestants, though she would not tolerate the more radical Puritans, who were pushing for far-reaching reforms. As a result, the parliament of 1559 started to legislate for a church based on the Protestant settlement of Edward VI, with the monarch as its head, but with many Catholic elements, such as vestments. The House of Commons backed the proposals strongly, but the bill of supremacy met opposition in the House of Lords, particularly from the bishops. Elizabeth was fortunate that many bishoprics were vacant at the time, including the Archbishopric of Canterbury. This enabled her supporters among the lay peers to outvote the bishops and conservative peers. Nevertheless, Elizabeth was forced to accept the title of Supreme Governor of the Church of England rather than the more contentious title of Supreme Head, which many thought unacceptable for a woman to bear. The new Act of Supremacy became law on 8 May 1559. All public officials were to swear an oath of loyalty to the monarch as the supreme governor or risk disqualification from office; the heresy laws were repealed, to avoid a repeat of the persecution of dissenters practised by Mary. At the same time, a new Act of Uniformity was passed, which made attendance at church and the use of an adapted version of the 1552 Book of Common Prayer compulsory, though the penalties for recusancy, or failure to attend and conform, were not extreme. From the start of Elizabeth's reign, it was expected that she would marry, and the question arose as to whom. Although she received many offers for her hand, she never married and was childless; the reasons for this are not clear. Historians have speculated that Thomas Seymour had put her off sexual relationships, or that she knew herself to be infertile. She considered several suitors until she was about fifty. Her last courtship was with Francis, Duke of Anjou, 22 years her junior. While risking possible loss of power like her sister, who played into the hands of King Philip II of Spain, marriage offered the chance of an heir. However, the choice of a husband might also provoke political instability or even insurrection. In the spring of 1559, it became evident that Elizabeth was in love with her childhood friend Robert Dudley. It was said that Amy Robsart, his wife, was suffering from a "malady in one of her breasts" and that the Queen would like to marry Dudley if his wife should die. By the autumn of 1559, several foreign suitors were vying for Elizabeth's hand; their impatient envoys engaged in ever more scandalous talk and reported that a marriage with her favourite was not welcome in England: "There is not a man who does not cry out on him and her with indignation ... she will marry none but the favoured Robert." 
Amy Dudley died in September 1560 from a fall down a flight of stairs, and despite the coroner's inquest finding of accidental death, many people suspected Dudley of having arranged her death so that he could marry the queen. Elizabeth seriously considered marrying Dudley for some time. However, William Cecil, Nicholas Throckmorton, and some conservative peers made their disapproval unmistakably clear. There were even rumours that the nobility would rise if the marriage took place. Among the other marriage candidates considered for the queen, Robert Dudley continued to be regarded as a possibility for nearly another decade. Elizabeth was extremely jealous of his affections, even when she no longer meant to marry him herself. In 1564, Elizabeth raised Dudley to the peerage as Earl of Leicester. He finally remarried in 1578, to which the queen reacted with repeated scenes of displeasure and lifelong hatred towards his wife, Lettice Knollys. Still, Dudley always "remained at the centre of [Elizabeth's] emotional life", as historian Susan Doran has described the situation. He died shortly after the defeat of the Armada. After Elizabeth's own death, a note from him was found among her most personal belongings, marked "his last letter" in her handwriting. Marriage negotiations constituted a key element in Elizabeth's foreign policy. She turned down Philip's own hand early in 1559 but for several years entertained the proposal of King Eric XIV of Sweden. For several years she also seriously negotiated to marry Philip's cousin Archduke Charles of Austria. By 1569, relations with the Habsburgs had deteriorated, and Elizabeth considered marriage to two French Valois princes in turn, first Henry, Duke of Anjou, and later, from 1572 to 1581, his brother Francis, Duke of Anjou, formerly Duke of Alençon. This last proposal was tied to a planned alliance against Spanish control of the Southern Netherlands. Elizabeth seems to have taken the courtship seriously for a time, and wore a frog-shaped earring that Anjou had sent her. In 1563, Elizabeth told an imperial envoy: "If I follow the inclination of my nature, it is this: beggar-woman and single, far rather than queen and married". Later in the year, following Elizabeth's illness with smallpox, the succession question became a heated issue in Parliament. Members urged the queen to marry or nominate an heir, to prevent a civil war upon her death. She refused to do either. In April she prorogued the Parliament, which did not reconvene until she needed its support to raise taxes in 1566. Having previously promised to marry, she told an unruly House: I will never break the word of a prince spoken in public place, for my honour's sake. And therefore I say again, I will marry as soon as I can conveniently, if God take not him away with whom I mind to marry, or myself, or else some other great let happen. By 1570, senior figures in the government privately accepted that Elizabeth would never marry or name a successor. William Cecil was already seeking solutions to the succession problem. For her failure to marry, Elizabeth was often accused of irresponsibility. Her silence, however, strengthened her own political security: she knew that if she named an heir, her throne would be vulnerable to a coup, and she remembered the way that "a second person, as I have been" had been used as the focus of plots against her predecessor. Elizabeth's unmarried status inspired a cult of virginity. 
In poetry and portraiture, she was depicted as a virgin or a goddess or both, not as a normal woman. At first, only Elizabeth made a virtue of her virginity: in 1559, she told the Commons, "And, in the end, this shall be for me sufficient, that a marble stone shall declare that a queen, having reigned such a time, lived and died a virgin". Later on, poets and writers took up the theme and turned it into an iconography that exalted Elizabeth. Public tributes to the Virgin by 1578 acted as a coded assertion of opposition to the queen's marriage negotiations with the Duke of Alençon. Ultimately, Elizabeth would insist she was married to her kingdom and subjects, under divine protection. In 1599, she spoke of "all my husbands, my good people". Elizabeth's first policy toward Scotland was to oppose the French presence there. She feared that the French planned to invade England and put her Catholic cousin Mary, Queen of Scots, on the throne. Mary was considered by many to be the heir to the English crown, being the granddaughter of Henry VIII's elder sister, Margaret. Mary boasted being "the nearest kinswoman she hath". Elizabeth was persuaded to send a force into Scotland to aid the Protestant rebels, and though the campaign was inept, the resulting Treaty of Edinburgh of July 1560 removed the French threat in the north. When Mary returned to Scotland in 1561 to take up the reins of power, the country had an established Protestant church and was run by a council of Protestant nobles supported by Elizabeth. Mary refused to ratify the treaty. In 1563 Elizabeth proposed her own suitor, Robert Dudley, as a husband for Mary, without asking either of the two people concerned. Both proved unenthusiastic, and in 1565 Mary married Henry Stuart, Lord Darnley, who carried his own claim to the English throne. The marriage was the first of a series of errors of judgement by Mary that handed the victory to the Scottish Protestants and to Elizabeth. Darnley quickly became unpopular and was murdered in February 1567 by conspirators almost certainly led by James Hepburn, 4th Earl of Bothwell. Shortly afterwards, on 15 May 1567, Mary married Bothwell, arousing suspicions that she had been party to the murder of her husband. Elizabeth confronted Mary about the marriage, writing to her: How could a worse choice be made for your honour than in such haste to marry such a subject, who besides other and notorious lacks, public fame has charged with the murder of your late husband, besides the touching of yourself also in some part, though we trust in that behalf falsely. These events led rapidly to Mary's defeat and imprisonment in Loch Leven Castle. The Scottish lords forced her to abdicate in favour of her son James VI, who had been born in June 1566. James was taken to Stirling Castle to be raised as a Protestant. Mary escaped from Loch Leven in 1568 but after another defeat fled across the border into England, where she had once been assured of support from Elizabeth. Elizabeth's first instinct was to restore her fellow monarch; but she and her council instead chose to play safe. Rather than risk returning Mary to Scotland with an English army or sending her to France and the Catholic enemies of England, they detained her in England, where she was imprisoned for the next nineteen years. Mary was soon the focus for rebellion. In 1569 there was a major Catholic rising in the North; the goal was to free Mary, marry her to Thomas Howard, 4th Duke of Norfolk, and put her on the English throne. 
After the rebels' defeat, over 750 of them were executed on Elizabeth's orders. In the belief that the revolt had been successful, Pope Pius V issued a bull in 1570, titled "Regnans in Excelsis", which declared "Elizabeth, the pretended Queen of England and the servant of crime" to be excommunicated and a heretic, releasing all her subjects from any allegiance to her. Catholics who obeyed her orders were threatened with excommunication. The papal bull provoked legislative initiatives against Catholics by Parliament, which were, however, mitigated by Elizabeth's intervention. In 1581, to convert English subjects to Catholicism with "the intent" to withdraw them from their allegiance to Elizabeth was made a treasonable offence, carrying the death penalty. From the 1570s missionary priests from continental seminaries came to England secretly in the cause of the "reconversion of England". Many suffered execution, engendering a cult of martyrdom. "Regnans in Excelsis" gave English Catholics a strong incentive to look to Mary Stuart as the legitimate sovereign of England. Mary may not have been told of every Catholic plot to put her on the English throne, but from the Ridolfi Plot of 1571 (which caused Mary's suitor, the Duke of Norfolk, to lose his head) to the Babington Plot of 1586, Elizabeth's spymaster Sir Francis Walsingham and the royal council keenly assembled a case against her. At first, Elizabeth resisted calls for Mary's death. By late 1586, she had been persuaded to sanction her trial and execution on the evidence of letters written during the Babington Plot. Elizabeth's proclamation of the sentence announced that "the said Mary, pretending title to the same Crown, had compassed and imagined within the same realm divers things tending to the hurt, death and destruction of our royal person." On 8 February 1587, Mary was beheaded at Fotheringhay Castle, Northamptonshire. After Mary's execution, Elizabeth claimed that she had not intended for the signed execution warrant to be dispatched, and blamed her Secretary, William Davison, for implementing it without her knowledge. The sincerity of Elizabeth's remorse and whether or not she wanted to delay the warrant have been called into question both by her contemporaries and later historians. Elizabeth's foreign policy was largely defensive. The exception was the English occupation of Le Havre from October 1562 to June 1563, which ended in failure when Elizabeth's Huguenot allies joined with the Catholics to retake the port. Elizabeth's intention had been to exchange Le Havre for Calais, lost to France in January 1558. Only through the activities of her fleets did Elizabeth pursue an aggressive policy. This paid off in the war against Spain, 80% of which was fought at sea. She knighted Francis Drake after his circumnavigation of the globe from 1577 to 1580, and he won fame for his raids on Spanish ports and fleets. An element of piracy and self-enrichment drove Elizabethan seafarers, over whom the queen had little control. After the occupation and loss of Le Havre in 1562–1563, Elizabeth avoided military expeditions on the continent until 1585, when she sent an English army to aid the Protestant Dutch rebels against Philip II. This followed the deaths in 1584 of the allies William the Silent, Prince of Orange, and the Duke of Anjou, and the surrender of a series of Dutch towns to Alexander Farnese, Duke of Parma, Philip's governor of the Spanish Netherlands. 
In December 1584, an alliance between Philip II and the French Catholic League at Joinville undermined the ability of Anjou's brother, Henry III of France, to counter Spanish domination of the Netherlands. It also extended Spanish influence along the Channel coast of France, where the Catholic League was strong, and exposed England to invasion. The siege of Antwerp in the summer of 1585 by the Duke of Parma necessitated some reaction on the part of the English and the Dutch. The outcome was the Treaty of Nonsuch of August 1585, in which Elizabeth promised military support to the Dutch. The treaty marked the beginning of the Anglo-Spanish War, which lasted until the Treaty of London in 1604. The expedition was led by her former suitor, the Earl of Leicester. From the start, Elizabeth did not really back this course of action. Her strategy, of supporting the Dutch on the surface with an English army while beginning secret peace talks with Spain within days of Leicester's arrival in Holland, was necessarily at odds with Leicester's, who wanted, and was expected by the Dutch, to fight an active campaign. Elizabeth, on the other hand, wanted him "to avoid at all costs any decisive action with the enemy". He enraged Elizabeth by accepting the post of Governor-General from the Dutch States General. Elizabeth saw this as a Dutch ploy to force her to accept sovereignty over the Netherlands, which so far she had always declined. She wrote to Leicester: We could never have imagined (had we not seen it fall out in experience) that a man raised up by ourself and extraordinarily favoured by us, above any other subject of this land, would have in so contemptible a sort broken our commandment in a cause that so greatly touches us in honour ... And therefore our express pleasure and commandment is that, all delays and excuses laid apart, you do presently upon the duty of your allegiance obey and fulfill whatsoever the bearer hereof shall direct you to do in our name. Whereof fail you not, as you will answer the contrary at your utmost peril. Elizabeth's "commandment" was that her emissary read out her letters of disapproval publicly before the Dutch Council of State, Leicester having to stand nearby. This public humiliation of her "Lieutenant-General", combined with her continued talks for a separate peace with Spain, irreversibly undermined his standing among the Dutch. The military campaign was severely hampered by Elizabeth's repeated refusals to send promised funds for her starving soldiers. Her unwillingness to commit herself to the cause, Leicester's own shortcomings as a political and military leader, and the faction-ridden and chaotic situation of Dutch politics led to the failure of the campaign. Leicester finally resigned his command in December 1587. Meanwhile, Sir Francis Drake had undertaken a major voyage against Spanish ports and ships in the Caribbean in 1585 and 1586. In 1587 he made a successful raid on Cadiz, destroying the Spanish fleet of warships intended for the "Enterprise of England", as Philip II had decided to take the war to England. On 12 July 1588, the Spanish Armada, a great fleet of ships, set sail for the Channel, planning to ferry a Spanish invasion force under the Duke of Parma to the coast of southeast England from the Netherlands. A combination of miscalculation, misfortune, and an attack of English fire ships on 29 July off Gravelines, which dispersed the Spanish ships to the northeast, defeated the Armada. 
The Armada straggled home to Spain in shattered remnants, after disastrous losses on the coast of Ireland (after some ships had tried to struggle back to Spain via the North Sea, and then back south past the west coast of Ireland). Unaware of the Armada's fate, English militias mustered to defend the country under the Earl of Leicester's command. He invited Elizabeth to inspect her troops at Tilbury in Essex on 8 August. Wearing a silver breastplate over a white velvet dress, she addressed them in one of her most famous speeches: My loving people, we have been persuaded by some that are careful of our safety, to take heed how we commit ourself to armed multitudes for fear of treachery; but I assure you, I do not desire to live to distrust my faithful and loving people ... I know I have the body but of a weak and feeble woman, but I have the heart and stomach of a king, and of a King of England too, and think foul scorn that Parma or Spain, or any Prince of Europe should dare to invade the borders of my realm. When no invasion came, the nation rejoiced. Elizabeth's procession to a thanksgiving service at St Paul's Cathedral rivalled that of her coronation as a spectacle. The defeat of the armada was a potent propaganda victory, both for Elizabeth and for Protestant England. The English took their delivery as a symbol of God's favour and of the nation's inviolability under a virgin queen. However, the victory was not a turning point in the war, which continued and often favoured Spain. The Spanish still controlled the southern provinces of the Netherlands, and the threat of invasion remained. Sir Walter Raleigh claimed after her death that Elizabeth's caution had impeded the war against Spain: If the late queen would have believed her men of war as she did her scribes, we had in her time beaten that great empire in pieces and made their kings of figs and oranges as in old times. But her Majesty did all by halves, and by petty invasions taught the Spaniard how to defend himself, and to see his own weakness. Though some historians have criticised Elizabeth on similar grounds, Raleigh's verdict has more often been judged unfair. Elizabeth had good reason not to place too much trust in her commanders, who once in action tended, as she put it herself, "to be transported with an haviour of vainglory". In 1589, the year after the Spanish Armada, Elizabeth sent to Spain the "English Armada" or "Counter Armada" with 23,375 men and 150 ships, led by Sir Francis Drake as admiral and Sir John Norreys as general. The English fleet suffered a catastrophic defeat with 11,000–15,000 killed, wounded or died of disease and 40 ships sunk or captured. The advantage England had won upon the destruction of the Spanish Armada was lost, and the Spanish victory marked a revival of Philip II's naval power through the next decade. When the Protestant Henry IV inherited the French throne in 1589, Elizabeth sent him military support. It was her first venture into France since the retreat from Le Havre in 1563. Henry's succession was strongly contested by the Catholic League and by Philip II, and Elizabeth feared a Spanish takeover of the channel ports. The subsequent English campaigns in France, however, were disorganised and ineffective. Lord Willoughby, largely ignoring Elizabeth's orders, roamed northern France to little effect, with an army of 4,000 men. He withdrew in disarray in December 1589, having lost half his troops. 
In 1591, the campaign of John Norreys, who led 3,000 men to Brittany, was even more of a disaster. As for all such expeditions, Elizabeth was unwilling to invest in the supplies and reinforcements requested by the commanders. Norreys left for London to plead in person for more support. In his absence, a Catholic League army almost destroyed the remains of his army at Craon, north-west France, in May 1591. In July, Elizabeth sent out another force under Robert Devereux, 2nd Earl of Essex, to help Henry IV in besieging Rouen. The result was just as dismal. Essex accomplished nothing and returned home in January 1592. Henry abandoned the siege in April. As usual, Elizabeth lacked control over her commanders once they were abroad. "Where he is, or what he doth, or what he is to do," she wrote of Essex, "we are ignorant". Although Ireland was one of her two kingdoms, Elizabeth faced a hostile, and in places virtually autonomous, Irish population that adhered to Catholicism and was willing to defy her authority and plot with her enemies. Her policy there was to grant land to her courtiers and prevent the rebels from giving Spain a base from which to attack England. In the course of a series of uprisings, Crown forces pursued scorched-earth tactics, burning the land and slaughtering man, woman and child. During a revolt in Munster led by Gerald FitzGerald, 15th Earl of Desmond, in 1582, an estimated 30,000 Irish people starved to death. The poet and colonist Edmund Spenser wrote that the victims "were brought to such wretchedness as that any stony heart would have rued the same". Elizabeth advised her commanders that the Irish, "that rude and barbarous nation", be well treated; but she showed no remorse when force and bloodshed were deemed necessary. Between 1594 and 1603, Elizabeth faced her most severe test in Ireland during the Nine Years' War, a revolt that took place at the height of hostilities with Spain, who backed the rebel leader, Hugh O'Neill, Earl of Tyrone. In spring 1599, Elizabeth sent Robert Devereux, 2nd Earl of Essex, to put the revolt down. To her frustration, he made little progress and returned to England in defiance of her orders. He was replaced by Charles Blount, Lord Mountjoy, who took three years to defeat the rebels. O'Neill finally surrendered in 1603, a few days after Elizabeth's death. Soon afterwards, a peace treaty was signed between England and Spain. Elizabeth continued to maintain the diplomatic relations with the Tsardom of Russia originally established by her half-brother. She often wrote to Ivan the Terrible on amicable terms, though the Tsar was often annoyed by her focus on commerce rather than on the possibility of a military alliance. The Tsar even proposed to her once, and during his later reign, asked for a guarantee to be granted asylum in England should his rule be jeopardised. Upon Ivan's death, he was succeeded by his simple-minded son Feodor. Unlike his father, Feodor had no enthusiasm in maintaining exclusive trading rights with England. Feodor declared his kingdom open to all foreigners, and dismissed the English ambassador Sir Jerome Bowes, whose pomposity had been tolerated by Ivan. Elizabeth sent a new ambassador, Dr. Giles Fletcher, to demand from the regent Boris Godunov that he convince the Tsar to reconsider. The negotiations failed, due to Fletcher addressing Feodor with two of his many titles omitted. Elizabeth continued to appeal to Feodor in half appealing, half reproachful letters. 
She proposed an alliance, something which she had refused to do when offered one by Feodor's father, but was turned down. Trade and diplomatic relations developed between England and the Barbary states during the rule of Elizabeth. England established a trading relationship with Morocco in opposition to Spain, selling armour, ammunition, timber, and metal in exchange for Moroccan sugar, in spite of a Papal ban. In 1600, Abd el-Ouahed ben Messaoud, the principal secretary to the Moroccan ruler Mulai Ahmad al-Mansur, visited England as an ambassador to the court of Queen Elizabeth I, to negotiate an Anglo-Moroccan alliance against Spain. Elizabeth "agreed to sell munitions supplies to Morocco, and she and Mulai Ahmad al-Mansur talked on and off about mounting a joint operation against the Spanish". Discussions, however, remained inconclusive, and both rulers died within two years of the embassy. Diplomatic relations were also established with the Ottoman Empire with the chartering of the Levant Company and the dispatch of the first English ambassador to the Porte, William Harborne, in 1578. For the first time, a Treaty of Commerce was signed in 1580. Numerous envoys were dispatched in both directions and epistolary exchanges occurred between Elizabeth and Sultan Murad III. In one correspondence, Murad entertained the notion that Islam and Protestantism had "much more in common than either did with Roman Catholicism, as both rejected the worship of idols", and argued for an alliance between England and the Ottoman Empire. To the dismay of Catholic Europe, England exported tin and lead (for cannon-casting) and ammunition to the Ottoman Empire, and Elizabeth seriously discussed joint military operations with Murad III during the outbreak of war with Spain in 1585, as Francis Walsingham was lobbying for a direct Ottoman military involvement against the common Spanish enemy. The period after the defeat of the Spanish Armada in 1588 brought new difficulties for Elizabeth that lasted until the end of her reign. The conflicts with Spain and in Ireland dragged on, the tax burden grew heavier, and the economy was hit by poor harvests and the cost of war. Prices rose and the standard of living fell. During this time, repression of Catholics intensified, and Elizabeth authorised commissions in 1591 to interrogate and monitor Catholic householders. To maintain the illusion of peace and prosperity, she increasingly relied on internal spies and propaganda. In her last years, mounting criticism reflected a decline in the public's affection for her. One of the causes for this "second reign" of Elizabeth, as it is sometimes called, was the changed character of Elizabeth's governing body, the privy council, in the 1590s. A new generation was in power. With the exception of Lord Burghley, the most important politicians had died around 1590: the Earl of Leicester in 1588; Sir Francis Walsingham in 1590; and Sir Christopher Hatton in 1591. Factional strife in the government, which had not existed in a noteworthy form before the 1590s, now became its hallmark. A bitter rivalry arose between the Earl of Essex and Robert Cecil, son of Lord Burghley, and their respective adherents, and the struggle for the most powerful positions in the state marred politics. The queen's personal authority was lessening, as is shown in the 1594 affair of Dr. Lopez, her trusted physician. 
When he was wrongly accused by the Earl of Essex of treason out of personal pique, she could not prevent his execution, although she had been angry about his arrest and seems not to have believed in his guilt. During the last years of her reign, Elizabeth came to rely on the granting of monopolies as a cost-free system of patronage, rather than asking Parliament for more subsidies in a time of war. The practice soon led to price-fixing, the enrichment of courtiers at the public's expense, and widespread resentment. This culminated in agitation in the House of Commons during the parliament of 1601. In her famous "Golden Speech" of 30 November 1601 at Whitehall Palace to a deputation of 140 members, Elizabeth professed ignorance of the abuses, and won the members over with promises and her usual appeal to the emotions: Who keeps their sovereign from the lapse of error, in which, by ignorance and not by intent they might have fallen, what thank they deserve, we know, though you may guess. And as nothing is more dear to us than the loving conservation of our subjects' hearts, what an undeserved doubt might we have incurred if the abusers of our liberality, the thrallers of our people, the wringers of the poor, had not been told us! This same period of economic and political uncertainty, however, produced an unsurpassed literary flowering in England. The first signs of a new literary movement had appeared at the end of the second decade of Elizabeth's reign, with John Lyly's "Euphues" and Edmund Spenser's "The Shepheardes Calender" in 1578. During the 1590s, some of the great names of English literature entered their maturity, including William Shakespeare and Christopher Marlowe. During this period and into the Jacobean era that followed, the English theatre reached its highest peaks. The notion of a great Elizabethan era depends largely on the builders, dramatists, poets, and musicians who were active during Elizabeth's reign. They owed little directly to the queen, who was never a major patron of the arts. As Elizabeth aged her image gradually changed. She was portrayed as Belphoebe or Astraea, and after the Armada, as Gloriana, the eternally youthful Faerie Queene of Edmund Spenser's poem. Her painted portraits became less realistic and more a set of enigmatic icons that made her look much younger than she was. In fact, her skin had been scarred by smallpox in 1562, leaving her half bald and dependent on wigs and cosmetics. Her love of sweets and fear of dentists contributed to severe tooth decay and loss to such an extent that foreign ambassadors had a hard time understanding her speech. André Hurault de Maisse, Ambassador Extraordinary from Henry IV of France, reported an audience with the queen, during which he noticed, "her teeth are very yellow and unequal ... and on the left side less than on the right. Many of them are missing, so that one cannot understand her easily when she speaks quickly." Yet he added, "her figure is fair and tall and graceful in whatever she does; so far as may be she keeps her dignity, yet humbly and graciously withal." Sir Walter Raleigh called her "a lady whom time had surprised". The more Elizabeth's beauty faded, the more her courtiers praised it. Elizabeth was happy to play the part, but it is possible that in the last decade of her life she began to believe her own performance. She became fond and indulgent of the charming but petulant young Robert Devereux, Earl of Essex, who was Leicester's stepson and took liberties with her for which she forgave him. 
She repeatedly appointed him to military posts despite his growing record of irresponsibility. After Essex's desertion of his command in Ireland in 1599, Elizabeth had him placed under house arrest and the following year deprived him of his monopolies. In February 1601, the earl tried to raise a rebellion in London. He intended to seize the queen but few rallied to his support, and he was beheaded on 25 February. Elizabeth knew that her own misjudgements were partly to blame for this turn of events. An observer wrote in 1602: "Her delight is to sit in the dark, and sometimes with shedding tears to bewail Essex." Elizabeth's senior adviser, William Cecil, 1st Baron Burghley, died on 4 August 1598. His political mantle passed to his son, Robert Cecil, who soon became the leader of the government. One task he addressed was to prepare the way for a smooth succession. Since Elizabeth would never name her successor, Cecil was obliged to proceed in secret. He therefore entered into a coded negotiation with James VI of Scotland, who had a strong but unrecognised claim. Cecil coached the impatient James to humour Elizabeth and "secure the heart of the highest, to whose sex and quality nothing is so improper as either needless expostulations or over much curiosity in her own actions". The advice worked. James's tone delighted Elizabeth, who responded: "So trust I that you will not doubt but that your last letters are so acceptably taken as my thanks cannot be lacking for the same, but yield them to you in grateful sort". In historian J. E. Neale's view, Elizabeth may not have declared her wishes openly to James, but she made them known with "unmistakable if veiled phrases". The Queen's health remained fair until the autumn of 1602, when a series of deaths among her friends plunged her into a severe depression. In February 1603, the death of Catherine Carey, Countess of Nottingham, the niece of her cousin and close friend Lady Knollys, came as a particular blow. In March, Elizabeth fell sick and remained in a "settled and unremovable melancholy", and sat motionless on a cushion for hours on end. When Robert Cecil told her that she must go to bed, she snapped: "Must is not a word to use to princes, little man." She died on 24 March 1603 at Richmond Palace, between two and three in the morning. A few hours later, Cecil and the council set their plans in motion and proclaimed James King of England. While it has become normative to record the death of the Queen as occurring in 1603, following English calendar reform in the 1750s, at the time England observed New Year's Day on 25 March, commonly known as Lady Day. Thus Elizabeth died on the last day of the year 1602 in the old calendar. The modern convention is to use the old calendar for the date and month while using the new for the year. Elizabeth's coffin was carried downriver at night to Whitehall, on a barge lit with torches. At her funeral on 28 April, the coffin was taken to Westminster Abbey on a hearse drawn by four horses hung with black velvet. In the words of the chronicler John Stow: Westminster was surcharged with multitudes of all sorts of people in their streets, houses, windows, leads and gutters, that came out to see the obsequy, and when they beheld her statue lying upon the coffin, there was such a general sighing, groaning and weeping as the like hath not been seen or known in the memory of man. Elizabeth was interred in Westminster Abbey, in a tomb shared with her half-sister, Mary I. 
The Latin inscription on their tomb, "Regno consortes & urna, hic obdormimus Elizabetha et Maria sorores, in spe resurrectionis", translates to "Consorts in realm and tomb, here we sleep, Elizabeth and Mary, sisters, in hope of resurrection". Elizabeth was lamented by many of her subjects, but others were relieved at her death. Expectations of King James started high but then declined, so by the 1620s there was a nostalgic revival of the cult of Elizabeth. Elizabeth was praised as a heroine of the Protestant cause and the ruler of a golden age. James was depicted as a Catholic sympathiser, presiding over a corrupt court. The triumphalist image that Elizabeth had cultivated towards the end of her reign, against a background of factionalism and military and economic difficulties, was taken at face value and her reputation inflated. Godfrey Goodman, Bishop of Gloucester, recalled: "When we had experience of a Scottish government, the Queen did seem to revive. Then was her memory much magnified." Elizabeth's reign became idealised as a time when crown, church and parliament had worked in constitutional balance. The picture of Elizabeth painted by her Protestant admirers of the early 17th century has proved lasting and influential. Her memory was also revived during the Napoleonic Wars, when the nation again found itself on the brink of invasion. In the Victorian era, the Elizabethan legend was adapted to the imperial ideology of the day, and in the mid-20th century, Elizabeth was a romantic symbol of the national resistance to foreign threat. Historians of that period, such as J. E. Neale (1934) and A. L. Rowse (1950), interpreted Elizabeth's reign as a golden age of progress. Neale and Rowse also idealised the Queen personally: she always did everything right; her more unpleasant traits were ignored or explained as signs of stress. Recent historians, however, have taken a more complicated view of Elizabeth. Her reign is famous for the defeat of the Armada, and for successful raids against the Spanish, such as those on Cádiz in 1587 and 1596, but some historians point to military failures on land and at sea. In Ireland, Elizabeth's forces ultimately prevailed, but their tactics stain her record. Rather than as a brave defender of the Protestant nations against Spain and the Habsburgs, she is more often regarded as cautious in her foreign policies. She offered very limited aid to foreign Protestants and failed to provide her commanders with the funds to make a difference abroad. Elizabeth established an English church that helped shape a national identity and remains in place today. Those who praised her later as a Protestant heroine overlooked her refusal to drop all practices of Catholic origin from the Church of England. Historians note that in her day, strict Protestants regarded the Acts of Settlement and Uniformity of 1559 as a compromise. In fact, Elizabeth believed that faith was personal and did not wish, as Francis Bacon put it, to "make windows into men's hearts and secret thoughts". Though Elizabeth followed a largely defensive foreign policy, her reign raised England's status abroad. "She is only a woman, only mistress of half an island," marvelled Pope Sixtus V, "and yet she makes herself feared by Spain, by France, by the Empire, by all". Under Elizabeth, the nation gained a new self-confidence and sense of sovereignty, as Christendom fragmented. Elizabeth was the first Tudor to recognise that a monarch ruled by popular consent. 
She therefore always worked with parliament and advisers she could trust to tell her the truth—a style of government that her Stuart successors failed to follow. Some historians have called her lucky; she believed that God was protecting her. Priding herself on being "mere English", Elizabeth trusted in God, honest advice, and the love of her subjects for the success of her rule. In a prayer, she offered thanks to God that: [At a time] when wars and seditions with grievous persecutions have vexed almost all kings and countries round about me, my reign hath been peacable, and my realm a receptacle to thy afflicted Church. The love of my people hath appeared firm, and the devices of my enemies frustrate. Euclidean algorithm In mathematics, the Euclidean algorithm, or Euclid's algorithm, is an efficient method for computing the greatest common divisor (GCD) of two numbers, the largest number that divides both of them without leaving a remainder. It is named after the ancient Greek mathematician Euclid, who first described it in his "Elements" (c. 300 BC). It is an example of an "algorithm", a step-by-step procedure for performing a calculation according to well-defined rules, and is one of the oldest algorithms in common use. It can be used to reduce fractions to their simplest form, and is a part of many other number-theoretic and cryptographic calculations. The Euclidean algorithm is based on the principle that the greatest common divisor of two numbers does not change if the larger number is replaced by its difference with the smaller number. For example, 21 is the GCD of 252 and 105 (as 252 = 21 × 12 and 105 = 21 × 5), and the same number 21 is also the GCD of 105 and 252 − 105 = 147. Since this replacement reduces the larger of the two numbers, repeating this process gives successively smaller pairs of numbers until the two numbers become equal. When that occurs, they are the GCD of the original two numbers. By reversing the steps, the GCD can be expressed as a sum of the two original numbers each multiplied by a positive or negative integer, e.g., 21 = 5 × 105 + (−2) × 252. The fact that the GCD can always be expressed in this way is known as Bézout's identity. The version of the Euclidean algorithm described above (and by Euclid) can take many subtraction steps to find the GCD when one of the given numbers is much bigger than the other. A more efficient version of the algorithm shortcuts these steps, instead replacing the larger of the two numbers by its remainder when divided by the smaller of the two (with this version, the algorithm stops when reaching a zero remainder). With this improvement, the algorithm never requires more steps than five times the number of digits (base 10) of the smaller integer. This was proven by Gabriel Lamé in 1844, and marks the beginning of computational complexity theory. Additional methods for improving the algorithm's efficiency were developed in the 20th century. The Euclidean algorithm has many theoretical and practical applications. It is used for reducing fractions to their simplest form and for performing division in modular arithmetic. Computations using this algorithm form part of the cryptographic protocols that are used to secure internet communications, and in methods for breaking these cryptosystems by factoring large composite numbers. 
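A minimal sketch in Python of the two versions described above, the subtraction-based form given by Euclid and the faster remainder-based form; the language and function names are illustrative choices, not part of the original description:

    def gcd_subtraction(a, b):
        # Replace the larger number by its difference with the smaller
        # until the two numbers are equal; that common value is the GCD.
        while a != b:
            if a > b:
                a = a - b
            else:
                b = b - a
        return a

    def gcd_remainder(a, b):
        # Replace the larger number by its remainder on division by the
        # smaller; stop when a zero remainder is reached.
        while b != 0:
            a, b = b, a % b
        return a

    # The example from the text: gcd(252, 105) = 21, via 252 - 105 = 147, and so on.
    assert gcd_subtraction(252, 105) == 21
    assert gcd_remainder(252, 105) == 21

When one input is much larger than the other, the subtraction form performs many more loop iterations than the remainder form, which is the efficiency difference noted above.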
The Euclidean algorithm may be used to solve Diophantine equations, such as finding numbers that satisfy multiple congruences according to the Chinese remainder theorem, to construct continued fractions, and to find accurate rational approximations to real numbers. Finally, it can be used as a basic tool for proving theorems in number theory such as Lagrange's four-square theorem and the uniqueness of prime factorizations. The original algorithm was described only for natural numbers and geometric lengths (real numbers), but the algorithm was generalized in the 19th century to other types of numbers, such as Gaussian integers and polynomials of one variable. This led to modern abstract algebraic notions such as Euclidean domains. The Euclidean algorithm calculates the greatest common divisor (GCD) of two natural numbers "a" and "b". The greatest common divisor "g" is the largest natural number that divides both "a" and "b" without leaving a remainder. Synonyms for the GCD include the "greatest common factor" (GCF), the "highest common factor" (HCF), the "highest common divisor" (HCD), and the "greatest common measure" (GCM). The greatest common divisor is often written as gcd("a", "b") or, more simply, as ("a", "b"), although the latter notation is also used for various mathematical concepts, such as open intervals and ordered pairs, including coordinates of a point or vector in a two-dimensional space. If gcd("a", "b") = 1, then "a" and "b" are said to be coprime (or relatively prime). This property does not imply that "a" or "b" are themselves prime numbers. For example, neither 6 nor 35 is a prime number, since they both have two prime factors: 6 = 2 × 3 and 35 = 5 × 7. Nevertheless, 6 and 35 are coprime. No natural number other than 1 divides both 6 and 35, since they have no prime factors in common. Let "g" = gcd("a", "b"). Since "a" and "b" are both multiples of "g", they can be written "a" = "mg" and "b" = "ng", and there is no larger number "G" > "g" for which this is true. The natural numbers "m" and "n" must be coprime, since any common factor could be factored out of "m" and "n" to make "g" greater. Thus, any other number "c" that divides both "a" and "b" must also divide "g". The greatest common divisor "g" of "a" and "b" is the unique (positive) common divisor of "a" and "b" that is divisible by any other common divisor "c". The GCD can be visualized as follows. Consider a rectangular area "a" by "b", and any common divisor "c" that divides both "a" and "b" exactly. The sides of the rectangle can be divided into segments of length "c", which divides the rectangle into a grid of squares of side length "c". The greatest common divisor "g" is the largest value of "c" for which this is possible. For illustration, a 24-by-60 rectangular area can be divided into a grid of: 1-by-1 squares, 2-by-2 squares, 3-by-3 squares, 4-by-4 squares, 6-by-6 squares or 12-by-12 squares. Therefore, 12 is the greatest common divisor of 24 and 60. A 24-by-60 rectangular area can be divided into a grid of 12-by-12 squares, with two squares along one edge (24/12 = 2) and five squares along the other (60/12 = 5). The GCD of two numbers "a" and "b" is the product of the prime factors shared by the two numbers, where a same prime factor can be used multiple times, but only as long as the product of these factors divides both "a" and "b". 
For example, since 1386 can be factored into 2 × 3 × 3 × 7 × 11, and 3213 can be factored into 3 × 3 × 3 × 7 × 17, the greatest common divisor of 1386 and 3213 equals 63 = 3 × 3 × 7, the product of their shared prime factors. If two numbers have no prime factors in common, their greatest common divisor is 1 (obtained here as an instance of the empty product), in other words they are coprime. A key advantage of the Euclidean algorithm is that it can find the GCD efficiently without having to compute the prime factors. Factorization of large integers is believed to be a computationally very difficult problem, and the security of many widely used cryptographic protocols is based upon its infeasibility. Another definition of the GCD is helpful in advanced mathematics, particularly ring theory. The greatest common divisor "g"  of two nonzero numbers "a" and "b" is also their smallest positive integral linear combination, that is, the smallest positive number of the form "ua" + "vb" where "u" and "v" are integers. The set of all integral linear combinations of "a" and "b" is actually the same as the set of all multiples of "g" ("mg", where "m" is an integer). In modern mathematical language, the ideal generated by "a" and "b" is the ideal generated by "g" alone (an ideal generated by a single element is called a principal ideal, and all ideals of the integers are principal ideals). Some properties of the GCD are in fact easier to see with this description, for instance the fact that any common divisor of "a" and "b" also divides the GCD (it divides both terms of "ua" + "vb"). The equivalence of this GCD definition with the other definitions is described below. The GCD of three or more numbers equals the product of the prime factors common to all the numbers, but it can also be calculated by repeatedly taking the GCDs of pairs of numbers. For example, Thus, Euclid's algorithm, which computes the GCD of two integers, suffices to calculate the GCD of arbitrarily many integers. The Euclidean algorithm proceeds in a series of steps such that the output of each step is used as an input for the next one. Let "k" be an integer that counts the steps of the algorithm, starting with zero. Thus, the initial step corresponds to "k" = 0, the next step corresponds to "k" = 1, and so on. Each step begins with two nonnegative remainders "r" and "r". Since the algorithm ensures that the remainders decrease steadily with every step, "r" is less than its predecessor "r". The goal of the "k"th step is to find a quotient "q" and remainder "r" that satisfy the equation and that have "r" < "r". In other words, multiples of the smaller number "r" are subtracted from the larger number "r" until the remainder "r" is smaller than "r". In the initial step ("k" = 0), the remainders "r" and "r" equal "a" and "b", the numbers for which the GCD is sought. In the next step ("k" = 1), the remainders equal "b" and the remainder "r" of the initial step, and so on. Thus, the algorithm can be written as a sequence of equations If "a" is smaller than "b", the first step of the algorithm swaps the numbers. For example, if "a" < "b", the initial quotient "q" equals zero, and the remainder "r" is "a". Thus, "r" is smaller than its predecessor "r" for all "k" ≥ 0. Since the remainders decrease with every step but can never be negative, a remainder "r" must eventually equal zero, at which point the algorithm stops. The final nonzero remainder "r" is the greatest common divisor of "a" and "b". 
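Two points made earlier in this section can be checked directly: the GCD equals the product of the shared prime factors (63 for 1386 and 3213), and the GCD of more than two numbers can be obtained by repeatedly taking pairwise GCDs. A small illustrative Python sketch:

    from functools import reduce
    from math import gcd

    # Shared prime factors of 1386 = 2*3*3*7*11 and 3213 = 3*3*3*7*17 give 3*3*7 = 63.
    assert gcd(1386, 3213) == 3 * 3 * 7 == 63

    # The GCD of arbitrarily many integers reduces to repeated pairwise GCDs.
    def gcd_many(numbers):
        return reduce(gcd, numbers)

    assert gcd_many([1386, 3213, 630]) == 63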
The number "N" cannot be infinite because there are only a finite number of nonnegative integers between the initial remainder "r" and zero. The validity of the Euclidean algorithm can be proven by a two-step argument. In the first step, the final nonzero remainder "r" is shown to divide both "a" and "b". Since it is a common divisor, it must be less than or equal to the greatest common divisor "g". In the second step, it is shown that any common divisor of "a" and "b", including "g", must divide "r"; therefore, "g" must be less than or equal to "r". These two conclusions are inconsistent unless "r" = "g". To demonstrate that "r" divides both "a" and "b" (the first step), "r" divides its predecessor "r" since the final remainder "r" is zero. "r" also divides its next predecessor "r" because it divides both terms on the right-hand side of the equation. Iterating the same argument, "r" divides all the preceding remainders, including "a" and "b". None of the preceding remainders "r", "r", etc. divide "a" and "b", since they leave a remainder. Since "r" is a common divisor of "a" and "b", "r" ≤ "g". In the second step, any natural number "c" that divides both "a" and "b" (in other words, any common divisor of "a" and "b") divides the remainders "r". By definition, "a" and "b" can be written as multiples of "c" : "a" = "mc" and "b" = "nc", where "m" and "n" are natural numbers. Therefore, "c" divides the initial remainder "r", since "r" = "a" − "q""b" = "mc" − "q""nc" = ("m" − "q""n")"c". An analogous argument shows that "c" also divides the subsequent remainders "r", "r", etc. Therefore, the greatest common divisor "g" must divide "r", which implies that "g" ≤ "r". Since the first part of the argument showed the reverse ("r" ≤ "g"), it follows that "g" = "r". Thus, "g" is the greatest common divisor of all the succeeding pairs: For illustration, the Euclidean algorithm can be used to find the greatest common divisor of "a" = 1071 and "b" = 462. To begin, multiples of 462 are subtracted from 1071 until the remainder is less than 462. Two such multiples can be subtracted ("q" = 2), leaving a remainder of 147: Then multiples of 147 are subtracted from 462 until the remainder is less than 147. Three multiples can be subtracted ("q" = 3), leaving a remainder of 21: Then multiples of 21 are subtracted from 147 until the remainder is less than 21. Seven multiples can be subtracted ("q" = 7), leaving no remainder: Since the last remainder is zero, the algorithm ends with 21 as the greatest common divisor of 1071 and 462. This agrees with the gcd(1071, 462) found by prime factorization above. In tabular form, the steps are The Euclidean algorithm can be visualized in terms of the tiling analogy given above for the greatest common divisor. Assume that we wish to cover an "a"-by-"b" rectangle with square tiles exactly, where "a" is the larger of the two numbers. We first attempt to tile the rectangle using "b"-by-"b" square tiles; however, this leaves an "r"-by-"b" residual rectangle untiled, where "r" < "b". We then attempt to tile the residual rectangle with "r"-by-"r" square tiles. This leaves a second residual rectangle "r"-by-"r", which we attempt to tile using "r"-by-"r" square tiles, and so on. The sequence ends when there is no residual rectangle, i.e., when the square tiles cover the previous residual rectangle exactly. The length of the sides of the smallest square tile is the GCD of the dimensions of the original rectangle. 
For example, the smallest square tile in the adjacent figure is 21-by-21 (shown in red), and 21 is the GCD of 1071 and 462, the dimensions of the original rectangle (shown in green). At every step "k", the Euclidean algorithm computes a quotient "q" and remainder "r" from two numbers "r" and "r" where the magnitude of "r" is strictly less than that of "r". The theorem which underlies the definition of the Euclidean division ensures that such a quotient and remainder always exist and are unique. In Euclid's original version of the algorithm, the quotient and remainder are found by repeated subtraction; that is, "r" is subtracted from "r" repeatedly until the remainder "r" is smaller than "r". After that "r" and "r" are exchanged and the process is iterated. Euclidean division reduces all the steps between two exchanges into a single step, which is thus more efficient. Moreover, the quotients are not needed, thus one may replace Euclidean division by the modulo operation, which gives only the remainder. Thus the iteration of the Euclidean algorithm becomes simply Implementations of the algorithm may be expressed in pseudocode. For example, the division-based version may be programmed as At the beginning of the "k"th iteration, the variable "b" holds the latest remainder "r", whereas the variable "a" holds its predecessor, "r". The step "b" := "a" mod "b" is equivalent to the above recursion formula "r" ≡ "r" mod "r". The temporary variable "t" holds the value of "r" while the next remainder "r" is being calculated. At the end of the loop iteration, the variable "b" holds the remainder "r", whereas the variable "a" holds its predecessor, "r". In the subtraction-based version which was Euclid's original version, the remainder calculation ("b" = "a" mod "b") is replaced by repeated subtraction. Contrary to the division-based version, which works with arbitrary integers as input, the subtraction-based version supposes that the input consists of positive integers and stops when "a" = "b": The variables "a" and "b" alternate holding the previous remainders "r" and "r". Assume that "a" is larger than "b" at the beginning of an iteration; then "a" equals "r", since "r" > "r". During the loop iteration, "a" is reduced by multiples of the previous remainder "b" until "a" is smaller than "b". Then "a" is the next remainder "r". Then "b" is reduced by multiples of "a" until it is again smaller than "a", giving the next remainder "r", and so on. The recursive version is based on the equality of the GCDs of successive remainders and the stopping condition gcd("r", 0) = "r". For illustration, the gcd(1071, 462) is calculated from the equivalent gcd(462, 1071 mod 462) = gcd(462, 147). The latter GCD is calculated from the gcd(147, 462 mod 147) = gcd(147, 21), which in turn is calculated from the gcd(21, 147 mod 21) = gcd(21, 0) = 21. In another version of Euclid's algorithm, the quotient at each step is increased by one if the resulting negative remainder is smaller in magnitude than the typical positive remainder. Previously, the equation assumed that . However, an alternative negative remainder can be computed: if or if . If is replaced by when , then one gets a variant of Euclidean algorithm such that at each step. Leopold Kronecker has shown that this version requires the least number of steps of any version of Euclid's algorithm. 
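The recursive version and the least-absolute-remainder variant described above might be sketched as follows; this is an illustrative Python rendering, and the variant's rounding rule is one reasonable reading of the description:

    def gcd_recursive(a, b):
        # gcd(a, b) = gcd(b, a mod b), with gcd(r, 0) = r as the stopping condition.
        return a if b == 0 else gcd_recursive(b, a % b)

    def gcd_least_absolute(a, b):
        # Variant: choose the remainder of smallest magnitude, allowing it to be negative.
        while b != 0:
            r = a % b              # ordinary nonnegative remainder, 0 <= r < b
            if 2 * r > b:          # the negative remainder r - b is smaller in magnitude
                r = r - b
            a, b = b, abs(r)       # the sign does not affect the GCD
        return a

    assert gcd_recursive(1071, 462) == gcd_least_absolute(1071, 462) == 21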
More generally, it has been proven that, for all input numbers "a" and "b", the number of steps is minimal if and only if the quotient at each step is chosen so that the magnitude of the new remainder is less than 1/φ ≈ 0.618 times the previous remainder, where φ is the golden ratio. The Euclidean algorithm is one of the oldest algorithms in common use. It appears in Euclid's "Elements" (c. 300 BC), specifically in Book 7 (Propositions 1–2) and Book 10 (Propositions 2–3). In Book 7, the algorithm is formulated for integers, whereas in Book 10, it is formulated for lengths of line segments. (In modern usage, one would say it was formulated there for real numbers. But lengths, areas, and volumes, represented as real numbers in modern usage, are not measured in the same units and there is no natural unit of length, area, or volume; the concept of real numbers was unknown at that time.) The latter algorithm is geometrical. The GCD of two lengths "a" and "b" corresponds to the greatest length "g" that measures "a" and "b" evenly; in other words, the lengths "a" and "b" are both integer multiples of the length "g". The algorithm was probably not discovered by Euclid, who compiled results from earlier mathematicians in his "Elements". The mathematician and historian B. L. van der Waerden suggests that Book VII derives from a textbook on number theory written by mathematicians in the school of Pythagoras. The algorithm was probably known by Eudoxus of Cnidus (about 375 BC). The algorithm may even pre-date Eudoxus, judging from the use of the technical term ἀνθυφαίρεσις ("anthyphairesis", reciprocal subtraction) in works by Euclid and Aristotle. Centuries later, Euclid's algorithm was discovered independently both in India and in China, primarily to solve Diophantine equations that arose in astronomy and in making accurate calendars. In the late 5th century, the Indian mathematician and astronomer Aryabhata described the algorithm as the "pulverizer", perhaps because of its effectiveness in solving Diophantine equations. Although a special case of the Chinese remainder theorem had already been described in the Chinese book "Sunzi Suanjing", the general solution was published by Qin Jiushao in his 1247 book "Shushu Jiuzhang" (數書九章 "Mathematical Treatise in Nine Sections"). The Euclidean algorithm was first described in Europe in the second edition of Bachet's "Problèmes plaisants et délectables" ("Pleasant and enjoyable problems", 1624). In Europe, it was likewise used to solve Diophantine equations and in developing continued fractions. The extended Euclidean algorithm was published by the English mathematician Nicholas Saunderson, who attributed it to Roger Cotes as a method for computing continued fractions efficiently. In the 19th century, the Euclidean algorithm led to the development of new number systems, such as Gaussian integers and Eisenstein integers. In 1815, Carl Gauss used the Euclidean algorithm to demonstrate unique factorization of Gaussian integers, although his work was first published in 1832. Gauss mentioned the algorithm in his "Disquisitiones Arithmeticae" (published 1801), but only as a method for continued fractions. Peter Gustav Lejeune Dirichlet seems to have been the first to describe the Euclidean algorithm as the basis for much of number theory. Lejeune Dirichlet noted that many results of number theory, such as unique factorization, would hold true for any other system of numbers to which the Euclidean algorithm could be applied. 
Lejeune Dirichlet's lectures on number theory were edited and extended by Richard Dedekind, who used Euclid's algorithm to study algebraic integers, a new general type of number. For example, Dedekind was the first to prove Fermat's two-square theorem using the unique factorization of Gaussian integers. Dedekind also defined the concept of a Euclidean domain, a number system in which a generalized version of the Euclidean algorithm can be defined (as described below). In the closing decades of the 19th century, the Euclidean algorithm gradually became eclipsed by Dedekind's more general theory of ideals. Other applications of Euclid's algorithm were developed in the 19th century. In 1829, Charles Sturm showed that the algorithm was useful in the Sturm chain method for counting the real roots of polynomials in any given interval. The Euclidean algorithm was the first integer relation algorithm, which is a method for finding integer relations between commensurate real numbers. Several novel integer relation algorithms have been developed, such as the algorithm of Helaman Ferguson and R.W. Forcade (1979) and the LLL algorithm. In 1969, Cole and Davie developed a two-player game based on the Euclidean algorithm, called "The Game of Euclid", which has an optimal strategy. The players begin with two piles of "a" and "b" stones. The players take turns removing "m" multiples of the smaller pile from the larger. Thus, if the two piles consist of "x" and "y" stones, where "x" is larger than "y", the next player can reduce the larger pile from "x" stones to "x" − "my" stones, as long as the latter is a nonnegative integer. The winner is the first player to reduce one pile to zero stones. Bézout's identity states that the greatest common divisor "g" of two integers "a" and "b" can be represented as a linear sum of the original two numbers "a" and "b". In other words, it is always possible to find integers "s" and "t" such that "g" = "sa" + "tb". The integers "s" and "t" can be calculated from the quotients "q", "q", etc. by reversing the order of equations in Euclid's algorithm. Beginning with the next-to-last equation, "g" can be expressed in terms of the quotient "q" and the two preceding remainders, "r" and "r": Those two remainders can be likewise expressed in terms of their quotients and preceding remainders, Substituting these formulae for "r" and "r" into the first equation yields "g" as a linear sum of the remainders "r" and "r". The process of substituting remainders by formulae involving their predecessors can be continued until the original numbers "a" and "b" are reached: After all the remainders "r", "r", etc. have been substituted, the final equation expresses "g" as a linear sum of "a" and "b": "g" = "sa" + "tb". Bézout's identity, and therefore the previous algorithm, can both be generalized to the context of Euclidean domains. Bézout's identity provides yet another definition of the greatest common divisor "g" of two numbers "a" and "b". Consider the set of all numbers "ua" + "vb", where "u" and "v" are any two integers. Since "a" and "b" are both divisible by "g", every number in the set is divisible by "g". In other words, every number of the set is an integer multiple of "g". This is true for every common divisor of "a" and "b". However, unlike other common divisors, the greatest common divisor is a member of the set; by Bézout's identity, choosing "u" = "s" and "v" = "t" gives "g". 
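As a concrete instance of the back-substitution just described, the worked example gcd(1071, 462) = 21 can be unwound by hand; the short Python check below (illustrative only) mirrors the two substitutions:

    # Forward steps:   1071 = 2*462 + 147   and   462 = 3*147 + 21
    # Back-substitute: 21 = 462 - 3*147
    #                     = 462 - 3*(1071 - 2*462)
    #                     = (-3)*1071 + 7*462
    s, t = -3, 7
    assert s * 1071 + t * 462 == 21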
A smaller common divisor cannot be a member of the set, since every member of the set must be divisible by "g". Conversely, any multiple "m" of "g" can be obtained by choosing "u" = "ms" and "v" = "mt", where "s" and "t" are the integers of Bézout's identity. This may be seen by multiplying Bézout's identity by "m", Therefore, the set of all numbers "ua" + "vb" is equivalent to the set of multiples "m" of "g". In other words, the set of all possible sums of integer multiples of two numbers ("a" and "b") is equivalent to the set of multiples of gcd("a", "b"). The GCD is said to be the generator of the ideal of "a" and "b". This GCD definition led to the modern abstract algebraic concepts of a principal ideal (an ideal generated by a single element) and a principal ideal domain (a domain in which every ideal is a principal ideal). Certain problems can be solved using this result. For example, consider two measuring cups of volume "a" and "b". By adding/subtracting "u" multiples of the first cup and "v" multiples of the second cup, any volume "ua" + "vb" can be measured out. These volumes are all multiples of "g" = gcd("a", "b"). The integers "s" and "t" of Bézout's identity can be computed efficiently using the extended Euclidean algorithm. This extension adds two recursive equations to Euclid's algorithm with the starting values Using this recursion, Bézout's integers "s" and "t" are given by "s" = "s" and "t" = "t", where "N+1" is the step on which the algorithm terminates with "r" = 0. The validity of this approach can be shown by induction. Assume that the recursion formula is correct up to step "k" − 1 of the algorithm; in other words, assume that for all "j" less than "k". The "k"th step of the algorithm gives the equation Since the recursion formula has been assumed to be correct for "r" and "r", they may be expressed in terms of the corresponding "s" and "t" variables Rearranging this equation yields the recursion formula for step "k", as required The integers "s" and "t" can also be found using an equivalent matrix method. The sequence of equations of Euclid's algorithm can be written as a product of 2-by-2 quotient matrices multiplying a two-dimensional remainder vector Let M represent the product of all the quotient matrices This simplifies the Euclidean algorithm to the form To express "g" as a linear sum of "a" and "b", both sides of this equation can be multiplied by the inverse of the matrix M. The determinant of M equals (−1), since it equals the product of the determinants of the quotient matrices, each of which is negative one. Since the determinant of M is never zero, the vector of the final remainders can be solved using the inverse of M Since the top equation gives the two integers of Bézout's identity are "s" = (−1)"m" and "t" = (−1)"m". The matrix method is as efficient as the equivalent recursion, with two multiplications and two additions per step of the Euclidean algorithm. Bézout's identity is essential to many applications of Euclid's algorithm, such as demonstrating the unique factorization of numbers into prime factors. To illustrate this, suppose that a number "L" can be written as a product of two factors "u" and "v", that is, "L" = "uv". If another number "w" also divides "L" but is coprime with "u", then "w" must divide "v", by the following argument: If the greatest common divisor of "u" and "w" is 1, then integers "s" and "t" can be found such that by Bézout's identity. 
Multiplying both sides by "v" gives the relation Since "w" divides both terms on the right-hand side, it must also divide the left-hand side, "v". This result is known as Euclid's lemma. Specifically, if a prime number divides "L", then it must divide at least one factor of "L". Conversely, if a number "w" is coprime to each of a series of numbers "a", "a", …, "a", then "w" is also coprime to their product, "a" × "a" × … × "a". Euclid's lemma suffices to prove that every number has a unique factorization into prime numbers. To see this, assume the contrary, that there are two independent factorizations of "L" into "m" and "n" prime factors, respectively Since each prime "p" divides "L" by assumption, it must also divide one of the "q" factors; since each "q" is prime as well, it must be that "p" = "q". Iteratively dividing by the "p" factors shows that each "p" has an equal counterpart "q"; the two prime factorizations are identical except for their order. The unique factorization of numbers into primes has many applications in mathematical proofs, as shown below. Diophantine equations are equations in which the solutions are restricted to integers; they are named after the 3rd-century Alexandrian mathematician Diophantus. A typical "linear" Diophantine equation seeks integers "x" and "y" such that where "a", "b" and "c" are given integers. This can be written as an equation for "x" in modular arithmetic: Let "g" be the greatest common divisor of "a" and "b". Both terms in "ax" + "by" are divisible by "g"; therefore, "c" must also be divisible by "g", or the equation has no solutions. By dividing both sides by "c"/"g", the equation can be reduced to Bezout's identity where "s" and "t" can be found by the extended Euclidean algorithm. This provides one solution to the Diophantine equation, "x" = "s" ("c"/"g") and "y" = "t" ("c"/"g"). In general, a linear Diophantine equation has no solutions, or an infinite number of solutions. To find the latter, consider two solutions, ("x", "y") and ("x", "y"), where or equivalently Therefore, the smallest difference between two "x" solutions is "b"/"g", whereas the smallest difference between two "y" solutions is "a"/"g". Thus, the solutions may be expressed as By allowing "u" to vary over all possible integers, an infinite family of solutions can be generated from a single solution ("x", "y"). If the solutions are required to be "positive" integers ("x" > 0, "y" > 0), only a finite number of solutions may be possible. This restriction on the acceptable solutions allows some systems of Diophantine equations with more unknowns than equations to have a finite number of solutions; this is impossible for a system of linear equations when the solutions can be any real number (see Underdetermined system). A finite field is a set of numbers with four generalized operations. The operations are called addition, subtraction, multiplication and division and have their usual properties, such as commutativity, associativity and distributivity. An example of a finite field is the set of 13 numbers {0, 1, 2, …, 12} using modular arithmetic. In this field, the results of any mathematical operation (addition, subtraction, multiplication, or division) is reduced modulo 13; that is, multiples of 13 are added or subtracted until the result is brought within the range 0–12. For example, the result of 5 × 7 = 35 mod 13 = 9. Such finite fields can be defined for any prime "p"; using more sophisticated definitions, they can also be defined for any power "m" of a prime "p". 
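A sketch of the extended Euclidean algorithm and its use on a linear Diophantine equation, following the recipe above; the Python rendering, the function name, and the example equation 1071x + 462y = 42 are illustrative assumptions, not from the source:

    def extended_gcd(a, b):
        # Returns (g, s, t) with g = gcd(a, b) = s*a + t*b (Bezout's identity).
        old_r, r = a, b
        old_s, s = 1, 0
        old_t, t = 0, 1
        while r != 0:
            q = old_r // r
            old_r, r = r, old_r - q * r
            old_s, s = s, old_s - q * s
            old_t, t = t, old_t - q * t
        return old_r, old_s, old_t

    g, s, t = extended_gcd(1071, 462)
    assert (g, s * 1071 + t * 462) == (21, 21)

    # Solve 1071*x + 462*y = 42: since g = 21 divides 42, scale the Bezout pair by 42/g.
    x, y = s * (42 // g), t * (42 // g)
    assert 1071 * x + 462 * y == 42

Adding any integer multiple of 462/g to x while subtracting the same multiple of 1071/g from y gives the infinite family of solutions described above.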
Finite fields are often called Galois fields, and are abbreviated as GF("p") or GF("p"). In such a field with "m" numbers, every nonzero element "a" has a unique modular multiplicative inverse, "a" such that This inverse can be found by solving the congruence equation "ax" ≡ 1 mod "m", or the equivalent linear Diophantine equation This equation can be solved by the Euclidean algorithm, as described above. Finding multiplicative inverses is an essential step in the RSA algorithm, which is widely used in electronic commerce; specifically, the equation determines the integer used to decrypt the message. Note that although the RSA algorithm uses rings rather than fields, the Euclidean algorithm can still be used to find a multiplicative inverse where one exists. The Euclidean algorithm also has other applications in error-correcting codes; for example, it can be used as an alternative to the Berlekamp–Massey algorithm for decoding BCH and Reed–Solomon codes, which are based on Galois fields. Euclid's algorithm can also be used to solve multiple linear Diophantine equations. Such equations arise in the Chinese remainder theorem, which describes a novel method to represent an integer "x". Instead of representing an integer by its digits, it may be represented by its remainders "x" modulo a set of "N" coprime numbers "m": The goal is to determine "x" from its "N" remainders "x". The solution is to combine the multiple equations into a single linear Diophantine equation with a much larger modulus "M" that is the product of all the individual moduli "m", and define "M" as Thus, each "M" is the product of all the moduli "except" "m". The solution depends on finding "N" new numbers "h" such that With these numbers "h", any integer "x" can be reconstructed from its remainders "x" by the equation Since these numbers "h" are the multiplicative inverses of the "M", they may be found using Euclid's algorithm as described in the previous subsection. The Euclidean algorithm can be used to arrange the set of all positive rational numbers into an infinite binary search tree, called the Stern–Brocot tree. The number 1 (expressed as a fraction 1/1) is placed at the root of the tree, and the location of any other number "a"/"b" can be found by computing gcd("a","b") using the original form of the Euclidean algorithm, in which each step replaces the larger of the two given numbers by its difference with the smaller number (not its remainder), stopping when two equal numbers are reached. A step of the Euclidean algorithm that replaces the first of the two numbers corresponds to a step in the tree from a node to its right child, and a step that replaces the second of the two numbers corresponds to a step in the tree from a node to its left child. The sequence of steps constructed in this way does not depend on whether "a"/"b" is given in lowest terms, and forms a path from the root to a node containing the number "a"/"b". This fact can be used to prove that each positive rational number appears exactly once in this tree. For example, 3/4 can be found by starting at the root, going to the left once, then to the right twice: The Euclidean algorithm has almost the same relationship to another binary tree on the rational numbers called the Calkin–Wilf tree. The difference is that the path is reversed: instead of producing a path from the root of the tree to a target, it produces a path from the target to the root. The Euclidean algorithm has a close relationship with continued fractions. 
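A brief Python sketch of two applications just described, finding a modular multiplicative inverse and reconstructing a number from its remainders in the manner of the Chinese remainder theorem; the moduli 5, 7, 11 and the value 123 are illustrative assumptions, and recent Python versions compute the inverse directly with the built-in pow:

    moduli = [5, 7, 11]                  # pairwise coprime moduli
    x = 123
    remainders = [x % m for m in moduli]

    # Reconstruct x modulo M = 5*7*11 = 385 from its remainders.
    M = 5 * 7 * 11
    total = 0
    for m_i, x_i in zip(moduli, remainders):
        M_i = M // m_i                   # product of all moduli except m_i
        h_i = pow(M_i, -1, m_i)          # modular inverse: h_i * M_i = 1 (mod m_i)
        total += x_i * h_i * M_i
    assert total % M == x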
The sequence of equations can be written in the form The last term on the right-hand side always equals the inverse of the left-hand side of the next equation. Thus, the first two equations may be combined to form The third equation may be used to substitute the denominator term "r"/"r", yielding The final ratio of remainders "r"/"r" can always be replaced using the next equation in the series, up to the final equation. The result is a continued fraction In the worked example above, the gcd(1071, 462) was calculated, and the quotients "q" were 2, 3 and 7, respectively. Therefore, the fraction 1071/462 may be written as can be confirmed by calculation. Calculating a greatest common divisor is an essential step in several integer factorization algorithms, such as Pollard's rho algorithm, Shor's algorithm, Dixon's factorization method and the Lenstra elliptic curve factorization. The Euclidean algorithm may be used to find this GCD efficiently. Continued fraction factorization uses continued fractions, which are determined using Euclid's algorithm. The computational efficiency of Euclid's algorithm has been studied thoroughly. This efficiency can be described by the number of division steps the algorithm requires, multiplied by the computational expense of each step. The first known analysis of Euclid's algorithm is due to A. A. L. Reynaud in 1811, who showed that the number of division steps on input ("u", "v") is bounded by "v"; later he improved this to "v"/2  + 2. Later, in 1841, P. J. E. Finck showed that the number of division steps is at most 2 log "v" + 1, and hence Euclid's algorithm runs in time polynomial in the size of the input. Émile Léger, in 1837, studied the worst case, which is when the inputs are consecutive Fibonacci numbers. Finck's analysis was refined by Gabriel Lamé in 1844, who showed that the number of steps required for completion is never more than five times the number "h" of base-10 digits of the smaller number "b". In the uniform cost model (suitable for analyzing the complexity of gcd calculation on numbers that fit into a single machine word), each step of the algorithm takes constant time, and Lamé's analysis implies that the total running time is also "O"("h"). However, in a model of computation suitable for computation with larger numbers, the computational expense of a single remainder computation in the algorithm can be as large as "O"("h"). In this case the total time for all of the steps of the algorithm can be analyzed using a telescoping series, showing that it is also "O"("h"). Modern algorithmic techniques based on the Schönhage–Strassen algorithm for fast integer multiplication can be used to speed this up, leading to quasilinear algorithms for the GCD. The number of steps to calculate the GCD of two natural numbers, "a" and "b", may be denoted by "T"("a", "b"). If "g" is the GCD of "a" and "b", then "a" = "mg" and "b" = "ng" for two coprime numbers "m" and "n". Then as may be seen by dividing all the steps in the Euclidean algorithm by "g". By the same argument, the number of steps remains the same if "a" and "b" are multiplied by a common factor "w": "T"("a", "b") = "T"("wa", "wb"). Therefore, the number of steps "T" may vary dramatically between neighboring pairs of numbers, such as T("a", "b") and T("a", "b" + 1), depending on the size of the two GCDs. The recursive nature of the Euclidean algorithm gives another equation where "T"("x", 0) = 0 by assumption. 
If the Euclidean algorithm requires "N" steps for a pair of natural numbers "a" > "b" > 0, the smallest values of "a" and "b" for which this is true are the Fibonacci numbers "F"_"N"+2 and "F"_"N"+1, respectively. More precisely, if the Euclidean algorithm requires "N" steps for the pair "a" > "b", then one has "a" ≥ "F"_"N"+2 and "b" ≥ "F"_"N"+1. This can be shown by induction. If "N" = 1, "b" divides "a" with no remainder; the smallest natural numbers for which this is true are "b" = 1 and "a" = 2, which are "F"_2 and "F"_3, respectively. Now assume that the result holds for all values of "N" up to "M" − 1. The first step of the "M"-step algorithm is "a" = "q"_0"b" + "r"_0, and the Euclidean algorithm requires "M" − 1 steps for the pair "b" > "r"_0. By induction hypothesis, one has "b" ≥ "F"_"M"+1 and "r"_0 ≥ "F"_"M". Therefore, "a" = "q"_0"b" + "r"_0 ≥ "b" + "r"_0 ≥ "F"_"M"+1 + "F"_"M" = "F"_"M"+2, which is the desired inequality. This proof, published by Gabriel Lamé in 1844, represents the beginning of computational complexity theory, and also the first practical application of the Fibonacci numbers. This result suffices to show that the number of steps in Euclid's algorithm can never be more than five times the number of its digits (base 10). For if the algorithm requires "N" steps, then "b" is greater than or equal to "F"_"N"+1, which in turn is greater than or equal to "φ"^"N"−1, where "φ" is the golden ratio. Since "b" ≥ "φ"^"N"−1, then "N" − 1 ≤ log_"φ" "b". Since log_10 "φ" > 1/5, ("N" − 1)/5 < log_10 "φ" log_"φ" "b" = log_10 "b". Thus, "N" ≤ 5 log_10 "b". Thus, the Euclidean algorithm always needs less than "O"("h") divisions, where "h" is the number of digits in the smaller number "b". The average number of steps taken by the Euclidean algorithm has been defined in three different ways. The first definition is the average time "T"("a") required to calculate the GCD of a given number "a" and a smaller natural number "b" chosen with equal probability from the integers 0 to "a" − 1. This average grows in proportion to the natural logarithm of "a", with the residual error being of order "a"^(−1/6 + "ε"), where "ε" is infinitesimal. The constant term "C" in this approximation, known as Porter's constant, can be expressed in terms of "γ", the Euler–Mascheroni constant, and ζ′, the derivative of the Riemann zeta function. The leading coefficient (12/π^2) ln 2 was determined by two independent methods. Since the first average can be calculated from a second average τ("a"), taken over values of "b" coprime with "a", by summing over the divisors "d" of "a", it can be approximated by a formula whose correction term involves the Mangoldt function Λ("d"). A third average "Y"("n") is defined as the mean number of steps required when both "a" and "b" are chosen randomly (with uniform distribution) from 1 to "n". Substituting the approximate formula for "T"("a") into this equation yields an estimate for "Y"("n"), which again grows as (12/π^2) ln 2 ln "n" to leading order. In each step "k" of the Euclidean algorithm, the quotient "q"_"k" and remainder "r"_"k" are computed for a given pair of integers "r"_"k"−2 and "r"_"k"−1, namely "r"_"k"−2 = "q"_"k""r"_"k"−1 + "r"_"k". The computational expense per step is associated chiefly with finding "q"_"k", since the remainder "r"_"k" can be calculated quickly from "r"_"k"−2, "r"_"k"−1, and "q"_"k", as "r"_"k" = "r"_"k"−2 − "q"_"k""r"_"k"−1. The computational expense of dividing "h"-bit numbers scales as "O"("h"("ℓ"+1)), where "ℓ" is the length of the quotient. For comparison, Euclid's original subtraction-based algorithm can be much slower. A single integer division is equivalent to "q" subtractions, where "q" is the quotient. If the ratio of "a" and "b" is very large, the quotient is large and many subtractions will be required. On the other hand, it has been shown that the quotients are very likely to be small integers. The probability of a given quotient "q" is approximately log_2("u"/("u" − 1)) where "u" = ("q" + 1)^2. 
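The step-count function "T"("a", "b") and Lamé's five-digits bound are easy to check numerically. The sketch below is mine (the function name "division_steps" and the choice of consecutive Fibonacci numbers as worst-case inputs are illustrative); it counts division steps and compares them against five times the number of decimal digits of the smaller argument.

```python
def division_steps(a, b):
    """Count the division steps T(a, b) taken by the Euclidean algorithm."""
    steps = 0
    while b != 0:
        a, b = b, a % b
        steps += 1
    return steps

# Consecutive Fibonacci numbers are the worst case:
# gcd(F_{N+2}, F_{N+1}) takes exactly N division steps.
fib = [1, 1]
while len(fib) < 30:
    fib.append(fib[-1] + fib[-2])

for n in (10, 20, 28):
    a, b = fib[n], fib[n - 1]
    steps = division_steps(a, b)
    digits = len(str(b))
    print(a, b, steps, 5 * digits)   # steps never exceeds 5 * digits (Lamé's bound)
```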
For illustration, the probability of a quotient of 1, 2, 3, or 4 is roughly 41.5%, 17.0%, 9.3%, and 5.9%, respectively. Since the operation of subtraction is faster than division, particularly for large numbers, the subtraction-based Euclid's algorithm is competitive with the division-based version. This is exploited in the binary version of Euclid's algorithm. Combining the estimated number of steps with the estimated computational expense per step shows that Euclid's algorithm grows quadratically ("O"("h"^2)) with the average number of digits "h" in the initial two numbers "a" and "b". Let "h"_0, "h"_1, …, "h"_"N"−1 represent the number of digits in the successive remainders "r"_0, "r"_1, …, "r"_"N"−1. Since the number of steps "N" grows linearly with "h", the running time is bounded by "O"("h"^2). Euclid's algorithm is widely used in practice, especially for small numbers, due to its simplicity. For comparison, the efficiency of alternatives to Euclid's algorithm may be determined. One inefficient approach to finding the GCD of two natural numbers "a" and "b" is to calculate all their common divisors; the GCD is then the largest common divisor. The common divisors can be found by dividing both numbers by successive integers from 2 to the smaller number "b". The number of steps of this approach grows linearly with "b", or exponentially in the number of digits. Another inefficient approach is to find the prime factors of one or both numbers. As noted above, the GCD equals the product of the prime factors shared by the two numbers "a" and "b". Present methods for prime factorization are also inefficient; many modern cryptography systems even rely on that inefficiency. The binary GCD algorithm is an efficient alternative that substitutes division with faster operations by exploiting the binary representation used by computers. However, this alternative also scales like "O"("h"^2). It is generally faster than the Euclidean algorithm on real computers, even though it scales in the same way. Additional efficiency can be gleaned by examining only the leading digits of the two numbers "a" and "b". The binary algorithm can be extended to other bases ("k"-ary algorithms), with up to fivefold increases in speed. Lehmer's GCD algorithm uses the same general principle as the binary algorithm to speed up GCD computations in arbitrary bases. A recursive approach for very large integers (with more than 25,000 digits) leads to quasilinear integer GCD algorithms, such as those of Schönhage, and Stehlé and Zimmermann. These algorithms exploit the 2×2 matrix form of the Euclidean algorithm given above. These quasilinear methods generally scale as "O"("h" (log "h")^2 log log "h"). Although the Euclidean algorithm is used to find the greatest common divisor of two natural numbers (positive integers), it may be generalized to the real numbers, and to other mathematical objects, such as polynomials, quadratic integers and Hurwitz quaternions. In the latter cases, the Euclidean algorithm is used to demonstrate the crucial property of unique factorization, i.e., that such numbers can be factored uniquely into irreducible elements, the counterparts of prime numbers. Unique factorization is essential to many proofs of number theory. Euclid's algorithm can be applied to real numbers, as described by Euclid in Book 10 of his "Elements". The goal of the algorithm is to identify a real number "g" such that two given real numbers, "a" and "b", are integer multiples of it: "a" = "mg" and "b" = "ng", where "m" and "n" are integers. 
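For concreteness, here is a compact Python sketch of the binary GCD algorithm (Stein's algorithm) mentioned above, which replaces division with shifts, parity tests and subtraction. This is a generic textbook formulation of my own, not code drawn from any source discussed here.

```python
def binary_gcd(a, b):
    """Compute gcd(a, b) of nonnegative integers using shifts and subtraction only."""
    if a == 0:
        return b
    if b == 0:
        return a
    # Factor out the common powers of two: gcd(2a', 2b') = 2 * gcd(a', b').
    shift = 0
    while (a | b) & 1 == 0:
        a >>= 1
        b >>= 1
        shift += 1
    while a & 1 == 0:          # remaining factors of two in a alone are not in the gcd
        a >>= 1
    while b != 0:
        while b & 1 == 0:      # likewise, factors of two in b alone are not in the gcd
            b >>= 1
        if a > b:
            a, b = b, a        # keep a <= b so the subtraction below stays nonnegative
        b -= a                 # gcd(a, b) = gcd(a, b - a); b - a is now even
    return a << shift

print(binary_gcd(1071, 462))   # 21, matching the worked example in the text
```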
This identification is equivalent to finding an integer relation among the real numbers "a" and "b"; that is, it determines integers "s" and "t" such that "sa" + "tb" = 0. Euclid uses this algorithm to treat the question of incommensurable lengths. The real-number Euclidean algorithm differs from its integer counterpart in two respects. First, the remainders "r"_"k" are real numbers, although the quotients "q"_"k" are integers as before. Second, the algorithm is not guaranteed to end in a finite number of steps. If it does, the fraction "a"/"b" is a rational number, i.e., the ratio of two integers "m"/"n", and can be written as a finite continued fraction ["q"_0; "q"_1, "q"_2, …, "q"_"N"]. If the algorithm does not stop, the fraction "a"/"b" is an irrational number and can be described by an infinite continued fraction ["q"_0; "q"_1, "q"_2, …]. Examples of infinite continued fractions are the golden ratio "φ" = [1; 1, 1, …] and the square root of two, √2 = [1; 2, 2, …]. The algorithm is unlikely to stop, since almost all ratios "a"/"b" of two real numbers are irrational. An infinite continued fraction may be truncated at a step "k" to yield an approximation to "a"/"b" that improves as "k" is increased. The approximation is described by convergents "m"_"k"/"n"_"k"; the numerator and denominators are coprime and obey the recurrence relation "m"_"k" = "q"_"k""m"_"k"−1 + "m"_"k"−2 and "n"_"k" = "q"_"k""n"_"k"−1 + "n"_"k"−2, where "m"_−1 = "n"_−2 = 1 and "m"_−2 = "n"_−1 = 0 are the initial values of the recursion. The convergent "m"_"k"/"n"_"k" is the best rational number approximation to "a"/"b" with denominator at most "n"_"k". Polynomials in a single variable "x" can be added, multiplied and factored into irreducible polynomials, which are the analogs of the prime numbers for integers. The greatest common divisor polynomial "g"("x") of two polynomials "a"("x") and "b"("x") is defined as the product of their shared irreducible polynomials, which can be identified using the Euclidean algorithm. The basic procedure is similar to that for integers. At each step "k", a quotient polynomial "q"_"k"("x") and a remainder polynomial "r"_"k"("x") are identified to satisfy the recursive equation "r"_"k"−2("x") = "q"_"k"("x") "r"_"k"−1("x") + "r"_"k"("x"), where "r"_−2("x") = "a"("x") and "r"_−1("x") = "b"("x"). The quotient polynomial is chosen so that the leading term of "q"_"k"("x") "r"_"k"−1("x") equals the leading term of "r"_"k"−2("x"); this ensures that the degree of each remainder is smaller than the degree of its predecessor, deg["r"_"k"("x")] < deg["r"_"k"−1("x")]. Since the degree is a nonnegative integer, and since it decreases with every step, the Euclidean algorithm concludes in a finite number of steps. The final nonzero remainder is the greatest common divisor of the original two polynomials, "a"("x") and "b"("x"). For example, consider the following two quartic polynomials, which each factor into two quadratic polynomials: "a"("x") = "x"^4 − 4"x"^3 + 4"x"^2 − 3"x" + 14 = ("x"^2 − 5"x" + 7)("x"^2 + "x" + 2) and "b"("x") = "x"^4 + 8"x"^3 + 12"x"^2 + 17"x" + 6 = ("x"^2 + 7"x" + 3)("x"^2 + "x" + 2). Dividing "a"("x") by "b"("x") yields a remainder "r"_0("x") = "x"^3 + (2/3) "x"^2 + (5/3) "x" − (2/3). In the next step, "b"("x") is divided by "r"_0("x") yielding a remainder "r"_1("x") = "x"^2 + "x" + 2. Finally, dividing "r"_0("x") by "r"_1("x") yields a zero remainder, indicating that "r"_1("x") is the greatest common divisor polynomial of "a"("x") and "b"("x"), consistent with their factorization. Many of the applications described above for integers carry over to polynomials. The Euclidean algorithm can be used to solve linear Diophantine equations and Chinese remainder problems for polynomials; continued fractions of polynomials can also be defined. The polynomial Euclidean algorithm has other applications, such as Sturm chains, a method for counting the zeros of a polynomial that lie inside a given real interval. This in turn has applications in several areas, such as the Routh–Hurwitz stability criterion in control theory. 
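A direct way to check the polynomial example above is to run the same remainder recursion on coefficient lists with exact rational arithmetic. The sketch below is my own illustration (the helper names "poly_divmod", "poly_gcd" and "normalize" are invented for it); on the two quartics quoted in the text it returns the monic greatest common divisor "x"^2 + "x" + 2.

```python
from fractions import Fraction

def poly_divmod(num, den):
    """Divide polynomials given as coefficient lists, highest degree first."""
    num = [Fraction(c) for c in num]
    den = [Fraction(c) for c in den]
    quot = []
    while len(num) >= len(den) and any(num):
        coeff = num[0] / den[0]           # match the leading terms
        quot.append(coeff)
        for i, d in enumerate(den):
            num[i] -= coeff * d
        num.pop(0)                        # the leading coefficient is now zero
    return quot, num                      # num is the remainder

def normalize(p):
    """Drop leading zero coefficients."""
    while len(p) > 1 and p[0] == 0:
        p = p[1:]
    return p

def poly_gcd(a, b):
    """Euclidean algorithm on polynomials; the result is scaled to be monic."""
    a, b = normalize(a), normalize(b)
    while any(b):
        _, r = poly_divmod(a, b)
        a, b = b, normalize(r)
    lead = Fraction(a[0])
    return [Fraction(c) / lead for c in a]

a = [1, -4, 4, -3, 14]    # x^4 - 4x^3 + 4x^2 - 3x + 14
b = [1, 8, 12, 17, 6]     # x^4 + 8x^3 + 12x^2 + 17x + 6
print(poly_gcd(a, b))     # coefficients 1, 1, 2 -> the monic gcd x^2 + x + 2
```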
Finally, the coefficients of the polynomials need not be drawn from integers, real numbers or even the complex numbers. For example, the coefficients may be drawn from a general field, such as the finite fields GF("p") described above. The corresponding conclusions about the Euclidean algorithm and its applications hold even for such polynomials. The Gaussian integers are complex numbers of the form α = "u" + "vi", where "u" and "v" are ordinary integers and "i" is the square root of negative one. By defining an analog of the Euclidean algorithm, Gaussian integers can be shown to be uniquely factorizable, by the argument above. This unique factorization is helpful in many applications, such as deriving all Pythagorean triples or proving Fermat's theorem on sums of two squares. In general, the Euclidean algorithm is convenient in such applications, but not essential; for example, the theorems can often be proven by other arguments. The Euclidean algorithm developed for two Gaussian integers α and β is nearly the same as that for normal integers, but differs in two respects. As before, the task at each step "k" is to identify a quotient "q"_"k" and a remainder "r"_"k" such that "r"_"k" = "r"_"k"−2 − "q"_"k""r"_"k"−1, where "r"_−2 = α, "r"_−1 = β, and every remainder is strictly smaller than its predecessor, |"r"_"k"| < |"r"_"k"−1|. The first difference is that the quotients and remainders are themselves Gaussian integers, and thus are complex numbers. The quotients "q"_"k" are generally found by rounding the real and imaginary parts of the exact ratio (such as the complex number α/β) to the nearest integers. The second difference lies in the necessity of defining how one complex remainder can be "smaller" than another. To do this, a norm function "f"("u" + "vi") = "u"^2 + "v"^2 is defined, which converts every Gaussian integer "u" + "vi" into a normal integer. After each step "k" of the Euclidean algorithm, the norm of the remainder "f"("r"_"k") is smaller than the norm of the preceding remainder, "f"("r"_"k"−1). Since the norm is a nonnegative integer and decreases with every step, the Euclidean algorithm for Gaussian integers ends in a finite number of steps. The final nonzero remainder is the gcd(α,β), the Gaussian integer of largest norm that divides both α and β; it is unique up to multiplication by a unit, ±1 or ±"i". Many of the other applications of the Euclidean algorithm carry over to Gaussian integers. For example, it can be used to solve linear Diophantine equations and Chinese remainder problems for Gaussian integers; continued fractions of Gaussian integers can also be defined. A set of elements under two binary operations, + and ·, is called a Euclidean domain if it forms a commutative ring "R" and, roughly speaking, if a generalized Euclidean algorithm can be performed on them. The two operations of such a ring need not be the addition and multiplication of ordinary arithmetic; rather, they can be more general, such as the operations of a mathematical group or monoid. Nevertheless, these general operations should respect many of the laws governing ordinary arithmetic, such as commutativity, associativity and distributivity. The generalized Euclidean algorithm requires a "Euclidean function", i.e., a mapping "f" from "R" into the set of nonnegative integers such that, for any two nonzero elements "a" and "b" in "R", there exist "q" and "r" in "R" such that "a" = "qb" + "r" and "f"("r") < "f"("b"). An example of this mapping is the norm function used to order the Gaussian integers above. 
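The rounding-based division step for Gaussian integers is straightforward to express directly. In the Python sketch below (the representation of Gaussian integers as pairs of ints, the example values, and the function names are my own illustrative choices), each step rounds the real and imaginary parts of the exact quotient, and the norm of the remainder strictly decreases until the gcd is reached, unique up to a unit ±1 or ±"i".

```python
from fractions import Fraction

def gauss_mul(x, y):
    """Multiply Gaussian integers represented as (real, imaginary) pairs."""
    (a, b), (c, d) = x, y
    return (a * c - b * d, a * d + b * c)

def gauss_norm(x):
    """Norm f(u + vi) = u^2 + v^2."""
    u, v = x
    return u * u + v * v

def gauss_divmod(alpha, beta):
    """Quotient obtained by rounding the exact ratio alpha/beta to the nearest Gaussian integer."""
    (a, b), (c, d) = alpha, beta
    n = gauss_norm(beta)
    # alpha/beta = (a + bi)(c - di) / (c^2 + d^2), kept exact with Fractions
    re = Fraction(a * c + b * d, n)
    im = Fraction(b * c - a * d, n)
    q = (round(re), round(im))
    qb = gauss_mul(q, beta)
    r = (alpha[0] - qb[0], alpha[1] - qb[1])
    return q, r

def gauss_gcd(alpha, beta):
    """Euclidean algorithm on Gaussian integers."""
    while beta != (0, 0):
        _, r = gauss_divmod(alpha, beta)
        alpha, beta = beta, r
    return alpha

# 7 + i = (2 + i)(3 - i) and 1 + 8i = (2 + i)(2 + 3i) share the factor 2 + i.
print(gauss_gcd((7, 1), (1, 8)))   # (-1, 2), i.e. -1 + 2i = i * (2 + i), a unit multiple
```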
The function "f" can be the magnitude of the number, or the degree of a polynomial. The basic principle is that each step of the algorithm reduces "f" inexorably; hence, if "f" can be reduced only a finite number of times, the algorithm must stop in a finite number of steps. This principle relies heavily on the natural well-ordering of the non-negative integers; roughly speaking, this requires that every non-empty set of non-negative integers has a smallest member. The fundamental theorem of arithmetic applies to any Euclidean domain: Any number from a Euclidean domain can be factored uniquely into irreducible elements. Any Euclidean domain is a unique factorization domain (UFD), although the converse is not true. The Euclidean domains and the UFD's are subclasses of the GCD domains, domains in which a greatest common divisor of two numbers always exists. In other words, a greatest common divisor may exist (for all pairs of elements in a domain), although it may not be possible to find it using a Euclidean algorithm. A Euclidean domain is always a principal ideal domain (PID), an integral domain in which every ideal is a principal ideal. Again, the converse is not true: not every PID is a Euclidean domain. The unique factorization of Euclidean domains is useful in many applications. For example, the unique factorization of the Gaussian integers is convenient in deriving formulae for all Pythagorean triples and in proving Fermat's theorem on sums of two squares. Unique factorization was also a key element in an attempted proof of Fermat's Last Theorem published in 1847 by Gabriel Lamé, the same mathematician who analyzed the efficiency of Euclid's algorithm, based on a suggestion of Joseph Liouville. Lamé's approach required the unique factorization of numbers of the form "x" + ω"y", where "x" and "y" are integers, and ω = "e" is an "n"th root of 1, that is, ω = 1. Although this approach succeeds for some values of "n" (such as "n"=3, the Eisenstein integers), in general such numbers do "not" factor uniquely. This failure of unique factorization in some cyclotomic fields led Ernst Kummer to the concept of ideal numbers and, later, Richard Dedekind to ideals. The quadratic integer rings are helpful to illustrate Euclidean domains. Quadratic integers are generalizations of the Gaussian integers in which the imaginary unit "i" is replaced by a number ω. Thus, they have the form "u" + "v" ω, where "u" and "v" are integers and ω has one of two forms, depending on a parameter "D". If "D" does not equal a multiple of four plus one, then If, however, "D" does equal a multiple of four plus one, then If the function "f" corresponds to a norm function, such as that used to order the Gaussian integers above, then the domain is known as "norm-Euclidean". The norm-Euclidean rings of quadratic integers are exactly those where "D" = −11, −7, −3, −2, −1, 2, 3, 5, 6, 7, 11, 13, 17, 19, 21, 29, 33, 37, 41, 57 or 73. The quadratic integers with "D" = −1 and −3 are known as the Gaussian integers and Eisenstein integers, respectively. If "f" is allowed to be any Euclidean function, then the list of possible "D" values for which the domain is Euclidean is not yet known. The first example of a Euclidean domain that was not norm-Euclidean (with "D" = 69) was published in 1994. In 1973, Weinberger proved that a quadratic integer ring with "D" > 0 is Euclidean if, and only if, it is a principal ideal domain, provided that the generalized Riemann hypothesis holds. 
The Euclidean algorithm may be applied to noncommutative rings such as the set of Hurwitz quaternions. Let α and β represent two elements from such a ring. They have a common right divisor δ if α = ξδ and β = ηδ for some choice of ξ and η in the ring. Similarly, they have a common left divisor if α = δξ and β = δη for some choice of ξ and η in the ring. Since multiplication is not commutative, there are two versions of the Euclidean algorithm, one for right divisors and one for left divisors. Choosing the right divisors, the first step in finding the gcd(α, β) by the Euclidean algorithm can be written as ρ_0 = α − ψ_0β = (ξ − ψ_0η)δ, where ψ_0 represents the quotient and ρ_0 the remainder. This equation shows that any common right divisor of α and β is likewise a common divisor of the remainder ρ_0. The analogous equation for the left divisors would be ρ_0 = α − βψ_0 = δ(ξ − ηψ_0). With either choice, the process is repeated as above until the greatest common right or left divisor is identified. As in the Euclidean domain, the "size" of the remainder ρ must be strictly smaller than that of β, and there must be only a finite number of possible sizes for ρ, so that the algorithm is guaranteed to terminate. Most of the results for the GCD carry over to noncommutative numbers. For example, Bézout's identity states that the right gcd(α, β) can be expressed as a linear combination of α and β. In other words, there are numbers σ and τ such that the right greatest common divisor equals σα + τβ. The analogous identity for the left GCD is nearly the same: it equals ασ + βτ. Bézout's identity can be used to solve Diophantine equations. For instance, one of the standard proofs of Lagrange's four-square theorem, that every positive integer can be represented as a sum of four squares, is based on quaternion GCDs in this way. Association football Association football, more commonly known as football or soccer, is a team sport played between two teams of eleven players with a spherical ball. It is played by 250 million players in over 200 countries and dependencies, making it the world's most popular sport. The game is played on a rectangular field with a goal at each end. The object of the game is to score by moving the ball beyond the goal line into the opposing goal. Association football is one of a family of football codes which emerged from various ball games played worldwide since antiquity. The modern game traces its origins to 1863 when the Laws of the Game were originally codified in England by The Football Association. Players are not allowed to touch the ball with hands or arms while it is in play, save for the goalkeepers within the penalty area. Other players mainly use their feet to strike or pass the ball, but may also use any other part of their body except the hands and the arms. The team that scores most goals by the end of the match wins. If the score is level at the end of the game, either a draw is declared or the game goes into extra time or a penalty shootout depending on the format of the competition. Association football is governed internationally by the International Federation of Association Football (FIFA), which organises World Cups for both men and women every four years. The rules of association football were codified in England by the Football Association in 1863 and the name "association football" was coined to distinguish the game from the other forms of football played at the time, specifically rugby football. The first written "reference to the inflated ball used in the game" was in the mid-14th century: "Þe heued fro þe body went, Als it were a foteballe". 
The "Online Etymology Dictionary" states that the word "soccer" was "split off in 1863". According to Partha Mazumdar, the term "soccer" originated in England, first appearing in the 1880s as an Oxford "-er" abbreviation of the word "association". Within the English-speaking world, association football is now usually called "football" in the United Kingdom and mainly "soccer" in Canada and the United States. People in Australia, Ireland, South Africa and New Zealand use either or both terms, although national associations in Australia and New Zealand now primarily use "football" for the formal name. According to FIFA, the Chinese competitive game "cuju" (蹴鞠, literally "kick ball") is the earliest form of football for which there is evidence. "Cuju" players could use any part of the body apart from hands and the intent was kicking a ball through an opening into a net. It was remarkably similar to modern football, though similarities to rugby occurred. During the Han Dynasty (206 BC – 220 AD), "cuju" games were standardised and rules were established. "Phaininda" and "episkyros" were Greek ball games. An image of an "episkyros" player depicted in low relief on a vase at the National Archaeological Museum of Athens appears on the UEFA European Championship Cup. Athenaeus, writing in 228 AD, referenced the Roman ball game "harpastum". "Phaininda", "episkyros" and "harpastum" were played involving hands and violence. They all appear to have resembled rugby football, wrestling and volleyball more than what is recognizable as modern football. As with pre-codified "mob football", the antecedent of all modern football codes, these three games involved more handling the ball than kicking. Other games included "kemari" in Japan, "chuk-guk" in Korea and "woggabaliri" in Australia. Association football in itself does not have a classical history. Notwithstanding any similarities to other ball games played around the world FIFA has recognised that no historical connection exists with any game played in antiquity outside Europe. The modern rules of association football are based on the mid-19th century efforts to standardise the widely varying forms of football played in the public schools of England. The history of football in England dates back to at least the eighth century AD. The Cambridge Rules, first drawn up at Cambridge University in 1848, were particularly influential in the development of subsequent codes, including association football. The Cambridge Rules were written at Trinity College, Cambridge, at a meeting attended by representatives from Eton, Harrow, Rugby, Winchester and Shrewsbury schools. They were not universally adopted. During the 1850s, many clubs unconnected to schools or universities were formed throughout the English-speaking world, to play various forms of football. Some came up with their own distinct codes of rules, most notably the Sheffield Football Club, formed by former public school pupils in 1857, which led to formation of a Sheffield FA in 1867. In 1862, John Charles Thring of Uppingham School also devised an influential set of rules. These ongoing efforts contributed to the formation of The Football Association (The FA) in 1863, which first met on the morning of 26 October 1863 at the Freemasons' Tavern in Great Queen Street, London. The only school to be represented on this occasion was Charterhouse. The Freemason's Tavern was the setting for five more meetings between October and December, which eventually produced the first comprehensive set of rules. 
At the final meeting, the first FA treasurer, the representative from Blackheath, withdrew his club from the FA over the removal of two draft rules at the previous meeting: the first allowed for running with the ball in hand; the second for obstructing such a run by hacking (kicking an opponent in the shins), tripping and holding. Other English rugby clubs followed this lead and did not join the FA and instead in 1871 formed the Rugby Football Union. The eleven remaining clubs, under the charge of Ebenezer Cobb Morley, went on to ratify the original thirteen laws of the game. These rules included handling of the ball by "marks" and the lack of a crossbar, rules which made it remarkably similar to Victorian rules football being developed at that time in Australia. The Sheffield FA played by its own rules until the 1870s with the FA absorbing some of its rules until there was little difference between the games. The world's oldest football competition is the FA Cup, which was founded by C. W. Alcock and has been contested by English teams since 1872. The first official international football match also took place in 1872, between Scotland and England in Glasgow, again at the instigation of C. W. Alcock. England is also home to the world's first football league, which was founded in Birmingham in 1888 by Aston Villa director William McGregor. The original format contained 12 clubs from the Midlands and Northern England. The laws of the game are determined by the International Football Association Board (IFAB). The board was formed in 1886 after a meeting in Manchester of The Football Association, the Scottish Football Association, the Football Association of Wales, and the Irish Football Association. FIFA, the international football body, was formed in Paris in 1904 and declared that they would adhere to Laws of the Game of the Football Association. The growing popularity of the international game led to the admittance of FIFA representatives to the International Football Association Board in 1913. The board consists of four representatives from FIFA and one representative from each of the four British associations. Today, football is played at a professional level all over the world. Millions of people regularly go to football stadiums to follow their favourite teams, while billions more watch the game on television or on the internet. A very large number of people also play football at an amateur level. According to a survey conducted by FIFA published in 2001, over 240 million people from more than 200 countries regularly play football. Football has the highest global television audience in sport. In many parts of the world football evokes great passions and plays an important role in the life of individual fans, local communities, and even nations. R. Kapuscinski says that Europeans who are polite, modest, or humble fall easily into rage when playing or watching football games. The Côte d'Ivoire national football team helped secure a truce to the nation's civil war in 2006 and it helped further reduce tensions between government and rebel forces in 2007 by playing a match in the rebel capital of Bouaké, an occasion that brought both armies together peacefully for the first time. By contrast, football is widely considered to have been the final proximate cause for the Football War in June 1969 between El Salvador and Honduras. 
The sport also exacerbated tensions at the beginning of the Yugoslav Wars of the 1990s, when a match between Dinamo Zagreb and Red Star Belgrade degenerated into rioting in May 1990. Women may have been playing "football" for as long as the game has existed. Evidence shows that an ancient version of the game (Tsu Chu) was played by women during the Han Dynasty (25–220 CE). Two female figures are depicted in Han Dynasty (25–220 CE) frescoes, playing Tsu Chu. There are, however, a number of opinions about the accuracy of dates, the earliest estimates at 5000 BCE. Association football, the modern game, also has documented early involvement of women. An annual competition in Mid-Lothian, Scotland during the 1790s is reported, too. In 1863, football governing bodies introduced standardised rules to prohibit violence on the pitch, making it more socially acceptable for women to play. The first match recorded by the Scottish Football Association took place in 1892 in Glasgow. In England, the first recorded game of football between women took place in 1895. The most well-documented early European team was founded by activist Nettie Honeyball in England in 1894. It was named the British Ladies' Football Club. Nettie Honeyball is quoted, "I founded the association late last year [1894], with the fixed resolve of proving to the world that women are not the 'ornamental and useless' creatures men have pictured. I must confess, my convictions on all matters where the sexes are so widely divided are all on the side of emancipation, and I look forward to the time when ladies may sit in Parliament and have a voice in the direction of affairs, especially those which concern them most." Honeyball and those like her paved the way for women's football. However, the women's game was frowned upon by the British football associations, and continued without their support. It has been suggested that this was motivated by a perceived threat to the 'masculinity' of the game. Women's football became popular on a large scale at the time of the First World War, when employment in heavy industry spurred the growth of the game, much as it had done for men fifty years earlier. The most successful team of the era was Dick, Kerr's Ladies of Preston, England. The team played in the first women's international matches in 1920, against a team from Paris, France, in April, and also made up most of the England team against a Scottish Ladies XI in 1920, and winning 22-0. Despite being more popular than some men's football events (one match saw a 53,000 strong crowd), women's football in England suffered a blow in 1921 when The Football Association outlawed the playing of the game on Association members' pitches, on the grounds that the game (as played by women) was distasteful. Some speculated that this may have also been due to envy of the large crowds that women's matches attracted. This led to the formation of the English Ladies Football Association and play moved to rugby grounds. Association football has been played by women since at least the time of the first recorded women's games in the late 19th century. It has traditionally been associated with charity games and physical exercise, particularly in the United Kingdom. In the late 1960s and early 1970s women's association football was organised in the United Kingdom, eventually becoming the most prominent team sport for British women. 
The growth in women's football has seen major competitions being launched at both national and international level mirroring the male competitions. Women's football has faced many struggles. It had a "golden age" in the United Kingdom in the early 1920s when crowds reached 50,000 at some matches; this was stopped on 5 December 1921 when England's Football Association voted to ban the game from grounds used by its member clubs. The FA's ban was rescinded in December 1969 with UEFA voting to officially recognise women's football in 1971. The FIFA Women's World Cup was inaugurated in 1991 and has been held every four years since, while women's football has been an Olympic event since 1996. Association football is played in accordance with a set of rules known as the Laws of the Game. The game is played using a spherical ball of 68–70 cm circumference, known as the "football" (or "soccer ball"). Two teams of eleven players each compete to get the ball into the other team's goal (between the posts and under the bar), thereby scoring a goal. The team that has scored more goals at the end of the game is the winner; if both teams have scored an equal number of goals then the game is a draw. Each team is led by a captain who has only one official responsibility as mandated by the Laws of the Game: to represent his or her team in the coin toss prior to kick-off or penalty kicks. The primary law is that players other than goalkeepers may not deliberately handle the ball with their hands or arms during play, though they must use both their hands during a throw-in restart. Although players usually use their feet to move the ball around, they may use any part of their body (notably, "heading" with the forehead) other than their hands or arms. Within normal play, all players are free to play the ball in any direction and move throughout the pitch, though the ball cannot be received in an offside position. During gameplay, players attempt to create goal-scoring opportunities through individual control of the ball, such as by dribbling, passing the ball to a teammate, and by taking shots at the goal, which is guarded by the opposing goalkeeper. Opposing players may try to regain control of the ball by intercepting a pass or through tackling the opponent in possession of the ball; however, physical contact between opponents is restricted. Football is generally a free-flowing game, with play stopping only when the ball has left the field of play or when play is stopped by the referee for an infringement of the rules. After a stoppage, play recommences with a specified restart. At a professional level, most matches produce only a few goals. For example, the 2005–06 season of the English Premier League produced an average of 2.48 goals per match. The Laws of the Game do not specify any player positions other than goalkeeper, but a number of specialised roles have evolved. Broadly, these include three main categories: strikers, or forwards, whose main task is to score goals; defenders, who specialise in preventing their opponents from scoring; and midfielders, who dispossess the opposition and keep possession of the ball to pass it to the forwards on their team. Players in these positions are referred to as outfield players, to distinguish them from the goalkeeper. These positions are further subdivided according to the area of the field in which the player spends most time. For example, there are central defenders, and left and right midfielders. The ten outfield players may be arranged in any combination. 
The number of players in each position determines the style of the team's play; more forwards and fewer defenders creates a more aggressive and offensive-minded game, while the reverse creates a slower, more defensive style of play. While players typically spend most of the game in a specific position, there are few restrictions on player movement, and players can switch positions at any time. The layout of a team's players is known as a "formation". Defining the team's formation and tactics is usually the prerogative of the team's manager. There are 17 laws in the official Laws of the Game, each containing a collection of stipulation and guidelines. The same laws are designed to apply to all levels of football, although certain modifications for groups such as juniors, seniors, women and people with physical disabilities are permitted. The laws are often framed in broad terms, which allow flexibility in their application depending on the nature of the game. The Laws of the Game are published by FIFA, but are maintained by the International Football Association Board (IFAB). In addition to the seventeen laws, numerous IFAB decisions and other directives contribute to the regulation of football. Each team consists of a maximum of eleven players (excluding substitutes), one of whom must be the goalkeeper. Competition rules may state a minimum number of players required to constitute a team, which is usually seven. Goalkeepers are the only players allowed to play the ball with their hands or arms, provided they do so within the penalty area in front of their own goal. Though there are a variety of positions in which the outfield (non-goalkeeper) players are strategically placed by a coach, these positions are not defined or required by the Laws. The basic equipment or "kit" players are required to wear includes a shirt, shorts, socks, footwear and adequate shin guards. An athletic supporter and protective cup is highly recommended for male players by medical experts and professionals. Headgear is not a required piece of basic equipment, but players today may choose to wear it to protect themselves from head injury. Players are forbidden to wear or use anything that is dangerous to themselves or another player, such as jewellery or watches. The goalkeeper must wear clothing that is easily distinguishable from that worn by the other players and the match officials. A number of players may be replaced by substitutes during the course of the game. The maximum number of substitutions permitted in most competitive international and domestic league games is three in ninety minutes with each team being allowed one more if the game should go into extra-time, though the permitted number may vary in other competitions or in friendly matches. Common reasons for a substitution include injury, tiredness, ineffectiveness, a tactical switch, or timewasting at the end of a finely poised game. In standard adult matches, a player who has been substituted may not take further part in a match. IFAB recommends "that a match should not continue if there are fewer than seven players in either team." Any decision regarding points awarded for abandoned games is left to the individual football associations. A game is officiated by a referee, who has "full authority to enforce the Laws of the Game in connection with the match to which he has been appointed" (Law 5), and whose decisions are final. The referee is assisted by two assistant referees. 
In many high-level games there is also a fourth official who assists the referee and may replace another official should the need arise. Goal-line technology is used to measure whether the whole ball has crossed the goal line, thereby determining whether or not a goal has been scored; it was introduced to prevent controversy over such decisions. Video assistant referee (VAR) has also been increasingly introduced to assist officials through video replay to correct clear and obvious mistakes. There are four types of calls that can be reviewed: mistaken identity in awarding a red or yellow card, goals and whether there was a violation during the buildup, direct red card decisions, and penalty decisions. The ball is spherical with a circumference of between 68 and 70 cm, a weight in the range of 410 to 450 g, and a pressure of between 0.6 and 1.1 standard atmospheres at sea level. In the past the ball was made up of leather panels sewn together, with a latex bladder for pressurisation, but modern balls at all levels of the game are now synthetic. As the Laws were formulated in England, and were initially administered solely by the four British football associations within IFAB, the standard dimensions of a football pitch were originally expressed in imperial units. The Laws now express dimensions with approximate metric equivalents (followed by traditional units in brackets), though use of imperial units remains popular in English-speaking countries with a relatively recent history of metrication (or only partial metrication), such as Britain. The length of the pitch, or field, for international adult matches is in the range of 100–110 m (110–120 yd) and the width is in the range of 64–75 m (70–80 yd). Fields for non-international matches may be 90–120 m (100–130 yd) in length and 45–90 m (50–100 yd) in width, provided that the pitch does not become square. In 2008, the IFAB initially approved a fixed size of 105 m (344 ft) long and 68 m (223 ft) wide as a standard pitch dimension for international matches; however, this decision was later put on hold and was never actually implemented. The longer boundary lines are "touchlines", while the shorter boundaries (on which the goals are placed) are "goal lines". A rectangular goal is positioned at the middle of each goal line. The inner edges of the vertical goal posts must be 7.32 m (8 yd) apart, and the lower edge of the horizontal crossbar supported by the goal posts must be 2.44 m (8 ft) above the ground. Nets are usually placed behind the goal, but are not required by the Laws. In front of the goal is the penalty area. This area is marked by the goal line, two lines starting on the goal line 16.5 m (18 yd) from the goalposts and extending 16.5 m (18 yd) into the pitch perpendicular to the goal line, and a line joining them. This area has a number of functions, the most prominent being to mark where the goalkeeper may handle the ball and where a penalty foul by a member of the defending team becomes punishable by a penalty kick. Other markings define the position of the ball or players at kick-offs, goal kicks, penalty kicks and corner kicks. A standard adult football match consists of two halves of 45 minutes each. Each half runs continuously, meaning that the clock is not stopped when the ball is out of play. There is usually a 15-minute half-time break between halves. The end of the match is known as full-time. The referee is the official timekeeper for the match, and may make an allowance for time lost through substitutions, injured players requiring attention, or other stoppages. 
This added time is called "additional time" in FIFA documents, but is most commonly referred to as "stoppage time" or "injury time", while "loss time" can also be used as a synonym. The duration of stoppage time is at the sole discretion of the referee. Stoppage time does not fully compensate for the time in which the ball is out of play, and a 90-minute game typically involves about an hour of "effective playing time". The referee alone signals the end of the match. In matches where a fourth official is appointed, towards the end of the half the referee signals how many minutes of stoppage time he intends to add. The fourth official then informs the players and spectators by holding up a board showing this number. The signalled stoppage time may be further extended by the referee. Added time was introduced because of an incident which happened in 1891 during a match between Stoke and Aston Villa. Trailing 1–0 and with just two minutes remaining, Stoke were awarded a penalty. Villa's goalkeeper kicked the ball out of the ground, and by the time the ball had been recovered, the 90 minutes had elapsed and the game was over. The same law also states that the duration of either half is extended until the penalty kick to be taken or retaken is completed, thus no game shall end with a penalty to be taken. In league competitions, games may end in a draw. In knockout competitions where a winner is required various methods may be employed to break such a deadlock; some competitions may invoke replays. A game tied at the end of regulation time may go into extra time, which consists of two further 15-minute periods. If the score is still tied after extra time, some competitions allow the use of penalty shootouts (known officially in the Laws of the Game as "kicks from the penalty mark") to determine which team will progress to the next stage of the tournament. Goals scored during extra time periods count towards the final score of the game, but kicks from the penalty mark are only used to decide the team that progresses to the next part of the tournament (with goals scored in a penalty shootout not making up part of the final score). In competitions using two-legged matches, each team competes at home once, with an aggregate score from the two matches deciding which team progresses. Where aggregates are equal, the away goals rule may be used to determine the winners, in which case the winner is the team that scored the most goals in the leg they played away from home. If the result is still equal, extra time and potentially a penalty shootout are required. In the late 1990s and early 2000s, the IFAB experimented with ways of creating a winner without requiring a penalty shootout, which was often seen as an undesirable way to end a match. These involved rules ending a game in extra time early, either when the first goal in extra time was scored ("golden goal"), or if one team held a lead at the end of the first period of extra time ("silver goal"). Golden goal was used at the World Cup in 1998 and 2002. The first World Cup game decided by a golden goal was France's victory over Paraguay in 1998. Germany was the first nation to score a golden goal in a major competition, beating Czech Republic in the final of Euro 1996. Silver goal was used in Euro 2004. Both these experiments have been discontinued by IFAB. Under the Laws, the two basic states of play during a game are "ball in play" and "ball out of play". 
From the beginning of each playing period with a kick-off until the end of the playing period, the ball is in play at all times, except when either the ball leaves the field of play, or play is stopped by the referee. When the ball becomes out of play, play is restarted by one of eight restart methods depending on how it went out of play: A foul occurs when a player commits an offence listed in the Laws of the Game while the ball is in play. The offences that constitute a foul are listed in Law 12. Handling the ball deliberately, tripping an opponent, or pushing an opponent, are examples of "penal fouls", punishable by a direct free kick or penalty kick depending on where the offence occurred. Other fouls are punishable by an indirect free kick. The referee may punish a player's or substitute's misconduct by a caution (yellow card) or dismissal (red card). A second yellow card in the same game leads to a red card, which results in a dismissal. A player given a yellow card is said to have been "booked", the referee writing the player's name in his official notebook. If a player has been dismissed, no substitute can be brought on in their place and the player may not participate in further play. Misconduct may occur at any time, and while the offences that constitute misconduct are listed, the definitions are broad. In particular, the offence of "unsporting behaviour" may be used to deal with most events that violate the spirit of the game, even if they are not listed as specific offences. A referee can show a yellow or red card to a player, substitute or substituted player. Non-players such as managers and support staff cannot be shown the yellow or red card, but may be expelled from the technical area if they fail to conduct themselves in a responsible manner. Rather than stopping play, the referee may allow play to continue if doing so will benefit the team against which an offence has been committed. This is known as "playing an advantage". The referee may "call back" play and penalise the original offence if the anticipated advantage does not ensue within "a few seconds". Even if an offence is not penalised due to advantage being played, the offender may still be sanctioned for misconduct at the next stoppage of play. The referee's decision in all on-pitch matters is considered final. The score of a match cannot be altered after the game, even if later evidence shows that decisions (including awards/non-awards of goals) were incorrect. Along with the general administration of the sport, football associations and competition organisers also enforce good conduct in wider aspects of the game, dealing with issues such as comments to the press, clubs' financial management, doping, age fraud and match fixing. Most competitions enforce mandatory suspensions for players who are sent off in a game. Some on-field incidents, if considered very serious (such as allegations of racial abuse), may result in competitions deciding to impose heavier sanctions than those normally associated with a red card. Some associations allow for appeals against player suspensions incurred on-field if clubs feel a referee was incorrect or unduly harsh. Sanctions for such infractions may be levied on individuals or on to clubs as a whole. Penalties may include fines, points deductions (in league competitions) or even expulsion from competitions. For example, the English Football League deduct 12 points from any team that enters financial administration. 
Among other administrative sanctions are penalties against game forfeiture. Teams that had forfeited a game or had been forfeited against would be awarded a technical loss or win. The recognised international governing body of football (and associated games, such as futsal and beach soccer) is FIFA. The FIFA headquarters are located in Zürich, Switzerland. Six regional confederations are associated with FIFA; these are: National associations oversee football within individual countries. These are generally synonymous with sovereign states, (for example: the Fédération Camerounaise de Football in Cameroon) but also include a smaller number of associations responsible for sub-national entities or autonomous regions (for example the Scottish Football Association in Scotland). 209 national associations are affiliated both with FIFA and with their respective continental confederations. While FIFA is responsible for arranging competitions and most rules related to international competition, the actual Laws of the Game are set by the International Football Association Board, where each of the UK Associations has one vote, while FIFA collectively has four votes. The major international competition in football is the World Cup, organised by FIFA. This competition takes place every four years since 1930 with the exception of 1942 and 1946 tournaments, which were cancelled due to World War II. Approximately 190–200 national teams compete in qualifying tournaments within the scope of continental confederations for a place in the finals. The finals tournament, which is held every four years, involves 32 national teams competing over a four-week period. The World Cup is the most prestigious association football tournament in the world as well as the most widely viewed and followed sporting event in the world, exceeding even the Olympic Games; the cumulative audience of all matches of the 2006 FIFA World Cup was estimated to be 26.29 billion with an estimated 715.1 million people watching the final match, a ninth of the entire population of the planet. The current champion is France, which won its second title at the 2018 tournament in Russia. FIFA Women's World Cup has been held every four years since 1991. Under the tournament's current format, national teams vie for 23 slots in a three-year qualification phase. (The host nation's team is automatically entered as the 24th slot.) The current champion is the United States, after winning their third title in the 2015 FIFA Women's World Cup. There has been a football tournament at every Summer Olympic Games since 1900, except at the 1932 games in Los Angeles. Before the inception of the World Cup, the Olympics (especially during the 1920s) were the most prestigious international event. Originally, the tournament was for amateurs only. As professionalism spread around the world, the gap in quality between the World Cup and the Olympics widened. The countries that benefited most were the Soviet Bloc countries of Eastern Europe, where top athletes were state-sponsored while retaining their status as amateurs. Between 1948 and 1980, 23 out of 27 Olympic medals were won by Eastern Europe, with only Sweden (gold in 1948 and bronze in 1952), Denmark (bronze in 1948 and silver in 1960) and Japan (bronze in 1968) breaking their dominance. For the 1984 Los Angeles Games, the IOC decided to admit professional players. 
FIFA still did not want the Olympics to rival the World Cup, so a compromise was struck that allowed teams from Africa, Asia, Oceania and CONCACAF to field their strongest professional sides, while restricting UEFA and CONMEBOL teams to players who had not played in a World Cup. Since 1992, male competitors must be under 23 years old, and since 1996, three players over the age of 23 have been allowed per squad. A women's tournament was added in 1996; in contrast to the men's event, full international sides without age restrictions play the women's Olympic tournament. After the World Cup, the most important international football competitions are the continental championships, which are organised by each continental confederation and contested between national teams. These are the European Championship (UEFA), the Copa América (CONMEBOL), African Cup of Nations (CAF), the Asian Cup (AFC), the CONCACAF Gold Cup (CONCACAF) and the OFC Nations Cup (OFC). The FIFA Confederations Cup is contested by the winners of all six continental championships, the current FIFA World Cup champions and the country which is hosting the Confederations Cup. This is generally regarded as a warm-up tournament for the upcoming FIFA World Cup and does not carry the same prestige as the World Cup itself. The most prestigious competitions in club football are the respective continental championships, which are generally contested between national champions, for example the UEFA Champions League in Europe and the Copa Libertadores in South America. The winners of each continental competition contest the FIFA Club World Cup. The governing bodies in each country operate league systems in a domestic season, normally comprising several divisions, in which the teams gain points throughout the season depending on results. Teams are placed into tables, placing them in order according to points accrued. Most commonly, each team plays every other team in its league at home and away in each season, in a round-robin tournament. At the end of a season, the top team is declared the champion. The top few teams may be promoted to a higher division, and one or more of the teams finishing at the bottom are relegated to a lower division. The teams finishing at the top of a country's league may be eligible also to play in international club competitions in the following season. The main exceptions to this system occur in some Latin American leagues, which divide football championships into two sections named Apertura and Clausura (Spanish for "Opening" and "Closing"), awarding a champion for each. The majority of countries supplement the league system with one or more "cup" competitions organised on a knock-out basis. Some countries' top divisions feature highly paid star players; in smaller countries and lower divisions, players may be part-timers with a second job, or amateurs. The five top European leagues – the Bundesliga (Germany), Premier League (England), La Liga (Spain), Serie A (Italy), and Ligue 1 (France) – attract most of the world's best players and each of the leagues has a total wage cost in excess of £600 million/€763 million/US$1.185 billion. Football hooliganism is the term used to describe disorderly, violent or destructive behaviour perpetrated by spectators at football events. Variants of football have been codified for reduced-sized teams (e.g. five-a-side football), play in non-field environments (e.g. beach soccer, indoor soccer, and futsal) and for teams with disabilities (e.g. 
paralympic association football). Casual games can be played with only minimal equipment – a basic game can be played on almost any open area of reasonable size with just a ball and items to mark the positions of two sets of goalposts. Such games can have team sizes that vary from eleven-a-side, can use a limited or modified subset of the official rules, and can be self-officiated by the players. Frank Zappa Frank Vincent Zappa (December 21, 1940 – December 4, 1993) was an American musician, composer, activist and filmmaker. His work is characterized by nonconformity, free-form improvisation, sound experiments, musical virtuosity, and satire of American culture. In a career spanning more than 30 years, Zappa composed rock, pop, jazz, jazz fusion, orchestral and "musique concrète" works, and produced almost all of the 60-plus albums that he released with his band the Mothers of Invention and as a solo artist. Zappa also directed feature-length films and music videos, and designed album covers. He is considered one of the most innovative and stylistically diverse rock musicians of his era. As a self-taught composer and performer, Zappa's diverse musical influences led him to create music that was sometimes difficult to categorize. While in his teens, he acquired a taste for 20th-century classical composers such as Edgard Varèse, Igor Stravinsky, and Anton Webern, along with 1950s rhythm and blues and doo-wop music. He began writing classical music in high school, while at the same time playing drums in rhythm and blues bands, later switching to electric guitar. His 1966 debut album with the Mothers of Invention, "Freak Out!", combined songs in conventional rock and roll format with collective improvisations and studio-generated sound collages. He continued this eclectic and experimental approach, irrespective of whether the fundamental format was rock, jazz or classical. Zappa's output is unified by a conceptual continuity he termed "Project/Object", with numerous musical phrases, ideas, and characters reappearing across his albums. His lyrics reflected his iconoclastic views of established social and political processes, structures and movements, often humorously so. He was a strident critic of mainstream education and organized religion, and a forthright and passionate advocate for freedom of speech, self-education, political participation and the abolition of censorship. Unlike many other rock musicians of his generation, he personally disapproved of drugs and seldom used them, but supported their decriminalization and regulation. During Zappa's lifetime, he was a highly productive and prolific artist, earning widespread acclaim from critics and fellow musicians. He had some commercial success, particularly in Europe, and worked as an independent artist for most of his career. He remains a major influence on musicians and composers. His honors include his 1995 induction into the Rock and Roll Hall of Fame and the 1997 Grammy Lifetime Achievement Award. In 2000, he was ranked number 36 on VH1's "100 Greatest Artists of Hard Rock". In 2004, "Rolling Stone" magazine ranked him at number 71 on its list of the "100 Greatest Artists of All Time", and in 2011 at number 22 on its list of the "100 Greatest Guitarists of All Time". Zappa was born on December 21, 1940 in Baltimore, Maryland. 
His mother, Rosemarie (Collimore), was of Italian (Neapolitan and Sicilian) and French ancestry; his father, whose name was anglicized to Francis Vincent Zappa, was an immigrant from Partinico, Sicily, with Greek and Arab ancestry. Frank, the eldest of four children, was raised in an Italian-American household where Italian was often spoken by his grandparents. The family moved often because his father, a chemist and mathematician, worked in the defense industry. After a time in Florida in the 1940s, the family returned to Maryland, where Zappa's father worked at the Edgewood Arsenal chemical warfare facility of the Aberdeen Proving Ground. Due to their home's proximity to the arsenal, which stored mustard gas, gas masks were kept in the home in case of an accident. This had a profound effect on Zappa, and references to germs, germ warfare and the defense industry occur throughout his work. Zappa was often sick as a child, suffering from asthma, earaches and sinus problems. A doctor treated his sinusitis by inserting a pellet of radium into each of Zappa's nostrils. At the time, little was known about the potential dangers of even small amounts of therapeutic radiation, and although it has since been claimed that nasal radium treatment has causal connections to cancer, no studies have provided significant enough evidence to confirm this. Nasal imagery and references appear in his music and lyrics, as well as in the collage album covers created by his long-time collaborator Cal Schenkel. Zappa believed his childhood diseases might have been due to exposure to mustard gas, released by the nearby chemical warfare facility. His health worsened when he lived in Baltimore. In 1952, his family relocated for reasons of health. They next moved to Monterey, California, where his father taught metallurgy at the Naval Postgraduate School. They soon moved to Claremont, California, then to El Cajon, before finally settling in San Diego. Zappa joined his first band at Mission Bay High School in San Diego as the drummer. At about the same time, his parents bought a phonograph, which allowed him to develop his interest in music, and to begin building his record collection. R&B singles were early purchases, starting a large collection he kept for the rest of his life. He was interested in sounds for their own sake, particularly the sounds of drums and other percussion instruments. By age 12, he had obtained a snare drum and began learning the basics of orchestral percussion. Zappa's deep interest in modern classical music began when he read a "LOOK" magazine article about the Sam Goody record store chain that lauded its ability to sell an LP as obscure as "The Complete Works of Edgard Varèse, Volume One". The article described Varèse's percussion composition "Ionisation", produced by EMS Recordings, as "a weird jumble of drums and other unpleasant sounds". Zappa decided to seek out Varèse's music. After searching for over a year, Zappa found a copy (he noticed the LP because of the "mad scientist" looking photo of Varèse on the cover). Not having enough money with him, he persuaded the salesman to sell him the record at a discount. Thus began his lifelong passion for Varèse's music and that of other modern classical composers. By 1956, the Zappa family had moved to Lancaster, a small aerospace and farming town in the Antelope Valley of the Mojave Desert close to Edwards Air Force Base; he would later refer to Sun Village (a town close to Lancaster) in the 1973 track "Village of the Sun".
Zappa's mother encouraged him in his musical interests. Although she disliked Varèse's music, she was indulgent enough to give her son a long distance call to the composer as a 15th birthday present. Unfortunately, Varèse was in Europe at the time, so Zappa spoke to the composer's wife and she suggested he call back later. In a letter Varèse thanked him for his interest, and told him about a composition he was working on called "Déserts". Living in the desert town of Lancaster, Zappa found this very exciting. Varèse invited him to visit if he ever came to New York. The meeting never took place (Varèse died in 1965), but Zappa framed the letter and kept it on display for the rest of his life. At Antelope Valley High School, Zappa met Don Glen Vliet (who later changed his name to Don Van Vliet and adopted the stage name Captain Beefheart). Zappa and Vliet became close friends, sharing an interest in R&B records and influencing each other musically throughout their careers. Around the same time, Zappa started playing drums in a local band, the Blackouts. The band was racially diverse and included Euclid James "Motorhead" Sherwood who later became a member of the Mothers of Invention. Zappa's interest in the guitar grew, and in 1957 he was given his first instrument. Among his early influences were Johnny "Guitar" Watson, Howlin' Wolf and Clarence "Gatemouth" Brown. (In the 1970s/80s, he invited Watson to perform on several albums.) Zappa considered soloing as the equivalent of forming "air sculptures", and developed an eclectic, innovative and highly personal style. Zappa's interest in composing and arranging flourished in his last high-school years. By his final year, he was writing, arranging and conducting avant-garde performance pieces for the school orchestra. He graduated from Antelope Valley High School in 1958, and later acknowledged two of his music teachers on the sleeve of the 1966 album "Freak Out!" Due to his family's frequent moves, Zappa attended at least six different high schools, and as a student he was often bored and given to distracting the rest of the class with juvenile antics. In 1959, he attended Chaffey College but left after one semester, and maintained thereafter a disdain for formal education, taking his children out of school at age 15 and refusing to pay for their college. Zappa left home in 1959, and moved into a small apartment in Echo Park, Los Angeles. After meeting Kathryn J. "Kay" Sherman during his short period of private composition study with Prof. Karl Kohn of Pomona College, they moved in together in Ontario, and were married December 28, 1960. Zappa worked for a short period in advertising. His sojourn in the commercial world was brief, but gave him valuable insights into its workings. Throughout his career, he took a keen interest in the visual presentation of his work, designing some of his album covers and directing his own films and videos. Zappa attempted to earn a living as a musician and composer, and played different nightclub gigs, some with a new version of the Blackouts. Zappa's earliest professional recordings, two soundtracks for the low-budget films "The World's Greatest Sinner" (1962) and "Run Home Slow" (1965) were more financially rewarding. The former score was commissioned by actor-producer Timothy Carey and recorded in 1961. It contains many themes that appeared on later Zappa records. 
The latter soundtrack was recorded in 1963 after the film was completed, but it was commissioned by one of Zappa's former high school teachers in 1959 and Zappa may have worked on it before the film was shot. Excerpts from the soundtrack can be heard on the posthumous album "The Lost Episodes" (1996). During the early 1960s, Zappa wrote and produced songs for other local artists, often working with singer-songwriter Ray Collins and producer Paul Buff. Their "Memories of El Monte" was recorded by the Penguins, although only Cleve Duncan of the original group was featured. Buff owned the small Pal Recording Studio in Cucamonga, which included a unique five-track tape recorder he had built. At that time, only a handful of the most sophisticated commercial studios had multi-track facilities; the industry standard for smaller studios was still mono or two-track. Although none of the recordings from the period achieved major commercial success, Zappa earned enough money to allow him to stage a concert of his orchestral music in 1963 and to broadcast and record it. He appeared on Steve Allen's syndicated late night show the same year, in which he played a bicycle as a musical instrument. Using a bow borrowed from the band's bass player, as well as drum sticks, he proceeded to pluck, bang, and bow the spokes of the bike, producing strange, comical sounds from his newfound instrument. With Captain Beefheart, Zappa recorded some songs under the name of the Soots. They were rejected by Dot Records for having "no commercial potential", a verdict Zappa subsequently quoted on the sleeve of "Freak Out!" In 1964, after his marriage started to break up, he moved into the Pal studio and began routinely working 12 hours or more per day recording and experimenting with overdubbing and audio tape manipulation. This established a work pattern that endured for most of his life. Aided by his income from film composing, Zappa took over the studio from Paul Buff, who was now working with Art Laboe at Original Sound. It was renamed Studio Z. Studio Z was rarely booked for recordings by other musicians. Instead, friends moved in, notably James "Motorhead" Sherwood. Zappa started performing in local bars as a guitarist with a power trio, the Muthers, to support himself. An article in the local press describing Zappa as "the Movie King of Cucamonga" prompted the local police to suspect that he was making pornographic films. In March 1965, Zappa was approached by a vice squad undercover officer, and accepted an offer of $100 to produce a suggestive audio tape for an alleged stag party. Zappa and a female friend recorded a faked erotic episode. When Zappa was about to hand over the tape, he was arrested, and the police stripped the studio of all recorded material. The press was tipped off beforehand, and the next day's "The Daily Report" wrote that "Vice Squad investigators stilled the tape recorders of a free-swinging, a-go-go film and recording studio here Friday and arrested a self-styled movie producer". Zappa was charged with "conspiracy to commit pornography". This felony charge was reduced and he was sentenced to six months in jail on a misdemeanor, with all but ten days suspended. His brief imprisonment left a permanent mark, and was central to the formation of his anti-authoritarian stance. Zappa lost several recordings made at Studio Z in the process, as the police only returned 30 out of 80 hours of tape seized. Eventually, he could no longer afford to pay the rent on the studio and was evicted.
Zappa managed to recover some of his possessions before the studio was torn down in 1966. In 1965, Ray Collins asked Zappa to take over as guitarist in local R&B band the Soul Giants, following a fight between Collins and the group's original guitarist. Zappa accepted, and soon assumed leadership and the role as co-lead singer (even though he never considered himself a singer). He convinced the other members that they should play his music to increase the chances of getting a record contract. The band was renamed the Mothers, coincidentally on Mother's Day. They increased their bookings after beginning an association with manager Herb Cohen, while they gradually gained attention on the burgeoning Los Angeles underground music scene. In early 1966, they were spotted by leading record producer Tom Wilson when playing "Trouble Every Day", a song about the Watts riots. Wilson had earned acclaim as the producer for Bob Dylan and Simon & Garfunkel, and was notable as one of the few African-Americans working as a major label pop music producer at this time. Wilson signed the Mothers to the Verve division of MGM, which had built up a strong reputation for its releases of modern jazz recordings in the 1940s and 1950s, but was attempting to diversify into pop and rock audiences. Verve insisted that the band officially rename themselves the Mothers of Invention as "Mother" was short for "motherfucker"—a term that, apart from its profane meanings, can denote a skilled musician. With Wilson credited as producer, the Mothers of Invention, augmented by a studio orchestra, recorded the groundbreaking "Freak Out!" (1966), which, after Bob Dylan's "Blonde on Blonde", was the second rock double album ever released. It mixed R&B, doo-wop, musique concrète, and experimental sound collages that captured the "freak" subculture of Los Angeles at that time. Although he was dissatisfied with the final product, "Freak Out" immediately established Zappa as a radical new voice in rock music, providing an antidote to the "relentless consumer culture of America". The sound was raw, but the arrangements were sophisticated. While recording in the studio, some of the additional session musicians were shocked that they were expected to read the notes on sheet music from charts with Zappa conducting them, since it was not standard when recording rock music. The lyrics praised non-conformity, disparaged authorities, and had dadaist elements. Yet, there was a place for seemingly conventional love songs. Most compositions are Zappa's, which set a precedent for the rest of his recording career. He had full control over the arrangements and musical decisions and did most overdubs. Wilson provided the industry clout and connections and was able to provide the group with the financial resources needed. Although Wilson was able to provide Zappa and the Mothers with an extraordinary degree of artistic freedom for the time, the recording did not go entirely as planned. In a surviving 1967 radio interview, Zappa explained that the album's outlandish 11-minute closing track, "Return of the Son of Monster Magnet" was in fact an unfinished piece. The track (as it appears on the album) was created to act as the backing track for a much more complex work, but MGM refused to approve the additional recording time Zappa needed to complete it, so (much to his chagrin) it was issued in this unfinished form. During the recording of "Freak Out!", Zappa moved into a house in Laurel Canyon with friend Pamela Zarubica, who appeared on the album. 
The house became a meeting (and living) place for many LA musicians and groupies of the time, despite Zappa's disapproval of their illicit drug use. After a short promotional tour following the release of "Freak Out!", Zappa met Adelaide Gail Sloatman. He fell in love within "a couple of minutes", and she moved into the house over the summer. They married in 1967, had four children and remained together until Zappa's death. Wilson nominally produced the Mothers' second album "Absolutely Free" (1967), which was recorded in November 1966, and later mixed in New York, although by this time Zappa was in "de facto" control of most facets of the production. It featured extended playing by the Mothers of Invention and focused on songs that defined Zappa's compositional style of introducing abrupt, rhythmical changes into songs that were built from diverse elements. Examples are "Plastic People" and "Brown Shoes Don't Make It", which contained lyrics critical of the hypocrisy and conformity of American society, but also of the counterculture of the 1960s. As Zappa put it, "[W]e're satirists, and we are out to satirize everything." At the same time, Zappa had recorded material for an album of orchestral works to be released under his own name, "Lumpy Gravy", released by Capitol Records in 1967. Due to contractual problems, the album was pulled. Zappa took the opportunity to radically restructure the contents, adding newly recorded, improvised dialogue. After the contractual problems were resolved, the album was reissued by Verve in 1968. It is an "incredible ambitious musical project", a "monument to John Cage", which intertwines orchestral themes, spoken words and electronic noises through radical audio editing techniques. The Mothers of Invention played in New York in late 1966 and were offered a contract at the Garrick Theater (at 152 Bleecker Street, above the Cafe au Go Go) during Easter 1967. This proved successful and Herb Cohen extended the booking, which eventually lasted half a year. As a result, Zappa and his wife, along with the Mothers of Invention, moved to New York. Their shows became a combination of improvised acts showcasing individual talents of the band as well as tight performances of Zappa's music. Everything was directed by Zappa using hand signals. Guest performers and audience participation became a regular part of the Garrick Theater shows. One evening, Zappa managed to entice some U.S. Marines from the audience onto the stage, where they proceeded to dismember a big baby doll, having been told by Zappa to pretend that it was a "gook baby". Zappa uniquely contributed to the avant-garde, anti-establishment music scene of the 1960s, sampling radio tape recordings and incorporating his own philosophical ideals to music and freedom of expression in his pieces. Bands such as AMM and Faust also contributed to the radio sampling techniques of the 1960s. Situated in New York, and only interrupted by the band's first European tour, the Mothers of Invention recorded the album widely regarded as the peak of the group's late 1960s work, "We're Only in It for the Money" (released 1968). It was produced by Zappa, with Wilson credited as executive producer. From then on, Zappa produced all albums released by the Mothers of Invention and as a solo artist. "We're Only in It for the Money" featured some of the most creative audio editing and production yet heard in pop music, and the songs ruthlessly satirized the hippie and flower power phenomena. 
He sampled and plundered surf music in "We're Only in It for the Money", as well as the Beatles' tape work from their song "Tomorrow Never Knows". The cover photo parodied that of the Beatles' "Sgt. Pepper's Lonely Hearts Club Band". The cover art was provided by Cal Schenkel, whom Zappa met in New York. This initiated a lifelong collaboration in which Schenkel designed covers for numerous Zappa and Mothers albums. Reflecting Zappa's eclectic approach to music, the next album, "Cruising with Ruben & the Jets" (1968), was very different. It represented a collection of doo-wop songs; listeners and critics were not sure whether the album was a satire or a tribute. Zappa later noted that the album was conceived in the way Stravinsky's compositions were in his neo-classical period: "If he could take the forms and clichés of the classical era and pervert them, why not do the same ... to doo-wop in the fifties?" A theme from Stravinsky's "The Rite of Spring" is heard during one song. During the late 1960s, Zappa continued to develop the business side of his career. He and Herb Cohen formed the Bizarre Records and Straight Records labels, distributed by Warner Bros. Records, as ventures to aid the funding of projects and to increase creative control. Zappa produced the double album "Trout Mask Replica" for Captain Beefheart, and releases by Alice Cooper, The Persuasions, Wild Man Fischer, and the GTOs, as well as Lenny Bruce's last live performance. In 1967 and 1968, Zappa made two appearances with the Monkees. The first appearance was on an episode of their TV series, "The Monkees Blow Their Minds", where Zappa, dressed up as Mike Nesmith, interviews Nesmith, who is dressed up as Zappa. After the interview, Zappa destroys a car with a sledgehammer as the song "Mother People" plays. He later provided a cameo in the Monkees' movie "Head" where, leading a cow, he tells Davy Jones "the youth of America depends on you to show them the way." Zappa had respect for what the Monkees were doing, and offered Micky Dolenz a position in the Mothers, but RCA/Columbia/Colgems would not allow Dolenz out of his contract. On the Mothers' second European tour in September/October 1968, they performed at the Grugahalle in Essen, Germany; at the Tivoli in Copenhagen, Denmark; for TV programs in Germany ("Beat-Club"), France, and England; at the Concertgebouw in Amsterdam; at the Royal Festival Hall in London; and at the Olympia in Paris. Zappa and the Mothers of Invention returned to Los Angeles in mid-1968, and the Zappas moved into a house on Laurel Canyon Boulevard, only to move again to one on Woodrow Wilson Drive. This was Zappa's home for the rest of his life. Despite being a success with fans in Europe, the Mothers of Invention were not faring well financially. Their first records were vocally oriented, but Zappa wrote more instrumental, jazz- and classical-oriented music for the band's concerts, which confused audiences. Zappa felt that audiences failed to appreciate his "electrical chamber music". In 1969 there were nine band members and Zappa was supporting the group himself from his publishing royalties whether they played or not. 1969 was also the year Zappa, fed up with MGM Records' interference, left them for Warner Bros. Records' Reprise subsidiary, where Zappa/Mothers recordings would bear the Bizarre Records imprint. In late 1969, Zappa broke up the band. He often cited the financial strain as the main reason, but also commented on the band members' lack of sufficient effort.
Many band members were bitter about Zappa's decision, and some took it as a sign of Zappa's concern for perfection at the expense of human feeling. Others were irritated by 'his autocratic ways', exemplified by Zappa's never staying at the same hotel as the band members. Several members played for Zappa in years to come. Remaining recordings with the band from this period were collected on "Weasels Ripped My Flesh" and "Burnt Weeny Sandwich" (both released in 1970). After he disbanded the Mothers of Invention, Zappa released the acclaimed solo album "Hot Rats" (1969). It features, for the first time on record, Zappa playing extended guitar solos and contains one of his most enduring compositions, "Peaches en Regalia", which reappeared several times on future recordings. He was backed by jazz, blues and R&B session players including violinist Don "Sugarcane" Harris, drummers John Guerin and Paul Humphrey, multi-instrumentalist and previous member of the Mothers of Invention Ian Underwood, and multi-instrumentalist Shuggie Otis on bass, along with a guest appearance by Captain Beefheart (providing vocals to the only non-instrumental track, "Willie the Pimp"). It became a popular album in England, and had a major influence on the development of the jazz-rock fusion genre. In 1970 Zappa met conductor Zubin Mehta. They arranged a May 1970 concert where Mehta conducted the Los Angeles Philharmonic augmented by a rock band. According to Zappa, the music was mostly written in motel rooms while on tour with the Mothers of Invention. Some of it was later featured in the movie "200 Motels". Although the concert was a success, Zappa's experience working with a symphony orchestra was not a happy one. His dissatisfaction became a recurring theme throughout his career; he often felt that the quality of performance of his material delivered by orchestras was not commensurate with the money he spent on orchestral concerts and recordings. Later in 1970, Zappa formed a new version of the Mothers (from then on, he mostly dropped the "of Invention"). It included British drummer Aynsley Dunbar, jazz keyboardist George Duke, Ian Underwood, Jeff Simmons (bass, rhythm guitar), and three members of the Turtles: bass player Jim Pons, and singers Mark Volman and Howard Kaylan, who, due to persistent legal and contractual problems, adopted the stage name "The Phlorescent Leech and Eddie", or "Flo & Eddie". This version of the Mothers debuted on Zappa's next solo album "Chunga's Revenge" (1970), which was followed by the double-album soundtrack to the movie "200 Motels" (1971), featuring the Mothers, the Royal Philharmonic Orchestra, Ringo Starr, Theodore Bikel, and Keith Moon. Co-directed by Zappa and Tony Palmer, it was filmed in a week at Pinewood Studios outside London. Tensions between Zappa and several cast and crew members arose before and during shooting. The film deals loosely with life on the road as a rock musician. It was the first feature film photographed on videotape and transferred to 35 mm film, a process that allowed for novel visual effects. It was released to mixed reviews. The score relied extensively on orchestral music, and Zappa's dissatisfaction with the classical music world intensified when a concert, scheduled at the Royal Albert Hall after filming, was canceled because a representative of the venue found some of the lyrics obscene. In 1975, he lost a lawsuit against the Royal Albert Hall for breach of contract. 
After "200 Motels", the band went on tour, which resulted in two live albums, "Fillmore East – June 1971" and "Just Another Band from L.A."; the latter included the 20-minute track "Billy the Mountain", Zappa's satire on rock opera set in Southern California. This track was representative of the band's theatrical performances—which used songs to build sketches based on "200 Motels" scenes, as well as new situations that often portrayed the band members' sexual encounters on the road. On December 4, 1971, Zappa suffered his first of two serious setbacks. While performing at Casino de Montreux in Switzerland, the Mothers' equipment was destroyed when a flare set off by an audience member started a fire that burned down the casino. Immortalized in Deep Purple's song "Smoke on the Water", the event and immediate aftermath can be heard on the bootleg album "Swiss Cheese/Fire", released legally as part of Zappa's "Beat the Boots II" compilation. After losing $50,000 () worth of equipment and a week's break, the Mothers played at the Rainbow Theatre, London, with rented gear. During the encore, audience member Trevor Howell pushed Zappa off the stage and into the concrete-floored orchestra pit. The band thought Zappa had been killed—he had suffered serious fractures, head trauma and injuries to his back, leg, and neck, as well as a crushed larynx, which ultimately caused his voice to drop a third after healing. This attack resulted in an extended period of wheelchair confinement, making touring impossible for over half a year. Upon return to the stage in September 1972, Zappa was still wearing a leg brace, had a noticeable limp and could not stand for very long while on stage. Zappa noted that one leg healed "shorter than the other" (a reference later found in the lyrics of songs "Zomby Woof" and "Dancin' Fool"), resulting in chronic back pain. Meanwhile, the Mothers were left in limbo and eventually formed the core of Flo and Eddie's band as they set out on their own. During 1971–72 Zappa released two strongly jazz-oriented solo LPs, "Waka/Jawaka" and "The Grand Wazoo", which were recorded during the forced layoff from concert touring, using floating line-ups of session players and Mothers alumni. Musically, the albums were akin to "Hot Rats," in that they featured extended instrumental tracks with extended soloing. Zappa began touring again in late 1972. His first effort was a series of concerts in September 1972 with a 20-piece big band referred to as the Grand Wazoo. This was followed by a scaled-down version known as the Petit Wazoo that toured the U.S. for five weeks from October to December 1972. Zappa then formed and toured with smaller groups that variously included Ian Underwood (reeds, keyboards), Ruth Underwood (vibes, marimba), Sal Marquez (trumpet, vocals), Napoleon Murphy Brock (sax, flute and vocals), Bruce Fowler (trombone), Tom Fowler (bass), Chester Thompson (drums), Ralph Humphrey (drums), George Duke (keyboards, vocals), and Jean-Luc Ponty (violin). By 1973 the Bizarre and Straight labels were discontinued. In their place, Zappa and Cohen created DiscReet Records, also distributed by Warner Bros. Zappa continued a high rate of production through the first half of the 1970s, including the solo album "Apostrophe (')" (1974), which reached a career-high No. 10 on the Billboard pop album charts helped by the No. 86 chart hit "Don't Eat The Yellow Snow". 
Other albums from the period are "Over-Nite Sensation" (1973), which contained several future concert favorites, such as "Dinah-Moe Humm" and "Montana", and the albums "Roxy & Elsewhere" (1974) and "One Size Fits All" (1975) which feature ever-changing versions of a band still called the Mothers, and are notable for the tight renditions of highly difficult jazz fusion songs in such pieces as "Inca Roads", "Echidna's Arf (Of You)" and "Be-Bop Tango (Of the Old Jazzmen's Church)". A live recording from 1974, "You Can't Do That on Stage Anymore, Vol. 2" (1988), captures "the full spirit and excellence of the 1973–75 band". Zappa released "Bongo Fury" (1975), which featured a live recording at the Armadillo World Headquarters in Austin from a tour the same year that reunited him with Captain Beefheart for a brief period. They later became estranged for a period of years, but were in contact at the end of Zappa's life. Zappa's relationship with long-time manager Herb Cohen ended in 1976. Zappa sued Cohen for skimming more than he was allocated from DiscReet Records, as well as for signing acts of which Zappa did not approve. Cohen filed a lawsuit against Zappa in return, which froze the money Zappa and Cohen had gained from an out-of-court settlement with MGM over the rights of the early Mothers of Invention recordings. It also prevented Zappa having access to any of his previously recorded material during the trials. Zappa therefore took his personal master copies of the rock-oriented "Zoot Allures" (1976) directly to Warner Bros., thereby bypassing DiscReet. In the mid-1970s Zappa prepared material for "Läther" (pronounced "leather"), a four-LP project. "Läther" encapsulated all the aspects of Zappa's musical styles—rock tunes, orchestral works, complex instrumentals, and Zappa's own trademark distortion-drenched guitar solos. Wary of a quadruple-LP, Warner Bros. Records refused to release it. Zappa managed to get an agreement with Phonogram Inc., and test pressings were made targeted at a Halloween 1977 release, but Warner Bros. prevented the release by claiming rights over the material. Zappa responded by appearing on the Pasadena, California radio station KROQ, allowing them to broadcast "Läther" and encouraging listeners to make their own tape recordings. A lawsuit between Zappa and Warner Bros. followed, during which no Zappa material was released for more than a year. Eventually, Warner Bros. issued different versions of much of the "Läther" material in 1978 and 1979 as four individual albums (five full-length LPs) with limited promotion. Although Zappa eventually gained the rights to all his material created under the MGM and Warner Bros. contracts, the various lawsuits meant that for a period Zappa's only income came from touring, which he therefore did extensively in 1975–77 with relatively small, mainly rock-oriented, bands. Drummer Terry Bozzio became a regular band member, Napoleon Murphy Brock stayed on for a while, and original Mothers of Invention bassist Roy Estrada joined. Among other musicians were bassist Patrick O'Hearn, singer-guitarist Ray White and keyboardist/violinist Eddie Jobson. In December 1976, Zappa appeared as a featured musical guest on the NBC television show "Saturday Night Live". Zappa's song "I'm the Slime" was performed with a voice-over by "SNL" booth announcer Don Pardo, who also introduced "Peaches En Regalia" on the same airing. In 1978, Zappa served both as host and musical act on the show, and as an actor in various sketches. 
The performances included an impromptu musical collaboration with cast member John Belushi during the instrumental piece "The Purple Lagoon". Belushi appeared as his Samurai Futaba character, playing the tenor sax with Zappa conducting. Zappa's band at the time, with the additions of Ruth Underwood and a horn section (featuring Michael and Randy Brecker), performed during Christmas in New York, recordings of which appear on one of the albums Warner Bros. culled from the "Läther" project, "Zappa in New York" (1978). It mixes complex instrumentals such as "The Black Page" and humorous songs like "Titties and Beer". The former composition, written originally for drum kit but later developed for larger bands, is notorious for its complexity in rhythmic structure and short, densely arranged passages. "Zappa in New York" also featured a song about sex criminal Michael H. Kenyon, "The Illinois Enema Bandit", with Don Pardo providing the opening narration. Like many songs on the album, it contained numerous sexual references, leading many critics to object to and take offense at the content. Zappa dismissed the criticism by noting that he was a journalist reporting on life as he saw it. Predating his later fight against censorship, he remarked: "What do you make of a society that is so primitive that it clings to the belief that certain words in its language are so powerful that they could corrupt you the moment you hear them?" The remaining albums released by Warner Bros. Records without Zappa's consent were "Studio Tan" in 1978 and "Sleep Dirt" and "Orchestral Favorites" in 1979, which contained complex suites of instrumentally based tunes recorded between 1973 and 1976, and whose release was overlooked in the midst of the legal problems. Resolving the lawsuits successfully, Zappa ended the 1970s "stronger than ever" by releasing two of his most successful albums in 1979: the best-selling album of his career, "Sheik Yerbouti", and, in Kelley Lowe's opinion, the "bona fide masterpiece" "Joe's Garage". The double album "Sheik Yerbouti" was the first release on Zappa Records, and contained the Grammy-nominated single "Dancin' Fool", which reached No. 45 on the "Billboard" charts, and "Jewish Princess", which received attention when a Jewish group, the Anti-Defamation League (ADL), attempted to prevent the song from receiving radio airplay due to its alleged anti-Semitic lyrics. Zappa vehemently denied any anti-Semitic sentiments, and dismissed the ADL as a "noisemaking organization that tries to apply pressure on people in order to manufacture a stereotype image of Jews that suits their idea of a good time." The album's commercial success was attributable in part to "Bobby Brown". Due to its explicit lyrics about a young man's encounter with a "dyke by the name of Freddie", the song did not get airplay in the U.S., but it topped the charts in several European countries where English is not the primary language. The triple LP "Joe's Garage" featured lead singer Ike Willis as the voice of the character "Joe" in a rock opera about the danger of political systems, the suppression of freedom of speech and music—inspired in part by the Islamic revolution that had made music illegal within its jurisdiction at the time—and about the "strange relationship Americans have with sex and sexual frankness".
The album contains rock songs like "Catholic Girls" (a riposte to the controversies of "Jewish Princess"), "Lucille Has Messed My Mind Up", and the title track, as well as extended live-recorded guitar improvisations combined with a studio backup band dominated by drummer Vinnie Colaiuta (with whom Zappa had a particularly good musical rapport) using the xenochrony process. It also contains one of Zappa's most famous guitar "signature pieces", "Watermelon in Easter Hay". On December 21, 1979, Zappa's movie "Baby Snakes" premiered in New York. The movie's tagline was "A movie about people who do stuff that is not normal". The two-hour-and-forty-minute movie was based on footage from concerts in New York around Halloween 1977, with a band featuring keyboardist Tommy Mars and percussionist Ed Mann (who would both return on later tours) as well as guitarist Adrian Belew. It also contained several extraordinary sequences of clay animation by Bruce Bickford, who had earlier provided animation sequences to Zappa for a 1974 TV special (which became available on the 1982 video "The Dub Room Special"). The movie did not do well in theatrical distribution, but won the Premier Grand Prix at the First International Music Festival in Paris in 1981. Zappa later expanded his television appearances into non-musical roles. He was an actor or voice artist in episodes of "Shelley Duvall's Faerie Tale Theatre", "Miami Vice" and "The Ren & Stimpy Show". A voice part in "The Simpsons" never materialized, to creator Matt Groening's disappointment (Groening was a neighbor of Zappa and a lifelong fan). In 1980, Zappa cut his ties with record distributor Phonogram after the label refused to release his song "I Don't Wanna Get Drafted". It was picked up by CBS Records and released on the Zappa label in the United States and Canada, and by the CBS label internationally. After spending much of 1980 on the road, Zappa released "Tinsel Town Rebellion" in 1981. It was the first release on his own Barking Pumpkin Records, and it contains songs taken from a 1979 tour, one studio track and material from the 1980 tours. The album is a mixture of complicated instrumentals and Zappa's use of "sprechstimme" (speaking song or voice)—a compositional technique utilized by such composers as Arnold Schoenberg and Alban Berg—showcasing some of the most accomplished bands Zappa ever had (mostly featuring drummer Vinnie Colaiuta). While some lyrics still raised controversy among critics, some of whom found them sexist, the political and sociological satire in songs like the title track and "The Blue Light" has been described as a "hilarious critique of the willingness of the American people to believe anything". The album is also notable for the presence of guitarist Steve Vai, who joined Zappa's touring band in late 1980. The same year, the double album "You Are What You Is" was released. Most of it was recorded in Zappa's brand-new Utility Muffin Research Kitchen (UMRK) studios, which were located at his house, thereby giving him complete freedom in his work. The album included one complex instrumental, "Theme from the 3rd Movement of Sinister Footwear", but mainly consisted of rock songs with Zappa's sardonic social commentary—satirical lyrics directed at teenagers, the media, and religious and political hypocrisy. "Dumb All Over" is a tirade on religion, as is "Heavenly Bank Account", wherein Zappa rails against TV evangelists such as Jerry Falwell and Pat Robertson for their purported influence on the U.S.
administration as well as their use of religion as a means of raising money. Songs like "Society Pages" and "I'm a Beautiful Guy" show Zappa's dismay with the Reagan era and its "obscene pursuit of wealth and happiness". In 1981, Zappa also released three instrumental albums, "Shut Up 'n Play Yer Guitar", "Shut Up 'N Play Yer Guitar Some More", and "The Return of the Son of Shut Up 'N Play Yer Guitar", which were initially sold via mail order, but later released through the CBS label due to popular demand. The albums focus exclusively on Frank Zappa as a guitar soloist, and the tracks are predominantly live recordings from 1979 to 1980; they highlight Zappa's improvisational skills with "beautiful performances from the backing group as well". Another guitar-only album, "Guitar", was released in 1988, and a third, "Trance-Fusion", which Zappa completed shortly before his death, was released in 2006. In May 1982, Zappa released "Ship Arriving Too Late to Save a Drowning Witch", which featured his biggest selling single ever, the Grammy Award-nominated song "Valley Girl" (topping out at No. 32 on the "Billboard" charts). In her improvised lyrics to the song, Zappa's daughter Moon Unit satirized the patois of teenage girls from the San Fernando Valley, which popularized many "Valspeak" expressions such as "gag me with a spoon", "fer sure, fer sure", "grody to the max", and "barf out". In 1983, two different projects were released, beginning with "The Man from Utopia," a rock-oriented work. The album is eclectic, featuring the vocal-led "Dangerous Kitchen" and "The Jazz Discharge Party Hats", both continuations of the sprechstimme excursions on "Tinseltown Rebellion." The second album, "London Symphony Orchestra, Vol. I", contained orchestral Zappa compositions conducted by Kent Nagano and performed by the London Symphony Orchestra (LSO). A second record of these sessions, "London Symphony Orchestra, Vol. II" was released in 1987. The material was recorded under a tight schedule with Zappa providing all funding, helped by the commercial success of "Valley Girl". Zappa was not satisfied with the LSO recordings. One reason is "Strictly Genteel", which was recorded after the trumpet section had been out for drinks on a break: the track took 40 edits to hide out-of-tune notes. Conductor Nagano, who was pleased with the experience, noted that "in fairness to the orchestra, the music is humanly very, very difficult". Some reviews noted that the recordings were the best representation of Zappa's orchestral work so far. In 1984 Zappa teamed again with Nagano and the Berkeley Symphony Orchestra for a live performance of "A Zappa Affair" with augmented orchestra, life-size puppets, and moving stage sets. Although critically acclaimed the work was a financial failure, and only performed twice. Zappa was invited by conference organizer Thomas Wells to be the keynote speaker at the American Society of University Composers at the Ohio State University. It was there Zappa delivered his famous "Bingo! There Goes Your Tenure" address, and had two of his orchestra pieces, "Dupree's Paradise" and "Naval Aviation in Art?" performed by the Columbus Symphony and ProMusica Chamber Orchestra of Columbus. For the remainder of his career, much of Zappa's work was influenced by his use of the Synclavier as a compositional and performance tool. Even considering the complexity of the music he wrote, the Synclavier could realize anything he could dream up. 
The Synclavier could be programmed to play almost anything conceivable, to perfection: "With the Synclavier, any group of imaginary instruments can be invited to play the most difficult passages ... with "one-millisecond" accuracy—every time". Even though it essentially did away with the need for musicians, Zappa viewed the Synclavier and real-life musicians as separate. In 1984, he released four albums. "The Perfect Stranger" contains orchestral works commissioned and conducted by celebrated conductor, composer and pianist Pierre Boulez (who was listed as an influence on "Freak Out!"), and performed by his Ensemble InterContemporain. These were juxtaposed with premiere Synclavier pieces. Again, Zappa was not satisfied with the performances of his orchestral works, regarding them as under-rehearsed, but in the album liner notes he respectfully thanked Boulez for his demands for precision. The Synclavier pieces stood in contrast to the orchestral works, as the sounds were electronically generated and not, as became possible shortly thereafter, sampled. The album "Thing-Fish" was an ambitious three-record set in the style of a Broadway play dealing with a dystopian "what-if" scenario involving feminism, homosexuality, manufacturing and distribution of the AIDS virus, and a eugenics program conducted by the United States government. New vocals were combined with previously released tracks and new Synclavier music; "the work is an extraordinary example of bricolage". "Francesco Zappa", a Synclavier rendition of works by the 18th-century composer Francesco Zappa, was also released in 1984. Around 1986, Zappa undertook a comprehensive re-release program of his earlier vinyl recordings. He personally oversaw the remastering of all his 1960s, 1970s and early 1980s albums for the new digital compact disc medium. Certain aspects of these re-issues were criticized by some fans as being unfaithful to the original recordings. Nearly twenty years before the advent of online music stores, Zappa had proposed to replace "phonographic record merchandising" of music by "direct digital-to-digital transfer" through phone or cable TV (with royalty payments and consumer billing automatically built into the accompanying software). In 1989, Zappa considered his idea a "miserable flop". The album "Jazz from Hell", released in 1986, earned Zappa his first Grammy Award in 1988, for Best Rock Instrumental Performance. Except for one live guitar solo ("St. Etienne"), the album exclusively featured compositions brought to life by the Synclavier. Although it is an instrumental album containing no lyrics, Meyer Music Markets sold "Jazz from Hell" with an "explicit lyrics" sticker—a warning label introduced by the Recording Industry Association of America in an agreement with the Parents Music Resource Center (PMRC). Zappa's last tour in a rock and jazz band format took place in 1988 with a 12-piece group which had a repertoire of over 100 (mostly Zappa) compositions, but which split under acrimonious circumstances before the tour was completed. The tour was documented on the albums "Broadway the Hard Way" (new material featuring songs with strong political emphasis); "The Best Band You Never Heard in Your Life" (Zappa "standards" and an eclectic collection of cover tunes, ranging from Maurice Ravel's "Boléro" to Led Zeppelin's "Stairway to Heaven"); and "Make a Jazz Noise Here". Parts are also found on "You Can't Do That on Stage Anymore", volumes 4 and 6. Recordings from this tour also appear on the 2006 album "Trance-Fusion".
In 1990, Zappa was diagnosed with terminal prostate cancer. The disease had been developing unnoticed for ten years and was considered inoperable. After the diagnosis, Zappa devoted most of his energy to modern orchestral and Synclavier works. Shortly before his death in 1993, he completed "Civilization Phaze III", a major Synclavier work which he had begun in the 1980s. In 1991, Zappa was chosen to be one of four featured composers at the Frankfurt Festival in 1992 (the others were John Cage, Karlheinz Stockhausen, and Alexander Knaifel). Zappa was approached by the German chamber ensemble Ensemble Modern, which was interested in playing his music for the event. Although ill, he invited them to Los Angeles for rehearsals of new compositions and new arrangements of older material. Zappa got along well with the musicians, and concerts in Germany and Austria were set up for later in the year. Zappa also performed in Prague in 1991, claiming that it "was the first time that he had a reason to play his guitar in 3 years" and that the moment was just "the beginning of a new country", and he asked the public to "try to keep your country unique, do not change it into something else". In September 1992, the concerts went ahead as scheduled, but Zappa could only appear at two in Frankfurt due to illness. At the first concert, he conducted the opening "Overture" and the final "G-Spot Tornado", as well as the theatrical "Food Gathering in Post-Industrial America, 1992" and "Welcome to the United States" (the remainder of the program was conducted by the ensemble's regular conductor, Peter Rundel). Zappa received a 20-minute ovation. "G-Spot Tornado" was performed with Canadian dancer Louise Lecavalier. It was his last professional public appearance, as the cancer had spread to such an extent that he was in too much pain to enjoy an event that he otherwise found "exhilarating". Recordings from the concerts appeared on "The Yellow Shark" (1993), Zappa's last release during his lifetime, and some material from studio rehearsals appeared on the posthumous "Everything Is Healing Nicely" (1999). Zappa died of prostate cancer on December 4, 1993, just 17 days before his 53rd birthday, at his home with his wife and children by his side. At a private ceremony the following day, his body was buried in a grave at the Westwood Village Memorial Park Cemetery in Los Angeles. The grave is unmarked. On December 6, his family publicly announced that "Composer Frank Zappa left for his final tour just before 6:00 pm on Saturday". The general phases of Zappa's music have been variously categorized as experimental rock, jazz, classical, avant-pop, experimental pop, comedy rock, doo-wop, jazz fusion, progressive rock, electronic, proto-prog, avant-jazz, and psychedelic rock. He generally did not have discrete periods in which he performed one style or another, but blended styles together and returned to them as it interested him; his last studio album, 1986's "Jazz from Hell", marked a shift toward live orchestral performance for the last several years of his life. Zappa grew up influenced by avant-garde composers such as Edgard Varèse, Igor Stravinsky, and Anton Webern; 1950s blues artists Clarence "Gatemouth" Brown, Guitar Slim, Johnny Guitar Watson, and B.B. King; R&B and doo-wop groups (particularly local pachuco groups); and modern jazz.
His own heterogeneous ethnic background, and the diverse social and cultural mix in and around greater Los Angeles, were crucial in the formation of Zappa as a practitioner of underground music and of his later distrustful and openly critical attitude towards "mainstream" social, political and musical movements. He frequently lampooned musical fads like psychedelia, rock opera and disco. Television also exerted a strong influence, as demonstrated by quotations from show themes and advertising jingles found in his later works. Zappa's albums make extensive use of segued tracks, breaklessly joining the elements of his albums. His total output is unified by a conceptual continuity he termed "Project/Object", with numerous musical phrases, ideas, and characters reappearing across his albums. He also called it a "conceptual continuity", meaning that any project or album was part of a larger project. Everything was connected, and musical themes and lyrics reappeared in different form on later albums. Conceptual continuity clues are found throughout Zappa's entire œuvre. Zappa is widely recognized as one of the most significant electric guitar soloists. In a 1983 issue of "Guitar World", Jon Swenson declared: "the fact of the matter is that [Zappa] is one of the greatest guitarists we have and is sorely unappreciated as such." His idiosyncratic style developed gradually and was mature by the early 1980s, by which time his live performances featured lengthy improvised solos during many songs. A November 2016 feature by the editors of "Guitar Player" magazine wrote: "Brimming with sophisticated motifs and convoluted rhythms, Zappa's extended excursions are more akin to symphonies than they are to guitar solos." The symphonic comparison stems from his habit of introducing melodic themes that, like a symphony's main melodies, were repeated with variations throughout his solos. He was further described as using a wide variety of scales and modes, enlivened by "unusual rhythmic combinations". His left hand was capable of smooth legato technique, while Zappa's right was "one of the fastest pick hands in the business." His song "Outside Now" from "Joe's Garage" poked fun at the negative reception of Zappa's guitar technique by those more commercially minded, as the song's narrator lives in a world where music is outlawed and he imagines "imaginary guitar notes that would irritate/An executive kind of guy", lyrics that are followed by one of Zappa's characteristically quirky solos in 11/8 time. Zappa transcriptionist Kasper Sloots wrote, "Zappa's guitar solos aren't meant to show off technically (Zappa hasn't claimed to be a big virtuoso on the instrument), but for the pleasure it gives trying to build a composition right in front of an audience without knowing what the outcome will be." In New York, Zappa increasingly used tape editing as a compositional tool. A prime example is found on the double album "Uncle Meat" (1969), where the track "King Kong" is edited from various studio and live performances. Zappa had begun regularly recording concerts, and because of his insistence on precise tuning and timing, he was able to augment his studio productions with excerpts from live shows, and vice versa. Later, he combined recordings of different compositions into new pieces, irrespective of the tempo or meter of the sources. He dubbed this process "xenochrony" (strange synchronizations)—reflecting the Greek "xeno" (alien or strange) and "chronos" (time). Zappa was married to Kathryn J. 
"Kay" Sherman from 1960 to 1963. In 1967, he married Adelaide Gail Sloatman. He and his second wife had four children: Moon, Dweezil, Ahmet and Diva. Following Zappa's death, his widow Gail created the Zappa Family Trust, which owns the rights to a massive colection of music and other creative output: more than 60 albums were released during Zappa's lifetime and 40 posthumously that are potentially worth at least tens of millions of dollars. Upon Gail's death in October 2015, it was revealed that Zappa's youngest children, Ahmet and Diva, were given control of the trust with shares of 30% each, while his older children Moon and Dweezil were given smaller shares of 20% each. As beneficiaries only, Moon and Dweezil will not see any money from the trust until it is profitable—in 2016, it was "millions of dollars in debt"—and must seek permission from Ahmet, the trustee, to make money off their father's music or merchandise bearing his name. The uneven divide of the trust has resulted in several conflicts between Zappa's children, including a feud between Dweezil and Ahmet over Dweezil's use of his father's music in live performances. Zappa stated that he tried smoking cannabis approximately ten times, but without any pleasure or result beyond sleepiness and sore throat, and "never used LSD, never used cocaine, never used heroin or any of that other stuff." Zappa stated, "Drugs do not become a problem until the person who uses the drugs does something to you, or does something that would affect your life that you don't want to have happen to you, like an airline pilot who crashes because he was full of drugs." He was a heavy tobacco smoker for most of his life, and strongly critical of anti-tobacco campaigns. While he disapproved of drug use, he criticized the War on Drugs, comparing it to alcohol prohibition, and stated that the United States Treasury would benefit from the decriminalization and regulation of drugs. Describing his philosophical views, Zappa stated, "I believe that people have a right to decide their own destinies; people own themselves. I also believe that, in a democracy, government exists because (and only so long as) individual citizens give it a 'temporary license to exist'—in exchange for a promise that it will behave itself. In a democracy, you own the government—it doesn't own you." In a 1991 interview, Zappa reported that he was a registered Democrat but added "that might not last long—I'm going to shred that". Describing his political views, Zappa categorized himself as a "practical conservative". He favored limited government and low taxes; he also stated that he approved of national defense, social security, and other federal programs, but only if recipients of such programs are willing and able to pay for them. He favored capitalism, entrepreneurship, and independent business, stating that musicians could make more from owning their own businesses than from collecting royalties. He opposed communism, stating, "A system that doesn't allow ownership ... has—to put it mildly—a fatal design flaw." He had always encouraged his fans to register to vote on album covers, and throughout 1988 he had registration booths at his concerts. He even considered running for president of the United States as an independent. Zappa was often characterized as an atheist. He recalled his parents being "pretty religious" and trying to make him go to Catholic school despite his resentment. 
He felt disgust towards organized religion (Christianity in particular) because he believed that it promoted ignorance and anti-intellectualism. On Dweezil's birth certificate, Frank wrote "musician" for "father's religion". Some of his songs, concert performances, interviews and public debates in the 1980s criticized and derided Republicans and their policies, President Ronald Reagan, the Strategic Defense Initiative (SDI), televangelism, and the Christian Right, and warned that the United States government was in danger of becoming a "fascist theocracy". In early 1990, Zappa visited Czechoslovakia at the request of President Václav Havel. Havel designated him as Czechoslovakia's "Special Ambassador to the West on Trade, Culture and Tourism". Havel was a lifelong fan of Zappa, who had great influence in the avant-garde and underground scene in Central Europe in the 1970s and 1980s (a Czech rock group that was imprisoned in 1976 took its name from Zappa's 1967 song "Plastic People"). Under pressure from Secretary of State James Baker, Zappa's posting was withdrawn. Havel made Zappa an unofficial cultural attaché instead. Zappa planned to develop an international consulting enterprise to facilitate trade between the former Eastern Bloc and Western businesses. Zappa expressed opinions on censorship when he appeared on CNN's "Crossfire" TV series and debated issues with "Washington Times" commentator John Lofton in 1986. On September 19, 1985, Zappa testified before the United States Senate Committee on Commerce, Science, and Transportation, attacking the Parents Music Resource Center (PMRC), a music organization co-founded by Tipper Gore, wife of then-senator Al Gore. The PMRC consisted of many wives of politicians, including the wives of five members of the committee, and was founded to address the issue of song lyrics with sexual or satanic content. During his testimony, Zappa stated that there was a clear conflict of interest within the PMRC, given its founders' relationships with the politicians who were trying to pass what he referred to as the Blank Tape Tax; Kandy Stroud, a spokeswoman for the PMRC, had announced that Senator Gore (husband of Tipper Gore, who co-founded the committee) was a co-sponsor of that legislation. Zappa suggested that record labels were trying to get the bill passed quickly through committees, one of which was chaired by Senator Thurmond, who was also affiliated with the PMRC, and he argued that the PMRC hearing was being used as a distraction from the passage of the bill, which would benefit only a select few in the music industry. Zappa saw the PMRC's activities as a path towards censorship, and called their proposal for voluntary labelling of records with explicit content "extortion" of the music industry. In his prepared statement, he said: The PMRC proposal is an ill-conceived piece of nonsense which fails to deliver any real benefits to children, infringes the civil liberties of people who are not children, and promises to keep the courts busy for years dealing with the interpretational and enforcemental problems inherent in the proposal's design. It is my understanding that, in law, First Amendment issues are decided with a preference for the least restrictive alternative. In this context, the PMRC's demands are the equivalent of treating dandruff by decapitation. ... The establishment of a rating system, voluntary or otherwise, opens the door to an endless parade of moral quality control programs based on things certain Christians do not like.
What if the next bunch of Washington wives demands a large yellow "J" on all material written or performed by Jews, in order to save helpless children from exposure to concealed Zionist doctrine? Zappa set excerpts from the PMRC hearings to Synclavier music in his composition "Porn Wars" on the 1985 album "Frank Zappa Meets the Mothers of Prevention", and the full recording was released in 2010 as "Congress Shall Make No Law..." Zappa is heard interacting with Senators Fritz Hollings, Slade Gorton and Al Gore. Zappa earned widespread critical acclaim in his lifetime and after his death. "The Rolling Stone Album Guide" (2004) writes: "Frank Zappa dabbled in virtually all kinds of music—and, whether guised as a satirical rocker, jazz-rock fusionist, guitar virtuoso, electronics wizard, or orchestral innovator, his eccentric genius was undeniable." Even though his work drew inspiration from many different genres, Zappa was seen as establishing a coherent and personal expression. In 1971, biographer David Walley noted that "The whole structure of his music is unified, not neatly divided by dates or time sequences and it is all building into a composite". Commenting on Zappa's music, politics and philosophy, Barry Miles noted in 2004 that they cannot be separated: "It was all one; all part of his 'conceptual continuity'." "Guitar Player" devoted a special issue to Zappa in 1992, and asked on the cover "Is FZ America's Best Kept Musical Secret?" Editor Don Menn remarked that the issue was about "The most important composer to come out of modern popular music". Among those contributing to the issue was composer and musicologist Nicolas Slonimsky, who conducted premiere performances of works of Ives and Varèse in the 1930s. He became friends with Zappa in the 1980s, and said, "I admire everything Frank does, because he practically created the new musical millennium. He does beautiful, beautiful work ... It has been my luck to have lived to see the emergence of this totally new type of music." Conductor Kent Nagano remarked in the same issue that "Frank is a genius. That's a word I don't use often ... In Frank's case it is not too strong ... He is extremely literate musically. I'm not sure if the general public knows that." In "Musician" magazine's posthumous Zappa tribute article, Pierre Boulez said that Zappa "was an exceptional figure because he was part of the worlds of rock and classical music and that both types of his work would survive." In 1994, jazz magazine "Down Beat"s critics poll placed Zappa in its Hall of Fame. Zappa was posthumously inducted into the Rock and Roll Hall of Fame in 1995. There, it was written that "Frank Zappa was rock and roll's sharpest musical mind and most astute social critic. He was the most prolific composer of his age, and he bridged genres—rock, jazz, classical, avant-garde and even novelty music—with masterful ease". He received the Grammy Lifetime Achievement Award in 1997. He was ranked number 36 on VH1's "100 Greatest Artists of Hard Rock" in 2000. In 2005, the U.S. National Recording Preservation Board included "We're Only in It for the Money" in the National Recording Registry, stating: "Frank Zappa's inventive and iconoclastic album presents a unique political stance, both anti-conservative and anti-counterculture, and features a scathing satire on hippiedom and America's reactions to it". The same year, "Rolling Stone" magazine ranked him at No. 71 on its list of the 100 Greatest Artists of All Time. In 2011, he was ranked at No. 
22 on the list of the 100 Greatest Guitarists of All Time by the same magazine. The street in Partinico where his father lived at number 13, Via Zammatà, has been renamed Via Frank Zappa. Many musicians, bands and orchestras from diverse genres have been influenced by Zappa's music. Rock artists like Alice Cooper, Larry LaLonde of Primus, and Fee Waybill of the Tubes all cite Zappa's influence, as do progressive, alternative, electronic and avant-garde/experimental rock artists like Can, Pere Ubu, Soft Machine, Henry Cow, Faust, Devo, Kraftwerk, Trey Anastasio of Phish, Jeff Buckley, John Frusciante, Steven Wilson, and The Aristocrats. Paul McCartney regarded "Sgt. Pepper's Lonely Hearts Club Band" as the Beatles' "Freak Out!". Jimi Hendrix and heavy rock and metal acts like Black Sabbath, Simon Phillips, Mike Portnoy, Warren DeMartini, Steve Vai, Strapping Young Lad, System of a Down, and Clawfinger have acknowledged Zappa's inspiration. On the classical music scene, Tomas Ulrich, Meridian Arts Ensemble, Ensemble Ambrosius and the Fireworks Ensemble regularly perform Zappa's compositions and quote his influence. Contemporary jazz musicians and composers Bill Frisell and John Zorn are inspired by Zappa, as is funk legend George Clinton. Other artists affected by Zappa include ambient composer Brian Eno, new age pianist George Winston, electronic composer Bob Gluck, parodist and disc jockey Dr. Demento, parodist and novelty composer "Weird Al" Yankovic, industrial music pioneer Genesis P-Orridge, singer Cree Summer, and noise music artist Masami Akita of Merzbow. Scientists from various fields have honored Zappa by naming new discoveries after him. In 1967, paleontologist Leo P. Plas, Jr. identified an extinct mollusc in Nevada and named it "Amaurotoma zappa", explaining that "The specific name, "zappa", honors Frank Zappa". In the 1980s, biologist Ed Murdy named a genus of gobiid fishes of New Guinea "Zappa", with a species named "Zappa confluentus". Biologist Ferdinando Boero named a Californian jellyfish "Phialella zappai" (1987), noting that he had "pleasure in naming this species after the modern music composer". In the early 1980s, Belgian biologists Bosmans and Bosselaers discovered a Cameroonian spider, which they named "Pachygnatha zappa" in 1994 because "the ventral side of the abdomen of the female of this species strikingly resembles the artist's legendary moustache". A gene of the bacterium "Proteus mirabilis" that causes urinary tract infections was named "zapA" in 1995 by three biologists from Maryland. In their scientific article, they "especially thank the late Frank Zappa for inspiration and assistance with genetic nomenclature". Repeating regions of the genome of the human tumor virus KSHV were named "frnk", "vnct" and "zppa" in 1996 by Moore and Chang, who discovered the virus. Also, a 143 base pair repeat sequence occurring at two positions was named "waka/jwka". In the late 1990s, American paleontologists Marc Salak and Halard L. Lescinsky discovered a metazoan fossil, and named it "Spygori zappania" to honor "the late Frank Zappa ... whose mission paralleled that of the earliest paleontologists: to challenge conventional and traditional beliefs when such beliefs lacked roots in logic and reason". In 1994, lobbying efforts initiated by psychiatrist John Scialli led the International Astronomical Union's Minor Planet Center to name an asteroid in Zappa's honor: 3834 Zappafrank. 
The asteroid was discovered in 1980 by Czechoslovakian astronomer Ladislav Brožek, and the citation for its naming says that "Zappa was an eclectic, self-trained artist and composer ... Before 1989 he was regarded as a symbol of democracy and freedom by many people in Czechoslovakia". In 1995, a bust of Zappa by sculptor Konstantinas Bogdanas was installed in Vilnius, the Lithuanian capital. The choice of Zappa was explained as "a symbol that would mark the end of communism, but at the same time express that it wasn't always doom and gloom." A replica was offered to the city of Baltimore in 2008, and on September 19, 2010 — the twenty-fifth anniversary of Zappa's testimony to the U.S. Senate — a ceremony dedicating the replica was held, and the bust was unveiled at a library in the city. In 2002, a bronze bust was installed in the German city of Bad Doberan, which since 1990 has hosted the "Zappanale", an annual music festival celebrating Zappa. At the initiative of the musicians' community ORWOhaus, the city of Berlin named a street in the Marzahn district "Frank-Zappa-Straße" in 2007. The same year, Baltimore mayor Sheila Dixon proclaimed August 9 as the city's official "Frank Zappa Day", citing Zappa's musical accomplishments as well as his defense of the First Amendment to the United States Constitution. During his lifetime, Zappa released 62 albums. Since 1994, the Zappa Family Trust has released 49 posthumous albums, making a total of 111 albums. The current distributor of Zappa's recorded output is Universal Music Enterprises. Frank Zappa has sold more than 40 million records worldwide. Final Fantasy Final Fantasy is a science fiction and fantasy media franchise created by Hironobu Sakaguchi, and developed and owned by Square Enix (formerly Square). The franchise centers on a series of fantasy and science fantasy role-playing video games (RPGs). The first game in the series, published in 1987, was conceived by Sakaguchi as his last-ditch effort in the game industry; it was a success and spawned sequels. The series has since branched into other genres such as tactical role-playing, action role-playing, massively multiplayer online role-playing, racing, third-person shooter, fighting, and rhythm. The franchise has also branched out into other media, including CGI films, anime, manga, and novels. Although most "Final Fantasy" installments are stand-alone stories with different settings and main characters, they feature identical elements that define the franchise. Recurring elements include plot themes, character names, and game mechanics. Plots center on a group of heroes battling a great evil while exploring the characters' internal struggles and relationships. Character names are frequently derived from the history, languages, pop culture, and mythologies of cultures worldwide. The series has been both commercially and critically successful, with more than 135 million units sold worldwide, making it one of the best-selling video game franchises of all time. The series is well known for its innovation, visuals, and music, such as the inclusion of full motion videos, photo-realistic character models, and music by Nobuo Uematsu. "Final Fantasy" has been a driving force in the video game industry, and the series has affected Square Enix's business practices and its relationships with other video game developers. It has popularized many features now common in role-playing games and has helped popularize the genre as a whole in markets outside Japan. 
The first installment of the series premiered in Japan on December 18, 1987. Subsequent games are numbered and given a story unrelated to previous games, so the numbers refer to volumes rather than to sequels. Many "Final Fantasy" games have been localized for markets in North America, Europe, and Australia on numerous video game consoles, personal computers (PC), and mobile phones. Future installments will appear on seventh and eighth generation consoles. As of November 2016, the series includes the main installments from "Final Fantasy" to "Final Fantasy XV", as well as direct sequels and spin-offs, both released and confirmed as being in development. Most of the older games have been remade or re-released on multiple platforms. Three "Final Fantasy" installments were released on the Nintendo Entertainment System (NES). "Final Fantasy" was released in Japan in 1987 and in North America in 1990. It introduced many concepts to the console RPG genre, and has since been remade on several platforms. "Final Fantasy II", released in 1988 in Japan, has been bundled with "Final Fantasy" in several re-releases. The last of the NES installments, "Final Fantasy III", was released in Japan in 1990; however, it was not released elsewhere until a Nintendo DS remake in 2006. The Super Nintendo Entertainment System (SNES) also featured three installments of the main series, all of which have been re-released on several platforms. "Final Fantasy IV" was released in 1991; in North America, it was released as "Final Fantasy II". It introduced the "Active Time Battle" system. "Final Fantasy V", released in 1992 in Japan, was the first game in the series to spawn a sequel: a short anime series, "Final Fantasy: Legend of the Crystals". "Final Fantasy VI" was released in Japan in 1994, titled "Final Fantasy III" in North America. The PlayStation console saw the release of three main "Final Fantasy" games. "Final Fantasy VII" (1997) moved away from the two-dimensional (2D) graphics used in the first six games to three-dimensional (3D) computer graphics; the game features polygonal characters on pre-rendered backgrounds. It also introduced a more modern setting, a style that was carried over to the next game. It was also the second in the series to be released in Europe, with the first being "Final Fantasy Mystic Quest". "Final Fantasy VIII" was published in 1999, and was the first to consistently use realistically proportioned characters and feature a vocal piece as its theme music. "Final Fantasy IX", released in 2000, returned to the series' roots by revisiting a more traditional "Final Fantasy" setting rather than the more modern worlds of "VII" and "VIII". Three main installments, as well as one online game, were published for the PlayStation 2 (PS2). "Final Fantasy X" (2001) introduced full 3D areas and voice acting to the series, and was the first to spawn a direct video game sequel ("Final Fantasy X-2", published in 2003). The first massively multiplayer online role-playing game (MMORPG) in the series, "Final Fantasy XI", was released on the PS2 and PC in 2002, and later on the Xbox 360. It introduced real-time battles instead of random encounters. "Final Fantasy XII", published in 2006, also includes real-time battles in large, interconnected playfields. The game is also the first in the main series to utilize a world used in a previous game, namely the land of Ivalice, which had previously featured in "Final Fantasy Tactics" and "Vagrant Story". 
In 2009, "Final Fantasy XIII" was released in Japan, and in North America and Europe the following year, for PlayStation 3 and Xbox 360. It is the flagship installment of the "Fabula Nova Crystallis Final Fantasy" series and became the first mainline game to spawn two direct sequels ("XIII-2" and ""). It was also the first game released in Chinese & High Definition along with being released on two consoles at once. "Final Fantasy XIV", a MMORPG, was released worldwide on Microsoft Windows in 2010, but it received heavy criticism when it was launched, prompting Square Enix to rerelease the game as "", this time to the PlayStation 3 as well, in 2013. "Final Fantasy XV" is an action role-playing game that was released for PlayStation 4 and Xbox One in 2016. Originally a "XIII" spin-off titled "Versus XIII", "XV" uses the mythos of the "Fabula Nova Crystallis" series, although in many other respects the game stands on its own and has since been distanced from the series by its developers. "Final Fantasy" has spawned numerous spin-offs and metaseries. Several are, in fact, not "Final Fantasy" games, but were rebranded for North American release. Examples include the "SaGa" series, rebranded "The Final Fantasy Legend", and its two sequels, "Final Fantasy Legend II" and "Final Fantasy Legend III". "Final Fantasy Mystic Quest" was specifically developed for a United States audience, and "Final Fantasy Tactics" is a tactical RPG that features many references and themes found in the series. The spin-off "Chocobo" series, "Crystal Chronicles" series, and "Kingdom Hearts" series also include multiple "Final Fantasy" elements. In 2003, the "Final Fantasy" series' first direct sequel, "Final Fantasy X-2", was released. "Final Fantasy XIII" was originally intended to stand on its own, but the team wanted to explore the world, characters and mythos more, resulting in the development and release of two sequels in 2011 and respectively, creating the series' first official trilogy. "Dissidia Final Fantasy" was released in 2009, a fighting game that features heroes and villains from the first ten games of the main series. It was followed by a prequel in 2011. Other spin-offs have taken the form of subseries—"Compilation of Final Fantasy VII", "Ivalice Alliance", and "Fabula Nova Crystallis Final Fantasy". Square Enix has expanded the "Final Fantasy" series into various media. Multiple anime and computer-generated imagery (CGI) films have been produced that are based either on individual "Final Fantasy" games or on the series as a whole. The first was an original video animation (OVA), "", a sequel to "Final Fantasy V". The story was set in the same world as the game, although 200 years in the future. It was released as four 30-minute episodes, first in Japan in 1994 and later in the United States by Urban Vision in 1998. In 2001, Square Pictures released its first feature film, "". The film is set on a future Earth invaded by alien life forms. "The Spirits Within" was the first animated feature to seriously attempt to portray photorealistic CGI humans, but was considered a box office bomb and garnered mixed reviews. In 2005, "", a theatrical CGI film, and "", a non-canon OVA, were released as part of the "Compilation of Final Fantasy VII". "Advent Children" was animated by Visual Works, which helped the company create CG sequences for the games. The film, unlike "The Spirits Within", gained mixed to positive reviews from critics and has become a commercial success. 
"Last Order," on the other hand, was released in Japan in a special DVD bundle package with "Advent Children". "Last Order" sold out quickly and was positively received by Western critics, though fan reaction was mixed over changes to established story scenes. A 25-episode anime television series, "," was released in 2001 based on the common elements of the "Final Fantasy" series. It was broadcast in Japan by TV Tokyo and released in North America by ADV Films. Two animated tie-ins for "Final Fantasy XV" were announced at the Uncovered Final Fantasy XV fan and press event, forming part of a larger multimedia project dubbed the "Final Fantasy XV" Universe. "" is a series of five 10-to-20-minute-long episodes developed by A-1 Pictures and Square Enix detailing the backstories of the main cast. "", a CGI movie set for release prior to the game in Summer 2016, is set during the game's opening and follows new and secondary characters. Several video games have either been adapted into or have had spin-offs in the form of manga and novels. The first was the novelization of "Final Fantasy II" in 1989, and was followed by a manga adaptation of "Final Fantasy III" in 1992. The past decade has seen an increase in the number of non-video game adaptations and spin-offs. "Final Fantasy: The Spirits Within" has been adapted into a novel, the spin-off game "Final Fantasy Crystal Chronicles" has been adapted into a manga, and "Final Fantasy XI" has had a novel and manga set in its continuity. Seven novellas based on the "Final Fantasy VII" universe have also been released. The "Final Fantasy: Unlimited" story was partially continued in novels and a manga after the anime series ended. The "Final Fantasy X" and "Final Fantasy XIII" series have also had novellas and audio dramas released. Two games, "Final Fantasy Tactics Advance" and "Final Fantasy: Unlimited", have been adapted into radio dramas. A trading card game named the "Final Fantasy trading card game" is produced by Square Enix and Hobby Japan, first released Japan in 2012 with an English version in 2016. The game has been compared to "", and a tournament circuit for the game also takes place. Although most "Final Fantasy" installments are independent, many gameplay elements recur throughout the series. Most games contain elements of fantasy and science fiction and feature recycled names often inspired from various cultures' history, languages and mythology, including Asian, European, and Middle-Eastern. Examples include weapon names like Excalibur and Masamune—derived from Arthurian legend and the Japanese swordsmith Masamune respectively—as well as the spell names Holy, Meteor, and Ultima. Beginning with "Final Fantasy IV", the main series adopted its current logo style that features the same typeface and an emblem designed by Japanese artist Yoshitaka Amano. The emblem relates to a game's plot and typically portrays a character or object in the story. Subsequent remakes of the first three games have replaced the previous logos with ones similar to the rest of the series. The central conflict in many "Final Fantasy" games focuses on a group of characters battling an evil, and sometimes ancient, antagonist that dominates the game's world. Stories frequently involve a sovereign state in rebellion, with the protagonists taking part in the rebellion. The heroes are often destined to defeat the evil, and occasionally gather as a direct result of the antagonist's malicious actions. 
Another staple of the series is the existence of two villains; the main villain is not always who it appears to be, as the primary antagonist may actually be subservient to another character or entity. The main antagonist introduced at the beginning of the game is not always the final enemy, and the characters must continue their quest beyond what appears to be the final fight. Stories in the series frequently emphasize the internal struggles, passions, and tragedies of the characters, and the main plot often recedes into the background as the focus shifts to their personal lives. Games also explore relationships between characters, ranging from love to rivalry. Other recurring situations that drive the plot include amnesia, a hero corrupted by an evil force, mistaken identity, and self-sacrifice. Magical orbs and crystals are recurring in-game items that are frequently connected to the themes of the games' plots. Crystals often play a central role in the creation of the world, and a majority of the "Final Fantasy" games link crystals and orbs to the planet's life force. As such, control over these crystals drives the main conflict. The classical elements are also a recurring theme in the series related to the heroes, villains, and items. Other common plot and setting themes include the Gaia hypothesis, an apocalypse, and conflicts between advanced technology and nature. The series features a number of recurring character archetypes. Most famously, every game since "Final Fantasy II", including subsequent remakes of the original "Final Fantasy", features a character named Cid. Cid's appearance, personality, goals, and role in the game (non-playable ally, party member, villain) vary dramatically. However, two characteristics many versions of Cid have in common are 1) being a scientist or engineer, and 2) being tied in some way to an airship the party eventually acquires. Every Cid has at least one of these two traits. Biggs and Wedge, inspired by two "Star Wars" characters of the same name, appear in numerous games as minor characters, sometimes as comic relief. The later games in the series feature several males with effeminate characteristics. Recurring creatures include Chocobos and Moogles. Chocobos are large, often flightless birds that appear in several installments as a means of long-distance travel for characters. Moogles, on the other hand, are white, stout creatures resembling teddy bears with wings and a single antenna. They serve in different capacities in games, including delivering mail, working as weaponsmiths, joining the party, and saving the game. Chocobo and Moogle appearances are often accompanied by specific musical themes that have been arranged differently for separate games. In "Final Fantasy" games, players command a party of characters as they progress through the game's story by exploring the game world and defeating opponents. Enemies are typically encountered randomly while exploring, a trend which changed in "Final Fantasy XI" and "Final Fantasy XII". The player issues combat orders—like "Fight", "Magic", and "Item"—to individual characters via a menu-driven interface while engaging in battles. Throughout the series, the games have used different battle systems. Prior to "Final Fantasy XI", battles were turn-based with the protagonists and antagonists on different sides of the battlefield. "Final Fantasy IV" introduced the "Active Time Battle" (ATB) system that augmented the turn-based nature with a perpetual time-keeping system. 
Designed by Hiroyuki Ito, it injected urgency and excitement into combat by requiring the player to act before an enemy attacks, and was used until "Final Fantasy X", which implemented the "Conditional Turn-Based" (CTB) system. This new system returned to the previous turn-based system, but added nuances to offer players more challenge. "Final Fantasy XI" adopted a real-time battle system where characters continuously act depending on the issued command. "Final Fantasy XII" continued this gameplay with the "Active Dimension Battle" system. "Final Fantasy XIII"s combat system, designed by the same designer who worked on "X", was meant to have an action-oriented feel, emulating the cinematic battles in "Final Fantasy VII: Advent Children". The latest installment in the franchise, "Final Fantasy XV", introduces a new "Open Combat" system. Unlike previous battle systems in the franchise, the "Open Combat" system (OCS) lets players take on a fully active battle scenario, allowing free-range attacks and movement and giving combat a much more fluid feel. This system also incorporates a "Tactical" option during battle, which pauses active battle to allow use of items. Like most RPGs, the "Final Fantasy" installments use an experience level system for character advancement, in which experience points are accumulated by killing enemies. Character classes, specific jobs that enable unique abilities for characters, are another recurring theme. Introduced in the first game, character classes have been used differently in each game. Some restrict a character to a single job to integrate it into the story, while other games feature dynamic job systems that allow the player to choose from multiple classes and switch throughout the game. Though used heavily in many games, such systems have become less prevalent in favor of characters that are more versatile; characters still match an archetype, but are able to learn skills outside their class. Magic is another common RPG element in the series. The method by which characters gain magic varies between installments, but is generally divided into classes organized by color: "White magic", which focuses on spells that assist teammates; "Black magic", which focuses on harming enemies; "Red magic", which is a combination of white and black magic; "Blue magic", which mimics enemy attacks; and "Green magic", which focuses on applying status effects to either allies or enemies. Other types of magic frequently appear such as "Time magic", focusing on the themes of time, space, and gravity; and "Summoning magic", which evokes legendary creatures to aid in battle and is a feature that has persisted since "Final Fantasy III". Summoned creatures are often referred to by names like "Espers" or "Eidolons" and have been inspired by mythologies from Arabic, Hindu, Norse, and Greek cultures. Different means of transportation have appeared throughout the series. The most common is the airship for long range travel, accompanied by chocobos for travelling short distances, but others include sea and land vessels. Following "Final Fantasy VII", more modern and futuristic vehicle designs have been included. 
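The gauge-driven pacing behind the Active Time Battle concept described above can be illustrated with a brief, hypothetical sketch. The names used here (Combatant, charge_rate, tick) are invented purely for illustration and are not drawn from any actual "Final Fantasy" source code.

```python
# Hypothetical illustration of an Active Time Battle-style gauge (not actual game code).
# Each combatant's gauge fills continuously as time passes; a combatant may act only
# when the gauge is full, so slow decisions leave the player open to enemy attacks.
from dataclasses import dataclass

@dataclass
class Combatant:
    name: str
    speed: int          # higher speed fills the gauge faster
    gauge: float = 0.0  # 0.0 (empty) to 100.0 (ready to act)

def tick(combatants, dt, charge_rate=1.0):
    """Advance every gauge by the elapsed time dt and return whoever is ready to act."""
    ready = []
    for c in combatants:
        c.gauge = min(100.0, c.gauge + c.speed * charge_rate * dt)
        if c.gauge >= 100.0:
            ready.append(c)
            c.gauge = 0.0  # acting resets the gauge
    return ready

# Example: time keeps flowing between commands, which is what distinguishes an
# ATB-style loop from a strictly turn-based one.
party = [Combatant("Hero", speed=30), Combatant("Engineer", speed=40)]
for _ in range(10):
    for actor in tick(party, dt=0.25):
        print(f"{actor.name} is ready to act")
```

By contrast, a system like the CTB described above would simply stop the clock while commands are chosen, which matches the text's characterization of it as a return to turn-based play.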
In the mid-1980s, Square entered the Japanese video game industry with simple RPGs, racing games, and platformers for Nintendo's Famicom Disk System. In 1987, Square designer Hironobu Sakaguchi chose to create a new fantasy role-playing game for the cartridge-based NES, and drew inspiration from popular fantasy games: Enix's "Dragon Quest", Nintendo's "The Legend of Zelda", and Origin Systems's "Ultima" series. Though the name is often attributed to the company allegedly facing bankruptcy, Sakaguchi explained that the game was his personal last-ditch effort in the game industry and that its title, "Final Fantasy", stemmed from his feelings at the time; had the game not sold well, he would have quit the business and gone back to university. Despite his explanation, publications have also attributed the name to the company's hopes that the project would solve its financial troubles. In 2015, Sakaguchi explained the name's origin: the team wanted a title that would abbreviate to "FF", which would sound good in Japanese. The name was originally going to be "Fighting Fantasy", but due to concerns over trademark conflicts with the role-playing gamebook series of the same name, they needed to settle for something else. As "Final" was a well-known word in Japan, Sakaguchi settled on it. According to Sakaguchi, any title that created the "FF" abbreviation would have done. The game indeed reversed Square's lagging fortunes, and it became the company's flagship franchise. Following the success, Square immediately developed a second installment. Because Sakaguchi assumed "Final Fantasy" would be a stand-alone game, its story was not designed to be expanded by a sequel. The developers instead chose to carry over only thematic similarities from its predecessor, while some of the gameplay elements, such as the character advancement system, were overhauled. This approach has continued throughout the series; each major "Final Fantasy" game features a new setting, a new cast of characters, and an upgraded battle system. Video game writer John Harris attributed the concept of reworking the game system of each installment to Nihon Falcom's "Dragon Slayer" series, with which Square was previously involved as a publisher. The company regularly released new games in the main series. However, the gaps between the releases of "Final Fantasy XI" (2002), "Final Fantasy XII" (2006), and "Final Fantasy XIII" (2009) were much longer than those between previous games. Following "Final Fantasy XIV", Square Enix stated that it intended to release "Final Fantasy" games either annually or biennially. This switch was to mimic the development cycles of Western games in the "Call of Duty", "Assassin's Creed" and "Battlefield" series, as well as to maintain fan interest. For the original "Final Fantasy", Sakaguchi required a larger production team than Square's previous games had used. He began crafting the game's story while experimenting with gameplay ideas. Once the gameplay system and game world size were established, Sakaguchi integrated his story ideas into the available resources. A different approach has been taken for subsequent games; the story is completed first and the game built around it. Designers have never been restricted by consistency, though most feel each game should have a minimum number of common elements. The development teams strive to create completely new worlds for each game, and avoid making new games too similar to previous ones. Game locations are conceptualized early in development and design details like building parts are fleshed out as a base for entire structures. The first five games were directed by Sakaguchi, who also provided the original concepts. 
He drew inspiration for game elements from anime films by Hayao Miyazaki; series staples like the airships and chocobos are inspired by elements in "Castle in the Sky" and "Nausicaä of the Valley of the Wind", respectively. Sakaguchi served as a producer for subsequent games until he left Square in 2001. Yoshinori Kitase took over directing the games until "Final Fantasy VIII", and has been followed by a new director for each new game. Hiroyuki Ito designed several gameplay systems, including "Final Fantasy V"s "Job System", "Final Fantasy VIII"s "Junction System" and the Active Time Battle concept, which was used from "Final Fantasy IV" until "Final Fantasy IX". In designing the Active Time Battle system, Ito drew inspiration from Formula One racing; he thought it would be interesting if character types had different speeds after watching race cars pass each other. Ito also co-directed "Final Fantasy VI" with Kitase. Kenji Terada was the scenario writer for the first three games; Kitase took over as scenario writer for "Final Fantasy V" through "Final Fantasy VII". Kazushige Nojima became the series' primary scenario writer from "Final Fantasy VII" until his resignation in October 2003; he has since formed his own company, Stellavista. Nojima partially or completely wrote the stories for "Final Fantasy VII", "Final Fantasy VIII", "Final Fantasy X", and "Final Fantasy X-2". He also worked as the scenario writer for the spin-off series, "Kingdom Hearts". Daisuke Watanabe co-wrote the scenarios for "Final Fantasy X" and "XII", and was the main writer for the "XIII" games. Artistic design, including character and monster creations, was handled by Japanese artist Yoshitaka Amano from "Final Fantasy" through "Final Fantasy VI". Amano also handled title logo designs for all of the main series and the image illustrations from "Final Fantasy VII" onward. Tetsuya Nomura was chosen to replace Amano because Nomura's designs were more adaptable to 3D graphics. He worked with the series from "Final Fantasy VII" through "Final Fantasy X"; for "Final Fantasy IX", however, character designs were handled by Shukō Murase, Toshiyuki Itahana, and Shin Nagasawa. Nomura is also the character designer of the "Kingdom Hearts" series, "Compilation of Final Fantasy VII", and "Fabula Nova Crystallis: Final Fantasy". Other designers include Nobuyoshi Mihara and Akihiko Yoshida. Mihara was the character designer for "Final Fantasy XI", and Yoshida served as character designer for "Final Fantasy Tactics", the Square-produced "Vagrant Story", and "Final Fantasy XII". Because of graphical limitations, the first games on the NES feature small sprite representations of the leading party members on the main world screen. Battle screens use more detailed, full versions of characters in a side-view perspective. This practice was used until "Final Fantasy VI", which uses detailed versions for both screens. The NES sprites are 26 pixels high and use a color palette of 4 colors. 6 frames of animation are used to depict different character statuses like "healthy" and "fatigued". The SNES installments use updated graphics and effects, as well as higher quality audio than in previous games, but are otherwise similar to their predecessors in basic design. The SNES sprites are 2 pixels shorter, but have larger palettes and feature more animation frames: 11 colors and 40 frames respectively. The upgrade allowed designers to have characters be more detailed in appearance and express more emotions. 
The first game includes non-player characters (NPCs) the player can interact with, but they are mostly static in-game objects. Beginning with the second game, Square used predetermined pathways for NPCs to create more dynamic scenes that include comedy and drama. In 1995, Square showed an interactive SGI technical demonstration of "Final Fantasy VI" for the then next generation of consoles. The demonstration used Silicon Graphics's prototype Nintendo 64 workstations to create 3D graphics. Fans believed the demo was of a new "Final Fantasy" game for the Nintendo 64 console; however, 1997 saw the release of "Final Fantasy VII" for the Sony PlayStation. The switch was due to a dispute with Nintendo over its use of faster but more expensive cartridges, as opposed to the slower and cheaper, but much higher capacity Compact Discs used on rival systems. "Final Fantasy VII" introduced 3D graphics with fully pre-rendered backgrounds. It was because of this switch to 3D that a CD-ROM format was chosen over a cartridge format. The switch also led to increased production costs and a greater subdivision of the creative staff for "Final Fantasy VII" and subsequent 3D games in the series. Starting with "Final Fantasy VIII", the series adopted a more photo-realistic look. As in "Final Fantasy VII", full motion video (FMV) sequences would have video playing in the background, with the polygonal characters composited on top. "Final Fantasy IX" returned to the more stylized design of earlier games in the series, although it still maintained, and in many cases slightly upgraded, most of the graphical techniques used in the previous two games. "Final Fantasy X" was released on the PlayStation 2, and used the more powerful hardware to render graphics in real-time instead of using pre-rendered material to obtain a more dynamic look; the game features full 3D environments, rather than having 3D character models move about pre-rendered backgrounds. It is also the first "Final Fantasy" game to introduce voice acting, occurring throughout the majority of the game, even with many minor characters. This aspect added a whole new dimension of depth to the characters' reactions, emotions, and development. Taking a temporary divergence, "Final Fantasy XI" used the PlayStation 2's online capabilities as an MMORPG. Initially released for the PlayStation 2 with a PC port arriving six months later, "Final Fantasy XI" was also released on the Xbox 360 nearly four years after its original release in Japan. This was the first "Final Fantasy" game to use a free rotating camera. "Final Fantasy XII" was released in 2006 for the PlayStation 2 and uses only half as many polygons as "Final Fantasy X", in exchange for more advanced textures and lighting. It also retains the freely rotating camera from "Final Fantasy XI". "Final Fantasy XIII" and "Final Fantasy XIV" both make use of Crystal Tools, a middleware engine developed by Square Enix. The "Final Fantasy" games feature a variety of music, and frequently reuse themes. Most of the games open with a piece called "Prelude", which has evolved from a simple, 2-voice arpeggio in the early games to a complex, melodic arrangement in recent installments. Victories in combat are often accompanied by a victory fanfare, a theme that has become one of the most recognized pieces of music in the series. The basic theme that accompanies Chocobo appearances has been rearranged in a different musical style for each installment. 
A piece called "Prologue" (and sometimes "Final Fantasy"), originally featured in the first game, is often played during the ending credits. Although leitmotifs are common in the more character-driven installments, theme music is typically reserved for main characters and recurring plot elements. Nobuo Uematsu was the chief music composer of the "Final Fantasy" series until his resignation from Square Enix in November 2004. Other composers include Masashi Hamauzu, Hitoshi Sakimoto and Junya Nakano. Uematsu was allowed to create much of the music with little direction from the production staff. Sakaguchi, however, would request pieces to fit specific game scenes including battles and exploring different areas of the game world. Once a game's major scenarios were completed, Uematsu would begin writing the music based on the story, characters, and accompanying artwork. He started with a game's main theme, and developed other pieces to match its style. In creating character themes, Uematsu read the game's scenario to determine the characters' personality. He would also ask the scenario writer for more details to scenes he was unsure about. Technical limitations were prevalent in earlier games; Sakaguchi would sometimes instruct Uematsu to only use specific notes. It was not until "Final Fantasy IV" on the SNES that Uematsu was able to add more subtlety to the music. Overall, the "Final Fantasy" series has been critically acclaimed and commercially successful, though each installment has seen different levels of success. The series has seen a steady increase in total sales; it sold over 10 million units worldwide by early 1996, 45 million by August 2003, 63 million by December 2005, and 85 million by July 2008. In June 2011, Square Enix announced that the series had sold over units, and by March 2014, it had sold over 110 million units. Its high sales numbers have ranked it as one of the best-selling video game franchises in the industry; in January 2007, the series was listed as number three, and later in July as number four. As of 2017, the series has sold over 135 million units worldwide. Several games within the series have become best-selling games. At the end of 2007, the seventh, eighth, and ninth best-selling RPGs were "Final Fantasy VII", "Final Fantasy VIII", and "Final Fantasy X" respectively. "Final Fantasy VII" has sold more than 11 million copies worldwide, earning it the position of the best-selling "Final Fantasy" game. Within two days of "Final Fantasy VIII"s North American release on September 9, 1999, it became the top-selling video game in the United States, a position it held for more than three weeks. "Final Fantasy X" sold over 1.4 million Japanese units in pre-orders alone, which set a record for the fastest-selling console RPG. The MMORPG, "Final Fantasy XI", reached over 200,000 active daily players in March 2006 and had reached over half a million subscribers by July 2007. "Final Fantasy XII" sold more than 1.7 million copies in its first week in Japan. By November 6, 2006—one week after its release—"Final Fantasy XII" had shipped approximately 1.5 million copies in North America. "Final Fantasy XIII" became the fastest-selling game in the franchise, and sold one million units on its first day of sale in Japan. "Final Fantasy XIV: A Realm Reborn", in comparison to its predecessor, was a runaway success, originally suffering from servers being overcrowded, and eventually gaining over one million unique subscribers within two months of its launch. 
The series has received critical acclaim for the quality of its visuals and soundtracks. In 1996, "Next Generation" ranked the series collectively as the 17th best game of all time, speaking very highly of its graphics, music and stories. It was awarded a star on the Walk of Game in 2006, making it the first franchise to win a star at the event (other winners were individual games, not franchises). WalkOfGame.com commented that the series has sought perfection and been a risk-taker in innovation. In 2006, GameFAQs held a contest for the best video game series ever, with "Final Fantasy" finishing as the runner-up to "The Legend of Zelda". In a 2008 public poll held by The Game Group plc, "Final Fantasy" was voted the best game series, with five games appearing in their "Greatest Games of All Time" list. Many "Final Fantasy" games have been included in various lists of top games. Several games have been listed on multiple IGN "Top Games" lists. Eleven games were listed on "Famitsu"'s 2006 "Top 100 Favorite Games of All Time", four of which were in the top ten, with "Final Fantasy X" and "Final Fantasy VII" coming first and second, respectively. The series holds seven Guinness World Records in the "Guinness World Records Gamer's Edition 2008", which include the "Most Games in an RPG Series" (13 main games, seven enhanced games, and 32 spin-off games), the "Longest Development Period" (the production of "Final Fantasy XII" took five years), and the "Fastest-Selling Console RPG in a Single Day" ("Final Fantasy X"). The 2009 edition listed two games from the series among the top 50 console games: "Final Fantasy XII" at number 8 and "Final Fantasy VII" at number 20. However, the series has garnered some criticism. IGN has commented that the menu system used by the games is a major detractor for many and is a "significant reason why they haven't touched the series." The site has also heavily criticized the use of random encounters in the series' battle systems. IGN further stated that the various attempts to bring the series into film and animation have either been unsuccessful or unremarkable, or have not lived up to the standards of the games. In 2007, "Edge" criticized the series for the number of related games that include the phrase "Final Fantasy" in their titles and are considered inferior to previous games. It also commented that with the departure of Hironobu Sakaguchi, the series might be in danger of growing stale. Several individual "Final Fantasy" games have garnered extra attention; some for their positive reception and others for their negative reception. "Final Fantasy VII" topped "GamePro's" "26 Best RPGs of All Time" list, as well as GameFAQs "Best Game Ever" audience polls in 2004 and 2005. Despite the success of "Final Fantasy VII", it is sometimes criticized as being overrated. In 2003, GameSpy listed it as the seventh most overrated game of all time, while IGN presented views from both sides. "Dirge of Cerberus: Final Fantasy VII" shipped 392,000 units in its first week of release, but received review scores that were much lower than those of other "Final Fantasy" games. A delayed, negative review after the Japanese release of "Dirge of Cerberus" from Japanese gaming magazine "Famitsu" hinted at a controversy between the magazine and Square Enix. Though "Final Fantasy: The Spirits Within" was praised for its visuals, the plot was criticized and the film was considered a box office bomb. 
"Final Fantasy Crystal Chronicles" for the GameCube received overall positive review scores, but reviews stated that the use of Game Boy Advances as controllers was a big detractor. The predominantly negative reception of the original version of "Final Fantasy XIV" caused then-president Yoichi Wada to issue an official apology during a Tokyo press conference, stating that the brand had been "greatly damaged" by the game's reception. Several video game publications have created rankings of the mainline "Final Fantasy" games. In the table below, the lower the number given, the better the game is in the view of the respective publication. By way of comparison, the rating provided by the review aggregator "Metacritic" is also given; in this row higher numbers indicate better reviews. The "Final Fantasy" series and several specific games within it have been credited for introducing and popularizing many concepts that are today widely used in console RPGs. The original game is often cited as one of the most influential early console RPGs, and played a major role in legitimizing and popularizing the genre. Many console RPGs featured one-on-one battles against monsters from a first-person perspective. "Final Fantasy" introduced a side view perspective with groups of monsters against a group of characters that has been frequently used. It also introduced an early evolving class change system, as well as different methods of transportation, including a ship, canoe, and flying airship. "Final Fantasy II" was the first sequel in the industry to omit characters and locations from the previous game. It also introduced an activity-based progression system, which has been used in later RPG series such as "SaGa", "Grandia", and "The Elder Scrolls". "Final Fantasy III" introduced the job system, a character progression engine allowing the player to change character classes, as well as acquire new and advanced classes and combine class abilities, at any time during the game. "Final Fantasy IV" is considered a milestone for the genre, introducing a dramatic storyline with a strong emphasis on character development and personal relationships. "Final Fantasy VII" is credited as having the largest industry impact of the series, and with allowing console role-playing games to gain mass-market appeal. The series affected Square's business on several levels. The commercial failure of "Final Fantasy: The Spirits Within" resulted in hesitation and delays from Enix during merger discussions with Square. Square's decision to produce games exclusively for the Sony PlayStation—a move followed by Enix's decision with the "Dragon Quest" series—severed their relationship with Nintendo. "Final Fantasy" games were absent from Nintendo consoles, specifically the Nintendo 64, for seven years. Critics attribute the switch of strong third-party games like the "Final Fantasy" and "Dragon Quest" games to Sony's PlayStation, and away from the Nintendo 64, as one of the reasons behind PlayStation being the more successful of the two consoles. The release of the Nintendo GameCube, which used optical disc media, in 2001 caught the attention of Square. To produce games for the system, Square created the shell company The Game Designers Studio and released "Final Fantasy Crystal Chronicles", which spawned its own metaseries within the main franchise. 
"Final Fantasy XI"s lack of an online method of subscription cancellation prompted the creation of legislation in Illinois that requires internet gaming services to provide such a method to the state's residents. The series' popularity has resulted in its appearance and reference in numerous facets of popular culture like anime, TV series, and webcomics. Music from the series has permeated into different areas of culture. "Final Fantasy IV"s "Theme of Love" was integrated into the curriculum of Japanese school children and has been performed live by orchestras and metal bands. In 2003, Uematsu became involved with The Black Mages, a rock group independent of Square that has released albums of arranged "Final Fantasy" tunes. Bronze medalists Alison Bartosik and Anna Kozlova performed their synchronized swimming routine at the 2004 Summer Olympics to music from "Final Fantasy VIII". Many of the soundtracks have also been released for sale. Numerous companion books, which normally provide in-depth game information, have been published. In Japan, they are published by Square and are called "Ultimania" books. Film noir Film noir (; ) is a cinematic term used primarily to describe stylish Hollywood crime dramas, particularly those that emphasize cynical attitudes and sexual motivations. Hollywood's classical film noir period is generally regarded as extending from the early 1940s to the late 1950s. Film noir of this era is associated with a low-key, black-and-white visual style that has roots in German Expressionist cinematography. Many of the prototypical stories and much of the attitude of classic noir derive from the hardboiled school of crime fiction that emerged in the United States during the Great Depression. The term "film noir", French for "black film" (literal) or "dark film" (closer meaning), was first applied to Hollywood films by French critic Nino Frank in 1946, but was unrecognized by most American film industry professionals of that era. Cinema historians and critics defined the category retrospectively. Before the notion was widely adopted in the 1970s, many of the classic film noirs were referred to as "melodramas". Whether film noir qualifies as a distinct genre is a matter of ongoing debate among scholars. Film noir encompasses a range of plots: the central figure may be a private investigator ("The Big Sleep"), a plainclothes policeman ("The Big Heat"), an aging boxer ("The Set-Up"), a hapless grifter ("Night and the City"), a law-abiding citizen lured into a life of crime ("Gun Crazy"), or simply a victim of circumstance ("D.O.A."). Although film noir was originally associated with American productions, the term has been used to describe films from around the world. Many films released from the 1960s onward share attributes with film noirs of the classical period, and often treat its conventions self-referentially. Some refer to such latter-day works as neo-noir. The clichés of film noir have inspired parody since the mid-1940s. The questions of what defines film noir, and what sort of category it is, provoke continuing debate. "We'd be oversimplifying things in calling film noir oneiric, strange, erotic, ambivalent, and cruel ..."—this set of attributes constitutes the first of many attempts to define film noir made by French critics and Étienne Chaumeton in their 1955 book "Panorama du film noir américain 1941–1953" ("A Panorama of American Film Noir"), the original and seminal extended treatment of the subject. 
They emphasize that not every film noir embodies all five attributes in equal measure—one might be more dreamlike; another, particularly brutal. The authors' caveats and repeated efforts at alternative definition have been echoed in subsequent scholarship: in the more than five decades since, there have been innumerable further attempts at definition, yet in the words of cinema historian Mark Bould, film noir remains an "elusive phenomenon ... always just out of reach". Though film noir is often identified with a visual style, unconventional within a Hollywood context, that emphasizes low-key lighting and unbalanced compositions, films commonly identified as noir evidence a variety of visual approaches, including ones that fit comfortably within the Hollywood mainstream. Film noir similarly embraces a variety of genres, from the gangster film to the police procedural to the gothic romance to the social problem picture—any example of which from the 1940s and 1950s, now seen as noir's classical era, was likely to be described as a melodrama at the time. While many critics refer to film noir as a genre itself, others argue that it can be no such thing. Film noir is often associated with an urban setting, but many classic noirs take place in small towns, suburbia, rural areas, or on the open road; setting, therefore, cannot be its genre determinant, as with the Western. Similarly, while the private eye and the femme fatale are character types conventionally identified with noir, the majority of film noirs feature neither; so there is no character basis for genre designation as with the gangster film. Nor does film noir rely on anything as evident as the monstrous or supernatural elements of the horror film, the speculative leaps of the science fiction film, or the song-and-dance routines of the musical. An analogous case is that of the screwball comedy, widely accepted by film historians as constituting a "genre": the screwball is defined not by a fundamental attribute, but by a general disposition and a group of elements, some—but rarely and perhaps never all—of which are found in each of the genre's films. Because of the diversity of noir (much greater than that of the screwball comedy), certain scholars in the field, such as film historian Thomas Schatz, treat it as not a genre but a "style". Alain Silver, the most widely published American critic specializing in film noir studies, refers to film noir as a "cycle" and a "phenomenon", even as he argues that it has—like certain genres—a consistent set of visual and thematic codes. Other critics treat film noir as a "mood", characterize it as a "series", or simply address a chosen set of films they regard as belonging to the noir "canon". There is no consensus on the matter. The aesthetics of film noir are influenced by German Expressionism, an artistic movement of the 1910s and 1920s that involved theater, photography, painting, sculpture and architecture, as well as cinema. The opportunities offered by the booming Hollywood film industry and then the threat of Nazism, led to the emigration of many film artists working in Germany who had been involved in the Expressionist movement or studied with its practitioners. "M" (1931), shot only a few years before director Fritz Lang's departure from Germany, is among the first crime films of the sound era to join a characteristically noirish visual style with a noir-type plot, in which the protagonist is a criminal (as are his most successful pursuers). 
Directors such as Lang, Robert Siodmak and Michael Curtiz brought a dramatically shadowed lighting style and a psychologically expressive approach to visual composition ("mise-en-scène") with them to Hollywood, where they made some of the most famous classic noirs. By 1931, Curtiz had already been in Hollywood for half a decade, making as many as six films a year. Movies of his such as "20,000 Years in Sing Sing" (1932) and "Private Detective 62" (1933) are among the early Hollywood sound films arguably classifiable as noir—scholar Marc Vernet offers the latter as evidence that dating the initiation of film noir to 1940 or any other year is "arbitrary". Expressionism-oriented filmmakers had free stylistic rein in Universal horror pictures such as "Dracula" (1931), "The Mummy" (1932)—the former photographed and the latter directed by the Berlin-trained Karl Freund—and "The Black Cat" (1934), directed by Austrian émigré Edgar G. Ulmer. The Universal horror film that comes closest to noir, in story and sensibility, is "The Invisible Man" (1933), directed by Englishman James Whale and photographed by American Arthur Edeson. Edeson later photographed "The Maltese Falcon" (1941), widely regarded as the first major film noir of the classic era. Josef von Sternberg was directing in Hollywood during the same period. Films of his such as "Shanghai Express" (1932) and "The Devil Is a Woman" (1935), with their hothouse eroticism and baroque visual style, anticipated central elements of classic noir. The commercial and critical success of Sternberg's silent "Underworld" (1927) was largely responsible for spurring a trend of Hollywood gangster films. Successful films in that genre such as "Little Caesar" (1931), "The Public Enemy" (1931) and "Scarface" (1932) demonstrated that there was an audience for crime dramas with morally reprehensible protagonists. An important, possibly influential, cinematic antecedent to classic noir was 1930s French poetic realism, with its romantic, fatalistic attitude and celebration of doomed heroes. The movement's sensibility is mirrored in the Warner Bros. drama "I Am a Fugitive from a Chain Gang" (1932), a forerunner of noir. Among films not considered film noirs, perhaps none had a greater effect on the development of the genre than "Citizen Kane" (1941), directed by Orson Welles. Its visual intricacy and complex, voiceover narrative structure are echoed in dozens of classic film noirs. Italian neorealism of the 1940s, with its emphasis on quasi-documentary authenticity, was an acknowledged influence on trends that emerged in American noir. "The Lost Weekend" (1945), directed by Billy Wilder, another Vienna-born, Berlin-trained American auteur, tells the story of an alcoholic in a manner evocative of neorealism. It also exemplifies the problem of classification: one of the first American films to be described as a film noir, it has largely disappeared from considerations of the field. Director Jules Dassin of "The Naked City" (1948) pointed to the neorealists as inspiring his use of location photography with non-professional extras. This semidocumentary approach characterized a substantial number of noirs in the late 1940s and early 1950s. Along with neorealism, the style had an American precedent cited by Dassin, in director Henry Hathaway's "The House on 92nd Street" (1945), which demonstrated the parallel influence of the cinematic newsreel. 
The primary literary influence on film noir was the hardboiled school of American detective and crime fiction, led in its early years by such writers as Dashiell Hammett (whose first novel, "Red Harvest", was published in 1929) and James M. Cain (whose "The Postman Always Rings Twice" appeared five years later), and popularized in pulp magazines such as "Black Mask". The classic film noirs "The Maltese Falcon" (1941) and "The Glass Key" (1942) were based on novels by Hammett; Cain's novels provided the basis for "Double Indemnity" (1944), "Mildred Pierce" (1945), "The Postman Always Rings Twice" (1946), and "Slightly Scarlet" (1956; adapted from "Love's Lovely Counterfeit"). A decade before the classic era, a story by Hammett was the source for the gangster melodrama "City Streets" (1931), directed by Rouben Mamoulian and photographed by Lee Garmes, who worked regularly with Sternberg. Released the month before Lang's "M", "City Streets" has a claim to being the first major film noir; both its style and story had many noir characteristics. Raymond Chandler, who debuted as a novelist with "The Big Sleep" in 1939, soon became the most famous author of the hardboiled school. Not only were Chandler's novels turned into major noirs—"Murder, My Sweet" (1944; adapted from "Farewell, My Lovely"), "The Big Sleep" (1946), and "Lady in the Lake" (1947)—he was an important screenwriter in the genre as well, producing the scripts for "Double Indemnity", "The Blue Dahlia" (1946), and "Strangers on a Train" (1951). Where Chandler, like Hammett, centered most of his novels and stories on the character of the private eye, Cain featured less heroic protagonists and focused more on psychological exposition than on crime solving; the Cain approach has come to be identified with a subset of the hardboiled genre dubbed "noir fiction". For much of the 1940s, one of the most prolific and successful authors of this often downbeat brand of suspense tale was Cornell Woolrich (sometimes under the pseudonym George Hopley or William Irish). No writer's published work provided the basis for more film noirs of the classic period than Woolrich's: thirteen in all, including "Black Angel" (1946), "Deadline at Dawn" (1946), and "Fear in the Night" (1947). Another crucial literary source for film noir was W. R. Burnett, whose first novel to be published was "Little Caesar", in 1929. It was turned into a hit for Warner Bros. in 1931; the following year, Burnett was hired to write dialogue for "Scarface", while "The Beast of the City" (1932) was adapted from one of his stories. At least one important reference work identifies the latter as a film noir despite its early date. Burnett's characteristic narrative approach fell somewhere between that of the quintessential hardboiled writers and their noir fiction compatriots—his protagonists were often heroic in their own way, which happened to be that of the gangster. During the classic era, his work, either as author or screenwriter, was the basis for seven films now widely regarded as film noirs, including three of the most famous: "High Sierra" (1941), "This Gun for Hire" (1942), and "The Asphalt Jungle" (1950). The 1940s and 1950s are generally regarded as the "classic period" of American "film noir". 
While "City Streets" and other pre-WWII crime melodramas such as "Fury" (1936) and "You Only Live Once" (1937), both directed by Fritz Lang, are categorized as full-fledged "noir" in Alain Silver and Elizabeth Ward's "film noir" encyclopedia, other critics tend to describe them as "proto-noir" or in similar terms. The film now most commonly cited as the first "true" "film noir" is "Stranger on the Third Floor" (1940), directed by Latvian-born, Soviet-trained Boris Ingster. Hungarian émigré Peter Lorre—who had starred in Lang's "M"—was top-billed, although he did not play the primary lead. He later played secondary roles in several other formative American noirs. Although modestly budgeted, at the high end of the B movie scale, "Stranger on the Third Floor" still lost its studio, RKO, US$56,000 (), almost a third of its total cost. "Variety" magazine found Ingster's work: "...too studied and when original, lacks the flare to hold attention. It's a film too arty for average audiences, and too humdrum for others." "Stranger on the Third Floor" was not recognized as the beginning of a trend, let alone a new genre, for many decades. Most film noirs of the classic period were similarly low- and modestly-budgeted features without major stars—B movies either literally or in spirit. In this production context, writers, directors, cinematographers, and other craftsmen were relatively free from typical big-picture constraints. There was more visual experimentation than in Hollywood filmmaking as a whole: the Expressionism now closely associated with noir and the semi-documentary style that later emerged represent two very different tendencies. Narrative structures sometimes involved convoluted flashbacks uncommon in non-noir commercial productions. In terms of content, enforcement of the Production Code ensured that no film character could literally get away with murder or be seen sharing a bed with anyone but a spouse; within those bounds, however, many films now identified as noir feature plot elements and dialogue that were very risqué for the time. Thematically, film noirs were most exceptional for the relative frequency with which they centered on women of questionable virtue—a focus that had become rare in Hollywood films after the mid-1930s and the end of the pre-Code era. The signal film in this vein was "Double Indemnity", directed by Billy Wilder; setting the mold was Barbara Stanwyck's unforgettable femme fatale, Phyllis Dietrichson—an apparent nod to Marlene Dietrich, who had built her extraordinary career playing such characters for Sternberg. An A-level feature all the way, the film's commercial success and seven Oscar nominations made it probably the most influential of the early noirs. A slew of now-renowned noir "bad girls" followed, such as those played by Rita Hayworth in "Gilda" (1946), Lana Turner in "The Postman Always Rings Twice" (1946), Ava Gardner in "The Killers" (1946), and Jane Greer in "Out of the Past" (1947). The iconic noir counterpart to the femme fatale, the private eye, came to the fore in films such as "The Maltese Falcon" (1941), with Humphrey Bogart as Sam Spade, and "Murder, My Sweet" (1944), with Dick Powell as Philip Marlowe. The prevalence of the private eye as a lead character declined in film noir of the 1950s, a period during which several critics describe the form as becoming more focused on extreme psychologies and more exaggerated in general. 
A prime example is "Kiss Me Deadly" (1955); based on a novel by Mickey Spillane, the best-selling of all the hardboiled authors, here the protagonist is a private eye, Mike Hammer. As described by Paul Schrader, "Robert Aldrich's teasing direction carries "noir" to its sleaziest and most perversely erotic. Hammer overturns the underworld in search of the 'great whatsit' [which] turns out to be—joke of jokes—an exploding atomic bomb." Orson Welles's baroquely styled "Touch of Evil" (1958) is frequently cited as the last noir of the classic period. Some scholars believe film noir never really ended, but continued to transform even as the characteristic noir visual style began to seem dated and changing production conditions led Hollywood in different directions—in this view, post-1950s films in the noir tradition are seen as part of a continuity with classic noir. A majority of critics, however, regard comparable films made outside the classic era to be something other than genuine film noirs. They regard true film noir as belonging to a temporally and geographically limited cycle or period, treating subsequent films that evoke the classics as fundamentally different due to general shifts in filmmaking style and latter-day awareness of noir as a historical source for allusion. While the inceptive noir, "Stranger on the Third Floor", was a B picture directed by a virtual unknown, many of the film noirs still remembered were A-list productions by well-known film makers. Debuting as a director with "The Maltese Falcon" (1941), John Huston followed with "Key Largo" (1948) and "The Asphalt Jungle" (1950). Opinion is divided on the noir status of several Alfred Hitchcock thrillers from the era; at least four qualify by consensus: "Shadow of a Doubt" (1943), "Notorious" (1946), "Strangers on a Train" (1951) and "The Wrong Man" (1956). Otto Preminger's success with "Laura" (1944) made his name and helped demonstrate noir's adaptability to a high-gloss 20th Century-Fox presentation. Among Hollywood's most celebrated directors of the era, arguably none worked more often in a noir mode than Preminger; his other noirs include "Fallen Angel" (1945), "Whirlpool" (1949), "Where the Sidewalk Ends" (1950) (all for Fox) and "Angel Face" (1952). A half-decade after "Double Indemnity" and "The Lost Weekend", Billy Wilder made "Sunset Boulevard" (1950) and "Ace in the Hole" (1951), noirs that were not so much crime dramas as satires on Hollywood and the news media. "In a Lonely Place" (1950) was Nicholas Ray's breakthrough; his other noirs include his debut, "They Live by Night" (1948) and "On Dangerous Ground" (1952), noted for their unusually sympathetic treatment of characters alienated from the social mainstream. Orson Welles had notorious problems with financing but his three film noirs were well budgeted: "The Lady from Shanghai" (1947) received top-level, "prestige" backing, while "The Stranger", his most conventional film and "Touch of Evil", an unmistakably personal work, were funded at levels lower but still commensurate with headlining releases. Like "The Stranger", Fritz Lang's "The Woman in the Window" (1945) was a production of the independent International Pictures. Lang's follow-up, "Scarlet Street" (1945), was one of the few classic noirs to be officially censored: filled with erotic innuendo, it was temporarily banned in Milwaukee, Atlanta and New York State. 
"Scarlet Street" was a semi-independent, cosponsored by Universal and Lang's Diana Productions, of which the film's co-star, Joan Bennett, was the second biggest shareholder. Lang, Bennett and her husband, the Universal veteran and Diana production head Walter Wanger, made "Secret Beyond the Door" (1948) in similar fashion. Before leaving the United States while subject to the Hollywood Blacklist, Jules Dassin made two classic noirs that also straddled the major–independent line: "Brute Force" (1947) and the influential documentary-style "The Naked City" were developed by producer Mark Hellinger, who had an "inside/outside" contract with Universal similar to Wanger's. Years earlier, working at Warner Bros., Hellinger had produced three films for Raoul Walsh, the proto-noirs "They Drive by Night" (1940), "Manpower" (1941) and "High Sierra" (1941), now regarded as a seminal work in noir's development. Walsh had no great name during his half-century as a director but his noirs "White Heat" (1949) and "The Enforcer" (1951) had A-list stars and are seen as important examples of the cycle. Other directors associated with top-of-the-bill Hollywood film noirs include Edward Dmytryk ("Murder, My Sweet" [1944], "Crossfire" [1947])—the first important noir director to fall prey to the industry blacklist—as well as Henry Hathaway ("The Dark Corner" [1946], "Kiss of Death" [1947]) and John Farrow ("The Big Clock" [1948], "Night Has a Thousand Eyes" [1948]). Most of the Hollywood films considered to be classic noirs fall into the category of the "B movie". Some were Bs in the most precise sense, produced to run on the bottom of double bills by a low-budget unit of one of the major studios or by one of the smaller Poverty Row outfits, from the relatively well-off Monogram to shakier ventures such as Producers Releasing Corporation (PRC). Jacques Tourneur had made over thirty Hollywood Bs (a few now highly regarded, most forgotten) before directing the A-level "Out of the Past", described by scholar Robert Ottoson as "the "ne plus ultra" of forties film noir". Movies with budgets a step up the ladder, known as "intermediates" by the industry, might be treated as A or B pictures depending on the circumstances. Monogram created Allied Artists in the late 1940s to focus on this sort of production. Robert Wise ("Born to Kill" [1947], "The Set-Up" [1949]) and Anthony Mann ("T-Men" [1947] and "Raw Deal" [1948]) each made a series of impressive intermediates, many of them noirs, before graduating to steady work on big-budget productions. Mann did some of his most celebrated work with cinematographer John Alton, a specialist in what James Naremore called "hypnotic moments of light-in-darkness". "He Walked by Night" (1948), shot by Alton and though credited solely to Alfred Werker, directed in large part by Mann, demonstrates their technical mastery and exemplifies the late 1940s trend of "police procedural" crime dramas. It was released, like other Mann-Alton noirs, by the small Eagle-Lion company; it was the inspiration for the "Dragnet" series, which debuted on radio in 1949 and television in 1951. Several directors associated with noir built well-respected oeuvres largely at the B-movie/intermediate level. Samuel Fuller's brutal, visually energetic films such as "Pickup on South Street" (1953) and "Underworld U.S.A." (1961) earned him a unique reputation; his advocates praise him as "primitive" and "barbarous". Joseph H. Lewis directed noirs as diverse as "Gun Crazy" (1950) and "The Big Combo" (1955). 
The former—whose screenplay was written by the blacklisted Dalton Trumbo, disguised by a front—features an influential bank hold-up sequence shown in a single unbroken take of more than three minutes. "The Big Combo" was shot by John Alton and took the shadowy noir style to its outer limits. The most distinctive films of Phil Karlson ("The Phenix City Story" [1955] and "The Brothers Rico" [1957]) tell stories of vice organized on a monstrous scale. The work of other directors in this tier of the industry, such as Felix E. Feist ("The Devil Thumbs a Ride" [1947], "Tomorrow Is Another Day" [1951]), has become obscure. Edgar G. Ulmer spent most of his Hollywood career working at B studios, occasionally on projects that achieved intermediate status, but for the most part on unmistakable Bs. In 1945, while at PRC, he directed a noir cult classic, "Detour". Ulmer's other noirs include "Strange Illusion" (1945), also for PRC; "Ruthless" (1948), for Eagle-Lion, which had acquired PRC the previous year; and "Murder Is My Beat" (1955), for Allied Artists. A number of low- and modestly-budgeted noirs were made by independent, often actor-owned, companies contracting with larger studios for distribution. Serving as producer, writer, director and top-billed performer, Hugo Haas made films like "Pickup" (1951) and "The Other Woman" (1954). It was in this way that accomplished noir actress Ida Lupino established herself as the sole female director in Hollywood during the late 1940s and much of the 1950s. She does not appear in the best-known film she directed, "The Hitch-Hiker" (1953), developed by her company, The Filmakers, with support and distribution by RKO. It is one of the seven classic film noirs produced largely outside of the major studios that have been chosen for the United States National Film Registry. Of the others, one was a small-studio release: "Detour". Four were independent productions distributed by United Artists, the "studio without a studio": "Gun Crazy"; "Kiss Me Deadly"; "D.O.A." (1950), directed by Rudolph Maté; and "Sweet Smell of Success" (1957), directed by Alexander Mackendrick. One was an independent distributed by MGM, the industry leader: "Force of Evil" (1948), directed by Abraham Polonsky and starring John Garfield, both of whom were blacklisted in the 1950s. Independent production usually meant restricted circumstances, but "Sweet Smell of Success", despite the plans of the production team, was clearly not made on the cheap, though like many other cherished A-budget noirs, it might be said to have a B-movie soul. Perhaps no director better displayed that spirit than the German-born Robert Siodmak, who had already made a score of films before his 1940 arrival in Hollywood. Working mostly on A features, he made eight films now regarded as classic-era film noirs (a figure matched only by Lang and Mann). In addition to "The Killers", Burt Lancaster's debut and a Hellinger/Universal co-production, Siodmak's other important contributions to the genre include 1944's "Phantom Lady" (a top-of-the-line B and Woolrich adaptation), the ironically titled "Christmas Holiday" (1944), and "Cry of the City" (1948). "Criss Cross" (1949), with Lancaster again the lead, exemplifies how Siodmak brought the virtues of the B-movie to the A noir. 
In addition to the relatively looser constraints on character and message at lower budgets, the nature of B production lent itself to the noir style for economic reasons: dim lighting saved on electricity and helped cloak cheap sets (mist and smoke also served the cause); night shooting was often compelled by hurried production schedules; plots with obscure motivations and intriguingly elliptical transitions were sometimes the consequence of hastily written scripts, which there was not always enough time or money to shoot in full. In "Criss Cross", Siodmak achieved these effects with purpose, wrapping them around Yvonne De Carlo, playing the most understandable of femmes fatales; Dan Duryea, in one of his many charismatic villain roles; and Lancaster as an ordinary laborer turned armed robber, doomed by a romantic obsession. Some critics regard classic film noir as a cycle exclusive to the United States; Alain Silver and Elizabeth Ward, for example, argue, "With the Western, film noir shares the distinction of being an indigenous American form ... a wholly American film style." However, although the term "film noir" was originally coined to describe Hollywood movies, it was an international phenomenon. Even before the beginning of the generally accepted classic period, there were films made far from Hollywood that can be seen in retrospect as film noirs, for example, the French productions "Pépé le Moko" (1937), directed by Julien Duvivier, and "Le Jour se lève" (1939), directed by Marcel Carné. In addition, Mexico experienced a vibrant film noir period from roughly 1946 to 1952, which was around the same time film noir was blossoming in the United States. During the classic period, there were many films produced in Europe, particularly in France, that share elements of style, theme, and sensibility with American film noirs and may themselves be included in the genre's canon. In certain cases, the interrelationship with Hollywood noir is obvious: American-born director Jules Dassin moved to France in the early 1950s as a result of the Hollywood blacklist, and made one of the most famous French film noirs, "Rififi" (1955). Other well-known French films often classified as noir include "Quai des Orfèvres" (1947) and "Les Diaboliques" (1955), both directed by Henri-Georges Clouzot; "Casque d'Or" (1952), "Touchez pas au grisbi" (1954), and "Le Trou" (1960), all directed by Jacques Becker; and "Ascenseur pour l'échafaud" (1958), directed by Louis Malle. French director Jean-Pierre Melville is widely recognized for his tragic, minimalist film noirs—"Bob le flambeur" (1955), from the classic period, was followed by "Le Doulos" (1962), "Le deuxième souffle" (1966), "Le Samouraï" (1967), and "Le Cercle rouge" (1970). Scholar Andrew Spicer argues that British film noir evidences a greater debt to French poetic realism than to the expressionistic American mode of noir. Examples of British noir from the classic period include "Brighton Rock" (1947), directed by John Boulting; "They Made Me a Fugitive" (1947), directed by Alberto Cavalcanti; "The Small Back Room" (1948), directed by Michael Powell and Emeric Pressburger; "The October Man" (1950), directed by Roy Ward Baker; and "Cast a Dark Shadow" (1955), directed by Lewis Gilbert. Terence Fisher directed several low-budget thrillers in a noir mode for Hammer Film Productions, including "The Last Page" (a.k.a. "Man Bait"; 1952), "Stolen Face" (1952), and "Murder by Proxy" (a.k.a. "Blackout"; 1954). 
Before leaving for France, Jules Dassin had been obliged by political pressure to shoot his last English-language film of the classic noir period in Great Britain: "Night and the City" (1950). Though it was conceived in the United States and was not only directed by an American but also stars two American actors—Richard Widmark and Gene Tierney—it is technically a UK production, financed by 20th Century-Fox's British subsidiary. The most famous of classic British noirs is director Carol Reed's "The Third Man" (1949), from a screenplay by Graham Greene. Set in Vienna immediately after World War II, it also stars two American actors, Joseph Cotten and Orson Welles, who had appeared together in "Citizen Kane". Elsewhere, Italian director Luchino Visconti adapted Cain's "The Postman Always Rings Twice" as "Ossessione" (1943), regarded both as one of the great noirs and a seminal film in the development of neorealism. (It was not even the first screen version of Cain's novel, having been preceded by the French "Le Dernier Tournant" in 1939.) In Japan, the celebrated Akira Kurosawa directed several films recognizable as film noirs, including "Drunken Angel" (1948), "Stray Dog" (1949), "The Bad Sleep Well" (1960), and "High and Low" (1963). Among the first major neo-noir films—the term often applied to films that consciously refer back to the classic noir tradition—was the French "Tirez sur le pianiste" (1960), directed by François Truffaut from a novel by one of the gloomiest of American noir fiction writers, David Goodis. Noir crime films and melodramas have been produced in many countries in the post-classic era. Some of these are quintessentially self-aware neo-noirs—for example, "Il Conformista" (1969; Italy), "Der Amerikanische Freund" (1977; Germany), "The Element of Crime" (1984; Denmark), and "El Aura" (2005; Argentina). Others simply share narrative elements and a version of the hardboiled sensibility associated with classic noir, such as "The Castle of Sand" (1974; Japan), "Insomnia" (1997; Norway), "Croupier" (1998; UK), and "Blind Shaft" (2003; China). The neo-noir film genre developed midway through the Cold War, and the trend reflected much of the era's cynicism and its fear of nuclear annihilation. The new genre introduced innovations that had not been available to the earlier noirs, and its violence was more potent. While it is hard to draw a line between some of the noir films of the early 1960s such as "Blast of Silence" (1961) and "Cape Fear" (1962) and the noirs of the late 1950s, new trends emerged in the post-classic era. "The Manchurian Candidate" (1962), directed by John Frankenheimer, "Shock Corridor" (1962), directed by Samuel Fuller, and "Brainstorm" (1965), directed by experienced noir character actor William Conrad, all treat the theme of mental dispossession within stylistic and tonal frameworks derived from classic film noir. "The Manchurian Candidate" examined the situation of American prisoners of war (POWs) during the Korean War. Incidents that occurred during the war, as well as those that followed it, served as inspiration for a "Cold War Noir" subgenre. The television series "The Fugitive" (1963–67) brought classic noir themes and mood to the small screen for an extended run. In a different vein, films began to appear that self-consciously acknowledged the conventions of classic film noir as historical archetypes to be revived, rejected, or reimagined. These efforts typify what came to be known as neo-noir. 
Though several late classic noirs, "Kiss Me Deadly" in particular, were deeply self-knowing and post-traditional in conception, none tipped its hand so evidently as to be remarked on by American critics at the time. The first major film to overtly work this angle was French director Jean-Luc Godard's "À bout de souffle" ("Breathless"; 1960), which pays its literal respects to Bogart and his crime films while brandishing a bold new style for a new day. In the United States, Arthur Penn ("Mickey One" [1964], drawing inspiration from Truffaut's "Tirez sur le pianiste" and other French New Wave films), John Boorman ("Point Blank" [1967], similarly caught up, though in the "Nouvelle vague"'s deeper waters), and Alan J. Pakula ("Klute" [1971]) directed films that knowingly related themselves to the original film noirs, inviting audiences in on the game. A manifest affiliation with noir traditions—which, by its nature, allows different sorts of commentary on them to be inferred—can also provide the basis for explicit critiques of those traditions. In 1973, director Robert Altman flipped off noir piety with "The Long Goodbye". Based on the novel by Raymond Chandler, it features one of Bogart's most famous characters, but in iconoclastic fashion: Philip Marlowe, the prototypical hardboiled detective, is replayed as a hapless misfit, almost laughably out of touch with contemporary mores and morality. Where Altman's subversion of the film noir mythos was so irreverent as to outrage some contemporary critics, around the same time Woody Allen was paying affectionate, at points idolatrous, homage to the classic mode with "Play It Again, Sam" (1972). The "blaxploitation" film "Shaft" (1971), wherein Richard Roundtree plays the titular African-American private eye, John Shaft, takes conventions from classic noir. The most acclaimed of the neo-noirs of the era was director Roman Polanski's 1974 "Chinatown". Written by Robert Towne, it is set in 1930s Los Angeles, an accustomed noir locale nudged back some few years in a way that makes the pivotal loss of innocence in the story even crueler. Where Polanski and Towne raised noir to a black apogee by turning rearward, director Martin Scorsese and screenwriter Paul Schrader brought the noir attitude crashing into the present day with "Taxi Driver" (1976), a crackling, bloody-minded gloss on bicentennial America. In 1978, Walter Hill wrote and directed "The Driver", a chase film as might have been imagined by Jean-Pierre Melville in an especially abstract mood. Hill was already a central figure in 1970s noir of a more straightforward manner, having written the script for director Sam Peckinpah's "The Getaway" (1972), adapting a novel by pulp master Jim Thompson, as well as for two tough private eye films: an original screenplay for "Hickey & Boggs" (1972) and an adaptation of a novel by Ross Macdonald, the leading literary descendant of Hammett and Chandler, for "The Drowning Pool" (1975). Some of the strongest 1970s noirs, in fact, were unwinking remakes of the classics, "neo" mostly by default: the heartbreaking "Thieves Like Us" (1973), directed by Altman from the same source as Ray's "They Live by Night", and "Farewell, My Lovely" (1975), the Chandler tale made classically as "Murder, My Sweet", remade here with Robert Mitchum in his last notable noir role. 
Detective series, prevalent on American television during the period, updated the hardboiled tradition in different ways, but the show conjuring the most noir tone was a horror crossover touched with shaggy, "Long Goodbye"–style humor: "Kolchak: The Night Stalker" (1974–75), featuring a Chicago newspaper reporter investigating strange, usually supernatural occurrences. The turn of the decade brought Scorsese's black-and-white "Raging Bull" (cowritten by Schrader); an acknowledged masterpiece—the American Film Institute ranks it as the greatest American film of the 1980s and the fourth greatest of all time—it is also a retreat, telling a story of a boxer's moral self-destruction that recalls in both theme and visual ambience noir dramas such as "Body and Soul" (1947) and "Champion" (1949). Released in 1981, the popular "Body Heat", written and directed by Lawrence Kasdan, invokes a different set of classic noir elements, this time in a humid, erotically charged Florida setting; its success confirmed the commercial viability of neo-noir, at a time when the major Hollywood studios were becoming increasingly risk averse. The mainstreaming of neo-noir is evident in such films as "Black Widow" (1987), "Shattered" (1991), and "Final Analysis" (1992). Few neo-noirs have made more money or more wittily updated the tradition of the noir double-entendre than "Basic Instinct" (1992), directed by Paul Verhoeven and written by Joe Eszterhas. The film also demonstrates how neo-noir's polychrome palette can reproduce many of the expressionistic effects of classic black-and-white noir. Among big-budget auteurs, Michael Mann has worked frequently in a neo-noir mode, with such films as "Thief" (1981) and "Heat" (1995) and the TV series "Miami Vice" (1984–89) and "Crime Story" (1986–88). Mann's output exemplifies a primary strain of neo-noir, in which classic themes and tropes are revisited in a contemporary setting with an up-to-date visual style and rock- or hip hop-based musical soundtrack. Like "Chinatown", its more complex predecessor, Curtis Hanson's Oscar-winning "L.A. Confidential" (1997), based on the James Ellroy novel, demonstrates an opposite tendency—the deliberately retro film noir; its tale of corrupt cops and femmes fatales is seemingly lifted straight from a film of 1953, the year in which it is set. Director David Fincher followed the immensely successful neo-noir "Seven" (1995) with a film that developed into a cult favorite after its original, disappointing release: "Fight Club" (1999) is a "sui generis" mix of noir aesthetic, perverse comedy, speculative content, and satiric intent. Working generally with much smaller budgets, brothers Joel and Ethan Coen have created one of the most extensive film oeuvres influenced by classic noir, with films such as "Blood Simple" (1984) and "Fargo" (1996), considered by some a supreme work in the neo-noir mode. The Coens cross noir with other generic lines in the gangster drama "Miller's Crossing" (1990)—loosely based on the Dashiell Hammett novels "Red Harvest" and "The Glass Key"—and the comedy "The Big Lebowski" (1998), a tribute to Chandler and an homage to Altman's version of "The Long Goodbye". The characteristic work of David Lynch combines film noir tropes with scenarios driven by disturbed characters such as the sociopathic criminal played by Dennis Hopper in "Blue Velvet" (1986) and the delusionary protagonist of "Lost Highway" (1997). The "Twin Peaks" cycle, both TV series (1990–91) and film, "Twin Peaks: Fire Walk with Me" (1992), puts a detective plot through a succession of bizarre spasms. 
David Cronenberg also mixes surrealism and noir in "Naked Lunch" (1991), inspired by William S. Burroughs' novel. Perhaps no American neo-noirs better reflect the classic noir A-movie-with-a-B-movie-soul than those of director-writer Quentin Tarantino; neo-noirs of his such as "Reservoir Dogs" (1992) and "Pulp Fiction" (1994) display a relentlessly self-reflexive, sometimes tongue-in-cheek sensibility, similar to the work of the New Wave directors and the Coens. Other films from the era readily identifiable as neo-noir (some retro, some more au courant) include director John Dahl's "Kill Me Again" (1989), "Red Rock West" (1992), and "The Last Seduction" (1993); four adaptations of novels by Jim Thompson—"The Kill-Off" (1989), "After Dark, My Sweet" (1990), "The Grifters" (1990), and the remake of "The Getaway" (1994); and many more, including adaptations of the work of other major noir fiction writers: "The Hot Spot" (1990), from "Hell Hath No Fury", by Charles Williams; "Miami Blues" (1990), from the novel by Charles Willeford; and "Out of Sight" (1998), from the novel by Elmore Leonard. Several films by director-writer David Mamet involve noir elements: "House of Games" (1987), "Homicide" (1991), "The Spanish Prisoner" (1997), and "Heist" (2001). On television, "Moonlighting" (1985–89) paid homage to classic noir while demonstrating an unusual appreciation of the sense of humor often found in the original cycle. Between 1983 and 1989, Mickey Spillane's hardboiled private eye Mike Hammer was played with wry gusto by Stacy Keach in a series and several stand-alone television films (an unsuccessful revival followed in 1997–98). The British miniseries "The Singing Detective" (1986), written by Dennis Potter, tells the story of a mystery writer named Philip Marlow; widely considered one of the finest neo-noirs in any medium, it is ranked by some critics among the greatest television productions of all time. The Coen brothers make reference to the noir tradition again with "The Man Who Wasn't There" (2001); a black-and-white crime melodrama set in 1949, it features a scene apparently staged to mirror one from "Out of the Past". Lynch's "Mulholland Drive" (2001) continued in his characteristic vein, making the classic noir setting of Los Angeles the venue for a noir-inflected psychological jigsaw puzzle. British-born director Christopher Nolan's black-and-white debut, "Following" (1998), was an overt homage to classic noir. During the new century's first decade, he was one of the leading Hollywood directors of neo-noir with the acclaimed "Memento" (2000) and the remake of "Insomnia" (2002). Director Sean Penn's "The Pledge" (2001), though adapted from a very self-reflexive novel by Friedrich Dürrenmatt, plays noir comparatively straight, to devastating effect. Screenwriter David Ayer updated the classic noir bad-cop tale, typified by "Shield for Murder" (1954) and "Rogue Cop" (1954), with his scripts for "Training Day" (2001) and, adapting a story by James Ellroy, "Dark Blue" (2002); he later wrote and directed the even darker "Harsh Times" (2006). Michael Mann's "Collateral" (2004) features a performance by Tom Cruise as an assassin in the lineage of "Le Samouraï". The torments of "The Machinist" (2004), directed by Brad Anderson, evoke both "Fight Club" and "Memento". In 2005, Shane Black directed "Kiss Kiss Bang Bang", basing his screenplay in part on a crime novel by Brett Halliday, who published his first stories back in the 1920s. 
The film plays with an awareness not only of classic noir but also of neo-noir reflexivity itself. With ultra-violent films such as "Sympathy for Mr. Vengeance" (2002) and "Thirst" (2009), Park Chan-wook of South Korea has been the most prominent director outside of the United States to work regularly in a noir mode in the new millennium. The most commercially successful neo-noir of this period has been "Sin City" (2005), directed by Robert Rodriguez in extravagantly stylized black and white with splashes of color. The film is based on a series of comic books created by Frank Miller (credited as the film's codirector), which are in turn openly indebted to the works of Spillane and other pulp mystery authors. Similarly, graphic novels provide the basis for "Road to Perdition" (2002), directed by Sam Mendes, and "A History of Violence" (2005), directed by David Cronenberg; the latter was voted best film of the year in the annual "Village Voice" poll. Writer-director Rian Johnson's "Brick" (2005), featuring present-day high schoolers speaking a version of 1930s hardboiled argot, won the Special Jury Prize for Originality of Vision at the Sundance Film Festival. The television series "Veronica Mars" (2004–07) also brought a youth-oriented twist to film noir. Examples of this sort of generic crossover have been dubbed "teen noir". Neo-noir films released in the 2010s include Kim Jee-woon’s "I Saw the Devil" (2010), Fred Cavaye’s "Point Blank" (2010), Na Hong-jin’s "The Yellow Sea" (2010), Nicolas Winding Refn’s "Drive" (2011), and Claire Denis' "Bastards" (2013). In the post-classic era, a significant trend in noir crossovers has involved science fiction. In Jean-Luc Godard's "Alphaville" (1965), Lemmy Caution is the name of the old-school private eye in the city of tomorrow. "The Groundstar Conspiracy" (1972) centers on another implacable investigator and an amnesiac named Welles. "Soylent Green" (1973), the first major American example, portrays a dystopian, near-future world via a self-evidently noir detection plot; starring Charlton Heston (the lead in "Touch of Evil"), it also features classic noir standbys Joseph Cotten, Edward G. Robinson, and Whit Bissell. The film was directed by Richard Fleischer, who two decades before had directed several strong B noirs, including "Armored Car Robbery" (1950) and "The Narrow Margin" (1952). The cynical and stylish perspective of classic film noir had a formative effect on the cyberpunk genre of science fiction that emerged in the early 1980s; the film most directly influential on cyberpunk was "Blade Runner" (1982), directed by Ridley Scott, which pays evocative homage to the classic noir mode (Scott subsequently directed the poignant noir crime melodrama "Someone to Watch Over Me" [1987]). Scholar Jamaluddin Bin Aziz has observed how "the shadow of Philip Marlowe lingers on" in such other "future noir" films as "12 Monkeys" (1995), "Dark City" (1998) and "Minority Report" (2002). Fincher's feature debut was "Alien 3" (1992), which evoked the classic noir jail film "Brute Force". David Cronenberg's "Crash" (1996), an adaptation of the speculative novel by J. G. Ballard, has been described as a "film noir in bruise tones". The hero is the target of investigation in "Gattaca" (1997), which fuses film noir motifs with a scenario indebted to "Brave New World". "The Thirteenth Floor" (1999), like "Blade Runner", is an explicit homage to classic noir, in this case involving speculations about virtual reality. 
Science fiction, noir, and anime are brought together in the Japanese films "Ghost in the Shell" (1995) and "Ghost in the Shell 2: Innocence" (2004), both directed by Mamoru Oshii. "The Animatrix" (2003), based on and set within the world of "The Matrix" film trilogy, contains an anime short film in classic noir style titled "A Detective Story". Anime television series with science fiction noir themes include "Noir" (2001) and "Cowboy Bebop" (1998). The 2015 film "Ex Machina" puts an understated film noir spin on the Frankenstein mythos, with the sentient android Ava as a potential "femme fatale", her creator Nathan embodying the abusive husband or father trope, and her would-be rescuer Caleb as a "clueless drifter" enthralled by Ava. Film noir has been parodied many times in many manners. In 1945, Danny Kaye starred in what appears to be the first intentional film noir parody, "Wonder Man". That same year, Deanna Durbin was the singing lead in the comedic noir "Lady on a Train", which makes fun of Woolrich-brand wistful miserablism. Bob Hope inaugurated the private-eye noir parody with "My Favorite Brunette" (1947), playing a baby photographer who is mistaken for an ironfisted detective. In 1947 as well, The Bowery Boys appeared in "Hard Boiled Mahoney", which had a similar mistaken-identity plot; they spoofed the genre once more in "Private Eyes" (1953). Two RKO productions starring Robert Mitchum take film noir over the border into self-parody: "The Big Steal" (1949), directed by Don Siegel, and "His Kind of Woman" (1951). The "Girl Hunt" ballet in Vincente Minnelli's "The Band Wagon" (1953) is a ten-minute distillation of—and play on—noir in dance. "The Cheap Detective" (1978), starring Peter Falk, is a broad spoof of several films, including the Bogart classics "The Maltese Falcon" and "Casablanca". Carl Reiner's black-and-white "Dead Men Don't Wear Plaid" (1982) appropriates clips of classic noirs for a farcical pastiche, while his "Fatal Instinct" (1993) sends up noirs both classic ("Double Indemnity") and neo ("Basic Instinct"). Robert Zemeckis's "Who Framed Roger Rabbit" (1988) develops a noir plot set in 1940s L.A. around a host of cartoon characters. Noir parodies come in darker tones as well. "Murder by Contract" (1958), directed by Irving Lerner, is a deadpan joke on noir, with a denouement as bleak as any of the films it kids. An ultra-low-budget Columbia Pictures production, it may qualify as the first intentional example of what is now called a neo-noir film; it was likely a source of inspiration for both Melville's "Le Samouraï" and Scorsese's "Taxi Driver". Belying its parodic strain, "The Long Goodbye"'s final act is seriously grave. "Taxi Driver" caustically deconstructs the "dark" crime film, taking it to an absurd extreme and then offering a conclusion that manages to mock every possible anticipated ending—triumphant, tragic, artfully ambivalent—while being each, all at once. Flirting with splatter status even more brazenly, the Coens' "Blood Simple" is both an exacting pastiche and a gross exaggeration of classic noir. Adapted by director Robinson Devor from a novel by Charles Willeford, "The Woman Chaser" (1999) sends up not just the noir mode but the entire Hollywood filmmaking process, with seemingly each shot staged as the visual equivalent of an acerbic Marlowe wisecrack. In other media, the television series "Sledge Hammer!" (1986–88) lampoons noir, along with such topics as capital punishment, gun fetishism, and Dirty Harry. "Sesame Street" (1969–curr.) 
occasionally casts Kermit the Frog as a private eye; the sketches refer to some of the typical motifs of noir films, in particular the voiceover. Garrison Keillor's radio program "A Prairie Home Companion" features the recurring character Guy Noir, a hardboiled detective whose adventures always wander into farce (Guy also appears in the Altman-directed film based on Keillor's show). Firesign Theatre's Nick Danger has trod the same not-so-mean streets, both on radio and in comedy albums. Cartoons such as "Garfield's Babes and Bullets" (1989) and comic strip characters such as Tracer Bullet of "Calvin and Hobbes" have parodied both film noir and the kindred hardboiled tradition—one of the sources from which film noir sprang and which it now overshadows. In their original 1955 canon of film noir, Raymond Borde and Etienne Chaumeton identified twenty-two Hollywood films released between 1941 and 1952 as core examples; they listed another fifty-nine American films from the period as significantly related to the field of noir. A half-century later, film historians and critics had come to agree on a canon of approximately three hundred films from 1940–58. There remain, however, many differences of opinion over whether other films of the era, among them a number of well-known ones, qualify as film noirs or not. For instance, "The Night of the Hunter" (1955), starring Robert Mitchum in an acclaimed performance, is treated as a film noir by some critics, but not by others. Some critics include "Suspicion" (1941), directed by Alfred Hitchcock, in their catalogues of noir; others ignore it. Concerning films made either before or after the classic period, or outside of the United States at any time, consensus is even rarer. To support their categorization of certain films as noirs and their rejection of others, many critics refer to a set of elements they see as marking examples of the mode. The question of what constitutes the set of noir's identifying characteristics is a fundamental source of controversy. For instance, critics tend to define the model film noir as having a tragic or bleak conclusion, but many acknowledged classics of the genre have clearly happy endings (e.g., "Stranger on the Third Floor," "The Big Sleep," "Dark Passage," and "The Dark Corner"), while the tone of many other noir denouements is ambivalent. Some critics perceive classic noir's hallmark as a distinctive visual style. Others, observing that there is actually considerable stylistic variety among noirs, instead emphasize plot and character type. Still others focus on mood and attitude. No survey of classic noir's identifying characteristics can therefore be considered definitive. In the 1990s and 2000s, critics have increasingly turned their attention to that diverse field of films called neo-noir; once again, there is even less consensus about the defining attributes of such films made outside the classic period. The low-key lighting schemes of many classic film noirs are associated with stark light/dark contrasts and dramatic shadow patterning—a style known as chiaroscuro (a term adopted from Renaissance painting). The shadows of Venetian blinds or banister rods, cast upon an actor, a wall, or an entire set, are an iconic visual in noir and had already become a cliché well before the neo-noir era. Characters' faces may be partially or wholly obscured by darkness—a relative rarity in conventional Hollywood filmmaking. 
While black-and-white cinematography is considered by many to be one of the essential attributes of classic noir, the color films "Leave Her to Heaven" (1945) and "Niagara" (1953) are routinely included in noir filmographies, while "Slightly Scarlet" (1956), "Party Girl" (1958), and "Vertigo" (1958) are classified as noir by varying numbers of critics. Film noir is also known for its use of low-angle, wide-angle, and skewed, or Dutch angle shots. Other devices of disorientation relatively common in film noir include shots of people reflected in one or more mirrors, shots through curved or frosted glass or other distorting objects (such as during the strangulation scene in "Strangers on a Train"), and special effects sequences of a sometimes bizarre nature. Night-for-night shooting, as opposed to the Hollywood norm of day-for-night, was often employed. From the mid-1940s forward, location shooting became increasingly frequent in noir. In an analysis of the visual approach of "Kiss Me Deadly", a late and self-consciously stylized example of classic noir, critic Alain Silver describes how cinematographic choices emphasize the story's themes and mood. In one scene, the characters, seen through a "confusion of angular shapes", thus appear "caught in a tangible vortex or enclosed in a trap." Silver makes a case for how "side light is used ... to reflect character ambivalence", while shots of characters in which they are lit from below "conform to a convention of visual expression which associates shadows cast upward of the face with the unnatural and ominous". Film noirs tend to have unusually convoluted story lines, frequently involving flashbacks and other editing techniques that disrupt and sometimes obscure the narrative sequence. Framing the entire primary narrative as a flashback is also a standard device. Voiceover narration, sometimes used as a structuring device, came to be seen as a noir hallmark; while classic noir is generally associated with first-person narration (i.e., by the protagonist), Stephen Neale notes that third-person narration is common among noirs of the semidocumentary style. Neo-noirs as varied as "The Element of Crime" (surrealist), "After Dark, My Sweet" (retro), and "Kiss Kiss Bang Bang" (meta) have employed the flashback/voiceover combination. Bold experiments in cinematic storytelling were sometimes attempted during the classic era: "Lady in the Lake", for example, is shot entirely from the point of view of protagonist Philip Marlowe; the face of star (and director) Robert Montgomery is seen only in mirrors. "The Chase" (1946) takes oneirism and fatalism as the basis for its fantastical narrative system, redolent of certain horror stories, but with little precedent in the context of a putatively realistic genre. In their different ways, both "Sunset Boulevard" and "D.O.A." are tales told by dead men. Latter-day noir has been in the forefront of structural experimentation in popular cinema, as exemplified by such films as "Pulp Fiction", "Fight Club", and "Memento". Crime, usually murder, is an element of almost all film noirs; in addition to standard-issue greed, jealousy is frequently the criminal motivation. A crime investigation—by a private eye, a police detective (sometimes acting alone), or a concerned amateur—is the most prevalent, but far from dominant, basic plot. In other common plots the protagonists are implicated in heists or con games, or in murderous conspiracies often involving adulterous affairs. 
False suspicions and accusations of crime are frequent plot elements, as are betrayals and double-crosses. According to J. David Slocum, "protagonists assume the literal identities of dead men in nearly fifteen percent of all noir." Amnesia is fairly epidemic—"noir's version of the common cold", in the words of film historian Lee Server. Film noirs tend to revolve around heroes who are more flawed and morally questionable than the norm, often fall guys of one sort or another. The characteristic protagonists of noir are described by many critics as "alienated"; in the words of Silver and Ward, "filled with existential bitterness". Certain archetypal characters appear in many film noirs—hardboiled detectives, femmes fatales, corrupt policemen, jealous husbands, intrepid claims adjusters, and down-and-out writers. Among characters of every stripe, cigarette smoking is rampant. From historical commentators to neo-noir pictures to pop culture ephemera, the private eye and the femme fatale have been adopted as the quintessential film noir figures, though they do not appear in most films now regarded as classic noir. Of the twenty-six National Film Registry noirs, in only four does the star play a private eye: "The Maltese Falcon", "The Big Sleep", "Out of the Past", and "Kiss Me Deadly". Just four others readily qualify as detective stories: "Laura", "The Killers", "The Naked City", and "Touch of Evil". Film noir is often associated with an urban setting, and a few cities—Los Angeles, San Francisco, New York, and Chicago, in particular—are the location of many of the classic films. In the eyes of many critics, the city is presented in noir as a "labyrinth" or "maze". Bars, lounges, nightclubs, and gambling dens are frequently the scene of action. The climaxes of a substantial number of film noirs take place in visually complex, often industrial settings, such as refineries, factories, trainyards, power plants—most famously the explosive conclusion of "White Heat", set at a chemical plant. In the popular (and, frequently enough, critical) imagination, in noir it is always night and it always rains. A substantial trend within latter-day noir—dubbed "film soleil" by critic D. K. Holm—heads in precisely the opposite direction, with tales of deception, seduction, and corruption exploiting bright, sun-baked settings, stereotypically the desert or open water, to searing effect. Significant predecessors from the classic and early post-classic eras include "The Lady from Shanghai"; the Robert Ryan vehicle "Inferno" (1953); the French adaptation of Patricia Highsmith's "The Talented Mr. Ripley", "Plein soleil" ("Purple Noon" in the United States, more accurately rendered elsewhere as "Blazing Sun" or "Full Sun"; 1960); and director Don Siegel's version of "The Killers" (1964). The tendency was at its peak during the late 1980s and 1990s, with films such as "Dead Calm" (1989), "After Dark, My Sweet" (1990), "The Hot Spot" (1990), "Delusion" (1991), "Red Rock West" (1993) and the television series "Miami Vice". Film noir is often described as essentially pessimistic. The noir stories that are regarded as most characteristic tell of people trapped in unwanted situations (which, in general, they did not cause but are responsible for exacerbating), striving against random, uncaring fate, and frequently doomed. The films are seen as depicting a world that is inherently corrupt. 
Classic film noir has been associated by many critics with the American social landscape of the era—in particular, with a sense of heightened anxiety and alienation that is said to have followed World War II. In author Nicholas Christopher's opinion, "it is as if the war, and the social eruptions in its aftermath, unleashed demons that had been bottled up in the national psyche." Film noirs, especially those of the 1950s and the height of the Red Scare, are often said to reflect cultural paranoia; "Kiss Me Deadly" is the noir most frequently marshaled as evidence for this claim. Film noir is often said to be defined by "moral ambiguity", yet the Production Code obliged almost all classic noirs to see that steadfast virtue was ultimately rewarded and vice, in the absence of shame and redemption, severely punished (however dramatically incredible the final rendering of mandatory justice might be). A substantial number of latter-day noirs flout such conventions: vice emerges triumphant in films as varied as the grim "Chinatown" and the ribald "Hot Spot". The tone of film noir is generally regarded as downbeat; some critics experience it as darker still—"overwhelmingly black", according to Robert Ottoson. Influential critic (and filmmaker) Paul Schrader wrote in a seminal 1972 essay that ""film noir" is defined by tone", a tone he seems to perceive as "hopeless". In describing the adaptation of "Double Indemnity", noir analyst Foster Hirsch notes the "requisite hopeless tone" achieved by the filmmakers, which appears to characterize his view of noir as a whole. On the other hand, definitive film noirs such as "The Big Sleep", "The Lady from Shanghai", "Scarlet Street" and "Double Indemnity" itself are famed for their hardboiled repartee, often imbued with sexual innuendo and self-reflexive humor. Francium Francium is a chemical element with symbol Fr and atomic number 87. It used to be known as eka-caesium. It is extremely radioactive; its most stable isotope, francium-223 (originally called actinium K after the natural decay chain it appears in), has a half-life of only 22 minutes. It is the second-least electronegative element, behind only caesium, and is the second rarest naturally occurring element (after astatine). The isotopes of francium decay quickly into astatine, radium, and radon. As an alkali metal, it has one valence electron. Bulk francium has never been viewed. Because of the general appearance of the other elements in its periodic table column, it is assumed that francium would appear as a highly reactive metal, if enough could be collected together to be viewed as a bulk solid or liquid. Obtaining such a sample is highly improbable, since the extreme heat of decay (the half-life of its longest-lived isotope is only 22 minutes) would immediately vaporize any viewable quantity of the element. Francium was discovered by Marguerite Perey in France (from which the element takes its name) in 1939. It was the last element first discovered in nature, rather than by synthesis. Outside the laboratory, francium is extremely rare, with trace amounts found in uranium and thorium ores, where the isotope francium-223 continually forms and decays. As little as 20–30 g (one ounce) exists at any given time throughout the Earth's crust; the other isotopes (except for francium-221) are entirely synthetic. The largest amount produced in the laboratory was a cluster of more than 300,000 atoms. 
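As a rough, illustrative sketch of why no visible sample of francium can persist, the standard exponential decay law can be applied to the 22-minute half-life quoted above (the figures below are back-of-the-envelope estimates, not measured values):

$$N(t) = N_0 \left(\tfrac{1}{2}\right)^{t/t_{1/2}}, \qquad t_{1/2} \approx 22\ \text{min}$$

Starting from a laboratory-scale cluster of roughly 300,000 atoms, about $(\tfrac{1}{2})^{60/22} \approx 0.15$ of the sample, some 45,000 atoms, would remain after one hour, and only a few dozen after five hours.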
Francium is one of the most unstable of the naturally occurring elements: its longest-lived isotope, francium-223, has a half-life of only 22 minutes. The only comparable element is astatine, whose most stable natural isotope, astatine-219 (the alpha daughter of francium-223), has a half-life of 56 seconds, although synthetic astatine-210 is much longer-lived with a half-life of 8.1 hours. All isotopes of francium decay into astatine, radium, or radon. Francium-223 also has a shorter half-life than the longest-lived isotope of each synthetic element up to and including element 105, dubnium. Francium is an alkali metal whose chemical properties mostly resemble those of caesium. A heavy element with a single valence electron, it has the highest equivalent weight of any element. Liquid francium—if created—should have a surface tension of 0.05092 N/m at its melting point. Francium's melting point was calculated to be around 27 °C (80 °F, 300 K). The melting point is uncertain because of the element's extreme rarity and radioactivity. Thus, the estimated boiling point value of 677 °C (1250 °F, 950 K) is also uncertain. Linus Pauling estimated the electronegativity of francium at 0.7 on the Pauling scale, the same as caesium; the value for caesium has since been refined to 0.79, but there are no experimental data to allow a refinement of the value for francium. Francium has a slightly higher ionization energy than caesium, 392.811(4) kJ/mol as opposed to 375.7041(2) kJ/mol for caesium, as would be expected from relativistic effects, and this would imply that caesium is the less electronegative of the two. Francium should also have a higher electron affinity than caesium and the Fr⁺ ion should be more polarizable than the Cs⁺ ion. The CsFr molecule is predicted to have francium at the negative end of the dipole, unlike all known heterodiatomic alkali metal molecules. Francium superoxide (FrO₂) is expected to have a more covalent character than its lighter congeners; this is attributed to the 6p electrons in francium being more involved in the francium–oxygen bonding. Francium coprecipitates with several caesium salts, such as caesium perchlorate, which results in small amounts of francium perchlorate. This coprecipitation can be used to isolate francium, by adapting the radiocaesium coprecipitation method of Lawrence E. Glendenin and Nelson. It will additionally coprecipitate with many other caesium salts, including the iodate, the picrate, the tartrate (also rubidium tartrate), the chloroplatinate, and the silicotungstate. It also coprecipitates with silicotungstic acid, and with perchloric acid, without another alkali metal as a carrier, which provides other methods of separation. Nearly all francium salts are water-soluble. There are 34 known isotopes of francium ranging in atomic mass from 199 to 232. Francium has seven metastable nuclear isomers. Francium-223 and francium-221 are the only isotopes that occur in nature, though the former is far more common. Francium-223 is the most stable isotope, with a half-life of 21.8 minutes, and it is highly unlikely that an isotope of francium with a longer half-life will ever be discovered or synthesized. Francium-223 is the fifth product of the actinium decay series as the daughter isotope of actinium-227. Francium-223 then decays into radium-223 by beta decay (1149 keV decay energy), with a minor (0.006%) alpha decay path to astatine-219 (5.4 MeV decay energy). Francium-221 has a half-life of 4.8 minutes. 
It is the ninth product of the neptunium decay series as a daughter isotope of actinium-225. Francium-221 then decays into astatine-217 by alpha decay (6.457 MeV decay energy). The least stable ground state isotope is francium-215, with a half-life of 0.12 μs: it undergoes a 9.54 MeV alpha decay to astatine-211. Its metastable isomer, francium-215m, is less stable still, with a half-life of only 3.5 ns. Due to its instability and rarity, there are no commercial applications for francium. It has been used for research purposes in the fields of chemistry and of atomic structure. Its use as a potential diagnostic aid for various cancers has also been explored, but this application has been deemed impractical. Francium's ability to be synthesized, trapped, and cooled, along with its relatively simple atomic structure, has made it the subject of specialized spectroscopy experiments. These experiments have led to more specific information regarding energy levels and the coupling constants between subatomic particles. Studies on the light emitted by laser-trapped francium-210 ions have provided accurate data on transitions between atomic energy levels which are fairly similar to those predicted by quantum theory. As early as 1870, chemists thought that there should be an alkali metal beyond caesium, with an atomic number of 87. It was then referred to by the provisional name "eka-caesium". Research teams attempted to locate and isolate this missing element, and at least four false claims were made that the element had been found before an authentic discovery was made. Soviet chemist D. K. Dobroserdov was the first scientist to claim to have found eka-caesium, or francium. In 1925, he observed weak radioactivity in a sample of potassium, another alkali metal, and incorrectly concluded that eka-caesium was contaminating the sample (the radioactivity from the sample was from the naturally occurring potassium radioisotope, potassium-40). He then published a thesis on his predictions of the properties of eka-caesium, in which he named the element "russium" after his home country. Shortly thereafter, Dobroserdov began to focus on his teaching career at the Polytechnic Institute of Odessa, and he did not pursue the element further. The following year, English chemists Gerald J. F. Druce and Frederick H. Loring analyzed X-ray photographs of manganese(II) sulfate. They observed spectral lines which they presumed to be of eka-caesium. They announced their discovery of element 87 and proposed the name "alkalinium", as it would be the heaviest alkali metal. In 1930, Fred Allison of the Alabama Polytechnic Institute claimed to have discovered element 87 when analyzing pollucite and lepidolite using his magneto-optical machine. Allison requested that it be named "virginium" after his home state of Virginia, along with the symbols Vi and Vm. In 1934, H.G. MacPherson of UC Berkeley disproved the effectiveness of Allison's device and the validity of his discovery. In 1936, Romanian physicist Horia Hulubei and his French colleague Yvette Cauchois also analyzed pollucite, this time using their high-resolution X-ray apparatus. They observed several weak emission lines, which they presumed to be those of element 87. Hulubei and Cauchois reported their discovery and proposed the name "moldavium", along with the symbol Ml, after Moldavia, the Romanian province where Hulubei was born. In 1937, Hulubei's work was criticized by American physicist F. H. Hirsh Jr., who rejected Hulubei's research methods. 
Hirsh was certain that eka-caesium would not be found in nature, and that Hulubei had instead observed mercury or bismuth X-ray lines. Hulubei insisted that his X-ray apparatus and methods were too accurate to make such a mistake. Because of this, Jean Baptiste Perrin, Nobel Prize winner and Hulubei's mentor, endorsed moldavium as the true eka-caesium over Marguerite Perey's recently discovered francium. Perey took pains to be accurate and detailed in her criticism of Hulubei's work, and finally she was credited as the sole discoverer of element 87. All other previous purported discoveries of element 87 were ruled out due to francium's very limited half-life. Eka-caesium was discovered in 1939 by Marguerite Perey of the Curie Institute in Paris, when she purified a sample of actinium-227 which had been reported to have a decay energy of 220 keV. Perey noticed decay particles with an energy level below 80 keV. Perey thought this decay activity might have been caused by a previously unidentified decay product, one which was separated during purification, but emerged again out of the pure actinium-227. Various tests eliminated the possibility of the unknown element being thorium, radium, lead, bismuth, or thallium. The new product exhibited chemical properties of an alkali metal (such as coprecipitating with caesium salts), which led Perey to believe that it was element 87, produced by the alpha decay of actinium-227. Perey then attempted to determine the proportion of beta decay to alpha decay in actinium-227. Her first test put the alpha branching at 0.6%, a figure which she later revised to 1%. Perey named the new isotope "actinium-K" (it is now referred to as francium-223) and in 1946, she proposed the name "catium" (Cm) for her newly discovered element, as she believed it to be the most electropositive cation of the elements. Irène Joliot-Curie, one of Perey's supervisors, opposed the name due to its connotation of "cat" rather than "cation"; furthermore, the symbol coincided with that which had since been assigned to curium. Perey then suggested "francium", after France. This name was officially adopted by the International Union of Pure and Applied Chemistry (IUPAC) in 1949, becoming the second element after gallium to be named after France. It was assigned the symbol Fa, but this abbreviation was revised to the current Fr shortly thereafter. Francium was the last element discovered in nature, rather than synthesized, following hafnium and rhenium. Further research into francium's structure was carried out by, among others, Sylvain Lieberman and his team at CERN in the 1970s and 1980s. Francium-223 is the result of the alpha decay of actinium-227 and can be found in trace amounts in uranium minerals. In a given sample of uranium, there is estimated to be only one francium atom for every 1 × 10¹⁸ uranium atoms. It is also calculated that there is at most 30 g of francium in the Earth's crust at any given time. Francium can be synthesized in the nuclear reaction ¹⁹⁷Au + ¹⁸O → ²¹⁰Fr + 5n. This process, developed by Stony Brook Physics, yields francium isotopes with masses of 209, 210, and 211, which are then isolated by the magneto-optical trap (MOT). The production rate of a particular isotope depends on the energy of the oxygen beam. An oxygen-18 beam from the Stony Brook LINAC creates francium-210 in the gold target with the nuclear reaction ¹⁹⁷Au + ¹⁸O → ²¹⁰Fr + 5n. The production required some time to develop and understand. It was critical to operate the gold target very close to its melting point and to make sure that its surface was very clean. 
The nuclear reaction embeds the francium atoms deep in the gold target, and they must be removed efficiently. The atoms quickly diffuse to the surface of the gold target and are released as ions, but this does not happen every time. The francium ions are guided by electrostatic lenses until they land on a surface of hot yttrium and become neutral again. The francium is then injected into a glass bulb. A magnetic field and laser beams cool and confine the atoms. Although the atoms remain in the trap for only about 20 seconds before escaping (or decaying), a steady stream of fresh atoms replaces those lost, keeping the number of trapped atoms roughly constant for minutes or longer. Initially, about 1000 francium atoms were trapped in the experiment. This was gradually improved and the setup is capable of trapping over 300,000 neutral atoms of francium at a time. These are neutral metallic atoms in a gaseous unconsolidated state. Enough francium is trapped that a video camera can capture the light given off by the atoms as they fluoresce. The atoms appear as a glowing sphere about 1 millimeter in diameter. This was the first time that anyone had ever seen francium. The researchers can now make extremely sensitive measurements of the light emitted and absorbed by the trapped atoms, providing the first experimental results on various transitions between atomic energy levels in francium. Initial measurements show very good agreement between experimental values and calculations based on quantum theory. Other synthesis methods include bombarding radium with neutrons, and bombarding thorium with protons, deuterons, or helium ions. Francium has not been synthesized in amounts large enough to weigh. Francium-223 can also be isolated from samples of its parent actinium-227, the francium being milked via elution with NH₄Cl–CrO₃ from an actinium-containing cation exchanger and purified by passing the solution through a silicon dioxide compound loaded with barium sulfate. Frédéric Chopin Frédéric François Chopin (1 March 1810 – 17 October 1849) was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano. He has maintained worldwide renown as a leading musician of his era, one whose "poetic genius was based on a professional technique that was without equal in his generation." Chopin was born Fryderyk Franciszek Chopin in the Duchy of Warsaw and grew up in Warsaw, which in 1815 became part of Congress Poland. A child prodigy, he completed his musical education and composed his earlier works in Warsaw before leaving Poland at the age of 20, less than a month before the outbreak of the November 1830 Uprising. At 21, he settled in Paris. Thereafter—in the last 18 years of his life—he gave only 30 public performances, preferring the more intimate atmosphere of the salon. He supported himself by selling his compositions and by giving piano lessons, for which he was in high demand. Chopin formed a friendship with Franz Liszt and was admired by many of his other musical contemporaries (including Robert Schumann). In 1835, Chopin obtained French citizenship. After a failed engagement to Maria Wodzińska from 1836 to 1837, he maintained an often troubled relationship with the French writer Amantine Dupin (known by her pen name, George Sand). A brief and unhappy visit to Majorca with Sand in 1838–39 would prove one of his most productive periods of composition. In his final years, he was supported financially by his admirer Jane Stirling, who also arranged for him to visit Scotland in 1848. 
For most of his life, Chopin was in poor health. He died in Paris in 1849 at the age of 39, probably of tuberculosis. All of Chopin's compositions include the piano. Most are for solo piano, though he also wrote two piano concertos, a few chamber pieces, and some 19 songs set to Polish lyrics. His piano writing was technically demanding and expanded the limits of the instrument: his own performances were noted for their nuance and sensitivity. Chopin invented the concept of the instrumental "ballade". His major piano works also include mazurkas, waltzes, nocturnes, polonaises, études, impromptus, scherzos, preludes and sonatas, some published only posthumously. Among the influences on his style of composition were Polish folk music, the classical tradition of J. S. Bach, Mozart, and Schubert, and the atmosphere of the Paris salons of which he was a frequent guest. His innovations in style, harmony, and musical form, and his association of music with nationalism, were influential throughout and after the late Romantic period. Chopin's music, his status as one of music's earliest superstars, his (indirect) association with political insurrection, his high-profile love-life, and his early death have made him a leading symbol of the Romantic era. His works remain popular, and he has been the subject of numerous films and biographies of varying historical fidelity. Fryderyk Chopin was born in Żelazowa Wola, 46 kilometres west of Warsaw, in what was then the Duchy of Warsaw, a Polish state established by Napoleon. The parish baptismal record gives his birthday as 22 February 1810, and cites his given names in the Latin form "Fridericus Franciscus" (in Polish, he was "Fryderyk Franciszek"). However, the composer and his family used the birthdate 1 March, which is now generally accepted as the correct date. Fryderyk's father, Nicolas Chopin, was a Frenchman from Lorraine who had emigrated to Poland in 1787 at the age of sixteen. Nicolas tutored children of the Polish aristocracy, and in 1806 married Justyna Krzyżanowska, a poor relative of the Skarbeks, one of the families for whom he worked. Fryderyk was baptized on Easter Sunday, 23 April 1810, in the same church where his parents had married, in Brochów. His eighteen-year-old godfather, for whom he was named, was Fryderyk Skarbek, a pupil of Nicolas Chopin. Fryderyk was the couple's second child and only son; he had an elder sister, Ludwika (1807–55), and two younger sisters, Izabela (1811–81) and Emilia (1812–27). Nicolas was devoted to his adopted homeland, and insisted on the use of the Polish language in the household. In October 1810, six months after Fryderyk's birth, the family moved to Warsaw, where his father acquired a post teaching French at the Warsaw Lyceum, then housed in the Saxon Palace. Fryderyk lived with his family in the Palace grounds. The father played the flute and violin; the mother played the piano and gave lessons to boys in the boarding house that the Chopins kept. Chopin was of slight build, and even in early childhood was prone to illnesses. Fryderyk may have had some piano instruction from his mother, but his first professional music tutor, from 1816 to 1821, was the Czech pianist Wojciech Żywny. His elder sister Ludwika also took lessons from Żywny, and occasionally played duets with her brother. It quickly became apparent that he was a child prodigy. By the age of seven Fryderyk had begun giving public concerts, and in 1817 he composed two polonaises, in G minor and B-flat major. 
His next work, a polonaise in A-flat major of 1821, dedicated to Żywny, is his earliest surviving musical manuscript. In 1817 the Saxon Palace was requisitioned by Warsaw's Russian governor for military use, and the Warsaw Lyceum was reestablished in the Kazimierz Palace (today the rectorate of Warsaw University). Fryderyk and his family moved to a building, which still survives, adjacent to the Kazimierz Palace. During this period, Fryderyk was sometimes invited to the Belweder Palace as playmate to the son of the ruler of Russian Poland, Grand Duke Constantine; he played the piano for the Duke and composed a march for him. Julian Ursyn Niemcewicz, in his dramatic eclogue, "Nasze Przebiegi" ("Our Discourses", 1818), attested to "little Chopin's" popularity. From September 1823 to 1826, Chopin attended the Warsaw Lyceum, where he received organ lessons from the Czech musician Wilhelm Würfel during his first year. In the autumn of 1826 he began a three-year course under the Silesian composer Józef Elsner at the Warsaw Conservatory, studying music theory, figured bass and composition. Throughout this period he continued to compose and to give recitals in concerts and salons in Warsaw. He was engaged by the inventors of a mechanical organ, the "eolomelodicon", and on this instrument in May 1825 he performed his own improvisation and part of a concerto by Moscheles. The success of this concert led to an invitation to give a similar recital on the instrument before Tsar Alexander I, who was visiting Warsaw; the Tsar presented him with a diamond ring. At a subsequent eolomelodicon concert on 10 June 1825, Chopin performed his Rondo Op. 1. This was the first of his works to be commercially published and earned him his first mention in the foreign press, when the Leipzig "Allgemeine Musikalische Zeitung" praised his "wealth of musical ideas". During 1824–28 Chopin spent his vacations away from Warsaw, at a number of locales. In 1824 and 1825, at Szafarnia, he was a guest of Dominik Dziewanowski, the father of a schoolmate. Here for the first time he encountered Polish rural folk music. His letters home from Szafarnia (to which he gave the title "The Szafarnia Courier"), written in a very modern and lively Polish, amused his family with their spoofing of the Warsaw newspapers and demonstrated the youngster's literary gift. In 1827, soon after the death of Chopin's youngest sister Emilia, the family moved from the Warsaw University building, adjacent to the Kazimierz Palace, to lodgings just across the street from the university, in the south annexe of the Krasiński Palace on Krakowskie Przedmieście, where Chopin lived until he left Warsaw in 1830. Here his parents continued running their boarding house for male students; the Chopin Family Parlour ("Salonik Chopinów") became a museum in the 20th century. In 1829 the artist Ambroży Mieroszewski executed a set of portraits of Chopin family members, including the first known portrait of the composer. Four boarders at his parents' apartments became Chopin's intimates: Tytus Woyciechowski, Jan Nepomucen Białobłocki, Jan Matuszyński and Julian Fontana; the latter two would become part of his Paris milieu. He was friendly with members of Warsaw's young artistic and intellectual world, including Fontana, Józef Bohdan Zaleski and Stefan Witwicki. He was also attracted to the singing student Konstancja Gładkowska. 
In letters to Woyciechowski, he indicated which of his works, and even which of their passages, were influenced by his fascination with her; his letter of 15 May 1830 revealed that the slow movement ("Larghetto") of his Piano Concerto No. 1 (in E minor) was secretly dedicated to her – "It should be like dreaming in beautiful springtime – by moonlight." His final Conservatory report (July 1829) read: "Chopin F., third-year student, exceptional talent, musical genius." In September 1828 Chopin, while still a student, visited Berlin with a family friend, zoologist Feliks Jarocki, enjoying operas directed by Gaspare Spontini and attending concerts by Carl Friedrich Zelter, Felix Mendelssohn and other celebrities. On an 1829 return trip to Berlin, he was a guest of Prince Antoni Radziwiłł, governor of the Grand Duchy of Posen—himself an accomplished composer and aspiring cellist. For the prince and his pianist daughter Wanda, he composed his Introduction and Polonaise brillante in C major for cello and piano, Op. 3. Back in Warsaw that year, Chopin heard Niccolò Paganini play the violin, and composed a set of variations, "Souvenir de Paganini". It may have been this experience which encouraged him to commence writing his first Études, (1829–32), exploring the capacities of his own instrument. On 11 August, three weeks after completing his studies at the Warsaw Conservatory, he made his debut in Vienna. He gave two piano concerts and received many favourable reviews—in addition to some commenting (in Chopin's own words) that he was "too delicate for those accustomed to the piano-bashing of local artists". In one of these concerts, he premiered his Variations on "Là ci darem la mano", Op. 2 (variations on a duet from Mozart's opera "Don Giovanni") for piano and orchestra. He returned to Warsaw in September 1829, where he premiered his Piano Concerto No. 2 in F minor, Op. 21 on 17 March 1830. Chopin's successes as a composer and performer opened the door to western Europe for him, and on 2 November 1830, he set out, in the words of Zdzisław Jachimecki, "into the wide world, with no very clearly defined aim, forever." With Woyciechowski, he headed for Austria again, intending to go on to Italy. Later that month, in Warsaw, the November 1830 Uprising broke out, and Woyciechowski returned to Poland to enlist. Chopin, now alone in Vienna, was nostalgic for his homeland, and wrote to a friend, "I curse the moment of my departure." When in September 1831 he learned, while travelling from Vienna to Paris, that the uprising had been crushed, he expressed his anguish in the pages of his private journal: "Oh God! ... You are there, and yet you do not take vengeance!" Jachimecki ascribes to these events the composer's maturing "into an inspired national bard who intuited the past, present and future of his native Poland." When he left Warsaw in late 1830, Chopin had intended to go to Italy. However, violent unrest triggered by the Carbonari made that a dangerous destination by the middle of 1831. Chopin's next choice was Paris, but the Russian embassy in Vienna refused to authorize his passport for travel to France. Finally, after numerous delays, he received permission to stop in Paris en route to London. Chopin neglected to complete the second leg of this journey, instead settling permanently in Paris. In the years to come, he would frequently quote the passport inscription "Passeport en passant par Paris à Londres" to Parisian friends, joking that he was in the city "only in passing." 
Chopin arrived in Paris in late September 1831; he would never return to Poland, thus becoming one of many expatriates of the Polish Great Emigration. In France he used the French versions of his given names, and after receiving French citizenship in 1835, he travelled on a French passport. However, Chopin remained close to his fellow Poles in exile as friends and confidants and he never felt fully comfortable speaking French. Chopin's biographer Adam Zamoyski writes that he never considered himself to be French, despite his father's French origins, and always saw himself as a Pole. In Paris, Chopin encountered artists and other distinguished figures, and found many opportunities to exercise his talents and achieve celebrity. During his years in Paris he was to become acquainted with, among many others, Hector Berlioz, Franz Liszt, Ferdinand Hiller, Heinrich Heine, Eugène Delacroix, and Alfred de Vigny. Chopin was also acquainted with the poet Adam Mickiewicz, principal of the Polish Literary Society, some of whose verses he set as songs. Two Polish friends in Paris were also to play important roles in Chopin's life there. His fellow student at the Warsaw Conservatory, Julian Fontana, had originally tried unsuccessfully to establish himself in England; Fontana was to become, in the words of Michałowski and Samson, Chopin's "general factotum and copyist". Albert Grzymała, who in Paris became a wealthy financier and society figure, often acted as Chopin's adviser and "gradually began to fill the role of elder brother in [his] life." At the end of 1831, Chopin received the first major endorsement from an outstanding contemporary when Robert Schumann, reviewing the Op. 2 Variations in the "Allgemeine musikalische Zeitung" (his first published article on music), declared: "Hats off, gentlemen! A genius." On 26 February 1832 Chopin gave a debut Paris concert at the Salle Pleyel which drew universal admiration. The critic François-Joseph Fétis wrote in the "Revue et gazette musicale": "Here is a young man who ... taking no model, has found, if not a complete renewal of piano music, ... an abundance of original ideas of a kind to be found nowhere else ..." After this concert, Chopin realized that his essentially intimate keyboard technique was not optimal for large concert spaces. Later that year he was introduced to the wealthy Rothschild banking family, whose patronage also opened doors for him to other private salons (social gatherings of the aristocracy and artistic and literary elite). By the end of 1832 Chopin had established himself among the Parisian musical elite, and had earned the respect of his peers such as Hiller, Liszt, and Berlioz. He no longer depended financially upon his father, and in the winter of 1832 he began earning a handsome income from publishing his works and teaching piano to affluent students from all over Europe. This freed him from the strains of public concert-giving, which he disliked. Chopin seldom performed publicly in Paris. In later years he generally gave a single annual concert at the Salle Pleyel, a venue that seated three hundred. He played more frequently at salons, but preferred playing at his own Paris apartment for small groups of friends. The musicologist Arthur Hedley has observed that "As a pianist Chopin was unique in acquiring a reputation of the highest order on the basis of a minimum of public appearances—few more than thirty in the course of his lifetime." 
The list of musicians who took part in some of his concerts provides an indication of the richness of Parisian artistic life during this period. Examples include a concert on 23 March 1833, in which Chopin, Liszt and Hiller performed (on pianos) a concerto by J.S. Bach for three keyboards; and, on 3 March 1838, a concert in which Chopin, his pupil Adolphe Gutmann, Charles-Valentin Alkan, and Alkan's teacher Joseph Zimmermann performed Alkan's arrangement, for eight hands, of two movements from Beethoven's 7th symphony. Chopin was also involved in the composition of Liszt's "Hexameron"; he wrote the sixth (and final) variation on Bellini's theme. Chopin's music soon found success with publishers, and in 1833 he contracted with Maurice Schlesinger, who arranged for it to be published not only in France but, through his family connections, also in Germany and England. In the spring of 1834, Chopin attended the Lower Rhenish Music Festival in Aix-la-Chapelle with Hiller, and it was there that Chopin met Felix Mendelssohn. After the festival, the three visited Düsseldorf, where Mendelssohn had been appointed musical director. They spent what Mendelssohn described as "a very agreeable day", playing and discussing music at his piano, and met Friedrich Wilhelm Schadow, director of the Academy of Art, and some of his eminent pupils such as Lessing, Bendemann, Hildebrandt and Sohn. In 1835 Chopin went to Carlsbad, where he spent time with his parents; it was the last time he would see them. On his way back to Paris, he met old friends from Warsaw, the Wodzińskis. He had made the acquaintance of their daughter Maria in Poland five years earlier, when she was eleven. This meeting prompted him to stay for two weeks in Dresden, when he had previously intended to return to Paris via Leipzig. The sixteen-year-old girl's portrait of the composer is considered, along with Delacroix's, as among Chopin's best likenesses. In October he finally reached Leipzig, where he met Schumann, Clara Wieck and Mendelssohn, who organised for him a performance of his own oratorio "St. Paul", and who considered him "a perfect musician". In July 1836 Chopin travelled to Marienbad and Dresden to be with the Wodziński family, and in September he proposed to Maria, whose mother Countess Wodzińska approved in principle. Chopin went on to Leipzig, where he presented Schumann with his G minor Ballade. At the end of 1836 he sent Maria an album in which his sister Ludwika had inscribed seven of his songs, and his 1835 Nocturne in C-sharp minor, Op. 27, No. 1. The anodyne thanks he received from Maria proved to be the last letter he was to have from her. Although it is not known exactly when Chopin first met Liszt after arriving in Paris, on 12 December 1831 he mentioned in a letter to his friend Woyciechowski that "I have met Rossini, Cherubini, Baillot, etc.—also Kalkbrenner. You would not believe how curious I was about Herz, Liszt, Hiller, etc." Liszt was in attendance at Chopin's Parisian debut on 26 February 1832 at the Salle Pleyel, which led him to remark: "The most vigorous applause seemed not to suffice to our enthusiasm in the presence of this talented musician, who revealed a new phase of poetic sentiment combined with such happy innovation in the form of his art." The two became friends, and for many years lived in close proximity in Paris, Chopin at 38 Rue de la Chaussée-d'Antin, and Liszt at the Hôtel de France on the Rue Lafitte, a few blocks away. They performed together on seven occasions between 1833 and 1841. 
The first, on 2 April 1833, was at a benefit concert organized by Hector Berlioz for his bankrupt Shakespearean actress wife Harriet Smithson, during which they played George Onslow's "Sonata in F minor" for piano duet. Later joint appearances included a benefit concert for the Benevolent Association of Polish Ladies in Paris. Their last appearance together in public was for a charity concert conducted for the Beethoven Memorial in Bonn, held at the Salle Pleyel and the Paris Conservatory on 25 and 26 April 1841. Although the two displayed great respect and admiration for each other, their friendship was uneasy and had some qualities of a love-hate relationship. Harold C. Schonberg believes that Chopin displayed a "tinge of jealousy and spite" towards Liszt's virtuosity on the piano, and others have also argued that he had become enchanted with Liszt's theatricality, showmanship and success. Liszt was the dedicatee of Chopin's Op. 10 Études, and his performance of them prompted the composer to write to Hiller, "I should like to rob him of the way he plays my studies." However, Chopin expressed annoyance in 1843 when Liszt performed one of his nocturnes with the addition of numerous intricate embellishments, at which Chopin remarked that he should play the music as written or not play it at all, forcing an apology. Most biographers of Chopin state that after this the two had little to do with each other, although in his letters dated as late as 1848 he still referred to him as "my friend Liszt". Some commentators point to events in the two men's romantic lives which led to a rift between them; there are claims that Liszt had displayed jealousy of his mistress Marie d'Agoult's obsession with Chopin, while others believe that Chopin had become concerned about Liszt's growing relationship with George Sand. In 1836, at a party hosted by Marie d'Agoult, Chopin met the French author George Sand (born [Amantine] Aurore [Lucile] Dupin). Short (under five feet, or 152 cm), dark, big-eyed and a cigar smoker, she initially repelled Chopin, who remarked, "What an unattractive person "la Sand" is. Is she really a woman?" However, by early 1837 Maria Wodzińska's mother had made it clear to Chopin in correspondence that a marriage with her daughter was unlikely to proceed. It is thought that she was influenced by his poor health and possibly also by rumours about his associations with women such as d'Agoult and Sand. Chopin finally placed the letters from Maria and her mother in a package on which he wrote, in Polish, "My tragedy". Sand, in a letter to Grzymała of June 1838, admitted strong feelings for the composer and debated whether to abandon a current affair in order to begin a relationship with Chopin; she asked Grzymała to assess Chopin's relationship with Maria Wodzińska, without realising that the affair, at least from Maria's side, was over. In June 1837 Chopin visited London incognito in the company of the piano manufacturer Camille Pleyel where he played at a musical soirée at the house of English piano maker James Broadwood. On his return to Paris, his association with Sand began in earnest and by the end of June 1838 they had become lovers. Sand, who was six years older than the composer, and who had had a series of lovers, wrote at this time: "I must say I was confused and amazed at the effect this little creature had on me ... I have still not recovered from my astonishment, and if I were a proud person I should be feeling humiliated at having been carried away ..." 
The two spent a miserable winter on Majorca (8 November 1838 to 13 February 1839), where, together with Sand's two children, they had journeyed in the hope of improving the health of Chopin and that of Sand's 15-year-old son Maurice, and also to escape the threats of Sand's former lover Félicien Mallefille. After discovering that the couple were not married, the deeply traditional Catholic people of Majorca became inhospitable, making accommodation difficult to find. This compelled the group to take lodgings in a former Carthusian monastery in Valldemossa, which gave little shelter from the cold winter weather. On 3 December, Chopin complained about his bad health and the incompetence of the doctors in Majorca: "Three doctors have visited me ... The first said I was dead; the second said I was dying; and the third said I was about to die." He also had problems having his Pleyel piano sent to him, having to rely in the meantime on a piano made in Palma by Juan Bauza. The Pleyel piano finally arrived from Paris in December, just shortly before Chopin and Sand left the island. Chopin wrote to Pleyel in January 1839: "I am sending you my Preludes [(Op. 28)]. I finished them on your little piano, which arrived in the best possible condition in spite of the sea, the bad weather and the Palma customs." Chopin was also able to undertake work while in Majorca on his Ballade No. 2, Op. 38; two Polonaises, Op. 40; and the Scherzo No. 3, Op. 39. Although this period had been productive, the bad weather had such a detrimental effect on Chopin's health that Sand determined to leave the island. To avoid further customs duties, Sand sold the piano to a local French couple, the Canuts. The group traveled first to Barcelona, then to Marseilles, where they stayed for a few months while Chopin convalesced. In May 1839 they headed for the summer to Sand's estate at Nohant, where they spent most summers until 1846. In autumn they returned to Paris, where Chopin's apartment at 5 rue Tronchet was close to Sand's rented accommodation at the rue Pigalle. He frequently visited Sand in the evenings, but both retained some independence. In 1842 he and Sand moved to the Square d'Orléans, living in adjacent buildings. At the funeral of the tenor Adolphe Nourrit in Paris in 1839, Chopin made a rare appearance at the organ, playing a transcription of Franz Schubert's "lied" "Die Gestirne" (D. 444). On 26 July 1840 Chopin and Sand were present at the dress rehearsal of Berlioz's "Grande symphonie funèbre et triomphale", composed to commemorate the tenth anniversary of the July Revolution. Chopin was reportedly unimpressed with the composition. During the summers at Nohant, particularly in the years 1839–43, Chopin found quiet, productive days during which he composed many works, including his Polonaise in A-flat major, Op. 53. Among the visitors to Nohant were Delacroix and the mezzo-soprano Pauline Viardot, whom Chopin had advised on piano technique and composition. Delacroix gives an account of staying at Nohant in a letter of 7 June 1842: The hosts could not be more pleasant in entertaining me. When we are not all together at dinner, lunch, playing billiards, or walking, each of us stays in his room, reading or lounging around on a couch. Sometimes, through the window which opens on the garden, a gust of music wafts up from Chopin at work. All this mingles with the songs of nightingales and the fragrance of roses. From 1842 onwards, Chopin showed signs of serious illness. 
After a solo recital in Paris on 21 February 1842, he wrote to Grzymała: "I have to lie in bed all day long, my mouth and tonsils are aching so much." He was forced by illness to decline a written invitation from Alkan to participate in a repeat performance of the Beethoven Seventh Symphony arrangement at Érard's on 1 March 1843. Late in 1844, Charles Hallé visited Chopin and found him "hardly able to move, bent like a half-opened penknife and evidently in great pain", although his spirits returned when he started to play the piano for his visitor. Chopin's health continued to deteriorate, particularly from this time onwards. Modern research suggests that apart from any other illnesses, he may also have suffered from temporal lobe epilepsy. Chopin's relations with Sand were soured in 1846 by problems involving her daughter Solange and Solange's fiancé, the young fortune-hunting sculptor Auguste Clésinger. The composer frequently took Solange's side in quarrels with her mother; he also faced jealousy from Sand's son Maurice. Chopin was utterly indifferent to Sand's radical political pursuits, while Sand looked on his society friends with disdain. As the composer's illness progressed, Sand had become less of a lover and more of a nurse to Chopin, whom she called her "third child". In letters to third parties, she vented her impatience, referring to him as a "child," a "little angel", a "sufferer" and a "beloved little corpse." In 1847 Sand published her novel "Lucrezia Floriani", whose main characters—a rich actress and a prince in weak health—could be interpreted as Sand and Chopin; the story was uncomplimentary to Chopin, who could not have missed the allusions as he helped Sand correct the printer's galleys. In 1847 he did not visit Nohant, and he quietly ended their ten-year relationship following an angry correspondence which, in Sand's words, made "a strange conclusion to nine years of exclusive friendship." The two would never meet again. Chopin's output as a composer throughout this period declined in quantity year by year. Whereas in 1841 he had written a dozen works, only six were written in 1842 and six shorter pieces in 1843. In 1844 he wrote only the Op. 58 sonata. 1845 saw the completion of three mazurkas (Op. 59). Although these works were more refined than many of his earlier compositions, Zamoyski concludes that "his powers of concentration were failing and his inspiration was beset by anguish, both emotional and intellectual." Chopin's public popularity as a virtuoso began to wane, as did the number of his pupils, and this, together with the political strife and instability of the time, caused him to struggle financially. In February 1848, with the cellist Auguste Franchomme, he gave his last Paris concert, which included three movements of the Cello Sonata Op. 65. In April, during the Revolution of 1848 in Paris, he left for London, where he performed at several concerts and at numerous receptions in great houses. This tour was suggested to him by his Scottish pupil Jane Stirling and her elder sister. Stirling also made all the logistical arrangements and provided much of the necessary funding. In London Chopin took lodgings at Dover Street, where the firm of Broadwood provided him with a grand piano. At his first engagement, on 15 May at Stafford House, the audience included Queen Victoria and Prince Albert. The Prince, who was himself a talented musician, moved close to the keyboard to view Chopin's technique. 
Broadwood also arranged concerts for him; among those attending were Thackeray and the singer Jenny Lind. Chopin was also sought after for piano lessons, for which he charged the high fee of one guinea per hour, and for private recitals for which the fee was 20 guineas. At a concert on 7 July he shared the platform with Viardot, who sang arrangements of some of his mazurkas to Spanish texts. On 28 August, he played at a concert in Manchester's Concert Hall, sharing the stage with Marietta Alboni and Lorenzo Salvi. In late summer he was invited by Jane Stirling to visit Scotland, where he stayed at Calder House near Edinburgh and at Johnstone Castle in Renfrewshire, both owned by members of Stirling's family. She clearly had a notion of going beyond mere friendship, and Chopin was obliged to make it clear to her that this could not be so. He wrote at this time to Grzymała "My Scottish ladies are kind, but such bores", and responding to a rumour about his involvement, answered that he was "closer to the grave than the nuptial bed." He gave a public concert in Glasgow on 27 September, and another in Edinburgh, at the Hopetoun Rooms on Queen Street (now Erskine House) on 4 October. In late October 1848, while staying at 10 Warriston Crescent in Edinburgh with the Polish physician Adam Łyszczyński, he wrote out his last will and testament—"a kind of disposition to be made of my stuff in the future, if I should drop dead somewhere", he wrote to Grzymała. Chopin made his last public appearance on a concert platform at London's Guildhall on 16 November 1848, when, in a final patriotic gesture, he played for the benefit of Polish refugees. By this time he was very seriously ill, weighing under 99 pounds (i.e. less than 45 kg), and his doctors were aware that his sickness was at a terminal stage. At the end of November, Chopin returned to Paris. He passed the winter in unremitting illness, but gave occasional lessons and was visited by friends, including Delacroix and Franchomme. Occasionally he played, or accompanied the singing of Delfina Potocka, for his friends. During the summer of 1849, his friends found him an apartment in Chaillot, out of the centre of the city, for which the rent was secretly subsidised by an admirer, Princess Obreskoff. Here in June 1849 he was visited by Jenny Lind. With his health further deteriorating, Chopin desired to have a family member with him. In June 1849 his sister Ludwika came to Paris with her husband and daughter, and in September, supported by a loan from Jane Stirling, he took an apartment at Place Vendôme 12. After 15 October, when his condition took a marked turn for the worse, only a handful of his closest friends remained with him, although Viardot remarked sardonically that "all the grand Parisian ladies considered it "de rigueur" to faint in his room." Some of his friends provided music at his request; among them, Potocka sang and Franchomme played the cello. Chopin requested that his body be opened after death (for fear of being buried alive) and his heart returned to Warsaw where it rests at the Church of the Holy Cross. He also bequeathed his unfinished notes on a piano tuition method, "Projet de méthode", to Alkan for completion. On 17 October, after midnight, the physician leaned over him and asked whether he was suffering greatly. "No longer", he replied. He died a few minutes before two o'clock in the morning. 
Those present at the deathbed appear to have included his sister Ludwika, Princess Marcelina Czartoryska, Sand's daughter Solange, and his close friend Thomas Albrecht. Later that morning, Solange's husband Clésinger made Chopin's death mask and a cast of his left hand. The funeral, held at the Church of the Madeleine in Paris, was delayed almost two weeks, until 30 October. Entrance was restricted to ticket holders as many people were expected to attend. Over 3,000 people arrived without invitations, from as far as London, Berlin and Vienna, and were excluded. Mozart's Requiem was sung at the funeral; the soloists were the soprano Jeanne-Anaïs Castellan, the mezzo-soprano Pauline Viardot, the tenor Alexis Dupont, and the bass Luigi Lablache; Chopin's Preludes No. 4 in E minor and No. 6 in B minor were also played. The organist at the funeral was Louis Lefébure-Wély. The funeral procession to Père Lachaise Cemetery, which included Chopin's sister Ludwika, was led by the aged Prince Adam Czartoryski. The pallbearers included Delacroix, Franchomme, and Camille Pleyel. At the graveside, the "Funeral March" from Chopin's Piano Sonata No. 2 was played, in Reber's instrumentation. Chopin's tombstone, featuring the muse of music, Euterpe, weeping over a broken lyre, was designed and sculpted by Clésinger. The expenses of the funeral and monument, amounting to 5,000 francs, were covered by Jane Stirling, who also paid for the return of the composer's sister Ludwika to Warsaw. Ludwika took Chopin's heart in an urn, preserved in alcohol, back to Poland in 1850. She also took a collection of two hundred letters from Sand to Chopin; after 1851 these were returned to Sand, who seems to have destroyed them. Chopin's disease and the cause of his death have been a matter of discussion. His death certificate gave the cause of death as tuberculosis, and his physician, Jean Cruveilhier, was then the leading French authority on this disease. Other possibilities that have been advanced have included cystic fibrosis, cirrhosis, and alpha 1-antitrypsin deficiency. An examination of Chopin's preserved heart, conducted in 2014 and published in 2017, confirmed that the likely cause of his death was a rare case of pericarditis caused by complications of chronic tuberculosis. Over 230 works of Chopin survive; some compositions from early childhood have been lost. All his known works involve the piano, and only a few range beyond solo piano music, as either piano concertos, songs or chamber music. Chopin was educated in the tradition of Beethoven, Haydn, Mozart and Clementi; he used Clementi's piano method with his own students. He was also influenced by Hummel's development of virtuoso, yet Mozartian, piano technique. He cited Bach and Mozart as the two most important composers in shaping his musical outlook. Chopin's early works are in the style of the "brilliant" keyboard pieces of his era as exemplified by the works of Ignaz Moscheles, Friedrich Kalkbrenner, and others. Less direct in the earlier period are the influences of Polish folk music and of Italian opera. Much of what became his typical style of ornamentation (for example, his "fioriture") is taken from singing. His melodic lines were increasingly reminiscent of the modes and features of the music of his native country, such as drones. Chopin took the new salon genre of the nocturne, invented by the Irish composer John Field, to a deeper level of sophistication. He was the first to write ballades and scherzi as individual concert pieces. 
He essentially established a new genre with his own set of free-standing preludes (Op. 28, published 1839). He exploited the poetic potential of the concept of the concert étude, already being developed in the 1820s and 1830s by Liszt, Clementi and Moscheles, in his two sets of studies (Op. 10 published in 1833, Op. 25 in 1837). Chopin also endowed popular dance forms with a greater range of melody and expression. Chopin's mazurkas, while originating in the traditional Polish dance (the "mazurek"), differed from the traditional variety in that they were written for the concert hall rather than the dance hall; as J. Barrie Jones puts it, "it was Chopin who put the mazurka on the European musical map." The series of seven polonaises published in his lifetime (another nine were published posthumously), beginning with the Op. 26 pair (published 1836), set a new standard for music in the form. His waltzes were also written specifically for the salon recital rather than the ballroom and are frequently at rather faster tempos than their dance-floor equivalents. Some of Chopin's well-known pieces have acquired descriptive titles, such as the "Revolutionary" Étude (Op. 10, No. 12), and the "Minute Waltz" (Op. 64, No. 1). However, with the exception of his "Funeral March", the composer never named an instrumental work beyond genre and number, leaving all potential extramusical associations to the listener; the names by which many of his pieces are known were invented by others. There is no evidence to suggest that the "Revolutionary" Étude was written with the failed Polish uprising against Russia in mind; it merely appeared at that time. The "Funeral March", the third movement of his Sonata No. 2 (Op. 35), the one case where he did give a title, was written before the rest of the sonata, but no specific event or death is known to have inspired it. The last opus number that Chopin himself used was 65, allocated to the Cello Sonata in G minor. He expressed a deathbed wish that all his unpublished manuscripts be destroyed. At the request of the composer's mother and sisters, however, his musical executor Julian Fontana selected 23 unpublished piano pieces and grouped them into eight further opus numbers (Opp. 66–73), published in 1855. In 1857, 17 Polish songs that Chopin wrote at various stages of his life were collected and published as Op. 74, though their order within the opus did not reflect the order of composition. Works published since 1857 have received alternative catalogue designations instead of opus numbers. The present standard musicological reference for Chopin's works is the Kobylańska Catalogue (usually represented by the initials 'KK'), named for its compiler, the Polish musicologist Krystyna Kobylańska. Chopin's original publishers included Maurice Schlesinger and Camille Pleyel. His works soon began to appear in popular 19th-century piano anthologies. The first collected edition was by Breitkopf & Härtel (1878–1902). Among modern scholarly editions of Chopin's works are the version under the name of Paderewski published between 1937 and 1966 and the more recent Polish "National Edition", edited by Jan Ekier, both of which contain detailed explanations and discussions regarding choices and sources. Chopin published his music in France, England and the German states due to the copyright laws of the time. As such there are often three different kinds of ‘first editions’. 
Each edition is different from the other, as Chopin edited them separately and at times he did some revision to the music while editing it. Furthermore, Chopin provided his publishers with varying sources, including autographs, annotated proofsheets and scribal copies. Only recently have these differences gained greater recognition. Improvisation stands at the centre of Chopin's creative processes. However, this does not imply impulsive rambling: Nicholas Temperley writes that "improvisation is designed for an audience, and its starting-point is that audience's expectations, which include the current conventions of musical form." The works for piano and orchestra, including the two concertos, are held by Temperley to be "merely vehicles for brilliant piano playing ... formally longwinded and extremely conservative". After the piano concertos (which are both early, dating from 1830), Chopin made no attempts at large-scale multi-movement forms, save for his late sonatas for piano and for cello; "instead he achieved near-perfection in pieces of simple general design but subtle and complex cell-structure." Rosen suggests that an important aspect of Chopin's individuality is his flexible handling of the four-bar phrase as a structural unit. J. Barrie Jones suggests that "amongst the works that Chopin intended for concert use, the four ballades and four scherzos stand supreme", and adds that "the Barcarolle Op. 60 stands apart as an example of Chopin's rich harmonic palette coupled with an Italianate warmth of melody." Temperley opines that these works, which contain "immense variety of mood, thematic material and structural detail", are based on an extended "departure and return" form; "the more the middle section is extended, and the further it departs in key, mood and theme, from the opening idea, the more important and dramatic is the reprise when it at last comes." Chopin's mazurkas and waltzes are all in straightforward ternary or episodic form, sometimes with a coda. The mazurkas often show more folk features than many of his other works, sometimes including modal scales and harmonies and the use of drone basses. However, some also show unusual sophistication, for example Op. 63 No. 3, which includes a canon at one beat's distance, a great rarity in music. Chopin's polonaises show a marked advance on those of his Polish predecessors in the form (who included his teachers Żywny and Elsner). As with the traditional polonaise, Chopin's works are in triple time and typically display a martial rhythm in their melodies, accompaniments and cadences. Unlike most of their precursors, they also require a formidable playing technique. The 21 nocturnes are more structured, and of greater emotional depth, than those of Field (whom Chopin met in 1833). Many of the Chopin nocturnes have middle sections marked by agitated expression (and often making very difficult demands on the performer) which heightens their dramatic character. Chopin's études are largely in straightforward ternary form. He used them to teach his own technique of piano playing—for instance playing double thirds (Op. 25, No. 6), playing in octaves (Op. 25, No. 10), and playing repeated notes (Op. 10, No.  7). The preludes, many of which are very brief (some consisting of simple statements and developments of a single theme or figure), were described by Schumann as "the beginnings of studies". Inspired by J.S. 
Bach's "The Well-Tempered Clavier", Chopin's preludes move up the circle of fifths (rather than Bach's chromatic scale sequence) to create a prelude in each major and minor tonality. The preludes were perhaps not intended to be played as a group, and may even have been used by him and later pianists as generic preludes to others of his pieces, or even to music by other composers, as Kenneth Hamilton suggests: he has noted a recording by Ferruccio Busoni of 1922, in which the Prelude Op. 28 No. 7 is followed by the Étude Op. 10 No. 5. The two mature piano sonatas (No. 2, Op. 35, written in 1839 and No. 3, Op. 58, written in 1844) are in four movements. In Op. 35, Chopin was able to combine within a formal large musical structure many elements of his virtuosic piano technique—"a kind of dialogue between the public pianism of the brilliant style and the German sonata principle". The sonata has been considered as showing the influences of both Bach and Beethoven. The Prelude from Bach's Suite No. 6 in D major for cello (BWV 1012) is quoted; and there are references to two sonatas of Beethoven: the Sonata Opus 111 in C minor, and the Sonata Opus 26 in A flat major, which, like Chopin's Op. 35, has a funeral march as its slow movement. The last movement of Chopin's Op. 35, a brief (75-bar) perpetuum mobile in which the hands play in unmodified octave unison throughout, was found shocking and unmusical by contemporaries, including Schumann. The Op. 58 sonata is closer to the German tradition, including many passages of complex counterpoint, "worthy of Brahms" according to the music historians Kornel Michałowski and Jim Samson. Chopin's harmonic innovations may have arisen partly from his keyboard improvisation technique. Temperley says that in his works "novel harmonic effects frequently result from the combination of ordinary appoggiaturas or passing notes with melodic figures of accompaniment", and cadences are delayed by the use of chords outside the home key (neapolitan sixths and diminished sevenths), or by sudden shifts to remote keys. Chord progressions sometimes anticipate the shifting tonality of later composers such as Claude Debussy, as does Chopin's use of modal harmony. In 1841, Léon Escudier wrote of a recital given by Chopin that year, "One may say that Chopin is the creator of a school of piano and a school of composition. In truth, nothing equals the lightness, the sweetness with which the composer preludes on the piano; moreover nothing may be compared to his works full of originality, distinction and grace." Chopin refused to conform to a standard method of playing and believed that there was no set technique for playing well. His style was based extensively on his use of very independent finger technique. In his "Projet de méthode" he wrote: "Everything is a matter of knowing good fingering ... we need no less to use the rest of the hand, the wrist, the forearm and the upper arm." He further stated: "One needs only to study a certain position of the hand in relation to the keys to obtain with ease the most beautiful quality of sound, to know how to play short notes and long notes, and [to attain] unlimited dexterity." The consequences of this approach to technique in Chopin's music include the frequent use of the entire range of the keyboard, passages in double octaves and other chord groupings, swiftly repeated notes, the use of grace notes, and the use of contrasting rhythms (four against three, for example) between the hands. 
Jonathan Bellman writes that modern concert performance style—set in the "conservatory" tradition of late 19th- and 20th-century music schools, and suitable for large auditoria or recordings—militates against what is known of Chopin's more intimate performance technique. The composer himself said to a pupil that "concerts are never real music, you have to give up the idea of hearing in them all the most beautiful things of art." Contemporary accounts indicate that in performance, Chopin avoided rigid procedures sometimes incorrectly attributed to him, such as "always crescendo to a high note", but that he was concerned with expressive phrasing, rhythmic consistency and sensitive colouring. Berlioz wrote in 1853 that Chopin "has created a kind of chromatic embroidery ... whose effect is so strange and piquant as to be impossible to describe ... virtually nobody but Chopin himself can play this music and give it this unusual turn". Hiller wrote that "What in the hands of others was elegant embellishment, in his hands became a colourful wreath of flowers." Chopin's music is frequently played with "rubato", "the practice in performance of disregarding strict time, 'robbing' some note-values for expressive effect". There are differing opinions as to how much, and what type, of "rubato" is appropriate for his works. Charles Rosen comments that "most of the written-out indications of rubato in Chopin are to be found in his mazurkas ... It is probable that Chopin used the older form of rubato so important to Mozart ... [where] the melody note in the right hand is delayed until after the note in the bass ... An allied form of this rubato is the arpeggiation of the chords thereby delaying the melody note; according to Chopin's pupil Karol Mikuli, Chopin was firmly opposed to this practice." Friederike Müller, a pupil of Chopin, wrote: "[His] playing was always noble and beautiful; his tones sang, whether in full "forte" or softest "piano". He took infinite pains to teach his pupils this "legato", "cantabile" style of playing. His most severe criticism was 'He—or she—does not know how to join two notes together.' He also demanded the strictest adherence to rhythm. He hated all lingering and dragging, misplaced "rubatos", as well as exaggerated "ritardandos" ... and it is precisely in this respect that people make such terrible errors in playing his works." With his mazurkas and polonaises, Chopin has been credited with introducing to music a new sense of nationalism. Schumann, in his 1836 review of the piano concertos, highlighted the composer's strong feelings for his native Poland, writing that "Now that the Poles are in deep mourning [after the failure of the November Uprising of 1830], their appeal to us artists is even stronger ... If the mighty autocrat in the north [i.e. Nicholas I of Russia] could know that in Chopin's works, in the simple strains of his mazurkas, there lurks a dangerous enemy, he would place a ban on his music. Chopin's works are cannon buried in flowers!" The biography of Chopin published in 1863 under the name of Franz Liszt (but probably written by Carolyne zu Sayn-Wittgenstein) states that Chopin "must be ranked first among the first musicians ... individualizing in themselves the poetic sense of an entire nation." Some modern commentators have argued against exaggerating Chopin's primacy as a "nationalist" or "patriotic" composer. 
George Golos refers to earlier "nationalist" composers in Central Europe, including Poland's Michał Kleofas Ogiński and Franciszek Lessel, who utilised polonaise and mazurka forms. Barbara Milewski suggests that Chopin's experience of Polish music came more from "urbanised" Warsaw versions than from folk music, and that attempts (by Jachimecki and others) to demonstrate genuine folk music in his works are without basis. Richard Taruskin impugns Schumann's attitude toward Chopin's works as patronizing and comments that Chopin "felt his Polish patriotism deeply and sincerely" but consciously modelled his works on the tradition of Bach, Beethoven, Schubert, and Field. A reconciliation of these views is suggested by William Atwood: "Undoubtedly [Chopin's] use of traditional musical forms like the polonaise and mazurka roused nationalistic sentiments and a sense of cohesiveness amongst those Poles scattered across Europe and the New World ... While some sought solace in [them], others found them a source of strength in their continuing struggle for freedom. Although Chopin's music undoubtedly came to him intuitively rather than through any conscious patriotic design, it served all the same to symbolize the will of the Polish people ..." Jones comments that "Chopin's unique position as a composer, despite the fact that virtually everything he wrote was for the piano, has rarely been questioned." He also notes that Chopin was fortunate to arrive in Paris in 1831—"the artistic environment, the publishers who were willing to print his music, the wealthy and aristocratic who paid what Chopin asked for their lessons"—and these factors, as well as his musical genius, also fuelled his contemporary and later reputation. While his illness and his love-affairs conform to some of the stereotypes of romanticism, the rarity of his public recitals (as opposed to performances at fashionable Paris soirées) led Arthur Hutchings to suggest that "his lack of Byronic flamboyance [and] his aristocratic reclusiveness make him exceptional" among his romantic contemporaries, such as Liszt and Henri Herz. Chopin's qualities as a pianist and composer were recognized by many of his fellow musicians. Schumann named a piece for him in his suite "Carnaval", and Chopin later dedicated his Ballade No. 2 in F major to Schumann. Elements of Chopin's music can be traced in many of Liszt's later works. Liszt later transcribed for piano six of Chopin's Polish songs. A less fraught friendship was with Alkan, with whom he discussed elements of folk music, and who was deeply affected by Chopin's death. Two of Chopin's long-standing pupils, Karol Mikuli (1821–1897) and Georges Mathias, were themselves piano teachers and passed on details of his playing to their own students, some of whom (such as Raoul Koczalski) were to make recordings of his music. Other pianists and composers influenced by Chopin's style include Louis Moreau Gottschalk, Édouard Wolff (1816–1880) and Pierre Zimmermann. Debussy dedicated his own 1915 piano Études to the memory of Chopin; he frequently played Chopin's music during his studies at the Paris Conservatoire, and undertook the editing of Chopin's piano music for the publisher Jacques Durand. Polish composers of the following generation included virtuosi such as Moritz Moszkowski, but, in the opinion of J. Barrie Jones, his "one worthy successor" among his compatriots was Karol Szymanowski (1882–1937). 
Edvard Grieg, Antonín Dvořák, Isaac Albéniz, Pyotr Ilyich Tchaikovsky and Sergei Rachmaninoff, among others, are regarded by critics as having been influenced by Chopin's use of national modes and idioms. Alexander Scriabin was devoted to the music of Chopin, and his early published works include nineteen mazurkas, as well as numerous études and preludes; his teacher Nikolai Zverev drilled him in Chopin's works to improve his virtuosity as a performer. In the 20th century, composers who paid homage to (or in some cases parodied) the music of Chopin included George Crumb, Bohuslav Martinů, Darius Milhaud, Igor Stravinsky and Heitor Villa-Lobos. Chopin's music was used in the 1909 ballet "Chopiniana", choreographed by Michel Fokine and orchestrated by Alexander Glazunov. Sergei Diaghilev commissioned additional orchestrations—from Stravinsky, Anatoly Lyadov, Sergei Taneyev and Nikolai Tcherepnin—for later productions, which used the title "Les Sylphides". Chopin's music remains very popular and is regularly performed, recorded and broadcast worldwide. The world's oldest monographic music competition, the International Chopin Piano Competition, founded in 1927, is held every five years in Warsaw. The Fryderyk Chopin Institute of Poland lists on its website over eighty societies worldwide devoted to the composer and his music. The Institute site also lists nearly 1,500 performances of Chopin works on YouTube as of January 2014. The British Library notes that "Chopin's works have been recorded by all the great pianists of the recording era." The earliest recording was an 1895 performance by Paul Pabst of the Nocturne in E major Op. 62 No. 2. The British Library site makes available a number of historic recordings, including some by Alfred Cortot, Ignaz Friedman, Vladimir Horowitz, Benno Moiseiwitsch, Ignacy Jan Paderewski, Arthur Rubinstein, Xaver Scharwenka and many others. A select discography of recordings of Chopin works by pianists representing the various pedagogic traditions stemming from Chopin is given by Methuen-Campbell in his work tracing the lineage and character of those traditions. Numerous recordings of Chopin's works are available. On the occasion of the composer's bicentenary, the critics of "The New York Times" recommended performances by the following contemporary pianists (among many others): Martha Argerich, Vladimir Ashkenazy, Emanuel Ax, Evgeny Kissin, Murray Perahia, Maurizio Pollini and Krystian Zimerman. The Warsaw Chopin Society organizes the "Grand prix du disque de F. Chopin" for notable Chopin recordings, held every five years. Chopin has figured extensively in Polish literature, both in serious critical studies of his life and music and in fictional treatments. The earliest manifestation was probably an 1830 sonnet on Chopin by Leon Ulrich. French writers on Chopin (apart from Sand) have included Marcel Proust and André Gide; and he has also featured in works of Gottfried Benn and Boris Pasternak. There are numerous biographies of Chopin in English (see bibliography for some of these). Possibly the first venture into fictional treatments of Chopin's life was a fanciful operatic version of some of its events. "Chopin" was written by Giacomo Orefice and produced in Milan in 1901. All the music is derived from that of Chopin. Chopin's life and his relations with George Sand have been fictionalized in numerous films. The 1945 biographical film "A Song to Remember" earned Cornel Wilde an Academy Award nomination as Best Actor for his portrayal of the composer. 
Other film treatments have included: "La valse de l'adieu" (France, 1928) by Henry Roussel, with Pierre Blanchar as Chopin; "Impromptu" (1991), starring Hugh Grant as Chopin; "La note bleue" (1991); and "" (2002). Chopin's life was covered in a BBC documentary "Chopin – The Women Behind The Music" (2010), in a 1999 documentary by András Schiff and Mischa Scorer, and in a 2010 documentary realised by Angelo Bozzolini and Roberto Prosseda for Italian television. Franz Kafka Franz Kafka (3 July 1883 – 3 June 1924) was a German-speaking Bohemian Jewish novelist and short story writer, widely regarded as one of the major figures of 20th-century literature. His work, which fuses elements of realism and the fantastic, typically features isolated protagonists faced by bizarre or surrealistic predicaments and incomprehensible social-bureaucratic powers, and has been interpreted as exploring themes of alienation, existential anxiety, guilt, and absurdity. His best known works include "" ("The Metamorphosis"), ("The Trial"), and ("The Castle"). The term "" has entered the English language to describe situations like those in his writing. Kafka was born into a middle-class, German-speaking Jewish family in Prague, the capital of the Kingdom of Bohemia, then part of the Austro-Hungarian Empire, today part of the Czech Republic. He trained as a lawyer, and after completing his legal education he was employed by an insurance company, forcing him to relegate writing to his spare time. Over the course of his life, Kafka wrote hundreds of letters to family and close friends, including his father, with whom he had a strained and formal relationship. He became engaged to several women but never married. He died in 1924 at the age of 40 from tuberculosis. Few of Kafka's works were published during his lifetime: the story collections ("Contemplation") and ("A Country Doctor"), and individual stories (such as "") were published in literary magazines but received little public attention. Kafka's unfinished works, including his novels , and (translated as both "Amerika" and "The Man Who Disappeared"), were ordered by Kafka to be destroyed by his friend Max Brod, who nonetheless ignored his friend's direction and published them after Kafka's death. His work went on to influence a vast range of writers, critics, artists, and philosophers during the 20th century. Kafka was born near the Old Town Square in Prague, then part of the Austro-Hungarian Empire. His family were middle-class Ashkenazi Jews. His father, Hermann Kafka (1854–1931), was the fourth child of Jakob Kafka, a or ritual slaughterer in Osek, a Czech village with a large Jewish population located near Strakonice in southern Bohemia. Hermann brought the Kafka family to Prague. After working as a travelling sales representative, he eventually became a fashion retailer who employed up to 15 people and used the image of a jackdaw ( in Czech, pronounced and colloquially written as "kafka") as his business logo. Kafka's mother, Julie (1856–1934), was the daughter of Jakob Löwy, a prosperous retail merchant in Poděbrady, and was better educated than her husband. Kafka's parents probably spoke a German influenced by Yiddish that was sometimes pejoratively called Mauscheldeutsch, but, as the German language was considered the vehicle of social mobility, they probably encouraged their children to speak High German. Hermann and Julie had six children, of whom Franz was the eldest. 
Franz's two brothers, Georg and Heinrich, died in infancy before Franz was seven; his three sisters were Gabriele ("Ellie") (1889–1944), Valerie ("Valli") (1890–1942) and Ottilie ("Ottla") (1892–1943). They all died during the Holocaust of World War II. Valli was deported to the Łódź Ghetto in occupied Poland in 1942, but that is the last documentation of her. Ottilie was his favorite sister. Hermann is described by the biographer Stanley Corngold as a "huge, selfish, overbearing businessman" and by Franz Kafka as "a true Kafka in strength, health, appetite, loudness of voice, eloquence, self-satisfaction, worldly dominance, endurance, presence of mind, [and] knowledge of human nature". On business days, both parents were absent from the home, with Julie Kafka working as many as 12 hours each day helping to manage the family business. Consequently, Kafka's childhood was somewhat lonely, and the children were reared largely by a series of governesses and servants. Kafka's troubled relationship with his father is evident in his ("Letter to His Father") of more than 100 pages, in which he complains of being profoundly affected by his father's authoritarian and demanding character; his mother, in contrast, was quiet and shy. The dominating figure of Kafka's father had a significant influence on Kafka's writing. The Kafka family had a servant girl living with them in a cramped apartment. Franz's room was often cold. In November 1913 the family moved into a bigger apartment, although Ellie and Valli had married and moved out of the first apartment. In early August 1914, just after World War I began, the sisters did not know where their husbands were in the military and moved back in with the family in this larger apartment. Both Ellie and Valli also had children. Franz at age 31 moved into Valli's former apartment, quiet by contrast, and lived by himself for the first time. From 1889 to 1893, Kafka attended the German boys' elementary school at the (meat market), now known as Masná Street. His Jewish education ended with his Bar Mitzvah celebration at the age of 13. Kafka never enjoyed attending the synagogue and went with his father only on four high holidays a year. After leaving elementary school in 1893, Kafka was admitted to the rigorous classics-oriented state gymnasium, , an academic secondary school at Old Town Square, within the Kinský Palace. German was the language of instruction, but Kafka also spoke and wrote in Czech. He studied the latter at the gymnasium for eight years, achieving good grades. Although Kafka received compliments for his Czech, he never considered himself fluent in Czech, though he spoke German with a Czech accent. He completed his Matura exams in 1901. Admitted to the of Prague in 1901, Kafka began studying chemistry, but switched to law after two weeks. Although this field did not excite him, it offered a range of career possibilities which pleased his father. In addition, law required a longer course of study, giving Kafka time to take classes in German studies and art history. He also joined a student club, (Reading and Lecture Hall of the German students), which organized literary events, readings and other activities. Among Kafka's friends were the journalist Felix Weltsch, who studied philosophy, the actor Yitzchak Lowy who came from an orthodox Hasidic Warsaw family, and the writers Oskar Baum and Franz Werfel. At the end of his first year of studies, Kafka met Max Brod, a fellow law student who became a close friend for life. 
Brod soon noticed that, although Kafka was shy and seldom spoke, what he said was usually profound. Kafka was an avid reader throughout his life; together he and Brod read Plato's "Protagoras" in the original Greek, on Brod's initiative, and Flaubert's and ("The Temptation of Saint Anthony") in French, at his own suggestion. Kafka considered Fyodor Dostoyevsky, Flaubert, Nikolai Gogol, Franz Grillparzer, and Heinrich von Kleist to be his "true blood brothers". Besides these, he took an interest in Czech literature and was also very fond of the works of Goethe. Kafka was awarded the degree of Doctor of Law on 18 July 1906 and performed an obligatory year of unpaid service as law clerk for the civil and criminal courts. On 1 November 1907, Kafka was hired at the , an insurance company, where he worked for nearly a year. His correspondence during that period indicates that he was unhappy with a working time schedule—from 08:00 until 18:00—making it extremely difficult to concentrate on writing, which was assuming increasing importance to him. On 15 July 1908, he resigned. Two weeks later he found employment more amenable to writing when he joined the Worker's Accident Insurance Institute for the Kingdom of Bohemia. The job involved investigating and assessing compensation for personal injury to industrial workers; accidents such as lost fingers or limbs were commonplace at the time owing to poor work safety policies. This was especially true of factories that used machine lathes, drills, planing machines and rotary saws, which were rarely fitted with safety guards. The management professor Peter Drucker credits Kafka with developing the first civilian hard hat while employed at the Worker's Accident Insurance Institute, but this is not supported by any document from his employer. His father often referred to his son's job as an insurance officer as a , literally "bread job", a job done only to pay the bills; Kafka often claimed to despise it. Kafka was rapidly promoted and his duties included processing and investigating compensation claims, writing reports, and handling appeals from businessmen who thought their firms had been placed in too high a risk category, which cost them more in insurance premiums. He would compile and compose the annual report on the insurance institute for the several years he worked there. The reports were received well by his superiors. Kafka usually got off work at 2 p.m., so that he had time to spend on his literary work, to which he was committed. Kafka's father also expected him to help out at and take over the family fancy goods store. In his later years, Kafka's illness often prevented him from working at the insurance bureau and at his writing. Years later, Brod coined the term ("The Close Prague Circle") to describe the group of writers, which included Kafka, Felix Weltsch and Brod himself. In late 1911, Elli's husband Karl Hermann and Kafka became partners in the first asbestos factory in Prague, known as Prager Asbestwerke Hermann & Co., having used dowry money from Hermann Kafka. Kafka showed a positive attitude at first, dedicating much of his free time to the business, but he later resented the encroachment of this work on his writing time. During that period, he also found interest and entertainment in the performances of Yiddish theatre. After seeing a Yiddish theater troupe perform in October 1911, for the next six months Kafka "immersed himself in Yiddish language and in Yiddish literature". 
This interest also served as a starting point for his growing exploration of Judaism. It was at about this time that Kafka became a vegetarian. Around 1915 Kafka received his draft notice for military service in World War I, but his employers at the insurance institute arranged for a deferment because his work was considered essential government service. Later he attempted to join the military but was prevented from doing so by medical problems associated with tuberculosis, with which he was diagnosed in 1917. In 1918 the Worker's Accident Insurance Institute put Kafka on a pension due to his illness, for which there was no cure at the time, and he spent most of the rest of his life in sanatoriums. Kafka never married. According to Brod, Kafka was "tortured" by sexual desire and Kafka's biographer Reiner Stach states that his life was full of "incessant womanising" and that he was filled with a fear of "sexual failure". He visited brothels for most of his adult life and was interested in pornography. In addition, he had close relationships with several women during his life. On 13 August 1912, Kafka met Felice Bauer, a relative of Brod, who worked in Berlin as a representative of a dictaphone company. A week after the meeting at Brod's home, Kafka wrote in his diary: Shortly after this, Kafka wrote the story "" ("The Judgment") in only one night and worked in a productive period on ("The Man Who Disappeared") and "Die Verwandlung" ("The Metamorphosis"). Kafka and Felice Bauer communicated mostly through letters over the next five years, met occasionally, and were engaged twice. Kafka's extant letters to her were published as ("Letters to Felice"); her letters do not survive. According to biographers Stach and James Hawes, around 1920 Kafka was engaged a third time, to Julie Wohryzek, a poor and uneducated hotel chambermaid. Although the two rented a flat and set a wedding date, the marriage never took place. During this time Kafka began a draft of the "Letter to His Father"; his father objected to Julie because of her Zionist beliefs. Before the date of the intended marriage, he took up with yet another woman. While he needed women and sex in his life, he had low self-confidence, felt sex was dirty, and was shy—especially about his body. Stach and Brod state that during the time that Kafka knew Felice Bauer, he had an affair with a friend of hers, Margarethe "Grete" Bloch, a Jewish woman from Berlin. Brod says that Bloch gave birth to Kafka's son, although Kafka never knew about the child. The boy, whose name is not known, was born in 1914 or 1915 and died in Munich in 1921. However, Kafka's biographer Peter-André Alt claims that, while Bloch had a son, Kafka was not the father as the pair were never intimate. Stach states that Bloch had a son, but the evidence that Kafka was the father is not solid and is, moreover, contradictory. Kafka was diagnosed with tuberculosis in August 1917 and moved for a few months to the Bohemian village of Zürau (Siřem in the Czech language), where his sister Ottla worked on the farm of her brother-in-law Karl Hermann. He felt comfortable there and later described this time as perhaps the best time in his life, probably because he had no responsibilities. He kept diaries and (octavo). From the notes in these books, Kafka extracted 109 numbered pieces of text on "Zettel", single pieces of paper in no given order. They were later published as (The Zürau Aphorisms or Reflections on Sin, Hope, Suffering, and the True Way). 
In 1920 Kafka began an intense relationship with Milena Jesenská, a Czech journalist and writer. His letters to her were later published as . During a vacation in July 1923 to Graal-Müritz on the Baltic Sea, Kafka met Dora Diamant, a 25-year-old kindergarten teacher from an orthodox Jewish family. Kafka, hoping to escape the influence of his family to concentrate on his writing, moved briefly to Berlin and lived with Diamant. She became his lover and caused him to become interested in the Talmud. He worked on four stories, which he prepared to be published as ("A Hunger Artist"). Kafka feared that people would find him mentally and physically repulsive. However, those who met him found him to possess a quiet and cool demeanor, obvious intelligence, and a dry sense of humour; they also found him boyishly handsome, although of austere appearance. Brod compared Kafka to Heinrich von Kleist, noting that both writers had the ability to describe a situation realistically with precise details. Brod thought Kafka was one of the most entertaining people he had met; Kafka enjoyed sharing humour with his friends, but also helped them in difficult situations with good advice. According to Brod, he was a passionate reciter, who was able to phrase his speaking as if it were music. Brod felt that two of Kafka's most distinguishing traits were "absolute truthfulness" () and "precise conscientiousness" (). He explored details, the inconspicuous, in depth and with such love and precision that things surfaced that were unforeseen, seemingly strange, but absolutely true (). Although Kafka showed little interest in exercise as a child, he later showed interest in games and physical activity, as a good rider, swimmer, and rower. On weekends he and his friends embarked on long hikes, often planned by Kafka himself. His other interests included alternative medicine, modern education systems such as Montessori, and technical novelties such as airplanes and film. Writing was important to Kafka; he considered it a "form of prayer". He was highly sensitive to noise and preferred quiet when writing. Pérez-Álvarez has claimed that Kafka may have possessed a schizoid personality disorder. His style, it is claimed, not only in "Die Verwandlung" ("The Metamorphosis"), but in various other writings, appears to show low to medium-level schizoid traits, which explain much of his work. His anguish can be seen in this diary entry from 21 June 1913: and in Zürau Aphorism number 50: Though Kafka never married, he held marriage and children in high esteem. He had several girlfriends. He may have suffered from an eating disorder. Doctor Manfred M. Fichter of the Psychiatric Clinic, University of Munich, presented "evidence for the hypothesis that the writer Franz Kafka had suffered from an atypical anorexia nervosa", and that Kafka was not just lonely and depressed but also "occasionally suicidal". In his 1995 book "Franz Kafka, the Jewish Patient", Sander Gilman investigated "why a Jew might have been considered 'hypochondriac' or 'homosexual' and how Kafka incorporates aspects of these ways of understanding the Jewish male into his own self-image and writing". Kafka considered committing suicide at least once, in late 1912. Prior to World War I, Kafka attended several meetings of the Klub mladých, a Czech anarchist, anti-militarist, and anti-clerical organization. 
Hugo Bergmann, who attended the same elementary and high schools as Kafka, fell out with Kafka during their last academic year (1900–1901) because "[Kafka's] socialism and my Zionism were much too strident". "Franz became a socialist, I became a Zionist in 1898. The synthesis of Zionism and socialism did not yet exist". Bergmann claims that Kafka wore a red carnation to school to show his support for socialism. In one diary entry, Kafka made reference to the influential anarchist philosopher Peter Kropotkin: "Don't forget Kropotkin!" During the communist era, the legacy of Kafka's work for Eastern bloc socialism was hotly debated. Opinions ranged from the notion that he satirised the bureaucratic bungling of a crumbling Austro-Hungarian Empire, to the belief that he embodied the rise of socialism. A further key point was Marx's theory of alienation. While the orthodox position was that Kafka's depictions of alienation were no longer relevant for a society that had supposedly eliminated alienation, a 1963 conference held in Liblice, Czechoslovakia, on the eightieth anniversary of his birth, reassessed the importance of Kafka's portrayal of bureaucracy. Whether or not Kafka was a political writer is still an issue of debate. Kafka grew up in Prague as a German-speaking Jew. He was deeply fascinated by the Jews of Eastern Europe, who he thought possessed an intensity of spiritual life that was absent from Jews in the West. His diary is full of references to Yiddish writers. Yet he was at times alienated from Judaism and Jewish life: "What have I in common with Jews? I have hardly anything in common with myself and should stand very quietly in a corner, content that I can breathe". In his adolescent years, Kafka had declared himself an atheist. Hawes suggests that Kafka, though very aware of his own Jewishness, did not incorporate it into his work, which, according to Hawes, lacks Jewish characters, scenes or themes. In the opinion of literary critic Harold Bloom, although Kafka was uneasy with his Jewish heritage, he was the quintessential Jewish writer. Lothar Kahn is likewise unequivocal: "The presence of Jewishness in Kafka's is no longer subject to doubt". Pavel Eisner, one of Kafka's first translators, interprets ("The Trial") as the embodiment of the "triple dimension of Jewish existence in Prague... his protagonist Josef K. is (symbolically) arrested by a German (Rabensteiner), a Czech (Kullich), and a Jew (Kaminer). He stands for the 'guiltless guilt' that imbues the Jew in the modern world, although there is no evidence that he himself is a Jew". In his essay "Sadness in Palestine?!", Dan Miron explores Kafka's connection to Zionism: "It seems that those who claim that there was such a connection and that Zionism played a central role in his life and literary work, and those who deny the connection altogether or dismiss its importance, are both wrong. The truth lies in some very elusive place between these two simplistic poles". Kafka considered moving to Palestine with Felice Bauer, and later with Dora Diamant. He studied Hebrew while living in Berlin, hiring a friend of Brod's from Palestine, Pua Bat-Tovim, to tutor him and attending Rabbi Julius Grünthal's and Rabbi Julius Guttmann's classes in the Berlin (College for the Study of Judaism). Livia Rothkirchen calls Kafka the "symbolic figure of his era". His contemporaries included numerous Jewish, Czech, and German writers who were sensitive to Jewish, Czech, and German culture. 
According to Rothkirchen, "This situation lent their writings a broad cosmopolitan outlook and a quality of exaltation bordering on transcendental metaphysical contemplation. An illustrious example is Franz Kafka". Towards the end of his life Kafka sent a postcard to his friend Hugo Bergman in Tel Aviv, announcing his intention to emigrate to Palestine. Bergman refused to host Kafka because he had young children and was afraid that Kafka would infect them with tuberculosis. Kafka's laryngeal tuberculosis worsened and in March 1924 he returned from Berlin to Prague, where members of his family, principally his sister Ottla, took care of him. He went to Dr. Hoffmann's sanatorium in Kierling just outside Vienna for treatment on 10 April, and died there on 3 June 1924. The cause of death seemed to be starvation: the condition of Kafka's throat made eating too painful for him, and since parenteral nutrition had not yet been developed, there was no way to feed him. Kafka was editing "A Hunger Artist" on his deathbed, a story whose composition he had begun before his throat closed to the point that he could not take any nourishment. His body was brought back to Prague where he was buried on 11 June 1924, in the New Jewish Cemetery in Prague-Žižkov. Kafka was virtually unknown during his own lifetime, but he did not consider fame important. He rose to fame rapidly after his death, particularly after World War II. The Kafka tombstone was designed by architect Leopold Ehrmann. All of Kafka's published works, except some letters he wrote in Czech to Milena Jesenská, were written in German. What little was published during his lifetime attracted scant public attention. Kafka finished none of his full-length novels and burned around 90 percent of his work, much of it during the period he lived in Berlin with Diamant, who helped him burn the drafts. In his early years as a writer, he was influenced by von Kleist, whose work he described in a letter to Bauer as frightening, and whom he considered closer than his own family. Kafka's earliest published works were eight stories which appeared in 1908 in the first issue of the literary journal "Hyperion" under the title ("Contemplation"). He wrote the story "" ("Description of a Struggle") in 1904; he showed it to Brod in 1905 who advised him to continue writing and convinced him to submit it to "Hyperion". Kafka published a fragment in 1908 and two sections in the spring of 1909, all in Munich. In a creative outburst on the night of 22 September 1912, Kafka wrote the story "Das Urteil" ("The Judgment", literally: "The Verdict") and dedicated it to Felice Bauer. Brod noted the similarity in names of the main character and his fictional fiancée, Georg Bendemann and Frieda Brandenfeld, to Franz Kafka and Felice Bauer. The story is often considered Kafka's breakthrough work. It deals with the troubled relationship of a son and his dominant father, facing a new situation after the son's engagement. Kafka later described writing it as "a complete opening of body and soul", a story that "evolved as a true birth, covered with filth and slime". The story was first published in Leipzig in 1912 and dedicated "to Miss Felice Bauer", and in subsequent editions "for F." In 1912, Kafka wrote "Die Verwandlung" ("The Metamorphosis", or "The Transformation"), published in 1915 in Leipzig. The story begins with a travelling salesman waking to find himself transformed into a , a monstrous vermin, being a general term for unwanted and unclean animals. 
Critics regard the work as one of the seminal works of fiction of the 20th century. The story "In der Strafkolonie" ("In the Penal Colony"), dealing with an elaborate torture and execution device, was written in October 1914, revised in 1918, and published in Leipzig during October 1919. The story "Ein Hungerkünstler" ("A Hunger Artist"), published in the periodical in 1924, describes a victimized protagonist who experiences a decline in the appreciation of his strange craft of starving himself for extended periods. His last story, "Josefine, die Sängerin oder Das Volk der Mäuse" ("Josephine the Singer, or the Mouse Folk"), also deals with the relationship between an artist and his audience. He began his first novel in 1912; its first chapter is the story "Der Heizer" ("The Stoker"). Kafka called the work, which remained unfinished, ("The Man Who Disappeared" or "The Missing Man"), but when Brod published it after Kafka's death he named it "Amerika". The inspiration for the novel was the time spent in the audience of Yiddish theatre the previous year, bringing him to a new awareness of his heritage, which led to the thought that an innate appreciation for one's heritage lives deep within each person. More explicitly humorous and slightly more realistic than most of Kafka's works, the novel shares the motif of an oppressive and intangible system putting the protagonist repeatedly in bizarre situations. It uses many details of experiences of his relatives who had emigrated to America and is the only work for which Kafka considered an optimistic ending. During 1914, Kafka began the novel ("The Trial"), the story of a man arrested and prosecuted by a remote, inaccessible authority, with the nature of his crime revealed neither to him nor to the reader. Kafka did not complete the novel, although he finished the final chapter. According to Nobel Prize winner and Kafka scholar Elias Canetti, Felice is central to the plot of "Der Process" and Kafka said it was "her story". Canetti titled his book on Kafka's letters to Felice "Kafka's Other Trial", in recognition of the relationship between the letters and the novel. Michiko Kakutani notes in a review for "The New York Times" that Kafka's letters have the "earmarks of his fiction: the same nervous attention to minute particulars; the same paranoid awareness of shifting balances of power; the same atmosphere of emotional suffocation—combined, surprisingly enough, with moments of boyish ardor and delight." According to his diary, Kafka was already planning his novel (The Castle), by 11 June 1914; however, he did not begin writing it until 27 January 1922. The protagonist is the (land surveyor) named K., who struggles for unknown reasons to gain access to the mysterious authorities of a castle who govern the village. Kafka's intent was that the castle's authorities notify K. on his deathbed that his "legal claim to live in the village was not valid, yet, taking certain auxiliary circumstances into account, he was to be permitted to live and work there". Dark and at times surreal, the novel is focused on alienation, bureaucracy, the seemingly endless frustrations of man's attempts to stand against the system, and the futile and hopeless pursuit of an unobtainable goal. Hartmut M. Rastalsky noted in his thesis: "Like dreams, his texts combine precise "realistic" detail with absurdity, careful observation and reasoning on the part of the protagonists with inexplicable obliviousness and carelessness." 
Kafka's stories were initially published in literary periodicals. His first eight were printed in 1908 in the first issue of the bi-monthly "Hyperion". Franz Blei published two dialogues in 1909 which became part of "Beschreibung eines Kampfes" ("Description of a Struggle"). A fragment of the story "Die Aeroplane in Brescia" ("The Aeroplanes at Brescia"), written on a trip to Italy with Brod, appeared in the daily "Bohemia" on 28 September 1909. On 27 March 1910, several stories that later became part of the book were published in the Easter edition of "Bohemia". In Leipzig during 1913, Brod and publisher Kurt Wolff included "" ("The Judgment. A Story by Franz Kafka.") in their literary yearbook for the art poetry "Arkadia". In the same year, Wolff published "Der Heizer" ("The Stoker") in the Jüngste Tag series, where it enjoyed three printings. The story "" ("Before the Law") was published in the 1915 New Year's edition of the independent Jewish weekly ; it was reprinted in 1919 as part of the story collection ("A Country Doctor") and became part of the novel . Other stories were published in various publications, including Martin Buber's "Der Jude", the paper , and the periodicals , "Genius", and "Prager Presse". Kafka's first published book, ("Contemplation", or "Meditation"), was a collection of 18 stories written between 1904 and 1912. On a summer trip to Weimar, Brod initiated a meeting between Kafka and Kurt Wolff; Wolff published in the at the end of 1912 (with the year given as 1913). Kafka dedicated it to Brod, "", and added in the personal copy given to his friend "" ("As it is already printed here, for my dearest Max"). Kafka's story "Die Verwandlung" ("The Metamorphosis") was first printed in the October 1915 issue of , a monthly edition of expressionist literature, edited by René Schickele. Another story collection, ("A Country Doctor"), was published by Kurt Wolff in 1919, dedicated to Kafka's father. Kafka prepared a final collection of four stories for print, "(A Hunger Artist)", which appeared in 1924 after his death, in . On 20 April 1924, the published Kafka's essay on Adalbert Stifter. Kafka left his work, both published and unpublished, to his friend and literary executor Max Brod with explicit instructions that it should be destroyed on Kafka's death; Kafka wrote: "Dearest Max, my last request: Everything I leave behind me... in the way of diaries, manuscripts, letters (my own and others'), sketches, and so on, [is] to be burned unread". Brod ignored this request and published the novels and collected works between 1925 and 1935. He took many papers, which remain unpublished, with him in suitcases to Palestine when he fled there in 1939. Kafka's last lover, Dora Diamant (later, Dymant-Lask), also ignored his wishes, secretly keeping 20 notebooks and 35 letters. These were confiscated by the Gestapo in 1933, but scholars continue to search for them. As Brod published the bulk of the writings in his possession, Kafka's work began to attract wider attention and critical acclaim. Brod found it difficult to arrange Kafka's notebooks in chronological order. One problem was that Kafka often began writing in different parts of the book; sometimes in the middle, sometimes working backwards from the end. Brod finished many of Kafka's incomplete works for publication. For example, Kafka left with unnumbered and incomplete chapters and with incomplete sentences and ambiguous content; Brod rearranged chapters, copy edited the text, and changed the punctuation. appeared in 1925 in . 
Kurt Wolff published two other novels, in 1926 and "Amerika" in 1927. In 1931, Brod edited a collection of prose and unpublished stories as "(The Great Wall of China)", including the story of the same name. The book appeared in the . Brod's sets are usually called the "Definitive Editions". In 1961, Malcolm Pasley acquired most of Kafka's original handwritten work for the Oxford Bodleian Library. The text for was later purchased through auction and is stored at the German Literary Archives in Marbach am Neckar, Germany. Subsequently, Pasley headed a team (including Gerhard Neumann, Jost Schillemeit and Jürgen Born) which reconstructed the German novels; republished them. Pasley was the editor for , published in 1982, and , published in 1990. Jost Schillemeit was the editor of () published in 1983. These are called the "Critical Editions" or the "Fischer Editions". When Brod died in 1968, he left Kafka's unpublished papers, which are believed to number in the thousands, to his secretary Esther Hoffe. She released or sold some, but left most to her daughters, Eva and Ruth, who also refused to release the papers. A court battle began in 2008 between the sisters and the National Library of Israel, which claimed these works became the property of the nation of Israel when Brod emigrated to British Palestine in 1939. Esther Hoffe sold the original manuscript of for US$2 million in 1988 to the German Literary Archive Museum of Modern Literature in Marbach am Neckar. Only Eva was still alive as of 2012. A ruling by a Tel Aviv family court in 2010 held that the papers must be released and a few were, including a previously unknown story, but the legal battle continued. The Hoffes claim the papers are their personal property, while the National Library argues they are "cultural assets belonging to the Jewish people". The National Library also suggests that Brod bequeathed the papers to them in his will. The Tel Aviv Family Court ruled in October 2012 that the papers were the property of the National Library. The poet W. H. Auden called Kafka "the Dante of the twentieth century"; the novelist Vladimir Nabokov placed him among the greatest writers of the 20th century. Gabriel García Márquez noted the reading of Kafka's "The Metamorphosis" showed him "that it was possible to write in a different way". A prominent theme of Kafka's work, first established in the short story "Das Urteil", is father–son conflict: the guilt induced in the son is resolved through suffering and atonement. Other prominent themes and archetypes include alienation, physical and psychological brutality, characters on a terrifying quest, and mystical transformation. Kafka's style has been compared to that of Kleist as early as 1916, in a review of "Die Verwandlung" and "Der Heizer" by Oscar Walzel in "Berliner Beiträge". The nature of Kafka's prose allows for varied interpretations and critics have placed his writing into a variety of literary schools. Marxists, for example, have sharply disagreed over how to interpret Kafka's works. Some accused him of distorting reality whereas others claimed he was critiquing capitalism. The hopelessness and absurdity common to his works are seen as emblematic of existentialism. Some of Kafka's books are influenced by the expressionist movement, though the majority of his literary output was associated with the experimental modernist genre. Kafka also touches on the theme of human conflict with bureaucracy. 
William Burroughs claims that such work is centred on the concepts of struggle, pain, solitude, and the need for relationships. Others, such as Thomas Mann, see Kafka's work as allegorical: a quest, metaphysical in nature, for God. According to Gilles Deleuze and Félix Guattari, the themes of alienation and persecution, although present in Kafka's work, have been over-emphasised by critics. They argue Kafka's work is more deliberate and subversive—and more joyful—than may first appear. They point out that reading the Kafka work while focusing on the futility of his characters' struggles reveals Kafka's play of humour; he is not necessarily commenting on his own problems, but rather pointing out how people tend to invent problems. In his work, Kafka often created malevolent, absurd worlds. Kafka read drafts of his works to his friends, typically concentrating on his humorous prose. The writer Milan Kundera suggests that Kafka's surrealist humour may have been an inversion of Dostoyevsky's presentation of characters who are punished for a crime. In Kafka's work a character is punished although a crime has not been committed. Kundera believes that Kafka's inspirations for his characteristic situations came both from growing up in a patriarchal family and living in a totalitarian state. Attempts have been made to identify the influence of Kafka's legal background and the role of law in his fiction. Most interpretations identify aspects of law and legality as important in his work, in which the legal system is often oppressive. The law in Kafka's works, rather than being representative of any particular legal or political entity, is usually interpreted to represent a collection of anonymous, incomprehensible forces. These are hidden from the individual but control the lives of the people, who are innocent victims of systems beyond their control. Critics who support this absurdist interpretation cite instances where Kafka describes himself in conflict with an absurd universe, such as the following entry from his diary: However, James Hawes argues many of Kafka's descriptions of the legal proceedings in —metaphysical, absurd, bewildering and nightmarish as they might appear—are based on accurate and informed descriptions of German and Austrian criminal proceedings of the time, which were inquisitorial rather than adversarial. Although he worked in insurance, as a trained lawyer Kafka was "keenly aware of the legal debates of his day". In an early 21st-century publication that uses Kafka's office writings as its point of departure, Pothik Ghosh states that with Kafka, law "has no meaning outside its fact of being a pure force of domination and determination". The earliest English translations of Kafka's works were by Edwin and Willa Muir, who in 1930 translated the first German edition of . This was published as "The Castle" by Secker & Warburg in England and Alfred A. Knopf in the United States. A 1941 edition, including a homage by Thomas Mann, spurred a surge in Kafka's popularity in the United States during the late 1940s. The Muirs translated all shorter works that Kafka had seen fit to print; they were published by Schocken Books in 1948 as "", including additionally "The First Long Train Journey", written by Kafka and Brod, Kafka's "A Novel about Youth", a review of Felix Sternheim's "Die Geschichte des jungen Oswald", his essay on Kleist's "Anecdotes", his review of the literary magazine "Hyperion", and an epilogue by Brod. Later editions, notably those of 1954 ("Dearest Father. 
Stories and Other Writings"), included text, translated by Eithne Wilkins and Ernst Kaiser, which had been deleted by earlier publishers. Known as "Definitive Editions", they include translations of "The Trial, Definitive", "The Castle, Definitive", and other writings. These translations are generally accepted to have a number of biases and are considered to be dated in interpretation. Published in 1961 by Schocken Books, "Parables and Paradoxes" presented, in a bilingual edition by Nahum N. Glatzer, selected writings drawn from notebooks, diaries, letters, short fictional works and the novel "Der Process". New translations were completed and published based on the recompiled German text of Pasley and Schillemeit: "The Castle, Critical" by Mark Harman (Schocken Books, 1998), "The Trial, Critical" by Breon Mitchell (Schocken Books, 1998), and "Amerika: The Man Who Disappeared" by Michael Hofmann (New Directions Publishing, 2004). Kafka often made extensive use of a characteristic particular to the German language which permits long sentences that sometimes can span an entire page. Kafka's sentences then deliver an unexpected impact just before the full stop—this being the finalizing meaning and focus. This is due to the construction of subordinate clauses in German which require that the verb be positioned at the end of the sentence. Such constructions are difficult to duplicate in English, so it is up to the translator to provide the reader with the same (or at least equivalent) effect found in the original text. German's more flexible word order and syntactical differences provide for multiple ways in which the same German writing can be translated into English. An example is the first sentence of Kafka's "The Metamorphosis", which is crucial to the setting and understanding of the entire story: Another difficult problem facing translators is how to deal with the author's intentional use of ambiguous idioms and words that have several meanings which results in phrasing that is difficult to translate precisely. One such instance is found in the first sentence of "The Metamorphosis". English translators often render the word as "insect"; in Middle German, however, literally means "an animal unclean for sacrifice"; in today's German it means vermin. It is sometimes used colloquially to mean "bug"—a very general term, unlike the scientific "insect". Kafka had no intention of labeling Gregor, the protagonist of the story, as any specific thing, but instead wanted to convey Gregor's disgust at his transformation. Another example is Kafka's use of the German noun in the final sentence of "Das Urteil". Literally, means intercourse and, as in English, can have either a sexual or non-sexual meaning; in addition, it is used to mean transport or traffic. The sentence can be translated as: "At that moment an unending stream of traffic crossed over the bridge". The double meaning of "Verkehr" is given added weight by Kafka's confession to Brod that when he wrote that final line, he was thinking of "a violent ejaculation". Unlike many famous writers, Kafka is rarely quoted by others. Instead, he is noted more for his visions and perspective. Shimon Sandbank, a professor, literary critic, and writer, identifies Kafka as having influenced Jorge Luis Borges, Albert Camus, Eugène Ionesco, J. M. Coetzee and Jean-Paul Sartre. A "Financial Times" literary critic credits Kafka with influencing José Saramago, and Al Silverman, a writer and editor, states that J. D. Salinger loved to read Kafka's works. 
In 1999 a committee of 99 authors, scholars, and literary critics ranked and the second and ninth most significant German-language novels of the 20th century. Sandbank argues that despite Kafka's pervasiveness, his enigmatic style has yet to be emulated. Neil Christian Pages, a professor of German Studies and Comparative Literature at Binghamton University who specialises in Kafka's works, says Kafka's influence transcends literature and literary scholarship; it impacts visual arts, music, and popular culture. Harry Steinhauer, a professor of German and Jewish literature, says that Kafka "has made a more powerful impact on literate society than any other writer of the twentieth century". Brod said that the 20th century would one day be known as the "century of Kafka". Michel-André Bossy writes that Kafka created a rigidly inflexible and sterile bureaucratic universe. Kafka wrote in an aloof manner full of legal and scientific terms. Yet his serious universe also had insightful humour, all highlighting the "irrationality at the roots of a supposedly rational world". His characters are trapped, confused, full of guilt, frustrated, and lacking understanding of their surreal world. Much of the post-Kafka fiction, especially science fiction, follows the themes and precepts of Kafka's universe. This can be seen in the works of authors such as George Orwell and Ray Bradbury. The following are examples of works across a range of literary, musical, and dramatic genres which demonstrate the extent of cultural influence: The term "Kafkaesque" is used to describe concepts and situations reminiscent of his work, particularly ("The Trial") and "Die Verwandlung" ("The Metamorphosis"). Examples include instances in which bureaucracies overpower people, often in a surreal, nightmarish milieu which evokes feelings of senselessness, disorientation, and helplessness. Characters in a Kafkaesque setting often lack a clear course of action to escape a labyrinthine situation. Kafkaesque elements often appear in existential works, but the term has transcended the literary realm to apply to real-life occurrences and situations that are incomprehensibly complex, bizarre, or illogical. Numerous films and television works have been described as Kafkaesque, and the style is particularly prominent in dystopian science fiction. Works in this genre that have been thus described include Patrick Bokanowski's 1982 film "The Angel", Terry Gilliam's 1985 film "Brazil", and the 1998 science fiction film noir, "Dark City". Films from other genres which have been similarly described include "The Tenant" (1976) and "Barton Fink" (1991). The television series "The Prisoner" and "The Twilight Zone" are also frequently described as Kafkaesque. However, with common usage, the term has become so ubiquitous that Kafka scholars note that it is often misused. More accurately, then, according to author Ben Marcus, paraphrased in "What it Means to be Kafkaesque" by Joe Fassler in "The Atlantic", "Kafka's quintessential qualities are affecting use of language, a setting that straddles fantasy and reality, and a sense of striving even in the face of bleakness—hopelessly and full of hope." The Franz Kafka Museum in Prague is dedicated to Kafka and his work. A major component of the museum is an exhibit "The City of K. Franz Kafka and Prague", which was first shown in Barcelona in 1999, moved to the Jewish Museum in New York City, and was finally established in 2005 in Prague in Malá Strana (Lesser Town), along the Moldau. 
The museum calls its display of original photos and documents "Město K. Franz Kafka a Praha" (City K. Kafka and Prague) and aims to immerse the visitor into the world in which Kafka lived and about which he wrote. The Franz Kafka Prize is an annual literary award of the Franz Kafka Society and the City of Prague, established in 2001. It recognizes the merits of literature as "humanistic character and contribution to cultural, national, language and religious tolerance, its existential, timeless character, its generally human validity, and its ability to hand over a testimony about our times". The selection committee and recipients come from all over the world, but are limited to living authors who have had at least one work published in the Czech language. The recipient receives $10,000, a diploma, and a bronze statuette at a presentation in Prague's Old Town Hall on the Czech State Holiday in late October. San Diego State University (SDSU) operates the Kafka Project, which began in 1998 as the official international search for Kafka's last writings. Final Fantasy: The Spirits Within Final Fantasy: The Spirits Within is a 2001 American computer-animated science fiction film directed by Hironobu Sakaguchi, creator of the "Final Fantasy" series of role-playing video games. It was the first photorealistic computer-animated feature film and remains the most expensive video game-inspired film of all time. It features the voices of Ming-Na Wen, Alec Baldwin, Donald Sutherland, James Woods, Ving Rhames, Peri Gilpin and Steve Buscemi. "The Spirits Within" follows scientists Aki Ross and Doctor Sid in their efforts to free a post-apocalyptic Earth from a mysterious and deadly alien race known as the Phantoms, which has driven the remnants of humanity into "barrier cities". Aki and Sid must fight against General Hein, who wishes to use more violent means to end the conflict. Square Pictures rendered the film using some of the most advanced processing capabilities available for film animation at the time. A render farm consisting of 960 workstations was tasked with rendering each of the film's 141,964 frames. It took a staff of 200 about four years to complete "The Spirits Within". Square intended to make the character of Aki Ross into the world's first photorealistic computer-animated actress, with plans for appearances in multiple films in different roles. "The Spirits Within" debuted to mixed critical reception, but was widely praised for the realism of the computer-animated characters. Due to rising costs, the film greatly exceeded its original budget towards the end of production, reaching a final cost of $137 million, of which it recovered only $85 million at the box office. The film has been called a box office bomb and is blamed for the demise of Square Pictures. In 2065, Earth is infested by alien life forms known as Phantoms. Through physical contact, Phantoms consume the Gaia spirit of living beings, killing them instantly, though a minor contact may only result in an infection. The surviving humans live in "barrier cities", areas protected by an energy shield that prevents Phantoms from entering, and are engaged in an ongoing struggle to free the planet. After Aki Ross (Ming-Na) is infected by a Phantom during one of her experiments, she and her mentor, Doctor Sid (Donald Sutherland), uncover a means of defeating the Phantoms by gathering eight spirit signatures that, when joined, can negate the Phantoms. 
Aki is searching for the sixth spirit in the ruins of New York City when she is cornered by Phantoms but is rescued by Gray Edwards (Alec Baldwin) and his squad "Deep Eyes", consisting of Ryan Whittaker (Ving Rhames), Neil Fleming (Steve Buscemi) and Jane Proudfoot (Peri Gilpin). It is revealed that Gray was once romantically involved with Aki. Upon returning to her barrier city, Aki joins Sid and appears before the leadership council along with General Hein (James Woods), who is determined to use the powerful Zeus space cannon to destroy the Phantoms. Aki is concerned the cannon will damage Earth's Gaia (a spirit representing its ecosystem) and delays the use of it by revealing that she has been infected and the collected spirit signatures are keeping her infection stable, convincing the council that there may be another way to defeat the Phantoms. However, this revelation leads Hein to incorrectly conclude that she is being controlled by the Phantoms. Aki and the Deep Eyes squad succeed in finding the seventh spirit as Aki's infection begins to worsen and she slips into unconsciousness. Her dream reveals to her that the Phantoms are the spirits of dead aliens brought to Earth on a fragment of their destroyed planet. Sid uses the seventh spirit to bring Aki's infection back under control, reviving her. To scare the council into giving him clearance to fire the Zeus cannon, Hein lowers part of the barrier shield protecting the city. Though Hein intended that only a few Phantoms enter, his plan goes awry and legions of Phantoms invade the entire city. Aki, Sid and the Deep Eyes attempt to reach Aki's spaceship, their means of escape, but Ryan, Neil and Jane are killed by Phantoms. Hein escapes and boards the Zeus space-station where he finally receives authorisation to fire the cannon. Sid finds the eighth spirit at the crater site of the alien asteroid's impact on Earth. He lowers a shielded vehicle, with Aki and Gray aboard, into the crater to locate the final spirit. Just before they can reach it, Hein fires the Zeus cannon into the crater, not only destroying the eighth spirit but also revealing the Phantom Gaia. Aki has a vision of the Phantom home planet, where she is able to receive the eighth spirit from the alien particles in herself. When Aki awakens, she and Gray combine it with the other seven. Hein continues to fire the Zeus cannon despite overheating warnings and unintentionally destroys the cannon and himself. Gray sacrifices himself as a medium needed to physically transmit the completed spirit into the alien Gaia. The Earth's Gaia is returned to normal as the Phantoms ascend into space, finally at peace. Aki is pulled from the crater holding Gray's body, and is seen looking into the newly liberated world. Aki Ross's voice actor, Ming-Na Wen, was selected for a perceived fit between her personality and Aki's. Ming-Na, who found the role via her publicist, said she felt like she had given birth with her voice to the character. She gradually accustomed herself to the difficulty of working without the presence and spontaneity of real actors, and commented that the voice-acting work did not take much time, as she would just go into the studio "once or twice a month for about four months" with no need for make-up and costuming sessions. The workload was so light it did not interfere with her acting commitments in the television series "ER". Square accumulated four SGI Origin 2000 series servers, four Onyx2 systems, and 167 Octane workstations for the film's production. 
The basic film was rendered at a custom render farm created by Square in Hawaii. It housed 960 Pentium III-933 MHz workstations. Character movements were filmed using motion capture technology. Animator Matthew Hackett stated that while motion capture was effective for many of the scenes, in others animators still had to add movements manually. Hand and facial movements were all done manually. Some of General Hein's facial features and poses were based on Hackett. As animators did not want to use any actual photographs in the film, all backgrounds were done using matte paintings. 1,327 scenes in total needed to be filmed to animate the digital characters. The film consists of 141,964 frames, with each frame taking an average of 90 minutes to render. By the end of production Square had a total of 15 terabytes of artwork for the film. It is estimated that over the film's four-year production, approximately 200 people put in a combined 120 years of work on it. From early on, it had been decided that "The Spirits Within" would be filmed entirely in English. The original script, written by Sakaguchi, was titled "Gaia". The screenplay was written by Al Reinert and Jeff Vintar. The film was co-directed by Motonori Sakakibara, with Jun Aida and Chris Lee both serving as producers. Lee compared "The Spirits Within", the first full-length photorealistic animated film, to Walt Disney's "Snow White and the Seven Dwarfs", the first full-length cel animated film. In order to keep the film in line with Hironobu Sakaguchi's vision as director, several script rewrites took place, most in the initial stages of production. Sakaguchi stated he was pleased with the film's final cut, saying he would not have changed anything if given the chance. The film had high cost overruns towards the end of filming. New funds had to be sourced to cover the increasing production costs while maintaining staff salaries. The film's final cost of $137 million, which included about $30 million spent on marketing by the film's distributor Columbia Pictures, escalated from an original budget rumored to be around $70 million. $45 million alone was spent on the construction of Square's studio in Hawaii. It was Columbia Pictures' first animated feature film since "" in 1986. Director Sakaguchi named the main character after his mother, Aki, who died in an accident several years prior to the production of the film. Her death led Sakaguchi to reflect on what happened to the spirit after death, and these thoughts resurfaced while he was planning the film, eventually taking the form of the Gaia hypothesis. He later explained that the theme he wanted to convey was "more of a complex idea of life and death and spirit", believing that the best way to portray this would be to set the film on Earth. By comparison, "Final Fantasy" video games are set in fictional worlds. Dan Mayers from "Sight & Sound" stated the film followed the same theme typically found in "Final Fantasy" video games: "A party of heroes averts impending global holocaust by drawing on their individual skills, gaining knowledge through challenges and emerging victorious with new-found love and respect for themselves and their companions." 
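To put those rendering figures in perspective, a rough back-of-the-envelope calculation can be made from the numbers quoted above; the sketch below assumes round-the-clock utilisation of the farm and a single render pass per frame, neither of which is stated in the source.

```python
# Back-of-the-envelope check of the rendering workload quoted above.
# Assumptions (ours, not the source's): the farm ran around the clock
# and every frame was rendered exactly once, with no re-renders.

FRAMES = 141_964          # total frames in the film
MINUTES_PER_FRAME = 90    # average render time per frame
WORKSTATIONS = 960        # machines in the Hawaii render farm

total_machine_hours = FRAMES * MINUTES_PER_FRAME / 60
ideal_wall_clock_days = total_machine_hours / WORKSTATIONS / 24

print(f"Total render time: {total_machine_hours:,.0f} machine-hours")
print(f"Ideal wall-clock time on {WORKSTATIONS} machines: {ideal_wall_clock_days:.1f} days")
```

Under those idealised assumptions the farm could work through the full frame count in under two weeks of wall-clock time, which suggests that most of the four-year schedule went into iteration and re-rendering rather than a single final pass.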
Writing in the book "Robot Ghosts and Wired Dreams", Livia Monnet stated the film remediated "the notion of life in the neovitalistic, evolutionary biology of Lynn Margulis and in contemporary theories on artificial life", going on to state that the film's exploration of the Gaia hypothesis raised interesting questions regarding the life and death process of both cinema and digital media, as well as contemporary life sciences, cybernetics, philosophy and science fiction. The concept of artificial life and resurrection was also discussed, and compared to similar themes in the 1914 book "Locus Solus"; the Phantoms in "The Spirits Within" were considered to be brought to life by various forces: by the alien planet's red Gaia and then by human spiritual energy. Each character's base body model was built from more than 100,000 polygons, plus more than 300,000 for clothing alone. Aki's character model bears 60,000 hairs, each of which were separately and fully animated and rendered. In creating the characters, designers had to transition between using PowerAnimator, Autodesk Maya and RenderMan. Aki's appearance was conceived by the lead animator of the project, Roy Sato, who created several conceptual designs for Sakaguchi to consider, and then used the selected design as a guide for her character model. Sato perceived Aki's original look as a "supermodel", and subsequently removed her make-up and shortened her hair in order to give her a more intelligent look that would "convince people that she's a scientist." In an interview, Sato described actively trying to make her appear as realistic as possible, making her similar to himself in as many ways as he could in the animation, including elements of his personality through facial expressions. He concluded that Aki ended up being similar to him in almost every way, with the exception that "she's a lot cuter". The model for Aki was designed to closely follow human appearance, with Sakaguchi commenting in an interview "I think it's OK to look at Aki and be convinced that she's a human." While Square ruled out any chance of a sequel to "The Spirits Within" before it was even completed, Sakaguchi intended to position Aki as being the "main star" for Square Pictures, using her in later games and films by Square, and including the flexibility of being able to modify aspects such as her age for such appearances. Ming-Na stated that she would be willing if asked to continue voicing Aki. Aki only made one appearance outside of the film; in 2002 she appeared in a demonstration video that Square Pictures made to present to The Wachowskis before developing "Final Flight of the Osiris" for "The Animatrix". The short film, appearing in the DVD's bonus content and featuring her with a slightly modified design, shows her acrobatically dueling a robot from the "Matrix" setting. Shortly afterward, Square Pictures was closed and absorbed into Square Co. and the company ceased use of the character. While the near lifelike appearance of the characters in the film was well received, some commentators felt the character renderings fell into the trap that many robotics scientists refer to as the "uncanny valley". This concept describes when a robot or animated character becomes very realistic, but subtly different enough from reality to feel "creepy". John Mangan from "The Age" cited the film as an example of this phenomenon. The soundtrack to the film was released on July 3, 2001 by Sony Music. 
Elliot Goldenthal composed the entire score, as well as the film's theme song, "The Dream Within", which had lyrics written by Richard Rudolf and vocals performed by Lara Fabian. Director Hironobu Sakaguchi opted for the acclaimed Goldenthal instead of Nobuo Uematsu, the composer of the "Final Fantasy" games' soundtracks, a decision met with mixed opinions as the former was completely unknown to many of the games' fans. The last song on the album and the second and final song to play during the film's credits (after "The Dream Within") is "Spirit Dreams Inside" by Japanese rock band L'Arc-en-Ciel. The film's score was performed by the London Symphony Orchestra with Belgian composer Dirk Brossé conducting. It was recorded in the United Kingdom at the Watford Coloseum and the London AIR Lyndhurst Hall and was mixed at the Manhattan Center Studios in the United States. In the liner notes to the album, Goldenthal describes the soundtrack as combining "orchestration techniques associated with the late 20th-century Polish avant-garde, as well as my own experiments from "Alien 3", and 19th-century Straussian brass and string instrumentation." In the film's 'Making of' featurette, Goldenthal states he used "ghostly choral" music when the Phantoms are emerging, in an attempt to give a celestial feeling, and focused on low brass clusters and taiko drum rhythms for violent scenes. When Aki talks about a dying girl, Goldenthal used a piano in order to give a domestic home-like feeling to a completely foreign environment, also choosing to use a flute each time Aki focusses on Gaia, as he believed it to be the most "human kind of instrument". The album was met with positive reviews. Neil Shurley from AllMusic, who gave the album 4 out of 5, stated the album would probably have been nominated for an Oscar if the film itself had been more popular, as did the reviewer from Soundtrack Express, who gave the soundtrack 5 out of 5. Christopher Coleman from Tracksounds gave the soundtrack 10 out of 10, stating the feel of the album was "expansive and majestic" and that the score elevated the viewing experience of the film. A review from "Filmtracks" gave the album 4 out of 5, calling it "an easy album to recommend", adding "parts of it will blow you out of your seat." Dan Goldwasser from Soundtrack.net also gave the soundtrack 4 out of 5, calling it a "must have". The album peaked at No. 19 on "Billboard"'s Top Soundtracks list and No. 193 on the "Billboard" 200 on July 28, 2001. The track "The Dream Within" was nominated for "Best Original Song Written for a Film" at the 2002 World Soundtrack Awards, but lost to "If I Didn't Have You" which was composed for "Monsters, Inc.". Before the film's release, there was already skepticism of its potential to be financially successful. Chris Taylor from "Time" magazine noted that video game adaptations had a poor track record at the box office and that it was Sakaguchi's first feature film. The film debuted on July 2, 2001 at the Mann Bruins Theater in Los Angeles, California, and was released in the United States on July 11, making $32 million in North America and going on to gross $85 million in worldwide box office receipts. The film achieved average to poor results at the box office in most of Southeast Asia; however, it performed well in Australia, New Zealand and South Korea. In 2006 Boston.com regarded it as the 4th biggest box office bomb, estimating the film's losses at the end of its cinema run at over $94 million. 
In March 2012 CNBC considered it to be the 9th biggest box office bomb, though "Time"s list of the ten biggest box office failures, which was released on the same day, did not include the film. The review aggregator website Rotten Tomatoes reported an approval rating of 45% based on 146 reviews, with an average rating of 5.3/10. The website's critical consensus reads, "The movie raises the bar for computer animated movies, but the story is dull and emotionally removed." Metacritic, which uses a weighted average, assigned a score of 49 out of 100 based on 28 critics, indicating "mixed or average reviews". Roger Ebert was a strong advocate of the film; he gave it 3½ stars out of four, praising it as a "technical milestone" while conceding that its "nuts and bolts" story lacked "the intelligence and daring of, say, Steven Spielberg's "A.I."" He noted that while he did not once feel convinced Aki Ross was an actual human being, she was "lifelike", stating her creators "dare us to admire their craft. If Aki is not as real as a human actress, she's about as human as a Playmate who has been retouched to glossy perfection." He also expressed a desire for the film to succeed in hopes of seeing more films made in its image, though he was skeptical of its ability to be accepted. Peter Bradshaw gave a more negative review, stating that while the animation was brilliant, the "solemnly realist human faces look shriekingly phoney precisely because they're almost there but not quite", concluding "The story is adequate, if familiar, but after half an hour relapses into cliche." Aki's appearance was received positively by critics, with praise for the finer details of the character model such as the rendering of her hair. "Entertainment Weekly" named Aki an "it girl", stating that "Calling this action heroine a cartoon would be like calling a Rembrandt a doodle." She was voted one of the sexiest women of 2001 by "Maxim" and its readers, ranking at No. 87 out of 100, becoming the first fictional woman to ever make the list, additionally appearing on the issue's cover in a purple bikini. The same image appeared in the "Babes: The Girls of Sci Fi" special issue of "SFX". Ruth La Ferla from "The New York Times" described her as having the "sinewy efficiency" of "Alien" franchise character Ellen Ripley and visual appeal of Julia Roberts' portrayal of Erin Brockovich. The book "Digital Shock: Confronting the New Reality" by Herve Fischer described her as a virtual actress having a "beauty that is 'really' impressive", comparing her to video game character Lara Croft. In contrast, Livia Monnet criticized her character as an example of the constantly kidnapped female in Japanese cinema, further "diluted" by her existence solely as a computer-generated character representing "an ideal, cinematic female character that has no real referent." Writing in the book "Action and Adventure Cinema", Marc O'Day described her as among the "least overtly eroticised" female characters in science fiction, though stated that Aki was "transformed in a variety of poses into an erotic fantasy machine" in a photo shoot that was included on the DVD's special features. The merger between Square and Enix, which had been under consideration since at least 2000 according to Yasuhiro Fukushima, Enix chairman at the time, was delayed because of the failure of the film and Enix' hesitation at merging with a company that had just lost a substantial amount of money. 
Square Pictures announced in late January 2002 that they were closing down, largely due to the commercial failure of "The Spirits Within". The film's CGI effects have been compared favourably with later CGI films such as James Cameron's 2009 film "Avatar". In 2011, BioWare art director Derek Watts cited "The Spirits Within" as a major influence on the successful "Mass Effect" series of action role-playing games. In the first episode of the Square Enix-published 2015 video game "Life Is Strange", when the lead interacts with a TV, she mentions the idea of watching the film, and says "I don't care what anybody says, that's one of the best sci-fi films ever made." Although the film was loosely based on a video game series, there were never any plans for a game adaptation of the film itself. Sakaguchi indicated the reason for this was the lack of powerful gaming hardware at the time, feeling the graphics in any game adaptation would be far too much of a step down from the graphics in the film itself. A novelization was written by Dean Wesley Smith and published by "Pocket Books" in June 2001. "The Making of Final Fantasy: The Spirits Within", a companion book, was published by "BradyGames" in August 2001. Edited by Steven L. Kent, the 240-page color book contains a foreword by director Sakaguchi and extensive information on all aspects of the film's creation, including concept art, storyboards, sets and props, layout, motion capture and animation, as well as a draft of the full script. The film won the "Jury Prize" at the 2002 Japan Media Arts Festival. It was nominated for "Best Sound Editing – Animated Feature Film, Domestic and Foreign" at the 49th Golden Reel Awards as well as "Best Animated Feature" at the 5th Online Film Critics Society awards. The film's trailer was nominated for the "Golden Fleece" award at the 3rd Golden Trailer Awards. The DVD version of the film was released on 23 October 2001, with the Blu-ray edition released on 7 August 2007. Two weeks before its release, the DVD version was listed on Amazon.com as one of the most-anticipated releases, and it was expected to recoup some of the money lost on the film's disappointing box office performance. Both versions contained two full-length commentary tracks (one featuring Motonori Sakakibara, sequence supervisor Hiroyuki Hayashida, lead artist Tatsuro Maruyama, and creature supervisor Takoo Noguchi; the second featuring animation director Andy Jones, editor Chris S. Capp, and staging director Tani Kunitake) as well as an isolated score with commentary. They also contained a version of the film in its basic CGI and sketch form, with the option of pop-up comments on the film. An Easter egg shows the cast of the film re-enacting the dance from "Michael Jackson's Thriller". Fifteen featurettes, including seven on character biographies, three on vehicle comparisons and an interactive "Making Of" featurette, were also included. Other features included Aki's dream viewable as a whole sequence, the film's original opening sequence, and intentional outtakes. Peter Bracke from "High-Def Digest" stated the DVD was "so packed with extras it was almost overwhelming", stating that Sony went "all-out" on the extra features in a likely attempt to boost DVD sales and recover losses. As of December 13, 2001, the film had earned video rental revenue in the United States equivalent to 83.4% of its box office gross in the country. The DVD was nominated for "Best DVD Special Edition Release" at the 28th Saturn Awards. 
Aaron Beierle from "DVD Talk" gave a positive review of the DVD, rating it 4½ out of 5 stars for audio quality, video quality and special features. Dustin Somner from "Blu-ray.com" gave the Blu-ray version 5 out of 5 stars for video quality and special features, and 4½ stars for audio quality. Peter Bracke gave the Blu-ray version 4 out of 5 stars overall. Bibliography Premier League The Premier League is the top level of the English football league system. Contested by 20 clubs, it operates on a system of promotion and relegation with the English Football League (EFL). The Premier League is a corporation in which the member clubs act as shareholders. Seasons run from August to May with each team playing 38 matches (playing each other home and away). Most games are played on Saturday and Sunday afternoons. It is often known outside England as the English Premier League (EPL). The competition was formed as the FA Premier League on 20 February 1992 following the decision of clubs in the Football League First Division to break away from the Football League, founded in 1888, and take advantage of a lucrative television rights deal. The deal was worth £1 billion a year domestically as of 2013–14, with BSkyB and BT Group securing the domestic rights to broadcast 116 and 38 games respectively. The league generates €2.2 billion per year in domestic and international television rights. In 2014–15, teams were apportioned revenues of £1.6 billion, rising sharply to £2.4 billion in 2016–17. The Premier League is the most-watched sports league in the world, broadcast in 212 territories to 643 million homes and a potential TV audience of 4.7 billion people. In the 2014–15 season, the average Premier League match attendance exceeded 36,000, second highest of any professional football league behind the Bundesliga's 43,500. Most stadium occupancies are near capacity. The Premier League ranks second in the UEFA coefficients of leagues based on performances in European competitions over the past five seasons, as of 2018. Forty-nine clubs have competed since the inception of the Premier League in 1992. Six of them have won the title: Manchester United (13), Chelsea (5), Arsenal (3), Manchester City (3), Blackburn Rovers (1), and Leicester City (1). Following the 2003–04 season, Arsenal acquired the nickname "The Invincibles" as they became, and still remain, the only club to complete a Premier League campaign without losing a single game. The record of most points in a season is 100 by Manchester City in 2017–18. Despite significant European success in the 1970s and early 1980s, the late 1980s marked a low point for English football. Stadiums were crumbling, supporters endured poor facilities, hooliganism was rife, and English clubs were banned from European competition for five years following the Heysel Stadium disaster in 1985. The Football League First Division, the top level of English football since 1888, was behind leagues such as Italy's Serie A and Spain's La Liga in attendances and revenues, and several top English players had moved abroad. 
By the turn of the 1990s the downward trend was starting to reverse: at the 1990 FIFA World Cup, England reached the semi-finals; UEFA, European football's governing body, lifted the five-year ban on English clubs playing in European competitions in 1990, which resulted in Manchester United lifting the UEFA Cup Winners' Cup in 1991; and the Taylor Report on stadium safety standards, which proposed expensive upgrades to create all-seater stadiums in the aftermath of the Hillsborough disaster, was published in January 1990. The 1980s also saw the major English clubs, led by the likes of Martin Edwards of Manchester United, Irving Scholar of Tottenham Hotspur and David Dein of Arsenal, beginning to be transformed into business ventures that applied commercial principles to the running of the clubs, which led to the increasing power of the elite clubs. By threatening to break away, the top clubs from Division One managed to increase their voting power, and took a 50% share of all television and sponsorship income in 1986. Revenue from television also became more important: the Football League received £6.3 million for a two-year agreement in 1986, but by 1988, in a deal agreed with ITV, the price rose to £44 million over four years with the leading clubs taking 75% of the cash. The 1988 negotiations were conducted under the threat of ten clubs leaving to form a "super league", but the clubs were eventually persuaded to stay, with the top clubs taking the lion's share of the deal. As stadiums improved and match attendance and revenues rose, the country's top teams again considered leaving the Football League in order to capitalise on the influx of money into the sport. In 1990, the managing director of London Weekend Television (LWT), Greg Dyke, met with the representatives of the "big five" football clubs in England (Manchester United, Liverpool, Tottenham, Everton and Arsenal) over a dinner. The meeting was to pave the way for a breakaway from The Football League. Dyke believed that it would be more lucrative for LWT if only the larger clubs in the country were featured on national television and wanted to establish whether the clubs would be interested in a larger share of television rights money. The five clubs agreed it was a good idea and decided to press ahead with it; however, the league would have no credibility without the backing of The Football Association, and so David Dein of Arsenal held talks to see whether the FA were receptive to the idea. The FA did not enjoy an amicable relationship with the Football League at the time and saw it as a way to weaken the Football League's position. At the close of the 1991 season, a proposal was tabled for the establishment of a new league that would bring more money into the game overall. The Founder Members Agreement, signed on 17 July 1991 by the game's top-flight clubs, established the basic principles for setting up the FA Premier League. The newly formed top division would have commercial independence from The Football Association and the Football League, giving the FA Premier League licence to negotiate its own broadcast and sponsorship agreements. The argument given at the time was that the extra income would allow English clubs to compete with teams across Europe. Although Dyke played a significant role in the creation of the Premier League, Dyke and ITV would lose out in the bidding for broadcast rights as BSkyB won with a bid of £304 million over five years, with the BBC awarded the highlights package broadcast on "Match of the Day". 
In 1992, the First Division clubs resigned from the Football League "en masse" and on 27 May 1992 the FA Premier League was formed as a limited company working out of an office at the Football Association's then headquarters in Lancaster Gate. This meant a break-up of the 104-year-old Football League that had operated until then with four divisions; the Premier League would operate with a single division and the Football League with three. There was no change in competition format; the same number of teams competed in the top flight, and promotion and relegation between the Premier League and the new First Division remained the same as the old First and Second Divisions with three teams relegated from the league and three promoted. The league held its first season in 1992–93. It was composed of 22 clubs for that season. The first Premier League goal was scored by Brian Deane of Sheffield United in a 2–1 win against Manchester United. The 22 inaugural members of the new Premier League were Arsenal, Aston Villa, Blackburn Rovers, Chelsea, Coventry City, Crystal Palace, Everton, Ipswich Town, Leeds United, Liverpool, Manchester City, Manchester United, Middlesbrough, Norwich City, Nottingham Forest, Oldham Athletic, Queens Park Rangers, Sheffield United, Sheffield Wednesday, Southampton, Tottenham Hotspur, and Wimbledon. Luton Town, Notts County, and West Ham United were the three teams relegated from the old first division at the end of the 1991–92 season, and did not take part in the inaugural Premier League season. One significant feature of the Premier League in the mid-2000s was the dominance of the so-called "Top Four" clubs: Arsenal, Chelsea, Liverpool and Manchester United. During this decade, they dominated the top four spots, which came with UEFA Champions League qualification, taking all top-four places in 5 out of 6 seasons from 2003–04 to 2008–09 inclusive, while every season during the 2000s saw the "Big Four" always qualifying for European competition. Arsenal went as far as winning the league without losing a single game in 2003–04, the only time it has ever happened in the Premier League. During the 2000s, only four sides outside the "Top Four" managed to qualify for the Champions League: Leeds United (1999–2000), Newcastle United (2001–02 and 2002–03), Everton (2004–05) and Tottenham Hotspur (2009–10) – each occupying the final Champions League spot, with the exception of Newcastle in the 2002–03 season, who finished third. In May 2008 Kevin Keegan stated that "Top Four" dominance threatened the division, "This league is in danger of becoming one of the most boring but great leagues in the world." Premier League chief executive Richard Scudamore said in defence: "There are a lot of different tussles that go on in the Premier League depending on whether you're at the top, in the middle or at the bottom that make it interesting." Between 2005 and 2012, there was a Premier League representative in seven of the eight Champions League finals, with only "Top Four" clubs reaching that stage. Liverpool (2005), Manchester United (2008) and Chelsea (2012) won the competition during this period, with Arsenal (2006), Liverpool (2007), Chelsea (2008) and Manchester United (2009 and 2011) all losing Champions League finals. Leeds United were the only non-"Top Four" side to reach the semi-finals of the Champions League, in the 2000–01 season. 
Additionally, between the 1999–2000 and 2009–10 seasons, four Premier League sides reached UEFA Cup or Europa League finals, with only Liverpool managing to win the competition in 2001. Arsenal (2000), Middlesbrough (2006) and Fulham (2010) all lost their finals. The years following 2009 marked a shift in the structure of the "Top Four" with Tottenham Hotspur and Manchester City both breaking into the top four places on a regular basis, turning the "Big Four" into the "Big Six". In the 2009–10 season, Tottenham finished fourth and became the first team to break the top four since Everton five years prior. Criticism of the gap between an elite group of "super clubs" and the majority of the Premier League has continued, nevertheless, due to these clubs' increasing ability to spend more than the rest of the league. With the continued presence of Manchester City and Tottenham Hotspur at the top end of the table, no side has won consecutive Premier League titles. Manchester City won the title in the 2011–12 season, becoming the first club outside the "Big Four" to win since Blackburn Rovers in the 1994–95 season. That season also saw two of the "Big Four" (Chelsea and Liverpool) finish outside the top four places for the first time since the 1994–95 season. With only four UEFA Champions League qualifying places available in the league, greater competition for qualification now exists, albeit from a narrow base of six clubs. In the five seasons following the 2011–12 campaign, Manchester United and Liverpool both found themselves outside of the top four three times, while Chelsea finished 10th in the 2015–16 season. Arsenal finished 5th in 2016–17, ending their record of 20 consecutive top-four finishes. In the 2015–16 season, the top four was breached by a non-Big Six side for the first time since Everton in 2005. Leicester City were the surprise winners of the league, qualifying for the Champions League as a result. Off the pitch, the "Big Six" wield financial power and influence, with these clubs arguing that they should be entitled to a greater share of revenue due to the greater stature of their clubs globally and the attractive football they aim to play. Objectors argue that the egalitarian revenue structure in the Premier League helps to maintain a competitive league which is vital for its future success. The 2016–17 Deloitte Football Money League report showed the financial disparity between the "Big Six" and the rest of the division. All of the "Big Six" had revenues greater than €350 million, with Manchester United having the largest revenue in the league at €676.3 million. Leicester City was the closest club to the "Big Six" in terms of revenue, recording a figure of €271.1 million for that season – helped by participation in the Champions League. The eighth-largest revenue generator, West Ham, who did not play in European competition, had revenues of €213.3 million, roughly half of those of the fifth-largest club, Liverpool (€424.2 million). The number of clubs was reduced to 20, down from 22, in 1995 when four teams were relegated from the league and only two teams promoted. The top flight had only been expanded to 22 teams at the start of the 1991–92 season – the year prior to the formation of the Premier League. On 8 June 2006, FIFA requested that all major European leagues, including Italy's Serie A and Spain's La Liga, be reduced to 18 teams by the start of the 2007–08 season. The Premier League responded by announcing their intention to resist such a reduction. 
Ultimately, the 2007–08 season kicked off again with 20 teams. The league changed its name from the "FA Premier League" to simply the "Premier League" in 2007. The Football Association Premier League Ltd (FAPL) is operated as a corporation and is owned by the 20 member clubs. Each club is a shareholder, with one vote each on issues such as rule changes and contracts. The clubs elect a chairman, chief executive, and board of directors to oversee the daily operations of the league. The Football Association is not directly involved in the day-to-day operations of the Premier League, but has veto power as a special shareholder during the election of the chairman and chief executive and when new rules are adopted by the league. The current chairman is Sir Dave Richards, who was appointed in April 1999, and the chief executive is Richard Scudamore, appointed in November 1999. The former chairman and chief executive, John Quinton and Peter Leaver, were forced to resign in March 1999 after awarding consultancy contracts to former Sky executives Sam Chisholm and David Chance. Rick Parry was the league's first chief executive. The Premier League sends representatives to UEFA's European Club Association, the number of clubs and the clubs themselves chosen according to UEFA coefficients. For the 2012–13 season the Premier League has 10 representatives in the Association: Arsenal, Aston Villa, Chelsea, Everton, Fulham, Liverpool, Manchester City, Manchester United, Newcastle United and Tottenham Hotspur. The European Club Association is responsible for electing three members to UEFA's Club Competitions Committee, which is involved in the operations of UEFA competitions such as the Champions League and UEFA Europa League. There are 20 clubs in the Premier League. During the course of a season (from August to May) each club plays the others twice (a double round-robin system), once at their home stadium and once at that of their opponents', for 38 games. Teams receive three points for a win and one point for a draw. No points are awarded for a loss. Teams are ranked by total points, then goal difference, and then goals scored. If still equal, teams are deemed to occupy the same position. If there is a tie for the championship, for relegation, or for qualification to other competitions, a play-off match at a neutral venue decides rank. The three lowest placed teams are relegated into the EFL Championship, and the top two teams from the Championship, together with the winner of play-offs involving the third to sixth placed Championship clubs, are promoted in their place. As of the 2009–10 season qualification for the UEFA Champions League changed, the top four teams in the Premier League qualify for the UEFA Champions League, with the top three teams directly entering the group stage. Previously only the top two teams qualified automatically. The fourth-placed team enters the Champions League at the play-off round for non-champions and must win a two-legged knockout tie in order to enter the group stage. The team placed fifth in the Premier League automatically qualifies for the UEFA Europa League, and the sixth and seventh-placed teams can also qualify, depending on the winners of the two domestic cup competitions i.e. the FA Cup and the EFL Cup. Two Europa League places are reserved for the winners of each tournament; if the winner of either the FA Cup or EFL Cup qualifies for the Champions League, then that place will go to the next-best placed finisher in the Premier League. 
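As a concrete illustration of the ranking rules just described, the following minimal sketch orders a toy table the way the league does: points, then goal difference, then goals scored. The club names and records are invented purely for the example.

```python
# Minimal sketch of the league ranking rules described above: 3 points
# for a win, 1 for a draw, 0 for a loss; teams ordered by points, then
# goal difference, then goals scored. (The shared-position case for
# teams level on all three is not handled here.)

from dataclasses import dataclass

@dataclass
class Team:
    name: str
    wins: int
    draws: int
    losses: int
    goals_for: int
    goals_against: int

    @property
    def points(self) -> int:
        return 3 * self.wins + self.draws

    @property
    def goal_difference(self) -> int:
        return self.goals_for - self.goals_against

def rank(teams):
    """Return the teams sorted best-first by the league's tie-break order."""
    return sorted(
        teams,
        key=lambda t: (t.points, t.goal_difference, t.goals_for),
        reverse=True,
    )

table = rank([
    Team("Example United", 25, 8, 5, 80, 35),
    Team("Sample City", 25, 8, 5, 75, 30),
    Team("Demo Rovers", 10, 9, 19, 40, 60),
])
for position, team in enumerate(table, start=1):
    print(position, team.name, team.points, team.goal_difference, team.goals_for)
```

In this toy table the first two clubs are level on points and goal difference, so goals scored decides the order, matching the third tie-breaker described above.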
An exception to the usual European qualification system happened in 2005, after Liverpool won the Champions League the year before, but did not finish in a Champions League qualification place in the Premier League that season. UEFA gave special dispensation for Liverpool to enter the Champions League, giving England five qualifiers. UEFA subsequently ruled that the defending champions qualify for the competition the following year regardless of their domestic league placing. However, for those leagues with four entrants in the Champions League, this meant that if the Champions League winner finished outside the top four in its domestic league, it would qualify at the expense of the fourth-placed team in the league. At that time, no association could have more than four entrants in the Champions League. This occurred in 2012, when Chelsea – who had won the Champions League that summer, but finished sixth in the league – qualified for the Champions League in place of Tottenham Hotspur, who went into the Europa League. Starting with the 2015–16 season, the Europa League champion automatically qualifies for the following season's Champions League, and the maximum number of Champions League places for any single association has increased to five. An association with four Champions League places, such as The FA, will only earn a fifth place if a club from that association that does not qualify for the Champions League through its league wins either the Champions League or Europa League. In 2007, the Premier League became the highest ranking European League based on the performances of English teams in European competitions over a five-year period. This broke the eight-year dominance of the Spanish league, La Liga. Between the 1992–93 and the 2017–18 seasons, Premier League clubs won the UEFA Champions League four times (and had six runners-up), behind Spain's La Liga with eleven wins, and Italy's Serie A with five wins; ahead of, among others, Germany's Bundesliga with three wins. The FIFA Club World Cup (originally called the FIFA Club World Championship) has been won once by a Premier League club (Manchester United in 2008), with two runners-up (Liverpool in 2005, Chelsea in 2012), behind Spain's La Liga with six wins, Brazil's Brasileirão with four wins, and Italy's Serie A with two wins. A system of promotion and relegation exists between the Premier League and the EFL Championship. The three lowest placed teams in the Premier League are relegated to the Championship, and the top two teams from the Championship promoted to the Premier League, with an additional team promoted after a series of play-offs involving the third, fourth, fifth and sixth placed clubs. The Premier League had 22 teams when it began in 1992, but this was reduced to the present 20-team format in 1995. 49 clubs have played in the Premier League from its inception in 1992, up to and including the 2018–19 season. The following 20 clubs will compete in the Premier League during the 2018–19 season. In 2011, a Welsh club participated in the Premier League for the first time after Swansea City gained promotion. The first Premier League match to be played outside England was Swansea City's home match at the Liberty Stadium against Wigan Athletic on 20 August 2011. In 2012–13, Swansea qualified for the Europa League by winning the League Cup. The number of Welsh clubs in the Premier League increased to two for the first time in 2013–14, as Cardiff City gained promotion, but they were relegated after their maiden season. 
Cardiff were promoted again in 2017-18 but the number of Welsh clubs will remain the same as Swansea City were relegated from the Premier League in the same season. Because they are members of the Football Association of Wales (FAW), the question of whether clubs like Swansea should represent England or Wales in European competitions has caused long-running discussions in UEFA. Swansea took one of England's three available places in the Europa League in 2013–14 by winning the League Cup in 2012–13. The right of Welsh clubs to take up such English places was in doubt until UEFA clarified the matter in March 2012, allowing them to participate. Participation in the Premier League by some Scottish or Irish clubs has sometimes been discussed, but without result. The idea came closest to reality in 1998, when Wimbledon received Premier League approval to relocate to Dublin, Ireland, but the move was blocked by the Football Association of Ireland. Additionally, the media occasionally discusses the idea that Scotland's two biggest teams, Celtic and Rangers, should or will take part in the Premier League, but nothing has come of these discussions. From 1993 to 2016, the Premier League had title sponsorship rights sold to two companies, which were Carling Brewery and Barclays Bank PLC; Barclays was the most recent title sponsor, having sponsored the Premier League from 2001 through 2016 (until 2004, the title sponsorship was held through its Barclaycard brand before shifting to its main banking brand in 2004). Barclays' deal with the Premier League expired at the end of the 2015–16 season. The FA announced on 4 June 2015 that it would not pursue any further title sponsorship deals for the Premier League, arguing that they wanted to build a "clean" brand for the competition more in line with those of major U.S. sports leagues. As well as sponsorship for the league itself, the Premier League has a number of official partners and suppliers. The official ball supplier for the league is Nike who have had the contract since the 2000–01 season when they took over from Mitre. The Premier League has the highest revenue of any football league in the world, with total club revenues of €2.48 billion in 2009–10. In 2013–14, due to improved television revenues and cost controls, the Premier League had net profits in excess of £78 million, exceeding all other football leagues. In 2010 the Premier League was awarded the Queen's Award for Enterprise in the International Trade category for its outstanding contribution to international trade and the value it brings to English football and the United Kingdom's broadcasting industry. The Premier League includes some of the richest football clubs in the world. Deloitte's "Football Money League" listed seven Premier League clubs in the top 20 for the 2009–10 season, and all 20 clubs were in the top 40 globally by the end of the 2013–14 season, largely as a result of increased broadcasting revenue. From 2013, the league generates €2.2 billion per year in domestic and international television rights. Premier League clubs agreed in principle in December 2012, to radical new cost controls. The two proposals consist of a break-even rule and a cap on the amount clubs can increase their wage bill by each season. With the new television deals on the horizon, momentum has been growing to find ways of preventing the majority of the cash going straight to players and agents. 
Central payments for the 2016–17 season amounted to £2,398,515,773 across the 20 clubs, with each team receiving a flat participation fee of £35,301,989 and additional payments for TV broadcasts (£1,016,690 for general UK rights to match highlights, £1,136,083 for each live UK broadcast of their games and £39,090,596 for all overseas rights), commercial rights (a flat fee of £4,759,404) and a notional measure of "merit" which was based upon final league position. The merit component was a nominal sum of £1,941,609 multiplied by each finishing place, counted from the foot of the table (e.g., Burnley finished 16th in May 2017, five places counting upwards, and received 5 × £1,941,609 = £9,708,045 merit payment). Television has played a major role in the history of the Premier League. The League's decision to assign broadcasting rights to BSkyB in 1992 was at the time a radical decision, but one that has paid off. At the time pay television was an almost untested proposition in the UK market, as was charging fans to watch live televised football. However, a combination of Sky's strategy, the quality of Premier League football and the public's appetite for the game has seen the value of the Premier League's TV rights soar. The Premier League sells its television rights on a collective basis. This is in contrast to some other European Leagues, including La Liga, in which each club sells its rights individually, leading to a much higher share of the total income going to the top few clubs. The money is divided into three parts: half is divided equally between the clubs; one quarter is awarded on a merit basis based on final league position, the top club getting twenty times as much as the bottom club, and equal steps all the way down the table; the final quarter is paid out as facilities fees for games that are shown on television, with the top clubs generally receiving the largest shares of this. The income from overseas rights is divided equally between the twenty clubs. The first Sky television rights agreement was worth £304 million over five seasons. The next contract, negotiated to start from the 1997–98 season, rose to £670 million over four seasons. The third contract was a £1.024 billion deal with BSkyB for the three seasons from 2001–02 to 2003–04. The league brought in £320 million from the sale of its international rights for the three-year period from 2004–05 to 2006–07. It sold the rights itself on a territory-by-territory basis. Sky's monopoly was broken from August 2006 when Setanta Sports was awarded rights to show two out of the six packages of matches available. This occurred following an insistence by the European Commission that exclusive rights should not be sold to one television company. Sky and Setanta paid £1.7 billion, a two-thirds increase which took many commentators by surprise as it had been widely assumed that the value of the rights had levelled off following many years of rapid growth. Setanta also hold rights to a live 3 pm match solely for Irish viewers. The BBC has retained the rights to show highlights for the same three seasons (on "Match of the Day") for £171.6 million, a 63 per cent increase on the £105 million it paid for the previous three-year period. Sky and BT have agreed to jointly pay £84.3 million for delayed television rights to 242 games (that is the right to broadcast them in full on television and over the internet) in most cases for a period of 50 hours after 10 pm on matchday. 
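The merit component lends itself to a quick arithmetic check. The sketch below reproduces the Burnley example from the figures quoted above; the helper function and the assumption of a standard 20-club table are ours.

```python
# Quick check of the 2016-17 merit payment formula described above:
# a nominal sum per place, multiplied by the club's finishing position
# counted from the foot of the table (20th = 1, ..., 1st = 20).

MERIT_UNIT = 1_941_609   # nominal sum per place, in GBP
LEAGUE_SIZE = 20         # assumption: standard 20-club Premier League

def merit_payment(final_position: int) -> int:
    """Merit payment for a club finishing in `final_position`."""
    places_from_bottom = LEAGUE_SIZE - final_position + 1
    return MERIT_UNIT * places_from_bottom

# Burnley finished 16th in May 2017, i.e. five places from the bottom.
print(f"£{merit_payment(16):,}")  # £9,708,045, matching the figure in the text
```

Because the formula scales linearly with league position, the champions receive twenty times the bottom club's merit payment, consistent with the equal-step merit quarter described above.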
Overseas television rights fetched £625 million, nearly double the previous contract. The total raised from these deals is more than £2.7 billion, giving Premier League clubs an average media income from league games of around £40 million a year from 2007 to 2010. The TV rights agreement between the Premier League and Sky has faced accusations of being a cartel, and a number of court cases have arisen as a result. An investigation by the Office of Fair Trading in 2002 found BSkyB to be dominant within the pay TV sports market, but concluded that there were insufficient grounds for the claim that BSkyB had abused its dominant position. In July 1999 the Premier League's method of selling rights collectively for all member clubs was investigated by the UK Restrictive Practices Court, who concluded that the agreement was not contrary to the public interest. The BBC's highlights package on Saturday and Sunday nights, as well as on other evenings when fixtures justify it, will run until 2016. Television rights alone for the period 2010 to 2013 have been purchased for £1.782 billion. On 22 June 2009, due to troubles encountered by Setanta Sports after it failed to meet a final deadline over a £30 million payment to the Premier League, ESPN was awarded two packages of UK rights containing 46 matches that were available for the 2009–10 season as well as a package of 23 matches per season from 2010–11 to 2012–13. On 13 June 2012, the Premier League announced that BT had been awarded 38 games a season for the 2013–14 through 2015–16 seasons at £246 million a year. The remaining 116 games were retained by Sky, who paid £760 million a year. The total raised from domestic rights was £3.018 billion, an increase of 70.2% over the 2010–11 to 2012–13 rights. The value of the licensing deal rose by another 70.2% in 2015, when Sky and BT paid £5.136 billion to renew their contracts with the Premier League for another three years up to the 2018–19 season. Between the 1998–99 season and the 2012–13 season, RTÉ broadcast highlights on "Premier Soccer Saturday" and occasionally "Premier Soccer Sunday". Between the 2004–05 and 2006–07 seasons, RTÉ also broadcast 15 live matches on Saturday afternoons, with each match billed as "Premiership Live". In August 2016, it was announced that the BBC would be creating a new magazine-style show for the Premier League entitled "The Premier League Show". The Premier League is the most-watched football league in the world, broadcast in 212 territories to 643 million homes and a potential TV audience of 4.7 billion people. The Premier League's production arm, Premier League Productions, is operated by IMG Productions and produces all content for its international television partners. The Premier League is particularly popular in Asia, where it is the most widely distributed sports programme. In Australia, Optus telecommunications holds exclusive rights to the Premier League, providing live broadcasts and online access (Fox Sports formerly held rights). In India, the matches are broadcast live on STAR Sports. In China, the broadcast rights were awarded to Super Sports in a six-year agreement that began in the 2013–14 season. As of the 2013–14 season, Canadian broadcast rights to the Premier League are jointly owned by Sportsnet and TSN, with both rival networks holding rights to 190 matches per season. The Premier League is broadcast in the United States through NBC Sports. 
Premier League viewership has increased rapidly, with NBC and NBCSN averaging a record 479,000 viewers in the 2014–15 season, up 118% from 2012–13 when coverage still aired on Fox Soccer and ESPN/ESPN2 (220,000 viewers), and NBC Sports has been widely praised for its coverage. NBC Sports reached a six-year extension with the Premier League in 2015 to broadcast the league through the 2021–22 season in a deal valued at $1 billion (£640 million). There has been an increasing gulf between the Premier League and the Football League. Since its split with the Football League, many established clubs in the Premier League have managed to distance themselves from their counterparts in lower leagues. Owing in large part to the disparity in revenue from television rights between the leagues, many newly promoted teams have found it difficult to avoid relegation in their first season in the Premier League. In every season except 2001–02, 2011–12 and 2017-18, at least one Premier League newcomer has been relegated back to the Football League. In 1997–98 all three promoted clubs were relegated at the end of the season. The Premier League distributes a portion of its television revenue to clubs that are relegated from the league in the form of "parachute payments". Starting with the 2013–14 season, these payments are in excess of £60 million over four seasons. Though designed to help teams adjust to the loss of television revenues (the average Premier League team receives £55 million while the average Football League Championship club receives £2 million), critics maintain that the payments actually widen the gap between teams that have reached the Premier League and those that have not, leading to the common occurrence of teams "bouncing back" soon after their relegation. For some clubs who have failed to win immediate promotion back to the Premier League, financial problems, including in some cases administration or even liquidation have followed. Further relegations down the footballing ladder have ensued for several clubs unable to cope with the gap. As of the 2017–18 season, Premier League football has been played in 58 stadiums since the formation of the division. The Hillsborough disaster in 1989 and the subsequent Taylor Report saw a recommendation that standing terraces should be abolished; as a result all stadiums in the Premier League are all-seater. Since the formation of the Premier League, football grounds in England have seen constant improvements to capacity and facilities, with some clubs moving to new-build stadiums. Nine stadiums that have seen Premier League football have now been demolished. The stadiums for the 2017–18 season show a large disparity in capacity: Wembley Stadium, the temporary home of Tottenham Hotspur, has a capacity of 90,000 with Dean Court, the home of Bournemouth, having a capacity of 11,360. The combined total capacity of the Premier League in the 2017–18 season is 806,033 with an average capacity of 40,302. Stadium attendances are a significant source of regular income for Premier League clubs. For the 2016–17 season, average attendances across the league clubs were 35,838 for Premier League matches with an aggregate attendance of 13,618,596. This represents an increase of 14,712 from the average attendance of 21,126 recorded in the league's first season (1992–93). However, during the 1992–93 season the capacities of most stadiums were reduced as clubs replaced terraces with seats in order to meet the Taylor Report's 1994–95 deadline for all-seater stadiums. 
The Premier League's record average attendance of 36,144 was set during the 2007–08 season. This record was then beaten in the 2013–14 season recording an average attendance of 36,695 with an attendance of just under 14 million, the highest average in England's top flight since 1950. Managers in the Premier League are involved in the day-to-day running of the team, including the training, team selection, and player acquisition. Their influence varies from club-to-club and is related to the ownership of the club and the relationship of the manager with fans. Managers are required to have a UEFA Pro Licence which is the final coaching qualification available, and follows the completion of the UEFA 'B' and 'A' Licences. The UEFA Pro Licence is required by every person who wishes to manage a club in the Premier League on a permanent basis (i.e. more than 12 weeks – the amount of time an unqualified caretaker manager is allowed to take control). Caretaker appointments are managers that fill the gap between a managerial departure and a new appointment. Several caretaker managers have gone on to secure a permanent managerial post after performing well as a caretaker; examples include Paul Hart at Portsmouth and David Pleat at Tottenham Hotspur. The league's longest-serving manager was Alex Ferguson, who was in charge of Manchester United from November 1986 until his retirement at the end of the 2012–13 season, meaning that he was manager for all of the first 21 seasons of the Premier League. Arsène Wenger is the league's longest-serving current manager, having been in charge of Arsenal in the Premier League since 1996. In the 2017–18 season, 11 managers were sacked, the most recent being Alan Pardew of West Bromwich Albion. There have been several studies into the reasoning behind, and effects of, managerial sackings. Most famously, Professor Sue Bridgewater of the University of Liverpool and Dr. Bas ter Weel of the University of Amsterdam, performed two separate studies which helped to explain the statistics behind managerial sackings. Bridgewater's study found that clubs generally sack their managers upon dropping below an average of 1 point-per-game. At the inception of the Premier League in 1992–93, just eleven players named in the starting line-ups for the first round of matches hailed from outside of the United Kingdom or Ireland. By 2000–01, the number of foreign players participating in the Premier League was 36 per cent of the total. In the 2004–05 season the figure had increased to 45 per cent. On 26 December 1999, Chelsea became the first Premier League side to field an entirely foreign starting line-up, and on 14 February 2005 Arsenal were the first to name a completely foreign 16-man squad for a match. By 2009, under 40% of the players in the Premier League were English. In response to concerns that clubs were increasingly passing over young English players in favour of foreign players, in 1999, the Home Office tightened its rules for granting work permits to players from countries outside of the European Union. A non-EU player applying for the permit must have played for his country in at least 75 per cent of its competitive 'A' team matches for which he was available for selection during the previous two years, and his country must have averaged at least 70th place in the official FIFA world rankings over the previous two years. If a player does not meet those criteria, the club wishing to sign him may appeal. 
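Those work-permit criteria amount to two simple threshold tests, which the sketch below encodes directly; the function and variable names are illustrative only, and the appeal route mentioned above is not modelled.

```python
# Sketch of the 1999 work-permit criteria described above for non-EU
# players: at least 75% of his country's competitive 'A' internationals
# for which he was available over the previous two years, and a country
# averaging at least 70th in the FIFA world rankings over that period.
# The example figures below are invented; clubs could still appeal if a
# player failed this automatic test.

def meets_work_permit_criteria(matches_played: int,
                               matches_available: int,
                               average_fifa_ranking: float) -> bool:
    if matches_available == 0:
        return False  # no competitive internationals to judge by
    played_share = matches_played / matches_available
    return played_share >= 0.75 and average_fifa_ranking <= 70

# Example: played 14 of 18 available matches, country averaged 45th.
print(meets_work_permit_criteria(14, 18, 45))  # True (14/18 ≈ 0.78)
```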
Players may only be transferred during transfer windows that are set by the Football Association. The two transfer windows run from the last day of the season to 31 August and from 31 December to 31 January. Player registrations cannot be exchanged outside these windows except under specific licence from the FA, usually on an emergency basis. As of the 2010–11 season, the Premier League introduced new rules mandating that each club must register a maximum 25-man squad of players aged over 21, with the squad list only allowed to be changed in transfer windows or in exceptional circumstances. This was to enable the 'home grown' rule to be enacted, whereby the League would also from 2010 require at least eight of the named 25-man squad to be made up of 'home-grown players'. There is no team or individual salary cap in the Premier League. As a result of the increasingly lucrative television deals, player wages rose sharply following the formation of the Premier League, when the average player wage was £75,000 per year. The average salary stands at £1.1 million as of the 2008–09 season. As of 2015, average salaries in the Premier League are higher than for any other football league in the world. The record transfer fee for a Premier League player has risen steadily over the lifetime of the competition. Prior to the start of the first Premier League season, Alan Shearer became the first British player to command a transfer fee of more than £3 million. The record rose steadily in the Premier League's first few seasons, until Alan Shearer made a record-breaking £15 million move to Newcastle United in 1996. Several of the most expensive transfers in the sport's history have involved a Premier League club as buyer or seller, including Juventus selling Paul Pogba to Manchester United in August 2016 for a fee of £89 million, Tottenham Hotspur selling Gareth Bale to Real Madrid for £85 million in 2013, Manchester United's sale of Cristiano Ronaldo to Real Madrid for £80 million in 2009, and Liverpool selling Luis Suárez to Barcelona for £75 million in 2014. The Golden Boot is awarded to the top Premier League scorer at the end of each season. Former Blackburn Rovers and Newcastle United striker Alan Shearer holds the record for most Premier League goals with 260. Twenty-five players have reached the 100-goal mark. Since the first Premier League season in 1992–93, 14 players from 10 clubs have won or shared the top scorer's title. Thierry Henry won his fourth overall scoring title by scoring 27 goals in the 2005–06 season. Andrew Cole and Alan Shearer hold the record for most goals in a season (34) – for Newcastle and Blackburn respectively. Ryan Giggs of Manchester United holds the record for scoring goals in consecutive seasons, having scored in the first 21 seasons of the league. The Premier League maintains two trophies – the genuine trophy (held by the reigning champions) and a spare replica. Two trophies are held in the event that two clubs could win the League on the final day of the season. In the rare event that more than two clubs are vying for the title on the final day of the season, a replica previously won by another club is used. The current Premier League trophy was created by Royal Jewellers Asprey of London. It consists of a trophy with a golden crown and a malachite plinth base. Its main body is solid sterling silver and silver gilt, while its plinth is made of malachite, a semi-precious stone. 
The plinth has a silver band around its circumference, upon which the names of the title-winning clubs are listed. Malachite's green colour is also representative of the green field of play. The design of the trophy is based on the heraldry of Three Lions that is associated with English football. Two of the lions are found above the handles on either side of the trophy – the third is symbolised by the captain of the title-winning team as he raises the trophy, and its gold crown, above his head at the end of the season. The ribbons that drape the handles are presented in the team colours of the league champions that year. In 2004, a special gold version of the trophy was commissioned to commemorate Arsenal winning the title without a single defeat. In addition to the winner's trophy and the individual winner's medals awarded to players who win the title, the Premier League also issues other awards throughout the season. A man of the match award is awarded to the player who has the greatest impact in an individual match. Monthly awards are also given for the Manager of the Month, Player of the Month and Goal of the Month. These awards are also issued annually for Manager of the Season, Player of the Season and Goal of the Season. The Golden Boot award is given to the top goalscorer of each season, the Playmaker of the Season award is given to the player who makes the most assists over the season, and the Golden Glove award is given to the goalkeeper with the most clean sheets at the end of the season. From the 2017–18 season, players also receive a milestone award for 100 appearances and every century thereafter, and for scoring 50 goals and multiples thereof. Each player to reach these milestones will receive a presentation box from the Premier League containing a special medallion and a plaque commemorating their achievement. In 2012, the Premier League celebrated its second decade by holding the 20 Seasons Awards. FIFA World Cup The FIFA World Cup, often simply called the World Cup, is an international association football competition contested by the senior men's national teams of the members of the "Fédération Internationale de Football Association" (FIFA), the sport's global governing body. The championship has been awarded every four years since the inaugural tournament in 1930, except in 1942 and 1946 when it was not held because of the Second World War. The current champion is France, which won its second title at the 2018 tournament in Russia. The current format of the competition involves a qualification phase, which currently takes place over the preceding three years, to determine which teams qualify for the tournament phase, which is often called the "World Cup Finals". After this, 32 teams, including the automatically qualifying host nation(s), compete in the tournament phase for the title at venues within the host nation(s) over a period of about a month. The 21 World Cup tournaments have been won by eight national teams. Brazil have won five times, and they are the only team to have played in every tournament. The other World Cup winners are Germany and Italy, with four titles each; Argentina, France and inaugural winner Uruguay, with two titles each; and England and Spain with one title each.
The World Cup is the most prestigious association football tournament in the world, as well as the most widely viewed and followed sporting event, exceeding even the Olympic Games; the cumulative audience of all matches of the 2006 World Cup was estimated to be 26.29 billion, with an estimated 715.1 million people watching the final match, a ninth of the entire population of the planet. Seventeen countries have hosted the World Cup. Brazil, France, Italy, Germany and Mexico have each hosted twice, while Uruguay, Switzerland, Sweden, Chile, England, Argentina, Spain, the United States, Japan and South Korea (jointly), South Africa and Russia have each hosted once. Qatar is planned as host of the 2022 finals, and the 2026 finals will be jointly hosted by Canada, the United States and Mexico, giving Mexico the distinction of being the first country to have hosted games at three different finals. The world's first international football match was a challenge match played in Glasgow in 1872 between Scotland and England, which ended in a 0–0 draw. The first international tournament, the inaugural British Home Championship, took place in 1884. As football grew in popularity in other parts of the world at the start of the 20th century, it was held as a demonstration sport with no medals awarded at the 1900 and 1904 Summer Olympics (however, the IOC has retroactively upgraded their status to official events), and at the 1906 Intercalated Games. After FIFA was founded in 1904, it tried to arrange an international football tournament between nations outside the Olympic framework in Switzerland in 1906. These were very early days for international football, and the official history of FIFA describes the competition as having been a failure. At the 1908 Summer Olympics in London, football became an official competition. Planned by The Football Association (FA), England's football governing body, the event was for amateur players only and was regarded suspiciously as a show rather than a competition. Great Britain (represented by the England national amateur football team) won the gold medals. They repeated the feat at the 1912 Summer Olympics in Stockholm. With the Olympic event continuing to be contested only between amateur teams, Sir Thomas Lipton organised the Sir Thomas Lipton Trophy tournament in Turin in 1909. The Lipton tournament was a championship between individual clubs (not national teams) from different nations, each one of which represented an entire nation. The competition is sometimes described as "The First World Cup", and featured the most prestigious professional club sides from Italy, Germany and Switzerland, but the FA of England refused to be associated with the competition and declined the offer to send a professional team. Lipton invited West Auckland, an amateur side from County Durham, to represent England instead. West Auckland won the tournament and returned in 1911 to successfully defend their title. In 1914, FIFA agreed to recognise the Olympic tournament as a "world football championship for amateurs", and took responsibility for managing the event. This paved the way for the world's first intercontinental football competition, at the 1920 Summer Olympics, contested by Egypt and 13 European teams, and won by Belgium. Uruguay won the next two Olympic football tournaments in 1924 and 1928. Those were also the first two open world championships, as 1924 was the start of FIFA's professional era.
Due to the success of the Olympic football tournaments, FIFA, with President Jules Rimet as the driving force, again started looking at staging its own international tournament outside of the Olympics. On 28 May 1928, the FIFA Congress in Amsterdam decided to stage a world championship itself. With Uruguay now two-time official football world champions and to celebrate their centenary of independence in 1930, FIFA named Uruguay as the host country of the inaugural World Cup tournament. The national associations of selected nations were invited to send a team, but the choice of Uruguay as a venue for the competition meant a long and costly trip across the Atlantic Ocean for European sides. Indeed, no European country pledged to send a team until two months before the start of the competition. Rimet eventually persuaded teams from Belgium, France, Romania, and Yugoslavia to make the trip. In total, 13 nations took part: seven from South America, four from Europe and two from North America. The first two World Cup matches took place simultaneously on 13 July 1930, and were won by France and the USA, who defeated Mexico 4–1 and Belgium 3–0 respectively. The first goal in World Cup history was scored by Lucien Laurent of France. In the final, Uruguay defeated Argentina 4–2 in front of 93,000 people in Montevideo, and became the first nation to win the World Cup. After the creation of the World Cup, FIFA and the IOC disagreed over the status of amateur players, and so football was dropped from the 1932 Summer Olympics. Olympic football returned at the 1936 Summer Olympics, but was now overshadowed by the more prestigious World Cup. The issues facing the early World Cup tournaments were the difficulties of intercontinental travel, and war. Few South American teams were willing to travel to Europe for the 1934 World Cup and all North and South American nations except Brazil and Cuba boycotted the 1938 tournament. Brazil was the only South American team to compete in both. The 1942 and 1946 competitions, which Germany and Brazil sought to host, were cancelled due to World War II and its aftermath. The 1950 World Cup, held in Brazil, was the first to include British participants. British teams withdrew from FIFA in 1920, partly out of unwillingness to play against the countries they had been at war with, and partly as a protest against foreign influence on football, but rejoined in 1946 following FIFA's invitation. The tournament also saw the return of 1930 champions Uruguay, who had boycotted the previous two World Cups. Uruguay won the tournament again after defeating the host nation Brazil, in the match called "Maracanazo" (Portuguese: "Maracanaço"). In the tournaments between 1934 and 1978, 16 teams competed in each tournament, except in 1938, when Austria was absorbed into Germany after qualifying, leaving the tournament with 15 teams, and in 1950, when India, Scotland, and Turkey withdrew, leaving the tournament with 13 teams. Most of the participating nations were from Europe and South America, with a small minority from North America, Africa, Asia, and Oceania. These teams were usually defeated easily by the European and South American teams. Until 1982, the only teams from outside Europe and South America to advance out of the first round were: USA, semi-finalists in 1930; Cuba, quarter-finalists in 1938; North Korea, quarter-finalists in 1966; and Mexico, quarter-finalists in 1970. 
The tournament was expanded to 24 teams in 1982, and then to 32 in 1998, also allowing more teams from Africa, Asia and North America to take part. Since then, teams from these regions have enjoyed more success, with several having reached the quarter-finals: Mexico, quarter-finalists in 1986; Cameroon, quarter-finalists in 1990; South Korea, finishing in fourth place in 2002; Senegal, along with USA, both quarter-finalists in 2002; Ghana, quarter-finalists in 2010; and Costa Rica, quarter-finalists in 2014. Nevertheless, European and South American teams continue to dominate; for example, the quarter-finalists in 1994, 1998, 2006 and 2018 were all from Europe or South America, as were the finalists of all tournaments so far. Two hundred teams entered the 2002 FIFA World Cup qualification rounds; 198 nations attempted to qualify for the 2006 FIFA World Cup, while a record 204 countries entered qualification for the 2010 FIFA World Cup. In October 2013, Sepp Blatter spoke of guaranteeing the Caribbean Football Union's region a position in the World Cup. In the 25 October 2013 edition of the "FIFA Weekly", Blatter wrote: "From a purely sporting perspective, I would like to see globalisation finally taken seriously, and the African and Asian national associations accorded the status they deserve at the FIFA World Cup. It cannot be that the European and South American confederations lay claim to the majority of the berths at the World Cup." Those two remarks suggested to commentators that Blatter could be putting himself forward for re-election to the FIFA Presidency. Following the magazine's publication, Blatter's would-be opponent for the FIFA Presidency, UEFA President Michel Platini, responded that he intended to extend the World Cup to 40 national associations, increasing the number of participants by eight. Platini said that he would allocate an additional berth to UEFA, two to the Asian Football Confederation and the Confederation of African Football, two shared between CONCACAF and CONMEBOL, and a guaranteed place for the Oceania Football Confederation. Platini was clear about why he wanted to expand the World Cup. He said: "[The World Cup is] not based on the quality of the teams because you don't have the best 32 at the World Cup ... but it's a good compromise. ... It's a political matter so why not have more Africans? The competition is to bring all the people of all the world. If you don't give the possibility to participate, they don't improve." In October 2016 FIFA president Gianni Infantino stated his support for a 48-team World Cup in 2026. On 10 January 2017, FIFA confirmed the 2026 World Cup would have 48 finalist teams. By May 2015, the games were under a particularly dark cloud because of the 2015 FIFA corruption case: allegations and criminal charges of bribery, fraud and money laundering to corrupt the issuing of media and marketing rights (rigged bids) for FIFA games, with FIFA officials accused of taking bribes totaling more than $150 million over 24 years. In late May, the U.S. Justice Department announced a 47-count indictment with charges of racketeering, wire fraud and money laundering conspiracy against 14 people. Arrests of over a dozen FIFA officials have been made since then, particularly on 29 May and 3 December. By the end of May 2015, a total of nine FIFA officials and five executives of sports and broadcasting markets had already been charged with corruption.
At the time, FIFA president Sepp Blatter announced he would relinquish his position in February 2016. On 4 June 2015, Chuck Blazer, while co-operating with the FBI and the Swiss authorities, admitted that he and the other members of FIFA's then-executive committee were bribed in order to promote the 1998 and 2010 World Cups. On 10 June 2015 Swiss authorities seized computer data from the offices of Sepp Blatter. The same day, FIFA postponed the bidding process for the 2026 FIFA World Cup in light of the allegations surrounding bribery in the awarding of the 2018 and 2022 tournaments. Then-secretary general Jérôme Valcke stated, "Due to the situation, I think it's nonsense to start any bidding process for the time being." On 28 October 2015, Blatter and FIFA VP Michel Platini, a potential candidate for presidency, were suspended for 90 days; both maintained their innocence in statements made to the news media. On 3 December 2015 two FIFA vice-presidents were arrested on suspicion of bribery in the same Zurich hotel where seven FIFA officials had been arrested in May. An additional 16 indictments by the U.S. Department of Justice were announced on the same day. An equivalent tournament for women's football, the FIFA Women's World Cup, was first held in 1991 in China. The women's tournament is smaller in scale and profile than the men's, but is growing; the number of entrants for the 2007 tournament was 120, more than double that of 1991. Men's football has been included in every Summer Olympic Games except 1896 and 1932. Unlike many other sports, the men's football tournament at the Olympics is not a top-level tournament; since 1992 it has been an under-23 tournament in which each team is allowed three over-age players. Women's football made its Olympic debut in 1996. The FIFA Confederations Cup is a tournament held one year before the World Cup at the World Cup host nation(s) as a dress rehearsal for the upcoming World Cup. It is contested by the winners of each of the six FIFA confederation championships, along with the FIFA World Cup champion and the host country. FIFA also organises international tournaments for youth football (FIFA U-20 World Cup, FIFA U-17 World Cup, FIFA U-20 Women's World Cup, FIFA U-17 Women's World Cup), club football (FIFA Club World Cup), and football variants such as futsal (FIFA Futsal World Cup) and beach soccer (FIFA Beach Soccer World Cup). The latter three do not have a women's version, although a FIFA Women's Club World Cup has been proposed. The FIFA U-20 Women's World Cup is held the year before each Women's World Cup and both tournaments are awarded in a single bidding process. The U-20 tournament serves as a dress rehearsal for the larger competition. From 1930 to 1970, the "Jules Rimet Trophy" was awarded to the World Cup winning team. It was originally simply known as the "World Cup" or "Coupe du Monde", but in 1946 it was renamed after the FIFA president Jules Rimet who set up the first tournament. In 1970, Brazil's third victory in the tournament entitled them to keep the trophy permanently. However, the trophy was stolen in 1983 and has never been recovered, apparently melted down by the thieves. After 1970, a new trophy, known as the FIFA World Cup Trophy, was designed. FIFA experts from seven countries evaluated the 53 models presented, finally opting for the work of the Italian designer Silvio Gazzaniga. The new trophy is made of solid 18-carat (75%) gold.
The base contains two layers of semi-precious malachite while the bottom side of the trophy bears the engraved year and name of each FIFA World Cup winner since 1974. The description of the trophy by Gazzaniga was: "The lines spring out from the base, rising in spirals, stretching out to receive the world. From the remarkable dynamic tensions of the compact body of the sculpture rise the figures of two athletes at the stirring moment of victory." This new trophy is not awarded to the winning nation permanently. World Cup winners retain the trophy only until the post-match celebration is finished. They are awarded a gold-plated replica rather than the solid gold original immediately afterwards. Currently, all members (players, coaches, and managers) of the top three teams receive medals with an insignia of the World Cup Trophy; winners' (gold), runners-up' (silver), and third-place (bronze). In the 2002 edition, fourth-place medals were awarded to hosts South Korea. Before the 1978 tournament, medals were only awarded to the eleven players on the pitch at the end of the final and the third-place match. In November 2007, FIFA announced that all members of World Cup-winning squads between 1930 and 1974 were to be retroactively awarded winners' medals. Since the second World Cup in 1934, qualifying tournaments have been held to thin the field for the final tournament. They are held within the six FIFA continental zones (Africa, Asia, North and Central America and Caribbean, South America, Oceania, and Europe), overseen by their respective confederations. For each tournament, FIFA decides the number of places awarded to each of the continental zones beforehand, generally based on the relative strength of the confederations' teams. The qualification process can start as early as almost three years before the final tournament and last over a two-year period. The formats of the qualification tournaments differ between confederations. Usually, one or two places are awarded to winners of intercontinental play-offs. For example, the winner of the Oceanian zone and the fifth-placed team from the Asian zone entered a play-off for a spot in the 2010 World Cup. From the 1938 World Cup onwards, host nations receive automatic qualification to the final tournament. This right was also granted to the defending champions between 1938 and 2002, but was withdrawn from the 2006 FIFA World Cup onward, requiring the champions to qualify. Brazil, winners in 2002, were the first defending champions to play qualifying matches. The current final tournament has been used since 1998 and features 32 national teams competing over the course of a month in the host nation(s). There are two stages: the group stage followed by the knockout stage. In the group stage, teams compete within eight groups of four teams each. Eight teams are seeded, including the hosts, with the other seeded teams selected using a formula based on the FIFA World Rankings and/or performances in recent World Cups, and drawn to separate groups. The other teams are assigned to different "pots", usually based on geographical criteria, and teams in each pot are drawn at random to the eight groups. Since 1998, constraints have been applied to the draw to ensure that no group contains more than two European teams or more than one team from any other confederation. Each group plays a round-robin tournament, in which each team is scheduled for three matches against other teams in the same group. 
This means that a total of six matches are played within a group. The last round of matches of each group is scheduled at the same time to preserve fairness among all four teams. The top two teams from each group advance to the knockout stage. Points are used to rank the teams within a group. Since 1994, three points have been awarded for a win, one for a draw and none for a loss (before, winners received two points). If one considers all possible outcomes (win, draw, loss) for all six matches in a group, there are 729 (= 3^6) possible outcome combinations. However, 207 of these combinations lead to ties between the second and third places; this count can be verified by direct enumeration (see the sketch after this paragraph). In such cases, the ranking among the tied teams is determined by further tiebreaking criteria, such as goal difference and goals scored in all group matches. The knockout stage is a single-elimination tournament in which teams play each other in one-off matches, with extra time and penalty shootouts used to decide the winner if necessary. It begins with the round of 16 (or the second round) in which the winner of each group plays against the runner-up of another group. This is followed by the quarter-finals, the semi-finals, the third-place match (contested by the losing semi-finalists), and the final. On 10 January 2017, FIFA approved a new 48-team format, consisting of 16 groups of three teams each, with two teams qualifying from each group to form a round-of-32 knockout stage, to be implemented by 2026. Early World Cups were given to countries at meetings of FIFA's congress. The locations were controversial because South America and Europe were by far the two centres of strength in football and travel between them required three weeks by boat. The decision to hold the first World Cup in Uruguay, for example, led to only four European nations competing. The next two World Cups were both held in Europe. The decision to hold the second of these in France was disputed, as the South American countries understood that the location would alternate between the two continents. Both Argentina and Uruguay thus boycotted the 1938 FIFA World Cup. Since the 1958 FIFA World Cup, to avoid future boycotts or controversy, FIFA began a pattern of alternating the hosts between the Americas and Europe, which continued until the 1998 FIFA World Cup. The 2002 FIFA World Cup, hosted jointly by South Korea and Japan, was the first one held in Asia, and the first tournament with multiple hosts. South Africa became the first African nation to host the World Cup in 2010. The 2014 FIFA World Cup was hosted by Brazil, the first held in South America since Argentina 1978, and was the first occasion where consecutive World Cups were held outside Europe. The host country is now chosen in a vote by FIFA's Council. This is done under an exhaustive ballot system. The national football association of a country desiring to host the event receives a "Hosting Agreement" from FIFA, which explains the steps and requirements that are expected from a strong bid. The bidding association also receives a form, the submission of which represents the official confirmation of the candidacy. After this, a FIFA-designated group of inspectors visits the country to confirm that it meets the requirements needed to host the event, and a report on the country is produced. The decision on who will host the World Cup is usually made six or seven years in advance of the tournament.
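The group-stage figures quoted above (729 outcome combinations, with ties between second and third place in some of them) can be checked by brute-force enumeration. The following Python sketch is illustrative only; the team labels and variable names are hypothetical, and it applies the post-1994 scoring of three points for a win and one for a draw.

# Enumerate every possible set of results for the six group matches and count
# how many leave the second- and third-placed teams level on points.
from itertools import combinations, product

teams = ["A", "B", "C", "D"]            # illustrative labels for the four group teams
matches = list(combinations(teams, 2))  # the six fixtures in a single group

tied_second_third = 0
for outcomes in product(("first", "draw", "second"), repeat=len(matches)):
    points = {t: 0 for t in teams}
    for (first, second), result in zip(matches, outcomes):
        if result == "first":
            points[first] += 3          # win: 3 points
        elif result == "second":
            points[second] += 3
        else:
            points[first] += 1          # draw: 1 point each
            points[second] += 1
    table = sorted(points.values(), reverse=True)
    if table[1] == table[2]:            # second and third places level on points
        tied_second_third += 1

print(3 ** len(matches), tied_second_third)  # total combinations and the tie count

Run as written, the first number printed is 729; the second should match the count of tied cases cited in the paragraph above.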
However, there have been occasions where the hosts of multiple future tournaments were announced at the same time, as was the case for the 2018 and 2022 World Cups, which were awarded to Russia and Qatar, with Qatar becoming the first Middle Eastern country to host the tournament. For the 2010 and 2014 World Cups, the final tournament was rotated between confederations, allowing only countries from the chosen confederation (Africa in 2010, South America in 2014) to bid to host the tournament. The rotation policy was introduced after the controversy surrounding Germany's victory over South Africa in the vote to host the 2006 tournament. However, the policy of continental rotation did not continue beyond 2014, so any country, except those belonging to confederations that hosted the two preceding tournaments, can apply as hosts for World Cups starting from 2018. This is partly to avoid a similar scenario to the bidding process for the 2014 tournament, where Brazil was the only official bidder. The 2026 FIFA World Cup was chosen to be held in the United States, Canada and Mexico, marking the first time a World Cup has been shared by three host nations. The 2026 tournament will be the biggest World Cup ever held, with 48 teams playing 80 matches. Sixty matches will take place in the US, including all matches from the quarter-finals onward, while Canada and Mexico will host 10 games each. Six of the eight champions have won one of their titles while playing in their own homeland, the exceptions being Brazil, who finished as runners-up after losing the deciding match on home soil in 1950 and lost their semi-final against Germany in 2014, and Spain, which reached the second round on home soil in 1982. England (1966) won its only title while playing as a host nation. Uruguay (1930), Italy (1934), Argentina (1978) and France (1998) won their first titles as host nations but have gone on to win again, while Germany (1974) won their second title on home soil. Other nations have also been successful when hosting the tournament. Switzerland (quarter-finals 1954), Sweden (runners-up in 1958), Chile (third place in 1962), South Korea (fourth place in 2002), and Mexico (quarter-finals in 1970 and 1986) all achieved their best results when serving as hosts. So far, South Africa (2010) has been the only host nation to fail to advance beyond the first round. The best-attended single match has been the final in half of the 20 World Cups as of 2014. Another match or matches drew more attendance than the final in 1930, 1938, 1958, 1962, 1970–1982, 1990 and 2006. The World Cup was first televised in 1954 and is now the most widely viewed and followed sporting event in the world. The cumulative audience of all matches of the 2006 World Cup is estimated to be 26.29 billion. 715.1 million individuals watched the final match of this tournament (a ninth of the entire population of the planet). The 2006 World Cup draw, which decided the distribution of teams into groups, was watched by 300 million viewers. The World Cup attracts many sponsors such as Coca-Cola, McDonald's and Adidas. For these companies and many more, being a sponsor strongly impacts their global brands. Host countries typically experience a multimillion-dollar revenue increase from the month-long event. The governing body of the sport, FIFA, generated $4.8 billion in revenue from the 2014 tournament. Each FIFA World Cup since 1966 has had its own mascot or logo.
"World Cup Willie", the mascot for the 1966 competition, was the first World Cup mascot. World Cups have also featured official match balls specially designed for each tournament. The World Cup even has a statistically significant effect on birth rates, the male/female sex ratio of newborns, and heart attacks in nations whose national teams are competing. In all, 79 nations have played in at least one World Cup. Of these, eight national teams have won the World Cup, and they have added stars to their badges, with each star representing a World Cup victory. (Uruguay, however, choose to display four stars on their badge, representing their two gold medals at the 1924 and 1928 Summer Olympics and their two World Cup titles in 1930 and 1950). With five titles, Brazil are the most successful World Cup team and also the only nation to have played in every World Cup (21) to date. Brazil were also the first team to win the World Cup for the third (1970), fourth (1994) and fifth (2002) time. Italy (1934 and 1938) and Brazil (1958 and 1962) are the only nations to have won consecutive titles. West Germany (1982–1990) and Brazil (1994–2002) are the only nations to appear in three consecutive World Cup finals. Germany has made the most top-four finishes (13), medals (12), as well as the most finals (8). To date, the final of the World Cup has only been contested by teams from the UEFA (Europe) and CONMEBOL (South America) confederations. European nations have won twelve titles, while South American have won nine. Only two teams from outside these two continents have ever reached the semi-finals of the competition: United States (North, Central America and Caribbean) in 1930 and South Korea (Asia) in 2002. The best result of an African team is reaching the quarter-finals: Cameroon in 1990, Senegal in 2002 and Ghana in 2010. Only one Oceanian qualifier, Australia in 2006, has advanced to the second round. Brazil, Argentina, Spain and Germany are the only teams to win a World Cup outside their continental confederation; Brazil came out victorious in Europe (1958), North America (1970 and 1994) and Asia (2002). Argentina won a World Cup in North America in 1986, while Spain won in Africa in 2010. In 2014, Germany became the first European team to win in the Americas. Only on five occasions have consecutive World Cups been won by teams from the same continent, and currently it is the first time with four champions in a row from the same continental confederation. Italy and Brazil successfully defended their titles in 1938 and 1962 respectively, while Italy's triumph in 2006 has been followed by wins for Spain in 2010, Germany in 2014 and France in 2018. Currently, it is also the first time that one of the currently winning continents (Europe) is ahead of the other (South America) by more than one championship. At the end of each World Cup, awards are presented to the players and teams for accomplishments other than their final team positions in the tournament. There are currently six awards: An "All-Star Team" consisting of the best players of the tournament has also been announced for each tournament since 1998. Three players share the record for playing in the most World Cups; Mexico's Antonio Carbajal (1950–1966) and Rafael Márquez (2002-2018); and Germany's Lothar Matthäus (1982–1998) all played in five tournaments. Matthäus has played the most World Cup matches overall, with 25 appearances. 
Brazil's Djalma Santos (1954–1962), West Germany's Franz Beckenbauer (1966–1974) and Germany's Philipp Lahm (2006–2014) are the only players to be named to three Finals All-Star Teams. Miroslav Klose of Germany (2002–2014) is the all-time top scorer at the finals, with 16 goals. He broke Ronaldo of Brazil's record of 15 goals (1998–2006) during the 2014 semi-final match against Brazil. West Germany's Gerd Müller (1970–1974) is third, with 14 goals. The fourth-placed goalscorer, France's Just Fontaine, holds the record for the most goals scored in a single World Cup; all his 13 goals were scored in the 1958 tournament. In November 2007, FIFA announced that all members of World Cup-winning squads between 1930 and 1974 were to be retroactively awarded winners' medals. This made Brazil's Pelé the only player to have won three World Cup winners' medals (1958, 1962, and 1970, although he did not play in the 1962 final due to injury), with 20 other players who have won two winners' medals. Seven players have collected all three types of World Cup medals (winners', runners-up' and third-place); five of them were from West Germany's squad of 1966–1974: Franz Beckenbauer, Jürgen Grabowski, Horst-Dieter Höttges, Sepp Maier and Wolfgang Overath; the sixth was Italy's Franco Baresi (1982, 1990, 1994); and the most recent has been Miroslav Klose of Germany (2002–2014), with four consecutive medals. Brazil's Mário Zagallo, West Germany's Franz Beckenbauer and France's Didier Deschamps are the only people to date to win the World Cup as both player and head coach. Zagallo won in 1958 and 1962 as a player and in 1970 as head coach. Beckenbauer won in 1974 as captain and in 1990 as head coach, and Deschamps repeated the feat in 2018, after having won in 1998 as captain. Italy's Vittorio Pozzo is the only head coach to ever win two World Cups (1934 and 1938). All World Cup-winning head coaches were natives of the country they coached to victory. Among the national teams, Germany and Brazil have played the most World Cup matches (109); Germany has appeared in the most finals (8), semi-finals (13) and quarter-finals (16), while Brazil has appeared in the most World Cups (21), has the most wins (73) and has scored the most goals (229). The two teams have played each other twice in the World Cup, in the 2002 final and in the 2014 semi-final. Female genital mutilation Female genital mutilation (FGM), also known as female genital cutting and female circumcision, is the ritual cutting or removal of some or all of the external female genitalia. The practice is found in Africa, Asia and the Middle East, and within communities from countries in which FGM is common. UNICEF estimated in 2016 that 200 million women living today in 30 countries—27 African countries, Indonesia, Iraqi Kurdistan and Yemen—have undergone the procedures. Typically carried out by a traditional circumciser using a blade, FGM is conducted from days after birth to puberty and beyond. In half the countries for which national figures are available, most girls are cut before the age of five. Procedures differ according to the country or ethnic group. They include removal of the clitoral hood and clitoral glans; removal of the inner labia; and removal of the inner and outer labia and closure of the vulva. In this last procedure, known as infibulation, a small hole is left for the passage of urine and menstrual fluid; the vagina is opened for intercourse and opened further for childbirth.
The practice is rooted in gender inequality, attempts to control women's sexuality, and ideas about purity, modesty and beauty. It is usually initiated and carried out by women, who see it as a source of honour, and who fear that failing to have their daughters and granddaughters cut will expose the girls to social exclusion. Health effects depend on the procedure. They can include recurrent infections, difficulty urinating and passing menstrual flow, chronic pain, the development of cysts, an inability to get pregnant, complications during childbirth, and fatal bleeding. There are no known health benefits. There have been international efforts since the 1970s to persuade practitioners to abandon FGM, and it has been outlawed or restricted in most of the countries in which it occurs, although the laws are poorly enforced. Since 2010 the United Nations has called upon healthcare providers to stop performing all forms of the procedure, including reinfibulation after childbirth and symbolic "nicking" of the clitoral hood. The opposition to the practice is not without its critics, particularly among anthropologists, who have raised difficult questions about cultural relativism and the universality of human rights. Until the 1980s FGM was widely known in English as female circumcision, implying an equivalence in severity with male circumcision. From 1929 the Kenya Missionary Council referred to it as the sexual mutilation of women, following the lead of Marion Scott Stevenson, a Church of Scotland missionary. References to the practice as mutilation increased throughout the 1970s. In 1975 Rose Oldfield Hayes, an American anthropologist, used the term "female genital mutilation" in the title of a paper in "American Ethnologist", and four years later Fran Hosken, an Austrian-American feminist writer, called it mutilation in her influential "The Hosken Report: Genital and Sexual Mutilation of Females". The Inter-African Committee on Traditional Practices Affecting the Health of Women and Children began referring to it as female genital mutilation in 1990, and the World Health Organization (WHO) followed suit in 1991. Other English terms include "female genital cutting" (FGC) and "female genital mutilation/cutting" (FGM/C), preferred by those who work with practitioners. In countries where FGM is common, the practice's many variants are reflected in dozens of terms, often alluding to purification. In the Bambara language, spoken mostly in Mali, it is known as "bolokoli" ("washing your hands") and in the Igbo language in eastern Nigeria as "isa aru" or "iwu aru" ("having your bath"). A common Arabic term for purification has the root "t-h-r", used for male and female circumcision ("tahur" and "tahara"). It is also known in Arabic as "khafḍ" or "khifaḍ". Communities may refer to FGM as "pharaonic" for infibulation and "sunna" circumcision for everything else. "Sunna" means "path or way" in Arabic and refers to the tradition of Muhammad, although none of the procedures are required within Islam. The term "infibulation" derives from "fibula", Latin for clasp; the Ancient Romans reportedly fastened clasps through the foreskins or labia of slaves to prevent sexual intercourse. The surgical infibulation of women came to be known as pharaonic circumcision in Sudan, and as Sudanese circumcision in Egypt. In Somalia it is known simply as "qodob" ("to sew up"). The procedures are generally performed by a traditional circumciser (cutter or "exciseuse") in the girls' homes, with or without anaesthesia. 
The cutter is usually an older woman, but in communities where the male barber has assumed the role of health worker he will perform FGM too. When traditional cutters are involved, non-sterile devices are likely to be used, including knives, razors, scissors, glass, sharpened rocks and fingernails. According to a nurse in Uganda, quoted in 2007 in "The Lancet", a cutter would use one knife on up to 30 girls at a time. Health professionals are often involved in Egypt, Kenya, Indonesia and Sudan. In Egypt 77 percent of FGM procedures, and in Indonesia over 50 percent, were performed by medical professionals as of 2008 and 2016. Women in Egypt reported in 1995 that a local anaesthetic had been used on their daughters in 60 percent of cases, a general anaesthetic in 13 percent, and neither in 25 percent (two percent were missing/don't know). The WHO, UNICEF and UNFPA issued a joint statement in 1997 defining FGM as "all procedures involving partial or total removal of the external female genitalia or other injury to the female genital organs whether for cultural or other non-therapeutic reasons". The procedures vary considerably according to ethnicity and individual practitioners. During a 1998 survey in Niger, women responded with over 50 different terms when asked what was done to them. Translation problems are compounded by the women's confusion over which type of FGM they experienced, or even whether they experienced it. Several studies have suggested that survey responses are unreliable. A 2003 study in Ghana found that in 1995 four percent said they had not undergone FGM, but in 2000 said they had, while 11 percent switched in the other direction. In Tanzania in 2005, 66 percent reported FGM, but a medical exam found that 73 percent had undergone it. In Sudan in 2006, a significant percentage of infibulated women and girls reported a less severe type. Standard questionnaires from United Nations bodies ask women whether they or their daughters have undergone the following: (1) cut, no flesh removed (symbolic nicking); (2) cut, some flesh removed; (3) sewn closed; or (4) type not determined/unsure/doesn't know. The most common procedures fall within the "cut, some flesh removed" category and involve complete or partial removal of the clitoral glans. The World Health Organization (a UN agency) created a more detailed typology: Types I–III vary in how much tissue is removed; Type III is equivalent to the UNICEF category "sewn closed"; and Type IV describes miscellaneous procedures, including symbolic nicking. Type I is "partial or total removal of the clitoris and/or the prepuce". Type Ia involves removal of the clitoral hood only. This is rarely performed alone. The more common procedure is Type Ib (clitoridectomy), the complete or partial removal of the clitoral glans (the visible tip of the clitoris) and clitoral hood. The circumciser pulls the clitoral glans with her thumb and index finger and cuts it off. Type II (excision) is the complete or partial removal of the inner labia, with or without removal of the clitoral glans and outer labia. Type IIa is removal of the inner labia; Type IIb, removal of the clitoral glans and inner labia; and Type IIc, removal of the clitoral glans, inner and outer labia. "Excision" in French can refer to any form of FGM. Type III (infibulation or pharaonic circumcision), the "sewn closed" category, involves the removal of the external genitalia and fusion of the wound. 
The inner and/or outer labia are cut away, with or without removal of the clitoral glans. Type III is found largely in northeast Africa, particularly Djibouti, Eritrea, Ethiopia, Somalia, and Sudan (although not in South Sudan). According to one 2008 estimate, over eight million women in Africa are living with Type III FGM. According to UNFPA in 2010, 20 percent of women with FGM have been infibulated. In Somalia "[t]he child is made to squat on a stool or mat facing the circumciser at a height that offers her a good view of the parts to be handled. ... adult helpers grab and pull apart the legs of the girl. ... If available, this is the stage at which a local anaesthetic would be used": The element of speed and surprise is vital and the circumciser immediately grabs the clitoris by pinching it between her nails aiming to amputate it with a slash. The organ is then shown to the senior female relatives of the child who will decide whether the amount that has been removed is satisfactory or whether more is to be cut off. After the clitoris has been satisfactorily amputated ... the circumciser can proceed with the total removal of the labia minora and the paring of the inner walls of the labia majora. Since the entire skin on the inner walls of the labia majora has to be removed all the way down to the perineum, this becomes a messy business. By now, the child is screaming, struggling, and bleeding profusely, which makes it difficult for the circumciser to hold with bare fingers and nails the slippery skin and parts that are to be cut or sutured together. ... Having ensured that sufficient tissue has been removed to allow the desired fusion of the skin, the circumciser pulls together the opposite sides of the labia majora, ensuring that the raw edges where the skin has been removed are well approximated. The wound is now ready to be stitched or for thorns to be applied. If a needle and thread are being used, close tight sutures will be placed to ensure that a flap of skin covers the vulva and extends from the mons veneris to the perineum, and which, after the wound heals, will form a bridge of scar tissue that will totally occlude the vaginal introitus. The amputated parts might be placed in a pouch for the girl to wear. A single hole of 2–3 mm is left for the passage of urine and menstrual fluid. The vulva is closed with surgical thread, or agave or acacia thorns, and might be covered with a poultice of raw egg, herbs and sugar. To help the tissue bond, the girl's legs are tied together, often from hip to ankle; the bindings are usually loosened after a week and removed after two to six weeks. If the remaining hole is too large in the view of the girl's family, the procedure is repeated. The vagina is opened for sexual intercourse, for the first time either by a midwife with a knife or by the woman's husband with his penis. In some areas, including Somaliland, female relatives of the bride and groom might watch the opening of the vagina to check that the girl is a virgin. The woman is opened further for childbirth ("defibulation" or "deinfibulation"), and closed again afterwards ("reinfibulation"). Reinfibulation can involve cutting the vagina again to restore the pinhole size of the first infibulation. This might be performed before marriage, and after childbirth, divorce and widowhood. 
Hanny Lightfoot-Klein interviewed hundreds of women and men in Sudan in the 1980s about sexual intercourse with Type III: The penetration of the bride's infibulation takes anywhere from 3 or 4 days to several months. Some men are unable to penetrate their wives at all (in my study over 15%), and the task is often accomplished by a midwife under conditions of great secrecy, since this reflects negatively on the man's potency. Some who are unable to penetrate their wives manage to get them pregnant in spite of the infibulation, and the woman's vaginal passage is then cut open to allow birth to take place. ... Those men who do manage to penetrate their wives do so often, or perhaps always, with the help of the "little knife". This creates a tear which they gradually rip more and more until the opening is sufficient to admit the penis. Type IV is "[a]ll other harmful procedures to the female genitalia for non-medical purposes", including pricking, piercing, incising, scraping and cauterization. It includes nicking of the clitoris (symbolic circumcision), burning or scarring the genitals, and introducing substances into the vagina to tighten it. Labia stretching is also categorized as Type IV. Common in southern and eastern Africa, the practice is supposed to enhance sexual pleasure for the man and add to the sense of a woman as a closed space. From the age of eight, girls are encouraged to stretch their inner labia using sticks and massage. Girls in Uganda are told they may have difficulty giving birth without stretched labia. A definition of FGM from the WHO in 1995 included gishiri cutting and angurya cutting, found in Nigeria and Niger. These were removed from the WHO's 2008 definition because of insufficient information about prevalence and consequences. Angurya cutting is excision of the hymen, usually performed seven days after birth. Gishiri cutting involves cutting the vagina's front or back wall with a blade or penknife, performed in response to infertility, obstructed labour and other conditions. In a study by Nigerian physician Mairo Usman Mandara, over 30 percent of women with gishiri cuts were found to have vesicovaginal fistulae (holes that allow urine to seep into the vagina). FGM harms women's physical and emotional health throughout their lives. It has no known health benefits. The short-term and late complications depend on the type of FGM, whether the practitioner has had medical training, and whether they used antibiotics and sterilized or single-use surgical instruments. In the case of Type III, other factors include how small a hole was left for the passage of urine and menstrual blood, whether surgical thread was used instead of agave or acacia thorns, and whether the procedure was performed more than once (for example, to close an opening regarded as too wide or re-open one too small). Common short-term complications include swelling, excessive bleeding, pain, urine retention, and healing problems/wound infection. A 2014 systematic review of 56 studies suggested that over one in ten girls and women undergoing any form of FGM, including symbolic nicking of the clitoris (Type IV), experience immediate complications, although the risks increased with Type III. The review also suggested that there was under-reporting. Other short-term complications include fatal bleeding, anaemia, urinary infection, septicaemia, tetanus, gangrene, necrotizing fasciitis (flesh-eating disease), and endometritis. 
It is not known how many girls and women die as a result of the practice, because complications may not be recognized or reported. The practitioners' use of shared instruments is thought to aid the transmission of hepatitis B, hepatitis C and HIV, although no epidemiological studies have shown this. Late complications vary depending on the type of FGM. They include the formation of scars and keloids that lead to strictures and obstruction, epidermoid cysts that may become infected, and neuroma formation (growth of nerve tissue) involving nerves that supplied the clitoris. An infibulated girl may be left with an opening as small as 2–3 mm, which can cause prolonged, drop-by-drop urination, pain while urinating, and a feeling of needing to urinate all the time. Urine may collect underneath the scar, leaving the area under the skin constantly wet, which can lead to infection and the formation of small stones. The opening is larger in women who are sexually active or have given birth by vaginal delivery, but the urethra opening may still be obstructed by scar tissue. Vesicovaginal or rectovaginal fistulae can develop (holes that allow urine or faeces to seep into the vagina). This and other damage to the urethra and bladder can lead to infections and incontinence, pain during sexual intercourse and infertility. Painful periods are common because of the obstruction to the menstrual flow, and blood can stagnate in the vagina and uterus. Complete obstruction of the vagina can result in hematocolpos and hematometra (where the vagina and uterus fill with menstrual blood). The swelling of the abdomen that results from the collection of fluid, together with the lack of menstruation, can lead to suspicion of pregnancy; Asma El Dareer, a Sudanese physician, reported in 1979 that a girl in Sudan with this condition was killed by her family. FGM may place women at higher risk of problems during pregnancy and childbirth, which are more common with the more extensive FGM procedures. Infibulated women may try to make childbirth easier by eating less during pregnancy to reduce the baby's size. In women with vesicovaginal or rectovaginal fistulae, it is difficult to obtain clear urine samples as part of prenatal care, making the diagnosis of conditions such as pre-eclampsia harder. Cervical evaluation during labour may be impeded and labour prolonged or obstructed. Third-degree laceration (tears), anal-sphincter damage and emergency caesarean section are more common in infibulated women. Neonatal mortality is increased. The WHO estimated in 2006 that an additional 10–20 babies die per 1,000 deliveries as a result of FGM. The estimate was based on a study conducted on 28,393 women attending delivery wards at 28 obstetric centres in Burkina Faso, Ghana, Kenya, Nigeria, Senegal and Sudan. In those settings all types of FGM were found to pose an increased risk of death to the baby: 15 percent higher for Type I, 32 percent for Type II, and 55 percent for Type III. The reasons for this were unclear, but may be connected to genital and urinary tract infections and the presence of scar tissue. According to the study, FGM was associated with an increased risk to the mother of damage to the perineum and excessive blood loss, as well as a need to resuscitate the baby, and stillbirth, perhaps because of a long labour. According to a 2015 systematic review there is little high-quality information available on the psychological effects of FGM.
Several small studies have concluded that women with FGM suffer from anxiety, depression and post-traumatic stress disorder. Feelings of shame and betrayal can develop when women leave the culture that practises FGM and learn that their condition is not the norm, but within the practising culture they may view their FGM with pride, because for them it signifies beauty, respect for tradition, chastity and hygiene. Studies on sexual function have also been small. A 2013 meta-analysis of 15 studies involving 12,671 women from seven countries concluded that women with FGM were twice as likely to report no sexual desire and 52 percent more likely to report dyspareunia (painful sexual intercourse). One third reported reduced sexual feelings. Aid agencies define the prevalence of FGM as the percentage of the 15–49 age group that has experienced it. These figures are based on nationally representative household surveys known as Demographic and Health Surveys (DHS), developed by Macro International and funded mainly by the United States Agency for International Development (USAID), and Multiple Indicator Cluster Surveys (MICS) conducted with financial and technical help from UNICEF. These surveys have been carried out in Africa, Asia, Latin America and elsewhere roughly every five years, since 1984 and 1995 respectively. The first to ask about FGM was the 1989–1990 DHS in northern Sudan. The first publication to estimate FGM prevalence based on DHS data (in seven countries) was by Dara Carr of Macro International in 1997. Women are asked during the surveys: "Was the genital area just nicked/cut without removing any flesh? Was any flesh (or something) removed from the genital area? Was your genital area sewn?" Most women report "cut, some flesh removed" (Types I and II). Type I is the most common form in Egypt, and in the southern parts of Nigeria. Type III (infibulation) is concentrated in northeastern Africa, particularly Djibouti, Eritrea, Somalia and Sudan. In surveys in 2002–2006, 30 percent of cut girls in Djibouti, 38 percent in Eritrea, and 63 percent in Somalia had experienced Type III. There is also a high prevalence of infibulation among girls in Niger and Senegal, and in 2013 it was estimated that in Nigeria three percent of the 0–14 age group had been infibulated. The type of procedure is often linked to ethnicity. In Eritrea, for example, a survey in 2002 found that all Hedareb girls had been infibulated, compared with two percent of the Tigrinya, most of whom fell into the "cut, no flesh removed" category. FGM is found mostly in what Gerry Mackie called an "intriguingly contiguous" zone in Africa—east to west from Somalia to Senegal, and north to south from Egypt to Tanzania. Nationally representative figures are available for 27 countries in Africa, as well as Indonesia, Iraqi Kurdistan and Yemen. Over 200 million women and girls are thought to be living with FGM in those 30 countries. The highest concentrations among the 15–49 age group are in Somalia (98 percent), Guinea (97 percent), Djibouti (93 percent), Egypt (91 percent) and Sierra Leone (90 percent). As of 2013, 27.2 million women had undergone FGM in Egypt, 23.8 million in Ethiopia, and 19.9 million in Nigeria. There is also a high concentration in Indonesia, where Type I (clitoridectomy) and Type IV (symbolic nicking) are practised. The Indonesian Ministry of Health and the Indonesian Ulema Council both say that the clitoris should not be cut. The prevalence rate for the 0–11 group in Indonesia is 49 percent (13.4 million).
Smaller studies or anecdotal reports suggest that FGM is also practised in Colombia, the Congo, Malaysia, Oman, Peru, Saudi Arabia, Sri Lanka, and the United Arab Emirates, by the Bedouin in Israel, in Rahmah, Jordan, and by the Dawoodi Bohra in India. It is also found within immigrant communities around the world. Prevalence figures for the 15–19 age group and younger show a downward trend. For example, Burkina Faso fell from 89 percent (1980) to 58 percent (2010); Egypt from 97 percent (1985) to 70 percent (2015); and Kenya from 41 percent (1984) to 11 percent (2014). From 2010, household surveys have asked women about the FGM status of all their living daughters. The highest concentrations among girls aged 0–14 were in Gambia (56 percent), Mauritania (54 percent), Indonesia (49 percent for 0–11) and Guinea (46 percent). The figures suggest that a girl was one third less likely in 2014 to undergo FGM than she was 30 years ago. If the rate of decline continues, the number of girls cut will nevertheless rise from 3.6 million a year in 2013 to 4.1 million in 2050 because of population growth. Surveys have found FGM to be more common in rural areas, less common in most countries among girls from the wealthiest homes, and (except in Sudan and Somalia) less common in girls whose mothers had access to primary or secondary/higher education. In Somalia and Sudan the situation was reversed: in Somalia the mothers' access to secondary/higher education was accompanied by a rise in prevalence of FGM in their daughters, and in Sudan access to any education was accompanied by a rise. FGM is not invariably a rite of passage between childhood and adulthood, but is often performed on much younger children. Girls are most commonly cut at some point between shortly after birth and age 15. In half the countries for which national figures were available in 2000–2010, most girls had been cut by age five. Over 80 percent (of those cut) are cut before the age of five in Nigeria, Mali, Eritrea, Ghana and Mauritania. The 1997 Demographic and Health Survey in Yemen found that 76 percent of girls had been cut within two weeks of birth. The percentage is reversed in Somalia, Egypt, Chad and the Central African Republic, where over 80 percent (of those cut) are cut between the ages of five and 14. Just as the type of FGM is often linked to ethnicity, so is the mean age. In Kenya, for example, the Kisi cut around age 10 and the Kamba at 16. A country's national prevalence often reflects a high sub-national prevalence among certain ethnicities, rather than a widespread practice. In Iraq, for example, FGM is found mostly among the Kurds in Erbil (58 percent prevalence within age group 15–49, as of 2011), Sulaymaniyah (54 percent) and Kirkuk (20 percent), giving the country a national prevalence of eight percent. The practice is sometimes an ethnic marker, but it may differ along national lines. For example, in the northeastern regions of Ethiopia and Kenya, which share a border with Somalia, the Somali people practise FGM at around the same rate as they do in Somalia. But in Guinea all Fulani women responding to a survey in 2012 said they had experienced FGM, against 12 percent of the Fulani in Chad, while in Nigeria the Fulani are the only large ethnic group in the country not to practise it. Dahabo Musa, a Somali woman, described infibulation in a 1988 poem as the "three feminine sorrows": the procedure itself, the wedding night when the woman is cut open, then childbirth when she is cut again.
Despite the evident suffering, it is women who organize all forms of FGM. Anthropologist Rose Oldfield Hayes wrote in 1975 that educated Sudanese men who did not want their daughters to be infibulated (preferring clitoridectomy) would find the girls had been sewn up after the grandmothers arranged a visit to relatives. Gerry Mackie has compared the practice to footbinding. Like FGM, footbinding was carried out on young girls, nearly universal where practised, tied to ideas about honour, chastity and appropriate marriage, and "supported and transmitted" by women. FGM practitioners see the procedures as marking not only ethnic boundaries but also gender difference. According to this view, male circumcision defeminizes men while FGM demasculinizes women. Fuambai Ahmadu, an anthropologist and member of the Kono people of Sierra Leone, who in 1992 underwent clitoridectomy as an adult during a Sande society initiation, argued in 2000 that it is a male-centred assumption that the clitoris is important to female sexuality. African female symbolism revolves instead around the concept of the womb. Infibulation draws on that idea of enclosure and fertility. "[G]enital cutting completes the social definition of a child's sex by eliminating external traces of androgyny," Janice Boddy wrote in 2007. "The female body is then covered, closed, and its productive blood bound within; the male body is unveiled, opened and exposed." In communities where infibulation is common, there is a preference for women's genitals to be smooth, dry and without odour, and both women and men may find the natural vulva repulsive. Some men seem to enjoy the effort of penetrating an infibulation. The local preference for dry sex causes women to introduce substances into the vagina to reduce lubrication, including leaves, tree bark, toothpaste and Vicks menthol rub. The WHO includes this practice within Type IV FGM, because the added friction during intercourse can cause lacerations and increase the risk of infection. Because of the smooth appearance of an infibulated vulva, there is also a belief that infibulation increases hygiene. Common reasons for FGM cited by women in surveys are social acceptance, religion, hygiene, preservation of virginity, marriageability and enhancement of male sexual pleasure. In a study in northern Sudan, published in 1983, only 17.4 percent of women opposed FGM (558 out of 3,210), and most preferred excision and infibulation over clitoridectomy. Attitudes are changing slowly. In Sudan in 2010, 42 percent of women who had heard of FGM said the practice should continue. In several surveys since 2006, over 50 percent of women in Mali, Guinea, Sierra Leone, Somalia, Gambia and Egypt supported FGM's continuance, while elsewhere in Africa, Iraq and Yemen most said it should end, although in several countries only by a narrow margin. Against the argument that women willingly choose FGM for their daughters, UNICEF calls the practice a "self-enforcing social convention" to which families feel they must conform to avoid uncut daughters facing social exclusion. Ellen Gruenbaum reported that, in Sudan in the 1970s, cut girls from an Arab ethnic group would mock uncut Zabarma girls with "Ya, Ghalfa!" ("Hey, unclean!"). The Zabarma girls would respond "Ya, mutmura!" (A "mutmara" was a storage pit for grain that was continually opened and closed, like an infibulated woman.) But despite throwing the insult back, the Zabarma girls would ask their mothers, "What's the matter? 
Don't we have razor blades like the Arabs?" Because of poor access to information, and because circumcisers downplay the causal connection, women may not associate the health consequences with the procedure. Lala Baldé, president of a women's association in Medina Cherif, a village in Senegal, told Mackie in 1998 that when girls fell ill or died, it was attributed to evil spirits. When informed of the causal relationship between FGM and ill health, Mackie wrote, the women broke down and wept. He argued that surveys taken before and after this sharing of information would show very different levels of support for FGM. The American non-profit group Tostan, founded by Molly Melching in 1991, introduced community-empowerment programs in several countries that focus on local democracy, literacy, and education about healthcare, giving women the tools to make their own decisions. In 1997, using the Tostan program, Malicounda Bambara in Senegal became the first village to abandon FGM. By 2018 over 8,000 communities in eight countries had pledged to abandon FGM and child marriage. Surveys have shown a widespread belief, particularly in Mali, Mauritania, Guinea and Egypt, that FGM is a religious requirement. Gruenbaum has argued that practitioners may not distinguish between religion, tradition and chastity, making it difficult to interpret the data. FGM's origins in northeastern Africa are pre-Islamic, but the practice became associated with Islam because of that religion's focus on female chastity and seclusion. There is no mention of it in the Quran. It is praised in a few "daʻīf" (weak) "hadith" (sayings attributed to Muhammad) as noble but not required. In 2007 the Al-Azhar Supreme Council of Islamic Research in Cairo ruled that FGM had "no basis in core Islamic law or any of its partial provisions". There is no mention of FGM in the Bible. Christian missionaries in Africa were among the first to object to FGM, but Christian communities in Africa do practise it. A 2013 UNICEF report identified 17 African countries in which at least 10 percent of Christian women and girls aged 15 to 49 had undergone FGM; in Niger 55 percent of Christian women and girls had experienced it, compared with two percent of their Muslim counterparts. The only Jewish group known to have practised it are the Beta Israel of Ethiopia. Judaism requires male circumcision, but does not allow FGM. FGM is also practised by animist groups, particularly in Guinea and Mali. The practice's origins are unknown. Gerry Mackie has suggested that, because FGM's east-west, north-south distribution in Africa meets in Sudan, infibulation may have begun there with the Meroite civilization (c. 800 BCE – c. 350 CE), before the rise of Islam, to increase confidence in paternity. According to historian Mary Knight, Spell 1117 (c. 1991–1786 BCE) of the Ancient Egyptian Coffin Texts may refer in hieroglyphs to an uncircumcised girl ("'m't"). The spell was found on the sarcophagus of Sit-hedjhotep, now in the Egyptian Museum, and dates to Egypt's Middle Kingdom. (Paul F. O'Rourke argues that "'m't" probably refers instead to a menstruating woman.) The proposed circumcision of an Egyptian girl, Tathemis, is also mentioned on a Greek papyrus, from 163 BCE, in the British Museum: "Sometime after this, Nephoris [Tathemis's mother] defrauded me, being anxious that it was time for Tathemis to be circumcised, as is the custom among the Egyptians." The examination of mummies has shown no evidence of FGM.
Citing the Australian pathologist Grafton Elliot Smith, who examined hundreds of mummies in the early 20th century, Knight writes that the genital area may resemble Type III because during mummification the skin of the outer labia was pulled toward the anus to cover the pudendal cleft, possibly to prevent sexual violation. It was similarly not possible to determine whether Types I or II had been performed, because soft tissues had deteriorated or been removed by the embalmers. The Greek geographer Strabo (c. 64 BCE – c. 23 CE) wrote about FGM after visiting Egypt around 25 BCE: "This is one of the customs most zealously pursued by them [the Egyptians]: to raise every child that is born and to circumcise ["peritemnein"] the males and excise ["ektemnein"] the females ..." Philo of Alexandria (c. 20 BCE – 50 CE) also made reference to it: "the Egyptians by the custom of their country circumcise the marriageable youth and maid in the fourteenth (year) of their age, when the male begins to get seed, and the female to have a menstrual flow." It is mentioned briefly in a work attributed to the Greek physician Galen (129 – c. 200 CE): "When [the clitoris] sticks out to a great extent in their young women, Egyptians consider it appropriate to cut it out." Another Greek physician, Aëtius of Amida (mid-5th to mid-6th century CE), offered more detail in book 16 of his "Sixteen Books on Medicine", citing the physician Philomenes. The procedure was performed in case the clitoris, or "nymphê", grew too large or triggered sexual desire when rubbing against clothing. "On this account, it seemed proper to the Egyptians to remove it before it became greatly enlarged," Aëtius wrote, "especially at that time when the girls were about to be married": The surgery is performed in this way: Have the girl sit on a chair while a muscled young man standing behind her places his arms below the girl's thighs. Have him separate and steady her legs and whole body. Standing in front and taking hold of the clitoris with a broad-mouthed forceps in his left hand, the surgeon stretches it outward, while with the right hand, he cuts it off at the point next to the pincers of the forceps. It is proper to let a length remain from that cut off, about the size of the membrane that's between the nostrils, so as to take away the excess material only; as I have said, the part to be removed is at that point just above the pincers of the forceps. Because the clitoris is a skinlike structure and stretches out excessively, do not cut off too much, as a urinary fistula may result from cutting such large growths too deeply. The genital area was then cleaned with a sponge, frankincense powder and wine or cold water, and wrapped in linen bandages dipped in vinegar, until the seventh day when calamine, rose petals, date pits or a "genital powder made from baked clay" might be applied. Whatever the practice's origins, infibulation became linked to slavery. Mackie cites the Portuguese missionary João dos Santos, who in 1609 wrote of a group near Mogadishu who had a "custome to sew up their Females, especially their slaves being young to make them unable for conception, which makes these slaves sell dearer, both for their chastitie, and for better confidence which their Masters put in them". Thus, Mackie argues, a "practice associated with shameful female slavery came to stand for honor". Gynaecologists in 19th-century Europe and the United States removed the clitoris to treat insanity and masturbation. 
A British doctor, Robert Thomas, suggested clitoridectomy as a cure for nymphomania in 1813. The first reported clitoridectomy in the West, described in "The Lancet" in 1825, was performed in 1822 in Berlin by Karl Ferdinand von Graefe on a 15-year-old girl who was masturbating excessively. Isaac Baker Brown, an English gynaecologist, president of the Medical Society of London and co-founder in 1845 of St. Mary's Hospital, believed that masturbation, or "unnatural irritation" of the clitoris, caused hysteria, spinal irritation, fits, idiocy, mania and death. He therefore "set to work to remove the clitoris whenever he had the opportunity of doing so", according to his obituary. Brown performed several clitoridectomies between 1859 and 1866. In the United States, J. Marion Sims followed Brown's work and in 1862 slit the neck of a woman's uterus and amputated her clitoris, "for the relief of the nervous or hysterical condition as recommended by Baker Brown". When Brown published his views in "On the Curability of Certain Forms of Insanity, Epilepsy, Catalepsy, and Hysteria in Females" (1866), doctors in London accused him of quackery and expelled him from the Obstetrical Society. Later in the 19th century, A. J. Bloch, a surgeon in New Orleans, removed the clitoris of a two-year-old girl who was reportedly masturbating. According to a 1985 paper in the "Obstetrical & Gynecological Survey", clitoridectomy was performed in the United States into the 1960s to treat hysteria, erotomania and lesbianism. From the mid-1950s, James Burt, a gynaecologist in Dayton, Ohio, performed non-standard repairs of episiotomies after childbirth, adding more stitches to make the vaginal opening smaller. From 1966 until 1989, he performed "love surgery" by cutting women's pubococcygeus muscle, repositioning the vagina and urethra, and removing the clitoral hood, thereby making their genital area more appropriate, in his view, for intercourse in the missionary position. "Women are structurally inadequate for intercourse," he wrote; he said he would turn them into "horny little mice". In the 1960s and 1970s he performed these procedures without consent while repairing episiotomies and performing hysterectomies and other surgery; he said he had performed a variation of them on 4,000 women by 1975. Following complaints, he was required in 1989 to stop practising medicine in the United States. Protestant missionaries in British East Africa (present-day Kenya) began campaigning against FGM in the early 20th century, when Dr. John Arthur joined the Church of Scotland Mission (CSM) in Kikuyu. An important ethnic marker, the practice was known by the Kikuyu, the country's main ethnic group, as "irua" for both girls and boys. It involved excision (Type II) for girls and removal of the foreskin for boys. Unexcised Kikuyu women ("irugu") were outcasts. Jomo Kenyatta, general secretary of the Kikuyu Central Association and later Kenya's first prime minister, wrote in 1938 that, for the Kikuyu, the institution of FGM was the "'conditio sine qua non' of the whole teaching of tribal law, religion and morality". No proper Kikuyu man or woman would marry or have sexual relations with someone who was not circumcised. A woman's responsibilities toward the tribe began with her initiation. Her age and place within tribal history was traced to that day, and the group of girls with whom she was cut was named according to current events, an oral tradition that allowed the Kikuyu to track people and events going back hundreds of years.
Beginning with the CSM mission in 1925, several missionary churches declared that FGM was prohibited for African Christians. The CSM announced that Africans practising it would be excommunicated, which resulted in hundreds leaving or being expelled. The stand-off turned FGM into a focal point of the Kenyan independence movement; the 1929–1931 period is known in the country's historiography as the female circumcision controversy. In 1929 the Kenya Missionary Council began referring to FGM as the "sexual mutilation of women", rather than circumcision, and a person's stance toward the practice became a test of loyalty, either to the Christian churches or to the Kikuyu Central Association. Hulda Stumpf, an American missionary with the Africa Inland Mission who opposed FGM in the girls' school she helped to run, was murdered in 1930. Edward Grigg, the governor of Kenya, told the British Colonial Office that the killer, who was never identified, had tried to circumcise her. In 1956 the council of male elders (the "Njuri Nchecke") in Meru announced a ban on FGM. Over the next three years, thousands of girls cut each other's genitals with razor blades as a symbol of defiance. The movement came to be known as "Ngaitana" ("I will circumcise myself"), because to avoid naming their friends the girls said they had cut themselves. Historian Lynn Thomas described the episode as significant in the history of FGM because it made clear that its victims were also its perpetrators. The first known non-colonial campaign against FGM began in Egypt in the 1920s, when the Egyptian Doctors' Society called for a ban. There was a parallel campaign in Sudan, run by religious leaders and British women. Infibulation was banned there in 1946, but the law was unpopular and barely enforced. The Egyptian government banned infibulation in state-run hospitals in 1959, but allowed partial clitoridectomy if parents requested it. (Egypt banned FGM entirely in 2007.) In 1959, the UN asked the WHO to investigate FGM, but the latter responded that it was not a medical matter. Feminists took up the issue throughout the 1970s. The Egyptian physician and feminist Nawal El Saadawi criticized FGM in her book "Women and Sex" (1972); the book was banned in Egypt and El Saadawi lost her job as director general of public health. She followed up with a chapter, "The Circumcision of Girls", in her book "The Hidden Face of Eve: Women in the Arab World" (1980), which described her own clitoridectomy when she was six years old: I did not know what they had cut off from my body, and I did not try to find out. I just wept, and called out to my mother for help. But the worst shock of all was when I looked around and found her standing by my side. Yes, it was her, I could not be mistaken, in flesh and blood, right in the midst of these strangers, talking to them and smiling at them, as though they had not participated in slaughtering her daughter just a few moments ago. In 1975, Rose Oldfield Hayes, an American social scientist, became the first female academic to publish a detailed account of FGM, aided by her ability to discuss it directly with women in Sudan. Her article in "American Ethnologist" called it "female genital mutilation", rather than female circumcision, and brought it to wider academic attention. Edna Adan Ismail, who worked at the time for the Somalia Ministry of Health, discussed the health consequences of FGM in 1977 with the Somali Women's Democratic Organization. 
Two years later Fran Hosken, an Austrian-American feminist, published "The Hosken Report: Genital and Sexual Mutilation of Females" (1979), the first to offer global figures. She estimated that 110,529,000 women in 20 African countries had experienced FGM. The figures were speculative but consistent with later surveys. Describing FGM as a "training ground for male violence", Hosken accused female practitioners of "participating in the destruction of their own kind". The language caused a rift between Western and African feminists; African women boycotted a session featuring Hosken during the UN's Mid-Decade Conference on Women in Copenhagen in July 1980. In 1979, the WHO held a seminar, "Traditional Practices Affecting the Health of Women and Children", in Khartoum, Sudan, and in 1981, also in Khartoum, 150 academics and activists signed a pledge to fight FGM after a workshop held by the Babiker Badri Scientific Association for Women's Studies (BBSAWS), "Female Circumcision Mutilates and Endangers Women – Combat it!" Another BBSAWS workshop in 1984 invited the international community to write a joint statement for the United Nations. It recommended that the "goal of all African women" should be the eradication of FGM and that, to sever the link between FGM and religion, clitoridectomy should no longer be referred to as "sunna". The Inter-African Committee on Traditional Practices Affecting the Health of Women and Children, founded in 1984 in Dakar, Senegal, called for an end to the practice, as did the UN's World Conference on Human Rights in Vienna in 1993. The conference listed FGM as a form of violence against women, marking it as a human-rights violation, rather than a medical issue. Throughout the 1990s and 2000s governments in Africa and the Middle East passed legislation banning or restricting FGM. In 2003 the African Union ratified the Maputo Protocol on the rights of women, which supported the elimination of FGM. By 2015 laws restricting FGM had been passed in at least 23 of the 27 African countries in which it is concentrated, although several fell short of a ban. In December 1993, the United Nations General Assembly included FGM in resolution 48/104, the Declaration on the Elimination of Violence Against Women, and from 2003 sponsored International Day of Zero Tolerance for Female Genital Mutilation, held every 6 February. UNICEF began in 2003 to promote an evidence-based social norms approach, using ideas from game theory about how communities reach decisions about FGM, and building on the work of Gerry Mackie on the demise of footbinding in China. In 2005 the UNICEF Innocenti Research Centre in Florence published its first report on FGM. UNFPA and UNICEF launched a joint program in Africa in 2007 to reduce FGM by 40 percent within the 0–15 age group and eliminate it from at least one country by 2012, goals that were not met and which they later described as unrealistic. In 2008 several UN bodies recognized FGM as a human-rights violation, and in 2010 the UN called upon healthcare providers to stop carrying out the procedures, including reinfibulation after childbirth and symbolic nicking. In 2012 the General Assembly passed resolution 67/146, "Intensifying global efforts for the elimination of female genital mutilations". Immigration spread the practice to Australia, New Zealand, Europe and North America, all of which outlawed it entirely or restricted it to consenting adults.
Sweden outlawed FGM in 1982 with the "Act Prohibiting the Genital Mutilation of Women", the first Western country to do so. Several former colonial powers, including Belgium, Britain, France and the Netherlands, introduced new laws or made clear that it was covered by existing legislation. Legislation banning FGM has been passed in 33 countries outside Africa and the Middle East. In the United States an estimated 513,000 women and girls had experienced FGM or were at risk as of 2012. McDonnell Douglas F-4 Phantom II The McDonnell Douglas F-4 Phantom II is a tandem two-seat, twin-engine, all-weather, long-range supersonic jet interceptor and fighter-bomber originally developed for the United States Navy by McDonnell Aircraft. It first entered service in 1960 with the U.S. Navy. Proving highly adaptable, it was also adopted by the U.S. Marine Corps and the U.S. Air Force, and by the mid-1960s had become a major part of their air wings. The Phantom is a large fighter with a top speed of over Mach 2.2. It can carry more than 18,000 pounds (8,400 kg) of weapons on nine external hardpoints, including air-to-air missiles, air-to-ground missiles, and various bombs. The F-4, like other interceptors of its time, was designed without an internal cannon. Later models incorporated an M61 Vulcan rotary cannon. Beginning in 1959, it set 15 world records for in-flight performance, including an absolute speed record and an absolute altitude record. The F-4 was used extensively during the Vietnam War. It served as the principal air superiority fighter for both the Navy and Air Force, and became important in the ground-attack and aerial reconnaissance roles late in the war. The Phantom has the distinction of being the last U.S. fighter flown by pilots who attained ace status in the 20th century. During the Vietnam War, one U.S. Air Force pilot, two weapon systems officers (WSOs), one U.S. Navy pilot and one radar intercept officer (RIO) became aces by achieving five aerial kills against enemy fighter aircraft. The F-4 continued to form a major part of U.S. military air power throughout the 1970s and 1980s, being gradually replaced by more modern aircraft such as the F-15 Eagle and F-16 Fighting Falcon in the U.S. Air Force, the F-14 Tomcat in the U.S. Navy, and the F/A-18 Hornet in the U.S. Navy and U.S. Marine Corps. The F-4 Phantom II remained in use by the U.S. in the reconnaissance and Wild Weasel (Suppression of Enemy Air Defenses) roles in the 1991 Gulf War, finally leaving service in 1996. It was also the only aircraft used by both U.S. flight demonstration teams: the USAF Thunderbirds (F-4E) and the US Navy Blue Angels (F-4J). The F-4 was also operated by the armed forces of 11 other nations. Israeli Phantoms saw extensive combat in several Arab–Israeli conflicts, while Iran used its large fleet of Phantoms, acquired before the fall of the Shah, in the Iran–Iraq War. Phantom production ran from 1958 to 1981, with a total of 5,195 built, making it the most produced American supersonic military aircraft. As of 2018, 60 years after its first flight, the F-4 remains in service with Iran, Japan, South Korea, Greece and Turkey. The aircraft has most recently seen service against the Islamic State group in the Middle East. In 1952, McDonnell's Chief of Aerodynamics, Dave Lewis, was appointed by CEO Jim McDonnell to be the company's preliminary design manager.
With no new aircraft competitions on the horizon, internal studies concluded the Navy had the greatest need for a new and different aircraft type: an attack fighter. In 1953, McDonnell Aircraft began work on revising its F3H Demon naval fighter, seeking expanded capabilities and better performance. The company developed several projects including a variant powered by a Wright J67 engine, and variants powered by two Wright J65 engines, or two General Electric J79 engines. The J79-powered version promised a top speed of Mach 1.97. On 19 September 1953, McDonnell approached the United States Navy with a proposal for the "Super Demon". Uniquely, the aircraft was to be modular—it could be fitted with one- or two-seat noses for different missions, with different nose cones to accommodate radar, photo cameras, four 20 mm (.79 in) cannon, or 56 FFAR unguided rockets in addition to the nine hardpoints under the wings and the fuselage. The Navy was sufficiently interested to order a full-scale mock-up of the F3H-G/H, but felt that the upcoming Grumman XF9F-9 and Vought XF8U-1 already satisfied the need for a supersonic fighter. The McDonnell design was therefore reworked into an all-weather fighter-bomber with 11 external hardpoints for weapons, and on 18 October 1954, the company received a letter of intent for two YAH-1 prototypes. On 26 May 1955, four Navy officers arrived at the McDonnell offices and, within an hour, presented the company with an entirely new set of requirements. Because the Navy already had the Douglas A-4 Skyhawk for ground attack and F-8 Crusader for dogfighting, the project now had to fulfill the need for an all-weather fleet defense interceptor. A second crewman was added to operate the powerful radar. The XF4H-1 was designed to carry four semi-recessed AAM-N-6 Sparrow III radar-guided missiles, and to be powered by two J79-GE-8 engines. As in the McDonnell F-101 Voodoo, the engines sat low in the fuselage to maximize internal fuel capacity and ingested air through fixed geometry intakes. The thin-section wing had a leading edge sweep of 45° and was equipped with blown flaps for better low-speed handling. Wind tunnel testing had revealed lateral instability requiring the addition of 5° dihedral to the wings. To avoid redesigning the titanium central section of the aircraft, McDonnell engineers angled up only the outer portions of the wings by 12°, which averaged to the required 5° over the entire wingspan. The wings also received the distinctive "dogtooth" for improved control at high angles of attack. The all-moving tailplane was given 23° of anhedral to improve control at high angles of attack while still keeping the tailplane clear of the engine exhaust. In addition, air intakes were equipped with variable geometry ramps to regulate airflow to the engines at supersonic speeds. All-weather intercept capability was achieved thanks to the AN/APQ-50 radar. To accommodate carrier operations, the landing gear was designed to withstand high sink-rate landings, while the extendable nose strut could increase the angle of attack at takeoff. On 25 July 1955, the Navy ordered two XF4H-1 test aircraft and five YF4H-1 pre-production examples. The Phantom made its maiden flight on 27 May 1958 with Robert C. Little at the controls. A hydraulic problem precluded retraction of the landing gear but subsequent flights went more smoothly.
Early testing resulted in redesign of the air intakes, including the distinctive addition of 12,500 holes to "bleed off" the slow-moving boundary layer air from the surface of each intake ramp. Series production aircraft also featured splitter plates to divert the boundary layer away from the engine intakes. The aircraft soon squared off against the XF8U-3 Crusader III. Due to operator workload, the Navy wanted a two-seat aircraft, and on 17 December 1958 the F4H was declared a winner. Delays with the J79-GE-8 engines meant that the first production aircraft were fitted with J79-GE-2 and -2A engines, each having 16,100 lbf (71.8 kN) of afterburning thrust. In 1959, the Phantom began carrier suitability trials, with the first complete launch-recovery cycle performed on 15 February 1960. There were proposals to name the F4H "Satan" and "Mithras". In the end, the aircraft was given the less controversial name "Phantom II", the first "Phantom" being another McDonnell jet fighter, the FH-1 Phantom. The Phantom II was briefly given the designation F-110A and the name "Spectre" by the USAF, but neither name was officially used. Early in production, the radar was upgraded to the Westinghouse AN/APQ-72, an AN/APQ-50 with a larger radar antenna, necessitating the bulbous nose, and the canopy was reworked to improve visibility and make the rear cockpit less claustrophobic. During its career the Phantom underwent many changes in the form of numerous variants. The USAF received Phantoms as the result of Defense Secretary Robert McNamara's push to create a unified fighter for all branches of the military. After an F-4B won the "Operation Highspeed" fly-off against the Convair F-106 Delta Dart, the USAF borrowed two Naval F-4Bs, temporarily designating them F-110A "Spectre" in January 1962, and developed requirements for their own version. Unlike the Navy's focus on interception, the USAF emphasized a fighter-bomber role. With McNamara's unification of designations on 18 September 1962, the Phantom became the F-4, with the naval version designated F-4B and the USAF version F-4C. The first Air Force Phantom flew on 27 May 1963, exceeding Mach 2 on its maiden flight. The USN operated the F4H-1 (re-designated F-4A in 1962) with J79-GE-2 and -2A engines of 16,100 lbf (71.62 kN) thrust, with later builds receiving -8 engines. A total of 45 F-4As were built; none saw combat, and most ended up as test or training aircraft. The USN and USMC received the first definitive Phantom, the F-4B, which was equipped with the Westinghouse APQ-72 radar (pulse only), a Texas Instruments AAA-4 infra-red search and track pod under the nose and an AN/AJB-3 bombing system, and was powered by J79-GE-8, -8A and -8B engines of 10,900 lbf (48.5 kN) dry and 16,950 lbf (75.4 kN) afterburning (reheat) thrust; the first flight was on 25 March 1961. A total of 649 F-4Bs were built, with deliveries beginning in 1961 and the VF-121 Pacemakers receiving the first examples at NAS Miramar. The F-4J had improved air-to-air and ground-attack capability; deliveries began in 1966 and ended in 1972 with 522 built. It was equipped with J79-GE-10 engines with 17,844 lbf (79.374 kN) thrust, the Westinghouse AN/AWG-10 Fire Control System (making the F-4J the first fighter in the world with operational look-down/shoot-down capability), a new integrated missile control system and the AN/AJB-7 bombing system for expanded ground attack capability. The F-4N (updated F-4Bs) with smokeless engines and F-4J aerodynamic improvements started in 1972 under a U.S.
Navy-initiated refurbishment program called "Project Bee Line", with 228 converted by 1978. The F-4S model resulted from the refurbishment of 265 F-4Js with J79-GE-17 smokeless engines of 17,900 lbf (79.6 kN), AWG-10B radar with digitized circuitry for improved performance and reliability, Honeywell AN/AVG-8 Visual Target Acquisition Set or VTAS (world's first operational Helmet Sighting System), classified avionics improvements, airframe reinforcement and leading edge slats for enhanced maneuvering. The USMC also operated the RF-4B with reconnaissance cameras, with 46 built. Phantom II production ended in the United States in 1979 after 5,195 had been built (5,057 by McDonnell Douglas and 138 in Japan by Mitsubishi). Of these, 2,874 went to the USAF, 1,264 to the Navy and Marine Corps, and the rest to foreign customers. The last U.S.-built F-4 went to South Korea, while the last F-4 built was an F-4EJ built by Mitsubishi Heavy Industries in Japan and delivered on 20 May 1981. As of 2008, 631 Phantoms were in service worldwide, and Phantoms remained in use as target drones (specifically QF-4Cs) operated by the U.S. military until 21 December 2016, when the Air Force officially ended use of the type. To show off their new fighter, the Navy led a series of record-breaking flights early in Phantom development; all in all, the Phantom set 16 world records. Except for Skyburner, all records were achieved in unmodified production aircraft. Five of the speed records remained unbeaten until the F-15 Eagle appeared in 1975. The F-4 Phantom is a tandem-seat fighter-bomber designed as a carrier-based interceptor to fill the U.S. Navy's fleet defense fighter role. Innovations in the F-4 included an advanced pulse-Doppler radar and extensive use of titanium in its airframe. Despite imposing dimensions and a maximum takeoff weight of over 60,000 lb (27,000 kg), the F-4 has a top speed of Mach 2.23 and an initial climb rate of over 41,000 ft/min (210 m/s). The F-4's nine external hardpoints have a capability of up to 18,650 pounds (8,480 kg) of weapons, including air-to-air and air-to-surface missiles, and unguided, guided, and thermonuclear weapons. Like other interceptors of its day, the F-4 was designed without an internal cannon. The baseline performance of a Mach 2-class fighter with long range and a bomber-sized payload would be the template for the next generation of large and light/middle-weight fighters optimized for daylight air combat. In air combat, the Phantom's greatest advantage was its thrust, which permitted a skilled pilot to engage and disengage from the fight at will. As a massive fighter aircraft designed to fire radar-guided missiles from beyond visual range, it lacked the agility of its Soviet opponents and was subject to adverse yaw during hard maneuvering. Although the aircraft was thus subject to irrecoverable spins during aileron rolls, pilots reported it to be very communicative and easy to fly on the edge of its performance envelope. In 1972, the F-4E model was upgraded with leading edge slats on the wing, greatly improving high angle of attack maneuverability at the expense of top speed. The J79 engines produced noticeable amounts of black smoke (at mid-throttle/cruise settings), a severe disadvantage in that the enemy could spot the aircraft. This was solved on the F-4S, which was fitted with the -10A engine variant with a smokeless combustor. The F-4's biggest weakness, as it was initially designed, was its lack of an internal cannon.
For a brief period, doctrine held that turning combat would be impossible at supersonic speeds and little effort was made to teach pilots air combat maneuvering. In reality, engagements quickly became subsonic, as pilots would slow down in an effort to get behind their adversaries. Furthermore, the relatively new heat-seeking and radar-guided missiles at the time were frequently reported as unreliable, and pilots had to use multiple shots (also known as ripple-firing) just to hit one enemy fighter. To compound the problem, rules of engagement in Vietnam precluded long-range missile attacks in most instances, as visual identification was normally required. Many pilots found themselves on the tail of an enemy aircraft but too close to fire short-range Falcons or Sidewinders. Although by 1965 USAF F-4Cs began carrying SUU-16 external gunpods containing a 20 mm (.79 in) M61A1 Vulcan Gatling cannon, USAF cockpits were not equipped with lead-computing gunsights until the introduction of the SUU-23, virtually assuring a miss in a maneuvering fight. Some Marine Corps aircraft carried two pods for strafing. In addition to the loss of performance due to drag, combat showed the externally mounted cannon to be inaccurate unless frequently boresighted, yet far more cost-effective than missiles. The lack of a cannon was finally addressed by adding an internally mounted 20 mm (.79 in) M61A1 Vulcan on the F-4E. On 30 December 1960, the VF-121 "Pacemakers" at NAS Miramar became the first Phantom operator with its F4H-1Fs (F-4As). The VF-74 "Be-devilers" at NAS Oceana became the first deployable Phantom squadron when it received its F4H-1s (F-4Bs) on 8 July 1961. The squadron completed carrier qualifications in October 1961 and made the Phantom's first full carrier deployment between August 1962 and March 1963. The second deployable U.S. Atlantic Fleet squadron to receive F-4Bs was the VF-102 "Diamondbacks", who promptly took their new aircraft on a carrier shakedown cruise. The first deployable U.S. Pacific Fleet squadron to receive the F-4B was the VF-114 "Aardvarks", which participated in a September 1962 carrier cruise. By the time of the Tonkin Gulf incident, 13 of 31 deployable Navy squadrons were armed with the type. Navy F-4Bs made the first Phantom combat sortie of the Vietnam War on 5 August 1964, flying bomber escort in Operation Pierce Arrow. The first Phantom air-to-air victory of the war took place on 9 April 1965, when an F-4B from VF-96 "Fighting Falcons" piloted by Lieutenant (junior grade) Terence M. Murphy and his RIO, Ensign Ronald Fegan, shot down a Chinese MiG-17 "Fresco". The Phantom was then itself shot down; there continues to be controversy over whether it was downed by MiG guns or, as enemy reports later indicated, by an AIM-7 Sparrow III fired by one of Murphy's and Fegan's wingmen. On 17 June 1965, an F-4B from VF-21 "Freelancers" piloted by Commander Louis Page and Lieutenant John C. Smith shot down the first North Vietnamese MiG of the war. On 10 May 1972, Lieutenant Randy "Duke" Cunningham and Lieutenant (junior grade) William P. Driscoll flying an F-4J, call sign "Showtime 100", shot down three MiG-17s to become the first American flying aces of the war. Their fifth victory was believed at the time to be over a mysterious North Vietnamese ace, Colonel Nguyen Toon, now considered mythical.
On the return flight, the Phantom was damaged by an enemy surface-to-air missile. To avoid being captured, Cunningham and Driscoll flew their burning aircraft using only the rudder and afterburner (the damage to the aircraft rendered conventional control nearly impossible), until they could eject over water. During the war, U.S. Navy F-4 Phantom squadrons participated in 84 combat tours with F-4Bs, F-4Js, and F-4Ns. The Navy claimed 40 air-to-air victories at a cost of 73 Phantoms lost in combat (seven to enemy aircraft, 13 to SAMs, and 53 to AAA). An additional 54 Phantoms were lost in mishaps. The F-4Ns were retired by 1984, and by 1987 the last F-4Ss had been retired from U.S. Navy deployable squadrons. On 25 March 1986, an F-4S belonging to the VF-151 "Vigilantes" became the last active-duty U.S. Navy Phantom to launch from an aircraft carrier. On 18 October 1986, an F-4S from the VF-202 "Superheats", a Naval Reserve fighter squadron, made the last-ever Phantom carrier landing. In 1987, the last of the Naval Reserve-operated F-4S aircraft were replaced by F-14As. The last Phantoms in service with the Navy were QF-4 target drones operated by the Naval Air Warfare Center at NAS Point Mugu, California. These airframes were subsequently retired in 2004. The Marine Corps received its first F-4Bs in June 1962, with the "Black Knights" of VMFA-314 at Marine Corps Air Station El Toro, California becoming the first operational squadron. Marine Phantoms from VMFA-531 "Gray Ghosts" were assigned to Da Nang airbase on South Vietnam's northeast coast on 10 May 1965, initially to provide air defense for the USMC. They soon began close air support (CAS) missions, and VMFA-314 "Black Knights", VMFA-232 "Red Devils", VMFA-323 "Death Rattlers", and VMFA-542 "Bengals" soon arrived at the primitive airfield. Marine F-4 pilots claimed three enemy MiGs (two while on exchange duty with the USAF) at the cost of 75 aircraft lost in combat, mostly to ground fire, and four in accidents. VMCJ-1 Golden Hawks (now VMAQ-1 and VMAQ-4, which has the old RM tailcode) flew the first RF-4B photo reconnaissance mission on 3 November 1966 from Da Nang and remained there until 1970 with no RF-4B losses and only one aircraft damaged by AAA. VMCJ-2 and VMCJ-3 (now VMAQ-3) provided aircraft for VMCJ-1 in Da Nang, and VMFP-3 was formed in 1975 at MCAS El Toro, California, consolidating all USMC RF-4Bs in one unit that became known as "The Eyes of the Corps". VMFP-3 disestablished in August 1990 after the Advanced Tactical Airborne Reconnaissance System was introduced for the F/A-18D Hornet. The F-4 continued to equip fighter-attack squadrons in both Marine Corps active and reserve units throughout the 1960s, 1970s and 1980s and into the early 1990s. In the early 1980s, these squadrons began to transition to the F/A-18 Hornet, starting with the same squadron that introduced the F-4 to the Marine Corps, VMFA-314 at MCAS El Toro, California. On 18 January 1992, the last Marine Corps Phantom, an F-4S in the Marine Corps Reserve, was retired by the "Cowboys" of VMFA-112, after which the squadron was re-equipped with F/A-18 Hornets. In USAF service, the F-4 was initially designated the F-110 Spectre prior to the introduction of the 1962 United States Tri-Service aircraft designation system. The USAF quickly embraced the design and became the largest Phantom user. The first USAF Phantoms in Vietnam were F-4Cs from the 555th Tactical Fighter Squadron "Triple Nickel", which arrived in December 1964.
Unlike the U.S. Navy and U.S. Marine Corps, which flew the Phantom with a Naval Aviator (pilot) in the front seat and a Naval Flight Officer as a radar intercept officer (RIO) in the back seat, the USAF initially flew its Phantoms with a rated Air Force Pilot in front and back seats. While the rear pilot (GIB, or "guy in back") could fly and ostensibly land the aircraft, he had fewer flight instruments and a very restricted forward view. The Air Force later assigned a rated Air Force Navigator qualified as a weapon/targeting systems officer (later designated as weapon systems officer or WSO) in the rear seat instead of another pilot. However, all USAF Phantoms retained dual flight controls throughout their service life. On 10 July 1965, F-4Cs of the 45th Tactical Fighter Squadron, 15th TFW, on temporary assignment in Ubon, Thailand, scored the USAF's first victories against North Vietnamese MiG-17s using AIM-9 Sidewinder air-to-air missiles. On 26 April 1966, an F-4C from the 480th Tactical Fighter Squadron scored the first aerial victory by a U.S. aircrew over a North Vietnamese MiG-21 "Fishbed". On 24 July 1965, another Phantom from the 45th Tactical Fighter Squadron became the first American aircraft to be downed by an enemy SAM, and on 5 October 1966 an 8th Tactical Fighter Wing F-4C became the first U.S. jet lost to an air-to-air missile, fired by a MiG-21. Early aircraft suffered from leaks in wing fuel tanks that required re-sealing after each flight and 85 aircraft were found to have cracks in outer wing ribs and stringers. There were also problems with aileron control cylinders, electrical connectors, and engine compartment fires. Reconnaissance RF-4Cs made their debut in Vietnam on 30 October 1965, flying the hazardous post-strike reconnaissance missions. The USAF Thunderbirds used the F-4E from the 1969 season until 1974. Although the F-4C was essentially identical to the Navy/Marine Corps F-4B in flight performance and carried the AIM-9 Sidewinder missiles, USAF-tailored F-4Ds initially arrived in June 1967 equipped with AIM-4 Falcons. However, the Falcon, like its predecessors, was designed to shoot down heavy bombers flying straight and level. Its reliability proved no better than others and its complex firing sequence and limited seeker-head cooling time made it virtually useless in combat against agile fighters. The F-4Ds reverted to using Sidewinders under the "Rivet Haste" program in early 1968, and by 1972 the AIM-7E-2 "Dogfight Sparrow" had become the preferred missile for USAF pilots. Like other Vietnam War Phantoms, the F-4Ds were urgently fitted with radar homing and warning (RHAW) antennas to detect the Soviet-built S-75 Dvina SAMs. From the initial deployment of the F-4C to Southeast Asia, USAF Phantoms performed both air superiority and ground attack roles, supporting not only ground troops in South Vietnam but also conducting bombing sorties in Laos and North Vietnam. As the F-105 force underwent severe attrition between 1965 and 1968, the bombing role of the F-4 proportionately increased until after November 1970 (when the last F-105D was withdrawn from combat) it became the primary USAF tactical ordnance delivery system. In October 1972 the first squadron of EF-4C Wild Weasel aircraft deployed to Thailand on temporary duty. The "E" prefix was later dropped and the aircraft was simply known as the F-4C Wild Weasel. Sixteen squadrons of Phantoms were permanently deployed between 1965 and 1973, and 17 others deployed on temporary combat assignments. 
Peak numbers of combat F-4s occurred in 1972, when 353 were based in Thailand. A total of 445 Air Force Phantom fighter-bombers were lost: 370 in combat (33 to MiGs, 30 to SAMs, and 307 to AAA), of which 193 were lost over North Vietnam. The RF-4C was operated by four squadrons, and of the 83 losses, 72 were in combat (seven to SAMs and 65 to AAA), including 38 over North Vietnam. By war's end, the U.S. Air Force had lost a total of 528 F-4 and RF-4C Phantoms. When combined with U.S. Navy and Marine Corps losses of 233 Phantoms, 761 F-4/RF-4 Phantoms were lost in the Vietnam War. On 28 August 1972, Captain Steve Ritchie became the first USAF ace of the war. On 9 September 1972, WSO Capt Charles B. DeBellevue became the highest-scoring American ace of the war with six victories, and WSO Capt Jeffrey Feinstein became the last USAF ace of the war on 13 October 1972. Upon return to the United States, DeBellevue and Feinstein were assigned to undergraduate pilot training (Feinstein was given a vision waiver) and requalified as USAF pilots in the F-4. USAF F-4C/D/E crews scored 107½ MiG kills in Southeast Asia (50 by Sparrow, 31 by Sidewinder, five by Falcon, 15.5 by gun, and six by other means). On 31 January 1972, the 170th Tactical Fighter Squadron/183d Tactical Fighter Group of the Illinois Air National Guard became the first Air National Guard unit to transition to Phantoms from Republic F-84F Thunderstreaks, which were found to have corrosion problems. Phantoms would eventually equip numerous tactical fighter and tactical reconnaissance units in the USAF active, National Guard, and reserve components. On 2 June 1972, Major Phil Handley, flying a Phantom at a recorded speed of Mach 1.2, shot down a MiG-19 over Thud Ridge in Vietnam, the first and only recorded gun kill achieved at supersonic speed. On 15 August 1990, 24 F-4G Wild Weasel Vs and six RF-4Cs were deployed to Shaikh Isa AB, Bahrain, for Operation Desert Storm. The F-4G was the only aircraft in the USAF inventory equipped for the Suppression of Enemy Air Defenses (SEAD) role, and was needed to protect coalition aircraft from Iraq's extensive air defense system. The RF-4C was the only aircraft equipped with the ultra-long-range KS-127 LOROP (long-range oblique photography) camera, and was used for a variety of reconnaissance missions. Despite almost daily missions, only one RF-4C was lost, in a fatal accident before the start of hostilities. One F-4G was lost when enemy fire damaged the fuel tanks and the aircraft ran out of fuel near a friendly airbase. The last USAF Phantoms, F-4G Wild Weasel Vs from the 561st Fighter Squadron, were retired on 26 March 1996. The last operational flight of the F-4G Wild Weasel was from the 190th Fighter Squadron, Idaho Air National Guard, in April 1996. The last operational USAF/ANG F-4 to land was flown by Maj Mike Webb and Maj Gary Leeder of the Idaho ANG. Like the Navy, the Air Force has operated QF-4 target drones, serving with the 82d Aerial Targets Squadron at Tyndall Air Force Base, Florida, and Holloman Air Force Base, New Mexico. It was expected that the F-4 would remain in the target role with the 82d ATRS until at least 2015, when it would be replaced by early versions of the F-16 Fighting Falcon converted to a QF-16 configuration.
Several QF-4s also retain capability as manned aircraft and are maintained in historical color schemes, being displayed as part of Air Combat Command's Heritage Flight at air shows, base open houses, and other events while serving as non-expendable target aircraft during the week. On 19 November 2013, BAE Systems delivered the last QF-4 aerial target to the Air Force. The example had been in storage for over 20 years before being converted. Over 16 years, BAE had converted 314 F-4 and RF-4 Phantom IIs into QF-4s and QRF-4s, with each aircraft taking six months to adapt. As of December 2013, QF-4 and QRF-4 aircraft had flown over 16,000 manned and 600 unmanned training sorties, with 250 unmanned aircraft being shot down in firing exercises. The remaining QF-4s and QRF-4s held their training role until the first of 126 QF-16s were delivered by Boeing. The final flight of an Air Force QF-4 from Tyndall AFB took place on 27 May 2015 to Holloman AFB. After Tyndall AFB ceased operations, the 53d Weapons Evaluation Group at Holloman became the fleet of 22 QF-4s' last remaining operator. The base continued using them to fly manned test and unmanned live fire test support and Foreign Military Sales testing, with the final unmanned flight taking place in August 2016. The type was officially retired from US military service with a four–ship flight at Holloman during an event on 21 December 2016. The remaining QF-4s were to be demilitarized after 1 January 2017. The Phantom has served with the air forces of many countries, including Australia, Egypt, Germany, United Kingdom, Greece, Iran, Israel, Japan, Spain, South Korea and Turkey. The Royal Australian Air Force (RAAF) leased 24 USAF F-4Es from 1970 to 1973 while waiting for their order for the General Dynamics F-111C to be delivered. They were so well-liked that the RAAF considered retaining the aircraft after the F-111Cs were delivered. They were operated from RAAF Amberley by No. 1 Squadron and No. 6 Squadron. In 1979, the Egyptian Air Force purchased 35 former USAF F-4Es along with a number of Sparrow, Sidewinder, and Maverick missiles from the U.S. for $594 million as part of the "Peace Pharaoh" program. An additional seven surplus USAF aircraft were purchased in 1988. Three attrition replacements had been received by the end of the 1990s. The German Air Force ("Luftwaffe") initially ordered the reconnaissance RF-4E in 1969, receiving a total of 88 aircraft from January 1971. In 1982, the initially unarmed RF-4Es were given a secondary ground attack capability; these aircraft were retired in 1994. In 1973, under the "Peace Rhine" program, the "Luftwaffe" purchased the F-4F (a lightened and simplified version of the F-4E) which was upgraded in the mid-1980s. 24 German F-4F Phantom IIs were operated by the 49th Tactical Fighter Wing of the USAF at Holloman AFB to train "Luftwaffe" crews until December 2004. In 1975, Germany also received 10 F-4Es for training in the U.S. In the late 1990s, these were withdrawn from service after being replaced by F-4Fs. Germany also initiated the Improved Combat Efficiency (ICE) program in 1983. The 110 ICE-upgraded F-4Fs entered service in 1992, and were expected to remain in service until 2012. All the remaining Luftwaffe Phantoms were based at Wittmund with "Jagdgeschwader" 71 (fighter wing 71) in Northern Germany and WTD61 at Manching. Phantoms were deployed to NATO states under the Baltic Air Policing starting in 2005, 2008, 2009, 2011 and 2012. The German Air Force retired its last F-4Fs on 29 June 2013. 
German F-4Fs flew 279,000 hours from entering service on 31 August 1973 until retirement. In 1971, the Hellenic Air Force ordered brand-new F-4E Phantoms, with deliveries starting in 1974. In the early 1990s, the Hellenic AF acquired surplus RF-4Es and F-4Es from the "Luftwaffe" and U.S. ANG. Following the success of the German ICE program, on 11 August 1997, a contract was signed between DASA of Germany and Hellenic Aerospace Industry for the upgrade of 39 aircraft to the very similar "Peace Icarus 2000" standard. The Hellenic AF operated 34 upgraded "F-4E-PI2000" (338 and 339 Squadrons) and 12 RF-4E aircraft (348 Squadron) as of September 2013. On 5 May 2017, the Hellenic Air Force officially retired the RF-4E Phantom II during a public ceremony. In the 1960s and 1970s, when the U.S. and Iran were on friendly terms, the U.S. sold 225 F-4D, F-4E, and RF-4E Phantoms to Iran. The Imperial Iranian Air Force saw at least one engagement, resulting in a loss, after an RF-4C was rammed by a Soviet MiG-21 during Project Dark Gene, an ELINT operation during the Cold War. The Islamic Republic of Iran Air Force Phantoms saw heavy action in the Iran–Iraq War in the 1980s and are kept operational by overhaul and servicing from Iran's aerospace industry. Notable operations of Iranian F-4s during the war included Operation Scorch Sword, an attack by two F-4s against the Iraqi Osirak nuclear reactor site near Baghdad on 30 September 1980, and the attack on H3, a 4 April 1981 strike by eight Iranian F-4s against the H-3 complex of air bases in the far west of Iraq, which resulted in many Iraqi aircraft being destroyed or damaged for no Iranian losses. On 5 June 1984, two Saudi Arabian fighter pilots shot down two Iranian F-4 fighters. The Royal Saudi Air Force pilots were flying American-built F-15s and fired air-to-air missiles to bring down the Iranian planes. The Saudi fighter pilots had KC-135 aerial tanker planes and Boeing E-3 Sentry AWACS surveillance planes assist in the encounter. The aerial fight occurred in Saudi airspace over the Persian Gulf near the Saudi island Al Arabiyah, about 60 miles northeast of Jubail. Iranian F-4s were in use as of late 2014; the aircraft reportedly conducted air strikes on ISIS targets in the eastern Iraqi province of Diyala. The Israeli Air Force was the largest foreign operator of the Phantom, flying both newly built and ex-USAF aircraft, as well as several one-off special reconnaissance variants. The first F-4Es, nicknamed "Kurnass" (Sledgehammer), and RF-4Es, nicknamed "Orev" (Raven), were delivered in 1969 under the "Peace Echo I" program. Additional Phantoms arrived during the 1970s under the "Peace Echo II" through "Peace Echo V" and "Nickel Grass" programs. Israeli Phantoms saw extensive combat during Arab–Israeli conflicts, first seeing action during the War of Attrition. In the 1980s, Israel began the "Kurnass 2000" modernization program which significantly updated avionics. The last Israeli F-4s were retired in 2004. From 1968, the Japan Air Self-Defense Force (JASDF) purchased a total of 140 F-4EJ Phantoms without aerial refueling, the AGM-12 Bullpup missile system, nuclear control system or ground attack capabilities. Mitsubishi built 138 under license in Japan, and 14 unarmed reconnaissance RF-4Es were imported. One of the aircraft (17-8440) was the very last of the 5,195 F-4 Phantoms to be produced. It was manufactured by Mitsubishi Heavy Industries on 21 May 1981.
"The Final Phantom" served with 306th Tactical Fighter Squadron and later transferred to the 301st Tactical Fighter Squadron. Of these, 96 F-4EJs were modified to the F-4EJ standard. 15 F-4EJs were converted to reconnaissance aircraft designated RF-4EJ, with similar upgrades as the F-4EJ Kai. Japan had a fleet of 90 F-4s in service in 2007. After studying several replacement fighters the F-35 Lightning II was chosen in 2011. Delays with the F-35 program have meant that some F-4s have remained in service. As of 2017 all three of the JASDF's remaining Phantom squadrons are based at Hyakuri Air Base in Ibaraki prefecture north of Tokyo. Some F-4s are also operated by the Air Development and Test Wing in Gifu Prefecture. The Republic of Korea Air Force purchased its first batch of secondhand USAF F-4D Phantoms in 1968 under the "Peace Spectator" program. The F-4Ds continued to be delivered until 1988. The "Peace Pheasant II" program also provided new-built and former USAF F-4Es. The Spanish Air Force acquired its first batch of ex-USAF F-4C Phantoms in 1971 under the "Peace Alfa" program. Designated C.12, the aircraft were retired in 1989. At the same time, the air arm received a number of ex-USAF RF-4Cs, designated CR.12. In 1995–1996, these aircraft received extensive avionics upgrades. Spain retired its RF-4s in 2002. The Turkish Air Force (TAF) received 40 F-4Es in 1974, with a further 32 F-4Es and 8 RF-4Es in 1977–78 under the "Peace Diamond III" program, followed by 40 ex-USAF aircraft in "Peace Diamond IV" in 1987, and a further 40 ex-U.S. Air National Guard Aircraft in 1991. A further 32 RF-4Es were transferred to Turkey after being retired by the Luftwaffe between 1992 and 1994. In 1995, Israel Aerospace Industries (IAI) implemented an upgrade similar to Kurnass 2000 on 54 Turkish F-4Es which were dubbed the F-4E 2020 Terminator. Turkish F-4s, and more modern F-16s have been used to strike Kurdish PKK bases in ongoing military operations in Northern Iraq. On 22 June 2012, a Turkish RF-4E was shot down by Syrian air defenses while flying a reconnaissance flight near the Turkish-Syrian border. Turkey has stated the reconnaissance aircraft was in international airspace when it was shot down, while Syrian authorities stated it was inside Syrian airspace. Turkish F-4s remained in use as of 2015. On 24 February 2015, two RF-4Es crashed in the Malatya region in the southeast of Turkey, under yet unknown circumstances, killing both crew of two each. On 5 March 2015, an F-4E-2020 crashed in central Anatolia killing both crew. After the recent accidents, the TAF withdrew RF-4Es from active service. Turkey was reported to have used F-4 jets to attack PKK separatists and the ISIS capital on 19 September 2015. The Turkish Air Force has reportedly used the F-4E 2020s against the more recent Third Phase of the PKK conflict on heavy bombardment missions into Iraq on 15 November 2015, 12 January 2016, and 12 March 2016. The United Kingdom bought versions based on the U.S. Navy's F-4J for use with the Royal Air Force and the Royal Navy's Fleet Air Arm. The main differences were the use of the British Rolls-Royce Spey engines and of British-made avionics. The RN and RAF versions were given the designation F-4K and F-4M respectively, and entered service with the British military aircraft designations Phantom FG.1 (fighter/ground attack) and Phantom FGR.2 (fighter/ground attack/reconnaissance). 
Initially, the FGR.2 was used in the ground attack and reconnaissance role, primarily with RAF Germany, while 43 Squadron was formed in the air defence role using the FG.1s that had been intended for the Fleet Air Arm. The superiority of the Phantom over the English Electric Lightning in terms of both range and weapon load, combined with the successful introduction of the SEPECAT Jaguar, meant that, during the mid-1970s, most of the ground attack Phantoms in Germany were redeployed to the UK to replace air defence Lightning squadrons. A second RAF squadron, 111 Squadron, was formed on the FG.1 in 1979 after the disbandment of 892 NAS. In 1982, during the Falklands War, three Phantom FGR.2s of No. 29 Squadron were on active Quick Reaction Alert duty on Ascension Island to protect the base from air attack. After the Falklands War, 15 upgraded ex-USN F-4Js, known as the F-4J(UK), entered RAF service to compensate for one interceptor squadron redeployed to the Falklands. Around 15 RAF squadrons received various marks of Phantom, many of them based in Germany. The first to be equipped was No. 228 Operational Conversion Unit at RAF Coningsby in August 1968. One noteworthy operator was No. 43 Squadron, where Phantom FG.1s remained the squadron equipment for 20 years, arriving in September 1969 and departing in July 1989. During this period the squadron was based at Leuchars. The interceptor Phantoms were replaced by the Panavia Tornado F3 from the late 1980s onwards, and the last British Phantoms were retired in October 1992 when No. 74 Squadron was disbanded. Sandia National Laboratories used an F-4 mounted on a "rocket sled" in a crash test to see the results of an aircraft hitting a reinforced concrete structure, such as a nuclear power plant. One aircraft, an F-4D (civilian registration N749CF), is operated by the Massachusetts-based non-profit organization Collings Foundation as a "living history" exhibit. Funds to maintain and operate the aircraft, which is based in Houston, Texas, are raised through donations and sponsorships from public and commercial parties. NASA used the F-4 to photograph and film Titan II missiles after launch from Cape Canaveral during the 1960s. Retired U.S. Air Force Colonel Jack Petry described how he put his F-4 into a Mach 1.2 dive synchronized to the launch countdown, then "walked the (rocket's) contrail" up to the intercept point, tweaking closing speed and updating mission control while camera pods mounted under each wing shot film at 900 frames per second. Petry's Phantom stayed with the Titan for 90 seconds, then broke away as the missile continued into space. NASA's Dryden Flight Research Center acquired an F-4A on 3 December 1965. It made 55 flights in support of short programs, chase on X-15 missions and lifting body flights. The F-4 also supported a biomedical monitoring program involving 1,000 flights by NASA Flight Research Center aerospace research pilots and students of the USAF Aerospace Research Pilot School flying high-performance aircraft. The pilots were instrumented to record accurate and reliable data on electrocardiogram, respiration rate and normal acceleration. In 1967, the Phantom supported a brief military-inspired program to determine whether an airplane's sonic boom could be directed and whether it could be used as a weapon of sorts, or at least an annoyance. NASA also flew an F-4C in a spanwise blowing study from 1983 to 1985, after which it was returned. The Phantom gathered a number of nicknames during its career.
Some of these names included "Snoopy", "Rhino", "Double Ugly", "Old Smokey", the "Flying Anvil", "Flying Footlocker", "Flying Brick", "Lead Sled", the "Big Iron Sled" and the "St. Louis Slugger". In recognition of its record of downing large numbers of Soviet-built MiGs, it was called the "World's Leading Distributor of MiG Parts". As a reflection of excellent performance in spite of its bulk, the F-4 was dubbed "the triumph of thrust over aerodynamics". German Luftwaffe crews called their F-4s the "Eisenschwein" ("Iron Pig"), "Fliegender Ziegelstein" ("Flying Brick") and "Luftverteidigungsdiesel" ("Air Defense Diesel"). Imitating the spelling of the aircraft's name, McDonnell issued a series of patches. Pilots became "Phantom Phlyers", backseaters became "Phantom Pherrets", fans of the F-4 became "Phantom Phanatics", and the aircraft itself was called the "Phabulous Phantom". Ground crewmen who worked on the aircraft are known as "Phantom Phixers". The aircraft's emblem is a whimsical cartoon ghost called "The Spook", which was created by McDonnell Douglas technical artist Anthony "Tony" Wong for shoulder patches. The name "Spook" was coined by the crews of either the 12th Tactical Fighter Wing or the 4453rd Combat Crew Training Wing at MacDill AFB. The figure is ubiquitous, appearing on many items associated with the F-4. The Spook has followed the Phantom around the world, adopting local fashions; for example, the British adaptation of the U.S. "Phantom Man" is a Spook that sometimes wears a bowler hat and smokes a pipe. There are many F-4 Phantom IIs on display worldwide.
Finnish Civil War
The Finnish Civil War was a conflict for the leadership and control of Finland during the country's transition from a Grand Duchy of the Russian Empire to an independent state. The clashes took place in the context of the national, political, and social turmoil caused by World War I (Eastern Front) in Europe. The civil war was fought between the "Reds", led by a section of the Social Democratic Party, and the "Whites", led by the conservative-based Senate. The paramilitary Red Guards, composed of industrial and agrarian workers, controlled the cities and industrial centres of southern Finland. The paramilitary White Guards, composed of farmers, along with middle-class and upper-class social strata, controlled rural central and northern Finland. In the years before the conflict, Finnish society had experienced rapid population growth, industrialisation, pre-urbanisation and the rise of a comprehensive labour movement. The country's political and governmental systems were in an unstable phase of democratisation and modernisation. The socio-economic condition and education of the population had gradually improved, and national thinking and cultural life had awakened. World War I led to the collapse of the Russian Empire, causing a power vacuum in Finland and a subsequent struggle for dominance that led to militarisation and an escalating crisis between the left-leaning labour movement and the conservatives. The Reds carried out an unsuccessful general offensive in February 1918, supplied with weapons by Soviet Russia. A counteroffensive by the Whites began in March, reinforced by the German Empire's military detachments in April. The decisive engagements were the Battles of Tampere and Vyborg, won by the Whites, and the Battles of Helsinki and Lahti, won by German troops, leading to overall victory for the Whites and the German forces. Political violence became a part of this warfare.
Around 12,500 Red prisoners of war died of malnutrition and disease in camps. About 39,000 people, of whom 36,000 were Finns, perished in the conflict. In the aftermath, the Finns passed from Russian governance to the German sphere of influence with a plan to establish a German-led Finnish monarchy. The scheme was cancelled with the defeat of Germany in World War I, and Finland instead emerged as an independent, democratic republic. The Civil War divided the nation for decades. Finnish society was reunited through social compromises based on a long-term culture of moderate politics and religion and the post-war economic recovery. The main factor behind the Finnish Civil War was a political crisis arising out of World War I. Under the pressures of the Great War, the Russian Empire collapsed, leading to the February and October Revolutions in 1917. This breakdown caused a power vacuum and a subsequent struggle for power in Eastern Europe. Russia's Grand Duchy of Finland (1809–1917) became embroiled in the turmoil. Geopolitically less important than the continental Moscow–Warsaw gateway, Finland, isolated by the Baltic Sea, was a peaceful side front until early 1918. The war between the German Empire and Russia had only indirect effects on the Finns. Since the end of the 19th century, the Grand Duchy had become a vital source of raw materials, industrial products, food and labour for the growing Imperial Russian capital Petrograd (modern Saint Petersburg), and World War I emphasised that role. Strategically, the Finnish territory was the less important northern section of the Estonian–Finnish gateway and a buffer zone to and from Petrograd through the Narva area, the Gulf of Finland and the Karelian Isthmus. The German Empire saw Eastern Europe—primarily Russia—as a major source of vital products and raw materials, both during World War I and for the future. Its resources overstretched by the two-front war, Germany attempted to divide Russia by providing financial support to revolutionary groups, such as the Bolsheviks and the Socialist Revolutionary Party, and to radical, separatist factions, such as the Finnish national activist movement leaning toward Germanism. Between 30 and 40 million marks were spent on this endeavour. Controlling the Finnish area would allow the Imperial German Army to penetrate Petrograd and the Kola Peninsula, an area rich in raw materials for the mining industry. Finland possessed large ore reserves and a well-developed forest industry. From 1809 to 1898, a period called "Pax Russica", the peripheral authority of the Finns gradually increased, and Russo-Finnish relations were exceptionally peaceful in comparison with other parts of the Russian Empire. Russia's defeat in the Crimean War in the 1850s led to attempts to speed up the modernisation of the country. This led to more than 50 years of economic, industrial, cultural and educational progress in the Grand Duchy of Finland, including an improvement in the status of the Finnish language. All this encouraged Finnish nationalism and cultural unity through the birth of the Fennoman movement, which bound the Finns to the domestic administration and led to the idea that the Grand Duchy was an increasingly autonomous state of the Russian Empire. In 1899, the Russian Empire initiated a policy of integration through the Russification of Finland.
The strengthened, pan-Slavist central power tried to unite the "Russian Multinational Dynastic Union" as the military and strategic situation of Russia became more perilous due to the rise of Germany and Japan. Finns called the increased military and administrative control "the First Period of Oppression", and for the first time Finnish politicians drew up plans for disengagement from Russia or sovereignty for Finland. In the struggle against integration, activists drawn from sections of the working class and the Swedish-speaking intelligentsia carried out terrorist acts. During World War I and the rise of Germanism, the pro-Swedish Svecomans began their covert collaboration with Imperial Germany and, from 1915 to 1917, a Jäger battalion consisting of 1,900 Finnish volunteers was trained in Germany. The major reasons for rising political tensions among Finns were the autocratic rule of the Russian czar and the undemocratic class system of the estates of the realm. The latter system originated in the regime of the Swedish Empire that preceded Russian governance and divided the Finnish people economically, socially and politically. Finland's population grew rapidly in the nineteenth century (from 860,000 in 1810 to 3,130,000 in 1917), and a class of agrarian and industrial workers, as well as crofters, emerged over the period. The Industrial Revolution was rapid in Finland, though it started later than in the rest of Western Europe. Industrialisation was financed by the state, and some of the social problems associated with the industrial process were diminished by the administration's actions. Among urban workers, socio-economic problems deepened during periods of industrial depression. The position of rural workers worsened after the end of the nineteenth century, as farming became more efficient and market-oriented and the development of industry was not vigorous enough to fully absorb the rapid population growth of the countryside. The difference between Scandinavian-Finnish (Finno-Ugric peoples) and Russian-Slavic culture affected the nature of Finnish national integration. The upper social strata took the lead and gained domestic authority from the Russian czar in 1809. The estates planned to build up an increasingly autonomous Finnish state, led by the elite and the intelligentsia. The Fennoman movement aimed to include the common people in a non-political role; the labour movement, youth associations and the temperance movement were initially led "from above". Between 1870 and 1916, industrialisation gradually improved social conditions and the self-confidence of workers, but while the standard of living of the common people rose in absolute terms, the rift between rich and poor deepened markedly. The commoners' rising awareness of socio-economic and political questions interacted with the ideas of socialism, social liberalism and nationalism. The workers' initiatives and the corresponding responses of the dominant authorities intensified social conflict in Finland. The Finnish labour movement, which emerged at the end of the nineteenth century from temperance, religious movements and Fennomania, had a Finnish nationalist, working-class character. From 1899 to 1906, the movement became conclusively independent, shedding the paternalistic thinking of the Fennoman estates, and it was represented by the Finnish Social Democratic Party, established in 1899.
Workers' activism was directed both toward opposing Russification and toward developing a domestic policy that tackled social problems and responded to the demand for democracy. This was a reaction to the domestic dispute, ongoing since the 1880s, between the Finnish nobility-bourgeoisie and the labour movement concerning voting rights for the common people. Despite their obligations as obedient, peaceful and non-political inhabitants of the Grand Duchy (who had, only a few decades earlier, accepted the class system as the natural order of their life), the commoners began to demand their civil rights and citizenship in Finnish society. The power struggle between the Finnish estates and the Russian administration gave the labour movement a concrete role model and room for action. On the other side, due to an at least century-long tradition and experience of administrative authority, the Finnish elite saw itself as the natural leader of the nation. The political struggle for democracy was resolved outside Finland, in international politics: the Russian Empire's failed 1904–1905 war against Japan led to the 1905 Revolution in Russia and to a general strike in Finland. In an attempt to quell the general unrest, the system of estates was abolished in the Parliamentary Reform of 1906. The general strike increased support for the social democrats substantially. The party encompassed a higher proportion of the population than any other socialist movement in the world. The Reform of 1906 was a giant leap towards the political and social liberalisation of the common Finnish people, since the Russian House of Romanov had been the most autocratic and conservative ruler in Europe. The Finns adopted a unicameral parliamentary system, the Parliament of Finland, with universal suffrage. The number of voters increased from 126,000 to 1,273,000, including female citizens. The reform led to the social democrats obtaining about fifty percent of the popular vote, but the Czar regained his authority after the crisis of 1905. Subsequently, during the more severe programme of Russification, called "the Second Period of Oppression" by the Finns, the Czar neutralised the power of the Finnish Parliament between 1908 and 1917. He dissolved the assembly, ordered parliamentary elections almost annually, and determined the composition of the Finnish Senate, which did not correspond to the composition of Parliament. The capacity of the Finnish Parliament to solve socio-economic problems was stymied by confrontations between the largely uneducated commoners and the former estates. Another conflict festered as employers denied collective bargaining and the right of the labour unions to represent workers. The parliamentary process disappointed the labour movement, but as dominance in the Parliament and legislation was the workers' most likely way to obtain a more balanced society, they identified themselves with the state. Overall, domestic politics led to a contest for leadership of the Finnish state during the ten years before the collapse of the Russian Empire. The Second Period of Russification was halted on 15 March 1917 by the February Revolution, which removed the Russian czar, Nicholas II. The collapse of Russia was caused by military defeats, war-weariness brought on by the duration and hardships of the Great War, and the collision between the most conservative regime in Europe and a Russian people desiring modernisation.
The Czar's power was transferred to the State Duma (Russian Parliament) and the right-wing Provisional Government, but this new authority was challenged by the Petrograd Soviet (city council), leading to dual power in the country. The autonomous status of 1809–1899 was returned to the Finns by the March 1917 manifesto of the Russian Provisional Government. For the first time in history, "de facto" political power existed in the Parliament of Finland. The political left, consisting mainly of social democrats, covered a wide spectrum from moderate to revolutionary socialists. The political right was even more diverse, ranging from social liberals and moderate conservatives to rightist conservative elements. The four main parties were: During 1917, a power struggle and social disintegration interacted. The collapse of Russia induced a chain reaction of disintegration, starting from the government, military and economy, and spreading to all fields of society, such as local administration and workplaces, and to individual citizens. The social democrats wanted to retain the civil rights already achieved and to increase the socialists' power over society. The conservatives feared the loss of their long-held socio-economic dominance. Both factions collaborated with their equivalents in Russia, deepening the split in the nation. The Social Democratic Party gained an absolute majority in the parliamentary elections of 1916. A new Senate was formed in March 1917 by Oskari Tokoi, but it did not reflect the socialists' large parliamentary majority: it comprised six social democrats and six non-socialists. In theory, the Senate consisted of a broad national coalition, but in practice (with the main political groups unwilling to compromise and top politicians remaining outside of it), it proved unable to solve any major Finnish problem. After the February Revolution, political authority descended to the street level: to mass meetings, strike organisations and worker-soldier councils on the left, and to active organisations of employers on the right, all serving to undermine the authority of the state. The February Revolution halted the Finnish economic boom caused by the Russian war economy. The collapse in business led to unemployment and high inflation, but the employed workers gained an opportunity to resolve workplace problems. The commoners' call for the eight-hour working day, better working conditions and higher wages led to demonstrations and large-scale strikes in industry and agriculture. While the Finns had specialised in milk and butter production, the bulk of the food supply for the country depended on cereals produced in southern Russia. The cessation of cereal imports from disintegrating Russia led to food shortages in Finland. The Senate responded by introducing rationing and price controls. The farmers resisted state control; a black market, accompanied by sharply rising food prices, formed, and exports to the free market of the Petrograd area increased. Food supply, prices and, in the end, the fear of starvation became emotional political issues between farmers and urban workers, especially those who were unemployed. Common people, their fears exploited by politicians and an incendiary, polarised political media, took to the streets. Despite the food shortages, no actual large-scale starvation hit southern Finland before the civil war, and the food market remained a secondary stimulator in the power struggle of the Finnish state.
The passing of the Tokoi Senate bill called the "Law of Supreme Power" (more commonly known as "valtalaki") in July 1917 triggered one of the key crises in the power struggle between the social democrats and the conservatives. The fall of the Russian Empire opened the question of who would hold sovereign political authority in the former Grand Duchy. After decades of political disappointment, the February Revolution offered the Finnish social democrats an opportunity to govern; they held the absolute majority in Parliament. The conservatives were alarmed by the continuous increase of the socialists' influence since 1899, which reached a climax in 1917. The "Law of Supreme Power" incorporated a plan by the socialists to substantially increase the authority of Parliament, as a reaction to the non-parliamentary and conservative leadership of the Finnish Senate between 1906 and 1916. The bill furthered Finnish autonomy in domestic affairs: the Russian Provisional Government was allowed only the right to control Finnish foreign and military policies. The Act was adopted with the support of the Social Democratic Party, the Agrarian League, part of the Young Finnish Party and some activists eager for Finnish sovereignty. The conservatives opposed the bill and some of the most right-wing representatives resigned from Parliament. In Petrograd, the social democrats' plan had the backing of the Bolsheviks. They had been plotting a revolt against the Provisional Government since April 1917, and pro-Soviet demonstrations during the July Days brought matters to a head. The Helsinki Soviet and the Regional Committee of the Finnish Soviets, led by the Bolshevik Ivar Smilga, both pledged to defend the Finnish Parliament, were it threatened with attack. However, the Provisional Government still had sufficient support in the Russian army to survive and, as the street movement waned, Vladimir Lenin fled to Karelia. In the aftermath of these events, the "Law of Supreme Power" was overruled and the social democrats eventually backed down; more Russian troops were sent to Finland and, with the co-operation and insistence of the Finnish conservatives, Parliament was dissolved and new elections announced. In the October 1917 elections, the social democrats lost their absolute majority, which radicalised the labour movement and decreased support for moderate politics. The crisis of July 1917 did not bring about the Red Revolution of January 1918 on its own, but together with political developments based on the commoners' interpretation of the ideas of Fennomania and socialism, the events favoured a Finnish revolution. In order to win power, the socialists had to overcome Parliament. The February Revolution resulted in a loss of institutional authority in Finland and the dissolution of the police force, creating fear and uncertainty. In response, both the right and left assembled their own security groups, which were initially local and largely unarmed. By late 1917, following the dissolution of Parliament, in the absence of a strong government and national armed forces, the security groups began assuming a broader and more paramilitary character. The Civil Guards, and the later White Guards, were organised by local men of influence: conservative academics, industrialists, major landowners and activists. The Workers' Order Guards and the Red Guards were recruited through the local social democratic party sections and from the labour unions.
The Bolsheviks' and Vladimir Lenin's October Revolution of 7 November 1917 transferred political power in Petrograd to the radical, left-wing socialists. The German government's decision to arrange safe conduct for Lenin and his comrades from exile in Switzerland to Petrograd in April 1917 had been a success. An armistice between Germany and the Bolshevik regime came into force on 6 December and peace negotiations began on 22 December 1917 at Brest-Litovsk. November 1917 became another watershed in the 1917–1918 rivalry for the leadership of Finland. After the dissolution of the Finnish Parliament, polarisation between the social democrats and the conservatives increased markedly and the period witnessed the appearance of political violence. An agricultural worker had been shot during a local strike at Ypäjä on 9 August 1917, and a Civil Guard member was killed in a local political crisis at Malmi on 24 September. The October Revolution disrupted the informal truce between the Finnish non-socialists and the Russian Provisional Government. After political wrangling over how to react to the revolt, the majority of the politicians accepted a compromise proposal by Santeri Alkio, the leader of the Agrarian League. Parliament seized sovereign power in Finland on 15 November 1917, based on the socialists' "Law of Supreme Power", and ratified their proposals, dating from July 1917, for an eight-hour working day and universal suffrage in local elections. A purely non-socialist, conservative-led government of Pehr Evind Svinhufvud was appointed on 27 November. This nomination was both a long-term aim of the conservatives and a response to the challenges of the labour movement during November 1917. Svinhufvud's main aspirations were to separate Finland from Russia, to strengthen the Civil Guards, and to return a part of Parliament's new authority to the Senate. There were 149 Civil Guards in Finland on 31 August 1917, counting local units and subsidiary White Guards in towns and rural communes; 251 on 30 September; 315 on 31 October; 380 on 30 November and 408 on 26 January 1918. The first attempt at serious military training among the Guards was the establishment of a 200-strong cavalry school at the Saksanniemi estate in the vicinity of the town of Porvoo, in September 1917. The vanguard of the Finnish Jägers and German weaponry arrived in Finland during October–November 1917 aboard a freighter and a German U-boat; around 50 Jägers had returned by the end of 1917. After political defeats in July and October 1917, the social democrats put forward an uncompromising program called "We Demand" on 1 November, in order to push for political concessions. They insisted upon a return to the political status before the dissolution of Parliament in July 1917, disbandment of the Civil Guards, and elections to establish a Finnish Constituent Assembly. The program failed, and the socialists initiated a general strike during 14–19 November to increase political pressure on the conservatives, who had opposed the "Law of Supreme Power" and the parliamentary proclamation of sovereign power on 15 November. Revolution became the goal of the radicalised socialists after the loss of political control, and the events of November 1917 offered momentum for a socialist uprising. In this phase, Lenin and Joseph Stalin, under threat in Petrograd, urged the social democrats to take power in Finland.
The majority of Finnish socialists were moderate and preferred parliamentary methods, prompting the Bolsheviks to label them "reluctant revolutionaries". The reluctance diminished as the general strike appeared to offer a major channel of influence for the workers in southern Finland. The strike leadership voted by a narrow majority to start a revolution on 16 November, but the uprising had to be called off the same day due to the lack of active revolutionaries to execute it. At the end of November 1917, the moderate socialists among the social democrats won a second vote over the radicals in a debate over revolutionary versus parliamentary means, but when they tried to pass a resolution to completely abandon the idea of a socialist revolution, the party representatives and several influential leaders voted it down. The Finnish labour movement wanted to sustain a military force of its own and to keep the revolutionary road open too. The wavering Finnish socialists disappointed V. I. Lenin and, in turn, he began to encourage the Finnish Bolsheviks in Petrograd. Within the labour movement, a more marked consequence of the events of 1917 was the rise of the Workers' Order Guards. There were 20–60 separate guards between 31 August and 30 September 1917, but on 20 October, after defeat in the parliamentary elections, the Finnish labour movement proclaimed the need to establish more worker units. The announcement led to a rush of recruits: on 31 October the number of guards was 100–150; 342 on 30 November 1917 and 375 on 26 January 1918. Since May 1917, the paramilitary organisations of the left had grown in two phases, the majority of them as Workers' Order Guards. The minority were Red Guards: partly underground groups formed in industrialised towns and industrial centres such as Helsinki, Kotka and Tampere, based on the original Red Guards that had been built up in Finland during 1905–1906. The presence of the two opposing armed forces created a state of dual power and divided sovereignty in Finnish society. The decisive rift between the guards broke out during the general strike: the Reds executed several political opponents in southern Finland and the first armed clashes between the Whites and Reds took place. In total, 34 casualties were reported. Eventually, the political rivalries of 1917 led to an arms race and an escalation towards civil war. The disintegration of Russia offered Finns an historic opportunity to gain national independence. After the October Revolution, the conservatives were eager for secession from Russia in order to control the left and minimise the influence of the Bolsheviks. The socialists were skeptical about sovereignty under conservative rule, but they feared a loss of support among nationalistic workers, particularly after having promised increased national liberty through the "Law of Supreme Power". Eventually, both political factions supported an independent Finland, despite strong disagreement over the composition of the nation's leadership. Nationalism had become a "civic religion" in Finland by the end of the nineteenth century, but the goal during the general strike of 1905 was a return to the autonomy of 1809–1898, not full independence. In comparison to the unitary Swedish regime, the domestic power of Finns had increased under the less uniform Russian rule.
Economically, the Grand Duchy of Finland benefited from having an independent domestic state budget, a central bank with a national currency, the markka (introduced in 1860), and its own customs organisation, as well as from the industrial progress of 1860–1916. The economy was dependent on the huge Russian market, and separation would disrupt the profitable Finnish financial zone. The economic collapse of Russia and the power struggle of the Finnish state in 1917 were among the key factors that brought sovereignty to the fore in Finland. Svinhufvud's Senate introduced Finland's Declaration of Independence on 4 December 1917 and Parliament adopted it on 6 December. The social democrats voted against the Senate's proposal, while presenting an alternative declaration of sovereignty. The establishment of an independent state was not a foregone conclusion for the small Finnish nation. Recognition by Russia and other great powers was essential; Svinhufvud accepted that he had to negotiate with Lenin for the acknowledgement. The socialists, having been reluctant to enter talks with the Russian leadership in July 1917, sent two delegations to Petrograd to request that Lenin approve Finnish sovereignty. In December 1917, Lenin was under intense pressure from the Germans to conclude peace negotiations at Brest-Litovsk and the Bolsheviks' rule was in crisis, with an inexperienced administration and a demoralised army facing powerful political and military opponents. Lenin calculated that the Bolsheviks could fight for the central parts of Russia but had to give up some peripheral territories, including Finland in the geopolitically less important north-western corner. As a result, Svinhufvud's delegation won Lenin's concession of sovereignty on 31 December 1917. By the beginning of the Civil War, Austria-Hungary, Denmark, France, Germany, Greece, Norway, Sweden and Switzerland had recognised Finnish independence. The United Kingdom and United States did not approve it; they waited and monitored the relations between Finland and Germany (the main enemy of the Allies), hoping to overturn Lenin's regime and to bring Russia back into the war against the German Empire. In turn, the Germans hastened Finland's separation from Russia so as to move the country within their sphere of influence. The final escalation towards war began in early January 1918, as each military or political action of the Reds or the Whites resulted in a corresponding counteraction by the other. Both sides justified their activities as defensive measures, particularly to their own supporters. On the left, the vanguard of the movement was the urban Red Guards from Helsinki, Kotka and Turku; they led the rural Reds and convinced the socialist leaders who wavered between peace and war to support the revolution. On the right, the vanguard was the Jägers, who had transferred to Finland, and the volunteer Civil Guards of southwestern Finland, southern Ostrobothnia and Vyborg province in the southeastern corner of Finland. The first local battles were fought during 9–21 January 1918 in southern and southeastern Finland, mainly to win the arms race and to control Vyborg. On 12 January 1918, Parliament authorised the Svinhufvud Senate to establish internal order and discipline on behalf of the state. On 15 January, Carl Gustaf Emil Mannerheim, a former Finnish general of the Imperial Russian Army, was appointed the commander-in-chief of the Civil Guards. The Senate appointed the Guards, henceforth called the White Guards, as the White Army of Finland.
Mannerheim placed the Headquarters of the White Army in the Vaasa–Seinäjoki area. The White Order to engage was issued on 25 January. The Whites gained weaponry by disarming Russian garrisons during 21–28 January, in particular in southern Ostrobothnia. The Red Guards, led by Ali Aaltonen, refused to recognise the Whites' hegemony and established a military authority of their own. Aaltonen installed his headquarters in Helsinki and nicknamed it Smolna, echoing the Smolny Institute, the Bolsheviks' headquarters in Petrograd. The Red Order of Revolution was issued on 26 January, and a red lantern, a symbolic indicator of the uprising, was lit in the tower of the Helsinki Workers' House. A large-scale mobilisation of the Reds began late in the evening of 27 January, with the Helsinki Red Guard and some of the Guards located along the Vyborg–Tampere railway having been activated between 23 and 26 January, in order to safeguard vital positions and escort a heavy railroad shipment of Bolshevik weapons from Petrograd to Finland. White troops tried to capture the shipment: 20–30 Finns, Red and White, died in the Battle of Kämärä at the Karelian Isthmus on 27 January 1918. The Finnish rivalry for power had come to a head. At the beginning of the war, a discontinuous front line ran through southern Finland from west to east, dividing the country into White Finland and Red Finland. The Red Guards controlled the area to the south, including nearly all the major towns and industrial centres, along with the largest estates and farms with the highest numbers of crofters and tenant farmers. The White Army controlled the area to the north, which was predominantly agrarian and contained small or medium-sized farms and tenant farmers. The number of crofters was lower and they held a better social status than those in the south. Enclaves of the opposing forces existed on both sides of the front line: within the White area lay the industrial towns of Varkaus, Kuopio, Oulu, Raahe, Kemi and Tornio; within the Red area lay Porvoo, Kirkkonummi and Uusikaupunki. The elimination of these strongholds was a priority for both armies in February 1918. Red Finland was led by the People's Delegation, established on 28 January 1918 in Helsinki. The delegation sought democratic socialism based on the Finnish Social Democratic Party's ethos; their visions differed from Lenin's dictatorship of the proletariat. Otto Ville Kuusinen formulated a proposal for a new constitution, influenced by those of Switzerland and the United States. With it, political power was to be concentrated in Parliament, with a lesser role for the government. The proposal included a multi-party system; freedom of assembly, speech and press; and the use of referenda in political decision-making. In order to ensure the authority of the labour movement, the common people would have a right to permanent revolution. The socialists planned to transfer a substantial part of property rights to the state and local administrations. In foreign policy, Red Finland leaned on Bolshevist Russia. A Red-initiated Finno–Russian treaty and peace agreement was signed on 1 March 1918, in which Red Finland was called the Finnish Socialist Workers' Republic. The negotiations for the treaty implied that, as in World War I in general, nationalism was more important for both sides than the principles of international socialism.
The Red Finns did not simply accept an alliance with the Bolsheviks, and major disputes appeared, for example over the demarcation of the border between Red Finland and Soviet Russia. The significance of the Russo–Finnish Treaty evaporated quickly due to the signing of the Treaty of Brest-Litovsk between the Bolsheviks and the German Empire on 3 March 1918. Lenin's policy on the right of nations to self-determination aimed at preventing the disintegration of Russia during the period of military weakness. He assumed that in war-torn, splintering Europe, the proletariat of free nations would carry out socialist revolutions and unite with Soviet Russia later. The majority of the Finnish labour movement supported Finland's independence. The Finnish Bolsheviks, influential, though few in number, favoured annexation of Finland by Russia. The government of White Finland, Pehr Evind Svinhufvud's first senate, was called the Vaasa Senate after its relocation to the safer west-coast city of Vaasa, which acted as the capital of the Whites from 29 January to 3 May 1918. In domestic policy, the White Senate's main goal was to return the political right to power in Finland. The conservatives planned a monarchist political system, with a lesser role for Parliament. A section of the conservatives had always supported monarchy and opposed democracy; others had approved of parliamentarianism since the revolutionary reform of 1906, but after the crisis of 1917–1918, concluded that empowering the common people would not work. Social liberals and reformist non-socialists opposed any restriction of parliamentarianism. They initially resisted German military help, but the prolonged warfare changed their stance. In foreign policy, the Vaasa Senate relied on the German Empire for military and political aid. Their objectives were to defeat the Finnish Reds, end the influence of Bolshevist Russia in Finland and expand Finnish territory into East Karelia, a geopolitically significant area home to people speaking Finno-Ugric languages. The weakness of Russia inspired the idea of Greater Finland among the expansionist factions of both the right and left: the Reds had claims concerning the same areas. General Mannerheim agreed on the need to take over East Karelia and to request German weapons, but opposed actual German intervention in Finland. Mannerheim recognised the Red Guards' lack of combat skill and trusted in the abilities of the German-trained Finnish Jägers. As a former Russian army officer, Mannerheim was well aware of the demoralisation of the Russian army. He co-operated with White-aligned Russian officers in Finland and Russia. The number of Finnish troops on each side varied from 70,000 to 90,000, and both had around 100,000 rifles, 300–400 machine guns and a few hundred cannons. While the Red Guards consisted mostly of volunteers, with wages paid at the beginning of the war, the White Army consisted predominantly of conscripts, with 11,000–15,000 volunteers. The main motives for volunteering were socio-economic factors, such as salary and food, as well as idealism and peer pressure. The Red Guards included 2,600 women, mostly girls recruited from the industrial centres and cities of southern Finland. Urban and agricultural workers constituted the majority of the Red Guards, whereas land-owning farmers and well-educated people formed the backbone of the White Army. Both armies used child soldiers, mainly between 14 and 17 years of age.
The use of juvenile soldiers was not rare in World War I; children of the time were under the absolute authority of adults and were not shielded against exploitation. Rifles and machine guns from Imperial Russia were the main armaments of the Reds and the Whites. The most commonly used rifle was the Russian Mosin–Nagant Model 1891. In total, around ten different rifle models were in service, causing problems for ammunition supply. The Maxim gun was the most-used machine gun, along with the less-used M1895 Colt–Browning, Lewis and Madsen guns. The machine guns caused a substantial part of the casualties in combat. Russian field guns were mostly used with direct fire. The Civil War was fought primarily along railways, vital means for transporting troops and supplies, as well as for deploying armoured trains equipped with light cannons and heavy machine guns. The strategically most important railway junction was Haapamäki, northeast of Tampere, connecting eastern and western Finland as well as southern and northern Finland. Other critical junctions included Kouvola, Riihimäki, Tampere, Toijala and Vyborg. The Whites captured Haapamäki at the end of January 1918, leading to the Battle of Vilppula. The Finnish Red Guards seized the early initiative in the war by taking control of Helsinki on 28 January 1918 and by undertaking a general offensive lasting from February till early March 1918. The Reds were relatively well armed, but a chronic shortage of skilled leaders, both at the command level and in the field, left them unable to capitalise on this momentum, and most of the offensives came to nothing. The military chain of command functioned relatively well at company and platoon level, but leadership and authority remained weak as most of the field commanders were chosen by the vote of the troops. The common troops were more or less armed civilians, whose military training, discipline and combat morale were inadequate. Ali Aaltonen was replaced on 28 January 1918 by Eero Haapalainen as commander-in-chief. He, in turn, was displaced by the Bolshevik triumvirate of Eino Rahja, Adolf Taimi and Evert Eloranta on 20 March. The last commander-in-chief of the Red Guard was Kullervo Manner, from 10 April until the last period of the war, when the Reds no longer had a named leader. Some talented local commanders, such as Hugo Salmela in the Battle of Tampere, provided successful leadership, but could not change the course of the war. The Reds achieved some local victories as they retreated from southern Finland toward Russia, such as against German troops in the Battle of Syrjäntaka on 28–29 April in Tuulos. The revolutions in Russia divided the Soviet army officers politically and their attitude towards the Finnish Civil War varied. Mikhail Svechnikov led Finnish Red troops in western Finland in February and Konstantin Yeremejev led Soviet forces on the Karelian Isthmus, while other officers were mistrustful of their revolutionary peers and instead co-operated with General Mannerheim in disarming Soviet garrisons in Finland. On 30 January 1918, Mannerheim proclaimed to Russian soldiers in Finland that the White Army did not fight against Russia, but that the objective of the White campaign was to beat the Finnish Reds and the Soviet troops supporting them. The number of Soviet soldiers active in the civil war declined markedly once Germany attacked Russia on 18 February 1918.
The German-Soviet Treaty of Brest-Litovsk of 3 March restricted the Bolsheviks' support for the Finnish Reds to weapons and supplies. The Soviets remained active on the south-eastern front, mainly in the Battle of Rautu on the Karelian Isthmus between February and April 1918, where they defended the approaches to Petrograd. While the conflict has been called by some "The War of Amateurs", the White Army had two major advantages over the Red Guards: the professional military leadership of Gustaf Mannerheim and his staff, which included 84 Swedish volunteer officers and former Finnish officers of the czar's army; and 1,450 soldiers of the 1,900-strong Jäger battalion. The majority of the unit arrived in Vaasa on 25 February 1918. On the battlefield, the Jägers, battle-hardened on the Eastern Front, provided strong leadership that made disciplined combat by the common White troopers possible. The soldiers were similar to those of the Reds, having brief and inadequate training. At the beginning of the war, the White Guards' top leadership had little authority over volunteer White units, which obeyed only their local leaders. At the end of February, the Jägers began the rapid training of six conscript regiments. The Jäger battalion was politically divided, too. Some 450 Jägers, mostly socialists, remained stationed in Germany, as it was feared they were likely to side with the Reds. White Guard leaders faced a similar problem when drafting young men into the army in February 1918: 30,000 obvious supporters of the Finnish labour movement never showed up. It was also uncertain whether common troops drafted from the small and poor farms of central and northern Finland had strong enough motivation to fight the Finnish Reds. The Whites' propaganda promoted the idea that they were fighting a defensive war against Bolshevist Russians, and belittled the role of the Red Finns among their enemies. Social divisions appeared both between southern and northern Finland and within rural Finland. The economy and society of the north had modernised more slowly than those of the south. There was a more pronounced conflict between Christianity and socialism in the north, and the ownership of farmland conferred major social status, motivating the farmers to fight against the Reds. Sweden declared neutrality both during World War I and the Finnish Civil War. General opinion, in particular among the Swedish elite, was divided between supporters of the Allies and the Central Powers, Germanism being somewhat more popular. Three war-time priorities determined the pragmatic policy of the Swedish liberal-social democratic government: sound economics, with the export of iron ore and foodstuffs to Germany; sustaining the tranquility of Swedish society; and geopolitics. The government accepted the participation of Swedish volunteer officers and soldiers in the Finnish White Army in order to block the expansion of revolutionary unrest into Scandinavia. A 1,000-strong paramilitary Swedish Brigade, led by Hjalmar Frisell, took part in the Battle of Tampere and in the fighting south of the town. In February 1918, the Swedish Navy escorted the German naval squadron transporting Finnish Jägers and German weapons, and allowed it to pass through Swedish territorial waters. The Swedish socialists tried to open peace negotiations between the Whites and the Reds.
The weakness of Finland offered Sweden a chance to take over the geopolitically vital Finnish Åland Islands, east of Stockholm, but the German army's Finland operation stalled this plan. In March 1918, the German Empire intervened in the Finnish Civil War on the side of the White Army. Finnish activists leaning toward Germanism had been seeking German aid in freeing Finland from Soviet hegemony since late 1917, but because of the pressure they were facing on the Western Front, the Germans did not want to jeopardise their armistice and peace negotiations with Soviet Russia. The German stance changed after 10 February, when Leon Trotsky, despite the weakness of the Bolsheviks' position, broke off negotiations, hoping revolutions would break out in the German Empire and change everything. On 13 February, the German leadership decided to retaliate and send military detachments to Finland too. As a pretext for aggression, the Germans invited "requests for help" from the western neighbouring countries of Russia. Representatives of White Finland in Berlin duly requested help on 14 February. The Imperial German Army attacked Russia on 18 February. The offensive led to a rapid collapse of the Soviet forces and to the signing of the first Treaty of Brest-Litovsk by the Bolsheviks on 3 March 1918. Finland, the Baltic countries, Poland and Ukraine were transferred to the German sphere of influence. The Finnish Civil War opened a low-cost access route to Fennoscandia, where the geopolitical situation was altered when a British naval squadron invaded the Soviet harbour of Murmansk on the Arctic Ocean on 9 March 1918. The leader of the German war effort, General Erich Ludendorff, wanted to keep Petrograd under threat of attack via the Vyborg–Narva area and to install a German-led monarchy in Finland. On 5 March 1918, a German naval squadron landed on the Åland Islands (in mid-February 1918, the islands had been occupied by a Swedish military expedition, which departed from there in May). On 3 April 1918, the 10,000-strong Baltic Sea Division, led by General Rüdiger von der Goltz, launched the main attack at Hanko, west of Helsinki. It was followed on 7 April by Colonel Otto von Brandenstein's 3,000-strong Detachment Brandenstein taking the town of Loviisa, east of Helsinki. The larger German formations advanced eastwards from Hanko and took Helsinki on 12–13 April, while Detachment Brandenstein overran the town of Lahti on 19 April. The main German detachment proceeded northwards from Helsinki and took Hyvinkää and Riihimäki on 21–22 April, followed by Hämeenlinna on 26 April. The final blow to the cause of the Finnish Reds had been dealt when the Bolsheviks broke off the peace negotiations at Brest-Litovsk, leading to the German eastern offensive in February 1918. In February 1918, General Mannerheim deliberated on where to focus the general offensive of the Whites. There were two strategically vital enemy strongholds: Tampere, Finland's major industrial town in the south-west, and Vyborg, Karelia's main city. Although seizing Vyborg offered many advantages, his army's lack of combat skills and the potential for a major counterattack by the Reds in the area or in the south-west made it too risky. Mannerheim decided to strike first at Tampere. He launched the main assault on 16 March 1918, at Längelmäki, north-east of the town, through the right flank of the Reds' defence. At the same time, the Whites attacked through the north-western front line Vilppula–Kuru–Kyröskoski–Suodenniemi.
Although the Whites were unaccustomed to offensive warfare, some Red Guard units collapsed and retreated in panic under the weight of the offensive, while other Red detachments defended their posts to the last and were able to slow the advance of the White troops. Eventually, the Whites laid siege to Tampere. They cut off the Reds' southward connection at Lempäälä on 24 March and the westward ones at Siuro, Nokia, and Ylöjärvi on 25 March. The Battle for Tampere was fought between 16,000 White and 14,000 Red soldiers. It was Finland's first large-scale urban battle and one of the four most decisive military engagements of the war. The fight for the Tampere area began on 28 March, on the eve of Easter 1918, later called "Bloody Maundy Thursday", in the Kalevankangas cemetery. The White Army did not achieve a decisive victory in the fierce combat, suffering more than 50 percent losses in some of its units. The Whites had to re-organise their troops and battle plans, managing to raid the town centre in the early hours of 3 April. After a heavy, concentrated artillery barrage, the White Guards advanced from house to house and street to street, as the Red Guards retreated. In the late evening of 3 April, the Whites reached the eastern banks of the Tammerkoski rapids. The Reds' attempts to break the siege of Tampere from the outside along the Helsinki–Tampere railway failed. The Red Guards lost the western parts of the town between 4 and 5 April. The Tampere City Hall was among the last strongholds of the Reds. The battle ended on 6 April 1918 with the surrender of Red forces in the Pyynikki and Pispala sections of Tampere. The Reds, now on the defensive, showed increased motivation to fight during the battle. General Mannerheim was compelled to deploy some of the best-trained Jäger detachments, initially meant to be conserved for later use in the Vyborg area. The Battle of Tampere was the bloodiest action of the Civil War. The White Army lost 700–900 men, including 50 Jägers, the highest number of deaths the Jäger battalion suffered in a single battle of the 1918 war. The Red Guards lost 1,000–1,500 soldiers, with a further 11,000–12,000 captured. Seventy-one civilians died, mainly due to artillery fire. The eastern parts of the city, consisting mostly of wooden buildings, were completely destroyed. After peace talks between the Germans and the Finnish Reds were broken off on 11 April 1918, the battle for the capital of Finland began. At 05:00 on 12 April, around 2,000–3,000 German Baltic Sea Division soldiers, led by Colonel Hans von Tschirsky und von Bögendorff, attacked the city from the north-west, supported via the Helsinki–Turku railway. The Germans broke through the area between Munkkiniemi and Pasila, and advanced on the central-western parts of the town. The German naval squadron led by Vice Admiral Hugo Meurer blocked the city harbour, bombarded the southern town area, and landed "Seebataillon" marines at Katajanokka. Around 7,000 Finnish Reds defended Helsinki, but their best troops fought on other fronts of the war. The main strongholds of the Red defence were the Workers' Hall, the Helsinki railway station, the Red Headquarters at Smolna, the Senate Palace–Helsinki University area and the former Russian garrisons. By the late evening of 12 April, most of the southern parts and all of the western area of the city had been occupied by the Germans. Local Helsinki White Guards, having hidden in the city during the war, joined the battle as the Germans advanced through the town.
On 13 April, German troops took over the Market Square, the Smolna, the Presidential Palace and the Senate-Ritarihuone area. Toward the end, a German brigade of 2,000–3,000 soldiers, led by Colonel Kondrad Wolf, joined the battle. The unit rushed from the north into the eastern parts of Helsinki, pushing into the working-class neighbourhoods of Hermanni, Kallio and Sörnäinen. German artillery bombarded and destroyed the Workers' Hall and put out the red lantern of the Finnish revolution. The eastern parts of the town surrendered around 14:00 on 13 April, when a white flag was raised in the tower of the Kallio Church. Sporadic fighting lasted until the evening. In total, 60 Germans, 300–400 Reds and 23 White Guard troopers were killed in the battle. Around 7,000 Reds were captured. The German army celebrated the victory with a military parade in the centre of Helsinki on 14 April 1918. On 19 April 1918, Detachment Brandenstein took over the town of Lahti. The German troops advanced from the east-southeast via Nastola, through the Mustankallio graveyard in Salpausselkä and the Russian garrisons at Hennala. The battle was minor but strategically important as it cut the connection between the western and eastern Red Guards. Local engagements broke out in the town and the surrounding area between 22 April and 1 May 1918, as several thousand western Red Guards and Red civilian refugees tried to push through on their way to Russia. The German troops were able to hold major parts of the town and halt the Red advance. In total, 600 Reds and 80 German soldiers perished, and 30,000 Reds were captured in and around Lahti. After the defeat in Tampere, the Red Guards began a slow retreat eastwards. As the German army seized Helsinki, the White Army shifted the military focus to the Vyborg area, where 18,500 Whites advanced against 15,000 defending Reds. General Mannerheim's war plan had been revised as a result of the Battle for Tampere, a civilian, industrial town. He aimed to avoid new, complex city combat in Vyborg, an old military fortress. The Jäger detachments tried to tie down and destroy the Red force outside the town. The Whites were able to cut the Reds' connection to Petrograd and weaken the troops on the Karelian Isthmus on 20–26 April, but the decisive blow remained to be dealt in Vyborg. The final attack began late on 27 April with a heavy Jäger artillery barrage. The Reds' defence collapsed gradually, and eventually the Whites conquered Patterinmäki, the site of the Reds' symbolic last stand in the 1918 uprising, in the early hours of 29 April 1918. In total, 400 Whites died, while 500–600 Reds perished and 12,000–15,000 were captured. Both Whites and Reds carried out political violence through executions, respectively termed White Terror and Red Terror. The threshold of political violence had already been crossed by the Finnish activists during the First Period of Russification. Large-scale terror operations were born and bred in Europe during World War I, the first total war. The February and October Revolutions initiated similar violence in Finland: at first by Russian army troops executing their officers, and later between the Finnish Reds and Whites. The terror consisted, on the one hand, of a calculated aspect of general warfare and, on the other, of local, personal murders and corresponding acts of revenge. In the former, the commanding staff planned and organised the actions and gave orders to the lower ranks. At least a third of the Red Terror and most of the White Terror was centrally led.
In February 1918, a "Desk of Securing Occupied Areas" was implemented by the highest-ranking White staff, and the White troops were given "Instructions for Wartime Judicature", later called the Shoot on the Spot Declaration. This order authorised field commanders to execute essentially anyone they saw fit. No order by the less-organised, highest Red Guard leadership authorising Red Terror has been found. The paper was "burned" or the command was oral. The main goals of the terror were to destroy the command structure of the enemy; to clear and secure the areas governed and occupied by armies; and to create shock and fear among the civil population and the enemy soldiers. Additionally, the common troops' paramilitary nature and their lack of combat skills drove them to use political violence as a military weapon. Most of the executions were carried out by cavalry units called Flying Patrols, consisting of 10 to 80 soldiers aged 15 to 20 and led by an experienced, adult leader with absolute authority. The patrols, specialised in search and destroy operations and death squad tactics, were similar to German Sturmbattalions and Russian Assault units organized during World War I. The terror achieved some of its objectives but also gave additional motivation to fight against an enemy perceived to be inhuman and cruel. Both Red and White propaganda made effective use of their opponents' actions, increasing the spiral of revenge. The Red Guards executed influential Whites, including politicians, major landowners, industrialists, police officers, civil servants and teachers as well as White Guards. Ten priests of the Evangelical Lutheran Church and 90 moderate socialists were killed. The number of executions varied over the war months, peaking in February as the Reds secured power, but March saw low counts because the Reds could not seize new areas outside of the original frontlines. The numbers rose again in April as the Reds aimed to leave Finland. The two major centres for Red Terror were Toijala and Kouvola, where 300–350 Whites were executed between February and April 1918. The White Guards executed Red Guard and party leaders, Red troops, socialist members of the Finnish Parliament and local Red administrators, and those active in implementing Red Terror. The numbers varied over the months as the Whites conquered southern Finland. Comprehensive White Terror started with their general offensive in March 1918 and increased constantly. It peaked at the end of the war and declined and ceased after the enemy troops had been transferred to prison camps. During the high point of the executions, between the end of April and the beginning of May, 200 Reds were shot per day. White Terror was decisive against Russian soldiers who assisted the Finnish Reds, and several Russian non-socialist civilians were killed in the Vyborg massacre, the aftermath of the Battle of Vyborg. In total, 1,650 Whites died as a result of Red Terror, while around 10,000 Reds perished by White Terror, which turned into political cleansing. White victims have been recorded exactly, while the number of Red troops executed immediately after battles remains unclear. Together with the harsh prison-camp treatment of the Reds during 1918, the executions inflicted the deepest mental scars on the Finns, regardless of their political allegiance. Some of those who carried out the killings were traumatised, a phenomenon that was later documented. 
On 8 April 1918, after the defeat in Tampere and the German army intervention, the People's Delegation retreated from Helsinki to Vyborg. The loss of Helsinki pushed them to Petrograd on 25 April. The escape of the leadership embittered many Reds, and thousands of them tried to flee to Russia, but most of the refugees were encircled by White and German troops. In the Lahti area they surrendered on 1–2 May. The long Red caravans included women and children, who experienced a desperate, chaotic escape with severe losses due to White attacks. The scene was described as a "road of tears" for the Reds, but for the Whites, the sight of long, enemy caravans heading east was a victorious moment. The Red Guards' last strongholds between the Kouvola and Kotka area fell by 5 May, after the Battle of Ahvenkoski. The war of 1918 ended on 15 May 1918, when the Whites took over Fort Ino, a Russian coastal artillery base on the Karelian Isthmus, from the Russian troops. White Finland and General Mannerheim celebrated the victory with a large military parade in Helsinki on 16 May 1918.
The Red Guards had been defeated. The initially pacifist Finnish labour movement had lost the Civil War, several military leaders committed suicide and a majority of the Reds were sent to prison camps. The Vaasa Senate returned to Helsinki on 4 May 1918, but the capital was under the control of the German army. White Finland had become a protectorate of the German Empire and General Rüdiger von der Goltz was called "the true Regent of Finland". No armistice or peace negotiations were carried out between the Whites and Reds, and an official peace treaty to end the Finnish Civil War was never signed.
The White Army and German troops captured around 80,000 Red prisoners of war (POWs), including 5,000 women, 1,500 children and 8,000 Russians. The largest prison camps were Suomenlinna (an island facing Helsinki), Hämeenlinna, Lahti, Riihimäki, Tammisaari, Tampere and Vyborg. The Senate decided to keep the POWs detained until each individual's role in the Civil War had been investigated. Legislation making provision for a Treason Court was enacted on 29 May 1918. The judicature of the 145 inferior courts led by the Supreme Treason Court did not meet the standards of impartiality, due to the condemnatory atmosphere of White Finland. In total, 76,000 cases were examined and 68,000 Reds were convicted, primarily for treason; 39,000 were released on parole, while the mean length of punishment for the rest was two to four years in jail. A total of 555 people were sentenced to death, of whom 113 were executed. The trials revealed that some innocent adults had been imprisoned.
Combined with the severe food shortages caused by the Civil War, mass imprisonment led to high mortality rates in the POW camps, and the catastrophe was compounded by the angry, punitive and uncaring mentality of the victors. Many prisoners felt that they had been abandoned by their own leaders, who had fled to Russia. The physical and mental condition of the POWs declined in May 1918. Many prisoners had been sent to the camps in Tampere and Helsinki in the first half of April, and food supplies were disrupted during the Reds' eastward retreat. Consequently, 2,900 prisoners starved to death or died of diseases caused by malnutrition or the Spanish flu in June, followed by 5,000 in July, 2,200 in August and 1,000 in September. The mortality rate was highest in the Tammisaari camp at 34 percent, while the rate varied between 5 percent and 20 percent in the others.
In total, around 12,500 Finns perished (3,000–4,000 due to the Spanish flu) while detained. The dead were buried in mass graves near the camps. Moreover, 700 severely weakened POWs died soon after release from the camps. Most POWs were paroled or pardoned by the end of 1918, after a shift in the political situation. There were 6,100 Red prisoners left at the end of the year and 4,000 at the end of 1919. In January 1920, 3,000 POWs were pardoned and civil rights were returned to 40,000 former Reds. In 1927, the Social Democratic Party government led by Väinö Tanner pardoned the last 50 prisoners. The Finnish government paid reparations to 11,600 POWs in 1973. The traumatic hardships of the prison camps increased support for communism in Finland.
The Civil War was a catastrophe for Finland: around 36,000 people – 1.2 percent of the population – perished. The war left approximately 15,000 children orphaned. Most of the casualties occurred outside the battlefields: in the prison camps and the terror campaigns. Many Reds fled to Russia at the end of the war and during the period that followed. The fear, bitterness and trauma caused by the war deepened the divisions within Finnish society, and many moderate Finns identified themselves as "citizens of two nations."
The conflict caused disintegration within both socialist and non-socialist factions. The rightward shift of power caused a dispute between conservatives and liberals on the best system of government for Finland to adopt: the former demanded monarchy and restricted parliamentarianism; the latter demanded a democratic republic. Both sides justified their views on political and legal grounds. The monarchists leaned on the Swedish regime's 1772 monarchist constitution (accepted by Russia in 1809), belittled the Declaration of Independence of 1917, and proposed a modernised, monarchist constitution for Finland. The republicans argued that the 1772 law had lost its validity in the February Revolution, that the authority of the Russian czar had been assumed by the Finnish Parliament on 15 November 1917, and that the Republic of Finland had been adopted on 6 December that year. The republicans were able to halt the passage of the monarchists' proposal in Parliament. The royalists responded by applying the 1772 law to select a new monarch for the country without reference to Parliament.
The Finnish labour movement was divided into three parts: moderate social democrats in Finland; radical socialists in Finland; and communists in Soviet Russia. The Social Democratic Party had its first official party meeting after the Civil War on 25 December 1918, at which the party proclaimed a commitment to parliamentary means and disavowed Bolshevism and communism. The leaders of Red Finland, who had fled to Russia, established the Communist Party of Finland in Moscow on 29 August 1918. After the power struggle of 1917 and the bloody civil war, the former Fennomans and the social democrats who had supported "ultra-democratic" means in Red Finland declared a commitment to revolutionary Bolshevism–communism and to the dictatorship of the proletariat, under the control of Lenin.
In May 1918, a conservative-monarchist Senate was formed by J. K. Paasikivi, and the Senate asked the German troops to remain in Finland. The Treaty of Brest-Litovsk of 3 March 1918 and the German–Finnish agreements of 7 March bound White Finland to the German Empire's sphere of influence.
General Mannerheim resigned his post on 25 May after disagreements with the Senate about German hegemony over Finland, and about his planned attack on Petrograd to repulse the Bolsheviks and capture Russian Karelia. The Germans opposed these plans due to their peace treaties with Lenin. The Civil War weakened the Finnish Parliament; it became a Rump Parliament that included only three socialist representatives. On 9 October 1918, under pressure from Germany, the Senate and Parliament elected a German prince, Friedrich Karl, the brother-in-law of German Emperor William II, to become the King of Finland. The German leadership was able to utilise the breakdown of Russia for the geopolitical benefit of the German Empire in Fennoscandia as well. The Civil War and its aftermath diminished the independence of Finland, compared with the status it had held at the turn of the year 1917–1918.
The economic condition of Finland deteriorated drastically from 1918; recovery to pre-conflict levels was achieved only in 1925. The most acute crisis was in food supply, already deficient in 1917, though large-scale starvation had been avoided that year. The Civil War caused marked starvation in southern Finland. Late in 1918, Finnish politician Rudolf Holsti appealed for relief to Herbert Hoover, the American chairman of the Commission for Relief in Belgium. Hoover arranged for the delivery of food shipments and persuaded the Allies to relax their blockade of the Baltic Sea, which had obstructed food supplies to Finland, and to allow food into the country.
On 15 March 1917, the fate of the Finns had been decided outside Finland, in Petrograd. On 11 November 1918, the future of the nation was determined in Berlin, as a result of Germany's surrender to end World War I. The German Empire collapsed in the German Revolution of 1918–19, caused by a lack of food, war-weariness and defeat in the battles of the Western Front. General Rüdiger von der Goltz and his division left Helsinki on 16 December 1918, and Prince Friedrich Karl, who had not yet been crowned, abandoned his role four days later. Finland's status shifted from a monarchist protectorate of the German Empire to an independent republic. The new system of government was confirmed by the Constitution Act on 17 July 1919. The first local elections based on universal suffrage in Finland were held during 17–28 December 1918, and the first free parliamentary election after the Civil War took place on 3 March 1919. The United States and the United Kingdom recognised Finnish sovereignty on 6–7 May 1919. The Western powers demanded the establishment of democratic republics in post-war Europe, to lure the masses away from widespread revolutionary movements. The Finno–Russian Treaty of Tartu was signed on 14 October 1920, with the aim of stabilising political relations between Finland and Russia and settling the border question.
In April 1918, the leading Finnish social liberal and the eventual first President of Finland, Kaarlo Juho Ståhlberg, wrote: "It is urgent to get the life and development in this country back on the path that we had already reached in 1906 and which the turmoil of war turned us away from." Moderate social democrat Väinö Voionmaa agonised in 1919: "Those who still trust in the future of this nation must have an exceptionally strong faith. This young independent country has lost almost everything due to the war." Voionmaa was a vital companion for Väinö Tanner, the leader of the reformed Social Democratic Party. Santeri Alkio, the leader of the Agrarian League, also supported moderate politics.
His party colleague, Kyösti Kallio, urged in his Nivala address of 5 May 1918: "We must rebuild a Finnish nation, which is not divided into the Reds and Whites. We have to establish a democratic Finnish republic, where all the Finns can feel that we are true citizens and members of this society." In the end, many of the moderate Finnish conservatives followed the thinking of National Coalition Party member Lauri Ingman, who wrote in early 1918: "A political turn more to the right will not help us now, instead it would strengthen the support of socialism in this country." Together with other broad-minded Finns, these moderates constructed a Finnish compromise which eventually delivered a stable and broad parliamentary democracy. The compromise was based both on the defeat of the Reds in the Civil War and on the fact that most of the Whites' political goals had not been achieved. After foreign forces left Finland, the militant factions of the Reds and the Whites lost their backing, while the pre-1918 cultural and national integrity and the legacy of Fennomania stood out among the Finns. The weakness of both Germany and Russia after World War I empowered Finland and made a peaceful, domestic Finnish social and political settlement possible. A reconciliation process led to a slow and painful, but steady, national unification. In the end, the power vacuum and interregnum of 1917–1919 gave way to the Finnish compromise. From 1919 to 1991, the democracy and sovereignty of the Finns withstood challenges from right-wing and left-wing political radicalism, the crisis of World War II and pressure from the Soviet Union during the Cold War.
Between 1918 and the 1950s, mainstream literature and poetry presented the 1918 war from the White victors' point of view, with works such as the "Psalm of the Cannons" by Arvi Järventaus in 1918. In poetry, Bertel Gripenberg, who had volunteered for the White Army, celebrated its cause in "The Great Age" in 1928, and V. A. Koskenniemi in "Young Anthony" in 1918. The war tales of the Reds were kept silent. The first neutrally critical books were written soon after the war, notably "Devout Misery", written by the Nobel Prize laureate Frans Emil Sillanpää in 1919; "Dead Apple Trees" by Joel Lehtonen in 1918; and "Homecoming" by Runar Schildt in 1919. These were followed by Jarl Hemmer in 1931 with the book "A Man and His Conscience" and Oiva Paloheimo in 1942 with "Restless Childhood". Lauri Viita's book "Scrambled Ground" from 1950 presented the life and experiences of a working-class family in the Tampere of 1918, including a point of view from outsiders to the Civil War.
Between 1959 and 1962, Väinö Linna described in his trilogy "Under the North Star" the Civil War and World War II from the viewpoint of the common people. Part II of Linna's work opened a larger view of these events and included tales of the Reds in the 1918 war. At the same time, a new outlook on the war was opened by Paavo Haavikko's book "Private Matters", Veijo Meri's "The Events of 1918" and Paavo Rintala's "My Grandmother and Mannerheim", all published in 1960. In poetry, Viljo Kajava, who had experienced the Battle of Tampere at the age of nine, presented a pacifist view of the Civil War in his "Poems of Tampere" in 1966. The same battle is described in the novel "Corpse Bearer" by Antti Tuuri from 2007. Jenni Linturi's multilayered "Malmi 1917" (2013) describes contradictory emotions and attitudes in a village drifting towards civil war.
Väinö Linna's trilogy turned the general tide, and after it several books were written mainly from the Red viewpoint: the Tampere trilogy by Erkki Lepokorpi in 1977; Juhani Syrjä's "Juho 18" in 1998; "The Command" by Leena Lander in 2003; and "Sandra" by Heidi Köngäs in 2017. Kjell Westö's epic novel "Where We Once Went", published in 2006, deals with the period of 1915–1930 from both the Red and the White sides. Westö's book "Mirage 38" from 2013 describes the post-war traumas of the 1918 war and the Finnish mentality of the 1930s. Many of the stories have been utilised in motion pictures and in theatre.
Fridtjof Nansen Fridtjof Nansen (10 October 1861 – 13 May 1930) was a Norwegian explorer, scientist, diplomat, humanitarian, and Nobel Peace Prize laureate. In his youth he was a champion skier and ice skater. He led the team that made the first crossing of the Greenland interior in 1888, traversing the island on cross-country skis. He won international fame after reaching a record northern latitude of 86°14′ during his North Pole expedition of 1893–96. Although he retired from exploration after his return to Norway, his techniques of polar travel and his innovations in equipment and clothing influenced a generation of subsequent Arctic and Antarctic expeditions.
Nansen studied zoology at the Royal Frederick University in Christiania (renamed Oslo in 1925), and later worked as a curator at the University Museum of Bergen, where his research on the central nervous system of lower marine creatures earned him a doctorate and helped establish the neuron doctrine. Later, the famed neuroscientist Santiago Ramón y Cajal would win the 1906 Nobel Prize in Medicine for his research on the same subject, though "technical priority" for the theory is given to Nansen. After 1896 his main scientific interest switched to oceanography; in the course of his research he made many scientific cruises, mainly in the North Atlantic, and contributed to the development of modern oceanographic equipment. As one of his country's leading citizens, in 1905 Nansen spoke out for the ending of Norway's union with Sweden, and was instrumental in persuading Prince Carl of Denmark to accept the throne of the newly independent Norway. Between 1906 and 1908 he served as the Norwegian representative in London, where he helped negotiate the Integrity Treaty that guaranteed Norway's independent status.
In the final decade of his life, Nansen devoted himself primarily to the League of Nations, following his appointment in 1921 as the League's High Commissioner for Refugees. In 1922 he was awarded the Nobel Peace Prize for his work on behalf of the displaced victims of the First World War and related conflicts. Among the initiatives he introduced was the "Nansen passport" for stateless persons, a certificate that came to be recognised by more than 50 countries. He worked on behalf of refugees until his sudden death in 1930, after which the League established the Nansen International Office for Refugees to ensure that his work continued. This office received the Nobel Peace Prize in 1938. His name is commemorated in numerous geographical features, particularly in the polar regions.
The Nansen family originated in Denmark. Hans Nansen (1598–1667), a trader, was an early explorer of the White Sea region of the Arctic Ocean. In later life he settled in Copenhagen, becoming the city's "borgmester" in 1654.
Later generations of the family lived in Copenhagen until the mid-18th century, when Ancher Antoni Nansen moved to Norway (then in a union with Denmark). His son, Hans Leierdahl Nansen (1764–1821), was a magistrate, first in the Trondheim district and later in Jæren. After Norway's separation from Denmark in 1814, he entered national political life as the representative for Stavanger in the first Storting, and became a strong advocate of union with Sweden. After suffering a paralytic stroke in 1821, Hans Leierdahl Nansen died, leaving a four-year-old son, Baldur Fridtjof Nansen, the explorer's father. Baldur was a lawyer without ambitions for public life, who became Reporter to the Supreme Court of Norway. He married twice, the second time to Adelaide Johanne Thekla Isidore Bølling Wedel-Jarlsberg from Bærum, a niece of Herman Wedel-Jarlsberg, who had helped frame the Norwegian constitution of 1814 and was later the Swedish king's Norwegian Viceroy. Baldur and Adelaide settled at Store Frøen, an estate at Aker, a few kilometres north of Norway's capital city, Christiania (since renamed Oslo). The couple had three children; the first died in infancy, and the second, born 10 October 1861, was Fridtjof Nansen.
Store Frøen's rural surroundings shaped the nature of Nansen's childhood. In the short summers the main activities were swimming and fishing, while in the autumn the chief pastime was hunting for game in the forests. The long winter months were devoted mainly to skiing, which Nansen began to practise at the age of two, on improvised skis. At the age of 10 he defied his parents and attempted the ski jump at the nearby Huseby installation. This exploit had near-disastrous consequences, as on landing the skis dug deep into the snow, pitching the boy forward: "I, head first, described a fine arc in the air ... [W]hen I came down again I bored into the snow up to my waist. The boys thought I had broken my neck, but as soon as they saw there was life in me ... a shout of mocking laughter went up." Nansen's enthusiasm for skiing was undiminished, though as he records, his efforts were overshadowed by those of the skiers from the mountainous region of Telemark, where a new style of skiing was being developed. "I saw this was the only way", wrote Nansen later.
At school, Nansen worked adequately without showing any particular aptitude. Studies took second place to sports, or to expeditions into the forests where he would live "like Robinson Crusoe" for weeks at a time. Through such experiences Nansen developed a marked degree of self-reliance. He became an accomplished skier and a highly proficient skater. Life was disrupted when, in the summer of 1877, Adelaide Nansen died suddenly. Distressed, Baldur Nansen sold the Store Frøen property and moved with his two sons to Christiania. Nansen's sporting prowess continued to develop; at 18 he broke the world one-mile (1.6 km) skating record, and in the following year won the national cross-country skiing championship, a feat he would repeat on 11 subsequent occasions. In 1880 Nansen passed his university entrance examination, the "examen artium". He decided to study zoology, claiming later that he chose the subject because he thought it offered the chance of a life in the open air. He began his studies at the Royal Frederick University in Christiania early in 1881. Early in 1882 Nansen took "...the first fatal step that led me astray from the quiet life of science."
Professor Robert Collett of the university's zoology department proposed that Nansen take a sea voyage, to study Arctic zoology at first hand. Nansen was enthusiastic, and made arrangements through a recent acquaintance, Captain Axel Krefting, commander of the sealer "Viking". The voyage began on 11 March 1882 and extended over the following five months. In the weeks before sealing started, Nansen was able to concentrate on scientific studies. From water samples he showed that, contrary to previous assumption, sea ice forms on the surface of the water rather than below. His readings also demonstrated that the Gulf Stream flows beneath a cold layer of surface water. Through the spring and early summer "Viking" roamed between Greenland and Spitsbergen in search of seal herds. Nansen became an expert marksman, and on one day proudly recorded that his team had shot 200 seals. In July, "Viking" became trapped in the ice close to an unexplored section of the Greenland coast; Nansen longed to go ashore, but this was impossible. However, he began to develop the idea that the Greenland icecap might be explored, or even crossed. On 17 July the ship broke free from the ice, and early in August was back in Norwegian waters.
Nansen did not resume formal studies at the university. Instead, on Collett's recommendation, he accepted a post as curator in the zoological department of the Bergen Museum. He was to spend the next six years of his life there—apart from a six-month sabbatical tour of Europe—working and studying with leading figures such as Gerhard Armauer Hansen, the discoverer of the leprosy bacillus, and Daniel Cornelius Danielssen, the museum's director, who had turned it from a backwater collection into a centre of scientific research and education. Nansen's chosen area of study was the then relatively unexplored field of neuroanatomy, specifically the central nervous system of lower marine creatures. Before leaving for his sabbatical in February 1886 he published a paper summarising his research to date, in which he stated that "anastomoses or unions between the different ganglion cells" could not be demonstrated with certainty. This unorthodox view was confirmed by the simultaneous researches of the embryologist Wilhelm His and the psychiatrist August Forel. Nansen is considered the first Norwegian defender of the neuron theory, originally proposed by Santiago Ramón y Cajal. His subsequent paper, "The Structure and Combination of Histological Elements of the Central Nervous System", published in 1887, became his doctoral thesis.
The idea of an expedition across the Greenland icecap grew in Nansen's mind throughout his Bergen years. In 1887, after the submission of his doctoral thesis, he finally began organising this project. Before then, the two most significant penetrations of the Greenland interior had been those of Adolf Erik Nordenskiöld in 1883, and Robert Peary in 1886. Both had set out from Disko Bay on the western coast, and had travelled some distance eastward before turning back. By contrast, Nansen proposed to travel from east to west, ending rather than beginning his trek at Disko Bay. A party setting out from the inhabited west coast would, he reasoned, have to make a return trip, as no ship could be certain of reaching the dangerous east coast and picking them up. By starting from the east—assuming that a landing could be made there—Nansen's would be a one-way journey towards a populated area.
The party would have no line of retreat to a safe base; the only way to go would be forward, a situation that fitted Nansen's philosophy completely. Nansen rejected the complex organisation and heavy manpower of other Arctic ventures, and instead planned his expedition for a small party of six. Supplies would be manhauled on specially designed lightweight sledges. Much of the equipment, including sleeping bags, clothing and cooking stoves, also needed to be designed from scratch. These plans received a generally poor reception in the press; one critic had no doubt that "if [the] scheme be attempted in its present form ... the chances are ten to one that he will ... uselessly throw his own and perhaps others' lives away". The Norwegian parliament refused to provide financial support, believing that such a potentially risky undertaking should not be encouraged. The project was eventually launched with a donation from a Danish businessman, Augustin Gamél; the rest came mainly from small contributions from Nansen's countrymen, through a fundraising effort organised by students at the university.
Despite the adverse publicity, Nansen received numerous applications from would-be adventurers. He wanted expert skiers, and attempted to recruit from the skiers of Telemark, but his approaches were rebuffed. Nordenskiöld had advised Nansen that Sami people, from Finnmark in the far north of Norway, were expert snow travellers, so Nansen recruited a pair, Samuel Balto and Ole Nielsen Ravna. The remaining places went to Otto Sverdrup, a former sea-captain who had more recently worked as a forester; Oluf Christian Dietrichson, an army officer; and Kristian Kristiansen, an acquaintance of Sverdrup's. All had experience of outdoor life in extreme conditions, and all were skilled skiers. Just before the party's departure, Nansen attended a formal examination at the university, which had agreed to receive his doctoral thesis. In accordance with custom he was required to defend his work before appointed examiners acting as "devil's advocates". He left before knowing the outcome of this process.
On 3 June 1888, Nansen's party was picked up from the north-western Icelandic port of Ísafjörður by the sealer "Jason". A week later the Greenland coast was sighted, but progress was hindered by thick pack ice. On 17 July, with the coast still some distance away, Nansen decided to launch the small boats; they were within sight of the Sermilik Fjord, which Nansen believed would offer a route up on to the icecap. The expedition left "Jason" "in good spirits and with the highest hopes of a fortunate result", according to "Jason's" captain. There followed days of extreme frustration for the party as, prevented by weather and sea conditions from reaching the shore, they drifted southwards with the ice. Most of this time was spent camping on the ice itself—it was too dangerous to launch the boats. By 29 July 1888, they had drifted far south of the point where they had left the ship. On that day they finally reached land, but were too far south to begin the crossing. After a brief rest, Nansen ordered the team back into the boats and to begin rowing north. During the next 12 days, the party battled northward along the coast through the ice floes. On the first day they encountered a large Eskimo encampment near Cape Steen Bille, and there were further occasional contacts with the nomadic native population as the journey continued.
On 11 August, when they had reached Umivik Bay, Nansen decided that although they were still far south of his intended starting place, they needed to begin the crossing before the season became too advanced for travel. After landing at Umivik, they spent the next four days preparing for their journey, and on the evening of 15 August they set out, heading north-west towards Christianhaab (now Qasigiannguit) on the west Greenland shores of Disko Bay. Over the next few days, the party struggled to ascend the inland ice over a treacherous surface with many hidden crevasses. The weather was generally bad; on one occasion all progress was halted for three days by violent storms and continuous rain. On 26 August Nansen concluded that there was now no chance of reaching Christianhaab by mid-September, when the last ship was due to leave. He therefore ordered a change of course, almost due west towards Godthaab (now Nuuk), a considerably shorter journey. The rest of the party, according to Nansen, "hailed the change of plan with acclamation". They continued climbing until, on 11 September, they reached the summit of the icecap, where temperatures dropped sharply at night. From then on the downward slope made travelling easier, although the terrain was difficult and the weather remained hostile. Progress was slow because of fresh snowfalls, which made dragging the sledges as hard as pulling them through sand. By 26 September they had battled their way down to the edge of a fjord that ran westward towards Godthaab. From their tent, some local willows and parts of the sledges, Sverdrup constructed a makeshift boat, and on 29 September Nansen and Sverdrup began the last stage of the journey, rowing down the fjord. Four days later, on 3 October 1888, they reached Godthaab, where they were greeted by the town's Danish representative. His first words were to inform Nansen that he had been awarded his doctorate, a matter that "could not have been more remote from my thoughts at that moment". The crossing had been accomplished in 49 days, making 78 days in total since they had left the "Jason"; throughout the journey the team had maintained careful meteorological, geographical and other records relating to the previously unexplored interior.
The rest of the team arrived in Godthaab on 12 October. Nansen soon learned that no ship was likely to call at Godthaab until the following spring, though they were able to send letters back to Norway via a boat leaving Ivigtut at the end of October. He and his party therefore spent the next seven months in Greenland, hunting, fishing and studying the life of the local inhabitants. On 15 April 1889, the Danish ship "Hvidbjørnen" finally entered the harbour, and Nansen and his comrades prepared to depart. Nansen recorded: "It was not without sorrow that we left this place and these people, among whom we had enjoyed ourselves so well."
"Hvidbjørnen" reached Copenhagen on 21 May 1889. News of the crossing had preceded its arrival, and Nansen and his companions were feted as heroes. This welcome, however, was dwarfed by the reception in Christiania a week later, when crowds of between thirty and forty thousand—a third of the city's population—thronged the streets as the party made its way to the first of a series of receptions. The interest and enthusiasm generated by the expedition's achievement led directly to the formation that year of the Norwegian Geographical Society.
Nansen accepted the position of curator of the Royal Frederick University's zoology collection, a post which carried a salary but involved no duties; the university was satisfied by the association with the explorer's name. Nansen's main task in the following weeks was writing his account of the expedition, but he found time late in June to visit London, where he met the Prince of Wales (the future King Edward VII) and addressed a meeting of the Royal Geographical Society (RGS). The RGS president, Sir Mountstuart Elphinstone Grant Duff, said that Nansen had claimed "the foremost place amongst northern travellers", and later awarded him the Society's prestigious Founder's Medal. This was one of many honours Nansen received from institutions all over Europe. He was invited by a group of Australians to lead an expedition to Antarctica, but declined, believing that Norway's interests would be better served by a North Pole conquest.
On 11 August 1889 Nansen announced his engagement to Eva Sars, the daughter of Michael Sars, a zoology professor who had died when Eva was 11 years old. The couple had met some years previously, at the skiing resort of Frognerseteren, where Nansen recalled seeing "two feet sticking out of the snow". Eva was three years older than Nansen and, despite the evidence of this first meeting, was an accomplished skier. She was also a celebrated classical singer who had been coached in Berlin by Désirée Artôt, one-time paramour of Tchaikovsky. The engagement surprised many; since Nansen had previously expressed himself forcefully against the institution of marriage, Otto Sverdrup assumed he had read the message wrongly. The wedding took place on 6 September 1889, less than a month after the engagement.
Nansen first began to consider the possibility of reaching the North Pole by using the natural drift of the polar ice when, in 1884, he read the theories of Henrik Mohn, the distinguished Norwegian meteorologist. Artefacts found on the Greenland coast had been identified as coming from the lost US Arctic exploration vessel "Jeannette", which had been crushed and sunk in June 1881 on the opposite side of the Arctic Ocean, off the Siberian coast. Mohn surmised that the location of the artefacts indicated the existence of an ocean current, flowing from east to west all the way across the polar sea, possibly over the pole itself. A strong enough ship might therefore enter the frozen Siberian sea and drift to the Greenland coast via the pole. This idea remained with Nansen during the following years. After his triumphant return from Greenland he began to develop a detailed plan for a polar venture, which he made public in February 1890 at a meeting of the recently formed Norwegian Geographical Society. Previous expeditions, he argued, had approached the North Pole from the west and had failed because they were working "against" the prevailing east-west current; the secret of success was to work "with" this current. A workable plan, Nansen said, would require a small, strong and manoeuvrable ship capable of carrying fuel and provisions for twelve men for five years. The ship would sail to the approximate location of "Jeannette's" sinking and enter the ice. It would then drift west with the current towards the pole and beyond it, eventually reaching the sea between Greenland and Spitsbergen. Many experienced polar hands were dismissive of Nansen's plans. The retired American explorer Adolphus Greely called the idea "an illogical scheme of self-destruction".
Sir Allen Young, a veteran of the searches for Sir John Franklin's lost expedition, and Sir Joseph Hooker, who had sailed south with James Clark Ross in 1839–43, were equally dismissive. However, after an impassioned speech Nansen secured the support of the Norwegian parliament, which voted him a grant. The balance of funding was met by private donations and from a national appeal. Nansen chose Colin Archer, Norway's leading shipbuilder and naval architect, to design and build a suitable ship for the planned expedition. Using the toughest oak timbers available, and an intricate system of crossbeams and braces throughout its length, Archer built a vessel of extraordinary strength. Its rounded hull was designed so that it would slip upwards out of the grip of pack ice instead of being crushed. Speed and sailing performance were secondary to the requirement of making the ship a safe and warm shelter during a predicted lengthy confinement. The ship's length-to-beam ratio of just over three gave it a stubby appearance, which Archer justified thus: "A ship that is built with exclusive regard to its suitability for [Nansen's] object must differ essentially from any known vessel." The ship was launched by Eva Nansen at Archer's yard at Larvik, on 6 October 1892, and was named "Fram", in English "Forward".
From thousands of applicants, Nansen selected a party of twelve. Otto Sverdrup from the Greenland expedition was appointed captain of "Fram" and second-in-command of the expedition. Competition for places on the voyage was such that reserve army lieutenant and dog-driving expert Hjalmar Johansen signed on as ship's stoker, the only position available.
"Fram" left Christiania on 24 June 1893, cheered on by thousands of well-wishers. After a slow journey around the coast, the final port of call was Vardø, in the far north-east of Norway. "Fram" left Vardø on 21 July, following the North-East Passage route pioneered by Nordenskiöld in 1878–79, along the northern coast of Siberia. Progress was impeded by fog and ice conditions in the mainly uncharted seas. The crew also experienced the dead water phenomenon, where a ship's forward progress is impeded by friction caused by a layer of fresh water lying on top of heavier salt water. Nevertheless, Cape Chelyuskin, the most northerly point of the Eurasian continental mass, was passed on 10 September. Ten days later, as "Fram" approached the area in which "Jeannette" had been crushed, heavy pack ice was sighted at around latitude 78°N. Nansen followed the line of the pack northwards to a position recorded as 78°49′N, 132°53′E, before ordering engines stopped and the rudder raised. From this point "Fram's" drift began.
The first weeks in the ice were frustrating, as the drift moved unpredictably, sometimes north, sometimes south; by 19 November "Fram's" latitude was south of that at which she had entered the ice. Only after the turn of the year, in January 1894, did the northerly direction become generally settled; the 80° mark was finally passed on 22 March. Nansen calculated that, at this rate, it might take the ship five years to reach the pole. As the ship's northerly progress continued at a rate rarely above a mile (1.6 km) a day, Nansen began privately to consider a new plan—a dog sledge journey towards the pole. With this in mind he began to practise dog-driving, making many experimental journeys over the ice.
In November Nansen announced his plan: when the ship passed latitude 83° he and Hjalmar Johansen would leave the ship with the dogs and make for the pole, while "Fram", under Sverdrup, continued its drift until it emerged from the ice in the North Atlantic. After reaching the pole, Nansen and Johansen would make for the nearest known land, the recently discovered and sketchily mapped Franz Josef Land. They would then cross to Spitsbergen, where they would find a ship to take them home. The crew spent the rest of the 1894–95 winter preparing clothing and equipment for the forthcoming sledge journey. Kayaks were built, to be carried on the sledges until needed for the crossing of open water. Preparations were interrupted early in January when violent tremors shook the ship. The crew disembarked, fearing that the vessel would be crushed, but "Fram" proved herself equal to the danger. On 8 January 1895 the ship's position was 83°34′N, above Greely's previous Farthest North record of 83°24′N.
On 14 March 1895, after two false starts and with the ship's position at 84°4′N, Nansen and Johansen began their journey. Nansen had allowed 50 days to cover the distance to the pole, an average daily journey of seven nautical miles (13 km; 8.1 mi). After a week of travel a sextant observation indicated that they were averaging nine nautical miles (17 km; 10 mi) a day, putting them ahead of schedule. However, uneven surfaces made skiing more difficult, and their speeds slowed. They also realised that they were marching against a southerly drift, and that distances travelled did not necessarily equate to northerly progression. On 3 April Nansen began to wonder whether the pole was, indeed, attainable. Unless their speed improved, their food would not last them to the pole and then on to Franz Josef Land. He confided in his diary: "I have become more and more convinced we ought to turn before time." On 7 April, after making camp and observing that the way ahead was "a veritable chaos of iceblocks stretching as far as the horizon", Nansen decided to turn south. He recorded the latitude of the final northerly camp as 86°13.6′N, almost three degrees beyond the previous Farthest North mark.
At first Nansen and Johansen made good progress south, but on 13 April they suffered a serious setback when both of their chronometers stopped. Without knowing the correct time, it was impossible for them to calculate their longitude and thus navigate their way accurately to Franz Josef Land. They restarted the watches on the basis of Nansen's guess that they were at longitude 86°E, but from then on were uncertain of their true position. Towards the end of April they observed the tracks of an Arctic fox, the first trace they had seen of a living creature other than their dogs since leaving "Fram". Soon they began to see bear tracks, and by the end of May seals, gulls and whales were in evidence. On 31 May, by Nansen's calculations, they were not far from Cape Fligely, the northernmost known point of Franz Josef Land. However, travel conditions worsened as the warmer weather caused the ice to break up. On 22 June the pair decided to rest on a stable ice floe while they repaired their equipment and gathered their strength for the next stage of their journey. They remained on the floe for a month. The day after leaving this camp Nansen recorded: "At last the marvel has come to pass—land, land, and after we had almost given up our belief in it!"
Whether this still-distant land was Franz Josef Land or a new discovery they did not know—they had only a rough sketch map to guide them. On 6 August they reached the edge of the ice, where they shot the last of their dogs—they had been killing the weakest regularly since 24 April, to feed the others. They then lashed their two kayaks together, raised a sail and made for the land. It was soon clear that this land was part of a group of islands. As they moved slowly southwards, Nansen tentatively identified a headland as Cape Felder, on the western edge of Franz Josef Land. Towards the end of August, as the weather grew colder and travel became increasingly difficult, Nansen decided to camp for the winter. In a sheltered cove, with stones and moss for building materials, the pair erected a hut which was to be their home for the next eight months. With ready supplies of bear, walrus and seal to keep their larder stocked, their principal enemy was not hunger but inactivity. After muted Christmas and New Year celebrations, in slowly improving weather they began to prepare to leave their refuge, but it was 19 May 1896 before they were able to resume their journey.
On 17 June, during a stop for repairs after the kayaks had been attacked by a walrus, Nansen thought he heard sounds of a dog barking, and of voices. He went to investigate, and a few minutes later saw the figure of a man approaching. It was the British explorer Frederick Jackson, who was leading an expedition to Franz Josef Land and was camped at Cape Flora on the nearby Northbrook Island. The two were equally astonished by their encounter; after some awkward hesitation Jackson asked: "You are Nansen, aren't you?", and received the reply "Yes, I am Nansen." Johansen was soon picked up, and the pair were taken to Cape Flora where, during the following weeks, they recuperated from their ordeal. Nansen later wrote that he could "still scarcely grasp" the sudden change of fortune; had it not been for the walrus attack that caused the delay, the two parties might have been unaware of each other's existence.
On 7 August Nansen and Johansen boarded Jackson's supply ship "Windward" and sailed for Vardø, where they arrived on the 13th. They were greeted by Henrik Mohn, the originator of the polar drift theory, who was in the town by chance. The world was quickly informed by telegram of Nansen's safe return, but as yet there was no news of "Fram". Taking the weekly mail steamer south, Nansen and Johansen reached Hammerfest on 18 August, where they learned that "Fram" had been sighted. She had emerged from the ice north and west of Spitsbergen, as Nansen had predicted, and was now on her way to Tromsø. She had not passed over the pole, nor exceeded Nansen's northern mark. Without delay Nansen and Johansen sailed for Tromsø, where they were reunited with their comrades.
The homeward voyage to Christiania was a series of triumphant receptions at every port. On 9 September "Fram" was escorted into Christiania's harbour and welcomed by the largest crowds the city had ever seen. The crew were received by King Oscar, and Nansen, reunited with his family, remained at the palace for several days as a special guest. Tributes arrived from all over the world; typical was that from the British mountaineer Edward Whymper, who wrote that Nansen had made "almost as great an advance as has been accomplished by all other voyages in the nineteenth century put together". Nansen's first task on his return was to write his account of the voyage.
This he did remarkably quickly, producing 300,000 words of Norwegian text by November 1896; the English translation, titled "Farthest North", was ready in January 1897. The book was an instant success, and secured Nansen's long-term financial future. Nansen included without comment the one significant adverse criticism of his conduct, that of Greely, who had written in "Harper's Weekly" on Nansen's decision to leave "Fram" and strike for the pole: "It passes comprehension how Nansen could have thus deviated from the most sacred duty devolving on the commander of a naval expedition."
During the 20 years following his return from the Arctic, Nansen devoted most of his energies to scientific work. In 1897 he accepted a professorship in zoology at the Royal Frederick University, which gave him a base from which he could tackle the major task of editing the reports of the scientific results of the "Fram" expedition. This was a much more arduous task than writing the expedition narrative. The results were eventually published in six volumes, and according to a later polar scientist, Robert Rudmose-Brown, "were to Arctic oceanography what the Challenger expedition results had been to the oceanography of other oceans." In 1900 Nansen became director of the Christiania-based International Laboratory for North Sea Research, and helped found the International Council for the Exploration of the Sea. Through his connection with the latter body, in the summer of 1900 Nansen embarked on his first visit to Arctic waters since the "Fram" expedition, a cruise to Iceland and Jan Mayen Land on the oceanographic research vessel "Michael Sars", named after Eva's father. Shortly after his return he learned that his Farthest North record had been passed by members of the Duke of the Abruzzi's Italian expedition. They had reached 86°34′N on 24 April 1900, in an attempt to reach the North Pole from Franz Josef Land. Nansen received the news philosophically: "What is the value of having goals for their own sake? They all vanish ... it is merely a question of time."
Nansen was now considered an oracle by all would-be explorers of the north and south polar regions. Abruzzi had consulted him, as had the Belgian Adrien de Gerlache, each of whom took expeditions to the Antarctic. Although Nansen refused to meet his own countryman and fellow explorer Carsten Borchgrevink (whom he considered a fraud), he gave advice to Robert Falcon Scott on polar equipment and transport, prior to the 1901–04 Discovery Expedition. At one point Nansen seriously considered leading a South Pole expedition himself, and asked Colin Archer to design two ships. However, these plans remained on the drawing board.
By 1901 Nansen's family had expanded considerably. A daughter, Liv, had been born just before "Fram" set out; a son, Kåre, was born in 1897, followed by a daughter, Irmelin, in 1900, and a second son, Odd, in 1901. The family home, which Nansen had built in 1891 from the profits of his Greenland expedition book, was now too small. Nansen acquired a plot of land in the Lysaker district and built, substantially to his own design, a large and imposing house which combined some of the characteristics of an English manor house with features from the Italian Renaissance. The house was ready for occupation by April 1902; Nansen called it "Polhøgda" (in English "polar heights"), and it remained his home for the rest of his life. A fifth and final child, a son, Asmund, was born at Polhøgda in 1903.
The union between Norway and Sweden, imposed by the Great Powers in 1814, had been under considerable strain through the 1890s, the chief issue in question being Norway's rights to its own consular service. Nansen, although not by inclination a politician, had spoken out on the issue on several occasions in defence of Norway's interests. It seemed, early in the 20th century, that agreement between the two countries might be possible, but hopes were dashed when negotiations broke down in February 1905. The Norwegian government fell, and was replaced by one led by Christian Michelsen, whose programme was one of separation from Sweden. In February and March Nansen published a series of newspaper articles which placed him firmly in the separatist camp. The new prime minister wanted Nansen in the cabinet, but Nansen had no political ambitions. However, at Michelsen's request he went to Berlin and then to London where, in a letter to "The Times", he presented Norway's legal case for a separate consular service to the English-speaking world. On 17 May 1905, Norway's Constitution Day, Nansen addressed a large crowd in Christiania, saying: "Now have all ways of retreat been closed. Now remains only one path, the way forward, perhaps through difficulties and hardships, but forward for our country, to a free Norway". He also wrote a book, "Norway and the Union with Sweden", specifically to promote Norway's case abroad.
On 23 May the Storting passed the Consulate Act establishing a separate consular service. King Oscar refused his assent; on 27 May the Norwegian cabinet resigned, but the king would not recognise this step. On 7 June the Storting unilaterally announced that the union with Sweden was dissolved. In a tense situation the Swedish government agreed to Norway's request that the dissolution should be put to a referendum of the Norwegian people. This was held on 13 August 1905 and resulted in an overwhelming vote for separation, at which point King Oscar relinquished the crown of Norway while retaining the Swedish throne. A second referendum, held in November, determined that the new independent state should be a monarchy rather than a republic. In anticipation of this, Michelsen's government had been considering the suitability of various princes as candidates for the Norwegian throne. Since King Oscar refused to allow anyone from his own House of Bernadotte to accept the crown, the favoured choice was Prince Charles of Denmark. In July 1905 Michelsen sent Nansen to Copenhagen on a secret mission to persuade Charles to accept the Norwegian throne. Nansen was successful; shortly after the second referendum Charles was proclaimed king, taking the name Haakon VII. He and his wife, the British princess Maud, were crowned in the Nidaros Cathedral in Trondheim on 22 June 1906.
In April 1906 Nansen was appointed Norway's first Minister in London. His main task was to work with representatives of the major European powers on an Integrity Treaty which would guarantee Norway's position. Nansen was popular in England, and got on well with King Edward, though he found court functions and diplomatic duties disagreeable; "frivolous and boring" was his description. However, he was able to pursue his geographical and scientific interests through contacts with the Royal Geographical Society and other learned bodies. The Treaty was signed on 2 November 1907, and Nansen considered his task complete.
Resisting the pleas of, among others, King Edward that he should remain in London, Nansen resigned his post on 15 November. A few weeks later, still in England as the king's guest at Sandringham, Nansen received word that Eva was seriously ill with pneumonia. On 8 December he set out for home, but before he reached Polhøgda he learned, from a telegram, that Eva had died. After a period of mourning, Nansen returned to London. He had been persuaded by his government to rescind his resignation until after King Edward's state visit to Norway in April 1908. His formal retirement from the diplomatic service was dated 1 May 1908, the same day on which his university professorship was changed from zoology to oceanography. This new designation reflected the general character of Nansen's more recent scientific interests. In 1905 he had supplied the Swedish physicist Walfrid Ekman with the data which established the principle in oceanography known as the Ekman spiral. Based on Nansen's observations of ocean currents recorded during the "Fram" expedition, Ekman concluded that the effect of wind on the sea's surface produced currents which "formed something like a spiral staircase, down towards the depths". In 1909 Nansen combined with Bjørn Helland-Hansen to publish an academic paper, "The Norwegian Sea: its Physical Oceanography", based on the "Michael Sars" voyage of 1900.
Nansen had by now retired from polar exploration, the decisive step being his release of "Fram" to his fellow Norwegian Roald Amundsen, who was planning a North Pole expedition. When Amundsen made his controversial change of plan and set out for the South Pole, Nansen stood by him. Between 1910 and 1914, Nansen participated in several oceanographic voyages. In 1910, aboard the Norwegian naval vessel "Fridtjof", he carried out researches in the northern Atlantic, and in 1912 he took his own yacht, "Veslemøy", to Bear Island and Spitsbergen. The main objective of the "Veslemøy" cruise was the investigation of salinity in the North Polar Basin. One of Nansen's lasting contributions to oceanography was his work designing instruments and equipment; the "Nansen bottle" for taking deep water samples remained in use into the 21st century, in a version updated by Shale Niskin.
At the request of the Royal Geographical Society, Nansen began work on a study of Arctic discoveries, which developed into a two-volume history of the exploration of the northern regions up to the beginning of the 16th century. This was published in 1911 as "Nord i Tåkeheimen" ("In Northern Mists"). That year he renewed an acquaintance with Kathleen Scott, wife of Robert Falcon Scott, whose Terra Nova Expedition had sailed for Antarctica in 1910. Biographer Roland Huntford has asserted, without any compelling evidence, that Nansen and Kathleen Scott had a brief love affair. Louisa Young, in her biography of Lady Scott, refutes the claim. Many women were attracted to Nansen, and he had a reputation as a womaniser. His personal life was troubled around this time; in January 1913 he received news of the suicide of Hjalmar Johansen, who had returned in disgrace from Amundsen's successful South Pole expedition. In March 1913, Nansen's youngest son Asmund died after a long illness. In the summer of 1913 Nansen travelled to the Kara Sea, at the invitation of Jonas Lied, as part of a delegation investigating a possible trade route between Western Europe and the Siberian interior.
The party then took a steamer up the Yenisei River to Krasnoyarsk, and travelled on the Trans-Siberian Railway to Vladivostok before turning for home. Nansen published a report from the trip in "Through Siberia". The life and culture of the Russian peoples aroused in Nansen an interest and sympathy he would carry through to his later life. Immediately before the First World War, Nansen joined Helland-Hansen in an oceanographical cruise in eastern Atlantic waters. On the outbreak of war in 1914 Norway declared its neutrality, alongside Sweden and Denmark. Nansen was appointed president of the Norwegian Union of Defence, but had few official duties, and continued with his professional work as far as circumstances permitted. As the war progressed, the loss of Norway's overseas trade led to acute shortages of food in the country, which became critical in April 1917 when the United States entered the war and placed extra restrictions on international trade. Nansen was dispatched to Washington by the Norwegian government; after months of discussion he secured food and other supplies in return for the introduction of a rationing system. When his government hesitated over the deal, he signed the agreement on his own initiative. Within a few months of the war's end in November 1918 a draft agreement had been accepted by the Paris Peace Conference to create a League of Nations, as a means of resolving disputes between nations by peaceful means. The foundation of the League at this time was providential as far as Nansen was concerned, giving him a new outlet for his restless energy. He became president of the Norwegian League of Nations Society, and although the Scandinavian nations with their traditions of neutrality initially held themselves aloof, his advocacy helped to ensure that Norway became a full member of the League in 1920, and he became one of its three delegates to the League's General Assembly. In April 1920, at the League's request, Nansen began organising the repatriation of around half a million prisoners of war, stranded in various parts of the world. Of these, 300,000 were in Russia which, gripped by revolution and civil war, had little interest in their fate. Nansen was able to report to the Assembly in November 1920 that around 200,000 men had been returned to their homes. "Never in my life", he said, "have I been brought into touch with so formidable an amount of suffering." Nansen continued this work for a further two years until, in his final report to the Assembly in 1922, he was able to state that 427,886 prisoners had been repatriated to around 30 different countries. In paying tribute to his work, the responsible committee recorded that the story of his efforts "would contain tales of heroic endeavour worthy of those in the accounts of the crossing of Greenland and the great Arctic voyage." Even before this work was complete, Nansen was involved in a further humanitarian effort. On 1 September 1921, prompted by the British delegate Philip Noel-Baker, he accepted the post of the League's High Commissioner for Refugees. His main brief was the resettlement of around two million Russian refugees displaced by the upheavals of the Russian Revolution. At the same time he tried to tackle the urgent problem of famine in Russia; following a widespread failure of crops around 30 million people were threatened with starvation and death. 
Despite Nansen's pleas on behalf of the starving, Russia's revolutionary government was feared and distrusted internationally, and the League was reluctant to come to its people's aid. Nansen had to rely largely on fundraising from private organisations, and his efforts met with limited success. Later he was to express himself bitterly on the matter: "There was in various transatlantic countries such an abundance of maize, that the farmers had to burn it as fuel in their railway engines. At the same time the ships in Europe were idle, for there were no cargoes. Simultaneously there were thousands, nay millions of unemployed. All this, while thirty million people in the Volga region—not far away and easily reached by our ships—were allowed to starve and die." A major problem impeding Nansen's work on behalf of refugees was that most of them lacked documentary proof of identity or nationality. Lacking legal status and identity papers in their country of refuge, they were unable to go anywhere else. To overcome this, Nansen devised a document that became known as the "Nansen passport", a form of identity for stateless persons that was in time recognised by more than 50 governments, and which allowed refugees to cross borders legally. Among the more distinguished holders of Nansen passports were the artist Marc Chagall, the composer Igor Stravinsky, and the dancer Anna Pavlova. Although the passport was created initially for refugees from Russia, it was extended to cover other groups. After the Greco-Turkish War of 1919–1922, Nansen travelled to Istanbul to negotiate the resettlement of hundreds of thousands of refugees, mainly ethnic Greeks who had fled from Turkey after the defeat of the Greek Army. The impoverished Greek state was unable to take them in, and so Nansen devised a scheme for a population exchange whereby half a million Turks in Greece were returned to Turkey, with full financial compensation, while further loans facilitated the absorption of the refugee Greeks into their homeland. Despite some controversy over the principle of a population exchange, the plan was implemented successfully over a period of several years. In November 1922, while attending the Conference of Lausanne, Nansen learned that he had been awarded the Nobel Peace Prize for 1922. The citation referred to "his work for the repatriation of the prisoners of war, his work for the Russian refugees, his work to bring succour to the millions of Russians afflicted by famine, and finally his present work for the refugees in Asia Minor and Thrace". Nansen donated the prize money to international relief efforts. From 1925 onwards he spent much time trying to help Armenian refugees, victims of the Armenian Genocide at the hands of the Ottoman Empire during the First World War and further ill-treatment thereafter. His goal was the establishment of a national home for these refugees, within the borders of Soviet Armenia. His main assistant in this endeavour was Vidkun Quisling, the future Nazi collaborator and head of a Norwegian puppet government during the Second World War. After visiting the region, Nansen presented the Assembly with a modest plan for the irrigation of 36,000 hectares (360 km² or 139 square miles) on which 15,000 refugees could be settled. The plan ultimately failed, because even with Nansen's unremitting advocacy the money to finance the scheme was not forthcoming. Despite this failure, his reputation among the Armenian people remains high.
In 1923 Nansen wrote the book "Armenia and the Near East", which describes his sympathy for the plight of the Armenians in the wake of Armenia's loss of independence to the Soviet Union. The book was translated into many languages, including Norwegian, English, French, German, Russian and Armenian. After his visit to Armenia, Nansen wrote two further books: "Gjennem Armenia" ("Across Armenia"), published in 1927, and "Gjennem Kaukasus til Volga" ("Through the Caucasus to the Volga"). Within the League's Assembly, Nansen spoke out on many issues besides those related to refugees. He believed that the Assembly gave the smaller countries such as Norway a "unique opportunity for speaking in the councils of the world." He believed that the extent of the League's success in reducing armaments would be the greatest test of its credibility. He was a signatory to the Slavery Convention of 25 September 1926, which sought to outlaw the use of forced labour. He supported a settlement of the post-war reparations issue, and championed Germany's membership of the League, which was granted in September 1926 after intensive preparatory work by Nansen. On 17 January 1919 Nansen married Sigrun Munthe, a long-time friend with whom he had had a love affair in 1905, while Eva was still alive. The marriage was resented by the Nansen children, and proved unhappy; an acquaintance writing of them in the 1920s said Nansen appeared unbearably miserable and Sigrun steeped in hate. Nansen's League of Nations commitments through the 1920s meant that he was mostly absent from Norway, and was able to devote little time to scientific work. Nevertheless, he continued to publish occasional papers. He entertained the hope that he might travel to the North Pole by airship, but could not raise sufficient funding. In any event he was forestalled in this ambition by Amundsen, who flew over the pole in Umberto Nobile's airship "Norge" in May 1926. Two years later Nansen broadcast a memorial oration to Amundsen, who had disappeared in the Arctic while organising a rescue party for Nobile, whose airship had crashed during a second polar voyage. Nansen said of Amundsen: "He found an unknown grave under the clear sky of the icy world, with the whirring of the wings of eternity through space." In 1926 Nansen was elected Rector of the University of St Andrews in Scotland, the first foreigner to hold this largely honorary position. He used the occasion of his inaugural address to review his life and philosophy, and to deliver a call to the youth of the next generation. He ended: "We all have a Land of Beyond to seek in our life—what more can we ask? Our part is to find the trail that leads to it. A long trail, a hard trail, maybe; but the call comes to us, and we have to go. Rooted deep in the nature of every one of us is the spirit of adventure, the call of the wild—vibrating under all our actions, making life deeper and higher and nobler." Nansen largely avoided involvement in domestic Norwegian politics, but in 1924 he was persuaded by the long-retired former Prime Minister Christian Michelsen to take part in a new anti-communist political grouping, the Fatherland League. There were fears in Norway that should the Marxist-oriented Labour Party gain power it would introduce a revolutionary programme. At the inaugural rally of the League in Oslo (as Christiania had now been renamed), Nansen declared: "To talk of the right of revolution in a society with full civil liberty, universal suffrage, equal treatment for everyone ... [is] idiotic nonsense."
Following continued turmoil among the centre-right parties, an independent petition in 1926 gained some momentum, proposing that Nansen head a centre-right national unity government on a balanced-budget programme, an idea he did not reject. He was the headline speaker at the Fatherland League's largest single rally, attended by 15,000 people in Tønsberg in 1928. In 1929 he went on his final tour for the Fatherland League on the ship "Stella Polaris", giving speeches from Bergen to Hammerfest. In between his various duties and responsibilities, Nansen had continued to take skiing holidays when he could. In February 1930, aged 68, he took a short break in the mountains with two old friends, who noted that Nansen was slower than usual and appeared to tire easily. On his return to Oslo he was laid up for several months, with influenza and later phlebitis, and was visited on his sickbed by King Haakon VII. Although Nansen himself was an atheist, he was a close friend of a clergyman named Wilhelm. Nansen died of a heart attack, at home, on 13 May 1930. He was given a non-religious state funeral before cremation, after which his ashes were laid under a tree at Polhøgda. Nansen's daughter Liv recorded that there were no speeches, just music: Schubert's "Death and the Maiden", which Eva used to sing. Among the many tributes paid to him subsequently was that of Lord Robert Cecil, a fellow League of Nations delegate, who spoke of the range of Nansen's work, done with no regard for his own interests or health: "Every good cause had his support. He was a fearless peacemaker, a friend of justice, an advocate always for the weak and suffering." Nansen had been a pioneer and innovator in many fields. As a young man he embraced the revolution in skiing methods that transformed it from a means of winter travel to a universal sport, and quickly became one of Norway's leading skiers. He was later able to apply this expertise to the problems of polar travel, in both his Greenland and his "Fram" expeditions. He invented the "Nansen sledge" with broad, ski-like runners, the "Nansen cooker" to improve the heat efficiency of the standard spirit stoves then in use, and the layer principle in polar clothing, whereby the traditionally heavy, awkward garments were replaced by layers of lightweight material. In science, Nansen is recognised both as one of the founders of modern neurology and as a significant contributor to early oceanographical science, in particular for his work in establishing the Central Oceanographic Laboratory in Christiania. Through his work on behalf of the League of Nations, Nansen helped to establish the principle of international responsibility for refugees. Immediately after his death the League set up the Nansen International Office for Refugees, a semi-autonomous body under the League's authority, to continue his work. The Nansen Office faced great difficulties, in part arising from the large numbers of refugees from the European dictatorships during the 1930s. Nevertheless, it secured the agreement of 14 countries (including a reluctant Great Britain) to the Refugee Convention of 1933. It also helped to repatriate 10,000 Armenians to Yerevan in Soviet Armenia, and to find homes for a further 40,000 in Syria and Lebanon. In 1938, the year in which it was superseded by a wider-ranging body, the Nansen Office was awarded the Nobel Peace Prize.
In 1954 the League's successor body, the United Nations, established the Nansen Medal, later named the Nansen Refugee Award, given annually by the United Nations High Commissioner for Refugees to an individual, group or organisation "for outstanding work on behalf of the forcibly displaced". In his lifetime and thereafter, Nansen received honours and recognition from many countries. Nansen Ski Club, the oldest continuously operated ski club in the United States, located in Berlin, New Hampshire, is named in his honour. Numerous geographical features are named after him: the Nansen Basin and the Nansen-Gakkel Ridge in the Arctic Ocean; Mount Nansen in the Yukon region of Canada; Mount Nansen, Mount Fridtjof Nansen and Nansen Island, all in Antarctica, as well as Nansen Island in the Kara Sea and Nansen Island in Franz Josef Land. Polhøgda is now home to the Fridtjof Nansen Institute, an independent foundation which engages in research on environmental, energy and resource management politics. In 2004 the Royal Norwegian Navy launched the first of a series of five "Fridtjof Nansen"-class frigates. The lead ship of the group is HNoMS "Fridtjof Nansen"; two others are named after Roald Amundsen and Otto Sverdrup. In the ocean, Nansen is commemorated by "Nansenia", a genus of small mesopelagic fishes of the family Microstomatidae. In space, he is commemorated by asteroid 853 Nansenia. In 1964, the IAU adopted the name Nansen for an impact crater near the lunar north pole, after the Norwegian explorer.
Germany Germany, officially the Federal Republic of Germany, is a sovereign state in central-western Europe. It includes 16 constituent states, covers an area of approximately 357,000 square kilometres, and has a largely temperate seasonal climate. With nearly 83 million inhabitants, Germany is the most populous member state of the European Union. Germany's capital and largest metropolis is Berlin, while its largest conurbation is the Ruhr, with its main centres of Dortmund and Essen. The country's other major cities are Hamburg, Munich, Cologne, Frankfurt, Stuttgart, Düsseldorf, Leipzig, Bremen, Dresden, Hannover, and Nuremberg. Various Germanic tribes have inhabited the northern parts of modern Germany since classical antiquity. A region named Germania was documented before 100 AD. During the Migration Period, the Germanic tribes expanded southward. Beginning in the 10th century, German territories formed a central part of the Holy Roman Empire. During the 16th century, northern German regions became the centre of the Protestant Reformation. After the collapse of the Holy Roman Empire, the German Confederation was formed in 1815. The German revolutions of 1848–49 resulted in the Frankfurt Parliament establishing major democratic rights. In 1871, Germany became a nation state when most of the German states unified into the Prussian-dominated German Empire. After World War I and the revolution of 1918–19, the Empire was replaced by the parliamentary Weimar Republic. The Nazi seizure of power in 1933 led to the establishment of a dictatorship, World War II and the Holocaust. After the end of World War II in Europe and a period of Allied occupation, two German states were founded: West Germany, formed from the American, British, and French occupation zones, and East Germany, formed from the Soviet occupation zone. Following the Revolutions of 1989 that ended communist rule in Central and Eastern Europe, the country was reunified on 3 October 1990.
In the 21st century, Germany is a great power with a strong economy; it has the world's fourth-largest economy by nominal GDP, and the fifth-largest by PPP. As a global leader in several industrial and technological sectors, it is the world's third-largest exporter and third-largest importer of goods. A developed country with a very high standard of living, it upholds a social security and universal health care system, environmental protection, and a tuition-free university education. The Federal Republic of Germany was a founding member of the European Economic Community in 1957 and the European Union in 1993. It is part of the Schengen Area and became a co-founder of the Eurozone in 1999. Germany is a member of the United Nations, NATO, the G7, the G20, and the OECD. Known for its rich cultural history, Germany has continuously been home to influential and successful artists, philosophers, musicians, sportspeople, entrepreneurs, scientists, engineers, and inventors. The English word "Germany" derives from the Latin "Germania", which came into use after Julius Caesar adopted it for the peoples east of the Rhine. The German term "Deutschland", originally "diutisciu land" ("the German lands"), is derived from "deutsch" (compare "Dutch"), descended from Old High German "diutisc" "popular" (i.e. belonging to the "diot" or "diota", "people"), originally used to distinguish the language of the common people from Latin and its Romance descendants. This in turn descends from Proto-Germanic "*þiudiskaz" "popular" (see also the Latinised form "Theodiscus"), derived from "*þeudō", descended from Proto-Indo-European *"tewtéh₂" "people", from which the word "Teutons" also originates. The discovery of the Mauer 1 mandible shows that ancient humans were present in Germany at least 600,000 years ago. The oldest complete hunting weapons found anywhere in the world were discovered in a coal mine in Schöningen, where three 380,000-year-old wooden javelins were unearthed. The Neander Valley was the location where the first ever non-modern human fossil was discovered; the new species of human was called the Neanderthal. The Neanderthal 1 fossils are known to be 40,000 years old. Evidence of modern humans, similarly dated, has been found in caves in the Swabian Jura near Ulm. The finds include 42,000-year-old bird bone and mammoth ivory flutes, which are the oldest musical instruments ever found, the 40,000-year-old Ice Age Lion Man, which is the oldest uncontested figurative art ever discovered, and the 35,000-year-old Venus of Hohle Fels, which is the oldest uncontested human figurative art ever discovered. The Nebra sky disk is a bronze artefact created during the European Bronze Age and attributed to a site near Nebra, Saxony-Anhalt. It is part of UNESCO's Memory of the World Programme. The Germanic tribes are thought to date from the Nordic Bronze Age or the Pre-Roman Iron Age. From southern Scandinavia and north Germany, they expanded south, east and west from the 1st century BC, coming into contact with the Celtic tribes of Gaul as well as Iranian, Baltic, and Slavic tribes in Central and Eastern Europe. Under Augustus, Rome began to invade Germania (an area extending roughly from the Rhine to the Ural Mountains). In 9 AD, three Roman legions led by Varus were defeated by the Cheruscan leader Arminius. By 100 AD, when Tacitus wrote "Germania", Germanic tribes had settled along the Rhine and the Danube (the Limes Germanicus), occupying most of the area of modern Germany.
However, Austria, Baden-Württemberg, southern Bavaria, southern Hesse and the western Rhineland had been conquered and incorporated into Roman provinces: Noricum, Raetia, Germania Superior, and Germania Inferior. In the 3rd century a number of large West Germanic tribes emerged: Alemanni, Franks, Chatti, Saxons, Frisii, Sicambri, and Thuringii. Around 260, the Germanic peoples broke into Roman-controlled lands. After the invasion of the Huns in 375, and with the decline of Rome from 395, Germanic tribes moved farther southwest. Simultaneously several large tribes formed in what is now Germany and displaced or absorbed smaller Germanic tribes. Large areas known since the Merovingian period as Austrasia, Neustria, and Aquitaine were conquered by the Franks, who established the Frankish Kingdom, and pushed farther east to subjugate Saxony and Bavaria. Areas of what is today the eastern part of Germany were inhabited by Western Slavic tribes of Sorbs, Veleti and the Obotritic confederation. In 800, the Frankish king Charlemagne was crowned emperor and founded the Carolingian Empire, which was later divided in 843 among his heirs. Following the break-up of the Frankish Realm, for 900 years the history of Germany was intertwined with the history of the Holy Roman Empire, which subsequently emerged from the eastern portion of Charlemagne's original empire. The territory initially known as East Francia stretched from the Rhine in the west to the Elbe River in the east and from the North Sea to the Alps. The Ottonian rulers (919–1024) consolidated several major duchies and the German king Otto I was crowned Holy Roman Emperor of these regions in 962. In 996 Gregory V became the first German Pope, appointed by his cousin Otto III, whom he shortly after crowned Holy Roman Emperor. The Holy Roman Empire absorbed northern Italy and Burgundy under the reign of the Salian emperors (1024–1125), although the emperors lost power through the Investiture Controversy. In the 12th century, under the Hohenstaufen emperors (1138–1254), German princes increased their influence further south and east into territories inhabited by Slavs; they encouraged German settlement in these areas, called the eastern settlement movement ("Ostsiedlung"). Members of the Hanseatic League, which included mostly north German cities and towns, prospered in the expansion of trade. In the south, the Greater Ravensburg Trade Corporation ("Große Ravensburger Handelsgesellschaft") served a similar function. The edict of the Golden Bull issued in 1356 by Emperor Charles IV provided the basic constitutional structure of the Empire and codified the election of the emperor by seven prince-electors who ruled some of the most powerful principalities and archbishoprics. Population declined in the first half of the 14th century, starting with the Great Famine in 1315, followed by the Black Death of 1348–50. Despite the decline, however, German artists, engineers, and scientists developed a wide array of techniques similar to those used by the Italian artists and designers of the time who flourished in such merchant city-states as Venice, Florence and Genoa. Artistic and cultural centres throughout the German states produced such artists as the Augsburg painters Hans Holbein and his son, and Albrecht Dürer. Johannes Gutenberg introduced moveable-type printing to Europe, a development that laid the basis for the spread of learning to the masses.
In 1517, the Wittenberg monk Martin Luther publicised The Ninety-Five Theses, challenging the Roman Catholic Church and initiating the Protestant Reformation. In 1555, the Peace of Augsburg established Lutheranism as an acceptable alternative to Catholicism, but also decreed that the faith of the prince was to be the faith of his subjects, a principle called Cuius regio, eius religio. The agreement at Augsburg failed to address other religious creeds: for example, the Reformed faith was still considered a heresy and the principle did not address the possible conversion of an ecclesiastic ruler, such as happened in the Electorate of Cologne in 1583. From the Cologne War until the end of the Thirty Years' War (1618–1648), religious conflict devastated German lands. The latter reduced the overall population of the German states by about 30 per cent, and in some places, up to 80 per cent. The Peace of Westphalia ended religious warfare among the German states. German rulers were able to choose either Roman Catholicism, Lutheranism or the Reformed faith as their official religion after 1648. In the 18th century, the Holy Roman Empire consisted of approximately 1,800 territories. The elaborate legal system initiated by a series of Imperial Reforms (approximately 1450–1555) created the Imperial Estates and provided for considerable local autonomy among ecclesiastical, secular, and hereditary states, reflected in the Imperial Diet. The House of Habsburg held the imperial crown from 1438 until the death of Charles VI in 1740. Having no male heirs, he had convinced the Electors to retain Habsburg hegemony in the office of the emperor by agreeing to the Pragmatic Sanction. This was finally settled through the War of the Austrian Succession; under the Treaty of Aix-la-Chapelle, Charles VI's daughter Maria Theresa ruled the Empire as Empress Consort when her husband, Francis I, became Holy Roman Emperor. From 1740, the dualism between the Austrian Habsburg Monarchy and the Kingdom of Prussia dominated German history. In 1772, then again in 1793 and 1795, the two dominant German states of Prussia and Austria, along with the Russian Empire, agreed to the Partitions of Poland, dividing among themselves the lands of the Polish–Lithuanian Commonwealth. As a result of the partitions, millions of Polish-speaking inhabitants fell under the rule of the two German monarchies. However, the annexed territories, though incorporated into the Kingdom of Prussia and the Habsburg Realm, were not legally considered a part of the Holy Roman Empire. During the period of the French Revolutionary Wars, along with the arrival of the Napoleonic era and the subsequent final meeting of the Imperial Diet, most of the secular Free Imperial Cities were annexed by dynastic territories; the ecclesiastical territories were secularised and annexed. In 1806 the "Imperium" was dissolved; German states, particularly the Rhineland states, fell under the influence of France. Until 1815, France, Russia, Prussia and the Habsburgs competed for hegemony in the German states during the Napoleonic Wars. Following the fall of Napoleon, the Congress of Vienna (convened in 1814) founded the German Confederation ("Deutscher Bund"), a loose league of 39 sovereign states. The appointment of the Emperor of Austria as the permanent president of the Confederation reflected the Congress's failure to accept Prussia's influence among the German states, and exacerbated the long-standing competition between the Hohenzollern and Habsburg interests.
Disagreement within restoration politics partly led to the rise of liberal movements, followed by new measures of repression by Austrian statesman Metternich. The "Zollverein", a tariff union, furthered economic unity in the German states. National and liberal ideals of the French Revolution gained increasing support among many, especially young, Germans. The Hambach Festival in May 1832 was a main event in support of German unity, freedom and democracy. In the light of a series of revolutionary movements in Europe, which established a republic in France, intellectuals and commoners started the Revolutions of 1848 in the German states. King Frederick William IV of Prussia was offered the title of Emperor, but with a loss of power; he rejected the crown and the proposed constitution, leading to a temporary setback for the movement. King William I appointed Otto von Bismarck as the new Minister President of Prussia in 1862. Bismarck successfully concluded war on Denmark in 1864, which promoted German over Danish interests in the Jutland peninsula. The subsequent (and decisive) Prussian victory in the Austro-Prussian War of 1866 enabled him to create the North German Confederation ("Norddeutscher Bund") which excluded Austria from the federation's affairs. After the French defeat in the Franco-Prussian War, the German princes proclaimed the founding of the German Empire in 1871 at Versailles, uniting all the scattered parts of Germany except Austria. Prussia was the dominant constituent state of the new empire; the Hohenzollern King of Prussia ruled as its concurrent Emperor, and Berlin became its capital. In the period following the unification of Germany, Bismarck's foreign policy as Chancellor of Germany under Emperor William I secured Germany's position as a great nation by forging alliances, isolating France by diplomatic means, and avoiding war. Under Wilhelm II, Germany, like other European powers, took an imperialistic course, leading to friction with neighbouring countries. Most alliances in which Germany had previously been involved were not renewed. This resulted in the creation of a dual alliance with the multinational realm of Austria-Hungary, promoting at least benevolent neutrality if not outright military support. Subsequently, the Triple Alliance of 1882 included Italy, completing a Central European geographic alliance that illustrated German, Austrian and Italian fears of incursions against them by France or Russia. Similarly, Britain, France and Russia also concluded alliances that would protect them against Habsburg interference with Russian interests in the Balkans or German interference against France. At the Berlin Conference in 1884, Germany claimed several colonies including German East Africa, German South West Africa, Togoland, and Kamerun. Later, Germany further expanded its colonial empire to include German New Guinea, German Micronesia and German Samoa in the Pacific, and Kiautschou Bay in China. In what became known as the "First Genocide of the Twentieth Century", between 1904 and 1907, the German colonial government in South West Africa (present-day Namibia) ordered the annihilation of the local Herero and Namaqua peoples, as a punitive measure for an uprising against German colonial rule.
In total, around 100,000 people—80% of the Herero and 50% of the Namaqua—perished from imprisonment in concentration camps, where the majority died of disease, abuse, and exhaustion, or from dehydration and starvation in the countryside after being deprived of food and water. The assassination of Austria's crown prince on 28 June 1914 provided the pretext for the Austrian Empire to attack Serbia and trigger World War I. After four years of warfare, in which approximately two million German soldiers were killed, a general armistice ended the fighting on 11 November, and German troops returned home. In the German Revolution (November 1918), Emperor Wilhelm II and all German ruling princes abdicated their positions and responsibilities. Germany's new political leadership signed the Treaty of Versailles in 1919. In this treaty, Germany, as part of the Central Powers, accepted defeat by the Allies in one of the bloodiest conflicts of all time. Germans perceived the treaty as humiliating and unjust and it was later seen by historians as influential in the rise of Adolf Hitler. After the defeat in the First World War, Germany lost around 13% of its European territory (areas predominantly inhabited by ethnic Polish, French and Danish populations, which were lost following the Greater Poland Uprising, the return of Alsace-Lorraine and the Schleswig plebiscites), and all of its colonial possessions in Africa and the South Sea. Germany was declared a republic at the beginning of the German Revolution in November 1918. On 11 August 1919 President Friedrich Ebert signed the democratic Weimar Constitution. In the subsequent struggle for power, the radical-left Communists seized power in Bavaria, but conservative elements in other parts of Germany attempted to overthrow the Republic in the Kapp Putsch. It was supported by parts of the "Reichswehr" (military) and other conservative, nationalistic and monarchist factions. After a tumultuous period of bloody street fighting in the major industrial centres, the occupation of the Ruhr by Belgian and French troops and the rise of inflation culminating in the hyperinflation of 1922–23, a debt restructuring plan and the creation of a new currency in 1924 ushered in the Golden Twenties, an era of increasing artistic innovation and liberal cultural life. Historians describe the period between 1924 and 1929 as one of "partial stabilisation." The worldwide Great Depression hit Germany in 1929. After the federal election of 1930, Chancellor Heinrich Brüning's government was enabled by President Paul von Hindenburg to act without parliamentary approval. Brüning's government pursued a policy of fiscal austerity and deflation which caused high unemployment of nearly 30% by 1932. The Nazi Party led by Adolf Hitler won the special federal election of 1932. After a series of unsuccessful cabinets, Hindenburg appointed Hitler as Chancellor of Germany on 30 January 1933. After the Reichstag fire, a decree abrogated basic civil rights and within weeks the first Nazi concentration camp at Dachau opened. The Enabling Act of 1933 gave Hitler unrestricted legislative power; subsequently, his government established a centralised totalitarian state, withdrew from the League of Nations following a national referendum, and began military rearmament. Using deficit spending, a government-sponsored programme for economic renewal focused on public works projects. 
In the public works projects of 1934, 1.7 million Germans were immediately put to work, which gave them an income and social benefits. The most famous of the projects was the high-speed roadway network, the "Reichsautobahn", known as the German autobahns. Other capital construction projects included hydroelectric facilities such as the Rur Dam, water supplies such as the Zillierbach Dam, and transportation hubs such as Zwickau Hauptbahnhof. Over the next five years, unemployment plummeted and average wages both per hour and per week rose. In 1935, the regime withdrew from the Treaty of Versailles and introduced the Nuremberg Laws, which targeted Jews and other minorities. Germany also reacquired control of the Saar in 1935, remilitarised the Rhineland in 1936, annexed Austria in 1938, annexed the Sudetenland in 1938 with the Munich Agreement and, in direct violation of the agreement, occupied Czechoslovakia with the proclamation of the Protectorate of Bohemia and Moravia in March 1939. Kristallnacht, or the "Night of Broken Glass", saw the burning of hundreds of synagogues, the destruction of thousands of Jewish businesses, and the arrest of around 30,000 Jewish men by Nazi forces inside Germany. Many Jewish women were arrested and jailed, and a curfew was placed on the Jewish people in Germany. In August 1939, Hitler's government negotiated and signed the Molotov–Ribbentrop Pact, which divided Eastern Europe into German and Soviet spheres of influence. Following the agreement, on 1 September 1939, Germany invaded Poland, marking the beginning of World War II. Two days later, on 3 September, after a British ultimatum to Germany to cease military operations was ignored, Britain and France declared war on Germany. In the spring of 1940, Germany conquered Denmark and Norway, the Netherlands, Belgium, Luxembourg, and France, forcing the French government to sign an armistice after German troops occupied most of the country. The British repelled German air attacks in the Battle of Britain in the same year. In 1941, German troops invaded Yugoslavia, Greece and the Soviet Union. By 1942, Germany and other Axis powers controlled most of continental Europe and North Africa, but following the Soviet Union's victory at the Battle of Stalingrad, the Allies' reconquest of North Africa and invasion of Italy in 1943, German forces suffered repeated military defeats. In June 1944, the Western Allies landed in France and the Soviets pushed into Eastern Europe. By late 1944, the Western Allies had entered Germany despite one final German counter-offensive in the Ardennes Forest. Following Hitler's suicide during the Battle of Berlin, German armed forces surrendered on 8 May 1945, ending World War II in Europe. After World War II, former members of the Nazi regime were tried for war crimes at the Nuremberg trials. In what later became known as the Holocaust, the German government persecuted minorities and used a network of concentration and death camps across Europe to conduct a genocide of what it considered to be inferior peoples. In total, over 10 million civilians were systematically murdered, including 6 million Jews, between 220,000 and 1,500,000 Romani, 275,000 persons with disabilities, thousands of Jehovah's Witnesses, thousands of homosexuals, and hundreds of thousands of members of the political and religious opposition from Germany and the occupied countries (Nacht und Nebel).
Nazi policies in the German-occupied countries resulted in the deaths of 2.7 million Poles, 1.3 million Ukrainians, and an estimated 2.8 million Soviet prisoners of war. In addition, the Nazi regime abducted approximately 12 million people from across German-occupied Europe for use as slave labour in German industry. German military war casualties have been estimated at 5.3 million, and around 900,000 German civilians died: 400,000 from Allied bombing and 500,000 in the course of the Soviet invasion from the east. Around 12 million ethnic Germans were expelled from across Eastern Europe. Germany lost roughly one-quarter of its pre-war territory. Strategic bombing and land warfare destroyed many cities and cultural heritage sites. After Germany surrendered, the Allies partitioned Berlin and Germany's remaining territory into four military occupation zones. The western sectors, controlled by France, the United Kingdom, and the United States, were merged on 23 May 1949 to form the Federal Republic of Germany ("Bundesrepublik Deutschland"); on 7 October 1949, the Soviet Zone became the German Democratic Republic ("Deutsche Demokratische Republik"). They were informally known as West Germany and East Germany. East Germany selected East Berlin as its capital, while West Germany chose Bonn as a provisional capital, to emphasise its stance that the two-state solution was an artificial and temporary status quo. West Germany was established as a federal parliamentary republic with a "social market economy". Starting in 1948 West Germany became a major recipient of reconstruction aid under the Marshall Plan and used this to rebuild its industry. Konrad Adenauer was elected the first Federal Chancellor ("Bundeskanzler") of Germany in 1949 and remained in office until 1963. Under his and Ludwig Erhard's leadership, the country enjoyed prolonged economic growth beginning in the early 1950s that became known as an "economic miracle" ("Wirtschaftswunder"). The Federal Republic of Germany joined NATO in 1955 and was a founding member of the European Economic Community in 1957. East Germany was an Eastern Bloc state under political and military control by the USSR via occupation forces and the Warsaw Pact. Although East Germany claimed to be a democracy, political power was exercised solely by leading members ("Politbüro") of the communist-controlled Socialist Unity Party of Germany, supported by the Stasi, an immense secret service controlling many aspects of society. A Soviet-style command economy was set up and the GDR later became a Comecon state. While East German propaganda was based on the benefits of the GDR's social programmes and the alleged constant threat of a West German invasion, many of its citizens looked to the West for freedom and prosperity. The Berlin Wall, rapidly built on 13 August 1961, prevented East German citizens from escaping to West Germany, eventually becoming a symbol of the Cold War. Ronald Reagan's "Mr. Gorbachev, tear down this wall!" speech at the Wall on 12 June 1987 influenced public opinion, echoing John F. Kennedy's famous "Ich bin ein Berliner" speech of 26 June 1963. The fall of the Wall in 1989 became a symbol of the fall of communism, the dissolution of the Soviet Union, German reunification and "Die Wende". Tensions between East and West Germany were reduced in the early 1970s by Chancellor Willy Brandt's "Ostpolitik".
In summer 1989, Hungary decided to dismantle the Iron Curtain and open the borders, causing the emigration of thousands of East Germans to West Germany via Hungary. This had devastating effects on the GDR, where regular mass demonstrations received increasing support. The East German authorities eased the border restrictions, allowing East German citizens to travel to the West; originally intended to help retain East Germany as a state, the opening of the border actually led to an acceleration of the "Wende" reform process. This culminated in the "Two Plus Four Treaty" a year later on 12 September 1990, under which the four occupying powers renounced their rights under the Instrument of Surrender, and Germany regained full sovereignty. This permitted German reunification on 3 October 1990, with the accession of the five re-established states of the former GDR. The united Germany is considered to be the enlarged continuation of the Federal Republic of Germany and not a successor state. As such, it retained all of West Germany's memberships in international organisations. Based on the Berlin/Bonn Act, adopted in 1994, Berlin once again became the capital of the reunified Germany, while Bonn obtained the unique status of a "Bundesstadt" (federal city) retaining some federal ministries. The relocation of the government was completed in 1999. Following the 1998 elections, SPD politician Gerhard Schröder became the first Chancellor of a red–green coalition with the Alliance '90/The Greens party. Among the major projects of the two Schröder legislatures was the Agenda 2010 to reform the labour market to become more flexible and reduce unemployment. The modernisation and integration of the eastern German economy is a long-term process scheduled to last until the year 2019, with annual transfers from west to east amounting to roughly $80 billion. Since reunification, Germany has taken a more active role in the European Union. Together with its European partners Germany signed the Maastricht Treaty in 1992, established the Eurozone in 1999, and signed the Lisbon Treaty in 2007. Germany sent a peacekeeping force to secure stability in the Balkans and sent a force of German troops to Afghanistan as part of a NATO effort to provide security in that country after the ousting of the Taliban. These deployments were controversial since Germany is bound by domestic law only to deploy troops for defence roles. In the 2005 elections, Angela Merkel became the first female chancellor of Germany as the leader of a grand coalition. In 2009 the German government approved a €50 billion economic stimulus plan to protect several sectors from a downturn. In 2009, a liberal-conservative coalition under Merkel assumed leadership of the country. In 2013, a grand coalition was established in a Third Merkel cabinet. Among the major German political projects of the early 21st century are the advancement of European integration, the energy transition ("Energiewende") for a sustainable energy supply, the "Debt Brake" for balanced budgets, measures to increase the fertility rate significantly (pronatalism), and high-tech strategies for the future transition of the German economy, summarised as Industry 4.0. Germany was affected by the European migrant crisis in 2015 as it became the final destination of choice for many asylum seekers from Africa and the Middle East entering the EU. 
The country took in over a million refugees and migrants and developed a quota system which redistributed migrants around its federal states based on their tax income and existing population density. Germany is in Western and Central Europe, bordered by Denmark to the north, Poland and the Czech Republic to the east, Austria to the southeast, Switzerland to the south-southwest, France, Luxembourg and Belgium to the west, and the Netherlands to the northwest. It lies mostly between latitudes 47° and 55° N and longitudes 5° and 16° E. Germany is also bordered by the North Sea and, at the north-northeast, by the Baltic Sea. With Switzerland and Austria, Germany also shares a border on the freshwater Lake Constance, the third largest lake in Central Europe. German territory covers approximately 357,000 square kilometres of land and inland water. It is the seventh largest country by area in Europe and the 64th largest in the world. Elevation ranges from the mountains of the Alps (highest point: the Zugspitze at 2,962 metres) in the south to the shores of the North Sea ("Nordsee") in the northwest and the Baltic Sea ("Ostsee") in the northeast. The forested uplands of central Germany and the lowlands of northern Germany (lowest point: Wilstermarsch at 3.54 metres below sea level) are traversed by such major rivers as the Rhine, Danube and Elbe. Germany's alpine glaciers are experiencing deglaciation. Significant natural resources include iron ore, coal, potash, timber, lignite, uranium, copper, natural gas, salt, nickel, arable land and water. Most of Germany has a temperate seasonal climate dominated by humid westerly winds. The country is situated between the oceanic Western European and the continental Eastern European climates. The climate is moderated by the North Atlantic Drift, the northern extension of the Gulf Stream. This warmer water affects the areas bordering the North Sea; consequently in the northwest and the north the climate is oceanic. Germany gets an average of about 800 mm of precipitation per year; there is no consistent dry season. Winters are cool and summers tend to be warm: temperatures can exceed 30 °C. The east has a more continental climate: winters can be very cold and summers very warm, and longer dry periods can occur. Central and southern Germany are transition regions which vary from moderately oceanic to continental. In addition to the maritime and continental climates that predominate over most of the country, the Alpine regions in the extreme south and, to a lesser degree, some areas of the Central German Uplands have a mountain climate, with lower temperatures and more precipitation. Though the German climate is rarely extreme, there are occasional spikes of cold or heat. Winter temperatures can sometimes drop to double-digit negative temperatures (below −10 °C) for a few days in a row. Conversely, summer can see periods of very high temperatures for a week or two. The recorded extremes are a maximum of 40.3 °C (July 2015, in Kitzingen) and a minimum of −37.8 °C (February 1929, in Pfaffenhofen an der Ilm). The territory of Germany can be subdivided into two ecoregions: European-Mediterranean montane mixed forests and Northeast-Atlantic shelf marine. The majority of Germany is covered by either arable land (34%) or forest and woodland (30.1%); only 13.4% of the area consists of permanent pastures, and 11.8% is covered by settlements and streets. Plants and animals include those generally common to Central Europe. Beeches, oaks, and other deciduous trees constitute one-third of the forests; conifers are increasing as a result of reforestation.
Spruce and fir trees predominate in the upper mountains, while pine and larch are found in sandy soil. There are many species of ferns, flowers, fungi, and mosses. Wild animals include roe deer, wild boar, mouflon (a subspecies of wild sheep), fox, badger, hare, and small numbers of the Eurasian beaver. The blue cornflower was once a German national symbol. The 16 national parks in Germany include the Jasmund National Park, the Vorpommern Lagoon Area National Park, the Müritz National Park, the Wadden Sea National Parks, the Harz National Park, the Hainich National Park, the Black Forest National Park, the Saxon Switzerland National Park, the Bavarian Forest National Park and the Berchtesgaden National Park. In addition, there are 15 Biosphere Reserves, as well as 98 nature parks. More than 400 registered zoos and animal parks operate in Germany, which is believed to be the largest number in any country. The Berlin Zoo, opened in 1844, is the oldest zoo in Germany, and presents the most comprehensive collection of species in the world. Germany has a number of large cities. There are 11 officially recognised metropolitan regions in Germany. Thirty-four cities have been identified as regiopolises. The largest conurbation is the Rhine-Ruhr region (11.7 million inhabitants), including Düsseldorf (the capital of North Rhine-Westphalia), Cologne, Bonn, Dortmund, Essen, Duisburg, and Bochum. Germany is a federal, parliamentary, representative democratic republic. The German political system operates under a framework laid out in the 1949 constitutional document known as the "Grundgesetz" (Basic Law). Amendments generally require a two-thirds majority of both chambers of parliament; the fundamental principles of the constitution, as expressed in the articles guaranteeing human dignity, the separation of powers, the federal structure, and the rule of law, are valid in perpetuity. The president, Frank-Walter Steinmeier (19 March 2017–present), is the head of state and invested primarily with representative responsibilities and powers. He is elected by the "Bundesversammlung" (federal convention), an institution consisting of the members of the "Bundestag" and an equal number of state delegates. The second-highest official in the German order of precedence is the "Bundestagspräsident" (President of the "Bundestag"), who is elected by the "Bundestag" and responsible for overseeing the daily sessions of the body. The third-highest official and the head of government is the Chancellor, who is appointed by the "Bundespräsident" after being elected by the "Bundestag". The chancellor, Angela Merkel (22 November 2005–present), is the head of government and exercises executive power through her Cabinet, similar to the role of a Prime Minister in other parliamentary democracies. Federal legislative power is vested in the parliament consisting of the "Bundestag" (Federal Diet) and "Bundesrat" (Federal Council), which together form the legislative body. The "Bundestag" is elected through direct elections under mixed-member proportional representation. The members of the "Bundesrat" represent the governments of the sixteen federated states and are members of the state cabinets. Since 1949, the party system has been dominated by the Christian Democratic Union and the Social Democratic Party of Germany. So far every chancellor has been a member of one of these parties.
However, the smaller liberal Free Democratic Party (in parliament from 1949 to 2013) and the Alliance '90/The Greens (in parliament since 1983) have also played important roles. Since 2005, the left-wing populist party The Left, formed through the merger of two former parties, has been a staple in the German "Bundestag", though it has never been part of the federal government. In the 2017 German federal election, the right-wing populist Alternative for Germany gained enough votes to attain representation in the parliament for the first time. Germany's debt-to-GDP ratio peaked at 80.3% in 2010 and has decreased since then. According to Eurostat, the government gross debt of Germany amounted to €2,152.0 billion, or 71.9% of its GDP, in 2015. The federal government achieved a budget surplus of €12.1 billion ($13.1 billion) in 2015. Germany's credit rating from the agencies Standard & Poor's, Moody's and Fitch Ratings stood at the highest possible rating, "AAA", with a stable outlook in 2016. Germany has a civil law system based on Roman law with some references to Germanic law. The "Bundesverfassungsgericht" (Federal Constitutional Court) is the German Supreme Court responsible for constitutional matters, with power of judicial review. Germany's supreme court system, called "Oberste Gerichtshöfe des Bundes", is specialised: for civil and criminal cases, the highest court of appeal is the inquisitorial Federal Court of Justice, and for other affairs the courts are the Federal Labour Court, the Federal Social Court, the Federal Finance Court and the Federal Administrative Court. Criminal and private laws are codified on the national level in the "Strafgesetzbuch" and the "Bürgerliches Gesetzbuch" respectively. The German penal system seeks the rehabilitation of the criminal and the protection of the public. Except for petty crimes, which are tried before a single professional judge, and serious political crimes, all charges are tried before mixed tribunals on which lay judges ("Schöffen") sit side by side with professional judges. Many of the fundamental matters of administrative law remain in the jurisdiction of the states. Germany has a low murder rate, with 0.9 murders per 100,000 inhabitants in 2014. Germany comprises sixteen federal states, which are collectively referred to as "Bundesländer". Each state has its own state constitution and is largely autonomous in regard to its internal organisation. Two of the states are city-states consisting of just one city: Berlin and Hamburg. The state of Bremen consists of two cities that are separated from each other by the state of Lower Saxony: Bremen and Bremerhaven. Because of the differences in size and population, the subdivisions of the states vary. For regional administrative purposes, four states, namely Baden-Württemberg, Bavaria, Hesse and North Rhine-Westphalia, consist of a total of 19 Government Districts ("Regierungsbezirke"). Germany is divided into 401 districts ("Kreise") at a municipal level; these consist of 294 rural districts and 107 urban districts. Germany has a network of 227 diplomatic missions abroad and maintains relations with more than 190 countries. Germany is the largest contributor to the budget of the European Union (providing 20%) and the third largest contributor to the UN (providing 8%). Germany is a member of NATO, the OECD, the G8, the G20, the World Bank and the IMF.
It has played an influential role in the European Union since its inception and has maintained a strong alliance with France and all neighbouring countries since 1990. Germany promotes the creation of a more unified European political, economic and security apparatus. The development policy of Germany is an independent area of foreign policy. It is formulated by the Federal Ministry for Economic Cooperation and Development and carried out by the implementing organisations. The German government sees development policy as a joint responsibility of the international community. It was the world's third biggest aid donor in 2009 after the United States and France. In 1999, Chancellor Gerhard Schröder's government defined a new basis for German foreign policy by taking part in the NATO decisions surrounding the Kosovo War and by sending German troops into combat for the first time since 1945. The governments of Germany and the United States are close political allies. Cultural ties and economic interests have crafted a bond between the two countries, resulting in Atlanticism. Germany's military, the "Bundeswehr", is organised into "Heer" (Army and special forces KSK), "Marine" (Navy), "Luftwaffe" (Air Force), "Bundeswehr Joint Medical Service" and "Streitkräftebasis" (Joint Support Service) branches. In absolute terms, German military expenditure is the 9th highest in the world. In 2015, military spending was €32.9 billion, about 1.2% of the country's GDP, well below the NATO target of 2%. In peacetime, the Bundeswehr is commanded by the Minister of Defence. In a state of defence, the Chancellor would become commander-in-chief of the "Bundeswehr". The role of the "Bundeswehr" is described in the Constitution of Germany as defensive only. However, after a 1994 ruling of the Federal Constitutional Court, the term "defence" has been defined to include not only protection of Germany's borders but also crisis reaction and conflict prevention, or, more broadly, guarding the security of Germany anywhere in the world. The German military has about 3,600 troops stationed in foreign countries as part of international peacekeeping forces, including about 1,200 supporting operations against Daesh, 980 in the NATO-led Resolute Support Mission in Afghanistan, and 800 in Kosovo. Until 2011, military service was compulsory for men at age 18, and conscripts served six-month tours of duty; conscientious objectors could instead opt for an equal length of "Zivildienst" (civilian service), or a six-year commitment to (voluntary) emergency services like a fire department or the Red Cross. In 2011 conscription was officially suspended and replaced with a voluntary service. Germany has a social market economy with a highly skilled labour force, a large capital stock, a low level of corruption, and a high level of innovation. It is the world's third largest exporter of goods, and has the largest national economy in Europe, which is also the world's fourth largest by nominal GDP and the fifth largest by PPP. The service sector contributes approximately 71% of the total GDP (including information technology), industry 28%, and agriculture 1%. The unemployment rate published by Eurostat amounted to 4.7% in January 2015, the lowest rate of all 28 EU member states. At 7.1%, Germany also has the lowest youth unemployment rate of all EU member states. According to the OECD, Germany has one of the highest labour productivity levels in the world.
Germany is part of the European single market which represents more than 508 million consumers. Several domestic commercial policies are determined by agreements among European Union (EU) members and by EU legislation. Germany introduced the common European currency, the euro, in 2002. It is a member of the Eurozone, which represents around 340 million citizens. Its monetary policy is set by the European Central Bank, which is headquartered in Frankfurt, the financial centre of continental Europe. As the home of the modern car, Germany has an automotive industry regarded as one of the most competitive and innovative in the world, and the fourth largest by production. The top 10 exports of Germany are vehicles, machinery, chemical goods, electronic products, electrical equipment, pharmaceuticals, transport equipment, basic metals, food products, and rubber and plastics. Of the world's 500 largest stock-market-listed companies measured by revenue in 2014, the Fortune Global 500, 28 are headquartered in Germany. Thirty major Germany-based companies are included in the DAX, the prime German stock market index, which is operated by the Frankfurt Stock Exchange of Deutsche Börse. Well-known international brands include Mercedes-Benz, BMW, SAP, Volkswagen, Audi, Siemens, Allianz, Adidas, Porsche, Deutsche Bahn, Deutsche Bank and Bosch. Germany is recognised for its large portion of specialised small and medium enterprises, known as the "Mittelstand" model. More than 1,000 of these companies are global market leaders in their segment and are labelled hidden champions. Berlin developed into a thriving, cosmopolitan hub for startup companies and became a leading location for venture-capital-funded firms in the European Union. With its central position in Europe, Germany is a transport hub for the continent. Like those of its neighbours in Western Europe, Germany's road network is among the densest in the world. The motorway (Autobahn) network ranks as the third-largest worldwide in length and is known for its lack of a general speed limit. Germany has established a polycentric network of high-speed trains. The InterCityExpress or "ICE" network of the Deutsche Bahn serves major German cities as well as destinations in neighbouring countries at speeds of up to 300 km/h. The German railways are subsidised by the government, receiving €17.0 billion in 2014. The largest German airports are Frankfurt Airport and Munich Airport, both hubs of Lufthansa, while Air Berlin has hubs at Berlin Tegel and Düsseldorf. Other major airports include Berlin Schönefeld, Hamburg, Cologne/Bonn and Leipzig/Halle. The Port of Hamburg is one of the top twenty largest container ports in the world. Germany was the world's sixth-largest consumer of energy, and 60% of its primary energy was imported. In 2014, energy sources were: oil (35.0%); coal, including lignite (24.6%); natural gas (20.5%); nuclear (8.1%); hydro-electric and renewable sources (11.1%). The government and the nuclear power industry agreed to phase out all nuclear power plants by 2021. It also enforces energy conservation, green technologies and emission reduction activities, and aims to meet the country's electricity demands using 40% renewable sources by 2020. Germany is committed to the Paris Agreement and several other treaties promoting biodiversity, low emission standards, water management, and renewable energy commercialisation. 
The country's household recycling rate is among the highest in the world—at around 65%. Nevertheless, the country's total greenhouse gas emissions were the highest in the EU. The German energy transition ("Energiewende") is the recognised move to a sustainable economy by means of energy efficiency and renewable energy. Germany is a global leader in science and technology; its achievements in these fields have been significant. Research and development efforts form an integral part of the economy. The Nobel Prize has been awarded to 107 German laureates. Germany produces the second highest number of graduates in science and engineering (31%) after South Korea. At the beginning of the 20th century, German laureates had more awards than those of any other nation, especially in the sciences (physics, chemistry, and physiology or medicine). Notable German physicists before the 20th century include Hermann von Helmholtz, Joseph von Fraunhofer and Gabriel Daniel Fahrenheit, among others. Albert Einstein introduced the special relativity and general relativity theories for light and gravity in 1905 and 1915 respectively. Along with Max Planck, he was instrumental in the introduction of quantum mechanics, in which Werner Heisenberg and Max Born later made major contributions. Wilhelm Röntgen discovered X-rays. Otto Hahn was a pioneer in the field of radiochemistry and discovered nuclear fission, while Ferdinand Cohn and Robert Koch were founders of microbiology. Numerous mathematicians were born in Germany, including Carl Friedrich Gauss, David Hilbert, Bernhard Riemann, Gottfried Leibniz, Karl Weierstrass, Hermann Weyl, Felix Klein and Emmy Noether. Germany has been the home of many famous inventors and engineers, including Hans Geiger, the creator of the Geiger counter, and Konrad Zuse, who built the first fully automatic digital computer. Such German inventors, engineers and industrialists as Count Ferdinand von Zeppelin, Otto Lilienthal, Gottlieb Daimler, Rudolf Diesel, Hugo Junkers and Karl Benz helped shape modern automotive and air transportation technology. German institutions like the German Aerospace Center (DLR) are the largest contributor to ESA. Aerospace engineer Wernher von Braun developed the first space rocket at Peenemünde and later, as a prominent member of NASA, developed the Saturn V Moon rocket. Heinrich Rudolf Hertz's work in the domain of electromagnetic radiation was pivotal to the development of modern telecommunication. Research institutions in Germany include the Max Planck Society, the Helmholtz Association and the Fraunhofer Society. The Wendelstein 7-X in Greifswald, for instance, is a facility for research into fusion power. The Gottfried Wilhelm Leibniz Prize is granted to ten scientists and academics every year. With a maximum of €2.5 million per award, it is one of the most highly endowed research prizes in the world. Germany is the seventh most visited country in the world, with a total of 407 million overnight stays during 2012. This number includes 68.83 million nights by foreign visitors. In 2012, over 30.4 million international tourists arrived in Germany. Berlin has become the third most visited city destination in Europe. Additionally, more than 30% of Germans spend their holiday in their own country, with the biggest share going to Mecklenburg-Vorpommern. Domestic and international travel and tourism combined directly contribute over €43.2 billion to German GDP. 
Including indirect and induced impacts, the industry contributes 4.5% of German GDP and supports 2 million jobs (4.8% of total employment). Germany is well known for its diverse tourist routes, such as the Romantic Road, the Wine Route, the Castle Road, and the Avenue Road. The German Timber-Frame Road ("Deutsche Fachwerkstraße") connects towns with examples of these structures. Germany's most-visited landmarks include Neuschwanstein Castle, Cologne Cathedral, the Berlin Bundestag, the Hofbräuhaus in Munich, Heidelberg Castle, the Dresden Zwinger, the Fernsehturm Berlin and Aachen Cathedral. The Europa-Park near Freiburg is Europe's second most popular theme park resort. With a population of 80.2 million according to the 2011 census, rising to 81.5 million as of 30 June 2015 and to at least 81.9 million as of 31 December 2015, Germany is the most populous country in the European Union, the second most populous country in Europe after Russia, and ranks as the 16th most populous country in the world. Its population density stands at 227 inhabitants per square kilometre (588 per square mile). The overall life expectancy in Germany at birth is 80.19 years (77.93 years for males and 82.58 years for females). The fertility rate of 1.41 children born per woman (2011 estimates), or 8.33 births per 1,000 inhabitants, is one of the lowest in the world. Since the 1970s, Germany's death rate has exceeded its birth rate. However, Germany has witnessed increased birth and migration rates since the beginning of the 2010s, particularly a rise in the number of well-educated migrants. Four sizeable groups of people are referred to as "national minorities" because their ancestors have lived in their respective regions for centuries. There is a Danish minority (about 50,000) in the northernmost state of Schleswig-Holstein. The Sorbs, a Slavic population of about 60,000, are in the Lusatia region of Saxony and Brandenburg. The Roma and Sinti live throughout the federal territory, and the Frisians live on Schleswig-Holstein's western coast and in the north-western part of Lower Saxony. Approximately 5 million Germans live abroad. After the United States, Germany is the second most popular immigration destination in the world. About ten million of Germany's 82 million residents did not have German citizenship, making up 12% of the country's population. The majority of migrants live in western Germany, in particular in urban areas. The Federal Statistical Office classifies the citizens by immigrant background. In 2016, 22.5% of the country's residents, or more than 18.6 million people, were of immigrant or partially immigrant descent (including persons descending or partially descending from ethnic German repatriates). In 2015, 36% of children under 5 were of immigrant or partially immigrant descent. In the 2011 census, people with an immigrant background ("Personen mit Migrationshintergrund") were defined as all immigrants, including ethnic Germans, who came to the Federal Republic, or who had at least one parent settling there, after 1955. The largest group of people with an immigrant background is made up of returning ethnic Germans ("Aussiedler" and "Spätaussiedler"), followed by Turkish, European Union, and former Yugoslav citizens. In the 1960s and 1970s, the German governments invited "guest workers" (Gastarbeiter) to migrate to Germany to work in German industry. 
Many companies preferred to keep these workers employed in Germany after they had trained them, and Germany's immigrant population has steadily increased. In 2015, the Population Division of the United Nations Department of Economic and Social Affairs listed Germany as host to the second-highest number of international migrants worldwide, about 5% or 12 million of all 244 million migrants. Germany ranks 7th amongst EU countries and 37th globally in terms of the percentage of migrants in the country's population. The largest national group was from Turkey (2,859,000), followed by Poland (1,617,000), Russia (1,188,000), and Italy (764,000). Some 740,000 people have African origins, an increase of 46% since 2011. Since 1987, around 3 million ethnic Germans, mostly from the former Eastern Bloc countries, have exercised their right of return and emigrated to Germany. Upon its establishment in 1871, Germany was about two-thirds Protestant and one-third Roman Catholic, with a notable Jewish minority. Other faiths existed in the state, but never achieved the demographic significance and cultural impact of these three confessions. Germany lost nearly all of its Jewish minority during the Holocaust. Religious makeup changed gradually in the decades following 1945, with West Germany becoming more religiously diversified through immigration and East Germany becoming overwhelmingly irreligious through state policies. It continued to diversify after German reunification in 1990, with an accompanying substantial decline in religiosity throughout all of Germany and a contrasting increase in evangelical Protestants and Muslims. Geographically, Protestantism is concentrated in the northern, central and eastern parts of the country. These are mostly members of the EKD, which encompasses Lutheran, Reformed and administrative or confessional unions of both traditions dating back to the Prussian Union of 1817. Roman Catholicism is concentrated in the south and west. According to the 2011 German Census, Christianity is the largest religion in Germany, claiming 66.8% of the total population. Relative to the whole population, 31.7% declared themselves as Protestants, including members of the Evangelical Church in Germany (EKD) (30.8%) and the free churches (0.9%), and 31.2% declared themselves as Roman Catholics. Orthodox believers constituted 1.3%. Other religions accounted for 2.7%. According to the most recent data from 2016, the Catholic Church and the Evangelical Church claimed respectively 28.5% and 27.5% of the population. Both large churches have lost significant numbers of adherents in recent years. In 2011, 33% of Germans were not members of officially recognised religious associations with special status. Irreligion in Germany is strongest in the former East Germany, which used to be predominantly Protestant before the imposition of state atheism, and in major metropolitan areas. Islam is the second largest religion in the country. In the 2011 census, 1.9% of the census population (1.52 million people) gave their religion as Islam, but this figure is deemed unreliable because a disproportionate number of adherents of this religion (and other religions, such as Judaism) are likely to have made use of their right not to answer the question. Figures from Religionswissenschaftlicher Medien- und Informationsdienst suggest a figure of 4.4 to 4.7 million Muslims (around 5.5% of the population) in 2015. 
A study conducted by the Federal Office for Migration and Refugees found that between 2011 and 2015 the Muslim population rose by 1.2 million people, mostly due to immigration. Most of the Muslims are Sunnis and Alevites from Turkey, but there are small numbers of Shi'ites, Ahmadis and members of other denominations. Other religions comprising less than one per cent of Germany's population are Buddhism with 270,000 adherents, Judaism with 200,000 adherents, and Hinduism with some 100,000 adherents. All other religious communities in Germany have fewer than 50,000 adherents each. German is the official and predominant spoken language in Germany. Standard German is a West Germanic language and is closely related to and classified alongside Low German, Dutch, Afrikaans, Frisian and English. To a lesser extent, it is also related to the North Germanic languages. Most German vocabulary is derived from the Germanic branch of the Indo-European language family. Significant minorities of words are derived from Latin and Greek, with a smaller amount from French and most recently English (known as Denglisch). German is written using the Latin alphabet. German dialects, traditional local varieties traced back to the Germanic tribes, are distinguished from varieties of standard German by their lexicon, phonology, and syntax. German is one of 24 official and working languages of the European Union, and one of the three working languages of the European Commission. It is the most widely spoken first language in the European Union, with around 100 million native speakers. Recognised native minority languages in Germany are Danish, Low German, Low Rhenish, Sorbian, Romany, North Frisian and Saterland Frisian; they are officially protected by the European Charter for Regional or Minority Languages. The most used immigrant languages are Turkish, Kurdish, Polish, the Balkan languages, and Russian. Germans are typically multilingual: 67% of German citizens claim to be able to communicate in at least one foreign language and 27% in at least two. The Goethe-Institut is a non-profit German cultural association operational worldwide with 159 institutes. It offers the study of the German language and encourages global cultural exchange. Responsibility for educational supervision in Germany is primarily organised within the individual federal states. Optional kindergarten education is provided for all children between three and six years old, after which school attendance is compulsory for at least nine years. Primary education usually lasts for four to six years. Secondary education includes three traditional types of schools focused on different academic levels: the "Gymnasium" enrols the most gifted children and prepares students for university studies; the "Realschule" for intermediate students lasts six years; and the "Hauptschule" prepares pupils for vocational education. The "Gesamtschule" unifies all secondary education. A system of apprenticeship called "Duale Ausbildung" leads to a skilled qualification which is almost comparable to an academic degree. It allows students in vocational training to learn in a company as well as in a state-run trade school. This model is well regarded and reproduced all around the world. Most German universities are public institutions, and students traditionally study without paying tuition fees. The general requirement for university entrance is the "Abitur". However, there are a number of exceptions, depending on the state, the college and the subject. 
Tuition-free academic education is open to international students and is increasingly common. According to an OECD report in 2014, Germany is the world's third leading destination for international study. Germany has a long tradition of higher education. The established universities in Germany include some of the oldest in the world, with Heidelberg University (established in 1386) being the oldest. It is followed by Leipzig University (1409), the University of Rostock (1419) and the University of Greifswald (1456). The University of Berlin, founded in 1810 by the liberal educational reformer Wilhelm von Humboldt, became the academic model for many European and Western universities. In the contemporary era Germany has developed eleven Universities of Excellence: Humboldt University Berlin, the University of Bremen, the University of Cologne, TU Dresden, the University of Tübingen, RWTH Aachen, FU Berlin, Heidelberg University, the University of Konstanz, LMU Munich, and the Technical University of Munich. Germany's system of hospices, called "spitals", dates from medieval times, and today Germany has the world's oldest universal health care system, dating from Bismarck's social legislation of the 1880s. Since then, reforms and provisions have ensured a balanced health care system. Currently the population is covered by a health insurance plan provided by statute, with criteria allowing some groups to opt for a private health insurance contract. According to the World Health Organization, Germany's health care system was 77% government-funded and 23% privately funded. In 2014, Germany spent 11.3% of its GDP on health care. Germany ranked 20th in the world in life expectancy, with 77 years for men and 82 years for women, and it had a very low infant mortality rate (4 per 1,000 live births). The principal cause of death was cardiovascular disease, at 41%, followed by malignant tumours, at 26%. About 82,000 Germans had been infected with HIV/AIDS and 26,000 had died from the disease (cumulatively, since 1982). According to a 2005 survey, 27% of German adults are smokers. Obesity in Germany has been increasingly cited as a major health issue. A 2007 study shows Germany has the highest number of overweight people in Europe. Culture in German states has been shaped by major intellectual and popular currents in Europe, both religious and secular. Historically, Germany has been called "Das Land der Dichter und Denker" ("the land of poets and thinkers"), because of the major role its writers and philosophers have played in the development of Western thought. Germany is well known for such folk festival traditions as Oktoberfest and Christmas customs, which include Advent wreaths, Christmas pageants, Christmas trees, Stollen cakes, and other practices. UNESCO inscribed 41 properties in Germany on the World Heritage List. There are a number of public holidays in Germany determined by each state; 3 October has been a national day of Germany since 1990, celebrated as the "Tag der Deutschen Einheit" (German Unity Day). Prior to reunification, the day was celebrated on 17 June, in honour of the Uprising of 1953 in East Germany, which was brutally suppressed on that date. In the 21st century Berlin has emerged as a major international creative centre. According to the Anholt–GfK Nation Brands Index, in 2014 Germany was the world's most respected nation among 50 countries (ahead of the US, the UK, and France). 
A global opinion poll for the BBC revealed that Germany is recognised for having the most positive influence in the world in 2013 and 2014. German classical music includes works by some of the world's most well-known composers. Dieterich Buxtehude composed oratorios for organ, which influenced the later work of Johann Sebastian Bach and Georg Friedrich Händel; these men were influential composers of the Baroque period. During his tenure as violinist and teacher at the Salzburg cathedral, Augsburg-born composer Leopold Mozart mentored one of the most noted musicians of all time: Wolfgang Amadeus Mozart. Ludwig van Beethoven was a crucial figure in the transition between the Classical and Romantic eras. Carl Maria von Weber and Felix Mendelssohn were important in the early Romantic period. Robert Schumann and Johannes Brahms composed in the Romantic idiom. Richard Wagner was known for his operas. Richard Strauss was a leading composer of the late Romantic and early modern eras. Karlheinz Stockhausen and Hans Zimmer are important composers of the 20th and early 21st centuries. Germany is the second largest music market in Europe, and fourth largest in the world. German popular music of the 20th and 21st centuries includes the movements of Neue Deutsche Welle, pop, Ostrock, heavy metal/rock, punk, pop rock, indie and schlager pop. German electronic music gained global influence, with Kraftwerk and Tangerine Dream pioneering in this genre. DJs and artists of the techno and house music scenes of Germany have become well known (e.g. Paul van Dyk, Paul Kalkbrenner, and Scooter). German painters have influenced western art. Albrecht Dürer, Hans Holbein the Younger, Matthias Grünewald and Lucas Cranach the Elder were important German artists of the Renaissance, Peter Paul Rubens and Johann Baptist Zimmermann of the Baroque, Caspar David Friedrich and Carl Spitzweg of Romanticism, Max Liebermann of Impressionism and Max Ernst of Surrealism. Such German sculptors as Otto Schmidt-Hofer, Franz Iffland, and Julius Schmidt-Felling made important contributions to German art history in the late 19th and early 20th centuries. Several German art groups formed in the 20th century, such as the November Group; Die Brücke (The Bridge) and Der Blaue Reiter (The Blue Rider), the latter co-founded by the Russian-born Wassily Kandinsky, influenced the development of Expressionism in Munich and Berlin. The New Objectivity arose as a counter-style to it during the Weimar Republic. Post-World War II art trends in Germany can broadly be divided into Neo-expressionism, performance art and Conceptualism. Especially notable neo-expressionists include Georg Baselitz, Anselm Kiefer, Jörg Immendorff, A. R. Penck, Markus Lüpertz, Peter Robert Keil and Rainer Fetting. Other notable artists who work with traditional media or figurative imagery include Martin Kippenberger, Gerhard Richter, Sigmar Polke, and Neo Rauch. Leading German conceptual artists include or included Bernd and Hilla Becher, Hanne Darboven, Hans-Peter Feldmann, Hans Haacke, Joseph Beuys, HA Schult, Aris Kalaizis, Neo Rauch (New Leipzig School) and Andreas Gursky (photography). Major art exhibitions and festivals in Germany are the documenta, the Berlin Biennale, transmediale and Art Cologne. Architectural contributions from Germany include the Carolingian and Ottonian styles, which were precursors of Romanesque. Brick Gothic is a distinctive medieval style that evolved in Germany. Also in Renaissance and Baroque art, regional and typically German elements evolved (e.g. 
Weser Renaissance and "Dresden Baroque"). Among many renowned Baroque masters were Pöppelmann, Balthasar Neumann, Knobelsdorff and the Asam brothers. The Wessobrunner School exerted a decisive influence on, and at times even dominated, the art of stucco in southern Germany in the 18th century. The Upper Swabian Baroque Route offers a baroque-themed tourist route that highlights the contributions of such artists and craftsmen as the sculptor and plasterer Johann Michael Feuchtmayer, one of the foremost members of the Feuchtmayer family, and the brothers Johann Baptist Zimmermann and Dominikus Zimmermann. Vernacular architecture in Germany is often identified by its timber framing ("Fachwerk") traditions and varies across regions and carpentry styles. When industrialisation spread across Europe, Classicism and a distinctive style of historicism developed in Germany, sometimes referred to as the "Gründerzeit style" because of the economic boom years at the end of the 19th century. Regional historicist styles include the "Hanover School", "Nuremberg Style" and Dresden's "Semper-Nicolai School". Among the most famous of German buildings, the Schloss Neuschwanstein represents Romanesque Revival. Notable sub-styles that evolved since the 18th century are the German spa and seaside resort architecture. German artists, writers and gallerists like Siegfried Bing, Georg Hirth and Bruno Möhring also contributed to the development of Art Nouveau at the turn of the 20th century, known as "Jugendstil" in German. Expressionist architecture developed in the 1910s in Germany and influenced Art Deco and other modern styles, with e.g. Fritz Höger, Erich Mendelsohn, Dominikus Böhm, and Fritz Schumacher being influential architects. Germany was particularly important in the early modernist movement: it is the home of the Werkbund, initiated by Hermann Muthesius (New Objectivity), and of the Bauhaus movement, founded by Walter Gropius. Consequently, Germany is often considered the cradle of modern architecture and design. Ludwig Mies van der Rohe became one of the world's most renowned architects in the second half of the 20th century. He conceived of the glass façade skyscraper. Renowned contemporary architects and offices include Hans Kollhoff, Sergei Tchoban, KK Architekten, Helmut Jahn, Behnisch, GMP, Ole Scheeren, J. Mayer H., OM Ungers, Gottfried Böhm and Frei Otto (the last two being Pritzker Prize winners). German literature can be traced back to the Middle Ages and the works of writers such as Walther von der Vogelweide and Wolfram von Eschenbach. Well-known German authors include Johann Wolfgang von Goethe, Friedrich Schiller, Gotthold Ephraim Lessing and Theodor Fontane. The collections of folk tales published by the Brothers Grimm popularised German folklore on an international level. The Grimms also gathered and codified regional variants of the German language, grounding their work in historical principles; their "Deutsches Wörterbuch", or German Dictionary, sometimes called the Grimm dictionary, was begun in 1838 and the first volumes were published in 1854. Influential authors of the 20th century include Gerhart Hauptmann, Thomas Mann, Hermann Hesse, Heinrich Böll and Günter Grass. The German book market is the third largest in the world, after the United States and China. The Frankfurt Book Fair is the most important in the world for international deals and trading, with a tradition spanning over 500 years. The Leipzig Book Fair also retains a major position in Europe. 
German philosophy is historically significant: Gottfried Leibniz's contributions to rationalism; the Enlightenment philosophy of Immanuel Kant; the establishment of classical German idealism by Johann Gottlieb Fichte, Georg Wilhelm Friedrich Hegel and Friedrich Wilhelm Joseph Schelling; Arthur Schopenhauer's composition of metaphysical pessimism; the formulation of communist theory by Karl Marx and Friedrich Engels; Friedrich Nietzsche's development of perspectivism; Gottlob Frege's contributions to the dawn of analytic philosophy; Martin Heidegger's works on Being; Oswald Spengler's historical philosophy; and the development of the Frankfurt School by Max Horkheimer, Theodor Adorno, Herbert Marcuse and Jürgen Habermas have been particularly influential. The largest internationally operating media companies in Germany are the Bertelsmann enterprise, Axel Springer SE and ProSiebenSat.1 Media. The German Press Agency DPA is also significant. Germany's television market is the largest in Europe, with some 38 million TV households. Around 90% of German households have cable or satellite TV, with a variety of free-to-view public and commercial channels. There are more than 500 public and private radio stations in Germany, with the public Deutsche Welle being the main German radio and television broadcaster in foreign languages. Germany's national radio network is Deutschlandradio, while ARD stations cover local services. Many of Europe's best-selling newspapers and magazines are produced in Germany. The papers (and internet portals) with the highest circulation are "Bild" (a tabloid), "Die Zeit", "Süddeutsche Zeitung", "Frankfurter Allgemeine Zeitung" and "Die Welt"; the largest magazines include "Der Spiegel", "Stern" and "Focus". The German video gaming market is one of the largest in the world. The Gamescom in Cologne is the world's leading gaming convention. Popular game series from Germany include "Turrican", the "Anno" series, "The Settlers" series, the "Gothic" series, "SpellForce", the "FIFA Manager" series, "Far Cry" and "Crysis". Relevant game developers and publishers are Blue Byte, Crytek, Deep Silver, Kalypso Media, Piranha Bytes, Yager Development, and some of the largest social network game companies like Bigpoint, Gameforge, Goodgame and Wooga. German cinema has made major technical and artistic contributions to film. The first works of the Skladanowsky Brothers were shown to an audience in 1895. The renowned Babelsberg Studio in Potsdam was established in 1912, making it the first large-scale film studio in the world (today it is Europe's second largest studio after Cinecittà in Rome, Italy). Other early and still active studios include UFA and Bavaria Film. Early German cinema was particularly influential with German expressionists such as Robert Wiene and Friedrich Wilhelm Murnau. Director Fritz Lang's "Metropolis" (1927) is referred to as the first major science-fiction film. In 1930 Josef von Sternberg directed "The Blue Angel", the first major German sound film, with Marlene Dietrich. The films of Leni Riefenstahl set new artistic standards, in particular "Triumph of the Will". After 1945, many of the films of the immediate post-war period can be characterised as "Trümmerfilm" (rubble film). Such films included Wolfgang Staudte's "Die Mörder sind unter uns" (The Murderers Are Among Us, 1946) and "Irgendwo in Berlin" (Somewhere in Berlin, 1946) by Werner Krien. 
The state-owned East German film studio DEFA produced notable films including "Ehe im Schatten" (Marriage in the Shadows) by Kurt Maetzig (1947), "Der Untertan" (1951), "Die Geschichte vom kleinen Muck" (The Story of Little Muck, 1953), Konrad Wolf's "Der geteilte Himmel" (Divided Heaven, 1964) and Frank Beyer's "Jacob the Liar" (1975). The defining film genre in West Germany of the 1950s was arguably the "Heimatfilm" ("homeland film"); these films depicted the beauty of the land and the moral integrity of the people living in it. Characteristic of the films of the 1960s were genre films, including Edgar Wallace and Karl May adaptations. Among the most successful German film series of the 1970s were the sex-report films known as "Schulmädchen-Report" (Schoolgirl Report). During the 1970s and 1980s, New German Cinema directors such as Volker Schlöndorff, Werner Herzog, Wim Wenders, and Rainer Werner Fassbinder brought West German auteur cinema to critical acclaim. Among the box office hits were films such as "Chariots of the Gods" (1970), "Das Boot" (The Boat, 1981), "The Never Ending Story" (1984), "Otto – The Movie" (1985), "Run Lola Run" (1998), "Manitou's Shoe" (2001), the "Resident Evil" series (2002–2016), "Good Bye, Lenin!" (2003), "Head On" (2004), "The White Ribbon" (2009), "Animals United" (2010), and "Cloud Atlas" (2012). The Academy Award for Best Foreign Language Film ("Oscar") went to the German production "Die Blechtrommel" (The Tin Drum) in 1979, to "Nirgendwo in Afrika" (Nowhere in Africa) in 2002, and to "Das Leben der Anderen" (The Lives of Others) in 2007. Various Germans have won an "Oscar" for their performances in other films. The European Film Awards ceremony is held every other year in Berlin, home of the European Film Academy. The Berlin International Film Festival, known as the "Berlinale", awarding the "Golden Bear" and held annually since 1951, is one of the world's leading film festivals. The "Lolas" are awarded annually in Berlin at the German Film Awards, which have been presented since 1951. German cuisine varies from region to region, and often neighbouring regions share some culinary similarities (e.g. the southern regions of Bavaria and Swabia share some traditions with Switzerland and Austria). International varieties such as pizza, sushi, Chinese food, Greek food, Indian cuisine and doner kebab are also popular and available, thanks to diverse ethnic communities. Bread is a significant part of German cuisine, and German bakeries produce about 600 main types of bread and 1,200 different types of pastries and rolls ("Brötchen"). German cheeses account for about a third of all cheese produced in Europe. In 2012 over 99% of all meat produced in Germany was either pork, chicken or beef. Germans produce their ubiquitous sausages in almost 1,500 varieties, including Bratwursts, Weisswursts, and Currywursts. In 2012, organic foods accounted for 3.9% of total food sales. Although wine is becoming more popular in many parts of Germany, especially close to German wine regions, the national alcoholic drink is beer. German beer consumption per person in 2013 remained among the highest in the world. German beer purity regulations date back to the 15th century. The 2015 Michelin Guide awarded eleven restaurants in Germany three stars, the highest designation, while 38 more received two stars and 233 one star. German restaurants have become the world's second-most decorated after those of France. 
Twenty-seven million Germans are members of a sports club and an additional twelve million pursue sports individually. Association football is the most popular sport. With more than 6.3 million official members, the German Football Association ("Deutscher Fußball-Bund") is the largest sports organisation of its kind worldwide, and the German top league, the Bundesliga, attracts the second highest average attendance of all professional sports leagues in the world. The German men's national football team won the FIFA World Cup in 1954, 1974, 1990, and 2014, the UEFA European Championship in 1972, 1980 and 1996, and the FIFA Confederations Cup in 2017. Germany hosted the FIFA World Cup in 1974 and 2006 and the UEFA European Championship in 1988. Other popular spectator sports include winter sports, boxing, basketball, handball, volleyball, ice hockey, tennis, horse riding and golf. Water sports like sailing, rowing, and swimming are popular in Germany as well. Germany is one of the leading motor sports countries in the world. Constructors like BMW and Mercedes are prominent manufacturers in motor sport. Porsche has won the 24 Hours of Le Mans race 19 times, and Audi 13 times. The driver Michael Schumacher has set many motor sport records during his career, having won seven Formula One World Drivers' Championships, more than any other driver. He is one of the highest paid sportsmen in history. Sebastian Vettel is also among the top five most successful Formula One drivers of all time, and Nico Rosberg has also won the Formula One World Championship. Historically, German athletes have been successful contenders in the Olympic Games, ranking third in an all-time Olympic Games medal count (when combining East and West German medals). Germany was the last country to host both the Summer and Winter Games in the same year, 1936: the Summer Games in Berlin and the Winter Games in Garmisch-Partenkirchen. Munich hosted the Summer Games of 1972. German designers became early leaders of modern product design, with Bauhaus designers like Mies van der Rohe, and Dieter Rams of Braun, being essential pioneers. Germany is a leading country in the fashion industry. The German textile industry consisted of about 1,300 companies with more than 130,000 employees in 2010, which generated revenue of 28 billion euros. Almost 44 per cent of the products are exported. The Berlin Fashion Week and the fashion trade fair Bread & Butter are held twice a year. Munich, Hamburg, Cologne and Düsseldorf are also important design, production and trade hubs of the domestic fashion industry, along with smaller towns. Renowned fashion designers from Germany include Karl Lagerfeld, Jil Sander, Wolfgang Joop, Philipp Plein and Michael Michalsky. Important brands include Hugo Boss, Escada, Adidas, Puma, Esprit and Triumph. The German supermodels Claudia Schiffer, Heidi Klum, Tatjana Patitz, Nadja Auermann and Toni Garrn, among others, have come to international fame. General relativity General relativity (GR, also known as the general theory of relativity or GTR) is the geometric theory of gravitation published by Albert Einstein in 1915 and the current description of gravitation in modern physics. General relativity generalizes special relativity and Newton's law of universal gravitation, providing a unified description of gravity as a geometric property of space and time, or spacetime. In particular, the curvature of spacetime is directly related to the energy and momentum of whatever matter and radiation are present. 
The relation is specified by the Einstein field equations, a system of partial differential equations. Some predictions of general relativity differ significantly from those of classical physics, especially concerning the passage of time, the geometry of space, the motion of bodies in free fall, and the propagation of light. Examples of such differences include gravitational time dilation, gravitational lensing, the gravitational redshift of light, and the gravitational time delay. The predictions of general relativity have been confirmed in all observations and experiments to date. Although general relativity is not the only relativistic theory of gravity, it is the simplest theory that is consistent with experimental data. However, unanswered questions remain, the most fundamental being how "general relativity" can be reconciled with the laws of quantum physics to produce a complete and self-consistent theory of quantum gravity. Einstein's theory has important astrophysical implications. For example, it implies the existence of black holes—regions of space in which space and time are distorted in such a way that nothing, not even light, can escape—as an end-state for massive stars. There is ample evidence that the intense radiation emitted by certain kinds of astronomical objects is due to black holes; for example, microquasars and active galactic nuclei result from the presence of stellar black holes and supermassive black holes, respectively. The bending of light by gravity can lead to the phenomenon of gravitational lensing, in which multiple images of the same distant astronomical object are visible in the sky. General relativity also predicts the existence of gravitational waves, which have since been observed directly by the physics collaboration LIGO. In addition, general relativity is the basis of current cosmological models of a consistently expanding universe. Widely acknowledged as a theory of extraordinary beauty, general relativity has often been described as "the" most beautiful of all existing physical theories. Soon after publishing the special theory of relativity in 1905, Einstein started thinking about how to incorporate gravity into his new relativistic framework. In 1907, beginning with a simple thought experiment involving an observer in free fall, he embarked on what would be an eight-year search for a relativistic theory of gravity. After numerous detours and false starts, his work culminated in the presentation to the Prussian Academy of Science in November 1915 of what are now known as the Einstein field equations. These equations specify how the geometry of space and time is influenced by whatever matter and radiation are present, and form the core of Einstein's general theory of relativity. The Einstein field equations are nonlinear and very difficult to solve. Einstein used approximation methods in working out initial predictions of the theory. But as early as 1916, the astrophysicist Karl Schwarzschild found the first non-trivial exact solution to the Einstein field equations, the Schwarzschild metric. This solution laid the groundwork for the description of the final stages of gravitational collapse, and the objects known today as black holes. In the same year, the first steps towards generalizing Schwarzschild's solution to electrically charged objects were taken, which eventually resulted in the Reissner–Nordström solution, now associated with electrically charged black holes. 
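For concreteness, Schwarzschild's solution can be written down explicitly. The following is the standard textbook form in Schwarzschild coordinates (t, r, θ, φ) for a central mass M; it is quoted here as an illustration rather than taken from the text above:

ds^2 = -\left(1 - \frac{2GM}{rc^2}\right) c^2\, dt^2 + \left(1 - \frac{2GM}{rc^2}\right)^{-1} dr^2 + r^2 \left(d\theta^2 + \sin^2\theta\, d\varphi^2\right)

The radius r_s = 2GM/c^2, at which the first coefficient vanishes, is the Schwarzschild radius; for a non-rotating black hole it marks the event horizon.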
In 1917, Einstein applied his theory to the universe as a whole, initiating the field of relativistic cosmology. In line with contemporary thinking, he assumed a static universe, adding a new parameter to his original field equations—the cosmological constant—to match that observational presumption. By 1929, however, the work of Hubble and others had shown that our universe is expanding. This is readily described by the expanding cosmological solutions found by Friedmann in 1922, which do not require a cosmological constant. Lemaître used these solutions to formulate the earliest version of the Big Bang models, in which our universe has evolved from an extremely hot and dense earlier state. Einstein later declared the cosmological constant the biggest blunder of his life. During that period, general relativity remained something of a curiosity among physical theories. It was clearly superior to Newtonian gravity, being consistent with special relativity and accounting for several effects unexplained by the Newtonian theory. Einstein himself had shown in 1915 how his theory explained the anomalous perihelion advance of the planet Mercury without any arbitrary parameters ("fudge factors"). Similarly, a 1919 expedition led by Eddington confirmed general relativity's prediction for the deflection of starlight by the Sun during the total solar eclipse of May 29, 1919, making Einstein instantly famous. Yet the theory entered the mainstream of theoretical physics and astrophysics only with the developments between approximately 1960 and 1975, now known as the golden age of general relativity. Physicists began to understand the concept of a black hole, and to identify quasars as one of these objects' astrophysical manifestations. Ever more precise solar system tests confirmed the theory's predictive power, and relativistic cosmology, too, became amenable to direct observational tests. Over the years, general relativity has acquired a reputation as a theory of extraordinary beauty. Subrahmanyan Chandrasekhar has noted that, at multiple levels, general relativity exhibits what Francis Bacon termed a "strangeness in the proportion" (i.e. elements that excite wonderment and surprise). It juxtaposes fundamental concepts (space and time "versus" matter and motion) which had previously been considered as entirely independent. Chandrasekhar also noted that Einstein's only guides in his search for an exact theory were the principle of equivalence and his sense that a proper description of gravity should be geometrical at its basis, so that there was an "element of revelation" in the manner in which Einstein arrived at his theory. Other elements of beauty associated with the general theory of relativity are its simplicity, symmetry, the manner in which it incorporates invariance and unification, and its perfect logical consistency. General relativity can be understood by examining its similarities with and departures from classical physics. The first step is the realization that classical mechanics and Newton's law of gravity admit a geometric description. The combination of this description with the laws of special relativity results in a heuristic derivation of general relativity. At the base of classical mechanics is the notion that a body's motion can be described as a combination of free (or inertial) motion, and deviations from this free motion. 
Such deviations are caused by external forces acting on a body in accordance with Newton's second law of motion, which states that the net force acting on a body is equal to that body's (inertial) mass multiplied by its acceleration. The preferred inertial motions are related to the geometry of space and time: in the standard reference frames of classical mechanics, objects in free motion move along straight lines at constant speed. In modern parlance, their paths are geodesics, straight world lines in curved spacetime. Conversely, one might expect that inertial motions, once identified by observing the actual motions of bodies and making allowances for the external forces (such as electromagnetism or friction), can be used to define the geometry of space, as well as a time coordinate. However, there is an ambiguity once gravity comes into play. According to Newton's law of gravity, and independently verified by experiments such as that of Eötvös and its successors (see Eötvös experiment), there is a universality of free fall (also known as the weak equivalence principle, or the universal equality of inertial and passive-gravitational mass): the trajectory of a test body in free fall depends only on its position and initial speed, but not on any of its material properties. A simplified version of this is embodied in Einstein's elevator experiment, illustrated in the figure on the right: for an observer in a small enclosed room, it is impossible to decide, by mapping the trajectory of bodies such as a dropped ball, whether the room is at rest in a gravitational field, or in free space aboard a rocket that is accelerating at a rate equal to that of the gravitational field. Given the universality of free fall, there is no observable distinction between inertial motion and motion under the influence of the gravitational force. This suggests the definition of a new class of inertial motion, namely that of objects in free fall under the influence of gravity. This new class of preferred motions, too, defines a geometry of space and time—in mathematical terms, it is the geodesic motion associated with a specific connection which depends on the gradient of the gravitational potential. Space, in this construction, still has the ordinary Euclidean geometry. However, spacetime as a whole is more complicated. As can be shown using simple thought experiments following the free-fall trajectories of different test particles, the result of transporting spacetime vectors that can denote a particle's velocity (time-like vectors) will vary with the particle's trajectory; mathematically speaking, the Newtonian connection is not integrable. From this, one can deduce that spacetime is curved. The resulting Newton–Cartan theory is a geometric formulation of Newtonian gravity using only covariant concepts, i.e. a description which is valid in any desired coordinate system. In this geometric description, tidal effects—the relative acceleration of bodies in free fall—are related to the derivative of the connection, showing how the modified geometry is caused by the presence of mass. As intriguing as geometric Newtonian gravity may be, its basis, classical mechanics, is merely a limiting case of (special) relativistic mechanics. In the language of symmetry: where gravity can be neglected, physics is Lorentz invariant as in special relativity rather than Galilei invariant as in classical mechanics. (The defining symmetry of special relativity is the Poincaré group, which includes translations, rotations and boosts.) 
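To make the notion of geodesic motion associated with a connection more concrete, the standard geodesic equation (textbook notation; the symbols below are not taken from the text above) reads

\frac{d^2 x^\mu}{d\tau^2} + \Gamma^\mu_{\alpha\beta}\, \frac{dx^\alpha}{d\tau}\, \frac{dx^\beta}{d\tau} = 0

where the connection coefficients \Gamma^\mu_{\alpha\beta} encode the geometry. In the Newtonian, slow-motion limit the only relevant coefficients reduce to the gradient of the gravitational potential \Phi, and the equation becomes \ddot{\vec{x}} = -\nabla\Phi: the trajectory depends only on the body's position and initial velocity, never on its mass or composition, which is precisely the universality of free fall described above.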
The differences between Lorentz invariance and Galilei invariance become significant when dealing with speeds approaching the speed of light, and with high-energy phenomena. With Lorentz symmetry, additional structures come into play. They are defined by the set of light cones (see image). The light-cones define a causal structure: for each event A, there is a set of events that can, in principle, either influence or be influenced by A via signals or interactions that do not need to travel faster than light (such as event B in the image), and a set of events for which such an influence is impossible (such as event C in the image). These sets are observer-independent. In conjunction with the world-lines of freely falling particles, the light-cones can be used to reconstruct the spacetime's semi-Riemannian metric, at least up to a positive scalar factor. In mathematical terms, this defines a conformal structure or conformal geometry. Special relativity is defined in the absence of gravity, so for practical applications, it is a suitable model whenever gravity can be neglected. Bringing gravity into play, and assuming the universality of free fall, reasoning analogous to that of the previous section applies: there are no global inertial frames. Instead there are approximate inertial frames moving alongside freely falling particles. Translated into the language of spacetime: the straight time-like lines that define a gravity-free inertial frame are deformed to lines that are curved relative to each other, suggesting that the inclusion of gravity necessitates a change in spacetime geometry. A priori, it is not clear whether the new local frames in free fall coincide with the reference frames in which the laws of special relativity hold—that theory is based on the propagation of light, and thus on electromagnetism, which could have a different set of preferred frames. But using different assumptions about the special-relativistic frames (such as their being earth-fixed, or in free fall), one can derive different predictions for the gravitational redshift, that is, the way in which the frequency of light shifts as the light propagates through a gravitational field (cf. below). The actual measurements show that free-falling frames are the ones in which light propagates as it does in special relativity. The generalization of this statement, namely that the laws of special relativity hold to good approximation in freely falling (and non-rotating) reference frames, is known as the Einstein equivalence principle, a crucial guiding principle for generalizing special-relativistic physics to include gravity. The same experimental data shows that time as measured by clocks in a gravitational field—proper time, to give the technical term—does not follow the rules of special relativity. In the language of spacetime geometry, it is not measured by the Minkowski metric. As in the Newtonian case, this is suggestive of a more general geometry. At small scales, all reference frames that are in free fall are equivalent, and approximately Minkowskian. Consequently, we are now dealing with a curved generalization of Minkowski space. The metric tensor that defines the geometry—in particular, how lengths and angles are measured—is not the Minkowski metric of special relativity, it is a generalization known as a semi- or pseudo-Riemannian metric. 
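As a quantitative illustration of the statement that proper time deviates from the special-relativistic prescription, the standard weak-field expression (a textbook approximation, not a formula from the text above) for a clock at rest at Newtonian potential \Phi (with \Phi \le 0 and |\Phi| \ll c^2) is

d\tau \approx \left(1 + \frac{\Phi}{c^2}\right) dt

so a clock deeper in a gravity well ticks more slowly relative to coordinate time, and light climbing out of the well is correspondingly redshifted by z \approx \Delta\Phi / c^2, the gravitational redshift mentioned above.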
Furthermore, each Riemannian metric is naturally associated with one particular kind of connection, the Levi-Civita connection, and this is, in fact, the connection that satisfies the equivalence principle and makes space locally Minkowskian (that is, in suitable locally inertial coordinates, the metric is Minkowskian, and its first partial derivatives and the connection coefficients vanish). Having formulated the relativistic, geometric version of the effects of gravity, the question of gravity's source remains. In Newtonian gravity, the source is mass. In special relativity, mass turns out to be part of a more general quantity called the energy–momentum tensor, which includes both energy and momentum densities as well as stress: pressure and shear. Using the equivalence principle, this tensor is readily generalized to curved spacetime. Drawing further upon the analogy with geometric Newtonian gravity, it is natural to assume that the field equation for gravity relates this tensor and the Ricci tensor, which describes a particular class of tidal effects: the change in volume for a small cloud of test particles that are initially at rest, and then fall freely. In special relativity, conservation of energy–momentum corresponds to the statement that the energy–momentum tensor is divergence-free. This formula, too, is readily generalized to curved spacetime by replacing partial derivatives with their curved-manifold counterparts, covariant derivatives studied in differential geometry. With this additional condition—the covariant divergence of the energy–momentum tensor, and hence of whatever is on the other side of the equation, is zero—the simplest set of equations are what are called Einstein's (field) equations: G_{\mu\nu} \equiv R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} = \kappa\, T_{\mu\nu}. On the left-hand side is the Einstein tensor, G_{\mu\nu}, a symmetric, divergence-free combination of the Ricci tensor R_{\mu\nu} and the metric g_{\mu\nu}. In particular, R = g^{\mu\nu} R_{\mu\nu} is the curvature scalar. The Ricci tensor itself is related to the more general Riemann curvature tensor as R_{\mu\nu} = R^{\alpha}{}_{\mu\alpha\nu}. On the right-hand side, T_{\mu\nu} is the energy–momentum tensor. All tensors are written in abstract index notation. Matching the theory's prediction to observational results for planetary orbits or, equivalently, assuring that the weak-gravity, low-speed limit is Newtonian mechanics, the proportionality constant can be fixed as \kappa = 8\pi G / c^4, with G the gravitational constant and c the speed of light. When there is no matter present, so that the energy–momentum tensor vanishes, the result is the vacuum Einstein equations, R_{\mu\nu} = 0. There are alternatives to general relativity built upon the same premises, which include additional rules and/or constraints, leading to different field equations. Examples are Whitehead's theory, Brans–Dicke theory, teleparallelism, "f"("R") gravity and Einstein–Cartan theory. The derivation outlined in the previous section contains all the information needed to define general relativity, describe its key properties, and address a question of crucial importance in physics, namely how the theory can be used for model-building. General relativity is a metric theory of gravitation. At its core are Einstein's equations, which describe the relation between the geometry of a four-dimensional, pseudo-Riemannian manifold representing spacetime, and the energy–momentum contained in that spacetime. 
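A small derivational aside, using only standard manipulations rather than anything stated above: contracting the field equations with g^{\mu\nu} gives R - 2R = \kappa T, i.e. R = -\kappa T, so the equations can equivalently be written in the trace-reversed form

R_{\mu\nu} = \kappa \left( T_{\mu\nu} - \tfrac{1}{2} T\, g_{\mu\nu} \right)

which makes it immediately evident that the vacuum case T_{\mu\nu} = 0 reduces to R_{\mu\nu} = 0.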
Phenomena that in classical mechanics are ascribed to the action of the force of gravity (such as free-fall, orbital motion, and spacecraft trajectories), correspond to inertial motion within a curved geometry of spacetime in general relativity; there is no gravitational force deflecting objects from their natural, straight paths. Instead, gravity corresponds to changes in the properties of space and time, which in turn changes the straightest-possible paths that objects will naturally follow. The curvature is, in turn, caused by the energy–momentum of matter. Paraphrasing the relativist John Archibald Wheeler, spacetime tells matter how to move; matter tells spacetime how to curve. While general relativity replaces the scalar gravitational potential of classical physics by a symmetric rank-two tensor, the latter reduces to the former in certain limiting cases. For weak gravitational fields and slow speed relative to the speed of light, the theory's predictions converge on those of Newton's law of universal gravitation. As it is constructed using tensors, general relativity exhibits general covariance: its laws—and further laws formulated within the general relativistic framework—take on the same form in all coordinate systems. Furthermore, the theory does not contain any invariant geometric background structures, i.e. it is background independent. It thus satisfies a more stringent general principle of relativity, namely that the laws of physics are the same for all observers. Locally, as expressed in the equivalence principle, spacetime is Minkowskian, and the laws of physics exhibit local Lorentz invariance. The core concept of general-relativistic model-building is that of a solution of Einstein's equations. Given both Einstein's equations and suitable equations for the properties of matter, such a solution consists of a specific semi-Riemannian manifold (usually defined by giving the metric in specific coordinates), and specific matter fields defined on that manifold. Matter and geometry must satisfy Einstein's equations, so in particular, the matter's energy–momentum tensor must be divergence-free. The matter must, of course, also satisfy whatever additional equations were imposed on its properties. In short, such a solution is a model universe that satisfies the laws of general relativity, and possibly additional laws governing whatever matter might be present. Einstein's equations are nonlinear partial differential equations and, as such, difficult to solve exactly. Nevertheless, a number of exact solutions are known, although only a few have direct physical applications. The best-known exact solutions, and also those most interesting from a physics point of view, are the Schwarzschild solution, the Reissner–Nordström solution and the Kerr metric, each corresponding to a certain type of black hole in an otherwise empty universe, and the Friedmann–Lemaître–Robertson–Walker and de Sitter universes, each describing an expanding cosmos. Exact solutions of great theoretical interest include the Gödel universe (which opens up the intriguing possibility of time travel in curved spacetimes), the Taub-NUT solution (a model universe that is homogeneous, but anisotropic), and anti-de Sitter space (which has recently come to prominence in the context of what is called the Maldacena conjecture). Given the difficulty of finding exact solutions, Einstein's field equations are also solved frequently by numerical integration on a computer, or by considering small perturbations of exact solutions. 
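As a small illustration of what it means for a metric to be "a solution of Einstein's equations", the following Python sketch uses the sympy library to check symbolically that the Schwarzschild metric satisfies the vacuum equations R_{\mu\nu} = 0. The coordinate names, the mass symbol M and the use of geometric units (G = c = 1) are illustrative assumptions, not details taken from the text above:

import sympy as sp

# Schwarzschild coordinates and mass parameter (geometric units G = c = 1; illustrative choice)
t, r, th, ph, M = sp.symbols('t r theta phi M', positive=True)
x = [t, r, th, ph]

f = 1 - 2*M/r
g = sp.diag(-f, 1/f, r**2, r**2*sp.sin(th)**2)   # metric components g_{mu nu}
ginv = g.inv()
n = 4

# Christoffel symbols Gamma^a_{bc} of the Levi-Civita connection
Gamma = [[[sp.simplify(sum(ginv[a, d]*(sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
                                        - sp.diff(g[b, c], x[d])) for d in range(n))/2)
           for c in range(n)] for b in range(n)] for a in range(n)]

# Ricci tensor R_{bc} built from the Christoffel symbols
def ricci(b, c):
    expr = sum(sp.diff(Gamma[a][b][c], x[a]) - sp.diff(Gamma[a][b][a], x[c])
               + sum(Gamma[a][a][d]*Gamma[d][b][c] - Gamma[a][c][d]*Gamma[d][b][a]
                     for d in range(n))
               for a in range(n))
    return sp.simplify(expr)

print(all(ricci(b, c) == 0 for b in range(n) for c in range(n)))  # expected output: True

Checking a candidate solution in this way is, of course, far easier than finding one; for realistic matter configurations the perturbative and numerical methods described next are needed.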
In the field of numerical relativity, powerful computers are employed to simulate the geometry of spacetime and to solve Einstein's equations for interesting situations such as two colliding black holes. In principle, such methods may be applied to any system, given sufficient computer resources, and may address fundamental questions such as naked singularities. Approximate solutions may also be found by perturbation theories such as linearized gravity and its generalization, the post-Newtonian expansion, both of which were developed by Einstein. The latter provides a systematic approach to solving for the geometry of a spacetime that contains a distribution of matter that moves slowly compared with the speed of light. The expansion involves a series of terms; the first terms represent Newtonian gravity, whereas the later terms represent ever smaller corrections to Newton's theory due to general relativity. An extension of this expansion is the parametrized post-Newtonian (PPN) formalism, which allows quantitative comparisons between the predictions of general relativity and alternative theories. General relativity has a number of physical consequences. Some follow directly from the theory's axioms, whereas others have become clear only in the course of many years of research that followed Einstein's initial publication. Assuming that the equivalence principle holds, gravity influences the passage of time. Light sent down into a gravity well is blueshifted, whereas light sent in the opposite direction (i.e., climbing out of the gravity well) is redshifted; collectively, these two effects are known as the gravitational frequency shift. More generally, processes close to a massive body run more slowly when compared with processes taking place farther away; this effect is known as gravitational time dilation. Gravitational redshift has been measured in the laboratory and using astronomical observations. Gravitational time dilation in the Earth's gravitational field has been measured numerous times using atomic clocks, while ongoing validation is provided as a side effect of the operation of the Global Positioning System (GPS). Tests in stronger gravitational fields are provided by the observation of binary pulsars. All results are in agreement with general relativity. However, at the current level of accuracy, these observations cannot distinguish between general relativity and other theories in which the equivalence principle is valid. General relativity predicts that the path of light will follow the curvature of spacetime as it passes near a star. This effect was initially confirmed by observing the light of stars or distant quasars being deflected as it passes the Sun. This and related predictions follow from the fact that light follows what is called a light-like or null geodesic—a generalization of the straight lines along which light travels in classical physics. Such geodesics are the generalization of the invariance of lightspeed in special relativity. As one examines suitable model spacetimes (either the exterior Schwarzschild solution or, for more than a single mass, the post-Newtonian expansion), several effects of gravity on light propagation emerge. Although the bending of light can also be derived by extending the universality of free fall to light, the angle of deflection resulting from such calculations is only half the value given by general relativity. 
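A rough numerical check of two of these effects can be made with the standard weak-field formulas; the sketch below (plain Python with rounded constants and a nominal GPS orbital radius, so the numbers are approximate) estimates the gravitational part of the clock-rate difference between the ground and a GPS-type orbit, and the deflection of light grazing the Sun.

```python
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m/s
M_earth = 5.972e24     # kg
M_sun = 1.989e30       # kg
R_earth = 6.371e6      # m
R_sun = 6.963e8        # m
r_gps = 2.657e7        # approximate GPS orbital radius, m

# Weak-field gravitational time dilation: clocks higher in the potential run faster.
# Fractional rate difference ~ (GM/c^2) * (1/R_earth - 1/r_gps)
frac = G * M_earth / c**2 * (1 / R_earth - 1 / r_gps)
print(f"GPS clock gains ~{frac * 86400 * 1e6:.1f} microseconds per day (gravitational part only)")

# Deflection of light grazing the Sun: alpha = 4GM/(c^2 R), twice the "Newtonian" half-value
alpha = 4 * G * M_sun / (c**2 * R_sun)
print(f"Solar light deflection ~{math.degrees(alpha) * 3600:.2f} arcseconds")
```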
Closely related to light deflection is the gravitational time delay (or Shapiro delay), the phenomenon that light signals take longer to move through a gravitational field than they would in the absence of that field. There have been numerous successful tests of this prediction. In the parameterized post-Newtonian formalism (PPN), measurements of both the deflection of light and the gravitational time delay determine a parameter called γ, which encodes the influence of gravity on the geometry of space. In 1916, Albert Einstein predicted the existence of gravitational waves: ripples in the metric of spacetime that propagate at the speed of light. This is one of several analogies between weak-field gravity and electromagnetism: gravitational waves are closely analogous to electromagnetic waves. On February 11, 2016, the Advanced LIGO team announced that they had directly detected gravitational waves from a pair of black holes merging. The simplest type of such a wave can be visualized by its action on a ring of freely floating particles. A sine wave propagating through such a ring towards the reader distorts the ring in a characteristic, rhythmic fashion. Since Einstein's equations are non-linear, arbitrarily strong gravitational waves do not obey linear superposition, making their description difficult. However, for weak fields, a linear approximation can be made. Such linearized gravitational waves are sufficiently accurate to describe the exceedingly weak waves that are expected to arrive here on Earth from far-off cosmic events, which typically result in relative distances increasing and decreasing by 10⁻²¹ or less. Data analysis methods routinely make use of the fact that these linearized waves can be Fourier decomposed. Some exact solutions describe gravitational waves without any approximation, e.g., a wave train traveling through empty space or Gowdy universes, varieties of an expanding cosmos filled with gravitational waves. But for gravitational waves produced in astrophysically relevant situations, such as the merger of two black holes, numerical methods are presently the only way to construct appropriate models. General relativity differs from classical mechanics in a number of predictions concerning orbiting bodies. It predicts an overall rotation (precession) of planetary orbits, as well as orbital decay caused by the emission of gravitational waves and effects related to the relativity of direction. In general relativity, the apsides of any orbit (the point of the orbiting body's closest approach to the system's center of mass) will precess; the orbit is not an ellipse, but akin to an ellipse that rotates on its focus, resulting in a rose curve-like shape. Einstein first derived this result by using an approximate metric representing the Newtonian limit and treating the orbiting body as a test particle. For him, the fact that his theory gave a straightforward explanation of Mercury's anomalous perihelion shift, discovered earlier by Urbain Le Verrier in 1859, was important evidence that he had at last identified the correct form of the gravitational field equations. The effect can also be derived by using either the exact Schwarzschild metric (describing spacetime around a spherical mass) or the much more general post-Newtonian formalism. It is due to the influence of gravity on the geometry of space and to the contribution of self-energy to a body's gravity (encoded in the nonlinearity of Einstein's equations). 
Relativistic precession has been observed for all planets that allow for accurate precession measurements (Mercury, Venus, and Earth), as well as in binary pulsar systems, where it is larger by five orders of magnitude. In general relativity the perihelion shift σ, expressed in radians per revolution, is approximately given by σ = 24π³a²/(T²c²(1 − e²)), where a is the orbit's semi-major axis, T its orbital period, e its orbital eccentricity, and c the speed of light. According to general relativity, a binary system will emit gravitational waves, thereby losing energy. Due to this loss, the distance between the two orbiting bodies decreases, and so does their orbital period. Within the Solar System or for ordinary double stars, the effect is too small to be observable. This is not the case for a close binary pulsar, a system of two orbiting neutron stars, one of which is a pulsar: from the pulsar, observers on Earth receive a regular series of radio pulses that can serve as a highly accurate clock, which allows precise measurements of the orbital period. Because neutron stars are immensely compact, significant amounts of energy are emitted in the form of gravitational radiation. The first observation of a decrease in orbital period due to the emission of gravitational waves was made by Hulse and Taylor, using the binary pulsar PSR B1913+16 they had discovered in 1974. This was the first detection of gravitational waves, albeit indirect, for which they were awarded the 1993 Nobel Prize in Physics. Since then, several other binary pulsars have been found, in particular the double pulsar PSR J0737-3039, in which both stars are pulsars. Several relativistic effects are directly related to the relativity of direction. One is geodetic precession: the axis direction of a gyroscope in free fall in curved spacetime will change when compared, for instance, with the direction of light received from distant stars—even though such a gyroscope represents the way of keeping a direction as stable as possible ("parallel transport"). For the Moon–Earth system, this effect has been measured with the help of lunar laser ranging. More recently, it has been measured for test masses aboard the satellite Gravity Probe B to a precision of better than 0.3%. Near a rotating mass, there are gravitomagnetic or frame-dragging effects. A distant observer will determine that objects close to the mass get "dragged around". This is most extreme for rotating black holes where, for any object entering a zone known as the ergosphere, rotation is inevitable. Such effects can again be tested through their influence on the orientation of gyroscopes in free fall. Somewhat controversial tests have been performed using the LAGEOS satellites, confirming the relativistic prediction. The Mars Global Surveyor probe orbiting Mars has also been used. The deflection of light by gravity is responsible for a new class of astronomical phenomena. If a massive object is situated between the astronomer and a distant target object with appropriate mass and relative distances, the astronomer will see multiple distorted images of the target. Such effects are known as gravitational lensing. Depending on the configuration, scale, and mass distribution, there can be two or more images, a bright ring known as an Einstein ring, or partial rings called arcs. The earliest example was discovered in 1979; since then, more than a hundred gravitational lenses have been observed. Even if the multiple images are too close to each other to be resolved, the effect can still be measured, e.g., as an overall brightening of the target object; a number of such "microlensing events" have been observed. 
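Plugging Mercury's orbital elements into the perihelion-shift formula above reproduces the anomalous shift reported by Le Verrier; the sketch below (plain Python with rounded orbital data, so the result is approximate) converts the per-revolution shift into arcseconds per century.

```python
import math

c = 2.998e8              # speed of light, m/s
a = 5.791e10             # Mercury's semi-major axis, m
T = 87.969 * 86400       # Mercury's orbital period, s
e = 0.2056               # orbital eccentricity

# Perihelion shift per revolution (radians): sigma = 24 pi^3 a^2 / (T^2 c^2 (1 - e^2))
sigma = 24 * math.pi**3 * a**2 / (T**2 * c**2 * (1 - e**2))

revs_per_century = 36525 * 86400 / T
arcsec_per_century = math.degrees(sigma) * 3600 * revs_per_century
print(f"Relativistic perihelion shift of Mercury: ~{arcsec_per_century:.0f} arcsec per century")  # ~43
```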
Gravitational lensing has developed into a tool of observational astronomy. It is used to detect the presence and distribution of dark matter, provide a "natural telescope" for observing distant galaxies, and to obtain an independent estimate of the Hubble constant. Statistical evaluations of lensing data provide valuable insight into the structural evolution of galaxies. Observations of binary pulsars provide strong indirect evidence for the existence of gravitational waves (see Orbital decay, above). Detection of these waves is a major goal of current relativity-related research. Several land-based gravitational wave detectors are currently in operation, most notably the interferometric detectors GEO 600, LIGO (two detectors), TAMA 300 and VIRGO. Various pulsar timing arrays are using millisecond pulsars to detect gravitational waves in the 10⁻⁹ to 10⁻⁶ hertz frequency range, which originate from binary supermassive black holes. A European space-based detector, eLISA / NGO, is currently under development, with a precursor mission (LISA Pathfinder) having launched in December 2015. Observations of gravitational waves promise to complement observations in the electromagnetic spectrum. They are expected to yield information about black holes and other dense objects such as neutron stars and white dwarfs, about certain kinds of supernova implosions, and about processes in the very early universe, including the signature of certain types of hypothetical cosmic string. In February 2016, the Advanced LIGO team announced that they had detected gravitational waves from a black hole merger. Whenever the ratio of an object's mass to its radius becomes sufficiently large, general relativity predicts the formation of a black hole, a region of space from which nothing, not even light, can escape. In the currently accepted models of stellar evolution, neutron stars of around 1.4 solar masses, and stellar black holes with a few to a few dozen solar masses, are thought to be the final state for the evolution of massive stars. Usually a galaxy has one supermassive black hole with a few million to a few billion solar masses in its center, and its presence is thought to have played an important role in the formation of the galaxy and larger cosmic structures. Astronomically, the most important property of compact objects is that they provide a supremely efficient mechanism for converting gravitational energy into electromagnetic radiation. Accretion, the falling of dust or gaseous matter onto stellar or supermassive black holes, is thought to be responsible for some spectacularly luminous astronomical objects, notably diverse kinds of active galactic nuclei on galactic scales and stellar-size objects such as microquasars. In particular, accretion can lead to relativistic jets, focused beams of highly energetic particles that are being flung into space at almost light speed. General relativity plays a central role in modelling all these phenomena, and observations provide strong evidence for the existence of black holes with the properties predicted by the theory. Black holes are also sought-after targets in the search for gravitational waves (cf. Gravitational waves, above). Merging black hole binaries should lead to some of the strongest gravitational wave signals reaching detectors here on Earth, and the phase directly before the merger ("chirp") could be used as a "standard candle" to deduce the distance to the merger events, and hence serve as a probe of cosmic expansion at large distances. 
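The mass-to-radius criterion mentioned above can be made concrete with the Schwarzschild radius r_s = 2GM/c², the size below which a mass must be compressed for a horizon to form; the sketch below (plain Python, rounded masses, with a Sagittarius A*-like mass chosen only for illustration) evaluates it for the Sun and for a supermassive black hole.

```python
G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s
M_sun = 1.989e30   # kg

def schwarzschild_radius(mass_kg):
    """r_s = 2GM/c^2: compress a mass inside this radius and a horizon forms."""
    return 2 * G * mass_kg / c**2

print(f"Sun:    r_s = {schwarzschild_radius(M_sun) / 1e3:.1f} km")                  # ~3 km
print(f"Sgr A*: r_s = {schwarzschild_radius(4.3e6 * M_sun) / 1e9:.1f} million km")  # ~13 million km
```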
The gravitational waves produced as a stellar black hole plunges into a supermassive one should provide direct information about the supermassive black hole's geometry. The current models of cosmology are based on Einstein's field equations including the cosmological constant Λ, since it has important influence on the large-scale dynamics of the cosmos: R_{ab} − ½R g_{ab} + Λ g_{ab} = κ T_{ab}, where g_{ab} is the spacetime metric. Isotropic and homogeneous solutions of these enhanced equations, the Friedmann–Lemaître–Robertson–Walker solutions, allow physicists to model a universe that has evolved over the past 14 billion years from a hot, early Big Bang phase. Once a small number of parameters (for example the universe's mean matter density) have been fixed by astronomical observation, further observational data can be used to put the models to the test. Predictions, all successful, include the initial abundance of chemical elements formed in a period of primordial nucleosynthesis, the large-scale structure of the universe, and the existence and properties of a "thermal echo" from the early cosmos, the cosmic background radiation. Astronomical observations of the cosmological expansion rate allow the total amount of matter in the universe to be estimated, although the nature of that matter remains mysterious in part. About 90% of all matter appears to be dark matter, which has mass (or, equivalently, gravitational influence), but does not interact electromagnetically and, hence, cannot be observed directly. There is no generally accepted description of this new kind of matter, within the framework of known particle physics or otherwise. Observational evidence from redshift surveys of distant supernovae and measurements of the cosmic background radiation also show that the evolution of our universe is significantly influenced by a cosmological constant resulting in an acceleration of cosmic expansion or, equivalently, by a form of energy with an unusual equation of state, known as dark energy, the nature of which remains unclear. An inflationary phase, an additional phase of strongly accelerated expansion at cosmic times of around 10⁻³³ seconds, was hypothesized in 1980 to account for several puzzling observations that were unexplained by classical cosmological models, such as the nearly perfect homogeneity of the cosmic background radiation. Recent measurements of the cosmic background radiation have resulted in the first evidence for this scenario. However, there is a bewildering variety of possible inflationary scenarios, which cannot be restricted by current observations. An even larger question is the physics of the earliest universe, prior to the inflationary phase and close to where the classical models predict the big bang singularity. An authoritative answer would require a complete theory of quantum gravity, which has not yet been developed (cf. the section on quantum gravity, below). Kurt Gödel showed that solutions to Einstein's equations exist that contain closed timelike curves (CTCs), which allow for loops in time. The solutions require extreme physical conditions unlikely ever to occur in practice, and it remains an open question whether further laws of physics will eliminate them completely. Since then, other—similarly impractical—GR solutions containing CTCs have been found, such as the Tipler cylinder and traversable wormholes. In general relativity, no material body can catch up with or overtake a light pulse. 
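For the spatially flat case of these Friedmann–Lemaître–Robertson–Walker models, the mean density singled out by the field equations can be evaluated directly; a minimal sketch follows (plain Python, assuming a Hubble constant of 70 km/s/Mpc, a representative rather than a measured value).

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
Mpc = 3.086e22       # metres per megaparsec
H0 = 70e3 / Mpc      # Hubble constant in s^-1 (assuming 70 km/s/Mpc)

# Critical density of a spatially flat FLRW universe: rho_c = 3 H0^2 / (8 pi G)
rho_c = 3 * H0**2 / (8 * math.pi * G)
m_proton = 1.673e-27  # kg
print(f"rho_c ~ {rho_c:.2e} kg/m^3 (~{rho_c / m_proton:.1f} proton masses per cubic metre)")
```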
No influence from an event "A" can reach any other location "X" before light sent out at "A" to "X". In consequence, an exploration of all light worldlines (null geodesics) yields key information about the spacetime's causal structure. This structure can be displayed using Penrose–Carter diagrams in which infinitely large regions of space and infinite time intervals are shrunk ("compactified") so as to fit onto a finite map, while light still travels along diagonals as in standard spacetime diagrams. Aware of the importance of causal structure, Roger Penrose and others developed what is known as global geometry. In global geometry, the object of study is not one particular solution (or family of solutions) to Einstein's equations. Rather, relations that hold true for all geodesics, such as the Raychaudhuri equation, and additional non-specific assumptions about the nature of matter (usually in the form of energy conditions) are used to derive general results. Using global geometry, some spacetimes can be shown to contain boundaries called horizons, which demarcate one region from the rest of spacetime. The best-known examples are black holes: if mass is compressed into a sufficiently compact region of space (as specified in the hoop conjecture, the relevant length scale is the Schwarzschild radius), no light from inside can escape to the outside. Since no object can overtake a light pulse, all interior matter is imprisoned as well. Passage from the exterior to the interior is still possible, showing that the boundary, the black hole's "horizon", is not a physical barrier. Early studies of black holes relied on explicit solutions of Einstein's equations, notably the spherically symmetric Schwarzschild solution (used to describe a static black hole) and the axisymmetric Kerr solution (used to describe a rotating, stationary black hole, and introducing interesting features such as the ergosphere). Using global geometry, later studies have revealed more general properties of black holes. In the long run, they are rather simple objects characterized by eleven parameters specifying energy, linear momentum, angular momentum, location at a specified time and electric charge. This is stated by the black hole uniqueness theorems: "black holes have no hair", that is, no distinguishing marks like the hairstyles of humans. Irrespective of the complexity of a gravitating object collapsing to form a black hole, the object that results (having emitted gravitational waves) is very simple. Even more remarkably, there is a general set of laws known as black hole mechanics, which is analogous to the laws of thermodynamics. For instance, by the second law of black hole mechanics, the area of the event horizon of a general black hole will never decrease with time, analogous to the entropy of a thermodynamic system. This limits the energy that can be extracted by classical means from a rotating black hole (e.g. by the Penrose process). There is strong evidence that the laws of black hole mechanics are, in fact, a subset of the laws of thermodynamics, and that the black hole area is proportional to its entropy. This leads to a modification of the original laws of black hole mechanics: for instance, as the second law of black hole mechanics becomes part of the second law of thermodynamics, it is possible for black hole area to decrease—as long as other processes ensure that, overall, entropy increases. As thermodynamical objects with non-zero temperature, black holes should emit thermal radiation. 
Semi-classical calculations indicate that indeed they do, with the surface gravity playing the role of temperature in Planck's law. This radiation is known as Hawking radiation (cf. the quantum theory section, below). There are other types of horizons. In an expanding universe, an observer may find that some regions of the past cannot be observed ("particle horizon"), and some regions of the future cannot be influenced (event horizon). Even in flat Minkowski space, when described by an accelerated observer (Rindler space), there will be horizons associated with a semi-classical radiation known as Unruh radiation. Another general feature of general relativity is the appearance of spacetime boundaries known as singularities. Spacetime can be explored by following up on timelike and lightlike geodesics—all possible ways that light and particles in free fall can travel. But some solutions of Einstein's equations have "ragged edges"—regions known as spacetime singularities, where the paths of light and falling particles come to an abrupt end, and geometry becomes ill-defined. In the more interesting cases, these are "curvature singularities", where geometrical quantities characterizing spacetime curvature, such as the Ricci scalar, take on infinite values. Well-known examples of spacetimes with future singularities—where worldlines end—are the Schwarzschild solution, which describes a singularity inside an eternal static black hole, or the Kerr solution with its ring-shaped singularity inside an eternal rotating black hole. The Friedmann–Lemaître–Robertson–Walker solutions and other spacetimes describing universes have past singularities on which worldlines begin, namely Big Bang singularities, and some have future singularities (Big Crunch) as well. Given that these examples are all highly symmetric—and thus simplified—it is tempting to conclude that the occurrence of singularities is an artifact of idealization. The famous singularity theorems, proved using the methods of global geometry, say otherwise: singularities are a generic feature of general relativity, and unavoidable once the collapse of an object with realistic matter properties has proceeded beyond a certain stage and also at the beginning of a wide class of expanding universes. However, the theorems say little about the properties of singularities, and much of current research is devoted to characterizing these entities' generic structure (hypothesized e.g. by the BKL conjecture). The cosmic censorship hypothesis states that all realistic future singularities (no perfect symmetries, matter with realistic properties) are safely hidden away behind a horizon, and thus invisible to all distant observers. While no formal proof yet exists, numerical simulations offer supporting evidence of its validity. Each solution of Einstein's equation encompasses the whole history of a universe — it is not just some snapshot of how things are, but a whole, possibly matter-filled, spacetime. It describes the state of matter and geometry everywhere and at every moment in that particular universe. Due to its general covariance, Einstein's theory is not sufficient by itself to determine the time evolution of the metric tensor. It must be combined with a coordinate condition, which is analogous to gauge fixing in other field theories. To understand Einstein's equations as partial differential equations, it is helpful to formulate them in a way that describes the evolution of the universe over time. 
This is done in "3+1" formulations, where spacetime is split into three space dimensions and one time dimension. The best-known example is the ADM formalism. These decompositions show that the spacetime evolution equations of general relativity are well-behaved: solutions always exist, and are uniquely defined, once suitable initial conditions have been specified. Such formulations of Einstein's field equations are the basis of numerical relativity. The notion of evolution equations is intimately tied in with another aspect of general relativistic physics. In Einstein's theory, it turns out to be impossible to find a general definition for a seemingly simple property such as a system's total mass (or energy). The main reason is that the gravitational field—like any physical field—must be ascribed a certain energy, but that it proves to be fundamentally impossible to localize that energy. Nevertheless, there are possibilities to define a system's total mass, either using a hypothetical "infinitely distant observer" (ADM mass) or suitable symmetries (Komar mass). If one excludes from the system's total mass the energy being carried away to infinity by gravitational waves, the result is the Bondi mass at null infinity. Just as in classical physics, it can be shown that these masses are positive. Corresponding global definitions exist for momentum and angular momentum. There have also been a number of attempts to define "quasi-local" quantities, such as the mass of an isolated system formulated using only quantities defined within a finite region of space containing that system. The hope is to obtain a quantity useful for general statements about isolated systems, such as a more precise formulation of the hoop conjecture. If general relativity were considered to be one of the two pillars of modern physics, then quantum theory, the basis of understanding matter from elementary particles to solid state physics, would be the other. However, how to reconcile quantum theory with general relativity is still an open question. Ordinary quantum field theories, which form the basis of modern elementary particle physics, are defined in flat Minkowski space, which is an excellent approximation when it comes to describing the behavior of microscopic particles in weak gravitational fields like those found on Earth. In order to describe situations in which gravity is strong enough to influence (quantum) matter, yet not strong enough to require quantization itself, physicists have formulated quantum field theories in curved spacetime. These theories rely on general relativity to describe a curved background spacetime, and define a generalized quantum field theory to describe the behavior of quantum matter within that spacetime. Using this formalism, it can be shown that black holes emit a blackbody spectrum of particles known as Hawking radiation leading to the possibility that they evaporate over time. As briefly mentioned above, this radiation plays an important role for the thermodynamics of black holes. The demand for consistency between a quantum description of matter and a geometric description of spacetime, as well as the appearance of singularities (where curvature length scales become microscopic), indicate the need for a full theory of quantum gravity: for an adequate description of the interior of black holes, and of the very early universe, a theory is required in which gravity and the associated geometry of spacetime are described in the language of quantum physics. 
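The faintness of this Hawking radiation for astrophysical black holes can be estimated from the temperature T_H = ħc³/(8πGMk_B) associated with a Schwarzschild black hole; the sketch below (plain Python, rounded constants, nominal values only) evaluates it for one solar mass.

```python
import math

hbar = 1.055e-34   # J s
c = 2.998e8        # m/s
G = 6.674e-11      # m^3 kg^-1 s^-2
k_B = 1.381e-23    # J/K
M_sun = 1.989e30   # kg

def hawking_temperature(mass_kg):
    """T_H = hbar c^3 / (8 pi G M k_B), the temperature of a Schwarzschild black hole."""
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

print(f"Solar-mass black hole: T_H ~ {hawking_temperature(M_sun):.1e} K")  # ~6e-8 K
# Far colder than the cosmic microwave background (~2.7 K), so such black holes
# currently absorb more radiation than they emit.
```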
Despite major efforts, no complete and consistent theory of quantum gravity is currently known, even though a number of promising candidates exist. Attempts to generalize ordinary quantum field theories, used in elementary particle physics to describe fundamental interactions, so as to include gravity have led to serious problems. Some have argued that at low energies, this approach proves successful, in that it results in an acceptable effective (quantum) field theory of gravity. At very high energies, however, the perturbative results are badly divergent and lead to models devoid of predictive power ("perturbative non-renormalizability"). One attempt to overcome these limitations is string theory, a quantum theory not of point particles, but of minute one-dimensional extended objects. The theory promises to be a unified description of all particles and interactions, including gravity; the price to pay is unusual features such as six extra dimensions of space in addition to the usual three. In what is called the second superstring revolution, it was conjectured that both string theory and a unification of general relativity and supersymmetry known as supergravity form part of a hypothesized eleven-dimensional model known as M-theory, which would constitute a uniquely defined and consistent theory of quantum gravity. Another approach starts with the canonical quantization procedures of quantum theory. Using the initial-value-formulation of general relativity (cf. evolution equations above), the result is the Wheeler–deWitt equation (an analogue of the Schrödinger equation) which, regrettably, turns out to be ill-defined without a proper ultraviolet (lattice) cutoff. However, with the introduction of what are now known as Ashtekar variables, this leads to a promising model known as loop quantum gravity. Space is represented by a web-like structure called a spin network, evolving over time in discrete steps. Depending on which features of general relativity and quantum theory are accepted unchanged, and on what level changes are introduced, there are numerous other attempts to arrive at a viable theory of quantum gravity, some examples being the lattice theory of gravity based on the Feynman Path Integral approach and Regge Calculus, dynamical triangulations, causal sets, twistor models or the path integral based models of quantum cosmology. All candidate theories still have major formal and conceptual problems to overcome. They also face the common problem that, as yet, there is no way to put quantum gravity predictions to experimental tests (and thus to decide between the candidates where their predictions vary), although there is hope for this to change as future data from cosmological observations and particle physics experiments becomes available. General relativity has emerged as a highly successful model of gravitation and cosmology, which has so far passed many unambiguous observational and experimental tests. However, there are strong indications the theory is incomplete. The problem of quantum gravity and the question of the reality of spacetime singularities remain open. Observational data that is taken as evidence for dark energy and dark matter could indicate the need for new physics. Even taken as is, general relativity is rich with possibilities for further exploration. 
Mathematical relativists seek to understand the nature of singularities and the fundamental properties of Einstein's equations, while numerical relativists run increasingly powerful computer simulations (such as those describing merging black holes). In February 2016, it was announced that gravitational waves had been directly detected by the Advanced LIGO team on September 14, 2015. A century after its introduction, general relativity remains a highly active area of research. Georg Cantor Georg Ferdinand Ludwig Philipp Cantor (March 3, 1845 – January 6, 1918) was a German mathematician. He created set theory, which has become a fundamental theory in mathematics. Cantor established the importance of one-to-one correspondence between the members of two sets, defined infinite and well-ordered sets, and proved that the real numbers are more numerous than the natural numbers. In fact, Cantor's method of proof of this theorem implies the existence of an "infinity of infinities". He defined the cardinal and ordinal numbers and their arithmetic. Cantor's work is of great philosophical interest, a fact of which he was well aware. Cantor's theory of transfinite numbers was originally regarded as so counter-intuitive – even shocking – that it encountered resistance from mathematical contemporaries such as Leopold Kronecker and Henri Poincaré and later from Hermann Weyl and L. E. J. Brouwer, while Ludwig Wittgenstein raised philosophical objections. Cantor, a devout Lutheran, believed the theory had been communicated to him by God. Some Christian theologians (particularly neo-Scholastics) saw Cantor's work as a challenge to the uniqueness of the absolute infinity in the nature of God – on one occasion equating the theory of transfinite numbers with pantheism – a proposition that Cantor vigorously rejected. The objections to Cantor's work were occasionally fierce: Henri Poincaré referred to his ideas as a "grave disease" infecting the discipline of mathematics, and Leopold Kronecker's public opposition and personal attacks included describing Cantor as a "scientific charlatan", a "renegade" and a "corrupter of youth." Kronecker objected to Cantor's proofs that the algebraic numbers are countable, and that the transcendental numbers are uncountable, results now included in a standard mathematics curriculum. Writing decades after Cantor's death, Wittgenstein lamented that mathematics is "ridden through and through with the pernicious idioms of set theory", which he dismissed as "utter nonsense" that is "laughable" and "wrong". Cantor's recurring bouts of depression from 1884 to the end of his life have been blamed on the hostile attitude of many of his contemporaries, though some have explained these episodes as probable manifestations of a bipolar disorder. The harsh criticism has been matched by later accolades. In 1904, the Royal Society awarded Cantor its Sylvester Medal, the highest honor it can confer for work in mathematics. David Hilbert defended Cantor's work from its critics by declaring, "No one shall expel us from the paradise that Cantor has created." Georg Cantor was born in 1845 in the western merchant colony of Saint Petersburg, Russia, and brought up in the city until he was eleven. Georg, the oldest of six children, was regarded as an outstanding violinist. His grandfather Franz Böhm (1788–1846) (the violinist Joseph Böhm's brother) was a well-known musician and soloist in a Russian imperial orchestra. 
Cantor's father had been a member of the Saint Petersburg stock exchange; when he became ill, the family moved to Germany in 1856, first to Wiesbaden, then to Frankfurt, seeking milder winters than those of Saint Petersburg. In 1860, Cantor graduated with distinction from the Realschule in Darmstadt; his exceptional skills in mathematics, trigonometry in particular, were noted. In 1862, Cantor entered the Swiss Federal Polytechnic. After receiving a substantial inheritance upon his father's death in June 1863, Cantor shifted his studies to the University of Berlin, attending lectures by Leopold Kronecker, Karl Weierstrass and Ernst Kummer. He spent the summer of 1866 at the University of Göttingen, then and later a center for mathematical research. Cantor submitted his dissertation on number theory at the University of Berlin in 1867 and received his doctorate that year. After teaching briefly in a Berlin girls' school, Cantor took up a position at the University of Halle, where he spent his entire career. He was awarded the requisite habilitation for his thesis, also on number theory, which he presented in 1869 upon his appointment at Halle University. In 1874, Cantor married Vally Guttmann. They had six children, the last (Rudolph) born in 1886. Cantor was able to support a family despite modest academic pay, thanks to his inheritance from his father. During his honeymoon in the Harz mountains, Cantor spent much time in mathematical discussions with Richard Dedekind, whom he had met two years earlier while on holiday in Switzerland. Cantor was promoted to Extraordinary Professor in 1872 and made full Professor in 1879. To attain the latter rank at the age of 34 was a notable accomplishment, but Cantor desired a chair at a more prestigious university, in particular at Berlin, at that time the leading German university. However, his work encountered too much opposition for that to be possible. Kronecker, who headed mathematics at Berlin until his death in 1891, became increasingly uncomfortable with the prospect of having Cantor as a colleague, perceiving him as a "corrupter of youth" for teaching his ideas to a younger generation of mathematicians. Worse yet, Kronecker, a well-established figure within the mathematical community and Cantor's former professor, had disagreed fundamentally with the thrust of Cantor's work ever since he intentionally delayed the publication of Cantor's first major paper in 1874. Kronecker, now seen as one of the founders of the constructive viewpoint in mathematics, disliked much of Cantor's set theory because it asserted the existence of sets satisfying certain properties, without giving specific examples of sets whose members did indeed satisfy those properties. Whenever Cantor applied for a post in Berlin, he was declined, and the process usually involved Kronecker, so Cantor came to believe that Kronecker's stance would make it impossible for him ever to leave Halle. In 1881, Cantor's Halle colleague Eduard Heine died, creating a vacant chair. Halle accepted Cantor's suggestion that it be offered to Dedekind, Heinrich M. Weber and Franz Mertens, in that order, but each declined the chair after being offered it. Friedrich Wangerin was eventually appointed, but he was never close to Cantor. In 1882, the mathematical correspondence between Cantor and Richard Dedekind came to an end, apparently as a result of Dedekind's declining the chair at Halle. 
Cantor also began another important correspondence, with Gösta Mittag-Leffler in Sweden, and soon began to publish in Mittag-Leffler's journal "Acta Mathematica". But in 1885, Mittag-Leffler was concerned about the philosophical nature and new terminology in a paper Cantor had submitted to "Acta". He asked Cantor to withdraw the paper from "Acta" while it was in proof, writing that it was "... about one hundred years too soon." Cantor complied, but then curtailed his relationship and correspondence with Mittag-Leffler, writing to a third party: Cantor suffered his first known bout of depression in May 1884. Criticism of his work weighed on his mind: every one of the fifty-two letters he wrote to Mittag-Leffler in 1884 mentioned Kronecker. A passage from one of these letters is revealing of the damage to Cantor's self-confidence: This crisis led him to apply to lecture on philosophy rather than mathematics. He also began an intense study of Elizabethan literature, thinking there might be evidence that Francis Bacon wrote the plays attributed to Shakespeare (see Shakespearean authorship question); this ultimately resulted in two pamphlets, published in 1896 and 1897. Cantor recovered soon thereafter, and subsequently made further important contributions, including his diagonal argument and theorem. However, he never again attained the high level of his remarkable papers of 1874–84, even after Kronecker's death on December 29, 1891. He eventually sought, and achieved, a reconciliation with Kronecker. Nevertheless, the philosophical disagreements and difficulties dividing them persisted. In 1889, Cantor was instrumental in founding the German Mathematical Society and chaired its first meeting in Halle in 1891, where he first introduced his diagonal argument; his reputation was strong enough, despite Kronecker's opposition to his work, to ensure he was elected as the first president of this society. Setting aside the animosity Kronecker had displayed towards him, Cantor invited him to address the meeting, but Kronecker was unable to do so because his wife was dying from injuries sustained in a mountain climbing accident at the time. Georg Cantor was also instrumental in the establishment of the first International Congress of Mathematicians, which was held in Zürich, Switzerland, in 1897. After Cantor's 1884 hospitalization, there is no record that he was in any sanatorium again until 1899. Soon after that second hospitalization, Cantor's youngest son Rudolph died suddenly on December 16 (Cantor was delivering a lecture on his views on Baconian theory and William Shakespeare), and this tragedy drained Cantor of much of his passion for mathematics. Cantor was again hospitalized in 1903. One year later, he was outraged and agitated by a paper presented by Julius König at the Third International Congress of Mathematicians. The paper attempted to prove that the basic tenets of transfinite set theory were false. Since the paper had been read in front of his daughters and colleagues, Cantor perceived himself as having been publicly humiliated. Although Ernst Zermelo demonstrated less than a day later that König's proof had failed, Cantor remained shaken, momentarily even questioning God. Cantor suffered from chronic depression for the rest of his life, for which he was excused from teaching on several occasions and repeatedly confined in various sanatoria. The events of 1904 preceded a series of hospitalizations at intervals of two or three years. 
He did not abandon mathematics completely, however, lecturing on the paradoxes of set theory (Burali-Forti paradox, Cantor's paradox, and Russell's paradox) to a meeting of the "Deutsche Mathematiker–Vereinigung" in 1903, and attending the International Congress of Mathematicians at Heidelberg in 1904. In 1911, Cantor was one of the distinguished foreign scholars invited to attend the 500th anniversary of the founding of the University of St. Andrews in Scotland. Cantor attended, hoping to meet Bertrand Russell, whose newly published "Principia Mathematica" repeatedly cited Cantor's work, but this did not come about. The following year, St. Andrews awarded Cantor an honorary doctorate, but illness precluded his receiving the degree in person. Cantor retired in 1913, living in poverty and suffering from malnourishment during World War I. The public celebration of his 70th birthday was canceled because of the war. In June 1917, he entered a sanatorium for the last time and continually wrote to his wife asking to be allowed to go home. Georg Cantor had a fatal heart attack on January 6, 1918, in the sanatorium where he had spent the last year of his life. Cantor's work between 1874 and 1884 is the origin of set theory. Prior to this work, the concept of a set was a rather elementary one that had been used implicitly since the beginning of mathematics, dating back to the ideas of Aristotle. No one had realized that set theory had any nontrivial content. Before Cantor, there were only finite sets (which are easy to understand) and "the infinite" (which was considered a topic for philosophical, rather than mathematical, discussion). By proving that there are (infinitely) many possible sizes for infinite sets, Cantor established that set theory was not trivial, and it needed to be studied. Set theory has come to play the role of a foundational theory in modern mathematics, in the sense that it interprets propositions about mathematical objects (for example, numbers and functions) from all the traditional areas of mathematics (such as algebra, analysis and topology) in a single theory, and provides a standard set of axioms to prove or disprove them. The basic concepts of set theory are now used throughout mathematics. In one of his earliest papers, Cantor proved that the set of real numbers is "more numerous" than the set of natural numbers; this showed, for the first time, that there exist infinite sets of different sizes. He was also the first to appreciate the importance of one-to-one correspondences (hereinafter denoted "1-to-1 correspondence") in set theory. He used this concept to define finite and infinite sets, subdividing the latter into denumerable (or countably infinite) sets and nondenumerable sets (uncountably infinite sets). Cantor developed important concepts in topology and their relation to cardinality. For example, he showed that the Cantor set is nowhere dense, but has the same cardinality as the set of all real numbers, whereas the rationals are everywhere dense, but countable. He also showed that all countable dense linear orders without end points are order-isomorphic to the rational numbers. Cantor introduced fundamental constructions in set theory, such as the power set of a set "A", which is the set of all possible subsets of "A". He later proved that the size of the power set of "A" is strictly larger than the size of "A", even when "A" is an infinite set; this result soon became known as Cantor's theorem. 
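A finite toy version of the power set, and of the "diagonal" set used in the proof of Cantor's theorem, can be written out explicitly; the sketch below (Python, illustrative only, since the theorem's real content concerns infinite sets) builds the power set of a three-element set and shows that a sample map f from the set to its power set misses the set D = {x : x ∉ f(x)}.

```python
from itertools import chain, combinations

def power_set(a):
    """All subsets of the set a, returned as frozensets."""
    items = list(a)
    return [frozenset(c) for c in chain.from_iterable(combinations(items, r)
                                                      for r in range(len(items) + 1))]

A = {1, 2, 3}
P = power_set(A)
print(f"|A| = {len(A)}, |P(A)| = {len(P)}")        # 3 versus 2**3 = 8

# Any attempted map f: A -> P(A) misses the diagonal set D = {x in A : x not in f(x)}:
# if D were f(y) for some y, then y in D would hold exactly when y not in D.
f = {1: frozenset({1, 2}), 2: frozenset(), 3: frozenset({1, 3})}
D = frozenset(x for x in A if x not in f[x])
print(f"D = {set(D)}, D in image of f? {D in f.values()}")   # D = {2}, not in the image
```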
Cantor developed an entire theory and arithmetic of infinite sets, called cardinals and ordinals, which extended the arithmetic of the natural numbers. His notation for the cardinal numbers was the Hebrew letter ℵ (aleph) with a natural number subscript; for the ordinals he employed the Greek letter ω (omega). This notation is still in use today. The "Continuum hypothesis", introduced by Cantor, was presented by David Hilbert as the first of his twenty-three open problems in his address at the 1900 International Congress of Mathematicians in Paris. Cantor's work also attracted favorable notice beyond Hilbert's celebrated encomium. The US philosopher Charles Sanders Peirce praised Cantor's set theory, and, following public lectures delivered by Cantor at the first International Congress of Mathematicians, held in Zurich in 1897, Hurwitz and Hadamard also both expressed their admiration. At that Congress, Cantor renewed his friendship and correspondence with Dedekind. From 1905, Cantor corresponded with his British admirer and translator Philip Jourdain on the history of set theory and on Cantor's religious ideas. This was later published, as were several of his expository works. Cantor's first ten papers were on number theory, his thesis topic. At the suggestion of Eduard Heine, the Professor at Halle, Cantor turned to analysis. Heine proposed that Cantor solve an open problem that had eluded Peter Gustav Lejeune Dirichlet, Rudolf Lipschitz, Bernhard Riemann, and Heine himself: the uniqueness of the representation of a function by trigonometric series. Cantor solved this difficult problem in 1869. It was while working on this problem that he discovered transfinite ordinals, which occurred as indices "n" in the "n"th derived set S_n of a set S of zeros of a trigonometric series. Given a trigonometric series f(x) with S as its set of zeros, Cantor had discovered a procedure that produced another trigonometric series that had S_1 as its set of zeros, where S_1 is the set of limit points of S. If S_(k+1) is the set of limit points of S_k, then he could construct a trigonometric series whose zeros are S_(k+1). Because the sets S_k were closed, they contained their limit points, and the intersection of the infinite decreasing sequence of sets S, S_1, S_2, S_3, ... formed a limit set, which we would now call S_ω, and then he noticed that S_ω would also have to have a set of limit points S_(ω+1), and so on. He had examples that went on forever, and so here was a naturally occurring infinite sequence of infinite numbers ω, ω + 1, ω + 2, ... Between 1870 and 1872, Cantor published more papers on trigonometric series, and also a paper defining irrational numbers as convergent sequences of rational numbers. Dedekind, whom Cantor befriended in 1872, cited this paper later that year, in the paper where he first set out his celebrated definition of real numbers by Dedekind cuts. While extending the notion of number by means of his revolutionary concept of infinite cardinality, Cantor was paradoxically opposed to theories of infinitesimals of his contemporaries Otto Stolz and Paul du Bois-Reymond, describing them as both "an abomination" and "a cholera bacillus of mathematics". Cantor also published an erroneous "proof" of the inconsistency of infinitesimals. 
The beginning of set theory as a branch of mathematics is often marked by the publication of Cantor's 1874 paper, "Ueber eine Eigenschaft des Inbegriffes aller reellen algebraischen Zahlen" ("On a Property of the Collection of All Real Algebraic Numbers"). This paper was the first to provide a rigorous proof that there was more than one kind of infinity. Previously, all infinite collections had been implicitly assumed to be equinumerous (that is, of "the same size" or having the same number of elements). Cantor proved that the collection of real numbers and the collection of positive integers are not equinumerous. In other words, the real numbers are not countable. His proof differs from the diagonal argument that he gave in 1891. Cantor's article also contains a new method of constructing transcendental numbers. Transcendental numbers were first constructed by Joseph Liouville in 1844. Cantor established these results using two constructions. His first construction shows how to write the real algebraic numbers as a sequence a_1, a_2, a_3, ... In other words, the real algebraic numbers are countable. Cantor starts his second construction with any sequence of real numbers. Using this sequence, he constructs nested intervals whose intersection contains a real number not in the sequence. Since every sequence of real numbers can be used to construct a real not in the sequence, the real numbers cannot be written as a sequence – that is, the real numbers are not countable. By applying his construction to the sequence of real algebraic numbers, Cantor produces a transcendental number. Cantor points out that his constructions prove more – namely, they provide a new proof of Liouville's theorem: Every interval contains infinitely many transcendental numbers. Cantor's next article contains a construction that proves the set of transcendental numbers has the same "power" (see below) as the set of real numbers. Between 1879 and 1884, Cantor published a series of six articles in "Mathematische Annalen" that together formed an introduction to his set theory. At the same time, there was growing opposition to Cantor's ideas, led by Kronecker, who admitted mathematical concepts only if they could be constructed in a finite number of steps from the natural numbers, which he took as intuitively given. For Kronecker, Cantor's hierarchy of infinities was inadmissible, since accepting the concept of actual infinity would open the door to paradoxes which would challenge the validity of mathematics as a whole. Cantor also introduced the Cantor set during this period. The fifth paper in this series, "Grundlagen einer allgemeinen Mannigfaltigkeitslehre" ("Foundations of a General Theory of Aggregates"), published in 1883, was the most important of the six and was also published as a separate monograph. It contained Cantor's reply to his critics and showed how the transfinite numbers were a systematic extension of the natural numbers. It begins by defining well-ordered sets. Ordinal numbers are then introduced as the order types of well-ordered sets. Cantor then defines the addition and multiplication of the cardinal and ordinal numbers. In 1885, Cantor extended his theory of order types so that the ordinal numbers simply became a special case of order types. In 1891, he published a paper containing his elegant "diagonal argument" for the existence of an uncountable set. He applied the same idea to prove Cantor's theorem: the cardinality of the power set of a set "A" is strictly larger than the cardinality of "A". 
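The nested-interval construction of the 1874 paper described above can be imitated on a finite list; the sketch below (Python, necessarily using a finite "enumeration" where Cantor's argument concerns an infinite sequence, so it is only a toy rendering) narrows an interval using successive pairs of listed numbers and returns a real number missing from the list.

```python
def point_not_in(seq, lo=0.0, hi=1.0):
    """Sketch of Cantor's 1874 construction: repeatedly replace (lo, hi) by the
    interval spanned by the first two sequence members strictly inside it.
    When fewer than two such members remain, return a point of the surviving
    interval that differs from every member of the (finite, illustrative) list."""
    while True:
        inside = [v for v in seq if lo < v < hi]
        if len(inside) < 2:
            break
        lo, hi = sorted(inside[:2])   # first two members inside, in the order of the sequence
    point = (lo + hi) / 2
    while point in seq:               # only needed because the demo sequence is finite
        point = (lo + point) / 2
    return point

seq = [0.5, 1/3, 2/3, 0.25, 0.75, 0.2, 0.4, 0.6, 0.8]   # a finite "enumeration" of reals in (0, 1)
p = point_not_in(seq)
print(p, p in seq)    # a number in (0, 1) that is missing from the list -> prints "... False"
```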
Cantor's theorem established the richness of the hierarchy of infinite sets, and of the cardinal and ordinal arithmetic that Cantor had defined. His argument is fundamental in the solution of the halting problem and the proof of Gödel's first incompleteness theorem. Cantor wrote on the Goldbach conjecture in 1894. In 1895 and 1897, Cantor published a two-part paper in "Mathematische Annalen" under Felix Klein's editorship; these were his last significant papers on set theory. The first paper begins by defining set, subset, etc., in ways that would be largely acceptable now. The cardinal and ordinal arithmetic are reviewed. Cantor wanted the second paper to include a proof of the continuum hypothesis, but had to settle for expositing his theory of well-ordered sets and ordinal numbers. Cantor attempts to prove that if "A" and "B" are sets with "A" equivalent to a subset of "B" and "B" equivalent to a subset of "A", then "A" and "B" are equivalent. Ernst Schröder had stated this theorem a bit earlier, but his proof, as well as Cantor's, was flawed. Felix Bernstein supplied a correct proof in his 1898 PhD thesis; hence the name Cantor–Bernstein–Schröder theorem. Cantor's 1874 Crelle paper was the first to invoke the notion of a 1-to-1 correspondence, though he did not use that phrase. He then began looking for a 1-to-1 correspondence between the points of the unit square and the points of a unit line segment. In an 1877 letter to Richard Dedekind, Cantor proved a far stronger result: for any positive integer "n", there exists a 1-to-1 correspondence between the points on the unit line segment and all of the points in an "n"-dimensional space. About this discovery Cantor wrote to Dedekind: "Je le vois, mais je ne le crois pas!" ("I see it, but I don't believe it!") The result that he found so astonishing has implications for geometry and the notion of dimension. In 1878, Cantor submitted another paper to Crelle's Journal, in which he defined precisely the concept of a 1-to-1 correspondence and introduced the notion of "power" (a term he took from Jakob Steiner) or "equivalence" of sets: two sets are equivalent (have the same power) if there exists a 1-to-1 correspondence between them. Cantor defined countable sets (or denumerable sets) as sets which can be put into a 1-to-1 correspondence with the natural numbers, and proved that the rational numbers are denumerable. He also proved that "n"-dimensional Euclidean space R^"n" has the same power as the real numbers R, as does a countably infinite product of copies of R. While he made free use of countability as a concept, he did not write the word "countable" until 1883. Cantor also discussed his thinking about dimension, stressing that his mapping between the unit interval and the unit square was not a continuous one. This paper displeased Kronecker, and Cantor wanted to withdraw it; however, Dedekind persuaded him not to do so and Weierstrass supported its publication. Nevertheless, Cantor never again submitted anything to Crelle. Cantor was the first to formulate what later came to be known as the continuum hypothesis or CH: there exists no set whose power is greater than that of the naturals and less than that of the reals (or equivalently, the cardinality of the reals is "exactly" aleph-one, rather than just "at least" aleph-one). Cantor believed the continuum hypothesis to be true and tried for many years to prove it, in vain. His inability to prove the continuum hypothesis caused him considerable anxiety. 
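The denumerability of the rational numbers mentioned above can be made concrete by actually listing them; the short sketch below (Python, walking the diagonals of the grid of numerator–denominator pairs and skipping duplicates, one of several standard enumerations rather than Cantor's own) produces the start of a 1-to-1 correspondence between the natural numbers and the positive rationals.

```python
from fractions import Fraction

def positive_rationals(count):
    """First `count` positive rationals, enumerated diagonal by diagonal of the (p, q) grid."""
    seen, out, s = set(), [], 2          # s = p + q indexes the diagonals
    while len(out) < count:
        for p in range(1, s):
            q = s - p
            frac = Fraction(p, q)
            if frac not in seen:         # skip duplicates such as 2/2 = 1/1
                seen.add(frac)
                out.append(frac)
                if len(out) == count:
                    break
        s += 1
    return out

print([str(f) for f in positive_rationals(10)])
# ['1', '1/2', '2', '1/3', '3', '1/4', '2/3', '3/2', '4', '1/5'] -- every positive rational eventually appears
```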
The difficulty Cantor had in proving the continuum hypothesis has been underscored by later developments in the field of mathematics: a 1940 result by Gödel and a 1963 one by Paul Cohen together imply that the continuum hypothesis can neither be proved nor disproved using standard Zermelo–Fraenkel set theory plus the axiom of choice (the combination referred to as "ZFC"). In 1883, Cantor divided the infinite into the transfinite and the absolute. The transfinite is increasable in magnitude, while the absolute is unincreasable. For example, an ordinal α is transfinite because it can be increased to α + 1. On the other hand, the ordinals form an absolutely infinite sequence that cannot be increased in magnitude because there are no larger ordinals to add to it. In 1883, Cantor also introduced the well-ordering principle "every set can be well-ordered" and stated that it is a "law of thought." Cantor extended his work on the absolute infinite by using it in a proof. Around 1895, he began to regard his well-ordering principle as a theorem and attempted to prove it. In 1899, he sent Dedekind a proof of the equivalent aleph theorem: the cardinality of every infinite set is an aleph. First, he defined two types of multiplicities: consistent multiplicities (sets) and inconsistent multiplicities (absolutely infinite multiplicities). Next he assumed that the ordinals form a set, proved that this leads to a contradiction, and concluded that the ordinals form an inconsistent multiplicity. He used this inconsistent multiplicity to prove the aleph theorem. In 1932, Zermelo criticized the construction in Cantor's proof. Cantor avoided paradoxes by recognizing that there are two types of multiplicities. In his set theory, when it is assumed that the ordinals form a set, the resulting contradiction only implies that the ordinals form an inconsistent multiplicity. On the other hand, Bertrand Russell treated all collections as sets, which leads to paradoxes. In Russell's set theory, the ordinals form a set, so the resulting contradiction implies that the theory is inconsistent. From 1901 to 1903, Russell discovered three paradoxes implying that his set theory is inconsistent: the Burali-Forti paradox (which was just mentioned), Cantor's paradox, and Russell's paradox. Russell named paradoxes after Cesare Burali-Forti and Cantor even though neither of them believed that they had found paradoxes. In 1908, Zermelo published his axiom system for set theory. He had two motivations for developing the axiom system: eliminating the paradoxes and securing his proof of the well-ordering theorem. Zermelo had proved this theorem in 1904 using the axiom of choice, but his proof was criticized for a variety of reasons. His response to the criticism included his axiom system and a new proof of the well-ordering theorem. His axioms support this new proof, and they eliminate the paradoxes by restricting the formation of sets. In 1923, John von Neumann developed an axiom system that eliminates the paradoxes by using an approach similar to Cantor's—namely, by identifying collections that are not sets and treating them differently. Von Neumann stated that a class is too big to be a set if it can be put into one-to-one correspondence with the class of all sets. He defined a set as a class that is a member of some class and stated the axiom: A class is not a set if and only if there is a one-to-one correspondence between it and the class of all sets. 
This axiom implies that these big classes are not sets, which eliminates the paradoxes since they cannot be members of any class. Von Neumann also used his axiom to prove the well-ordering theorem: Like Cantor, he assumed that the ordinals form a set. The resulting contradiction implies that the class of all ordinals is not a set. Then his axiom provides a one-to-one correspondence between this class and the class of all sets. This correspondence well-orders the class of all sets, which implies the well-ordering theorem. In 1930, Zermelo defined models of set theory that satisfy von Neumann's axiom. The concept of the existence of an actual infinity was an important shared concern within the realms of mathematics, philosophy and religion. Preserving the orthodoxy of the relationship between God and mathematics, although not in the same form as held by his critics, was long a concern of Cantor's. He directly addressed this intersection between these disciplines in the introduction to his "Grundlagen einer allgemeinen Mannigfaltigkeitslehre," where he stressed the connection between his view of the infinite and the philosophical one. To Cantor, his mathematical views were intrinsically linked to their philosophical and theological implications – he identified the Absolute Infinite with God, and he considered his work on transfinite numbers to have been directly communicated to him by God, who had chosen Cantor to reveal them to the world. Debate among mathematicians grew out of opposing views in the philosophy of mathematics regarding the nature of actual infinity. Some held to the view that infinity was an abstraction which was not mathematically legitimate, and denied its existence. Mathematicians from three major schools of thought (constructivism and its two offshoots, intuitionism and finitism) opposed Cantor's theories in this matter. For constructivists such as Kronecker, this rejection of actual infinity stems from fundamental disagreement with the idea that nonconstructive proofs such as Cantor's diagonal argument are sufficient proof that something exists, holding instead that constructive proofs are required. Intuitionism also rejects the idea that actual infinity is an expression of any sort of reality, but arrives at that conclusion via a different route than constructivism. Firstly, Cantor's argument rests on logic to prove the existence of transfinite numbers as an actual mathematical entity, whereas intuitionists hold that mathematical entities cannot be reduced to logical propositions, originating instead in the intuitions of the mind. Secondly, the notion of infinity as an expression of reality is itself disallowed in intuitionism, since the human mind cannot intuitively construct an infinite set. Mathematicians such as Brouwer and especially Poincaré adopted an intuitionist stance against Cantor's work. Citing the paradoxes of set theory as an example of its fundamentally flawed nature, Poincaré held that "most of the ideas of Cantorian set theory should be banished from mathematics once and for all." Finally, Wittgenstein's attacks were finitist: he believed that Cantor's diagonal argument conflated the intension of a set of cardinal or real numbers with its extension, thus conflating the concept of rules for generating a set with an actual set. Some Christian theologians saw Cantor's work as a challenge to the uniqueness of the absolute infinity in the nature of God.
In particular, Neo-Thomist thinkers saw the existence of an actual infinity that consisted of something other than God as jeopardizing "God's exclusive claim to supreme infinity". Cantor strongly believed that this view was a misinterpretation of infinity, and was convinced that set theory could help correct this mistake. Cantor also believed that his theory of transfinite numbers ran counter to both materialism and determinism – and was shocked when he realized that he was the only faculty member at Halle who did "not" hold to deterministic philosophical beliefs. In 1888, Cantor published his correspondence with several philosophers on the philosophical implications of his set theory. In an extensive attempt to persuade other Christian thinkers and authorities to adopt his views, Cantor had corresponded with Christian philosophers such as Tilman Pesch and Joseph Hontheim, as well as theologians such as Cardinal Johannes Franzelin, who once replied by equating the theory of transfinite numbers with pantheism. Cantor even sent one letter directly to Pope Leo XIII himself, and addressed several pamphlets to him. Cantor's philosophy on the nature of numbers led him to affirm a belief in the freedom of mathematics to posit and prove concepts apart from the realm of physical phenomena, as expressions within an internal reality. The only restrictions on this metaphysical system are that all mathematical concepts must be devoid of internal contradiction, and that they follow from existing definitions, axioms, and theorems. This belief is summarized in his assertion that "the essence of mathematics is its freedom." These ideas parallel those of Edmund Husserl, whom Cantor had met in Halle. Meanwhile, Cantor himself was fiercely opposed to infinitesimals, describing them as both an "abomination" and "the cholera bacillus of mathematics". Cantor's 1883 paper reveals that he was well aware of the opposition his ideas were encountering; hence he devotes much space to justifying his earlier work, asserting that mathematical concepts may be freely introduced as long as they are free of contradiction and defined in terms of previously accepted concepts. He also cites Aristotle, Descartes, Berkeley, Leibniz, and Bolzano on infinity. Cantor's paternal grandparents were from Copenhagen and fled to Russia from the disruption of the Napoleonic Wars. There is very little direct information on his grandparents. Cantor was sometimes called Jewish in his lifetime, but has also variously been called Russian, German, and Danish. Jakob Cantor, Cantor's grandfather, gave his children Christian saints' names. Further, several of his grandmother's relatives were in the Czarist civil service, which would not welcome Jews, unless they converted to Christianity. Cantor's father, Georg Waldemar Cantor, was educated in the Lutheran mission in Saint Petersburg, and his correspondence with his son shows both of them as devout Lutherans. Very little is known for sure about Georg Waldemar's origin or education. Cantor's mother, Maria Anna Böhm, was an Austro-Hungarian born in Saint Petersburg and baptized Roman Catholic; she converted to Protestantism upon marriage. However, there is a letter from Cantor's brother Louis to their mother, stating: "Even if we were descended from Jews ten times over, and even though I may be, in principle, completely in favour of equal rights for Hebrews, in social life I prefer Christians...", which could be read to imply that she was of Jewish ancestry.
There were documented statements during the 1930s that called this Jewish ancestry into question. In "Men of Mathematics", Eric Temple Bell described Cantor as being "of pure Jewish descent on both sides," although both parents were baptized. In a 1971 article entitled "Towards a Biography of Georg Cantor," the British historian of mathematics Ivor Grattan-Guinness mentions (Annals of Science 27, pp. 345–391, 1971) that he was unable to find evidence of Jewish ancestry. (He also states that Cantor's wife, Vally Guttmann, was Jewish). In a letter written by Georg Cantor to Paul Tannery in 1896 (Paul Tannery, Memoires Scientifique 13 Correspondence, Gauthier-Villars, Paris, 1934, p. 306), Cantor states that his paternal grandparents were members of the Sephardic Jewish community of Copenhagen. Specifically, Cantor states in describing his father: "Er ist aber in Kopenhagen geboren, von israelitischen Eltern, die der dortigen portugisischen Judengemeinde..." ("He was born in Copenhagen of Jewish (lit: "Israelite") parents from the local Portuguese-Jewish community.") In addition, Cantor's maternal great-uncle, the Hungarian violinist Josef Böhm, has been described as Jewish, which may imply that Cantor's mother was at least partly descended from the Hungarian Jewish community. In a letter to Bertrand Russell, Cantor also described his own ancestry and self-perception. Until the 1970s, the chief academic publications on Cantor were two short monographs by Schönflies (1927) – largely the correspondence with Mittag-Leffler – and Fraenkel (1930). Both were at second and third hand; neither had much on his personal life. The gap was largely filled by Eric Temple Bell's "Men of Mathematics" (1937), which one of Cantor's modern biographers describes as "perhaps the most widely read modern book on the history of mathematics" and as "one of the worst". Bell presents Cantor's relationship with his father as Oedipal, Cantor's differences with Kronecker as a quarrel between two Jews, and Cantor's madness as Romantic despair over his failure to win acceptance for his mathematics, and fills the picture with stereotypes. Grattan-Guinness (1971) found that none of these claims were true, but they may be found in many books of the intervening period, owing to the absence of any other narrative. There are other legends, independent of Bell – including one that labels Cantor's father a foundling, shipped to Saint Petersburg by unknown parents. A critique of Bell's book is contained in Joseph Dauben's biography. Germanium Germanium is a chemical element with symbol "Ge" and atomic number 32. It is a lustrous, hard, grayish-white metalloid in the carbon group, chemically similar to its group neighbors tin and silicon. Pure germanium is a semiconductor with an appearance similar to elemental silicon. Like silicon, germanium naturally reacts and forms complexes with oxygen in nature. Because it seldom appears in high concentration, germanium was discovered comparatively late in the history of chemistry. Germanium ranks near fiftieth in relative abundance of the elements in the Earth's crust. In 1869, Dmitri Mendeleev predicted its existence and some of its properties from its position on his periodic table, and called the element ekasilicon. Nearly two decades later, in 1886, Clemens Winkler found the new element, along with silver and sulfur, in a rare mineral called argyrodite.
Although the new element somewhat resembled arsenic and antimony in appearance, the combining ratios in its compounds agreed with Mendeleev's predictions for a relative of silicon. Winkler named the element after his country, Germany. Today, germanium is mined primarily from sphalerite (the primary ore of zinc), though germanium is also recovered commercially from silver, lead, and copper ores. Elemental germanium is used as a semiconductor in transistors and various other electronic devices. Historically, the first decade of semiconductor electronics was based entirely on germanium. Today, the amount of germanium produced for semiconductor electronics is one fiftieth the amount of ultra-high purity silicon produced for the same purpose. Presently, the major end uses are fibre-optic systems, infrared optics, solar cell applications, and light-emitting diodes (LEDs). Germanium compounds are also used for polymerization catalysts and have most recently found use in the production of nanowires. This element forms a large number of organometallic compounds, such as tetraethylgermane, useful in organometallic chemistry. Germanium is not thought to be an essential element for any living organism. Some complex organic germanium compounds are being investigated as possible pharmaceuticals, though none have yet proven successful. Similar to silicon and aluminum, natural germanium compounds tend to be insoluble in water and thus have little oral toxicity. However, synthetic soluble germanium salts are nephrotoxic, and synthetic chemically reactive germanium compounds with halogens and hydrogen are irritants and toxins. In his report on "The Periodic Law of the Chemical Elements" in 1869, the Russian chemist Dmitri Ivanovich Mendeleev predicted the existence of several unknown chemical elements, including one that would fill a gap in the carbon family in his Periodic Table of the Elements, located between silicon and tin. Because of its position in his Periodic Table, Mendeleev called it "ekasilicon (Es)", and he estimated its atomic weight to be about 72.0. In mid-1885, at a mine near Freiberg, Saxony, a new mineral was discovered and named "argyrodite" because of its high silver content. The chemist Clemens Winkler analyzed this new mineral, which proved to be a combination of silver, sulfur, and a new element. Winkler was able to isolate the new element in 1886 and found it similar to antimony. He initially considered the new element to be eka-antimony, but was soon convinced that it was instead eka-silicon. Before Winkler published his results on the new element, he decided that he would name his element "neptunium", since the discovery of the planet Neptune in 1846 had similarly been preceded by mathematical predictions of its existence. However, the name "neptunium" had already been given to another proposed chemical element (though not the element that today bears the name neptunium, which was discovered in 1940). So instead, Winkler named the new element "germanium" (from the Latin word, "Germania", for Germany) in honor of his homeland. Argyrodite proved empirically to be Ag8GeS6. Because this new element showed some similarities with the elements arsenic and antimony, its proper place in the periodic table was under consideration, but its similarities with Dmitri Mendeleev's predicted element "ekasilicon" confirmed that place on the periodic table. With further material from 500 kg of ore from the mines in Saxony, Winkler confirmed the chemical properties of the new element in 1887.
He also determined an atomic weight of 72.32 by analyzing pure germanium tetrachloride (GeCl4), while Lecoq de Boisbaudran deduced 72.3 by a comparison of the lines in the spark spectrum of the element. Winkler was able to prepare several new compounds of germanium, including fluorides, chlorides, sulfides, dioxide, and tetraethylgermane (Ge(C2H5)4), the first organogermane. The physical data from those compounds — which corresponded well with Mendeleev's predictions — made the discovery an important confirmation of Mendeleev's idea of element periodicity. Until the late 1930s, germanium was thought to be a poorly conducting metal. Germanium did not become economically significant until after 1945, when its properties as an electronic semiconductor were recognized. During World War II, small amounts of germanium were used in some special electronic devices, mostly diodes. The first major use was the point-contact Schottky diodes for radar pulse detection during the War. The first silicon-germanium alloys were obtained in 1955. Before 1945, only a few hundred kilograms of germanium were produced in smelters each year, but by the end of the 1950s, the annual worldwide production had reached 40 metric tons. The development of the germanium transistor in 1948 opened the door to countless applications of solid state electronics. From 1950 through the early 1970s, this area provided an increasing market for germanium, but then high-purity silicon began replacing germanium in transistors, diodes, and rectifiers. For example, the company that became Fairchild Semiconductor was founded in 1957 with the express purpose of producing silicon transistors. Silicon has superior electrical properties, but it requires much greater purity, which could not be achieved commercially in the early years of semiconductor electronics. Meanwhile, the demand for germanium for fiber optic communication networks, infrared night vision systems, and polymerization catalysts increased dramatically. These end uses represented 85% of worldwide germanium consumption in 2000. The US government even designated germanium as a strategic and critical material, calling for a 146 ton (132 t) supply in the national defense stockpile in 1987. Germanium differs from silicon in that the supply is limited by the availability of exploitable sources, while the supply of silicon is limited only by production capacity, since silicon comes from ordinary sand and quartz. While silicon could be bought in 1998 for less than $10 per kg, the price of germanium was almost $800 per kg. Under standard conditions, germanium is a brittle, silvery-white, semi-metallic element. This form constitutes an allotrope known as "α-germanium", which has a metallic luster and a diamond cubic crystal structure, the same as diamond. At pressures above 120 kbar, it becomes the allotrope "β-germanium" with the same structure as β-tin. Like silicon, gallium, bismuth, antimony, and water, germanium is one of the few substances that expands as it solidifies (i.e. freezes) from the molten state. Germanium is a semiconductor. Zone refining techniques have led to the production of crystalline germanium for semiconductors that has an impurity of only one part in 10^10, making it one of the purest materials ever obtained. The first metallic material discovered (in 2005) to become a superconductor in the presence of an extremely strong electromagnetic field was an alloy of germanium, uranium, and rhodium.
Pure germanium suffers from the forming of whiskers by spontaneous screw dislocations. If a whisker grows long enough to touch another part of the assembly or a metallic packaging, it can effectively shunt out a p-n junction. This is one of the primary reasons for the failure of old germanium diodes and transistors. Elemental germanium oxidizes slowly to GeO2 at 250 °C. Germanium is insoluble in dilute acids and alkalis but dissolves slowly in hot concentrated sulfuric and nitric acids and reacts violently with molten alkalis to produce germanates ([GeO3]2−). Germanium occurs mostly in the oxidation state +4, although many +2 compounds are known. Other oxidation states are rare: +3 is found in compounds such as Ge2Cl6, and +3 and +1 are found on the surface of oxides, or negative oxidation states in germanes, such as −4 in GeH4. Germanium cluster anions (Zintl ions) such as Ge4^2−, Ge9^4−, Ge9^2−, and [(Ge9)2]^6− have been prepared by extraction from alloys containing alkali metals and germanium in liquid ammonia in the presence of ethylenediamine or a cryptand. The oxidation states of the element in these ions are not integers—similar to the ozonides (O3−). Two oxides of germanium are known: germanium dioxide (GeO2, germania) and germanium monoxide (GeO). The dioxide, GeO2, can be obtained by roasting germanium disulfide (GeS2), and is a white powder that is only slightly soluble in water but reacts with alkalis to form germanates. The monoxide, germanous oxide, can be obtained by the high temperature reaction of GeO2 with Ge metal. The dioxide (and the related oxides and germanates) exhibits the unusual property of having a high refractive index for visible light, but transparency to infrared light. Bismuth germanate, Bi4Ge3O12 (BGO), is used as a scintillator. Binary compounds with other chalcogens are also known, such as the disulfide (GeS2), diselenide (GeSe2), and the monosulfide (GeS), selenide (GeSe), and telluride (GeTe). GeS2 forms as a white precipitate when hydrogen sulfide is passed through strongly acid solutions containing Ge(IV). The disulfide is appreciably soluble in water and in solutions of caustic alkalis or alkaline sulfides. Nevertheless, it is not soluble in acidic water, which allowed Winkler to discover the element. By heating the disulfide in a current of hydrogen, the monosulfide (GeS) is formed, which sublimes in thin plates of a dark color and metallic luster, and is soluble in solutions of the caustic alkalis. Upon melting with alkaline carbonates and sulfur, germanium compounds form salts known as thiogermanates. Four tetrahalides are known. Under normal conditions GeI4 is a solid, GeF4 a gas and the others volatile liquids. For example, germanium tetrachloride, GeCl4, is obtained as a colorless fuming liquid boiling at 83.1 °C by heating the metal with chlorine. All the tetrahalides are readily hydrolyzed to hydrated germanium dioxide. GeCl4 is used in the production of organogermanium compounds. All four dihalides are known and in contrast to the tetrahalides are polymeric solids. Additionally, Ge2Cl6 and some higher compounds of formula GenCl2n+2 are known. The unusual compound Ge6Cl16 has been prepared that contains the Ge5Cl12 unit with a neopentane structure. Germane (GeH4) is a compound similar in structure to methane. Polygermanes—compounds that are similar to alkanes—with formula GenH2n+2 containing up to five germanium atoms are known. The germanes are less volatile and less reactive than their corresponding silicon analogues. GeH4 reacts with alkali metals in liquid ammonia to form white crystalline MGeH3, which contain the GeH3− anion.
The germanium hydrohalides with one, two and three halogen atoms are colorless reactive liquids. The first organogermanium compound was synthesized by Winkler in 1887; the reaction of germanium tetrachloride with diethylzinc yielded tetraethylgermane (Ge(C2H5)4). Organogermanes of the type R4Ge (where R is an alkyl), such as tetramethylgermane (Ge(CH3)4) and tetraethylgermane, are accessed through the cheapest available germanium precursor, germanium tetrachloride, and alkyl nucleophiles. Organic germanium hydrides such as isobutylgermane ((CH3)2CHCH2GeH3) were found to be less hazardous and may be used as a liquid substitute for toxic germane gas in semiconductor applications. Many germanium reactive intermediates are known: germyl free radicals, germylenes (similar to carbenes), and germynes (similar to carbynes). The organogermanium compound 2-carboxyethylgermasesquioxane was first reported in the 1970s, and for a while was used as a dietary supplement and thought to possibly have anti-tumor qualities. Using a ligand called Eind (1,1,3,3,5,5,7,7-octaethyl-s-hydrindacen-4-yl), germanium is able to form a double bond with oxygen (germanone). Germanium occurs in five natural isotopes: 70Ge, 72Ge, 73Ge, 74Ge, and 76Ge. Of these, 76Ge is very slightly radioactive, decaying by double beta decay with a half-life on the order of 10^21 years. 74Ge is the most common isotope, having a natural abundance of approximately 36%; the least common isotope has a natural abundance of approximately 7%. When bombarded with alpha particles, one of the isotopes generates a stable product, releasing high-energy electrons in the process; because of this, it is used in combination with radon for nuclear batteries. At least 27 radioisotopes have also been synthesized, ranging in atomic mass from 58 to 89. The most stable of these is 68Ge, decaying by electron capture with a half-life of about 271 days. The least stable have half-lives of only fractions of a second. While most of germanium's radioisotopes decay by beta decay, a few decay by beta-delayed proton emission, and some of the heaviest isotopes also exhibit minor beta-delayed neutron emission decay paths. Germanium is created by stellar nucleosynthesis, mostly by the s-process in asymptotic giant branch stars. The s-process is a slow neutron capture of lighter elements inside pulsating red giant stars. Germanium has been detected in some of the most distant stars and in the atmosphere of Jupiter. Germanium's abundance in the Earth's crust is approximately 1.6 ppm. Only a few minerals like argyrodite, briartite, germanite, and renierite contain appreciable amounts of germanium, and none in mineable deposits. Some zinc-copper-lead ore bodies contain enough germanium to justify extraction from the final ore concentrate. An unusual natural enrichment process causes a high content of germanium in some coal seams, discovered by Victor Moritz Goldschmidt during a broad survey for germanium deposits. The highest concentration ever found was in Hartley coal ash, with as much as 1.6% germanium. The coal deposits near Xilinhaote, Inner Mongolia, contain an estimated 1600 tonnes of germanium. About 118 tonnes of germanium was produced in 2011 worldwide, mostly in China (80 t), Russia (5 t) and the United States (3 t). Germanium is recovered as a by-product from sphalerite zinc ores, where it is concentrated in amounts as great as 0.3%, especially from low-temperature sediment-hosted, massive Zn–Pb–Cu(–Ba) deposits and carbonate-hosted Zn–Pb deposits.
A recent study found that at least 10,000 t of extractable germanium is contained in known zinc reserves, particularly those hosted by Mississippi-Valley type deposits, while at least 112,000 t will be found in coal reserves. In 2007, 35% of the demand was met by recycled germanium. While it is produced mainly from sphalerite, it is also found in silver, lead, and copper ores. Another source of germanium is fly ash of power plants fueled from coal deposits that contain germanium. Russia and China used this as a source for germanium. Russia's deposits are located in the far east of Sakhalin Island, and northeast of Vladivostok. The deposits in China are located mainly in the lignite mines near Lincang, Yunnan; coal is also mined near Xilinhaote, Inner Mongolia. The ore concentrates are mostly sulfidic; they are converted to the oxides by heating under air in a process known as roasting: GeS2 + 3 O2 → GeO2 + 2 SO2. Some of the germanium is left in the dust produced, while the rest is converted to germanates, which are then leached (together with zinc) from the cinder by sulfuric acid. After neutralization, only the zinc stays in solution while germanium and other metals precipitate. After removing some of the zinc in the precipitate by the Waelz process, the remaining Waelz oxide is leached a second time. The dioxide is obtained as precipitate and converted with chlorine gas or hydrochloric acid to germanium tetrachloride, which has a low boiling point and can be isolated by distillation: GeO2 + 4 HCl → GeCl4 + 2 H2O. Germanium tetrachloride is either hydrolyzed to the oxide (GeO2) or purified by fractional distillation and then hydrolyzed. The highly pure GeO2 is now suitable for the production of germanium glass. It is reduced to the element by reacting it with hydrogen, producing germanium suitable for infrared optics and semiconductor production: GeO2 + 2 H2 → Ge + 2 H2O. The germanium for steel production and other industrial processes is normally reduced using carbon: GeO2 + C → Ge + CO2. The major end uses for germanium in 2007, worldwide, were estimated to be: 35% for fiber-optics, 30% infrared optics, 15% polymerization catalysts, and 15% electronics and solar electric applications. The remaining 5% went into such uses as phosphors, metallurgy, and chemotherapy. The notable properties of germania (GeO2) are its high index of refraction and its low optical dispersion. These make it especially useful for wide-angle camera lenses, microscopy, and the core part of optical fibers. It has replaced titania as the dopant for silica fiber, eliminating the subsequent heat treatment that made the fibers brittle. At the end of 2002, the fiber optics industry consumed 60% of the annual germanium use in the United States, but this is less than 10% of worldwide consumption. GeSbTe is a phase change material used for its optic properties, such as in rewritable DVDs. Because germanium is transparent in the infrared wavelengths, it is an important infrared optical material that can be readily cut and polished into lenses and windows. It is especially used as the front optic in thermal imaging cameras working in the 8 to 14 micron range for passive thermal imaging and for hot-spot detection in military, mobile night vision, and fire fighting applications. It is used in infrared spectroscopes and other optical equipment that require extremely sensitive infrared detectors. It has a very high refractive index (4.0) and must be coated with anti-reflection agents.
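The choice of coating index follows from the standard quarter-wave antireflection condition, a textbook thin-film result that is not spelled out here: for a substrate of index n_s in air, reflections cancel when the coating index is the geometric mean of the two media.

```latex
% Ideal single-layer antireflection coating between air (n_0 = 1) and a
% substrate of index n_s: choose the coating index as the geometric mean.
n_{\mathrm{coat}} = \sqrt{n_0\, n_s} = \sqrt{1 \times 4.0} = 2.0
```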
In particular, a very hard antireflection coating of diamond-like carbon (DLC), with a refractive index of 2.0, is a good match and produces a diamond-hard surface that can withstand much environmental abuse. Silicon-germanium alloys are rapidly becoming an important semiconductor material for high-speed integrated circuits. Circuits utilizing the properties of Si-SiGe junctions can be much faster than those using silicon alone. Silicon-germanium is beginning to replace gallium arsenide (GaAs) in wireless communications devices. The SiGe chips, with high-speed properties, can be made with the low-cost, well-established production techniques of the silicon chip industry. Solar panels are a major use of germanium. Germanium is the substrate of the wafers for high-efficiency multijunction photovoltaic cells for space applications. High-brightness LEDs, used for automobile headlights and to backlight LCD screens, are an important application. Because germanium and gallium arsenide have very similar lattice constants, germanium substrates can be used to make gallium arsenide solar cells. The Mars Exploration Rovers and several satellites use triple-junction gallium arsenide on germanium cells. Germanium-on-insulator substrates are seen as a potential replacement for silicon on miniaturized chips. Other uses in electronics include phosphors in fluorescent lamps and solid-state light-emitting diodes (LEDs). Germanium transistors are still used in some effects pedals by musicians who wish to reproduce the distinctive tonal character of the "fuzz"-tone from the early rock and roll era, most notably the Dallas Arbiter Fuzz Face. Germanium dioxide is also used in catalysts for polymerization in the production of polyethylene terephthalate (PET). The high brilliance of this polyester is especially favored for PET bottles marketed in Japan. In the United States, germanium is not used for polymerization catalysts. Due to the similarity between silica (SiO2) and germanium dioxide (GeO2), the silica stationary phase in some gas chromatography columns can be replaced by GeO2. In recent years germanium has seen increasing use in precious metal alloys. In sterling silver alloys, for instance, it reduces firescale, increases tarnish resistance, and improves precipitation hardening. A tarnish-proof silver alloy trademarked Argentium contains 1.2% germanium. Semiconductor detectors made of single crystal high-purity germanium can precisely identify radiation sources—for example in airport security. Germanium is useful for monochromators for beamlines used in single crystal neutron scattering and synchrotron X-ray diffraction. The reflectivity has advantages over silicon in neutron and high energy X-ray applications. Crystals of high purity germanium are used in detectors for gamma spectroscopy and the search for dark matter. Germanium crystals are also used in X-ray spectrometers for the determination of phosphorus, chlorine and sulfur. Germanium is emerging as an important material for spintronics and spin-based quantum computing applications. In 2010, researchers demonstrated room-temperature spin transport, and more recently donor electron spins in germanium have been shown to have very long coherence times. Germanium is not considered essential to the health of plants or animals. Germanium in the environment has little or no health impact.
This is primarily because it usually occurs only as a trace element in ores and carbonaceous materials, and the various industrial and electronic applications involve very small quantities that are not likely to be ingested. For similar reasons, end-use germanium has little impact on the environment as a biohazard. Some reactive intermediate compounds of germanium are poisonous (see precautions, below). Germanium supplements, made from both organic and inorganic germanium, have been marketed as an alternative medicine capable of treating leukemia and lung cancer. There is, however, no medical evidence of benefit; some evidence suggests that such supplements are actively harmful. Some germanium compounds have been administered by alternative medical practitioners as non-FDA-allowed injectable solutions. Soluble inorganic forms of germanium used at first, notably the citrate-lactate salt, resulted in some cases of renal dysfunction, hepatic steatosis, and peripheral neuropathy in individuals using them over a long term. Plasma and urine germanium concentrations in these individuals, several of whom died, were several orders of magnitude greater than endogenous levels. A more recent organic form, beta-carboxyethylgermanium sesquioxide (propagermanium), has not exhibited the same spectrum of toxic effects. U.S. Food and Drug Administration research has concluded that inorganic germanium, when used as a nutritional supplement, "presents potential human health hazard". Certain compounds of germanium have low toxicity to mammals, but have toxic effects against certain bacteria. Some of germanium's artificially produced compounds are quite reactive and present an immediate hazard to human health on exposure. For example, germanium chloride and germane (GeH4) are a liquid and gas, respectively, that can be very irritating to the eyes, skin, lungs, and throat. As of the year 2000, about 15% of United States consumption of germanium was used for infrared optics technology and 50% for fiber-optics. Over the past 20 years, infrared use has consistently decreased; fiber optic demand, however, is slowly increasing. In America, 30–50% of current fiber optic lines are unused dark fiber, sparking discussion of over-production and a future reduction in demand. Worldwide, demand is increasing dramatically as countries such as China install fiber optic telecommunication lines throughout their territory. Genetics Genetics is the study of genes, genetic variation, and heredity in living organisms. It is generally considered a field of biology, but intersects frequently with many other life sciences and is strongly linked with the study of information systems. The discoverer of genetics is Gregor Mendel, a 19th-century scientist and Augustinian friar. Mendel studied "trait inheritance", patterns in the way traits are handed down from parents to offspring. He observed that organisms (pea plants) inherit traits by way of discrete "units of inheritance". This term, still used today, is a somewhat ambiguous definition of what is referred to as a gene. Trait inheritance and molecular inheritance mechanisms of genes are still primary principles of genetics in the 21st century, but modern genetics has expanded beyond inheritance to studying the function and behavior of genes. Gene structure and function, variation, and distribution are studied within the context of the cell, the organism (e.g. dominance), and within the context of a population.
Genetics has given rise to a number of subfields, including epigenetics and population genetics. Organisms studied within the broad field span the domains of life (archaea, bacteria, and eukarya). Genetic processes work in combination with an organism's environment and experiences to influence development and behavior, often referred to as nature versus nurture. The intracellular or extracellular environment of a cell or organism may switch gene transcription on or off. A classic example is two seeds of genetically identical corn, one placed in a temperate climate and one in an arid climate. While the average height of the two corn stalks may be genetically determined to be equal, the one in the arid climate only grows to half the height of the one in the temperate climate due to lack of water and nutrients in its environment. The word "genetics" stems from the ancient Greek "genetikos" (γενετικός) meaning "genitive"/"generative", which in turn derives from "genesis" (γένεσις) meaning "origin". The observation that living things inherit traits from their parents has been used since prehistoric times to improve crop plants and animals through selective breeding. The modern science of genetics, seeking to understand this process, began with the work of the Augustinian friar Gregor Mendel in the mid-19th century. Prior to Mendel, Imre Festetics, a Hungarian noble who lived in Kőszeg, was the first to use the word "genetics." He described several rules of genetic inheritance in his work "The genetic law of the Nature" (Die genetische Gesätze der Natur, 1819). His second law is the same as what Mendel published. In his third law, he developed the basic principles of mutation (he can be considered a forerunner of Hugo de Vries). Other theories of inheritance preceded Mendel's work. A popular theory during the 19th century, and implied by Charles Darwin's 1859 "On the Origin of Species", was blending inheritance: the idea that individuals inherit a smooth blend of traits from their parents. Mendel's work provided examples where traits were definitely not blended after hybridization, showing that traits are produced by combinations of distinct genes rather than a continuous blend. Blending of traits in the progeny is now explained by the action of multiple genes with quantitative effects. Another theory that had some support at that time was the inheritance of acquired characteristics: the belief that individuals inherit traits strengthened by their parents. This theory (commonly associated with Jean-Baptiste Lamarck) is now known to be wrong—the experiences of individuals do not affect the genes they pass to their children, although evidence in the field of epigenetics has revived some aspects of Lamarck's theory. Other theories included the pangenesis of Charles Darwin (which had both acquired and inherited aspects) and Francis Galton's reformulation of pangenesis as both particulate and inherited. Modern genetics started with Mendel's studies of the nature of inheritance in plants. In his paper "Versuche über Pflanzenhybriden" ("Experiments on Plant Hybridization"), presented in 1865 to the "Naturforschender Verein" (Society for Research in Nature) in Brünn, Mendel traced the inheritance patterns of certain traits in pea plants and described them mathematically. Although this pattern of inheritance could only be observed for a few traits, Mendel's work suggested that heredity was particulate, not acquired, and that the inheritance patterns of many traits could be explained through simple rules and ratios.
The importance of Mendel's work did not gain wide understanding until 1900, after his death, when Hugo de Vries and other scientists rediscovered his research. William Bateson, a proponent of Mendel's work, coined the word "genetics" in 1905 (the adjective "genetic", derived from the Greek word "genesis"—γένεσις, "origin", predates the noun and was first used in a biological sense in 1860). Bateson both acted as a mentor and was aided significantly by the work of female scientists from Newnham College at Cambridge, specifically the work of Becky Saunders, Nora Darwin Barlow, and Muriel Wheldale Onslow. Bateson popularized the usage of the word "genetics" to describe the study of inheritance in his inaugural address to the Third International Conference on Plant Hybridization in London in 1906. After the rediscovery of Mendel's work, scientists tried to determine which molecules in the cell were responsible for inheritance. In 1911, Thomas Hunt Morgan argued that genes are on chromosomes, based on observations of a sex-linked white eye mutation in fruit flies. In 1913, his student Alfred Sturtevant used the phenomenon of genetic linkage to show that genes are arranged linearly on the chromosome. Although genes were known to exist on chromosomes, chromosomes are composed of both protein and DNA, and scientists did not know which of the two is responsible for inheritance. In 1928, Frederick Griffith discovered the phenomenon of transformation (see Griffith's experiment): dead bacteria could transfer genetic material to "transform" other still-living bacteria. Sixteen years later, in 1944, the Avery–MacLeod–McCarty experiment identified DNA as the molecule responsible for transformation. The role of the nucleus as the repository of genetic information in eukaryotes had been established by Hämmerling in 1943 in his work on the single celled alga "Acetabularia". The Hershey–Chase experiment in 1952 confirmed that DNA (rather than protein) is the genetic material of the viruses that infect bacteria, providing further evidence that DNA is the molecule responsible for inheritance. James Watson and Francis Crick determined the structure of DNA in 1953, using the X-ray crystallography work of Rosalind Franklin and Maurice Wilkins that indicated DNA has a helical structure (i.e., shaped like a corkscrew). Their double-helix model had two strands of DNA with the nucleotides pointing inward, each matching a complementary nucleotide on the other strand to form what look like rungs on a twisted ladder. This structure showed that genetic information exists in the sequence of nucleotides on each strand of DNA. The structure also suggested a simple method for replication: if the strands are separated, new partner strands can be reconstructed for each based on the sequence of the old strand. This property is what gives DNA its semi-conservative nature where one strand of new DNA is from an original parent strand. Although the structure of DNA showed how inheritance works, it was still not known how DNA influences the behavior of cells. In the following years, scientists tried to understand how DNA controls the process of protein production. It was discovered that the cell uses DNA as a template to create matching messenger RNA, molecules with nucleotides very similar to DNA. The nucleotide sequence of a messenger RNA is used to create an amino acid sequence in protein; this translation between nucleotide sequences and amino acid sequences is known as the genetic code. 
With the newfound molecular understanding of inheritance came an explosion of research. A notable theory arose from Tomoko Ohta in 1973 with her amendment to the neutral theory of molecular evolution through publishing the nearly neutral theory of molecular evolution. In this theory, Ohta stressed the importance of natural selection and the environment to the rate at which genetic evolution occurs. One important development was chain-termination DNA sequencing in 1977 by Frederick Sanger. This technology allows scientists to read the nucleotide sequence of a DNA molecule. In 1983, Kary Banks Mullis developed the polymerase chain reaction, providing a quick way to isolate and amplify a specific section of DNA from a mixture. The efforts of the Human Genome Project, Department of Energy, NIH, and parallel private efforts by Celera Genomics led to the sequencing of the human genome in 2003. At its most fundamental level, inheritance in organisms occurs by passing discrete heritable units, called genes, from parents to offspring. This property was first observed by Gregor Mendel, who studied the segregation of heritable traits in pea plants. In his experiments studying the trait for flower color, Mendel observed that the flowers of each pea plant were either purple or white—but never an intermediate between the two colors. These different, discrete versions of the same gene are called alleles. In the case of the pea, which is a diploid species, each individual plant has two copies of each gene, one copy inherited from each parent. Many species, including humans, have this pattern of inheritance. Diploid organisms with two copies of the same allele of a given gene are called homozygous at that gene locus, while organisms with two different alleles of a given gene are called heterozygous. The set of alleles for a given organism is called its genotype, while the observable traits of the organism are called its phenotype. When organisms are heterozygous at a gene, often one allele is called dominant as its qualities dominate the phenotype of the organism, while the other allele is called recessive as its qualities recede and are not observed. Some alleles do not have complete dominance and instead have incomplete dominance by expressing an intermediate phenotype, or codominance by expressing both alleles at once. When a pair of organisms reproduce sexually, their offspring randomly inherit one of the two alleles from each parent. These observations of discrete inheritance and the segregation of alleles are collectively known as Mendel's first law or the Law of Segregation. Geneticists use diagrams and symbols to describe inheritance. A gene is represented by one or a few letters. Often a "+" symbol is used to mark the usual, non-mutant allele for a gene. In fertilization and breeding experiments (and especially when discussing Mendel's laws) the parents are referred to as the "P" generation and the offspring as the "F1" (first filial) generation. When the F1 offspring mate with each other, the offspring are called the "F2" (second filial) generation. One of the common diagrams used to predict the result of cross-breeding is the Punnett square. When studying human genetic diseases, geneticists often use pedigree charts to represent the inheritance of traits. These charts map the inheritance of a trait in a family tree. Organisms have thousands of genes, and in sexually reproducing organisms these genes generally assort independently of each other. 
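As a concrete illustration of the Punnett square and the segregation of alleles described above, the following short Python sketch enumerates a single-gene cross; the allele letters and the function name are invented for this example.

```python
from collections import Counter
from itertools import product

def punnett_square(parent1, parent2):
    """Enumerate offspring genotypes for a one-gene cross.

    Each parent is a two-letter allele pair, e.g. "Pp" for a pea plant
    heterozygous for purple (P, dominant) versus white (p, recessive).
    """
    counts = Counter()
    for allele1, allele2 in product(parent1, parent2):
        # Sort so the dominant (uppercase) allele is written first.
        counts["".join(sorted(allele1 + allele2))] += 1
    return counts

# Crossing two heterozygotes (Pp x Pp) gives the classic 1:2:1 genotype
# ratio and, with P dominant, a 3:1 purple-to-white phenotype ratio.
print(punnett_square("Pp", "Pp"))  # Counter({'Pp': 2, 'PP': 1, 'pp': 1})
```

The same bookkeeping extends to crosses involving several genes; because genes generally assort independently of one another, each additional gene simply multiplies the number of possible gamete combinations.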
This means that the inheritance of an allele for yellow or green pea color is unrelated to the inheritance of alleles for white or purple flowers. This phenomenon, known as "Mendel's second law" or the "law of independent assortment," means that the alleles of different genes get shuffled between parents to form offspring with many different combinations. (Some genes do not assort independently, demonstrating genetic linkage, a topic discussed later in this article.) Often different genes can interact in a way that influences the same trait. In the Blue-eyed Mary ("Omphalodes verna"), for example, there exists a gene with alleles that determine the color of flowers: blue or magenta. Another gene, however, controls whether the flowers have color at all or are white. When a plant has two copies of this white allele, its flowers are white—regardless of whether the first gene has blue or magenta alleles. This interaction between genes is called epistasis, with the second gene epistatic to the first. Many traits are not discrete features (e.g. purple or white flowers) but are instead continuous features (e.g. human height and skin color). These complex traits are products of many genes. The influence of these genes is mediated, to varying degrees, by the environment an organism has experienced. The degree to which an organism's genes contribute to a complex trait is called heritability. Measurement of the heritability of a trait is relative—in a more variable environment, the environment has a bigger influence on the total variation of the trait. For example, human height is a trait with complex causes. It has a heritability of 89% in the United States. In Nigeria, however, where people experience a more variable access to good nutrition and health care, height has a heritability of only 62%. The molecular basis for genes is deoxyribonucleic acid (DNA). DNA is composed of a chain of nucleotides, of which there are four types: adenine (A), cytosine (C), guanine (G), and thymine (T). Genetic information exists in the sequence of these nucleotides, and genes exist as stretches of sequence along the DNA chain. Viruses are the only exception to this rule—sometimes viruses use the very similar molecule RNA instead of DNA as their genetic material. Viruses cannot reproduce without a host and are unaffected by many genetic processes, so tend not to be considered living organisms. DNA normally exists as a double-stranded molecule, coiled into the shape of a double helix. Each nucleotide in DNA preferentially pairs with its partner nucleotide on the opposite strand: A pairs with T, and C pairs with G. Thus, in its two-stranded form, each strand effectively contains all necessary information, redundant with its partner strand. This structure of DNA is the physical basis for inheritance: DNA replication duplicates the genetic information by splitting the strands and using each strand as a template for synthesis of a new partner strand. Genes are arranged linearly along long chains of DNA base-pair sequences. In bacteria, each cell usually contains a single circular genophore, while eukaryotic organisms (such as plants and animals) have their DNA arranged in multiple linear chromosomes. These DNA strands are often extremely long; the largest human chromosome, for example, is about 247 million base pairs in length. 
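A minimal sketch of the base-pairing rule just described, showing how one strand determines its partner; the example sequence is arbitrary, and strand orientation (the 5'/3' directionality of real DNA) is ignored for simplicity.

```python
# Minimal sketch of the A-T / C-G pairing rule: one strand determines its
# partner, so a single strand carries all the information needed to rebuild
# the duplex. Strand orientation (5' -> 3') is ignored for simplicity.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement_strand(strand):
    """Return the partner strand implied by base pairing."""
    return "".join(PAIR[base] for base in strand)

template = "ATGCCGTA"               # arbitrary example sequence
print(complement_strand(template))  # TACGGCAT
```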
The DNA of a chromosome is associated with structural proteins that organize, compact, and control access to the DNA, forming a material called chromatin; in eukaryotes, chromatin is usually composed of nucleosomes, segments of DNA wound around cores of histone proteins. The full set of hereditary material in an organism (usually the combined DNA sequences of all chromosomes) is called the genome. While haploid organisms have only one copy of each chromosome, most animals and many plants are diploid, containing two of each chromosome and thus two copies of every gene. The two alleles for a gene are located on identical loci of the two homologous chromosomes, each allele inherited from a different parent. Many species have so-called sex chromosomes that determine the gender of each organism. In humans and many other animals, the Y chromosome contains the gene that triggers the development of the specifically male characteristics. In evolution, this chromosome has lost most of its content and also most of its genes, while the X chromosome is similar to the other chromosomes and contains many genes. The X and Y chromosomes form a strongly heterogeneous pair. When cells divide, their full genome is copied and each daughter cell inherits one copy. This process, called mitosis, is the simplest form of reproduction and is the basis for asexual reproduction. Asexual reproduction can also occur in multicellular organisms, producing offspring that inherit their genome from a single parent. Offspring that are genetically identical to their parents are called clones. Eukaryotic organisms often use sexual reproduction to generate offspring that contain a mixture of genetic material inherited from two different parents. The process of sexual reproduction alternates between forms that contain single copies of the genome (haploid) and double copies (diploid). Haploid cells fuse and combine genetic material to create a diploid cell with paired chromosomes. Diploid organisms form haploids by dividing, without replicating their DNA, to create daughter cells that randomly inherit one of each pair of chromosomes. Most animals and many plants are diploid for most of their lifespan, with the haploid form reduced to single cell gametes such as sperm or eggs. Although they do not use the haploid/diploid method of sexual reproduction, bacteria have many methods of acquiring new genetic information. Some bacteria can undergo conjugation, transferring a small circular piece of DNA to another bacterium. Bacteria can also take up raw DNA fragments found in the environment and integrate them into their genomes, a phenomenon known as transformation. These processes result in horizontal gene transfer, transmitting fragments of genetic information between organisms that would be otherwise unrelated. The diploid nature of chromosomes allows for genes on different chromosomes to assort independently or be separated from their homologous pair during sexual reproduction wherein haploid gametes are formed. In this way new combinations of genes can occur in the offspring of a mating pair. Genes on the same chromosome would theoretically never recombine. However, they do, via the cellular process of chromosomal crossover. During crossover, chromosomes exchange stretches of DNA, effectively shuffling the gene alleles between the chromosomes. This process of chromosomal crossover generally occurs during meiosis, a series of cell divisions that creates haploid cells. 
The first cytological demonstration of crossing over was performed by Harriet Creighton and Barbara McClintock in 1931. Their research and experiments on corn provided cytological evidence for the genetic theory that linked genes on paired chromosomes do in fact exchange places from one homolog to the other. The probability of chromosomal crossover occurring between two given points on the chromosome is related to the distance between the points. For an arbitrarily long distance, the probability of crossover is high enough that the inheritance of the genes is effectively uncorrelated. For genes that are closer together, however, the lower probability of crossover means that the genes demonstrate genetic linkage; alleles for the two genes tend to be inherited together. The amounts of linkage between a series of genes can be combined to form a linear linkage map that roughly describes the arrangement of the genes along the chromosome. Genes generally express their functional effect through the production of proteins, which are complex molecules responsible for most functions in the cell. Proteins are made up of one or more polypeptide chains, each of which is composed of a sequence of amino acids, and the DNA sequence of a gene (through an RNA intermediate) is used to produce a specific amino acid sequence. This process begins with the production of an RNA molecule with a sequence matching the gene's DNA sequence, a process called transcription. This messenger RNA molecule is then used to produce a corresponding amino acid sequence through a process called translation. Each group of three nucleotides in the sequence, called a codon, corresponds either to one of the twenty possible amino acids in a protein or an instruction to end the amino acid sequence; this correspondence is called the genetic code. The flow of information is unidirectional: information is transferred from nucleotide sequences into the amino acid sequence of proteins, but it never transfers from protein back into the sequence of DNA—a phenomenon Francis Crick called the central dogma of molecular biology. The specific sequence of amino acids results in a unique three-dimensional structure for that protein, and the three-dimensional structures of proteins are related to their functions. Some are simple structural molecules, like the fibers formed by the protein collagen. Proteins can bind to other proteins and simple molecules, sometimes acting as enzymes by facilitating chemical reactions within the bound molecules (without changing the structure of the protein itself). Protein structure is dynamic; the protein hemoglobin bends into slightly different forms as it facilitates the capture, transport, and release of oxygen molecules within mammalian blood. A single nucleotide difference within DNA can cause a change in the amino acid sequence of a protein. Because protein structures are the result of their amino acid sequences, some changes can dramatically change the properties of a protein by destabilizing the structure or changing the surface of the protein in a way that changes its interaction with other proteins and molecules. For example, sickle-cell anemia is a human genetic disease that results from a single base difference within the coding region for the β-globin section of hemoglobin, causing a single amino acid change that changes hemoglobin's physical properties. Sickle-cell versions of hemoglobin stick to themselves, stacking to form fibers that distort the shape of red blood cells carrying the protein. 
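The transcription-and-translation flow described above can be sketched in a few lines of Python; the codon table here is deliberately truncated to a handful of entries of the standard genetic code, and the DNA sequence is an invented example.

```python
# Minimal sketch of the DNA -> mRNA -> protein flow described above.
# Only a few codons of the standard genetic code are listed here;
# a full implementation would carry all 64 entries.
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "GAA": "Glu", "GCU": "Ala",
    "UGG": "Trp", "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def transcribe(dna):
    """Transcription: copy the coding-strand sequence, replacing T with U."""
    return dna.replace("T", "U")

def translate(mrna):
    """Translation: read codons (nucleotide triplets) until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE.get(mrna[i:i + 3], "Xaa")  # Xaa = unknown
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return "-".join(protein)

print(translate(transcribe("ATGTTTGAAGCTTGA")))  # Met-Phe-Glu-Ala
```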
These sickle-shaped cells no longer flow smoothly through blood vessels, having a tendency to clog or degrade, causing the medical problems associated with this disease. Some DNA sequences are transcribed into RNA but are not translated into protein products—such RNA molecules are called non-coding RNA. In some cases, these products fold into structures which are involved in critical cell functions (e.g. ribosomal RNA and transfer RNA). RNA can also have regulatory effects through hybridization interactions with other RNA molecules (e.g. microRNA). Although genes contain all the information an organism uses to function, the environment plays an important role in determining the ultimate phenotypes an organism displays. The phrase "nature and nurture" refers to this complementary relationship. The phenotype of an organism depends on the interaction of genes and the environment. An interesting example is the coat coloration of the Siamese cat. In this case, the body temperature of the cat plays the role of the environment. The cat's genes code for dark hair, thus the hair-producing cells in the cat make cellular proteins resulting in dark hair. But these dark hair-producing proteins are sensitive to temperature (i.e. have a mutation causing temperature-sensitivity) and denature in higher-temperature environments, failing to produce dark-hair pigment in areas where the cat has a higher body temperature. In a low-temperature environment, however, the protein's structure is stable and produces dark-hair pigment normally. The protein remains functional in areas of skin that are colder, such as its legs, ears, tail and face, so the cat has dark hair at its extremities. Environment plays a major role in effects of the human genetic disease phenylketonuria. The mutation that causes phenylketonuria disrupts the ability of the body to break down the amino acid phenylalanine, causing a toxic build-up of an intermediate molecule that, in turn, causes severe symptoms of progressive intellectual disability and seizures. However, if someone with the phenylketonuria mutation follows a strict diet that avoids this amino acid, they remain normal and healthy. A common method for determining how genes and environment ("nature and nurture") contribute to a phenotype involves studying identical and fraternal twins, or other siblings of multiple births. Because identical siblings come from the same zygote, they are genetically the same. Fraternal twins are as genetically different from one another as normal siblings. By comparing how often a certain disorder occurs in a pair of identical twins to how often it occurs in a pair of fraternal twins, scientists can determine whether that disorder is caused by genetic or postnatal environmental factors – whether it has "nature" or "nurture" causes. One famous example involved the study of the Genain quadruplets, who were identical quadruplets all diagnosed with schizophrenia. However, such tests cannot separate genetic factors from environmental factors affecting fetal development. The genome of a given organism contains thousands of genes, but not all these genes need to be active at any given moment. A gene is expressed when it is being transcribed into mRNA and there exist many cellular methods of controlling the expression of genes such that proteins are produced only when needed by the cell. Transcription factors are regulatory proteins that bind to DNA, either promoting or inhibiting the transcription of a gene.
Within the genome of "Escherichia coli" bacteria, for example, there exists a series of genes necessary for the synthesis of the amino acid tryptophan. However, when tryptophan is already available to the cell, these genes for tryptophan synthesis are no longer needed. The presence of tryptophan directly affects the activity of the genes—tryptophan molecules bind to the tryptophan repressor (a transcription factor), changing the repressor's structure such that the repressor binds to the genes. The tryptophan repressor blocks the transcription and expression of the genes, thereby creating negative feedback regulation of the tryptophan synthesis process. Differences in gene expression are especially clear within multicellular organisms, where cells all contain the same genome but have very different structures and behaviors due to the expression of different sets of genes. All the cells in a multicellular organism derive from a single cell, differentiating into variant cell types in response to external and intercellular signals and gradually establishing different patterns of gene expression to create different behaviors. As no single gene is responsible for the development of structures within multicellular organisms, these patterns arise from the complex interactions between many cells. Within eukaryotes, there exist structural features of chromatin that influence the transcription of genes, often in the form of modifications to DNA and chromatin that are stably inherited by daughter cells. These features are called "epigenetic" because they exist "on top" of the DNA sequence and are inherited from one cell generation to the next. Because of epigenetic features, different cell types grown within the same medium can retain very different properties. Although epigenetic features are generally dynamic over the course of development, some, like the phenomenon of paramutation, have multigenerational inheritance and exist as rare exceptions to the general rule of DNA as the basis for inheritance. During the process of DNA replication, errors occasionally occur in the polymerization of the second strand. These errors, called mutations, can affect the phenotype of an organism, especially if they occur within the protein coding sequence of a gene. Error rates are usually very low—1 error in every 10–100 million bases—due to the "proofreading" ability of DNA polymerases. Processes that increase the rate of changes in DNA are called mutagenic: mutagenic chemicals promote errors in DNA replication, often by interfering with the structure of base-pairing, while UV radiation induces mutations by causing damage to the DNA structure. Chemical damage to DNA occurs naturally as well and cells use DNA repair mechanisms to repair mismatches and breaks. The repair does not, however, always restore the original sequence. In organisms that use chromosomal crossover to exchange DNA and recombine genes, errors in alignment during meiosis can also cause mutations. Errors in crossover are especially likely when similar sequences cause partner chromosomes to adopt a mistaken alignment; this makes some regions in genomes more prone to mutating in this way. These errors create large structural changes in DNA sequence, such as duplications, inversions, or deletions of entire regions, or the accidental exchange of whole parts of sequences between different chromosomes (chromosomal translocation). Mutations alter an organism's genotype and occasionally this causes different phenotypes to appear.
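The quoted error rate can be put in perspective with a back-of-the-envelope calculation. In the Python sketch below, the genome length is an assumed round figure of three billion base pairs (roughly the scale of a human haploid genome, not a number given in the text); combining it with the stated rate of one error per 10–100 million bases gives the expected number of uncorrected errors per replication.

```python
# Back-of-the-envelope sketch of the replication error rates quoted above
# (about 1 error per 10-100 million bases after proofreading). The genome
# size is an assumption for illustration, not taken from the text.

GENOME_SIZE_BP = 3_000_000_000            # assumed ~3 billion bases, roughly human-genome scale
ERROR_RATE_LOW = 1 / 100_000_000          # 1 error per 100 million bases
ERROR_RATE_HIGH = 1 / 10_000_000          # 1 error per 10 million bases

expected_min = GENOME_SIZE_BP * ERROR_RATE_LOW
expected_max = GENOME_SIZE_BP * ERROR_RATE_HIGH
print(f"Expected new errors per genome replication: {expected_min:.0f}-{expected_max:.0f}")
# -> roughly 30-300 uncorrected mismatches per replication under these assumptions
```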
Most mutations have little effect on an organism's phenotype, health, or reproductive fitness. Mutations that do have an effect are usually detrimental, but occasionally some can be beneficial. Studies in the fly "Drosophila melanogaster" suggest that if a mutation changes a protein produced by a gene, about 70 percent of these mutations will be harmful with the remainder being either neutral or weakly beneficial. Population genetics studies the distribution of genetic differences within populations and how these distributions change over time. Changes in the frequency of an allele in a population are mainly influenced by natural selection, where a given allele provides a selective or reproductive advantage to the organism, as well as other factors such as mutation, genetic drift, genetic hitchhiking, artificial selection and migration. Over many generations, the genomes of organisms can change significantly, resulting in evolution. In the process called adaptation, selection for beneficial mutations can cause a species to evolve into forms better able to survive in their environment. New species are formed through the process of speciation, often caused by geographical separations that prevent populations from exchanging genes with each other. By comparing the homology between different species' genomes, it is possible to calculate the evolutionary distance between them and when they may have diverged. Genetic comparisons are generally considered a more accurate method of characterizing the relatedness between species than the comparison of phenotypic characteristics. The evolutionary distances between species can be used to form evolutionary trees; these trees represent the common descent and divergence of species over time, although they do not show the transfer of genetic material between unrelated species (known as horizontal gene transfer and most common in bacteria). Although geneticists originally studied inheritance in a wide range of organisms, researchers began to specialize in studying the genetics of a particular subset of organisms. The fact that significant research already existed for a given organism would encourage new researchers to choose it for further study, and so eventually a few model organisms became the basis for most genetics research. Common research topics in model organism genetics include the study of gene regulation and the involvement of genes in development and cancer. Organisms were chosen, in part, for convenience—short generation times and easy genetic manipulation made some organisms popular genetics research tools. Widely used model organisms include the gut bacterium "Escherichia coli", the plant "Arabidopsis thaliana", baker's yeast ("Saccharomyces cerevisiae"), the nematode "Caenorhabditis elegans", the common fruit fly ("Drosophila melanogaster"), and the common house mouse ("Mus musculus"). Medical genetics seeks to understand how genetic variation relates to human health and disease. When searching for an unknown gene that may be involved in a disease, researchers commonly use genetic linkage and genetic pedigree charts to find the location on the genome associated with the disease. At the population level, researchers take advantage of Mendelian randomization to look for locations in the genome that are associated with diseases, a method especially useful for multigenic traits not clearly defined by a single gene. Once a candidate gene is found, further research is often done on the corresponding (or homologous) genes of model organisms. 
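One common way to turn the genome comparisons mentioned above into a number is to count the fraction of aligned positions that differ and, if desired, correct it with a substitution model. The sketch below uses the Jukes-Cantor correction, a standard model that is not named in the text, and two invented "aligned" sequences purely for illustration.

```python
import math

# Sketch: evolutionary distance from aligned sequences via the raw proportion
# of differing sites and the Jukes-Cantor correction (a standard model; an
# assumption here, not something stated in the text). Sequences are invented.

def p_distance(seq_a: str, seq_b: str) -> float:
    """Fraction of aligned positions at which the two sequences differ."""
    assert len(seq_a) == len(seq_b), "sequences must be aligned to equal length"
    differences = sum(1 for a, b in zip(seq_a, seq_b) if a != b)
    return differences / len(seq_a)

def jukes_cantor(p: float) -> float:
    """Correct the raw difference fraction for multiple substitutions at a site."""
    return -0.75 * math.log(1 - 4 * p / 3)

species_a = "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGA"   # made-up aligned sequences
species_b = "ATGGCCATTGTAATGGGCCACTGAAAAGGTGCACGA"
p = p_distance(species_a, species_b)
print(f"raw difference: {p:.3f}, Jukes-Cantor distance: {jukes_cantor(p):.3f}")
```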
In addition to studying genetic diseases, the increased availability of genotyping methods has led to the field of pharmacogenetics: the study of how genotype can affect drug responses. Individuals differ in their inherited tendency to develop cancer, and cancer is a genetic disease. The process of cancer development in the body is a combination of events. Mutations occasionally occur within cells in the body as they divide. Although these mutations will not be inherited by any offspring, they can affect the behavior of cells, sometimes causing them to grow and divide more frequently. There are biological mechanisms that attempt to stop this process; signals are given to inappropriately dividing cells that should trigger cell death, but sometimes additional mutations occur that cause cells to ignore these messages. An internal process of natural selection occurs within the body and eventually mutations accumulate within cells to promote their own growth, creating a cancerous tumor that grows and invades various tissues of the body. Normally, a cell divides only in response to signals called growth factors and stops growing once in contact with surrounding cells and in response to growth-inhibitory signals. It usually then divides a limited number of times and dies, staying within the epithelium where it is unable to migrate to other organs. To become a cancer cell, a cell has to accumulate mutations in a number of genes (three to seven) that allow it to bypass this regulation: it no longer needs growth factors to divide, continues growing when in contact with neighboring cells, ignores inhibitory signals, keeps growing indefinitely and is immortal, escapes from the epithelium and ultimately may be able to escape from the primary tumor, cross the endothelium of a blood vessel, be transported by the bloodstream and colonize a new organ, forming deadly metastases. Although there are some genetic predispositions in a small fraction of cancers, the major fraction is due to a set of new genetic mutations that originally appear and accumulate in one or a small number of cells that will divide to form the tumor and are not transmitted to the progeny (somatic mutations). The most frequent mutations are a loss of function of p53 protein, a tumor suppressor, or in the p53 pathway, and gain of function mutations in the Ras proteins, or in other oncogenes. DNA can be manipulated in the laboratory. Restriction enzymes are commonly used enzymes that cut DNA at specific sequences, producing predictable fragments of DNA. DNA fragments can be visualized through use of gel electrophoresis, which separates fragments according to their length. The use of ligation enzymes allows DNA fragments to be connected. By binding ("ligating") fragments of DNA together from different sources, researchers can create recombinant DNA, the DNA often associated with genetically modified organisms. Recombinant DNA is commonly used in the context of plasmids: short circular DNA molecules with a few genes on them. In the process known as molecular cloning, researchers can amplify the DNA fragments by inserting plasmids into bacteria and then culturing them on plates of agar (to isolate clones of bacteria cells – "cloning" can also refer to the various means of creating cloned ("clonal") organisms). DNA can also be amplified using a procedure called the polymerase chain reaction (PCR). By using specific short sequences of DNA, PCR can isolate and exponentially amplify a targeted region of DNA.
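The way a primer pair defines the amplified region can be sketched in a few lines. In the following fragment the template and primer sequences are invented, and real primer design would also weigh melting temperature, GC content and specificity; the point is only that the product is bounded by the forward primer's match on one strand and the reverse primer's match on the complementary strand.

```python
# Minimal in-silico sketch of how a PCR primer pair defines the amplified
# product: find the forward primer on the template, find the reverse primer's
# binding site (its reverse complement) downstream, and report the region
# between them. All sequences are invented for illustration.

def reverse_complement(seq: str) -> str:
    complement = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(complement[base] for base in reversed(seq))

def pcr_product(template: str, forward_primer: str, reverse_primer: str) -> str:
    """Return the region of the template bounded by the two primer binding sites."""
    start = template.find(forward_primer)
    # The reverse primer anneals to the opposite strand, so it appears on the
    # top strand as its reverse complement, downstream of the forward primer.
    end_site = template.find(reverse_complement(reverse_primer), start)
    if start == -1 or end_site == -1:
        raise ValueError("one or both primers do not bind the template")
    return template[start:end_site + len(reverse_primer)]

template = "TTGACGGCTAGCTCAGTCCTAGGTACAGTGCTAGCCATGGTTTACGGCAT"
forward  = "GCTAGCTCAGT"                  # matches the template strand directly
reverse  = "CCATGGCTAGCA"                 # anneals to the complementary strand
print(pcr_product(template, forward, reverse))  # the region copied exponentially in each cycle
```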
Because it can amplify from extremely small amounts of DNA, PCR is also often used to detect the presence of specific DNA sequences. DNA sequencing, one of the most fundamental technologies developed to study genetics, allows researchers to determine the sequence of nucleotides in DNA fragments. The technique of chain-termination sequencing, developed in 1977 by a team led by Frederick Sanger, is still routinely used to sequence DNA fragments. Using this technology, researchers have been able to study the molecular sequences associated with many human diseases. As sequencing has become less expensive, researchers have sequenced the genomes of many organisms using a process called genome assembly, which utilizes computational tools to stitch together sequences from many different fragments. These technologies were used to sequence the human genome in the Human Genome Project completed in 2003. New high-throughput sequencing technologies are dramatically lowering the cost of DNA sequencing, with many researchers hoping to bring the cost of resequencing a human genome down to a thousand dollars. Next-generation sequencing (or high-throughput sequencing) came about due to the ever-increasing demand for low-cost sequencing. These sequencing technologies allow the production of potentially millions of sequences concurrently. The large amount of sequence data available has created the field of genomics, research that uses computational tools to search for and analyze patterns in the full genomes of organisms. Genomics can also be considered a subfield of bioinformatics, which uses computational approaches to analyze large sets of biological data. A common problem in these fields of research is how to manage and share data that deals with human subjects and personally identifiable information. On 19 March 2015, a group of leading biologists urged a worldwide ban on clinical use of methods, particularly the use of CRISPR and zinc finger, to edit the human genome in a way that can be inherited. In April 2015, Chinese researchers reported results of basic research to edit the DNA of non-viable human embryos using CRISPR. Gettysburg Address The Gettysburg Address is a speech by U.S. President Abraham Lincoln, and one of the best-known speeches in American history. It was delivered by Lincoln during the American Civil War at the dedication of the Soldiers' National Cemetery in Gettysburg, Pennsylvania, on the afternoon of Thursday, November 19, 1863, four and a half months after the Union armies defeated those of the Confederacy at the Battle of Gettysburg. Lincoln's carefully crafted address, secondary to others' presentations that day, came to be seen as one of the greatest and most influential statements of American national purpose. In just over two minutes, Lincoln reiterated the principles of human equality espoused by the Declaration of Independence and proclaimed the Civil War as a struggle for the preservation of the Union sundered by the secession crisis, with "a new birth of freedom" that would bring true equality to all of its citizens. Lincoln also redefined the Civil War as a struggle not just for the Union, but also for the principle of human equality.
Beginning with the now-iconic phrase "Four score and seven years ago", referring to the signing of the Declaration of Independence eighty-seven years earlier, Lincoln invoked the United States' founding principles as set forth in that document, then reminded his listeners of the peril to those principles posed by the Civil War then in progress. He extolled the sacrifices of those who died at Gettysburg in defense of those principles, and exhorted his listeners to continue the struggle for survival of the nation's representative democracy as a beacon to the world, urging resolve that these dead shall not have died in vain — that this nation, under God, shall have a new birth of freedom — and that government of the people, by the people, for the people, shall not perish from the earth. Despite the speech's prominent place in the history and popular culture of the United States, the exact wording and location of the speech are disputed. The five known manuscripts of the Gettysburg Address in Lincoln's hand differ in a number of details, and also differ from contemporary newspaper reprints of the speech. Modern scholarship locates the speakers' platform 40 yards (or more) away from the "Traditional Site" within Soldiers' National Cemetery at the Soldiers' National Monument and entirely within private, adjacent Evergreen Cemetery. Following the Battle of Gettysburg on July 1–3, 1863, reburial of Union soldiers from the Gettysburg Battlefield graves began on October 17. David Wills, of the committee for the November 19 Consecration of the National Cemetery at Gettysburg, invited President Lincoln: "It is the desire that, after the Oration, you, as Chief Executive of the nation, formally set apart these grounds to their sacred use by a few appropriate remarks." Lincoln's address followed the oration by Edward Everett, who subsequently included a copy of the Gettysburg Address in his 1864 book about the event ("Address of the Hon. Edward Everett At the Consecration of the National Cemetery At Gettysburg, 19th November 1863, with the Dedicatory Speech of President Lincoln, and the Other Exercises of the Occasion; Accompanied by An Account of the Origin of the Undertaking and of the Arrangement of the Cemetery Grounds, and by a Map of the Battle-field and a Plan of the Cemetery"). During the train trip from Washington, D.C., to Gettysburg on November 18, Lincoln remarked to John Hay that he felt weak. On the morning of November 19, Lincoln mentioned to John Nicolay that he was dizzy. In the railroad car the President rode with his secretary, John G. Nicolay, his assistant secretary, John Hay, the three members of his Cabinet who accompanied him, William Seward, John Usher and Montgomery Blair, several foreign officials and others. Hay noted that during the speech Lincoln's face had 'a ghastly color' and that he was 'sad, mournful, almost haggard.' After the speech, when Lincoln boarded the 6:30 pm train for Washington, D.C., he was feverish and weak, with a severe headache. A protracted illness followed, which included a vesicular rash and was diagnosed as a mild case of smallpox. It thus seems highly likely that Lincoln was in the prodromal period of smallpox when he delivered the Gettysburg address. The program organized for that day by Wills and his committee included: Music, by Birgfeld's Band ("Homage d'uns Heros" by Adolph Birgfeld); Prayer, by Reverend T. H. Stockton, D.D.; Music, by the Marine Band ("Old Hundred"), directed by Francis Scala; Oration, by Hon.
Edward Everett ("The Battles of Gettysburg"); Music, Hymn ("Consecration Chant") by B. B. French, Esq., music by Wilson G. Horner, sung by Baltimore Glee Club; Dedicatory Remarks, by the President of the United States; Dirge ("Oh! It is Great for Our Country to Die", words by James G. Percival, music by Alfred Delaney), sung by Choir selected for the occasion; Benediction, by Reverend H. L. Baugher, D.D. While it is Lincoln's short speech that has gone down in history as one of the finest examples of English public oratory, it was Everett's two-hour oration that was slated to be the "Gettysburg address" that day. His now seldom-read 13,607-word oration began: Standing beneath this serene sky, overlooking these broad fields now reposing from the labors of the waning year, the mighty Alleghenies dimly towering before us, the graves of our brethren beneath our feet, it is with hesitation that I raise my poor voice to break the eloquent silence of God and Nature. But the duty to which you have called me must be performed;—grant me, I pray you, your indulgence and your sympathy. And ended two hours later with: But they, I am sure, will join us in saying, as we bid farewell to the dust of these martyr-heroes, that wheresoever throughout the civilized world the accounts of this great warfare are read, and down to the latest period of recorded time, in the glorious annals of our common country, there will be no brighter page than that which relates the Battles of Gettysburg. Lengthy dedication addresses like Everett's were common at cemeteries in this era. The tradition began in 1831 when Justice Joseph Story delivered the dedication address at Mount Auburn Cemetery in Cambridge, Massachusetts. Those addresses often linked cemeteries to the mission of Union. Shortly after Everett's well-received remarks, Lincoln spoke for only a few minutes. With a "few appropriate remarks", he was able to summarize his view of the war in just ten sentences. Despite the historical significance of Lincoln's speech, modern scholars disagree as to its exact wording, and contemporary transcriptions published in newspaper accounts of the event and even handwritten copies by Lincoln himself differ in their wording, punctuation, and structure. Of these versions, the Bliss version, written well after the speech as a favor for a friend, is viewed by many as the standard text. Its text differs, however, from the written versions prepared by Lincoln before and after his speech. It is the only version to which Lincoln affixed his signature, and the last he is known to have written. In "Lincoln at Gettysburg", Garry Wills notes the parallels between Lincoln's speech and Pericles's Funeral Oration during the Peloponnesian War as described by Thucydides. (James McPherson notes this connection in his review of Wills's book. Gore Vidal also draws attention to this link in a BBC documentary about oration.) Pericles' speech, like Lincoln's: In contrast, writer Adam Gopnik, in "The New Yorker", notes that while Everett's Oration was explicitly neoclassical, referring directly to Marathon and Pericles, "Lincoln's rhetoric is, instead, deliberately Biblical. (It is difficult to find a single obviously classical reference in any of his speeches.) Lincoln had mastered the sound of the King James Bible so completely that he could recast abstract issues of constitutional law in Biblical terms, making the proposition that Texas and New Hampshire should be forever bound by a single post office sound like something right out of Genesis."
Several theories have been advanced by Lincoln scholars to explain the provenance of Lincoln's famous phrase "government of the people, by the people, for the people". Despite many claims, there is no evidence a similar phrase appears in the Prologue to John Wycliffe's 1384 English translation of the Bible. In a discussion "A more probable origin of a famous Lincoln phrase", in "The American Monthly Review of Reviews", Albert Shaw credits a correspondent with pointing out the writings of William Herndon, Lincoln's law partner, who wrote in the 1888 work "Abraham Lincoln: The True Story of A Great Life" that he had brought to Lincoln some of the sermons of abolitionist minister Theodore Parker, of Massachusetts, and that Lincoln was moved by Parker's use of this idea. Craig R. Smith, in "Criticism of Political Rhetoric and Disciplinary Integrity", suggested Lincoln's view of the government as expressed in the Gettysburg Address was influenced by the noted speech of Massachusetts Senator Daniel Webster, the "Second Reply to Hayne", in which Webster famously thundered "Liberty and Union, now and forever, one and inseparable!" Specifically, in this speech on January 26, 1830, before the United States Senate, Webster described the federal government as: "made for the people, made by the people, and answerable to the people", foreshadowing Lincoln's "government of the people, by the people, for the people". Webster also noted, "This government, Sir, is the independent offspring of the popular will. It is not the creature of State legislatures; nay, more, if the whole truth must be told, the people brought it into existence, established it, and have hitherto supported it, for the very purpose, amongst others, of imposing certain salutary restraints on State sovereignties." A source predating these others with which Lincoln was certainly familiar was Chief Justice John Marshall's opinion in McCulloch v. Maryland (1819), a case asserting federal authority to create a national bank and to be free from the State's powers to tax. In asserting the superiority of federal power over the states, Chief Justice Marshall stated: "The government of the Union, then (whatever may be the influence of this fact on the case), is, emphatically and truly, a government of the people. In form, and in substance, it emanates from them. Its powers are granted by them, and are to be exercised directly on them, and for their benefit." Lincoln, a lawyer and President engaged in the greatest struggle of federalism, was (more eloquently) echoing the preeminent case that had solidified federal power over the States. Wills observed Lincoln's usage of the imagery of birth, life, and death in reference to a nation "brought forth", "conceived", and that shall not "perish". Others, including Allen C. Guelzo, the director of Civil War Era studies at Gettysburg College in Pennsylvania, suggested that Lincoln's formulation "four score and seven" was an allusion to the King James Version of the Bible's Psalm 90:10, in which man's lifespan is given as "threescore years and ten; and if by reason of strength they be fourscore years". Each of the five known manuscript copies of the Gettysburg Address is named for the person who received it from Lincoln. Lincoln gave copies to his private secretaries, John Nicolay and John Hay.
Both of these drafts were written around the time of his November 19 address, while the other three copies of the address, the Everett, Bancroft, and Bliss copies, were written by Lincoln for charitable purposes well after November 19. In part because Lincoln provided a title and signed and dated the Bliss copy, it has become the standard text of Lincoln's Gettysburg Address. Nicolay and Hay were appointed custodians of Lincoln's papers by Lincoln's son Robert Todd Lincoln in 1874. After appearing in facsimile in an article written by John Nicolay in 1894, the Nicolay copy was presumably among the papers passed to Hay by Nicolay's daughter Helen upon Nicolay's death in 1901. Robert Lincoln began a search for the original copy in 1908, which resulted in the discovery of a handwritten copy of the Gettysburg Address among the bound papers of John Hay—a copy now known as the "Hay copy" or "Hay draft". The Hay draft differed from the version of the Gettysburg Address published by John Nicolay in 1894 in a number of significant ways: it was written on a different type of paper, had a different number of words per line and number of lines, and contained editorial revisions in Lincoln's hand. Both the Hay and Nicolay copies of the Address are within the Library of Congress, encased in specially designed, temperature-controlled, sealed containers with argon gas in order to protect the documents from oxidation and continued deterioration. The Nicolay copy is often called the "first draft" because it is believed to be the earliest copy that exists. Scholars disagree over whether the Nicolay copy was actually the reading copy Lincoln held at Gettysburg on November 19. In an 1894 article that included a facsimile of this copy, Nicolay, who had become the custodian of Lincoln's papers, wrote that Lincoln had brought to Gettysburg the first part of the speech written in ink on Executive Mansion stationery, and that he had written the second page in pencil on lined paper before the dedication on November 19. Matching folds are still evident on the two pages, suggesting it could be the copy that eyewitnesses say Lincoln took from his coat pocket and read at the ceremony. Others believe that the delivery text has been lost, because some of the words and phrases of the Nicolay copy do not match contemporary transcriptions of Lincoln's original speech. The words "under God", for example, are missing in this copy from the phrase "that this nation shall have a new birth of freedom ..." In order for the Nicolay draft to have been the reading copy, either the contemporary transcriptions were inaccurate, or Lincoln would have had to depart from his written text in several instances. This copy of the Gettysburg Address apparently remained in John Nicolay's possession until his death in 1901, when it passed to his friend and colleague John Hay. It used to be on display as part of the American Treasures exhibition of the Library of Congress in Washington, D.C. The existence of the Hay copy was first announced to the public in 1906, after the search for the "original manuscript" of the Address among the papers of John Hay brought it to light. Significantly, it differs somewhat from the manuscript of the Address described by John Nicolay in his article, and contains numerous omissions and inserts in Lincoln's own hand, including omissions critical to the basic meaning of the sentence, not simply words that would be added by Lincoln to strengthen or clarify their meaning. 
In this copy, as in the Nicolay copy, the words "under God" are not present. This version has been described as "the most inexplicable" of the drafts and is sometimes referred to as the "second draft". The "Hay copy" was made either on the morning of the delivery of the Address, or shortly after Lincoln's return to Washington. Those who believe that it was completed on the morning of his address point to the fact that it contains certain phrases that are not in the first draft but are in the reports of the address as delivered and in subsequent copies made by Lincoln. It is probable, they conclude, that, as stated in the explanatory note accompanying the original copies of the first and second drafts in the Library of Congress, Lincoln held this second draft when he delivered the address. Lincoln eventually gave this copy to his other personal secretary, John Hay, whose descendants donated both it and the Nicolay copy to the Library of Congress in 1916. The Everett copy, also known as the "Everett-Keyes copy", was sent by President Lincoln to Edward Everett in early 1864, at Everett's request. Everett was collecting the speeches at the Gettysburg dedication into one bound volume to sell for the benefit of stricken soldiers at New York's Sanitary Commission Fair. The draft Lincoln sent became the third autograph copy, and is now in the possession of the Illinois State Historical Library in Springfield, Illinois, where it is currently on display in the Treasures Gallery of the Abraham Lincoln Presidential Library and Museum. The Bancroft copy of the Gettysburg Address was written out by President Lincoln in February 1864 at the request of George Bancroft, the famed historian and former Secretary of the Navy, whose comprehensive ten-volume "History of the United States" later led him to be known as the "father of American History". Bancroft planned to include this copy in "Autograph Leaves of Our Country's Authors", which he planned to sell at a Soldiers' and Sailors' Sanitary Fair in Baltimore. As this fourth copy was written on both sides of the paper, it proved unusable for this purpose, and Bancroft was allowed to keep it. This manuscript is the only one accompanied both by a letter from Lincoln transmitting the manuscript and by the original envelope addressed and franked by Lincoln. This copy remained in the Bancroft family for many years, was sold to various dealers and purchased by Nicholas and Marguerite Lilly Noyes, who donated the manuscript to Cornell in 1949. It is now held by the Division of Rare and Manuscript Collections in the Carl A. Kroch Library at Cornell University. It is the only one of the five copies to be privately owned. Discovering that his fourth written copy could not be used, Lincoln then wrote a fifth draft, which was accepted for the purpose requested. The Bliss copy, named for Colonel Alexander Bliss, Bancroft's stepson and publisher of "Autograph Leaves", is the only draft to which Lincoln affixed his signature. Lincoln is not known to have made any further copies of the Gettysburg Address. Because of the apparent care in its preparation, and in part because Lincoln provided a title and signed and dated this copy, it has become the standard version of the address and the source for most facsimile reproductions of Lincoln's Gettysburg Address. It is the version that is inscribed on the South wall of the Lincoln Memorial. This draft is now displayed in the Lincoln Room of the White House, a gift of Oscar B. Cintas, former Cuban Ambassador to the United States. 
Cintas, a wealthy collector of art and manuscripts, purchased the Bliss copy at a public auction in 1949 for $54,000, at that time the highest price ever paid for a document at public auction. Cintas' properties were claimed by the Castro government after the Cuban Revolution in 1959, but Cintas, who died in 1957, willed the Gettysburg Address to the American people, provided it would be kept at the White House, where it was transferred in 1959. Garry Wills concluded the Bliss copy "is stylistically preferable to others in one significant way: Lincoln removed 'here' from 'that cause for which they (here) gave ...' The seventh 'here' is in all other versions of the speech." Wills noted the fact that Lincoln "was still making such improvements", suggesting Lincoln was more concerned with a perfected text than with an 'original' one. From November 21, 2008, to January 1, 2009, the Albert H. Small Documents Gallery at the Smithsonian Institution National Museum of American History hosted a limited public viewing of the Bliss copy, with the support of then-First Lady Laura Bush. The Museum also launched an online exhibition and interactive gallery to enable visitors to look more closely at the document. Another contemporary source of the text is the Associated Press dispatch, transcribed from the shorthand notes taken by reporter Joseph L. Gilbert. It also differs from the drafted text in a number of minor ways. Eyewitness reports vary as to their view of Lincoln's performance. In 1931, the printed recollections of 87-year-old Mrs. Sarah A. Cooke Myers, who was 19 when she attended the ceremony, suggest a dignified silence followed Lincoln's speech: "I was close to the President and heard all of the Address, but it seemed short. Then there was an impressive silence like our Menallen Friends Meeting. There was no applause when he stopped speaking." According to historian Shelby Foote, after Lincoln's presentation, the applause was delayed, scattered, and "barely polite". In contrast, Pennsylvania Governor Andrew Gregg Curtin maintained, "He pronounced that speech in a voice that all the multitude heard. The crowd was hushed into silence because the President stood before them ... It was so Impressive! It was the common remark of everybody. Such a speech, as they said it was!" Reinterment of soldiers' remains from field graves into the cemetery, which had begun within months of the battle, was less than half complete on the day of the ceremony. In an oft-repeated legend, Lincoln is said to have turned to his bodyguard Ward Hill Lamon and remarked that his speech, like a bad plow, "won't scour". According to Garry Wills, this statement has no basis in fact and largely originates from the unreliable recollections of Lamon. In Garry Wills's view, Lincoln "had done what he wanted to do". In a letter to Lincoln written the following day, Everett praised the President for his eloquent and concise speech, saying, "I should be glad if I could flatter myself that I came as near to the central idea of the occasion, in two hours, as you did in two minutes." Lincoln replied that he was glad to know the speech was not a "total failure". Other public reaction to the speech was divided along partisan lines. The Democratic-leaning "Chicago Times" observed, "The cheek of every American must tingle with shame as he reads the silly, flat and dishwatery utterances of the man who has to be pointed out to intelligent foreigners as the President of the United States."
In contrast, the Republican-leaning "The New York Times" was complimentary and printed the speech. In Massachusetts, the "Springfield Republican" also printed the entire speech, calling it "a perfect gem" that was "deep in feeling, compact in thought and expression, and tasteful and elegant in every word and comma". The "Republican" predicted that Lincoln's brief remarks would "repay further study as the model speech". On the sesquicentennial of the address, "The Patriot-News" of Harrisburg, Pennsylvania, formerly the "Patriot & Union", retracted its original reaction ("silly remarks" deserving "the veil of oblivion") stating: "Seven score and ten years ago, the forefathers of this media institution brought forth to its audience a judgment so flawed, so tainted by hubris, so lacking in the perspective history would bring, that it cannot remain unaddressed in our archives. ... the "Patriot & Union" failed to recognize [the speech's] momentous importance, timeless eloquence, and lasting significance. The "Patriot-News" regrets the error." Foreign newspapers also criticized Lincoln's remarks. "The Times" of London commented: "The ceremony [at Gettysburg] was rendered ludicrous by some of the luckless sallies of that poor President Lincoln." Congressman Joseph A. Goulden, then an eighteen-year-old school teacher, was present and heard the speech. He served in the United States Marine Corps during the war, and later had a successful career in insurance in Pennsylvania and New York City before entering Congress as a Democrat. In his later life, Goulden was often asked about the speech, since the passage of time made him one of a dwindling number of individuals who had been present for it. He commented on the event and Lincoln's speech in favorable terms, naming Lincoln's address as one of the inspirations for him to enter military service. Goulden's recollections included remarks to the House of Representatives in 1914. William R. Rathvon is the only known eyewitness of both Lincoln's arrival at Gettysburg and the address itself to have left an audio recording of his recollections. One year before his death in 1939, Rathvon's reminiscences were recorded on February 12, 1938, at the Boston studios of radio station WRUL, including his reading the address, itself, and a 78 rpm record was pressed. The title of the 78 record was "I Heard Lincoln That Day – William R. Rathvon, TR Productions". A copy wound up at National Public Radio (NPR) during a "Quest for Sound" project in 1999. NPR continues to air it around Lincoln's birthday. Like most people who came to Gettysburg, the Rathvon family was aware that Lincoln was going to make some remarks. The family went to the town square where the procession was to form to go out to the cemetery that had not been completed yet. At the head of the procession rode Lincoln on a gray horse preceded by a military band that was the first the young boy had ever seen. Rathvon describes Lincoln as so tall and with such long legs that they went almost to the ground; he also mentions the long eloquent speech given by Edward Everett of Massachusetts whom Rathvon accurately described as the "most finished orator of the day". Rathvon then goes on to describe how Lincoln stepped forward and "with a manner serious almost to sadness, gave his brief address". During the delivery, along with some other boys, young Rathvon wiggled his way forward through the crowd until he stood within 15 feet of Mr. Lincoln and looked up into what he described as Lincoln's "serious face". 
Rathvon recalls candidly that, although he listened "intently to every word the president uttered and heard it clearly", he explains, "boylike, I could not recall any of it afterwards". But he explains that if anyone said anything disparaging about "honest Abe", there would have been a "junior battle of Gettysburg". In the recording Rathvon speaks of Lincoln's speech allegorically "echoing through the hills". The only known and confirmed photograph of Lincoln at Gettysburg, taken by photographer David Bachrach was identified in the Mathew Brady collection of photographic plates in the National Archives and Records Administration in 1952. While Lincoln's speech was short and may have precluded multiple pictures of him while speaking, he and the other dignitaries sat for hours during the rest of the program. Given the length of Everett's speech and the length of time it took for 19th-century photographers to get "set up" before taking a picture, it is quite plausible that the photographers were ill-prepared for the brevity of Lincoln's remarks. The words "under God" do not appear in the Nicolay and Hay drafts but are included in the three later copies (Everett, Bancroft, and Bliss). Accordingly, some skeptics maintain that Lincoln did not utter the words "under God" at Gettysburg. However, at least three reporters telegraphed the text of Lincoln's speech on the day the Address was given with the words "under God" included. Historian William E. Barton argues that: The reporters present included Joseph Gilbert, from the Associated Press; Charles Hale, from the "Boston Advertiser"; John R. Young (who later became the Librarian of Congress), from the "Philadelphia Press"; and reporters from the "Cincinnati Commercial", "New York Tribune", and "The New York Times". Charles Hale "had notebook and pencil in hand, [and] took down the slow-spoken words of the President". "He took down what he declared was the exact language of Lincoln's address, and his declaration was as good as the oath of a court stenographer. His associates confirmed his testimony, which was received, as it deserved to be, at its face value." One explanation is that Lincoln deviated from his prepared text and inserted the phrase when he spoke. Ronald C. White, visiting professor of history at the University of California – Los Angeles and professor of American religious history emeritus at the San Francisco Theological Seminary, wrote in this context of Lincoln's insertion and usage of "under God": It was an uncharacteristically spontaneous revision for a speaker who did not trust extemporaneous speech. Lincoln had added impromptu words in several earlier speeches, but always offered a subsequent apology for the change. In this instance, he did not. And Lincoln included "under God" in all three copies of the address he prepared at later dates. "Under God" pointed backward and forward: back to "this nation", which drew its breath from both political and religious sources, but also forward to a "new birth". Lincoln had come to see the Civil War as a ritual of purification. The old Union had to die. The old man had to die. Death became a transition to a new Union and a new humanity. The phrase "under God" was used frequently in works published before 1860, usually with the meaning "with God's help". Outside the Cemetery and within sight of the cross-walk, a historical marker reads: Nearby, Nov. 
19, 1863, in dedicating the National Cemetery, Abraham Lincoln gave the address which he had written in Washington and revised after his arrival at Gettysburg the evening of November 18. Directly inside the Taneytown Road entrance are located the Rostrum and the "Lincoln Address Memorial." Neither of these is located within 300 yards of any of the five (or more) claimed locations for the dedicatory platform. Colonel W. Yates Selleck was a marshal in the parade on Consecration Day and was seated on the platform when Lincoln made the address. Selleck marked a map with the position of the platform and described it as "350 feet almost due north of Soldiers' National Monument, 40 feet from a point in the outer circle of lots where [the] Michigan and New York [burial sections] are separated by a path". A location which approximates this description is 39°49.243′N, 77°13.869′W. As pointed out in 1973 by retired park historian Frederick Tilberg, the "Selleck Site" is 25 feet lower than the crest of Cemetery Hill, and only the crest presents a panoramic view of the battlefield. A spectacular view from the location of the speech, noted by many eyewitnesses, is consistent with the "Traditional Site" at the Soldiers' National Monument (and other sites on the crest) but is inconsistent with the "Selleck Site." The "Kentucky Memorial", erected in 1975, is directly adjacent to the Soldiers' National Monument, and states, "Kentucky honors her son, Abraham Lincoln, who delivered his immortal address at the site now marked by the soldiers' monument." With its position at the center of the concentric rings of soldiers' graves and the continuing endorsement of Lincoln's native state, the Soldiers' National Monument persists as a credible location for the speech. Writing a physical description of the layout for the Gettysburg National Cemetery under construction in November 1863, the correspondent from the "Cincinnati Daily Commercial" described the dividing lines between the state grave plots as "the radii of a common center, where a flag pole is now raised, but where it is proposed to erect a national monument". With the inclusion of this quotation Tilberg inadvertently verifies a central principle of future photographic analyses—a flagpole, rather than the speakers' platform, occupied the central point of the soldiers' graves. In fact, the precision of the photo-analyses relies upon the coincidence of position between this temporary flag pole and the future monument. Confusing to today's tourist, the "Kentucky Memorial" is contradicted by a newer marker which was erected nearby by the Gettysburg National Military Park and locates the speakers' platform inside Evergreen Cemetery. Similarly, outdated National Park Service documents which pinpoint the location at the Soldiers' National Monument have not been systematically revised since the placement of the newer marker. Miscellaneous web pages perpetuate the "Traditional Site." Based upon photographic analysis, the Gettysburg National Military Park (G.N.M.P.) placed a marker which states, "The speakers' platform was located in Evergreen Cemetery to your left." The observer of this marker stands facing the fence which separates the two cemeteries (one public and one private). In 1982, Senior Park Historian Kathleen Georg Harrison first analyzed photographs and proposed a location in Evergreen Cemetery but has not published her analysis.
Speaking for Harrison without revealing details, two sources characterize her proposed location as "on or near [the] Brown family vault" in Evergreen Cemetery. William A. Frassanito, a former military intelligence analyst, documented a comprehensive photographic analysis in 1995, and it associates the location of the platform with the position of specific modern headstones in Evergreen Cemetery. According to Frassanito, the extant graves of Israel Yount (died 1892), John Koch (died 1913), and George E. Kitzmiller (died 1874) are among those which occupy the location of the 1863 speaker's stand. Christopher Oakley, Assistant Professor of New Media at the University of North Carolina at Asheville, and his students are "working to produce a lifelike virtual 3-D re-creation of Lincoln delivering the Gettysburg Address" as part of the "Virtual Lincoln Project." After taking precise measurements, some using lasers, and countless photographs on Cemetery Hill in 2013, Oakley's team used 3-D animation software Maya to estimate locations for the platform and the photographers who recorded its occupants. This work remains under development. The GNMP marker, Wills' interpretation of Harrison's analysis, and the Frassanito analysis concur that the platform was located in private Evergreen Cemetery, rather than public Soldiers' National Cemetery. The National Park Service's "National Cemetery Walking Tour" brochure is one NPS document which agrees: The Soldiers' National Monument, long misidentified as the spot from which Lincoln spoke, honors the fallen soldiers. [The location of the speech] was actually on the crown of this hill, a short distance on the other side of the iron fence and inside the Evergreen Cemetery, where President Lincoln delivered the Gettysburg Address to a crowd of some 15,000 people. While the GNMP marker is unspecific, providing only "to your left", the locations determined by the Harrison/Wills analysis and the Frassanito analysis differ by 40 yards. Frassanito has documented 1) his own conclusion, 2) his own methods and 3) a refutation of the Harrison site, but neither the GNMP nor Harrison has provided any documentation. Each of the three points to a location in Evergreen Cemetery, as do modern NPS publications. Although Lincoln dedicated the Gettysburg National Cemetery, the monument at the Cemetery's center actually has nothing to do with Lincoln or his famous speech. Intended to symbolize Columbia paying tribute to her fallen sons, the monument has seen its appreciation commandeered by the thirst for a tidy home for the speech. Freeing the Cemetery and Monument to serve their original purpose, honoring the Union departed, is as unlikely as a resolution to the location controversy and the erection of a public monument to the speech in the exclusively private Evergreen Cemetery. The importance of the Gettysburg Address in the history of the United States is underscored by its enduring presence in American culture. In addition to its prominent place carved into a stone cella on the south wall of the Lincoln Memorial in Washington, D.C., the Gettysburg Address is frequently referred to in works of popular culture, with the implicit expectation that contemporary audiences will be familiar with Lincoln's words. In the many generations that have passed since the Address, it has remained among the most famous speeches in American history, and is often taught in classes about history or civics.
Lincoln's Gettysburg Address is itself referenced in another of those famed orations, Martin Luther King, Jr.'s "I Have a Dream" speech. Standing on the steps of the Lincoln Memorial in August 1963, King began with a reference, by the style of his opening phrase, to President Lincoln and his enduring words: "Five score years ago, a great American, in whose symbolic shadow we stand today, signed the Emancipation Proclamation. This momentous decree came as a great beacon light of hope to millions of Negro slaves who had been seared in the flames of withering injustice." Phrases from the Address are often used or referenced in other works. The current Constitution of France states that the principle of the French Republic is "gouvernement du peuple, par le peuple et pour le peuple" ("government of the people, by the people, and for the people"), a literal translation of Lincoln's words. Sun Yat-sen's "Three Principles of the People" as well as the preamble for the 1947 Constitution of Japan were also inspired by that phrase. The aircraft carrier USS "Abraham Lincoln" has as its ship's motto the phrase "shall not perish". U.S. Senator Charles Sumner of Massachusetts wrote of the address and its enduring presence in American culture after Lincoln's assassination in April 1865: "That speech, uttered at the field of Gettysburg ... and now sanctified by the martyrdom of its author, is a monumental act. In the modesty of his nature he said 'the world will little note, nor long remember what we say here; but it can never forget what they did here.' He was mistaken. The world at once noted what he said, and will never cease to remember it." U.S. President John F. Kennedy stated in July 1963 about the battle and Lincoln's speech: "Five score years ago the ground on which we here stand shuddered under the clash of arms and was consecrated for all time by the blood of American manhood. Abraham Lincoln, in dedicating this great battlefield, has expressed, in words too eloquent for paraphrase or summary, why this sacrifice was necessary." Kennedy was himself assassinated three days after the Gettysburg Address centennial. In 2015, the Abraham Lincoln Presidential Library Foundation compiled "Gettysburg Replies: The World Responds to Abraham Lincoln's Gettysburg Address". The work challenges leaders to craft 272-word responses to celebrate Lincoln, the Gettysburg Address, or a related topic. One of the replies was by astrophysicist Neil deGrasse Tyson in which he made the point that one of Lincoln's greatest legacies was establishing, in the same year of the Gettysburg Address, the National Academy of Sciences, which had the long-term effect of "setting our Nation on a course of scientifically enlightened governance, without which we all may perish from this Earth". A common American myth about the Gettysburg Address is that Lincoln quickly wrote the speech on the back of an envelope. This widely held misunderstanding may have originated with a popular book, "The Perfect Tribute", by Mary Raymond Shipman Andrews (1906), which was assigned reading for generations of schoolchildren, sold 600,000 copies when published as a standalone volume, and was twice adapted for film. Other lesser-known claims include Harriet Beecher Stowe's assertion that Lincoln had composed the address "in only a few moments," and that of industrialist Andrew Carnegie, who claimed to have personally supplied Lincoln with a pen.
Gough Whitlam Edward Gough Whitlam (11 July 1916 – 21 October 2014) was the 21st Prime Minister of Australia, serving from 1972 to 1975. The Leader of the Labor Party from 1967 to 1977, Whitlam led his party to power for the first time in 23 years at the 1972 election. He won the 1974 election before being controversially dismissed by the Governor-General of Australia, Sir John Kerr, at the climax of the 1975 Australian constitutional crisis. Whitlam remains the only Australian prime minister to have his commission terminated in that manner. Whitlam served as an air navigator in the Royal Australian Air Force for four years during World War II, and worked as a barrister following the war. He was first elected to Parliament in 1952, representing Werriwa in the House of Representatives. Whitlam became Deputy Leader of the Labor Party in 1960, and in 1967, after the retirement of Arthur Calwell, was elected Leader and became the Leader of the Opposition. After narrowly losing the 1969 election, Whitlam led Labor to victory at the 1972 election after 23 years of continuous Liberal-Country Coalition Government. The Whitlam Government implemented a large number of new programs and policy changes, including the termination of military conscription, institution of universal health care and free university education, and the implementation of legal aid programs. With the opposition-controlled Senate delaying passage of bills, Whitlam called a double dissolution election in 1974 in which he won a majority in the House of Representatives, albeit a slightly reduced one, and picked up three Senate seats. The Whitlam government then instituted the first and only joint sitting enabled under s. 57 of the Constitution as part of the double dissolution process. Despite the government's second election victory, the opposition, reacting to government scandals and a flagging economy suffering from the 1973 oil crisis and the 1973–75 recession, continued to obstruct the government's program in the Senate. In late 1975, the Opposition Senators refused to allow a vote on the government's appropriation bills, returning them to the House of Representatives with a demand that the government go to an election, thus denying the government supply. Whitlam refused to back down, arguing that his government, which held a clear majority in the House of Representatives, was being held to ransom by the Senate. The crisis ended on 11 November, when Whitlam arrived at a pre-arranged meeting with the Governor-General, Sir John Kerr, at Government House in order to call a half-Senate election. Kerr dismissed him and commissioned the opposition leader, Malcolm Fraser, as prime minister. Labor lost the subsequent election by a landslide. Whitlam stepped down after losing again at the 1977 election, and retired from parliament in 1978. Upon the election of the Hawke Government in 1983, he was appointed as Ambassador to UNESCO, a position he filled with distinction, and was elected a member of the UNESCO Executive Board. He remained active into his nineties. The circumstances of his dismissal and the legacy of his government remain a large part of Australian political discourse. Edward Gough Whitlam was born on 11 July 1916 at the family home 'Ngara', 46 Rowland Street, Kew, a suburb of Melbourne. He was the older of two children (he had a younger sister, Freda) born to Martha (née Maddocks) and Fred Whitlam.
His father was a federal public servant who later served as Commonwealth Crown Solicitor, and Whitlam senior's involvement in human rights issues was a powerful influence on his son. Since the boy's maternal grandfather was also named Edward, from early childhood he was called by his middle name, Gough, which in turn had come from his paternal grandfather, who had been named after the British soldier Field-Marshal Hugh Gough, 1st Viscount Gough. In 1918, Fred Whitlam was promoted to deputy Crown solicitor and transferred to Sydney. The family lived first in the North Shore suburb of Mosman and then in Turramurra. At age six, Gough began his education at Chatswood Church of England Girls' School (early primary schooling at a girls' school was not unusual for small boys at the time). After a year there, he attended Mowbray House School and Knox Grammar School, in the suburbs of Sydney. Fred Whitlam was promoted again in 1927, this time to Assistant Crown Solicitor. The position was located in the new national capital of Canberra, and the Whitlam family moved there. Gough Whitlam remains the only prime minister to have spent his formative years in Canberra. At the time, conditions remained primitive in what was dubbed "the bush capital" and "the land of the blowflies". Gough attended the government Telopea Park School. In 1932, Whitlam's father transferred him to Canberra Grammar School where, at the Speech Day ceremony that year, he was awarded a prize by the Governor-General, Sir Isaac Isaacs. Whitlam enrolled at St Paul's College at the University of Sydney at the age of 18. He earned his first wages by appearing, with several other "Paulines", in a cabaret scene in the film "The Broken Melody"—the students were chosen because St Paul's requires formal wear at dinner, and they could therefore supply their own costumes. After receiving a Bachelor of Arts degree with second-class honours in classics, Whitlam remained at St Paul's to begin his law studies. He had originally contemplated an academic career, but his lacklustre marks made that unlikely. Dropping out of Greek classes, he professed himself unable to care for the "dry as dust" lectures of Enoch Powell. Soon after the outbreak of World War II in 1939, Whitlam enlisted in the Sydney University Regiment, part of the Militia. In late 1941, following the Japanese attack on Pearl Harbor, and with a year remaining in his legal studies, he volunteered for the Royal Australian Air Force (RAAF). In 1942, while awaiting entry into the service, Whitlam met and married Margaret Elaine Dovey, who had swum for Australia in the 1938 British Empire Games and was the daughter of barrister and future New South Wales Supreme Court judge Bill Dovey. He entered the RAAF on 20 June 1942. Whitlam trained as a navigator and bomb aimer, before serving with No. 13 Squadron RAAF, based mainly on the Gove Peninsula, Northern Territory, flying Lockheed Ventura bombers. He reached the rank of Flight Lieutenant. While in the service, he began his political activities, distributing literature for the Australian Labor Party during the 1943 federal election and urging the passage of the "Fourteen Powers" referendum of 1944, which would have expanded the powers of the federal government. Although the party was victorious, the referendum it advocated was defeated. In 1961, Whitlam said of the referendum defeat, "My hopes were dashed by the outcome and from that moment I determined to do all I could do to modernise the Australian Constitution." 
While still in uniform, Whitlam joined the ALP in Sydney in 1945. He was discharged from the RAAF on 17 October 1945, and continued to use Air Force log books to record all of the flights he took until 2007. Whitlam completed his studies after the war, obtained his Bachelor of Laws, and was admitted to the federal and New South Wales bars in 1947. With his war service loan, Whitlam built a house in seaside Cronulla. He also bought the block of land next door, using the prize money (£1,000 in security bonds) he received for winning the Australian National Quiz Championship in 1948 and 1949 (he was runner-up in 1950). He sought to make a career in the ALP there, but local Labor supporters were sceptical of Whitlam's loyalties, given his privileged background. In the postwar years, he practised law, concentrating on landlord/tenant matters, and sought to build his bona fides in the party. He ran twice—unsuccessfully—for the local council, once (also unsuccessfully) for the New South Wales Legislative Assembly, and campaigned for other candidates. In 1951, Bert Lazzarini, the Labor member for the Federal electorate of Werriwa, announced that he would stand down at the next election. Whitlam won the preselection as ALP candidate. Lazzarini died in 1952 before completing his term and Whitlam was elected to the House of Representatives in the ensuing by-election on 29 November 1952. Whitlam trebled Lazzarini's majority in a 12 per cent swing to Labor. Whitlam joined the ALP minority in the House of Representatives. His maiden speech provoked an interruption by a future prime minister, John McEwen, who was then told by the Speaker that maiden speeches are traditionally heard in silence. Whitlam responded to McEwen by stating that Benjamin Disraeli had been heckled in his maiden speech and had responded, "The time will come when you shall hear me." He told McEwen, "The time will come when you may interrupt me." According to early Whitlam biographers Laurie Oakes and David Solomon, this cool response put the Coalition government on notice that the new Member for Werriwa would be a force to be reckoned with. In the rough and tumble debate in the House of Representatives, Whitlam called fellow MHR Bill Bourke "this grizzling Quisling", Garfield Barwick (who would, as High Court Chief Justice, play a role in Whitlam's downfall) a "bumptious bastard", and stated that Bill Wentworth exhibited a "hereditary streak of insanity". After calling future prime minister William McMahon a "quean", he apologised. The ALP had been out of office since the Chifley Government's defeat in 1949 and, since 1951, had been under the leadership of Bert Evatt, whom Whitlam greatly admired. In 1954, the ALP seemed likely to return to power. The Prime Minister, Robert Menzies, adroitly used the defection of a Soviet official to his advantage, and his coalition of the Liberal and Country parties was returned in the 1954 election with a seven-seat majority. After the election, Evatt attempted to purge the party of industrial groupers, who had long dissented from party policy, and who were predominantly Catholic and anti-communist. The ensuing division in the ALP, which came to be known as "The Split", sparked the birth of the Democratic Labor Party (DLP). It was a conflict that helped to keep Labor out of power for a generation, since DLP supporters chose the Liberal Party in preferential voting. Whitlam supported Evatt throughout this period. 
In 1955, a redistribution divided Whitlam's electorate of Werriwa in two, with his Cronulla home located in the new electorate of Hughes. Although Whitlam would have received ALP support in either division, he chose to continue standing for Werriwa and moved from Cronulla to Cabramatta. This meant even longer journeys for his older children to attend school, since neither electorate had a high school at the time, and they attended school in Sydney. Whitlam was appointed to the Parliamentary Joint Committee on Constitutional Review in 1956. Biographer Jenny Hocking calls his service on the committee, which included members from all parties in both chambers of Parliament, one of the "great influences in his political development". According to Hocking, service on the committee caused Whitlam to focus not on internal conflicts consuming the ALP, but on Labor goals which were possible and worthwhile in the constitutional framework. Many Labor goals, such as nationalisation, ran contrary to the Constitution. Whitlam came to believe that the Constitution—and especially Section 96 (which allowed the federal government to make grants to the states)—could be used to advance a worthwhile Labor programme. By the late 1950s Whitlam was seen as a leadership contender once the existing Labor leaders exited the scene. Most of the party's major figures, including Evatt, Deputy Leader Arthur Calwell, Eddie Ward, and Reg Pollard, were in their sixties, twenty years older than Whitlam. In 1960, after losing three elections, Evatt resigned and was replaced by Calwell, with Whitlam defeating Ward for deputy leader. Calwell came within a handful of votes of winning the cliffhanger 1961 election. He had not wanted Whitlam as deputy leader, and believed Labor would have won if Ward had been in the position. Soon after the 1961 election, events began to turn against Labor. When President Sukarno of Indonesia announced that he intended to take over West New Guinea as the colonial Dutch departed, Calwell responded by declaring that Indonesia must be stopped by force. Calwell's statement was called "crazy and irresponsible" by Prime Minister Menzies, and the incident reduced public support for the ALP. At that time, the Federal Conference of the Labor Party, which dictated policy to parliamentary members, consisted of six members from each state, but not Calwell or Whitlam. In early 1963 a special conference met in a Canberra hotel to determine Labor policy regarding a proposed US base in northern Australia; Calwell and Whitlam were photographed by "The Daily Telegraph" peering in through the doors, waiting for the verdict. In an accompanying story, Alan Reid of the "Telegraph" wrote that Labor was ruled by "36 faceless men". The Liberals seized on it, issuing a leaflet called "Mr Calwell and the Faceless Men" which accused Calwell and Whitlam of taking direction from "36 unknown men, not elected to Parliament nor responsible to the people." Menzies manipulated the Opposition on issues that bitterly divided it, such as direct aid to the states for private schools, and the proposed base. He called an early election for November 1963, standing in support of those two issues. The Prime Minister performed better than Calwell on television and received an unexpected boost after the assassination of US President John F. Kennedy. As a result, the Coalition easily defeated Labor on a 10-seat swing. 
Whitlam had hoped Calwell would step down after 1963, but he remained, reasoning that Evatt had been given three opportunities to win, and that he, too, should be allowed a third try. Calwell dismissed proposals that the ALP leader and deputy leader should be entitled to membership of the party's conference (or of its governing 12-person Federal Executive, which had two representatives from each state), and instead ran successfully for one of the conference's Victorian seats. Labor did badly in a 1964 by-election in the Tasmanian electorate of Denison, and lost seats in the 1964 half-Senate election. The party was also defeated in the state elections in the most populous state, New South Wales, surrendering control of the state government for the first time since 1941. Whitlam's relationship with Calwell, never good, deteriorated further after publication of a 1965 article in "The Australian". The article reported off-the-record comments Whitlam had made that his leader was "too old and weak" to win office, and that the party might be gravely damaged by an "old-fashioned" 70-year-old Calwell seeking his first term as prime minister. Later that year, at Whitlam's urging, and over Calwell's objection, the biennial party conference made major changes to the party's platform: deleting support for the White Australia policy and making the ALP's leader and deputy leader "ex officio" members of the conference and executive, along with the party's leader and deputy leader in the Senate. As Whitlam considered the Senate unrepresentative, he opposed the admission of its ALP leaders to the party's governing bodies. Menzies retired in January 1966, and was succeeded as prime minister by the new Liberal Party leader, Harold Holt. After years of politics being dominated by the elderly Menzies and Calwell, the younger Holt was seen as a breath of fresh air, and attracted public interest and support in the run-up to the November election. In early 1966, the 36-member conference, with Calwell's assent, banned any ALP parliamentarian from supporting federal assistance to the states for spending on both government and private schools, commonly called "state aid". Whitlam broke with the party on the issue, and was charged with gross disloyalty by the executive, an offence which carried the penalty of expulsion from the party. Before the matter could be heard, Whitlam left for Queensland, where he campaigned intensively for the ALP candidate in the Dawson by-election. The ALP won, dealing the government its first by-election defeat since 1952. Whitlam survived the expulsion vote by a margin of only two, gaining both Queensland votes. At the end of April, Whitlam challenged Calwell for the leadership; though Calwell received two-thirds of the vote, he announced that if the party lost the upcoming election, he would not stand again for the leadership. Holt called an election for November 1966, in which Australia's involvement in the Vietnam War was a major issue. Calwell called for an "immediate and unconditional withdrawal" of Australian troops from Vietnam. Whitlam, however, said that this would deprive Australia of any voice in a settlement, and that regular troops, rather than conscripts, should remain under some circumstances. Calwell considered it disastrous that Whitlam had disputed the party line just five days before the election. The ALP suffered a crushing defeat; the party was reduced to 41 seats in the House of Representatives.
Shortly after the election, Whitlam faced another expulsion vote for his stance on Vietnam, and survived. True to his word, Calwell resigned two months after the election. At the caucus meeting on 8 February 1967, Whitlam was elected party leader, defeating leading left-wing candidate Dr Jim Cairns. Whitlam believed the Labor Party had little chance of being elected unless it could expand its appeal from the traditional working-class base to include the suburban middle class. He sought to shift control of the ALP from union officials to the parliamentary party, and hoped that even rank-and-file party members could be given a voice in the conference. In 1968, controversy erupted within the party when the executive refused to seat new Tasmanian delegate Brian Harradine, a Whitlam supporter who was considered a right-wing extremist. Whitlam resigned the leadership, demanding a vote of confidence from caucus. He defeated Cairns for the leadership in an unexpectedly close 38–32 vote. Despite the vote, the executive refused to seat Harradine. With the ALP's governing bodies unwilling to reform themselves, Whitlam worked to build support for change among ordinary party members. He was successful in reducing union influence in the party, though he was never able to give the rank and file a direct vote in selecting the executive. The Victoria branch of the party had long been a problem; its executive was far to the left of the rest of the ALP, and had little electoral success. Whitlam was able to reconstruct the Victoria party organisation against the will of its leaders, and the reconstituted state party proved essential to victory in the 1972 election. By the time of the 1969 party conference, Whitlam had gained considerable control over the ALP. That conference passed 61 resolutions, including broad changes to party policy and procedures. It called for the establishment of an Australian Schools Commission to consider the proper level of state aid for schools and universities, recognition of Aboriginal land claims, and expanded party policy on universal health care. The conference also called for increased federal involvement in urban planning, and would form the basis of "The Program" of modern socialism which Whitlam and the ALP would present to the voters in 1972. Since 1918, Labor had called for the abolition of the existing Australian Constitution, with the vesting of all political power in Parliament, a plan which would turn the states into powerless geographic regions. Beginning in 1965, Whitlam had sought to change this goal. He finally succeeded at the 1971 ALP Conference in Launceston, Tasmania, which called for Parliament to receive "such plenary powers as are necessary and desirable" to achieve the ALP's goals in domestic and international affairs. Labor also pledged to abolish the Senate; this goal would not be erased from the party platform until 1979, after Whitlam had stepped down as leader. Soon after taking the leadership, Whitlam reorganised the ALP caucus, assigning portfolios and turning the Labor frontbench into a shadow cabinet. While the Liberal-Country Coalition had a huge majority in the House of Representatives, Whitlam energised the party by campaigning intensively to win two by-elections in 1967: first in Corio in Victoria, and later that year in Capricornia (Queensland). The November half-Senate election saw a moderate swing to Labor and against the Coalition, compared with the general election the previous year. 
These federal victories, in which both Whitlam and Holt campaigned, helped give Whitlam the leverage he needed to carry out party reforms. At the end of 1967, Holt vanished while swimming in rough seas near Melbourne; his body was never recovered. John McEwen, as leader of the junior Coalition partner, the Country Party, took over as prime minister for three weeks until the Liberals could elect a new leader. Senator John Gorton won the vote and became prime minister. The leadership campaign was conducted mostly by television, and Gorton appeared to have the visual appeal needed to keep Whitlam out of office. Gorton resigned his seat in the Senate, and in February 1968 won the by-election for Holt's seat of Higgins in Victoria. For the remainder of the year, Gorton appeared to have the better of Whitlam in the House of Representatives. In his chronicle of the Whitlam years, however, speechwriter Graham Freudenberg asserts that Gorton's erratic behaviour, Whitlam's strengthening of his party, and events outside Australia (such as the Vietnam War) ate away at Liberal dominance. Gorton called an election for October 1969. Whitlam and the ALP, with little internal dissension, stood on a platform calling for domestic reform, an end to conscription, and the withdrawal of Australian troops from Vietnam by 1 July 1970. Whitlam knew that, given the ALP's poor position after the 1966 election, victory was unlikely. Nevertheless, Whitlam scored an 18-seat swing, Labor's best performance since losing government in 1949. Although the Coalition was returned for an eighth term in government, it was with a slim majority of seven seats, down from the 41 it held when the election was called. Labor actually won a bare majority of the two-party vote; only DLP preferences in four Melbourne-area seats kept Whitlam from becoming prime minister. The 1970 half-Senate election brought little change to Coalition control, but the Liberal vote fell for the first time below 40 per cent, representing a severe threat to Gorton's leadership. In March 1971, the resentment against Gorton came to a head when a confidence vote in the Liberal caucus resulted in a tie. Declaring that this was a sign he no longer had the confidence of the party, Gorton resigned, and William McMahon was elected his successor. With the Liberals in turmoil, Whitlam and the ALP sought to gain public trust as a credible government-in-waiting. The party's actions, such as its abandonment of the White Australia policy, gained favourable media attention. The Labor leader flew to Papua New Guinea and pledged himself to the independence of what was then an Australian possession. In 1971, Whitlam flew to Beijing and met with Chinese officials, including Zhou Enlai. McMahon attacked Whitlam for the visit and claimed that the Chinese had manipulated him. This attack backfired when US President Richard Nixon announced that he would visit China the following year. His National Security Advisor, Henry Kissinger, visited Beijing from 9 to 11 July (less than a week after Whitlam's visit of 4–6 July), and (unknown to Whitlam) some of Kissinger's staff had actually been in Beijing preparing for Kissinger's visit at the same time as the Labor delegation. According to Whitlam biographer Jenny Hocking, the incident transformed Whitlam into an international statesman, while McMahon was seen as reacting defensively to Whitlam's foreign policy ventures.
Other errors by McMahon, such as a confused ad-lib speech while visiting Washington, and a statement to Indonesia's President Suharto that Australia was a "west European nation", also damaged the government. By early 1972, Labor had established a clear lead in the polls; indeed, for the first time since 1955 its support was greater than the combined vote for the Coalition and DLP. Unemployment was at a ten-year peak, rising to 2.14 percent in August (though the unemployment rate was calculated differently compared to the present, and did not include thousands of rural workers on Commonwealth-financed relief work). Inflation was also at its highest rate since the early 1950s. The government recovered slightly in the August Budget session of Parliament, proposing income tax cuts and increased spending. The Labor strategy for the run-up to the election was to sit back and allow the Coalition to make mistakes. Whitlam controversially stated in March that "draft-dodging is not a crime" and that he would be open to a revaluation of the Australian dollar. With the Coalition sinking in the polls and his own personal approval ratings down as low as 28 percent, McMahon waited as long as he could, finally calling an election for the House of Representatives for 2 December. Whitlam noted that the polling day was the anniversary of the Battle of Austerlitz – at which another "ramshackle, reactionary coalition" had been given a "crushing defeat". Labor campaigned under the slogan "It's Time", an echo of Menzies' successful 1949 slogan, "It's Time for a Change". Surveys showed that even Liberal voters approved of the Labor slogan. Whitlam pledged an end to conscription and the release of individuals who had refused the draft; an income tax surcharge to pay for universal health insurance; free dental care for students; and renovation of aging urban infrastructure. The party pledged to eliminate university tuition fees and establish a schools commission to evaluate educational needs. The party benefited from the support of the proprietor of News Limited, Rupert Murdoch, who preferred Whitlam over McMahon. Labor was so dominant in the campaign that some of Whitlam's advisers urged him to stop joking about McMahon; people were feeling sorry for him. The election saw the ALP increase its tally by 12 seats, mostly in suburban Sydney and Melbourne, for a majority of nine in the House of Representatives. The ALP gained little beyond the suburban belts, however, losing a seat in South Australia and two in Western Australia. Whitlam took office with a majority in the House of Representatives, but without control of the Senate (elected in the 1967 and 1970 half-elections). The Senate at that time consisted of ten members from each of the six states, elected by proportional representation. The ALP parliamentary caucus chose the ministers, with the party leader only having the power to assign portfolios. A caucus meeting could not be held until after the final results came in on 15 December. In the meantime, McMahon would remain caretaker prime minister. Whitlam, however, was unwilling to wait that long. On 5 December, with Labor's win beyond doubt even though counting was still underway, Whitlam had the Governor-General, Sir Paul Hasluck, swear him in as prime minister and Labor's deputy leader, Lance Barnard, as deputy prime minister. The two men held 27 portfolios during the two weeks before a full cabinet could be determined. 
During the two weeks the so-called "duumvirate" held office, Whitlam sought to fulfill those campaign promises that did not require legislation. Whitlam ordered negotiations to establish full relations with the People's Republic of China, and broke those with Taiwan. Legislation allowed the defence minister to grant exemptions from conscription. Barnard held this office, and exempted everyone. Seven men were at that time incarcerated for refusing conscription; Whitlam arranged for their liberation. The Whitlam government in its first days reopened the equal pay case pending before the Commonwealth Conciliation and Arbitration Commission, and appointed a woman, Elizabeth Evatt, to the commission. Whitlam and Barnard eliminated sales tax on contraceptive pills, announced major grants for the arts, and appointed an interim schools commission. The duumvirate barred racially discriminatory sports teams from Australia, and instructed the Australian delegation at the United Nations to vote in favour of sanctions on apartheid South Africa and Rhodesia. It also ordered the Australian Army Training Team home from Vietnam, ending Australia's involvement in the war; most troops (including all conscripts) had been withdrawn by McMahon. According to Whitlam speechwriter Graham Freudenberg, the duumvirate was a success, as it showed that the Labor government could manipulate the machinery of government, despite its long absence from office. However, Freudenberg noted that the rapid pace and public excitement caused by the duumvirate's actions caused the Opposition to be wary of giving Labor too easy a time, and gave rise to one post-mortem assessment of the Whitlam government: "We did too much too soon." The McMahon government had consisted of 27 ministers, twelve of whom comprised the Cabinet. In the run-up to the election, the Labor caucus had decided that should the party take power, all 27 ministers were to be Cabinet members. Intense canvassing took place amongst ALP parliamentarians as the duumvirate did its work, and on 18 December the caucus elected the Cabinet. The results were generally acceptable to Whitlam, and within three hours, he had announced the portfolios of the Cabinet members. To give himself greater control over the Cabinet, in January 1973 Whitlam established five Cabinet committees (with the members appointed by himself, not the caucus) and took full control of the Cabinet agenda. Gough Whitlam, prime minister for fewer than three years between 1972 and 1975, pushed through a raft of reforms that radically changed Australia's economic, legal and cultural landscape. The Whitlam government abolished the death penalty for federal crimes. Legal aid was established, with offices in each state capital. It abolished university fees, and established the Schools Commission to allocate funds to schools. Whitlam founded the Department of Urban Development and, having lived in developing Cabramatta, most of which lacked sewage facilities, set a goal to leave no urban home unsewered. The Whitlam government gave grants directly to local government units for urban renewal, flood prevention, and the promotion of tourism. Other federal grants financed highways linking the state capitals, and paid for standard-gauge rail lines between the states. The government attempted to set up a new city at Albury–Wodonga on the Victoria–New South Wales border. The process was started for "Advance Australia Fair" to become the country's national anthem in place of "God Save the Queen". 
The Order of Australia replaced the British honours system in early 1975. In 1973, the National Gallery of Australia, then called the Australian National Gallery, bought the painting "Blue Poles" by contemporary artist Jackson Pollock for US$2 million (A$1.3 million at the time of payment) – about a third of its annual budget. This required Whitlam's personal permission, which he gave on the condition the price was publicized. The purchase created a political and media scandal, and was said to symbolise, alternatively, Whitlam's foresight and vision or his profligate spending. Whitlam travelled extensively as prime minister, and was the first Australian prime minister to visit China while in office. He was criticised for this travel, especially after Cyclone Tracy struck Darwin; he interrupted an extensive tour of Europe for 48 hours (deemed too brief a period by many) to view the devastation. In February 1973, the Attorney General, Senator Lionel Murphy, led a police raid on the Melbourne office of the Australian Security Intelligence Organisation, which was under his ministerial responsibility. Murphy believed that ASIO might have files relating to threats against Yugoslav Prime Minister Džemal Bijedić, who was about to visit Australia, and feared ASIO might conceal or destroy them. The Opposition attacked the Government over the raid, terming Murphy a "loose cannon". A Senate investigation of the incident was cut short when Parliament was dissolved in 1974. According to journalist and author Wallace Brown, the controversy over the raid continued to dog the Whitlam government throughout its term, because the incident was "so silly". From the start of the Whitlam government, the Opposition, led by Billy Snedden (who replaced McMahon as Liberal leader in December 1972), sought to use control of the Senate to baulk Whitlam. It did not seek to block all government legislation; the Coalition senators, led by Senate Liberal leader Reg Withers, sought to block government legislation only when the obstruction would advance the Opposition's agenda. The Whitlam government also had troubles in relations with the states. New South Wales refused the government's request that it close the Rhodesian Information Centre in Sydney. The Queensland premier, Joh Bjelke-Petersen, refused to consider any adjustment in Queensland's border with Papua New Guinea, which, due to the state's ownership of islands in the Torres Strait, came within half a kilometre (about one-third of a mile) of the Papuan mainland. Liberal state governments in New South Wales and Victoria were re-elected by large margins in 1973. Whitlam and his majority in the House of Representatives put two constitutional referendum proposals to voters in December 1973, seeking to transfer control of wages and prices from the states to the federal government. The two propositions failed to attract a majority of voters in any state, and were rejected by over 800,000 votes nationwide. By early 1974, the Senate had rejected nineteen government bills, ten of them twice. With a half-Senate election due by mid-year, Whitlam looked for ways to shore up support in that body. Queensland senator and former DLP leader Vince Gair signalled his willingness to leave the Senate for a diplomatic post. Gair's term would not expire until the following half-Senate election (or upon a double dissolution election).
With five Queensland seats to be filled at the half-Senate election, the ALP would probably win only two, but if six (including Gair's) were contested, the party would likely win a third. Possible control of the Senate was therefore at stake; Whitlam agreed to Gair's request and had Governor-General Sir Paul Hasluck appoint him ambassador to Ireland. Word leaked of Gair's pending resignation, and Whitlam's opponents attempted to counteract his manoeuvre. On what became known as the "Night of the Long Prawns", Country Party members secreted Gair at a small party in a legislative office as the ALP searched for him to secure his written resignation. As Gair enjoyed beer and prawns, Bjelke-Petersen advised the Queensland governor, Sir Colin Hannah, to issue writs for only the usual five vacancies, since Gair's seat was not yet vacant, effectively countering Whitlam's plan. The Senate had by then refused to pass six bills that had twice been passed by the House of Representatives. With the Opposition threatening to block supply, Whitlam used the Senate's recalcitrance to trigger a double dissolution election, holding it instead of the half-Senate election. After a campaign featuring the Labor slogan "Give Gough a fair go", the Whitlam government was returned, with its majority in the House of Representatives cut from seven to five and its Senate representation increased by three seats. It was only the second time since Federation that a Labor government had been elected to a second full term. The government and the opposition each had 29 Senators with two seats held by independents. The deadlock over the twice-rejected bills was broken, uniquely in Australian history, with a special joint sitting of the two houses of Parliament under Section 57 of the Constitution. This session, authorised by the new governor-general, Sir John Kerr, passed bills providing for universal health insurance (known then as Medibank, today as Medicare) and providing the Northern Territory and Australian Capital Territory with representation in the Senate, effective at the next election. By mid-1974, Australia was in an economic slump, suffering from the 1973 oil crisis and 1973–75 recession. The 1973 oil crisis had caused prices to spike and, according to government figures, inflation topped 13 percent for over a year between 1973 and 1974. Part of the inflation was due to Whitlam's desire to increase wages and conditions of the Commonwealth Public Service as a pacesetter for the private sector. The Whitlam government had cut tariffs by 25 percent in 1973; 1974 saw an increase in imports of 30 percent and a $1.5 billion increase in the trade deficit. Primary producers of commodities such as beef were caught in a credit squeeze as short-term rates rose to extremely high levels. Unemployment also rose significantly. Unease within the ALP led to Barnard's defeat when Jim Cairns challenged him for the deputy leadership. Whitlam gave little help to his embattled deputy, who had formed the other half of the duumvirate. Despite these economic indicators, the Budget presented in August 1974 saw large increases in spending, especially in education. Treasury officials had advised a series of tax and fee increases, ranging from excise taxes to the cost of posting a letter; their advice was mostly rejected by Cabinet. The Budget was unsuccessful in dealing with inflation and unemployment, and Whitlam introduced large tax cuts in November. He also announced additional spending to help the private sector.
Beginning in October 1974, the Whitlam government sought overseas loans to finance its development plans, with the newly enriched oil nations a likely target. Whitlam attempted to secure financing before informing the Loan Council (which included state officials hostile to Whitlam), and his government authorised Pakistani financier Tirath Khemlani to act as an intermediary in the hope of securing US$4 billion in loans. While the Loans Affair never resulted in an actual loan, according to author and Whitlam speechwriter Graham Freudenberg, "The only cost involved was the cost to the reputation of the Government. That cost was to be immense—it was government itself." Whitlam appointed Senator Murphy to the High Court, even though Murphy's Senate seat would not be up for election if a half-Senate election were held. Labor then held three of the five short-term New South Wales Senate seats. Under proportional representation, Labor could hold its three short-term seats in the next half-Senate election but, if Murphy's seat were also contested, Labor was unlikely to win four out of six. Thus, a Murphy appointment meant the almost certain loss of a seat in the closely divided Senate at the next election. Whitlam appointed Murphy anyway. By convention, senators appointed by the state legislature to fill casual vacancies were from the same political party as the former senator. The New South Wales premier, Tom Lewis, felt that this convention applied only to vacancies caused by deaths or ill-health, and arranged for the legislature to elect Cleaver Bunton, former mayor of Albury and an independent. By March 1975, many Liberal parliamentarians felt that Snedden was doing an inadequate job as leader of the Opposition, and that Whitlam was dominating him in the House of Representatives. Malcolm Fraser challenged Snedden for the leadership, and defeated him on 21 March. Soon after Fraser's accession, controversy arose over the Whitlam government's actions in trying to restart peace talks in Vietnam. As the North prepared to end the civil war, Whitlam sent cables to both Vietnamese governments, telling Parliament that both cables were substantially the same. The Opposition contended he had misled Parliament, and a motion to censure Whitlam was defeated along party lines. The Opposition also attacked Whitlam for not allowing enough South Vietnamese refugees into Australia, with Fraser calling for the entry of 50,000. Freudenberg alleges that 1,026 Vietnamese refugees entered Australia in the final eight months of the Whitlam government, and only 399 in 1976 under Fraser. However, by 1977, Australia had accepted over five thousand refugees. As the political situation deteriorated, Whitlam and his government continued to enact legislation: the Family Law Act 1975 provided for no-fault divorce, while the Racial Discrimination Act 1975 enabled Australia to ratify the International Convention on the Elimination of All Forms of Racial Discrimination, which it had signed under Holt but never ratified. In August 1975, Whitlam gave the Gurindji people of the Northern Territory title deeds to part of their traditional lands, beginning the process of Aboriginal land reform. The next month, Australia granted independence to Papua New Guinea. Following the 1974 Carnation Revolution, Portugal began a process of decolonisation and a withdrawal from Portuguese Timor (later East Timor).
Australians had long taken an interest in the colony; the nation had sent troops to the region during World War II, and many East Timorese had fought the Japanese as guerrillas. In September 1974, Whitlam met with President Suharto in Indonesia and indicated that he would support Indonesia if it annexed East Timor. At the height of the Cold War, and in the context of the American retreat from Indo-China, he felt that incorporation of East Timor into Indonesia would enhance the stability of the region, and reduce the risk of the East Timorese FRETILIN movement, which many feared was communist, coming to power. Whitlam had offered Barnard a diplomatic post; in early 1975 Barnard agreed to this, triggering a by-election in his Tasmanian electorate of Bass. The election on 28 June proved a disaster for Labor, which lost the seat with a swing against it of 17 percent. The next week, Whitlam fired Barnard's successor as deputy prime minister, Cairns, who had misled Parliament regarding the Loans Affair amid controversy about his relationship with his office manager, Junie Morosi. At the time of Cairns's dismissal, one Senate seat was vacant, following the death on 30 June of Queensland ALP Senator Bertie Milliner. The state Labor party nominated Mal Colston, resulting in a deadlock. The unicameral Queensland legislature twice voted against Colston, and the party refused to submit any alternative candidates. Bjelke-Petersen finally convinced the legislature to elect a low-level union official, Albert Field, who had contacted his office and expressed a willingness to serve. In interviews, Field made it clear he would not support Whitlam. Field was expelled from the ALP for standing against Colston, and Labor senators boycotted his swearing-in. Whitlam argued that, because of the manner of filling vacancies, the Senate was "corrupted" and "tainted", with the Opposition enjoying a majority they did not win at the ballot box. In October 1975, the Opposition, led by Malcolm Fraser, determined to withhold supply by deferring consideration of appropriation bills. With Field on leave (his Senate appointment having been challenged), the Coalition had an effective majority of 30–29 in the Senate. The Coalition believed that if Whitlam could not deliver supply, and would not advise new elections, Kerr would have to dismiss him. Supply would run out on 30 November. The stakes were raised in the conflict on 10 October, when the High Court declared valid the Act granting the territories two senators each. In a half-Senate election, most successful candidates would not take their places until 1 July 1976, but the territories' senators, and those filling Field's and Bunton's seats, would assume their seats immediately. This gave Labor an outside chance of controlling the Senate, at least until 1 July 1976. On 14 October, Labor minister Rex Connor, mastermind of the loans scheme, was forced to resign when Khemlani released documents showing that Connor had made misleading statements. The continuing scandal bolstered the Coalition in their stance that they would not concede supply. Whitlam, on the other hand, convinced that he would win the battle, was glad of the distraction from the Loans Affair, and believed that he would "smash" not only the Senate, but Fraser's leadership as well. In the House of Representatives on 21 October and repeatedly thereafter, Whitlam and his ministers claimed that the Opposition was damaging not only the constitution, but the economy as well.
The Coalition senators remained united, though several became increasingly concerned about the tactic of blocking supply. As the crisis dragged into November, Whitlam attempted to make arrangements for public servants and suppliers to be able to cash cheques at banks. These transactions would be temporary loans which the government would repay once supply was restored. This plan to prolong government without supply was presented to Kerr unsigned on 6 November, under the title "Draft Joint Opinion" (ostensibly of solicitor-general Maurice Byers and attorney-general Kep Enderby). It proposed that public employees, including members of the armed forces and police, "could assign arrears of pay by way of mortgage". The government's refusal to formalise this and other "advice" was a factor justifying the Governor-General's fateful resort to alternative legal advice. Governor-General Kerr was following the crisis closely. At a luncheon with Whitlam and several of his ministers on 30 October, Kerr suggested a compromise: if Fraser conceded supply, Whitlam would agree not to call the half-Senate election until May or June 1976, or alternatively would agree not to call the Senate into session until after 1 July. Whitlam rejected the idea, seeking to end the Senate's right to deny supply. On 3 November, after a meeting with Kerr, Fraser proposed that if the government agreed to hold a House of Representatives election at the same time as the half-Senate election, the Coalition would concede supply. Whitlam rejected this offer, stating that he had no intention of advising a House election for at least a year. With the crisis unresolved, Kerr decided to dismiss Whitlam as prime minister. Fearing that Whitlam would go to the Queen and potentially have him removed, the Governor-General gave Whitlam no prior hint. He conferred (against Whitlam's advice) with High Court Chief Justice Sir Garfield Barwick, who agreed that he had the power to dismiss Whitlam. A meeting among the party leaders, including Whitlam and Fraser, to resolve the crisis on the morning of 11 November came to nothing. Kerr and Whitlam met at the Governor-General's office that afternoon at 1.00 pm. Unknown to Whitlam, Fraser was waiting in an ante-room; Whitlam later stated that he would not have set foot in the building if he had known Fraser was there. Whitlam, as he had told Kerr by phone earlier that day, came prepared to advise a half-Senate election, to be held on 13 December. Kerr instead told Whitlam that he had terminated his commission as prime minister, and handed him a letter to that effect. After the conversation, Whitlam returned to the Prime Minister's residence, The Lodge, had lunch and conferred with his advisers. Immediately after his meeting with Whitlam, Kerr commissioned Fraser as 'caretaker' Prime Minister, on the assurance he could obtain supply and would then advise Kerr to dissolve both houses for election. In the confusion, Whitlam and his advisers did not immediately tell any Senate members of the dismissal, with the result that when the Senate convened at 2.00 pm, the appropriation bills were rapidly passed, with the ALP senators assuming the Opposition had given in. The bills were soon sent to Kerr to receive Royal Assent. At 2.34 pm, ten minutes after supply had been secured, Fraser rose in the House and announced he was prime minister. 
Whitlam immediately moved a motion of no confidence in the new prime minister, which the House passed, instructing the Speaker, Gordon Scholes, to advise Kerr to reinstate Whitlam. Kerr refused to receive Scholes, keeping him waiting for more than an hour while he dissolved Parliament by proclamation: his Official Secretary, David Smith, came to Parliament House to proclaim the dissolution from the front steps. A large, angry crowd had gathered, and Smith was nearly drowned out by their noise. He concluded with the traditional "God save the Queen". The dismissed prime minister, Whitlam, who had been standing behind Smith, then addressed the crowd: Well may we say "God save the Queen", because nothing will save the Governor-General! The Proclamation which you have just heard read by the Governor-General's Official Secretary was countersigned Malcolm Fraser, who will undoubtedly go down in Australian history from Remembrance Day 1975 as Kerr's cur. They won't silence the outskirts of Parliament House, even if the inside has been silenced for a few weeks. ... Maintain your rage and enthusiasm for the campaign for the election now to be held and until polling day. As the ALP began the 1975 campaign, it seemed that its supporters would maintain their rage. Early rallies drew huge crowds, with attendees handing Whitlam money to pay election expenses. The crowds greatly exceeded those in any of Whitlam's earlier campaigns; in Sydney, 30,000 partisans gathered for an ALP rally in The Domain below a banner: "Shame Fraser Shame". Fraser's appearances drew protests, and a letter bomb sent to Kerr was defused by authorities. Instead of making a policy speech to keynote his campaign, Whitlam made a speech attacking his opponents and calling 11 November "a day which will live in infamy". Polls from the first week of campaigning showed a nine-point swing against Labor, which, if repeated at the election, would have devastated the party. Whitlam's campaign team disbelieved the results at first, but additional polling returns were clear: the electorate had turned against Labor. The Coalition attacked Labor over economic conditions, and released television commercials including "The Three Dark Years" showing images from Whitlam government scandals. The ALP campaign, which had concentrated on the issue of Whitlam's dismissal, did not address the economy until its final days. By that time Fraser, confident of victory, was content to sit back, avoid specifics and make no mistakes. On election night, 13 December, the Coalition won the largest majority in Australian history, taking 91 seats to Labor's 36. Labor suffered a 6.5 percent swing against it and its caucus was cut almost in half, losing 30 seats. Labor was left with five fewer seats than it had when Whitlam took the leadership. The Coalition also won a 37–25 majority in the Senate. Whitlam stayed on as Opposition leader, surviving a leadership challenge. In early 1976, an additional controversy broke when it was reported that Whitlam had been involved in ALP attempts to raise $500,000 during the election campaign from the Ahmed Hassan al-Bakr government of Iraq. The deal ultimately fell through: no money had actually been paid, and no charges were filed. The Whitlams were visiting China at the time of the Tangshan earthquake in July 1976, though they were staying in Tianjin, away from the epicentre.
"The Age" printed a cartoon by Peter Nicholson showing the Whitlams huddled together in bed with Margaret Whitlam saying, "Did the earth move for you too, dear?" This cartoon prompted a page full of outraged letters from Labor partisans and a telegram from Gough Whitlam, safe in Tokyo, requesting the original of the cartoon. In early 1977 Whitlam faced a leadership challenge from Bill Hayden, the final treasurer in the Whitlam government, and won by a two-vote margin. Fraser called an election for 10 December. Although Labor managed to pick up five seats, the Coalition still enjoyed a majority of 48. According to Freudenberg, "The meaning and the message were unmistakable. It was the Australian people's rejection of Edward Gough Whitlam." Whitlam's son Tony, who had joined his father in the House of Representatives at the 1975 election, was defeated. Shortly after the election, Whitlam resigned as party leader and was succeeded by Hayden. Whitlam was made a Companion of the Order of Australia in June 1978, and resigned from Parliament on 31 July of the same year. He then held various academic positions. When Labor returned to power under Bob Hawke in 1983, Whitlam was appointed as ambassador to UNESCO, based in Paris. He served for three years in this post, defending UNESCO against allegations of corruption. At the end of his term as Australia's Ambassador Whitlam was elected to the Executive Board of UNESCO for a 3-year term, until 1989. In 1985, he was appointed to Australia's Constitutional Commission. Whitlam was appointed chairman of the National Gallery of Australia in 1987 after his son Nick (then managing director of the State Bank of New South Wales) turned down the position. He and Margaret Whitlam were part of the bid team that in 1993 persuaded the International Olympic Committee to give Sydney the right to host the 2000 Summer Olympics. John Kerr died in 1991. He and Whitlam never reconciled; indeed, Whitlam always saw his dismissal from office as a "constitutional coup d'état". Whitlam and Fraser put aside their differences and became friends during the 1980s, though they never discussed the events of 1975. The two subsequently campaigned together in support of the 1999 Australian republic referendum. In March 2010, Fraser visited Whitlam at his Sydney office while on a book tour to promote his memoirs. Whitlam accepted an autographed copy of the book and presented Fraser with a copy of his 1979 book about the dismissal, "The Truth of the Matter". In 2003, Whitlam's former research assistant Mark Latham became the leader of the ALP. Although Latham was more conservative than Whitlam, the former prime minister gave Latham much support, according to one account "anointing him as his political heir". Latham, like Whitlam, represented Werriwa in the House of Representatives. Whitlam supported Latham when he opposed the invasion and occupation of Iraq. Whitlam supported fixed four-year terms for both houses of Parliament. In 2006, he accused the ALP of failing to press for this change. In April 2007, he and Margaret Whitlam were both made life members of the Australian Labor Party. This was the first time anyone had been made a life member of the party organisation at the national level. In 2007, Whitlam testified at an inquest into the death of Brian Peters, one of five Australia-based TV personnel killed in East Timor in October 1975. 
Whitlam indicated that he had warned Peters' colleague, Greg Shackleton (who was also killed), that the Australian government could not protect them in East Timor and that they should not go there. He also said that Shackleton was "culpable" if he had not passed on Whitlam's warning. Whitlam joined three other former prime ministers in February 2008 in returning to Parliament to witness the Federal Government apology to the Aboriginal Stolen Generations by the then prime minister Kevin Rudd. On 21 January 2009, Whitlam achieved a greater age than any other prime minister of Australia, surpassing the previous record holder, Frank Forde. On the 60th anniversary of his marriage to Margaret Whitlam, he called it "very satisfactory" and claimed a record for "matrimonial endurance". In 2010, it was reported that Whitlam had moved into an aged care facility in Sydney's inner east in 2007. Despite this, he continued to go to his office three days a week. Margaret Whitlam remained in the couple's nearby apartment. In early 2012 she suffered a fall there, leading to her death in hospital at age 92 on 17 March of that year, a month short of the Whitlams' 70th wedding anniversary. Gough Whitlam died on the morning of 21 October 2014. His family announced that there would be a private cremation and a public memorial service. Whitlam was survived by his four children, five grandchildren and nine great-grandchildren. He was the longest-lived Australian Prime Minister, dying at the age of 98 years and 102 days. A state memorial service was held on 5 November 2014 in the Sydney Town Hall and was led by Kerry O'Brien. The Welcome to Country was given by Auntie Millie Ingram and eulogies were delivered by Graham Freudenberg, Cate Blanchett, Noel Pearson, John Faulkner and Antony Whitlam. Pearson's contribution in particular was hailed as "one of the best political speeches of our time". Musical performances were delivered by William Barton (a didgeridoo improvisation), Paul Kelly and Kev Carmody (their land rights protest song "From Little Things Big Things Grow"), as well as the Sydney Philharmonia Choir and the Sydney Symphony Orchestra, conducted by Benjamin Northey. In accordance with Whitlam's wishes, the orchestra performed "In Tears of Grief" from Bach's "St Matthew Passion", "Va, pensiero" from Verdi's "Nabucco", "Un Bal" from "Symphonie fantastique" by Berlioz and, as the final piece, "Jerusalem" by Parry. "Jerusalem" was followed by a flypast of four RAAF F/A-18 Hornets in missing man formation. Those attending the memorial included the current and some former governors-general, the current and all living former prime ministers, and members of the family of Vincent Lingiari. The two-hour service, attended by 1,000 invited guests and 900 others, was screened to thousands outside the Hall, as well as in Cabramatta and Melbourne, and broadcast live by ABC television. In honour of Whitlam, the Australian Electoral Commission created the Division of Whitlam in the House of Representatives in place of the Division of Throsby, with effect from the 2016 election. ACT Chief Minister Katy Gallagher announced that a future Canberra suburb would be named for Whitlam, and that his family would be consulted about other potential memorials. Gough Whitlam Park in Earlwood, New South Wales, is named after him. Whitlam remains well remembered for the circumstances of his dismissal.
It is a legacy he did little to efface; he wrote a 1979 book, "The Truth of the Matter" (the title is a play on that of Kerr's 1978 memoir, "Matters for Judgment") and devoted part of his subsequent book, "Abiding Interests", to the circumstances of his removal. According to journalist and author Paul Kelly, who penned two books on the crisis, Whitlam "achieved a paradoxical triumph: the shadow of the dismissal has obscured the sins of his government". More books have been written about Whitlam, including his own writings, than about any other Australian prime minister. According to Whitlam biographer Jenny Hocking, for a period of at least a decade, the Whitlam era was viewed almost entirely in negative terms, but that has changed. Still, she feels that Australians take for granted programmes and policies initiated by the Whitlam government, such as recognition of China, legal aid, and Medicare. Ross McMullin, who wrote an official history of the ALP, notes that Whitlam remains greatly admired by many Labor supporters because of his efforts to reform Australian government, and his inspiring leadership. Economist and writer Ross Gittins evaluated the Whitlam Government's responses to the economic challenges of the time. Wallace Brown describes Whitlam in his book about his experiences covering Australian prime ministers as a journalist: Whitlam was the most paradoxical of all Prime Ministers in the last half of the 20th century. A man of superb intellect, knowledge, and literacy, he yet had little ability when it came to economics. ... Whitlam rivalled Menzies in his passion for the House of Representatives and ability to use it as his stage, and yet his parliamentary skills were rhetorical and not tactical. He could devise a strategy and then often botch the tactics in trying to implement that strategy. ... Above all he was a man of grand vision with serious blind spots. Whitlam's last words in the documentary film "Gough Whitlam – In His Own Words" (2002) were in response to a question about his status as an icon and elder statesman. He said: "I hope this is not just because I was a martyr; the fact was, I was an achiever". Grover Cleveland Stephen Grover Cleveland (March 18, 1837 – June 24, 1908) was an American politician and lawyer who was the 22nd and 24th President of the United States, the only president in American history to serve two non-consecutive terms in office (1885–1889 and 1893–1897). He won the popular vote for three presidential elections—in 1884, 1888, and 1892—and was one of two Democrats (with Woodrow Wilson) to be elected president during the era of Republican political domination dating from 1861 to 1933. Cleveland was the leader of the pro-business Bourbon Democrats who opposed high tariffs, Free Silver, inflation, imperialism, and subsidies to business, farmers, or veterans on libertarian philosophical grounds. His crusade for political reform and fiscal conservatism made him an icon for American conservatives of the era. Cleveland won praise for his honesty, self-reliance, integrity, and commitment to the principles of classical liberalism. He fought political corruption, patronage, and bossism. As a reformer, Cleveland had such prestige that the like-minded wing of the Republican Party, called "Mugwumps", largely bolted the GOP presidential ticket and swung to his support in the 1884 election. As his second administration began, disaster hit the nation when the Panic of 1893 produced a severe national depression, which Cleveland was unable to reverse. 
It ruined his Democratic Party, opening the way for a Republican landslide in 1894 and for the agrarian and silverite seizure of the Democratic Party in 1896. The result was a political realignment that ended the Third Party System and launched the Fourth Party System and the Progressive Era. Cleveland was a formidable policymaker, and he also drew corresponding criticism. His intervention in the Pullman Strike of 1894 to keep the railroads moving angered labor unions nationwide in addition to the party in Illinois; his support of the gold standard and opposition to Free Silver alienated the agrarian wing of the Democratic Party. Critics complained that Cleveland had little imagination and seemed overwhelmed by the nation's economic disasters—depressions and strikes—in his second term. Even so, his reputation for probity and good character survived the troubles of his second term. Biographer Allan Nevins wrote, "[I]n Grover Cleveland, the greatness lies in typical rather than unusual qualities. He had no endowments that thousands of men do not have. He possessed honesty, courage, firmness, independence, and common sense. But he possessed them to a degree other men do not." By the end of his second term, public perception showed him to be one of the most unpopular U.S. presidents, and he was by then rejected even by most Democrats. Today, Cleveland is considered by most historians to have been a successful leader, generally ranked among the upper-mid tier of American presidents. Stephen Grover Cleveland was born on March 18, 1837, in Caldwell, New Jersey, to Ann (née Neal) and Richard Falley Cleveland. Cleveland's father was a Congregational and Presbyterian minister who was originally from Connecticut. His mother was from Baltimore and was the daughter of a bookseller. On his father's side, Cleveland was descended from English ancestors, the first of the family having emigrated to Massachusetts from Cleveland, England, in 1635. His father's maternal grandfather, Richard Falley Jr., fought at the Battle of Bunker Hill, and was the son of an immigrant from Guernsey. On his mother's side, Cleveland was descended from Anglo-Irish Protestants and German Quakers from Philadelphia. Cleveland was distantly related to General Moses Cleaveland, after whom the city of Cleveland, Ohio, was named. Cleveland, the fifth of nine children, was named Stephen Grover in honor of the first pastor of the First Presbyterian Church of Caldwell, where his father was pastor at the time. He became known as Grover in his adult life. In 1841, the Cleveland family moved to Fayetteville, New York, where Grover spent much of his childhood. Neighbors later described him as "full of fun and inclined to play pranks," and fond of outdoor sports. In 1850, Cleveland's father moved to Clinton, Oneida County, New York, to work as district secretary for the American Home Missionary Society. Despite his father's dedication to his missionary work, the income was insufficient for the large family. Financial conditions forced him to remove Grover from school into a two-year mercantile apprenticeship in Fayetteville. The experience was valuable and brief, and the living conditions quite austere. Grover returned to Clinton and his schooling at the completion of the apprentice contract. In 1853, when missionary work began to take a toll on his health, Cleveland's father took an assignment in Holland Patent, New York (near Utica), and the family moved again.
Shortly after the move, Cleveland's father died from a gastric ulcer, with Grover reputedly hearing of his father's death from a boy selling newspapers. Cleveland received his elementary education at the Fayetteville Academy and the Clinton Liberal Academy. After his father died in 1853, he again left school to help support his family. Later that year, Cleveland's brother William was hired as a teacher at the New York Institute for the Blind in New York City, and William obtained a place for Cleveland as an assistant teacher. He returned home to Holland Patent at the end of 1854, where an elder in his church offered to pay for his college education if he would promise to become a minister. Cleveland declined, and in 1855 he decided to move west. He stopped first in Buffalo, New York, where his uncle, Lewis F. Allen, gave him a clerical job. Allen was an important man in Buffalo, and he introduced his nephew to influential men there, including the partners in the law firm of Rogers, Bowen, and Rogers. Millard Fillmore, the 13th president of the United States, had previously worked for the partnership. Cleveland later took a clerkship with the firm, began to read the law, and was admitted to the New York bar in 1859. Cleveland worked for the Rogers firm for three years, then left in 1862 to start his own practice. In January 1863, he was appointed assistant district attorney of Erie County. With the American Civil War raging, Congress passed the Conscription Act of 1863, requiring able-bodied men to serve in the army if called upon, or else to hire a substitute. Cleveland chose the latter course, paying $150 to George Benninsky, a thirty-two-year-old Polish immigrant, to serve in his place. As a lawyer, Cleveland became known for his single-minded concentration and dedication to hard work. In 1866, he successfully defended some participants in the Fenian raid, working on a "pro bono" basis (free of charge). In 1868, Cleveland attracted professional attention for his winning defense of a libel suit against the editor of Buffalo's "Commercial Advertiser". During this time, Cleveland assumed a lifestyle of simplicity, taking residence in a plain boarding house and dedicating his growing income instead to the support of his mother and younger sisters. While his personal quarters were austere, Cleveland enjoyed an active social life and "the easy-going sociability of hotel-lobbies and saloons." He shunned the circles of higher society of Buffalo in which his uncle's family traveled. From his earliest involvement in politics, Cleveland aligned with the Democratic Party. He had a decided aversion to Republicans John Fremont and Abraham Lincoln, and the heads of the Rogers law firm were solid Democrats. In 1865, he ran for District Attorney, losing narrowly to his friend and roommate, Lyman K. Bass, the Republican nominee. In 1870, with the help of friend Oscar Folsom, Cleveland secured the Democratic nomination for Sheriff of Erie County, New York. He won the election by a 303-vote margin and took office on January 1, 1871, at age 33. While this new career took him away from the practice of law, it was rewarding in other ways: the fees were said to yield up to $40,000 over the two-year term. Cleveland's service as sheriff was unremarkable; biographer Rexford Tugwell described the time in office as a waste for Cleveland politically. Cleveland was aware of graft in the sheriff's office during his tenure and chose not to confront it. 
A notable incident of his term took place on September 6, 1872, when Patrick Morrissey, who had been convicted of murdering his mother, was executed. As sheriff, Cleveland was responsible for either personally carrying out the execution or paying a deputy $10 to perform the task. In spite of reservations about the hanging, Cleveland executed Morrissey himself; he hanged another murderer, John Gaffney, on February 14, 1873. After his term as sheriff ended, Cleveland returned to his law practice, opening a firm with his friends Lyman K. Bass and Wilson S. Bissell. Elected to Congress in 1872, Bass did not spend much time at the firm, but Cleveland and Bissell soon rose to the top of Buffalo's legal community. Up to that point, Cleveland's political career had been honorable and unexceptional. As biographer Allan Nevins wrote, "Probably no man in the country, on March 4, 1881, had less thought than this limited, simple, sturdy attorney of Buffalo that four years later he would be standing in Washington and taking the oath as President of the United States." It was during this period that Cleveland began a relationship with a widow, Maria Crofts Halpin. She accused him of raping her. He accused her of being an alcoholic and consorting with men. He had her institutionalized and her child taken away and raised by his friends. The illegitimate child became a campaign issue for the GOP in his first presidential campaign. In the 1870s, the municipal government in Buffalo had grown increasingly corrupt, with Democratic and Republican political machines cooperating to share the spoils of political office. In 1881, the Republicans nominated a slate of particularly disreputable machine politicians; the Democrats saw the opportunity to gain the votes of disaffected Republicans by nominating a more honest candidate. The party leaders approached Cleveland, and he agreed to run for Mayor of Buffalo, provided that the rest of the ticket was to his liking. When the more notorious politicians were left off the Democratic ticket, Cleveland accepted the nomination. Cleveland was elected mayor with 15,120 votes, as against 11,528 for Milton C. Beebe, his opponent. He took office on January 2, 1882. Cleveland's term as mayor was spent fighting the entrenched interests of the party machines. Among the acts that established his reputation was a veto of the street-cleaning bill passed by the Common Council. The street-cleaning contract was open for bids, and the Council selected the highest bid, at $422,000, rather than the lowest bid, which was $100,000 less, because of the political connections of the bidder. While this sort of bipartisan graft had previously been tolerated in Buffalo, Mayor Cleveland would have none of it. His veto message said, "I regard it as the culmination of a most bare-faced, impudent, and shameless scheme to betray the interests of the people, and to worse than squander the public money." The Council reversed itself and awarded the contract to the lowest bidder. Cleveland also asked the state legislature to form a commission to develop a plan to improve the sewer system in Buffalo at a much lower cost than previously proposed locally; this plan was successfully adopted. For this and other actions safeguarding public funds, Cleveland's reputation as a leader willing to purge government corruption began to spread beyond Erie County. New York Democratic party officials began to consider Cleveland a possible nominee for governor. 
Daniel Manning, a party insider who admired Cleveland's record, was instrumental in his candidacy. With a split in the state Republican party in 1882, the Democratic party was considered to be at an advantage; there were several contenders for that party's nomination. The two leading Democratic candidates were Roswell P. Flower and Henry W. Slocum. Their factions deadlocked, and the convention could not agree on a nominee. Cleveland, in third place on the first ballot, picked up support in subsequent votes and emerged as the compromise choice. The Republican party remained divided against itself, and in the general election Cleveland emerged the victor, with 535,318 votes to Republican nominee Charles J. Folger's 342,464. Cleveland's margin of victory was, at the time, the largest in a contested New York election; the Democrats also picked up seats in both houses of the New York State Legislature. Cleveland brought his opposition to needless spending to the governor's office; he promptly sent the legislature eight vetoes in his first two months in office. The first to attract attention was his veto of a bill to reduce the fares on New York City elevated trains to five cents. The bill had broad support because the trains' owner, Jay Gould, was unpopular, and his fare increases were widely denounced. Cleveland, however, saw the bill as unjust—Gould had taken over the railroads when they were failing and had made the system solvent again. Moreover, Cleveland believed that altering Gould's franchise would violate the Contract Clause of the federal Constitution. Despite the initial popularity of the fare-reduction bill, the newspapers praised Cleveland's veto. Theodore Roosevelt, then a member of the Assembly, had reluctantly voted for the bill to which Cleveland objected, in a desire to punish the unscrupulous railroad barons. After the veto, Roosevelt reversed himself, as did many legislators, and the veto was sustained. Cleveland's defiance of political corruption won him popular acclaim, and the enmity of the influential Tammany Hall organization in New York City. Tammany, under its boss, John Kelly, had disapproved of Cleveland's nomination as governor, and its resistance intensified after Cleveland openly opposed and prevented the re-election of its point man in the State Senate, Thomas F. Grady. Cleveland also steadfastly opposed nominees of the Tammanyites, as well as bills passed as a result of their deal-making. The loss of Tammany's support was offset by the support of Theodore Roosevelt and other reform-minded Republicans who helped Cleveland to pass several laws reforming municipal governments. The Republicans convened in Chicago and nominated former Speaker of the House James G. Blaine of Maine for president on the fourth ballot. Blaine's nomination alienated many Republicans who viewed Blaine as ambitious and immoral. The GOP standard-bearer was weakened by alienating the Mugwumps, and the Conkling faction, recently disenfranchised by President Arthur. Democratic party leaders saw the Republicans' choice as an opportunity to win the White House for the first time since 1856 if the right candidate could be found. Among the Democrats, Samuel J. Tilden was the initial front-runner, having been the party's nominee in the contested election of 1876. After Tilden declined to seek the nomination due to his poor health, his supporters shifted to several other contenders. Cleveland was among the leaders in early support, and Thomas F. Bayard of Delaware, Allen G. 
Thurman of Ohio, Samuel Freeman Miller of Iowa, and Benjamin Butler of Massachusetts also had considerable followings, along with various favorite sons. Each of the other candidates had hindrances to his nomination: Bayard had spoken in favor of secession in 1861, making him unacceptable to Northerners; Butler, conversely, was reviled throughout the South for his actions during the Civil War; Thurman was generally well liked, but was growing old and infirm, and his views on the silver question were uncertain. Cleveland, too, had detractors—Tammany remained opposed to him—but the nature of his enemies made him still more friends. Cleveland led on the first ballot, with 392 votes out of 820. On the second ballot, Tammany threw its support behind Butler, but the rest of the delegates shifted to Cleveland, who won. Thomas A. Hendricks of Indiana was selected as his running mate. Corruption in politics was the central issue in 1884; indeed, Blaine had over the span of his career been involved in several questionable deals. Cleveland's reputation as an opponent of corruption proved the Democrats' strongest asset. William C. Hudson created Cleveland's campaign slogan "A public office is a public trust." Reform-minded Republicans called "Mugwumps" denounced Blaine as corrupt and flocked to Cleveland. The Mugwumps, including such men as Carl Schurz and Henry Ward Beecher, were more concerned with morality than with party, and felt Cleveland was a kindred soul who would promote civil service reform and fight for efficiency in government. At the same time the Democrats gained support from the Mugwumps, they lost some blue-collar workers to the Greenback-Labor party, led by ex-Democrat Benjamin Butler. In general, Cleveland abided by the precedent of minimizing presidential campaign travel and speechmaking; Blaine became one of the first to break with that tradition. The campaign focused on the candidates' moral standards, as each side cast aspersions on the other. Cleveland's supporters rehashed the old allegations that Blaine had corruptly influenced legislation in favor of the Little Rock and Fort Smith Railroad and the Union Pacific Railway, later profiting on the sale of bonds he owned in both companies. Although the stories of Blaine's favors to the railroads had made the rounds eight years earlier, this time Blaine's correspondence was discovered, making his earlier denials less plausible. On some of the most damaging correspondence, Blaine had written "Burn this letter", giving Democrats the last line to their rallying cry: "Blaine, Blaine, James G. Blaine, the continental liar from the state of Maine, 'Burn this letter!'" Regarding Cleveland, commentator Jeff Jacoby notes that "Not since George Washington had a candidate for President been so renowned for his rectitude." But the Republicans found a refutation buried in Cleveland's past. Aided by the sermons of Reverend George H. Ball, a minister from Buffalo, they made public the allegation that Cleveland had fathered an illegitimate child while he was a lawyer there, and their rallies soon included the chant "Ma, Ma, where's my Pa?". When confronted with the scandal, Cleveland immediately instructed his supporters to "Above all, tell the truth." Cleveland admitted to paying child support in 1874 to Maria Crofts Halpin, the woman who asserted that he had fathered her son, Oscar Folsom Cleveland, and he assumed responsibility for the child. 
Shortly before the 1884 election, the Republican media published an affidavit from Halpin in which she stated that until she met Cleveland, her "life was pure and spotless", and "there is not, and never was, a doubt as to the paternity of our child, and the attempt of Grover Cleveland, or his friends, to couple the name of Oscar Folsom, or any one else, with that boy, for that purpose is simply infamous and false." The electoral votes of closely contested New York, New Jersey, Indiana, and Connecticut would determine the election. In New York, the Tammany Democrats decided that they would gain more from supporting a Democrat they disliked than a Republican who would do nothing for them. Blaine hoped that he would have more support from Irish Americans than Republicans typically did; while the Irish were mainly a Democratic constituency in the 19th century, Blaine's mother was Irish Catholic, and he had been supportive of the Irish National Land League while he was Secretary of State. The Irish, a significant group in three of the swing states, did appear inclined to support Blaine until a Republican, Samuel D. Burchard, gave a speech pivotal for the Democrats, denouncing them as the party of "Rum, Romanism, and Rebellion." The Democrats spread the word of this implied Catholic insult on the eve of the election. They also blistered Blaine for attending a banquet with some of New York City's wealthiest men. After the votes were counted, Cleveland narrowly won all four of the swing states, including New York by 1,200 votes. While the popular vote total was close, with Cleveland winning by just one-quarter of a percent, the electoral votes gave Cleveland a majority of 219–182. Following the electoral victory, the "Ma, Ma ..." attack phrase gained a classic riposte: "Gone to the White House. Ha! Ha! Ha!" Soon after taking office, Cleveland was faced with the task of filling all the government jobs for which the president had the power of appointment. These jobs were typically filled under the spoils system, but Cleveland announced that he would not fire any Republican who was doing his job well, and would not appoint anyone solely on the basis of party service. He also used his appointment powers to reduce the number of federal employees, as many departments had become bloated with political time-servers. Later in his term, as his fellow Democrats chafed at being excluded from the spoils, Cleveland began to replace more of the partisan Republican officeholders with Democrats; this was especially the case with policy-making positions. While some of his decisions were influenced by party concerns, more of Cleveland's appointments were decided by merit alone than was the case in his predecessors' administrations. Cleveland also reformed other parts of the government. In 1887, he signed an act creating the Interstate Commerce Commission. He and Secretary of the Navy William C. Whitney undertook to modernize the navy and canceled construction contracts that had resulted in inferior ships. Cleveland angered railroad investors by ordering an investigation of western lands they held by government grant. Secretary of the Interior Lucius Q.C. Lamar charged that the rights of way for this land must be returned to the public because the railroads failed to extend their lines according to agreements. The lands were forfeited and returned to the public. 
Cleveland was the first Democratic President subject to the Tenure of Office Act, which originated in 1867; the act purported to require the Senate to approve the dismissal of any presidential appointee who was originally subject to its advice and consent. Cleveland objected to the act in principle, and his steadfast refusal to abide by it prompted its fall into disfavor and led to its ultimate repeal in 1887. Cleveland faced a Republican Senate and often resorted to using his veto powers. He vetoed hundreds of private pension bills for American Civil War veterans, believing that if their pension requests had already been rejected by the Pension Bureau, Congress should not attempt to override that decision. When Congress, pressured by the Grand Army of the Republic, passed a bill granting pensions for disabilities not caused by military service, Cleveland also vetoed that. Cleveland used the veto far more often than any president up to that time. In 1887, Cleveland issued his best-known veto, that of the Texas Seed Bill. After a drought had ruined crops in several Texas counties, Congress appropriated $10,000 to purchase seed grain for farmers there. Cleveland vetoed the expenditure. In his veto message, he espoused a theory of limited government. One of the most volatile issues of the 1880s was whether the currency should be backed by gold and silver, or by gold alone. The issue cut across party lines, with western Republicans and southern Democrats joining together in the call for the free coinage of silver, and both parties' representatives in the northeast holding firm for the gold standard. Because silver was worth less than its legal equivalent in gold, taxpayers paid their government bills in silver, while international creditors demanded payment in gold, resulting in a depletion of the nation's gold supply. Cleveland and Treasury Secretary Daniel Manning stood firmly on the side of the gold standard, and tried to reduce the amount of silver that the government was required to coin under the Bland-Allison Act of 1878. Cleveland unsuccessfully appealed to Congress to repeal this law before he was inaugurated. Angered Westerners and Southerners advocated for cheap money to help their poorer constituents. In reply, one of the foremost silverites, Richard P. Bland, introduced a bill in 1886 that would require the government to coin unlimited amounts of silver, inflating the then-deflating currency. While Bland's bill was defeated, so was a bill the administration favored that would repeal any silver coinage requirement. The result was a retention of the status quo, and a postponement of the resolution of the Free Silver issue. Another contentious financial issue at the time was the protective tariff. While it had not been a central point in his campaign, Cleveland's opinion on the tariff was that of most Democrats: that the tariff ought to be reduced. Republicans generally favored a high tariff to protect American industries. American tariffs had been high since the Civil War, and by the 1880s the tariff brought in so much revenue that the government was running a surplus. In 1886, a bill to reduce the tariff was narrowly defeated in the House. The tariff issue was emphasized in the Congressional elections that year, and the forces of protectionism increased their numbers in the Congress, but Cleveland continued to advocate tariff reform. As the surplus grew, Cleveland and the reformers called for a tariff for revenue only. 
His message to Congress in 1887 highlighted the injustice of taking more money from the people than the government needed to pay its operating expenses. Republicans, as well as protectionist northern Democrats like Samuel J. Randall, believed that American industries would fail without high tariffs, and they continued to fight reform efforts. Roger Q. Mills, chairman of the House Ways and Means Committee, proposed a bill to reduce the tariff from about 47% to about 40%. After significant exertions by Cleveland and his allies, the bill passed the House. The Republican Senate failed to come to agreement with the Democratic House, and the bill died in the conference committee. Dispute over the tariff persisted into the 1888 presidential election. Cleveland was a committed non-interventionist who had campaigned in opposition to expansion and imperialism. He refused to promote the previous administration's Nicaragua canal treaty, and generally was less of an expansionist in foreign relations. Cleveland's Secretary of State, Thomas F. Bayard, negotiated with Joseph Chamberlain of the United Kingdom over fishing rights in the waters off Canada, and struck a conciliatory note, despite the opposition of New England's Republican Senators. Cleveland also withdrew from Senate consideration the Berlin Conference treaty, which guaranteed an open door for U.S. interests in the Congo. Cleveland's military policy emphasized self-defense and modernization. In 1885, Cleveland appointed the Board of Fortifications under Secretary of War William C. Endicott to recommend a new coastal fortification system for the United States. No improvements to US coastal defenses had been made since the late 1870s. The Board's 1886 report recommended a massive $127 million construction program at 29 harbors and river estuaries, to include new breech-loading rifled guns, mortars, and naval minefields. The Board and the program are usually called the Endicott Board and the Endicott Program. Most of the Board's recommendations were implemented, and by 1910, 27 locations were defended by over 70 forts. Many of the weapons remained in place until World War II, when they were scrapped and replaced with new defenses. Endicott also proposed to Congress a system of examinations for Army officer promotions. For the Navy, the Cleveland administration, spearheaded by Secretary of the Navy William Collins Whitney, moved towards modernization, although no ships were constructed that could match the best European warships. Although completion of the four steel-hulled warships begun under the previous administration was delayed due to a corruption investigation and subsequent bankruptcy of their building yard, these ships were completed in a timely manner in naval shipyards once the investigation was over. Sixteen additional steel-hulled warships were ordered by the end of 1888; these ships later proved vital in the Spanish–American War of 1898, and many served in World War I. These ships included two "second-class battleships" designed to match modern armored ships recently acquired by South American countries from Europe, such as the Brazilian battleship "Riachuelo". Eleven protected cruisers, one armored cruiser, and one monitor were also ordered, along with an experimental cruiser. Cleveland, like a growing number of Northerners (and nearly all white Southerners), saw Reconstruction as a failed experiment, and was reluctant to use federal power to enforce the 15th Amendment of the U.S. 
Constitution, which guaranteed voting rights to African Americans. Though Cleveland appointed no black Americans to patronage jobs, he allowed Frederick Douglass to continue in his post as recorder of deeds in Washington, D.C., and appointed another black man (James Campbell Matthews, a former New York judge) to replace Douglass upon his resignation. His decision to replace Douglass with a black man was met with outrage, but Cleveland claimed to have known Matthews personally. Although Cleveland had condemned the "outrages" against Chinese immigrants, he believed that Chinese immigrants were unwilling to assimilate into white society. Secretary of State Thomas F. Bayard negotiated an extension to the Chinese Exclusion Act, and Cleveland lobbied the Congress to pass the Scott Act, written by Congressman William Lawrence Scott, which prevented the return of Chinese immigrants who left the United States. The Scott Act easily passed both houses of Congress, and Cleveland signed it into law on October 1, 1888. Cleveland viewed Native Americans as wards of the state, saying in his first inaugural address that "[t]his guardianship involves, on our part, efforts for the improvement of their condition and enforcement of their rights." He encouraged the idea of cultural assimilation, pushing for the passage of the Dawes Act, which provided for distribution of Indian lands to individual members of tribes, rather than having them continue to be held in trust for the tribes by the federal government. While a conference of Native leaders endorsed the act, in practice the majority of Native Americans disapproved of it. Cleveland believed the Dawes Act would lift Native Americans out of poverty and encourage their assimilation into white society. It ultimately weakened the tribal governments and allowed individual Indians to sell land and keep the money. In the month before Cleveland's 1885 inauguration, President Arthur opened four million acres of Winnebago and Crow Creek Indian lands in the Dakota Territory to white settlement by executive order. Tens of thousands of settlers gathered at the border of these lands and prepared to take possession of them. Cleveland believed Arthur's order to be in violation of treaties with the tribes, and rescinded it on April 17 of that year, ordering the settlers out of the territory. Cleveland sent in eighteen companies of Army troops to enforce the treaties and ordered General Philip Sheridan, at the time Commanding General of the U.S. Army, to investigate the matter. Cleveland entered the White House as a bachelor, and his sister Rose Cleveland joined him to act as hostess for the first two years of his administration. Unlike the previous bachelor president James Buchanan, Cleveland did not remain a bachelor for long. In 1885, the daughter of Cleveland's friend Oscar Folsom visited him in Washington. Frances Folsom was a student at Wells College. When she returned to school, President Cleveland received her mother's permission to correspond with her, and they were soon engaged to be married. On June 2, 1886, Cleveland married Frances Folsom in the Blue Room at the White House. He was the second president to wed while in office, and has been the only president married in the White House. This marriage was unusual, since Cleveland was the executor of Oscar Folsom's estate and had supervised Frances's upbringing after her father's death; nevertheless, the public took no exception to the match. 
At 21 years of age, Frances Folsom Cleveland was the youngest First Lady in history, and the public soon warmed to her beauty and personality. The Clevelands had five children: Ruth (1891–1904), Esther (1893–1980), Marion (1895–1977), Richard (1897–1974), and Francis Grover (1903–1995). British philosopher Philippa Foot was their granddaughter. During his first term, Cleveland successfully nominated two justices to the Supreme Court of the United States. The first, Lucius Q.C. Lamar, was a former Mississippi Senator who served in Cleveland's Cabinet as Interior Secretary. When William Burnham Woods died, Cleveland nominated Lamar to his seat in late 1887. While Lamar had been well liked as a Senator, his service under the Confederacy two decades earlier caused many Republicans to vote against him. Lamar's nomination was confirmed by the narrow margin of 32 to 28. Chief Justice Morrison Waite died a few months later, and Cleveland nominated Melville Fuller to fill his seat on April 30, 1888. Fuller accepted. He had previously declined Cleveland's nomination to the Civil Service Commission, preferring his Chicago law practice. The Senate Judiciary Committee spent several months examining the little-known nominee, before the Senate confirmed the nomination 41 to 20. Cleveland nominated 41 lower federal court judges in addition to his four Supreme Court justices. These included two judges to the United States circuit courts, nine judges to the United States Courts of Appeals, and 30 judges to the United States district courts. Because Cleveland served terms both before and after Congress eliminated the circuit courts in favor of the Courts of Appeals, he is one of only two presidents to have appointed judges to both bodies. The other, Benjamin Harrison, was in office at the time that the change was made. Thus, all of Cleveland's appointments to the circuit courts were made in his first term, and all of his appointments to the Courts of Appeals were made in his second. The Republicans nominated Benjamin Harrison of Indiana for president and Levi P. Morton of New York for vice president. Cleveland was easily renominated at the Democratic convention in St. Louis. Following Vice President Thomas A. Hendricks's death in 1885, the Democrats chose Allen G. Thurman of Ohio to be Cleveland's new running mate. The Republicans gained the upper hand in the campaign, as Cleveland's campaign was poorly managed by Calvin S. Brice and William H. Barnum, whereas Harrison had engaged more aggressive fundraisers and tacticians in Matt Quay and John Wanamaker. The Republicans campaigned heavily on the tariff issue, turning out protectionist voters in the important industrial states of the North. Further, the Democrats in New York were divided over the gubernatorial candidacy of David B. Hill, weakening Cleveland's support in that swing state. A letter from the British ambassador supporting Cleveland caused a scandal that cost Cleveland votes in New York. As in 1884, the election focused on the swing states of New York, New Jersey, Connecticut, and Indiana. But unlike that year, when Cleveland had triumphed in all four, in 1888 he won only two, losing his home state of New York by 14,373 votes. The Republicans won Indiana, largely as the result of a fraudulent voting practice known as Blocks of Five. Cleveland continued his duties diligently until the end of the term and began to look forward to a return to private life. 
As Frances Cleveland left the White House, she told a staff member, "Now, Jerry, I want you to take good care of all the furniture and ornaments in the house, for I want to find everything just as it is now, when we come back again." When asked when she would return, she responded, "We are coming back four years from today." In the meantime, the Clevelands moved to New York City, where Cleveland took a position with the law firm of Bangs, Stetson, Tracy, and MacVeigh. This affiliation was more of an office-sharing arrangement, though a congenial one. Cleveland's law practice brought only a moderate income, perhaps because Cleveland spent considerable time at the couple's vacation home, Gray Gables, at Buzzards Bay, where fishing became his obsession. While they lived in New York, the Clevelands' first child, Ruth, was born in 1891. The Harrison administration worked with Congress to pass the McKinley Tariff, an aggressively protectionist measure, and the Sherman Silver Purchase Act, which increased money backed by silver; these were among the policies Cleveland deplored as dangerous to the nation's financial health. At first he refrained from criticizing his successor, but by 1891 Cleveland felt compelled to speak out, addressing his concerns in an open letter to a meeting of reformers in New York. The "silver letter" thrust Cleveland's name back into the spotlight just as the 1892 election was approaching. Cleveland's enduring reputation as chief executive and his recent pronouncements on monetary issues made him a leading contender for the Democratic nomination. His leading opponent was David B. Hill, a Senator from New York. Hill united the anti-Cleveland elements of the Democratic party—silverites, protectionists, and Tammany Hall—but was unable to create a coalition large enough to deny Cleveland the nomination. Despite some desperate maneuvering by Hill, Cleveland was nominated on the first ballot at the convention in Chicago. For vice president, the Democrats chose to balance the ticket with Adlai E. Stevenson of Illinois, a silverite. Although the Cleveland forces preferred Isaac P. Gray of Indiana for vice president, they accepted the convention favorite. As a supporter of greenbacks and Free Silver to inflate the currency and alleviate economic distress in the rural districts, Stevenson balanced the otherwise hard-money, gold-standard ticket headed by Cleveland. The Republicans re-nominated President Harrison, making the 1892 election a rematch of the one four years earlier. Unlike the turbulent and controversial elections of 1876, 1884, and 1888, the 1892 election was, according to Cleveland biographer Allan Nevins, "the cleanest, quietest, and most creditable in the memory of the post-war generation," in part because Harrison's wife, Caroline, was dying of tuberculosis. Harrison did not personally campaign at all. Following Caroline Harrison's death on October 25, two weeks before the national election, Cleveland and all of the other candidates stopped campaigning, thus making Election Day a somber and quiet event for the whole country as well as the candidates. The tariff issue had worked to the Republicans' advantage in 1888, but the legislative revisions of the intervening four years had made imported goods so expensive that many voters now favored tariff reform and were skeptical of big business. Many Westerners, traditionally Republican voters, defected to James Weaver, the candidate of the new Populist Party. 
Weaver promised Free Silver, generous veterans' pensions, and an eight-hour work day. The Tammany Hall Democrats adhered to the national ticket, allowing a united Democratic party to carry New York. At the campaign's end, many Populists and labor supporters endorsed Cleveland after an attempt by the Carnegie Steel Company to break the union during the Homestead strike near Pittsburgh and after a similar conflict between big business and labor at the Tennessee Coal and Iron Co. The final result was a victory for Cleveland by wide margins in both the popular and electoral votes, and it was Cleveland's third consecutive popular-vote plurality. Shortly after Cleveland's second term began, the Panic of 1893 struck the stock market, and he soon faced an acute economic depression. The panic was worsened by the acute shortage of gold that resulted from the increased coinage of silver, and Cleveland called Congress into special session to deal with the problem. The debate over the coinage was as heated as ever, and the effects of the panic had driven more moderates to support repealing the coinage provisions of the Sherman Silver Purchase Act. Even so, the silverites rallied their following at a convention in Chicago, and the House of Representatives debated for fifteen weeks before passing the repeal by a considerable margin. In the Senate, the repeal of silver coinage was equally contentious. Cleveland, forced against his better judgment to lobby Congress for repeal, convinced enough Democrats that, along with eastern Republicans, they formed a 48–37 majority for repeal. Depletion of the Treasury's gold reserves continued, at a lesser rate, and subsequent bond issues replenished supplies of gold. At the time the repeal seemed a minor setback to silverites, but it marked the beginning of the end of silver as a basis for American currency. Having succeeded in reversing the Harrison administration's silver policy, Cleveland sought next to reverse the effects of the McKinley tariff. The Wilson-Gorman Tariff Act was introduced by West Virginian Representative William L. Wilson in December 1893. The bill proposed moderate downward revisions in the tariff, especially on raw materials, and after lengthy debate it passed the House by a considerable margin. The shortfall in revenue was to be made up by an income tax of two percent on income above $4,000. The bill was next considered in the Senate, where it faced stronger opposition from key Democrats led by Arthur Pue Gorman of Maryland, who insisted on more protection for their states' industries than the Wilson bill allowed. The bill passed the Senate with more than 600 amendments attached that nullified most of the reforms. The Sugar Trust in particular lobbied for changes that favored it at the expense of the consumer. Cleveland was outraged by the final bill, and denounced it as a disgraceful product of the control of the Senate by trusts and business interests. Even so, he believed it was an improvement over the McKinley tariff and allowed it to become law without his signature. In 1892, Cleveland had campaigned against the Lodge Bill, which would have strengthened voting rights protections through the appointment of federal supervisors of congressional elections upon a petition from the citizens of any district. The Enforcement Act of 1871 had provided for a detailed federal overseeing of the electoral process, from registration to the certification of returns. Cleveland succeeded in ushering in the 1894 repeal of this law (ch. 
25, 28 Stat. 36). The pendulum thus swung from stronger attempts to protect voting rights to the repealing of voting rights protections; this in turn led to unsuccessful attempts to have the federal courts protect voting rights in "Giles v. Harris", 189 U.S. 475 (1903), and "Giles v. Teasley", 193 U.S. 146 (1904). The Panic of 1893 had damaged labor conditions across the United States, and the victory of anti-silver legislation worsened the mood of western laborers. A group of workingmen led by Jacob S. Coxey began to march east toward Washington, D.C., to protest Cleveland's policies. This group, known as Coxey's Army, agitated in favor of a national roads program to give jobs to workingmen, and a weakened currency to help farmers pay their debts. By the time they reached Washington, only a few hundred remained, and when they were arrested the next day for walking on the lawn of the United States Capitol, the group scattered. Even though Coxey's Army may not have been a threat to the government, it signaled a growing dissatisfaction in the West with Eastern monetary policies. The Pullman Strike had a significantly greater impact than Coxey's Army. A strike began against the Pullman Company over low wages and twelve-hour workdays, and sympathy strikes, led by American Railway Union leader Eugene V. Debs, soon followed. By June 1894, 125,000 railroad workers were on strike, paralyzing the nation's commerce. Because the railroads carried the mail, and because several of the affected lines were in federal receivership, Cleveland believed a federal solution was appropriate. Cleveland obtained an injunction in federal court, and when the strikers refused to obey it, he sent federal troops into Chicago and 20 other rail centers. "If it takes the entire army and navy of the United States to deliver a postcard in Chicago", he proclaimed, "that card will be delivered." Most governors supported Cleveland except Democrat John P. Altgeld of Illinois, who became his bitter foe in 1896. Leading newspapers of both parties applauded Cleveland's actions, but the use of troops hardened the attitude of organized labor toward his administration. Just before the 1894 election, Cleveland was warned by an advisor, Francis Lynde Stetson, of the defeat to come. The warning was appropriate: in the Congressional elections, Republicans won their biggest landslide in decades, taking full control of the House, while the Populists lost most of their support. Cleveland's factional enemies gained control of the Democratic Party in state after state, including full control in Illinois and Michigan, and made major gains in Ohio, Indiana, Iowa, and other states. Wisconsin and Massachusetts were two of the few states that remained under the control of Cleveland's allies. The Democratic opposition was close to controlling two-thirds of the vote at the 1896 national convention, which it needed to nominate its own candidate. They failed for lack of unity and a national leader, as Illinois governor John Peter Altgeld had been born in Germany and was ineligible to be nominated for president. When Cleveland took office, he faced the question of Hawaiian annexation. In his first term, he had supported free trade with Hawai'i and accepted an amendment that gave the United States a coaling and naval station in Pearl Harbor. In the intervening four years, Honolulu businessmen of European and American ancestry had denounced Queen Liliuokalani as a tyrant who rejected constitutional government. 
In early 1893, they overthrew her, set up a republican government under Sanford B. Dole, and sought to join the United States. The Harrison administration had quickly agreed with representatives of the new government on a treaty of annexation and submitted it to the Senate for approval. Five days after taking office, on March 9, 1893, Cleveland withdrew the treaty from the Senate and sent former Congressman James Henderson Blount to Hawai'i to investigate the conditions there. Cleveland agreed with Blount's report, which found the populace to be opposed to annexation. Liliuokalani initially refused to grant amnesty as a condition of her reinstatement, saying that she would either execute or banish the current government in Honolulu, but Dole's government refused to yield its position. By December 1893, the matter was still unresolved, and Cleveland referred the issue to Congress. In his message to Congress, Cleveland rejected the idea of annexation and encouraged the Congress to continue the American tradition of non-intervention. The Senate, under Democratic control but opposed to Cleveland, commissioned and produced the Morgan Report, which contradicted Blount's findings and found the overthrow was a completely internal affair. Cleveland dropped all talk of reinstating the Queen, and went on to recognize and maintain diplomatic relations with the new Republic of Hawaii. Closer to home, Cleveland adopted a broad interpretation of the Monroe Doctrine that not only prohibited new European colonies, but also declared an American national interest in any matter of substance within the hemisphere. When Britain and Venezuela disagreed over the boundary between Venezuela and the colony of British Guiana, Cleveland and Secretary of State Richard Olney protested. British Prime Minister Lord Salisbury and the British ambassador to Washington, Julian Pauncefote, misjudged how important successful resolution of the dispute was to the American government, having prolonged the crisis before ultimately accepting the American demand for arbitration. A tribunal convened in Paris in 1898 to decide the matter, and in 1899 awarded the bulk of the disputed territory to British Guiana. But by standing with a Latin American nation against the encroachment of a colonial power, Cleveland improved relations with the United States' southern neighbors, and at the same time, the cordial manner in which the negotiations were conducted also made for good relations with Britain. The second Cleveland administration was as committed to military modernization as the first, and ordered the first ships of a navy capable of offensive action. Construction continued on the Endicott program of coastal fortifications begun under Cleveland's first administration. The adoption of the Krag–Jørgensen rifle, the US Army's first bolt-action repeating rifle, was finalized. In 1895–96, Secretary of the Navy Hilary A. Herbert, having recently adopted the aggressive naval strategy advocated by Captain Alfred Thayer Mahan, successfully proposed ordering five battleships and sixteen torpedo boats. Completion of these ships nearly doubled the Navy's number of battleships and created a new torpedo boat force, which previously had only two boats. The battleships and seven of the torpedo boats were not completed until 1899–1901, after the Spanish–American War. In the midst of the fight for repeal of Free Silver coinage in 1893, Cleveland sought the advice of the White House doctor, Dr. 
O'Reilly, about soreness on the roof of his mouth and an ulcer with a crater-like edge and a granulated surface on the left side of Cleveland's hard palate. Samples of the tumor were sent anonymously to the Army Medical Museum. The diagnosis was not a malignant cancer, but instead an "epithelioma". Cleveland decided to have surgery secretly, to avoid further panic that might worsen the financial depression. The surgery occurred on July 1, to give Cleveland time to make a full recovery in time for the upcoming Congressional session. Under the guise of a vacation cruise, Cleveland and his surgeon, Dr. Joseph Bryant, left for New York. The surgeons operated aboard the "Oneida", a yacht owned by Cleveland's friend E. C. Benedict, as it sailed off Long Island. The surgery was conducted through the President's mouth, to avoid any scars or other signs of surgery. The team, sedating Cleveland with nitrous oxide and ether, successfully removed parts of his upper left jaw and hard palate. The size of the tumor and the extent of the operation left Cleveland's mouth disfigured. During another surgery, Cleveland was fitted with a hard rubber dental prosthesis that corrected his speech and restored his appearance. A cover story about the removal of two bad teeth kept the suspicious press placated. Even when a newspaper story appeared giving details of the actual operation, the participating surgeons discounted the severity of what transpired during Cleveland's vacation. In 1917, one of the surgeons present on the "Oneida", Dr. William W. Keen, wrote an article detailing the operation. Cleveland enjoyed many years of life after the tumor was removed, and there was some debate as to whether it was actually malignant. Several doctors, including Dr. Keen, stated after Cleveland's death that the tumor was a carcinoma. Other suggestions included ameloblastoma or a benign salivary mixed tumor (also known as a pleomorphic adenoma). In the 1980s, analysis of the specimen finally confirmed the tumor to be verrucous carcinoma, a low-grade epithelial cancer with a low potential for metastasis. Cleveland's trouble with the Senate hindered the success of his nominations to the Supreme Court in his second term. In 1893, after the death of Samuel Blatchford, Cleveland nominated William B. Hornblower to the Court. Hornblower, the head of a New York City law firm, was thought to be a qualified appointee, but his campaign against a New York machine politician had made Senator David B. Hill his enemy. Further, Cleveland had not consulted the Senators before naming his appointee, leaving many who were already opposed to Cleveland on other grounds even more aggrieved. The Senate rejected Hornblower's nomination on January 15, 1894, by a vote of 30 to 24. Cleveland continued to defy the Senate by next appointing Wheeler Hazard Peckham, another New York attorney who had opposed Hill's machine in that state. Hill used all of his influence to block Peckham's confirmation, and on February 16, 1894, the Senate rejected the nomination by a vote of 32 to 41. Reformers urged Cleveland to continue the fight against Hill and to nominate Frederic R. Coudert, but Cleveland acquiesced in an inoffensive choice, that of Senator Edward Douglass White of Louisiana, whose nomination was accepted unanimously. Later, in 1896, another vacancy on the Court led Cleveland to consider Hornblower again, but he declined to be nominated. 
Instead, Cleveland nominated Rufus Wheeler Peckham, the brother of Wheeler Hazard Peckham, and the Senate confirmed the second Peckham easily. No new states were admitted to the Union during Cleveland's first term. On February 22, 1889, days before Cleveland left office, the 50th Congress passed the Enabling Act of 1889, authorizing North Dakota, South Dakota, Montana, and Washington to form state governments and to gain admission to the Union. All four officially became states in November 1889, during the first year of Benjamin Harrison's administration. During his second term, the 53rd United States Congress passed an Enabling Act that permitted Utah to apply for statehood. Cleveland signed it on July 16, 1894. Utah joined the Union as the 45th state on January 4, 1896. Cleveland's agrarian and silverite enemies gained control of the Democratic party in 1896, repudiated his administration and the gold standard, and nominated William Jennings Bryan on a free-silver platform. Cleveland silently supported the Gold Democrats' third-party ticket that promised to defend the gold standard, limit government, and oppose high tariffs, but he declined their nomination for a third term. The party won only 100,000 votes in the general election, and William McKinley, the Republican nominee, triumphed easily over Bryan. Agrarians nominated Bryan again in 1900. In 1904, the conservatives, with Cleveland's support, regained control of the Democratic Party and nominated Alton B. Parker. After leaving the White House on March 4, 1897, Cleveland lived in retirement at his estate, Westland Mansion, in Princeton, New Jersey. For a time he was a trustee of Princeton University, and was one of the majority of trustees who preferred Dean West's plans for the Graduate School and undergraduate living over those of Woodrow Wilson, then president of the university. Cleveland consulted occasionally with President Theodore Roosevelt (1901–1909), but was financially unable to accept the chairmanship of the commission handling the Coal Strike of 1902. Cleveland still made his views known in political matters. In a 1905 article in "The Ladies' Home Journal", Cleveland weighed in on the women's suffrage movement, writing that "sensible and responsible women do not want to vote. The relative positions to be assumed by men and women in the working out of our civilization were assigned long ago by a higher intelligence." In 1906, a group of New Jersey Democrats promoted Cleveland as a possible candidate for the United States Senate. The incumbent, John F. Dryden, was not seeking re-election, and some Democrats felt that the former president could attract the votes of some disaffected Republican legislators who might be drawn to Cleveland's statesmanship and conservatism. Cleveland's health had been declining for several years, and in the autumn of 1907 he fell seriously ill. In 1908, he suffered a heart attack and died on June 24 at age 71. His last words were, "I have tried so hard to do right." He is buried in the Princeton Cemetery of the Nassau Presbyterian Church. In his first term in office, Cleveland sought a summer house near the capital where he could escape the heat and smells of Washington, D.C. He secretly bought a farmhouse, Oak View (or Oak Hill), in a rural upland part of the District of Columbia, in 1886, and remodeled it into a Queen Anne-style summer estate. He sold Oak View upon losing his bid for re-election in 1888. 
Not long thereafter, suburban residential development reached the area, which came to be known as Oak View, and then Cleveland Heights, and eventually Cleveland Park. The Clevelands are depicted in local murals. Grover Cleveland Hall at Buffalo State College in Buffalo, New York, houses the offices of the college president, vice presidents, and other administrative functions and student services; Cleveland was a member of the first board of directors of the school, then known as the Buffalo Normal School. Grover Cleveland Middle School in his birthplace, Caldwell, New Jersey, was named for him, as were Grover Cleveland High School in Buffalo, New York, and the town of Cleveland, Mississippi. Mount Cleveland, a volcano in Alaska, is also named after him. In 1895, he became the first U.S. president to be filmed. The first U.S. postage stamp to honor Cleveland appeared in 1923. This twelve-cent issue accompanied a thirteen-cent stamp in the same definitive series that depicted his old rival Benjamin Harrison. Cleveland's only two subsequent stamp appearances have been in issues devoted to the full roster of U.S. Presidents, released, respectively, in 1938 and 1986. Cleveland's portrait was on the U.S. $1000 bill of series 1928 and series 1934. He also appeared on the first few issues of the $20 Federal Reserve Notes from 1914. Since he was both the 22nd and 24th president, he was featured on two separate dollar coins released in 2012 as part of the Presidential $1 Coin Act of 2005. In 2013, Cleveland was inducted into the New Jersey Hall of Fame. Great auk The great auk ("Pinguinus impennis") is a species of flightless alcid that became extinct in the mid-19th century. It was the only modern species in the genus "Pinguinus". It is not closely related to the birds now known as penguins, which were discovered later and so named by sailors because of their physical resemblance to the great auk. It bred on rocky, isolated islands with easy access to the ocean and a plentiful food supply, a rarity in nature that provided only a few breeding sites for the great auks. When not breeding, they spent their time foraging in the waters of the North Atlantic, ranging as far south as northern Spain and along the coastlines of Canada, Greenland, Iceland, the Faroe Islands, Norway, Ireland, and Great Britain. The great auk was a large, heavily built bird, the second-largest member of the alcid family ("Miomancalla" was larger). It had a black back and a white belly. The black beak was heavy and hooked, with grooves on its surface. During summer, great auk plumage showed a white patch over each eye. During winter, the great auk lost these patches, instead developing a white band stretching between the eyes. The wings were very short, rendering the bird flightless. Instead, the great auk was a powerful swimmer, a trait that it used in hunting. Its favourite prey were fish, including Atlantic menhaden and capelin, and crustaceans. Although agile in the water, it was clumsy on land. Great auk pairs mated for life. They nested in extremely dense and social colonies, laying one egg on bare rock. The egg was white with variable brown marbling. Both parents participated in the incubation of the egg for around six weeks before the young hatched. The young left the nest site after 2–3 weeks, although the parents continued to care for it. 
The great auk was an important part of many Native American cultures, both as a food source and as a symbolic item. Many Maritime Archaic people were buried with great auk bones. One burial discovered included someone covered by more than 200 great auk beaks, which are presumed to be the remnants of a cloak made of great auk skins. Early European explorers to the Americas used the great auk as a convenient food source or as fishing bait, reducing its numbers. The bird's down was in high demand in Europe, a factor that largely eliminated the European populations by the mid-16th century. Scientists soon began to realize that the great auk was disappearing, and it became the beneficiary of many early environmental laws, but these proved ineffectual. Its growing rarity increased interest from European museums and private collectors in obtaining skins and eggs of the bird. On 3 June 1844, the last two confirmed specimens were killed on Eldey, off the coast of Iceland, ending the last known breeding attempt. Later reports of roaming individuals being seen or caught are unconfirmed. A record of one great auk in 1852 is considered by some to be the last sighting of a member of the species. The great auk is mentioned in several novels, and the scientific journal of the American Ornithologists' Union is named "The Auk" in honour of this bird. Analysis of mtDNA sequences has confirmed morphological and biogeographical studies suggesting that the razorbill is the closest living relative of the great auk. The great auk also was related closely to the little auk (dovekie), which underwent a radically different evolution compared to "Pinguinus". Due to its outward similarity to the razorbill (apart from flightlessness and size), the great auk often was placed in the genus "Alca", following Linnaeus. The fossil record (especially the sister species, "Pinguinus alfrednewtoni") and molecular evidence show that the three closely related genera diverged soon after their common ancestor, a bird probably similar to a stout Xantus's murrelet, had spread to the coasts of the Atlantic. Apparently, by that time, the murres, or Atlantic guillemots, already had split from the other Atlantic alcids. Razorbill-like birds were common in the Atlantic during the Pliocene, but the evolution of the little auk is sparsely documented. The molecular data are compatible with either possibility, but the weight of evidence suggests placing the great auk in a distinct genus. Some ornithologists still believe it is more appropriate to retain the species in the genus "Alca". It is the only recorded British bird made extinct in historic times. A 2004 genetic study established the placement of the great auk among its closest relatives. "Pinguinus alfrednewtoni" was a larger, and also flightless, member of the genus "Pinguinus" that lived during the Early Pliocene. Known from bones found in the Yorktown Formation of the Lee Creek Mine in North Carolina, it is believed to have split, along with the great auk, from a common ancestor. "Pinguinus alfrednewtoni" lived in the western Atlantic, while the great auk lived in the eastern Atlantic. After the former died out following the Pliocene, the great auk took over its territory. The great auk was not related closely to the other extinct genera of flightless alcids, "Mancalla", "Praemancalla", and "Alcodes". 
The great auk was one of the 4,400 animal species formally described by Carl Linnaeus in his eighteenth-century work, "Systema Naturae", in which it was given the binomial "Alca impennis". The name "Alca" is a Latin derivative of the Scandinavian word for razorbills and their relatives. The bird was known in literature even before this and was described by Charles d'Ecluse in 1605 as "Mergus Americanus." This description also included a woodcut, which is the oldest unambiguous visual depiction of the bird. The species was not placed in its own scientific genus, "Pinguinus", until 1791. The generic name is derived from the Spanish and Portuguese name for the species, in turn from Latin "pinguis" meaning 'plump', and the specific name, "impennis", is from Latin and refers to the lack of flight feathers or pennae. The Irish name for the great auk is 'falcóg mhór', meaning 'big seabird/auk'. The Basque name is "arponaz", meaning "spearbill". Its early French name was "apponatz". The Norse called the great auk "geirfugl", which means "spearbird". This has led to an alternative English common name for the bird, "garefowl" or "gairfowl". The Inuit name for the great auk was "isarukitsok", which meant "little wing". The word "penguin" first appears in the sixteenth century as a synonym for "great auk". Although the etymology is debated, the generic name, "penguin", may be derived from the Welsh "pen gwyn" "white head", either because the birds lived in Newfoundland on White Head Island (Pen Gwyn in Welsh), or because the great auk had such large white circles on its head. When European explorers discovered what today are known as penguins in the Southern Hemisphere, they noticed their similar appearance to the great auk and named them after this bird, although biologically, they are not closely related. Whalers also lumped the northern and southern birds together under the common name "woggins". Standing about tall and weighing approximately as adult birds, the flightless great auk was the second-largest member of both its family and the order Charadriiformes, surpassed only by the mancalline "Miomancalla". The great auks that lived farther north averaged larger in size than the more southerly members of the species. Males and females were similar in plumage, although there is evidence for differences in size, particularly in the bill and femur length. The back was primarily a glossy black, and the belly was white. The neck and legs were short, and the head and wings small. The great auk appeared chubby due to a thick layer of fat necessary for warmth. During summer, it developed a wide white eye patch over each eye, which had a hazel or chestnut iris. During winter the great auk moulted and lost this eye patch, which was replaced with a wide white band and a gray line of feathers that stretched from the eye to the ear. During the summer, its chin and throat were blackish-brown and the inside of the mouth was yellow. In winter, the throat became white. Some individuals reportedly had grey plumage on their flanks, but the purpose, seasonal duration, and frequency of this variation are unknown. The bill was large at long and curved downward at the top; the bill also had deep white grooves in both the upper and lower mandibles, up to seven on the upper mandible and twelve on the lower mandible in summer, although there were fewer in winter. The wings were only in length and the longest wing feathers were only long. 
Its feet and short claws were black, while the webbed skin between the toes was brownish black. The legs were far back on the bird's body, which gave it powerful swimming and diving abilities. Hatchlings were described as grey and downy, but their exact appearance is unknown, since no skins exist today. Juvenile birds had less prominent grooves in their beaks than adults and they had mottled white and black necks, while the eye spot found in adults was not present; instead, a grey line ran through the eyes (which still had white eye rings) to just below the ears. Great auk calls included low croaking and a hoarse scream. A captive great auk was observed making a gurgling noise when anxious. It is not known what its other vocalizations were, but it is believed that they were similar to those of the razorbill, only louder and deeper. The great auk was found in the cold North Atlantic coastal waters along the coasts of Canada, the northeastern United States, Norway, Greenland, Iceland, the Faroe Islands, Ireland, Great Britain, France, and the Iberian Peninsula. Pleistocene fossils indicate the great auk also inhabited Southern France, Italy, and other coasts of the Mediterranean basin. The great auk left the North Atlantic waters for land only to breed, even roosting at sea when not breeding. The rookeries of the great auk were found from Baffin Bay to the Gulf of St. Lawrence, across the far northern Atlantic, including Iceland, and in Norway and the British Isles in Europe. For their nesting colonies the great auks required rocky islands with sloping shorelines that provided access to the sea. These were very limiting requirements and it is believed that the great auk never had more than 20 breeding colonies. The nesting sites also needed to be close to rich feeding areas and to be far enough from the mainland to discourage visitation by predators such as humans and polar bears. The localities of only seven former breeding colonies are known: Papa Westray in the Orkney Islands, St. Kilda off Scotland, Grimsey Island, Eldey Island, Geirfuglasker near Iceland, Funk Island near Newfoundland, and the Bird Rocks (Rochers-aux-Oiseaux) in the Gulf of St. Lawrence. Records suggest that this species may have bred on Cape Cod in Massachusetts. By the late eighteenth and early nineteenth centuries, the breeding range of the great auk was restricted to Funk Island, Grimsey Island, Eldey Island, the Gulf of St. Lawrence, and St. Kilda Island. Funk Island was the largest known breeding colony. After the chicks fledged, the great auk migrated north and south away from the breeding colonies and they tended to go southward during late autumn and winter. It also was common on the Grand Banks of Newfoundland. In recorded history, the great auk typically did not go farther south than Massachusetts Bay in the winter. Archeological excavations have found great auk remains in New England and Southern Spain. Great auk bones have been found as far south as Florida, where it may have been present during four periods: approximately 1000 BC and 1000 AD, as well as during the fifteenth century and the seventeenth century. (It has been suggested that some of the bones discovered in Florida may be the result of aboriginal trading.) 
The great auk was never observed and described by modern scientists during its existence, and is only known from the accounts of laymen, such as sailors, so its behaviour is not well known and is difficult to reconstruct. Much may be inferred from its close, living relative, the razorbill, as well as from remaining soft tissue. Great auks walked slowly and sometimes used their wings to help them traverse rough terrain. When they did run, it was awkwardly and with short steps in a straight line. They had few natural predators, mainly large marine mammals, such as the orca, and white-tailed eagles. Polar bears preyed on nesting colonies of the great auk. Reportedly, this species had no innate fear of human beings, and their flightlessness and awkwardness on land compounded their vulnerability. Humans preyed upon them as food, for feathers, and as specimens for museums and private collections. Great auks reacted to noises, but were rarely frightened by the sight of something. They used their bills aggressively both in the dense nesting sites and when threatened or captured by humans. These birds are believed to have had a life span of approximately 20 to 25 years. During the winter, the great auk migrated south, either in pairs or in small groups, but never with the entire nesting colony. The great auk was generally an excellent swimmer, using its wings to propel itself underwater. While swimming, the head was held up but the neck was drawn in. This species was capable of banking, veering, and turning underwater. The great auk was known to dive to depths of and it has been claimed that the species was able to dive to depths of . To conserve energy, most dives were shallow. It also could hold its breath for 15 minutes, longer than a seal. Its ability to dive this deeply reduced competition with other alcid species. The great auk was capable of accelerating under water, then shooting out of the water to land on a rocky ledge above the ocean's surface. This alcid typically fed in shoaling waters that were shallower than those frequented by other alcids, although after the breeding season, they had been sighted as far as from land. They are believed to have fed cooperatively in flocks. Their main food was fish, usually in length and weighing , but occasionally their prey was up to half the bird's own length. Based on remains associated with great auk bones found on Funk Island and on ecological and morphological considerations, it seems that Atlantic menhaden and capelin were their favoured prey. Other fish suggested as potential prey include lumpsuckers, shorthorn sculpins, cod, and sand lance, as well as crustaceans. The young of the great auk are believed to have eaten plankton and, possibly, fish and crustaceans regurgitated by adults. Historical descriptions of the great auk breeding behaviour are somewhat unreliable. Great auks began pairing in early and mid-May. They are believed to have mated for life (although some theorize that great auks could have mated outside their pair, a trait seen in the razorbill). Once paired, they nested at the base of cliffs in colonies, where they likely copulated. Mated pairs had a social display in which they bobbed their heads and displayed their white eye patch, bill markings, and yellow mouth. These colonies were extremely crowded and dense, with some estimates stating that there was a nesting great auk for every of land. These colonies were very social. When the colonies included other species of alcid, the great auks were dominant due to their size. 
Female great auks would lay only one egg each year, between late May and early June, although they could lay a replacement egg if the first one was lost. In years when there was a shortage of food, the great auks did not breed. A single egg was laid on bare ground up to from shore. The egg was ovate and elongate in shape, and it averaged in length and across at the widest point. The egg was yellowish white to light ochre with a varying pattern of black, brown, or greyish spots and lines that often were congregated on the large end. It is believed that the variation in the egg streaks enabled the parents to recognize their egg among those in the vast colony. The pair took turns incubating the egg in an upright position for the 39 to 44 days before the egg hatched, typically in June, although eggs could be present at the colonies as late as August. The parents also took turns feeding their chick. According to one account, the chick was covered with grey down. The young bird took only two or three weeks to mature enough to abandon the nest and land for the water, typically around the middle of July. The parents cared for their young after they fledged, and adults would be seen swimming with their young perched on their backs. Great auks matured sexually when they were four to seven years old. The great auk was a food source for Neanderthals more than 100,000 years ago, as evidenced by well-cleaned bones found by their campfires. Images believed to depict the great auk also were carved into the walls of the El Pendo Cave, Santander in Spain and Paglicci, Italy more than 35,000 years ago, and cave paintings 20,000 years old have been found in France's Grotte Cosquer. Native Americans valued the great auk as a food source during the winter and as an important cultural symbol. Images of the great auk have been found in bone necklaces. A person buried at the Maritime Archaic site at Port au Choix, Newfoundland, dating to about 2000 BC, was found surrounded by more than 200 great auk beaks, which are believed to have been part of a suit made from their skins, with the heads left attached as decoration. Nearly half of the bird bones found in graves at this site were of the great auk, suggesting that it had great cultural significance for the Maritime Archaic people. The extinct Beothuks of Newfoundland made pudding out of the eggs of the great auk. The Dorset Eskimos also hunted it. The Saqqaq in Greenland overhunted the species, causing a local reduction in range. Later, European sailors used the great auks as a navigational beacon, as the presence of these birds signalled that the Grand Banks of Newfoundland were near. This species is estimated to have had a maximum population in the millions. The great auk was hunted on a significant scale for food, eggs, and its down feathers from at least the eighth century. Prior to that, hunting by local natives may be documented from Late Stone Age Scandinavia and eastern North America, as well as from early fifth century Labrador, where the bird seems to have occurred only as stragglers. Early explorers, including Jacques Cartier, and numerous ships attempting to find gold on Baffin Island were not provisioned with food for the journey home, and therefore, used this vulnerable species as both a convenient food source and bait for fishing. Reportedly, some of the later vessels anchored next to a colony and ran out planks to the land. The sailors then herded hundreds of these great auks onto the ships, where they were slaughtered. 
Some authors have questioned the reports of this hunting method and whether it was successful. Great auk eggs were also a valued food source, as the eggs were three times the size of a murre's and had a large yolk. These sailors also introduced rats onto the islands, which preyed upon the nests. The Little Ice Age may have reduced the population of the great auk by exposing more of their breeding islands to predation by polar bears, but massive exploitation by humans for their down drastically reduced the population. By the mid-sixteenth century, the nesting colonies along the European side of the Atlantic were nearly all eliminated by humans killing this bird for its down, which was used to make pillows. In 1553, the great auk received its first official protection, and in 1794 Great Britain banned the killing of this species for its feathers. In St. John's, those violating a 1775 law banning hunting the great auk for its feathers or eggs were publicly flogged, though hunting for use as fishing bait was still permitted. On the North American side, eider down initially was preferred, but once the eiders were nearly driven to extinction in the 1770s, down collectors switched to the great auk at the same time that hunting for food, fishing bait, and oil decreased. The great auk had disappeared from Funk Island by 1800. An account by Aaron Thomas of HMS "Boston" from 1794 described how the bird had been slaughtered systematically until then. With its increasing rarity, specimens of the great auk and its eggs became collectible and highly prized by rich Europeans, and the loss of a large number of its eggs to collection contributed to the demise of the species. Eggers, individuals who visited the nesting sites of the great auk to collect their eggs, quickly realized that the birds did not all lay their eggs on the same day, so they could make return visits to the same breeding colony. Eggers collected only the eggs without embryos and typically discarded those with embryos growing inside them. On the islet of Stac an Armin, St. Kilda, Scotland, in July 1840, the last great auk seen in Britain was caught and killed. Three men from St. Kilda caught a single "garefowl", noticing its little wings and the large white spot on its head. They tied it up and kept it alive for three days, until a large storm arose. Believing that the bird was a witch and was causing the storm, they then killed it by beating it with a stick. The last colony of great auks lived on Geirfuglasker (the "Great Auk Rock") off Iceland. This islet was a volcanic rock surrounded by cliffs that made it inaccessible to humans, but in 1830, the islet submerged after a volcanic eruption, and the birds moved to the nearby island of Eldey, which was accessible from a single side. When the colony initially was discovered in 1835, nearly fifty birds were present. Museums, desiring the skins of the great auk for preservation and display, quickly began collecting birds from the colony. The last pair, found incubating an egg, was killed there on 3 June 1844, on request from a merchant who wanted specimens, with Jón Brandsson and Sigurður Ísleifsson strangling the adults and Ketill Ketilsson smashing the egg with his boot. Great auk specialist John Wolley interviewed the two men who killed the last birds, and Sigurður later described the act. A later claim of a live individual sighted in 1852 on the Grand Banks of Newfoundland has been accepted by the International Union for Conservation of Nature and Natural Resources (IUCN). 
The possibility of recreating the great auk using DNA from collected specimens has been discussed, and it remains a subject of scientific debate. Today, 78 skins of the great auk remain, mostly in museum collections, along with approximately 75 eggs and 24 complete skeletons. All but four of the surviving skins are in summer plumage, and only two of these are immature. No hatchling specimens exist. Each egg and skin has been assigned a number by specialists. Although thousands of isolated bones were collected from nineteenth-century Funk Island to Neolithic middens, only a few complete skeletons exist. Natural mummies also are known from Funk Island, and the eyes and internal organs of the last two birds from 1844 are stored in the Zoological Museum, Copenhagen. The whereabouts of the skins from the last two individuals was unknown for more than a hundred years; however, the mystery has been partly resolved using DNA extracted from the organs of the last individuals and the skins of the candidate specimens suggested by Errol Fuller (those in Übersee-Museum Bremen, Royal Belgian Institute of Natural Sciences, Zoological Museum of Kiel University, Los Angeles County Museum of Natural History, and Landesmuseum Natur und Mensch Oldenburg). A positive match was found between the organs from the male individual and the skin now in the RBINS in Brussels. No match was found between the female organs and a specimen from Fuller's list, but the authors speculate that the skin in the Cincinnati Museum of Natural History and Science may be a potential candidate due to a common history with the L.A. specimen. Following the bird's extinction, remains of the great auk increased dramatically in value, and auctions of specimens created intense interest in Victorian Britain, where 15 specimens are now located, the largest number of any country. A specimen was bought in 1971 by the Icelandic Museum of Natural History for £9000, which placed it in the Guinness Book of Records as the most expensive stuffed bird ever sold. The price of its eggs sometimes reached up to 11 times the amount earned by a skilled worker in a year. The present whereabouts of six of the eggs are unknown. Several other eggs have been destroyed accidentally. Two mounted skins were destroyed in the twentieth century: one in the Mainz Museum during the Second World War, and one in the Museu Bocage, Lisbon, in a fire in 1978. The great auk is one of the more frequently referenced extinct birds in literature, much like the famous dodo. It appears in many works of children's literature. Charles Kingsley's "The Water-Babies, A Fairy Tale for a Land Baby" depicts a great auk telling the tale of the extinction of its species. (In one illustration for the book, Kingsley's artist played with or misunderstood the words "large pair of white spectacles", intended to mean the natural white patches on the bird's face.) Enid Blyton's "The Island of Adventure" features the bird's extinction, sending the protagonist on a failed search for what he believes is a lost colony of the species. The great auk also is present in a wide variety of other works of fiction. In the short story "The Harbor-Master", by Robert W. Chambers, the discovery and attempted recovery of the last known pair of great auks is central to the plot (which also involves a proto-H.P. Lovecraftian element of suspense). 
The story first appeared in "Ainslee's Magazine" (August 1899) and was slightly revised to become the first five chapters of Chambers' episodic novel "In Search of the Unknown" (Harper and Brothers Publishers, New York, 1904). In his novel "Ulysses", James Joyce mentions the bird while the novel's main character is drifting into sleep. He associates the great auk with the mythical roc as a method of formally returning the main character to a sleepy land of fantasy and memory. "Penguin Island", a 1908 French satirical novel by the Nobel Prize-winning author Anatole France, narrates the fictional history of a great auk population that is mistakenly baptized by a nearsighted missionary. A great auk is collected by fictional naturalist Stephen Maturin in the Patrick O'Brian historical novel "The Surgeon's Mate". This work also details the harvesting of a colony of auks. The great auk is the subject of a novel, "The Last Great Auk" by Allen Eckert, which tells of the events leading to the extinction of the great auk as seen from the perspective of the last one alive. It appears also in Farley Mowat's "Sea of Slaughter". Ogden Nash warns that humans could suffer the same fate as the great auk in his short poem "A Caution to Everybody." "Night of the Auk", a 1956 Broadway drama by Arch Oboler, depicts a group of astronauts returning from the moon to discover that a full-blown nuclear war has broken out. Oboler draws a parallel between the human-caused extinction of the great auk and the story's nuclear extinction of humankind. This bird also is featured in a variety of other media. It is the subject of a ballet, "Still Life at the Penguin Café", and a song, "A Dream Too Far", in the ecological musical "Rockford's Rock Opera". A great auk appears as a prized possession of Baba the Turk in Igor Stravinsky's opera "The Rake's Progress" (libretto by W. H. Auden and Chester Kallman). The great auk is the mascot of the Archmere Academy in Claymont, Delaware, and the Adelaide University Choral Society (AUCS) in Australia. The great auk was formerly the mascot of the Lindsay Frost campus of Sir Sandford Fleming College in Ontario. In 2012, the two separate sports programs of Fleming College were combined and the great auk mascot went extinct. The Lindsay Frost campus student-owned bar, student center, and lounge is still known as the Auk's Lodge. It was also the mascot of the now-ended Knowledge Masters educational competition. The scientific journal of the American Ornithologists' Union is named "The Auk" in honour of this bird. According to Homer Hickam's memoir, "Rocket Boys", and its film production, "October Sky", the early rockets he and his friends built were, ironically, named "Auk". A cigarette company, the British Great Auk Cigarettes, was named after this bird. Walton Ford, the American painter, has featured great auks in two paintings: "The Witch of St. Kilda" and "Funk Island". The English painter and writer Errol Fuller produced "Last Stand" for his monograph on the species. The great auk also appeared on one stamp in a set of five depicting extinct birds issued by Cuba in 1974. Galaxy A galaxy is a gravitationally bound system of stars, stellar remnants, interstellar gas, dust, and dark matter. The word galaxy is derived from the Greek "galaxias" (γαλαξίας), literally "milky", a reference to the Milky Way. Galaxies range in size from dwarfs with just a few hundred million (10^8) stars to giants with one hundred trillion (10^14) stars, each orbiting its galaxy's center of mass. 
Galaxies are categorized according to their visual morphology as elliptical, spiral, or irregular. Many galaxies are thought to have supermassive black holes at their active centers. The Milky Way's central black hole, known as Sagittarius A*, has a mass four million times that of the Sun. As of March 2016, GN-z11 is the oldest and most distant observed galaxy, with a comoving distance of 32 billion light-years from Earth, observed as it existed just 400 million years after the Big Bang. Recent estimates of the number of galaxies in the observable universe range from 200 billion (2×10^11) to 2 trillion (2×10^12) or more, containing more stars than all the grains of sand on planet Earth. Most galaxies are 1,000 to 100,000 parsecs in diameter (approximately 3,000 to 300,000 light-years) and separated by distances on the order of millions of parsecs (or megaparsecs). For comparison, the Milky Way has a diameter of at least 30,000 parsecs (100,000 ly) and is separated from the Andromeda Galaxy, its nearest large neighbor, by 780,000 parsecs (2.5 million ly). The space between galaxies is filled with a tenuous gas (the intergalactic medium) having an average density of less than one atom per cubic meter. The majority of galaxies are gravitationally organized into groups, clusters, and superclusters. The Milky Way is part of the Local Group, which is dominated by it and the Andromeda Galaxy and is part of the Virgo Supercluster. At the largest scale, these associations are generally arranged into sheets and filaments surrounded by immense voids. The largest structure of galaxies yet recognised is a cluster of superclusters that has been named Laniakea, which contains the Virgo Supercluster. The word "galaxy" derives from the Greek term for the Milky Way, "galaxias" ("milky one"), or "kyklos galaktikos" ("milky circle"), due to its appearance as a "milky" band of light in the sky. In Greek mythology, Zeus places his son born by a mortal woman, the infant Heracles, on Hera's breast while she is asleep so that the baby will drink her divine milk and will thus become immortal. Hera wakes up while breastfeeding and then realizes she is nursing an unknown baby: she pushes the baby away, some of her milk spills, and it produces the faint band of light known as the Milky Way. In the astronomical literature, the capitalized word "Galaxy" is often used to refer to our galaxy, the Milky Way, to distinguish it from the other galaxies in our universe. The English term "Milky Way" can be traced back to a story by Chaucer. Galaxies were initially discovered telescopically and were known as "spiral nebulae". Most 18th- and 19th-century astronomers considered them either unresolved star clusters or anagalactic nebulae, and they were simply thought to be part of the Milky Way, but their true composition and nature remained a mystery. Observations of a few nearby bright galaxies, like the Andromeda Galaxy, using larger telescopes began resolving them into huge conglomerations of stars, but based simply on the apparent faintness and sheer population of stars, the true distances of these objects placed them well beyond the Milky Way. For this reason they were popularly called "island universes", but this term quickly fell into disuse, as the word "universe" implied the entirety of existence. Instead, they became known simply as galaxies. Tens of thousands of galaxies have been catalogued, but only a few have well-established names, such as the Andromeda Galaxy, the Magellanic Clouds, the Whirlpool Galaxy, and the Sombrero Galaxy. 
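The parsec and light-year figures quoted above are related by a fixed conversion factor; a minimal sketch of that arithmetic (assuming the standard value of roughly 3.26 light-years per parsec):

```python
# Convert the galaxy-scale distances quoted above from parsecs to light-years.
# Assumes the standard conversion factor of ~3.2616 light-years per parsec.
LY_PER_PARSEC = 3.2616

def parsecs_to_light_years(parsecs: float) -> float:
    """Return the distance in light-years for a distance given in parsecs."""
    return parsecs * LY_PER_PARSEC

# Milky Way diameter: at least 30,000 parsecs (~100,000 light-years).
print(f"{parsecs_to_light_years(30_000):,.0f} ly")   # ≈ 97,800 ly
# Distance to the Andromeda Galaxy: 780,000 parsecs (~2.5 million light-years).
print(f"{parsecs_to_light_years(780_000):,.0f} ly")  # ≈ 2,544,000 ly
```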
Astronomers work with numbers from certain catalogues, such as the Messier catalogue, the NGC (New General Catalogue), the IC (Index Catalogue), the CGCG (Catalogue of Galaxies and of Clusters of Galaxies), the MCG (Morphological Catalogue of Galaxies) and UGC (Uppsala General Catalogue of Galaxies). All of the well-known galaxies appear in one or more of these catalogues, each time under a different number. For example, Messier 109 is a spiral galaxy having the number 109 in the catalogue of Messier, but it also carries the designations NGC 3992, UGC 6937, CGCG 269-023, MCG +09-20-044, and PGC 37617. The realization that we live in a galaxy that is one among many parallels major discoveries that were made about the Milky Way and other nebulae. The Greek philosopher Democritus (450–370 BCE) proposed that the bright band on the night sky known as the Milky Way might consist of distant stars. Aristotle (384–322 BCE), however, believed the Milky Way to be caused by "the ignition of the fiery exhalation of some stars that were large, numerous and close together" and that the "ignition takes place in the upper part of the atmosphere, in the region of the World that is continuous with the heavenly motions." The Neoplatonist philosopher Olympiodorus the Younger (–570 CE) was critical of this view, arguing that if the Milky Way is sublunary (situated between Earth and the Moon) it should appear different at different times and places on Earth, and that it should have parallax, which it does not. In his view, the Milky Way is celestial. According to Mohani Mohamed, the Arabian astronomer Alhazen (965–1037) made the first attempt at observing and measuring the Milky Way's parallax, and he thus "determined that because the Milky Way had no parallax, it must be remote from the Earth, not belonging to the atmosphere." The Persian astronomer al-Bīrūnī (973–1048) proposed the Milky Way galaxy to be "a collection of countless fragments of the nature of nebulous stars." The Andalusian astronomer Ibn Bâjjah ("Avempace", d. 1138) proposed that the Milky Way is made up of many stars that almost touch one another and appear to be a continuous image due to the effect of refraction from sublunary material, citing his observation of the conjunction of Jupiter and Mars as evidence of this occurring when two objects are near. In the 14th century, the Syrian-born Ibn Qayyim proposed the Milky Way galaxy to be "a myriad of tiny stars packed together in the sphere of the fixed stars." Actual proof of the Milky Way consisting of many stars came in 1610 when the Italian astronomer Galileo Galilei used a telescope to study the Milky Way and discovered that it is composed of a huge number of faint stars. In 1750 the English astronomer Thomas Wright, in his "An original theory or new hypothesis of the Universe", speculated (correctly) that the galaxy might be a rotating body of a huge number of stars held together by gravitational forces, akin to the Solar System but on a much larger scale. The resulting disk of stars can be seen as a band on the sky from our perspective inside the disk. In a treatise in 1755, Immanuel Kant elaborated on Wright's idea about the structure of the Milky Way. The first project to describe the shape of the Milky Way and the position of the Sun was undertaken by William Herschel in 1785 by counting the number of stars in different regions of the sky. He produced a diagram of the shape of the galaxy with the Solar System close to the center. 
Using a refined approach, Kapteyn in 1920 arrived at the picture of a small (diameter about 15 kiloparsecs) ellipsoid galaxy with the Sun close to the center. A different method by Harlow Shapley based on the cataloguing of globular clusters led to a radically different picture: a flat disk with diameter approximately 70 kiloparsecs and the Sun far from the center. Both analyses failed to take into account the absorption of light by interstellar dust present in the galactic plane, but after Robert Julius Trumpler quantified this effect in 1930 by studying open clusters, the present picture of our host galaxy, the Milky Way, emerged. A few galaxies outside the Milky Way are visible in the night sky to the unaided eye, including the Andromeda Galaxy, the Large Magellanic Cloud, and the Small Magellanic Cloud. During the 10th century, the Persian astronomer Al-Sufi made the earliest recorded identification of the Andromeda Galaxy, describing it as a "small cloud" in his "Book of Fixed Stars". In 964, Al-Sufi probably also mentioned the Large Magellanic Cloud, referring to it as "Al Bakr of the southern Arabs"; however, as the object is placed at a declination of −70° south, it was not visible from his latitude. The object, now known as the Large Magellanic Cloud after Magellan, was not well known to Europeans until his voyage in the 16th century. The Andromeda Galaxy was later independently noted by Simon Marius in 1612. In 1734, philosopher Emanuel Swedenborg in his "Principia" speculated that there may be galaxies outside our own that are formed into galactic clusters that are minuscule parts of the universe, which extends far beyond what we can see. These views "are remarkably close to the present-day views of the cosmos." In 1750, Thomas Wright speculated (correctly) that the Milky Way is a flattened disk of stars, and that some of the nebulae visible in the night sky might be separate Milky Ways. In 1755, Immanuel Kant used the term "island Universe" to describe these distant nebulae. Toward the end of the 18th century, Charles Messier compiled a catalog containing the 109 brightest celestial objects having a nebulous appearance. Subsequently, William Herschel assembled a catalog of 5,000 nebulae. In 1845, Lord Rosse constructed a new telescope and was able to distinguish between elliptical and spiral nebulae. He also managed to make out individual point sources in some of these nebulae, lending credence to Kant's earlier conjecture. In 1912, Vesto Slipher made spectrographic studies of the brightest spiral nebulae to determine their composition. Slipher discovered that the spiral nebulae have high Doppler shifts, indicating that they are moving at a rate exceeding the velocity of the stars he had measured. He found that the majority of these nebulae are moving away from us. In 1917, Heber Curtis observed nova S Andromedae within the "Great Andromeda Nebula" (as the Andromeda Galaxy, Messier object M31, was then known). Searching the photographic record, he found 11 more novae. Curtis noticed that these novae were, on average, 10 magnitudes fainter than those that occurred within our galaxy. As a result, he was able to come up with a distance estimate of 150,000 parsecs. He became a proponent of the so-called "island universes" hypothesis, which holds that spiral nebulae are actually independent galaxies. In 1920 a debate took place between Harlow Shapley and Heber Curtis (the Great Debate), concerning the nature of the Milky Way, spiral nebulae, and the dimensions of the Universe. 
To support his claim that the Great Andromeda Nebula is an external galaxy, Curtis noted the appearance of dark lanes resembling the dust clouds in the Milky Way, as well as the significant Doppler shift. In 1922, the Estonian astronomer Ernst Öpik gave a distance determination that supported the theory that the Andromeda Nebula is indeed a distant extra-galactic object. Using the new 100-inch Mt. Wilson telescope, Edwin Hubble was able to resolve the outer parts of some spiral nebulae as collections of individual stars and identified some Cepheid variables, thus allowing him to estimate the distance to the nebulae: they were far too distant to be part of the Milky Way. In 1936 Hubble produced a classification of galactic morphology that is used to this day. In 1944, Hendrik van de Hulst predicted that microwave radiation with a wavelength of 21 cm would be detectable from interstellar atomic hydrogen gas, and in 1951 it was observed. This radiation is not affected by dust absorption, and so its Doppler shift can be used to map the motion of the gas in our galaxy. These observations led to the hypothesis of a rotating bar structure in the center of our galaxy. With improved radio telescopes, hydrogen gas could also be traced in other galaxies. In the 1970s, Vera Rubin uncovered a discrepancy between observed galactic rotation speed and that predicted by the visible mass of stars and gas. Today, the galaxy rotation problem is thought to be explained by the presence of large quantities of unseen dark matter. A concept known as the universal rotation curve of spirals, moreover, shows that the problem is ubiquitous in these objects. Beginning in the 1990s, the Hubble Space Telescope yielded improved observations. Among other things, Hubble data helped establish that the missing dark matter in our galaxy cannot solely consist of inherently faint and small stars. The Hubble Deep Field, an extremely long exposure of a relatively empty part of the sky, provided evidence that there are about 125 billion (1.25×10^11) galaxies in the observable universe. Improved technology for detecting the portions of the spectrum invisible to humans (radio telescopes, infrared cameras, and X-ray telescopes) allows the detection of other galaxies that are not detected by Hubble. In particular, galaxy surveys in the Zone of Avoidance (the region of the sky blocked at visible-light wavelengths by the Milky Way) have revealed a number of new galaxies. In 2016, a study published in The Astrophysical Journal and led by Christopher Conselice of the University of Nottingham, using 3D modeling of images collected over 20 years by the Hubble Space Telescope, concluded that there are over 2 trillion (2×10^12) galaxies in the observable universe. Galaxies come in three main types: ellipticals, spirals, and irregulars. A slightly more extensive description of galaxy types based on their appearance is given by the Hubble sequence. Since the Hubble sequence is entirely based upon visual morphological type (shape), it may miss certain important characteristics of galaxies such as star formation rate in starburst galaxies and activity in the cores of active galaxies. The Hubble classification system rates elliptical galaxies on the basis of their ellipticity, ranging from E0, being nearly spherical, up to E7, which is highly elongated. These galaxies have an ellipsoidal profile, giving them an elliptical appearance regardless of the viewing angle. Their appearance shows little structure and they typically have relatively little interstellar matter. 
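The E0–E7 classes mentioned above follow directly from a galaxy's apparent axis ratio; a minimal sketch of that rule (the helper function is illustrative, not taken from any particular catalogue or library):

```python
# Compute the Hubble elliptical class (E0–E7) from a galaxy's apparent axes.
# The class number is 10 * (1 - b/a), truncated, where a and b are the apparent
# semi-major and semi-minor axes; E0 is nearly spherical, E7 highly elongated.
def hubble_elliptical_class(a: float, b: float) -> str:
    """Illustrative helper: return the E-type for semi-major axis a and semi-minor axis b."""
    n = int(10 * (1 - b / a))
    return f"E{min(n, 7)}"  # classes beyond E7 are not used in practice

print(hubble_elliptical_class(1.0, 1.0))  # E0, nearly spherical
print(hubble_elliptical_class(1.0, 0.3))  # E7, highly elongated
```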
With so little interstellar matter, these galaxies also have a low proportion of open clusters and a reduced rate of new star formation. Instead they are dominated by generally older, more evolved stars that are orbiting the common center of gravity in random directions. The stars contain low abundances of heavy elements because star formation ceases after the initial burst. In this sense they have some similarity to the much smaller globular clusters. The largest galaxies are giant ellipticals. Many elliptical galaxies are believed to form due to the interaction of galaxies, resulting in a collision and merger. They can grow to enormous sizes (compared to spiral galaxies, for example), and giant elliptical galaxies are often found near the core of large galaxy clusters. Starburst galaxies are the result of a galactic collision that can result in the formation of an elliptical galaxy. A shell galaxy is a type of elliptical galaxy where the stars in the galaxy's halo are arranged in concentric shells. About one-tenth of elliptical galaxies have a shell-like structure, which has never been observed in spiral galaxies. The shell-like structures are thought to develop when a larger galaxy absorbs a smaller companion galaxy. As the two galaxy centers approach, they start to oscillate around a center point; the oscillation creates gravitational ripples that form the shells of stars, similar to ripples spreading on water. For example, galaxy NGC 3923 has over twenty shells. Spiral galaxies resemble spiraling pinwheels. Though the stars and other visible material contained in such a galaxy lie mostly on a plane, the majority of mass in spiral galaxies exists in a roughly spherical halo of dark matter that extends beyond the visible component, as demonstrated by the universal rotation curve concept. Spiral galaxies consist of a rotating disk of stars and interstellar medium, along with a central bulge of generally older stars. Extending outward from the bulge are relatively bright arms. In the Hubble classification scheme, spiral galaxies are listed as type "S", followed by a letter ("a", "b", or "c") that indicates the degree of tightness of the spiral arms and the size of the central bulge. An "Sa" galaxy has tightly wound, poorly defined arms and possesses a relatively large core region. At the other extreme, an "Sc" galaxy has open, well-defined arms and a small core region. A galaxy with poorly defined arms is sometimes referred to as a flocculent spiral galaxy, in contrast to the grand design spiral galaxy, which has prominent and well-defined spiral arms. The speed at which a galaxy rotates is thought to correlate with the flatness of the disc, as some spiral galaxies have thick bulges, while others are thin and dense. In spiral galaxies, the spiral arms have the shape of approximate logarithmic spirals, a pattern that can be theoretically shown to result from a disturbance in a uniformly rotating mass of stars. Like the stars, the spiral arms rotate around the center, but they do so with constant angular velocity. The spiral arms are thought to be areas of high-density matter, or "density waves". As stars move through an arm, the space velocity of each stellar system is modified by the gravitational force of the higher density. (The velocity returns to normal after the stars depart on the other side of the arm.) This effect is akin to a "wave" of slowdowns moving along a highway full of moving cars. 
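As a worked form of the logarithmic-spiral shape described above (a sketch; the pitch angle is left as a free parameter rather than a value taken from the text):

```latex
% Logarithmic spiral traced by a spiral arm, in polar coordinates (r, \theta):
%   r(\theta) = r_0 \, e^{\theta \tan \varphi}
% where r_0 is the radius at \theta = 0 and \varphi is the (constant) pitch angle
% of the arm. A larger pitch angle gives a more open arm (Sc-like); a smaller
% one gives a more tightly wound arm (Sa-like).
\[ r(\theta) = r_0 \, e^{\theta \tan \varphi} \]
```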
The spiral arms are visible because the high density facilitates star formation, and therefore they harbor many bright and young stars. A majority of spiral galaxies, including our own Milky Way galaxy, have a linear, bar-shaped band of stars that extends outward to either side of the core, then merges into the spiral arm structure. In the Hubble classification scheme, these are designated by an "SB", followed by a lower-case letter ("a", "b" or "c") that indicates the form of the spiral arms (in the same manner as the categorization of normal spiral galaxies). Bars are thought to be temporary structures that can occur as a result of a density wave radiating outward from the core, or else due to a tidal interaction with another galaxy. Many barred spiral galaxies are active, possibly as a result of gas being channeled into the core along the arms. Our own galaxy, the Milky Way, is a large disk-shaped barred-spiral galaxy about 30 kiloparsecs in diameter and a kiloparsec thick. It contains about two hundred billion (2×10^11) stars and has a total mass of about six hundred billion (6×10^11) times the mass of the Sun. Recently, researchers described galaxies called super-luminous spirals. They are very large, with diameters of up to 437,000 light-years (compared to the Milky Way's 100,000 light-year diameter). With a mass of 340 billion solar masses, they generate a significant amount of ultraviolet and mid-infrared light. They are thought to have a star formation rate around 30 times that of the Milky Way. Despite the prominence of large elliptical and spiral galaxies, most galaxies in the Universe are dwarf galaxies. These galaxies are relatively small when compared with other galactic formations, being about one hundredth the size of the Milky Way, containing only a few billion stars. Ultra-compact dwarf galaxies have recently been discovered that are only 100 parsecs across. Many dwarf galaxies may orbit a single larger galaxy; the Milky Way has at least a dozen such satellites, with an estimated 300–500 yet to be discovered. Dwarf galaxies may also be classified as elliptical, spiral, or irregular. Since small dwarf ellipticals bear little resemblance to large ellipticals, they are often called dwarf spheroidal galaxies instead. A study of 27 Milky Way neighbors found that in all dwarf galaxies, the central mass is approximately 10 million solar masses, regardless of whether the galaxy has thousands or millions of stars. This has led to the suggestion that galaxies are largely formed by dark matter, and that the minimum size may indicate a form of warm dark matter incapable of gravitational coalescence on a smaller scale. Interactions between galaxies are relatively frequent, and they can play an important role in galactic evolution. Near misses between galaxies result in warping distortions due to tidal interactions, and may cause some exchange of gas and dust. Collisions occur when two galaxies pass directly through each other and have sufficient relative momentum not to merge. The stars of interacting galaxies will usually not collide, but the gas and dust within the two galaxies will interact, sometimes triggering star formation. A collision can severely distort the shape of the galaxies, forming bars, rings or tail-like structures. At the extreme of interactions are galactic mergers. In this case the relative momentum of the two galaxies is insufficient to allow the galaxies to pass through each other. Instead, they gradually merge to form a single, larger galaxy. 
Mergers can result in significant changes to morphology, as compared to the original galaxies. If one of the merging galaxies is much more massive than the other, the result is known as cannibalism. The more massive galaxy will remain relatively undisturbed by the merger, while the smaller galaxy is torn apart. The Milky Way galaxy is currently in the process of cannibalizing the Sagittarius Dwarf Elliptical Galaxy and the Canis Major Dwarf Galaxy. Stars are created within galaxies from a reserve of cold gas that forms into giant molecular clouds. Some galaxies have been observed to form stars at an exceptional rate, which is known as a starburst. If they were to continue to do so, they would consume their reserve of gas in a time span less than the lifespan of the galaxy. Hence starburst activity usually lasts for only about ten million years, a relatively brief period in the history of a galaxy. Starburst galaxies were more common during the early history of the Universe, and, at present, still contribute an estimated 15% to the total star production rate. Starburst galaxies are characterized by dusty concentrations of gas and the appearance of newly formed stars, including massive stars that ionize the surrounding clouds to create H II regions. These massive stars produce supernova explosions, resulting in expanding remnants that interact powerfully with the surrounding gas. These outbursts trigger a chain reaction of star building that spreads throughout the gaseous region. Only when the available gas is nearly consumed or dispersed does the starburst activity end. Starbursts are often associated with merging or interacting galaxies. The prototype example of such a starburst-forming interaction is M82, which experienced a close encounter with the larger M81. Irregular galaxies often exhibit spaced knots of starburst activity. A portion of the observable galaxies are classified as active galaxies if the galaxy contains an active galactic nucleus (AGN). A significant portion of the total energy output from the galaxy is emitted by the active galactic nucleus, instead of the stars, dust and interstellar medium of the galaxy. The standard model for an active galactic nucleus is based upon an accretion disc that forms around a supermassive black hole (SMBH) at the core region of the galaxy. The radiation from an active galactic nucleus results from the gravitational energy of matter as it falls toward the black hole from the disc. In about 10% of these galaxies, a diametrically opposed pair of energetic jets ejects particles from the galaxy core at velocities close to the speed of light. The mechanism for producing these jets is not well understood. Blazars are believed to be active galaxies with a relativistic jet pointed in the direction of Earth. A radio galaxy emits radio frequencies from relativistic jets. A unified model of these types of active galaxies explains their differences based on the viewing angle of the observer. Possibly related to active galactic nuclei (as well as starburst regions) are low-ionization nuclear emission-line regions (LINERs). The emission from LINER-type galaxies is dominated by weakly ionized elements. The excitation sources for the weakly ionized lines include post-AGB stars, AGN, and shocks. Approximately one-third of nearby galaxies are classified as containing LINER nuclei. Seyfert galaxies are one of the two largest groups of active galaxies, along with quasars. 
They have quasar-like nuclei (very luminous, distant and bright sources of electromagnetic radiation) with very high surface brightnesses, but unlike quasars, their host galaxies are clearly detectable. Seyfert galaxies account for about 10% of all galaxies. Seen in visible light, most Seyfert galaxies look like normal spiral galaxies, but when studied under other wavelengths, the luminosity of their cores is equivalent to the luminosity of whole galaxies the size of the Milky Way. Quasars (/ˈkweɪzɑr/) or quasi-stellar radio sources are the most energetic and distant members of a class of objects called active galactic nuclei (AGN). Quasars are extremely luminous and were first identified as being high redshift sources of electromagnetic energy, including radio waves and visible light, that appeared to be similar to stars, rather than extended sources similar to galaxies. Their luminosity can be 100 times greater than that of the Milky Way. Luminous infrared galaxies, or LIRGs, are galaxies with luminosities (the measurement of brightness) above 10^11 L☉. LIRGs are more abundant than starburst galaxies, Seyfert galaxies and quasi-stellar objects at comparable total luminosity. Infrared galaxies emit more energy in the infrared than at all other wavelengths combined. A LIRG's luminosity is 100 billion times that of our Sun. Galaxies have magnetic fields of their own. They are strong enough to be dynamically important: they drive mass inflow into the centers of galaxies, they modify the formation of spiral arms and they can affect the rotation of gas in the outer regions of galaxies. Magnetic fields provide the transport of angular momentum required for the collapse of gas clouds and hence the formation of new stars. The typical average equipartition strength for spiral galaxies is about 10 μG (microgauss) or 1 nT (nanotesla). For comparison, the Earth's magnetic field has an average strength of about 0.3 G (gauss), or 30 μT (microtesla). Radio-faint galaxies like M 31 and M 33, our Milky Way's neighbors, have weaker fields (about 5 μG), while gas-rich galaxies with high star-formation rates, like M 51, M 83 and NGC 6946, have 15 μG on average. In prominent spiral arms the field strength can be up to 25 μG, in regions where cold gas and dust are also concentrated. The strongest total equipartition fields (50–100 μG) were found in starburst galaxies, for example in M 82 and the Antennae, and in nuclear starburst regions, for example in the centers of NGC 1097 and of other barred galaxies. Galactic formation and evolution is an active area of research in astrophysics. Current cosmological models of the early Universe are based on the Big Bang theory. About 300,000 years after this event, atoms of hydrogen and helium began to form, in an event called recombination. Nearly all the hydrogen was neutral (non-ionized) and readily absorbed light, and no stars had yet formed. As a result, this period has been called the "dark ages". It was from density fluctuations (or anisotropic irregularities) in this primordial matter that larger structures began to appear. As a result, masses of baryonic matter started to condense within cold dark matter halos. These primordial structures would eventually become the galaxies we see today. Evidence for the early appearance of galaxies was found in 2006, when it was discovered that the galaxy IOK-1 has an unusually high redshift of 6.96, corresponding to just 750 million years after the Big Bang and making it the most distant and primordial galaxy yet seen. 
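A redshift such as IOK-1's can be translated into a light travel time and a present-day (comoving) distance with a standard cosmological model; a minimal sketch using the astropy library's built-in Planck 2018 cosmology (the library choice and the rounded outputs are assumptions of this example, not figures from the text):

```python
# Sketch: convert the redshift reported for IOK-1 (z = 6.96) into a light travel
# time, a comoving distance, and the age of the universe at emission, using
# astropy's built-in Planck 2018 cosmology (an assumed tool choice).
from astropy.cosmology import Planck18

z = 6.96  # redshift of IOK-1, as quoted above

print(Planck18.age(z))                 # age of the universe at emission, roughly 0.75 Gyr
print(Planck18.lookback_time(z))       # how long the light has travelled, roughly 13 Gyr
print(Planck18.comoving_distance(z))   # where the source is "now", of order 9000 Mpc
```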
While some scientists have claimed other objects (such as Abell 1835 IR1916) have higher redshifts (and therefore are seen in an earlier stage of the Universe's evolution), IOK-1's age and composition have been more reliably established. In December 2012, astronomers reported that UDFj-39546284 is the most distant object known and has a redshift value of 11.9. The object, estimated to have existed around 380 million years after the Big Bang (which occurred about 13.8 billion years ago), is about 13.42 billion light-years away in light travel distance. The existence of such early protogalaxies suggests that they must have grown in the so-called "dark ages". As of May 5, 2015, the galaxy EGS-zs8-1 is the most distant and earliest galaxy measured, forming 670 million years after the Big Bang. The light from EGS-zs8-1 has taken 13 billion years to reach Earth, and the galaxy is now 30 billion light-years away because of the expansion of the universe over that time. The detailed process by which early galaxies formed is an open question in astrophysics. Theories can be divided into two categories: top-down and bottom-up. In top-down theories (such as the Eggen–Lynden-Bell–Sandage [ELS] model), protogalaxies form in a large-scale simultaneous collapse lasting about one hundred million years. In bottom-up theories (such as the Searle-Zinn [SZ] model), small structures such as globular clusters form first, and then a number of such bodies accrete to form a larger galaxy. Once protogalaxies began to form and contract, the first halo stars (called Population III stars) appeared within them. These were composed almost entirely of hydrogen and helium, and may have been massive. If so, these huge stars would have quickly consumed their supply of fuel and become supernovae, releasing heavy elements into the interstellar medium. This first generation of stars re-ionized the surrounding neutral hydrogen, creating expanding bubbles of space through which light could readily travel. In June 2015, astronomers reported evidence for Population III stars in the Cosmos Redshift 7 galaxy at . Such stars are likely to have existed in the very early universe (i.e., at high redshift), and may have started the production of chemical elements heavier than hydrogen that are needed for the later formation of planets and life as we know it. Within a billion years of a galaxy's formation, key structures begin to appear. Globular clusters, the central supermassive black hole, and a galactic bulge of metal-poor Population II stars form. The creation of a supermassive black hole appears to play a key role in actively regulating the growth of galaxies by limiting the total amount of additional matter added. During this early epoch, galaxies undergo a major burst of star formation. During the following two billion years, the accumulated matter settles into a galactic disc. A galaxy will continue to absorb infalling material from high-velocity clouds and dwarf galaxies throughout its life. This matter is mostly hydrogen and helium. The cycle of stellar birth and death slowly increases the abundance of heavy elements, eventually allowing the formation of planets. The evolution of galaxies can be significantly affected by interactions and collisions. Mergers of galaxies were common during the early epoch, and the majority of galaxies were peculiar in morphology. Given the distances between the stars, the great majority of stellar systems in colliding galaxies will be unaffected. 
However, gravitational stripping of the interstellar gas and dust that makes up the spiral arms produces a long train of stars known as tidal tails. Examples of these formations can be seen in NGC 4676 or the Antennae Galaxies. The Milky Way galaxy and the nearby Andromeda Galaxy are moving toward each other at about 130 km/s, and—depending upon the lateral movements—the two might collide in about five to six billion years. Although the Milky Way has never collided with a galaxy as large as Andromeda before, evidence of past collisions of the Milky Way with smaller dwarf galaxies is increasing. Such large-scale interactions are rare. As time passes, mergers of two systems of equal size become less common. Most bright galaxies have remained fundamentally unchanged for the last few billion years, and the net rate of star formation probably also peaked approximately ten billion years ago. Spiral galaxies, like the Milky Way, produce new generations of stars as long as they have dense molecular clouds of interstellar hydrogen in their spiral arms. Elliptical galaxies are largely devoid of this gas, and so form few new stars. The supply of star-forming material is finite; once stars have converted the available supply of hydrogen into heavier elements, new star formation will come to an end. The current era of star formation is expected to continue for up to one hundred billion years, and then the "stellar age" will wind down after about ten trillion to one hundred trillion years (10^13–10^14 years), as the smallest, longest-lived stars in our universe, tiny red dwarfs, begin to fade. At the end of the stellar age, galaxies will be composed of compact objects: brown dwarfs, white dwarfs that are cooling or cold ("black dwarfs"), neutron stars, and black holes. Eventually, as a result of gravitational relaxation, all stars will either fall into central supermassive black holes or be flung into intergalactic space as a result of collisions. Deep sky surveys show that galaxies are often found in groups and clusters. Solitary galaxies that have not significantly interacted with another galaxy of comparable mass during the past billion years are relatively scarce. Only about 5% of the galaxies surveyed have been found to be truly isolated; however, these isolated formations may have interacted and even merged with other galaxies in the past, and may still be orbited by smaller, satellite galaxies. Isolated galaxies can produce stars at a higher rate than normal, as their gas is not being stripped by other nearby galaxies. On the largest scale, the Universe is continually expanding, resulting in an average increase in the separation between individual galaxies (see Hubble's law). Associations of galaxies can overcome this expansion on a local scale through their mutual gravitational attraction. These associations formed early in the Universe, as clumps of dark matter pulled their respective galaxies together. Nearby groups later merged to form larger-scale clusters. This on-going merger process (as well as an influx of infalling gas) heats the inter-galactic gas within a cluster to very high temperatures, reaching 30–100 megakelvins. About 70–80% of the mass in a cluster is in the form of dark matter, with 10–30% consisting of this heated gas and the remaining few percent of the matter in the form of galaxies. Most galaxies in the Universe are gravitationally bound to a number of other galaxies. 
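The Hubble's law relation referenced above can be written as a simple proportionality; a short worked form (the Hubble constant value used here is an assumed round number, not one quoted in the text):

```latex
% Hubble's law: recession velocity is proportional to distance.
%   v = H_0 \, d
% With an assumed H_0 \approx 70\ \mathrm{km\,s^{-1}\,Mpc^{-1}}, a galaxy at
% d = 100\ \mathrm{Mpc} recedes at roughly v \approx 7000\ \mathrm{km\,s^{-1}}.
\[ v = H_0 \, d \]
```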
These form a fractal-like hierarchical distribution of clustered structures, with the smallest such associations being termed groups. A group of galaxies is the most common type of galactic cluster, and these formations contain a majority of the galaxies (as well as most of the baryonic mass) in the Universe. To remain gravitationally bound to such a group, each member galaxy must have a sufficiently low velocity to prevent it from escaping (see Virial theorem). If there is insufficient kinetic energy, however, the group may evolve into a smaller number of galaxies through mergers. Clusters of galaxies consist of hundreds to thousands of galaxies bound together by gravity. Clusters of galaxies are often dominated by a single giant elliptical galaxy, known as the brightest cluster galaxy, which, over time, tidally destroys its satellite galaxies and adds their mass to its own. Superclusters contain tens of thousands of galaxies, which are found in clusters, groups and sometimes individually. At the supercluster scale, galaxies are arranged into sheets and filaments surrounding vast empty voids. Above this scale, the Universe appears to be the same in all directions (isotropic and homogeneous). The Milky Way galaxy is a member of an association named the Local Group, a relatively small group of galaxies that has a diameter of approximately one megaparsec. The Milky Way and the Andromeda Galaxy are the two brightest galaxies within the group; many of the other member galaxies are dwarf companions of these two galaxies. The Local Group itself is a part of a cloud-like structure within the Virgo Supercluster, a large, extended structure of groups and clusters of galaxies centered on the Virgo Cluster. And the Virgo Supercluster itself is a part of the Pisces-Cetus Supercluster Complex, a giant galaxy filament. The peak radiation of most stars lies in the visible spectrum, so the observation of the stars that form galaxies has been a major component of optical astronomy. It is also a favorable portion of the spectrum for observing ionized H II regions, and for examining the distribution of dusty arms. The dust present in the interstellar medium is opaque to visual light. It is more transparent to far-infrared, which can be used to observe the interior regions of giant molecular clouds and galactic cores in great detail. Infrared is also used to observe distant, red-shifted galaxies that were formed much earlier in the history of the Universe. Water vapor and carbon dioxide absorb a number of useful portions of the infrared spectrum, so high-altitude or space-based telescopes are used for infrared astronomy. The first non-visual study of galaxies, particularly active galaxies, was made using radio frequencies. The Earth's atmosphere is nearly transparent to radio between 5 MHz and 30 GHz. (The ionosphere blocks signals below this range.) Large radio interferometers have been used to map the active jets emitted from active nuclei. Radio telescopes can also be used to observe neutral hydrogen (via 21 cm radiation), including, potentially, the non-ionized matter in the early Universe that later collapsed to form galaxies. Ultraviolet and X-ray telescopes can observe highly energetic galactic phenomena. Ultraviolet flares are sometimes observed when a star in a distant galaxy is torn apart from the tidal forces of a nearby black hole. The distribution of hot gas in galactic clusters can be mapped by X-rays. 
The existence of supermassive black holes at the cores of galaxies was confirmed through X-ray astronomy. Grus (constellation) Grus is a constellation in the southern sky. Its name is Latin for the crane, a type of bird. It is one of twelve constellations conceived by Petrus Plancius from the observations of Pieter Dirkszoon Keyser and Frederick de Houtman. Grus first appeared on a 35-cm (14 in) diameter celestial globe published in 1598 in Amsterdam by Plancius and Jodocus Hondius and was depicted in Johann Bayer's star atlas "Uranometria" of 1603. French explorer and astronomer Nicolas Louis de Lacaille gave Bayer designations to its stars in 1756, some of which had been previously considered part of the neighbouring constellation Piscis Austrinus. The constellations Grus, Pavo, Phoenix and Tucana are collectively known as the "Southern Birds". The constellation's brightest star, Alpha Gruis, is also known as Alnair and appears as a 1.7-magnitude blue-white star. Beta Gruis is a red giant variable star with a minimum magnitude of 2.3 and a maximum magnitude of 2.0. Six star systems have been found to have planets: the red dwarf Gliese 832 is one of the closest stars to Earth to have a planetary system. Another—WASP-95—has a planet that orbits every two days. Deep-sky objects found in Grus include the planetary nebula IC 5148, also known as the Spare Tyre Nebula, and a group of four interacting galaxies known as the Grus Quartet. The stars that form Grus were originally considered part of the neighbouring constellation Piscis Austrinus (the southern fish), with Gamma Gruis seen as part of the fish's tail. The stars were first defined as a separate constellation by the Dutch astronomer Petrus Plancius, who created twelve new constellations based on the observations of the southern sky by the Dutch explorers Pieter Dirkszoon Keyser and Frederick de Houtman, who had sailed on the first Dutch trading expedition, known as the "Eerste Schipvaart", to the East Indies. Grus first appeared on a 35-cm diameter celestial globe published in 1598 in Amsterdam by Plancius with Jodocus Hondius. Its first depiction in a celestial atlas was in the German cartographer Johann Bayer's "Uranometria" of 1603. De Houtman included it in his southern star catalogue the same year under the Dutch name "Den Reygher", "The Heron", but Bayer followed Plancius and Hondius in using Grus. An alternative name for the constellation, "Phoenicopterus" (Latin "flamingo"), was used briefly during the early 17th century, seen in the 1605 work "Cosmographiae Generalis" by Paul Merula of Leiden University and a c. 1625 globe by Dutch globe maker Pieter van den Keere. Astronomer Ian Ridpath has reported the symbolism likely came from Plancius originally, who had worked with both men. Grus and the nearby constellations Phoenix, Tucana and Pavo are collectively called the "Southern Birds". The stars that correspond to Grus were generally too far south to be seen from China. In Chinese astronomy, Gamma and Lambda Gruis may have been included in the tub-shaped asterism "Bàijiù", along with stars from Piscis Austrinus. In Central Australia, the Arrernte and Luritja people living on a mission in Hermannsburg viewed the sky as divided between them, east of the Milky Way representing Arrernte camps and west denoting Luritja camps. Alpha and Beta Gruis, along with Fomalhaut, Alpha Pavonis and the stars of Musca, were all claimed by the Arrernte.
Grus is bordered by Piscis Austrinus to the north, Sculptor to the northeast, Phoenix to the east, Tucana to the south, Indus to the southwest, and Microscopium to the west. Bayer straightened the tail of Piscis Austrinus to make way for Grus in his "Uranometria". Covering 366 square degrees, it ranks 45th of the 88 modern constellations in size and covers 0.887% of the night sky. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "Gru". The official constellation boundaries, as set by Eugène Delporte in 1930, are defined as a polygon of 6 segments. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between −36.31° and −56.39°. Grus is located too far south to be seen by observers in the British Isles and the northern United States, though it can easily be seen from Florida or California; the whole constellation is visible to observers south of latitude 33°N. Keyser and de Houtman assigned twelve stars to the constellation. Bayer depicted Grus on his chart, but did not assign its stars Bayer designations. French explorer and astronomer Nicolas Louis de Lacaille labelled them Alpha to Phi in 1756 with some omissions. In 1879, American astronomer Benjamin Gould added Kappa, Nu, Omicron and Xi, which had all been catalogued by Lacaille but not given Bayer designations. Lacaille considered them too faint, while Gould thought otherwise. Xi Gruis had originally been placed in Microscopium. Conversely, Gould dropped Lacaille's Sigma as he thought it was too dim. Grus has several bright stars. Marking the left wing is Alpha Gruis, a blue-white star of spectral type B6V and apparent magnitude 1.7, around 101 light-years from Earth. Its traditional name, Alnair, means "the bright one" and refers to its status as the brightest star in Grus (although the Arabians saw it as the brightest star in the Fish's tail, as Grus was then depicted). Alnair is around 380 times as luminous and has over 3 times the diameter of the Sun. Lying 5 degrees west of Alnair, denoting the Crane's heart is Beta Gruis (the proper name is Tiaki), a red giant of spectral type M5III. It has a diameter of 0.8 astronomical units (AU) (if placed in the Solar System it would extend to the orbit of Venus) located around 170 light-years from Earth. It is a variable star with a minimum magnitude of 2.3 and a maximum magnitude of 2.0. An imaginary line drawn from the Great Square of Pegasus through Fomalhaut will lead to Alnair and Beta Gruis. Lying in the northwest corner of the constellation and marking the crane's eye is Gamma Gruis, a blue-white subgiant of spectral type B8III and magnitude 3.0 lying around 211 light-years from Earth. Also known as Al Dhanab, it has finished fusing its core hydrogen and has begun cooling and expanding, which will see it transform into a red giant. There are several naked-eye double stars in Grus. Forming a triangle with Alnair and Beta, Delta Gruis is an optical double whose components—Delta¹ and Delta²—are separated by 45 arcseconds. Delta¹ is a yellow giant of spectral type G7III and magnitude 4.0, 309 light-years from Earth, and may have its own magnitude 12 orange dwarf companion. Delta² is a red giant of spectral type M4.5III and semiregular variable that ranges between magnitudes 3.99 and 4.2, located 325 light-years from Earth. It has around 3 times the mass and 135 times the diameter of our Sun.
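Two of the figures above can be checked with simple arithmetic: the fraction of the sky covered by a 366-square-degree constellation, and the northernmost latitude from which a constellation whose southern boundary reaches declination −56.39° is entirely visible. The sketch below is only an illustrative back-of-the-envelope check; it assumes the standard whole-sky value of about 41,253 square degrees and ignores atmospheric refraction.

import math

# Whole-sky solid angle in square degrees: 4*pi steradians * (180/pi)^2 deg^2 per steradian
whole_sky_sq_deg = 4 * math.pi * (180 / math.pi) ** 2   # about 41,253 square degrees

grus_area_sq_deg = 366.0
print(f"Fraction of the sky: {grus_area_sq_deg / whole_sky_sq_deg:.3%}")   # about 0.887%, as quoted

# A southern star of declination dec just grazes the horizon for an observer at
# latitude (90 + dec) degrees north, so the whole constellation (southern limit
# dec = -56.39 degrees) is fully visible only south of roughly this latitude.
southern_limit_dec = -56.39
print(f"Fully visible south of about {90 + southern_limit_dec:.1f} degrees N")  # about 33.6 N

Both results agree with the 0.887% sky coverage and the latitude 33°N visibility limit given above.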
Mu Gruis, composed of Mu¹ and Mu², is also an optical double—both stars are yellow giants of spectral type G8III around 2.5 times as massive as the Sun with surface temperatures of around 4900 K. Mu¹ is the brighter of the two at magnitude 4.8, located around 275 light-years from Earth, while Mu², the dimmer at magnitude 5.11, lies 265 light-years distant from Earth. Pi Gruis, an optical double with a variable component, is composed of Pi¹ Gruis and Pi². Pi¹ is a semi-regular red giant of spectral type S5, ranging from magnitude 5.31 to 7.01 over a period of 191 days, and is around 532 light-years from Earth. One of the brightest S-class stars as seen from Earth, it has a companion star of apparent magnitude 10.9 with sunlike properties, being a yellow main sequence star of spectral type G0V. The pair make up a likely binary system. Pi² is a giant star of spectral type F3III-IV located around 130 light-years from Earth, and is often brighter than its companion at magnitude 5.6. Marking the right wing is Theta Gruis, yet another double star, lying 5 degrees east of Delta¹ and Delta². RZ Gruis is a binary system of apparent magnitude 12.3 with occasional dimming to 13.4, whose components—a white dwarf and main sequence star—are thought to orbit each other roughly every 8.5 to 10 hours. It belongs to the UX Ursae Majoris subgroup of cataclysmic variable star systems, where material from the donor star is drawn to the white dwarf where it forms an accretion disc that remains bright and outshines the two component stars. The system is poorly understood, though the donor star has been calculated to be of spectral type F5V. These stars have spectra very similar to novae that have returned to quiescence after outbursts, yet they have not been observed to have erupted themselves. The American Association of Variable Star Observers recommends watching them for future events. CE Gruis (also known as Grus V-1) is a faint (magnitude 18–21) star system also composed of a white dwarf and donor star; in this case the two are so close they are tidally locked. In systems known as polars, material from the donor star does not form an accretion disc around the white dwarf, but rather streams directly onto it. Six star systems are thought to have planetary systems. Tau Gruis is a yellow star of magnitude 6.0 located around 106 light-years away. It may be a main sequence star or be just beginning to depart from the sequence as it expands and cools. In 2002 the star was found to have a planetary companion. HD 215456, HD 213240 and WASP-95 are yellow sunlike stars discovered to have two planets, a planet and a remote red dwarf, and a hot Jupiter respectively; this last—WASP-95b—completes an orbit round its sun in a mere two days. Gliese 832 is a red dwarf of spectral type M1.5V and apparent magnitude 8.66 located only 16.1 light-years distant; hence it is one of the nearest stars to the Solar System. A Jupiter-like planet—Gliese 832 b—orbiting the red dwarf over a period of 9.4±0.4 years was discovered in 2008. WISE 2220−3628 is a brown dwarf of spectral type Y, and hence one of the coolest star-like objects known. It has been calculated as being around 26 light-years distant from Earth. Nicknamed the spare-tyre nebula, IC 5148 is a planetary nebula located around 1 degree west of Lambda Gruis. Around 3000 light-years distant, it is expanding at 50 kilometres a second, one of the fastest rates of expansion of all planetary nebulae. Northeast of Theta Gruis are four interacting galaxies known as the Grus Quartet.
These galaxies are NGC 7552, NGC 7590, NGC 7599, and NGC 7582. The latter three galaxies occupy an area of sky only 10 arcminutes across and are sometimes referred to as the "Grus Triplet," although all four are part of a larger loose group of galaxies called the IC 1459 Grus Group. NGC 7552 and 7582 are exhibiting high starburst activity; this is thought to have arisen because of tidal forces from their interaction. Located on the border of Grus with Piscis Austrinus, IC 1459 is a peculiar E3 giant elliptical galaxy. It has a fast counterrotating stellar core, and shells and ripples in its outer region. The galaxy has an apparent magnitude of 11.9 and is around 80 million light years distant. NGC 7424 is a barred spiral galaxy with an apparent magnitude of 10.4, located around 4 degrees west of the Grus Triplet. Approximately 37.5 million light years distant, it is about 100,000 light years in diameter, has well-defined spiral arms and is thought to resemble the Milky Way. Two ultraluminous X-ray sources and one supernova have been observed in NGC 7424. SN 2001ig was discovered in 2001 and classified as a Type IIb supernova, one that initially showed a weak hydrogen line in its spectrum, but this emission later became undetectable and was replaced by lines of oxygen, magnesium and calcium, as well as other features that resembled the spectrum of a Type Ib supernova. A massive star of spectral type F, A or B is thought to be the surviving binary companion to SN 2001ig, which was believed to have been a Wolf–Rayet star. Located near Alnair is NGC 7213, a face-on type 1 Seyfert galaxy located approximately 71.7 million light years from Earth. It has an apparent magnitude of 12.1. Appearing undisturbed in visible light, it shows signs of having undergone a collision or merger when viewed at longer wavelengths, with disturbed patterns of ionized hydrogen including a filament of gas around 64,000 light-years long. It is part of a group of ten galaxies. NGC 7410 is a spiral galaxy discovered by British astronomer John Herschel during observations at the Cape of Good Hope in October 1834. The galaxy has a visual magnitude of 11.7 and is approximately 122 million light years distant from Earth. Guy Fawkes Guy Fawkes (13 April 1570 – 31 January 1606), also known as Guido Fawkes, the name he adopted while fighting for the Spanish, was a member of a group of provincial English Catholics who planned the failed Gunpowder Plot of 1605. Fawkes was born and educated in York. His father died when Fawkes was eight years old, after which his mother married a recusant Catholic. Fawkes converted to Catholicism and left for mainland Europe, where he fought in the Eighty Years' War on the side of Catholic Spain against Protestant Dutch reformers in the Low Countries. He travelled to Spain to seek support for a Catholic rebellion in England without success. He later met Thomas Wintour, with whom he returned to England. Wintour introduced Fawkes to Robert Catesby, who planned to assassinate King James I and restore a Catholic monarch to the throne. The plotters leased an undercroft beneath the House of Lords, and Fawkes was placed in charge of the gunpowder they stockpiled there. Prompted by the receipt of an anonymous letter, the authorities searched Westminster Palace during the early hours of 5 November and found Fawkes guarding the explosives. Over the next few days, he was questioned and tortured and eventually confessed.
Immediately before his execution on 31 January, Fawkes fell from the scaffold where he was to be hanged and broke his neck, thus avoiding the agony of the mutilation that followed. Fawkes became synonymous with the Gunpowder Plot, the failure of which has been commemorated in Britain since 5 November 1605. His effigy is traditionally burned on a bonfire, commonly accompanied by a fireworks display. Guy Fawkes was born in 1570 in Stonegate, York. He was the second of four children born to Edward Fawkes, a proctor and an advocate of the consistory court at York, and his wife, Edith. Guy's parents were regular communicants of the Church of England, as were his paternal grandparents; his grandmother, born Ellen Harrington, was the daughter of a prominent merchant, who served as Lord Mayor of York in 1536. Guy's mother's family were recusant Catholics, and his cousin, Richard Cowling, became a Jesuit priest. "Guy" was an uncommon name in England, but may have been popular in York on account of a local notable, Sir Guy Fairfax of Steeton. The date of Fawkes's birth is unknown, but he was baptised in the church of St Michael le Belfrey on 16 April. As the customary gap between birth and baptism was three days, he was probably born about 13 April. In 1568, Edith had given birth to a daughter named Anne, but the child died aged about seven weeks, in November that year. She bore two more children after Guy: Anne (b. 1572), and Elizabeth (b. 1575). Both were married, in 1599 and 1594 respectively. In 1579, when Guy was eight years old, his father died. His mother remarried several years later, to the Catholic Dionis Baynbrigge (or Denis Bainbridge) of Scotton, Harrogate. Fawkes may have become a Catholic through the Baynbrigge family's recusant tendencies, the Catholic branches of the Pulleyn and Percy families of Scotton, and his time at St. Peter's School in York. A governor of the school had spent about 20 years in prison for recusancy, and its headmaster, John Pulleyn, came from a family of noted Yorkshire recusants, the Pulleyns of Blubberhouses. In her 1915 work "The Pulleynes of Yorkshire", author Catharine Pullein suggested that Fawkes's Catholic education came from his Harrington relatives, who were known for harbouring priests, one of whom later accompanied Fawkes to Flanders in 1592–1593. Fawkes's fellow students included John Wright and his brother Christopher (both later involved with Fawkes in the Gunpowder Plot) and Oswald Tesimond, Edward Oldcorne and Robert Middleton, who became priests (the latter executed in 1601). After leaving school Fawkes entered the service of Anthony Browne, 1st Viscount Montagu. The Viscount took a dislike to Fawkes and after a short time dismissed him; he was subsequently employed by Anthony-Maria Browne, 2nd Viscount Montagu, who succeeded his grandfather at the age of 18. At least one source claims that Fawkes married and had a son, but no known contemporary accounts confirm this. In October 1591 Fawkes sold the estate in Clifton in York that he had inherited from his father. He travelled to the continent to fight in the Eighty Years' War for Catholic Spain against the new Dutch Republic and, from 1595 until the Peace of Vervins in 1598, France. Although England was not by then engaged in land operations against Spain, the two countries were still at war, and the Spanish Armada of 1588 was only five years in the past.
He joined Sir William Stanley, an English Catholic and veteran commander in his mid-fifties who had raised an army in Ireland to fight in Leicester's expedition to the Netherlands. Stanley had been held in high regard by Elizabeth I, but following his surrender of Deventer to the Spanish in 1587 he, and most of his troops, had switched sides to serve Spain. Fawkes became an alférez or junior officer, fought well at the siege of Calais in 1596, and by 1603 had been recommended for a captaincy. That year, he travelled to Spain to seek support for a Catholic rebellion in England. He used the occasion to adopt the Italian version of his name, Guido, and in his memorandum described James I (who became king of England that year) as "a heretic", who intended "to have all of the Papist sect driven out of England." He denounced Scotland, and the King's favourites among the Scottish nobles, writing "it will not be possible to reconcile these two nations, as they are, for very long". Although he was received politely, the court of Philip III was unwilling to offer him any support. In 1604 Fawkes became involved with a small group of English Catholics, led by Robert Catesby, who planned to assassinate the Protestant King James and replace him with his daughter, third in the line of succession, Princess Elizabeth. Fawkes was described by the Jesuit priest and former school friend Oswald Tesimond as "pleasant of approach and cheerful of manner, opposed to quarrels and strife ... loyal to his friends". Tesimond also claimed Fawkes was "a man highly skilled in matters of war", and that it was this mixture of piety and professionalism that endeared him to his fellow conspirators. The author Antonia Fraser describes Fawkes as "a tall, powerfully built man, with thick reddish-brown hair, a flowing moustache in the tradition of the time, and a bushy reddish-brown beard", and that he was "a man of action ... capable of intelligent argument as well as physical endurance, somewhat to the surprise of his enemies." The first meeting of the five central conspirators took place on Sunday 20 May 1604, at an inn called the Duck and Drake, in the fashionable Strand district of London. Catesby had already proposed at an earlier meeting with Thomas Wintour and John Wright to kill the King and his government by blowing up "the Parliament House with gunpowder". Wintour, who at first objected to the plan, was convinced by Catesby to travel to the continent to seek help. Wintour met with the Constable of Castile, the exiled Welsh spy Hugh Owen, and Sir William Stanley, who said that Catesby would receive no support from Spain. Owen did, however, introduce Wintour to Fawkes, who had by then been away from England for many years, and thus was largely unknown in the country. Wintour and Fawkes were contemporaries; each was militant, and had first-hand experience of the unwillingness of the Spaniards to help. Wintour told Fawkes of their plan to "doe some whatt in Ingland if the pece with Spaine healped us nott", and thus in April 1604 the two men returned to England. Wintour's news did not surprise Catesby; despite positive noises from the Spanish authorities, he feared that "the deeds would nott answere". One of the conspirators, Thomas Percy, was promoted in June 1604, gaining access to a house in London that belonged to John Whynniard, Keeper of the King's Wardrobe. Fawkes was installed as a caretaker and began using the pseudonym John Johnson, servant to Percy. 
The contemporaneous account of the prosecution (taken from Thomas Wintour's confession) claimed that the conspirators attempted to dig a tunnel from beneath Whynniard's house to Parliament, although this story may have been a government fabrication; no evidence for the existence of a tunnel was presented by the prosecution, and no trace of one has ever been found; Fawkes himself did not admit the existence of such a scheme until his fifth interrogation, but even then he could not locate the tunnel. If the story is true, however, by December 1604 the conspirators were busy tunnelling from their rented house to the House of Lords. They ceased their efforts when, during tunnelling, they heard a noise from above. Fawkes was sent out to investigate, and returned with the news that the tenant's widow was clearing out a nearby undercroft, directly beneath the House of Lords. The plotters purchased the lease to the room, which also belonged to John Whynniard. Unused and filthy, it was considered an ideal hiding place for the gunpowder the plotters planned to store. According to Fawkes, 20 barrels of gunpowder were brought in at first, followed by 16 more on 20 July. On 28 July, however, the ever-present threat of the plague delayed the opening of Parliament until Tuesday, 5 November. In an attempt to gain foreign support, in May 1605 Fawkes travelled overseas and informed Hugh Owen of the plotters' plan. At some point during this trip his name made its way into the files of Robert Cecil, 1st Earl of Salisbury, who employed a network of spies across Europe. One of these spies, Captain William Turner, may have been responsible. Although the information he provided to Salisbury usually amounted to no more than a vague pattern of invasion reports, and included nothing which regarded the Gunpowder Plot, on 21 April he told how Fawkes was to be brought by Tesimond to England. Fawkes was a well-known Flemish mercenary, and would be introduced to "Mr Catesby" and "honourable friends of the nobility and others who would have arms and horses in readiness". Turner's report did not, however, mention Fawkes's pseudonym in England, John Johnson, and did not reach Cecil until late in November, well after the plot had been discovered. It is uncertain when Fawkes returned to England, but he was back in London by late August 1605, when he and Wintour discovered that the gunpowder stored in the undercroft had decayed. More gunpowder was brought into the room, along with firewood to conceal it. Fawkes's final role in the plot was settled during a series of meetings in October. He was to light the fuse and then escape across the Thames. Simultaneously, a revolt in the Midlands would help to ensure the capture of Princess Elizabeth. Acts of regicide were frowned upon, and Fawkes would therefore head to the continent, where he would explain to the Catholic powers his holy duty to kill the King and his retinue. A few of the conspirators were concerned about fellow Catholics who would be present at Parliament during the opening. On the evening of 26 October, Lord Monteagle received an anonymous letter warning him to stay away, and to "retyre youre self into yowre contee whence yow maye expect the event in safti for ... they shall receyve a terrible blowe this parleament". Despite quickly becoming aware of the letter (they were informed by one of Monteagle's servants), the conspirators resolved to continue with their plans, as it appeared that it "was clearly thought to be a hoax".
Fawkes checked the undercroft on 30 October, and reported that nothing had been disturbed. Monteagle's suspicions had been aroused, however, and the letter was shown to King James. The King ordered Sir Thomas Knyvet to conduct a search of the cellars underneath Parliament, which he did in the early hours of 5 November. Fawkes had taken up his station late on the previous night, armed with a slow match and a watch given to him by Percy "becaus he should knowe howe the time went away". He was found leaving the cellar, shortly after midnight, and arrested. Inside, the barrels of gunpowder were discovered hidden under piles of firewood and coal. Fawkes gave his name as John Johnson and was first interrogated by members of the King's Privy chamber, where he remained defiant. When asked by one of the lords what he was doing in possession of so much gunpowder, Fawkes answered that his intention was "to blow you Scotch beggars back to your native mountains." He identified himself as a 36-year-old Catholic from Netherdale in Yorkshire, and gave his father's name as Thomas and his mother's as Edith Jackson. Wounds on his body noted by his questioners he explained as the effects of pleurisy. Fawkes admitted his intention to blow up the House of Lords, and expressed regret at his failure to do so. His steadfast manner earned him the admiration of King James, who described Fawkes as possessing "a Roman resolution". James's admiration did not, however, prevent him from ordering on 6 November that "John Johnson" be tortured, to reveal the names of his co-conspirators. He directed that the torture be light at first, referring to the use of manacles, but more severe if necessary, authorising the use of the rack: "the gentler Tortures are to be first used unto him "et sic per gradus ad ima tenditur" [and so by degrees proceeding to the worst]". Fawkes was transferred to the Tower of London. The King composed a list of questions to be put to "Johnson", such as "as to what he is, For I can never yet hear of any man that knows him", "When and where he learned to speak French?", and "If he was a Papist, who brought him up in it?" The room in which Fawkes was interrogated subsequently became known as the Guy Fawkes Room. Sir William Waad, Lieutenant of the Tower, supervised the torture and obtained Fawkes's confession. He searched his prisoner, and found a letter addressed to Guy Fawkes. To Waad's surprise, "Johnson" remained silent, revealing nothing about the plot or its authors. On the night of 6 November he spoke with Waad, who reported to Salisbury "He [Johnson] told us that since he undertook this action he did every day pray to God he might perform that which might be for the advancement of the Catholic Faith and saving his own soul". According to Waad, Fawkes managed to rest through the night, despite his being warned that he would be interrogated until "I had gotton the inwards secret of his thoughts and all his complices". His composure was broken at some point during the following day. The observer Sir Edward Hoby remarked "Since Johnson's being in the Tower, he beginneth to speak English". Fawkes revealed his true identity on 7 November, and told his interrogators that there were five people involved in the plot to kill the King. He began to reveal their names on 8 November, and told how they intended to place Princess Elizabeth on the throne. His third confession, on 9 November, implicated Francis Tresham.
Following the Ridolfi plot of 1571, prisoners were made to dictate their confessions, before copying and signing them, if they still could. Although it is uncertain if he was tortured on the rack, Fawkes's scrawled signature bears testament to the suffering he endured at the hands of his interrogators. The trial of eight of the plotters began on Monday 27 January 1606. Fawkes shared the barge from the Tower to Westminster Hall with seven of his co-conspirators. They were kept in the Star Chamber before being taken to Westminster Hall, where they were displayed on a purpose-built scaffold. The King and his close family, watching in secret, were among the spectators as the Lords Commissioners read out the list of charges. Fawkes was identified as Guido Fawkes, "otherwise called Guido Johnson". He pleaded not guilty, despite his apparent acceptance of guilt from the moment he was captured. The outcome was never in doubt. The jury found all the defendants guilty, and the Lord Chief Justice Sir John Popham pronounced them guilty of high treason. The Attorney General Sir Edward Coke told the court that each of the condemned would be drawn backwards to his death, by a horse, his head near the ground. They were to be "put to death halfway between heaven and earth as unworthy of both". Their genitals would be cut off and burnt before their eyes, and their bowels and hearts removed. They would then be decapitated, and the dismembered parts of their bodies displayed so that they might become "prey for the fowls of the air". Fawkes's and Tresham's testimony regarding the Spanish treason was read aloud, as well as confessions related specifically to the Gunpowder Plot. The last piece of evidence offered was a conversation between Fawkes and Wintour, who had been kept in adjacent cells. The two men apparently thought they had been speaking in private, but their conversation was intercepted by a government spy. When the prisoners were allowed to speak, Fawkes explained his not guilty plea as ignorance of certain aspects of the indictment. On 31 January 1606, Fawkes and three others – Thomas Wintour, Ambrose Rookwood, and Robert Keyes – were dragged (i.e., "drawn") from the Tower on wattled hurdles to the Old Palace Yard at Westminster, opposite the building they had attempted to destroy. His fellow plotters were then hanged and quartered. Fawkes was the last to stand on the scaffold. He asked for forgiveness of the King and state, while keeping up his "crosses and idle ceremonies" (Catholic practices). Weakened by torture and aided by the hangman, Fawkes began to climb the ladder to the noose, but either through jumping to his death or climbing too high so the rope was incorrectly set, he managed to avoid the agony of the latter part of his execution by breaking his neck. His lifeless body was nevertheless quartered and, as was the custom, his body parts were then distributed to "the four corners of the kingdom", to be displayed as a warning to other would-be traitors. On 5 November 1605 Londoners were encouraged to celebrate the King's escape from assassination by lighting bonfires, "always provided that 'this testemonye of joy be carefull done without any danger or disorder'". An Act of Parliament designated each 5 November as a day of thanksgiving for "the joyful day of deliverance", and remained in force until 1859. Although he was only one of 13 conspirators, Fawkes is today the individual most associated with the failed plot.
In Britain, 5 November has variously been called Guy Fawkes Night, Guy Fawkes Day, Plot Night and Bonfire Night; the latter can be traced directly back to the original celebration of 5 November 1605. Bonfires were accompanied by fireworks from the 1650s onwards, and it became the custom to burn an effigy (usually the pope) after 1673, when the heir presumptive, James, Duke of York, made his conversion to Catholicism public. Effigies of other notable figures who have become targets for the public's ire, such as Paul Kruger and Margaret Thatcher, have also found their way onto the bonfires, although most modern effigies are of Fawkes. The "guy" is normally created by children, from old clothes, newspapers, and a mask. During the 19th century, "guy" came to mean an oddly dressed person, but in American English it lost any pejorative connotation, and was used to refer to any male person. James Sharpe, professor of history at the University of York, has described how Guy Fawkes came to be toasted as "the last man to enter Parliament with honest intentions". William Harrison Ainsworth's 1841 historical romance "Guy Fawkes; or, The Gunpowder Treason" portrays Fawkes in a generally sympathetic light, and transformed him in the public perception into an "acceptable fictional character". Fawkes subsequently appeared as "essentially an action hero" in children's books and penny dreadfuls such as "The Boyhood Days of Guy Fawkes; or, The Conspirators of Old London", published in about 1905. According to historian Lewis Call, Fawkes is now "a major icon in modern political culture", whose face has become "a potentially powerful instrument for the articulation of postmodern anarchism" in the late 20th century, exemplified by the mask worn by V in the comic book series "V for Vendetta", who fights against a fictional fascist English state. Giraffe The giraffe ("Giraffa") is a genus of African even-toed ungulate mammals, the tallest living terrestrial animals and the largest ruminants. The genus currently consists of one species, "Giraffa camelopardalis", the type species. Seven other species are extinct, prehistoric species known from fossils. Taxonomic classifications of one to eight extant giraffe species have been described, based upon research into the mitochondrial and nuclear DNA, as well as morphological measurements of "Giraffa," but the IUCN currently recognises only one species with nine subspecies. The giraffe's chief distinguishing characteristics are its extremely long neck and legs, its horn-like ossicones, and its distinctive coat patterns. It is classified under the family Giraffidae, along with its closest extant relative, the okapi. Its scattered range extends from Chad in the north to South Africa in the south, and from Niger in the west to Somalia in the east. Giraffes usually inhabit savannahs and woodlands. Their food source is leaves, fruits and flowers of woody plants, primarily acacia species, which they browse at heights most other herbivores cannot reach. They may be preyed on by lions, leopards, spotted hyenas and African wild dogs. Giraffes live in herds of related females and their offspring, or bachelor herds of unrelated adult males, but are gregarious and may gather in large aggregations. Males establish social hierarchies through "necking", which are combat bouts where the neck is used as a weapon. Dominant males gain mating access to females, which bear the sole responsibility for raising the young.
The giraffe has intrigued various cultures, both ancient and modern, for its peculiar appearance, and has often been featured in paintings, books, and cartoons. It is classified by the International Union for Conservation of Nature as Vulnerable to extinction, and has been extirpated from many parts of its former range. Giraffes are still found in numerous national parks and game reserves but estimations as of 2016 indicate that there are approximately 97,500 members of "Giraffa" in the wild, with around 1,144 in captivity. The name "giraffe" has its earliest known origins in the Arabic word "zarāfah" (زرافة), perhaps borrowed from the animal's Somali name "geri". The Arab name is translated as "fast-walker". There were several Middle English spellings, such as "jarraf", "ziraph", and "gerfauntz". The Italian form "giraffa" arose in the 1590s. The modern English form developed around 1600 from the French "girafe". "Camelopard" is an archaic English name for the giraffe deriving from the Ancient Greek for camel and leopard, referring to its camel-like shape and its leopard-like colouring. Living giraffes were originally classified as one species by Carl Linnaeus in 1758. He gave it the binomial name "Cervus camelopardalis". Morten Thrane Brünnich classified the genus "Giraffa" in 1772. The species name "camelopardalis" is from Latin. The giraffe is one of only two living genera of the family Giraffidae in the order Artiodactyla, the other being the okapi. The family was once much more extensive, with over 10 fossil genera described. Their closest known relatives are the extinct deer-like climacocerids. They, together with the family Antilocapridae (whose only extant species is the pronghorn), belong to the superfamily Giraffoidea. These animals may have evolved from the extinct family Palaeomerycidae which might also have been the ancestor of deer. The elongation of the neck appears to have started early in the giraffe lineage. Comparisons between giraffes and their ancient relatives suggest that vertebrae close to the skull lengthened earlier, followed by lengthening of vertebrae further down. One early giraffid ancestor was "Canthumeryx" which has been dated variously to have lived 25–20 million years ago (mya), 17–15 mya or 18–14.3 mya and whose deposits have been found in Libya. This animal was medium-sized, slender and antelope-like. "Giraffokeryx" appeared 15 mya in the Indian subcontinent and resembled an okapi or a small giraffe, and had a longer neck and similar ossicones. "Giraffokeryx" may have shared a clade with more massively built giraffids like "Sivatherium" and "Bramatherium". Giraffids like "Palaeotragus", "Shansitherium" and "Samotherium" appeared 14 mya and lived throughout Africa and Eurasia. These animals had bare ossicones and small cranial sinuses and were longer with broader skulls. "Palaeotragus" resembled the okapi and may have been its ancestor. Others find that the okapi lineage diverged earlier, before "Giraffokeryx". "Samotherium" was a particularly important transitional fossil in the giraffe lineage as its cervical vertebrae were intermediate in length and structure between a modern giraffe and an okapi, and were more vertical than the okapi's. "Bohlinia", which first appeared in southeastern Europe and lived 9–7 mya, was likely a direct ancestor of the giraffe. "Bohlinia" closely resembled modern giraffes, having a long neck and legs and similar ossicones and dentition. "Bohlinia" entered China and northern India in response to climate change.
From there, the genus "Giraffa" evolved and, around 7 mya, entered Africa. Further climate changes caused the extinction of the Asian giraffes, while the African giraffes survived and radiated into several new species. Living giraffes appear to have arisen around 1 mya in eastern Africa during the Pleistocene. Some biologists suggest the modern giraffes descended from "G. jumae"; others find "G. gracilis" a more likely candidate. "G. jumae" was larger and more heavily built while "G. gracilis" was smaller and more lightly built. The main driver for the evolution of the giraffes is believed to have been the changes from extensive forests to more open habitats, which began 8 mya. During this time, tropical plants disappeared and were replaced by arid C4 plants, and a dry savannah emerged across eastern and northern Africa and western India. Some researchers have hypothesised that this new habitat coupled with a different diet, including acacia species, may have exposed giraffe ancestors to toxins that caused higher mutation rates and a higher rate of evolution. The coat patterns of modern giraffes may also have coincided with these habitat changes. Asian giraffes are hypothesised to have had more okapi-like colourations. In the early 19th century, Jean-Baptiste Lamarck believed the giraffe's long neck was an "acquired characteristic", developed as generations of ancestral giraffes strove to reach the leaves of tall trees. This theory was eventually rejected, and scientists now believe the giraffe's neck arose through Darwinian natural selection—that ancestral giraffes with long necks thereby had a competitive feeding advantage (competing browsers hypothesis) that better enabled them to survive and reproduce to pass on their genes. The giraffe genome is around 2.9 billion base pairs in length compared to the 3.3 billion base pairs of the okapi. Of the proteins in giraffe and okapi genes, 19.4% are identical. The two species are equally distantly related to cattle, suggesting the giraffe's unique characteristics are not because of faster evolution. The divergence of giraffe and okapi lineages dates to around 11.5 mya. A small group of regulatory genes in the giraffe appear to be responsible for the animal's stature and associated circulatory adaptations. The IUCN currently recognises only one species of giraffe with nine subspecies. In 2001, a two-species taxonomy was proposed. A 2007 study on the genetics of "Giraffa", suggested they were six species: the West African, Rothschild's, reticulated, Masai, Angolan, and South African giraffe. The study deduced from genetic differences in nuclear and mitochondrial DNA (mtDNA) that giraffes from these populations are reproductively isolated and rarely interbreed, though no natural obstacles block their mutual access. This includes adjacent populations of Rothschild's, reticulated, and Masai giraffes. The Masai giraffe was also suggested to consist of possibly two species separated by the Rift Valley. Reticulated and Masai giraffes have the highest mtDNA diversity, which is consistent with giraffes originating in eastern Africa. Populations further north are more closely related to the former, while those to the south are more related to the latter. Giraffes appear to select mates of the same coat type, which are imprinted on them as calves. 
The implications of these findings for the conservation of giraffes were summarised by David Brown, lead author of the study, who told BBC News: "Lumping all giraffes into one species obscures the reality that some kinds of giraffe are on the brink. Some of these populations number only a few hundred individuals and need immediate protection." A 2011 study using detailed analyses of the morphology of giraffes, and application of the phylogenetic species concept, described eight species of living giraffes. The eight species are: "G. angolensis", "G. antiquorum", "G. camelopardalis", "G. giraffa", "G. peralta", "G. reticulata", "G. thornicrofti", and "G. tippelskirchi". A 2016 study also concluded that living giraffes consist of multiple species. The researchers suggested the existence of four species, which have not exchanged genetic information between each other for 1 million to 2 million years. Those four species are the northern giraffe ("G. camelopardalis"), southern giraffe ("G. giraffa"), reticulated giraffe ("G. reticulata"), and Masai giraffe ("G. tippelskirchi"). Since then, a response to this publication has been published, highlighting seven problems in data interpretation and concluding that "the conclusions should not be accepted unconditionally". There are an estimated 90,000 individuals of "Giraffa" in the wild, with 1,144 currently in captivity. Seven extinct species of giraffe are also recognised. "G. attica", also extinct, was formerly considered part of "Giraffa" but was reclassified as "Bohlinia attica" in 1929. Fully grown giraffes stand tall, with males taller than females. The tallest recorded male was and the tallest recorded female was tall. The average weight is for an adult male and for an adult female with maximum weights of and having been recorded for males and females, respectively. Despite its long neck and legs, the giraffe's body is relatively short. Located at both sides of the head, the giraffe's large, bulging eyes give it good all-round vision from its great height. Giraffes see in colour and their senses of hearing and smell are also sharp. The animal can close its muscular nostrils to protect against sandstorms and ants. The giraffe's prehensile tongue is about long. It is purplish-black in colour, perhaps to protect against sunburn, and is useful for grasping foliage, as well as for grooming and cleaning the animal's nose. The upper lip of the giraffe is also prehensile and useful when foraging and is covered in hair to protect against thorns. The tongue and the inside of the mouth are covered in papillae. The coat has dark blotches or patches (which can be orange, chestnut, brown, or nearly black in colour) separated by light hair (usually white or cream in colour). Male giraffes become darker as they age. The coat pattern has been claimed to serve as camouflage in the light and shade patterns of savannah woodlands. While adult giraffes standing among trees and bushes are hard to see at even a few metres' distance, they actively move into the open to gain the best view of an approaching predator, obviating any benefit that camouflage might bring. Instead, the adults rely on their size and ability to defend themselves. However, camouflage appears to be important for calves, which spend a large part of the day in hiding, away from their mothers; further, over half of all calves die within a year, so predation is certainly important.
It appears, therefore, that the spotted coat of the giraffe functions as camouflage for the young, while adults simply inherit this coloration as a by-product. The skin underneath the dark areas may serve as windows for thermoregulation, being sites for complex blood vessel systems and large sweat glands. Each individual giraffe has a unique coat pattern. The skin of a giraffe is mostly gray. Its thickness allows the animal to run through thorn bushes without being punctured. The fur may serve as a chemical defence, as its parasite repellents give the animal a characteristic scent. At least 11 main aromatic chemicals are in the fur, although indole and 3-methylindole are responsible for most of the smell. Because the males have a stronger odour than the females, the odour may also have sexual function. Along the animal's neck is a mane made of short, erect hairs. The one-metre (3.3-ft) tail ends in a long, dark tuft of hair and is used as a defense against insects. Both sexes have prominent horn-like structures called ossicones, which are formed from ossified cartilage, covered in skin and fused to the skull at the parietal bones. Being vascularized, the ossicones may have a role in thermoregulation, and are also used in combat between males. Appearance is a reliable guide to the sex or age of a giraffe: the ossicones of females and young are thin and display tufts of hair on top, whereas those of adult males end in knobs and tend to be bald on top. Also, a median lump, which is more prominent in males, emerges at the front of the skull. Males develop calcium deposits that form bumps on their skulls as they age. A giraffe's skull is lightened by multiple sinuses. However, as males age, their skulls become heavier and more club-like, helping them become more dominant in combat. The upper jaw has a grooved palate and lacks front teeth. The giraffe's molars have a rough surface. The front and back legs of a giraffe are about the same length. The radius and ulna of the front legs are articulated by the carpus, which, while structurally equivalent to the human wrist, functions as a knee. It appears that a suspensory ligament allows the lanky legs to support the animal's great weight. The foot of the giraffe reaches a diameter of , and the hoof is high in males and in females. The rear of each hoof is low and the fetlock is close to the ground, allowing the foot to provide additional support to the animal's weight. Giraffes lack dewclaws and interdigital glands. The giraffe's pelvis, though relatively short, has an ilium that is outspread at the upper ends. A giraffe has only two gaits: walking and galloping. Walking is done by moving the legs on one side of the body at the same time, then doing the same on the other side. When galloping, the hind legs move around the front legs before the latter move forward, and the tail will curl up. The animal relies on the forward and backward motions of its head and neck to maintain balance and the counter momentum while galloping. The giraffe can reach a sprint speed of up to , and can sustain for several kilometres. A giraffe rests by lying with its body on top of its folded legs. To lie down, the animal kneels on its front legs and then lowers the rest of its body. To get back up, it first gets on its knees and spreads its hind legs to raise its hindquarters. It then straightens its front legs. With each step, the animal swings its head. In captivity, the giraffe sleeps intermittently around 4.6 hours per day, mostly at night. 
It usually sleeps lying down; however, standing sleeps have been recorded, particularly in older individuals. Intermittent short "deep sleep" phases while lying are characterised by the giraffe bending its neck backwards and resting its head on the hip or thigh, a position believed to indicate paradoxical sleep. If the giraffe wants to bend down to drink, it either spreads its front legs or bends its knees. Giraffes would probably not be competent swimmers as their long legs would be highly cumbersome in the water, although they could possibly float. When swimming, the thorax would be weighed down by the front legs, making it difficult for the animal to move its neck and legs in harmony or keep its head above the surface. The giraffe has an extremely elongated neck, which can be up to in length, accounting for much of the animal's vertical height. The long neck results from a disproportionate lengthening of the cervical vertebrae, not from the addition of more vertebrae. Each cervical vertebra is over long. They comprise 52–54 per cent of the length of the giraffe's vertebral column, compared with the 27–33 per cent typical of similar large ungulates, including the giraffe's closest living relative, the okapi. This elongation largely takes place after birth, perhaps because giraffe mothers would have a difficult time giving birth to young with the same neck proportions as adults. The giraffe's head and neck are held up by large muscles and a strengthened nuchal ligament, which are anchored by long dorsal spines on the anterior thoracic vertebrae, giving the animal a hump. The giraffe's neck vertebrae have ball and socket joints. In particular, the atlas–axis joint (C1 and C2) allows the animal to tilt its head vertically and reach more branches with the tongue. The point of articulation between the cervical and thoracic vertebrae of giraffes is shifted to lie between the first and second thoracic vertebrae (T1 and T2), unlike most other ruminants where the articulation is between the seventh cervical vertebra (C7) and T1. This allows C7 to contribute directly to increased neck length and has given rise to the suggestion that T1 is actually C8, and that giraffes have added an extra cervical vertebra. However, this proposition is not generally accepted, as T1 has other morphological features, such as an articulating rib, deemed diagnostic of thoracic vertebrae, and because exceptions to the mammalian limit of seven cervical vertebrae are generally characterised by increased neurological anomalies and maladies. There are several hypotheses regarding the evolutionary origin and maintenance of elongation in giraffe necks. The "competing browsers hypothesis" was originally suggested by Charles Darwin and challenged only recently. It suggests that competitive pressure from smaller browsers, such as kudu, steenbok and impala, encouraged the elongation of the neck, as it enabled giraffes to reach food that competitors could not. This advantage is real, as giraffes can and do feed up to high, while even quite large competitors, such as kudu, can feed up to only about high. There is also research suggesting that browsing competition is intense at lower levels, and giraffes feed more efficiently (gaining more leaf biomass with each mouthful) high in the canopy.
However, scientists disagree about just how much time giraffes spend feeding at levels beyond the reach of other browsers, and a 2010 study found that adult giraffes with longer necks actually suffered higher mortality rates under drought conditions than their shorter-necked counterparts. This study suggests that maintaining a longer neck requires more nutrients, which puts longer-necked giraffes at risk during a food shortage. Another theory, the sexual selection hypothesis, proposes that the long necks evolved as a secondary sexual characteristic, giving males an advantage in "necking" contests (see below) to establish dominance and obtain access to sexually receptive females. In support of this theory, necks are longer and heavier for males than females of the same age, and the former do not employ other forms of combat. However, one objection is that it fails to explain why female giraffes also have long necks. It has also been proposed that the neck serves to give the animal greater vigilance. In mammals, the left recurrent laryngeal nerve is longer than the right; in the giraffe it is over longer. These nerves are longer in the giraffe than in any other living animal; the left nerve is over long. Each nerve cell in this path begins in the brainstem and passes down the neck along the vagus nerve, then branches off into the recurrent laryngeal nerve which passes back up the neck to the larynx. Thus, these nerve cells have a length of nearly in the largest giraffes. The structure of a giraffe's brain resembles that of domestic cattle. It is kept cool by evaporative heat loss in the nasal passages. The shape of the skeleton gives the giraffe a small lung volume relative to its mass. Its long neck gives it a large amount of dead space, in spite of its narrow windpipe. These factors increase the resistance to airflow. Nevertheless, the animal can still supply enough oxygen to its tissues and it can increase its respiratory rate and oxygen diffusion when running. The circulatory system of the giraffe has several adaptations for its great height. Its heart, which can weigh more than and measures about long, must generate approximately double the blood pressure required for a human to maintain blood flow to the brain. As such, the wall of the heart can be as thick as . Giraffes have unusually high heart rates for their size, at 150 beats per minute. When the animal lowers its head the blood rushes down fairly unopposed and a rete mirabile in the upper neck, with its large cross sectional area, prevents excess blood flow to the brain. When it raises its head again, the blood vessels constrict and direct blood into the brain so the animal does not faint. The jugular veins contain several (most commonly seven) valves to prevent blood flowing back into the head from the inferior vena cava and right atrium while the head is lowered. Conversely, the blood vessels in the lower legs are under great pressure because of the weight of fluid pressing down on them. To solve this problem, the skin of the lower legs is thick and tight, preventing too much blood from pouring into them. Giraffes have oesophageal muscles that are unusually strong to allow regurgitation of food from the stomach up the neck and into the mouth for rumination. They have four-chambered stomachs, as in all ruminants, and the first chamber has adapted to their specialised diet. The intestines of an adult giraffe measure more than in length and have a relatively small ratio of small to large intestine.
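The claim that the giraffe's heart must generate roughly double the blood pressure of a human can be illustrated with a simple hydrostatic estimate: raising blood an extra couple of metres from heart to brain adds a pressure of about ρgh. The sketch below is only an order-of-magnitude illustration; the heart-to-head height of about 2 m and the blood density are assumed round values, not figures from the text.

# Rough hydrostatic estimate of the extra pressure a giraffe heart must supply
# to lift blood from the heart up to the brain (assumed values, not from the text).
RHO_BLOOD = 1050         # kg/m^3, approximate density of blood
G = 9.81                 # m/s^2, gravitational acceleration
HEAD_ABOVE_HEART = 2.0   # m, assumed vertical distance from heart to brain

extra_pressure_pa = RHO_BLOOD * G * HEAD_ABOVE_HEART   # pressure in pascals
extra_pressure_mmhg = extra_pressure_pa / 133.322      # 1 mmHg is about 133.322 Pa

print(f"Extra pressure needed: about {extra_pressure_mmhg:.0f} mmHg")
# About 155 mmHg on top of the pressure required at the brain itself, which is
# broadly consistent with giraffe arterial pressure near the heart being roughly
# twice the typical human value.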
The liver of the giraffe is small and compact. A gallbladder is generally present during fetal life, but it may disappear before birth. Giraffes usually inhabit savannahs and open woodlands. They prefer Acacieae, "Commiphora", "Combretum" and open "Terminalia" woodlands over denser environments like "Brachystegia" woodlands. The Angolan giraffe can be found in desert environments. Giraffes browse on the twigs of trees, preferring trees of the subfamily Acacieae and the genera "Commiphora" and "Terminalia", which are important sources of calcium and protein to sustain the giraffe's growth rate. They also feed on shrubs, grass and fruit. A giraffe eats around of foliage daily. When stressed, giraffes may chew the bark off branches. Although herbivorous, the giraffe has been known to visit carcasses and lick dried meat off bones. During the wet season, food is abundant and giraffes are more spread out, while during the dry season, they gather around the remaining evergreen trees and bushes. Mothers tend to feed in open areas, presumably to make it easier to detect predators, although this may reduce their feeding efficiency. As a ruminant, the giraffe first chews its food, then swallows it for processing and then visibly passes the half-digested cud up the neck and back into the mouth to chew again. It is common for a giraffe to salivate while feeding. The giraffe requires less food than many other herbivores because the foliage it eats has more concentrated nutrients and it has a more efficient digestive system. The animal's faeces come in the form of small pellets. When it has access to water, a giraffe drinks at intervals no longer than three days. Giraffes have a great effect on the trees that they feed on, delaying the growth of young trees for some years and giving "waistlines" to trees that are too tall. Feeding is at its highest during the first and last hours of daytime. Between these hours, giraffes mostly stand and ruminate. Rumination is the dominant activity during the night, when it is mostly done lying down. Giraffes are usually found in groups. Traditionally, the composition of these groups has been described as open and ever-changing. Giraffes were thought to have few social bonds and for research purposes, a "group" has been defined as "a collection of individuals that are less than a kilometre apart and moving in the same general direction." More recent studies have found that giraffes do have long-term social associations and may form groups or pairs based on kinship, sex or other factors. These groups may regularly associate with one another in larger communities or sub-communities within a fission–fusion society. The number of giraffes in a group can range up to 44 individuals. Giraffe groups tend to be sex-segregated although mixed-sex groups made of adult females and young males are known to occur. Particularly stable giraffe groups are those made of mothers and their young, which can last weeks or months. Social cohesion in these groups is maintained by the bonds formed between calves. Female association appears to be based on space-use and individuals may be matrilineally related. In general, females are more selective than males in who they associate with in regards to individuals of the same sex. Young males also form groups and will engage in playfights. However, as they get older males become more solitary but may also associate in pairs or with female groups. Giraffes are not territorial, but they have home ranges. 
Male giraffes occasionally wander far from areas that they normally frequent. Although generally quiet and non-vocal, giraffes have been heard to communicate using various sounds. During courtship, males emit loud coughs. Females call their young by bellowing. Calves will emit snorts, bleats, mooing and mewing sounds. Giraffes also snore, hiss, moan, grunt and make flute-like sounds, and possibly communicate over long distances using infrasound, though this is disputed. During nighttime, giraffes appear to hum to each other above the infrasound range for purposes which are unclear. Reproduction in giraffes is broadly polygamous: a few older males mate with the fertile females. Male giraffes assess female fertility by tasting the female's urine to detect oestrus, in a multi-step process known as the flehmen response. Males prefer young adult females over juveniles and older adults. Once an oestrous female is detected, the male will attempt to court her. When courting, dominant males will keep subordinate ones at bay. A courting male may lick a female's tail, rest his head and neck on her body or nudge her with his horns. During copulation, the male stands on his hind legs with his head held up and his front legs resting on the female's sides. Giraffe gestation lasts 400–460 days, after which a single calf is normally born, although twins occur on rare occasions. The mother gives birth standing up. The calf emerges head and front legs first, having broken through the fetal membranes, and falls to the ground, severing the umbilical cord. The mother then grooms the newborn and helps it stand up. A newborn giraffe is tall. Within a few hours of birth, the calf can run around and is almost indistinguishable from a one-week-old. However, for the first 1–3 weeks, it spends most of its time hiding, its coat pattern providing camouflage. The ossicones, which have lain flat while it was in the womb, become erect within a few days. Mothers with calves will gather in nursery herds, moving or browsing together. Mothers in such a group may sometimes leave their calves with one female while they forage and drink elsewhere. This is known as a "calving pool". Adult males play almost no role in raising the young, although they appear to have friendly interactions. Calves are at risk of predation, and a mother giraffe will stand over her calf and kick at an approaching predator. Females watching calving pools will only alert their own young if they detect a disturbance, although the others will take notice and follow. The length of time that offspring stay with their mother varies, though it can last until the female's next calving. Likewise, calves may suckle for only a month or as long as a year. Females become sexually mature when they are four years old, while males become mature at four or five years. Spermatogenesis in male giraffes begins at three to four years of age. Males must wait until they are at least seven years old to gain the opportunity to mate. Male giraffes use their necks as weapons in combat, a behaviour known as "necking". Necking is used to establish dominance, and males that win necking bouts have greater reproductive success. This behaviour occurs at low or high intensity. In low-intensity necking, the combatants rub and lean against each other. The male that can hold itself more erect wins the bout. In high-intensity necking, the combatants will spread their front legs and swing their necks at each other, attempting to land blows with their ossicones. 
The contestants will try to dodge each other's blows and then get ready to counter. The power of a blow depends on the weight of the skull and the arc of the swing. A necking duel can last more than half an hour, depending on how well matched the combatants are. Although most fights do not lead to serious injury, there have been records of broken jaws, broken necks, and even deaths. After a duel, it is common for two male giraffes to caress and court each other. Such interactions between males have been found to be more frequent than heterosexual coupling. In one study, up to 94 per cent of observed mounting incidents took place between males. The proportion of same-sex activities varied from 30 to 75 per cent. Only one per cent of same-sex mounting incidents occurred between females. Giraffes have high adult survival probability, and an unusually long lifespan compared to other ruminants, up to 25 years in the wild. Because of their size, eyesight and powerful kicks, adult giraffes are usually not subject to predation, aside from lions. Giraffes are the most common prey for the big cats in Kruger National Park. Nile crocodiles can also be a threat to giraffes when they bend down to drink. Calves are much more vulnerable than adults, and are additionally preyed on by leopards, spotted hyenas and wild dogs. A quarter to a half of giraffe calves reach adulthood. Calf survival varies according to the season of birth, with calves born during the dry season having higher survival rates. The local, seasonal presence of large herds of migratory wildebeests and zebras reduces predation pressure on giraffe calves and increases their survival probability. Some parasites feed on giraffes. They are often hosts for ticks, especially in the area around the genitals, which has thinner skin than other areas. Tick species that commonly feed on giraffes are those of the genera "Hyalomma", "Amblyomma" and "Rhipicephalus". Giraffes may rely on red-billed and yellow-billed oxpeckers to clean them of ticks and alert them to danger. Giraffes host numerous species of internal parasite and are susceptible to various diseases. They were victims of the (now eradicated) viral illness rinderpest. Giraffes can also suffer from a skin disorder, which comes in the form of wrinkles, lesions or raw fissures. In Tanzania, it appears to be caused by a nematode, and may be further affected by secondary infections. As many as 79% of giraffes show signs of the disease in Ruaha National Park, but it is less prevalent in areas with fertile soils. Humans have interacted with giraffes for millennia. The San people of southern Africa have medicine dances named after some animals; the giraffe dance is performed to treat head ailments. How the giraffe got its height has been the subject of various African folktales, including one from eastern Africa which explains that the giraffe grew tall from eating too many magic herbs. Giraffes were depicted in art throughout the African continent, including that of the Kiffians, Egyptians and Meroë Nubians. The Kiffians were responsible for a life-size rock engraving of two giraffes that has been called the "world's largest rock art petroglyph". The Egyptians gave the giraffe its own hieroglyph, named 'sr' in Old Egyptian and 'mmy' in later periods. They also kept giraffes as pets and shipped them around the Mediterranean. The giraffe was also known to the Greeks and Romans, who believed that it was an unnatural hybrid of a camel and a leopard and called it "camelopardalis". 
The giraffe was among the many animals collected and displayed by the Romans. The first one in Rome was brought in by Julius Caesar in 46 BC and exhibited to the public. With the fall of the Western Roman Empire, the housing of giraffes in Europe declined. During the Middle Ages, giraffes were known to Europeans through contact with the Arabs, who revered the giraffe for its peculiar appearance. Individual captive giraffes were given celebrity status throughout history. In 1414, a giraffe was shipped from Malindi to Bengal. It was then taken to China by explorer Zheng He and placed in a Ming dynasty zoo. The animal was a source of fascination for the Chinese people, who associated it with the mythical Qilin. The Medici giraffe was a giraffe presented to Lorenzo de' Medici in 1486. It caused a great stir on its arrival in Florence. Zarafa, another famous giraffe, was brought from Egypt to Paris in the early 19th century as a gift from Muhammad Ali of Egypt to Charles X of France. A sensation, the giraffe was the subject of numerous memorabilia or "giraffanalia". Giraffes continue to have a presence in modern culture. Salvador Dalí depicted them with burning manes in some of his surrealist paintings. Dalí considered the giraffe to be a symbol of masculinity, and a flaming giraffe was meant to be a "masculine cosmic apocalyptic monster". Several children's books feature the giraffe, including David A. Ufer's "The Giraffe Who Was Afraid of Heights", Giles Andreae's "Giraffes Can't Dance" and Roald Dahl's "The Giraffe and the Pelly and Me". Giraffes have appeared in animated films, as minor characters in Disney's "The Lion King" and "Dumbo", and in more prominent roles in "The Wild" and in the "Madagascar" films. Sophie the Giraffe has been a popular teether since 1961. Another famous fictional giraffe is the Toys "R" Us mascot Geoffrey the Giraffe. The giraffe has also been used for some scientific experiments and discoveries. Scientists have looked at the properties of giraffe skin when developing suits for astronauts and fighter pilots because the people in these professions are in danger of passing out if blood rushes to their legs. Computer scientists have modeled the coat patterns of several subspecies using reaction–diffusion mechanisms. The constellation of Camelopardalis, introduced in the seventeenth century, depicts a giraffe. The Tswana people of Botswana traditionally see the constellation Crux as two giraffes – Acrux and Mimosa forming a male, and Gacrux and Delta Crucis forming the female. Giraffes were probably common targets for hunters throughout Africa. Different parts of their bodies were used for different purposes. Their meat was used for food. The tail hairs served as flyswatters, bracelets, necklaces and thread. Shields, sandals and drums were made using the skin, and the strings of musical instruments were from the tendons. The smoke from burning giraffe skins was used by the medicine men of Buganda to treat nose bleeds. The Humr people of Sudan consume the drink Umm Nyolokh, which is created from the liver and marrow of giraffes. Umm Nyolokh often contains DMT and other psychoactive substances from plants the giraffes eat, such as Acacia, and is known to cause hallucinations of giraffes, which the Humr believe to be the giraffes' ghosts. In the 19th century, European explorers began to hunt them for sport. Habitat destruction has hurt the giraffe, too: in the Sahel, the need for firewood and grazing room for livestock has led to deforestation. 
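The reaction–diffusion modelling mentioned above can be sketched in a few lines of code. The following is a minimal, generic Gray–Scott reaction–diffusion simulation, not the specific models published for giraffe coat patterns; the grid size, diffusion rates and feed/kill parameters are illustrative assumptions chosen only because they tend to produce patchy, giraffe-like markings once the result is thresholded into dark and light cells.

import numpy as np

# Gray-Scott reaction-diffusion: two chemical fields U and V on a periodic grid.
# All parameter values below are illustrative; different (feed, kill) pairs give
# spots, stripes, labyrinths or blotchy, coat-like patches.
N = 200                      # grid size (N x N cells)
Du, Dv = 0.16, 0.08          # diffusion rates of U and V
feed, kill = 0.035, 0.060    # feed/kill rates chosen for patchy patterns
steps = 10000

U = np.ones((N, N))
V = np.zeros((N, N))

# Seed a few random square patches of V to break the initial symmetry.
rng = np.random.default_rng(0)
for _ in range(20):
    r, c = rng.integers(0, N - 10, size=2)
    U[r:r+10, c:c+10] = 0.50
    V[r:r+10, c:c+10] = 0.25

def laplacian(Z):
    # Five-point Laplacian with periodic (wrap-around) boundaries.
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4.0 * Z)

for _ in range(steps):
    UVV = U * V * V
    U += Du * laplacian(U) - UVV + feed * (1.0 - U)
    V += Dv * laplacian(V) + UVV - (feed + kill) * V

# Thresholding V yields a two-tone "coat": dark patches on a light background.
coat = (V > 0.2).astype(np.uint8)
print(coat.sum(), "dark cells out of", N * N)

Varying the feed and kill parameters shifts the output between spots, stripes and larger blotches, which is the basic idea behind fitting such models to the differing coat patterns of the subspecies.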
Normally, giraffes can coexist with livestock, since they do not directly compete with them. In 2017, severe droughts in northern Kenya led to increased tensions over land and to the killing of wildlife by herders, with giraffe populations hit particularly hard. Aerial survey is the most common method of monitoring giraffe population trends in the vast roadless tracts of African landscapes, but aerial methods are known to undercount giraffes. Ground-based survey methods are more accurate and should be used in conjunction with aerial surveys to make accurate estimates of population sizes and trends. In 2010, giraffes were assessed as Least Concern from a conservation perspective by the International Union for Conservation of Nature (IUCN), but the 2016 assessment categorized giraffes as Vulnerable. Giraffes have been extirpated from much of their historic range, including Eritrea, Guinea, Mauritania and Senegal. They may also have disappeared from Angola, Mali, and Nigeria, but have been introduced to Rwanda and Swaziland. Two subspecies, the West African giraffe and the Rothschild giraffe, have been classified as endangered, as wild populations of each of them number in the hundreds. In 1997, Jonathan Kingdon suggested that the Nubian giraffe was the most threatened of all giraffes; it may number fewer than 250, although this estimate is uncertain. Private game reserves have contributed to the preservation of giraffe populations in southern Africa. Giraffe Manor is a popular hotel in Nairobi that also serves as a sanctuary for Rothschild's giraffes. The giraffe is a protected species in most of its range. It is the national animal of Tanzania, and is protected by law. Unauthorised killing can result in imprisonment. The UN-backed Convention on Migratory Species selected giraffes for protection in 2017. In 1999, it was estimated that over 140,000 giraffes existed in the wild; estimates as of 2016 indicate that there are approximately 97,500 members of "Giraffa" in the wild, down from 155,000 in 1985, with around 1,144 in captivity. 
George Harrison George Harrison (25 February 1943 – 29 November 2001) was an English guitarist, singer-songwriter, and producer who achieved international fame as the lead guitarist of the Beatles. Often referred to as "the quiet Beatle", Harrison embraced Indian culture and helped broaden the scope of popular music through his incorporation of Indian instrumentation and Hindu-aligned spirituality in the Beatles' work. Although the majority of the band's songs were written by John Lennon and Paul McCartney, most Beatles albums from 1965 onwards contained at least two Harrison compositions. His songs for the group included "Taxman", "Within You Without You", "While My Guitar Gently Weeps", "Here Comes the Sun" and "Something", the last of which became the Beatles' second-most covered song. Harrison's earliest musical influences included George Formby and Django Reinhardt; Carl Perkins, Chet Atkins and Chuck Berry were subsequent influences. By 1965, he had begun to lead the Beatles into folk rock through his interest in the Byrds and Bob Dylan, and towards Indian classical music through his use of the sitar on "Norwegian Wood (This Bird Has Flown)". Having initiated the band's embracing of Transcendental Meditation in 1967, he subsequently developed an association with the Hare Krishna movement. 
After the band's break-up in 1970, Harrison released the triple album "All Things Must Pass", a critically acclaimed work that produced his most successful hit single, "My Sweet Lord", and introduced his signature sound as a solo artist, the slide guitar. He also organised the 1971 Concert for Bangladesh with Indian musician Ravi Shankar, a precursor for later benefit concerts such as Live Aid. In his role as a music and film producer, Harrison produced acts signed to the Beatles' Apple record label before founding Dark Horse Records in 1974 and co-founding HandMade Films in 1978. Harrison released several best-selling singles and albums as a solo performer. In 1988, he co-founded the platinum-selling supergroup the Traveling Wilburys. A prolific recording artist, he was featured as a guest guitarist on tracks by Badfinger, Ronnie Wood and Billy Preston, and collaborated on songs and music with Dylan, Eric Clapton, Ringo Starr and Tom Petty, among others. "Rolling Stone" magazine ranked him number 11 in their list of the "100 Greatest Guitarists of All Time". He is a two-time Rock and Roll Hall of Fame inductee – as a member of the Beatles in 1988, and posthumously for his solo career in 2004. Harrison's first marriage, to model Pattie Boyd in 1966, ended in divorce in 1977. The following year he married Olivia Arias, with whom he had a son, Dhani. Harrison died in 2001, aged 58, from lung cancer that was attributed to years of cigarette smoking. His remains were cremated and the ashes were scattered according to Hindu tradition in a private ceremony in the Ganges and Yamuna rivers in India. He left an estate of almost £100 million. Harrison was born at 12 Arnold Grove in Liverpool, England on 25 February 1943. He was the youngest of four children of Harold Hargreaves (or Hargrove) Harrison (1909–1978) and Louise (née French; 1911–1970). Harold was a bus conductor who had worked as a ship's steward on the White Star Line, and Louise was a shop assistant of Irish Catholic descent. He had one sister, Louise (born 16 August 1931), and two brothers, Harold (born 1934) and Peter (20 July 1940 – 1 June 2007). According to Boyd, Harrison's mother was particularly supportive: "All she wanted for her children is that they should be happy, and she recognized that nothing made George quite as happy as making music." Louise was an enthusiastic music fan, and she was known among friends for her loud singing voice, which at times startled visitors by rattling the Harrisons' windows. When Louise was pregnant with George, she often listened to the weekly broadcast "Radio India". Harrison's biographer Joshua Greene wrote, "Every Sunday she tuned in to mystical sounds evoked by sitars and tablas, hoping that the exotic music would bring peace and calm to the baby in the womb." Harrison lived the first four years of his life at 12 Arnold Grove, Wavertree, Liverpool, a terraced house on a dead end street. The home had an outdoor toilet and its only heat came from a single coal fire. In 1949, the family was offered a council house and moved to 25 Upton Green, Speke. In 1948, at the age of five, Harrison enrolled at Dovedale Primary School. He passed the eleven-plus exam and attended Liverpool Institute High School for Boys from 1954 to 1959. Though the institute did offer a music course, Harrison was disappointed with the absence of guitars, and felt the school "moulded [students] into being frightened". 
Harrison's earliest musical influences included George Formby, Cab Calloway, Django Reinhardt and Hoagy Carmichael; by the 1950s, Carl Perkins and Lonnie Donegan were significant influences. In early 1956 he had an epiphany: while riding his bicycle, he heard Elvis Presley's "Heartbreak Hotel" playing from a nearby house, and the song piqued his interest in rock and roll. He often sat at the back of the class drawing guitars in his schoolbooks, and later commented, "I was totally into guitars." Harrison cited Slim Whitman as another early influence: "The first person I ever saw playing a guitar was Slim Whitman, either a photo of him in a magazine or live on television. Guitars were definitely coming in." Although Harold Harrison was apprehensive about his son's interest in pursuing a music career, in 1956 he bought George a Dutch Egmond flat top acoustic guitar, which, according to Harold, cost £3.10. One of his father's friends taught Harrison how to play "Whispering", "Sweet Sue" and "Dinah", and, inspired by Donegan's music, Harrison formed a skiffle group called the Rebels with his brother Peter and a friend, Arthur Kelly. On the bus to school, Harrison met Paul McCartney, who also attended the Liverpool Institute, and the pair bonded over their shared love of music. Harrison became part of the Beatles with McCartney and John Lennon when the band was still a skiffle group called the Quarrymen. McCartney told Lennon about his friend Harrison, who could play "Raunchy" on his guitar. In March 1958, Harrison auditioned for the Quarrymen at Rory Storm's Morgue Skiffle Club, playing Arthur "Guitar Boogie" Smith's "Guitar Boogie Shuffle", but Lennon felt that Harrison, having just turned 15, was too young to join the band. During a second meeting, arranged by McCartney, he performed the lead guitar part for the instrumental "Raunchy" on the upper deck of a Liverpool bus. He began socialising with the group, filling in on guitar as needed, and became accepted as a member. Although his father wanted him to continue his education, Harrison left school at 16 and worked for several months as an apprentice electrician at Blacklers, a local department store. During the group's first tour of Scotland, in 1960, Harrison used the pseudonym "Carl Harrison", in reference to Carl Perkins. In 1960, promoter Allan Williams arranged for the band, now calling themselves the Beatles, to play at the Indra and Kaiserkeller clubs in Hamburg, both owned by Bruno Koschmider. The impromptu musical education Harrison received while playing long hours with the Beatles, as well as the guitar lessons he took from Tony Sheridan while they briefly served as his backing group, laid the foundations of his sound and of his role within the group; he was later known as "the quiet Beatle". The band's first residency in Hamburg ended prematurely when Harrison was deported for being too young to work in nightclubs. When Brian Epstein became their manager in December 1961, he polished their image and secured them a recording contract with EMI. The group's first single, "Love Me Do", peaked at number seventeen on the "Record Retailer" chart, and by the time their debut album, "Please Please Me", was released in early 1963, Beatlemania had arrived. Their second album, "With the Beatles" (1963), included "Don't Bother Me", Harrison's first solo writing credit. 
By 1965's "Rubber Soul", Harrison had begun to lead the other Beatles into folk rock through his interest in the Byrds and Bob Dylan, and towards Indian classical music through his use of the sitar on "Norwegian Wood (This Bird Has Flown)". He later called "Rubber Soul" his "favourite [Beatles] album". "Revolver" (1966) included three of his compositions: "Taxman", "Love You To" and "I Want to Tell You". His introduction of the drone-like tambura part on Lennon's "Tomorrow Never Knows" exemplified the band's ongoing exploration of non-Western instruments. The tabla-driven "Love You To" was the Beatles' first genuine foray into Indian music. According to the ethnomusicologist David Reck, the song set a precedent in popular music as an example of Asian culture being represented by Westerners respectfully and without parody. Harrison continued to develop his interest in non-Western instrumentation, playing swarmandal on "Strawberry Fields Forever". By late 1966, Harrison's interests had moved away from the Beatles. This was reflected in his choice of Eastern gurus and religious leaders for inclusion on the album cover for "Sgt. Pepper's Lonely Hearts Club Band" in 1967. His sole composition on the album was the Indian-inspired "Within You Without You", to which no other Beatle contributed. He played sitar and tambura on the track, backed by musicians from the London Asian Music Circle on dilruba, swarmandal and tabla. He later commented on the "Sgt. Pepper" album: "It was a millstone and a milestone in the music industry ... There's about half the songs I like and the other half I can't stand." In 1968 his song "The Inner Light" was recorded at EMI's studio in Bombay, featuring a group of local musicians playing traditional Indian instruments. Released as the B-side to McCartney's "Lady Madonna", it was the first Harrison composition to appear on a Beatles single. Derived from a quotation from the "Tao Te Ching", the song's lyric reflected Harrison's deepening interest in Hinduism and meditation, while musically it embraced the Karnatak discipline of Indian music, rather than the Hindustani style of his previous work in the genre. During the recording of "The Beatles" that same year, tensions within the group ran high, and drummer Ringo Starr quit briefly. Harrison's songwriting contributions to the double album included "While My Guitar Gently Weeps", which featured Eric Clapton on lead guitar, "Piggies", "Long, Long, Long" and "Savoy Truffle". Dylan and the Band were a major musical influence on Harrison at the end of his career with the Beatles. While on a visit to Woodstock in late 1968, he established a friendship with Dylan and found himself drawn to the Band's sense of communal music-making and to the creative equality among the band members, which contrasted with Lennon and McCartney's domination of the Beatles' songwriting and creative direction. This coincided with a prolific period in his songwriting and a growing desire to assert his independence from the Beatles, tensions among whom surfaced again in January 1969, during the filming of rehearsals at Twickenham Studios for what became the album "Let It Be". Frustrated by the poor working conditions in the cold and sterile film studio, as well as by what he perceived as Lennon's creative disengagement from the Beatles and a domineering attitude from McCartney, Harrison quit the group on 10 January, but agreed to return twelve days later. 
Relations among the Beatles were more cordial, though still strained, during sessions for their final recorded album, "Abbey Road". The LP included two of Harrison's most respected Beatles compositions: "Here Comes the Sun" and "Something". The latter also became Harrison's first A-side when issued on a double A-side single with "Come Together"; the song was number one in Canada, Australia, New Zealand and West Germany, and the combined sides topped the "Billboard" Hot 100 chart in the United States. In 1969 Frank Sinatra recorded "Something", and later dubbed it "the greatest love song of the past fifty years". Lennon considered it the best song on "Abbey Road", and it became the Beatles' second most covered song after "Yesterday". Author Peter Lavezzoli wrote: "Harrison would finally achieve equal songwriting status ... with his two classic contributions to the final Beatles' LP". In May 1970 Harrison's "For You Blue" was coupled on a US single with McCartney's "The Long and Winding Road" and became Harrison's second chart-topper when the sides were listed together at number one on the Hot 100. His increased productivity and the Beatles' reluctance to include his songs on their albums meant that by the time of their break-up he had amassed a stockpile of unreleased compositions. While Harrison grew as a songwriter, his compositional presence on Beatles albums remained limited to two or three songs, increasing his frustration, and significantly contributing to the band's break-up. Harrison's last recording session with the Beatles was on 4 January 1970, when he, McCartney and Starr recorded the Harrison song "I Me Mine". Before the Beatles' break-up, Harrison had already recorded and released two solo albums: "Wonderwall Music" and "Electronic Sound", both of which contain mainly instrumental compositions. "Wonderwall Music", a soundtrack to the 1968 film "Wonderwall", blends Indian and Western instrumentation, while "Electronic Sound" is an experimental album that prominently features a Moog synthesizer. Released in November 1968, "Wonderwall Music" was the first solo album by a Beatle and the first LP released by Apple Records. Indian musicians Aashish Khan and Shivkumar Sharma performed on the album, which contains the experimental sound collage "Dream Scene", recorded several months before Lennon's "Revolution 9". In December 1969, Harrison participated in a brief tour of Europe with the American group Delaney & Bonnie and Friends. During the tour that included Clapton, Bobby Whitlock, drummer Jim Gordon and band leaders Delaney and Bonnie Bramlett, Harrison began to write "My Sweet Lord", which became his first single as a solo artist. Delaney Bramlett inspired Harrison to learn slide guitar, significantly influencing his later music. For many years, Harrison was restricted in his songwriting contributions to the Beatles' albums, but he released "All Things Must Pass", a triple album with two discs of his songs and the third of recordings of Harrison jamming with friends. The album was regarded by many as his best work, and it topped the charts on both sides of the Atlantic. The LP produced the number-one hit single "My Sweet Lord" and the top-ten single "What Is Life". The album was co-produced by Phil Spector using his "Wall of Sound" approach, and the musicians included Starr, Clapton, Gary Wright, Preston, Klaus Voormann, the whole of Delaney and Bonnie's Friends band and the Apple group Badfinger. 
On release, "All Things Must Pass" was received with critical acclaim; Ben Gerson of "Rolling Stone" described it as being "of classic Spectorian proportions, Wagnerian, Brucknerian, the music of mountain tops and vast horizons". Author and musicologist Ian Inglis considers the lyrics of the album's title track "a recognition of the impermanence of human existence ... a simple and poignant conclusion" to Harrison's former band. In 1971, Bright Tunes sued Harrison for copyright infringement over "My Sweet Lord", owing to its similarity to the 1963 Chiffons hit "He's So Fine". When the case was heard in the United States district court in 1976, he denied deliberately plagiarising the song, but lost the case, as the judge ruled that he had done so subconsciously. In 2000, Apple Records released a thirtieth anniversary edition of the album, and Harrison actively participated in its promotion. In an interview, he reflected on the work: "It's just something that was like my continuation from the Beatles, really. It was me sort of getting out of the Beatles and just going my own way ... it was a very happy occasion." He commented on the production: "Well, in those days it was like the reverb was kind of used a bit more than what I would do now. In fact, I don't use reverb at all. I can't stand it ... You know, it's hard to go back to anything thirty years later and expect it to be how you would want it now." Harrison responded to a request from Ravi Shankar by organising a charity event, the Concert for Bangladesh, which took place on 1 August 1971. The event drew over 40,000 people to two shows in New York's Madison Square Garden. The goal of the event was to raise money to aid starving refugees during the Bangladesh Liberation War. Shankar opened the show, which featured popular musicians such as Dylan, Clapton, Leon Russell, Badfinger, Preston and Starr. A triple album, "The Concert for Bangladesh", was released by Apple Corps that year, followed by a concert film in 1972. Tax troubles and questionable expenses later tied up many of the proceeds, but Harrison commented: "Mainly the concert was to attract attention to the situation ... The money we raised was secondary, and although we had some money problems ... they still got plenty ... even though it was a drop in the ocean. The main thing was, we spread the word and helped get the war ended." The event has been described as an innovative precursor for the large-scale charity rock shows that followed, including Live Aid. Harrison would never again release an album that matched the critical and commercial achievements of "All Things Must Pass"; however, his next solo album, 1973's "Living in the Material World", held the number one spot on the "Billboard" album chart for five weeks, and the album's single, "Give Me Love (Give Me Peace on Earth)", also reached number one in the US. In the UK, the LP achieved number two, spending 12 weeks on the charts with the single peaking at number 8. The album was lavishly produced and packaged, and its dominant message was Harrison's Hindu beliefs. In Greene's opinion it "contained many of the strongest compositions of his career". Stephen Holden, writing in "Rolling Stone", felt the album was "vastly appealing" and "profoundly seductive", and that it stood "alone as an article of faith, miraculous in its radiance". Other reviewers were less enthusiastic, describing the release as awkward, sanctimonious and overly sentimental, a reaction that left Harrison despondent. 
In November 1974, Harrison became the first ex-Beatle to tour North America when he began his 45-date Dark Horse Tour. Performances by Harrison included an ensemble of musicians such as Preston, Tom Scott, Willie Weeks, Andy Newmark and Jim Horn. The tour also included traditional and contemporary Indian music performed by "Ravi Shankar, Family and Friends". Despite numerous positive reviews, the consensus reaction to the tour was negative, with complaints about the content, structure, and length – the show's duration of two and a half hours was seen as excessive. Some fans found Shankar's significant presence to be a bizarre disappointment, having expected to see only Harrison perform, and many were affronted by what Inglis described as Harrison's "sermonizing". Further, he reworked the lyrics to several Beatles songs, and some of the substitutions were seen as "gratuitously offensive". His laryngitis-affected vocals also disappointed fans and critics, who began calling the tour "dark hoarse". Harrison was so deeply bothered by the caustic backlash that he did not tour again until the 1990s. The author Robert Rodriguez commented: "While the Dark Horse tour might be considered a noble failure, there were a number of fans who were tuned-in to what was being attempted. They went away ecstatic, conscious that they had just witnessed something so uplifting that it could never be repeated." Simon Leng called the tour "groundbreaking" and "revolutionary in its presentation of Indian Music". In December, Harrison released "Dark Horse", which was an album that earned him the least favourable reviews of his career. "Rolling Stone" called it "the chronicle of a performer out of his element, working to a deadline, enfeebling his overtaxed talents by a rush to deliver a new 'LP product', rehearse a band, and assemble a cross-country tour, all within three weeks". The album reached number 4 on the "Billboard" chart and the single "Dark Horse" reached number 15, but they failed to make an impact in the UK. The music critic Mikal Gilmore described "Dark Horse" as "one of Harrison's most fascinating works – a record about change and loss". Harrison's final studio album for EMI and Apple Records was the soul music-inspired "Extra Texture (Read All About It)" (1975). He considered it the least satisfactory of the three he had recorded since "All Things Must Pass". Leng identified "bitterness and dismay" in many of the album's tracks; his long-time friend Klaus Voormann commented: "He wasn't up for it ... It was a terrible time because I think there was a lot of cocaine going around, and that's when I got out of the picture ... I didn't like his frame of mind". He released two singles from the LP: "You", which reached the "Billboard" top 20, and "This Guitar (Can't Keep from Crying)", Apple's final original single release. "Thirty Three & 1/3" (1976), Harrison's first album release on his own Dark Horse Records label, produced the hit singles "This Song" and "Crackerbox Palace", both of which reached the top 25 in the US. The surreal humour of "Crackerbox Palace" reflected Harrison's association with Monty Python's Eric Idle, who directed a comical music video for the song. With an emphasis on melody and musicianship, and a more subtle subject matter than the pious message of his earlier works, "Thirty Three & 1/3" earned Harrison his most favourable critical notices in the US since "All Things Must Pass". 
In 1979, Harrison released "George Harrison", which followed his second marriage and the birth of his son Dhani. The album and the single "Blow Away" both made the "Billboard" top 20. The album marked the beginning of Harrison's gradual retreat from the music business, and the fruition of ideas introduced on "All Things Must Pass". The death of his father in May 1978 and the birth of his son the following August had influenced his decision to devote more time to his family than to his career. Leng described the album as "melodic and lush ... peaceful ... the work of a man who had lived the rock and roll dream twice over and was now embracing domestic as well as spiritual bliss". The murder of John Lennon on 8 December 1980 disturbed Harrison and reinforced his decades-long concern about stalkers. The tragedy was also a deep personal loss, although unlike McCartney and Starr, Harrison and Lennon had little contact in the years before Lennon was killed. Following the murder, Harrison commented: "After all we went through together I had and still have great love and respect for John Lennon. I am shocked and stunned." Harrison modified the lyrics of a song he had written for Starr in order to make the song a tribute to Lennon. "All Those Years Ago", which included vocal contributions from Paul and Linda McCartney, as well as Starr's original drum part, peaked at number two in the US charts. The single was included on the album "Somewhere in England" in 1981. Harrison did not release any new albums for five years after 1982's "Gone Troppo" received little notice from critics or the public. During this period he made several guest appearances, including a 1985 performance at a tribute to Carl Perkins. In March 1986 he made a surprise appearance during the finale of the Birmingham Heart Beat Charity Concert, an event organised to raise money for the Birmingham Children's Hospital. The following year, he appeared at The Prince's Trust concert at London's Wembley Arena, performing "While My Guitar Gently Weeps" and "Here Comes the Sun". In February 1987 he joined Dylan, John Fogerty and Jesse Ed Davis on stage for a two-hour performance with the blues musician Taj Mahal. Harrison recalled: "Bob rang me up and asked if I wanted to come out for the evening and see Taj Mahal ... So we went there and had a few of these Mexican beers – and had a few more ... Bob says, 'Hey, why don't we all get up and play, and you can sing?' But every time I got near the microphone, Dylan comes up and just starts singing this rubbish in my ear, trying to throw me." In November 1987 Harrison released the platinum album "Cloud Nine". Co-produced with Jeff Lynne of Electric Light Orchestra, the LP included Harrison's rendition of James Ray's "Got My Mind Set on You", which went to number one in the US and number two in the UK. The accompanying music video received substantial airplay, and another single, "When We Was Fab", a retrospective of the Beatles' career, earned two MTV Music Video Awards nominations in 1988. Recorded at his estate in Friar Park, the album featured Harrison's slide guitar playing prominently and included several of his long-time musical collaborators, including Clapton, Jim Keltner, and Jim Horn, who recalled Harrison's relaxed and friendly demeanour during the sessions: "George made you feel at home, in his home ... He once had me sit on a toilet and play my soprano sax, and they miked it at the end of the hall for a distant sound. I thought they were kidding ... 
Another time he stopped me in the middle of a sax solo and brought me 3 p.m. tea—again I thought he was kidding." "Cloud Nine" reached number eight and number ten on the US and UK charts respectively, and several tracks from the album achieved placement on "Billboard"s Mainstream Rock chart – "Devil's Radio", "This Is Love" and "Cloud 9". In 1988, Harrison formed the Traveling Wilburys with Jeff Lynne, Roy Orbison, Bob Dylan and Tom Petty. The band had gathered in Dylan's garage to record a song for a Harrison European single release. Harrison's record company decided the track, "Handle with Care", was too good for its original purpose as a B-side and asked for a full album. The LP, "Traveling Wilburys Vol. 1", was released in October 1988 and recorded under pseudonyms as half-brothers, supposed sons of Charles Truscott Wilbury, Sr. Harrison's pseudonym on the first album was "Nelson Wilbury"; he used the name "Spike Wilbury" for their second album. After Orbison's death in December 1988, the group recorded as a four-piece. Their second release, issued in October 1990, was mischievously titled "Traveling Wilburys Vol. 3". According to Lynne, "That was George's idea. He said, 'Let's confuse the buggers.'" It reached number 14 in the UK, where it went platinum, with certified sales of more than 3,000,000 units. The Wilburys never performed live, and the group did not record together again following the release of their second album. In 1989, Harrison and Starr appeared in the music video for Tom Petty's song "I Won't Back Down". Starr is filmed playing the drums, but did not play on the track; Harrison played acoustic guitar and provided backing vocals. In December 1991, Harrison joined Clapton for a tour of Japan. It was Harrison's first since 1974 and no others followed. On 6 April 1992, Harrison held a benefit concert for the Natural Law Party at the Royal Albert Hall, his first London performance since the Beatles' 1969 rooftop concert. In October 1992, he performed at a Bob Dylan tribute concert at Madison Square Garden in New York City, playing alongside Dylan, Clapton, McGuinn, Petty and Neil Young. In 1994 Harrison began a collaboration with McCartney, Starr and producer Jeff Lynne for the "Beatles Anthology" project. This included the recording of two new Beatles songs built around solo vocal and piano tapes recorded by Lennon as well as lengthy interviews about the Beatles' career. Released in December 1995, "Free as a Bird" was the first new Beatles single since 1970. In March 1996, they released a second single, "Real Love". Harrison refused to participate in the completion of a third song. He later commented on the project: "I hope somebody does this to all my crap demos when I'm dead, make them into hit songs." Following the "Anthology" project, Harrison collaborated with Ravi Shankar on the latter's "Chants of India". Harrison's final television appearance was a VH-1 special to promote the album, taped in May 1997. In January 1998, Harrison attended Carl Perkins's funeral in Jackson, Tennessee, performing a brief rendition of Perkins's song "Your True Love". In June 1998, he attended the public memorial service for Linda McCartney, and appeared on Starr's album "Vertical Man", playing guitar on two tracks. Harrison wrote his first song, "Don't Bother Me", while sick in a hotel bed in Bournemouth during August 1963, as "an exercise to see if I "could" write a song", as he remembered. 
"Don't Bother Me" appeared on the band's second album, "With the Beatles", later that year, then on "Meet the Beatles!" in the US in early 1964. In 1965, he contributed "I Need You" and "You Like Me Too Much" to the album "Help!" His songwriting ability improved throughout the Beatles' career, but his material did not earn full respect from Lennon, McCartney and producer George Martin until near the group's break-up. In 1969, McCartney told Lennon: "Until this year, our songs have been better than George's. Now this year his songs are at least as good as ours". Harrison often had difficulty getting the band to record his songs. Most Beatles albums from 1965 onwards contain at least two Harrison compositions; three of his songs appear on "Revolver", "the album on which Harrison came of age as a songwriter", according to Inglis. Of the 1967 Harrison song "Within You Without You", author Gerry Farrell claimed that Harrison had created a "new form", calling the composition "a quintessential fusion of pop and Indian music". Lennon called the song one of Harrison's best: "His mind and his music are clear. There is his innate talent, he brought that sound together." Beatles biographer Bob Spitz described "Something" as a masterpiece, and "an intensely stirring romantic ballad that would challenge 'Yesterday' and 'Michelle' as one of the most recognizable songs they ever produced". According to Kenneth Womack, "Harrison comes into his own on "Abbey Road" ... 'Here Comes the Sun' is matched – indeed, surpassed – only by 'Something', his crowning achievement". Inglis considered "Abbey Road" a turning point in Harrison's development as a songwriter and musician. He described Harrison's contributions to the LP as "exquisite", declaring them equal to any previous Beatles songs. During the album's recording, Harrison asserted more creative control than before, proactively rejecting suggestions for changes to his music or lyrics, particularly from McCartney. His interest in Indian music proved a strong influence on his songwriting and contributed to his innovation within the Beatles. According to Mikal Gilmore of "Rolling Stone", "Harrison's openness to new sounds and textures cleared new paths for his rock and roll compositions. His use of dissonance on ... 'Taxman' and 'I Want to Tell You' was revolutionary in popular music – and perhaps more originally creative than the avant-garde mannerisms that Lennon and McCartney borrowed from the music of Karlheinz Stockhausen, Luciano Berio, Edgard Varèse and Igor Stravinsky ..." In 1997, Gerry Farrell commented: "It is a mark of Harrison's sincere involvement ... that, nearly thirty years on, the Beatles' 'Indian' songs remain the most imaginative and successful examples of this type of fusion." Harrison's guitar work with the Beatles was varied and flexible; although not fast or flashy, his lead guitar playing was solid and typified the more subdued lead guitar style of the early 1960s; his rhythm guitar playing was as innovative, such as using a capo to shorten the strings on an acoustic guitar, as on the "Rubber Soul" album and "Here Comes the Sun", to create a bright, sweet sound. Eric Clapton felt that Harrison was "clearly an innovator" as he was "taking certain elements of R&B and rock and rockabilly and creating something unique". "Rolling Stone" founder Jann Wenner described Harrison as "a guitarist who was never showy but who had an innate, eloquent melodic sense. He played exquisitely in the service of the song". 
Harrison's friend and former bandmate Tom Petty agreed: "He just had a way of getting right to the business, of finding the right thing to play." The guitar picking style of Chet Atkins and Carl Perkins influenced Harrison, giving a country music feel to many of the Beatles' recordings. He identified Chuck Berry as an early influence and Ry Cooder as an important later influence. In 1961 the Beatles recorded "Cry for a Shadow", a blues-inspired instrumental co-written by Lennon and Harrison, who is credited with composing the song's lead guitar part, building on unusual chord voicings and imitating the style of other English groups such as the Shadows. The musicologist Walter Everett noted that while early Beatles compositions typically held close to the conventional patterns in rock music at the time, he also identified significant variations in their rhythm and tonal direction. Harrison's liberal use of the diatonic scale in his guitar playing reveals the influence of Buddy Holly, and his interest in Berry inspired him to compose songs based on the blues scale while incorporating a rockabilly feel in the style of Perkins. Within this framework he often used syncopation, as during his guitar solos for the Beatles' covers of Berry's "Roll Over Beethoven" and "Too Much Monkey Business". Another of Harrison's musical techniques was the use of guitar lines written in octaves, as on "I'll Be on My Way". He was the first person to own a Rickenbacker 360/12, a guitar with twelve strings, the low eight of which are tuned in pairs, one octave apart; the higher four being pairs tuned in unison. The Rickenbacker is unique among twelve-string guitars in having the lower octave string of each of the first four pairs placed above the higher tuned string. This, and the naturally rich harmonics produced by a twelve-string guitar provided the distinctive overtones found on many of the Beatles' recordings. His use of this guitar during the recording of "A Hard Day's Night" helped to popularise the model, and the jangly sound became so prominent that "Melody Maker" termed it the Beatles' "secret weapon". Harrison wrote the chord progression of his first published song, "Don't Bother Me" (1963), almost exclusively in the Dorian mode, demonstrating an interest in exotic tones that eventually culminated in his embrace of Indian music. The dark timbre of his guitar playing on the track was accentuated by his use of uncomplicated yet effective C+9 chord voicings and a solo in the minor pentatonic scale. By 1964 he had begun to develop a distinctive personal style as a guitarist, writing parts that featured the use of nonresolving tones, as with the ending chord arpeggios on "A Hard Day's Night". In 1965 he used an expression pedal to control his guitar's volume on "I Need You", creating a syncopated flautando effect with the melody resolving its dissonance through tonal displacements. He used the same volume-swell technique on "Yes It Is", applying what Everett described as "ghostly articulation" to the song's natural harmonics. Of "Rubber Soul"s "If I Needed Someone", Harrison said: "it's like a million other songs written around the D chord. If you move your fingers about, you get various little melodies ... it amazes me that people still find new permutations of the same notes." 
His other contribution to the album, "Think for Yourself", features what Everett described as "ambiguous tonal coloring", using chromaticism in G major with a "strange" mixture of the Dorian mode and the minor pentatonic; he called it a "tour de force of altered scale degrees". In 1966 Harrison contributed innovative musical ideas to "Revolver". He played backwards guitar on Lennon's composition "I'm Only Sleeping" and a guitar counter-melody on "And Your Bird Can Sing" that moved in parallel octaves above McCartney's bass downbeats. His guitar playing on "I Want to Tell You" exemplified the pairing of altered chordal colours with descending chromatic lines and his guitar part for "Sgt Pepper"s "Lucy in the Sky with Diamonds" mirrors Lennon's vocal line in much the same way that a sarangi player accompanies a khyal singer in a Hindu devotional song. Everett described Harrison's guitar solo from "Old Brown Shoe" as "stinging [and] highly Claptonesque". He identified two of the composition's significant motifs: a bluesy trichord and a diminished triad with roots in A and E. Huntley called the song "a sizzling rocker with a ferocious ... solo". In Greene's opinion, Harrison's demo for "Old Brown Shoe" contains "one of the most complex lead guitar solos on any Beatles song". Harrison's playing on "Abbey Road", and in particular on "Something", marked a significant moment in his development as a guitarist. The song's guitar solo shows a varied range of influences, incorporating the blues guitar style of Clapton and the styles of Indian gamakas. According to author and musicologist Kenneth Womack: "'Something' meanders toward the most unforgettable of Harrison's guitar solos ... A masterpiece in simplicity, [it] reaches toward the sublime". Harrison received an Ivor Novello award in July 1970 for "Something", as "The Best Song Musically and Lyrically of the Year". After Delaney Bramlett inspired him to learn slide guitar, Harrison began to incorporate it into his solo work, which allowed him to mimic many traditional Indian instruments, including the sarangi and the dilruba. Leng described Harrison's slide guitar solo on Lennon's "How Do You Sleep?" as a departure for "the sweet soloist of 'Something'", calling his playing "rightly famed ... one of Harrison's greatest guitar statements". Lennon commented: "That's the best he's ever fucking played in his life." A Hawaiian influence is notable in much of Harrison's music, ranging from his slide guitar work on "Gone Troppo" (1982) to his televised performance of the Cab Calloway standard "Between the Devil and the Deep Blue Sea" on ukulele in 1992. Lavezzoli described Harrison's slide playing on the Grammy-winning instrumental "Marwa Blues" (2002) as demonstrating Hawaiian influences while comparing the melody to an Indian sarod or veena, calling it "yet another demonstration of Harrison's unique slide approach". Harrison was an admirer of George Formby and a member of the Ukulele Society of Great Britain, and played a ukulele solo in the style of Formby at the end of "Free as a Bird". He performed at a Formby convention in 1991, and served as the honorary president of the George Formby Appreciation Society. Harrison played bass guitar on numerous tracks, including the Beatles songs "She Said She Said", "Golden Slumbers", "Birthday" and "Honey Pie". He also played bass on several solo recordings, including "Faster", "Wake Up My Love" and "Bye Bye Love". 
When Harrison joined the Quarrymen in 1958 his main guitar was a Höfner President Acoustic, which he soon traded for a Höfner Club 40 model. His first solid-body electric guitar was a Czech-built Jolana Futurama/Grazioso. The guitars he used on early recordings were mainly Gretsch models, played through a Vox amplifier, including a Gretsch Duo Jet that he bought secondhand in 1961, and posed with on the album cover for "Cloud Nine" (1987). He also bought a Gretsch Tennessean and a Gretsch Country Gentleman, which he played on "She Loves You", and during the Beatles' 1964 appearance on "The Ed Sullivan Show". In 1963 he bought a Rickenbacker 425 Fireglo, and in 1964 he acquired a Rickenbacker 360/12 guitar, which was the second of its kind to be manufactured. Harrison obtained his first Fender Stratocaster in 1965 and first used it during the recording of the "Help!" album that February; he also used it when recording "Rubber Soul" later that year, most notably on the song "Nowhere Man". In early 1966 Harrison and Lennon each purchased Epiphone Casinos (McCartney had owned one since 1965), which they used on "Revolver". Harrison also used a Gibson J-160E and a Gibson SG Standard while recording the album. He later painted his Stratocaster in a psychedelic design that included the word "Bebopalula" above the pickguard and the guitar's nickname, "Rocky", on the headstock. He played this guitar in the "Magical Mystery Tour" film and throughout his solo career. In mid-1968 he acquired a Gibson Les Paul that he nicknamed "Lucy". Around this time, he obtained a Gibson Jumbo J-200, which he used for early demos of "While My Guitar Gently Weeps". In late 1968 Fender Musical Instruments Corporation gave Harrison a custom-made Fender Telecaster Rosewood prototype, made especially for him by Philip Kubicki (who years later would start his own business, Factor), a Fender master builder who also crafted a prototype Stratocaster for Jimi Hendrix. In August 2017, Fender released a "Limited Edition George Harrison Rosewood Telecaster" modeled after a Telecaster Roger Rossmeisel originally created for Harrison. From 1968 onward Harrison collaborated with other musicians; he brought in Eric Clapton to play lead guitar on "While My Guitar Gently Weeps" for the 1968 Beatles' White Album, and collaborated with John Barham on his 1968 debut solo album, "Wonderwall Music", which included contributions from Clapton again, as well as Peter Tork from the Monkees. He played on tracks by Dave Mason, Nicky Hopkins, Alvin Lee, Ronnie Wood, Billy Preston and Tom Scott. Harrison co-wrote songs and music with Dylan, Clapton, Preston, Doris Troy, David Bromberg, Gary Wright, Wood, Jeff Lynne, and Tom Petty, among others. Harrison's music projects during the final years of the Beatles included producing Apple Records artists Doris Troy, Jackie Lomax and Billy Preston. Harrison co-wrote the song "Badge" with Clapton, which was included on Cream's 1969 album, "Goodbye". Harrison played rhythm guitar on the track, using the pseudonym "L'Angelo Misterioso" for contractual reasons. In May 1970 he played guitar on several songs during a recording session for Dylan's album "New Morning". In addition to his own work, between 1971 and 1973 he co-wrote and/or produced three top ten hits for Starr: "It Don't Come Easy", "Back Off Boogaloo" and "Photograph". In 1971 he played electric slide guitar on "How Do You Sleep?" and a dobro on "Crippled Inside", both from Lennon's "Imagine" album. 
Also that year, he produced and played slide guitar on Badfinger's top ten hit "Day After Day", and a dobro on Preston's "I Wrote a Simple Song". He worked with Harry Nilsson on "You're Breakin' My Heart" (1972) and with Cheech & Chong on "Basketball Jones" (1973). In 1973 he produced and made a guest appearance on the album "Shankar Family & Friends". In 1974 Harrison founded Dark Horse Records. In addition to eventually releasing his own albums on the label, he initially used the company as an avenue for collaboration with other musicians. He wanted Dark Horse to serve as a creative outlet for artists, as Apple Records had for the Beatles. Harrison explained: "Most of the stuff will be what I produce". Eric Idle commented: "He's extremely generous, and he backs and supports all sorts of people that you'll never, ever hear of." The first acts signed to the new label were Ravi Shankar and Splinter; Splinter's album, which Harrison produced, provided Dark Horse with its first hit, "Costafine Town". Other artists signed by Dark Horse included Attitudes, Henry McCullough, Jiva, and Stairsteps. Harrison collaborated with Tom Scott on Scott's album "New York Connection" (1976), and in 1981 he played guitar on "Walk a Thin Line", from Mick Fleetwood's "The Visitor". He played slide guitar on the title track of Dylan's 1990 album "Under the Red Sky", and in 1996 he recorded "Distance Makes No Difference With Love" with Carl Perkins. In 2001 he performed as a guest musician on Jeff Lynne and Electric Light Orchestra's comeback album "Zoom", and on the song "Love Letters" for Bill Wyman's Rhythm Kings. He also co-wrote a new song with his son Dhani, "Horse to the Water", which was recorded on 2 October, eight weeks before his death. It appeared on Jools Holland's album "Small World, Big Band". During the Beatles' American tour in August 1965, Harrison's friend David Crosby of the Byrds introduced him to Indian classical music and the work of sitar maestro Ravi Shankar. Harrison described Shankar as "the first person who ever impressed me in my life ... and he was the only person who didn't try to impress me." Harrison became fascinated with the sitar and immersed himself in Indian music. According to Lavezzoli, Harrison's introduction of the instrument on the Beatles' song "Norwegian Wood" "opened the floodgates for Indian instrumentation in rock music, triggering what Shankar would call 'The Great Sitar Explosion' of 1966–67". Lavezzoli recognises Harrison as "the man most responsible for this phenomenon". In June 1966 Harrison met Shankar at the home of Mrs Angadi of the Asian Music Circle, asked to be his student, and was accepted. Before this meeting, Harrison had recorded his "Revolver" track "Love You To", contributing a sitar part that Lavezzoli describes as an "astonishing improvement" over "Norwegian Wood" and "the most accomplished performance on sitar by any rock musician". On 6 July, Harrison travelled to India to buy a sitar from Rikhi Ram & Sons in New Delhi. In September, following the Beatles' final tour, he returned to India to study sitar for six weeks with Shankar. He initially stayed in Bombay until fans learned of his arrival, then moved to a houseboat on a remote lake in Kashmir. During this visit, he also received tutelage from Shambhu Das, Shankar's protégé. Harrison studied the instrument until 1968, when, following a discussion with Shankar about the need to find his "roots", an encounter with Clapton and Hendrix at a hotel in New York convinced him to return to guitar playing.
Harrison commented: "I decided ... I'm not going to be a great sitar player ... because I should have started at least fifteen years earlier." Harrison continued to use Indian instrumentation occasionally on his solo albums and remained strongly associated with the genre. Lavezzoli groups him with Paul Simon and Peter Gabriel as the three rock musicians who have given the most "mainstream exposure to non-Western musics, or the concept of 'world music'". By the mid-1960s Harrison had become an admirer of Indian culture and mysticism, introducing it to the other Beatles. During the filming of "Help!" in the Bahamas, they met the founder of Sivananda Yoga, Swami Vishnu-devananda, who gave each of them a signed copy of his book, "The Complete Illustrated Book of Yoga". Between the end of the last Beatles tour in 1966 and the beginning of the "Sgt Pepper" recording sessions, he made a pilgrimage to India with his wife Pattie; there, he studied sitar with Ravi Shankar, met several gurus, and visited various holy places. In 1968 he travelled to Rishikesh in northern India with the other Beatles to study meditation with Maharishi Mahesh Yogi. Harrison's use of psychedelic drugs encouraged his path to meditation and Hinduism. He commented: "For me, it was like a flash. The first time I had acid, it just opened up something in my head that was inside of me, and I realized a lot of things. I didn't learn them because I already knew them, but that happened to be the key that opened the door to reveal them. From the moment I had that, I wanted to have it all the time – these thoughts about the yogis and the Himalayas, and Ravi's music." In line with the Hindu yoga tradition, Harrison became a vegetarian in the late 1960s. After being given various religious texts by Shankar in 1966, he remained a lifelong advocate of the teachings of Swami Vivekananda and Paramahansa Yogananda – yogis and authors, respectively, of "Raja Yoga" and "Autobiography of a Yogi". In mid-1969, he produced the single "Hare Krishna Mantra", performed by members of the London Radha Krishna Temple. Having also helped the Temple devotees become established in Britain, Harrison then met their leader, A.C. Bhaktivedanta Swami Prabhupada, whom he described as "my friend … my master" and "a perfect example of everything he preached". Harrison embraced the Hare Krishna tradition, particularly "japa-yoga" chanting with beads, and became a lifelong devotee. Regarding other faiths he once remarked: "All religions are branches of one big tree. It doesn't matter what you call Him just as long as you call." He commented on his beliefs: Before his religious conversion, the only British performer known for similar activities had been Cliff Richard, whose conversion to Christianity in 1966 had gone largely unnoticed by the public. "By contrast," wrote Inglis, "Harrison's spiritual journey was seen as a serious and important development that reflected popular music's increasing maturity ... what he, and the Beatles, had managed to overturn was the paternalistic assumption that popular musicians had no role other than to stand on stage and sing their hit songs." Harrison married model Pattie Boyd on 21 January 1966, with McCartney serving as best man. Harrison and Boyd had met in 1964 during the production of the film "A Hard Day's Night", in which the 19-year-old Boyd had been cast as a schoolgirl. They separated in 1974 and their divorce was finalised in 1977. 
Boyd said her decision to end the marriage was due largely to Harrison's repeated infidelities, the last of which was an affair with Ringo's wife Maureen that Boyd called "the final straw". She characterised the last year of their marriage as "fuelled by alcohol and cocaine", and she stated: "George used coke excessively, and I think it changed him ... it froze his emotions and hardened his heart." She subsequently moved in with Eric Clapton, and they married in 1979. Harrison married Dark Horse Records' secretary Olivia Trinidad Arias on 2 September 1978. They had met at the A&M Records offices in Los Angeles in 1974, and together had one son, Dhani Harrison, born on 1 August 1978. He restored the English manor house and grounds of Friar Park, his home in Henley-on-Thames, where several of his music videos were filmed, including "Crackerbox Palace"; the grounds also served as the background for the cover of "All Things Must Pass". He employed ten workers to maintain the garden. Harrison commented on gardening as a form of escapism: "Sometimes I feel like I'm actually on the wrong planet, and it's great when I'm in my garden, but the minute I go out the gate I think: 'What the hell am I doing here?'" His autobiography, "I, Me, Mine", is dedicated "to gardeners everywhere". The former Beatles publicist Derek Taylor helped Harrison write the book, which said little about the Beatles, focusing instead on Harrison's hobbies, music and lyrics. Taylor commented: "George is not disowning the Beatles ... but it was a long time ago and actually a short part of his life." Harrison had an interest in sports cars and motor racing; he was one of the 100 people who purchased the McLaren F1 road car. He had collected photos of racing drivers and their cars since he was young; at 12 he had attended his first race, the 1955 British Grand Prix at Aintree. He wrote "Faster" as a tribute to the Formula One racing drivers Jackie Stewart and Ronnie Peterson. Proceeds from its release went to the Gunnar Nilsson cancer charity, set up after the Swedish driver's death from the disease in 1978. Harrison's first extravagant car, a 1964 Aston Martin DB5, was sold at auction on 7 December 2011 in London. An anonymous Beatles collector paid £350,000 for the vehicle that Harrison had bought new in January 1965. For most of the Beatles' career the relationships in the group were close. According to Hunter Davies, "the Beatles spent their lives not living a communal life, but communally living the same life. They were each other's greatest friends." Harrison's ex-wife Pattie Boyd described how the Beatles "all belonged to each other" and admitted, "George has a lot with the others that I can never know about. Nobody, not even the wives, can break through or even comprehend it." Starr said, "We really looked out for each other and we had so many laughs together. In the old days we'd have the biggest hotel suites, the whole floor of the hotel, and the four of us would end up in the bathroom, just to be with each other". He added, "there were some really loving, caring moments between four people: a hotel room here and there – a really amazing closeness. Just four guys who loved each other. It was pretty sensational." Lennon stated that his relationship with Harrison was "one of young follower and older guy ... [he] was like a disciple of mine when we started." The two later bonded over their LSD experiences, finding common ground as seekers of spirituality.
They took radically different paths thereafter, Harrison finding God and Lennon coming to the conclusion that people are the creators of their own lives. In 1974 Harrison said of his former bandmate: "John Lennon is a saint and he's heavy-duty, and he's great and I love him. But at the same time, he's such a bastard – but that's the great thing about him, you see?" Harrison and McCartney were the first of the Beatles to meet, having shared a school bus, and often learned and rehearsed new guitar chords together. McCartney stated that he and Harrison usually shared a bedroom while touring. McCartney was best man at Harrison's wedding in 1966, and was the only Beatle in attendance. McCartney has referred to Harrison as his "baby brother". In a 1974 BBC radio interview with Alan Freeman, Harrison stated: "[McCartney] ruined me as a guitar player". Perhaps the most significant obstacle to a Beatles reunion after the death of Lennon was Harrison and McCartney's personal relationship, as both men admitted that they often got on each other's nerves. Rodriguez commented: "Even to the end of George's days, theirs was a volatile relationship". Harrison was involved in humanitarian and political activism throughout his life. In the 1960s, the Beatles supported the civil rights movement and protested against the Vietnam War. After the band's break-up, Ravi Shankar consulted Harrison about how to provide aid to the people of Bangladesh after the 1970 Bhola cyclone and the Bangladesh Liberation War. Harrison recorded the song "Bangla Desh", and pushed Apple Records to release his song alongside Shankar's "Joy Bangla" in an effort to raise funds. Shankar then asked for Harrison's advice about planning a small charity event in the US. Harrison responded by organising the Concert for Bangladesh, which raised more than $240,000. In June 1972, UNICEF honoured Harrison and Shankar with the "Child Is the Father of Man" award at an annual ceremony in recognition of their fundraising efforts for Bangladesh. The George Harrison Humanitarian Fund for UNICEF, a joint effort between the Harrison family and the US Fund for UNICEF, aims to support programmes that help children caught in humanitarian emergencies. In December 2007, they donated $450,000 to help the victims of Cyclone Sidr in Bangladesh. On 13 October 2009, the first George Harrison Humanitarian Award went to Ravi Shankar for his efforts in saving the lives of children, and his involvement with the Concert for Bangladesh. In 1973 Peter Sellers introduced Harrison to Denis O'Brien. Soon after, the two went into business together. In 1978, in an effort to produce "Monty Python's Life of Brian", they formed the film production and distribution company HandMade Films. Harrison explained: "The name of the company came about as a bit of a joke. I'd been to Wookey Hole in Somerset ... [near] an old paper mill where they show you how to make old underpants into paper. So I bought a few rolls, and they had this watermark 'British Handmade Paper' ... So we said ... we'll call it Handmade Films." Their opportunity for investment came after EMI Films withdrew funding at the demand of their chief executive, Bernard Delfont. Harrison financed the production of "Life of Brian" in part by mortgaging his home, which Idle later called "the most anybody's ever paid for a cinema ticket in history". The film grossed $21 million at the box office in the US.
The first film distributed by HandMade Films was "The Long Good Friday" (1980), and the first they produced was "Time Bandits" (1981), a project co-scripted by Monty Python's Terry Gilliam and Michael Palin. The film featured a new song by Harrison, "Dream Away", in the closing credits. "Time Bandits" became one of HandMade's most successful and acclaimed efforts; with a budget of $5 million, it earned $35 million in the US within ten weeks of its release. Harrison served as executive producer for 23 films with HandMade, including "Mona Lisa", "Shanghai Surprise" and "Withnail and I". He made several cameo appearances in these films, including a role as a nightclub singer in "Shanghai Surprise", for which he recorded five new songs. According to Ian Inglis, Harrison's "executive role in HandMade Films helped to sustain British cinema at a time of crisis, producing some of the country's most memorable movies of the 1980s." Following a series of box office bombs in the late 1980s, and excessive debt incurred by O'Brien and guaranteed by Harrison, HandMade's financial situation became precarious. The company ceased operations in 1991 and was sold three years later to Paragon Entertainment, a Canadian corporation. Afterwards, Harrison sued O'Brien for $25 million for fraud and negligence, resulting in an $11.6 million judgement in 1996. In 1997, Harrison was diagnosed with throat cancer; he was treated with radiotherapy, which was thought at the time to be successful. He publicly blamed years of smoking for the illness. On 30 December 1999, Harrison and his wife were attacked at their home, Friar Park. Michael Abram, a 34-year-old man, broke in and attacked Harrison with a kitchen knife, puncturing a lung and causing head injuries before Olivia Harrison incapacitated the assailant by striking him repeatedly with a fireplace poker and a lamp. Abram suffered from paranoid schizophrenia, believing that Harrison was an extraterrestrial and that the Beatles were witches from Hell who rode broomsticks. During the attack, Harrison repeatedly shouted "Hare Krishna" at Abram. During the trial, a psychiatrist testified that Abram told him he would have stopped the attack if Harrison had talked normally to him. Following the attack, Harrison was hospitalised with more than 40 stab wounds. He released a statement soon afterwards regarding his assailant: "[he] wasn't a burglar, and he certainly wasn't auditioning for the Traveling Wilburys." In May 2001, it was revealed that Harrison had undergone an operation to remove a cancerous growth from one of his lungs, and in July, it was reported that he was being treated for a brain tumour at a clinic in Switzerland. While in Switzerland, Starr visited him but had to cut short his stay in order to travel to Boston, where his daughter was undergoing emergency brain surgery, prompting Harrison to quip: "Do you want me to come with you?" In November 2001, he began radiotherapy at Staten Island University Hospital in New York City for non-small cell lung cancer which had spread to his brain. When the news was made public, Harrison bemoaned his physician's breach of privacy, and his estate later claimed damages. On 12 November 2001 in New York, Harrison, Starr and McCartney came together for the last time. Less than three weeks later, on 29 November 2001, Harrison died at a friend's home in Los Angeles, aged 58. He was cremated at Hollywood Forever Cemetery and his funeral was held at the Self-Realization Fellowship Lake Shrine in Pacific Palisades, California.
His close family scattered his ashes according to Hindu tradition in a private ceremony in the Ganges and Yamuna rivers near Varanasi, India. He left almost £100 million in his will. Harrison's final album, "Brainwashed" (2002), was released posthumously after it was completed by his son Dhani and Jeff Lynne. A quotation from the "Bhagavad Gita" is included in the album's liner notes: "There never was a time when you or I did not exist. Nor will there be any future when we shall cease to be." A media-only single, "Stuck Inside a Cloud", which Leng described as "a uniquely candid reaction to illness and mortality", achieved number 27 on "Billboard"'s Adult Contemporary chart. The single "Any Road", released in May 2003, peaked at number 37 on the UK Singles Chart. "Marwa Blues" went on to receive the 2004 Grammy Award for Best Pop Instrumental Performance, while "Any Road" was nominated for Best Male Pop Vocal Performance. In June 1965, Harrison and the other Beatles were appointed Members of the Order of the British Empire (MBE). They received their insignia from the Queen at an investiture at Buckingham Palace on 26 October. In 1971 the Beatles received an Academy Award for Best Original Song Score for the film "Let It Be". The minor planet 4149 Harrison, discovered in 1984, was named after him, as was a variety of dahlia. In December 1992 he became the first recipient of the Billboard Century Award, an honour presented to music artists for significant bodies of work. The award recognised Harrison's "critical role in laying the groundwork for the modern concept of world music" and his having "advanced society's comprehension of the spiritual and altruistic power of popular music". "Rolling Stone" magazine ranked him number 11 in their list of the "100 Greatest Guitarists of All Time". In 2002, on the first anniversary of his death, the Concert for George was held at the Royal Albert Hall. Eric Clapton organised the event, which included performances by many of Harrison's friends and musical collaborators, including McCartney and Starr. Eric Idle, who described Harrison as "one of the few morally good people that rock and roll has produced", was among the performers of Monty Python's "Lumberjack Song". The profits from the concert went to Harrison's charity, the Material World Charitable Foundation. In 2004, Harrison was posthumously inducted into the Rock and Roll Hall of Fame as a solo artist by his Traveling Wilburys bandmates Lynne and Petty, and into the Madison Square Garden Walk of Fame in 2006 for the Concert for Bangladesh. On 14 April 2009, the Hollywood Chamber of Commerce awarded Harrison a star on the Walk of Fame in front of the Capitol Records Building. McCartney, Lynne and Petty were present when the star was unveiled. Harrison's widow Olivia, the actor Tom Hanks and Idle made speeches at the ceremony, and Harrison's son Dhani spoke the Hare Krishna mantra. A documentary film entitled "George Harrison: Living in the Material World", directed by Martin Scorsese, was released in October 2011. The film features interviews with Olivia and Dhani Harrison, Klaus Voormann, Terry Gilliam, Starr, Clapton, McCartney, Keltner and Astrid Kirchherr. Harrison was posthumously honoured with The Recording Academy's Grammy Lifetime Achievement Award at the Grammy Awards in February 2015. George Bernard Shaw George Bernard Shaw (26 July 1856 – 2 November 1950), known at his insistence simply as Bernard Shaw, was an Irish playwright, critic, polemicist, and political activist.
His influence on Western theatre, culture and politics extended from the 1880s to his death and beyond. He wrote more than sixty plays, including major works such as "Man and Superman" (1902), "Pygmalion" (1912) and "Saint Joan" (1923). With a range incorporating both contemporary satire and historical allegory, Shaw became the leading dramatist of his generation, and in 1925 was awarded the Nobel Prize in Literature. Born in Dublin, Shaw moved to London in 1876, where he struggled to establish himself as a writer and novelist, and embarked on a rigorous process of self-education. By the mid-1880s he had become a respected theatre and music critic. Following a political awakening, he joined the gradualist Fabian Society and became its most prominent pamphleteer. Shaw had been writing plays for years before his first public success, "Arms and the Man" in 1894. Influenced by Henrik Ibsen, he sought to introduce a new realism into English-language drama, using his plays as vehicles to disseminate his political, social and religious ideas. By the early twentieth century his reputation as a dramatist was secured with a series of critical and popular successes that included "Major Barbara", "The Doctor's Dilemma" and "Caesar and Cleopatra". Shaw's expressed views were often contentious; he promoted eugenics and alphabet reform, and opposed vaccination and organised religion. He courted unpopularity by denouncing both sides in the First World War as equally culpable, and although not a republican, castigated British policy on Ireland in the postwar period. These stances had no lasting effect on his standing or productivity as a dramatist; the inter-war years saw a series of often ambitious plays, which achieved varying degrees of popular success. In 1938 he provided the screenplay for a filmed version of "Pygmalion", for which he received an Academy Award. His appetite for politics and controversy remained undiminished; by the late 1920s he had largely renounced Fabian Society gradualism and often wrote and spoke favourably of dictatorships of the right and left—he expressed admiration for both Mussolini and Stalin. In the final decade of his life he made fewer public statements, but continued to write prolifically until shortly before his death, aged ninety-four, having refused all state honours, including the Order of Merit in 1946. Since Shaw's death scholarly and critical opinion has varied about his works, but he has regularly been rated as second only to Shakespeare among British dramatists; analysts recognise his extensive influence on generations of English-language playwrights. The word "Shavian" has entered the language as encapsulating Shaw's ideas and his means of expressing them. Shaw was born at 3 Upper Synge Street in Portobello, a lower-middle-class part of Dublin. He was the youngest child and only son of George Carr Shaw (1814–1885) and Lucinda Elizabeth (Bessie) Shaw ("née" Gurly; 1830–1913). His elder siblings were Lucinda (Lucy) Frances (1853–1920) and Elinor Agnes (1855–1876). The Shaw family was of English descent and belonged to the dominant Protestant Ascendancy in Ireland; George Carr Shaw, an ineffectual alcoholic, was among the family's less successful members. His relatives secured him a sinecure in the civil service, from which he was pensioned off in the early 1850s; thereafter he worked irregularly as a corn merchant. In 1852 he married Bessie Gurly; in the view of Shaw's biographer Michael Holroyd she married to escape a tyrannical great-aunt.
If, as Holroyd and others surmise, George's motives were mercenary, then he was disappointed, as Bessie brought him little of her family's money. She came to despise her ineffectual and often drunken husband, with whom she shared what their son later described as a life of "shabby-genteel poverty". By the time of Shaw's birth, his mother had become close to George John Lee, a flamboyant figure well known in Dublin's musical circles. Shaw retained a lifelong obsession that Lee might have been his biological father; there is no consensus among Shavian scholars on the likelihood of this. The young Shaw suffered no harshness from his mother, but he later recalled that her indifference and lack of affection hurt him deeply. He found solace in the music that abounded in the house. Lee was a conductor and teacher of singing; Bessie had a fine mezzo-soprano voice and was much influenced by Lee's unorthodox method of vocal production. The Shaws' house was often filled with music, with frequent gatherings of singers and players. In 1862, Lee and the Shaws agreed to share a house, No. 1 Hatch Street, in an affluent part of Dublin, and a country cottage on Dalkey Hill, overlooking Killiney Bay. Shaw, a sensitive boy, found the less salubrious parts of Dublin shocking and distressing, and was happier at the cottage. Lee's students often gave him books, which the young Shaw read avidly; thus, as well as gaining a thorough musical knowledge of choral and operatic works, he became familiar with a wide spectrum of literature. Between 1865 and 1871, Shaw attended four schools, all of which he hated. His experiences as a schoolboy left him disillusioned with formal education: "Schools and schoolmasters", he later wrote, were "prisons and turnkeys in which children are kept to prevent them disturbing and chaperoning their parents." In October 1871 he left school to become a junior clerk in a Dublin firm of land agents, where he worked hard, and quickly rose to become head cashier. During this period, Shaw was known as "George Shaw"; after 1876, he dropped the "George" and styled himself "Bernard Shaw". In June 1873, Lee left Dublin for London and never returned. A fortnight later, Bessie followed him; the two girls joined her. Shaw's explanation of why his mother followed Lee was that without the latter's financial contribution the joint household had to be broken up. Left in Dublin with his father, Shaw compensated for the absence of music in the house by teaching himself to play the piano. Early in 1876 Shaw learned from his mother that Agnes was dying of tuberculosis. He resigned from the land agents, and in March travelled to England to join his mother and Lucy at Agnes's funeral. He never again lived in Ireland, and did not visit it for twenty-nine years. Initially, Shaw refused to seek clerical employment in London. His mother allowed him to live free of charge in her house in South Kensington, but he nevertheless needed an income. He had abandoned a teenage ambition to become a painter, and had no thought yet of writing for a living, but Lee found a little work for him, ghost-writing a musical column printed under Lee's name in a satirical weekly, "The Hornet". Lee's relations with Bessie deteriorated after their move to London. Shaw maintained contact with Lee, who found him work as a rehearsal pianist and occasional singer. Eventually Shaw was driven to applying for office jobs. 
In the interim he secured a reader's pass for the British Museum Reading Room (the forerunner of the British Library) and spent most weekdays there, reading and writing. His first attempt at drama, begun in 1878, was a blank-verse satirical piece on a religious theme. It was abandoned unfinished, as was his first try at a novel. His first completed novel, "Immaturity" (1879), was too grim to appeal to publishers and did not appear until the 1930s. He was employed briefly by the newly formed Edison Telephone Company in 1879–80, and as in Dublin achieved rapid promotion. Nonetheless, when the Edison firm merged with the rival Bell Telephone Company, Shaw chose not to seek a place in the new organisation. Thereafter he pursued a full-time career as an author. For the next four years Shaw made a negligible income from writing, and was subsidised by his mother. In 1881, for the sake of economy, and increasingly as a matter of principle, he became a vegetarian. He grew a beard to hide a facial scar left by smallpox. In rapid succession he wrote two more novels: "The Irrational Knot" (1880) and "Love Among the Artists" (1881), but neither found a publisher; each was serialised a few years later in the socialist magazine "Our Corner". In 1880 Shaw began attending meetings of the Zetetical Society, whose objective was to "search for truth in all matters affecting the interests of the human race". Here he met Sidney Webb, a junior civil servant who, like Shaw, was busy educating himself. Despite difference of style and temperament, the two quickly recognised qualities in each other and developed a lifelong friendship. Shaw later reflected: "You knew everything that I didn't know and I knew everything you didn't know ... We had everything to learn from one another and brains enough to do it". Shaw's next attempt at drama was a one-act playlet in French, "Un Petit Drame", written in 1884 but not published in his lifetime. In the same year the critic William Archer suggested a collaboration, with a plot by Archer and dialogue by Shaw. The project foundered, but Shaw returned to the draft as the basis of "Widowers' Houses" in 1892, and the connection with Archer proved of immense value to Shaw's career. On 5 September 1882 Shaw attended a meeting at the Memorial Hall, Farringdon, addressed by the political economist Henry George. Shaw then read George's book "Progress and Poverty", which awakened his interest in economics. He began attending meetings of the Social Democratic Federation (SDF), where he discovered the writings of Karl Marx, and thereafter spent much of 1883 reading "Das Kapital". He was not impressed by the SDF's founder, H. M. Hyndman, whom he found autocratic, ill-tempered and lacking leadership qualities. Shaw doubted the ability of the SDF to harness the working classes into an effective radical movement and did not join it—he preferred, he said, to work with his intellectual equals. After reading a tract, "Why Are The Many Poor?", issued by the recently formed Fabian Society, Shaw went to the society's next advertised meeting, on 16 May 1884. He became a member in September, and before the year's end had provided the society with its first manifesto, published as Fabian Tract No. 2. He joined the society's executive committee in January 1885, and later that year recruited Webb and also Annie Besant, a fine orator. 
From 1885 to 1889 Shaw attended the fortnightly meetings of the British Economic Association; it was, Holroyd observes, "the closest Shaw had ever come to university education." This experience changed his political ideas; he moved away from Marxism and became an apostle of gradualism. When in 1886–87 the Fabians debated whether to embrace anarchism, as advocated by Charlotte Wilson, Besant and others, Shaw joined the majority in rejecting this approach. After a rally in Trafalgar Square addressed by Besant was violently broken up by the authorities on 13 November 1887 ("Bloody Sunday"), Shaw became convinced of the folly of attempting to challenge police power. Thereafter he largely accepted the principle of "permeation" as advocated by Webb: the notion whereby socialism could best be achieved by infiltration of people and ideas into existing political parties. Throughout the 1880s the Fabian Society remained small, its message of moderation frequently unheard among more strident voices. Its profile was raised in 1889 with the publication of "Fabian Essays in Socialism", edited by Shaw who also provided two of the essays. The second of these, "Transition", details the case for gradualism and permeation, asserting that "the necessity for cautious and gradual change must be obvious to everyone". In 1890 Shaw produced Tract No. 13, "What Socialism Is", a revision of an earlier tract in which Charlotte Wilson had defined socialism in anarchistic terms. In Shaw's new version, readers were assured that "socialism can be brought about in a perfectly constitutional manner by democratic institutions". The mid-1880s marked a turning point in Shaw's life, both personally and professionally: he lost his virginity, had two novels published, and began a career as a critic. He had been celibate until his twenty-ninth birthday, when his shyness was overcome by Jane (Jenny) Patterson, a widow some years his senior. Their affair continued, not always smoothly, for eight years. Shaw's sex life has caused much speculation and debate among his biographers, but there is a consensus that the relationship with Patterson was one of his few non-platonic romantic liaisons. The published novels, neither commercially successful, were his two final efforts in this genre: "Cashel Byron's Profession" written in 1882–83, and "An Unsocial Socialist", begun and finished in 1883. The latter was published as a serial in "ToDay" magazine in 1884, although it did not appear in book form until 1887. "Cashel Byron" appeared in magazine and book form in 1886. In 1884 and 1885, through the influence of Archer, Shaw was engaged to write book and music criticism for London papers. When Archer resigned as art critic of "The World" in 1886 he secured the succession for Shaw. The two figures in the contemporary art world whose views Shaw most admired were William Morris and John Ruskin, and he sought to follow their precepts in his criticisms. Their emphasis on morality appealed to Shaw, who rejected the idea of art for art's sake, and insisted that all great art must be didactic. Of Shaw's various reviewing activities in the 1880s and 1890s it was as a music critic that he was best known. After serving as deputy in 1888, he became musical critic of "The Star" in February 1889, writing under the pen-name Corno di Bassetto. In May 1890 he moved back to "The World", where he wrote a weekly column as "G.B.S." for more than four years. 
In the 2016 version of the "Grove Dictionary of Music and Musicians", Robert Anderson writes, "Shaw's collected writings on music stand alone in their mastery of English and compulsive readability." Shaw ceased to be a salaried music critic in August 1894, but published occasional articles on the subject throughout his career, his last in 1950. From 1895 to 1898, Shaw was the theatre critic for "The Saturday Review", edited by his friend Frank Harris. As at "The World", he used the by-line "G.B.S." He campaigned against the artificial conventions and hypocrisies of the Victorian theatre and called for plays of real ideas and true characters. By this time he had embarked in earnest on a career as a playwright: "I had rashly taken up the case; and rather than let it collapse I manufactured the evidence". After using the plot of the aborted 1884 collaboration with Archer to complete "Widowers' Houses" (it was staged twice in London, in December 1892), Shaw continued writing plays. At first he made slow progress; "The Philanderer", written in 1893 but not published until 1898, had to wait until 1905 for a stage production. Similarly, "Mrs Warren's Profession" (1893) was written five years before publication and nine years before reaching the stage. Shaw's first box-office success was "Arms and the Man" (1894), a mock-Ruritanian comedy satirising conventions of love, military honour and class. The press found the play overlong, and accused Shaw of mediocrity, of sneering at heroism and patriotism, of heartless cleverness, and of copying W. S. Gilbert's style. The public took a different view, and the management of the theatre staged extra matinée performances to meet the demand. The play ran from April to July, toured the provinces and was staged in New York. Among the cast of the London production was Florence Farr, with whom Shaw had a romantic relationship between 1890 and 1894, much resented by Jenny Patterson. The success of "Arms and the Man" was not immediately replicated. "Candida", which presented a young woman making a conventional romantic choice for unconventional reasons, received a single performance in South Shields in 1895; in 1897 a playlet about Napoleon called "The Man of Destiny" had a single staging at Croydon. In the 1890s Shaw's plays were better known in print than on the West End stage; his biggest success of the decade was in New York in 1897, when Richard Mansfield's production of the historical melodrama "The Devil's Disciple" earned the author more than £2,000 in royalties. In January 1893, as a Fabian delegate, Shaw attended the Bradford conference which led to the foundation of the Independent Labour Party. He was sceptical about the new party, and scorned the likelihood that it could switch the allegiance of the working class from sport to politics. He persuaded the conference to adopt resolutions abolishing indirect taxation, and taxing unearned income "to extinction". Back in London, Shaw produced what Margaret Cole, in her Fabian history, terms a "grand philippic" against the minority Liberal administration that had taken power in 1892. "To Your Tents, O Israel" excoriated the government for ignoring social issues and concentrating solely on Irish Home Rule, a matter Shaw declared of no relevance to socialism. In 1894 the Fabian Society received a substantial bequest from a sympathiser, Henry Hunt Hutchinson—Holroyd mentions £10,000.
Webb, who chaired the board of trustees appointed to supervise the legacy, proposed to use most of it to found a school of economics and politics. Shaw demurred; he thought such a venture was contrary to the specified purpose of the legacy. He was eventually persuaded to support the proposal, and the London School of Economics and Political Science (LSE) opened in the summer of 1895. By the later 1890s Shaw's political activities lessened as he concentrated on making his name as a dramatist. In 1897 he was persuaded to fill an uncontested vacancy for a "vestryman" (parish councillor) in London's St Pancras district. At least initially, Shaw took his municipal responsibilities seriously; when London government was reformed in 1899 and the St Pancras vestry became the Metropolitan Borough of St Pancras, he was elected to the newly formed borough council. In 1898, as a result of overwork, Shaw's health broke down. He was nursed by Charlotte Payne-Townshend, a rich Anglo-Irish woman whom he had met through the Webbs. The previous year she had proposed that she and Shaw should marry. He had declined, but when she insisted on nursing him in a house in the country, Shaw, concerned that this might cause scandal, agreed to their marriage. The ceremony took place on 1 June 1898, in the register office in Covent Garden. The bride and bridegroom were both aged forty-one. In the view of the biographer and critic St John Ervine, "their life together was entirely felicitous". There were no children of the marriage, which it is generally believed was never consummated; whether this was wholly at Charlotte's wish, as Shaw liked to suggest, is less widely credited. In the early weeks of the marriage Shaw was much occupied writing his Marxist analysis of Wagner's "Ring" cycle, published as "The Perfect Wagnerite" late in 1898. In 1906 the Shaws found a country home in Ayot St Lawrence, Hertfordshire; they renamed the house "Shaw's Corner", and lived there for the rest of their lives. They retained a London flat in the Adelphi and later at Whitehall Court. During the first decade of the twentieth century, Shaw secured a firm reputation as a playwright. In 1904 J. E. Vedrenne and Harley Granville-Barker established a company at the Royal Court Theatre in Sloane Square, Chelsea, to present modern drama. Over the next five years they staged fourteen of Shaw's plays. The first, "John Bull's Other Island", a comedy about an Englishman in Ireland, attracted leading politicians and was seen by Edward VII, who laughed so much that he broke his chair. The play was withheld from Dublin's Abbey Theatre, for fear of the affront it might provoke, although it was shown at the city's Royal Theatre in November 1907. Shaw later wrote that William Butler Yeats, who had requested the play, "got rather more than he bargained for... It was uncongenial to the whole spirit of the neo-Gaelic movement, which is bent on creating a new Ireland after its own ideal, whereas my play is a very uncompromising presentment of the real old Ireland." Nonetheless, Shaw and Yeats were close friends; Yeats and Lady Gregory tried unsuccessfully to persuade Shaw to take up the vacant co-directorship of the Abbey Theatre after J. M. Synge's death in 1909. Shaw admired other figures in the Irish Literary Revival, including George Russell and James Joyce, and was a close friend of Seán O'Casey, who was inspired to become a playwright after reading "John Bull's Other Island".
"Man and Superman", completed in 1902, was a success both at the Royal Court in 1905 and in Robert Loraine's New York production in the same year. Among the other Shaw works presented by Vedrenne and Granville-Barker were "Major Barbara" (1905), depicting the contrasting morality of arms manufacturers and the Salvation Army; "The Doctor's Dilemma" (1906), a mostly serious piece about professional ethics; and "Caesar and Cleopatra", Shaw's counterblast to Shakespeare's "Antony and Cleopatra", seen in New York in 1906 and in London the following year. Now prosperous and established, Shaw experimented with unorthodox theatrical forms described by his biographer Stanley Weintraub as "discussion drama" and "serious farce". These plays included "Getting Married" (premiered 1908), "The Shewing-Up of Blanco Posnet" (1909), "Misalliance" (1910), and "Fanny's First Play" (1911). "Blanco Posnet" was banned on religious grounds by the Lord Chamberlain (the official theatre censor in England), and was produced instead in Dublin; it filled the Abbey Theatre to capacity. "Fanny's First Play", a comedy about suffragettes, had the longest initial run of any Shaw play—622 performances. "Androcles and the Lion" (1912), a less heretical study of true and false religious attitudes than "Blanco Posnet", ran for eight weeks in September and October 1913. It was followed by one of Shaw's most successful plays, "Pygmalion", written in 1912 and staged in Vienna the following year, and in Berlin shortly afterwards. Shaw commented, "It is the custom of the English press when a play of mine is produced, to inform the world that it is not a play—that it is dull, blasphemous, unpopular, and financially unsuccessful. ... Hence arose an urgent demand on the part of the managers of Vienna and Berlin that I should have my plays performed by them first." The British production opened in April 1914, starring Sir Herbert Tree and Mrs Patrick Campbell as, respectively, a professor of phonetics and a cockney flower-girl. There had earlier been a romantic liaison between Shaw and Campbell that caused Charlotte Shaw considerable concern, but by the time of the London premiere it had ended. The play attracted capacity audiences until July, when Tree insisted on going on holiday, and the production closed. His co-star then toured with the piece in the US. In 1899, when the Boer War began, Shaw wished the Fabians to take a neutral stance on what he deemed, like Home Rule, to be a "non-Socialist" issue. Others, including the future Labour prime minister Ramsay MacDonald, wanted unequivocal opposition, and resigned from the society when it followed Shaw. In the Fabians' war manifesto, "Fabianism and the Empire" (1900), Shaw declared that "until the Federation of the World becomes an accomplished fact we must accept the most responsible Imperial federations available as a substitute for it". As the new century began, Shaw became increasingly disillusioned by the limited impact of the Fabians on national politics. Thus, although a nominated Fabian delegate, he did not attend the London conference at the Memorial Hall, Farringdon Street in February 1900, that created the Labour Representation Committee—precursor of the modern Labour Party. By 1903, when his term as borough councillor expired, he had lost his earlier enthusiasm, writing: "After six years of Borough Councilling I am convinced that the borough councils should be abolished". Nevertheless, in 1904 he stood in the London County Council elections. 
After an eccentric campaign, which Holroyd characterises as "[making] absolutely certain of not getting in", he was duly defeated. It was Shaw's final foray into electoral politics. Nationally, the 1906 general election produced a huge Liberal majority and an intake of 29 Labour members. Shaw viewed this outcome with scepticism; he had a low opinion of the new prime minister, Sir Henry Campbell-Bannerman, and saw the Labour members as inconsequential: "I apologise to the Universe for my connection with such a body". In the years after the 1906 election, Shaw felt that the Fabians needed fresh leadership, and saw this in the form of his fellow-writer H. G. Wells, who had joined the society in February 1903. Wells's ideas for reform—particularly his proposals for closer cooperation with the Independent Labour Party—placed him at odds with the society's "Old Gang", led by Shaw. According to Cole, Wells "had minimal capacity for putting [his ideas] across in public meetings against Shaw's trained and practised virtuosity". In Shaw's view, "the Old Gang did not extinguish Mr Wells, he annihilated himself". Wells resigned from the society in September 1908; Shaw remained a member, but left the executive in April 1911. He later wondered whether the Old Gang should have given way to Wells some years earlier: "God only knows whether the Society had not better have done it". Although less active—he blamed his advancing years—Shaw remained a Fabian. In 1912 Shaw invested £1,000 for a one-fifth share in the Webbs' new publishing venture, a socialist weekly magazine called "The New Statesman", which appeared in April 1913. He became a founding director, publicist, and in due course a contributor, mostly anonymously. He was soon at odds with the magazine's editor, Clifford Sharp, who by 1916 was rejecting his contributions—"the only paper in the world that refuses to print anything by me", according to Shaw. After the First World War began in August 1914, Shaw produced his tract "Common Sense About the War", which argued that the warring nations were equally culpable. Such a view was anathema in an atmosphere of fervent patriotism, and offended many of Shaw's friends; Ervine records that "[h]is appearance at any public function caused the instant departure of many of those present." Despite his errant reputation, Shaw's propagandist skills were recognised by the British authorities, and early in 1917 he was invited by Field Marshal Haig to visit the Western Front battlefields. Shaw's 10,000-word report, which emphasised the human aspects of the soldier's life, was well received, and he became less of a lone voice. In April 1917 he joined the national consensus in welcoming America's entry into the war: "a first class moral asset to the common cause against junkerism". Three short plays by Shaw were premiered during the war. "The Inca of Perusalem", written in 1915, encountered problems with the censor for burlesquing not only the enemy but the British military command; it was performed in 1916 at the Birmingham Repertory Theatre. "O'Flaherty V.C.", satirising the government's attitude to Irish recruits, was banned in the UK and was presented at a Royal Flying Corps base in Belgium in 1917. "Augustus Does His Bit", a genial farce, was granted a licence; it opened at the Royal Court in January 1917. Shaw had long supported the principle of Irish Home Rule within the British Empire (which he thought should become the British Commonwealth). 
In April 1916 he wrote scathingly in "The New York Times" about militant Irish nationalism: "In point of learning nothing and forgetting nothing these fellow-patriots of mine leave the Bourbons nowhere." Total independence, he asserted, was impractical; alliance with a bigger power (preferably England) was essential. The Dublin Easter Rising later that month took him by surprise. After its suppression by British forces, he expressed horror at the summary execution of the rebel leaders, but continued to believe in some form of Anglo-Irish union. In "How to Settle the Irish Question" (1917), he envisaged a federal arrangement, with national and imperial parliaments. Holroyd records that by this time the separatist party Sinn Féin was in the ascendency, and Shaw's and other moderate schemes were forgotten. In the postwar period, Shaw despaired of the British government's coercive policies towards Ireland, and joined his fellow-writers Hilaire Belloc and G. K. Chesterton in publicly condemning these actions. The Anglo-Irish Treaty of December 1921 led to the partition of Ireland between north and south, a provision that dismayed Shaw. In 1922 civil war broke out in the south between its pro-treaty and anti-treaty factions, the former of whom had established the Irish Free State. Shaw visited Dublin in August, and met Michael Collins, then head of the Free State's Provisional Government. Shaw was much impressed by Collins, and was saddened when, three days later, the Irish leader was ambushed and killed by anti-treaty forces. In a letter to Collins's sister, Shaw wrote: "I met Michael for the first and last time on Saturday last, and am very glad I did. I rejoice in his memory, and will not be so disloyal to it as to snivel over his valiant death". Shaw remained a British subject all his life, but took dual British-Irish nationality in 1934. Shaw's first major work to appear after the war was "Heartbreak House", written in 1916–17 and performed in 1920. It was produced on Broadway in November, and was coolly received; according to "The Times": "Mr Shaw on this occasion has more than usual to say and takes twice as long as usual to say it". After the London premiere in October 1921 "The Times" concurred with the American critics: "As usual with Mr Shaw, the play is about an hour too long", although containing "much entertainment and some profitable reflection". Ervine in "The Observer" thought the play brilliant but ponderously acted, except for Edith Evans as Lady Utterword. Shaw's largest-scale theatrical work was "Back to Methuselah", written in 1918–20 and staged in 1922. Weintraub describes it as "Shaw's attempt to fend off 'the bottomless pit of an utterly discouraging pessimism'". This cycle of five interrelated plays depicts evolution, and the effects of longevity, from the Garden of Eden to the year 31,920 AD. Critics found the five plays strikingly uneven in quality and invention. The original run was brief, and the work has been revived infrequently. Shaw felt he had exhausted his remaining creative powers in the huge span of this "Metabiological Pentateuch". He was now sixty-seven, and expected to write no more plays. This mood was short-lived. In 1920 Joan of Arc was proclaimed a saint by Pope Benedict XV; Shaw had long found Joan an interesting historical character, and his view of her veered between "half-witted genius" and someone of "exceptional sanity". He had considered writing a play about her in 1913, and the canonisation prompted him to return to the subject. 
He wrote "Saint Joan" in the middle months of 1923, and the play was premiered on Broadway in December. It was enthusiastically received there, and at its London premiere the following March. In Weintraub's phrase, "even the Nobel prize committee could no longer ignore Shaw after Saint Joan". The citation for the literature prize for 1925 praised his work as "... marked by both idealism and humanity, its stimulating satire often being infused with a singular poetic beauty". He accepted the award, but rejected the monetary prize that went with it, on the grounds that "My readers and my audiences provide me with more than sufficient money for my needs". After "Saint Joan", it was five years before Shaw wrote a play. From 1924, he spent four years writing what he described as his "magnum opus", a political treatise entitled "The Intelligent Woman's Guide to Socialism and Capitalism". The book was published in 1928 and sold well. At the end of the decade Shaw produced his final Fabian tract, a commentary on the League of Nations. He described the League as "a school for the new international statesmanship as against the old Foreign Office diplomacy", but thought that it had not yet become the "Federation of the World". Shaw returned to the theatre with what he called "a political extravaganza", "The Apple Cart", written in late 1928. It was, in Ervine's view, unexpectedly popular, taking a conservative, monarchist, anti-democratic line that appealed to contemporary audiences. The premiere was in Warsaw in June 1928, and the first British production was two months later, at Sir Barry Jackson's inaugural Malvern Festival. The other eminent creative artist most closely associated with the festival was Sir Edward Elgar, with whom Shaw enjoyed a deep friendship and mutual regard. He described "The Apple Cart" to Elgar as "a scandalous Aristophanic burlesque of democratic politics, with a brief but shocking sex interlude". During the 1920s Shaw began to lose faith in the idea that society could be changed through Fabian gradualism, and became increasingly fascinated with dictatorial methods. In 1922 he had welcomed Mussolini's accession to power in Italy, observing that amid the "indiscipline and muddle and Parliamentary deadlock", Mussolini was "the right kind of tyrant". Shaw was prepared to tolerate certain dictatorial excesses; Weintraub in his ODNB biographical sketch comments that Shaw's "flirtation with authoritarian inter-war regimes" took a long time to fade, and Beatrice Webb thought he was "obsessed" about Mussolini. Shaw's enthusiasm for the Soviet Union dated to the early 1920s when he had hailed Lenin as "the one really interesting statesman in Europe". Having turned down several chances to visit, in 1931 he joined a party led by Nancy Astor. The carefully managed trip culminated in a lengthy meeting with Stalin, whom Shaw later described as "a Georgian gentleman" with no malice in him. At a dinner given in his honour, Shaw told the gathering: "I have seen all the 'terrors' and I was terribly pleased by them". In March 1933 Shaw was a co-signatory to a letter in "The Manchester Guardian" protesting at the continuing misrepresentation of Soviet achievements: "No lie is too fantastic, no slander is too stale ... for employment by the more reckless elements of the British press." Shaw's admiration for Mussolini and Stalin demonstrated his growing belief that dictatorship was the only viable political arrangement. 
When the Nazi Party came to power in Germany in January 1933, Shaw described Hitler as "a very remarkable man, a very able man", and professed himself proud to be the only writer in England who was "scrupulously polite and just to Hitler". His principal admiration was for Stalin, whose regime he championed uncritically throughout the decade. Shaw saw the 1939 Molotov–Ribbentrop Pact as a triumph for Stalin, who, he said, now had Hitler under his thumb. Shaw's first play of the decade was "Too True to be Good", written in 1931 and premiered in Boston in February 1932. The reception was unenthusiastic. Brooks Atkinson of "The New York Times", commenting that Shaw had "yielded to the impulse to write without having a subject", judged the play a "rambling and indifferently tedious conversation". The correspondent of "The New York Herald Tribune" said that most of the play was "discourse, unbelievably long lectures" and that although the audience enjoyed the play it was bewildered by it. During the decade Shaw travelled widely and frequently. Most of his journeys were with Charlotte; she enjoyed voyages on ocean liners, and he found peace to write during the long spells at sea. Shaw met an enthusiastic welcome in South Africa in 1932, despite his strong remarks about the racial divisions of the country. In December 1932 the couple embarked on a round-the-world cruise. In March 1933 they arrived at San Francisco, to begin Shaw's first visit to the US. He had earlier refused to go to "that awful country, that uncivilized place", "unfit to govern itself... illiberal, superstitious, crude, violent, anarchic and arbitrary". He visited Hollywood, with which he was unimpressed, and New York, where he lectured to a capacity audience in the Metropolitan Opera House. Harried by the intrusive attentions of the press, Shaw was glad when his ship sailed from New York harbour. New Zealand, which he and Charlotte visited the following year, struck him as "the best country I've been in"; he urged its people to be more confident and loosen their dependence on trade with Britain. He used the weeks at sea to complete two plays—"The Simpleton of the Unexpected Isles" and "The Six of Calais"—and begin work on a third, "The Millionairess". Despite his contempt for Hollywood and its aesthetic values, Shaw was enthusiastic about cinema, and in the middle of the decade wrote screenplays for prospective film versions of "Pygmalion" and "Saint Joan". The latter was never made, but Shaw entrusted the rights to the former to the unknown Gabriel Pascal, who produced it at Pinewood Studios in 1938. Shaw was determined that Hollywood should have nothing to do with the film, but was powerless to prevent it from winning one Academy Award ("Oscar"); he described his award for "best-written screenplay" as an insult, coming from such a source. He became the first person to have been awarded both a Nobel Prize and an Oscar. In a 1993 study of the Oscars, Anthony Holden observes that "Pygmalion" was soon spoken of as having "lifted movie-making from illiteracy to literacy". Shaw's final plays of the 1930s were "Cymbeline Refinished" (1936), "Geneva" (1936) and "In Good King Charles's Golden Days" (1939). The first, a fantasy reworking of Shakespeare, made little impression, but the second, a satire on European dictators, attracted more notice, much of it unfavourable. In particular, Shaw's parody of Hitler as "Herr Battler" was considered mild, almost sympathetic.
The third play, an historical conversation piece first seen at Malvern, ran briefly in London in May 1940. James Agate commented that the play contained nothing to which even the most conservative audiences could take exception, and though it was long and lacking in dramatic action only "witless and idle" theatregoers would object. After their first runs none of the three plays were seen again in the West End during Shaw's lifetime. Towards the end of the decade, both Shaws began to suffer ill health. Charlotte was increasingly incapacitated by Paget's disease of bone, and he developed pernicious anaemia. His treatment, involving injections of concentrated animal liver, was successful, but this breach of his vegetarian creed distressed him and brought down condemnation from militant vegetarians. Although Shaw's works since "The Apple Cart" had been received without great enthusiasm, his earlier plays were revived in the West End throughout the Second World War, starring such actors as Edith Evans, John Gielgud, Deborah Kerr and Robert Donat. In 1944 nine Shaw plays were staged in London, including "Arms and the Man" with Ralph Richardson, Laurence Olivier, Sybil Thorndike and Margaret Leighton in the leading roles. Two touring companies took his plays all round Britain. The revival in his popularity did not tempt Shaw to write a new play, and he concentrated on prolific journalism. A second Shaw film produced by Pascal, "Major Barbara" (1941), was less successful both artistically and commercially than "Pygmalion", partly because of Pascal's insistence on directing, to which he was unsuited. Following the outbreak of war on 3 September 1939 and the rapid conquest of Poland, Shaw was accused of defeatism when, in a "New Statesman" article, he declared the war over and demanded a peace conference. Nevertheless, when he became convinced that a negotiated peace was impossible, he publicly urged the neutral United States to join the fight. The London blitz of 1940–41 led the Shaws, both in their mid-eighties, to live full-time at Ayot St Lawrence. Even there they were not immune from enemy air raids, and stayed on occasion with Nancy Astor at her country house, Cliveden. In 1943, the worst of the London bombing over, the Shaws moved back to Whitehall Court, where medical help for Charlotte was more easily arranged. Her condition deteriorated, and she died in September. Shaw's final political treatise, "Everybody's Political What's What", was published in 1944. Holroyd describes this as "a rambling narrative ... that repeats ideas he had given better elsewhere and then repeats itself". The book sold well—85,000 copies by the end of the year. After Hitler's suicide in April 1945, Shaw approved of the formal condolences offered by the Irish Taoiseach, Éamon de Valera, at the German embassy in Dublin. Shaw disapproved of the postwar trials of the defeated German leaders, as an act of self-righteousness: "We are all potential criminals". Pascal was given a third opportunity to film Shaw's work with "Caesar and Cleopatra" (1945). It cost three times its original budget and was rated "the biggest financial failure in the history of British cinema". The film was poorly received by British critics, although American reviews were friendlier. Shaw thought its lavishness nullified the drama, and he considered the film "a poor imitation of Cecil B. de Mille". 
In 1946, the year of Shaw's ninetieth birthday, he accepted the freedom of Dublin and became the first honorary freeman of the borough of St Pancras, London. In the same year the government asked Shaw informally whether he would accept the Order of Merit. He declined, believing that an author's merit could only be determined by the posthumous verdict of history. 1946 saw the publication, as "The Crime of Imprisonment", of the preface Shaw had written 20 years previously to a study of prison conditions. It was widely praised; a reviewer in the "American Journal of Public Health" considered it essential reading for any student of the American criminal justice system. Shaw continued to write into his nineties. His last plays were "Buoyant Billions" (1947), his final full-length work; "Farfetched Fables" (1948), a set of six short plays revisiting several of his earlier themes such as evolution; a comic play for puppets, "Shakes versus Shav" (1949), a ten-minute piece in which Shakespeare and Shaw trade insults; and "Why She Would Not" (1950), which Shaw described as "a little comedy", written in one week shortly before his ninety-fourth birthday. During his later years, Shaw enjoyed tending the gardens at Shaw's Corner. He died at the age of ninety-four of renal failure precipitated by injuries incurred when falling while pruning a tree. He was cremated at Golders Green Crematorium on 6 November 1950. His ashes, mixed with those of Charlotte, were scattered along footpaths and around the statue of Saint Joan in their garden. Shaw published a collected edition of his plays in 1934, comprising forty-two works. He wrote a further twelve in the remaining sixteen years of his life, mostly one-act pieces. Including eight earlier plays that he chose to omit from his published works, the total is sixty-two. Shaw's first three full-length plays dealt with social issues. He later grouped them as "Plays Unpleasant". "Widowers' Houses" (1892) concerns the landlords of slum properties, and introduces the first of Shaw's New Women—a recurring feature of later plays. "The Philanderer" (1893) develops the theme of the New Woman, draws on Ibsen, and has elements of Shaw's personal relationships, the character of Julia being based on Jenny Patterson. In a 2003 study Judith Evans describes "Mrs Warren's Profession" (1893) as "undoubtedly the most challenging" of the three Plays Unpleasant, taking Mrs Warren's profession—prostitute and, later, brothel-owner—as a metaphor for a prostituted society. Shaw followed the first trilogy with a second, published as "Plays Pleasant". "Arms and the Man" (1894) conceals beneath a mock-Ruritanian comic romance a Fabian parable contrasting impractical idealism with pragmatic socialism. The central theme of "Candida" (1894) is a woman's choice between two men; the play contrasts the outlook and aspirations of a Christian Socialist and a poetic idealist. The third of the Pleasant group, "You Never Can Tell" (1896), portrays social mobility, and the gap between generations, particularly in how they approach social relations in general and mating in particular. The "Three Plays for Puritans"—comprising "The Devil's Disciple" (1896), "Caesar and Cleopatra" (1898) and "Captain Brassbound's Conversion" (1899)—all centre on questions of empire and imperialism, a major topic of political discourse in the 1890s. The three are set, respectively, in 1770s America, Ancient Egypt, and 1890s Morocco. 
"The Gadfly", an adaptation of the popular novel by Ethel Voynich, was unfinished and unperformed. "The Man of Destiny" (1895) is a short curtain raiser about Napoleon. Shaw's major plays of the first decade of the twentieth century address individual social, political or ethical issues. "Man and Superman" (1902) stands apart from the others in both its subject and its treatment, giving Shaw's interpretation of creative evolution in a combination of drama and associated printed text. "The Admirable Bashville" (1901), a blank verse dramatisation of Shaw's novel "Cashel Byron's Profession", focuses on the imperial relationship between Britain and Africa. "John Bull's Other Island" (1904), comically depicting the prevailing relationship between Britain and Ireland, was popular at the time but fell out of the general repertoire in later years. "Major Barbara" (1905) presents ethical questions in an unconventional way, confounding expectations that in the depiction of an armaments manufacturer on the one hand and the Salvation Army on the other the moral high ground must invariably be held by the latter. "The Doctor's Dilemma" (1906), a play about medical ethics and moral choices in allocating scarce treatment, was described by Shaw as a tragedy. With a reputation for presenting characters who did not resemble real flesh and blood, he was challenged by Archer to present an on-stage death, and here did so, with a deathbed scene for the anti-hero. "Getting Married" (1908) and "Misalliance" (1909)—the latter seen by Judith Evans as a companion piece to the former—are both in what Shaw called his "disquisitionary" vein, with the emphasis on discussion of ideas rather than on dramatic events or vivid characterisation. Shaw wrote seven short plays during the decade; they are all comedies, ranging from the deliberately absurd "Passion, Poison, and Petrifaction" (1905) to the satirical "Press Cuttings" (1909). In the decade from 1910 to the aftermath of the First World War Shaw wrote four full-length plays, the third and fourth of which are among his most frequently staged works. "Fanny's First Play" (1911) continues his earlier examinations of middle-class British society from a Fabian viewpoint, with additional touches of melodrama and an epilogue in which theatre critics discuss the play. "Androcles and the Lion" (1912), which Shaw began writing as a play for children, became a study of the nature of religion and how to put Christian precepts into practice. "Pygmalion" (1912) is a Shavian study of language and speech and their importance in society and in personal relationships. To correct the impression left by the original performers that the play portrayed a romantic relationship between the two main characters Shaw rewrote the ending to make it clear that the heroine will marry another, minor character. Shaw's only full-length play from the war years is "Heartbreak House" (1917), which in his words depicts "cultured, leisured Europe before the war" drifting towards disaster. Shaw named Shakespeare ("King Lear") and Chekhov ("The Cherry Orchard") as important influences on the piece, and critics have found elements drawing on Congreve ("The Way of the World") and Ibsen ("The Master Builder"). The short plays range from genial historical drama in "The Dark Lady of the Sonnets" and "Great Catherine" (1910 and 1913) to a study of polygamy in "Overruled"; three satirical works about the war ("The Inca of Perusalem", "O'Flaherty V.C." 
and "Augustus Does His Bit", 1915–16); a piece that Shaw called "utter nonsense" ("The Music Cure", 1914) and a brief sketch about a "Bolshevik empress" ("Annajanska", 1917). "Saint Joan" (1923) drew widespread praise both for Shaw and for Sybil Thorndike, for whom he wrote the title role and who created the part in Britain. In the view of the commentator Nicholas Grene, Shaw's Joan, a "no-nonsense mystic, Protestant and nationalist before her time" is among the 20th century's classic leading female roles. "The Apple Cart" (1929) was Shaw's last popular success. He gave both that play and its successor, "Too True to Be Good" (1931), the subtitle "A political extravaganza", although the two works differ greatly in their themes; the first presents the politics of a nation (with a brief royal love-scene as an interlude) and the second, in Judith Evans's words, "is concerned with the social mores of the individual, and is nebulous." Shaw's plays of the 1930s were written in the shadow of worsening national and international political events. Once again, with "On the Rocks" (1933) and "The Simpleton of the Unexpected Isles" (1934), a political comedy with a clear plot was followed by an introspective drama. The first play portrays a British prime minister considering, but finally rejecting, the establishment of a dictatorship; the second is concerned with polygamy and eugenics and ends with the Day of Judgement. "The Millionairess" (1934) is a farcical depiction of the commercial and social affairs of a successful businesswoman. "Geneva" (1936) lampoons the feebleness of the League of Nations compared with the dictators of Europe. "In Good King Charles's Golden Days" (1939), described by Weintraub as a warm, discursive high comedy, also depicts authoritarianism, but less satirically than "Geneva". As in earlier decades, the shorter plays were generally comedies, some historical and others addressing various political and social preoccupations of the author. Ervine writes of Shaw's later work that although it was still "astonishingly vigorous and vivacious" it showed unmistakable signs of his age. "The best of his work in this period, however, was full of wisdom and the beauty of mind often displayed by old men who keep their wits about them." Shaw's collected musical criticism, published in three volumes, runs to more than 2,700 pages. It covers the British musical scene from 1876 to 1950, but the core of the collection dates from his six years as music critic of "The Star" and "The World" in the late 1880s and early 1890s. In his view music criticism should be interesting to everyone rather than just the musical élite, and he wrote for the non-specialist, avoiding technical jargon—"Mesopotamian words like 'the dominant of D major'". He was fiercely partisan in his columns, promoting the music of Wagner and decrying that of Brahms and those British composers such as Stanford and Parry whom he saw as Brahmsian. He campaigned against the prevailing fashion for performances of Handel oratorios with huge amateur choirs and inflated orchestration, calling for "a chorus of twenty capable artists". He railed against opera productions unrealistically staged or sung in languages the audience did not speak. In Shaw's view, the London theatres of the 1890s presented too many revivals of old plays and not enough new work. He campaigned against "melodrama, sentimentality, stereotypes and worn-out conventions". 
As a music critic he had frequently been able to concentrate on analysing new works, but in the theatre he was often obliged to fall back on discussing how various performers tackled well-known plays. In a study of Shaw's work as a theatre critic, E. J. West writes that Shaw "ceaselessly compared and contrasted artists in interpretation and in technique". Shaw contributed more than 150 articles as theatre critic for "The Saturday Review", in which he assessed more than 212 productions. He championed Ibsen's plays when many theatregoers regarded them as outrageous, and his 1891 book "Quintessence of Ibsenism" remained a classic throughout the twentieth century. Of contemporary dramatists writing for the West End stage he rated Oscar Wilde above the rest: "... our only thorough playwright. He plays with everything: with wit, with philosophy, with drama, with actors and audience, with the whole theatre". Shaw's collected criticisms were published as "Our Theatres in the Nineties" in 1932. Shaw maintained a provocative and frequently self-contradictory attitude to Shakespeare (whose name he insisted on spelling "Shakespear"). Many found him difficult to take seriously on the subject; Duff Cooper observed that by attacking Shakespeare, "it is Shaw who appears a ridiculous pigmy shaking his fist at a mountain." Shaw was, nevertheless, a knowledgeable Shakespearian, and in an article in which he wrote, "With the single exception of Homer, there is no eminent writer, not even Sir Walter Scott, whom I can despise so entirely as I despise Shakespear when I measure my mind against his," he also said, "But I am bound to add that I pity the man who cannot enjoy Shakespear. He has outlasted thousands of abler thinkers, and will outlast a thousand more". Shaw had two regular targets for his more extreme comments about Shakespeare: undiscriminating "Bardolaters", and actors and directors who presented insensitively cut texts in over-elaborate productions. He was continually drawn back to Shakespeare, and wrote three plays with Shakespearean themes: "The Dark Lady of the Sonnets", "Cymbeline Refinished" and "Shakes versus Shav". In a 2001 analysis of Shaw's Shakespearian criticisms, Robert Pierce concludes that Shaw, who was no academic, saw Shakespeare's plays—like all theatre—from an author's practical point of view: "Shaw helps us to get away from the Romantics' picture of Shakespeare as a titanic genius, one whose art cannot be analyzed or connected with the mundane considerations of theatrical conditions and profit and loss, or with a specific staging and cast of actors." Shaw's political and social commentaries were published variously in Fabian tracts, in essays, in two full-length books, in innumerable newspaper and journal articles and in prefaces to his plays. The majority of Shaw's Fabian tracts were published anonymously, representing the voice of the society rather than of Shaw, although the society's secretary Edward Pease later confirmed Shaw's authorship. According to Holroyd, the business of the early Fabians, mainly under the influence of Shaw, was to "alter history by rewriting it". Shaw's talent as a pamphleteer was put to immediate use in the production of the society's manifesto—after which, says Holroyd, he was never again so succinct. After the turn of the twentieth century, Shaw increasingly propagated his ideas through the medium of his plays. 
An early critic, writing in 1904, observed that Shaw's dramas provided "a pleasant means" of proselytising his socialism, adding that "Mr Shaw's views are to be sought especially in the prefaces to his plays". After loosening his ties with the Fabian movement in 1911, Shaw's writings were more personal and often provocative; his response to the furore following the issue of "Common Sense About the War" in 1914, was to prepare a sequel, "More Common Sense About the War". In this, he denounced the pacifist line espoused by Ramsay MacDonald and other socialist leaders, and proclaimed his readiness to shoot all pacifists rather than cede them power and influence. On the advice of Beatrice Webb, this pamphlet remained unpublished. "The Intelligent Woman's Guide", Shaw's main political treatise of the 1920s, attracted both admiration and criticism. MacDonald considered it the world's most important book since the Bible; Harold Laski thought its arguments outdated and lacking in concern for individual freedoms. Shaw's increasing flirtation with dictatorial methods is evident in many of his subsequent pronouncements. A "New York Times" report dated 10 December 1933 quoted a recent Fabian Society lecture in which Shaw had praised Hitler, Mussolini and Stalin: "[T]hey are trying to get something done, [and] are adopting methods by which it is possible to get something done". As late as the Second World War, in "Everybody's Political What's What", Shaw blamed the Allies' "abuse" of their 1918 victory for the rise of Hitler, and hoped that, after defeat, the Führer would escape retribution "to enjoy a comfortable retirement in Ireland or some other neutral country". These sentiments, according to the Irish philosopher-poet Thomas Duddy, "rendered much of the Shavian outlook passé and contemptible". "Creative evolution", Shaw's version of the new science of eugenics, became an increasing theme in his political writing after 1900. He introduced his theories in "The Revolutionist's Handbook" (1903), an appendix to "Man and Superman", and developed them further during the 1920s in "Back to Methuselah". A 1946 "Life" magazine article observed that Shaw had "always tended to look at people more as a biologist than as an artist". By 1933, in the preface to "On the Rocks", he was writing that "if we desire a certain type of civilization and culture we must exterminate the sort of people who do not fit into it"; critical opinion is divided on whether this was intended as irony. In an article in the American magazine "Liberty" in September 1938, Shaw included the statement: "There are many people in the world who ought to be liquidated". Many commentators assumed that such comments were intended as a joke, although in the worst possible taste. Otherwise, "Life" magazine concluded, "this silliness can be classed with his more innocent bad guesses". Shaw's fiction-writing was largely confined to the five unsuccessful novels written in the period 1879–1885. "Immaturity" (1879) is a semi-autobiographical portrayal of mid-Victorian England, Shaw's "own "David Copperfield"" according to Weintraub. "The Irrational Knot" (1880) is a critique of conventional marriage, in which Weintraub finds the characterisations lifeless, "hardly more than animated theories". Shaw was pleased with his third novel, "Love Among the Artists" (1881), feeling that it marked a turning point in his development as a thinker, although he had no more success with it than with its predecessors. 
"Cashel Byron's Profession" (1882) is, says Weintraub, an indictment of society which anticipates Shaw's first full-length play, "Mrs Warren's Profession". Shaw later explained that he had intended "An Unsocial Socialist" as the first section of a monumental depiction of the downfall of capitalism. Gareth Griffith, in a study of Shaw's political thought, sees the novel as an interesting record of conditions, both in society at large and in the nascent socialist movement of the 1880s. Shaw's only subsequent fiction of any substance was his 1932 novella "The Adventures of the Black Girl in Her Search for God", written during a visit to South Africa in 1932. The eponymous girl, intelligent, inquisitive, and converted to Christianity by insubstantial missionary teaching, sets out to find God, on a journey that after many adventures and encounters, leads her to a secular conclusion. The story, on publication, offended some Christians and was banned in Ireland by the Board of Censors. Shaw was a prolific correspondent throughout his life. His letters, edited by Dan H. Laurence, were published between 1965 and 1988. Shaw once estimated his letters would occupy twenty volumes; Laurence commented that, unedited, they would fill many more. Shaw wrote more than a quarter of a million letters, of which about ten per cent have survived; 2,653 letters are printed in Laurence's four volumes. Among Shaw's many regular correspondents were his childhood friend Edward McNulty; his theatrical colleagues (and "amitiés amoureuses") Mrs Patrick Campbell and Ellen Terry; writers including Lord Alfred Douglas, H. G. Wells and G. K. Chesterton; the boxer Gene Tunney; the nun Laurentia McLachlan; and the art expert Sydney Cockerell. In 2007 a 316-page volume consisting entirely of Shaw's letters to "The Times" was published. Shaw's diaries for 1885–1897, edited by Weintraub, were published in two volumes, with a total of 1,241 pages, in 1986. Reviewing them, the Shaw scholar Fred Crawford wrote: "Although the primary interest for Shavians is the material that supplements what we already know about Shaw's life and work, the diaries are also valuable as a historical and sociological document of English life at the end of the Victorian age." After 1897, pressure of other writing led Shaw to give up keeping a diary. Through his journalism, pamphlets and occasional longer works, Shaw wrote on many subjects. His range of interest and enquiry included vivisection, vegetarianism, religion, language, cinema and photography, on all of which he wrote and spoke copiously. Collections of his writings on these and other subjects were published, mainly after his death, together with volumes of "wit and wisdom" and general journalism. Despite the many books written about him (Holroyd counts 80 by 1939) Shaw's autobiographical output, apart from his diaries, was relatively slight. He gave interviews to newspapers—"GBS Confesses", to "The Daily Mail" in 1904 is an example—and provided sketches to would-be biographers whose work was rejected by Shaw and never published. In 1939 Shaw drew on these materials to produce "Shaw Gives Himself Away", a miscellany which, a year before his death, he revised and republished as "Sixteen Self Sketches" (there were seventeen). He made it clear to his publishers that this slim book was in no sense a full autobiography. In his lifetime Shaw professed many beliefs, often contradictory. 
This inconsistency was partly an intentional provocation—the Spanish scholar-statesman Salvador de Madariaga describes Shaw as "a pole of negative electricity set in a people of positive electricity". In one area at least Shaw was constant: in his lifelong refusal to follow normal English forms of spelling and punctuation. He favoured archaic spellings such as "shew" for "show"; he dropped the "u" in words like "honour" and "favour"; and wherever possible he rejected the apostrophe in contractions such as "won't" or "that's". In his will, Shaw ordered that, after some specified legacies, his remaining assets were to form a trust to pay for fundamental reform of the English alphabet into a phonetic version of forty letters. Though Shaw's intentions were clear, his drafting was flawed, and the courts initially ruled the intended trust void. A later out-of-court agreement provided a sum of £8,300 for spelling reform; the bulk of his fortune went to the residuary legatees—the British Museum, the Royal Academy of Dramatic Art and the National Gallery of Ireland. Most of the £8,300 went on a special phonetic edition of "Androcles and the Lion" in the Shavian alphabet, published in 1962 to a largely indifferent reception. Shaw's views on religion and Christianity were less consistent. Having in his youth proclaimed himself an atheist, in middle age he explained this as a reaction against the Old Testament image of a vengeful Jehovah. By the early twentieth century, he termed himself a "mystic", although Gary Sloan, in an essay on Shaw's beliefs, disputes his credentials as such. In 1913 Shaw declared that he was not religious "in the sectarian sense", aligning himself with Jesus as "a person of no religion". In the preface (1915) to "Androcles and the Lion", Shaw asks "Why not give Christianity a chance?" contending that Britain's social order resulted from the continuing choice of Barabbas over Christ. In a broadcast just before the Second World War, Shaw invoked the Sermon on the Mount, "a very moving exhortation, and it gives you one first-rate tip, which is to do good to those who despitefully use you and persecute you". In his will, Shaw stated that his "religious convictions and scientific views cannot at present be more specifically defined than as those of a believer in creative evolution". He requested that no one should imply that he accepted the beliefs of any specific religious organisation, and that no memorial to him should "take the form of a cross or any other instrument of torture or symbol of blood sacrifice". Shaw espoused racial equality and inter-marriage between people of different races. Despite his expressed wish to be fair to Hitler, he called anti-Semitism "the hatred of the lazy, ignorant fat-headed Gentile for the pertinacious Jew who, schooled by adversity to use his brains to the utmost, outdoes him in business". In "The Jewish Chronicle" he wrote in 1932, "In every country you can find rabid people who have a phobia against Jews, Jesuits, Armenians, Negroes, Freemasons, Irishmen, or simply foreigners as such. Political parties are not above exploiting these fears and jealousies." In 1903 Shaw joined in a controversy about vaccination against smallpox. He called vaccination "a peculiarly filthy piece of witchcraft"; in his view immunisation campaigns were a cheap and inadequate substitute for a decent programme of housing for the poor, which would, he declared, be the means of eradicating smallpox and other infectious diseases. 
Less contentiously, Shaw was keenly interested in transport; Laurence observed in 1992 a need for a published study of Shaw's interest in "bicycling, motorbikes, automobiles, and planes, climaxing in his joining the Interplanetary Society in his nineties". Shaw published articles on travel, took photographs of his journeys, and submitted notes to the Royal Automobile Club. Shaw strove throughout his adult life to be referred to as "Bernard Shaw" rather than "George Bernard Shaw", but confused matters by continuing to use his full initials—G.B.S.—as a by-line, and often signed himself "G.Bernard Shaw". He left instructions in his will that his executor (the Public Trustee) was to license publication of his works only under the name Bernard Shaw. Shaw scholars including Ervine, Judith Evans, Holroyd, Laurence and Weintraub, and many publishers have respected Shaw's preference, although the Cambridge University Press was among the exceptions with its 1988 "Cambridge Companion to George Bernard Shaw". Shaw did not found a school of dramatists as such, but Crawford asserts that today "we recognise [him] as second only to Shakespeare in the British theatrical tradition ... the proponent of the theater of ideas" who struck a death-blow to 19th-century melodrama. According to Laurence, Shaw pioneered "intelligent" theatre, in which the audience was required to think, thereby paving the way for the new breeds of twentieth-century playwrights from Galsworthy to Pinter. Crawford lists numerous playwrights whose work owes something to that of Shaw. Among those active in Shaw's lifetime he includes Noël Coward, who based his early comedy "The Young Idea" on "You Never Can Tell" and continued to draw on the older man's works in later plays. T. S. Eliot, by no means an admirer of Shaw, admitted that the epilogue of "Murder in the Cathedral", in which Becket's slayers explain their actions to the audience, might have been influenced by "Saint Joan". The critic Eric Bentley comments that Eliot's later play "The Confidential Clerk" "had all the earmarks of Shavianism ... without the merits of the real Bernard Shaw". Among more recent British dramatists, Crawford marks Tom Stoppard as "the most Shavian of contemporary playwrights"; Shaw's "serious farce" is continued in the works of Stoppard's contemporaries Alan Ayckbourn, Henry Livings and Peter Nichols. Shaw's influence crossed the Atlantic at an early stage. Bernard Dukore notes that he was successful as a dramatist in America ten years before achieving comparable success in Britain. Among many American writers professing a direct debt to Shaw, Eugene O'Neill became an admirer at the age of seventeen, after reading "The Quintessence of Ibsenism". Other Shaw-influenced American playwrights mentioned by Dukore are Elmer Rice, for whom Shaw "opened doors, turned on lights, and expanded horizons"; William Saroyan, who empathised with Shaw as "the embattled individualist against the philistines"; and S. N. Behrman, who was inspired to write for the theatre after attending a performance of "Caesar and Cleopatra": "I thought it would be agreeable to write plays like that". Assessing Shaw's reputation in a 1976 critical study, T. F. Evans described Shaw as unchallenged in his lifetime and since as the leading English-language dramatist of the (twentieth) century, and as a master of prose style. 
The following year, in a contrary assessment, the playwright John Osborne castigated "The Guardian's" theatre critic Michael Billington for referring to Shaw as "the greatest British dramatist since Shakespeare". Osborne responded that Shaw "is the most fraudulent, inept writer of Victorian melodramas ever to gull a timid critic or fool a dull public". Despite this hostility, Crawford sees the influence of Shaw in some of Osborne's plays, and concludes that though the latter's work is neither imitative nor derivative, these affinities are sufficient to classify Osborne as an inheritor of Shaw. In a 1983 study, R. J. Kaufmann suggests that Shaw was a key forerunner—"godfather, if not actually finicky paterfamilias"—of the Theatre of the Absurd. Two further aspects of Shaw's theatrical legacy are noted by Crawford: his opposition to stage censorship, which was finally ended in 1968, and his efforts which extended over many years to establish a National Theatre. Shaw's short 1910 play "The Dark Lady of the Sonnets", in which Shakespeare pleads with Queen Elizabeth I for the endowment of a state theatre, was part of this campaign. Writing in "The New Statesman" in 2012 Daniel Janes commented that Shaw's reputation had declined by the time of his 150th anniversary in 2006 but had recovered considerably. In Janes's view, the many current revivals of Shaw's major works showed the playwright's "almost unlimited relevance to our times". In the same year, Mark Lawson wrote in "The Guardian" that Shaw's moral concerns engaged present-day audiences, and made him—like his model, Ibsen—one of the most popular playwrights in contemporary British theatre. The Shaw Festival in Niagara-on-the-Lake, Ontario, Canada is the second largest repertory theatre company in North America. It produces plays by or written during the lifetime of Shaw as well as some contemporary works. The Gingold Theatrical Group, founded in 2006, presents works by Shaw and others in New York City that feature the humanitarian ideals that his work promoted. It became the first theatre group to present all of Shaw's stage work through its monthly concert series "Project Shaw". In the 1940s the author Harold Nicolson advised the National Trust not to accept the bequest of Shaw's Corner, predicting that Shaw would be totally forgotten within fifty years. In the event, Shaw's broad cultural legacy, embodied in the widely used term "Shavian", has endured and is nurtured by Shaw Societies in various parts of the world. The original society was founded in London in 1941 and survives; it organises meetings and events, and publishes a regular bulletin "The Shavian". The Shaw Society of America began in June 1950; it foundered in the 1970s but its journal, adopted by Penn State University Press, continued to be published as "Shaw: The Annual of Bernard Shaw Studies" until 2004. A second American organisation, founded in 1951 as "The Bernard Shaw Society", remains active as of 2016. More recent societies have been established in Japan and India. Besides his collected music criticism, Shaw has left a varied musical legacy, not all of it of his choosing. 
Despite his dislike of having his work adapted for the musical theatre ("my plays set themselves to a verbal music of their own") two of his plays were turned into musical comedies: "Arms and the Man" was the basis of "The Chocolate Soldier" in 1908, with music by Oscar Straus, and "Pygmalion" was adapted in 1956 as "My Fair Lady" with book and lyrics by Alan Jay Lerner and music by Frederick Loewe. Although he had a high regard for Elgar, Shaw turned down the composer's request for an opera libretto, but played a major part in persuading the BBC to commission Elgar's Third Symphony, and was the dedicatee of "The Severn Suite" (1930). The substance of Shaw's political legacy is uncertain. In 1921 Shaw's erstwhile collaborator William Archer, in a letter to the playwright, wrote: "I doubt if there is any case of a man so widely read, heard, seen, and known as yourself, who has produced so little effect on his generation." Margaret Cole, who considered Shaw the greatest writer of his age, professed never to have understood him. She thought he worked "immensely hard" at politics, but essentially, she surmises, it was for fun—"the fun of a brilliant artist". After Shaw's death, Pearson wrote: "No one since the time of Tom Paine has had so definite an influence on the social and political life of his time and country as Bernard Shaw." In its obituary tribute to Shaw, "The Times Literary Supplement" concluded: Globular cluster A globular cluster is a spherical collection of stars that orbits a galactic core as a satellite. Globular clusters are very tightly bound by gravity, which gives them their spherical shapes and relatively high stellar densities toward their centers. The name of this category of star cluster is derived from the Latin "globulus"—a small sphere. A globular cluster is sometimes known more simply as a "globular". Globular clusters are found in the halo of a galaxy and contain considerably more stars and are much older than the less dense open clusters, which are found in the disk of a galaxy. Globular clusters are fairly common; there are about 150 to 158 currently known globular clusters in the Milky Way, with perhaps 10 to 20 more still undiscovered. Larger galaxies can have more: Andromeda Galaxy, for instance, may have as many as 500. Some giant elliptical galaxies (particularly those at the centers of galaxy clusters), such as M87, have as many as 13,000 globular clusters. Every galaxy of sufficient mass in the Local Group has an associated group of globular clusters, and almost every large galaxy surveyed has been found to possess a system of globular clusters. The Sagittarius Dwarf galaxy and the disputed Canis Major Dwarf galaxy appear to be in the process of donating their associated globular clusters (such as Palomar 12) to the Milky Way. This demonstrates how many of this galaxy's globular clusters might have been acquired in the past. Although it appears that globular clusters contain some of the first stars to be produced in the galaxy, their origins and their role in galactic evolution are still unclear. It does appear clear that globular clusters are significantly different from dwarf elliptical galaxies and were formed as part of the star formation of the parent galaxy rather than as a separate galaxy. The first known globular cluster, now called M22, was discovered in 1665 by Abraham Ihle, a German amateur astronomer. 
However, given the small aperture of early telescopes, individual stars within a globular cluster were not resolved until Charles Messier observed M4 in 1764. Abbé Lacaille listed NGC 104, NGC 4833, M55, M69, and NGC 6397 in his 1751–52 catalogue. The "M" before a number refers to Charles Messier's catalogue, while "NGC" is from the New General Catalogue by John Dreyer. When William Herschel began his comprehensive survey of the sky using large telescopes in 1782 there were 34 known globular clusters. Herschel discovered another 36 himself and was the first to resolve virtually all of them into stars. He coined the term "globular cluster" in his "Catalogue of a Second Thousand New Nebulae and Clusters of Stars" published in 1789. The number of globular clusters discovered continued to increase, reaching 83 in 1915, 93 in 1930 and 97 by 1947. A total of 152 globular clusters have now been discovered in the Milky Way galaxy, out of an estimated total of 180 ± 20. These additional, undiscovered globular clusters are believed to be hidden behind the gas and dust of the Milky Way. In 1914, Harlow Shapley began a series of studies of globular clusters, published in about 40 scientific papers. He examined the RR Lyrae variables in the clusters (which he assumed were Cepheid variables) and used their period–luminosity relationship for distance estimates. Later, it was found that RR Lyrae variables are fainter than Cepheid variables, which caused Shapley to overestimate the distances of the clusters. Of the globular clusters within the Milky Way, the majority are found in a halo around the galactic core, and the large majority are seen in the region of the sky centered on the galactic core. In 1918, this strongly asymmetrical distribution was used by Shapley to make a determination of the overall dimensions of the galaxy. By assuming a roughly spherical distribution of globular clusters around the galaxy's center, he used the positions of the clusters to estimate the position of the Sun relative to the galactic center. While his distance estimate was in significant error (although within the same order of magnitude as the currently accepted value), it did demonstrate that the dimensions of the galaxy were much greater than had been previously thought. His error was due to interstellar dust in the Milky Way, which absorbs and diminishes the amount of light from distant objects, such as globular clusters, that reaches the Earth, thus making them appear to be more distant than they are. Shapley's measurements also indicated that the Sun is relatively far from the center of the galaxy, contrary to what had previously been inferred from the apparently nearly even distribution of ordinary stars. In reality, most ordinary stars lie within the galaxy's disk and those stars that lie in the direction of the galactic centre and beyond are thus obscured by gas and dust, whereas globular clusters lie outside the disk and can be seen at much further distances. Shapley was subsequently assisted in his studies of clusters by Henrietta Swope and Helen Battles Sawyer (later Hogg). In 1927–29, Shapley and Sawyer categorized clusters according to the degree of concentration each system has toward its core. The most concentrated clusters were identified as Class I, with successively diminishing concentrations ranging to Class XII. 
This became known as the Shapley–Sawyer Concentration Class (it is sometimes given with numbers [Class 1–12] rather than Roman numerals). In 2015, a new type of globular cluster was proposed on the basis of observational data, the dark globular clusters. The formation of globular clusters is a poorly understood phenomenon, and it remains uncertain whether the stars in a globular cluster form in a single generation or are spawned across multiple generations over a period of several hundred million years. In many globular clusters, most of the stars are at approximately the same stage in stellar evolution, suggesting that they formed at about the same time. However, the star formation history varies from cluster to cluster, with some clusters showing distinct populations of stars. An example of this is the globular clusters in the Large Magellanic Cloud (LMC) that exhibit a bimodal population. During their youth, these LMC clusters may have encountered giant molecular clouds that triggered a second round of star formation. This star-forming period is relatively brief, compared to the age of many globular clusters. It has also been proposed that the reason for this multiplicity in stellar populations could have a dynamical origin. In the Antennae galaxies, for example, the Hubble Space Telescope has observed clusters of clusters, regions in the galaxy that span hundreds of parsecs, where many of the clusters will eventually collide and merge. Many of them present a significant range in ages, hence possibly metallicities, and their merger could plausibly lead to clusters with a bimodal or even multiple distribution of populations. Observations of globular clusters show that these stellar formations arise primarily in regions of efficient star formation, and where the interstellar medium is at a higher density than in normal star-forming regions. Globular cluster formation is prevalent in starburst regions and in interacting galaxies. Research indicates a correlation between the mass of a central supermassive black hole (SMBH) and the extent of the globular cluster systems of elliptical and lenticular galaxies. The mass of the SMBH in such a galaxy is often close to the combined mass of the galaxy's globular clusters. No known globular clusters display active star formation, which is consistent with the view that globular clusters are typically the oldest objects in the Galaxy, and were among the first collections of stars to form. Very large regions of star formation known as super star clusters, such as Westerlund 1 in the Milky Way, may be the precursors of globular clusters. Globular clusters are generally composed of hundreds of thousands of low-metal, old stars. The type of stars found in a globular cluster are similar to those in the bulge of a spiral galaxy but confined to a volume of only a few million cubic parsecs. They are free of gas and dust, and it is presumed that all of the gas and dust was long ago turned into stars. Globular clusters can contain a high density of stars; on average about 0.4 stars per cubic parsec, increasing to 100 or 1000 stars per cubic parsec in the core of the cluster. The typical distance between stars in a globular cluster is about 1 light year, but at its core, the separation is comparable to the size of the Solar System (100 to 1000 times closer than stars near the Solar System). Globular clusters are not thought to be favorable locations for the survival of planetary systems. 
Planetary orbits are dynamically unstable within the cores of dense clusters because of the perturbations of passing stars. A planet orbiting at 1 astronomical unit around a star that is within the core of a dense cluster such as 47 Tucanae would only survive on the order of 10^8 years. There is a planetary system orbiting a pulsar (PSR B1620−26) that belongs to the globular cluster M4, but these planets likely formed after the event that created the pulsar. Some globular clusters, like Omega Centauri in the Milky Way and G1 in M31, are extraordinarily massive, with several million solar masses and multiple stellar populations. Both can be regarded as evidence that supermassive globular clusters are in fact the cores of dwarf galaxies that are consumed by the larger galaxies. About a quarter of the globular cluster population in the Milky Way may have been accreted along with their host dwarf galaxy. Several globular clusters (like M15) have extremely massive cores which may harbor black holes, although simulations suggest that a less massive black hole or central concentration of neutron stars or massive white dwarfs explain the observations equally well. Globular clusters normally consist of Population II stars, which have a low proportion of elements other than hydrogen and helium when compared to Population I stars such as the Sun. Astronomers refer to these heavier elements as metals and to the proportions of these elements as the metallicity. These elements are produced by stellar nucleosynthesis and then are recycled into the interstellar medium, where they enter the next generation of stars. Hence the proportion of metals can be an indication of the age of a star, with older stars typically having a lower metallicity. The Dutch astronomer Pieter Oosterhoff noticed that there appear to be two populations of globular clusters, which became known as "Oosterhoff groups". The second group has a slightly longer period of RR Lyrae variable stars. Both groups have weak lines of metallic elements. But the lines in the stars of Oosterhoff type I (OoI) clusters are not quite as weak as those in type II (OoII). Hence type I are referred to as "metal-rich" (for example, Terzan 7), while type II are "metal-poor" (for example, ESO 280-SC06). These two populations have been observed in many galaxies, especially massive elliptical galaxies. Both groups are nearly as old as the universe itself and are of similar ages, but differ in their metal abundances. Many scenarios have been suggested to explain these subpopulations, including violent gas-rich galaxy mergers, the accretion of dwarf galaxies, and multiple phases of star formation in a single galaxy. In the Milky Way, the metal-poor clusters are associated with the halo and the metal-rich clusters with the bulge. In the Milky Way it has been discovered that the large majority of the low-metallicity clusters are aligned along a plane in the outer part of the galaxy's halo. This result argues in favor of the view that type II clusters in the galaxy were captured from a satellite galaxy, rather than being the oldest members of the Milky Way's globular cluster system as had been previously thought. The difference between the two cluster types would then be explained by a time delay between when the two galaxies formed their cluster systems. Globular clusters have a very high star density, and therefore close interactions and near-collisions of stars occur relatively often. 
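As a rough order-of-magnitude aside (a standard kinetic-theory estimate, not a figure taken from the text, and the symbols n, Σ and v are generic): the rate at which a given star undergoes close passes scales roughly as Γ ∼ n Σ v, where n is the local number density of stars, Σ the effective cross-section of an encounter, and v a typical relative velocity. Because the density in a cluster core (hundreds to a thousand stars per cubic parsec, as quoted above) exceeds the density near the Sun (of order 0.1 stars per cubic parsec) by three to four orders of magnitude, encounters that are vanishingly rare in the solar neighbourhood become commonplace within a cluster's lifetime.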
Due to these chance encounters, some exotic classes of stars, such as blue stragglers, millisecond pulsars and low-mass X-ray binaries, are much more common in globular clusters. A blue straggler is formed from the merger of two stars, possibly as a result of an encounter with a binary system. The resulting star has a higher temperature than comparable stars in the cluster with the same luminosity, and thus differs from the main sequence stars formed at the beginning of the cluster. Astronomers have searched for black holes within globular clusters since the 1970s. The resolution requirements for this task, however, are exacting, and it is only with the Hubble Space Telescope that the first claimed discoveries have been made. In independent programs, intermediate-mass black holes have been suggested to exist, on the basis of HST observations, in the globular cluster M15 and in the Mayall II cluster in the Andromeda Galaxy. Both X-ray and radio emissions from Mayall II appear to be consistent with an intermediate-mass black hole. They are the first black holes discovered that were intermediate in mass between the conventional stellar-mass black hole and the supermassive black holes discovered at the cores of galaxies. The mass of these intermediate-mass black holes is proportional to the mass of the clusters, following a pattern previously discovered between supermassive black holes and their surrounding galaxies. Claims of intermediate-mass black holes have been met with some skepticism. The heaviest objects in globular clusters are expected to migrate to the cluster center due to mass segregation. As pointed out in two papers by Holger Baumgardt and collaborators, the mass-to-light ratio should rise sharply towards the center of the cluster, even without a black hole, in both M15 and Mayall II. The Hertzsprung-Russell diagram (HR diagram) is a graph of a large sample of stars that plots their visual absolute magnitude against their color index. The color index, B−V, is the difference between the magnitude of the star in blue light, or B, and the magnitude in visual light (green-yellow), or V. Large positive values indicate a red star with a cool surface temperature, while negative values imply a blue star with a hotter surface. When the stars near the Sun are plotted on an HR diagram, it displays a distribution of stars of various masses, ages, and compositions. Many of the stars lie relatively close to a sloping curve with increasing absolute magnitude as the stars are hotter, known as main-sequence stars. However, the diagram also typically includes stars that are in later stages of their evolution and have wandered away from this main-sequence curve. As all the stars of a globular cluster are at approximately the same distance from us, their absolute magnitudes differ from their visual magnitude by about the same amount. The main-sequence stars in the globular cluster will fall along a line that is believed to be comparable to similar stars in the solar neighborhood. The accuracy of this assumption is confirmed by comparable results obtained by comparing the magnitudes of nearby short-period variables, such as RR Lyrae stars and Cepheid variables, with those in the cluster. By matching up these curves on the HR diagram the absolute magnitude of main-sequence stars in the cluster can also be determined. This in turn provides a distance estimate to the cluster, based on the visual magnitude of the stars. 
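As a minimal sketch of that last step, assuming only the standard distance-modulus relation (the function name, the optional extinction term and the example numbers below are illustrative, not values from the text):

    def cluster_distance_pc(apparent_mag, absolute_mag, extinction_mag=0.0):
        # Invert m - M = 5*log10(d / 10 pc) + A, where A is any interstellar
        # extinction along the line of sight (the effect that skewed Shapley's
        # early estimates); with A = 0 this is the bare distance modulus.
        modulus = apparent_mag - absolute_mag - extinction_mag
        return 10 ** (modulus / 5.0 + 1.0)

    # Main-sequence fitting that yields m - M = 15 places a cluster at 10,000 pc (10 kpc).
    print(cluster_distance_pc(apparent_mag=18.5, absolute_mag=3.5))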
The difference between the apparent and absolute magnitude, the "distance modulus", yields this estimate of the distance. When the stars of a particular globular cluster are plotted on an HR diagram, in many cases nearly all of the stars fall upon a relatively well-defined curve. This differs from the HR diagram of stars near the Sun, which lumps together stars of differing ages and origins. The shape of the curve for a globular cluster is characteristic of a grouping of stars that were formed at approximately the same time and from the same materials, differing only in their initial mass. As the position of each star in the HR diagram varies with age, the shape of the curve for a globular cluster can be used to measure the overall age of the star population. However, the above-mentioned historic process of determining the age and distance to globular clusters is not as robust as first thought, since the morphology and luminosity of globular cluster stars in color-magnitude diagrams are influenced by numerous parameters, many of which are still being actively researched. Certain clusters even display populations that are absent from other globular clusters (e.g., blue hook stars), or feature multiple populations. The historical paradigm that all globular clusters consist of stars born at exactly the same time, or sharing exactly the same chemical abundance, has likewise been overturned (e.g., NGC 2808). Further, the morphology of the cluster stars in a color-magnitude diagram, including the brightnesses of distance indicators such as RR Lyrae variable members, can be influenced by observational biases. One such effect is called blending, and it arises because the cores of globular clusters are so dense that in low-resolution observations multiple (unresolved) stars may appear as a single target. Thus the brightness measured for that seemingly single star (e.g., an RR Lyrae variable) is systematically too bright, since the unresolved stars contribute to the measured brightness. Consequently, the computed distance is wrong, and more importantly, certain researchers have argued that the blending effect can introduce a systematic uncertainty into the cosmic distance ladder, and may bias the estimated age of the Universe and the Hubble constant. The most massive main-sequence stars will also have the highest absolute magnitude, and these will be the first to evolve into the giant star stage. As the cluster ages, stars of successively lower masses will also enter the giant star stage. Thus the age of a single population cluster can be measured by looking for the stars that are just beginning to enter the giant star stage. This forms a "knee" in the HR diagram, bending to the upper right from the main-sequence line. The absolute magnitude at this bend is directly a function of the age of the globular cluster, so an age scale can be plotted on an axis parallel to the magnitude. In addition, globular clusters can be dated by looking at the temperatures of the coolest white dwarfs. Typical results for globular clusters are that they may be as old as 12.7 billion years. This is in contrast to open clusters which are only tens of millions of years old. The ages of globular clusters place a lower limit on the age of the entire universe. This lower limit has been a significant constraint in cosmology. Historically, astronomers were faced with age estimates of globular clusters that appeared older than cosmological models would allow. 
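To make the turnoff-dating idea concrete, here is a deliberately crude sketch using the textbook main-sequence-lifetime scaling t ∝ M^(-2.5) (an assumption for illustration only; actual cluster ages are obtained by fitting calibrated stellar-evolution isochrones, and the 10-billion-year solar normalisation and the 0.8-solar-mass turnoff below are illustrative numbers, not values from the text):

    def turnoff_age_gyr(turnoff_mass_msun, solar_lifetime_gyr=10.0):
        # Crude scaling: luminosity L ~ M**3.5 and fuel ~ M give a main-sequence
        # lifetime t ~ t_sun * (M / M_sun)**-2.5 for the stars now leaving the
        # main sequence at the cluster's "knee".
        return solar_lifetime_gyr * turnoff_mass_msun ** -2.5

    # A turnoff near 0.8 solar masses gives roughly 17 Gyr on this scaling,
    # older than the Universe itself, which is exactly the kind of apparent
    # conflict with cosmology described above.
    print(round(turnoff_age_gyr(0.8), 1))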
However, better measurements of cosmological parameters through deep sky surveys and satellites such as the Hubble Space Telescope appear to have resolved this issue. Evolutionary studies of globular clusters can also be used to determine changes due to the starting composition of the gas and dust that formed the cluster. That is, the evolutionary tracks change with changes in the abundance of heavy elements. The data obtained from studies of globular clusters are then used to study the evolution of the Milky Way as a whole. In globular clusters a few stars known as blue stragglers are observed, apparently continuing the main sequence in the direction of brighter, bluer stars. The origins of these stars are still unclear, but most models suggest that these stars are the result of mass transfer in multiple star systems. In contrast to open clusters, most globular clusters remain gravitationally bound for time periods comparable to the life spans of the majority of their stars. However, a possible exception is when strong tidal interactions with other large masses result in the dispersal of the stars. After they are formed, the stars in the globular cluster begin to interact gravitationally with each other. As a result, the velocity vectors of the stars are steadily modified, and the stars lose any history of their original velocity. The characteristic interval for this to occur is the relaxation time. This is related to the characteristic length of time a star needs to cross the cluster as well as the number of stellar masses in the system. The value of the relaxation time varies by cluster, but the mean value is on the order of 10^9 years. Although globular clusters generally appear spherical in form, ellipticities can occur due to tidal interactions. Clusters within the Milky Way and the Andromeda Galaxy are typically oblate spheroids in shape, while those in the Large Magellanic Cloud are more elliptical. Astronomers characterize the morphology of a globular cluster by means of standard radii. These are the core radius ("r_c"), the half-light radius ("r_h"), and the tidal (or Jacobi) radius ("r_t"). The overall luminosity of the cluster steadily decreases with distance from the core, and the core radius is the distance at which the apparent surface luminosity has dropped by half. A comparable quantity is the half-light radius, or the distance from the core within which half the total luminosity from the cluster is received. This is typically larger than the core radius. Note that the half-light radius includes stars in the outer part of the cluster that happen to lie along the line of sight, so theorists will also use the half-mass radius ("r_m")—the radius from the core that contains half the total mass of the cluster. When the half-mass radius of a cluster is small relative to the overall size, it has a dense core. An example of this is Messier 3 (M3), which has an overall visible dimension of about 18 arc minutes, but a half-mass radius of only 1.12 arc minutes. Almost all globular clusters have a half-light radius of less than 10 pc, although there are well-established globular clusters with very large radii (e.g. NGC 2419 (R_h = 18 pc) and Palomar 14 (R_h = 25 pc)). Finally, the tidal radius, or Hill sphere, is the distance from the center of the globular cluster at which the external gravitation of the galaxy has more influence over the stars in the cluster than does the cluster itself. This is the distance at which the individual stars belonging to a cluster can be separated away by the galaxy. 
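A commonly quoted approximation for this tidal (Jacobi) radius, not given in the text and stated here only as an illustration for a cluster on a roughly circular orbit, is r_t ≈ R_g (m_cl / 3 M_g)^(1/3), where R_g is the cluster's distance from the galactic centre, m_cl the cluster's mass, and M_g the galactic mass enclosed within R_g. The cube-root dependence on the mass ratio is why even very massive clusters have tidal radii of no more than roughly a hundred parsecs.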
The tidal radius of M3 is about 40 arc minutes, or about 113 pc, at a distance of 10.4 kpc. When the luminosity of a given globular cluster is measured as a function of distance from the core, most clusters in the Milky Way show a steady increase in luminosity as this distance decreases, up to a certain distance from the core, where the luminosity levels off. Typically this distance is about 1–2 parsecs from the core. However about 20% of the globular clusters have undergone a process termed "core collapse". In this type of cluster, the luminosity continues to increase steadily all the way to the core region. An example of a core-collapsed globular is M15. Core collapse is thought to occur when the more massive stars in a globular cluster encounter their less massive companions. Over time, dynamic processes cause individual stars to migrate from the center of the cluster to the outside. This results in a net loss of kinetic energy from the core region, leading the remaining stars grouped in the core region to occupy a more compact volume. When this gravothermal instability occurs, the central region of the cluster becomes densely crowded with stars and the surface brightness of the cluster forms a power-law cusp. (Note that a core collapse is not the only mechanism that can cause such a luminosity distribution; a massive black hole at the core can also result in a luminosity cusp.) Over a lengthy period of time this leads to a concentration of massive stars near the core, a phenomenon called mass segregation. The dynamical heating effect of binary star systems works to prevent an initial core collapse of the cluster. When a star passes near a binary system, the orbit of the latter pair tends to contract, releasing energy. Only after the primordial supply of binaries is exhausted due to interactions can a deeper core collapse proceed. In contrast, the effect of tidal shocks as a globular cluster repeatedly passes through the plane of a spiral galaxy tends to significantly accelerate core collapse. Core collapse may be divided into three phases. During a globular cluster's adolescence, the process of core collapse begins with stars near the core. However, the interactions between binary star systems prevent further collapse as the cluster approaches middle age. Finally, the central binaries are either disrupted or ejected, resulting in a tighter concentration at the core. The interaction of stars in the collapsed core region causes tight binary systems to form. As other stars interact with these tight binaries, they increase the energy at the core, which causes the cluster to re-expand. As the mean time for a core collapse is typically less than the age of the galaxy, many of a galaxy's globular clusters may have passed through a core collapse stage, then re-expanded. The Hubble Space Telescope has been used to provide convincing observational evidence of this stellar mass-sorting process in globular clusters. Heavier stars slow down and crowd at the cluster's core, while lighter stars pick up speed and tend to spend more time at the cluster's periphery. The globular star cluster 47 Tucanae, which is made up of about 1 million stars, is one of the densest globular clusters in the Southern Hemisphere. This cluster was subjected to an intensive photographic survey, which allowed astronomers to track the motion of its stars. Precise velocities were obtained for nearly 15,000 stars in this cluster. 
A 2008 study by John Fregeau of 13 globular clusters in the Milky Way shows that three of them have an unusually large number of X-ray sources, or X-ray binaries, suggesting the clusters are middle-aged. Previously, these globular clusters had been classified as being in old age because they had very tight concentrations of stars in their centers, another test of age used by astronomers. The implication is that most globular clusters, including the other ten studied by Fregeau, are not in middle age as previously thought, but are actually in 'adolescence'. The overall luminosities of the globular clusters within the Milky Way and the Andromeda Galaxy can be modeled by means of a Gaussian curve. This Gaussian can be represented by means of an average magnitude M and a variance σ. This distribution of globular cluster luminosities is called the Globular Cluster Luminosity Function (GCLF). (For the Milky Way, M = , σ = magnitudes.) The GCLF has also been used as a "standard candle" for measuring the distance to other galaxies, under the assumption that the globular clusters in remote galaxies follow the same principles as they do in the Milky Way. Computing the interactions between the stars within a globular cluster requires solving what is termed the N-body problem. That is, each of the stars within the cluster continually interacts with the other "N"−1 stars, where "N" is the total number of stars in the cluster. The naive CPU computational "cost" for a dynamic simulation increases in proportion to the square of "N" (each of the "N" objects must interact pairwise with each of the other objects), so the potential computing requirements to accurately simulate such a cluster can be enormous. An efficient method of mathematically simulating the N-body dynamics of a globular cluster is to subdivide it into small volumes and velocity ranges, and to use probabilities to describe the locations of the stars. The motions are then described by means of a formula called the Fokker–Planck equation. This can be solved in a simplified form, or by running Monte Carlo simulations using random values. However, the simulation becomes more difficult when the effects of binaries and the interaction with external gravitation forces (such as from the Milky Way galaxy) must also be included. The results of N-body simulations have shown that the stars can follow unusual paths through the cluster, often forming loops and often falling more directly toward the core than would a single star orbiting a central mass. In addition, due to interactions with other stars that result in an increase in velocity, some of the stars gain sufficient energy to escape the cluster. Over long periods of time this will result in a dissipation of the cluster, a process termed evaporation. The typical time scale for the evaporation of a globular cluster is 10¹⁰ years. In 2010 it became possible to directly compute, star by star, N-body simulations of a globular cluster over the course of its lifetime. Binary stars form a significant portion of the total population of stellar systems, with up to half of all stars occurring in binary systems. Numerical simulations of globular clusters have demonstrated that binaries can hinder and even reverse the process of core collapse in globular clusters. When a star in a cluster has a gravitational encounter with a binary system, a possible result is that the binary becomes more tightly bound and kinetic energy is added to the solitary star. 
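To make the scaling argument concrete, below is a minimal, illustrative sketch of the naive direct-summation approach, in which every star is paired with every other star and the work therefore grows with the square of "N". The units, softening length and star count are assumptions chosen for the example; practical cluster codes use the Fokker–Planck or Monte Carlo methods described above, or tree and hybrid schemes, precisely to avoid this cost.

```python
import numpy as np

# Naive direct-summation of gravitational accelerations: each star is
# paired with every other star, so the cost scales as N squared.
# Units are arbitrary (G = 1); all values are illustrative assumptions.
def pairwise_accelerations(positions, masses, softening=1e-3):
    n = len(masses)
    acc = np.zeros_like(positions)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dr = positions[j] - positions[i]
            dist_sq = dr @ dr + softening ** 2      # softening avoids divergent forces
            acc[i] += masses[j] * dr / dist_sq ** 1.5
    return acc

# Example: 500 equal-mass stars with random positions.
rng = np.random.default_rng(0)
pos = rng.normal(size=(500, 3))
mass = np.full(500, 1.0 / 500)
acc = pairwise_accelerations(pos, mass)   # roughly 250,000 pair evaluations for N = 500
```

Doubling the number of stars quadruples the number of pair evaluations, which is why direct, star-by-star simulation of a full cluster over its lifetime only became feasible around 2010, as noted above.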
When the massive stars in the cluster are sped up by this process, it reduces the contraction at the core and limits core collapse. The ultimate fate of a globular cluster must be either to accrete stars at its core, causing its steady contraction, or to gradually shed stars from its outer layers. The distinction between cluster types is not always clear-cut, and objects have been found that blur the lines between the categories. For example, BH 176 in the southern part of the Milky Way has properties of both an open and a globular cluster. In 2005, astronomers discovered a completely new type of star cluster in the Andromeda Galaxy, which is, in several ways, very similar to globular clusters. The new-found clusters contain hundreds of thousands of stars, a similar number to that found in globular clusters. The clusters share other characteristics with globular clusters such as stellar populations and metallicity. What distinguishes them from the globular clusters is that they are much larger – several hundred light-years across – and hundreds of times less dense. The distances between the stars are, therefore, much greater within the newly discovered extended clusters. Parametrically, these clusters lie somewhere between a globular cluster and a dwarf spheroidal galaxy. How these clusters are formed is not yet known, but their formation might well be related to that of globular clusters. Why M31 has such clusters, while the Milky Way does not, is not yet known. It is also unknown if any other galaxy contains these types of clusters, but it is very unlikely that M31 is the sole galaxy with extended clusters. When a globular cluster has a close encounter with a large mass, such as the core region of a galaxy, it undergoes a tidal interaction. The difference between the pull of gravity on the part of the cluster nearest the mass and the pull on the furthest part of the cluster results in a tidal force. A "tidal shock" occurs whenever the orbit of a cluster takes it through the plane of a galaxy. As a result of a tidal shock, streams of stars can be pulled away from the cluster halo, leaving only the core part of the cluster. These tidal interaction effects create tails of stars that can extend up to several degrees of arc away from the cluster. These tails typically both precede and follow the cluster along its orbit. The tails can accumulate significant portions of the original mass of the cluster, and can form clumplike features. The globular cluster Palomar 5, for example, is near the apogalactic point of its orbit after passing through the Milky Way. Streams of stars extend outward toward the front and rear of the orbital path of this cluster, stretching out to distances of 13,000 light-years. Tidal interactions have stripped away much of the mass from Palomar 5, and further interactions as it passes through the galactic core are expected to transform it into a long stream of stars orbiting the Milky Way halo. Tidal interactions add kinetic energy to a globular cluster, dramatically increasing the evaporation rate and shrinking the size of the cluster. Not only does tidal shock strip off the outer stars from a globular cluster, but the increased evaporation accelerates the process of core collapse. The same physical mechanism may be at work in dwarf spheroidal galaxies such as the Sagittarius Dwarf, which appears to be undergoing tidal disruption due to its proximity to the Milky Way. There are many globular clusters with a retrograde orbit around the Milky Way Galaxy. 
A hypervelocity globular cluster was discovered around Messier 87 in 2014, having a velocity in excess of the escape velocity of M87. Astronomers are searching for exoplanets around stars in globular star clusters. In 2000, the results of a search for giant planets in the globular cluster 47 Tucanae were announced. The lack of any successful discoveries suggests that the abundance of elements (other than hydrogen or helium) necessary to build these planets may need to be at least 40% of the abundance in the Sun. Terrestrial planets are built from heavier elements such as silicon, iron and magnesium. The very low abundance of these elements in globular clusters means that the member stars have a far lower likelihood of hosting Earth-mass planets, when compared to stars in the neighborhood of the Sun. Hence the halo region of the Milky Way galaxy, including globular cluster members, is unlikely to host habitable terrestrial planets. In spite of the lower likelihood of giant planet formation, just such an object has been found in the globular cluster Messier 4. This planet was detected orbiting a pulsar in the binary star system PSR B1620-26. The eccentric and highly inclined orbit of the planet suggests it may have been formed around another star in the cluster, then was later "exchanged" into its current arrangement. Close encounters between stars in a globular cluster can disrupt planetary systems, some of which break loose to become free-floating planets. Even closely orbiting planets can become disrupted, potentially leading to orbital decay and an increase in orbital eccentricity and tidal effects. George Fox George Fox (July 1624 – 13 January 1691) was an English Dissenter and a founder of the Religious Society of Friends, commonly known as the Quakers or Friends. The son of a Leicestershire weaver, Fox lived in a time of great social upheaval and war. He rebelled against the religious and political authorities by proposing an unusual and uncompromising approach to the Christian faith. He travelled throughout Britain as a dissenting preacher, for which he was often persecuted by the authorities who disapproved of his beliefs. In 1669, Fox married Margaret Fell, the widow of one of his wealthier supporters, Thomas Fell; she was a leading Friend. His ministry expanded and he undertook tours of North America and the Low Countries. Between these tours, he was imprisoned for more than a year. He spent the final decade of his life working in London to organize the expanding Quaker movement. While his movement attracted disdain from some, others such as William Penn and Oliver Cromwell viewed Fox with respect. George Fox was born in the strongly Puritan village of Drayton-in-the-Clay, Leicestershire, England (now known as Fenny Drayton), 15 miles (24 km) west-south-west of Leicester. He was the eldest of four children of Christopher Fox, a successful weaver, called "Righteous Christer" by his neighbours, and his wife, Mary Lago. Christopher Fox was a churchwarden and was relatively wealthy; when he died in the late 1650s he left his son a substantial legacy. From childhood Fox was of a serious, religious disposition. There is no record of any formal schooling but he learned to read and write. "When I came to eleven years of age", he said, "I knew pureness and righteousness; for, while I was a child, I was taught how to walk to be kept pure. The Lord taught me to be faithful, in all things, and to act faithfully two ways; viz., inwardly to God, and outwardly to man." 
Known as an honest person, he also proclaimed, "The Lord taught me to be faithful in all things...and to keep to Yea and Nay in all things." As he grew up, his relatives "thought to have made me a priest" but he was instead apprenticed to a local shoemaker and grazier, George Gee of Mancetter. This suited his contemplative temperament and he became well known for his diligence among the wool traders who had dealings with his master. A constant obsession for Fox was the pursuit of "simplicity" in life, meaning humility and the abandonment of luxury, and the short time he spent as a shepherd was important to the formation of this view. Toward the end of his life he wrote a letter for general circulation pointing out that Abel, Noah, Abraham, Jacob, Moses and David were all keepers of sheep or cattle and therefore that a learned education should not be seen as a necessary qualification for ministry. George Fox knew people who were "professors" (followers of the standard religion), but by the age of 19 he had begun to look down on their behaviour, in particular drinking alcohol. He records that, in prayer one night after leaving two acquaintances at a drinking session, he heard an inner voice saying, "Thou seest how young people go together into vanity, and old people into the earth; thou must forsake all, young and old, keep out of all, and be as a stranger unto all." Driven by his "inner voice", Fox left Drayton-in-the-Clay in September 1643, moving toward London in a state of mental torment and confusion. The English Civil War had begun and troops were stationed in many towns through which he passed. In Barnet, he was torn by depression (perhaps from the temptations of the resort town near London). He alternately shut himself in his room for days at a time or went out alone into the countryside. After almost a year he returned to Drayton, where he engaged Nathaniel Stephens, the clergyman of his hometown, in long discussions on religious matters. Stephens considered Fox a gifted young man but the two disagreed on so many issues that he later called Fox mad and spoke against him. Over the next few years Fox continued to travel around the country as his particular religious beliefs took shape. At times he actively sought the company of clergy but found no comfort from them as they seemed unable to help with the matters troubling him. One, in Warwickshire, advised him to take tobacco (which Fox disliked) and sing psalms; another, in Coventry, lost his temper when Fox accidentally stood on a flower in his garden; a third suggested bloodletting. He became fascinated by the Bible, which he studied assiduously. He hoped to find among the "English Dissenters" a spiritual understanding absent from the established church but fell out with one group, for example, because he maintained that women had souls: He thought intensely about the Temptation of Christ, which he compared to his own spiritual condition, but drew strength from his conviction that God would support and preserve him. In prayer and meditation he came to a greater understanding of the nature of his faith and what it required from him; this process he called "opening". He also came to what he deemed a deep inner understanding of standard Christian beliefs. Among his ideas were: In 1647 Fox began to preach publicly: in market-places, fields, appointed meetings of various kinds or even sometimes "steeple-houses" after the service. His powerful preaching began to attract a small following. 
It is not clear at what point the Society of Friends was formed but there was certainly a group of people who often travelled together. At first, they called themselves "Children of the Light" or "Friends of the Truth", and later simply "Friends". Fox seems to have initially had no desire to found a sect but only to proclaim what he saw as the pure and genuine principles of Christianity in their original simplicity, though he afterward showed great prowess as a religious legislator in the organization he gave to the new society. There were a great many rival Christian denominations holding very diverse opinions; the atmosphere of dispute and confusion gave Fox an opportunity to put forward his own beliefs through his personal sermons. Fox's preaching was grounded in scripture but was mainly effective because of the intense personal experience he was able to project. He was scathing about immorality, deceit and the exacting of tithes and urged his listeners to lead lives without sin, avoiding the Ranter's antinomian view that a believer becomes automatically sinless. By 1651 he had gathered other talented preachers around him and continued to roam the country despite a harsh reception from some listeners, who would whip and beat them to drive them away. As his reputation spread, his words were not welcomed by all. As an uncompromising preacher, he hurled disputation and contradiction to the faces of his opponents. The worship of Friends in the form of silent waiting punctuated by individuals speaking as the Spirit moved them seems to have been well-established by this time, though it is not recorded how this came to be; Richard Bauman asserts that "speaking was an important feature of the meeting for worship from the earliest days of Quakerism." Fox complained to judges about decisions he considered morally wrong, as in his letter on the case of a woman due to be executed for theft. He campaigned against the paying of tithes, which funded the established church and often went into the pockets of absentee landlords or religious colleges far away from the paying parishioners. In his view, as God was everywhere and anyone could preach, the established church was unnecessary and a university qualification irrelevant for a preacher. Conflict with civil authority was inevitable. Fox was imprisoned several times, the first at Nottingham in 1649. At Derby in 1650 he was imprisoned for blasphemy; a judge mocked Fox's exhortation to "tremble at the word of the Lord", calling him and his followers "Quakers". Following his refusal to fight against the return of the monarchy (or to take up arms for any reason), his sentence was doubled. The refusal to swear oaths or take up arms came to be a much more important part of his public statements. Refusal to take oaths meant that Quakers could be prosecuted under laws compelling subjects to pledge allegiance, as well as making testifying in court problematic. In a letter of 1652 ("That which is set up by the sword"), he urged Friends not to use "carnal weapons" but "spiritual weapons", saying "let the waves [the power of nations] break over your heads". In 1652, Fox preached for several hours under a walnut tree at Balby, where his disciple Thomas Aldham was instrumental in setting up the first meeting in the Doncaster area. In the same year Fox felt that God led him to ascend Pendle Hill where he had a vision of many souls coming to Christ. 
From there he travelled to Sedbergh, where he had heard a group of Seekers were meeting, and preached to over a thousand people on Firbank Fell, convincing many, including Francis Howgill, to accept that Christ might speak to people directly. At the end of the month he stayed at Swarthmoor Hall, near Ulverston, the home of Thomas Fell, vice-chancellor of the Duchy of Lancaster, and his wife, Margaret. At around this time the "ad hoc" meetings of Friends began to be formalized and a monthly meeting was set up in County Durham. Margaret became a Quaker and, although Thomas did not convert, his familiarity with the Friends proved influential when Fox was arrested for blasphemy in October. Fell was one of three presiding judges, and had the charges dismissed on a technicality. Fox remained at Swarthmoor until summer 1653 then left for Carlisle where he was arrested again for blasphemy. It was even proposed to put him to death but Parliament requested his release rather than have "a young man ... die for religion". Further imprisonments came at London, England in 1654, Launceston in 1656, Lancaster in 1660, Leicester in 1662, Lancaster again and Scarborough in 1664–66 and Worcester in 1673–75. Charges usually included causing a disturbance and travelling without a pass. Quakers fell foul of irregularly enforced laws forbidding unauthorized worship while actions motivated by belief in social equality—refusing to use or acknowledge titles, take hats off in court or bow to those who considered themselves socially superior—were seen as disrespectful. While imprisoned at Launceston Fox wrote, "Christ our Lord and master saith 'Swear not at all, but let your communications be yea, yea, and nay, nay, for whatsoever is more than these cometh of evil.' ... the Apostle James saith, 'My brethren, above all things swear not, neither by heaven, nor by earth, nor by any other oath. Lest ye fall into condemnation.'" In prison George Fox continued writing and preaching, feeling that imprisonment brought him into contact with people who needed his help—the jailers as well as his fellow prisoners. In his journal, he told his magistrate, "God dwells not in temples made with hands." He also sought to set an example by his actions there, turning the other cheek when being beaten and refusing to show his captors any dejected feelings. Parliamentarians grew suspicious of monarchist plots and fearful that the group travelling with Fox aimed to overthrow the government: by this time his meetings were regularly attracting crowds of over a thousand. In early 1655 he was arrested at Whetstone, Leicestershire and taken to London under armed guard. In March he was brought before the Lord Protector, Oliver Cromwell. After affirming that he had no intention of taking up arms Fox was able to speak with Cromwell for most of the morning about the Friends and advised him to listen to God's voice and obey it so that, as Fox left, Cromwell "with tears in his eyes said, 'Come again to my house; for if thou and I were but an hour of a day together, we should be nearer one to the other'; adding that he wished [Fox] no more ill than he did to his own soul." This episode was later recalled as an example of "speaking truth to power", a preaching technique by which subsequent Quakers hoped to influence the powerful. Although not used until the 20th century, the phrase is related to the ideas of plain speech and simplicity which Fox practiced, but motivated by the more worldly goal of eradicating war, injustice and oppression. 
Fox petitioned Cromwell over the course of 1656, asking him to alleviate the persecution of Quakers. Later that year, they met for a second time at Whitehall. On a personal level, the meeting went well; despite disagreements between the two men, they had a certain rapport. Fox invited Cromwell to "lay down his crown at the feet of Jesus"—which Cromwell declined to do. Fox met Cromwell again twice in March 1657. Their last meeting was in 1658 at Hampton Court, though they could not speak for long or meet again because of the Protector's worsening illness—Fox even wrote that "he looked like a dead man". Cromwell died in September of that year. One early Quaker convert, the Yorkshireman James Nayler, arose as a prominent preacher in London around 1655. A breach began to form between Fox's and Nayler's followers. As Fox was held prisoner at Launceston, Nayler moved south-westwards towards Launceston intending to meet Fox and heal any rift. On the way he was arrested himself and held at Exeter. After Fox was released from Launceston gaol in 1656, he preached throughout the West Country. Arriving at Exeter late in September, Fox was reunited with Nayler. Nayler and his followers refused to remove their hats while Fox prayed, which Fox took as both a personal slight and a bad example. When Nayler refused to kiss Fox's hand, Fox told Nayler to kiss his foot instead. Nayler was offended and the two parted acrimoniously. Fox wrote, "there was now a wicked spirit risen amongst Friends". After Nayler's own release later the same year he rode into Bristol triumphantly playing the part of Jesus Christ in a re-enactment of Palm Sunday. He was arrested and taken to London, where Parliament defeated a motion to execute him by 96–82. Instead, they ordered that he be pilloried and whipped through both London and Bristol, branded on his forehead with the letter B (for blasphemer), bored through the tongue with a red-hot iron and imprisoned in solitary confinement with hard labour. Nayler was released in 1659, but he was a broken man. On meeting Fox in London, he fell to his knees and begged Fox's forgiveness. Shortly afterward, Nayler was attacked by thieves while travelling home to his family, and died. The persecutions of these years—with about a thousand Friends in prison by 1657—hardened Fox's opinions of traditional religious and social practices. In his preaching, he often emphasized the Quaker rejection of baptism by water; this was a useful way of highlighting how the focus of Friends on inward transformation differed from what he saw as the superstition of outward ritual. It was also deliberately provocative to adherents of those practices, providing opportunities for Fox to argue with them on matters of scripture. This pattern was also found in his court appearances: when a judge challenged him to remove his hat, Fox riposted by asking where in the Bible such an injunction could be found. The Society of Friends became increasingly organized towards the end of the decade. Large meetings were held, including a three-day event in Bedfordshire, the precursor of the present Britain Yearly Meeting system. Fox commissioned two Friends to travel around the country collecting the testimonies of imprisoned Quakers, as evidence of their persecution; this led to the establishment in 1675 of Meeting for Sufferings, which has continued to the present day. The 1650s, when the Friends were most confrontational, was one of the most creative periods of their history. 
During the Commonwealth, Fox had hoped that the movement would become the major church in England. Disagreements, persecution and increasing social turmoil, however, led Fox to suffer from a severe depression, which left him deeply troubled at Reading, Berkshire, for ten weeks in 1658 or 1659. In 1659, he sent parliament his most politically radical pamphlet, "Fifty nine Particulars laid down for the Regulating things", but the year was so chaotic that it never considered them; the document was not reprinted until the 21st century. With the restoration of the monarchy, Fox's dreams of establishing the Friends as the dominant religion seemed at an end. He was again accused of conspiracy, this time against Charles II, and fanaticism—a charge he resented. He was imprisoned in Lancaster for five months, during which he wrote to the king offering advice on governance: Charles should refrain from war and domestic religious persecution, and discourage oath-taking, plays, and maypole games. These last suggestions reveal Fox's Puritan leanings, which continued to influence Quakers for centuries after his death. Once again, Fox was released after demonstrating that he had no military ambitions. At least on one point, Charles listened to Fox. The seven hundred Quakers who had been imprisoned under Richard Cromwell were released, though the government remained uncertain about the group's links with other, more violent, movements. A revolt by the Fifth Monarchists in January 1661 led to the suppression of that sect and the repression of other Nonconformists, including Quakers. In the aftermath of this attempted coup, Fox and eleven other Quakers issued a broadside proclaiming what became known among Friends in the 20th century as the "peace testimony": they committed themselves to oppose all outward wars and strife as contrary to the will of God. Not all his followers accepted this statement; Isaac Penington, for example, dissented for a time arguing that the state had a duty to protect the innocent from evil, if necessary by using military force. Despite the testimony, persecution against Quakers and other dissenters continued. Penington and others, such as John Perrot and John Pennyman, were uneasy at Fox's increasing power within the movement. Like Nayler before them, they saw no reason why men should remove their hats for prayer, arguing that men and women should be treated as equals and if, as according to the apostle Paul, women should cover their heads, then so could men. Perrot and Penington lost the argument. Perrot emigrated to the New World, and Fox retained leadership of the movement. Parliament enacted laws which forbade non-Anglican religious meetings of more than five people, essentially making Quaker meetings illegal. Fox counseled his followers to openly violate laws that attempted to suppress the movement, and many Friends, including women and children, were jailed over the next two and a half decades. Meanwhile, Quakers in New England had been banished (and some executed), and Charles was advised by his councillors to issue a mandamus condemning this practice and allowing them to return. Fox was able to meet some of the New England Friends when they came to London, stimulating his interest in the colonies. 
Fox was unable to travel there immediately: he was imprisoned again in 1664 for his refusal to swear the oath of allegiance, and on his release in 1666 was preoccupied with organizational matters—he normalized the system of monthly and quarterly meetings throughout the country, and extended it to Ireland. Visiting Ireland also gave him the opportunity to preach against what he saw as the excesses of the Roman Catholic Church, in particular the use of ritual. More recent Quaker commentators have noted points of contact between the denominations: both claim the actual presence of God in their meetings, and both allow the collective opinion of the church to augment Biblical teaching. Fox, however, did not perceive this, brought up as he was in a wholly Protestant environment hostile to "Popery". Fox married Margaret Fell of Swarthmoor Hall, a lady of high social position and one of his early converts, on 27 October 1669 at a meeting in Bristol. She was ten years his senior and had eight children (all but one of them Quakers) by her first husband, Thomas Fell, who had died in 1658. She was herself very active in the movement, and had campaigned for equality and the acceptance of women as preachers. As there were no priests at Quaker weddings to perform the ceremony, the union took the form of a civil marriage approved by the principals and the witnesses at a meeting. Ten days after the marriage, Margaret returned to Swarthmoor to continue her work there while George went back to London. Their shared religious work was at the heart of their life together, and they later collaborated on a great deal of the administration the Society required. Shortly after the marriage, Margaret was imprisoned at Lancaster; George remained in the south-east of England, becoming so ill and depressed that for a time he lost his sight. By 1671 Fox had recovered and Margaret had been released by order of the King. Fox resolved to visit the English settlements in America and the West Indies, remaining there for two years, possibly to counter any remnants of Perrot's teaching there. After a voyage of seven weeks, during which dolphins were caught and eaten, the party arrived in Barbados on 3 October 1671. From there, Fox sent an epistle to Friends spelling out the role of women's meetings in the Quaker marriage ceremony, a point of controversy when he returned home. One of his proposals suggested that the prospective couple should be interviewed by an all-female meeting prior to the marriage to determine whether there were any financial or other impediments. Though women's meetings had been held in London for the last ten years, this was an innovation in Bristol and the north-west of England, which many there felt went too far. Fox wrote a letter to the governor and assembly of the island in which he refuted charges that Quakers were stirring up the slaves to revolt and tried to affirm the orthodoxy of Quaker beliefs. After a stay in Jamaica, Fox's first landfall on the North American continent was at Maryland, where he participated in a four-day meeting of local Quakers. He remained there while various of his English companions travelled to the other colonies, because he wished to meet some Native Americans who were interested in Quaker ways—though he relates that they had "a great dispute" among themselves about whether to participate in the meeting. Fox was impressed by their general demeanour, which he said was "courteous and loving". 
He resented the suggestion (from a man in North Carolina) that "the Light and Spirit of God ... was not in the Indians", a proposition which Fox refuted. Fox left no record of encountering slaves on the mainland. Elsewhere in the colonies, Fox helped to establish organizational systems for the Friends, along the same lines as he had done in Britain. He also preached to many non-Quakers, some but not all of whom were converted. Following extensive travels around the various American colonies, George Fox returned to England in June 1673 confident that his movement was firmly established there. Back in England, however, he found his movement sharply divided among provincial Friends (such as William Rogers, John Wilkinson and John Story) who resisted establishment of women's meetings and the power of those who resided in or near London. With William Penn and Robert Barclay as allies of Fox, the challenge to Fox's leadership was eventually put down. But in the midst of the dispute, Fox was imprisoned again for refusing to swear oaths after being captured at Armscote, Worcestershire. His mother died shortly after hearing of his arrest and Fox's health began to suffer. Margaret Fell petitioned the king for his release, which was granted, but Fox felt too weak to take up his travels immediately. Recuperating at Swarthmoor, he began dictating what would be published after his death as his journal and devoted his time to his written output: letters, both public and private, as well as books and essays. Much of his energy was devoted to the topic of oaths, as he had become convinced of their importance to Quaker ideas. By refusing to swear, he felt that he could bear witness to the value of truth in everyday life, as well as to God, whom he associated with truth and the inner light. For three months in 1677 and a month in 1684, Fox visited the Friends in the Netherlands, and organized their meetings for discipline. The first trip was the more extensive, taking him into what is now Germany, proceeding along the coast to Friedrichstadt and back again over several days. Meanwhile, Fox was participating in a dispute among Friends in Britain over the role of women in meetings, a struggle which took much of his energy and left him exhausted. Returning to England, he stayed in the south in order to try to end the dispute. He followed with interest the foundation of the colony of Pennsylvania, where Penn had granted him land. Persecution continued, with Fox arrested briefly in October 1683. Fox's health was becoming worse, but he continued his activities—writing to leaders in Poland, Denmark, Germany, and elsewhere about his beliefs, and their treatment of Quakers. In the last years of his life, Fox continued to participate in the London Meetings, and still made representations to Parliament about the sufferings of Friends. The new King, James II, pardoned religious dissenters jailed for failure to attend the established church, leading to the release of about 1,500 Friends. Though the Quakers lost influence after the Glorious Revolution, which deposed James II, the Act of Toleration 1689 put an end to the uniformity laws under which Quakers had been persecuted, permitting them to assemble freely. Two days after preaching, as usual, at the Gracechurch Street Meeting House in London, George Fox died between 9 and 10 p.m. on 13 January 1691. He was interred in the Quaker Burying Ground, Bunhill Fields, three days later in the presence of thousands of mourners. 
Fox's journal was first published in 1694, after editing by Thomas Ellwood—a friend and associate of John Milton—with a preface by William Penn. Like most similar works of its time, the journal was not written contemporaneously with the events it describes, but rather was compiled many years later, much of it dictated. Parts of the journal were not in fact by Fox at all but were constructed by its editors from diverse sources and written as if by him. The dissent within the movement and the contributions of others to the development of Quakerism are largely excluded from the narrative. Fox portrays himself as always in the right and always vindicated by God's interventions on his behalf. As a religious autobiography, it has been compared by Rufus Jones to such works as Augustine's "Confessions" and John Bunyan's "Grace Abounding to the Chief of Sinners". It is, though, an intensely personal work with little dramatic power that only succeeds in appealing to readers after substantial editing. Historians have used it as a primary source because of its wealth of detail on ordinary life in the 17th century, and the many towns and villages which Fox visited. Hundreds of Fox's letters—mostly intended for wide circulation, along with a few private communications—were also published. Written from the 1650s onwards, with such titles as "Friends, seek the peace of all men" or "To Friends, to know one another in the light", they give enormous insight into the detail of Fox's beliefs, and show his determination to spread them. These writings, in the words of Henry Cadbury, Professor of Divinity at Harvard University and a leading Quaker, "contain a few fresh phrases of his own, [but] are generally characterized by an excess of scriptural language and today they seem dull and repetitious". Others point out that "Fox's sermons, rich in biblical metaphor and common speech, brought hope in a dark time." Fox's aphorisms have found an audience beyond Quakers, with many other church groups using them to illustrate principles of Christianity. Fox is described by Ellwood as "graceful in countenance, manly in personage, grave in gesture, courteous in conversation". Penn says he was "civil beyond all forms of breeding". We are told that he was "plain and powerful in preaching, fervent in prayer", "a discerner of other men's spirits, and very much master of his own", skilful to "speak a word in due season to the conditions and capacities of most, especially to them that were weary, and wanted soul's rest"; "valiant in asserting the truth, bold in defending it, patient in suffering for it, immovable as a rock". Fox's influence on the Society of Friends was tremendous, and his beliefs have largely been carried forward by that group. Perhaps his most significant achievement, other than his predominant influence in the early movement, was his leadership in overcoming the twin challenges of government prosecution after the Restoration and internal disputes that threatened its stability during the same period. Not all of his beliefs were welcome to all Quakers: his Puritan-like opposition to the arts and rejection of theological study forestalled the development of these practices among Quakers for some time. The name of George Fox is often invoked by traditionalist Friends who dislike modern liberal attitudes to the Society's Christian origins. At the same time, Quakers and others can relate to Fox's religious experience, and even those who disagree with many of his ideas regard him as a pioneer. 
Walt Whitman, who was raised by parents inspired by Quaker principles, later wrote: "George Fox stands for something too—a thought—the thought that wakes in silent hours—perhaps the deepest, most eternal thought latent in the human soul. This is the thought of God, merged in the thoughts of moral right and the immortality of identity. Great, great is this thought—aye, greater than all else." Various editions of Fox's journal have been published from time to time since the first printing in 1694. Gunpowder Plot The Gunpowder Plot of 1605, in earlier centuries often called the Gunpowder Treason Plot or the Jesuit Treason, was a failed assassination attempt against King James I of England and VI of Scotland by a group of provincial English Catholics led by Robert Catesby. The plan was to blow up the House of Lords during the State Opening of England's Parliament on 5 November 1605, as the prelude to a popular revolt in the Midlands during which James's nine-year-old daughter, Princess Elizabeth, was to be installed as the Catholic head of state. Catesby may have embarked on the scheme after hopes of securing greater religious tolerance under King James had faded, leaving many English Catholics disappointed. His fellow plotters were John Wright, Thomas Wintour, Thomas Percy, Guy Fawkes, Robert Keyes, Thomas Bates, Robert Wintour, Christopher Wright, John Grant, Ambrose Rookwood, Sir Everard Digby and Francis Tresham. Fawkes, who had 10 years of military experience fighting in the Spanish Netherlands in the failed suppression of the Dutch Revolt, was given charge of the explosives. The plot was revealed to the authorities in an anonymous letter sent to William Parker, 4th Baron Monteagle, on 26 October 1605. During a search of the House of Lords at about midnight on 4 November 1605, Fawkes was discovered guarding 36 barrels of gunpowder—enough to reduce the House of Lords to rubble—and arrested. Most of the conspirators fled from London as they learned of the plot's discovery, trying to enlist support along the way. Several made a stand against the pursuing Sheriff of Worcester and his men at Holbeche House; in the ensuing battle, Catesby was one of those shot and killed. At their trial on 27 January 1606, eight of the survivors, including Fawkes, were convicted and sentenced to be hanged, drawn and quartered. Details of the assassination attempt were allegedly known by the principal Jesuit of England, Father Henry Garnet. Although he was convicted of treason and sentenced to death, doubt has been cast on how much he really knew of the plot. As its existence was revealed to him through confession, Garnet was prevented from informing the authorities by the absolute confidentiality of the confessional. Although anti-Catholic legislation was introduced soon after the plot's discovery, many important and loyal Catholics retained high office during King James I's reign. The thwarting of the Gunpowder Plot was commemorated for many years afterwards by special sermons and other public events such as the ringing of church bells, which have evolved into the Bonfire Night of today. Between 1533 and 1540, the Tudor King Henry VIII took control of the English Church from Rome, the start of several decades of religious tension in England. English Catholics struggled in a society dominated by the newly separate and increasingly Protestant Church of England. 
Henry's daughter, Elizabeth I, responded to the growing religious divide by introducing the Elizabethan Religious Settlement, which required anyone appointed to a public or church office to swear allegiance to the monarch as head of the Church and state. The penalties for refusal were severe; fines were imposed for recusancy, and repeat offenders risked imprisonment and execution. Catholicism became marginalised, but despite the threat of torture or execution, priests continued to practise their faith in secret. Queen Elizabeth, unmarried and childless, steadfastly refused to name an heir. Many Catholics believed that her Catholic cousin, Mary, Queen of Scots, was the legitimate heir to the English throne, but she was executed for treason in 1587. The English Secretary of State, Robert Cecil, negotiated secretly with Mary's son, James VI of Scotland, who had a strong claim to the English throne as Elizabeth's first cousin twice removed through both his parents. In the months before Elizabeth's death on 24 March 1603, Cecil prepared the way for James to succeed her. Some exiled Catholics favoured Philip II of Spain's daughter, Infanta Isabella, as Elizabeth's successor. More moderate Catholics looked to James's and Elizabeth's cousin Arbella Stuart, a woman thought to have Catholic sympathies. As Elizabeth's health deteriorated, the government detained those they considered to be the "principal papists", and the Privy Council grew so worried that Arbella Stuart was moved closer to London to prevent her from being kidnapped by papists. Despite competing claims to the English throne, the transition of power following Elizabeth's death went smoothly. James's succession was announced by a proclamation from Cecil on 24 March, which was generally celebrated. Leading papists, rather than causing trouble as anticipated, reacted to the news by offering their enthusiastic support for the new monarch. Jesuit priests, whose presence in England was punishable by death, also demonstrated their support for James, who was widely believed to embody "the natural order of things". James ordered a ceasefire in the conflict with Spain, and even though the two countries were still technically at war, King Philip III sent his envoy, Don Juan de Tassis, to congratulate James on his accession. For decades, the English had lived under a monarch who refused to provide an heir, but James arrived with a family and a future line of succession. His wife, Anne of Denmark, was the daughter of a king. Their eldest child, the nine-year-old Henry, was considered a handsome and confident boy, and their two younger children, Princess Elizabeth and Prince Charles, were proof that James was able to provide heirs to continue the Protestant monarchy. James's attitude towards Catholics was more moderate than that of his predecessor, perhaps even tolerant. He promised that he would not "persecute any that will be quiet and give an outward obedience to the law", and believed that exile was a better solution than capital punishment: "I would be glad to have both their heads and their bodies separated from this whole island and transported beyond seas." Some Catholics believed that the martyrdom of James's mother, Mary, Queen of Scots, would encourage James to convert to the Catholic faith, and the Catholic houses of Europe may also have shared that hope. 
James received an envoy from the Habsburg Archduke Albert of the Southern Netherlands, ruler of the remaining Catholic territories after over 30 years of war in the Dutch Revolt by English-supported Protestant rebels. For the Catholic expatriates engaged in that struggle, the restoration by force of a Catholic monarchy was an intriguing possibility, but following the failed Spanish invasion of England in 1588 the papacy had taken a longer-term view on the return of a Catholic monarch to the English throne. During the late 16th century, Catholics made several assassination attempts against Protestant rulers in Europe and in England, including plans to poison Elizabeth I. The Jesuit Juan de Mariana's 1598 "On Kings and the Education of Kings" explicitly justified the assassination of the French king Henry III—who had been stabbed to death by a Catholic fanatic in 1589—and until the 1620s, some English Catholics believed that regicide was justifiable to remove tyrants from power. Much of the "rather nervous" James I's political writing was "concerned with the threat of Catholic assassination and refutation of the [Catholic] argument that 'faith did not need to be kept with heretics'". In the absence of any sign that James would move to end the persecution of Catholics, as some had hoped for, several members of the clergy (including two anti-Jesuit priests) decided to take matters into their own hands. In what became known as the Bye Plot, the priests William Watson and William Clark planned to kidnap James and hold him in the Tower of London until he agreed to be more tolerant towards Catholics. Cecil received news of the plot from several sources, including the Archpriest George Blackwell, who instructed his priests to have no part in any such schemes. At about the same time, Lord Cobham, Lord Grey de Wilton, Griffin Markham and Walter Raleigh hatched what became known as the Main Plot, which involved removing James and his family and supplanting them with Arbella Stuart. Amongst others, they approached Henry IV of France for funding, but were unsuccessful. All those involved in both plots were arrested in July and tried in autumn 1603; Sir George Brooke was executed, but James, keen not to have too bloody a start to his reign, reprieved Cobham, Grey, and Markham while they were at the scaffold. Raleigh, who had watched while his colleagues sweated, and who was due to be executed a few days later, was also pardoned. Arbella Stuart denied any knowledge of the Main Plot. The two priests, condemned by the pope, and "very bloodily handled", were executed. The Catholic community responded to news of these plots with shock. That the Bye Plot had been revealed by Catholics was instrumental in saving them from further persecution, and James was grateful enough to allow pardons for those recusants who sued for them, as well as postponing payment of their fines for a year. On 19 February 1604, shortly after he discovered that his wife, Queen Anne, had been sent a rosary from the pope via one of James's spies, Sir Anthony Standen, James denounced the Catholic Church. Three days later, he ordered all Jesuits and all other Catholic priests to leave the country, and reimposed the collection of fines for recusancy. James changed his focus from the anxieties of English Catholics to the establishment of an Anglo-Scottish union. He also appointed Scottish nobles such as George Home to his court, which proved unpopular with the Parliament of England. 
Some Members of Parliament made it clear that in their view, the "effluxion of people from the Northern parts" was unwelcome, and compared them to "plants which are transported from barren ground into a more fertile one". Even more discontent resulted when the King allowed his Scottish nobles to collect the recusancy fines. There were 5,560 convicted of recusancy in 1605, of whom 112 were landowners. The very few Catholics of great wealth who refused to attend services at their parish church were fined £20 per month. Those of more moderate means had to pay two-thirds of their annual rental income; middle class recusants were fined one shilling a week, although the collection of all these fines was "haphazard and negligent". When James came to power, almost £5,000 a year (equivalent to over £10 million in 2008) was being raised by these fines. On 19 March, the King gave his opening speech to his first English Parliament in which he spoke of his desire to secure peace, but only by "profession of the true religion". He also spoke of a Christian union and reiterated his desire to avoid religious persecution. For the Catholics, the King's speech made it clear that they were not to "increase their number and strength in this Kingdom", that "they might be in hope to erect their Religion again". To Father John Gerard, these words were almost certainly responsible for the heightened levels of persecution the members of his faith now suffered, and for the priest Oswald Tesimond they were a rebuttal of the early claims that the King had made, upon which the papists had built their hopes. A week after James's speech, Lord Sheffield informed the king of over 900 recusants brought before the Assizes in Normanby, and on 24 April a Bill was introduced in Parliament which threatened to outlaw all English followers of the Catholic Church. The conspirators' principal aim was to kill King James, but many other important targets would also be present at the State Opening, including the monarch's nearest relatives and members of the Privy Council. The senior judges of the English legal system, most of the Protestant aristocracy, and the bishops of the Church of England would all have attended in their capacity as members of the House of Lords, along with the members of the House of Commons. Another important objective was the kidnapping of the King's daughter, third in the line of succession, Princess Elizabeth. Housed at Coombe Abbey near Coventry, the Princess lived only ten miles north of Warwick—convenient for the plotters, most of whom lived in the Midlands. Once the King and his Parliament were dead, the plotters intended to install Elizabeth on the English throne as a titular Queen. The fate of Princes Henry and Charles would be improvised; their role in state ceremonies was, as yet, uncertain. The plotters planned to use Henry Percy, Earl of Northumberland, as Elizabeth's Protector, but most likely never informed him of this. Robert Catesby (1573–1605), a man of "ancient, historic and distinguished lineage", was the inspiration behind the plot. He was described by contemporaries as "a good-looking man, about six feet tall, athletic and a good swordsman". Along with several other conspirators, he took part in the Earl of Essex's rebellion in 1601, during which he was wounded and captured. Queen Elizabeth allowed him to escape with his life after fining him 4,000 marks (equivalent to more than £6 million in 2008), after which he sold his estate in Chastleton. 
In 1603 Catesby helped to organise a mission to the new king of Spain, Philip III, urging Philip to launch an invasion attempt on England, which they assured him would be well supported, particularly by the English Catholics. Thomas Wintour (1571–1606) was chosen as the emissary, but the Spanish king, although sympathetic to the plight of Catholics in England, was intent on making peace with James. Wintour had also attempted to convince the Spanish envoy Don Juan de Tassis that "3,000 Catholics" were ready and waiting to support such an invasion. Concern was voiced by Pope Clement VIII that using violence to achieve a restoration of Catholic power in England would result in the destruction of those that remained. According to contemporary accounts, in February 1604 Catesby invited Thomas Wintour to his house in Lambeth, where they discussed Catesby's plan to re-establish Catholicism in England by blowing up the House of Lords during the State Opening of Parliament. Wintour was known as a competent scholar, able to speak several languages, and he had fought with the English army in the Netherlands. His uncle, Francis Ingleby, had been executed for being a Catholic priest in 1586, and Wintour later converted to Catholicism. Also present at the meeting was John Wright, a devout Catholic said to be one of the best swordsmen of his day, and a man who had taken part with Catesby in the Earl of Essex's rebellion three years earlier. Despite his reservations over the possible repercussions should the attempt fail, Wintour agreed to join the conspiracy, perhaps persuaded by Catesby's rhetoric: "Let us give the attempt and where it faileth, pass no further." Wintour travelled to Flanders to enquire about Spanish support. While there he sought out Guy Fawkes (1570–1606), a committed Catholic who had served as a soldier in the Southern Netherlands under the command of William Stanley, and who in 1603 was recommended for a captaincy. Accompanied by John Wright's brother Christopher, Fawkes had also been a member of the 1603 delegation to the Spanish court pleading for an invasion of England. Wintour told Fawkes that "some good frends of his wished his company in Ingland", and that certain gentlemen "were uppon a resolution to doe some whatt in Ingland if the pece with Spain healped us nott". The two men returned to England late in April 1604, telling Catesby that Spanish support was unlikely. Thomas Percy, Catesby's friend and John Wright's brother-in-law, was introduced to the plot several weeks later. Percy had found employment with his kinsman the Earl of Northumberland, and by 1596 was his agent for the family's northern estates. About 1600–1601 he served with his patron in the Low Countries. At some point during Northumberland's command in the Low Countries, Percy became his agent in his communications with James. Percy was reputedly a "serious" character who had converted to the Catholic faith. His early years were, according to a Catholic source, marked by a tendency to rely on "his sword and personal courage". Northumberland, although not a Catholic himself, planned to build a strong relationship with James in order to better the prospects of English Catholics, and to reduce the family disgrace caused by his separation from his wife Martha Wright, a favourite of Elizabeth. Thomas Percy's meetings with James seemed to go well. 
Percy returned with promises of support for the Catholics, and Northumberland believed that James would go so far as to allow Mass in private houses, so as not to cause public offence. Percy, keen to improve his standing, went further, claiming that the future King would guarantee the safety of English Catholics. The first meeting between the five conspirators took place on 20 May 1604, probably at the Duck and Drake Inn, just off the Strand, Thomas Wintour's usual residence when staying in London. Catesby, Thomas Wintour, and John Wright were in attendance, joined by Guy Fawkes and Thomas Percy. Alone in a private room, the five plotters swore an oath of secrecy on a prayer book. By coincidence, and ignorant of the plot, Father John Gerard (a friend of Catesby's) was celebrating Mass in another room, and the five men subsequently received the Eucharist. Following their oath, the plotters left London and returned to their homes. The adjournment of Parliament gave them, they thought, until February 1605 to finalise their plans. On 9 June, Percy's patron, the Earl of Northumberland, appointed him to the Honourable Corps of Gentlemen at Arms, a mounted troop of 50 bodyguards to the King. This role gave Percy reason to seek a base in London, and a small property near the Prince's Chamber owned by Henry Ferrers, a tenant of John Whynniard, was chosen. Percy arranged for the use of the house through Northumberland's agents, Dudley Carleton and John Hippisley. Fawkes, using the pseudonym "John Johnson", took charge of the building, posing as Percy's servant. The building was occupied by Scottish commissioners appointed by the King to consider his plans for the unification of England and Scotland, so the plotters hired Catesby's lodgings in Lambeth, on the opposite bank of the Thames, from where their stored gunpowder and other supplies could be conveniently rowed across each night. Meanwhile, King James continued with his policies against the Catholics, and Parliament pushed through anti-Catholic legislation, until its adjournment on 7 July. The conspirators returned to London in October 1604, when Robert Keyes, a "desperate man, ruined and indebted", was admitted to the group. His responsibility was to take charge of Catesby's house in Lambeth, where the gunpowder and other supplies were to be stored. Keyes's family had notable connections; his wife's employer was the Catholic Lord Mordaunt. Tall, with a red beard, he was seen as trustworthy and, like Fawkes, capable of looking after himself. In December Catesby recruited his servant, Thomas Bates, into the plot, after the latter accidentally became aware of it. It was announced on 24 December that the re-opening of Parliament would be delayed. Concern over the plague meant that rather than sitting in February, as the plotters had originally planned for, Parliament would not sit again until 3 October 1605. The contemporaneous account of the prosecution claimed that during this delay the conspirators were digging a tunnel beneath Parliament. This may have been a government fabrication, as no evidence for the existence of a tunnel was presented by the prosecution, and no trace of one has ever been found. The account of a tunnel comes directly from Thomas Wintour's confession, and Guy Fawkes did not admit the existence of such a scheme until his fifth interrogation. Logistically, digging a tunnel would have proved extremely difficult, especially as none of the conspirators had any experience of mining. 
If the story is true, by 6 December the Scottish commissioners had finished their work, and the conspirators were busy tunnelling from their rented house to the House of Lords. They ceased their efforts when, during tunnelling, they heard a noise from above. The noise turned out to be the then-tenant's widow, who was clearing out the undercroft directly beneath the House of Lords—the room where the plotters eventually stored the gunpowder. By the time the plotters reconvened at the start of the old style new year on Lady Day, 25 March, three more had been admitted to their ranks: Robert Wintour, John Grant, and Christopher Wright. The additions of Wintour and Wright were obvious choices. Along with a small fortune, Robert Wintour inherited Huddington Court (a known refuge for priests) near Worcester, and was reputedly a generous and well-liked man. A devout Catholic, he married Gertrude Talbot, who was from a family of recusants. Christopher Wright (1568–1605), John's brother, had also taken part in the Earl of Essex's revolt and had moved his family to Twigmore in Lincolnshire, then known as something of a haven for priests. John Grant was married to Wintour's sister, Dorothy, and was lord of the manor of Norbrook near Stratford-upon-Avon. Reputed to be an intelligent, thoughtful man, he sheltered Catholics at his home at Snitterfield, and was another who had been involved in the Essex revolt of 1601. In addition, 25 March was the day on which the plotters purchased the lease to the undercroft they had supposedly tunnelled near to, owned by John Whynniard. The Palace of Westminster in the early 17th century was a warren of buildings clustered around the medieval chambers, chapels, and halls of the former royal palace that housed both Parliament and the various royal law courts. The old palace was easily accessible; merchants, lawyers, and others lived and worked in the lodgings, shops and taverns within its precincts. Whynniard's building stood at right angles to the House of Lords, alongside a passageway called Parliament Place, which itself led to Parliament Stairs and the River Thames. Undercrofts were common features at the time, used to house a variety of materials including food and firewood. Whynniard's undercroft, on the ground floor, was directly beneath the first-floor House of Lords, and may once have been part of the palace's medieval kitchen. Unused and filthy, its location was ideal for what the group planned to do. In the second week of June Catesby met in London the principal Jesuit in England, Father Henry Garnet, and asked him about the morality of entering into an undertaking which might involve the destruction of the innocent, together with the guilty. Garnet answered that such actions could often be excused, but according to his own account later admonished Catesby during a second meeting in July in Essex, showing him a letter from the pope which forbade rebellion. Soon after, the Jesuit priest Oswald Tesimond told Garnet he had taken Catesby's confession, in the course of which he had learnt of the plot. Garnet and Catesby met for a third time on 24 July 1605, at the house of the wealthy Catholic Anne Vaux in Enfield Chase. Garnet decided that Tesimond's account had been given under the seal of the confessional, and that canon law therefore forbade him to repeat what he had heard. Without acknowledging that he was aware of the precise nature of the plot, Garnet attempted to dissuade Catesby from his course, to no avail. 
Garnet wrote to a colleague in Rome, Claudio Acquaviva, expressing his concerns about open rebellion in England. He also told Acquaviva that "there is a risk that some private endeavour may commit treason or use force against the King", and urged the pope to issue a public brief against the use of force. According to Fawkes, 20 barrels of gunpowder were brought in at first, followed by 16 more on 20 July. The supply of gunpowder was theoretically controlled by the government, but it was easily obtained from illicit sources. On 28 July, the ever-present threat of the plague again delayed the opening of Parliament, this time until Tuesday 5 November. Fawkes left the country for a short time. The King, meanwhile, spent much of the summer away from the city, hunting. He stayed wherever was convenient, including on occasion at the houses of prominent Catholics. Garnet, convinced that the threat of an uprising had receded, travelled the country on a pilgrimage. It is uncertain when Fawkes returned to England, but he was back in London by late August, when he and Wintour discovered that the gunpowder stored in the undercroft had decayed. More gunpowder was brought into the room, along with firewood to conceal it. The final three conspirators were recruited in late 1605. At Michaelmas, Catesby persuaded the staunchly Catholic Ambrose Rookwood to rent Clopton House near Stratford-upon-Avon. Rookwood was a young man with recusant connections, whose stable of horses at Coldham Hall in Stanningfield, Suffolk was an important factor in his enlistment. His parents, Robert Rookwood and Dorothea Drury, were wealthy landowners, and had educated their son at a Jesuit school near Calais. Everard Digby was a young man who was generally well liked, and lived at Gayhurst House in Buckinghamshire. He had been knighted by the King in April 1603, and was converted to Catholicism by Gerard. Digby and his wife, Mary Mulshaw, had accompanied the priest on his pilgrimage, and the two men were reportedly close friends. Digby was asked by Catesby to rent Coughton Court near Alcester. Digby also promised £1,500 after Percy failed to pay the rent due for the properties he had taken in Westminster. Finally, on 14 October Catesby invited Francis Tresham into the conspiracy. Tresham was the son of the Catholic Thomas Tresham, and a cousin to Robert Catesby—the two had been raised together. He was also the heir to his father's large fortune, which had been depleted by recusant fines, expensive tastes, and by Francis and Catesby's involvement in the Essex revolt. Catesby and Tresham met at the home of Tresham's brother-in-law and cousin, Lord Stourton. In his confession, Tresham claimed that he had asked Catesby if the plot would damn their souls, to which Catesby had replied it would not, and that the plight of England's Catholics required that it be done. Catesby also apparently asked for £2,000, and the use of Rushton Hall in Northamptonshire. Tresham declined both offers (although he did give £100 to Thomas Wintour), and told his interrogators that he had moved his family from Rushton to London in advance of the plot; hardly the actions of a guilty man, he claimed. The details of the plot were finalised in October, in a series of taverns across London and Daventry. Fawkes would be left to light the fuse and then escape across the Thames, while simultaneously a revolt in the Midlands would help to ensure the capture of Princess Elizabeth. 
Fawkes would leave for the continent, to explain events in England to the European Catholic powers. The wives of those involved and Anne Vaux (a friend of Garnet who often shielded priests at her home) became increasingly concerned by what they suspected was about to happen. Several of the conspirators expressed worries about the safety of fellow Catholics who would be present in Parliament on the day of the planned explosion. Percy was concerned for his patron, Northumberland, and the young Earl of Arundel's name was brought up; Catesby suggested that a minor wound might keep him from the chamber on that day. The Lords Vaux, Montague, Monteagle, and Stourton were also mentioned. Keyes suggested warning Lord Mordaunt, his wife's employer, to derision from Catesby. On Saturday 26 October, Monteagle (Tresham's brother-in-law) received an anonymous letter while at his house in Hoxton. Having broken the seal, he handed the letter to a servant, who read it aloud. Uncertain of the letter's meaning, Monteagle promptly rode to Whitehall and handed it to Cecil (then Earl of Salisbury). Salisbury informed the Earl of Worcester, considered to have recusant sympathies, and the suspected Catholic Henry Howard, 1st Earl of Northampton, but kept news of the plot from the King, who was busy hunting in Cambridgeshire and not expected back for several days. Monteagle's servant, Thomas Ward, had family connections with the Wright brothers, and sent a message to Catesby about the betrayal. Catesby, who had been due to go hunting with the King, suspected that Tresham was responsible for the letter, and with Thomas Wintour confronted the recently recruited conspirator. Tresham managed to convince the pair that he had not written the letter, but urged them to abandon the plot. Salisbury was already aware of certain stirrings before he received the letter, but did not yet know the exact nature of the plot, or who exactly was involved. He therefore elected to wait, to see how events unfolded. The letter was shown to the King on Friday 1 November following his arrival back in London. Upon reading it, James immediately seized upon the word "blow" and felt that it hinted at "some strategem of fire and powder", perhaps an explosion exceeding in violence the one that killed his father, Lord Darnley, at Kirk o' Field in 1567. Keen not to seem too intriguing, and wanting to allow the King to take the credit for unveiling the conspiracy, Salisbury feigned ignorance. The following day members of the Privy Council visited the King at the Palace of Whitehall and informed him that, based on the information that Salisbury had given them a week earlier, on Monday the Lord Chamberlain Thomas Howard, 1st Earl of Suffolk would undertake a search of the Houses of Parliament, "both above and below". On Sunday 3 November Percy, Catesby and Wintour had a final meeting, where Percy told his colleagues that they should "abide the uttermost triall", and reminded them of their ship waiting at anchor on the Thames. By 4 November Digby was ensconced with a "hunting party" at Dunchurch, ready to abduct Princess Elizabeth. The same day, Percy visited the Earl of Northumberland—who was uninvolved in the conspiracy—to see if he could discern what rumours surrounded the letter to Monteagle. Percy returned to London and assured Wintour, John Wright, and Robert Keyes that they had nothing to be concerned about, and returned to his lodgings on Gray's Inn Road. 
That same evening Catesby, likely accompanied by John Wright and Bates, set off for the Midlands. Fawkes visited Keyes, and was given a pocket watch left by Percy, to time the fuse, and an hour later Rookwood received several engraved swords from a local cutler. Although two accounts of the number of searches and their timing exist, according to the King's version, the first search of the buildings in and around Parliament was made on Monday 4 November—as the plotters were busy making their final preparations—by Suffolk, Monteagle, and John Whynniard. They found a large pile of firewood in the undercroft beneath the House of Lords, accompanied by what they presumed to be a serving man (Fawkes), who told them that the firewood belonged to his master, Thomas Percy. They left to report their findings, at which time Fawkes also left the building. The mention of Percy's name aroused further suspicion as he was already known to the authorities as a Catholic agitator. The King insisted that a more thorough search be undertaken. Late that night, the search party, headed by Thomas Knyvet, returned to the undercroft. They again found Fawkes, dressed in a cloak and hat, and wearing boots and spurs. He was arrested, whereupon he gave his name as John Johnson. He was carrying a lantern now held in the Ashmolean Museum, Oxford, and a search of his person revealed a pocket watch, several slow matches and touchwood. 36 barrels of gunpowder were discovered hidden under piles of faggots and coal. Fawkes was taken to the King early on the morning of 5 November. As news of "John Johnson's" arrest spread among the plotters still in London, most fled northwest, along Watling Street. Christopher Wright and Thomas Percy left together. Rookwood left soon after, and managed to cover 30 miles in two hours on one horse. He overtook Keyes, who had set off earlier, then Wright and Percy at Little Brickhill, before catching Catesby, John Wright, and Bates on the same road. Reunited, the group continued northwest to Dunchurch, using horses provided by Digby. Keyes went to Mordaunt's house at Drayton. Meanwhile, Thomas Wintour stayed in London, and even went to Westminster to see what was happening. When he realised the plot had been uncovered, he took his horse and made for his sister's house at Norbrook, before continuing to Huddington Court. The group of six conspirators stopped at Ashby St Ledgers at about 6 pm, where they met Robert Wintour and updated him on their situation. They then continued on to Dunchurch, and met with Digby. Catesby convinced him that despite the plot's failure, an armed struggle was still a real possibility. He announced to Digby's "hunting party" that the King and Salisbury were dead, before the fugitives moved west to Warwick. In London, news of the plot was spreading, and the authorities set extra guards on the city gates, closed the ports, and protected the house of the Spanish Ambassador, which was surrounded by an angry mob. An arrest warrant was issued against Thomas Percy, and his patron, the Earl of Northumberland, was placed under house arrest. In "John Johnson's" initial interrogation he revealed nothing other than the name of his mother, and that he was from Yorkshire. A letter to Guy Fawkes was discovered on his person, but he claimed that name was one of his aliases. Far from denying his intentions, "Johnson" stated that it had been his purpose to destroy the King and Parliament. Nevertheless, he maintained his composure and insisted that he had acted alone. 
His unwillingness to yield so impressed the King that he described him as possessing "a Roman resolution". On 6 November, the Lord Chief Justice, Sir John Popham (a man with a deep-seated hatred of Catholics) questioned Rookwood's servants. By the evening he had learned the names of several of those involved in the conspiracy: Catesby, Rookwood, Keyes, Wynter , John and Christopher Wright, and Grant. "Johnson" meanwhile persisted with his story, and along with the gunpowder he was found with, was moved to the Tower of London, where the King had decided that "Johnson" would be tortured. The use of torture was forbidden, except by royal prerogative or a body such as the Privy Council or Star Chamber. In a letter of 6 November James wrote: "The gentler tortours [tortures] are to be first used unto him, "et sic per gradus ad ima tenditur" [and thus by steps extended to greater ones], and so God speed your good work." "Johnson" may have been placed in manacles and hung from the wall, but he was almost certainly subjected to the horrors of the rack. On 7 November his resolve was broken; he confessed late that day, and again over the following two days. On 6 November, with Fawkes maintaining his silence, the fugitives raided Warwick Castle for supplies and continued to Norbrook to collect weapons. From there they continued their journey to Huddington. Bates left the group and travelled to Coughton Court to deliver a letter from Catesby, to Father Garnet and the other priests, informing them of what had transpired, and asking for their help in raising an army. Garnet replied by begging Catesby and his followers to stop their "wicked actions", before himself fleeing. Several priests set out for Warwick, worried about the fate of their colleagues. They were caught, and then imprisoned in London. Catesby and the others arrived at Huddington early in the afternoon, and were met by Thomas Wintour. They received practically no support or sympathy from those they met, including family members, who were terrified at the prospect of being associated with treason. They continued on to Holbeche House on the border of Staffordshire, the home of Stephen Littleton, a member of their ever-decreasing band of followers. Whilst there Stephen Littleton and Thomas Wintour went to 'Pepperhill', the Shropshire residence of Sir John Talbot to gain support but to no avail. Tired and desperate, they spread out some of the now-soaked gunpowder in front of the fire, to dry out. Although gunpowder does not explode unless physically contained, a spark from the fire landed on the powder and the resultant flames engulfed Catesby, Rookwood, Grant, and a man named Morgan (a member of the hunting party). Thomas Wintour and Littleton, on their way from Huddington to Holbeche House, were told by a messenger that Catesby had died. At that point, Littleton left, but Thomas arrived at the house to find Catesby alive, albeit scorched. John Grant was not so lucky, and had been blinded by the fire. Digby, Robert Wintour and his half-brother John, and Thomas Bates, had all left. Of the plotters, only the singed figures of Catesby and Grant, and the Wright brothers, Rookwood, and Percy, remained. The fugitives resolved to stay in the house and wait for the arrival of the King's men. Richard Walsh (Sheriff of Worcestershire) and his company of 200 men besieged Holbeche House on the morning of 8 November. Thomas Wintour was hit in the shoulder while crossing the courtyard. John Wright was shot, followed by his brother, and then Rookwood. 
Catesby and Percy were reportedly killed by a single lucky shot. The attackers rushed the property, and stripped the dead or dying defenders of their clothing. Grant, Morgan, Rookwood, and Wintour were arrested. Bates and Keyes were captured shortly after Holbeche House was taken. Digby, who had intended to give himself up, was caught by a small group of pursuers. Tresham was arrested on 12 November, and taken to the Tower three days later. Montague, Mordaunt, and Stourton (Tresham's brother-in-law) were also imprisoned in the Tower. The Earl of Northumberland joined them on 27 November. Meanwhile the government used the revelation of the plot to accelerate its persecution of Catholics. The home of Anne Vaux at Enfield Chase was searched, revealing the presence of trap doors and hidden passages. A terrified servant then revealed that Garnet, who had often stayed at the house, had recently given a Mass there. Father John Gerard was secreted at the home of Elizabeth Vaux, in Harrowden. Elizabeth was taken to London for interrogation. There she was resolute; she had never been aware that Gerard was a priest, she had presumed he was a "Catholic gentleman", and she did not know of his whereabouts. The homes of the conspirators were searched, and looted; Mary Digby's household was ransacked, and she was made destitute. Some time before the end of November, Garnet moved to Hindlip Hall near Worcester, the home of the Habingtons, where he wrote a letter to the Privy Council protesting his innocence. The foiling of the Gunpowder Plot initiated a wave of national relief at the delivery of the King and his sons, and inspired in the ensuing parliament a mood of loyalty and goodwill, which Salisbury astutely exploited to extract higher subsidies for the King than any (bar one) granted in Elizabeth's reign. Walter Raleigh, who was languishing in the Tower owing to his involvement in the Main Plot, and whose wife was a first cousin of Lady Catesby, declared he had had no knowledge of the conspiracy. The Bishop of Rochester gave a sermon at St. Paul's Cross, in which he condemned the plot. In his speech to both Houses on 9 November, James expounded on two emerging preoccupations of his monarchy: the divine right of kings and the Catholic question. He insisted that the plot had been the work of only a few Catholics, not of the English Catholics as a whole, and he reminded the assembly to rejoice at his survival, since kings were divinely appointed and he owed his escape to a miracle. Salisbury wrote to his English ambassadors abroad, informing them of what had occurred, and also reminding them that the King bore no ill will to his Catholic neighbours. The foreign powers largely distanced themselves from the plotters, calling them atheists and Protestant heretics. Sir Edward Coke was in charge of the interrogations. Over a period of about ten weeks, in the Lieutenant's Lodgings at the Tower of London (now known as the Queen's House) he questioned those who had been implicated in the plot. For the first round of interrogations, no real proof exists that these people were tortured, although on several occasions Salisbury certainly suggested that they should be. Coke later revealed that the threat of torture was in most cases enough to elicit a confession from those caught up in the aftermath of the plot. Only two confessions were printed in full: Fawkes's confession of 8 November, and Wintour's of 23 November. 
Having been involved in the conspiracy from the start (unlike Fawkes), Wintour was able to give extremely valuable information to the Privy Council. The handwriting on his testimony is almost certainly that of the man himself, but his signature was markedly different. Wintour had previously only ever signed his name as such, but his confession is signed "Winter", and since he had been shot in the shoulder, the steady hand used to write the signature may indicate some measure of government interference—or it may indicate that writing a shorter version of his name was less painful. Wintour's testimony makes no mention of his brother, Robert. Both confessions were published in the so-called "King's Book", a hastily written official account of the conspiracy published in late November 1605. Henry Percy, Earl of Northumberland, was in a difficult position. His midday dinner with Thomas Percy on 4 November was damning evidence against him, and after Thomas Percy's death there was nobody who could either implicate him or clear him. The Privy Council suspected that Northumberland would have been Princess Elizabeth's protector had the plot succeeded, but there was insufficient evidence to convict him. Northumberland remained in the Tower and on 27 June 1606 was finally charged with contempt. He was stripped of all public offices, fined £30,000, and kept in the Tower until June 1621. The Lords Mordaunt and Stourton were tried in the Star Chamber. They were condemned to imprisonment in the Tower, where they remained until 1608, when they were transferred to the Fleet Prison. Both were also given significant fines. Several other people not involved in the conspiracy, but known or related to the conspirators, were also questioned. Northumberland's brothers, Sir Allen and Sir Josceline, were arrested. Anthony-Maria Browne, 2nd Viscount Montagu, had employed Fawkes at an early age, and had also met Catesby on 29 October, and was therefore of interest; he was released several months later. Agnes Wenman was from a Catholic family, and related to Elizabeth Vaux. She was examined twice but the charges against her were eventually dropped. Percy's secretary and later the controller of Northumberland's household, Dudley Carleton, had leased the vault where the gunpowder was stored, and consequently he was imprisoned in the Tower. Salisbury believed his story, and authorised his release. Thomas Bates confessed on 4 December, providing much of the information that Salisbury needed to link the Catholic clergy to the plot. Bates had been present at most of the conspirators' meetings, and under interrogation he implicated Father Tesimond in the plot. On 13 January 1606 he described how he had visited Garnet and Tesimond on 7 November to inform Garnet of the plot's failure. Bates also told his interrogators of his ride with Tesimond to Huddington, before the priest left him to head for the Habingtons at Hindlip Hall, and of a meeting between Garnet, Gerard, and Tesimond in October 1605. At about the same time in December, Tresham's health began to deteriorate. He was visited regularly by his wife, a nurse, and his servant William Vavasour, who documented his strangury. Before he died Tresham had also told of Garnet's involvement with the 1603 mission to Spain, but in his last hours he retracted some of these statements. Nowhere in his confession did he mention the Monteagle letter. He died early on the morning of 23 December, and was buried in the Tower. 
Nevertheless he was attainted along with the other plotters, his head was set on a pike either at Northampton or London Bridge, and his estates confiscated. On 15 January a proclamation named Father Garnet, Father Gerard, and Father Greenway (Tesimond) as wanted men. Tesimond and Gerard managed to escape the country and live out their days in freedom; Garnet was not so lucky. Several days earlier, on 9 January, Robert Wintour and Stephen Littleton were captured. Their hiding place at Hagley, the home of Humphrey Littleton (brother of MP John Littleton, imprisoned for treason in 1601 for his part in the Essex revolt) was betrayed by a cook, who grew suspicious of the amount of food sent up for his master's consumption. Humphrey denied the presence of the two fugitives, but another servant led the authorities to their hiding place. On 20 January the local Justice and his retainers arrived at Thomas Habington's home, Hindlip Hall, to arrest the Jesuits. Despite Thomas Habington's protests, the men spent the next four days searching the house. On 24 January, starving, two priests left their hiding places and were discovered. Humphrey Littleton, who had escaped from the authorities at Hagley, got as far as Prestwood in Staffordshire before he was captured. He was imprisoned, and then condemned to death at Worcester. On 26 January, in exchange for his life, he told the authorities where they could find Father Garnet. Worn down by hiding for so long, Garnet, accompanied by another priest, emerged from his priest hole the next day. By coincidence, on the same day that Garnet was found, the surviving conspirators were arraigned in Westminster Hall. Seven of the prisoners were taken from the Tower to the Star Chamber by barge. Bates, who was considered lower class, was brought from the Gatehouse Prison. Some of the prisoners were reportedly despondent, but others were nonchalant, even smoking tobacco. The King and his family, hidden from view, were among the many who watched the trial. The Lords Commissioners present were the Earls of Suffolk, Worcester, Northampton, Devonshire, and Salisbury. Sir John Popham was Lord Chief Justice, Sir Thomas Fleming was Lord Chief Baron of the Exchequer, and two Justices, Sir Thomas Walmsley and Sir Peter Warburton, sat as Justices of the Common Pleas. The list of traitors' names was read aloud, beginning with those of the priests: Garnet, Tesimond, and Gerard. The first to speak was the Speaker of the House of Commons (later Master of the Rolls), Sir Edward Philips, who described the intent behind the plot in lurid detail. He was followed by the Attorney-General Sir Edward Coke, who began with a long speech—the content of which was heavily influenced by Salisbury—that included a denial that the King had ever made any promises to the Catholics. Monteagle's part in the discovery of the plot was welcomed, and denunciations of the 1603 mission to Spain featured strongly. Fawkes's protestations that Gerard knew nothing of the plot were omitted from Coke's speech. The foreign powers, when mentioned, were accorded due respect, but the priests were accursed, their behaviour analysed and criticised wherever possible. There was little doubt, according to Coke, that the plot had been invented by the Jesuits. Garnet's meeting with Catesby, at which the former was said to have absolved the latter of any blame in the plot, was proof enough that the Jesuits were central to the conspiracy; according to Coke the Gunpowder Plot would always be known as the Jesuit Treason. 
Coke spoke with feeling of the probable fate of the Queen and the rest of the King's family, and of the innocents who would have been caught up in the explosion. Each of the condemned, said Coke, would be drawn backwards to his death, by a horse, his head near the ground. He was to be "put to death halfway between heaven and earth as unworthy of both". His genitals would be cut off and burnt before his eyes, and his bowels and heart then removed. Then he would be decapitated, and the dismembered parts of his body displayed so that they might become "prey for the fowls of the air". Confessions and declarations from the prisoners were then read aloud, and finally the prisoners were allowed to speak. Rookwood claimed that he had been drawn into the plot by Catesby, "whom he loved above any worldly man". Thomas Wintour begged to be hanged for himself and his brother, so that his brother might be spared. Fawkes explained his not guilty plea as ignorance of certain aspects of the indictment. Keyes appeared to accept his fate, Bates and Robert Wintour begged for mercy, and Grant explained his involvement as "a conspiracy intended but never effected". Only Digby, tried on a separate indictment, pleaded guilty, insisting that the King had reneged upon promises of toleration for Catholics, and that affection for Catesby and love of the Catholic cause mitigated his actions. He sought death by the axe and begged mercy from the King for his young family. His defence was in vain; his arguments were rebuked by Coke and Northampton, and along with his seven co-conspirators, he was found guilty by the jury of high treason. Digby shouted "If I may but hear any of your lordships say, you forgive me, I shall go more cheerfully to the gallows." The response was short: "God forgive you, and we do." Garnet may have been questioned on as many as 23 occasions. His response to the threat of the rack was "Minare ista pueris" [threats are only for boys], and he denied having encouraged Catholics to pray for the success of the "Catholic Cause". His interrogators resorted to the forgery of correspondence between Garnet and other Catholics, but to no avail. His jailers then allowed him to talk with another priest in a neighbouring cell, with eavesdroppers listening to every word. Eventually Garnet let slip a crucial piece of information, that there was only one man who could testify that he had any knowledge of the plot. Under torture Garnet admitted that he had heard of the plot from fellow Jesuit Oswald Tesimond, who had learnt of it in confession from Catesby. Garnet was charged with high treason and tried in the Guildhall on 28 March, in a trial lasting from 8 am until 7 pm. According to Coke, Garnet instigated the plot: "[Garnet] hath many gifts and endowments of nature, by art learned, a good linguist and, by profession, a Jesuit and a Superior as indeed he is Superior to all his predecessors in devilish treason, a Doctor of Dissimulation, Deposing of Princes, Disposing of Kingdoms, Daunting and deterring of subjects, and Destruction." Garnet refuted all the charges against him, and explained the Catholic position on such matters, but he was nevertheless found guilty and sentenced to death. Although Catesby and Percy escaped the executioner, their bodies were exhumed and decapitated, and their heads exhibited on spikes outside the House of Lords. 
On a cold 30 January, Everard Digby, Robert Wintour, John Grant, and Thomas Bates were tied to hurdles—wooden panels—and dragged through the crowded streets of London to St Paul's Churchyard. Digby, the first to mount the scaffold, asked the spectators for forgiveness, and refused the attentions of a Protestant clergyman. He was stripped of his clothing, and wearing only a shirt, climbed the ladder to place his head through the noose. He was quickly cut down, and while still fully conscious was castrated, disembowelled, and then quartered, along with the three other prisoners. The following day, Thomas Wintour, Ambrose Rookwood, Robert Keyes, and Guy Fawkes were hanged, drawn and quartered, opposite the building they had planned to blow up, in the Old Palace Yard at Westminster. Keyes did not wait for the hangman's command and jumped from the gallows, but he survived the drop and was led to the quartering block. Although weakened by his torture, Fawkes managed to jump from the gallows and break his neck, thus avoiding the agony of the gruesome latter part of his execution. Stephen Littleton was executed at Stafford. His cousin Humphrey, despite his cooperation with the authorities, met his end at Red Hill near Worcester. Henry Garnet's execution took place on 3 May 1606. Greater freedom for Roman Catholics to worship as they chose seemed unlikely in 1604, but the discovery of such a wide-ranging conspiracy, the capture of those involved, and the subsequent trials, led Parliament to consider introducing new anti-Catholic legislation. The event also destroyed all hope that the Spanish would ever secure tolerance of the Catholics in England. In the summer of 1606, laws against recusancy were strengthened; the Popish Recusants Act returned England to the Elizabethan system of fines and restrictions, introduced a sacramental test, and an Oath of Allegiance, requiring Catholics to abjure as a "heresy" the doctrine that "princes excommunicated by the Pope could be deposed or assassinated". Catholic Emancipation took another 200 years, but many important and loyal Catholics retained high office during King James I's reign. Although there was no "golden time" of "toleration" of Catholics, which Father Garnet had hoped for, James's reign was nevertheless a period of relative leniency for Catholics, and few were subject to prosecution. The playwright William Shakespeare had already used the history of Northumberland's family in his "Henry IV" series of plays, and the events of the Gunpowder Plot seem to have featured alongside the earlier Gowrie conspiracy in "Macbeth", written some time between 1603 and 1607. Interest in the demonic was heightened by the Gunpowder Plot. The King had become engaged in the great debate about other-worldly powers in writing his "Daemonologie" in 1597, before he became King of England as well as Scotland. Inversions seen in such lines as "fair is foul and foul is fair" are used frequently, and another possible reference to the plot relates to the use of equivocation; Garnet's "A Treatise of Equivocation" was found on one of the plotters. Another writer influenced by the plot was John Milton, who in 1626 wrote what one commentator has called a "critically vexing poem", "In Quintum Novembris". Reflecting "partisan public sentiment on an English-Protestant national holiday", in the published editions of 1645 and 1673 the poem is preceded by five epigrams on the subject of the Gunpowder Plot, apparently written by Milton in preparation for the larger work. 
The plot may also have influenced his later work, "Paradise Lost". The Gunpowder Plot was commemorated for years by special sermons and other public acts, such as the ringing of church bells. It added to an increasingly full calendar of Protestant celebrations that contributed to the national and religious life of 17th-century England, and has evolved into the Bonfire Night of today. In "What If the Gunpowder Plot Had Succeeded?" historian Ronald Hutton considered the events which might have followed a successful implementation of the plot, and the destruction of the House of Lords and all those within it. He concluded that a severe backlash against suspected Catholics would have followed, and that without foreign assistance a successful rebellion would have been unlikely; despite differing religious convictions, most Englishmen were loyal to the institution of the monarchy. England might have become a more "Puritan absolute monarchy", as "existed in Sweden, Denmark, Saxony, and Prussia in the seventeenth century", rather than following the path of parliamentary and civil reform that it did. Many at the time felt that Salisbury had been involved in the plot to gain favour with the King and enact more stridently anti-Catholic legislation. Such conspiracy theories alleged that Salisbury had either actually invented the plot or allowed it to continue when his agents had already infiltrated it, for the purposes of propaganda. The Popish Plot of 1678 sparked renewed interest in the Gunpowder Plot, resulting in a book by Thomas Barlow, Bishop of Lincoln, which refuted "a bold and groundless surmise that all this was a contrivance of Secretary Cecil". In 1897 Father John Gerard of Stonyhurst College, namesake of John Gerard (who, following the plot's discovery, had evaded capture), wrote an account called "What was the Gunpowder Plot?", alleging Salisbury's culpability. This prompted a refutation later that year by Samuel Gardiner, who argued that Gerard had gone too far in trying to "wipe away the reproach" which the plot had exacted on generations of English Catholics. Gardiner portrayed Salisbury as guilty of nothing more than opportunism. Subsequent attempts to prove Salisbury's involvement, such as Francis Edwards's 1969 work "Guy Fawkes: the real story of the gunpowder plot?", have similarly foundered on the lack of any clear evidence. The cellars under the Houses of Parliament continued to be leased out to private individuals until 1678, when news of the Popish Plot broke. It was then considered prudent to search the cellars on the day before each State Opening of Parliament, a ritual that survives to this day. In January 1606, during the first sitting of Parliament since the plot, the Observance of 5th November Act 1605 was passed, making services and sermons commemorating the event an annual feature of English life; the act remained in force until 1859. The tradition of marking the day with the ringing of church bells and bonfires started soon after the Plot's discovery, and fireworks were included in some of the earliest celebrations. In Britain, the 5th of November is variously called Bonfire Night, Fireworks Night, or Guy Fawkes Night. It remains the custom in Britain, on or around 5 November, to let off fireworks. Traditionally, in the weeks running up to the 5th, children made "guys"—effigies supposedly of Fawkes—usually made from old clothes stuffed with newspaper, and fitted with a grotesque mask, to be burnt on the 5 November bonfire. 
These guys were exhibited in the street to collect money for fireworks, although this custom has become less common. The word guy thus came in the 19th century to mean an oddly dressed person, and hence in the 20th and 21st centuries to mean any male person. November the 5th firework displays and bonfire parties are common throughout Britain, in major public displays and in private gardens. In some areas, particularly in Sussex, there are extensive processions, large bonfires and firework displays organised by local bonfire societies, the most elaborate of which take place in Lewes. According to the biographer Esther Forbes, the Guy Fawkes Day celebration in the pre-revolutionary American colonies was a very popular holiday. In Boston, the revelry on "Pope Night" took on anti-authoritarian overtones, and often became so dangerous that many would not venture out of their homes. In a 2005 ITV programme, a full-size replica of the House of Lords was built and destroyed with barrels of gunpowder. The experiment was conducted on the Advantica Spadeadam test site, and demonstrated that the explosion, if the gunpowder was in good order, would have killed all those in the building. The power of the explosion was such that the deep concrete walls (replicating how archives suggest the walls of the old House of Lords were constructed) were reduced to rubble. Measuring devices placed in the chamber to calculate the force of the blast were themselves destroyed by the explosion; the skull of the dummy representing King James, which had been placed on a throne inside the chamber surrounded by courtiers, peers and bishops, was found a considerable distance from the site. According to the findings of the programme, no one within the immediate blast radius could have survived, and all of the stained glass windows in Westminster Abbey would have been shattered, as would all of the windows in the vicinity of the Palace. The explosion would have been seen from miles away, and heard from further away still. Even if only half of the gunpowder had gone off, everyone in the House of Lords and its environs would have been killed instantly. The programme also disproved claims that some deterioration in the quality of the gunpowder would have prevented the explosion. A portion of deliberately deteriorated gunpowder, of such low quality as to make it unusable in firearms, when placed in a heap and ignited, still managed to create a large explosion. The impact of even deteriorated gunpowder would have been magnified by its containment in wooden barrels, compensating for the quality of the contents. The compression would have created a cannon effect, with the powder first blowing up from the top of the barrel before, a millisecond later, blowing out. Calculations showed that Fawkes, who was skilled in the use of gunpowder, had deployed double the amount needed. Some of the gunpowder guarded by Fawkes may have survived. In March 2002 workers cataloguing archives of diarist John Evelyn at the British Library found a box containing a number of gunpowder samples, including a compressed bar with a note in Evelyn's handwriting stating that it had belonged to Guy Fawkes. A further note, written in the 19th century, confirmed this provenance, although in 1952 the document acquired a new comment: "but there was none left!" 
Hydrogen Hydrogen is a chemical element with symbol H and atomic number 1. With a standard atomic weight of 1.008, hydrogen is the lightest element on the periodic table. 
Its monatomic form (H) is the most abundant chemical substance in the Universe, constituting roughly 75% of all baryonic mass. Non-remnant stars are mainly composed of hydrogen in the plasma state. The most common isotope of hydrogen, termed "protium" (name rarely used, symbol 1H), has one proton and no neutrons. The universal emergence of atomic hydrogen first occurred during the recombination epoch. At standard temperature and pressure, hydrogen is a colorless, odorless, tasteless, non-toxic, nonmetallic, highly combustible diatomic gas with the molecular formula H2. Since hydrogen readily forms covalent compounds with most nonmetallic elements, most of the hydrogen on Earth exists in molecular forms such as water or organic compounds. Hydrogen plays a particularly important role in acid–base reactions because most acid–base reactions involve the exchange of protons between soluble molecules. In ionic compounds, hydrogen can take the form of a negatively charged ion (i.e., an anion), when it is known as a hydride, or as a positively charged (i.e., cation) species denoted by the symbol H+. The hydrogen cation is written as though composed of a bare proton, but in reality, hydrogen cations in ionic compounds are always more complex. As the only neutral atom for which the Schrödinger equation can be solved analytically, study of the energetics and bonding of the hydrogen atom has played a key role in the development of quantum mechanics. Hydrogen gas was first artificially produced in the early 16th century by the reaction of acids on metals. In 1766–81, Henry Cavendish was the first to recognize that hydrogen gas was a discrete substance, and that it produces water when burned, the property for which it was later named: in Greek, hydrogen means "water-former". Industrial production is mainly from steam reforming natural gas, and less often from more energy-intensive methods such as the electrolysis of water. Most hydrogen is used near the site of its production, the two largest uses being fossil fuel processing (e.g., hydrocracking) and ammonia production, mostly for the fertilizer market. Hydrogen is a concern in metallurgy as it can embrittle many metals, complicating the design of pipelines and storage tanks. Hydrogen gas (dihydrogen or molecular hydrogen, also called diprotium when consisting specifically of a pair of protium atoms) is highly flammable and will burn in air at a very wide range of concentrations between 4% and 75% by volume. The enthalpy of combustion is −286 kJ/mol: 2 H2(g) + O2(g) → 2 H2O(l) + 572 kJ (286 kJ per mole of H2 burned). Hydrogen gas forms explosive mixtures with air in concentrations from 4–74% and with chlorine at 5–95%. The explosive reactions may be triggered by spark, heat, or sunlight. The hydrogen autoignition temperature, the temperature of spontaneous ignition in air, is about 500 °C. Pure hydrogen-oxygen flames emit ultraviolet light and with a high oxygen mix are nearly invisible to the naked eye, as illustrated by the faint plume of the Space Shuttle Main Engine, compared to the highly visible plume of a Space Shuttle Solid Rocket Booster, which uses an ammonium perchlorate composite propellant. The detection of a burning hydrogen leak may require a flame detector; such leaks can be very dangerous. Hydrogen flames in other conditions are blue, resembling blue natural gas flames. The destruction of the Hindenburg airship was a notorious example of hydrogen combustion and the cause is still debated. The visible orange flames in that incident were the result of a rich mixture of hydrogen to oxygen combined with carbon compounds from the airship skin. 
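The −286 kJ/mol enthalpy of combustion quoted above implies an unusually high energy content per unit mass. As a rough, illustrative check (the rounded constants below are standard reference values and assumptions of this sketch, not figures taken from the text), it corresponds to roughly 140 MJ released per kilogram of hydrogen burned, about three times the figure for typical hydrocarbon fuels:

```python
# Back-of-the-envelope energy density of hydrogen combustion, using the
# -286 kJ/mol enthalpy of combustion quoted above (per mole of H2 burned to
# liquid water). Rounded constants; illustrative only, not source data.
enthalpy_kj_per_mol = 286.0     # kJ released per mole of H2
molar_mass_h2_kg = 2.016e-3     # kg per mole of H2

energy_mj_per_kg = enthalpy_kj_per_mol / molar_mass_h2_kg / 1000.0
print(f"about {energy_mj_per_kg:.0f} MJ released per kg of H2")  # ~142 MJ/kg
```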
H2 reacts with every oxidizing element. Hydrogen can react spontaneously and violently at room temperature with chlorine and fluorine to form the corresponding hydrogen halides, hydrogen chloride and hydrogen fluoride, which are also potentially dangerous acids. The ground state energy level of the electron in a hydrogen atom is −13.6 eV, which is equivalent to an ultraviolet photon of roughly 91 nm wavelength. The energy levels of hydrogen can be calculated fairly accurately using the Bohr model of the atom, which conceptualizes the electron as "orbiting" the proton in analogy to the Earth's orbit of the Sun. However, the atomic electron and proton are held together by electromagnetic force, while planets and celestial objects are held by gravity. Because of the discretization of angular momentum postulated in early quantum mechanics by Bohr, the electron in the Bohr model can only occupy certain allowed distances from the proton, and therefore only certain allowed energies. A more accurate description of the hydrogen atom comes from a purely quantum mechanical treatment that uses the Schrödinger equation, Dirac equation or even the Feynman path integral formulation to calculate the probability density of the electron around the proton. The most complicated treatments allow for the small effects of special relativity and vacuum polarization. In the quantum mechanical treatment, the electron in a ground state hydrogen atom has no angular momentum at all—illustrating how the "planetary orbit" differs from electron motion. There exist two different spin isomers of hydrogen diatomic molecules that differ by the relative spin of their nuclei. In the orthohydrogen form, the spins of the two protons are parallel and form a triplet state with a molecular spin quantum number of 1 (½ + ½); in the parahydrogen form the spins are antiparallel and form a singlet with a molecular spin quantum number of 0 (½ − ½). At standard temperature and pressure, hydrogen gas contains about 25% of the para form and 75% of the ortho form, also known as the "normal form". The equilibrium ratio of orthohydrogen to parahydrogen depends on temperature, but because the ortho form is an excited state and has a higher energy than the para form, it is unstable and cannot be purified. At very low temperatures, the equilibrium state is composed almost exclusively of the para form. The liquid and gas phase thermal properties of pure parahydrogen differ significantly from those of the normal form because of differences in rotational heat capacities, as discussed more fully in "spin isomers of hydrogen". The ortho/para distinction also occurs in other hydrogen-containing molecules or functional groups, such as water and methylene, but is of little significance for their thermal properties. The uncatalyzed interconversion between para and ortho H2 increases with increasing temperature; thus rapidly condensed H2 contains large quantities of the high-energy ortho form that converts to the para form very slowly. The ortho/para ratio in condensed H2 is an important consideration in the preparation and storage of liquid hydrogen: the conversion from ortho to para is exothermic and produces enough heat to evaporate some of the hydrogen liquid, leading to loss of liquefied material. Catalysts for the ortho-para interconversion, such as ferric oxide, activated carbon, platinized asbestos, rare earth metals, uranium compounds, chromic oxide, or some nickel compounds, are used during hydrogen cooling. 
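The −13.6 eV ground-state energy and the roughly 91 nm photon mentioned earlier in this section are consistent with the Bohr-model formula E_n = −13.6 eV / n². A minimal sketch of that check follows; the rounded constants are assumptions chosen for illustration, not values given in the text:

```python
# Bohr-model energy levels of atomic hydrogen, E_n = -13.6 eV / n**2, and the
# wavelength of a photon carrying the full 13.6 eV binding energy.
# Rounded constants; an illustrative check rather than source data.
HC_EV_NM = 1239.84   # Planck constant times the speed of light, in eV*nm
E1_EV = 13.6         # magnitude of the ground-state energy, in eV

def level_energy_ev(n: int) -> float:
    """Energy of the n-th Bohr level in eV (negative values mean bound states)."""
    return -E1_EV / n ** 2

print(level_energy_ev(1))     # -13.6  (ground state)
print(level_energy_ev(2))     # -3.4   (first excited state)
print(HC_EV_NM / E1_EV)       # ~91.2  (wavelength, in nm, of an ionizing photon)
```

The ~91 nm figure is simply λ = hc/E applied to the 13.6 eV ionization energy.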
While H2 is not very reactive under standard conditions, it does form compounds with most elements. Hydrogen can form compounds with elements that are more electronegative, such as halogens (e.g., F, Cl, Br, I), or oxygen; in these compounds hydrogen takes on a partial positive charge. When bonded to fluorine, oxygen, or nitrogen, hydrogen can participate in a form of medium-strength noncovalent bonding with the hydrogen of other similar molecules, a phenomenon called hydrogen bonding that is critical to the stability of many biological molecules. Hydrogen also forms compounds with less electronegative elements, such as metals and metalloids, where it takes on a partial negative charge. These compounds are often known as hydrides. Hydrogen forms a vast array of compounds with carbon called the hydrocarbons, and an even vaster array with heteroatoms that, because of their general association with living things, are called organic compounds. The study of their properties is known as organic chemistry and their study in the context of living organisms is known as biochemistry. By some definitions, "organic" compounds are only required to contain carbon. However, most of them also contain hydrogen, and because it is the carbon-hydrogen bond which gives this class of compounds most of its particular chemical characteristics, carbon-hydrogen bonds are required in some definitions of the word "organic" in chemistry. Millions of hydrocarbons are known, and they are usually formed by complicated synthetic pathways that seldom involve elementary hydrogen. Compounds of hydrogen are often called hydrides, a term that is used fairly loosely. The term "hydride" suggests that the H atom has acquired a negative or anionic character, denoted H−, and is used when hydrogen forms a compound with a more electropositive element. The existence of the hydride anion, suggested by Gilbert N. Lewis in 1916 for group 1 and 2 salt-like hydrides, was demonstrated by Moers in 1920 by the electrolysis of molten lithium hydride (LiH), producing a stoichiometric quantity of hydrogen at the anode. For hydrides other than group 1 and 2 metals, the term is quite misleading, considering the low electronegativity of hydrogen. An exception in group 2 hydrides is beryllium hydride (BeH2), which is polymeric. In lithium aluminium hydride, the AlH4− anion carries hydridic centers firmly attached to the Al(III). Although hydrides can be formed with almost all main-group elements, the number and combination of possible compounds varies widely; for example, more than 100 binary borane hydrides are known, but only one binary aluminium hydride. Binary indium hydride has not yet been identified, although larger complexes exist. In inorganic chemistry, hydrides can also serve as bridging ligands that link two metal centers in a coordination complex. This function is particularly common in group 13 elements, especially in boranes (boron hydrides) and aluminium complexes, as well as in clustered carboranes. Oxidation of hydrogen removes its electron and gives H+, which contains no electrons and a nucleus which is usually composed of one proton. That is why H+ is often called a proton. This species is central to discussion of acids. Under the Brønsted–Lowry acid–base theory, acids are proton donors, while bases are proton acceptors. A bare proton, H+, cannot exist in solution or in ionic crystals because of its unstoppable attraction to other atoms or molecules with electrons. 
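As a standard textbook illustration of the Brønsted–Lowry picture described above (the specific reaction is an assumed example, not one given in the text), dissolving hydrogen chloride in water transfers a proton from the acid to a water molecule, producing the hydronium ion rather than leaving a free proton:

```latex
% Illustrative Brønsted-Lowry proton transfer: HCl donates a proton, H2O accepts it.
\[ \mathrm{HCl} + \mathrm{H_2O} \longrightarrow \mathrm{H_3O^{+}} + \mathrm{Cl^{-}} \]
```

The "bare" proton of the formalism is thus always captured by whatever electron-rich species is available.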
Except at the high temperatures associated with plasmas, such protons cannot be removed from the electron clouds of atoms and molecules, and will remain attached to them. However, the term 'proton' is sometimes used loosely and metaphorically to refer to positively charged or cationic hydrogen attached to other species in this fashion, and as such is denoted "H+" without any implication that any single protons exist freely as a species. To avoid the implication of the naked "solvated proton" in solution, acidic aqueous solutions are sometimes considered to contain a less unlikely fictitious species, termed the "hydronium ion" (H3O+). However, even in this case, such solvated hydrogen cations are more realistically conceived as being organized into clusters that form species closer to H9O4+. Other oxonium ions are found when water is in acidic solution with other solvents. Although exotic on Earth, one of the most common ions in the universe is the H3+ ion, known as protonated molecular hydrogen or the trihydrogen cation. NASA has investigated the use of atomic hydrogen as a rocket propellant. It could be stored in liquid helium to prevent it from recombining into molecular hydrogen. When the helium is vaporized, the atomic hydrogen would be released and combine back to molecular hydrogen. The result would be an intensely hot stream of hydrogen and helium gas. The liftoff weight of rockets could be reduced by 50% by this method. Most interstellar hydrogen is in the form of atomic hydrogen because the atoms can seldom collide and combine. They are the source of the important 21 cm hydrogen line in astronomy at 1420 MHz. Hydrogen has three naturally occurring isotopes, denoted 1H, 2H, and 3H. Other, highly unstable nuclei (4H to 7H) have been synthesized in the laboratory but not observed in nature. Hydrogen is the only element that has different names for its isotopes in common use today. During the early study of radioactivity, various heavy radioactive isotopes were given their own names, but such names are no longer used, except for deuterium and tritium. The symbols D and T (instead of 2H and 3H) are sometimes used for deuterium and tritium, but the corresponding symbol for protium, P, is already in use for phosphorus and thus is not available for protium. In its nomenclatural guidelines, the International Union of Pure and Applied Chemistry (IUPAC) allows any of D, T, 2H, and 3H to be used, although 2H and 3H are preferred. The exotic atom muonium (symbol Mu), composed of an antimuon and an electron, is also sometimes considered as a light radioisotope of hydrogen, due to the mass difference between the antimuon and the electron. Muonium was discovered in 1960. During the muon's lifetime, muonium can enter into compounds such as muonium chloride (MuCl) or sodium muonide (NaMu), analogous to hydrogen chloride and sodium hydride respectively. In 1671, Robert Boyle discovered and described the reaction between iron filings and dilute acids, which results in the production of hydrogen gas. In 1766, Henry Cavendish was the first to recognize hydrogen gas as a discrete substance, by naming the gas from a metal-acid reaction "inflammable air". He speculated that "inflammable air" was in fact identical to the hypothetical substance called "phlogiston", further finding in 1781 that the gas produces water when burned. He is usually given credit for the discovery of hydrogen as an element. 
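A representative example of the metal–acid reactions described above (Boyle's iron filings in dilute acid; the choice of sulfuric acid here is an assumption for illustration, since the text does not name the acid) is:

```latex
% Dilute sulfuric acid acting on iron filings, liberating hydrogen gas (illustrative example).
\[ \mathrm{Fe} + \mathrm{H_2SO_4} \longrightarrow \mathrm{FeSO_4} + \mathrm{H_2\uparrow} \]
```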
In 1783, Antoine Lavoisier gave the element the name hydrogen (from the Greek ὑδρο- "hydro" meaning "water" and -γενής "genes" meaning "creator") when he and Laplace reproduced Cavendish's finding that water is produced when hydrogen is burned. Lavoisier produced hydrogen for his experiments on mass conservation by reacting a flux of steam with metallic iron through an incandescent iron tube heated in a fire; at high temperature the iron is oxidized anaerobically by the protons of water, yielding iron oxide and hydrogen gas. Many metals such as zirconium undergo a similar reaction with water leading to the production of hydrogen. Hydrogen was liquefied for the first time by James Dewar in 1898 by using regenerative cooling and his invention, the vacuum flask. He produced solid hydrogen the next year. Deuterium was discovered in December 1931 by Harold Urey, and tritium was prepared in 1934 by Ernest Rutherford, Mark Oliphant, and Paul Harteck. Heavy water, which consists of deuterium in the place of regular hydrogen, was discovered by Urey's group in 1932. François Isaac de Rivaz built the first de Rivaz engine, an internal combustion engine powered by a mixture of hydrogen and oxygen, in 1806. Edward Daniel Clarke invented the hydrogen gas blowpipe in 1819. The Döbereiner lamp and limelight were invented in 1823. The first hydrogen-filled balloon was invented by Jacques Charles in 1783. Hydrogen provided the lift for the first reliable form of air travel following the 1852 invention of the first hydrogen-lifted airship by Henri Giffard. German count Ferdinand von Zeppelin promoted the idea of rigid airships lifted by hydrogen that were later called Zeppelins, the first of which had its maiden flight in 1900. Regularly scheduled flights started in 1910 and by the outbreak of World War I in August 1914, they had carried 35,000 passengers without a serious incident. Hydrogen-lifted airships were used as observation platforms and bombers during the war. The first non-stop transatlantic crossing was made by the British airship "R34" in 1919. Regular passenger service resumed in the 1920s and the discovery of helium reserves in the United States promised increased safety, but the U.S. government refused to sell the gas for this purpose. Therefore, H₂ was used in the "Hindenburg" airship, which was destroyed in a midair fire over New Jersey on 6 May 1937. The incident was broadcast live on radio and filmed. Ignition of leaking hydrogen is widely assumed to be the cause, but later investigations pointed to the ignition of the aluminized fabric coating by static electricity. The damage to hydrogen's reputation as a lifting gas was already done, however, and commercial hydrogen airship travel ceased. Hydrogen is still used, in preference to non-flammable but more expensive helium, as a lifting gas for weather balloons. In the same year, 1937, the first hydrogen-cooled turbogenerator went into service, with gaseous hydrogen as a coolant in the rotor and the stator, at Dayton, Ohio, by the Dayton Power & Light Co.; because of the thermal conductivity of hydrogen gas, this is the most common type in its field today. The nickel-hydrogen battery was used for the first time in 1977 aboard the U.S. Navy's Navigation Technology Satellite-2 (NTS-2). For example, the ISS, Mars Odyssey and the Mars Global Surveyor are equipped with nickel-hydrogen batteries. 
In the dark part of its orbit, the Hubble Space Telescope is also powered by nickel-hydrogen batteries, which were finally replaced in May 2009, more than 19 years after launch and 13 years beyond their design life. Because of its simple atomic structure, consisting only of a proton and an electron, the hydrogen atom, together with the spectrum of light produced from it or absorbed by it, has been central to the development of the theory of atomic structure. Furthermore, study of the corresponding simplicity of the hydrogen molecule and the corresponding cation brought understanding of the nature of the chemical bond, which followed shortly after the quantum mechanical treatment of the hydrogen atom had been developed in the mid-1920s. One of the first quantum effects to be explicitly noticed (but not understood at the time) was an observation by James Clerk Maxwell involving hydrogen, half a century before full quantum mechanical theory arrived. Maxwell observed that the specific heat capacity of H₂ unaccountably departs from that of a diatomic gas below room temperature and begins to increasingly resemble that of a monatomic gas at cryogenic temperatures. According to quantum theory, this behavior arises from the spacing of the (quantized) rotational energy levels, which are particularly widely spaced in H₂ because of its low mass. These widely spaced levels inhibit equal partition of heat energy into rotational motion in hydrogen at low temperatures. Diatomic gases composed of heavier atoms do not have such widely spaced levels and do not exhibit the same effect. Antihydrogen is the antimatter counterpart to hydrogen. It consists of an antiproton with a positron. Antihydrogen is the only type of antimatter atom to have been produced as of 2015. Hydrogen, as atomic H, is the most abundant chemical element in the universe, making up 75% of normal matter by mass and more than 90% by number of atoms. (Most of the mass of the universe, however, is not in the form of chemical-element type matter, but rather is postulated to occur as yet-undetected forms of mass such as dark matter and dark energy.) This element is found in great abundance in stars and gas giant planets. Molecular clouds of H₂ are associated with star formation. Hydrogen plays a vital role in powering stars through the proton–proton reaction and the CNO cycle of nuclear fusion. Throughout the universe, hydrogen is mostly found in the atomic and plasma states, with properties quite different from those of molecular hydrogen. As a plasma, hydrogen's electron and proton are not bound together, resulting in very high electrical conductivity and high emissivity (producing the light from the Sun and other stars). The charged particles are highly influenced by magnetic and electric fields. For example, in the solar wind they interact with the Earth's magnetosphere giving rise to Birkeland currents and the aurora. Hydrogen is found in the neutral atomic state in the interstellar medium. The large amount of neutral hydrogen found in the damped Lyman-alpha systems is thought to dominate the cosmological baryonic density of the Universe up to redshift "z"=4. Under ordinary conditions on Earth, elemental hydrogen exists as the diatomic gas, H₂. However, hydrogen gas is very rare in the Earth's atmosphere (1 ppm by volume) because of its light weight, which enables it to escape from Earth's gravity more easily than heavier gases. 
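The rotational freeze-out described above can be illustrated with a standard textbook estimate (the formula and the numerical value of the characteristic temperature are well-known reference figures, not quantities taken from this article). The quantized rotational levels of a diatomic molecule with moment of inertia I are

$$E_J = \frac{\hbar^2}{2I}\,J(J+1), \qquad \theta_{\mathrm{rot}} = \frac{\hbar^2}{2 I k_B} \approx 85\text{--}90\ \mathrm{K}\ \text{for H}_2,$$

so well below this characteristic temperature the rotational modes of H₂ are effectively frozen out and its molar heat capacity at constant volume falls from about (5/2)R toward the monatomic-like value (3/2)R. Heavier diatomic gases such as N₂ or O₂ have characteristic rotational temperatures of only a few kelvin, which is why they show no comparable effect at ordinary temperatures.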
However, hydrogen is the third most abundant element on the Earth's surface, mostly in the form of chemical compounds such as hydrocarbons and water. Hydrogen gas is produced by some bacteria and algae and is a natural component of flatus, as is methane, itself a hydrogen source of increasing importance. A molecular form called protonated molecular hydrogen (H₃⁺) is found in the interstellar medium, where it is generated by ionization of molecular hydrogen from cosmic rays. This charged ion has also been observed in the upper atmosphere of the planet Jupiter. The ion is relatively stable in the environment of outer space due to the low temperature and density. H₃⁺ is one of the most abundant ions in the Universe, and it plays a notable role in the chemistry of the interstellar medium. Neutral triatomic hydrogen (H₃) can exist only in an excited form and is unstable. By contrast, the positive hydrogen molecular ion (H₂⁺) is a rare molecule in the universe. H₂ is produced in chemistry and biology laboratories, often as a by-product of other reactions; in industry for the hydrogenation of unsaturated substrates; and in nature as a means of expelling reducing equivalents in biochemical reactions. The electrolysis of water is a simple method of producing hydrogen. A low-voltage current is run through the water, and gaseous oxygen forms at the anode while gaseous hydrogen forms at the cathode. Typically the cathode is made from platinum or another inert metal when producing hydrogen for storage. If, however, the gas is to be burnt on site, oxygen is desirable to assist the combustion, and so both electrodes would be made from inert metals. (Iron, for instance, would oxidize, and thus decrease the amount of oxygen given off.) The theoretical maximum efficiency (electricity used vs. energetic value of hydrogen produced) is in the range 88–94%. When determining the electrical efficiency of PEM (proton exchange membrane) electrolysis, the higher heating value (HHV) is used. This is because the catalyst layer interacts with water as steam. As the process operates at 80 °C for PEM electrolysers, the waste heat can be redirected through the system to create the steam, resulting in a higher overall electrical efficiency. The lower heating value (LHV) must be used for alkaline electrolysers, as the process within these electrolysers requires water in liquid form and uses alkalinity to facilitate the breaking of the bond holding the hydrogen and oxygen atoms together. The lower heating value must also be used for fuel cells, as steam is the output rather than the input. Hydrogen is often produced using natural gas, which involves the removal of hydrogen from hydrocarbons at very high temperatures, with about 95% of hydrogen production coming from steam reforming around the year 2000. Commercial bulk hydrogen is usually produced by the steam reforming of natural gas. At high temperatures (1000–1400 K, 700–1100 °C or 1300–2000 °F), steam (water vapor) reacts with methane to yield carbon monoxide and H₂. This reaction is favored at low pressures but is nonetheless conducted at high pressures (2.0 MPa, 20 atm or 600 inHg). This is because high-pressure H₂ is the most marketable product and pressure swing adsorption (PSA) purification systems work better at higher pressures. The product mixture is known as "synthesis gas" because it is often used directly for the production of methanol and related compounds. Hydrocarbons other than methane can be used to produce synthesis gas with varying product ratios. 
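As a minimal illustrative sketch of how the HHV and LHV figures above enter an electrolyser efficiency calculation, the following assumes commonly cited reference heating values for hydrogen and a hypothetical electricity consumption of 55 kWh per kilogram of hydrogen; none of these numbers is given in the article itself.

# Sketch: electrolyser efficiency = chemical energy of the H2 produced / electricity consumed.
# Heating values are commonly cited literature figures, assumed here for illustration.
HHV_KWH_PER_KG = 39.4  # higher heating value of hydrogen (product water condensed)
LHV_KWH_PER_KG = 33.3  # lower heating value of hydrogen (product water as vapor)

def electrolyser_efficiency(electricity_kwh: float, hydrogen_kg: float,
                            basis: str = "HHV") -> float:
    """Fraction of the input electricity recovered as chemical energy in the hydrogen."""
    heating_value = HHV_KWH_PER_KG if basis == "HHV" else LHV_KWH_PER_KG
    return hydrogen_kg * heating_value / electricity_kwh

# Hypothetical stack consuming 55 kWh of electricity per kilogram of hydrogen:
print(f"PEM, HHV basis:      {electrolyser_efficiency(55.0, 1.0, 'HHV'):.0%}")  # ~72%
print(f"Alkaline, LHV basis: {electrolyser_efficiency(55.0, 1.0, 'LHV'):.0%}")  # ~61%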
One of the many complications to this highly optimized technology is the formation of coke or carbon; consequently, steam reforming typically employs an excess of steam (the reactions involved are sketched below). Additional hydrogen can be recovered from the steam by use of carbon monoxide through the water gas shift reaction, especially with an iron oxide catalyst. This reaction is also a common industrial source of carbon dioxide. Other important methods for H₂ production include the partial oxidation of hydrocarbons and the reaction of coal with steam, which can serve as a prelude to the shift reaction above. Hydrogen is sometimes produced and consumed in the same industrial process, without being separated. In the Haber process for the production of ammonia, hydrogen is generated from natural gas. Electrolysis of brine to yield chlorine also produces hydrogen as a co-product. In the laboratory, H₂ is usually prepared by the reaction of dilute non-oxidizing acids on some reactive metals such as zinc, using Kipp's apparatus. Aluminium can also produce H₂ upon treatment with bases. An alloy of aluminium and gallium in pellet form added to water can be used to generate hydrogen. The process also produces alumina, but the expensive gallium, which prevents the formation of an oxide skin on the pellets, can be re-used. This has important potential implications for a hydrogen economy, as hydrogen can be produced on-site and does not need to be transported. There are more than 200 thermochemical cycles which can be used for water splitting; around a dozen of these cycles, such as the iron oxide cycle, cerium(IV) oxide–cerium(III) oxide cycle, zinc–zinc oxide cycle, sulfur–iodine cycle, copper–chlorine cycle and hybrid sulfur cycle, are under research and in the testing phase to produce hydrogen and oxygen from water and heat without using electricity. A number of laboratories (including in France, Germany, Greece, Japan, and the USA) are developing thermochemical methods to produce hydrogen from solar energy and water. Under anaerobic conditions, iron and steel alloys are slowly oxidized by the protons of water, which are concomitantly reduced to molecular hydrogen (H₂). The anaerobic corrosion of iron leads first to the formation of ferrous hydroxide (green rust). In its turn, under anaerobic conditions, the ferrous hydroxide (Fe(OH)₂) can be oxidized by the protons of water to form magnetite and molecular hydrogen; this process is described by the Schikorr reaction. The well-crystallized magnetite (Fe₃O₄) is thermodynamically more stable than the ferrous hydroxide (Fe(OH)₂). This process occurs during the anaerobic corrosion of iron and steel in oxygen-free groundwater and in reducing soils below the water table. In the absence of atmospheric oxygen (O₂), in deep geological conditions prevailing far from the Earth's atmosphere, hydrogen (H₂) is produced during the process of serpentinization by the anaerobic oxidation, by the protons of water (H⁺), of the ferrous iron (Fe²⁺) silicate present in the crystal lattice of fayalite (Fe₂SiO₄, the olivine iron endmember). The corresponding reaction leads to the formation of magnetite (Fe₃O₄), quartz (SiO₂) and hydrogen (H₂), and closely resembles the Schikorr reaction observed in the anaerobic oxidation of ferrous hydroxide in contact with water. Of all the fault gases formed in power transformers, hydrogen is the most common and is generated under most fault conditions; thus, formation of hydrogen is an early indication of serious problems in the transformer's life cycle. 
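The reactions referred to in this passage can be summarized by their standard textbook balanced forms, reconstructed here for reference (these are well-established equations, not equations reproduced from this copy of the text):

$$\mathrm{CH_4 + H_2O \rightarrow CO + 3\,H_2} \quad \text{(steam reforming)}$$
$$\mathrm{CH_4 \rightarrow C + 2\,H_2} \quad \text{(coking side reaction)}$$
$$\mathrm{CO + H_2O \rightarrow CO_2 + H_2} \quad \text{(water gas shift)}$$
$$\mathrm{2\,CH_4 + O_2 \rightarrow 2\,CO + 4\,H_2} \quad \text{(partial oxidation)}$$
$$\mathrm{C + H_2O \rightarrow CO + H_2} \quad \text{(coal/steam reaction)}$$
$$\mathrm{Zn + 2\,H^+ \rightarrow Zn^{2+} + H_2} \quad \text{(laboratory acid route)}$$
$$\mathrm{Fe + 2\,H_2O \rightarrow Fe(OH)_2 + H_2} \quad \text{(anaerobic corrosion of iron)}$$
$$\mathrm{3\,Fe(OH)_2 \rightarrow Fe_3O_4 + 2\,H_2O + H_2} \quad \text{(Schikorr reaction)}$$
$$\mathrm{3\,Fe_2SiO_4 + 2\,H_2O \rightarrow 2\,Fe_3O_4 + 3\,SiO_2 + 2\,H_2} \quad \text{(serpentinization of fayalite)}$$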
Large quantities of H₂ are needed in the petroleum and chemical industries. The largest application of H₂ is for the processing ("upgrading") of fossil fuels, and in the production of ammonia. The key consumers of H₂ in a petrochemical plant include hydrodealkylation, hydrodesulfurization, and hydrocracking. H₂ has several other important uses. It is used as a hydrogenating agent, particularly in increasing the level of saturation of unsaturated fats and oils (found in items such as margarine), and in the production of methanol. It is similarly the source of hydrogen in the manufacture of hydrochloric acid. H₂ is also used as a reducing agent for metallic ores. Hydrogen is highly soluble in many rare earth and transition metals and is soluble in both nanocrystalline and amorphous metals. Hydrogen solubility in metals is influenced by local distortions or impurities in the crystal lattice. These properties may be useful when hydrogen is purified by passage through hot palladium disks, but the gas's high solubility is a metallurgical problem, contributing to the embrittlement of many metals, complicating the design of pipelines and storage tanks. Apart from its use as a reactant, H₂ has wide applications in physics and engineering. It is used as a shielding gas in welding methods such as atomic hydrogen welding. H₂ is used as the rotor coolant in electrical generators at power stations, because it has the highest thermal conductivity of any gas. Liquid H₂ is used in cryogenic research, including superconductivity studies. Because H₂ is lighter than air, having a little more than 1/14 of the density of air, it was once widely used as a lifting gas in balloons and airships. In more recent applications, hydrogen is used pure or mixed with nitrogen (sometimes called forming gas) as a tracer gas for minute leak detection. Applications can be found in the automotive, chemical, power generation, aerospace, and telecommunications industries. Hydrogen is an authorized food additive (E 949) used, among other things, for food-package leak testing and for its anti-oxidizing properties. Hydrogen's rarer isotopes also each have specific applications. Deuterium (hydrogen-2) is used in nuclear fission applications as a moderator to slow neutrons, and in nuclear fusion reactions. Deuterium compounds have applications in chemistry and biology in studies of reaction isotope effects. Tritium (hydrogen-3), produced in nuclear reactors, is used in the production of hydrogen bombs, as an isotopic label in the biosciences, and as a radiation source in luminous paints. The triple point temperature of equilibrium hydrogen is a defining fixed point on the ITS-90 temperature scale at 13.8033 kelvins. Hydrogen is commonly used in power stations as a coolant in generators due to a number of favorable properties that are a direct result of its light diatomic molecules. These include low density, low viscosity, and the highest specific heat and thermal conductivity of all gases. Hydrogen is not an energy resource, except in the hypothetical context of commercial nuclear fusion power plants using deuterium or tritium, a technology presently far from development. The Sun's energy comes from nuclear fusion of hydrogen, but this process is difficult to achieve controllably on Earth. Elemental hydrogen from solar, biological, or electrical sources requires more energy to make than is obtained by burning it, so in these cases hydrogen functions as an energy carrier, like a battery. 
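A small sketch illustrating the "1/14 of the density of air" figure and the resulting lift per cubic metre follows; the ideal-gas approximation, the molar masses, and the chosen temperature and pressure are standard assumptions for illustration, not values given in the article.

# Ideal-gas estimate of gas densities and buoyant lift at 0 degrees C and 1 atm.
R = 8.314          # J/(mol K)
T = 273.15         # K
P = 101_325        # Pa

def density(molar_mass_g_per_mol: float) -> float:
    """Ideal-gas density in kg/m^3 at the conditions above."""
    return P * (molar_mass_g_per_mol / 1000.0) / (R * T)

rho_air = density(28.97)   # mean molar mass of dry air (assumed)
rho_h2 = density(2.016)
rho_he = density(4.003)

print(f"H2 / air density ratio: {rho_h2 / rho_air:.3f}")       # ~0.07, i.e. about 1/14
print(f"Lift of H2: {rho_air - rho_h2:.2f} kg per cubic metre")  # ~1.2 kg/m^3
print(f"Lift of He: {rho_air - rho_he:.2f} kg per cubic metre")  # ~1.1 kg/m^3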
Hydrogen may be obtained from fossil sources (such as methane), but these sources are unsustainable. The energy density per unit "volume" of both liquid hydrogen and compressed hydrogen gas at any practicable pressure is significantly less than that of traditional fuel sources, although the energy density per unit fuel "mass" is higher. Nevertheless, elemental hydrogen has been widely discussed in the context of energy, as a possible future "carrier" of energy on an economy-wide scale. For example, CO₂ sequestration followed by carbon capture and storage could be conducted at the point of H₂ production from fossil fuels. Hydrogen used in transportation would burn relatively cleanly, with some NOₓ emissions, but without carbon emissions. However, the infrastructure costs associated with full conversion to a hydrogen economy would be substantial. Fuel cells can convert hydrogen and oxygen directly to electricity more efficiently than internal combustion engines. Hydrogen is employed to saturate broken ("dangling") bonds of amorphous silicon and amorphous carbon, which helps to stabilize material properties. It is also a potential electron donor in various oxide materials, including ZnO, SnO₂, CdO, MgO, ZrO₂, HfO₂, La₂O₃, Y₂O₃, TiO₂, SrTiO₃, LaAlO₃, SiO₂, Al₂O₃, ZrSiO₄, HfSiO₄, and SrZrO₃. H₂ is a product of some types of anaerobic metabolism and is produced by several microorganisms, usually via reactions catalyzed by iron- or nickel-containing enzymes called hydrogenases. These enzymes catalyze the reversible redox reaction between H₂ and its components, two protons and two electrons. Creation of hydrogen gas occurs in the transfer of reducing equivalents produced during pyruvate fermentation to water. The natural cycle of hydrogen production and consumption by organisms is called the hydrogen cycle. Water splitting, in which water is decomposed into its component protons, electrons, and oxygen, occurs in the light reactions in all photosynthetic organisms. Some such organisms, including the alga "Chlamydomonas reinhardtii" and cyanobacteria, have evolved a second step in the dark reactions in which protons and electrons are reduced to form H₂ gas by specialized hydrogenases in the chloroplast. Efforts have been undertaken to genetically modify cyanobacterial hydrogenases to efficiently synthesize H₂ gas even in the presence of oxygen. Efforts have also been undertaken with genetically modified algae in a bioreactor. Hydrogen poses a number of hazards to human safety, from potential detonations and fires when mixed with air to being an asphyxiant in its pure, oxygen-free form. In addition, liquid hydrogen is a cryogen and presents dangers (such as frostbite) associated with very cold liquids. Hydrogen dissolves in many metals, and, in addition to leaking out, may have adverse effects on them, such as hydrogen embrittlement, leading to cracks and explosions. Hydrogen gas leaking into external air may spontaneously ignite. Moreover, a hydrogen fire, while extremely hot, is almost invisible, and thus can lead to accidental burns. Even interpreting the hydrogen data (including safety data) is confounded by a number of phenomena. Many physical and chemical properties of hydrogen depend on the parahydrogen/orthohydrogen ratio (it often takes days or weeks at a given temperature to reach the equilibrium ratio, for which the data is usually given). Hydrogen detonation parameters, such as critical detonation pressure and temperature, strongly depend on the container geometry. 
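Since the parahydrogen/orthohydrogen ratio mentioned above is itself strongly temperature dependent, a minimal sketch of the equilibrium ratio can be obtained from the standard nuclear-spin-weighted rotational partition sums; the characteristic rotational temperature used below is a literature value assumed for illustration, not a figure from this article.

from math import exp

THETA_ROT = 87.6  # characteristic rotational temperature of H2 in kelvin (assumed literature value)

def ortho_fraction(T: float, j_max: int = 50) -> float:
    """Equilibrium ortho-H2 fraction: odd-J levels carry nuclear-spin weight 3, even-J weight 1."""
    z_even = sum((2 * J + 1) * exp(-THETA_ROT * J * (J + 1) / T) for J in range(0, j_max, 2))
    z_odd = sum((2 * J + 1) * exp(-THETA_ROT * J * (J + 1) / T) for J in range(1, j_max, 2))
    return 3 * z_odd / (z_even + 3 * z_odd)

for T in (300, 77, 20):
    print(f"{T:>4} K: equilibrium ortho fraction ~ {ortho_fraction(T):.2f}")
# Roughly 0.75 at room temperature, about 0.5 at 77 K, and nearly 0 (almost pure para) at 20 K.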
Helium Helium (from ) is a chemical element with symbol He and atomic number 2. It is a colorless, odorless, tasteless, non-toxic, inert, monatomic gas, the first in the noble gas group in the periodic table. Its boiling point is the lowest among all the elements. After hydrogen, helium is the second lightest and second most abundant element in the observable universe, being present at about 24% of the total elemental mass, which is more than 12 times the mass of all the heavier elements combined. Its abundance is similar to this figure in the Sun and in Jupiter. This is due to the very high nuclear binding energy (per nucleon) of helium-4 with respect to the next three elements after helium. This helium-4 binding energy also accounts for why it is a product of both nuclear fusion and radioactive decay. Most helium in the universe is helium-4, the vast majority of which was formed during the Big Bang. Large amounts of new helium are being created by nuclear fusion of hydrogen in stars. Helium is named for the Greek Titan of the Sun, Helios. It was first detected as an unknown yellow spectral line signature in sunlight during a solar eclipse in 1868 by Georges Rayet, Captain C. T. Haig, Norman R. Pogson, and Lieutenant John Herschel, and was subsequently confirmed by French astronomer Jules Janssen. Janssen is often jointly credited with detecting the element along with Norman Lockyer. Janssen recorded the helium spectral line during the solar eclipse of 1868 while Lockyer observed it from Britain. Lockyer was the first to propose that the line was due to a new element, which he named. The formal discovery of the element was made in 1895 by two Swedish chemists, Per Teodor Cleve and Nils Abraham Langlet, who found helium emanating from the uranium ore cleveite. In 1903, large reserves of helium were found in natural gas fields in parts of the United States, which is by far the largest supplier of the gas today. Liquid helium is used in cryogenics (its largest single use, absorbing about a quarter of production), particularly in the cooling of superconducting magnets, with the main commercial application being in MRI scanners. Helium's other industrial uses—as a pressurizing and purge gas, as a protective atmosphere for arc welding and in processes such as growing crystals to make silicon wafers—account for half of the gas produced. A well-known but minor use is as a lifting gas in balloons and airships. As with any gas whose density differs from that of air, inhaling a small volume of helium temporarily changes the timbre and quality of the human voice. In scientific research, the behavior of the two fluid phases of helium-4 (helium I and helium II) is important to researchers studying quantum mechanics (in particular the property of superfluidity) and to those looking at the phenomena, such as superconductivity, produced in matter near absolute zero. On Earth it is relatively rare—5.2 ppm by volume in the atmosphere. Most terrestrial helium present today is created by the natural radioactive decay of heavy radioactive elements (thorium and uranium, although there are other examples), as the alpha particles emitted by such decays consist of helium-4 nuclei. This radiogenic helium is trapped with natural gas in concentrations as great as 7% by volume, from which it is extracted commercially by a low-temperature separation process called fractional distillation. 
Previously, terrestrial helium—a non-renewable resource, because once released into the atmosphere it readily escapes into space—was thought to be in increasingly short supply. However, recent studies suggest that helium produced deep in the earth by radioactive decay can collect in natural gas reserves in larger than expected quantities, in some cases having been released by volcanic activity. The first evidence of helium was observed on August 18, 1868, as a bright yellow line with a wavelength of 587.49 nanometers in the spectrum of the chromosphere of the Sun. The line was detected by French astronomer Jules Janssen during a total solar eclipse in Guntur, India. This line was initially assumed to be sodium. On October 20 of the same year, English astronomer Norman Lockyer observed a yellow line in the solar spectrum, which he named the D₃ line because it was near the known D₁ and D₂ Fraunhofer lines of sodium. He concluded that it was caused by an element in the Sun unknown on Earth. Lockyer and English chemist Edward Frankland named the element with the Greek word for the Sun, ἥλιος ("helios"). In 1881, Italian physicist Luigi Palmieri detected helium on Earth for the first time through its D₃ spectral line, when he analyzed a material that had been sublimated during a recent eruption of Mount Vesuvius. On March 26, 1895, Scottish chemist Sir William Ramsay isolated helium on Earth by treating the mineral cleveite (a variety of uraninite with at least 10% rare earth elements) with mineral acids. Ramsay was looking for argon but, after separating nitrogen and oxygen from the gas liberated by sulfuric acid, he noticed a bright yellow line that matched the D₃ line observed in the spectrum of the Sun. These samples were identified as helium by Lockyer and British physicist William Crookes. It was independently isolated from cleveite in the same year by chemists Per Teodor Cleve and Abraham Langlet in Uppsala, Sweden, who collected enough of the gas to accurately determine its atomic weight. Helium was also isolated by the American geochemist William Francis Hillebrand prior to Ramsay's discovery when he noticed unusual spectral lines while testing a sample of the mineral uraninite. Hillebrand, however, attributed the lines to nitrogen. His letter of congratulations to Ramsay offers an interesting case of discovery and near-discovery in science. In 1907, Ernest Rutherford and Thomas Royds demonstrated that alpha particles are helium nuclei by allowing the particles to penetrate the thin glass wall of an evacuated tube, then creating a discharge in the tube to study the spectrum of the new gas inside. In 1908, helium was first liquefied by Dutch physicist Heike Kamerlingh Onnes by cooling the gas to less than 5 K. He tried to solidify it by further reducing the temperature but failed because helium does not solidify at atmospheric pressure. Onnes' student Willem Hendrik Keesom was eventually able to solidify 1 cm³ of helium in 1926 by applying additional external pressure. In 1913, Niels Bohr published his "trilogy" on atomic structure that included a reconsideration of the Pickering–Fowler series as central evidence in support of his model of the atom. This series is named for Edward Charles Pickering, who in 1896 published observations of previously unknown lines in the spectrum of the star ζ Puppis (these are now known to occur with Wolf–Rayet and other hot stars). 
Pickering attributed the observation (lines at 4551, 5411, and 10123 Å) to a new form of hydrogen with half-integer transition levels. In 1912, Alfred Fowler managed to produce similar lines from a hydrogen-helium mixture, and supported Pickering's conclusion as to their origin. Bohr's model does not allow for half-integer transitions (nor does quantum mechanics) and Bohr concluded that Pickering and Fowler were wrong, and instead assigned these spectral lines to ionised helium, He⁺. Fowler was initially skeptical but was ultimately convinced that Bohr was correct, and by 1915 "spectroscopists had transferred [the Pickering–Fowler series] definitively [from hydrogen] to helium." Bohr's theoretical work on the Pickering series had demonstrated the need for "a re-examination of problems that seemed already to have been solved within classical theories" and provided important confirmation for his atomic theory. In 1938, Russian physicist Pyotr Leonidovich Kapitsa discovered that helium-4 has almost no viscosity at temperatures near absolute zero, a phenomenon now called superfluidity. This phenomenon is related to Bose–Einstein condensation. In 1972, the same phenomenon was observed in helium-3, but at temperatures much closer to absolute zero, by American physicists Douglas D. Osheroff, David M. Lee, and Robert C. Richardson. The phenomenon in helium-3 is thought to be related to pairing of helium-3 fermions to make bosons, in analogy to Cooper pairs of electrons producing superconductivity. After an oil drilling operation in 1903 in Dexter, Kansas, produced a gas geyser that would not burn, Kansas state geologist Erasmus Haworth collected samples of the escaping gas and took them back to the University of Kansas at Lawrence where, with the help of chemists Hamilton Cady and David McFarland, he discovered that the gas consisted of, by volume, 72% nitrogen, 15% methane (a combustible percentage only with sufficient oxygen), 1% hydrogen, and 12% of an unidentifiable gas. With further analysis, Cady and McFarland discovered that 1.84% of the gas sample was helium. This showed that despite its overall rarity on Earth, helium was concentrated in large quantities under the American Great Plains, available for extraction as a byproduct of natural gas. This enabled the United States to become the world's leading supplier of helium. Following a suggestion by Sir Richard Threlfall, the United States Navy sponsored three small experimental helium plants during World War I. The goal was to supply barrage balloons with the non-flammable, lighter-than-air gas. A substantial total quantity of 92% helium was produced in the program even though less than a cubic meter of the gas had previously been obtained. Some of this gas was used in the world's first helium-filled airship, the U.S. Navy's C-7, which flew its maiden voyage from Hampton Roads, Virginia, to Bolling Field in Washington, D.C., on December 1, 1921, nearly two years before the Navy's first "rigid" helium-filled airship, the Naval Aircraft Factory-built "USS Shenandoah", flew in September 1923. Although the extraction process, using low-temperature gas liquefaction, was not developed in time to be significant during World War I, production continued. Helium was primarily used as a lifting gas in lighter-than-air craft. During World War II, the demand increased for helium for lifting gas and for shielded arc welding. The helium mass spectrometer was also vital in the atomic bomb Manhattan Project. 
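The reassignment of the Pickering–Fowler series described above follows directly from the Bohr/Rydberg formula, in which the term values of a hydrogen-like ion scale with the square of the nuclear charge Z (a standard textbook relation, stated here for illustration rather than quoted from the source):

$$\tilde{\nu} = R\,Z^{2}\left(\frac{1}{n_1^{2}} - \frac{1}{n_2^{2}}\right) \;\stackrel{Z=2}{=}\; R\left(\frac{1}{(n_1/2)^{2}} - \frac{1}{(n_2/2)^{2}}\right),$$

so transitions of singly ionised helium (Z = 2) mimic a hydrogen series with apparently half-integer quantum numbers, which is what Pickering had observed; the small remaining wavelength discrepancy is accounted for by the slightly different reduced mass that enters the Rydberg constant for helium.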
The government of the United States set up the National Helium Reserve in 1925 at Amarillo, Texas, with the goal of supplying military airships in time of war and commercial airships in peacetime. Because of the Helium Act of 1925, which banned the export of scarce helium on which the US then had a production monopoly, together with the prohibitive cost of the gas, the Hindenburg, like all German Zeppelins, was forced to use hydrogen as the lift gas. The helium market after World War II was depressed but the reserve was expanded in the 1950s to ensure a supply of liquid helium as a coolant to create oxygen/hydrogen rocket fuel (among other uses) during the Space Race and Cold War. Helium use in the United States in 1965 was more than eight times the peak wartime consumption. After the "Helium Acts Amendments of 1960" (Public Law 86–777), the U.S. Bureau of Mines arranged for five private plants to recover helium from natural gas. For this "helium conservation" program, the Bureau built a pipeline from Bushton, Kansas, to connect those plants with the government's partially depleted Cliffside gas field near Amarillo, Texas. This helium–nitrogen mixture was injected and stored in the Cliffside gas field until needed, at which time it was further purified. By 1995, a billion cubic meters of the gas had been collected and the reserve was US$1.4 billion in debt, prompting the Congress of the United States in 1996 to phase out the reserve. The resulting Helium Privatization Act of 1996 (Public Law 104–273) directed the United States Department of the Interior to empty the reserve, with sales starting by 2005. Helium produced between 1930 and 1945 was about 98.3% pure (2% nitrogen), which was adequate for airships. In 1945, a small amount of 99.9% helium was produced for welding use. By 1949, commercial quantities of Grade A 99.95% helium were available. For many years, the United States produced more than 90% of commercially usable helium in the world, while extraction plants in Canada, Poland, Russia, and other nations produced the remainder. In the mid-1990s, a new plant in Arzew, Algeria, producing 17 million cubic meters (600 million cubic feet), began operation, with enough production to cover all of Europe's demand. Meanwhile, by 2000, the consumption of helium within the U.S. had risen to more than 15 million kg per year. In 2004–2006, additional plants in Ras Laffan, Qatar, and Skikda, Algeria were built. Algeria quickly became the second leading producer of helium. Through this time, both helium consumption and the costs of producing helium increased. From 2002 to 2007 helium prices doubled. As of 2012, the United States National Helium Reserve accounted for 30 percent of the world's helium. The reserve was expected to run out of helium in 2018. Despite that, a proposed bill in the United States Senate would allow the reserve to continue to sell the gas. Other large reserves were in the Hugoton in Kansas, United States, and nearby gas fields of Kansas and the panhandles of Texas and Oklahoma. New helium plants were scheduled to open in 2012 in Qatar, Russia, and the US state of Wyoming, but they were not expected to ease the shortage. In 2013, Qatar started up the world's largest helium unit, although the 2017 Qatar diplomatic crisis severely affected helium production there. 2014 was widely acknowledged to be a year of over-supply in the helium business, following years of well-publicized shortages. 
Nasdaq reported (2015) that for Air Products, an international corporation that sells gases for industrial use, helium volumes remain under economic pressure due to feedstock supply constraints. In the perspective of quantum mechanics, helium is the second simplest atom to model, following the hydrogen atom. Helium is composed of two electrons in atomic orbitals surrounding a nucleus containing two protons and (usually) two neutrons. As in Newtonian mechanics, no system that consists of more than two particles can be solved with an exact analytical mathematical approach (see 3-body problem), and helium is no exception. Thus, numerical mathematical methods are required, even to solve the system of one nucleus and two electrons. Such computational chemistry methods have been used to create a quantum mechanical picture of helium electron binding which is accurate to within < 2% of the correct value, in a few computational steps. Such models show that each electron in helium partly screens the nucleus from the other, so that the effective nuclear charge "Z" which each electron sees is about 1.69 units, not the 2 charges of a classic "bare" helium nucleus. The nucleus of the helium-4 atom is identical with an alpha particle. High-energy electron-scattering experiments show its charge to decrease exponentially from a maximum at a central point, exactly as does the charge density of helium's own electron cloud. This symmetry reflects similar underlying physics: the pair of neutrons and the pair of protons in helium's nucleus obey the same quantum mechanical rules as do helium's pair of electrons (although the nuclear particles are subject to a different nuclear binding potential), so that all these fermions fully occupy 1s orbitals in pairs, none of them possessing orbital angular momentum, and each cancelling the other's intrinsic spin. Adding another of any of these particles would require angular momentum and would release substantially less energy (in fact, no nucleus with five nucleons is stable). This arrangement is thus energetically extremely stable for all these particles, and this stability accounts for many crucial facts regarding helium in nature. For example, the stability and low energy of the electron cloud state in helium accounts for the element's chemical inertness, and also the lack of interaction of helium atoms with each other, producing the lowest melting and boiling points of all the elements. In a similar way, the particular energetic stability of the helium-4 nucleus, produced by similar effects, accounts for the ease of helium-4 production in atomic reactions that involve either heavy-particle emission or fusion. Some stable helium-3 (2 protons and 1 neutron) is produced in fusion reactions from hydrogen, but it is a very small fraction compared to the highly favorable helium-4. The unusual stability of the helium-4 nucleus is also important cosmologically: it explains the fact that in the first few minutes after the Big Bang, as the "soup" of free protons and neutrons which had initially been created in about a 6:1 ratio cooled to the point that nuclear binding was possible, almost all of the first compound nuclei to form were helium-4 nuclei. So tight was helium-4 binding that helium-4 production consumed nearly all of the free neutrons in a few minutes, before they could beta-decay, leaving few to form heavier atoms such as lithium, beryllium, or boron. 
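The screening value of about 1.69 quoted above is reproduced by a classic variational estimate, in which both electrons are placed in hydrogen-like 1s orbitals with an adjustable effective charge (a standard textbook calculation given here for illustration, not a computation reported in the article):

$$E(Z_{\mathrm{eff}}) = \Big(Z_{\mathrm{eff}}^{2} - 2 Z Z_{\mathrm{eff}} + \tfrac{5}{8} Z_{\mathrm{eff}}\Big)\ \mathrm{Ha}, \qquad \frac{dE}{dZ_{\mathrm{eff}}} = 0 \;\Rightarrow\; Z_{\mathrm{eff}} = Z - \tfrac{5}{16} = \tfrac{27}{16} \approx 1.69\ \text{for}\ Z = 2,$$

giving a ground-state energy of about −2.85 hartree, within roughly 2% of the measured value near −2.90 hartree, consistent with the accuracy figure quoted above.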
Helium-4 nuclear binding per nucleon is stronger than in any of these elements (see nucleosynthesis and binding energy) and thus, once helium had been formed, no energetic drive was available to make elements 3, 4 and 5. It was barely energetically favorable for helium to fuse into the next element with a lower energy per nucleon, carbon. However, due to lack of intermediate elements, this process requires three helium nuclei striking each other nearly simultaneously (see triple alpha process). There was thus no time for significant carbon to be formed in the few minutes after the Big Bang, before the early expanding universe cooled to the temperature and pressure point where helium fusion to carbon was no longer possible. This left the early universe with a very similar ratio of hydrogen/helium as is observed today (3 parts hydrogen to 1 part helium-4 by mass), with nearly all the neutrons in the universe trapped in helium-4. All heavier elements (including those necessary for rocky planets like the Earth, and for carbon-based or other life) have thus been created since the Big Bang in stars which were hot enough to fuse helium itself. All elements other than hydrogen and helium today account for only 2% of the mass of atomic matter in the universe. Helium-4, by contrast, makes up about 23% of the universe's ordinary matter—nearly all the ordinary matter that is not hydrogen. Helium is the second least reactive noble gas after neon, and thus the second least reactive of all elements. It is chemically inert and monatomic in all standard conditions. Because of helium's relatively low molar (atomic) mass, its thermal conductivity, specific heat, and sound speed in the gas phase are all greater than those of any other gas except hydrogen. For these reasons and the small size of helium monatomic molecules, helium diffuses through solids at a rate three times that of air and around 65% that of hydrogen. Helium is the least water-soluble monatomic gas, and one of the least water-soluble of any gas (CF₄, SF₆, and C₄F₈ have lower mole-fraction solubilities: 0.3802, 0.4394, and 0.2372 ×10⁻⁵, respectively, versus helium's 0.70797 ×10⁻⁵), and helium's index of refraction is closer to unity than that of any other gas. Helium has a negative Joule–Thomson coefficient at normal ambient temperatures, meaning it heats up when allowed to freely expand. Only below its Joule–Thomson inversion temperature (of about 32 to 50 K at 1 atmosphere) does it cool upon free expansion. Once precooled below this temperature, helium can be liquefied through expansion cooling. Most extraterrestrial helium is found in a plasma state, with properties quite different from those of atomic helium. In a plasma, helium's electrons are not bound to its nucleus, resulting in very high electrical conductivity, even when the gas is only partially ionized. The charged particles are highly influenced by magnetic and electric fields. For example, in the solar wind together with ionized hydrogen, the particles interact with the Earth's magnetosphere, giving rise to Birkeland currents and the aurora. Unlike any other element, helium will remain liquid down to absolute zero at normal pressures. This is a direct effect of quantum mechanics: specifically, the zero point energy of the system is too high to allow freezing. Solid helium requires a temperature of 1–1.5 K (about −272 °C or −457 °F) at about 25 bar (2.5 MPa) of pressure. It is often hard to distinguish solid from liquid helium since the refractive indices of the two phases are nearly the same. 
The solid has a sharp melting point and has a crystalline structure, but it is highly compressible; applying pressure in a laboratory can decrease its volume by more than 30%. With a bulk modulus of about 27 MPa it is ~100 times more compressible than water. The density of solid helium has been measured at 1.15 K and 66 atm, and a somewhat lower density is projected at 0 K and 25 bar (2.5 MPa). At higher temperatures, helium will solidify with sufficient pressure. At room temperature, this requires about 114,000 atm. Below its boiling point of 4.22 kelvins and above the lambda point of 2.1768 kelvins, the isotope helium-4 exists in a normal colorless liquid state, called "helium I". Like other cryogenic liquids, helium I boils when it is heated and contracts when its temperature is lowered. Below the lambda point, however, helium does not boil, and it expands as the temperature is lowered further. Helium I has a gas-like index of refraction of 1.026, which makes its surface so hard to see that floats of Styrofoam are often used to show where the surface is. This colorless liquid has a very low viscosity and a density of 0.145–0.125 g/mL (between about 0 and 4 K), which is only one-fourth the value expected from classical physics. Quantum mechanics is needed to explain this property and thus both states of liquid helium (helium I and helium II) are called "quantum fluids", meaning they display atomic properties on a macroscopic scale. This may be an effect of its boiling point being so close to absolute zero, preventing random molecular motion (thermal energy) from masking the atomic properties. Liquid helium below its lambda point (called "helium II") exhibits very unusual characteristics. Due to its high thermal conductivity, when it boils, it does not bubble but rather evaporates directly from its surface. Helium-3 also has a superfluid phase, but only at much lower temperatures; as a result, less is known about the properties of the isotope. Helium II is a superfluid, a quantum mechanical state (see: macroscopic quantum phenomena) of matter with strange properties. For example, when it flows through capillaries as thin as 10⁻⁷ to 10⁻⁸ m it has no measurable viscosity. However, when measurements were done between two moving discs, a viscosity comparable to that of gaseous helium was observed. Current theory explains this using the "two-fluid model" for helium II. In this model, liquid helium below the lambda point is viewed as containing a proportion of helium atoms in a ground state, which are superfluid and flow with exactly zero viscosity, and a proportion of helium atoms in an excited state, which behave more like an ordinary fluid. In the "fountain effect", a chamber is constructed which is connected to a reservoir of helium II by a sintered disc through which superfluid helium leaks easily but through which non-superfluid helium cannot pass. If the interior of the container is heated, the superfluid helium changes to non-superfluid helium. In order to maintain the equilibrium fraction of superfluid helium, superfluid helium leaks through and increases the pressure, causing liquid to fountain out of the container. The thermal conductivity of helium II is greater than that of any other known substance, a million times that of helium I and several hundred times that of copper. This is because heat conduction occurs by an exceptional quantum mechanism. Most materials that conduct heat well have a valence band of free electrons which serve to transfer the heat. 
Helium II has no such valence band but nevertheless conducts heat well. The flow of heat is governed by equations that are similar to the wave equation used to characterize sound propagation in air. When heat is introduced, it moves at 20 meters per second at 1.8 K through helium II as waves, in a phenomenon known as "second sound". Helium II also exhibits a creeping effect. When a surface extends past the level of helium II, the helium II moves along the surface, against the force of gravity. Helium II will escape from a vessel that is not sealed by creeping along the sides until it reaches a warmer region where it evaporates. It moves in a 30 nm-thick film regardless of surface material. This film is called a Rollin film and is named after the man who first characterized this trait, Bernard V. Rollin. As a result of this creeping behavior and helium II's ability to leak rapidly through tiny openings, it is very difficult to confine liquid helium. Unless the container is carefully constructed, the helium II will creep along the surfaces and through valves until it reaches somewhere warmer, where it will evaporate. Waves propagating across a Rollin film are governed by the same equation as gravity waves in shallow water, but rather than gravity, the restoring force is the van der Waals force. These waves are known as "third sound". There are nine known isotopes of helium, but only helium-3 and helium-4 are stable. In the Earth's atmosphere, there is one helium-3 atom for every million helium-4 atoms. Unlike most elements, helium's isotopic abundance varies greatly by origin, due to the different formation processes. The most common isotope, helium-4, is produced on Earth by alpha decay of heavier radioactive elements; the alpha particles that emerge are fully ionized helium-4 nuclei. Helium-4 is an unusually stable nucleus because its nucleons are arranged into complete shells. It was also formed in enormous quantities during Big Bang nucleosynthesis. Helium-3 is present on Earth only in trace amounts. Most of it has been present since Earth's formation, though some falls to Earth trapped in cosmic dust. Trace amounts are also produced by the beta decay of tritium. Rocks from the Earth's crust have isotope ratios varying by as much as a factor of ten, and these ratios can be used to investigate the origin of rocks and the composition of the Earth's mantle. Helium-3 is much more abundant in stars, as a product of nuclear fusion. Thus in the interstellar medium, the proportion of helium-3 to helium-4 is about 100 times higher than on Earth. Extraplanetary material, such as lunar and asteroid regolith, has trace amounts of helium-3 from being bombarded by solar winds. The Moon's surface contains helium-3 at concentrations on the order of 10 ppb, much higher than the approximately 5 ppt found in the Earth's atmosphere. A number of people, starting with Gerald Kulcinski in 1986, have proposed to explore the moon, mine lunar regolith, and use the helium-3 for fusion. Liquid helium-4 can be cooled to about 1 kelvin using evaporative cooling in a 1-K pot. Similar cooling of helium-3, which has a lower boiling point, can achieve temperatures of a few tenths of a kelvin in a helium-3 refrigerator. Equal mixtures of liquid helium-3 and helium-4 below about 0.8 K separate into two immiscible phases due to their dissimilarity (they follow different quantum statistics: helium-4 atoms are bosons while helium-3 atoms are fermions). Dilution refrigerators use this immiscibility to achieve temperatures of a few millikelvins. It is possible to produce exotic helium isotopes, which rapidly decay into other substances. 
The shortest-lived heavy helium isotope is helium-5, with a half-life on the order of 10⁻²² seconds. Helium-6 decays by emitting a beta particle and has a half-life of 0.8 second. Helium-7 also emits a beta particle as well as a gamma ray. Helium-7 and helium-8 are created in certain nuclear reactions. Helium-6 and helium-8 are known to exhibit a nuclear halo. Helium has a valence of zero and is chemically unreactive under all normal conditions. It is an electrical insulator unless ionized. As with the other noble gases, helium has metastable energy levels that allow it to remain ionized in an electrical discharge with a voltage below its ionization potential. Helium can form unstable compounds, known as excimers, with tungsten, iodine, fluorine, sulfur, and phosphorus when it is subjected to a glow discharge, to electron bombardment, or reduced to plasma by other means. The molecular compounds HeNe, HgHe, and WHe, and molecular ions such as He₂⁺, He₂²⁺, HeH⁺, and HeD⁺ have been created this way. HeH⁺ is also stable in its ground state, but is extremely reactive—it is the strongest Brønsted acid known, and therefore can exist only in isolation, as it will protonate any molecule or counteranion it contacts. This technique has also produced the neutral molecule He₂, which has a large number of band systems, and HgHe, which is apparently held together only by polarization forces. Van der Waals compounds of helium can also be formed with cryogenic helium gas and atoms of some other substance, such as LiHe and He₂. Theoretically, other true compounds may be possible, such as helium fluorohydride (HHeF) which would be analogous to HArF, discovered in 2000. Calculations show that two new compounds containing a helium-oxygen bond could be stable. Two new molecular species, predicted using theory, CsFHeO and N(CH₃)₄FHeO, are derivatives of a metastable FHeO⁻ anion first theorized in 2005 by a group from Taiwan. If confirmed by experiment, the only remaining element with no known stable compounds would be neon. Helium atoms have been inserted into the hollow carbon cage molecules (the fullerenes) by heating under high pressure. The endohedral fullerene molecules formed are stable at high temperatures. When chemical derivatives of these fullerenes are formed, the helium stays inside. If helium-3 is used, it can be readily observed by helium nuclear magnetic resonance spectroscopy. Many fullerenes containing helium-3 have been reported. Although the helium atoms are not attached by covalent or ionic bonds, these substances have distinct properties and a definite composition, like all stoichiometric chemical compounds. Under high pressures helium can form compounds with various other elements. Helium-nitrogen clathrate (He(N₂)₁₁) crystals have been grown at room temperature at pressures of ca. 10 GPa in a diamond anvil cell. The insulating electride Na₂He has been shown to be thermodynamically stable at pressures above 113 GPa. It has a fluorite structure. Although it is rare on Earth, helium is the second most abundant element in the known Universe (after hydrogen), constituting 23% of its baryonic mass. The vast majority of helium was formed by Big Bang nucleosynthesis one to three minutes after the Big Bang. As such, measurements of its abundance contribute to cosmological models. In stars, it is formed by the nuclear fusion of hydrogen in proton-proton chain reactions and the CNO cycle, part of stellar nucleosynthesis. In the Earth's atmosphere, the concentration of helium by volume is only 5.2 parts per million. 
The concentration is low and fairly constant despite the continuous production of new helium because most helium in the Earth's atmosphere escapes into space by several processes. In the Earth's heterosphere, a part of the upper atmosphere, helium and other lighter gases are the most abundant elements. Most helium on Earth is a result of radioactive decay. Helium is found in large amounts in minerals of uranium and thorium, including cleveite, pitchblende, carnotite and monazite, because they emit alpha particles (helium nuclei, He²⁺) with which electrons immediately combine as soon as the particle is stopped by the rock. In this way an estimated 3000 metric tons of helium are generated per year throughout the lithosphere. In the Earth's crust, the concentration of helium is 8 parts per billion. In seawater, the concentration is only 4 parts per trillion. There are also small amounts in mineral springs, volcanic gas, and meteoric iron. Because helium is trapped in the subsurface under conditions that also trap natural gas, the greatest natural concentrations of helium on the planet are found in natural gas, from which most commercial helium is extracted. The concentration varies in a broad range from a few ppm to more than 7% in a small gas field in San Juan County, New Mexico. As of 2011 the world's helium reserves were estimated at 40 billion cubic meters, with a quarter of that being in the South Pars / North Dome Gas-Condensate field owned jointly by Qatar and Iran. In 2015 and 2016 more probable reserves were announced to be under the Rocky Mountains in North America and in east Africa. For large-scale use, helium is extracted by fractional distillation from natural gas, which can contain as much as 7% helium. Since helium has a lower boiling point than any other element, low temperature and high pressure are used to liquefy nearly all the other gases (mostly nitrogen and methane). The resulting crude helium gas is purified by successive exposures to lowering temperatures, in which almost all of the remaining nitrogen and other gases are precipitated out of the gaseous mixture. Activated charcoal is used as a final purification step, usually resulting in 99.995% pure Grade-A helium. The principal impurity in Grade-A helium is neon. In a final production step, most of the helium that is produced is liquefied via a cryogenic process. This is necessary for applications requiring liquid helium and also allows helium suppliers to reduce the cost of long-distance transportation, as the largest liquid helium containers have more than five times the capacity of the largest gaseous helium tube trailers. In 2008, approximately 169 million standard cubic meters (SCM) of helium were extracted from natural gas or withdrawn from helium reserves, with approximately 78% from the United States, 10% from Algeria, and most of the remainder from Russia, Poland and Qatar. By 2013, increases in helium production in Qatar (under the company RasGas managed by Air Liquide) had increased Qatar's fraction of world helium production to 25%, and made it the second largest exporter after the United States. A large deposit of helium was found in Tanzania in 2016. In the United States, most helium is extracted from natural gas of the Hugoton and nearby gas fields in Kansas, Oklahoma, and the Panhandle Field in Texas. 
Much of this gas was once sent by pipeline to the National Helium Reserve, but since 2005 this reserve has been depleted and sold off, and is expected to be largely depleted by 2021, under the October 2013 "Responsible Helium Administration and Stewardship Act" (H.R. 527). Diffusion of crude natural gas through special semipermeable membranes and other barriers is another method to recover and purify helium. In 1996, the U.S. had "proven" helium reserves, in such gas well complexes, of about 147 billion standard cubic feet (4.2 billion SCM). At rates of use at that time (72 million SCM per year in the U.S.) this would have been enough helium for about 58 years of U.S. use, and less than this (perhaps 80% of the time) at world use rates, although factors in saving and processing impact effective reserve numbers. Helium must be extracted from natural gas because it is present in air at only a fraction of that of neon, yet the demand for it is far higher. It is estimated that if all neon production were retooled to save helium, 0.1% of the world's helium demands would be satisfied. Similarly, only 1% of the world's helium demands could be satisfied by re-tooling all air distillation plants. Helium can be synthesized by bombardment of lithium or boron with high-velocity protons, or by bombardment of lithium with deuterons, but these processes are a completely uneconomical method of production. Helium is commercially available in either liquid or gaseous form. As a liquid, it can be supplied in small insulated containers called dewars which hold as much as 1,000 liters of helium, or in large ISO containers which have nominal capacities as large as 42 m³ (around 11,000 U.S. gallons). In gaseous form, small quantities of helium are supplied in high-pressure cylinders holding as much as 8 m³ (approx. 282 standard cubic feet), while large quantities of high-pressure gas are supplied in tube trailers which have capacities of as much as 4,860 m³ (approx. 172,000 standard cubic feet). According to helium conservationists like Nobel laureate physicist Robert Coleman Richardson, writing in 2010, the free market price of helium has contributed to "wasteful" usage (e.g. for helium balloons). Prices in the 2000s had been lowered by the decision of the U.S. Congress to sell off the country's large helium stockpile by 2015. According to Richardson, the price needed to be multiplied by 20 to eliminate the excessive wasting of helium. In their book "The Future of Helium as a Natural Resource" (Routledge, 2012), Nuttall, Clarke and Glowacki also proposed creating an International Helium Agency (IHA) to build a sustainable market for this precious commodity. While balloons are perhaps the best known use of helium, they are a minor part of all helium use. Helium is used for many purposes that require some of its unique properties, such as its low boiling point, low density, low solubility, high thermal conductivity, or inertness. Of the 2014 world helium total production of about 32 million kg (180 million standard cubic meters) of helium per year, the largest use (about 32% of the total in 2014) is in cryogenic applications, most of which involves cooling the superconducting magnets in medical MRI scanners and NMR spectrometers. Other major uses were pressurizing and purging systems, welding, maintenance of controlled atmospheres, and leak detection. Other uses by category were relatively minor fractions. 
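The "about 58 years" figure above follows from simple division of the quoted reserve by the quoted rate of use; a minimal check, using only the numbers given in the text:

# Reserve lifetime at constant consumption, using the figures quoted above.
reserve_scm = 4.2e9           # "proven" US helium reserves in 1996, standard cubic meters
us_use_scm_per_year = 72e6    # US consumption at that time, standard cubic meters per year

print(f"Years of US use: {reserve_scm / us_use_scm_per_year:.0f}")  # about 58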
Helium is used as a protective gas in growing silicon and germanium crystals, in titanium and zirconium production, and in gas chromatography, because it is inert. Because of its inertness, thermally and calorically perfect nature, high speed of sound, and high value of the heat capacity ratio, it is also useful in supersonic wind tunnels and impulse facilities. Helium is used as a shielding gas in arc welding processes on materials that are contaminated and weakened by air or nitrogen at welding temperatures. A number of inert shielding gases are used in gas tungsten arc welding, but helium is used instead of cheaper argon especially for welding materials that have higher heat conductivity, like aluminium or copper. One industrial application for helium is leak detection. Because helium diffuses through solids three times faster than air, it is used as a tracer gas to detect leaks in high-vacuum equipment (such as cryogenic tanks) and high-pressure containers. The tested object is placed in a chamber, which is then evacuated and filled with helium. The helium that escapes through the leaks is detected by a sensitive device (a helium mass spectrometer), even at leak rates as small as 10⁻⁹ mbar·L/s (10⁻¹⁰ Pa·m³/s). The measurement procedure is normally automatic and is called a helium integral test. A simpler procedure is to fill the tested object with helium and to manually search for leaks with a hand-held device. Helium leaks through cracks should not be confused with gas permeation through a bulk material. While helium has documented permeation constants (thus a calculable permeation rate) through glasses, ceramics, and synthetic materials, inert gases such as helium will not permeate most bulk metals. Because helium is lighter than air, airships and balloons are inflated with it for lift. While hydrogen gas is more buoyant and escapes through a membrane at a lower rate, helium has the advantage of being non-flammable, and indeed fire-retardant. Another minor use is in rocketry, where helium is used as an ullage medium to displace fuel and oxidizers in storage tanks and to condense hydrogen and oxygen to make rocket fuel. It is also used to purge fuel and oxidizer from ground support equipment prior to launch and to pre-cool liquid hydrogen in space vehicles. For example, the Saturn V rocket used in the Apollo program needed about 370,000 m³ (13 million cubic feet) of helium to launch. Helium as a breathing gas has no narcotic properties, so helium mixtures such as trimix, heliox and heliair are used for deep diving to reduce the effects of narcosis, which worsen with increasing depth. As pressure increases with depth, the density of the breathing gas also increases, and the low molecular weight of helium considerably reduces the effort of breathing by lowering the density of the mixture. This reduces the Reynolds number of the flow, leading to a reduction in turbulent flow and an increase in laminar flow, which requires less work of breathing. At great depths, divers breathing helium–oxygen mixtures begin to experience tremors and a decrease in psychomotor function, symptoms of high-pressure nervous syndrome. This effect may be countered to some extent by adding an amount of narcotic gas such as hydrogen or nitrogen to a helium–oxygen mixture. Helium–neon lasers, a type of low-powered gas laser producing a red beam, had various practical applications, including barcode readers and laser pointers, before they were almost universally replaced by cheaper diode lasers. 
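To make the buoyancy comparison above concrete, the following sketch estimates the lift per cubic metre of helium and hydrogen at 0 °C and 1 atm from the ideal gas law; the molar masses and conditions are standard textbook values, and the calculation is purely illustrative.

# Ideal gas density: rho = P * M / (R * T); lift per cubic metre = rho_air - rho_gas
R = 8.314            # J/(mol*K)
T = 273.15           # K (0 °C)
P = 101325.0         # Pa (1 atm)
MOLAR_MASS = {"air": 0.02897, "helium": 0.004003, "hydrogen": 0.002016}   # kg/mol
density = {gas: P * M / (R * T) for gas, M in MOLAR_MASS.items()}
lift_he = density["air"] - density["helium"]      # about 1.11 kg per cubic metre
lift_h2 = density["air"] - density["hydrogen"]    # about 1.20 kg per cubic metre
print(f"Helium lift:   {lift_he:.2f} kg/m3")
print(f"Hydrogen lift: {lift_h2:.2f} kg/m3 ({100 * (lift_h2 / lift_he - 1):.0f}% more)")

The result, roughly 1.11 kg per cubic metre of lift for helium against 1.20 for hydrogen, shows that hydrogen offers only about 8% more gross lift, which is why helium's non-flammability usually outweighs its slightly lower buoyancy.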
Because of its inertness, high thermal conductivity and neutron transparency, and because it does not form radioactive isotopes under reactor conditions, helium is used as a heat-transfer medium in some gas-cooled nuclear reactors. Helium, mixed with a heavier gas such as xenon, is useful for thermoacoustic refrigeration due to the resulting high heat capacity ratio and low Prandtl number. The inertness of helium has environmental advantages over conventional refrigeration systems which contribute to ozone depletion or global warming. Helium is also used in some hard disk drives. The use of helium reduces the distorting effects of temperature variations in the space between lenses in some telescopes, due to its extremely low index of refraction. This method is especially used in solar telescopes where a vacuum-tight telescope tube would be too heavy. Helium is a commonly used carrier gas for gas chromatography. The age of rocks and minerals that contain uranium and thorium can be estimated by measuring the level of helium with a process known as helium dating. Helium at low temperatures is used in cryogenics; for example, liquid helium is used to cool certain metals to the extremely low temperatures required for superconductivity, such as in superconducting magnets for magnetic resonance imaging. The Large Hadron Collider at CERN uses 96 metric tons of liquid helium to maintain the temperature at 1.9 kelvin. Neutral helium at standard conditions is non-toxic, plays no biological role and is found in trace amounts in human blood. The speed of sound in helium is nearly three times the speed of sound in air. Because the fundamental frequency of a gas-filled cavity is proportional to the speed of sound in the gas, when helium is inhaled there is a corresponding increase in the resonant frequencies of the vocal tract. The fundamental frequency (sometimes called pitch) does not change, since this is produced by direct vibration of the vocal folds, which is unchanged. However, the higher resonant frequencies cause a change in timbre, resulting in a reedy, duck-like vocal quality. The opposite effect, lowering resonant frequencies, can be obtained by inhaling a dense gas such as sulfur hexafluoride or xenon. Inhaling helium can be dangerous if done to excess, since helium is a simple asphyxiant and so displaces oxygen needed for normal respiration. Fatalities have been recorded, including a youth who suffocated in Vancouver in 2003 and two adults who suffocated in South Florida in 2006. In 1998, an Australian girl from Victoria, whose age is unknown, fell unconscious and temporarily turned blue after inhaling the entire contents of a party balloon. Inhaling helium directly from pressurized cylinders or even balloon filling valves is extremely dangerous, as high flow rate and pressure can result in barotrauma, fatally rupturing lung tissue. Death caused by helium is rare. The first media-recorded case was that of a 15-year-old girl from Texas who died in 1998 from helium inhalation at a friend's party; the exact mechanism of her death was not identified. In the United States only two fatalities were reported between 2000 and 2004, including a man who died in North Carolina of barotrauma in 2002. A youth asphyxiated in Vancouver in 2003, and a 27-year-old man in Australia had an embolism after breathing from a cylinder in 2000. 
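The "nearly three times" ratio for the speed of sound, and hence the upward shift of the vocal tract's resonances, follows from the ideal-gas relation v = sqrt(gamma * R * T / M); the sketch below evaluates it at 20 °C with textbook heat-capacity ratios and molar masses, purely as an illustration.

# Speed of sound in an ideal gas at 20 °C: v = sqrt(gamma * R * T / M)
from math import sqrt
R = 8.314                    # J/(mol*K)
T = 293.15                   # K (20 °C)
gases = {
    "helium": (5 / 3, 0.004003),   # (heat capacity ratio, molar mass in kg/mol)
    "air":    (1.40,  0.02897),
}
speeds = {g: sqrt(gamma * R * T / M) for g, (gamma, M) in gases.items()}
print(f"Helium: {speeds['helium']:.0f} m/s, air: {speeds['air']:.0f} m/s")
print(f"Ratio: {speeds['helium'] / speeds['air']:.2f}")   # about 2.9

Because the resonant frequencies of a gas-filled cavity scale with the speed of sound in the gas, they rise by roughly this same factor of about 2.9 when the vocal tract is filled with helium, producing the timbre change described above.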
Since then, two adults asphyxiated in South Florida in 2006, and there were further cases in 2009 and 2010: a Californian youth who was found with a bag over his head attached to a helium tank, and a teenager in Northern Ireland who died of asphyxiation. At Eagle Point, Oregon, a teenage girl died in 2012 from barotrauma at a party. A girl from Michigan died from hypoxia later that year. On February 4, 2015, it was revealed that during the recording of their main TV show on January 28, a 12-year-old member (name withheld) of Japanese all-girl singing group 3B Junior suffered an air embolism, losing consciousness and falling into a coma as a result of air bubbles blocking the flow of blood to the brain, after inhaling huge quantities of helium as part of a game. The incident was not made public until a week later. The staff of TV Asahi held an emergency press conference to communicate that the member had been taken to the hospital and was showing signs of recovery, such as moving her eyes and limbs, but that her consciousness had not yet sufficiently recovered. Police launched an investigation into the neglect of safety measures. On July 13, 2017, CBS News reported that a political operative who reportedly attempted to recover e-mails missing from the Clinton server, Peter W. Smith, "apparently" committed suicide in May at a hotel room in Rochester, Minnesota and that his death was recorded as "asphyxiation due to displacement of oxygen in confined space with helium". More details followed in the Chicago Tribune. The safety issues for cryogenic helium are similar to those of liquid nitrogen; its extremely low temperatures can result in cold burns, and the liquid-to-gas expansion ratio can cause explosions if no pressure-relief devices are installed. Containers of helium gas at 5 to 10 K should be handled as if they contain liquid helium due to the rapid and significant thermal expansion that occurs when helium gas at less than 10 K is warmed to room temperature. At high pressures (more than about 20 atm or 2 MPa), a mixture of helium and oxygen (heliox) can lead to high-pressure nervous syndrome, a sort of reverse-anesthetic effect; adding a small amount of nitrogen to the mixture can alleviate the problem. Hamlet The Tragedy of Hamlet, Prince of Denmark, often shortened to Hamlet, is a tragedy written by William Shakespeare at an uncertain date between 1599 and 1602. Set in Denmark, the play dramatises the revenge Prince Hamlet is called to wreak upon his uncle, Claudius, by the ghost of Hamlet's father, King Hamlet. Claudius had murdered his own brother and seized the throne, also marrying his deceased brother's widow. "Hamlet" is Shakespeare's longest play, and is considered among the most powerful and influential works of world literature, with a story capable of "seemingly endless retelling and adaptation by others". It was probably one of Shakespeare's most popular works during his lifetime, and still ranks among his most performed, topping the performance list of the Royal Shakespeare Company and its predecessors in Stratford-upon-Avon since 1879. It has inspired many other writers—from Johann Wolfgang von Goethe and Charles Dickens to James Joyce and Iris Murdoch—and has been described as "the world's most filmed story after "Cinderella"". 
The story of Shakespeare's "Hamlet" was derived from the legend of Amleth, preserved by 13th-century chronicler Saxo Grammaticus in his "Gesta Danorum", as subsequently retold by the 16th-century scholar François de Belleforest. Shakespeare may also have drawn on an earlier Elizabethan play known today as the "Ur-Hamlet", though some scholars believe he himself wrote the "Ur-Hamlet", later revising it to create the version of "Hamlet" we now have. He almost certainly wrote his version of the title role for his fellow actor, Richard Burbage, the leading tragedian of Shakespeare's time. In the 400 years since its inception, the role has been performed by numerous highly acclaimed actors in each successive century. Three different early versions of the play are extant: the First Quarto (Q1, 1603); the Second Quarto (Q2, 1604); and the First Folio (F1, 1623). Each version includes lines and entire scenes missing from the others. The play's structure and depth of characterisation have inspired much critical scrutiny. One such example is the centuries-old debate about Hamlet's hesitation to kill his uncle, which some see as merely a plot device to prolong the action, but which others argue is a dramatisation of the complex philosophical and ethical issues that surround cold-blooded murder, calculated revenge, and thwarted desire. More recently, psychoanalytic critics have examined Hamlet's unconscious desires, while feminist critics have re-evaluated and attempted to rehabilitate the often maligned characters of Ophelia and Gertrude. The protagonist of "Hamlet" is Prince Hamlet of Denmark, son of the recently deceased King Hamlet, and nephew of King Claudius, his father's brother and successor. Claudius hastily married King Hamlet's widow, Gertrude, Hamlet's mother, and took the throne for himself. Denmark has a long-standing feud with neighbouring Norway, in which King Hamlet slew King Fortinbras of Norway in a battle some years ago. Although Denmark defeated Norway, and the Norwegian throne fell to King Fortinbras's infirm brother, Denmark fears that an invasion led by the dead Norwegian king's son, Prince Fortinbras, is imminent. On a cold night on the ramparts of Elsinore, the Danish royal castle, the sentries Bernardo and Marcellus discuss a ghost resembling the late King Hamlet which they have recently seen, and bring Prince Hamlet's friend Horatio as a witness. After the ghost appears again, the three vow to tell Prince Hamlet what they have witnessed. As the court gathers the next day, while King Claudius and Queen Gertrude discuss affairs of state with their elderly adviser Polonius, Hamlet looks on glumly. During the court, Claudius grants permission for Polonius's son Laertes to return to school in France, and sends envoys to inform the King of Norway about Fortinbras. Claudius also scolds Hamlet for continuing to grieve over his father, and forbids him from returning to his schooling in Wittenberg. After the court exits, Hamlet despairs of his father's death and his mother's hasty remarriage. Learning of the ghost from Horatio, Hamlet resolves to see it himself. As Polonius's son Laertes prepares to depart for a visit to France, Polonius gives him contradictory advice that culminates in the ironic maxim "to thine own self be true." Polonius's daughter, Ophelia, admits her interest in Hamlet, but Laertes warns her against seeking the prince's attention, and Polonius orders her to reject his advances. 
That night on the rampart, the ghost appears to Hamlet, telling the prince that he was murdered by Claudius and demanding that Hamlet avenge him. Hamlet agrees and the ghost vanishes. The prince confides to Horatio and the sentries that from now on he plans to "put an antic disposition on", or act as though he has gone mad, and forces them to swear to keep his plans for revenge secret. Privately, however, he remains uncertain of the ghost's reliability. Soon thereafter, Ophelia rushes to her father, telling him that Hamlet arrived at her door the prior night half-undressed and behaving erratically. Polonius blames love for Hamlet's madness and resolves to inform Claudius and Gertrude. As he enters to do so, the king and queen finish welcoming Rosencrantz and Guildenstern, two student acquaintances of Hamlet, to Elsinore. The royal couple has requested that the students investigate the cause of Hamlet's mood and behaviour. Additional news requires that Polonius wait to be heard: messengers from Norway inform Claudius that the King of Norway has rebuked Prince Fortinbras for attempting to re-fight his father's battles. The forces that Fortinbras had conscripted to march against Denmark will instead be sent against Poland, though they will pass through Danish territory to get there. Polonius tells Claudius and Gertrude his theory regarding Hamlet's behaviour, and speaks to Hamlet in a hall of the castle to try to uncover more information. Hamlet feigns madness but subtly insults Polonius all the while. When Rosencrantz and Guildenstern arrive, Hamlet greets his "friends" warmly, but quickly discerns that they are spies. Hamlet becomes bitter, admitting that he is upset at his situation but refusing to give the true reason why, instead commenting on "what a piece of work" humanity is. Rosencrantz and Guildenstern tell Hamlet that they have brought along a troupe of actors that they met while traveling to Elsinore. Hamlet, after welcoming the actors and dismissing his friends-turned-spies, asks them to deliver a soliloquy about the death of King Priam and Queen Hecuba at the climax of the Trojan War. Impressed by their delivery of the speech, he plots to stage "The Murder of Gonzago", a play featuring a death in the style of his father's murder, and to determine the truth of the ghost's story, as well as Claudius's guilt or innocence, by studying Claudius's reaction. Polonius forces Ophelia to return Hamlet's love letters and tokens of affection to the prince while he and Claudius watch from afar to evaluate Hamlet's reaction. Hamlet is walking alone in the hall as the King and Polonius await Ophelia's entrance, musing whether "to be or not to be". When Ophelia enters and tries to return Hamlet's things, Hamlet accuses her of immodesty and cries "get thee to a nunnery", though it is unclear whether this, too, is a show of madness or genuine distress. His reaction convinces Claudius that Hamlet is not mad for love. Shortly thereafter, the court assembles to watch the play Hamlet has commissioned. After seeing the Player King murdered by his rival pouring poison in his ear, Claudius abruptly rises and runs from the room: for Hamlet, proof positive of his uncle's guilt. Gertrude summons Hamlet to her room to demand an explanation. Meanwhile, Claudius talks to himself about the impossibility of repenting, since he still has possession of his ill-gotten goods: his brother's crown and wife. He sinks to his knees. 
Hamlet, on his way to visit his mother, sneaks up behind him, but does not kill him, reasoning that killing Claudius while he is praying will send him straight to heaven while his father's ghost is stuck in purgatory. In the queen's bedchamber, Hamlet and Gertrude fight bitterly. Polonius, spying on the conversation from behind a tapestry, calls for help as Gertrude, believing Hamlet wants to kill her, calls out for help herself. Hamlet, believing it is Claudius, stabs wildly, killing Polonius, but pulls aside the curtain and sees his mistake. In a rage, Hamlet brutally insults his mother for her apparent ignorance of Claudius's villainy, but the ghost enters and reprimands Hamlet for his inaction and harsh words. Unable to see or hear the ghost herself, Gertrude takes Hamlet's conversation with it as further evidence of madness. After begging the queen to stop sleeping with Claudius, Hamlet leaves, dragging Polonius's corpse away. Hamlet jokes with Claudius about where he has hidden Polonius's body, and the king, fearing for his life, sends Rosencrantz and Guildenstern to accompany Hamlet to England with a sealed letter to the English king requesting that Hamlet be executed immediately. Demented by grief at Polonius's death, Ophelia wanders Elsinore. Laertes arrives back from France, enraged by his father's death and his sister's madness. Claudius convinces Laertes that Hamlet is solely responsible, but a letter soon arrives indicating that Hamlet has returned to Denmark, foiling Claudius' plan. Claudius switches tactics, proposing a fencing match between Laertes and Hamlet to settle their differences. Laertes will be given a poison-tipped foil, and Claudius will offer Hamlet poisoned wine as a congratulation if that fails. Gertrude interrupts to report that Ophelia has drowned, though it is unclear whether it was suicide or an accident exacerbated by her madness. Horatio has received a letter from Hamlet, explaining that the prince escaped by negotiating with pirates who attempted to attack his England-bound ship, and the friends reunite offstage. Two gravediggers discuss Ophelia's apparent suicide while digging her grave. Hamlet arrives with Horatio and banters with one of the gravediggers, who unearths the skull of a jester from Hamlet's childhood, Yorick. Hamlet picks up the skull, saying "alas, poor Yorick" as he contemplates mortality. Ophelia's funeral procession approaches, led by Laertes. Hamlet and Horatio initially hide, but when Hamlet realizes that Ophelia is the one being buried, he reveals himself, proclaiming his love for her. Laertes and Hamlet fight by Ophelia's graveside, but the brawl is broken up. Back at Elsinore, Hamlet explains to Horatio that he had discovered Claudius's letter with Rosencrantz and Guildenstern's belongings and replaced it with a forged copy indicating that his former friends should be killed instead. A foppish courtier, Osric, interrupts the conversation to deliver the fencing challenge to Hamlet. Hamlet, despite Horatio's pleas, accepts it. Hamlet does well at first, leading the match by two hits to none, and Gertrude raises a toast to him using the poisoned glass of wine Claudius had set aside for Hamlet. Claudius tries to stop her, but is too late: she drinks, and Laertes realizes the plot will be revealed. Laertes slashes Hamlet with his poisoned blade. In the ensuing scuffle, they switch weapons and Hamlet wounds Laertes with his own poisoned sword. Gertrude collapses and, claiming she has been poisoned, dies. 
In his dying moments, Laertes reconciles with Hamlet and reveals Claudius's plan. Hamlet rushes at Claudius and kills him. As the poison takes effect, Hamlet, hearing that Fortinbras is marching through the area, names the Norwegian prince as his successor. Horatio, distraught at the thought of being the last survivor and living whilst Hamlet does not, says he will commit suicide by drinking the dregs of Gertrude's poisoned wine, but Hamlet begs him to live on and tell his story. Hamlet dies in Horatio's arms, proclaiming "the rest is silence". Fortinbras, who was ostensibly marching towards Poland with his army, arrives at the palace, along with an English ambassador bringing news of Rosencrantz and Guildenstern's deaths. Horatio promises to recount the full story of what happened, and Fortinbras, seeing the entire Danish royal family dead, takes the crown for himself, and orders a military funeral to honour Hamlet. "Hamlet"-like legends are so widely found (for example in Italy, Spain, Scandinavia, Byzantium, and Arabia) that the core "hero-as-fool" theme is possibly Indo-European in origin. Several ancient written precursors to "Hamlet" can be identified. The first is the anonymous Scandinavian "Saga of Hrolf Kraki". In this, the murdered king has two sons—Hroar and Helgi—who spend most of the story in disguise, under false names, rather than feigning madness, in a sequence of events that differs from Shakespeare's. The second is the Roman legend of Brutus, recorded in two separate Latin works. Its hero, Lucius ("shining, light"), changes his name and persona to Brutus ("dull, stupid"), playing the role of a fool to avoid the fate of his father and brothers, and eventually slaying his family's killer, King Tarquinius. A 17th-century Nordic scholar, Torfaeus, compared the Icelandic hero Amlodi and the Spanish hero Prince Ambales (from the "Ambales Saga") to Shakespeare's "Hamlet". Similarities include the prince's feigned madness, his accidental killing of the king's counsellor in his mother's bedroom, and the eventual slaying of his uncle. Many of the earlier legendary elements are interwoven in the 13th-century "Life of Amleth" by Saxo Grammaticus, part of "Gesta Danorum". Written in Latin, it reflects classical Roman concepts of virtue and heroism, and was widely available in Shakespeare's day. Significant parallels include the prince feigning madness, his mother's hasty marriage to the usurper, the prince killing a hidden spy, and the prince substituting the execution of two retainers for his own. A reasonably faithful version of Saxo's story was translated into French in 1570 by François de Belleforest, in his "Histoires tragiques". Belleforest embellished Saxo's text substantially, almost doubling its length, and introduced the hero's melancholy. According to one theory, Shakespeare's main source is an earlier play—now lost—known today as the "Ur-Hamlet". Possibly written by Thomas Kyd or even William Shakespeare, the "Ur-Hamlet" would have existed by 1589, and would have incorporated a ghost. Shakespeare's company, the Chamberlain's Men, may have purchased that play and performed a version for some time, which Shakespeare reworked. However, since no copy of the "Ur-Hamlet" has survived, it is impossible to compare its language and style with the known works of any of its putative authors. Consequently, there is no direct evidence that Kyd wrote it, nor any evidence that the play was not an early version of "Hamlet" by Shakespeare himself. 
This latter idea—placing "Hamlet" far earlier than the generally accepted date, with a much longer period of development—has attracted some support. The upshot is that scholars cannot assert with any confidence how much material Shakespeare took from the "Ur-Hamlet" (if it even existed), how much from Belleforest or Saxo, and how much from other contemporary sources (such as Kyd's "The Spanish Tragedy"). No clear evidence exists that Shakespeare made any direct references to Saxo's version. However, elements of Belleforest's version which are not in Saxo's story do appear in Shakespeare's play. Whether Shakespeare took these from Belleforest directly or from the hypothetical "Ur-Hamlet" remains unclear. Most scholars reject the idea that "Hamlet" is in any way connected with Shakespeare's only son, Hamnet Shakespeare, who died in 1596 at age eleven. Conventional wisdom holds that "Hamlet" is too obviously connected to legend, and the name Hamnet was quite popular at the time. However, Stephen Greenblatt has argued that the coincidence of the names and Shakespeare's grief for the loss of his son may lie at the heart of the tragedy. He notes that the name of Hamnet Sadler, the Stratford neighbour after whom Hamnet was named, was often written as Hamlet Sadler and that, in the loose orthography of the time, the names were virtually interchangeable. Scholars have often speculated that "Hamlet"'s Polonius might have been inspired by William Cecil (Lord Burghley)—Lord High Treasurer and chief counsellor to Queen Elizabeth I. E. K. Chambers suggested Polonius's advice to Laertes may have echoed Burghley's to his son Robert Cecil. John Dover Wilson thought it almost certain that the figure of Polonius caricatured Burghley. A. L. Rowse speculated that Polonius's tedious verbosity might have resembled Burghley's. Lilian Winstanley thought the name Corambis (in the First Quarto) suggested Cecil and Burghley. Harold Jenkins considers the idea that Polonius might be a caricature of Burghley to be conjecture, perhaps based on the similar role each man played at court and on the fact that Burghley addressed his "Ten Precepts" to his son, as in the play Polonius offers "precepts" to his son Laertes. Jenkins suggests that any personal satire may be found in the name "Polonius", which might point to a Polish or Polonian connection. G. R. Hibbard hypothesised that differences in names (Corambis/Polonius:Montano/Raynoldo) between the First Quarto and other editions might reflect a desire not to offend scholars at Oxford University. "Any dating of "Hamlet" must be tentative", cautions the "New Cambridge" editor, Phillip Edwards. The earliest date estimate relies on "Hamlet"'s frequent allusions to Shakespeare's "Julius Caesar", itself dated to mid-1599. The latest date estimate is based on an entry, of 26 July 1602, in the Register of the Stationers' Company, indicating that "Hamlet" was "latelie Acted by the Lo: Chamberleyne his servantes". In 1598, Francis Meres published his "Palladis Tamia", a survey of English literature from Chaucer to its present day, within which twelve of Shakespeare's plays are named. "Hamlet" is not among them, suggesting that it had not yet been written. As "Hamlet" was very popular, Bernard Lott, the series editor of "New Swan", believes it "unlikely that he [Meres] would have overlooked ... so significant a piece". 
The phrase "little eyases" in the First Folio (F1) may allude to the Children of the Chapel, whose popularity in London forced the Globe company into provincial touring. This became known as the War of the Theatres, and supports a 1601 dating. Katherine Duncan-Jones accepts a 1600–1 attribution for the date "Hamlet" was written, but notes that the Lord Chamberlain's Men, playing "Hamlet" in the 3000-capacity Globe, were unlikely to be put to any disadvantage by an audience of "barely one hundred" for the Children of the Chapel's equivalent play, "Antonio's Revenge"; she believes that Shakespeare, confident in the superiority of his own work, was making a playful and charitable allusion to his friend John Marston's very similar piece. A contemporary of Shakespeare's, Gabriel Harvey, wrote a marginal note in his copy of the 1598 edition of Chaucer's works, which some scholars use as dating evidence. Harvey's note says that "the wiser sort" enjoy "Hamlet", and implies that the Earl of Essex—executed in February 1601 for rebellion—was still alive. Other scholars consider this inconclusive. Edwards, for example, concludes that the "sense of time is so confused in Harvey's note that it is really of little use in trying to date ". This is because the same note also refers to Spenser and Watson as if they were still alive ("our flourishing metricians"), but also mentions "Owen's new epigrams", published in 1607. Three early editions of the text have survived, making attempts to establish a single "authentic" text problematic and inconclusive. Each surviving edition differs from the others: Other folios and quartos were subsequently published—including John Smethwick's Q3, Q4, and Q5 (1611–37)—but these are regarded as derivatives of the first three editions. Early editors of Shakespeare's works, beginning with Nicholas Rowe (1709) and Lewis Theobald (1733), combined material from the two earliest sources of "Hamlet" available at the time, Q2 and F1. Each text contains material that the other lacks, with many minor differences in wording: scarcely 200 lines are identical in the two. Editors have combined them in an effort to create one "inclusive" text that reflects an imagined "ideal" of Shakespeare's original. Theobald's version became standard for a long time, and his "full text" approach continues to influence editorial practice to the present day. Some contemporary scholarship, however, discounts this approach, instead considering "an authentic "Hamlet" an unrealisable ideal. ... there are "texts" of this play but no "text"". The 2006 publication by Arden Shakespeare of different "Hamlet" texts in different volumes is perhaps evidence of this shifting focus and emphasis. Other editors have continued to argue the need for well-edited editions taking material from all versions of the play. Colin Burrow has argued that "most of us should read a text that is made up by conflating all three versions ... it's about as likely that Shakespeare wrote: "To be or not to be, ay, there's the point" [in Q1], as that he wrote the works of Francis Bacon. I suspect most people just won't want to read a three-text play ... [multi-text editions are] a version of the play that is out of touch with the needs of a wider public." Traditionally, editors of Shakespeare's plays have divided them into five acts. None of the early texts of "Hamlet", however, were arranged this way, and the play's division into acts and scenes derives from a 1676 quarto. 
Modern editors generally follow this traditional division, but consider it unsatisfactory; for example, after Hamlet drags Polonius's body out of Gertrude's bedchamber, there is an act-break after which the action appears to continue uninterrupted. The discovery in 1823 of Q1—whose existence had been quite unsuspected—caused considerable interest and excitement, raising many questions of editorial practice and interpretation. Scholars immediately identified apparent deficiencies in Q1, which was instrumental in the development of the concept of a Shakespearean "bad quarto". Yet Q1 has value: it contains stage directions (such as Ophelia entering with a lute and her hair down) that reveal actual stage practices in a way that Q2 and F1 do not; it contains an entire scene (usually labelled 4.6) that does not appear in either Q2 or F1; and it is useful for comparison with the later editions. The major deficiency of Q1 is in the language: particularly noticeable in the opening lines of the famous "To be, or not to be" soliloquy: "To be, or not to be, aye there's the point. / To die, to sleep, is that all? Aye all: / No, to sleep, to dream, aye marry there it goes." However, the scene order is more coherent, without the problems of Q2 and F1 of Hamlet seeming to resolve something in one scene and enter the next drowning in indecision. New Cambridge editor Kathleen Irace has noted that "Q1's more linear plot design is certainly easier […] to follow […] but the simplicity of the Q1 plot arrangement eliminates the alternating plot elements that correspond to Hamlet's shifts in mood." Q1 is considerably shorter than Q2 or F1 and may be a memorial reconstruction of the play as Shakespeare's company performed it, by an actor who played a minor role (most likely Marcellus). Scholars disagree whether the reconstruction was pirated or authorised. It is suggested by Irace that Q1 is an abridged version intended especially for travelling productions, thus the question of length may be considered as separate from issues of poor textual quality. Editing Q1 thus poses problems in whether or not to "correct" differences from Q2 and F. Irace, in her introduction to Q1, wrote that "I have avoided as many other alterations as possible, because the differences...are especially intriguing...I have recorded a selection of Q2/F readings in the collation." The idea that Q1 is not riddled with error but is instead eminently fit for the stage has led to at least 28 different Q1 productions since 1881. Other productions have used the probably superior Q2 and Folio texts, but used Q1's running order, in particular moving the "to be or not to be" soliloquy earlier. Developing this, some editors such as Jonathan Bate have argued that Q2 may represent "a 'reading' text as opposed to a 'performance' one" of "Hamlet", analogous to how modern films released on disc may include deleted scenes: an edition containing all of Shakespeare's material for the play for the pleasure of readers, so not representing the play as it would have been staged. From the early 17th century, the play was famous for its ghost and vivid dramatisation of melancholy and insanity, leading to a procession of mad courtiers and ladies in Jacobean and Caroline drama. Though it remained popular with mass audiences, late 17th-century Restoration critics saw "Hamlet" as primitive and disapproved of its lack of unity and decorum. 
This view changed drastically in the 18th century, when critics regarded Hamlet as a hero—a pure, brilliant young man thrust into unfortunate circumstances. By the mid-18th century, however, the advent of Gothic literature brought psychological and mystical readings, returning madness and the ghost to the forefront. Not until the late 18th century did critics and performers begin to view Hamlet as confusing and inconsistent. Before then, he was either mad, or not; either a hero, or not; with no in-betweens. These developments represented a fundamental change in literary criticism, which came to focus more on character and less on plot. By the 19th century, Romantic critics valued "Hamlet" for its internal, individual conflict reflecting the strong contemporary emphasis on internal struggles and inner character in general. Then too, critics started to focus on Hamlet's delay as a character trait, rather than a plot device. This focus on character and internal struggle continued into the 20th century, when criticism branched in several directions, discussed in context and interpretation below. "Hamlet" departed from contemporary dramatic convention in several ways. For example, in Shakespeare's day, plays were usually expected to follow the advice of Aristotle in his "Poetics": that a drama should focus on action, not character. In "Hamlet", Shakespeare reverses this so that it is through the soliloquies, not the action, that the audience learns Hamlet's motives and thoughts. The play is full of seeming discontinuities and irregularities of action, except in the "bad" quarto. At one point, as in the Gravedigger scene, Hamlet seems resolved to kill Claudius: in the next scene, however, when Claudius appears, he is suddenly tame. Scholars still debate whether these twists are mistakes or intentional additions to add to the play's themes of confusion and duality. "Hamlet" also contains a recurrent Shakespearean device, a play within the play, a literary device or conceit in which one story is told during the action of another story. "Hamlet" is Shakespeare's longest play. The Riverside edition constitutes 4,042 lines totaling 29,551 words, typically requiring over four hours to stage. It is rare that the play is performed without some abridgments, and only one film adaptation has used a full-text conflation: Kenneth Branagh's 1996 version, which runs slightly more than four hours. Much of "Hamlet"'s language is courtly: elaborate, witty discourse, as recommended by Baldassare Castiglione's 1528 etiquette guide, "The Courtier". This work specifically advises royal retainers to amuse their masters with inventive language. Osric and Polonius, especially, seem to respect this injunction. Claudius's speech is rich with rhetorical figures—as is Hamlet's and, at times, Ophelia's—while the language of Horatio, the guards, and the gravediggers is simpler. Claudius's high status is reinforced by using the royal first person plural ("we" or "us"), and anaphora mixed with metaphor to resonate with Greek political speeches. Of all the characters, Hamlet has the greatest rhetorical skill. He uses highly developed metaphors, stichomythia, and in nine memorable words deploys both anaphora and asyndeton: "to die: to sleep— / To sleep, perchance to dream". In contrast, when occasion demands, he is precise and straightforward, as when he explains his inward emotion to his mother: "But I have that within which passes show, / These but the trappings and the suits of woe". 
At times, he relies heavily on puns to express his true thoughts while simultaneously concealing them. His "nunnery" remarks to Ophelia are an example of a cruel double meaning, as "nunnery" was Elizabethan slang for "brothel". His very first words in the play are a pun; when Claudius addresses him as "my cousin Hamlet, and my son", Hamlet says as an aside: "A little more than kin, and less than kind." An unusual rhetorical device, hendiadys, appears in several places in the play. Examples are found in Ophelia's speech at the end of the nunnery scene: "Th'expectancy and rose of the fair state" and "And I, of ladies most deject and wretched". Many scholars have found it odd that Shakespeare would, seemingly arbitrarily, use this rhetorical form throughout the play. One explanation may be that "Hamlet" was written later in Shakespeare's life, when he was adept at matching rhetorical devices to characters and the plot. Linguist George T. Wright suggests that hendiadys had been used deliberately to heighten the play's sense of duality and dislocation. Pauline Kiernan argues that Shakespeare changed English drama forever in "Hamlet" because he "showed how a character's language can often be saying several things at once, and contradictory meanings at that, to reflect fragmented thoughts and disturbed feelings". She gives the example of Hamlet's advice to Ophelia, "get thee to a nunnery", which is simultaneously a reference to a place of chastity and a slang term for a brothel, reflecting Hamlet's confused feelings about female sexuality. Hamlet's soliloquies have also captured the attention of scholars. Hamlet interrupts himself, vocalising either disgust or agreement with himself, and embellishing his own words. He has difficulty expressing himself directly and instead blunts the thrust of his thought with wordplay. It is not until late in the play, after his experience with the pirates, that Hamlet is able to articulate his feelings freely. Written at a time of religious upheaval, and in the wake of the English Reformation, the play is alternately Catholic (or piously medieval) and Protestant (or consciously modern). The ghost describes himself as being in purgatory, and as dying without last rites. This and Ophelia's burial ceremony, which is characteristically Catholic, make up most of the play's Catholic connections. Some scholars have observed that revenge tragedies come from Catholic countries like Italy and Spain, where the revenge tragedies present contradictions of motives, since according to Catholic doctrine the duty to God and family precedes civil justice. Hamlet's conundrum, then, is whether to avenge his father and kill Claudius, or to leave the vengeance to God, as his religion requires. Much of the play's Protestant tone derives from its setting in Denmark—both then and now a predominantly Protestant country, though it is unclear whether the fictional Denmark of the play is intended to portray this implicit fact. Dialogue refers explicitly to Wittenberg, where Hamlet, Horatio, and Rosencrantz and Guildenstern attend university, the town where Martin Luther in 1517 first proposed his 95 theses and thereby initiated the Protestant Reformation. Hamlet is often perceived as a philosophical character, expounding ideas that are now described as relativist, existentialist, and sceptical. For example, he expresses a subjectivistic idea when he says to Rosencrantz: "there is nothing either good or bad, but thinking makes it so". 
The idea that nothing is real except in the mind of the individual finds its roots in the Greek Sophists, who argued that since nothing can be perceived except through the senses—and since all individuals sense, and therefore perceive things differently—there is no absolute truth, but rather only relative truth. The clearest alleged instance of existentialism is in the "to be, or not to be" speech, where Hamlet is thought by some to use "being" to allude to life and action, and "not being" to death and inaction. "Hamlet" reflects the contemporary scepticism promoted by the French Renaissance humanist Michel de Montaigne. Prior to Montaigne's time, humanists such as Pico della Mirandola had argued that man was God's greatest creation, made in God's image and able to choose his own nature, but this view was subsequently challenged in Montaigne's "Essais" of 1580. Hamlet's "What a piece of work is a man" seems to echo many of Montaigne's ideas, and many scholars have discussed whether Shakespeare drew directly from Montaigne or whether both men were simply reacting similarly to the spirit of the times. In the first half of the 20th century, when psychoanalysis was at the height of its influence, its concepts were applied to "Hamlet", notably by Sigmund Freud, Ernest Jones, and Jacques Lacan, and these studies influenced theatrical productions. In his "The Interpretation of Dreams" (1900), Freud's analysis starts from the premise that "the play is built up on Hamlet's hesitations over fulfilling the task of revenge that is assigned to him; but its text offers no reasons or motives for these hesitations". After reviewing various literary theories, Freud concludes that Hamlet has an "Oedipal desire for his mother and the subsequent guilt [is] preventing him from murdering the man [Claudius] who has done what he unconsciously wanted to do". Confronted with his repressed desires, Hamlet realises that "he himself is literally no better than the sinner whom he is to punish". Freud suggests that Hamlet's apparent "distaste for sexuality"—articulated in his "nunnery" conversation with Ophelia—accords with this interpretation. John Barrymore's long-running 1922 performance in New York, directed by Thomas Hopkins, "broke new ground in its Freudian approach to character", in keeping with the post-World War I rebellion against everything Victorian. He had a "blunter intention" than presenting the genteel, sweet prince of 19th-century tradition, imbuing his character with virility and lust. Beginning in 1910, with the publication of "The Œdipus-Complex as an Explanation of Hamlet's Mystery: A Study in Motive" Ernest Jones—a psychoanalyst and Freud's biographer—developed Freud's ideas into a series of essays that culminated in his book "Hamlet and Oedipus" (1949). Influenced by Jones's psychoanalytic approach, several productions have portrayed the "closet scene", where Hamlet confronts his mother in her private quarters, in a sexual light. In this reading, Hamlet is disgusted by his mother's "incestuous" relationship with Claudius while simultaneously fearful of killing him, as this would clear Hamlet's path to his mother's bed. Ophelia's madness after her father's death may also be read through the Freudian lens: as a reaction to the death of her hoped-for lover, her father. Ophelia is overwhelmed by having her unfulfilled love for him so abruptly terminated and drifts into the oblivion of insanity. In 1937, Tyrone Guthrie directed Laurence Olivier in a Jones-inspired "Hamlet" at The Old Vic. 
Olivier later used some of these same ideas in his 1948 film version of the play. In the 1950s, Lacan's structuralist theories about "Hamlet" were first presented in a series of seminars given in Paris and later published in "Desire and the Interpretation of Desire in "Hamlet"". Lacan postulated that the human psyche is determined by structures of language and that the linguistic structures of "Hamlet" shed light on human desire. His point of departure is Freud's Oedipal theories, and the central theme of mourning that runs through "Hamlet". In Lacan's analysis, Hamlet unconsciously assumes the role of "phallus"—the cause of his inaction—and is increasingly distanced from reality "by mourning, fantasy, narcissism and psychosis", which create holes (or lack) in the real, imaginary, and symbolic aspects of his psyche. Lacan's theories influenced literary criticism of "Hamlet" because of his alternative vision of the play and his use of semantics to explore the play's psychological landscape. In the "Bloom's Shakespeare Through the Ages" volume on Hamlet, editors Bloom and Foster express a conviction that the intentions of Shakespeare in portraying the character of Hamlet in the play exceeded the capacity of the Freudian Oedipus complex to completely encompass the extent of characteristics depicted in Hamlet throughout the tragedy: "For once, Freud regressed in attempting to fasten the Oedipus Complex upon Hamlet: it will not stick, and merely showed that Freud did better than T.S. Eliot, who preferred "Coriolanus" to "Hamlet", or so he said. Who can believe Eliot, when he exposes his own Hamlet Complex by declaring the play to be an aesthetic failure?" The book also notes James Joyce's interpretation, stating that he "did far better in the Library Scene of "Ulysses", where Stephen marvelously credits Shakespeare, in this play, with universal fatherhood while accurately implying that Hamlet is fatherless, thus opening a pragmatic gap between Shakespeare and Hamlet." Joshua Rothman has written in "The New Yorker" that "we tell the story wrong when we say that Freud used the idea of the Oedipus complex to understand "Hamlet"". Rothman suggests that "it was the other way around: "Hamlet" helped Freud understand, and perhaps even invent, psychoanalysis". He concludes, "The Oedipus complex is a misnomer. It should be called the 'Hamlet complex'." In the essay "Hamlet Made Simple", David P. Gontar turns the tables on the psychoanalysts by suggesting that Claudius is not a symbolic father figure but actually Prince Hamlet's biological father. The hesitation in killing Claudius results from an unwillingness on Hamlet's part to slay his real father. If Hamlet is the biological son of Claudius, that explains many things. Hamlet does not become King of Denmark on the occasion of the King's death inasmuch as it is an open secret in court that he is Claudius's biological son, and as such he is merely a court bastard not in the line of succession. He is angry with his mother because of her long standing affair with a man Hamlet hates, and Hamlet must face the fact that he has been sired by the man he loathes. That point overturns T. S. Eliot's complaint that the play is a failure for not furnishing an "objective correlative" to account for Hamlet's rage at his mother. Gontar suggests that if the reader assumes that Hamlet is not who he seems to be, the objective correlative becomes apparent. 
Hamlet is suicidal in the first soliloquy not because his mother quickly remarries but because of her adulterous affair with the despised Claudius, which makes Hamlet his son. Finally, the ghost's confirmation of an alternative fatherhood for Hamlet is a fabrication that gives the prince a motive for revenge. In the 20th century, feminist critics opened up new approaches to Gertrude and Ophelia. New Historicist and cultural materialist critics examined the play in its historical context, attempting to piece together its original cultural environment. They focused on the gender system of early modern England, pointing to the common trinity of "maid, wife, or widow", with "whores" outside of that stereotype. In this analysis, the essence of "Hamlet" is the central character's changed perception of his mother as a whore because of her failure to remain faithful to Old Hamlet. In consequence, Hamlet loses his faith in all women, treating Ophelia as if she too were a whore and dishonest with him. Some critics see Ophelia as honest and fair; however, it is virtually impossible to link these two traits, since 'fairness' is an outward trait, while 'honesty' is an inward trait. Carolyn Heilbrun's 1957 essay "The Character of Hamlet's Mother" defends Gertrude, arguing that the text never hints that Gertrude knew of Claudius poisoning King Hamlet. This analysis has been praised by many feminist critics, combating what is, by Heilbrun's argument, centuries' worth of misinterpretation. By this account, Gertrude's worst crime is pragmatically marrying her brother-in-law in order to avoid a power vacuum. This is borne out by the fact that King Hamlet's ghost tells Hamlet to leave Gertrude out of Hamlet's revenge, to leave her to heaven, an arbitrary mercy to grant to a conspirator in murder. This view has not been without objection from some critics. Ophelia has also been defended by feminist critics, most notably Elaine Showalter. Ophelia is surrounded by powerful men: her father, brother, and Hamlet. All three disappear: Laertes leaves, Hamlet abandons her, and Polonius dies. Conventional theories had argued that without these three powerful men making decisions for her, Ophelia is driven into madness. Feminist theorists argue that she goes mad with guilt because, when Hamlet kills her father, he has fulfilled her sexual desire to have Hamlet kill her father so they can be together. Showalter points out that Ophelia has become the symbol of the distraught and hysterical woman in modern culture. "Hamlet" is one of the most quoted works in the English language, and is often included on lists of the world's greatest literature. As such, it reverberates through the writing of later centuries. Academic Laurie Osborne identifies the direct influence of "Hamlet" in numerous modern narratives, and divides them into four main categories: fictional accounts of the play's composition, simplifications of the story for young readers, stories expanding the role of one or more characters, and narratives featuring performances of the play. English poet John Milton was an early admirer of Shakespeare, and took evident inspiration from his work. As John Kerrigan discusses, Milton originally considered writing his epic poem "Paradise Lost" (1667) as a tragedy. While Milton did not ultimately go that route, the poem still shows distinct echoes of Shakespearean revenge tragedy, and of "Hamlet" in particular. As scholar Christopher N. 
Warren argues, "Paradise Lost"'s Satan "undergoes a transformation in the poem from a Hamlet-like avenger into a Claudius-like usurper", a plot device that supports Milton's larger Republican internationalist project. The poem also reworks theatrical language from "Hamlet", especially around the idea of "putting on" certain dispositions, as when Hamlet puts on "an antic disposition", similarly to the Son in "Paradise Lost" who "can put on / [God's] terrors." Henry Fielding's "Tom Jones", published about 1749, describes a visit to "Hamlet" by Tom Jones and Mr Partridge, with similarities to the "play within a play". In contrast, Goethe's Bildungsroman "Wilhelm Meister's Apprenticeship", written between 1776 and 1796, not only has a production of "Hamlet" at its core but also creates parallels between the ghost and Wilhelm Meister's dead father. In the early 1850s, in "Pierre", Herman Melville focuses on a Hamlet-like character's long development as a writer. Ten years later, Dickens's "Great Expectations" contains many Hamlet-like plot elements: it is driven by revenge-motivated actions, contains ghost-like characters (Abel Magwitch and Miss Havisham), and focuses on the hero's guilt. Academic Alexander Welsh notes that "Great Expectations" is an "autobiographical novel" and "anticipates psychoanalytic readings of "Hamlet" itself". About the same time, George Eliot's "The Mill on the Floss" was published, introducing Maggie Tulliver "who is explicitly compared with Hamlet" though "with a reputation for sanity". L. Frank Baum's first published short story was "They Played a New Hamlet" (1895). When Baum was touring New York State in the title role, the actor playing the ghost fell through the floorboards, and the rural audience thought it was part of the show and demanded that the actor repeat the fall, because they thought it was funny. Baum would later recount the actual story in an article, but the short story is told from the point of view of the actor playing the ghost. In the 1920s, James Joyce managed "a more upbeat version" of "Hamlet"—stripped of obsession and revenge—in "Ulysses", though its main parallels are with Homer's "Odyssey". In the 1990s, two novelists were explicitly influenced by "Hamlet". In Angela Carter's "Wise Children", "To be or not to be" is reworked as a song and dance routine, and Iris Murdoch's "The Black Prince" has Oedipal themes and murder intertwined with a love affair between a "Hamlet"-obsessed writer, Bradley Pearson, and the daughter of his rival. In the late 20th century, David Foster Wallace's novel "Infinite Jest" draws heavily from "Hamlet" and takes its title from the play's text; Wallace incorporates references to the gravedigger scene, the marriage of the main character's mother to his uncle, and the re-appearance of the main character's father as a ghost. There is the story of the woman who read "Hamlet" for the first time and said, "I don't see why people admire that play so. It is nothing but a bunch of quotations strung together." —Isaac Asimov, "Asimov's Guide to Shakespeare", p. vii, Avenel Books, 1970 Shakespeare almost certainly wrote the role of Hamlet for Richard Burbage. He was the chief tragedian of the Lord Chamberlain's Men, with a capacious memory for lines and a wide emotional range. Judging by the number of reprints, "Hamlet" appears to have been Shakespeare's fourth most popular play during his lifetime—only "Henry IV Part 1", "Richard III" and "Pericles" eclipsed it. 
Shakespeare provides no clear indication of when his play is set; however, as Elizabethan actors performed at the Globe in contemporary dress on minimal sets, this would not have affected the staging. Firm evidence for specific early performances of the play is scant. What is known is that the crew of the ship "Red Dragon", anchored off Sierra Leone, performed "Hamlet" in September 1607; that the play toured in Germany within five years of Shakespeare's death; and that it was performed before James I in 1619 and Charles I in 1637. Oxford editor George Hibbard argues that, since the contemporary literature contains many allusions and references to "Hamlet" (only Falstaff, among Shakespeare's characters, is mentioned more), the play was surely performed with a frequency that the historical record misses. All theatres were closed down by the Puritan government during the Interregnum. Even during this time, however, playlets known as "drolls" were often performed illegally, including one called "The Grave-Makers" based on Act 5, Scene 1 of "Hamlet". The play was revived early in the Restoration. When the existing stock of pre-civil war plays was divided between the two newly created patent theatre companies, "Hamlet" was the only Shakespearean favourite that Sir William Davenant's Duke's Company secured. It became the first of Shakespeare's plays to be presented with movable flats painted with generic scenery behind the proscenium arch of Lincoln's Inn Fields Theatre. This new stage convention highlighted the frequency with which Shakespeare shifts dramatic location, encouraging the recurrent criticism of his failure to maintain unity of place. In the title role, Davenant cast Thomas Betterton, who continued to play the Dane until he was 74. David Garrick at Drury Lane produced a version that adapted Shakespeare heavily; he declared: "I had sworn I would not leave the stage till I had rescued that noble play from all the rubbish of the fifth act. I have brought it forth without the grave-digger's trick, Osrick, & the fencing match". The first actor known to have played Hamlet in North America is Lewis Hallam Jr., in the American Company's production in Philadelphia in 1759. John Philip Kemble made his Drury Lane debut as Hamlet in 1783. His performance was said to be 20 minutes longer than anyone else's, and his lengthy pauses provoked the suggestion by Richard Brinsley Sheridan that "music should be played between the words". Sarah Siddons was the first actress known to play Hamlet; many women have since played him as a breeches role, to great acclaim. In 1748, Alexander Sumarokov wrote a Russian adaptation that focused on Prince Hamlet as the embodiment of an opposition to Claudius's tyranny—a treatment that would recur in Eastern European versions into the 20th century. In the years following America's independence, Thomas Apthorpe Cooper, the young nation's leading tragedian, performed "Hamlet" among other plays at the Chestnut Street Theatre in Philadelphia, and at the Park Theatre in New York. Although chided for "acknowledging acquaintances in the audience" and "inadequate memorisation of his lines", he became a national celebrity. From around 1810 to 1840, the best-known Shakespearean performances in the United States were tours by leading London actors—including George Frederick Cooke, Junius Brutus Booth, Edmund Kean, William Charles Macready, and Charles Kemble. 
Of these, Booth remained to make his career in the States, fathering the nation's most notorious actor, John Wilkes Booth (who later assassinated Abraham Lincoln), and its most famous Hamlet, Edwin Booth. Edwin Booth's "Hamlet" at the Fifth Avenue Theatre in 1875 was described as "… the dark, sad, dreamy, mysterious hero of a poem. … [acted] in an ideal manner, as far removed as possible from the plane of actual life". Booth played Hamlet for 100 nights in the 1864/5 season at The Winter Garden Theatre, inaugurating the era of long-run Shakespeare in America. In the United Kingdom, the actor-managers of the Victorian era (including Kean, Samuel Phelps, Macready, and Henry Irving) staged Shakespeare in a grand manner, with elaborate scenery and costumes. The tendency of actor-managers to emphasise the importance of their own central character did not always meet with the critics' approval. George Bernard Shaw's praise for Johnston Forbes-Robertson's performance contains a sideswipe at Irving: "The story of the play was perfectly intelligible, and quite took the attention of the audience off the principal actor at moments. What is the Lyceum coming to?" In London, Edmund Kean was the first Hamlet to abandon the regal finery usually associated with the role in favour of a plain costume, and he is said to have surprised his audience by playing Hamlet as serious and introspective. In stark contrast to earlier opulence, William Poel's 1881 production of the Q1 text was an early attempt at reconstructing the Elizabethan theatre's austerity; his only backdrop was a set of red curtains. Sarah Bernhardt played the prince in her popular 1899 London production. In contrast to the "effeminate" view of the central character that usually accompanied a female casting, she described her character as "manly and resolute, but nonetheless thoughtful ... [he] thinks before he acts, a trait indicative of great strength and great spiritual power". In France, Charles Kemble initiated an enthusiasm for Shakespeare; and leading members of the Romantic movement such as Victor Hugo and Alexandre Dumas saw his 1827 Paris performance of "Hamlet", particularly admiring the madness of Harriet Smithson's Ophelia. In Germany, "Hamlet" had become so assimilated by the mid-19th century that Ferdinand Freiligrath declared that "Germany is Hamlet". From the 1850s, the Parsi theatre tradition in India transformed "Hamlet" into folk performances, with dozens of songs added. Apart from some western troupes' 19th-century visits, the first professional performance of Hamlet in Japan was Otojirō Kawakami's 1903 "Shimpa" ("new school theatre") adaptation. Tsubouchi Shōyō translated "Hamlet" and produced a performance in 1911 that blended "Shingeki" ("new drama") and "Kabuki" styles. This hybrid-genre reached its peak in Tsuneari Fukuda's 1955 "Hamlet". In 1998, Yukio Ninagawa produced an acclaimed version of "Hamlet" in the style of Nō theatre, which he took to London. Konstantin Stanislavski and Edward Gordon Craig—two of the 20th century's most influential theatre practitioners—collaborated on the Moscow Art Theatre's seminal production of 1911–12. While Craig favoured stylised abstraction, Stanislavski, armed with his 'system,' explored psychological motivation. Craig conceived of the play as a symbolist monodrama, offering a dream-like vision as seen through Hamlet's eyes alone. This was most evident in the staging of the first court scene. 
The most famous aspect of the production is Craig's use of large, abstract screens that altered the size and shape of the acting area for each scene, representing the character's state of mind spatially or visualising a dramaturgical progression. The production attracted enthusiastic and unprecedented worldwide attention for the theatre and placed it "on the cultural map for Western Europe". "Hamlet" is often played with contemporary political overtones. Leopold Jessner's 1926 production at the Berlin Staatstheater portrayed Claudius's court as a parody of the corrupt and fawning court of Kaiser Wilhelm. In Poland, the number of productions of "Hamlet" has tended to increase at times of political unrest, since its political themes (suspected crimes, coups, surveillance) can be used to comment on a contemporary situation. Similarly, Czech directors have used the play at times of occupation: a 1941 Vinohrady Theatre production "emphasised, with due caution, the helpless situation of an intellectual attempting to endure in a ruthless environment". In China, performances of "Hamlet" often have political significance: Gu Wuwei's 1916 "The Usurper of State Power", an amalgam of "Hamlet" and "Macbeth", was an attack on Yuan Shikai's attempt to overthrow the republic. In 1942, Jiao Juyin directed the play in a Confucian temple in Sichuan Province, to which the government had retreated from the advancing Japanese. In the immediate aftermath of the collapse of the protests at Tiananmen Square, Lin Zhaohua staged a 1990 "Hamlet" in which the prince was an ordinary individual tortured by a loss of meaning. In this production, the actors playing Hamlet, Claudius and Polonius exchanged roles at crucial moments in the performance, including the moment of Claudius's death, at which point the actor mainly associated with Hamlet fell to the ground. Notable stagings in London and New York include Barrymore's 1925 production at the Haymarket; it influenced subsequent performances by John Gielgud and Laurence Olivier. Gielgud played the central role many times: his 1936 New York production ran for 132 performances, leading to the accolade that he was "the finest interpreter of the role since Barrymore". Although "posterity has treated Maurice Evans less kindly", throughout the 1930s and 1940s he was regarded by many as the leading interpreter of Shakespeare in the United States, and in the 1938/9 season he presented Broadway's first uncut "Hamlet", running four and a half hours. Evans later performed a highly truncated version of the play for South Pacific war zones during World War II, one that made the prince a more decisive character. The staging, known as the "G.I. Hamlet", was produced on Broadway for 131 performances in 1945/46. Olivier's 1937 performance at The Old Vic was popular with audiences but not with critics, with James Agate writing in a famous review in "The Sunday Times," "Mr. Olivier does not speak poetry badly. He does not speak it at all." In 1937 Tyrone Guthrie directed the play at Elsinore, Denmark, with Laurence Olivier as Hamlet and Vivien Leigh as Ophelia. In 1963, Olivier directed Peter O'Toole as Hamlet in the inaugural performance of the newly formed National Theatre; critics found resonance between O'Toole's Hamlet and John Osborne's hero, Jimmy Porter, from "Look Back in Anger". 
Richard Burton received his third Tony Award nomination when he played his second Hamlet, his first under John Gielgud's direction, in 1964 in a production that holds the record for the longest run of the play in Broadway history (137 performances). The performance was set on a bare stage, conceived to appear like a dress rehearsal, with Burton in a black v-neck sweater, and Gielgud himself tape-recorded the voice for the ghost (which appeared as a looming shadow). It was immortalised both on record and on a film that played in US theatres for a week in 1964 as well as being the subject of books written by cast members William Redfield and Richard L. Sterne. Other notable New York portrayals of "Hamlet" include that of Ralph Fiennes in 1995 (for which he won the Tony Award for Best Actor), which ran, from first preview to closing night, a total of one hundred performances. About the Fiennes "Hamlet" Vincent Canby wrote in "The New York Times" that it was "… not one for literary sleuths and Shakespeare scholars. It respects the play, but it doesn't provide any new material for arcane debates on what it all means. Instead it's an intelligent, beautifully read …" Stacy Keach played the role with an all-star cast at Joseph Papp's Delacorte Theatre in the early 1970s, with Colleen Dewhurst's Gertrude, James Earl Jones's King, Barnard Hughes's Polonius, Sam Waterston's Laertes and Raúl Juliá's Osric. Sam Waterston later played the role himself at the Delacorte for the New York Shakespeare Festival, and the show transferred to the Vivian Beaumont Theatre in 1975 (Stephen Lang played Bernardo and other roles). Stephen Lang's "Hamlet" for the Roundabout Theatre Company in 1992 received mixed reviews and ran for sixty-one performances. David Warner played the role with the Royal Shakespeare Theatre in 1965. William Hurt (at Circle Rep Off-Broadway, memorably performing "To Be Or Not to Be" while lying on the floor), Jon Voight at Rutgers, and Christopher Walken (fiercely) at Stratford CT have all played the role, as has Diane Venora at the Public Theatre. Off Broadway, the Riverside Shakespeare Company mounted an uncut first folio "Hamlet" in 1978 at Columbia University, with a playing time of under three hours. In fact, "Hamlet" is the most produced Shakespeare play in New York theatre history, with sixty-four recorded productions on Broadway, and an untold number Off Broadway. Ian Charleson performed Hamlet from 9 October to 13 November 1989, in Richard Eyre's production at the Olivier Theatre, replacing Daniel Day-Lewis, who had abandoned the production. Seriously ill from AIDS at the time, Charleson died eight weeks after his last performance. Fellow actor and friend Sir Ian McKellen said that Charleson played Hamlet so well it was as if he had rehearsed the role all his life; McKellen called it "the perfect Hamlet". The performance garnered other major accolades as well, some critics echoing McKellen in calling it the definitive Hamlet performance. In Australia, a production of "Hamlet" was staged at the Belvoir Street Theatre in Sydney in 1994 starring notable names including Richard Roxburgh as Hamlet, Geoffrey Rush as Horatio, Jacqueline McKenzie as Ophelia and David Wenham as Laertes. The critically acclaimed production was directed by Neil Armfield. 
"Hamlet" continues to be staged regularly, with actors such as Simon Russell Beale, Ben Whishaw, David Tennant, Angela Winkler, Samuel West, Christopher Eccleston, Maxine Peake, Rory Kinnear, Oscar Isaac, Christian Camargo, Andrew Scott, Paapa Essiedu and Michael Urie performing the lead role. In May 2009, "Hamlet" opened with Jude Law in the title role at the Donmar Warehouse West End season at Wyndham's Theatre. The production officially opened on 3 June and ran through 22 August 2009. A further production of the play ran at Elsinore Castle in Denmark from 25–30 August 2009. The Jude Law "Hamlet" then moved to Broadway, and ran for 12 weeks at the Broadhurst Theatre in New York. In 2013, American actor Paul Giamatti won critical acclaim for his performance on stage in the title role of "Hamlet", performed in modern dress, at the Yale Repertory Theater, at Yale University in New Haven, Connecticut. The Globe Theatre of London initiated a project in 2014 to perform "Hamlet" in every country in the world in the space of two years. Titled "Globe to Globe Hamlet", it began its tour on 23 April 2014, the 450th anniversary of Shakespeare's birth. As of 23 February 2016, the project had performed in 170 countries. Benedict Cumberbatch played the role for a 12-week run in a production at the Barbican Theatre, opening on 25 August 2015. The play was produced by Sonia Friedman, and directed by Lyndsey Turner, with set design by Es Devlin. It was called the "most in-demand theatre production of all time" and sold out in seven hours after tickets went on sale 11 August 2014, more than a year before the play opened. Tom Hiddleston played the role for a three-week run at Vanbrugh Theatre that opened on 1 September 2017 and was directed by Kenneth Branagh. In 2018, The Globe Theatre's newly instated artistic director Michelle Terry played the role in a production notable for its gender-blind casting. The earliest screen success for "Hamlet" was Sarah Bernhardt's five-minute film of the fencing scene, which was produced in 1900. The film was an early attempt at combining sound and film, music and words were recorded on phonograph records, to be played along with the film. Silent versions were released in 1907, 1908, 1910, 1913, 1917, and 1920. In the 1921 film "Hamlet", Danish actress Asta Nielsen played the role of Hamlet as a woman who spends her life disguised as a man. In 1942 Ernst Lubitsch directed a motion picture titled "To Be or Not To Be". The story is a comedy about a troupe of Polish actors in the Nazi era. A key plot point revolves around the famous soliloquy. The story was remade in 1983 in a film starring (and produced by) Mel Brooks. Laurence Olivier's 1948 moody black-and-white "Hamlet" won Best Picture and Best Actor Oscars, and is still, , the only Shakespeare film to have done so. His interpretation stressed the Oedipal overtones of the play, and cast 28-year-old Eileen Herlie as Hamlet's mother, opposite himself, at 41, as Hamlet. In 1953, actor Jack Manning performed the play in 15-minute segments over two weeks in the short-lived late night DuMont series "Monodrama Theater". "New York Times" TV critic Jack Gould praised Manning's performance as Hamlet. Renowned Shakespearean actor-directors Sir John Gielgud, Sir Laurence Olivier and Sir Kenneth Branagh consider the definitive rendition of the play to be the 1964 Soviet film "Gamlet" () based on a translation by Boris Pasternak and directed by Grigori Kozintsev, with a score by Dmitri Shostakovich. 
Innokenty Smoktunovsky was cast in the role of Hamlet. John Gielgud directed Richard Burton in a Broadway production at the Lunt-Fontanne Theatre in 1964–5, the longest-running "Hamlet" in the U.S. to date. A live film of the production was produced using "Electronovision", a method of recording a live performance with multiple video cameras and converting the image to film. Eileen Herlie repeated her role from Olivier's film version as the Queen, and the voice of Gielgud was heard as the ghost. The Gielgud/Burton production was also recorded complete and released on LP by Columbia Masterworks. The first "Hamlet" in color was a 1969 film directed by Tony Richardson with Nicol Williamson as Hamlet and Marianne Faithfull as Ophelia. Franco Zeffirelli, whose Shakespeare films have been described as "sensual rather than cerebral", cast Mel Gibson—then famous for the "Mad Max" and "Lethal Weapon" movies—in the title role of his 1990 version; Glenn Close—then famous as the psychotic "other woman" in "Fatal Attraction"—played Gertrude, and Paul Scofield played Hamlet's father. In contrast to Zeffirelli, whose "Hamlet" was heavily cut, Kenneth Branagh adapted, directed, and starred in a 1996 version containing every word of Shakespeare's play, combining the material from the F1 and Q2 texts. Branagh's "Hamlet" runs for just over four hours. Branagh set the film with late 19th-century costuming and furnishings, a production in many ways reminiscent of a Russian novel of the time; and Blenheim Palace, built in the early 18th century, became Elsinore Castle in the external scenes. The film is structured as an epic and makes frequent use of flashbacks to highlight elements not made explicit in the play: Hamlet's sexual relationship with Kate Winslet's Ophelia, for example, or his childhood affection for Yorick (played by Ken Dodd). In 2000, Michael Almereyda's "Hamlet" set the story in contemporary Manhattan, with Ethan Hawke playing Hamlet as a film student. Claudius (played by Kyle MacLachlan) became the CEO of "Denmark Corporation", having taken over the company by killing his brother. The 2002 "The Simpsons" episode "Tales from the Public Domain" features a retelling of the play. There have also been several films that transposed the general storyline of "Hamlet" or elements thereof to other settings. For example, the 2014 Bollywood film "Haider" is an adaptation set in Kashmir. There have also been many films which included performances of scenes from "Hamlet" as a play-within-a-film. There have been various "derivative works" of "Hamlet" which recast the story from the point of view of other characters, or transpose the story into a new setting or act as sequels or prequels to "Hamlet". This section is limited to those written for the stage. The best-known is Tom Stoppard's 1966 play "Rosencrantz and Guildenstern Are Dead", which retells many of the events of the story from the point of view of the characters Rosencrantz and Guildenstern and gives them a backstory of their own. Several times since 1995, the American Shakespeare Center has mounted repertories that included both "Hamlet" and "Rosencrantz and Guildenstern", with the same actors performing the same roles in each; in their 2001 and 2009 seasons the two plays were "directed, designed, and rehearsed together to make the most out of the shared scenes and situations". W. S. 
Gilbert wrote a short comic play titled "Rosencrantz and Guildenstern", in which Hamlet's play is presented as a tragedy written by Claudius in his youth, by which he is greatly embarrassed. Through the chaos triggered by Hamlet's staging of it, Guildenstern helps Rosencrantz vie with Hamlet to make Ophelia his bride. Lee Blessing's "Fortinbras" is a comical sequel to "Hamlet" in which all the deceased characters come back as ghosts. The "New York Times" reviewed the play, saying it is "scarcely more than an extended comedy sketch, lacking the portent and linguistic complexity of Tom Stoppard's "Rosencrantz and Guildenstern Are Dead". "Fortinbras" operates on a far less ambitious plane, but it is a ripping yarn and offers Keith Reddin a role in which he can commit comic mayhem". Caridad Svich's "12 Ophelias (a play with broken songs)" includes elements of the story of "Hamlet" but focuses on Ophelia. In Svich's play, Ophelia is resurrected and rises from a pool of water, after her death in "Hamlet". The play is a series of scenes and songs, and was first staged at a public swimming pool in Brooklyn. David Davalos' "Wittenberg" is a "tragical-comical-historical" prequel to "Hamlet" that depicts the Danish prince as a student at Wittenberg University (now known as the University of Halle-Wittenberg), where he is torn between the conflicting teachings of his mentors John Faustus and Martin Luther. The "New York Times" reviewed the play, saying, "Mr. Davalos has molded a daft campus comedy out of this unlikely convergence," and "Nytheatre.com"'s review said the playwright "has imagined a fascinating alternate reality, and quite possibly, given the fictional Hamlet a back story that will inform the role for the future." "Mad Boy Chronicle" by Canadian playwright Michael O'Brien is a dark comedy loosely based on "Hamlet", set in Viking Denmark in 999 A.D. All references to "Hamlet", unless otherwise specified, are taken from the Arden Shakespeare Q2. Under their referencing system, 3.1.55 means act 3, scene 1, line 55. References to the First Quarto and First Folio are marked "Hamlet Q1" and "Hamlet F1", respectively, and are taken from the Arden Shakespeare "Hamlet: the texts of 1603 and 1623". Their referencing system for Q1 has no act breaks, so 7.115 means scene 7, line 115. Huldrych Zwingli Huldrych Zwingli or Ulrich Zwingli (1 January 1484 – 11 October 1531) was a leader of the Reformation in Switzerland. Born during a time of emerging Swiss patriotism and increasing criticism of the Swiss mercenary system, he attended the University of Vienna and the University of Basel, a scholarly center of Renaissance humanism. He continued his studies while he served as a pastor in Glarus and later in Einsiedeln, where he was influenced by the writings of Erasmus. In 1519, Zwingli became the pastor of the Grossmünster in Zürich, where he began to preach ideas on reform of the Catholic Church. In his first public controversy in 1522, he attacked the custom of fasting during Lent. In his publications, he noted corruption in the ecclesiastical hierarchy, promoted clerical marriage, and attacked the use of images in places of worship. In 1525, Zwingli introduced a new communion liturgy to replace the Mass. Zwingli also clashed with the Anabaptists, which resulted in their persecution. Historians have debated whether or not he turned Zürich into a theocracy. The Reformation spread to other parts of the Swiss Confederation, but several cantons resisted, preferring to remain Catholic. 
Zwingli formed an alliance of Reformed cantons which divided the Confederation along religious lines. In 1529, a war between the two sides was averted at the last moment. Meanwhile, Zwingli's ideas came to the attention of Martin Luther and other reformers. They met at the Marburg Colloquy and although they agreed on many points of doctrine, they could not reach an accord on the doctrine of the Real Presence of Christ in the Eucharist. In 1531 Zwingli's alliance imposed an unsuccessful food blockade on the Catholic cantons. The cantons responded with an attack at a moment when Zürich was ill-prepared. Zwingli was killed in battle at the age of 47. His legacy lives on in the confessions, liturgy, and church orders of the Reformed churches of today. The Swiss Confederation in Huldrych Zwingli's time consisted of thirteen states (cantons) as well as affiliated areas and common lordships. Unlike the modern state of Switzerland, which operates under a federal government, each of the thirteen cantons was nearly independent, conducting its own domestic and foreign affairs. Each canton formed its own alliances within and without the Confederation. This relative independence served as the basis for conflict during the time of the Reformation when the various cantons divided between different confessional camps. Military ambitions gained an additional impetus with the competition to acquire new territory and resources, as seen for example in the Old Zürich War of 1440–1446. The wider political environment in Europe during the 15th and 16th centuries was also volatile. For centuries the relationship with the Confederation's powerful neighbour, France, determined the foreign policies of the Swiss. Nominally, the Confederation formed a part of the Holy Roman Empire. However, through a succession of wars culminating in the Swabian War in 1499, the Confederation had become "de facto" independent. As the two continental powers and minor regional states such as the Duchy of Milan, the Duchy of Savoy, and the Papal States competed and fought against each other, there were far-reaching political, economic, and social consequences for the Confederation. During this time the mercenary pension system became a subject of disagreement. The religious factions of Zwingli's time debated vociferously the merits of sending young Swiss men to fight in foreign wars mainly for the enrichment of the cantonal authorities. These internal and external factors contributed to the rise of a Confederation national consciousness, in which the term "fatherland" began to take on meaning beyond a reference to an individual canton. At the same time, Renaissance humanism, with its universal values and emphasis on scholarship (as exemplified by Erasmus (1466–1536), the "prince of humanism"), had taken root in the Confederation. Within this environment, defined by the confluence of Swiss patriotism and humanism, Zwingli was born in 1484. Huldrych Zwingli was born on 1 January 1484 in Wildhaus, in the Toggenburg valley of Switzerland, to a family of farmers, the third child of nine. His father, Ulrich, played a leading role in the administration of the community ("Amtmann" or chief local magistrate). Zwingli's primary schooling was provided by his uncle, Bartholomew, a cleric in Weesen, where he probably met Katharina von Zimmern. At ten years old, Zwingli was sent to Basel to obtain his secondary education, where he learned Latin under Magistrate Gregory Bünzli. 
After three years in Basel, he stayed a short time in Bern with the humanist Henry Wölfflin. The Dominicans in Bern tried to persuade Zwingli to join their order and it is possible that he was received as a novice. However, his father and uncle disapproved of such a course and he left Bern without completing his Latin studies. He enrolled in the University of Vienna in the winter semester of 1498 but was expelled, according to the university's records. However, it is not certain that Zwingli was indeed expelled, and he re-enrolled in the summer semester of 1500; his activities in 1499 are unknown. Zwingli continued his studies in Vienna until 1502, after which he transferred to the University of Basel where he received the Master of Arts degree ("Magister") in 1506. Zwingli was ordained in Constance, the seat of the local diocese, and he celebrated his first Mass in his hometown, Wildhaus, on 29 September 1506. As a young priest he had studied little theology, but this was not considered unusual at the time. His first ecclesiastical post was the pastorate of the town of Glarus, where he stayed for ten years. It was in Glarus, whose soldiers were used as mercenaries in Europe, that Zwingli became involved in politics. The Swiss Confederation was embroiled in various campaigns with its neighbours: the French, the Habsburgs, and the Papal States. Zwingli placed himself solidly on the side of the Roman See. In return, Pope Julius II honoured Zwingli by providing him with an annual pension. He took the role of chaplain in several campaigns in Italy, including the Battle of Novara in 1513. However, the decisive defeat of the Swiss in the Battle of Marignano caused a shift in mood in Glarus in favour of the French rather than the pope. Zwingli, the papal partisan, found himself in a difficult position and he decided to retreat to Einsiedeln in the canton of Schwyz. By this time, he had become convinced that mercenary service was immoral and that Swiss unity was indispensable for any future achievements. Some of his earliest extant writings, such as "The Ox" (1510) and "The Labyrinth" (1516), attacked the mercenary system using allegory and satire. His countrymen were presented as virtuous people within a French, imperial, and papal triangle. Zwingli stayed in Einsiedeln for two years during which he withdrew completely from politics in favour of ecclesiastical activities and personal studies. Zwingli's time as the pastor of Glarus and Einsiedeln was characterized by inner growth and development. He perfected his Greek and took up the study of Hebrew. His library contained over three hundred volumes, enabling him to draw upon classical, patristic, and scholastic works. He exchanged scholarly letters with a circle of Swiss humanists and began to study the writings of Erasmus. Zwingli took the opportunity to meet Erasmus while the latter was in Basel between August 1514 and May 1516. Zwingli's turn to relative pacifism and his focus on preaching can be traced to the influence of Erasmus. In late 1518, the post of the "Leutpriestertum" (people's priest) of the Grossmünster at Zürich became vacant. The canons of the foundation that administered the Grossmünster recognised Zwingli's reputation as a fine preacher and writer. His connection with humanists was a decisive factor as several canons were sympathetic to Erasmian reform. In addition, his opposition to the French and to mercenary service was welcomed by Zürich politicians. 
On 11 December 1518, the canons elected Zwingli to become the stipendiary priest and on 27 December he moved permanently to Zürich. On 1 January 1519, Zwingli gave his first sermon in Zürich. Deviating from the prevalent practice of basing a sermon on the Gospel lesson of a particular Sunday, Zwingli, using Erasmus' New Testament as a guide, began to read through the Gospel of Matthew, giving his interpretation during the sermon, a method known as "lectio continua". He continued to read and interpret the book on subsequent Sundays until he reached the end and then proceeded in the same manner with the Acts of the Apostles, the New Testament epistles, and finally the Old Testament. His motives for doing this are not clear, but in his sermons he used exhortation to achieve moral and ecclesiastical improvement, goals comparable with those of Erasmian reform. Sometime after 1520, Zwingli's theological model began to evolve into an idiosyncratic form that was neither Erasmian nor Lutheran. Scholars do not agree on the process of how he developed his own unique model. One view is that Zwingli was trained as an Erasmian humanist and Luther played a decisive role in changing his theology. Another view is that Zwingli did not pay much attention to Luther's theology and in fact considered it part of the humanist reform movement. A third view is that Zwingli was not a complete follower of Erasmus, but had diverged from him as early as 1516 and that he independently developed his theology. Zwingli's theological stance was gradually revealed through his sermons. He attacked moral corruption and in the process he named individuals who were the targets of his denunciations. Monks were accused of indolence and high living. In 1519, Zwingli specifically rejected the veneration of saints and called for the need to distinguish between their true and fictional accounts. He cast doubt on hellfire, asserted that unbaptised children were not damned, and questioned the power of excommunication. His attack on the claim that tithing was a divine institution, however, had the greatest theological and social impact. This contradicted the immediate economic interests of the foundation. One of the elderly canons who had supported Zwingli's election, Konrad Hofmann, complained about his sermons in a letter. Some canons supported Hofmann, but the opposition never grew very large. Zwingli insisted that he was not an innovator and that the sole basis of his teachings was Scripture. Within the diocese of Constance, Bernhardin Sanson was offering a special indulgence for contributors to the building of St Peter's in Rome. When Sanson arrived at the gates of Zürich at the end of January 1519, parishioners prompted Zwingli with questions. He responded with displeasure that the people were not being properly informed about the conditions of the indulgence and were being induced to part with their money on false pretences. This was over a year after Martin Luther published his Ninety-five Theses (31 October 1517). The council of Zürich refused Sanson entry into the city. As the authorities in Rome were anxious to contain the fire started by Luther, the Bishop of Constance denied Sanson any support and he was recalled. In August 1519, Zürich was struck by an outbreak of the plague during which at least one in four persons died. All of those who could afford it left the city, but Zwingli remained and continued his pastoral duties. In September, he caught the disease and nearly died. 
He described his preparation for death in a poem, the "Pestlied", consisting of three parts: the onset of the illness, the closeness to death, and the joy of recovery. In the years following his recovery, Zwingli's opponents remained in the minority. When a vacancy occurred among the canons of the Grossmünster, Zwingli was elected to fill it on 29 April 1521. In becoming a canon, he became a full citizen of Zürich. He also retained his post as the people's priest of the Grossmünster. The first public controversy regarding Zwingli's preaching broke out during the season of Lent in 1522. On the first fasting Sunday, 9 March, Zwingli and about a dozen other participants consciously transgressed the fasting rule by cutting and distributing two smoked sausages (the "Wurstessen" in Christoph Froschauer's workshop). Zwingli defended this act in a sermon which was published on 16 April, under the title "Von Erkiesen und Freiheit der Speisen" (Regarding the Choice and Freedom of Foods). He noted that no generally valid rule on food can be derived from the Bible and that to transgress such a rule is not a sin. The event, which came to be referred to as the Affair of the Sausages, is considered to be the start of the Reformation in Switzerland. Even before the publication of this treatise, the diocese of Constance reacted by sending a delegation to Zürich. The city council condemned the fasting violation, but assumed responsibility over ecclesiastical matters and requested that the religious authorities clarify the issue. The bishop responded on 24 May by admonishing the Grossmünster and city council and repeating the traditional position. Following this event, Zwingli and other humanist friends petitioned the bishop on 2 July to abolish the requirement of clerical celibacy. Two weeks later the petition was reprinted for the public in German as "Eine freundliche Bitte und Ermahnung an die Eidgenossen" (A Friendly Petition and Admonition to the Confederates). The issue was not just an abstract problem for Zwingli, as he had secretly married a widow, Anna Reinhard, earlier in the year. Their cohabitation was well-known and their public wedding took place on 2 April 1524, three months before the birth of their first child. They would eventually have four children: Regula, William, Huldrych, and Anna. As the petition was addressed to the secular authorities, the bishop responded at the same level by notifying the Zürich government to maintain the ecclesiastical order. Other Swiss clergymen joined in Zwingli's cause, which encouraged him to make his first major statement of faith, "Apologeticus Archeteles" (The First and Last Word). He defended himself against charges of inciting unrest and heresy. He denied the ecclesiastical hierarchy any right to judge on matters of church order because of its corrupted state. The events of 1522 brought no clarification on the issues. Not only did the unrest between Zürich and the bishop continue, but tensions were also growing among Zürich's Confederation partners in the Swiss Diet. On 22 December, the Diet recommended that its members prohibit the new teachings, a strong indictment directed at Zürich. The city council felt obliged to take the initiative and find its own solution. On 3 January 1523, the Zürich city council invited the clergy of the city and outlying region to a meeting to allow the factions to present their opinions. The bishop was invited to attend or to send a representative. 
The council would render a decision on who would be allowed to continue to proclaim their views. This meeting, the first Zürich disputation, took place on 29 January 1523. The meeting attracted a large crowd of approximately six hundred participants. The bishop sent a delegation led by his vicar general, Johannes Fabri. Zwingli summarised his position in the "Schlussreden" (Concluding Statements or the Sixty-seven Articles). Fabri, who had not envisaged an academic disputation in the manner Zwingli had prepared for, was forbidden to discuss high theology before laymen, and simply insisted on the necessity of the ecclesiastical authority. The decision of the council was that Zwingli would be allowed to continue his preaching and that all other preachers should teach only in accordance with Scripture. In September 1523, Leo Jud, Zwingli's closest friend and colleague and pastor of St. Peterskirche, publicly called for the removal of statues of saints and other icons. This led to demonstrations and iconoclastic activities. The city council decided to work out the matter of images in a second disputation. The essence of the mass and its sacrificial character were also included as subjects of discussion. Supporters of the mass claimed that the eucharist was a true sacrifice, while Zwingli claimed that it was a commemorative meal. As in the first disputation, an invitation was sent out to the Zürich clergy and the bishop of Constance. This time, however, the lay people of Zürich, the dioceses of Chur and Basel, the University of Basel, and the other twelve members of the Confederation were also invited. About nine hundred persons attended this meeting, but neither the bishop nor the Confederation sent representatives. The disputation started on 26 October 1523 and lasted three days. Zwingli again took the lead in the disputation. His opponent was the aforementioned canon, Konrad Hofmann, who had initially supported Zwingli's election. Also taking part was a group of young men demanding a much faster pace of reformation, who among other things pleaded for replacing infant baptism with adult baptism. This group was led by Conrad Grebel, one of the initiators of the Anabaptist movement. During the dispute, although the controversy of images and the mass was discussed, the arguments led to the question of whether the city council or the ecclesiastical government had the authority to decide on these issues. At this point, Konrad Schmid, a priest from Aargau and follower of Zwingli, made a pragmatic suggestion. As images were not yet considered to be valueless by everyone, he suggested that pastors preach on this subject under threat of punishment. He believed the opinions of the people would gradually change and the voluntary removal of images would follow. Hence, Schmid rejected the radicals and their iconoclasm, but supported Zwingli's position. In November the council passed ordinances in support of Schmid's motion. Zwingli wrote a booklet on the evangelical duties of a minister, "Kurze, christliche Einleitung" (Short Christian Introduction), and the council sent it out to the clergy and the members of the Confederation. In December 1523, the council set a deadline of Pentecost in 1524 for a solution to the elimination of the mass and images. Zwingli gave a formal opinion in "Vorschlag wegen der Bilder und der Messe" (Proposal Concerning Images and the Mass). He did not urge an immediate, general abolition. 
The council decided on the orderly removal of images within Zürich, but rural congregations were granted the right to remove them based on majority vote. The decision on the mass was postponed. Evidence of the effect of the Reformation was seen in early 1524. Candlemas was not celebrated, processions of robed clergy ceased, worshippers did not go with palms or relics on Palm Sunday to the Lindenhof, and triptychs remained covered and closed after Lent. Opposition to the changes came from Konrad Hofmann and his followers, but the council decided in favour of keeping the government mandates. When Hofmann left the city, opposition from pastors hostile to the Reformation broke down. The bishop of Constance tried to intervene in defending the mass and the veneration of images. Zwingli wrote an official response for the council and the result was the severance of all ties between the city and the diocese. Although the council had hesitated in abolishing the mass, the decrease in the exercise of traditional piety allowed pastors to be unofficially released from the requirement of celebrating mass. As individual pastors altered their practices as each saw fit, Zwingli was prompted to address this disorganised situation by designing a communion liturgy in the German language. This was published in "Aktion oder Brauch des Nachtmahls" (Act or Custom of the Supper). Shortly before Easter, Zwingli and his closest associates requested the council to cancel the mass and to introduce the new public order of worship. On Maundy Thursday, 13 April 1525, Zwingli celebrated communion under his new liturgy. Wooden cups and plates were used to avoid any outward displays of formality. The congregation sat at set tables to emphasise the meal aspect of the sacrament. The sermon was the focal point of the service and there was no organ music or singing. The importance of the sermon in the worship service was underlined by Zwingli's proposal to limit the celebration of communion to four times a year. For some time Zwingli had accused mendicant orders of hypocrisy and demanded their abolition in order to support the truly poor. He suggested that the monasteries be changed into hospitals and welfare institutions and that their wealth be incorporated into a welfare fund. This was done by reorganising the foundations of the Grossmünster and Fraumünster and pensioning off remaining nuns and monks. The council secularised the church properties (the Fraumünster was handed over by Zwingli's acquaintance Katharina von Zimmern) and established new welfare programs for the poor. Zwingli requested permission to establish a Latin school, the "Prophezei" (Prophecy) or "Carolinum", at the Grossmünster. The council agreed and it was officially opened on 19 June 1525 with Zwingli and Jud as teachers. It served to retrain and re-educate the clergy. The Zürich Bible translation, traditionally attributed to Zwingli and printed by Christoph Froschauer, bears the mark of teamwork from the Prophecy school. Scholars have not yet attempted to clarify Zwingli's share of the work based on external and stylistic evidence. Shortly after the second Zürich disputation, many in the radical wing of the Reformation became convinced that Zwingli was making too many concessions to the Zürich council. They rejected the role of civil government and demanded the immediate establishment of a congregation of the faithful. Conrad Grebel, the leader of the radicals and the emerging Anabaptist movement, spoke disparagingly of Zwingli in private. 
On 15 August 1524 the council insisted on the obligation to baptise all newborn infants. Zwingli secretly conferred with Grebel's group and late in 1524, the council called for official discussions. When talks were broken off, Zwingli published "Wer Ursache gebe zu Aufruhr" (Whoever Causes Unrest) clarifying the opposing points-of-view. On 17 January 1525 a public debate was held and the council decided in favour of Zwingli. Anyone refusing to have their children baptised was required to leave Zürich. The radicals ignored these measures and on 21 January, they met at the house of the mother of another radical leader, Felix Manz. Grebel and a third leader, George Blaurock, performed the first recorded Anabaptist adult baptisms. On February 2, the council repeated the requirement on the baptism of all babies and some who failed to comply were arrested and fined, Manz and Blaurock among them. Zwingli and Jud interviewed them and more debates were held before the Zürich council. Meanwhile, the new teachings continued to spread to other parts of the Confederation as well as a number of Swabian towns. On 6–8 November, the last debate on the subject of baptism took place in the Grossmünster. Grebel, Manz, and Blaurock defended their cause before Zwingli, Jud, and other reformers. There was no serious exchange of views as each side would not move from their positions and the debates degenerated into an uproar, each side shouting abuse at the other. The Zürich council decided that no compromise was possible. On 7 March 1526 it released the notorious mandate that no one shall rebaptise another under the penalty of death. Although Zwingli, technically, had nothing to do with the mandate, there is no indication that he disapproved. Felix Manz, who had sworn to leave Zürich and not to baptise any more, had deliberately returned and continued the practice. After he was arrested and tried, he was executed on 5 January 1527 by being drowned in the Limmat. He was the first Anabaptist martyr; three more were to follow, after which all others either fled or were expelled from Zürich. On 8 April 1524, five cantons, Lucerne, Uri, Schwyz, Unterwalden, and Zug, formed an alliance, "die fünf Orte" (the Five States) to defend themselves from Zwingli's Reformation. They contacted the opponents of Martin Luther including John Eck, who had debated Luther in the Leipzig Disputation of 1519. Eck offered to dispute Zwingli and he accepted. However, they could not agree on the selection of the judging authority, the location of the debate, and the use of the Swiss Diet as a court. Because of the disagreements, Zwingli decided to boycott the disputation. On 19 May 1526, all the cantons sent delegates to Baden. Although Zürich's representatives were present, they did not participate in the sessions. Eck led the Catholic party while the reformers were represented by Johannes Oecolampadius of Basel, a theologian from Württemberg who had carried on an extensive and friendly correspondence with Zwingli. While the debate proceeded, Zwingli was kept informed of the proceedings and printed pamphlets giving his opinions. It was of little use as the Diet decided against Zwingli. He was to be banned and his writings were no longer to be distributed. Of the thirteen Confederation members, Glarus, Solothurn, Fribourg, and Appenzell as well as the Five States voted against Zwingli. Bern, Basel, Schaffhausen, and Zürich supported him. The Baden disputation exposed a deep rift in the Confederation on matters of religion. 
The Reformation was now emerging in other states. The city of St Gallen, an affiliated state to the Confederation, was led by a reformed mayor, Joachim Vadian, and the city abolished the mass in 1527, just two years after Zürich. In Basel, although Zwingli had a close relationship with Oecolampadius, the government did not officially sanction any reformatory changes until 1 April 1529 when the mass was prohibited. Schaffhausen, which had closely followed Zürich's example, formally adopted the Reformation in September 1529. In the case of Bern, Berchtold Haller, the priest at St Vincent Münster, and Niklaus Manuel, the poet, painter, and politician, had campaigned for the reformed cause. But it was only after another disputation that Bern counted itself as a canton of the Reformation. Four hundred and fifty persons participated, including pastors from Bern and other cantons as well as theologians from outside the Confederation such as Martin Bucer and Wolfgang Capito from Strasbourg, Ambrosius Blarer from Constance, and Andreas Althamer from Nuremberg. Eck and Fabri refused to attend and the Catholic cantons did not send representatives. The meeting started on 6 January 1528 and lasted nearly three weeks. Zwingli assumed the main burden of defending the Reformation and he preached twice in the Münster. On 7 February 1528 the council decreed that the Reformation be established in Bern. Even before the Bern disputation, Zwingli was canvassing for an alliance of reformed cities. Once Bern officially accepted the Reformation, a new alliance, "das Christliche Burgrecht" (the Christian Civic Union) was created. The first meetings were held in Bern between representatives of Bern, Constance, and Zürich on 5–6 January 1528. Other cities, including Basel, Biel, Mülhausen, Schaffhausen, and St Gallen, eventually joined the alliance. The Five (Catholic) States felt encircled and isolated, so they searched for outside allies. After two months of negotiations, the Five States formed "die Christliche Vereinigung" (the Christian Alliance) with Ferdinand of Austria on 22 April 1529. Soon after the Austrian treaty was signed, a reformed preacher, Jacob Kaiser, was captured in Uznach and executed in Schwyz. This triggered a strong reaction from Zwingli; he drafted "Ratschlag über den Krieg" (Advice About the War) for the government. He outlined justifications for an attack on the Catholic states and other measures to be taken. Before Zürich could implement his plans, a delegation from Bern that included Niklaus Manuel arrived in Zürich. The delegation called on Zürich to settle the matter peacefully. Manuel added that an attack would expose Bern to further dangers as Catholic Valais and the Duchy of Savoy bordered its southern flank. He then noted, "You cannot really bring faith by means of spears and halberds." Zürich, however, decided that it would act alone, knowing that Bern would be obliged to acquiesce. War was declared on 8 June 1529. Zürich was able to raise an army of 30,000 men. The Five States were abandoned by Austria and could raise only 9,000 men. The two forces met near Kappel, but war was averted due to the intervention of Hans Aebli, a relative of Zwingli, who pleaded for an armistice. Zwingli was obliged to state the terms of the armistice. He demanded the dissolution of the Christian Alliance; unhindered preaching by reformers in the Catholic states; prohibition of the pension system; payment of war reparations; and compensation to the children of Jacob Kaiser. 
Manuel was involved in the negotiations. Bern was not prepared to insist on the unhindered preaching or the prohibition of the pension system. Zürich and Bern could not agree and the Five (Catholic) States pledged only to dissolve their alliance with Austria. This was a bitter disappointment for Zwingli and it marked his decline in political influence. The first Land Peace of Kappel, "der erste Landfriede", ended the war on 24 June. While Zwingli carried on the political work of the Swiss Reformation, he developed his theological views with his colleagues. The famous disagreement between Luther and Zwingli on the interpretation of the eucharist originated when Andreas Karlstadt, Luther's former colleague from Wittenberg, published three pamphlets on the Lord's Supper in which Karlstadt rejected the idea of a real presence in the elements. These pamphlets, published in Basel in 1524, received the approval of Oecolampadius and Zwingli. Luther rejected Karlstadt's arguments and considered Zwingli primarily to be a partisan of Karlstadt. Zwingli began to express his thoughts on the eucharist in several publications including "de Eucharistia" (On the Eucharist). He attacked the idea of the real presence and argued that the word "is" in the words of the institution—"This is my body, this is my blood"—means "signifies". Hence, the words are understood as a metaphor and Zwingli claimed that there was no real presence during the eucharist. In effect, the meal was symbolic of the Last Supper. By spring 1527, Luther reacted strongly to Zwingli's views in the treatise "Dass Diese Worte Christi "Das ist mein Leib etc." noch fest stehen wider die Schwarmgeister" (That These Words of Christ "This is My Body etc." Still Stand Firm Against the Fanatics). The controversy continued until 1528 when efforts to build bridges between the Lutheran and the Zwinglian views began. Martin Bucer tried to mediate while Philip of Hesse, who wanted to form a political coalition of all Protestant forces, invited the two parties to Marburg to discuss their differences. This event became known as the Marburg Colloquy. Zwingli accepted Philip's invitation fully believing that he would be able to convince Luther. By contrast, Luther did not expect anything to come out of the meeting and had to be urged by Philip to attend. Zwingli, accompanied by Oecolampadius, arrived on 28 September 1529 with Luther and Philipp Melanchthon arriving shortly thereafter. Other theologians also participated including Martin Bucer, Andreas Osiander, Johannes Brenz, and Justus Jonas. The debates were held from 1–3 October and the results were published in the fifteen "Marburg Articles". The participants were able to agree on fourteen of the articles, but the fifteenth article established the differences in their views on the presence of Christ in the eucharist. Afterwards, each side was convinced that they were the victors, but in fact the controversy was not resolved and the final result was the formation of two different Protestant confessions. With the failure of the Marburg Colloquy and the split of the Confederation, Zwingli set his goal on an alliance with Philip of Hesse. He kept up a lively correspondence with Philip. Bern refused to participate, but after a long process, Zürich, Basel, and Strasbourg signed a mutual defence treaty with Philip in November 1530. Zwingli also personally negotiated with France's diplomatic representative, but the two sides were too far apart. France wanted to maintain good relations with the Five States. 
Approaches to Venice and Milan also failed. As Zwingli was working on establishing these political alliances, Charles V, the Holy Roman Emperor, invited Protestants to the Augsburg Diet to present their views so that he could render a verdict on the issue of faith. The Lutherans presented the Augsburg Confession. Under the leadership of Martin Bucer, the cities of Strasbourg, Constance, Memmingen, and Lindau produced the Tetrapolitan Confession. This document attempted to take a middle position between the Lutherans and Zwinglians. It was too late for the "Burgrecht" cities to produce a confession of their own. Zwingli then produced his own private confession, "Fidei ratio" (Account of Faith), in which he explained his faith in twelve articles conforming to the articles of the Apostles' Creed. The tone was strongly anti-Catholic as well as anti-Lutheran. The Lutherans did not react officially, but criticised it privately. Zwingli's and Luther's old opponent, Johann Eck, counter-attacked with a publication, "Refutation of the Articles Zwingli Submitted to the Emperor". When Philip of Hesse formed the Schmalkaldic League at the end of 1530, the four cities of the Tetrapolitan Confession joined on the basis of a Lutheran interpretation of that confession. Given the flexibility of the league's entrance requirements, Zürich, Basel, and Bern also considered joining. However, Zwingli could not reconcile the Tetrapolitan Confession with his own beliefs and wrote a harsh refusal to Bucer and Capito. This offended Philip to the point where relations with the League were severed. The "Burgrecht" cities now had no external allies to help deal with internal Confederation religious conflicts. The peace treaty of the First Kappel War did not define the right of unhindered preaching in the Catholic states. Zwingli interpreted this to mean that preaching should be permitted, but the Five States suppressed any attempts to reform. The "Burgrecht" cities considered different means of applying pressure to the Five States. Basel and Schaffhausen preferred quiet diplomacy while Zürich wanted armed conflict. Zwingli and Jud unequivocally advocated an attack on the Five States. Bern took a middle position which eventually prevailed. In May 1531, Zürich reluctantly agreed to impose a food blockade. It failed to have any effect and in October, Bern decided to withdraw the blockade. Zürich urged its continuation and the "Burgrecht" cities began to quarrel among themselves. On 9 October 1531, in a surprise move, the Five States declared war on Zürich. Zürich's mobilisation was slow due to internal squabbling and on 11 October, 3500 poorly deployed men encountered a Five States force nearly double their size near Kappel. Many pastors, including Zwingli, were among the soldiers. The battle lasted less than one hour and Zwingli was among the 500 casualties in the Zürich army. Zwingli had considered himself first and foremost a soldier of Christ; second a defender of his country, the Confederation; and third a leader of his city, Zürich, where he had lived for the previous twelve years. Ironically, he died at the age of 47, not for Christ nor for the Confederation, but for Zürich. In his "Table Talk", Luther is recorded as saying: "They say that Zwingli recently died thus; if his error had prevailed, we would have perished, and our church with us. It was a judgment of God. That was always a proud people. The others, the papists, will probably also be dealt with by our Lord God." 
Erasmus wrote, "We are freed from great fear by the death of the two preachers, Zwingli and Oecolampadius, whose fate has wrought an incredible change in the mind of many. This is the wonderful hand of God on high." Oecolampadius had died on 24 November. Erasmus also wrote, "If Bellona had favoured them, it would have been all over with us." According to Zwingli, the cornerstone of theology is the Bible. Zwingli appealed to scripture constantly in his writings. He placed its authority above other sources such as the ecumenical councils or the Church Fathers, although he did not hesitate to use other sources to support his arguments. The principles that guide Zwingli's interpretations are derived from his rationalist humanist education and his Reformed understanding of the Bible. He rejected literalist interpretations of a passage, such as those of the Anabaptists, and used synecdoche and analogies, methods he describes in "A Friendly Exegesis" (1527). Two analogies that he used quite effectively were between baptism and circumcision and between the eucharist and Passover. He also paid attention to the immediate context and attempted to understand the purpose behind it, comparing passages of scripture with each other. Zwingli rejected the word "sacrament" in the popular usage of his time. For ordinary people, the word meant some kind of holy action of which there is inherent power to free the conscience from sin. For Zwingli, a sacrament was an initiatory ceremony or a pledge, pointing out that the word was derived from "sacramentum" meaning an oath. (However, the word is also translated "mystery".) In his early writings on baptism, he noted that baptism was an example of such a pledge. He challenged Catholics by accusing them of superstition when they ascribed the water of baptism a certain power to wash away sin. Later, in his conflict with the Anabaptists, he defended the practice of infant baptism, noting that there is no law forbidding the practice. He argued that baptism was a sign of a covenant with God, thereby replacing circumcision in the Old Testament. Zwingli approached the eucharist in a similar manner to baptism. During the first Zürich disputation in 1523, he denied that an actual sacrifice occurred during the mass, arguing that Christ made the sacrifice only once and for all eternity. Hence, the eucharist was "a memorial of the sacrifice". Following this argument, he further developed his view, coming to the conclusion of the "signifies" interpretation for the words of the institution. He used various passages of scripture to argue against transubstantiation as well as Luther's views, the key text being John 6:63, "It is the Spirit who gives life, the flesh is of no avail". Zwingli's approach and interpretation of scripture to understand the meaning of the eucharist was one reason he could not reach a consensus with Luther. The impact of Luther on Zwingli's theological development has long been a source of interest and discussion among Zwinglian scholars. Zwingli himself asserted vigorously his independence of Luther. The most recent studies have lent credibility to this claim, although some scholars still argue his theology was dependent upon Luther's. Zwingli appears to have read Luther's books in search of confirmation from Luther for his own views. Zwingli did, however, admire Luther greatly for the stand he took against the pope. This, more than Luther's theology, was a key influence on Zwingli's convictions as a reformer. 
What Zwingli considered Luther's courageous stance at the Leipzig Disputation had a decisive impact on Zwingli during his earliest years as a priest, and during this time Zwingli praised and promoted Luther's writings to support his own similar ideas. Like Luther, Zwingli was also a student and admirer of Augustine. His later writings continued to show characteristic differences from Luther such as the inclusion of non-Christians in heaven as described in "An Exposition of the Faith". Zwingli enjoyed music and could play several instruments, including the violin, harp, flute, dulcimer and hunting horn. He would sometimes amuse the children of his congregation with his lute and was so well known for his playing that his enemies mocked him as "the evangelical lute-player and fifer". Three of Zwingli's "Lieder" or hymns have been preserved: the "Pestlied" mentioned above, an adaptation of Psalm 65 (c. 1525), and the "Kappeler Lied", which is believed to have been composed during the campaign of the first war of Kappel (1529). These songs were not meant to be sung during worship services and are not identified as hymns of the Reformation, though they were published in some 16th-century hymnals. Zwingli criticised the practice of priestly chanting and monastic choirs. The criticism dates from 1523 when he attacked certain worship practices. His arguments are detailed in the Conclusions of 1525, in which Conclusions 44, 45 and 46 are concerned with musical practices under the rubric of "prayer". He associated music with images and vestments, all of which he felt diverted people's attention from true spiritual worship. It is not known what he thought of the musical practices in early Lutheran churches. Zwingli, however, eliminated instrumental music from worship in the church, stating that God had not commanded it in worship. The organist of the People's Church in Zürich is recorded as weeping upon seeing the great organ broken up. Although Zwingli did not express an opinion on congregational singing, he made no effort to encourage it. Nevertheless, scholars have found that Zwingli was supportive of a role for music in the church. Gottfried W. Locher writes, "The old assertion 'Zwingli was against church singing' holds good no longer ... Zwingli's polemic is concerned exclusively with the medieval Latin choral and priestly chanting and not with the hymns of evangelical congregations or choirs". Locher goes on to say that "Zwingli freely allowed vernacular psalm or choral singing. In addition, he even seems to have striven for lively, antiphonal, unison recitative". Locher then summarizes his comments on Zwingli's view of church music as follows: "The chief thought in his conception of worship was always 'conscious attendance and understanding'—'devotion', yet with the lively participation of all concerned". The present-day Musikabteilung (music department), located in the choir of the "Predigern" church in Zürich, was founded in 1971 and is a scholarly music collection of European importance. It publishes the materials entrusted to it at irregular intervals on CD, with a repertoire ranging from the early 16th-century sacred music of Huldrych Zwingli's time to the late 20th century, under the label "Musik aus der Zentralbibliothek Zürich". Zwingli was a humanist and a scholar with many devoted friends and disciples. He communicated as easily with the ordinary people of his congregation as with rulers such as Philip of Hesse. 
His reputation as a stern, stolid reformer is counterbalanced by the fact that he had an excellent sense of humour and used satiric fables, spoofing, and puns in his writings. He was more conscious of social obligations than Luther and he genuinely believed that the masses would accept a government guided by God's word. He tirelessly promoted assistance to the poor, who he believed should be cared for by a truly Christian community. In December 1531, the Zürich council selected Heinrich Bullinger as his successor. He immediately removed any doubts about Zwingli's orthodoxy and defended him as a prophet and a martyr. During Bullinger's rule, the confessional divisions of the Confederation were stabilised. He rallied the reformed cities and cantons and helped them to recover from the defeat at Kappel. Zwingli had instituted fundamental reforms, while Bullinger consolidated and refined them. Scholars have found it difficult to assess Zwingli's impact on history, for several reasons. There is no consensus on the definition of "Zwinglianism"; by any definition, Zwinglianism evolved under his successor, Heinrich Bullinger; and research into Zwingli's influence on Bullinger and John Calvin is still rudimentary. Bullinger adopted most of Zwingli's points of doctrine. Like Zwingli, he summarised his theology several times, the best-known being the Second Helvetic Confession of 1566. Meanwhile, Calvin had taken over the Reformation in Geneva. Calvin differed with Zwingli on the eucharist and criticised him for regarding it as simply a metaphorical event. In 1549, however, Bullinger and Calvin succeeded in overcoming the differences in doctrine and produced the "Consensus Tigurinus" (Zürich Consensus). They declared that the eucharist was not just symbolic of the meal, but they also rejected the Lutheran position that the body and blood of Christ is in union with the elements. With this rapprochement, Calvin established his role in the Swiss Reformed Churches and eventually in the wider world. Outside of Switzerland, no church counts Zwingli as its founder. Scholars speculate as to why Zwinglianism has not diffused more widely, even though Zwingli's theology is considered the first expression of Reformed theology. Although his name is not widely recognised, Zwingli's legacy lives on in the basic confessions of the Reformed churches of today. He is often called, after Martin Luther and John Calvin, the "Third Man of the Reformation". Zwingli's collected works are expected to fill 21 volumes. A collection of selected works was published in 1995 by the "Zwingliverein" in collaboration with the "Theologischer Verlag Zürich" This four-volume collection contains the following works: The complete 21-volume edition is being undertaken by the "Zwingliverein" in collaboration with the "Institut für schweizerische Reformationsgeschichte", and is projected to be organised as follows: Vols. XIII and XIV have been published, vols. XV and XVI are under preparation. Vols. XVII to XXI are planned to cover the New Testament. Older German / Latin editions available online include: See also the following English translations of selected works by Zwingli: Holden Holden, formally known as General Motors-Holden, is an Australian automobile importer and a former automobile manufacturer with its headquarters in Port Melbourne, Victoria. The company was founded in 1856 as a saddlery manufacturer in South Australia. 
In 1908 it moved into the automotive field, becoming a subsidiary of the United States-based General Motors (GM) in 1931, when the company was renamed General Motors-Holden's Ltd. It was renamed Holden Ltd in 1998, and General Motors Holden in 2005. Holden sells the remaining stock of the locally produced range of Commodore vehicles, and imported GM models. Holden has offered badge engineered models in sharing arrangements with Chevrolet, Isuzu, Nissan, Opel, Suzuki, Toyota and Vauxhall Motors. In 2013, the vehicle lineup consisted of models from GM Korea, GM Thailand, GM in the US, and the self-developed Commodore, Caprice, and Ute. Holden also distributed the European Opel brand in Australia in 2012 until its Australian demise in mid-2013. From 1994 to 2017, all Australian-built Holden vehicles were manufactured in Elizabeth, South Australia, and engines were produced at the Fishermans Bend plant in Melbourne. Historically, production or assembly plants were operated in all mainland states of Australia. The consolidation of car production at Elizabeth was completed in 1988, but some assembly operations continued at Dandenong until 1994. General Motors assembly plants were operated in New Zealand from 1926 until 1990 by General Motors New Zealand Limited in an earlier and quite separate operation from Holden in Australia. Although Holden's involvement in exports has fluctuated since the 1950s, the declining sales of large cars in Australia led the company to look to international markets to increase profitability. From 2010 Holden incurred losses due to the strong Australian dollar, and reductions of government grants and subsidies. This led to the announcement on 11 December 2013 that Holden would cease vehicle and engine production by the end of 2017. However, the company will continue in Australia, importing and selling cars as a national sales company. Holden will retain their design centre, but with reduced staffing. On 20 October 2017, the last existing vehicle plant, located in Elizabeth, South Australia, was closed. Holden continues solely as an importer of vehicles. In 1852, James Alexander Holden emigrated to South Australia from Walsall, England, and in 1856 established "J.A. Holden & Co", a saddlery business in Adelaide. In 1879 J A Holden's eldest son, Henry James (HJ) Holden, became a partner and effectively managed the company. In 1885, German-born H. A. Frost joined the business as a junior partner and J.A. Holden & Co became "Holden & Frost Ltd". Edward Holden, James' grandson, joined the firm in 1905 with an interest in automobiles. From there, the firm evolved through various partnerships and, in 1908, Holden & Frost moved into the business of minor repairs to car upholstery. The company began to re-body older chassis using motor bodies produced by F T Hack and Co from 1914. Holden & Frost mounted the body, painted and trimmed it. The company began to produce complete motorcycle sidecar bodies after 1913. After 1917, wartime trade restrictions led the company to start full-scale production of vehicle body shells. H.J. Holden founded a new company in late 1917, and registered "Holden's Motor Body Builders Ltd" (HMBB) on 25 February 1919, specialising in car bodies and using the former F T Hack & Co facility at 400 King William Street in Adelaide before erecting a large four-storey factory on the site. By 1923, HMBB were producing 12,000 units per year. During this time, HMBB assembled bodies for Ford Motor Company of Australia until its Geelong plant was completed. 
From 1924, HMBB became the exclusive supplier of car bodies for GM in Australia, with manufacturing taking place at the new Woodville plant. These bodies were made to suit a number of chassis imported from manufacturers such as Chevrolet and Dodge. In 1926 General Motors (Australia) was established with assembly plants at Newstead, Queensland; Marrickville, New South Wales; City Road, Melbourne; Birkenhead, South Australia; and Cottesloe, Western Australia using bodies produced by Holden Motor Body Builders and imported complete knock down (CKD) chassis. In 1930 alone, the still independent Woodville plant built bodies for Austin, Chrysler, DeSoto, Morris, Hillman, Humber, Hupmobile and Willys-Overland as well GM cars. The last of this line of business was the assembly of Hillman Minx sedans in 1948. The Great Depression led to a substantial downturn in production by Holden, from 34,000 units annually in 1930 to just 1,651 units one year later. In 1931 General Motors purchased Holden Motor Body Builders and merged it with General Motors (Australia) Pty Ltd to form General Motors-Holden's Ltd (GM-H). Throughout the 1920s Holden also supplied tramcars to the Melbourne & Metropolitan Tramways Board, of which several examples have been preserved in both Australia and New Zealand. Holden's second full-scale car factory, located in Fishermans Bend (Port Melbourne), was completed in 1936, with construction beginning in 1939 on a new plant in Pagewood, New South Wales. However, World War II delayed car production with efforts shifted to the construction of vehicle bodies, field guns, aircraft and engines. Before the war ended, the Australian Government took steps to encourage an Australian automotive industry. Both GM and Ford provided studies to the Australian Government outlining the production of the first Australian-designed car. Ford's proposal was the government's first choice, but required substantial financial assistance. GM's study was ultimately chosen because of its low level of government intervention. After the war, Holden returned to producing vehicle bodies, this time for Buick, Chevrolet, Pontiac and Vauxhall. The Oldsmobile Ace was also produced from 1946 to 1948. From here, Holden continued to pursue the goal of producing an Australian car. This involved compromise with GM, as Holden's managing director, Laurence Hartnett, favoured development of a local design, while GM preferred to see an American design as the basis for "Australia's Own Car". In the end, the design was based on a previously rejected post-war Chevrolet proposal. The Holden was launched in 1948, creating long waiting lists extending through 1949 and beyond. The name "Holden" was chosen in honour of Sir Edward Holden, the company's first chairman and grandson of J.A. Holden. Other names considered were "GeM", "Austral", "Melba", "Woomerah", "Boomerang", "Emu" and "Canbra", a phonetic spelling of Canberra. Although officially designated "48-215", the car was marketed simply as the "Holden". The unofficial usage of the name "FX" originated within Holden, referring to the updated suspension on the 48-215 of 1953. During the 1950s, Holden dominated the Australian car market. GM invested heavily in production capacity, which allowed the company to meet increased post-war demand for motor cars. Less expensive four-cylinder cars did not offer Holden's ability to deal with rugged rural areas. 
48-215 sedans were produced in parallel with the 50-2106 coupé utility from 1951; the latter was known colloquially as the "ute" and became ubiquitous in Australian rural areas as the workhorse of choice. Production of both the utility and sedan continued with minor changes until 1953, when they were replaced by the facelifted FJ model, introducing a third panel van body style. The FJ was the first major change to the Holden since its 1948 introduction. Over time it gained iconic status and remains one of Australia's most recognisable automotive symbols. A new horizontally slatted grille dominated the front-end of the FJ, which received various other trim and minor mechanical revisions. In 1954 Holden began exporting the FJ to New Zealand. Although little changed from the 48-215, marketing campaigns and price cuts kept FJ sales steady until a completely redesigned model was launched. At the 2005 Australian International Motor Show in Sydney, Holden paid homage to the FJ with the Efijy concept car. Holden's next model, the FE, launched in 1956; offered in a new station wagon body style dubbed "Station Sedan" in the company's sales literature. In the same year Holden commenced exports to Malaya, Thailand and North Borneo. Strong sales continued in Australia, and Holden achieved a market share of more than 50 percent in 1958 with the revised FC model. This was the first Holden to be tested on the new "Holden Proving Ground" based in Lang Lang, Victoria. 1957 saw Holden's export markets grow to 17 countries, with new additions including Indonesia, Hong Kong, Singapore, Fiji, Sudan, the East Africa region and South Africa. Indonesian market cars were assembled locally by P.T. Udatin. The opening of the Dandenong, Victoria, production facility in 1956 brought further jobs; by 1959 Holden employed 19,000 workers country-wide. In 1959 complete knock down assembly began in South Africa and Indonesia. In 1960, Holden introduced its third major new model, the FB. The car's style was inspired by 1950s Chevrolets, with tailfins and a wrap-around windshield with "dog leg" A-pillars. By the time it was introduced, many considered the appearance dated. Much of the motoring industry at the time noted that the adopted style did not translate well to the more compact Holden. The FB became the first Holden that was adapted for left-hand-drive markets, enhancing its export potential, and as such was exported to New Caledonia, New Hebrides, the Philippines, and Hawaii. In 1960, Ford unveiled the new Falcon in Australia, only months after its introduction in the United States. To Holden's advantage, the Falcon was not durable, particularly in the front suspension, making it ill-suited for Australian conditions. In response to the Falcon, Holden introduced the facelifted EK series in 1961; the new model featured two-tone paintwork and optional "Hydramatic" automatic transmission. A restyled EJ series came in 1962, debuting the new luxury oriented Premier model. The EH update came a year later bringing the new "Red" motor, providing better performance than the previous "Grey" motor. The HD series of 1965 saw the introduction of the "Powerglide" automatic transmission. At the same time, an "X2" performance option with a more powerful version of the six-cylinder engine was made available. In 1966, the HR was introduced, including changes in the form of new front and rear styling and higher-capacity engines. 
More significantly, the HR was fitted with front seat belts as standard; Holden thus became the first Australian automaker to provide the safety device as standard equipment across all models. This coincided with the completion of the production plant in Acacia Ridge, Queensland. By 1963, Holden was exporting cars to Africa, the Middle East, South-East Asia, the Pacific Islands, and the Caribbean. Holden began assembling the compact HA series Vauxhall Viva in 1964. This was superseded by the Holden Torana in 1967, a development of the Viva, ending Vauxhall production in Australia. In 1969, Holden offered the LC, a restyled Torana that added the availability of Holden's six-cylinder engine. During development, the six-cylinder Torana was reserved for motor racing, but research had shown that there was a business case for such a model. The LC Torana was the first application of Holden's new three-speed "Tri-Matic" automatic transmission. This was the result of Holden's A$16.5 million transformation of the Woodville, South Australia, factory for its production. Holden's association with the manufacture of Chevrolets and Pontiacs ended in 1968, coinciding with the year of Holden's next major new model, the HK. This included Holden's first V8 engine, a Chevrolet engine imported from Canada. Models based on the HK series included an extended-length prestige model, the Brougham, and a two-door coupé, the Monaro. The mainstream Holden Special was rebranded the Kingswood, and the basic fleet model, the Standard, became the Belmont. On 3 March 1969 Alexander Rhea, managing director of General Motors-Holden's at the time, was joined by press photographers and the Federal Minister of Shipping and Transport, Ian Sinclair, as the two men drove the two millionth Holden, an HK Brougham, off the production line. This came just over half a decade since the one millionth car, an EJ Premier sedan, had rolled off the Dandenong line on 25 October 1962. Following the Chevrolet V8 fitted to the HK, the first Australian-designed and mass-produced V8, the Holden V8 engine, debuted in the Hurricane concept of 1969 before fitment to the facelifted HT model. It was offered in two capacities. Late in HT production, use of the new "Tri-Matic" automatic transmission, first seen in the LC Torana, was phased in as "Powerglide" stock was exhausted, but Holden's official line was that the HG of 1971 was the first full-size Holden to receive it. Despite the arrival of serious competitors—namely, the Ford Falcon, Chrysler Valiant, and Japanese cars—in the 1960s, Holden's locally produced large six- and eight-cylinder cars remained Australia's top-selling vehicles. Sales were boosted by exporting the Kingswood sedan, station wagon, and utility body styles to Indonesia, Trinidad and Tobago, Pakistan, the Philippines, and South Africa in complete knock down form. Holden launched the new HQ series in 1971. At this time, the company was producing all of its passenger cars in Australia, and every model was of Australian design; however, by the end of the decade, Holden was producing cars based on overseas designs. The HQ was thoroughly re-engineered, featuring a perimeter frame and semi-monocoque (unibody) construction. Other firsts included an all-coil suspension and an extended wheelbase for station wagons, while the utilities and panel vans retained the traditional coil/leaf suspension configuration. The series included the new prestige Statesman brand, which also had a longer wheelbase, replacing the Brougham. 
The Statesman remains noteworthy because it was not marketed as a "Holden", but rather as a "Statesman". The HQ framework led to a new generation of two-door Monaros, and, despite the introduction of similarly sized competitors, the HQ range became the top-selling Holden of all time, with 485,650 units sold in three years. 14,558 units were exported and 72,290 CKD kits were constructed. The HQ series was facelifted in 1974 with the introduction of the HJ, heralding new front panel styling and a revised rear fascia. This new bodywork was to remain, albeit with minor upgrades, through the HX and HZ series. Detuned engines adhering to government emission standards were brought in with the HX series, whilst the HZ brought considerably improved road handling and comfort with the introduction of "Radial Tuned Suspension" (RTS). As a result of GM's toying with the Wankel rotary engine, as used by Mazda of Japan, an export agreement was initiated in 1975. This involved Holden exporting HJ, and later HX, series Premiers without powertrains, to be sold as the Mazda Roadpacer AP. Mazda then fitted these cars with the "13B" rotary engine and three-speed automatic transmission. Production ended in 1977, after just 840 units sold. During the 1970s, Holden ran an advertising jingle, "Football, Meat Pies, Kangaroos and Holden cars", based on the "Baseball, Hot Dogs, Apple Pies and Chevrolet" jingle used by Chevrolet in the United States. Also, development of the Torana continued with the larger mid-sized LH series released in 1974, offered only as a four-door sedan. The LH Torana was one of the few cars worldwide engineered to accommodate four-, six- and eight-cylinder engines. This trend continued until Holden introduced the Sunbird in 1976, essentially the four-cylinder Torana with a new name. Designated LX, both the Sunbird and Torana introduced a three-door hatchback variant. A final UC update appeared in 1978. During its production run, the Torana achieved legendary racing success in Australia, achieving victories at the Mount Panorama Circuit in Bathurst, New South Wales. In 1975, Holden introduced the compact Gemini, the Australian version of the "T-car", based on the Opel Kadett C. The Gemini was an overseas design developed jointly with Isuzu, GM's Japanese affiliate, and was powered by a 1.6-litre four-cylinder engine. Fast becoming a popular car, the Gemini rapidly attained sales leadership in its class, and the nameplate lived on until 1987. Holden's most popular car to date, the Commodore, was introduced in 1978 as the VB. The new family car was loosely based on the Opel Rekord E body shell, but with the front from the Opel Senator grafted on to accommodate the larger Holden six-cylinder and V8 engines. Initially, the Commodore maintained Holden's sales leadership in Australia. However, some of the compromises resulting from the adoption of a design intended for another market hampered the car's acceptance. In particular, it was narrower than its predecessor and its Falcon rival, making it less comfortable for three rear-seat passengers. With the abandonment of left-hand drive markets, Holden exported almost 100,000 Commodores to markets such as New Zealand, Thailand, Hong Kong, Malaysia, Indonesia, Malta and Singapore. Holden discontinued the Torana in 1979 and the Sunbird in 1980. After the 1978 introduction of the Commodore, the Torana became the "in-between" car, surrounded by the smaller and more economical Gemini and the larger, more sophisticated Commodore. 
The closest successor to the Torana was the Camira, released in 1982 as Australia's version of GM's medium-sized "J-car". The 1980s were challenging for Holden and the Australian automotive industry. The Australian Government tried to revive the industry with the Button car plan, which encouraged car makers to focus on producing fewer models at higher, more economical volumes, and to export cars. The decade opened with the shut-down of the Pagewood, New South Wales, production plant and the introduction of the light commercial Rodeo, sourced from Isuzu in Japan. The Rodeo was available in both two- and four-wheel drive chassis cab models with a choice of petrol and diesel powerplants. The range was updated in 1988 with the TF series, based on the Isuzu TF. Other cars sourced from Isuzu during the 1980s were the four-wheel drive Jackaroo (1981), the Shuttle (1982) van and the Piazza (1986) three-door sports hatchback. The second generation Holden Gemini from 1985 was also based on an Isuzu design, although its manufacture was undertaken in Australia. While GM Australia's commercial vehicle range had originally been mostly based on Bedford products, these had gradually been replaced by Isuzu products. This process began in the 1970s and by 1982 Holden's commercial vehicle arm no longer offered any Bedford products. The new Holden WB commercial vehicles and the Statesman WB limousines were introduced in 1980. However, the designs, based on the HQ and the updated HJ, HX and HZ models from the 1970s, were less competitive than similar models in Ford's lineup. Thus, Holden abandoned those vehicle classes altogether in 1984. Sales of the Commodore also fell, with the effects of the 1979 energy crisis lessening, and for the first time the Commodore lost ground to the Ford Falcon. Sales in other segments also suffered when competition from Ford intensified, and other Australian manufacturers (Mitsubishi, Nissan and Toyota) gained market share. When released in 1982, the Camira initially generated good sales, which later declined because buyers considered the 1.6-litre engine underpowered, and the car's build and ride quality below-average. The Camira lasted just seven years, and contributed to Holden's accumulated losses of over A$500 million by the mid-1980s. In 1984, Holden introduced the VK Commodore, with significant styling changes from the previous VH. The Commodore was next updated in 1986 as the VL, which had new front and rear styling. Controversially, the VL was powered by the 3.0-litre Nissan "RB30" six-cylinder engine and had a Nissan-built, electronically controlled four-speed automatic transmission. Holden even went to court in 1984 to stop local motoring magazine Wheels from reporting on the matter. The engine change was necessitated by the legal requirement that all new cars sold in Australia after 1986 had to consume unleaded petrol. Because it was not feasible to convert the existing six-cylinder engine to run on unleaded fuel, the Nissan engine was chosen as the best engine available. However, changing exchange rates doubled the cost of the engine and transmission over the life of the VL. The decision to opt for a Japanese-made transmission led to the closure of the Woodville, South Australia, assembly plant. Encouraged by apparent signs of a turnaround, GM paid off Holden's accumulated losses of A$780 million on 19 December 1986. At GM headquarters' request, Holden was then reorganised and recapitalised, separating the engine and car manufacturing divisions in the process. 
This involved the splitting of Holden into "Holden's Motor Company" (HMC) and "Holden's Engine Company" (HEC). For the most part, car bodies were now manufactured at Elizabeth, South Australia, with engines as before, confined to the Fishermans Bend plant in Port Melbourne, Victoria. The engine manufacturing business was successful, building four-cylinder "Family II" engines for use in cars built overseas. The final phase of the Commodore's recovery strategy involved the 1988 VN, a significantly wider model powered by the American-designed, Australian-assembled 3.8-litre Buick V6 engine. Holden began to sell the subcompact Suzuki Swift-based Barina in 1985. The Barina was launched concurrently with the Suzuki-sourced Holden Drover, followed by the Scurry later on in 1985. In the previous year, Nissan Pulsar hatchbacks were rebadged as the Holden Astra, as a result of a deal with Nissan. This arrangement ceased in 1989 when Holden entered a new alliance with Toyota, forming a new company: United Australian Automobile Industries (UAAI). UAAI resulted in Holden selling rebadged versions of Toyota's Corolla and Camry, as the Holden Nova and Apollo respectively, with Toyota re-branding the Commodore as the Lexcen. The company changed throughout the 1990s, increasing its Australian market share from 21 percent in 1991 to 28.2 percent in 1999. Besides manufacturing Australia's best selling car, which was exported in significant numbers, Holden continued to export many locally produced engines to power cars made elsewhere. In this decade, Holden adopted a strategy of importing cars it needed to offer a full range of competitive vehicles. During 1998, General Motors-Holden's Ltd name was shortened to "Holden Ltd". On 26 April 1990, GM's New Zealand subsidiary Holden New Zealand announced that production at the assembly plant based in Trentham would be phased out and vehicles would be imported duty-free—this came after the 1984 closure of the Petone assembly line due to low output volumes. During the 1990s, Holden, other Australian automakers and trade unions pressured the Australian Government to halt the lowering of car import tariffs. By 1997, the federal government had already cut tariffs to 22.5 percent, from 57.5 percent ten years earlier; by 2000, a plan was formulated to reduce the tariffs to 15 percent. Holden was critical, saying that Australia's population was not large enough, and that the changes could tarnish the local industry. Holden re-introduced its defunct Statesman title in 1990—this time under the Holden marque, as the Statesman and Caprice. For 1991, Holden updated the Statesman and Caprice with a range of improvements, including the introduction of four-wheel anti-lock brakes (ABS); although, a rear-wheel system had been standard on the Statesman Caprice from March 1976. ABS was added to the short-wheelbase Commodore range in 1992. Another returning variant was the full-size utility, and on this occasion it was based on the Commodore. The VN Commodore received a major facelift in 1993 with the VR—compared to the VN, approximately 80 percent of the car model was new. Exterior changes resulted in a smoother overall body and a "twin-kidney" grille—a Commodore styling trait that remained until the 2002 VY model and, as of 2013, remains a permanent staple on HSV variants. Holden introduced the all-new VT Commodore in 1997, the outcome of a A$600 million development programme that spanned more than five years. 
The new model featured a rounded exterior body shell, improved dynamics and many firsts for an Australian-built car. Also, a stronger body structure increased crash safety. The locally produced Buick-sourced V6 engine powered the Commodore range, as did the 5.0-litre Holden V8 engine, which was replaced in 1999 by the 5.7-litre "LS" unit. The UAAI badge-engineered cars first introduced in 1989 sold in far fewer numbers than anticipated, but the Holden Commodore, Toyota Camry, and Corolla were all successful when sold under their original nameplates. The first generation Nova and the donor Corolla were produced at Holden's Dandenong, Victoria, facility until 1994. UAAI was dissolved in 1996, and Holden returned to selling only GM products. The Holden Astra and Vectra, both designed by Opel in Germany, replaced the Toyota-sourced Holden Nova and Apollo. This came after the 1994 introduction of the Opel Corsa replacing the already available Suzuki Swift as the source for the Holden Barina. Sales of the full-size Holden Suburban SUV sourced from Chevrolet commenced in 1998—lasting until 2001. Also in 1998, local assembly of the Vectra began at Elizabeth, South Australia. These cars were exported to Japan and Southeast Asia with Opel badges. However, the Vectra did not achieve sufficient sales in Australia to justify local assembly, and reverted to being fully imported in 2000. Holden's market surge from the 1990s reversed in the 2000s. In Australia, Holden's market share dropped from 27.5 percent in 2000 to 15.2 percent in 2006. From March 2003, Holden no longer held the number one sales position in Australia, losing ground to Toyota. This overall downturn affected Holden's profits; the company recorded a combined gain of A$842.9 million from 2002 to 2004, and a combined loss of A$290 million from 2005 to 2006. Factors contributing to the loss included the development of an all-new model, the strong Australian dollar and the cost of reducing the workforce at the Elizabeth plant, including the loss of 1,400 jobs after the closure of the third-shift assembly line in 2005, after two years in operation. Holden fared better in 2007, posting an A$6 million loss. This was followed by an A$70.2 million loss in 2008, an A$210.6 million loss in 2009, and a profit of A$112 million in 2010. On 18 May 2005, "Holden Ltd" became "GM Holden Ltd", coinciding with the relocation to the new Holden headquarters at 191 Salmon Street, Port Melbourne, Victoria. Holden caused controversy in 2005 with its Holden Employee Pricing television advertisement, which ran from October to December 2005. The campaign publicised, "for the first time ever, all Australians can enjoy the financial benefit of Holden Employee Pricing". However, this did not include the discounted dealer delivery fee or the savings on factory-fitted options and accessories that employees received. At the same time, employees were given a further discount of 25 to 29 percent on selected models. Holden revived the Monaro coupe in 2001. Based on the VT Commodore architecture, the coupe attracted worldwide attention after being shown as a concept car at Australian auto shows. The VT Commodore received its first major update in 2002 with the VY series. A mildly facelifted VZ model launched in 2004, introducing the "High Feature" engine. This was built at the Fishermans Bend facility completed in 2003, with a maximum output of 900 engines per day. 
This has reportedly added A$5.2 billion to the Australian economy; exports alone account for about A$450 million. After the VZ, the "High Feature" engine powered the all-new Holden Commodore (VE). In contrast to previous models, the VE no longer used an Opel-sourced platform adapted both mechanically and in size, but was based on the GM Zeta platform that was earmarked to become a "Global RWD Architecture", until plans were cancelled due to the 2007/08 global financial crisis. Throughout the 1990s, Opel had also been the source of many Holden models. To increase profitability, Holden looked to the South Korean Daewoo brand for replacements after acquiring a 44.6 percent stake—worth US$251 million—in the company in 2002 as a representative of GM. This was increased to 50.9 percent in 2005, but when GM further increased its stake to 70.1 percent around the time of its 2009 Chapter 11 reorganisation, Holden's interest was relinquished and transferred to another (undisclosed) part of GM. The Holden-branded Daewoo models commenced with the 2005 Holden Barina which, based on the Daewoo Kalos, replaced the Opel Corsa as the source of the Barina. In the same year, the Viva, based on the Daewoo Lacetti, replaced the entry-level Holden Astra Classic, although the new-generation Astra introduced in 2004 continued on. The Captiva crossover SUV came next in 2006. After discontinuing the Frontera and Jackaroo models in 2003, Holden was only left with one all-wheel drive model: the Adventra, a Commodore-based station wagon. The fourth model to be replaced with a South Korean alternative was the Vectra, superseded by the mid-size Epica in 2007. As a result of the split between GM and Isuzu, Holden lost the rights to use the "Rodeo" nameplate. Consequently, the Holden Rodeo was facelifted and relaunched as the Colorado in 2008. Following Holden's successful application for an A$149 million government grant to build a localised version of the Chevrolet Cruze in Australia from 2011, Holden in 2009 announced that it would initially import the small car unchanged from South Korea as the Holden Cruze. Following the government grant announcement, Kevin Rudd, Australia's Prime Minister at the time, stated that production would support 600 new jobs at the Elizabeth facility; however, this failed to take into account Holden's previous announcement, whereby 600 jobs would be shed when production of the "Family II" engine ceased in late 2009. In mid-2013, Holden sought a further A$265 million, in addition to the A$275 million that was already committed by the governments of Canberra, South Australia and Victoria, to remain viable as a car manufacturer in Australia. A source close to Holden informed the "Australian" news publication that the car company was losing money on every vehicle it produced and had consequently initiated negotiations to reduce employee wages by up to A$200 per week to cut costs, following the announcement of 400 job cuts and an assembly line reduction of 65 (400 to 335) cars per day. From 2001 to 2012, Holden received over A$150 million a year in subsidies from the Australian Government. From 2007, these subsidies exceeded Holden's capital investment over the same period. From 2004 onwards, Holden made a profit only in 2010 and 2011. In March 2012, Holden was given a $270 million lifeline by the Gillard federal government and the Weatherill and Baillieu state governments. In return, Holden planned to inject over $1 billion into car manufacturing in Australia. 
They estimated the new investment package would return around $4 billion to the Australian economy and see GM Holden continue making cars in Australia until at least 2022. Industry Minister Kim Carr confirmed on 10 July 2013 that talks had been scheduled between the Australian government and Holden. On 13 August 2013, 1,700 employees at the Elizabeth plant in northern Adelaide voted to accept a three-year wage freeze in order to decrease the chances of the production line's closure in 2016. Holden's ultimate survival, though, depended on continued negotiations with the Federal Government—to secure funding for the period from 2016 to 2022—and the final decision of the global headquarters in Detroit, US. Following an unsuccessful attempt to secure the extra funding required from the new Liberal/National coalition government, on 10 December 2013, General Motors announced that Holden would cease engine and vehicle manufacturing operations in Australia by the end of 2017. As a result, 2,900 jobs would be lost over four years. Beyond 2017 Holden's Australian presence will consist of: a national sales company, a parts distribution centre and a global design studio. In May 2014 GM reversed their decision to abandon the Lang Lang Proving Ground and decided to keep it as part of their engineering capability in Australia. In 2015, Holden again began selling a range of Opel-derived cars comprising the Astra VXR and Insignia VXR (both based on the OPC models sold by Vauxhall) and Cascada. Later that year, Holden also announced plans to sell the European Astra and the Korean Cruze alongside each other from 2017. In December 2015, Belgian entrepreneur Guido Dumarey commenced negotiations to buy the Commodore manufacturing plant in South Australia, with a view to continue producing a rebadged Zeta-based premium range of rear and all-wheel drive vehicles for local and export sales. The proposal was met with doubt in South Australia, and it later came to nothing. On 20 October 2017 it ceased manufacturing vehicles in Australia. On 8 May 2015 Jeff Rolfs, Holden's CFO, became interim chairman and managing director. Holden announced on 6 February 2015 that Mark Bernhard would return to Holden as chairman and managing director, the first Australian to hold the post in 25 years. In 2010 vehicles were sold countrywide through the Holden Dealer Network (310 authorised stores and 12 service centres), which employed more than 13,500 people. In 1987, Holden Special Vehicles (HSV) was formed in partnership with Tom Walkinshaw, who primarily manufactured modified, high-performance Commodore variants. To further reinforce the brand, HSV introduced the HSV Dealer Team into the V8 Supercar fold in 2005 under the naming rights of Toll HSV Dealer Team. The logo, or "Holden lion and stone" as it is known, has played a vital role in establishing Holden's identity. In 1928, Holden's Motor Body Builders appointed Rayner Hoff to design the emblem. The logo refers to a prehistoric fable, in which observations of lions rolling stones led to the invention of the wheel. With the 1948 launch of the 48-215, Holden revised its logo and commissioned another redesign in 1972 to better represent the company. The emblem was reworked once more in 1994. Holden began to export vehicles in 1954, sending the FJ to New Zealand. Exports to New Zealand continued, but to broaden their export potential, Holden began to cater their Commodore, Monaro and Statesman/Caprice models for both right- and left-hand drive markets. 
The Middle East was Holden's largest export market, with the Commodore sold as the Chevrolet Lumina from 1998, and the Statesman from 1999 as the Chevrolet Caprice. Commodores were also sold as the Chevrolet Lumina in Brunei, Fiji and South Africa, and as the Chevrolet Omega in Brazil. Pontiac in North America also imported Commodore sedans from 2008 through to 2009 as the G8. The G8's cessation was a consequence of GM's Chapter 11 bankruptcy resulting in the demise of the Pontiac brand. Sales of the Monaro to the Middle East began in 2003, as the Chevrolet Lumina Coupe. Later that year, a modified version of the Monaro began selling in the United States (but not Canada) as the Pontiac GTO, and under the Monaro name through Vauxhall dealerships in the United Kingdom. This arrangement continued through to 2005 when the car was discontinued. Sales of the long-wheelbase Statesman in the Chinese market as the Buick Royaum began in 2005, before it was replaced in 2007 by the Statesman-based Buick Park Avenue. Statesman/Caprice exports to South Korea also began in 2005. These Korean models were sold as the Daewoo Statesman, and later as the Daewoo Veritas from 2008. Holden's move into international markets was profitable; export revenue increased from A$973 million in 1999 to just under $1.3 billion in 2006. From 2011 the WM Caprice was exported to North America as the Chevrolet Caprice PPV, a version of the Caprice built exclusively for North American law enforcement and sold only to police. From 2007, the HSV-based Commodore was exported to the United Kingdom as the Vauxhall VXR8. In 2013, it was announced that exports of the Commodore would resume to North America in the form of the VF Commodore as the Chevrolet SS sedan for the 2014 model year. The Chevrolet SS sedan was imported to the United States (but again not to Canada) for 2015 with only minor changes, notably the addition of Magnetic Ride Control suspension and a Tremec TR-6060 manual transmission. For the 2016 model year, the SS sedan received a facelift based on the VF Series II Commodore unveiled in September 2015. In 2017, production of Holden's last two American exports, the SS and the Caprice PPV, was discontinued. Holden has been involved with factory-backed teams in Australian touring car racing since 1968. The main factory-backed teams have been the Holden Dealer Team (1969–1987) and the Holden Racing Team (1990–2016). Since 2017, Triple Eight Race Engineering has been Holden's factory team. Holden has won the Bathurst 1000 32 times, more than any other manufacturer, and has won the Australian Touring Car and Supercars Championship title 20 times. Brad Jones Racing, Charlie Schwerkolt Racing, Erebus Motorsport, Tekno Autosports and Walkinshaw Andretti United also run Holden Commodores in the series. Hippocrates Hippocrates of Kos, also known as Hippocrates II, was a Greek physician of the Age of Pericles (Classical Greece), and is considered one of the most outstanding figures in the history of medicine. He is often referred to as the "Father of Medicine" in recognition of his lasting contributions to the field as the founder of the Hippocratic School of Medicine. This intellectual school revolutionized medicine in ancient Greece, establishing it as a discipline distinct from other fields with which it had traditionally been associated (theurgy and philosophy), thus establishing medicine as a profession. 
However, the achievements of the writers of the Corpus, the practitioners of Hippocratic medicine and the actions of Hippocrates himself were often commingled; thus very little is known about what Hippocrates actually thought, wrote, and did. Hippocrates is commonly portrayed as the paragon of the ancient physician, and credited with coining the Hippocratic Oath, which is still relevant and in use today. He is also credited with greatly advancing the systematic study of clinical medicine, summing up the medical knowledge of previous schools, and prescribing practices for physicians through the Hippocratic Corpus and other works. Historians agree that Hippocrates was born around the year 460 BC on the Greek island of Kos; other biographical information, however, is likely to be untrue. Soranus of Ephesus, a 2nd-century Greek physician, was Hippocrates' first biographer and is the source of most personal information about him. Later biographies are in the "Suda" of the 10th century AD, and in the works of John Tzetzes, which date from the 12th century AD. Hippocrates is mentioned in passing in the writings of two contemporaries: Plato, in "Protagoras" and "Phaedrus", and Aristotle, in "Politics"; these works date from the 4th century BC. Soranus wrote that Hippocrates' father was Heraclides, a physician, and his mother was Praxitela, daughter of Tizane. The two sons of Hippocrates, Thessalus and Draco, and his son-in-law, Polybus, were his students. According to Galen, a later physician, Polybus was Hippocrates' true successor, while Thessalus and Draco each had a son named Hippocrates (Hippocrates III and IV). Soranus said that Hippocrates learned medicine from his father and grandfather (Hippocrates I), and studied other subjects with Democritus and Gorgias. Hippocrates was probably trained at the asklepieion of Kos, and took lessons from the Thracian physician Herodicus of Selymbria. Plato mentions Hippocrates in two of his dialogues: in "Protagoras", Plato describes Hippocrates as "Hippocrates of Kos, the Asclepiad"; while in "Phaedrus", Plato suggests that "Hippocrates the Asclepiad" thought that a complete knowledge of the nature of the body was necessary for medicine. Hippocrates taught and practiced medicine throughout his life, traveling at least as far as Thessaly, Thrace, and the Sea of Marmara. Several different accounts of his death exist. He died, probably in Larissa, at the age of 83, 85 or 90, though some say he lived to be well over 100. Hippocrates is credited with being the first person to believe that diseases were caused naturally, not by superstition and the gods. The disciples of Pythagoras credited Hippocrates with allying philosophy and medicine. He separated the discipline of medicine from religion, believing and arguing that disease was not a punishment inflicted by the gods but rather the product of environmental factors, diet, and living habits. Indeed, there is not a single mention of a mystical illness in the entirety of the Hippocratic Corpus. However, Hippocrates did work with many convictions that were based on what is now known to be incorrect anatomy and physiology, such as Humorism. Ancient Greek schools of medicine were split (into the Knidian and Koan) on how to deal with disease. The Knidian school of medicine focused on diagnosis. Medicine at the time of Hippocrates knew almost nothing of human anatomy and physiology because of the Greek taboo forbidding the dissection of humans. 
The Knidian school consequently failed to distinguish when one disease caused many possible series of symptoms. The Hippocratic school or Koan school achieved greater success by applying general diagnoses and passive treatments. Its focus was on patient care and prognosis, not diagnosis. It could effectively treat diseases and allowed for a great development in clinical practice. Hippocratic medicine and its philosophy are far removed from that of modern medicine. Now, the physician focuses on specific diagnosis and specialized treatment, both of which were espoused by the Knidian school. This shift in medical thought since Hippocrates' day has caused serious criticism over the past two millennia, with the passivity of Hippocratic treatment being the subject of particularly strong denunciations; for example, the French doctor M. S. Houdart called the Hippocratic treatment a "meditation upon death". Another important concept in Hippocratic medicine was that of a "crisis", a point in the progression of disease at which either the illness would begin to triumph and the patient would succumb to death, or the opposite would occur and natural processes would make the patient recover. After a crisis, a relapse might follow, and then another deciding crisis. According to this doctrine, crises tend to occur on "critical days", which were supposed to be a fixed time after the contraction of a disease. If a crisis occurred on a day far from a "critical day", a relapse might be expected. Galen believed that this idea originated with Hippocrates, though it is possible that it predated him. Hippocratic medicine was humble and passive. The therapeutic approach was based on "the healing power of nature" (""vis medicatrix naturae"" in Latin). According to this doctrine, the body contains within itself the power to re-balance the four humours and heal itself ("physis"). Hippocratic therapy focused on simply easing this natural process. To this end, Hippocrates believed "rest and immobilization [were] of capital importance." In general, the Hippocratic medicine was very kind to the patient; treatment was gentle, and emphasized keeping the patient clean and sterile. For example, only clean water or wine were ever used on wounds, though "dry" treatment was preferable. Soothing balms were sometimes employed. Hippocrates was reluctant to administer drugs and engage in specialized treatment that might prove to be wrongly chosen; generalized therapy followed a generalized diagnosis. Generalized treatments he prescribed include fasting and the consumption of a mix of honey and vinegar. Hippocrates once said that "to eat when you are sick, is to feed your sickness." However, potent drugs were used on certain occasions. This passive approach was very successful in treating relatively simple ailments such as broken bones which required traction to stretch the skeletal system and relieve pressure on the injured area. The Hippocratic bench and other devices were used to this end. One of the strengths of Hippocratic medicine was its emphasis on prognosis. At Hippocrates' time, medicinal therapy was quite immature, and often the best thing that physicians could do was to evaluate an illness and predict its likely progression based upon data collected in detailed case histories. Hippocratic medicine was notable for its strict professionalism, discipline, and rigorous practice. The Hippocratic work "On the Physician" recommends that physicians always be well-kempt, honest, calm, understanding, and serious. 
The Hippocratic physician paid careful attention to all aspects of his practice: he followed detailed specifications for "lighting, personnel, instruments, positioning of the patient, and techniques of bandaging and splinting" in the ancient operating room. He even kept his fingernails to a precise length. The Hippocratic School gave importance to the clinical doctrines of observation and documentation. These doctrines dictate that physicians record their findings and their medicinal methods in a very clear and objective manner, so that these records may be passed down and employed by other physicians. Hippocrates made careful, regular note of many symptoms including complexion, pulse, fever, pains, movement, and excretions. He is said to have measured a patient's pulse when taking a case history to discover whether the patient was lying. Hippocrates extended clinical observations into family history and environment. "To him medicine owes the art of clinical inspection and observation." For this reason, he may more properly be termed the "Father of Medicine". Hippocrates and his followers were the first to describe many diseases and medical conditions. He is given credit for the first description of clubbing of the fingers, an important diagnostic sign in chronic lung disease, lung cancer and cyanotic heart disease. For this reason, clubbed fingers are sometimes referred to as "Hippocratic fingers". Hippocrates was also the first physician to describe the Hippocratic face in "Prognosis". Shakespeare famously alludes to this description when writing of Falstaff's death in Act II, Scene iii of "Henry V". Hippocrates began to categorize illnesses as acute, chronic, endemic and epidemic, and use terms such as "exacerbation, relapse, resolution, crisis, paroxysm, peak, and convalescence." Another of Hippocrates' major contributions may be found in his descriptions of the symptomatology, physical findings, surgical treatment and prognosis of thoracic empyema, i.e. suppuration of the lining of the chest cavity. His teachings remain relevant to present-day students of pulmonary medicine and surgery. Hippocrates was the first documented chest surgeon and his findings and techniques, while crude, such as the use of lead pipes to drain chest wall abscesses, are still valid. The Hippocratic school of medicine described well the ailments of the human rectum and the treatment thereof, despite the school's poor theory of medicine. Hemorrhoids, for instance, though believed to be caused by an excess of bile and phlegm, were treated by Hippocratic physicians in relatively advanced ways. Cautery and excision are described in the Hippocratic Corpus, in addition to the preferred methods: ligating the hemorrhoids and drying them with a hot iron. Other treatments such as applying various salves are suggested as well. Today, "treatment [for hemorrhoids] still includes burning, strangling, and excising." Also, some of the fundamental concepts of proctoscopy outlined in the Corpus are still in use. For example, the uses of the rectal speculum, a common medical device, are discussed in the Hippocratic Corpus. This constitutes the earliest recorded reference to endoscopy. Hippocrates often used lifestyle modifications such as diet and exercise to treat diseases such as diabetes, an approach that is today called lifestyle medicine. 
He is often quoted as saying "Let food be your medicine, and medicine be your food" and "Walking is man's best medicine"; however, the quote "Let food be your medicine" appears to be a misquotation, and its exact origin remains unknown. The Hippocratic Corpus (Latin: "Corpus Hippocraticum") is a collection of around seventy early medical works collected in Alexandrian Greece. It is written in Ionic Greek. The question of whether Hippocrates himself was the author of any of the treatises in the corpus has not been conclusively answered, but current debate revolves around only a few of the treatises seen as potentially by him. Because of the variety of subjects, writing styles and apparent dates of composition, the Hippocratic Corpus could not have been written by one person (Ermerins numbers the authors at nineteen). The corpus came to be known by his name because of his fame, possibly because all medical works were classified under 'Hippocrates' by a librarian in Alexandria. The volumes were probably produced by his students and followers. The Hippocratic Corpus contains textbooks, lectures, research, notes and philosophical essays on various subjects in medicine, in no particular order. These works were written for different audiences, both specialists and laymen, and were sometimes written from opposing viewpoints; significant contradictions can be found between works in the Corpus. Notable among the treatises of the Corpus are "The Hippocratic Oath"; "The Book of Prognostics"; "On Regimen in Acute Diseases"; "On Airs, Waters and Places"; "Instruments of Reduction"; "On The Sacred Disease"; etc. The Hippocratic Oath, a seminal document on the ethics of medical practice, was attributed to Hippocrates in antiquity although new information shows it may have been written after his death. This is probably the most famous document of the Hippocratic Corpus. More recently, the document's authorship has come under scrutiny. While the Oath is rarely used in its original form today, it serves as a foundation for other, similar oaths and laws that define good medical practice and morals. Such derivatives are regularly taken today by medical graduates about to enter medical practice. Hippocrates is widely considered to be the "Father of Medicine". His contributions revolutionized the practice of medicine, but after his death the advancement of medicine stalled. So revered was Hippocrates that his teachings were largely taken as too great to be improved upon and no significant advancements of his methods were made for a long time. The centuries after Hippocrates' death were marked as much by retrograde movement as by further advancement. For instance, "after the Hippocratic period, the practice of taking clinical case-histories died out," according to Fielding Garrison. After Hippocrates, the next significant physician was Galen, a Greek who lived from AD 129 to AD 200. Galen perpetuated Hippocratic medicine, moving both forward and backward. In the Middle Ages, the Islamic world adopted Hippocratic methods and developed new medical technologies. After the European Renaissance, Hippocratic methods were revived in western Europe and even further expanded in the 19th century. Notable among those who employed Hippocrates' rigorous clinical techniques were Thomas Sydenham, William Heberden, Jean-Martin Charcot and William Osler. Henri Huchard, a French physician, said that these revivals make up "the whole history of internal medicine." 
According to Aristotle's testimony, Hippocrates was known as "The Great Hippocrates". Concerning his disposition, Hippocrates was first portrayed as a "kind, dignified, old country doctor" and later as "stern and forbidding". He is certainly considered wise, of very great intellect and, especially, very practical. Francis Adams describes him as "strictly the physician of experience and common sense." His image as the wise, old doctor is reinforced by busts of him, which wear large beards on a wrinkled face. Many physicians of the time wore their hair in the style of Jove and Asklepius. Accordingly, the busts of Hippocrates that have been found could be only altered versions of portraits of these deities. Hippocrates and the beliefs that he embodied are considered medical ideals. Fielding Garrison, an authority on medical history, stated, "He is, above all, the exemplar of that flexible, critical, well-poised attitude of mind, ever on the lookout for sources of error, which is the very essence of the scientific spirit." "His figure... stands for all time as that of the ideal physician," according to "A Short History of Medicine", inspiring the medical profession since his death. "The Travels of Sir John Mandeville" reports (incorrectly) that Hippocrates was the ruler of the islands of "Kos and Lango" [sic], and recounts a legend about Hippocrates' daughter. She was transformed into a hundred-foot long dragon by the goddess Diana, and is the "lady of the manor" of an old castle. She emerges three times a year, and will be turned back into a woman if a knight kisses her, making the knight into her consort and ruler of the islands. Various knights try, but flee when they see the hideous dragon; they die soon thereafter. This is a version of the legend of Melusine. Hippocrates' legendary genealogy traces his paternal heritage directly to Asklepius and his maternal ancestry to Heracles. According to Tzetzes's "Chiliades", the ahnentafel of Hippocrates II is: 1. Hippocrates II. "The Father of Medicine" 2. Heraclides 4. Hippocrates I. 8. Gnosidicus 16. 32. Sostratus III. 64. Theodorus II. 128. Sostratus II. 256. Theodorus 512. Cleomyttades 1024. Crisamis 2048. Dardanus 4096. Sostratus 8192. Hippolochus 16384. Podalirius 32768. Asklepius Some clinical symptoms and signs have been named after Hippocrates as he is believed to be the first person to describe them. Hippocratic face is the change produced in the countenance by death, or long sickness, excessive evacuations, excessive hunger, and the like. Clubbing, a deformity of the fingers and fingernails, is also known as Hippocratic fingers. Hippocratic succussion is the internal splashing noise of hydropneumothorax or pyopneumothorax. Hippocratic bench (a device which uses tension to aid in setting bones) and Hippocratic cap-shaped bandage are two devices named after Hippocrates. Hippocratic Corpus and Hippocratic Oath are also his namesakes. The drink hypocras is also believed to have been invented by Hippocrates. Risus sardonicus, a sustained spasming of the facial muscles, may also be termed the Hippocratic Smile. The most severe form of hair loss and baldness is called the Hippocratic form. In the modern age, a lunar crater has been named Hippocrates. The Hippocratic Museum, a museum on the Greek island of Kos, is dedicated to him. The Hippocrates Project is a program of the New York University Medical Center to enhance education through use of technology. 
Project Hippocrates (an acronym of "HIgh PerfOrmance Computing for Robot-AssisTEd Surgery") is an effort of the Carnegie Mellon School of Computer Science and Shadyside Medical Center, "to develop advanced planning, simulation, and execution technologies for the next generation of computer-assisted surgical robots." Both the Canadian Hippocratic Registry and American Hippocratic Registry are organizations of physicians who uphold the principles of the original Hippocratic Oath as inviolable through changing social times. Hydrus Hydrus is a small constellation in the deep southern sky. It was one of twelve constellations created by Petrus Plancius from the observations of Pieter Dirkszoon Keyser and Frederick de Houtman and it first appeared on a 35-cm (14 in) diameter celestial globe published in late 1597 (or early 1598) in Amsterdam by Plancius and Jodocus Hondius. The first depiction of this constellation in a celestial atlas was in Johann Bayer's Uranometria of 1603. The French explorer and astronomer Nicolas Louis de Lacaille charted the brighter stars and gave their Bayer designations in 1756. Its name means "male water snake", as opposed to Hydra, a much larger constellation that represents a female water snake. It remains below the horizon for most Northern Hemisphere observers. The brightest star is the 2.8-magnitude Beta Hydri, also the closest reasonably bright star to the south celestial pole. Pulsating between magnitude 3.26 and 3.33, Gamma Hydri is a variable red giant 60 times the diameter of our Sun. Lying near it is VW Hydri, one of the brightest dwarf novae in the heavens. Four star systems in Hydrus have been found to have exoplanets to date, including HD 10180, which could bear up to nine planetary companions. Hydrus was one of the twelve constellations established by the Dutch astronomer Petrus Plancius from the observations of the southern sky by the Dutch explorers Pieter Dirkszoon Keyser and Frederick de Houtman, who had sailed on the first Dutch trading expedition, known as the "Eerste Schipvaart", to the East Indies. It first appeared on a 35-cm (14 in) diameter celestial globe published in 1598 in Amsterdam by Plancius with Jodocus Hondius. The first depiction of this constellation in a celestial atlas was in the German cartographer Johann Bayer's "Uranometria" of 1603. De Houtman included it in his southern star catalogue the same year under the Dutch name "De Waterslang", "The Water Snake", it representing a type of snake encountered on the expedition rather than a mythical creature. The French explorer and astronomer Nicolas Louis de Lacaille called it "l’Hydre Mâle" on the 1756 version of his planisphere of the southern skies, distinguishing it from the feminine Hydra. The French name was retained by Jean Fortin in 1776 for his "Atlas Céleste", while Lacaille Latinised the name to Hydrus for his revised "Coelum Australe Stelliferum" in 1763. Irregular in shape, Hydrus is bordered by Mensa to the southeast, Eridanus to the east, Horologium and Reticulum to the northeast, Phoenix to the north, Tucana to the northwest and west, and Octans to the south; Lacaille had shortened Hydrus' tail to make space for this last constellation he had drawn up. Covering 243 square degrees and 0.589% of the night sky, it ranks 61st of the 88 constellations in size. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is 'Hyi'. 
The official constellation boundaries, as set by Eugène Delporte in 1930, are defined by a polygon of 12 segments. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between −57.85° and −82.06°. As one of the deep southern constellations, it remains below the horizon at latitudes north of the 30th parallel in the Northern Hemisphere, and is circumpolar at latitudes south of the 50th parallel in the Southern Hemisphere. Indeed, Herman Melville mentions it and Argo Navis in "Moby-Dick" "beneath effulgent Antarctic Skies", highlighting his knowledge of the southern constellations from whaling voyages. A line drawn from the long axis of the Southern Cross to Beta Hydri and then extended 4.5 times will mark a point due south. Hydrus culminates at midnight around 26 October. Keyser and de Houtman assigned fifteen stars to the constellation in their Malay and Madagascan vocabulary, with a star that would later be designated as Alpha Hydri marking the head, Gamma the chest and a number of stars that were later allocated to Tucana, Reticulum, Mensa and Horologium marking the body and tail. Lacaille charted and designated 20 stars with the Bayer designations Alpha through to Tau in 1756. Of these, he used the designations Eta, Pi and Tau twice each, for three sets of two stars close together, and omitted Omicron and Xi. He assigned Rho to a star that subsequent astronomers were unable to find. Beta Hydri, the brightest star in Hydrus, is a yellow star of apparent magnitude 2.8, lying 24 light-years from Earth. It has about 104% of the mass of the Sun and 181% of the Sun's radius, with more than three times the Sun's luminosity. The spectrum of this star matches a stellar classification of G2 IV, with the luminosity class of 'IV' indicating this is a subgiant star. As such, it is a slightly more evolved star than the Sun, with the supply of hydrogen fuel at its core becoming exhausted. It is the nearest subgiant star to the Sun and one of the oldest stars in the solar neighbourhood. Thought to be between 6.4 and 7.1 billion years old, this star bears some resemblance to what the Sun may look like in the far distant future, making it an object of interest to astronomers. It is also the closest bright star to the south celestial pole. Located at the northern edge of the constellation and just southwest of Achernar is Alpha Hydri, a white sub-giant star of magnitude 2.9, situated 72 light-years from Earth. Of spectral type F0IV, it is beginning to cool and enlarge as it uses up its supply of hydrogen. It is twice as massive and 3.3 times as wide as our Sun and 26 times more luminous. A line drawn between Alpha Hydri and Beta Centauri is bisected by the south celestial pole. In the southeastern corner of the constellation is Gamma Hydri, a red giant of spectral type M2III located 214 light-years from Earth. It is a semi-regular variable star, pulsating between magnitudes 3.26 and 3.33. Observations over five years were not able to establish its periodicity. It is around 1.5 to 2 times as massive as our Sun, and has expanded to about 60 times the Sun's diameter. It shines with about 655 times the luminosity of the Sun. Located 3° northeast of Gamma is VW Hydri, a dwarf nova of the SU Ursae Majoris type. It is a close binary system that consists of a white dwarf and another star, the former drawing off matter from the latter into a bright accretion disk. 
These systems are characterised by frequent eruptions and less frequent supereruptions. The former are smooth, while the latter exhibit short "superhumps" of heightened activity. One of the brightest dwarf novae in the sky, it has a baseline magnitude of 14.4 and can brighten to magnitude 8.4 during peak activity. BL Hydri is another close binary system composed of a low mass star and a strongly magnetic white dwarf. Known as a polar or AM Herculis variable, such a system produces polarized optical and infrared emissions and intense soft and hard X-ray emissions at the frequency of the white dwarf's rotation period—in this case 113.6 minutes. There are two notable optical double stars in Hydrus. Pi Hydri, composed of Pi1 Hydri and Pi2 Hydri, is divisible in binoculars. Around 476 light-years distant, Pi1 is a red giant of spectral type M1III that varies between magnitudes 5.52 and 5.58. Pi2 is an orange giant of spectral type K2III shining with a magnitude of 5.7, around 488 light-years from Earth. Eta Hydri is the other optical double, composed of Eta1 and Eta2. Eta1 is a blue-white main sequence star of spectral type B9V that was suspected of being variable, and is located just over 700 light-years away. Eta2 has a magnitude of 4.7 and is a yellow giant star of spectral type G8.5III around 218 light-years distant, which has evolved off the main sequence and is expanding and cooling on its way to becoming a red giant. Calculations of its mass indicate it was most likely a white A-type main sequence star for most of its existence, around twice the mass of our Sun. A planet, Eta2 Hydri b, with a mass greater than 6.5 times that of Jupiter, was discovered in 2005, orbiting around Eta2 every 711 days at a distance of 1.93 astronomical units (AU). Three other systems have been found to have planets, most notably the Sun-like star HD 10180, which has seven planets, plus possibly an additional two for a total of nine—as of 2012 more than any other system to date, including the Solar System. Lying around from the Earth, it has an apparent magnitude of 7.33. GJ 3021 is a solar twin—a star very like our own Sun—around 57 light-years distant with a spectral type G8V and magnitude of 6.7. It has a Jovian planet companion (GJ 3021 b). Orbiting about 0.5 AU from its sun, it has a minimum mass 3.37 times that of Jupiter and a period of around 133 days. The system is a complex one as the faint star GJ 3021B orbits at a distance of 68 AU; it is a red dwarf of spectral type M4V. HD 20003 is a star of magnitude 8.37. It is a yellow main sequence star of spectral type G8V, a little cooler and smaller than our Sun, around 143 light-years away. It has two planets that are around 12 and 13.5 times as massive as the Earth, with periods of just under 12 and 34 days respectively. Hydrus contains only faint deep-sky objects. IC 1717 was a deep-sky object discovered by the Danish astronomer John Louis Emil Dreyer in the late 19th century. The object at the coordinates Dreyer observed is no longer there, and its nature is now a mystery. It was very likely to have been a faint comet. PGC 6240, known as the White Rose Galaxy, is a giant spiral galaxy surrounded by shells resembling rose petals, located around 345 million light-years from the Solar System. Unusually, it has cohorts of globular clusters of three distinct ages, suggesting bouts of post-starburst formation following a merger with another galaxy. The constellation also contains a spiral galaxy, NGC 1511, which lies edge on to observers on Earth and is readily viewed in amateur telescopes. 
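As a rough point of reference for the VW Hydri outbursts noted above, the standard astronomical magnitude relation (a textbook formula, not quoted in this article) converts the rise from magnitude 14.4 to 8.4 into a brightness ratio:

$$\frac{F_{\text{peak}}}{F_{\text{quiescent}}} = 10^{(m_{\text{quiescent}} - m_{\text{peak}})/2.5} = 10^{(14.4 - 8.4)/2.5} = 10^{2.4} \approx 250,$$

so at the peak of an ordinary outburst the system is roughly 250 times brighter than at its baseline.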
Located mostly in Dorado, the Large Magellanic Cloud extends into Hydrus. The globular cluster NGC 1466 is an outlying component of the galaxy, and contains many RR Lyrae-type variable stars. It has a magnitude of 11.59 and is thought to be over 12 billion years old. Two stars, HD 24188 of magnitude 6.3 and HD 24115 of magnitude 9.0, lie nearby in its foreground. NGC 602 is composed of an emission nebula and a young, bright open cluster of stars that is an outlying component on the eastern edge of the Small Magellanic Cloud, a satellite galaxy to the Milky Way. Most of the cloud is located in the neighbouring constellation Tucana. Heavy metal music Heavy metal (or simply metal) is a genre of rock music that developed in the late 1960s and early 1970s, largely in the United Kingdom. With roots in blues rock, psychedelic rock, and acid rock, the bands that created heavy metal developed a thick, massive sound, characterized by highly amplified distortion, extended guitar solos, emphatic beats, and overall loudness. Heavy metal lyrics and performance styles are sometimes associated with aggression and machismo. In 1968, three of the genre's most famous pioneers, Led Zeppelin, Black Sabbath and Deep Purple were founded. Though they came to attract wide audiences, they were often derided by critics. During the mid-1970s, Judas Priest helped spur the genre's evolution by discarding much of its blues influence; Motörhead introduced a punk rock sensibility and an increasing emphasis on speed. Beginning in the late 1970s, bands in the new wave of British heavy metal such as Iron Maiden and Saxon followed in a similar vein. Before the end of the decade, heavy metal fans became known as "metalheads" or "headbangers". During the 1980s, glam metal became popular with groups such as Mötley Crüe, Poison and Def Leppard. Underground scenes produced an array of more aggressive styles: thrash metal broke into the mainstream with bands such as Metallica, Slayer, Megadeth, and Anthrax, while other extreme subgenres of heavy metal such as death metal (with bands such as Death, Possessed, and Obituary) and black metal (with bands such as Mayhem, Bathory, and Immortal) remain subcultural phenomena. Since the mid-1990s popular styles have further expanded the definition of the genre. These include groove metal (with bands such as Pantera, Sepultura, and Lamb of God) and nu metal (with bands such as Korn, Slipknot, and Linkin Park), the latter of which often incorporates elements of grunge and hip hop. Heavy metal is traditionally characterized by loud distorted guitars, emphatic rhythms, dense bass-and-drum sound, and vigorous vocals. Heavy metal subgenres variously emphasize, alter, or omit one or more of these attributes. "The New York Times" critic Jon Pareles writes, "In the taxonomy of popular music, heavy metal is a major subspecies of hard-rock—the breed with less syncopation, less blues, more showmanship and more brute force." The typical band lineup includes a drummer, a bassist, a rhythm guitarist, a lead guitarist, and a singer, who may or may not be an instrumentalist. Keyboard instruments are sometimes used to enhance the fullness of the sound. Deep Purple's Jon Lord played an overdriven Hammond organ. In 1970, John Paul Jones used a Moog synthesizer on "Led Zeppelin III"; by the 1990s, in "almost every subgenre of heavy metal" synthesizers were used. The electric guitar and the sonic power that it projects through amplification has historically been the key element in heavy metal. 
The heavy metal guitar sound comes from a combined use of high volumes and heavy distortion. For classic heavy metal guitar tone, guitarists keep gain at moderate levels, without excessive preamp or pedal distortion, to retain open spaces and air in the music; the guitar amplifier is turned up loud to produce the characteristic "punch and grind". Thrash metal guitar tone has scooped mid-frequencies and a tightly compressed sound with lots of bass frequencies. Guitar solos are "an essential element of the heavy metal code ... that underscores the significance of the guitar" to the genre. Most heavy metal songs "feature at least one guitar solo", which is "a primary means through which the heavy metal performer expresses virtuosity". Some exceptions are nu metal and grindcore bands, which tend to omit guitar solos. With rhythm guitar parts, the "heavy crunch sound in heavy metal ... [is created by] palm muting" the strings with the picking hand and using distortion. Palm muting creates a tighter, more precise sound and it emphasizes the low end. The lead role of the guitar in heavy metal often collides with the traditional "frontman" or bandleader role of the vocalist, creating a musical tension as the two "contend for dominance" in a spirit of "affectionate rivalry". Heavy metal "demands the subordination of the voice" to the overall sound of the band. Reflecting metal's roots in the 1960s counterculture, an "explicit display of emotion" is required from the vocals as a sign of authenticity. Critic Simon Frith claims that the metal singer's "tone of voice" is more important than the lyrics. The prominent role of the bass is also key to the metal sound, and the interplay of bass and guitar is a central element. The bass guitar provides the low-end sound crucial to making the music "heavy". The bass plays a "more important role in heavy metal than in any other genre of rock". Metal basslines vary widely in complexity, from holding down a low pedal point as a foundation to doubling complex riffs and licks along with the lead or rhythm guitars. Some bands feature the bass as a lead instrument, an approach popularized by Metallica's Cliff Burton with his heavy emphasis on bass guitar solos and use of chords while playing bass in the early 1980s. Lemmy of Motörhead often played overdriven power chords in his bass lines. The essence of heavy metal drumming is creating a loud, constant beat for the band using the "trifecta of speed, power, and precision". Heavy metal drumming "requires an exceptional amount of endurance", and drummers have to develop "considerable speed, coordination, and dexterity ... to play the intricate patterns" used in heavy metal. A characteristic metal drumming technique is the cymbal choke, which consists of striking a cymbal and then immediately silencing it by grabbing it with the other hand (or, in some cases, the same striking hand), producing a burst of sound. The metal drum setup is generally much larger than those employed in other forms of rock music. Black metal, death metal and some "mainstream metal" bands "all depend upon double-kicks and blast beats". In live performance, loudness—an "onslaught of sound", in sociologist Deena Weinstein's description—is considered vital. In his book "Metalheads", psychologist Jeffrey Arnett refers to heavy metal concerts as "the sensory equivalent of war". Following the lead set by Jimi Hendrix, Cream and The Who, early heavy metal acts such as Blue Cheer set new benchmarks for volume. 
As Blue Cheer's Dick Peterson put it, "All we knew was we wanted more power." A 1977 review of a Motörhead concert noted how "excessive volume in particular figured into the band's impact." Weinstein makes the case that in the same way that melody is the main element of pop and rhythm is the main focus of house music, powerful sound, timbre, and volume are the key elements of metal. She argues that the loudness is designed to "sweep the listener into the sound" and to provide a "shot of youthful vitality". In relation to the gender composition of heavy metal bands, performers tended to be almost exclusively male until at least the mid-1980s apart from exceptions such as Girlschool. However, "now [in the 2010s] maybe more than ever–strong metal women have put up their dukes and got down to it", "carv[ing] out a considerable place for [them]selves". A 2013 article states that metal "clearly empowers women". In the sub-genres of symphonic and power metal, there has been a sizable number of bands that have had women as the lead singers; bands such as Nightwish, Delain, and Within Temptation have featured women as lead singers with men playing instruments. The rhythm in metal songs is emphatic, with deliberate stresses. Weinstein observes that the wide array of sonic effects available to metal drummers enables the "rhythmic pattern to take on a complexity within its elemental drive and insistency". In many heavy metal songs, the main groove is characterized by short, two-note or three-note rhythmic figures—generally made up of 8th or 16th notes. These rhythmic figures are usually performed with a staccato attack created by using a palm-muted technique on the rhythm guitar. Brief, abrupt, and detached rhythmic cells are joined into rhythmic phrases with a distinctive, often jerky texture. These phrases are used to create rhythmic accompaniment and melodic figures called riffs, which help to establish thematic hooks. Heavy metal songs also use longer rhythmic figures such as whole note- or dotted quarter note-length chords in slow-tempo power ballads. The tempos in early heavy metal music tended to be "slow, even ponderous". By the late 1970s, however, metal bands were employing a wide variety of tempos. In the 2000s decade, metal tempos range from slow ballad tempos (quarter note = 60 beats per minute) to extremely fast blast beat tempos (quarter note = 350 beats per minute). One of the signatures of the genre is the guitar power chord. In technical terms, the power chord is relatively simple: it involves just one main interval, generally the perfect fifth, though an octave may be added as a doubling of the root. When power chords are played on the lower strings at high volumes and with distortion, additional low frequency sounds are created, which add to the "weight of the sound" and create an effect of "overwhelming power". Although the perfect fifth interval is the most common basis for the power chord, power chords are also based on different intervals such as the minor third, major third, perfect fourth, diminished fifth, or minor sixth. Most power chords are also played with a consistent finger arrangement that can be slid easily up and down the fretboard. Heavy metal is usually based on riffs created with three main harmonic traits: modal scale progressions, tritone and chromatic progressions, and the use of pedal points. Traditional heavy metal tends to employ modal scales, in particular the Aeolian and Phrygian modes. 
Harmonically speaking, this means the genre typically incorporates modal chord progressions such as the Aeolian progressions I-♭VI-♭VII, I-♭VII-(♭VI), or I-♭VI-IV-♭VII and Phrygian progressions implying the relation between I and ♭II (I-♭II-I, I-♭II-III, or I-♭II-VII for example). Tense-sounding chromatic or tritone relationships are used in a number of metal chord progressions. In addition to using modal harmonic relationships, heavy metal also uses "pentatonic and blues-derived features". The tritone, an interval spanning three whole tones—such as C to F#—was a forbidden dissonance in medieval ecclesiastical singing, which led monks to call it "diabolus in musica"—"the devil in music". Heavy metal songs often make extensive use of pedal point as a harmonic basis. A pedal point is a sustained tone, typically in the bass range, during which at least one foreign (i.e., dissonant) harmony is sounded in the other parts. According to Robert Walser, heavy metal harmonic relationships are "often quite complex" and the harmonic analysis done by metal players and teachers is "often very sophisticated". In the study of heavy metal chord structures, it has been concluded that "heavy metal music has proved to be far more complicated" than other music researchers had realized. Robert Walser stated that, alongside blues and R&B, the "assemblage of disparate musical styles known ... as 'classical music'" has been a major influence on heavy metal since the genre's earliest days. He also stated that metal's "most influential musicians have been guitar players who have also studied classical music. Their appropriation and adaptation of classical models sparked the development of a new kind of guitar virtuosity [and] changes in the harmonic and melodic language of heavy metal." In an article written for "Grove Music Online", Walser stated that the "1980s brought on ... the widespread adaptation of chord progressions and virtuosic practices from 18th-century European models, especially Bach and Antonio Vivaldi, by influential guitarists such as Ritchie Blackmore, Marty Friedman, Jason Becker, Uli Jon Roth, Eddie Van Halen, Randy Rhoads and Yngwie Malmsteen". Kurt Bachmann of Believer has stated that "If done correctly, metal and classical fit quite well together. Classical and metal are probably the two genres that have the most in common when it comes to feel, texture, creativity." Although a number of metal musicians cite classical composers as inspiration, classical and metal are rooted in different cultural traditions and practices—classical in the art music tradition, metal in the popular music tradition. As musicologists Nicolas Cook and Nicola Dibben note, "Analyses of popular music also sometimes reveal the influence of 'art traditions'. An example is Walser's linkage of heavy metal music with the ideologies and even some of the performance practices of nineteenth-century Romanticism. However, it would be clearly wrong to claim that traditions such as blues, rock, heavy metal, rap or dance music derive primarily from 'art music'." According to scholars David Hatch and Stephen Millward, Black Sabbath, and the numerous heavy metal bands that they inspired, have concentrated lyrically "on dark and depressing subject matter to an extent hitherto unprecedented in any form of pop music". 
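Stepping back to the harmonic vocabulary outlined earlier in this section, the short sketch below (purely illustrative, and not drawn from the sources cited in this article) spells out the power-chord structure and two of the modal progressions in a sample key of E; the helper functions, the degree table, and the choice of key are assumptions made only for the example.

```python
# Illustrative sketch only: spells out the power-chord structure (root plus
# perfect fifth, optionally doubled at the octave) and two of the modal
# progressions discussed above. Pitch-class arithmetic is standard music
# theory; the helpers and the key of E are assumptions for the example.

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

# Semitone offsets of the scale degrees used in the progressions above.
DEGREE_OFFSETS = {"I": 0, "bII": 1, "IV": 5, "bVI": 8, "bVII": 10}

def power_chord(root: str, add_octave: bool = True) -> list[str]:
    """Return the note names of a power chord built on `root`."""
    pc = NOTE_NAMES.index(root)
    offsets = [0, 7] + ([12] if add_octave else [])  # root, perfect fifth, octave
    return [NOTE_NAMES[(pc + o) % 12] for o in offsets]

def spell_progression(degrees: list[str], tonic: str = "E") -> list[str]:
    """Return the chord roots of a Roman-numeral progression in the given key."""
    t = NOTE_NAMES.index(tonic)
    return [NOTE_NAMES[(t + DEGREE_OFFSETS[d]) % 12] for d in degrees]

print(power_chord("E"))                         # ['E', 'B', 'E']  (an E5 chord)
print(spell_progression(["I", "bVI", "bVII"]))  # Aeolian I-bVI-bVII: ['E', 'C', 'D']
print(spell_progression(["I", "bII", "I"]))     # Phrygian I-bII-I:   ['E', 'F', 'E']
```

Played as power chords on the low strings, those roots give E5-C5-D5 and E5-F5 movements of the kind the Aeolian and Phrygian progressions above describe.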
Hatch and Millward take as an example Sabbath's second album "Paranoid" (1970), which "included songs dealing with personal trauma—'Paranoid' and 'Fairies Wear Boots' (which described the unsavoury side effects of drug-taking)—as well as those confronting wider issues, such as the self-explanatory 'War Pigs' and 'Hand of Doom'." Deriving from the genre's roots in blues music, sex is another important topic—a thread running from Led Zeppelin's suggestive lyrics to the more explicit references of glam metal and nu metal bands. The thematic content of heavy metal has long been a target of criticism. According to Jon Pareles, "Heavy metal's main subject matter is simple and virtually universal. With grunts, moans and subliterary lyrics, it celebrates ... a party without limits ... [T]he bulk of the music is stylized and formulaic." Music critics have often deemed metal lyrics juvenile and banal, and others have objected to what they see as advocacy of misogyny and the occult. During the 1980s, the Parents Music Resource Center petitioned the U.S. Congress to regulate the popular music industry due to what the group asserted were objectionable lyrics, particularly those in heavy metal songs. Andrew Cope states that claims that heavy metal lyrics are misogynistic are "clearly misguided" as these critics have "overlook[ed] the overwhelming evidence that suggests otherwise". Music critic Robert Christgau called metal "an expressive mode [that] it sometimes seems will be with us for as long as ordinary white boys fear girls, pity themselves, and are permitted to rage against a world they'll never beat". Heavy metal artists have had to defend their lyrics in front of the U.S. Senate and in court. In 1985, Twisted Sister frontman Dee Snider was asked to defend his song "Under the Blade" at a U.S. Senate hearing. At the hearing, the PMRC alleged that the song was about sadomasochism and rape; Snider stated that the song was about his bandmate's throat surgery. In 1986, Ozzy Osbourne was sued over the lyrics of his song "Suicide Solution". A lawsuit against Osbourne was filed by the parents of John McCollum, a depressed teenager who committed suicide allegedly after listening to Osbourne's song. Osbourne was not found to be responsible for the teen's death. In 1990, Judas Priest was sued in American court by the parents of two young men who had shot themselves five years earlier, allegedly after hearing the subliminal statement "do it" in the song "Better by You, Better than Me", a Spooky Tooth cover featured on the album "Stained Class" (1978). While the case attracted a great deal of media attention, it was ultimately dismissed. In 1991, UK police seized death metal records from the British record label Earache Records, in an "unsuccessful attempt to prosecute the label for obscenity". In some predominantly Muslim countries, heavy metal has been officially denounced as a threat to traditional values. In countries such as Morocco, Egypt, Lebanon, and Malaysia, there have been incidents of heavy metal musicians and fans being arrested and incarcerated. In 1997, Egyptian police jailed many young metal fans, accusing them of "devil worship" and blasphemy after finding metal recordings during searches of their homes. In 2013, Malaysia banned Lamb of God from performing in the country, on the grounds that the "band's lyrics could be interpreted as being religiously insensitive" and blasphemous. 
Heavy metal has sometimes been regarded as a contributing factor in mental health disorders, with its fans assumed to be more likely to suffer from poor mental health, but research has indicated that this is not the case and that fans show a similar or lower rate of poor mental health than non-fans. For many artists and bands, visual imagery plays a large role in heavy metal. In addition to its sound and lyrics, a heavy metal band's image is expressed in album cover art, logos, stage sets, clothing, design of instruments, and music videos. Down-the-back long hair is the "most crucial distinguishing feature of metal fashion". Originally adopted from the hippie subculture, by the 1980s and 1990s heavy metal hair "symbolised the hate, angst and disenchantment of a generation that seemingly never felt at home", according to journalist Nader Rahman. Long hair gave members of the metal community "the power they needed to rebel against nothing in general". The classic uniform of heavy metal fans consists of light-colored, ripped, frayed or torn blue jeans, black T-shirts, boots, and black leather or denim jackets. Deena Weinstein writes, "T-shirts are generally emblazoned with the logos or other visual representations of favorite metal bands." In the 1980s, a range of sources, from punk and goth music to horror films, influenced metal fashion. Many metal performers of the 1970s and 1980s used radically shaped and brightly colored instruments to enhance their stage appearance. Fashion and personal style were especially important for glam metal bands of the era. Performers typically wore long, dyed, hairspray-teased hair (hence the nickname, "hair metal"); makeup such as lipstick and eyeliner; gaudy clothing, including leopard-skin-printed shirts or vests and tight denim, leather, or spandex pants; and accessories such as headbands and jewelry. Pioneered by the heavy metal act X Japan in the late 1980s, bands in the Japanese movement known as visual kei—which includes many nonmetal groups—emphasize elaborate costumes, hair, and makeup. Many metal musicians, when performing live, engage in headbanging, which involves rhythmically beating time with the head, often emphasized by long hair. The il cornuto, or devil horns, hand gesture was popularized by vocalist Ronnie James Dio while with Black Sabbath and Dio. Although Gene Simmons of Kiss claims to have been the first to make the gesture on the 1977 "Love Gun" album cover, there is speculation as to who started the phenomenon. Attendees of metal concerts do not dance in the usual sense. It has been argued that this is due to the music's largely male audience and "extreme heterosexualist ideology". Two primary body movements used are headbanging and an arm thrust that is both a sign of appreciation and a rhythmic gesture. The performance of air guitar is popular among metal fans both at concerts and when listening to records at home. According to Deena Weinstein, thrash metal concerts have two elements that are not part of the other metal genres: moshing and stage diving, which "were imported from the punk/hardcore subculture". Weinstein states that moshing participants bump and jostle each other as they move in a circle in an area called the "pit" near the stage. Stage divers climb onto the stage with the band and then jump "back into the audience". It has been argued that heavy metal has outlasted many other rock genres largely due to the emergence of an intense, exclusionary, strongly masculine subculture. 
While the metal fan base is largely young, white, male, and blue-collar, the group is "tolerant of those outside its core demographic base who follow its codes of dress, appearance, and behavior". Identification with the subculture is strengthened not only by the group experience of concert-going and shared elements of fashion, but also by contributing to metal magazines and, more recently, websites. Attending live concerts in particular has been called the "holiest of heavy metal communions." The metal scene has been characterized as a "subculture of alienation", with its own code of authenticity. This code puts several demands on performers: they must appear both completely devoted to their music and loyal to the subculture that supports it; they must appear uninterested in mainstream appeal and radio hits; and they must never "sell out". Deena Weinstein states that for the fans themselves, the code promotes "opposition to established authority, and separateness from the rest of society". Musician and filmmaker Rob Zombie observes, "Most of the kids who come to my shows seem like really imaginative kids with a lot of creative energy they don't know what to do with" and that metal is "outsider music for outsiders. Nobody wants to be the weird kid; you just somehow end up being the weird kid. It's kind of like that, but with metal you have all the weird kids in one place". Scholars of metal have noted the tendency of fans to classify and reject some performers (and some other fans) as "poseurs" "who pretended to be part of the subculture, but who were deemed to lack authenticity and sincerity". The origin of the term "heavy metal" in a musical context is uncertain. The phrase has been used for centuries in chemistry and metallurgy, where the periodic table organizes elements of both light and heavy metals (e.g., uranium). An early use of the term in modern popular culture was by countercultural writer William S. Burroughs. His 1962 novel "The Soft Machine" includes a character known as "Uranian Willy, the Heavy Metal Kid". Burroughs' next novel, "Nova Express" (1964), develops the theme, using "heavy metal" as a metaphor for addictive drugs: "With their diseases and orgasm drugs and their sexless parasite life forms—Heavy Metal People of Uranus wrapped in cool blue mist of vaporized bank notes—And The Insect People of Minraud with metal music". Inspired by Burroughs' novels, the term was used in the title of the 1967 album "Featuring the Human Host and the Heavy Metal Kids" by Hapshash and the Coloured Coat, which has been claimed to be its first use in the context of music. The phrase was later lifted by Sandy Pearlman, who used the term to describe The Byrds for their supposed "aluminium style of context and effect", particularly on their album "The Notorious Byrd Brothers" (1968). Metal historian Ian Christe describes what the components of the term mean in "hippiespeak": "heavy" is roughly synonymous with "potent" or "profound," and "metal" designates a certain type of mood, grinding and weighted as with metal. The word "heavy" in this sense was a basic element of beatnik and later countercultural hippie slang, and references to "heavy music"—typically slower, more amplified variations of standard pop fare—were already common by the mid-1960s, such as in reference to Vanilla Fudge. Iron Butterfly's debut album, released in early 1968, was titled "Heavy". 
The first use of "heavy metal" in a song lyric is in reference to a motorcycle in the Steppenwolf song "Born to Be Wild", also released that year: "I like smoke and lightning/Heavy metal thunder/Racin' with the wind/And the feelin' that I'm under." The first documented use of the phrase to describe a type of rock music identified to date appears in a review by Barry Gifford. In the May 11, 1968, issue of "Rolling Stone", he wrote about the album "A Long Time Comin'" by U.S. band Electric Flag: "Nobody who's been listening to Mike Bloomfield—either talking or playing—in the last few years could have expected this. This is the new soul music, the synthesis of white blues and heavy metal rock." In January 1970 Lucian K. Truscott IV reviewing "Led Zeppelin II" for the Village Voice described the sound as "heavy" and made comparisons with Blue Cheer and Vanilla Fudge. Other early documented uses of the phrase are from reviews by critic Mike Saunders. In the November 12, 1970 issue of "Rolling Stone", he commented on an album put out the previous year by the British band Humble Pie: ""Safe as Yesterday Is," their first American release, proved that Humble Pie could be boring in lots of different ways. Here they were a noisy, unmelodic, heavy metal-leaden shit-rock band with the loud and noisy parts beyond doubt. There were a couple of nice songs ... and one monumental pile of refuse". He described the band's latest, self-titled release as "more of the same 27th-rate heavy metal crap". In a review of Sir Lord Baltimore's "Kingdom Come" in the May 1971 "Creem", Saunders wrote, "Sir Lord Baltimore seems to have down pat most all the best heavy metal tricks in the book". "Creem" critic Lester Bangs is credited with popularizing the term via his early 1970s essays on bands such as Led Zeppelin and Black Sabbath. Through the decade, "heavy metal" was used by certain critics as a virtually automatic putdown. In 1979, lead "New York Times" popular music critic John Rockwell described what he called "heavy-metal rock" as "brutally aggressive music played mostly for minds clouded by drugs", and, in a different article, as "a crude exaggeration of rock basics that appeals to white teenagers". Coined by Black Sabbath drummer Bill Ward, "downer rock" was one of the earliest terms used to describe this style of music and was applied to acts such as Sabbath and Bloodrock. "Classic Rock" magazine described the downer rock culture revolving around the use of Quaaludes and the drinking of wine. Later the term would be replaced by "heavy metal". The terms "heavy metal" and "hard rock" have often been used interchangeably, particularly in discussing bands of the 1970s, a period when the terms were largely synonymous. For example, the 1983 "Rolling Stone Encyclopedia of Rock & Roll" includes this passage: "known for its aggressive blues-based hard-rock style, Aerosmith was the top American heavy-metal band of the mid-Seventies". Earlier on, as "heavy metal" emerged partially from the heavy psychedelic rock or acid rock scene, "acid rock" was often used interchangeably with "heavy metal" and "hard rock". Musicologist Steve Waksman stated that "the distinction between acid rock, hard rock, and heavy metal can at some point never be more than tenuous", while percussionist John Beck defined "acid rock" as synonymous with hard rock and heavy metal. 
Heavy metal's quintessential guitar style, built around distortion-heavy riffs and power chords, traces its roots to early 1950s Memphis blues guitarists such as Joe Hill Louis, Willie Johnson, and particularly Pat Hare, who captured a "grittier, nastier, more ferocious electric guitar sound" on records such as James Cotton's "" (1954); the late 1950s instrumentals of Link Wray, particularly "Rumble" (1958); the early 1960s surf rock of Dick Dale, including "Let's Go Trippin'" (1961) and "Misirlou" (1962); and The Kingsmen's version of "Louie Louie" (1963) which made it a garage rock standard. However, the genre's direct lineage begins in the mid-1960s. American blues music was a major influence on the early British rockers of the era. Bands like The Rolling Stones and The Yardbirds developed blues rock by recording covers of classic blues songs, often speeding up the tempos. As they experimented with the music, the UK blues-based bands—and the U.S. acts they influenced in turn—developed what would become the hallmarks of heavy metal, in particular, the loud, distorted guitar sound. The Kinks played a major role in popularising this sound with their 1964 hit "You Really Got Me". In addition to The Kinks' Dave Davies, other guitarists such as The Who's Pete Townshend and The Yardbirds' Jeff Beck were experimenting with feedback. Where the blues rock drumming style started out largely as simple shuffle beats on small kits, drummers began using a more muscular, complex, and amplified approach to match and be heard against the increasingly loud guitar. Vocalists similarly modified their technique and increased their reliance on amplification, often becoming more stylized and dramatic. In terms of sheer volume, especially in live performance, The Who's "bigger-louder-wall-of-Marshalls" approach was seminal. The combination of blues rock with psychedelic rock and acid rock formed much of the original basis for heavy metal. The variant or subgenre of psychedelic rock often known as "acid rock" was particularly influential on heavy metal; acid rock is often defined as a heavier, louder, or harder variant of psychedelic rock, or the more extreme side of the psychedelic rock genre, frequently containing a loud, improvised, and heavily distorted guitar-centered sound. Acid rock has been described as psychedelic rock at its "rawest and most intense," emphasizing the heavier qualities associated with both the positive and negative extremes of the psychedelic experience rather than only the idyllic side of psychedelia. American acid rock garage bands such as the 13th Floor Elevators epitomized the frenetic, heavier, darker and more psychotic sound of acid rock, a sound characterized by droning guitar riffs, amplified feedback, and guitar distortion, while the 13th Floor Elevators' sound in particular featured yelping vocals and "occasionally demented" lyrics. Frank Hoffman notes that: "Psychedelia was sometimes referred to as 'acid rock'. The latter label was applied to a pounding, hard rock variant that evolved out of the mid-1960s garage-punk movement. ... When rock began turning back to softer, roots-oriented sounds in late 1968, acid-rock bands mutated into heavy metal acts." One of the most influential bands in forging the merger of psychedelic rock and acid rock with the blues rock genre was the British power trio Cream, who derived a massive, heavy sound from unison riffing between guitarist Eric Clapton and bassist Jack Bruce, as well as Ginger Baker's double bass drumming. 
Their first two LPs, "Fresh Cream" (1966) and "Disraeli Gears" (1967), are regarded as essential prototypes for the future style of heavy metal. The Jimi Hendrix Experience's debut album, "Are You Experienced" (1967), was also highly influential. Hendrix's virtuosic technique would be emulated by many metal guitarists, and the album's most successful single, "Purple Haze", is identified by some as the first heavy metal hit. Vanilla Fudge, whose first album also came out in 1967, has been called "one of the few American links between psychedelia and what soon became heavy metal", and the band has been cited as an early American heavy metal group. On their self-titled debut album, Vanilla Fudge created "loud, heavy, slowed-down arrangements" of contemporary hit songs, blowing these songs up to "epic proportions" and "bathing them in a trippy, distorted haze." During the late 1960s, many psychedelic singers, such as Arthur Brown, began to create outlandish, theatrical and often macabre performances, which in themselves became highly influential on many metal acts. The American psychedelic rock band Coven, who opened for early heavy metal influencers such as Vanilla Fudge and the Yardbirds, portrayed themselves as practitioners of witchcraft or black magic, using dark—Satanic or occult—imagery in their lyrics, album art, and live performances. Live shows consisted of elaborate, theatrical "Satanic rites." Coven's 1969 debut album, "Witchcraft Destroys Minds & Reaps Souls", featured imagery of skulls, black masses, inverted crosses, and Satan worship, and both the album artwork and the band's live performances marked the first appearances in rock music of the sign of the horns, which would later become an important gesture in heavy metal culture. At the same time in England, the band Black Widow were also among the first psychedelic rock bands to use occult and Satanic imagery and lyrics, though both Black Widow's and Coven's lyrical and thematic influences on heavy metal were quickly overshadowed by the darker and heavier sounds of Black Sabbath. Critics disagree over who can be thought of as the first heavy metal band. Most credit either Led Zeppelin or Black Sabbath, with American commentators tending to favour Led Zeppelin and British commentators tending to favour Black Sabbath, though many give equal credit to both. A few commentators—mainly American—argue for other groups including Iron Butterfly, Steppenwolf or Blue Cheer. Deep Purple, the third band in what is sometimes considered the "unholy trinity" of heavy metal (Black Sabbath, Led Zeppelin, and Deep Purple), despite being slightly older than Black Sabbath and Led Zeppelin, fluctuated between many rock styles until late 1969 when they took a heavy metal direction. In 1968, the sound that would become known as heavy metal began to coalesce. That January, the San Francisco band Blue Cheer released a cover of Eddie Cochran's classic "Summertime Blues", from their debut album "Vincebus Eruptum", that many consider the first true heavy metal recording. The same month, Steppenwolf released its self-titled debut album, including "Born to Be Wild", which refers to "heavy metal thunder" in describing a motorcycle. In July, the Jeff Beck Group, whose leader had preceded Page as The Yardbirds' guitarist, released its debut record: "Truth" featured some of the "most molten, barbed, downright funny noises of all time," breaking ground for generations of metal ax-slingers. 
In September, Page's new band, Led Zeppelin, made its live debut in Denmark (billed as The New Yardbirds). The Beatles' "White Album", released the following month, included "Helter Skelter", then one of the heaviest-sounding songs ever released by a major band. The Pretty Things' rock opera "S.F. Sorrow", released in December, featured "proto heavy metal" songs such as "Old Man Going" and "I See You". Iron Butterfly's 1968 song "In-A-Gadda-Da-Vida" is sometimes described as an example of the transition between acid rock and heavy metal or the turning point in which acid rock became "heavy metal", and both Iron Butterfly's 1968 album "In-A-Gadda-Da-Vida" and Blue Cheer's 1968 album "Vincebus Eruptum" have been described as laying the foundation of heavy metal and greatly influential in the transformation of acid rock into heavy metal. In this counterculture period, MC5, who began as part of the Detroit garage rock scene, developed a raw distorted style that has been seen as a major influence on the future sound of both heavy metal and later punk music. The Stooges also began to establish and influence a heavy metal and later punk sound, with songs such as "I Wanna Be Your Dog", featuring pounding and distorted heavy guitar power chord riffs. Pink Floyd released two of their heaviest and loudest songs to date: "Ibiza Bar" and "The Nile Song", the latter regarded as "one of the heaviest songs the band recorded". King Crimson's debut album started with "21st Century Schizoid Man," which was considered heavy metal by several critics. In January 1969, Led Zeppelin's self-titled debut album was released and reached number 10 on the "Billboard" album chart. In July, Zeppelin and Grand Funk Railroad, a power trio with a Cream-inspired but cruder sound, played the Atlanta Pop Festival. That same month, another Cream-rooted trio led by Leslie West released "Mountain", an album filled with heavy blues rock guitar and roaring vocals. In August, the group—now itself dubbed Mountain—played an hour-long set at the Woodstock Festival, exposing the crowd of 300,000 people to the emerging sound of heavy metal. Mountain's proto-metal or early heavy metal hit song "Mississippi Queen", from the album "Climbing!", is especially credited with paving the way for heavy metal and was one of the first heavy guitar songs to receive regular play on radio. In September 1969, the Beatles released the album "Abbey Road", containing the track "I Want You (She's So Heavy)", which has been credited as an early example of or influence on heavy metal or doom metal. In October 1969, British band High Tide debuted with the heavy, proto-metal album "Sea Shanties". Led Zeppelin defined central aspects of the emerging genre, with Page's highly distorted guitar style and singer Robert Plant's dramatic, wailing vocals. Other bands, with a more consistently heavy, "purely" metal sound, would prove equally important in codifying the genre. The 1970 releases by Black Sabbath ("Black Sabbath" and "Paranoid") and Deep Purple ("In Rock") were crucial in this regard. Birmingham's Black Sabbath had developed a particularly heavy sound in part due to an industrial accident guitarist Tony Iommi suffered before cofounding the band. Unable to play normally, Iommi had to tune his guitar down for easier fretting and rely on power chords with their relatively simple fingering. 
The bleak, industrial, working-class environment of Birmingham, a manufacturing city full of noisy factories and metalworking, has itself been credited with influencing Black Sabbath's heavy, chugging, metallic sound and the sound of heavy metal in general. Deep Purple had fluctuated between styles in its early years, but by 1969 vocalist Ian Gillan and guitarist Ritchie Blackmore had led the band toward the developing heavy metal style. In 1970, Black Sabbath and Deep Purple scored major UK chart hits with "Paranoid" and "Black Night", respectively. That same year, two other British bands released debut albums in a heavy metal mode: Uriah Heep with "Very 'Eavy... Very 'Umble" and UFO with "UFO 1". Bloodrock released their self-titled debut album, containing a collection of heavy guitar riffs, gruff-style vocals and sadistic and macabre lyrics. The influential Budgie brought the new metal sound into a power trio context, creating some of the heaviest music of the time. The occult lyrics and imagery employed by Black Sabbath and Uriah Heep would prove particularly influential; Led Zeppelin also began foregrounding such elements with its fourth album, released in 1971. In 1973, Deep Purple released the song "Smoke on the Water", whose iconic riff is usually considered the most recognizable in "heavy rock" history, as a single from the classic live album "Made in Japan". On the other side of the Atlantic, the trend-setting group was Grand Funk Railroad, described as "the most commercially successful American heavy-metal band from 1970 until they disbanded in 1976, [they] established the Seventies success formula: continuous touring". Other influential bands identified with metal emerged in the U.S., such as Sir Lord Baltimore ("Kingdom Come," 1970), Blue Öyster Cult ("Blue Öyster Cult", 1972), Aerosmith ("Aerosmith", 1973) and Kiss ("Kiss", 1974). Sir Lord Baltimore's 1970 debut album and both Humble Pie's debut and self-titled third album were all among the first albums to be described in print as "heavy metal", with "As Safe As Yesterday Is" being referred to by the term "heavy metal" in a 1970 review in "Rolling Stone" magazine. Various smaller bands from the U.S., U.K., and Continental Europe, including Bang, Josefus, Leaf Hound, Primeval, Hard Stuff, Truth and Janey, Dust, JPT Scare Band, Frijid Pink, Cactus, May Blitz, Captain Beyond, Toad, Granicus, Iron Claw, and Yesterday's Children, though lesser known outside of their respective scenes, proved to be greatly influential on the emerging metal movement. In Germany, Scorpions debuted with "Lonesome Crow" in 1972. Blackmore, who had emerged as a virtuoso soloist with Deep Purple's highly influential album "Machine Head" (1972), left the band in 1975 to form Rainbow with Ronnie James Dio, singer and bassist for blues rock band Elf and future vocalist for Black Sabbath and heavy metal band Dio. Rainbow would expand on the mystical and fantasy-based lyrics and themes sometimes found in heavy metal, pioneering both power metal and neoclassical metal. These bands also built audiences via constant touring and increasingly elaborate stage shows. As described above, there are arguments about whether these and other early bands truly qualify as "heavy metal" or simply as "hard rock". Those closer to the music's blues roots or placing greater emphasis on melody are now commonly ascribed the latter label. AC/DC, which debuted with "High Voltage" in 1975, is a prime example. 
The 1983 "Rolling Stone" encyclopedia entry begins, "Australian heavy-metal band AC/DC". Rock historian Clinton Walker writes, "Calling AC/DC a heavy metal band in the seventies was as inaccurate as it is today... [They] were a rock 'n' roll band that just happened to be heavy enough for metal". The issue is not only one of shifting definitions, but also a persistent distinction between musical style and audience identification: Ian Christe describes how the band "became the stepping-stone that led huge numbers of hard rock fans into heavy metal perdition". In certain cases, there is little debate. After Black Sabbath, the next major example is Britain's Judas Priest, which debuted with "Rocka Rolla" in 1974. In Christe's description, "Black Sabbath's audience was...left to scavenge for sounds with similar impact. By the mid-1970s, heavy metal aesthetic could be spotted, like a mythical beast, in the moody bass and complex dual guitars of Thin Lizzy, in the stagecraft of Alice Cooper, in the sizzling guitar and showy vocals of Queen, and in the thundering medieval questions of Rainbow... Judas Priest arrived to unify and amplify these diverse highlights from hard rock's sonic palette. For the first time, heavy metal became a true genre unto itself." Though Judas Priest did not have a top 40 album in the United States until 1980, for many it was the definitive post-Sabbath heavy metal band; its twin-guitar attack, featuring rapid tempos and a non-bluesy, more cleanly metallic sound, was a major influence on later acts. While heavy metal was growing in popularity, most critics were not enamored of the music. Objections were raised to metal's adoption of visual spectacle and other trappings of commercial artifice, but the main offense was its perceived musical and lyrical vacuity: reviewing a Black Sabbath album in the early 1970s, leading critic Robert Christgau described it as "dull and decadent...dim-witted, amoral exploitation." Punk rock emerged in the mid-1970s as a reaction against contemporary social conditions as well as what was perceived as the overindulgent, overproduced rock music of the time, including heavy metal. Sales of heavy metal records declined sharply in the late 1970s in the face of punk, disco, and more mainstream rock. With the major labels fixated on punk, many newer British heavy metal bands were inspired by the movement's aggressive, high-energy sound and "lo-fi", do it yourself ethos. Underground metal bands began putting out cheaply recorded releases independently to small, devoted audiences. Motörhead, founded in 1975, was the first important band to straddle the punk/metal divide. With the explosion of punk in 1977, others followed. British music papers such as the "NME" and "Sounds" took notice, with "Sounds" writer Geoff Barton christening the movement the "New Wave of British Heavy Metal". NWOBHM bands including Iron Maiden, Saxon, and Def Leppard re-energized the heavy metal genre. Following the lead set by Judas Priest and Motörhead, they toughened up the sound, reduced its blues elements, and emphasized increasingly fast tempos. By 1980, the NWOBHM had broken into the mainstream, as albums by Iron Maiden and Saxon, as well as Motörhead, reached the British top 10. Though less commercially successful, other NWOBHM bands such as Venom and Diamond Head would have a significant influence on metal's development. In 1981, Motörhead became the first of this new breed of metal bands to top the UK charts with the live album "No Sleep 'til Hammersmith". 
The first generation of metal bands was ceding the limelight. Deep Purple had broken up soon after Blackmore's departure in 1975, and Led Zeppelin broke up following drummer John Bonham's death in 1980. Black Sabbath was plagued by infighting and substance abuse, while facing fierce competition from their opening band, the Los Angeles act Van Halen. Eddie Van Halen established himself as one of the leading metal guitarists of the era. His solo on "Eruption", from the band's self-titled 1978 album, is considered a milestone. Eddie Van Halen's sound even crossed over into pop music when his guitar solo was featured on the track "Beat It" by Michael Jackson (a U.S. number 1 in February 1983). Inspired by Van Halen's success, a metal scene began to develop in Southern California during the late 1970s. Based on the clubs of L.A.'s Sunset Strip, bands such as Quiet Riot, Ratt, Mötley Crüe, and W.A.S.P. were influenced by traditional heavy metal of the earlier 1970s. These acts, which came to be labeled glam metal or "hair metal", incorporated the theatrics (and sometimes makeup) of earlier acts such as Alice Cooper and Kiss. Hair/glam metal bands were often visually distinguished by long, overworked hair styles accompanied by wardrobes which were sometimes considered cross-gender. The lyrics of these glam metal bands characteristically emphasized hedonism and wild behavior, including lyrics which involved sexual expletives and the use of narcotics. In the wake of the new wave of British heavy metal and Judas Priest's breakthrough "British Steel" (1980), heavy metal became increasingly popular in the early 1980s. Many metal artists benefited from the exposure they received on MTV, which began airing in 1981—sales often soared if a band's videos screened on the channel. Def Leppard's videos for "Pyromania" (1983) made them superstars in America, and Quiet Riot became the first domestic heavy metal band to top the "Billboard" chart with "Metal Health" (1983). One of the seminal events in metal's growing popularity was the 1983 US Festival in California, where the "heavy metal day" featuring Ozzy Osbourne, Van Halen, Scorpions, Mötley Crüe, Judas Priest, and others drew the largest audiences of the three-day event. Between 1983 and 1984, heavy metal went from an 8 percent to a 20 percent share of all recordings sold in the U.S. Several major professional magazines devoted to the genre were launched, including "Kerrang!" (in 1981) and "Metal Hammer" (in 1984), as well as a host of fan journals. In 1985, "Billboard" declared, "Metal has broadened its audience base. Metal music is no longer the exclusive domain of male teenagers. The metal audience has become older (college-aged), younger (pre-teen), and more female". By the mid-1980s, glam metal was a dominant presence on the U.S. charts, music television, and the arena concert circuit. New bands such as L.A.'s Warrant and acts from the East Coast like Poison and Cinderella became major draws, while Mötley Crüe and Ratt remained very popular. Bridging the stylistic gap between hard rock and glam metal, New Jersey's Bon Jovi became enormously successful with its third album, "Slippery When Wet" (1986). The similarly styled Swedish band Europe became international stars with "The Final Countdown" (1986). Its title track hit number 1 in 25 countries. In 1987, MTV launched a show, "Headbanger's Ball", devoted exclusively to heavy metal videos. 
However, the metal audience had begun to factionalize, with those in many underground metal scenes favoring more extreme sounds and disparaging the popular style as "light metal" or "hair metal". One band that reached diverse audiences was Guns N' Roses. In contrast to their glam metal contemporaries in L.A., they were seen as much more raw and dangerous. With the release of their chart-topping "Appetite for Destruction" (1987), they "recharged and almost single-handedly sustained the Sunset Strip sleaze system for several years". The following year, Jane's Addiction emerged from the same L.A. hard-rock club scene with its major label debut, "Nothing's Shocking". Reviewing the album, "Rolling Stone" declared, "as much as any band in existence, Jane's Addiction is the true heir to Led Zeppelin". The group was one of the first to be identified with the "alternative metal" trend that would come to the fore in the next decade. Meanwhile, new bands such as New York's Winger and New Jersey's Skid Row sustained the popularity of the glam metal style. Many subgenres of heavy metal developed outside of the commercial mainstream during the 1980s such as crossover thrash. Several attempts have been made to map the complex world of underground metal, most notably by the editors of AllMusic, as well as critic Garry Sharpe-Young. Sharpe-Young's multivolume metal encyclopedia separates the underground into five major categories: thrash metal, death metal, black metal, power metal, and the related subgenres of doom and gothic metal. In 1990, a review in "Rolling Stone" suggested retiring the term "heavy metal" as the genre was "ridiculously vague". The article stated that the term only fueled "misperceptions of rock & roll bigots who still assume that five bands as different as Ratt, Extreme, Anthrax, Danzig and Mother Love Bone" sound the same. Thrash metal emerged in the early 1980s under the influence of hardcore punk and the new wave of British heavy metal, particularly songs in the revved-up style known as speed metal. The movement began in the United States, with Bay Area thrash metal being the leading scene. The sound developed by thrash groups was faster and more aggressive than that of the original metal bands and their glam metal successors. Low-register guitar riffs are typically overlaid with shredding leads. Lyrics often express nihilistic views or deal with social issues using visceral, gory language. Thrash has been described as a form of "urban blight music" and "a palefaced cousin of rap". The subgenre was popularized by the "Big Four of Thrash": Metallica, Anthrax, Megadeth, and Slayer. Three German bands, Kreator, Sodom, and Destruction, played a central role in bringing the style to Europe. Others, including San Francisco Bay Area's Testament and Exodus, New Jersey's Overkill, and Brazil's Sepultura and Sarcófago, also had a significant impact. Although thrash began as an underground movement, and remained largely that for almost a decade, the leading bands of the scene began to reach a wider audience. Metallica brought the sound into the top 40 of the "Billboard" album chart in 1986 with "Master of Puppets", the genre's first platinum record. Two years later, the band's "...And Justice for All" hit number 6, while Megadeth and Anthrax also had top 40 records on the American charts. 
Though less commercially successful than the rest of the Big Four, Slayer released one of the genre's definitive records: "Reign in Blood" (1986) was credited with incorporating heavier guitar timbres and with bringing explicit depictions of death, suffering, violence and the occult into thrash metal's lyricism. Slayer attracted a following among far-right skinheads, and accusations of promoting violence and Nazi themes have dogged the band. Even though Slayer did not receive substantial media exposure, their music played a key role in the development of extreme metal. In the early 1990s, thrash achieved breakout success, challenging and redefining the metal mainstream. Metallica's self-titled 1991 album topped the "Billboard" chart, as the band established an international following. Megadeth's "Countdown to Extinction" (1992) debuted at number two, Anthrax and Slayer cracked the top 10, and albums by regional bands such as Testament and Sepultura entered the top 100. Thrash soon began to evolve and split into more extreme metal genres. "Slayer's music was directly responsible for the rise of death metal," according to MTV News. The NWOBHM band Venom was also an important progenitor. The death metal movement in both North America and Europe adopted and emphasized the elements of blasphemy and diabolism employed by such acts. Florida's Death, San Francisco Bay Area's Possessed, and Ohio's Necrophagia are recognized as seminal bands in the style. Death and Possessed have both been credited with inspiring the subgenre's name, the latter via its 1984 demo "Death Metal" and the song "Death Metal" from its 1985 debut album "Seven Churches". In the late 1980s and early 1990s, Swedish death metal became notable and melodic forms of death metal were created. Death metal utilizes the speed and aggression of both thrash and hardcore, fused with lyrics preoccupied with Z-grade slasher movie violence and Satanism. Death metal vocals are typically bleak, involving guttural "death growls", high-pitched screaming, the "death rasp", and other uncommon techniques ("Genre—Death Metal/Black Metal", AllMusic, retrieved February 27, 2007). Complementing the deep, aggressive vocal style are downtuned, heavily distorted guitars and extremely fast percussion, often with rapid double bass drumming and "wall of sound"–style blast beats. Frequent tempo and time signature changes and syncopation are also typical. Death metal, like thrash metal, generally rejects the theatrics of earlier metal styles, opting instead for an everyday look of ripped jeans and plain leather jackets. One major exception to this rule was Deicide's Glen Benton, who branded an inverted cross on his forehead and wore armor on stage. Morbid Angel adopted neo-fascist imagery. These two bands, along with Death and Obituary, were leaders of the major death metal scene that emerged in Florida in the mid-1980s. In the UK, the related style of grindcore, led by bands such as Napalm Death and Extreme Noise Terror, emerged from the anarcho-punk movement. The first wave of black metal emerged in Europe in the early and mid-1980s, led by the United Kingdom's Venom, Denmark's Mercyful Fate, Switzerland's Hellhammer and Celtic Frost, and Sweden's Bathory. By the late 1980s, Norwegian bands such as Mayhem and Burzum were heading a second wave. 
Black metal varies considerably in style and production quality, although most bands emphasize shrieked and growled vocals, highly distorted guitars frequently played with rapid tremolo picking, a dark atmosphere and intentionally lo-fi production, with ambient noise and background hiss. Satanic themes are common in black metal, though many bands take inspiration from ancient paganism, promoting a return to supposed pre-Christian values. Numerous black metal bands also "experiment with sounds from all possible forms of metal, folk, classical music, electronica and avant-garde". Darkthrone drummer Fenriz explains, "It had something to do with production, lyrics, the way they dressed and a commitment to making ugly, raw, grim stuff. There wasn't a generic sound." Although bands such as Sarcófago had been donning corpsepaint, by 1990 Mayhem was wearing it regularly, and many other black metal acts also adopted the look. Bathory inspired the Viking metal and folk metal movements, and Immortal brought blast beats to the fore. Some bands in the Scandinavian black metal scene became associated with considerable violence in the early 1990s, with Mayhem and Burzum linked to church burnings. Growing commercial hype around death metal generated a backlash; beginning in Norway, much of the Scandinavian metal underground shifted to support a black metal scene that resisted being co-opted by the commercial metal industry. By 1992, black metal scenes had begun to emerge in areas outside Scandinavia, including Germany, France, and Poland. The 1993 murder of Mayhem's Euronymous by Burzum's Varg Vikernes provoked intensive media coverage. Around 1996, when many in the scene felt the genre was stagnating, several key bands, including Burzum and Finland's Beherit, moved toward an ambient style, while symphonic black metal was explored by Sweden's Tiamat and Switzerland's Samael. In the late 1990s and early 2000s, Norway's Dimmu Borgir brought black metal closer to the mainstream, as did Cradle of Filth. During the late 1980s, the power metal scene came together largely in reaction to the harshness of death and black metal. Though a relatively underground style in North America, it enjoys wide popularity in Europe, Japan, and South America. Power metal focuses on upbeat, epic melodies and themes that "appeal to the listener's sense of valor and loveliness". The prototype for the sound was established in the mid-to-late 1980s by Germany's Helloween, which combined the power riffs, melodic approach, and high-pitched, "clean" singing style of bands like Judas Priest and Iron Maiden with thrash's speed and energy, "crystalliz[ing] the sonic ingredients of what is now known as power metal". Traditional power metal bands like Sweden's HammerFall, England's DragonForce, and America's Iced Earth have a sound clearly indebted to the classic NWOBHM style. Many power metal bands such as America's Kamelot, Finnish groups Nightwish, Stratovarius and Sonata Arctica, Italy's Rhapsody of Fire, and Russia's Catharsis feature a keyboard-based "symphonic" sound, sometimes employing orchestras and opera singers. Power metal has built a strong fanbase in Japan and South America, where bands like Brazil's Angra and Argentina's Rata Blanca are popular. Closely related to power metal is progressive metal, which adopts the complex compositional approach of bands like Rush and King Crimson. 
This style emerged in the United States in the early and mid-1980s, with innovators such as Queensrÿche, Fates Warning, and Dream Theater. The mix of the progressive and power metal sounds is typified by New Jersey's Symphony X, whose guitarist Michael Romeo is among the most recognized of latter-day shredders. Emerging in the mid-1980s with such bands as California's Saint Vitus, Maryland's The Obsessed, Chicago's Trouble, and Sweden's Candlemass, the doom metal movement rejected other metal styles' emphasis on speed, slowing its music to a crawl. Doom metal traces its roots to the lyrical themes and musical approach of early Black Sabbath. The Melvins have also been a significant influence on doom metal and a number of its subgenres. Doom emphasizes melody, melancholy tempos, and a sepulchral mood relative to many other varieties of metal. The 1991 release of "Forest of Equilibrium", the debut album by UK band Cathedral, helped spark a new wave of doom metal. During the same period, the doom-death fusion style of British bands Paradise Lost, My Dying Bride, and Anathema gave rise to European gothic metal, with its signature dual-vocalist arrangements, exemplified by Norway's Theatre of Tragedy and Tristania. New York's Type O Negative introduced an American take on the style. In the United States, sludge metal, mixing doom and hardcore, emerged in the late 1980s—Eyehategod and Crowbar were leaders in a major Louisiana sludge scene. Early in the next decade, California's Kyuss and Sleep, inspired by the earlier doom metal bands, spearheaded the rise of stoner metal, while Seattle's Earth helped develop the drone metal subgenre. The late 1990s saw new bands form such as the Los Angeles–based Goatsnake, with a classic stoner/doom sound, and Sunn O))), which crosses lines between doom, drone, and dark ambient metal—the "New York Times" has compared their sound to an "Indian raga in the middle of an earthquake". The era of heavy metal's mainstream dominance in North America came to an end in the early 1990s with the emergence of Nirvana and other grunge bands, signaling the popular breakthrough of alternative rock. Grunge acts were influenced by the heavy metal sound, but rejected the excesses of the more popular metal bands, such as their "flashy and virtuosic solos" and "appearance-driven" MTV orientation. Glam metal fell out of favor not only because of the success of grunge, but also because of the growing popularity of the more aggressive sound typified by Metallica and the post-thrash groove metal of Pantera and White Zombie. In 1991, Metallica released the album "Metallica", also known as "The Black Album", which moved the band's sound out of the thrash metal genre and into standard heavy metal. The album was certified 16× Platinum by the RIAA. A few new, unambiguously metal bands had commercial success during the first half of the decade—Pantera's "Far Beyond Driven" topped the "Billboard" chart in 1994—but, "In the dull eyes of the mainstream, metal was dead". Some bands tried to adapt to the new musical landscape. Metallica revamped its image: the band members cut their hair and, in 1996, headlined the alternative musical festival Lollapalooza, founded by Jane's Addiction singer Perry Farrell. While this prompted a backlash among some long-time fans, Metallica remained one of the most successful bands in the world into the new century. Like Jane's Addiction, many of the most popular early 1990s groups with roots in heavy metal fall under the umbrella term "alternative metal". 
Bands in Seattle's grunge scene such as Soundgarden, credited as making a "place for heavy metal in alternative rock", and Alice in Chains were at the center of the alternative metal movement. The label was applied to a wide spectrum of other acts that fused metal with different styles: Faith No More combined their alternative rock sound with punk, funk, metal, and hip hop; Primus joined elements of funk, punk, thrash metal, and experimental music; Tool mixed metal and progressive rock; bands such as Fear Factory, Ministry and Nine Inch Nails began incorporating metal into their industrial sound, and vice versa; and Marilyn Manson went down a similar route, while also employing shock effects of the sort popularized by Alice Cooper. Alternative metal artists, though they did not represent a cohesive scene, were united by their willingness to experiment with the metal genre and their rejection of glam metal aesthetics (with the stagecraft of Marilyn Manson and White Zombie—also identified with alt-metal—significant, if partial, exceptions). Alternative metal's mix of styles and sounds represented "the colorful results of metal opening up to face the outside world." In the mid- and late 1990s came a new wave of U.S. metal groups inspired by the alternative metal bands and their mix of genres. Dubbed "nu metal", bands such as Slipknot, Linkin Park, Limp Bizkit, Papa Roach, P.O.D., Korn and Disturbed incorporated elements ranging from death metal to hip hop, often including DJs and rap-style vocals. The mix demonstrated that "pancultural metal could pay off". Nu metal gained mainstream success through heavy MTV rotation and Ozzy Osbourne's 1996 introduction of Ozzfest, which led the media to talk of a resurgence of heavy metal. In 1999, "Billboard" noted that there were more than 500 specialty metal radio shows in the United States, nearly three times as many as ten years before. While nu metal was widely popular, traditional metal fans did not fully embrace the style. By early 2003, the movement's popularity was on the wane, though several nu metal acts such as Korn and Limp Bizkit retained substantial followings. Metalcore, a hybrid of extreme metal and hardcore punk, emerged as a commercial force in the mid-2000s. Through the 1980s and 1990s, metalcore was mostly an underground phenomenon; pioneering bands include Earth Crisis, while other prominent bands include Converge, Hatebreed and Shai Hulud. By 2004, melodic metalcore—influenced as well by melodic death metal—was popular enough that Killswitch Engage's "The End of Heartache" and Shadows Fall's "The War Within" debuted at numbers 21 and 20, respectively, on the "Billboard" album chart. Evolving even further from metalcore is mathcore, a more rhythmically complicated and progressive style brought to light by bands such as The Dillinger Escape Plan, Converge, and Protest the Hero. Mathcore's main defining quality is the use of odd time signatures, and it has been described as possessing rhythmic similarities to free jazz. Heavy metal remained popular in the 2000s, particularly in continental Europe. By the new millennium Scandinavia had emerged as one of the areas producing innovative and successful bands, while Belgium, the Netherlands and especially Germany were the most significant markets. Metal music is more favorably embraced in Scandinavia and Northern Europe than in other regions, owing to the social and political openness of these regions. 
Established continental metal bands placed multiple albums in the top 20 of the German charts between 2003 and 2008, including Finnish band Children of Bodom, Norwegian act Dimmu Borgir, Germany's Blind Guardian and Sweden's HammerFall. In the 2000s, an extreme metal fusion genre known as deathcore emerged. Deathcore incorporates elements of death metal, hardcore punk and metalcore. Deathcore features characteristics such as death metal riffs, hardcore punk breakdowns, death growling, "pig squeal"-sounding vocals, and screaming. Deathcore bands include Whitechapel, Suicide Silence, Despised Icon and Carnifex. The term "retro-metal" has been used to describe bands such as Texas-based The Sword, California's High on Fire, Sweden's Witchcraft, and Australia's Wolfmother. The Sword's "Age of Winters" (2006) drew heavily on the work of Black Sabbath and Pentagram, Witchcraft added elements of folk rock and psychedelic rock, and Wolfmother's self-titled 2005 debut album had "Deep Purple-ish organs" and "Jimmy Page-worthy chordal riffing". Mastodon, which plays in a progressive/sludge style, has inspired claims of a metal revival in the United States, dubbed by some critics the "New Wave of American Heavy Metal". By the early 2010s, metalcore was evolving to more frequently incorporate synthesizers and elements from genres beyond rock and metal. The album "Reckless & Relentless" by British band Asking Alexandria (which sold 31,000 copies in its first week) and The Devil Wears Prada's 2011 album "Dead Throne" (which sold 32,400 in its first week) reached numbers 9 and 10, respectively, on the "Billboard" 200 chart. In 2013, British band Bring Me the Horizon released their fourth studio album "Sempiternal" to critical acclaim. The album debuted at number 3 on the UK Album Chart and at number 1 in Australia. The album sold 27,522 copies in the US and charted at number 11 on the US "Billboard" chart, making it their highest-charting release in America until their follow-up album "That's the Spirit" debuted at no. 2 in 2015. Also in the 2010s, a metal style called "djent" developed as a spinoff of standard progressive metal. Djent music uses rhythmic and technical complexity, heavily distorted, palm-muted guitar chords, syncopated riffs and polyrhythms alongside virtuoso soloing. Another typical characteristic is the use of extended-range seven-, eight-, and nine-string guitars. Djent bands include Periphery, Tesseract and Textures. The history of women in heavy metal can be traced back as far as the 1970s, when the band Vixen was formed in 1973. Another hard rock band featuring all-female members, The Runaways, was founded in 1975. While The Runaways may have been lost to time, two former members, Joan Jett and Lita Ford, went on to successful solo rock careers. In 1978, with the rise of the new wave of British heavy metal, the band Girlschool was founded; they are best known for their 1980 collaboration with Motörhead under the pseudonym Headgirl. In 1996, Finnish band Nightwish was founded and has featured women as vocalists. This led to a wave of heavy metal bands featuring women as lead singers, such as Grammy Award winners Halestorm, Delain, Within Temptation, and Epica. Women such as Gaby Hoffmann and Sharon Osbourne have also played important roles behind the scenes. In 1981, Hoffmann helped Don Dokken acquire his first record deal. 
Hoffmann also became the manager of Accept in 1981 and wrote songs under the pseudonym "Deaffy" for many of the band's studio albums. Vocalist Mark Tornillo stated that Hoffmann still had some influence on the songwriting of the band's newer albums. Osbourne, the wife and manager of Ozzy Osbourne, founded the Ozzfest music festival and managed several acts, including Motörhead, Coal Chamber, The Smashing Pumpkins, Electric Light Orchestra, Lita Ford and Queen. 
Henry I of England Henry I (c. 1068 – 1 December 1135), also known as Henry Beauclerc, was King of England from 1100 to his death. Henry was the fourth son of William the Conqueror and was educated in Latin and the liberal arts. On William's death in 1087, Henry's elder brothers Robert Curthose and William Rufus inherited Normandy and England, respectively, but Henry was left landless. Henry purchased the County of Cotentin in western Normandy from Robert, but William and Robert deposed him in 1091. Henry gradually rebuilt his power base in the Cotentin and allied himself with William against Robert. Henry was present when William died in a hunting accident in 1100, and he seized the English throne, promising at his coronation to correct many of William's less popular policies. Henry married Matilda of Scotland but continued to have a large number of mistresses by whom he had many illegitimate children. Robert, who invaded in 1101, disputed Henry's control of England; this military campaign ended in a negotiated settlement that confirmed Henry as king. The peace was short-lived, and Henry invaded the Duchy of Normandy in 1105 and 1106, finally defeating Robert at the Battle of Tinchebray. Henry kept Robert imprisoned for the rest of his life. Henry's control of Normandy was challenged by Louis VI of France, Baldwin VII of Flanders and Fulk V of Anjou, who promoted the rival claims of Robert's son, William Clito, and supported a major rebellion in the Duchy between 1116 and 1119. Following Henry's victory at the Battle of Brémule, a favourable peace settlement was agreed with Louis in 1120. Considered by contemporaries to be a harsh but effective ruler, Henry skilfully manipulated the barons in England and Normandy. In England, he drew on the existing Anglo-Saxon system of justice, local government and taxation, but also strengthened it with additional institutions, including the royal exchequer and itinerant justices. Normandy was also governed through a growing system of justices and an exchequer. Many of the officials who ran Henry's system were "new men" of obscure backgrounds rather than from families of high status, who rose through the ranks as administrators. Henry encouraged ecclesiastical reform, but became embroiled in a serious dispute in 1101 with Archbishop Anselm of Canterbury, which was resolved through a compromise solution in 1105. He supported the Cluniac order and played a major role in the selection of the senior clergy in England and Normandy. Henry's only legitimate son and heir, William Adelin, drowned in the "White Ship" disaster of 1120, throwing the royal succession into doubt. Henry took a second wife, Adeliza of Louvain, in the hope of having another son, but their marriage was childless. In response to this, Henry declared his daughter, Empress Matilda, his heir and married her to Geoffrey of Anjou. The relationship between Henry and the couple became strained, and fighting broke out along the border with Anjou. Henry died on 1 December 1135 after a week of illness. 
Despite his plans for Matilda, the King was succeeded by his nephew, Stephen of Blois, resulting in a period of civil war known as the Anarchy. Henry was probably born in England in 1068, in either the summer or the last weeks of the year, possibly in the town of Selby in Yorkshire. His father was William the Conqueror, the Duke of Normandy, who had invaded England in 1066 to become the King of England, establishing lands stretching into Wales. The invasion had created an Anglo-Norman elite, many with estates spread across both sides of the English Channel. These Anglo-Norman barons typically had close links to the kingdom of France, which was then a loose collection of counties and smaller polities, under only the minimal control of the king. Henry's mother, Matilda of Flanders, was the granddaughter of Robert II of France, and she probably named Henry after her uncle, King Henry I of France. Henry was the youngest of William and Matilda's four sons. Physically he resembled his older brothers Robert Curthose, Richard and William Rufus, being, as historian David Carpenter describes, "short, stocky and barrel-chested," with black hair. As a result of their age differences and Richard's early death, Henry would have probably seen relatively little of his older brothers. He probably knew his sister, Adela, well, as the two were close in age. There is little documentary evidence for his early years; historians Warren Hollister and Kathleen Thompson suggest he was brought up predominantly in England, while Judith Green argues he was initially brought up in the Duchy. He was probably educated by the Church, possibly by Bishop Osmund, the King's chancellor, at Salisbury Cathedral; it is uncertain if this indicated an intent by his parents for Henry to become a member of the clergy. It is also uncertain how far Henry's education extended, but he was probably able to read Latin and had some background in the liberal arts. He was given military training by an instructor called Robert Achard, and Henry was knighted by his father on 24 May 1086. In 1087, William was fatally injured during a campaign in the Vexin. Henry joined his dying father near Rouen in September, where the King partitioned his possessions among his sons. The rules of succession in western Europe at the time were uncertain; in some parts of France, primogeniture, in which the eldest son would inherit a title, was growing in popularity. In other parts of Europe, including Normandy and England, the tradition was for lands to be divided up, with the eldest son taking patrimonial lands – usually considered to be the most valuable – and younger sons given smaller, or more recently acquired, partitions or estates. In dividing his lands, William appears to have followed the Norman tradition, distinguishing between Normandy, which he had inherited, and England, which he had acquired through war. William's second son, Richard, had died in a hunting accident, leaving Henry and his two brothers to inherit William's estate. Robert, the eldest, despite being in armed rebellion against his father at the time of his death, received Normandy. England was given to William Rufus, who was in favour with the dying king. Henry was given a large sum of money, usually reported as £5,000, with the expectation that he would also be given his mother's modest set of lands in Buckinghamshire and Gloucestershire. 
William's funeral at Caen was marred by angry complaints from a local man, and Henry may have been responsible for resolving the dispute by buying off the protester with silver. Robert returned to Normandy, expecting to have been given both the Duchy and England, to find that William Rufus had crossed the Channel and been crowned king, as William II. The two brothers disagreed fundamentally over the inheritance, and Robert soon began to plan an invasion of England to seize the kingdom, helped by a rebellion by some of the leading nobles against William Rufus. Henry remained in Normandy and took up a role within Robert's court, possibly either because he was unwilling to side openly with William Rufus, or because Robert might have taken the opportunity to confiscate Henry's inherited money if he had tried to leave. William Rufus sequestered Henry's new estates in England, leaving Henry landless. In 1088, Robert's plans for the invasion of England began to falter, and he turned to Henry, proposing that his brother lend him some of his inheritance, which Henry refused. Henry and Robert then came to an alternative arrangement, in which Robert would make Henry the count of western Normandy, in exchange for £3,000. Henry's lands were a new countship based around a delegation of the ducal authority in the Cotentin, but it extended across the Avranchin, with control over the bishoprics of both. This also gave Henry influence over two major Norman leaders, Hugh d'Avranches and Richard de Redvers, and the abbey of Mont Saint-Michel, whose lands spread out further across the Duchy. Robert's invasion force failed to leave Normandy, leaving William Rufus secure in England. Henry quickly established himself as count, building up a network of followers from western Normandy and eastern Brittany, whom historian John Le Patourel has characterised as "Henry's gang". His early supporters included Roger of Mandeville, Richard of Redvers, Richard d'Avranches and Robert Fitzhamon, along with the churchman Roger of Salisbury. Robert attempted to go back on his deal with Henry and re-appropriate the county, but Henry's grip was already sufficiently firm to prevent this. Robert's rule of the Duchy was chaotic, and parts of Henry's lands became almost independent of central control from Rouen. During this period, neither William nor Robert seems to have trusted Henry. Waiting until the rebellion against William Rufus was safely over, Henry returned to England in July 1088. He met with the King but was unable to persuade him to grant him their mother's estates, and travelled back to Normandy in the autumn. While he had been away, however, Odo, the Bishop of Bayeux, who regarded Henry as a potential competitor, had convinced Robert that Henry was conspiring against the duke with William Rufus. On landing, Odo seized Henry and imprisoned him in Neuilly-la-Forêt, and Robert took back the county of the Cotentin. Henry was held there over the winter, but in the spring of 1089 the senior elements of the Normandy nobility prevailed upon Robert to release him. Although no longer formally the Count of Cotentin, Henry continued to control the west of Normandy. The struggle between Henry's brothers continued. William Rufus continued to put down resistance to his rule in England, but began to build a number of alliances against Robert with barons in Normandy and neighbouring Ponthieu. Robert allied himself with Philip I of France. 
In late 1090 William Rufus encouraged Conan Pilatus, a powerful burgher in Rouen, to rebel against Robert; Conan was supported by most of Rouen and made appeals to the neighbouring ducal garrisons to switch allegiance as well. Robert issued an appeal for help to his barons, and Henry was the first to arrive in Rouen in November. Violence broke out, leading to savage, confused street fighting as both sides attempted to take control of the city. Robert and Henry left the castle to join the battle, but Robert then retreated, leaving Henry to continue the fighting. The battle turned in favour of the ducal forces and Henry took Conan prisoner. Henry was angry that Conan had turned against his feudal lord. He had him taken to the top of Rouen Castle and then, despite Conan's offers to pay a huge ransom, threw him off the top of the castle to his death. Contemporaries considered Henry to have acted appropriately in making an example of Conan, and Henry became famous for his exploits in the battle. In the aftermath, Robert forced Henry to leave Rouen, probably because Henry's role in the fighting had been more prominent than his own, and possibly because Henry had asked to be formally reinstated as the count of the Cotentin. In early 1091, William Rufus invaded Normandy with a sufficiently large army to bring Robert to the negotiating table. The two brothers signed a treaty at Rouen, granting William Rufus a range of lands and castles in Normandy. In return, William Rufus promised to support Robert's attempts to regain control of the neighbouring county of Maine, once under Norman control, and help in regaining control over the Duchy, including Henry's lands. They nominated each other as heirs to England and Normandy, excluding Henry from any succession while either one of them lived. War now broke out between Henry and his brothers. Henry mobilised a mercenary army in the west of Normandy, but as William Rufus and Robert's forces advanced, his network of baronial support melted away. Henry focused his remaining forces at Mont Saint-Michel, where he was besieged, probably in March 1091. The site was easy to defend, but lacked fresh water. The chronicler William of Malmesbury suggested that when Henry's water ran short, Robert allowed his brother fresh supplies, leading to remonstrations between Robert and William Rufus. The events of the final days of the siege are unclear: the besiegers had begun to argue about the future strategy for the campaign, but Henry then abandoned Mont Saint-Michel, probably as part of a negotiated surrender. He left for Brittany and crossed over into France. Henry's next steps are not well documented; one chronicler, Orderic Vitalis, suggests that he travelled in the French Vexin, along the Normandy border, for over a year with a small band of followers. By the end of the year, Robert and William Rufus had fallen out once again, and the Treaty of Rouen had been abandoned. In 1092, Henry and his followers seized the Normandy town of Domfront. Domfront had previously been controlled by Robert of Bellême, but the inhabitants disliked his rule and invited Henry to take over the town, which he did in a bloodless coup. Over the next two years, Henry re-established his network of supporters across western Normandy, forming what Judith Green terms a "court in waiting". By 1094, he was allocating lands and castles to his followers as if he were the Duke of Normandy. 
William Rufus began to support Henry with money, encouraging his campaign against Robert, and Henry used some of this to construct a substantial castle at Domfront. William Rufus crossed into Normandy to take the war to Robert in 1094, and when progress stalled, called upon Henry for assistance. Henry responded, but travelled to London instead of joining the main campaign further east in Normandy, possibly at the request of the King, who in any event abandoned the campaign and returned to England. Over the next few years, Henry appears to have strengthened his power base in western Normandy, visiting England occasionally to attend at William Rufus's court. In 1095 Pope Urban II called the First Crusade, encouraging knights from across Europe to join. Robert joined the Crusade, borrowing money from William Rufus to do so, and granting the King temporary custody of his part of the Duchy in exchange. The King appeared confident of regaining the remainder of Normandy from Robert, and Henry appeared ever closer to William Rufus, the pair campaigning together in the Norman Vexin between 1097 and 1098. Henry became King of England following the death of William Rufus, who had been shot while hunting. On the afternoon of 2 August 1100, the King had gone hunting in the New Forest, accompanied by a team of huntsmen and a number of the Norman nobility, including Henry. An arrow was fired, possibly by the baron Walter Tirel, which hit and killed William Rufus. Numerous conspiracy theories have been put forward suggesting that the King was killed deliberately; most modern historians reject these, as hunting was a risky activity, and such accidents were common. Chaos broke out, and Tirel fled the scene for France, either because he had fired the fatal shot, or because he had been incorrectly accused and feared that he would be made a scapegoat for the King's death. Henry rode to Winchester, where an argument ensued as to who now had the best claim to the throne. William of Breteuil championed the rights of Robert, who was still abroad, returning from the Crusade, and to whom Henry and the barons had given homage in previous years. Henry argued that, unlike Robert, he had been born to a reigning king and queen, thereby giving him a claim under the right of porphyrogeniture. Tempers flared, but Henry, supported by Henry de Beaumont and Robert of Meulan, held sway and persuaded the barons to follow him. He occupied Winchester Castle and seized the royal treasury. Henry was hastily crowned king in Westminster Abbey on 5 August by Maurice, the Bishop of London, as Anselm, the Archbishop of Canterbury, had been exiled by William Rufus, and Thomas, the Archbishop of York, was in the north of England at Ripon. In accordance with English tradition and in a bid to legitimise his rule, Henry issued a coronation charter laying out various commitments. The new king presented himself as having restored order to a trouble-torn country. He announced that he would abandon William Rufus's policies towards the Church, which had been seen as oppressive by the clergy; he promised to prevent royal abuses of the barons' property rights, and assured a return to the gentler customs of Edward the Confessor; he asserted that he would "establish a firm peace" across England and ordered "that this peace shall henceforth be kept". In addition to his existing circle of supporters, many of whom were richly rewarded with new lands, Henry quickly co-opted many of the existing administration into his new royal household. 
William Giffard, William Rufus's chancellor, was made the Bishop of Winchester, and the prominent sheriffs Urse d'Abetot, Haimo Dapifer and Robert Fitzhamon continued to play a senior role in government. By contrast, the unpopular Ranulf Flambard, the Bishop of Durham and a key member of the previous regime, was imprisoned in the Tower of London and charged with corruption. The late king had left many church positions unfilled, and Henry set about nominating candidates to these, in an effort to build further support for his new government. The appointments needed to be consecrated, and Henry wrote to Anselm, apologising for having been crowned while the Archbishop was still in France and asking him to return at once. On 11 November 1100 Henry married Matilda, the daughter of Malcolm III of Scotland. Henry was now around 31 years old, but late marriages for noblemen were not unusual in the 11th century. The pair had probably first met earlier in the previous decade, possibly being introduced through Bishop Osmund of Salisbury. Historian Warren Hollister argues that Henry and Matilda were emotionally close, but their union was also certainly politically motivated. Matilda had originally been named Edith, an Anglo-Saxon name, and was a member of the West Saxon royal family, being the niece of Edgar the Ætheling, the great-granddaughter of Edmund Ironside and a descendant of Alfred the Great. For Henry, marrying Matilda gave his reign increased legitimacy, and for Matilda, an ambitious woman, it was an opportunity for high status and power in England. Matilda had been educated in a sequence of convents, however, and may well have taken the vows to formally become a nun, which formed an obstacle to the marriage progressing. She did not wish to be a nun and appealed to Anselm for permission to marry Henry, and the Archbishop established a council at Lambeth Palace to judge the issue. Despite some dissenting voices, the council concluded that although Matilda had lived in a convent, she had not actually become a nun and was therefore free to marry, a judgement that Anselm then affirmed, allowing the marriage to proceed. Matilda proved an effective queen for Henry, acting as a regent in England on occasion, addressing and presiding over councils, and extensively supporting the arts. The couple soon had two children, Matilda, born in 1102, and William Adelin, born in 1103; it is possible that they also had a second son, Richard, who died young. Following the birth of these children, Matilda preferred to remain based in Westminster while Henry travelled across England and Normandy, either for religious reasons or because she enjoyed being involved in the machinery of royal governance. Henry had a considerable sexual appetite and enjoyed a substantial number of sexual partners, resulting in a large number of illegitimate children, at least nine sons and 13 daughters, many of whom he appears to have recognised and supported. It was normal for unmarried Anglo-Norman noblemen to have sexual relations with prostitutes and local women, and kings were also expected to have mistresses. Some of these relationships occurred before Henry was married, but many others took place after his marriage to Matilda. Henry's mistresses came from a wide range of backgrounds, and the relationships appear to have been conducted relatively openly. He may have chosen some of his noble mistresses for political purposes, but the evidence to support this theory is limited. 
By early 1101, Henry's new regime was established and functioning, but many of the Anglo-Norman elite still supported Robert, or would be prepared to switch sides if Henry's elder brother appeared likely to gain power in England. In February, Flambard escaped from the Tower of London and crossed the Channel to Normandy, where he injected fresh direction and energy into Robert's attempts to mobilise an invasion force. By July, Robert had formed an army and a fleet, ready to move against Henry in England. Raising the stakes in the conflict, Henry seized Flambard's lands and, with the support of Anselm, had Flambard removed from his position as bishop. Henry held court in April and June, where the nobility renewed their oaths of allegiance to him, but their support still appeared partial and shaky. With the invasion imminent, Henry mobilised his forces and fleet outside Pevensey, close to Robert's anticipated landing site, training some of them personally in how to counter cavalry charges. Despite English levies and knights owing military service to the Church arriving in considerable numbers, many of his barons did not appear. Anselm intervened with some of the doubters, emphasising the religious importance of their loyalty to Henry. Robert unexpectedly landed further up the coast at Portsmouth on 20 July with a modest force of a few hundred men, but these were quickly joined by many of the barons in England. However, instead of marching into nearby Winchester and seizing Henry's treasury, Robert paused, giving Henry time to march west and intercept the invasion force. The two armies met at Alton, where peace negotiations began, possibly initiated by either Henry or Robert, and probably supported by Flambard. The brothers then agreed to the Treaty of Alton, under which Robert released Henry from his oath of homage and recognised him as king; Henry renounced his claims on western Normandy, except for Domfront, and agreed to pay Robert £2,000 a year for life; if either brother died without a male heir, the other would inherit his lands; the barons whose lands had been seized by either the King or the Duke for supporting his rival would have them returned, and Flambard would be reinstated as bishop; and the two brothers would campaign together to defend their territories in Normandy. Robert remained in England for a few months more with Henry before returning to Normandy. Despite the treaty, Henry set about inflicting severe penalties on the barons who had stood against him during the invasion. William de Warenne, the Earl of Surrey, was accused of fresh crimes, which were not covered by the Alton amnesty, and was banished from England. In 1102 Henry turned against Robert of Bellême and his brothers, the most powerful of the barons, accusing him of 45 different offences. Robert escaped and took up arms against Henry. Henry besieged Robert's castles at Arundel, Tickhill and Shrewsbury, pushing down into the south-west to attack Bridgnorth. His power base in England broken, Robert accepted Henry's offer of banishment and left the country for Normandy. Henry's network of allies in Normandy became stronger during 1103. Henry married Juliana, one of his illegitimate daughters, to Eustace of Breteuil, and another illegitimate daughter, Matilda, to Rotrou, the Count of Perche, on the Normandy border. Henry attempted to win over other members of the Normandy nobility and gave other English estates and lucrative offers to key Norman lords. 
Duke Robert continued to fight Robert of Bellême, but the Duke's position worsened, until by 1104, he had to ally himself formally with Bellême to survive. Arguing that Duke Robert had broken the terms of their treaty, Henry crossed over the Channel to Domfront, where he met with senior barons from across Normandy, eager to ally themselves with the King. Henry confronted his brother and accused him of siding with his enemies, before returning to England. Normandy continued to disintegrate into chaos. In 1105, Henry sent his friend Robert Fitzhamon and a force of knights into the Duchy, apparently to provoke a confrontation with Duke Robert. Fitzhamon was captured, and Henry used this as an excuse to invade, promising to restore peace and order. Henry had the support of most of the neighbouring counts around Normandy's borders, and King Philip of France was persuaded to remain neutral. Henry occupied western Normandy, and advanced east on Bayeux, where Fitzhamon was held. The city refused to surrender, and Henry besieged it, burning it to the ground. Terrified of meeting the same fate, the town of Caen switched sides and surrendered, allowing Henry to advance on Falaise, which he took with some casualties. Henry's campaign stalled, and the King instead began peace discussions with Robert. The negotiations were inconclusive and the fighting dragged on until Christmas, when Henry returned to England. Henry invaded again in July 1106, hoping to provoke a decisive battle. After some initial tactical successes, he turned south-west towards the castle of Tinchebray. He besieged the castle and Duke Robert, supported by Robert of Bellême, advanced from Falaise to relieve it. After attempts at negotiation failed, the Battle of Tinchebray took place, probably on 28 September. The battle lasted around an hour, and began with a charge by Duke Robert's cavalry; the infantry and dismounted knights of both sides then joined the battle. Henry's reserves, led by Elias, the Count of Maine and Alan, the Duke of Brittany, attacked the enemy's flanks, routing first Bellême's troops and then the bulk of the ducal forces. Duke Robert was taken prisoner, but Bellême escaped. Henry mopped up the remaining resistance in Normandy, and Robert ordered his last garrisons to surrender. Reaching Rouen, Henry reaffirmed the laws and customs of Normandy and took homage from the leading barons and citizens. The lesser prisoners taken at Tinchebray were released, but Robert and several other leading nobles were imprisoned indefinitely. Henry's nephew, Robert's son William Clito, was only three years old and was released to the care of Helias of Saint-Saens, a Norman baron. Henry reconciled himself with Robert of Bellême, who gave up the ducal lands he had seized and rejoined the royal court. Henry had no way of legally removing the Duchy from his brother Robert, and initially Henry avoided using the title "duke" at all, emphasising that, as the King of England, he was only acting as the guardian of the troubled Duchy. Henry inherited the kingdom of England from William Rufus, giving him a claim of suzerainty over Wales and Scotland, and acquired the Duchy of Normandy, a complex entity with troubled borders. The borders between England and Scotland were still uncertain during Henry's reign, with Anglo-Norman influence pushing northwards through Cumbria, but Henry's relationship with King David I of Scotland was generally good, partially due to Henry's marriage to his sister. 
In Wales, Henry used his power to coerce and charm the indigenous Welsh princes, while Norman Marcher Lords pushed across the valleys of South Wales. Normandy was controlled via various interlocking networks of ducal, ecclesiastical and family contacts, backed by a growing string of important ducal castles along the borders. Alliances and relationships with neighbouring counties along the Norman border were particularly important to maintaining the stability of the Duchy. Henry ruled through the various barons and lords in England and Normandy, whom he manipulated skilfully for political effect. Political friendships, termed "amicitia" in Latin, were important during the 12th century, and Henry maintained a wide range of these, mediating between his friends in various factions across his realm when necessary, and rewarding those who were loyal to him. Henry also had a reputation for punishing those barons who stood against him, and he maintained an effective network of informers and spies who reported to him on events. Henry was a harsh, firm ruler, but not excessively so by the standards of the day. Over time, he increased the degree of his control over the barons, removing his enemies and bolstering his friends until the "reconstructed baronage", as historian Warren Hollister describes it, was predominantly loyal and dependent on the King. Henry's itinerant royal court comprised various parts. At the heart was Henry's domestic household, called the "domus"; a wider grouping was termed the "familia regis", and formal gatherings of the court were termed "curia". The "domus" was divided into several parts. The chapel, headed by the chancellor, looked after the royal documents, the chamber dealt with financial affairs and the master-marshal was responsible for travel and accommodation. The "familia regis" included Henry's mounted household troops, up to several hundred strong, who came from a wider range of social backgrounds, and could be deployed across England and Normandy as required. Initially Henry continued his father's practice of regular crown-wearing ceremonies at his "curia", but they became less frequent as the years passed. Henry's court was grand and ostentatious, financing the construction of large new buildings and castles with a range of precious gifts on display, including the King's private menagerie of exotic animals, which he kept at Woodstock Palace. Despite being a lively community, Henry's court was more tightly controlled than those of previous kings. Strict rules controlled personal behaviour and prohibited members of the court from pillaging neighbouring villages, as had been the norm under William Rufus. Henry was responsible for a substantial expansion of the royal justice system. In England, Henry drew on the existing Anglo-Saxon system of justice, local government and taxes, but strengthened it with additional central governmental institutions. Roger of Salisbury began to develop the royal exchequer after 1110, using it to collect and audit revenues from the King's sheriffs in the shires. Itinerant justices began to emerge under Henry, travelling around the country managing eyre courts, and many more laws were formally recorded. Henry gathered increasing revenue from the expansion of royal justice, both from fines and from fees. The first Pipe Roll that is known to have survived dates from 1130, recording royal expenditures. 
Henry reformed the coinage in 1107, 1108 and 1125, inflicting harsh corporal punishments on English coiners found guilty of debasing the currency. In Normandy, Henry restored law and order after 1106, operating through a body of Norman justices and an exchequer system similar to that in England. Norman institutions grew in scale and scope under Henry, although less quickly than in England. Many of the officials who ran Henry's system were termed "new men", relatively low-born individuals who rose through the ranks as administrators, managing justice or the royal revenues. Henry's ability to govern was intimately bound up with the Church, which formed the key to the administration of both England and Normandy, and this relationship changed considerably over the course of his reign. William the Conqueror had reformed the English Church with the support of his Archbishop of Canterbury, Lanfranc, who became a close colleague and advisor to the King. Under William Rufus this arrangement had collapsed: the King and Archbishop Anselm had become estranged, and Anselm had gone into exile. Henry also believed in Church reform, but on taking power in England he became embroiled in the investiture controversy. The argument concerned who should invest a new bishop with his staff and ring: traditionally, this had been carried out by the king in a symbolic demonstration of royal power, but Pope Urban II had condemned this practice in 1099, arguing that only the papacy could carry out this task, and declaring that the clergy should not give homage to their local temporal rulers. Anselm returned to England from exile in 1100 having heard Urban's pronouncement, and informed Henry that he would be complying with the Pope's wishes. Henry was in a difficult position. On one hand, the symbolism and homage were important to him; on the other hand, he needed Anselm's support in his struggle with his brother Duke Robert. Anselm stuck firmly to the letter of the papal decree, despite Henry's attempts to persuade him to give way in return for a vague assurance of a future royal compromise. Matters escalated, with Anselm going back into exile and Henry confiscating the revenues of his estates. Anselm threatened excommunication, and in July 1105 the two men finally negotiated a solution. A distinction was drawn between the secular and ecclesiastical powers of the prelates, under which Henry gave up his right to invest his clergy, but retained the custom of requiring them to come and do homage for the temporalities, the landed properties they held in England. Despite this argument, the pair worked closely together, combining to deal with Duke Robert's invasion of 1101, for example, and holding major reforming councils in 1102 and 1108. A long-running dispute between the Archbishops of Canterbury and York flared up under Anselm's successor, Ralph d'Escures. Canterbury, traditionally the senior of the two establishments, had long argued that the Archbishop of York should formally promise to obey its Archbishop, but York argued that the two episcopates were independent within the English Church and that no such promise was necessary. Henry supported the primacy of Canterbury, to ensure that England remained under a single ecclesiastical administration, but the Pope preferred the case of York. The matter was complicated by Henry's personal friendship with Thurstan, the Archbishop of York, and the King's desire that the case should not end up in a papal court, beyond royal control. 
Henry badly needed the support of the Papacy in his struggle with Louis of France, however, and therefore allowed Thurstan to attend the Council of Rheims in 1119, where Thurstan was then consecrated by the Pope with no mention of any duty towards Canterbury. Henry believed that this went against assurances Thurstan had previously made and exiled him from England until the King and Archbishop came to a negotiated solution the following year. Even after the investiture dispute, the King continued to play a major role in the selection of new English and Norman bishops and archbishops. Henry appointed many of his officials to bishoprics and, as historian Martin Brett suggests, "some of his officers could look forward to a mitre with all but absolute confidence". Henry's chancellors, and those of his queens, became bishops of Durham, Hereford, London, Lincoln, Winchester and Salisbury. Henry increasingly drew on a wider range of these bishops as advisors – particularly Roger of Salisbury – breaking with the earlier tradition of relying primarily on the Archbishop of Canterbury. The result was a cohesive body of administrators through which Henry could exercise careful influence, holding general councils to discuss key matters of policy. This stability shifted slightly after 1125, when Henry began to inject a wider range of candidates into the senior positions of the Church, often with more reformist views, and the impact of this generation would be felt in the years after Henry's death. Like other rulers of the period, Henry donated to the Church and patronised various religious communities, but contemporary chroniclers did not consider him an unusually pious king. His personal beliefs and piety may, however, have developed during the course of his life. Henry had always taken an interest in religion, but in his later years he may have become much more concerned about spiritual affairs. If so, the major shifts in his thinking would appear to have occurred after 1120, when his son William Adelin died, and 1129, when his daughter's marriage teetered on the verge of collapse. As a proponent of religious reform, Henry gave extensively to reformist groups within the Church. He was a keen supporter of the Cluniac order, probably for intellectual reasons. He donated money to the abbey at Cluny itself, and after 1120 gave generously to Reading Abbey, a Cluniac establishment. Construction on Reading began in 1121, and Henry endowed it with rich lands and extensive privileges, making it a symbol of his dynastic lines. He also focused effort on promoting the conversion of communities of clerks into Augustinian canons, the foundation of leper hospitals, expanding the provision of nunneries, and the charismatic orders of the Savigniacs and Tironensians. He was an avid collector of relics, sending an embassy to Constantinople in 1118 to collect Byzantine items, some of which were donated to Reading Abbey. Normandy faced an increased threat from France, Anjou and Flanders after 1108. Louis VI succeeded to the French throne in 1108 and began to reassert central royal power. Louis demanded Henry give homage to him and that two disputed castles along the Normandy border be placed into the control of neutral castellans. Henry refused, and Louis responded by mobilising an army. After some arguments, the two kings negotiated a truce and retreated without fighting, leaving the underlying issues unresolved. Fulk V assumed power in Anjou in 1109 and began to rebuild Angevin authority. 
Fulk also inherited the county of Maine, but refused to recognise Henry as his feudal lord and instead allied himself with Louis. Robert II of Flanders also briefly joined the alliance, before his death in 1111. In 1108, Henry betrothed his six-year-old daughter, Matilda, to Henry V, the future Holy Roman Emperor. For King Henry, this was a prestigious match; for Henry V, it was an opportunity to restore his financial situation and fund an expedition to Italy, as he received a dowry of £6,666 from England and Normandy. Raising this money proved challenging, and required the implementation of a special "aid", or tax, in England. Matilda was crowned Henry V's queen in 1110. Henry responded to the French and Angevin threat by expanding his own network of supporters beyond the Norman borders. Some Norman barons deemed unreliable were arrested or dispossessed, and Henry used their forfeited estates to bribe his potential allies in the neighbouring territories, in particular Maine. Around 1110, Henry attempted to arrest the young William Clito, but William's mentors moved him to the safety of Flanders before he could be taken. At about this time, Henry probably began to style himself as the Duke of Normandy. Robert of Bellême turned against Henry once again, and when he appeared at Henry's court in 1112 in a new role as a French ambassador, he was arrested and imprisoned. Rebellions broke out in France and Anjou between 1111 and 1113, and Henry crossed into Normandy to support his nephew, Count Theobald of Blois, who had sided against Louis in the uprising. In a bid to diplomatically isolate the French King, Henry betrothed his young son, William Adelin, to Fulk's daughter Matilda, and married his illegitimate daughter Matilda to Conan III, the Duke of Brittany, creating alliances with Anjou and Brittany respectively. Louis backed down and in March 1113 met with Henry near Gisors to agree a peace settlement, giving Henry the disputed fortresses and confirming Henry's overlordship of Maine, Bellême and Brittany. Meanwhile, the situation in Wales was deteriorating. Henry had conducted a campaign in South Wales in 1108, pushing out royal power in the region and colonising the area around Pembroke with Flemings. By 1114, some of the resident Norman lords were under attack, while in Mid-Wales, Owain ap Cadwgan blinded one of the political hostages he was holding, and in North Wales Gruffudd ap Cynan threatened the power of the Earl of Chester. Henry sent three armies into Wales that year, with Gilbert Fitz Richard leading a force from the south, Alexander, King of Scotland, pressing from the north and Henry himself advancing into Mid-Wales. Owain and Gruffudd sued for peace, and Henry accepted a political compromise. Henry reinforced the Welsh Marches with his own appointees, strengthening the border territories. Concerned about the succession, Henry sought to persuade Louis VI to accept his son, William Adelin, as the legitimate future Duke of Normandy, in exchange for his son's homage. Henry crossed into Normandy in 1115 and assembled the Norman barons to swear loyalty; he also almost successfully negotiated a settlement with King Louis, affirming William's right to the Duchy in exchange for a large sum of money, but the deal fell through and Louis, backed by his ally Baldwin of Flanders, instead declared that he considered William Clito the legitimate heir to the Duchy. War broke out after Henry returned to Normandy with an army to support Theobald of Blois, who was under attack from Louis. 
Henry and Louis raided each other's towns along the border, and a wider conflict then broke out, probably in 1116. Henry was pushed onto the defensive as French, Flemish and Angevin forces began to pillage the Normandy countryside. Amaury III of Montfort and many other barons rose up against Henry, and there was an assassination plot from within his own household. Henry's wife, Matilda, died in early 1118, but the situation in Normandy was sufficiently pressing that Henry was unable to return to England for her funeral. Henry responded by mounting campaigns against the rebel barons and deepening his alliance with Theobald. Baldwin of Flanders was wounded in battle and died in September 1118, easing the pressure on Normandy from the north-east. Henry attempted to crush a revolt in the city of Alençon, but was defeated by Fulk and the Angevin army. Forced to retreat from Alençon, Henry's position deteriorated alarmingly, as his resources became overstretched and more barons abandoned his cause. Early in 1119, Eustace of Breteuil and Henry's daughter, Juliana, threatened to join the baronial revolt. Hostages were exchanged in a bid to avoid conflict, but relations broke down and both sides mutilated their captives. Henry attacked and took the town of Breteuil, despite Juliana's attempt to kill her father with a crossbow. In the aftermath, Henry dispossessed the couple of almost all of their lands in Normandy. Henry's situation improved in May 1119 when he enticed Fulk to switch sides by finally agreeing to marry William Adelin to Fulk's daughter, Matilda, and paying Fulk a large sum of money. Fulk left for the Levant, leaving the County of Maine in Henry's care, and the King was free to focus on crushing his remaining enemies. During the summer Henry advanced into the Norman Vexin, where he encountered Louis's army, resulting in the Battle of Brémule. Henry appears to have deployed scouts and then organised his troops into several carefully formed lines of dismounted knights. Unlike Henry's forces, the French knights remained mounted; they hastily charged the Anglo-Norman positions, breaking through the first rank of the defences but then becoming entangled in Henry's second line of knights. Surrounded, the French army began to collapse. In the melee, Henry was hit by a sword blow, but his armour protected him. Louis and William Clito escaped from the battle, leaving Henry to return to Rouen in triumph. The war slowly petered out after this battle, and Louis took the dispute over Normandy to Pope Callixtus II's council in Reims that October. Henry faced a number of French complaints concerning his acquisition and subsequent management of Normandy, and despite being defended by Geoffrey, the Archbishop of Rouen, Henry's case was shouted down by the pro-French elements of the council. Callixtus declined to support Louis, however, and merely advised the two rulers to seek peace. Amaury de Montfort came to terms with Henry, but Henry and William Clito failed to find a mutually satisfactory compromise. In June 1120, Henry and Louis formally made peace on terms advantageous to the King of England: William Adelin gave homage to Louis, and in return Louis confirmed William's rights to the Duchy. Henry's succession plans were thrown into chaos by the sinking of the "White Ship" on 25 November 1120. Henry had left the port of Barfleur for England in the early evening, leaving William Adelin and many of the younger members of the court to follow on that night in a separate vessel, the "White Ship". 
Both the crew and passengers were drunk and, just outside the harbour, the ship hit a submerged rock. The ship sank, killing as many as 300 people, with only one survivor, a butcher from Rouen. Henry's court was initially too scared to report William's death to the King. When he was finally told, he collapsed with grief. The disaster left Henry with no legitimate son, his various nephews now the closest male heirs. Henry announced he would take a new wife, Adeliza of Louvain, opening up the prospect of a new royal son, and the two were married at Windsor Castle in January 1121. Henry appears to have chosen her because she was attractive and came from a prestigious noble line. Adeliza seems to have been fond of Henry and joined him in his travels, probably to maximise the chances of her conceiving a child. The "White Ship" disaster initiated fresh conflict in Wales, where the drowning of Richard, Earl of Chester, encouraged a rebellion led by Maredudd ap Bleddyn. Henry intervened in North Wales that summer with an army and, although the King was hit by a Welsh arrow, the campaign reaffirmed royal power across the region. With William dead, Henry's alliance with Anjou – which had been based on his son marrying Fulk's daughter – began to disintegrate. Fulk returned from the Levant and demanded that Henry return Matilda and her dowry, a range of estates and fortifications in Maine. Matilda left for Anjou, but Henry argued that the dowry had in fact originally belonged to him before it came into the possession of Fulk, and so declined to hand the estates back to Anjou. Fulk married his daughter Sibylla to William Clito, and granted them Maine. Once again, conflict broke out, as Amaury de Montfort allied himself with Fulk and led a revolt along the Norman-Anjou border in 1123. Amaury was joined by several other Norman barons, headed by Waleran de Beaumont, one of the sons of Henry's old ally, Robert of Meulan. Henry dispatched Robert of Gloucester and Ranulf le Meschin to Normandy and then intervened himself in late 1123. Henry began the process of besieging the rebel castles, before wintering in the Duchy. In the spring, campaigning began again. Ranulf received intelligence that the rebels were returning to one of their bases at Vatteville, allowing him to ambush them en route at Rougemontiers; Waleran charged the royal forces, but his knights were cut down by Ranulf's archers and the rebels were quickly overwhelmed. Waleran was captured, but Amaury escaped. Henry mopped up the remainder of the rebellion, blinding some of the rebel leaders – considered, at the time, a more merciful punishment than execution – and recovering the last rebel castles. Henry paid Pope Callixtus a large amount of money in exchange for the Papacy annulling the marriage of William Clito and Sibylla on the grounds of consanguinity. Henry and his new wife did not conceive any children, generating prurient speculation as to the possible explanation, and the future of the dynasty appeared at risk. Henry may have begun to look among his nephews for a possible heir. He may have considered Stephen of Blois as a possible option and, perhaps in preparation for this, he arranged a beneficial marriage for Stephen to a wealthy heiress, Matilda. Theobald of Blois, his close ally, may have also felt that he was in favour with Henry. William Clito, who was King Louis's preferred choice, remained opposed to Henry and was therefore unsuitable.
Henry may have also considered his own illegitimate son, Robert of Gloucester, as a possible candidate, but English tradition and custom would have looked unfavourably on this. Henry's plans shifted when the Empress Matilda's husband, the Emperor Henry, died in 1125. King Henry recalled his daughter to England the next year and declared that, should he die without a male heir, she was to be his rightful successor. The Anglo-Norman barons were gathered together at Westminster at Christmas 1126, where they swore to recognise Matilda and any future legitimate heir she might have. Putting forward a woman as a potential heir in this way was unusual: opposition to Matilda continued to exist within the English court, and Louis was vehemently opposed to her candidacy. Fresh conflict broke out in 1127, when Charles, the childless Count of Flanders, was murdered, creating a local succession crisis. Backed by King Louis, William Clito was chosen by the Flemings to become their new ruler. This development potentially threatened Normandy, and Henry began to finance a proxy war in Flanders, promoting the claims of William's Flemish rivals. In an effort to disrupt the French alliance with William, Henry mounted an attack into France in 1128, forcing Louis to cut his aid to William. William died unexpectedly in July, removing the last major challenger to Henry's rule and bringing the war in Flanders to a halt. Without William, the baronial opposition in Normandy lacked a leader. A fresh peace was made with France, and the King was finally able to release the remaining prisoners from the revolt of 1123, including Waleran of Meulan, who was rehabilitated into the royal court. Meanwhile, Henry rebuilt his alliance with Fulk of Anjou, this time by marrying Matilda to Fulk's eldest son, Geoffrey. The pair were betrothed in 1127 and married the following year. It is unknown whether Henry intended Geoffrey to have any future claim on England or Normandy, and he was probably keeping his son-in-law's status deliberately uncertain. Similarly, although Matilda was granted a number of Normandy castles as part of her dowry, it was not specified when the couple would actually take possession of them. Fulk left Anjou for Jerusalem in 1129, declaring Geoffrey the Count of Anjou and Maine. The marriage proved difficult, as the couple did not particularly like each other and the disputed castles proved a point of contention, resulting in Matilda returning to Normandy later that year. Henry appears to have blamed Geoffrey for the separation, but in 1131 the couple were reconciled. Much to Henry's pleasure and relief, Matilda then gave birth to two sons, Henry and Geoffrey, in 1133 and 1134. Relations between Henry, Matilda, and Geoffrey became increasingly strained during the King's final years. Matilda and Geoffrey suspected that they lacked genuine support in England. In 1135 they urged Henry to hand over the royal castles in Normandy to Matilda whilst he was still alive, and insisted that the Norman nobility swear immediate allegiance to her, thereby giving the couple a more powerful position after Henry's death. Henry angrily declined to do so, probably out of concern that Geoffrey would try to seize power in Normandy. A fresh rebellion broke out amongst the barons in southern Normandy, led by William, the Count of Ponthieu, whereupon Geoffrey and Matilda intervened in support of the rebels.
Henry campaigned throughout the autumn, strengthening the southern frontier, and then travelled to Lyons-la-Forêt in November to enjoy some hunting, still apparently healthy. There Henry fell ill – according to the chronicler Henry of Huntingdon, he ate too many ("a surfeit of") lampreys against his physician's advice – and his condition worsened over the course of a week. Once the condition appeared terminal, Henry gave confession and summoned Archbishop Hugh of Amiens, who was joined by Robert of Gloucester and other members of the court. In accordance with custom, preparations were made to settle Henry's outstanding debts and to revoke outstanding sentences of forfeiture. The King died on 1 December 1135, and his corpse was taken to Rouen accompanied by the barons, where it was embalmed; his entrails were buried locally at the priory of Notre-Dame du Pré, and the preserved body was taken on to England, where it was interred at Reading Abbey. Despite Henry's efforts, the succession was disputed. When news began to spread of the King's death, Geoffrey and Matilda were in Anjou supporting the rebels in their campaign against the royal army, which included a number of Matilda's supporters such as Robert of Gloucester. Many of these barons had taken an oath to stay in Normandy until the late king was properly buried, which prevented them from returning to England. The Norman nobility discussed declaring Theobald of Blois king. Theobald's younger brother, Stephen of Blois, quickly crossed from Boulogne to England, however, accompanied by his military household. With the help of his brother, Henry of Blois, he seized power in England and was crowned king on 22 December. The Empress Matilda did not give up her claim to England and Normandy, leading to the prolonged civil war known as the Anarchy between 1135 and 1153. Historians have drawn on a range of sources on Henry, including the accounts of chroniclers; other documentary evidence, including early pipe rolls; and surviving buildings and architecture. The three main chroniclers to describe the events of Henry's life were William of Malmesbury, Orderic Vitalis, and Henry of Huntingdon, but each incorporated extensive social and moral commentary into their accounts and borrowed a range of literary devices and stereotypical events from other popular works. Other chroniclers include Eadmer, Hugh the Chanter, Abbot Suger, and the authors of the Welsh "Brut". Not all royal documents from the period have survived, but there are a number of royal acts, charters, writs, and letters, along with some early financial records. Some of these have since been discovered to be forgeries, and others had been subsequently amended or tampered with. Late medieval historians seized on the accounts of selected chroniclers regarding Henry's education and gave him the title of Henry "Beauclerc", a theme echoed in the analysis of Victorian and Edwardian historians such as Francis Palgrave and Henry Davis. The historian Charles David dismissed this argument in 1929, showing the more extreme claims for Henry's education to be without foundation. Modern histories of Henry commenced with Richard Southern's work in the early 1960s, followed by extensive research during the rest of the 20th century into a wide number of themes from his reign in England, and a much more limited number of studies of his rule in Normandy. Only two major, modern biographies of Henry have been produced, Warren Hollister's posthumous volume in 2001, and Judith Green's 2006 work. 
Interpretation of Henry's personality by historians has altered over time. Earlier historians such as Austin Poole and Richard Southern considered Henry a cruel, draconian ruler. More recent historians, such as Hollister and Green, view his implementation of justice much more sympathetically, particularly when set against the standards of the day, but even Green has noted that Henry was "in many respects highly unpleasant", and Alan Cooper has observed that many contemporary chroniclers were probably too scared of the King to voice much criticism. Historians have also debated the extent to which Henry's administrative reforms genuinely constituted an introduction of what Hollister and John Baldwin have termed systematic, "administrative kingship", or whether his outlook remained fundamentally traditional. Henry's burial at Reading Abbey is marked by a local cross, but Reading Abbey was slowly demolished during the Dissolution of the Monasteries in the 16th century. The exact location is uncertain, but the most likely location of the tomb itself is now in a built-up area of central Reading, on the site of the former abbey choir. A plan to locate his remains was announced in March 2015, with support from English Heritage and Philippa Langley, who aided with the successful exhumation of Richard III. In addition to Matilda and William, Henry possibly had a short-lived son, Richard, from his first marriage. Henry and his second wife, Adeliza, had no children. Henry had a number of illegitimate children by various mistresses. Elagabalus Elagabalus, also known as Heliogabalus (203 – 11 March 222), was Roman emperor from 218 to 222. A member of the Severan dynasty, he was Syrian, the second son of Julia Soaemias and Sextus Varius Marcellus. In his early youth he served as a priest of the god Elagabalus in the hometown of his mother's family, Emesa. As a private citizen, he was probably named Sextus Varius Avitus Bassianus. Upon becoming emperor he took the name Marcus Aurelius Antoninus Augustus. He was called Elagabalus only after his death. In 217, the emperor Caracalla was assassinated and replaced by his Praetorian prefect, Marcus Opellius Macrinus. Caracalla's maternal aunt, Julia Maesa, successfully instigated a revolt among the Third Legion to have her eldest grandson (and Caracalla's cousin), Elagabalus, declared emperor in his place. Macrinus was defeated on 8 June 218 at the Battle of Antioch. Elagabalus, barely 14 years old, became emperor, initiating a reign remembered mainly for sex scandals and religious controversy. Later historians suggest Elagabalus showed a disregard for Roman religious traditions and sexual taboos. He replaced the traditional head of the Roman pantheon, Jupiter, with the deity Elagabalus, of whom he had been high priest. He forced leading members of Rome's government to participate in religious rites celebrating this deity, over which he personally presided. Elagabalus was supposedly "married" as many as five times, lavishing favours on male courtiers popularly thought to have been his lovers, and was reported to have prostituted himself in the imperial palace. His behavior estranged the Praetorian Guard, the Senate, and the common people alike. Amidst growing opposition, Elagabalus, just 18 years old, was assassinated on 11 March 222 and replaced by his cousin Severus Alexander, who ruled for 13 years before his own assassination, an event that marked the beginning of the Crisis of the Third Century.
The assassination plot against Elagabalus was devised by his grandmother, Julia Maesa, and carried out by disaffected members of the Praetorian Guard. Elagabalus developed a reputation among his contemporaries for extreme eccentricity, decadence, and zealotry. This tradition has persisted, and with writers of the early modern age he suffers one of the worst reputations among Roman emperors. Edward Gibbon, for example, wrote that Elagabalus "abandoned himself to the grossest pleasures and ungoverned fury". According to Barthold Georg Niebuhr, "The name Elagabalus is branded in history above all others" because of his "unspeakably disgusting life". Elagabalus was born around the year 203 to Sextus Varius Marcellus and Julia Soaemias Bassiana. His father was initially a member of the Equites class, but was later elevated to the rank of senator. His grandmother, Julia Maesa, was the widow of the consul Julius Avitus, the sister of Julia Domna, and the sister-in-law of the emperor Septimius Severus. He had at least one sibling: an unnamed elder brother. His mother, Julia Soaemias, was a cousin of the emperor Caracalla. His other relatives included his aunt Julia Avita Mamaea and uncle Marcus Julius Gessius Marcianus and among their children, their son Severus Alexander. Elagabalus's family held hereditary rights to the priesthood of the sun god Elagabal, of whom Elagabalus was the high priest at Emesa (modern Homs) in Roman Syria. The deity Elagabalus was initially venerated at Emesa. This form of the god's name is a Latinized version of the Syrian "Ilāh hag-Gabal", which derives from "Ilāh" (a Semitic word for "god") and "gabal" (an Arabic word for "mountain"), resulting in "the God of the Mountain," the Emesene manifestation of the deity. The cult of the deity spread to other parts of the Roman Empire in the 2nd century; a dedication has been found as far away as Woerden (Netherlands), near the Roman "limes". The god was later imported and assimilated with the Roman sun god known as Sol Indiges in republican times and as Sol Invictus during the second and third centuries CE. In Greek the sun god is Helios, hence "Heliogabalus", a hybrid conjunction of "Helios" and "Elagabalus". When the Emperor Macrinus came to power, he suppressed the threat against his reign from the family of his assassinated predecessor, Caracalla, by exiling them—Julia Maesa, her two daughters, and her eldest grandson Elagabalus—to their estate at Emesa in Syria. Almost upon arrival in Syria, Maesa began a plot with her advisor and Elagabalus' tutor, Gannys, to overthrow Macrinus and elevate the fourteen-year-old Elagabalus to the imperial throne. His mother publicly declared that he was the illegitimate son of Caracalla, and therefore deserving the loyalty of Roman soldiers and senators who had sworn allegiance to Caracalla. After Julia Maesa displayed her wealth to the Third Legion at Raphana they swore allegiance to Elagabalus. At sunrise on 16 May 218, Publius Valerius Comazon, commander of the legion, declared him emperor. To strengthen his legitimacy Elagabalus assumed Caracalla's names, "Marcus Aurelius Antoninus". In response Macrinus dispatched his Praetorian prefect Ulpius Julianus to the region with a contingent of troops he considered strong enough to crush the rebellion. However, this force soon joined the faction of Elagabalus when, during the battle, they turned on their own commanders. The officers were killed and Julianus' head was sent back to the emperor. 
Macrinus now sent letters to the Senate denouncing Elagabalus as the "False Antoninus" and claiming he was insane. Both consuls and other high-ranking members of Rome's leadership condemned Elagabalus, and the Senate subsequently declared war on both Elagabalus and Julia Maesa. Macrinus and his son, weakened by the desertion of the Second Legion due to bribes and promises circulated by Julia Maesa, were defeated on 8 June 218 at the Battle of Antioch by troops commanded by Gannys. Macrinus fled to Italy, disguised as a courier, but was intercepted near Chalcedon and executed in Cappadocia. His son Diadumenian, sent as a friendly hostage to the Parthian court as a guarantee of peace between the states, was captured at Zeugma and also put to death. Elagabalus declared the date of the victory at Antioch to be the beginning of his reign and assumed the imperial titles without prior senatorial approval. This violated tradition but was a common practice among 3rd-century emperors nonetheless. Letters of reconciliation were dispatched to Rome extending amnesty to the Senate and recognizing the laws, while also condemning the administration of Macrinus and his son. The senators responded by acknowledging Elagabalus as emperor and accepting his claim to be the son of Caracalla. Caracalla and Julia Domna were both deified by the Senate, both Julia Maesa and Julia Soaemias were elevated to the rank of Augustae, and the memory of both Macrinus and Diadumenian was expunged by the Senate. The former commander of the Third Legion, Comazon, was appointed commander of the Praetorian Guard. Elagabalus and his entourage spent the winter of 218 in Bithynia at Nicomedia, where the emperor's religious beliefs first presented themselves as a problem. The contemporary historian Cassius Dio suggests that Gannys was in fact killed by the new emperor because he pressured Elagabalus to live "temperately and prudently". To help Romans adjust to having an oriental priest as emperor, Julia Maesa had a painting of Elagabalus in priestly robes sent to Rome and hung over a statue of the goddess Victoria in the Senate House. This placed senators in the awkward position of having to make offerings to Elagabalus whenever they made offerings to Victoria. The legions were dismayed by his behaviour and quickly came to regret having supported his accession. While Elagabalus was still on his way to Rome, brief revolts broke out by the Fourth Legion at the instigation of Gellius Maximus, and by the Third Legion, which itself had been responsible for the elevation of Elagabalus to the throne, under the command of Senator Verus. The rebellion was quickly put down, and the Third Legion disbanded. When the entourage reached Rome in the autumn of 219, Comazon and other allies of Julia Maesa and Elagabalus were given powerful and lucrative positions, to the chagrin of many senators who did not consider them worthy of such privileges. After his tenure as Praetorian prefect, Comazon served as the city prefect of Rome three times, and as consul twice. Elagabalus soon devalued the Roman currency. He decreased the silver purity of the "denarius" from 58% to 46.5% — the actual silver weight dropping from 1.82 grams to 1.41 grams. He also demonetized the "antoninianus" during this period in Rome. 
Elagabalus tried to have his presumed lover, the charioteer Hierocles, declared Caesar, while another alleged lover, the athlete Aurelius Zoticus, was appointed to the non-administrative but influential position of Master of the Chamber, or "Cubicularius". His offer of amnesty for the Roman upper class was largely honoured, though the jurist Ulpian was exiled. The relationships between Julia Maesa, Julia Soaemias, and Elagabalus were strong at first. His mother and grandmother became the first women to be allowed into the Senate, and both received senatorial titles: Soaemias the established title of "Clarissima," and Maesa the more unorthodox "Mater Castrorum et Senatus" ("Mother of the army camp and of the Senate"). They held the title of "Augusta" as well, suggesting that they may have been the power behind the throne. Indeed, they exercised great influence over the young emperor throughout his reign, and can be found on many coins and inscriptions—a rare honor for Roman women. Since the reign of Septimius Severus, sun worship had increased throughout the Empire. Elagabalus saw this as an opportunity to install Elagabal as the chief deity of the Roman pantheon. The god was renamed "Deus Sol Invictus", meaning "God the Undefeated Sun", and honored above Jupiter. As a token of respect for Roman religion, however, Elagabalus joined either Astarte, Minerva, Urania, or some combination of the three to Elagabal as consort. A union between Elagabal and a traditional goddess would have served to strengthen ties between the new religion and the imperial cult. In fact, there may have been an effort to introduce Elagabal, Urania, and Athena as the new Capitoline triad of Rome—replacing Jupiter, Juno, and Minerva. He aroused further discontent when he married the Vestal Virgin Aquilia Severa, claiming the marriage would produce "godlike children". This was a flagrant breach of Roman law and tradition, which held that any Vestal found to have engaged in sexual intercourse was to be buried alive. A lavish temple called the Elagabalium was built on the east face of the Palatine Hill to house Elagabal, who was represented by a black conical meteorite from Emesa. Herodian wrote "this stone is worshipped as though it were sent from heaven; on it there are some small projecting pieces and markings that are pointed out, which the people would like to believe are a rough picture of the sun, because this is how they see them". In order to become the high priest of his new religion, Elagabalus had himself circumcised. He forced senators to watch while he danced around the altar of Deus Sol Invictus to the accompaniment of drums and cymbals. Each summer solstice he held a festival dedicated to the god, which became popular with the masses because of the free food distributed on these occasions. During this festival, Elagabalus placed the Emesa stone on a chariot adorned with gold and jewels, which he paraded through the city. The most sacred relics from the Roman religion were transferred from their respective shrines to the Elagabalium, including the emblem of the Great Mother, the fire of Vesta, the Shields of the Salii, and the Palladium, so that no other god could be worshipped except in association with Elagabal. The question of Elagabalus' sexual orientation is confused, owing to salacious and unreliable sources. Elagabalus married and divorced five women, three of whom are known. His first wife was Julia Cornelia Paula; the second was the Vestal Virgin Julia Aquilia Severa.
Within a year, he abandoned her and married Annia Aurelia Faustina, a descendant of Marcus Aurelius and the widow of a man he had recently had executed. He had returned to his second wife Severa by the end of the year. According to Cassius Dio, his most stable relationship seems to have been with his chariot driver, a blond slave from Caria named Hierocles, whom he referred to as his husband. The "Augustan History" claims that he also married a man named Zoticus, an athlete from Smyrna, in a public ceremony at Rome. Cassius Dio reported that Elagabalus would paint his eyes, depilate his body hair and wear wigs before prostituting himself in taverns, brothels, and even in the imperial palace. Herodian commented that Elagabalus enhanced his natural good looks by the regular application of cosmetics. He was described as having been "delighted to be called the mistress, the wife, the queen of Hierocles" and was reported to have offered vast sums of money to any physician who could equip him with female genitalia. Elagabalus has been characterized by some modern writers as transgender or transsexual. By 221 Elagabalus' eccentricities, particularly his relationship with Hierocles, increasingly provoked the soldiers of the Praetorian Guard. When Elagabalus' grandmother Julia Maesa perceived that popular support for the emperor was waning, she decided that he and his mother, who had encouraged his religious practices, had to be replaced. As alternatives, she turned to her other daughter, Julia Avita Mamaea, and her daughter's son, the fifteen-year-old Severus Alexander. Prevailing on Elagabalus, she arranged that he appoint his cousin Alexander as his heir and that the boy be given the title of "Caesar". Alexander shared the consulship with the emperor that year. However, Elagabalus reconsidered this arrangement when he began to suspect that the Praetorian Guard preferred his cousin to himself. Following the failure of various attempts on Alexander's life, Elagabalus stripped his cousin of his titles, revoked his consulship, and invented the rumor that Alexander was near death, in order to see how the Praetorians would react. A riot ensued, and the Guard demanded to see Elagabalus and Alexander in the Praetorian camp. The Emperor complied and on 11 March 222 he publicly presented his cousin along with his own mother, Julia Soaemias. On their arrival the soldiers started cheering Alexander while ignoring Elagabalus, who ordered the summary arrest and execution of anyone who had taken part in this display of insubordination. In response, members of the Praetorian Guard attacked Elagabalus and his mother. Following his assassination, many associates of Elagabalus were killed or deposed, including his lover Hierocles. His religious edicts were reversed and the stone of Elagabal was sent back to Emesa. Women were again barred from attending meetings of the Senate. The practice of "damnatio memoriae"—erasing from the public record a disgraced personage formerly of note—was systematically applied in his case. The source of many of these stories of Elagabalus's depravity is the "Augustan History" ("Historia Augusta"), which includes controversial claims. It is most likely that the "Historia Augusta" was written towards the end of the 4th century, during the reign of Emperor Theodosius I. The life of Elagabalus as described in the "Augustan History" is of uncertain historical merit. Sections 13 to 17, relating to the fall of Elagabalus, are less controversial among historians.
Sources often considered more credible than the "Augustan History" include the contemporary historians Cassius Dio and Herodian. Cassius Dio lived from the second half of the 2nd century until sometime after 229. Born into a patrician family, he spent the greater part of his life in public service. He was a senator under emperor Commodus and governor of Smyrna after the death of Septimius Severus. Afterwards, he served as suffect consul around 205, and as proconsul in Africa and Pannonia. Severus Alexander held him in high esteem and made him his consul again. His "Roman History" spans nearly a millennium, from the arrival of Aeneas in Italy until the year 229. As a contemporary of Elagabalus, Cassius Dio's account of his reign is generally considered more reliable than the "Augustan History", although by his own admission Dio spent the greater part of the relevant period outside of Rome and had to rely on second-hand information. Furthermore, the political climate in the aftermath of Elagabalus' reign, as well as Dio's own position within the government of Alexander, likely influenced the truth of this part of his history for the worse. Dio regularly refers to Elagabalus as Sardanapalus, partly to distinguish him from his divine namesake, but chiefly to do his part in maintaining the "damnatio memoriae" and to associate him with another autocrat notorious for a dissolute life. Another contemporary of Elagabalus' was Herodian, a minor Roman civil servant who lived from c. 170 until 240. His work, "History of the Roman Empire since Marcus Aurelius", commonly abbreviated as "Roman History", is an eyewitness account of the reign of Commodus until the beginning of the reign of Gordian III. His work largely overlaps with Dio's own "Roman History", but the texts, written independently of each other, agree more often than not about the emperor and his short but eventful reign. Although Herodian is not deemed as reliable as Cassius Dio, his lack of literary and scholarly pretensions makes him less biased than senatorial historians. Herodian is considered the most important source for the religious reforms which took place during the reign of Elagabalus, which have been confirmed by numismatic and archaeological evidence. For readers of the modern age, "The History of the Decline and Fall of the Roman Empire" by Edward Gibbon (1737–94) further cemented the scandalous reputation of Elagabalus. Gibbon not only accepted and expressed outrage at the allegations of the ancient historians, but he might have added some details of his own; he is the first historian known to claim that Gannys was a eunuch, for example. The 20th-century anthropologist James George Frazer (famous for "The Golden Bough") took seriously the monotheistic aspirations of the emperor, but also ridiculed him: "The dainty priest of the Sun [was] the most abandoned reprobate who ever sat upon a throne...It was the intention of this eminently religious but crack-brained despot to supersede the worship of all the gods, not only at Rome but throughout the world, by the single worship of Elagabalus or the Sun." The first book-length biography was "The Amazing Emperor Heliogabalus" (1911) by J.
Stuart Hay, "a serious and systematic study" more sympathetic than that of previous historians, which nonetheless stressed the exoticism of Elagabalus, calling his reign one of "enormous wealth and excessive prodigality, luxury and aestheticism, carried to their ultimate extreme, and sensuality in all the refinements of its Eastern habit." Some recent historians paint a more favorable picture of the emperor's rule. Martijn Icks, in "Images of Elagabalus" (2008; republished as "The Crimes of Elagabalus" in 2012), doubts the reliability of the ancient sources and argues that it was the emperor's unorthodox religious policies that alienated the power elite of Rome, to the point that his grandmother saw fit to eliminate him and replace him with his cousin. Leonardo de Arrizabalaga y Prado, in "The Emperor Elagabalus: Fact or Fiction?" (2008), is also critical of the ancient historians and speculates that neither religion nor sexuality played a role in the fall of the young emperor. He was simply the loser in a power struggle within the imperial family; the loyalty of the Praetorian Guards was up for sale, and Julia Maesa had the resources to outmaneuver and outbribe her grandson. In this version of events, once Elagabalus, his mother, and his immediate circle had been murdered, a campaign of character assassination began, resulting in a grotesque caricature that has persisted to the present day. Historians have not only kept the tradition alive, but often embellished it, reflecting their own bias against effeminacy, religious zealotry, and other traits with which Elagabalus is commonly identified. Due to the ancient tradition about him, Elagabalus became something of a(n anti-)hero in the Decadent movement of the late 19th century. He often appears in literature and other creative media as the epitome of a young, amoral aesthete. His life and character have informed or at least inspired many famous works of art, by Decadents, even by contemporary artists. The most notable of these works include: Hamilton, Ontario Hamilton () is a port city in the Canadian province of Ontario. An industrialized city in the Golden Horseshoe at the west end of Lake Ontario, Hamilton has a population of 536,917, and a metropolitan population of 747,545. On January 1, 2001, the new City of Hamilton was formed through the amalgamation of the former city and the other constituent lower-tier municipalities of the Regional Municipality of Hamilton-Wentworth with the upper-tier regional government. Residents of the old city are known as Hamiltonians. Since 1981, the metropolitan area has been listed as the ninth largest in Canada and the third largest in Ontario. Hamilton is home to the Royal Botanical Gardens, the Canadian Warplane Heritage Museum, the Bruce Trail, McMaster University, Redeemer University College and Mohawk College. McMaster University is ranked 4th in Canada and 94th in the world by Times Higher Education Rankings 2015-16 and has a well-known medical school. In pre-colonial times, the Neutral Indians used much of the land but were gradually driven out by the Five (later Six) Nations (Iroquois) who were allied with the British against the Huron and their French allies. A member of the Iroquois Confederacy provided the route and name for Mohawk Road, which originally included King Street in the lower city. 
After the United States gained independence in the American Revolutionary War, about 10,000 United Empire Loyalists settled in Upper Canada (what is now southern Ontario) in 1784, chiefly in Niagara, around the Bay of Quinte, and along the St. Lawrence River between Lake Ontario and Montreal. The Crown granted them land in these areas in order to develop Upper Canada and to compensate them for losses in the United States. With former First Nations lands available for purchase, these new settlers were soon followed by many more Americans, attracted by the availability of inexpensive, arable land. At the same time, large numbers of Iroquois who had been allied with Britain arrived from the United States and were settled on reserves west of Lake Ontario as compensation for lands they lost in what was now the United States. During the War of 1812, British regulars and Canadian militia defeated invading American troops at the Battle of Stoney Creek, fought in what is now a park in eastern Hamilton. The town of Hamilton was conceived by George Hamilton (a son of Queenston entrepreneur and founder Robert Hamilton) when he purchased the farm holdings of James Durand, the local Member of the British Legislative Assembly, shortly after the War of 1812. Nathaniel Hughson, a property owner to the north, cooperated with George Hamilton to prepare a proposal for a courthouse and jail on Hamilton's property. Hamilton offered the land to the crown for the future site. Durand was empowered by Hughson and Hamilton to sell property holdings which later became the site of the town. As he had been instructed, Durand circulated the offers at York during a session of the Legislative Assembly, which established a new Gore District, of which the Hamilton townsite was a part. Initially, this town was not the most important centre of the Gore District. An early indication of Hamilton's sudden prosperity came in 1816, when it was chosen over Ancaster, Ontario, to be the administrative centre of the new Gore District. Another dramatic economic turnabout for Hamilton occurred in 1832, when a canal was finally cut through the outer sand bar, enabling Hamilton to become a major port. A permanent jail was not constructed until 1832, when a cut-stone design was completed on Prince's Square, one of the two squares created in 1816. Subsequently, the first police board and the town limits were defined by statute on February 13, 1833. Official city status was achieved on June 9, 1846, by an act of Parliament, 9 Victoria Chapter 73. By 1845, the population was 6,475. In 1846, there were useful roads to many communities as well as stage coaches and steamboats to Toronto, Queenston, and Niagara. Eleven cargo schooners were owned in Hamilton. Eleven churches were in operation. A reading room provided access to newspapers from other cities and from England and the U.S. In addition to stores of all types, four banks, tradesmen of various types, and sixty-five taverns, industry in the community included three breweries, ten importers of dry goods and groceries, five importers of hardware, two tanneries, three coachmakers, and a marble and a stone works. As the city grew, several prominent buildings were constructed in the late 19th century, including the Grand Lodge of Canada in 1855, West Flamboro Methodist Church in 1879 (later purchased by Dufferin Masonic Lodge in 1893), a public library in 1890, and the Right House department store in 1893.
The first commercial telephone service in Canada, the first telephone exchange in the British Empire, and the second telephone exchange in all of North America were each established in the city between 1877 and 1878. The city had several interurban electric street railways and two inclines, all powered by the Cataract Power Co. Despite the Hamilton Street Railway strike of 1906, industrial businesses expanded, and Hamilton's population doubled between 1900 and 1914. Two steel manufacturing companies, Stelco and Dofasco, were formed in 1910 and 1912, respectively. Procter & Gamble and the Beech-Nut Packing Company opened manufacturing plants in 1914 and 1922, respectively, their first outside the US. Population and economic growth continued until the 1960s. In 1929 the city's first high-rise building, the Pigott Building, was constructed; that year McMaster University moved from Toronto to Hamilton; in 1934 the second Canadian Tire store in Canada opened here; in 1940 the airport was completed; and in 1948 the Studebaker assembly line was constructed. Infrastructure and retail development continued, with the Burlington Bay James N. Allan Skyway opening in 1958, and the first Tim Hortons store in 1964. Since then, many of the large industries have moved or shut down operations in restructuring that also affected the United States. The economy has shifted more toward the service sector, such as transportation, education, and health services. On January 1, 2001, the new city of Hamilton was formed from the amalgamation of Hamilton and its five neighbouring municipalities: Ancaster, Dundas, Flamborough, Glanbrook, and Stoney Creek. Before amalgamation, the "old" City of Hamilton had 331,121 Hamiltonians divided into 100 neighbourhoods. The former region of Hamilton-Wentworth had a population of 490,268. The amalgamation created a single-tier municipal government, ending subsidization of its suburbs. The new amalgamated city has 519,949 people in more than 100 neighbourhoods, and surrounding communities. In 1997 there was a devastating fire at the Plastimet plastics plant. Approximately 300 firefighters battled the blaze, and many sustained severe chemical burns and inhaled volatile organic compounds when at least 400 tonnes of PVC plastic were consumed in the fire. Hamilton is in Southern Ontario on the western end of the Niagara Peninsula and wraps around the westernmost part of Lake Ontario; most of the city, including the downtown section, is on the south shore. Hamilton is in the geographic centre of the Golden Horseshoe and is roughly the midway point between Toronto and Buffalo, New York, although slightly closer to the former. Its major physical features are Hamilton Harbour, marking the northern limit of the city, and the Niagara Escarpment running through the middle of the city across its entire breadth, bisecting the city into "upper" and "lower" parts. The highest point is 250 m (820 ft) above the level of Lake Ontario. According to all records from local historians, this district was called "Attiwandaronia" by the native Neutral people. The first aboriginals to settle in the Hamilton area called the bay "Macassa", meaning "beautiful waters". Hamilton is one of 11 cities showcased in the book, "Green City: People, Nature & Urban Places" by Quebec author Mary Soderstrom, which examines the city as an example of an industrial powerhouse co-existing with nature.
Soderstrom credits Thomas McQuesten and family in the 1930s who "became champions of parks, greenspace and roads" in Hamilton. Hamilton Harbour is a natural harbour with a large sandbar called the Beachstrip. This sandbar was deposited during a period of higher lake levels during the last ice age, and extends southeast through the central lower city to the escarpment. Hamilton's deep sea port is accessed by ship canal through the beach strip into the harbour and is traversed by two bridges, the QEW's Burlington Bay James N. Allan Skyway and the lower Canal Lift Bridge. Between 1788 and 1793, the townships at the Head-of-the-Lake were surveyed and named. The area was first known as The Head-of-the-Lake for its location at the western end of Lake Ontario. John Ryckman, born in Barton township (where present day downtown Hamilton is), described the area in 1803 as he remembered it: "The city in 1803 was all forest. The shores of the bay were difficult to reach or see because they were hidden by a thick, almost impenetrable mass of trees and undergrowth ... Bears ate pigs, so settlers warred on bears. Wolves gobbled sheep and geese, so they hunted and trapped wolves. They also held organized raids on rattlesnakes on the mountainside. There was plenty of game. Many a time have I seen (sic) a deer jump the fence into my back yard, and there were millions of pigeons which we clubbed as they flew low." George Hamilton, a settler and local politician, established a town site in the northern portion of Barton Township in 1815. He kept several east–west roads which were originally Indian trails, but the north–south streets were on a regular grid pattern. Streets were designated "East" or "West" if they crossed James Street or Highway 6. Streets were designated "North" or "South" if they crossed King Street or Highway 8. The overall design of the townsite, likely conceived in 1816, was commonplace. George Hamilton employed a grid street pattern used in most towns in Upper Canada and throughout the American frontier. The eighty original lots had frontages of fifty feet; each lot faced a broad street and backed onto a twelve-foot lane. It took at least a decade for all of the original lots to be sold, but the construction of the Burlington Canal in 1823 and a new court-house in 1827 encouraged Hamilton to add more blocks around 1828–9. At this time, he included a market square in an effort to draw commercial activity onto his lands, but the natural growth of the town was to the north of Hamilton's plot. The Hamilton Conservation Authority owns, leases or manages extensive lands in the area, with the city operating parkland at 310 locations. Many of the parks are along the Niagara Escarpment, which runs from Tobermory at the tip of the Bruce Peninsula in the north, to Queenston at the Niagara River in the south, and provides views of the cities and towns at Lake Ontario's western end. The hiking path Bruce Trail runs the length of the escarpment. Hamilton is home to more than 100 waterfalls and cascades, most of which are on or near the Bruce Trail as it winds through the Niagara Escarpment. Hamilton's climate is humid continental, characterized by changeable weather patterns. However, its climate is moderate compared with most of Canada. Hamilton's location on an embayment at the southwestern corner of Lake Ontario with an escarpment dividing upper and lower parts of the city results in noticeable disparities in weather over short distances.
This is also the case with pollution levels, which, depending on localized wind patterns or low cloud, can be high in certain areas; the pollution mostly originates from the city's steel industry, mixed with regional vehicle emissions. The lower city lies in a warmer climate pocket found at the southwestern end of Lake Ontario (between Hamilton and Toronto and eastward into the Niagara Peninsula), while the upper reaches of the city fall into a slightly cooler zone. The airport's open, rural location and higher altitude (240 m vs. 85 m ASL downtown) result in lower temperatures, generally windier conditions, and higher snowfall amounts than lower, built-up areas of the city. One exception is on early spring afternoons, when lake temperatures colder than the air keep shoreline areas significantly cooler under an east or north-east onshore flow. The highest temperature ever recorded in Hamilton was 41.1 °C (106 °F) on 14 July 1868. The coldest temperature ever recorded was -30.6 °C (-23 °F) on 25 January 1884. As per the 2016 Canadian census, 24.69% of the city's population was not born in Canada. Between 2001 and 2006, the foreign-born population increased by 7.7% while the total population of the Hamilton census metropolitan area (CMA) grew by 4.3%. Hamilton is home to 26,330 immigrants who arrived in Canada between 2001 and 2010 and 13,150 immigrants who arrived between 2011 and 2016. Hamilton maintains significant Italian, English, Scottish, German and Irish ancestry. 130,705 Hamiltonians claim English heritage, while 98,765 indicate their ancestors arrived from Scotland, 87,825 from Ireland, 62,335 from Italy, and 50,400 from Germany. In February 2014, the city's council voted to declare Hamilton a sanctuary city, offering municipal services to undocumented immigrants at risk of deportation. Hamilton also has a notable French community for which provincial services are offered in French. In Ontario, urban centres where there are at least 5000 Francophones, or where at least 10% of the population is francophone, are designated areas where bilingual provincial services have to be offered. As per the 2016 census, the Francophone community maintains a population of 6,760, while 30,530 residents, or 5.8% of the city's population, have knowledge of both official languages. The Franco-Ontarian community of Hamilton boasts two school boards, the public "Conseil scolaire Viamonde" and the Catholic "Conseil scolaire catholique MonAvenir", which operate five schools (2 secondary and 3 elementary). Additionally, the city maintains a Francophone community health centre that is part of the LHIN (Centre de santé communautaire Hamilton/Niagara), a cultural centre (Centre français Hamilton), three daycare centres, a provincially funded employment centre (Options Emploi), a community college site (Collège Boréal) and a community organization that supports the development of the francophone community in Hamilton (ACFO Régionale Hamilton). The top countries of birth for the newcomers living in Hamilton in the 1990s were: former Yugoslavia, Poland, India, China, the Philippines, and Iraq. Children aged 14 years and under accounted for 16.23% of the city's population, a decline of 1.57% from the 2011 census. Hamiltonians aged 65 years and older constituted 17.3% of the population, an increase of 2.4% since 2011. The city's average age is 41.3 years. 54.9% of Hamiltonians are married or in a common-law relationship, while 6.4% of city residents are divorced.
Same-sex couples (married or in common-law relationships) constitute 0.08% (2,710 individuals) of the partnered population in Hamilton. The most commonly reported religion in Hamilton is Christianity, although other religions brought by immigrants are also growing. The 2011 census indicates 67.6% of the population adheres to a Christian denomination, with Catholics being the largest group at 34.3% of the city's population. The Christ the King Cathedral is the seat of the Diocese of Hamilton. Other denominations include the United Church (6.5%), Anglican (6.4%), Presbyterian (3.1%), Christian Orthodox (2.9%), and other denominations (9.8%). Other religions with significant populations include Islam (3.7%), Buddhism (0.9%), Sikhism (0.8%), Hinduism (0.8%), and Judaism (0.7%). Those with no religious affiliation accounted for 24.9% of the population. Environics Analytics, a geodemographic marketing firm that created 66 different "clusters" of people complete with profiles of how they live, what they think and what they consume, sees a future Hamilton in which younger, upscale Hamiltonians, tech savvy and university educated, choose to live in the downtown and surrounding areas rather than just visiting intermittently. More two- and three-storey townhouses and apartments will be built on downtown lots; small condos will be built on vacant spaces in areas such as Dundas, Ainslie Wood and Westdale to accommodate newly retired seniors; and more retail and commercial zones will be created. The city is also expected to grow by more than 28,000 people and 18,000 households by the year 2012. The most important economic activity in Ontario is manufacturing, and the Toronto–Hamilton region is the country's most highly industrialized area. The area from Oshawa, Ontario around the west end of Lake Ontario to Niagara Falls, with Hamilton at its centre, is known as the Golden Horseshoe and had a population of approximately 8.1 million people in 2006. The phrase was first used by Westinghouse President Herbert H. Rogge in a speech to the Hamilton Chamber of Commerce on January 12, 1954: "Hamilton in 50 years will be the forward cleat in a golden horseshoe of industrial development from Oshawa to the Niagara River ... 150 miles long and wide...It will run from Niagara Falls on the south to about Oshawa on the north and take in numerous cities and towns already there, including Hamilton and Toronto." With sixty percent of Canada's steel produced in Hamilton by Stelco and Dofasco, the city has become known as the Steel Capital of Canada. After nearly declaring bankruptcy, Stelco returned to profitability in 2004. On August 26, 2007, United States Steel Corporation acquired Stelco for C$38.50 in cash per share, owning more than 76 percent of Stelco's outstanding shares. On September 17, 2014, US Steel Canada announced it was applying for bankruptcy protection and would close its Hamilton operations. A stand-alone subsidiary of Arcelor Mittal, the world's largest steel producer, Dofasco produces products for the automotive, construction, energy, manufacturing, pipe and tube, appliance, packaging, and steel distribution industries. It has approximately 7,300 employees at its Hamilton plant, and the four million tons of steel it produces each year represent about 30% of Canada's flat-rolled sheet steel shipments. 
Dofasco was North America's most profitable steel producer in 1999 and the most profitable in Canada in 2000 as well as a long-time member of the Dow Jones Sustainability World Index. Previously ordered by the U.S. Department of Justice to divest itself of the Canadian company, Arcelor Mittal has been allowed to retain Dofasco provided it sells several of its American assets. In the 1940s, the John C. Munro Hamilton International Airport was used as a wartime air force training station. Today TradePort International Corporation manages and operates the John C. Munro Hamilton International Airport. Under TradePort management, passenger traffic at the Hamilton terminal has grown from 90,000 in 1996 to approximately 900,000 in 2002. The airport's mid-term target for growth in its passenger service is five million air travelers annually. The airport's air cargo sector has 24–7 operational capability and strategic geographic location, allowing its capacity to increase by 50% since 1996; 91,000 metric tonnes (100,000 tons) of cargo passed through the airport in 2002. Courier companies with operations at the airport include United Parcel Service and Cargojet Canada. In 2003, the city began developing a 30-year growth management strategy which called, in part, for a massive aerotropolis industrial park centred on Hamilton Airport. The aerotropolis proposal, now known as the "Airport Employment Growth District", is touted as a solution to the city's shortage of employment lands. Hamilton turned over operation of the airport to TradePort International Corp. in 1996. In 2007, YVR Airport Services (YVRAS), which runs the Vancouver International Airport, took over 100 percent ownership of TradePort in a $13-million deal. The airport is also home to the Canadian Warplane Heritage Museum. A report by Hemson Consulting identified an opportunity to develop of greenfields (the size of the Royal Botanical Gardens) that could generate an estimated 90,000 jobs by 2031. A proposed aerotropolis industrial park at Highway 6 and 403, has been debated at City Hall for years. Opponents feel the city needs to do more investigation about the cost to taxpayers before embarking on the project. The Hamilton GO Centre, formerly the Toronto, Hamilton and Buffalo Railway station, is a commuter rail station on the Lakeshore West line of GO Transit. While Hamilton is not directly served by intercity rail, the Lakeshore West line does offer an off-peak bus connection and a peak-hours rail connection to Aldershot station in Burlington, which doubles as a Via station. When ranked on a "total crime severity index", Hamilton was 21st in Canada in 2011 for a metropolitan area. This was an eight percent decrease from 2010. Of note was Hamilton's second place rank for police-reported hate crimes in 2011. Citizens of Hamilton are represented at all three levels of Canadian government. Following the 2015 Federal Election, representation in the Parliament of Canada will consist of five Members of Parliament representing the federal ridings of Hamilton West—Ancaster—Dundas, Hamilton Centre, Hamilton East—Stoney Creek, Hamilton Mountain, and Flamborough—Glanbrook. This election will mark the first occasion in which Hamilton will have five Members of Parliament representing areas wholly within Hamilton's city boundaries, with previous boundaries situating rural ridings across municipal lines. Provincially, there are five elected Members of Provincial Parliament who serve in the Legislature of Ontario. 
Andrea Horwath, Leader of the Ontario New Democratic Party and Leader of the Official Opposition, represents Hamilton Centre; Paul Miller (NDP) represents Hamilton East-Stoney Creek; Monique Taylor (NDP) represents Hamilton Mountain; Sandy Shaw (NDP) represents Hamilton West—Ancaster—Dundas; and Progressive Conservative Donna Skelly represents Flamborough—Glanbrook. Hamilton's municipal government consists of one mayor, elected city-wide, and 15 city councillors, elected individually by each of the city's wards, who serve on the Hamilton City Council. Presently, Hamilton's mayor is Fred Eisenberger, elected on October 27, 2014, to a second, non-consecutive term. Additionally, both Public and Catholic school board trustees are elected for defined areas ranging from two trustees for multiple wards to a single trustee for an individual ward. The Hamilton City Council is granted authority to govern by the province through the Municipal Act of Ontario. As with all municipalities, the Province of Ontario has supervisory privilege over the municipality and the power to redefine, restrict or expand the powers of all municipalities in Ontario. The Criminal Code of Canada is the chief piece of legislation defining criminal conduct and penalties. The Hamilton Police Service is chiefly responsible for the enforcement of federal and provincial law. Although the Hamilton Police Service has the authority to enforce them, bylaws passed by the Hamilton City Council are mainly enforced by Provincial Offences Officers employed by the City of Hamilton. The Canadian military maintains a presence in Hamilton, with the John Weir Foote Armoury in the downtown core on James Street North housing the Royal Hamilton Light Infantry as well as the 11th Field Hamilton-Wentworth Battery and the Argyll and Sutherland Highlanders of Canada. The Hamilton Reserve Barracks on Pier Nine houses the naval reserve division, 23 Service Battalion and 23 Field Ambulance. Hamilton is home to several post-secondary institutions that have created numerous direct and indirect jobs in education and research. McMaster University moved to the city in 1930 and today has around 30,000 enrolled students, of whom almost two-thirds come from outside the immediate Hamilton region. Brock University of St. Catharines, Ontario has a satellite campus in Hamilton used primarily for teacher education. The city is also served by several colleges; from 1995 to 2001, it was additionally home to a satellite campus of the defunct francophone Collège des Grands-Lacs. Public education for students from kindergarten through high school is administered by four school boards. The Hamilton-Wentworth District School Board manages 114 public schools, while the Hamilton-Wentworth Catholic District School Board operates 55 schools in the greater Hamilton area. The Conseil scolaire Viamonde operates one elementary and one secondary school (École secondaire Georges-P.-Vanier), and the Conseil scolaire de district catholique Centre-Sud operates two elementary schools and one secondary school. Calvin Christian School, Providence Christian School and Timothy Christian School are independent Christian elementary schools. Hamilton District Christian High School, Rehoboth Christian High School and Guido de Brès Christian High School are independent Christian high schools in the area. Both HDCH and Guido de Brès participate in the city's interscholastic athletics. 
Hillfield Strathallan College, on the West Hamilton mountain, is a non-profit, CAIS-member school for children from early Montessori ages through grade twelve. Columbia International College is Canada's largest private boarding high school, with 1,700 students from 73 countries. The Dundas Valley School of Art is an independent art school which has served the Hamilton region since 1964. Students range in age from 4 years old to senior citizens, and enrollment as of February 2007 was close to 4,000. In 1998, a new full-time diploma programme was launched as a joint venture with McMaster University. The faculty and staff are highly regarded regional artists. The Hamilton Conservatory for the Arts is home to many of the area's talented young actors, dancers, musicians, singers and visual artists. The school has a keyboard studio, spacious dance studios, art and sculpting studios, gallery space and a 300-seat recital hall. HCA offers over 90 programs for ages 3–93, creating a "united nations" of arts under one roof. The Hamilton Literacy Council is a non-profit organization that provides basic (grades 1–5 equivalent) training in reading, writing, and math to English-speaking adults. The council's service is free, private, and one-to-one. It has assisted adults with their literacy skills since 1973. Hamilton is home to two think tanks, the Centre for Cultural Renewal and Cardus; the latter deals with social architecture, culture, urbanology, economics and education, and publishes the "LexView Policy Journal" and "Comment Magazine". Hamilton's local attractions include the Canadian Warplane Heritage Museum, Dundurn Castle (a National Historic Site and the residence of Allan MacNab, the 8th Premier of Canada West), the Royal Botanical Gardens, the Canadian Football Hall of Fame, the African Lion Safari Park, the Cathedral of Christ the King, and the Workers' Arts and Heritage Centre. Founded in 1914, the Art Gallery of Hamilton is Ontario's third largest public art gallery. The gallery has over 9,000 works in its permanent collection that focus on three areas: 19th-century European, Historical Canadian and Contemporary Canadian. The McMaster Museum of Art (MMA), founded at McMaster University in 1967, houses and exhibits the university's art collection of more than 7,000 objects, including historical, modern and contemporary art, the Levy Collection of Impressionist and Post-Impressionist paintings, and a collection of over 300 German Expressionist prints. Hamilton has an active theatre scene, with the professional company Theatre Aquarius, plus long-time amateur companies, the Players' Guild of Hamilton and Hamilton Theatre Inc. Many smaller theatre companies have also opened in the past decade, bringing a variety of theatre to the area. Growth in the arts and culture sector has garnered media attention for Hamilton. A "Globe and Mail" article in 2006, entitled "Go West, Young Artist," focused on the growing art scene in Hamilton. The Factory: Hamilton Media Arts Centre opened a new home on James Street North in 2006. Art galleries have sprung up on streets across the city: James Street, King William Street, Locke Street and King Street. This, coupled with growth in the downtown condo market which is drawing people back to the core, is affecting the cultural fabric of the city. The opening of the Downtown Arts Centre on Rebecca Street has spurred further creative activities in the core. The Community Centre for Media Arts (CCMA) continues to operate in downtown Hamilton. 
The CCMA works with marginalized populations and combines new media services with arts education and skills development programming. In March 2015, Hamilton hosted the JUNO Awards, which featured performances by Magic!, Hedley, and Alanis Morissette. The award ceremony was held at the FirstOntario Centre in downtown Hamilton. During JUNOfest, hundreds of local acts performed across the city, bringing thousands of tourists. Hamilton was the host of Canada's first major international athletic event, the first Commonwealth Games (then called the British Empire Games), in 1930. Hamilton bid unsuccessfully for the Commonwealth Games in 2010, losing out to New Delhi, India. On November 7, 2009, in Guadalajara, Mexico, it was announced that Toronto would host the 2015 Pan Am Games after beating out two rival South American cities, Lima, Peru, and Bogota, Colombia. The city of Hamilton co-hosted the Games with Toronto. Hamilton Mayor Fred Eisenberger said the Pan Am Games would provide a "unique opportunity for Hamilton to renew major sport facilities giving Hamiltonians a multi-purpose stadium, a 50-metre swimming pool, and an international-calibre velodrome to enjoy for generations to come." Hamilton's major sports complexes include Tim Hortons Field and FirstOntario Centre. Hamilton is represented by the Tiger-Cats in the Canadian Football League. The team traces its origins to the 1869 "Hamilton Foot Ball Club." Hamilton is also home to the Canadian Football Hall of Fame museum. The museum hosts an annual induction event in a week-long celebration that includes school visits, a golf tournament and a formal induction dinner, and concludes with the Hall of Fame game involving the local CFL Hamilton Tiger-Cats at Tim Hortons Field. In 2019, Forge FC will debut as Hamilton's soccer team in the Canadian Premier League. The team will play at Tim Hortons Field, sharing the venue with the Tiger-Cats. Hamilton hosted an NHL team in the 1920s called the Hamilton Tigers. The team folded after a players' strike in 1925. Research in Motion CEO Jim Balsillie has shown interest in bringing another NHL team to southern Ontario. The NHL's Phoenix Coyotes filed for bankruptcy in 2009 and included within their Chapter 11 reorganization a plan to sell the team to Balsillie and move the team and its operations to Hamilton, Ontario. In late September, however, the bankruptcy judge did not rule in favour of Balsillie. The Around the Bay Road Race circumnavigates Hamilton Harbour. Although it is not a marathon distance, it is the longest continuously held long-distance foot race in North America. The local newspaper also hosts the amateur Spectator Indoor Games. In addition to team sports, Hamilton is home to an auto race track, Flamboro Speedway, and Canada's fastest half-mile harness horse racing track, Flamboro Downs. Another auto race track, Cayuga International Speedway, is near Hamilton in the Haldimand County community of Nelles Corners, between Hagersville and Cayuga. Hamilton is a sister city of Flint, Michigan, and its young amateur athletes compete in the CANUSA Games, held alternately in the two cities since 1958. Flint and Hamilton hold the distinction of having the oldest continuous sister-city relationship between a U.S. and a Canadian city, dating from 1957. Hamilton also maintains other sister-city and city relationships. Hoover Dam Hoover Dam is a concrete arch-gravity dam in the Black Canyon of the Colorado River, on the border between the U.S. states of Nevada and Arizona. 
It was constructed between 1931 and 1936 during the Great Depression and was dedicated on September 30, 1935, by President Franklin D. Roosevelt. Its construction was the result of a massive effort involving thousands of workers, and cost over one hundred lives. Originally known as Boulder Dam from 1933, it was officially renamed Hoover Dam, for President Herbert Hoover, by a joint resolution of Congress in 1947. Since about 1900, the Black Canyon and nearby Boulder Canyon had been investigated for their potential to support a dam that would control floods, provide irrigation water and produce hydroelectric power. In 1928, Congress authorized the project. The winning bid to build the dam was submitted by a consortium called Six Companies, Inc., which began construction on the dam in early 1931. Such a large concrete structure had never been built before, and some of the techniques were unproven. The torrid summer weather and lack of facilities near the site also presented difficulties. Nevertheless, Six Companies turned the dam over to the federal government on March 1, 1936, more than two years ahead of schedule. Hoover Dam impounds Lake Mead, the largest reservoir in the United States by volume (when it is full). The dam is located near Boulder City, Nevada, a municipality originally constructed for workers on the construction project, about southeast of Las Vegas, Nevada. The dam's generators provide power for public and private utilities in Nevada, Arizona, and California. Hoover Dam is a major tourist attraction; nearly a million people tour the dam each year. The heavily traveled U.S. Route 93 (US 93) ran along the dam's crest until October 2010, when the Hoover Dam Bypass opened. As the United States developed the Southwest, the Colorado River was seen as a potential source of irrigation water. An initial attempt at diverting the river for irrigation purposes occurred in the late 1890s, when land speculator William Beatty built the Alamo Canal just north of the Mexican border; the canal dipped into Mexico before running to a desolate area Beatty named the Imperial Valley. Though water from the Imperial Canal allowed for the widespread settlement of the valley, the canal proved expensive to maintain. After a catastrophic breach that caused the Colorado River to fill the Salton Sea, the Southern Pacific Railroad spent $3 million in 1906–07 to stabilize the waterway, an amount it hoped in vain would be reimbursed by the Federal Government. Even after the waterway was stabilized, it proved unsatisfactory because of constant disputes with landowners on the Mexican side of the border. As the technology of electric power transmission improved, the Lower Colorado was considered for its hydroelectric-power potential. In 1902, the Edison Electric Company of Los Angeles surveyed the river in the hope of building a rock dam which could generate . However, at the time, the limit of transmission of electric power was , and there were few customers (mostly mines) within that limit. Edison allowed land options it held on the river to lapse—including an option for what became the site of Hoover Dam. In the following years, the Bureau of Reclamation (BOR), known as the Reclamation Service at the time, also considered the Lower Colorado as the site for a dam. Service chief Arthur Powell Davis proposed using dynamite to collapse the walls of Boulder Canyon, north of the eventual dam site, into the river. 
The river would carry off the smaller pieces of debris, and a dam would be built incorporating the remaining rubble. In 1922, after considering it for several years, the Reclamation Service finally rejected the proposal, citing doubts about the unproven technique and questions as to whether it would in fact save money. In 1922, the Reclamation Service presented a report calling for the development of a dam on the Colorado River for flood control and electric power generation. The report was principally authored by Davis, and was called the Fall-Davis report after Interior Secretary Albert Fall. The Fall-Davis report cited use of the Colorado River as a federal concern because the river's basin covered several states, and the river eventually entered Mexico. Though the Fall-Davis report called for a dam "at or near Boulder Canyon", the Reclamation Service (which was renamed the Bureau of Reclamation the following year) found that canyon unsuitable. One potential site at Boulder Canyon was bisected by a geologic fault; two others were so narrow there was no space for a construction camp at the bottom of the canyon or for a spillway. The Service investigated Black Canyon and found it ideal; a railway could be laid from the railhead in Las Vegas to the top of the dam site. Despite the site change, the dam project was referred to as the "Boulder Canyon Project". With little guidance on water allocation from the Supreme Court, proponents of the dam feared endless litigation. A Colorado attorney proposed that the seven states which fell within the river's basin (California, Nevada, Arizona, Utah, New Mexico, Colorado and Wyoming) form an interstate compact, with the approval of Congress. Such compacts were authorized by Article I of the United States Constitution but had never been concluded among more than two states. In 1922, representatives of seven states met with then-Secretary of Commerce Herbert Hoover. Initial talks produced no result, but when the Supreme Court handed down the "Wyoming v. Colorado" decision undermining the claims of the upstream states, they became anxious to reach an agreement. The resulting Colorado River Compact was signed on November 24, 1922. Legislation to authorize the dam was introduced repeatedly by two California Republicans, Representative Phil Swing and Senator Hiram Johnson, but representatives from other parts of the country considered the project as hugely expensive and one that would mostly benefit California. The 1927 Mississippi flood made Midwestern and Southern congressmen and senators more sympathetic toward the dam project. On March 12, 1928, the failure of the St. Francis Dam, constructed by the city of Los Angeles, caused a disastrous flood that killed up to 600 people. As that dam was a curved-gravity type, similar in design to the arch-gravity as was proposed for the Black Canyon dam, opponents claimed that the Black Canyon dam's safety could not be guaranteed. Congress authorized a board of engineers to review plans for the proposed dam. The Colorado River Board found the project feasible, but warned that should the dam fail, every downstream Colorado River community would be destroyed, and that the river might change course and empty into the Salton Sea. The Board cautioned: "To avoid such possibilities, the proposed dam should be constructed on conservative if not ultra-conservative lines." On December 21, 1928, President Coolidge signed the bill authorizing the dam. 
The Boulder Canyon Project Act appropriated $165 million for the Hoover Dam along with the downstream Imperial Dam and All-American Canal, a replacement for Beatty's canal entirely on the U.S. side of the border. It also permitted the compact to go into effect when at least six of the seven states approved it. This occurred on March 6, 1929, with Utah's ratification; Arizona did not approve it until 1944. Even before Congress approved the Boulder Canyon Project, the Bureau of Reclamation was considering what kind of dam should be used. Officials eventually decided on a massive concrete arch-gravity dam, the design of which was overseen by the Bureau's chief design engineer John L. Savage. The monolithic dam would be thick at the bottom and thin near the top, and would present a convex face towards the water above the dam. The curving arch of the dam would transmit the water's force into the abutments, in this case the rock walls of the canyon. The wedge-shaped dam would be thick at the bottom, narrowing to at the top, leaving room for a highway connecting Nevada and Arizona. On January 10, 1931, the Bureau made the bid documents available to interested parties, at five dollars a copy. The government was to provide the materials; but the contractor was to prepare the site and build the dam. The dam was described in minute detail, covering 100 pages of text and 76 drawings. A $2 million bid bond was to accompany each bid; the winner would have to post a $5 million performance bond. The contractor had seven years to build the dam, or penalties would ensue. The Wattis Brothers, heads of the Utah Construction Company, were interested in bidding on the project, but lacked the money for the performance bond. They lacked sufficient resources even in combination with their longtime partners, Morrison-Knudsen, which employed the nation's leading dam builder, Frank Crowe. They formed a joint venture to bid for the project with Pacific Bridge Company of Portland, Oregon; Henry J. Kaiser & W. A. Bechtel Company of San Francisco; MacDonald & Kahn Ltd. of Los Angeles; and the J.F. Shea Company of Portland, Oregon. The joint venture was called Six Companies, Inc. as Bechtel and Kaiser were considered one company for purposes of 6 in the name. The name was descriptive and was an inside joke among the San Franciscans in the bid, where "Six Companies" was also a Chinese benevolent association in the city. There were three valid bids, and Six Companies' bid of $48,890,955 was the lowest, within $24,000 of the confidential government estimate of what the dam would cost to build, and five million dollars less than the next-lowest bid. The city of Las Vegas had lobbied hard to be the headquarters for the dam construction, closing its many speakeasies when the decision maker, Secretary of the Interior Ray Wilbur, came to town. Instead, Wilbur announced in early 1930 that a model city was to be built in the desert near the dam site. This town became known as Boulder City, Nevada. Construction of a rail line joining Las Vegas and the dam site began in September 1930. Soon after the dam was authorized, increasing numbers of unemployed people converged on southern Nevada. Las Vegas, then a small city of some 5,000, saw between 10,000 and 20,000 unemployed descend on it. A government camp was established for surveyors and other personnel near the dam site; this soon became surrounded by a squatters' camp. Known as McKeeversville, the camp was home to men hoping for work on the project, together with their families. 
Another camp, on the flats along the Colorado River, was officially called Williamsville, but was known to its inhabitants as "Ragtown". When construction began, Six Companies hired large numbers of workers, with more than 3,000 on the payroll by 1932 and with employment peaking at 5,251 in July 1934. "Mongolian" (Chinese) labor was prevented by the construction contract, while the number of blacks employed by Six Companies never exceeded thirty, mostly lowest-pay-scale laborers in a segregated crew, who were issued separate water buckets. As part of the contract, Six Companies, Inc. was to build Boulder City to house the workers. The original timetable called for Boulder City to be built before the dam project began, but President Hoover ordered work on the dam to begin in March 1931 rather than in October. The company built bunkhouses, attached to the canyon wall, to house 480 single men at what became known as River Camp. Workers with families were left to provide their own accommodations until Boulder City could be completed, and many lived in Ragtown. The site of Hoover Dam endures extremely hot weather, and the summer of 1931 was especially torrid, with the daytime high averaging . Sixteen workers and other riverbank residents died of heat prostration between June 25 and July 26, 1931. The Industrial Workers of the World (IWW or "Wobblies"), though much-reduced from their heyday as militant labor organizers in the early years of the century, hoped to unionize the Six Companies workers by capitalizing on their discontent. They sent eleven organizers, several of whom were arrested by Las Vegas police. On August 7, 1931, the company cut wages for all tunnel workers. Although the workers sent away the organizers, not wanting to be associated with the "Wobblies", they formed a committee to represent them with the company. The committee drew up a list of demands that evening and presented them to Crowe the following morning. He was noncommittal. The workers hoped that Crowe, the general superintendent of the job, would be sympathetic; instead he gave a scathing interview to a newspaper, describing the workers as "malcontents". On the morning of the 9th, Crowe met with the committee and told them that management refused their demands, was stopping all work, and was laying off the entire work force, except for a few office workers and carpenters. The workers were given until 5 p.m. to vacate the premises. Concerned that a violent confrontation was imminent, most workers took their paychecks and left for Las Vegas to await developments. Two days later, the remainder were talked into leaving by law enforcement. On August 13, the company began hiring workers again, and two days later, the strike was called off. While the workers received none of their demands, the company guaranteed there would be no further reductions in wages. Living conditions began to improve as the first residents moved into Boulder City in late 1931. A second labor action took place in July 1935, as construction on the dam wound down. When a Six Companies manager altered working times to force workers to take lunch on their own time, workers responded with a strike. Emboldened by Crowe's reversal of the lunch decree, workers raised their demands to include a $1-per-day raise. The company agreed to ask the Federal government to supplement the pay, but no money was forthcoming from Washington. The strike ended. Before the dam could be built, the Colorado River needed to be diverted away from the construction site. 
To accomplish this, four diversion tunnels were driven through the canyon walls, two on the Nevada side and two on the Arizona side. These tunnels were in diameter. Their combined length was nearly 16,000 ft, or more than . The contract required these tunnels to be completed by October 1, 1933, with a $3,000-per-day fine to be assessed for any delay. To meet the deadline, Six Companies had to complete work by early 1933, since only in late fall and winter was the water level in the river low enough to safely divert. Tunneling began at the lower portals of the Nevada tunnels in May 1931. Shortly afterward, work began on two similar tunnels in the Arizona canyon wall. In March 1932, work began on lining the tunnels with concrete. First the base, or invert, was poured. Gantry cranes, running on rails through the entire length of each tunnel were used to place the concrete. The sidewalls were poured next. Movable sections of steel forms were used for the sidewalls. Finally, using pneumatic guns, the overheads were filled in. The concrete lining is thick, reducing the finished tunnel diameter to . The river was diverted into the two Arizona tunnels on November 13, 1932; the Nevada tunnels were kept in reserve for high water. This was done by exploding a temporary cofferdam protecting the Arizona tunnels while at the same time dumping rubble into the river until its natural course was blocked. Following the completion of the dam, the entrances to the two outer diversion tunnels were sealed at the opening and halfway through the tunnels with large concrete plugs. The downstream halves of the tunnels following the inner plugs are now the main bodies of the spillway tunnels. The inner diversion tunnels were plugged at approximately one-third of their length, beyond which they now carry steel pipes connecting the intake towers to the power plant and outlet works. The inner tunnels' outlets are equipped with gates that can be closed to drain the tunnels for maintenance. To protect the construction site from the Colorado River and to facilitate the river's diversion, two cofferdams were constructed. Work on the upper cofferdam began in September 1932, even though the river had not yet been diverted. The cofferdams were designed to protect against the possibility of the river's flooding a site at which two thousand men might be at work, and their specifications were covered in the bid documents in nearly as much detail as the dam itself. The upper cofferdam was high, and thick at its base, thicker than the dam itself. It contained of material. When the cofferdams were in place and the construction site was drained of water, excavation for the dam foundation began. For the dam to rest on solid rock, it was necessary to remove accumulated erosion soils and other loose materials in the riverbed until sound bedrock was reached. Work on the foundation excavations was completed in June 1933. During this excavation, approximately of material was removed. Since the dam was an arch-gravity type, the side-walls of the canyon would bear the force of the impounded lake. Therefore, the side-walls were excavated too, to reach virgin rock, as weathered rock might provide pathways for water seepage. Shovels for the excavation came from the Marion Power Shovel Company. The men who removed this rock were called "high scalers". While suspended from the top of the canyon with ropes, the high-scalers climbed down the canyon walls and removed the loose rock with jackhammers and dynamite. 
Falling objects were the most common cause of death on the dam site; the high scalers' work thus helped ensure worker safety. One high scaler was able to save a life in a more direct manner: when a government inspector lost his grip on a safety line and began tumbling down a slope towards almost certain death, a high scaler was able to intercept him and pull him into the air. The construction site had, even then, become a magnet for tourists; the high scalers were prime attractions and showed off for the watchers. The high scalers received considerable media attention, with one worker dubbed the "Human Pendulum" for swinging co-workers (and, at other times, cases of dynamite) across the canyon. To protect themselves against falling objects, some high scalers took cloth hats and dipped them in tar, allowing them to harden. After workers wearing such headgear were struck hard enough to inflict broken jaws yet sustained no skull damage, Six Companies ordered thousands of what were initially called "hard boiled hats" (later "hard hats") and strongly encouraged their use. The cleared, underlying rock foundation of the dam site was reinforced with grout, forming what is called a grout curtain. Holes were driven into the walls and base of the canyon, as deep as into the rock, and any cavities encountered were to be filled with grout. This was done to stabilize the rock, to prevent water from seeping past the dam through the canyon rock, and to limit "uplift", the upward pressure from water seeping under the dam. The workers were under severe time constraints due to the beginning of the concrete pour, and when they encountered hot springs or cavities too large to readily fill, they moved on without resolving the problem. A total of 58 of the 393 holes were incompletely filled. After the dam was completed and the lake began to fill, large numbers of significant leaks into the dam caused the Bureau of Reclamation to look into the situation. It found that the work had been done incompletely, and was based on less than a full understanding of the canyon's geology. New holes were drilled from inspection galleries inside the dam into the surrounding bedrock. It took nine years (1938–47), under relative secrecy, to complete the supplemental grout curtain. The first concrete was poured into the dam on June 6, 1933, 18 months ahead of schedule. Since concrete heats and contracts as it cures, the potential for uneven cooling and contraction of the concrete posed a serious problem. Bureau of Reclamation engineers calculated that if the dam were built in a single continuous pour, the concrete would take 125 years to cool, and the resulting stresses would cause the dam to crack and crumble. Instead, the ground where the dam would rise was marked with rectangles, and concrete blocks in columns were poured, some as large as and high. Each five-foot form contained a series of steel pipes; cool river water would be run through the pipes, followed by ice-cold water from a refrigeration plant. When an individual block had cured and had stopped contracting, the pipes were filled with grout. Grout was also used to fill the hairline spaces between columns, which were grooved to increase the strength of the joins. The concrete was delivered in huge steel buckets and almost 7 feet in diameter; Crowe was awarded two patents for their design. These buckets, which weighed when full, were filled at two massive concrete plants on the Nevada side, and were delivered to the site in special railcars. 
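Returning to the cooling problem described above: its scale can be conveyed with a rough, back-of-the-envelope estimate. This is not the Bureau's own calculation, which is not reproduced here; it simply assumes purely conductive cooling, a representative thermal diffusivity for mass concrete of roughly 10^-6 m^2/s, and a stand-in monolith thickness on the order of 100 m, all of which are illustrative assumptions rather than figures taken from this article.

    % Conductive cooling time scales roughly with the square of the pour thickness L
    % (alpha = thermal diffusivity, assumed ~1e-6 m^2/s for mass concrete):
    \tau \approx \frac{L^{2}}{\alpha}
    % A 5-foot (1.5 m) lift:
    \tau \approx \frac{(1.5\,\mathrm{m})^{2}}{10^{-6}\,\mathrm{m^{2}/s}} \approx 2\times10^{6}\,\mathrm{s} \approx 1\ \text{month}
    % A hypothetical monolithic pour roughly 100 m thick:
    \tau \approx \frac{(100\,\mathrm{m})^{2}}{10^{-6}\,\mathrm{m^{2}/s}} \approx 10^{10}\,\mathrm{s} \approx 300\ \text{years}

Because cooling time grows with the square of the thickness, thin lifts threaded with chilled-water pipes could stabilize in weeks, while a single monolithic pour would have taken on the order of a century or more, consistent with the engineers' estimate quoted above.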
The buckets were then suspended from aerial cableways which were used to deliver the bucket to a specific column. As the required grade of aggregate in the concrete differed depending on placement in the dam (from pea-sized gravel to 9-inch or 23 cm stones), it was vital that the bucket be maneuvered to the proper column. When the bottom of the bucket opened up, disgorging of concrete, a team of men worked it throughout the form. Although there are myths that men were caught in the pour and are entombed in the dam to this day, each bucket deepened the concrete in a form by only an inch, and Six Companies engineers would not have permitted a flaw caused by the presence of a human body. A total of of concrete was used in the dam before concrete pouring ceased on May 29, 1935. In addition, were used in the power plant and other works. More than of cooling pipes were placed within the concrete. Overall, there is enough concrete in the dam to pave a two-lane highway from San Francisco to New York. Concrete cores were removed from the dam for testing in 1995; they showed that "Hoover Dam's concrete has continued to slowly gain strength" and the dam is composed of a "durable concrete having a compressive strength exceeding the range typically found in normal mass concrete". Hoover Dam concrete is not subject to alkali–silica reaction (ASR), as the Hoover Dam builders happened to use nonreactive aggregate, unlike that at downstream Parker Dam, where ASR has caused measurable deterioration. With most work finished on the dam itself (the powerhouse remained uncompleted), a formal dedication ceremony was arranged for September 30, 1935, to coincide with a western tour being made by President Franklin D. Roosevelt. The morning of the dedication, it was moved forward three hours from 2 p.m. Pacific time to 11 a.m.; this was done because Secretary of the Interior Harold L. Ickes had reserved a radio slot for the President for 2 p.m. but officials did not realize until the day of the ceremony that the slot was for 2 p.m. Eastern Time. Despite the change in the ceremony time, and temperatures of , 10,000 people were present for the President's speech, in which he avoided mentioning the name of former President Hoover, who was not invited to the ceremony. To mark the occasion, a three-cent stamp was issued by the United States Post Office Department—bearing the name "Boulder Dam", the official name of the dam between 1933 and 1947. After the ceremony, Roosevelt made the first visit by any American president to Las Vegas. Most work had been completed by the dedication, and Six Companies negotiated with the government through late 1935 and early 1936 to settle all claims and arrange for the formal transfer of the dam to the Federal Government. The parties came to an agreement and on March 1, 1936, Secretary Ickes formally accepted the dam on behalf of the government. Six Companies was not required to complete work on one item, a concrete plug for one of the bypass tunnels, as the tunnel had to be used to take in irrigation water until the powerhouse went into operation. There were 112 deaths reported as associated with the construction of the dam. The first was J. G. Tierney, a surveyor who drowned on December 20, 1922, while looking for an ideal spot for the dam. The last death on the project's official fatality list occurred on December 20, 1935, when an "electrician's helper", Patrick Tierney, the son of J. G. Tierney, fell from an intake tower. 
Included in the fatality list are three workers, one in 1932 and two in 1933, who committed suicide onsite. Ninety-six of the deaths occurred during construction at the site. Of the 112 fatalities, 91 were Six Companies employees, three were BOR employees, and one was a visitor to the site, with the remainder employees of various contractors not part of Six Companies. Not included in the official number of fatalities were deaths that were recorded as pneumonia. Workers alleged that this diagnosis was a cover for death from carbon monoxide poisoning (brought on by the use of gasoline-fueled vehicles in the diversion tunnels), and a classification used by Six Companies to avoid paying compensation claims. The site's diversion tunnels frequently reached , enveloped in thick plumes of vehicle exhaust gases. A total of 42 workers were recorded as having died from pneumonia and were not included in the above total; none were listed as having died from carbon monoxide poisoning. No deaths of non-workers from pneumonia were recorded in Boulder City during the construction period. The initial plans for the facade of the dam, the power plant, the outlet tunnels and ornaments clashed with the modern look of an arch dam. The Bureau of Reclamation, more concerned with the dam's functionality, adorned it with a Gothic-inspired balustrade and eagle statues. This initial design was criticized by many as being too plain and unremarkable for a project of such immense scale, so Los Angeles-based architect Gordon B. Kaufmann, then the supervising architect to the Bureau of Reclamation, was brought in to redesign the exteriors. Kaufmann greatly streamlined the design and applied an elegant Art Deco style to the entire project. He designed sculptured turrets rising seamlessly from the dam face and clock faces on the intake towers set for the time in Nevada and Arizona — both states are in different time zones, but since Arizona does not observe Daylight Saving Time, the clocks display the same time for more than half the year. At Kaufmann's request, Denver artist Allen Tupper True was hired to handle the design and decoration of the walls and floors of the new dam. True's design scheme incorporated motifs of the Navajo and Pueblo tribes of the region. Although some initially were opposed to these designs, True was given the go-ahead and was officially appointed consulting artist. With the assistance of the National Laboratory of Anthropology, True researched authentic decorative motifs from Indian sand paintings, textiles, baskets and ceramics. The images and colors are based on Native American visions of rain, lightning, water, clouds, and local animals — lizards, serpents, birds — and on the Southwestern landscape of stepped mesas. In these works, which are integrated into the walkways and interior halls of the dam, True also reflected on the machinery of the operation, making the symbolic patterns appear both ancient and modern. With the agreement of Kaufmann and the engineers, True also devised for the pipes and machinery an innovative color-coding which was implemented throughout all BOR projects. True's consulting artist job lasted through 1942; it was extended so he could complete design work for the Parker, Shasta and Grand Coulee dams and power plants. True's work on the Hoover Dam was humorously referred to in a poem published in "The New Yorker", part of which read, "lose the spark, and justify the dream; but also worthy of remark will be the color scheme". 
Complementing Kaufmann and True's work, the Norwegian-born, naturalized American sculptor Oskar J.W. Hansen designed many of the sculptures on and around the dam. His works include the monument of dedication plaza, a plaque to memorialize the workers killed and the bas-reliefs on the elevator towers. In his words, Hansen wanted his work to express "the immutable calm of intellectual resolution, and the enormous power of trained physical strength, equally enthroned in placid triumph of scientific accomplishment", because "[t]he building of Hoover Dam belongs to the sagas of the daring." Hansen's dedication plaza, on the Nevada abutment, contains a sculpture of two winged figures flanking a flagpole. Surrounding the base of the monument is a terrazzo floor embedded with a "star map". The map depicts the Northern Hemisphere sky at the moment of President Roosevelt's dedication of the dam. This is intended to help future astronomers, if necessary, calculate the exact date of dedication. The bronze figures, dubbed "Winged Figures of the Republic", were each formed in a continuous pour. To put such large bronzes into place without marring the highly polished bronze surface, they were placed on ice and guided into position as the ice melted. Hansen's bas-relief on the Nevada elevator tower depicts the benefits of the dam: flood control, navigation, irrigation, water storage, and power. The bas-relief on the Arizona elevator depicts, in his words, "the visages of those Indian tribes who have inhabited mountains and plains from ages distant." Excavation for the powerhouse was carried out simultaneously with the excavation for the dam foundation and abutments. The excavation of this U-shaped structure located at the downstream toe of the dam was completed in late 1933 with the first concrete placed in November 1933. Filling of Lake Mead began February 1, 1935, even before the last of the concrete was poured that May. The powerhouse was one of the projects uncompleted at the time of the formal dedication on September 30, 1935; a crew of 500 men remained to finish it and other structures. To make the powerhouse roof bombproof, it was constructed of layers of concrete, rock, and steel with a total thickness of about , topped with layers of sand and tar. In the latter half of 1936, water levels in Lake Mead were high enough to permit power generation, and the first three Allis Chalmers built Francis turbine-generators, all on the Nevada side, began operating. In March 1937, one more Nevada generator went online and the first Arizona generator by August. By September 1939, four more generators were operating, and the dam's power plant became the largest hydroelectricity facility in the world. The final generator was not placed in service until 1961, bringing the maximum generating capacity to 1,345 megawatts at the time. Original plans called for 16 large generators, eight on each side of the river, but two smaller generators were installed instead of one large one on the Arizona side for a total of 17. The smaller generators were used to serve smaller communities at a time when the output of each generator was dedicated to a single municipality, before the dam's total power output was placed on the grid and made arbitrarily distributable. Before water from Lake Mead reaches the turbines, it enters the intake towers and then four gradually narrowing penstocks which funnel the water down towards the powerhouse. 
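How the water delivered through the penstocks translates into electrical output follows from the standard hydropower relation sketched below; the numbers used are assumed round values for illustration only, not the dam's actual head, flow, or efficiency figures, which are not quoted here.

    % Hydroelectric power from head H, volumetric flow Q, water density rho,
    % gravitational acceleration g, and overall turbine-generator efficiency eta:
    P = \eta\,\rho\,g\,Q\,H
    % Illustrative (assumed) values: eta = 0.9, rho = 1000 kg/m^3, g = 9.81 m/s^2, Q = 300 m^3/s, H = 150 m
    P \approx 0.9 \times 1000 \times 9.81 \times 300 \times 150\ \mathrm{W} \approx 4\times10^{8}\ \mathrm{W} \approx 400\ \mathrm{MW}

The relation makes clear that a unit's output scales with both the hydraulic head behind the dam and the flow admitted through its penstock, which is why the plant's generation falls as the level of Lake Mead drops.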
The intakes provide a maximum hydraulic head (water pressure) of as the water reaches a speed of about . The entire flow of the Colorado River passes through the turbines. The spillways and outlet works (jet-flow gates) are rarely used. The jet-flow gates, located in concrete structures above the river and also at the outlets of the inner diversion tunnels at river level, may be used to divert water around the dam in emergency or flood conditions, but have never done so, and in practice are used only to drain water from the penstocks for maintenance. Following an uprating project from 1986 to 1993, the total gross power rating for the plant, including two 2.4-megawatt Pelton turbine-generators that power Hoover Dam's own operations, is a maximum of 2,080 megawatts. The annual generation of Hoover Dam varies. The maximum net generation was 10.348 TWh in 1984, and the minimum since 1940 was 2.648 TWh in 1956. The average power generated was 4.2 TWh/year for 1947–2008. In 2015, the dam generated 3.6 TWh. The amount of electricity generated by Hoover Dam has been decreasing along with the falling water level in Lake Mead, due to the prolonged drought in the 2010s and high demand for the Colorado River's water. Lake Mead fell to a new record low elevation of on July 1, 2016, before beginning to rebound slowly. Under its original design, the dam will no longer be able to generate power once the water level falls below , which could occur as early as 2017. To lower the minimum power pool elevation from , five wide-head turbines, designed to work efficiently with less flow, were installed. Due to the low water levels, by 2014 the dam was providing power only during periods of peak demand. Control of water was the primary concern in the building of the dam. Power generation has allowed the dam project to be self-sustaining: proceeds from the sale of power repaid the 50-year construction loan, and those revenues also finance the multimillion-dollar yearly maintenance budget. Power is generated in step with, and only with, the release of water in response to downstream water demands. Lake Mead and downstream releases from the dam also provide water for both municipal and irrigation uses. Water released from the Hoover Dam eventually reaches several canals. The Colorado River Aqueduct and the Central Arizona Project branch off Lake Havasu, while the All-American Canal is supplied by the Imperial Dam. In total, water from Lake Mead serves 18 million people in Arizona, Nevada and California and supplies the irrigation of over of land. Electricity from the dam's powerhouse was originally sold pursuant to a fifty-year contract, authorized by Congress in 1934, which ran from 1937 to 1987. In 1984, Congress passed a new statute which set power allocations from the dam to southern California, Arizona, and Nevada for 1987 to 2017. The powerhouse was run under the original authorization by the Los Angeles Department of Water and Power and Southern California Edison; in 1987, the Bureau of Reclamation assumed control. In 2011, Congress enacted legislation extending the current contracts until 2067, after setting aside 5% of Hoover Dam's power for sale to Native American tribes, electric cooperatives, and other entities. The new arrangement began on October 1, 2017. The Bureau of Reclamation has published the allocation of the energy generated under the contracts that ended in 2017. The dam is protected against over-topping by two spillways. 
The spillway entrances are located behind each dam abutment, running roughly parallel to the canyon walls. The spillway entrance arrangement forms a classic side-flow weir with each spillway containing four and steel-drum gates. Each gate weighs and can be operated manually or automatically. Gates are raised and lowered depending on water levels in the reservoir and flood conditions. The gates could not entirely prevent water from entering the spillways but could maintain an extra of lake level. Water flowing over the spillways falls dramatically into , spillway tunnels before connecting to the outer diversion tunnels, and reentering the main river channel below the dam. This complex spillway entrance arrangement combined with the approximate elevation drop from the top of the reservoir to the river below was a difficult engineering problem and posed numerous design challenges. Each spillway's capacity of was empirically verified in post-construction tests in 1941. The large spillway tunnels have only been used twice, for testing in 1941 and because of flooding in 1983. Both times, when inspecting the tunnels after the spillways were used, engineers found major damage to the concrete linings and underlying rock. The 1941 damage was attributed to a slight misalignment of the tunnel invert (or base), which caused cavitation, a phenomenon in fast-flowing liquids in which vapor bubbles collapse with explosive force. In response to this finding, the tunnels were patched with special heavy-duty concrete and the surface of the concrete was polished mirror-smooth. The spillways were modified in 1947 by adding flip buckets, which both slow the water and decrease the spillway's effective capacity, in an attempt to eliminate conditions thought to have contributed to the 1941 damage. The 1983 damage, also due to cavitation, led to the installation of aerators in the spillways. Tests at Grand Coulee Dam showed that the technique worked, in principle. There are two lanes for automobile traffic across the top of the dam, which formerly served as the Colorado River crossing for U.S. Route 93. In the wake of the September 11, 2001 terrorist attacks, authorities expressed security concerns and the Hoover Dam Bypass project was expedited. Pending the completion of the bypass, restricted traffic was permitted over Hoover Dam. Some types of vehicles were inspected prior to crossing the dam while semi-trailer trucks, buses carrying luggage, and enclosed-box trucks over long were not allowed on the dam at all, and were diverted to U.S. Route 95 or Nevada State Routes 163/68. The four-lane Hoover Dam Bypass opened on October 19, 2010. It includes a composite steel and concrete arch bridge, the Mike O'Callaghan–Pat Tillman Memorial Bridge, downstream from the dam. With the opening of the bypass, through traffic is no longer allowed across Hoover Dam; dam visitors are allowed to use the existing roadway to approach from the Nevada side and cross to parking lots and other facilities on the Arizona side. Hoover Dam opened for tours in 1937 after its completion, but following Japan's attack on Pearl Harbor on December 7, 1941, it was closed to the public when the United States entered World War II, during which only authorized traffic, in convoys, was permitted. After the war, it reopened September 2, 1945, and by 1953, annual attendance had risen to 448,081. The dam closed on November 25, 1963, and March 31, 1969, days of mourning in remembrance of Presidents Kennedy and Eisenhower. 
In 1995, a new visitors' center was built, and the following year, visits exceeded one million for the first time. The dam closed again to the public on September 11, 2001; modified tours were resumed in December and a new "Discovery Tour" was added the following year. Today, nearly a million people per year take the tours of the dam offered by the Bureau of Reclamation. Increased security concerns by the government have led to most of the interior structure being inaccessible to tourists. As a result, few of True's decorations can now be seen by visitors. The changes in water flow and use caused by Hoover Dam's construction and operation have had a large impact on the Colorado River Delta. The construction of the dam has been implicated in causing the decline of this estuarine ecosystem. For six years after the construction of the dam, while Lake Mead filled, virtually no water reached the mouth of the river. The delta's estuary, which once had a freshwater-saltwater mixing zone stretching south of the river's mouth, was turned into an inverse estuary where the level of salinity was higher close to the river's mouth. The Colorado River had experienced natural flooding before the construction of the Hoover Dam. The dam eliminated the natural flooding, threatening many species adapted to the flooding, including both plants and animals. The construction of the dam devastated the populations of native fish in the river downstream from the dam. Four species of fish native to the Colorado River, the Bonytail chub, Colorado pikeminnow, Humpback chub, and Razorback sucker, are listed as endangered. During the years of lobbying leading up to the passage of legislation authorizing the dam in 1928, the press generally referred to the dam as "Boulder Dam" or as "Boulder Canyon Dam", even though the proposed site had shifted to Black Canyon. The Boulder Canyon Project Act of 1928 (BCPA) never mentioned a proposed name or title for the dam. The BCPA merely allows the government to "construct, operate, and maintain a dam and incidental works in the main stream of the Colorado River at Black Canyon or Boulder Canyon". When Secretary Wilbur spoke at the ceremony starting the building of the railway between Las Vegas and the dam site on September 17, 1930, he named the dam "Hoover Dam", citing a tradition of naming dams after Presidents, though none had been so honored during their terms of office. Wilbur justified his choice on the ground that Hoover was "the great engineer whose vision and persistence ... has done so much to make [the dam] possible". One writer complained in response that "the Great Engineer had quickly drained, ditched, and dammed the country." After Hoover's election defeat in 1932 and the accession of the Roosevelt administration, Secretary Ickes ordered on May 13, 1933, that the dam be referred to as "Boulder Dam". Ickes stated that Wilbur had been imprudent in naming the dam after a sitting president, that Congress had never ratified his choice, and that it had long been referred to as Boulder Dam. Unknown to the general public, Attorney General Homer Cummings informed Ickes that Congress had indeed used the name "Hoover Dam" in five different bills appropriating money for construction of the dam. The official status this conferred to the name "Hoover Dam" had been noted on the floor of the House of Representatives by Congressman Edward T. Taylor of Colorado on December 12, 1930, but was likewise ignored by Ickes. 
When Ickes spoke at the dedication ceremony on September 30, 1935, he was determined, as he recorded in his diary, "to try to nail down for good and all the name Boulder Dam." At one point in the speech, he spoke the words "Boulder Dam" five times within thirty seconds. Further, he suggested that if the dam were to be named after any one person, it should be for California Senator Hiram Johnson, a lead sponsor of the authorizing legislation. Roosevelt also referred to the dam as Boulder Dam, and the Republican-leaning "Los Angeles Times", which at the time of Ickes' name change had run an editorial cartoon showing Ickes ineffectively chipping away at an enormous sign "HOOVER DAM," reran it showing Roosevelt reinforcing Ickes, but having no greater success. In the following years, the name "Boulder Dam" failed to fully take hold, with many Americans using both names interchangeably and mapmakers divided as to which name should be printed. Memories of the Great Depression faded, and Hoover to some extent rehabilitated himself through good works during and after World War II. In 1947, a bill passed both Houses of Congress unanimously restoring the name "Hoover Dam." Ickes, who was by then a private citizen, opposed the change, stating, "I didn't know Hoover was that small a man to take credit for something he had nothing to do with." Hoover Dam was recognized as a National Civil Engineering Landmark in 1984. It was listed on the National Register of Historic Places in 1981, and was designated a National Historic Landmark in 1985, cited for its engineering innovations. India India, also called the Republic of India, is a country in South Asia. It is the seventh-largest country by area, the second-most populous country (with over 1.2 billion people), and the most populous democracy in the world. It is bounded by the Indian Ocean on the south, the Arabian Sea on the southwest, and the Bay of Bengal on the southeast. It shares land borders with Pakistan to the west; China, Nepal, and Bhutan to the northeast; and Bangladesh and Myanmar to the east. In the Indian Ocean, India is in the vicinity of Sri Lanka and the Maldives. India's Andaman and Nicobar Islands share a maritime border with Thailand and Indonesia. The Indian subcontinent was home to the urban Indus Valley Civilisation of the 3rd millennium BCE. In the following millennium, the oldest scriptures associated with Hinduism began to be composed. Social stratification, based on caste, emerged in the first millennium BCE, and Buddhism and Jainism arose. Early political consolidations took place under the Maurya and Gupta empires; the later peninsular Middle Kingdoms influenced cultures as far as southeast Asia. In the medieval era, Judaism, Zoroastrianism, Christianity, and Islam arrived, and Sikhism emerged, all adding to the region's diverse culture. Much of the north fell to the Delhi sultanate; the south was united under the Vijayanagara Empire. The economy expanded in the 17th century in the Mughal Empire. In the mid-18th century, the subcontinent came under British East India Company rule, and in the mid-19th under British crown rule. A nationalist movement emerged in the late 19th century, which later, under Mahatma Gandhi, was noted for nonviolent resistance and led to India's independence in 1947. In 2017, the Indian economy was the world's sixth largest by nominal GDP and third largest by purchasing power parity. 
Following market-based economic reforms in 1991, India became one of the fastest-growing major economies and is considered a newly industrialised country. However, it continues to face the challenges of poverty, corruption, malnutrition, and inadequate public healthcare. A nuclear weapons state and regional power, it has the second largest standing army in the world and ranks fifth in military expenditure among nations. India is a federal republic governed under a parliamentary system and consists of 29 states and 7 union territories. It is a pluralistic, multilingual and multi-ethnic society and is also home to a diversity of wildlife in a variety of protected habitats. The name "India" is derived from "Indus", which originates from the Old Persian word "Hindu". The latter term stems from the Sanskrit word "Sindhu", which was the historical local appellation for the Indus River. The ancient Greeks referred to the Indians as "Indoi" (Ἰνδοί), which translates as "The people of the Indus". The geographical term "Bharat", which is recognised by the Constitution of India as an official name for the country, is used by many Indian languages in its variations. It is a modernisation of the historical name "Bharatavarsha", which traditionally referred to the Indian subcontinent and gained increasing currency from the mid-19th century as a native name for India. "Hindustan" is a Persian name for India dating back to the 3rd century BCE. It was introduced into India by the Mughals and widely used since then. Its meaning varied, referring to a region that encompassed northern India and Pakistan or India in its entirety. Currently, the name may refer to either the northern part of India or the entire country. The earliest authenticated human remains in South Asia date to about 30,000 years ago. Nearly contemporaneous Mesolithic rock art sites have been found in many parts of the Indian subcontinent, including at the Bhimbetka rock shelters in Madhya Pradesh. Around 7000 BCE, the first known Neolithic settlements appeared on the subcontinent, at Mehrgarh and other sites. These gradually developed into the Indus Valley Civilisation, the first urban culture in South Asia; it flourished during 2500–1900 BCE in a region stretching from northeast Afghanistan to Pakistan and northwest India. Centred around cities such as Mohenjo-daro, Harappa, Dholavira, and Kalibangan, and relying on varied forms of subsistence, the civilisation engaged robustly in crafts production and wide-ranging trade. During the period 2000–500 BCE, in terms of culture, many regions of the subcontinent transitioned from the Chalcolithic to the Iron Age. The Vedas, the oldest scriptures associated with Hinduism, were composed during this period, and historians have analysed these to posit a Vedic culture in the Punjab region and the upper Gangetic Plain. Most historians also consider this period to have encompassed several waves of Indo-Aryan migration into the subcontinent from the north-west. The caste system, which created a hierarchy of priests, warriors, and free peasants, but which excluded indigenous peoples by labeling their occupations impure, arose during this period. On the Deccan Plateau, archaeological evidence from this period suggests the existence of a chiefdom stage of political organisation. In South India, a progression to sedentary life is indicated by the large number of megalithic monuments dating from this period, as well as by nearby traces of agriculture, irrigation tanks, and craft traditions. 
In the late Vedic period, around the 6th century BCE, the small states and chiefdoms of the Ganges Plain and the north-western regions had consolidated into 16 major oligarchies and monarchies that were known as the "mahajanapadas". The emerging urbanisation gave rise to non-Vedic religious movements, two of which became independent religions. Jainism came into prominence during the life of its exemplar, Mahavira. Buddhism, based on the teachings of Gautama Buddha, attracted followers from all social classes excepting the middle class; chronicling the life of the Buddha was central to the beginnings of recorded history in India. In an age of increasing urban wealth, both religions held up renunciation as an ideal, and both established long-lasting monastic traditions. Politically, by the 3rd century BCE, the kingdom of Magadha had annexed or reduced other states to emerge as the Mauryan Empire. The empire was once thought to have controlled most of the subcontinent excepting the far south, but its core regions are now thought to have been separated by large autonomous areas. The Mauryan kings are known as much for their empire-building and determined management of public life as for Ashoka's renunciation of militarism and far-flung advocacy of the Buddhist "dhamma". The Sangam literature of the Tamil language reveals that, between 200 BCE and 200 CE, the southern peninsula was being ruled by the Cheras, the Cholas, and the Pandyas, dynasties that traded extensively with the Roman Empire and with West and South-East Asia. In North India, Hinduism asserted patriarchal control within the family, leading to increased subordination of women. By the 4th and 5th centuries, the Gupta Empire had created in the greater Ganges Plain a complex system of administration and taxation that became a model for later Indian kingdoms. Under the Guptas, a renewed Hinduism based on devotion rather than the management of ritual began to assert itself. The renewal was reflected in a flowering of sculpture and architecture, which found patrons among an urban elite. Classical Sanskrit literature flowered as well, and Indian science, astronomy, medicine, and mathematics made significant advances. The Indian early medieval age, 600 CE to 1200 CE, is defined by regional kingdoms and cultural diversity. When Harsha of Kannauj, who ruled much of the Indo-Gangetic Plain from 606 to 647 CE, attempted to expand southwards, he was defeated by the Chalukya ruler of the Deccan. When his successor attempted to expand eastwards, he was defeated by the Pala king of Bengal. When the Chalukyas attempted to expand southwards, they were defeated by the Pallavas from farther south, who in turn were opposed by the Pandyas and the Cholas from still farther south. No ruler of this period was able to create an empire and consistently control lands much beyond his core region. During this time, pastoral peoples whose land had been cleared to make way for the growing agricultural economy were accommodated within caste society, as were new non-traditional ruling classes. The caste system consequently began to show regional differences. In the 6th and 7th centuries, the first devotional hymns were created in the Tamil language. They were imitated all over India and led to both the resurgence of Hinduism and the development of all modern languages of the subcontinent. Indian royalty, big and small, and the temples they patronised drew citizens in great numbers to the capital cities, which became economic hubs as well. 
Temple towns of various sizes began to appear everywhere as India underwent another urbanisation. By the 8th and 9th centuries, the effects were felt in South-East Asia, as South Indian culture and political systems were exported to lands that became part of modern-day Myanmar, Thailand, Laos, Cambodia, Vietnam, Philippines, Malaysia, and Java. Indian merchants, scholars, and sometimes armies were involved in this transmission; South-East Asians took the initiative as well, with many sojourning in Indian seminaries and translating Buddhist and Hindu texts into their languages. After the 10th century, Muslim Central Asian nomadic clans, using swift-horse cavalry and raising vast armies united by ethnicity and religion, repeatedly overran South Asia's north-western plains, leading eventually to the establishment of the Islamic Delhi Sultanate in 1206. The sultanate was to control much of North India and to make many forays into South India. Although at first disruptive for the Indian elites, the sultanate largely left its vast non-Muslim subject population to its own laws and customs. By repeatedly repulsing Mongol raiders in the 13th century, the sultanate saved India from the devastation visited on West and Central Asia, setting the scene for centuries of migration of fleeing soldiers, learned men, mystics, traders, artists, and artisans from that region into the subcontinent, thereby creating a syncretic Indo-Islamic culture in the north. The sultanate's raiding and weakening of the regional kingdoms of South India paved the way for the indigenous Vijayanagara Empire. Embracing a strong Shaivite tradition and building upon the military technology of the sultanate, the empire came to control much of peninsular India, and was to influence South Indian society for long afterwards. In the early 16th century, northern India, being then under mainly Muslim rulers, fell again to the superior mobility and firepower of a new generation of Central Asian warriors. The resulting Mughal Empire did not stamp out the local societies it came to rule, but rather balanced and pacified them through new administrative practices and diverse and inclusive ruling elites, leading to more systematic, centralised, and uniform rule. Eschewing tribal bonds and Islamic identity, especially under Akbar, the Mughals united their far-flung realms through loyalty, expressed through a Persianised culture, to an emperor who had near-divine status. The Mughal state's economic policies, deriving most revenues from agriculture and mandating that taxes be paid in the well-regulated silver currency, caused peasants and artisans to enter larger markets. The relative peace maintained by the empire during much of the 17th century was a factor in India's economic expansion, resulting in greater patronage of painting, literary forms, textiles, and architecture. Newly coherent social groups in northern and western India, such as the Marathas, the Rajputs, and the Sikhs, gained military and governing ambitions during Mughal rule, which, through collaboration or adversity, gave them both recognition and military experience. Expanding commerce during Mughal rule gave rise to new Indian commercial and political elites along the coasts of southern and eastern India. As the empire disintegrated, many among these elites were able to seek and control their own affairs. 
By the early 18th century, with the lines between commercial and political dominance being increasingly blurred, a number of European trading companies, including the English East India Company, had established coastal outposts. The East India Company's control of the seas, greater resources, and more advanced military training and technology led it to increasingly flex its military muscle and caused it to become attractive to a portion of the Indian elite; these factors were crucial in allowing the company to gain control over the Bengal region by 1765 and sideline the other European companies. Its further access to the riches of Bengal and the subsequent increased strength and size of its army enabled it to annex or subdue most of India by the 1820s. India was then no longer exporting manufactured goods as it long had, but was instead supplying the British Empire with raw materials, and many historians consider this to be the onset of India's colonial period. By this time, with its economic power severely curtailed by the British parliament and effectively having been made an arm of British administration, the company began to more consciously enter non-economic arenas such as education, social reform, and culture. Historians consider India's modern age to have begun sometime between 1848 and 1885. The appointment in 1848 of Lord Dalhousie as Governor General of the East India Company set the stage for changes essential to a modern state. These included the consolidation and demarcation of sovereignty, the surveillance of the population, and the education of citizens. Technological changes—among them, railways, canals, and the telegraph—were introduced not long after their introduction in Europe. However, disaffection with the company also grew during this time, and set off the Indian Rebellion of 1857. Fed by diverse resentments and perceptions, including invasive British-style social reforms, harsh land taxes, and summary treatment of some rich landowners and princes, the rebellion rocked many regions of northern and central India and shook the foundations of Company rule. Although the rebellion was suppressed by 1858, it led to the dissolution of the East India Company and the direct administration of India by the British government. Proclaiming a unitary state and a gradual but limited British-style parliamentary system, the new rulers also protected princes and landed gentry as a feudal safeguard against future unrest. In the decades following, public life gradually emerged all over India, leading eventually to the founding of the Indian National Congress in 1885. The rush of technology and the commercialisation of agriculture in the second half of the 19th century was marked by economic setbacks—many small farmers became dependent on the whims of far-away markets. There was an increase in the number of large-scale famines, and, despite the risks of infrastructure development borne by Indian taxpayers, little industrial employment was generated for Indians. There were also salutary effects: commercial cropping, especially in the newly canalled Punjab, led to increased food production for internal consumption. The railway network provided critical famine relief, notably reduced the cost of moving goods, and helped the nascent Indian-owned industry. After World War I, in which approximately one million Indians served, a new period began. 
It was marked by British reforms but also repressive legislation, by more strident Indian calls for self-rule, and by the beginnings of a nonviolent movement of non-co-operation, of which Mohandas Karamchand Gandhi would become the leader and enduring symbol. During the 1930s, slow legislative reform was enacted by the British; the Indian National Congress won victories in the resulting elections. The next decade was beset with crises: Indian participation in World War II, the Congress's final push for non-co-operation, and an upsurge of Muslim nationalism. All were capped by the advent of independence in 1947, but tempered by the partition of India into two states: India and Pakistan. Vital to India's self-image as an independent nation was its constitution, completed in 1950, which put in place a secular and democratic republic. It has remained a democracy with civil liberties, an active Supreme Court, and a largely independent press. Economic liberalisation, which was begun in the 1990s, has created a large urban middle class, transformed India into one of the world's fastest-growing economies, and increased its geopolitical clout. Indian movies, music, and spiritual teachings play an increasing role in global culture. Yet, India is also shaped by seemingly unyielding poverty, both rural and urban; by religious and caste-related violence; by Maoist-inspired Naxalite insurgencies; and by separatism in Jammu and Kashmir and in Northeast India. It has unresolved territorial disputes with China and with Pakistan. The India–Pakistan nuclear rivalry came to a head in 1998. India's sustained democratic freedoms are unique among the world's newer nations; however, in spite of its recent economic successes, freedom from want for its disadvantaged population remains a goal yet to be achieved. India comprises the bulk of the Indian subcontinent, lying atop the Indian tectonic plate, and part of the Indo-Australian Plate. India's defining geological processes began 75 million years ago when the Indian plate, then part of the southern supercontinent Gondwana, began a north-eastward drift caused by seafloor spreading to its south-west and, later, south and south-east. Simultaneously, the vast Tethyan oceanic crust, to its northeast, began to subduct under the Eurasian plate. These dual processes, driven by convection in the Earth's mantle, both created the Indian Ocean and caused the Indian continental crust eventually to under-thrust Eurasia and to uplift the Himalayas. Immediately south of the emerging Himalayas, plate movement created a vast trough that rapidly filled with river-borne sediment and now constitutes the Indo-Gangetic Plain. Cut off from the plain by the ancient Aravalli Range lies the Thar Desert. The original Indian plate survives as peninsular India, the oldest and geologically most stable part of India. It extends as far north as the Satpura and Vindhya ranges in central India. These parallel chains run from the Arabian Sea coast in Gujarat in the west to the coal-rich Chota Nagpur Plateau in Jharkhand in the east. To the south, the remaining peninsular landmass, the Deccan Plateau, is flanked on the west and east by coastal ranges known as the Western and Eastern Ghats; the plateau contains the country's oldest rock formations, some over one billion years old. Constituted in such fashion, India lies to the north of the equator between 6° 44' and 35° 30' north latitude and 68° 7' and 97° 25' east longitude. 
India has a long coastline, the greater part of which belongs to peninsular India and the remainder to the Andaman, Nicobar, and Lakshadweep island chains. According to the Indian naval hydrographic charts, the mainland coastline consists of the following: 43% sandy beaches; 11% rocky shores, including cliffs; and 46% mudflats or marshy shores. Major Himalayan-origin rivers that substantially flow through India include the Ganges and the Brahmaputra, both of which drain into the Bay of Bengal. Important tributaries of the Ganges include the Yamuna and the Kosi; the latter's extremely low gradient often leads to severe floods and course changes. Major peninsular rivers, whose steeper gradients prevent their waters from flooding, include the Godavari, the Mahanadi, the Kaveri, and the Krishna, which also drain into the Bay of Bengal; and the Narmada and the Tapti, which drain into the Arabian Sea. Coastal features include the marshy Rann of Kutch of western India and the alluvial Sundarbans delta of eastern India; the latter is shared with Bangladesh. India has two archipelagos: the Lakshadweep, coral atolls off India's south-western coast; and the Andaman and Nicobar Islands, a volcanic chain in the Andaman Sea. The Indian climate is strongly influenced by the Himalayas and the Thar Desert, both of which drive the economically and culturally pivotal summer and winter monsoons. The Himalayas prevent cold Central Asian katabatic winds from blowing in, keeping the bulk of the Indian subcontinent warmer than most locations at similar latitudes. The Thar Desert plays a crucial role in attracting the moisture-laden south-west summer monsoon winds that, between June and October, provide the majority of India's rainfall. Four major climatic groupings predominate in India: tropical wet, tropical dry, subtropical humid, and montane. India lies within the Indomalaya ecozone and contains three biodiversity hotspots. One of 17 megadiverse countries, it hosts 8.6% of all mammalian, 13.7% of all avian, 7.9% of all reptilian, 6% of all amphibian, 12.2% of all piscine, and 6.0% of all flowering plant species. About 21.2% of the country's landmass is covered by forests (tree canopy density >10%), of which 12.2% comprises moderately or very dense forests (tree canopy density >40%). Endemism is high among plants, 33%, and among ecoregions such as the "shola" forests. Habitat ranges from the tropical rainforest of the Andaman Islands, Western Ghats, and North-East India to the coniferous forest of the Himalaya. Between these extremes lie the moist deciduous "sal" forest of eastern India; the dry deciduous teak forest of central and southern India; and the babul-dominated thorn forest of the central Deccan and western Gangetic plain. The medicinal neem, widely used in rural Indian herbal remedies, is a key Indian tree. The luxuriant pipal fig tree, shown on the seals of Mohenjo-daro, shaded Gautama Buddha as he sought enlightenment. Many Indian species descend from taxa originating in Gondwana, from which the Indian plate separated more than 105 million years before present. Peninsular India's subsequent movement towards and collision with the Laurasian landmass set off a mass exchange of species. Epochal volcanism and climatic changes 20 million years ago forced a mass extinction. Mammals then entered India from Asia through two zoogeographical passes flanking the rising Himalaya. Thus, while 45.8% of reptiles and 55.8% of amphibians are endemic, only 12.6% of mammals and 4.5% of birds are. 
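To put the forest-cover percentages above in absolute terms, they can be applied to India's total geographical area. The area figure of roughly 3.287 million km² used below is a commonly cited value supplied here for illustration, not one stated in the text, so the resulting areas are approximate derived estimates.

```python
# Convert the quoted forest-cover percentages into approximate areas.
# TOTAL_AREA_KM2 is an assumed, commonly cited figure for India's total
# geographical area; it is not given in the text above.

TOTAL_AREA_KM2 = 3_287_000

forest_cover_share = 0.212   # tree canopy density > 10%
dense_forest_share = 0.122   # moderately or very dense forest (canopy > 40%),
                             # read here as a share of the total landmass

print(f"Forest cover:          ~{TOTAL_AREA_KM2 * forest_cover_share:,.0f} km^2")
print(f"Moderately/very dense: ~{TOTAL_AREA_KM2 * dense_forest_share:,.0f} km^2")
```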
Among them are the Nilgiri leaf monkey and Beddome's toad of the Western Ghats. India contains 172 IUCN-designated threatened animal species, or 2.9% of endangered forms. These include the Asiatic lion, the Bengal tiger, the snow leopard and the Indian white-rumped vulture, which, by ingesting the carrion of diclofenac-laced cattle, nearly became extinct. The pervasive and ecologically devastating human encroachment of recent decades has critically endangered Indian wildlife. In response, the system of national parks and protected areas, first established in 1935, was substantially expanded. In 1972, India enacted the Wildlife Protection Act and Project Tiger to safeguard crucial wilderness; the Forest Conservation Act was enacted in 1980 and amendments added in 1988. India hosts more than five hundred wildlife sanctuaries and thirteen biosphere reserves, four of which are part of the World Network of Biosphere Reserves; twenty-five wetlands are registered under the Ramsar Convention. India is the world's most populous democracy. A parliamentary republic with a multi-party system, it has seven recognised national parties, including the Indian National Congress and the Bharatiya Janata Party (BJP), and more than 40 regional parties. The Congress is considered centre-left in Indian political culture, and the BJP right-wing. For most of the period between 1950—when India first became a republic—and the late 1980s, the Congress held a majority in the parliament. Since then, however, it has increasingly shared the political stage with the BJP, as well as with powerful regional parties which have often forced the creation of multi-party coalitions at the centre. In the Republic of India's first three general elections, in 1951, 1957, and 1962, the Jawaharlal Nehru-led Congress won easy victories. On Nehru's death in 1964, Lal Bahadur Shastri briefly became prime minister; he was succeeded, after his own unexpected death in 1966, by Indira Gandhi, who went on to lead the Congress to election victories in 1967 and 1971. Following public discontent with the state of emergency she declared in 1975, the Congress was voted out of power in 1977; the then-new Janata Party, which had opposed the emergency, was voted in. Its government lasted just over three years. Voted back into power in 1980, the Congress saw a change in leadership in 1984, when Indira Gandhi was assassinated; she was succeeded by her son Rajiv Gandhi, who won an easy victory in the general elections later that year. The Congress was voted out again in 1989 when a National Front coalition, led by the newly formed Janata Dal in alliance with the Left Front, won the elections; that government too proved relatively short-lived, lasting just under two years. Elections were held again in 1991; no party won an absolute majority. The Congress, as the largest single party, was able to form a minority government led by P. V. Narasimha Rao. A two-year period of political turmoil followed the general election of 1996. Several short-lived alliances shared power at the centre. The BJP formed a government briefly in 1996; it was followed by two comparatively long-lasting United Front coalitions, which depended on external support. In 1998, the BJP was able to form a successful coalition, the National Democratic Alliance (NDA). Led by Atal Bihari Vajpayee, the NDA became the first non-Congress, coalition government to complete a five-year term. 
In the 2004 Indian general elections, again no party won an absolute majority, but the Congress emerged as the largest single party, forming another successful coalition: the United Progressive Alliance (UPA). It had the support of left-leaning parties and MPs who opposed the BJP. The UPA returned to power in the 2009 general election with increased numbers, and it no longer required external support from India's communist parties. That year, Manmohan Singh became the first prime minister since Jawaharlal Nehru in 1957 and 1962 to be re-elected to a consecutive five-year term. In the 2014 general election, the BJP became the first political party since 1984 to win a majority and govern without the support of other parties. The Prime Minister of India is Narendra Modi, who was formerly Chief Minister of Gujarat. On 20 July 2017, Ram Nath Kovind was elected India’s 14th President and took the oath of office on 25 July 2017. India is a federation with a parliamentary system governed under the Constitution of India, which serves as the country's supreme legal document. It is a constitutional republic and representative democracy, in which "majority rule is tempered by minority rights protected by law". Federalism in India defines the power distribution between the Union, or Central, government and the states. The government abides by constitutional checks and balances. The Constitution of India, which came into effect on 26 January 1950, states in its preamble that India is a sovereign, socialist, secular, democratic republic. India's form of government, traditionally described as "quasi-federal" with a strong centre and weak states, has grown increasingly federal since the late 1990s as a result of political, economic, and social changes. The Union government comprises three branches: the executive, the legislative, and the judicial. India is a federation composed of 29 states and 7 union territories. All states, as well as the union territories of Puducherry and the National Capital Territory of Delhi, have elected legislatures and governments, both patterned on the Westminster model. The remaining five union territories are directly ruled by the centre through appointed administrators. In 1956, under the States Reorganisation Act, states were reorganised on a linguistic basis. Since then, their structure has remained largely unchanged. Each state or union territory is further divided into administrative districts. The districts, in turn, are further divided into tehsils and ultimately into villages. Since its independence in 1947, India has maintained cordial relations with most nations. In the 1950s, it strongly supported decolonisation in Africa and Asia and played a lead role in the Non-Aligned Movement. In the late 1980s, the Indian military twice intervened abroad at the invitation of neighbouring countries: a peace-keeping operation in Sri Lanka between 1987 and 1990; and an armed intervention to prevent a 1988 coup d'état attempt in the Maldives. India has tense relations with neighbouring Pakistan; the two nations have gone to war four times: in 1947, 1965, 1971, and 1999. Three of these wars were fought over the disputed territory of Kashmir, while the fourth, the 1971 war, followed from India's support for the independence of Bangladesh. After waging the 1962 Sino-Indian War and the 1965 war with Pakistan, India pursued close military and economic ties with the Soviet Union; by the late 1960s, the Soviet Union was its largest arms supplier. 
Aside from ongoing strategic relations with Russia, India has wide-ranging defence relations with Israel and France. In recent years, it has played key roles in the South Asian Association for Regional Cooperation and the World Trade Organisation. The nation has provided 100,000 military and police personnel to serve in 35 UN peacekeeping operations across four continents. It participates in the East Asia Summit, the G8+5, and other multilateral forums. India has close economic ties with South America, Asia, and Africa; it pursues a "Look East" policy that seeks to strengthen partnerships with the ASEAN nations, Japan, and South Korea that revolve around many issues, but especially those involving economic investment and regional security. China's nuclear test of 1964, as well as its repeated threats to intervene in support of Pakistan in the 1965 war, convinced India to develop nuclear weapons. India conducted its first nuclear weapons test in 1974 and carried out further underground testing in 1998. Despite criticism and military sanctions, India has signed neither the Comprehensive Nuclear-Test-Ban Treaty nor the Nuclear Non-Proliferation Treaty, considering both to be flawed and discriminatory. India maintains a "no first use" nuclear policy and is developing a nuclear triad capability as a part of its "minimum credible deterrence" doctrine. It is developing a ballistic missile defence shield and, in collaboration with Russia, a fifth-generation fighter jet. Other indigenous military projects involve the design and implementation of "Vikrant"-class aircraft carriers and "Arihant"-class nuclear submarines. Since the end of the Cold War, India has increased its economic, strategic, and military co-operation with the United States and the European Union. In 2008, a civilian nuclear agreement was signed between India and the United States. Although India possessed nuclear weapons at the time and was not party to the Nuclear Non-Proliferation Treaty, it received waivers from the International Atomic Energy Agency and the Nuclear Suppliers Group, ending earlier restrictions on India's nuclear technology and commerce. As a consequence, India became the sixth "de facto" nuclear weapons state. India subsequently signed co-operation agreements involving civilian nuclear energy with Russia, France, the United Kingdom, and Canada. The President of India is the supreme commander of the nation's armed forces; with 1.395 million active troops, they compose the world's second-largest military, which comprises the Indian Army, the Indian Navy, the Indian Air Force, and the Indian Coast Guard. The official Indian defence budget for 2011 was US$36.03 billion, or 1.83% of GDP. For the fiscal year spanning 2012–2013, US$40.44 billion was budgeted. According to a 2008 SIPRI report, India's annual military expenditure in terms of purchasing power stood at US$72.7 billion. In 2011, the annual defence budget increased by 11.6%, although this does not include funds that reach the military through other branches of government. India is the world's largest arms importer; between 2007 and 2011, it accounted for 10% of funds spent on international arms purchases. Much of the military expenditure was focused on defence against Pakistan and countering growing Chinese influence in the Indian Ocean. 
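The 2011 defence-budget figures quoted above imply a nominal GDP of roughly US$2 trillion for that year, since US$36.03 billion divided by 1.83% is about US$1,969 billion; the implied GDP is an inference from the two stated numbers, not a figure given in the text. A minimal sketch of the arithmetic:

```python
# Back-of-the-envelope check of the 2011 defence-budget figures quoted above.
# The implied GDP is derived from the two stated numbers, not a sourced figure.

defence_budget_usd_bn = 36.03   # official 2011 defence budget, US$ billion
share_of_gdp = 0.0183           # stated share of GDP (1.83%)

implied_gdp_usd_trn = defence_budget_usd_bn / share_of_gdp / 1000.0
print(f"Implied 2011 nominal GDP: about {implied_gdp_usd_trn:.2f} trillion US$")
```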
According to the International Monetary Fund (IMF), the Indian economy in 2017 was nominally worth US$2.611 trillion; it is the sixth-largest economy by market exchange rates, and is, at US$9.459 trillion, the third-largest by purchasing power parity, or PPP. With its average annual GDP growth rate of 5.8% over the past two decades, and reaching 6.1% during 2011–12, India is one of the world's fastest-growing economies. However, the country ranks 140th in the world in nominal GDP per capita and 129th in GDP per capita at PPP. Until 1991, all Indian governments followed protectionist policies that were influenced by socialist economics. Widespread state intervention and regulation largely walled the economy off from the outside world. An acute balance of payments crisis in 1991 forced the nation to liberalise its economy; since then it has slowly moved towards a free-market system by emphasising both foreign trade and direct investment inflows. India has been a member of WTO since 1 January 1995. The 513.7-million-worker Indian labour force is the world's second-largest. The service sector makes up 55.6% of GDP, the industrial sector 26.3% and the agricultural sector 18.1%. India's foreign exchange remittances of US$70 billion in 2014, the largest in the world, were contributed to its economy by 25 million Indians working in foreign countries. Major agricultural products include rice, wheat, oilseed, cotton, jute, tea, sugarcane, and potatoes. Major industries include textiles, telecommunications, chemicals, pharmaceuticals, biotechnology, food processing, steel, transport equipment, cement, mining, petroleum, machinery, and software. In 2006, the share of external trade in India's GDP stood at 24%, up from 6% in 1985. In 2008, India's share of world trade was 1.68%; in 2011, India was the world's tenth-largest importer and the nineteenth-largest exporter. Major exports include petroleum products, textile goods, jewellery, software, engineering goods, chemicals, and leather manufactures. Major imports include crude oil, machinery, gems, fertiliser, and chemicals. Between 2001 and 2011, the contribution of petrochemical and engineering goods to total exports grew from 14% to 42%. India was the second largest textile exporter after China in the world in the calendar year 2013. Averaging an economic growth rate of 7.5% for several years prior to 2007, India has more than doubled its hourly wage rates during the first decade of the 21st century. Some 431 million Indians have left poverty since 1985; India's middle classes are projected to number around 580 million by 2030. Though ranking 51st in global competitiveness, India ranks 17th in financial market sophistication, 24th in the banking sector, 44th in business sophistication, and 39th in innovation, ahead of several advanced economies. With 7 of the world's top 15 information technology outsourcing companies based in India, the country is viewed as the second-most favourable outsourcing destination after the United States. India's consumer market, the world's eleventh-largest, is expected to become fifth-largest by 2030. However, hardly 2% of Indians pay income taxes. 
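The sector shares quoted above can be combined with the US$2.611 trillion nominal GDP figure to give rough dollar values per sector. This assumes the shares and the GDP figure refer to roughly the same period, so the results below are derived approximations rather than figures from the text.

```python
# Convert the quoted GDP sector shares into approximate nominal values.
# The dollar amounts per sector are derived here, not stated in the article.

NOMINAL_GDP_USD_TRN = 2.611  # IMF 2017 nominal GDP, US$ trillion

sector_shares = {
    "services": 0.556,
    "industry": 0.263,
    "agriculture": 0.181,
}

for sector, share in sector_shares.items():
    print(f"{sector:<12} ~US${NOMINAL_GDP_USD_TRN * share:.2f} trillion")

# The shares sum to 1.0, so the sector values add back up to total GDP.
print("share total:", sum(sector_shares.values()))
```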
Driven by growth, India's nominal GDP per capita has steadily increased from US$329 in 1991, when economic liberalisation began, to US$1,265 in 2010, to an estimated US$1,723 in 2016, and is expected to grow to US$2,358 by 2020; however, it has remained lower than those of other Asian developing countries such as Indonesia, Malaysia, Philippines, Sri Lanka, and Thailand, and is expected to remain so in the near future. However, it is higher than that of Pakistan, Nepal, Afghanistan, Bangladesh and others. According to a 2011 PricewaterhouseCoopers report, India's GDP at purchasing power parity could overtake that of the United States by 2045. During the next four decades, Indian GDP is expected to grow at an annualised average of 8%, making it potentially the world's fastest-growing major economy until 2050. The report highlights key growth factors: a young and rapidly growing working-age population; growth in the manufacturing sector because of rising education and engineering skill levels; and sustained growth of the consumer market driven by a rapidly growing middle-class. The World Bank cautions that, for India to achieve its economic potential, it must continue to focus on public sector reform, transport infrastructure, agricultural and rural development, removal of labour regulations, education, energy security, and public health and nutrition. According to the Worldwide Cost of Living Report 2017 released by the Economist Intelligence Unit (EIU), which compares more than 400 individual prices across 160 products and services, four of the world's cheapest cities were in India: Bangalore (3rd), Mumbai (5th), Chennai (5th) and New Delhi (8th). India's telecommunication industry, the world's fastest-growing, added 227 million subscribers during the period 2010–11, and after the third quarter of 2017, India surpassed the US to become the second largest smartphone market in the world after China. The Indian automotive industry, the world's second-fastest growing, increased domestic sales by 26% during 2009–10, and exports by 36% during 2008–09. India's capacity to generate electrical power is 300 gigawatts, of which 42 gigawatts is renewable. At the end of 2011, the Indian IT industry employed 2.8 million professionals, generated revenues close to US$100 billion equalling 7.5% of Indian GDP and contributed 26% of India's merchandise exports. The pharmaceutical industry in India is among the significant emerging markets for the global pharmaceutical industry. The Indian pharmaceutical market is expected to reach $48.5 billion by 2020. India's R&D spending constitutes 60% of the biopharmaceutical industry. India is among the top 12 biotech destinations in the world. The Indian biotech industry grew by 15.1% in 2012–13, increasing its revenues from 204.4 billion INR (Indian rupees) to 235.24 billion INR (US$3.94 billion at the June 2013 exchange rate of approximately 60 INR per US$). Despite economic growth during recent decades, India continues to face socio-economic challenges. In 2006, India contained the largest number of people living below the World Bank's international poverty line of US$1.25 per day, the proportion having decreased from 60% in 1981 to 42% in 2005; under its later revised poverty line, it was 21% in 2011. 30.7% of India's children under the age of five are underweight. According to a Food and Agriculture Organization report in 2015, 15% of the population is undernourished. The Mid-Day Meal Scheme attempts to lower these rates. 
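The per-capita figures quoted above imply a compound annual growth rate that is easy to check: moving from US$329 in 1991 to an estimated US$1,723 in 2016 corresponds to roughly 6.8% per year in nominal terms. A minimal sketch of that calculation:

```python
# Implied compound annual growth rate (CAGR) of nominal GDP per capita,
# using only the two endpoint values quoted above.

start_value, start_year = 329.0, 1991    # US$ per capita
end_value, end_year = 1723.0, 2016       # US$ per capita (estimated)

years = end_year - start_year
cagr = (end_value / start_value) ** (1.0 / years) - 1.0
print(f"Implied nominal per-capita CAGR 1991-2016: {cagr:.1%}")
```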
According to a Walk Free Foundation report in 2016, there were an estimated 18.3 million people in India, or 1.4% of the population, living in forms of modern slavery, such as bonded labour, child labour, human trafficking, and forced begging, among others. According to the 2011 census, there were 10.1 million child labourers in the country, a decline of 2.6 million from 12.6 million child labourers in 2001. Since 1991, economic inequality between India's states has consistently grown: the per-capita net state domestic product of the richest states in 2007 was 3.2 times that of the poorest. Corruption in India is perceived to have decreased. According to the Corruption Perceptions Index, India ranked 76th out of 176 countries in 2016, improving from 85th in 2014. With 1,210,193,422 residents reported in the 2011 provisional census report, India is the world's second-most populous country. Its population grew by 17.64% during 2001–2011, compared to 21.54% growth in the previous decade (1991–2001). The human sex ratio, according to the 2011 census, is 940 females per 1,000 males. The median age was 27.6 years. The first post-colonial census, conducted in 1951, counted 361.1 million people. Medical advances made in the last 50 years as well as increased agricultural productivity brought about by the "Green Revolution" have caused India's population to grow rapidly. India continues to face several public health-related challenges. Life expectancy in India is 68 years, with life expectancy for women being 69.6 years and for men being 67.3. There are around 50 physicians per 100,000 Indians. Migration from rural to urban areas has been an important dynamic in the recent history of India. The number of Indians living in urban areas grew by 31.2% between 1991 and 2001. Yet, in 2001, over 70% still lived in rural areas. The level of urbanisation increased further from 27.81% in the 2001 Census to 31.16% in the 2011 Census. The slowing down of the overall growth rate of population was due to the sharp decline in the growth rate in rural areas since 1991. According to the 2011 census, there are 53 urban agglomerations in India with populations of more than one million; among them are Mumbai, Delhi, Kolkata, Chennai, Bangalore, Hyderabad and Ahmedabad, in decreasing order by population. The literacy rate in 2011 was 74.04%: 65.46% among females and 82.14% among males. The rural-urban literacy gap, which was 21.2 percentage points in 2001, dropped to 16.1 percentage points in 2011. The improvement in literacy rate in rural areas is two times that in urban areas. Kerala is the most literate state, with 93.91% literacy, while Bihar is the least literate, with 63.82%. India is home to two major language families: Indo-Aryan (spoken by about 74% of the population) and Dravidian (spoken by 24% of the population). Other languages spoken in India come from the Austroasiatic and Sino-Tibetan language families. India has no national language. Hindi, with the largest number of speakers, is the official language of the government. English is used extensively in business and administration and has the status of a "subsidiary official language"; it is important in education, especially as a medium of higher education. Each state and union territory has one or more official languages, and the constitution recognises in particular 22 "scheduled languages". The Constitution of India recognises 212 scheduled tribal groups which together constitute about 7.5% of the country's population. 
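The census figures above can be cross-checked: dividing the 2011 count of 1,210,193,422 by the stated decadal growth of 17.64% yields an implied 2001 population of a little under 1.03 billion. The 2001 figure below is derived, not quoted in the text.

```python
# Back-calculate the implied 2001 census population from the 2011 count
# and the stated 2001-2011 decadal growth rate of 17.64%.

pop_2011 = 1_210_193_422
decadal_growth = 0.1764

implied_pop_2001 = pop_2011 / (1.0 + decadal_growth)
print(f"Implied 2001 population: about {implied_pop_2001 / 1e9:.3f} billion")
```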
The 2011 census reported that the religion in India with the largest number of followers was Hinduism (79.80% of the population), followed by Islam (14.23%); the remaining were Christianity (2.30%), Sikhism (1.72%), Buddhism (0.70%), Jainism (0.36%) and others (0.9%). India has the world's largest Hindu, Sikh, Jain, Zoroastrian, and Bahá'í populations, and has the third-largest Muslim population—the largest for a non-Muslim majority country. Indian cultural history spans more than 4,500 years. During the Vedic period (c. 1700 – 500 BCE), the foundations of Hindu philosophy, mythology, theology and literature were laid, and many beliefs and practices which still exist today, such as "dhárma", "kárma", "yóga", and "mokṣa", were established. India is notable for its religious diversity, with Hinduism, Buddhism, Sikhism, Islam, Christianity, and Jainism among the nation's major religions. The predominant religion, Hinduism, has been shaped by various historical schools of thought, including those of the "Upanishads", the "Yoga Sutras", the "Bhakti" movement, and by Buddhist philosophy. Much of Indian architecture, including the Taj Mahal, other works of Mughal architecture, and South Indian architecture, blends ancient local traditions with imported styles. Vernacular architecture is also highly regional in its flavours. "Vastu shastra", literally "science of construction" or "architecture" and ascribed to Mamuni Mayan, explores how the laws of nature affect human dwellings; it employs precise geometry and directional alignments to reflect perceived cosmic constructs. As applied in Hindu temple architecture, it is influenced by the "Shilpa Shastras", a series of foundational texts whose basic mythological form is the "Vastu-Purusha mandala", a square that embodied the "absolute". The Taj Mahal, built in Agra between 1631 and 1648 by orders of Emperor Shah Jahan in memory of his wife, has been described in the UNESCO World Heritage List as "the jewel of Muslim art in India and one of the universally admired masterpieces of the world's heritage". Indo-Saracenic Revival architecture, developed by the British in the late 19th century, drew on Indo-Islamic architecture. The earliest literary writings in India, composed between 1700 BCE and 1200 CE, were in the Sanskrit language. Prominent works of this Sanskrit literature include epics such as the "Mahābhārata" and the "Ramayana", the dramas of Kālidāsa such as the "Abhijñānaśākuntalam" ("The Recognition of Śakuntalā"), and poetry such as the "Mahākāvya". The Kamasutra, the famous treatise on love and sexuality, also originated in India. Developed between 600 BCE and 300 CE in South India, the "Sangam" literature, consisting of 2,381 poems, is regarded as a predecessor of Tamil literature. From the 14th to the 18th centuries, India's literary traditions went through a period of drastic change because of the emergence of devotional poets such as Kabīr, Tulsīdās, and Guru Nānak. This period was characterised by a varied and wide spectrum of thought and expression; as a consequence, medieval Indian literary works differed significantly from classical traditions. In the 19th century, Indian writers took a new interest in social questions and psychological descriptions. In the 20th century, Indian literature was influenced by the works of Bengali poet and novelist Rabindranath Tagore, who was a recipient of the Nobel Prize in Literature. Indian music ranges over various traditions and regional styles. 
Classical music encompasses two genres and their various folk offshoots: the northern Hindustani and southern Carnatic schools. Regionalised popular forms include filmi and folk music; the syncretic tradition of the "bauls" is a well-known form of the latter. Indian dance also features diverse folk and classical forms. Among the better-known folk dances are the "bhangra" of Punjab, the "bihu" of Assam, the "chhau" of Odisha, West Bengal and Jharkhand, "garba" and "dandiya" of Gujarat, "ghoomar" of Rajasthan, and the "lavani" of Maharashtra. Eight dance forms, many with narrative forms and mythological elements, have been accorded classical dance status by India's National Academy of Music, Dance, and Drama. These are: "bharatanatyam" of the state of Tamil Nadu, "kathak" of Uttar Pradesh, "kathakali" and "mohiniyattam" of Kerala, "kuchipudi" of Andhra Pradesh, "manipuri" of Manipur, "odissi" of Odisha, and the "sattriya" of Assam. Theatre in India melds music, dance, and improvised or written dialogue. Often based on Hindu mythology, but also borrowing from medieval romances or social and political events, Indian theatre includes the "bhavai" of Gujarat, the "jatra" of West Bengal, the "nautanki" and "ramlila" of North India, "tamasha" of Maharashtra, "burrakatha" of Andhra Pradesh, "terukkuttu" of Tamil Nadu, and the "yakshagana" of Karnataka. India has a theatre training institute, the National School of Drama (NSD), situated in New Delhi; it is an autonomous organisation under the Ministry of Culture, Government of India. The Indian film industry produces the world's most-watched cinema. Established regional cinematic traditions exist in the Assamese, Bengali, Bhojpuri, Hindi, Kannada, Malayalam, Punjabi, Gujarati, Marathi, Odia, Tamil, and Telugu languages. South Indian cinema attracts more than 75% of national film revenue. Television broadcasting began in India in 1959 as a state-run medium of communication and had slow expansion for more than two decades. The state monopoly on television broadcast ended in the 1990s and, since then, satellite channels have increasingly shaped the popular culture of Indian society. Today, television is the most penetrative medium in India; industry estimates indicate that there are over 554 million TV consumers, 462 million with satellite and/or cable connections, compared to other forms of mass media such as press (350 million), radio (156 million) or internet (37 million). Indian cuisine encompasses a wide variety of regional and traditional cuisines, often depending on a particular state (such as Maharashtrian cuisine). Staple foods of Indian cuisine include pearl millet ("bājra"), rice, whole-wheat flour ("aṭṭa"), and a variety of lentils, such as "masoor" (most often red lentils), "toor" (pigeon peas), "urad" (black gram), and "mong" (mung beans). Lentils may be used whole, dehusked—for example, "dhuli moong" or "dhuli urad"—or split. Split lentils, or "dal", are used extensively. The spice trade between India and Europe is often cited by historians as the primary catalyst for Europe's Age of Discovery. Traditional Indian society is sometimes defined by social hierarchy. The Indian caste system embodies much of the social stratification and many of the social restrictions found in the Indian subcontinent. Social classes are defined by thousands of endogamous hereditary groups, often termed as "jātis", or "castes". India declared untouchability to be illegal in 1947 and has since enacted other anti-discriminatory laws and social welfare initiatives. 
At the workplace in urban India and in international or leading Indian companies, caste-related identification has largely lost its importance. Family values are important in the Indian tradition, and multi-generational patriarchal joint families have been the norm in India, though nuclear families are becoming common in urban areas. An overwhelming majority of Indians, with their consent, have their marriages arranged by their parents or other elders in the family. Marriage is thought to be for life, and the divorce rate is extremely low. Just 1.6 percent of Indian women were divorced, but this figure was rising due to increasing education and economic independence. Child marriages are common, especially in rural areas; many women wed before reaching 18, which is their legal marriageable age. Female infanticide and female foeticide in the country have caused a discrepancy in the sex ratio; it was estimated that there were 50 million more males than females in the nation. However, a report from 2011 showed improvement in the gender ratio. The payment of dowry, although illegal, remains widespread across class lines. Deaths resulting from dowry, mostly from bride burning, are on the rise, despite stringent anti-dowry laws. Many Indian festivals are religious in origin. The best known include Diwali, Ganesh Chaturthi, Thai Pongal, Holi, Durga Puja, Eid ul-Fitr, Bakr-Id, Christmas, and Vaisakhi. India has three national holidays which are observed in all states and union territories – Republic Day, Independence Day and Gandhi Jayanti. Other sets of holidays, varying between nine and twelve, are officially observed in individual states. Cotton was domesticated in India by 4000 BCE. Traditional Indian dress varies in colour and style across regions and depends on various factors, including climate and faith. Popular styles of dress include draped garments such as the "sari" for women and the "dhoti" or "lungi" for men. Stitched clothes, such as the "shalwar kameez" for women and "kurta"–"pyjama" combinations or European-style trousers and shirts for men, are also popular. Use of delicate jewellery, modelled on real flowers worn in ancient India, is part of a tradition dating back some 5,000 years; gemstones are also worn in India as talismans. In India, several traditional indigenous sports remain fairly popular, such as "kabaddi", "kho kho", "pehlwani" and "gilli-danda". Some of the earliest forms of Asian martial arts, such as "kalarippayattu", "musti yuddha", "silambam", and "marma adi", originated in India. Chess, commonly held to have originated in India as "chaturaṅga", is regaining widespread popularity with the rise in the number of Indian grandmasters. "Pachisi", from which parcheesi derives, was played on a giant marble court by Akbar. The improved results garnered by the Indian Davis Cup team and other Indian tennis players in the early 2010s have made tennis increasingly popular in the country. India has a strong record in shooting sports, and has won several medals at the Olympics, the World Shooting Championships, and the Commonwealth Games. Other sports in which Indians have succeeded internationally include badminton (Saina Nehwal and P V Sindhu are two of the top-ranked female badminton players in the world), boxing, and wrestling. Football is popular in West Bengal, Goa, Tamil Nadu, Kerala, and the north-eastern states. India was the host country for the 2017 FIFA U-17 World Cup. The matches were held from 6 to 28 October in the cities of New Delhi, Kolkata, Kochi, Navi Mumbai, Guwahati and Margao. 
Field hockey in India is administered by Hockey India. The Indian national hockey team won the 1975 Hockey World Cup and have taken eight gold, one silver, and two bronze Olympic medals, making it the sport's most successful team in the Olympics. India has also played a major role in popularising cricket. Thus, cricket is, by far, the most popular sport in India. The Indian national cricket team won the 1983 and 2011 Cricket World Cup events, the 2007 ICC World Twenty20, shared the 2002 ICC Champions Trophy with Sri Lanka, and won the 2013 ICC Champions Trophy. Cricket in India is administered by the Board of Control for Cricket in India (BCCI); the Ranji Trophy, the Duleep Trophy, the Deodhar Trophy, the Irani Trophy, and the NKP Salve Challenger Trophy are domestic competitions. The BCCI also conducts an annual Twenty20 competition known as the Indian Premier League. India has hosted or co-hosted several international sporting events: the 1951 and 1982 Asian Games; the 1987, 1996, and 2011 Cricket World Cup tournaments; the 2003 Afro-Asian Games; the 2006 ICC Champions Trophy; the 2010 Hockey World Cup; the 2010 Commonwealth Games; and the 2017 FIFA U-17 World Cup. Major international sporting events held annually in India include the Chennai Open, the Mumbai Marathon, the Delhi Half Marathon, and the Indian Masters. The first Formula 1 Indian Grand Prix was held in late 2011, but the race has been discontinued from the F1 season calendar since 2014. India has traditionally been the dominant country at the South Asian Games. An example of this dominance is the basketball competition, where the Indian team won three of the four tournaments held to date. The Rajiv Gandhi Khel Ratna and the Arjuna Award are the highest forms of government recognition for athletic achievement; the Dronacharya Award is awarded for excellence in coaching. Iridium Iridium is a chemical element with symbol Ir and atomic number 77. A very hard, brittle, silvery-white transition metal of the platinum group, iridium is the second-densest element (after osmium). It is also the most corrosion-resistant metal, even at temperatures as high as 2000 °C. Although only certain molten salts and halogens are corrosive to solid iridium, finely divided iridium dust is much more reactive and can be flammable. Iridium was discovered in 1803 among insoluble impurities in natural platinum. Smithson Tennant, the primary discoverer, named iridium for the Greek goddess Iris, personification of the rainbow, because of the striking and diverse colors of its salts. Iridium is one of the rarest elements in Earth's crust, with annual production and consumption of only three tonnes. Iridium-191 and iridium-193 are the only two naturally occurring isotopes of iridium, as well as the only stable isotopes; the latter is the more abundant of the two. The most important iridium compounds in use are the salts and acids it forms with chlorine, though iridium also forms a number of organometallic compounds used in industrial catalysis, and in research. Iridium metal is employed when high corrosion resistance at high temperatures is needed, as in high-performance spark plugs, crucibles for recrystallization of semiconductors at high temperatures, and electrodes for the production of chlorine in the chloralkali process. Iridium radioisotopes are used in some radioisotope thermoelectric generators. 
Iridium is found in meteorites in much higher abundance than in the Earth's crust. For this reason, the unusually high abundance of iridium in the clay layer at the Cretaceous–Paleogene boundary gave rise to the Alvarez hypothesis that the impact of a massive extraterrestrial object caused the extinction of dinosaurs and many other species 66 million years ago. Similarly, an iridium anomaly in core samples from the Pacific Ocean suggested the Eltanin impact of about 2.5 million years ago. It is thought that the total amount of iridium in the planet Earth is much higher than that observed in crustal rocks, but as with other platinum-group metals, the high density and tendency of iridium to bond with iron caused most iridium to descend below the crust when the planet was young and still molten. A member of the platinum group metals, iridium is white, resembling platinum, but with a slight yellowish cast. Because of its hardness, brittleness, and very high melting point, solid iridium is difficult to machine, form, or work; thus powder metallurgy is commonly employed instead. It is the only metal to maintain good mechanical properties in air at extremely high temperatures. It has the 10th highest boiling point among all elements and becomes a superconductor at temperatures below 0.14 K. Iridium's modulus of elasticity is the second-highest among the metals, surpassed only by that of osmium. This, together with a high shear modulus and a very low figure for Poisson's ratio (the relationship of longitudinal to lateral strain), indicates the high degree of stiffness and resistance to deformation that have rendered its fabrication into useful components a matter of great difficulty. Despite these limitations and iridium's high cost, a number of applications have developed where mechanical strength is an essential factor in some of the extremely severe conditions encountered in modern technology. The measured density of iridium is only slightly lower (by about 0.12%) than that of osmium, the densest element known. Some ambiguity occurred regarding which of the two elements was denser, due to the small size of the difference in density and the difficulty of measuring it accurately, but, with increased accuracy in the factors used for calculating density, X-ray crystallographic data yielded densities of 22.56 g/cm³ for iridium and 22.59 g/cm³ for osmium. Iridium is the most corrosion-resistant metal known: it is not attacked by almost any acid, including aqua regia, nor by molten metals or silicates at high temperatures. It can, however, be attacked by some molten salts, such as sodium cyanide and potassium cyanide, as well as by oxygen and the halogens (particularly fluorine) at higher temperatures. Iridium forms compounds in oxidation states between −3 and +9; the most common oxidation states are +3 and +4. Well-characterized examples of the high +6 oxidation state are rare, but include iridium hexafluoride (IrF6) and two mixed oxides. In addition, it was reported in 2009 that iridium(VIII) oxide (IrO4) was prepared under matrix isolation conditions (6 K in Ar) by UV irradiation of an iridium–peroxo complex. This species, however, is not expected to be stable as a bulk solid at higher temperatures. The highest oxidation state (+9), which is also the highest recorded for "any" element, is known in only one cation, the iridium tetroxide cation IrO4+; it is known only as a gas-phase species and is not known to form any salts. Iridium dioxide, IrO2, a blue-black solid, is the only well-characterized oxide of iridium. A sesquioxide, Ir2O3, has been described as a blue-black powder, which is oxidized to IrO2. 
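As a rough illustrative aside (not drawn from any additional source), the rounded density figures quoted above can be checked directly; the short Python sketch below uses only the two values given in this section:

# Relative density difference between iridium and osmium,
# using the rounded X-ray crystallographic values quoted above.
rho_ir = 22.56  # g/cm3, iridium
rho_os = 22.59  # g/cm3, osmium
diff = (rho_os - rho_ir) / rho_os * 100
print(f"Iridium is less dense than osmium by about {diff:.2f}%")
# Prints ~0.13%; the quoted figure of ~0.12% comes from more precise values.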
The corresponding disulfides, diselenides, sesquisulfides, and sesquiselenides are known, and IrS3 has also been reported. Iridium also forms iridates with oxidation states +4 and +5, which can be prepared from the reaction of potassium oxide or potassium superoxide with iridium at high temperatures. Although no binary hydrides of iridium are known, complexes containing iridium hydride anions are known in which iridium has the +1 and +3 oxidation states. A ternary hydride has also been described that is believed to contain two distinct hydridic anions, one of them an 18-electron species. No monohalides or dihalides are known, whereas trihalides are known for all of the halogens. For oxidation states +4 and above, only the tetrafluoride, pentafluoride and hexafluoride are known. Iridium hexafluoride, IrF6, is a volatile and highly reactive yellow solid, composed of octahedral molecules. It decomposes in water and is reduced to IrF4, a crystalline solid, by iridium black. Iridium pentafluoride has similar properties, but it is actually a tetramer, Ir4F20, formed by four corner-sharing octahedra. Iridium metal dissolves in molten alkali-metal cyanides to produce the hexacyanoiridate ion, [Ir(CN)6]3−. Hexachloroiridic(IV) acid, H2IrCl6, and its ammonium salt are the most important iridium compounds from an industrial perspective. They are involved in the purification of iridium and used as precursors for most other iridium compounds, as well as in the preparation of anode coatings. The IrCl62− ion has an intense dark brown color, and can be readily reduced to the lighter-colored IrCl63− and vice versa. Iridium trichloride, IrCl3, which can be obtained in anhydrous form from direct oxidation of iridium powder by chlorine at 650 °C, or in hydrated form by dissolving the sesquioxide in hydrochloric acid, is often used as a starting material for the synthesis of other Ir(III) compounds. Another compound used as a starting material is ammonium hexachloroiridate(III), (NH4)3IrCl6. Iridium(III) complexes are diamagnetic (low-spin) and generally have an octahedral molecular geometry. Organoiridium compounds contain iridium–carbon bonds where the metal is usually in lower oxidation states. For example, oxidation state zero is found in tetrairidium dodecacarbonyl, Ir4(CO)12, which is the most common and stable binary carbonyl of iridium. In this compound, each of the iridium atoms is bonded to the other three, forming a tetrahedral cluster. Some organometallic Ir(I) compounds are notable enough to be named after their discoverers. One is Vaska's complex, IrCl(CO)(PPh3)2, which has the unusual property of binding to the dioxygen molecule, O2. Another one is Crabtree's catalyst, a homogeneous catalyst for hydrogenation reactions. These compounds are both square planar, d8 complexes, with a total of 16 valence electrons, which accounts for their reactivity. An iridium-based organic LED material has been documented, and found to be much brighter than DPA or PPV, so it could be the basis for flexible OLED lighting in the future. Iridium has two naturally occurring, stable isotopes, iridium-191 and iridium-193, with natural abundances of 37.3% and 62.7%, respectively. At least 34 radioisotopes have also been synthesized, ranging in mass number from 164 to 199. Iridium-192, which falls between the two stable isotopes, is the most stable radioisotope, with a half-life of 73.827 days, and finds application in brachytherapy and in industrial radiography, particularly for nondestructive testing of welds in steel in the oil and gas industries; iridium-192 sources have been involved in a number of radiological accidents. Three other radioisotopes have half-lives of at least a day. 
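As a brief, purely illustrative calculation, the abundances and the 73.827-day half-life quoted above can be combined to estimate iridium's standard atomic weight and the decay of an iridium-192 source. The isotopic masses used below are assumed standard nuclide-table values, not figures from this article:

# Estimate of the standard atomic weight from the abundances quoted above.
m_ir191, m_ir193 = 190.9606, 192.9629   # atomic mass units (assumed table values)
a_ir191, a_ir193 = 0.373, 0.627         # natural abundances from the text
atomic_weight = a_ir191 * m_ir191 + a_ir193 * m_ir193
print(f"Estimated atomic weight of iridium: {atomic_weight:.2f}")  # about 192.2

# Decay of an iridium-192 source, using the 73.827-day half-life quoted above.
half_life_days = 73.827
for days in (30, 90, 365):
    remaining = 0.5 ** (days / half_life_days)
    print(f"After {days} days, {remaining:.1%} of the initial activity remains")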
Isotopes with masses below 191 decay by some combination of β+ decay, α decay, and (rare) proton emission, with the exception of one isotope, which decays by electron capture. Synthetic isotopes heavier than 191 decay by β− decay, although iridium-192 also has a minor electron capture decay path. All known isotopes of iridium were discovered between 1934 and 2001. At least 32 metastable isomers have been characterized, ranging in mass number from 164 to 197. The most stable of these is iridium-192m2, which decays by isomeric transition with a half-life of 241 years, making it more stable than any of iridium's synthetic isotopes in their ground states. The least stable isomer has a half-life of only 2 µs. The isotope iridium-191 was the first one of any element to be shown to present a Mössbauer effect. This renders it useful for Mössbauer spectroscopy for research in physics, chemistry, biochemistry, metallurgy, and mineralogy. The discovery of iridium is intertwined with that of platinum and the other metals of the platinum group. Native platinum used by ancient Ethiopians and by South American cultures always contained a small amount of the other platinum group metals, including iridium. Platinum reached Europe as "platina" ("silverette"), found in the 17th century by the Spanish conquerors in a region today known as the department of Chocó in Colombia. The discovery that this metal was not an alloy of known elements, but instead a distinct new element, did not occur until 1748. Chemists who studied platinum dissolved it in aqua regia (a mixture of hydrochloric and nitric acids) to create soluble salts. They always observed a small amount of a dark, insoluble residue. Joseph Louis Proust thought that the residue was graphite. The French chemists Victor Collet-Descotils, Antoine François, comte de Fourcroy, and Louis Nicolas Vauquelin also observed the black residue in 1803, but did not obtain enough for further experiments. In 1803, British scientist Smithson Tennant (1761–1815) analyzed the insoluble residue and concluded that it must contain a new metal. Vauquelin treated the powder alternately with alkali and acids and obtained a volatile new oxide, which he believed to be of this new metal—which he named "ptene", from the Greek word "ptēnós", "winged". Tennant, who had the advantage of a much greater amount of residue, continued his research and identified the two previously undiscovered elements in the black residue, iridium and osmium. He obtained dark red crystals (probably of Na2[IrCl6]·nH2O) by a sequence of reactions with sodium hydroxide and hydrochloric acid. He named iridium after Iris, the Greek winged goddess of the rainbow and the messenger of the Olympian gods, because many of the salts he obtained were strongly colored. Discovery of the new elements was documented in a letter to the Royal Society on June 21, 1804. British scientist John George Children was the first to melt a sample of iridium in 1813 with the aid of "the greatest galvanic battery that has ever been constructed" (at that time). The first to obtain high-purity iridium was Robert Hare in 1842. He found it had a density of around 21.8 g/cm³ and noted that the metal is nearly immalleable and very hard. The first melting in appreciable quantity was done by Henri Sainte-Claire Deville and Jules Henri Debray in 1860. They required burning more than 300 liters of pure oxygen and hydrogen gas for each kilogram of iridium. These extreme difficulties in melting the metal limited the possibilities for handling iridium. 
John Isaac Hawkins was looking to obtain a fine and hard point for fountain pen nibs, and in 1834 managed to create an iridium-pointed gold pen. In 1880, John Holland and William Lofland Dudley were able to melt iridium by adding phosphorus and patented the process in the United States; the British company Johnson Matthey later stated they had been using a similar process since 1837 and had already presented fused iridium at a number of World Fairs. The first use of an alloy of iridium with ruthenium in thermocouples was made by Otto Feussner in 1933. These allowed for the measurement of high temperatures in air up to 2000 °C. In Munich, Germany, in 1957, Rudolf Mössbauer, in what has been called one of the "landmark experiments in twentieth-century physics", discovered the resonant and recoil-free emission and absorption of gamma rays by atoms in a solid metal sample containing only iridium-191. This phenomenon, known as the Mössbauer effect (which has since been observed for other nuclei, such as iron-57), and developed as Mössbauer spectroscopy, has made important contributions to research in physics, chemistry, biochemistry, metallurgy, and mineralogy. Mössbauer received the Nobel Prize in Physics in 1961, at the age of 32, just three years after he published his discovery. In 1986, Rudolf Mössbauer was honored for his achievements with the Albert Einstein Medal and the Elliot Cresson Medal. Iridium is one of the nine least abundant stable elements in Earth's crust, having an average mass fraction of 0.001 ppm in crustal rock; gold is 40 times more abundant, platinum is 10 times more abundant, and silver and mercury are 80 times more abundant. Tellurium is about as abundant as iridium. In contrast to its low abundance in crustal rock, iridium is relatively common in meteorites, with concentrations of 0.5 ppm or more. The overall concentration of iridium on Earth is thought to be much higher than what is observed in crustal rocks, but because of the density and siderophilic ("iron-loving") character of iridium, it descended below the crust and into Earth's core when the planet was still molten. Iridium is found in nature as an uncombined element or in natural alloys, especially the iridium–osmium alloys osmiridium (osmium-rich) and iridosmium (iridium-rich). In nickel and copper deposits, the platinum group metals occur as sulfides (e.g. (Pt,Pd)S), tellurides (e.g. PtBiTe), antimonides (e.g. PdSb), and arsenides (e.g. PtAs2). In all of these compounds, a small amount of the platinum is replaced by iridium and osmium. As with all of the platinum group metals, iridium can be found naturally in alloys with raw nickel or raw copper. Within Earth's crust, iridium is found at highest concentrations in three types of geologic structure: igneous deposits (crustal intrusions from below), impact craters, and deposits reworked from one of the former structures. The largest known primary reserves are in the Bushveld igneous complex in South Africa, though the large copper–nickel deposits near Norilsk in Russia and the Sudbury Basin in Canada are also significant sources of iridium. Smaller reserves are found in the United States. Iridium is also found in secondary deposits, combined with platinum and other platinum group metals in alluvial deposits. The alluvial deposits used by pre-Columbian people in the Chocó Department of Colombia are still a source for platinum-group metals. As of 2003, world reserves had not been estimated. 
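The relative-abundance comparison above can be made concrete with a small, purely illustrative calculation; it uses only the 0.001 ppm figure and the multiples quoted in this section, so the derived values are approximations rather than independently sourced figures:

# Implied crustal mass fractions, using only the figures quoted above:
# iridium ~0.001 ppm, with the other elements expressed as multiples of it.
iridium_ppm = 0.001
relative_abundance = {"gold": 40, "platinum": 10, "silver": 80, "mercury": 80, "tellurium": 1}
for element, factor in relative_abundance.items():
    print(f"{element:9s} ~ {iridium_ppm * factor:.3f} ppm ({factor}x iridium)")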
The Cretaceous–Paleogene boundary of 66 million years ago, marking the temporal border between the Cretaceous and Paleogene periods of geological time, was identified by a thin stratum of iridium-rich clay. A team led by Luis Alvarez proposed in 1980 an extraterrestrial origin for this iridium, attributing it to an asteroid or comet impact. Their theory, known as the Alvarez hypothesis, is now widely accepted to explain the extinction of the non-avian dinosaurs. A large buried impact crater structure with an estimated age of about 66 million years was later identified under what is now the Yucatán Peninsula (the Chicxulub crater). Dewey M. McLean and others argue that the iridium may have been of volcanic origin instead, because Earth's core is rich in iridium, and active volcanoes such as Piton de la Fournaise, on the island of Réunion, are still releasing iridium. Iridium is also obtained commercially as a by-product from nickel and copper mining and processing. During electrorefining of copper and nickel, noble metals such as silver, gold and the platinum group metals, as well as selenium and tellurium, settle to the bottom of the cell as "anode mud", which forms the starting point for their extraction. To separate the metals, they must first be brought into solution. Several separation methods are available depending on the nature of the mixture; two representative methods are fusion with sodium peroxide followed by dissolution in aqua regia, and dissolution in a mixture of chlorine with hydrochloric acid. After the mixture is dissolved, iridium is separated from the other platinum group metals by precipitating ammonium hexachloroiridate, (NH4)2IrCl6, or by extracting with organic amines. The first method is similar to the procedure Tennant and Wollaston used for their separation. The second method can be planned as continuous liquid–liquid extraction and is therefore more suitable for industrial-scale production. In either case, the product is reduced using hydrogen, yielding the metal as a powder or "sponge" that can be treated using powder metallurgy techniques. Iridium prices have fluctuated over a considerable range. With a relatively small volume in the world market (compared to other industrial metals like aluminium or copper), the iridium price reacts strongly to instabilities in production, demand, speculation, hoarding, and politics in the producing countries. As a substance with rare properties, its price has been particularly influenced by changes in modern technology: the gradual decrease between 2001 and 2003 has been related to an oversupply of iridium crucibles used for industrial growth of large single crystals. Likewise, the prices above 1000 USD/oz between 2010 and 2014 have been explained by the installation of production facilities for single-crystal sapphire used in LED backlights for TVs. The demand for iridium surged from 2.5 tonnes in 2009 to 10.4 tonnes in 2010, mostly because of electronics-related applications that saw a rise from 0.2 to 6 tonnes – iridium crucibles are commonly used for growing large high-quality single crystals, demand for which has increased sharply. This increase in iridium consumption is predicted to saturate due to accumulating stocks of crucibles, as happened earlier in the 2000s. Other major applications include spark plugs, which consumed 0.78 tonnes of iridium in 2007, electrodes for the chloralkali process (1.1 t in 2007), and chemical catalysts (0.75 t in 2007). 
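As a quick, purely illustrative check of the statement that the 2009–2010 surge was driven mostly by electronics, the tonnage figures quoted above can be combined as follows (no figures beyond those in this section are used):

# Share of the 2009-2010 demand increase attributable to electronics,
# using only the tonnage figures quoted above.
total_2009, total_2010 = 2.5, 10.4   # tonnes, total iridium demand
elec_2009, elec_2010 = 0.2, 6.0      # tonnes, electronics-related demand
share = (elec_2010 - elec_2009) / (total_2010 - total_2009)
print(f"Electronics accounted for roughly {share:.0%} of the increase")  # ~73%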
The high melting point, hardness and corrosion resistance of iridium and its alloys determine most of its applications. Iridium, and especially its alloys with platinum or osmium, has low wear and is used, for example, for multi-pored spinnerets, through which a plastic polymer melt is extruded to form fibers such as rayon. Osmium–iridium is used for compass bearings and for balances. Their resistance to arc erosion makes iridium alloys ideal for electrical contacts for spark plugs, and iridium-based spark plugs are particularly used in aviation. Pure iridium is extremely brittle, to the point of being hard to weld because the heat-affected zone cracks, but it can be made more ductile by the addition of small quantities of titanium and zirconium (0.2% of each apparently works well). Corrosion and heat resistance make iridium an important alloying agent. Certain long-life aircraft engine parts are made of an iridium alloy, and an iridium–titanium alloy is used for deep-water pipes because of its corrosion resistance. Iridium is also used as a hardening agent in platinum alloys. The Vickers hardness of pure platinum is 56 HV, whereas platinum with 50% of iridium can reach over 500 HV. Devices that must withstand extremely high temperatures are often made from iridium. For example, high-temperature crucibles made of iridium are used in the Czochralski process to produce oxide single-crystals (such as sapphires) for use in computer memory devices and in solid state lasers. The crystals, such as gadolinium gallium garnet and yttrium gallium garnet, are grown by melting pre-sintered charges of mixed oxides under oxidizing conditions at temperatures up to 2100 °C. Iridium compounds are used as catalysts in the Cativa process for carbonylation of methanol to produce acetic acid. The radioisotope iridium-192 is one of the two most important sources of energy for use in industrial γ-radiography for non-destructive testing of metals. Additionally, iridium-192 is used as a source of gamma radiation for the treatment of cancer using brachytherapy, a form of radiotherapy where a sealed radioactive source is placed inside or next to the area requiring treatment. Specific treatments include high-dose-rate prostate brachytherapy, biliary duct brachytherapy, and intracavitary cervix brachytherapy. Iridium is a good catalyst for the decomposition of hydrazine (into hot nitrogen and ammonia), and this is used in practice in low-thrust rocket engines; there are more details in the monopropellant rocket article. An alloy of 90% platinum and 10% iridium was used in 1889 to construct the International Prototype Metre and kilogram mass, kept by the International Bureau of Weights and Measures near Paris. The meter bar was replaced as the definition of the fundamental unit of length in 1960 by a line in the atomic spectrum of krypton, but the kilogram prototype is still the international standard of mass. Iridium has been used in the radioisotope thermoelectric generators of unmanned spacecraft such as the "Voyager", "Viking", "Pioneer", "Cassini", "Galileo", and "New Horizons". Iridium was chosen to encapsulate the plutonium-238 fuel in the generator because it can withstand operating temperatures of up to 2000 °C and for its great strength. Another use concerns X-ray optics, especially X-ray telescopes. The mirrors of the Chandra X-ray Observatory are coated with a layer of iridium 60 nm thick. Iridium proved to be the best choice for reflecting X-rays after nickel, gold, and platinum were also tested. 
The iridium layer, which had to be smooth to within a few atoms, was applied by depositing iridium vapor under high vacuum on a base layer of chromium. Iridium is used in particle physics for the production of antiprotons, a form of antimatter. Antiprotons are made by shooting a high-intensity proton beam at a "conversion target", which needs to be made from a very high density material. Although tungsten may be used instead, iridium has the advantage of better stability under the shock waves induced by the temperature rise due to the incident beam. Carbon–hydrogen bond activation (C–H activation) is an area of research on reactions that cleave carbon–hydrogen bonds, which were traditionally regarded as unreactive. The first reported successes at activating C–H bonds in saturated hydrocarbons, published in 1982, used organometallic iridium complexes that undergo an oxidative addition with the hydrocarbon. Iridium complexes are being investigated as catalysts for asymmetric hydrogenation. These catalysts have been used in the synthesis of natural products and are able to hydrogenate certain difficult substrates, such as unfunctionalized alkenes, enantioselectively (generating only one of the two possible enantiomers). Iridium forms a variety of complexes of fundamental interest in triplet harvesting. Iridium–osmium alloys were used in fountain pen nib tips. The first major use of iridium was in 1834 in nibs mounted on gold. Since 1944, the famous Parker 51 fountain pen was fitted with a nib tipped by a ruthenium and iridium alloy (with 3.8% iridium). The tip material in modern fountain pens is still conventionally called "iridium", although there is seldom any iridium in it; other metals such as ruthenium, osmium and tungsten have taken its place. An iridium–platinum alloy was used for the touch holes or vent pieces of cannon. According to a report of the Paris Exhibition of 1867, one of the pieces being exhibited by Johnson and Matthey "has been used in a Whitworth gun for more than 3000 rounds, and scarcely shows signs of wear yet. Those who know the constant trouble and expense which are occasioned by the wearing of the vent-pieces of cannon when in active service, will appreciate this important adaptation". The pigment "iridium black", which consists of very finely divided iridium, is used for painting porcelain an intense black; it was said that "all other porcelain black colors appear grey by the side of it". Iridium in bulk metallic form is not biologically important or hazardous to health due to its lack of reactivity with tissues; there are only about 20 parts per trillion of iridium in human tissue. Like most metals, finely divided iridium powder can be hazardous to handle, as it is an irritant and may ignite in air. Very little is known about the toxicity of iridium compounds, because they are used in very small amounts, but soluble salts, such as the iridium halides, could be hazardous due to elements other than iridium or due to iridium itself. However, most iridium compounds are insoluble, which makes absorption into the body difficult. A radioisotope of iridium, iridium-192, is dangerous, like other radioactive isotopes. The only reported injuries related to iridium concern accidental exposure to radiation from iridium-192 used in brachytherapy. High-energy gamma radiation from iridium-192 can increase the risk of cancer. External exposure can cause burns, radiation poisoning, and death. Ingestion of iridium-192 can burn the linings of the stomach and the intestines. 
Some iridium radioisotopes tend to deposit in the liver and can pose health hazards from both gamma and beta radiation. Immune system The immune system is a host defense system comprising many biological structures and processes within an organism that protects against disease. To function properly, an immune system must detect a wide variety of agents, known as pathogens, from viruses to parasitic worms, and distinguish them from the organism's own healthy tissue. In many species, the immune system can be classified into subsystems, such as the innate immune system versus the adaptive immune system, or humoral immunity versus cell-mediated immunity. In humans, the blood–brain barrier, blood–cerebrospinal fluid barrier, and similar fluid–brain barriers separate the peripheral immune system from the neuroimmune system, which protects the brain. Pathogens can rapidly evolve and adapt, and thereby avoid detection and neutralization by the immune system; however, multiple defense mechanisms have also evolved to recognize and neutralize pathogens. Even simple unicellular organisms such as bacteria possess a rudimentary immune system in the form of enzymes that protect against bacteriophage infections. Other basic immune mechanisms evolved in ancient eukaryotes and remain in their modern descendants, such as plants and invertebrates. These mechanisms include phagocytosis, antimicrobial peptides called defensins, and the complement system. Jawed vertebrates, including humans, have even more sophisticated defense mechanisms, including the ability to adapt over time to recognize specific pathogens more efficiently. Adaptive (or acquired) immunity creates immunological memory after an initial response to a specific pathogen, leading to an enhanced response to subsequent encounters with that same pathogen. This process of acquired immunity is the basis of vaccination. Disorders of the immune system can result in autoimmune diseases, inflammatory diseases and cancer. Immunodeficiency occurs when the immune system is less active than normal, resulting in recurring and life-threatening infections. In humans, immunodeficiency can be the result of a genetic disease such as severe combined immunodeficiency, of acquired conditions such as HIV/AIDS, or of the use of immunosuppressive medication. In contrast, autoimmunity results from a hyperactive immune system attacking normal tissues as if they were foreign organisms. Common autoimmune diseases include Hashimoto's thyroiditis, rheumatoid arthritis, diabetes mellitus type 1, and systemic lupus erythematosus. Immunology covers the study of all aspects of the immune system. Immunology is a science that examines the structure and function of the immune system. It originates from medicine and early studies on the causes of immunity to disease. The earliest known reference to immunity was during the plague of Athens in 430 BC. Thucydides noted that people who had recovered from a previous bout of the disease could nurse the sick without contracting the illness a second time. In the 18th century, Pierre-Louis Moreau de Maupertuis made experiments with scorpion venom and observed that certain dogs and mice were immune to this venom. This and other observations of acquired immunity were later exploited by Louis Pasteur in his development of vaccination and his proposed germ theory of disease. Pasteur's theory was in direct opposition to contemporary theories of disease, such as the miasma theory. 
It was not until Robert Koch's 1891 proofs, for which he was awarded a Nobel Prize in 1905, that microorganisms were confirmed as the cause of infectious disease. Viruses were confirmed as human pathogens in 1901, with the discovery of the yellow fever virus by Walter Reed. Immunology made a great advance towards the end of the 19th century, through rapid developments, in the study of humoral immunity and cellular immunity. Particularly important was the work of Paul Ehrlich, who proposed the side-chain theory to explain the specificity of the antigen-antibody reaction; his contributions to the understanding of humoral immunity were recognized by the award of a Nobel Prize in 1908, which was jointly awarded to the founder of cellular immunology, Elie Metchnikoff. The immune system protects organisms from infection with layered defenses of increasing specificity. In simple terms, physical barriers prevent pathogens such as bacteria and viruses from entering the organism. If a pathogen breaches these barriers, the innate immune system provides an immediate, but non-specific response. Innate immune systems are found in all plants and animals. If pathogens successfully evade the innate response, vertebrates possess a second layer of protection, the adaptive immune system, which is activated by the innate response. Here, the immune system adapts its response during an infection to improve its recognition of the pathogen. This improved response is then retained after the pathogen has been eliminated, in the form of an immunological memory, and allows the adaptive immune system to mount faster and stronger attacks each time this pathogen is encountered. Both innate and adaptive immunity depend on the ability of the immune system to distinguish between self and non-self molecules. In immunology, "self" molecules are those components of an organism's body that can be distinguished from foreign substances by the immune system. Conversely, "non-self" molecules are those recognized as foreign molecules. One class of non-self molecules are called antigens (short for "anti"body "gen"erators) and are defined as substances that bind to specific immune receptors and elicit an immune response. Microorganisms or toxins that successfully enter an organism encounter the cells and mechanisms of the innate immune system. The innate response is usually triggered when microbes are identified by pattern recognition receptors, which recognize components that are conserved among broad groups of microorganisms, or when damaged, injured or stressed cells send out alarm signals, many of which (but not all) are recognized by the same receptors as those that recognize pathogens. Innate immune defenses are non-specific, meaning these systems respond to pathogens in a generic way. This system does not confer long-lasting immunity against a pathogen. The innate immune system is the dominant system of host defense in most organisms. Several barriers protect organisms from infection, including mechanical, chemical, and biological barriers. The waxy cuticle of most leaves, the exoskeleton of insects, the shells and membranes of externally deposited eggs, and skin are examples of mechanical barriers that are the first line of defense against infection. However, as organisms cannot be completely sealed from their environments, other systems act to protect body openings such as the lungs, intestines, and the genitourinary tract. 
In the lungs, coughing and sneezing mechanically eject pathogens and other irritants from the respiratory tract. The flushing action of tears and urine also mechanically expels pathogens, while mucus secreted by the respiratory and gastrointestinal tract serves to trap and entangle microorganisms. Chemical barriers also protect against infection. The skin and respiratory tract secrete antimicrobial peptides such as the β-defensins. Enzymes such as lysozyme and phospholipase A2 in saliva, tears, and breast milk are also antibacterials. Vaginal secretions serve as a chemical barrier following menarche, when they become slightly acidic, while semen contains defensins and zinc to kill pathogens. In the stomach, gastric acid and proteases serve as powerful chemical defenses against ingested pathogens. Within the genitourinary and gastrointestinal tracts, commensal flora serve as biological barriers by competing with pathogenic bacteria for food and space and, in some cases, by changing the conditions in their environment, such as pH or available iron. As a result of the symbiotic relationship between commensals and the immune system, the probability that pathogens will reach sufficient numbers to cause illness is reduced. However, since most antibiotics non-specifically target bacteria and do not affect fungi, oral antibiotics can lead to an "overgrowth" of fungi and cause conditions such as a vaginal candidiasis (a yeast infection). There is good evidence that re-introduction of probiotic flora, such as pure cultures of the lactobacilli normally found in unpasteurized yogurt, helps restore a healthy balance of microbial populations in intestinal infections in children and encouraging preliminary data in studies on bacterial gastroenteritis, inflammatory bowel diseases, urinary tract infection and post-surgical infections. Inflammation is one of the first responses of the immune system to infection. The symptoms of inflammation are redness, swelling, heat, and pain, which are caused by increased blood flow into tissue. Inflammation is produced by eicosanoids and cytokines, which are released by injured or infected cells. Eicosanoids include prostaglandins that produce fever and the dilation of blood vessels associated with inflammation, and leukotrienes that attract certain white blood cells (leukocytes). Common cytokines include interleukins that are responsible for communication between white blood cells; chemokines that promote chemotaxis; and interferons that have anti-viral effects, such as shutting down protein synthesis in the host cell. Growth factors and cytotoxic factors may also be released. These cytokines and other chemicals recruit immune cells to the site of infection and promote healing of any damaged tissue following the removal of pathogens. The complement system is a biochemical cascade that attacks the surfaces of foreign cells. It contains over 20 different proteins and is named for its ability to "complement" the killing of pathogens by antibodies. Complement is the major humoral component of the innate immune response. Many species have complement systems, including non-mammals like plants, fish, and some invertebrates. In humans, this response is activated by complement binding to antibodies that have attached to these microbes or the binding of complement proteins to carbohydrates on the surfaces of microbes. This recognition signal triggers a rapid killing response. 
The speed of the response is a result of signal amplification that occurs after sequential proteolytic activation of complement molecules, which are also proteases. After complement proteins initially bind to the microbe, they activate their protease activity, which in turn activates other complement proteases, and so on. This produces a catalytic cascade that amplifies the initial signal by controlled positive feedback. The cascade results in the production of peptides that attract immune cells, increase vascular permeability, and opsonize (coat) the surface of a pathogen, marking it for destruction. This deposition of complement can also kill cells directly by disrupting their plasma membrane. Leukocytes (white blood cells) act like independent, single-celled organisms and are the second arm of the innate immune system. The innate leukocytes include the phagocytes (macrophages, neutrophils, and dendritic cells), innate lymphoid cells, mast cells, eosinophils, basophils, and natural killer cells. These cells identify and eliminate pathogens, either by attacking larger pathogens through contact or by engulfing and then killing microorganisms. Innate cells are also important mediators in lymphoid organ development and the activation of the adaptive immune system. Phagocytosis is an important feature of cellular innate immunity performed by cells called phagocytes that engulf, or eat, pathogens or particles. Phagocytes generally patrol the body searching for pathogens, but can be called to specific locations by cytokines. Once a pathogen has been engulfed by a phagocyte, it becomes trapped in an intracellular vesicle called a phagosome, which subsequently fuses with another vesicle called a lysosome to form a phagolysosome. The pathogen is killed by the activity of digestive enzymes or following a respiratory burst that releases free radicals into the phagolysosome. Phagocytosis evolved as a means of acquiring nutrients, but this role was extended in phagocytes to include engulfment of pathogens as a defense mechanism. Phagocytosis probably represents the oldest form of host defense, as phagocytes have been identified in both vertebrate and invertebrate animals. Neutrophils and macrophages are phagocytes that travel throughout the body in pursuit of invading pathogens. Neutrophils are normally found in the bloodstream and are the most abundant type of phagocyte, normally representing 50% to 60% of the total circulating leukocytes, and consisting of neutrophil-killer and neutrophil-cager subpopulations. During the acute phase of inflammation, particularly as a result of bacterial infection, neutrophils migrate toward the site of inflammation in a process called chemotaxis, and are usually the first cells to arrive at the scene of infection. Macrophages are versatile cells that reside within tissues and produce a wide array of chemicals including enzymes, complement proteins, and cytokines, while they can also act as scavengers that rid the body of worn-out cells and other debris, and as antigen-presenting cells that activate the adaptive immune system. Dendritic cells (DC) are phagocytes in tissues that are in contact with the external environment; therefore, they are located mainly in the skin, nose, lungs, stomach, and intestines. They are named for their resemblance to neuronal dendrites, as both have many spine-like projections, but dendritic cells are in no way connected to the nervous system. 
Dendritic cells serve as a link between the bodily tissues and the innate and adaptive immune systems, as they present antigens to T cells, one of the key cell types of the adaptive immune system. Mast cells reside in connective tissues and mucous membranes, and regulate the inflammatory response. They are most often associated with allergy and anaphylaxis. Basophils and eosinophils are related to neutrophils. They secrete chemical mediators that are involved in defending against parasites and play a role in allergic reactions, such as asthma. Natural killer (NK cells) cells are leukocytes that attack and destroy tumor cells, or cells that have been infected by viruses. Natural killer cells, or NK cells, are lymphocytes and a component of the innate immune system which does not directly attack invading microbes. Rather, NK cells destroy compromised host cells, such as tumor cells or virus-infected cells, recognizing such cells by a condition known as "missing self." This term describes cells with low levels of a cell-surface marker called MHC I (major histocompatibility complex) – a situation that can arise in viral infections of host cells. They were named "natural killer" because of the initial notion that they do not require activation in order to kill cells that are "missing self." For many years it was unclear how NK cells recognize tumor cells and infected cells. It is now known that the MHC makeup on the surface of those cells is altered and the NK cells become activated through recognition of "missing self". Normal body cells are not recognized and attacked by NK cells because they express intact self MHC antigens. Those MHC antigens are recognized by killer cell immunoglobulin receptors (KIR) which essentially put the brakes on NK cells. The adaptive immune system evolved in early vertebrates and allows for a stronger immune response as well as immunological memory, where each pathogen is "remembered" by a signature antigen. The adaptive immune response is antigen-specific and requires the recognition of specific "non-self" antigens during a process called antigen presentation. Antigen specificity allows for the generation of responses that are tailored to specific pathogens or pathogen-infected cells. The ability to mount these tailored responses is maintained in the body by "memory cells". Should a pathogen infect the body more than once, these specific memory cells are used to quickly eliminate it. The cells of the adaptive immune system are special types of leukocytes, called lymphocytes. B cells and T cells are the major types of lymphocytes and are derived from hematopoietic stem cells in the bone marrow. B cells are involved in the humoral immune response, whereas T cells are involved in cell-mediated immune response. Both B cells and T cells carry receptor molecules that recognize specific targets. T cells recognize a "non-self" target, such as a pathogen, only after antigens (small fragments of the pathogen) have been processed and presented in combination with a "self" receptor called a major histocompatibility complex (MHC) molecule. There are two major subtypes of T cells: the killer T cell and the helper T cell. In addition there are regulatory T cells which have a role in modulating immune response. Killer T cells only recognize antigens coupled to Class I MHC molecules, while helper T cells and regulatory T cells only recognize antigens coupled to Class II MHC molecules. 
These two mechanisms of antigen presentation reflect the different roles of the two types of T cell. A third, minor subtype are the γδ T cells that recognize intact antigens that are not bound to MHC receptors. The double-positive T cells are exposed to a wide variety of self-antigens in the thymus, in which iodine is necessary for its thymus development and activity. In contrast, the B cell antigen-specific receptor is an antibody molecule on the B cell surface, and recognizes whole pathogens without any need for antigen processing. Each lineage of B cell expresses a different antibody, so the complete set of B cell antigen receptors represent all the antibodies that the body can manufacture. Killer T cells are a sub-group of T cells that kill cells that are infected with viruses (and other pathogens), or are otherwise damaged or dysfunctional. As with B cells, each type of T cell recognizes a different antigen. Killer T cells are activated when their T-cell receptor (TCR) binds to this specific antigen in a complex with the MHC Class I receptor of another cell. Recognition of this MHC:antigen complex is aided by a co-receptor on the T cell, called CD8. The T cell then travels throughout the body in search of cells where the MHC I receptors bear this antigen. When an activated T cell contacts such cells, it releases cytotoxins, such as perforin, which form pores in the target cell's plasma membrane, allowing ions, water and toxins to enter. The entry of another toxin called granulysin (a protease) induces the target cell to undergo apoptosis. T cell killing of host cells is particularly important in preventing the replication of viruses. T cell activation is tightly controlled and generally requires a very strong MHC/antigen activation signal, or additional activation signals provided by "helper" T cells (see below). Helper T cells regulate both the innate and adaptive immune responses and help determine which immune responses the body makes to a particular pathogen. These cells have no cytotoxic activity and do not kill infected cells or clear pathogens directly. They instead control the immune response by directing other cells to perform these tasks. Helper T cells express T cell receptors (TCR) that recognize antigen bound to Class II MHC molecules. The MHC:antigen complex is also recognized by the helper cell's CD4 co-receptor, which recruits molecules inside the T cell (e.g., Lck) that are responsible for the T cell's activation. Helper T cells have a weaker association with the MHC:antigen complex than observed for killer T cells, meaning many receptors (around 200–300) on the helper T cell must be bound by an MHC:antigen in order to activate the helper cell, while killer T cells can be activated by engagement of a single MHC:antigen molecule. Helper T cell activation also requires longer duration of engagement with an antigen-presenting cell. The activation of a resting helper T cell causes it to release cytokines that influence the activity of many cell types. Cytokine signals produced by helper T cells enhance the microbicidal function of macrophages and the activity of killer T cells. In addition, helper T cell activation causes an upregulation of molecules expressed on the T cell's surface, such as CD40 ligand (also called CD154), which provide extra stimulatory signals typically required to activate antibody-producing B cells. 
Gamma delta T cells (γδ T cells) possess an alternative T-cell receptor (TCR) as opposed to CD4+ and CD8+ (αβ) T cells and share the characteristics of helper T cells, cytotoxic T cells and NK cells. The conditions that produce responses from γδ T cells are not fully understood. Like other 'unconventional' T cell subsets bearing invariant TCRs, such as CD1d-restricted Natural Killer T cells, γδ T cells straddle the border between innate and adaptive immunity. On one hand, γδ T cells are a component of adaptive immunity as they rearrange TCR genes to produce receptor diversity and can also develop a memory phenotype. On the other hand, the various subsets are also part of the innate immune system, as restricted TCR or NK receptors may be used as pattern recognition receptors. For example, large numbers of human Vγ9/Vδ2 T cells respond within hours to common molecules produced by microbes, and highly restricted Vδ1+ T cells in epithelia respond to stressed epithelial cells. A B cell identifies pathogens when antibodies on its surface bind to a specific foreign antigen. This antigen/antibody complex is taken up by the B cell and processed by proteolysis into peptides. The B cell then displays these antigenic peptides on its surface MHC class II molecules. This combination of MHC and antigen attracts a matching helper T cell, which releases lymphokines and activates the B cell. As the activated B cell then begins to divide, its offspring (plasma cells) secrete millions of copies of the antibody that recognizes this antigen. These antibodies circulate in blood plasma and lymph, bind to pathogens expressing the antigen and mark them for destruction by complement activation or for uptake and destruction by phagocytes. Antibodies can also neutralize challenges directly, by binding to bacterial toxins or by interfering with the receptors that viruses and bacteria use to infect cells. Evolution of the adaptive immune system occurred in an ancestor of the jawed vertebrates. Many of the classical molecules of the adaptive immune system (e.g., immunoglobulins and T-cell receptors) exist only in jawed vertebrates. However, a distinct lymphocyte-derived molecule has been discovered in primitive jawless vertebrates, such as the lamprey and hagfish. These animals possess a large array of molecules called Variable lymphocyte receptors (VLRs) that, like the antigen receptors of jawed vertebrates, are produced from only a small number (one or two) of genes. These molecules are believed to bind pathogenic antigens in a similar way to antibodies, and with the same degree of specificity. When B cells and T cells are activated and begin to replicate, some of their offspring become long-lived memory cells. Throughout the lifetime of an animal, these memory cells remember each specific pathogen encountered and can mount a strong response if the pathogen is detected again. This is "adaptive" because it occurs during the lifetime of an individual as an adaptation to infection with that pathogen and prepares the immune system for future challenges. Immunological memory can be in the form of either passive short-term memory or active long-term memory. Newborn infants have no prior exposure to microbes and are particularly vulnerable to infection. Several layers of passive protection are provided by the mother. 
During pregnancy, a particular type of antibody, called IgG, is transported from mother to baby directly through the placenta, so human babies have high levels of antibodies even at birth, with the same range of antigen specificities as their mother. Breast milk or colostrum also contains antibodies that are transferred to the gut of the infant and protect against bacterial infections until the newborn can synthesize its own antibodies. This is passive immunity because the fetus does not actually make any memory cells or antibodies—it only borrows them. This passive immunity is usually short-term, lasting from a few days up to several months. In medicine, protective passive immunity can also be transferred artificially from one individual to another via antibody-rich serum. Long-term "active" memory is acquired following infection by activation of B and T cells. Active immunity can also be generated artificially, through vaccination. The principle behind vaccination (also called immunization) is to introduce an antigen from a pathogen in order to stimulate the immune system and develop specific immunity against that particular pathogen without causing disease associated with that organism. This deliberate induction of an immune response is successful because it exploits the natural specificity of the immune system, as well as its inducibility. With infectious disease remaining one of the leading causes of death in the human population, vaccination represents the most effective manipulation of the immune system mankind has developed. Most viral vaccines are based on live attenuated viruses, while many bacterial vaccines are based on acellular components of micro-organisms, including harmless toxin components. Since many antigens derived from acellular vaccines do not strongly induce the adaptive response, most bacterial vaccines are provided with additional adjuvants that activate the antigen-presenting cells of the innate immune system and maximize immunogenicity. The immune system is a remarkably effective structure that incorporates specificity, inducibility and adaptation. Failures of host defense do occur, however, and fall into three broad categories: immunodeficiencies, autoimmunity, and hypersensitivities. Immunodeficiencies occur when one or more of the components of the immune system are inactive. The ability of the immune system to respond to pathogens is diminished in both the young and the elderly, with immune responses beginning to decline at around 50 years of age due to immunosenescence. In developed countries, obesity, alcoholism, and drug use are common causes of poor immune function. However, malnutrition is the most common cause of immunodeficiency in developing countries. Diets lacking sufficient protein are associated with impaired cell-mediated immunity, complement activity, phagocyte function, IgA antibody concentrations, and cytokine production. Additionally, the loss of the thymus at an early age through genetic mutation or surgical removal results in severe immunodeficiency and a high susceptibility to infection. Immunodeficiencies can also be inherited or 'acquired'. Chronic granulomatous disease, where phagocytes have a reduced ability to destroy pathogens, is an example of an inherited, or congenital, immunodeficiency. AIDS and some types of cancer cause acquired immunodeficiency. Overactive immune responses comprise the other end of immune dysfunction, particularly the autoimmune disorders. 
Here, the immune system fails to properly distinguish between self and non-self, and attacks part of the body. Under normal circumstances, many T cells and antibodies react with "self" peptides. One of the functions of specialized cells (located in the thymus and bone marrow) is to present young lymphocytes with self antigens produced throughout the body and to eliminate those cells that recognize self-antigens, preventing autoimmunity. Hypersensitivity is an immune response that damages the body's own tissues. They are divided into four classes (Type I – IV) based on the mechanisms involved and the time course of the hypersensitive reaction. Type I hypersensitivity is an immediate or anaphylactic reaction, often associated with allergy. Symptoms can range from mild discomfort to death. Type I hypersensitivity is mediated by IgE, which triggers degranulation of mast cells and basophils when cross-linked by antigen. Type II hypersensitivity occurs when antibodies bind to antigens on the patient's own cells, marking them for destruction. This is also called antibody-dependent (or cytotoxic) hypersensitivity, and is mediated by IgG and IgM antibodies. Immune complexes (aggregations of antigens, complement proteins, and IgG and IgM antibodies) deposited in various tissues trigger Type III hypersensitivity reactions. Type IV hypersensitivity (also known as cell-mediated or "delayed type hypersensitivity") usually takes between two and three days to develop. Type IV reactions are involved in many autoimmune and infectious diseases, but may also involve "contact dermatitis" (poison ivy). These reactions are mediated by T cells, monocytes, and macrophages. Inflammation is one of the first responses of the immune system to infection, but it can appear without known cause. Inflammation is produced by eicosanoids and cytokines, which are released by injured or infected cells. Eicosanoids include prostaglandins that produce fever and the dilation of blood vessels associated with inflammation, and leukotrienes that attract certain white blood cells (leukocytes). Common cytokines include interleukins that are responsible for communication between white blood cells; chemokines that promote chemotaxis; and interferons that have anti-viral effects, such as shutting down protein synthesis in the host cell. Growth factors and cytotoxic factors may also be released. These cytokines and other chemicals recruit immune cells to the site of infection and promote healing of any damaged tissue following the removal of pathogens. It is likely that a multicomponent, adaptive immune system arose with the first vertebrates, as invertebrates do not generate lymphocytes or an antibody-based humoral response. Many species, however, utilize mechanisms that appear to be precursors of these aspects of vertebrate immunity. Immune systems appear even in the structurally most simple forms of life, with bacteria using a unique defense mechanism, called the restriction modification system to protect themselves from viral pathogens, called bacteriophages. Prokaryotes also possess acquired immunity, through a system that uses CRISPR sequences to retain fragments of the genomes of phage that they have come into contact with in the past, which allows them to block virus replication through a form of RNA interference. Prokaryotes also possess other defense mechanisms. Offensive elements of the immune systems are also present in unicellular eukaryotes, but studies of their roles in defense are few. 
Pattern recognition receptors are proteins used by nearly all organisms to identify molecules associated with pathogens. Antimicrobial peptides called defensins are an evolutionarily conserved component of the innate immune response found in all animals and plants, and represent the main form of invertebrate systemic immunity. The complement system and phagocytic cells are also used by most forms of invertebrate life. Ribonucleases and the RNA interference pathway are conserved across all eukaryotes, and are thought to play a role in the immune response to viruses. Unlike animals, plants lack phagocytic cells, but many plant immune responses involve systemic chemical signals that are sent through a plant. Individual plant cells respond to molecules associated with pathogens known as Pathogen-associated molecular patterns or PAMPs. When a part of a plant becomes infected, the plant produces a localized hypersensitive response, whereby cells at the site of infection undergo rapid apoptosis to prevent the spread of the disease to other parts of the plant. Systemic acquired resistance (SAR) is a type of defensive response used by plants that renders the entire plant resistant to a particular infectious agent. RNA silencing mechanisms are particularly important in this systemic response as they can block virus replication. Another important role of the immune system is to identify and eliminate tumors. This is called immune surveillance. The "transformed cells" of tumors express antigens that are not found on normal cells. To the immune system, these antigens appear foreign, and their presence causes immune cells to attack the transformed tumor cells. The antigens expressed by tumors have several sources; some are derived from oncogenic viruses like human papillomavirus, which causes cervical cancer, while others are the organism's own proteins that occur at low levels in normal cells but reach high levels in tumor cells. One example is an enzyme called tyrosinase that, when expressed at high levels, transforms certain skin cells (e.g. melanocytes) into tumors called melanomas. A third possible source of tumor antigens are proteins normally important for regulating cell growth and survival, that commonly mutate into cancer inducing molecules called oncogenes. The main response of the immune system to tumors is to destroy the abnormal cells using killer T cells, sometimes with the assistance of helper T cells. Tumor antigens are presented on MHC class I molecules in a similar way to viral antigens. This allows killer T cells to recognize the tumor cell as abnormal. NK cells also kill tumorous cells in a similar way, especially if the tumor cells have fewer MHC class I molecules on their surface than normal; this is a common phenomenon with tumors. Sometimes antibodies are generated against tumor cells allowing for their destruction by the complement system. Clearly, some tumors evade the immune system and go on to become cancers. Tumor cells often have a reduced number of MHC class I molecules on their surface, thus avoiding detection by killer T cells. Some tumor cells also release products that inhibit the immune response; for example by secreting the cytokine TGF-β, which suppresses the activity of macrophages and lymphocytes. In addition, immunological tolerance may develop against tumor antigens, so the immune system no longer attacks the tumor cells. 
Paradoxically, macrophages can promote tumor growth when tumor cells send out cytokines that attract macrophages, which then generate cytokines and growth factors such as tumor necrosis factor alpha that nurture tumor development or promote stem-cell-like plasticity. In addition, a combination of hypoxia in the tumor and a cytokine produced by macrophages induces tumor cells to decrease production of a protein that blocks metastasis and thereby assists spread of cancer cells. The immune system is involved in many aspects of physiological regulation in the body. The immune system interacts intimately with other systems, such as the endocrine and the nervous systems. The immune system also plays a crucial role in embryogenesis (development of the embryo), as well as in tissue repair and regeneration. Hormones can act as immunomodulators, altering the sensitivity of the immune system. For example, female sex hormones are known immunostimulators of both adaptive and innate immune responses. Some autoimmune diseases such as lupus erythematosus strike women preferentially, and their onset often coincides with puberty. By contrast, male sex hormones such as testosterone seem to be immunosuppressive. Other hormones appear to regulate the immune system as well, most notably prolactin, growth hormone and vitamin D. When a T-cell encounters a foreign pathogen, it extends a vitamin D receptor, essentially a signaling device that allows the T-cell to bind to the active form of vitamin D, the steroid hormone calcitriol. T-cells have a symbiotic relationship with vitamin D: not only does the T-cell extend a vitamin D receptor in order to bind calcitriol, but it also expresses the gene CYP27B1, which is responsible for converting the pre-hormone version of vitamin D, calcidiol, into the steroid hormone version, calcitriol. Only after binding to calcitriol can T-cells perform their intended function. Other immune system cells known to express CYP27B1, and thus to activate calcidiol, include dendritic cells, keratinocytes and macrophages. It is conjectured that a progressive decline in hormone levels with age is partially responsible for weakened immune responses in aging individuals. Conversely, some hormones are regulated by the immune system, notably thyroid hormone activity. The age-related decline in immune function is also related to decreasing vitamin D levels in the elderly. As people age, two things happen that negatively affect their vitamin D levels. First, they stay indoors more due to decreased activity levels. This means that they get less sun and therefore produce less cholecalciferol via UVB radiation. Second, as a person ages the skin becomes less adept at producing vitamin D. The immune system is affected by sleep and rest, and sleep deprivation is detrimental to immune function. Complex feedback loops involving cytokines, such as interleukin-1 and tumor necrosis factor-α produced in response to infection, appear to also play a role in the regulation of non-rapid eye movement (NREM) sleep. Thus the immune response to infection may result in changes to the sleep cycle, including an increase in slow-wave sleep relative to REM sleep. In individuals suffering from sleep deprivation, active immunizations may have a diminished effect, resulting in lower antibody production and a lower immune response than would be noted in a well-rested individual. 
Additionally, proteins such as NFIL3, which have been shown to be closely intertwined with both T-cell differentiation and circadian rhythms, can be affected by the disturbance of natural light and dark cycles caused by sleep deprivation, shift work, and similar factors. As a result, these disruptions can lead to an increase in chronic conditions such as heart disease, chronic pain, and asthma. In addition to the negative consequences of sleep deprivation, sleep and the intertwined circadian system have been shown to have strong regulatory effects on immunological functions affecting both the innate and the adaptive immunity. First, during the early slow-wave-sleep stage, a sudden drop in blood levels of cortisol, epinephrine, and norepinephrine induces increased blood levels of the hormones leptin, pituitary growth hormone, and prolactin. These signals induce a pro-inflammatory state through the production of the pro-inflammatory cytokines interleukin-1, interleukin-12, TNF-alpha and IFN-gamma. These cytokines then stimulate immune functions such as immune cell activation, proliferation, and differentiation. It is during this time that undifferentiated or less-differentiated cells, such as naïve and central memory T cells, peak (i.e. during a time of a slowly evolving adaptive immune response). In addition to these effects, the milieu of hormones produced at this time (leptin, pituitary growth hormone, and prolactin) supports the interactions between APCs and T-cells, a shift of the Th1/Th2 cytokine balance towards one that supports Th1, an increase in overall T cell proliferation, and naïve T cell migration to lymph nodes. This milieu is also thought to support the formation of long-lasting immune memory through the initiation of Th1 immune responses. In contrast, during wake periods differentiated effector cells, such as cytotoxic natural killer cells and CTLs (cytotoxic T lymphocytes), peak in order to elicit an effective response against any intruding pathogens. Anti-inflammatory molecules, such as cortisol and catecholamines, also peak during these awake, active times. There are two theories as to why the pro-inflammatory state is reserved for sleep time. First, inflammation would cause serious cognitive and physical impairments if it were to occur during wake times. Second, inflammation may occur during sleep times due to the presence of melatonin. Inflammation causes a great deal of oxidative stress and the presence of melatonin during sleep times could actively counteract free radical production during this time. Overnutrition is associated with diseases such as diabetes and obesity, which are known to affect immune function. More moderate malnutrition, as well as certain specific trace mineral and nutrient deficiencies, can also compromise the immune response. Foods rich in certain fatty acids may foster a healthy immune system. Likewise, fetal undernourishment can cause a lifelong impairment of the immune system. The immune system, particularly the innate component, plays a decisive role in tissue repair after an insult. Key actors include macrophages and neutrophils, but other cellular actors, including γδ T cells, innate lymphoid cells (ILCs), and regulatory T cells (Tregs), are also important. The plasticity of immune cells and the balance between pro-inflammatory and anti-inflammatory signals are crucial aspects of efficient tissue repair. Immune components and pathways are involved in regeneration as well, for example in amphibians. 
According to one hypothesis, organisms that can regenerate could be less immunocompetent than organisms that cannot regenerate. The immune response can be manipulated to suppress unwanted responses resulting from autoimmunity, allergy, and transplant rejection, and to stimulate protective responses against pathogens that largely elude the immune system (see immunization) or cancer. Immunosuppressive drugs are used to control autoimmune disorders or inflammation when excessive tissue damage occurs, and to prevent transplant rejection after an organ transplant. Anti-inflammatory drugs are often used to control the effects of inflammation. Glucocorticoids are the most powerful of these drugs; however, they can have many undesirable side effects, such as central obesity, hyperglycemia, and osteoporosis, and their use must be tightly controlled. Lower doses of anti-inflammatory drugs are often used in conjunction with cytotoxic or immunosuppressive drugs such as methotrexate or azathioprine. Cytotoxic drugs inhibit the immune response by killing dividing cells such as activated T cells. However, the killing is indiscriminate and other constantly dividing cells and their organs are affected, which causes toxic side effects. Immunosuppressive drugs such as cyclosporin prevent T cells from responding to signals correctly by inhibiting signal transduction pathways. Cancer immunotherapy covers the medical ways to stimulate the immune system to attack cancer tumors. Immunology is strongly experimental in everyday practice but is also characterized by an ongoing theoretical attitude. Many theories have been suggested in immunology from the end of the nineteenth century up to the present time. The end of the 19th century and the beginning of the 20th century saw a battle between "cellular" and "humoral" theories of immunity. According to the cellular theory of immunity, represented in particular by Elie Metchnikoff, it was cells – more precisely, phagocytes – that were responsible for immune responses. In contrast, the humoral theory of immunity, held, among others, by Robert Koch and Emil von Behring, stated that the active immune agents were soluble components (molecules) found in the organism's "humors" rather than its cells. In the mid-1950s, Frank Macfarlane Burnet, inspired by a suggestion made by Niels Jerne, formulated the clonal selection theory (CST) of immunity. On the basis of CST, Burnet developed a theory of how an immune response is triggered according to the self/nonself distinction: "self" constituents (constituents of the body) do not trigger destructive immune responses, while "nonself" entities (pathogens, an allograft) trigger a destructive immune response. The theory was later modified to reflect new discoveries regarding histocompatibility or the complex "two-signal" activation of T cells. The self/nonself theory of immunity and the self/nonself vocabulary have been criticized, but remain very influential. More recently, several theoretical frameworks have been suggested in immunology, including "autopoietic" views, "cognitive immune" views, the "danger model" (or "danger theory"), and the "discontinuity" theory. The danger model, suggested by Polly Matzinger and colleagues, has been very influential, arousing many comments and discussions. Larger drugs (>500 Da) can provoke a neutralizing immune response, particularly if the drugs are administered repeatedly or in larger doses. 
This limits the effectiveness of drugs based on larger peptides and proteins (which are typically larger than 6000 Da). In some cases, the drug itself is not immunogenic, but may be co-administered with an immunogenic compound, as is sometimes the case for Taxol. Computational methods have been developed to predict the immunogenicity of peptides and proteins, which are particularly useful in designing therapeutic antibodies, assessing likely virulence of mutations in viral coat particles, and validating proposed peptide-based drug treatments. Early techniques relied mainly on the observation that hydrophilic amino acids are overrepresented in epitope regions relative to hydrophobic amino acids; however, more recent developments rely on machine learning techniques using databases of existing known epitopes, usually on well-studied virus proteins, as a training set. A publicly accessible database has been established for the cataloguing of epitopes from pathogens known to be recognizable by B cells. The emerging field of bioinformatics-based studies of immunogenicity is referred to as "immunoinformatics". Immunoproteomics is the study of large sets of proteins (proteomics) involved in the immune response. The success of any pathogen depends on its ability to elude host immune responses. Therefore, pathogens have evolved several methods that allow them to successfully infect a host, while evading detection or destruction by the immune system. Bacteria often overcome physical barriers by secreting enzymes that digest the barrier, for example, by using a type II secretion system. Alternatively, using a type III secretion system, they may insert a hollow tube into the host cell, providing a direct route for proteins to move from the pathogen to the host. These proteins are often used to shut down host defenses. An evasion strategy used by several pathogens to avoid the innate immune system is to hide within the cells of their host (also called intracellular pathogenesis). Here, a pathogen spends most of its life-cycle inside host cells, where it is shielded from direct contact with immune cells, antibodies and complement. Some examples of intracellular pathogens include viruses, the food poisoning bacterium "Salmonella" and the eukaryotic parasites that cause malaria ("Plasmodium falciparum") and leishmaniasis ("Leishmania spp."). Other bacteria, such as "Mycobacterium tuberculosis", live inside a protective capsule that prevents lysis by complement. Many pathogens secrete compounds that diminish or misdirect the host's immune response. Some bacteria form biofilms to protect themselves from the cells and proteins of the immune system. Such biofilms are present in many successful infections, e.g., the chronic "Pseudomonas aeruginosa" and "Burkholderia cenocepacia" infections characteristic of cystic fibrosis. Other bacteria generate surface proteins that bind to antibodies, rendering them ineffective; examples include "Streptococcus" (protein G), "Staphylococcus aureus" (protein A), and "Peptostreptococcus magnus" (protein L). The mechanisms used to evade the adaptive immune system are more complicated. The simplest approach is to rapidly change non-essential epitopes (amino acids and/or sugars) on the surface of the pathogen, while keeping essential epitopes concealed. This is called antigenic variation. An example is HIV, which mutates rapidly, so the proteins on its viral envelope that are essential for entry into its host target cell are constantly changing. 
These frequent changes in antigens may explain the failures of vaccines directed at this virus. The parasite "Trypanosoma brucei" uses a similar strategy, constantly switching one type of surface protein for another, allowing it to stay one step ahead of the antibody response. Masking antigens with host molecules is another common strategy for avoiding detection by the immune system. In HIV, the envelope that covers the virion is formed from the outermost membrane of the host cell; such "self-cloaked" viruses make it difficult for the immune system to identify them as "non-self" structures.
I. M. Pei Ieoh Ming Pei, FAIA, RIBA (born 26 April 1917), commonly known as I. M. Pei, is a Chinese American architect. Born in Guangzhou and raised in Hong Kong and Shanghai, Pei drew inspiration at an early age from the gardens at Suzhou. In 1935, he moved to the United States and enrolled in the University of Pennsylvania's architecture school, but quickly transferred to the Massachusetts Institute of Technology. He was unhappy with the focus at both schools on Beaux-Arts architecture, and spent his free time researching emerging architects, especially Le Corbusier. After graduating, he joined the Harvard Graduate School of Design (GSD) and became a friend of the Bauhaus architects Walter Gropius and Marcel Breuer. In 1948, Pei was recruited by New York City real estate magnate William Zeckendorf, for whom he worked for seven years before establishing his own independent design firm I. M. Pei & Associates in 1955, which became I. M. Pei & Partners in 1966 and later in 1989 became Pei Cobb Freed & Partners. Pei retired from full-time practice in 1990. Since then, he has taken on work as an architectural consultant primarily from his sons' architectural firm Pei Partnership Architects. Pei's first major recognition came with the National Center for Atmospheric Research in Colorado (designed in 1961 and completed in 1967). His new stature led to his selection as chief architect for the John F. Kennedy Library in Massachusetts. He went on to design Dallas City Hall and the East Building of the National Gallery of Art. He returned to China for the first time in 1975 to design a hotel at Fragrant Hills, and fifteen years later designed the Bank of China Tower, a skyscraper in Hong Kong for the Bank of China. In the early 1980s, Pei was the focus of controversy when he designed a glass-and-steel pyramid for the Musée du Louvre in Paris. He later returned to the world of the arts by designing the Morton H. Meyerson Symphony Center in Dallas, the Miho Museum in Japan, the Suzhou Museum in Suzhou, and the Museum of Islamic Art in Qatar. Pei has won a wide variety of prizes and awards in the field of architecture, including the AIA Gold Medal in 1979, the first Praemium Imperiale for Architecture in 1989, and the Lifetime Achievement Award from the Cooper-Hewitt, National Design Museum in 2003. In 1983, he won the Pritzker Prize, sometimes called the Nobel Prize of architecture. Pei's ancestry traces back to the Ming Dynasty, when his family moved from Anhui province to Suzhou. Most importantly, members of his family were directors of the Bank of China, which later funded the construction of important projects, including the Kips Bay project in New York. The family also found wealth in the sale of medicinal herbs and stressed the importance of helping the less fortunate. Ieoh Ming Pei was born on 26 April 1917 to Tsuyee Pei and Lien Kwun, and the family moved to Hong Kong one year later. 
The family eventually included five children. As a boy, Pei was very close to his mother, a devout Buddhist who was recognized for her skills as a flautist. She invited him, his brothers, and his sisters to join her on meditation retreats. His relationship with his father was less intimate. Their interactions were respectful but distant. Pei's ancestors' success meant that the family lived in the upper echelons of society, but Pei said his father was "not cultivated in the ways of the arts". The younger Pei, drawn more to music and other cultural forms than to his father's domain of banking, explored art on his own. "I have cultivated myself," he said later. At the age of ten, Pei moved with his family to Shanghai after his father was promoted. Pei attended Saint John's Middle School, run by Protestant missionaries. Academic discipline was rigorous; students were allowed only one half-day each month for leisure. Pei enjoyed playing billiards and watching Hollywood movies, especially those of Buster Keaton and Charlie Chaplin. He also learned rudimentary English skills by reading the Bible and novels by Charles Dickens. Shanghai's many international elements gave it the name "Paris of the East". The city's global architectural flavors had a profound influence on Pei, from the Bund waterfront area to the Park Hotel, built in 1934. He was also impressed by the many gardens of Suzhou, where he spent the summers with extended family and regularly visited a nearby ancestral shrine. The Shizilin Garden, built in the 14th century by a Buddhist monk, was especially influential. Its unusual rock formations, stone bridges, and waterfalls remained etched in Pei's memory for decades. He spoke later of his fondness for the garden's blending of natural and human-built structures. Soon after the move to Shanghai, Pei's mother developed cancer. She was prescribed opium as a pain reliever, and Pei was assigned the task of preparing her pipe. She died shortly after his thirteenth birthday, and he was profoundly upset. The children were sent to live with extended family; their father became more consumed by his work and more physically distant. Pei said: "My father began living his own separate life pretty soon after that." His father later married a woman named Aileen, who moved to New York later in her life. As Pei neared the end of his secondary education, he decided to study at a university. He was accepted to a number of schools, but decided to enroll at the University of Pennsylvania. Pei's choice had two roots. While studying in Shanghai, he had closely examined the catalogs for various institutions of higher learning around the world. The architectural program at the University of Pennsylvania stood out to him. The other major factor was Hollywood. Pei was fascinated by the representations of college life in the films of Bing Crosby, which differed tremendously from the academic atmosphere in China. "College life in the U.S. seemed to me to be mostly fun and games", he said in 2000. "Since I was too young to be serious, I wanted to be part of it ... You could get a feeling for it in Bing Crosby's movies. College life in America seemed very exciting to me. It's not real, we know that. Nevertheless, at that time it was very attractive to me. I decided that was the country for me." In 1935 Pei boarded a boat and sailed to San Francisco, then traveled by train to Philadelphia. What he found, however, differed vastly from his expectations. 
Professors at the University of Pennsylvania based their teaching in the Beaux-Arts style, rooted in the classical traditions of Greece and Rome. Pei was more intrigued by modern architecture, and also felt intimidated by the high level of drafting proficiency shown by other students. He decided to abandon architecture and transferred to the engineering program at the Massachusetts Institute of Technology (MIT). Once he arrived, however, the dean of the architecture school commented on his eye for design and convinced Pei to return to his original major. MIT's architecture faculty was also focused on the Beaux-Arts school, and Pei found himself uninspired by the work. In the library he found three books by the Swiss-French architect Le Corbusier. Pei was inspired by the innovative designs of the new International style, characterized by simplified form and the use of glass and steel materials. Le Corbusier visited MIT while Pei was a student there, an occasion which powerfully affected Pei: "The two days with Le Corbusier, or 'Corbu' as we used to call him, were probably the most important days in my architectural education." Pei was also influenced by the work of US architect Frank Lloyd Wright. In 1938 he drove to Spring Green, Wisconsin, to visit Wright's famous Taliesin building. After waiting for two hours, however, he left without meeting Wright. Although he disliked the Beaux-Arts emphasis at MIT, Pei excelled in his studies. "I certainly don't regret the time at MIT", he said later. "There I learned the science and technique of building, which is just as essential to architecture." Pei received his B.Arch. degree in 1940. While visiting New York City in the late 1930s, Pei met a Wellesley College student named Eileen Loo. They began dating and married in the spring of 1942. She enrolled in the landscape architecture program at Harvard University, and Pei was thus introduced to members of the faculty at Harvard's Graduate School of Design (GSD). He was excited by the lively atmosphere and joined the GSD. Less than a month later, Pei suspended his work at Harvard to join the National Defense Research Committee, which coordinated scientific research into US weapons technology during World War II. Pei's background in architecture was seen as a considerable asset; one member of the committee told him: "If you know how to build you should also know how to destroy." The fight against Germany was ending, so he focused on the Pacific War. The US realized that its bombs used against the stone buildings of Europe would be ineffective against Japanese cities, mostly constructed from wood and paper; Pei was assigned to work on incendiary bombs. Pei spent two and a half years with the NDRC, but has revealed few details. In 1945 Eileen gave birth to a son, T'ing Chung; she withdrew from the landscape architecture program in order to care for him. Pei returned to Harvard in the autumn of 1945, and received a position as assistant professor of design. The GSD was developing into a hub of resistance to the Beaux-Arts orthodoxy. At the center were members of the Bauhaus, a European architectural movement that had advanced the cause of modernist design. The Nazi regime had condemned the Bauhaus school, and its leaders left Germany. Two of these, Walter Gropius and Marcel Breuer, took positions at the Harvard GSD. Their iconoclastic focus on modern architecture appealed to Pei, and he worked closely with both men. One of Pei's design projects at the GSD was a plan for an art museum in Shanghai. 
He wanted to create a mood of Chinese authenticity in the architecture without using traditional materials or styles. The design was based on straight modernist structures, organized around a central courtyard garden, with other similar natural settings arranged nearby. It was very well received; Gropius, in fact, called it "the best thing done in [my] master class". Pei received his M.Arch. degree in 1946, and taught at Harvard for another two years. In the spring of 1948 Pei was recruited by New York real estate magnate William Zeckendorf to join a staff of architects at his firm of Webb and Knapp to design buildings around the country. Pei found Zeckendorf's personality the opposite of his own; his new boss was known for his loud speech and gruff demeanor. Nevertheless, they became good friends and Pei found the experience personally enriching. Zeckendorf was well connected politically, and Pei enjoyed learning about the social world of New York's city planners. His first project for Webb and Knapp was an apartment building with funding from the Housing Act of 1949. Pei's design was based on a circular tower with concentric rings. The areas closest to the supporting pillar handled utilities and circulation; the apartments themselves were located toward the outer edge. Zeckendorf loved the design and even showed it off to Le Corbusier when they met. The cost of such an unusual design was too high, however, and the building never moved beyond the model stage. Pei finally saw his architecture come to life in 1949, when he designed a two-story corporate building for Gulf Oil in Atlanta, Georgia. The building was demolished in February 2013, although the front facade was to be retained as part of an apartment development. His use of marble for the exterior curtain wall brought praise from the journal "Architectural Forum". In the beginning of his career, Pei's designs echoed the work of Mies van der Rohe, as shown in his own weekend house in Katonah in 1952. Soon Pei was so inundated with projects that he asked Zeckendorf for assistants, whom he chose from his associates at the GSD, including Henry N. Cobb and Ulrich Franzen. They set to work on a variety of proposals, including the Roosevelt Field Shopping Mall. The team also redesigned the Webb and Knapp office building, transforming Zeckendorf's office into a circular space with teak walls and a glass clerestory. They also installed a control panel in the desk that allowed their boss to control the lighting in his office. The project took one year and exceeded its budget, but Zeckendorf was delighted with the results. In 1952 Pei and his team began work on a series of projects in Denver, Colorado. The first of these was the Mile High Center, which compressed the core building into less than twenty-five percent of the total site; the rest is adorned with an exhibition hall and fountain-dotted plazas. One block away, Pei's team also redesigned Denver's Courthouse Square, which combined office spaces, commercial venues, and hotels. These projects helped Pei conceptualize architecture as part of the larger urban geography. "I learned the process of development," he said later, "and about the city as a living organism." These lessons, he said, became essential for later projects. Pei and his team also designed a unified urban area for Washington, D.C., L'Enfant Plaza (named for French-American architect Pierre Charles L'Enfant). 
Pei's associate Araldo Cossutta was the lead architect for the plaza's North Building (955 L'Enfant Plaza SW) and South Building (490 L'Enfant Plaza SW). Vlastimil Koubek was the architect for the East Building (L'Enfant Plaza Hotel, located at 480 L'Enfant Plaza SW), and for the Center Building (475 L'Enfant Plaza SW; now the United States Postal Service headquarters). The team set out with a broad vision that was praised by both "The Washington Post" and "Washington Star" (which rarely agreed on anything), but funding problems forced revisions and a significant reduction in scale. In 1955 Pei's group took a step toward institutional independence from Webb and Knapp by establishing a new firm called I. M. Pei & Associates. (The name changed later to I. M. Pei & Partners.) They gained the freedom to work with other companies, but continued working primarily with Zeckendorf. The new firm distinguished itself through the use of detailed architectural models. They took on the Kips Bay residential area on the east side of Manhattan, where Pei designed Kips Bay Towers, two large, long apartment towers with recessed windows (to provide shade and privacy) in a neat grid, adorned with rows of trees. Pei involved himself in the construction process at Kips Bay, even inspecting the bags of concrete to check for consistency of color. The company continued its urban focus with the Society Hill project in central Philadelphia. Pei designed the Society Hill Towers, a three-building residential block injecting cubist design into the 18th-century milieu of the neighborhood. As with previous projects, abundant green spaces were central to Pei's vision, which also added traditional townhouses to aid the transition from classical to modern design. From 1958 to 1963 Pei and Ray Affleck developed a key downtown block of Montreal in a phased process that involved one of Pei's most admired structures in the Commonwealth, the cruciform tower known as the Royal Bank Plaza (Place Ville Marie). According to the Canadian Encyclopedia "its grand plaza and lower office buildings, designed by internationally famous US architect I. M. Pei, helped to set new standards for architecture in Canada in the 1960s ... The tower's smooth aluminum and glass surface and crisp unadorned geometric form demonstrate Pei's adherence to the mainstream of 20th-century modern design." Although these projects were satisfying, Pei wanted to establish an independent name for himself. In 1959 he was approached by MIT to design a building for its Earth science program. The Green Building continued the grid design of Kips Bay and Society Hill. The pedestrian walkway at the ground floor, however, was prone to sudden gusts of wind, which embarrassed Pei. "Here I was from MIT," he said, "and I didn't know about wind-tunnel effects." At the same time, he designed the Luce Memorial Chapel at Tunghai University in Taichung, Taiwan. The soaring structure, commissioned by the same organization that had run his middle school in Shanghai, broke severely from the cubist grid patterns of his urban projects. The challenge of coordinating these projects took an artistic toll on Pei. He found himself responsible for acquiring new building contracts and supervising the plans for them. As a result, he felt disconnected from the actual creative work. "Design is something you have to put your hand to," he said. "While my people had the luxury of doing one job at a time, I had to keep track of the whole enterprise." 
Pei's dissatisfaction reached its peak at a time when financial problems began plaguing Zeckendorf's firm. I. M. Pei & Associates officially broke from Webb and Knapp in 1960, which benefited Pei creatively but pained him personally. He had developed a close friendship with Zeckendorf, and both men were sad to part ways. Pei was able to return to hands-on design when he was approached in 1961 by Walter Orr Roberts to design the new Mesa Laboratory for the National Center for Atmospheric Research outside Boulder, Colorado. The project differed from Pei's earlier urban work; it would rest in an open area in the foothills of the Rocky Mountains. He drove with his wife around the region, visiting assorted buildings and surveying the natural environs. He was impressed by the United States Air Force Academy in Colorado Springs, but felt it was "detached from nature". The conceptualization stages were important for Pei, presenting a need and an opportunity to break from the Bauhaus tradition. He later recalled the long periods of time he spent in the area: "I recalled the places I had seen with my mother when I was a little boy—the mountaintop Buddhist retreats. There in the Colorado mountains, I tried to listen to the silence again—just as my mother had taught me. The investigation of the place became a kind of religious experience for me." Pei also drew inspiration from the Mesa Verde cliff dwellings of the Ancient Pueblo Peoples; he wanted the buildings to exist in harmony with their natural surroundings. To this end, he called for a rock-treatment process that could color the buildings to match the nearby mountains. He also set the complex back on the mesa overlooking the city, and designed the approaching road to be long, winding, and indirect. Roberts disliked Pei's initial designs, referring to them as "just a bunch of towers". Roberts intended his comments as typical of scientific experimentation, rather than artistic critique; still, Pei was frustrated. His second attempt, however, fit Roberts' vision perfectly: a spaced-out series of clustered buildings, joined by lower structures and complemented by two underground levels. The complex uses many elements of cubist design, and the walkways are arranged to increase the probability of casual encounters among colleagues. Once the laboratory was built, several problems with its construction became apparent. Leaks in the roof caused difficulties for researchers, and the shifting of clay soil beneath caused cracks in the buildings which were expensive to repair. Still, both architect and project manager were pleased with the final result. Pei refers to the NCAR complex as his "breakout building", and he remained a friend of Roberts until the scientist's death. The success of NCAR brought renewed attention to Pei's design acumen. He was recruited to work on a variety of projects, including the S. I. Newhouse School of Public Communications at Syracuse University, the Sundrome terminal at John F. Kennedy International Airport in New York City, and dormitories at New College of Florida. After President John F. Kennedy was assassinated in November 1963, his family and friends discussed how to construct a library that would serve as a fitting memorial. A committee was formed to advise Kennedy's widow Jacqueline, who would make the final decision. The group deliberated for months and considered many famous architects. Eventually, Kennedy chose Pei to design the library, based on two considerations. 
First, she appreciated the variety of ideas he had used for earlier projects. "He didn't seem to have just one way to solve a problem," she said. "He seemed to approach each commission thinking only of it and then develop a way to make something beautiful." Ultimately, however, Kennedy made her choice based on her personal connection with Pei. Calling it "really an emotional decision", she explained: "He was so full of promise, like Jack; they were born in the same year. I decided it would be fun to take a great leap with him." The project was plagued with problems from the outset. The first was scope. President Kennedy had begun considering the structure of his library soon after taking office, and he wanted to include archives from his administration, a museum of personal items, and a political science institute. After the assassination, the list expanded to include a fitting memorial tribute to the slain president. The variety of necessary inclusions complicated the design process and caused significant delays. Pei's first proposed design included a large glass pyramid that would fill the interior with sunlight, meant to represent the optimism and hope that Kennedy's administration had symbolized for so many in the US. Mrs. Kennedy liked the design, but resistance began in Cambridge, the first proposed site for the building, as soon as the project was announced. Many community members worried that the library would become a tourist attraction, causing particular problems with traffic congestion. Others worried that the design would clash with the architectural feel of nearby Harvard Square. By the mid-70s, Pei tried proposing a new design, but the library's opponents resisted every effort. These events pained Pei, who had sent all three of his sons to Harvard, and although he rarely discussed his frustration, it was evident to his wife. "I could tell how tired he was by the way he opened the door at the end of the day," she said. "His footsteps were dragging. It was very hard for I. M. to see that so many people didn't want the building." Finally the project moved to Columbia Point, near the University of Massachusetts Boston. The new site was less than ideal; it was located on an old landfill, and just over a large sewage pipe. Pei's architectural team added more fill to cover the pipe and developed an elaborate ventilation system to conquer the odor. A new design was unveiled, combining a large square glass-enclosed atrium with a triangular tower and a circular walkway. The John F. Kennedy Presidential Library and Museum was dedicated on 20 October 1979. Critics generally liked the finished building, but the architect himself was unsatisfied. The years of conflict and compromise had changed the nature of the design, and Pei felt that the final result lacked its original passion. "I wanted to give something very special to the memory of President Kennedy," he said in 2000. "It could and should have been a great project." Pei's work on the Kennedy project boosted his reputation as an architect of note. The Pei Plan was an urban redevelopment initiative designed for downtown Oklahoma City, Oklahoma, in the 1960s and 1970s. It is the informal name for two related commissions by Pei – namely the Central Business District General Neighborhood Renewal Plan (design completed 1964) and the Central Business District Project I-A Development Plan (design completed 1966). It was formally adopted in 1965, and implemented in various public and private phases throughout the 1960s and 1970s. 
The plan called for the demolition of hundreds of old downtown structures in favor of renewed parking, office building, and retail developments, in addition to public projects such as the Myriad Convention Center and the Myriad Botanical Gardens. It was the dominant template for downtown development in Oklahoma City from its inception through the 1970s. The plan generated mixed results and opinion, largely succeeding in re-developing office building and parking infrastructure but failing to attract its anticipated retail and residential development. Significant public resentment also developed as a result of the destruction of multiple historic structures. As a result, Oklahoma City's leadership avoided large-scale urban planning for downtown throughout the 1980s and early 1990s, until the passage of the Metropolitan Area Projects (MAPS) initiative in 1993. Another city which turned to Pei for urban renewal during this time was Providence, Rhode Island. In the late 1960s, Providence hired Pei to redesign Cathedral Square, a once-bustling civic center which had become neglected and empty, as part of an ambitious larger plan to redesign downtown. Pei's new plaza, modeled after the Greek Agora marketplace, opened in 1972. Unfortunately, the city ran out of money before Pei's vision could be fully realized. Also, recent construction of a low-income housing complex and Interstate 95 had changed the neighborhood's character permanently. In 1974, "The Providence Evening Bulletin" called Pei's new plaza a "conspicuous failure." By 2016, media reports characterized the plaza as a neglected, little-visited "hidden gem". In 1974, Augusta, Georgia, turned to Pei and his firm for downtown revitalization. From the plan, the Chamber of Commerce building and Bicentennial Park were completed. In 1976, Pei designed a distinctive modern penthouse that was added to the roof of architect William Lee Stoddart's historic Lamar Building, designed in 1916. The penthouse is a modern take on a pyramid, predating Pei's more famous Louvre Pyramid. It has been criticized by architectural critic James Howard Kunstler as an "Eyesore of the Month", with Kunstler comparing it to Darth Vader's helmet. In 1980, he and his company designed the Augusta Civic Center, now known as the James Brown Arena. Kennedy's assassination led indirectly to another commission for Pei's firm. In 1964 the acting mayor of Dallas, Erik Jonsson, began working to change the community's image. Dallas was known and disliked as the city where the president had been killed, but Jonsson began a program designed to initiate a community renewal. One of the goals was a new city hall, which could be a "symbol of the people". Jonsson, a co-founder of Texas Instruments, learned about Pei from his associate Cecil Howard Green, who had recruited the architect for MIT's Earth Sciences building. Pei's approach to the new Dallas City Hall mirrored that of other projects; he surveyed the surrounding area and worked to make the building fit. In the case of Dallas, he spent days meeting with residents of the city and was impressed by their civic pride. He also found that the skyscrapers of the downtown business district dominated the skyline, and sought to create a building which could face the tall buildings and represent the importance of the public sector. He spoke of creating "a public-private dialogue with the commercial high-rises". 
Working with his associate Theodore Musho, Pei developed a design centered on a building with a top much wider than the bottom; the facade leans at an angle of 34 degrees. A plaza stretches out before the building, and a series of support columns holds it up. It was influenced by Le Corbusier's High Court building in Chandigarh, India; Pei sought to use the significant overhang to unify building and plaza. The project cost much more than initially expected, and took 11 years. Revenue was secured in part by including a subterranean parking garage. The interior of the city hall is large and spacious; windows in the ceiling above the eighth floor fill the main space with light. The city of Dallas received the building well, and a local television news crew found unanimous approval of the new city hall when it officially opened to the public in 1978. Pei himself considered the project a success, even as he worried about the arrangement of its elements. He said: "It's perhaps stronger than I would have liked; it's got more strength than finesse." He felt that his relative lack of experience left him without the necessary design tools to refine his vision, but the community liked the city hall enough to invite him back. Over the years he went on to design five additional buildings in the Dallas area. While Pei and Musho were coordinating the Dallas project, their associate Henry Cobb had taken the helm for a commission in Boston. John Hancock Insurance chairman Robert Slater hired I. M. Pei & Partners to design a building that could overshadow the Prudential Tower, erected by their rival. After the firm's first plan was discarded due to a need for more office space, Cobb developed a new plan around a towering parallelogram, slanted away from the Trinity Church and accented by a wedge cut into each narrow side. To minimize the visual impact, the building was covered in large reflective glass panels; Cobb said this would make the building a "background and foil" to the older structures around it. When the Hancock Tower was finished in 1976, it was the tallest building in New England. Serious issues of execution became evident in the tower almost immediately. Many glass panels fractured in a windstorm during construction in 1973. Some detached and fell to the ground, causing no injuries but sparking concern among Boston residents. In response, the entire tower was reglazed with smaller panels. This significantly increased the cost of the project. Hancock sued the glass manufacturers, Libbey-Owens-Ford, as well as I. M. Pei & Partners, for submitting plans that were "not good and workmanlike". LOF countersued Hancock for defamation, accusing Pei's firm of poor use of their materials; I. M. Pei & Partners sued LOF in return. All three companies settled out of court in 1981. The project became an albatross for Pei's firm. Pei himself refused to discuss it for many years. The pace of new commissions slowed and the firm's architects began looking overseas for opportunities. Cobb worked in Australia and Pei took on jobs in Singapore, Iran, and Kuwait. Although it was a difficult time for everyone involved, Pei later reflected with patience on the experience. "Going through this trial toughened us," he said. "It helped to cement us as partners; we did not give up on each other." In the mid-1960s, directors of the National Gallery of Art in Washington, D.C., declared the need for a new building. 
Paul Mellon, a primary benefactor of the gallery and a member of its building committee, set to work with his assistant J. Carter Brown (who became gallery director in 1969) to find an architect. The new structure would be located to the east of the original building, and tasked with two functions: offer a large space for public appreciation of various popular collections; and house office space as well as archives for scholarship and research. They likened the scope of the new facility to the Library of Alexandria. After inspecting Pei's work at the Des Moines Art Center in Iowa and the Johnson Museum at Cornell University, they offered him the commission. Pei took to the project with vigor, and set to work with two young architects he had recently recruited to the firm, William Pedersen and Yann Weymouth. Their first obstacle was the unusual shape of the building site, a trapezoid of land at the intersection of Constitution and Pennsylvania Avenues. Inspiration struck Pei in 1968, when he scrawled a rough diagram of two triangles on a scrap of paper. The larger building would be the public gallery; the smaller would house offices and archives. This triangular shape became a singular vision for the architect. As the date for groundbreaking approached, Pedersen suggested to his boss that a slightly different approach would make construction easier. Pei simply smiled and said: "No compromises." The growing popularity of art museums presented unique challenges to the architecture. Mellon and Pei both expected large crowds of people to visit the new building, and they planned accordingly. To this end, he designed a large lobby roofed with enormous skylights. Individual galleries are located along the periphery, allowing visitors to return after viewing each exhibit to the spacious main room. A large mobile sculpture by American artist Alexander Calder was later added to the lobby. Pei hoped the lobby would be exciting to the public in the same way as the central room of the Guggenheim Museum in New York. The modern museum, he said later, "must pay greater attention to its educational responsibility, especially to the young". Materials for the building's exterior were chosen with careful precision. To match the look and texture of the original gallery's marble walls, builders re-opened the quarry in Knoxville, Tennessee, from which the first batch of stone had been harvested. The project even found and hired Malcolm Rice, a quarry supervisor who had overseen the original 1941 gallery project. The marble was cut into three-inch-thick panels and arranged over the concrete foundation, with darker blocks at the bottom and lighter blocks on top. The East Building was honored on 30 May 1978, two days before its public unveiling, with a black-tie party attended by celebrities, politicians, benefactors, and artists. When the building opened, popular opinion was enthusiastic. Large crowds visited the new museum, and critics generally voiced their approval. Ada Louise Huxtable wrote in "The New York Times" that Pei's building was "a palatial statement of the creative accommodation of contemporary art and architecture". The sharp angle of the smaller building has been a particular note of praise for the public; over the years it has become stained and worn from the hands of visitors. Some critics disliked the unusual design, however, and criticized the reliance on triangles throughout the building. Others took issue with the large main lobby, particularly its attempt to lure casual visitors. 
In his review for "Artforum", critic Richard Hennessy described a "shocking fun-house atmosphere" and "aura of ancient Roman patronage". One of the earliest and most vocal critics, however, came to appreciate the new gallery once he saw it in person. Allan Greenberg had scorned the design when it was first unveiled, but wrote later to J. Carter Brown: "I am forced to admit that you are right and I was wrong! The building is a masterpiece." Starting in 2005, the joints attaching the marble panels to the walls began to show signs of strain, creating a risk of panels falling off the building onto the public below. In 2008 officials decided that it would be necessary to remove and reinstall all the panels. The project was scheduled for completion in 2013. After US President Richard Nixon made his famous 1972 visit to China, a wave of exchanges took place between the two countries. One of these was a delegation of the American Institute of Architects in 1974, which Pei joined. It was his first trip back to China since leaving in 1935. He was favorably received, returned the welcome with positive comments, and a series of lectures ensued. Pei noted in one lecture that since the 1950s Chinese architects had been content to imitate Western styles; he urged his audience to search China's native traditions for inspiration. In 1978, Pei was asked to initiate a project for his home country. After surveying a number of different locations, Pei fell in love with a valley that had once served as an imperial garden and hunting preserve known as Fragrant Hills. The site housed a decrepit hotel; Pei was invited to tear it down and build a new one. As usual, he approached the project by carefully considering the context and purpose. He considered modernist styles inappropriate for the setting. Thus, he said, it was necessary to find "a third way". After visiting his ancestral home in Suzhou, Pei created a design based on some simple but nuanced techniques he admired in traditional residential Chinese buildings. Among these were abundant gardens, integration with nature, and consideration of the relationship between enclosure and opening. Pei's design included a large central atrium covered by glass panels that functioned much like the large central space in his East Building of the National Gallery. Openings of various shapes in walls invited guests to view the natural scenery beyond. Younger Chinese who had hoped the building would exhibit some of the Cubist flavor for which Pei had become known were disappointed, but the new hotel found more favor with government officials and architects. The hotel, with 325 guest rooms and a four-story central atrium, was designed to fit perfectly into its natural setting. The trees in the area were of special concern, and particular care was taken to cut down as few as possible. He worked with an expert from Suzhou to preserve and renovate a water maze from the original hotel, one of only five in the country. Pei was also meticulous about the arrangement of items in the garden behind the hotel; he even insisted on transporting rocks from a location in southwest China to suit the natural aesthetic. An associate of Pei's said later that he never saw the architect so involved in a project. During construction, a series of mistakes, combined with the nation's lack of construction technology, strained relations between architects and builders. 
Whereas 200 or so workers might have been used for a similar building in the US, the Fragrant Hill project employed over 3,000 workers. This was mostly because the construction company lacked the sophisticated machines used in other parts of the world. The problems continued for months, until Pei had an uncharacteristically emotional moment during a meeting with Chinese officials. He later explained that his actions included "shouting and pounding the table" in frustration. The design staff noticed a difference in the manner of work among the crew after the meeting. As the opening neared, however, Pei found the hotel still needed work. He began scrubbing floors with his wife and ordered his children to make beds and vacuum floors. The project's difficulties took an emotional and physical toll on the Pei family. The Fragrant Hill Hotel opened on 17 October 1982 but quickly fell into disrepair. A member of Pei's staff returned for a visit several years later and confirmed the dilapidated condition of the hotel. He and Pei attributed this to the country's general unfamiliarity with deluxe buildings. The Chinese architectural community at the time gave the structure little attention, as its interest centered on the work of American postmodernists such as Michael Graves. As the Fragrant Hill project neared completion, Pei began work on the Jacob K. Javits Convention Center in New York City, for which his associate James Freed served as lead designer. Hoping to create a vibrant community institution in what was then a run-down neighborhood on Manhattan's west side, Freed developed a glass-coated structure with an intricate space frame of interconnected metal rods and spheres. The convention center was plagued from the start by budget problems and construction blunders. City regulations forbade a general contractor from having final authority over the project, so architects and program manager Richard Kahan had to coordinate the wide array of builders, plumbers, electricians, and other workers. The forged steel globes to be used in the space frame came to the site with hairline cracks and other defects; 12,000 were rejected. These and other problems led to media comparisons with the disastrous Hancock Tower. One New York City official blamed Kahan for the difficulties, indicating that the building's architectural flourishes were responsible for delays and financial crises. The Javits Center opened on 3 April 1986, to a generally positive reception. During the inauguration ceremonies, however, neither Freed nor Pei was recognized for his role in the project. When François Mitterrand was elected President of France in 1981, he laid out an ambitious plan for a variety of construction projects. One of these was the renovation of the Louvre Museum. Mitterrand appointed a civil servant, Émile Biasini, to oversee it. After visiting museums in Europe and the United States, including the US National Gallery, he asked Pei to join the team. The architect made three secretive trips to Paris to determine the feasibility of the project; only one museum employee knew why he was there. Pei finally agreed that a reconstruction project was not only possible, but necessary for the future of the museum. He thus became the first foreign architect to work on the Louvre. The heart of the new design included not only a renovation of the "Cour Napoléon" in the midst of the buildings, but also a transformation of the interiors. 
Pei proposed a central entrance, not unlike the lobby of the National Gallery East Building, which would link the three major buildings. Below would be a complex of additional floors for research, storage, and maintenance purposes. At the center of the courtyard he designed a glass and steel pyramid, first proposed with the Kennedy Library, to serve as entrance and anteroom skylight. It was mirrored by another inverted pyramid underneath, to reflect sunlight into the room. These designs were partly an homage to the fastidious geometry of the famous French landscape architect André Le Nôtre (1613–1700). Pei also found the pyramid shape best suited for stable transparency, and considered it "most compatible with the architecture of the Louvre, especially with the faceted planes of its roofs". Biasini and Mitterrand liked the plans, but the scope of the renovation displeased Louvre director André Chabaud. He resigned from his post, complaining that the project was "unfeasible" and posed "architectural risks". The public also reacted harshly to the design, mostly because of the proposed pyramid. One critic called it a "gigantic, ruinous gadget"; another charged Mitterrand with "despotism" for inflicting Paris with the "atrocity". Pei estimated that 90 percent of Parisians opposed his design. "I received many angry glances in the streets of Paris," he said. Some condemnations carried nationalistic overtones. One opponent wrote: "I am surprised that one would go looking for a Chinese architect in America to deal with the historic heart of the capital of France." Soon, however, Pei and his team won the support of several key cultural icons, including the conductor Pierre Boulez and Claude Pompidou, widow of former French President Georges Pompidou, after whom another controversial museum was named. In an attempt to soothe public ire, Pei took a suggestion from then-mayor of Paris Jacques Chirac and placed a full-sized cable model of the pyramid in the courtyard. During the four days of its exhibition, an estimated 60,000 people visited the site. Some critics eased their opposition after witnessing the proposed scale of the pyramid. To minimize the impact of the structure, Pei demanded a method of glass production that resulted in clear panes. The pyramid was constructed at the same time as the subterranean levels below, which caused difficulties during the building stages. As they worked, construction teams came upon an abandoned set of rooms containing 25,000 historical items; these were incorporated into the rest of the structure to add a new exhibition zone. The new Louvre courtyard was opened to the public on 14 October 1988, and the Pyramid entrance was opened the following March. By this time, public opinion had softened on the new installation; a poll found a fifty-six percent approval rating for the pyramid, with twenty-three percent still opposed. The newspaper "Le Figaro" had vehemently criticized Pei's design, but later celebrated the tenth anniversary of its magazine supplement at the pyramid. Prince Charles of Britain surveyed the new site with curiosity, and declared it "marvelous, very exciting". A writer in "Le Quotidien de Paris" wrote: "The much-feared pyramid has become adorable." The experience was exhausting for Pei, but also rewarding. "After the Louvre," he said later, "I thought no project would be too difficult." The "Louvre Pyramid" has become Pei's most famous structure. 
The opening of the Louvre Pyramid coincided with four other projects on which Pei had been working, prompting architecture critic Paul Goldberger to declare 1989 "the year of Pei" in "The New York Times". It was also the year in which Pei's firm changed its name to Pei Cobb Freed & Partners, to reflect the increasing stature and prominence of his associates. At the age of seventy-two, Pei had begun thinking about retirement, but continued working long hours to see his designs come to fruition. One of the projects took Pei back to Dallas, Texas, to design the Morton H. Meyerson Symphony Center. The success of the city's performing artists, particularly the Dallas Symphony Orchestra then being led by conductor Eduardo Mata, led to interest by city leaders in creating a modern center for musical arts that could rival the best halls in Europe. The organizing committee contacted 45 architects, but at first Pei did not respond, thinking that his work on the Dallas City Hall had left a negative impression. One of his colleagues from that project, however, insisted that he meet with the committee. He did and, although it would be his first concert hall, the committee voted unanimously to offer him the commission. As one member put it: "We were convinced that we would get the world's greatest architect putting his best foot forward." The project presented a variety of specific challenges. Because its main purpose was the presentation of live music, the hall needed a design focused on acoustics first, then public access and exterior aesthetics. To this end, a professional sound technician was hired to design the interior. He proposed a shoebox auditorium, used in the acclaimed designs of top European symphony halls such as the Amsterdam Concertgebouw and Vienna Musikverein. Pei drew inspiration for his adjustments from the designs of the German architect Johann Balthasar Neumann, especially the Basilica of the Fourteen Holy Helpers. He also sought to incorporate some of the panache of the Paris Opéra designed by Charles Garnier. Pei's design placed the rigid shoebox at an angle to the surrounding street grid, connected at the north end to a long rectangular office building, and cut through the middle with an assortment of circles and cones. The design attempted to reproduce with modern features the acoustic and visual functions of traditional elements like filigree. The project was risky: its goals were ambitious and any unforeseen acoustic flaws would be virtually impossible to remedy after the hall's completion. Pei admitted that he did not completely know how everything would come together. "I can imagine only 60 percent of the space in this building," he said during the early stages. "The rest will be as surprising to me as to everyone else." As the project developed, costs rose steadily and some sponsors considered withdrawing their support. Billionaire tycoon Ross Perot made a donation of US$10 million, on the condition that the hall be named in honor of Morton H. Meyerson, the longtime patron of the arts in Dallas. The building opened and immediately garnered widespread praise, especially for its acoustics. After attending a week of performances in the hall, a music critic for "The New York Times" wrote an enthusiastic account of the experience and congratulated the architects. One of Pei's associates told him during a party before the opening that the symphony hall was "a very mature building"; he smiled and replied: "Ah, but did I have to wait this long?" 
A new offer had arrived for Pei from the Chinese government in 1982. With an eye toward the transfer of sovereignty of Hong Kong from the British in 1997, authorities in China sought Pei's aid on a new tower for the local branch of the Bank of China. The Chinese government was preparing for a new wave of engagement with the outside world and sought a tower to represent modernity and economic strength. Given the elder Pei's history with the bank before the Communist takeover, government officials visited the 89-year-old man in New York to gain approval for his son's involvement. Pei then spoke with his father at length about the proposal. Although the architect remained pained by his experience with Fragrant Hill, he agreed to accept the commission. The proposed site in Hong Kong's Central District was less than ideal; a tangle of highways lined it on three sides. The area had also been home to a headquarters for Japanese military police during World War II, and was notorious for prisoner torture. The small parcel of land made a tall tower necessary, and Pei had usually shied away from such projects; in Hong Kong especially, the skyscrapers lacked any real architectural character. Lacking inspiration and unsure of how to approach the building, Pei took a weekend vacation to the family home in Katonah, New York. There he found himself experimenting with a bundle of sticks until he happened upon a cascading sequence. The design that Pei developed for the Bank of China Tower was not only unique in appearance, but also sound enough to pass the city's rigorous standards for wind-resistance. The tower was planned around a visible truss structure, which distributed stress to the four corners of the base. Using the reflective glass that had become something of a trademark for him, Pei organized the facade around a series of boxed X shapes. At the top, he designed the roofs at sloping angles to match the rising aesthetic of the building. Some influential advocates of "feng shui" in Hong Kong and China criticized the design, and Pei and government officials responded with token adjustments. As the tower neared completion, Pei was shocked to witness the government's massacre of unarmed civilians at the Tiananmen Square protests of 1989. He wrote an opinion piece for "The New York Times" titled "China Won't Ever Be the Same", in which he said that the killings "tore the heart out of a generation that carries the hope for the future of the country". The massacre deeply disturbed his entire family, and he wrote that "China is besmirched." As the 1990s began, Pei transitioned into a role of decreased involvement with his firm. The staff had begun to shrink, and Pei wanted to dedicate himself to smaller projects allowing for more creativity. Before he made this change, however, he set to work on his last major project as active partner: The Rock and Roll Hall of Fame in Cleveland, Ohio. Considering his work on such bastions of high culture as the Louvre and US National Gallery, some critics were surprised by his association with what many considered a tribute to low culture. The sponsors of the hall, however, sought Pei for specifically this reason; they wanted the building to have an aura of respectability from the beginning. As in the past, Pei accepted the commission in part because of the unique challenge it presented. 
Using a glass wall for the entrance, similar in appearance to his Louvre pyramid, Pei coated the exterior of the main building in white metal, and placed a large cylinder on a narrow perch to serve as a performance space. The combination of off-centered wraparounds and angled walls was, Pei said, designed to provide "a sense of tumultuous youthful energy, rebelling, flailing about". The building opened in 1995, and was received with moderate praise. "The New York Times" called it "a fine building", but Pei was among those who felt disappointed with the results. The museum's beginnings in New York, combined with an unclear mission, left project leaders with a fuzzy understanding of precisely what was needed. Although the city of Cleveland benefited greatly from the new tourist attraction, Pei was unhappy with it. At the same time, Pei designed a new museum for Luxembourg, the "Musée d'art moderne Grand-Duc Jean", commonly known as the Mudam. Drawing from the original shape of the Fort Thüngen walls where the museum was located, Pei planned to remove a portion of the original foundation. Public resistance to the historical loss forced a revision of his plan, however, and the project was nearly abandoned. The size of the building was halved, and it was set back from the original wall segments to preserve the foundation. Pei was disappointed with the alterations, but remained involved in the building process even during construction. In 1995, Pei was hired to design an extension to the "Deutsches Historisches Museum", or German Historical Museum in Berlin. Returning to the challenge of the East Building of the US National Gallery, Pei worked to combine a modernist approach with a classical main structure. He described the glass cylinder addition as a "beacon", and topped it with a glass roof to allow plentiful sunlight inside. Pei had difficulty working with German government officials on the project; their utilitarian approach clashed with his passion for aesthetics. "They thought I was nothing but trouble", he said. Pei also worked at this time on two projects for a new Japanese religious movement called "Shinji Shumeikai". He was approached by the movement's spiritual leader, Kaishu Koyama, who impressed the architect with her sincerity and willingness to give him significant artistic freedom. One of the buildings was a bell tower, designed to resemble the "bachi" used when playing traditional instruments like the "shamisen". Pei was unfamiliar with the movement's beliefs, but explored them in order to represent something meaningful in the tower. As he said: "It was a search for the sort of expression that is not at all technical." The experience was rewarding for Pei, and he agreed immediately to work with the group again. The new project was the Miho Museum, to display Koyama's collection of tea ceremony artifacts. Pei visited the site in Shiga Prefecture, and during their conversations convinced Koyama to expand her collection. She conducted a global search and acquired more than 300 items showcasing the history of the Silk Road. One major challenge was the approach to the museum. The Japanese team proposed a winding road up the mountain, not unlike the approach to the NCAR building in Colorado. Instead, Pei ordered a hole cut through a nearby mountain, connected to a major road via a bridge suspended from ninety-six steel cables and supported by a post set into the mountain. The museum itself was built into the mountain, with 80 percent of the building underground. 
When designing the exterior, Pei borrowed from the tradition of Japanese temples, particularly those found in nearby Kyoto. He created a concise spaceframe wrapped in French limestone and covered with a glass roof. Pei also oversaw specific decorative details, including a bench in the entrance lobby, carved from a 350-year-old "keyaki" tree. Because of Koyama's considerable wealth, money was rarely considered an obstacle; estimates at the time of completion put the cost of the project at US$350 million. During the first decade of the 2000s, Pei designed a variety of buildings, including the Suzhou Museum near his childhood home. He also designed the Museum of Islamic Art in Doha, Qatar, at the request of the Al-Thani Family. Although it was originally planned for the corniche road along Doha Bay, Pei convinced project coordinators to build a new island to provide the needed space. He then spent six months touring the region and surveying mosques in Spain, Syria, and Tunisia. He was especially impressed with the elegant simplicity of the Mosque of Ibn Tulun in Cairo. Once again, Pei sought to combine new design elements with the classical aesthetic most appropriate for the location of the building. The rectangular boxes rotate evenly to create a subtle movement, with small arched windows cut into the limestone exterior at regular intervals. The museum's coordinators were pleased with the project; its official website describes its "true splendour unveiled in the sunlight", and speaks of "the shades of colour and the interplay of shadows paying tribute to the essence of Islamic architecture". The Macao Science Center in Macau was designed by Pei Partnership Architects in association with I. M. Pei. The project to build the science center was conceived in 2001 and construction started in 2006. The center was completed in 2009 and opened by the Chinese President Hu Jintao. The main part of the building is a distinctive conical shape with a spiral walkway and large atrium inside, similar to the Solomon R. Guggenheim Museum in New York. Galleries lead off the walkway, mainly consisting of interactive exhibits aimed at science education. The building is in a prominent position by the sea and is now a landmark of Macau. Pei's style is described as thoroughly modernist, with significant cubist themes. He is known for combining traditional architectural elements with progressive designs based on simple geometric patterns. As one critic writes: "Pei has been aptly described as combining a classical sense of form with a contemporary mastery of method." In 2000, biographer Carter Wiseman called Pei "the most distinguished member of his Late-Modernist generation still in practice". At the same time, Pei himself rejects simple dichotomies of architectural trends. He once said: "The talk about modernism versus post-modernism is unimportant. It's a side issue. An individual building, the style in which it is going to be designed and built, is not that important. The important thing, really, is the community. How does it affect life?" Pei's work is celebrated throughout the world of architecture. His colleague John Portman once told him: "Just once, I'd like to do something like the East Building." But this originality does not always bring large financial reward; as Pei replied to the successful architect: "Just once, I'd like to make the kind of money you do." His concepts, moreover, are too individualized and dependent on context to give rise to a particular school of design. 
Pei refers to his own "analytical approach" when explaining the lack of a "Pei School". "For me," he said, "the important distinction is between a stylistic approach to the design; and an analytical approach giving the process of due consideration to time, place, and purpose ... My analytical approach requires a full understanding of the three essential elements ... to arrive at an ideal balance among them." In the words of his biographer, Pei has won "every award of any consequence in his art", including the Arnold Brunner Award from the National Institute of Arts and Letters (1963), the Gold Medal for Architecture from the American Academy of Arts and Letters (1979), the AIA Gold Medal (1979), the first "Praemium Imperiale" for Architecture from the Japan Art Association (1989), the Lifetime Achievement Award from the Cooper-Hewitt, National Design Museum, the 1998 Edward MacDowell Medal in the Arts, and the 2010 Royal Gold Medal from the Royal Institute of British Architects. In 1983 he was awarded the Pritzker Prize, sometimes called the Nobel Prize of architecture. In its citation, the jury said: "Ieoh Ming Pei has given this century some of its most beautiful interior spaces and exterior forms ... His versatility and skill in the use of materials approach the level of poetry." The prize was accompanied by a US$100,000 award, which Pei used to create a scholarship for Chinese students to study architecture in the US, on the condition that they return to China to work. When Pei received the 2003 Henry C. Turner Prize from the National Building Museum, museum board chair Carolyn Brody praised his impact on construction innovation: "His magnificent designs have challenged engineers to devise innovative structural solutions, and his exacting expectations for construction quality have encouraged contractors to achieve high standards." In December 1992, Pei was awarded the Presidential Medal of Freedom by President George H. W. Bush. Pei's wife of over seventy years, Eileen Loo, predeceased him on 20 June 2014. They had three sons, T'ing Chung (1946–2003), Chien Chung (b. 1946) and Li Chung (b. 1949), and a daughter, Liane (b. 1960). T'ing Chung was an urban planner and alumnus of his father's "alma mater" MIT and Harvard. Chien Chung and Li Chung, who are both Harvard Graduate School of Design alumni, founded and run Pei Partnership Architects. Liane is a lawyer. Pei celebrated his 100th birthday on 26 April 2017. Intelligent design Intelligent design (ID) is a religious argument for the existence of God, presented by its proponents as "an evidence-based scientific theory about life's origins", though it has been discredited as pseudoscience. Proponents claim that "certain features of the universe and of living things are best explained by an intelligent cause, not an undirected process such as natural selection." ID is a form of creationism that lacks empirical support and offers no testable or tenable hypotheses, so is not science. The leading proponents of ID are associated with the Discovery Institute, a fundamentalist Christian and politically conservative think tank based in the United States. Though the phrase "intelligent design" had featured previously in theological discussions of the design argument, the first publication of the term "intelligent design" in its present use as an alternative term for creationism was in "Of Pandas and People", a 1989 creationist textbook intended for high school biology classes. 
The term was substituted into drafts of the book, directly replacing references to "creation science" and "creationism", after the 1987 United States Supreme Court's "Edwards v. Aguillard" decision, which barred the teaching of "creation science" in public schools on constitutional grounds. From the mid-1990s, the intelligent design movement (IDM), supported by the Discovery Institute, advocated inclusion of intelligent design in public school biology curricula. This led to the 2005 "Kitzmiller v. Dover Area School District" trial in which U.S. District Judge John E. Jones III found that intelligent design was not science, that it "cannot uncouple itself from its creationist, and thus religious, antecedents," and that the school district's promotion of it therefore violated the Establishment Clause of the First Amendment to the United States Constitution. ID presents two main arguments against evolutionary explanations: irreducible complexity and specified complexity. These arguments assert that certain features (biological and informational, respectively) are too complex to be the result of natural processes. As a positive argument against evolution, ID proposes an analogy between natural systems and human artifacts, a version of the theological argument from design for the existence of God. ID proponents then conclude by analogy that the complex features, as defined by ID, are evidence of design. Detailed scientific examination has rebutted the claims that evolutionary explanations are inadequate, and this premise of intelligent design—that evidence against evolution constitutes evidence for design—is a false dichotomy. It is asserted that ID challenges the methodological naturalism inherent in modern science though proponents concede that they have yet to produce a scientific theory. By 1910 evolution was not a topic of major religious controversy in America, but in the 1920s the Fundamentalist–Modernist Controversy in theology resulted in Fundamentalist Christian opposition to teaching evolution, and the origins of modern creationism. Teaching of evolution was effectively suspended in U.S. public schools until the 1960s, and when evolution was then reintroduced into the curriculum, there was a series of court cases in which attempts were made to get creationism taught alongside evolution in science classes. Young Earth creationists (YEC) promoted creation science as "an alternative scientific explanation of the world in which we live". This frequently invoked the argument from design to explain complexity in nature as demonstrating the existence of God. The argument from design, the teleological argument or "argument from intelligent design", has been advanced in theology for centuries. It can be summarised briefly as "Wherever complex design exists, there must have been a designer; nature is complex; therefore nature must have had an intelligent designer." Thomas Aquinas presented it in his fifth proof of God's existence as a syllogism. In 1802, William Paley's "Natural Theology" presented examples of intricate purpose in organisms. His version of the watchmaker analogy argued that, in the same way that a watch has evidently been designed by a craftsman, complexity and adaptation seen in nature must have been designed, and the perfection and diversity of these designs shows the designer to be omnipotent, the Christian God. 
Like creation science, intelligent design centers on Paley's religious argument from design, but while Paley's natural theology was open to deistic design through God-given laws, intelligent design seeks scientific confirmation of repeated miraculous interventions in the history of life. Creation science prefigured the intelligent design arguments of irreducible complexity, even featuring the bacterial flagellum. In the United States, attempts to introduce creation science in schools led to court rulings that it is religious in nature, and thus cannot be taught in public school science classrooms. Intelligent design is also presented as science, and shares other arguments with creation science but avoids literal Biblical references to such things as the Flood story from the Book of Genesis or using Bible verses to age the Earth. Barbara Forrest writes that the intelligent design movement began in 1984 with the book "The Mystery of Life's Origin: Reassessing Current Theories", co-written by creationist Charles B. Thaxton, a chemist, with two other authors, and published by Jon A. Buell's Foundation for Thought and Ethics. Thaxton held a conference in 1988, "Sources of Information Content in DNA", which attracted creationists such as Stephen C. Meyer. In March 1986, a review by Meyer used information theory to suggest that messages transmitted by DNA in the cell show "specified complexity" specified by intelligence, and must have originated with an intelligent agent. In November of that year, Thaxton described his reasoning as a more sophisticated form of Paley's argument from design. At the "Sources of Information Content in DNA" conference in 1988, he said that his intelligent cause view was compatible with both metaphysical naturalism and supernaturalism. Intelligent design avoids identifying or naming the intelligent designer—it merely states that one (or more) must exist—but leaders of the movement have said the designer is the Christian God. Whether this lack of specificity about the designer's identity in public discussions is a genuine feature of the concept, or just a posture taken to avoid alienating those who would separate religion from the teaching of science, has been a matter of great debate between supporters and critics of intelligent design. The "Kitzmiller v. Dover Area School District" court ruling held the latter to be the case. Since the Middle Ages, discussion of the religious "argument from design" or "teleological argument" in theology, with its concept of "intelligent design", has persistently referred to the theistic Creator God. Although ID proponents chose this provocative label for their proposed alternative to evolutionary explanations, they have de-emphasized their religious antecedents and denied that ID is natural theology, while still presenting ID as supporting the argument for the existence of God. While intelligent design proponents have pointed out past examples of the phrase "intelligent design" that they said were not creationist and faith-based, they have failed to show that these usages had any influence on those who introduced the label in the intelligent design movement. Variations on the phrase appeared in Young Earth creationist publications: a 1967 book co-written by Percival Davis referred to "design according to which basic organisms were created". In 1970, A. E. 
Wilder-Smith published "The Creation of Life: A Cybernetic Approach to Evolution" which defended Paley's design argument with computer calculations of the improbability of genetic sequences, which he said could not be explained by evolution but required "the abhorred necessity of divine intelligent activity behind nature", and that "the same problem would be expected to beset the relationship between the designer behind nature and the intelligently designed part of nature known as man." In a 1984 article as well as in his affidavit to "Edwards v. Aguillard", Dean H. Kenyon defended creation science by stating that "biomolecular systems require intelligent design and engineering know-how", citing Wilder-Smith. Creationist Richard B. Bliss used the phrase "creative design" in "Origins: Two Models: Evolution, Creation" (1976), and in "Origins: Creation or Evolution" (1988) wrote that "while evolutionists are trying to find non-intelligent ways for life to occur, the creationist insists that an intelligent design must have been there in the first place." The first systematic use of the term, defined in a glossary and claimed to be other than creationism, was in "Of Pandas and People", co-authored by Davis and Kenyon. The most common modern use of the words "intelligent design" as a term intended to describe a field of inquiry began after the United States Supreme Court ruled in 1987 in the case of "Edwards v. Aguillard" that it is unconstitutional for a state to require the teaching of creationism in public school science curricula. A Discovery Institute report says that Charles B. Thaxton, editor of "Pandas", had picked the phrase up from a NASA scientist, and thought, "That's just what I need, it's a good engineering term." In drafts of the book, over one hundred uses of the root word "creation", such as "creationism" and "Creation Science", were changed, almost without exception, to "intelligent design", while "creationists" was changed to "design proponents" or, in one instance, "cdesign proponentsists". In June 1988, Thaxton held a conference titled "Sources of Information Content in DNA" in Tacoma, Washington, and in December decided to use the label "intelligent design" for his new creationist movement. Stephen C. Meyer was at the conference, and later recalled that "The term "intelligent design" came up..." "Of Pandas and People" was published in 1989, and in addition to including all the current arguments for ID, was the first book to make systematic use of the terms "intelligent design" and "design proponents" as well as the phrase "design theory", defining the term "intelligent design" in a glossary and representing it as not being creationism. It thus represents the start of the modern intelligent design movement. "Intelligent design" was the most prominent of around fifteen new terms it introduced as a new lexicon of creationist terminology to oppose evolution without using religious language. It was the first place where the phrase "intelligent design" appeared in its primary present use, as stated both by its publisher Jon A. Buell, and by William A. Dembski in his expert witness report for "Kitzmiller v. Dover Area School District". The National Center for Science Education (NCSE) has criticized the book for presenting all of the basic arguments of intelligent design proponents and being actively promoted for use in public schools before any research had been done to support these arguments. 
Although presented as a scientific textbook, philosopher of science Michael Ruse considers the contents "worthless and dishonest". An American Civil Liberties Union lawyer described it as a political tool aimed at students who did not "know science or understand the controversy over evolution and creationism". One of the authors of the science framework used by California schools, Kevin Padian, condemned it for its "sub-text", "intolerance for honest science" and "incompetence". The term "irreducible complexity" was introduced by biochemist Michael Behe in his 1996 book "Darwin's Black Box", though he had already described the concept in his contributions to the 1993 revised edition of "Of Pandas and People". Behe defines it as "a single system which is composed of several well-matched interacting parts that contribute to the basic function, wherein the removal of any one of the parts causes the system to effectively cease functioning". Behe uses the analogy of a mousetrap to illustrate this concept. A mousetrap consists of several interacting pieces—the base, the catch, the spring and the hammer—all of which must be in place for the mousetrap to work. Removal of any one piece destroys the function of the mousetrap. Intelligent design advocates assert that natural selection could not create irreducibly complex systems, because the selectable function is present only when all parts are assembled. Behe argued that irreducibly complex biological mechanisms include the bacterial flagellum of "E. coli", the blood clotting cascade, cilia, and the adaptive immune system. Critics point out that the irreducible complexity argument assumes that the necessary parts of a system have always been necessary and therefore could not have been added sequentially. They argue that something that is at first merely advantageous can later become necessary as other components change. Furthermore, they argue, evolution often proceeds by altering preexisting parts or by removing them from a system, rather than by adding them. This is sometimes called the "scaffolding objection" by an analogy with scaffolding, which can support an "irreducibly complex" building until it is complete and able to stand on its own. Behe has acknowledged using "sloppy prose", and that his "argument against Darwinism does not add up to a logical proof." Irreducible complexity has remained a popular argument among advocates of intelligent design; in the Dover trial, the court held that "Professor Behe's claim for irreducible complexity has been refuted in peer-reviewed research papers and has been rejected by the scientific community at large." In 1986, Charles B. Thaxton, a physical chemist and creationist, used the term "specified complexity" from information theory when claiming that messages transmitted by DNA in the cell were specified by intelligence, and must have originated with an intelligent agent. The intelligent design concept of "specified complexity" was developed in the 1990s by mathematician, philosopher, and theologian William A. Dembski. Dembski states that when something exhibits specified complexity (i.e., is both complex and "specified", simultaneously), one can infer that it was produced by an intelligent cause (i.e., that it was designed) rather than being the result of natural processes. He provides the following examples: "A single letter of the alphabet is specified without being complex. A long sentence of random letters is complex without being specified. A Shakespearean sonnet is both complex and specified." 
He states that details of living things can be similarly characterized, especially the "patterns" of molecular sequences in functional biological molecules such as DNA. Dembski defines complex specified information (CSI) as anything with a less than 1 in 10^150 chance of occurring by (natural) chance. Critics say that this renders the argument a tautology: complex specified information cannot occur naturally because Dembski has defined it thus, so the real question becomes whether or not CSI actually exists in nature. The conceptual soundness of Dembski's specified complexity/CSI argument has been discredited in the scientific and mathematical communities. Specified complexity has yet to be shown to have wide applications in other fields, as Dembski asserts. John Wilkins and Wesley R. Elsberry characterize Dembski's "explanatory filter" as "eliminative" because it eliminates explanations sequentially: first regularity, then chance, finally defaulting to design. They argue that this procedure is flawed as a model for scientific inference because the asymmetric way it treats the different possible explanations renders it prone to making false conclusions. Richard Dawkins, another critic of intelligent design, argues in "The God Delusion" (2006) that allowing for an intelligent designer to account for unlikely complexity only postpones the problem, as such a designer would need to be at least as complex. Other scientists have argued that evolution through selection is better able to explain the observed complexity, as is evident from the use of selective evolution to design certain electronic, aeronautic and automotive systems that are considered problems too complex for human "intelligent designers". Intelligent design proponents have also occasionally appealed to broader teleological arguments outside of biology, most notably an argument based on the fine-tuning of universal constants that make matter and life possible and which are argued not to be solely attributable to chance. These include the values of fundamental physical constants, the relative strength of nuclear forces, electromagnetism, and gravity between fundamental particles, as well as the ratios of masses of such particles. Intelligent design proponent and Center for Science and Culture fellow Guillermo Gonzalez argues that if any of these values were even slightly different, the universe would be dramatically different, making it impossible for many chemical elements and features of the Universe, such as galaxies, to form. Thus, proponents argue, an intelligent designer of life was needed to ensure that the requisite features were present to achieve that particular outcome. Scientists have generally responded that these arguments are poorly supported by existing evidence. Victor J. Stenger and other critics say both intelligent design and the weak form of the anthropic principle are essentially tautological; in his view, these arguments amount to the claim that life is able to exist because the Universe is able to support life. The claim of the improbability of a life-supporting universe has also been criticized as an argument from lack of imagination for assuming no other forms of life are possible. Life as we know it might not exist if things were different, but a different sort of life might exist in its place. 
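The 1 in 10^150 threshold is what Dembski calls the "universal probability bound". As a minimal sketch of how he arrives at it (the derivation is his published one, summarized here for context; the figures are his estimates and do not appear in this article), he multiplies an estimate of the number of elementary particles in the observable universe by the maximum number of elementary physical events per second and by a generous upper bound on the age of the universe in seconds:

\[
10^{80} \times 10^{45} \times 10^{25} = 10^{150},
\qquad \text{so an event counts as CSI only if } P(\text{event arises by natural chance}) < 10^{-150}.
\]

On this reading, the tautology criticism above amounts to the point that the bound settles nothing by itself: the dispute is over whether any biological feature can actually be shown to fall below it under all natural processes, not just under a single uniform-chance model.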
A number of critics also suggest that many of the stated variables appear to be interconnected and that calculations by mathematicians and physicists indicate that the emergence of a universe similar to ours is quite probable. The contemporary intelligent design movement formulates its arguments in secular terms and intentionally avoids identifying the intelligent agent (or agents) it posits. Although proponents do not state that God is the designer, the designer is often implicitly hypothesized to have intervened in a way that only a god could. Dembski, in "The Design Inference" (1998), speculates that an alien culture could fulfill these requirements. "Of Pandas and People" proposes that SETI illustrates an appeal to intelligent design in science. In 2000, philosopher of science Robert T. Pennock suggested the Raëlian UFO religion as a real-life example of an extraterrestrial intelligent designer view that "make[s] many of the same bad arguments against evolutionary theory as creationists". The authoritative description of intelligent design, however, explicitly states that the "Universe" displays features of having been designed. Acknowledging the paradox, Dembski concludes that "no intelligent agent who is strictly physical could have presided over the origin of the universe or the origin of life." The leading proponents have made statements to their supporters that they believe the designer to be the Christian God, to the exclusion of all other religions. Beyond the debate over whether intelligent design is scientific, a number of critics argue that existing evidence makes the design hypothesis appear unlikely, irrespective of its status in the world of science. For example, Jerry Coyne asks why a designer would "give us a pathway for making vitamin C, but then destroy it by disabling one of its enzymes" (see pseudogene) and why a designer would not "stock oceanic islands with reptiles, mammals, amphibians, and freshwater fish, despite the suitability of such islands for these species". Coyne also points to the fact that "the flora and fauna on those islands resemble that of the nearest mainland, even when the environments are very different" as evidence that species were not placed there by a designer. Previously, in "Darwin's Black Box", Behe had argued that we are simply incapable of understanding the designer's motives, so such questions cannot be answered definitively. Odd designs could, for example, "...have been placed there by the designer for a reason—for artistic reasons, for variety, to show off, for some as-yet-undetected practical purpose, or for some unguessable reason—or they might not." Coyne responds that in light of the evidence, "either life resulted not from intelligent design, but from evolution; or the intelligent designer is a cosmic prankster who designed everything to make it look as though it had evolved." Intelligent design proponents such as Paul Nelson avoid the problem of poor design in nature by insisting that we have simply failed to understand the perfection of the design. Behe cites Paley as his inspiration, but he differs from Paley's expectation of a perfect Creation and proposes that designers do not necessarily produce the best design they can. Behe suggests that, like a parent not wanting to spoil a child with extravagant toys, the designer can have multiple motives for not giving priority to excellence in engineering. 
He says that "Another problem with the argument from imperfection is that it critically depends on a psychoanalysis of the unidentified designer. Yet the reasons that a designer would or would not do anything are virtually impossible to know unless the designer tells you specifically what those reasons are." This reliance on inexplicable motives of the designer makes intelligent design scientifically untestable. Retired UC Berkeley law professor, author and intelligent design advocate Phillip E. Johnson puts forward a core definition that the designer creates for a purpose, giving the example that in his view AIDS was created to punish immorality and is not caused by HIV, but such motives cannot be tested by scientific methods. Asserting the need for a designer of complexity also raises the question "What designed the designer?" Intelligent design proponents say that the question is irrelevant to or outside the scope of intelligent design. Richard Wein counters that "...scientific explanations often create new unanswered questions. But, in assessing the value of an explanation, these questions are not irrelevant. They must be balanced against the improvements in our understanding which the explanation provides. Invoking an unexplained being to explain the origin of other beings (ourselves) is little more than question-begging. The new question raised by the explanation is as problematic as the question which the explanation purports to answer." Richard Dawkins sees the assertion that the designer does not need to be explained as a thought-terminating cliché. In the absence of observable, measurable evidence, the very question "What designed the designer?" leads to an infinite regression from which intelligent design proponents can only escape by resorting to religious creationism or logical contradiction. The intelligent design movement is a direct outgrowth of the creationism of the 1980s. The scientific and academic communities, along with a U.S. federal court, view intelligent design as either a form of creationism or as a direct descendant that is closely intertwined with traditional creationism; and several authors explicitly refer to it as "intelligent design creationism". The movement is headquartered in the Center for Science and Culture, established in 1996 as the creationist wing of the Discovery Institute to promote a religious agenda calling for broad social, academic and political changes. The Discovery Institute's intelligent design campaigns have been staged primarily in the United States, although efforts have been made in other countries to promote intelligent design. Leaders of the movement say intelligent design exposes the limitations of scientific orthodoxy and of the secular philosophy of naturalism. Intelligent design proponents allege that science should not be limited to naturalism and should not demand the adoption of a naturalistic philosophy that dismisses out-of-hand any explanation that includes a supernatural cause. The overall goal of the movement is to "reverse the stifling dominance of the materialist worldview" represented by the theory of evolution in favor of "a science consonant with Christian and theistic convictions". Phillip E. Johnson stated that the goal of intelligent design is to cast creationism as a scientific concept. All leading intelligent design proponents are fellows or staff of the Discovery Institute and its Center for Science and Culture. 
Nearly all intelligent design concepts and the associated movement are the products of the Discovery Institute, which guides the movement and follows its wedge strategy while conducting its "Teach the Controversy" campaign and its other related programs. Leading intelligent design proponents have made conflicting statements regarding intelligent design. In statements directed at the general public, they say intelligent design is not religious; when addressing conservative Christian supporters, they state that intelligent design has its foundation in the Bible. Recognizing the need for support, the Institute affirms its Christian, evangelistic orientation. Barbara Forrest, an expert who has written extensively on the movement, describes this as being due to the Discovery Institute's obfuscating its agenda as a matter of policy. She has written that the movement's "activities betray an aggressive, systematic agenda for promoting not only intelligent design creationism, but the religious worldview that undergirds it." Although arguments for intelligent design by the intelligent design movement are formulated in secular terms and intentionally avoid positing the identity of the designer, the majority of principal intelligent design advocates are publicly religious Christians who have stated that, in their view, the designer proposed in intelligent design is the Christian conception of God. Stuart Burgess, Phillip E. Johnson, William A. Dembski, and Stephen C. Meyer are evangelical Protestants; Michael Behe is a Roman Catholic; and Jonathan Wells is a member of the Unification Church. Non-Christian proponents include David Klinghoffer, who is Jewish, Michael Denton and David Berlinski, who are agnostic, and Muzaffar Iqbal, a Pakistani-Canadian Muslim. Phillip E. Johnson has stated that cultivating ambiguity by employing secular language in arguments that are carefully crafted to avoid overtones of theistic creationism is a necessary first step for ultimately reintroducing the Christian concept of God as the designer. Johnson explicitly calls for intelligent design proponents to obfuscate their religious motivations so as to avoid having intelligent design identified "as just another way of packaging the Christian evangelical message." Johnson emphasizes that "...the first thing that has to be done is to get the Bible out of the discussion. ...This is not to say that the biblical issues are unimportant; the point is rather that the time to address them will be after we have separated materialist prejudice from scientific fact." The strategy of deliberately disguising the religious intent of intelligent design has been described by William A. Dembski in "The Design Inference". In this work, Dembski lists a god or an "alien life force" as two possible options for the identity of the designer; however, in his book "Intelligent Design: The Bridge Between Science and Theology" (1999), Dembski ties intelligent design directly to Christian theology. Dembski also stated, "ID is part of God's general revelation [...] Not only does intelligent design rid us of this ideology [materialism], which suffocates the human spirit, but, in my personal experience, I've found that it opens the path for people to come to Christ." Both Johnson and Dembski cite the Bible's Gospel of John as the foundation of intelligent design. 
Barbara Forrest contends such statements reveal that leading proponents see intelligent design as essentially religious in nature, not merely a scientific concept that has implications with which their personal religious beliefs happen to coincide. She writes that the leading proponents of intelligent design are closely allied with the ultra-conservative Christian Reconstructionism movement. She lists connections of (current and former) Discovery Institute Fellows Phillip E. Johnson, Charles B. Thaxton, Michael Behe, Richard Weikart, Jonathan Wells and Francis J. Beckwith to leading Christian Reconstructionist organizations, and the extent of the funding provided the Institute by Howard Ahmanson, Jr., a leading figure in the Reconstructionist movement. Not all creationist organizations have embraced the intelligent design movement. According to Thomas Dixon, "Religious leaders have come out against ID too. An open letter affirming the compatibility of Christian faith and the teaching of evolution, first produced in response to controversies in Wisconsin in 2004, has now been signed by over ten thousand clergy from different Christian denominations across America. In 2006, the director of the Vatican Observatory, the Jesuit astronomer George Coyne, condemned ID as a kind of 'crude creationism' which reduced God to a mere engineer." Hugh Ross of Reasons to Believe, a proponent of Old Earth creationism, believes that the efforts of intelligent design proponents to divorce the concept from Biblical Christianity make its hypothesis too vague. In 2002, he wrote: "Winning the argument for design without identifying the designer yields, at best, a sketchy origins model. Such a model makes little if any positive impact on the community of scientists and other scholars. [...] ...the time is right for a direct approach, a single leap into the origins fray. Introducing a biblically based, scientifically verifiable creation model represents such a leap." Likewise, two of the most prominent YEC organizations in the world have attempted to distinguish their views from those of the intelligent design movement. Henry M. Morris of the Institute for Creation Research (ICR) wrote, in 1999, that ID, "even if well-meaning and effectively articulated, will not work! It has often been tried in the past and has failed, and it will fail today. The reason it won't work is because it is not the Biblical method." According to Morris: "The evidence of intelligent design ... must be either followed by or accompanied by a sound presentation of true Biblical creationism if it is to be meaningful and lasting." In 2002, Carl Wieland, then of Answers in Genesis (AiG), criticized design advocates who, though well-intentioned, "'left the Bible out of it'" and thereby unwittingly aided and abetted the modern rejection of the Bible. Wieland explained that "AiG's major 'strategy' is to boldly, but humbly, call the church back to its Biblical foundations ... [so] we neither count ourselves a part of this movement nor campaign against it." The unequivocal consensus in the scientific community is that intelligent design is not science and has no place in a science curriculum. The U.S. National Academy of Sciences has stated that "creationism, intelligent design, and other claims of supernatural intervention in the origin of life or of species are not science because they are not testable by the methods of science." The U.S. 
National Science Teachers Association and the American Association for the Advancement of Science have termed it pseudoscience. Others in the scientific community have denounced its tactics, accusing the ID movement of manufacturing false attacks against evolution, of engaging in misinformation and misrepresentation about science, and of marginalizing those who teach it. More recently, in September 2012, Bill Nye warned that creationist views threaten science education and innovations in the United States. In 2001, the Discovery Institute published advertisements under the heading "A Scientific Dissent From Darwinism", with the claim that the listed scientists had signed a statement expressing skepticism about the ability of random mutation and natural selection to account for the complexity of life. The ambiguous statement did not exclude other known evolutionary mechanisms, and most signatories were not scientists in relevant fields, but starting in 2004 the Institute claimed the increasing number of signatures indicated mounting doubts about evolution among scientists. The statement formed a key component of Discovery Institute campaigns to present intelligent design as scientifically valid by claiming that evolution lacks broad scientific support, with Institute members continuing to cite the list through at least 2011. As part of a strategy to counter these claims, scientists organised Project Steve, which gained more signatories named Steve (or variants) than the Institute's petition, and a counter-petition, "A Scientific Support for Darwinism", which quickly gained similar numbers of signatories. Several surveys were conducted prior to the December 2005 decision in "Kitzmiller v. Dover Area School District", which sought to determine the level of support for intelligent design among certain groups. According to a 2005 Harris poll, 10% of adults in the United States viewed human beings as "so complex that they required a powerful force or intelligent being to help create them." Although Zogby polls commissioned by the Discovery Institute show more support, these polls suffer from considerable flaws, such as having a very low response rate (248 out of 16,000), being conducted on behalf of an organization with an expressed interest in the outcome of the poll, and containing leading questions. The 2017 Gallup creationism survey found that 38% of adults in the United States hold the view that "God created humans in their present form at one time within the last 10,000 years" when asked for their views on the origin and development of human beings, which was noted as being at the lowest level in 35 years. Previously, a series of Gallup polls in the United States from 1982 through 2014 on "Evolution, Creationism, Intelligent Design" found support for "human beings have developed over millions of years from less advanced forms of life, but God guided the process" of between 31% and 40%, support for "God created human beings in pretty much their present form at one time within the last 10,000 years or so" varied from 40% to 47%, and support for "human beings have developed over millions of years from less advanced forms of life, but God had no part in the process" varied from 9% to 19%. The polls also noted answers to a series of more detailed questions. There have been allegations that ID proponents have met discrimination, such as being refused tenure or being harshly criticized on the Internet. In the documentary film "Expelled: No Intelligence Allowed", released in 2008, host Ben Stein presents five such cases. 
The film contends that the mainstream science establishment, in a "scientific conspiracy to keep God out of the nation's laboratories and classrooms", suppresses academics who believe they see evidence of intelligent design in nature or criticize evidence of evolution. Investigation into these allegations turned up alternative explanations for perceived persecution. The film portrays intelligent design as motivated by science, rather than religion, though it does not give a detailed definition of the phrase or attempt to explain it on a scientific level. Other than briefly addressing issues of irreducible complexity, "Expelled" examines it as a political issue. The scientific theory of evolution is portrayed by the film as contributing to fascism, the Holocaust, communism, atheism, and eugenics. "Expelled" has been used in private screenings to legislators as part of the Discovery Institute intelligent design campaign for Academic Freedom bills. Review screenings were restricted to churches and Christian groups, and at a special pre-release showing, one of the interviewees, PZ Myers, was refused admission. The American Association for the Advancement of Science describes the film as dishonest and divisive propaganda aimed at introducing religious ideas into public school science classrooms, and the Anti-Defamation League has denounced the film's allegation that evolutionary theory influenced the Holocaust. The film includes interviews with scientists and academics who were misled into taking part by misrepresentation of the topic and title of the film. Skeptic Michael Shermer describes his experience of being repeatedly asked the same question without context as "surreal". Advocates of intelligent design seek to keep God and the Bible out of the discussion, and present intelligent design in the language of science as though it were a scientific hypothesis. For a theory to qualify as scientific, it is expected to be consistent, parsimonious, useful, empirically testable and falsifiable, and correctable, dynamic, progressive, and provisional. For any theory, hypothesis or conjecture to be considered scientific, it must meet most, and ideally all, of these criteria. The fewer criteria are met, the less scientific it is; and if it meets only a few or none at all, then it cannot be treated as scientific in any meaningful sense of the word. Typical objections to defining intelligent design as science are that it lacks consistency, violates the principle of parsimony, is not scientifically useful, is not falsifiable, is not empirically testable, and is not correctable, dynamic, progressive or provisional. Intelligent design proponents seek to change this fundamental basis of science by eliminating "methodological naturalism" from science and replacing it with what the leader of the intelligent design movement, Phillip E. Johnson, calls "theistic realism". Intelligent design proponents argue that naturalistic explanations fail to explain certain phenomena and that supernatural explanations provide a very simple and intuitive explanation for the origins of life and the universe. Many intelligent design followers believe that "scientism" is itself a religion that promotes secularism and materialism in an attempt to erase theism from public life, and they view their work in the promotion of intelligent design as a way to return religion to a central role in education and other public spheres. 
It has been argued that methodological naturalism is not an "assumption" of science, but a "result" of science well done: the God explanation is the least parsimonious, so according to Occam's razor, it cannot be a scientific explanation. The failure to follow the procedures of scientific discourse and the failure to submit work to the scientific community that withstands scrutiny have weighed against intelligent design being accepted as valid science. The intelligent design movement has not published a properly peer-reviewed article supporting ID in a scientific journal, and has failed to publish supporting peer-reviewed research or data. The only article published in a peer-reviewed scientific journal that made a case for intelligent design was quickly withdrawn by the publisher for having circumvented the journal's peer-review standards. The Discovery Institute says that a number of intelligent design articles have been published in peer-reviewed journals, but critics, largely members of the scientific community, reject this claim and state intelligent design proponents have set up their own journals with peer review that lack impartiality and rigor, consisting entirely of intelligent design supporters. Further criticism stems from the fact that the phrase "intelligent" design makes use of an assumption of the quality of an observable intelligence, a concept that has no scientific consensus definition. The characteristics of intelligence are assumed by intelligent design proponents to be observable without specifying what the criteria for the measurement of intelligence should be. Critics say that the design detection methods proposed by intelligent design proponents are radically different from conventional design detection, undermining the key elements that make it possible as legitimate science. Intelligent design proponents, they say, are proposing both searching for a designer without knowing anything about that designer's abilities, parameters, or intentions (which scientists do know when searching for the results of human intelligence), as well as denying the very distinction between natural/artificial design that allows scientists to compare complex designed artifacts against the background of the sorts of complexity found in nature. Among a significant proportion of the general public in the United States, the major concern is whether conventional evolutionary biology is compatible with belief in God and in the Bible, and how this issue is taught in schools. The Discovery Institute's "Teach the Controversy" campaign promotes intelligent design while attempting to discredit evolution in United States public high school science courses. The scientific community and science education organizations have replied that there is no scientific controversy regarding the validity of evolution and that the controversy exists solely in terms of religion and politics. Eugenie C. Scott, along with Glenn Branch and other critics, has argued that many points raised by intelligent design proponents are arguments from ignorance. In the argument from ignorance, a lack of evidence for one view is erroneously argued to constitute proof of the correctness of another view. Scott and Branch say that intelligent design is an argument from ignorance because it relies on a lack of knowledge for its conclusion: lacking a natural explanation for certain specific aspects of evolution, we assume intelligent cause. 
They contend most scientists would reply that the unexplained is not unexplainable, and that "we don't know yet" is a more appropriate response than invoking a cause outside science. Particularly, Michael Behe's demands for ever more detailed explanations of the historical evolution of molecular systems seem to assume a false dichotomy, where either evolution or design is the proper explanation, and any perceived failure of evolution becomes a victory for design. Scott and Branch also contend that the supposedly novel contributions proposed by intelligent design proponents have not served as the basis for any productive scientific research. In his conclusion to the Kitzmiller trial, Judge John E. Jones III wrote that "ID is at bottom premised upon a false dichotomy, namely, that to the extent evolutionary theory is discredited, ID is confirmed." This same argument had been put forward to support creation science at the "McLean v. Arkansas" (1982) trial, which found it was "contrived dualism", the false premise of a "two model approach". Behe's argument of irreducible complexity puts forward negative arguments against evolution but does not make any positive scientific case for intelligent design. It fails to allow for scientific explanations continuing to be found, as has been the case with several examples previously put forward as supposed cases of irreducible complexity. Intelligent design proponents often insist that their claims do not require a religious component. However, various philosophical and theological issues are naturally raised by the claims of intelligent design. Intelligent design proponents attempt to demonstrate scientifically that features such as irreducible complexity and specified complexity could not arise through natural processes, and therefore required repeated direct miraculous interventions by a Designer (often a Christian concept of God). They reject the possibility of a Designer who works merely through setting natural laws in motion at the outset, in contrast to theistic evolution (to which even Charles Darwin was open). Intelligent design is distinct because it asserts repeated miraculous interventions in addition to designed laws. This contrasts with other major religious traditions of a created world in which God's interactions and influences do not work in the same way as physical causes. The Roman Catholic tradition makes a careful distinction between ultimate metaphysical explanations and secondary, natural causes. The concept of direct miraculous intervention raises other potential theological implications. If such a Designer does not intervene to alleviate suffering even though capable of intervening for other reasons, some argue that this implies the designer is not omnibenevolent (see problem of evil and related theodicy). Further, repeated interventions imply that the original design was not perfect and final, and thus pose a problem for any who believe that the Creator's work had been both perfect and final. Intelligent design proponents seek to explain the problem of poor design in nature by insisting that we have simply failed to understand the perfection of the design (for example, proposing that vestigial organs have unknown purposes), or by proposing that designers do not necessarily produce the best design they can, and may have unknowable motives for their actions. Intelligent design has also been characterized as a God-of-the-gaps argument; a God-of-the-gaps argument is the theological version of an argument from ignorance. 
A key feature of this type of argument is that it merely answers outstanding questions with explanations (often supernatural) that are unverifiable and ultimately themselves subject to unanswerable questions. Historians of science observe that the astronomy of the earliest civilizations, although astonishing and incorporating mathematical constructions far in excess of any practical value, proved to be misdirected and of little importance to the development of science because its practitioners failed to inquire more carefully into the mechanisms that drove the heavenly bodies across the sky. It was the Greek civilization that first practiced science, although not yet as a formally defined experimental science, but nevertheless an attempt to rationalize the world of natural experience without recourse to divine intervention. In this historically motivated definition of science, any appeal to an intelligent creator is explicitly excluded for the paralysing effect it may have on scientific progress. "Kitzmiller v. Dover Area School District" was the first direct challenge brought in the United States federal courts against a public school district that required the presentation of intelligent design as an alternative to evolution. The plaintiffs successfully argued that intelligent design is a form of creationism, and that the school board policy thus violated the Establishment Clause of the First Amendment to the United States Constitution. Eleven parents of students in Dover, Pennsylvania, sued the Dover Area School District over a statement that the school board required be read aloud in ninth-grade science classes when evolution was taught. The plaintiffs were represented by the American Civil Liberties Union (ACLU), Americans United for Separation of Church and State (AU) and Pepper Hamilton LLP. The National Center for Science Education acted as consultants for the plaintiffs. The defendants were represented by the Thomas More Law Center. The suit was tried in a bench trial from September 26 to November 4, 2005, before Judge John E. Jones III. Kenneth R. Miller, Kevin Padian, Brian Alters, Robert T. Pennock, Barbara Forrest and John F. Haught served as expert witnesses for the plaintiffs. Michael Behe, Steve Fuller and Scott Minnich served as expert witnesses for the defense. On December 20, 2005, Judge Jones issued his 139-page findings of fact and decision, ruling that the Dover mandate was unconstitutional, and barring intelligent design from being taught in Pennsylvania's Middle District public school science classrooms. On November 8, 2005, there had been an election in which the eight Dover school board members who voted for the intelligent design requirement were all defeated by challengers who opposed the teaching of intelligent design in a science class, and the current school board president stated that the board did not intend to appeal the ruling. In his finding of facts, Judge Jones condemned the "Teach the Controversy" strategy. Jones himself anticipated in his decision that his ruling would be criticized, and, as he had predicted, John G. West, Associate Director of the Center for Science and Culture, criticized the ruling. Newspapers have noted with interest that the judge is "a Republican and a churchgoer". Subsequently, the decision has been examined in a search for flaws and conclusions, partly by intelligent design supporters aiming to avoid future defeats in court. In its Winter issue of 2007, the "Montana Law Review" published three articles. 
In the first, David K. DeWolf, John G. West and Casey Luskin, all of the Discovery Institute, argued that intelligent design is a valid scientific theory, the Jones court should not have addressed the question of whether it was a scientific theory, and that the Kitzmiller decision will have no effect at all on the development and adoption of intelligent design as an alternative to standard evolutionary theory. In the second Peter H. Irons responded, arguing that the decision was extremely well reasoned and spells the death knell for the intelligent design efforts to introduce creationism in public schools, while in the third, DeWolf, "et al.", answer the points made by Irons. However, fear of a similar lawsuit has resulted in other school boards abandoning intelligent design "teach the controversy" proposals. A number of anti-evolution bills have been introduced in the United States Congress and State legislatures since 2001, based largely upon language drafted by the Discovery Institute for the Santorum Amendment. Their aim has been to expose more students to articles and videos produced by advocates of intelligent design that criticise evolution. They have been presented as supporting "academic freedom", on the supposition that teachers, students, and college professors face intimidation and retaliation when discussing scientific criticisms of evolution, and therefore require protection. Critics of the legislation have pointed out that there are no credible scientific critiques of evolution, and an investigation in Florida of allegations of intimidation and retaliation found no evidence that it had occurred. The vast majority of the bills have been unsuccessful, with the one exception being Louisiana's Louisiana Science Education Act, which was enacted in 2008. In April 2010, the American Academy of Religion issued "Guidelines for Teaching About Religion in K‐12 Public Schools in the United States", which included guidance that creation science or intelligent design should not be taught in science classes, as "Creation science and intelligent design represent worldviews that fall outside of the realm of science that is defined as (and limited to) a method of inquiry based on gathering observable and measurable evidence subject to specific principles of reasoning." However, these worldviews as well as others "that focus on speculation regarding the origins of life represent another important and relevant form of human inquiry that is appropriately studied in literature or social sciences courses. Such study, however, must include a diversity of worldviews representing a variety of religious and philosophical perspectives and must avoid privileging one view as more legitimate than others." In June 2007, the Council of Europe's Committee on Culture, Science and Education issued a report, "The dangers of creationism in education", which states "Creationism in any of its forms, such as 'intelligent design', is not based on facts, does not use any scientific reasoning and its contents are pathetically inadequate for science classes." In describing the dangers posed to education by teaching creationism, it described intelligent design as "anti-science" and involving "blatant scientific fraud" and "intellectual deception" that "blurs the nature, objectives and limits of science" and links it and other forms of creationism to denialism. 
On October 4, 2007, the Council of Europe's Parliamentary Assembly approved a resolution stating that schools should "resist presentation of creationist ideas in any discipline other than religion", including "intelligent design", which it described as "the latest, more refined version of creationism", "presented in a more subtle way". The resolution emphasises that the aim of the report is not to question or to fight a belief, but to "warn against certain tendencies to pass off a belief as science". In the United Kingdom, public education includes religious education as a compulsory subject, and there are many faith schools that teach the ethos of particular denominations. When it was revealed that a group called Truth in Science had distributed DVDs produced by Illustra Media featuring Discovery Institute fellows making the case for design in nature, and claimed they were being used by 59 schools, the Department for Education and Skills (DfES) stated that "Neither creationism nor intelligent design are taught as a subject in schools, and are not specified in the science curriculum" (part of the National Curriculum, which does not apply to independent schools or to education in Scotland). The DfES subsequently stated that "Intelligent design is not a recognised scientific theory; therefore, it is not included in the science curriculum", but left the way open for it to be explored in religious education in relation to different beliefs, as part of a syllabus set by a local Standing Advisory Council on Religious Education. In 2006, the Qualifications and Curriculum Authority produced a "Religious Education" model unit in which pupils can learn about religious and nonreligious views about creationism, intelligent design and evolution by natural selection. On June 25, 2007, the UK Government responded to an e-petition by saying that creationism and intelligent design should not be taught as science, though teachers would be expected to answer pupils' questions within the standard framework of established scientific theories. Detailed government "Creationism teaching guidance" for schools in England was published on September 18, 2007. It states that "Intelligent design lies wholly outside of science", has no underpinning scientific principles, or explanations, and is not accepted by the science community as a whole. Though it should not be taught as science, "Any questions about creationism and intelligent design which arise in science lessons, for example as a result of media coverage, could provide the opportunity to explain or explore why they are not considered to be scientific theories and, in the right context, why evolution is considered to be a scientific theory." However, "Teachers of subjects such as RE, history or citizenship may deal with creationism and intelligent design in their lessons." The British Centre for Science Education lobbying group has the goal of "countering creationism within the UK" and has been involved in government lobbying in the UK in this regard. Northern Ireland's Department for Education says that the curriculum provides an opportunity for alternative theories to be taught. The Democratic Unionist Party (DUP)—which has links to fundamentalist Christianity—has been campaigning to have intelligent design taught in science classes. A DUP former Member of Parliament, David Simpson, has sought assurances from the education minister that pupils will not lose marks if they give creationist or intelligent design answers to science questions. 
In 2007, Lisburn city council voted in favor of a DUP recommendation to write to post-primary schools asking what their plans are to develop teaching material in relation to "creation, intelligent design and other theories of origin". Plans by Dutch Education Minister Maria van der Hoeven to "stimulate an academic debate" on the subject in 2005 caused a severe public backlash. After the 2006 elections, she was succeeded by Ronald Plasterk, described as a "molecular geneticist, staunch atheist and opponent of intelligent design". In reaction to this situation in the Netherlands, the Director General of the Flemish Secretariat of Catholic Education (VSKO) in Belgium, Mieke Van Hecke, declared that Catholic scientists had long accepted the theory of evolution, that intelligent design and creationism do not belong in Flemish Catholic schools, and that it is not the task of politics to introduce new ideas, that being the task and goal of science. Muzaffar Iqbal, a notable Pakistani-Canadian Muslim, signed "A Scientific Dissent From Darwinism", a petition from the Discovery Institute. Ideas similar to intelligent design have been considered respected intellectual options among Muslims, and in Turkey many intelligent design books have been translated. In Istanbul in 2007, public meetings promoting intelligent design were sponsored by the local government, and David Berlinski of the Discovery Institute was the keynote speaker at a meeting in May 2007. In 2011, the International Society for Krishna Consciousness (ISKCON) Bhaktivedanta Book Trust published an intelligent design book titled "Rethinking Darwin: A Vedic Study of Darwinism and Intelligent Design". The book included contributions from intelligent design advocates William A. Dembski, Jonathan Wells and Michael Behe as well as from Hindu creationists Leif A. Jensen and Michael Cremo. The status of intelligent design in Australia is somewhat similar to that in the UK (see Education in Australia). In 2005, the Australian Minister for Education, Science and Training, Brendan Nelson, raised the notion of intelligent design being taught in science classes. The public outcry caused the minister to quickly concede that the correct forum for intelligent design, if it were to be taught, is in religion or philosophy classes. The Australian chapter of Campus Crusade for Christ distributed a DVD of the Discovery Institute's documentary "Unlocking the Mystery of Life" (2002) to Australian secondary schools. Tim Hawkes, the head of The King's School, one of Australia's leading private schools, supported use of the DVD in the classroom at the discretion of teachers and principals. INS Vikrant (R11) INS "Vikrant" (from Sanskrit "vikrānta", "courageous") was a "Majestic"-class aircraft carrier of the Indian Navy. The ship was laid down as HMS "Hercules" for the British Royal Navy during World War II, but construction was put on hold when the war ended. India purchased the incomplete carrier in 1957, and construction was completed in 1961. "Vikrant" was commissioned as the first aircraft carrier of the Indian Navy and played a key role in enforcing the naval blockade of East Pakistan during the Indo-Pakistani War of 1971. In its later years, the ship underwent major refits to embark modern aircraft, before being decommissioned in January 1997. She was preserved as a museum ship in Cuffe Parade, Mumbai until 2012. In January 2014, the ship was sold through an online auction and scrapped in November 2014 after final clearance from the Supreme Court. 
In 1943 the Royal Navy commissioned six light aircraft carriers in an effort to counter the German and Japanese navies. The 1942 Design Light Fleet Carrier, commonly referred to as the British Light Fleet Carrier, was the result. Serving with eight navies between 1944 and 2001, these ships were designed and constructed by civilian shipyards as an intermediate step between the full-sized fleet aircraft carriers and the less expensive but limited-capability escort carriers. Sixteen light fleet carriers were ordered, and all were laid down as what became the "Colossus" class in 1942 and 1943. The final six ships were modified during construction to handle larger and faster aircraft, and were re-designated the "Majestic" class. The improvements from the "Colossus" class to the "Majestic" class included heavier displacement, armament, catapult, aircraft lifts and aircraft capacity. Construction on the ships was suspended at the end of World War II, as the ships were surplus to the Royal Navy's peacetime requirements. Instead, the carriers were modernized and sold to several Commonwealth nations. The ships were similar, but each varied depending on the requirements of the country to which the ship was sold. HMS "Hercules", the fifth ship in the "Majestic" class, was ordered on 7 August 1942 and laid down on 14 October 1943 by Vickers-Armstrongs on the River Tyne. After World War II ended with Japan's surrender on 2 September 1945, she was launched on 22 September, and her construction was suspended in May 1946. At the time of suspension, she was 75 per cent complete. Her hull was preserved, and in May 1947 she was laid up in Gareloch off the Clyde. In January 1957, she was purchased by India and was towed to Belfast to complete her construction and modifications by Harland and Wolff. Several improvements to the original design were ordered by the Indian Navy, including an angled deck, steam catapults, and a modified island. "Vikrant" displaced at standard load and at deep load. She had an overall length of , a beam of and a mean deep draught of . She was powered by a pair of Parsons geared steam turbines, driving two propeller shafts, using steam provided by four Admiralty three-drum boilers. The turbines developed a total of which gave a maximum speed of . "Vikrant" carried about of fuel oil that gave her a range of at , and at . The air and ship crew included 1,110 officers. The ship was armed with sixteen Bofors anti-aircraft guns, but these were later reduced to eight. At various times, its aircraft consisted of Hawker Sea Hawk and Sea Harrier (STOVL) jet fighters, Sea King Mk 42B and HAL Chetak helicopters, and Breguet Alizé Br.1050 anti-submarine aircraft. The carrier fielded between 21 and 23 aircraft of all types. "Vikrant"s flight decks were designed to handle aircraft up to , but remained the heaviest landing weight of an aircraft. Larger lifts were installed. The ship was equipped with one LW-05 air-search radar, one ZW-06 surface-search radar, one LW-10 tactical radar and one Type 963 aircraft landing radar with other communication systems. The Indian Navy's first aircraft carrier was commissioned as INS "Vikrant" on 4 March 1961 in Belfast by Vijaya Lakshmi Pandit, the Indian High Commissioner to the United Kingdom . The name "Vikrant" was derived from the Sanskrit word "vikrānta" meaning "stepping beyond", "courageous" or "bold". 
Captain Pritam Singh was the first commanding officer of the ship, which carried British Hawker Sea Hawk fighter-bombers and French Alizé anti-submarine aircraft. On 18 May 1961, the first jet landed on her deck. It was piloted by Lieutenant Radhakrishna Hariram Tahiliani, who later served as admiral and Chief of the Naval Staff of India from 1984 to 1987. "Vikrant" formally joined the Indian Navy's fleet in Bombay (now Mumbai) on 3 November 1961, when she was received at Ballard Pier by then Prime Minister Jawaharlal Nehru. In December of that year, the ship was deployed for Operation Vijay (the code name for the annexation of Portuguese India) off the coast of Goa with two destroyers, and . "Vikrant" did not see action, and patrolled along the coast to deter foreign interference. During the Indo-Pakistani War of 1965, "Vikrant" was in dry dock refitting, and did not see any action. In June 1970, "Vikrant" was docked at the Naval Dockyard, Bombay, due to many internal fatigue cracks and fissures in the water drums of her boilers that could not be repaired by welding. As replacement drums were not available locally, four new ones were ordered from Britain, and Naval Headquarters issued orders not to use the boilers until further notice. On 26 February 1971 the ship was moved from Ballard Pier Extension to the anchorage, without replacement drums. The main objective behind this move was to light up the boilers at reduced pressure, and work up the main and flight deck machinery that had been idle for almost seven months. On 1 March, the boilers were ignited, and basin trials up to 40 revolutions per minute (RPM) were conducted. Catapult trials were conducted on the same day. The ship began preliminary sea trials on 18 March and returned two days later. Trials were again conducted on 26–27 April. The navy decided to limit the boilers to a pressure of and the propeller revolutions to 120 RPM ahead and 80 RPM astern, reducing the ship's speed to . With the growing expectations of a war with Pakistan in the near future, the navy started to transfer its ships to strategically advantageous locations in Indian waters. The primary concern of Naval Headquarters about the operation was the serviceability of "Vikrant". When asked his opinion regarding the involvement of "Vikrant" in the war, Fleet Operations Officer Captain Gulab Mohanlal Hiranandani told the Chief of the Naval Staff Admiral Sardarilal Mathradas Nanda: Nanda and Hiranandani proved to be instrumental in taking "Vikrant" to war. There were objections that the ship might have severe operational difficulties that would expose the carrier to increased danger on operations. In addition, the three s acquired by the Pakistan Navy posed a significant risk to the carrier. In June, extensive deep sea trials were carried out, with steel safety harnesses around the three boilers still operational. Observation windows were fitted as a precautionary measure, to detect any steam leaks. By the end of June, the trials were complete and "Vikrant" was cleared to participate on operations, with its speed restricted to 14 knots. As a part of preparations for the war, "Vikrant" was assigned to the Eastern Naval Command, then to the Eastern Fleet. This fleet consisted of INS "Vikrant", the two s and , the two Petya III-class corvettes and , and one submarine, . The main reason behind strengthening the Eastern Fleet was to counter the Pakistani maritime forces deployed in support of military operations in East Bengal. 
A surveillance area of , confined by a triangle with a base of and sides of and , was set up in the Bay of Bengal. Any ship in this area was to be challenged and checked. If found to be neutral, it would be escorted to the nearest Indian port; otherwise, it would be captured and taken as a war prize. In the meantime, intelligence reports confirmed that Pakistan was to deploy a US-built submarine, "Ghazi". "Ghazi" was considered a serious threat to "Vikrant" by the Indian Navy, as "Vikrant"s approximate position would be known by the Pakistanis once she started operating aircraft. Of the four available surface ships, INS "Kavaratti" had no sonar, which meant that the other three had to remain in the close vicinity of "Vikrant"; otherwise the carrier would be completely vulnerable to attack by "Ghazi". On 23 July, "Vikrant" sailed to Cochin in company with the Western Fleet. En route, before reaching Cochin on 26 July, Sea King landing trials were carried out. After the completion of the radar and communication trials on 28 July, she departed for Madras, escorted by "Brahmaputra" and "Beas". The next major problem was operating aircraft from the carrier. The commanding officer of the ship, Captain (later Vice Admiral) S. Prakash, was seriously concerned about flight operations. He was concerned that aircrew morale would be adversely affected if flight operations were not undertaken, which could be disastrous. Naval Headquarters remained firm on the speed restrictions, and sought confirmation from Prakash whether it was possible to embark an Alizé without compromising the speed restrictions. The speed restrictions imposed by the headquarters meant that Alizé aircraft would have to land at close to stalling speed. Eventually the aircraft weight was reduced, which allowed several of the aircraft to embark, along with a Sea Hawk squadron. By the end of September, "Vikrant" and her escorts reached Port Blair. En route to Visakhapatnam, tactical exercises were conducted in the presence of the Flag Officer Commanding-in-Chief of the Eastern Naval Command. From Vishakhapatnam, "Vikrant" set out for Madras for maintenance. Rear Admiral S. H. Sharma was appointed Flag Officer Commanding Eastern Fleet and arrived at Vishakhapatnam on 14 October. After receiving the reports that Pakistan might launch preemptive strikes, maintenance was stopped for another tactical exercise, which was completed during the night of 26–27 October at Vishakhapatnam. "Vikrant" then returned to Madras to resume maintenance. On 1 November, the Eastern Fleet was formally constituted, and on 13 November, all the ships set out for the Andaman and Nicobar Islands. To avoid misadventures, it was planned to sail "Vikrant" to a remote anchorage, isolating it from combat. Simultaneously, deception signals would give the impression that "Vikrant" was operating somewhere between Madras and Vishakhapatnam. On 23 November, an emergency was declared in Pakistan after a clash of Indian and Pakistani troops in East Pakistan two days earlier. On 2 December, the Eastern Fleet proceeded to its patrol area in anticipation of an attack by Pakistan. The Pakistan Navy had deployed "Ghazi" on 14 November with the explicit goal of targeting and sinking "Vikrant", and "Ghazi" reached a location near Madras by the 23rd. 
In an attempt to deceive the Pakistan Navy and "Ghazi", India's Naval Headquarters deployed "Rajput" as a decoy—the ship sailed off the coast of Vishakhapatnam and broadcast a significant amount of radio traffic, making her appear to be "Vikrant". "Ghazi", meanwhile, sank off the Visakhapatnam coast under mysterious circumstances. On the night of 3–4 December, a muffled underwater explosion was detected by a coastal battery. The next morning, a local fisherman observed flotsam near the coast, causing Indian naval officials to suspect a vessel had sunk off the coast. The next day, a clearance diving team was sent to search the area, and they confirmed that "Ghazi" had sunk in shallow waters. The reason for "Ghazi"s fate is unclear. The Indian Navy's official historian, Hiranandani, suggests three possibilities, after having analysed the position of the rudder and extent of the damage suffered. The first was that "Ghazi" had come up to periscope depth to identify her position and may have seen an anti-submarine vessel that caused her to crash dive, which in turn may have led her to bury her bow in the bottom. The second possibility is closely related to the first: on the night of the explosion, "Rajput" was on patrol off Visakhapatnam and observed a severe disturbance in the water. Suspecting that it was a submarine, the ship dropped two depth charges on the spot, on a position that was very close to the wreckage. The third possibility is that there was a mishap when "Ghazi" was laying mines on the day before hostilities broke out. "Vikrant" was redeployed towards Chittagong at the outbreak of hostilities. On 4 December, the ship's Sea Hawks struck shipping in Chittagong and Cox's Bazar harbours, sinking or incapacitating most of the ships present. Later strikes targeted Khulna and the Port of Mongla, which continued until 10 December, while other operations were flown to support a naval blockade of East Pakistan. On 14 December, the Sea Hawks attacked the cantonment area in Chittagong, destroying several Pakistani army barracks. Medium anti-aircraft fire was encountered during this strike. Simultaneous attacks by Alizés continued on Cox's Bazar. After this, "Vikrant"s fuel levels dropped to less than 25 per cent, and the aircraft carrier sailed to Paradip for refueling. The crew of INS "Vikrant" earned two Maha Vir Chakras and twelve Vir Chakra gallantry medals for their part in the war. "Vikrant" did not see much service after the war, and was given two major modernisation refits—the first one from 1979 to 1981 and the second one from 1987 to 1989. In the first phase, her boilers, radars, communication systems and anti-aircraft guns were modernised, and facilities to operate Sea Harriers were installed. In the second phase, facilities to operate the new Sea Harrier Vertical/Short Take Off and Land (V/STOL) fighter aircraft and the new Sea King Mk 42B Anti-Submarine Warfare (ASW) helicopters were introduced. A 9.75-degree ski-jump ramp was fitted. The steam catapult was removed during this phase. Again in 1991, "Vikrant" underwent a six-month refit, followed by another fourteen-month refit in 1992–94. She remained operational thereafter, flying Sea Harriers, Sea Kings and Chetaks until her final sea outing on 23 November 1994. In the same year, a fire was also recorded aboard. In January 1995, the navy decided to keep "Vikrant" in "safe to float" state. She was laid up and formally decommissioned on 31 January 1997. 
During her service, INS "Vikrant" embarked four squadrons of the Naval Air Arm of the Indian Navy. Following decommissioning in 1997, the ship was earmarked for preservation as a museum ship in Mumbai. Lack of funding prevented progress on the ship's conversion to a museum and it was speculated that the ship would be made into a training ship. In 2001, the ship was opened to the public by the Indian Navy, but the Government of Maharashtra was unable to find a partner to operate the museum on a permanent, long-term basis and the museum was closed after it was deemed unsafe for the public in 2012. In August 2013, Vice-Admiral Shekhar Sinha, chief of the Western Naval Command, said the Ministry of Defence would scrap the ship as she had become very difficult to maintain and no private bidders had offered to fund the museum's operations. On 3 December 2013, the Indian government decided to auction the ship. The Bombay High Court dismissed a public-interest lawsuit filed by Kiran Paigankar to stop the auction, stating the vessel's dilapidated condition did not warrant her preservation, nor were the necessary funds or government support available. In January 2014, the ship was sold through an online auction to a Darukhana ship-breaker. The Supreme Court of India dismissed another lawsuit challenging the ship's sale and scrapping on 14 August 2014. "Vikrant" remained beached off Darukhana in Mumbai Port while awaiting the final clearances of the Mumbai Port Trust. On 12 November 2014, the Supreme Court gave its final approval for the carrier to be scrapped, which commenced on 22 November 2014. In memory of "Vikrant", the Vikrant Memorial was unveiled by Vice Admiral Surinder Pal Singh Cheema, Flag Officer Commanding-in-Chief of the Western Naval Command, at K Subash Marg in the Naval Dockyard of Mumbai on 25 January 2016. The memorial is made from metal recovered from the ship. In February 2016, Bajaj unveiled a new motorbike made with metal from "Vikrant"s scrap and named it Bajaj V in honour of "Vikrant". The navy has named its first home-built carrier INS "Vikrant" in honour of INS "Vikrant" (R11). The new carrier is being built by Cochin Shipyard Limited. The keel was laid down in February 2009 and she was launched in August 2013. The ship is being fitted out and is expected to be commissioned by the end of 2018. The decommissioned ship featured prominently in the film "ABCD 2" as a backdrop while it was moored near Darukhana in Mumbai. Japan Japan ("Nippon" or "Nihon"; formally "Nihon-koku", lit. "State of Japan") is a sovereign island country in East Asia. Located in the Pacific Ocean, it lies off the eastern coast of the Asian mainland and stretches from the Sea of Okhotsk in the north to the East China Sea and China in the southwest. The kanji that make up Japan's name mean "sun origin", and it is often called the "Land of the Rising Sun". Japan is a stratovolcanic archipelago consisting of about 6,852 islands. The four largest are Honshu, Hokkaido, Kyushu, and Shikoku, which make up about ninety-seven percent of Japan's land area and are often referred to as the home islands. The country is divided into 47 prefectures in eight regions, with Hokkaido being the northernmost prefecture and Okinawa being the southernmost one. The population of 127 million is the world's tenth largest. Japanese people make up 98.5% of Japan's total population. About 9.1 million people live in Tokyo, the capital of Japan. 
Archaeological research indicates that Japan was inhabited as early as the Upper Paleolithic period. The first written mention of Japan is in Chinese history texts from the 1st century AD. Influence from other regions, mainly China, followed by periods of isolation, particularly from Western Europe, has characterized Japan's history. From the 12th century until 1868, Japan was ruled by successive feudal military "shōguns" who ruled in the name of the Emperor. Japan entered into a long period of isolation in the early 17th century, which was ended in 1853 when a United States fleet pressured Japan to open to the West. After nearly two decades of internal conflict and insurrection, the Imperial Court regained its political power in 1868 through the help of several clans from Chōshū and Satsuma—and the Empire of Japan was established. In the late 19th and early 20th centuries, victories in the First Sino-Japanese War, the Russo-Japanese War and World War I allowed Japan to expand its empire during a period of increasing militarism. The Second Sino-Japanese War of 1937 expanded into part of World War II in 1941, which came to an end in 1945 following the atomic bombings of Hiroshima and Nagasaki and the Japanese surrender. Since adopting its revised constitution on May 3, 1947, during the occupation by SCAP, Japan has maintained a unitary parliamentary constitutional monarchy with an Emperor and an elected legislature called the National Diet. Japan is a member of the ASEAN Plus mechanism, UN, the OECD, the G7, the G8 and the G20—and is considered a great power. The country has the world's third-largest economy by nominal GDP and the world's fourth-largest economy by purchasing power parity. It is also the world's fourth-largest exporter and fourth-largest importer. The country benefits from a highly skilled workforce and is among the most highly educated countries in the world, with one of the highest percentages of its citizens holding a tertiary education degree. Although Japan has officially renounced its right to declare war, it maintains a modern military with the world's eighth-largest military budget, used for self-defense and peacekeeping roles. Japan is a highly developed country with a very high standard of living and Human Development Index. Its population enjoys the highest life expectancy and the third lowest infant mortality rate in the world. Japan is renowned for its historical and extensive cinema, influential music industry, rich cuisine and its major contributions to science and modern-day technology. The Japanese word for Japan is 日本, which is pronounced "Nihon" or "Nippon" and literally means "the origin of the sun". The first character means "sun" or "day"; the second means "base" or "origin". The compound therefore means "origin of the sun" and is the source of the popular Western epithet "Land of the Rising Sun". The earliest record of the name "Nihon" appears in the Chinese historical records of the Tang dynasty, the "Old Book of Tang". At the end of the seventh century, a delegation from Japan requested that "Nihon" be used as the name of their country. This name may have its origin in a letter sent in 607 and recorded in the official history of the Sui dynasty. Prince Shōtoku, the Regent of Japan, sent a mission to China with a letter in which he called himself "the Emperor of the Land where the Sun rises" (日出處天子). The message said: "Here, I, the emperor of the country where the sun rises, send a letter to the emperor of the country where the sun sets. How are you[?]". 
Prior to the adoption of "Nihon", other terms such as and were used. The term is a homophone of "Wo" 倭 (pronounced "Wa" by the Japanese), which has been used by the Chinese as a designation for the Japanese as early as the third century Three Kingdoms period. Another form "Wei" (委) was used for an early state in Japan called Nakoku during the Han dynasty. However, the Japanese disliked some connotation of Wa 倭 (which has been associated in China with concepts like "dwarf" or "pygmy"), and it was therefore replaced with the substitute character , meaning "togetherness, harmony". The English word Japan possibly derives from the historical Chinese pronunciation of 日本. The Old Mandarin or possibly early Wu Chinese pronunciation of Japan was recorded by Marco Polo as "Cipangu". In modern Shanghainese, a Wu dialect, the pronunciation of characters Japan is "Zeppen" . The old Malay word for Japan, "Japun" or "Japang", was borrowed from a southern coastal Chinese dialect, probably Fukienese or Ningpo—and this Malay word was encountered by Portuguese traders in Southeast Asia in the 16th century. These Early Portuguese traders then brought the word to Europe. The first record of this name in English is in a book published in 1577 and spelled "Giapan", in a translation of a 1565 letter written by a Portuguese Jesuit Luís Fróis. From the Meiji Restoration until the end of World War II, the full title of Japan was , meaning "the Empire of Great Japan". Today, the name is used as a formal modern-day equivalent with the meaning of "the State of Japan". Countries like Japan whose long form does not contain a descriptive designation are generally given a name appended by the character , meaning "country", "nation" or "state". A Paleolithic culture around 30,000 BC constitutes the first known habitation of the Japanese archipelago. This was followed from around 14,000 BC (the start of the Jōmon period) by a Mesolithic to Neolithic semi-sedentary hunter-gatherer culture characterized by pit dwelling and rudimentary agriculture, including by ancestors of contemporary Ainu people and Yamato people. Decorated clay vessels from this period are some of the oldest surviving examples of pottery in the world. Around 300 BC, the Yayoi people began to enter the Japanese islands, intermingling with the Jōmon. The Yayoi period, starting around 500 BC, saw the introduction of practices like wet-rice farming, a new style of pottery and metallurgy, introduced from China and Korea. Japan first appears in written history in the Chinese "Book of Han". According to the "Records of the Three Kingdoms", the most powerful kingdom on the archipelago during the third century was called Yamataikoku. Buddhism was introduced to Japan from Baekje, Korea and was promoted by Prince Shōtoku, but the subsequent development of Japanese Buddhism was primarily influenced by China. Despite early resistance, Buddhism was promoted by the ruling class and gained widespread acceptance beginning in the Asuka period (592–710). The Nara period (710–784) marked an emergence of the centralized Japanese state centered on the Imperial Court in Heijō-kyō (modern Nara). The Nara period is characterized by the appearance of a nascent literature as well as the development of Buddhist-inspired art and architecture. The smallpox epidemic of 735–737 is believed to have killed as much as one-third of Japan's population. In 784, Emperor Kanmu moved the capital from Nara to Nagaoka-kyō, then to Heian-kyō (modern Kyoto) in 794. 
This marked the beginning of the Heian period (794–1185), during which a distinctly indigenous Japanese culture emerged, noted for its art, poetry and prose. Murasaki Shikibu's "The Tale of Genji" and the lyrics of Japan's national anthem "Kimigayo" were written during this time. Buddhism began to spread during the Heian era chiefly through two major sects, Tendai by Saichō and Shingon by Kūkai. Pure Land Buddhism (Jōdo-shū, Jōdo Shinshū) became greatly popular in the latter half of the 11th century. Japan's feudal era was characterized by the emergence and dominance of a ruling class of warriors, the samurai. In 1185, following the defeat of the Taira clan in the Genpei War, sung in the epic Tale of Heike, samurai Minamoto no Yoritomo was appointed "shōgun" by Emperor Go-Toba, and Yoritomo established a base of power in Kamakura. After his death, the Hōjō clan came to power as regents for the "shōguns". The Zen school of Buddhism was introduced from China in the Kamakura period (1185–1333) and became popular among the samurai class. The Kamakura shogunate repelled Mongol invasions in 1274 and 1281, but was eventually overthrown by Emperor Go-Daigo. Emperor Go-Daigo was himself defeated by Ashikaga Takauji in 1336. Ashikaga Takauji established the shogunate in Muromachi, Kyoto. This was the start of the Muromachi period (1336–1573). The Ashikaga shogunate achieved glory at the age of Ashikaga Yoshimitsu, and the culture based on Zen Buddhism (the art of "Miyabi") prospered. This evolved to Higashiyama Culture, and prospered until the 16th century. On the other hand, the succeeding Ashikaga shogunate failed to control the feudal warlords ("daimyōs") and a civil war (the Ōnin War) began in 1467, opening the century-long Sengoku period ("Warring States"). During the 16th century, traders and Jesuit missionaries from Portugal reached Japan for the first time, initiating direct commercial and cultural exchange between Japan and the West. This allowed Oda Nobunaga to obtain European technology and firearms, which he used to conquer many other "daimyōs". His consolidation of power began what was known as the Azuchi–Momoyama period (1573–1603). After Nobunaga was assassinated in 1582 by Akechi Mitsuhide, his successor Toyotomi Hideyoshi unified the nation in 1590 and launched two unsuccessful invasions of Korea in 1592 and 1597. Tokugawa Ieyasu served as regent for Hideyoshi's son and used his position to gain political and military support. When open war broke out, Ieyasu defeated rival clans in the Battle of Sekigahara in 1600. Tokugawa Ieyasu was appointed "shōgun" by Emperor Go-Yōzei in 1603 and established the Tokugawa shogunate in Edo (modern Tokyo). The shogunate enacted measures including "buke shohatto", as a code of conduct to control the autonomous "daimyōs"; and in 1639 the isolationist "sakoku" ("closed country") policy that spanned the two and a half centuries of tenuous political unity known as the Edo period (1603–1868). The study of Western sciences, known as "rangaku", continued through contact with the Dutch enclave at Dejima in Nagasaki. The Edo period also gave rise to "kokugaku" ("national studies"), the study of Japan by the Japanese. On March 31, 1854, Commodore Matthew Perry and the "Black Ships" of the United States Navy forced the opening of Japan to the outside world with the Convention of Kanagawa. Subsequent similar treaties with Western countries in the Bakumatsu period brought economic and political crises. 
The resignation of the "shōgun" led to the Boshin War and the establishment of a centralized state nominally unified under the Emperor (the Meiji Restoration). Embarking on an active process of Westernization during the Meiji Restoration in 1868, Japan adopted Western political, judicial and military institutions, and integrated Western cultural influences with its traditional culture as it pursued modern industrialization. The Cabinet organized the Privy Council, introduced the Meiji Constitution, and assembled the Imperial Diet. The Meiji Restoration transformed the Empire of Japan into an industrialized world power that pursued military conflict to expand its sphere of influence. Although France and Britain showed some interest, the European powers largely ignored Japan and instead concentrated on the much greater attractions of China. France was also set back by its failures in Mexico and defeat by the Germans. After victories in the First Sino-Japanese War (1894–1895) and the Russo-Japanese War (1904–1905), Japan gained control of Taiwan, Korea and the southern half of Sakhalin. In addition to imperialistic success, Japan also invested much more heavily in its own economic growth, leading to a period of economic flourishing in the country which lasted until the Great Depression. Japan's population grew from 35 million in 1873 to 70 million in 1935. In World War I, Japan joined the Allies and captured German possessions, and made advances into China. The early 20th century saw a period of Taishō democracy (1912–1926), but in the 1920s this fragile democracy buckled under a political shift towards statism, the passing of laws against political dissent and a series of attempted coups. This process accelerated during the 1930s, spawning a number of new radical nationalist groups which shared a hostility to liberal democracy and a dedication to expansion in Asia. Japanese expansionism and militarization, along with totalitarianism and ultranationalism, reshaped the country. In 1931, Japan invaded and occupied Manchuria; following international condemnation of this occupation, it quit the League of Nations in 1933. In 1936, Japan signed the Anti-Comintern Pact with Germany, and the 1940 Tripartite Pact made it one of the Axis Powers. The Empire of Japan invaded other parts of China in 1937, precipitating the Second Sino-Japanese War (1937–1945). The Imperial Japanese Army swiftly captured the capital Nanjing and conducted the Nanking Massacre. In 1940, the Empire invaded French Indochina, after which the United States placed an oil embargo on Japan. On December 7–8, 1941, Japanese forces carried out surprise attacks on Pearl Harbor, British forces in Malaya, Singapore and Hong Kong and declared war on the United States and the British Empire, bringing the United States and the United Kingdom into World War II in the Pacific. After Allied victories across the Pacific during the next four years, which culminated in the Soviet invasion of Manchuria and the atomic bombings of Hiroshima and Nagasaki in 1945, Japan agreed to an unconditional surrender on August 15. The war cost Japan, its colonies, China and the war's other combatants tens of millions of lives and left much of Japan's industry and infrastructure destroyed. The Allies (led by the United States) repatriated millions of ethnic Japanese from colonies and military camps throughout Asia, largely eliminating the Japanese empire and restoring the independence of its conquered territories. 
The Allies also convened the International Military Tribunal for the Far East on May 3, 1946, to prosecute some senior generals for war crimes. In 1947, Japan adopted a new constitution emphasizing liberal democratic practices. The Allied occupation ended with the Treaty of San Francisco in 1952 and Japan was granted membership in the United Nations in 1956. Japan later achieved rapid growth and became the second-largest economy in the world, a position it held until it was surpassed by China in 2010. The period of rapid growth ended in the mid-1990s, when Japan suffered a major recession. At the beginning of the 21st century, positive growth has signaled a gradual economic recovery. On March 11, 2011, Japan suffered one of the largest earthquakes in its recorded history; this triggered the Fukushima Daiichi nuclear disaster, one of the worst disasters in the history of nuclear power. Japan has a total of 6,852 islands extending along the Pacific coast of East Asia. The country, including all of the islands it controls, lies between latitudes 24° and 46°N, and longitudes 122° and 146°E. The main islands, from north to south, are Hokkaido, Honshu, Shikoku and Kyushu. The Ryukyu Islands, which include Okinawa, are a chain to the south of Kyushu. Together they are often known as the Japanese archipelago. About 73 percent of Japan is forested, mountainous and unsuitable for agricultural, industrial or residential use. As a result, the habitable zones, mainly located in coastal areas, have extremely high population densities. Japan is one of the most densely populated countries in the world. The islands of Japan are located in a volcanic zone on the Pacific Ring of Fire. They are primarily the result of large oceanic movements occurring over hundreds of millions of years, from the mid-Silurian to the Pleistocene, driven by the subduction of the Philippine Sea Plate beneath the continental Amurian Plate and Okinawa Plate to the south, and subduction of the Pacific Plate under the Okhotsk Plate to the north. The Boso Triple Junction off the coast of Japan is a triple junction where the North American Plate, the Pacific Plate and the Philippine Sea Plate meet. Japan was originally attached to the eastern coast of the Eurasian continent. The subducting plates pulled Japan eastward, opening the Sea of Japan around 15 million years ago. Japan has 108 active volcanoes. During the twentieth century several new volcanoes emerged, including Shōwa-shinzan on Hokkaido and Myōjin-shō off the Bayonnaise Rocks in the Pacific. Destructive earthquakes, often resulting in tsunami, occur several times each century. The 1923 Tokyo earthquake killed over 140,000 people. More recent major quakes include the 1995 Great Hanshin earthquake and the 2011 Tōhoku earthquake, a 9.1-magnitude quake which hit Japan on March 11, 2011, and triggered a large tsunami. Japan is substantially prone to earthquakes, tsunami and volcanic eruptions because of its location along the Pacific Ring of Fire. It has the 15th highest natural disaster risk as measured in the 2013 World Risk Index. The climate of Japan is predominantly temperate, but varies greatly from north to south. Japan's geographical features divide it into six principal climatic zones: Hokkaido, Sea of Japan, Central Highland, Seto Inland Sea, Pacific Ocean, and Ryukyu Islands. The northernmost zone, Hokkaido, has a humid continental climate with long, cold winters and very warm to cool summers. Precipitation is not heavy, but the islands usually develop deep snowbanks in the winter. 
In the Sea of Japan zone on Honshu's west coast, northwest winter winds bring heavy snowfall. In the summer, the region is cooler than the Pacific area, though it sometimes experiences extremely hot temperatures because of the foehn. The Central Highland has a typical inland humid continental climate, with large temperature differences between summer and winter seasons, as well as large diurnal variation; precipitation is light, though winters are usually snowy. The mountains of the Chūgoku and Shikoku regions shelter the Seto Inland Sea from seasonal winds, bringing mild weather year-round. The Pacific coast features a humid subtropical climate that experiences milder winters with occasional snowfall and hot, humid summers because of the southeast seasonal wind. The Ryukyu Islands have a subtropical climate, with warm winters and hot summers. Precipitation is very heavy, especially during the rainy season. The average winter temperature in Japan is and the average summer temperature is . The highest temperature ever measured in Japan was recorded on July 23, 2018. The main rainy season begins in early May in Okinawa, and the rain front gradually moves north until reaching Hokkaido in late July. In most of Honshu, the rainy season begins before the middle of June and lasts about six weeks. In late summer and early autumn, typhoons often bring heavy rain. Japan has nine forest ecoregions which reflect the climate and geography of the islands. They range from subtropical moist broadleaf forests in the Ryūkyū and Bonin Islands, to temperate broadleaf and mixed forests in the mild climate regions of the main islands, to temperate coniferous forests in the cold, winter portions of the northern islands. Japan has over 90,000 species of wildlife, including the brown bear, the Japanese macaque, the Japanese raccoon dog, the large Japanese field mouse, and the Japanese giant salamander. A large network of national parks has been established to protect important areas of flora and fauna as well as thirty-seven Ramsar wetland sites. Four sites have been inscribed on the UNESCO World Heritage List for their outstanding natural value. In the period of rapid economic growth after World War II, environmental policies were downplayed by the government and industrial corporations; as a result, environmental pollution was widespread in the 1950s and 1960s. Responding to rising concern about the problem, the government introduced several environmental protection laws in 1970. The oil crisis in 1973 also encouraged the efficient use of energy because of Japan's lack of natural resources. Current environmental issues include urban air pollution (NOx, suspended particulate matter, and toxics), waste management, water eutrophication, nature conservation, climate change, chemical management and international co-operation for conservation. As of June 2015, more than 40 coal-fired power plants are planned or under construction in Japan. The NGO Climate Action Network announced Japan as the winner of its "Fossil of the Day" award for "doing the most to block progress on climate action". Japan ranks 20th in the 2018 Environmental Performance Index, which measures a nation's commitment to environmental sustainability. As the host and signatory of the 1997 Kyoto Protocol, Japan is under treaty obligation to reduce its carbon dioxide emissions and to take other steps to curb climate change. Japan is a constitutional monarchy whereby the power of the Emperor is very limited. 
As a ceremonial figurehead, he is defined by the constitution to be "the symbol of the State and of the unity of the people". Executive power is wielded chiefly by the Prime Minister and his cabinet, while sovereignty is vested in the Japanese people. Japan's legislative body is the National Diet, seated in Chiyoda, Tokyo. The Diet is a bicameral body, comprising the lower House of Representatives with 465 seats, elected by popular vote every four years or when dissolved; and the upper House of Councillors with 242 seats, whose popularly elected members serve six-year terms. There is universal suffrage for adults over 18 years of age, with a secret ballot for all elected offices. The Diet is dominated by the social liberal Constitutional Democratic Party (CDP) and the conservative Liberal Democratic Party (LDP). The LDP has enjoyed near-continuous electoral success since 1955, except for brief periods between 1993 and 1994 and from 2009 to 2012. As of November 2017, it holds 283 seats in the lower house and 125 seats in the upper house. The Prime Minister of Japan is the head of government and is appointed by the Emperor after being designated by the Diet from among its members. The Prime Minister is the head of the Cabinet, and appoints and dismisses the Ministers of State. Following the LDP's landslide victory in the 2012 general election, Shinzō Abe replaced Yoshihiko Noda as the Prime Minister on December 26, 2012. Historically influenced by Chinese law, the Japanese legal system developed independently during the Edo period through texts such as "Kujikata Osadamegaki". However, since the late 19th century the judicial system has been largely based on the civil law of Europe, notably Germany. For example, in 1896, the Japanese government established a civil code based on a draft of the German Bürgerliches Gesetzbuch; with the code remaining in effect with post–World War II modifications. Statutory law originates in Japan's legislature and has the rubber stamp of the Emperor. Japan's court system is divided into four basic tiers: the Supreme Court and three levels of lower courts. The main body of Japanese statutory law is called the Six Codes. Japan is divided into 47 prefectures, each overseen by an elected governor, legislature and administrative bureaucracy. Each prefecture is further divided into cities, towns and villages. The nation is currently undergoing administrative reorganization by merging many of the cities, towns and villages with each other. This process will reduce the number of sub-prefecture administrative regions and is expected to cut administrative costs. Japan has diplomatic relations with nearly all independent nations and has been an active member of the United Nations since December 1956. Japan is a member of the G8, APEC, and "ASEAN Plus Three", and is a participant in the East Asia Summit. Japan signed a security pact with Australia in March 2007 and with India in October 2008. It is the world's fifth largest donor of official development assistance, donating US$9.2 billion in 2014. Japan has close ties to the United States. Since Japan's defeat by the United States and allies in World War II, the two countries have maintained close economic and defense relations. The United States is a major market for Japanese exports and the primary source of Japanese imports, and is committed to defending the country, having military bases in Japan for partially that purpose. 
Japan contests Russia's control of the Southern Kuril Islands (including Etorofu, Kunashiri, Shikotan, and the Habomai group) which were occupied by the Soviet Union in 1945. Japan acknowledges, but does not accept, South Korea's control of the Liancourt Rocks (Japanese: "Takeshima", Korean: "Dokdo"), which Japan also claims. Japan has strained relations with the People's Republic of China (PRC) and the Republic of China (ROC) over the Senkaku Islands; and with the People's Republic of China over the status of Okinotorishima. Japan's relationship with South Korea has been strained due to Japan's treatment of Koreans during Japanese colonial rule, particularly over the issue of comfort women. These women were essentially sex slaves, and although there is no exact figure for how many women were subjected to this treatment, experts believe it could be in the tens or hundreds of thousands. Between 1910 and 1945, the Japanese government rebuilt Korean infrastructure. Despite this, modernization in Korea was always linked to Japanese interests and therefore did not imply a "revolutionization" of social structures. For instance, Japan kept Korea's primitive feudalistic agriculture because it served Japanese interests. Further developments of Japanese imperialism in Korea included establishing a slew of police stations all over the country, replacing taxes in kind with taxes in fixed money, and taking much of the communal land which had belonged to villages and giving it to private companies in Japan (causing many peasants to lose their land). Japan also introduced over 800,000 Japanese immigrants onto the peninsula and carried out a campaign of cultural suppression through efforts to ban the Korean language in schools and force Koreans to adopt Japanese names. With the surrender of Japan and the Axis at the end of WWII in 1945, the Korean Peninsula was once again independent. Despite their historical tensions, in December 2015, Japan agreed to settle the comfort women dispute with South Korea by issuing a formal apology, taking responsibility for the issue and paying money to the surviving comfort women. Today, South Korea and Japan have a stronger and more economically-driven relationship. Since the 1990s, the Korean Wave has created a large fanbase in East Asia, but most notably in Japan. Japan is the number one importer of Korean music (K-pop), television (K-dramas), and films, but this was only made possible after the South Korean government lifted the 30-year ban on cultural exchange with Japan that had been in place since 1948. Korean pop cultural products' success in the Japanese market is partially explained by the borrowing of Japanese ideas such as the star-marketing system and heavy promotion of new television shows and music. Korean dramas such as "Winter Sonata" and "Coffee Prince," as well as K-pop artists such as BIGBANG and SHINee, are extremely popular with Japanese consumers. Most recently, South Korean President Moon Jae-in met with Japanese Prime Minister Shinzo Abe at the 2017 G-20 Summit in Hamburg, Germany to discuss the future of their relationship and specifically how to cooperate on finding solutions for North Korean aggression in the region. Both leaders restated their commitment to solving the comfort women dispute, building positive relations in the region, and pressuring China to be more assertive with North Korea as it continues to test nuclear weapons and isolate itself further from the international community. 
Japan maintains one of the largest military budgets of any country in the world. The country's military (the Japan Self-Defense Forces – JSDF) is restricted by Article 9 of the Japanese Constitution, which renounces Japan's right to declare war or use military force in international disputes. Accordingly, Japan's Self-Defense Forces is an unusual military that has never fired shots outside Japan. Japan is the highest-ranked Asian country in the Global Peace Index. The military is governed by the Ministry of Defense, and primarily consists of the Japan Ground Self-Defense Force (JGSDF), the Japan Maritime Self-Defense Force (JMSDF) and the Japan Air Self-Defense Force (JASDF). The Japan Maritime Self-Defense Force (JMSDF) is a regular participant in RIMPAC maritime exercises. The forces have been recently used in peacekeeping operations; the deployment of troops to Iraq marked the first overseas use of Japan's military since World War II. Japan Business Federation has called on the government to lift the ban on arms exports so that Japan can join multinational projects such as the Joint Strike Fighter. The 21st century is witnessing a rapid change in global power balance along with globalization. The security environment around Japan has become increasingly severe as represented by nuclear and missile development by North Korea. Transnational threats grounded on technological progress including international terrorism and cyber attacks are also increasing their significance. Japan, including its Self-Defense Forces, has contributed to the maximum extent possible to the efforts to maintain and restore international peace and security, such as UN peacekeeping operations. Building on the ongoing efforts as a peaceful state, the Government of Japan has been making various efforts on its security policy which include: the establishment of the National Security Council (NSC), the adoption of the National Security Strategy (NSS), and the National Defense Program Guidelines (NDPG). These efforts are made based on the belief that Japan, as a "Proactive Contributor to Peace", needs to contribute more actively to the peace and stability of the region and the international community, while coordinating with other countries including its ally, the United States. Japan has close economic and military relations with the United States; the US-Japan security alliance acts as the cornerstone of the nation's foreign policy. A member state of the United Nations since 1956, Japan has served as a non-permanent Security Council member for a total of 20 years, most recently for 2009 and 2010. It is one of the G4 nations seeking permanent membership in the Security Council. In May 2014, Prime Minister Shinzō Abe said Japan wanted to shed the passiveness it has maintained since the end of World War II and take more responsibility for regional security. He said Japan wanted to play a key role and offered neighboring countries Japan's support. In recent years, they have been engaged in international peacekeeping operations including the UN peacekeeping. Recent tensions, particularly with North Korea, have reignited the debate over the status of the JSDF and its relation to Japanese society. New military guidelines, announced in December 2010, will direct the JSDF away from its Cold War focus on the former Soviet Union to a focus on China, especially regarding the territorial dispute over the Senkaku Islands. 
Japan is the third largest national economy in the world, after the United States and China, in terms of nominal GDP, and the fourth largest national economy in the world, after the United States, China and India, in terms of purchasing power parity. Japan's public debt was estimated at more than 230 percent of its annual gross domestic product, the largest of any nation in the world. In August 2011, Moody's cut Japan's long-term sovereign debt rating one notch, from Aa2 to Aa3, in line with the size of the country's deficit and borrowing level. The downgrade was prompted by the large budget deficits and government debt accumulated since the 2009 global recession, compounded by the earthquake and tsunami of March 2011. The service sector accounts for three quarters of the gross domestic product. Japan has a large industrial capacity, and is home to some of the largest and most technologically advanced producers of motor vehicles, electronics, machine tools, steel and nonferrous metals, ships, chemical substances, textiles, and processed foods. Agricultural businesses in Japan cultivate 13 percent of Japan's land, and Japan accounts for nearly 15 percent of the global fish catch, second only to China. Japan's labor force consisted of some 65.9 million workers. Japan has a low unemployment rate of around four percent. Some 20 million people, around 17 per cent of the population, were below the poverty line in 2007. Housing in Japan is characterized by limited land supply in urban areas. Japan's exports amounted to US$4,210 per capita in 2005. Japan's main export markets were the United States (20.2 percent), China (17.5 percent), South Korea (7.1 percent), Hong Kong (5.6 percent) and Thailand (4.5 percent). Its main exports are transportation equipment, motor vehicles, iron and steel products, semiconductors and auto parts. Japan's main import markets were China (24.8 percent), the United States (10.5 percent), Australia (5.4 percent) and South Korea (4.1 percent). Japan's main imports are machinery and equipment, fossil fuels, foodstuffs (in particular beef), chemicals, textiles and raw materials for its industries. By market share measures, domestic markets are the least open of any OECD country. Junichirō Koizumi's administration began some pro-competition reforms, and foreign investment in Japan has soared. Japan ranks 27th of 189 countries in the 2014 ease of doing business index and has one of the smallest tax revenues of the developed world. The Japanese variant of capitalism has many distinct features: keiretsu enterprises are influential, and lifetime employment and seniority-based career advancement are relatively common in the Japanese work environment. Japanese companies are known for management methods like "The Toyota Way", and shareholder activism is rare. Japan's top global brands include Toyota, Honda, Canon, Nissan, Sony, Mitsubishi UFJ (MUFG), Panasonic, Uniqlo, Lexus, Subaru, Nintendo, Bridgestone, Mazda and Suzuki. Modern Japan's economic growth began in the Edo period. Some of the surviving elements of the Edo period are roads and water transportation routes, as well as financial instruments such as futures contracts, banking and insurance of the Osaka rice brokers. During the Meiji period from 1868, Japan expanded economically with the embrace of the market economy. Many of today's enterprises were founded at the time, and Japan emerged as the most developed nation in Asia. 
The period of overall real economic growth from the 1960s to the 1980s has been called the Japanese post-war economic miracle: it averaged 7.5 percent in the 1960s and 1970s, and 3.2 percent in the 1980s and early 1990s. Growth slowed in the 1990s during the "Lost Decade" due to after-effects of the Japanese asset price bubble and government policies intended to wring speculative excesses from the stock and real estate markets. Efforts to revive economic growth were unsuccessful and further hampered by the global slowdown in 2000. The economy recovered after 2005; GDP growth for that year was 2.8 percent, surpassing the growth rates of the US and European Union during the same period. Today, Japan ranks highly for competitiveness and economic freedom. It is ranked sixth in the Global Competitiveness Report for 2015–2016. The Japanese agricultural sector accounts for about 1.4% of the country's total GDP. Only 12% of Japan's land is suitable for cultivation. Due to this lack of arable land, a system of terraces is used to farm in small areas. This results in one of the world's highest levels of crop yields per unit area, with an overall agricultural self-sufficiency rate of about 50% on the relatively small area of land that is cultivated. Japan's small agricultural sector, however, is also highly subsidized and protected, with government regulations that favor small-scale cultivation instead of large-scale agriculture as practiced in North America. There is growing concern about farming, as current farmers are aging and have difficulty finding successors. Rice accounts for almost all of Japan's cereal production. Japan is the second-largest agricultural product importer in the world. Rice, the most protected crop, is subject to tariffs of 777.7%. In 1996, Japan ranked fourth in the world in tonnage of fish caught. Japan captured 4,074,580 metric tons of fish in 2005, down from 4,987,703 tons in 2000, 9,558,615 tons in 1990, 9,864,422 tons in 1980, 8,520,397 tons in 1970, 5,583,796 tons in 1960 and 2,881,855 tons in 1950. In 2003, the total aquaculture production was estimated at 1,301,437 tonnes. In 2010, Japan's total fisheries production was 4,762,469 tonnes. Offshore fisheries accounted for an average of 50% of the nation's total fish catches in the late 1980s although they experienced repeated ups and downs during that period. Today, Japan maintains one of the world's largest fishing fleets and accounts for nearly 15% of the global catch, prompting some claims that Japan's fishing is leading to depletion in fish stocks such as tuna. Japan has also sparked controversy by supporting quasi-commercial whaling. Japan's industrial sector makes up approximately 27.5% of its GDP. Japan's major industries are motor vehicles, electronics, machine tools, metals, ships, chemicals and processed foods; some major Japanese industrial companies include Toyota, Canon Inc., Toshiba and Nippon Steel. Japan is the third largest automobile producer in the world, and is home to Toyota, the world's largest automobile company. The Japanese consumer electronics industry, once considered the strongest in the world, is currently in a state of decline as competition arises in countries like South Korea, the United States and China. However, despite also facing similar competition from South Korea and China, the Japanese shipbuilding industry is expected to remain strong due to an increased focus on specialized, high-tech designs. Japan's service sector accounts for about three-quarters of its total economic output. 
Banking, insurance, real estate, retailing, transportation, and telecommunications are all major industries, with companies such as Mitsubishi UFJ, Mizuho, NTT, TEPCO, Nomura, Mitsubishi Estate, ÆON, Mitsui Sumitomo, Softbank, JR East, Seven & I, KDDI and Japan Airlines listed as some of the largest in the world. Four of the five most circulated newspapers in the world are Japanese newspapers. Japan Post Holdings, one of the country's largest providers of savings and insurance services, was slated for privatization by 2015. The six major keiretsu are the Mitsubishi, Sumitomo, Fuyo, Mitsui, Dai-Ichi Kangyo and Sanwa Groups. Japan attracted 19.73 million international tourists in 2015, a figure that rose by 21.8% to 24.03 million in 2016. Tourism from abroad is one of the few promising businesses in Japan. The number of foreign visitors to Japan doubled over the last decade and reached 10 million for the first time in 2013, led by an increase in Asian visitors. In 2008, the Japanese government set up the Japan Tourism Agency with an initial goal of increasing foreign visitors to 20 million by 2020. In 2016, having met the 20 million target, the government revised its targets upward to 40 million by 2020 and 60 million by 2030. Japan has 20 World Heritage Sites, including Himeji Castle, Historic Monuments of Ancient Kyoto and Nara. Popular tourist attractions include Tokyo and Hiroshima, Mount Fuji, ski resorts such as Niseko in Hokkaido, Okinawa, riding the shinkansen and taking advantage of Japan's hotel and hotspring network. For inbound tourism, Japan was ranked 16th in the world in 2015. In 2009, the "Yomiuri Shimbun" published a modern list of famous sights under the name "Heisei Hyakkei" (the Hundred Views of the Heisei period). The "Travel and Tourism Competitiveness Report 2017" ranks Japan 4th out of 141 countries overall, the best ranking in Asia. Japan gained relatively high scores in almost all aspects, especially health and hygiene, safety and security, cultural resources and business travel. In 2016, 24,039,053 foreign tourists visited Japan. Neighbouring South Korea is Japan's most important source of foreign tourists. In 2010, the 2.4 million arrivals made up 27% of the tourists visiting Japan. Chinese travelers are the highest spenders in Japan by country, spending an estimated 196.4 billion yen (US$2.4 billion) in 2011, or almost a quarter of total expenditure by foreign visitors, according to data from the Japan Tourism Agency. The Japanese government hopes to receive 40 million foreign tourists every year by 2020. Japan is a leading nation in scientific research, particularly in fields related to the natural sciences and engineering. The country ranks second among the most innovative countries in the Bloomberg Innovation Index. Nearly 700,000 researchers share a US$130 billion research and development budget. The amount spent on research and development relative to gross domestic product is the third highest in the world. The country is a world leader in fundamental scientific research, having produced twenty-two Nobel laureates in either physics, chemistry or medicine and three Fields medalists. Japanese scientists and engineers have contributed to the advancement of agricultural sciences, electronics, industrial robotics, optics, chemicals, semiconductors, life sciences and various fields of engineering. 
Japan leads the world in robotics production and use, possessing more than 20% (300,000 of 1.3 million) of the world's industrial robots, though its share was historically even higher, representing one-half of all industrial robots worldwide in 2000. Japan boasts the third highest number of scientists, technicians, and engineers per capita in the world with 83 scientists, technicians and engineers per 10,000 employees. The Japanese electronics and automotive manufacturing industry is well known throughout the world, and the country's electronic and automotive products account for a large share of the global market. Brands such as Fujifilm, Canon, Sony, Nintendo, Panasonic, Toyota, Nissan and Honda are internationally famous. It is estimated that 16% of the world's gold and 22% of the world's silver is contained in Japanese electronics. Japan has started a project to build the world's fastest supercomputer by the end of 2017. The Japan Aerospace Exploration Agency (JAXA) is Japan's national space agency; it conducts space, planetary, and aviation research, and leads development of rockets and satellites. It is a participant in the International Space Station: the Japanese Experiment Module (Kibo) was added to the station during Space Shuttle assembly flights in 2008. The space probe "Akatsuki" was launched on May 20, 2010, and achieved orbit around Venus on December 9, 2015. Japan's plans in space exploration include developing the "Mercury Magnetospheric Orbiter", to be launched in 2018, and building a moon base by 2030. On September 14, 2007, it launched lunar explorer SELENE (Selenological and Engineering Explorer) on an H-IIA (Model H2A2022) carrier rocket from Tanegashima Space Center. SELENE is also known as Kaguya, after the lunar princess of "The Tale of the Bamboo Cutter". Kaguya is the largest lunar mission since the Apollo program. Its purpose is to gather data on the moon's origin and evolution. It entered lunar orbit on October 4, flying at an altitude of about 100 km. The probe's mission was ended when it was deliberately crashed by JAXA into the Moon on June 11, 2009. Japan has received the most science Nobel Prizes in Asia and ranks 8th in the world. Hideki Yukawa, educated at Kyoto University, was awarded the prize in physics in 1949. Shin'ichirō Tomonaga followed in 1965. Solid-state physicist Leo Esaki, educated at the University of Tokyo, received the prize in 1973. Kenichi Fukui of Kyoto University shared the 1981 prize in chemistry, and Susumu Tonegawa, also educated at Kyoto University, became Japan's first laureate in physiology or medicine in 1987. Japanese chemists took prizes in 2000 and 2001: first Hideki Shirakawa (Tokyo Institute of Technology) and then Ryōji Noyori (Kyoto University). In 2002, Masatoshi Koshiba (University of Tokyo) and Koichi Tanaka (Tohoku University) won in physics and chemistry, respectively. Makoto Kobayashi, Toshihide Masukawa and Yoichiro Nambu, who was an American citizen when awarded, shared the physics prize in 2008, and Osamu Shimomura won the chemistry prize the same year. Isamu Akasaki, Hiroshi Amano and Shuji Nakamura, who was an American citizen when awarded, shared the physics prize in 2014, and the Nobel Prize in Physiology or Medicine was awarded to Yoshinori Ohsumi in 2016. Japan's road spending has been extensive, and its paved roads are the main means of transportation. 
As of April 2012, Japan's road network was made up of city, town and village roads, prefectural roads, general national highways and national expressways. A single network of high-speed, divided, limited-access toll roads, operated by toll-collecting enterprises, connects major cities on Honshu, Shikoku and Kyushu; Hokkaido has a separate network, and Okinawa Island has a highway of this type. New and used cars are inexpensive; car ownership fees and fuel levies are used to promote energy efficiency. However, at just 50 percent of all distance traveled, car usage is the lowest of all G8 countries. Since privatisation in 1987, dozens of Japanese railway companies compete in regional and local passenger transportation markets; major companies include seven JR enterprises, Kintetsu, Seibu Railway and Keio Corporation. Some 250 high-speed Shinkansen trains connect major cities, and Japanese trains are known for their safety and punctuality. Proposals for a new Maglev route between Tokyo and Osaka are at an advanced stage. There are 175 airports in Japan; the largest domestic airport, Haneda Airport, is Asia's second-busiest airport. The largest international gateways are Narita International Airport, Kansai International Airport and Chūbu Centrair International Airport. Nagoya Port is the country's largest and busiest port, accounting for 10 percent of Japan's trade value. 46.1% of energy in Japan was produced from petroleum, 21.3% from coal, 21.4% from natural gas, 4.0% from nuclear power and 3.3% from hydropower. Nuclear power produced 9.2 percent of Japan's electricity, down from 24.9 percent the previous year. However, by May 2012 all of the country's nuclear power plants had been taken offline because of ongoing public opposition following the Fukushima Daiichi nuclear disaster in March 2011, though government officials continued to try to sway public opinion in favor of returning at least some of Japan's 50 nuclear reactors to service. Two reactors at Sendai were expected to restart in early 2015. Japan lacks significant domestic reserves and so has a heavy dependence on imported energy. Japan has therefore aimed to diversify its sources and maintain high levels of energy efficiency. Responsibility for regulating the water and sanitation sector is shared among the Ministry of Health, Labor and Welfare, in charge of water supply for domestic use; the Ministry of Land, Infrastructure, Transport and Tourism, in charge of water resources development as well as sanitation; the Ministry of the Environment, in charge of ambient water quality and environmental preservation; and the Ministry of Internal Affairs and Communications, in charge of performance benchmarking of utilities. Access to an improved water source is universal in Japan. 97% of the population receives piped water supply from public utilities and 3% receive water from their own wells or unregulated small systems, mainly in rural areas. Access to improved sanitation is also universal, either through sewers or on-site sanitation. All collected waste water is treated at secondary-level treatment plants. All effluents discharged to closed or semi-closed water bodies, such as Tokyo Bay, Osaka Bay, or Lake Biwa, are further treated to tertiary level. This applies to about 15% of waste water. 
The effluent quality is remarkably good at 3–10 mg/l of BOD for secondary-level treatment, well below the national effluent standard of 20 mg/l. Water supply and sanitation in Japan is facing some challenges, such as a decreasing population, declining investment, fiscal constraints, ageing facilities, an ageing workforce, a fragmentation of service provision among thousands of municipal utilities, and the vulnerability of parts of the country to droughts that are expected to become more frequent due to climate change. Japan's population is estimated at around 126 million, with 80% of the population living on Honshū. Japanese society is linguistically, ethnically and culturally homogeneous, composed of 98.5% ethnic Japanese, with small populations of foreign workers. Zainichi Koreans, Chinese, Filipinos, Brazilians mostly of Japanese descent, Peruvians mostly of Japanese descent and Americans are among the small minority groups in Japan. In 2003, there were about 134,700 non-Latin American Western expatriates (not including more than 33,000 American military personnel and their dependents stationed throughout the country) and 345,500 Latin American expatriates, 274,700 of whom were Brazilians (said to be primarily Japanese descendants, or "nikkeijin", along with their spouses), the largest community of Westerners. The most dominant native ethnic group is the Yamato people; primary minority groups include the indigenous Ainu and Ryukyuan peoples, as well as social minority groups like the "burakumin". There are persons of mixed ancestry incorporated among the Yamato, such as those from the Ogasawara Archipelago. In 2014, foreign-born non-naturalized workers made up only 1.5% of the total population. Japan is widely regarded as ethnically homogeneous, and does not compile ethnicity or race statistics for Japanese nationals; sources vary on this claim, with at least one analysis describing Japan as a multiethnic society while another puts the number of Japanese nationals of recent foreign descent at a minimal level. Most Japanese continue to see Japan as a monocultural society. Former Japanese Prime Minister and current Finance Minister Tarō Asō described Japan as being a nation of "one race, one civilization, one language and one culture", which drew criticism from representatives of ethnic minorities such as the Ainu. Japan has the second longest overall life expectancy at birth of any country in the world: 83.5 years for persons born in the period 2010–2015. The Japanese population is rapidly aging as a result of a post–World War II baby boom followed by a decrease in birth rates. In 2012, about 24.1 percent of the population was over 65, and the proportion is projected to rise to almost 40 percent by 2050. Japan has full religious freedom based on Article 20 of its Constitution. Upper estimates suggest that 84–96 percent of the Japanese population subscribe to Shinto as its indigenous religion (50% to 80% of which when considering degrees of syncretism with Buddhism, shinbutsu-shūgō). However, these estimates are based on people affiliated with a temple, rather than the number of true believers. The number of Shinto shrines in Japan is estimated to be around 100,000. Other studies have suggested that only 30 percent of the population identify themselves as belonging to a religion. According to Edwin Reischauer and Marius Jansen, some 70–80% of the Japanese do not consider themselves believers in any religion. 
Nevertheless, the level of participation remains high, especially during festivals and occasions such as the first shrine visit of the New Year. Taoism and Confucianism from China have also influenced Japanese beliefs and customs. Japanese streets are decorated on Tanabata, Obon and Christmas. Shinto is the largest religion in Japan, practiced by nearly 80% of the population, yet only a small percentage of these identify themselves as "Shintoists" in surveys. This is due to the fact that "Shinto" has different meanings in Japan: most Japanese attend Shinto shrines and beseech kami without belonging to Shinto organisations, and since there are no formal rituals to become a member of folk Shinto, Shinto membership is often estimated by counting those who join organised Shinto sects. Shinto has 100,000 shrines and 78,890 priests in the country. Buddhism first arrived in Japan in the 6th century; it was introduced in the year 538 or 552 from the kingdom of Baekje in Korea. Christianity was first introduced into Japan by Jesuit missions starting in 1549. Today, estimates of the Christian share of the population range from under 1% to 2.3%, with most Christians living in the western part of the country, where the missionaries' activities were greatest during the 16th century. Nagasaki Prefecture has the highest percentage of Christians: about 5.1% in 1996. There are 32,036 Christian priests and pastors in Japan. Over the last century, some Western customs originally related to Christianity (including Western style weddings, Valentine's Day and Christmas) have become popular as secular customs among many Japanese. Islam in Japan is practised mostly by foreign-born migrants and their children, who are estimated to make up about 80–90% of the country's Muslim population and come primarily from Indonesia, Pakistan, Bangladesh, and Iran. Most ethnic Japanese Muslims are converts who married immigrant Muslims. The Pew Research Center estimated that there were 185,000 Muslims in Japan in 2010. Other minority religions include Hinduism, Sikhism and Judaism, and since the mid-19th century numerous new religious movements have emerged in Japan. More than 99 percent of the population speaks Japanese as their first language. Japanese is an agglutinative language distinguished by a system of honorifics reflecting the hierarchical nature of Japanese society, with verb forms and particular vocabulary indicating the relative status of speaker and listener. Japanese writing uses kanji (Chinese characters) and two sets of kana (syllabaries based on cursive script and radicals of kanji), as well as the Latin alphabet and Arabic numerals. Besides Japanese, the Ryukyuan languages (Amami, Kunigami, Okinawan, Miyako, Yaeyama, Yonaguni), also part of the Japonic language family, are spoken in the Ryukyu Islands chain. Few children learn these languages, but in recent years the local governments have sought to increase awareness of the traditional languages. The Okinawan Japanese dialect is also spoken in the region. The Ainu language, which has no proven relationship to Japanese or any other language, is moribund, with only a few elderly native speakers remaining in Hokkaido. Public and private schools generally require students to take Japanese language classes as well as English language courses. The changes in demographic structure have created a number of social issues, particularly a potential decline in workforce population and increase in the cost of social security benefits such as the public pension plan. A growing number of younger Japanese are not marrying or are remaining childless. 
In 2011, Japan's population dropped for a fifth year, falling by 204,000 people to 126.24 million people. This was the greatest decline since at least 1947, when comparable figures were first compiled. This decline was made worse by the March 2011 earthquake and tsunami, which killed nearly 16,000 people. Japan's population is expected to drop to 95 million by 2050; demographers and government planners are currently in a heated debate over how to cope with this problem. Immigration and birth incentives are sometimes suggested as a solution to provide younger workers to support the nation's ageing population. Japan naturalizes an average of 9,500 new citizens per year. According to the UNHCR, in 2012 Japan accepted just 18 refugees for resettlement, while the United States took in 76,000. Japan suffers from a high suicide rate. In 2009, the number of suicides exceeded 30,000 for the twelfth successive year. Suicide is the leading cause of death for people under 30. Primary schools, secondary schools and universities were introduced in 1872 as a result of the Meiji Restoration. Since 1947, compulsory education in Japan comprises elementary and junior high school, which together last for nine years (from age 6 to age 15). Almost all children continue their education at a three-year senior high school. Japan's education system played a central part in the country's recovery and rapid economic growth in the decades following the end of World War II. After World War II, the Fundamental Law of Education and the School Education Law were enacted. The latter law defined the school system that would be in effect for many decades: six years of elementary school, three years of junior high school, three years of high school, and two or four years of university. Starting in April 2016, various schools began the academic year with elementary school and junior high school integrated into one nine-year compulsory schooling program, in hopes of mitigating bullying and truancy; MEXT plans for this approach to be adopted nationwide in the coming years. In Japan, having a strong educational background greatly improves the likelihood of finding a job and earning enough money to support oneself. Highly educated individuals are less affected by unemployment trends as higher levels of educational attainment make an individual more attractive in the workforce. Lifetime earnings also increase with each level of education attained. Furthermore, skills needed in the modern 21st-century labor market are becoming more knowledge-based, and a strong aptitude in science and mathematics is an increasingly strong predictor of employment prospects in Japan's highly technological economy. Japan is one of the top-performing OECD countries in reading literacy, maths and sciences, with the average student scoring 540, and it has one of the world's most highly educated labor forces among OECD countries. The Japanese populace is well educated and its society highly values education as a platform for social mobility and for gaining employment in the country's competitive high-tech economy. The country's large pool of highly educated and skilled individuals is largely responsible for ushering in Japan's post-war economic growth. Tertiary-educated adults in Japan, particularly graduates in sciences and engineering, benefit economically and socially from their education and skills in the country's high tech economy. Spending on education as a proportion of GDP is below the OECD average. 
Although expenditure per student is comparatively high in Japan, total expenditure relative to GDP remains small. In 2015, Japan's public spending on education amounted to just 3.5 percent of its GDP, below the OECD average of 4.7%. In 2014, the country ranked fourth for the percentage of 25- to 64-year-olds that have attained tertiary education with 48 percent. In addition, bachelor's degrees are held by 59 percent of Japanese aged 25–34, the second most in the OECD after South Korea. As the Japanese economy is largely scientific and technological based, the labor market demands people who have achieved some form of higher education, particularly related to science and engineering in order to gain a competitive edge when searching for employment opportunities. About 75.9 percent of high school graduates attended a university, junior college, trade school, or other higher education institution. The two top-ranking universities in Japan are the University of Tokyo and Kyoto University, which have produced 16 Nobel Prize laureates. The Programme for International Student Assessment coordinated by the OECD currently ranks the overall knowledge and skills of Japanese 15-year-olds as sixth best in the world. In Japan, health care is provided by national and local governments. Payment for personal medical services is offered through a universal health insurance system that provides relative equality of access, with fees set by a government committee. People without insurance through employers can participate in a national health insurance program administered by local governments. Since 1973, all elderly persons have been covered by government-sponsored insurance. Patients are free to select the physicians or facilities of their choice. Japanese culture has evolved greatly from its origins. Contemporary culture combines influences from Asia, Europe and North America. Traditional Japanese arts include crafts such as ceramics, textiles, lacquerware, swords and dolls; performances of bunraku, kabuki, noh, dance, and rakugo; and other practices, the tea ceremony, ikebana, martial arts, calligraphy, origami, onsen, Geisha and games. Japan has a developed system for the protection and promotion of both tangible and intangible Cultural Properties and National Treasures. Nineteen sites have been inscribed on the UNESCO World Heritage List, fifteen of which are of cultural significance. Japanese architecture is a combination between local and other influences. It has traditionally been typified by wooden structures, elevated slightly off the ground, with tiled or thatched roofs. Sliding doors ("fusuma") were used in place of walls, allowing the internal configuration of a space to be customized for different occasions. People usually sat on cushions or otherwise on the floor, traditionally; chairs and high tables were not widely used until the 20th century. Since the 19th century, however, Japan has incorporated much of Western, modern, and post-modern architecture into construction and design, and is today a leader in cutting-edge architectural design and technology. The introduction of Buddhism during the sixth century was a catalyst for large-scale temple building using complicated techniques in wood. Influence from the Chinese Tang and Sui dynasties led to the foundation of the first permanent capital in Nara. Its checkerboard street layout used the Chinese capital of Chang'an as a template for its design. 
A gradual increase in the size of buildings led to standard units of measurement as well as refinements in layout and garden design. The introduction of the tea ceremony emphasised simplicity and modest design as a counterpoint to the excesses of the aristocracy. During the Meiji Restoration of 1868 the history of Japanese architecture was radically changed by two important events. The first was the Kami and Buddhas Separation Act of 1868, which formally separated Buddhism from Shinto and Buddhist temples from Shinto shrines, breaking an association between the two which had lasted well over a thousand years. Second, it was then that Japan underwent a period of intense Westernization in order to compete with other developed countries. Initially architects and styles from abroad were imported to Japan but gradually the country taught its own architects and began to express its own style. Architects returning from study with western architects introduced the International Style of modernism into Japan. However, it was not until after the Second World War that Japanese architects made an impression on the international scene, firstly with the work of architects like Kenzō Tange and then with theoretical movements like Metabolism. The Shrines of Ise have been celebrated as the prototype of Japanese architecture. Largely of wood, traditional housing and many temple buildings see the use of tatami mats and sliding doors that break down the distinction between rooms and indoor and outdoor space. Japanese sculpture, largely of wood, and Japanese painting are among the oldest of the Japanese arts, with early figurative paintings dating back to at least 300 BC. The history of Japanese painting exhibits synthesis and competition between native Japanese aesthetics and adaptation of imported ideas. The interaction between Japanese and European art has been significant: for example ukiyo-e prints, which began to be exported in the 19th century in the movement known as Japonism, had a significant influence on the development of modern art in the West, most notably on post-Impressionism. Famous ukiyo-e artists include Hokusai and Hiroshige. Japanese comics, known as manga, developed in the 20th century and have become popular worldwide. Rakuten Kitazawa was first to use the word "manga" in the modern sense. Japanese-made video game consoles have been popular since the 1980s. Japanese animated films and television series, known as anime for short, were largely influenced by Japanese manga comic books and have been extensively popular in the West. Japan is a world-renowned powerhouse of animation. Famous anime directors include Hayao Miyazaki, Osamu Tezuka and Isao Takahata. Japan has one of the oldest and largest film industries in the world; movies have been produced in Japan since 1897. Three Japanese films ("Rashomon", "Seven Samurai" and "Tokyo Story") made the "Sight & Sound"'s 2002 Critics and Directors Poll for the best films of all time. Ishirō Honda's "Godzilla" became an international icon of Japan and spawned an entire subgenre of "kaiju" films, as well as the longest-running film franchise in history. The most acclaimed Japanese film directors include Akira Kurosawa, Kenji Mizoguchi, Yasujiro Ozu and Shohei Imamura. Japan has won the Academy Award for the Best Foreign Language Film four times, more than any other Asian country. Japanese music is eclectic and diverse. Many instruments, such as the koto, were introduced in the 9th and 10th centuries. 
The accompanied recitative of the Noh drama dates from the 14th century and the popular folk music, with the guitar-like shamisen, from the sixteenth. Western classical music, introduced in the late 19th century, now forms an integral part of Japanese culture. The imperial court ensemble Gagaku has influenced the work of some modern Western composers. Notable classical composers from Japan include Toru Takemitsu and Rentarō Taki. Popular music in post-war Japan has been heavily influenced by American and European trends, which has led to the evolution of J-pop, or Japanese popular music. Karaoke is the most widely practiced cultural activity in Japan. A 1993 survey by the Cultural Affairs Agency found that more Japanese had sung karaoke that year than had participated in traditional pursuits such as flower arranging (ikebana) or tea ceremonies. The earliest works of Japanese literature include the "Kojiki" and "Nihon Shoki" chronicles and the "Man'yōshū" poetry anthology, all from the 8th century and written in Chinese characters. In the early Heian period, the system of phonograms known as "kana" (hiragana and katakana) was developed. "The Tale of the Bamboo Cutter" is considered the oldest Japanese narrative. An account of Heian court life is given in "The Pillow Book" by Sei Shōnagon, while "The Tale of Genji" by Murasaki Shikibu is often described as the world's first novel. During the Edo period, the chōnin ("townspeople") overtook the samurai aristocracy as producers and consumers of literature. The popularity of the works of Saikaku, for example, reveals this change in readership and authorship, while Bashō revivified the poetic tradition of the Kokinshū with his haikai (haiku) and wrote the poetic travelogue "Oku no Hosomichi". The Meiji era saw the decline of traditional literary forms as Japanese literature integrated Western influences. Natsume Sōseki and Mori Ōgai were the first "modern" novelists of Japan, followed by Ryūnosuke Akutagawa, Jun'ichirō Tanizaki, Yukio Mishima and, more recently, Haruki Murakami. Japan has two Nobel Prize-winning authors—Yasunari Kawabata (1968) and Kenzaburō Ōe (1994). Japanese philosophy has historically been a fusion of foreign elements, particularly Chinese and Western, and uniquely Japanese ones. In its literary forms, Japanese philosophy began about fourteen centuries ago. Archaeological evidence and early historical accounts suggest that Japan was originally an animistic culture, which viewed the world as infused with "kami" (神) or sacred presence as taught by Shinto, which is not a philosophy as such but has greatly influenced all other philosophies in their Japanese interpretations. Confucianism entered Japan from China around the 5th century A.D., as did Buddhism. Confucian ideals are still evident today in the Japanese concept of society and the self, and in the organization of the government and the structure of society. Buddhism has profoundly impacted Japanese psychology, metaphysics, and aesthetics. Indigenous ideas of loyalty and honour have been held since the 16th century. Western philosophy has had its major impact in Japan only since the middle of the 19th century. Japanese cuisine is based on combining staple foods, typically Japanese rice or noodles, with a soup and "okazu"—dishes made from fish, vegetables, tofu and the like—to add flavor to the staple food. In the early modern era, ingredients such as red meats that had previously not been widely used in Japan were introduced. 
Japanese cuisine is known for its emphasis on seasonality of food, quality of ingredients and presentation. Japanese cuisine offers a vast array of regional specialties that use traditional recipes and local ingredients. The phrase "ichijū-sansai" ("one soup, three sides") refers to the makeup of a typical meal served, but has roots in classic "kaiseki", "honzen", and "yūsoku" cuisine. The term is also used to describe the first course served in standard "kaiseki" cuisine nowadays. Traditional Japanese sweets are known as "wagashi". Ingredients such as red bean paste and mochi are used. More modern-day tastes include green tea ice cream, a very popular flavor. Almost all manufacturers produce a version of it. Kakigori is a shaved ice dessert flavored with syrup or condensed milk. It is usually sold and eaten at summer festivals. Popular Japanese beverages include sake, a brewed rice beverage that typically contains 15–17% alcohol and is made by multiple fermentation of rice. Beer has been brewed in Japan since the late 1800s and is produced in many regions by companies including Asahi Breweries, Kirin Brewery, and Sapporo Brewery – the oldest brand of beer in Japan. Officially, Japan has 16 national, government-recognized holidays. Public holidays in Japan are regulated by the Public Holiday Law (国民の祝日に関する法律 "Kokumin no Shukujitsu ni Kansuru Hōritsu") of 1948. Beginning in 2000, Japan implemented the Happy Monday System, which moved a number of national holidays to Monday in order to obtain a long weekend. In 2006, the country decided to add Shōwa Day, a new national holiday, in place of Greenery Day on April 29, and to move Greenery Day to May 4. These changes took effect in 2007. In 2014, the House of Councillors decided to add Mountain Day to the Japanese calendar on August 11, after lobbying by the Japanese Alpine Club. It is intended to coincide with the Bon Festival vacation time, giving Japanese people an opportunity to appreciate Japan's mountains. The national holidays in Japan are New Year's Day on January 1, Coming of Age Day on Second Monday of January, National Foundation Day on February 11, Vernal Equinox Day on March 20 or 21, Shōwa Day on April 29, Constitution Memorial Day on May 3, Greenery Day on May 4, Children's Day on May 5, Marine Day on Third Monday of July, Mountain Day on August 11, Respect for the Aged Day on Third Monday of September, Autumnal Equinox on September 23 or 24, Health and Sports Day on Second Monday of October, Culture Day on November 3, Labour Thanksgiving Day on November 23, and The Emperor's Birthday on December 23. There are many festivals in Japan, called "matsuri" (祭), which are celebrated annually. There are no specific festival days for all of Japan; dates vary from area to area, and even within a specific area, but festival days do tend to cluster around traditional holidays such as Setsubun or Obon. Festivals are often based around one event, with food stalls, entertainment, and carnival games to keep people entertained. They are usually sponsored by a local shrine or temple, though they can be secular. Notable festivals often feature processions, which may include elaborate floats. Preparation for these processions is usually organised at the level of neighborhoods, or "machi" (町). Prior to these, the local kami may be ritually installed in mikoshi and paraded through the streets, as at Gion in Kyoto and Hadaka in Okayama. Traditionally, sumo is considered Japan's national sport. 
Japanese martial arts such as judo, karate and kendo are also widely practiced and enjoyed by spectators in the country. After the Meiji Restoration, many Western sports were introduced in Japan and began to spread through the education system. Japan hosted the Summer Olympics in Tokyo in 1964 and the Winter Olympics in Sapporo in 1972 and Nagano in 1998. Further, the country hosted the official 2006 Basketball World Championship. Tokyo will host the 2020 Summer Olympics, making Tokyo the first Asian city to host the Olympics twice. The country gained the hosting rights for the official Women's Volleyball World Championship on five occasions (1967, 1998, 2006, 2010, 2018), more than any other nation. Japan is the most successful Asian Rugby Union country, winning the Asian Five Nations a record 6 times and winning the newly formed IRB Pacific Nations Cup in 2011. Japan will host the 2019 IRB Rugby World Cup. Baseball is currently the most popular spectator sport in the country. Japan's top professional league, now known as Nippon Professional Baseball, was established in 1936 and is widely considered to be the highest level of professional baseball in the world outside of the North American Major Leagues. Since the establishment of the Japan Professional Football League in 1992, association football has also gained a wide following. Japan was a venue of the Intercontinental Cup from 1981 to 2004 and co-hosted the 2002 FIFA World Cup with South Korea. Japan has one of the most successful football teams in Asia, winning the Asian Cup four times. Japan also won the FIFA Women's World Cup in 2011. Golf is also popular in Japan, as are forms of auto racing like the Super GT series and Formula Nippon. The country has produced one NBA player, Yuta Tabuse. Television and newspapers play an important role in Japanese mass media, though radio and magazines also play a part. For a long time, newspapers were regarded as the most influential information medium in Japan, although audience attitudes towards television changed with the emergence of commercial news broadcasting in the mid-1980s. Over the last decade, television has clearly come to surpass newspapers as Japan's main information and entertainment medium. There are 6 nationwide television networks: NHK (public broadcasting), Nippon Television (NTV), Tokyo Broadcasting System (TBS), Fuji Network System (FNS), TV Asahi (EX) and TV Tokyo Network (TXN). For the most part, television networks were established based on capital investments by existing radio networks. Variety shows, serial dramas, and news constitute a large percentage of Japanese television shows. According to the 2015 NHK survey on television viewing in Japan, 79 percent of Japanese watch television every day. The average daily duration of television viewing was three hours. Japanese readers have a choice of approximately 120 daily newspapers with a total circulation of 50 million copies and an average subscription rate of 1.13 newspapers per household. The main newspaper publishers are Yomiuri Shimbun, Asahi Shimbun, Mainichi Shimbun, Nikkei Shimbun and Sankei Shimbun. According to a survey conducted by the Japanese Newspaper Association in June 1999, 85.4 per cent of men and 75 per cent of women read a newspaper every day. Average daily reading times vary with 27.7 minutes on weekdays and 31.7 minutes on holidays and Sunday. James Joyce James Augustine Aloysius Joyce (2 February 1882 – 13 January 1941) was an Irish novelist, short story writer, and poet. 
He contributed to the modernist avant-garde and is regarded as one of the most influential and important authors of the 20th century. Joyce is best known for "Ulysses" (1922), a landmark work in which the episodes of Homer's "Odyssey" are paralleled in a variety of literary styles, most famously stream of consciousness. Other well-known works are the short-story collection "Dubliners" (1914), and the novels "A Portrait of the Artist as a Young Man" (1916) and "Finnegans Wake" (1939). His other writings include three books of poetry, a play, his published letters and occasional journalism. Joyce was born at 41 Brighton Square, Rathgar, Dublin, into a middle-class family. A brilliant student, he briefly attended the Christian Brothers-run O'Connell School before excelling at the Jesuit schools Clongowes and Belvedere, despite the chaotic family life imposed by his father's alcoholism and unpredictable finances. He went on to attend University College Dublin. In 1904, in his early twenties, Joyce emigrated to continental Europe with his partner (and later wife) Nora Barnacle. They lived in Trieste, Paris, and Zurich. Although most of his adult life was spent abroad, Joyce's fictional universe centres on Dublin, and is populated largely by characters who closely resemble family members, enemies and friends from his time there. "Ulysses" in particular is set with precision in the streets and alleyways of the city. Shortly after the publication of "Ulysses", he elucidated this preoccupation somewhat, saying, "For myself, I always write about Dublin, because if I can get to the heart of Dublin I can get to the heart of all the cities of the world. In the particular is contained the universal." On 2 February 1882, Joyce was born in Rathgar, Dublin, Ireland. Joyce's father was John Stanislaus Joyce and his mother was Mary Jane "May" Murray. He was the eldest of ten surviving siblings; two died of typhoid. James was baptised according to the Rites of the Catholic Church in the nearby St Joseph's Church in Terenure on 5 February 1882 by Rev. John O'Mulloy. Joyce's godparents were Philip and Ellen McCann. John Stanislaus Joyce's family came from Fermoy in County Cork, and had owned a small salt and lime works. Joyce's paternal grandfather, James Augustine Joyce, married Ellen O'Connell, daughter of John O'Connell, a Cork Alderman who owned a drapery business and other properties in Cork City. Ellen's family claimed kinship with Daniel O'Connell, "The Liberator". The Joyce family's purported ancestor, Seán Mór Seoighe (fl. 1680), was a stonemason from Connemara. In 1887, his father was appointed rate collector by Dublin Corporation; the family subsequently moved from Dublin to the fashionable nearby town of Bray. Around this time Joyce was attacked by a dog, leading to his lifelong cynophobia. He suffered from astraphobia; a superstitious aunt had described thunderstorms as a sign of God's wrath. In 1891 Joyce wrote a poem on the death of Charles Stewart Parnell. His father was angry at the treatment of Parnell by the Catholic Church, the Irish Home Rule Party and the British Liberal Party and the resulting collaborative failure to secure Home Rule for Ireland. The Irish Party had dropped Parnell from leadership. But the Vatican's role in allying with the British Conservative Party to prevent Home Rule left a lasting impression on the young Joyce. The elder Joyce had the poem printed and even sent a part to the Vatican Library. 
In November, John Joyce was entered in "Stubbs' Gazette" (a publisher of bankruptcies) and suspended from work. In 1893, John Joyce was dismissed with a pension, beginning the family's slide into poverty caused mainly by his drinking and financial mismanagement. Joyce had begun his education at Clongowes Wood College, a Jesuit boarding school near Clane, County Kildare, in 1888 but had to leave in 1892 when his father could no longer pay the fees. Joyce then studied at home and briefly at the Christian Brothers O'Connell School on North Richmond Street, Dublin, before he was offered a place in the Jesuits' Dublin school, Belvedere College, in 1893. This came about because of a chance meeting his father had with a Jesuit priest who knew the family and Joyce was given a reduction in fees to attend Belvedere. In 1895, Joyce, now aged 13, was elected to join the Sodality of Our Lady by his peers at Belvedere. The philosophy of Thomas Aquinas continued to have a strong influence on him for most of his life. Joyce enrolled at the recently established University College Dublin (UCD) in 1898, studying English, French and Italian. He became active in theatrical and literary circles in the city. In 1900 his laudatory review of Henrik Ibsen's "When We Dead Awaken" was published in "The Fortnightly Review"; it was his first publication and, after learning basic Norwegian to send a fan letter to Ibsen, he received a letter of thanks from the dramatist. Joyce wrote a number of other articles and at least two plays (since lost) during this period. Many of the friends he made at University College Dublin appeared as characters in Joyce's works. His closest colleagues included leading figures of the generation, most notably, Tom Kettle, Francis Sheehy-Skeffington and Oliver St. John Gogarty. Joyce was first introduced to the Irish public by Arthur Griffith in his newspaper, "United Irishman", in November 1901. Joyce had written an article on the Irish Literary Theatre and his college magazine refused to print it. Joyce had it printed and distributed locally. Griffith himself wrote a piece decrying the censorship of the student James Joyce. In 1901, the National Census of Ireland lists James Joyce (19) as an English- and Irish-speaking scholar living with his mother and father, six sisters and three brothers at Royal Terrace (now Inverness Road), Clontarf, Dublin. After graduating from UCD in 1902, Joyce left for Paris to study medicine, but he soon abandoned this. Richard Ellmann suggests that this may have been because he found the technical lectures in French too difficult. Joyce had already failed to pass chemistry in English in Dublin. But Joyce claimed ill health as the problem and wrote home that he was unwell and complained about the cold weather. He stayed on for a few months, appealing for finance his family could ill afford and reading late in the Bibliothèque Sainte-Geneviève. When his mother was diagnosed with cancer, his father sent a telegram which read, "NOTHER DYING COME HOME FATHER". Joyce returned to Ireland. Fearing for her son's impiety, his mother tried unsuccessfully to get Joyce to make his confession and to take communion. She finally passed into a coma and died on 13 August, James and his brother Stanislaus having refused to kneel with other members of the family praying at her bedside. After her death he continued to drink heavily, and conditions at home grew quite appalling. 
He scraped together a living reviewing books, teaching, and singing—he was an accomplished tenor, and won the bronze medal in the 1904 Feis Ceoil. On 7 January 1904 Joyce attempted to publish "A Portrait of the Artist", an essay-story dealing with aesthetics, only to have it rejected by the free-thinking magazine "Dana". He decided, on his twenty-second birthday, to revise the story into a novel he called "Stephen Hero". It was a fictional rendering of Joyce's youth, but he eventually grew frustrated with its direction and abandoned this work. It was never published in this form, but years later, in Trieste, Joyce completely rewrote it as "A Portrait of the Artist as a Young Man". The unfinished "Stephen Hero" was published after his death. Also in 1904 he met Nora Barnacle, a young woman from Galway city who was working as a chambermaid. On 16 June 1904 they had their first outing together, walking to the Dublin suburb of Ringsend, where Nora masturbated him. This event was commemorated by providing the date for the action of "Ulysses" (as "Bloomsday"). Joyce remained in Dublin for some time longer, drinking heavily. After one of his drinking binges, he got into a fight over a misunderstanding with a man in St Stephen's Green; he was picked up and dusted off by a minor acquaintance of his father, Alfred H. Hunter, who took him into his home to tend to his injuries. Hunter was rumoured to be a Jew and to have an unfaithful wife, and would serve as one of the models for Leopold Bloom, the protagonist of "Ulysses". He took up with the medical student Oliver St. John Gogarty, who informed the character of Buck Mulligan in "Ulysses". After six nights in the Martello Tower that Gogarty was renting in Sandycove, he left in the middle of the night following an altercation which involved another student he lived with, the unstable Dermot Chenevix Trench (Haines in "Ulysses"), who fired a pistol at some pans hanging directly over Joyce's bed. Joyce walked back to Dublin to stay with relatives for the night, and sent a friend to the tower the next day to pack his trunk. Shortly after, the couple left Ireland to live on the continent. Joyce and Nora went into self-imposed exile, moving first to Zurich in Switzerland, where he had ostensibly arranged to teach English at the Berlitz Language School through an agent in England. It later transpired that the agent had been swindled; the director of the school sent Joyce on to Trieste, which was then part of Austria-Hungary (until the First World War), and is today part of Italy. Once again, he found there was no position for him, but with the help of Almidano Artifoni, director of the Trieste Berlitz School, he finally secured a teaching position in Pola, then also part of Austria-Hungary (today part of Croatia). He stayed there, teaching English mainly to Austro-Hungarian naval officers stationed at the Pola base, from October 1904 until March 1905, when the Austrians—having discovered an espionage ring in the city—expelled all aliens. With Artifoni's help, he moved back to Trieste and began teaching English there. He remained in Trieste for most of the next ten years. Later that year Nora gave birth to their first child, George (known as Giorgio). Joyce persuaded his brother, Stanislaus, to join him in Trieste, and secured a teaching position for him at the school. Joyce sought to augment his family's meagre income with his brother's earnings.
Stanislaus and Joyce had strained relations while they lived together in Trieste, arguing about Joyce's drinking habits and frivolity with money. Joyce became frustrated with life in Trieste and moved to Rome in late 1906, taking employment as a clerk in a bank. He disliked Rome and returned to Trieste in early 1907. His daughter Lucia was born later that year. Joyce returned to Dublin in mid-1909 with George, to visit his father and work on getting "Dubliners" published. He visited Nora's family in Galway and liked Nora's mother very much. While preparing to return to Trieste he decided to take one of his sisters, Eva, back with him to help Nora run the home. He spent a month in Trieste before returning to Dublin, this time as a representative of some cinema owners and businessmen from Trieste. With their backing he launched Ireland's first cinema, the Volta Cinematograph, which was well-received, but fell apart after Joyce left. He returned to Trieste in January 1910 with another sister, Eileen, in tow. Eva became homesick for Dublin and returned there a few years later, but Eileen spent the rest of her life on the continent, eventually marrying the Czech bank cashier Frantisek Schaurek. Joyce returned to Dublin again briefly in mid-1912 during his years-long fight with Dublin publisher George Roberts over the publication of "Dubliners". His trip was once again fruitless, and on his return he wrote the poem "Gas from a Burner", an invective against Roberts. After this trip, he never again came closer to Dublin than London, despite many pleas from his father and invitations from his fellow Irish writer William Butler Yeats. One of his students in Trieste was Ettore Schmitz, better known by the pseudonym Italo Svevo. They met in 1907 and became lasting friends and mutual critics. Schmitz was a Catholic of Jewish origin and became a primary model for Leopold Bloom; most of the details about the Jewish faith in "Ulysses" came from Schmitz's responses to queries from Joyce. While living in Trieste, Joyce was first beset with eye problems that ultimately required over a dozen surgical operations. Joyce concocted a number of money-making schemes during this period, including an attempt to become a cinema magnate in Dublin. He frequently discussed but ultimately abandoned a plan to import Irish tweed to Trieste. Correspondence relating to that venture with the Irish Woollen Mills was for a long time displayed in the windows of their premises in Dublin. Joyce's skill at borrowing money saved him from indigence. What income he had came partially from his position at the Berlitz school and partially from teaching private students. In 1915, after most of his students in Trieste were conscripted to fight in the First World War, Joyce moved to Zurich. Two influential private students, Baron Ambrogio Ralli and Count Francesco Sordina, petitioned officials for an exit permit for the Joyces, who in turn agreed not to take any action against the emperor of Austria-Hungary during the war. After the war, Joyce moved to Paris, where he set himself to finishing "Ulysses", delighted to find that he was gradually gaining fame as an avant-garde writer. A further grant from Harriet Shaw Weaver meant he could devote himself full-time to writing again, as well as consort with other literary figures in the city. During this time, Joyce's eyes began to give him more and more problems and he often wore an eyepatch. He was treated by Louis Borsch in Paris, undergoing nine operations before Borsch's death in 1929.
Throughout the 1930s he travelled frequently to Switzerland for eye surgeries and for treatments for his daughter Lucia, who, according to the Joyces, suffered from schizophrenia. Lucia was analysed by Carl Jung at the time, who after reading "Ulysses" is said to have concluded that her father had schizophrenia. Jung said that she and her father were two people heading to the bottom of a river, except that Joyce was diving and Lucia was sinking. In Paris, Maria and Eugene Jolas nursed Joyce during his long years of writing "Finnegans Wake". Were it not for their support (along with Harriet Shaw Weaver's constant financial support), there is a good possibility that his books might never have been finished or published. In their literary magazine "transition", the Jolases published serially various sections of "Finnegans Wake" under the title "Work in Progress". Joyce returned to Zurich in late 1940, fleeing the Nazi occupation of France. The issue of Joyce's relationship with religion is somewhat controversial. Early in life, he lapsed from Catholicism, according to first-hand testimonies coming from himself, his brother Stanislaus Joyce, and his wife: My mind rejects the whole present social order and Christianity—home, the recognised virtues, classes of life and religious doctrines. [...] Six years ago I left the Catholic church, hating it most fervently. I found it impossible for me to remain in it on account of the impulses of my nature. I made secret war upon it when I was a student and declined to accept the positions it offered me. By doing this I made myself a beggar but I retained my pride. Now I make open war upon it by what I write and say and do. When the arrangements for Joyce's burial were being made, a Catholic priest offered a religious service, which Joyce's wife Nora declined, saying: "I couldn't do that to him." Leonard Strong, William T. Noon, Robert Boyle and others have argued that Joyce, later in life, reconciled with the faith he rejected earlier in life and that his parting with the faith was succeeded by a not so obvious reunion, and that "Ulysses" and "Finnegans Wake" are essentially Catholic expressions. Likewise, Hugh Kenner and T. S. Eliot believed they saw between the lines of Joyce's work the outlook of a serious Christian and that beneath the veneer of the work lies a remnant of Catholic belief and attitude. Kevin Sullivan maintains that, rather than reconciling with the faith, Joyce never left it. Critics holding this view insist that Stephen, the protagonist of the semi-autobiographical "A Portrait of the Artist as a Young Man" as well as "Ulysses", is not Joyce. Somewhat cryptically, in an interview after completing Ulysses, in response to the question "When did you leave the Catholic Church", Joyce answered, "That's for the Church to say." Eamonn Hughes maintains that Joyce takes a dialectic approach, both affirming and denying, saying that Stephen's much noted "non-serviam" is qualified—"I will not serve "that which I no longer believe"...", and that the "non-serviam" will always be balanced by Stephen's "I am a servant..." and Molly's "yes". He attended Catholic Mass and Orthodox Sacred Liturgy, especially during Holy Week, purportedly for aesthetic reasons. His sisters noted his Holy Week attendance and that he did not seek to dissuade them. One friend witnessed him cry "secret tears" upon hearing Jesus' words on the cross and another accused him of being a "believer at heart" because of his frequent attendance at church. 
Umberto Eco compares Joyce to the ancient "episcopi vagantes" (wandering bishops) in the Middle Ages. They left a discipline, not a cultural heritage or a way of thinking. Like them, the writer retains the sense of blasphemy held as a liturgical ritual. Some critics and biographers have opined along the lines of Andrew Gibson: "The modern James Joyce may have vigorously resisted the oppressive power of Catholic tradition. But there was another Joyce who asserted his allegiance to that tradition, and never left it, or wanted to leave it, behind him." Gibson argues that Joyce "remained a Catholic intellectual if not a believer" since his thinking remained influenced by his cultural background, even though he lived apart from that culture. His relationship with religion was complex and not easily understood, even perhaps by himself. He acknowledged the debt he owed to his early Jesuit training. Joyce told the sculptor August Suter that, from his Jesuit education, he had 'learnt to arrange things in such a way that they become easy to survey and to judge.' On 11 January 1941, Joyce underwent surgery in Zurich for a perforated duodenal ulcer. He fell into a coma the following day. He awoke at 2 a.m. on 13 January 1941, and asked a nurse to call his wife and son, before losing consciousness again. They were en route when he died 15 minutes later, less than a month short of his 59th birthday. His body was interred in the Fluntern Cemetery, Zurich. The Swiss tenor Max Meili sang "Addio terra, addio cielo" from Monteverdi's "L'Orfeo" at the burial service. Although two senior Irish diplomats were in Switzerland at the time, neither attended Joyce's funeral, and the Irish government later declined Nora's offer to permit the repatriation of Joyce's remains. When Joseph Walshe, secretary at the Department of External Affairs in Dublin, was informed of Joyce's death by Frank Cremins, chargé d'affaires at Berne, Walshe responded "Please wire details of Joyce's death. If possible find out did he die a Catholic? Express sympathy with Mrs Joyce and explain inability to attend funeral". Buried originally in an ordinary grave, Joyce was moved in 1966 to a more prominent "honour grave," with a seated portrait statue by American artist Milton Hebald nearby. Nora, whom he had married in 1931, survived him by 10 years. She is buried by his side, as is their son Giorgio, who died in 1976. "Dubliners" is a collection of fifteen short stories by Joyce, first published in 1914. They form a naturalistic depiction of Irish middle-class life in and around Dublin in the early years of the 20th century. The stories were written when Irish nationalism was at its peak and a search for a national identity and purpose was raging; at a crossroads of history and culture, Ireland was jolted by converging ideas and influences. The stories centre on Joyce's idea of an epiphany: a moment when a character experiences a life-changing self-understanding or illumination. Many of the characters in "Dubliners" later appear in minor roles in Joyce's novel "Ulysses". The initial stories in the collection are narrated by child protagonists. Subsequent stories deal with the lives and concerns of progressively older people. This aligns with Joyce's tripartite division of the collection into childhood, adolescence and maturity. "A Portrait of the Artist as a Young Man" is a nearly complete rewrite of the abandoned novel "Stephen Hero".
Joyce attempted to burn the original manuscript in a fit of rage during an argument with Nora, though to his subsequent relief it was rescued by his sister. A "Künstlerroman", "Portrait" is a heavily autobiographical coming-of-age novel depicting the childhood and adolescence of the protagonist Stephen Dedalus and his gradual growth into artistic self-consciousness. Some hints of the techniques Joyce frequently employed in later works, such as stream of consciousness, interior monologue, and references to a character's psychic reality rather than to his external surroundings, are evident throughout this novel. Despite early interest in the theatre, Joyce published only one play, "Exiles", begun shortly after the outbreak of the First World War in 1914 and published in 1918. A study of a husband-and-wife relationship, the play looks back to "The Dead" (the final story in "Dubliners") and forward to "Ulysses", which Joyce began around the time of the play's composition. Joyce published a number of books of poetry. His first mature published work was the satirical broadside "The Holy Office" (1904), in which he proclaimed himself to be the superior of many prominent members of the Celtic Revival. His first full-length poetry collection, "Chamber Music" (1907; referring, Joyce joked, to the sound of urine hitting the side of a chamber pot), consisted of 36 short lyrics. This publication led to his inclusion in the "Imagist Anthology", edited by Ezra Pound, who was a champion of Joyce's work. Other poetry Joyce published in his lifetime includes "Gas From A Burner" (1912), "Pomes Penyeach" (1927), and "Ecce Puer" (written in 1932 to mark the birth of his grandson and the recent death of his father). It was published by the Black Sun Press in "Collected Poems" (1936). As he was completing work on "Dubliners" in 1906, Joyce considered adding another story featuring a Jewish advertising canvasser called Leopold Bloom under the title "Ulysses". Although he did not pursue the idea further at the time, he eventually commenced work on a novel using both the title and basic premise in 1914. The writing was completed in October 1921. Three more months were devoted to working on the proofs of the book before Joyce halted work shortly before his self-imposed deadline, his 40th birthday (2 February 1922). Thanks to Ezra Pound, serial publication of the novel in the magazine "The Little Review" began in March 1918. This magazine was edited by Margaret C. Anderson and Jane Heap, with the intermittent financial backing of John Quinn, a successful New York commercial lawyer with an interest in contemporary experimental art and literature. This publication encountered problems with the New York Postal Authorities; serialisation ground to a halt in December 1920; the editors were convicted of publishing obscenity in February 1921. Although the conviction was based on the "Nausicaä" episode of "Ulysses", "The Little Review" had fuelled the fires of controversy with dada poet Elsa von Freytag-Loringhoven's defence of "Ulysses" in an essay "The Modest Woman." Joyce's novel was not published in the United States until 1934. Partly because of this controversy, Joyce found it difficult to get a publisher to accept the book, but it was published in 1922 by Sylvia Beach from her well-known Rive Gauche bookshop, "Shakespeare and Company".
An English edition published the same year by Joyce's patron, Harriet Shaw Weaver, ran into further difficulties with the United States authorities, and 500 copies that were shipped to the States were seized and possibly destroyed. The following year, John Rodker produced a print run of 500 more intended to replace the missing copies, but these were burned by English customs at Folkestone. A further consequence of the novel's ambiguous legal status as a banned book was that a number of "bootleg" versions appeared, most notably a number of pirate versions from the publisher Samuel Roth. In 1928, a court injunction against Roth was obtained and he ceased publication. With the appearance of both "Ulysses" and T. S. Eliot's poem, "The Waste Land", 1922 was a key year in the history of English-language literary modernism. In "Ulysses", Joyce employs stream of consciousness, parody, jokes, and virtually every other literary technique to present his characters. The action of the novel, which takes place in a single day, 16 June 1904, sets the characters and incidents of the Odyssey of Homer in modern Dublin and represents Odysseus (Ulysses), Penelope and Telemachus in the characters of Leopold Bloom, his wife Molly Bloom and Stephen Dedalus, parodically contrasted with their lofty models. The book explores various areas of Dublin life, dwelling on its squalor and monotony. Nevertheless, the book is also an affectionately detailed study of the city, and Joyce claimed that if Dublin were to be destroyed in some catastrophe it could be rebuilt, brick by brick, using his work as a model. In order to achieve this level of accuracy, Joyce used the 1904 edition of Thom's Directory—a work that listed the owners and/or tenants of every residential and commercial property in the city. He also bombarded friends still living there with requests for information and clarification. The book consists of 18 chapters, each covering roughly one hour of the day, beginning at about 8 a.m. and ending sometime after 2 a.m. the following morning. Each of the 18 chapters of the novel employs its own literary style. Each chapter also refers to a specific episode in Homer's Odyssey and has a specific colour, art or science and bodily organ associated with it. This combination of kaleidoscopic writing with an extremely formal, schematic structure represents one of the book's major contributions to the development of 20th-century modernist literature. The use of classical mythology as a framework for his book and the near-obsessive focus on external detail in a book in which much of the significant action is happening inside the minds of the characters are others. Nevertheless, Joyce complained, "I may have oversystematised "Ulysses"," and played down the mythic correspondences by eliminating the chapter titles that had been taken from Homer. Joyce was reluctant to publish the chapter titles because he wanted his work to stand separately from the Greek form. The schema became public only when Joyce supplied it to Stuart Gilbert for his critical work on "Ulysses", published in 1930. But as Terrence Killeen points out, this schema was developed after the novel had been written and was not something that Joyce consulted as he wrote the novel. Having completed work on "Ulysses", Joyce was so exhausted that he did not write a line of prose for a year. On 10 March 1923 he informed a patron, Harriet Weaver: "Yesterday I wrote two pages—the first I have since the final "Yes" of "Ulysses".
Having found a pen, with some difficulty I copied them out in a large handwriting on a double sheet of foolscap so that I could read them. "Il lupo perde il pelo ma non il vizio", the Italians say. 'The wolf may lose his skin but not his vice' or 'the leopard cannot change his spots.'" Thus was born a text that became known, first, as "Work in Progress" and later "Finnegans Wake". By 1926 Joyce had completed the first two parts of the book. In that year, he met Eugene and Maria Jolas, who offered to serialise the book in their magazine "transition". For the next few years, Joyce worked rapidly on the new book, but in the 1930s, progress slowed considerably. This was due to a number of factors, including the death of his father in 1931, concern over the mental health of his daughter Lucia, and his own health problems, including failing eyesight. Much of the work was done with the assistance of younger admirers, including Samuel Beckett. For some years, Joyce nursed the eccentric plan of turning over the book to his friend James Stephens to complete, on the grounds that Stephens was born in the same hospital as Joyce exactly one week later, and shared the first name of both Joyce and Joyce's fictional alter-ego, an example of Joyce's superstitions. Reaction to the work was mixed, including negative comment from early supporters of Joyce's work, such as Pound and the author's brother, Stanislaus Joyce. To counteract this hostile reception, a book of essays by supporters of the new work, including Beckett, William Carlos Williams and others, was organised and published in 1929 under the title "Our Exagmination Round His Factification for Incamination of Work in Progress". At his 57th birthday party at the Jolases' home, Joyce revealed the final title of the work and "Finnegans Wake" was published in book form on 4 May 1939. Later, further negative comments surfaced from doctor and author Hervey Cleckley, who questioned the significance others had placed on the work. In his book, "The Mask of Sanity", Cleckley refers to "Finnegans Wake" as "a 628-page collection of erudite gibberish indistinguishable to most people from the familiar word salad produced by hebephrenic patients on the back wards of any state hospital." Joyce's method of stream of consciousness, literary allusions and free dream associations was pushed to the limit in "Finnegans Wake", which abandoned all conventions of plot and character construction and is written in a peculiar and obscure English, based mainly on complex multi-level puns. This approach is similar to, but far more extensive than, that used by Lewis Carroll in "Jabberwocky". This has led many readers and critics to apply Joyce's oft-quoted description in the "Wake" of "Ulysses" as his "usylessly unreadable Blue Book of Eccles" to the "Wake" itself. However, readers have been able to reach a consensus about the central cast of characters and general plot. Much of the wordplay in the book stems from the use of multilingual puns which draw on a wide range of languages. The role played by Beckett and other assistants included collating words from these languages on cards for Joyce to use and, as Joyce's eyesight worsened, writing the text from the author's dictation. The view of history propounded in this text is very strongly influenced by Giambattista Vico, and the metaphysics of Giordano Bruno of Nola are important to the interplay of the "characters."
Vico propounded a cyclical view of history, in which civilisation rose from chaos, passed through theocratic, aristocratic, and democratic phases, and then lapsed back into chaos. The most obvious example of the influence of Vico's cyclical theory of history is to be found in the opening and closing words of the book. "Finnegans Wake" opens with the words "riverrun, past Eve and Adam's, from swerve of shore to bend of bay, brings us by a commodius vicus of recirculation back to Howth Castle and Environs." ("vicus" is a pun on Vico) and ends "A way a lone a last a loved a long the." In other words, the book ends with the beginning of a sentence and begins with the end of the same sentence, turning the book into one great cycle. Indeed, Joyce said that the ideal reader of the "Wake" would suffer from "ideal insomnia" and, on completing the book, would turn to page one and start again, and so on in an endless cycle of reading. Joyce's work has been an important influence on writers and scholars such as Samuel Beckett, Seán Ó Ríordáin, Jorge Luis Borges, Flann O'Brien, Salman Rushdie, Robert Anton Wilson, John Updike, David Lodge and Joseph Campbell. "Ulysses" has been called "a demonstration and summation of the entire [Modernist] movement". The Bulgarian-French literary theorist Julia Kristeva characterised Joyce's novel writing as "polyphonic" and a hallmark of postmodernity alongside the poets Mallarmé and Rimbaud. Some scholars, notably Vladimir Nabokov, have reservations, often championing some of his fiction while condemning other works. In Nabokov's opinion, "Ulysses" was brilliant, while "Finnegans Wake" was horrible. Joyce's influence is also evident in fields other than literature. The sentence "Three quarks for Muster Mark!" in Joyce's "Finnegans Wake" is the source of the word "quark", the name of one of the elementary particles proposed by the physicist Murray Gell-Mann in 1963. The work and life of Joyce are celebrated annually on 16 June, known as Bloomsday, in Dublin and in an increasing number of cities worldwide, and critical studies in scholarly publications, such as the "James Joyce Quarterly", continue. Both popular and academic uses of Joyce's work were hampered by restrictions imposed by Stephen J. Joyce, Joyce's grandson and executor of his literary estate. On 1 January 2012, those restrictions were lessened by the expiry of copyright protection of much of the published work of James Joyce. In April 2013 the Central Bank of Ireland issued a silver €10 commemorative coin in honour of Joyce that misquoted a famous line from "Ulysses". Johannes Kepler Johannes Kepler (December 27, 1571 – November 15, 1630) was a German mathematician, astronomer, and astrologer. Kepler is a key figure in the 17th-century scientific revolution. He is best known for his laws of planetary motion, based on his works "Astronomia nova", "Harmonices Mundi", and "Epitome of Copernican Astronomy". These works also provided one of the foundations for Isaac Newton's theory of universal gravitation. Kepler was a mathematics teacher at a seminary school in Graz, where he became an associate of Prince Hans Ulrich von Eggenberg. Later he became an assistant to the astronomer Tycho Brahe in Prague, and eventually the imperial mathematician to Emperor Rudolf II and his two successors Matthias and Ferdinand II. He also taught mathematics in Linz, and was an adviser to General Wallenstein.
Additionally, he did fundamental work in the field of optics, invented an improved version of the refracting telescope (the Keplerian telescope), and was mentioned in the telescopic discoveries of his contemporary Galileo Galilei. He was a corresponding member of the Accademia dei Lincei in Rome. Kepler lived in an era when there was no clear distinction between astronomy and astrology, but there was a strong division between astronomy (a branch of mathematics within the liberal arts) and physics (a branch of natural philosophy). Kepler also incorporated religious arguments and reasoning into his work, motivated by the religious conviction and belief that God had created the world according to an intelligible plan that is accessible through the natural light of reason. Kepler described his new astronomy as "celestial physics", as "an excursion into Aristotle's "Metaphysics"", and as "a supplement to Aristotle's "On the Heavens"", transforming the ancient tradition of physical cosmology by treating astronomy as part of a universal mathematical physics. Kepler was born on December 27, the feast day of St John the Evangelist, 1571, in the Free Imperial City of Weil der Stadt (now part of the Stuttgart Region in the German state of Baden-Württemberg, 30 km west of Stuttgart's center). His grandfather, Sebald Kepler, had been Lord Mayor of the city. By the time Johannes was born, he had two brothers and one sister and the Kepler family fortune was in decline. His father, Heinrich Kepler, earned a precarious living as a mercenary, and he left the family when Johannes was five years old. He was believed to have died in the Eighty Years' War in the Netherlands. His mother, Katharina Guldenmann, an innkeeper's daughter, was a healer and herbalist. Born prematurely, Johannes claimed to have been weak and sickly as a child. Nevertheless, he often impressed travelers at his grandfather's inn with his phenomenal mathematical faculty. He was introduced to astronomy at an early age, and developed a love for it that would span his entire life. At age six, he observed the Great Comet of 1577, writing that he "was taken by [his] mother to a high place to look at it." In 1580, at age nine, he observed another astronomical event, a lunar eclipse, recording that he remembered being "called outdoors" to see it and that the moon "appeared quite red". However, childhood smallpox left him with weak vision and crippled hands, limiting his ability in the observational aspects of astronomy. In 1589, after moving through grammar school, Latin school, and seminary at Maulbronn, Kepler attended Tübinger Stift at the University of Tübingen. There, he studied philosophy under Vitus Müller and theology under Jacob Heerbrand (a student of Philipp Melanchthon at Wittenberg), who also taught Michael Maestlin while he was a student, until he became Chancellor at Tübingen in 1590. He proved himself to be a superb mathematician and earned a reputation as a skilful astrologer, casting horoscopes for fellow students. Under the instruction of Michael Maestlin, Tübingen's professor of mathematics from 1583 to 1631, he learned both the Ptolemaic system and the Copernican system of planetary motion. He became a Copernican at that time. In a student disputation, he defended heliocentrism from both a theoretical and theological perspective, maintaining that the Sun was the principal source of motive power in the universe. 
Despite his desire to become a minister, near the end of his studies, Kepler was recommended for a position as teacher of mathematics and astronomy at the Protestant school in Graz. He accepted the position in April 1594, at the age of 23. Kepler's first major astronomical work, "Mysterium Cosmographicum" ("The Cosmographic Mystery") [1596], was the first published defense of the Copernican system. Kepler claimed to have had an epiphany on July 19, 1595, while teaching in Graz, demonstrating the periodic conjunction of Saturn and Jupiter in the zodiac: he realized that regular polygons bound one inscribed and one circumscribed circle at definite ratios, which, he reasoned, might be the geometrical basis of the universe. After failing to find a unique arrangement of polygons that fit known astronomical observations (even with extra planets added to the system), Kepler began experimenting with 3-dimensional polyhedra. He found that each of the five Platonic solids could be inscribed and circumscribed by spherical orbs; nesting these solids, each encased in a sphere, within one another would produce six layers, corresponding to the six known planets—Mercury, Venus, Earth, Mars, Jupiter, and Saturn. By ordering the solids selectively—octahedron, icosahedron, dodecahedron, tetrahedron, cube—Kepler found that the spheres could be placed at intervals corresponding to the relative sizes of each planet's path, assuming the planets circle the Sun. Kepler also found a formula relating the size of each planet's orb to the length of its orbital period: from inner to outer planets, the ratio of increase in orbital period is twice the difference in orb radius. However, Kepler later rejected this formula, because it was not precise enough. As he indicated in the title, Kepler thought he had revealed God's geometrical plan for the universe. Much of Kepler's enthusiasm for the Copernican system stemmed from his theological convictions about the connection between the physical and the spiritual; the universe itself was an image of God, with the Sun corresponding to the Father, the stellar sphere to the Son, and the intervening space between to the Holy Spirit. His first manuscript of "Mysterium" contained an extensive chapter reconciling heliocentrism with biblical passages that seemed to support geocentrism. With the support of his mentor Michael Maestlin, Kepler received permission from the Tübingen university senate to publish his manuscript, pending removal of the Bible exegesis and the addition of a simpler, more understandable description of the Copernican system as well as Kepler's new ideas. "Mysterium" was published late in 1596, and Kepler received his copies and began sending them to prominent astronomers and patrons early in 1597; it was not widely read, but it established Kepler's reputation as a highly skilled astronomer. The effusive dedication, to powerful patrons as well as to the men who controlled his position in Graz, also provided a crucial doorway into the patronage system. Though the details would be modified in light of his later work, Kepler never relinquished the Platonist polyhedral-spherist cosmology of "Mysterium Cosmographicum". His subsequent main astronomical works were in some sense only further developments of it, concerned with finding more precise inner and outer dimensions for the spheres by calculating the eccentricities of the planetary orbits within it. 
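As a modern illustration of the nesting scheme (the figures below are present-day mean distances, not Kepler's own values): for the cube, which Kepler placed between the orbs of Saturn and Jupiter, the ratio of the circumscribed to the inscribed sphere radius is
\[ \frac{R}{r} = \sqrt{3} \approx 1.73, \]
which may be compared with the ratio of the two planets' mean distances from the Sun, roughly 9.5 AU to 5.2 AU, or about 1.83. Near-matches of roughly this quality across the five solids are what the passage above describes Kepler as finding.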
In 1621, Kepler published an expanded second edition of "Mysterium", half as long again as the first, detailing in footnotes the corrections and improvements he had achieved in the 25 years since its first publication. In terms of the impact of "Mysterium", it can be seen as an important first step in modernizing the theory proposed by Nicolaus Copernicus in his "De Revolutionibus orbium coelestium". Whilst Copernicus sought to advance a heliocentric system in this book, he resorted to Ptolemaic devices (viz., epicycles and eccentric circles) in order to explain the change in planets' orbital speed, and also continued to use as a point of reference the center of the earth's orbit rather than that of the sun "as an aid to calculation and in order not to confuse the reader by diverging too much from Ptolemy." Modern astronomy owes much to "Mysterium Cosmographicum", despite flaws in its main thesis, "since it represents the first step in cleansing the Copernican system of the remnants of the Ptolemaic theory still clinging to it." In December 1595, Kepler was introduced to Barbara Müller, a 23-year-old widow (twice over) with a young daughter, Regina Lorenz, and he began courting her. Müller, an heiress to the estates of her late husbands, was also the daughter of a successful mill owner. Her father Jobst initially opposed a marriage despite Kepler's nobility; though he had inherited his grandfather's nobility, Kepler's poverty made him an unacceptable match. Jobst relented after Kepler completed work on "Mysterium", but the engagement nearly fell apart while Kepler was away tending to the details of publication. However, Protestant officials—who had helped set up the match—pressured the Müllers to honor their agreement. Barbara and Johannes were married on April 27, 1597. In the first years of their marriage, the Keplers had two children (Heinrich and Susanna), both of whom died in infancy. In 1602, they had a daughter (Susanna); in 1604, a son (Friedrich); and in 1607, another son (Ludwig). Following the publication of "Mysterium" and with the blessing of the Graz school inspectors, Kepler began an ambitious program to extend and elaborate his work. He planned four additional books: one on the stationary aspects of the universe (the Sun and the fixed stars); one on the planets and their motions; one on the physical nature of planets and the formation of geographical features (focused especially on Earth); and one on the effects of the heavens on the Earth, to include atmospheric optics, meteorology, and astrology. He also sought the opinions of many of the astronomers to whom he had sent "Mysterium", among them Reimarus Ursus (Nicolaus Reimers Bär)—the imperial mathematician to Rudolph II and a bitter rival of Tycho Brahe. Ursus did not reply directly, but republished Kepler's flattering letter to pursue his priority dispute over (what is now called) the Tychonic system with Tycho. Despite this black mark, Tycho also began corresponding with Kepler, starting with a harsh but legitimate critique of Kepler's system; among a host of objections, Tycho took issue with the use of inaccurate numerical data taken from Copernicus. Through their letters, Tycho and Kepler discussed a broad range of astronomical problems, dwelling on lunar phenomena and Copernican theory (particularly its theological viability). But without the significantly more accurate data of Tycho's observatory, Kepler had no way to address many of these issues. 
Instead, he turned his attention to chronology and "harmony," the numerological relationships among music, mathematics and the physical world, and their astrological consequences. By assuming the Earth to possess a soul (a property he would later invoke to explain how the sun causes the motion of planets), he established a speculative system connecting astrological aspects and astronomical distances to weather and other earthly phenomena. By 1599, however, he again felt his work limited by the inaccuracy of available data—just as growing religious tension was also threatening his continued employment in Graz. In December of that year, Tycho invited Kepler to visit him in Prague; on January 1, 1600 (before he even received the invitation), Kepler set off in the hopes that Tycho's patronage could solve his philosophical problems as well as his social and financial ones. On February 4, 1600, Kepler met Tycho Brahe and his assistants Franz Tengnagel and Longomontanus at Benátky nad Jizerou (35 km from Prague), the site where Tycho's new observatory was being constructed. Over the next two months, he stayed as a guest, analyzing some of Tycho's observations of Mars; Tycho guarded his data closely, but was impressed by Kepler's theoretical ideas and soon allowed him more access. Kepler planned to test his theory from "Mysterium Cosmographicum" based on the Mars data, but he estimated that the work would take up to two years (since he was not allowed to simply copy the data for his own use). With the help of Johannes Jessenius, Kepler attempted to negotiate a more formal employment arrangement with Tycho, but negotiations broke down in an angry argument and Kepler left for Prague on April 6. Kepler and Tycho soon reconciled and eventually reached an agreement on salary and living arrangements, and in June, Kepler returned home to Graz to collect his family. Political and religious difficulties in Graz dashed his hopes of returning immediately to Brahe; in hopes of continuing his astronomical studies, Kepler sought an appointment as a mathematician to Archduke Ferdinand. To that end, Kepler composed an essay—dedicated to Ferdinand—in which he proposed a force-based theory of lunar motion: "In Terra inest virtus, quae Lunam ciet" ("There is a force in the earth which causes the moon to move"). Though the essay did not earn him a place in Ferdinand's court, it did detail a new method for measuring lunar eclipses, which he applied during the July 10 eclipse in Graz. These observations formed the basis of his explorations of the laws of optics that would culminate in "Astronomiae Pars Optica". On August 2, 1600, after refusing to convert to Catholicism, Kepler and his family were banished from Graz. Several months later, Kepler returned, now with the rest of his household, to Prague. Through most of 1601, he was supported directly by Tycho, who assigned him to analyzing planetary observations and writing a tract against Tycho's (by then deceased) rival, Ursus. In September, Tycho secured him a commission as a collaborator on the new project he had proposed to the emperor: the "Rudolphine Tables" that should replace the "Prutenic Tables" of Erasmus Reinhold. Two days after Tycho's unexpected death on October 24, 1601, Kepler was appointed his successor as the imperial mathematician with the responsibility to complete his unfinished work. The next 11 years as imperial mathematician would be the most productive of his life. 
Kepler's primary obligation as imperial mathematician was to provide astrological advice to the emperor. Though Kepler took a dim view of the attempts of contemporary astrologers to precisely predict the future or divine specific events, he had been casting well-received detailed horoscopes for friends, family, and patrons since his time as a student in Tübingen. In addition to horoscopes for allies and foreign leaders, the emperor sought Kepler's advice in times of political trouble. Rudolph was actively interested in the work of many of his court scholars (including numerous alchemists) and kept up with Kepler's work in physical astronomy as well. Officially, the only acceptable religious doctrines in Prague were Catholic and Utraquist, but Kepler's position in the imperial court allowed him to practice his Lutheran faith unhindered. The emperor nominally provided an ample income for his family, but the difficulties of the over-extended imperial treasury meant that actually getting hold of enough money to meet financial obligations was a continual struggle. Partly because of financial troubles, his life at home with Barbara was unpleasant, marred with bickering and bouts of sickness. Court life, however, brought Kepler into contact with other prominent scholars (Johannes Matthäus Wackher von Wackhenfels, Jost Bürgi, David Fabricius, Martin Bachazek, and Johannes Brengger, among others) and astronomical work proceeded rapidly. As Kepler slowly continued analyzing Tycho's Mars observations—now available to him in their entirety—and began the slow process of tabulating the "Rudolphine Tables", Kepler also picked up the investigation of the laws of optics from his lunar essay of 1600. Both lunar and solar eclipses presented unexplained phenomena, such as unexpected shadow sizes, the red color of a total lunar eclipse, and the reportedly unusual light surrounding a total solar eclipse. Related issues of atmospheric refraction applied to "all" astronomical observations. Through most of 1603, Kepler paused his other work to focus on optical theory; the resulting manuscript, presented to the emperor on January 1, 1604, was published as "Astronomiae Pars Optica" (The Optical Part of Astronomy). In it, Kepler described the inverse-square law governing the intensity of light, reflection by flat and curved mirrors, and principles of pinhole cameras, as well as the astronomical implications of optics such as parallax and the apparent sizes of heavenly bodies. He also extended his study of optics to the human eye, and is generally considered by neuroscientists to be the first to recognize that images are projected inverted and reversed by the eye's lens onto the retina. The solution to this dilemma was not of particular importance to Kepler as he did not see it as pertaining to optics, although he did suggest that the image was later corrected "in the hollows of the brain" due to the "activity of the Soul." Today, "Astronomiae Pars Optica" is generally recognized as the foundation of modern optics (though the law of refraction is conspicuously absent). With respect to the beginnings of projective geometry, Kepler introduced the idea of continuous change of a mathematical entity in this work. He argued that if a focus of a conic section were allowed to move along the line joining the foci, the geometric form would morph or degenerate, one into another. 
In this way, an ellipse becomes a parabola when a focus moves toward infinity, and when two foci of an ellipse merge into one another, a circle is formed. As the foci of a hyperbola merge into one another, the hyperbola becomes a pair of straight lines. He also assumed that if a straight line is extended to infinity it will meet itself at a single point at infinity, thus having the properties of a large circle. In October 1604, a bright new evening star (SN 1604) appeared, but Kepler did not believe the rumors until he saw it himself. Kepler began systematically observing the nova. Astrologically, the end of 1603 marked the beginning of a fiery trigon, the start of the about 800-year cycle of great conjunctions; astrologers associated the two previous such periods with the rise of Charlemagne (c. 800 years earlier) and the birth of Christ (c. 1600 years earlier), and thus expected events of great portent, especially regarding the emperor. It was in this context, as the imperial mathematician and astrologer to the emperor, that Kepler described the new star two years later in his "De Stella Nova". In it, Kepler addressed the star's astronomical properties while taking a skeptical approach to the many astrological interpretations then circulating. He noted its fading luminosity, speculated about its origin, and used the lack of observed parallax to argue that it was in the sphere of fixed stars, further undermining the doctrine of the immutability of the heavens (the idea accepted since Aristotle that the celestial spheres were perfect and unchanging). The birth of a new star implied the variability of the heavens. In an appendix, Kepler also discussed the recent chronology work of the Polish historian Laurentius Suslyga; he calculated that, if Suslyga was correct that accepted timelines were four years behind, then the Star of Bethlehem—analogous to the present new star—would have coincided with the first great conjunction of the earlier 800-year cycle. The extended line of research that culminated in "Astronomia nova" ("A New Astronomy")—including the first two laws of planetary motion—began with the analysis, under Tycho's direction, of Mars' orbit. Kepler calculated and recalculated various approximations of Mars' orbit using an equant (the mathematical tool that Copernicus had eliminated with his system), eventually creating a model that generally agreed with Tycho's observations to within two arcminutes (the average measurement error). But he was not satisfied with the complex and still slightly inaccurate result; at certain points the model differed from the data by up to eight arcminutes. The wide array of traditional mathematical astronomy methods having failed him, Kepler set about trying to fit an ovoid orbit to the data. In Kepler's religious view of the cosmos, the Sun (a symbol of God the Father) was the source of motive force in the solar system. As a physical basis, Kepler drew by analogy on William Gilbert's theory of the magnetic soul of the Earth from "De Magnete" (1600) and on his own work on optics. Kepler supposed that the motive power (or motive "species") radiated by the Sun weakens with distance, causing faster or slower motion as planets move closer or farther from it. Perhaps this assumption entailed a mathematical relationship that would restore astronomical order. Based on measurements of the aphelion and perihelion of the Earth and Mars, he created a formula in which a planet's rate of motion is inversely proportional to its distance from the Sun. 
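In modern notation (a rendering not found in Kepler's text, with v the orbital speed and r the distance from the Sun), the distance rule just described reads
\[ v \propto \frac{1}{r}, \]
a relation that Kepler applied to the whole orbit but that, in modern terms, holds exactly only at aphelion and perihelion. The geometrical reformulation described next is, in the same notation,
\[ \frac{dA}{dt} = \text{constant}, \]
where A is the area swept out by the line from the Sun to the planet.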
Verifying this relationship throughout the orbital cycle, however, required very extensive calculation; to simplify this task, by late 1602 Kepler reformulated the proportion in terms of geometry: "planets sweep out equal areas in equal times"—Kepler's second law of planetary motion. He then set about calculating the entire orbit of Mars, using the geometrical rate law and assuming an egg-shaped ovoid orbit. After approximately 40 failed attempts, in early 1605 he at last hit upon the idea of an ellipse, which he had previously assumed to be too simple a solution for earlier astronomers to have overlooked. Finding that an elliptical orbit fit the Mars data, he immediately concluded that "all planets move in ellipses, with the sun at one focus"—Kepler's first law of planetary motion. Because he employed no calculating assistants, however, he did not extend the mathematical analysis beyond Mars. By the end of the year, he completed the manuscript for "Astronomia nova", though it would not be published until 1609 due to legal disputes over the use of Tycho's observations, the property of his heirs. In the years following the completion of "Astronomia Nova", most of Kepler's research was focused on preparations for the "Rudolphine Tables" and a comprehensive set of ephemerides (specific predictions of planet and star positions) based on the table (though neither would be completed for many years). He also attempted (unsuccessfully) to begin a collaboration with Italian astronomer Giovanni Antonio Magini. Some of his other work dealt with chronology, especially the dating of events in the life of Jesus, and with astrology, especially criticism of dramatic predictions of catastrophe such as those of Helisaeus Roeslin. Kepler and Roeslin engaged in a series of published attacks and counter-attacks, while physician Philip Feselius published a work dismissing astrology altogether (and Roeslin's work in particular). In response to what Kepler saw as the excesses of astrology on the one hand and overzealous rejection of it on the other, Kepler prepared "Tertius Interveniens" [Third-party Interventions]. Nominally this work—presented to the common patron of Roeslin and Feselius—was a neutral mediation between the feuding scholars, but it also set out Kepler's general views on the value of astrology, including some hypothesized mechanisms of interaction between planets and individual souls. While Kepler considered most traditional rules and methods of astrology to be the "evil-smelling dung" in which "an industrious hen" scrapes, there was an "occasional grain-seed, indeed, even a pearl or a gold nugget" to be found by the conscientious scientific astrologer. Conversely, Sir Oliver Lodge observed that Kepler was somewhat disdainful of astrology, as Kepler was "continually attacking and throwing sarcasm at astrology, but it was the only thing for which people would pay him, and on it after a fashion he lived." In the first months of 1610, Galileo Galilei—using his powerful new telescope—discovered four satellites orbiting Jupiter. Upon publishing his account as "Sidereus Nuncius" [Starry Messenger], Galileo sought the opinion of Kepler, in part to bolster the credibility of his observations. Kepler responded enthusiastically with a short published reply, "Dissertatio cum Nuncio Sidereo" [Conversation with the Starry Messenger]. 
He endorsed Galileo's observations and offered a range of speculations about the meaning and implications of Galileo's discoveries and telescopic methods, for astronomy and optics as well as cosmology and astrology. Later that year, Kepler published his own telescopic observations of the moons in "Narratio de Jovis Satellitibus", providing further support of Galileo. To Kepler's disappointment, however, Galileo never published his reactions (if any) to "Astronomia Nova". After hearing of Galileo's telescopic discoveries, Kepler also started a theoretical and experimental investigation of telescopic optics using a telescope borrowed from Duke Ernest of Cologne. The resulting manuscript was completed in September 1610 and published as "Dioptrice" in 1611. In it, Kepler set out the theoretical basis of double-convex converging lenses and double-concave diverging lenses—and how they are combined to produce a Galilean telescope—as well as the concepts of real vs. virtual images, upright vs. inverted images, and the effects of focal length on magnification and reduction. He also described an improved telescope—now known as the "astronomical" or "Keplerian telescope"—in which two convex lenses can produce higher magnification than Galileo's combination of convex and concave lenses. Around 1611, Kepler circulated a manuscript of what would eventually be published (posthumously) as "Somnium" [The Dream]. Part of the purpose of "Somnium" was to describe what practicing astronomy would be like from the perspective of another planet, to show the feasibility of a non-geocentric system. The manuscript, which disappeared after changing hands several times, described a fantastic trip to the moon; it was part allegory, part autobiography, and part treatise on interplanetary travel (and is sometimes described as the first work of science fiction). Years later, a distorted version of the story may have instigated the witchcraft trial against his mother, as the mother of the narrator consults a demon to learn the means of space travel. Following her eventual acquittal, Kepler composed 223 footnotes to the story—several times longer than the actual text—which explained the allegorical aspects as well as the considerable scientific content (particularly regarding lunar geography) hidden within the text. As a New Year's gift that year (1611), he also composed for his friend and some-time patron, Baron Wackher von Wackhenfels, a short pamphlet entitled "Strena Seu de Nive Sexangula" ("A New Year's Gift of Hexagonal Snow"). In this treatise, he published the first description of the hexagonal symmetry of snowflakes and, extending the discussion into a hypothetical atomistic physical basis for the symmetry, posed what later became known as the Kepler conjecture, a statement about the most efficient arrangement for packing spheres. In 1611, the growing political-religious tension in Prague came to a head. Emperor Rudolph—whose health was failing—was forced to abdicate as King of Bohemia by his brother Matthias. Both sides sought Kepler's astrological advice, an opportunity he used to deliver conciliatory political advice (with little reference to the stars, except in general statements to discourage drastic action). However, it was clear that Kepler's future prospects in the court of Matthias were dim. Also in that year, Barbara Kepler contracted Hungarian spotted fever, then began having seizures. As Barbara was recovering, Kepler's three children all fell sick with smallpox; Friedrich, 6, died. 
Following his son's death, Kepler sent letters to potential patrons in Württemberg and Padua. At the University of Tübingen in Württemberg, concerns over Kepler's perceived Calvinist heresies in violation of the Augsburg Confession and the Formula of Concord prevented his return. The University of Padua—on the recommendation of the departing Galileo—sought Kepler to fill the mathematics professorship, but Kepler, preferring to keep his family in German territory, instead travelled to Austria to arrange a position as teacher and district mathematician in Linz. However, Barbara relapsed into illness and died shortly after Kepler's return. Kepler postponed the move to Linz and remained in Prague until Rudolph's death in early 1612, though between political upheaval, religious tension, and family tragedy (along with the legal dispute over his wife's estate), Kepler could do no research. Instead, he pieced together a chronology manuscript, "Eclogae Chronicae", from correspondence and earlier work. Upon succession as Holy Roman Emperor, Matthias re-affirmed Kepler's position (and salary) as imperial mathematician but allowed him to move to Linz. In Linz, Kepler's primary responsibilities (beyond completing the "Rudolphine Tables") were teaching at the district school and providing astrological and astronomical services. In his first years there, he enjoyed financial security and religious freedom relative to his life in Prague—though he was excluded from the Eucharist by his Lutheran church over his theological scruples. It was also during his time in Linz that Kepler had to deal with the accusation of witchcraft against his mother Katharina, and the ensuing trial, in the Protestant town of Leonberg. That blow, coming only a few years after Kepler's exclusion from communion, has been seen by some biographers not as a coincidence but as a symptom of the broader hostility Kepler faced from Lutheran authorities. His first publication in Linz was "De vero Anno" (1613), an expanded treatise on the year of Christ's birth; he also participated in deliberations on whether to introduce Pope Gregory's reformed calendar to Protestant German lands; that year he wrote the influential mathematical treatise "Nova stereometria doliorum vinariorum", on measuring the volume of containers such as wine barrels, published in 1615. On October 30, 1613, Kepler married the 24-year-old Susanna Reuttinger. Following the death of his first wife Barbara, Kepler had considered 11 different matches over two years (a decision process formalized later as the marriage problem). He eventually returned to Reuttinger (the fifth match), who, he wrote, "won me over with love, humble loyalty, economy of household, diligence, and the love she gave the stepchildren." The first three children of this marriage (Margareta Regina, Katharina, and Sebald) died in childhood. Three more survived into adulthood: Cordula (born 1621); Fridmar (born 1623); and Hildebert (born 1625). According to Kepler's biographers, this was a much happier marriage than his first. Since completing the "Astronomia nova", Kepler had intended to compose an astronomy textbook. In 1615, he completed the first of three volumes of "Epitome astronomiae Copernicanae" ("Epitome of Copernican Astronomy"); the first volume (books I–III) was printed in 1617, the second (book IV) in 1620, and the third (books V–VII) in 1621. Despite the title, which referred simply to heliocentrism, Kepler's textbook culminated in his own ellipse-based system. The "Epitome" became Kepler's most influential work.
It contained all three laws of planetary motion and attempted to explain heavenly motions through physical causes. Though it explicitly extended the first two laws of planetary motion (applied to Mars in "Astronomia nova") to all the planets as well as the Moon and the Medicean satellites of Jupiter, it did not explain how elliptical orbits could be derived from observational data. As a spin-off from the "Rudolphine Tables" and the related "Ephemerides", Kepler published astrological calendars, which were very popular and helped offset the costs of producing his other work—especially when support from the Imperial treasury was withheld. In his calendars—six between 1617 and 1624—Kepler forecast planetary positions and weather as well as political events; the latter were often cannily accurate, thanks to his keen grasp of contemporary political and theological tensions. By 1624, however, the escalation of those tensions and the ambiguity of the prophecies meant political trouble for Kepler himself; his final calendar was publicly burned in Graz. In 1615, Ursula Reingold, a woman in a financial dispute with Kepler's brother Christoph, claimed Kepler's mother Katharina had made her sick with an evil brew. The dispute escalated, and in 1617 Katharina was accused of witchcraft; witchcraft trials were relatively common in central Europe at this time. Beginning in August 1620, she was imprisoned for fourteen months. She was released in October 1621, thanks in part to the extensive legal defense drawn up by Kepler. The accusers had no stronger evidence than rumors. Katharina was subjected to "territio verbalis", a graphic description of the torture awaiting her as a witch, in a final attempt to make her confess. Throughout the trial, Kepler postponed his other work to focus on his "harmonic theory". The result, published in 1619, was "Harmonices Mundi" ("Harmony of the World"). Kepler was convinced "that the geometrical things have provided the Creator with the model for decorating the whole world". In "Harmony", he attempted to explain the proportions of the natural world—particularly the astronomical and astrological aspects—in terms of music. The central set of "harmonies" was the "musica universalis" or "music of the spheres", which had been studied by Pythagoras, Ptolemy and many others before Kepler; in fact, soon after publishing "Harmonices Mundi", Kepler was embroiled in a priority dispute with Robert Fludd, who had recently published his own harmonic theory. Kepler began by exploring regular polygons and regular solids, including the figures that would come to be known as Kepler's solids. From there, he extended his harmonic analysis to music, meteorology, and astrology; harmony resulted from the tones made by the souls of heavenly bodies—and in the case of astrology, the interaction between those tones and human souls. In the final portion of the work (Book V), Kepler dealt with planetary motions, especially relationships between orbital velocity and orbital distance from the Sun. Similar relationships had been used by other astronomers, but Kepler—with Tycho's data and his own astronomical theories—treated them much more precisely and attached new physical significance to them. Among many other harmonies, Kepler articulated what came to be known as the third law of planetary motion. He then tried many combinations until he discovered that (approximately) "The squares of the periodic times are to each other as the cubes of the mean distances."
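In modern notation (not Kepler's own), with T denoting a planet's orbital period and a its mean distance from the Sun, the relation he describes can be written as

$$\frac{T_1^{2}}{T_2^{2}} = \frac{a_1^{3}}{a_2^{3}}, \qquad \text{equivalently} \qquad T^{2} \propto a^{3}.$$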
Although he gives the date of this epiphany (March 8, 1618), he does not give any details about how he arrived at this conclusion. However, the wider significance for planetary dynamics of this purely kinematical law was not realized until the 1660s. When conjoined with Christiaan Huygens' newly discovered law of centrifugal force, it enabled Isaac Newton, Edmund Halley, and perhaps Christopher Wren and Robert Hooke to demonstrate independently that the presumed gravitational attraction between the Sun and its planets decreased with the square of the distance between them. This refuted the traditional assumption of scholastic physics that the power of gravitational attraction remained constant with distance whenever it applied between two bodies, such as was assumed by Kepler and also by Galileo in his mistaken universal law that gravitational fall is uniformly accelerated, and also by Galileo's student Borrelli in his 1666 celestial mechanics. In 1623, Kepler at last completed the "Rudolphine Tables", which at the time was considered his major work. However, due to the publishing requirements of the emperor and negotiations with Tycho Brahe's heir, it would not be printed until 1627. In the meantime, religious tension — the root of the ongoing Thirty Years' War — once again put Kepler and his family in jeopardy. In 1625, agents of the Catholic Counter-Reformation placed most of Kepler's library under seal, and in 1626 the city of Linz was besieged. Kepler moved to Ulm, where he arranged for the printing of the "Tables" at his own expense. In 1628, following the military successes of the Emperor Ferdinand's armies under General Wallenstein, Kepler became an official advisor to Wallenstein. Though not the general's court astrologer per se, Kepler provided astronomical calculations for Wallenstein's astrologers and occasionally wrote horoscopes himself. In his final years, Kepler spent much of his time traveling, from the imperial court in Prague to Linz and Ulm to a temporary home in Sagan, and finally to Regensburg. Soon after arriving in Regensburg, Kepler fell ill. He died on November 15, 1630, and was buried there; his burial site was lost after the Swedish army destroyed the churchyard. Only Kepler's self-authored poetic epitaph survived the times: Kepler's belief that God created the cosmos in an orderly fashion caused him to attempt to determine and comprehend the laws that govern the natural world, most profoundly in astronomy. The phrase "I am merely thinking God's thoughts after Him" has been attributed to him, although this is probably a capsulized version of a writing from his hand: Those laws [of nature] are within the grasp of the human mind; God wanted us to recognize them by creating us after his own image so that we could share in his own thoughts. Kepler's laws of planetary motion were not immediately accepted. Several major figures such as Galileo and René Descartes completely ignored Kepler's "Astronomia nova." Many astronomers, including Kepler's teacher, Michael Maestlin, objected to Kepler's introduction of physics into his astronomy. Some adopted compromise positions. Ismaël Bullialdus accepted elliptical orbits but replaced Kepler's area law with uniform motion in respect to the empty focus of the ellipse, while Seth Ward used an elliptical orbit with motions defined by an equant. Several astronomers tested Kepler's theory, and its various modifications, against astronomical observations. 
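The demonstration mentioned above, that the third law combined with Huygens' law of centrifugal force implies an inverse-square attraction, can be sketched in modern terms for the idealized case of a circular orbit of radius r (a simplification not present in the original accounts):

$$F \propto \frac{v^{2}}{r}, \qquad v = \frac{2\pi r}{T} \;\Rightarrow\; F \propto \frac{r}{T^{2}}, \qquad T^{2} \propto r^{3} \;\Rightarrow\; F \propto \frac{1}{r^{2}}.$$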
Two transits of Venus and Mercury across the face of the sun provided sensitive tests of the theory, under circumstances when these planets could not normally be observed. In the case of the transit of Mercury in 1631, Kepler had been extremely uncertain of the parameters for Mercury, and advised observers to look for the transit the day before and after the predicted date. Pierre Gassendi observed the transit on the date predicted, a confirmation of Kepler's prediction. This was the first observation of a transit of Mercury. However, his attempt to observe the transit of Venus just one month later was unsuccessful due to inaccuracies in the Rudolphine Tables. Gassendi did not realize that it was not visible from most of Europe, including Paris. Jeremiah Horrocks, who observed the 1639 Venus transit, had used his own observations to adjust the parameters of the Keplerian model, predicted the transit, and then built apparatus to observe the transit. He remained a firm advocate of the Keplerian model. "Epitome of Copernican Astronomy" was read by astronomers throughout Europe, and following Kepler's death, it was the main vehicle for spreading Kepler's ideas. In the period 1630 - 1650, this book was the most widely used astronomy textbook, winning many converts to ellipse-based astronomy. However, few adopted his ideas on the physical basis for celestial motions. In the late 17th century, a number of physical astronomy theories drawing from Kepler's work—notably those of Giovanni Alfonso Borelli and Robert Hooke—began to incorporate attractive forces (though not the quasi-spiritual motive species postulated by Kepler) and the Cartesian concept of inertia. This culminated in Isaac Newton's "Principia Mathematica" (1687), in which Newton derived Kepler's laws of planetary motion from a force-based theory of universal gravitation. Beyond his role in the historical development of astronomy and natural philosophy, Kepler has loomed large in the philosophy and historiography of science. Kepler and his laws of motion were central to early histories of astronomy such as Jean-Étienne Montucla's 1758 "Histoire des mathématiques" and Jean-Baptiste Delambre's 1821 "Histoire de l'astronomie moderne". These and other histories written from an Enlightenment perspective treated Kepler's metaphysical and religious arguments with skepticism and disapproval, but later Romantic-era natural philosophers viewed these elements as central to his success. William Whewell, in his influential "History of the Inductive Sciences" of 1837, found Kepler to be the archetype of the inductive scientific genius; in his "Philosophy of the Inductive Sciences" of 1840, Whewell held Kepler up as the embodiment of the most advanced forms of scientific method. Similarly, Ernst Friedrich Apelt—the first to extensively study Kepler's manuscripts, after their purchase by Catherine the Great—identified Kepler as a key to the "Revolution of the sciences". Apelt, who saw Kepler's mathematics, aesthetic sensibility, physical ideas, and theology as part of a unified system of thought, produced the first extended analysis of Kepler's life and work. Alexandre Koyré's work on Kepler was, after Apelt, the first major milestone in historical interpretations of Kepler's cosmology and its influence. 
In the 1930s and 1940s, Koyré, and a number of others in the first generation of professional historians of science, described the "Scientific Revolution" as the central event in the history of science, and Kepler as a (perhaps the) central figure in the revolution. Koyré placed Kepler's theorization, rather than his empirical work, at the center of the intellectual transformation from ancient to modern world-views. Since the 1960s, the volume of historical Kepler scholarship has expanded greatly, including studies of his astrology and meteorology, his geometrical methods, the role of his religious views in his work, his literary and rhetorical methods, his interaction with the broader cultural and philosophical currents of his time, and even his role as an historian of science. Philosophers of science—such as Charles Sanders Peirce, Norwood Russell Hanson, Stephen Toulmin, and Karl Popper—have repeatedly turned to Kepler: examples of incommensurability, analogical reasoning, falsification, and many other philosophical concepts have been found in Kepler's work. Physicist Wolfgang Pauli even used Kepler's priority dispute with Robert Fludd to explore the implications of analytical psychology on scientific investigation. Modern translations of a number of Kepler's books appeared in the late-nineteenth and early-twentieth centuries, the systematic publication of his collected works began in 1937 (and is nearing completion in the early 21st century). An edition in eight volumes, " Kepleri Opera omnia," was prepared by Christian Frisch (1807–1881), during 1858 to 1871, on the occasion of Kepler's 300th birthday. Frisch's edition only included Kepler's Latin, with a Latin commentary. A new edition was planned beginning in 1914 by Walther von Dyck (1856–1934). Dyck compiled copies of Kepler's unedited manuscripts, using international diplomatic contacts to convince the Soviet authorities to lend him the manuscripts kept in Leningrad for photographic reproduction. These manuscripts contained several works by Kepler that had not been available to Frisch. Dyck's photographs remain the basis for the modern editions of Kepler's unpublished manuscripts. Max Caspar (1880–1956) published his German translation of Kepler's "Mysterium Cosmographicum" in 1923. Both Dyck and Caspar were influenced in their interest in Kepler by mathematician Alexander von Brill (1842–1935). Caspar became Dyck's collaborator, succeeding him as project leader in 1934, establishing the "Kepler-Kommission" in the following year. Assisted by Martha List (1908–1992) and Franz Hammer (1898–1979), Caspar continued editorial work during World War II. Max Caspar also published a biography of Kepler in 1948. The commission was later chaired by Volker Bialas (during 1976–2003) and Ulrich Grigull (during 1984–1999) and Roland Bulirsch (1998–2014). Kepler has acquired a popular image as an icon of scientific modernity and a man before his time; science popularizer Carl Sagan described him as "the first astrophysicist and the last scientific astrologer". The debate over Kepler's place in the Scientific Revolution has produced a wide variety of philosophical and popular treatments. One of the most influential is Arthur Koestler's 1959 "The Sleepwalkers", in which Kepler is unambiguously the hero (morally and theologically as well as intellectually) of the revolution. 
A well-received, if fanciful, historical novel by John Banville, "Kepler" (1981), explored many of the themes developed in Koestler's non-fiction narrative and in the philosophy of science. Somewhat more fanciful is a recent work of nonfiction, "Heavenly Intrigue" (2004), suggesting that Kepler murdered Tycho Brahe to gain access to his data. In Austria, Kepler left behind such a historical legacy that he was one of the motifs of a silver collector's coin: the 10-euro Johannes Kepler silver coin, minted on September 10, 2002. The reverse side of the coin has a portrait of Kepler, who spent some time teaching in Graz and the surrounding areas. Kepler was acquainted with Prince Hans Ulrich von Eggenberg personally, and he probably influenced the construction of Eggenberg Castle (the motif of the obverse of the coin). In front of him on the coin is the model of nested spheres and polyhedra from "Mysterium Cosmographicum". The German composer Paul Hindemith wrote an opera about Kepler entitled "Die Harmonie der Welt", and a symphony of the same name was derived from music for the opera. Philip Glass wrote an opera called "Kepler" based on Kepler's life (2009). Kepler is honored together with Nicolaus Copernicus with a feast day on the liturgical calendar of the Episcopal Church (USA) on May 23. Directly named for Kepler's contribution to science are Kepler's laws of planetary motion, Kepler's Supernova (Supernova 1604, which he observed and described) and the Kepler Solids, a set of geometrical constructions, two of which were described by him, and the Kepler conjecture on sphere packing. A critical edition of Kepler's collected works ("Johannes Kepler Gesammelte Werke", KGW) in 22 volumes is being edited by the "Kepler-Kommission" (founded 1935) on behalf of the Bayerische Akademie der Wissenschaften. The Kepler-Kommission also publishes "Bibliographia Kepleriana" (2nd ed. List, 1968), a complete bibliography of editions of Kepler's works, with a supplementary volume to the second edition (ed. Hamel 1998). John Lennon John Winston Ono Lennon (9 October 1940 – 8 December 1980) was an English singer, songwriter, and peace activist who co-founded the Beatles, the most commercially successful band in the history of popular music. He and fellow member Paul McCartney formed a much-celebrated songwriting partnership. Along with George Harrison and Ringo Starr, the group would ascend to worldwide fame during the 1960s. He was born as John Winston Lennon in Liverpool, where he became involved in the skiffle craze as a teenager. In 1957, he formed his first band, the Quarrymen, which evolved into the Beatles in 1960. Lennon began recording songs as a solo artist before the band's break-up in April 1970; two of those songs were "Give Peace a Chance" and "Instant Karma!" Lennon subsequently produced albums that included "John Lennon/Plastic Ono Band" and "Imagine", and songs such as "Working Class Hero", "Imagine" and "Happy Xmas (War Is Over)". After he married Yoko Ono in 1969, he added "Ono" as one of his middle names. Lennon disengaged himself from the music business in 1975 to raise his infant son Sean, but re-emerged with Ono in 1980 with the album "Double Fantasy". He was shot and killed in the archway of his Manhattan apartment building three weeks after the album was released. Lennon revealed a rebellious nature and acerbic wit in his music, writing, drawings, on film and in interviews.
Controversial through his political and peace activism, he moved from London to Manhattan in 1971, where his criticism of the Vietnam War resulted in a lengthy attempt by the Nixon administration to deport him. Some of his songs were adopted as anthems by the anti-war movement and the larger counterculture. By 2012, Lennon's solo album sales in the United States had exceeded 14 million units. He had 25 number-one singles on the US "Billboard" Hot 100 chart as a writer, co-writer, or performer. In 2002, Lennon was voted eighth in a BBC poll of the 100 Greatest Britons and in 2008, "Rolling Stone" ranked him the fifth-greatest singer of all time. In 1987, he was posthumously inducted into the Songwriters Hall of Fame. Lennon was twice posthumously inducted into the Rock and Roll Hall of Fame: first in 1988 as a member of the Beatles and again in 1994 as a solo artist. Lennon was born on 9 October 1940 at Liverpool Maternity Hospital, to Julia (née Stanley) (1914–1958) and Alfred Lennon (1912–1976). Alfred was a merchant seaman of Irish descent who was away at the time of his son's birth. His parents named him John Winston Lennon after his paternal grandfather, John "Jack" Lennon, and Prime Minister Winston Churchill. His father was often away from home but sent regular pay cheques to 9 Newcastle Road, Liverpool, where Lennon lived with his mother; the cheques stopped when he went absent without leave in February 1944. When he eventually came home six months later, he offered to look after the family, but Julia, by then pregnant with another man's child, rejected the idea. After her sister Mimi complained to Liverpool's Social Services twice, Julia gave her sister custody of Lennon. In July 1946, Lennon's father visited her and took his son to Blackpool, secretly intending to emigrate to New Zealand with him. Julia followed them—with her partner at the time, 'Bobby' Dykins—and after a heated argument his father forced the five-year-old to choose between them. Lennon twice chose his father, but as his mother walked away, he began to cry and followed her, although this has been disputed. According to author Mark Lewisohn, Lennon's parents agreed that Julia should take him and give him a home as Alf left again. A witness who was there that day, Billy Hall, has said the dramatic scene often portrayed with a young John Lennon having to make a decision between his parents never happened. It would be 20 years before he had contact with his father again. Throughout the rest of his childhood and adolescence, Lennon lived at Mendips, 251 Menlove Avenue, Woolton, with Mimi and her husband George Toogood Smith, who had no children of their own. His aunt purchased volumes of short stories for him, and his uncle, a dairyman at his family's farm, bought him a mouth organ and engaged him in solving puzzles. Julia visited Mendips on a regular basis, and when John was 11 years old he often visited her at 1 Blomfield Road, Liverpool, where she played him Elvis Presley records, taught him the banjo, and showed him how to play "Ain't That a Shame" by Fats Domino. In September 1980, Lennon commented on his family and his rebellious nature. He regularly visited his cousin, Stanley Parkes, who lived in Fleetwood and took him on trips to local cinemas. During the school holidays, Parkes often visited Lennon with Leila Harvey, another cousin, and the threesome often travelled to Blackpool two or three times a week to watch shows.
They would visit the Blackpool Tower Circus and see artists such as Dickie Valentine, Arthur Askey, Max Bygraves and Joe Loss, with Parkes recalling that Lennon particularly liked George Formby. After Parkes's family moved to Scotland, the three cousins often spent their school holidays together there. Parkes recalled, "John, cousin Leila and I were very close. From Edinburgh we would drive up to the family croft at Durness, which was from about the time John was nine years old until he was about 16." He was 14 years old when his uncle George died of a liver haemorrhage on 5 June 1955, at age 52. Lennon was raised as an Anglican and attended Dovedale Primary School. After passing his eleven-plus exam, he attended Quarry Bank High School in Liverpool from September 1952 to 1957, and was described by Harvey at the time as a "happy-go-lucky, good-humoured, easy going, lively lad". He often drew comical cartoons that appeared in his own self-made school magazine called "The Daily Howl", but despite his artistic talent, his school reports were damning: "Certainly on the road to failure ... hopeless ... rather a clown in class ... wasting other pupils' time." In 2005, the National Postal Museum in the US acquired a stamp collection that Lennon had assembled when he was a boy. In 1956, Julia bought John his first guitar. The instrument was an inexpensive Gallotone Champion acoustic for which she lent her son five pounds and ten shillings on the condition that the guitar be delivered to her own house and not Mimi's, knowing well that her sister was not supportive of her son's musical aspirations. Mimi was sceptical of his claim that he would be famous one day, and she hoped that he would grow bored with music, often telling him, "The guitar's all very well, John, but you'll never make a living out of it". On 15 July 1958 (when Lennon was 17 years old) his mother was struck and killed by a car while she was walking home after visiting the Smiths' house. Lennon failed his O-level examinations and was accepted into the Liverpool College of Art after his aunt and headmaster intervened. Once at the college, he started wearing Teddy Boy clothes and was threatened with expulsion for his behaviour; he was "thrown out of the college before his final year". At age 15, Lennon formed the skiffle group the Quarrymen. Named after Quarry Bank High School, the group was established by Lennon in September 1956. By the summer of 1957, the Quarrymen played a "spirited set of songs" made up of half skiffle and half rock and roll. Lennon first met Paul McCartney at the Quarrymen's second performance, which was held in Woolton on 6 July at the St. Peter's Church garden fête. Lennon then asked McCartney to join the band. McCartney said that Aunt Mimi "was very aware that John's friends were lower class", and would often patronise him when he arrived to visit Lennon. According to Paul's brother Mike, McCartney's father was also disapproving, declaring that Lennon would get his son "into trouble", although he later allowed the fledgling band to rehearse in the McCartneys' front room at 20 Forthlin Road. During this time, 18-year-old Lennon wrote his first song, "Hello Little Girl", a UK top 10 hit for The Fourmost nearly five years later. McCartney recommended his friend George Harrison to be the lead guitarist. Lennon thought that Harrison, then 14 years old, was too young. McCartney engineered an audition on the upper deck of a Liverpool bus, where Harrison played "Raunchy" for Lennon and was asked to join.
Stuart Sutcliffe, Lennon's friend from art school, later joined as bassist. Lennon, McCartney, Harrison and Sutcliffe became "The Beatles" in early 1960. In August that year, the Beatles engaged for a 48-night residency in Hamburg, Germany and were desperately in need of a drummer. They asked Pete Best to join them. Lennon was now 19, and his aunt, horrified when he told her about the trip, pleaded with him to continue his art studies instead. After the first Hamburg residency, the band accepted another in April 1961, and a third in April 1962. As with the other band members, Lennon was introduced to Preludin while in Hamburg, and regularly took the drug as a stimulant during their long, overnight performances. Brian Epstein managed the Beatles from 1962 until his untimely death in 1967. He had no prior experience managing artists, but he had a strong influence on the group's dress code and attitude on stage. Lennon initially resisted his attempts to encourage the band to present a professional appearance, but eventually complied, saying, "I'll wear a bloody balloon if somebody's going to pay me". McCartney took over on bass after Sutcliffe decided to stay in Hamburg, and Pete Best was replaced with drummer Ringo Starr; this completed the four-piece line-up that would remain until the group's break-up in 1970. The band's first single, "Love Me Do", was released in October 1962 and reached No. 17 on the British charts. They recorded their debut album, "Please Please Me", in under 10 hours on 11 February 1963, a day when Lennon was suffering the effects of a cold, which is evident in the vocal on the last song to be recorded that day, "Twist and Shout". The Lennon–McCartney songwriting partnership yielded eight of its fourteen tracks. With a few exceptions, one being the album title itself, Lennon had yet to bring his love of wordplay to bear on his song lyrics, saying: "We were just writing songs... pop songs with no more thought of them than that—to create a sound. And the words were almost irrelevant". In a 1987 interview, McCartney said that the other Beatles idolised John: "He was like our own little Elvis... We all looked up to John. He was older and he was very much the leader; he was the quickest wit and the smartest." The Beatles achieved mainstream success in the UK early in 1963. Lennon was on tour when his first son, Julian, was born in April. During their Royal Variety Show performance, which was attended by the Queen Mother and other British royalty, Lennon poked fun at the audience: "For our next song, I'd like to ask for your help. For the people in the cheaper seats, clap your hands... and the rest of you, if you'll just rattle your jewellery." After a year of Beatlemania in the UK, the group's historic February 1964 US debut appearance on "The Ed Sullivan Show" marked their breakthrough to international stardom. A two-year period of constant touring, filmmaking, and songwriting followed, during which Lennon wrote two books, "In His Own Write" and "A Spaniard in the Works". The Beatles received recognition from the British Establishment when they were appointed Members of the Order of the British Empire (MBE) in the 1965 Queen's Birthday Honours. Lennon grew concerned that fans who attended Beatles concerts were unable to hear the music above the screaming of fans, and that the band's musicianship was beginning to suffer as a result. Lennon's "Help!" expressed his own feelings in 1965: "I "meant" it... It was me singing 'help'". 
He had put on weight (he would later refer to this as his "Fat Elvis" period), and felt he was subconsciously seeking change. In March that year he was unknowingly introduced to LSD when a dentist, hosting a dinner party attended by Lennon, Harrison and their wives, spiked the guests' coffee with the drug. When they wanted to leave, their host revealed what they had taken, and strongly advised them not to leave the house because of the likely effects. Later, in a lift at a nightclub, they all believed it was on fire: "We were all screaming... hot and hysterical." After the Beatles' final concert on 29 August 1966, Lennon was deprived of the routine of live performances; he felt lost and considered leaving the band. Since his involuntary introduction to LSD, he had increased his use of the drug and was almost constantly under its influence for much of 1967. According to biographer Ian MacDonald, Lennon's continuous experimentation with LSD during the year brought him "close to erasing his identity". The year 1967 saw the release of "Strawberry Fields Forever", hailed by "Time" magazine for its "astonishing inventiveness", and the group's landmark album "Sgt. Pepper's Lonely Hearts Club Band", which revealed lyrics by Lennon that contrasted strongly with the simple love songs of the 'Lennon–McCartney' early years. After the Beatles were introduced to the Maharishi Mahesh Yogi, the group attended an August weekend of personal instruction at his Transcendental Meditation seminar in Bangor, Wales. During the seminar, they were informed of Epstein's death. "I knew we were in trouble then", Lennon said later. "I didn't have any misconceptions about our ability to do anything other than play music, and I was scared". Led primarily by Harrison and Lennon's interest in Eastern religion, the Beatles later travelled to Maharishi's ashram in India for further guidance. While there, they composed most of the songs for "The Beatles" and "Abbey Road". The anti-war, black comedy "How I Won the War", featuring Lennon's only appearance in a non–Beatles full-length film, was shown in cinemas in October 1967. McCartney organised the group's first post-Epstein project, the self-written, produced and directed television film "Magical Mystery Tour", which was released in December that year. While the film itself proved to be their first critical flop, its soundtrack release, featuring Lennon's acclaimed, Lewis Carroll-inspired "I Am the Walrus", was a success. With Epstein gone, the band members became increasingly involved in business activities, and in February 1968 they formed Apple Corps, a multimedia corporation composed of Apple Records and several other subsidiary companies. Lennon described the venture as an attempt to achieve "artistic freedom within a business structure", but his increased drug experimentation and growing preoccupation with Yoko Ono, combined with the Beatles' inability to agree on how the company should be run, left Apple in need of professional management. Lennon asked Lord Beeching to take on the role, but he declined, advising Lennon to go back to making records. Lennon was approached by Allen Klein, who had managed the Rolling Stones and other bands during the British Invasion. In early 1969, Klein was appointed as Apple's chief executive by Lennon, Harrison and Starr, but McCartney never signed the management contract. At the end of 1968, Lennon was featured in the film "The Rolling Stones Rock and Roll Circus" in the role of a Dirty Mac band member. 
The film was not released until 1996. The supergroup, composed of Lennon, Eric Clapton, Keith Richards and Mitch Mitchell, also backed a vocal performance by Ono in the film. Lennon and Ono were married on 20 March 1969, and soon released a series of 14 lithographs called "Bag One" depicting scenes from their honeymoon, eight of which were deemed indecent and most of which were banned and confiscated. Lennon's creative focus continued to move beyond the Beatles and between 1968 and 1969 he and Ono recorded three albums of experimental music together: "Unfinished Music No. 1: Two Virgins" (known more for its cover than for its music), "Unfinished Music No. 2: Life with the Lions" and "Wedding Album". In 1969, they formed the Plastic Ono Band, releasing "Live Peace in Toronto 1969". Between 1969 and 1970, Lennon released the singles "Give Peace a Chance", which was widely adopted as an anti-Vietnam-War anthem in 1969, "Cold Turkey", which documented his withdrawal symptoms after he became addicted to heroin, and "Instant Karma!" In protest at Britain's involvement in "the Nigeria-Biafra thing" (the Nigerian Civil War), its support of America in the Vietnam War and (perhaps jokingly) against "Cold Turkey" slipping down the charts, Lennon returned his MBE medal to the Queen, though this had no effect on his MBE status, which could not be renounced. Lennon left the Beatles in September 1969, and agreed not to inform the media while the group renegotiated their recording contract, but he was outraged that McCartney publicised his own departure on releasing his debut solo album in April 1970. Lennon's reaction was, "Jesus Christ! He gets all the credit for it!" He later wrote, "I started the band. I disbanded it. It's as simple as that." In later interviews with "Rolling Stone" magazine, he revealed his bitterness towards McCartney, saying, "I was a fool not to do what Paul did, which was use it to sell a record." Lennon also spoke of the hostility he perceived the other members had towards Ono, and of how he, Harrison, and Starr "got fed up with being sidemen for Paul ... After Brian Epstein died we collapsed. Paul took over and supposedly led us. But what is leading us when we went round in circles?" In 1970, Lennon and Ono went through primal therapy with Arthur Janov in Los Angeles, California. Designed to release emotional pain from early childhood, the therapy entailed two half-days a week with Janov for four months; he had wanted to treat the couple for longer, but they felt no need to continue and returned to London. Lennon's debut solo album, "John Lennon/Plastic Ono Band" (1970), was received with praise by many music critics, but its highly personal lyrics and stark sound limited its commercial performance. Critic Greil Marcus remarked, "John's singing in the last verse of 'God' may be the finest in all of rock." The album featured the song "Mother", in which Lennon confronted his feelings of childhood rejection, and the Dylanesque "Working Class Hero", a bitter attack against the bourgeois social system which, due to the lyric "you're still fucking peasants", fell foul of broadcasters. The same year, Tariq Ali expressed his revolutionary political views when he interviewed Lennon. This inspired the singer to write "Power to the People". Lennon also became involved with Ali during a protest against the prosecution of "Oz" magazine for alleged obscenity. Lennon denounced the proceedings as "disgusting fascism", and he and Ono (as Elastic Oz Band) released the single "God Save Us/Do the Oz" and joined marches in support of the magazine.
Eager for a major commercial success, Lennon adopted a more accessible sound for his next album, "Imagine" (1971). "Rolling Stone" reported that "it contains a substantial portion of good music" but warned of the possibility that "his posturings will soon seem not merely dull but irrelevant". The album's title track later became an anthem for anti-war movements, while the song "How Do You Sleep?" was a musical attack on McCartney in response to lyrics on "Ram" that Lennon felt, and McCartney later confirmed, were directed at him and Ono. Lennon softened his stance in the mid-1970s, however, and said he had written "How Do You Sleep?" about himself. He said in 1980: "I used my resentment against Paul… to create a song… not a terrible vicious horrible vendetta[…] I used my resentment and withdrawing from Paul and the Beatles, and the relationship with Paul, to write 'How Do You Sleep'. I don't really go 'round with those thoughts in my head all the time." Lennon and Ono moved to New York in August 1971 and released "Happy Xmas (War Is Over)" in December. During the new year, the Nixon administration took what it called a "strategic counter-measure" against Lennon's anti-war and anti-Nixon propaganda. The administration embarked on what would be a four-year attempt to deport him. After George McGovern lost the presidential election to Richard Nixon in 1972, Lennon and Ono attended a post-election wake held in the New York home of activist Jerry Rubin. Lennon was embroiled in a continuing legal battle with the immigration authorities, and he was denied permanent residency in the US; the issue would not be resolved until 1976. Lennon was depressed and got intoxicated; he left Ono embarrassed after he had sex with a female guest. Her song "Death of Samantha" was inspired by the incident. "Some Time in New York City" was recorded as a collaboration with Ono and was released in 1972 with backing from the New York band Elephant's Memory. A double LP, it contained songs about women's rights, race relations, Britain's role in Northern Ireland and Lennon's difficulties in obtaining a green card. The album was a commercial failure and was maligned by critics, who found its political sloganeering heavy-handed and relentless. The "NME"s review took the form of an open letter in which Tony Tyler derided Lennon as a "pathetic, ageing revolutionary". In the US, "Woman Is the Nigger of the World" was released as a single from the album and was televised on 11 May, on "The Dick Cavett Show". Many radio stations refused to broadcast the song because of the word "nigger". Lennon and Ono gave two benefit concerts with Elephant's Memory and guests in New York in aid of patients at the Willowbrook State School mental facility. Staged at Madison Square Garden on 30 August 1972, they were his last full-length concert appearances. While Lennon was recording "Mind Games" in 1973, he and Ono decided to separate. The ensuing 18-month period apart, which he later called his "lost weekend", was spent in Los Angeles and New York City in the company of May Pang. "Mind Games", credited to the "Plastic U.F.Ono Band", was released in November 1973. Lennon also contributed "I'm the Greatest" to Starr's album "Ringo" (1973), released the same month. An alternate take, from the same 1973 "Ringo" sessions, with Lennon providing a guide vocal, appears on "John Lennon Anthology". In early 1974, Lennon was drinking heavily and his alcohol-fuelled antics with Harry Nilsson made headlines. 
In March, two widely publicised incidents occurred at The Troubadour club. In the first incident, Lennon stuck an unused menstrual pad on his forehead and scuffled with a waitress. The second incident occurred two weeks later, when Lennon and Nilsson were ejected from the same club after heckling the Smothers Brothers. Lennon decided to produce Nilsson's album "Pussy Cats", and Pang rented a Los Angeles beach house for all the musicians. After a month of further debauchery, the recording sessions were in chaos, and Lennon returned to New York with Pang to finish work on the album. In April, Lennon had produced the Mick Jagger song "Too Many Cooks (Spoil the Soup)" which was, for contractual reasons, to remain unreleased for more than 30 years. Pang supplied the recording for its eventual inclusion on "The Very Best of Mick Jagger" (2007). Lennon had settled back in New York when he recorded the album "Walls and Bridges". Released in October 1974, it included "Whatever Gets You thru the Night", which featured Elton John on backing vocals and piano, and became Lennon's only single as a solo artist to top the US "Billboard" Hot 100 chart during his lifetime. A second single from the album, "#9 Dream", followed before the end of the year. Starr's "Goodnight Vienna" (1974) again saw assistance from Lennon, who wrote the title track and played piano. On 28 November, Lennon made a surprise guest appearance at Elton John's Thanksgiving concert at Madison Square Garden, in fulfilment of his promise to join the singer in a live show if "Whatever Gets You thru the Night", a song whose commercial potential Lennon had doubted, reached number one. Lennon performed the song along with "Lucy in the Sky with Diamonds" and "I Saw Her Standing There", which he introduced as "a song by an old estranged fiancé of mine called Paul". Lennon co-wrote "Fame", David Bowie's first US number one, and provided guitar and backing vocals for the January 1975 recording. In the same month, Elton John topped the charts with his cover of "Lucy in the Sky with Diamonds", featuring Lennon on guitar and back-up vocals; Lennon is credited on the single under the moniker of "Dr. Winston O'Boogie". He and Ono were reunited shortly afterwards. Lennon released "Rock 'n' Roll" (1975), an album of cover songs, in February. "Stand by Me", taken from the album and a US and UK hit, became his last single for five years. He made what would be his final stage appearance in the ATV special "A Salute to Lew Grade", recorded on 18 April and televised in June. Playing acoustic guitar and backed by an eight-piece band, Lennon performed two songs from "Rock 'n' Roll" ("Stand by Me", which was not broadcast, and "Slippin' and Slidin'") followed by "Imagine". The band, known as Etc., wore masks behind their heads, a dig by Lennon, who thought Grade was two-faced. Sean was Lennon's only child with Ono. Sean was born on 9 October 1975 (Lennon's thirty-fifth birthday), and John took on the role of househusband. Lennon began what would be a five-year hiatus from the music industry, during which time he gave all his attention to his family. Within the month, he fulfilled his contractual obligation to EMI/Capitol for one more album by releasing "Shaved Fish", a compilation album of previously recorded tracks. He devoted himself to Sean, rising at 6am daily to plan and prepare his meals and to spend time with him. 
He wrote "Cookin' (In the Kitchen of Love)" for Starr's "Ringo's Rotogravure" (1976), performing on the track in June in what would be his last recording session until 1980. He formally announced his break from music in Tokyo in 1977, saying, "we have basically decided, without any great decision, to be with our baby as much as we can until we feel we can take time off to indulge ourselves in creating things outside of the family." During his career break he created several series of drawings, and drafted a book containing a mix of autobiographical material and what he termed "mad stuff", all of which would be published posthumously. Lennon emerged from his five-year interruption in music recording in October 1980, when he released the single "(Just Like) Starting Over". The following month saw the release of "Double Fantasy", which contained songs written during a June 1980 journey to Bermuda on a 43-foot sailing boat. The music reflected Lennon's fulfilment in his new-found stable family life. Sufficient additional material was recorded for a planned follow-up album "Milk and Honey", which was released posthumously, in 1984. "Double Fantasy" was jointly released by Lennon and Ono very shortly before his death; the album was not well received and drew comments such as "Melody Maker"'s "indulgent sterility... a godawful yawn". After an evening at the Record Plant on 8 December 1980, Lennon and Ono returned to their Manhattan apartment in a limousine at around 10:50p.m. (EST). They exited the vehicle and walked through the archway of The Dakota, when lone gunman Mark David Chapman shot Lennon four times in the back at close range. Lennon was rushed in a police cruiser to the emergency room of nearby Roosevelt Hospital, where he was pronounced dead on arrival at 11:00p.m. (EST). Earlier that evening, Lennon had autographed a copy of "Double Fantasy" for Chapman. Ono issued a statement the next day, saying "There is no funeral for John", ending it with the words, "John loved and prayed for the human race. Please do the same for him." His remains were cremated at Ferncliff Cemetery in Hartsdale, New York. Ono scattered his ashes in New York's Central Park, where the Strawberry Fields memorial was later created. Chapman avoided going to trial when he ignored his attorney's advice and pleaded guilty to second-degree murder and was sentenced to 20-years-to-life. In 2016, he was denied parole for a ninth time. Lennon met Cynthia Powell (1939–2015) in 1957, when they were fellow students at the Liverpool College of Art. Although Powell was intimidated by Lennon's attitude and appearance, she heard that he was obsessed with the French actress Brigitte Bardot, so she dyed her hair blonde. Lennon asked her out, but when she said that she was engaged, he screamed out, "I didn't ask you to fuckin' marry me, did I?" She often accompanied him to Quarrymen gigs and travelled to Hamburg with McCartney's girlfriend to visit him. Lennon was jealous by nature and eventually grew possessive, often terrifying Powell with his anger and physical violence. Lennon later said that until he met Ono, he had never questioned his chauvinistic attitude toward women. He said that the Beatles song "Getting Better" told his own story, "I used to be cruel to my woman, and physically—any woman. I was a hitter. I couldn't express myself and I hit. I fought men and I hit women. That is why I am always on about peace." 
Recalling his July 1962 reaction when he learned that Cynthia was pregnant, Lennon said, "There's only one thing for it Cyn. We'll have to get married." The couple wed on 23 August at the Mount Pleasant Register Office in Liverpool, with Brian Epstein serving as best man. His marriage began just as Beatlemania was taking off across the UK. He performed on the evening of his wedding day and would continue to do so almost daily from then on. Epstein feared that fans would be alienated by the idea of a married Beatle, and he asked the Lennons to keep their marriage secret. Julian was born on 8 April 1963; Lennon was on tour at the time and did not see his infant son until three days later. Cynthia attributed the start of the marriage breakdown to Lennon's use of LSD, and she felt that he slowly lost interest in her as a result of his use of the drug. When the group travelled by train to Bangor, Wales in 1967 for the Maharishi Yogi's Transcendental Meditation seminar, a policeman did not recognise her and stopped her from boarding. She later recalled how the incident seemed to symbolise the end of their marriage. After Cynthia arrived home at Kenwood, she found Lennon with Ono and left the house to stay with friends. Alexis Mardas later claimed to have slept with her that night, and a few weeks later he informed her that Lennon was seeking a divorce and custody of Julian on the grounds of her adultery with him. After negotiations, Lennon capitulated and agreed to let her divorce him on the same grounds. The case was settled out of court in November 1968, with Lennon giving her £100,000 ($240,000 in US dollars at the time), a small annual payment and custody of Julian. The Beatles were performing at Liverpool's Cavern Club in November 1961 when they were introduced to Brian Epstein after a midday concert. Epstein was a homosexual, and according to biographer Philip Norman, one of Brian's reasons for wanting to manage the group was that he was physically attracted to Lennon. Almost as soon as Julian was born, Lennon went on holiday to Spain with Epstein, which led to speculation about their relationship. When he was later questioned about it, Lennon said, "Well, it was almost a love affair, but not quite. It was never consummated. But it was a pretty intense relationship. It was my first experience with a homosexual that I was conscious was homosexual. We used to sit in a café in Torremolinos looking at all the boys and I'd say, 'Do you like that one? Do you like this one?' I was rather enjoying the experience, thinking like a writer all the time: I am experiencing this." Soon after their return from Spain, at McCartney's twenty-first birthday party in June 1963, Lennon physically attacked Cavern Club Master of ceremonies Bob Wooler for saying "How was your honeymoon, John?" The MC, known for his wordplay and affectionate but cutting remarks, was making a joke, but ten months had passed since Lennon's marriage, and the deferred honeymoon was still two months in the future. Lennon was drunk at the time and the matter was simple: "He called me a queer so I battered his bloody ribs in". Lennon delighted in mocking Epstein for his homosexuality and for the fact that he was Jewish. When Epstein invited suggestions for the title of his autobiography, Lennon offered "Queer Jew"; on learning of the eventual title, "A Cellarful of Noise", he parodied, "More like "A Cellarful of Boys"". He demanded of a visitor to Epstein's flat, "Have you come to blackmail him? 
If not, you're the only bugger in London who hasn't." During the recording of "Baby, You're a Rich Man", he sang altered choruses of "Baby, you're a rich fag Jew". During his marriage to Cynthia, Lennon's first son Julian was born at the same time that his commitments with the Beatles were intensifying at the height of Beatlemania. Lennon was touring with the Beatles when Julian was born on 8 April 1963. Julian's birth, like his mother Cynthia's marriage to Lennon, was kept secret because Epstein was convinced that public knowledge of such things would threaten the Beatles' commercial success. Julian recalled that as a small child in Weybridge some four years later, "I was trundled home from school and came walking up with one of my watercolour paintings. It was just a bunch of stars and this blonde girl I knew at school. And Dad said, 'What's this?' I said, 'It's Lucy in the sky with diamonds.'" Lennon used it as the title of a Beatles song, and though it was later reported to have been derived from the initials LSD, Lennon insisted, "It's not an acid song." McCartney corroborated Lennon's explanation that Julian innocently came up with the name. Lennon was distant from Julian, who felt closer to McCartney than to his father. During a car journey to visit Cynthia and Julian during Lennon's divorce, McCartney composed a song, "Hey Jules", to comfort him. It would evolve into the Beatles song "Hey Jude". Lennon later said, "That's his best song. It started off as a song about my son Julian... he turned it into 'Hey Jude'. I always thought it was about me and Yoko but he said it wasn't." Lennon's relationship with Julian was already strained, and after Lennon and Ono moved to Manhattan in 1971, Julian would not see his father again until 1973. With Pang's encouragement, arrangements were made for Julian (and his mother) to visit Lennon in Los Angeles, where they went to Disneyland. Julian started to see his father regularly, and Lennon gave him a drumming part on a "Walls and Bridges" track. He bought Julian a Gibson Les Paul guitar and other instruments, and encouraged his interest in music by demonstrating guitar chord techniques. Julian recalls that he and his father "got on a great deal better" during the time he spent in New York: "We had a lot of fun, laughed a lot and had a great time in general." In a "Playboy" interview with David Sheff shortly before his death, Lennon said, "Sean was a planned child, and therein lies the difference. I don't love Julian any less as a child. He's still my son, whether he came from a bottle of whiskey or because they didn't have pills in those days. He's here, he belongs to me, and he always will." He said he was trying to reestablish a connection with the then 17-year-old, and confidently predicted, "Julian and I will have a relationship in the future." After his death it was revealed that he had left Julian very little in his will. Two versions exist of how Lennon met Yoko Ono. According to the first, told by the Lennons, on 9 November 1966 Lennon went to the Indica Gallery in London, where Ono was preparing her conceptual art exhibit, and they were introduced by gallery owner John Dunbar. Lennon was intrigued by Ono's "Hammer A Nail": patrons hammered a nail into a wooden board, creating the art piece. Although the exhibition had not yet begun, Lennon wanted to hammer a nail into the clean board, but Ono stopped him. Dunbar asked her, "Don't you know who this is? He's a millionaire! He might buy it." 
Ono had supposedly not heard of the Beatles, but relented on condition that Lennon pay her five shillings, to which Lennon replied, "I'll give you an imaginary five shillings and hammer an imaginary nail in." Ono subsequently related that Lennon had taken a bite out of the apple on display in her work "Apple", much to her fury. The second version, told by McCartney, is that in late 1965, Ono was in London compiling original musical scores for a book John Cage was working on, "Notations", but McCartney declined to give her any of his own manuscripts for the book, suggesting that Lennon might oblige. When asked, Lennon gave Ono the original handwritten lyrics to "The Word". Ono began visiting and telephoning Lennon's home and, when his wife asked him for an explanation, Lennon explained that Ono was only trying to obtain money for her "avant-garde bullshit". While his wife was on holiday in Greece in May 1968, Lennon invited Ono to visit. They spent the night recording what would become the "Two Virgins" album, after which, he said, they "made love at dawn." When Lennon's wife returned home she found Ono wearing her bathrobe and drinking tea with Lennon who simply said, "Oh, hi." Ono became pregnant in 1968 and miscarried a male child on 21 November 1968, a few weeks after Lennon's divorce from Cynthia was granted. Two years before the Beatles disbanded, Lennon and Ono began public protests against the Vietnam War. They were married in Gibraltar on 20 March 1969, and spent their honeymoon at the Hilton Amsterdam, campaigning with a week-long Bed-In for Peace. They planned another Bed-In in the United States, but were denied entry, so held one instead at the Queen Elizabeth Hotel in Montreal, where they recorded "Give Peace a Chance". They often combined advocacy with performance art, as in their "Bagism", first introduced during a Vienna press conference. Lennon detailed this period in the Beatles song "The Ballad of John and Yoko". Lennon changed his name by deed poll on 22 April 1969, adding "Ono" as a middle name. The brief ceremony took place on the roof of the Apple Corps building, made famous three months earlier by the Beatles' "Let It Be" rooftop concert. Although he used the name John Ono Lennon thereafter, official documents referred to him as John Winston Ono Lennon, since he was not permitted to revoke a name given at birth. The couple settled at Tittenhurst Park at Sunninghill in Berkshire. After Ono was injured in a car accident, Lennon arranged for a king-sized bed to be brought to the recording studio as he worked on the Beatles' last album, "Abbey Road". To escape the acrimony of the band's break-up, Ono suggested they move permanently to Manhattan, which they did on 31 August 1971. They first lived in The St. Regis Hotel on 5th Avenue, East 55th Street, then moved to a street-level flat at 105 Bank Street, Greenwich Village, on 16 October 1971. After a robbery, they relocated in 1973 to the more secure Dakota at 1West72nd Street. ABKCO Industries was formed in 1968 by Allen Klein as an umbrella company to ABKCO Records. Klein hired May Pang as a receptionist in 1969. Through involvement in a project with ABKCO, Lennon and Ono met her the following year. She became their personal assistant. After she had been working with the couple for three years, Ono confided that she and Lennon were becoming estranged. She went on to suggest that Pang should begin a physical relationship with Lennon, telling her, "He likes you a lot." 
Pang, 22, astounded by Ono's proposition, eventually agreed to become Lennon's companion. The pair soon moved to California, beginning an 18-month period he later called his "lost weekend". In Los Angeles, Pang encouraged Lennon to develop regular contact with Julian, whom he had not seen for two years. He also rekindled friendships with Starr, McCartney, Beatles roadie Mal Evans, and Harry Nilsson. While Lennon was drinking with Nilsson, he misunderstood something that Pang had said and attempted to strangle her. Lennon relented only after he was physically restrained by Nilsson. When Lennon and Pang returned to their newly rented Manhattan apartment, they prepared a spare room for Julian when he visited them. Lennon, who had been inhibited by Ono in this regard, began to reestablish contact with other relatives and friends. By December, he and Pang were considering a house purchase, and he refused to accept Ono's telephone calls. In January 1975, he agreed to meet Ono, who claimed to have found a cure for smoking. After the meeting, he failed to return home or call Pang. When Pang telephoned the next day, Ono told her that Lennon was unavailable because he was exhausted after a hypnotherapy session. Two days later, Lennon reappeared at a joint dental appointment; he was stupefied and confused to such an extent that Pang believed he had been brainwashed. Lennon told Pang that his separation from Ono was now over, although Ono would allow him to continue seeing her as his mistress. Ono had previously suffered three miscarriages in her attempt to have a child with Lennon. When Ono and Lennon were reunited, she became pregnant. She initially said that she wanted to have an abortion but changed her mind and agreed to allow the pregnancy to continue on condition that Lennon adopt the role of househusband, which he agreed to do. Following Sean's birth, Lennon's subsequent hiatus from the music industry would span five years. He had a photographer take pictures of Sean every day of his first year and created numerous drawings for him, which were posthumously published as "Real Love: The Drawings for Sean". Lennon later proudly declared, "He didn't come out of my belly but, by God, I made his bones, because I've attended to every meal, and to how he sleeps, and to the fact that he swims like a fish." While Lennon and Starr remained consistently friendly during the years that followed the Beatles' break-up in 1970, his relationships with McCartney and Harrison varied. He was initially close to Harrison, but the two drifted apart after Lennon moved to Manhattan in 1971. When Harrison was in New York for his December 1974 "Dark Horse" tour, Lennon agreed to join him on stage, but failed to appear after an argument over Lennon's refusal to sign an agreement that would finally dissolve the Beatles' legal partnership. Lennon eventually signed the papers while he was on holiday in Florida with Pang and Julian. Harrison offended Lennon in 1980 when he published an autobiography that made little mention of him. Lennon told "Playboy", "I was hurt by it. By glaring omission... my influence on his life is absolutely zilch... he remembers every two-bit sax player or guitarist he met in subsequent years. I'm not in the book." Lennon's most intense feelings were reserved for McCartney. In addition to attacking him with the lyrics of "How Do You Sleep?", Lennon argued with him through the press for three years after the group split. 
The two later began to reestablish something of the close friendship they had once known, and in 1974, they even played music together again before eventually growing apart once more. During McCartney's final visit in April 1976, Lennon said that they watched the episode of "Saturday Night Live" in which Lorne Michaels made a $3,000 cash offer to get the Beatles to reunite on the show. The pair considered going to the studio to make a joke appearance, attempting to claim their share of the money, but were too tired. Lennon summarised his feelings towards McCartney in an interview three days before his death: "Throughout my career, I've selected to work with... only two people: Paul McCartney and Yoko Ono... That ain't bad picking." Along with his estrangement from McCartney, Lennon always felt a musical competitiveness with him and kept an ear on his music. During his five-year career break, Lennon was content to sit back as long as McCartney was producing what Lennon saw as mediocre material. Lennon took notice when McCartney released "Coming Up" in 1980, which was the year Lennon returned to the studio. "It's driving me crackers!" he jokingly complained, because he could not get the tune out of his head. That same year, Lennon was asked whether the group were dreaded enemies or the best of friends, and he replied that they were neither, and that he had not seen any of them in a long time. But he also said, "I still love those guys. The Beatles are over, but John, Paul, George and Ringo go on." Lennon and Ono used their honeymoon as a Bed-In for Peace at the Amsterdam Hilton Hotel; the March 1969 event attracted worldwide media ridicule. During a second Bed-In three months later at the Queen Elizabeth Hotel in Montreal, Lennon wrote and recorded "Give Peace a Chance". Released as a single, the song was quickly interpreted as an anti-war anthem and sung by a quarter of a million demonstrators against the Vietnam War in Washington, DC, on 15 November, the second Vietnam Moratorium Day. In December, they paid for billboards in 10 cities around the world that declared, in the national language, "War Is Over! If You Want It". Later that year, Lennon and Ono supported efforts by the family of James Hanratty to prove his innocence. Hanratty had been hanged in 1962. According to Lennon, those who had condemned Hanratty were "the same people who are running guns to South Africa and killing blacks in the streets. ... The same bastards are in control, the same people are running everything, it's the whole bullshit bourgeois scene." In London, Lennon and Ono staged a "Britain Murdered Hanratty" banner march and a "Silent Protest For James Hanratty", and produced a 40-minute documentary on the case. At an appeal hearing years later, Hanratty's conviction was upheld after DNA evidence linked him to the crime. Lennon and Ono showed their solidarity with the Clydeside UCS workers' work-in of 1971 by sending a bouquet of red roses and a cheque for £5,000. On moving to New York City in August that year, they befriended two of the Chicago Seven, Yippie peace activists Jerry Rubin and Abbie Hoffman. Another political activist, John Sinclair, poet and co-founder of the White Panther Party, was serving ten years in prison for selling two joints of marijuana after previous convictions for possession of the drug. 
In December 1971 at Ann Arbor, Michigan, 15,000 people attended the "John Sinclair Freedom Rally", a protest and benefit concert with contributions from Lennon, Stevie Wonder, Bob Seger, Bobby Seale of the Black Panther Party, and others. Lennon and Ono, backed by David Peel and Rubin, performed an acoustic set of four songs from their forthcoming "Some Time in New York City" album, including "John Sinclair", whose lyrics called for his release. The day before the rally, the Michigan Senate passed a bill that significantly reduced the penalties for possession of marijuana, and four days later Sinclair was released on an appeal bond. The performance was recorded, and two of the tracks later appeared on "John Lennon Anthology" (1998). Following the "Bloody Sunday" incident in Northern Ireland in 1972, in which fourteen unarmed civil rights protesters were shot dead by the British Army, Lennon said that given the choice between the army and the IRA (who were not involved in the incident) he would side with the latter. Lennon and Ono wrote two songs protesting British presence and actions in Ireland for their "Some Time in New York City" album: "Luck of the Irish" and "Sunday Bloody Sunday". In 2000, David Shayler, a former member of Britain's domestic security service MI5, suggested that Lennon had given money to the IRA, though this was swiftly denied by Ono. Biographer Bill Harry records that following Bloody Sunday, Lennon and Ono financially supported the production of the film "The Irish Tapes", a political documentary with a Republican slant. According to FBI surveillance reports, and confirmed by Tariq Ali in 2006, Lennon was sympathetic to the International Marxist Group, a Trotskyist group formed in Britain in 1968. However, the FBI considered Lennon to have limited effectiveness as a revolutionary, as he was "constantly under the influence of narcotics". In 1973, Lennon contributed a limerick called "Why Make It Sad To Be Gay?" to Len Richmond's "The Gay Liberation Book". Lennon's last act of political activism was a statement in support of the striking minority sanitation workers in San Francisco on 5 December 1980. He and Ono planned to join the workers' protest on 14 December. Lennon's former personal assistant Fred Seaman, who had earlier admitted to and been convicted of stealing Lennon's personal belongings, claimed that by this time Lennon had quietly renounced the counterculture views he had helped promote during the 1960s and 1970s, had become more aligned with conservatism, and was supportive of Ronald Reagan; whether Lennon actually adopted a more conservative world view is disputed. Some commentators have noted that Seaman's claims carry little credibility, given that he had abused his position to steal from Lennon and had earlier released a book of dubious allegations about Lennon that was largely ignored. Following the impact of "Give Peace a Chance" and "Happy Xmas (War Is Over)", both of which were strongly associated with the anti-Vietnam War movement, the Nixon administration heard rumours of Lennon's involvement in a concert to be held in San Diego at the same time as the Republican National Convention and tried to have him deported. Nixon believed that Lennon's anti-war activities could cost him his reelection; Republican Senator Strom Thurmond suggested in a February 1972 memo that "deportation would be a strategic counter-measure" against Lennon. 
The next month the United States Immigration and Naturalization Service (INS) began deportation proceedings, arguing that his 1968 misdemeanour conviction for cannabis possession in London had made him ineligible for admission to the United States. Lennon spent the next three and a half years in and out of deportation hearings until 8 October 1975, when a court of appeals barred the deportation attempt, stating "the courts will not condone selective deportation based upon secret political grounds". While the legal battle continued, Lennon attended rallies and made television appearances. Lennon and Ono co-hosted "The Mike Douglas Show" for a week in February 1972, introducing guests such as Jerry Rubin and Bobby Seale to mid-America. In 1972, Bob Dylan wrote a letter to the INS defending Lennon, stating: "John and Yoko add a great voice and drive to the country's so-called art institution. They inspire and transcend and stimulate and by doing so, only help others to see pure light and in doing that, put an end to this dull taste of petty commercialism which is being passed off as Artist Art by the overpowering mass media. Hurray for John and Yoko. Let them stay and live here and breathe. The country's got plenty of room and space. Let John and Yoko stay!" On 23 March 1973, Lennon was ordered to leave the US within 60 days. Ono, meanwhile, was granted permanent residence. In response, Lennon and Ono held a press conference on 1 April 1973 at the New York City Bar Association, where they announced the formation of the state of Nutopia, a place with "no land, no boundaries, no passports, only people". Waving the white flag of Nutopia (two handkerchiefs), they asked for political asylum in the US. The press conference was filmed and appeared in the 2006 documentary "The US vs. John Lennon". Lennon's "Mind Games" (1973) included the track "Nutopian International Anthem", which comprised three seconds of silence. Soon after the press conference, Nixon's involvement in a political scandal came to light, and in June the Watergate hearings began in Washington, DC. They led to the president's resignation 14 months later. Nixon's successor, Gerald Ford, showed little interest in continuing the battle against Lennon, and the deportation order was overturned in 1975. The following year, Lennon received his "green card" certifying his permanent residency, and when Jimmy Carter was inaugurated as president in January 1977, Lennon and Ono attended the Inaugural Ball. After Lennon's death, historian Jon Wiener filed a Freedom of Information Act request for FBI files that documented the Bureau's role in the deportation attempt. The FBI admitted it had 281 pages of files on Lennon, but refused to release most of them on the grounds that they contained national security information. In 1983, Wiener sued the FBI with the help of the American Civil Liberties Union of Southern California. It took 14 years of litigation to force the FBI to release the withheld pages. The ACLU, representing Wiener, won a favourable decision in their suit against the FBI in the Ninth Circuit in 1991. The Justice Department appealed the decision to the Supreme Court in April 1992, but the court declined to review the case. In 1997, respecting President Bill Clinton's newly instituted rule that documents should be withheld only if releasing them would involve "foreseeable harm", the Justice Department settled most of the outstanding issues outside court by releasing all but 10 of the contested documents. 
Wiener published the results of his 14-year campaign in January 2000. "Gimme Some Truth: The John Lennon FBI Files" contained facsimiles of the documents, including "lengthy reports by confidential informants detailing the daily lives of anti-war activists, memos to the White House, transcripts of TV shows on which Lennon appeared, and a proposal that Lennon be arrested by local police on drug charges". The story is told in the documentary "The US vs. John Lennon". The final 10 documents in Lennon's FBI file, which reported on his ties with London anti-war activists in 1971 and had been withheld as containing "national security information provided by a foreign government under an explicit promise of confidentiality", were released in December 2006. They contained no indication that the British government had regarded Lennon as a serious threat; one example of the released material was a report that two prominent British leftists had hoped Lennon would finance a left-wing bookshop and reading room. Beatles biographer Bill Harry wrote that Lennon began drawing and writing creatively at an early age with the encouragement of his uncle. He collected his stories, poetry, cartoons and caricatures in a Quarry Bank High School exercise book that he called the "Daily Howl". The drawings were often of crippled people, the writings satirical, and wordplay abounded throughout the book. According to classmate Bill Turner, Lennon created the "Daily Howl" to amuse his best friend and later Quarrymen bandmate Pete Shotton, to whom he would show his work before he let anyone else see it. Turner said that Lennon "had an obsession for Wigan Pier. It kept cropping up", and in Lennon's story "A Carrot in a Potato Mine", "the mine was at the end of Wigan Pier." Turner described how one of Lennon's cartoons depicted a bus stop sign annotated with the question, "Why?". Above was a flying pancake, and below, "a blind man wearing glasses leading along a blind dog—also wearing glasses". Lennon's love of wordplay and nonsense with a twist found a wider audience when he was 24. Harry writes that "In His Own Write" (1964) was published after "Some journalist who was hanging around the Beatles came to me and I ended up showing him the stuff. They said, 'Write a book' and that's how the first one came about". Like the "Daily Howl", it contained a mix of formats including short stories, poetry, plays and drawings. One story, "Good Dog Nigel", tells the tale of "a happy dog, urinating on a lamp post, barking, wagging his tail—until he suddenly hears a message that he will be killed at three o'clock". "The Times Literary Supplement" considered the poems and stories "remarkable ... also very funny ... the nonsense runs on, words and images prompting one another in a chain of pure fantasy". "Book Week" reported, "This is nonsense writing, but one has only to review the literature of nonsense to see how well Lennon has brought it off. While some of his homonyms are gratuitous word play, many others have not only double meaning but a double edge." Lennon was surprised not only by the positive reception but by the fact that the book was reviewed at all, and suggested that readers "took the book more seriously than I did myself. It just began as a laugh for me". In combination with "A Spaniard in the Works" (1965), "In His Own Write" formed the basis of the stage play "The John Lennon Play: In His Own Write", co-adapted by Victor Spinetti and Adrienne Kennedy. 
After negotiations between Lennon, Spinetti and the artistic director of the National Theatre, Sir Laurence Olivier, the play opened at The Old Vic in 1968. Lennon and Ono attended the opening night performance, their second public appearance together. In 1969, Lennon wrote "Four in Hand", a skit based on his teenage experiences of group masturbation, for Kenneth Tynan's play "Oh! Calcutta!". After Lennon's death, further works were published, including "Skywriting by Word of Mouth" (1986), "Ai: Japan Through John Lennon's Eyes: A Personal Sketchbook" (1992), with Lennon's illustrations of the definitions of Japanese words, and "Real Love: The Drawings for Sean" (1999). "The Beatles Anthology" (2000) also presented examples of his writings and drawings. Lennon played a mouth organ during a bus journey to visit his cousin in Scotland; the music caught the driver's ear. Impressed, the driver told Lennon of a harmonica he could have if he came to Edinburgh the following day, where one had been stored in the bus depot since a passenger left it on a bus. The professional instrument quickly replaced Lennon's toy. He would continue to play harmonica, often using the instrument during the Beatles' Hamburg years, and it became a signature sound in the group's early recordings. His mother taught him how to play the banjo, later buying him an acoustic guitar. At 16, he played rhythm guitar with the Quarrymen. As his career progressed, he played a variety of electric guitars, predominantly the Rickenbacker 325, Epiphone Casino and Gibson J-160E, and, from the start of his solo career, the Gibson Les Paul Junior. "Double Fantasy" producer Jack Douglas claimed that since his Beatle days Lennon habitually tuned his D-string slightly flat, so his Aunt Mimi could tell which guitar was his on recordings. Occasionally he played a six-string bass guitar, the Fender Bass VI, providing bass on some Beatles numbers ("Back in the U.S.S.R.", "The Long and Winding Road", "Helter Skelter") that occupied McCartney with another instrument. His other instrument of choice was the piano, on which he composed many songs, including "Imagine", described as his best-known solo work. His jamming on a piano with McCartney in 1963 led to the creation of the Beatles' first US number one, "I Want to Hold Your Hand". In 1964, he became one of the first British musicians to acquire a Mellotron keyboard, though it was not heard on a Beatles recording until "Strawberry Fields Forever" in 1967. When the Beatles recorded "Twist and Shout", the final track during the mammoth one-day session that produced the band's 1963 debut album, "Please Please Me", Lennon's voice, already compromised by a cold, came close to giving out. Lennon said, "I couldn't sing the damn thing, I was just screaming." In the words of biographer Barry Miles, "Lennon simply shredded his vocal cords in the interests of rock 'n' roll." The Beatles' producer, George Martin, tells how Lennon "had an inborn dislike of his own voice which I could never understand. He was always saying to me: 'DO something with my voice!... put something on it... Make it "different".'" Martin obliged, often using double-tracking and other techniques. As his Beatles era segued into his solo career, his singing voice found a widening range of expression. 
Biographer Chris Gregory writes of Lennon "tentatively beginning to expose his insecurities in a number of acoustic-led 'confessional' ballads, so beginning the process of 'public therapy' that will eventually culminate in the primal screams of "Cold Turkey" and the cathartic "John Lennon/Plastic Ono Band"." Music critic Robert Christgau calls this Lennon's "greatest vocal performance... from scream to whine, is modulated electronically... echoed, filtered, and double tracked." David Stuart Ryan notes Lennon's vocal delivery to range from "extreme vulnerability, sensitivity and even naivety" to a hard "rasping" style. Wiener too describes contrasts, saying the singer's voice can be "at first subdued; soon it almost cracks with despair". Music historian Ben Urish recalls hearing the Beatles' "Ed Sullivan Show" performance of "This Boy" played on the radio a few days after Lennon's murder: "As Lennon's vocals reached their peak... it hurt too much to hear him scream with such anguish and emotion. But it was my emotions I heard in his voice. Just like I always had." Music historians Schinder and Schwartz wrote of the transformation in popular music styles that took place between the 1950s and the 1960s. They said that the Beatles' influence cannot be overstated: having "revolutionised the sound, style, and attitude of popular music and opened rock and roll's doors to a tidal wave of British rock acts", the group then "spent the rest of the 1960s expanding rock's stylistic frontiers". Liam Gallagher and his group Oasis were among the many who acknowledged the band's influence; he identified Lennon as a hero. In 1999, he named his first child Lennon Gallagher in tribute. On National Poetry Day in 1999, the BBC conducted a poll to identify the UK's favourite song lyric and announced "Imagine" as the winner. In a 2006 "Guardian" article, Jon Wiener wrote: "For young people in 1972, it was thrilling to see Lennon's courage in standing up to [US President] Nixon. That willingness to take risks with his career, and his life, is one reason why people still admire him today." For music historians Urish and Bielen, Lennon's most significant effort was "the self-portraits ... in his songs [which] spoke to, for, and about, the human condition." In 2013, Downtown Music Publishing signed a publishing administration agreement for the US with Lenono Music and Ono Music, home to the song catalogues of John Lennon and Yoko Ono respectively. Under the terms of the agreement, Downtown represents Lennon's solo works, including "Imagine", "Instant Karma (We All Shine On)", "Power to the People", "Happy X-Mas (War Is Over)", "Jealous Guy", "(Just Like) Starting Over" and others. Lennon continues to be mourned throughout the world and has been the subject of numerous memorials and tributes. In 2002, the airport in Lennon's home town was renamed the Liverpool John Lennon Airport. On what would have been Lennon's 70th birthday in 2010, the John Lennon Peace Monument was unveiled in Chavasse Park, Liverpool, by Cynthia and Julian Lennon. The sculpture, entitled "Peace & Harmony", exhibits peace symbols and carries the inscription "Peace on Earth for the Conservation of Life · In Honour of John Lennon 1940–1980". In December 2013, the International Astronomical Union named one of the craters on Mercury after Lennon. The Lennon–McCartney songwriting partnership is regarded as one of the most influential and successful of the 20th century. 
As performer, writer or co-writer, Lennon had 25 number one singles on the US Hot 100 chart. His album sales in the US stand at 14 million units. "Double Fantasy" was his best-selling solo album, at three million shipments in the US. Released shortly before his death, it won the 1981 Grammy Award for Album of the Year. The following year, the BRIT Award for Outstanding Contribution to Music was given to Lennon. Participants in a 2002 BBC poll voted him eighth in its list of the "100 Greatest Britons". Between 2003 and 2008, "Rolling Stone" recognised Lennon in several reviews of artists and music, ranking him fifth on its list of the "100 Greatest Singers of All Time" and 38th on the "100 Greatest Artists of All Time", and ranking his albums "John Lennon/Plastic Ono Band" and "Imagine" 22nd and 76th respectively on "Rolling Stone's 500 Greatest Albums of All Time". He was appointed Member of the Order of the British Empire (MBE) with the other Beatles in 1965 (returned in 1969). Lennon was posthumously inducted into the Songwriters Hall of Fame in 1987 and into the Rock and Roll Hall of Fame in 1994. Although "Imagine" did not top the "Billboard" Hot 100, it reached number one in 1971 on the US singles chart compiled by "Record World" magazine. J. R. R. Tolkien John Ronald Reuel Tolkien (3 January 1892 – 2 September 1973) was an English writer, poet, philologist, and university professor who is best known as the author of the classic high fantasy works "The Hobbit", "The Lord of the Rings", and "The Silmarillion". He served as the Rawlinson and Bosworth Professor of Anglo-Saxon and Fellow of Pembroke College, Oxford, from 1925 to 1945 and Merton Professor of English Language and Literature and Fellow of Merton College, Oxford, from 1945 to 1959. He was at one time a close friend of C. S. Lewis—they were both members of the informal literary discussion group known as the Inklings. Tolkien was appointed a Commander of the Order of the British Empire by Queen Elizabeth II on 28 March 1972. After Tolkien's death, his son Christopher published a series of works based on his father's extensive notes and unpublished manuscripts, including "The Silmarillion". These, together with "The Hobbit" and "The Lord of the Rings", form a connected body of tales, poems, fictional histories, invented languages, and literary essays about a fantasy world called Arda and Middle-earth within it. Between 1951 and 1955, Tolkien applied the term "legendarium" to the larger part of these writings. While many other authors had published works of fantasy before Tolkien, the great success of "The Hobbit" and "The Lord of the Rings" led directly to a popular resurgence of the genre. This has caused Tolkien to be popularly identified as the "father" of modern fantasy literature—or, more precisely, of high fantasy. In 2008, "The Times" ranked him sixth on a list of "The 50 greatest British writers since 1945". "Forbes" ranked him the 5th top-earning "dead celebrity" in 2009. Tolkien's paternal ancestors were middle-class craftsmen who made and sold clocks, watches and pianos in London and Birmingham. The Tolkien family had emigrated from Germany in the 18th century but had become "quickly intensely English". According to the family tradition, the Tolkiens had arrived in England in 1756, as refugees from Frederick the Great's invasion of the Electorate of Saxony during the Seven Years' War. 
Several families with the surname Tolkien or similar spelling live in northwestern Germany, mainly in Lower Saxony and Hamburg. Tolkien believed his surname derived from the German word "tollkühn", meaning "foolhardy", and jokingly inserted himself as a "cameo" into "The Notion Club Papers" under the literally translated name Rashbold. However, this origin of the name has not been proven. John Ronald Reuel Tolkien was born on 3 January 1892 in Bloemfontein in the Orange Free State (now Free State Province in South Africa) to Arthur Reuel Tolkien (1857–1896), an English bank manager, and his wife Mabel, "née" Suffield (1870–1904). The couple had left England when Arthur was promoted to head the Bloemfontein office of the British bank for which he worked. Tolkien had one sibling, his younger brother, Hilary Arthur Reuel Tolkien, who was born on 17 February 1894. As a child, Tolkien was bitten by a large baboon spider in the garden, an event some think later echoed in his stories, although he admitted no actual memory of the event and no special hatred of spiders as an adult. In another incident, a young family servant, who thought Tolkien a beautiful child, took the baby to his kraal to show him off, returning him the next morning. When he was three, he went to England with his mother and brother on what was intended to be a lengthy family visit. His father, however, died in South Africa of rheumatic fever before he could join them. This left the family without an income, so Tolkien's mother took him to live with her parents in Kings Heath, Birmingham. Soon after, in 1896, they moved to Sarehole (now in Hall Green), then a Worcestershire village, later annexed to Birmingham. He enjoyed exploring Sarehole Mill and Moseley Bog and the Clent, Lickey and Malvern Hills, which would later inspire scenes in his books, along with nearby towns and villages such as Bromsgrove, Alcester, and Alvechurch and places such as his aunt Jane's farm of Bag End, the name of which he used in his fiction. Mabel Tolkien taught her two children at home. Ronald, as he was known in the family, was a keen pupil. She taught him a great deal of botany and awakened in him the enjoyment of the look and feel of plants. Young Tolkien liked to draw landscapes and trees, but his favourite lessons were those concerning languages, and his mother taught him the rudiments of Latin very early. Tolkien could read by the age of four and could write fluently soon afterwards. His mother allowed him to read many books. He disliked "Treasure Island" and "The Pied Piper" and thought "Alice's Adventures in Wonderland" by Lewis Carroll was "amusing but disturbing". He liked stories about "Red Indians" (Native Americans) and the fantasy works by George MacDonald. In addition, the "Fairy Books" of Andrew Lang were particularly important to him and their influence is apparent in some of his later writings. Mabel Tolkien was received into the Roman Catholic Church in 1900 despite vehement protests by her Baptist family, which stopped all financial assistance to her. In 1904, when J.R.R. Tolkien was 12, his mother died of acute diabetes at Fern Cottage in Rednal, which she was renting. She was then about 34 years of age, about as old as a person with diabetes mellitus type 1 could live without treatment—insulin would not be discovered until two decades later. 
Nine years after her death, Tolkien wrote, "My own dear mother was a martyr indeed, and it is not to everybody that God grants so easy a way to his great gifts as he did to Hilary and myself, giving us a mother who killed herself with labour and trouble to ensure us keeping the faith." Prior to her death, Mabel Tolkien had assigned the guardianship of her sons to her close friend, Fr. Francis Xavier Morgan of the Birmingham Oratory, who was to bring them up as good Catholics. In a 1965 letter to his son Michael, Tolkien recalled the influence of the man whom he always called "Father Francis": "He was an upper-class Welsh-Spaniard Tory, and seemed to some just a pottering old gossip. He was—and he was "not". I first learned charity and forgiveness from him; and in the light of it pierced even the 'liberal' darkness out of which I came, knowing more [i.e. Tolkien having grown up knowing more] about 'Bloody Mary' than the Mother of Jesus—who was never mentioned except as an object of wicked worship by the Romanists." After his mother's death, Tolkien grew up in the Edgbaston area of Birmingham and attended King Edward's School, Birmingham, and later St. Philip's School. In 1903, he won a Foundation Scholarship and returned to King Edward's. While a pupil there, Tolkien was one of the cadets from the school's Officers Training Corps who helped "line the route" for the 1910 coronation parade of King George V. Like the other cadets from King Edward's, Tolkien was posted just outside the gates of Buckingham Palace. In Edgbaston, Tolkien lived in the shadow of Perrott's Folly and the Victorian tower of Edgbaston Waterworks, which may have influenced the images of the dark towers within his works. The romantic medievalist paintings of Edward Burne-Jones and the Pre-Raphaelite Brotherhood were another strong influence; the Birmingham Museum and Art Gallery had a large collection of their works on public display. While in his early teens, Tolkien had his first encounter with a constructed language, Animalic, an invention of his cousins, Mary and Marjorie Incledon. At that time, he was studying Latin and Anglo-Saxon. Their interest in Animalic soon died away, but Mary and others, including Tolkien himself, invented a new and more complex language called Nevbosh. The next constructed language he came to work with, Naffarin, would be his own creation. Tolkien learned Esperanto some time before 1909. Around 10 June 1909, he composed "The Book of the Foxrook", a sixteen-page notebook, where the "earliest example of one of his invented alphabets" appears. Short texts in this notebook are written in Esperanto. In 1911, while they were at King Edward's School, Tolkien and three friends, Rob Gilson, Geoffrey Bache Smith and Christopher Wiseman, formed a semi-secret society they called the T.C.B.S. The initials stood for Tea Club and Barrovian Society, alluding to their fondness for drinking tea in Barrow's Stores near the school and, secretly, in the school library. After leaving school, the members stayed in touch and, in December 1914, they held a "council" in London at Wiseman's home. For Tolkien, the result of this meeting was a strong dedication to writing poetry. 
In 1911, Tolkien went on a summer holiday in Switzerland, a trip that he recollected vividly in a 1968 letter, noting that Bilbo's journey across the Misty Mountains ("including the glissade down the slithering stones into the pine woods") is directly based on his adventures as their party of 12 hiked from Interlaken to Lauterbrunnen and on to camp in the moraines beyond Mürren. Fifty-seven years later, Tolkien remembered his regret at leaving the view of the eternal snows of Jungfrau and Silberhorn, "the Silvertine (Celebdil) of my dreams". They went across the Kleine Scheidegg to Grindelwald and on across the Grosse Scheidegg to Meiringen. They continued across the Grimsel Pass, through the upper Valais to Brig and on to the Aletsch glacier and Zermatt. In October of the same year, Tolkien began studying at Exeter College, Oxford. He initially studied Classics but changed his course in 1913 to English Language and Literature, graduating in 1915 with first-class honours in his final examinations. At the age of 16, J.R.R. Tolkien met Edith Mary Bratt, who was three years his senior, when he and his brother Hilary moved into the boarding house where she lived in Duchess Road, Edgbaston. According to Humphrey Carpenter, his guardian, Father Morgan, viewed Edith as the reason for Tolkien's having "muffed" his exams and considered it "altogether unfortunate" that his surrogate son was romantically involved with an older, Protestant woman. Morgan prohibited Tolkien from meeting, talking to, or even corresponding with her until he was 21. Tolkien obeyed this prohibition to the letter, with one notable early exception, over which Father Morgan threatened to cut short his university career if he did not stop. Tolkien recalled these events in a 1941 letter to his son Michael. On the evening of his 21st birthday, Tolkien wrote to Edith, who was living with family friend C. H. Jessop at Cheltenham. He declared that he had never ceased to love her and asked her to marry him. Edith replied that she had already accepted the proposal of George Field, the brother of one of her closest schoolfriends. Edith said, however, that she had agreed to marry Field only because she felt "on the shelf" and had begun to doubt that Tolkien still cared for her. She explained that, because of Tolkien's letter, everything had changed. On 8 January 1913, Tolkien travelled by train to Cheltenham and was met on the platform by Edith. The two took a walk into the countryside, sat under a railway viaduct, and talked. By the end of the day, Edith had agreed to accept Tolkien's proposal. She wrote to Field and returned her engagement ring. Field was "dreadfully upset at first", and the Field family was "insulted and angry". Upon learning of Edith's new plans, Jessop wrote to her guardian, "I have nothing to say against Tolkien, he is a cultured gentleman, but his prospects are poor in the extreme, and when he will be in a position to marry I cannot imagine. Had he adopted a profession it would have been different." Following their engagement, Edith reluctantly announced that she was converting to Catholicism at Tolkien's insistence. Jessop, "like many others of his age and class ... strongly anti-Catholic", was infuriated, and he ordered Edith to find other lodgings. Edith Bratt and Ronald Tolkien were formally engaged at Birmingham in January 1913, and married at St. Mary Immaculate Roman Catholic Church, Warwick, on 22 March 1916. 
In his 1941 letter to Michael, Tolkien expressed admiration for his wife's willingness to marry a man with no job, little money, and no prospects except the likelihood of being killed in the Great War. In August 1914 the United Kingdom entered the First World War. Tolkien's relatives were shocked when he elected not to immediately volunteer for the British Army. In a 1941 letter to his son Michael, Tolkien recalled, "In those days chaps joined up, or were scorned publicly. It was a nasty cleft to be in for a young man with too much imagination and little physical courage." Instead, Tolkien, "endured the obloquy", and entered a programme wherein he delayed enlistment until completing his degree. By the time he passed his Finals in July 1915, Tolkien recalled that the hints were "becoming outspoken from relatives". He was then commissioned as a temporary second lieutenant in the Lancashire Fusiliers on 15 July 1915. He trained with the 13th (Reserve) Battalion on Cannock Chase, Staffordshire, for eleven months. In a letter to Edith, Tolkien complained, "Gentlemen are rare among the superiors, and even human beings rare indeed." Following their wedding, Lieutenant and Mrs. Tolkien took up lodgings near the training camp. On 2 June 1916, Tolkien received a telegram summoning him to Folkestone for posting to France. The Tolkiens spent the night before his departure in a room at the Plough & Harrow Hotel in Edgbaston, Birmingham. He later wrote, "Junior officers were being killed off, a dozen a minute. Parting from my wife then ... it was like a death." On 5 June 1916, Tolkien boarded a troop transport for an overnight voyage to Calais. Like other soldiers arriving for the first time, he was sent to the British Expeditionary Force's (BEF) base depot at Étaples. On 7 June, he was informed that he had been assigned as a signals officer to the 11th (Service) Battalion, Lancashire Fusiliers. The battalion was part of the 74th Brigade, 25th Division. While waiting to be summoned to his unit, Tolkien sank into boredom. To pass the time, he composed a poem entitled "The Lonely Isle", which was inspired by his feelings during the sea crossing to Calais. To evade the British Army's postal censorship, he also developed a code of dots by which Edith could track his movements. He left Étaples on 27 June 1916 and joined his battalion at Rubempré, near Amiens. He found himself commanding enlisted men who were drawn mainly from the mining, milling, and weaving towns of Lancashire. According to John Garth, he "felt an affinity for these working class men", but military protocol prohibited friendships with "other ranks". Instead, he was required to "take charge of them, discipline them, train them, and probably censor their letters ... If possible, he was supposed to inspire their love and loyalty." Tolkien later lamented, "The most improper job of any man ... is bossing other men. Not one in a million is fit for it, and least of all those who seek the opportunity." Tolkien arrived at the Somme in early July 1916. In between terms behind the lines at Bouzincourt, he participated in the assaults on the Schwaben Redoubt and the Leipzig salient. Tolkien's time in combat was a terrible stress for Edith, who feared that every knock on the door might carry news of her husband's death. To get around the British Army's postal censorship, the Tolkiens developed a secret code for his letters home. By using the code, Edith could track her husband's movements on a map of the Western Front. 
On 27 October 1916, as his battalion attacked Regina Trench, Tolkien came down with trench fever, a disease carried by lice. He was invalided to England on 8 November 1916. Many of his dearest school friends were killed in the war. Among them was Rob Gilson of the Tea Club and Barrovian Society, who was killed on the first day of the Somme while leading his men in the assault on Beaumont Hamel. Fellow T.C.B.S. member Geoffrey Smith was killed during the same battle when a German artillery shell landed on a first aid post. Tolkien's battalion was almost completely wiped out following his return to England. Tolkien might well have been killed himself but, according to John Garth, he had suffered from health problems and had been removed from combat multiple times. In later years, Tolkien indignantly declared that those who searched his works for parallels to the Second World War were entirely mistaken. A weak and emaciated Tolkien spent the remainder of the war alternating between hospitals and garrison duties, being deemed medically unfit for general service. During his recovery in a cottage in Little Haywood, Staffordshire, he began to work on what he called "The Book of Lost Tales", beginning with "The Fall of Gondolin". "Lost Tales" represented Tolkien's attempt to create a mythology for England, a project he would abandon without ever completing. Throughout 1917 and 1918 his illness kept recurring, but he had recovered enough to do home service at various camps. It was at this time that Edith bore their first child, John Francis Reuel Tolkien. In a 1941 letter, Tolkien described his son John as "(conceived and carried during the starvation-year of 1917 and the great U-Boat campaign) round about the Battle of Cambrai, when the end of the war seemed as far off as it does now". Tolkien was promoted to the temporary rank of lieutenant on 6 January 1918. When he was stationed at Kingston upon Hull, he and Edith went walking in the woods at nearby Roos, and Edith began to dance for him in a clearing among the flowering hemlock. Tolkien remembered the scene after his wife's death in 1971; the incident inspired the account of the meeting of Beren and Lúthien. On 3 November 1920, Tolkien was demobilized and left the army, retaining his rank of lieutenant. His first civilian job after World War I was at the "Oxford English Dictionary", where he worked mainly on the history and etymology of words of Germanic origin beginning with the letter "W". In 1920, he took up a post as Reader in English Language at the University of Leeds, and became the youngest professor there. While at Leeds, he produced "A Middle English Vocabulary" and a definitive edition of "Sir Gawain and the Green Knight" with E. V. Gordon, both becoming academic standard works for several decades. He also translated "Sir Gawain", "Pearl", and "Sir Orfeo". In 1925, he returned to Oxford as Rawlinson and Bosworth Professor of Anglo-Saxon, with a fellowship at Pembroke College. In mid-1919 he had begun to privately tutor undergraduates, most importantly those of Lady Margaret Hall and St Hugh's College, given that the women's colleges were in great need of good teachers in their early years. During his time at Pembroke College, Tolkien wrote "The Hobbit" and the first two volumes of "The Lord of the Rings", whilst living at 20 Northmoor Road in North Oxford (where a blue plaque was placed in 2002). 
He also published a philological essay in 1932 on the name "Nodens", following Sir Mortimer Wheeler's unearthing of a Roman Asclepeion at Lydney Park, Gloucestershire, in 1928. In the 1920s, Tolkien undertook a translation of "Beowulf", which he finished in 1926. He never published it. It was finally edited by his son and published in 2014, more than forty years after Tolkien's death and almost 90 years after its completion. Ten years after finishing his translation, Tolkien gave a highly acclaimed lecture on the work, "Beowulf: The Monsters and the Critics", which had a lasting influence on "Beowulf" research. Lewis E. Nicholson said that the article Tolkien wrote about "Beowulf" is "widely recognized as a turning point in Beowulfian criticism", noting that Tolkien established the primacy of the poetic nature of the work as opposed to its purely linguistic elements. At the time, the consensus of scholarship deprecated "Beowulf" for dealing with childish battles with monsters rather than realistic tribal warfare; Tolkien argued that the author of "Beowulf" was addressing human destiny in general, not as limited by particular tribal politics, and therefore the monsters were essential to the poem. Where "Beowulf" does deal with specific tribal struggles, as at Finnsburg, Tolkien argued firmly against reading in fantastic elements. In the essay, Tolkien also revealed how highly he regarded Beowulf: "Beowulf is among my most valued sources", and this influence may be seen throughout his Middle-earth legendarium. According to Humphrey Carpenter, Tolkien had an ingenious means of beginning his series of lectures on "Beowulf", and decades later W. H. Auden wrote admiringly to his former professor about them. In the run-up to the Second World War, Tolkien was earmarked as a codebreaker. In January 1939, he was asked whether he would be prepared to serve in the cryptographic department of the Foreign Office in the event of national emergency. He replied in the affirmative and, beginning on 27 March, took an instructional course at the London HQ of the Government Code and Cypher School. A record of his training was found that included the notation "keen" next to his name, although Tolkien scholar Anders Stenström suggested that "In all likelihood, that is not a record of Tolkien's interest, but a note about how to pronounce the name." He was informed in October that his services would not be required. In 1945, Tolkien moved to Merton College, Oxford, becoming the Merton Professor of English Language and Literature, in which post he remained until his retirement in 1959. He served as an external examiner for University College, Dublin, for many years. In 1954 Tolkien received an honorary degree from the National University of Ireland (of which U.C.D. was a constituent college). Tolkien completed "The Lord of the Rings" in 1948, close to a decade after the first sketches. Tolkien also translated the Book of Jonah for the "Jerusalem Bible", which was published in 1966. The Tolkiens had four children: John Francis Reuel Tolkien (17 November 1917 – 22 January 2003), Michael Hilary Reuel Tolkien (22 October 1920 – 27 February 1984), Christopher John Reuel Tolkien (born 21 November 1924) and Priscilla Mary Anne Reuel Tolkien (born 18 June 1929). Tolkien was very devoted to his children and sent them illustrated letters from Father Christmas when they were young. Each year more characters were added, such as the North Polar Bear (Father Christmas's helper), the Snow Man (his gardener), Ilbereth the elf (his secretary), and various other, minor characters. 
The major characters would relate tales of Father Christmas's battles against goblins who rode on bats and the various pranks committed by the North Polar Bear. During his life in retirement, from 1959 up to his death in 1973, Tolkien received steadily increasing public attention and literary fame. In 1961, his friend C. S. Lewis even nominated him for the Nobel Prize in Literature. The sales of his books were so profitable that he regretted that he had not chosen early retirement. At first, he wrote enthusiastic answers to readers' enquiries, but he became increasingly unhappy about the sudden popularity of his books with the 1960s counter-culture movement. In a 1972 letter, he deplored having become a cult-figure, but admitted that "even the nose of a very modest idol ... cannot remain entirely untickled by the sweet smell of incense!" Fan attention became so intense that Tolkien had to take his phone number out of the public directory, and eventually he and Edith moved to Bournemouth, which was then a seaside resort patronized by the British upper middle class. Tolkien's status as a best-selling author gave them easy entry into polite society, but Tolkien deeply missed the company of his fellow Inklings. Edith, however, was overjoyed to step into the role of a society hostess, which had been the reason that Tolkien selected Bournemouth in the first place. Edith Tolkien died on 29 November 1971, at the age of 82. Tolkien was appointed a Commander of the Order of the British Empire by Queen Elizabeth II in the 1972 New Year Honours and received the insignia of the Order at Buckingham Palace on 28 March 1972. In the same year Oxford University conferred upon him an honorary Doctorate of Letters. Tolkien had the name Lúthien engraved on Edith's tombstone at Wolvercote Cemetery, Oxford. When Tolkien died 21 months later on 2 September 1973 from a bleeding ulcer and chest infection, at the age of 81, he was buried in the same grave, with Beren added to his name. In Tolkien's Middle-earth legendarium, Lúthien was the most beautiful of all the Children of Ilúvatar, and forsook her immortality for her love of the mortal warrior Beren. After Beren was captured by the forces of the Dark Lord Morgoth, Lúthien rode to his rescue upon the talking wolfhound Huan. Ultimately, when Beren was slain in battle against the demonic wolf Carcharoth, Lúthien, like Orpheus, approached the Valar, the angelic order of beings placed in charge of the world by Eru (God), and persuaded them to restore her beloved to life. Tolkien was a devout Roman Catholic, and in his religious and political views he was mostly a traditionalist moderate, with libertarian, distributist, and monarchist leanings, in the sense of favouring established conventions and orthodoxies over innovation and modernization, whilst castigating government bureaucracy; in 1943 he wrote, "My political opinions lean more and more to Anarchy (philosophically understood, meaning abolition of control not whiskered men with bombs)—or to 'unconstitutional' Monarchy." Although he did not often write or speak about it, Tolkien advocated the dismantling of the British Empire and even of the United Kingdom. In a 1936 letter to a former student, Belgian linguist Simonne d'Ardenne, he wrote, "The political situation is dreadful... I have the greatest sympathy with Belgium—which is about the right size of any country! 
I wish my own were bounded still by the seas of the Tweed and the walls of Wales... we folk do at least know something of mortality and eternity and when Hitler (or a Frenchman) says 'Germany (or France) must live forever' we know that he lies." Tolkien had an intense hatred for the side effects of industrialization, which he considered to be devouring the English countryside and simpler life. For most of his adult life, he was disdainful of cars, preferring to ride a bicycle. This attitude can be seen in his work, most famously in the portrayal of the forced "industrialization" of the Shire in "The Lord of the Rings". Many commentators have remarked on a number of potential parallels between the Middle-earth saga and events in Tolkien's lifetime. "The Lord of the Rings" is often thought to represent England during and immediately after the Second World War. Tolkien ardently rejected this opinion in the foreword to the second edition of the novel, stating he preferred applicability to allegory. This theme is taken up at greater length in his essay "On Fairy-Stories", where he argues that fairy-stories are so apt because they are consistent both within themselves and with some truths about reality. He concludes that Christianity itself follows this pattern of inner consistency and external truth. His belief in the fundamental truths of Christianity leads commentators to find Christian themes in "The Lord of the Rings". Tolkien objected strongly to C. S. Lewis's use of religious references in his stories, which were often overtly allegorical. However, Tolkien wrote that the Mount Doom scene exemplified lines from the Lord's Prayer. His love of myths and his devout faith came together in his assertion that he believed mythology to be the divine echo of "the Truth". This view was expressed in his poem and essay entitled "Mythopoeia". His theory that myths held "fundamental truths" became a central theme of the Inklings in general. Tolkien's devout Roman Catholic faith was a significant factor in the conversion of C. S. Lewis from atheism to Christianity, although Tolkien was dismayed that Lewis chose to join the Church of England. He once said, "It may be said that the chief purpose of life, for any one of us, is to increase according to our capacity our knowledge of God by all the means we have, and to be moved by it to praise and thanks." According to his grandson Simon Tolkien, Tolkien in the last years of his life was disappointed by some of the liturgical reforms and changes implemented after the Second Vatican Council. Tolkien voiced support for the Nationalists (eventually led by Franco) during the Spanish Civil War upon hearing that communist Republicans were destroying churches and killing priests and nuns. Tolkien was contemptuous of Joseph Stalin. During the Second World War, Tolkien referred to Stalin as "that bloodthirsty old murderer". However, in 1961, Tolkien sharply criticized a Swedish commentator who suggested that "The Lord of the Rings" was an anti-communist parable and identified Sauron with Stalin. Tolkien said, "I utterly repudiate any such "reading", which angers me. The situation was conceived long before the Russian revolution. Such allegory is entirely foreign to my thought." Tolkien vocally opposed Adolf Hitler and the Nazi Party prior to the Second World War, and was known to especially despise Nazi racist and anti-Semitic ideology. In 1938, the publishing house Rütten & Loening Verlag was preparing to release "The Hobbit" in Nazi Germany. 
To Tolkien's outrage, he was asked beforehand whether he was of Aryan origin. In a letter to his British publisher Stanley Unwin, he condemned Nazi "race-doctrine" as "wholly pernicious and unscientific". He added that he had many Jewish friends and was considering "letting a German translation go hang". He provided two letters to Rütten & Loening and instructed Unwin to send whichever he preferred. The more tactful letter was sent and was lost during the later bombing of Germany. In the unsent letter, Tolkien makes the point that "Aryan" is a linguistic term, denoting speakers of Indo-Iranian languages. In a 1941 letter to his son Michael, he expressed his resentment at the distortion of Germanic history in "Nordicism". In 1968, he objected to a description of Middle-earth as "Nordic", a term he said he disliked because of its association with racialist theories. Tolkien criticized Allied use of total-war tactics against civilians of Nazi Germany and Imperial Japan, setting out his objections in a 1945 letter to his son Christopher. He also reacted with anger to the excesses of anti-German propaganda during World War II, which he condemned in a 1944 letter to Christopher. Tolkien was horrified by the 1945 atomic bombings of Hiroshima and Nagasaki, referring to the scientists of the Manhattan Project as "these lunatic physicists" and "Babel-builders". During most of his own life, conservationism was not yet on the political agenda, and Tolkien himself did not directly express conservationist views—except in some private letters, in which he told of his fondness for forests and his sadness at tree-felling. In later years, a number of authors of biographies or literary analyses of Tolkien have concluded that during his writing of "The Lord of the Rings", Tolkien gained increased interest in the value of wild and untamed nature, and in protecting what wild nature was left in the industrialized world. Tolkien devised several themes that were reused in successive drafts of his "legendarium", beginning with "The Book of Lost Tales", written while recuperating from illnesses contracted during the Battle of the Somme. The two most prominent stories, the tale of Beren and Lúthien and that of Túrin, were carried forward into long narrative poems (published in "The Lays of Beleriand"). One of the greatest influences on Tolkien was the Arts and Crafts polymath William Morris. Tolkien wished to imitate Morris's prose and poetry romances, from which he took hints for the names of features such as the Dead Marshes in "The Lord of the Rings" and Mirkwood, along with some general aspects of approach. Edward Wyke-Smith's "The Marvellous Land of Snergs", with its "table-high" title characters, strongly influenced the incidents, themes, and depiction of Bilbo's race in "The Hobbit". Tolkien also cited H. Rider Haggard's novel "She" in a telephone interview: "I suppose as a boy "She" interested me as much as anything—like the Greek shard of Amyntas [Amenartas], which was the kind of machine by which everything got moving." A supposed facsimile of this potsherd appeared in Haggard's first edition, and the ancient inscription it bore, once translated, led the English characters to She's ancient kingdom. Critics have compared this device to the "Testament of Isildur" in "The Lord of the Rings" and to Tolkien's efforts to produce as an illustration a realistic page from the "Book of Mazarbul". Critics starting with Edwin Muir have found resemblances between Haggard's romances and Tolkien's. 
Tolkien wrote of being impressed as a boy by S. R. Crockett's historical novel "The Black Douglas" and of basing the Necromancer (Sauron) on its villain, Gilles de Retz. Incidents in both "The Hobbit" and "The Lord of the Rings" are similar in narrative and style to the novel, and its overall style and imagery have been suggested as an influence on Tolkien. Tolkien was inspired by early Germanic, especially Old English, literature, poetry, and mythology, which were his chosen and much-loved areas of expertise. These sources of inspiration included Old English literature such as "Beowulf", Norse sagas such as the "Volsunga saga" and the "Hervarar saga", the "Poetic Edda", the "Prose Edda", the "Nibelungenlied", and numerous other culturally related works. Despite the similarities of his work to the "Volsunga saga" and the "Nibelungenlied", which were the basis for Richard Wagner's opera cycle "Der Ring des Nibelungen", Tolkien dismissed critics' direct comparisons to Wagner, telling his publisher, "Both rings were round, and there the resemblance ceases." However, some critics believe that Tolkien was, in fact, indebted to Wagner for elements such as the "concept of the Ring as giving the owner mastery of the world ..." Two of the characteristics possessed by the One Ring, its inherent malevolence and corrupting power upon minds and wills, were not present in the mythical sources but have a central role in Wagner's opera. Tolkien also acknowledged several non-Germanic influences or sources for some of his stories and ideas. Sophocles' play "Oedipus Rex" he cited as inspiring elements of "The Silmarillion" and "The Children of Húrin". In addition, Tolkien first read William Forsell Kirby's translation of the Finnish national epic, the "Kalevala", while attending King Edward's School. He described its character of Väinämöinen as one of his influences for Gandalf the Grey. The "Kalevala"'s antihero Kullervo was further described as an inspiration for Túrin Turambar. Dimitra Fimi, Douglas A. Anderson, John Garth, and many other prominent Tolkien scholars believe that Tolkien also drew influence from a variety of Celtic (Irish, Scottish and Welsh) history and legends. However, after the "Silmarillion" manuscript was rejected, in part for its "eye-splitting" Celtic names, Tolkien denied their Celtic origin: One of Tolkien's purposes when writing his Middle-earth books was to create what his biographer Humphrey Carter called a "mythology for England", complaining in a letter to Milton Waldman of the "poverty of my country: it had no stories of its own (bound up with its tongue and soil)" unlike the Celtic nations of Scotland, Ireland and Wales, which all had their own well developed mythologies. Tolkien himself never used the exact phrase "a mythology for England", but he often made statements to that effect, writing to one reader that his intention in writing the Middle-earth stories was "to restore to the English an epic tradition and present them with a mythology of their own". In the early 20th century, proponents of Irish nationalism like the poet William Butler Yeats, Lady Gregory and others had succeeded in linking in the public mind traditional Irish folk tales of fairies and elves to Irish national identity while denigrating English folk tales as being merely derivative of Irish folk tales. This had prompted a backlash by English writers, leading to a savage war of words about which nation had the more authentic and better fairy tales with for example the English essayist G. K. 
Chesterton engaging in a series of polemical essays with Yeats over the question of the superiority of Irish vs. English fairy tales. Even though there is nothing innately anti-English about Irish folklore, the way in which Irish mythology became associated with Irish nationalism, being promoted most enthusiastically by those favouring Irish independence, led many to perceive Irish mythology and folklore as Anglophobic. Tolkien, with his determination to write a "mythology for England", was for this reason disinclined to admit to Celtic influences. Fimi noted in particular that the story of the Noldor, the Elves who fled Valinor for Middle-earth, resembles the story related in the "Lebor Gabála Érenn" of the semi-divine Tuatha Dé Danann, who fled from what is variously described as a place in the north or Greece to conquer Ireland. Like Tolkien's Elves, the Tuatha Dé Danann are inferior to the gods but superior to humans, being endowed with extraordinary skills as craftsmen, poets, warriors, and magicians. Likewise, after the triumph of humanity, both the Elves and the Tuatha Dé Danann are driven underground, which causes their "fading", leading them to become diminutive and pale. Catholic theology and imagery played a part in fashioning Tolkien's creative imagination, suffused as it was by his deeply religious spirit. Tolkien acknowledged this himself: Specifically, Paul H. Kocher argues that Tolkien describes evil in the orthodox Christian way as the absence of good. He cites many examples in "The Lord of the Rings", such as Sauron's "Lidless Eye": "the black slit of its pupil opened on a pit, a window into nothing". Kocher sees Tolkien's source as Thomas Aquinas, "whom it is reasonable to suppose that Tolkien, as a medievalist and a Catholic, knows well". Tom Shippey makes the same point, but, instead of referring to Aquinas, says Tolkien was very familiar with Alfred the Great's Anglo-Saxon translation of Boethius' "Consolation of Philosophy", known as the "Lays of Boethius". Shippey contends that this Christian view of evil is most clearly stated by Boethius: "evil is nothing". He says Tolkien used the corollary that evil cannot create as the basis of Frodo's remark, "the Shadow ... can only mock, it cannot make: not real new things of its own", and related remarks by Treebeard and Elrond. He goes on to argue that in "The Lord of the Rings" evil does sometimes seem to be an independent force, more than merely the absence of good (though not independent to the point of the Manichaean heresy), and suggests that Alfred's additions to his translation of Boethius may have inspired that view. Stratford Caldecott also interpreted the Ring in theological terms: "The Ring of Power exemplifies the dark magic of the corrupted will, the assertion of self in disobedience to God. It appears to give freedom, but its true function is to enslave the wearer to the Fallen Angel. It corrodes the human will of the wearer, rendering him increasingly 'thin' and unreal; indeed, its gift of invisibility symbolizes this ability to destroy all natural human relationships and identity. You could say the Ring is sin itself: tempting and seemingly harmless to begin with, increasingly hard to give up and corrupting in the long run." As well as writing fiction, Tolkien was a leading author of academic literary criticism. His seminal 1936 lecture, later published as an article, revolutionized the treatment of the Anglo-Saxon epic "Beowulf" by literary critics. 
The essay remains highly influential in the study of Old English literature to this day. "Beowulf" is one of the most significant influences upon Tolkien's later fiction, with major details of both "The Hobbit" and "The Lord of the Rings" being adapted from the poem. The piece reveals many of the aspects of "Beowulf" which Tolkien found most inspiring, most prominently the role of monsters in literature, particularly that of the dragon which appears in the final third of the poem: In addition to his mythopoeic compositions, Tolkien enjoyed inventing fantasy stories to entertain his children. He wrote annual Christmas letters from Father Christmas for them, building up a series of short stories (later compiled and published as "The Father Christmas Letters"). Other works included "Mr. Bliss" and "Roverandom" (for children), and "Leaf by Niggle" (part of "Tree and Leaf"), "The Adventures of Tom Bombadil", "Smith of Wootton Major" and "Farmer Giles of Ham". "Roverandom" and "Smith of Wootton Major", like "The Hobbit", borrowed ideas from his "legendarium". Tolkien never expected his stories to become popular, but by sheer accident a book called "The Hobbit", which he had written some years before for his own children, came in 1936 to the attention of Susan Dagnall, an employee of the London publishing firm George Allen & Unwin, who persuaded Tolkien to submit it for publication. When it was published a year later, the book attracted adult readers as well as children, and it became popular enough for the publishers to ask Tolkien to produce a sequel. The request for a sequel prompted Tolkien to begin what would become his most famous work: the epic novel "The Lord of the Rings" (originally published in three volumes 1954–1955). Tolkien spent more than ten years writing the primary narrative and appendices for "The Lord of the Rings", during which time he received the constant support of the Inklings, in particular his closest friend C. S. Lewis, the author of "The Chronicles of Narnia". Both "The Hobbit" and "The Lord of the Rings" are set against the background of "The Silmarillion", but in a time long after it. Tolkien at first intended "The Lord of the Rings" to be a children's tale in the style of "The Hobbit", but it quickly grew darker and more serious in the writing. Though a direct sequel to "The Hobbit", it addressed an older audience, drawing on the immense backstory of Beleriand that Tolkien had constructed in previous years, and which eventually saw posthumous publication in "The Silmarillion" and other volumes. Tolkien's influence weighs heavily on the fantasy genre that grew up after the success of "The Lord of the Rings". "The Lord of the Rings" became immensely popular in the 1960s and has remained so ever since, ranking as one of the most popular works of fiction of the 20th century, judged by both sales and reader surveys. In the 2003 "Big Read" survey conducted by the BBC, "The Lord of the Rings" was found to be the UK's "Best-loved Novel". Australians voted "The Lord of the Rings" "My Favourite Book" in a 2004 survey conducted by the Australian ABC. In a 1999 poll of Amazon.com customers, "The Lord of the Rings" was judged to be their favourite "book of the millennium". In 2002 Tolkien was voted the 92nd "greatest Briton" in a poll conducted by the BBC, and in 2004 he was voted 35th in the SABC3's Great South Africans, the only person to appear in both lists. 
His popularity is not limited to the English-speaking world: in a 2004 poll inspired by the UK's "Big Read" survey, about 250,000 Germans found "The Lord of the Rings" to be their favourite work of literature. Tolkien wrote a brief "Sketch of the Mythology", which included the tales of Beren and Lúthien and of Túrin; and that sketch eventually evolved into the "Quenta Silmarillion", an epic history that Tolkien started three times but never published. Tolkien desperately hoped to publish it along with "The Lord of the Rings", but publishers (both Allen & Unwin and Collins) declined. Moreover, printing costs were very high in 1950s Britain, requiring "The Lord of the Rings" to be published in three volumes. The story of this continuous redrafting is told in the posthumous series "The History of Middle-earth", edited by Tolkien's son, Christopher Tolkien. From around 1936, Tolkien began to extend this framework to include the tale of "The Fall of Númenor", which was inspired by the legend of Atlantis. Tolkien had appointed his son Christopher to be his literary executor, and he (with assistance from Guy Gavriel Kay, later a well-known fantasy author in his own right) organized some of this material into a single coherent volume, published as "The Silmarillion" in 1977. It received the Locus Award for Best Fantasy novel in 1978. In 1980 Christopher Tolkien published a collection of more fragmentary material, under the title "Unfinished Tales of Númenor and Middle-earth". In subsequent years (1983–1996) he published a large amount of the remaining unpublished materials, together with notes and extensive commentary, in a series of twelve volumes called "The History of Middle-earth". They contain unfinished, abandoned, alternative, and outright contradictory accounts, since they were always a work in progress for Tolkien and he only rarely settled on a definitive version for any of the stories. There is not complete consistency between "The Lord of the Rings" and "The Hobbit", the two most closely related works, because Tolkien never fully integrated all their traditions into each other. He commented in 1965, while editing "The Hobbit" for a third edition, that he would have preferred to completely rewrite the book because of the style of its prose. One of Tolkien's least-known short works is the children's storybook "Mr. Bliss", published in 1982. It tells the story of Mr. Bliss and his first ride in his new motor-car. Many adventures follow: encounters with bears, angry neighbours, irate shopkeepers, and assorted collisions. The story was inspired by Tolkien's own vehicular mishaps with his first car, purchased in 1932. The bears were based on toy bears owned by Tolkien's sons. Tolkien was both author and illustrator of the book. He submitted it to his publishers as a balm to readers who were hungry for more from him after the success of "The Hobbit". The lavish ink and coloured-pencil illustrations would have made production costs prohibitively expensive. Tolkien agreed to redraw the pictures in a simpler style, but then found he did not have time to do so. The book was published in 1982 as a facsimile of Tolkien's difficult-to-read illustrated manuscript, with a typeset transcription on each facing page. More recently, in 2007, "The Children of Húrin" was published by HarperCollins (in the UK and Canada) and Houghton Mifflin (in the US). The novel tells the story of Túrin Turambar and his sister Nienor, children of Húrin Thalion. 
The material was compiled by Christopher Tolkien from "The Silmarillion", "Unfinished Tales", "The History of Middle-earth", and unpublished manuscripts. "The Legend of Sigurd and Gudrún", which was released worldwide on 5 May 2009 by HarperCollins and Houghton Mifflin Harcourt, retells the legend of Sigurd and the fall of the Niflungs from Germanic mythology. It is a narrative poem composed in alliterative verse and is modelled after the Old Norse poetry of the Elder Edda. Christopher Tolkien supplied copious notes and commentary upon his father's work. According to Christopher Tolkien, it is no longer possible to trace the exact date of the work's composition. On the basis of circumstantial evidence, he suggests that it dates from the 1930s. In his foreword he wrote, "He scarcely ever (to my knowledge) referred to them. For my part, I cannot recall any conversation with him on the subject until very near the end of his life, when he spoke of them to me, and tried unsuccessfully to find them." In a 1967 letter to W. H. Auden, Tolkien wrote: "The Fall of Arthur", published on 23 May 2013, is a long narrative poem composed by Tolkien in the early 1930s. It is alliterative, extending to almost 1,000 lines imitating the Old English "Beowulf" metre in Modern English. Though inspired by high medieval Arthurian fiction, the historical setting of the poem is the post-Roman Migration Period, both in form (using Germanic verse) and in content, showing Arthur as a British warlord fighting the Saxon invasion, while it avoids the high medieval aspects of the Arthurian cycle (such as the Grail, and the courtly setting); the poem begins with a British "counter-invasion" of the Saxon lands ("Arthur eastward in arms purposed"). "Beowulf: A Translation and Commentary", published on 22 May 2014, is a prose translation of the early medieval epic poem "Beowulf" from Old English to modern English. Translated by Tolkien from 1920 to 1926, it was edited by his son Christopher. The translation is followed by over 200 pages of commentary on the poem; this commentary was the basis of Tolkien's acclaimed 1936 lecture "Beowulf: The Monsters and the Critics". The book also includes the previously unpublished "Sellic Spell" and two versions of "The Lay of Beowulf". The former is a fantasy piece on Beowulf's biographical background, while the latter is a poem on the Beowulf theme. "The Story of Kullervo", first published in "Tolkien Studies" in 2010 and reissued with additional material in 2015, is a retelling of a 19th-century Finnish poem. It was written in 1915 while Tolkien was studying at Oxford. "The Tale of Beren and Lúthien" is one of the oldest and most often revised tales in Tolkien's legendarium. The story is one of three contained within "The Silmarillion" which Tolkien believed warranted their own long-form narratives. It was published as a standalone book, edited by Christopher Tolkien, under the title "Beren and Lúthien" in 2017. "The Fall of Gondolin", a tale of a beautiful, mysterious city destroyed by dark forces, which Tolkien called "the first real story" of Middle-earth, will be published in August 2018 as a standalone book, edited by Christopher Tolkien and illustrated by Alan Lee. Before his death Tolkien negotiated the sale of the manuscripts, drafts, proofs and other materials related to his then-published works—including "The Lord of the Rings", "The Hobbit" and "Farmer Giles of Ham"—to the Department of Special Collections and University Archives at Marquette University's John P. Raynor, S.J., Library in Milwaukee, Wisconsin. 
After his death his estate donated the papers containing Tolkien's "Silmarillion" mythology and his academic work to Oxford University's Bodleian Library. In 2009, a partial draft of "Language and Human Nature", which Tolkien had begun co-writing with C. S. Lewis but had never completed, was discovered at the Bodleian Library. Both Tolkien's academic career and his literary production are inseparable from his love of language and philology. He specialized in English philology at university and in 1915 graduated with Old Norse as his special subject. He worked for the Oxford English Dictionary from 1918 and is credited with having worked on a number of words starting with the letter W, including walrus, over which he struggled mightily. In 1920, he became Reader in English Language at the University of Leeds, where he claimed credit for raising the number of students of linguistics from five to twenty. He gave courses in Old English heroic verse, history of English, various Old English and Middle English texts, Old and Middle English philology, introductory Germanic philology, Gothic, Old Icelandic, and Medieval Welsh. When in 1925, aged thirty-three, Tolkien applied for the Rawlinson and Bosworth Professorship of Anglo-Saxon at Pembroke College, Oxford, he boasted that his students of Germanic philology in Leeds had even formed a "Viking Club". He also had a certain, if imperfect, knowledge of Finnish. Privately, Tolkien was attracted to "things of racial and linguistic significance", and in his 1955 lecture "English and Welsh", which is crucial to his understanding of race and language, he entertained notions of "inherent linguistic predilections", which he termed the "native language" as opposed to the "cradle-tongue" which a person first learns to speak. He considered the West Midlands dialect of Middle English to be his own "native language", and, as he wrote to W. H. Auden in 1955, "I am a West-midlander by blood (and took to early west-midland Middle English as a known tongue as soon as I set eyes on it)." Parallel to Tolkien's professional work as a philologist, and sometimes overshadowing this work, to the effect that his academic output remained rather thin, was his affection for constructing languages. The most developed of these are Quenya and Sindarin, the etymological connection between which formed the core of much of Tolkien's "legendarium". Language and grammar for Tolkien was a matter of aesthetics and euphony, and Quenya in particular was designed from "phonaesthetic" considerations; it was intended as an "Elvenlatin", and was phonologically based on Latin, with ingredients from Finnish, Welsh, English, and Greek. A notable addition came in late 1945 with Adûnaic or Númenórean, a language of a "faintly Semitic flavour", connected with Tolkien's Atlantis legend, which by "The Notion Club Papers" ties directly into his ideas about the inability of language to be inherited, and via the "Second Age" and the story of Eärendil was grounded in the "legendarium", thereby providing a link of Tolkien's 20th-century "real primary world" with the legendary past of his Middle-earth. 
Tolkien considered languages inseparable from the mythology associated with them, and he consequently took a dim view of auxiliary languages: in 1930 he told a congress of Esperantists as much in his lecture "A Secret Vice" ("Your language construction will breed a mythology"), but by 1956 he had concluded that "Volapük, Esperanto, Ido, Novial, &c, &c, are dead, far deader than ancient unused languages, because their authors never invented any Esperanto legends". The popularity of Tolkien's books has had a small but lasting effect on the use of language in fantasy literature in particular, and even on mainstream dictionaries, which today commonly accept Tolkien's idiosyncratic spellings "dwarves" and "dwarvish" (alongside "dwarfs" and "dwarfish"), which had been little used since the mid-19th century and earlier. (In fact, according to Tolkien, had the Old English plural survived, it would have been "dwarrows" or "dwerrows".) He also coined the term "eucatastrophe", though it remains mainly used in connection with his own work. He was an accomplished artist who learned to paint and draw as a child and continued to do so all his life. From early in his writing career, the development of his stories was accompanied by drawings and paintings, especially of landscapes, and by maps of the lands in which the tales were set. He also produced pictures to accompany the stories told to his own children, including those later published in "Mr Bliss" and "Roverandom", and sent them elaborately illustrated letters purporting to come from Father Christmas. Although he regarded himself as an amateur, the publisher used the author's own cover art, maps, and full-page illustrations for the early editions of "The Hobbit". Much of his artwork was collected and published in 1995 as the book "J. R. R. Tolkien: Artist and Illustrator", which discusses his paintings, drawings, and sketches and in total includes 200 reproductions of his art. In a 1951 letter to Milton Waldman, Tolkien wrote about his intention to create a "body of more or less connected legend", of which "[t]he cycles should be linked to a majestic whole, and yet leave scope for other minds and hands, wielding paint and music and drama". The hands and minds of many artists have indeed been inspired by Tolkien's legends. Personally known to him were Pauline Baynes (Tolkien's favourite illustrator of "The Adventures of Tom Bombadil" and "Farmer Giles of Ham") and Donald Swann (who set the music to "The Road Goes Ever On"). Queen Margrethe II of Denmark created illustrations for "The Lord of the Rings" in the early 1970s. She sent them to Tolkien, who was struck by the similarity they bore in style to his own drawings. However, Tolkien was not fond of all the artistic representations of his works produced in his lifetime, and was sometimes harshly disapproving. In 1946, he rejected suggestions for illustrations by Horus Engels for the German edition of "The Hobbit" as "too Disnified ... Bilbo with a dribbling nose, and Gandalf as a figure of vulgar fun rather than the Odinic wanderer that I think of". 
Tolkien was sceptical of the emerging Tolkien fandom in the United States, and in 1954 he returned proposals for the dust jackets of the American edition of "The Lord of the Rings": He had dismissed dramatic representations of fantasy in his essay "On Fairy-Stories", first presented in 1939: Tolkien scholar James Dunning coined the word "Tollywood", a portmanteau derived from "Tolkien Hollywood", to describe attempts to create a cinematographic adaptation of the stories in Tolkien's legendarium aimed at generating good box office results, rather than at fidelity to the idea of the original. On receiving a screenplay for a proposed film adaptation of "The Lord of the Rings" by Morton Grady Zimmerman, Tolkien wrote: Tolkien went on to criticize the script scene by scene ("yet one more scene of screams and rather meaningless slashings"). He was not implacably opposed to the idea of a dramatic adaptation, however, and sold the film, stage and merchandise rights of "The Hobbit" and "The Lord of the Rings" to United Artists in 1968. United Artists never made a film, although director John Boorman was planning a live-action film in the early 1970s. In 1976, the rights were sold to Tolkien Enterprises, a division of the Saul Zaentz Company, and the first film adaptation of "The Lord of the Rings" was released in 1978 as an animated rotoscoped film directed by Ralph Bakshi with a screenplay by the fantasy writer Peter S. Beagle. It covered only the first half of the story of "The Lord of the Rings". In 1977, an animated musical television film of "The Hobbit" was made by Rankin-Bass, and in 1980, they produced the animated musical television film "The Return of the King", which covered some of the portions of "The Lord of the Rings" that Bakshi was unable to complete. From 2001 to 2003, New Line Cinema released "The Lord of the Rings" as a trilogy of live-action films that were filmed in New Zealand and directed by Peter Jackson. The series was successful, performing extremely well commercially and winning numerous Oscars. From 2012 to 2014, Warner Bros. and New Line Cinema released "The Hobbit", a series of three films based on the book, with Peter Jackson serving as executive producer, director, and co-writer. The first instalment, "The Hobbit: An Unexpected Journey", was released in December 2012; the second, "The Hobbit: The Desolation of Smaug", in December 2013; and the last instalment, "The Hobbit: The Battle of the Five Armies", in December 2014. On 13 November 2017, it was announced that Amazon had acquired the global television rights to "The Lord of the Rings". The series will not be a direct adaptation of the books, but will instead introduce new stories that are set before "The Fellowship of the Ring." The press release referred to "previously unexplored stories based on J. R. R. Tolkien's original writings". Amazon will be the producer in conjunction with the Tolkien Estate and the Tolkien Trust, HarperCollins and New Line Cinema.
Tolkien and the characters and places from his works have become the namesakes of various things around the world. These include street names, mountains, companies, and species of animals and plants, as well as other notable objects. By convention, certain classes of features on Saturn's moon Titan are named after elements from Middle-earth. "Colles" (small hills or knobs) are named for characters, while "montes" (mountains) are named for mountains of Middle-earth. There are also asteroids named for Bilbo Baggins and Tolkien himself. Three mountains in the Cadwallader Range of British Columbia, Canada, have been named after Tolkien's characters. 
These are Mount Shadowfax, Mount Gandalf and Mount Aragorn. On 1 December 2012, it was announced in the New Zealand press that a bid had been launched for the New Zealand Geographic Board to name a mountain peak near Milford Sound after Tolkien for historical and literary reasons and to mark Tolkien's 121st birthday. The "Tolkien Road" in Eastbourne, East Sussex, is named after Tolkien, whereas the "Tolkien Way" in Stoke-on-Trent is named after Tolkien's eldest son, Fr. John Francis Tolkien, who was the priest in charge at the nearby Roman Catholic Church of Our Lady of the Angels and St. Peter in Chains. In the Hall Green and Moseley areas of Birmingham, there are a number of parks and walkways dedicated to J. R. R. Tolkien—most notably, the Millstream Way and Moseley Bog. Collectively the parks are known as the Shire Country Parks. Also in Weston-super-Mare, Somerset, England, there is a collection of roads in the 'Weston Village' named after locales of Middle-earth, namely Hobbiton Road, Bree Close, Arnor Close, Rivendell, Westmarch Way and Buckland Green. In the Dutch town of Geldrop, near Eindhoven, the streets of an entire new neighbourhood are named after Tolkien himself ("Laan van Tolkien") and some of the best-known characters from his books. In the Silicon Valley towns of Saratoga and San Jose in California, there are two housing developments with street names drawn from Tolkien's works. About a dozen Tolkien-derived street names also appear scattered throughout the town of Lake Forest, California. The Columbia, Maryland, neighbourhood of Hobbit's Glen and its street names (including Rivendell Lane, Tooks Way, and Oakenshield Circle) come from Tolkien's works.
In the field of taxonomy, over 80 taxa (genera and species) have been given scientific names honouring, or deriving from, characters or other fictional elements from "The Lord of the Rings", "The Hobbit", and other works set in Middle-earth. Several taxa have been named after the character Gollum (also known as Sméagol), as well as after various hobbits, the small humanlike creatures such as Bilbo and Frodo Baggins. Various elves, dwarves, and other creatures that appear in his writings, as well as Tolkien himself, have been honoured in the names of several species, including the amphipod "Leucothoe tolkieni" and the wasp "Shireplitis tolkieni". In 2004, the extinct hominid "Homo floresiensis" was described, and quickly earned the nickname "hobbit" due to its small size. In 1978, paleontologist Leigh Van Valen named over 20 taxa of extinct mammals after Tolkien lore in a single paper. In 1999, entomologist Lauri Kaila described 48 new species of "Elachista" moths and named 37 of them after Tolkien mythology. It has been noted that "Tolkien has been accorded formal taxonomic commemoration like no other author." Since 2003, the Tolkien Society has organized Tolkien Reading Day, which takes place on 25 March in schools around the world. There are seven blue plaques in England that commemorate places associated with Tolkien: one in Oxford, one in Bournemouth, four in Birmingham and one in Leeds. One of the Birmingham plaques commemorates the inspiration provided by Sarehole Mill, near which he lived between the ages of four and eight, while two mark childhood homes up to the time he left to attend Oxford University and the other marks a hotel he stayed at before leaving for France during World War I. 
The plaque in West Park, Leeds, commemorates the five years Tolkien enjoyed at Leeds as Reader and then Professor of English Language at the University. The Oxford plaque commemorates the residence where Tolkien wrote "The Hobbit" and most of "The Lord of the Rings". Two further plaques mark other buildings associated with Tolkien. In 2012, Tolkien was among the British cultural icons selected by artist Sir Peter Blake to appear in a new version of his most famous artwork—the Beatles' "Sgt. Pepper's Lonely Hearts Club Band" album cover—to celebrate the British cultural figures of his life that he most admires. Unlike other authors of the genre, Tolkien never favoured signing his works. Owing to his popularity, hand-signed copies of his letters or of the first editions of his individual writings have, however, achieved high values at auctions, and forged autographs may occur on the market. In particular, a signed first hardback edition of "The Hobbit" from 1937 has reportedly been offered for $85,000. Collectibles also include non-fiction books with hand-written annotations from Tolkien's private library.
John A. Macdonald Sir John Alexander Macdonald (11 January 1815 – 6 June 1891) was the first Prime Minister of Canada (1867–1873, 1878–1891). The dominant figure of Canadian Confederation, he had a political career which spanned almost half a century. Macdonald was born in Scotland; when he was a boy, his family immigrated to Kingston in the Province of Upper Canada (today in eastern Ontario). As a lawyer he was involved in several high-profile cases and quickly became prominent in Kingston, which elected him in 1844 to the legislature of the Province of Canada. By 1857, he had become premier under the colony's unstable political system. In 1864, when no party proved capable of governing for long, Macdonald agreed to a proposal from his political rival, George Brown, that the parties unite in a Great Coalition to seek federation and political reform. Macdonald was the leading figure in the subsequent discussions and conferences, which resulted in the British North America Act and the birth of Canada as a nation on 1 July 1867. Macdonald was the first Prime Minister of the new nation, and served 19 years; only William Lyon Mackenzie King served longer. In 1873, he resigned from office over the Pacific Scandal, in which his party took bribes from businessmen seeking the contract to build the Canadian Pacific Railway. However, he was re-elected in 1878, continuing until he died in office in 1891. Macdonald's greatest achievements were building and guiding a successful national government for the new Dominion, using patronage to forge a strong Conservative Party, promoting the protective tariff of the National Policy, and completing the railway. He fought to block provincial efforts to take power back from the national government in Ottawa. His most controversial move was to approve the execution of Métis leader Louis Riel for treason in 1885; it alienated many francophones from his Conservative Party. He died in 1891, still in office; he is respected today for his key role in the formation of Canada. Historical rankings have consistently placed Macdonald as one of the highest-rated Prime Ministers in Canadian history. John Alexander Macdonald was born in Ramshorn parish in Glasgow, Scotland, on 10 January (according to the official record) or 11 January (according to his father's journal) 1815. 
His father was Hugh Macdonald, an unsuccessful merchant who had married John's mother, Helen Shaw, on 21 October 1811. John Alexander Macdonald was the third of five children. After Hugh's business ventures left him in debt, the family immigrated to Kingston, in Upper Canada (today the southern and eastern portions of Ontario), in 1820, where there were already a number of relatives and connections. The family initially lived with another household, but then resided over a store which Hugh Macdonald ran. Soon after their arrival, John's younger brother James died from a blow to the head delivered by a servant who was supposed to look after the boys. After Hugh's store failed, the family moved to Hay Bay (south of Napanee, Ontario), west of Kingston, where Hugh unsuccessfully ran another shop. In 1829, his father was appointed a magistrate for the Midland District. John Macdonald's mother was a lifelong influence on her son, helping him in his difficult first marriage and remaining a force in his life until her 1862 death. John initially attended local schools. When he was 10, his family scraped together the money to send him to Midland District Grammar School in Kingston. Macdonald's formal schooling ended at 15, a common school-leaving age at a time when only children from the most prosperous families were able to attend university. Nevertheless, Macdonald later regretted leaving school when he did, remarking to his secretary Joseph Pope that if he had attended university, he might have embarked on a literary career. Macdonald's parents decided he should become a lawyer after leaving school. As Donald Creighton (who penned a two-volume biography of Macdonald in the 1950s) wrote, "law was a broad, well-trodden path to comfort, influence, even to power". It was also "the obvious choice for a boy who seemed as attracted to study as he was uninterested in trade." Besides, Macdonald needed to start earning money immediately to support his family because his father's businesses were again failing. "I had no boyhood," he complained many years later. "From the age of 15, I began to earn my own living." Macdonald travelled by steamboat to Toronto (known until 1834 as York), where he passed an examination set by the Law Society of Upper Canada, including mathematics, Latin, and history. British North America had no law schools in 1830; students were examined when beginning and ending their tutelage. Between the two examinations, they were apprenticed, or articled, to established lawyers. Macdonald began his apprenticeship with George Mackenzie, a prominent young lawyer who was a well-regarded member of Kingston's rising Scottish community. Mackenzie practised corporate law, a lucrative speciality that Macdonald himself would later pursue. Macdonald was a promising student, and in the summer of 1833, managed the Mackenzie office when his employer went on a business trip to Montreal and Quebec in Lower Canada (today the southern portion of the province of Quebec). Later that year, Macdonald was sent to manage the law office of a Mackenzie cousin who had fallen ill. In August 1834, George Mackenzie died of cholera. With his supervising lawyer dead, Macdonald remained at the cousin's law office in Hallowell (today Picton, Ontario). In 1835, Macdonald returned to Kingston, and even though he was not yet of age or qualified, began his practice as a lawyer, hoping to gain his former employer's clients. Macdonald's parents and sisters also returned to Kingston, and Hugh Macdonald became a bank clerk. 
Soon after Macdonald was called to the Bar in February 1836, he arranged to take in two students; both became, like Macdonald, Fathers of Confederation. Oliver Mowat became premier of Ontario, and Alexander Campbell a federal cabinet minister and Lieutenant Governor of Ontario. One early client was Eliza Grimason, an Irish immigrant then aged sixteen, who sought advice concerning a shop she and her husband wanted to buy. Grimason would become one of Macdonald's richest and most loyal supporters, and may have also become his lover. Macdonald joined many local organisations, seeking to become well known in the town. He also sought out high-profile cases, representing accused child rapist William Brass. Brass was hanged for his crime, but Macdonald attracted positive press comments for the quality of his defence. According to his biographer, Richard Gwyn: As a criminal lawyer who took on dramatic cases, Macdonald got himself noticed well beyond the narrow confines of the Kingston business community. He was operating now in the arena where he would spend by far the greatest part of his life – the court of public opinion. And, while there, he was learning the arts of argument and of persuasion that would serve him all his political life. All male Upper Canadians between 18 and 60 years of age were members of the Sedentary Militia, which was called into active duty during the Rebellions of 1837. Macdonald served as a private in the militia, patrolling the area around Kingston, but the town saw no real action and Macdonald was not called upon to fire on the enemy. Although most of the trials resulting from the Upper Canada Rebellion took place in Toronto, Macdonald represented one of the defendants in the one trial to take place in Kingston. All the Kingston defendants were acquitted, and a local paper described Macdonald as "one of the youngest barristers in the Province [who] is rapidly rising in his profession". In late 1838, Macdonald agreed to advise one of a group of American raiders who had crossed the border to liberate Canada from what they saw as the yoke of British colonial oppression. The inept invaders had been captured after the Battle of the Windmill (near Prescott, Ontario), in which 16 Canadians were killed and 60 wounded. Public opinion was inflamed against the prisoners, as they were accused of mutilating the body of a dead Canadian lieutenant. Macdonald biographer Donald Creighton wrote that Kingston was "mad with grief and rage and horror" at the allegations. Macdonald could not represent the prisoners, as they were tried by court-martial and civilian counsel had no standing. At the request of Kingston relatives of Daniel George, paymaster of the ill-fated invasion, Macdonald agreed to advise George, who, like the other prisoners, had to conduct his own defence. George was convicted and hanged. According to Macdonald biographer Donald Swainson, "By 1838, Macdonald's position was secure. He was a public figure, a popular young man, and a senior lawyer." Macdonald continued to expand his practice while being appointed director of many companies, mainly in Kingston. Macdonald became both a director of and a lawyer for the new Commercial Bank of the Midland District. Throughout the 1840s, Macdonald invested heavily in real estate, including commercial properties in downtown Toronto. Meanwhile, he was suffering from some illness, and in 1841, his father died. Sick and grieving, he decided to take a lengthy holiday in Britain in early 1842. 
He left for the journey well supplied with money, as he spent the last three days before his departure gambling at the card game loo and winning substantially. Sometime during his two months in Britain, he met his first cousin, Isabella Clark. As Macdonald did not mention her in his letters home, the circumstances of their meeting are not known. In late 1842, Isabella journeyed to Kingston to visit with a sister. The visit stretched for nearly a year before John and Isabella Macdonald married on 1 September 1843. In February 1843, Macdonald announced his candidacy for the post of alderman in Kingston's Fourth Ward. On 29 March 1843, Macdonald celebrated his first election victory, with 156 votes against 43 for his opponent, Colonel Jackson. He also suffered what he termed his first downfall, as his supporters, carrying the victorious candidate, accidentally dropped him onto a slushy street. The British Parliament had merged Upper and Lower Canada into the Province of Canada effective in 1841. Kingston became the initial capital of the new province; Upper Canada and Lower Canada became known as Canada West and Canada East. In March 1844, Macdonald was asked by local businessmen to stand as Conservative candidate for Kingston in the upcoming legislative election. Macdonald followed the contemporary custom of supplying the voters with large quantities of alcohol. In the era preceding the secret ballot when votes were publicly declared, Macdonald defeated his opponent, Anthony Manahan, by 275 "shouts" to 42 when the two-day election concluded on 15 October 1844. At that time, the Legislative Assembly met in Montreal. Macdonald was never an orator, and especially disliked the bombastic addresses of the time. Instead, he found a niche in becoming an expert on election law and parliamentary procedure. In 1844, Isabella fell ill. She recovered, but the illness recurred the following year, and she became an invalid. John Macdonald took his wife to Savannah, Georgia, in the United States in 1845, hoping that the sea air and warmth would cure her ailments. Although John Macdonald was able to return to Canada after six months, Isabella remained in the United States for three years. He visited her again in New York at the end of 1846, and returned several months later when she informed him she was pregnant. In August 1847 their son John Alexander Macdonald Jr. was born in New York, but as Isabella remained ill, relatives cared for the infant. Although he was often absent due to his wife's illness, Macdonald was able to gain professional and political advancement. In 1846, he was made a Queen's Counsel. The same year, he was offered the non-cabinet post of Solicitor General, but declined it. In 1847, the Joint Premier, William Henry Draper, appointed Macdonald as Receiver General. Accepting the government post required Macdonald to give up his law firm income and spend most of his time in Montreal, away from Isabella. When elections were held in December 1848 and January 1849, Macdonald was easily reelected for Kingston, but the Conservatives lost seats and were forced to resign when the legislature reconvened in March 1848. Macdonald returned to Kingston when the legislature was not sitting, and Isabella joined him there in June. In August, the child John Jr. died suddenly. In March 1850, Isabella Macdonald gave birth to another boy, Hugh John Macdonald, and his father wrote, "We have got Johnny back again, almost his image." 
Macdonald began to drink heavily around this time, both in public and in private, which Patricia Phenix, who studied Macdonald's private life, attributes to his family troubles. The Liberals, or Grits, maintained power in the 1851 election, but they were soon divided by a parliamentary scandal. In September 1854, the government resigned, and a coalition government uniting parties from both parts of the province under Sir Allan MacNab took power. Macdonald did much of the work of putting the government together and served as Attorney General. The coalition which came to power in 1854 became known as the Liberal-Conservatives (referred to, for short, as the Conservatives). In 1855, George-Étienne Cartier of Canada East (today Quebec) joined the Cabinet. Until Cartier's 1873 death, he would be Macdonald's political partner. In 1856, MacNab was eased out as premier by Macdonald, who became the leader of the Canada West Conservatives. Though the most powerful man in the government, he remained Attorney General, with Sir Étienne-Paschal Taché as premier. In July 1857, Macdonald departed for Britain to promote Canadian government projects. On his return to Canada, he was appointed premier in place of the retiring Taché, just in time to lead the Conservatives in a general election. Macdonald was elected in Kingston by 1,189 votes to 9 for John Shaw, who was subsequently hanged in effigy; other Conservatives, however, did badly in Canada West, and only French-Canadian support kept Macdonald in power. On 28 December 1857, Isabella Macdonald died, leaving John A. Macdonald a widower with a seven-year-old son. Hugh John Macdonald would be principally raised by his paternal aunt and her husband. The Assembly had voted to move the seat of government permanently to Quebec City. Macdonald had opposed that, and used his power to force the Assembly to reconsider in 1857. Macdonald proposed that Queen Victoria decide which city should be Canada's capital. Opponents, especially from Canada East, argued that the Queen would not make the decision in isolation; she would be bound to receive informal advice from her Canadian ministers. Nevertheless, Macdonald's scheme was adopted, with Canada East support assured by allowing Quebec City to serve a three-year term as the seat of government before the Assembly moved to the permanent capital. Macdonald privately asked the Colonial Office to ensure that the Queen would not respond for at least 10 months, or until after the general election. In February 1858, the Queen's choice was announced, much to the dismay of many legislators from both parts of the province: the isolated Canada West town of Ottawa. On 28 July 1858, an opposition Canada East member proposed an address to the Queen informing her that Ottawa was an unsuitable place for a national capital. Macdonald's Canada East party members crossed the floor to vote for the address, and the government was defeated. Macdonald resigned, and the Governor General, Sir Edmund Walker Head, invited opposition leader George Brown to form a government. Under the law at that time, Brown and his ministers lost their seats in the Assembly by accepting office, and had to face by-elections. This gave Macdonald a majority pending the by-elections, and he promptly defeated the government. Head refused Brown's request for a dissolution of the Assembly, and Brown and his ministers resigned. Head then asked Macdonald to form a government. 
The law allowed anyone who had held a ministerial position within the previous thirty days to accept a new position without needing to face a by-election; Macdonald and his ministers accepted new positions, then completed what was dubbed the "Double Shuffle" by returning to their old posts. In an effort to give the appearance of fairness, Head insisted that Cartier be titular premier, with Macdonald as his deputy. In the late 1850s and early 1860s, Canada enjoyed a period of great prosperity. The railroad and telegraph improved communications. According to Macdonald biographer Richard Gwyn, "In short, Canadians began to become a single community." At the same time, the provincial government became increasingly difficult to manage. An act affecting both Canada East and Canada West required a "double majority"—a majority of legislators from each of the two sections of the province. This led to increasing deadlock in the Assembly. The two sections each elected 65 legislators, even though Canada West had a larger population. One of Brown's major demands was "rep by pop", that is, representation by population, which would give Canada West more seats and was bitterly opposed by Canada East. The American Civil War led to fears in Canada and in Britain that once the Americans had concluded their internal warfare, they would invade Canada again. Britain asked the Canadians to pay a part of the expense of defence, and a Militia Bill was introduced in the Assembly in 1862. The opposition objected to the expense, and Canada East representatives feared that French-Canadians would have to fight in a British-instigated war. At the time, Macdonald was drinking heavily, and he failed to provide much leadership on behalf of the bill. The government fell over the bill, and the Grits took over under the leadership of John Sandfield Macdonald (no relation to John A. Macdonald). John A. Macdonald did not remain out of power long; the parties remained closely matched, with a handful of independents able to destroy any government. The new government fell in May 1863, but the new governor general, Lord Monck, allowed a new election, which made little change to party strength. In December 1863, Canada West MP Albert Norton Richards accepted the post of Solicitor-General, and so had to face a by-election. John A. Macdonald campaigned against Richards personally, and Richards was defeated by a Conservative. The switch in seats cost the Grits their majority, and they resigned in March 1864. John A. Macdonald returned to office with Taché as titular premier. The Taché-Macdonald government was defeated in June. The parties were deadlocked to such an extent that, according to Swainson, "It was clear to everybody that the constitution of the Province of Canada was dead". As his government had fallen again, Macdonald approached Lord Monck and obtained a dissolution. Before he could act on it, he was approached by Brown through intermediaries; the Grit leader felt that the crisis gave the parties the opportunity to join together for constitutional reform. Brown had led a parliamentary committee on confederation among the British North American colonies, which had reported back just before the Taché-Macdonald government fell. Brown was more interested in representation by population; Macdonald's priority was a federation that the other colonies could join. The two compromised and agreed that the new government would support the "federative principle"—a conveniently elastic phrase. 
The discussions were not public knowledge and Macdonald stunned the Assembly by announcing that the dissolution was being postponed because of progress in negotiations with Brown—the two men were not only political rivals, but were known to hate each other. The parties resolved their differences, joining in the Great Coalition, with only the Parti rouge of Canada East, led by Jean-Baptiste-Éric Dorion, remaining apart. A conference, called by the Colonial Office, was scheduled for 1 September 1864 in Charlottetown, Prince Edward Island; the Maritimes were to consider a union. The Canadians obtained permission to send a delegation—led by Macdonald, Cartier, and Brown—to what became known as the Charlottetown Conference. At its conclusion, the Maritime delegations expressed a willingness to join a confederation if the details could be worked out. In October 1864, delegates for confederation met in Quebec City for the Quebec Conference, where the Seventy-Two Resolutions were agreed to—they would form the basis of Canada's government. The Great Coalition was endangered by Taché's 1865 death: Lord Monck asked Macdonald to become premier, but Brown felt that he had as good a claim on the position as his coalition partner. The disagreement was resolved by appointing another compromise candidate to serve as titular premier, Narcisse-Fortunat Belleau. In 1865, after lengthy debates, Canada's legislative assembly approved confederation by 91 votes to 33. None of the Maritimes, however, had approved the plan. In 1866, Macdonald and his colleagues financed pro-confederation candidates in the New Brunswick general election, resulting in a pro-confederation assembly. Shortly after the election, Nova Scotia's premier, Charles Tupper, pushed a pro-confederation resolution through that colony's legislature. A final conference, to be held in London, was needed before the British parliament could formalise the union. Maritime delegates left for London in July 1866, but Macdonald, who was drinking heavily again, did not leave until November, angering the Maritimers. In December 1866, Macdonald both led the London Conference, winning acclaim for his handling of the discussions, and wooed and won his second wife, Agnes Bernard. Bernard was the sister of Macdonald's private secretary, Hewitt Bernard; the couple first met in Quebec in 1860, but Macdonald had seen and admired her as early as 1856. In January 1867, while still in London, he was seriously burned in his hotel room when his candle set fire to the chair he had fallen asleep in, but Macdonald refused to miss any sessions of the conference. In February, he married Agnes at St George's, Hanover Square. On 8 March, the British North America Act, which would thereafter serve as the major part of Canada's constitution, passed the House of Commons (it had previously passed the House of Lords). Queen Victoria gave the bill Royal Assent on 29 March 1867. Macdonald had favoured the union coming into force on 15 July, fearing that the preparations would not be completed any earlier. The British favoured an earlier date and, on 22 May, it was announced that the Dominion of Canada would come into existence on 1 July. Lord Monck appointed Macdonald as the new nation's first prime minister. With the birth of the Dominion, Canada East and Canada West became separate provinces, known as Quebec and Ontario. Macdonald was appointed a Knight Commander of the Order of the Bath (KCB) on that first observance of what came to be known as Canada Day, 1 July 1867. 
Canada's economic growth was quite slow, at only 1% annually from 1867 to 1896. Canada verged on stagnation, and many residents emigrated to the United States, where growth was much more rapid. Macdonald's solution was to build the transcontinental railroad to stimulate growth, and to implement a "National Policy" of high tariffs that would protect small Canadian firms from American competition. Macdonald and his government faced immediate problems upon formation of the new country. Much work remained to be done in creating a federal government. Nova Scotia was already threatening to withdraw from the union; the Intercolonial Railway, which would both conciliate the Maritimes and bind them closer to the rest of Canada, was not yet built. Anglo-American relations were in a poor state, and Canadian foreign relations were matters handled from London. The withdrawal of the Americans in 1866 from the Reciprocity Treaty had increased tariffs on Canadian goods in US markets. Much of present-day Canada remained outside Confederation—in addition to the separate colonies of Prince Edward Island, Newfoundland, and British Columbia, which remained governed by the British, vast areas in the north and west belonged to the British and to the Hudson's Bay Company. American and British opinion was that the experiment of Confederation would quickly unravel and the nascent nation be absorbed by the United States. In August 1867, the new nation's first general election was held; Macdonald's party won easily, with strong support in both large provinces, and a majority from New Brunswick. Parliament convened in November, surprisingly without Brown, who was defeated in Ontario and never served as a member of the House of Commons of Canada. By 1869, Nova Scotia had agreed to remain part of Canada after a promise of better financial terms—the first of many provinces to negotiate concessions from Ottawa. Pressure from London and Ottawa failed to gain the accession of Newfoundland, whose voters rejected a Confederation platform in a general election in October 1869. In 1869, John and Agnes Macdonald had a daughter, Mary. It soon became apparent that Mary had ongoing developmental issues. She was never able to walk, nor did she ever fully develop mentally. Hewitt Bernard, Deputy Minister of Justice and Macdonald's former secretary, also lived in the Macdonald house in Ottawa, together with Bernard's widowed mother. In May 1870, John Macdonald fell ill with gallstones; this, coupled with his frequent drinking, may have developed into a severe case of acute pancreatitis. In July, he moved to Prince Edward Island to convalesce, most likely conducting discussions aimed at drawing the island into Confederation at a time when some there supported joining the United States. The island joined Confederation in 1873. Macdonald had once been tepid on the question of westward expansion of the Canadian provinces; as Prime Minister he became a strong supporter of a bicoastal Canada. Immediately upon Confederation, he sent commissioners to London who in due course successfully negotiated the transfer of Rupert's Land and the North-Western Territory to Canada. The Hudson's Bay Company received $1,500,000, and retained some trading posts as well as one-twentieth of the best farmland. Prior to the effective date of acquisition, the Canadian government faced unrest in the Red River Colony (today southeastern Manitoba, centred on Winnipeg). 
The local people, including the Métis, were fearful that rule which did not take their interests into account would be imposed on them, and rose in the Red River Rebellion led by Louis Riel. Unwilling to pay for a territory in insurrection, Macdonald had troops put down the uprising before the formal transfer on 15 July 1870, but as a result of the unrest, the Red River Colony joined Confederation as the province of Manitoba, while the rest of the purchased lands became the North-West Territories. Following the North-West Rebellion of 1885, Macdonald implemented restrictions upon the movement of indigenous groups, requiring them to receive formal permission from an Indian Department official in order to go off reserve. Macdonald also wished to secure the Colony of British Columbia. There was interest in the United States in bringing about the colony's annexation, and Macdonald wished to ensure his new nation had a Pacific outlet. The colony had an extremely large debt that would have to be assumed should it join Confederation. Negotiations were conducted in 1870, principally during Macdonald's illness and recuperation, with Cartier leading the Canadian delegation. Cartier offered British Columbia a railroad linking it to the eastern provinces within 10 years. The British Columbians, who privately had been prepared to accept far less generous terms, quickly agreed and joined Confederation in 1871. The Canadian Parliament ratified the terms after a debate over the high cost that cabinet member Alexander Morris described as the worst fight the Conservatives had had since Confederation. There were continuing disputes with the Americans over deep-sea fishing rights, and in early 1871, an Anglo-American commission was appointed to settle outstanding matters between the British (and Canadians) and the Americans. Canada was hoping to secure compensation for damage done by Fenians raiding Canada from bases in the United States. Macdonald was appointed a British commissioner, a post he was reluctant to accept as he realised Canadian interests might be sacrificed for the mother country. This proved to be the case; Canada received no compensation for the raids and no significant trade advantages in the settlement, which required Canada to open her waters to American fishermen. Macdonald returned home to defend the Treaty of Washington against a political firestorm. In the run-up to the 1872 election, Macdonald had yet to formulate a railway policy or to devise the loan guarantees that would be needed to secure the construction. During the previous year, Macdonald had met with potential railway financiers such as Hugh Allan, and considerable financial discussion had taken place. Nevertheless, the greatest political problem Macdonald faced was the Washington treaty, which had not yet been debated in Parliament. In early 1872, Macdonald submitted the treaty for ratification, and it passed the Commons with a majority of 66. The general election was held through late August and early September (future Canadian elections would be conducted, for the most part, on one day). Redistribution had given Ontario increased representation in the House; Macdonald spent much time campaigning in the province, for the most part outside Kingston. Widespread bribery of voters took place throughout Canada, a practice especially effective in the era when votes were publicly declared; in future elections the secret ballot would be used. Macdonald and the Conservatives saw their majority reduced from 35 to 8. 
The Liberals (as the Grits were coming to be known) did better than the Conservatives in Ontario, forcing the government to rely on the votes of Western and Maritime MPs who did not fully support the party. Macdonald had hoped to award the charter for the Canadian Pacific Railway in early 1872, but negotiations dragged on between the government and the financiers. Macdonald's government awarded the Allan group the charter in late 1872. In 1873, when Parliament opened, Liberal MP Lucius Seth Huntington charged that government ministers had been bribed with large, undisclosed political contributions to award the charter. Documents soon came to light which substantiated what came to be known as the Pacific Scandal. The Allan-led financiers, who were secretly backed by the United States' Northern Pacific Railway, had donated $179,000 to Tory election funds and had received the charter; Opposition newspapers began to publish telegrams signed by government ministers requesting large sums from the railway interest at the time the charter was under consideration. Macdonald had taken $45,000 in contributions from the railway interest himself. Substantial sums went to Cartier, who waged an expensive fight to try to retain his seat in Montreal East (he was defeated, but was subsequently returned for the Manitoba seat of Provencher). During the campaign Cartier had fallen ill with Bright's disease, which may have been causing his judgment to lapse; he died in May 1873 while seeking treatment in London. Even before Cartier's death, Macdonald attempted to use delay to extricate the government. The Opposition responded by leaking documents to friendly newspapers. On 18 July, three papers published a telegram dated August 1872 from Macdonald requesting another $10,000 and promising "it will be the last time of asking". Macdonald was able to get a prorogation of Parliament in August by appointing a Royal Commission to look into the matter, but when Parliament reconvened in late October, the Liberals, feeling Macdonald could be defeated over the issue, applied immense pressure to wavering members. On 3 November, Macdonald rose in the Commons to defend the government, and according to his biographer P.B. Waite, gave "the speech of his life, and, in a sense, for his life". He began his speech at 9 p.m., looking frail and ill, an appearance which quickly improved. As he spoke, he consumed glass after glass of gin and water. He denied that there had been a corrupt bargain, and stated that such contributions were common to both political parties. After five hours, Macdonald concluded: "I leave it with this House with every confidence. I am equal to either fortune. I can see past the decision of this House either for or against me, but whether it be against me or for me, I know, and it is no vain boast to say so, for even my enemies will admit that I am no boaster, that there does not exist in Canada a man who has given more of his time, more of his heart, more of his wealth, or more of his intellect and power, as it may be, for the good of this Dominion of Canada." Macdonald's speech was seen as a personal triumph, but it did little to salvage the fortunes of his government. With eroding support both in the Commons and among the public, Macdonald went to the Governor General, Lord Dufferin, on 5 November and resigned; Liberal leader Alexander Mackenzie became the second Prime Minister of Canada.
Following the resignation, Macdonald returned home and told his wife Agnes, "Well, that's got along with", and when asked what he meant, told her of his resignation, and stated, "It's a relief to be out of it." He is not known to have spoken of the events of the Pacific Scandal again. When Macdonald announced his resignation in the Commons, Conservative and Liberal MPs traded places on the benches of the House of Commons, though one Conservative MP, British Columbia's Amor De Cosmos remained in his place, thereby joining the Liberals. On 6 November 1873, Macdonald offered his resignation as party leader to his caucus; it was refused. Mackenzie called an election for January 1874; the Conservatives were reduced to 70 seats out of the 206 in the Commons, giving Mackenzie a massive majority. The Conservatives bested the Liberals only in British Columbia; Mackenzie had called the terms by which the province had joined Confederation "impossible". Macdonald was returned in Kingston but was unseated on an election contest when bribery was proven; he won the ensuing by-election by 17 votes. According to Swainson, most observers viewed Macdonald as finished in politics, "a used-up and dishonoured man". Macdonald was content to lead the Conservatives in a relaxed manner in opposition and await Liberal mistakes. He took long holidays and resumed his law practice, moving his family to Toronto and going into partnership with his son Hugh John. One mistake that Macdonald believed the Liberals had made was a free-trade agreement with Washington, negotiated in 1874; Macdonald had come to believe that protection was necessary to build Canadian industry. The Panic of 1873 had led to a worldwide depression; the Liberals found it difficult to finance the railroad in such a climate, and were generally opposed to the line anyway—the slow pace of construction led to British Columbia claims that the agreement under which it had entered Confederation was in jeopardy of being broken. By 1876, Macdonald and the Conservatives had adopted protection as party policy. This view was widely promoted in speeches at a number of political picnics, held across Ontario during the summer of 1876. Macdonald's proposals struck a chord with the public, and the Conservatives began to win a string of by-elections. By the end of 1876, the Tories had picked up 14 seats as a result of by-elections, reducing Mackenzie's Liberal majority from 70 to 42. Despite the success, Macdonald considered retirement, wishing only to reverse the voters' verdict of 1874—he considered Charles Tupper his heir apparent. When Parliament convened in 1877, the Conservatives were confident and the Liberals defensive. After the Tories had a successful session in the early part of the year, another series of picnics commenced in a wide belt around Toronto. Macdonald even campaigned in Quebec, which he had rarely done, leaving speechmaking there to Cartier. More picnics followed in 1878, promoting proposals which would come to be collectively called the "National Policy": high tariffs, rapid construction of the transcontinental railway (the Canadian Pacific Railway or CPR), rapid agricultural development of the West using the railroad, and policies which would attract immigrants to Canada. These picnics allowed Macdonald venues to show off his talents at campaigning, and were often lighthearted—at one, the Tory leader blamed agricultural pests on the Grits, and promised the insects would go away if the Conservatives were elected. 
The final days of the 3rd Canadian Parliament were marked by explosive conflict, as Macdonald and Tupper alleged that MP and railway financier Donald Smith had been allowed to build the Pembina branch of the CPR (connecting to American lines) as a reward for betraying the Conservatives during the Pacific Scandal. The altercation continued even after the Commons had been summoned to the Senate to hear the dissolution read, as Macdonald spoke the final words recorded in the 3rd Parliament: "That fellow Smith is the biggest liar I ever saw!" The election was called for 17 September 1878. Fearful that Macdonald would be defeated in Kingston, his supporters tried to get him to run in the safe Conservative riding of Cardwell; but having represented his hometown for 35 years, he stood there again. In the election, Macdonald was defeated in his riding by Alexander Gunn, but the Conservatives swept to victory. Macdonald remained in the House of Commons, having quickly secured his election for Marquette, Manitoba; elections there were held later than in Ontario. His acceptance of office vacated his parliamentary seat, and Macdonald decided to stand for the British Columbia seat of Victoria, where the election was to be held on 21 October. Macdonald was duly returned for Victoria, although he had never visited either Marquette or Victoria. Part of the National Policy was implemented in the budget presented in February 1879. Under that budget, Canada became a high-tariff nation like the United States and Germany. The tariffs were designed to protect and build Canadian industry—finished textiles received a tariff of 34%, but the machinery to make them entered Canada free. Macdonald continued to fight for higher tariffs for the remainder of his life. By the 1880s, Macdonald was becoming more frail, but he maintained his political acuity. In 1883, he secured the "Intoxicating Liquors Bill", which took the regulation system away from the provinces, in part to stymie his foe Premier Mowat. Macdonald himself had taken better control of his drinking, and his binges had ended. "The great drinking-bouts, the gargantuan insobrieties of his middle years, were dwindling away now into memories." As the budget moved forward, Macdonald studied the railway issue, and found the picture unexpectedly good. Although little money had been spent on the project under Mackenzie, several hundred miles of track had been built and nearly the entire route surveyed. In 1880, Macdonald found a syndicate, led by George Stephen, willing to undertake the CPR project. Donald Smith (later Lord Strathcona) was a major partner in the syndicate, but because of the ill will between him and the Conservatives, Smith's participation was initially not made public, though it was well-known to Macdonald. In 1880, the Dominion took over Britain's remaining Arctic territories, which extended Canada to its present-day boundaries, with the exception of Newfoundland, which would not enter Confederation until 1949. Also in 1880, Canada sent its first diplomatic representative abroad, Sir Alexander Galt, as High Commissioner to Britain. With good economic times, Macdonald and the Conservatives were returned with a slightly decreased majority in 1882. Macdonald was returned for the Ontario riding of Carleton. The transcontinental railroad project was heavily subsidised by the government. The CPR was granted land along the route of the railroad, as well as $25,000,000 from the government.
In addition, the government pledged to build $32,000,000 worth of other railways to support the CPR. The entire project was extremely costly, especially for a nation with only 4.1 million people in 1881. Between 1880 and 1885, as the railway was slowly built, the CPR repeatedly came close to financial ruin. Not only was the terrain in the Rocky Mountains difficult, but the route north of Lake Superior also proved treacherous, as tracks and engines sank into the muskeg. When Canadian guarantees of the CPR's bonds failed to make them salable in a declining economy, Macdonald obtained a loan to the corporation from the Treasury—the bill authorizing it passed the Senate just before the firm would have become insolvent. The Northwest again saw unrest. Many of the Manitoban Métis had moved into the territories. Negotiations between the Métis and the Government to settle grievances over land rights proved difficult. Riel, who had lived in exile in the United States since 1870, journeyed to Regina with the connivance of Macdonald's government, which believed he would prove a leader they could deal with. Instead, the Métis rose the following year under Riel in the North-West Rebellion. Macdonald put down the rebellion with militia troops transported by rail, and Riel was captured, tried for treason, convicted, and hanged. Macdonald refused to consider reprieving Riel, who was of uncertain mental health. The hanging of Riel proved bitterly controversial, and alienated many Quebecers (like Riel, Catholic and culturally French Canadian) from the Conservatives—they soon realigned themselves with the Liberals. The CPR was almost bankrupt, but its essential role in rushing troops to the crisis proved its worth, and Parliament provided money for its completion. On 7 November 1885, CPR manager William Van Horne wired Macdonald from Craigellachie, British Columbia, that the last spike had been driven home. In the summer of 1886, Macdonald traveled for the only time to western Canada, traveling from town to town by private railway car, and addressing large crowds. Macdonald traveled with his wife, and to get a better view, the two would sometimes sit in front of the locomotive on the train's cowcatcher. On 13 August 1886, Macdonald used a silver hammer to pound a gold spike to complete the Esquimalt and Nanaimo Railway. In 1886, another dispute arose over fishing rights with the United States. American fishermen had been using treaty provisions allowing them to land in Canada to take on wood and water as a cover for clandestine inshore fishing. Several vessels were detained in Canadian ports, to the outrage of Americans, who demanded their release. Macdonald sought to pass a Fisheries Act which would override some of the treaty provisions, to the dismay of the British, who were still responsible for external relations. The British government instructed the Governor General, Lord Lansdowne, to reserve Royal Assent for the bill, effectively placing it on hold without vetoing it. After considerable discussion, the British government allowed Royal Assent at the end of 1886, and indicated it would send a warship to protect the fisheries if no agreement was reached with the Americans. Fearing further loss of political strength as poor economic times continued, Macdonald planned to hold an election by the end of 1886, but had not yet issued the writ when an Ontario provincial election was called by Macdonald's former student, Liberal Ontario Premier Oliver Mowat.
The provincial election was seen as a bellwether for the federal poll. Despite considerable campaigning by the Prime Minister, Mowat's Liberals were returned in Ontario, and increased their majority. Macdonald finally dissolved Parliament on 15 January 1887 for an election on 22 February. During the campaign, Macdonald suffered another blow when the Quebec provincial Liberals were able to form a government (four months after the October 1886 Quebec election), forcing the Conservatives from power in Quebec City. Nevertheless, Macdonald and his cabinet campaigned hard in the winter election, with Tupper (the new High Commissioner to London) postponing his departure to try to bolster Conservative hopes in Nova Scotia. The Liberal leader, Edward Blake, ran an uninspiring campaign, and the Conservatives were returned nationally with a majority of 35, winning easily in Ontario, Nova Scotia and Manitoba. The Tories even took a narrow majority of Quebec's seats despite resentment over Riel's hanging. Macdonald became MP for Kingston once again. Even the younger ministers, such as future Prime Minister John Thompson, who sometimes differed with Macdonald on policy, admitted the Prime Minister was an essential electoral asset for the Conservatives. Blake, whom Macdonald biographer Gwyn describes as the Liberal Party's "worst campaigner until Stéphane Dion early in the twenty-first century", resigned after the defeat, to be replaced by Wilfrid Laurier. Under Laurier's early leadership, the Liberals, who had accepted much of the National Policy under Blake while questioning details, rejected it entirely, calling for "unrestricted reciprocity", or free trade, with the United States. Advocates of Laurier's plan argued that north–south trade made more economic sense than trying to trade across the vast, empty prairies, using a CPR which was already provoking resentment for what were seen as high freight rates. Macdonald was willing to see some reciprocity with the United States, but was reluctant to lower many tariffs. American advocates of what they dubbed "commercial union" saw it as a prelude to political union, and did not scruple to say so, causing additional controversy in Canada. Macdonald called an election for 5 March 1891. The Liberals were heavily financed by American interests; the Conservatives drew much financial support from the CPR. The 76-year-old Prime Minister collapsed during the campaign, and conducted political activities from his brother-in-law's house in Kingston. The Conservatives gained slightly in the popular vote, but their majority was trimmed to 27. The parties broke even in the central part of the country but the Conservatives dominated in the Maritimes and Western Canada, leading Liberal MP Richard John Cartwright to claim that Macdonald's majority was dependent on "the shreds and patches of Confederation". After the election, Laurier and his Liberals grudgingly accepted the National Policy, and when Laurier himself later became Prime Minister, he adopted it with only minor changes. After the election, Macdonald suffered a stroke, which left him partially paralysed and unable to speak. "The Old Chieftain" lingered for days, remaining mentally alert, before dying in the late evening of Saturday, 6 June 1891. Thousands filed by his open casket in the Senate Chamber; his body was transported by funeral train to his hometown of Kingston, with crowds greeting the train at each stop. 
On arrival in Kingston, Macdonald lay in state again in City Hall, wearing the uniform of an Imperial Privy Counsellor. He was buried in Cataraqui Cemetery in Kingston, his grave near that of his first wife, Isabella. Wilfrid Laurier paid tribute to Macdonald in the House of Commons: "In fact the place of Sir John A. Macdonald in this country was so large and so absorbing that it is almost impossible to conceive that the politics of this country, the fate of this country, will continue without him. His loss overwhelms us." Macdonald served just under 19 years as Prime Minister, a length of service only surpassed by William Lyon Mackenzie King. Unlike his American counterpart, George Washington, Macdonald has no cities or political subdivisions named for him (with the exception of a small Manitoba village), nor any massive monuments. A peak in the Rockies, Mount Macdonald (c. 1887) at Rogers Pass, is named for him. In 2001, Parliament designated 11 January as Sir John A. Macdonald Day, but the day is not a federal holiday and generally passes unremarked. He appears on Canadian ten-dollar notes printed between 1971 and 2018. In 2015, the Royal Canadian Mint featured Macdonald's face on the Canadian two-dollar coin, the Toonie, to celebrate his 200th birthday. He also gives his name to Ottawa's Sir John A. Macdonald Parkway (River Parkway before 2012), Ottawa Macdonald–Cartier International Airport (renamed in 1993) and Ontario Highway 401 (the Macdonald–Cartier Freeway c. 1968), though these facilities are rarely referred to using his name. A number of sites associated with Macdonald are preserved. His gravesite has been designated a National Historic Site of Canada. Bellevue House in Kingston, where the Macdonald family lived in the 1840s, is also a National Historic Site administered by Parks Canada, and has been restored to that time period. His Ottawa home, Earnscliffe, still stands and is today the official residence of the British High Commissioner to Canada. Statues have been erected to Macdonald across Canada; one stands on Parliament Hill in Ottawa (by Louis-Philippe Hébert c. 1895). A statue of Macdonald stands atop a granite plinth originally intended for a statue of Queen Victoria in Toronto's Queen's Park, looking south on University Avenue. Macdonald's statue also stands in Kingston's City Park; the Kingston Historical Society annually holds a memorial service in his honour. A square outside of Union Station (Toronto) will be named in his honour. A memorial was erected around 1895 on the front buttress of St David's Church on Ingram Street in Glasgow, next to the Ramshorn Cemetery, near his birthplace. (His birthplace is most likely 20 Brunswick Street, demolished in 2017 in favour of a condo redevelopment.) Conservative Senator Hugh Segal believes that Macdonald's true monument is Canada itself: "Without Macdonald we'd be a country that begins somewhere at the Manitoba-Ontario border that probably goes throughout the east. Newfoundland would be like Alaska and I think that would also go for Manitoba, Saskatchewan, Alberta and B.C. We'd be buying our oil from the United States. It would diminish our quality of life and range of careers, and our role in the world would have been substantially reduced." Macdonald's biographers note his contribution to establishing Canada as a nation.
Swainson suggests that Macdonald's desire for a free and tolerant Canada became part of its national outlook: "He not only helped to create Canada, but contributed immeasurably to its character." Gwyn said of Macdonald: "His accomplishments were staggering: Confederation above all, but almost as important, if not more so, extending the country across the continent by a railway that was, objectively, a fiscal and economic insanity ... On the ledger's other side, he was responsible for the CPR scandal, the execution of Louis Riel, and for the head tax on Chinese workers. He's thus not easy to scan. His private life was mostly barren. Yet few other Canadian leaders—Pierre Trudeau, John Diefenbaker for a time, Wilfrid Laurier—had the same capacity to inspire love." In polls, Macdonald has consistently been ranked as one of the greatest Prime Ministers in Canadian history. Sir John A. Macdonald was also awarded several honorary degrees. John Diefenbaker John George Diefenbaker (September 18, 1895 – August 16, 1979) was the 13th Prime Minister of Canada, serving from June 21, 1957 to April 22, 1963. He was the only Progressive Conservative (PC or Tory) party leader between 1930 and 1979 to lead the party to an election victory, doing so three times, although only once with a majority of seats in the House of Commons of Canada. Diefenbaker was born in the small town of Neustadt in southwestern Ontario in 1895. In 1903, his family migrated west to the portion of the North-West Territories which would shortly thereafter become the province of Saskatchewan. He grew up in the province, and was interested in politics from a young age. After brief service in World War I, Diefenbaker became a noted criminal defence lawyer. He contested elections through the 1920s and 1930s with little success until he was finally elected to the House of Commons in 1940. Diefenbaker was repeatedly a candidate for the PC leadership. He gained that party position in 1956, on his third attempt. In 1957, he led the Tories to their first electoral victory in 27 years; a year later he called a snap election and spearheaded them to one of their greatest triumphs. Diefenbaker appointed the first female minister in Canadian history to his Cabinet, as well as the first aboriginal member of the Senate. During his six years as Prime Minister, his government obtained passage of the Canadian Bill of Rights and granted the vote to the First Nations and Inuit peoples. In foreign policy, his stance against apartheid helped secure the departure of South Africa from the Commonwealth of Nations, but his indecision on whether to accept Bomarc nuclear missiles from the United States led to his government's downfall. Diefenbaker is also remembered for his role in the 1959 cancellation of the Avro Arrow project, a decision that remained deeply controversial. Factionalism returned in full force as the Progressive Conservatives fell from power in 1963, and while Diefenbaker's performance as Opposition Leader was heralded, his second loss at the polls prompted opponents within the party to force him to a leadership convention in 1967. Diefenbaker stood for re-election as party leader at the last moment, but attracted only minimal support and withdrew. He remained an MP until his death in 1979, two months after Joe Clark became the first Tory Prime Minister since Diefenbaker.
Diefenbaker was born on September 18, 1895, in Neustadt, Ontario, to William Thomas Diefenbaker and the former Mary Florence Bannerman. His father was the son of German immigrants from Adersbach (near Sinsheim) in Baden; Mary Diefenbaker was of Scottish descent, and Diefenbaker was a Baptist. The family moved to several locations in Ontario in John's early years. William Diefenbaker was a teacher, and had deep interests in history and politics, which he sought to inculcate in his students. He had remarkable success doing so; of the 28 students at his school near Toronto in 1903, four, including his son, John, served as Conservative MPs in the 19th Canadian Parliament beginning in 1940. The Diefenbaker family moved west in 1903 so that William Diefenbaker could accept a position near Fort Carlton, then in the Northwest Territories (now in Saskatchewan). In 1906, William claimed a quarter-section of undeveloped land near Borden, Saskatchewan. In February 1910, the Diefenbaker family moved to Saskatoon, the site of the University of Saskatchewan. William and Mary Diefenbaker felt that John and his brother Elmer would have greater educational opportunities in Saskatoon. John Diefenbaker had been interested in politics from an early age, and told his mother at the age of eight or nine that he would some day be Prime Minister. She told him that it was an impossible ambition, especially for a boy living on the prairies. She would live to be proved wrong. John's first contact with politics came in 1910, when he sold a newspaper to Prime Minister Sir Wilfrid Laurier, who was in Saskatoon to lay the cornerstone for the university's first building. The present and future Prime Ministers conversed, and when giving his speech that afternoon, Sir Wilfrid commented on the newsboy who had ended their conversation by saying, "I can't waste any more time on you, Prime Minister. I must get about my work." The authenticity of the meeting was questioned in the 21st century, with an author suggesting that it was invented by Diefenbaker during an election campaign. After graduating from high school in Saskatoon in 1912, Diefenbaker entered the University of Saskatchewan. He received his Bachelor of Arts degree in 1915, and his Master of Arts the following year. Diefenbaker was commissioned as a lieutenant in the 196th (Western Universities) Battalion, CEF, in May 1916. In September, Diefenbaker was part of a contingent of 300 junior officers sent to Britain for pre-deployment training. Diefenbaker related in his memoirs that he was hit by a shovel, and the injury eventually resulted in his being invalided home. Diefenbaker's recollections do not correspond with his army medical records, which show no contemporary account of such an injury, and his biographer, Denis Smith, speculates that any injury was psychosomatic. After leaving the military in 1917, Diefenbaker returned to Saskatchewan, where he resumed his work as an articling student in law. He received his law degree in 1919, the first student to secure three degrees from the University of Saskatchewan. On June 30, 1919, he was called to the bar, and the following day, opened a small practice in the village of Wakaw, Saskatchewan. Although Wakaw had a population of only 400, it sat at the heart of a densely populated area of rural townships and had its own district court. It was also easily accessible to Saskatoon, Prince Albert and Humboldt, places where the Court of King's Bench sat.
The local people were mostly immigrants, and Diefenbaker's research found them to be particularly litigious. There was already one barrister in town, and the residents were loyal to him, initially refusing to rent office space to Diefenbaker. The new lawyer was forced to rent a vacant lot and erect a two-room wooden shack. Diefenbaker won the local people over through his success; in his first year in practice, he tried 62 jury trials, winning approximately half of his cases. He rarely called defence witnesses, thereby avoiding the possibility of rebuttal witnesses for the Crown, and securing the last word for himself. In late 1920, he was elected to the village council to serve a three-year term. Diefenbaker would often spend weekends with his parents in Saskatoon. While there, he began to woo Olive Freeman, daughter of the Baptist minister, but in 1921, she moved with her family to Brandon, Manitoba, and the two lost touch for more than 20 years. He then courted Beth Newell, a cashier in Saskatoon, and by 1922, the two were engaged. However, in 1923, Newell was diagnosed with tuberculosis, and Diefenbaker broke off contact with her. She died the following year. Diefenbaker was himself subject to internal bleeding, and may have feared that the disease would be transmitted to him. In late 1923, he had an operation at the Mayo Clinic for a gastric ulcer, but his health remained uncertain for several more years. After four years in Wakaw, Diefenbaker so dominated the local legal practice that his competitor left town. On May 1, 1924, Diefenbaker moved to Prince Albert, leaving a law partner in charge of the Wakaw office. Since 1905, when Saskatchewan entered Confederation, the province had been dominated by the Liberal Party, which practised highly effective machine politics. Diefenbaker was fond of stating, in his later years, that the only protection a Conservative had in the province was that afforded by the game laws. Diefenbaker's father, William, was a Liberal; however, John Diefenbaker found himself attracted to the Conservative Party. Free trade was widely popular throughout Western Canada, but Diefenbaker was convinced by the Conservative position that free trade would make Canada an economic dependent of the United States. However, he did not speak publicly of his politics. Diefenbaker recalled in his memoirs that, in 1921, he had been elected as secretary of the Wakaw Liberal Association while absent in Saskatoon, and had returned to find the association's records in his office. He promptly returned them to the association president. Diefenbaker also stated that he had been told that if he became a Liberal candidate, "there was no position in the province which would not be open to him." It was not until 1925 that Diefenbaker publicly came forward as a Conservative, a year in which both federal and Saskatchewan provincial elections were held. Journalist and historian Peter C. Newman, in his best-selling account of the Diefenbaker years, suggested that this choice was made for practical, rather than political reasons, as Diefenbaker had little chance of defeating established politicians and securing the Liberal nomination for either the House of Commons or the Legislative Assembly. The provincial election took place in early June; Liberals would later claim that Diefenbaker had campaigned for their party in the election. 
On June 19, however, Diefenbaker addressed a Conservative organizing committee, and on August 6, was nominated as the party's candidate for the federal riding of Prince Albert, a district in which the party's last candidate had lost his election deposit. A nasty campaign ensued, in which Diefenbaker was called a "Hun" because of his German-derived surname. The 1925 federal election was held on October 29; he finished third behind the Liberal and Progressive Party candidates, losing his deposit. The winning candidate, Charles McDonald, did not hold the seat long, resigning it to open a place for the Prime Minister, William Lyon Mackenzie King, who had been defeated in his Ontario riding. The Tories ran no candidate against Mackenzie King in the by-election on February 15, 1926, and he won easily. Although in the 1925 federal election, the Conservatives had won the greatest number of seats, Mackenzie King continued as Prime Minister with the tacit support of the Progressives. Mackenzie King held office for several months until he finally resigned when the Governor General, Lord Byng, refused a dissolution. Conservative Party leader Arthur Meighen became Prime Minister, but was quickly defeated in the House of Commons, and Byng finally granted a dissolution of Parliament. Diefenbaker, who had been confirmed as Conservative candidate, stood against Mackenzie King in the 1926 election, a rare direct electoral contest between two Canadian Prime Ministers. Mackenzie King triumphed easily, and regained his position as Prime Minister. Diefenbaker stood for the Legislative Assembly in the 1929 provincial election. He was defeated, but Saskatchewan Conservatives formed their first government, with help from smaller parties. As the defeated Conservative candidate for Prince Albert City, he was given charge of political patronage there, and was created a King's Counsel. Three weeks after his electoral defeat, he married Saskatoon teacher Edna Brower. Diefenbaker chose not to stand for the House of Commons in the 1930 federal election, citing health reasons. The Conservatives gained a majority in the election, and party leader R. B. Bennett became Prime Minister. Diefenbaker continued a high-profile legal practice, and in 1933, ran for mayor of Prince Albert. He was defeated by 48 votes in an election in which over 2,000 ballots were cast. In 1934, when the Crown prosecutor for Prince Albert resigned to become the Conservative Party's legislative candidate, Diefenbaker took his place as prosecutor. Diefenbaker did not stand in the 1934 provincial election, in which the governing Conservatives lost every seat. Six days after the election, Diefenbaker resigned as Crown prosecutor. The federal government of Bennett was defeated the following year and Mackenzie King returned as Prime Minister. Judging his prospects hopeless, Diefenbaker had declined a nomination to stand again against Mackenzie King in Prince Albert. In the waning days of the Bennett government, the Saskatchewan Conservative Party President was appointed a judge, leaving Diefenbaker, who had been elected the party's vice president, as acting president of the provincial party. Saskatchewan Conservatives eventually arranged a leadership convention for October 28, 1936. Eleven people were nominated, including Diefenbaker. The other ten candidates all deemed the provincial party in such hopeless shape that they withdrew, and Diefenbaker won the position by default. 
Diefenbaker asked the federal party for $10,000 in financial support, but the funds were refused, and the Conservatives were shut out of the legislature in the 1938 provincial elections for the second consecutive time. Diefenbaker himself was defeated in the Arm River riding by 190 votes. With the province-wide Conservative vote having fallen to 12%, Diefenbaker offered his resignation to a post-election party meeting in Moose Jaw, but it was refused. Diefenbaker continued to run the provincial party out of his law office, and paid the party's debts from his own pocket. Diefenbaker quietly sought the Conservative nomination for the federal riding of Lake Centre, but was unwilling to risk a divisive intra-party squabble. In what Diefenbaker biographer Smith states "appears to have been an elaborate and prearranged charade", Diefenbaker attended the nominating convention as keynote speaker, but withdrew when his name was proposed, stating that a local man should be selected. The winner among the six remaining candidates, riding president W. B. Kelly, declined the nomination, urging the delegates to select Diefenbaker, which they promptly did. Mackenzie King called a general election for March 25, 1940. The incumbent in Lake Centre was the Deputy Speaker of the House of Commons, Liberal John Frederick Johnston. Diefenbaker campaigned aggressively in Lake Centre, holding 63 rallies and seeking to appeal to members of all parties. On election day, he defeated Johnston by 280 votes on what was otherwise a disastrous day for the Conservatives, who won only 39 seats out of the 245 in the House of Commons—their lowest total since Confederation. Diefenbaker joined a shrunken and demoralized Conservative caucus in the House of Commons. The Conservative leader, Robert Manion, failed to win a place in the Commons in the election, which saw the Liberals take 181 seats. The Tories sought to be included in a wartime coalition government, but Mackenzie King refused. The House of Commons had only a slight role in the war effort; under the state of emergency, most business was accomplished through the Cabinet issuing Orders in Council. Diefenbaker was appointed to the House Committee on the Defence of Canada Regulations, an all-party committee that examined the wartime rules allowing arrest and detention without trial. On June 13, 1940, Diefenbaker made his maiden speech as an MP, supporting the regulations, and emphatically stating that most Canadians of German descent were loyal. In his memoirs, Diefenbaker described an unsuccessful fight against the forced relocation and internment of many Japanese-Canadians; however, this account is disputed. According to Diefenbaker biographer Smith, the Conservative MP quietly admired Mackenzie King for his political skills. However, Diefenbaker proved a gadfly and an annoyance to Mackenzie King. Angered by the words of Diefenbaker and fellow Conservative MP Howard Green in seeking to censure the government, the Prime Minister referred to Conservative MPs as "a mob". When Diefenbaker accompanied two other Conservative leaders to a briefing by Mackenzie King on the war, the Prime Minister exploded at Diefenbaker (a constituent of his), "What business do you have to be here? You strike me to the heart every time you speak." The Conservatives elected a floor leader, and in 1941 approached former Prime Minister Meighen, who had been appointed as a senator by Bennett, about becoming party leader again.
Meighen agreed, and resigned his Senate seat, but lost a by-election for an Ontario seat in the House of Commons. He remained as leader for several months, although he could not enter the chamber of the House of Commons. Meighen sought to move the Tories to the left, in order to undercut the Liberals and to take support away from the Co-operative Commonwealth Federation (CCF, the predecessor of the New Democratic Party (NDP)). To that end, he sought to draft the Liberal-Progressive premier of Manitoba, John Bracken, to lead the Conservatives. Diefenbaker objected to what he saw as an attempt to rig the party's choice of new leader and stood for the leadership himself at the party's 1942 leadership convention. Bracken was elected on the second ballot; Diefenbaker finished a distant third in both polls. At Bracken's request, the convention changed the party's name to "Progressive Conservative Party of Canada." Bracken chose not to seek entry to the House through a by-election, and when the Conservatives elected a new floor leader, Diefenbaker was defeated by one vote. Bracken was elected to the Commons in the 1945 general election, and for the first time in five years the Tories had their party leader in the House of Commons. The Progressive Conservatives won 67 seats to the Liberals' 125, with smaller parties and independents winning 52 seats. Diefenbaker increased his majority to over 1,000 votes, and had the satisfaction of seeing Mackenzie King defeated in Prince Albert—but by a CCF candidate. The Prime Minister was returned in an Ontario by-election within months. Diefenbaker staked out a position on the populist left of the PC party. Though most Canadians were content to look to Parliament for protection of civil liberties, Diefenbaker called for a Bill of Rights, calling it "the only way to stop the march on the part of the government towards arbitrary power". He objected to the great powers used by the Mackenzie King government to attempt to root out Soviet spies after the war, such as imprisonment without trial, and complained about the government's proclivity for letting its wartime powers become permanent. In early 1948, Mackenzie King, by now aged 73, announced his retirement; later that year Louis St. Laurent succeeded him. Although Bracken had nearly doubled the Tory representation in the House, prominent Tories were increasingly unhappy with his leadership, and pressured him to stand down. These party bosses believed that Ontario Premier George A. Drew, who had won three successive provincial elections and had even made inroads in francophone ridings, was the man to lead the Progressive Conservatives to victory. When Bracken resigned on July 17, 1948, Diefenbaker announced his candidacy. The party's backers, principally financiers headquartered on Toronto's Bay Street, preferred Drew's conservative political stances to Diefenbaker's Western populism. Tory leaders packed the 1948 leadership convention in Ottawa in favour of Drew, appointing more than 300 delegates at-large. One cynical party member commented, "Ghost delegates with ghost ballots, marked by the ghostly hidden hand of Bay Street, are going to pick George Drew, and he'll deliver a ghost-written speech that'll cheer us all up, as we march briskly into a political graveyard." Drew easily defeated Diefenbaker on the first ballot. St. Laurent called an election for June 1949, and the Tories were decimated, falling to 41 seats, only two more than the party's 1940 nadir. 
Despite intense efforts to make the Progressive Conservatives appeal to Quebecers, the party won only two seats in the province. Newman argued that but for Diefenbaker's many defeats, he would never have become Prime Minister: If, as a neophyte lawyer, he had succeeded in winning the Prince Albert seat in the federal elections of 1925 or 1926, ... Diefenbaker would probably have been remembered only as an obscure minister in Bennett's Depression cabinet ... If he had carried his home-town mayoralty in 1933, ... he'd probably not be remembered at all ... If he had succeeded in his bid for the national leadership in 1942, he might have taken the place of John Bracken on his six-year march to oblivion as leader of a party that had not changed itself enough to follow a Prairie radical ... [If he had defeated Drew in 1948, he] would have been free to flounder before the political strength of Louis St. Laurent in the 1949 and 1953 campaigns. The governing Liberals repeatedly attempted to deprive Diefenbaker of his parliamentary seat. In 1948, Lake Centre was redistricted to remove areas which strongly supported Diefenbaker. In spite of that, he was returned in the 1949 election, the only PC member from Saskatchewan. In 1952, a redistricting committee dominated by Liberals abolished Lake Centre entirely, dividing its voters among three other ridings. Diefenbaker stated in his memoirs that he had considered retiring from the House; with Drew only a year older than he was, the Westerner saw little prospect of advancement, and had received tempting offers from Ontario law firms. However, the gerrymandering so angered him that he decided to fight for a seat. Diefenbaker's party had taken Prince Albert only once, in 1911, but he decided to stand in that riding for the 1953 election, and was successful. He would hold that seat for the rest of his life. Even though Diefenbaker campaigned nationally for party candidates, the Progressive Conservatives gained little, rising to 51 seats as St. Laurent led the Liberals to a fifth successive majority. In addition to trying to secure his departure from Parliament, the government opened a home for unwed Indian mothers next door to Diefenbaker's home in Prince Albert. Diefenbaker continued practising law. In 1951, he gained national attention by accepting the "Atherton" case, in which a young telegraph operator had been accused of negligently causing a train crash by omitting crucial information from a message. Twenty-one people were killed, mostly Canadian troops bound for Korea. Diefenbaker paid $1,500 and sat a token bar examination to join the Law Society of British Columbia to take the case, and gained an acquittal, prejudicing the jury against the Crown prosecutor and pointing out a previous case in which interference had caused information to be lost in transmission. Although Edna Diefenbaker had been devoted to advancing her husband's career, in the mid-1940s she began to suffer mental illness, and was placed in a private mental hospital for a time. She later fell ill from leukemia, and died in 1951. In 1953, Diefenbaker married Olive Palmer (formerly Olive Freeman), whom he had courted while living in Wakaw. Olive Diefenbaker became a great source of strength to her husband. There were no children born of either marriage. Diefenbaker won Prince Albert in 1953, even as the Tories suffered a second consecutive disastrous defeat under Drew. Speculation arose in the press that the leader might be pressured to step aside. 
Drew was determined to remain, however, and Diefenbaker was careful to avoid any action that might be seen as disloyal. However, Diefenbaker was never a member of the "Five O'clock Club" of Drew intimates who met the leader in his office for a drink and gossip each day. By 1955, there was a widespread feeling among Tories that Drew was not capable of leading the party to a victory. At the same time, the Liberals were in flux as the aging St. Laurent tired of politics. Drew was able to damage the government in a weeks-long battle over the TransCanada pipeline in 1956—the so-called Pipeline Debate—in which the government, in a hurry to obtain financing for the pipeline, imposed closure before the debate even began. The Tories and the CCF combined to obstruct business in the House for weeks before the Liberals were finally able to pass the measure. Diefenbaker played a relatively minor role in the Pipeline Debate, speaking only once. By 1956, the Social Credit Party was becoming a potential rival to the Tories as Canada's main right-wing party. Canadian journalist and author Bruce Hutchison discussed the state of the Tories in 1956: When a party calling itself Conservative can think of nothing better than to outbid the Government's election promises; when it demands economy in one breath and increased spending in the next; when it proposes an immediate tax cut regardless of inflationary results ... when in short, the Conservative party no longer gives us a conservative alternative after twenty-one years ... then our political system desperately requires an opposition prepared to stand for something more than the improbable chance of quick victory. In August 1956, Drew fell ill and many within the party urged him to step aside, feeling that the Progressive Conservatives needed vigorous leadership with an election likely within a year. He resigned in late September, and Diefenbaker immediately announced his candidacy for the leadership. A number of Progressive Conservative leaders, principally from the Ontario wing of the party, started a "Stop Diefenbaker" movement, and wooed University of Toronto president Sidney Smith as a possible candidate. When Smith declined, they could find no one of comparable stature to stand against Diefenbaker. At the leadership convention in Ottawa in December 1956, Diefenbaker won on the first ballot, and the dissidents reconciled themselves to his victory. After all, they reasoned, Diefenbaker was now 61 and unlikely to lead the party for more than one general election, an election they believed would be won by the Liberals regardless of who led the Tories. In January 1957, Diefenbaker took his place as Leader of the Official Opposition. In February, St. Laurent informed him that Parliament would be dissolved in April for an election on June 10. The Liberals submitted a budget in March; Diefenbaker attacked it for overly high taxes, failure to assist pensioners, and a lack of aid for the poorer provinces. Parliament was dissolved on April 12. St. Laurent was so confident of victory that he did not even bother to make recommendations to the Governor General to fill the 16 vacancies in the Senate. Diefenbaker ran on a platform which concentrated on changes in domestic policies. He pledged to work with the provinces to reform the Senate. He proposed a vigorous new agricultural policy, seeking to stabilize income for farmers. He sought to reduce dependence on trade with the United States, and to seek closer ties with the United Kingdom. St. 
Laurent called the Tory platform "a mere cream-puff of a thing—with more air than substance". Diefenbaker and the PC party used television adroitly, whereas St. Laurent stated that he was more interested in seeing people than in talking to cameras. Though the Liberals outspent the Progressive Conservatives three to one, according to Newman, their campaign had little imagination, and was based on telling voters that their only real option was to re-elect St. Laurent. Diefenbaker characterized the Tory program in a nationwide telecast on April 30: "It is a program ... for a united Canada, for one Canada, for Canada first, in every aspect of our political and public life, for the welfare of the average man and woman. That is my approach to public affairs and has been throughout my life ... A Canada, united from Coast to Coast, wherein there will be freedom for the individual, freedom of enterprise and where there will be a Government which, in all its actions, will remain the servant and not the master of the people." The final Gallup poll before the election showed the Liberals ahead, 48% to 34%. Just before the election, "Maclean's" magazine printed its regular weekly issue, to go on sale the morning after the vote, editorializing that democracy in Canada was still strong despite a sixth consecutive Liberal victory. On election night, the Progressive Conservative advance started early, with the gain of two seats in reliably Liberal Newfoundland. The party picked up nine seats in Nova Scotia, five in Quebec, 28 in Ontario, and at least one seat in every other province. The Progressive Conservatives took 112 seats to the Liberals' 105: a plurality, but not a majority. While the Liberals finished some 200,000 votes ahead of the Tories nationally, that margin was mostly wasted in overwhelming victories in safe Quebec seats. St. Laurent could have attempted to form a government; however, with the minor parties pledging to cooperate with the Progressive Conservatives, he would likely have faced a quick defeat in the Commons. St. Laurent instead resigned, making Diefenbaker Prime Minister. When John Diefenbaker took office as Prime Minister of Canada on June 21, 1957, only one Progressive Conservative MP, Earl Rowe, had served in federal governmental office, for a brief period under Bennett in 1935. Rowe was no friend of Diefenbaker, and was given no place in his government. Diefenbaker appointed Ellen Fairclough as Secretary of State for Canada, the first woman to be appointed to a Cabinet post, and Michael Starr as Minister of Labour, the first Canadian of Ukrainian descent to serve in Cabinet. As the Parliament buildings had been lent to the Universal Postal Union for its 14th congress, Diefenbaker was forced to wait until the fall to convene Parliament. However, the Cabinet approved measures that summer, including increased price supports for butter and turkeys, and raises for federal employees. Once the 23rd Canadian Parliament was opened on October 14 by Queen Elizabeth II — the first to be opened by any Canadian monarch — the government rapidly passed legislation, including tax cuts and increases in old age pensions. The Liberals were ineffective in opposition, with the party in the midst of a leadership race after St. Laurent's resignation as party leader. With the Conservatives leading in the polls, Diefenbaker wanted a new election, hopeful that his party would gain a majority of seats.
The strong Liberal presence meant that the Governor General could refuse a dissolution request early in a parliament's term and allow the Liberals to form a government if Diefenbaker resigned. Diefenbaker sought a pretext for a new election. Such an excuse presented itself when former Minister of External Affairs Lester Pearson attended his first parliamentary session as Leader of the Opposition on January 20, 1958, four days after becoming the Liberal leader. In his first speech as leader, Pearson (recently returned from Oslo, where he had been awarded the Nobel Peace Prize) moved an amendment to supply, and called, not for an election, but for the Progressive Conservatives to resign, allowing the Liberals to form a government. Pearson stated that the condition of the economy required "a Government pledged to implement Liberal policies". Government MPs laughed at Pearson, as did members of the press who were present. Pearson later recorded in his memoirs that he knew that his "first attack on the government had been a failure, indeed a fiasco". Diefenbaker spoke for two hours and three minutes, and devastated his Liberal opposition. He mocked Pearson, contrasting the party leader's address at the Liberal leadership convention with his speech to the House: "On Thursday there was shrieking defiance, on the following Monday there is shrinking indecision ... The only reason that this motion is worded as it is ["sic"] is that my honourable friends opposite quake when they think of what will happen if an election comes ... It is the resignation from responsibility of a great party." Diefenbaker read from an internal report provided to the St. Laurent government in early 1957, warning that a recession was coming, and stated: "Across the way, Mr. Speaker, sit the purveyors of gloom who would endeavour for political purposes, to panic the Canadian people ... They had a warning ... Did they tell us that? No. Mr. Speaker, why did they not reveal this? Why did they not act when the House was sitting in January, February, March, and April? They had the information ... You concealed the facts, that is what you did." According to the Minister of Finance, Donald Fleming, "Pearson looked at first merry, then serious, then uncomfortable, then disturbed, and finally sick." Pearson recorded in his memoirs that the Prime Minister "tore me to shreds". Prominent Liberal frontbencher Paul Martin called Diefenbaker's response "one of the greatest devastating speeches" and "Diefenbaker's great hour". On February 1, Diefenbaker asked the Governor General, Vincent Massey, to dissolve Parliament, alleging that though St. Laurent had promised cooperation, Pearson had made it clear he would not follow his predecessor's lead. Massey agreed to the dissolution, and Diefenbaker set an election date of March 31, 1958. The 1958 election campaign saw a huge outpouring of public support for the Progressive Conservatives. At the opening campaign rally in Winnipeg on February 12, voters filled the hall until the doors had to be closed for safety reasons. They were promptly broken down by the crowd outside. At the rally, Diefenbaker called for "[a] new vision. A new hope. A new soul for Canada." He pledged to open the Canadian North, to seek out its resources and make it a place for settlements. The conclusion to his speech expounded on what became known as "The Vision": "This is the vision: One Canada. One Canada, where Canadians will have preserved to them the control of their own economic and political destiny. Sir John A.
Macdonald saw a Canada from east to west: he opened the west. I see a new Canada—a Canada of the North. This is the vision!" Pierre Sévigny, who would be elected an MP in 1958, recalled the gathering, "When he had finished that speech, as he was walking to the door, I saw people kneel and kiss his coat. Not one, but many. People were in tears. People were delirious. And this happened many a time after." When Sévigny introduced Diefenbaker to a Montreal rally with the words "Levez-vous, levez-vous, saluez votre chef!" (Rise, rise, salute your chief!), according to Postmaster General William Hamilton, "thousands and thousands of people, jammed into that auditorium, just tore the roof off in a frenzy." Michael Starr remembered, "That was the most fantastic election ... I went into little places. Smoky Lake, Alberta, where nobody ever saw a minister. Canora, Saskatchewan. Every meeting was jammed ... The halls would be filled with people and sitting there in the front would be the first Ukrainian immigrants with shawls and hands gnarled from work ... I would switch to Ukrainian and the tears would start to run down their faces ... I don't care who says what won the election; it was the emotional aspect that really caught on." Pearson and his Liberals faltered badly in the campaign. The Liberal Party leader tried to make an issue of the fact that Diefenbaker had called a winter election, generally disfavoured in Canada due to travel difficulties. Pearson's objection cut little ice with voters, and served only to remind the electorate that the Liberals, at their convention, had called for an election. Pearson mocked Diefenbaker's northern plans as "igloo-to-igloo" communications, and was assailed by the Prime Minister for being condescending. The Liberal leader spoke to small, quiet crowds, which quickly left the halls when he was done. By election day, Pearson had no illusions that he might win the election, and hoped only to salvage 100 seats. The Liberals would be limited to less than half of that. On March 31, 1958, the Tories won what is still the largest majority (in terms of percentage of seats) in Canadian federal political history, winning 208 seats to the Liberals' 48, with the CCF winning 8 and Social Credit wiped out. The Progressive Conservatives won a majority of the votes and of the seats in every province except British Columbia (49.8%) and Newfoundland. Quebec's "Union Nationale" political machine had given the PC party little support, but with Quebec voters minded to support Diefenbaker, "Union Nationale" boss Maurice Duplessis threw the machinery of his party behind the Tories. An economic downturn was beginning in Canada by 1958. Because of tax cuts instituted the previous year, the budget presented by the government predicted a small deficit for 1957–58, and a large one, $648 million, for the following year. Minister of Finance Fleming and Bank of Canada Governor James Coyne proposed that the wartime Victory Bond issue, which constituted two-thirds of the national debt and which was due to be redeemed by 1967, be refinanced to a longer term. After considerable indecision on Diefenbaker's part, a nationwide campaign took place, and 90% of the bonds were converted. However, this transaction led to an increase in the money supply, which in future years would hamper the government's efforts to respond to unemployment. As a trial lawyer, and in opposition, Diefenbaker had long been concerned with civil liberties.
On July 1, 1960, Dominion Day, he introduced the Canadian Bill of Rights in Parliament, and the bill rapidly passed and was proclaimed on August 10, fulfilling a lifetime goal of Diefenbaker's. The document purported to guarantee fundamental freedoms, with special attention to the rights of the accused. However, as a mere piece of federal legislation, it could be amended by any other law, and the question of civil liberties was to a large extent a provincial matter, outside of federal jurisdiction. One lawyer remarked that the document provided rights for all Canadians, "so long as they don't live in any of the provinces". Diefenbaker had appointed the first First Nations member of the Senate, James Gladstone in January 1958, and in 1960, his government extended voting rights to all native people. Diefenbaker pursued a "One Canada" policy, seeking equality of all Canadians. As part of that philosophy, he was unwilling to make special concessions to Quebec's francophones. Thomas Van Dusen, who served as Diefenbaker's executive assistant and wrote a book about him, characterized the leader's views on this issue: There must be no compromise with Canada's existence as a nation. Opting out, two flags, two pension plans, associated states, Two Nations and all the other baggage of political dualism was ushering Quebec out of Confederation on the instalment plan. He could not accept any theory of two nations, however worded, because it would make of those neither French nor English second-class citizens. Diefenbaker's disinclination to make concessions to Quebec, along with the disintegration of the "Union Nationale", the failure of the Tories to build an effective structure in Quebec, and Diefenbaker appointing few Quebecers to his Cabinet, none to senior positions, all led to an erosion of Progressive Conservative support in Quebec. Diefenbaker did recommend the appointment of the first French-Canadian governor general, Georges Vanier. By mid-1961, differences in monetary policy led to open conflict with Bank of Canada Governor Coyne, who adhered to a tight money policy. Appointed by St. Laurent to a term expiring in December 1961, Coyne could only be dismissed before then by the passing of an Act of Parliament. Coyne defended his position by giving public speeches, to the dismay of the government. The Cabinet was also angered when it learned that Coyne and his board had passed amendments to the bank's pension scheme which greatly increased Coyne's pension, without publishing the amendments in the "Canada Gazette" as required by law. Negotiations between Minister of Finance Fleming and Coyne for the latter's resignation broke down, with the governor making the dispute public, and Diefenbaker sought to dismiss Coyne by legislation. Diefenbaker was able to get legislation to dismiss Coyne through the House, but the Liberal-controlled Senate invited Coyne to testify before one of its committees. After giving the governor a platform against the government, the committee then chose to take no further action, adding its view that Coyne had done nothing wrong. Once he had the opportunity to testify (denied him in the Commons), Coyne resigned, keeping his increased pension, and the government was extensively criticized in the press. By the time Diefenbaker called an election for June 18, 1962, the party had been damaged by loss of support in Quebec and in urban areas as voters grew disillusioned with Diefenbaker and the Tories. 
The PC campaign was hurt when the Bank of Canada was forced to devalue the Canadian dollar to 92.5 US cents; it had previously hovered in the range from 95 cents to par with the United States dollar. Privately printed satirical "Diefenbucks" swept the country. On election day, the Progressive Conservatives lost 92 seats, but were still able to form a minority government. The New Democratic Party (the successor to the CCF) and Social Credit held the balance of power in the new Parliament. Diefenbaker attended a meeting of the Commonwealth Prime Ministers in London shortly after taking office in 1957. He generated headlines by proposing that 15% of Canadian spending on US imports instead be spent on imports from the United Kingdom. Britain responded with an offer of a free trade agreement, which was rejected by the Canadians. As the Harold Macmillan government in the UK sought to enter the Common Market, Diefenbaker feared that Canadian exports to the UK would be threatened. He also believed that the mother country should place the Commonwealth first, and sought to discourage Britain's entry. The British were annoyed at Canadian interference. Britain's initial attempt to enter the Common Market was vetoed by French President Charles de Gaulle. Through 1959, the Diefenbaker government had a policy of not criticizing South Africa and its apartheid government. In this stance, Diefenbaker had the support of the Liberals but not that of CCF leader Hazen Argue. In 1960, however, the South Africans sought to maintain membership in the Commonwealth even if South African white voters chose to make the country a republic in a referendum scheduled for later that year. South Africa asked the Commonwealth Prime Ministers' Conference to allow it to remain in the Commonwealth regardless of the result of the referendum. Diefenbaker privately expressed his distaste for apartheid to South African External Affairs Minister Eric Louw and urged him to give the black and coloured people of South Africa at least the minimal representation they had originally had. Louw, attending the conference as Prime Minister Hendrik Verwoerd recovered from an assassination attempt, refused. The conference resolved that an advance decision would constitute interference in South Africa's internal affairs. On October 5, 1960, South Africa's white voters decided to make the country a republic. At the Prime Ministers' Conference in 1961, Verwoerd formally applied for South Africa to remain in the Commonwealth. The prime ministers were divided. Diefenbaker broke the deadlock by proposing that the conference not reject South Africa's application, but instead state in a communiqué that racial equality was a principle of the Commonwealth. This was adopted, although Britain and New Zealand disagreed with Diefenbaker's proposal. South Africa could not accept the communiqué, and withdrew its application to remain in the Commonwealth. According to Peter Newman, this was "Diefenbaker's most important contribution to international politics ... Diefenbaker flew home, a hero." American officials were uncomfortable with Diefenbaker's initial election, believing they had heard undertones of anti-Americanism in the campaign. After years of the Liberals, one US State Department official noted, "We'll be dealing with an unknown quantity." 
Diefenbaker's 1958 landslide was viewed with disappointment by the US officials, who knew and liked Pearson from his years in diplomacy and who felt the Liberal Party leader would be more likely to institute pro-American policies. However, US President Dwight Eisenhower took pains to foster good relations with Diefenbaker. The two men found much in common, from Western farm backgrounds to a love of fishing, and Diefenbaker had an admiration for war leaders such as Eisenhower and Churchill. Diefenbaker wrote in his memoirs, "I might add that President Eisenhower and I were from our first meeting on an 'Ike–John' basis, and that we were as close as the nearest telephone." The Eisenhower–Diefenbaker relationship was sufficiently strong that the touchy Canadian Prime Minister was prepared to overlook slights. When Eisenhower addressed Parliament in October 1958, he downplayed trade concerns that Diefenbaker had publicly expressed. Diefenbaker said nothing and took Eisenhower fishing. Diefenbaker had approved plans to join the United States in what became known as NORAD, an integrated air defence system, in mid-1957. Despite Liberal misgivings that Diefenbaker had committed Canada to the system before consulting either the Cabinet or Parliament, Pearson and his followers voted with the government to approve NORAD in June 1958. In 1959, the Diefenbaker government cancelled the development and manufacture of the Avro CF-105 Arrow. The Arrow was a supersonic jet interceptor built by Avro Canada in Malton, Ontario, to defend Canada in the event of a Soviet attack. The interceptor had been under development since 1953, and had suffered from many cost overruns and complications. In 1955, the RCAF stated it would need only nine squadrons of Arrows, down from 20, as originally proposed. According to C. D. Howe, the former minister responsible for postwar reconstruction, the St. Laurent government had serious misgivings about continuing the Arrow program, and planned to discuss its termination after the 1957 election. In the run-up to the 1958 election, with three Tory-held seats at risk in the Malton area, the Diefenbaker government authorized further funding. Even though the first test flights of the Arrow were successful, the US government was unwilling to commit to a purchase of aircraft from Canada. In September 1958, Diefenbaker warned that the Arrow would come under complete review in six months. The company began seeking out other projects including a US-funded "saucer" program that became the VZ-9 Avrocar, and also mounted a public relations offensive urging that the Arrow go into full production. On February 20, 1959, the Cabinet decided to cancel the Avro Arrow, following an earlier decision to permit the United States to build two Bomarc missile bases in Canada. The company immediately dismissed its 14,000 employees, blaming Diefenbaker for the firings, though it rehired 2,500 employees to fulfil existing obligations. Although the two leaders had a strong relationship, by 1960 US officials were becoming concerned by what they viewed as Canadian procrastination on vital issues, such as whether Canada should join the Organization of American States (OAS). Talks on these issues in June 1960 produced little in results. Diefenbaker hoped that US Vice President Richard Nixon would win the 1960 US presidential election, but when Nixon's Democratic rival, Senator John F. Kennedy won the race, he sent Senator Kennedy a note of congratulations. 
Kennedy did not respond until Canadian officials asked what had become of Diefenbaker's note, two weeks later. Diefenbaker, for whom such correspondence was very meaningful, was annoyed at the President-elect's slowness to respond. In January 1961, Diefenbaker visited Washington to sign the Columbia River Treaty. However, with only days remaining in the Eisenhower administration, little else could be accomplished. The Kennedy administration began its dealings with Canada badly, with Kennedy mispronouncing Diefenbaker's name in a press conference announcing the Prime Minister's visit to Washington in February 1961. A furious Diefenbaker brought up in Cabinet whether to send a note of protest at the gaffe to Washington; his colleagues were inclined to let the matter pass. When the two met in Washington on February 20, Diefenbaker was impressed by Kennedy, and invited him to visit Ottawa. President Kennedy, however, told his aides that he never wanted "to see the boring son of a bitch again". The Ottawa visit also began badly: at the welcome at the airport, Kennedy again mispronounced Diefenbaker's name and stated that after hearing the Prime Minister's (notoriously bad) French, he was uncertain if he should venture into the language (Kennedy's French was equally bad). After meeting with Diefenbaker, Kennedy accidentally left behind a briefing note suggesting he "push" Diefenbaker on several issues, including the decision to accept nuclear weapons on Canadian soil, which bitterly divided the Cabinet. Diefenbaker was also annoyed by Kennedy's speech to Parliament, in which he urged Canada to join the OAS (which Diefenbaker had already rejected), and by the President spending most of his time talking to Leader of the Opposition Pearson at the formal dinner. Both Kennedy and his wife Jackie were bored by Diefenbaker's Churchill anecdotes at lunch, stories that Jackie Kennedy later described as "painful". Diefenbaker was initially inclined to go along with Kennedy's request that nuclear weapons be stationed on Canadian soil as part of NORAD. However, when an August 3, 1961, letter from Kennedy which urged this was leaked to the media, Diefenbaker was angered and withdrew his support. The Prime Minister was also influenced by a massive demonstration against nuclear weapons, which took place on Parliament Hill. Diefenbaker was handed a petition containing 142,000 names. By 1962, the American government was becoming increasingly concerned at the lack of a commitment from Canada to take nuclear weapons. The interceptors and Bomarc missiles with which Canada was being supplied as a NORAD member were either of no use or of greatly diminished utility without nuclear devices. Canadian and American military officers launched a quiet campaign to make this known to the press, and to advocate for Canadian agreement to acquire the warheads. Diefenbaker was also upset when Pearson was invited to the White House for a dinner for Nobel Prize winners in April, and met with the President privately for 40 minutes. When the Prime Minister met with retiring American Ambassador Livingston Merchant, he angrily disclosed the paper Kennedy had left behind, and hinted that he might make use of it in the upcoming election campaign. Merchant's report caused consternation in Washington, and the ambassador was sent back to see Diefenbaker again. This time, he found Diefenbaker calm, and the Prime Minister pledged not to use the memo, and to give Merchant advance word if he changed his mind. 
Canada appointed a new ambassador to Washington, Charles Ritchie, who on arrival received a cool reception from Kennedy and found that the squabble was affecting progress on a number of issues. Though Kennedy was careful to avoid overt favouritism during the 1962 Canadian election campaign, he did allow his pollster, Lou Harris, to work clandestinely for the Liberals. Several times during the campaign, Diefenbaker stated that the Kennedy administration desired his defeat because he refused to "bow down to Washington". After Diefenbaker was returned with a minority, Washington continued to press for acceptance of nuclear arms, but Diefenbaker, faced with a split between Defence Minister Douglas Harkness and External Affairs Minister Howard Green on the question, continued to stall, hoping that time and events would invite consensus. When the Cuban Missile Crisis erupted in October 1962, Kennedy chose not to consult with Diefenbaker before making decisions on what actions to take. The US President sent former Ambassador Merchant to Ottawa to inform the Prime Minister as to the content of the speech that Kennedy was to make on television. Diefenbaker was upset at both the lack of consultation and the fact that he was given less than two hours advance word. He was angered again when the US government released a statement stating that it had Canada's full support. In a statement to the Commons, Diefenbaker proposed sending representatives of neutral nations to Cuba to verify the American allegations, which Washington took to mean that he was questioning Kennedy's word. When American forces went to a heightened alert, DEFCON 3, Diefenbaker was slow to order Canadian forces to match it. Harkness and the Chiefs of Staff had Canadian forces clandestinely go to that alert status anyway, and Diefenbaker eventually authorized it. The crisis ended without war, and polls found that Kennedy's actions were widely supported by Canadians. Diefenbaker was severely criticized in the media. On January 3, 1963, NATO Supreme Commander General Lauris Norstad visited Ottawa, in one of a series of visits to member nations prior to his retirement. At a news conference, Norstad stated that if Canada did not accept nuclear weapons, it would not be fulfilling its commitments to NATO. Newspapers across Canada criticized Diefenbaker, who was convinced the statement was part of a plot by Kennedy to bring down his government. Although the Liberals had been previously indecisive on the question of nuclear weapons, on January 12, Pearson made a speech stating that the government should live up to the commitments it had made. With the Cabinet still divided between adherents of Green and Harkness, Diefenbaker made a speech in the Commons on January 25 that Fleming (by then Minister of Justice) termed "a model of obfuscation". Harkness was initially convinced that Diefenbaker was saying that he would support nuclear warheads in Canada. After talking to the press, he realized that his view of the speech was not universally shared, and he asked Diefenbaker for clarification. Diefenbaker, however, continued to try to avoid taking a firm position. On January 30, the US State Department issued a press release suggesting that Diefenbaker had made misstatements in his Commons speech. For the first time ever, Canada recalled its ambassador to Washington as a diplomatic protest. 
Though all parties condemned the State Department action, the three parties outside the government demanded that Diefenbaker take a stand on the nuclear weapon issue. The bitter divisions within the Cabinet continued, with Diefenbaker deliberating whether to call an election on the issue of American interference in Canadian politics. At least six Cabinet ministers favoured Diefenbaker's ouster. Finally, at a dramatic Cabinet meeting on Sunday, February 3, Harkness told Diefenbaker that the Prime Minister no longer had the confidence of the Canadian people, and resigned. Diefenbaker asked ministers supporting him to stand, and when only about half did, stated that he was going to see the Governor General to resign, and that Fleming would be the next Prime Minister. Green called his Cabinet colleagues a "nest of traitors", but eventually cooler heads prevailed, and the Prime Minister was urged to return and to fight the motion of non-confidence scheduled for the following day. Harkness, however, persisted in his resignation. Negotiations with the Social Credit Party, which had enough votes to save the government, failed, and the government fell, 142–111. Two members of the government resigned the day after the government lost the vote. As the campaign opened, the Tories trailed in the polls by 15 points. To Pearson and his Liberals, the only question was how large a majority they would win. Peter Stursberg, who wrote two books about the Diefenbaker years, stated of that campaign: For the old Diefenbaker was in full cry. All the agony of the disintegration of his government was gone, and he seemed to be a giant revived by his contact with the people. This was Diefenbaker's finest election. He was virtually alone on the hustings. Even such loyalists as Gordon Churchill had to stick close to their own bailiwicks, where they were fighting for their political lives. Though the White House maintained public neutrality, privately Kennedy made it clear he desired a Liberal victory. Kennedy again lent his pollster, Lou Harris, to work for the Liberals. On election day, April 8, 1963, the Liberals claimed 129 seats to the Tories' 95, five seats short of an absolute majority. Diefenbaker held on to power for several days, until six Quebec Social Credit MPs signed a statement that Pearson should form the government. These votes would be enough to give Pearson the support of a majority of the House of Commons, and Diefenbaker resigned. The six MPs repudiated the statement within days. Nonetheless, Pearson formed a government with the support of the NDP. Diefenbaker continued to lead the Progressive Conservatives, again as Leader of the Opposition. In November 1963, upon hearing of Kennedy's assassination, the Tory leader addressed the Commons, stating, "A beacon of freedom has gone. Whatever the disagreement, to me he stood as the embodiment of freedom, not only in his own country, but throughout the world." In the 1964 Great Canadian Flag Debate, Diefenbaker led the unsuccessful opposition to the Maple Leaf flag, which the Liberals pushed for after the rejection of Pearson's preferred design showing three maple leaves. Diefenbaker preferred the existing Canadian Red Ensign or another design showing symbols of the nation's heritage. He dismissed the adopted design, with a single red maple leaf and two red bars, as "a flag that Peruvians might salute". 
At the request of Quebec Tory Léon Balcer, who feared devastating PC losses in the province at the next election, Pearson imposed closure, and the bill passed with the majority singing "O Canada" as Diefenbaker led the dissenters in "God Save the Queen". In 1966, the Liberals began to make an issue of the Munsinger affair—two officials of the Diefenbaker government had slept with a woman suspected of being a Soviet spy. In what Diefenbaker saw as a partisan attack, Pearson established a one-man Royal Commission, which, according to Diefenbaker biographer Smith, indulged in "three months of reckless political inquisition". By the time the commission issued its report, Diefenbaker and other former ministers had long since withdrawn their counsel from the proceedings. The report faulted Diefenbaker for not dismissing the ministers in question, but found no actual security breach. There were calls for Diefenbaker's retirement, especially from the Bay Street wing of the party, as early as 1964. Diefenbaker initially beat back attempts to remove him without trouble. When Pearson called an election in 1965 in the expectation of receiving a majority, Diefenbaker ran an aggressive campaign. The Liberals fell two seats short of a majority, and the Tories improved their position slightly at the expense of the smaller parties. After the election, some Tories, led by party president Dalton Camp, began a quiet campaign to oust Diefenbaker. In the absence of a formal leadership review process, Camp was able to stage a de facto review by running for re-election as party president on the platform of holding a leadership convention within a year. His campaign at the Tories' 1966 convention occurred amidst allegations of vote rigging, violence, and seating arrangements designed to ensure that when Diefenbaker addressed the delegates, television viewers would see unmoved delegates in the first ten rows. Other Camp supporters tried to shout Diefenbaker down. Camp was successful in being re-elected, thereby forcing a leadership convention for 1967. Diefenbaker initially made no announcement as to whether he would stand, but, angered by a resolution at the party's policy conference which spoke of "deux nations" or "two founding peoples" (as opposed to Diefenbaker's "One Canada"), decided to seek to retain his leadership. Although Diefenbaker entered at the last minute to stand as a candidate for the leadership, he finished fifth on each of the first three ballots, and withdrew from the contest, which was won by Nova Scotia Premier Robert Stanfield. Diefenbaker addressed the delegates before Stanfield spoke: My course has come to an end. I have fought your battles, and you have given me that loyalty that led us to victory more often than the party has ever had since the days of Sir John A. Macdonald. In my retiring, I have nothing to withdraw in my desire to see Canada, my country and your country, one nation. Diefenbaker was embittered by his loss of the party leadership. Pearson announced his retirement in December 1967, and Diefenbaker forged a wary relationship of mutual respect with Pearson's successor, Pierre Trudeau. Trudeau called a general election for June 1968; Stanfield asked Diefenbaker to join him at a rally in Saskatoon, which Diefenbaker refused, although the two appeared at hastily arranged photo opportunities. Trudeau obtained the majority against Stanfield that Pearson had never been able to obtain against Diefenbaker, as the PC party lost 25 seats, 20 of them in the West. 
The former Prime Minister, though stating, "The Conservative Party has suffered a calamitous disaster" in a CBC interview, could not conceal his delight at Stanfield's humiliation, and especially gloated at the defeat of Camp, who made an unsuccessful attempt to enter the Commons. Diefenbaker was easily returned for Prince Albert. Although Stanfield worked to try to unify the party, Diefenbaker and his loyalists proved difficult to reconcile. The division in the party broke out in well-publicised dissensions, as when Diefenbaker called on Progressive Conservative MPs to break with Stanfield's position on the Official Languages bill, and nearly half the caucus voted against their leader's will or abstained. In addition to his parliamentary activities, Diefenbaker travelled extensively and began work on his memoirs, which were published in three volumes between 1975 and 1977. Pearson died of cancer in 1972, and Diefenbaker was asked if he had kind words for his old rival. Diefenbaker shook his head and said only, "He shouldn't have won the Nobel Prize." By 1972, Diefenbaker had grown disillusioned with Trudeau, and campaigned wholeheartedly for the Tories in that year's election. Diefenbaker was reelected comfortably in his home riding, and the Progressive Conservatives came within two seats of matching the Liberal total. Diefenbaker was relieved both that Trudeau had been humbled and that Stanfield had been denied power. Trudeau regained his majority two years later in an election that saw Diefenbaker, by then the only living former Prime Minister, have his personal majority grow to 11,000 votes. In the 1976 New Year Honours, Diefenbaker was created a Companion of Honour, an accolade bestowed as the personal gift of the Sovereign. After a long illness, Olive Diefenbaker died on December 22, a loss which plunged Diefenbaker into despair. Joe Clark succeeded Stanfield as party leader in 1976, but as Clark had supported the leadership review, Diefenbaker held a grudge against him. Diefenbaker had supported Claude Wagner for leader, but when Clark won, stated that Clark would make "a remarkable leader of this party". However, Diefenbaker repeatedly criticized his party leader, to such an extent that Stanfield publicly asked Diefenbaker "to stop sticking a knife into Mr. Clark"—a request Diefenbaker did not agree to. According to columnist Charles Lynch, Diefenbaker regarded Clark as an upstart and a pipsqueak. In 1978, Diefenbaker announced that he would stand in one more election, and under the slogan "Diefenbaker—Now More Than Ever", weathered a campaign the following year during which he apparently suffered a mild stroke, although the media were told he was bedridden with influenza. In the May election Diefenbaker defeated NDP candidate Stan Hovdebo (who, after Diefenbaker's death, would win the seat in a by-election) by 4,000 votes. Clark had defeated Trudeau, though only gaining a minority government, and Diefenbaker returned to Ottawa to witness the swearing-in, still unreconciled to his old opponents among Clark's ministers. Two months later, Diefenbaker died of a heart attack in his study about a month before his 84th birthday. Diefenbaker had extensively planned his funeral in consultation with government officials. He lay in state in the Hall of Honour in Parliament for two and a half days; 10,000 Canadians passed by his casket. The Maple Leaf Flag on the casket was partially obscured by the Red Ensign. 
After the service, his body was taken by train on a slow journey to its final destination, Saskatoon; along the route, many Canadians lined the tracks to watch the funeral train pass. In Winnipeg, an estimated 10,000 people waited at midnight in a one-kilometre line to file past the casket which made the trip draped in a Canadian flag and Diefenbaker's beloved Red Ensign. In Prince Albert, thousands of those he had represented filled the square in front of the railroad station to salute the only man from Saskatchewan ever to become Prime Minister. His coffin was accompanied by that of his wife Olive, disinterred from temporary burial in Ottawa. Prime Minister Clark delivered the eulogy, paying tribute to "an indomitable man, born to a minority group, raised in a minority region, leader of a minority party, who went on to change the very nature of his country, and change it forever". John and Olive Diefenbaker rest outside the Diefenbaker Centre, built to house his papers, on the campus of the University of Saskatchewan. Most of the policies that Diefenbaker held dear did not survive the 16 years of Liberal rule which followed his fall. By the end of 1963, the first of the Bomarc warheads entered Canada, where they remained until the last were finally phased out during John Turner's brief government in 1984. Diefenbaker's decision to have Canada remain outside the OAS was not reversed by Pearson, and it was not until 1989, under the Tory government of Brian Mulroney, that Canada joined. But several defining features of modern Canada can be traced back to Diefenbaker. Diefenbaker's Bill of Rights remains in effect, and signalled the change in Canadian political culture that would eventually bring about the Canadian Charter of Rights and Freedoms, which came into force after his death. Diefenbaker was the first to appoint women and ethnic minorities to Cabinet. It was under Diefenbaker that Canada finally achieved universal adult suffrage, with the granting of the vote to Native Canadians in 1960. The removal of explicit racial discrimination from the criteria for admission to Canada under the Immigration Act of 1961 was a factor in the creation of today's multi-cultural and multi-ethnic Canada. Diefenbaker reinvigorated a moribund party system in Canada. Clark and Mulroney, two men who, as students, worked on and were inspired by his 1957 triumph, became the only other Progressive Conservatives to lead the party to election triumphs. Diefenbaker's biographer, Denis Smith, wrote of him, "In politics he had little more than two years of success in the midst of failure and frustration, but he retained a core of deeply committed loyalists to the end of his life and beyond. The federal Conservative Party that he had revived remained dominant in the prairie provinces for 25 years after he left the leadership." The Harper government, believing that Tory prime ministers have been given short shrift in the naming of Canadian places and institutions, named the former Ottawa City Hall, now a federal office building, the John G. Diefenbaker Building. It also gave Diefenbaker's name to a human rights award and an icebreaking vessel. Harper often invoked Diefenbaker's northern vision in his speeches. Conservative Senator Marjory LeBreton worked in Diefenbaker's office during his second time as Opposition Leader, and has said of him, "He brought a lot of firsts to Canada, but a lot of it has been air-brushed from history by those who followed." 
Historian Michael Bliss, who published a survey of the Canadian Prime Ministers, wrote of Diefenbaker: From the distance of our times, Diefenbaker's role as a prairie populist who tried to revolutionize the Conservative Party begins to loom larger than his personal idiosyncrasies. The difficulties he faced in the form of significant historical dilemmas seem less easy to resolve than Liberals and hostile journalists opined at the time. If Diefenbaker defies rehabilitation, he can at least be appreciated. He stood for a fascinating and still relevant combination of individual and egalitarian values ... But his contemporaries were also right in seeing some kind of disorder near the centre of his personality and his prime-ministership. The problems of leadership, authority, power, ego, and a mad time in history overwhelmed the prairie politician with the odd name. John Diefenbaker received several honorary degrees in recognition of his political career. 
John Calvin John Calvin (born Jehan Cauvin; 10 July 1509 – 27 May 1564) was a French theologian, pastor and reformer in Geneva during the Protestant Reformation. He was a principal figure in the development of the system of Christian theology later called Calvinism, aspects of which include the doctrines of predestination and of the absolute sovereignty of God in the salvation of the human soul from death and eternal damnation, doctrines in which Calvin was influenced by, and elaborated upon, the Augustinian and other Christian traditions. Various Congregational, Reformed and Presbyterian churches, which look to Calvin as the chief expositor of their beliefs, have spread throughout the world. Calvin was a tireless polemical and apologetic writer who generated much controversy. He also exchanged cordial and supportive letters with many reformers, including Philipp Melanchthon and Heinrich Bullinger. In addition to his seminal "Institutes of the Christian Religion", Calvin wrote commentaries on most books of the Bible, confessional documents, and various other theological treatises. Originally trained as a humanist lawyer, he broke from the Roman Catholic Church around 1530. After religious tensions erupted in widespread deadly violence against Protestant Christians in France, Calvin fled to Basel, Switzerland, where in 1536 he published the first edition of the "Institutes". In that same year, Calvin was recruited by Frenchman William Farel to join the Reformation in Geneva, where he regularly preached sermons throughout the week; but the governing council of the city resisted the implementation of their ideas, and both men were expelled. At the invitation of Martin Bucer, Calvin proceeded to Strasbourg, where he became the minister of a church of French refugees. He continued to support the reform movement in Geneva, and in 1541 he was invited back to lead the church of the city. Following his return, Calvin introduced new forms of church government and liturgy, despite opposition from several powerful families in the city who tried to curb his authority. During this period, Michael Servetus, a Spaniard regarded by both Roman Catholics and Protestants as having a heretical view of the Trinity, arrived in Geneva. He was denounced by Calvin and burned at the stake for heresy by the city council. Following an influx of supportive refugees and new elections to the city council, Calvin's opponents were forced out. 
Calvin spent his final years promoting the Reformation both in Geneva and throughout Europe. John Calvin was born as Jehan Cauvin on 10 July 1509, at Noyon, a town in Picardy, a province of the Kingdom of France. He was the first of four sons who survived infancy. His mother, Jeanne le Franc, was the daughter of an innkeeper from Cambrai. She died of an unknown cause in Calvin's childhood, after having borne four more children. Calvin's father, Gérard Cauvin, had a prosperous career as the cathedral notary and registrar to the ecclesiastical court; he died in 1531, after suffering for two years with testicular cancer. Gérard intended his three sons — Charles, Jean, and Antoine — for the priesthood. The surname "Calvin" or "Cauvin" is in origin a diminutive of French "chauve" (Picard "calve", from Latin "calvus") meaning "bald". Young Calvin was particularly precocious. By age 12, he was employed by the bishop as a clerk and received the tonsure, cutting his hair to symbolise his dedication to the Church. He also won the patronage of an influential family, the Montmors. Through their assistance, Calvin was able to attend the Collège de la Marche, Paris, where he learned Latin from one of its greatest teachers, Mathurin Cordier. Once he completed the course, he entered the Collège de Montaigu as a philosophy student. In 1525 or 1526, Gérard withdrew his son from the Collège de Montaigu and enrolled him in the University of Orléans to study law. According to contemporary biographers Theodore Beza and Nicolas Colladon, Gérard believed that Calvin would earn more money as a lawyer than as a priest. After a few years of quiet study, Calvin entered the University of Bourges in 1529. He was intrigued by Andreas Alciati, a humanist lawyer. Humanism was a European intellectual movement which stressed classical studies. During his 18-month stay in Bourges, Calvin learned Koine Greek, a necessity for studying the New Testament. Alternate theories have been suggested regarding the date of Calvin's religious conversion. Some have placed the date of his conversion around 1533, shortly before he resigned his chaplaincy. In this view, his resignation is the direct evidence for his conversion to the evangelical faith. However, T. H. L. Parker argues that while this date is a terminus for his conversion, the more likely date is in late 1529 or early 1530. The main evidence for his conversion is contained in two significantly different accounts of his conversion. In the first, found in his "Commentary on the Book of Psalms", Calvin portrayed his conversion as a sudden change of mind, brought about by God: God by a sudden conversion subdued and brought my mind to a teachable frame, which was more hardened in such matters than might have been expected from one at my early period of life. Having thus received some taste and knowledge of true godliness, I was immediately inflamed with so intense a desire to make progress therein, that although I did not altogether leave off other studies, yet I pursued them with less ardour. In the second account, Calvin wrote of a long process of inner turmoil, followed by spiritual and psychological anguish: Being exceedingly alarmed at the misery into which I had fallen, and much more at that which threatened me in view of eternal death, I, duty bound, made it my first business to betake myself to your way, condemning my past life, not without groans and tears. 
And now, O Lord, what remains to a wretch like me, but instead of defence, earnestly to supplicate you not to judge that fearful abandonment of your Word according to its deserts, from which in your wondrous goodness you have at last delivered me. Scholars have argued about the precise interpretation of these accounts, but most agree that his conversion corresponded with his break from the Roman Catholic Church. The Calvin biographer Bruce Gordon has stressed that "the two accounts are not antithetical, revealing some inconsistency in Calvin's memory, but rather [are] two different ways of expressing the same reality." By 1532, Calvin received his licentiate in law and published his first book, a commentary on Seneca's "De Clementia". After uneventful trips to Orléans and his hometown of Noyon, Calvin returned to Paris in October 1533. During this time, tensions rose at the Collège Royal (later to become the Collège de France) between the humanists/reformers and the conservative senior faculty members. One of the reformers, Nicolas Cop, was rector of the university. On 1 November 1533 he devoted his inaugural address to the need for reform and renewal in the Roman Catholic Church. The address provoked a strong reaction from the faculty, who denounced it as heretical, forcing Cop to flee to Basel. Calvin, a close friend of Cop, was implicated in the offence, and for the next year he was forced into hiding. He remained on the move, sheltering with his friend Louis du Tillet in Angoulême and taking refuge in Noyon and Orléans. He was finally forced to flee France during the Affair of the Placards in mid-October 1534. In that incident, unknown reformers had posted placards in various cities criticizing the Roman Catholic mass, to which adherents of the Roman Catholic church responded with violence against the would-be Reformers and their sympathizers. In January 1535, Calvin joined Cop in Basel, a city under the enduring influence of the late reformer Johannes Oecolampadius. In March 1536, Calvin published the first edition of his "Institutio Christianae Religionis" or "Institutes of the Christian Religion". The work was an "apologia" or defense of his faith and a statement of the doctrinal position of the reformers. He also intended it to serve as an elementary instruction book for anyone interested in the Christian faith. The book was the first expression of his theology. Calvin updated the work and published new editions throughout his life. Shortly after its publication, he left Basel for Ferrara, Italy, where he briefly served as secretary to Princess Renée of France. By June he was back in Paris with his brother Antoine, who was resolving their father's affairs. Following the Edict of Coucy, which gave a limited six-month period for heretics to reconcile with the Catholic faith, Calvin decided that there was no future for him in France. In August he set off for Strasbourg, a free imperial city of the Holy Roman Empire and a refuge for reformers. Due to military manoeuvres of imperial and French forces, he was forced to make a detour to the south, bringing him to Geneva. Calvin had intended to stay only a single night, but William Farel, a fellow French reformer residing in the city, implored him to stay and assist him in his work of reforming the church there, insisting that it was his pious duty. Calvin, who reluctantly agreed to remain, later recounted: Then Farel, who was working with incredible zeal to promote the gospel, bent all his efforts to keep me in the city. 
And when he realized that I was determined to study in privacy in some obscure place, and saw that he gained nothing by entreaty, he descended to cursing, and said that God would surely curse my peace if I held back from giving help at a time of such great need. Terrified by his words, and conscious of my own timidity and cowardice, I gave up my journey and attempted to apply whatever gift I had in defense of my faith. Calvin accepted his new role without any preconditions on his tasks or duties. The office to which he was initially assigned is unknown. He was eventually given the title of "reader", which most likely meant that he could give expository lectures on the Bible. Sometime in 1537 he was selected to be a "pastor" although he never received any pastoral consecration. For the first time, the lawyer-theologian took up pastoral duties such as baptisms, weddings, and church services. During late 1536, Farel drafted a confession of faith, and Calvin wrote separate articles on reorganizing the church in Geneva. On 16 January 1537, Farel and Calvin presented their "Articles concernant l'organisation de l'église et du culte à Genève" (Articles on the Organization of the Church and its Worship at Geneva) to the city council. The document described the manner and frequency of their celebrations of the Eucharist, the reason for, and the method of, excommunication, the requirement to subscribe to the confession of faith, the use of congregational singing in the liturgy, and the revision of marriage laws. The council accepted the document on the same day. As the year progressed, Calvin and Farel's reputation with the council began to suffer. The council was reluctant to enforce the subscription requirement, as only a few citizens had subscribed to their confession of faith. On 26 November, the two ministers hotly debated the council over the issue. Furthermore, France was taking an interest in forming an alliance with Geneva and as the two ministers were Frenchmen, councillors had begun to question their loyalty. Finally, a major ecclesiastical-political quarrel developed when the city of Bern, Geneva's ally in the reformation of the Swiss churches, proposed to introduce uniformity in the church ceremonies. One proposal required the use of unleavened bread for the Eucharist. The two ministers were unwilling to follow Bern's lead and delayed the use of such bread until a synod in Zurich could be convened to make the final decision. The council ordered Calvin and Farel to use unleavened bread for the Easter Eucharist. In protest, they refused to administer communion during the Easter service. This caused a riot during the service and the next day, the council told Farel and Calvin to leave Geneva. Farel and Calvin then went to Bern and Zurich to plead their case. The resulting synod in Zurich placed most of the blame on Calvin for not being sympathetic enough toward the people of Geneva. It asked Bern to mediate with the aim of restoring the two ministers. The Geneva council refused to readmit the two men, who then took refuge in Basel. Subsequently, Farel received an invitation to lead the church in Neuchâtel. Calvin was invited to lead a church of French refugees in Strasbourg by that city's leading reformers, Martin Bucer and Wolfgang Capito. Initially, Calvin refused because Farel was not included in the invitation, but relented when Bucer appealed to him. 
By September 1538 Calvin had taken up his new position in Strasbourg, fully expecting that this time it would be permanent; a few months later, he applied for and was granted citizenship of the city. During his time in Strasbourg, Calvin was not attached to one particular church, but held his office successively in the Saint-Nicolas Church, the Sainte-Madeleine Church and the former Dominican Church, renamed the Temple Neuf. (All of these churches still exist, but none are in the architectural state of Calvin's days.) Calvin ministered to 400–500 members in his church. He preached or lectured every day, with two sermons on Sunday. Communion was celebrated monthly and congregational singing of the psalms was encouraged. He also worked on the second edition of the "Institutes". Calvin was dissatisfied with its original structure as a catechism, a primer for young Christians. For the second edition, published in 1539, Calvin dropped this format in favour of systematically presenting the main doctrines from the Bible. In the process, the book was enlarged from six chapters to seventeen. He concurrently worked on another book, the "Commentary on Romans", which was published in March 1540. The book was a model for his later commentaries: it included his own Latin translation from the Greek rather than the Latin Vulgate, an exegesis, and an exposition. In the dedicatory letter, Calvin praised the work of his predecessors Philipp Melanchthon, Heinrich Bullinger, and Martin Bucer, but he also took care to distinguish his own work from theirs and to criticise some of their shortcomings. Calvin's friends urged him to marry. Calvin took a prosaic view, writing to one correspondent: I, who have the air of being so hostile to celibacy, I am still not married and do not know whether I will ever be. If I take a wife it will be because, being better freed from numerous worries, I can devote myself to the Lord. Several candidates were presented to him, including one young woman from a noble family. Reluctantly, Calvin agreed to the marriage, on the condition that she would learn French. Although a wedding date was planned for March 1540, he remained reluctant and the wedding never took place. He later wrote that he would never think of marrying her, "unless the Lord had entirely bereft me of my wits". Instead, in August of that year, he married Idelette de Bure, a widow who had two children from her first marriage. Geneva reconsidered its expulsion of Calvin. Church attendance had dwindled and the political climate had changed; as Bern and Geneva quarrelled over land, their alliance frayed. When Cardinal Jacopo Sadoleto wrote a letter to the city council inviting Geneva to return to the Catholic faith, the council searched for an ecclesiastical authority to respond to him. At first Pierre Viret was consulted, but when he refused, the council asked Calvin. He agreed, and his "Responsio ad Sadoletum" (Letter to Sadoleto) strongly defended Geneva's position concerning reforms in the church. On 21 September 1540 the council commissioned one of its members, Ami Perrin, to find a way to recall Calvin. An embassy reached Calvin while he was at a colloquy, a conference to settle religious disputes, in Worms. His reaction to the suggestion was one of horror; he wrote, "Rather would I submit to death a hundred times than to that cross on which I had to perish daily a thousand times over." Calvin also wrote that he was prepared to follow the Lord's calling. 
A plan was drawn up in which Viret would be appointed to take temporary charge in Geneva for six months while Bucer and Calvin would visit the city to determine the next steps. The city council pressed for the immediate appointment of Calvin in Geneva. By mid-1541, Strasbourg decided to lend Calvin to Geneva for six months. Calvin returned on 13 September 1541 with an official escort and a wagon for his family. In supporting Calvin's proposals for reforms, the council of Geneva passed the "Ordonnances ecclésiastiques" (Ecclesiastical Ordinances) on 20 November 1541. The ordinances defined four orders of ministerial function: pastors to preach and to administer the sacraments; doctors to instruct believers in the faith; elders to provide discipline; and deacons to care for the poor and needy. They also called for the creation of the "Consistoire" (Consistory), an ecclesiastical court composed of the lay elders and the ministers. The city government retained the power to summon persons before the court, and the Consistory could judge only ecclesiastical matters having no civil jurisdiction. Originally, the court had the power to mete out sentences, with excommunication as its most severe penalty. The government contested this power and on 19 March 1543 the council decided that all sentencing would be carried out by the government. In 1542, Calvin adapted a service book used in Strasbourg, publishing "La Forme des Prières et Chants Ecclésiastiques" (The Form of Prayers and Church Hymns). Calvin recognised the power of music and he intended that it be used to support scripture readings. The original Strasbourg psalter contained twelve psalms by Clément Marot and Calvin added several more hymns of his own composition in the Geneva version. At the end of 1542, Marot became a refugee in Geneva and contributed nineteen more psalms. Louis Bourgeois, also a refugee, lived and taught music in Geneva for sixteen years and Calvin took the opportunity to add his hymns, the most famous being the Old Hundredth. In the same year of 1542, Calvin published "Catéchisme de l'Eglise de Genève" (Catechism of the Church of Geneva), which was inspired by Bucer's "Kurze Schrifftliche Erklärung" of 1534. Calvin had written an earlier catechism during his first stay in Geneva which was largely based on Martin Luther's Large Catechism. The first version was arranged pedagogically, describing Law, Faith, and Prayer. The 1542 version was rearranged for theological reasons, covering Faith first, then Law and Prayer. Historians debate the extent to which Geneva was a theocracy. On the one hand, Calvin's theology clearly called for separation between church and state. Other historians have stressed the enormous political power wielded on a daily basis by the clerics. During his ministry in Geneva, Calvin preached over two thousand sermons. Initially he preached twice on Sunday and three times during the week. This proved to be too heavy a burden and late in 1542 the council allowed him to preach only once on Sunday. In October 1549, he was again required to preach twice on Sundays and, in addition, every weekday of alternate weeks. His sermons lasted more than an hour and he did not use notes. An occasional secretary tried to record his sermons, but very little of his preaching was preserved before 1549. In that year, professional scribe Denis Raguenier, who had learned or developed a system of shorthand, was assigned to record all of Calvin's sermons. An analysis of his sermons by T. H. L. 
Parker suggests that Calvin was a consistent preacher and his style changed very little over the years. John Calvin was also known for his thorough manner of working his way through the Bible in consecutive sermons. From March 1555 to July 1556, Calvin delivered two hundred sermons on Deuteronomy. Voltaire wrote about Calvin, Luther and Zwingli, "If they condemned celibacy in the priests, and opened the gates of the convents, it was only to turn all society into a convent. Shows and entertainments were expressly forbidden by their religion; and for more than two hundred years there was not a single musical instrument allowed in the city of Geneva. They condemned auricular confession, but they enjoined a public one; and in Switzerland, Scotland, and Geneva it was performed the same as penance." Very little is known about Calvin's personal life in Geneva. His house and furniture were owned by the council. The house was big enough to accommodate his family as well as Antoine's family and some servants. On 28 July 1542, Idelette gave birth to a son, Jacques, but he was born prematurely and survived only briefly. Idelette fell ill in 1545 and died on 29 March 1549. Calvin never married again. He expressed his sorrow in a letter to Viret: I have been bereaved of the best friend of my life, of one who, if it has been so ordained, would willingly have shared not only my poverty but also my death. During her life she was the faithful helper of my ministry. From her I never experienced the slightest hindrance. Throughout the rest of his life in Geneva, he maintained several friendships from his early years including Montmor, Cordier, Cop, Farel, Melanchthon and Bullinger. Calvin encountered bitter opposition to his work in Geneva. Around 1546, the uncoordinated forces coalesced into an identifiable group whom he referred to as the libertines, but who preferred to be called either Spirituels or Patriots. According to Calvin, these were people who felt that after being liberated through grace, they were exempted from both ecclesiastical and civil law. The group consisted of wealthy, politically powerful, and interrelated families of Geneva. At the end of January 1546, Pierre Ameaux, a maker of playing cards who had already been in conflict with the Consistory, attacked Calvin by calling him a "Picard", an epithet denoting anti-French sentiment, and accused him of false doctrine. Ameaux was punished by the council and forced to make expiation by parading through the city and begging God for forgiveness. A few months later Ami Perrin, the man who had brought Calvin to Geneva, moved into open opposition. Perrin had married Françoise Favre, daughter of François Favre, a well-established Genevan merchant. Both Perrin's wife and father-in-law had previous conflicts with the Consistory. The court noted that many of Geneva's notables, including Perrin, had breached a law against dancing. Initially, Perrin ignored the court when he was summoned, but after receiving a letter from Calvin, he appeared before the Consistory. By 1547, opposition to Calvin and other French refugee ministers had grown to constitute the majority of the syndics, the civil magistrates of Geneva. On 27 June an unsigned threatening letter in Genevan dialect was found at the pulpit of St. Pierre Cathedral where Calvin preached. Suspecting a plot against both the church and the state, the council appointed a commission to investigate. 
Jacques Gruet, a Genevan member of Favre's group, was arrested and incriminating evidence was found when his house was searched. Under torture, he confessed to several crimes, including writing the letter left in the pulpit which threatened the church leaders. A civil court condemned Gruet to death and he was beheaded on 26 July. Calvin was not opposed to the civil court's decision. The libertines continued organizing opposition, insulting the appointed ministers, and challenging the authority of the Consistory. The council straddled both sides of the conflict, alternately admonishing and upholding Calvin. When Perrin was elected first syndic in February 1552, Calvin's authority appeared to be at its lowest point. After some losses before the council, Calvin believed he was defeated; on 24 July 1553 he asked the council to allow him to resign. Although the libertines controlled the council, his request was refused. The opposition realised that they could curb Calvin's authority, but they did not have enough power to banish him. The turning point in Calvin's fortunes occurred when Michael Servetus, a fugitive from ecclesiastical authorities, appeared in Geneva on 13 August 1553. Servetus was a Spanish physician and Protestant theologian who boldly criticised the doctrine of the Trinity and paedobaptism (infant baptism). In July 1530 he disputed with Johannes Oecolampadius in Basel and was eventually expelled. He went to Strasbourg, where he published a pamphlet against the Trinity. Bucer publicly refuted it and asked Servetus to leave. After returning to Basel, Servetus published "Two Books of Dialogues on the Trinity", which caused a sensation among Reformers and Catholics alike. The Inquisition in Spain ordered his arrest. Calvin and Servetus were first brought into contact in 1546 through a common acquaintance, Jean Frellon of Lyon. They exchanged letters debating doctrine; Calvin used the pseudonym "Charles d'Espeville", while Servetus left his letters unsigned. Eventually, Calvin lost patience and refused to respond; by this time Servetus had written around thirty letters to Calvin. Calvin was particularly outraged when Servetus sent him a copy of the "Institutes of the Christian Religion" heavily annotated with arguments pointing to errors in the book. When Servetus mentioned that he would come to Geneva, "Espeville" (Calvin) wrote a letter to Farel on 13 February 1546 noting that if Servetus were to come, he would not assure him safe conduct: "for if he came, as far as my authority goes, I would not let him leave alive." In 1553 Servetus published "Christianismi Restitutio" (English: The Restoration of Christianity), in which he rejected the Christian doctrine of the Trinity and the concept of predestination. In the same year, Calvin's representative, Guillaume de Trie, sent letters alerting the French Inquisition to Servetus, calling him a "Spanish-Portuguese" and accusing him of his recently proven Jewish converso origin. De Trie wrote that "his proper name is Michael Servetus, but he currently calls himself Villeneuve, practising medicine. He stayed for some time in Lyon, and now he is living in Vienne." When the inquisitor-general of France learned that Servetus was hiding in Vienne, according to Calvin under an assumed name, he contacted Cardinal François de Tournon, the secretary of the archbishop of Lyon, to take up the matter. Servetus was arrested and taken in for questioning. 
His letters to Calvin were presented as evidence of heresy, but he denied having written them, and later said he was not sure it was his handwriting. He said, after swearing before the holy gospel, that "he was Michel De Villeneuve Doctor in Medicine about 42 years old, native of Tudela of the kingdom of Navarre, a city under the obedience to the Emperor". The following day he said: "...although he was not Servetus he assumed the person of Servet for debating with Calvin". He managed to escape from prison, and the Catholic authorities sentenced him "in absentia" to death by slow burning. On his way to Italy, Servetus stopped in Geneva to visit "d'Espeville", where he was recognized and arrested. Calvin's secretary, Nicholas de la Fontaine, composed a list of accusations that was submitted before the court. The prosecutor was Philibert Berthelier, a member of a libertine family and son of a famous Geneva patriot, and the sessions were led by Pierre Tissot, Perrin's brother-in-law. The libertines allowed the trial to drag on in an attempt to harass Calvin. The difficulty in using Servetus as a weapon against Calvin was that the heretical reputation of Servetus was widespread and most of the cities in Europe were observing and awaiting the outcome of the trial. This posed a dilemma for the libertines, so on 21 August the council decided to write to other Swiss cities for their opinions, thus mitigating their own responsibility for the final decision. While waiting for the responses, the council also asked Servetus if he preferred to be judged in Vienne or in Geneva. He begged to stay in Geneva. On 20 October the replies from Zurich, Basel, Bern, and Schaffhausen were read and the council condemned Servetus as a heretic. The following day he was sentenced to burning at the stake, the same sentence as in Vienne. Some scholars claim that Calvin and other ministers asked that he be beheaded instead of burnt, knowing that burning at the stake was the only legal recourse. This plea was refused and on 27 October, Servetus was burnt alive at the Plateau of Champel at the edge of Geneva. After the death of Servetus, Calvin was acclaimed a defender of Christianity, but his ultimate triumph over the libertines was still two years away. He had always insisted that the Consistory retain the power of excommunication, despite the council's past decision to take it away. During Servetus's trial, Philibert Berthelier asked the council for permission to take communion, as he had been excommunicated the previous year for insulting a minister. Calvin protested that the council did not have the legal authority to overturn Berthelier's excommunication. Unsure of how the council would rule, he hinted in a sermon on 3 September 1553 that he might be dismissed by the authorities. The council decided to re-examine the "Ordonnances" and on 18 September it voted in support of Calvin—excommunication was within the jurisdiction of the Consistory. Berthelier applied for reinstatement to another Genevan administrative assembly, the "Deux Cents" (Two Hundred), in November. This body reversed the council's decision and stated that the final arbiter concerning excommunication should be the council. The ministers continued to protest, and as in the case of Servetus, the opinions of the Swiss churches were sought. The affair dragged on through 1554. 
Finally, on 22 January 1555, the council announced the decision of the Swiss churches: the original "Ordonnances" were to be kept and the Consistory was to regain its official powers. The libertines' downfall began with the February 1555 elections. By then, many of the French refugees had been granted citizenship and with their support, Calvin's partisans elected the majority of the syndics and the councillors. On 16 May the libertines took to the streets in a drunken protest and attempted to burn down a house that was supposedly full of Frenchmen. The syndic Henri Aulbert tried to intervene, carrying with him the baton of office that symbolised his power. Perrin seized the baton and waved it over the crowd, which gave the appearance that he was taking power and initiating a "coup d'état". The insurrection was soon over when another syndic appeared and ordered Perrin to go with him to the town hall. Perrin and other leaders were forced to flee the city. With the approval of Calvin, the other plotters who remained in the city were found and executed. The opposition to Calvin's church polity came to an end. Calvin's authority was practically uncontested during his final years, and he enjoyed an international reputation as a reformer distinct from Martin Luther. Initially, Luther and Calvin had mutual respect for each other. A doctrinal conflict had developed between Luther and Zurich reformer Huldrych Zwingli on the interpretation of the eucharist. Calvin's opinion on the issue forced Luther to place him in Zwingli's camp. Calvin actively participated in the polemics that were exchanged between the Lutheran and Reformed branches of the Reformation movement. At the same time, Calvin was dismayed by the lack of unity among the reformers. He took steps toward rapprochement with Bullinger by signing the "Consensus Tigurinus", a concordat between the Zurich and Geneva churches. He reached out to England when Archbishop of Canterbury Thomas Cranmer called for an ecumenical synod of all the evangelical churches. Calvin praised the idea, but ultimately Cranmer was unable to bring it to fruition. Calvin sheltered Marian exiles (those who fled the reign of Catholic Mary Tudor in England) in Geneva starting in 1555. Under the city's protection, they were able to form their own reformed church under John Knox and William Whittingham and eventually carried Calvin's ideas on doctrine and polity back to England and Scotland. Within Geneva, Calvin's main concern was the creation of a "collège", an institute for the education of children. A site for the school was selected on 25 March 1558 and it opened the following year on 5 June 1559. Although the school was a single institution, it was divided into two parts: a grammar school called the "collège" or "schola privata" and an advanced school called the "académie" or "schola publica". Calvin tried to recruit two professors for the institute, Mathurin Cordier, his old friend and Latin scholar who was now based in Lausanne, and Emmanuel Tremellius, the former Regius professor of Hebrew in Cambridge. Neither was available, but he succeeded in obtaining Theodore Beza as rector. Within five years there were 1,200 students in the grammar school and 300 in the advanced school. The "collège" eventually became the Collège Calvin, one of the college preparatory schools of Geneva; the "académie" became the University of Geneva. Calvin was deeply committed to reforming his homeland, France. 
The Protestant movement had been energetic, but lacked central organizational direction. With financial support from the church in Geneva, Calvin turned his enormous energies toward uplifting the French Protestant cause. In late 1558, Calvin became ill with a fever. Since he was afraid that he might die before completing the final revision of the "Institutes", he forced himself to work. The final edition was greatly expanded to the extent that Calvin referred to it as a new work. The expansion from the 21 chapters of the previous edition to 80 was due to the extended treatment of existing material rather than the addition of new topics. Shortly after he recovered, he strained his voice while preaching, which brought on a violent fit of coughing. He burst a blood-vessel in his lungs, and his health steadily declined. He preached his final sermon in St. Pierre on 6 February 1564. On 25 April, he made his will, in which he left small sums to his family and to the "collège". A few days later, the ministers of the church came to visit him, and he bade his final farewell, which was recorded in "Discours d'adieu aux ministres". He recounted his life in Geneva, sometimes recalling bitterly some of the hardships he had suffered. Calvin died on 27 May 1564 aged 54. At first his body lay in state, but since so many people came to see it, the reformers were afraid that they would be accused of fostering a new saint's cult. On the following day, he was buried in an unmarked grave in the Cimetière des Rois. The exact location of the grave is unknown; a stone was added in the 19th century to mark a grave traditionally thought to be Calvin's. Calvin developed his theology in his biblical commentaries as well as his sermons and treatises, but the most comprehensive expression of his views is found in his magnum opus, the "Institutes of the Christian Religion". He intended that the book be used as a summary of his views on Christian theology and that it be read in conjunction with his commentaries. The various editions of that work spanned nearly his entire career as a reformer, and the successive revisions of the book show that his theology changed very little from his youth to his death. The first edition from 1536 consisted of only six chapters. The second edition, published in 1539, was three times as long because he added chapters on subjects that appear in Melanchthon's "Loci Communes". In 1543, he again added new material and expanded a chapter on the Apostles' Creed. The final edition of the "Institutes" appeared in 1559. By then, the work consisted of four books of eighty chapters, and each book was named after statements from the creed: Book 1 on God the Creator, Book 2 on the Redeemer in Christ, Book 3 on receiving the Grace of Christ through the Holy Spirit, and Book 4 on the Society of Christ or the Church. The first statement in the "Institutes" acknowledges its central theme. It states that the sum of human wisdom consists of two parts: the knowledge of God and of ourselves. Calvin argues that the knowledge of God is not inherent in humanity nor can it be discovered by observing this world. The only way to obtain it is to study scripture. Calvin writes, "For anyone to arrive at God the Creator he needs Scripture as his Guide and Teacher." He does not try to prove the authority of scripture but rather describes it as "autopiston" or self-authenticating. 
He defends the trinitarian view of God and, in a strong polemical stand against the Catholic Church, argues that images of God lead to idolatry. At the end of the first book, he offers his views on providence, writing, "By his Power God cherishes and guards the World which he made and by his Providence rules its individual Parts." Humans are unable to fully comprehend why God performs any particular action, but whatever good or evil people may practise, their efforts always result in the execution of God's will and judgments. The second book includes several essays on original sin and the fall of man, which directly refer to Augustine, who developed these doctrines. He often cited the Church Fathers in order to defend the reformed cause against the charge that the reformers were creating new theology. In Calvin's view, sin began with the fall of Adam and propagated to all of humanity. The domination of sin is complete to the point that people are driven to evil. Thus fallen humanity is in need of the redemption that can be found in Christ. But before Calvin expounded on this doctrine, he described the special situation of the Jews who lived during the time of the Old Testament. God made a covenant with Abraham, promising the coming of Christ. Hence, the Old Covenant was not in opposition to Christ, but was rather a continuation of God's promise. Calvin then describes the New Covenant using the passage from the Apostles' Creed that describes Christ's suffering under Pontius Pilate and his return to judge the living and the dead. For Calvin, the whole course of Christ's obedience to the Father removed the discord between humanity and God. In the third book, Calvin describes how the spiritual union of Christ and humanity is achieved. He first defines faith as the firm and certain knowledge of God in Christ. The immediate effects of faith are repentance and the remission of sin. This is followed by spiritual regeneration, which returns the believer to the state of holiness before Adam's transgression. Complete perfection is unattainable in this life, and the believer should expect a continual struggle against sin. Several chapters are then devoted to the subject of justification by faith alone. He defined justification as "the acceptance by which God regards us as righteous whom he has received into grace." In this definition, it is clear that it is God who initiates and carries through the action and that people play no role; God is completely sovereign in salvation. Near the end of the book, Calvin describes and defends the doctrine of predestination, a doctrine advanced by Augustine in opposition to the teachings of Pelagius. Fellow theologians who followed the Augustinian tradition on this point included Thomas Aquinas and Martin Luther, though Calvin's formulation of the doctrine went further than the tradition that went before him. The principle, in Calvin's words, is that "All are not created on equal terms, but some are preordained to eternal life, others to eternal damnation; and, accordingly, as each has been created for one or other of these ends, we say that he has been predestinated to life or to death." The final book describes what he considers to be the true Church and its ministry, authority, and sacraments. He denied the papal claim to primacy and the accusation that the reformers were schismatic. For Calvin, the Church was defined as the body of believers who placed Christ at its head. By definition, there was only one "catholic" or "universal" Church. 
Hence, he argued that the reformers "had to leave them in order that we might come to Christ." The ministers of the Church are drawn from a passage in Ephesians, and they consisted of apostles, prophets, evangelists, pastors, and doctors. Calvin regarded the first three offices as temporary, limited in their existence to the time of the New Testament. The latter two offices were established in the church in Geneva. Although Calvin respected the work of the ecumenical councils, he considered them to be subject to God's Word found in scripture. He also believed that the civil and church authorities were separate and should not interfere with each other. Calvin defined a sacrament as an earthly sign associated with a promise from God. He accepted only two sacraments as valid under the new covenant: baptism and the Lord's Supper (in opposition to the Catholic acceptance of seven sacraments). He completely rejected the Catholic doctrine of transubstantiation and the treatment of the Supper as a sacrifice. He also could not accept the Lutheran doctrine of sacramental union in which Christ was "in, with and under" the elements. His own view was close to Zwingli's symbolic view, but it was not identical. Rather than holding a purely symbolic view, Calvin noted that with the participation of the Holy Spirit, faith was nourished and strengthened by the sacrament. In his words, the eucharistic rite was "a secret too sublime for my mind to understand or words to express. I experience it rather than understand it." Calvin's theology caused controversy. Pierre Caroli, a Protestant minister in Lausanne, accused Calvin as well as Viret and Farel of Arianism in 1536. Calvin defended his beliefs on the Trinity in "Confessio de Trinitate propter calumnias P. Caroli". In 1551 Jérôme-Hermès Bolsec, a physician in Geneva, attacked Calvin's doctrine of predestination and accused him of making God the author of sin. Bolsec was banished from the city, and after Calvin's death, he wrote a biography which severely maligned Calvin's character. In the following year, Joachim Westphal, a Gnesio-Lutheran pastor in Hamburg, condemned Calvin and Zwingli as heretics for denying the eucharistic doctrine of the union of Christ's body with the elements. Calvin's "Defensio sanae et orthodoxae doctrinae de sacramentis" (A Defence of the Sober and Orthodox Doctrine of the Sacrament) was his response in 1555. In 1556 Justus Velsius, a Dutch dissident, held a public disputation with Calvin during his visit to Frankfurt, in which Velsius defended free will against Calvin's doctrine of predestination. Following the execution of Servetus, a close associate of Calvin, Sebastian Castellio, broke with him on the issue of the treatment of heretics. In Castellio's "Treatise on Heretics" (1554), he argued for a focus on Christ's moral teachings in place of the vanity of theology, and he afterward developed a theory of tolerance based on biblical principles. Scholars have debated Calvin's view of the Jews and Judaism. Some have argued that Calvin was the least anti-semitic among all the major reformers of his time, especially in comparison to Martin Luther. Others have argued that Calvin was firmly within the anti-semitic camp. Scholars agree that it is important to distinguish between Calvin's views toward the biblical Jews and his attitude toward contemporary Jews. In his theology, Calvin does not differentiate between God's covenant with Israel and the New Covenant. 
He stated, "all the children of the promise, reborn of God, who have obeyed the commands by faith working through love, have belonged to the New Covenant since the world began." Nevertheless, he was a covenant theologian and argued that the Jews are a rejected people who must embrace Jesus to re-enter the covenant. Most of Calvin's statements on the Jewry of his era were polemical. For example, Calvin once wrote, "I have had much conversation with many Jews: I have never seen either a drop of piety or a grain of truth or ingenuousness – nay, I have never found common sense in any Jew." In this respect, he differed little from other Protestant and Catholic theologians of his day. Among his extant writings, Calvin only dealt explicitly with issues of contemporary Jews and Judaism in one treatise, "Response to Questions and Objections of a Certain Jew". In it, he argued that Jews misread their own scriptures because they miss the unity of the Old and New Testaments. The aim of Calvin's political theory was to safeguard the rights and freedoms of ordinary people. Although he was convinced that the Bible contained no blueprint for a certain form of government, Calvin favored a combination of democracy and aristocracy (mixed government). He appreciated the advantages of democracy. To further minimize the misuse of political power, Calvin proposed to divide it among several political institutions like the aristocracy, lower estates, or magistrates in a system of checks and balances (separation of powers). Finally, Calvin taught that if rulers rise up against God they lose their divine right and must be deposed. State and church are separate, though they have to cooperate to the benefit of the people. Christian magistrates have to make sure that the church can fulfill its duties in freedom. In extreme cases the magistrates have to expel or execute dangerous heretics. But nobody can be forced to become a Protestant. Calvin thought that agriculture and the traditional crafts were normal human activities. With regard to trade and the financial world he was more liberal than Luther, but both were strictly opposed to usury. Calvin allowed the charging of modest interest rates on loans. Like the other Reformers Calvin understood work as a means through which the believers expressed their gratitude to God for their redemption in Christ and as a service to their neighbors. Everybody was obliged to work; loafing and begging were rejected. The idea that economic success was a visible sign of God's grace played only a minor role in Calvin's thinking. It became more important in later, partly secularized forms of Calvinism and became the starting-point of Max Weber's theory about the rise of capitalism. Calvin's first published work was a commentary of Seneca the Younger's "De Clementia". Published at his own expense in 1532, it showed that he was a humanist in the tradition of Erasmus with a thorough understanding of classical scholarship. His first theological work, the "Psychopannychia", attempted to refute the doctrine of soul sleep as promulgated by the Anabaptists. Calvin probably wrote it during the period following Cop's speech, but it was not published until 1542 in Strasbourg. Calvin produced commentaries on most of the books of the Bible. His first commentary on Romans was published in 1540, and he planned to write commentaries on the entire New Testament. 
Six years passed before he wrote his second, a commentary on First Epistle to the Corinthians, but after that he devoted more attention to reaching his goal. Within four years he had published commentaries on all the Pauline epistles, and he also revised the commentary on Romans. He then turned his attention to the general epistles, dedicating them to Edward VI of England. By 1555 he had completed his work on the New Testament, finishing with the Acts and the Gospels (he omitted only the brief second and third Epistles of John and the Book of Revelation). For the Old Testament, he wrote commentaries on Isaiah, the books of the Pentateuch, the Psalms, and Joshua. The material for the commentaries often originated from lectures to students and ministers that he reworked for publication. From 1557 onwards, he could not find the time to continue this method, and he gave permission for his lectures to be published from stenographers' notes. These "Praelectiones" covered the minor prophets, Daniel, Jeremiah, Lamentations, and part of Ezekiel. Calvin also wrote many letters and treatises. Following the "Responsio ad Sadoletum", Calvin wrote an open letter at the request of Bucer to Charles V in 1543, "Supplex exhortatio ad Caesarem", defending the reformed faith. This was followed by an open letter to the pope ("Admonitio paterna Pauli III") in 1544, in which Calvin admonished Paul III for depriving the reformers of any prospect of rapprochement. The pope proceeded to open the Council of Trent, which resulted in decrees against the reformers. Calvin refuted the decrees by producing the "Acta synodi Tridentinae cum Antidoto" in 1547. When Charles tried to find a compromise solution with the Augsburg Interim, Bucer and Bullinger urged Calvin to respond. He wrote the treatise, "Vera Christianae pacificationis et Ecclesiae reformandae ratio" in 1549, in which he described the doctrines that should be upheld, including justification by faith. Calvin provided many of the foundational documents for reformed churches, including documents on the catechism, the liturgy, and church governance. He also produced several confessions of faith in order to unite the churches. In 1559, he drafted the French confession of faith, the Gallic Confession, and the synod in Paris accepted it with few changes. The Belgic Confession of 1561, a Dutch confession of faith, was partly based on the Gallic Confession. After the deaths of Calvin and his successor, Beza, the Geneva city council gradually gained control over areas of life that were previously in the ecclesiastical domain. Increasing secularisation was accompanied by the decline of the church. Even the Geneva "académie" was eclipsed by universities in Leiden and Heidelberg, which became the new strongholds of Calvin's ideas, first identified as "Calvinism" by Joachim Westphal in 1552. By 1585, Geneva, once the wellspring of the reform movement, had become merely its symbol. Calvin had always warned against describing him as an "idol" and Geneva as a new "Jerusalem". He encouraged people to adapt to the environments in which they found themselves. Even during his polemical exchange with Westphal, he advised a group of French-speaking refugees, who had settled in Wesel, Germany, to integrate with the local Lutheran churches. Despite his differences with the Lutherans, he did not deny that they were members of the true Church. 
Calvin's recognition of the need to adapt to local conditions became an important characteristic of the reformation movement as it spread across Europe. Due to Calvin's missionary work in France, his programme of reform eventually reached the French-speaking provinces of the Netherlands. Calvinism was adopted in the Electorate of the Palatinate under Frederick III, which led to the formulation of the Heidelberg Catechism in 1563. This and the Belgic Confession were adopted as confessional standards in the first synod of the Dutch Reformed Church in 1571. Several leading divines, either Calvinist or those sympathetic to Calvinism, settled in England (Martin Bucer, Peter Martyr, and Jan Laski) and Scotland (John Knox). During the English Civil War, the Calvinistic Puritans produced the Westminster Confession, which became the confessional standard for Presbyterians in the English-speaking world. As the Ottoman Empire did not force Muslim conversion on its conquered western territories, reformed ideas were quickly adopted in the two-thirds of Hungary they occupied (the Habsburg-ruled third part of Hungary remained Catholic). A Reformed Constitutional Synod was held in 1567 in Debrecen, the main hub of Hungarian Calvinism, where the Second Helvetic Confession was adopted as the official confession of Hungarian Calvinists. Having established itself in Europe, the movement continued to spread to other parts of the world including North America, South Africa, and Korea. Calvin did not live to see the foundation of his work grow into an international movement; but his death allowed his ideas to break out of their city of origin, to succeed far beyond their borders, and to establish their own distinct character. Calvin is recognized as a Renewer of the Church in Lutheran churches, and as a saint in the Church of England, commemorated on 26 May, and on 28 May by the Episcopal Church (USA). James K. Polk James Knox Polk (November 2, 1795 – June 15, 1849) was the 11th President of the United States (1845–1849). He previously was Speaker of the House of Representatives (1835–1839) and Governor of Tennessee (1839–1841). A protégé of Andrew Jackson, he was a member of the Democratic Party and an advocate of Jacksonian democracy. During Polk's presidency, the United States expanded significantly with the annexation of the Republic of Texas, the Oregon Territory, and the Mexican Cession following the American victory in the Mexican–American War. After building a successful law practice in Tennessee, Polk was elected to the state legislature (1823) and then to the United States House of Representatives in 1825, becoming a strong supporter of Jackson. After serving as chairman of the Ways and Means Committee, he became Speaker in 1835, the only president to have been Speaker. Polk left Congress to run for governor; he won in 1839, but lost in 1841 and 1843. He was a dark horse candidate for the Democratic nomination for president in 1844; he entered his party's convention as a potential nominee for vice president, but emerged as a compromise to head the ticket when no presidential candidate could secure the necessary two-thirds majority. In the general election, Polk defeated Henry Clay of the rival Whig Party. Polk is considered by many the most effective president of the pre–Civil War era, having met during his four-year term every major domestic and foreign policy goal he had set. 
After a negotiation fraught with risk of war, he reached a settlement with the United Kingdom over the disputed Oregon Country, the territory for the most part being divided along the 49th parallel. Polk achieved a sweeping victory in the Mexican–American War, which resulted in the cession by Mexico of nearly all the American Southwest. He secured a substantial reduction of tariff rates with the Walker tariff of 1846. The same year, he achieved his other major goal, re-establishment of the Independent Treasury system. True to his campaign pledge to serve only one term, Polk left office in 1849 and returned to Tennessee; he died in Nashville, most likely of cholera, three months after leaving the White House. Scholars have ranked Polk favorably for his ability to promote and achieve the major items on his presidential agenda, but he has been criticized for leading the country into war against Mexico and for exacerbating sectional divides. A slaveholder for most of his adult life, he owned a plantation in Mississippi and bought slaves while President. A major legacy of Polk's presidency is territorial expansion, as the United States reached the Pacific coast and became poised to be a world power. James Knox Polk was born on November 2, 1795 in a log cabin in Pineville, North Carolina. He was the first of 10 children born into a family of farmers. His mother Jane named him after her father, James Knox. His father Samuel Polk was a farmer, slaveholder, and surveyor of Scots-Irish descent. The Polks had immigrated to America in the late 1600s, settling initially on the Eastern Shore of Maryland but later moving to south-central Pennsylvania and then to the Carolina hill country. The Knox and Polk families were Presbyterian. While Polk's mother remained a devout Presbyterian, his father, whose own father Ezekiel Polk was a deist, rejected dogmatic Presbyterianism. He refused to declare his belief in Christianity at his son's baptism, and the minister refused to baptize young James. Nevertheless, James' mother "stamped her rigid orthodoxy on James, instilling lifelong Calvinistic traits of self-discipline, hard work, piety, individualism, and a belief in the imperfection of human nature," according to James A. Rawley's "American National Biography" article. In 1803, Ezekiel Polk led four of his adult children and their families to the Duck River area in what is now Maury County, Tennessee; Samuel Polk and his family followed in 1806. The Polk clan dominated politics in Maury County and in the new town of Columbia. Samuel became a county judge, and the guests at his home included Andrew Jackson, who had already served as a judge and in Congress. James learned from the political talk around the dinner table; both Samuel and Ezekiel were strong supporters of President Thomas Jefferson and opponents of the Federalist Party. Polk suffered from frail health as a child, a particular disadvantage in a frontier society. His father took him to see prominent Philadelphia physician Dr. Philip Syng Physick for urinary stones. The journey was broken off by James's severe pain, and Dr. Ephraim McDowell of Danville, Kentucky, operated to remove them. No anesthetic was available except brandy. The operation was successful, but it might have left James impotent or sterile, as he had no children. He recovered quickly, and became more robust. His father offered to bring him into one of his businesses, but he wanted an education and enrolled at a Presbyterian academy in 1813. 
He became a member of the Zion Church near his home in 1813, and enrolled in the Zion Church Academy. He then entered Bradley Academy in Murfreesboro, Tennessee, where he proved a promising student. In January 1816, Polk was admitted into the University of North Carolina at Chapel Hill as a second-semester sophomore. The Polk family had connections with the university, then a small school of about 80 students; Samuel was its land agent in Tennessee and his cousin William Polk was a trustee. Polk's roommate was William Dunn Moseley, who became the first Governor of Florida. Polk joined the Dialectic Society where he took part in debates, became its president, and learned the art of oratory. In one address, he warned that some American leaders were flirting with monarchical ideals, singling out Alexander Hamilton, a foe of Jefferson. Polk graduated with honors in May 1818. After graduation, Polk returned to Nashville, Tennessee to study law under renowned trial attorney Felix Grundy, who became his first mentor. On September 20, 1819, he was elected clerk of the Tennessee State Senate, which then sat in Murfreesboro and to which Grundy had been elected. He was re-elected clerk in 1821 without opposition, and continued to serve until 1822. In June 1820, he was admitted to the Tennessee bar, and his first case was to defend his father against a public fighting charge; he secured his release for a one-dollar fine. He opened an office in Maury County and was successful as a lawyer, due largely to the many cases arising from the Panic of 1819, a severe depression. His law practice subsidized his political career. By the time the legislature adjourned its session in September 1822, Polk was determined to be a candidate for the Tennessee House of Representatives. The election was in August 1823, almost a year away, allowing him ample time for campaigning. Already involved locally as a member of the Masons, he was commissioned in the Tennessee militia as a captain in the cavalry regiment of the 5th Brigade. He was later appointed a colonel on the staff of Governor William Carroll, and was afterwards often referred to as "Colonel". Although many of the voters were members of the Polk clan, the young politician campaigned energetically. People liked Polk's oratory, which earned him the nickname "Napoleon of the Stump." At the polls, where Polk provided alcoholic refreshments for his voters, he defeated incumbent William Yancey. Beginning in early 1822, Polk courted Sarah Childress—they were engaged the following year and married on January 1, 1824 in Murfreesboro. Educated far better than most women of her time, especially in frontier Tennessee, Sarah Polk was from one of the state's most prominent families. During James's political career Sarah assisted her husband with his speeches, gave him advice on policy matters, and played an active role in his campaigns. Rawley noted that Sarah Polk's grace, intelligence and charming conversation helped compensate for her husband's often austere manner. Polk's first mentor was Grundy, but in the legislature, Polk came increasingly to oppose him on such matters as land reform, and came to support the policies of Andrew Jackson, by then a military hero for his victory at the Battle of New Orleans (1815). Jackson was a family friend to both the Polks and the Childresses—there is evidence Sarah Polk and her siblings called him "Uncle Andrew"—and James Polk quickly came to support his presidential ambitions for 1824. 
When the Tennessee Legislature deadlocked on whom to elect as U.S. senator in 1823 (until 1913, legislators, not the people, elected senators), Jackson's name was placed in nomination. Polk broke from his usual allies, casting his vote as a member of the state House of Representatives for the general, who won the seat. This boosted Jackson's presidential chances by giving him recent political experience to match his military accomplishments. This began an alliance that would continue until Jackson's death early in Polk's presidency. Polk, through much of his political career, was known as "Young Hickory", based on the nickname for Jackson, "Old Hickory". Polk's political career was as dependent on Jackson as his nickname implied. In the 1824 U.S. presidential election, Jackson got the most electoral votes (he also led in the popular vote) but as he did not receive a majority in the Electoral College, the election was thrown into the U.S. House of Representatives, which chose Secretary of State John Quincy Adams, who had received the second-most of each. Polk, like other Jackson supporters, believed that Speaker of the House Henry Clay had traded his support as fourth-place finisher (the House may only choose from among the top three) to Adams in a Corrupt Bargain in exchange for being the new Secretary of State. Polk had in August 1824 declared his candidacy for the following year's election to the House of Representatives from Tennessee's 6th congressional district. The district stretched from Maury County south to the Alabama line, and extensive electioneering was expected of the five candidates. Polk campaigned so vigorously that Sarah began to worry about his health. During the campaign, Polk's opponents said that at the age of 29 Polk was too young for the responsibility of a seat in the House, but he won the election with 3,669 votes out of 10,440 and took his seat in Congress later that year. When Polk arrived in Washington, D.C. for Congress' regular session in December 1825, he roomed in Benjamin Burch's boarding house with other Tennessee representatives, including Sam Houston. Polk made his first major speech on March 13, 1826, in which he said that the Electoral College should be abolished and that the president should be elected by popular vote. Remaining bitter at the alleged Corrupt Bargain between Adams and Clay, Polk became a vocal critic of the administration, frequently voting against its policies. Sarah Polk remained at home in Columbia during her husband's first year in Congress, but accompanied him to Washington beginning in December 1826; she assisted him with his correspondence, and came to hear James's speeches. Polk won re-election in 1827 and continued to oppose the Adams administration. He remained in close touch with Jackson, and when Jackson ran for president in 1828, Polk was a corresponding advisor on his campaign. Following Jackson's victory over Adams, Polk became one of the new President's most prominent and loyal supporters in the House. Working on Jackson's behalf, Polk successfully opposed federally-funded "internal improvements" such as a proposed Buffalo-to-New Orleans road, and he was pleased by Jackson's Maysville Road veto in May 1830, when Jackson blocked a bill to finance a road extension entirely within one state, Kentucky, deeming it unconstitutional. 
Jackson opponents alleged that the veto message, which strongly complained about Congress' penchant for passing pork barrel projects, was written by Polk, but he denied this, stating that the message was entirely the President's. Polk served as Jackson's most prominent House ally in the "Bank War" that developed over Jackson's opposition to the re-authorization of the Second Bank of the United States. The Second Bank, headed by Nicholas Biddle of Philadelphia, not only held federal dollars, but controlled much of the credit in the United States, as it could present currency issued by local banks for redemption in gold or silver. Some Westerners, including Jackson, opposed the Second Bank, deeming it a monopoly acting in the interest of Easterners. Polk, as a member of the House Ways and Means Committee, conducted investigations of the Second Bank, and though the committee voted for a bill to renew the bank's charter (scheduled to expire in 1836), Polk issued a strong minority report condemning the bank. The bill passed Congress in 1832, but Jackson vetoed it and Congress failed to override the veto. Jackson's action was highly controversial in Washington, but had considerable public support, and he won easy re-election in 1832. Like many Southerners, Polk favored low tariffs on imported goods, and initially sympathized with John C. Calhoun's opposition to the Tariff of Abominations during the Nullification Crisis of 1832–1833, but came over to Jackson's side as Calhoun moved towards advocating secession. Thereafter, Polk remained loyal to Jackson as the President sought to assert federal authority. Polk condemned secession and supported the Force Bill against South Carolina, which had claimed the authority to nullify federal tariffs. The matter was settled by Congress passing a compromise tariff. In December 1833, after being elected to a fifth consecutive term, Polk, with Jackson's backing, became the chairman of Ways and Means, a powerful position in the House. In that position, Polk supported Jackson's withdrawal of federal funds from the Second Bank. Polk's committee issued a report questioning the Second Bank's finances, and another supporting Jackson's actions against it. In April 1834, the Ways and Means Committee reported a bill to regulate state deposit banks, which, when passed, enabled Jackson to deposit funds in pet banks, and Polk got legislation passed to allow the sale of the government's stock in the Second Bank. In June 1834, Speaker of the House Andrew Stevenson resigned from Congress to become Minister to the United Kingdom. With Jackson's support, Polk ran for Speaker against fellow Tennessean John Bell, Calhoun disciple Richard Henry Wilde, and Joel Barlow Sutherland of Pennsylvania. After ten ballots, Bell, who had the support of many opponents of the administration, defeated Polk. Jackson called in political debts to try to get Polk elected Speaker at the start of the next Congress in December 1835, assuring Polk in a letter he meant him to burn that New England would support him for Speaker. They were successful; Polk defeated Bell to take the Speakership. According to Thomas M. Leonard in his book on Polk, "by 1836, while serving as Speaker of the House of Representatives, Polk approached the zenith of his congressional career. He was at the center of Jacksonian Democracy on the House floor, and, with the help of his wife, he ingratiated himself into Washington's social circles." 
The prestige of the Speakership caused them to abandon life in a Washington boarding house for their own residence on Pennsylvania Avenue. In the 1836 presidential election, Vice President Martin Van Buren, Jackson's chosen successor, defeated multiple Whig candidates, including Tennessee Senator Hugh Lawson White. Greater Whig strength in Tennessee helped White carry his state, though Polk's home district went for Van Buren. Ninety percent of Tennessee voters had supported Jackson in 1832, but many in the state disliked the destruction of the Second Bank, or were unwilling to support Van Buren. As Speaker, Polk worked for the policies of Jackson and later Van Buren. Polk appointed committees with Democratic chairs and majorities, including the New York radical C. C. Cambreleng as the new Ways and Means chair, although he tried to maintain the Speaker's traditional nonpartisan appearance. The two major issues during Polk's speakership were slavery and, after the Panic of 1837, the economy. Polk firmly enforced the "gag rule", by which the House of Representatives would not accept or debate citizen petitions regarding slavery. This ignited fierce protests from John Quincy Adams, who was by then a congressman from Massachusetts and an abolitionist. Instead of finding a way to silence Adams, Polk frequently engaged in useless shouting matches, leading Jackson to conclude that the Speaker should have shown better leadership. Van Buren and Polk faced pressure to rescind the Specie Circular, Jackson's 1836 order that payment for government lands be in gold and silver. Some believed this had led to the crash by causing a lack of confidence in paper currency issued by banks. Despite such arguments, with support from Polk and his cabinet, Van Buren chose to back the Specie Circular. Polk and Van Buren attempted to establish an Independent Treasury system that would allow the government to oversee its own deposits (rather than using pet banks), but the bill was defeated in the House. It eventually passed in 1840. Using his thorough grasp of the House's rules, Polk attempted to bring greater order to its proceedings. Unlike many of his peers, he never challenged anyone to a duel no matter how much they insulted his honor. The economic downturn cost the Democrats seats, so that when he faced re-election as Speaker in December 1837, he won by only 13 votes, and he foresaw defeat in 1839. Polk by then had presidential ambitions, but was well aware that no Speaker had ever become president (Polk is still the only one to have held both offices). After seven terms in the House, two as Speaker, he announced that he would not seek re-election, choosing instead to run for Governor of Tennessee in the 1839 election. In 1835, the Democrats had lost the governorship of Tennessee for the first time in their history, and Polk decided to return home to help the party. Polk returned to a Tennessee afire for White and Whiggism; the state had changed greatly in its political loyalties since the days of Jacksonian domination. Polk undertook his first statewide campaign, against the Whig incumbent, Newton Cannon, who sought a third two-year term as governor. The fact that Polk was the one called upon to "redeem" Tennessee from the Whigs tacitly acknowledged him as head of the state Democratic Party. Polk campaigned on national issues, whereas Cannon stressed matters local to Tennessee. 
After being bested by Polk in the early debates, the governor retreated to Nashville, by then the state capital, alleging important official business. Polk made speeches across the state, seeking to become known more widely than in his native Middle Tennessee. When Cannon came back on the campaign trail in the final days, Polk pursued him, hastening the length of the state to be able to debate the governor again. On Election Day, August 1, 1839, Polk defeated Cannon, 54,102 to 51,396, as the Democrats recaptured the state legislature and won back three congressional seats in Tennessee. Tennessee's governor had limited power—there was no gubernatorial veto, and the small size of the state government limited any political patronage. But Polk saw the office as a springboard for his national ambitions, seeking to be nominated as Van Buren's vice presidential running mate at the 1840 Democratic National Convention in Baltimore in May. Polk hoped to be the replacement if Vice President Richard Mentor Johnson was dumped from the ticket; Johnson was disliked by many Southern whites for fathering two daughters by a biracial mistress, and attempting to introduce them into white society. Johnson was from Kentucky, so Polk's Tennessee residence would keep the New Yorker Van Buren's ticket balanced. The convention chose to endorse no one for vice president, stating that a choice would be made once the popular vote was cast. Three weeks after the convention, recognizing that Johnson was too popular in the party to be ousted, Polk withdrew his name. The Whig presidential candidate, General William Henry Harrison, conducted a rollicking campaign with the motto "Tippecanoe and Tyler Too", easily winning both the national vote and that in Tennessee. Polk campaigned in vain for Van Buren and was embarrassed by the outcome; Jackson, who had returned to his home, the Hermitage, near Nashville, was horrified at the prospect of a Whig administration. Harrison's death after a month in office in 1841 left the presidency to Vice President John Tyler, who soon broke with the Whigs. Polk's three major programs during his governorship (regulating state banks, implementing state internal improvements, and improving education) all failed to win the approval of the legislature. His only major success as governor was his politicking to secure the replacement of Tennessee's two Whig U.S. senators with Democrats. Polk's tenure was hindered by the continuing nationwide economic crisis that had followed the Panic of 1837 and which had caused Van Buren to lose the 1840 election. Encouraged by the success of Harrison's campaign, the Whigs ran a freshman legislator from frontier Wilson County, James C. Jones, against Polk in 1841. "Lean Jimmy" had proven one of their most effective gadflies against Polk, and his lighthearted tone at campaign debates was very effective against the serious Polk. The two debated the length of Tennessee, and Jones's support of distribution to the states of surplus federal revenues, and of a national bank, struck a chord with Tennessee voters. On election day in August 1841, Polk was defeated by 3,000 votes, the first time he had been beaten at the polls. Polk returned to Columbia and the practice of law, and prepared for a rematch against Jones in 1843, but though the new governor took less of a joking tone, it made little difference to the outcome, as Polk was beaten again, this time by 3,833 votes. In the wake of his second statewide defeat in three years, Polk faced an uncertain political future. 
Despite his loss, Polk was determined to become the next Vice President of the United States, seeing it as a path to the presidency. Van Buren was the frontrunner for the 1844 Democratic nomination, and Polk engaged in a careful campaign to become his running mate. The former president faced opposition from Southerners who feared his views on slavery, while his handling of the Panic of 1837—he had refused to rescind the Specie Circular—aroused opposition from some in the West (today's Midwest) who believed his hard money policies had hurt their section of the country. Many Southerners backed Calhoun's candidacy, Westerners rallied around Senator Lewis Cass of Michigan, and former Vice President Johnson also maintained a strong following among Democrats. Jackson assured Van Buren by letter that Polk in his campaigns for governor had "fought the battle well and fought it alone". Polk hoped to gain Van Buren's support, hinting in a letter that a Van Buren/Polk ticket could carry Tennessee, but found him unconvinced. The biggest political issue in the United States at that time was territorial expansion. The Republic of Texas had successfully revolted against Mexico in 1836. With the republic largely populated by American emigres, those on both sides of the Sabine River border between the U.S. and Texas deemed it inevitable that Texas would join the United States, but this would anger Mexico, which considered Texas a breakaway province, and threatened war if the United States annexed it. Jackson, as president, had recognized Texas independence, but the initial momentum toward annexation had stalled. Britain was seeking to expand her influence in Texas: Britain had abolished slavery, and if Texas did the same, it would provide a western haven for runaways to match one in the North. A Texas not in the United States would also stand in the way of what was deemed America's Manifest Destiny to overspread the continent. Clay was nominated for president by acclamation at the April 1844 Whig National Convention, with New Jersey's Theodore Frelinghuysen his running mate. A Kentucky slaveholder at a time when opponents of Texas annexation argued that it would give slavery more room to spread, Clay sought a nuanced position on the issue. Jackson, who strongly supported a Van Buren/Polk ticket, was delighted when Clay issued a letter for publication in the newspapers opposing Texas annexation, only to be devastated when he learned Van Buren had done the same thing. Van Buren did this because he feared losing his base of support in the Northeast, but his supporters in the old Southwest were stunned at his action. Polk, on the other hand, had written a pro-annexation letter that had been published four days before Van Buren's. Jackson wrote sadly to Van Buren that no candidate who opposed annexation could be elected, and decided Polk was the best person to head the ticket. Jackson met with Polk at the Hermitage on May 13, 1844 and explained to his visitor that only an expansionist from the South or Southwest could be elected—and, in his view, Polk had the best chance. Polk was at first startled, calling the plan "utterly abortive", but he agreed to accept it. Polk immediately wrote to instruct his lieutenants at the convention to work for his nomination as president. Despite Jackson's quiet efforts on his behalf, Polk was skeptical that he could win. 
Nevertheless, because of the opposition to Van Buren by expansionists in the West and South, Polk's key lieutenant at the 1844 Democratic National Convention in Baltimore, Gideon Johnson Pillow, believed Polk could emerge as a compromise candidate. Publicly, Polk, who remained in Columbia during the convention, professed full support for Van Buren's candidacy, and was believed to be seeking the vice presidency. Polk was one of the few major Democrats to have declared for the annexation of Texas. The convention opened on May 27, 1844. A crucial question was whether the nominee needed two-thirds of the delegate vote, as had been the case at previous Democratic conventions, or merely a majority. A vote for two-thirds would doom Van Buren's candidacy due to the opposition to him. With the support of the Southern states, the two-thirds rule was passed. Van Buren won a majority on the first presidential ballot, but failed to win the necessary two-thirds, and his support slowly faded on subsequent ballots. Cass, Johnson, Calhoun and James Buchanan had also received votes on the first ballot, and Cass took the lead on the fifth ballot. After seven ballots, the convention remained deadlocked: Cass could not attract the support necessary to reach two-thirds, and Van Buren's supporters were more and more discouraged about the former president's chances. Delegates were ready to consider a new candidate who might break the stalemate. When the convention adjourned after the seventh ballot, Pillow, who had been waiting for an opportunity to press Polk's name, conferred with George Bancroft of Massachusetts, a politician and historian who was a longtime Polk correspondent, and who had planned to nominate Polk for vice president. Bancroft had supported Van Buren's candidacy, and was willing to see New York Senator Silas Wright head the ticket, but Wright would not consider taking a nomination that Van Buren wanted. Pillow and Bancroft decided if Polk were nominated for president, Wright might accept the second spot. Before the eighth ballot, former Attorney General Benjamin F. Butler, head of the New York delegation, read a pre-written letter from Van Buren to be used if he could not be nominated, withdrawing in Wright's favor. But Wright (who was in Washington) had also entrusted a pre-written letter to a supporter, in which he refused to be considered as a presidential candidate, and stated in the letter that he agreed with Van Buren's position on Texas. Had Wright's letter not been read he most likely would have been nominated, but without him, Butler began to rally Van Buren supporters for Polk as the best possible candidate, and Bancroft placed Polk's name before the convention. On the eighth ballot, Polk received only 44 votes to Cass's 114 and Van Buren's 104, but the deadlock showed signs of breaking. Butler formally withdrew Van Buren's name, many delegations declared for the Tennessean, and on the ninth ballot Polk received 233 ballots to Cass's 29, making him the Democratic nominee for president. The nomination was then made unanimous. This left the question of the vice presidential candidate. Butler urged Wright's nomination, and the convention agreed to this, with only eight Georgia delegates dissenting. As the convention waited, word of Wright's nomination was sent to him in Washington via telegraph. Having by proxy declined an almost certain presidential nomination, Wright would not accept the second place. Senator Robert J. 
Walker of Mississippi, a close Polk ally, suggested former senator George M. Dallas of Pennsylvania. Dallas was acceptable enough to all factions, and gained the vice presidential nomination on the second ballot. The delegates passed a platform, and adjourned on May 30. Although many contemporary politicians, including Pillow and Bancroft, claimed credit in the years to come for getting Polk the nomination, Walter R. Borneman felt that most credit was due to Jackson and Polk, "the two who had done the most were back in Tennessee, one an aging icon ensconced at the Hermitage and the other a shrewd lifelong politician waiting expectantly in Columbia". Whigs mocked Polk with the chant "Who is James K. Polk?", affecting never to have heard of him. Though he had experience as Speaker of the House and Governor of Tennessee, all previous presidents had served as Vice President, Secretary of State, or as a high-ranking general. Polk has been described as the first "dark horse" presidential nominee, although his nomination was less of a surprise than that of future nominees such as Franklin Pierce or Warren G. Harding. Despite his party's gibes, Clay recognized that Polk could unite the Democrats. Rumors of Polk's nomination reached Nashville on June 4, much to Jackson's delight; they were substantiated later that day. The dispatches were sent on to Columbia, arriving the same day, and letters and newspapers describing what had happened at Baltimore were in Polk's hands by June 6. He accepted his nomination by letter dated June 12, alleging that he had never sought the office, and stating his intent to serve only one term. Wright was embittered by what he called the "foul plot" against Van Buren, and demanded assurances that Polk had played no part; it was only after Polk professed that he had remained loyal to Van Buren that Wright supported his campaign. Following the custom of the time that presidential candidates avoid electioneering or appearing to seek the office, Polk remained in Columbia and made no speeches. He engaged in an extensive correspondence with Democratic Party officials as he managed his campaign. Polk made his views known in his acceptance letter and through responses to questions sent by citizens that were printed in newspapers, often by arrangement. A potential pitfall for Polk's campaign was the issue of whether the tariff should be for revenue only, or with the intent to protect American industry. Polk finessed the tariff issue in a published letter. Recalling that he had long stated that tariffs should only be sufficient to finance government operations, he maintained that stance, but wrote that within that limitation, government could and should offer "fair and just protection" to American interests, including manufacturers. He refused to expand on this stance, acceptable to most Democrats, despite the Whigs pointing out that he had committed himself to nothing. In September, a delegation of Whigs from nearby Giles County came to Columbia, armed with specific questions on Polk's views regarding the current tariff, the Whig-passed Tariff of 1842, and with the stated intent of remaining in Columbia until they got answers. Polk took several days to respond, and chose to stand by his earlier statement, provoking an outcry in the Whig papers. Another concern was the third-party candidacy of President Tyler, which might split the Democratic vote. Tyler had been nominated by a group of loyal officeholders. 
Under no illusion that he could win, Tyler believed he could rally states' rights supporters and populists to hold the balance of power in the election. Only Jackson had the stature to resolve the situation, which he did with two letters to friends in the Cabinet that he knew would be shown to Tyler, stating that the President's supporters would be welcomed back into the Democratic fold. Jackson wrote that once Tyler withdrew, many Democrats would embrace him for his pro-annexation stance. The former president also used his influence to stop Francis Blair and his "Globe" newspaper, the semi-official organ of the Democratic Party, from attacking Tyler. These proved enough; Tyler withdrew from the race in August. Party troubles were a third concern. Polk and Calhoun made peace when a former South Carolina congressman, Francis Pickens, visited Tennessee and came to Columbia for two days and to the Hermitage for sessions with the increasingly ill Jackson. Calhoun wanted the "Globe" dissolved, and wanted assurances that Polk would act against the 1842 tariff and promote Texas annexation. Reassured on these points, Calhoun became a strong supporter. Polk was aided regarding Texas when Clay, realizing his anti-annexation letter had cost him support, attempted in two subsequent letters to clarify his position. These angered both sides, which attacked Clay as insincere. Texas also threatened to divide the Democrats sectionally, but Polk managed to appease most Southern party leaders without antagonizing Northern ones. As the election drew closer, it became clear that most of the country favored the annexation of Texas, and some Southern Whig leaders supported Polk's campaign due to Clay's anti-annexation stance. The campaign was vitriolic; both major party candidates were accused of various acts of malfeasance; Polk was accused of being both a duelist and a coward. The most damaging smear was the Roorback forgery; in late August an item appeared in an abolitionist newspaper, part of a book detailing fictional travels through the South of a Baron von Roorback, an imaginary German nobleman. The Ithaca "Chronicle" printed it without labeling it as fiction, and inserted a sentence alleging that the traveler had seen forty slaves who had been sold by Polk after being branded with his initials. The item was withdrawn by the "Chronicle" when challenged by the Democrats, but it was widely reprinted. Borneman suggested that the forgery backfired on Polk's opponents, as it served to remind voters that Clay too was a slaveholder; John Eisenhower, in his journal article on the election, stated that the smear came too late to be effectively rebutted, and likely cost Polk Ohio. Southern newspapers, on the other hand, went far in defending Polk, one Nashville newspaper alleging that his slaves preferred their bondage to freedom. Polk himself implied to newspaper correspondents that the only slaves he owned had either been inherited or had been purchased from relatives in financial distress; this paternalistic image was also painted by surrogates like Gideon Pillow. This was not true, though not known at the time; by then he had bought over thirty slaves, both from relatives and others, mainly for the purpose of procuring labor for his Mississippi cotton plantation. There was no uniform election day in 1844; states voted between November 1 and 12. Polk won the election with 49.5% of the popular vote and 170 of the 275 electoral votes. 
Polk became the first president elected despite losing his state of residence (Tennessee); he also lost his birth state, North Carolina. However, he won Pennsylvania and New York, where Clay lost votes to the antislavery Liberty Party candidate James G. Birney, who got more votes in New York than Polk's margin of victory. Had Clay won New York, he would have been elected president. Polk presided over a country whose population had doubled every twenty years since the American Revolution and which had reached demographic parity with Great Britain. Polk's tenure saw continued technological improvements, including the continued expansion of railroads and increased use of the telegraph. These improved communications and growing demographics increasingly made the United States into a strong military power, while also stoking expansionism. Polk set four clearly defined goals for his administration: the re-establishment of the Independent Treasury System, the reduction of tariffs, the acquisition of some or all of the Oregon Country, and the acquisition of California and New Mexico from Mexico. While his domestic aims represented continuity with past Democratic policies, successful completion of Polk's foreign policy goals would represent the first major American territorial gains since the Adams–Onís Treaty of 1819. After being informed of his victory on November 15, 1844, Polk turned his attention to forming a geographically balanced Cabinet. He consulted Jackson and one or two other close allies, and decided that the large states of New York, Pennsylvania and Virginia should have representation in the six-member Cabinet, as should his home state of Tennessee. At a time when an incoming president might retain some or all of his predecessor's department heads, Polk wanted an entirely fresh Cabinet, but this proved delicate. Tyler's final Secretary of State was Calhoun, leader of a considerable faction of the Democratic Party, but, when approached by emissaries, he did not take offense and was willing to step down. Polk did not want his Cabinet to contain presidential hopefuls, though he chose to nominate James Buchanan of Pennsylvania, whose ambition for the presidency was well-known, as Secretary of State. Tennessee's Cave Johnson, a close friend and ally of Polk, was nominated for the position of Postmaster General, while George Bancroft, the historian who had played a crucial role in Polk's nomination, was named Navy Secretary. Polk's choices met with the approval of Andrew Jackson, whom Polk met with in January 1845 for the last time, as Jackson died that June. Tyler's last Navy Secretary, John Y. Mason of Virginia, Polk's friend since college days and a longtime political ally, was not on the original list. As Cabinet choices were affected by factional politics and President Tyler's drive to resolve the Texas issue before leaving office, Polk at the last minute chose him as Attorney General. Polk also chose Mississippi Senator Walker as Secretary of the Treasury and New York's William Marcy as Secretary of War. All gained Senate confirmation after Polk took office. The members worked well together, and few replacements were necessary. One reshuffle was required in 1846 when Bancroft, who wanted a diplomatic posting, became U.S. minister to Britain. As Polk put together his Cabinet, President Tyler sought to complete the annexation of Texas. While the Senate had defeated an earlier treaty that would annex the republic, Tyler urged Congress to pass a joint resolution, relying on its constitutional power to admit states. There were disagreements about the terms under which Texas would be admitted, and Polk became involved in negotiations to break the impasse. 
With Polk's help, the annexation resolution narrowly cleared the Senate. Tyler was unsure whether to sign the resolution or leave it for Polk, and sent Calhoun to consult with the President-elect, who declined to give any advice. On his final evening in office, March 3, 1845, Tyler offered annexation to Texas according to the terms of the resolution. Even before his inauguration, Polk wrote to Cave Johnson, "I intend to be President of the U.S." He would gain a reputation as a hard worker, spending ten to twelve hours at his desk, and rarely leaving Washington. Polk wrote, "No President who performs his duty faithfully and conscientiously can have any leisure. I prefer to supervise the whole operations of the government myself rather than intrust the public business to subordinates, and this makes my duties very great." When he took office on March 4, 1845, Polk, at 49, became the youngest president to that point. Polk's inauguration was the first inaugural ceremony to be reported by telegraph, and first to be shown in a newspaper illustration (in "The Illustrated London News"). In his inaugural address, delivered in a steady rain, Polk made clear his support for annexation by referring to the 28 states, thus including Texas. He proclaimed his fidelity to Jackson's principles by quoting his famous toast, "Every lover of his country must shudder at the thought of the possibility of its dissolution and will be ready to adopt the patriotic sentiment, 'Our Federal Union—it must be preserved.'" He stated his opposition to a national bank, and repeated that the tariff could include incidental protection. Although he did not mention slavery specifically, he alluded to it, decrying those who would tear down an institution protected by the Constitution. Polk devoted the second half of his speech to foreign affairs, and specifically to expansion. He applauded the annexation of Texas, warning that Texas was no affair of any other nation, and certainly none of Mexico's. He spoke of the Oregon Country, and of the many who were migrating, pledging to safeguard America's rights there, and to protect the settlers. As well as appointing Cabinet officers to advise him, Polk made his sister's son, J. Knox Walker, his personal secretary, an especially important position because, other than his slaves, Polk had no staff at the White House. Walker, who lived at the White House with his growing family (two children were born to him while living there), performed his duties competently through his uncle's presidency. Other Polk relatives visited at the White House, some for extended periods. Britain derived its claim to the Oregon Country from the voyages of Captains James Cook and George Vancouver, the Americans from the explorations of the Lewis and Clark expedition and from the discovery of the Columbia River by the American sea captain, Robert Gray. By treaty, Russia had waived any claim south of the southern border of Alaska, which it possessed until 1867, and Spain, which until the Mexican Revolution owned the Pacific Coast to the 42nd parallel, ceded any claims it might have north of that to the United States under the Adams–Onís Treaty of 1819. Rather than war over the distant and low-population territory, the United States and Britain had negotiated. Since the signing of the Treaty of 1818, the Oregon Country had been under the joint occupation and control of the United Kingdom and the United States. Previous U.S. 
administrations had offered to divide the region along the 49th parallel, which was not acceptable to Britain, as it had commercial interests along the Columbia River. Britain's preferred partition was unacceptable to Polk, as it would have awarded Puget Sound and all lands north of the Columbia River to Britain, and Britain was unwilling to accept the 49th parallel extended to the Pacific, as it meant the entire opening to Puget Sound would be in American hands, isolating its settlements along the Fraser River. Edward Everett, President Tyler's ambassador to Great Britain, had informally proposed dividing the territory at the 49th parallel with the strategic Vancouver Island granted to the British, thus allowing an opening to the Pacific, but when the new British minister in Washington, Richard Pakenham arrived in 1844 prepared to follow up, he found that many Americans desired the entire territory. Oregon had not been a major issue in the 1844 election, but the heavy influx of settlers, mostly American, to the Oregon Country in 1845, and the rising spirit of expansionism in the United States as Texas and Oregon seized the public's eye, made a treaty with Britain more urgent. Many Democrats believed that the United States should span from coast to coast, a philosophy described as Manifest Destiny. Though both sides sought an acceptable compromise, each also saw the territory as an important geopolitical asset that would play a large part in determining the dominant power in North America. In his inaugural address, Polk announced that he viewed the American claim to the land as "clear and unquestionable", provoking threats of war from British leaders should Polk attempt to take control of the entire territory. Polk had refrained in his address from asserting a claim to the entire territory, which extended north to 54 degrees, 40 minutes north latitude, although the Democratic Party platform called for such a claim. Despite Polk's hawkish rhetoric, he viewed war with the British as unwise, and Polk and Buchanan opened up negotiations with the British. Like his predecessors, Polk again proposed a division along the 49th parallel, which was immediately rejected by Pakenham. Secretary of State Buchanan was wary of a two-front war with Mexico and Britain, but Polk was willing to risk war with both countries in pursuit of a favorable settlement. In his annual message to Congress in December 1845, Polk requested approval of giving Britain a one-year notice (as required in the Treaty of 1818) of his intention to terminate the joint occupancy of Oregon. In that message, he quoted from the Monroe Doctrine to denote America's intention of keeping European powers out, the first significant use of it since its origin in 1823. After much debate, Congress eventually passed the resolution in April 1846, attaching its hope that the dispute would be settled amicably. When the British Foreign Secretary, Lord Aberdeen, learned of the proposal rejected by Pakenham, Aberdeen asked the United States to re-open negotiations, but Polk was unwilling unless a proposal was made by the British. With Britain moving towards free trade with the repeal of the Corn Laws, good trade relations with the United States were more important to Aberdeen than a distant territory. In February 1846, Polk allowed Buchanan to inform Louis McLane, the American ambassador to Britain, that Polk's administration would look favorably on a British proposal based around a division at the 49th parallel. 
In June 1846, Pakenham presented an offer to the Polk administration, calling for a boundary line at the 49th parallel, with the exception that Britain would retain all of Vancouver Island, and there would be limited navigation rights for British subjects on the Columbia River until the expiration of the charter of the Hudson's Bay Company in 1859. Polk and most of his Cabinet were prepared to accept the proposal, but Buchanan, in a reversal, urged that the United States seek control of all of the Oregon Territory. Polk deemed Buchanan's about-face linked to his presidential ambitions. After winning the reluctant approval of Buchanan, and choosing to have the Senate weigh in (favorably) on the draft treaty, Polk submitted the full treaty to the Senate for ratification. The Senate ratified the Oregon Treaty in a 41–14 vote, with opposition from diehards who sought the full territory. Polk's willingness to risk war with Britain had frightened many, but his tough negotiation tactics may have gained the United States concessions from the British (particularly regarding the Columbia River) that a more conciliatory president might not have won. The annexation resolution signed by Tyler gave the president the choice of asking Texas to approve annexation, or reopening negotiations; Tyler immediately sent a messenger to the U.S. representative in Texas, Andrew Jackson Donelson, choosing the former option. Thus, Polk's first major decision in office was whether to recall Tyler's courier to Texas. Though it was within Polk's power to recall the messenger, he chose to allow him to continue, with the hope that Texas would accept the offer. He also sent Congressman Archibald Yell of Arkansas as his personal emissary, taking his private assurance that the United States would defend Texas, and would fix its southern border at the Rio Grande, as claimed by Texas, rather than at the Nueces River, as claimed by Mexico. Polk retained Donelson in his post, and the diplomat sought to convince Texas' leaders to accept annexation under the terms proposed by the Tyler administration. Though public sentiment in Texas favored annexation, some leaders, including President Anson Jones, hoped negotiation would bring better terms. Britain had offered to work a deal whereby Texas would gain Mexican recognition in exchange for a pledge never to annex itself to another country, but after consideration, the influential former president, Sam Houston, rejected it, as did the Texas Congress. In July 1845, a convention ratified annexation, and thereafter voters approved it. In December 1845, Polk signed a resolution annexing Texas, and it became the 28th state. Mexico had broken diplomatic relations with the United States on passage of the joint resolution in March 1845; annexation increased tensions with that nation, which had never recognized Texan independence. Following the Texan ratification of annexation in 1845, both Mexicans and Americans saw conflict as a likely possibility. Polk began preparations for a potential war with Mexico over Texas, sending an army led by Brigadier General Zachary Taylor into Texas. Taylor and Commodore David Conner of the U.S. Navy, commanding American ships off the Mexican coast, were both ordered to avoid provoking a war, while preparing for conflict, and to respond to any Mexican aggression. Although Polk had the military prepare for war, he did not believe it would come to that; he thought Mexico would give in under duress. Polk hoped that a show of force by the U.S. 
military under Taylor and Conner could avert war and lead to negotiations with the Mexican government. In late 1845, Polk sent diplomat John Slidell to Mexico to purchase New Mexico and California for $20–40 million, and to secure Mexico's agreement to a Rio Grande border. Slidell arrived in Mexico City in December 1845. Mexican President José Joaquín de Herrera was unwilling to receive him because of the hostility of the public towards the United States. Slidell's ambassadorial credentials were refused by a Mexican council of government, and Herrera soon thereafter was deposed by a military coup led by General Mariano Paredes, a hard-liner who pledged to take back Texas from the United States. Dispatches from Slidell and from the U.S. consul in Mexico City, John Black, made clear their views that U.S. aims for territorial expansion could not be accomplished without war. Taylor's instructions were to repel any incursion by Mexico north of the Rio Grande, but initially, his army did not advance farther than Corpus Christi, at the mouth of the Nueces. On January 13, 1846, Polk ordered Taylor to proceed to the Rio Grande, though it took him time to prepare for the march. Polk was convinced that sending Taylor to the Nueces Strip would prompt war; even if it did not, he was prepared to have Congress declare it. As he waited, Polk considered supporting a potential coup led by the exiled Mexican General Antonio López de Santa Anna, with the hope that Santa Anna would sell parts of California. Slidell returned to Washington in May 1846 and gave his opinion that negotiations with the Mexican government were unlikely to be successful. Polk regarded the treatment of his diplomat as an insult and an "ample cause of war", and he prepared to ask Congress for a declaration of war. Meanwhile, in late March, General Taylor had reached the Rio Grande, and his army camped across the river from Matamoros, Tamaulipas. In April, after Mexican general Pedro de Ampudia demanded that Taylor return to the Nueces River, Taylor began a blockade of Matamoros. A skirmish on the northern side of the Rio Grande on April 25 ended in the death or capture of dozens of American soldiers, and became known as the Thornton Affair. Word did not reach Washington until May 9, and Polk immediately convened the Cabinet and obtained their approval of his plan to send a war message to Congress on the ground that Mexico had, as Polk put it in his message, "shed American blood on the American soil". Polk's message was crafted to present the war as a just and necessary defense of the country against a neighbor that had long troubled the United States. The House overwhelmingly approved a resolution declaring war and authorizing the president to accept 50,000 volunteers into the military. Some of those voting in favor were unconvinced that the U.S. had just cause to go to war, but feared to be deemed unpatriotic. In the Senate, war opponents led by Calhoun also questioned Polk's version of events. Nonetheless, the House resolution passed the Senate in a 40–2 vote, with Calhoun abstaining, marking the beginning of the Mexican–American War. After the initial skirmishes, Taylor and much of his army marched away from the river to secure the supply line, leaving a makeshift base, Fort Texas. On the way back to the Rio Grande, Mexican forces under General Mariano Arista attempted to block Taylor's way as other troops laid siege to Fort Texas, forcing the American general to attack if he hoped to relieve the fort. 
In the Battle of Palo Alto, the first major engagement of the war, Taylor's troops forced Arista's from the field, suffering only four dead to hundreds for the Mexicans. The next day, Taylor led the army to victory in the Battle of Resaca de la Palma, putting the Mexican Army to rout. The early successes boosted support for the war, which despite the lopsided votes in Congress had deeply divided the nation. Many Northern Whigs opposed the war, as did others; they felt Polk had used patriotism to manipulate the nation into fighting a war the goal of which was to give slavery room to expand. Polk distrusted the two senior officers, Major General Winfield Scott and Taylor, as both were Whigs, and would have replaced them with Democrats, but felt Congress would not approve it. He offered Scott the position of top commander in the war, which the general accepted. Polk and Scott already knew and disliked each other: the President made the appointment despite the fact that Scott had sought his party's presidential nomination in 1840. Polk came to believe that Scott was too slow in getting himself and his army away from Washington and to the Rio Grande, and was outraged to learn Scott was using his influence in Congress to defeat the administration's plan to expand the number of generals. The news of Taylor's victory at Resaca de la Palma arrived then, and Polk decided to have Taylor take command in the field, and Scott to remain in Washington. Polk also ordered Commodore Conner to allow Santa Anna to return to Mexico from his exile in Havana, and sent an army expedition led by Stephen W. Kearny towards Santa Fe. In 1845, Polk, fearful of French or British intervention, had sent Lieutenant Archibald H. Gillespie to California with orders to foment a pro-American rebellion that could be used to justify annexation of the territory. After meeting with Gillespie, Army captain John C. Frémont led settlers in northern California to overthrow the Mexican garrison in Sonoma in what became known as the Bear Flag Revolt. In August 1846, American forces under Kearny captured Santa Fe, capital of the province of New Mexico, without firing a shot. Almost simultaneously, Commodore Robert F. Stockton landed in Los Angeles and proclaimed the capture of California. After American forces put down a revolt, the United States held effective control of New Mexico and California. Nevertheless, the Western theater of the war would prove to be a political headache for Polk, as a dispute between Frémont and Kearny led to a break between Polk and the powerful Missouri senator (and father-in-law of Frémont), Thomas Hart Benton. The initial public euphoria over the victories at the start of the war slowly dissipated. In August 1846, Polk asked Congress to appropriate $2 million as a down payment for the potential purchase of Mexican lands. Polk's request ignited opposition, as he had never before made public his desire to annex parts of Mexico (aside from lands claimed by Texas). It was unclear whether such newly acquired lands would be slave or free, and there was angry sectional debate. A freshman Democratic Congressman, David Wilmot of Pennsylvania, previously a firm supporter of Polk's administration, offered an amendment to the bill—the "Wilmot Proviso"—that would ban slavery in any land acquired using the money. The appropriation bill, with the Wilmot Proviso attached, passed the House, but died in the Senate. This discord cost Polk's party, as Democrats lost control of the House in the 1846 elections. 
In early 1847, though, Polk was successful in passing a bill raising further regiments, and he also finally won approval for the appropriation. In July 1846, American envoy Alexander Slidell Mackenzie had met with Santa Anna, offering terms by which the US would pay to acquire San Francisco Bay and other parts of Alta California. Santa Anna seemed receptive, but after returning to Mexico, taking control of the government, he stated that he would fight against the Americans, and placed himself at the head of the army. This caused Polk to harden his position on Mexico, and he ordered an American landing at Veracruz, the most important Mexican port on the Gulf of Mexico. From there, troops were to march to Mexico City, which it was hoped would end the war. Continuing to advance in northeast Mexico, Taylor defeated a Mexican army led by Ampudia in the September 1846 Battle of Monterrey, but allowed Ampudia's forces to withdraw from the town, much to Polk's consternation. Polk believed Taylor had not aggressively pursued the enemy, and reluctantly offered command of the Veracruz expedition to Scott. The lack of trust Polk had in Taylor was returned by the Whig general, who feared the partisan president was trying to destroy him. Accordingly, Taylor disobeyed orders to remain near Monterrey. In March 1847, Polk learned that Taylor had continued to march south, capturing the northern Mexican town of Saltillo. Continuing beyond Saltillo, Taylor's army decimated a larger Mexican force, led by Santa Anna, in the Battle of Buena Vista. Mexican casualties were five times that of the Americans, and the victory made Taylor even more of a military hero in the public's eyes, though Polk preferred to credit the bravery of the soldiers rather than the Whig general. In March 1847, Scott landed in Veracruz, and quickly won control of the city. With the capture of Veracruz, Polk dispatched Nicholas Trist, Buchanan's chief clerk, to accompany Scott's army and negotiate a peace treaty with Mexican leaders. Trist was instructed to seek the cession of Alta California, New Mexico, and Baja California, recognition of the Rio Grande as the southern border of Texas, and American access across the Isthmus of Tehuantepec. Trist was authorized to make a payment of up to $30 million in exchange for these concessions. In August 1847, as he advanced towards Mexico City, Scott defeated Santa Anna at the Battle of Contreras and the Battle of Churubusco. With the Americans at the gates of Mexico City, Trist negotiated with commissioners, but the Mexicans were willing to give up little. Scott prepared to take Mexico City, which he did in mid-September. In the United States, a heated political debate emerged regarding how much of Mexico the United States should seek to annex, Whigs such as Henry Clay arguing that the United States should only seek to settle the Texas border question, and some expansionists arguing for the annexation of all of Mexico. War opponents were also active; Whig Congressman Abraham Lincoln of Illinois introduced the "exact spot" resolutions, calling on Polk to state exactly where American blood had been shed on American soil to start the war, but the House refused to consider them. Frustrated by a lack of progress in negotiations, Polk ordered Trist to return to Washington, but the diplomat, when the notice of recall arrived in mid-November 1847, decided to remain, writing a lengthy letter to Polk the following month to justify his decision. 
Polk considered having Butler, designated as Scott's replacement, forcibly remove Trist from Mexico City. Though outraged by Trist's decision, Polk decided to allow him some time to negotiate a treaty. Throughout January 1848, Trist regularly met with officials in Mexico City, though at the request of the Mexicans, the treaty signing took place in Guadalupe Hidalgo, a small town near Mexico City. Trist was willing to allow Mexico to keep Baja California, as his instructions allowed, but successfully haggled for the inclusion of the important harbor of San Diego in a cession of Alta California. Provisions included the Rio Grande border and a $15 million payment to Mexico. On February 2, 1848, Trist and the Mexican delegation signed the Treaty of Guadalupe Hidalgo. Polk received the document on February 19, and, after the Cabinet met on the 20th, decided he had no choice but to accept it. If he turned it down, with the House by then controlled by the Whigs, there was no assurance Congress would vote funding to continue the war. Both Buchanan and Walker dissented, wanting more land from Mexico, a position with which the President was sympathetic, though he considered Buchanan's view motivated by his ambition. Some senators opposed the treaty because they wanted to take no Mexican territory; others hesitated because of the irregular nature of Trist's negotiations. Polk waited in suspense for two weeks as the Senate considered it, sometimes hearing that it would likely be defeated, and that Buchanan and Walker were working against it. He was relieved when the two Cabinet officers lobbied on behalf of the treaty. On March 10, the Senate ratified the treaty in a 38–14 vote that cut across partisan and geographic lines. The Senate made some modifications to the treaty before ratification, and Polk worried that the Mexican government would reject them. On June 7, Polk learned that Mexico had ratified the treaty. Polk declared the treaty in effect as of July 4, 1848, thus ending the war. With the acquisition of California, Polk had accomplished all four of his major presidential goals. With the exception of the territory acquired by the 1853 Gadsden Purchase, the territorial acquisitions under Polk established the modern borders of the contiguous United States. Polk had been anxious to establish a territorial government for Oregon once the treaty was effective in 1846, but the matter became embroiled in the arguments over slavery, though few thought Oregon suitable for that institution. A bill to establish an Oregon territorial government passed the House after being amended to bar slavery; the bill died in the Senate when opponents ran out the clock on the congressional session. A resurrected bill, still barring slavery, again passed the House in January 1847, but it was not considered by the Senate before Congress adjourned in March. By the time Congress met again in December, California and New Mexico were in U.S. hands, and Polk in his annual message urged the establishment of territorial governments in all three. The Missouri Compromise had settled the issue of the geographic reach of slavery within the Louisiana Purchase territories by prohibiting slavery in territory north of 36°30′ latitude, and Polk sought to extend this line into the newly acquired lands. If extended to the Pacific, this would have made slavery illegal in San Francisco, but allowed it in Monterey and Los Angeles. A plan to accomplish the extension was defeated in the House by a bipartisan alliance of Northerners. 
As the last congressional session before the 1848 election came to a close, Polk signed the lone territorial bill passed by Congress, which established the Territory of Oregon and prohibited slavery in it. When Congress reconvened in December 1848, Polk asked it in his annual message to establish territorial governments in California and New Mexico, a task made especially urgent by the onset of the California Gold Rush. The divisive issue of slavery blocked any such legislation, though congressional action continued until the final hours of Polk's term. When the bill was amended to have the laws of Mexico apply to the southwest territories until Congress changed them (thus effectively banning slavery), Polk made it clear that he would veto it, considering it the Wilmot Proviso in another guise. It was not until the Compromise of 1850 that the matter of the territories was resolved. Polk's ambassador to the Republic of New Granada, Benjamin Alden Bidlack, negotiated the Mallarino–Bidlack Treaty. Though Bidlack had initially only sought to remove tariffs on American goods, he and New Granadan Foreign Minister Manuel María Mallarino negotiated a broader agreement that deepened military and trade ties between the two countries. The treaty also allowed for the construction of the Panama Railway. In an era of slow overland travel, the treaty gave the United States a route for a quicker journey between its eastern and western coasts. In exchange, Bidlack agreed to have the United States guarantee New Granada's sovereignty over the Isthmus of Panama. The treaty won ratification in both countries in 1848. The agreement helped to establish a stronger American influence in the region, as the Polk administration sought to ensure that Great Britain would not dominate Central America. The United States would use the rights granted under the Mallarino–Bidlack Treaty as a justification for its military interventions in Latin America through the remainder of the 19th century. In mid-1848, President Polk authorized his ambassador to Spain, Romulus Mitchell Saunders, to negotiate the purchase of Cuba and offer Spain up to $100 million, a large sum at the time for one territory. Cuba was close to the United States and had slavery, so the idea appealed to Southerners but was unwelcome in the North. However, Spain was still making profits in Cuba (notably in sugar, molasses, rum and tobacco), and thus the Spanish government rejected Saunders's overtures. Though Polk was eager to acquire Cuba, he refused to support the filibuster expedition of Narciso López, who sought to invade and take over the island as a prelude to annexation. In his inaugural address, Polk called upon Congress to re-establish the Independent Treasury System, under which government funds were held in the Treasury and not in banks or other financial institutions. President Van Buren had previously established a similar system, but it had been abolished during the Tyler administration. Polk made clear his opposition to a national bank in his inaugural address, and in his first annual message to Congress in December 1845, he called for the government to keep its funds itself. Congress was slow to act; the House passed a bill in April 1846 and the Senate in August, both without a single Whig vote. Polk signed the Independent Treasury Act into law on August 6, 1846. 
The act provided that the public revenues were to be retained in the Treasury building and in sub-treasuries in various cities, separate from private or state banks. The system would remain in place until the passage of the Federal Reserve Act in 1913. Polk's other major domestic initiative was the lowering of the tariff. Polk directed Secretary of the Treasury Robert Walker to draft a new and lower tariff, which Polk submitted to Congress. After intense lobbying by both sides, the bill passed the House and, in a close vote that required Vice President Dallas to break a tie, the Senate in July 1846. Dallas, although from protectionist Pennsylvania, voted for the bill, having decided his best political prospects lay in supporting the administration. Polk signed the Walker Tariff into law, substantially reducing the rates that had been set by the Tariff of 1842. The reduction of tariffs in the United States and the repeal of the Corn Laws in Great Britain led to a boom in Anglo-American trade. Congress passed the Rivers and Harbors Bill in 1846 to provide $500,000 to improve port facilities, but Polk vetoed it. Polk believed that the bill was unconstitutional because it unfairly favored particular areas, including ports that had no foreign trade. Polk considered internal improvements to be matters for the states, and feared that passing the bill would encourage legislators to compete for favors for their home district—a type of corruption that he felt would spell doom to the virtue of the republic. In this regard he followed his hero Jackson, who had vetoed the Maysville Road Bill in 1830 on similar grounds. Opposed by conviction to Federal funding for internal improvements, Polk stood strongly against all such bills. Congress, in 1847, passed another internal improvements bill; he pocket vetoed it and sent Congress a full veto message when it met in December. Similar bills continued to advance in Congress in 1848, though none reached his desk. When he came to the Capitol to sign bills on March 3, 1849, the last day of the congressional session and his final full day in office, he feared that an internal improvements bill would pass Congress, and he brought with him a draft veto message. The bill did not pass, so it was not needed, but feeling the draft had been ably written, he had it preserved among his papers. Authoritative word of the discovery of gold in California did not arrive in Washington until after the 1848 election, by which time Polk was a lame duck. Polk's political adversaries had claimed California was too far away to be useful, and was not worth the price paid to Mexico. The President was delighted by the news, seeing it as validation of his stance on expansion, and referred to the discovery several times in his final annual message to Congress that December. Shortly thereafter, actual samples of the California gold arrived, and Polk sent a special message to Congress on the subject. The message, confirming less authoritative reports, caused large numbers of people to move to California, both from the U.S. and abroad, thus helping to spark the California Gold Rush. One of Polk's last acts as President was to sign the bill creating the Department of the Interior (March 3, 1849). This was the first new cabinet position created since the early days of the Republic. Polk had misgivings about the federal government usurping power over public lands from the states. 
Nevertheless, the delivery of the legislation on his last full day in office gave him no time to find constitutional grounds for a veto, or to draft a sufficient veto message, so he signed the bill. Polk appointed two justices to the U.S. Supreme Court, Levi Woodbury and Robert Cooper Grier. The 1844 death of Justice Henry Baldwin left a vacant place on the Supreme Court, but Tyler had been unable to get the Senate to confirm a nominee. At the time, it was the custom to have geographic balance on the Supreme Court, and Baldwin had been from Pennsylvania. Polk's efforts to fill Baldwin's seat became embroiled in Pennsylvania politics and the efforts of factional leaders to secure the lucrative post of Collector of Customs for the Port of Philadelphia. As Polk attempted to find his way through the minefield of Pennsylvania politics, a second position on the high court became vacant with the death, in September 1845, of Justice Joseph Story; his replacement was expected to come from his native New England. Because Story's death had occurred while the Senate was not in session, Polk was able to make a recess appointment, choosing Senator Levi Woodbury of New Hampshire, and when the Senate reconvened in December 1845, Woodbury was confirmed. Polk's initial nominee for Baldwin's seat, George W. Woodward, was rejected by the Senate in January 1846, in large part due to the opposition of Buchanan and Pennsylvania Senator Simon Cameron. Despite Polk's anger at Buchanan, he eventually offered the Secretary of State the seat, but Buchanan, after some indecision, turned it down. Polk subsequently nominated Robert Cooper Grier of Pittsburgh, who won confirmation. Justice Woodbury died in 1851, but Grier served until 1870 and in the slavery case of "Dred Scott v. Sandford" (1857) wrote an opinion stating that slaves were property and could not sue. Polk appointed eight other federal judges, one to the United States Circuit Court of the District of Columbia, and seven to various United States district courts. Honoring his pledge to serve only one term, Polk declined to seek re-election. At the 1848 Democratic National Convention, Cass led on all ballots, though it was not until the fourth that he attained a two-thirds vote. William Butler, who had replaced Winfield Scott as the commanding general in Mexico City, won the vice presidential nomination. The 1848 Whig National Convention nominated Taylor for president and former congressman Millard Fillmore of New York for vice president. New York Democrats remained bitter because of what they deemed shabby treatment of Van Buren in 1844, and the former president had drifted from the party in the years since. Many of Van Buren's faction of the party, the Barnburners, were younger men who strongly opposed the spread of slavery, a position with which, by 1848, Van Buren agreed. Senator Cass was a strong expansionist, and slavery might find new fields under him; accordingly the Barnburners bolted the Democratic National Convention upon his nomination, and, in June, joined by anti-slavery Democrats from other states, they held a convention, nominating Van Buren for president. Polk was surprised and disappointed by his former ally's political conversion, and worried about the divisiveness of a sectional party organized around abolitionism. Polk did not give speeches for Cass, remaining at his desk at the White House. He did remove some Van Buren supporters from federal office during the campaign. In the election, Taylor won 47.3% of the popular vote and a majority of the electoral vote. 
Cass won 42.5% of the vote, while Van Buren finished with 10.1% of the popular vote, much of his support coming from northern Democrats. Polk was disappointed by the outcome as he had a low opinion of Taylor, seeing the general as someone with poor judgment and few opinions on important public matters. Nevertheless, Polk observed tradition and welcomed President-elect Taylor to Washington, hosting him at a gala White House dinner. Polk departed the White House on March 3, leaving behind him a clean desk, though he worked from his hotel or the Capitol on last-minute appointments and bill signings. He attended Taylor's inauguration on March 5 (March 4, the presidential inauguration day until 1937, fell on a Sunday, and thus the ceremony was postponed a day), and though he was unimpressed with the new President, wished him the best. Polk's time in the White House took its toll on his health. Full of enthusiasm and vigor when he entered office, Polk left the presidency exhausted by his years of public service. He left Washington on March 6 for a pre-arranged triumphal tour of the South, to end in Nashville. Polk had two years previously arranged to buy a house there, afterwards dubbed Polk Place, that had once belonged to his old mentor, Felix Grundy. James and Sarah Polk progressed down the Atlantic coast, and then westward through the Deep South. He was enthusiastically received and banqueted. By the time the Polks reached Alabama, he was suffering from a bad cold, and soon became concerned by reports of cholera—a passenger on Polk's riverboat died of it, and it was rumored to be common in New Orleans, but it was too late to change plans. Worried about his health, he would have departed the city quickly, but was overwhelmed by Louisiana hospitality. Several passengers on the riverboat up the Mississippi died of the disease, and Polk felt so ill that he went ashore for four days, staying in a hotel. A doctor assured him he did not have cholera, and Polk made the final leg, arriving in Nashville on April 2 to a huge reception. After a visit to James's mother in Columbia, the Polks settled into Polk Place. The exhausted former president seemed to gain new life, but in early June, he fell ill again, by most accounts of cholera. Attended by several doctors, he lingered for several days, and chose to be baptized into the Methodist Church, which he had long admired, though his mother arrived from Columbia with her Episcopalian clergyman, and his wife was also Episcopalian. By traditional accounts, his last words before he died on June 15 were "I love you, Sarah, for all eternity, I love you"—Borneman noted that whether or not they were spoken, there was nothing in Polk's life which would make the sentiment false. Polk's funeral was held at the McKendree Methodist Church in Nashville. Initially he was buried in what is now Nashville City Cemetery, due to a legal requirement related to his infectious disease death. He was moved to a tomb on the grounds of Polk Place (as specified in his will) less than a year later. Sarah Polk lived at Polk Place for 42 years after his death and died on August 14, 1891. In 1893, the bodies of James and Sarah Polk were relocated to their current resting place on the grounds of the Tennessee State Capitol in Nashville. Polk Place was demolished in 1900. 
In March 2017, the Tennessee Senate approved a resolution considered a "first step" toward relocating the Polks' remains to the family home in Columbia; in addition to the support of state lawmakers, the move requires approval by the courts and the Tennessee Historical Commission. Polk was a slaveholder for most of his adult life. His father, Samuel Polk, in 1827 left Polk more than 8,000 acres (32 km²) of land, and divided about 53 slaves among his widow and children in his will. James inherited twenty of his father's slaves, either directly or from deceased brothers. In 1831, he became an absentee cotton planter, sending slaves to clear plantation land that his father had left him near Somerville, Tennessee. Four years later Polk sold his Somerville plantation and, together with his brother-in-law, bought 920 acres (3.7 km²) of land, a cotton plantation near Coffeeville, Mississippi, hoping to increase his income. The land in Mississippi was richer than that in Somerville, and Polk transferred his Tennessee slaves there, taking care to conceal from them that they were to be sent south. From the start of 1839, Polk, having bought out his brother-in-law, owned all of the Mississippi plantation, and ran it on a mostly absentee basis for the rest of his life. He occasionally visited—for example, he spent much of April 1844 on his Mississippi plantation, right before the Democratic convention. Adding to the inherited slaves, in 1831, Polk purchased five more, mostly buying them in Kentucky, and expending $1,870; the youngest had a recorded age of 11. As older children sold for a higher price, slave sellers routinely lied about age. Between 1834 and 1835, he bought five more, aged from 2 to 37, the youngest a granddaughter of the oldest. The amount expended was $2,250. In 1839, he bought eight slaves from his brother William at a cost of $5,600. This represented three young adults and most of a family, though not including the father, whom James Polk had previously owned, and who had been sold to a slave trader as a chronic runaway. The expenses of four campaigns (three for governor, one for the presidency) in six years kept Polk from making more slave purchases until after he was living in the White House. In an era when the presidential salary was expected to cover wages for the White House servants, Polk replaced them with slaves from his home in Tennessee. Polk did not purchase slaves with his presidential salary, likely for political reasons. Instead, he reinvested earnings from his plantation in the purchase of slaves, enjoining secrecy on his agent: "that as my "private business" does not concern the public, you will keep it to yourself". Polk saw the plantation as his route to a comfortable existence for himself and his wife after his presidency; he did not intend to return to the practice of law. Hoping the increased labor force would increase his retirement income, he purchased seven slaves, aged roughly between 12 and 17, through an agent in 1846. The 17-year-old and one of the 12-year-olds were purchased together at an estate sale; within weeks the agent resold the younger boy at a profit to Polk. The year 1847 saw the purchase of nine more. Three he purchased from Gideon Pillow, and his agent purchased six slaves, aged between 10 and 20. By the time of the purchase from Pillow, the Mexican War had begun and Polk sent payment with the letter in which he offered Pillow a commission in the Army. 
The purchase from Pillow consisted of a slave Polk had previously owned and had sold for being a disruption, together with that man's wife and child. None of the other slaves Polk purchased as President, all younger than 20, came with a parent, and as two slaves were bought together in only one case, most likely none had an accompanying sibling as each faced life on Polk's plantation. Discipline for those owned by Polk varied over time. At the Tennessee plantation, he employed an overseer named Herbert Biles, who was said to be relatively indulgent. Biles's illness in 1833 resulted in Polk replacing him with Ephraim Beanland, who tightened discipline and increased work. Polk backed his overseer, returning runaways who complained of beatings and other harsh treatment, "even though every report suggested that the overseer was a heartless brute". Beanland was hired for the Mississippi plantation, but was soon dismissed by Polk's partner, who deemed Beanland too harsh as the slaves undertook the arduous task of clearing the timber from the new plantation so it could be used for cotton farming. His replacement was discharged after a year for being too indulgent; the next died of dysentery in 1839. Others followed, and it was not until 1845 that Polk found a satisfactory overseer, John Mairs, who remained the rest of Polk's life and was still working at the plantation for Sarah Polk in 1860, when the widow sold a half-share in many of her slaves. There had been a constant stream of runaways under Mairs' predecessors, many seeking protection at the plantation of Polk relatives or friends; only one ran away between the time of Mairs' hiring and the end of 1847, but the overseer had to report three absconded slaves (including the one who had fled earlier) to Polk in 1848 and 1849. Polk's will, dated February 28, 1849, a few days before the end of his presidency, contained the nonbinding expectation that his slaves were to be freed when both he and Sarah Polk were dead. The Mississippi plantation was expected to be the support of Sarah Polk during her widowhood. Sarah Polk lived until 1891, but the slaves were freed in 1865 by the Thirteenth Amendment, which abolished slavery in the United States. By selling a half-interest in the slaves in 1860, Sarah Polk had given up the sole power to free them, and it is unlikely that her new partner, having paid $28,500 for a half-interest in the plantation and its slaves, would have allowed the laborers to go free had she died while slavery was legal. Like Jackson, Polk saw the politics of slavery as a side issue compared to more important matters such as territorial expansion and economic policy. The issue of slavery became increasingly polarizing during the 1840s, and Polk's expansionary policies increased its divisiveness. During his presidency, many abolitionists harshly criticized him as an instrument of the "Slave Power", and claimed that spreading slavery was the reason he supported annexing Texas and later war with Mexico. Polk did support the expansion of slavery's realm, with his views informed by his own family's experience of settling Tennessee, bringing slaves with them. He believed in Southern rights, meaning both the right of slave states not to have that institution interfered with by the Federal government, and the right of individual Southerners to bring their slaves with them into the new territory. 
Though Polk opposed the Wilmot Proviso, he also condemned southern agitation on the issue, and he accused both northern and southern leaders of attempting to use the slavery issue for political gain. On March 4, 2017, new tombstones for three of his slaves, Elias Polk, Mary Polk and Matilda Polk, were placed in the Nashville City Cemetery. Elias and Mary Polk both survived slavery, dying in the 1880s; Matilda Polk died still in slavery in 1849, at the age of about 110. After his death, Polk's historic reputation was initially formed by the attacks made on him in his own time. Whig politicians claimed that he was drawn from a well-deserved obscurity. Sam Houston is said to have observed that Polk, a teetotaler, was "a victim of the use of water as a beverage". Little was published about him apart from two biographies released in the wake of his death. Polk was not again the subject of a major biography until 1922, when Eugene I. McCormac published "James K. Polk: A Political Biography". McCormac relied heavily on Polk's presidential diary, first published in 1909. When historians began ranking the presidents in 1948, Polk ranked 10th in Arthur M. Schlesinger Sr.'s poll, and has subsequently ranked 8th in Schlesinger's 1962 poll, 11th in the Ridings-McIver Poll (1996), and 14th in the 2017 survey by C-SPAN. Borneman deemed Polk the most effective president prior to the Civil War, and noted that Polk expanded the power of the presidency, especially its power as commander in chief and its oversight of the Executive Branch. Steven G. Calabresi and Christopher S. Yoo, in their history of presidential power, praised Polk's conduct of the Mexican War: "it seems unquestionable that his management of state affairs during this conflict was one of the strongest examples since Jackson of the use of presidential power to direct specifically the conduct of subordinate officers." Harry S. Truman called Polk "a great president. Said what he intended to do and did it." Bergeron noted that the matters that Polk settled, he settled for his time. The questions of the banking system and of the tariff, which Polk had made two of the main issues of his presidency, were not significantly revised until the 1860s. Similarly, the Gadsden Purchase, and that of Alaska (1867), were the only major U.S. expansions until the 1890s. Paul H. Bergeron wrote in his study of Polk's presidency: "Virtually everyone remembers Polk and his expansionist successes. He produced a new map of the United States, which fulfilled a continent-wide vision." "To look at that map," Robert W. Merry concluded, "and to take in the western and southwestern expanse included in it, is to see the magnitude of Polk's presidential accomplishments." Amy Greenberg, in her history of the Mexican War, found Polk's legacy to be more than territorial: "during a single brilliant term, he accomplished a feat that earlier presidents would have considered impossible. With the help of his wife, Sarah, he masterminded, provoked and successfully prosecuted a war that turned the United States into a world power." Borneman noted that in securing this expansion, Polk did not consider the likely effect on Mexicans and Native Americans: "That ignorance may well be debated on moral grounds, but it cannot take away Polk's stunning political achievement." James A. 
Rawley wrote in his "American National Biography" piece on Polk, "he added extensive territory to the United States, including Upper California and its valuable ports, and bequeathed a legacy of a nation poised on the Pacific rim prepared to emerge as a superpower in future generations". Historians have criticized Polk for not perceiving that his territorial gains set the table for civil war. Pletcher stated that Polk, like others of his time, failed "to understand that sectionalism and expansion had formed a new, explosive compound". Fred I. Greenstein, in his journal article on Polk, noted that Polk "lacked a far-seeing awareness of the problems that were bound to arise over the status of slavery in the territory acquired from Mexico". William Dusinberre, in his volume on Polk as slave owner, suggested "that Polk's deep personal involvement in the plantation slavery system ... colored his stance on slavery-related issues". Greenberg noted that Polk's war served as the training ground for that later conflict.
J. K. Rowling Joanne Rowling (pronounced "rolling"; born 31 July 1965), writing under the pen names J. K. Rowling and Robert Galbraith, is a British novelist, philanthropist, film and television producer and screenwriter best known for writing the "Harry Potter" fantasy series. The books have won multiple awards, and sold more than 500 million copies, becoming the best-selling book series in history. They have also been the basis for a film series, over which Rowling had overall approval on the scripts and was a producer on the final films in the series. Born in Yate, Gloucestershire, England, Rowling was working as a researcher and bilingual secretary for Amnesty International when she conceived the idea for the "Harry Potter" series while on a delayed train from Manchester to London in 1990. The seven-year period that followed saw the death of her mother, birth of her first child, divorce from her first husband and relative poverty until the first novel in the series, "Harry Potter and the Philosopher's Stone", was published in 1997. There were six sequels, of which the last, "Harry Potter and the Deathly Hallows", was released in 2007. Since then, Rowling has written four books for adult readers: "The Casual Vacancy" (2012) and—under the pseudonym Robert Galbraith—the crime fiction novels "The Cuckoo's Calling" (2013), "The Silkworm" (2014) and "Career of Evil" (2015). Rowling has lived a "rags to riches" life story, in which she progressed from living on state benefits to being the world's first billionaire author. She lost her billionaire status after giving away much of her earnings to charity, but remains one of the wealthiest people in the world. She is the United Kingdom's bestselling living author, with sales in excess of £238M. The 2016 "Sunday Times Rich List" estimated Rowling's fortune at £600 million, ranking her as the joint 197th richest person in the UK. "Time" named her a runner-up for its 2007 Person of the Year, noting the social, moral, and political inspiration she has given her fans. In October 2010, Rowling was named the "Most Influential Woman in Britain" by leading magazine editors. She has supported charities, including Comic Relief, One Parent Families and the Multiple Sclerosis Society of Great Britain, and launched her own charity, Lumos. Although she writes under the pen name J. K. Rowling, her name, before her remarriage, was Joanne Rowling. 
Anticipating that the target audience of young boys might not want to read a book written by a woman, her publishers asked that she use two initials rather than her full name. As she had no middle name, she chose "K" (for Kathleen) as the second initial of her pen name, from her paternal grandmother. She calls herself Jo. Following her remarriage, she has sometimes used the name Joanne Murray when conducting personal business. During the Leveson Inquiry she gave evidence under the name of Joanne Kathleen Rowling and her entry in "Who's Who" lists her name also as Joanne Kathleen Rowling. Rowling was born to Peter James Rowling, a Rolls-Royce aircraft engineer, and Anne Rowling (née Volant), a science technician, on 31 July 1965 in Yate, Gloucestershire, England, northeast of Bristol. Her parents first met on a train departing from King's Cross Station bound for Arbroath in 1964. They married on 14 March 1965. One of her maternal great-grandfathers, Dugald Campbell, was Scottish, born in Lamlash on the Isle of Arran. Her mother's paternal grandfather, Louis Volant, was French, and was awarded the Croix de Guerre for exceptional bravery in defending the village of Courcelles-le-Comte during the First World War. Rowling originally believed he had won the Légion d'honneur during the war, as she said when she received it herself in 2009. She later discovered the truth when featured in an episode of the UK genealogy series "Who Do You Think You Are?", in which she found out it was a different Louis Volant who won the Legion of Honour. When she heard his story of bravery and discovered the "croix de guerre" was for "ordinary" soldiers like her grandfather, who had been a waiter, she stated the "croix de guerre" was "better" to her than the Legion of Honour. Rowling's sister Dianne was born at their home when Rowling was 23 months old. The family moved to the nearby village Winterbourne when Rowling was four. As a child, Rowling often wrote fantasy stories which she frequently read to her sister. Aged nine, Rowling moved to Church Cottage in the Gloucestershire village of Tutshill, close to Chepstow, Wales. When she was a young teenager, her great-aunt gave her a copy of Jessica Mitford's autobiography, "Hons and Rebels." Mitford became Rowling's heroine, and Rowling read all of her books. Rowling has said that her teenage years were unhappy. Her home life was complicated by her mother's diagnosis with multiple sclerosis and a strained relationship with her father, with whom she is not on speaking terms. Rowling later said that she based the character of Hermione Granger on herself when she was eleven. Sean Harris, her best friend in the Upper Sixth, owned a turquoise Ford Anglia which she says inspired a flying version that appeared in "Harry Potter and the Chamber of Secrets". Like many teenagers, she became interested in pop music, listening to the Clash, the Smiths and Siouxsie Sioux and adopted the look of the latter with back-combed hair and black eyeliner, a look that she would still sport when beginning university. As a child, Rowling attended St Michael's Primary School, a school founded by abolitionist William Wilberforce and education reformer Hannah More. Her headmaster at St Michael's, Alfred Dunn, has been suggested as the inspiration for the "Harry Potter" headmaster Albus Dumbledore. She attended secondary school at Wyedean School and College, where her mother worked in the science department. 
Steve Eddy, her first secondary school English teacher, remembers her as "not exceptional" but "one of a group of girls who were bright, and quite good at English". Rowling took A-levels in English, French and German, achieving two As and a B and was Head Girl. In 1982, Rowling took the entrance exams for Oxford University but was not accepted and earned a BA in French and Classics at the University of Exeter. Martin Sorrell, a French professor at Exeter, remembers "a quietly competent student, with a denim jacket and dark hair, who, in academic terms, gave the appearance of doing what was necessary". Rowling recalls doing little work, preferring to read Dickens and Tolkien. After a year of study in Paris, Rowling graduated from Exeter in 1986. In 1988, Rowling wrote a short essay about her time studying Classics titled "What was the Name of that Nymph Again? or Greek and Roman Studies Recalled"; it was published by the University of Exeter's journal "Pegasus". After working as a researcher and bilingual secretary in London for Amnesty International, Rowling moved with her then boyfriend to Manchester, where she worked at the Chamber of Commerce. In 1990, while she was on a four-hour-delayed train trip from Manchester to London, the idea for a story of a young boy attending a school of wizardry "came fully formed" into her mind. When she had reached her Clapham Junction flat, she began to write immediately. In December, Rowling's mother, Anne, died after ten years suffering from multiple sclerosis. Rowling was writing "Harry Potter" at the time and had never told her mother about it. Her mother's death heavily affected Rowling's writing, and she channelled her own feelings of loss by writing about Harry's own feelings of loss in greater detail in the first book. An advertisement in "The Guardian" led Rowling to move to Porto, Portugal, to teach English as a foreign language. She taught at night and began writing in the day while listening to Tchaikovsky's Violin Concerto. After 18 months in Porto, she met Portuguese television journalist Jorge Arantes in a bar and found they shared an interest in Jane Austen. They married on 16 October 1992 and their child, Jessica Isabel Rowling Arantes (named after Jessica Mitford), was born on 27 July 1993 in Portugal. Rowling had previously suffered a miscarriage. The couple separated on 17 November 1993. Biographers have suggested that Rowling suffered domestic abuse during her marriage, although the extent is unknown. In December 1993, Rowling and her then-infant daughter moved to Edinburgh, Scotland, to be near Rowling's sister with three chapters of what would become "Harry Potter" in her suitcase. Seven years after graduating from university, Rowling saw herself as a failure. Her marriage had failed, and she was jobless with a dependent child, but she described her failure as liberating and allowing her to focus on writing. During this period, Rowling was diagnosed with clinical depression and contemplated suicide. Her illness inspired the characters known as Dementors, soul-sucking creatures introduced in the third book. Rowling signed up for welfare benefits, describing her economic status as being "poor as it is possible to be in modern Britain, without being homeless." Rowling was left in despair after her estranged husband arrived in Scotland, seeking both her and her daughter. She obtained an Order of Restraint, and Arantes returned to Portugal, with Rowling filing for divorce in August 1994. 
She began a teacher training course in August 1995 at the Moray House School of Education, at Edinburgh University, after completing her first novel while living on state benefits. She wrote in many cafés, especially Nicolson's Café (owned by her brother-in-law), and the Elephant House, wherever she could get Jessica to fall asleep. In a 2001 BBC interview, Rowling denied the rumour that she wrote in local cafés to escape from her unheated flat, pointing out that it had heating. One of the reasons she wrote in cafés was that taking her baby out for a walk was the best way to make her fall asleep. In 1995, Rowling finished her manuscript for "Harry Potter and the Philosopher's Stone" on an old manual typewriter. Upon the enthusiastic response of Bryony Evens, a reader who had been asked to review the book's first three chapters, the Fulham-based Christopher Little Literary Agency agreed to represent Rowling in her quest for a publisher. The book was submitted to twelve publishing houses, all of which rejected the manuscript. A year later she was finally given the green light (and a £1,500 advance) by editor Barry Cunningham from Bloomsbury, a publishing house in London. The decision to publish Rowling's book owes much to Alice Newton, the eight-year-old daughter of Bloomsbury's chairman, who was given the first chapter to review by her father and immediately demanded the next. Although Bloomsbury agreed to publish the book, Cunningham says that he advised Rowling to get a day job, since she had little chance of making money in children's books. Soon after, in 1997, Rowling received an £8,000 grant from the Scottish Arts Council to enable her to continue writing. In June 1997, Bloomsbury published "Philosopher's Stone" with an initial print run of 1,000 copies, 500 of which were distributed to libraries. Today, such copies are valued between £16,000 and £25,000. Five months later, the book won its first award, a Nestlé Smarties Book Prize. In February, the novel won the British Book Award for Children's Book of the Year, and later, the Children's Book Award. In early 1998, an auction was held in the United States for the rights to publish the novel, and was won by Scholastic Inc. for US$105,000. Rowling said that she "nearly died" when she heard the news. In October 1998, Scholastic published "Philosopher's Stone" in the US under the title of "Harry Potter and the Sorcerer's Stone", a change Rowling says she now regrets and would have fought if she had been in a better position at the time. With the money from the Scholastic sale, Rowling moved from her flat into 19 Hazelbank Terrace in Edinburgh. Its sequel, "Harry Potter and the Chamber of Secrets", was published in July 1998, and Rowling again won the Smarties Prize. In December 1999, the third novel, "Harry Potter and the Prisoner of Azkaban", won the Smarties Prize, making Rowling the first person to win the award three times running. She later withdrew the fourth "Harry Potter" novel from contention to allow other books a fair chance. In January 2000, "Prisoner of Azkaban" won the inaugural Whitbread Children's Book of the Year award, though it lost the Book of the Year prize to Seamus Heaney's translation of "Beowulf". The fourth book, "Harry Potter and the Goblet of Fire", was released simultaneously in the UK and the US on 8 July 2000 and broke sales records in both countries. 372,775 copies of the book were sold in its first day in the UK, almost equalling the number "Prisoner of Azkaban" sold during its first year. 
In the US, the book sold three million copies in its first 48 hours, smashing all records. Rowling said that she had had a crisis while writing the novel and had to rewrite one chapter many times to fix a problem with the plot. Rowling was named Author of the Year in the 2000 British Book Awards. A wait of three years occurred between the release of "Goblet of Fire" and the fifth "Harry Potter" novel, "Harry Potter and the Order of the Phoenix". This gap led to press speculation that Rowling had developed writer's block, speculations she denied. Rowling later said that writing the book was a chore, that it could have been shorter, and that she ran out of time and energy as she tried to finish it. The sixth book, "Harry Potter and the Half-Blood Prince", was released on 16 July 2005. It too broke all sales records, selling nine million copies in its first 24 hours of release. In 2006, "Half-Blood Prince" received the Book of the Year prize at the British Book Awards. The title of the seventh and final "Harry Potter" book was announced on 21 December 2006 as "Harry Potter and the Deathly Hallows". In February 2007 it was reported that Rowling wrote on a bust in her hotel room at the Balmoral Hotel in Edinburgh that she had finished the seventh book in that room on 11 January 2007. "Harry Potter and the Deathly Hallows" was released on 21 July 2007 (0:01 BST) and broke its predecessor's record as the fastest-selling book of all time. It sold 11 million copies in the first day of release in the United Kingdom and United States. The book's last chapter was one of the earliest things she wrote in the entire series. "Harry Potter" is now a global brand worth an estimated US$15 billion, and the last four "Harry Potter" books have consecutively set records as the fastest-selling books in history. The series, totalling 4,195 pages, has been translated, in whole or in part, into 65 languages. The "Harry Potter" books have also gained recognition for sparking an interest in reading among the young at a time when children were thought to be abandoning books for computers and television, although it is reported that despite the huge uptake of the books, adolescent reading has continued to decline. In October 1998, Warner Bros. purchased the film rights to the first two novels for a seven-figure sum. A film adaptation of "Harry Potter and the Philosopher's Stone" was released on 16 November 2001, and "Harry Potter and the Chamber of Secrets" on 15 November 2002. Both films were directed by Chris Columbus. The film version of "Harry Potter and the Prisoner of Azkaban" was released on 4 June 2004, directed by Alfonso Cuarón. The fourth film, "Harry Potter and the Goblet of Fire", was directed by Mike Newell, and released on 18 November 2005. The film of "Harry Potter and the Order of the Phoenix" was released on 11 July 2007. David Yates directed, and Michael Goldenberg wrote the screenplay, having taken over the position from Steve Kloves. "Harry Potter and the Half-Blood Prince" was released on 15 July 2009. David Yates directed again, and Kloves returned to write the script. Warner Bros. filmed the final instalment of the series, "Harry Potter and the Deathly Hallows", in two segments, with part one being released on 19 November 2010 and part two being released on 15 July 2011. Yates directed both films. Warner Bros. took considerable notice of Rowling's desires and thoughts when drafting her contract. 
One of her principal stipulations was that the films be shot in Britain with an all-British cast, a condition that has generally been adhered to. Rowling also demanded that Coca-Cola, the victor in the race to tie in their products to the film series, donate US$18 million to the American charity Reading is Fundamental, as well as to several community charity programs. The first four, sixth, seventh, and eighth films were scripted by Steve Kloves; Rowling assisted him in the writing process, ensuring that his scripts did not contradict future books in the series. She told Alan Rickman (Severus Snape) and Robbie Coltrane (Hagrid) certain secrets about their characters before they were revealed in the books. Daniel Radcliffe (Harry Potter) asked her if Harry died at any point in the series; Rowling answered him by saying, "You have a death scene", thereby not explicitly answering the question. Director Steven Spielberg was approached to direct the first film, but dropped out. The press has repeatedly claimed that Rowling played a role in his departure, but Rowling stated that she had no say in who directed the films and would not have vetoed Spielberg. Rowling's first choice for the director had been Monty Python member Terry Gilliam, but Warner Bros. wanted a family-friendly film and chose Columbus. Rowling had gained some creative control over the films, reviewing all the scripts as well as acting as a producer on the final two-part instalment, "Deathly Hallows". Rowling, producers David Heyman and David Barron, along with directors David Yates, Mike Newell and Alfonso Cuarón, collected the Michael Balcon Award for Outstanding British Contribution to Cinema at the 2011 British Academy Film Awards in honour of the "Harry Potter" film franchise. In September 2013, Warner Bros. announced an "expanded creative partnership" with Rowling, based on a planned series of films about Newt Scamander, author of "Fantastic Beasts and Where to Find Them". The first film, scripted by Rowling, was released in November 2016 and is set roughly 70 years before the events of the main series. In 2016, it was announced that the series would consist of five films, with the second scheduled for release in November 2018. In 2004, "Forbes" named Rowling as the first person to become a US-dollar billionaire by writing books, the second-richest female entertainer and the 1,062nd richest person in the world. Rowling disputed the calculations and said she had plenty of money, but was not a billionaire. The 2016 "Sunday Times Rich List" estimated Rowling's fortune at £600 million, ranking her as the joint 197th richest person in the UK. In 2012, "Forbes" removed Rowling from their rich list, claiming that her US$160 million in charitable donations and the high tax rate in the UK meant she was no longer a billionaire. In February 2013, she was assessed as the 13th most powerful woman in the United Kingdom by "Woman's Hour" on BBC Radio 4. In 2001, Rowling purchased a 19th-century estate house, Killiechassie House, on the banks of the River Tay, near Aberfeldy, in Perth and Kinross. Rowling also owns a £4.5 million Georgian house in Kensington, west London, on a street with 24-hour security. In 2017, Rowling was worth an estimated £650 million according to the Sunday Times Rich List. She was named the most highly paid author in the world with earnings of £72 million ($95 million) a year by Forbes magazine in 2017. 
On 26 December 2001, Rowling married Neil Murray (born 30 June 1971), a Scottish doctor, in a private ceremony at her home, Killiechassie House, near Aberfeldy. Their son, David Gordon Rowling Murray, was born on 24 March 2003. Shortly after Rowling began writing "Harry Potter and the Half-Blood Prince", she ceased working on the novel to care for David in his early infancy. Rowling is a friend of Sarah Brown, wife of former prime minister Gordon Brown, whom she met when they collaborated on a charitable project. When Sarah Brown's son Fraser was born in 2003, Rowling was one of the first to visit her in hospital. Rowling's youngest child, daughter Mackenzie Jean Rowling Murray, to whom she dedicated "Harry Potter and the Half-Blood Prince", was born on 23 January 2005. In October 2012, a "New Yorker" magazine article stated that the Rowling family lived in a seventeenth-century Edinburgh house, concealed at the front by tall conifer hedges. Prior to October 2012, Rowling lived near the author Ian Rankin, who later said she was quiet and introspective, and that she seemed in her element with children. The family resides in Scotland. In July 2011, Rowling parted company with her agent, Christopher Little, moving to a new agency founded by one of his staff, Neil Blair. On 23 February 2012, his agency, the Blair Partnership, announced on its website that Rowling was set to publish a new book targeted at adults. In a press release, Rowling said that her new book would be quite different from "Harry Potter". In April 2012, Little, Brown and Company announced that the book was titled "The Casual Vacancy" and would be released on 27 September 2012. Rowling gave several interviews and made appearances to promote "The Casual Vacancy", including at the London Southbank Centre, the Cheltenham Literature Festival, "Charlie Rose" and the Lennoxlove Book Festival. In its first three weeks of release, "The Casual Vacancy" sold over 1 million copies worldwide. On 3 December 2012, it was announced that the BBC would be adapting "The Casual Vacancy" into a television drama miniseries. Rowling's agent, Neil Blair, acted as producer, through his independent production company and with Rick Senat serving as executive producer. Rowling collaborated on the adaptation, serving as an executive producer for the series. The series aired in three parts from 15 February to 1 March 2015. In 2007, during the Edinburgh Book Festival, author Ian Rankin claimed that his wife spotted Rowling "scribbling away" at a detective novel in a café. Rankin later retracted the story, claiming it was a joke, but the rumour persisted, with a report in 2012 in "The Guardian" speculating that Rowling's next book would be a crime novel. In an interview with Stephen Fry in 2005, Rowling claimed that she would much prefer to write any subsequent books under a pseudonym, but she conceded to Jeremy Paxman in 2003 that if she did, the press would probably "find out in seconds". In April 2013, Little Brown published "The Cuckoo's Calling", the purported début novel of author Robert Galbraith, whom the publisher described as "a former plainclothes Royal Military Police investigator who had left in 2003 to work in the civilian security industry". 
The novel, a detective story in which private investigator Cormoran Strike unravels the supposed suicide of a supermodel, sold 1,500 copies in hardback (although the matter was never fully resolved; later reports stated that this figure was the size of the first print run, while total sales were closer to 500) and received acclaim from other crime writers and critics—a "Publishers Weekly" review called the book a "stellar debut", while the "Library Journal"'s mystery section pronounced the novel "the debut of the month". India Knight, a novelist and columnist for "The Sunday Times", tweeted on 9 July 2013 that she had been reading "The Cuckoo's Calling" and thought it was good for a début novel. In response, a tweeter called Jude Callegari said that the author was Rowling. Knight queried this but got no further reply. Knight notified Richard Brooks, arts editor of the "Sunday Times", who began his own investigation. After discovering that Rowling and Galbraith had the same agent and editor, he sent the books for linguistic analysis, which found similarities, and subsequently contacted Rowling's agent, who confirmed that it was Rowling's pseudonym. Within days of Rowling being revealed as the author, sales of the book rose by 4,000%, and Little Brown printed another 140,000 copies to meet the increase in demand. A signed copy of the first edition sold for US$4,453 (£2,950), while an unsold signed first-edition copy was being offered for $6,188 (£3,950). Rowling said that she had enjoyed working under a pseudonym. On her Robert Galbraith website, Rowling explained that she took the name from one of her personal heroes, Robert Kennedy, and a childhood fantasy name she had invented for herself, Ella Galbraith. Soon after the revelation, Brooks pondered whether Jude Callegari could have been Rowling as part of wider speculation that the entire affair had been a publicity stunt. Some also noted that many of the writers who had initially praised the book, such as Alex Gray or Val McDermid, were within Rowling's circle of acquaintances; both vociferously denied any foreknowledge of Rowling's authorship. Judith "Jude" Callegari was the best friend of the wife of Chris Gossage, a partner within Russells Solicitors, Rowling's legal representatives. Rowling released a statement saying she was disappointed and angry; Russells apologised for the leak, confirming it was not part of a marketing stunt and that "the disclosure was made in confidence to someone he [Gossage] trusted implicitly". Russells made a donation to the Soldiers' Charity on Rowling's behalf and reimbursed her for her legal fees. On 26 November 2013, the Solicitors Regulation Authority (SRA) issued Gossage a written rebuke and a £1,000 fine for breaching privacy rules. On 17 February 2014, Rowling announced that the second Cormoran Strike novel, named "The Silkworm", would be released in June 2014. It sees Strike investigating the disappearance of a writer hated by many of his old friends for insulting them in his new novel. In 2015, Rowling stated on Galbraith's website that the third Cormoran Strike novel would include "an insane amount of planning, the most I have done for any book I have written so far. I have colour-coded spreadsheets so I can keep a track of where I am going." On 24 April 2015, Rowling announced that work on the third book was completed. Titled "Career of Evil", it was released on 20 October 2015 in the United States, and on 22 October 2015 in the United Kingdom. 
In 2017, the BBC released a "Cormoran Strike" television series starring Tom Burke as Cormoran Strike; it was picked up by HBO for distribution in the United States and Canada. In March 2017, Rowling revealed the fourth novel's title via Twitter in a game of "Hangman" with her followers. After many failed attempts, followers finally guessed correctly. Rowling confirmed that the next novel's title would be "Lethal White". Although the book was intended for a 2017 release, Rowling revealed on Twitter that it was taking longer than expected and would be the longest book in the series thus far. Rowling has said it is unlikely she will write any more books in the "Harry Potter" series. In October 2007, she stated that her future work was unlikely to be in the fantasy genre. On 1 October 2010, in an interview with Oprah Winfrey, Rowling stated that a new book on the saga might happen. In 2007, Rowling stated that she planned to write an encyclopaedia of "Harry Potter"'s wizarding world consisting of various unpublished material and notes. Any profits from such a book would be given to charity. During a news conference at Hollywood's Kodak Theatre in 2007, Rowling, when asked how the encyclopaedia was coming along, said, "It's not coming along, and I haven't started writing it. I never said it was the next thing I'd do." At the end of 2007, Rowling said that the encyclopaedia could take up to ten years to complete. In June 2011, Rowling announced that future "Harry Potter" projects, and all electronic downloads, would be concentrated on a new website called Pottermore. The site includes 18,000 words of information on characters, places and objects in the "Harry Potter" universe. In October 2015, Rowling announced via Pottermore that a two-part play she had co-authored with playwrights Jack Thorne and John Tiffany, "Harry Potter and the Cursed Child", was the 'eighth Harry Potter story' and that it would focus on the life of Harry Potter's youngest son Albus after the epilogue of "Harry Potter and the Deathly Hallows". On 28 October 2015, the first round of tickets went on sale and sold out in several hours. In 2000, Rowling established the Volant Charitable Trust, which uses its annual budget of £5.1 million to combat poverty and social inequality. The fund also gives to organisations that aid children, one-parent families, and multiple sclerosis research. Rowling, once a single parent, is now president of the charity Gingerbread (originally One Parent Families), having become their first Ambassador in 2000. Rowling collaborated with Sarah Brown to write a book of children's stories to aid One Parent Families. In 2001, the UK anti-poverty fundraiser Comic Relief asked three best-selling British authors – cookery writer and TV presenter Delia Smith, "Bridget Jones" creator Helen Fielding, and Rowling – to submit booklets related to their most famous works for publication. Rowling's two booklets, "Fantastic Beasts and Where to Find Them" and "Quidditch Through the Ages", are ostensibly facsimiles of books found in the Hogwarts library. Since going on sale in March 2001, the books have raised £15.7 million for the fund. The £10.8 million they have raised outside the UK has been channelled into a newly created International Fund for Children and Young People in Crisis. In 2002, Rowling contributed a foreword to "Magic", an anthology of fiction published by Bloomsbury Publishing, helping to raise money for the National Council for One Parent Families. 
In 2005, Rowling and MEP Emma Nicholson founded the Children's High Level Group (now Lumos). In January 2006, Rowling went to Bucharest to highlight the use of caged beds in mental institutions for children. To further support the CHLG, Rowling auctioned one of seven handwritten and illustrated copies of "The Tales of Beedle the Bard", a series of fairy tales referred to in "Harry Potter and the Deathly Hallows". The book was purchased for £1.95 million by on-line bookseller Amazon.com on 13 December 2007, becoming the most expensive modern book ever sold at auction. Rowling gave away the remaining six copies to those who have a close connection with the "Harry Potter" books. In 2008, Rowling agreed to publish the book with the proceeds going to Lumos. On 1 June 2010 (International Children's Day), Lumos launched an annual initiative – "Light a Birthday Candle for Lumos". In November 2013, Rowling handed over all earnings from the sale of "The Tales of Beedle the Bard", totalling nearly £19 million. In July 2012, Rowling was featured at the 2012 Summer Olympics opening ceremony in London where she read a few lines from J. M. Barrie's "Peter Pan" as part of a tribute to Great Ormond Street Hospital for Children. An inflatable representation of Lord Voldemort and other children's literary characters accompanied her reading. Rowling has contributed money and support for research and treatment of multiple sclerosis, from which her mother suffered before her death in 1990. In 2006, Rowling contributed a substantial sum toward the creation of a new Centre for Regenerative Medicine at Edinburgh University, later named the Anne Rowling Regenerative Neurology Clinic. In 2010 she donated a further £10 million to the centre. For reasons unknown, Scotland, Rowling's country of adoption, has the highest rate of multiple sclerosis in the world. In 2003, Rowling took part in a campaign to establish a national standard of care for MS sufferers. In April 2009, she announced that she was withdrawing her support for Multiple Sclerosis Society Scotland, citing her inability to resolve an ongoing feud between the organisation's northern and southern branches that had sapped morale and led to several resignations. In May 2008, bookseller Waterstones asked Rowling and 12 other writers (Sebastian Faulks, Doris Lessing, Lisa Appignanesi, Margaret Atwood, Lauren Child, Richard Ford, Neil Gaiman, Nick Hornby, Michael Rosen, Axel Scheffler, Tom Stoppard and Irvine Welsh) to compose a short piece of their own choosing on a single A5 card, which would then be sold at auction in aid of the charities Dyslexia Action and English PEN. Rowling's contribution was an 800-word "Harry Potter" prequel that concerns Harry's father, James Potter, and godfather, Sirius Black, and takes place three years before Harry was born. The cards were collated and sold for charity in book form in August 2008. On 1 and 2 August 2006, she read alongside Stephen King and John Irving at Radio City Music Hall in New York City. Profits from the event were donated to the Haven Foundation, a charity that aids artists and performers left uninsurable and unable to work, and the medical NGO Médecins Sans Frontières. In May 2007, Rowling pledged a donation reported as over £250,000 to a reward fund started by the tabloid "News of the World" for the safe return of a young British girl, Madeleine McCann, who disappeared in Portugal. 
Rowling, along with Nelson Mandela, Al Gore, and Alan Greenspan, wrote an introduction to a collection of Gordon Brown's speeches, the proceeds of which were donated to the Jennifer Brown Research Laboratory. After her exposure as the true author of "The Cuckoo's Calling" led to a massive increase in sales, Rowling announced that she would donate all her royalties to the Army Benevolent Fund, claiming she had always intended to, but never expected the book to be a best-seller. Rowling is a member of both English PEN and Scottish PEN. She was one of 50 authors to contribute to First Editions, Second Thoughts, a charity auction for English PEN. Each author hand-annotated a first edition copy of one of their books: in Rowling's case, "Harry Potter and the Philosopher's Stone". The book was the highest-selling lot of the event and fetched £150,000 ($228,600). Rowling is a supporter of The Shannon Trust, which runs the Toe by Toe Reading Plan and the Shannon Reading Plan in prisons across Britain, providing tutoring to prisoners who cannot read. Rowling has named communist and civil rights activist Jessica Mitford as her "most influential writer", saying, "Jessica Mitford has been my heroine since I was 14 years old, when I overheard my formidable great-aunt discussing how Mitford had run away at the age of 19 to fight with the Reds in the Spanish Civil War", and claims that what inspired her about Mitford was that she was "incurably and instinctively rebellious, brave, adventurous, funny and irreverent, she liked nothing better than a good fight, preferably against a pompous and hypocritical target". Rowling has described Jane Austen as her favourite author, calling "Emma" her favourite book in "O, The Oprah Magazine". Rowling has said that her early influences as a child included "The Lion, The Witch and The Wardrobe" by C.S. Lewis, "The Little White Horse" by Elizabeth Goudge, and "Manxmouse" by Paul Gallico. Rowling is known for her left-wing political views. In September 2008, on the eve of the Labour Party Conference, Rowling announced that she had donated £1 million to the Labour Party, and publicly endorsed Labour Prime Minister Gordon Brown over Conservative challenger David Cameron, praising Labour's policies on child poverty. Rowling is a close friend of Sarah Brown, wife of Gordon Brown, whom she met when they collaborated on a charitable project for One Parent Families. Rowling discussed the 2008 United States presidential election with the Spanish-language newspaper "El País" in February 2008, stating that the election would have a profound effect on the rest of the world. She also said that Barack Obama and Hillary Clinton would be "extraordinary" in the White House. In the same interview, Rowling identified Robert F. Kennedy as her hero. In April 2010, Rowling published an article in "The Times", in which she criticised Cameron's plan to encourage married couples to stay together by offering them a £150 annual tax credit: "Nobody who has ever experienced the reality of poverty could say 'it's not the money, it's the message'. When your flat has been broken into, and you cannot afford a locksmith, it is the money. When you are two pence short of a tin of baked beans, and your child is hungry, it is the money. When you find yourself contemplating shoplifting to get nappies, it is the money." As a resident of Scotland, Rowling was eligible to vote in the 2014 referendum on Scottish independence, and campaigned for the "No" vote. 
She donated £1 million to the Better Together anti-independence campaign (run by her former neighbour Alistair Darling), the largest donation it had received at the time. In a blog post, Rowling explained that an open letter from Scottish medical professionals raised problems with First Minister Alex Salmond's plans for common research funding. Rowling compared some Scottish Nationalists with the Death Eaters, characters from "Harry Potter" who are scornful of those without pure blood. On 22 October 2015, a letter was published in "The Guardian" signed by Rowling (along with over 150 other figures from arts and politics) opposing the cultural boycott of Israel, and announcing the creation of a network for dialogue, called Culture for Coexistence. Rowling later explained her position in more detail, saying that although she opposed most of Benjamin Netanyahu's actions, she did not think the cultural boycott would bring about the removal of Israel's leader or help improve the situation in Israel and Palestine. In June 2016, Rowling campaigned against the referendum vote to leave the European Union, stating on her website that "I'm the mongrel product of this European continent and I'm an internationalist. I was raised by a Francophile mother whose family was proud of their part-French heritage ... My values are not contained or proscribed by borders. The absence of a visa when I cross the channel has symbolic value to me. I might not be in my house, but I'm still in my hometown." Over the years, some religious people, particularly Christians, have decried Rowling's books for supposedly promoting witchcraft. Rowling identifies as a Christian. She once said, "I believe in God, not magic." Early on, she felt that if readers knew of her Christian beliefs they would be able to predict her plot line. In 2007, Rowling described having been brought up in the Church of England. She said she was the only one in her family who regularly went to church. As a student she became annoyed at the "smugness of religious people" and worshipped less often. Later, she started to attend a Church of Scotland congregation at the time she was writing "Harry Potter". Her eldest daughter, Jessica, was baptised there. In a 2006 interview with "Tatler" magazine, Rowling noted that, "like Graham Greene, my faith is sometimes about if my faith will return. It's important to me." She has said that she has struggled with doubt, that she believes in an afterlife, and that her faith plays a part in her books. In a 2012 radio interview, she said that she was a member of the Scottish Episcopal Church, a province of the Anglican Communion. In 2015, following the referendum on same-sex marriage in Ireland, Rowling joked that if Ireland legalised same-sex marriage, Dumbledore and Gandalf could get married there. The Westboro Baptist Church, in response, stated that if the two got married, they would picket. Rowling responded by saying, "Alas, the sheer awesomeness of such a union in such a place would blow your tiny bigoted minds out of your thick sloping skulls." Rowling has had a difficult relationship with the press. She admits to being "thin-skinned" and dislikes the fickle nature of reporting. Rowling disputes her reputation as a recluse who hates to be interviewed. By 2011, Rowling had taken more than 50 actions against the press. In 2001, the Press Complaints Commission upheld a complaint by Rowling over a series of unauthorised photographs of her with her daughter on the beach in Mauritius published in "OK!" magazine. 
In 2007, Rowling's young son, David, assisted by Rowling and her husband, lost a court fight to ban publication of a photograph of him. The photo, taken by a photographer using a long-range lens, was subsequently published in a "Sunday Express" article featuring Rowling's family life and motherhood. The judgement was overturned in David's favour in May 2008. Rowling particularly dislikes the British tabloid the "Daily Mail", which has conducted interviews with her estranged ex-husband. As one journalist noted, "Harry's Uncle Vernon is a grotesque philistine of violent tendencies and remarkably little brain. It is not difficult to guess which newspaper Rowling gives him to read [in "Goblet of Fire"]." She also sought damages from the "Mail" for libel over an article about her time as a single mother. Some have speculated that Rowling's fraught relationship with the press was the inspiration behind the character Rita Skeeter, a gossipy celebrity journalist who first appears in "Goblet of Fire", but Rowling noted in 2000 that the character predates her rise to fame. In September 2011, Rowling was named a "core participant" in the Leveson Inquiry into the culture, practices and ethics of the British press, as one of dozens of celebrities who may have been victims of phone hacking. On 24 November 2011, Rowling gave evidence before the inquiry; although she was not suspected of having been a victim of phone hacking, her testimony included accounts of photographers camping on her doorstep, her fiancé being duped into giving his home address to a journalist masquerading as a tax official, her chasing a journalist a week after giving birth, a journalist leaving a note inside her then-five-year-old daughter's schoolbag, and an attempt by "The Sun" to "blackmail" her into a photo opportunity in exchange for the return of a stolen manuscript. Rowling claimed she had to leave her former home in Merchiston because of press intrusion. In November 2012, Rowling wrote an article for "The Guardian" in reaction to David Cameron's decision not to implement the full recommendations of the Leveson inquiry, saying she felt "duped and angry". In 2014, Rowling reaffirmed her support for "Hacked Off" and its campaign towards press self-regulation by co-signing with other British celebrities a declaration to "[safeguard] the press from political interference while also giving vital protection to the vulnerable." Rowling, her publishers, and Time Warner, the owner of the rights to the "Harry Potter" films, have taken numerous legal actions to protect their copyright. The worldwide popularity of the "Harry Potter" series has led to the appearance of a number of locally produced, unauthorised sequels and other derivative works, sparking efforts to ban or contain them. Another area of legal dispute involves a series of injunctions obtained by Rowling and her publishers to prohibit anyone from reading her books before their official release date. The injunctions drew fire from civil liberties and free speech campaigners and sparked debates over the "right to read". Rowling has received honorary degrees from St Andrews University, the University of Edinburgh, Edinburgh Napier University, the University of Exeter, which she attended, the University of Aberdeen, and Harvard University, where she spoke at the 2008 commencement ceremony. In 2009, Rowling was made a Chevalier de la Légion d'honneur by French President Nicolas Sarkozy. In 2011, Rowling became an honorary Fellow of the Royal College of Physicians of Edinburgh. 
Among her other honours, she was appointed a Member of the Order of the Companions of Honour (CH) in the 2017 Birthday Honours for services to literature and philanthropy. Jimi Hendrix James Marshall "Jimi" Hendrix (born Johnny Allen Hendrix; November 27, 1942 – September 18, 1970) was an American rock guitarist, singer, and songwriter. Although his mainstream career spanned only four years, he is widely regarded as one of the most influential electric guitarists in the history of popular music, and one of the most celebrated musicians of the 20th century. The Rock and Roll Hall of Fame describes him as "arguably the greatest instrumentalist in the history of rock music". Born in Seattle, Washington, Hendrix began playing guitar at the age of 15. In 1961, he enlisted in the U.S. Army and trained as a paratrooper in the 101st Airborne Division; he was granted an honorable discharge the following year. Soon afterward, he moved to Clarksville, Tennessee, and began playing gigs on the Chitlin' Circuit, earning a place in the Isley Brothers' backing band and later with Little Richard, with whom he continued to work through mid-1965. He then played with Curtis Knight and the Squires before moving to England in late 1966 after being discovered by Linda Keith, who in turn interested bassist Chas Chandler of the Animals in becoming his first manager. Within months, Hendrix had earned three UK top ten hits with the Jimi Hendrix Experience: "Hey Joe", "Purple Haze", and "The Wind Cries Mary". He achieved fame in the U.S. after his performance at the Monterey Pop Festival in 1967, and in 1968 his third and final studio album, "Electric Ladyland", reached number one in the U.S.; it was Hendrix's most commercially successful release and his first and only number one album. The world's highest-paid performer, he headlined the Woodstock Festival in 1969 and the Isle of Wight Festival in 1970, before his accidental death from barbiturate-related asphyxia on September 18, 1970, at the age of 27. Hendrix was inspired musically by American rock and roll and electric blues. He favored overdriven amplifiers with high volume and gain, and was instrumental in utilizing the previously undesirable sounds caused by guitar amplifier feedback. He helped to popularize the use of a wah-wah pedal in mainstream rock, and was the first artist to use stereophonic phasing effects in music recordings. Holly George-Warren of "Rolling Stone" commented: "Hendrix pioneered the use of the instrument as an electronic sound source. Players before him had experimented with feedback and distortion, but Hendrix turned those effects and others into a controlled, fluid vocabulary every bit as personal as the blues with which he began." Hendrix was the recipient of several music awards during his lifetime and posthumously. In 1967, readers of "Melody Maker" voted him the Pop Musician of the Year, and in 1968, "Rolling Stone" declared him the Performer of the Year. "Disc and Music Echo" honored him as the World Top Musician of 1969, and in 1970, "Guitar Player" named him the Rock Guitarist of the Year. The Jimi Hendrix Experience was inducted into the Rock and Roll Hall of Fame in 1992 and the UK Music Hall of Fame in 2005. "Rolling Stone" ranked the band's three studio albums, "Are You Experienced", "Axis: Bold as Love", and "Electric Ladyland", among the 100 greatest albums of all time, and they ranked Hendrix as the greatest guitarist and the sixth greatest artist of all time. Jimi Hendrix had a diverse heritage. 
His paternal grandmother, Zenora "Nora" Rose Moore, was African American and one-quarter Cherokee. Hendrix's paternal grandfather, Bertran Philander Ross Hendrix (born 1866), was born out of an extramarital affair between a woman named Fanny and a grain merchant from Urbana, Ohio, or Illinois, one of the wealthiest men in the area at that time. After Hendrix and Moore relocated to Vancouver, Canada, they had a son, whom they named James Allen Ross Hendrix, on June 10, 1919; the family called him "Al". In 1941, after moving to Seattle, Al met Lucille Jeter (1925–1958) at a dance; they married on March 31, 1942. Lucille's father (Jimi's maternal grandfather) was Preston Jeter (born 1875), whose mother was born in circumstances similar to those of Bertran Philander Ross Hendrix. Lucille's mother, née Clarice Lawson, had African American and Cherokee ancestors. Al, who had been drafted by the U.S. Army to serve in World War II, left to begin his basic training three days after the wedding. Johnny Allen Hendrix was born on November 27, 1942, in Seattle; he was the first of Lucille's five children. In 1946, Johnny's parents changed his name to James Marshall Hendrix, in honor of Al and his late brother Leon Marshall. Stationed in Alabama at the time of Hendrix's birth, Al was denied the standard military furlough afforded servicemen for childbirth; his commanding officer placed him in the stockade to prevent him from going AWOL to see his infant son in Seattle. He spent two months locked up without trial, and while in the stockade received a telegram announcing his son's birth. During Al's three-year absence, Lucille struggled to raise their son. When Al was away, Hendrix was mostly cared for by family members and friends, especially Lucille's sister Delores Hall and her friend Dorothy Harding. Al received an honorable discharge from the U.S. Army on September 1, 1945. Two months later, unable to find Lucille, Al went to the Berkeley, California, home of a family friend named Mrs. Champ, who had taken care of and had attempted to adopt Hendrix; this is where Al saw his son for the first time. After returning from service, Al reunited with Lucille, but his inability to find steady work left the family impoverished. They both struggled with alcohol, and often fought when intoxicated. The violence sometimes drove Hendrix to withdraw and hide in a closet in their home. His relationship with his brother Leon (born 1948) was close but precarious; with Leon in and out of foster care, they lived with an almost constant threat of fraternal separation. In addition to Leon, Hendrix had three younger siblings: Joseph, born in 1949, Kathy in 1950, and Pamela in 1951, all of whom Al and Lucille gave up to foster care and adoption. The family frequently moved, staying in cheap hotels and apartments around Seattle. On occasion, family members would take Hendrix to Vancouver to stay at his grandmother's. A shy and sensitive boy, he was deeply affected by his life experiences. In later years, he confided to a girlfriend that he had been the victim of sexual abuse by a man in uniform. On December 17, 1951, when Hendrix was nine years old, his parents divorced; the court granted Al custody of him and Leon. At Horace Mann Elementary School in Seattle during the mid-1950s, Hendrix's habit of carrying a broom with him to emulate a guitar gained the attention of the school's social worker. 
After more than a year of his clinging to a broom like a security blanket, she wrote a letter requesting school funding intended for underprivileged children, insisting that leaving him without a guitar might result in psychological damage. Her efforts failed, and Al refused to buy him a guitar. In 1957, while helping his father with a side-job, Hendrix found a ukulele amongst the garbage that they were removing from an older woman's home. She told him that he could keep the instrument, which had only one string. Learning by ear, he played single notes, following along to Elvis Presley songs, particularly Presley's cover of Leiber and Stoller's "Hound Dog". By the age of thirty-three, Hendrix's mother Lucille had developed cirrhosis of the liver, and on February 2, 1958, she died when her spleen ruptured. Al refused to take James and Leon to attend their mother's funeral; he instead gave them shots of whiskey and instructed them that was how men were supposed to deal with loss. In 1958, Hendrix completed his studies at Washington Junior High School and began attending, but did not graduate from, Garfield High School. In mid-1958, at age 15, Hendrix acquired his first acoustic guitar, for $5. He earnestly applied himself, playing the instrument for several hours daily, watching others and getting tips from more experienced guitarists, and listening to blues artists such as Muddy Waters, B.B. King, Howlin' Wolf, and Robert Johnson. The first tune Hendrix learned how to play was "Peter Gunn", the theme from the television series of the same name. Around that time, Hendrix jammed with boyhood friend Sammy Drain and his keyboard playing brother. In 1959, while attending a concert by Hank Ballard & the Midnighters in Seattle, Hendrix met the group's guitarist Billy Davis. Davis showed him some guitar licks and later got him a short gig with the Midnighters. The two remained friends until Hendrix's death in 1970. Soon after he acquired the acoustic guitar, Hendrix formed his first band, the Velvetones. Without an electric guitar, he could barely be heard over the sound of the group. After about three months, he realized that he needed an electric guitar in order to continue. In mid-1959, his father relented and bought him a white Supro Ozark. Hendrix's first gig was with an unnamed band in the Jaffe Room of Seattle's Temple De Hirsch Sinai, but after he did too much showing off, the band fired him between sets. He later joined the Rocking Kings, which played professionally at venues such as the Birdland club. When someone stole his guitar after he left it backstage overnight, Al bought him a red Silvertone Danelectro. Before Hendrix was 19 years old, law enforcement authorities had twice caught him riding in stolen cars. When given a choice between spending time in prison or joining the Army, he chose the latter and enlisted on May 31, 1961. After completing eight weeks of basic training at Fort Ord, California, he was assigned to the 101st Airborne Division and stationed at Fort Campbell, Kentucky. He arrived there on November 8, and soon afterward he wrote to his father: "There's nothing but physical training and harassment here for two weeks, then when you go to jump school ... you get hell. They work you to death, fussing and fighting." In his next letter home, Hendrix, who had left his guitar at his girlfriend Betty Jean Morgan's house in Seattle, asked his father to send it to him as soon as possible, stating: "I really need it now." 
His father obliged and sent the red Silvertone Danelectro on which Hendrix had hand-painted the words "Betty Jean" to Fort Campbell. His apparent obsession with the instrument contributed to his neglect of his duties, which led to verbal taunting and physical abuse from his peers, who at least once hid the guitar from him until he had begged for its return. In November 1961, fellow serviceman Billy Cox walked past an army club and heard Hendrix playing guitar. Intrigued by the proficient playing, which he described as a combination of "John Lee Hooker and Beethoven", Cox borrowed a bass guitar and the two jammed. Within a few weeks, they began performing at base clubs on the weekends with other musicians in a loosely organized band called the Casuals. Hendrix completed his paratrooper training in just over eight months, and Major General C. W. G. Rich awarded him the prestigious Screaming Eagles patch on January 11, 1962. By February, his personal conduct had begun to draw criticism from his superiors. They labeled him an unqualified marksman and often caught him napping while on duty and failing to report for bed checks. On May 24, Hendrix's platoon sergeant, James C. Spears, filed a report in which he stated: "He has no interest whatsoever in the Army ... It is my opinion that Private Hendrix will never come up to the standards required of a soldier. I feel that the military service will benefit if he is discharged as soon as possible." On June 29, 1962, Captain Gilbert Batchman granted Hendrix an honorable discharge on the basis of unsuitability. Hendrix later spoke of his dislike of the army and falsely stated that he had received a medical discharge after breaking his ankle during his 26th parachute jump. In September 1963, after Cox was discharged from the Army, he and Hendrix moved to Clarksville, Tennessee, and formed a band called the King Kasuals. Hendrix had watched Butch Snipes play with his teeth in Seattle and by now Alphonso 'Baby Boo' Young, the other guitarist in the band, was performing this guitar gimmick. Not to be upstaged, Hendrix learned to play with his teeth. He later commented: "The idea of doing that came to me...in Tennessee. Down there you have to play with your teeth or else you get shot. There's a trail of broken teeth all over the stage." Although they began playing low-paying gigs at obscure venues, the band eventually moved to Nashville's Jefferson Street, which was the traditional heart of the city's black community and home to a thriving rhythm and blues music scene. They earned a brief residency playing at a popular venue in town, the Club del Morocco, and for the next two years Hendrix made a living performing at a circuit of venues throughout the South that were affiliated with the Theater Owners' Booking Association (TOBA), widely known as the Chitlin' Circuit. In addition to playing in his own band, Hendrix performed as a backing musician for various soul, R&B, and blues musicians, including Wilson Pickett, Slim Harpo, Sam Cooke, Ike & Tina Turner and Jackie Wilson. In January 1964, feeling he had outgrown the circuit artistically, and frustrated by having to follow the rules of bandleaders, Hendrix decided to venture out on his own. He moved into the Hotel Theresa in Harlem, where he befriended Lithofayne Pridgon, known as "Faye", who became his girlfriend. A Harlem native with connections throughout the area's music scene, Pridgon provided him with shelter, support, and encouragement. Hendrix also met the Allen twins, Arthur and Albert. 
In February 1964, Hendrix won first prize in the Apollo Theater amateur contest. Hoping to secure a career opportunity, he played the Harlem club circuit and sat in with various bands. At the recommendation of a former associate of Joe Tex, Ronnie Isley granted Hendrix an audition that led to an offer to become the guitarist with the Isley Brothers' back-up band, the I.B. Specials, which he readily accepted. In March 1964, Hendrix recorded the two-part single "Testify" with the Isley Brothers. Released in June, it failed to chart. In May, he provided guitar instrumentation for the Don Covay song, "Mercy Mercy". Issued in August by Rosemart Records and distributed by Atlantic, the track reached number 35 on the "Billboard" chart. Hendrix toured with the Isleys during much of 1964, but near the end of October, after growing tired of playing the same set every night, he left the band. Soon afterward, Hendrix joined Little Richard's touring band, the Upsetters. During a stop in Los Angeles in February 1965, he recorded his first and only single with Richard, "I Don't Know What You Got (But It's Got Me)", written by Don Covay and released by Vee-Jay Records. Richard's popularity was waning at the time, and the single peaked at number 92, where it remained for one week before dropping off the chart. Hendrix met singer Rosa Lee Brooks while staying at the Wilcox Hotel in Hollywood, and she invited him to participate in a recording session for her single, which included the Arthur Lee penned "My Diary" as the A-side, and "Utee" as the B-side. Hendrix played guitar on both tracks, which also included background vocals by Lee. The single failed to chart, but Hendrix and Lee began a friendship that lasted several years; Hendrix later became an ardent supporter of Lee's band, Love. In July 1965, on Nashville's Channel 5 "Night Train", Hendrix made his first television appearance. Performing in Little Richard's ensemble band, he backed up vocalists Buddy and Stacy on "Shotgun". The video recording of the show marks the earliest known footage of Hendrix performing. Richard and Hendrix often clashed over tardiness, wardrobe, and Hendrix's stage antics, and in late July, Richard's brother Robert fired him. He then briefly rejoined the Isley Brothers, and recorded a second single with them, "Move Over and Let Me Dance" backed with "Have You Ever Been Disappointed". Later that year, he joined a New York-based R&B band, Curtis Knight and the Squires, after meeting Knight in the lobby of a hotel where both men were staying. Hendrix performed with them for eight months. In October 1965, he and Knight recorded the single, "How Would You Feel" backed with "Welcome Home" and on October 15, Hendrix signed a three-year recording contract with entrepreneur Ed Chalpin. While the relationship with Chalpin was short-lived, his contract remained in force, which later caused legal and career problems for Hendrix. During his time with Knight, Hendrix briefly toured with Joey Dee and the Starliters, and worked with King Curtis on several recordings including Ray Sharpe's two-part single, "Help Me". Hendrix earned his first composer credits for two instrumentals, "Hornets Nest" and "Knock Yourself Out", released as a Curtis Knight and the Squires single in 1966. Feeling restricted by his experiences as an R&B sideman, Hendrix moved in 1966 to New York City's Greenwich Village, which had a vibrant and diverse music scene. There, he was offered a residency at the Cafe Wha? 
on MacDougal Street and formed his own band that June, Jimmy James and the Blue Flames, which included future Spirit guitarist Randy California. The Blue Flames played at several clubs in New York and Hendrix began developing his guitar style and material that he would soon use with the Experience. In September, they gave some of their last concerts at the Cafe au Go Go, as John Hammond Jr.'s backing group. By May 1966, Hendrix was struggling to earn a living wage playing the R&B circuit, so he briefly rejoined Curtis Knight and the Squires for an engagement at one of New York City's most popular nightspots, the Cheetah Club. During a performance, Linda Keith, the girlfriend of Rolling Stones guitarist Keith Richards, noticed Hendrix. She remembered: "[His] playing mesmerised me". She invited him to join her for a drink; he accepted and the two became friends. While Hendrix was playing with Jimmy James and the Blue Flames, Keith recommended him to Stones manager Andrew Loog Oldham and producer Seymour Stein. They failed to see Hendrix's musical potential, and rejected him. She then referred him to Chas Chandler, who was leaving the Animals and interested in managing and producing artists. Chandler saw the then-unknown Jimi Hendrix play in Cafe Wha?, a Greenwich Village, New York City nightclub. Chandler liked the Billy Roberts song "Hey Joe", and was convinced he could create a hit single with the right artist. Impressed with Hendrix's version of the song, he brought him to London on September 24, 1966, and signed him to a management and production contract with himself and ex-Animals manager Michael Jeffery. On September 24, Hendrix gave an impromptu solo performance at The Scotch of St James, and later that night he began a relationship with Kathy Etchingham that lasted for two and a half years. Following Hendrix's arrival in London, Chandler began recruiting members for a band designed to highlight the guitarist's talents, the Jimi Hendrix Experience. Hendrix met guitarist Noel Redding at an audition for the New Animals, where Redding's knowledge of blues progressions impressed Hendrix, who stated that he also liked Redding's hairstyle. Chandler asked Redding if he wanted to play bass guitar in Hendrix's band; Redding agreed. Chandler then began looking for a drummer and soon after, he contacted Mitch Mitchell through a mutual friend. Mitchell, who had recently been fired from Georgie Fame and the Blue Flames, participated in a rehearsal with Redding and Hendrix where they found common ground in their shared interest in rhythm and blues. When Chandler phoned Mitchell later that day to offer him the position, he readily accepted. Chandler also convinced Hendrix to change the spelling of his first name from "Jimmy" to the exotic looking "Jimi". On October 1, 1966, Chandler brought Hendrix to the London Polytechnic at Regent Street, where Cream was scheduled to perform, and where Hendrix and Eric Clapton met. Clapton later commented: "He asked if he could play a couple of numbers. I said, 'Of course', but I had a funny feeling about him." Halfway through Cream's set, Hendrix took the stage and performed a frantic version of the Howlin' Wolf song "Killing Floor". In 1989, Clapton described the performance: "He played just about every style you could think of, and not in a flashy way. I mean he did a few of his tricks, like playing with his teeth and behind his back, but it wasn't in an upstaging sense at all, and that was it ... He walked off, and my life was never the same again". 
In mid-October 1966, Chandler arranged an engagement for the Experience as Johnny Hallyday's supporting act during a brief tour of France. Thus, the Jimi Hendrix Experience performed their very first show on October 13, 1966, at the Novelty in Evreux. Their enthusiastically received 15-minute performance at the Olympia theatre in Paris on October 18 marks the earliest known recording of the band. In late October, Kit Lambert and Chris Stamp, managers of the Who, signed the Experience to their newly formed label, Track Records, and the group recorded their first song, "Hey Joe", on October 23. "Stone Free", which was Hendrix's first songwriting effort after arriving in England, was recorded on November 2. In mid-November, they performed at the Bag O'Nails nightclub in London, with Clapton, John Lennon, Paul McCartney, Jeff Beck, Pete Townshend, Brian Jones, Mick Jagger, and Kevin Ayers in attendance. Ayers described the crowd's reaction as stunned disbelief: "All the stars were there, and I heard serious comments, you know 'shit', 'Jesus', 'damn' and other words worse than that." The successful performance earned Hendrix his first interview, published in "Record Mirror" with the headline: "Mr. Phenomenon". "Now hear this ... we predict that [Hendrix] is going to whirl around the business like a tornado", wrote Bill Harry, who asked the rhetorical question: "Is that full, big, swinging sound really being created by only three people?" Hendrix commented: "We don't want to be classed in any category ... If it must have a tag, I'd like it to be called, 'Free Feeling'. It's a mixture of rock, freak-out, rave and blues". Through a distribution deal with Polydor Records, the Experience's first single, "Hey Joe", backed with "Stone Free", was released on December 16, 1966. After appearances on the UK television shows "Ready Steady Go!" and "Top of the Pops", "Hey Joe" entered the UK charts on December 29 and peaked at number six. Further success came in March 1967 with the UK number three hit "Purple Haze", and in May with "The Wind Cries Mary", which remained on the UK charts for eleven weeks, peaking at number six. On March 12, 1967, he performed at the Troutbeck Hotel, Ilkley, West Yorkshire, where, after about 900 people turned up (the hotel was licensed for 250), the local police stopped the gig due to safety concerns. On March 31, 1967, while the Experience waited to perform at the London Astoria, Hendrix and Chandler discussed ways in which they could increase the band's media exposure. When Chandler asked journalist Keith Altham for advice, Altham suggested that they needed to do something more dramatic than the stage show of the Who, which involved the smashing of instruments. Hendrix joked: "Maybe I can smash up an elephant", to which Altham replied: "Well, it's a pity you can't set fire to your guitar". Chandler then asked road manager Gerry Stickells to procure some lighter fluid. During the show, Hendrix gave an especially dynamic performance before setting his guitar on fire at the end of a 45-minute set. In the wake of the stunt, members of London's press labeled Hendrix the "Black Elvis" and the "Wild Man of Borneo". After the UK chart success of their first two singles, "Hey Joe" and "Purple Haze", the Experience began assembling material for a full-length LP. Recording began at De Lane Lea Studios and later moved to the prestigious Olympic Studios. 
The album, "Are You Experienced", features a diversity of musical styles, including blues tracks such as "Red House" and "Highway Chile", and the R&B song "Remember". It also included the experimental science fiction piece "Third Stone from the Sun" and the post-modern soundscapes of the title track, with prominent backwards guitar and drums. "I Don't Live Today" served as a medium for Hendrix's guitar feedback improvisation and "Fire" was driven by Mitchell's drumming. Released in the UK on May 12, 1967, "Are You Experienced" spent 33 weeks on the charts, peaking at number two. It was prevented from reaching the top spot by the Beatles' "Sgt. Pepper's Lonely Hearts Club Band". On June 4, 1967, Hendrix opened a show at the Saville Theatre in London with his rendition of the "Sgt. Pepper" title track, which had been released just three days earlier. Beatles manager Brian Epstein owned the Saville at the time, and both George Harrison and Paul McCartney attended the performance. McCartney described the moment: "The curtains flew back and he came walking forward playing 'Sgt. Pepper'. It's a pretty major compliment in anyone's book. I put that down as one of the great honors of my career." Released in the U.S. on August 23 by Reprise Records, "Are You Experienced" reached number five on the "Billboard" 200. In 1989, Noe Goldwasser, the founding editor of "Guitar World" magazine, described "Are You Experienced" as "the album that shook the world ... leaving it forever changed". In 2005, "Rolling Stone" called the double-platinum LP Hendrix's "epochal debut", and ranked it the 15th greatest album of all time, noting his "exploitation of amp howl", and characterizing his guitar playing as "incendiary ... historic in itself". Although popular in Europe at the time, the Experience's first U.S. single, "Hey Joe", failed to reach the "Billboard" Hot 100 chart upon its release on May 1, 1967. The group's fortunes improved when McCartney recommended them to the organizers of the Monterey Pop Festival. He insisted that the event would be incomplete without Hendrix, whom he called "an absolute ace on the guitar", and he agreed to join the board of organizers on the condition that the Experience perform at the festival in mid-June. Introduced by Brian Jones as "the most exciting performer [he had] ever heard", Hendrix opened with a fast arrangement of Howlin' Wolf's song "Killing Floor", wearing what author Keith Shadwick described as "clothes as exotic as any on display elsewhere." Shadwick wrote: "[Hendrix] was not only something utterly new musically, but an entirely original vision of what a black American entertainer should and could look like." The Experience went on to perform renditions of "Hey Joe", B.B. King's "Rock Me Baby", Chip Taylor's "Wild Thing", and Bob Dylan's "Like a Rolling Stone", as well as four original compositions: "Foxy Lady", "Can You See Me", "The Wind Cries Mary", and "Purple Haze". The set ended with Hendrix destroying his guitar and tossing pieces of it out to the audience. "Rolling Stone"'s Alex Vadukul wrote that photographer Ed Caraeff stood on a chair next to the edge of the stage while taking a series of four monochrome pictures of Hendrix burning his guitar. Caraeff was close enough to the fire that he had to use his camera as a shield to protect his face from the heat. "Rolling Stone" later colorized the image, matching it with other pictures taken at the festival before using the shot for a 1987 magazine cover. 
According to author Gail Buckland, the fourth and final frame of "Hendrix kneeling in front of his burning guitar, hands raised, is one of the most famous images in rock." Author and historian Matthew C. Whitaker wrote: "Hendrix's burning of his guitar became an iconic image in rock history and brought him national attention." The "Los Angeles Times" asserted that, upon leaving the stage, Hendrix "graduated from rumor to legend". Author John McDermott commented: "Hendrix left the Monterey audience stunned and in disbelief at what they'd just heard and seen." According to Hendrix: "I decided to destroy my guitar at the end of a song as a sacrifice. You sacrifice things you love. I love my guitar." The performance was filmed by D. A. Pennebaker, and later included in the concert documentary "Monterey Pop", which helped Hendrix gain popularity with the U.S. public. Immediately after the festival, the Experience were booked for a series of five concerts at Bill Graham's Fillmore, with Big Brother and the Holding Company and Jefferson Airplane. The Experience outperformed Jefferson Airplane during the first two nights, and replaced them at the top of the bill on the fifth. Following their successful West Coast introduction, which included a free open-air concert at Golden Gate Park and a concert at the Whisky a Go Go, the Experience were booked as the opening act for the first American tour of the Monkees. They requested Hendrix as a supporting act because they were fans, but their young audience disliked the Experience, who left the tour after six shows. Chandler later admitted that he engineered the tour in an effort to gain publicity for Hendrix. The second Experience album, "Axis: Bold as Love", opens with the track "EXP", which utilized microphonic and harmonic feedback in a new, creative fashion. It also showcased an experimental stereo panning effect in which sounds emanating from Hendrix's guitar move through the stereo image, revolving around the listener. The piece reflected his growing interest in science fiction and outer space. He composed the album's title track and finale around two verses and two choruses, during which he pairs emotions with personas, comparing them to colors. The song's coda features the first recording of stereo phasing. Shadwick described the composition as "possibly the most ambitious piece on "Axis", the extravagant metaphors of the lyrics suggesting a growing confidence" in Hendrix's songwriting. His guitar playing throughout the song is marked by chordal arpeggios and contrapuntal motion, with tremolo-picked partial chords providing the musical foundation for the chorus, which culminates in what musicologist Andy Aledort described as "simply one of the greatest electric guitar solos ever played". The track fades out on tremolo-picked thirty-second note double stops. The scheduled release of "Axis" was almost delayed when Hendrix lost the master tape of side one of the LP, leaving it in the back seat of a London taxi. With the deadline looming, Hendrix, Chandler, and engineer Eddie Kramer remixed most of side one in a single overnight session, but they could not match the quality of the lost mix of "If 6 Was 9". Bassist Noel Redding had a tape recording of this mix, which had to be smoothed out with an iron as it had gotten wrinkled. During the verses, Hendrix doubled his singing with a guitar line which he played one octave lower than his vocals. 
Hendrix voiced his disappointment about having re-mixed the album so quickly, and he felt that it could have been better had they been given more time. "Axis" featured psychedelic cover art that depicts Hendrix and the Experience as various avatars of Vishnu, incorporating a painting of them by Roger Law, from a photo-portrait by Karl Ferris. The painting was then superimposed on a copy of a mass-produced religious poster. Hendrix stated that the cover, which Track spent $5,000 producing, would have been more appropriate had it highlighted his American Indian heritage. He commented: "You got it wrong ... I'm not that kind of Indian." Track released the album in the UK on December 1, 1967, where it peaked at number five, spending 16 weeks on the charts. In February 1968, "Axis: Bold as Love" reached number three in the U.S. While author and journalist Richie Unterberger described "Axis" as the least impressive Experience album, according to author Peter Doggett, the release "heralded a new subtlety in Hendrix's work". Mitchell commented: ""Axis" was the first time that it became apparent that Jimi was pretty good working behind the mixing board, as well as playing, and had some positive ideas of how he wanted things recorded. It could have been the start of any potential conflict between him and Chas in the studio." Recording for the Experience's third and final studio album, "Electric Ladyland", began at the newly opened Record Plant Studios, with Chandler as producer and engineers Eddie Kramer and Gary Kellgren. As the sessions progressed, Chandler became increasingly frustrated with Hendrix's perfectionism and his demands for repeated takes. Hendrix also allowed numerous friends and guests to join them in the studio, which contributed to a chaotic and crowded environment in the control room and led Chandler to sever his professional relationship with Hendrix. Redding later recalled: "There were tons of people in the studio; you couldn't move. It was a party, not a session." Redding, who had formed his own band in mid-1968, Fat Mattress, found it increasingly difficult to fulfill his commitments with the Experience, so Hendrix played many of the bass parts on "Electric Ladyland". The album's cover stated that it was "produced and directed by Jimi Hendrix". During the "Electric Ladyland" recording sessions, Hendrix began experimenting with other combinations of musicians, including Jefferson Airplane's Jack Casady and Traffic's Steve Winwood, who played bass and organ, respectively, on the fifteen-minute slow-blues jam, "Voodoo Chile". During the album's production, Hendrix appeared at an impromptu jam with B.B. King, Al Kooper, and Elvin Bishop. "Electric Ladyland" was released on October 25, and by mid-November it had reached number one in the U.S., spending two weeks at the top spot. The double LP was Hendrix's most commercially successful release and his only number one album. It peaked at number six in the UK, spending 12 weeks on the chart. "Electric Ladyland" included Hendrix's cover of Bob Dylan's song, "All Along the Watchtower", which became Hendrix's highest-selling single and his only U.S. top 40 hit, peaking at number 20; the single reached number five in the UK. "Burning of the Midnight Lamp", which was his first recorded song to feature the use of a wah-wah pedal, was added to the album. It was originally released as his fourth single in the UK in August 1967 and reached number 18 in the charts. 
In 1989, Noe Goldwasser, the founding editor of "Guitar World" magazine, described "Electric Ladyland" as "Hendrix's masterpiece". According to author Michael Heatley, "most critics agree" that the album is "the fullest realization of Jimi's far-reaching ambitions." In 2004, author Peter Doggett commented: "For pure experimental genius, melodic flair, conceptual vision and instrumental brilliance, "Electric Ladyland" remains a prime contender for the status of rock's greatest album." Doggett described the LP as "a display of musical virtuosity never surpassed by any rock musician." In January 1969, after an absence of more than six months, Hendrix briefly moved back into his girlfriend Kathy Etchingham's Brook Street apartment, which was next door to the Handel House Museum in the West End of London. During this time, the Experience toured Scandinavia and Germany, and gave their final two performances in France. On February 18 and 24, they played sold-out concerts at London's Royal Albert Hall, which were the last European appearances of this lineup. By February 1969, Redding had grown weary of Hendrix's unpredictable work ethic and his creative control over the Experience's music. During the previous month's European tour, interpersonal relations within the group had deteriorated, particularly between Hendrix and Redding. In his diary, Redding documented the building frustration during early 1969 recording sessions: "On the first day, as I nearly expected, there was nothing doing ... On the second it was no show at all. I went to the pub for three hours, came back, and it was still ages before Jimi ambled in. Then we argued ... On the last day, I just watched it happen for a while, and then went back to my flat." The last Experience sessions that included Redding—a re-recording of "Stone Free" for use as a possible single release—took place on April 14 at Olmstead and the Record Plant in New York. Hendrix then flew bassist Billy Cox to New York; they started recording and rehearsing together on April 21. The last performance of the original Experience lineup took place on June 29, 1969, at Barry Fey's Denver Pop Festival, a three-day event held at Denver's Mile High Stadium that was marked by police using tear gas to control the audience. The band narrowly escaped from the venue in the back of a rental truck, which was partly crushed by fans who had climbed on top of the vehicle. Before the show, a journalist angered Redding by asking why he was there; the reporter then informed him that two weeks earlier Hendrix had announced that Redding had been replaced by Billy Cox. The next day, Redding quit the Experience and returned to London. He announced that he had left the band and intended to pursue a solo career, blaming Hendrix's plans to expand the group without allowing for his input as a primary reason for leaving. Redding later commented: "Mitch and I hung out a lot together, but we're English. If we'd go out, Jimi would stay in his room. But any bad feelings came from us being three guys who were traveling too hard, getting too tired, and taking too many drugs ... I liked Hendrix. I don't like Mitchell." Soon after Redding's departure, Hendrix began lodging at the eight-bedroom Ashokan House, in the hamlet of Boiceville near Woodstock in upstate New York, where he had spent some time vacationing in mid-1969. Manager Michael Jeffery arranged the accommodations in the hope that the respite might encourage Hendrix to write material for a new album. 
During this time, Mitchell was unavailable for commitments made by Jeffery, which included Hendrix's first appearance on U.S. TV—on "The Dick Cavett Show"—where he was backed by the studio orchestra, and an appearance on "The Tonight Show" where he appeared with Cox and session drummer Ed Shaughnessy. By 1969, Hendrix was the world's highest-paid rock musician. In August, he headlined the Woodstock Music and Art Fair that included many of the most popular bands of the time. For the concert, he added rhythm guitarist Larry Lee and conga players Juma Sultan and Jerry Velez. The band rehearsed for less than two weeks before the performance, and according to Mitchell, they never connected musically. Before arriving at the engagement, Hendrix heard reports that the size of the audience had grown to epic proportions, which gave him cause for concern as he did not enjoy performing for large crowds. He was an important draw for the event, and although he accepted substantially less money for the appearance than his usual fee, he was the festival's highest-paid performer. As his scheduled time slot of midnight on Sunday drew closer, he indicated that he preferred to wait and close the show in the morning; the band took the stage around 8:00 a.m. on Monday. By the time of their set, Hendrix had been awake for more than three days. The audience, which peaked at an estimated 400,000 people, was now reduced to 30–40,000, many of whom had waited to catch a glimpse of Hendrix before leaving during his performance. The festival MC, Chip Monck, introduced the group as "the Jimi Hendrix Experience", but Hendrix clarified: "We decided to change the whole thing around and call it "Gypsy Sun and Rainbows". For short, it's nothin' but a "Band of Gypsys"". Hendrix's performance featured a rendition of the U.S. national anthem, "The Star-Spangled Banner", during which he used copious amounts of amplifier feedback, distortion, and sustain to replicate the sounds made by rockets and bombs. Although contemporary political pundits described his interpretation as a statement against the Vietnam War, three weeks later Hendrix explained its meaning: "We're all Americans ... it was like 'Go America!'... We play it the way the air is in America today. The air is slightly static, see". Immortalized in the 1970 documentary film, "Woodstock", his guitar-driven version would become part of the sixties Zeitgeist. Pop critic Al Aronowitz of the "New York Post" wrote: "It was the most electrifying moment of Woodstock, and it was probably the single greatest moment of the sixties." Images of the performance showing Hendrix wearing a blue-beaded white leather jacket with fringe, a red head-scarf, and blue jeans are widely regarded as iconic pictures that capture a defining moment of the era. He played "Hey Joe" during the encore, concluding the 3½-day festival. Upon leaving the stage, he collapsed from exhaustion. In 2011, the editors of "Guitar World" placed his rendition of "The Star-Spangled Banner" at Woodstock at number one in their list of his 100 greatest performances. A legal dispute arose in 1966 regarding a record contract that Hendrix had entered into the previous year with producer Ed Chalpin. After two years of litigation, the parties agreed to a resolution that granted Chalpin the distribution rights to an album of original Hendrix material. Hendrix decided that they would record the LP, "Band of Gypsys", during two live appearances. 
In preparation for the shows he formed an all-black power-trio with Cox and drummer Buddy Miles, formerly with Wilson Pickett, the Electric Flag, and the Buddy Miles Express. Critic John Rockwell described Hendrix and Miles as jazz-rock fusionists, and their collaboration as pioneering. Others identified a funk and soul influence in their music. Concert promoter Bill Graham called the shows "the most brilliant, emotional display of virtuoso electric guitar" that he had ever heard. Biographers have speculated that Hendrix formed the band in an effort to appease members of the Black Power movement and others in the black communities who called for him to use his fame to speak up for civil rights. Hendrix had been recording with Cox since April and jamming with Miles since September, and the trio wrote and rehearsed material which they performed at a series of four shows over two nights on December 31 and January 1, at the Fillmore East. They used recordings of these concerts to assemble the LP, which was produced by Hendrix. The album includes the track "Machine Gun", which musicologist Andy Aledort described as the pinnacle of Hendrix's career, and "the premiere example of [his] unparalleled genius as a rock guitarist ... In this performance, Jimi transcended the medium of rock music, and set an entirely new standard for the potential of electric guitar." During the song's extended instrumental breaks, Hendrix created sounds with his guitar that sonically represented warfare, including rockets, bombs, and diving planes. The "Band of Gypsys" album was the only official live Hendrix LP made commercially available during his lifetime; several tracks from the Woodstock and Monterey shows were released later that year. The album was released in April 1970 by Capitol Records; it reached the top ten in both the U.S. and the UK. That same month, a single was issued with "Stepping Stone" as the A-side and "Izabella" as the B-side, but Hendrix was dissatisfied with the quality of the mastering and demanded that it be withdrawn and re-mixed; this prevented the songs from charting and resulted in Hendrix's least successful single, which was also his last. On January 28, 1970, a third and final Band of Gypsys appearance took place; they performed at the "Winter Festival for Peace", a music festival at Madison Square Garden benefiting the anti-Vietnam War Moratorium Committee. American blues guitarist Johnny Winter was backstage before the concert; he recalled: "[Hendrix] came in with his head down, sat on the couch alone, and put his head in his hands ... He didn't move until it was time for the show." Minutes after taking the stage he snapped a vulgar response at a woman who had shouted a request for "Foxy Lady". He then began playing "Earth Blues" before telling the audience: "That's what happens when earth fucks with space". Moments later, he briefly sat down on the drum riser before leaving the stage. Both Miles and Redding later stated that Jeffery had given Hendrix LSD before the performance. Miles believed that Jeffery gave Hendrix the drugs in an effort to sabotage the current band and bring about the return of the original Experience lineup. Jeffery fired Miles after the show and Cox quit, ending the Band of Gypsys. Soon after the abruptly ended Band of Gypsys performance and their subsequent dissolution, Jeffery made arrangements to reunite the original Experience lineup. 
Although Hendrix, Mitchell, and Redding were interviewed by "Rolling Stone" in February 1970 as a united group, Hendrix never intended to work with Redding. When Redding returned to New York in anticipation of rehearsals with a re-formed Experience, he was told that he had been replaced with Cox. During an interview with "Rolling Stone"'s Keith Altham, Hendrix defended the decision: "It's nothing personal against Noel, but we finished what we were doing with the Experience and Billy's style of playing suits the new group better." The lineup of Hendrix, Mitchell, and Cox became known as the Cry of Love band, after their accompanying tour, although billing, advertisements, and tickets were printed with the New Jimi Hendrix Experience or occasionally just Jimi Hendrix. During the first half of 1970, Hendrix sporadically worked on material for what would have been his next LP. Many of the tracks were posthumously released in 1971 as "The Cry of Love". He had started writing songs for the album in 1968, but in April 1970 he told Keith Altham that the project had been abandoned. Soon afterward, he and his band took a break from recording and began the Cry of Love tour at the L.A. Forum, performing for 20,000 people. Set-lists during the tour included numerous Experience tracks as well as a selection of newer material. Several shows were recorded, and they produced some of Hendrix's most memorable live performances. At one of them, the second Atlanta International Pop Festival, on July 4, he played to the largest American audience of his career. According to authors Scott Schinder and Andy Schwartz, as many as 500,000 people attended the concert. On July 17, they appeared at the New York Pop Festival; Hendrix had again consumed too many drugs before the show, and the set was considered a disaster. The American leg of the tour, which included 32 performances, ended at Honolulu, Hawaii, on August 1, 1970. This would be Hendrix's final concert appearance in the U.S. In 1968, Hendrix and Jeffery jointly invested in the purchase of the Generation Club in Greenwich Village. They had initially planned to reopen the establishment, but after an audit revealed that Hendrix had incurred exorbitant fees by block-booking lengthy sessions at peak rates, they decided that the building would better serve them as a recording studio. With a facility of his own, Hendrix could work as much as he wanted while also reducing his recording expenditures, which had reached a reported $300,000 annually. Architect and acoustician John Storyk designed Electric Lady Studios for Hendrix, who requested that they avoid right angles where possible. The studio featured round windows, an ambient lighting machine, and a psychedelic mural; Storyk wanted it to have a relaxing environment that would encourage Hendrix's creativity. The project took twice as long as planned and cost twice as much as Hendrix and Jeffery had budgeted, with their total investment estimated at $1 million. Hendrix first used Electric Lady on June 15, 1970, when he jammed with Steve Winwood and Chris Wood of Traffic; the next day, he recorded his first track there, "Night Bird Flying". The studio officially opened for business on August 25, and a grand opening party was held the following day. Immediately afterwards, Hendrix left for England; he never returned to the States. He boarded an Air India flight for London with Cox, joining Mitchell for a performance as the headlining act of the Isle of Wight Festival. 
When the European leg of the Cry of Love tour began, Hendrix was longing for his new studio and creative outlet, and was not eager to fulfill the commitment. On September 2, 1970, he abandoned a performance in Aarhus after three songs, stating: "I've been dead a long time". Four days later, he gave his final concert appearance, at the Isle of Fehmarn Festival in Germany. He was met with booing and jeering from fans in response to his cancellation of a show slated for the end of the previous night's bill due to torrential rain and risk of electrocution. Immediately following the festival, Hendrix, Mitchell, and Cox travelled to London. Three days after the performance, Cox, who was suffering from severe paranoia after either taking LSD or being given it unknowingly, quit the tour and went to stay with his parents in Pennsylvania. Within days of Hendrix's arrival in England, he had spoken with Chas Chandler, Alan Douglas, and others about leaving his manager, Michael Jeffery. On September 16, Hendrix performed in public for the last time during an informal jam at Ronnie Scott's Jazz Club in Soho with Eric Burdon and his latest band, War. They began by playing a few of their recent hits, and after a brief intermission Hendrix joined them during "Mother Earth" and "Tobacco Road". His performance was uncharacteristically subdued; he quietly played backing guitar, and refrained from the histrionics that people had come to expect from him. He died less than 48 hours later. In July 1962, after Hendrix was discharged from the U.S. Army, he entered a small club in Clarksville, Tennessee. Drawn in by live music, he stopped for a drink and ended up spending most of the $400 he had saved. He explained: "I went in this jazz joint and had a drink. I liked it and I stayed. People tell me I get foolish, good-natured sometimes. Anyway, I guess I felt real benevolent that day. I must have been handing out bills to anyone that asked me. I came out of that place with sixteen dollars left." According to the authors Steven Roby and Brad Schreiber: "Alcohol would later be the scourge of his existence, driving him to fits of pique, even rare bursts of atypical, physical violence." While Roby and Schreiber assert that Hendrix first used LSD when he met Linda Keith in late 1966, according to the authors Harry Shapiro and Caesar Glebbeek, the earliest that Hendrix is known to have taken it was in June 1967, while attending the Monterey Pop Festival. According to Hendrix biographer Charles Cross, the subject of drugs came up one evening in 1966 at Keith's New York apartment; when one of Keith's friends offered Hendrix "acid", a street name for lysergic acid diethylamide, Hendrix asked for LSD instead, showing what Cross described as "his naivete and his complete inexperience with psychedelics". Before that, Hendrix had only sporadically used drugs, with his experimentation limited to cannabis, hashish, amphetamines and occasionally cocaine. After 1967, he regularly smoked cannabis and hashish, and used LSD and amphetamines, particularly while touring. According to Cross, by the time of his death in September 1970, "few stars were as closely associated with the drug culture as Jimi". Hendrix would often become angry and violent when he drank too much alcohol or when he mixed alcohol with drugs. His friend Herbie Worthington explained: "You wouldn't expect somebody with that kind of love to be that violent ... He just couldn't drink ... he simply turned into a bastard". 
According to journalist and friend Sharon Lawrence, Hendrix "admitted he could not handle hard liquor, which set off a bottled-up anger, a destructive fury he almost never displayed otherwise". In January 1968, the Experience travelled to Sweden for a one-week tour of Europe. During the early morning hours of the first day, Hendrix was involved in a drunken brawl in the Hotel Opalen, in Gothenburg, smashing a plate-glass window and injuring his right hand, for which he received medical treatment. The incident culminated in his arrest and release, pending a court appearance that resulted in a large fine. After the 1969 burglary of a house Hendrix was renting in Benedict Canyon, California, and while he was under the influence of drugs and alcohol, he punched his friend Paul Caruso and accused him of the theft. He then chased Caruso away from the residence while throwing stones at him. A few days later, one of Hendrix's girlfriends, Carmen Borrero, required stitches after he hit her above her eye with a vodka bottle during a drunken, jealous rage. On May 3, 1969, while Hendrix was passing through customs at Toronto International Airport, authorities detained him after finding a small amount of what they suspected to be heroin and hashish in his luggage. Four hours later, he was formally charged with drug possession and released on $10,000 bail. He was required to return on May 5 for an arraignment hearing. The incident proved stressful for Hendrix, and it weighed heavily on his mind during the seven months that he awaited trial, which took place in December of that year. For the Crown to prove possession, they had to show that Hendrix knew the drugs were there. During the jury trial he testified that a fan had given him a vial of what he thought was legal medication, which he put in his bag not knowing what was in it. He was acquitted of the charges. Mitchell and Redding later revealed that everyone had been warned about a planned drug bust the day before flying to Toronto; both men also stated they believed that the drugs had been planted in Hendrix's bag without his knowledge. Although the details of Hendrix's last day and death are widely disputed, he spent much of September 17, 1970, in London with Monika Dannemann, the only witness to his final hours. Dannemann said that she prepared a meal for them at her apartment in the Samarkand Hotel, 22 Lansdowne Crescent, Notting Hill, sometime around 11 p.m., when they shared a bottle of wine. She drove Hendrix to the residence of an acquaintance at approximately 1:45 a.m., where he remained for about an hour before she picked him up and drove them back to her flat at 3 a.m. Dannemann said they talked until around 7 a.m., when they went to sleep. She awoke around 11 a.m., and found Hendrix breathing, but unconscious and unresponsive. She called for an ambulance at 11:18 a.m.; it arrived on the scene at 11:27 a.m. Paramedics then transported Hendrix to St Mary Abbot's Hospital where Dr. John Bannister pronounced him dead at 12:45 p.m. on September 18, 1970. To determine the cause of death, coroner Gavin Thurston ordered a post-mortem examination on Hendrix's body, which was performed on September 21 by Professor Robert Donald Teare, a forensic pathologist. Thurston completed the inquest on September 28, and concluded that Hendrix aspirated his own vomit and died of asphyxia while intoxicated with barbiturates. Citing "insufficient evidence of the circumstances", he declared an open verdict. 
Dannemann later revealed that Hendrix had taken nine of her prescribed Vesparax sleeping tablets, 18 times the recommended dosage. After Hendrix's body had been embalmed by Desmond Henley, it was flown to Seattle, Washington, on September 29, 1970. After a service at Dunlap Baptist Church in Seattle's Rainier Valley on October 1, his body was interred at Greenwood Cemetery in Renton, Washington, the location of his mother's grave. Hendrix's family and friends traveled in twenty-four limousines, and more than two hundred people attended the funeral, including several notable musicians such as original Experience members Mitch Mitchell and Noel Redding, as well as Miles Davis, John Hammond, and Johnny Winter. By 1967, as Hendrix was gaining in popularity, many of his pre-Experience recordings were marketed to an unsuspecting public as Jimi Hendrix albums, sometimes with misleading later images of Hendrix. The recordings, which came under the control of producer Ed Chalpin of PPX, with whom Hendrix had signed a recording contract in 1965, were often re-mixed between their repeated reissues, and licensed to record companies such as Decca and Capitol. Hendrix publicly denounced the releases, describing them as "malicious" and "greatly inferior", stating: "At PPX, we spent on average about one hour recording a song. Today I spend at least twelve hours on each song." These unauthorized releases have long constituted a substantial part of his recording catalogue, amounting to hundreds of albums. Some of Hendrix's unfinished material was released as the 1971 title "The Cry of Love". Although the album reached number three in the U.S. and number two in the UK, producers Mitchell and Kramer later complained that they were unable to make use of all the available songs because some tracks were used for 1971's "Rainbow Bridge"; still others were issued on 1972's "War Heroes". Material from "The Cry of Love" was re-released in 1997 as "First Rays of the New Rising Sun", along with the other tracks that Mitchell and Kramer had wanted to include. In 1993, MCA Records delayed a multimillion-dollar sale of Hendrix's publishing copyrights because Al Hendrix was unhappy about the arrangement. He acknowledged that he had sold distribution rights to a foreign corporation in 1974, but stated that the deal did not include copyrights and argued that he had retained veto power over the sale of the catalogue. Under a settlement reached in July 1995, Al Hendrix prevailed in his legal battle and regained control of his son's song and image rights. He subsequently licensed the recordings to MCA through the family-run company Experience Hendrix LLC, formed in 1995. In August 2009, Experience Hendrix announced that it had entered a new licensing agreement with Sony Music Entertainment's Legacy Recordings division, which would take effect in 2010. Legacy and Experience Hendrix launched the Jimi Hendrix Catalog Project in 2010, starting with the release of "Valleys of Neptune" in March of that year. In the months before his death, Hendrix recorded demos for a concept album tentatively titled "Black Gold", which are now in the possession of Experience Hendrix LLC; as of 2013, no official release date has been announced. Hendrix played a variety of guitars throughout his career, but the instrument that became most associated with him was the Fender Stratocaster. He acquired his first Stratocaster in 1966, when a girlfriend loaned him enough money to purchase a used one that had been built around 1964. 
Thereafter, he used the model predominantly during performances and recordings. In 1967, he described the instrument as "the best all-around guitar for the stuff we're doing"; he praised its "bright treble and deep bass sounds". With few exceptions, Hendrix played right-handed guitars that were turned upside down and restrung for left-hand playing. This had an important effect on the sound of his guitar; because of the slant of the bridge pickup, his lowest string had a brighter sound while his highest string had a darker sound, which was the opposite of the Stratocaster's intended design. In addition to Stratocasters, Hendrix used Fender Jazzmasters, Duosonics, two different Gibson Flying Vs, a Gibson Les Paul, three Gibson SGs, a Gretsch Corvette, and a Fender Jaguar. He used a white Gibson SG Custom for his performances on "The Dick Cavett Show" in September 1969, and a black Gibson Flying V during the Isle of Wight festival in 1970. During 1965 and 1966, while Hendrix was playing back-up for soul and R&B acts in the U.S., he used an 85-watt Fender Twin Reverb amplifier. When Chandler brought Hendrix to England in October 1966, he supplied him with 30-watt Burns amps, which Hendrix thought were too small for his needs. After an early London gig when he was unable to use his preferred Fender Twin, he asked about the Marshall amps that he had noticed other groups using. Years earlier, Mitch Mitchell had taken drum lessons from the amp builder Jim Marshall, and Mitchell introduced Hendrix to Marshall. At their initial meeting, Hendrix bought four speaker cabinets and three 100-watt Super Lead amplifiers; he would grow accustomed to using all three in unison. The equipment arrived on October 11, 1966, and the Experience used the new gear during their first tour. Marshall amps were well-suited for Hendrix's needs, and they were paramount in the evolution of his heavily overdriven sound, enabling him to master the use of feedback as a musical effect, creating what author Paul Trynka described as a "definitive vocabulary for rock guitar". Hendrix usually turned all of the amplifier's control knobs to the maximum level, which became known as the Hendrix setting. During the four years prior to his death, he purchased between 50 and 100 Marshall amplifiers. Jim Marshall said that he was "the greatest ambassador" his company ever had. One of Hendrix's signature effects was the wah-wah pedal, which he first heard used with an electric guitar in Cream's "Tales of Brave Ulysses", released in May 1967. In July of that year, while playing gigs at the Scene club in New York City, Hendrix met Frank Zappa, whose band, the Mothers of Invention, was performing at the adjacent Garrick Theater. Hendrix was fascinated by Zappa's application of the pedal, and he experimented with one later that evening. He used a wah pedal during the opening to "Voodoo Child (Slight Return)", creating one of the best-known wah-wah riffs of the classic rock era. He can also be heard using the effect on "Up from the Skies", "Little Miss Lover", and "Still Raining, Still Dreaming". Hendrix consistently used a Dallas Arbiter Fuzz Face and a Vox wah pedal during recording sessions and live performances, but he also experimented with other guitar effects. He enjoyed a fruitful long-term collaboration with electronics enthusiast Roger Mayer, whom he once called "the secret" of his sound. 
Mayer introduced him to the Octavia, an octave-doubling effect pedal, in December 1966, and he first recorded with the effect during the guitar solo to "Purple Haze". Hendrix also utilized the Uni-Vibe, which was designed to simulate the modulation effects of a rotating Leslie speaker by providing a rich phasing sound that could be manipulated with a speed control pedal. He can be heard using the effect during his performance at Woodstock and on the Band of Gypsys track "Machine Gun", which prominently features the Uni-Vibe along with an Octavia and a Fuzz Face. For live performances, his signal flow ran from his guitar into a wah-wah pedal, then to a Fuzz Face, then to a Uni-Vibe, and finally into a Marshall amplifier. As an adolescent during the 1950s, Hendrix became interested in rock and roll artists such as Elvis Presley, Little Richard, and Chuck Berry. In 1968, he told "Guitar Player" magazine that electric blues artists Muddy Waters, Elmore James, and B.B. King inspired him during the beginning of his career; he also cited Eddie Cochran as an early influence. Of Muddy Waters, the first electric guitarist of whom Hendrix became aware, he said: "I heard one of his records when I was a little boy and "it scared me to death" because I heard all of these "sounds"." In 1970, he told "Rolling Stone" that he was a fan of western swing artist Bob Wills and, while he lived in Nashville, of the television show the Grand Ole Opry. Cox stated that during their time serving in the U.S. military, he and Hendrix primarily listened to southern blues artists such as Jimmy Reed and Albert King. According to Cox, "King was a very, very powerful influence". Howlin' Wolf also inspired Hendrix, who performed Wolf's "Killing Floor" as the opening song of his U.S. debut at the Monterey Pop Festival. The influence of soul artist Curtis Mayfield can be heard in Hendrix's guitar playing, and the influence of Bob Dylan can be heard in Hendrix's songwriting; he was known to play Dylan's records repeatedly, particularly "Highway 61 Revisited" and "Blonde on Blonde". The Experience's Rock and Roll Hall of Fame biography states: "Jimi Hendrix was arguably the greatest instrumentalist in the history of rock music. Hendrix expanded the range and vocabulary of the electric guitar into areas no musician had ever ventured before. His boundless drive, technical ability and creative application of such effects as wah-wah and distortion forever transformed the sound of rock and roll." Musicologist Andy Aledort described Hendrix as "one of the most creative" and "influential musicians that has ever lived". Music journalist Chuck Philips wrote: "In a field almost exclusively populated by white musicians, Hendrix has served as a role model for a cadre of young black rockers. His achievement was to reclaim title to a musical form pioneered by black innovators like Little Richard and Chuck Berry in the 1950s." Hendrix favored overdriven amplifiers with high volume and gain. He was instrumental in developing the previously undesirable technique of guitar amplifier feedback, and helped to popularize use of the wah-wah pedal in mainstream rock. He rejected the standard barre chord fretting technique used by most guitarists in favor of fretting the low 6th string root notes with his thumb. He applied this technique during the beginning bars of "Little Wing", which allowed him to sustain the root note of chords while also playing melody. 
This method has been described as piano style, with the thumb playing what a pianist's left hand would play and the other fingers playing melody as the right hand would. Having spent several years fronting a trio, he developed an ability to play rhythm chords and lead lines together, giving the audio impression that more than one guitarist was performing. He was the first artist to incorporate stereophonic phasing effects in rock music recordings. Holly George-Warren of "Rolling Stone" commented: "Hendrix pioneered the use of the instrument as an electronic sound source. Players before him had experimented with feedback and distortion, but Hendrix turned those effects and others into a controlled, fluid vocabulary every bit as personal as the blues with which he began." Aledort wrote: "In rock guitar, there are but two eras — before Hendrix and after Hendrix." While creating his unique musical voice and guitar style, Hendrix synthesized diverse genres, including blues, R&B, soul, British rock, American folk music, 1950s rock and roll, and jazz. Musicologist David Moskowitz emphasized the importance of blues music in Hendrix's playing style, and according to authors Steven Roby and Brad Schreiber, "[He] explored the outer reaches of psychedelic rock". His influence is evident in a variety of popular music formats, and he has contributed significantly to the development of hard rock, heavy metal, funk, post-punk, and hip hop music. His lasting influence on modern guitar players is difficult to overstate; his techniques and delivery have been abundantly imitated by others. Despite his hectic touring schedule and notorious perfectionism, he was a prolific recording artist who left behind numerous unreleased recordings. More than 40 years after his death, Hendrix remains as popular as ever, with annual album sales exceeding those of any year during his lifetime. Hendrix has influenced numerous funk and funk rock artists, including Prince, George Clinton, John Frusciante, formerly of the Red Hot Chili Peppers, Eddie Hazel of Funkadelic, and Ernie Isley of the Isley Brothers. Hendrix's influence also extends to many hip hop artists, including De La Soul, A Tribe Called Quest, Digital Underground, Beastie Boys, and Run–D.M.C. Miles Davis was deeply impressed by Hendrix, and he compared Hendrix's improvisational abilities with those of saxophonist John Coltrane. Hendrix also influenced industrial artist Marilyn Manson, blues legend Stevie Ray Vaughan, Metallica's Kirk Hammett, instrumental rock guitarist Joe Satriani, Adrian Belew (a sideman for Frank Zappa, David Bowie, Talking Heads, King Crimson, and Nine Inch Nails), and heavy metal virtuoso Yngwie Malmsteen, who said: "[Hendrix] created modern electric playing, without question ... He was the first. He started it all. The rest is history." Hendrix received several prestigious rock music awards during his lifetime and posthumously. In 1967, readers of "Melody Maker" voted him the Pop Musician of the Year. In 1968, "Rolling Stone" declared him the Performer of the Year. Also in 1968, the City of Seattle gave him the Keys to the City. "Disc & Music Echo" newspaper honored him as the World Top Musician of 1969, and in 1970 "Guitar Player" magazine named him the Rock Guitarist of the Year. "Rolling Stone" ranked his three non-posthumous studio albums, "Are You Experienced" (1967), "Axis: Bold as Love" (1967), and "Electric Ladyland" (1968), among the "500 Greatest Albums of All Time". 
They ranked Hendrix number one on their list of the 100 greatest guitarists of all time, and number six on their list of the 100 greatest artists of all time. "Guitar World"'s readers voted six of Hendrix's solos among the top 100 Greatest Guitar Solos of All Time: "Purple Haze" (70), "The Star-Spangled Banner" (52; from "Live at Woodstock"), "Machine Gun" (32; from "Band of Gypsys"), "Little Wing" (18), "Voodoo Child (Slight Return)" (11), and "All Along the Watchtower" (5). "Rolling Stone" placed seven of his recordings in their list of the 500 Greatest Songs of All Time: "Purple Haze" (17), "All Along the Watchtower" (47), "Voodoo Child (Slight Return)" (102), "Foxy Lady" (153), "Hey Joe" (201), "Little Wing" (366), and "The Wind Cries Mary" (379). They also included three of Hendrix's songs in their list of the "100 Greatest Guitar Songs of All Time": "Purple Haze" (2), "Voodoo Child" (12), and "Machine Gun" (49). A star on the Hollywood Walk of Fame was dedicated to Hendrix on November 14, 1991, at 6627 Hollywood Boulevard. Largely through the efforts of Hendrix's boyhood friend Sammy Drain, who approached Seattle mayor Norm Rice with the idea, the mayor issued a proclamation declaring November 27, 1992, which would have been the guitarist's 50th birthday, Jimi Hendrix Day in Seattle. The Jimi Hendrix Experience was inducted into the Rock and Roll Hall of Fame in 1992, and the UK Music Hall of Fame in 2005. In 1999, readers of "Rolling Stone" and "Guitar World" ranked Hendrix among the most important musicians of the 20th century. In 2005, his debut album, "Are You Experienced", was one of 50 recordings added that year to the United States National Recording Registry in the Library of Congress, "[to] be preserved for all time ... [as] part of the nation's audio legacy". The blue plaque identifying his former residence at 23 Brook Street, London (next door to the former residence of George Frideric Handel), was the first one issued by English Heritage to commemorate a pop star. A memorial statue of Hendrix playing a Stratocaster stands near the corner of Broadway and Pine Streets in Seattle. In May 2006, the city renamed a park near its Central District Jimi Hendrix Park in his honor. In 2012, an official historic marker was erected on the site of the July 1970 Second Atlanta International Pop Festival near Byron, Georgia. The marker text reads, in part: "Over thirty musical acts performed, including rock icon Jimi Hendrix playing to the largest American audience of his career." Hendrix's music has received a number of Hall of Fame Grammy awards, starting with a Lifetime Achievement Award in 1992, followed by two Grammys in 1999 for his albums "Are You Experienced" and "Electric Ladyland"; "Axis: Bold as Love" received a Grammy in 2006. In 2000, he received a Hall of Fame Grammy award for his original composition, "Purple Haze", and in 2001, for his recording of Dylan's "All Along the Watchtower". Hendrix's rendition of "The Star-Spangled Banner" was honored with a Grammy in 2009. The United States Postal Service issued a commemorative postage stamp honoring Hendrix in 2014. On August 21, 2016, Jimi Hendrix was officially inducted into the R&B Hall of Fame in Dearborn, Michigan. 
Jefferson Davis Jefferson Finis Davis (June 3, 1808 – December 6, 1889) was an American politician who served as the only President of the Confederate States from 1861 to 1865. As a member of the Democratic Party, he represented Mississippi in the United States Senate and the House of Representatives prior to switching allegiance to the Confederacy. He was appointed as the United States Secretary of War, serving from 1853 to 1857, under President Franklin Pierce. Davis was born in Fairview, Kentucky, to a moderately prosperous farmer, the youngest of ten children. He grew up in Wilkinson County, Mississippi, and also lived in Louisiana. His eldest brother Joseph Emory Davis secured the younger Davis's appointment to the United States Military Academy. After graduating, Jefferson Davis served six years as a lieutenant in the United States Army. He fought in the Mexican–American War (1846–1848), as the colonel of a volunteer regiment. Before the American Civil War, he operated a large cotton plantation in Mississippi, which his brother Joseph gave him, and owned as many as 74 slaves. Although Davis argued against secession in 1858, he believed that states had an unquestionable right to leave the Union. Davis married Sarah Knox Taylor, who died of malaria after three months of marriage, and he also struggled with recurring bouts of the disease. He was unhealthy for much of his life. At the age of 36, Davis married again, to 18-year-old Varina Howell, a native of Natchez, Mississippi, who had been educated in Philadelphia and had some family ties in the North. They had six children. Only two survived him, and only one married and had children. Many historians attribute some of the Confederacy's weaknesses to the poor leadership of Davis. His preoccupation with detail, reluctance to delegate responsibility, lack of popular appeal, feuds with powerful state governors and generals, favoritism toward old friends, inability to get along with people who disagreed with him, neglect of civil matters in favor of military ones, and resistance to public opinion all worked against him. Historians agree he was a much less effective war leader than his Union counterpart, President Abraham Lincoln. After Davis was captured in 1865, he was accused of treason and imprisoned at Fort Monroe. He was never tried and was released after two years. While not disgraced, Davis had been displaced in ex-Confederate affection after the war by his leading general, Robert E. Lee. Davis wrote a memoir entitled "The Rise and Fall of the Confederate Government", which he completed in 1881. By the late 1880s, he began to encourage reconciliation, telling Southerners to be loyal to the Union. Ex-Confederates came to appreciate his role in the war, seeing him as a Southern patriot. He became a hero of the Lost Cause in the post-Reconstruction South. Jefferson Finis Davis was born at the family homestead in Fairview, Kentucky, on June 3, 1808. He sometimes gave his year of birth as 1807. He dropped his middle name in later life, although he sometimes used a middle initial. Davis was the youngest of ten children born to Jane (née Cook) and Samuel Emory Davis; his oldest brother Joseph Emory Davis was 23 years his senior. He was named after President Thomas Jefferson, whom his father admired. In the early 20th century, the Jefferson Davis State Historic Site was established near the site of Davis's birth. 
Coincidentally, Abraham Lincoln was born in Hodgenville, Kentucky, only eight months later, not far to the northeast of Fairview. Davis's paternal grandparents were born in the region of Snowdonia in North Wales, and immigrated separately to North America in the early 18th century. His maternal ancestors were English. After initially arriving in Philadelphia, Davis's paternal grandfather Evan settled in the colony of Georgia, which was developed chiefly along the coast. He married the widow Lydia Emory Williams, who had two sons from a previous marriage, and their son Samuel Emory Davis was born in 1756. He served in the Continental Army during the American Revolutionary War, along with his two older half-brothers. In 1783, after the war, he married Jane Cook. She was born in 1759 to William Cook and his wife Sarah Simpson in what is now Christian County, Kentucky. In 1793, the Davis family relocated to Kentucky, establishing a community named "Davisburg" on the border of Christian and Todd counties; it was eventually renamed Fairview. During Davis's childhood, his family moved twice: in 1811 to St. Mary Parish, Louisiana, and less than a year later to Wilkinson County, Mississippi. Three of his older brothers served in the War of 1812. In 1813, Davis began his education at the Wilkinson Academy in the small town of Woodville, near the family cotton plantation. His brother Joseph acted as a surrogate father and encouraged Jefferson in his education. Two years later, Davis entered the Catholic school of Saint Thomas at St. Rose Priory, a school operated by the Dominican Order in Washington County, Kentucky. At the time, he was the only Protestant student at the school. Davis returned to Mississippi in 1818, studying at Jefferson College in Washington. He returned to Kentucky in 1821, studying at Transylvania University in Lexington. (At the time, these colleges were like academies, roughly equivalent to high schools.) His father Samuel died on July 4, 1824, when Jefferson was 16 years old. Joseph arranged for Davis to get an appointment and attend the United States Military Academy (West Point) starting in late 1824. While there, he was placed under house arrest for his role in the Eggnog Riot during Christmas 1826. Cadets smuggled whiskey into the academy to make eggnog, and more than one-third of the cadets were involved in the incident. In June 1828, Davis graduated 23rd in a class of 33. Following graduation, Second Lieutenant Davis was assigned to the 1st Infantry Regiment and was stationed at Fort Crawford, Prairie du Chien, Michigan Territory. Zachary Taylor, a future president of the United States, had assumed command shortly before Davis arrived in early 1829. In March 1832, Davis returned to Mississippi on furlough, having had no leave since he first arrived at Fort Crawford. He was still in Mississippi during the Black Hawk War but returned to the fort in August. At the conclusion of the war, Colonel Taylor assigned him to escort Black Hawk to prison. Davis made an effort to shield Black Hawk from curiosity seekers, and the chief noted in his autobiography that Davis treated him "with much kindness" and showed empathy for the leader's situation as a prisoner. Davis fell in love with Sarah Knox Taylor, daughter of his commanding officer, Zachary Taylor. Both Sarah and Davis sought Taylor's permission to marry. Taylor refused, as he did not wish his daughter to have the difficult life of a military wife on frontier army posts. 
Davis's own experience led him to appreciate Taylor's objection. He consulted with his older brother Joseph, and they both began to question the value of an Army career. Davis hesitated to leave, but his desire for Sarah overcame this, and he resigned his commission in a letter dated April 20, 1835. He had arranged for the letter to be sent to the War Department on his behalf on May 12, when he did not return from leave, but he did not tell Taylor that he intended to resign. Against his former commander's wishes, on June 17, he married Sarah in Louisville, Kentucky. His resignation became effective June 30. Davis's older brother Joseph had been very successful and owned Hurricane Plantation and adjoining land along the Mississippi River, on a peninsula 20 miles south of Vicksburg, Mississippi. The adjoining land was known as Brierfield, since it was largely covered with brush and briers. Wanting to have his youngest brother and his wife nearby, Joseph gave use of Brierfield to Jefferson, who eventually developed Brierfield Plantation there. Joseph retained the title. In August 1835, Jefferson and Sarah traveled south to his sister Anna's home in West Feliciana Parish, Louisiana; the plantation was known as Locust Grove. They intended to spend the hot summer months in the countryside away from the river floodplain, for their health, but both of them contracted either malaria or yellow fever. Sarah died at the age of 21 on September 15, 1835, after three months of marriage. Davis was also severely ill, and his family feared for his life. In the month following Sarah's death, he slowly improved, although he remained weak. In late 1835, Davis sailed from New Orleans to Havana, Cuba, to help restore his health. He was accompanied by James Pemberton, his only slave at the time. Davis observed the Spanish military and sketched fortifications. Although no evidence points to his having any motive beyond general interest, the authorities knew that Davis was a former army officer and warned him to stop his observations. Bored and feeling somewhat better, Davis booked passage on a ship to New York, then continued to Washington, D.C., where he visited his old schoolmate George Wallace Jones. He soon returned with Pemberton to Mississippi. For several years following Sarah's death, Davis was reclusive and honored her memory. He spent time clearing Brierfield and developing his plantation, studied government and history, and had private political discussions with his brother Joseph. By early 1836, Davis had purchased 16 slaves; he held 40 slaves by 1840, and 74 by 1845. Davis promoted Pemberton to be overseer of the field teams. In 1860, he owned 113 slaves. In 1840, Davis first became involved in politics when he attended a Democratic Party meeting in Vicksburg and, to his surprise, was chosen as a delegate to the party's state convention in Jackson. In 1842, he attended the Democratic convention, and, in 1843, became a Democratic candidate for the state House of Representatives from the Warren County-Vicksburg district; he lost his first election. In 1844, Davis was sent to the party convention for a third time, and his interest in politics deepened. He was selected as one of six presidential electors for the 1844 presidential election and campaigned effectively throughout Mississippi for the Democratic candidate James K. Polk. In 1844, Davis met Varina Banks Howell, then 18 years old, whom his brother Joseph had invited for the Christmas season at Hurricane Plantation. 
She was a granddaughter of New Jersey Governor Richard Howell; her mother's family was from the South and included successful Scots-Irish planters. Within a month of their meeting, the 35-year-old widower Davis had asked Varina to marry him, and they became engaged despite her parents' initial concerns about his age and politics. They were married on February 26, 1845. During this time, Davis was persuaded to become a candidate for the United States House of Representatives and began canvassing for the election. In early October 1845 he traveled to Woodville to give a speech. He arrived a day early to visit his mother there, only to find that she had died the day before. After the funeral, he rode back to Natchez to deliver the news, then returned to Woodville again to deliver his speech. He won the election. Jefferson and Varina had six children; three died before reaching adulthood. Samuel Emory, born July 30, 1852, was named after his grandfather; he died June 30, 1854, of an undiagnosed disease. Margaret Howell was born February 25, 1855, and was the only child to marry and raise a family. She married Joel Addison Hayes, Jr. (1848–1919), and they had five children. They were married in St. Lazarus Church, nicknamed "The Confederate Officers' Church", in Memphis, Tennessee. In the late 19th century, they moved from Memphis to Colorado Springs, Colorado. She died on July 18, 1909, at the age of 54. Jefferson Davis, Jr., was born January 16, 1857. He died of yellow fever at age 21 on October 16, 1878, during an epidemic in the Mississippi River Valley that caused 20,000 deaths. Joseph Evan, born on April 18, 1859, died at the age of five due to an accidental fall on April 30, 1864. William Howell, born on December 6, 1861, was named for Varina's father; he died of diphtheria at age 10 on October 16, 1872. Varina Anne, known as "Winnie", was born on June 27, 1864, several months after her brother Joseph's death. She was known as the Daughter of the Confederacy as she was born during the war. After her parents refused to let her marry into a northern abolitionist family, she never married. She died nine years after her father, on September 18, 1898, at age 34. Jim Limber, an octoroon (mixed-race) orphan, was briefly a ward of Jefferson Davis and Varina Howell Davis. Davis had poor health for most of his life, including repeated bouts of malaria, battle wounds from fighting in the Mexican–American War, and a chronic eye infection that made bright light painful. He also had trigeminal neuralgia, a nerve disorder that causes severe pain in the face; it has been called one of the most painful known ailments. In 1846 the Mexican–American War began. Davis raised a volunteer regiment, the 155th Infantry Regiment, becoming its colonel under the command of his former father-in-law, General Zachary Taylor. On July 21 the regiment sailed from New Orleans for Texas. Colonel Davis sought to arm his regiment with the M1841 Mississippi rifle. At this time, smoothbore muskets were still the primary infantry weapon, and any unit with rifles was considered special and designated as such. President James K. Polk had promised Davis the weapons if he would remain in Congress long enough for an important vote on the Walker tariff. General Winfield Scott objected on the basis that the weapons were insufficiently tested. Davis insisted and called in his promise from Polk, and his regiment was armed with the rifles, making it particularly effective in combat. 
The regiment became known as the Mississippi Rifles because it was the first to be fully armed with these new weapons. The incident was the start of a lifelong feud between Davis and Scott. In September 1846, Davis participated in the Battle of Monterrey, during which he led a successful charge on the La Teneria fort. On October 28, Davis resigned his seat in the House of Representatives. On February 22, 1847, Davis fought bravely at the Battle of Buena Vista and was shot in the foot, being carried to safety by Robert H. Chilton. In recognition of Davis's bravery and initiative, Taylor is reputed to have said, "My daughter, sir, was a better judge of men than I was." On May 17, President Polk offered Davis a federal commission as a brigadier general and command of a brigade of militia. Davis declined the appointment, arguing that the Constitution gives the power of appointing militia officers to the states, not the federal government. Honoring Davis's war service, Governor Brown of Mississippi appointed him to fill the Senate seat left vacant by the death of United States Senator Jesse Speight, a Democrat, on May 1, 1847. Davis, also a Democrat, took his temporary seat on December 5, and in January 1848 he was elected by the state legislature to serve the remaining two years of the term. In December, during the 30th United States Congress, Davis was made a regent of the Smithsonian Institution and began serving on the Committee on Military Affairs and the Library Committee. In 1848, Senator Davis proposed and introduced an amendment (the first of several) to the Treaty of Guadalupe Hidalgo that would have annexed most of northeastern Mexico, but it failed on a vote of 11 to 44. Southerners wanted to increase territory held in Mexico as an area for the expansion of slavery. Regarding Cuba, Davis declared that it "must be ours" to "increase the number of slaveholding constituencies." He also was concerned about the security implications of a Spanish holding lying relatively close to the coast of Florida. A group of Cuban revolutionaries led by Venezuelan adventurer Narciso López intended to liberate Cuba from Spanish rule by the sword. Searching for a military leader for a filibuster expedition, they first offered command of the Cuban forces to General William J. Worth, but he died before making his decision. In the summer of 1849, López visited Davis and asked him to lead the expedition. He offered an immediate payment of $100,000 (worth more than $2,000,000 in 2013), plus the same amount when Cuba was liberated. Davis turned down the offer, stating that it was inconsistent with his duty as a senator. When asked to recommend someone else, Davis suggested Robert E. Lee, then an army major in Baltimore; López approached Lee, who also declined on the grounds of his duty. The Senate made Davis chairman of the Committee on Military Affairs on December 3, 1849, during the first session of the 31st United States Congress. On December 29 he was elected to a full six-year term (by the Mississippi legislature, as the constitution mandated at the time). Davis had not served a year when he resigned (in September 1851) to run for the governorship of Mississippi on the issue of the Compromise of 1850, which he opposed. He was defeated by fellow Senator Henry Stuart Foote by 999 votes. Left without political office, Davis continued his political activity. He took part in a convention on states' rights, held at Jackson, Mississippi, in January 1852. 
In the weeks leading up to the presidential election of 1852, he campaigned in numerous Southern states for Democratic candidates Franklin Pierce and William R. King. Franklin Pierce, after winning the presidential election, made Davis his Secretary of War in 1853. In this capacity, Davis began the Pacific Railroad Surveys in order to determine various possible routes for the proposed Transcontinental Railroad. He promoted the Gadsden Purchase of today's southern Arizona from Mexico, partly because it would provide an easier southern route for the new railroad; the Pierce administration agreed and the land was purchased in December 1853. He saw the size of the regular army as insufficient to fulfill its mission, maintaining that salaries would have to be increased, something which had not occurred for 25 years. Congress agreed and increased the pay scale. It also added four regiments, which increased the army's size from about 11,000 to about 15,000. Davis also introduced general usage of the rifles that he had used successfully during the Mexican–American War. As a result, both the morale and the capability of the army were improved. He became involved in public works when Pierce gave him responsibility for construction of the Washington Aqueduct and an expansion of the U.S. Capitol, both of which he managed closely. The Pierce administration ended in 1857 after Pierce's loss of the Democratic nomination to James Buchanan. Davis's term was to end with Pierce's, so he ran for the Senate, was elected, and re-entered it on March 4, 1857. In the 1840s, tensions were growing between the North and South over various issues, including slavery. The Wilmot Proviso, introduced in 1846, contributed to these tensions; if passed, it would have banned slavery in any land acquired from Mexico. The Compromise of 1850 brought a temporary respite, but the Dred Scott case, decided by the United States Supreme Court in 1857, spurred public debate. Chief Justice Roger Taney ruled that the Missouri Compromise was unconstitutional and that African Americans had no standing as citizens under the constitution. Northerners were outraged and there was increasing talk in the South of secession from the Union. Davis's renewed service in the Senate was interrupted in early 1858 by an illness that began as a severe cold and which threatened him with the loss of his left eye. He was forced to remain in a darkened room for four weeks. He spent the summer of 1858 in Portland, Maine. On the Fourth of July, Davis delivered an anti-secessionist speech on board a ship near Boston. He again urged the preservation of the Union on October 11 in Faneuil Hall, Boston, and returned to the Senate soon after. As he explained in his memoir "The Rise and Fall of the Confederate Government", Davis believed that each state was sovereign and had an unquestionable right to secede from the Union. At the same time, he counseled delay among his fellow Southerners, because he did not think that the North would permit the peaceable exercise of the right to secession. Having served as secretary of war under President Pierce, he also knew that the South lacked the military and naval resources necessary for defense in a war. Following the election of Abraham Lincoln in 1860, however, events accelerated. South Carolina adopted an ordinance of secession on December 20, 1860, and Mississippi did so on January 9, 1861. Davis had expected this but waited until he received official notification. 
On January 21, the day Davis called "the saddest day of my life", he delivered a farewell address to the United States Senate, resigned and returned to Mississippi. In 1861, the Episcopal Church split and Davis became a member of the newly founded Protestant Episcopal Church in the Confederate States of America. He attended St. Paul's Episcopal Church in Richmond while he was President of the Confederacy. The two denominations were reunited in 1865. Anticipating a call for his services since Mississippi had seceded, Davis had sent a telegraph message to Governor John J. Pettus saying, "Judge what Mississippi requires of me and place me accordingly." On January 23, 1861, Pettus made Davis a major general of the Army of Mississippi. On February 9, a constitutional convention met at Montgomery, Alabama, and considered Davis and Robert Toombs of Georgia as possible presidents. Davis, who had widespread support from six of the seven states, easily won. He was seen as the "champion of a slave society and embodied the values of the planter class", and was elected provisional Confederate President by acclamation. He was inaugurated on February 18, 1861. Alexander H. Stephens was chosen as Vice President, but he and Davis feuded constantly. Davis was the first choice because of his strong political and military credentials. He wanted to serve as commander-in-chief of the Confederate armies but said he would serve wherever directed. His wife Varina Davis later wrote that when he received word that he had been chosen as president, "Reading that telegram he looked so grieved that I feared some evil had befallen our family." Several forts in Confederate territory remained in Union hands. Davis sent a commission to Washington with an offer to pay for any federal property on Southern soil, as well as the Southern portion of the national debt, but Lincoln refused to meet with the commissioners. Brief informal discussions did take place with Secretary of State William Seward through Supreme Court Justice John A. Campbell, who, being from Alabama, later resigned from the federal government. Seward hinted that Fort Sumter would be evacuated, but gave no assurance. On March 1, 1861, Davis appointed General P. G. T. Beauregard to command all Confederate troops in the vicinity of Charleston, South Carolina, where state officials prepared to take possession of Fort Sumter. Beauregard was to prepare his forces but await orders to attack the fort. Within the fort, the issue was not the niceties of geopolitical posturing but survival: the garrison would be out of food on the 15th. The small Union force under Major Robert Anderson had only half a dozen officers, famously including the baseball folk hero Abner Doubleday and, more improbably, an officer named Jefferson C. Davis, who would spend the war being taunted for his name but not for his loyalty to the Northern cause. The newly installed President Lincoln, not wishing to initiate hostilities, informed South Carolina Governor Pickens that he was dispatching a small fleet of ships from the navy yard in New York to resupply, but not reinforce, Fort Pickens in Florida and Fort Sumter. The U.S. President did not inform CSA President Davis of this intended resupply of food and fuel. For Lincoln, Davis, as the leader of an insurrection, was without legal standing in U.S. affairs. To deal with him would be to give legitimacy to the rebellion. 
The belief that Sumter was the property of the sovereign United States was the reason for maintaining the garrison on the island fortress. Lincoln informed Pickens that the resupply mission would not land troops or munitions unless it was fired upon. As it turned out, the bombardment began just as the supply ships approached Charleston harbor, and the flotilla watched the spectacle from 10 miles at sea. Davis faced the most important decision of his career: to prevent reinforcement at Fort Sumter or to let it take place. He and his cabinet decided to demand that the Federal garrison surrender and, if this was refused, to use military force to prevent reinforcement before the fleet arrived. Anderson did not surrender. With Davis's endorsement, Beauregard began the bombardment of the fort in the early dawn of April 12. The Confederates continued their artillery attack on Fort Sumter until it surrendered on April 14. No one was killed in the artillery duel, but the attack on the U.S. fortress meant the fighting had started. President Lincoln called up state militia to march south to recapture Federal property. In the North and South, massive rallies were held to demand immediate war. The Civil War had begun. When Virginia joined the Confederacy, Davis moved his government to Richmond in May 1861. He and his family took up residence there at the White House of the Confederacy later that month. Having served since February as the provisional president, Davis was elected to a full six-year term on November 6, 1861, and was inaugurated on February 22, 1862. At the start of the war, nearly 21 million people lived in the North compared to 9 million in the South. While the North's population was almost entirely white, the South had an enormous number of black slaves as well as free people of color. Although the latter were free, becoming a soldier was seen as the prerogative of white men only, and many Southerners were terrified at the idea of a black man with a gun. Excluding old men and boys, the white males available for Confederate service numbered fewer than two million. There was the additional burden that the nearly four million black slaves had to be heavily policed, as there was no trust between the owner and the "owned". The North had vastly greater industrial capacity; built nearly all the locomotives, steamships, and industrial machinery; and had a much larger and more integrated railroad system. Nearly all the munitions facilities were in the North, while critical ingredients for gunpowder were in very short supply in the South. Much of the railroad track in the Confederacy consisted of lightly built lines meant simply to carry bales of cotton to local river ports in the harvest season. These often did not connect to other rail lines, making internal shipments of goods difficult at best. While the Union had a large navy, the new Confederate Navy had only a few captured warships or newly built vessels. These did surprisingly well but ultimately were sunk or abandoned as the Union Navy controlled more rivers and ports. Confederate raiders loosed on Northern shipping in the Atlantic did tremendous damage and drove Union merchant ships into safe harbors as insurance rates soared. The Union blockade of the South, however, made imports via blockade runners difficult and expensive. Awkwardly, these runners often brought not the war materials that were so greatly needed but rather the European luxuries sought as relief from the privations of wartime's stark conditions. 
In June 1862, Davis was forced to assign General Robert E. Lee to replace the wounded Joseph E. Johnston in command of the Army of Northern Virginia, the main Confederate army in the Eastern Theater. That December Davis made a tour of Confederate armies in the west of the country. He had a very small circle of military advisers. He largely made the main strategic decisions on his own, though he had special respect for Lee's views. Given the Confederacy's limited resources compared with the Union, Davis decided that the Confederacy would have to fight mostly on the strategic defensive. He maintained this outlook throughout the war, paying special attention to the defense of his national capital at Richmond. He approved Lee's strategic offensives when he felt that military success would both shake Northern self-confidence and strengthen the peace movements there. However, the several campaigns invading the North were met with defeat. A bloody battle at Antietam in Maryland, as well as the invasion of Kentucky in the Confederate Heartland Offensive (both in 1862), drained irreplaceable men and talented officers. A final offensive led to the three-day bloodletting at Gettysburg in Pennsylvania (1863), crippling the South still further. The state of tactics and munitions made the defensive side much more likely to endure: an expensive lesson vindicating Davis's initial belief. As provisional president in 1861, Davis formed his first cabinet. Robert Toombs of Georgia was the first Secretary of State and Christopher Memminger of South Carolina became Secretary of the Treasury. LeRoy Pope Walker of Alabama was made Secretary of War, after being recommended for this post by Clement Clay and William Yancey (both of whom declined to accept cabinet positions themselves). John Reagan of Texas became Postmaster General. Judah P. Benjamin of Louisiana became Attorney General. Although Stephen Mallory was not put forward by the delegation from his state of Florida, Davis insisted that he was the best man for the job of Secretary of the Navy, and he was eventually confirmed. Since the Confederacy was founded, among other things, on states' rights, one important factor in Davis's choice of cabinet members was representation from the various states. He depended partly upon recommendations from congressmen and other prominent people. This helped maintain good relations between the executive and legislative branches. This also led to complaints as more states joined the Confederacy, however, because there were more states than cabinet positions. As the war progressed, this dissatisfaction increased and there were frequent changes to the cabinet. Toombs, who had wished to be president himself, was frustrated as an advisor and resigned within a few months of his appointment to join the army. Robert Hunter of Virginia replaced him as Secretary of State on July 25, 1861. On September 17, Walker resigned as Secretary of War due to a conflict with Davis, who had questioned his management of the War Department and had suggested he consider a different position. Walker requested, and was given, command of the troops in Alabama. Benjamin left the Attorney General position to replace him, and Thomas Bragg of North Carolina (brother of General Braxton Bragg) took Benjamin's place as Attorney General. Following the November 1861 election, Davis announced the permanent cabinet in March 1862. Benjamin moved again, to Secretary of State. George W. Randolph of Virginia was made Secretary of War. 
Mallory continued as Secretary of the Navy and Reagan as Postmaster General. Both kept their positions throughout the war. Memminger remained Secretary of the Treasury, while Thomas Hill Watts of Alabama was made Attorney General. In 1862 Randolph resigned from the War Department, and James Seddon of Virginia was appointed to replace him. In late 1863, Watts resigned as Attorney General to take office as the Governor of Alabama, and George Davis of North Carolina took his place. In 1864, Memminger withdrew from the Treasury post due to congressional opposition, and was replaced by George Trenholm of South Carolina. In 1865 congressional opposition likewise caused Seddon to withdraw, and he was replaced by John C. Breckinridge of Kentucky. Cotton was the South's primary export and the basis of its economy, and the system of production the South used depended upon slave labor. At the outset of the Civil War, Davis realized that intervention from European powers would be vital if the Confederacy was to stand against the Union. The administration sent repeated delegations to European nations, but several factors prevented Southern success in terms of foreign diplomacy. The Union blockade of the Confederacy led European powers to remain neutral, contrary to the Southern belief that a blockade would cut off the supply of cotton to Britain and other European nations and prompt them to intervene on behalf of the South. Many European countries objected to slavery. Britain had abolished it in the 1830s, and Lincoln's Emancipation Proclamation of 1863 made support for the South even less appealing in Europe. Finally, as the war progressed and the South's military prospects dwindled, foreign powers were not convinced that the Confederacy had the strength to become independent. In the end, not a single foreign nation recognized the Confederate States of America. Most historians sharply criticize Davis for his flawed military strategy, his selection of friends for military commands, and his neglect of homefront crises. Until late in the war, he resisted efforts to appoint a general-in-chief, essentially handling those duties himself. On January 31, 1865, Lee assumed this role, but it was far too late. Davis insisted on a strategy of trying to defend all Southern territory with ostensibly equal effort. This diluted the limited resources of the South and made it vulnerable to coordinated strategic thrusts by the Union into the vital Western Theater (e.g., the capture of New Orleans in early 1862). He made other controversial strategic choices, such as allowing Lee to invade the North in 1862 and 1863 while the Western armies were under very heavy pressure. When Lee lost at Gettysburg, Vicksburg simultaneously fell, and the Union took control of the Mississippi River, splitting the Confederacy. At Vicksburg, the failure to coordinate multiple forces on both sides of the Mississippi River rested primarily on Davis's inability to create a harmonious departmental arrangement or to force such generals as Edmund Kirby Smith, Earl Van Dorn, and Theophilus H. Holmes to work together. Davis has been faulted for poor coordination and management of his generals. This includes his reluctance to resolve a dispute between Leonidas Polk, a personal friend, and Braxton Bragg, who was defeated in important battles and distrusted by his subordinates. He was similarly reluctant to relieve the capable but overcautious Joseph E. 
Johnston until, after numerous frustrations which he detailed in a March 1, 1865, letter to Col. James Phelan of Mississippi, he replaced him with John Bell Hood. Davis gave speeches to soldiers and politicians but largely ignored the common people, who came to resent the favoritism shown to the rich and powerful; Davis thus failed to harness Confederate nationalism. One historian speaks of "the heavy-handed intervention of the Confederate government." Economic intervention, regulation, and state control of manpower, production and transport were much greater in the Confederacy than in the Union. Davis did not use his presidential pulpit to rally the people with stirring rhetoric; he called instead for people to be fatalistic and to die for their new country. Apart from two month-long trips across the country where he met a few hundred people, Davis stayed in Richmond where few people saw him; newspapers had limited circulation, and most Confederates had little favorable information about him. To finance the war, the Confederate government initially issued bonds, but investment from the public never met the demands. Taxes were lower than in the Union and collected with less efficiency; European investment was also insufficient. As the war proceeded, both the Confederate government and the individual states printed more and more paper money. Inflation increased from 60% in 1861 to 300% in 1863 and 600% in 1864. Davis did not seem to grasp the enormity of the problem. In April 1863, food shortages led to rioting in Richmond, as poor people robbed and looted numerous stores for food until Davis cracked down and restored order. Davis feuded bitterly with his vice president. Perhaps even more seriously, he clashed with powerful state governors who used states' rights arguments to withhold their militia units from national service and otherwise blocked mobilization plans. Davis is widely evaluated as a less effective war leader than Lincoln, even though Davis had extensive military experience and Lincoln had little. Davis would have preferred to be an army general and tended to manage military matters himself. Lincoln and Davis led in very different ways. According to one historian, there were many factors that led to Union victory over the Confederacy, and Davis recognized from the start that the South was at a distinct disadvantage; but in the end, Lincoln helped to achieve victory, whereas Davis contributed to defeat. In March 1865, General Order 14 provided for enlisting slaves into the army, with a promise of freedom for service. The idea had been suggested years earlier, but Davis did not act upon it until late in the war, and very few slaves were enlisted. On April 3, with Union troops under Ulysses S. Grant poised to capture Richmond, Davis escaped to Danville, Virginia, together with the Confederate Cabinet, leaving on the Richmond and Danville Railroad. Lincoln sat in Davis's Richmond office just 40 hours later. William T. Sutherlin turned over his mansion, which served as Davis's temporary residence from April 3 to April 10, 1865. On or about April 12, Davis received Robert E. Lee's letter announcing surrender. He issued his last official proclamation as president of the Confederacy, and then went south to Greensboro, North Carolina. After Lee's surrender, a public meeting was held in Shreveport, Louisiana, at which many speakers supported continuation of the war. Plans were developed for the Davis government to flee to Havana, Cuba. 
There, the leaders would regroup and head to the Confederate-controlled Trans-Mississippi area by way of the Rio Grande. None of these plans were put into practice. On April 14, Lincoln was shot, dying the next day. Davis expressed regret at his death. He later said that he believed Lincoln would have been less harsh with the South than his successor, Andrew Johnson. In the aftermath, Johnson issued a $100,000 reward for the capture of Davis and accused him of helping to plan the assassination. As the Confederate military structure fell into disarray, the search for Davis by Union forces intensified. President Davis met with his Confederate Cabinet for the last time on May 5, 1865, in Washington, Georgia, and officially dissolved the Confederate government. The meeting took place at the Heard house, the Georgia Branch Bank Building, with 14 officials present. Along with their hand-picked escort led by Given Campbell, Davis and his wife Varina Davis were captured by Union forces on May 10 at Irwinville in Irwin County, Georgia. Mrs. Davis recounted the circumstances of her husband's capture as described below: "Just before day the enemy charged our camp yelling like demons... I pleaded with him to let me throw over him a large waterproof wrap which had often served him in sickness during the summer season for a dressing gown and which I hoped might so cover his person that in the grey of the morning he would not be recognized. As he strode off I threw over his head a little black shawl which was around my own shoulders, saying that he could not find his hat and after he started sent my colored woman after him with a bucket for water hoping that he would pass unobserved." It was reported in the media that Davis put his wife's overcoat over his shoulders while fleeing. This led to the persistent rumor that he attempted to flee in women's clothes, inspiring caricatures that portrayed him as such. Over 40 years later, an article in the "Washington Herald" claimed that Mrs. Davis's heavy shawl had been placed on Davis, who was "always extremely sensitive to cold air", by the slave James Henry Jones, Davis's valet, who served Davis and his family during and after the Civil War, to protect him from the "chilly atmosphere of the early hour of the morning". Meanwhile, Davis's belongings continued on the train bound for Cedar Key, Florida. They were first hidden at Senator David Levy Yulee's plantation in Florida, then placed in the care of a railroad agent in Waldo. On June 15, 1865, Union soldiers seized Davis's personal baggage from the agent, together with some of the Confederate government's records. A historical marker was erected at this site. In 1939, the Jefferson Davis Memorial Historic Site was opened to mark the place where Confederate President Jefferson Davis was captured. On May 19, 1865, Davis was imprisoned in a casemate at Fortress Monroe on the coast of Virginia. Irons were riveted to his ankles at the order of General Nelson Miles, who was in charge of the fort. Davis was allowed no visitors, and no books except the Bible. He became sicker, and the attending physician warned that his life was in danger, but this treatment continued for some months until late autumn, when he was finally given better quarters. General Miles was transferred in mid-1866, and Davis's treatment continued to improve. 
Pope Pius IX (see Pope Pius IX and the United States), after learning that Davis was a prisoner, sent him a portrait inscribed with the Latin words "Venite ad me omnes qui laboratis, et ego reficiam vos, dicit Dominus", which translate as "Come to me, all you that labor, and are burdened, and I will refresh you, saith the Lord". A hand-woven crown of thorns associated with the portrait is often said to have been made by the Pope but may have been woven by Davis's wife Varina. Varina and their young daughter Winnie were allowed to join Davis, and the family was eventually given an apartment in the officers' quarters. Davis was indicted for treason while imprisoned; one of his attorneys was ex-Governor Thomas Pratt of Maryland. There was a great deal of discussion in 1865 about bringing treason trials, especially against Jefferson Davis, and there was no consensus in President Johnson's cabinet to do so. Although Davis wanted such a trial for himself, there were no treason trials against anyone, as it was felt they would probably not succeed and would impede reconciliation. There was also a concern at the time that such action could result in a judicial decision that would validate the constitutionality of secession (a concern later removed by the Supreme Court's ruling in "Texas v. White" (1869), which declared secession unconstitutional). A jury of 12 black and 12 white men was recruited by United States Circuit Court judge John Curtiss Underwood in preparation for the trial. After two years of imprisonment, Davis was released on bail of $100,000, which was posted by prominent citizens including Horace Greeley, Cornelius Vanderbilt and Gerrit Smith. (Smith was a former member of the Secret Six who had supported abolitionist John Brown.) Davis went to Montreal, Canada, to join his family, which had fled there earlier, and lived in Lennoxville, Quebec, until 1868, also visiting Cuba and Europe in search of work. At one stage he stayed as a guest of James Smith, a foundry owner in Glasgow, who had struck up a friendship with Davis when he toured the Southern States promoting his foundry business. Davis remained under indictment until Christmas Day of 1868, when Andrew Johnson issued a presidential "pardon and amnesty" for the offense of treason to "every person who directly or indirectly participated in the late insurrection or rebellion"; on February 15, 1869, a federal circuit court dismissed the case against Davis after the government's attorney informed the court that he would no longer continue to prosecute Davis. After his release from prison and pardon, Davis faced continued financial pressures, as well as an unsettled family life. His elder brother Joseph died in 1870, his son William Howell Davis in 1872 and Jefferson Davis Jr. in 1878. His wife Varina was often ill or abroad, and for a time refused to live with him in Memphis, Tennessee. Davis resented having to resort to charity, and would only accept jobs befitting his former positions as U.S. Senator and Confederate President; several that he accepted proved financial failures. While in England, Davis sought a mercantile position in Liverpool. However, British companies were wary, both because Britons were not interested in Canadian mines and because Mississippi had defaulted on debts in the 1840s; Judah Benjamin also cautioned him against countering former wartime propaganda by Robert J. Walker. 
Davis also refused positions as head of Randolph-Macon Academy in Virginia and the University of the South in Sewanee, Tennessee, for financial reasons. In 1869, Davis became president of the Carolina Life Insurance Company in Memphis, Tennessee, at an annual salary of $12,000, plus travel expenses, and resided at the Peabody Hotel. He recruited former Confederate officers as agents, and the board ratified his position in 1870. By 1873, he suggested that the company have boards of trustees at its various branches, and that qualification for such be that the trustee either take out a policy of at least $5,000 or own at least $1,000 in the company's stock. By midyear the Panic of 1873 affected the company, and Davis resigned when it merged with another firm over his objections. He also planned a "Davis Land Company" in which investors would pay $10 per share for 5,700 acres Davis owned in Arkansas. He drafted a prospectus that stated he owed more than $40,000 and his income did not amount to $200. Upon General Lee's death, Davis agreed to preside over the Presbyterian memorial in Richmond on November 3, 1870. That speech prompted further invitations, although he declined them until July 1871, when he was commencement speaker at the University of the South. Two years later Davis addressed the Virginia Historical Society at White Sulphur Springs, where he proclaimed southerners were "cheated not conquered" and would never have surrendered if they had foreseen Congressional Reconstruction. In the summer of 1875, Davis agreed to speak at 17 agricultural fairs in the Midwest. He received criticism from the "Chicago Tribune" and threats to his life in Indiana, but crowds in Kansas City, Missouri, and Fairview, Kentucky, received him well. During the next two years, Davis began writing his books about the Confederacy but spoke publicly only to fellow former soldiers: first veterans of the Mexican War (before whom he attacked Congressional Reconstruction), then Confederate veterans (before whom he promoted reconciliation). Early in Reconstruction, Davis publicly remained silent on his opinions, but privately condemned federal military rule and believed Republican authority over former Confederate states unjustified. Mississippi had elected Hiram Rhodes Revels, an African-American, as a U.S. Senator in 1870 to finish the term of Albert G. Brown. Furthermore, during the war, after Joseph Davis's departure from his plantations at Davis Bend and the Union capture of Vicksburg and the surrounding area, General Grant had continued Joseph Davis's utopian experiment and ordered that the land be leased to the freedmen and that black refugees be allowed to settle in the area. Although Joseph Davis ultimately received the land back, many black leaders came from the plantation, which had its own political system, including elected black judges and sheriffs. After the 1867 floods changed the course of the Mississippi River, Joseph Davis sold the plantation to Ben Montgomery, the former slave who had operated a store and handled the white brothers' cotton transactions. Ben's son Isaiah Thornton Montgomery became the first black man to hold office in Mississippi when General E.O.C. Ord appointed him Davis Bend's postmaster in 1867. Ben himself was elected justice of the peace. Other black leaders during Mississippi Reconstruction with Davis Bend ties included Israel Shadd, who became speaker of the state's House of Representatives, and legislator Albert Johnson (who also served in the state's constitutional convention). 
Jefferson Davis considered "Yankee and Negroe" rule in the South oppressive, and said so in 1871 and especially after 1873. Like most of his white contemporaries, Davis believed that blacks were inferior to whites. One recent biographer believes Davis favored a Southern social order that included a "democratic white polity based firmly on dominance of a controlled and excluded black caste". While seeking to reclaim Davis Bend ("Hurricane" and "Brierfield" plantations) in 1865, Joseph Davis had filed documents with the Freedmen's Bureau insisting that he had intentionally never given Jefferson Davis title to the latter. After receiving first a pardon, and then the lands back, he sold both plantations to former slave Ben Montgomery and his sons, taking back a mortgage for $300,000 at 6% interest, with payments due each January 1 beginning in 1867. While Joseph Davis recognized he could not farm successfully without his 375 enslaved people, he expected the Montgomerys could better manage the labor situation, since in 1865 they had raised nearly 2,000 bales of cotton and earned $160,000 in profits. However, when the Mississippi River flooded in spring 1867, it also changed course, ruining many acres and creating "Davis Island". After Joseph Davis died two years later, his 1869 will left property to his two orphaned grandchildren, as well as to his brother's children, and named Jefferson Davis one of three executors (with Dr. J. H. D. Bowmar and nephew Joseph Smith). After the Montgomery men entertained the three executors in May 1870, and after he suffered losses in the Panic of 1873, Jefferson Davis decided the black men could never fulfill the land purchase contract and filed suit against the other trustees on June 15, 1874. Jefferson Davis argued his late brother had an oral agreement with Ben Montgomery that allowed Jefferson Davis to rescind the deal and that an unassigned $70,000 from the land sale represented Brierfield's value (the orphaned Hamer grandchildren said it represented declining land values). The local chancery court (which then had a Republican judge, although two of the three Hamer lawyers were former Confederates) dismissed Davis's lawsuit in January 1876, citing estoppel, because Davis had been acting as executor for four years despite this claim, which was based on alleged actions in the 1840s. In April 1878 (months after Ben Montgomery had died), the Mississippi Supreme Court overruled the Warren County chancery court, deciding that Jefferson Davis properly claimed the Brierfield land by adverse possession, since he had cleared and farmed it from the 1840s until the outbreak of the Civil War (more than the ten years the statute required). By that time, two of the Republicans on that appellate court had been replaced by Democrats, both former Confederate officers. To actually gain possession of Brierfield, Davis needed to convince the Warren County chancery court to foreclose the mortgage, which happened on June 1, 1880; all appeals were rejected by December 1, 1881, allowing Jefferson Davis, for the first time in his life, to gain legal title. While pursuing the Brierfield litigation, Davis took another business trip to Liverpool. This time he sought employment from the Royal Insurance Company (a fire and marine insurer), which refused him, citing Northern animosity toward the former Confederate President. Other insurers also rejected him both directly and through intermediaries. 
He then visited former Confederate ambassador John Slidell in Paris, but was unable to associate with a land company, either to aid the southern people or encourage emigration to the South. Davis returned to the United States and blamed race as the heart of what he called "the night of despotism" enveloping the South, citing Republicans who gave political rights to blacks that made them "more idle and ungovernable than before." Davis also investigated mine properties in Arkansas and backed an ice-making machine venture, which failed. He was invited to Texas, but turned down the opportunity to become the first president of the Agriculture and Mechanical College of Texas (now Texas A&M University) in 1876, citing the financial sacrifice (the offered salary was only $4,000 per year). The Mississippi Valley Society, based in England, sought to spur European immigration and English investment, but Davis declined to accept that presidency until salary details had been settled, though he took a speaking tour of the area to drum up public support. Joseph Davis had encouraged his brother to write his memoirs just after his release from prison, but Davis had responded that he was not capable of doing so, either physically or emotionally. His wartime assistant Preston Johnston had also encouraged Davis three years later. As Davis began to seriously consider the memoir endeavor in 1869, his early working title became "Our Cause," for he believed he could convert others to the rightness of the Confederacy's actions. In 1875, unable to come to terms with Preston Johnston, Davis authorized William T. Walthall, a former Confederate officer and Carolina Life agent in Mobile, Alabama, to look for a publisher for the proposed book. Walthall contacted D. Appleton & Company in New York City, and editor Joseph C. Derby agreed to pay Walthall $250 per month as an advance until the manuscript's completion, with the final product not to exceed two volumes of 800 pages each. Davis made minor changes and Appleton agreed. In 1877, Sarah Anne Ellis Dorsey, a wealthy widow and writer whom he and Varina had known from childhood and who supported the Lost Cause, invited Davis to stay at her estate and plantation house, "Beauvoir", which faced the Gulf of Mexico in Biloxi, Mississippi. Her husband, Maryland-born Samuel Dorsey, had bought Beauvoir in 1873 and died there two years later. Mrs. Dorsey wanted to provide Davis with a refuge in which he could write his memoirs per the Appleton contract. She provided him a cabin for his own use and helped him with his writing through organization, dictation, editing, and encouragement. Davis refused to accept overt charity, but agreed to purchase the property at a modest price ($5,500, payable in installments over three years). In January 1878 Dorsey, knowing she too was ill (with breast cancer), made over her will with Walthall's assistance in order to leave her remaining three small Louisiana plantations and financial assets of $50,000 to Davis and, acknowledging his still-precarious health, to his beloved daughter Winnie Davis if he predeceased her. Dorsey died in 1879, by which time both the Davises and Winnie were living at Beauvoir. Her relatives came to contest that last will, which excluded them and gave everything to Davis in fee simple. They argued Davis exerted undue influence over the widow. The court dismissed their lawsuit without comment in March 1880, and they filed no appeal. 
Upon receiving the Appleton contract, Davis had sent letters to his former associates, seeking supporting documentation. When Walthall sent two proposed chapters to New York in 1878, Appleton returned them, cautioning that it did not want a long rehash of constitutional history, but rather an account of Davis's actions as the Confederacy's president. The publisher then sent William J. Tenney, a states-rights Democrat and staff member, to visit Beauvoir to get the problematic manuscript into publishable shape. When it still failed to arrive, Derby personally traveled to Mississippi in February 1880. By this time, Derby had advanced $8,000, but Davis confessed that he had seen few pages, asserting that Walthall had the rest. Since Davis did not want to give up on the book or return the funds (and had already mortgaged the properties he received from Dorsey), he agreed that Tenney would take up residence in a cottage at Beauvoir. On May 1, 1880, Davis severed all connections with Walthall, who had made little progress in the preceding two years. Davis and Tenney then completed "The Rise and Fall of the Confederate Government" (1881), in two volumes of 700 and 800 pages respectively. The Southern Historical Society had been formed in 1876 by Rev. J. William Jones (a Baptist minister and former Confederate chaplain) and Gen. Jubal A. Early. Jones became the Society's paid secretary and editor of the Southern Historical Review; Early became President and head of its executive committee. They made Davis a life member and helped him gather material for his book. They had tried to enlist him for a speaking tour in 1882, but Davis declined, citing his health and a yellow fever epidemic near Beauvoir, and only made one address in New Orleans on its behalf before 1882. Early also began visiting Davis when the Virginian visited New Orleans as supervisor in the Louisiana State Lottery Company. Like Judah Benjamin, Early repeatedly advised Davis not to participate publicly in personal vendettas and old battles, despite critical books and articles by former Confederate Generals Pierre Beauregard and Joseph E. Johnston. Nonetheless, when asked to speak at the dedication of the Lee mausoleum in Lexington, Virginia, Davis declined when he learned Johnston would preside, and also vented in his personal correspondence. Davis also took issue with Gen. William T. Sherman in an address in St. Louis in 1884 and in a lengthy letter to the editor, and also criticized young New York politician Theodore Roosevelt for comparing him to Benedict Arnold. When touring the South in 1886 and 1887, Davis attended many Lost Cause ceremonies, and large crowds showered him with affection as local leaders presented emotional speeches honoring his sacrifices to the would-be nation. According to the "Meriden Daily Journal", at a reception held in New Orleans in May 1887, Davis urged southerners to be loyal to the nation: "United you are now, and if the Union is ever to be broken, let the other side break it." He continued by lauding Confederate men who successfully fought for their own rights despite inferior numbers during the Civil War, and argued that northern historians ignored this view. Davis firmly believed that Confederate secession was constitutional, and was optimistic concerning American prosperity and the next generation. 
In the summer of 1888, James Redpath, editor of the "North American Review" and a former political enemy who became an admirer upon meeting Davis, convinced him to write a series of articles at $250 per article, as well as a book. Davis then completed his final book, "A Short History of the Confederate States of America", in October 1889. On November 6, Davis left Beauvoir to visit his Brierfield plantation. He embarked on a steamboat in New Orleans during sleety rain and fell ill during the trip; he initially felt too sick to disembark at his stop and spent the night upriver in Vicksburg before making his way to the plantation the next day. He refused to send for a doctor for four days before embarking on his return trip. Meanwhile, servants sent Varina a telegram, and she took a train to New Orleans, and then a steamboat upriver, finally reaching the vessel on which her husband was returning. Davis finally received medical care as two doctors came aboard further south and diagnosed acute bronchitis complicated by malaria. Upon arriving in New Orleans three days later, Davis was taken to the Garden District home of Charles Erasmus Fenner, a former Confederate officer who became an Associate Justice of the Louisiana Supreme Court. Fenner was the son-in-law of Davis's old friend J. M. Payne. Davis's doctor Stanford E. Chaille pronounced him too ill to travel to Beauvoir; four medical students who were sons of Confederate veterans and a Catholic nun attended Davis in the Charity Hospital ambulance that took him to the Fenner home. Davis remained bedridden but stable for the next two weeks. He took a turn for the worse in early December. According to Fenner, just when Davis again appeared to be improving, he lost consciousness on the evening of December 5 and died at 12:45 a.m. on Friday, December 6, 1889, holding Varina's hand and in the presence of several friends. His funeral was one of the largest in the South, and New Orleans draped itself in mourning as his body lay in state in the City Hall for several days. An executive committee decided to emphasize Davis's ties to the United States, so an American national flag was placed over the Confederate flag during the viewing, and many crossed American and Confederate flags were displayed nearby. Davis wore a new suit of Confederate grey fabric that Jubal Early had given him, and Varina placed on the bier a sword Davis had carried during the Black Hawk War. A common decoration during the initial funeral was a small American flag in mourning, with a portrait of Davis in the center. The Grand Army of the Republic had a prominent role, even though the grand marshal was John G. Glynn, head of the Louisiana National Guard, and Georgia Governor John Gordon (head of the newly organized United Confederate Veterans) was honorary grand marshal. While the federal government officially ignored Davis's death, many church bells rang in the South, Confederate veterans held many processions, and Senators and congressmen crossed the Potomac River to join former Confederate officials and generals in eulogizing Davis in Alexandria, Virginia. Although initially laid to rest in New Orleans in the Army of Northern Virginia tomb at Greenwood Cemetery, in 1893 Davis was reinterred in Richmond, Virginia, at Hollywood Cemetery, per his widow's request. Before his death, Davis left the location of his burial up to Varina, but within a day of his death "The New York Times" proclaimed Richmond wanted his body. 
Varina Davis had refused to accept direct charity, but let it be known that she would accept financial help through the Davis Land Company. Soon, many tourists in New Orleans visited the mausoleum. Several other locations in the South wanted Davis's remains. Louisville, Kentucky, offered a site in Cave Hill Cemetery, noting that two years earlier Davis had dedicated a church built on the site of his birthplace and claiming that he several times said he wanted to be buried in his native state. Memphis, Tennessee, Montgomery, Alabama, Macon and Atlanta, Georgia and both Jackson and Vicksburg, Mississippi also petitioned for Davis's remains. Richmond mayor and Confederate veteran J. Taylor Ellyson established the Jefferson Davis Monument Association, and on July 12, 1891, Varina revealed in a letter to Confederate veterans and the people of the Southern states that her first choice would be Davis's plantation in Mississippi, but that because she feared flooding, she had decided to urge Richmond as the proper place for his tomb. After Davis's remains were exhumed in New Orleans, they lay in state for a day at Memorial Hall of the newly organized Louisiana Historical Association. Those paying final respects included Louisiana Governor Murphy J. Foster, Sr. A continuous cortège, day and night, then accompanied Davis's remains from New Orleans to Richmond. The Louisville and Nashville Railroad car traveled past Beauvoir, then proceeded northeastward toward Richmond, with ceremonies at stops in Mobile and Montgomery, Alabama, Atlanta, Georgia, then Charlotte and Greensboro, North Carolina. The train also detoured to Raleigh, North Carolina, where Davis's coffin, driven by James J. Jones, a free black man who had served Davis during the war and become a local businessman and politician, lay in state in that capital city. After a stop in Danville, Virginia, the Confederacy's last capital, and another ceremony at the Virginia State Capitol, Davis was then interred at Hollywood Cemetery in Richmond. Per the association's agreement with Varina, their children's remains were exhumed from Washington, D.C., Memphis and another plot at Hollywood Cemetery, to rest in the new family plot. A life-sized statue of Davis was eventually erected as promised by the Jefferson Davis Monument Association, in cooperation with the Southern Press Davis Monument Association, the United Confederate Veterans and ultimately the United Daughters of the Confederacy. The monument's cornerstone was laid in an 1896 ceremony, and it was dedicated with great pomp and 125,000 spectators on June 3, 1907, the last day of a Confederate reunion. Despite controversies about other monuments as discussed below, it continues to mark his tomb. Jefferson Davis served in many roles. As a soldier, he was brave and resourceful. As a politician, he served as a United States senator and a Mississippi congressman and was active and accomplished, although he never completed a full term in any elected position. As a plantation owner, he employed slave labor, as did most of his peers in the South, and supported slavery. As president of the Confederate States of America, he is widely viewed as an ineffective wartime leader; although the task of defending the Confederacy against the much stronger Union would have been a great challenge for any leader, Davis's performance in this role is considered poor. After the war, he contributed to reconciliation of the South with the North, but remained a symbol for Southern pride. 
Some portions of his legacy were created not as memorials, but as contemporary recognition of his service at the time. Fort Davis National Historic Site began as a frontier military post in October 1854, in the mountains of western Texas. It was named after then-United States Secretary of War Jefferson Davis. That fort gave its name to the surrounding Davis Mountains range, and the town of Fort Davis. The surrounding area was designated Jeff Davis County in 1887, with the town of Fort Davis as the county seat. Other states containing a Jefferson (or Jeff) Davis County/Parish include Georgia, Louisiana, and Mississippi. Jefferson Davis Hospital began operations in 1924 and was the first centralized municipal hospital to treat indigent patients in Houston, Texas. The building was designated as a protected historic landmark on November 13, 2013, by the Houston City Council and is monitored by the Historic Preservation Office of the City of Houston Department of Planning and Development. The hospital was named for Jefferson Davis, former president of the Confederacy, in honor of the Confederate soldiers who had been buried in the cemetery and as a means to console the families of the deceased. Numerous memorials to Jefferson Davis were created. The largest is the concrete obelisk located at the Jefferson Davis State Historic Site in Fairview, Kentucky, marking his birthplace. Construction of the monument began in 1917 and finished in 1924 at a cost of about $200,000. In 1913, the United Daughters of the Confederacy conceived the Jefferson Davis Memorial Highway, a transcontinental highway to be built through the South. Portions of the highway's route in Virginia, Alabama and other states still bear the name of Jefferson Davis. However, in Alexandria, Virginia, the city council voted unanimously to rename the highway and has solicited public suggestions for a new name. Davis appeared on several postage stamps issued by the Confederacy, including its first postage stamp (issued in 1861). In 1995, his portrait appeared on a United States postage stamp, part of a series of 20 stamps commemorating the 130th anniversary of the end of the Civil War. Davis was also celebrated on the 6-cent Stone Mountain Memorial Carving commemorative stamp, issued on September 19, 1970, at Stone Mountain, Georgia. The stamp portrayed Jefferson Davis, Robert E. Lee and Thomas J. "Stonewall" Jackson on horseback. It depicts a replica of the actual memorial, carved into the side of Stone Mountain high above ground level, the largest high-relief sculpture in the world. The Jefferson Davis Presidential Library was established at Beauvoir in 1998. For some years, the white-columned Biloxi mansion that was Davis's final home had served as a Confederate Veterans Home. The house and library were damaged by Hurricane Katrina in 2005; the house reopened in 2008. Bertram Hayes-Davis, Davis's great-great-grandson, is the executive director of Beauvoir, which is owned by the Mississippi Division of the Sons of Confederate Veterans. Based at Rice University in Houston, Texas, "The Papers of Jefferson Davis" is an editing project to publish documents related to Davis. The project began in the early 1960s and has published 13 volumes, the first in 1971 and the most recent in 2012; two more volumes are planned. The project has roughly 100,000 documents in its archives. The birthday of Jefferson Davis is commemorated in several states. 
His actual birthday, June 3, is celebrated in Florida, Kentucky, Louisiana and Tennessee; in Alabama, it is celebrated on the first Monday in June. In Mississippi, the last Monday of May (Memorial Day) is celebrated as "National Memorial Day and Jefferson Davis's Birthday". In Texas, "Confederate Heroes Day" is celebrated on January 19, the birthday of Robert E. Lee; Jefferson Davis's birthday had been officially celebrated on June 3 but was combined with Lee's birthday in 1973. Robert E. Lee's United States citizenship was posthumously restored in 1975. Davis had been specifically excluded from earlier resolutions restoring rights to other Confederate officials, and a movement arose to restore Davis's citizenship as well. This was accomplished with the passing of Senate Joint Resolution 16 on October 17, 1978. In signing the law, President Jimmy Carter referred to this as the last act of reconciliation in the Civil War. Jackie Robinson Jack Roosevelt Robinson (January 31, 1919 – October 24, 1972) was an American professional baseball player who became the first African American to play in Major League Baseball (MLB) in the modern era. Robinson broke the baseball color line when the Brooklyn Dodgers started him at first base on April 15, 1947. When the Dodgers signed Robinson, it heralded the end of racial segregation in professional baseball that had relegated black players to the Negro leagues since the 1880s. Robinson was inducted into the Baseball Hall of Fame in 1962. Robinson had an exceptional 10-year MLB career. He was the recipient of the inaugural MLB Rookie of the Year Award in 1947, was an All-Star for six consecutive seasons from 1949 through 1954, and won the National League Most Valuable Player Award in 1949—the first black player so honored. Robinson played in six World Series and contributed to the Dodgers' 1955 World Series championship. In 1997, MLB retired his uniform number 42 across all major league teams; he was the first pro athlete in any sport to be so honored. MLB also adopted a new annual tradition, "Jackie Robinson Day", first held on April 15, 2004, on which every player on every team wears No. 42. Robinson's character, his use of nonviolence, and his unquestionable talent challenged the traditional basis of segregation which then marked many other aspects of American life. He influenced the culture of and contributed significantly to the civil rights movement. Robinson also was the first black television analyst in MLB and the first black vice president of a major American corporation, Chock full o'Nuts. In the 1960s, he helped establish the Freedom National Bank, an African-American-owned financial institution based in Harlem, New York. After his death in 1972, in recognition of his achievements on and off the field, Robinson was posthumously awarded the Congressional Gold Medal and Presidential Medal of Freedom. Robinson was born on January 31, 1919, into a family of sharecroppers in Cairo, Georgia. He was the youngest of five children born to Mallie (McGriff) and Jerry Robinson, after siblings Edgar, Frank, Matthew (nicknamed "Mack"), and Willa Mae. His middle name was in honor of former President Theodore Roosevelt, who died 25 days before Robinson was born. After Robinson's father left the family in 1920, they moved to Pasadena, California. The extended Robinson family established itself on a residential plot containing two small houses at 121 Pepper Street in Pasadena. 
Robinson's mother worked various odd jobs to support the family. Growing up in relative poverty in an otherwise affluent community, Robinson and his minority friends were excluded from many recreational opportunities. As a result, Robinson joined a neighborhood gang, but his friend Carl Anderson persuaded him to abandon it. In 1935, Robinson graduated from Washington Junior High School and enrolled at John Muir High School (Muir Tech). Recognizing his athletic talents, Robinson's older brothers Mack (himself an accomplished athlete and silver medalist at the 1936 Summer Olympics) and Frank inspired Jackie to pursue his interest in sports. At Muir Tech, Robinson played several sports at the varsity level and lettered in four of them: football, basketball, track, and baseball. He played shortstop and catcher on the baseball team, quarterback on the football team, and guard on the basketball team. With the track and field squad, he won awards in the broad jump. He was also a member of the tennis team. In 1936, Robinson won the junior boys singles championship in the annual Pacific Coast Negro Tennis Tournament and earned a place on the Pomona annual baseball tournament all-star team, which included future Hall of Famers Ted Williams and Bob Lemon. In late January 1937, the "Pasadena Star-News" newspaper reported that Robinson "for two years has been the outstanding athlete at Muir, starring in football, basketball, track, baseball and tennis." After Muir, Robinson attended Pasadena Junior College (PJC), where he continued his athletic career by participating in basketball, football, baseball, and track. On the football team, he played quarterback and safety. He was a shortstop and leadoff hitter for the baseball team, and he broke school broad-jump records held by his brother Mack. As at Muir High School, most of Jackie's teammates were white. While playing football at PJC, Robinson suffered a fractured ankle, complications from which would eventually delay his deployment status while in the military. In 1938, he was elected to the All-Southland Junior College Team for baseball and selected as the region's Most Valuable Player. That year, Robinson was one of 10 students named to the school's Order of the Mast and Dagger ("Omicron Mu Delta"), awarded to students performing "outstanding service to the school and whose scholastic and citizenship record is worthy of recognition." Also while at PJC, he was elected to the Lancers, a student-run police organization responsible for patrolling various school activities. An incident at PJC illustrated Robinson's impatience with authority figures he perceived as racist—a character trait that would resurface repeatedly in his life. On January 25, 1938, he was arrested after vocally disputing the detention of a black friend by police. Robinson received a two-year suspended sentence, but the incident—along with other rumored run-ins between Robinson and police—gave Robinson a reputation for combativeness in the face of racial antagonism. While at PJC, he was motivated by a preacher (the Rev. Karl Downs) to attend church on a regular basis, and Downs became a confidant for Robinson, a Christian. Toward the end of his PJC tenure, Frank Robinson (to whom Robinson felt closest among his three brothers) was killed in a motorcycle accident. The event motivated Jackie to pursue his athletic career at the nearby University of California, Los Angeles (UCLA), where he could remain closer to Frank's family. 
After graduating from PJC in spring 1939, Robinson enrolled at UCLA, where he became the school's first athlete to win varsity letters in four sports: baseball, basketball, football, and track. He was one of four black players on the Bruins' 1939 football team; the others were Woody Strode, Kenny Washington, and Ray Bartlett. Washington, Strode, and Robinson made up three of the team's four backfield players. At a time when only a few black students played mainstream college football, this made UCLA college football's most integrated team. They went undefeated with four ties. In track and field, Robinson won the 1940 NCAA championship in the long jump. Belying his future career, Robinson's "worst sport" at UCLA was baseball; he hit .097 in his only season, although in his first game he went 4-for-4 and twice stole home. While a senior at UCLA, Robinson met his future wife, Rachel Isum (b. 1922), a UCLA freshman who was familiar with Robinson's athletic career. He played football as a senior, but the 1940 Bruins won only one game. In the spring, Robinson left college just shy of graduation, despite his mother's and Isum's reservations. He took a job as an assistant athletic director with the government's National Youth Administration (NYA). After the government ceased NYA operations, Robinson traveled to Honolulu in the fall of 1941 to play football for the semi-professional, racially integrated Honolulu Bears. After a short season, Robinson returned to California in December 1941 to pursue a career as running back for the Los Angeles Bulldogs of the Pacific Coast Football League. By that time, however, the Japanese attack on Pearl Harbor had taken place, drawing the United States into World War II and ending Robinson's nascent football career. In 1942, Robinson was drafted and assigned to a segregated Army cavalry unit in Fort Riley, Kansas. Having the requisite qualifications, Robinson and several other black soldiers applied for admission to an Officer Candidate School (OCS) then located at Fort Riley. Although the Army's initial July 1941 guidelines for OCS had been drafted as race neutral, few black applicants were admitted into OCS until after subsequent directives by Army leadership. As a result, the applications of Robinson and his colleagues were delayed for several months. After protests by heavyweight boxing champion Joe Louis (then stationed at Fort Riley) and the help of Truman Gibson (then an assistant civilian aide to the Secretary of War), the men were accepted into OCS. The experience led to a personal friendship between Robinson and Louis. Upon finishing OCS, Robinson was commissioned as a second lieutenant in January 1943. Shortly afterward, Robinson and Isum were formally engaged. After receiving his commission, Robinson was reassigned to Fort Hood, Texas, where he joined the 761st "Black Panthers" Tank Battalion. While at Fort Hood, Robinson often used his weekend leave to visit the Rev. Karl Downs, President of Sam Huston College (now Huston-Tillotson University) in nearby Austin, Texas; Downs had been Robinson's pastor at Scott United Methodist Church while Robinson attended PJC. An event on July 6, 1944 derailed Robinson's military career. While awaiting results of hospital tests on the ankle he had injured in junior college, Robinson boarded an Army bus with a fellow officer's wife; although the Army had commissioned its own unsegregated bus line, the bus driver ordered Robinson to move to the back of the bus. Robinson refused. 
The driver backed down, but after reaching the end of the line, summoned the military police, who took Robinson into custody. When Robinson later confronted the investigating duty officer about racist questioning by the officer and his assistant, the officer recommended Robinson be court-martialed. After Robinson's commander in the 761st, Paul L. Bates, refused to authorize the legal action, Robinson was summarily transferred to the 758th Battalion—where the commander quickly consented to charge Robinson with multiple offenses, including, among other charges, public drunkenness, even though Robinson did not drink. By the time of the court-martial in August 1944, the charges against Robinson had been reduced to two counts of insubordination during questioning. Robinson was acquitted by an all-white panel of nine officers. The experiences Robinson was subjected to during the court proceedings would be remembered when he later joined MLB and was subjected to racist attacks. Although his former unit, the 761st Tank Battalion, became the first black tank unit to see combat in World War II, Robinson's court-martial proceedings prohibited him from being deployed overseas; thus, he never saw combat action. After his acquittal, he was transferred to Camp Breckinridge, Kentucky, where he served as a coach for army athletics until receiving an honorable discharge in November 1944. While there, Robinson met a former player for the Kansas City Monarchs of the Negro American League, who encouraged Robinson to write the Monarchs and ask for a tryout. Robinson took the former player's advice and wrote to Monarchs' co-owner Thomas Baird. After his discharge, Robinson briefly returned to his old football club, the Los Angeles Bulldogs. Robinson then accepted an offer from his old friend and pastor Rev. Karl Downs to be the athletic director at Samuel Huston College in Austin, then of the Southwestern Athletic Conference. The job included coaching the school's basketball team for the 1944–45 season. As it was a fledgling program, few students tried out for the basketball team, and Robinson even resorted to inserting himself into the lineup for exhibition games. Although his teams were outmatched by opponents, Robinson was respected as a disciplinarian coach, and drew the admiration of, among others, Langston University basketball player Marques Haynes, a future member of the Harlem Globetrotters. In early 1945, while Robinson was at Sam Huston College, the Kansas City Monarchs sent him a written offer to play professional baseball in the Negro leagues. Robinson accepted a contract for $400 per month. Although he played well for the Monarchs, Robinson was frustrated with the experience. He had grown used to a structured playing environment in college, and the Negro leagues' disorganization and embrace of gambling interests appalled him. The hectic travel schedule also placed a burden on his relationship with Isum, with whom he could now communicate only by letter. In all, Robinson played 47 games at shortstop for the Monarchs, hitting .387 with five home runs, and registering 13 stolen bases. He also appeared in the 1945 Negro League All-Star Game, going hitless in five at-bats. During the season, Robinson pursued potential major league interests. No black man had played in the major leagues since Moses Fleetwood Walker in 1884, but the Boston Red Sox nevertheless held a tryout at Fenway Park for Robinson and other black players on April 16. 
The tryout, however, was a farce chiefly designed to assuage the desegregationist sensibilities of powerful Boston City Councilman Isadore Muchnick. Even with the stands limited to management, Robinson was subjected to racial epithets. He left the tryout humiliated, and more than fourteen years later, in July 1959, the Red Sox became the last major league team to integrate its roster. Other teams, however, had more serious interest in signing a black ballplayer. In the mid-1940s, Branch Rickey, club president and general manager of the Brooklyn Dodgers, began to scout the Negro leagues for a possible addition to the Dodgers' roster. Rickey selected Robinson from a list of promising black players and interviewed him for possible assignment to Brooklyn's International League farm club, the Montreal Royals. Rickey was especially interested in making sure his eventual signee could withstand the inevitable racial abuse that would be directed at him. In a famous three-hour exchange on August 28, 1945, Rickey asked Robinson if he could face the racial animus without taking the bait and reacting angrily—a concern given Robinson's prior arguments with law enforcement officials at PJC and in the military. Robinson was aghast: "Are you looking for a Negro who is afraid to fight back?" Rickey replied that he needed a Negro player "with guts enough not to fight back." After obtaining a commitment from Robinson to "turn the other cheek" to racial antagonism, Rickey agreed to sign him to a contract for $600 a month. Rickey did not offer compensation to the Monarchs, instead believing all Negro league players were free agents due to the contracts not containing a reserve clause. Among those Rickey discussed prospects with was Wendell Smith, writer for the black weekly "Pittsburgh Courier", who according to Cleveland Indians owner and team president Bill Veeck "influenced Rickey to take Jack Robinson, for which he's never completely gotten credit." Although he required Robinson to keep the arrangement a secret for the time being, Rickey committed to formally signing Robinson before November 1, 1945. On October 23, it was publicly announced that Robinson would be assigned to the Royals for the 1946 season. On the same day, with representatives of the Royals and Dodgers present, Robinson formally signed his contract with the Royals. In what was later referred to as "The Noble Experiment", Robinson was the first black baseball player in the International League since the 1880s. He was not necessarily the best player in the Negro leagues, and black talents Satchel Paige and Josh Gibson were upset when Robinson was selected first. Larry Doby, who broke the color line in the American League the same year as Robinson, said, "One of the things that was disappointing and disheartening to a lot of the black players at the time was that Jack was not the best player. The best was Josh Gibson. I think that's one of the reasons why Josh died so early – he was heartbroken." Rickey's offer allowed Robinson to leave behind the Monarchs and their grueling bus rides, and he went home to Pasadena. That September, he signed with Chet Brewer's Kansas City Royals, a post-season barnstorming team in the California Winter League. Later that off-season, he briefly toured South America with another barnstorming team, while his fiancée Isum pursued nursing opportunities in New York City. On February 10, 1946, Robinson and Isum were married by their old friend, the Rev. Karl Downs. 
In 1946, Robinson arrived at Daytona Beach, Florida, for spring training with the Montreal Royals of the Class AAA International League (the designation of "AAA" for the highest level of minor league baseball was first used in the 1946 season). Clay Hopper, the manager of the Royals, asked Rickey to assign Robinson to any other Dodger affiliate, but Rickey refused. Robinson's presence was controversial in racially charged Florida. As he was not allowed to stay with his teammates at the team hotel, he lodged instead at the home of a local black politician. Since the Dodgers organization did not own a spring training facility (the Dodger-controlled spring training compound in Vero Beach known as "Dodgertown" did not open until spring 1948), scheduling was subject to the whim of area localities, several of which turned down any event involving Robinson or Johnny Wright, another black player whom Rickey had signed to the Dodgers' organization in January. In Sanford, Florida, the police chief threatened to cancel games if Robinson and Wright did not cease training activities there; as a result, Robinson was sent back to Daytona Beach. In Jacksonville, the stadium was padlocked shut without warning on game day, by order of the city's Parks and Public Property director. In DeLand, a scheduled day game was postponed, ostensibly because of issues with the stadium's electrical lighting. After much lobbying of local officials by Rickey himself, the Royals were allowed to host a game involving Robinson in Daytona Beach. Robinson made his Royals debut at Daytona Beach's City Island Ballpark on March 17, 1946, in an exhibition game against the team's parent club, the Dodgers. Robinson thus became the first black player to openly play for a minor league team against a major league team since the "de facto" baseball color line had been implemented in the 1880s. Later in spring training, after some less-than-stellar performances, Robinson was shifted from shortstop to second base, allowing him to make shorter throws to first base. Robinson's performance soon rebounded. On April 18, 1946, Roosevelt Stadium hosted the Jersey City Giants' season opener against the Montreal Royals, marking the professional debut of the Royals' Jackie Robinson and the first time the color barrier had been broken in a game between two minor league clubs. Pitching against Robinson was Warren Sandel who had played against him when they both lived in California. During Robinson's first at bat, the Jersey City catcher, Dick Bouknight, demanded that Sandel throw at Robinson, but Sandel refused. Although Sandel induced Robinson to ground out at his first at bat, Robinson ended up with four hits in his five trips to the plate; his first hit was a three-run home run in the game's third inning. He also scored four runs, drove in three, and stole two bases in the Royals' 14–1 victory. Robinson proceeded to lead the International League that season with a .349 batting average and .985 fielding percentage, and he was named the league's Most Valuable Player. Although he often faced hostility while on road trips (the Royals were forced to cancel a Southern exhibition tour, for example), the Montreal fan base enthusiastically supported Robinson. Whether fans supported or opposed it, Robinson's presence on the field was a boon to attendance; more than one million people went to games involving Robinson in 1946, an amazing figure by International League standards. 
In the fall of 1946, following the baseball season, Robinson returned home to California and briefly played professional basketball for the short-lived Los Angeles Red Devils. In 1947, the Dodgers called Robinson up to the major leagues six days before the start of the season. With Eddie Stanky entrenched at second base for the Dodgers, Robinson played his initial major league season as a first baseman. On April 15, Robinson made his major league debut at the relatively advanced age of 28 at Ebbets Field before a crowd of 26,623 spectators, more than 14,000 of whom were black. Although he failed to get a base hit, he walked and scored a run in the Dodgers' 5–3 victory. Robinson became the first player since 1884 to openly break the major league baseball color line. Black fans began flocking to see the Dodgers when they came to town, abandoning their Negro league teams. Robinson's promotion met a generally positive, although mixed, reception among newspapers and white major league players. However, racial tension existed in the Dodger clubhouse. Some Dodger players insinuated they would sit out rather than play alongside Robinson. The brewing mutiny ended when Dodgers management took a stand for Robinson. Manager Leo Durocher informed the team, "I do not care if the guy is yellow or black, or if he has stripes like a fuckin' zebra. I'm the manager of this team, and I say he plays. What's more, I say he can make us all rich. And if any of you cannot use the money, I will see that you are all traded." Robinson was also derided by opposing teams. Some, notably the St. Louis Cardinals, threatened to strike if Robinson played and to spread the walkout across the entire National League. Existence of the plot was leaked by the Cardinals' team physician, Robert Hyland, to a friend, the "New York Herald Tribune"'s Rutherford "Rud" Rennie. The reporter, concerned about protecting Hyland's anonymity and job, in turn leaked it to his "Tribune" colleague and editor, Stanley Woodward, whose own subsequent reporting with other sources protected Hyland. The Woodward article made national headlines. After the threat was exposed, National League President Ford Frick and Baseball Commissioner Happy Chandler let it be known that any striking players would be suspended. "You will find that the friends that you think you have in the press box will not support you, that you will be outcasts," threatened Chandler. "I do not care if half the league strikes. Those who do it will encounter quick retribution. All will be suspended and I don't care if it wrecks the National League for five years. This is the United States of America and one citizen has as much right to play as another." Woodward's article received the E. P. Dutton Award in 1947 for Best Sports Reporting. "New York Times" columnist Red Smith revisited the Cardinals' 1947 racial strike in 1977, as a spate of commemorative articles appeared on the 30th anniversary of Robinson's signing with the Dodgers. Smith remembered his old "Herald Tribune" colleagues' part in exposing the players' strike conspiracy. It would have succeeded, wrote Smith, "…if Rud Rennie and Stanley Woodward hadn't exposed their intentions in the "New York Herald Tribune"." Robinson nonetheless became the target of rough physical play by opponents (particularly the Cardinals). At one time, he received a seven-inch gash in his leg from Enos Slaughter. 
On April 22, 1947, during a game between the Dodgers and the Philadelphia Phillies, Phillies players and manager Ben Chapman called Robinson a "nigger" from their dugout and yelled that he should "go back to the cotton fields". Rickey later recalled that Chapman "did more than anybody to unite the Dodgers. When he poured out that string of unconscionable abuse, he solidified and united thirty men." Robinson did, however, receive significant encouragement from several major league players. Robinson named Lee "Jeep" Handley, who played for the Phillies at the time, as the first opposing player to wish him well. Dodgers teammate Pee Wee Reese once came to Robinson's defense with the famous line, "You can hate a man for many reasons. Color is not one of them." In 1948, Reese put his arm around Robinson in response to fans who shouted racial slurs at Robinson before a game in Cincinnati. A statue by sculptor William Behrends, unveiled at KeySpan Park on November 1, 2005, commemorates this event by representing Reese with his arm around Robinson. Jewish baseball star Hank Greenberg, who had to deal with racial epithets during his career, also encouraged Robinson. Following an incident where Greenberg collided with Robinson at first base, he "whispered a few words into Robinson's ear", which Robinson later characterized as "words of encouragement." Greenberg had advised him to overcome his critics by defeating them in games. Robinson also talked frequently with Larry Doby, who was enduring his own hardships since becoming the first black player in the American League with the Cleveland Indians, as the two spoke to one another via telephone throughout the season. Robinson finished the season having played in 151 games for the Dodgers, with a batting average of .297, an on-base percentage of .383, and a .427 slugging percentage. He had 175 hits (scoring 125 runs) including 31 doubles, 5 triples, and 12 home runs, driving in 48 runs for the year. Robinson led the league in sacrifice hits, with 28, and in stolen bases, with 29. His cumulative performance earned him the inaugural Major League Baseball Rookie of the Year Award (separate National and American League Rookie of the Year honors were not awarded until 1949). Following Stanky's trade to the Boston Braves in March 1948, Robinson took over second base, where he logged a .980 fielding percentage that year (second in the National League at the position, fractionally behind Stanky). Robinson had a batting average of .296 and 22 stolen bases for the season. In a 12–7 win against the St. Louis Cardinals on August 29, 1948, he hit for the cycle—a home run, a triple, a double, and a single in the same game. The Dodgers briefly moved into first place in the National League in late August 1948, but they ultimately finished third as the Braves went on to win the league title and lose to the Cleveland Indians in the World Series. Racial pressure on Robinson eased in 1948 as a number of other black players entered the major leagues. Larry Doby (who broke the color barrier in the American League on July 5, 1947, just 11 weeks after Robinson) and Satchel Paige played for the Cleveland Indians, and the Dodgers had three other black players besides Robinson. In February 1948, he signed a $12,500 contract with the Dodgers; while a significant amount, this was less than Robinson made in the off-season from a vaudeville tour, where he answered pre-set baseball questions, and a speaking tour of the South. 
Between the tours, he underwent surgery on his right ankle. Because of his off-season activities, Robinson reported to training camp overweight. He lost the weight during training camp, but dieting left him weak at the plate. In 1948, Wendell Smith's book, "Jackie Robinson: My Own Story", was released. In the spring of 1949, Robinson turned to Hall of Famer George Sisler, working as an advisor to the Dodgers, for batting help. At Sisler's suggestion, Robinson spent hours at a batting tee, learning to hit the ball to right field. Sisler taught Robinson to anticipate a fastball, on the theory that it is easier to subsequently adjust to a slower curveball. Robinson also noted that "Sisler showed me how to stop lunging, how to check my swing until the last fraction of a second". The tutelage helped Robinson raise his batting average from .296 in 1948 to .342 in 1949. In addition to his improved batting average, Robinson stole 37 bases that season, finished second in the league in both doubles and triples, and registered 124 runs batted in with 122 runs scored. For the performance, Robinson earned the Most Valuable Player Award for the National League. Baseball fans also voted Robinson as the starting second baseman for the 1949 All-Star Game—the first All-Star Game to include black players. That year, a song about Robinson by Buddy Johnson, "Did You See Jackie Robinson Hit That Ball?", reached number 13 on the charts; Count Basie recorded a famous version. Ultimately, the Dodgers won the National League pennant, but lost in five games to the New York Yankees in the 1949 World Series. Summer 1949 brought an unwanted distraction for Robinson. In July, he was called to testify before the United States House of Representatives' Committee on Un-American Activities (HUAC) concerning statements made that April by black athlete and actor Paul Robeson. Robinson was reluctant to testify, but he eventually agreed to do so, fearing it might negatively affect his career if he declined. In 1950, Robinson led the National League in double plays made by a second baseman with 133. His salary that year was the highest any Dodger had been paid to that point: $35,000. He finished the year with 99 runs scored, a .328 batting average, and 12 stolen bases. The year saw the release of a film biography of Robinson's life, "The Jackie Robinson Story", in which Robinson played himself, and actress Ruby Dee played Rachel "Rae" (Isum) Robinson. The project had been previously delayed when the film's producers refused to accede to demands of two Hollywood studios that the movie include scenes of Robinson being tutored in baseball by a white man. "The New York Times" wrote that Robinson, "doing that rare thing of playing himself in the picture's leading role, displays a calm assurance and composure that might be envied by many a Hollywood star." Robinson's Hollywood exploits, however, did not sit well with Dodgers co-owner Walter O'Malley, who referred to Robinson as "Rickey's prima donna". In late 1950, Rickey's contract as the Dodgers' team President expired. Weary of constant disagreements with O'Malley, and with no hope of being re-appointed as President of the Dodgers, Rickey cashed out his one-quarter financial interest in the team, leaving O'Malley in full control of the franchise. Rickey shortly thereafter became general manager of the Pittsburgh Pirates. 
Robinson was disappointed at the turn of events and wrote a sympathetic letter to Rickey, whom he considered a father figure, stating, "Regardless of what happens to me in the future, it all can be placed on what you have done and, believe me, I appreciate it." Before the 1951 season, O'Malley reportedly offered Robinson the job of manager of the Montreal Royals, effective at the end of Robinson's playing career. O'Malley was quoted in the "Montreal Standard" as saying, "Jackie told me that he would be both delighted and honored to tackle this managerial post"—although reports differed as to whether a position was ever formally offered. During the 1951 season, Robinson led the National League in double plays made by a second baseman for the second year in a row, with 137. He also kept the Dodgers in contention for the 1951 pennant. During the last game of the regular season, in the 13th inning, he had a hit to tie the game, and then won the game with a home run in the 14th. This forced a best-of-three playoff series against the crosstown rival New York Giants. Despite Robinson's regular-season heroics, the Dodgers lost the pennant on Bobby Thomson's famous home run, known as the Shot Heard 'Round the World, on October 3, 1951. Overcoming his dejection, Robinson dutifully observed Thomson's feet to ensure he touched all the bases. Dodgers sportscaster Vin Scully later noted that the incident showed "how much of a competitor Robinson was." He finished the season with 106 runs scored, a batting average of .335, and 25 stolen bases. Robinson had what was an average year for him in 1952. He finished the year with 104 runs, a .308 batting average, and 24 stolen bases. He did, however, record a career-high on-base percentage of .436. The Dodgers improved on their performance from the year before, winning the National League pennant before losing the 1952 World Series to the New York Yankees in seven games. That year, on the television show "Youth Wants to Know", Robinson challenged the Yankees' general manager, George Weiss, on the racial record of his team, which had yet to sign a black player. Sportswriter Dick Young, whom Robinson had described as a "bigot", said, "If there was one flaw in Jackie, it was the common one. He believed that everything unpleasant that happened to him happened because of his blackness." The 1952 season was the last year Robinson was an everyday starter at second base. Afterward, Robinson played variously at first, second, and third bases, shortstop, and in the outfield, with Jim Gilliam, another black player, taking over everyday second base duties. Robinson's interests began to shift toward the prospect of managing a major league team. He had hoped to gain experience by managing in the Puerto Rican Winter League, but according to the "New York Post", Commissioner Happy Chandler denied the request. In 1953, Robinson had 109 runs, a .329 batting average, and 17 steals, leading the Dodgers to another National League pennant (and another World Series loss to the Yankees, this time in six games). Robinson's continued success spawned a string of death threats. He was not dissuaded, however, from addressing racial issues publicly. That year, he served as editor for "Our Sports" magazine, a periodical focusing on Negro sports issues; contributions to the magazine included an article on golf course segregation by Robinson's old friend Joe Louis. 
Robinson also openly criticized segregated hotels and restaurants that served the Dodger organization; a number of these establishments integrated as a result, including the five-star Chase Park Hotel in St. Louis. In 1954, Robinson had 62 runs scored, a .311 batting average, and 7 steals. His best day at the plate was on June 17, when he hit two home runs and two doubles. The following autumn, Robinson won his only championship when the Dodgers beat the New York Yankees in the 1955 World Series. Although the team enjoyed ultimate success, 1955 was the worst year of Robinson's individual career. He hit .256 and stole only 12 bases. The Dodgers tried Robinson in the outfield and as a third baseman, both because of his diminishing abilities and because Gilliam was established at second base. Robinson, then 37 years old, missed 49 games and did not play in Game 7 of the World Series. Robinson missed the game because manager Walter Alston decided to play Gilliam at second and Don Hoak at third base. That season, the Dodgers' Don Newcombe became the first black major league pitcher to win twenty games in a year. In 1956, Robinson had 61 runs scored, a .275 batting average, and 12 steals. By then, he had begun to exhibit the effects of diabetes and to lose interest in the prospect of playing or managing professional baseball. Robinson ended his major league career when he struck out to end Game 7 of the 1956 World Series. After the season, Robinson was traded by the Dodgers to the arch-rival New York Giants for Dick Littlefield and $35,000 cash. The trade, however, was never completed; unbeknownst to the Dodgers, Robinson had already agreed with the president of Chock full o'Nuts to quit baseball and become an executive with the company. Since Robinson had sold exclusive rights to any retirement story to "Look" magazine two years previously, his retirement decision was revealed through the magazine, instead of through the Dodgers organization. Robinson's major league debut brought an end to approximately sixty years of segregation in professional baseball, known as the baseball color line. After World War II, several other forces were also leading the country toward increased equality for blacks, including their accelerated migration to the North, where their political clout grew, and President Harry Truman's desegregation of the military in 1948. Robinson's breaking of the baseball color line and his professional success symbolized these broader changes and demonstrated that the fight for equality was more than simply a political matter. Martin Luther King, Jr. said that he was "a legend and a symbol in his own time", and that he "challenged the dark skies of intolerance and frustration." According to historian Doris Kearns Goodwin, Robinson's "efforts were a monumental step in the civil-rights revolution in America ... [His] accomplishments allowed black and white Americans to be more respectful and open to one another and more appreciative of everyone's abilities." Beginning his major league career at the relatively advanced age of twenty-eight, he played only ten seasons from 1947 to 1956, all of them for the Brooklyn Dodgers. During his career, the Dodgers played in six World Series, and Robinson himself played in six All-Star Games. In 1999, he was posthumously named to the Major League Baseball All-Century Team. 
Robinson's career is generally considered to mark the beginning of the post–"long ball" era in baseball, in which a reliance on raw power-hitting gave way to balanced offensive strategies that used footspeed to create runs through aggressive baserunning. Robinson exhibited the combination of hitting ability and speed which exemplified the new era. He scored more than 100 runs in six of his ten seasons (averaging more than 110 runs from 1947 to 1953), had a .311 career batting average, a .409 career on-base percentage, a .474 slugging percentage, and substantially more walks than strikeouts (740 to 291). Robinson was one of only two players during the span of 1947–56 to accumulate at least 125 steals while registering a slugging percentage over .425 (Minnie Miñoso was the other). He accumulated 197 stolen bases in total, including 19 steals of home. None of the latter were double steals (in which a player stealing home is assisted by a player stealing another base at the same time). Robinson has been referred to by author David Falkner as "the father of modern base-stealing". Historical statistical analysis indicates Robinson was an outstanding fielder throughout his ten years in the major leagues and at virtually every position he played. After playing his rookie season at first base, Robinson spent most of his career as a second baseman. He led the league in fielding among second basemen in 1950 and 1951. Toward the end of his career, he played about 2,000 innings at third base and about 1,175 innings in the outfield, excelling at both. Assessing himself, Robinson said, "I'm not concerned with your liking or disliking me ... all I ask is that you respect me as a human being." Regarding Robinson's qualities on the field, Leo Durocher said, "Ya want a guy that comes to play. This guy didn't just come to play. He come to beat ya. He come to stuff the goddamn bat right up your ass." Robinson portrayed himself in the 1950 motion picture "The Jackie Robinson Story". Robinson was also the subject of a 2016 PBS documentary, "Jackie Robinson", which was directed by Ken Burns and features Jamie Foxx doing voice-over as Robinson. Robinson once told future Hall of Fame inductee Hank Aaron that "the game of baseball is great, but the greatest thing is what you do after your career is over." Robinson retired from baseball at age 37 on January 5, 1957. Later that year, after he complained of numerous physical ailments, he was diagnosed with diabetes, a disease that also afflicted his brothers. Although Robinson adopted an insulin injection regimen, the state of medicine at the time could not prevent the continued deterioration of Robinson's physical condition from the disease. In his first year of eligibility for the Baseball Hall of Fame in 1962, Robinson encouraged voters to consider only his on-field qualifications, rather than his cultural impact on the game. He was elected on the first ballot, becoming the first black player inducted into the Cooperstown museum. In 1965, Robinson served as an analyst for ABC's "Major League Baseball Game of the Week" telecasts, the first black person to do so. In 1966, Robinson was hired as general manager for the short-lived Brooklyn Dodgers of the Continental Football League. In 1972, he served as a part-time commentator on Montreal Expos telecasts. On June 4, 1972, the Dodgers retired his uniform number, 42, alongside those of Roy Campanella (39) and Sandy Koufax (32). 
From 1957 to 1964, Robinson was the vice president for personnel at Chock full o'Nuts; he was the first black person to serve as vice president of a major American corporation. Robinson always considered his business career as advancing the cause of black people in commerce and industry. Robinson also chaired the National Association for the Advancement of Colored People's (NAACP) million-dollar Freedom Fund Drive in 1957, and served on the organization's board until 1967. In 1964, he helped found, with Harlem businessman Dunbar McLaurin, Freedom National Bank—a black-owned and operated commercial bank based in Harlem. He also served as the bank's first chairman of the board. In 1970, Robinson established the Jackie Robinson Construction Company to build housing for low-income families. Robinson was active in politics throughout his post-baseball life. He identified himself as a political independent, although he held conservative opinions on several issues, including the Vietnam War (he once wrote to Martin Luther King, Jr. to defend the Johnson Administration's military policy). After supporting Richard Nixon in his 1960 presidential race against John F. Kennedy, Robinson later praised Kennedy effusively for his stance on civil rights. Robinson was angered by conservative Republican opposition to the Civil Rights Act of 1964, though a higher percentage of Democrats voted against it in both the House and Senate. He became one of six national directors for Nelson Rockefeller's unsuccessful campaign to be nominated as the Republican candidate for the 1964 presidential election. After the party nominated Senator Barry Goldwater of Arizona instead, Robinson left the party's convention commenting that he now had "a better understanding of how it must have felt to be a Jew in Hitler's Germany". He later became special assistant for community affairs when Rockefeller was re-elected governor of New York in 1966. Switching his allegiance to the Democrats, he subsequently supported Hubert Humphrey against Nixon in 1968. Robinson protested against the major leagues' ongoing lack of minority managers and central office personnel, and he turned down an invitation to appear in an old-timers' game at Yankee Stadium in 1969. He made his final public appearance on October 15, 1972, throwing the ceremonial first pitch before Game 2 of the World Series at Riverfront Stadium in Cincinnati. He gratefully accepted a plaque honoring the twenty-fifth anniversary of his MLB debut, but also commented, "I'm going to be tremendously more pleased and more proud when I look at that third base coaching line one day and see a black face managing in baseball." This wish was only fulfilled after Robinson's death: following the 1974 season, the Cleveland Indians gave their managerial post to Frank Robinson (no relation to Jackie), a Hall of Fame-bound player who would go on to manage three other teams. Despite the success of these two Robinsons and other black players, the number of African-American players in Major League Baseball has declined since the 1970s. After Robinson's retirement from baseball, his wife Rachel Robinson pursued a career in academic nursing. She became an assistant professor at the Yale School of Nursing and director of nursing at the Connecticut Mental Health Center. She also served on the board of the Freedom National Bank until it closed in 1990. She and Jackie had three children: Jackie Robinson Jr. (1946–1971), Sharon Robinson (b. 1950), and David Robinson (b. 1952). 
Robinson's eldest son, Jackie Robinson Jr., had emotional trouble during his childhood and entered special education at an early age. He enlisted in the Army in search of a disciplined environment, served in the Vietnam War, and was wounded in action on November 19, 1965. After his discharge, he struggled with drug problems. Robinson Jr. eventually completed the treatment program at Daytop Village in Seymour, Connecticut, and became a counselor at the institution. On June 17, 1971, he was killed in an automobile accident at age 24. The experience with his son's drug addiction turned Robinson Sr. into an avid anti-drug crusader toward the end of his life. Robinson did not long outlive his son. Complications from heart disease and diabetes weakened Robinson and made him almost blind by middle age. On October 24, 1972, nine days after his appearance at the World Series, Robinson died of a heart attack at his home at 95 Cascade Road in North Stamford, Connecticut; he was 53 years old. Robinson's funeral service on October 27, 1972, at Upper Manhattan's Riverside Church adjacent to Grant's Tomb in Morningside Heights, attracted 2,500 mourners. Many of his former teammates and other famous baseball players served as pallbearers, and the Rev. Jesse Jackson gave the eulogy. Tens of thousands of people lined the subsequent procession route to Robinson's interment site at Cypress Hills Cemetery in Brooklyn, New York, where he was buried next to his son Jackie and mother-in-law Zellee Isum. Twenty-five years after Robinson's death, the Interboro Parkway was renamed the Jackie Robinson Parkway in his memory. This parkway bisects the cemetery in close proximity to Robinson's gravesite. After Robinson's death, his widow founded the Jackie Robinson Foundation, and she remains an officer as of 2018. On April 15, 2008, she announced that in 2010 the foundation would be opening a museum devoted to Jackie in Lower Manhattan. Robinson's daughter, Sharon, became a midwife, educator, director of educational programming for MLB, and the author of two books about her father. His youngest son, David, who has six children, is a coffee grower and social activist in Tanzania. According to a poll conducted in 1947, Robinson was the second most popular man in the country, behind Bing Crosby. In 1999, he was named by "Time" to its list of the 100 most important people of the 20th century. Also in 1999, he ranked number 44 on the "Sporting News" list of Baseball's 100 Greatest Players and was elected to the Major League Baseball All-Century Team as the top vote-getter among second basemen. Baseball writer Bill James, in "The New Bill James Historical Baseball Abstract", ranked Robinson as the 32nd greatest player of all time strictly on the basis of his performance on the field, noting that he was one of the top players in the league throughout his career. Robinson was among the 25 charter members of UCLA's Athletics Hall of Fame in 1984. In 2002, Molefi Kete Asante included Robinson on his list of 100 Greatest African Americans. Robinson has also been honored by the United States Postal Service on three separate postage stamps, in 1982, 1999, and 2000. The City of Pasadena has recognized Robinson in several ways. Brookside Park, situated next to the Rose Bowl, features a baseball diamond and stadium named Jackie Robinson Field. The city's Human Services Department operates the Jackie Robinson Center, a community outreach center that provides health services. 
In 1997, a $325,000 bronze sculpture by artists Ralph Helmick, Stu Schecter, and John Outterbridge depicting oversized nine-foot busts of Robinson and his brother Mack was erected at Garfield Avenue, across from the main entrance of Pasadena City Hall; a granite footprint lists multiple donors to the commission project, which was organized by the Robinson Memorial Foundation and supported by members of the Robinson family. Major League Baseball has honored Robinson many times since his death. In 1987, both the National and American League Rookie of the Year Awards were renamed the "Jackie Robinson Award" in honor of the first recipient (Robinson's Major League Rookie of the Year Award in 1947 encompassed both leagues). On April 15, 1997, Robinson's jersey number, 42, was retired throughout Major League Baseball, the first time any jersey number had been retired throughout one of the four major American sports leagues. Under the terms of the retirement, a grandfather clause allowed the handful of players who wore number 42 to continue doing so in tribute to Robinson, until such time as they subsequently changed teams or jersey numbers. This affected players such as the Mets' Butch Huskey and Boston's Mo Vaughn. The Yankees' Mariano Rivera, who retired at the end of the 2013 season, was the last player in Major League Baseball to wear jersey number 42 on a regular basis. Since 1997, only Wayne Gretzky's number 99, retired by the NHL in 2000, has been retired league-wide. There have also been calls for MLB to retire number 21 league-wide in honor of Roberto Clemente, a sentiment opposed by the Robinson family. The Hispanics Across America advocacy group wants Clemente's number set aside the way the late Robinson's No. 42 was in 1997, but Sharon Robinson maintained the position that such an honor should remain in place for Jackie Robinson only. As an exception to the retired-number policy, MLB began honoring Robinson by allowing players to wear number 42 on April 15, Jackie Robinson Day, which is an annual observance that started in 2004. For the 60th anniversary of Robinson's major league debut, MLB invited players to wear the number 42 on Jackie Robinson Day in 2007. The gesture was originally the idea of outfielder Ken Griffey, Jr., who sought Rachel Robinson's permission to wear the number. After receiving her permission, Commissioner Bud Selig not only allowed Griffey to wear the number, but also extended an invitation to all major league teams to do the same. Ultimately, more than 200 players wore number 42, including the entire rosters of the Los Angeles Dodgers, New York Mets, Houston Astros, Philadelphia Phillies, St. Louis Cardinals, Milwaukee Brewers, and Pittsburgh Pirates. The tribute was continued in 2008, when, during games on April 15, all members of the Mets, Cardinals, Washington Nationals, and Tampa Bay Rays wore Robinson's number 42. On June 25, 2008, MLB installed a new plaque for Robinson at the Baseball Hall of Fame commemorating his off-the-field impact on the game as well as his playing statistics. In 2009, all uniformed personnel (players, managers, coaches, and umpires) wore number 42 on April 15. At the November 2006 groundbreaking for Citi Field, the new ballpark for the New York Mets, it was announced that the main entrance, modeled on the one in Brooklyn's old Ebbets Field, would be called the Jackie Robinson Rotunda. The rotunda was dedicated at the opening of Citi Field on April 16, 2009. 
It honors Robinson with large quotations spanning the inner curve of the facade and features a large freestanding statue of his number, 42, which has become an attraction in itself. Mets owner Fred Wilpon announced that the Mets—in conjunction with Citigroup and the Jackie Robinson Foundation—would create a Jackie Robinson Museum and Learning Center, located at the headquarters of the Jackie Robinson Foundation at One Hudson Square, along Canal Street in lower Manhattan. Along with the museum, scholarships would be awarded to "young people who live by and embody Jackie's ideals." The museum hopes to open by 2019. At Dodger Stadium in Los Angeles, a statue of Robinson was unveiled in 2017. Since 2004, the Aflac National High School Baseball Player of the Year has been presented the "Jackie Robinson Award". Robinson has also been recognized outside of baseball. In December 1956, the NAACP recognized him with the Spingarn Medal, which it awards annually for the highest achievement by an African-American. President Ronald Reagan posthumously awarded Robinson the Presidential Medal of Freedom on March 26, 1984, and on March 2, 2005, President George W. Bush gave Robinson's widow the Congressional Gold Medal, the highest civilian award bestowed by Congress; Robinson was only the second baseball player to receive the award, after Roberto Clemente. On August 20, 2007, California Governor Arnold Schwarzenegger and his wife, Maria Shriver, announced that Robinson was inducted into the California Hall of Fame, located at The California Museum for History, Women and the Arts in Sacramento. A number of buildings have been named in Robinson's honor. The UCLA Bruins baseball team plays in Jackie Robinson Stadium, which, because of the efforts of Jackie's brother Mack, features a memorial statue of Robinson by sculptor Richard H. Ellis. The stadium also unveiled a new mural of Robinson by Mike Sullivan on April 14, 2013. City Island Ballpark in Daytona Beach, Florida was renamed Jackie Robinson Ballpark in 1990 and a statue of Robinson with two children stands in front of the ballpark. His wife Rachel was present for the dedication on September 15, 1990. A number of facilities at Pasadena City College (successor to PJC) are named in Robinson's honor, including Robinson Field, a football/soccer/track facility named jointly for Robinson and his brother Mack. The New York Public School system has named a middle school after Robinson, and Dorsey High School plays at a Los Angeles football stadium named after him. In 1976, his home in Brooklyn, the Jackie Robinson House, was declared a National Historic Landmark. Brooklyn residents want to turn his home into a city landmark. Robinson also has an asteroid named after him, 4319 Jackierobinson. In 1997, the United States Mint issued a Jackie Robinson commemorative silver dollar and five-dollar gold coin. That same year, New York City renamed the Interboro Parkway in his honor. In 2011, the U.S. placed a plaque at Robinson's Montreal home to honor the ending of segregation in baseball. The house, at 8232 avenue de Gaspé near Jarry Park, was Robinson's residence when he played for the Montreal Royals during 1946. In a letter read during the ceremony, Rachel Robinson, Jackie's widow, wrote: "I remember Montreal and that house very well and have always had warm feeling for that great city. Before Jack and I moved to Montreal, we had just been through some very rough treatment in the racially biased South during spring training in Florida. 
In the end, Montreal was the perfect place for him to get his start. We never had a threatening or unpleasant experience there. The people were so welcoming and saw Jack as a player and as a man." On November 22, 2014, UCLA announced that it would officially retire the number 42 across all university sports, effective immediately. While Robinson wore several different numbers during his UCLA career, the school chose 42 because it had become indelibly identified with him. The only sport this did not affect was men's basketball, which had previously retired the number for Walt Hazzard (although Kevin Love was actually the last player in that sport to wear 42, with Hazzard's blessing). In a move paralleling that of MLB when it retired the number, UCLA allowed three athletes (in women's soccer, softball, and football) who were already wearing 42 to continue to do so for the remainder of their UCLA careers. The school also announced it would prominently display the number at all of its athletic venues. A jersey Robinson brought home with him in 1947 after his rookie season was sold at an auction for $2.05 million on November 19, 2017. The price was the highest ever paid for a post-World War II jersey. James G. Blaine James Gillespie Blaine (January 31, 1830 – January 27, 1893) was an American statesman and Republican politician who represented Maine in the U.S. House of Representatives from 1863 to 1876, serving as Speaker of the U.S. House of Representatives from 1869 to 1875, and then in the United States Senate from 1876 to 1881. Blaine twice served as Secretary of State (1881, 1889–1892), one of only two persons to hold the position under three separate presidents (the other being Daniel Webster), and unsuccessfully sought the Republican nomination for President in 1876 and 1880 before being nominated in 1884. In the general election, he was narrowly defeated by Democrat Grover Cleveland. Blaine was one of the late 19th century's leading Republicans and a champion of the moderate reformist faction of the party known as the "Half-Breeds." Blaine was born in the western Pennsylvania town of West Brownsville and after college moved to Maine, where he became a newspaper editor. Nicknamed "the Magnetic Man," he was a charismatic speaker in an era that prized oratory. He began his political career as an early supporter of Abraham Lincoln and the Union war effort in the American Civil War. During Reconstruction, Blaine was a supporter of black suffrage, but opposed some of the more coercive measures of the Radical Republicans. Initially a protectionist, he later worked for a reduction in the tariff and an expansion of American trade with foreign countries. Railroad promotion and construction were important issues in his time, and as a result of his interest and support, Blaine was widely suspected of corruption in the awarding of railroad charters; these allegations plagued his 1884 presidential candidacy. As Secretary of State, Blaine was a transitional figure, marking the end of an isolationist era in foreign policy and foreshadowing the rise of the American Century that would begin with the Spanish–American War. His efforts at expanding the United States' trade and influence began the shift to a more active American foreign policy. Blaine was a pioneer of tariff reciprocity and urged greater involvement in Latin American affairs. 
An expansionist, Blaine pursued policies that would lead in less than a decade to the United States' acquisition of Pacific colonies and dominance of the Caribbean. James Gillespie Blaine was born on January 31, 1830, in West Brownsville, Pennsylvania, the third child of Ephraim Lyon Blaine and his wife Maria (Gillespie) Blaine. He had two older sisters, Harriet and Margaret. Blaine's father was a western Pennsylvania businessman and landowner, and the family lived in relative comfort. On his father's side, Blaine was descended from Scotch-Irish settlers who first emigrated to Pennsylvania in 1745. His great-grandfather, Ephraim Blaine, served as a Commissary-General under George Washington in the American Revolutionary War. Blaine's mother and her forebears were Irish Catholics who immigrated to Pennsylvania in the 1780s. Blaine's parents were married in 1820 in a Roman Catholic ceremony, although Blaine's father remained a Presbyterian. Following a common compromise of the era, the Blaines agreed that their daughters would be raised in their mother's Catholic faith while their sons would be brought up in their father's religion. In politics, Blaine's father supported the Whig party. Blaine's biographers describe his childhood as "harmonious," and note that the boy took an early interest in history and literature. At the age of thirteen, Blaine enrolled in his father's "alma mater", Washington College (now Washington & Jefferson College), in nearby Washington, Pennsylvania. There, he was a member of the Washington Literary Society, one of the college's debating societies. Blaine succeeded academically, graduating near the top of his class and delivering the salutatory address in June 1847. After graduation, Blaine considered attending Yale Law School, but ultimately decided against it, instead moving west to find a job. In 1848, Blaine was hired as a professor of mathematics and ancient languages at the Western Military Institute in Georgetown, Kentucky. Although he was only eighteen years old and younger than many of his students, Blaine adapted well to his new profession. Blaine grew to enjoy life in his adopted state and became an admirer of Kentucky Senator Henry Clay. He also made the acquaintance of Harriet Stanwood, a teacher at the nearby Millersburg Female College and a native of Maine. On June 30, 1850, the two were married. Blaine once again considered taking up the study of law, but instead took his new bride to visit his family in Pennsylvania. They next lived with Harriet Blaine's family in Augusta, Maine, for several months, where their first child, Stanwood Blaine, was born in 1851. The young family soon moved again, this time to Philadelphia where Blaine took a job at the Pennsylvania Institution for the Instruction of the Blind (now Overbrook School for the Blind) in 1852, teaching science and literature. Philadelphia's law libraries gave Blaine the chance to at last begin to study the law, but in 1853 he received a more tempting offer: to become editor and co-owner of the "Kennebec Journal". Blaine had spent several vacations in his wife's native state of Maine and had become friendly with the "Journal"'s editors. When the newspaper's founder, Luther Severance, retired, Blaine was invited to purchase the publication along with co-editor Joseph Baker. He quickly accepted, borrowing the purchase price from his wife's brothers. Baker soon sold his share to John L. Stevens, a local minister, in 1854. 
The "Journal" had been a staunchly Whig newspaper, which coincided with Blaine's and Stevens' political opinions. The decision to become a newspaperman, unexpected as it was, started Blaine on the road to a lifelong career in politics. Blaine's purchase of the "Journal" coincided with the demise of the Whig party and birth of the Republican party, and Blaine and Stevens actively promoted the new party in their newspaper. The newspaper was financially successful, and Blaine was soon able to invest his profits in coal mines in Pennsylvania and Virginia, forming the basis of his future wealth. Blaine's career as a Republican newspaperman led naturally to involvement in Republican party politics. In 1856, he was selected as a delegate to the first Republican National Convention. From the party's early days, Blaine identified with the conservative wing, supporting Supreme Court Justice John McLean for the presidential nomination over the more radical John C. Frémont, the eventual nominee. The following year, Blaine was offered the editorship of the "Portland Daily Advertiser", which he accepted, selling his interest in the "Journal" soon thereafter. He still maintained his home in Augusta, however, with his growing family. Although Blaine's first son, Stanwood, died in infancy, he and Harriet had two more sons soon afterward: Walker, in 1855, and Emmons, in 1857. They would have four more children in years to come: Alice, James, Margaret, and Harriet. It was around this time that Blaine left the Presbyterian church of his childhood and joined his wife's new denomination, becoming a member of the South Parish Congregational Church in Augusta. In 1858, Blaine ran for a seat in the Maine House of Representatives, and was elected. He ran for reelection in 1859, 1860, and 1861, and was successful each time by large majorities. The added responsibilities led Blaine to reduce his duties with the "Advertiser" in 1860, and he soon ceased editorial work altogether. Meanwhile, his political power was growing as he became chairman of the Republican state committee in 1859, replacing Stevens. Blaine was not a delegate to the Republican convention in 1860, but attended anyway as an enthusiastic supporter of Abraham Lincoln. Returning to Maine, he was elected Speaker of the Maine House of Representatives in 1861 and reelected in 1862. With the outbreak of the Civil War in 1861, he supported Lincoln's war effort and saw that the Maine Legislature voted to organize and equip units to join the Union Army. Blaine had considered running for the United States House of Representatives from Maine's 4th district in 1860, but agreed to step aside when Anson P. Morrill, a former governor, announced his interest in the seat. Morrill was successful, but after redistricting placed Blaine in the 3rd district for the 1862 elections, he allowed his name to be put forward. Running on a campaign of staunch support for the war effort, Blaine was elected with an ample majority despite Republican losses across the rest of the country. Under the Congressional calendar of the 1860s, members of the 38th United States Congress, elected in November 1862, did not begin their work until December 1863; by the time Blaine finally took his seat that month, the Union had turned the tide in the war with victories at Gettysburg and Vicksburg. As a first-term congressman, he initially said little, mostly following the administration's lead in supporting the continuing war effort. 
He did clash several times with the leader of the Republicans' radical faction, Thaddeus Stevens of Pennsylvania, firstly over payment of states' debts incurred in supporting the war, and again over monetary policy concerning the new greenback currency. Blaine also spoke in support of the commutation provision of the military draft law passed in 1863 and proposed a constitutional amendment allowing the federal government to impose taxes on exports. Blaine was reelected in 1864 and, when the 39th Congress assembled in December 1865, the main issue was the Reconstruction of the defeated Confederate States. Although he was not a member of the committee charged with drafting what became the Fourteenth Amendment, Blaine did make his views on the subject known and believed that three-fourths of the non-seceded states would be needed to ratify it, rather than three-fourths of all states, an opinion that did not prevail and placed him, atypically, in the radical camp. The Republican Congress also played a role in the governance of the conquered South, dissolving the state governments President Andrew Johnson had installed and substituting military governments under Congress' control. Blaine voted in favor of these new, harsher measures, but also supported some leniency toward the former rebels when he opposed a bill that would have barred Southerners from attending the United States Military Academy. Blaine voted to impeach Johnson in 1868, although he had initially opposed the effort. Later, Blaine was more ambiguous about the validity of the charges against Johnson, writing that "there was a very grave difference of opinion among those equally competent to decide," but at the time partisan zeal led him to follow his party's leaders. Continuing his earlier battle with Stevens, Blaine led the fight in Congress for a strong dollar. After the issuance of 150 million dollars in greenbacks—non-gold-backed currency—the value of the dollar stood at a low ebb. A bipartisan group of inflationists, led by Republican Benjamin F. Butler and Democrat George H. Pendleton, wished to preserve the "status quo" and allow the Treasury to continue to issue greenbacks and even to use them to pay the interest due on pre-war bonds. Blaine called this idea a repudiation of the nation's promise to investors, which was made when the only currency was gold. Speaking several times on the matter, Blaine said that the greenbacks had only ever been an emergency measure to avoid bankruptcy during the war. Blaine and his hard money allies were successful, but the issue remained alive until 1879, when all remaining greenbacks were made redeemable in gold by the Specie Payment Resumption Act of 1875. With Speaker Schuyler Colfax' election to the Vice Presidency in 1868, the leadership of the House became vacant. Blaine had only been a member of Congress since 1863, but he had developed a reputation for parliamentary skill and, aside from a growing feud with Roscoe Conkling of New York, was popular with his fellow Republicans. He was elected with the unanimous vote of the Republican members at the start of the 41st Congress in March 1869. Blaine was an effective Speaker with a magnetic personality. Moreover, President Ulysses S. Grant valued his skill and loyalty in leading the House. He enjoyed the job and made his presence in Washington more permanent by buying a large residence on Fifteenth Street in the city. At the same time, the Blaine family moved to a mansion in Augusta. 
Republicans remained in control of the House in the 42nd and 43rd Congresses, and Blaine was reelected as Speaker at the start of both of them, for a total term of six years in the Speaker's chair. His popularity continued to grow, and Republicans dissatisfied with Grant mentioned Blaine as a potential candidate for President in 1872. Instead, Blaine worked steadfastly for Grant's reelection, which was a success. Blaine's growing fame brought growing opposition from the Democrats, as well, and during the 1872 campaign he was accused of receiving bribes in the Crédit Mobilier scandal. Blaine denied any part in the scandal, which involved railroad companies bribing federal officials to turn a blind eye to fraudulent railroad contracts that overcharged the government by millions of dollars. No one was able to satisfactorily prove Blaine's involvement. Though not an absolute defense, it is true that the law that made the fraud possible had been written before he was elected to Congress. But other Republicans were exposed by the accusations, including Vice President Colfax, who was dropped from the ticket at the 1872 Republican National Convention. Although he supported a general amnesty for former Confederates, Blaine opposed extending it to include Jefferson Davis, and he cooperated with Grant in helping to pass the Civil Rights Act of 1875 in response to increased violence and disenfranchisement of blacks in the South. He refrained from voting on the anti-third term resolution that overwhelmingly passed the House that same year, believing that to vote for it would look self-interested. Blaine was loyal to Grant, and the scandals of the Grant administration did not seem to affect how the public perceived him; according to his biographer, Blaine was never more popular than when he was Speaker of the House. Liberal Republicans saw him as an alternative to the evident corruption of other Republican leaders, and some even urged him to form a new, reformist party. Although he remained a Republican, this base of moderate reformers remained loyal to Blaine and became known as the Half-Breed faction of the party. The 1874 House elections produced a Democratic majority for the 44th Congress, and Blaine's time as Speaker was at an end. This gave Blaine more time to concentrate on his presidential ambitions, and to develop new policy ideas. One result was a foray into education policy. In late 1875, President Grant made several speeches on the importance of the separation of church and state and the duty of the states to provide free public education. Blaine saw in this an issue that would distract from the Grant administration scandals and let the Republican party regain the high moral ground. In December 1875, he proposed a joint resolution that became known as the Blaine Amendment. The proposed amendment codified the church-state separation Blaine and Grant were promoting. Its effect was to prohibit the use of public funds by any religious school, although it did not advance Grant's other aim of requiring states to provide public education to all children. The bill passed the House but failed in the Senate. Although it never passed Congress, and left Blaine open to charges of anti-Catholicism, the proposed amendment served Blaine's purpose of rallying Protestants to the Republican party and promoting himself as one of the party's foremost leaders. Blaine entered the 1876 presidential campaign as the favorite, but his chances were almost immediately harmed by the emergence of a scandal. 
Rumors had begun to spread in February of that year that Blaine had been involved in a transaction with the Union Pacific Railroad in which the railroad had paid Blaine $64,000 for some Little Rock and Fort Smith Railroad bonds he owned, even though the bonds were nearly worthless. In essence, the alleged transaction was presented as a sham designed to bribe Blaine. Blaine denied the charges, as did the Union Pacific's directors. Blaine claimed he never had any dealings with the Little Rock and Fort Smith Railroad except to purchase bonds at market price, and that he had lost money on the transaction. Democrats in the House of Representatives nevertheless demanded a Congressional investigation. The testimony appeared to favor Blaine's version of events until May 31, when James Mulligan, a Boston clerk formerly employed by Blaine's brother-in-law, testified that the allegations were true, that he had arranged the transaction, and that he had letters to prove it. The letters ended with the damning phrase, "Kindly burn this letter." When the investigating committee recessed, Blaine met with Mulligan that night in his hotel room. What transpired between the men is unclear, but Blaine either acquired the letters or, as Mulligan told the committee, snatched them from Mulligan's hands and fled the room. In any event, Blaine had the letters and refused the committee's demand to turn them over. Opinion swiftly turned against Blaine; the June 3 edition of "The New York Times" carried the headline "Blaine's Nomination Now Out of the Question." Blaine took his case to the House floor on June 5, theatrically proclaiming his innocence and calling the investigation a partisan attack by Southern Democrats, revenge for his exclusion of Jefferson Davis from the amnesty bill of the previous year. He read selected passages from the letters aloud, saying "Thank God Almighty, I am not afraid to show them!" Blaine even succeeded in extracting an apology from the committee chairman. The political tide turned anew in Blaine's favor. But now the pressure had begun to affect Blaine's health, and he collapsed while leaving church services on June 14. His opponents called the collapse a political stunt, with one Democratic newspaper reporting the event as "Blaine Feigns a Faint." Rumors of Blaine's ill health combined with the lack of hard evidence against him garnered him sympathy among Republicans, and when the Republican convention began in Cincinnati later that month, he was again seen as the front-runner. Though he was damaged by the Mulligan letters, Blaine entered the convention as the favorite. Five other men were also considered serious candidates: Benjamin Bristow, the Kentucky-born Treasury Secretary; Roscoe Conkling, Blaine's old enemy and now a Senator from New York; Senator Oliver P. Morton of Indiana; Governor Rutherford B. Hayes of Ohio; and Governor John F. Hartranft of Pennsylvania. Blaine was nominated by Illinois orator Robert G. Ingersoll in what became a famous speech. The speech was a success, and Ingersoll's appellation of "plumed knight" remained a nickname for Blaine for years to come. On the first ballot, no candidate received the required majority of 378, but Blaine had the most votes, with 285, and no other candidate had more than 125. There were a few vote shifts in the next five ballots, and Blaine climbed to 308 votes, with his nearest competitor at just 111. 
On the seventh ballot the situation shifted drastically as anti-Blaine delegates began to coalesce around Hayes; by the time the balloting ended, Blaine's votes had risen to 351, but Hayes surpassed him at 384, a majority. Blaine received the news at his home in Washington and telegraphed Hayes his congratulations. In the subsequent contest of 1876, Hayes was elected after a contentious compromise over disputed electoral votes. The results of the convention had further effects on Blaine's political career, as Bristow, having lost the nomination, also resigned as Treasury Secretary three days after the convention ended. President Grant selected Senator Lot M. Morrill of Maine to fill the cabinet post, and Maine's governor, Selden Connor, appointed Blaine to the now-vacant Senate seat. When the Maine Legislature reconvened that autumn, they confirmed Blaine's appointment and elected him to the full six-year term that would begin on March 4, 1877. Blaine was appointed to the Senate on July 10, 1876, but did not begin his duties there until the Senate convened in December of that year. While in the Senate, he served on the Appropriations Committee and held the chairmanship of the Committee on Civil Service and Retrenchment, but he never achieved the role of leadership that he had held as a member of the House. The Senate in the 45th Congress was controlled by a narrow Republican majority, but it was a majority often divided against itself and against the Hayes administration. Blaine did not number himself among the administration's defenders, but neither could he join the Republicans led by Conkling—later known as the Stalwarts—who opposed Hayes, because of the deep personal enmity between Blaine and Conkling. He opposed Hayes's withdrawal of federal troops from Southern capitals, which effectively ended the Reconstruction of the South, but to no avail. Blaine continued to antagonize Southern Democrats, voting against bills passed in the Democrat-controlled House that would reduce the Army's appropriation and repeal the post-war Enforcement Acts he had helped pass. Such bills passed Congress several times and Hayes vetoed them several times; ultimately, the Enforcement Acts remained in place, but the funds to enforce them dwindled. By 1879, there were only 1,155 soldiers stationed in the former Confederacy, and Blaine believed that this small force could never guarantee the civil and political rights of black Southerners—which would mean an end to the Republican party in the South. On monetary issues, Blaine continued the advocacy for a strong dollar that he had begun as a Representative. The issue had shifted from debate over greenbacks to debate over which metal should back the dollar: gold and silver, or gold alone. The Coinage Act of 1873 stopped the coinage of silver for all coins worth a dollar or more, effectively tying the dollar to the value of gold. As a result, the money supply contracted and the effects of the Panic of 1873 grew worse, making it more expensive for debtors to pay debts they had entered into when currency was less valuable. Farmers and laborers, especially, clamored for the return of coinage in both metals, believing the increased money supply would restore wages and property values. Democratic Representative Richard P. Bland of Missouri proposed a bill, which passed the House, that required the United States to coin as much silver as miners could sell the government, thus increasing the money supply and aiding debtors. In the Senate, William B. 
Allison, a Republican from Iowa, offered an amendment to limit the silver coinage to two to four million dollars per month. This was still too much for Blaine, and he denounced the bill and the proposed amendment, but the amended Bland–Allison Act passed the Senate by a 48 to 21 vote. Hayes vetoed the bill, but Congress mustered the two-thirds vote to pass it over his veto. Even after the Bland–Allison Act's passage, Blaine continued his opposition, making a series of speeches against it during the 1878 congressional campaign season. His time in the Senate allowed Blaine to develop his foreign policy ideas. He advocated expansion of the American navy and merchant marine, which had been in decline since the Civil War. Blaine also bitterly opposed the results of the arbitration with Great Britain over American fishermen's right to fish in Canadian waters, which resulted in a $5.5 million award to Britain. Blaine's Anglophobia combined with his support of high tariffs. He had initially opposed a reciprocity treaty with Canada that would have reduced tariffs between the two nations, but by the end of his time in the Senate, he had changed his mind, believing that Americans had more to gain by increasing exports than they would lose by the risk of cheap imports. Hayes had announced early in his presidency that he would not seek another term, which meant that the contest for the Republican nomination in 1880 was open to all challengers—including Blaine. Blaine was among the early favorites for the nomination, as were former President Grant, Treasury Secretary John Sherman of Ohio, and Senator George F. Edmunds of Vermont. Although Grant did not actively promote his candidacy, his entry into the race re-energized the Stalwarts and when the convention met in Chicago in June 1880, they instantly polarized the delegates into Grant and anti-Grant factions, with Blaine the most popular choice of the latter group. Blaine was nominated by James Frederick Joy of Michigan, but in contrast to Ingersoll's exciting speech of 1876, Joy's lengthy oration was remembered only for its maladroitness. After the other candidates were nominated, the first ballot showed Grant leading with 304 votes and Blaine in second with 284; no other candidate had more than Sherman's 93, and none had the required majority of 379. Sherman's delegates could swing the nomination to either Grant or Blaine, but he refused to release them through twenty-eight ballots in the hope that the anti-Grant forces would desert Blaine and flock to him. Eventually, they did desert Blaine, but instead of Sherman they shifted their votes to Ohio Congressman James A. Garfield, and by the thirty-sixth ballot he had 399 votes, enough for victory. Garfield placated the Stalwarts by endorsing Chester A. Arthur of New York, a Conkling loyalist, as nominee for vice president, but it was to Blaine and his delegates that Garfield owed his nomination. When Garfield was elected over Democrat Winfield Scott Hancock, he turned to Blaine to guide him in selection of his cabinet and offered him the preeminent position: Secretary of State. Blaine accepted, resigning from the Senate on March 4, 1881. Blaine saw presiding over the cabinet as a chance to preside over the Washington social scene, as well, and soon ordered construction of a new, larger home near Dupont Circle. Although his foreign policy experience was minimal, Blaine quickly threw himself into his new duties. 
By 1881, Blaine had completely abandoned his protectionist leanings and now used his position as Secretary of State to promote freer trade, especially within the western hemisphere. His reasons were twofold: firstly, Blaine's old fear of British interference in the Americas was undiminished, and he saw increased trade with Latin America as the best way to keep Britain from dominating the region. Secondly, he believed that by encouraging exports, he could increase American prosperity, and by doing so position the Republican party as the author of that prosperity, ensuring continued electoral success. Garfield agreed with his Secretary of State's vision and Blaine called for a Pan-American conference in 1882 to mediate disputes among the Latin American nations and to serve as a forum for talks on increasing trade. At the same time, Blaine hoped to negotiate a peace in the War of the Pacific then being fought by Bolivia, Chile, and Peru. Blaine favored a resolution that would not result in Peru yielding any territory, but Chile, which had by 1881 occupied the Peruvian capital, rejected any negotiations that would gain them nothing. Blaine sought to expand American influence in other areas, calling for renegotiation of the Clayton–Bulwer Treaty to allow the United States to construct a canal through Panama without British involvement, as well as attempting to reduce British involvement in the strategically located Kingdom of Hawaii. His plans for the United States' involvement in the world stretched even beyond the Western Hemisphere, as he sought commercial treaties with Korea and Madagascar. On July 2, 1881, Blaine and Garfield were walking through the Sixth Street Station of the Baltimore and Potomac Railroad in Washington when Garfield was shot by an assassin, Charles J. Guiteau. Guiteau, a deranged man who had earlier pestered Blaine and other State Department officials to be appointed to ambassadorships for which he was grossly unqualified, believed that by assassinating the President he could ingratiate himself with Vice President Arthur and receive his coveted position. Guiteau was captured immediately and hanged just short of a year later; he survived longer than Garfield, who lingered for two-and-a-half months, then died on September 19, 1881. Garfield's death was not just a personal tragedy for Blaine; it also meant the end of his dominance of the cabinet and the end of his foreign policy initiatives. With Arthur's ascent to the presidency, the Stalwart faction now held sway and Blaine's days at the State Department were numbered. Arthur asked all of the cabinet members to postpone their resignations until Congress recessed that December; Blaine nonetheless tendered his resignation on October 19, 1881 but agreed to remain in office until December 19, when his successor was in place. Blaine's replacement was Frederick T. Frelinghuysen, a New Jersey Stalwart. Arthur and Frelinghuysen undid much of Blaine's work, cancelling the call for a Pan-American conference and stopping the effort to end the War of the Pacific, but they did continue the drive for tariff reductions, signing a reciprocity treaty with Mexico in 1882. Blaine began the year 1882 without a political office for the first time since 1859. Troubled by poor health, he sought no employment other than the completion of the first volume of his memoir, "Twenty Years of Congress." 
Friends in Maine petitioned Blaine to run for Congress in the 1882 elections, but he declined, preferring to spend his time writing and supervising the move to the new home. His income from mining and railroad investments was sufficient to sustain the family's lifestyle and to allow for the construction of a vacation cottage, "Stanwood" on Mount Desert Island, Maine, designed by Frank Furness. Blaine appeared before Congress in 1882 during an investigation into his War of the Pacific diplomacy, defending himself against allegations that he owned an interest in the Peruvian guano deposits being occupied by Chile, but otherwise stayed away from the Capitol. The publication of the first volume of "Twenty Years" in early 1884 added to Blaine's financial security and thrust him back into the political spotlight. As the 1884 campaign loomed, Blaine's name was being circulated once more as a potential nominee, and despite some reservations, he soon found himself back in the hunt for the presidency. In the months leading up to the 1884 convention, Blaine was once more considered the favorite for the nomination, but President Arthur was contemplating a run for election in his own right. George Edmunds was again the favored candidate among reformers and John Sherman had a few delegates pledged to him, but neither was expected to command much support at the convention. John A. Logan of Illinois hoped to attract Stalwart votes if Arthur's campaign was unsuccessful. Blaine was unsure he wanted to try for the nomination for the third time and even encouraged General William T. Sherman, John Sherman's older brother, to accept it if it came to him, but ultimately Blaine agreed to be a candidate again. William H. West of Ohio nominated Blaine with an enthusiastic speech and after the first ballot, Blaine led the count with 334½ votes. While short of the necessary 417 for nomination, Blaine had far more than any other candidate with Arthur in second place at 278 votes. Blaine was unacceptable to the Arthur delegates just as Blaine's own delegates would never vote for the President, so the contest was between the two for the delegates of the remaining candidates. Blaine's total steadily increased as Logan and Sherman withdrew in his favor and some of the Edmunds delegates defected to him. Unlike in previous conventions, the momentum for Blaine in 1884 would not be halted. On the fourth ballot, Blaine received 541 votes and was, at last, nominated. Logan was named vice presidential nominee on the first ballot, and the Republicans had their ticket. The Democrats held their convention in Chicago the following month and nominated Governor Grover Cleveland of New York. Cleveland's time on the national scene was brief, but Democrats hoped that his reputation as a reformer and an opponent of corruption would attract Republicans dissatisfied with Blaine and his reputation for scandal. They were correct, as reform-minded Republicans (called "Mugwumps") denounced Blaine as corrupt and flocked to Cleveland. The Mugwumps, including such men as Carl Schurz and Henry Ward Beecher, were more concerned with morality than with party, and felt Cleveland was a kindred soul who would promote civil service reform and fight for efficiency in government. However, even as the Democrats gained support from the Mugwumps, they lost some blue-collar workers to the Greenback Party, led by Benjamin F. Butler, Blaine's antagonist from their early days in the House. 
The campaign focused on the candidates' personalities, as each candidate's supporters cast aspersions on their opponents. Cleveland's supporters rehashed the old allegations from the Mulligan letters that Blaine had corruptly influenced legislation in favor of railroads, later profiting from the sale of bonds he owned in both companies. Although the stories of Blaine's favors to the railroads had made the rounds eight years earlier, this time more of his correspondence was discovered, making his earlier denials less plausible. Blaine acknowledged that the letters were genuine, but denied that anything in them impugned his integrity or contradicted his earlier explanations. Nevertheless, what Blaine described as "stale slander" served to focus the public's attention negatively on his character. On some of the most damaging correspondence, Blaine had written "Burn this letter," giving Democrats the last line to their rallying cry: "Blaine, Blaine, James G. Blaine, the continental liar from the state of Maine, 'Burn this letter!'" To counter Cleveland's image of superior morality, Republicans discovered reports that Cleveland had fathered an illegitimate child while he was a lawyer in Buffalo, New York, and chanted "Ma, Ma, where's my Pa?"—to which the Democrats, after Cleveland had been elected, appended, "Gone to the White House, Ha! Ha! Ha!" Cleveland admitted to paying child support in 1874 to Maria Crofts Halpin, the woman who claimed he had fathered her child, Oscar Folsom Cleveland. Halpin was involved with several men at the time, including Cleveland's friend and law partner, Oscar Folsom, for whom the child was also named. Cleveland did not know which man was the father, and is believed to have assumed responsibility because he was the only bachelor among them. At the same time, Democratic operatives accused Blaine and his wife of not having been married when their eldest son, Stanwood, was born in 1851; this rumor was false, however, and caused little excitement in the campaign. Both candidates believed that the states of New York, New Jersey, Indiana, and Connecticut would determine the election. In New York, Blaine received less support than he anticipated when Arthur and Conkling, still powerful in the New York Republican party, failed to actively campaign for him. Blaine hoped that he would have more support from Irish Americans than Republicans typically did; while the Irish were mainly a Democratic constituency in the 19th century, Blaine's mother was Irish Catholic, and he believed his career-long opposition to the British government would resonate with the Irish. Blaine's hopes for Irish defections to the Republican standard were dashed late in the campaign when one of his supporters, Samuel D. Burchard, gave a speech denouncing the Democrats as the party of "Rum, Romanism, and Rebellion." The Democrats spread the word of this insult in the days before the election, and Cleveland narrowly won all four of the swing states, including New York by just over one thousand votes. While the popular vote total was close, with Cleveland winning by just one-quarter of a percent, the electoral votes gave Cleveland a majority of 219–182. Blaine accepted his narrow defeat and spent most of the next year working on the second volume of "Twenty Years of Congress." The book continued to earn him enough money to support his lavish household and pay off his debts. 
Although he spoke to friends of retiring from politics, Blaine still attended dinners and commented on the Cleveland administration's policies. By the time of the 1886 Congressional elections, Blaine was giving speeches and promoting Republican candidates, especially in his home state of Maine. Republicans were successful in Maine, and after the Maine elections in September, Blaine went on a speaking tour from Pennsylvania to Tennessee, hoping to boost the prospects of Republican candidates there. Republicans were less successful nationwide, gaining seats in the House while losing seats in the Senate, but Blaine's speeches kept him and his opinions in the spotlight. Blaine and his wife and daughters sailed for Europe in June 1887, visiting England, Ireland, Germany, France, Austria-Hungary, and finally Scotland, where they stayed at the summer home of Andrew Carnegie. While in France, Blaine wrote a letter to the "New-York Tribune" criticizing Cleveland's plans to reduce the tariff, saying that free trade with Europe would impoverish American workers and farmers. The family returned to the United States in August 1887. His letter in the "Tribune" had raised his political profile even higher, and by 1888 Theodore Roosevelt and Henry Cabot Lodge, both former opponents, urged Blaine to run against Cleveland again. Opinion within the party was overwhelmingly in favor of renominating Blaine. As the state conventions drew nearer, Blaine announced that he would not be a candidate. His supporters doubted his sincerity and continued to encourage him to run, but Blaine still demurred. Hoping to make his intentions clear, Blaine left the country and was staying with Carnegie in Scotland when the 1888 Republican National Convention began in Chicago. Carnegie encouraged Blaine to accept if the convention nominated him, but the delegates finally accepted Blaine's refusal. John Sherman was the most prominent candidate and sought to attract the Blaine supporters to his candidacy, but instead found them flocking to former senator Benjamin Harrison of Indiana after a telegram from Carnegie suggested that Blaine favored him. Blaine returned to the United States in August 1888 and visited Harrison at his home in October, where twenty-five thousand residents paraded in Blaine's honor. Harrison defeated Cleveland in a close election, and offered Blaine his former position as Secretary of State. Harrison had developed his foreign policy based largely on Blaine's ideas, and at the start of his term, Harrison and Blaine had very similar views on the United States' place in the world. In spite of their shared worldview, however, the two men became personally unfriendly as the term went on. Harrison was conscious that his Secretary of State was more popular than he, and while he admired Blaine's gift for diplomacy, he grew displeased with Blaine's frequent absence from his post because of illness, and suspected that Blaine was angling for the presidential nomination in 1892. Harrison tried to limit how many "Blaine men" filled subordinate positions in the State Department and denied Blaine's request that his son, Walker, be appointed First Assistant Secretary, instead naming him Solicitor of the Department of State. Despite the growing personal rancor, the two men continued, with one exception, to agree on the foreign policy questions of the day. 
Blaine and Harrison wished to see American power and trade expanded across the Pacific and were especially interested in securing rights to harbors at Pearl Harbor, Hawaii, and Pago Pago, Samoa. When Blaine entered office, the United States, Great Britain, and the German Empire were disputing their respective rights in Samoa. Thomas F. Bayard, Blaine's predecessor, had accepted an invitation to a three-party conference in Berlin aimed at resolving the dispute, and Blaine appointed American representatives to attend. The result was a treaty that created a condominium among the three powers, allowing all of them access to the harbor. In Hawaii, Blaine worked to bind the kingdom more closely to the United States and to avoid its becoming a British protectorate. When the McKinley Tariff of 1890 eliminated the duty on sugar, Hawaiian sugar-growers looked for a way to retain their once-exclusive access to the American market. The Hawaiian minister to the United States, Henry A. P. Carter, tried to arrange for Hawaii to have complete trade reciprocity with the United States, but Blaine proposed instead that Hawaii become an American protectorate; Carter favored the idea, but the Hawaiian king, Kalākaua, rejected the infringement on his sovereignty. Blaine next procured the appointment of his former newspaper colleague John L. Stevens as minister to Hawaii. Stevens had long believed that the United States should annex Hawaii, and as minister he co-operated with Americans living in Hawaii in their efforts to bring about annexation. Their efforts ultimately culminated in a coup d'état against Kalākaua's successor, Liliuokalani, in 1893. Blaine's precise involvement is undocumented, but the results of Stevens' diplomacy were in accord with his ambitions for American power in the region. The new government petitioned the United States for annexation, but by that time Blaine was no longer in office. Soon after taking office, Blaine revived his old idea of an international conference of western hemisphere nations. The result was the First International Conference of American States, which met in Washington in 1890. Blaine and Harrison had high hopes for the conference, including proposals for a customs union, a pan-American railroad line, and an arbitration process to settle disputes among member nations. Their overall goal was to extend trade and political influence over the entire hemisphere; some of the other nations understood this and were wary of deepening ties with the United States to the exclusion of European powers. Blaine said publicly that his only interest was in "annexation of trade," not annexation of territory, but privately he wrote to Harrison of a desire for some territorial enlargement of the United States. Congress was not as enthusiastic about a customs union as Blaine and Harrison were, but tariff reciprocity provisions were ultimately included in the McKinley Tariff that reduced duties on some inter-American trade. Otherwise, the conference achieved none of Blaine's goals in the short-term, but did lead to further communication and what would eventually become the Organization of American States. In 1891, a diplomatic crisis arose in Chile that drove a wedge between Harrison and Blaine. The American minister to Chile, Patrick Egan, a political friend of Blaine's, granted asylum to Chileans who were seeking refuge from the Chilean Civil War. 
Chile was already suspicious of Blaine because of his War of the Pacific diplomacy ten years earlier, and this incident raised tensions even further. When sailors from the "Baltimore" took shore leave in Valparaíso, a fight broke out, resulting in the deaths of two American sailors and the arrest of three dozen more. When the news reached Washington, Blaine was in Bar Harbor recuperating from a bout of ill health and Harrison himself drafted a demand for reparations. The Chilean foreign minister, Manuel Antonio Matta, replied that Harrison's message was "erroneous or deliberately incorrect" and said that the Chilean government was treating the affair the same as any other criminal matter. Tensions increased as Harrison threatened to break off diplomatic relations unless the United States received a suitable apology. Blaine returned to the capital and made conciliatory overtures to the Chilean government, offering to submit the dispute to arbitration and recall Egan. Harrison still insisted on an apology and submitted a special message to Congress about the threat of war. Chile issued an apology for the incident, and the threat of war subsided. Blaine's earliest expressions in the foreign policy sphere were those of a reactionary Anglophobe, but by the end of his career his relationship with the United Kingdom had become more moderate and nuanced. A dispute over seal hunting in the waters off Alaska was the cause of Blaine's first interaction with Britain as Harrison's Secretary of State. A law passed in 1889 required Harrison to ban seal hunting in Alaskan waters, but Canadian fishermen believed they had the right to continue fishing there. Soon thereafter, the United States Navy seized several Canadian ships near the Pribilof Islands. Blaine entered into negotiations with Britain and the two nations agreed to submit the dispute to arbitration by a neutral tribunal. Blaine was no longer in office when the tribunal began its work, but the result was to allow the hunting once more, albeit with some regulation, and to require the United States to pay damages of $473,151. Ultimately, the nations signed the North Pacific Fur Seal Convention of 1911, which outlawed open-water seal hunting. At the same time as the Pribilof Islands dispute, an outbreak of mob violence in New Orleans became an international incident. After New Orleans police chief David Hennessy led a crackdown against local mafiosi, he was assassinated on October 14, 1890. After the alleged murderers were found not guilty on March 14, 1891, a mob stormed the jail and lynched eleven of them. Since many of those killed were Italian citizens, the Italian minister, Saverio Fava, protested to Blaine. Blaine explained that federal officials could not control how state officials dealt with criminal matters, and Fava announced that he would withdraw the legation back to Italy. Blaine and Harrison believed the Italians' response to be an overreaction, and did nothing. Tensions slowly cooled, and after nearly a year, the Italian minister returned to the United States to negotiate an indemnity. After some internal dispute—Blaine wanted conciliation with Italy, Harrison was reluctant to admit fault—the United States agreed to pay an indemnity of $25,000, and normal diplomatic relations resumed. Blaine had always believed his health to be fragile, and by the time he joined Harrison's cabinet he truly was unwell. The years at the State Department also brought Blaine personal tragedy as two of his children, Walker and Alice, died suddenly in 1890. 
Another son, Emmons, died in 1892. With these family issues and his declining health, Blaine decided to retire and announced that he would resign from the cabinet on June 4, 1892. Because of their growing animosity, and because Blaine's resignation came three days before the 1892 Republican National Convention began, Harrison suspected that Blaine was preparing to run against him for the party's nomination for president. Harrison was unpopular with the party and the country, and many of Blaine's old supporters encouraged him to run for the nomination. Blaine had denied any interest in the nomination months before his resignation, but some of his friends, including Senator Matthew Quay of Pennsylvania and James S. Clarkson, chairman of the Republican National Committee, took it for false modesty and worked for his nomination anyway. When Blaine resigned from the cabinet, his boosters were certain that he was a candidate, but the majority of the party stood by the incumbent. Harrison was renominated on the first ballot, but die-hard Blaine delegates still gave their champion 182 1/6 votes, good enough for second place. Blaine spent the summer of 1892 at his Bar Harbor cottage, and did not involve himself in the presidential campaign other than to make a single speech in New York in October. Harrison was defeated soundly in his rematch against former president Cleveland, and when Blaine returned to Washington at the close of 1892, he and Harrison were friendlier than they had been in years. Blaine's health declined rapidly in the winter of 1892–1893, and he died in his Washington home on January 27, 1893. After a funeral at the Presbyterian Church of the Covenant, he was buried in Oak Hill Cemetery in Washington. He was later re-interred in Blaine Memorial Park, Augusta, Maine, in 1920. A towering figure in the Republican party of his day, Blaine fell into obscurity fairly soon after his death. A 1905 biography by his wife's cousin, Edward Stanwood, was written when the question was still in doubt, but by the time David Saville Muzzey published his biography of Blaine in 1934, the subtitle "A Political Idol of Other Days" already spoke to its subject's fading place in the popular mind. Perhaps this is because, of the nine men the Republican Party nominated for the Presidency from 1860 to 1912, Blaine is the only one who never became President. Although several authors studied Blaine's foreign policy career, including Edward P. Crapol's 2000 work, Muzzey's was the last full-scale biography of the man until Neil Rolde's 2006 book. Historian R. Hal Williams was working on a new biography of Blaine, tentatively titled "James G. Blaine: A Life in Politics", until his death in 2016. Jaguar The jaguar ("Panthera onca") is a wild cat species and the only extant member of the genus "Panthera" native to the Americas. The jaguar's present range extends from the Southwestern United States and Mexico across much of Central America and south to Paraguay and northern Argentina. Though a few individual cats still live within the western United States, the species has largely been extirpated from the country since the early 20th century. It is listed as Near Threatened on the IUCN Red List, and its numbers are declining. Threats include loss and fragmentation of habitat. The jaguar is the biggest cat species in North and South America, followed by the cougar. This spotted cat closely resembles the leopard, but is usually larger and sturdier. 
It ranges across a variety of forested and open terrains, but its preferred habitat is tropical and subtropical moist broadleaf forest, swamps and wooded regions. The jaguar enjoys swimming and is largely a solitary, opportunistic, stalk-and-ambush predator at the top of the food chain. As a keystone species it plays an important role in stabilizing ecosystems and regulating prey populations. While international trade in jaguars or their body parts is prohibited, the cat is still frequently killed, particularly in conflicts with ranchers and farmers in South America. Although reduced, its range remains large. Given its historical distribution, the jaguar has featured prominently in the mythology of numerous indigenous American cultures, including those of the Maya and Aztec. The word 'jaguar' is thought to derive from the Tupian word "yaguara", meaning "beast of prey". The word entered English presumably via the Amazonian trade language Tupinambá, via Portuguese "jaguar". The specific word for jaguar is "yaguareté", with the suffix -"eté" meaning "real" or "true". The word 'panther' derives from classical Latin "panthēra", itself from the ancient Greek "pánthēr" (πάνθηρ). In Mexican Spanish, its nickname is "el tigre": 16th-century Spaniards had no native word in their language for the jaguar, which is smaller than a lion, but bigger than a leopard, nor had they ever encountered it in the Old World, and so named it after the tiger, since its ferocity would have been known to them through Roman writings and popular literature during the Renaissance. "Onca" is the Portuguese "onça", with the cedilla dropped for typographical reasons, found in English as "ounce" for the snow leopard, "Panthera uncia". It derives from the Latin "lyncea", meaning lynx, with the letter L confused with the definite article (Italian "lonza", Old French "l'once"). The jaguar is the only extant New World member of the genus "Panthera". Results of DNA analysis show that the lion, tiger, leopard, jaguar, snow leopard, and clouded leopard share a common ancestor, and that this group is between six and ten million years old; the fossil record points to the emergence of "Panthera" just 2 to 3.8 million years ago. "Panthera" is thought to have evolved in Asia. The jaguar is thought to have diverged from a common ancestor of the "Panthera" species at least 1.5 million years ago and to have entered the American continent in the Early Pleistocene via Beringia, the land bridge that once spanned the Bering Strait. Results of jaguar mitochondrial DNA analysis indicate that the species' lineage evolved between 280,000 and 510,000 years ago. Phylogenetic studies generally have shown the clouded leopard ("Neofelis nebulosa") is basal to this group. The position of the remaining species varies between studies and is effectively unresolved. Based on morphological evidence, British zoologist Reginald Innes Pocock concluded the jaguar is most closely related to the leopard. However, DNA evidence is inconclusive and the position of the jaguar relative to the other species varies between studies. Fossils of extinct "Panthera" species, such as the European jaguar ("Panthera gombaszoegensis") and the American lion ("Panthera atrox"), show characteristics of both the lion and the jaguar. In 1758, Carl Linnaeus described the jaguar in his work "Systema Naturae" and gave it the scientific name "Felis onca". In the 19th and 20th centuries, several jaguar type specimens formed the basis for descriptions of subspecies. 
In 1939, Reginald Innes Pocock recognized eight subspecies based on geographic origins and skull morphology of these specimens. Pocock did not have access to sufficient zoological specimens to critically evaluate their subspecific status, but expressed doubt about the status of several. Later consideration of his work suggested only three subspecies should be recognized. The description of "P. o. palustris" was based on a fossil skull. The author of "Mammal Species of the World" listed nine subspecies, including both "P. o. palustris" and "P. o. paraguensis" as separate subspecies. Results of morphologic and genetic research indicate a clinal north–south variation between populations, but no evidence for subspecific differentiation. A subsequent, more detailed study confirmed the predicted population structure within jaguar populations in Colombia. IUCN Red List assessors for the species and the Cat Classification Taskforce of the Cat Specialist Group do not recognize any jaguar subspecies as valid. The following table is based on the classification of the species provided in "Mammal Species of the World". The jaguar, a compact and well-muscled animal, is the largest cat in South America, and one of the largest carnivorous mammals in Central or North America. Its coat is generally a tawny yellow for most of the body, but can range to reddish-brown. The ventral areas are white. The fur is covered with rosettes for camouflage in the dappled light of its forest habitat. The spots and their shapes vary between individual jaguars: rosettes may include one or several dots. The spots on the head and neck are generally solid, as are those on the tail, where they may merge to form a band. Forest jaguars are frequently darker and considerably smaller than those in open areas, possibly due to the smaller numbers of large, herbivorous prey in forest areas. Its size and weight vary considerably: weights are normally in the range of . Larger males have been recorded to weigh as much as . The smallest females weigh about . Females are typically 10–20 percent smaller than males. The length, from the nose to the base of the tail, varies from . The tail is the shortest of any big cat, at in length. Legs are also short, but thick and powerful, considerably shorter when compared to a small tiger or lion in a similar weight range. The jaguar stands tall at the shoulders. Compared to the similarly colored leopard, the jaguar is generally bigger, heavier and relatively stocky in build. Further variations in size have been observed across regions and habitats, with size tending to increase from north to south. Jaguars in the Chamela-Cuixmala Biosphere Reserve on the Mexican Pacific coast weighed just about , about the size of a female cougar. Jaguars in Venezuela or Brazil are much larger, with average weights of about in males and of about in females. In the Brazilian Pantanal, weights of or more are not uncommon in old males. The highest recorded weight was of a jaguar with an empty stomach. A short and stocky limb structure makes the jaguar adept at climbing, crawling, and swimming. The head is robust and the jaw extremely powerful; it has the third-highest bite force of all felids, after the tiger and lion. A jaguar can bite with a force of at canine teeth and at carnassial notch. This allows it to pierce the shells of armored reptiles and turtles. A comparative study of bite force adjusted for body size ranked it as the top felid, alongside the clouded leopard and ahead of the tiger and lion. 
It has been reported that "an individual jaguar can drag an bull in its jaws and pulverize the heaviest bones". While the jaguar closely resembles the leopard, it is generally sturdier and heavier, and the two animals can be distinguished by their rosettes: the rosettes on a jaguar's coat are larger, fewer in number, usually darker, and have thicker lines and small spots in the middle that the leopard lacks. Jaguars also have rounder heads and shorter, stockier limbs compared to leopards. The black morph is less common than the spotted morph, estimated to occur in about 6% of the South American jaguar population. In Mexico's Sierra Madre Occidental, the first black jaguar was recorded in 2004. Some evidence indicates that the melanistic allele is dominant and is being supported by natural selection. The black form may be an example of heterozygote advantage; breeding in captivity is not yet conclusive on this. Melanistic jaguars (or "black" jaguars) occur primarily in parts of South America, and are virtually unknown in wild populations residing in the subtropical and temperate regions of North America; they have never been documented north of Mexico's Isthmus of Tehuantepec. Melanistic jaguars are informally known as black panthers, but as with all forms of polymorphism they do not form a separate species. Extremely rare albino individuals, sometimes called white panthers, also occur among jaguars, as with the other big cats. As usual with albinos in the wild, selection keeps the frequency close to the rate of mutation. At present, the jaguar's range extends from Mexico through Central America to South America, including much of Amazonian Brazil. The countries included in this range are Argentina, Belize, Bolivia, Colombia, Costa Rica (particularly on the Osa Peninsula), Ecuador, French Guiana, Guatemala, Guyana, Honduras, Nicaragua, Panama, Paraguay, Peru, Suriname, the United States and Venezuela. It is now locally extinct in El Salvador and Uruguay. The jaguar has been an American cat since crossing the Bering Land Bridge during the Pleistocene epoch; the immediate ancestor of modern animals is "Panthera onca augusta", which was larger than the contemporary cat. It occurs in the 400 km² Cockscomb Basin Wildlife Sanctuary in Belize, the 5,300 km² Sian Ka'an Biosphere Reserve in Mexico, the approximately 15,000 km² Manú National Park in Peru, the approximately 26,000 km² Xingu National Park in Brazil, and numerous other reserves throughout its range. The inclusion of the United States in the list is based on occasional sightings in the southwest, particularly in Arizona, New Mexico and Texas. In the early 20th century, the jaguar's range extended as far north as the Grand Canyon and possibly Colorado, and as far west as Monterey in Northern California. The jaguar is a protected species in the United States under the Endangered Species Act, which has stopped the shooting of the animal for its pelt. In 1996 and from 2004 on, hunting guides and wildlife officials in Arizona photographed and documented jaguars in the southern part of the state. Between 2004 and 2007, two or three jaguars were reported by researchers around Buenos Aires National Wildlife Refuge in southern Arizona. One of them, called 'Macho B', had been previously photographed in 1996 in the area. For any permanent population in the USA to thrive, protection from killing, an adequate prey base, and connectivity with Mexican populations are essential. 
In February 2009, a jaguar was caught, radio-collared and released in an area southwest of Tucson, Arizona; this is farther north than had previously been expected and represents a sign that there may be a permanent breeding population of jaguars within southern Arizona. The animal was later confirmed to be indeed the same male individual ('Macho B') that was photographed in 2004. On 2 March 2009, Macho B was recaptured and euthanized after he was found to be suffering from kidney failure; the animal was thought to be 16 years old, older than any known wild jaguar. Completion of the United States–Mexico barrier as currently proposed will reduce the viability of any population currently residing in the United States, by reducing gene flow with Mexican populations, and prevent any further northward expansion for the species. The historic range of the species included much of the southern half of the United States, and in the south extended much farther to cover most of the South American continent. In total, its northern range has receded southward and its southern range northward. Ice age fossils of the jaguar, dated between 40,000 and 11,500 years ago, have been discovered in the United States, including some at an important site as far north as Missouri. Fossil evidence shows jaguars of up to , much larger than the contemporary average for the animal. The habitat of the cat typically includes the rain forests of South and Central America, open, seasonally flooded wetlands, and dry grassland terrain. Of these habitats, the jaguar much prefers dense forest; the cat has lost range most rapidly in regions of drier habitat, such as the Argentine pampas, the arid grasslands of Mexico, and the southwestern United States. The cat will range across tropical, subtropical, and dry deciduous forests (including, historically, oak forests in the United States). The jaguar prefers to live by rivers, swamps, and in dense rainforest with thick cover for stalking prey. Jaguars have been found at elevations as high as 3,800 m, but they typically avoid montane forest and are not found in the high plateau of central Mexico or in the Andes. The jaguar's preferred habitats are usually swamps and wooded regions, but jaguars also live in scrublands and deserts. The adult jaguar is an apex predator, meaning it exists at the top of its food chain and is not preyed on in the wild. The jaguar has also been termed a keystone species, as it is assumed that, by controlling the population levels of prey such as herbivorous and granivorous mammals, apex felids maintain the structural integrity of forest systems. However, accurately determining what effect species like the jaguar have on ecosystems is difficult, because data must be compared from regions where the species is absent as well as its current habitats, while controlling for the effects of human activity. It is accepted that mid-sized prey species undergo population increases in the absence of the keystone predators, and this has been hypothesized to have cascading negative effects. However, field work has shown this may be natural variability and the population increases may not be sustained. Thus, the keystone predator hypothesis is not accepted by all scientists. The jaguar also has an effect on other predators. The jaguar and the cougar, which is the next-largest feline in South America but the biggest in Central or North America, are often sympatric (related species sharing overlapping territory) and have often been studied in conjunction. 
The jaguar tends to take larger prey, usually over , and the cougar smaller, usually between , reducing the latter's size. This situation may be advantageous to the cougar. Its broader prey niche, including its ability to take smaller prey, may give it an advantage over the jaguar in human-altered landscapes; while both are classified as near-threatened species, the cougar has a significantly larger current distribution. Depending on the availability of prey, the cougar and jaguar may even share it. Jaguar females reach sexual maturity at about two years of age, and males at three or four. The cat is believed to mate throughout the year in the wild, although births may increase when prey is plentiful. Research on captive male jaguars supports the year-round mating hypothesis, with no seasonal variation in semen traits and ejaculatory quality; low reproductive success has also been observed in captivity. Female estrus is 6–17 days out of a full 37-day cycle, and females will advertise fertility with urinary scent marks and increased vocalization. Both sexes will range more widely than usual during courtship. Pairs separate after mating, and females provide all parenting. The gestation period lasts 93–105 days; females give birth to up to four cubs, and most commonly to two. The mother will not tolerate the presence of males after the birth of cubs, given a risk of infanticide; this behavior is also found in the tiger. The young are born blind, gaining sight after two weeks. Cubs are weaned at three months, but remain in the birth den for six months before leaving to accompany their mother on hunts. They will continue in their mother's company for one to two years before leaving to establish a territory for themselves. Young males are at first nomadic, jostling with their older counterparts until they succeed in claiming a territory. Typical lifespan in the wild is estimated at around 12–15 years; in captivity, the jaguar lives up to 23 years, placing it among the longest-lived cats. Like most cats, the jaguar is solitary outside mother–cub groups. Adults generally meet only to court and mate (though limited noncourting socialization has been observed anecdotally) and carve out large territories for themselves. Female territories, which range from 25 to 40 km² in size, may overlap, but the animals generally avoid one another. Male ranges cover roughly twice as much area, varying in size with the availability of game and space, and do not overlap. The territory of a male can contain those of several females. The jaguar uses scrape marks, urine, and feces to mark its territory. Like the other big cats, the jaguar is capable of roaring and does so to warn territorial and mating competitors away; intensive bouts of counter-calling between individuals have been observed in the wild. Their roar often resembles a repetitive cough, and they may also vocalize mews and grunts. Mating fights between males occur, but are rare, and aggression avoidance behavior has been observed in the wild. When it occurs, conflict is typically over territory: a male's range may encompass that of two or three females, and he will not tolerate intrusions by other adult males. The jaguar is often described as nocturnal, but is more specifically crepuscular (peak activity around dawn and dusk). Both sexes hunt, but males travel farther each day than females, befitting their larger territories. 
The jaguar may hunt during the day if game is available and is a relatively energetic feline, spending as much as 50–60 percent of its time active. The jaguar's elusive nature and the inaccessibility of much of its preferred habitat make it a difficult animal to sight, let alone study. Like all cats, the jaguar is an obligate carnivore, feeding only on meat. It is an opportunistic hunter and its diet encompasses at least 87 species. It employs an unusual killing method: it bites directly through the skull of prey between the ears to deliver a fatal bite to the brain. The jaguar can take virtually any terrestrial or riparian vertebrate found in Central or South America, except for large crocodilians such as black caiman. The jaguar is more of a dietary generalist than its Old World cousins: the American tropics have a high diversity of small animals but relatively low populations and diversity of the large ungulates which this genus favors. They regularly take adult caimans (except for black caimans), deer, capybaras, tapirs, peccaries, dogs, zorros, and sometimes even anacondas. However, it preys on any small species available, including frogs, mice, birds (mainly ground-based species such as cracids), fish, sloths, monkeys, and turtles. A study conducted in Cockscomb Basin Wildlife Sanctuary in Belize revealed that the diet of jaguars there consisted primarily of armadillos and pacas. Some jaguars will also take domestic livestock. El Jefe, one of the few jaguars that were reported in the United States, has also been found to kill and eat American black bears, as deduced from hairs found within his scats and the partly consumed carcass of a black bear sow with the distinctive puncture marks to the skull left by jaguars. This indicates that jaguars might have once preyed on black bears when the species was still present in the area. Spectacled bears are also known to avoid jaguars, possibly because they may constitute occasional prey items. There is evidence that jaguars in the wild consume the roots of "Banisteriopsis caapi". While the jaguar often employs the deep throat-bite and suffocation technique typical among "Panthera", it sometimes uses a killing method unique amongst cats: it pierces directly through the temporal bones of the skull between the ears of prey (especially the capybara) with its canine teeth, piercing the brain. This may be an adaptation to "cracking open" turtle shells; following the late Pleistocene extinctions, armored reptiles such as turtles would have formed an abundant prey base for the jaguar. The skull bite is employed with mammals in particular; with reptiles such as the caiman, the jaguar may leap onto the back of the prey and sever the cervical vertebrae, immobilizing the target. When attacking sea turtles, including the huge leatherback sea turtle which weighs about on average, as they try to nest on beaches, the jaguar will bite at the head, often beheading the prey, before dragging it off to eat. Reportedly, while hunting horses, a jaguar may leap onto their back, place one paw on the muzzle and another on the nape and then twist, dislocating the neck. Local people have anecdotally reported that when hunting a pair of horses bound together, the jaguar will kill one horse and drag it away, with the other horse, still living, dragged along in its wake. With prey such as smaller dogs, a paw swipe to the skull may be sufficient to kill them. The jaguar is a stalk-and-ambush rather than a chase predator. 
The cat will walk slowly down forest paths, listening for and stalking prey before rushing or ambushing. The jaguar attacks from cover and usually from a target's blind spot with a quick pounce; the species' ambushing abilities are considered nearly peerless in the animal kingdom by both indigenous people and field researchers, and are probably a product of its role as an apex predator in several different environments. The ambush may include leaping into water after prey, as a jaguar is quite capable of carrying a large kill while swimming; its strength is such that carcasses as large as a heifer can be hauled up a tree to avoid flood levels. On killing prey, the jaguar will drag the carcass to a thicket or other secluded spot. It begins eating at the neck and chest, rather than the midsection. The heart and lungs are consumed, followed by the shoulders. The daily food requirement of a animal, at the extreme low end of the species' weight range, has been estimated at . For captive animals in the range, more than of meat daily are recommended. In the wild, consumption is naturally more erratic; wild cats expend considerable energy in the capture and kill of prey, and they may consume up to of meat at one feeding, followed by periods of famine. Unlike all other "Panthera" species, jaguars very rarely attack humans. However, jaguar attacks appear to be on the rise with increased human encroachment on their habitat and a decrease in prey populations. Sometimes jaguars in captivity attack zookeepers. In addition, it appears that attacks on humans were more common in the past, at least after conquistadors arrived in the Americas, to the extent that the jaguar had a fearsome reputation in the Americas, akin to the lion and tiger in the Old World. Nevertheless, even in those times, the jaguar's chief prey was the capybara, not the human, and Charles Darwin reported a saying of Native Americans that people would not have to fear the jaguar much, as long as capybaras were abundant. Jaguar populations are rapidly declining. The species is listed as Near Threatened on the IUCN Red List. The loss of parts of its range, including its virtual elimination from its historic northern areas and the increasing fragmentation of the remaining range, has contributed to this status. Particularly significant declines occurred in the 1960s, when more than 15,000 jaguars were killed for their skins in the Brazilian Amazon yearly; the Convention on International Trade in Endangered Species of 1973 brought about a sharp decline in the pelt trade. Detailed work performed under the auspices of the Wildlife Conservation Society revealed the species has lost 37% of its historic range, with its status unknown in an additional 18% of the global range. More encouragingly, the probability of long-term survival was considered high in 70% of its remaining range, particularly in the Amazon basin and the adjoining Gran Chaco and Pantanal. The major risks to the jaguar include deforestation across its habitat, increasing competition for food with human beings, especially in dry and unproductive habitat, poaching, hurricanes in northern parts of its range, and the behavior of ranchers who will often kill the cat where it preys on livestock. 
Where it has adapted to this prey, the jaguar has been shown to take cattle as a large portion of its diet; while land clearance for grazing is a problem for the species, the jaguar population may have increased when cattle were first introduced to South America, as the animals took advantage of the new prey base. This willingness to take livestock has induced ranch owners to hire full-time jaguar hunters. The skins of wild cats and other mammals have been highly valued by the fur trade for many decades. From the beginning of the 20th century, jaguars were hunted in large numbers, but over-harvest and habitat destruction reduced their availability and induced hunters and traders to shift gradually to smaller species by the 1960s. The international trade of jaguar skins had its largest boom between the end of the Second World War and the early 1970s, due to the growing economy and lack of regulations. From 1967 onwards, the regulations introduced by national laws and international agreements diminished the reported international trade from as high as 13,000 skins in 1967, through 7,000 skins in 1969, until it became negligible after 1976, although illegal trade and smuggling continue to be a problem. During this period, the biggest exporters were Brazil and Paraguay, and the biggest importers were the USA and Germany. The jaguar is listed on CITES Appendix I, which means that all international trade in jaguars or their body parts is prohibited. Hunting jaguars is prohibited in Argentina, Brazil, Colombia, French Guiana, Honduras, Nicaragua, Panama, Paraguay, Suriname, the United States, and Venezuela. Hunting jaguars is restricted in Guatemala and Peru. Trophy hunting is still permitted in Bolivia, and the species is not protected in Ecuador or Guyana. Jaguar conservation is complicated because the species' large range spans 18 countries with different policies and regulations. Specific areas of high importance for jaguar conservation, so-called "Jaguar Conservation Units" (JCUs), were determined in 2000. These are large areas inhabited by at least 50 jaguars. Each unit was assessed and evaluated on the basis of size, connectivity, habitat quality for both jaguar and prey, and jaguar population status. In this way, 51 Jaguar Conservation Units in 36 geographic regions were determined as priority areas for jaguar conservation. Recent studies have underlined that, to maintain the robust exchange across the jaguar gene pool necessary for maintaining the species, it is important that jaguar habitats are interconnected. To facilitate this, a new project, the Paseo del Jaguar, has been established to connect several jaguar hotspots. In 1986, the Cockscomb Basin Wildlife Sanctuary was established in Belize as the world's first protected area for jaguar conservation. Given the inaccessibility of much of the species' range, particularly the central Amazon, estimating jaguar numbers is difficult. Researchers typically focus on particular bioregions; thus, species-wide analysis is scant. In 1991, 600–1,000 (the highest total) were estimated to be living in Belize. A year earlier, 125–180 jaguars were estimated to be living in Mexico's 4,000-km² (1,500-mi²) Calakmul Biosphere Reserve, with another 350 in the state of Chiapas. The adjoining Maya Biosphere Reserve in Guatemala, with an area measuring 15,000 km² (5,800 mi²), may have 465–550 animals. 
Work employing GPS telemetry in 2003 and 2004 found densities of only six to seven jaguars per 100 km² in the critical Pantanal region, compared with 10 to 11 using traditional methods; this suggests the widely used sampling methods may inflate the actual numbers of cats. In setting up protected reserves, efforts generally also have to be focused on the surrounding areas, as jaguars are unlikely to confine themselves to the bounds of a reservation, especially if the population is increasing in size. Human attitudes in the areas surrounding reserves and laws and regulations to prevent poaching are essential to make conservation areas effective. To estimate population sizes within specific areas and to keep track of individual jaguars, camera trapping and wildlife tracking telemetry are widely used, and feces may be sought out with the help of detector dogs to study jaguar health and diet. Current conservation efforts often focus on educating ranch owners and promoting ecotourism. The jaguar is generally defined as an umbrella species – its home range and habitat requirements are sufficiently broad that, if protected, numerous other species of smaller range will also be protected. Umbrella species serve as "mobile links" at the landscape scale, in the jaguar's case through predation. Conservation organizations may thus focus on providing viable, connected habitat for the jaguar, with the knowledge other species will also benefit. Ecotourism setups are being used to generate public interest in charismatic animals such as the jaguar, while at the same time generating revenue that can be used in conservation efforts. Audits done in Africa have shown that ecotourism has helped in African cat conservation. As with large African cats, a key concern in jaguar ecotourism is the considerable habitat space the species requires, so if ecotourism is used to aid in jaguar conservation, some considerations need to be made as to how existing ecosystems will be kept intact, or how new ecosystems that are large enough to support a growing jaguar population will be put into place. The only extant cat native to North America that roars, the jaguar was recorded as an animal of the Americas by Thomas Jefferson in 1799. Jaguars, such as El Jefe, are still occasionally sighted in Arizona and New Mexico, prompting conservation actions by the authorities. For example, on August 20, 2012, the USFWS proposed setting aside 838,232 acres in Arizona and New Mexico — an area larger than Rhode Island — as critical jaguar habitat. In pre-Columbian Central and South America, the jaguar was a symbol of power and strength. Among the Andean cultures, a jaguar cult disseminated by the early Chavín culture became accepted over most of what is today Peru by 900 BC. The later Moche culture of northern Peru used the jaguar as a symbol of power in many of their ceramics. In the religion of the Muisca, who inhabited the cool Altiplano Cundiboyacense in the Colombian Andes, the jaguar was considered a sacred animal and during their religious rituals the people dressed in jaguar skins. The skins were traded with the lowland peoples of the tropical Llanos Orientales. The name of "zipa" Nemequene was derived from the Muysccubun words "nymy" and "quyne", meaning "force of the jaguar". 
In Mesoamerica, the Olmec—an early and influential culture of the Gulf Coast region roughly contemporaneous with the Chavín—developed a distinct "were-jaguar" motif of sculptures and figurines showing stylised jaguars or humans with jaguar characteristics. In the later Maya civilization, the jaguar was believed to facilitate communication between the living and the dead and to protect the royal household. The Maya saw these powerful felines as their companions in the spiritual world, and a number of Maya rulers bore names that incorporated the Mayan word for jaguar ("b'alam" in many of the Mayan languages). "Balam" ("Jaguar") remains a common Maya surname, and it is also the name of Chilam Balam, a legendary author to whom are attributed 17th- and 18th-century Maya miscellanies preserving much important knowledge. The Aztec civilization shared this image of the jaguar as the representative of the ruler and as a warrior. The Aztecs formed an elite warrior class known as the Jaguar Knights. In Aztec mythology, the jaguar was considered to be the totem animal of the powerful deity Tezcatlipoca. The jaguar and its name are widely used as a symbol in contemporary culture. It is the national animal of Guyana, and is featured in its coat of arms. The flag of the Department of Amazonas, a Colombian department, features a black jaguar silhouette pouncing towards a hunter. The jaguar also appears on banknotes of the Brazilian real. The jaguar is also a common fixture in the mythology of many contemporary native cultures in South America, usually being portrayed as the creature which gave humans the power over fire. Jaguar is widely used as a product name, most prominently for a British luxury car brand. The name has been adopted by sports franchises, including the NFL's Jacksonville Jaguars and the Mexican soccer club Chiapas F.C. The crest of Argentina's national federation in rugby union features a jaguar; however, because of a journalist error, the country's national team is nicknamed "Los Pumas". In the spirit of the ancient Mayan culture, the 1968 Olympics in Mexico City adopted a red jaguar as the first official Olympic mascot. Justus Justus (died on 10 November between 627 and 631) was the fourth Archbishop of Canterbury. He was sent from Italy to England by Pope Gregory the Great, on a mission to Christianize the Anglo-Saxons from their native paganism, probably arriving with the second group of missionaries despatched in 601. Justus became the first Bishop of Rochester in 604, and attended a church council in Paris in 614. Following the death of King Æthelberht of Kent in 616, Justus was forced to flee to Gaul, but was reinstated in his diocese the following year. In 624 Justus became Archbishop of Canterbury, overseeing the despatch of missionaries to Northumbria. After his death he was revered as a saint, and had a shrine in St Augustine's Abbey, Canterbury. Justus was a member of the Gregorian mission sent to England by Pope Gregory I. Almost everything known about Justus and his career is derived from the early 8th-century "Historia ecclesiastica gentis Anglorum" of Bede. As Bede does not describe Justus' origins, nothing is known about him prior to his arrival in England. He probably arrived in England with the second group of missionaries, sent at the request of Augustine of Canterbury in 601. Some modern writers describe Justus as one of the original missionaries who arrived with Augustine in 597, but Bede believed that Justus came in the second group. 
The second group included Mellitus, who later became Bishop of London and Archbishop of Canterbury. If Justus was a member of the second group of missionaries, then he arrived with a gift of books and "all things which were needed for worship and the ministry of the Church". A 15th-century Canterbury chronicler, Thomas of Elmham, claimed that there were a number of books brought to England by that second group still at Canterbury in his day, although he did not identify them. An investigation of extant Canterbury manuscripts shows that one possible survivor is the St. Augustine Gospels, now in Cambridge, Corpus Christi College, Manuscript (MS) 286. Augustine consecrated Justus as a bishop in 604, over a province including the Kentish town of Rochester. The historian Nicholas Brooks argues that the choice of Rochester was probably not because it had been a Roman-era bishopric, but rather because of its importance in the politics of the time. Although the town was small, with just one street, it was at the junction of Watling Street and the estuary of the Medway, and was thus a fortified town. Because Justus was probably not a monk (he was not called that by Bede), his cathedral clergy was very likely non-monastic too. A charter purporting to be from King Æthelberht, dated 28 April 604, survives in the "Textus Roffensis", as well as a copy based on the Textus in the 14th-century "Liber Temporalium". Written mostly in Latin but using an Old English boundary clause, the charter records a grant of land near the city of Rochester to Justus' church. Among the witnesses is Laurence, Augustine's future successor, but not Augustine himself. The text turns to two different addressees. First, Æthelberht is made to admonish his son Eadbald, who had been established as a sub-ruler in the region of Rochester. The grant itself is addressed directly to Saint Andrew, the patron saint of the church, a usage parallelled by other charters in the same archive. Historian Wilhelm Levison, writing in 1946, was sceptical about the authenticity of this charter. In particular, he felt that the two separate addresses were incongruous and suggested that the first address, occurring before the preamble, may have been inserted by someone familiar with Bede to echo Eadbald's future conversion (see below). A more recent and more positive appraisal by John Morris argues that the charter and its witness list are authentic because they incorporate titles and phraseology that had fallen out of use by 800. Æthelberht built Justus a cathedral church in Rochester; the foundations of a nave and chancel partly underneath the present-day Rochester Cathedral may date from that time. What remains of the foundations of an early rectangular building near the southern part of the current cathedral might also be contemporary with Justus or may be part of a Roman building. Together with Mellitus, the Bishop of London, Justus signed a letter written by Archbishop Laurence of Canterbury to the Irish bishops urging the native church to adopt the Roman method of calculating the date of Easter. This letter also mentioned that Irish missionaries, such as Bishop Dagan, had refused to share meals with the Gregorian missionaries. Although the letter has not survived, Bede quoted from parts of it. In 614, Justus attended the Council of Paris, held by the Frankish king, Chlothar II. It is unclear why Justus and Peter, the abbot of Sts Peter and Paul in Canterbury, were present. 
It may have been just chance, but historian James Campbell has suggested that Chlothar summoned clergy from Britain to attend in an attempt to assert overlordship over Kent. The historian N. J. Higham offers another explanation for their attendance, arguing that Æthelberht sent the pair to the council because of shifts in Frankish policy towards the Kentish kingdom, which threatened Kentish independence, and that the two clergymen were sent to negotiate a compromise with Chlothar. A pagan backlash against Christianity followed Æthelberht's death in 616, forcing Justus and Mellitus to flee to Gaul. The pair probably took refuge with Chlothar, hoping that the Frankish king would intervene and restore them to their sees, and by 617 Justus had been reinstalled in his bishopric by the new king. Mellitus also returned to England, but the prevailing pagan mood did not allow him to return to London; after Laurence's death, Mellitus became Archbishop of Canterbury. According to Bede, Justus received letters of encouragement from Pope Boniface V (619–625), as did Mellitus, although Bede does not record the actual letters. The historian J. M. Wallace-Hadrill assumes that both letters were general statements of encouragement to the missionaries. Justus became Archbishop of Canterbury in 624, receiving his pallium—the symbol of the jurisdiction entrusted to archbishops—from Pope Boniface V, following which Justus consecrated Romanus as his successor at Rochester. Boniface also gave Justus a letter congratulating him on the conversion of King "Aduluald" (probably King Eadbald of Kent), a letter which is included in Bede's "Historia ecclesiastica gentis Anglorum". Bede's account of Eadbald's conversion states that it was Laurence, Justus' predecessor at Canterbury, who converted the King to Christianity, but the historian D. P. Kirby argues that the letter's reference to Eadbald makes it likely that it was Justus. Other historians, including Barbara Yorke and Henry Mayr-Harting, conclude that Bede's account is correct, and that Eadbald was converted by Laurence. Yorke argues that there were two kings of Kent during Eadbald's reign, Eadbald and Æthelwald, and that Æthelwald was the "Aduluald" referred to by Boniface. She further argues that Justus converted Æthelwald back to Christianity after Æthelberht's death. Justus consecrated Paulinus as the first Bishop of York, before the latter accompanied Æthelburg of Kent to Northumbria for her marriage to King Edwin of Northumbria. Bede records Justus as having died on 10 November, but does not give a year, although it is likely to have been between 627 and 631. After his death, Justus was regarded as a saint, and was given a feast day of 10 November. The ninth-century Stowe Missal commemorates his feast day, along with Mellitus and Laurence. In the 1090s, his remains were translated, or ritually moved, to a shrine beside the high altar of St Augustine's Abbey in Canterbury. At about the same time, a "Life" was written about him by Goscelin of Saint-Bertin, as well as a poem by Reginald of Canterbury. Other material from Thomas of Elmham, Gervase of Canterbury, and William of Malmesbury, later medieval chroniclers, adds little to Bede's account of Justus' life. Joan of Arc Joan of Arc (6 January – 30 May 1431), nicknamed "The Maid of Orléans", is considered a heroine of France for her role during the Lancastrian phase of the Hundred Years' War and was canonized as a Roman Catholic saint. 
Joan of Arc was born to Jacques d'Arc and Isabelle Romée, a peasant family, at Domrémy in north-east France. Joan said she received visions of the Archangel Michael, Saint Margaret, and Saint Catherine of Alexandria instructing her to support Charles VII and recover France from English domination late in the Hundred Years' War. The uncrowned King Charles VII sent Joan to the siege of Orléans as part of a relief mission. She gained prominence after the siege was lifted only nine days later. Several additional swift victories led to Charles VII's coronation at Reims. This long-awaited event boosted French morale and paved the way for the final French victory. On 23 May 1430, she was captured at Compiègne by the Burgundian faction, which was allied with the English. She was later handed over to the English and put on trial by the pro-English Bishop of Beauvais, Pierre Cauchon, on a variety of charges. After Cauchon declared her guilty she was burned at the stake on 30 May 1431, dying at about nineteen years of age. In 1456, an inquisitorial court authorized by Pope Callixtus III examined the trial, debunked the charges against her, pronounced her innocent, and declared her a martyr. In the 16th century she became a symbol of the Catholic League, and in 1803 she was declared a national symbol of France by the decision of Napoleon Bonaparte. She was beatified in 1909 and canonized in 1920. Joan of Arc is one of the nine secondary patron saints of France, along with Saint Denis, Saint Martin of Tours, Saint Louis, Saint Michael, Saint Rémi, Saint Petronilla, Saint Radegund and Saint Thérèse of Lisieux. Joan of Arc has remained a popular figure in literature, painting, sculpture, and other cultural works since the time of her death, and many famous writers, playwrights, filmmakers, artists, and composers have created, and continue to create, cultural depictions of her. The Hundred Years' War had begun in 1337 as an inheritance dispute over the French throne, interspersed with occasional periods of relative peace. Nearly all the fighting had taken place in France, and the English army's use of "chevauchée" tactics (destructive "scorched earth" raids) had devastated the economy. The French population had not regained the size it had before the Black Death of the mid-14th century, and its merchants were isolated from foreign markets. Prior to the appearance of Joan of Arc, the English had nearly achieved their goal of a dual monarchy under English control, and the French army had not achieved any major victories for a generation. In the words of DeVries, "The kingdom of France was not even a shadow of its thirteenth-century prototype." The French king at the time of Joan's birth, Charles VI, suffered from bouts of insanity and was often unable to rule. The king's brother Louis, Duke of Orléans, and the king's cousin John the Fearless, Duke of Burgundy, quarreled over the regency of France and the guardianship of the royal children. This dispute included accusations that Louis was having an extramarital affair with the queen, Isabeau of Bavaria, and allegations that John the Fearless kidnapped the royal children. The conflict climaxed with the assassination of the Duke of Orléans in 1407 on the orders of the Duke of Burgundy. The young Charles of Orléans succeeded his father as duke and was placed in the custody of his father-in-law, the Count of Armagnac. Their faction became known as the "Armagnac" faction, and the opposing party led by the Duke of Burgundy was called the "Burgundian faction". 
Henry V of England took advantage of these internal divisions when he invaded the kingdom in 1415, winning a dramatic victory at Agincourt on 25 October and subsequently capturing many northern French towns. In 1418 Paris was taken by the Burgundians, who massacred the Count of Armagnac and about 2,500 of his followers. The future French king, Charles VII, assumed the title of Dauphin—the heir to the throne—at the age of fourteen, after all four of his older brothers had died in succession. His first significant official act was to conclude a peace treaty with the Duke of Burgundy in 1419. This ended in disaster when Armagnac partisans assassinated John the Fearless during a meeting under Charles's guarantee of protection. The new duke of Burgundy, Philip the Good, blamed Charles for the murder and entered into an alliance with the English. The allied forces conquered large sections of France. In 1420 the queen of France, Isabeau of Bavaria, signed the Treaty of Troyes, which granted the succession of the French throne to Henry V and his heirs instead of her son Charles. This agreement revived suspicions that the Dauphin may have been the illegitimate product of Isabeau's rumored affair with the late duke of Orléans rather than the son of King Charles VI. Henry V and Charles VI died within two months of each other in 1422, leaving an infant, Henry VI of England, the nominal monarch of both kingdoms. Henry V's brother, John of Lancaster, 1st Duke of Bedford, acted as regent. By the time Joan of Arc began to influence events in 1429, nearly all of northern France and some parts of the southwest were under Anglo-Burgundian control. The English controlled Paris and Rouen while the Burgundian faction controlled Reims, which had served as the traditional coronation site for French kings since 816. This was an important consideration since neither claimant to the throne of France had been officially crowned yet. In 1428 the English had begun the siege of Orléans, one of the few remaining cities still loyal to Charles VII and an important objective since it held a strategic position along the Loire River, which made it the last obstacle to an assault on the remainder of the French heartland. In the words of one modern historian, "On the fate of Orléans hung that of the entire kingdom." No one was optimistic that the city could long withstand the siege. For generations, there had been prophecies in France which promised that France would be saved by a virgin from the "borders of Lorraine" who "would work miracles", and that "France will be lost by a woman and shall thereafter be restored by a virgin". The second prophecy, predicting that France would be "lost" by a woman, was taken to refer to Isabeau's role in signing the Treaty of Troyes. Joan was the daughter of Jacques d'Arc and Isabelle Romée in Domrémy, a village which was then in the French part of the Duchy of Bar. Joan's parents owned about 50 acres (20 hectares) of land and her father supplemented his farming work with a minor position as a village official, collecting taxes and heading the local watch. They lived in an isolated patch of eastern France that remained loyal to the French crown despite being surrounded by pro-Burgundian lands. Several local raids occurred during her childhood and on one occasion her village was burned. Joan was illiterate, and it is believed that she dictated her letters to scribes and signed them with the help of others. 
At her trial, Joan stated that she was about 19 years old, which implies she thought she was born around 1412. She later testified that she experienced her first vision in 1425 at the age of 13, when she was in her "father's garden" and saw visions of figures she identified as Saint Michael, Saint Catherine, and Saint Margaret, who told her to drive out the English and bring the Dauphin to Reims for his coronation. She said she cried when they left, as they were so beautiful. At the age of 16, she asked a relative named Durand Lassois to take her to the nearby town of Vaucouleurs, where she petitioned the garrison commander, Robert de Baudricourt, for an armed escort to bring her to the French Royal Court at Chinon. Baudricourt's sarcastic response did not deter her. She returned the following January and gained support from two of Baudricourt's soldiers: Jean de Metz and Bertrand de Poulengy. According to Jean de Metz, she told him that "I must be at the King's side ... there will be no help (for the kingdom) if not from me. Although I would rather have remained spinning [wool] at my mother's side ... yet must I go and must I do this thing, for my Lord wills that I do so." Under the auspices of Metz and Poulengy, she was given a second meeting, where she made a prediction about a military reversal at the Battle of Rouvray near Orléans several days before messengers arrived to report it. According to the "Journal du Siége d'Orléans," which portrays Joan as a miraculous figure, Joan came to know of the battle through "grace divine" while tending her flocks in Lorraine and used this divine revelation to persuade Baudricourt to take her to the Dauphin. Robert de Baudricourt granted Joan an escort to visit Chinon after news from Orléans confirmed her assertion of the defeat. She made the journey through hostile Burgundian territory disguised as a male soldier, a fact which would later lead to charges of "cross-dressing" against her, although her escort viewed it as a normal precaution. Two of the members of her escort said they and the people of Vaucouleurs provided her with this clothing, and had suggested it to her. Joan's first meeting with Charles took place at the Royal Court at Chinon in 1429, when she was aged 17 and he 26. After arriving at the Court she made a strong impression on Charles during a private conference with him. During this time Charles' mother-in-law Yolande of Aragon was planning to finance a relief expedition to Orléans. Joan asked for permission to travel with the army and wear protective armor, which was provided by the Royal government. She depended on donated items for her armor, horse, sword, banner, and other items utilized by her entourage. Historian Stephen W. Richey explains her appeal to the royal court by pointing out that its members may have viewed her as the only source of hope for a regime that was near collapse: After years of one humiliating defeat after another, both the military and civil leadership of France were demoralized and discredited. When the Dauphin Charles granted Joan's urgent request to be equipped for war and placed at the head of his army, his decision must have been based in large part on the knowledge that every orthodox, every rational option had been tried and had failed. Only a regime in the final straits of desperation would pay any heed to an illiterate farm girl who claimed that the voice of God was instructing her to take charge of her country's army and lead it to victory. 
Upon her arrival on the scene, Joan effectively turned the longstanding Anglo-French conflict into a religious war, a course of action that was not without risk. Charles' advisers were worried that unless Joan's orthodoxy could be established beyond doubt—that she was not a heretic or a sorceress—Charles' enemies could easily make the allegation that his crown was a gift from the devil. To circumvent this possibility, the Dauphin ordered background inquiries and a theological examination at Poitiers to verify her morality. In April 1429, the commission of inquiry "declared her to be of irreproachable life, a good Christian, possessed of the virtues of humility, honesty and simplicity." The theologians at Poitiers did not render a decision on the issue of divine inspiration; rather, they informed the Dauphin that there was a "favorable presumption" to be made on the divine nature of her mission. This was enough for Charles, but they also stated that he had an obligation to put Joan to the test. "To doubt or abandon her without suspicion of evil would be to repudiate the Holy Spirit and to become unworthy of God's aid", they declared. They recommended that her claims should be put to the test by seeing if she could lift the siege of Orléans as she had predicted. She arrived at the besieged city of Orléans on 29 April 1429. Jean d'Orléans, the acting head of the ducal family of Orléans on behalf of his captive half-brother, initially excluded her from war councils and failed to inform her when the army engaged the enemy. However, his decision to exclude her did not prevent her presence at most councils and battles. The extent of her actual military participation and leadership is a subject of debate among historians. On the one hand, Joan stated that she carried her banner in battle and had never killed anyone, preferring her banner "forty times" better than a sword; and the army was always directly commanded by a nobleman, such as the Duke of Alençon. On the other hand, many of these same noblemen stated that Joan had a profound effect on their decisions since they often accepted the advice she gave them, believing her advice was divinely inspired. In either case, historians agree that the army enjoyed remarkable success during her brief time with it. The appearance of Joan of Arc at Orléans coincided with a sudden change in the pattern of the siege. During the five months before her arrival, the defenders had attempted only one offensive assault, which had ended in defeat. On 4 May, however, the Armagnacs attacked and captured the outlying fortress of Saint Loup ("bastille de Saint-Loup"), followed on 5 May by a march to a second fortress called Saint-Jean-le-Blanc, which was found deserted. When English troops came out to oppose the advance, a rapid cavalry charge drove them back into their fortresses, apparently without a fight. The Armagnacs then attacked and captured an English fortress built around a monastery called Les Augustins. That night, Armagnac troops maintained positions on the south bank of the river before attacking the main English stronghold, called "les Tourelles", on the morning of 7 May. Contemporaries acknowledged Joan as the heroine of the engagement. She was wounded by an arrow between the neck and shoulder while holding her banner in the trench outside les Tourelles, but later returned to encourage a final assault that succeeded in taking the fortress. The English retreated from Orléans the next day, and the siege was over. 
At Chinon and Poitiers, Joan had declared that she would provide a sign at Orléans. The lifting of the siege was interpreted by many people to be that sign, and it gained her the support of prominent clergy such as the Archbishop of Embrun and the theologian Jean Gerson, both of whom wrote supportive treatises immediately following this event. To the English, the ability of this peasant girl to defeat their armies was regarded as proof that she was possessed by the Devil; the British medievalist Beverly Boyd noted that this charge was not just propaganda but was sincerely believed, since the idea that God was supporting the French via Joan was distinctly unappealing to an English audience. The sudden victory at Orléans also led to many proposals for further offensive action. Joan persuaded Charles VII to allow her to accompany the army with Duke John II of Alençon, and she gained royal permission for her plan to recapture nearby bridges along the Loire as a prelude to an advance on Reims and the coronation of Charles VII. This was a bold proposal because Reims was roughly twice as far away as Paris and deep within enemy territory. The English expected an attempt to recapture Paris or an attack on Normandy. The Duke of Alençon accepted Joan's advice concerning strategy. Other commanders including Jean d'Orléans had been impressed with her performance at Orléans and became her supporters. Alençon credited her with saving his life at Jargeau, where she warned him that a cannon on the walls was about to fire at him. During the same siege she withstood a blow from a stone that hit her helmet while she was near the base of the town's wall. The army took Jargeau on 12 June, Meung-sur-Loire on 15 June, and Beaugency on 17 June. The English army withdrew from the Loire Valley and headed north on 18 June, joining with an expected unit of reinforcements under the command of Sir John Fastolf. Joan urged the Armagnacs to pursue, and the two armies clashed southwest of the village of Patay. The battle at Patay might be compared to Agincourt in reverse. The French vanguard attacked a unit of English archers who had been placed to block the road. A rout ensued that decimated the main body of the English army and killed or captured most of its commanders. Fastolf escaped with a small band of soldiers and became the scapegoat for the humiliating English defeat. The French suffered minimal losses. The French army left Gien on 29 June on the march toward Reims and accepted the conditional surrender of the Burgundian-held city of Auxerre on 3 July. Other towns in the army's path returned to French allegiance without resistance. Troyes, the site of the treaty that tried to disinherit Charles VII, was the only one to put up even brief opposition. The army was in short supply of food by the time it reached Troyes. But the army was in luck: a wandering friar named Brother Richard had been preaching about the end of the world at Troyes and convinced local residents to plant beans, a crop with an early harvest. The hungry army arrived as the beans ripened. Troyes capitulated after a bloodless four-day siege. Reims opened its gates to the army on 16 July 1429. The coronation took place the following morning. Although Joan and the Duke of Alençon urged a prompt march toward Paris, the royal court preferred to negotiate a truce with Duke Philip of Burgundy. The duke violated the purpose of the agreement by using it as a stalling tactic to reinforce the defense of Paris. 
The French army marched past a succession of towns near Paris during the interim and accepted the surrender of several towns without a fight. The Duke of Bedford led an English force and confronted the French army in a standoff at the battle of Montépilloy on 15 August. The French assault on Paris ensued on 8 September. Despite a wound to the leg from a crossbow bolt, Joan remained in the inner trench of Paris until she was carried back to safety by one of the commanders. The following morning the army received a royal order to withdraw. Most historians blame French Grand Chamberlain Georges de la Trémoille for the political blunders that followed the coronation. In October, Joan was with the royal army when it took Saint-Pierre-le-Moûtier, followed by an unsuccessful attempt to take La-Charité-sur-Loire in November and December. On 29 December, Joan and her family were ennobled by Charles VII as a reward for her actions. A truce with England during the following few months left Joan with little to do. On 23 March 1430, she dictated a threatening letter to the Hussites, a dissident group which had broken with the Catholic Church on a number of doctrinal points and had defeated several previous crusades sent against them. Joan's letter promises to "remove your madness and foul superstition, taking away either your heresy or your lives." Joan, an ardent Catholic who hated all forms of heresy as well as Islam, also sent a letter challenging the English to leave France and go with her to Bohemia to fight the Hussites, an offer that went unanswered. The truce with England quickly came to an end. Joan traveled to Compiègne the following May to help defend the city against an English and Burgundian siege. On 23 May 1430 she was with a force that attempted to attack the Burgundian camp at Margny north of Compiègne, but was ambushed and captured. When the troops began to withdraw toward the nearby fortifications of Compiègne after the advance of an additional force of 6,000 Burgundians, Joan stayed with the rear guard. Burgundian troops surrounded the rear guard, and she was pulled off her horse by an archer. She agreed to surrender to a pro-Burgundian nobleman named Lionel of Wandomme, a member of Jean de Luxembourg's unit. Joan was imprisoned by the Burgundians at Beaurevoir Castle. She made several escape attempts, on one occasion jumping from her 70-foot (21 m) tower, landing on the soft earth of a dry moat, after which she was moved to the Burgundian town of Arras. The English negotiated with their Burgundian allies to transfer her to their custody, with Bishop Pierre Cauchon of Beauvais, an English partisan, assuming a prominent role in these negotiations and her later trial. The final agreement called for the English to pay the sum of 10,000 livres tournois to obtain her from Jean de Luxembourg, a member of the Council of Duke Philip of Burgundy. The English moved Joan to the city of Rouen, which served as their main headquarters in France. Historian Pierre Champion notes that the Armagnacs attempted to rescue her several times by launching military campaigns toward Rouen while she was held there. One campaign occurred during the winter of 1430–1431, another in March 1431, and one in late May shortly before her execution. These attempts were beaten back. Champion also quotes 15th-century sources that say Charles VII threatened to "exact vengeance" upon Burgundian troops whom his forces had captured and upon "the English and women of England" in retaliation for their treatment of Joan. 
The trial for heresy was politically motivated. The tribunal was composed entirely of pro-English and Burgundian clerics, and overseen by English commanders including the Duke of Bedford and the Earl of Warwick. In the words of the British medievalist Beverly Boyd, the trial was meant by the English Crown to be "...a ploy to get rid of a bizarre prisoner of war with maximum embarrassment to their enemies". Legal proceedings commenced on 9 January 1431 at Rouen, the seat of the English occupation government. The procedure was suspect on a number of points, which would later provoke criticism of the tribunal by the chief inquisitor who investigated the trial after the war. Under ecclesiastical law, Bishop Cauchon lacked jurisdiction over the case. Cauchon owed his appointment to his partisan support of the English Crown, which financed the trial. The low standard of evidence used in the trial also violated inquisitorial rules. Clerical notary Nicolas Bailly, who was commissioned to collect testimony against Joan, could find no adverse evidence. Without such evidence the court lacked grounds to initiate a trial. Opening a trial anyway, the court also violated ecclesiastical law by denying Joan the right to a legal adviser. In addition, stacking the tribunal entirely with pro-English clergy violated the medieval Church's requirement that heresy trials be judged by an impartial or balanced group of clerics. Upon the opening of the first public examination, Joan complained that those present were all partisans against her and asked for "ecclesiastics of the French side" to be invited in order to provide balance. This request was denied. The Vice-Inquisitor of Northern France (Jean Lemaitre) objected to the trial at its outset, and several eyewitnesses later said he was forced to cooperate after the English threatened his life. Some of the other clergy at the trial were also threatened when they refused to cooperate, including a Dominican friar named Isambart de la Pierre. These threats, and the domination of the trial by a secular government, were violations of the Church's rules and undermined the right of the Church to conduct heresy trials without secular interference. The trial record contains statements from Joan that the eyewitnesses later said astonished the court, since she was an illiterate peasant and yet was able to evade the theological pitfalls the tribunal had set up to entrap her. The transcript's most famous exchange is an exercise in subtlety: "Asked if she knew she was in God's grace, she answered, 'If I am not, may God put me there; and if I am, may God so keep me.'" The question is a scholarly trap. Church doctrine held that no one could be certain of being in God's grace. If she had answered yes, then she would have been charged with heresy. If she had answered no, then she would have confessed her own guilt. The court notary Boisguillaume later testified that at the moment the court heard her reply, "Those who were interrogating her were stupefied." Several members of the tribunal later testified that important portions of the transcript were falsified by being altered in her disfavor. Under Inquisitorial guidelines, Joan should have been confined in an ecclesiastical prison under the supervision of female guards (i.e., nuns). Instead, the English kept her in a secular prison guarded by their own soldiers. Bishop Cauchon denied Joan's appeals to the Council of Basel and the Pope, which should have stopped his proceeding. 
The twelve articles of accusation which summarized the court's findings contradicted the court record, which had already been doctored by the judges. Under threat of immediate execution, the illiterate defendant signed an abjuration document that she did not understand. The court substituted a different abjuration in the official record. Heresy was a capital crime only for a repeat offense; therefore, according to the eyewitnesses, a repeat offense of "cross-dressing" was now arranged by the court. Joan agreed to wear feminine clothing when she abjured, which created a problem. According to the later descriptions of some of the tribunal members, she had previously been wearing male (i.e. military) clothing in prison because it gave her the ability to fasten her hosen, boots and tunic together into one piece, which deterred rape by making it difficult to pull her hosen off. She was evidently afraid to give up this outfit even temporarily because it was likely to be confiscated by the judge and she would thereby be left without protection. A woman's dress offered no such protection. A few days after her abjuration, when she was forced to wear a dress, she told a tribunal member that "a great English lord had entered her prison and tried to take her by force." She resumed male attire either as a defense against molestation or, in the testimony of Jean Massieu, because her dress had been taken by the guards and she was left with nothing else to wear. Her resumption of male military clothing was labeled a relapse into heresy for cross-dressing, although this would later be disputed by the inquisitor who presided over the appeals court that examined the case after the war. Medieval Catholic doctrine held that cross-dressing should be evaluated based on context, as stated in the "Summa Theologica" by St. Thomas Aquinas, which says that necessity would be a permissible reason for cross-dressing. This would include the use of clothing as protection against rape, provided the clothing actually offered such protection. In terms of doctrine, she had been justified in disguising herself as a pageboy during her journey through enemy territory, and she was justified in wearing armor during battle and protective clothing in camp and then in prison. The "Chronique de la Pucelle" states that her male clothing deterred molestation while she was camped in the field. When her military clothing was not needed while on campaign, she was said to have gone back to wearing a dress. Clergy who later testified at the posthumous appellate trial affirmed that she continued to wear male clothing in prison to deter molestation and rape. Joan referred the court to the Poitiers inquiry when questioned on the matter. The Poitiers record no longer survives, but circumstances indicate the Poitiers clerics had approved her practice. She also kept her hair cut short through her military campaigns and while in prison. Her supporters, such as the theologian Jean Gerson, defended her hairstyle for practical reasons, as did Inquisitor Bréhal later during the appellate trial. Nonetheless, at the trial in 1431 she was condemned and sentenced to die. Boyd described Joan's trial as so "unfair" that the trial transcripts were later used as evidence for canonizing her in the 20th century. Eyewitnesses described the scene of the execution by burning on 30 May 1431. Tied to a tall pillar at the Vieux-Marché in Rouen, she asked two of the clergy, Fr Martin Ladvenu and Fr Isambart de la Pierre, to hold a crucifix before her. 
An English soldier also constructed a small cross that she put in the front of her dress. After she died, the English raked back the coals to expose her charred body so that no one could claim she had escaped alive. They then burned the body twice more, to reduce it to ashes and prevent any collection of relics, and cast her remains into the Seine River. The executioner, Geoffroy Thérage, later stated that he "greatly feared to be damned." The Hundred Years' War continued for twenty-two years after her death. Charles VII retained legitimacy as the king of France in spite of a rival coronation held for Henry VI at Notre-Dame cathedral in Paris on 16 December 1431, the boy's tenth birthday. Before England could rebuild its military leadership and force of longbowmen lost in 1429, the country lost its alliance with Burgundy when the Treaty of Arras was signed in 1435. The Duke of Bedford died the same year and Henry VI became the youngest king of England to rule without a regent. His weak leadership was probably the most important factor in ending the conflict. Kelly DeVries argues that Joan of Arc's aggressive use of artillery and frontal assaults influenced French tactics for the rest of the war. In 1452, during the posthumous investigation into her execution, the Church declared that a religious play in her honor at Orléans would allow attendees to gain an indulgence (remission of temporal punishment for sin) by making a pilgrimage to the event. A posthumous retrial opened after the war ended. Pope Callixtus III authorized this proceeding, also known as the "nullification trial", at the request of Inquisitor-General Jean Bréhal and Joan's mother Isabelle Romée. The purpose of the trial was to investigate whether the trial of condemnation and its verdict had been handled justly and according to canon law. Investigations started with an inquest by Guillaume Bouillé, a theologian and former rector of the University of Paris (Sorbonne). Bréhal conducted an investigation in 1452. A formal appeal followed in November 1455. The appellate process involved clergy from throughout Europe and observed standard court procedure. A panel of theologians analyzed testimony from 115 witnesses. Bréhal drew up his final summary in June 1456, which described Joan as a martyr and implicated the late Pierre Cauchon in heresy for having convicted an innocent woman in pursuit of a secular vendetta. The technical reason for her execution had been a Biblical clothing law. The nullification trial reversed the conviction in part because the condemnation proceeding had failed to consider the doctrinal exceptions to that stricture. The appellate court declared her innocent on 7 July 1456. Joan of Arc became a symbol of the Catholic League during the 16th century. When Félix Dupanloup was made bishop of Orléans in 1849, he pronounced a fervid panegyric on Joan of Arc, which attracted attention in England as well as France, and he led the efforts which culminated in Joan of Arc's beatification in 1909. Joan of Arc became a semi-legendary figure for the four centuries after her death. The main sources of information about her were chronicles. Five original manuscripts of her condemnation trial surfaced in old archives during the 19th century. Soon, historians also located the complete records of her rehabilitation trial, which contained sworn testimony from 115 witnesses, and the original French notes for the Latin condemnation trial transcript. 
Various contemporary letters also emerged, three of which carry the signature "Jehanne" in the unsteady hand of a person learning to write. This unusual wealth of primary source material is one reason DeVries declares, "No person of the Middle Ages, male or female, has been the subject of more study." Joan of Arc came from an obscure village and rose to prominence when she was a teenager, and she did so as an uneducated peasant. The French and English kings had justified the ongoing war through competing interpretations of inheritance law, first concerning Edward III's claim to the French throne and then Henry VI's. The conflict had been a legalistic feud between two related royal families, but Joan transformed it along religious lines and gave meaning to appeals such as that of squire Jean de Metz when he asked, "Must the king be driven from the kingdom; and are we to be English?" In the words of Stephen Richey, "She turned what had been a dry dynastic squabble that left the common people unmoved except for their own suffering into a passionately popular war of national liberation." Richey also notes the breadth of her subsequent appeal: from Christine de Pizan to the present, women have looked to Joan as a positive example of a brave and active woman. She operated within a religious tradition that believed an exceptional person from any level of society might receive a divine calling. Some of her most significant aid came from women. King Charles VII's mother-in-law, Yolande of Aragon, confirmed Joan's virginity and financed her departure to Orléans. Joan of Luxembourg, aunt to the count of Luxembourg who held custody of her after Compiègne, alleviated her conditions of captivity and may have delayed her sale to the English. Finally, Anne of Burgundy, the duchess of Bedford and wife to the regent of England, declared Joan a virgin during pretrial inquiries. Three separate vessels of the French Navy have been named after her, including a helicopter carrier that was retired from active service on 7 June 2010. At present, the French far-right political party "Front National" holds rallies at her statues, reproduces her image in the party's publications, and uses a tricolor flame partly symbolic of her martyrdom as its emblem. This party's opponents sometimes satirize its appropriation of her image. The French civic holiday in her honor, instituted in 1920, is the second Sunday of May. World War I songs include "Joan of Arc, They Are Calling You", and "Joan of Arc's Answer Song". Joan of Arc's religious visions have remained an ongoing topic of interest. She identified Saint Margaret, Saint Catherine, and Saint Michael as the sources of her revelations, although there is some ambiguity as to which of several identically named saints she intended. Analysis of her visions is problematic since the main source of information on this topic is the condemnation trial transcript in which she defied customary courtroom procedure about a witness' oath and specifically refused to answer every question about her visions. She complained that a standard witness oath would conflict with an oath she had previously sworn to maintain confidentiality about meetings with her king. It remains unknown to what extent the surviving record may represent the fabrications of corrupt court officials or her own possible fabrications to protect state secrets. Some historians sidestep speculation about the visions by asserting that her belief in her calling is more relevant than questions about the visions' ultimate origin. 
A number of more recent scholars have attempted to explain her visions in psychiatric or neurological terms. Potential diagnoses have included epilepsy, migraine, tuberculosis, and schizophrenia. None of the putative diagnoses have gained consensus support, and many scholars have argued that she did not display any of the objective symptoms that can accompany the mental illnesses which have been suggested, such as schizophrenia. Dr. Philip Mackowiak dismissed the possibility of schizophrenia and several other disorders (temporal lobe epilepsy and ergot poisoning) in a chapter on Joan of Arc in his book "Post-Mortem" in 2007. Dr. John Hughes rejected the idea that Joan of Arc suffered from epilepsy in an article in the academic journal "Epilepsy & Behavior". Two experts who analysed the hypothesis of temporal lobe tuberculoma in the medical journal "Neuropsychobiology" expressed their misgivings about this claim. In response to another such theory alleging that her visions were caused by bovine tuberculosis as a result of drinking unpasteurized milk, historian Régine Pernoud wrote that if drinking unpasteurized milk could produce such potential benefits for the nation, then the French government should stop mandating the pasteurization of milk. Joan of Arc gained favor in the court of King Charles VII, who accepted her as sane. He would have been familiar with the signs of madness because his own father, Charles VI, had suffered from it. Charles VI was popularly known as "Charles the Mad", and much of France's political and military decline during his reign could be attributed to the power vacuum that his episodes of insanity had produced. The previous king had believed he was made of glass, a delusion no courtier had mistaken for a religious awakening. Fears that King Charles VII would manifest the same insanity may have factored into the attempt to disinherit him at Troyes. This stigma was so persistent that contemporaries of the next generation would attribute to inherited madness the breakdown that England's King Henry VI was to suffer in 1453: Henry VI was nephew to Charles VII and grandson to Charles VI. The court of Charles VII was shrewd and skeptical on the subject of mental health. Upon Joan's arrival at Chinon, the royal counselor Jacques Gélu advised caution. She remained astute to the end of her life, and the rehabilitation trial testimony frequently marvels at her astuteness: her subtle replies under interrogation even forced the court to stop holding public sessions. In 1867, a jar was found in a Paris pharmacy with the inscription "Remains found under the stake of Joan of Arc, virgin of Orleans." The contents consisted of a charred human rib, carbonized wood, a piece of linen and a cat femur—explained as the practice of throwing black cats onto the pyre of witches. They are now in the Museum of Art and History in Chinon. In 2006, Philippe Charlier, a forensic scientist at Raymond Poincaré Hospital (Garches), was authorized to study the relics. Carbon-14 tests and various spectroscopic analyses were performed, and the results determined that the remains came from an Egyptian mummy from the sixth to the third century BC. The charred appearance was the result of the embalming substances, not from combustion. Large amounts of pine pollen were also found, consistent with the resin used in mummification, and some unburned linen similar to that used to wrap mummies was also found. 
The noted perfumers Guerlain and Jean Patou said that they could smell vanilla in the remains, also consistent with mummification. Apparently the mummy had been among the ingredients of the medieval pharmacopoeia, and the remains were relabeled in a time of French nationalism. In March 2016 a ring believed to have been worn by Joan, which had passed through the hands of a cardinal, a king, an aristocrat and the daughter of a British physician, was sold at auction to the Puy du Fou, a historical theme park, for £300,000. There is no conclusive proof that she owned the ring, but its unusual design closely matches Joan's own words about her ring at her trial. The Arts Council later determined the ring should not have left the United Kingdom. The purchasers appealed, including to Elizabeth II, and the ring was allowed to remain in France. The ring was reportedly first passed to Cardinal Henry Beaufort, who attended Joan's trial and execution in 1431. The standard accounts of the life of Joan of Arc have been challenged by revisionist authors. Claims made by such authors include: that Joan of Arc was not actually burned at the stake; that she was secretly the half-sister of King Charles VII; that she was not a true Christian but a member of a pagan cult; and that most of the story of Joan of Arc is actually a myth. None of these claims are widely accepted by historians. John, King of England John (24 December 1166 – 19 October 1216), also known as John Lackland, was King of England from 1199 until his death in 1216. John lost the Duchy of Normandy to King Philip II of France, resulting in the collapse of most of the Angevin Empire and contributing to the subsequent growth in power of the French Capetian dynasty during the 13th century. The baronial revolt at the end of John's reign led to the sealing of Magna Carta, a document sometimes considered an early step in the evolution of the constitution of the United Kingdom. John, the youngest of five sons of King Henry II of England and Duchess Eleanor of Aquitaine, was at first not expected to inherit significant lands. Following the failed rebellion of his elder brothers between 1173 and 1174, however, John became Henry's favourite child. He was appointed the Lord of Ireland in 1177 and given lands in England and on the continent. John's elder brothers William, Henry and Geoffrey died young; by the time Richard I became king in 1189, John was a potential heir to the throne. John unsuccessfully attempted a rebellion against Richard's royal administrators whilst his brother was participating in the Third Crusade. Despite this, after Richard died in 1199, John was proclaimed King of England, and came to an agreement with Philip II of France to recognise John's possession of the continental Angevin lands at the peace treaty of Le Goulet in 1200. When war with France broke out again in 1202, John achieved early victories, but shortages of military resources and his treatment of Norman, Breton, and Anjou nobles resulted in the collapse of his empire in northern France in 1204. John spent much of the next decade attempting to regain these lands, raising huge revenues, reforming his armed forces and rebuilding continental alliances. John's judicial reforms had a lasting impact on the English common law system, as well as providing an additional source of revenue. An argument with Pope Innocent III led to John's excommunication in 1209, a dispute finally settled by the king in 1213. 
John's attempt to defeat Philip in 1214 failed due to the French victory over John's allies at the battle of Bouvines. When he returned to England, John faced a rebellion by many of his barons, who were unhappy with his fiscal policies and his treatment of many of England's most powerful nobles. Although both John and the barons agreed to the Magna Carta peace treaty in 1215, neither side complied with its conditions. Civil war broke out shortly afterwards, with the barons aided by Louis of France. It soon descended into a stalemate. John died of dysentery contracted whilst on campaign in eastern England during late 1216; supporters of his son Henry III went on to achieve victory over Louis and the rebel barons the following year. Contemporary chroniclers were mostly critical of John's performance as king, and his reign has since been the subject of significant debate and periodic revision by historians from the 16th century onwards. Historian Jim Bradbury has summarised the current historical opinion of John's positive qualities, observing that John is today usually considered a "hard-working administrator, an able man, an able general". Nonetheless, modern historians agree that he also had many faults as king, including what historian Ralph Turner describes as "distasteful, even dangerous personality traits", such as pettiness, spitefulness, and cruelty. These negative qualities provided extensive material for fiction writers in the Victorian era, and John remains a recurring character within Western popular culture, primarily as a villain in films and stories depicting the Robin Hood legends. John was born to Henry II of England and Eleanor of Aquitaine on 24 December 1166. Henry had inherited significant territories along the Atlantic seaboard—Anjou, Normandy and England—and expanded his empire by conquering Brittany. Henry married the powerful Eleanor of Aquitaine, who reigned over the Duchy of Aquitaine and had a tenuous claim to Toulouse and Auvergne in southern France, in addition to being the former wife of Louis VII of France. The result was the Angevin Empire, named after Henry's paternal title as Count of Anjou and, more specifically, its seat in Angers. The Empire, however, was inherently fragile: although all the lands owed allegiance to Henry, the disparate parts each had their own histories, traditions and governance structures. As one moved south through Anjou and Aquitaine, the extent of Henry's power in the provinces diminished considerably, scarcely resembling the modern concept of an empire at all. Some of the traditional ties between parts of the empire such as Normandy and England were slowly dissolving over time. It was unclear what would happen to the empire on Henry's death. Although the custom of primogeniture, under which an eldest son would inherit all his father's lands, was slowly becoming more widespread across Europe, it was less popular amongst the Norman kings of England. Most believed that Henry would divide the empire, giving each son a substantial portion, and hoping that his children would continue to work together as allies after his death. To complicate matters, much of the Angevin empire was held by Henry only as a vassal of the King of France of the rival line of the House of Capet. Henry had often allied himself with the Holy Roman Emperor against France, making the feudal relationship even more challenging. Shortly after his birth, John was passed from Eleanor into the care of a wet nurse, a traditional practice for medieval noble families. 
Eleanor then left for Poitiers, the capital of Aquitaine, and sent John and his sister Joan north to Fontevrault Abbey. This may have been done with the aim of steering her youngest son, with no obvious inheritance, towards a future ecclesiastical career. Eleanor spent the next few years conspiring against her husband Henry, and neither parent played a part in John's very early life. John was probably, like his brothers, assigned a "magister" whilst he was at Fontevrault, a teacher charged with his early education and with managing the servants of his immediate household; John was later taught by Ranulf de Glanvill, a leading English administrator. John spent some time as a member of the household of his eldest living brother Henry the Young King, where he probably received instruction in hunting and military skills. John grew up to be relatively short, with a "powerful, barrel-chested body" and dark red hair; he looked to contemporaries like an inhabitant of Poitou. John enjoyed reading and, unusually for the period, built up a travelling library of books. He enjoyed gambling, in particular at backgammon, and was an enthusiastic hunter, even by medieval standards. He liked music, although not songs. John would become a "connoisseur of jewels", building up a large collection, and became famous for his opulent clothes and also, according to French chroniclers, for his fondness for bad wine. As John grew up, he became known for sometimes being "genial, witty, generous and hospitable"; at other moments, he could be jealous, over-sensitive and prone to fits of rage, "biting and gnawing his fingers" in anger. During John's early years, Henry attempted to resolve the question of his succession. Henry the Young King had been crowned King of England in 1170, but was not given any formal powers by his father; he was also promised Normandy and Anjou as part of his future inheritance. Richard was to be appointed the Count of Poitou with control of Aquitaine, whilst Geoffrey was to become the Duke of Brittany. At this time it seemed unlikely that John would ever inherit substantial lands, and he was jokingly nicknamed "Lackland" by his father. Henry II wanted to secure the southern borders of Aquitaine and decided to betroth his youngest son to Alais, the daughter and heiress of Humbert III of Savoy. As part of this agreement John was promised the future inheritance of Savoy, Piedmont, Maurienne, and the other possessions of Count Humbert. For his part in the potential marriage alliance, Henry II transferred the castles of Chinon, Loudun and Mirebeau into John's name; as John was only five years old his father would continue to control them for practical purposes. Henry the Young King was unimpressed by this; although he had yet to be granted control of any castles in his new kingdom, these were effectively his future property and had been given away without consultation. Alais made the trip over the Alps and joined Henry II's court, but she died before marrying John, which left the prince once again without an inheritance. In 1173 John's elder brothers, backed by Eleanor, rose in revolt against Henry in the short-lived rebellion of 1173 to 1174. Growing irritated with his subordinate position to Henry II and increasingly worried that John might be given additional lands and castles at his expense, Henry the Young King travelled to Paris and allied himself with Louis VII. 
Eleanor, irritated by her husband's persistent interference in Aquitaine, encouraged Richard and Geoffrey to join their brother Henry in Paris. Henry II triumphed over the coalition of his sons, but was generous to them in the peace settlement agreed at Montlouis. Henry the Young King was allowed to travel widely in Europe with his own household of knights, Richard was given Aquitaine back, and Geoffrey was allowed to return to Brittany; only Eleanor was imprisoned for her role in the revolt. John had spent the conflict travelling alongside his father, and was given widespread possessions across the Angevin empire as part of the Montlouis settlement; from then onwards, most observers regarded John as Henry II's favourite child, although he was the furthest removed in terms of the royal succession. Henry II began to find more lands for John, mostly at various nobles' expense. In 1175 he appropriated the estates of the late Earl of Cornwall and gave them to John. The following year, Henry disinherited the sisters of Isabelle of Gloucester, contrary to legal custom, and betrothed John to the now extremely wealthy Isabelle. In 1177, at the Council of Oxford, Henry dismissed William FitzAldelm as the Lord of Ireland and replaced him with the ten-year-old John. Henry the Young King fought a short war with his brother Richard in 1183 over the status of England, Normandy and Aquitaine. Henry II moved in support of Richard, and Henry the Young King died from dysentery at the end of the campaign. With his primary heir dead, Henry rearranged the plans for the succession: Richard was to be made King of England, albeit without any actual power until the death of his father; Geoffrey would retain Brittany; and John would now become the Duke of Aquitaine in place of Richard. Richard refused to give up Aquitaine; Henry II was furious and ordered John, with help from Geoffrey, to march south and retake the duchy by force. The two attacked the capital of Poitiers, and Richard responded by attacking Brittany. The war ended in stalemate and a tense family reconciliation in England at the end of 1184. In 1185 John made his first visit to Ireland, accompanied by 300 knights and a team of administrators. Henry had tried to have John officially proclaimed King of Ireland, but Pope Lucius III would not agree. John's first period of rule in Ireland was not a success. Ireland had only recently been conquered by Anglo-Norman forces, and tensions were still rife between Henry II, the new settlers and the existing inhabitants. John infamously offended the local Irish rulers by making fun of their unfashionable long beards, failed to make allies amongst the Anglo-Norman settlers, began to lose ground militarily against the Irish and finally returned to England later in the year, blaming the viceroy, Hugh de Lacy, for the fiasco. The problems amongst John's wider family continued to grow. His elder brother Geoffrey died during a tournament in 1186, leaving a posthumous son, Arthur of Brittany, and an elder daughter, Eleanor. Geoffrey's death brought John slightly closer to the throne of England. The uncertainty about what would happen after Henry's death continued to grow; Richard was keen to join a new crusade and remained concerned that whilst he was away Henry would appoint John his formal successor. Richard began discussions about a potential alliance with Philip II in Paris during 1187, and the next year Richard gave homage to Philip in exchange for support for a war against Henry. 
Richard and Philip fought a joint campaign against Henry, and by the summer of 1189 the king made peace, promising Richard the succession. John initially remained loyal to his father, but changed sides once it appeared that Richard would win. Henry died shortly afterwards. When John's elder brother Richard became king in September 1189, he had already declared his intention of joining the Third Crusade. Richard set about raising the huge sums of money required for this expedition through the sale of lands, titles and appointments, and attempted to ensure that he would not face a revolt while away from his empire. John was made Count of Mortain, was married to the wealthy Isabel of Gloucester, and was given valuable lands in Lancaster and the counties of Cornwall, Derby, Devon, Dorset, Nottingham and Somerset, all with the aim of buying his loyalty to Richard whilst the king was on crusade. Richard retained royal control of key castles in these counties, thereby preventing John from accumulating too much military and political power, and, for the time being, the king named the four-year-old Arthur of Brittany as the heir to the throne. In return, John promised not to visit England for the next three years, thereby in theory giving Richard adequate time to conduct a successful crusade and return from the Levant without fear of John seizing power. Richard left political authority in England – the post of justiciar – jointly in the hands of Bishop Hugh de Puiset and William Mandeville, and made William Longchamp, the Bishop of Ely, his chancellor. Mandeville immediately died, and Longchamp took over as joint justiciar with Puiset, which would prove a less than satisfactory partnership. Eleanor, the queen mother, convinced Richard to allow John into England in his absence. The political situation in England rapidly began to deteriorate. Longchamp refused to work with Puiset and became unpopular with the English nobility and clergy. John exploited this unpopularity to set himself up as an alternative ruler with his own royal court, complete with his own justiciar, chancellor and other royal posts, and was happy to be portrayed as an alternative regent, and possibly the next king. Armed conflict broke out between John and Longchamp, and by October 1191 Longchamp was isolated in the Tower of London with John in control of the city of London, thanks to promises John had made to the citizens in return for recognition as Richard's heir presumptive. At this point Walter of Coutances, the Archbishop of Rouen, returned to England, having been sent by Richard to restore order. John's position was undermined by Walter's relative popularity and by the news that Richard had married whilst in Cyprus, which presented the possibility that Richard would have legitimate children and heirs. The political turmoil continued. John began to explore an alliance with the French king Philip II, freshly returned from the crusade. John hoped to acquire Normandy, Anjou and the other lands in France held by Richard in exchange for allying himself with Philip. John was persuaded not to pursue an alliance by his mother. Longchamp, who had left England after Walter's intervention, now returned, and argued that he had been wrongly removed as justiciar. John intervened, suppressing Longchamp's claims in return for promises of support from the royal administration, including a reaffirmation of his position as heir to the throne. 
When Richard still did not return from the crusade, John began to assert that his brother was dead or otherwise permanently lost. Richard had in fact been captured en route to England by the Duke of Austria and was handed over to Emperor Henry VI, who held him for ransom. John seized the opportunity and went to Paris, where he formed an alliance with Philip. He agreed to set aside his wife, Isabella of Gloucester, and marry Philip's sister, Alys, in exchange for Philip's support. Fighting broke out in England between forces loyal to Richard and those being gathered by John. John's military position was weak and he agreed to a truce; in early 1194 the king finally returned to England, and John's remaining forces surrendered. John retreated to Normandy, where Richard finally found him later that year. Richard declared that his younger brother – despite being 27 years old – was merely "a child who has had evil counsellors" and forgave him, but removed his lands with the exception of Ireland. For the remaining years of Richard's reign, John supported his brother on the continent, apparently loyally. Richard's policy on the continent was to attempt to regain through steady, limited campaigns the castles he had lost to Philip II whilst on crusade. He allied himself with the leaders of Flanders, Boulogne and the Holy Roman Empire to apply pressure on Philip from Germany. In 1195 John successfully conducted a sudden attack and siege of Évreux castle, and subsequently managed the defences of Normandy against Philip. The following year, John seized the town of Gamaches and led a raiding party to within striking distance of Paris, capturing the Bishop of Beauvais. In return for this service, Richard withdrew his "malevolentia" (ill-will) towards John, restored him to the county of Gloucestershire and made him again the Count of Mortain. After Richard's death on 6 April 1199 there were two potential claimants to the Angevin throne: John, whose claim rested on being the sole surviving son of Henry II, and young Arthur I of Brittany, who held a claim as the son of John's elder brother Geoffrey. Richard appears to have started to recognise John as his heir presumptive in the final years before his death, but the matter was not clear-cut and medieval law gave little guidance as to how the competing claims should be decided. With Norman law favouring John as the only surviving son of Henry II and Angevin law favouring Arthur as the only son of Henry's elder son, the matter rapidly became an open conflict. John was supported by the bulk of the English and Norman nobility and was crowned at Westminster, backed by his mother, Eleanor. Arthur was supported by the majority of the Breton, Maine and Anjou nobles and received the support of Philip II, who remained committed to breaking up the Angevin territories on the continent. With Arthur's army pressing up the Loire valley towards Angers and Philip's forces moving down the valley towards Tours, John's continental empire was in danger of being cut in two. Warfare in Normandy at the time was shaped by the defensive potential of castles and the increasing costs of conducting campaigns. The Norman frontiers had limited natural defences but were heavily reinforced with castles, such as Château Gaillard, at strategic points, built and maintained at considerable expense. It was difficult for a commander to advance far into fresh territory without having secured his lines of communication by capturing these fortifications, which slowed the progress of any attack. 
Armies of the period could be formed from either feudal or mercenary forces. Feudal levies could only be raised for a fixed length of time before they returned home, forcing an end to a campaign; mercenary forces, often called Brabançons after the Duchy of Brabant but actually recruited from across northern Europe, could operate all year long and provide a commander with more strategic options to pursue a campaign, but cost much more than equivalent feudal forces. As a result, commanders of the period were increasingly drawing on larger numbers of mercenaries. After his coronation, John moved south into France with military forces and adopted a defensive posture along the eastern and southern Normandy borders. Both sides paused for desultory negotiations before the war recommenced; John's position was now stronger, thanks to confirmation that the counts Baldwin IX of Flanders and Renaud of Boulogne had renewed the anti-French alliances they had previously agreed to with Richard. The powerful Anjou nobleman William des Roches was persuaded to switch sides from Arthur to John; suddenly the balance seemed to be tipping away from Philip and Arthur in favour of John. Neither side was keen to continue the conflict, and following a papal truce the two leaders met in January 1200 to negotiate possible terms for peace. From John's perspective, what then followed represented an opportunity to stabilise control over his continental possessions and produce a lasting peace with Philip in Paris. John and Philip negotiated the May 1200 Treaty of Le Goulet; by this treaty, Philip recognised John as the rightful heir to Richard in respect to his French possessions, temporarily abandoning the wider claims of his client, Arthur. John, in turn, abandoned Richard's former policy of containing Philip through alliances with Flanders and Boulogne, and accepted Philip's right as the legitimate feudal overlord of John's lands in France. John's policy earned him the disrespectful title of "John Softsword" from some English chroniclers, who contrasted his behaviour with his more aggressive brother, Richard. The new peace would only last for two years; war recommenced in the aftermath of John's decision in August 1200 to marry Isabella of Angoulême. In order to remarry, John first needed to abandon Isabel, Countess of Gloucester, his first wife; John accomplished this by arguing that he had failed to get the necessary papal permission to marry Isabel in the first place – as a cousin, John could not have legally wed her without this. It remains unclear why John chose to marry Isabella of Angoulême. Contemporary chroniclers argued that John had fallen deeply in love with Isabella, and John may have been motivated by desire for an apparently beautiful, if rather young, girl. On the other hand, the Angoumois lands that came with Isabella were strategically vital to John: by marrying Isabella, John was acquiring a key land route between Poitou and Gascony, which significantly strengthened his grip on Aquitaine. Isabella, however, was already engaged to Hugh of Lusignan, an important member of a key Poitou noble family and brother of Count Raoul of Eu, who possessed lands along the sensitive eastern Normandy border. Just as John stood to benefit strategically from marrying Isabella, so the marriage threatened the interests of the Lusignans, whose own lands currently provided the key route for royal goods and troops across Aquitaine. 
Rather than negotiating some form of compensation, John treated Hugh "with contempt"; this resulted in a Lusignan uprising that was promptly crushed by John, who also intervened to suppress Raoul in Normandy. Although John was the Count of Poitou and therefore the rightful feudal lord over the Lusignans, they could legitimately appeal John's actions in France to his own feudal lord, Philip. Hugh did exactly this in 1201 and Philip summoned John to attend court in Paris in 1202, citing the Le Goulet treaty to strengthen his case. John was unwilling to weaken his authority in western France in this way. He argued that he need not attend Philip's court because of his special status as the Duke of Normandy, who was exempt by feudal tradition from being called to the French court. Philip argued that he was summoning John not as the Duke of Normandy, but as the Count of Poitou, which carried no such special status. When John still refused to come, Philip declared John in breach of his feudal responsibilities, reassigned all of John's lands that fell under the French crown to Arthur – with the exception of Normandy, which he took back for himself – and began a fresh war against John. John initially adopted a defensive posture similar to that of 1199: avoiding open battle and carefully defending his key castles. John's operations became more chaotic as the campaign progressed, and Philip began to make steady progress in the east. John became aware in July that Arthur's forces were threatening his mother, Eleanor, at Mirebeau Castle. Accompanied by William de Roches, his seneschal in Anjou, he swung his mercenary army rapidly south to protect her. His forces caught Arthur by surprise and captured the entire rebel leadership at the battle of Mirebeau. With his southern flank weakening, Philip was forced to withdraw in the east and turn south himself to contain John's army. John's position in France was considerably strengthened by the victory at Mirebeau, but John's treatment of his new prisoners and of his ally, William de Roches, quickly undermined these gains. De Roches was a powerful Anjou noble, but John largely ignored him, causing considerable offence, whilst the king kept the rebel leaders in such bad conditions that twenty-two of them died. At this time most of the regional nobility were closely linked through kinship, and this behaviour towards their relatives was regarded as unacceptable. William de Roches and other of John's regional allies in Anjou and Brittany deserted him in favour of Philip, and Brittany rose in fresh revolt. John's financial situation was tenuous: once factors such as the comparative military costs of materiel and soldiers were taken into account, Philip enjoyed a considerable, although not overwhelming, advantage of resources over John. Further desertions of John's local allies at the beginning of 1203 steadily reduced John's freedom to manoeuvre in the region. He attempted to convince Pope Innocent III to intervene in the conflict, but Innocent's efforts were unsuccessful. As the situation became worse for John, he appears to have decided to have Arthur killed, with the aim of removing his potential rival and of undermining the rebel movement in Brittany. Arthur had initially been imprisoned at Falaise and was then moved to Rouen. After this, Arthur's fate remains uncertain, but modern historians believe he was murdered by John. The annals of Margam Abbey suggest that "John had captured Arthur and kept him alive in prison for some time in the castle of Rouen ... 
when John was drunk he slew Arthur with his own hand and tying a heavy stone to the body cast it into the Seine." Rumours of the manner of Arthur's death further reduced support for John across the region. Arthur's sister, Eleanor, who had also been captured at Mirebeau, was kept imprisoned by John for many years, albeit in relatively good conditions. In late 1203, John attempted to relieve Château Gaillard, which although besieged by Philip was guarding the eastern flank of Normandy. John attempted a synchronised operation involving land-based and water-borne forces, considered by most historians today to have been imaginative in conception, but overly complex for forces of the period to have carried out successfully. John's relief operation was blocked by Philip's forces, and John turned back to Brittany in an attempt to draw Philip away from eastern Normandy. John successfully devastated much of Brittany, but did not deflect Philip's main thrust into the east of Normandy. Opinions vary amongst historians as to the military skill shown by John during this campaign, with most recent historians arguing that his performance was passable, although not impressive. John's situation began to deteriorate rapidly. The eastern border region of Normandy had been extensively cultivated by Philip and his predecessors for several years, whilst Angevin authority in the south had been undermined by Richard's giving away of various key castles some years before. John's use of "routier" mercenaries in the central regions had rapidly eaten away his remaining support in this area too, which set the stage for a sudden collapse of Angevin power. John retreated back across the Channel in December, sending orders for the establishment of a fresh defensive line to the west of Château Gaillard. In March 1204, Gaillard fell. John's mother Eleanor died the following month. This was not just a personal blow for John, but threatened to unravel the widespread Angevin alliances across the far south of France. Philip moved south around the new defensive line and struck upwards at the heart of the Duchy, now facing little resistance. By August, Philip had taken Normandy and advanced south to occupy Anjou and Poitou as well. John's only remaining possession on the Continent was now the Duchy of Aquitaine. The nature of government under the Angevin monarchs was ill-defined and uncertain. John's predecessors had ruled using the principle of "vis et voluntas", or "force and will", taking executive and sometimes arbitrary decisions, often justified on the basis that a king was above the law. Both Henry II and Richard had argued that kings possessed a quality of "divine majesty"; John continued this trend and claimed an "almost imperial status" for himself as ruler. During the 12th century, there were contrary opinions expressed about the nature of kingship, and many contemporary writers believed that monarchs should rule in accordance with the custom and the law, and take counsel of the leading members of the realm. There was as yet no model for what should happen if a king refused to do so. Despite his claim to unique authority within England, John would sometimes justify his actions on the basis that he had taken counsel with the barons. Modern historians remain divided as to whether John suffered from a case of "royal schizophrenia" in his approach to government, or if his actions merely reflected the complex model of Angevin kingship in the early 13th century. 
John inherited a sophisticated system of administration in England, with a range of royal agents answering to the Royal Household: the Chancery kept written records and communications; the Treasury and the Exchequer dealt with income and expenditure respectively; and various judges were deployed to deliver justice around the kingdom. Thanks to the efforts of men like Hubert Walter, this trend towards improved record keeping continued into his reign. Like previous kings, John managed a peripatetic court that travelled around the kingdom, dealing with both local and national matters as he went. John was very active in the administration of England and was involved in every aspect of government. In part he was following in the tradition of Henry I and Henry II, but by the 13th century the volume of administrative work had greatly increased, which put much more pressure on a king who wished to rule in this style. John was in England for much longer periods than his predecessors, which made his rule more personal than that of previous kings, particularly in previously ignored areas such as the north. The administration of justice was of particular importance to John. Several new processes had been introduced to English law under Henry II, including "novel disseisin" and "mort d'ancestor". These processes meant the royal courts had a more significant role in local law cases, which had previously been dealt with only by regional or local lords. John increased the professionalism of local sergeants and bailiffs, and extended the system of coroners first introduced by Hubert Walter in 1194, creating a new class of borough coroners. John worked extremely hard to ensure that this system operated well, through judges he had appointed, by fostering legal specialists and expertise, and by intervening in cases himself. John continued to try relatively minor cases, even during military crises. Viewed positively, Lewis Warren considers that John discharged "his royal duty of providing justice ... with a zeal and a tirelessness to which the English common law is greatly endebted". Seen more critically, John may have been motivated by the potential of the royal legal process to raise fees, rather than a desire to deliver simple justice; John's legal system also only applied to free men, rather than to all of the population. Nonetheless, these changes were popular with many free tenants, who acquired a more reliable legal system that could bypass the barons, against whom such cases were often brought. John's reforms were less popular with the barons themselves, especially as they remained subject to arbitrary and frequently vindictive royal justice. One of John's principal challenges was acquiring the large sums of money needed for his proposed campaigns to reclaim Normandy. The Angevin kings had three main sources of income available to them, namely revenue from their personal lands, or "demesne"; money raised through their rights as a feudal lord; and revenue from taxation. Revenue from the royal demesne was inflexible and had been diminishing slowly since the Norman conquest. Matters were not helped by Richard's sale of many royal properties in 1189, and taxation played a much smaller role in royal income than in later centuries. English kings had widespread feudal rights which could be used to generate income, including the scutage system, in which feudal military service was avoided by a cash payment to the king. He derived income from fines, court fees and the sale of charters and other privileges. 
John intensified his efforts to maximise all possible sources of income, to the extent that he has been described as "avaricious, miserly, extortionate and moneyminded". John also used revenue generation as a way of exerting political control over the barons: debts owed to the crown by the king's favoured supporters might be forgiven; collection of those owed by enemies was more stringently enforced. The result was a sequence of innovative but unpopular financial measures. John levied scutage payments eleven times in his seventeen years as king, as compared to eleven times in total during the reign of the preceding three monarchs. In many cases these were levied in the absence of any actual military campaign, which ran counter to the original idea that scutage was an alternative to actual military service. John maximised his right to demand relief payments when estates and castles were inherited, sometimes charging enormous sums, beyond barons' abilities to pay. Building on the successful sale of sheriff appointments in 1194, John initiated a new round of appointments, with the new incumbents making back their investment through increased fines and penalties, particularly in the forests. Another innovation of Richard's, increased charges levied on widows who wished to remain single, was expanded under John. John continued to sell charters for new towns, including the planned town of Liverpool, and charters were sold for markets across the kingdom and in Gascony. The king introduced new taxes and extended existing ones. The Jews, who held a vulnerable position in medieval England, protected only by the king, were subject to huge taxes; £44,000 was extracted from the community by the tallage of 1210; much of it was passed on to the Christian debtors of Jewish moneylenders. John created a new tax on income and movable goods in 1207 – effectively a version of a modern income tax – that produced £60,000; he created a new set of import and export duties payable directly to the crown. John found that these measures enabled him to raise further resources through the confiscation of the lands of barons who could not pay or refused to pay. At the start of John's reign there was a sudden change in prices, as bad harvests and high demand for food resulted in much higher prices for grain and animals. This inflationary pressure was to continue for the rest of the 13th century and had long-term economic consequences for England. The resulting social pressures were complicated by bursts of deflation that resulted from John's military campaigns. It was usual at the time for the king to collect taxes in silver, which was then re-minted into new coins; these coins would then be put in barrels and sent to royal castles around the country, to be used to hire mercenaries or to meet other costs. At those times when John was preparing for campaigns in Normandy, for example, huge quantities of silver had to be withdrawn from the economy and stored for months, which unintentionally resulted in periods during which silver coins were simply hard to come by, commercial credit difficult to acquire and deflationary pressure placed on the economy. The result was political unrest across the country. John attempted to address some of the problems with the English currency in 1204 and 1205 by carrying out a radical overhaul of the coinage, improving its quality and consistency. John's royal household was based around several groups of followers. 
One group was the "familiares regis", John's immediate friends and knights who travelled around the country with him. They also played an important role in organising and leading military campaigns. Another section of royal followers was the "curia regis"; these "curiales" were the senior officials and agents of the king and were essential to his day-to-day rule. Being a member of these inner circles brought huge advantages, as it was easier to gain favours from the king, file lawsuits, marry a wealthy heiress or have one's debts remitted. By the time of Henry II, these posts were increasingly being filled by "new men" from outside the normal ranks of the barons. This intensified under John's rule, with many lesser nobles arriving from the continent to take up positions at court; many were mercenary leaders from Poitou. These men included soldiers who would become infamous in England for their uncivilised behaviour, including Falkes de Breauté, Gerard d'Athies, Engelard de Cigogné, and Philip Marc. Many barons perceived the king's household as what Ralph Turner has characterised as a "narrow clique enjoying royal favour at barons' expense", staffed by men of lesser status. This trend for the king to rely on his own men at the expense of the barons was exacerbated by the tradition of Angevin royal "ira et malevolentia" – "anger and ill-will" – and John's own personality. From Henry II onwards, "ira et malevolentia" had come to describe the right of the king to express his anger and displeasure at particular barons or clergy, building on the Norman concept of "malevoncia" – royal ill-will. In the Norman period, suffering the king's ill-will meant difficulties in obtaining grants, honours or petitions; Henry II had infamously expressed his fury and ill-will towards Thomas Becket, which ultimately resulted in Becket's death. John now had the additional ability to "cripple his vassals" on a significant scale using his new economic and judicial measures, which made the threat of royal anger all the more serious. John was deeply suspicious of the barons, particularly those with sufficient power and wealth to potentially challenge the king. Numerous barons were subjected to John's "malevolentia", even including William Marshal, a famous knight and baron normally held up as a model of utter loyalty. The most infamous case, which went beyond anything considered acceptable at the time, was that of William de Braose, a powerful marcher lord with lands in Ireland. De Braose was subjected to punitive demands for money, and when he refused to pay a huge sum of 40,000 marks (equivalent to £26,666 at the time), his wife and one of his sons were imprisoned by John, which resulted in their deaths. De Braose died in exile in 1211, and his grandsons remained in prison until 1218. John's suspicions and jealousies meant that he rarely enjoyed good relationships with even the leading loyalist barons. John's personal life greatly affected his reign. Contemporary chroniclers state that John was sinfully lustful and lacking in piety. It was common for kings and nobles of the period to keep mistresses, but chroniclers complained that John's mistresses were married noblewomen, which was considered unacceptable. John had at least five children with mistresses during his first marriage to Isabelle of Gloucester, and two of those mistresses are known to have been noblewomen. John's behaviour after his second marriage to Isabella of Angoulême is less clear, however. 
None of John's known illegitimate children were born after he remarried, and there is no actual documentary proof of adultery after that point, although John certainly had female friends amongst the court throughout the period. The specific accusations made against John during the baronial revolts are now generally considered to have been invented for the purposes of justifying the revolt; nonetheless, most of John's contemporaries seem to have held a poor opinion of his sexual behaviour. The character of John's relationship with his second wife, Isabella of Angoulême, is unclear. John married Isabella whilst she was relatively young – her exact date of birth is uncertain, and estimates suggest she was at most fifteen, and more probably around nine years old, at the time of her marriage. Even by the standards of the time, Isabella was married whilst very young. John did not provide a great deal of money for his wife's household and did not pass on much of the revenue from her lands, to the extent that historian Nicholas Vincent has described him as being "downright mean" towards Isabella. Vincent concluded that the marriage was not a particularly "amicable" one. Other aspects of their marriage suggest a closer, more positive relationship. Chroniclers recorded that John had a "mad infatuation" with Isabella, and certainly John had conjugal relationships with Isabella between at least 1207 and 1215; they had five children. In contrast to Vincent, historian William Chester Jordan concludes that the pair were a "companionable couple" who had a successful marriage by the standards of the day. John's lack of religious conviction has been noted by contemporary chroniclers and later historians, with some suspecting that John was at best impious, or even atheistic, a very serious issue at the time. Contemporary chroniclers catalogued his various anti-religious habits at length, including his failure to take communion, his blasphemous remarks, and his witty but scandalous jokes about church doctrine, among them jokes about the implausibility of the Resurrection. They commented on the paucity of John's charitable donations to the church. Historian Frank McLynn argues that John's early years at Fontevrault, combined with his relatively advanced education, may have turned him against the church. Other historians have been more cautious in interpreting this material, noting that chroniclers also reported John's personal interest in the life of St Wulfstan of Worcester and his friendships with several senior clerics, most especially with Hugh of Lincoln, who was later declared a saint. Financial records show a normal royal household engaged in the usual feasts and pious observances – albeit with many records showing John's offerings to the poor to atone for routinely breaking church rules and guidance. The historian Lewis Warren has argued that the chronicler accounts were subject to considerable bias and the King was "at least conventionally devout", citing his pilgrimages and interest in religious scripture and commentaries. During the remainder of his reign, John focused on trying to retake Normandy. The available evidence suggests that John did not regard the loss of the Duchy as a permanent shift in Capetian power.
Strategically, John faced several challenges: England itself had to be secured against possible French invasion, the sea-routes to Bordeaux needed to be secured following the loss of the land route to Aquitaine, and his remaining possessions in Aquitaine needed to be secured following the death of his mother, Eleanor, in April 1204. John's preferred plan was to use Poitou as a base of operations, advance up the Loire valley to threaten Paris, pin down the French forces and break Philip's internal lines of communication before landing a maritime force in the Duchy itself. Ideally, this plan would benefit from the opening of a second front on Philip's eastern frontiers with Flanders and Boulogne – effectively a re-creation of Richard's old strategy of applying pressure from Germany. All of this would require a great deal of money and soldiers. John spent much of 1205 securing England against a potential French invasion. As an emergency measure, John recreated a version of Henry II's Assize of Arms of 1181, with each shire creating a structure to mobilise local levies. When the threat of invasion faded, John formed a large military force in England intended for Poitou, and a large fleet with soldiers under his own command intended for Normandy. To achieve this, John reformed the English feudal contribution to his campaigns, creating a more flexible system under which only one knight in ten would actually be mobilised, but would be financially supported by the other nine; knights would serve for an indefinite period. John built up a strong team of engineers for siege warfare and a substantial force of professional crossbowmen. The king was supported by a team of leading barons with military expertise, including William Longespée, William the Marshal, Roger de Lacy and, until he fell from favour, the marcher lord William de Braose. John had already begun to improve his Channel forces before the loss of Normandy and he rapidly built up further maritime capabilities after its collapse. Most of these ships were placed along the Cinque Ports, but Portsmouth was also enlarged. By the end of 1204 he had around 50 large galleys available; another 54 vessels were built between 1209 and 1212. William of Wrotham was appointed "keeper of the galleys", effectively John's chief admiral. Wrotham was responsible for fusing John's galleys, the ships of the Cinque Ports and pressed merchant vessels into a single operational fleet. John adopted recent improvements in ship design, including new large transport ships called "buisses" and removable forecastles for use in combat. Baronial unrest in England prevented the departure of the planned 1205 expedition, and only a smaller force under William Longespée deployed to Poitou. In 1206 John departed for Poitou himself, but was forced to divert south to counter a threat to Gascony from Alfonso VIII of Castile. After a successful campaign against Alfonso, John headed north again, taking the city of Angers. Philip moved south to meet John; the year's campaigning ended in stalemate and a two-year truce was made between the two rulers. During the truce of 1206–1208, John focused on building up his financial and military resources in preparation for another attempt to recapture Normandy. John used some of this money to pay for new alliances on Philip's eastern frontiers, where the growth in Capetian power was beginning to concern France's neighbours. 
By 1212 John had successfully concluded alliances with his nephew Otto IV, a contender for the crown of Holy Roman Emperor in Germany, as well as with the counts Renaud of Boulogne and Ferdinand of Flanders. The invasion plans for 1212 were postponed because of fresh English baronial unrest about service in Poitou. Philip seized the initiative in 1213, sending his elder son, Louis, to invade Flanders with the intention of next launching an invasion of England. John was forced to postpone his own invasion plans to counter this threat. He launched his new fleet to attack the French at the harbour of Damme. The attack was a success, destroying Philip's vessels and any chances of an invasion of England that year. John hoped to exploit this advantage by invading himself late in 1213, but baronial discontent again delayed his invasion plans until early 1214, in what was his final Continental campaign. In the late 12th and early 13th centuries the border and political relationship between England and Scotland was disputed, with the kings of Scotland claiming parts of what is now northern England. John's father, Henry II, had forced William the Lion to swear fealty to him at the Treaty of Falaise in 1174. This had been rescinded by Richard I in exchange for financial compensation in 1189, but the relationship remained uneasy. John began his reign by reasserting his sovereignty over the disputed northern counties. He refused William's request for the earldom of Northumbria, but did not intervene in Scotland itself and focused on his continental problems. The two kings maintained a friendly relationship, meeting in 1206 and 1207, until it was rumoured in 1209 that William was intending to ally himself with Philip II of France. John invaded Scotland and forced William to sign the Treaty of Norham, which gave John control of William's daughters and required a payment of £10,000. This effectively crippled William's power north of the border, and by 1212 John had to intervene militarily to support the Scottish king against his internal rivals. John made no efforts to reinvigorate the Treaty of Falaise, though, and both William and Alexander in turn remained independent kings, supported by, but not owing fealty to, John. John remained Lord of Ireland throughout his reign. He drew on the country for resources to fight his war with Philip on the continent. Conflict continued in Ireland between the Anglo-Norman settlers and the indigenous Irish chieftains, with John manipulating both groups to expand his wealth and power in the country. During Richard's rule, John had successfully increased the size of his lands in Ireland, and he continued this policy as king. In 1210 the king crossed into Ireland with a large army to crush a rebellion by the Anglo-Norman lords; he reasserted his control of the country and used a new charter to order compliance with English laws and customs in Ireland. John stopped short of trying to actively enforce this charter on the native Irish kingdoms, but historian David Carpenter suspects that he might have done so, had the baronial conflict in England not intervened. Simmering tensions remained with the native Irish leaders even after John left for England. Royal power in Wales was unevenly applied, with the country divided between the marcher lords along the borders, royal territories in Pembrokeshire and the more independent native Welsh lords of North Wales. 
John took a close interest in Wales and knew the country well, visiting every year between 1204 and 1211 and marrying his illegitimate daughter, Joan, to the Welsh prince Llywelyn the Great. The king used the marcher lords and the native Welsh to increase his own territory and power, striking a sequence of increasingly precise deals backed by royal military power with the Welsh rulers. A major royal expedition to enforce these agreements occurred in 1211, after Llywelyn attempted to exploit the instability caused by the removal of William de Braose, through the Welsh uprising of 1211. John's invasion, striking into the Welsh heartlands, was a military success. Llywelyn came to terms that included an expansion of John's power across much of Wales, albeit only temporarily. When the Archbishop of Canterbury, Hubert Walter, died on 13 July 1205, John became involved in a dispute with Pope Innocent III that would lead to the king's excommunication. The Norman and Angevin kings had traditionally exercised a great deal of power over the church within their territories. From the 1040s onwards, however, successive popes had put forward a reforming message that emphasised the importance of the church being "governed more coherently and more hierarchically from the centre" and established "its own sphere of authority and jurisdiction, separate from and independent of that of the lay ruler", in the words of historian Richard Huscroft. After the 1140s, these principles had been largely accepted within the English church, albeit with an element of concern about centralising authority in Rome. These changes brought the customary rights of lay rulers such as John over ecclesiastical appointments into question. Pope Innocent was, according to historian Ralph Turner, an "ambitious and aggressive" religious leader, insistent on his rights and responsibilities within the church. John wanted John de Gray, the Bishop of Norwich and one of his own supporters, to be appointed Archbishop of Canterbury after the death of Walter, but the cathedral chapter for Canterbury Cathedral claimed the exclusive right to elect Walter's successor. They favoured Reginald, the chapter's sub-prior. To complicate matters, the bishops of the province of Canterbury also claimed the right to appoint the next archbishop. The chapter secretly elected Reginald and he travelled to Rome to be confirmed; the bishops challenged the appointment and the matter was taken before Innocent. John forced the Canterbury chapter to change their support to John de Gray, and a messenger was sent to Rome to inform the papacy of the new decision. Innocent disavowed both Reginald and John de Gray, and instead appointed his own candidate, Stephen Langton. John refused Innocent's request that he consent to Langton's appointment, but the pope consecrated Langton anyway in June 1207. John was incensed about what he perceived as an abrogation of his customary right as monarch to influence the election. He complained both about the choice of Langton as an individual, as John felt he was overly influenced by the Capetian court in Paris, and about the process as a whole. He barred Langton from entering England and seized the lands of the archbishopric and other papal possessions. Innocent set a commission in place to try to convince John to change his mind, but to no avail. 
Innocent then placed an interdict on England in March 1208, prohibiting clergy from conducting religious services, with the exception of baptisms for the young, and confessions and absolutions for the dying. John treated the interdict as "the equivalent of a papal declaration of war". He responded by attempting to punish Innocent personally and to drive a wedge between those English clergy that might support him and those allying themselves firmly with the authorities in Rome. John seized the lands of those clergy unwilling to conduct services, as well as those estates linked to Innocent himself; he arrested the illicit concubines that many clerics kept during the period, only releasing them after the payment of fines; he seized the lands of members of the church who had fled England, and he promised protection for those clergy willing to remain loyal to him. In many cases, individual institutions were able to negotiate terms for managing their own properties and keeping the produce of their estates. By 1209 the situation showed no signs of resolution, and Innocent threatened to excommunicate John if he did not acquiesce to Langton's appointment. When this threat failed, Innocent excommunicated the king in November 1209. Although theoretically a significant blow to John's legitimacy, this did not appear to greatly worry the king. Two of John's close allies, Emperor Otto IV and Count Raymond VI of Toulouse, had already suffered the same punishment themselves, and the significance of excommunication had been somewhat devalued. John simply tightened his existing measures and accrued significant sums from the income of vacant sees and abbeys: one 1213 estimate, for example, suggested the church had lost an estimated 100,000 marks (equivalent to £66,666 at the time) to John. Official figures suggest that around 14% of annual income from the English church was being appropriated by John each year. Innocent gave some dispensations as the crisis progressed. Monastic communities were allowed to celebrate Mass in private from 1209 onwards, and late in 1212 the Holy Viaticum for the dying was authorised. The rules on burials and lay access to churches appear to have been steadily circumvented, at least unofficially. Although the interdict was a burden to much of the population, it did not result in rebellion against John. By 1213, though, John was increasingly worried about the threat of French invasion. Some contemporary chroniclers suggested that in January Philip II of France had been charged with deposing John on behalf of the papacy, although it appears that Innocent merely prepared secret letters in case Innocent needed to claim the credit if Philip did successfully invade England. Under mounting political pressure, John finally negotiated terms for a reconciliation, and the papal terms for submission were accepted in the presence of the papal legate Pandulf Verraccio in May 1213 at the Templar Church at Dover. As part of the deal, John offered to surrender the Kingdom of England to the papacy for a feudal service of 1,000 marks (equivalent to £666 at the time) annually: 700 marks (£466) for England and 300 marks (£200) for Ireland, as well as recompensing the church for revenue lost during the crisis. The agreement was formalised in the "Bulla Aurea", or Golden Bull. This resolution produced mixed responses. Although some chroniclers felt that John had been humiliated by the sequence of events, there was little public reaction. 
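The pound equivalents quoted in parentheses here and elsewhere in this account appear to assume the conventional medieval valuation of the mark at two-thirds of a pound sterling (13s 4d); as a rough check of the figures above:

\[ 1\ \text{mark} = \tfrac{2}{3}\,\pounds \quad\Longrightarrow\quad 1{,}000\ \text{marks} \approx \pounds 666, \qquad 700\ \text{marks} \approx \pounds 466, \qquad 100{,}000\ \text{marks} \approx \pounds 66{,}666. \]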
Innocent benefited from the resolution of his long-standing English problem, but John probably gained more, as Innocent became a firm supporter of John for the rest of his reign, backing him in both domestic and continental policy issues. Innocent immediately turned against Philip, calling upon him to reject plans to invade England and to sue for peace. John paid some of the compensation money he had promised the church, but he ceased making payments in late 1214, leaving two-thirds of the sum unpaid; Innocent appears to have conveniently forgotten this debt for the good of the wider relationship. Tensions between John and the barons had been growing for several years, as demonstrated by the 1212 plot against the king. Many of the disaffected barons came from the north of England; that faction was often labelled by contemporaries and historians as "the Northerners". The northern barons rarely had any personal stake in the conflict in France, and many of them owed large sums of money to John; the revolt has been characterised as "a rebellion of the king's debtors". Many of John's military household joined the rebels, particularly amongst those that John had appointed to administrative roles across England; their local links and loyalties outweighed their personal loyalty to John. Tension also grew across North Wales, where opposition to the 1211 treaty between John and Llywelyn was turning into open conflict. For some the appointment of Peter des Roches as justiciar was an important factor, as he was considered an "abrasive foreigner" by many of the barons. The failure of John's French military campaign in 1214 was probably the final straw that precipitated the baronial uprising during John's final years as king; James Holt describes the path to civil war as "direct, short and unavoidable" following the defeat at Bouvines. In 1214 John began his final campaign to reclaim Normandy from Philip. John was optimistic, as he had successfully built up alliances with the Emperor Otto, Renaud of Boulogne and Count Ferdinand of Flanders; he was enjoying papal favour; and he had successfully built up substantial funds to pay for the deployment of his experienced army. Nonetheless, when John left for Poitou in February 1214, many barons refused to provide military service; mercenary knights had to fill the gaps. John's plan was to split Philip's forces by pushing north-east from Poitou towards Paris, whilst Otto, Renaud and Ferdinand, supported by William Longespée, marched south-west from Flanders. The first part of the campaign went well, with John outmanoeuvring the forces under the command of Prince Louis and retaking the county of Anjou by the end of June. John besieged the castle of Roche-au-Moine, a key stronghold, forcing Louis to give battle against John's larger army. The local Angevin nobles refused to advance with the king; left at something of a disadvantage, John retreated back to La Rochelle. Shortly afterwards, Philip won the hard-fought battle of Bouvines in the north against Otto and John's other allies, bringing an end to John's hopes of retaking Normandy. A peace agreement was signed in which John returned Anjou to Philip and paid the French king compensation; the truce was intended to last for six years. John arrived back in England in October. Within a few months of John's return, rebel barons in the north and east of England were organising resistance to his rule. 
John held a council in London in January 1215 to discuss potential reforms and sponsored discussions in Oxford between his agents and the rebels during the spring. John appears to have been playing for time until Pope Innocent III could send letters giving him explicit papal support. This was particularly important for John, as a way of pressuring the barons but also as a way of controlling Stephen Langton, the Archbishop of Canterbury. In the meantime, John began to recruit fresh mercenary forces from Poitou, although some were later sent back to avoid giving the impression that the king was escalating the conflict. John announced his intent to become a crusader, a move which gave him additional political protection under church law. Letters of support from the pope arrived in April but by then the rebel barons had organised. They congregated at Northampton in May and renounced their feudal ties to John, appointing Robert fitz Walter as their military leader. This self-proclaimed "Army of God" marched on London, taking the capital as well as Lincoln and Exeter. John's efforts to appear moderate and conciliatory had been largely successful, but once the rebels held London they attracted a fresh wave of defectors from John's royalist faction. John instructed Langton to organise peace talks with the rebel barons. John met the rebel leaders at Runnymede, near Windsor Castle, on 15 June 1215. Langton's efforts at mediation created a charter capturing the proposed peace agreement; it was later renamed "Magna Carta", or "Great Charter". The charter went beyond simply addressing specific baronial complaints, and formed a wider proposal for political reform, albeit one focusing on the rights of free men, not serfs and unfree labour. It promised the protection of church rights, protection from illegal imprisonment, access to swift justice, new taxation only with baronial consent and limitations on scutage and other feudal payments. A council of twenty-five barons would be created to monitor and ensure John's future adherence to the charter, whilst the rebel army would stand down and London would be surrendered to the king. Neither John nor the rebel barons seriously attempted to implement the peace accord. The rebel barons suspected that the proposed baronial council would be unacceptable to John and that he would challenge the legality of the charter; they packed the baronial council with their own hardliners and refused to demobilise their forces or surrender London as agreed. Despite his promises to the contrary, John appealed to Innocent for help, observing that the charter compromised the pope's rights under the 1213 agreement that had appointed him John's feudal lord. Innocent obliged; he declared the charter "not only shameful and demeaning, but illegal and unjust" and excommunicated the rebel barons. The failure of the agreement led rapidly to the First Barons' War. The rebels made the first move in the war, seizing the strategic Rochester Castle, owned by Langton but left almost unguarded by the archbishop. John was well prepared for a conflict. He had stockpiled money to pay for mercenaries and ensured the support of the powerful marcher lords with their own feudal forces, such as William Marshal and Ranulf of Chester. The rebels lacked the engineering expertise or heavy equipment necessary to assault the network of royal castles that cut off the northern rebel barons from those in the south. 
John's strategy was to isolate the rebel barons in London, protect his own supply lines to his key source of mercenaries in Flanders, prevent the French from landing in the south-east, and then win the war through slow attrition. John put off dealing with the badly deteriorating situation in North Wales, where Llywelyn the Great was leading a rebellion against the 1211 settlement. John's campaign started well. In November John retook Rochester Castle from rebel baron William d'Aubigny in a sophisticated assault. One chronicler had not seen "a siege so hard pressed or so strongly resisted", whilst historian Reginald Brown describes it as "one of the greatest [siege] operations in England up to that time". Having regained the south-east John split his forces, sending William Longespée to retake the north side of London and East Anglia, whilst John himself headed north via Nottingham to attack the estates of the northern barons. Both operations were successful and the majority of the remaining rebels were pinned down in London. In January 1216 John marched against Alexander II of Scotland, who had allied himself with the rebel cause. John took back Alexander's possessions in northern England in a rapid campaign and pushed up towards Edinburgh over a ten-day period. The rebel barons responded by inviting the French prince Louis to lead them: Louis had a claim to the English throne by virtue of his marriage to Blanche of Castile, a granddaughter of Henry II. Philip may have provided him with private support but refused to openly support Louis, who was excommunicated by Innocent for taking part in the war against John. Louis' planned arrival in England presented a significant problem for John, as the prince would bring with him naval vessels and siege engines essential to the rebel cause. Once John contained Alexander in Scotland, he marched south to deal with the challenge of the coming invasion. Prince Louis intended to land in the south of England in May 1216, and John assembled a naval force to intercept him. Unfortunately for John, his fleet was dispersed by bad storms and Louis landed unopposed in Kent. John hesitated and decided not to attack Louis immediately, either due to the risks of open battle or over concerns about the loyalty of his own men. Louis and the rebel barons advanced west and John retreated, spending the summer reorganising his defences across the rest of the kingdom. John saw several of his military household desert to the rebels, including his half-brother, William Longespée. By the end of the summer the rebels had regained the south-east of England and parts of the north. In September 1216, John began a fresh, vigorous attack. He marched from the Cotswolds, feigned an offensive to relieve the besieged Windsor Castle, and attacked eastwards around London to Cambridge to separate the rebel-held areas of Lincolnshire and East Anglia. From there he travelled north to relieve the rebel siege at Lincoln and back east to King's Lynn, probably to order further supplies from the continent. In King's Lynn, John contracted dysentery, which would ultimately prove fatal. Meanwhile, Alexander II invaded northern England again, taking Carlisle in August and then marching south to give homage to Prince Louis for his English possessions; John narrowly missed intercepting Alexander along the way. 
Tensions between Louis and the English barons began to increase, prompting a wave of desertions, including William Marshal's son William and William Longespée, who both returned to John's faction. The king returned west but is said to have lost a significant part of his baggage train along the way. Roger of Wendover provides the most graphic account of this, suggesting that the king's belongings, including the Crown Jewels, were lost as he crossed one of the tidal estuaries which empties into the Wash, being sucked in by quicksand and whirlpools. Accounts of the incident vary considerably between the various chroniclers and the exact location of the incident has never been confirmed; the losses may have involved only a few of his pack-horses. Modern historians assert that by October 1216 John faced a "stalemate", "a military situation uncompromised by defeat". John's illness grew worse and by the time he reached Newark Castle he was unable to travel any farther; John died on the night of 18/19 October. Numerous – probably fictitious – accounts circulated soon after his death that he had been killed by poisoned ale, poisoned plums or a "surfeit of peaches". His body was escorted south by a company of mercenaries and he was buried in Worcester Cathedral in front of the altar of St Wulfstan. A new sarcophagus with an effigy was made for him in 1232, in which his remains now rest. In the aftermath of John's death William Marshal was declared the protector of the nine-year-old Henry III. The civil war continued until royalist victories at the battles of Lincoln and Dover in 1217. Louis gave up his claim to the English throne and signed the Treaty of Lambeth. The failed "Magna Carta" agreement was resuscitated by Marshal's administration and reissued in an edited form in 1217 as a basis for future government. Henry III continued his attempts to reclaim Normandy and Anjou until 1259, but John's continental losses and the consequent growth of Capetian power in the 13th century proved to mark a "turning point in European history". John's first wife, Isabel, Countess of Gloucester, was released from imprisonment in 1214; she remarried twice, and died in 1217. John's second wife, Isabella of Angoulême, left England for Angoulême soon after the king's death; she became a powerful regional leader, but largely abandoned the children she had had by John. John had five legitimate children, all by Isabella. His eldest son, Henry III, ruled as King of England for the majority of the 13th century. Richard became a noted European leader and ultimately the King of the Romans in the Holy Roman Empire. Joan became Queen of Scotland on her marriage to Alexander II. Isabella was Holy Roman Empress as the wife of Frederick II. His youngest daughter, Eleanor, married William Marshal's son, also called William, and later the famous English rebel Simon de Montfort. John had various mistresses. By them he had eight, possibly nine, sons – Richard, Oliver, John, Geoffrey, Henry, Osbert Gifford, Eudes, Bartholomew and probably Philip – and two or three daughters – Joan, Maud and probably Isabel. Of these, Joan became the most famous, marrying Prince Llywelyn the Great of Wales. Historical interpretations of John have been subject to considerable change over the years. Medieval chroniclers provided the first contemporary, or near contemporary, histories of John's reign. 
One group of chroniclers wrote early in John's life, or around the time of his accession, including Richard of Devizes, William of Newburgh, Roger of Hoveden and Ralph de Diceto. These historians were generally unsympathetic to John's behaviour under Richard's rule, but slightly more positive towards the very earliest years of John's reign. Reliable accounts of the middle and later parts of John's reign are more limited, with Gervase of Canterbury and Ralph of Coggeshall writing the main accounts; neither of them were positive about John's performance as king. Much of John's later, negative reputation was established by two chroniclers writing after the king's death, Roger of Wendover and Matthew Paris, the latter claiming that John attempted conversion to Islam in exchange for military aid from the Almohad ruler Muhammad al-Nasir – a story modern historians consider untrue. In the 16th century political and religious changes altered the attitude of historians towards John. Tudor historians were generally favourably inclined towards the king, focusing on John's opposition to the Papacy and his promotion of the special rights and prerogatives of a king. Revisionist histories written by John Foxe, William Tyndale and Robert Barnes portrayed John as an early Protestant hero, and John Foxe included the king in his "Book of Martyrs". John Speed's "Historie of Great Britaine" in 1632 praised John's "great renown" as a king; he blamed the bias of medieval chroniclers for the king's poor reputation. By the Victorian period in the 19th century, historians were more inclined to draw on the judgements of the chroniclers and to focus on John's moral personality. Kate Norgate, for example, argued that John's downfall had been due not to his failure in war or strategy, but due to his "almost superhuman wickedness", whilst James Ramsay blamed John's family background and his cruel personality for his downfall. Historians in the "Whiggish" tradition, focusing on documents such as the Domesday Book and "Magna Carta", trace a progressive and universalist course of political and economic development in England over the medieval period. These historians were often inclined to see John's reign, and his signing of "Magna Carta" in particular, as a positive step in the constitutional development of England, despite the flaws of the king himself. Winston Churchill, for example, argued that "[w]hen the long tally is added, it will be seen that the British nation and the English-speaking world owe far more to the vices of John than to the labours of virtuous sovereigns". In the 1940s, new interpretations of John's reign began to emerge, based on research into the record evidence of his reign, such as pipe rolls, charters, court documents and similar primary records. Notably, an essay by Vivian Galbraith in 1945 proposed a "new approach" to understanding the ruler. The use of recorded evidence was combined with an increased scepticism about two of the most colourful chroniclers of John's reign, Roger of Wendover and Matthew Paris. In many cases the detail provided by these chroniclers, both writing after John's death, was challenged by modern historians. Interpretations of "Magna Carta" and the role of the rebel barons in 1215 have been significantly revised: although the charter's symbolic, constitutional value for later generations is unquestionable, in the context of John's reign most historians now consider it a failed peace agreement between "partisan" factions. 
There has been increasing debate about the nature of John's Irish policies. Specialists in Irish medieval history, such as Sean Duffy, have challenged the conventional narrative established by Lewis Warren, suggesting that Ireland was less stable by 1216 than was previously supposed. Most historians today, including John's recent biographers Ralph Turner and Lewis Warren, argue that John was an unsuccessful monarch, but note that his failings were exaggerated by 12th- and 13th-century chroniclers. Jim Bradbury notes the current consensus that John was a "hard-working administrator, an able man, an able general", albeit, as Turner suggests, with "distasteful, even dangerous personality traits", including pettiness, spitefulness and cruelty. John Gillingham, author of a major biography of Richard I, follows this line too, although he considers John a less effective general than do Turner or Warren, and describes him as "one of the worst kings ever to rule England". Bradbury takes a moderate line, but suggests that in recent years modern historians have been overly lenient towards John's numerous faults. Popular historian Frank McLynn maintains a counter-revisionist perspective on John, arguing that the king's modern reputation amongst historians is "bizarre", and that as a monarch John "fails almost all those [tests] that can be legitimately set". Popular representations of John first began to emerge during the Tudor period, mirroring the revisionist histories of the time. The anonymous play "The Troublesome Reign of King John" portrayed the king as a "proto-Protestant martyr", similar to that shown in John Bale's morality play "Kynge Johan", in which John attempts to save England from the "evil agents of the Roman Church". By contrast, Shakespeare's "King John", a relatively anti-Catholic play that draws on "The Troublesome Reign" for its source material, offers a more "balanced, dual view of a complex monarch as both a proto-Protestant victim of Rome's machinations and as a weak, selfishly motivated ruler". Anthony Munday's play "The Downfall and The Death of Robert Earl of Huntington" portrays many of John's negative traits, but adopts a positive interpretation of the king's stand against the Roman Catholic Church, in line with the contemporary views of the Tudor monarchs. By the middle of the 17th century, plays such as Robert Davenport's "King John and Matilda", although based largely on the earlier Elizabethan works, were transferring the role of Protestant champion to the barons and focusing more on the tyrannical aspects of John's behaviour. Nineteenth-century fictional depictions of John were heavily influenced by Sir Walter Scott's historical romance, "Ivanhoe", which presented "an almost totally unfavourable picture" of the king; the work drew on Victorian histories of the period and on Shakespeare's play. Scott's work influenced the late 19th-century children's writer Howard Pyle's book "The Merry Adventures of Robin Hood", which in turn established John as the principal villain within the traditional Robin Hood narrative. During the 20th century, John was normally depicted in fictional books and films alongside Robin Hood. Sam De Grasse's role as John in the black-and-white 1922 film version shows John committing numerous atrocities and acts of torture. Claude Rains played John in the 1938 colour version alongside Errol Flynn, starting a trend for films to depict John as an "effeminate ... arrogant and cowardly stay-at-home".
The character of John acts either to highlight the virtues of King Richard or to contrast with the Sheriff of Nottingham, who is usually the "swashbuckling villain" opposing Robin. An extreme version of this trend can be seen in the 1973 Disney cartoon version, for example, which depicts John, voiced by Peter Ustinov, as a "cowardly, thumbsucking lion". Popular works that depict John beyond the Robin Hood legends, such as James Goldman's play and later film, "The Lion in Winter", set in 1183, commonly present him as an "effete weakling", in this instance contrasted with the more masculine Henry II, or as a tyrant, as in A. A. Milne's poem for children, "King John's Christmas". John Hay John Milton Hay (October 8, 1838 – July 1, 1905) was an American statesman and official whose career in government stretched over almost half a century. Beginning as a private secretary and assistant to Abraham Lincoln, Hay rose to his highest office, United States Secretary of State, under Presidents William McKinley and Theodore Roosevelt. Hay was also an author and biographer and wrote poetry and other literature throughout much of his life. Born in Indiana to an anti-slavery family that moved to Illinois when he was young, Hay showed great potential, and his family sent him to Brown University. After graduation in 1858, Hay read law in his uncle's office in Springfield, Illinois, adjacent to that of Lincoln. Hay worked for Lincoln's successful presidential campaign and became one of his private secretaries at the White House. Throughout the American Civil War, Hay was close to Lincoln and stood by his deathbed after the President was shot at Ford's Theatre. In addition to his other literary works, Hay co-authored with John George Nicolay a multi-volume biography of Lincoln that helped shape the assassinated president's historical image. After Lincoln's death, Hay spent several years at diplomatic posts in Europe, then worked for the "New-York Tribune" under Horace Greeley and Whitelaw Reid. Yet Hay remained active in politics, and from 1879 to 1881 served as Assistant Secretary of State. Afterward, he remained in the private sector until President McKinley, whom he had strongly backed, made him Ambassador to the United Kingdom in 1897. Hay became Secretary of State the following year. Hay served for almost seven years as Secretary of State under President McKinley, and after McKinley's assassination, under Theodore Roosevelt. Hay was responsible for negotiating the Open Door Policy with the international powers, which kept China open to trade with all countries on an equal basis. By negotiating the Hay–Pauncefote Treaty with the United Kingdom, the (ultimately unratified) Hay–Herrán Treaty with Colombia, and finally the Hay–Bunau-Varilla Treaty with the newly independent Republic of Panama, Hay also cleared the way for the building of the Panama Canal. John Milton Hay was born in Salem, Indiana, on October 8, 1838. He was the third son of Dr. Charles Hay and the former Helen Leonard. Charles Hay, born in Lexington, Kentucky, hated slavery and moved to the North in the early 1830s. A doctor, he practiced in Salem. Helen's father, David Leonard, had moved his family west from Assonet, Massachusetts, in 1818, but died en route to Vincennes, Indiana, and Helen relocated to Salem in 1830 to teach school. They married there in 1831. Charles was not successful in Salem, and moved, with his wife and children, to Warsaw, Illinois, in 1841.
John attended the local schools, and in 1849 his uncle Milton Hay invited John to live at his home in Pittsfield, Pike County, and attend a well-regarded local school, the John D. Thomson Academy. Milton was a friend of Springfield attorney Abraham Lincoln and had read law in the firm Stuart and Lincoln. In Pittsfield, John first met John Nicolay, who was at the time a 20-year-old newspaperman. Once John Hay completed his studies there, the 13-year-old was sent to live with his grandfather in Springfield and attend school there. His parents and uncle Milton (who financed the boy's education) sent him to Brown University in Providence, Rhode Island, "alma mater" of his late maternal grandfather. Hay enrolled at Brown in 1855. Although he enjoyed college life, he did not find it easy: his Western clothing and accent made him stand out; he was not well prepared academically and was often sick. Nevertheless, Hay gained a reputation as a star student and became a part of Providence's literary circle that included Sarah Helen Whitman and Nora Perry. He wrote poetry and experimented with hashish. Hay received his Master of Arts degree in 1858, and was, like his grandfather before him, Class Poet. He returned to Illinois. Milton Hay had moved his practice to Springfield, and John became a clerk in his firm, where he could study law. Milton Hay's firm was one of the most prestigious in Illinois. Lincoln maintained offices next door and was a rising star in the new Republican Party. Hay later recalled an early encounter with Lincoln during these Springfield years. He was not a supporter of Lincoln for president until after his nomination in 1860. Hay then made speeches and wrote newspaper articles boosting Lincoln's candidacy. When Nicolay, who had been made Lincoln's private secretary for the campaign, found he needed help with the huge amounts of correspondence, Hay worked full-time for Lincoln for six months. After Lincoln was elected, Nicolay, who continued as Lincoln's private secretary, recommended that Hay be hired to assist him at the White House. Lincoln is reported to have said, "We can't take all Illinois with us down to Washington", but then "Well, let Hay come". Hay's biographers Kushner and Sherrill were dubious about "the story of Lincoln's offhand appointment of Hay" as fitting well into Hay's self-image of never having been an office-seeker, but "poorly into the realities of Springfield politics of the 1860s"—Hay must have expected some reward for handling Lincoln's correspondence for months. Hay biographer John Taliaferro suggests that Lincoln engaged Nicolay and Hay to assist him, rather than more seasoned men, both "out of loyalty and surely because of the competence and compatibility that his two young aides had demonstrated". Historian Joshua Zeitz argues that Lincoln was moved to hire Hay when Milton agreed to pay his nephew's salary for six months. Milton Hay desired that his nephew go to Washington as a qualified attorney, and John Hay was admitted to the bar in Illinois on February 4, 1861. On February 11, he embarked with President-elect Lincoln on a circuitous journey to Washington. By this time, several Southern states had seceded to form the Confederate States of America in reaction to the election of Lincoln, seen as an opponent of slavery. When Lincoln was sworn in on March 4, Hay and Nicolay moved into the White House, sharing a shabby bedroom.
As there was only authority for payment of one presidential secretary (Nicolay), Hay was appointed to a post in the Interior Department at $1,600 per year, seconded to service at the White House. They were available to Lincoln 24 hours a day. As Lincoln took no vacations as president and worked seven days a week, often until 11 pm (or later, during crucial battles), the burden on his secretaries was heavy. Hay and Nicolay divided their responsibilities, with Nicolay tending to assist Lincoln in his office and in meetings, while Hay dealt with the correspondence, which was very large. Both men tried to shield Lincoln from office-seekers and others who wanted to meet with the President. Unlike the dour Nicolay, Hay, with his charm, escaped much of the hard feeling from those denied Lincoln's presence. Abolitionist Thomas Wentworth Higginson described Hay as "a nice young fellow, who unfortunately looks about seventeen and is oppressed with the necessity of behaving like seventy". Hay continued to write, anonymously, for newspapers, sending in columns calculated to make Lincoln appear a sorrowful man, religious and competent, giving of his life and health to preserve the Union. Similarly, Hay served as what Taliaferro deemed a "White House propagandist", in his columns explaining away losses such as that at First Manassas in July 1861. Despite the heavy workload—Hay wrote that he was busy 20 hours a day—he tried to make as normal a life as possible, eating his meals with Nicolay at Willard's Hotel, going to the theatre with Abraham and Mary Todd Lincoln, and reading "Les Misérables" in French. Hay, still in his early 20s, spent time both in barrooms and at cultured get-togethers in the homes of Washington's elite. The two secretaries often clashed with Mary Lincoln, who resorted to various stratagems to get the dilapidated White House restored without depleting Lincoln's salary, which had to cover entertainment and other expenses. Despite the secretaries' objections, Mrs. Lincoln was generally the victor and managed to save almost 70% of her husband's salary in his four years in office. After the death of Lincoln's 11-year-old son Willie in February 1862 (an event not mentioned in Hay's diary or correspondence), "it was Hay who became, if not a surrogate son, then a young man who stirred a higher form of parental nurturing that Lincoln, despite his best intentions, did not successfully bestow on either of his surviving children". According to Hay biographer Robert Gale, "Hay came to adore Lincoln for his goodness, patience, understanding, sense of humor, humility, magnanimity, sense of justice, healthy skepticism, resilience and power, love of the common man, and mystical patriotism". Speaker of the House Galusha Grow stated, "Lincoln was very much attached to him"; writer Charles G. Halpine, who knew Hay then, later recorded that "Lincoln loved him as a son". Hay and Nicolay accompanied Lincoln to Gettysburg, Pennsylvania, for the dedication of the cemetery there, where were interred many of those who fell at the Battle of Gettysburg. Although they made much of Lincoln's brief Gettysburg Address in their 1890 multi-volume biography of Lincoln, Hay's diary states "the President, in a firm, free way, with more grace than is his wont, said his half-dozen lines of consecration." Lincoln sent Hay away from the White House on various missions.
In August 1861, Hay escorted Mary Lincoln and her children to Long Branch, New Jersey, a resort on the Jersey Shore, both as their caretaker and as a means of giving Hay a much-needed break. The following month, Lincoln sent him to Missouri to deliver a letter to Union General John C. Frémont, who had irritated the President with military blunders and by freeing local slaves without authorization, endangering Lincoln's attempts to keep the border states in the Union. In April 1863, Lincoln sent Hay to the Union-occupied South Carolina coast to report back on the ironclad vessels being used in an attempt to recapture Charleston Harbor. Hay then went on to the Florida coast. He returned to Florida in January 1864, after Lincoln had announced his Ten Percent Plan, under which, if ten percent of a state's 1860 electorate took oaths of loyalty and of support for emancipation, they could form a government with federal protection. Lincoln considered Florida, with its small population, a good test case, and made Hay a major, sending him to see if he could get sufficient men to take the oath. Hay spent a month in the state during February and March 1864, but Union defeats there reduced the area under federal control. Believing his mission impractical, he sailed back to Washington. In July 1864, New York publisher Horace Greeley sent word to Lincoln that there were Southern peace emissaries in Canada. Lincoln doubted that they actually spoke for Confederate President Jefferson Davis, but had Hay journey to New York to persuade the publisher to go to Niagara Falls, Ontario, to meet with them and bring them to Washington. Greeley reported to Lincoln that the emissaries lacked accreditation by Davis, but were confident they could bring both sides together. Lincoln sent Hay to Ontario with what became known as the Niagara Manifesto: that if the South laid down its arms, freed the slaves, and reentered the Union, it could expect liberal terms on other points. The Southerners refused to come to Washington to negotiate. By the end of 1864, with Lincoln reelected and the victorious war winding down, both Hay and Nicolay let it be known that they desired different jobs. Soon after Lincoln's second inauguration in March 1865, the two secretaries were appointed to the US delegation in Paris, Nicolay as consul and Hay as secretary of legation. Hay wrote to his brother Charles that the appointment was "entirely unsolicited and unexpected", a statement that Kushner and Sherrill found unconvincing given that Hay had spent hundreds of hours during the war with Secretary of State William H. Seward, who had often discussed personal and political matters with him, and the close relationship between the two men was so well known that office-seekers cultivated Hay as a means of getting to Seward. The two men were also motivated to find new jobs by their deteriorating relationship with Mary Lincoln, who sought their ouster, and by Nicolay's desire to wed his intended—he could not bring a bride to his shared room at the White House. They remained at the White House pending the arrival and training of replacements. Hay did not accompany the Lincolns to Ford's Theatre on the night of April 14, 1865, but remained at the White House, drinking whiskey with Robert Lincoln. When the two were informed that the President had been shot, they hastened to the Petersen House, a boarding house where the stricken Lincoln had been taken. Hay remained by Lincoln's deathbed through the night and was present when he died.
Hay wrote that, at the moment of Lincoln's death, "a look of unspeakable peace came upon his worn features". He heard War Secretary Edwin Stanton's declaration, "Now he belongs to the ages." According to Kushner and Sherrill, "Lincoln's death was for Hay a personal loss, like the loss of a father ... Lincoln's assassination erased any remaining doubts Hay had about Lincoln's greatness." In 1866, in a personal letter, Hay deemed Lincoln "the greatest character since Christ". Taliaferro noted that "Hay would spend the rest of his life mourning Lincoln ... wherever Hay went and whatever he did, Lincoln would 'always' be watching". Hay sailed for Paris at the end of June 1865. There, he served under U.S. Minister to France John Bigelow. The workload was not heavy, and Hay found time to enjoy the pleasures of Paris. When Bigelow resigned in mid-1866, Hay, as was customary, submitted his resignation, though he was asked to remain until Bigelow's successor was in place, and stayed until January 1867. He consulted with Secretary of State William H. Seward, asking him for "anything worth having". Seward suggested the post of Minister to Sweden, but reckoned without the new president, Andrew Johnson, who had his own candidate. Seward offered Hay a job as his private secretary, but Hay declined, and returned home to Warsaw. Initially happy to be home, Hay quickly grew restive, and he was glad to hear, in early June 1867, that he had been appointed secretary of legation to act as chargé d'affaires at Vienna. He sailed for Europe the same month, and while in England visited the House of Commons, where he was greatly impressed by the Chancellor of the Exchequer, Benjamin Disraeli. The Vienna post was only temporary, until Johnson could appoint a chargé d'affaires and have him confirmed by the Senate, and the workload was light, allowing Hay, who was fluent in German, to spend much of his time traveling. It was not until July 1868 that Henry Watts became Hay's replacement. Hay resigned, spent the remainder of the summer in Europe, then went home to Warsaw. Unemployed again, in December 1868 Hay journeyed to the capital, writing to Nicolay that he "came to Washington in the peaceful pursuit of a fat office. But there is nothing just now available". Seward promised to "wrestle with Andy for anything that turns up", but nothing did prior to the departure of both Seward and Johnson from office on March 4, 1869. In May, Hay went back to Washington from Warsaw to press his case with the new Grant administration. The next month, due to the influence of his friends, he obtained the post of secretary of legation in Spain. Although the salary was low, Hay was interested in serving in Madrid both because of the political situation there—Queen Isabella II had recently been deposed—and because the U.S. Minister was the swashbuckling former congressman General Daniel Sickles. Hay hoped to assist Sickles in gaining U.S. control over Cuba, then a Spanish colony. Sickles was unsuccessful and Hay resigned in May 1870, citing the low salary, but remaining in his post until September. Two legacies of Hay's time in Madrid were magazine articles he wrote that became the basis of his first book, "Castilian Days", and his lifelong friendship with Sickles's personal secretary, Alvey A. Adee, who would be a close aide to Hay at the State Department.
While still in Spain, Hay had been offered the position of assistant editor at the "New-York Tribune"—both the editor, Horace Greeley, and his managing editor, Whitelaw Reid, were anxious to hire Hay. He joined the staff in October 1870. The "Tribune" was the leading reform newspaper in New York, and through mail subscriptions, the largest-circulating newspaper in the nation. Hay wrote editorials for the "Tribune", and Greeley soon proclaimed him the most brilliant writer of "breviers" (as they were called) that he had ever had. With his success as an editorial writer, Hay's duties expanded. In October 1871, he journeyed to Chicago after the great fire there, interviewing Mrs. O'Leary, whose cow was said to have started the blaze, describing her as "a woman with a lamp [who went] to the barn behind the house, to milk the cow with the crumpled temper, that kicked the lamp, that spilled the kerosene, that fired the straw that burned Chicago". His work at the "Tribune" came as his fame as a poet was reaching its peak, and one colleague described it as "a liberal education in the delights of intellectual life to sit in intimate companionship with John Hay and watch the play of that well-stored and brilliant mind". In addition to writing, Hay was signed by the prestigious Boston Lyceum Bureau, whose clients included Mark Twain and Susan B. Anthony, to give lectures on the prospects for democracy in Europe, and on his years in the Lincoln White House. By the time President Grant ran for reelection in 1872, Grant's administration had been rocked by scandal, and some disaffected members of his party formed the Liberal Republicans, naming Greeley as their candidate for president, a nomination soon joined in by the Democrats. Hay was unenthusiastic about the editor-turned-candidate, and in his editorials mostly took aim at Grant, who, despite the scandals, remained untarred, and who won a landslide victory in the election. Greeley died only weeks later, a broken man. Hay's stance endangered his hitherto sterling credentials in the Republican Party. By 1873, Hay was wooing Clara Stone, daughter of Cleveland multimillionaire railroad and banking mogul Amasa Stone. The success of his suit (they married in 1874) made the salary attached to office a small consideration for the rest of his life. Amasa Stone needed someone to watch over his investments, and wanted Hay to move to Cleveland to fill the post. Although the Hays initially lived in John's New York apartment and later in a townhouse there, they moved in June 1875 to Stone's ornate home on Cleveland's Euclid Avenue, "Millionaire's Row", and a mansion was quickly under construction for the Hays next-door. The Hays had four children, Helen Hay Whitney, Adelbert Barnes Hay, Alice Evelyn Hay Wadsworth Boyd, and Clarence Leonard Hay. Their father proved successful as a money manager, though he devoted much of his time to literary and political activities, writing to Adee that "I do nothing but read and yawn". On December 29, 1876, a bridge over Ohio's Ashtabula River collapsed. The bridge had been built from metal cast at one of Stone's mills, and was carrying a train owned and operated by Stone's Lake Shore and Michigan Railway. Ninety-two people died; it was the worst rail disaster in American history up to that point. Blame fell heavily on Stone, who departed for Europe to recuperate and left Hay in charge of his businesses. 
The summer of 1877 was marked by labor disputes; a strike over wage cuts on the Baltimore & Ohio Railroad soon spread to the Lake Shore, much to Hay's outrage. He blamed foreign agitators for the dispute, and vented his anger over the strike in his only novel, "The Bread-Winners" (1883). Hay remained disaffected from the Republican Party in the mid-1870s. Seeking a candidate of either party he could support as a reformer, he watched as his favored Democrat, Samuel Tilden, gained his party's nomination, but his favored Republican, James G. Blaine, did not, falling to Ohio Governor Rutherford B. Hayes, whom Hay did not support during the campaign. Hayes's victory in the election left Hay an outsider as he sought a return to politics, and he was initially offered no place in the new administration. Nevertheless, Hay attempted to ingratiate himself with the new President by sending him a gold ring with a strand of George Washington's hair, a gesture that Hayes deeply appreciated. Hay spent time working with Nicolay on their Lincoln biography, and traveling in Europe. When Reid, who had succeeded Greeley as editor of the "Tribune", was offered the post of Minister to Germany in December 1878, he turned it down and recommended Hay. Secretary of State William M. Evarts indicated that Hay "had not been active enough in political efforts", to the regret of Hay, who told Reid that he "would like a second-class mission uncommonly well". From May to October 1879, Hay set out to reconfirm his credentials as a loyal Republican, giving speeches in support of candidates and attacking the Democrats. In October, President and Mrs. Hayes came to a reception at Hay's Cleveland home. When Assistant Secretary of State Frederick W. Seward resigned later that month, Hay was offered his place and accepted, after some hesitancy because he was considering running for Congress. In Washington, Hay oversaw a staff of eighty employees, renewed his acquaintance with his friend Henry Adams, and substituted for Evarts at Cabinet meetings when the Secretary was out of town. In 1880, he campaigned for the Republican nominee for president, his fellow Ohioan, Congressman James A. Garfield. Hay felt that Garfield did not have enough backbone, and hoped that Reid and others would "inoculate him with the gall which I fear he lacks". Garfield consulted Hay before and after his election as president on appointments and other matters, but offered Hay only the post of private secretary (though he promised to increase its pay and power), and Hay declined. Hay resigned as assistant secretary effective March 31, 1881, and spent the next seven months as acting editor of the "Tribune" during Reid's extended absence in Europe. Garfield's death in September and Reid's return the following month left Hay again on the outside of political power, looking in. He would spend the next fifteen years in that position. After 1881, Hay did not again hold public office until 1897. Amasa Stone committed suicide in 1883; his death left the Hays very wealthy. They spent several months in most years traveling in Europe. The Lincoln biography absorbed some of Hay's time, the hardest work being done with Nicolay in 1884 and 1885; beginning in 1886, portions began appearing serially, and the complete work was published in 1890. In 1884, Hay and Adams commissioned architect Henry Hobson Richardson to construct houses for them on Washington's Lafayette Square; these were completed by 1886. 
Hay's house, facing the White House and fronting on Sixteenth Street, was described even before completion as "the finest house in Washington". The price for the combined tract, purchased from William Wilson Corcoran, was $73,800, of which Adams paid a third for his lot. Hay budgeted the construction cost at $50,000; his ornate mansion eventually cost over twice that. Despite their possession of two lavish houses, the Hays spent less than half the year in Washington and only a few weeks a year in Cleveland. They also spent time at The Fells, their summer residence in Newbury, New Hampshire. According to Gale, "for a full decade before his appointment in 1897 as ambassador to England, Hay was lazy and uncertain." Hay continued to devote much of his energy to Republican politics. In 1884, he supported Blaine for president, donating considerable sums to the senator's unsuccessful campaign against New York Governor Grover Cleveland. Many of Hay's friends were unenthusiastic about Blaine's candidacy, to Hay's anger, and he wrote to editor Richard Watson Gilder, "I have never been able to appreciate the logic that induces some excellent people every four years because they cannot nominate the candidate they prefer to vote for the party they don't prefer." In 1888, Hay had to follow his own advice as his favored candidate, Ohio Senator John Sherman, was unsuccessful at the Republican convention. After some reluctance, Hay supported the nominee, former Indiana senator Benjamin Harrison, who was elected. Though Harrison appointed men whom Hay supported, including Blaine, Reid, and Robert Lincoln, Hay was not asked to serve in the Harrison administration. In 1890, Hay spoke for Republican congressional candidates, addressing a rally of 10,000 people in New York City, but the party was defeated, losing control of Congress. Hay contributed funds to Harrison's unsuccessful re-election effort, in part because Reid had been made Harrison's 1892 running mate. Hay was an early supporter of Ohio's William McKinley and worked closely with McKinley's political manager, Cleveland industrialist Mark Hanna. In 1889, Hay supported McKinley in his unsuccessful effort to become Speaker of the House. Four years later, McKinley—by then Governor of Ohio—faced a crisis when a friend whose notes he had imprudently co-signed went bankrupt during the Panic of 1893. The debts were beyond the governor's means to pay, and the possibility of insolvency threatened McKinley's promising political career. Hay was among those Hanna called upon to contribute, buying up $3,000 of the debt of over $100,000. Although others paid more, "Hay's checks were two of the first, and his touch was more personal, a kindness McKinley never forgot". The governor wrote, "How can I ever repay you & other dear friends?" The same panic that nearly ruined McKinley convinced Hay that men like himself must take office to save the country from disaster. By the end of 1894, he was deeply involved in efforts to lay the groundwork for the governor's 1896 presidential bid. It was Hay's job to persuade potential supporters that McKinley was worth backing. Nevertheless, Hay found time for a lengthy stay in New Hampshire—one visitor at The Fells in mid-1895 was Rudyard Kipling—and later in the year wrote, "The summer wanes and I have done nothing for McKinley." He atoned with a $500 check to Hanna, the first of many. 
During the winter of 1895–96, Hay passed along what he heard from other Republicans influential in Washington, such as Massachusetts Senator Henry Cabot Lodge. Hay spent part of the spring and early summer of 1896 in the United Kingdom, and elsewhere in Europe. There was a border dispute between Venezuela and British Guiana, and Cleveland's Secretary of State, Richard Olney, supported the Venezuelan position, announcing the Olney interpretation of the Monroe Doctrine. Hay told British politicians that McKinley, if elected, would be unlikely to change course. McKinley was nominated in June 1896; still, many Britons were minded to support whoever became the Democratic candidate. This changed when the 1896 Democratic National Convention nominated former Nebraska congressman William Jennings Bryan on a "free silver" platform; he had electrified the delegates with his Cross of Gold speech. Hay reported to McKinley when he returned to Britain after a brief stay on the Continent during which Bryan was nominated in Chicago: "they were all scared out of their wits for fear Bryan would be elected, and very polite in their references to you." Once Hay returned to the United States in early August, he went to The Fells and watched from afar as Bryan barnstormed the nation in his campaign while McKinley gave speeches from his front porch. Despite an invitation from the candidate, Hay was reluctant to visit McKinley at his home in Canton. "He has asked me to come, but I thought I would not struggle with the millions on his trampled lawn". In October, after basing himself at his Cleveland home and giving a speech for McKinley, Hay went to Canton at last, and described the visit in a letter to Adams. Hay was disgusted by Bryan's speeches, writing in language that Taliaferro compares to "The Bread-Winners" that the Democrat "simply reiterates the unquestioned truths that every man with a clean shirt is a thief and ought to be hanged: that there is no goodness and wisdom except among the illiterate & criminal classes". Despite Bryan's strenuous efforts, McKinley won the election easily, with a campaign run by himself and Hanna, and well-financed by supporters like Hay. Henry Adams later wondered, "I would give sixpence to know how much Hay paid for McKinley. His politics must have cost." In the post-election speculation as to who would be given office under McKinley, Hay's name figured prominently, as did that of Whitelaw Reid; both men sought high office in the State Department, either as secretary or one of the major ambassadorial posts. Reid, in addition to his vice-presidential run, had been Minister to France under Harrison. Reid, an asthmatic, handicapped himself by departing for Arizona Territory for the winter, leading to speculation about his health. Hay was faster than Reid to realize that the race for these posts would be affected by Hanna's desire to be senator from Ohio, as, with one of the state's Senate seats about to be occupied by the newly elected Joseph B. Foraker, the only possible seat for Hanna was that held by Senator Sherman. As the septuagenarian senator had served as Treasury Secretary under Hayes, only the secretaryship of state was likely to attract him and cause a vacancy that Hanna could fill. Hay knew that with only eight cabinet positions, only one could go to an Ohioan, and so he had no chance for a cabinet post. Accordingly, Hay encouraged Reid to seek the State position, while firmly ruling himself out as a possible candidate for that post, and quietly seeking the inside track to be ambassador in London. 
Zeitz states that Hay "aggressively lobbied" for the position. According to Taliaferro, "only after the deed was accomplished and Hay was installed as the ambassador to the Court of St. James's would it be possible to detect just how subtly and completely he had finessed his ally and friend, Whitelaw Reid". A telegram from Hay to McKinley in the latter's papers, dated December 26 (most likely 1896), reveals Hay's suggestion that McKinley tell Reid that the editor's friends had insisted that Reid not endanger his health through office, especially in London's smoggy climes. The following month, in a letter, Hay set forth his own case for the ambassadorship, and urged McKinley to act quickly, as suitable accommodations in London would be difficult to secure. Hay gained his object (as did Hanna), and shifted his focus to appeasing Reid. Taliaferro states that Reid never blamed Hay, but Kushner and Sherrill recorded, "Reid was certain that he had been wronged" by Hay, and the announcement of Hay's appointment nearly ended their 26-year friendship. Reaction in Britain to Hay's appointment was generally positive, with George Smalley of "The Times" writing to him, "we want a man who is a true American yet not anti-English". Hay secured a Georgian house on Carlton House Terrace, overlooking Horse Guards Parade, with 11 servants. He brought with him Clara, their own silver, two carriages, and five horses. Hay's salary of $17,000 "did not even begin to cover the cost of their extravagant lifestyle". During his service as ambassador, Hay attempted to advance the relationship between the U.S. and Britain. The latter country had long been seen negatively by many Americans, a legacy of its colonial role that was refreshed by its Civil War neutrality, when British-built raiders such as the "Alabama" preyed on US-flagged ships. In spite of these past differences, according to Taliaferro, "rapprochement made more sense than at any time in their respective histories". In his Thanksgiving Day address to the American Society in London in 1897, Hay echoed these points: "The great body of people in the United States and England are friends ... [sharing] that intense respect and reverence for order, liberty, and law which is so profound a sentiment in both countries". Although Hay was not successful in resolving specific controversies in his year and a third as ambassador, both he and British policymakers regarded his tenure as a success, because of the advancement of good feelings and cooperation between the two nations. An ongoing dispute between the U.S. and Britain was over the practice of pelagic sealing, that is, the capture of seals offshore of Alaska. The U.S. considered them American resources; the Canadians (Britain was still responsible for that dominion's foreign policy) contended that the mammals were being taken on the high seas, free to all. Soon after Hay's arrival, McKinley sent former Secretary of State John W. Foster to London to negotiate the issue. Foster quickly issued an accusatory note to the British that was printed in the newspapers. Although Hay was successful in getting Lord Salisbury, then both Prime Minister and Foreign Secretary, to agree to a conference to decide the matter, the British withdrew when the U.S. also invited Russia and Japan, rendering the conference ineffective. 
Another issue on which no agreement was reached was that of bimetallism: McKinley had promised silver-leaning Republicans to seek an international agreement varying the price ratio between silver and gold to allow for free coinage of silver, and Hay was instructed to seek British participation. The British would only join if the Indian colonial government (on a silver standard until 1893) was willing; this did not occur, and coupled with an improving economic situation that decreased support for bimetallism in the United States, no agreement was reached. Hay had little involvement in the crisis over Cuba that culminated in the Spanish–American War. He met with Lord Salisbury in October 1897 and gained assurances Britain would not intervene if the U.S. found it necessary to go to war against Spain. Hay's role was "to make friends and to pass along the English point of view to Washington". Hay spent much of early 1898 on an extended trip to the Middle East, and did not return to London until the last week of March, by which time the USS "Maine" had exploded in Havana harbor. During the war, he worked to ensure U.S.-British amity, and British acceptance of the U.S. occupation of the Philippines—Salisbury and his government preferred that the U.S. have the islands rather than have them fall into the hands of the Germans. In its early days, Hay described the war as being "as necessary as it is righteous". In July, writing to former Assistant Secretary of the Navy Theodore Roosevelt, who had gained wartime glory by leading the Rough Riders volunteer regiment, Hay made the description of the war for which, according to Zeitz, he "is best remembered by many students of American history", calling it "a splendid little war". Secretary Sherman had resigned on the eve of war, and been replaced by his first assistant, William R. Day. One of McKinley's Canton cronies, with little experience of statecraft, Day was never intended as more than a temporary wartime replacement. With America about to splash her flag across the Pacific, McKinley needed a secretary with stronger credentials. On August 14, 1898, Hay received a telegram from McKinley that Day would head the American delegation to the peace talks with Spain, and that Hay would be the new Secretary of State. After some indecision, Hay, who did not think he could decline and still remain as ambassador, accepted. British response to Hay's promotion was generally positive, and Queen Victoria, after he took formal leave of her at Osborne House, invited him again the following day, and subsequently pronounced him "the most interesting of all the Ambassadors I have known." John Hay was sworn in as Secretary of State on September 30, 1898. He needed little introduction to Cabinet meetings, and sat at the President's right hand. Meetings were held in the Cabinet Room of the White House, where he found his old office and bedroom each occupied by several clerks. Now responsible for 1,300 federal employees, he leaned heavily for administrative help on his old friend Alvey Adee, the second assistant. By the time Hay took office, the war was effectively over and it had been decided to strip Spain of her overseas empire and transfer at least part of it to the United States. At the time of Hay's swearing-in, McKinley was still undecided whether to take the Philippines, but in October finally decided to do so, and Hay sent instructions to Day and the other peace commissioners to insist on it. 
Spain yielded, and the result was the Treaty of Paris, narrowly ratified by the Senate in February 1899 over the objections of anti-imperialists. By the 1890s, China had become a major trading partner for Western nations, and for Japan. China lacked military muscle to resist these countries, and several, including Russia, Britain, and Germany, had carved off bits of China—some known as treaty ports—for use as trading or military bases. Within those jurisdictions, the nation in possession often gave preference to its own citizens in trade or in developing infrastructure such as railroads. Although the United States did not claim any parts of China, a third of the China trade was carried in American ships, and having an outpost near there was a major factor in deciding to retain the former Spanish colony of the Philippines in the Treaty of Paris. Hay had been concerned about the Far East since the 1870s. As Ambassador, he had attempted to forge a common policy with the British, but the United Kingdom was willing to undertake territorial acquisition in China to guard its interests there whereas McKinley was not. In March 1898, Hay warned that Russia, Germany, and France were seeking to exclude Britain and America from the China trade, but he was disregarded by Sherman, who accepted assurances from Russia and Germany. McKinley was of the view that equality of opportunity for American trade in China was key to success there, rather than colonial acquisitions; that Hay shared these views was one reason for his appointment as Secretary of State. Many influential Americans, seeing coastal China being divided into spheres of influence, urged McKinley to join in; still, in his annual message to Congress in December 1898, he stated that as long as Americans were not discriminated against, he saw no need for the United States to become "an actor in the scene". As Secretary of State, it was Hay's responsibility to put together a workable China policy. He was advised by William Rockhill, an old China hand. Also influential was Charles Beresford, a British Member of Parliament who gave a number of speeches to American businessmen, met with McKinley and Hay, and in a letter to the secretary stated that "it is imperative for American interests as well as our own that the policy of the 'open door' should be maintained". Assuring that all would play on an even playing field in China would give the foreign powers little incentive to dismember the Chinese Empire through territorial acquisition. In mid-1899, the British inspector of Chinese maritime customs, Alfred Hippisley, visited the United States. In a letter to Rockhill, a friend, he urged that the United States and other powers agree to uniform Chinese tariffs, including in the enclaves. Rockhill passed the letter on to Hay, and subsequently summarized the thinking of Hippisley and others, that there should be "an open market through China for our trade on terms of equality with all other foreigners". Hay was in agreement, but feared Senate and popular opposition, and wanted to avoid Senate ratification of a treaty. Rockhill drafted the first Open Door note, calling for equality of commercial opportunity for foreigners in China. Hay formally issued his Open Door note on September 6, 1899. This was not a treaty, and did not require the approval of the Senate. Most of the powers had at least some caveats, and negotiations continued through the remainder of the year. On March 20, 1900, Hay announced that all powers had agreed, and he was not contradicted. 
Former secretary Day wrote to Hay, congratulating him, "moving at the right time and in the right manner, you have secured a diplomatic triumph in the 'open door' in China of the first importance to your country". Little thought was given to the Chinese reaction to the Open Door note; the Chinese minister in Washington, Wu Ting-fang, did not learn of it until he read of it in the newspapers. Among those in China who opposed Western influence there was a movement in Shantung Province, in the north, that became known as the Fists of Righteous Harmony, or Boxers, after the martial arts they practiced. The Boxers were especially angered by missionaries and their converts. As late as June 1900, Rockhill dismissed the Boxers, contending that they would soon disband. By the middle of that month, the Boxers, joined by imperial troops, had cut the railroad between Peking and the coast, killed many missionaries and converts, and besieged the foreign legations. Hay faced a precarious situation: how to rescue the Americans trapped in Peking, and how to avoid giving the other powers an excuse to partition China, in an election year when there was already Democratic opposition to what that party deemed American imperialism. As American troops were sent to China to relieve the American legation, Hay sent a letter to foreign powers (often called the Second Open Door note), stating that while the United States wanted to see lives preserved and the guilty punished, it intended that China not be dismembered. Hay issued this on July 3, 1900, suspecting that the powers were quietly making private arrangements to divide up China. Communication between the foreign legations and the outside world had been cut off, and the personnel there were falsely presumed slaughtered, but Hay realized that Minister Wu could get a message in, and was able to establish communication. Hay suggested to the Chinese government that it now cooperate for its own good. When the foreign relief force, principally Japanese but including 2,000 Americans, relieved the legations and sacked Peking, China was made to pay a huge indemnity but there was no cession of land. McKinley's vice president, Garret Hobart, had died in November 1899. Under the laws then in force, this made Hay next in line to the presidency should anything happen to McKinley. There was a presidential election in 1900, and McKinley was unanimously renominated at the Republican National Convention that year. He allowed the convention to make its own choice of running mate, and it selected Roosevelt, by then Governor of New York. Senator Hanna bitterly opposed that choice, but nevertheless raised millions for the McKinley/Roosevelt ticket, which was elected. Hay accompanied McKinley on his nationwide train tour in mid-1901, during which both men visited California and saw the Pacific Ocean for the only times in their lives. The summer of 1901 was tragic for Hay; his older son Adelbert, who had been consul in Pretoria during the Boer War and was about to become McKinley's personal secretary, died in a fall from a New Haven hotel window. Secretary Hay was at The Fells when McKinley was shot by Leon Czolgosz, an anarchist, on September 6 in Buffalo. With Vice President Roosevelt and much of the cabinet hastening to the bedside of McKinley, who had been operated on (it was thought successfully) soon after the shooting, Hay planned to go to Washington to manage the communication with foreign governments, but presidential secretary George Cortelyou urged him to come to Buffalo. 
He traveled to Buffalo on September 10; hearing on his arrival an account of the President's recovery, Hay responded that McKinley would die. He was more cheerful after visiting McKinley, gave a statement to the press, and went to Washington, as Roosevelt and other officials also dispersed. Hay was about to return to New Hampshire on the 13th, when word came that McKinley was dying. Hay remained at his office, and the next morning the former Rough Rider, on his way to Buffalo, received from Hay his first communication as head of state, officially informing President Roosevelt of McKinley's death. Hay, again next in line to the presidency, remained in Washington as McKinley's body was transported to the capital by funeral train, and stayed there as the late president was taken to Canton for interment. He had admired McKinley, describing him as "awfully like Lincoln in many respects" and wrote to a friend, "what a strange and tragic fate it has been of mine—to stand by the bier of three of my dearest friends, Lincoln, Garfield, and McKinley, three of the gentlest of men, all risen to be head of the State, and all done to death by assassins". By letter, Hay offered his resignation to Roosevelt while the new president was still in Buffalo, amid newspaper speculation that Hay would be replaced—Garfield's Secretary of State, Blaine, had not remained long under the Arthur administration. When Hay met the funeral train in Washington, Roosevelt greeted him at the station and immediately told him he must stay on as Secretary. According to Zeitz, "Roosevelt's accidental ascendance to the presidency made John Hay an essential anachronism ... the wise elder statesman and senior member of the cabinet, he was indispensable to TR, who even today remains the youngest president ever". The deaths of his son and of McKinley were not the only griefs Hay suffered in 1901—on September 26, John Nicolay died after a long illness, as did Hay's close friend Clarence King on Christmas Eve. Hay's involvement in the efforts to have a canal joining the oceans in Central America went back to his time as Assistant Secretary of State under Hayes, when he served as translator for Ferdinand de Lesseps in his efforts to interest the American government in investing in his canal company. President Hayes was only interested in the idea of a canal under American control, which de Lesseps's project would not be. By the time Hay became Secretary of State, de Lesseps's project in Panama (then a Colombian province) had collapsed, as had an American-run project in Nicaragua. The 1850 Clayton–Bulwer Treaty (between the United States and Britain) forbade the United States from building a Central American canal that it exclusively controlled, and Hay, from early in his tenure, sought the removal of this restriction. But the Canadians, for whose foreign policy Britain was still responsible, saw the canal matter as their greatest leverage to get other disputes resolved in their favor, and persuaded Salisbury not to resolve it independently. Shortly before Hay took office, Britain and the U.S. agreed to establish a Joint High Commission to adjudicate unsettled matters, which met in late 1898 but made slow progress, especially on the Canada-Alaska boundary. The Alaska issue became less contentious in August 1899 when the Canadians accepted a provisional boundary pending final settlement. 
With Congress anxious to begin work on a canal bill, and increasingly likely to ignore the Clayton-Bulwer restriction, Hay and British Ambassador Julian Pauncefote began work on a new treaty in January 1900. The first Hay–Pauncefote Treaty was sent to the Senate the following month, where it met a cold reception, as the terms forbade the United States from blockading or fortifying the canal, which was to be open to all nations in wartime as in peace. The Senate Foreign Relations Committee added an amendment allowing the U.S. to fortify the canal, then in March postponed further consideration until after the 1900 election. Hay submitted his resignation, which McKinley refused. The treaty, as amended, was ratified by the Senate in December, but the British would not agree to the changes. Despite the lack of agreement, Congress was enthusiastic about a canal, and was inclined to move forward, with or without a treaty. Authorizing legislation was slowed by discussion on whether to take the Nicaraguan or Panamanian route. Much of the negotiation of a revised treaty, allowing the U.S. to fortify the canal, took place between Hay's replacement in London, Joseph H. Choate, and the British Foreign Secretary, Lord Lansdowne, and the second Hay–Pauncefote Treaty was ratified by the Senate by a large margin on December 6, 1901. Seeing that the Americans were likely to build a Nicaragua Canal, the owners of the defunct French company, including Philippe Bunau-Varilla, who still had exclusive rights to the Panama route, lowered their price. Beginning in early 1902, President Roosevelt became a backer of the latter route, and Congress passed legislation for it, if it could be secured within a reasonable time. In June, Roosevelt told Hay to take personal charge of the negotiations with Colombia. Later that year, Hay began talks with Colombia's acting minister in Washington, Tomás Herrán. The Hay–Herrán Treaty, granting $10 million to Colombia for the right to build a canal, plus $250,000 annually, was signed on January 22, 1903, and ratified by the United States Senate two months later. In August, however, the treaty was rejected by the Colombian Senate. Roosevelt was minded to build the canal anyway, using an earlier treaty with Colombia that gave the U.S. transit rights in regard to the Panama Railroad. Hay predicted "an insurrection on the Isthmus [of Panama] against that regime of folly and graft ... at Bogotá". Bunau-Varilla gained meetings with both men, and assured them that a revolution, and a Panamanian government more friendly to a canal, was coming. In October, Roosevelt ordered Navy ships to be stationed near Panama. The Panamanians duly revolted in early November 1903, with Colombian interference deterred by the presence of U.S. forces. By prearrangement, Bunau-Varilla was appointed representative of the nascent nation in Washington, and quickly negotiated the Hay–Bunau-Varilla Treaty, signed on November 18, giving the United States the right to build the canal in a zone ten miles wide, over which the U.S. would exercise full jurisdiction. This was less than satisfactory to the Panamanian diplomats who arrived in Washington shortly after the signing, but they did not dare renounce it. The treaty was approved by the two nations, and work on the Panama Canal began in 1904. Hay wrote to Secretary of War Elihu Root, praising "the perfectly regular course which the President did follow" as much preferable to armed occupation of the isthmus. 
Hay had met the President's father, Theodore Roosevelt, Sr., during the Civil War, and during his time at the "Tribune" came to know the adolescent "Teddy", twenty years younger than himself. Although before becoming president Roosevelt often wrote fulsome letters of praise to Secretary Hay, his letters to others then and later were less complimentary. Hay felt Roosevelt too impulsive, and privately opposed his inclusion on the ticket in 1900, though he quickly wrote a congratulatory note after the convention. As President and Secretary of State, the two men took pains to cultivate a cordial relationship. Roosevelt read all ten volumes of the Lincoln biography and in mid-1903, wrote to Hay that by then "I have had a chance to know far more fully what a really great Secretary of State you are". Hay for his part publicly praised Roosevelt as "young, gallant, able, [and] brilliant", words that Roosevelt wrote that he hoped would be engraved on his tombstone. Privately, and in correspondence with others, they were less generous: Hay grumbled that while McKinley would give him his full attention, Roosevelt was always busy with others, and it would be "an hour's wait for a minute's talk". Roosevelt, after Hay's death in 1905, wrote to Senator Lodge that Hay had not been "a great Secretary of State ... under me he accomplished little ... his usefulness to me was almost exclusively the usefulness of a fine figurehead". Nevertheless, when Roosevelt successfully sought election in his own right in 1904, he persuaded the aging and infirm Hay to campaign for him, and Hay gave a speech linking the administration's policies with those of Lincoln: "there is not a principle avowed by the Republican party to-day which is out of harmony with his [Lincoln's] teaching or inconsistent with his character." Kushner and Sherrill suggested that the differences between Hay and Roosevelt were more style than ideological substance. In December 1902, the German government asked Roosevelt to arbitrate its dispute with Venezuela over unpaid debts. Hay did not think this appropriate, as Venezuela also owed the U.S. money, and quickly arranged for the International Court of Arbitration in The Hague to step in. Hay supposedly said, as final details were being worked out, "I have it all arranged. If Teddy will keep his mouth shut until tomorrow noon!" Hay and Roosevelt also differed over the composition of the Joint High Commission that was to settle the Alaska boundary dispute. The commission was to be composed of "impartial jurists" and the British and Canadians duly appointed notable judges. Roosevelt appointed politicians, including Secretary Root and Senator Lodge. Although Hay was supportive of the President's choices in public, in private he protested loudly to Roosevelt, complained by letter to his friends, and offered his resignation. Roosevelt declined it, but the incident confirmed him in his belief that Hay was too much of an Anglophile to be trusted where Britain was concerned. The American position on the boundary dispute was imposed on Canada by a 4–2 vote, with the one English judge joining the three Americans. One incident involving Hay that benefitted Roosevelt politically was the kidnapping of Greek-American playboy Ion Perdicaris in Morocco by chieftain Mulai Ahmed er Raisuli, an opponent of Sultan Abdelaziz. Raisuli demanded a ransom, but also wanted political prisoners to be released and control of Tangier in place of the military governor. 
Raisuli supposed Perdicaris to be a wealthy American, and hoped United States pressure would secure his demands. In fact, Perdicaris, though born in New Jersey, had renounced his citizenship during the Civil War to avoid Confederate confiscation of property in South Carolina, and had accepted Greek naturalization, a fact not generally known until years later, but one that decreased Roosevelt's appetite for military action. The sultan was ineffective in dealing with the incident, and Roosevelt considered seizing the Tangier waterfront, source of much of Abdelaziz's income, as a means of motivating him. With Raisuli's demands escalating, Hay, with Roosevelt's approval, finally cabled the consul-general in Tangier, Samuel Gummeré, that the United States wanted "Perdicaris alive or Raisuli dead". The 1904 Republican National Convention was in session, and the Speaker of the House, Joseph Cannon, its chair, read the first sentence of the cable—and only the first sentence—to the convention, electrifying what had been a humdrum coronation of Roosevelt. "The results were perfect. This was the fighting Teddy that America loved, and his frenzied supporters—and American chauvinists everywhere—roared in delight." In fact, by then the sultan had already agreed to the demands, and Perdicaris was released. What was seen as tough talk boosted Roosevelt's election chances. Hay never fully recovered from the death of his son Adelbert, writing in 1904 to his close friend Lizzie Cameron that "the death of our boy made my wife and me old, at once and for the rest of our lives". Gale described Hay in his final years as a "saddened, slowly dying old man". Although Hay gave speeches in support of Roosevelt, he spent much of the fall of 1904 at his New Hampshire house or with his younger brother Charles, who was ill in Boston. After the election, Roosevelt asked Hay to remain another four years. Hay asked for time to consider, but the President did not allow it, announcing to the press two days later that Hay would stay at his post. Early 1905 saw futility for Hay, as a number of treaties he had negotiated were defeated or amended by the Senate—one involving the British dominion of Newfoundland due to Senator Lodge's fears it would harm his fishermen constituents. Others, promoting arbitration, were voted down or amended because the Senate did not want to be bypassed in the settlement of international disputes. By Roosevelt's inauguration on March 4, 1905, Hay's health was so bad that both his wife and his friend Henry Adams insisted on his going to Europe, where he could rest and get medical treatment. Presidential doctor Presley Rixey issued a statement that Hay was suffering from overwork, but in letters the secretary hinted his conviction that he did not have long to live. An eminent physician in Italy prescribed medicinal baths for Hay's heart condition, and he duly journeyed to Bad Nauheim, near Frankfurt, Germany. Kaiser Wilhelm II was among the monarchs who wrote to Hay asking him to visit, though he declined; Belgian King Leopold II succeeded in seeing him by showing up at his hotel, unannounced. Adams suggested that Hay retire while there was still enough life left in him to do so, and that Roosevelt would be delighted to act as his own Secretary of State. Hay jokingly wrote to sculptor Augustus Saint-Gaudens that "there is nothing the matter with me except old age, the Senate, and one or two other mortal maladies". After the course of treatment, Hay went to Paris and began to take on his workload again by meeting with the French foreign minister, Théophile Delcassé. 
In London, King Edward VII broke protocol by meeting with Hay in a small drawing room, and Hay lunched with Whitelaw Reid, ambassador in London at last. There was not time to see all who wished to see Hay on what he knew was his final visit. On his return to the United States, despite his family's desire to take him to New Hampshire, the secretary went to Washington to deal with departmental business and "say "Ave Caesar!" to the President", as Hay put it. He was pleased to learn that Roosevelt was well on his way to settling the Russo-Japanese War, an action for which the President would win the Nobel Peace Prize. Hay left Washington for the last time on June 23, 1905, arriving in New Hampshire the following day. He died there on July 1 of his heart ailment and complications. Hay was interred in Lake View Cemetery in Cleveland, near the grave of Garfield, in the presence of Roosevelt and many dignitaries, including Robert Lincoln. Hay wrote some poetry while at Brown University, and more during the Civil War. In 1865, early in his Paris stay, Hay penned "Sunrise in the Place de la Concorde", a poem attacking Napoleon III for his reinstitution of the monarchy, depicting the Emperor as having been entrusted with the child Democracy by Liberty, and strangling it with his own hands. In "A Triumph of Order", set in the breakup of the Paris Commune, a boy promises soldiers that he will return from an errand to be executed with his fellow rebels. Much to their surprise, he keeps his word and shouts to them to "blaze away" as "The Chassepots tore the stout young heart,/And saved Society." In poetry, he sought the revolutionary outcome for other nations that he believed had come to a successful conclusion in the United States. His 1871 poem, "The Prayer of the Romans", recites Italian history up to that time, with the "Risorgimento" in progress: liberty cannot be truly present until "crosier and crown pass away", when there will be "One freedom, one faith without fetters,/One republic in Italy free!" His stay in Vienna yielded "The Curse of Hungary", in which Hay foresees the end of the Austro-Hungarian Empire. After Hay's death in 1905, William Dean Howells suggested that the Europe-themed poems expressed "(now, perhaps, old-fashioned) American sympathy for all the oppressed." "Castilian Days", souvenir of Hay's time in Madrid, is a collection of seventeen essays about Spanish history and customs, first published in 1871, though several of the individual chapters appeared in "The Atlantic" in 1870. It went through eight editions in Hay's lifetime. The Spanish are depicted as afflicted by the "triple curse of crown, crozier, and sabre"—most kings and ecclesiastics are presented as useless—and Hay pins his hope in the republican movement in Spain. Gale deems "Castilian Days" "a remarkable, if biased, book of essays about Spanish civilization". "Pike County Ballads", a grouping of six poems published (with other Hay poetry) as a book in 1871, brought him great success. Written in the dialect of Pike County, Illinois, where Hay went to school as a child, they are approximately contemporaneous with pioneering poems in similar dialect by Bret Harte and there has been debate as to which came first. The poem that brought the greatest immediate reaction was "Jim Bludso", about a boatman who is "no saint" with one wife in Mississippi and another in Illinois. Yet, when his steamboat catches fire, "He saw his duty, a dead-sure thing,—/And went for it, ther and then." 
Jim holds the burning steamboat against the riverbank until the last passenger gets ashore, at the cost of his life. Hay's narrator states that, "And Christ ain't a-going to be too hard/On a man that died for men." Hay's poem offended some clergymen, but was widely reprinted and even included in anthologies of verse. "The Bread-Winners", one of the first novels to take an anti-labor perspective, was published anonymously in 1883 (published editions did not bear Hay's name until 1916) and he may have tried to disguise his writing style. The book examines two conflicts: between capital and labor, and between the "nouveau riche" and old money. In writing it, Hay was influenced by the labor unrest of the 1870s, which affected him personally, as corporations belonging to Stone, his father-in-law, were among those struck, at a time when Hay had been left in charge in Stone's absence. According to historian Scott Dalrymple, "in response, Hay proceeded to write an indictment of organized labor so scathing, so vehement, that he dared not attach his name to it." The major character is Arthur Farnham, a wealthy Civil War veteran, likely based on Hay. Farnham, who inherited money, is without much influence in municipal politics, as his ticket is defeated in elections, symbolic of the decreasing influence of America's old-money patricians. The villain is Andrew Jackson Offitt (true name Ananias Offitt), who leads the Bread-winners, a labor organization that begins a violent general strike. Peace is restored by a group of veterans led by Farnham, and, at the end, he appears likely to marry Alice Belding, a woman of his own class. Although unusual among the many books inspired by the labor unrest of the late 1870s in taking the perspective of the wealthy, it was the most successful of them, and was a sensation, gaining many favorable reviews. It was also attacked as an anti-labor polemic with an upper-class bias. There were many guesses as to authorship, with the supposed authors ranging from Hay's friend Henry Adams to New York Governor Grover Cleveland, and the speculation fueled sales. Early in Lincoln's presidency, Hay and Nicolay requested and received his permission to write his biography. By 1872, Hay was "convinced that we ought to be at work on our 'Lincoln.' I don't think the time for publication has come, but the time for preparation is slipping away." Robert Lincoln in 1874 formally agreed to let Hay and Nicolay use his father's papers; by 1875, they were engaged in research. Hay and Nicolay enjoyed exclusive access to Lincoln's papers, which were not opened to other researchers until 1947. They gathered documents written by others, as well as many of the Civil War books already being published. They relied on memory only rarely, such as for Nicolay's recollection of the moment at the 1860 Republican convention when Lincoln was nominated; for much of the rest they relied on research. Hay began his part of the writing in 1876; the work was interrupted by illnesses of Hay, Nicolay, or family members, or by Hay's writing of "The Bread-Winners". By 1885, Hay had completed the chapters on Lincoln's early life, and they were submitted to Robert Lincoln for approval. Sale of the serialization rights to "The Century" magazine, edited by Hay's friend Richard Gilder, helped give the pair the impetus to bring what had become a massive project to an end. 
The published work, "Abraham Lincoln: A History", alternates parts in which Lincoln is at center with discussions of contextual matters, such as legislative events or battles. The first serial installment, published in November 1886, received positive reviews. When the ten-volume set emerged in 1890, it was not sold in bookstores, but instead door-to-door, then a common practice. Despite a price of $50, and the fact that a good part of the work had been serialized, five thousand copies were quickly sold. The books helped forge the modern view of Lincoln as great war leader, against competing narratives that gave more credit to subordinates such as Seward. According to historian Joshua Zeitz, "it is easy to forget how widely underrated Lincoln the president and Lincoln the man were at the time of his death and how successful Hay and Nicolay were in elevating his place in the nation's collective historical memory." In 1902, Hay wrote that when he died, "I shall not be much missed except by my wife." Nevertheless, due to his relatively early death at age 66, he was survived by most of his friends. These included Adams, who blamed the pressures of Hay's office, where he was badgered by Roosevelt and many senators, for the Secretary of State's death, but admitted that Hay had remained in the position because he feared being bored. He memorialized his friend in the final pages of his autobiographical "The Education of Henry Adams": with Hay's death, his own education had ended. Gale pointed out that Hay "accomplished a great deal in the realm of international statesmanship, and the world may be a better place because of his efforts as secretary of state ... the man was a scintillating ambassador". Yet, Gale felt, any assessment of Hay must include negatives as well: that after his marriage to the wealthy Clara Stone, Hay "allowed his deep-seated love of ease triumph over his Middle Western devotion to work and a fair shake for all." Despite his literary accomplishments, Hay "was often lazy. His first poetry was his best." Taliaferro suggests that "if Hay put any ... indelible stamp on history, perhaps it was that he demonstrated how the United States ought to comport itself. He, not Roosevelt, was the adult in charge when the nation and the State Department attained global maturity." He quotes John St. Loe Strachey, "All that the world saw was a great gentleman and a great statesman doing his work for the State and for the President with perfect taste, perfect good sense, and perfect good humour". Hay's efforts to shape Lincoln's image increased his own prominence and reputation by making his association (and that of Nicolay) with the assassinated president ever more remarkable and noteworthy. According to Zeitz, "the greater Lincoln grew in death, the greater they grew for having known him so well, and so intimately, in life. Everyone wanted to know them if only to ask what it had been like—what "he" had been like." Their answer to that, expressed in ten volumes of biography, Gale wrote, "has been incredibly influential". In 1974, Lincoln scholar Roy P. Basler stated that later biographers such as Carl Sandburg did not "ma[k]e revisions of the essential story told by N.[icolay] & H.[ay]". Zeitz concurs, "Americans today understand Abraham Lincoln much as Nicolay and Hay hoped that they would." Hay brought about more than 50 treaties, including the Canal-related treaties, and settlement of the Samoan dispute, as a result of which the United States secured what became known as American Samoa. 
In 1900, Hay negotiated a treaty with Denmark for the cession of the Danish West Indies. That treaty failed in the Danish parliament on a tied vote. In 1923, Mount Hay, also known as "Boundary Peak 167" on the Canada–United States border, was named after John Hay in recognition of his role in negotiating the US-Canada treaty resulting in the Alaska Boundary Tribunal. Brown University's John Hay Library is named for that prominent alumnus. Hay's New Hampshire estate has been conserved by various organizations. Although he and his family never lived there (Hay died while it was under construction), the Hay-McKinney House, home to the Cleveland History Center and thousands of artifacts, serves to remind Clevelanders of John Hay's lengthy service. During World War II, a Liberty ship built in Panama City, Florida, was named in his honor. Camp John Hay, a United States military base established in 1903 in Baguio City, Philippines, was named for John Hay, and the base name was maintained by the Philippine government even after its 1991 turnover to Philippine authorities. 
Joy Division Joy Division were an English rock band formed in 1976 in Salford, Greater Manchester. The band consisted of singer-songwriter Ian Curtis, guitarist and keyboardist Bernard Sumner, bassist Peter Hook and drummer Stephen Morris. Sumner and Hook formed the band after attending a Sex Pistols concert. While their early recordings were heavily influenced by early punk, they soon developed a unique style that made them one of the pioneers of the post-punk movement. Their self-released 1978 debut EP, "An Ideal for Living", drew the attention of the Manchester television personality Tony Wilson, who signed them to his independent label Factory Records. Their debut album "Unknown Pleasures", recorded with producer Martin Hannett, was released in 1979. Curtis suffered from personal problems including a failing marriage, depression, and epilepsy. As the band's popularity grew, Curtis's condition made it increasingly difficult for him to perform; he occasionally experienced grand mal seizures on stage. He hanged himself on the eve of the band's first American tour in May 1980, aged 23. Joy Division's second and final album, "Closer", was released two months later; it and the single "Love Will Tear Us Apart" became their highest-charting releases. The remaining members regrouped under the name New Order. They were successful throughout the next decade, blending post-punk with electronic and dance music influences. On 20 July 1976, childhood friends Bernard Sumner and Peter Hook separately attended a Sex Pistols show at the Manchester Lesser Free Trade Hall. Both were inspired by the Pistols' performance. Sumner said that he felt the Pistols "destroyed the myth of being a pop star, of a musician being some kind of god that you had to worship". The following day Hook borrowed £35 from his mother to buy a bass guitar. They formed a band with Terry Mason, who had also attended the gig; Sumner bought a guitar, and Mason a drum kit. After their schoolfriend Martin Gresty declined an invitation to join as vocalist after getting a job at a factory, the band placed an advertisement for a vocalist in the Manchester Virgin Records shop. Ian Curtis, who knew them from earlier gigs, responded and was hired without audition. Sumner said that he "knew he was all right to get on with and that's what we based the whole group on. 
If we liked someone, they were in." Buzzcocks manager Richard Boon and frontman Pete Shelley have both been credited with suggesting the band name "Stiff Kittens", but the band settled on "Warsaw" shortly before their first gig, a reference to David Bowie's song "Warszawa". Warsaw debuted on 29 May 1977 at the Electric Circus, supporting the Buzzcocks, Penetration and John Cooper Clarke. Tony Tabac played drums that night after joining the band two days earlier. Reviews in the "NME" by Paul Morley and in "Sounds" by Ian Wood brought them immediate national exposure. Mason became the band's manager and Tabac was replaced on drums in June 1977 by Steve Brotherdale, who also played in the punk band Panik. Brotherdale tried to get Curtis to leave the band and join Panik, and even had Curtis audition. In July 1977, Warsaw recorded five demo tracks at Pennine Sound Studios, Oldham. Uneasy with Brotherdale's aggressive personality, the band fired him soon after the sessions; driving home from the studio, they pulled over and asked Brotherdale to check on a flat tyre; when he got out of the car, they drove off. In August 1977, Warsaw placed an advertisement in a music shop window seeking a replacement drummer. Stephen Morris, who had attended the same school as Curtis, was the sole respondent. Deborah Curtis, Ian's wife, stated that Morris "fitted perfectly" with the band, and that with his addition Warsaw became a "complete 'family'". To avoid confusion with the London punk band Warsaw Pakt, the band renamed themselves Joy Division in early 1978, borrowing the name from the sexual slavery wing of a Nazi concentration camp mentioned in the 1955 novel "House of Dolls". In December, the group recorded their debut EP, "An Ideal for Living", at Pennine Sound Studio and played their final gig as Warsaw on New Year's Eve at the Swinging Apple in Liverpool. Billed as Warsaw to ensure an audience, the band played their first gig as Joy Division on 25 January 1978 at Pip's Disco in Manchester. Joy Division were approached by RCA Records to record a cover of Nolan "N.F." Porter's "Keep on Keepin' On" at a Manchester recording studio. The band spent late March and April 1978 writing and rehearsing material. During the Stiff/Chiswick Challenge concert at Manchester's Rafters Club on 14 April, they caught the attention of Tony Wilson and Rob Gretton. Curtis berated Wilson for not putting the group on his Granada Television show "So It Goes"; Wilson responded that Joy Division would be the next band he would showcase on TV. Gretton, the venue's resident DJ, was so impressed by the band's performance that he convinced them to take him on as their manager. Gretton, whose "dogged determination" was later credited for much of the band's public success, contributed the business skills to provide Joy Division with a better foundation for creativity. Joy Division spent the first week of May 1978 recording at Manchester's Arrow Studios. The band were unhappy with the Grapevine Records head John Anderson's insistence on adding synthesiser into the mix to soften the sound, and asked to be dropped from the contract with RCA. Joy Division made their recorded debut in June 1978 when the band self-released "An Ideal for Living", and two weeks later their track "At a Later Date" was featured on the compilation album "Short Circuit: Live at the Electric Circus" (which had been recorded live in October 1977). 
In the "Melody Maker" review, Chris Brazier said that it "has the familiar rough-hewn nature of home-produced records, but they're no mere drone-vendors—there are a lot of good ideas here, and they could be a very interesting band by now, seven months on". The packaging of "An Ideal for Living"—which featured a drawing of a Hitler Youth member on the cover—coupled with the nature of the band's name fuelled speculation about their political affiliations. While Hook and Sumner later said they were intrigued by fascism at the time, Morris believed that the group's dalliance with Nazi imagery came from a desire to keep memories of the sacrifices of their parents and grandparents during World War II alive. He argued that accusations of neo-Nazi sympathies merely provoked the band "to keep on doing it, because that's the kind of people we are". In September 1978, Joy Division made their television debut performing "Shadowplay" on "So It Goes", with an introduction by Wilson. In October, Joy Division contributed two tracks recorded with producer Martin Hannett to the compilation double-7" EP "A Factory Sample", the first release by Tony Wilson's record label, Factory Records. In the "NME" review of the EP, Paul Morley praised the band as "the missing link" between Elvis Presley and Siouxsie and the Banshees. Joy Division joined Factory's roster, after buying themselves out of the RCA deal. Gretton was made a label partner to represent the interests of the band. On 27 December, during the drive home from gig at the Hope and Anchor in London, Curtis suffered his first recognised severe epileptic seizure and was hospitalised. Meanwhile, Joy Division's career progressed, and Curtis appeared on the 13 January 1979 cover of "NME". That month the band recorded their session for BBC Radio 1 DJ John Peel. According to Deborah Curtis, "Sandwiched in between these two important landmarks was the realisation that Ian's illness was something we would have to learn to accommodate". Joy Division recorded their debut album, "Unknown Pleasures" at Strawberry Studios, Stockport, in April 1979. Producer Martin Hannett significantly altered their live sound, a fact that greatly displeased the band at the time; however, in 2006, Hook said that in retrospect Hannet had done a good job and "created the Joy Division sound". The album cover was designed by Peter Saville, who went on to provide artwork for future Joy Division releases. "Unknown Pleasures" was released in June and sold through its initial pressing of 10,000 copies. Wilson said the success turned the indie label into a true business and a "revolutionary force" that operated outside of the major record label system. Reviewing the album for "Melody Maker", writer Jon Savage described the album as an "opaque manifesto" and declared it "one of the best, white, English, debut LPs of the year". Joy Division performed on Granada TV again in July 1979, and made their only nationwide TV appearance in September on BBC2's "Something Else". They supported the Buzzcocks in a 24-venue UK tour that began that October, which allowed the band to quit their regular jobs. The non-album single "Transmission" was released in November. Joy Division's burgeoning success drew a devoted following who were stereotyped as "intense young men dressed in grey overcoats". Joy Division toured Continental Europe in January 1980. Although the schedule was demanding, Curtis experienced only two grand mal seizures, both in the final two months of the tour. 
That March, the band recorded their second album, "Closer", with Hannett at London's Britannia Row Studios. That month they released the "Licht und Blindheit" single, with "Atmosphere" as the A-side and "Dead Souls" as the B-side, on the French independent label Sordide Sentimental. A lack of sleep and long hours destabilised Curtis's epilepsy, and his seizures became almost uncontrollable. He often had seizures during performances, which some audience members believed was part of the performance. The seizures left him feeling ashamed and depressed, and the band became increasingly worried about Curtis's condition. On 7 April, Curtis attempted suicide by overdosing on his anti-seizure medication, phenobarbitone. The following evening, Joy Division were scheduled to play a gig at the Derby Hall in Bury. Curtis was too ill to perform, so at Gretton's insistence the band played a combined set with Alan Hempsall of Crispy Ambulance and Simon Topping of A Certain Ratio singing on the first few songs. When Topping came back towards the end of the set, some audience members threw bottles at the stage. Curtis's ill health led to the cancellation of several other gigs that April. Joy Division's final live performance was held at the University of Birmingham's High Hall on 2 May, and included their only performance of "Ceremony", one of the last songs written by Curtis. Hannett's production has been widely praised. However, as with "Unknown Pleasures", both Hook and Sumner were unhappy with the production. Hook said that when he heard the final mix of "Atrocity Exhibition" he was disappointed that the abrasiveness had been toned down. He wrote: "I was like, head in hands, 'Oh fucking hell, it's happening again ... Martin had fucking melted the guitar with his Marshall Time Waster. Made it sound like someone strangling a cat and, to my mind, absolutely killed the song. I was so annoyed with him and went in and gave him a piece of my mind but he just turned round and told me to fuck off." Joy Division were scheduled to commence their first American tour in May 1980. Curtis had expressed enthusiasm about the tour, but his relationship with his wife, Deborah, was under strain; Deborah was excluded from the band's inner circle, and Curtis was having an affair with Belgian journalist and music promoter Annik Honoré, whom he met on tour in Europe in 1979. He was also anxious about how American audiences would react to his epilepsy. The evening before the band were due to depart for America, Curtis returned to his Macclesfield home to talk to Deborah. He asked her to drop an impending divorce suit, and asked her to leave him alone in the house until he caught a train to Manchester the following morning. Early on 18 May 1980, having spent the night watching the Werner Herzog film "Stroszek", Curtis hanged himself in his kitchen. Deborah discovered his body later that day when she returned. The suicide shocked the band and their management. In 2005, Wilson said: "I think all of us made the mistake of not thinking his suicide was going to happen ... We all completely underestimated the danger. We didn't take it seriously. That's how stupid we were." Music critic Simon Reynolds said Curtis's suicide "made for instant myth". Jon Savage's obituary said that "now no one will remember what his work with Joy Division was like when he was alive; it will be perceived as tragic rather than courageous".
Joy Division's single "Love Will Tear Us Apart" was released in June 1980 and hit number thirteen on the UK Singles Chart. In July 1980, "Closer" was released, and peaked at number six on the UK Albums Chart. "NME" reviewer Charles Shaar Murray wrote, ""Closer" is as magnificent a memorial (for 'Joy Division' as much as for Ian Curtis) as any post-Presley popular musician could have." Morris said that even without Curtis's death, it was unlikely that Joy Division would have endured. The members had made a pact long before Curtis's death that, should any member leave, the remaining members would change the band name. The band re-formed as New Order, with Sumner on vocals; they later recruited Morris's girlfriend Gillian Gilbert as keyboardist and second guitarist. Gilbert had befriended the band and played guitar at a Joy Division performance when Curtis had been unable to play. New Order's debut single, "Ceremony" (1981), was formed from the last two songs written with Curtis. New Order struggled in their early years to escape the shadow of Joy Division, but went on to achieve far greater commercial success with a different, more upbeat and dance-orientated sound. Various Joy Division outtakes and live material have been released. "Still", featuring live tracks and rare recordings, was issued in 1981. Factory issued the "Substance" compilation in 1988, including several out-of-print singles. "Permanent" was released in 1995 by London Records, which had acquired the Joy Division catalogue after Factory's 1992 bankruptcy. A comprehensive box set, "Heart and Soul", appeared in 1997. Joy Division's style quickly evolved from their punk roots. Their early sound as Warsaw was described as generic and "undistinguished punk-inflected hard-rock". Critic Simon Reynolds observed that the band's originality only "really became apparent as the songs got slower", and their music took on a "sparse" quality. According to Reynolds, "Hook's bass carried the melody, Bernard Sumner's guitar left gaps rather than filling up the group's sound with dense riffage and Steve Morris' drums seemed to circle the rim of a crater." According to music critic Jon Savage, "Joy Division were not punk but they were directly inspired by its energy". In 1994 Sumner said the band's characteristic sound "came out naturally: I'm more rhythm and chords, and Hooky was melody. He used to play high lead bass because I liked my guitar to sound distorted, and the amplifier I had would only work when it was at full volume. When Hooky played low, he couldn't hear himself. Steve has his own style which is different to other drummers. To me, a drummer in the band is the clock, but Steve wouldn't be the clock, because he's passive: he would follow the rhythm of the band, which gave us our own edge." By "Closer", Curtis had adopted a low baritone voice, drawing comparisons to Jim Morrison of the Doors (one of Curtis's favourite bands). Sumner largely acted as the band's director, a role he continued in New Order. While Sumner was the group's primary guitarist, Curtis played the instrument on a few recorded songs and during a few shows. Curtis hated playing guitar, but the band insisted he do so. Sumner said, "He played in quite a bizarre way and that to us was interesting, because no one else would play like Ian". During the recording sessions for "Closer", Sumner began using self-built synthesisers and Hook used a six-string bass for more melody.
Hannett "dedicated himself to capturing and intensifying Joy Division's eerie spatiality". Hannett believed punk rock was sonically conservative because of its refusal to use studio technology to create sonic space. The producer instead aimed to create a more expansive sound on the group's records. Hannett said, "[Joy Division] were a gift to a producer, because they didn't have a clue. They didn't argue". Hannett demanded clean and clear "sound separation" not only for individual instruments, but even for individual pieces of Morris's drumkit. Morris recalled, "Typically on tracks he considered to be potential singles, he'd get me to play each drum on its own to avoid any bleed-through of sound". Music journalist Richard Cook noted that Hannett's role was "crucial". There are "devices of distance" in his production and "the sound is an illusion of physicality". Curtis was the band's sole lyricist, and he typically composed his lyrics in a notebook, independently of the eventual music to evolve. The music itself was largely written by Sumner and Hook as the group jammed during rehearsals. Curtis's imagery and word choice often referenced "coldness, pressure, darkness, crisis, failure, collapse, loss of control". In 1979, "NME" journalist Paul Rambali wrote, "The themes of Joy Division's music are sorrowful, painful and sometimes deeply sad." Music journalist Jon Savage wrote that "Curtis's great lyrical achievement was to capture the underlying reality of a society in turmoil, and to make it both universal and personal," while noting that "the lyrics reflected, in mood and approach, his interest in romantic and science-fiction literature." Critic Robert Palmer wrote that William S. Burroughs and J. G. Ballard were "obvious influences" to Curtis, and Morris also remembered the singer reading T. S. Eliot. Deborah Curtis also remembered Curtis reading works by writers such as Fyodor Dostoevsky, Friedrich Nietzsche, Jean-Paul Sartre, Franz Kafka, and Hermann Hesse. Curtis was unwilling to explain the meaning behind his lyrics and Joy Division releases were absent of any lyric sheets. He told the fanzine "Printed Noise", "We haven't got a message really; the lyrics are open to interpretation. They're multidimensional. You can read into them what you like." The other Joy Division members have said that at the time, they paid little attention to the contents of Curtis' lyrics. In a 1987 interview with "Option", Morris said that they "just thought the songs were sort of sympathetic and more uplifting than depressing. But everyone's got their own opinion." Deborah Curtis recalled that only with the release of "Closer" did many who were close to the singer realise "[h]is intentions and feelings were all there within the lyrics". The surviving members regret not seeing the warning signs in Curtis's lyrics. Morris said that "it was only after Ian died that we sat down and listened to the lyrics...you'd find yourself thinking, 'Oh my God, I missed this one'. Because I'd look at Ian's lyrics and think how clever he was putting himself in the position of someone else. I never believed he was writing about himself. Looking back, how could I have been so bleedin' stupid? Of course he was writing about himself. But I didn't go in and grab him and ask, 'What's up?' I have to live with that". Joy Division's live sound is loud and aggressive, in marked contrast to their studio recordings. 
The band were especially unhappy with Hannett's mix of "Unknown Pleasures", which traded abrasiveness for a more cerebral and ghostly sound. According to Sumner, "the music was loud and heavy, and we felt that Martin had toned it down, especially with the guitars". During their live performances, the group did not interact with the audience; according to Paul Morley, "During a Joy Division set, outside of the songs, you'll be lucky to hear more than two or three words. Hello and goodbye. No introductions, no promotion." Curtis would often perform what became known as his "'dead fly' dance", as if imitating a seizure; his arms would "start flying in [a] semicircular, hypnotic curve". Simon Reynolds noted that Curtis's dancing style was reminiscent of an epileptic fit, and that he was dancing in this manner for some months before he was diagnosed with epilepsy. Live performances became problematic for Joy Division due to Curtis's condition. Sumner later said, "We didn't have flashing lights, but sometimes a particular drum beat would do something to him. He'd go off in a trance for a bit, then he'd lose it and have an epileptic fit. We'd have to stop the show and carry him off to the dressing room where he'd cry his eyes out because this appalling thing had just happened to him". Sumner wrote that Curtis was inspired by artists such as the Doors, Iggy Pop, David Bowie, Kraftwerk, the Velvet Underground and Neu!. Hook has also related that Curtis was particularly influenced by Iggy Pop's stage persona. The group were inspired by Kraftwerk's "marriage between humans and machines" and the inventiveness of their electronic music. Joy Division played "Trans-Europe Express" through the PA before they went on stage "to get a momentum". Bowie's "Berlin Trilogy", developed with Brian Eno, also influenced them; the "cold austerity" of the synthesisers on the B-sides of the "Heroes" and "Low" albums was "music looking at the future". Morris cited the "unique style" of the Velvet Underground's Maureen Tucker and the motorik drum beats of Neu! and Can. Hook said that "Siouxsie and the Banshees were one of our big influences ... The way the guitarist and the drummer played was a really unusual way of playing". Hook drew inspiration from the style of bassist Jean-Jacques Burnel and his early material with the Stranglers; he also credited Carol Kaye and her melodic basslines on the Temptations' early 1970s recordings. Sumner mentioned "the raw, nasty, unpolished edge" in the guitars of the Rolling Stones, the simple riff of "Vicious" on Lou Reed's "Transformer", and Neil Young. His musical horizons broadened with Jimi Hendrix, when he realised "it wasn't about little catchy tunes ... it was what you could do sonically with a guitar." Despite their short career, Joy Division have exerted a wide-reaching influence. John Bush of AllMusic argues that Joy Division "became the first band in the post-punk movement by ... emphasizing not anger and energy but mood and expression, pointing ahead to the rise of melancholy alternative music in the '80s." Joy Division have influenced bands ranging from their contemporaries U2 and the Cure to later acts such as Radiohead, Nine Inch Nails, Neurosis, Interpol, Bloc Party and the Editors, as well as rap artists. Rapper Danny Brown named his album "Atrocity Exhibition" after the Joy Division song, whose title was partially inspired by the 1970 J. G. Ballard collection of condensed novels of the same name. In 2005, both New Order and Joy Division were inducted into the UK Music Hall of Fame.
The band's dark sound, which Martin Hannett described in 1979 as "dancing music with Gothic overtones", presaged the gothic rock genre. While the term "gothic" originally described a "doomy atmosphere" in music of the late 1970s, the term was soon applied to specific bands like Bauhaus that followed in the wake of Joy Division and Siouxsie and the Banshees. Standard musical fixtures of early gothic rock bands included "high-pitched post-Joy Division basslines usurp[ing] the melodic role" and "vocals that were either near operatic and Teutonic or deep, droning alloys of Jim Morrison and Ian Curtis." Joy Division have been dramatised in two biopics. "24 Hour Party People" (2002) is a fictionalised account of Factory Records in which members of the band appear as supporting characters. Tony Wilson said of the film, "It's all true, it's all not true. It's not a fucking documentary," and that he favoured the "myth" over the truth. The 2007 film "Control", directed by Anton Corbijn, is a biography of Ian Curtis (portrayed by Sam Riley) that uses Deborah Curtis's biography of her late husband, "Touching from a Distance" (1995), as its basis. "Control" had its international premiere on the opening night of Directors' Fortnight at the 2007 Cannes Film Festival, where it was critically well received. That year Grant Gee directed the band documentary "Joy Division". Kuiper belt The Kuiper belt, occasionally called the Edgeworth–Kuiper belt, is a circumstellar disc in the outer Solar System, extending from the orbit of Neptune (at 30 AU) to approximately 50 AU from the Sun. It is similar to the asteroid belt, but is far larger—20 times as wide and 20 to 200 times as massive. Like the asteroid belt, it consists mainly of small bodies or remnants from when the Solar System formed. While many asteroids are composed primarily of rock and metal, most Kuiper belt objects are composed largely of frozen volatiles (termed "ices"), such as methane, ammonia and water. The Kuiper belt is home to three officially recognized dwarf planets: Pluto, Haumea and Makemake. Some of the Solar System's moons, such as Neptune's Triton and Saturn's Phoebe, may have originated in the region. The Kuiper belt was named after Dutch-American astronomer Gerard Kuiper, though he did not predict its existence. In 1992, Albion was discovered, the first Kuiper belt object (KBO) since Pluto and Charon. Since its discovery, the number of known KBOs has increased to over a thousand, and more than 100,000 KBOs over in diameter are thought to exist. The Kuiper belt was initially thought to be the main repository for periodic comets, those with orbits lasting less than 200 years. Studies since the mid-1990s have shown that the belt is dynamically stable and that comets' true place of origin is the scattered disc, a dynamically active zone created by the outward motion of Neptune 4.5 billion years ago; scattered disc objects such as Eris have extremely eccentric orbits that take them as far as 100 AU from the Sun. The Kuiper belt is distinct from the theoretical Oort cloud, which is a thousand times more distant and is mostly spherical. The objects within the Kuiper belt, together with the members of the scattered disc and any potential Hills cloud or Oort cloud objects, are collectively referred to as trans-Neptunian objects (TNOs). Pluto is the largest and most massive member of the Kuiper belt, and the largest and the second-most-massive known TNO, surpassed only by Eris in the scattered disc.
Pluto, originally considered a planet, was reclassified as a dwarf planet in 2006 because of its status as part of the Kuiper belt. It is compositionally similar to many other objects of the Kuiper belt and its orbital period is characteristic of a class of KBOs, known as "plutinos", that share the same 2:3 resonance with Neptune. After the discovery of Pluto in 1930, many speculated that it might not be alone. The region now called the Kuiper belt was hypothesized in various forms for decades. It was only in 1992 that the first direct evidence for its existence was found. The number and variety of prior speculations on the nature of the Kuiper belt have led to continued uncertainty as to who deserves credit for first proposing it. The first astronomer to suggest the existence of a trans-Neptunian population was Frederick C. Leonard. Soon after Pluto's discovery by Clyde Tombaugh in 1930, Leonard pondered whether it was "not likely that in Pluto there has come to light the "first" of a "series" of ultra-Neptunian bodies, the remaining members of which still await discovery but which are destined eventually to be detected". That same year, astronomer Armin O. Leuschner suggested that Pluto "may be one of many long-period planetary objects yet to be discovered." In 1943, in the "Journal of the British Astronomical Association", Kenneth Edgeworth hypothesized that, in the region beyond Neptune, the material within the primordial solar nebula was too widely spaced to condense into planets, and so rather condensed into a myriad of smaller bodies. From this he concluded that "the outer region of the solar system, beyond the orbits of the planets, is occupied by a very large number of comparatively small bodies" and that, from time to time, one of their number "wanders from its own sphere and appears as an occasional visitor to the inner solar system", becoming a comet. In 1951, in a paper in "Astrophysics: A Topical Symposium", Gerard Kuiper speculated on a similar disc having formed early in the Solar System's evolution, but he did not think that such a belt still existed today. Kuiper was operating on the assumption, common in his time, that Pluto was the size of Earth and had therefore scattered these bodies out toward the Oort cloud or out of the Solar System. Were Kuiper's hypothesis correct, there would not be a Kuiper belt today. The hypothesis took many other forms in the following decades. In 1962, physicist Al G. W. Cameron postulated the existence of "a tremendous mass of small material on the outskirts of the solar system". In 1964, Fred Whipple, who popularised the famous "dirty snowball" hypothesis for cometary structure, thought that a "comet belt" might be massive enough to cause the purported discrepancies in the orbit of Uranus that had sparked the search for Planet X, or, at the very least, massive enough to affect the orbits of known comets. Observation ruled out this hypothesis. In 1977, Charles Kowal discovered 2060 Chiron, an icy planetoid with an orbit between Saturn and Uranus. He used a blink comparator, the same device that had allowed Clyde Tombaugh to discover Pluto nearly 50 years before. In 1992, another object, 5145 Pholus, was discovered in a similar orbit. Today, an entire population of comet-like bodies, called the centaurs, is known to exist in the region between Jupiter and Neptune. The centaurs' orbits are unstable and have dynamical lifetimes of a few million years.
From the time of Chiron's discovery in 1977, astronomers have speculated that the centaurs must therefore be frequently replenished by some outer reservoir. Further evidence for the existence of the Kuiper belt later emerged from the study of comets. That comets have finite lifespans has been known for some time. As they approach the Sun, its heat causes their volatile surfaces to sublimate into space, gradually dispersing them. In order for comets to continue to be visible over the age of the Solar System, they must be replenished frequently. One such area of replenishment is the Oort cloud, a spherical swarm of comets extending beyond 50,000 AU from the Sun, first hypothesised by the Dutch astronomer Jan Oort in 1950. The Oort cloud is thought to be the point of origin of long-period comets, which are those, like Hale–Bopp, with orbits lasting thousands of years. There is another comet population, known as short-period or periodic comets, consisting of those comets that, like Halley's Comet, have orbital periods of less than 200 years. By the 1970s, the rate at which short-period comets were being discovered was becoming increasingly inconsistent with their having emerged solely from the Oort cloud. For an Oort cloud object to become a short-period comet, it would first have to be captured by the giant planets. In a paper published in "Monthly Notices of the Royal Astronomical Society" in 1980, Uruguayan astronomer Julio Fernández stated that for every short-period comet to be sent into the inner Solar System from the Oort cloud, 600 would have to be ejected into interstellar space. He speculated that a comet belt between 35 and 50 AU would be required to account for the observed number of comets. Following up on Fernández's work, in 1988 the Canadian team of Martin Duncan, Tom Quinn and Scott Tremaine ran a number of computer simulations to determine if all observed comets could have arrived from the Oort cloud. They found that the Oort cloud could not account for all short-period comets, particularly as short-period comets are clustered near the plane of the Solar System, whereas Oort-cloud comets tend to arrive from any point in the sky. With a "belt", as Fernández described it, added to the formulations, the simulations matched observations. Reportedly because the words "Kuiper" and "comet belt" appeared in the opening sentence of Fernández's paper, Tremaine named this hypothetical region the "Kuiper belt". In 1987, astronomer David Jewitt, then at MIT, became increasingly puzzled by "the apparent emptiness of the outer Solar System". He encouraged then-graduate student Jane Luu to aid him in his endeavour to locate another object beyond Pluto's orbit, because, as he told her, "If we don't, nobody will." Using telescopes at the Kitt Peak National Observatory in Arizona and the Cerro Tololo Inter-American Observatory in Chile, Jewitt and Luu conducted their search in much the same way as Clyde Tombaugh and Charles Kowal had, with a blink comparator. Initially, examination of each pair of plates took about eight hours, but the process was sped up with the arrival of electronic charge-coupled devices, or CCDs, which, though their field of view was narrower, were not only more efficient at collecting light (they retained 90% of the light that hit them, rather than the 10% achieved by photographs) but allowed the blinking process to be done virtually, on a computer screen. Today, CCDs form the basis for most astronomical detectors.
In 1988, Jewitt moved to the Institute for Astronomy at the University of Hawaii. Luu later joined him to work at the University of Hawaii's 2.24 m telescope at Mauna Kea. Eventually, the field of view of CCDs had increased to 1024 by 1024 pixels, which allowed searches to be conducted far more rapidly. Finally, after five years of searching, Jewitt and Luu announced on August 30, 1992, the "Discovery of the candidate Kuiper belt object" 15760 Albion. Six months later, they discovered a second object in the region, (181708) 1993 FW. Studies conducted since the trans-Neptunian region was first charted have shown that the region now called the Kuiper belt is not the point of origin of short-period comets, but that they instead derive from a linked population called the scattered disc. The scattered disc was created when Neptune migrated outward into the proto-Kuiper belt, which at the time was much closer to the Sun, and left in its wake a population of dynamically stable objects that could never be affected by its orbit (the Kuiper belt proper), and a population whose perihelia are close enough that Neptune can still disturb them as it travels around the Sun (the scattered disc). Because the scattered disc is dynamically active and the Kuiper belt relatively dynamically stable, the scattered disc is now seen as the most likely point of origin for periodic comets. Astronomers sometimes use the alternative name Edgeworth–Kuiper belt to credit Edgeworth, and KBOs are occasionally referred to as EKOs. Brian G. Marsden claims that neither deserves true credit: "Neither Edgeworth nor Kuiper wrote about anything remotely like what we are now seeing, but Fred Whipple did". David Jewitt comments: "If anything ... Fernández most nearly deserves the credit for predicting the Kuiper Belt." KBOs are sometimes called "kuiperoids", a name suggested by Clyde Tombaugh. The term "trans-Neptunian object" (TNO) is recommended for objects in the belt by several scientific groups because the term is less controversial than all others—it is not an exact synonym though, as TNOs include all objects orbiting the Sun past the orbit of Neptune, not just those in the Kuiper belt. At its fullest extent (but excluding the scattered disc), including its outlying regions, the Kuiper belt stretches from roughly 30 to 55 AU. The main body of the belt is generally accepted to extend from the 2:3 mean-motion resonance (see below) at 39.5 AU to the 1:2 resonance at roughly 48 AU. The Kuiper belt is quite thick, with the main concentration extending as much as ten degrees outside the ecliptic plane and a more diffuse distribution of objects extending several times farther. Overall it more resembles a torus or doughnut than a belt. Its mean position is inclined to the ecliptic by 1.86 degrees. The presence of Neptune has a profound effect on the Kuiper belt's structure due to orbital resonances. Over a timescale comparable to the age of the Solar System, Neptune's gravity destabilises the orbits of any objects that happen to lie in certain regions, and either sends them into the inner Solar System or out into the scattered disc or interstellar space. This causes the Kuiper belt to have pronounced gaps in its current layout, similar to the Kirkwood gaps in the asteroid belt. In the region between 40 and 42 AU, for instance, no objects can retain a stable orbit over such times, and any observed in that region must have migrated there relatively recently.
Between the 2:3 and 1:2 resonances with Neptune, at approximately 42–48 AU, the gravitational interactions with Neptune occur over an extended timescale, and objects can exist with their orbits essentially unaltered. This region is known as the classical Kuiper belt, and its members comprise roughly two thirds of KBOs observed to date. Because the first modern KBO discovered, , is considered the prototype of this group, classical KBOs are often referred to as cubewanos ("Q-B-1-os"). The guidelines established by the IAU demand that classical KBOs be given names of mythological beings associated with creation. The classical Kuiper belt appears to be a composite of two separate populations. The first, known as the "dynamically cold" population, has orbits much like those of the planets: nearly circular, with an orbital eccentricity of less than 0.1, and with relatively low inclinations up to about 10° (they lie close to the plane of the Solar System rather than at an angle). The cold population also contains a concentration of objects, referred to as the kernel, with semi-major axes at 44–44.5 AU. The second, the "dynamically hot" population, has orbits much more inclined to the ecliptic, by up to 30°. The two populations have been named this way not because of any major difference in temperature, but from analogy to particles in a gas, which increase their relative velocity as they become heated up. Not only are the two populations in different orbits, the cold population also differs in color and albedo, being redder and brighter; it has a larger fraction of binary objects, has a different size distribution, and lacks very large objects. The difference in colors may be a reflection of different compositions, which suggests they formed in different regions. The hot population is proposed to have formed near Neptune's original orbit and to have been scattered out during the migration of the giant planets. The cold population, on the other hand, has been proposed to have formed more or less in its current position because the loose binaries would be unlikely to survive encounters with Neptune. Although the Nice model appears to be able to at least partially explain a compositional difference, it has also been suggested the color difference may reflect differences in surface evolution. When an object's orbital period is an exact ratio of Neptune's (a situation called a mean-motion resonance), then it can become locked in a synchronised motion with Neptune and avoid being perturbed away if their relative alignments are appropriate. If, for instance, an object orbits the Sun twice for every three Neptune orbits, and if it reaches perihelion with Neptune a quarter of an orbit away from it, then whenever it returns to perihelion, Neptune will always be in about the same relative position as it began, because it will have completed 1½ orbits in the same time. This is known as the 2:3 (or 3:2) resonance, and it corresponds to a characteristic semi-major axis of about 39.4 AU. This 2:3 resonance is populated by about 200 known objects, including Pluto together with its moons. In recognition of this, the members of this family are known as plutinos. Many plutinos, including Pluto, have orbits that cross that of Neptune, though their resonance means they can never collide. Plutinos have high orbital eccentricities, suggesting that they are not native to their current positions but were instead thrown haphazardly into their orbits by the migrating Neptune.
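The characteristic semi-major axes quoted for these resonances follow directly from Kepler's third law, which links orbital period and semi-major axis (the square of the period is proportional to the cube of the semi-major axis). The short Python sketch below illustrates the calculation; the 30.1 AU figure used for Neptune's semi-major axis is an assumed round value for illustration, not a number taken from this article.

# Semi-major axis of a Neptune mean-motion resonance, via Kepler's third law.
# Assumption: Neptune's semi-major axis is taken as roughly 30.1 AU.
A_NEPTUNE_AU = 30.1

def resonance_semi_major_axis(kbo_orbits: int, neptune_orbits: int) -> float:
    """Semi-major axis (AU) of an object completing `kbo_orbits` orbits
    in the time Neptune completes `neptune_orbits`."""
    period_ratio = neptune_orbits / kbo_orbits          # P_KBO / P_Neptune
    return A_NEPTUNE_AU * period_ratio ** (2.0 / 3.0)   # Kepler: a scales as P^(2/3)

print(resonance_semi_major_axis(2, 3))  # 2:3 "plutino" resonance -> about 39.4 AU
print(resonance_semi_major_axis(1, 2))  # 1:2 "twotino" resonance -> about 47.8 AU

Both outputs agree with the roughly 39.4 AU and 47.7 AU values cited for the 2:3 and 1:2 resonances.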
IAU guidelines dictate that all plutinos must, like Pluto, be named for underworld deities. The 1:2 resonance (whose objects complete half an orbit for each of Neptune's) corresponds to semi-major axes of ~47.7 AU, and is sparsely populated. Its residents are sometimes referred to as twotinos. Other resonances also exist at 3:4, 3:5, 4:7 and 2:5. Neptune has a number of trojan objects, which occupy its Lagrangian points, gravitationally stable regions leading and trailing it in its orbit. Neptune trojans are in a 1:1 mean-motion resonance with Neptune and often have very stable orbits. Additionally, there is a relative absence of objects with semi-major axes below 39 AU that cannot apparently be explained by the present resonances. The currently accepted hypothesis for the cause of this is that as Neptune migrated outward, unstable orbital resonances moved gradually through this region, and thus any objects within it were swept up or gravitationally ejected from it. The 1:2 resonance appears to be an edge beyond which few objects are known. It is not clear whether it is actually the outer edge of the classical belt or just the beginning of a broad gap. Objects have been detected at the 2:5 resonance at roughly 55 AU, well outside the classical belt; predictions of a large number of bodies in classical orbits between these resonances have not been verified through observation. Based on estimations of the primordial mass required to form Uranus and Neptune, as well as bodies as large as Pluto (see below), earlier models of the Kuiper belt had suggested that the number of large objects would increase by a factor of two beyond 50 AU, so this sudden drastic falloff, known as the "Kuiper cliff", was unexpected, and to date its cause is unknown. In 2003, Bernstein, Trilling, et al. found evidence that the rapid decline in objects of 100 km or more in radius beyond 50 AU is real, and not due to observational bias. Possible explanations include that material at that distance was too scarce or too scattered to accrete into large objects, or that subsequent processes removed or destroyed those that did. Patryk Lykawka of Kobe University claimed that the gravitational attraction of an unseen large planetary object, perhaps the size of Earth or Mars, might be responsible. The precise origins of the Kuiper belt and its complex structure are still unclear, and astronomers are awaiting the completion of several wide-field survey telescopes such as Pan-STARRS and the future LSST, which should reveal many currently unknown KBOs. These surveys will provide data that will help determine answers to these questions. The Kuiper belt is thought to consist of planetesimals, fragments from the original protoplanetary disc around the Sun that failed to fully coalesce into planets and instead formed into smaller bodies, the largest less than in diameter. Studies of the crater counts on Pluto and Charon revealed a scarcity of small craters, suggesting that such objects formed directly as sizeable objects in the range of tens of kilometers in diameter rather than being accreted from much smaller, roughly kilometer-scale bodies. Hypothetical mechanisms for the formation of these larger bodies include the gravitational collapse of clouds of pebbles concentrated between eddies in a turbulent protoplanetary disk or in streaming instabilities. These collapsing clouds may fragment, forming binaries.
Modern computer simulations show the Kuiper belt to have been strongly influenced by Jupiter and Neptune, and also suggest that neither Uranus nor Neptune could have formed in their present positions, because too little primordial matter existed at that range to produce objects of such high mass. Instead, these planets are estimated to have formed closer to Jupiter. Scattering of planetesimals early in the Solar System's history would have led to migration of the orbits of the giant planets: Saturn, Uranus, and Neptune drifted outwards, whereas Jupiter drifted inwards. Eventually, the orbits shifted to the point where Jupiter and Saturn reached an exact 1:2 resonance; Jupiter orbited the Sun twice for every one Saturn orbit. The gravitational repercussions of such a resonance ultimately destabilized the orbits of Uranus and Neptune, causing them to be scattered outward onto high-eccentricity orbits that crossed the primordial planetesimal disc. While Neptune's orbit was highly eccentric, its mean-motion resonances overlapped and the orbits of the planetesimals evolved chaotically, allowing planetesimals to wander outward as far as Neptune's 1:2 resonance to form a dynamically cold belt of low-inclination objects. Later, after its eccentricity decreased, Neptune's orbit expanded outward toward its current position. Many planetesimals were captured into and remain in resonances during this migration; others evolved onto higher-inclination and lower-eccentricity orbits and escaped from the resonances onto stable orbits. Many more planetesimals were scattered inward, with small fractions being captured as Jupiter trojans, as irregular satellites orbiting the giant planets, and as outer belt asteroids. The remainder were scattered outward again by Jupiter and in most cases ejected from the Solar System, reducing the primordial Kuiper belt population by 99% or more. The original version of the currently most popular model, the "Nice model", reproduces many characteristics of the Kuiper belt such as the "cold" and "hot" populations, resonant objects, and a scattered disc, but it still fails to account for some of the characteristics of their distributions. The model predicts a higher average eccentricity in classical KBO orbits than is observed (0.10–0.13 versus 0.07) and its predicted inclination distribution contains too few high-inclination objects. In addition, the frequency of binary objects in the cold belt, many of which are far apart and loosely bound, also poses a problem for the model. These are predicted to have been separated during encounters with Neptune, leading some to propose that the cold disc formed at its current location, representing the only truly local population of small bodies in the solar system. A recent modification of the Nice model has the Solar System begin with five giant planets, including an additional ice giant, in a chain of mean-motion resonances. About 400 million years after the formation of the Solar System, the resonance chain is broken. Instead of being scattered into the disc, the ice giants first migrate outward several AU. This divergent migration eventually leads to a resonance crossing, destabilizing the orbits of the planets. The extra ice giant encounters Saturn and is scattered inward onto a Jupiter-crossing orbit and, after a series of encounters, is ejected from the Solar System. The remaining planets then continue their migration until the planetesimal disc is nearly depleted, with small fractions remaining in various locations.
As in the original Nice model, objects are captured into resonances with Neptune during its outward migration. Some remain in the resonances; others evolve onto higher-inclination, lower-eccentricity orbits and are released onto stable orbits, forming the dynamically hot classical belt. The hot belt's inclination distribution can be reproduced if Neptune migrated from 24 AU to 30 AU on a 30 Myr timescale. When Neptune migrates to 28 AU, it has a gravitational encounter with the extra ice giant. Objects captured from the cold belt into the 1:2 mean-motion resonance with Neptune are left behind as a local concentration at 44 AU when this encounter causes Neptune's semi-major axis to jump outward. The objects deposited in the cold belt include some loosely bound 'blue' binaries originating from locations closer to the Sun than the cold belt's current location. If Neptune's eccentricity remains small during this encounter, the chaotic evolution of orbits of the original Nice model is avoided and a primordial cold belt is preserved. In the later phases of Neptune's migration, a slow sweeping of mean-motion resonances removes the higher-eccentricity objects from the cold belt, truncating its eccentricity distribution. Being distant from the Sun and major planets, Kuiper belt objects are thought to be relatively unaffected by the processes that have shaped and altered other Solar System objects; thus, determining their composition would provide substantial information on the makeup of the earliest Solar System. Due to their small size and extreme distance from Earth, the chemical makeup of KBOs is very difficult to determine. The principal method by which astronomers determine the composition of a celestial object is spectroscopy. When an object's light is broken into its component colors, an image akin to a rainbow is formed. This image is called a spectrum. Different substances absorb light at different wavelengths, and when the spectrum for a specific object is unravelled, dark lines (called absorption lines) appear where the substances within it have absorbed that particular wavelength of light. Every element or compound has its own unique spectroscopic signature, and by reading an object's full spectral "fingerprint", astronomers can determine its composition. Analysis indicates that Kuiper belt objects are composed of a mixture of rock and a variety of ices such as water, methane, and ammonia. The temperature of the belt is only about 50 K, so many compounds that would be gaseous closer to the Sun remain solid. The densities and rock–ice fractions are known for only a small number of objects for which the diameters and the masses have been determined. The diameter can be determined by imaging with a high-resolution telescope such as the Hubble Space Telescope, by the timing of an occultation when an object passes in front of a star or, most commonly, by using the albedo of an object calculated from its infrared emissions. The masses are determined using the semi-major axes and periods of satellites, which are therefore known only for a few binary objects. The densities range from less than 0.4 to 2.6 g/cm³. The least dense objects are thought to be largely composed of ice and have significant porosity. The densest objects are likely composed of rock with a thin crust of ice. There is a trend of low densities for small objects and high densities for the largest objects.
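The measurement chain just described can be illustrated numerically: a satellite's orbit gives the total system mass through Kepler's third law, and combining that mass with an independently measured diameter gives a bulk density. The Python sketch below uses purely hypothetical input values chosen for illustration, not measurements of any particular KBO.

import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def system_mass_kg(sat_semi_major_axis_m: float, sat_period_s: float) -> float:
    """Total mass of a KBO + satellite system from the satellite's orbit (Kepler's third law)."""
    return 4 * math.pi ** 2 * sat_semi_major_axis_m ** 3 / (G * sat_period_s ** 2)

def bulk_density_g_cm3(mass_kg: float, diameter_km: float) -> float:
    """Bulk density assuming a spherical body of the given diameter."""
    radius_m = diameter_km * 1e3 / 2
    volume_m3 = 4.0 / 3.0 * math.pi * radius_m ** 3
    return mass_kg / volume_m3 / 1000.0  # convert kg/m^3 to g/cm^3

# Hypothetical example: a satellite orbiting 15,000 km from a 1,000 km primary
# with a 16-day period gives a system mass near 1e21 kg and a density of about
# 2 g/cm^3, at the rock-rich end of the 0.4-2.6 g/cm^3 range quoted above.
mass = system_mass_kg(1.5e7, 16 * 86400)
print(mass, bulk_density_g_cm3(mass, 1000))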
One possible explanation for this trend is that ice was lost from the surface layers when differentiated objects collided to form the largest objects. Initially, detailed analysis of KBOs was impossible, and so astronomers were only able to determine the most basic facts about their makeup, primarily their color. These first data showed a broad range of colors among KBOs, ranging from neutral grey to deep red. This suggested that their surfaces were composed of a wide range of compounds, from dirty ices to hydrocarbons. This diversity was startling, as astronomers had expected KBOs to be uniformly dark, having lost most of the volatile ices from their surfaces to the effects of cosmic rays. Various solutions were suggested for this discrepancy, including resurfacing by impacts or outgassing. Jewitt and Luu's spectral analysis of the known Kuiper belt objects in 2001 found that the variation in color was too extreme to be easily explained by random impacts. The radiation from the Sun is thought to have chemically altered methane on the surface of KBOs, producing products such as tholins. Makemake has been shown to possess a number of hydrocarbons derived from the radiation-processing of methane, including ethane, ethylene and acetylene. Although to date most KBOs still appear spectrally featureless due to their faintness, there have been a number of successes in determining their composition. In 1996, Robert H. Brown et al. acquired spectroscopic data on the KBO 1993 SC, which revealed that its surface composition is markedly similar to that of Pluto, as well as Neptune's moon Triton, with large amounts of methane ice. For the smaller objects, only colors and in some cases the albedos have been determined. These objects largely fall into two classes: gray with low albedos, or very red with higher albedos. The difference in colors and albedos is hypothesized to be due to the retention or the loss of hydrogen sulfide (H₂S) on the surface of these objects, with the surfaces of those that formed far enough from the Sun to retain H₂S being reddened due to irradiation. The largest KBOs, such as Pluto and Quaoar, have surfaces rich in volatile compounds such as methane, nitrogen and carbon monoxide; the presence of these molecules is likely due to their moderate vapor pressure in the 30–50 K temperature range of the Kuiper belt. This allows them to occasionally boil off their surfaces and then fall again as snow, whereas compounds with higher boiling points would remain solid. The relative abundances of these three compounds in the largest KBOs are directly related to their surface gravity and ambient temperature, which determine which compounds they can retain. Water ice has been detected in several KBOs, including members of the Haumea family such as , mid-sized objects such as 38628 Huya and 20000 Varuna, and also on some small objects. The presence of crystalline ice on large and mid-sized objects, including 50000 Quaoar where ammonia hydrate has also been detected, may indicate past tectonic activity aided by melting point lowering due to the presence of ammonia. Despite its vast extent, the collective mass of the Kuiper belt is relatively low. The total mass is estimated to range between 1/25 and 1/10 the mass of the Earth. Conversely, models of the Solar System's formation predict a collective mass for the Kuiper belt of 30 Earth masses. This missing >99% of the mass can hardly be dismissed, because it is required for the accretion of any KBOs larger than in diameter.
If the Kuiper belt had always had its current low density, these large objects simply could not have formed by the collision and merger of smaller planetesimals. Moreover, the eccentricity and inclination of current orbits make the encounters quite "violent", resulting in destruction rather than accretion. It appears that either the current residents of the Kuiper belt were created closer to the Sun, or some mechanism dispersed the original mass. Neptune's current influence is too weak to explain such a massive "vacuuming", though the Nice model proposes that it could have been the cause of mass removal in the past. Although the question remains open, the conjectures vary from a passing star scenario to grinding of smaller objects, via collisions, into dust small enough to be affected by solar radiation. The extent of mass loss by collisional grinding is limited by the presence of loosely bound binaries in the cold disk, which are likely to be disrupted in collisions. Bright objects are rare compared with the dominant dim population, as expected from accretion models of origin, given that only some objects of a given size would have grown further. This relationship between "N"("D") (the number of objects of diameter greater than "D") and "D", referred to as the brightness slope, has been confirmed by observations. The slope is inversely proportional to some power of the diameter "D": the number of objects per unit diameter interval is proportional to "D"^(−"q"). This implies (assuming "q" is not 1) that the cumulative count "N"("D") is proportional to "D"^(1−"q"), plus a constant. Less formally, if "q" is 4, for example, there are 8 (= 2^3) times more objects in the 100–200 km range than in the 200–400 km range, and for every object with a diameter between 1000 and 1010 km there should be around 1000 (= 10^3) objects with diameter of 100 to 101 km. If "q" was 1 or less, the law would imply an infinite number and mass of large objects in the Kuiper belt. If 1<"q"≤4 there will be a finite number of objects greater than a given size, but the expected value of their combined mass would be infinite. If "q" is 4 or more, the law would imply an infinite mass of small objects. More accurate models find that the "slope" parameter "q" is in effect greater at large diameters and lesser at small diameters. It seems that Pluto is somewhat unexpectedly large, having several percent of the total mass of the Kuiper belt. It is not expected that anything larger than Pluto exists in the Kuiper belt, and in fact most of the brightest (largest) objects at inclinations less than 5° have probably been found. For most TNOs, only the absolute magnitude is actually known; the size is inferred assuming a given albedo (not a safe assumption for larger objects). Recent research has revealed that the size distributions of the hot classical and cold classical objects have differing slopes. The slope for the hot objects is q = 5.3 at large diameters and q = 2.0 at small diameters, with the change in slope at 110 km. The slope for the cold objects is q = 8.2 at large diameters and q = 2.9 at small diameters, with a change in slope at 140 km. The size distributions of the scattering objects, the plutinos, and the Neptune trojans have slopes similar to the other dynamically hot populations, but may instead have a divot, a sharp decrease in the number of objects below a specific size. This divot is hypothesized to be due either to the collisional evolution of the population or to the population having formed with no objects below this size, the smaller objects being fragments of the original objects.
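The arithmetic in the worked example above can be checked directly from the power law. In the sketch below, the slope value q = 4 is simply the illustrative figure used in the text, and the function returns only relative (unnormalised) counts obtained by integrating the differential distribution.

def count_in_range(d1_km: float, d2_km: float, q: float = 4.0) -> float:
    """Relative number of objects with diameters between d1 and d2,
    from integrating dN/dD proportional to D^-q (valid for q > 1)."""
    return (d1_km ** (1 - q) - d2_km ** (1 - q)) / (q - 1)

print(count_in_range(100, 200) / count_in_range(200, 400))    # about 8  (= 2^3)
print(count_in_range(100, 101) / count_in_range(1000, 1010))  # about 1000 (= 10^3)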
As of December 2009, the smallest Kuiper belt object detected is 980 m across. It is too dim (magnitude 35) to be seen by "Hubble" directly, but it was detected by "Hubble"'s star tracking system when it occulted a star. The scattered disc is a sparsely populated region, overlapping with the Kuiper belt but extending to beyond 100 AU. Scattered disc objects (SDOs) have very elliptical orbits, often also very inclined to the ecliptic. Most models of Solar System formation show both KBOs and SDOs first forming in a primordial belt, with later gravitational interactions, particularly with Neptune, sending the objects outward, some into stable orbits (the KBOs) and some into unstable orbits, the scattered disc. Due to its unstable nature, the scattered disc is suspected to be the point of origin of many of the Solar System's short-period comets. Their dynamic orbits occasionally force them into the inner Solar System, first becoming centaurs, and then short-period comets. According to the Minor Planet Center, which officially catalogues all trans-Neptunian objects, a KBO, strictly speaking, is any object that orbits exclusively within the defined Kuiper belt region regardless of origin or composition. Objects found outside the belt are classed as scattered objects. In some scientific circles the term "Kuiper belt object" has become synonymous with any icy minor planet native to the outer Solar System assumed to have been part of that initial class, even if its orbit during the bulk of Solar System history has been beyond the Kuiper belt (e.g. in the scattered-disc region). They often describe scattered disc objects as "scattered Kuiper belt objects". Eris, which is known to be more massive than Pluto, is often referred to as a KBO, but is technically an SDO. A consensus among astronomers as to the precise definition of the Kuiper belt has yet to be reached, and this issue remains unresolved. The centaurs, which are not normally considered part of the Kuiper belt, are also thought to be scattered objects, the only difference being that they were scattered inward, rather than outward. The Minor Planet Center groups the centaurs and the SDOs together as scattered objects. During its period of migration, Neptune is thought to have captured a large KBO, Triton, which is the only large moon in the Solar System with a retrograde orbit (it orbits opposite to Neptune's rotation). This suggests that, unlike the large moons of Jupiter, Saturn and Uranus, which are thought to have coalesced from rotating discs of material around their young parent planets, Triton was a fully formed body that was captured from surrounding space. Gravitational capture of an object is not easy: it requires some mechanism to slow down the object enough to be caught by the larger object's gravity. A possible explanation is that Triton was part of a binary when it encountered Neptune. (Many KBOs are members of binaries. See below.) Ejection of the other member of the binary by Neptune could then explain Triton's capture. Triton is only 14% larger than Pluto, and spectral analysis of both worlds shows that their surfaces are largely composed of similar materials, such as methane and carbon monoxide. All this points to the conclusion that Triton was once a KBO that was captured by Neptune during its outward migration. Since 2000, a number of KBOs with diameters of between 500 and , more than half that of Pluto (diameter 2370 km), have been discovered. 50000 Quaoar, a classical KBO discovered in 2002, is over 1,200 km across. 
and , both announced on July 29, 2005, are larger still. Other objects, such as 28978 Ixion (discovered in 2001) and 20000 Varuna (discovered in 2000), measure roughly across. The discovery of these large KBOs in orbits similar to Pluto's led many to conclude that, aside from its relative size, Pluto was not particularly different from other members of the Kuiper belt. Not only are these objects similar to Pluto in size, but many also have satellites, and are of similar composition (methane and carbon monoxide have been found both on Pluto and on the largest KBOs). Thus, just as Ceres was considered a planet before the discovery of its fellow asteroids, some began to suggest that Pluto might also be reclassified. The issue was brought to a head by the discovery of Eris, an object in the scattered disc far beyond the Kuiper belt, that is now known to be 27% more massive than Pluto. (Eris was originally thought to be larger than Pluto by volume, but the "New Horizons" mission found this not to be the case.) In response, the International Astronomical Union (IAU) was forced to define what a planet is for the first time, and in so doing included in their definition that a planet must have "cleared the neighbourhood around its orbit". As Pluto shares its orbit with many other sizable objects, it was deemed not to have cleared its orbit, and was thus reclassified from a planet to a dwarf planet, making it a member of the Kuiper belt. Although Pluto is currently the largest known KBO, there is at least one known larger object currently outside the Kuiper belt that probably originated in it: Neptune's moon Triton (which, as explained above, is probably a captured KBO). As of 2008, only five objects in the Solar System (Ceres, Eris, and the KBOs Pluto, Makemake and Haumea) are listed as dwarf planets by the IAU. 90482 Orcus, 28978 Ixion and many other Kuiper-belt objects are large enough to be in hydrostatic equilibrium; most of them will probably qualify when more is known about them. The six largest TNOs (Eris, Pluto, , Makemake, Haumea and Quaoar) are all known to have satellites, and two have more than one. A higher percentage of the larger KBOs have satellites than the smaller objects in the Kuiper belt, suggesting that a different formation mechanism was responsible. There are also a high number of binaries (two objects close enough in mass to be orbiting "each other") in the Kuiper belt. The most notable example is the Pluto–Charon binary, but it is estimated that around 11% of KBOs exist in binaries. On January 19, 2006, the first spacecraft to explore the Kuiper belt, "New Horizons", was launched, which flew by Pluto on July 14, 2015. Beyond the Pluto flyby, the mission's goal was to locate and investigate other, farther objects in the Kuiper belt. On October 15, 2014, it was revealed that "Hubble" had uncovered three potential targets, provisionally designated PT1 ("potential target 1"), PT2 and PT3 by the "New Horizons" team. The objects' diameters were estimated to be in the 30–55 km range, too small to be seen by ground telescopes, and their distances from the Sun of 43–44 AU would put the encounters in the 2018–2019 period. The initial estimated probabilities that these objects were reachable within the "New Horizons" fuel budget were 100%, 7%, and 97%, respectively. All were members of the "cold" (low-inclination, low-eccentricity) classical Kuiper belt, and thus very different from Pluto.
PT1 (given the temporary designation "1110113Y" on the HST web site), the most favorably situated object, was magnitude 26.8, 30–45 km in diameter, and was to be encountered around January 2019. Once sufficient orbital information was provided, the Minor Planet Center gave official designations to the three target KBOs: (PT1), (PT2), and (PT3). By the fall of 2014, a possible fourth target, , had been eliminated by follow-up observations. PT2 was out of the running before the Pluto flyby. On August 26, 2015, the first target, , was chosen. Course adjustment took place in late October and early November 2015, leading to a flyby in January 2019. On July 1, 2016, NASA approved additional funding for "New Horizons" to visit the object. On December 2, 2015, "New Horizons" detected from away, and the photographs show the shape of the object and one or two details. No follow-up missions for "New Horizons" are planned, though at least two concepts for missions that would return to orbit or land on Pluto have been studied. Beyond Pluto, there exist many large KBOs that cannot be visited with "New Horizons", such as the dwarf planets Makemake and Haumea. New missions would be tasked to explore and study these objects in detail. Thales Alenia Space has studied the logistics of an orbiter mission to Haumea, a high-priority scientific target due to its status as the parent body of a collisional family that includes several other TNOs, as well as Haumea's ring and two moons. The lead author, Joel Poncy, has advocated for new technology that would allow spacecraft to reach and orbit KBOs in 10–20 years or less. "New Horizons" Principal Investigator Alan Stern has informally suggested missions that would fly by the planets Uranus or Neptune before visiting new KBO targets, thus furthering the exploration of the Kuiper belt while also visiting these ice giant planets for the first time since the "Voyager 2" flybys in the 1980s. Quaoar would make a particularly attractive flyby target for a probe tasked with exploring the interstellar medium, as it currently lies near the heliospheric nose; Pontus Brandt at Johns Hopkins Applied Physics Laboratory and his colleagues have studied a probe that would fly by Quaoar in the 2030s before continuing to the interstellar medium through the heliospheric nose. Quaoar is also an attractive target due to a likely disappearing methane atmosphere and cryovolcanism. The mission studied by Brandt and his colleagues would launch using SLS and achieve 30 km/s using a Jupiter flyby. Alternatively, for an orbiter mission, a study published in 2012 concluded that Ixion and Huya are among the most feasible targets. For instance, the authors calculated that an orbiter mission could reach Ixion after 17 years' cruise time if launched in 2039. By 2006, astronomers had resolved dust discs thought to be Kuiper belt-like structures around nine stars other than the Sun. They appear to fall into two categories: wide belts, with radii of over 50 AU, and narrow belts (tentatively like that of the Solar System) with radii of between 20 and 30 AU and relatively sharp boundaries. Beyond this, 15–20% of solar-type stars have an observed infrared excess that is suggestive of massive Kuiper-belt-like structures. Most known debris discs around other stars are fairly young, but the two images on the right, taken by the "Hubble Space Telescope" in January 2006, are old enough (roughly 300 million years) to have settled into stable configurations.
The left image is a "top view" of a wide belt, and the right image is an "edge view" of a narrow belt. Computer simulations of dust in the Kuiper belt suggest that when it was younger, it may have resembled the narrow rings seen around younger stars. King Arthur King Arthur is a legendary British leader who, according to medieval histories and romances, led the defence of Britain against Saxon invaders in the late 5th and early 6th centuries. The details of Arthur's story are mainly composed of folklore and literary invention, and his historical existence is debated and disputed by modern historians. The sparse historical background of Arthur is gleaned from various sources, including the "Annales Cambriae", the "Historia Brittonum", and the writings of Gildas. Arthur's name also occurs in early poetic sources such as "Y Gododdin". Arthur is a central figure in the legends making up the Matter of Britain. The legendary Arthur developed as a figure of international interest largely through the popularity of Geoffrey of Monmouth's fanciful and imaginative 12th-century "Historia Regum Britanniae" ("History of the Kings of Britain"). In some Welsh and Breton tales and poems that date from before this work, Arthur appears either as a great warrior defending Britain from human and supernatural enemies or as a magical figure of folklore, sometimes associated with the Welsh Otherworld, Annwn. How much of Geoffrey's "Historia" (completed in 1138) was adapted from such earlier sources, rather than invented by Geoffrey himself, is unknown. Although the themes, events and characters of the Arthurian legend varied widely from text to text, and there is no one canonical version, Geoffrey's version of events often served as the starting point for later stories. Geoffrey depicted Arthur as a king of Britain who defeated the Saxons and established an empire over Britain, Ireland, Iceland, Norway and Gaul. Many elements and incidents that are now an integral part of the Arthurian story appear in Geoffrey's "Historia", including Arthur's father Uther Pendragon, the wizard Merlin, Arthur's wife Guinevere, the sword Excalibur, Arthur's conception at Tintagel, his final battle against Mordred at Camlann, and final rest in Avalon. The 12th-century French writer Chrétien de Troyes, who added Lancelot and the Holy Grail to the story, began the genre of Arthurian romance that became a significant strand of medieval literature. In these French stories, the narrative focus often shifts from King Arthur himself to other characters, such as various Knights of the Round Table. Arthurian literature thrived during the Middle Ages but waned in the centuries that followed until it experienced a major resurgence in the 19th century. In the 21st century, the legend lives on, not only in literature but also in adaptations for theatre, film, television, comics and other media. The historical basis for the King Arthur legend has long been debated by scholars. One school of thought, citing entries in the "Historia Brittonum" ("History of the Britons") and "Annales Cambriae" ("Welsh Annals"), sees Arthur as a genuine historical figure, a Romano-British leader who fought against the invading Anglo-Saxons some time in the late 5th to early 6th century. The "Historia Brittonum", a 9th-century Latin historical compilation attributed in some late manuscripts to a Welsh cleric called Nennius, contains the first datable mention of King Arthur, listing twelve battles that Arthur fought. 
These culminate in the Battle of Badon, where he is said to have single-handedly killed 960 men. Recent studies, however, question the reliability of the "Historia Brittonum". The other text that seems to support the case for Arthur's historical existence is the 10th-century "Annales Cambriae", which also link Arthur with the Battle of Badon. The "Annales" date this battle to 516–518, and also mention the Battle of Camlann, in which Arthur and Medraut (Mordred) were both killed, dated to 537–539. These details have often been used to bolster confidence in the "Historia"'s account and to confirm that Arthur really did fight at Badon. Problems have been identified, however, with using this source to support the "Historia Brittonum"'s account. The latest research shows that the "Annales Cambriae" was based on a chronicle begun in the late 8th century in Wales. Additionally, the complex textual history of the "Annales Cambriae" precludes any certainty that the Arthurian annals were added to it even that early. They were more likely added at some point in the 10th century and may never have existed in any earlier set of annals. The Badon entry probably derived from the "Historia Brittonum". This lack of convincing early evidence is the reason many recent historians exclude Arthur from their accounts of sub-Roman Britain. In the view of historian Thomas Charles-Edwards, "at this stage of the enquiry, one can only say that there may well have been an historical Arthur [but ...] the historian can as yet say nothing of value about him". These modern admissions of ignorance are a relatively recent trend; earlier generations of historians were less sceptical. The historian John Morris made the putative reign of Arthur the organising principle of his history of sub-Roman Britain and Ireland, "The Age of Arthur" (1973). Even so, he found little to say about an historical Arthur. Partly in reaction to such theories, another school of thought emerged which argued that Arthur had no historical existence at all. Morris's "Age of Arthur" prompted the archaeologist Nowell Myres to observe that "no figure on the borderline of history and mythology has wasted more of the historian's time". Gildas' 6th-century polemic "De Excidio et Conquestu Britanniae" ("On the Ruin and Conquest of Britain"), written within living memory of Badon, mentions the battle but does not mention Arthur. Arthur is not mentioned in the "Anglo-Saxon Chronicle" or named in any surviving manuscript written between 400 and 820. He is absent from Bede's early-8th-century "Ecclesiastical History of the English People", another major early source for post-Roman history that mentions Badon. The historian David Dumville has written: "I think we can dispose of him [Arthur] quite briefly. He owes his place in our history books to a 'no smoke without fire' school of thought ... The fact of the matter is that there is no historical evidence about Arthur; we must reject him from our histories and, above all, from the titles of our books." Some scholars argue that Arthur was originally a fictional hero of folklore—or even a half-forgotten Celtic deity—who became credited with real deeds in the distant past. They cite parallels with figures such as the Kentish Hengist and Horsa, who may be totemic horse-gods that later became historicised. Bede ascribed to these legendary figures a historical role in the 5th-century Anglo-Saxon conquest of eastern Britain. It is not even certain that Arthur was considered a king in the early texts. 
Neither the "Historia" nor the "Annales" calls him "rex": the former calls him instead "dux bellorum" (leader of battles) and "miles" (soldier). Historical documents for the post-Roman period are scarce, so a definitive answer to the question of Arthur's historical existence is unlikely. Sites and places have been identified as "Arthurian" since the 12th century, but archaeology can confidently reveal names only through inscriptions found in secure contexts. The so-called "Arthur stone", discovered in 1998 among the ruins at Tintagel Castle in Cornwall in securely dated 6th-century contexts, created a brief stir but proved irrelevant. Other inscriptional evidence for Arthur, including the Glastonbury cross, is tainted with the suggestion of forgery. Although several historical figures have been proposed as the basis for Arthur, no convincing evidence for these identifications has emerged. The origin of the Welsh name "Arthur" remains a matter of debate. The most widely accepted etymology derives it from the Roman "nomen gentile" (family name) Artorius. Artorius itself is of obscure and contested etymology, but possibly of Messapian or Etruscan origin. According to the linguist and Celticist Stephan Zimmer, it is also possible that Arthur is derived from a Latinized Late Brittonic "*Artorījos", in turn derived from a hypothetical Celtic patronym "*Arto-rīg-ios", meaning "Son of the Bear/Warrior/Hero-King" (the root of which, "*arto-rīg-", "bear/warrior/hero-king", is to be found in the Old Irish personal name "Artrí"). According to Zimmer's etymology, the Celtic short compositional vowel -o- was lengthened and the long -ī- in the second element of the compound "-rījos" was shortened by Latin speakers, under the influence of Latin agent nouns ending in "-tōr" (and their derivatives in "-tōrius"). Some scholars have suggested it is relevant to this debate that the legendary King Arthur's name only appears as "Arthur", or "Arturus", in early Latin Arthurian texts, never as "Artōrius" (though Classical Latin Artōrius became Arturius in some Vulgar Latin dialects). However, this may not say anything about the origin of the name "Arthur", as "Artōrius" would regularly become "Art(h)ur" when borrowed into Welsh. Another commonly proposed derivation of "Arthur" from Welsh "arth" "bear" + "(g)wr" "man" (earlier "*Arto-uiros" in Brittonic) is not accepted by modern scholars for phonological and orthographic reasons. Notably, a Brittonic compound name "*Arto-uiros" should produce Old Welsh "*Artgur" (where "u" represents the short vowel /u/) and Middle/Modern Welsh "*Arthwr", rather than "Arthur" (where "u" is a long vowel /ʉː/). In Welsh poetry the name is always spelled "Arthur" and is exclusively rhymed with words ending in "-ur"—never words ending in "-wr"—which confirms that the second element cannot be "[g]wr" "man". An alternative theory, which has gained only limited acceptance among professional scholars, derives the name Arthur from Arcturus, the brightest star in the constellation Boötes, near Ursa Major or the Great Bear. Classical Latin "Arcturus" would also have become "Art(h)ur" when borrowed into Welsh, and its brightness and position in the sky led people to regard it as the "guardian of the bear" (which is the meaning of the name in Ancient Greek) and the "leader" of the other stars in Boötes. A similar first name is Old Irish Artúr, which is believed to be derived directly from an early Old Welsh or Cumbric "Artur". 
The earliest historically attested bearer of the name is a son or grandson of Áedán mac Gabráin (d. 609). The creator of the familiar literary persona of Arthur was Geoffrey of Monmouth, with his pseudo-historical "Historia Regum Britanniae" ("History of the Kings of Britain"), written in the 1130s. The textual sources for Arthur are usually divided into those written before Geoffrey's "Historia" (known as pre-Galfridian texts, from the Latin form of Geoffrey, "Galfridus") and those written afterwards, which could not avoid his influence (Galfridian, or post-Galfridian, texts). The earliest literary references to Arthur come from Welsh and Breton sources. There have been few attempts to define the nature and character of Arthur in the pre-Galfridian tradition as a whole, rather than in a single text or text/story-type. A 2007 academic survey by Caitlin Green, which does attempt this, identifies three key strands to the portrayal of Arthur in this earliest material. The first is that he was a peerless warrior who functioned as the monster-hunting protector of Britain from all internal and external threats. Some of these are human threats, such as the Saxons he fights in the "Historia Brittonum", but the majority are supernatural, including giant cat-monsters, destructive divine boars, dragons, dogheads, giants, and witches. The second is that the pre-Galfridian Arthur was a figure of folklore (particularly topographic or onomastic folklore) and localised magical wonder-tales, the leader of a band of superhuman heroes who live in the wilds of the landscape. The third and final strand is that the early Welsh Arthur had a close connection with the Welsh Otherworld, Annwn. On the one hand, he launches assaults on Otherworldly fortresses in search of treasure and frees their prisoners. On the other, his warband in the earliest sources includes former pagan gods, and his wife and his possessions are clearly Otherworldly in origin. One of the most famous Welsh poetic references to Arthur comes in the collection of heroic death-songs known as "Y Gododdin" ("The Gododdin"), attributed to the 6th-century poet Aneirin. One stanza praises the bravery of a warrior who slew 300 enemies, but says that despite this, "he was no Arthur" – that is, his feats cannot compare to the valour of Arthur. "Y Gododdin" is known only from a 13th-century manuscript, so it is impossible to determine whether this passage is original or a later interpolation, but John Koch's view that the passage dates from a 7th-century or earlier version is regarded as unproven; 9th- or 10th-century dates are often proposed for it. Several poems attributed to Taliesin, a poet said to have lived in the 6th century, also refer to Arthur, although these all probably date from between the 8th and 12th centuries. They include "Kadeir Teyrnon" ("The Chair of the Prince"), which refers to "Arthur the Blessed"; "Preiddeu Annwn" ("The Spoils of Annwn"), which recounts an expedition of Arthur to the Otherworld; and "Marwnat vthyr pen[dragon]" ("The Elegy of Uther Pen[dragon]"), which refers to Arthur's valour and is suggestive of a father-son relationship for Arthur and Uther that pre-dates Geoffrey of Monmouth. Other early Welsh Arthurian texts include a poem found in the "Black Book of Carmarthen", "Pa gur yv y porthaur?" ("What man is the gatekeeper?"). 
This takes the form of a dialogue between Arthur and the gatekeeper of a fortress he wishes to enter, in which Arthur recounts the names and deeds of himself and his men, notably Cei (Kay) and Bedwyr (Bedivere). The Welsh prose tale "Culhwch and Olwen", included in the modern "Mabinogion" collection, has a much longer list of more than 200 of Arthur's men, though Cei and Bedwyr again take a central place. The story as a whole tells of Arthur helping his kinsman Culhwch win the hand of Olwen, daughter of Ysbaddaden Chief-Giant, by completing a series of apparently impossible tasks, including the hunt for the great semi-divine boar Twrch Trwyth. The 9th-century "Historia Brittonum" also refers to this tale, with the boar there named Troy(n)t. Finally, Arthur is mentioned numerous times in the "Welsh Triads", a collection of short summaries of Welsh tradition and legend which are classified into groups of three linked characters or episodes to assist recall. The later manuscripts of the Triads are partly derivative of Geoffrey of Monmouth and later continental traditions, but the earliest ones show no such influence and are usually agreed to refer to pre-existing Welsh traditions. Even in these, however, Arthur's court has started to embody legendary Britain as a whole, with "Arthur's Court" sometimes substituted for "The Island of Britain" in the formula "Three XXX of the Island of Britain". While it is not clear from the "Historia Brittonum" and the "Annales Cambriae" that Arthur was even considered a king, by the time "Culhwch and Olwen" and the Triads were written he had become "Penteyrnedd yr Ynys hon", "Chief of the Lords of this Island", the overlord of Wales, Cornwall and the North. In addition to these pre-Galfridian Welsh poems and tales, Arthur appears in some other early Latin texts besides the "Historia Brittonum" and the "Annales Cambriae". In particular, Arthur features in a number of well-known "vitae" ("Lives") of post-Roman saints, none of which are now generally considered to be reliable historical sources (the earliest probably dates from the 11th century). According to the "Life of Saint Gildas", written in the early 12th century by Caradoc of Llancarfan, Arthur is said to have killed Gildas' brother Hueil and to have rescued his wife Gwenhwyfar from Glastonbury. In the "Life of Saint Cadoc", written around 1100 or a little before by Lifris of Llancarfan, the saint gives protection to a man who killed three of Arthur's soldiers, and Arthur demands a herd of cattle as "wergeld" for his men. Cadoc delivers them as demanded, but when Arthur takes possession of the animals, they turn into bundles of ferns. Similar incidents are described in the medieval biographies of Carannog, Padarn, and Eufflam, probably written around the 12th century. A less obviously legendary account of Arthur appears in the "Legenda Sancti Goeznovii", which is often claimed to date from the early 11th century (although the earliest manuscript of this text dates from the 15th century and the text is now dated to the late 12th to early 13th century). Also important are the references to Arthur in William of Malmesbury's "De Gestis Regum Anglorum" and Herman's "De Miraculis Sanctae Mariae Laudensis", which together provide the first certain evidence for a belief that Arthur was not actually dead and would at some point return, a theme that is often revisited in post-Galfridian folklore. 
The first narrative account of Arthur's life is found in Geoffrey of Monmouth's Latin work "Historia Regum Britanniae" ("History of the Kings of Britain"), completed in 1138. This work is an imaginative and fanciful account of British kings from the legendary Trojan exile Brutus to the 7th-century Welsh king Cadwallader. Geoffrey places Arthur in the same post-Roman period as do the "Historia Brittonum" and the "Annales Cambriae". He incorporates Arthur's father, Uther Pendragon, his magician advisor Merlin, and the story of Arthur's conception, in which Uther, disguised as his enemy Gorlois by Merlin's magic, sleeps with Gorlois's wife Igerna (Igraine) at Tintagel, and she conceives Arthur. On Uther's death, the fifteen-year-old Arthur succeeds him as King of Britain and fights a series of battles, similar to those in the "Historia Brittonum", culminating in the Battle of Bath. He then defeats the Picts and Scots before creating an Arthurian empire through his conquests of Ireland, Iceland and the Orkney Islands. After twelve years of peace, Arthur sets out to expand his empire once more, taking control of Norway, Denmark and Gaul. Gaul is still held by the Roman Empire when it is conquered, and Arthur's victory naturally leads to a further confrontation between his empire and Rome's. Arthur and his warriors, including Kaius (Kay), Beduerus (Bedivere) and Gualguanus (Gawain), defeat the Roman emperor Lucius Tiberius in Gaul but, as he prepares to march on Rome, Arthur hears that his nephew Modredus (Mordred)—whom he had left in charge of Britain—has married his wife Guenhuuara (Guinevere) and seized the throne. Arthur returns to Britain and defeats and kills Modredus on the river Camblam in Cornwall, but he is mortally wounded. He hands the crown to his kinsman Constantine and is taken to the isle of Avalon to be healed of his wounds, never to be seen again. How much of this narrative was Geoffrey's own invention is open to debate. Certainly, Geoffrey seems to have made use of the list of Arthur's twelve battles against the Saxons found in the 9th-century "Historia Brittonum", along with the battle of Camlann from the "Annales Cambriae" and the idea that Arthur was still alive. Arthur's personal status as the king of all Britain would also seem to be borrowed from pre-Galfridian tradition, being found in "Culhwch and Olwen", the "Triads", and the saints' lives. Finally, Geoffrey borrowed many of the names for Arthur's possessions, close family, and companions from the pre-Galfridian Welsh tradition, including Kaius (Cei), Beduerus (Bedwyr), Guenhuuara (Gwenhwyfar), Uther (Uthyr) and perhaps also Caliburnus (Caledfwlch), the latter becoming Excalibur in subsequent Arthurian tales. However, while names, key events, and titles may have been borrowed, Brynley Roberts has argued that "the Arthurian section is Geoffrey's literary creation and it owes nothing to prior narrative." So, for instance, the Welsh Medraut is made the villainous Modredus by Geoffrey, but there is no trace of such a negative character for this figure in Welsh sources until the 16th century. There have been relatively few modern attempts to challenge this notion that the "Historia Regum Britanniae" is primarily Geoffrey's own work, with scholarly opinion often echoing William of Newburgh's late-12th-century comment that Geoffrey "made up" his narrative, perhaps through an "inordinate love of lying". 
Geoffrey Ashe is one dissenter from this view, believing that Geoffrey's narrative is partially derived from a lost source telling of the deeds of a 5th-century British king named Riotamus, this figure being the original Arthur, although historians and Celticists have been reluctant to follow Ashe in his conclusions. Whatever his sources may have been, the immense popularity of Geoffrey's "Historia Regum Britanniae" cannot be denied. Well over 200 manuscript copies of Geoffrey's Latin work are known to have survived, and this does not include translations into other languages. Thus, for example, around 60 manuscripts are extant containing Welsh-language versions of the "Historia", the earliest of which were created in the 13th century; the old notion that some of these Welsh versions actually underlie Geoffrey's "Historia", advanced by antiquarians such as the 18th-century Lewis Morris, has long since been discounted in academic circles. As a result of this popularity, Geoffrey's "Historia Regum Britanniae" was enormously influential on the later medieval development of the Arthurian legend. While it was by no means the only creative force behind Arthurian romance, many of its elements were borrowed and developed (e.g., Merlin and the final fate of Arthur), and it provided the historical framework into which the romancers' tales of magical and wonderful adventures were inserted. The popularity of Geoffrey's "Historia" and its other derivative works (such as Wace's "Roman de Brut") is generally agreed to be an important factor in explaining the appearance of significant numbers of new Arthurian works in continental Europe during the 12th and 13th centuries, particularly in France. It was not, however, the only Arthurian influence on the developing "Matter of Britain". There is clear evidence that Arthur and Arthurian tales were familiar on the Continent before Geoffrey's work became widely known (see for example, the Modena Archivolt), and "Celtic" names and stories not found in Geoffrey's "Historia" appear in the Arthurian romances. From the perspective of Arthur, perhaps the most significant effect of this great outpouring of new Arthurian story was on the role of the king himself: much of this 12th-century and later Arthurian literature centres less on Arthur himself than on characters such as Lancelot and Guinevere, Percival, Galahad, Gawain, Ywain, and Tristan and Iseult. Whereas Arthur is very much at the centre of the pre-Galfridian material and Geoffrey's "Historia" itself, in the romances he is rapidly sidelined. His character also alters significantly. In both the earliest materials and Geoffrey he is a great and ferocious warrior, who laughs as he personally slaughters witches and giants and takes a leading role in all military campaigns, whereas in the continental romances he becomes the "roi fainéant", the "do-nothing king", whose "inactivity and acquiescence constituted a central flaw in his otherwise ideal society". Arthur's role in these works is frequently that of a wise, dignified, even-tempered, somewhat bland, and occasionally feeble monarch. So, he simply turns pale and silent when he learns of Lancelot's affair with Guinevere in the "Mort Artu", whilst in Chrétien de Troyes's "Yvain, the Knight of the Lion", he is unable to stay awake after a feast and has to retire for a nap. Nonetheless, as Norris J. 
Lacy has observed, whatever his faults and frailties may be in these Arthurian romances, "his prestige is never—or almost never—compromised by his personal weaknesses ... his authority and glory remain intact." Arthur and his retinue appear in some of the "Lais" of Marie de France, but it was the work of another French poet, Chrétien de Troyes, that had the greatest influence with regard to the development of Arthur's character and legend. Chrétien wrote five Arthurian romances between about 1170 and 1190. "Erec and Enide" and "Cligès" are tales of courtly love with Arthur's court as their backdrop, demonstrating the shift away from the heroic world of the Welsh and Galfridian Arthur, while "Yvain, the Knight of the Lion" features Yvain and Gawain in a supernatural adventure, with Arthur very much on the sidelines and weakened. However, the most significant for the development of the Arthurian legend are "Lancelot, the Knight of the Cart", which introduces Lancelot and his adulterous relationship with Arthur's queen Guinevere, extending and popularising the recurring theme of Arthur as a cuckold, and "Perceval, the Story of the Grail", which introduces the Holy Grail and the Fisher King and which again sees Arthur having a much reduced role. Chrétien was thus "instrumental both in the elaboration of the Arthurian legend and in the establishment of the ideal form for the diffusion of that legend", and much of what came after him in terms of the portrayal of Arthur and his world built upon the foundations he had laid. "Perceval", although unfinished, was particularly popular: four separate continuations of the poem appeared over the next half century, with the notion of the Grail and its quest being developed by other writers such as Robert de Boron, a fact that helped accelerate the decline of Arthur in continental romance. Similarly, Lancelot and his cuckolding of Arthur with Guinevere became one of the classic motifs of the Arthurian legend, although the Lancelot of the prose "Lancelot" and later texts was a combination of Chrétien's character and that of Ulrich von Zatzikhoven's "Lanzelet". Chrétien's work even appears to feed back into Welsh Arthurian literature, with the result that the romance Arthur began to replace the heroic, active Arthur in Welsh literary tradition. Particularly significant in this development were the three Welsh Arthurian romances, which are closely similar to those of Chrétien, albeit with some significant differences: "Owain, or the Lady of the Fountain" is related to Chrétien's "Yvain"; "Geraint and Enid", to "Erec and Enide"; and "Peredur son of Efrawg", to "Perceval". Up to the early 13th century, continental Arthurian romance was expressed primarily through poetry; after this the tales began to be told in prose. The most significant of these 13th-century prose romances was the Vulgate Cycle (also known as the Lancelot-Grail Cycle), a series of five Middle French prose works written in the first half of that century. These works were the "Estoire del Saint Grail", the "Estoire de Merlin", the "Lancelot propre" (or Prose "Lancelot", which made up half the entire Vulgate Cycle on its own), the "Queste del Saint Graal" and the "Mort Artu", which combine to form the first coherent version of the entire Arthurian legend. The cycle continued the trend towards reducing the role played by Arthur in his own legend, partly through the introduction of the character of Galahad and an expansion of the role of Merlin. 
It also made Mordred the result of an incestuous relationship between Arthur and his sister Morgause and established the role of Camelot, first mentioned in passing in Chrétien's "Lancelot", as Arthur's primary court. This series of texts was quickly followed by the Post-Vulgate Cycle, of which the "Suite du Merlin" is a part, which greatly reduced the importance of Lancelot's affair with Guinevere but continued to sideline Arthur, and to focus more on the Grail quest. As such, Arthur became even more of a relatively minor character in these French prose romances; in the Vulgate itself he only figures significantly in the "Estoire de Merlin" and the "Mort Artu". During this period, Arthur was made one of the Nine Worthies, a group of three pagan, three Jewish and three Christian exemplars of chivalry. The Worthies were first listed in Jacques de Longuyon's "Voeux du Paon" in 1312, and subsequently became a common subject in literature and art. The development of the medieval Arthurian cycle and the character of the "Arthur of romance" culminated in "Le Morte d'Arthur", Thomas Malory's retelling of the entire legend in a single work in English in the late 15th century. Malory based his book—originally titled "The Whole Book of King Arthur and of His Noble Knights of the Round Table"—on the various previous romance versions, in particular the Vulgate Cycle, and appears to have aimed at creating a comprehensive and authoritative collection of Arthurian stories. Perhaps as a result of this, and the fact that "Le Morte d'Arthur" was one of the earliest printed books in England, published by William Caxton in 1485, most later Arthurian works are derivative of Malory's. The end of the Middle Ages brought with it a waning of interest in King Arthur. Although Malory's English version of the great French romances was popular, there were increasing attacks upon the truthfulness of the historical framework of the Arthurian romances – established since Geoffrey of Monmouth's time – and thus the legitimacy of the whole Matter of Britain. So, for example, the 16th-century humanist scholar Polydore Vergil famously rejected the claim that Arthur was the ruler of a post-Roman empire, found throughout the post-Galfridian medieval "chronicle tradition", to the horror of Welsh and English antiquarians. Social changes associated with the end of the medieval period and the Renaissance also conspired to rob the character of Arthur and his associated legend of some of their power to enthrall audiences, with the result that 1634 saw the last printing of Malory's "Le Morte d'Arthur" for nearly 200 years. King Arthur and the Arthurian legend were not entirely abandoned, but until the early 19th century the material was taken less seriously and was often used simply as a vehicle for allegories of 17th- and 18th-century politics. Thus Richard Blackmore's epics "Prince Arthur" (1695) and "King Arthur" (1697) feature Arthur as an allegory for the struggles of William III against James II. Similarly, the most popular Arthurian tale throughout this period seems to have been that of Tom Thumb, which was told first through chapbooks and later through the political plays of Henry Fielding; although the action is clearly set in Arthurian Britain, the treatment is humorous and Arthur appears as a primarily comedic version of his romance character. John Dryden's masque "King Arthur" is still performed, largely thanks to Henry Purcell's music, though seldom unabridged. 
In the early 19th century, medievalism, Romanticism, and the Gothic Revival reawakened interest in Arthur and the medieval romances. A new code of ethics for 19th-century gentlemen was shaped around the chivalric ideals embodied in the "Arthur of romance". This renewed interest first made itself felt in 1816, when Malory's "Le Morte d'Arthur" was reprinted for the first time since 1634. Initially, the medieval Arthurian legends were of particular interest to poets, inspiring, for example, William Wordsworth to write "The Egyptian Maid" (1835), an allegory of the Holy Grail. Pre-eminent among these was Alfred Tennyson, whose first Arthurian poem "The Lady of Shalott" was published in 1832. Arthur himself played a minor role in some of these works, following in the medieval romance tradition. Tennyson's Arthurian work reached its peak of popularity with "Idylls of the King", however, which reworked the entire narrative of Arthur's life for the Victorian era. It was first published in 1859 and sold 10,000 copies within the first week. In the "Idylls", Arthur became a symbol of ideal manhood who ultimately failed, through human weakness, to establish a perfect kingdom on earth. Tennyson's works prompted a large number of imitators, generated considerable public interest in the legends of Arthur and the character himself, and brought Malory's tales to a wider audience. Indeed, the first modernisation of Malory's great compilation of Arthur's tales was published in 1862, shortly after "Idylls" appeared, and there were six further editions and five competitors before the century ended. This interest in the "Arthur of romance" and his associated stories continued through the 19th century and into the 20th, and influenced poets such as William Morris and Pre-Raphaelite artists including Edward Burne-Jones. Even the humorous tale of Tom Thumb, which had been the primary manifestation of Arthur's legend in the 18th century, was rewritten after the publication of "Idylls". While Tom maintained his small stature and remained a figure of comic relief, his story now included more elements from the medieval Arthurian romances and Arthur is treated more seriously and historically in these new versions. The revived Arthurian romance also proved influential in the United States, with such books as Sidney Lanier's "The Boy's King Arthur" (1880) reaching wide audiences and providing inspiration for Mark Twain's satiric "A Connecticut Yankee in King Arthur's Court" (1889). Although the 'Arthur of romance' was sometimes central to these new Arthurian works (as he was in Burne-Jones's "The Sleep of Arthur in Avalon", 1881-1898), on other occasions he reverted to his medieval status and is either marginalized or even missing entirely, with Wagner's Arthurian operas providing a notable instance of the latter. Furthermore, the revival of interest in Arthur and the Arthurian tales did not continue unabated. By the end of the 19th century, it was confined mainly to Pre-Raphaelite imitators, and it could not avoid being affected by World War I, which damaged the reputation of chivalry and thus interest in its medieval manifestations and Arthur as chivalric role model. The romance tradition did, however, remain sufficiently powerful to persuade Thomas Hardy, Laurence Binyon and John Masefield to compose Arthurian plays, and T. S. Eliot alludes to the Arthur myth (but not Arthur) in his poem "The Waste Land", which mentions the Fisher King. 
In the latter half of the 20th century, the influence of the romance tradition of Arthur continued, through novels such as T. H. White's "The Once and Future King" (1958) and Marion Zimmer Bradley's "The Mists of Avalon" (1982) in addition to comic strips such as "Prince Valiant" (from 1937 onward). Tennyson had reworked the romance tales of Arthur to suit and comment upon the issues of his day, and the same is often the case with modern treatments too. Bradley's tale, for example, takes a feminist approach to Arthur and his legend, in contrast to the narratives of Arthur found in medieval materials, and American authors often rework the story of Arthur to be more consistent with values such as equality and democracy. In John Cowper Powys's "Porius" (1951), set in Wales in 499, just prior to the Saxon invasion, Arthur, the Emperor of Britain, is only a minor character, whereas Myrddin (Merlin) and Nineue, Tennyson's Vivien, are major figures. Myrddin's disappearance at the end of the novel is "in the tradition of magical hibernation when the king or mage leaves his people for some island or cave to return either at a more propitious or more dangerous time" (see King Arthur's messianic return). Powys's earlier novel "A Glastonbury Romance" (1932) is also concerned with both the Holy Grail and the legend that Arthur is buried in the town of Glastonbury. The romance Arthur has become popular in film and theatre as well. T. H. White's novel was adapted into the Lerner and Loewe stage musical "Camelot" (1960) and Walt Disney's animated film "The Sword in the Stone" (1963); "Camelot", with its focus on the love of Lancelot and Guinevere and the cuckolding of Arthur, was itself made into a film of the same name in 1967. The romance tradition of Arthur is particularly evident and, according to critics, successfully handled in Robert Bresson's "Lancelot du Lac" (1974), Éric Rohmer's "Perceval le Gallois" (1978) and perhaps John Boorman's fantasy film "Excalibur" (1981); it is also the main source of the material used in the Arthurian spoof "Monty Python and the Holy Grail" (1975). Re-tellings and re-imaginings of the romance tradition are not the only important aspect of the modern legend of King Arthur. Attempts to portray Arthur as a genuine historical figure of the late 5th or early 6th century, stripping away the "romance", have also emerged. As Taylor and Brewer have noted, this return to the medieval "chronicle tradition" of Geoffrey of Monmouth and the "Historia Brittonum" is a recent trend which became dominant in Arthurian literature in the years following the outbreak of the Second World War, when Arthur's legendary resistance to Germanic invaders struck a chord in Britain. Clemence Dane's series of radio plays, "The Saviours" (1942), used a historical Arthur to embody the spirit of heroic resistance against desperate odds, and Robert Sherriff's play "The Long Sunset" (1955) saw Arthur rallying Romano-British resistance against the Germanic invaders. This trend towards placing Arthur in a historical setting is also apparent in historical and fantasy novels published during this period. In recent years the portrayal of Arthur as a real hero of the 5th century has also made its way into film versions of the Arthurian legend, most notably the TV series "Arthur of the Britons" (1972–73), "Merlin" (2008–12), "The Legend of King Arthur" (1979), and "Camelot" (2011) and the feature films "King Arthur" (2004), "The Last Legion" (2007) and "King Arthur: Legend of the Sword" (2017). Arthur has also been used as a model for modern-day behaviour. 
In the 1930s, the Order of the Fellowship of the Knights of the Round Table was formed in Britain to promote Christian ideals and Arthurian notions of medieval chivalry. In the United States, hundreds of thousands of boys and girls joined Arthurian youth groups, such as the Knights of King Arthur, in which Arthur and his legends were promoted as wholesome exemplars. However, Arthur's diffusion within modern culture goes beyond such obviously Arthurian endeavours, with Arthurian names being regularly attached to objects, buildings, and places. As Norris J. Lacy has observed, "The popular notion of Arthur appears to be limited, not surprisingly, to a few motifs and names, but there can be no doubt of the extent to which a legend born many centuries ago is profoundly embedded in modern culture at every level." Kenesaw Mountain Landis Kenesaw Mountain Landis (November 20, 1866 – November 25, 1944) was an American jurist who served as a federal judge from 1905 to 1922 and as the first Commissioner of Baseball from 1920 until his death. He is remembered for his handling of the Black Sox scandal, in which he expelled eight members of the Chicago White Sox from organized baseball for conspiring to lose the 1919 World Series and repeatedly refused their reinstatement requests. His firm actions and iron rule over baseball in the near quarter-century of his commissionership are generally credited with restoring public confidence in the game. Landis was born in Millville, Ohio, in 1866. His name was a spelling variation on the Battle of Kennesaw Mountain in the American Civil War, where his father was wounded in 1864. Landis spent much of his youth in Indiana; he left school at fifteen and worked in a series of positions in that state. His involvement in politics led to a civil service job. At age 21, Landis applied to become a lawyer—there were then no educational or examination requirements for the Indiana bar. Following a year of unprofitable practice, he went to law school. After his graduation, he opened an office in Chicago, but left it when Walter Q. Gresham, the new United States Secretary of State, named him his personal secretary in 1893. After Gresham's death in 1895, Landis refused an offer of an ambassadorship, and returned to Chicago to practice law and marry. President Theodore Roosevelt appointed Landis a federal judge in 1905. Landis received national attention in 1907 when he fined Standard Oil of Indiana more than $29 million for violating federal laws forbidding rebates on railroad freight tariffs. Though Landis was reversed on appeal, he was seen as a judge determined to rein in big business. During and after World War I, Landis presided over several high-profile trials of draft resisters and others whom he saw as opposing the war effort. He imposed heavy sentences on those who were convicted; some of the convictions were reversed on appeal, and other sentences were commuted. In 1920, Judge Landis was a leading candidate when American League and National League team owners, embarrassed by the Black Sox scandal and other instances of players throwing games, sought someone to rule over baseball. Landis was given full power to act in the sport's best interest, and used that power extensively over the next quarter-century. Landis was widely praised for cleaning up the game, although some of his decisions in the Black Sox matter remain controversial: supporters of "Shoeless Joe" Jackson and Buck Weaver contend that he was overly harsh with those players. 
Others blame Landis for, in their view, delaying the racial integration of baseball. Landis was elected to the National Baseball Hall of Fame by a special vote shortly after he died in 1944. Kenesaw Mountain Landis was born in Millville, Ohio, the sixth child and fourth son of Abraham Hoch Landis, a physician, and Mary Kumler Landis, on November 20, 1866. The Landises descended from Swiss Mennonites who had emigrated to Alsace before coming to the United States. Abraham Landis had been wounded fighting on the Union side at the Battle of Kennesaw Mountain in Georgia, and when his parents proved unable to agree on a name for the new baby, Mary Landis proposed that they call him Kenesaw Mountain. At the time, both spellings of "Kenesaw" were used, but in the course of time, "Kennesaw Mountain" became the accepted spelling of the battle site. Abraham Landis worked in Millville as a country physician. When Kenesaw was eight, the elder Landis moved his family to Delphi, Indiana and subsequently to Logansport, Indiana where the doctor purchased and ran several local farms—his war injury had caused him to scale back his medical practice. Two of Kenesaw's four brothers, Charles Beary Landis and Frederick Landis, became members of Congress. As "Kenny", as he was sometimes known, grew, he did an increasing share of the farm work, later stating, "I did my share—and it was a substantial share—in taking care of the 13 acres ... I do not remember that I particularly liked to get up at 3:30 in the morning." Kenesaw began his off-farm career at age ten as a news delivery boy. He left school at 15 after an unsuccessful attempt to master algebra; he then worked at the local general store. He left that job for a position as errand boy with the Vandalia Railroad. Landis applied for a job as a brakeman, but was laughingly dismissed as too small. He then worked for the Logansport "Journal", and taught himself shorthand reporting, becoming in 1883 official court reporter for the Cass County Circuit Court. Landis later wrote, "I may not have been much of a judge, nor baseball official, but I do pride myself on having been a real shorthand reporter." He served in that capacity until 1886. In his spare time, he became a prize-winning bicycle racer and played on and managed a baseball team. Offered a professional contract as a ballplayer, he turned it down, stating that he preferred to play for the love of the game. In 1886, Landis first ventured into Republican Party politics, supporting a friend, Charles F. Griffin, for Indiana Secretary of State. Griffin won, and Landis was rewarded with a civil service job in the Indiana Department of State. While employed there, he applied to be an attorney. At that time, in Indiana, an applicant needed only to prove that he was 21 and of good moral character, and Landis was admitted. Landis opened a practice in Marion, Indiana but attracted few clients in his year of work there. Realizing that an uneducated lawyer was unlikely to build a lucrative practice, Landis enrolled at Cincinnati's YMCA Law School (now part of Northern Kentucky University) in 1889. Landis transferred to Union Law School (now part of Northwestern University) the following year, and in 1891, he took his law degree from Union and was admitted to the Illinois Bar. He began a practice in Chicago, served as an assistant instructor at Union and with fellow attorney Clarence Darrow helped found the nonpartisan Chicago Civic Centre Club, devoted to municipal reform. Landis practiced with college friend Frank O. 
Lowden; the future commissioner and his law partner went into debt to impress potential clients, buying a law library secondhand. In March 1893, President Grover Cleveland appointed federal judge Walter Q. Gresham as his Secretary of State, and Gresham hired Landis as his personal secretary. Gresham had a long career as a political appointee in the latter part of the 19th century; though he lost his only two bids for elective office, he served in three Cabinet positions and was twice a dark horse candidate for the Republican presidential nomination. Although Gresham was a Republican, he had supported Cleveland (a Democrat) in the 1892 election because of his intense dislike for the Republican nominee, President Benjamin Harrison. Kenesaw Landis had appeared before Judge Gresham in court. According to Landis biographer J.G. Taylor Spink, Gresham thought Landis "had something on the ball" and believed that Landis's shorthand skills would be of use. In Washington, Landis worked hard to protect Gresham's interests in the State Department, making friends with many members of the press. He was less popular among many of the Department's senior career officials, who saw him as brash. When word leaked concerning President Cleveland's Hawaiian policy, the President was convinced Landis was the source of the information and demanded his dismissal. Gresham defended Landis, stating that Cleveland would have to fire both of them, and the President relented, later finding out that he was mistaken in accusing Landis. President Cleveland grew to like Landis, and when Gresham died in 1895, offered Landis the post of United States Ambassador to Venezuela. Landis declined the diplomatic post, preferring to return to Chicago to begin a law practice and to marry Winifred Reed, daughter of the Ottawa, Illinois postmaster. The two married July 25, 1895; they had two surviving children, a boy, Reed, and a girl, Susanne—a third child, Winifred, died almost immediately after being born. Landis built a corporate law practice in Chicago; with the practice doing well, he deeply involved himself in Republican Party politics. He built a close association with his friend Lowden and served as his campaign manager for governor of Illinois in 1904. Lowden was defeated, but would later serve two terms in the office and be a major contender for the 1920 Republican presidential nomination. A seat on the United States District Court for the Northern District of Illinois was vacant; President Theodore Roosevelt offered it to Lowden, who declined it and recommended Landis. Other recommendations from Illinois politicians followed, and Roosevelt nominated Landis for the seat. According to Spink, President Roosevelt wanted "a tough judge and a man sympathetic with his viewpoint in that important court"; Lowden and Landis were, like Roosevelt, on the progressive left of the Republican Party. On March 18, 1905, Roosevelt transmitted the nomination to the Senate, which confirmed Landis the same afternoon, without any committee hearing. Landis's courtroom, room 627 in the Chicago Federal Building, was ornate and featured two murals; one of King John conceding Magna Carta, the other of Moses about to smash the tablets of the Ten Commandments. The mahogany and marble chamber was, according to Landis biographer David Pietrusza, "just the spot for Landis's sense of the theatrical. In it he would hold court for nearly the next decade and a half." 
According to Spink, "It wasn't long before Chicago writers discovered they had a 'character' on the bench." A. L. Sloan of the Chicago "Herald-American", a friend of Landis, recalled: The Judge was always headline news. He was a great showman, theatrical in appearance, with his sharp jaw and shock of white hair, and people always crowded into his courtroom, knowing there would be something going on. There were few dull moments. If Judge Landis was suspicious of an attorney's line of questioning, he would begin to wrinkle his nose, and once told a witness, "Now let's stop fooling around and tell exactly what did happen, without reciting your life's history." When an elderly defendant told him that he would not be able to live to complete a five-year sentence, Landis scowled at him and asked, "Well, you can try, can't you?" When a young man stood before him for sentencing after admitting to stealing jewels from a parcel, the defendant's wife stood near him, infant daughter in her arms, and Landis mused what to do about the situation. After a dramatic pause, Landis ordered the young man to take his wife and daughter and go home with them, expressing his unwillingness to have the girl be the daughter of a convict. According to sportswriter Ed Fitzgerald in "SPORT" magazine, "[w]omen wept unashamed and the entire courtroom burst into spontaneous, prolonged applause." Landis had been a lawyer with a corporate practice; upon his elevation to the bench, corporate litigants expected him to favor them. According to a 1907 magazine article about Landis, "Corporations smiled pleasantly at the thought of a corporation lawyer being on the bench. They smile no more." In an early case, Landis fined the Allis-Chalmers Manufacturing Company the maximum $4,000 for illegally importing workers, even though Winifred Landis's sister's husband served on the corporate board. In another decision, Landis struck down a challenge to the Interstate Commerce Commission's (ICC) jurisdiction over rebating, a practice banned by the Elkins Act of 1903 in which railroads and favored customers agreed that the customers would pay less than the posted tariff, which by law was to be the same for all shippers. Landis's decision allowed the ICC to take action against railroads which gave rebates. By the first decade of the 20th century, a number of business entities had formed themselves into trusts, which dominated their industries. Trusts often sought to purchase or otherwise neutralize their competitors, allowing the conglomerates to raise prices to high levels. In 1890, Congress had passed the Sherman Anti-Trust Act, but it was not until the Theodore Roosevelt administration (1901–09) that serious efforts were made to break up or control the trusts. The dominant force in the oil industry was Standard Oil, controlled by John D. Rockefeller. Modern-day Exxon, Mobil, Atlantic Richfield, Chevron, Sohio, Amoco and Continental Oil all trace their ancestry to various parts of Standard Oil. In March 1906, Commissioner of Corporations James Rudolph Garfield submitted a report to President Roosevelt, alleging large-scale rebating in Standard Oil shipments. Federal prosecutors in several states and territories sought indictments against components of the Standard Oil Trust. On June 28, 1906, Standard Oil of Indiana was indicted on 6,428 counts of violation of the Elkins Act for accepting rebates on shipments on the Chicago & Alton Railroad. The case was assigned to Landis. 
Trial on the 1,903 counts that survived pretrial motions began on March 4, 1907. The fact that rebates had been given was not contested; what was at issue was whether Standard Oil knew the railroad's posted rates, and if it had a duty to enquire if it did not. Landis charged the jury that it "was the duty of the defendant diligently in good faith to get from the Chicago & Alton ... the lawful rate". The jury found Standard Oil guilty on all 1,903 counts. The maximum fine that Landis could impose was $29,240,000. To aid in determining the sentence, Landis issued a subpoena for Rockefeller to testify as to Standard Oil's assets. The tycoon had often evaded subpoenas, not having testified in court since 1888. Deputy United States marshals visited Rockefeller's several homes, as well as the estates of his friends, in the hope of finding him. After several days, Rockefeller was found at his lawyer's estate, Taconic Farm in northwestern Massachusetts, and was served with the subpoena. The tycoon duly came to Landis's Chicago courtroom, making his way through a mob anxious to see the proceedings. Rockefeller's actual testimony, proffered after the judge made him wait through several cases and witnesses, proved to be anticlimactic, as he professed almost no knowledge of Standard Oil's corporate structure or assets. On August 3, 1907, Landis pronounced sentence. He fined Standard Oil the maximum penalty, $29,240,000, the largest fine imposed on a corporation to that point. The corporation quickly appealed; in the meantime, Landis was lionized as a hero. According to Pietrusza, "much of the nation could hardly believe a federal judge had finally cracked down on a trust—and cracked down 'hard'". President Roosevelt, when he heard the sentence, reportedly stated, "That's bully." Rockefeller was playing golf in Cleveland when he was brought a telegram containing the news. Rockefeller calmly informed his golfing partners of the amount of the fine, and proceeded to shoot a personal record score, later stating, "Judge Landis will be dead a long time before this fine is paid." He proved correct; the verdict and sentence were reversed by the United States Court of Appeals for the Seventh Circuit on July 22, 1908. In January 1909, the Supreme Court refused to hear the case, and in a new trial before another judge (Landis recused himself), Standard Oil was acquitted. A lifelong baseball fan, Landis often slipped away from the courthouse for a White Sox or Cubs game. In 1914, the two existing major leagues were challenged by a new league, the Federal League. In 1915, the upstart league brought suit against the existing leagues and owners under the Sherman Act, and the case was assigned to Landis. Baseball owners feared that the reserve clause, which forced players to sign new contracts only with their former team, and the 10-day clause, which allowed teams (but not players) to terminate player contracts on ten days' notice, would be struck down by Landis. Landis held hearings in late January 1915, and newspapers expected a quick decision, certainly before spring training began in March. During the hearings, Landis admonished the parties, "Both sides must understand that any blows at the thing called baseball would be regarded by this court as a blow to a national institution". 
When the National League's chief counsel, future Senator George Wharton Pepper referred to the activities of baseball players on the field as "labor", Landis interrupted him: "As a result of 30 years of observation, I am shocked because you call playing baseball 'labor.' " Landis reserved judgment, and the parties waited for his ruling. Spring training passed, as did the entire regular season and the World Series. In December 1915, still with no word from Landis, the parties reached a settlement, and the Federal League disbanded. Landis made no public statement as to the reasons for his failure to rule, though he told close friends that he had been certain the parties would reach a settlement sooner or later. Most observers thought that Landis waited because he did not want to rule against the two established leagues and their contracts. In 1916, Landis presided over the "Ryan Baby" or "Baby Iraene" case. The recent widow of a prominent Chicago banker, Anna Dollie Ledgerwood Matters, had brought a baby girl home from a visit to Canada and claimed that the child was her late husband's posthumous heir. Matters had left an estate of $250,000. However, a shop girl from Ontario, Margaret Ryan, claimed the baby was hers, and brought a writ of "habeas corpus" in Landis's court. Ryan stated that she had given birth to the girl in an Ottawa hospital, but had been told her baby had died. In the era before blood and DNA testing, Landis relied on witness testimony and awarded the child to Ryan. The case brought comparisons between Landis and King Solomon, who had judged a similar case. Landis was reversed by the Supreme Court, which held he had no jurisdiction in the matter. A Canadian court later awarded the child to Ryan. Although Landis was an autocrat in the courtroom, he was less so at home. In a 1916 interview, he stated, Every member of this family does exactly what he or she wants to do. Each one is his or her supreme court. Everything for the common good of the family is decided according to the wishes of the whole family. Each one knows what is right and each one can do whatever he thinks is best. It is purely democratic. In early 1917, Landis considered leaving the bench and returning to private practice—though he greatly enjoyed being a judge, the salary of $7,500 was considerably lower than what he could make as an attorney. The entry of the United States into World War I in April ended Landis's determination to resign; a firm supporter of the war effort, he felt he could best serve the country by remaining on the bench. Despite this decision and his age, fifty, Landis wrote to Secretary of War Newton D. Baker, asking him to take him into the service and send him to France, where the war was raging. Baker urged Landis to make speeches in support of the war instead, which he did. The judge's son, Reed, had already served briefly in the Illinois National Guard; when war came he became a pilot and eventually an ace. Landis's disdain for draft dodgers and other opponents of the war was evident in July 1917, when he presided over the trials of some 120 men, mostly foreign-born Socialists, who had resisted the draft and rioted in Rockford, Illinois. According to Pietrusza, Landis "was frequently brutal in his remarks" to the defendants, interrogating them on their beliefs. Landis tried the case in Rockford, and found all guilty, sentencing all but three to a year and a day in jail, the maximum sentence. 
The prisoners were ordered to register for the draft after serving their sentences—except 37, whom he ordered deported. On September 5, 1917, federal officers raided the national headquarters, in Chicago, of the Industrial Workers of the World (IWW, sometimes "Wobblies"), as well as 48 of the union's halls across the nation. The union had opposed the war and urged members and others to refuse conscription into the armed forces. On September 28, 166 IWW leaders, including union head Big Bill Haywood were indicted in the Northern District of Illinois; their cases were assigned to Landis. Some 40 of the indicted men could not be found; a few others had charges dismissed against them. Ultimately, Landis presided over a trial against 113 defendants, the largest federal criminal trial to that point. The trial began on April 1, 1918. Landis quickly dismissed charges against a dozen defendants, including one A.C. Christ, who showed up in newly obtained army uniform. Jury selection occupied a month. Journalist John Reed attended the trial, and wrote of his impressions of Landis: Small on the huge bench sits a wasted man with untidy white hair, an emaciated face in which two burning eyes are set like jewels, parchment-like skin split by a crack for a mouth; the face of Andrew Jackson three years dead ... Upon this man has devolved the historic role of trying the Social Revolution. He is doing it like a gentleman. In many ways a most unusual trial. When the judge enters the court-room after recess, no one rises—he himself has abolished the pompous formality. He sits without robes, in an ordinary business suit, and often leaves the bench to come down and perch on the step of the jury box. By his personal orders, spittoons are placed by the prisoners' seats ... and as for the prisoners themselves, they are permitted to take off their coats, move around, read newspapers. It takes some human understanding for a Judge to fly in the face of judicial ritual as much as that. Haywood biographer Melvyn Dubofsky wrote that Landis "exercised judicial objectivity and restraint for five long months". Baseball historian Harold Seymour stated that "[o]n the whole, Landis conducted the trial with restraint, despite his reputation as a foe of all radical groups." Landis dismissed charges against an elderly defendant who was in obvious pain as he testified, and allowed the release of a number of prisoners on bail or on their own recognizances. On August 17, 1918, following the closing argument for the prosecution (the defendants waived argument), Landis instructed the jury. The lead defense counsel objected to the wording of the jury charge several times, but Haywood believed it to have been fair. After 65 minutes, the jury returned with guilty verdicts for all of the remaining accused, much to their shock; they had believed that Landis's charge pointed towards their acquittal. When the defendants returned to court on August 29, Landis listened with patience to the defendants' final pleas. For the sentencing, according to Richard Cahan in his history of Chicago's district court, "mild-mannered Landis returned a changed man". Although two defendants received only ten days in jail, all others received at least a year and a day, and Haywood and fourteen others received twenty years. A number of defendants, including Haywood, obtained bail during the appeal; even before Haywood's appeals were exhausted, he jumped bail and took ship for the Soviet Union. 
The labor leader hung a portrait of Landis in his Moscow apartment, and when Haywood died in 1928, he was interred near John Reed (who had died of illness in Moscow after the Bolshevik Revolution) in the Kremlin Wall—they remain the only Americans so honored. President Calvin Coolidge commuted the sentences of the remaining incarcerated defendants in 1923, much to the disgust of Landis, who issued an angry statement. After leaving his judgeship, Landis referred to the defendants in the Haywood case as "scum", "filth", and "slimy rats". Landis hoped that the Kaiser, Wilhelm II would be captured and tried in his court; he wanted to indict the Kaiser for the murder of a Chicagoan who lost his life on the "RMS Lusitania" in 1915. The State Department notified Landis that extradition treaties did not permit the rendition of the Kaiser, who fled into exile in the Netherlands as the war concluded. Nevertheless, in a speech, Landis demanded that Kaiser Wilhelm, his six sons, and 5,000 German military leaders "be lined up against a wall and shot down in justice to the world and to Germany". Even with the armistice in November 1918, the war-related trials continued. The Socialist Party of America, like the IWW, had opposed the war, and had also been raided by federal authorities. Seven Socialist Party leaders, including Victor Berger, who was elected to Congress in November 1918, were indicted for alleged anti-war activities. The defendants were charged under the Espionage Act of 1917, which made it illegal "to utter, print, write, or publish any disloyal, profane, scurrilous or abusive language" about the armed forces, the flag, the Constitution, or democracy. The defendants, who were mostly of German birth or descent, moved for a change of venue away from Landis's courtroom, alleging that Landis had stated on November 1, 1918, that "[i]f anybody has said anything about the Germans that is worse than I have said, I would like to hear it so I could use it myself." Landis, however, examined the transcript of the trial in which the statement was supposedly made, failed to find it, declared the affidavit in support of the motion "perjurious", and denied the motion. While the jury was being selected, Berger was indicted on additional espionage charges for supposedly violating the law during an earlier, unsuccessful political campaign. At the conclusion of the case, Landis took an hour to dramatically charge the jury, emphasizing the secretive nature of conspiracies and pointing at the jury box as he noted, "the country was then at war". At one point, Landis leapt out of his seat, twirled his chair around, then sat on its arm. Later in his charge, he lay prone upon the bench. The jury took less than a day to convict Congressman-elect Berger and his four remaining codefendants. Landis sentenced each defendant to twenty years in federal prison. Landis denied the defendants bail pending appeal; but they quickly obtained it from an appellate court judge. The Seventh Circuit Court of Appeals declined to rule on the case itself, sending it on to the Supreme Court, which on January 31, 1921, overturned the convictions and sentences by a 6–3 vote, holding that Landis should have stepped aside once he was satisfied that the affidavit was legally sufficient, leaving it for another judge to decide whether it was actually true. Landis refused to comment on the Supreme Court's decision, which ordered a new trial. In 1922, charges against the defendants were dropped by the government. 
The postwar period saw considerable deflation; the shortage of labor and materials during the war had led to much higher wages and prices, and in the postwar economic readjustment, wages were cut heavily. In Chicago, employers in the building trades attempted a 20% wage cut; when this was rejected by the unions, a lockout followed. Both sides agreed to submit the matter to a neutral arbitrator, and settled on Landis, who agreed to take the case in June 1921. By this time, Landis was Commissioner of Baseball, and still a federal judge. In September, Landis issued his report, cutting wages by an average of 12.5%. To improve productivity, he also struck restrictions on machinery which saved labor, established a standardized overtime rate, and resolved jurisdictional conflicts between unions. The labor organizations were not completely satisfied, but Landis's reforms were adopted in many places across the country and were credited with reviving the building industry. Criticism of Landis having both the judicial and baseball positions began almost as soon as his baseball appointment was announced in November 1920. On February 2, 1921, lame duck Congressman Benjamin F. Welty (Democrat-Ohio) offered a resolution calling for Landis's impeachment. On February 11, Attorney General A. Mitchell Palmer opined that there was no legal impediment to Landis holding both jobs. On February 14, the House Judiciary Committee voted 24–1 to investigate Landis. Reed Landis later stated, "[n]one of the other congressmen wanted Father impeached but they did want him to come down and defend himself because they knew what a show it would be." Although Welty's departure from office on March 4, 1921, began a lull in criticism of Landis, in April, the judge made a controversial decision in the case of Francis J. Carey, a 19-year-old bank teller, who had pleaded guilty to embezzling $96,500. Carey, the sole support of his widowed mother and unmarried sisters, gained Landis's sympathy. He accused the bank of underpaying Carey, and sent the youth home with his mother. Two members of the Senate objected to Landis's actions, and the "New York Post" compared Carey with "Les Misérables's" Jean Valjean, noting "[b]etween a loaf of bread [Valjean was incarcerated for stealing one] and $96,500 there is a difference." A bill barring outside employment by federal judges had been introduced by Landis's foes, but had expired with the end of the congressional session in March; his opponents tried again in July, and the bill failed in the Senate on a tie vote. On September 1, 1921, the American Bar Association, a trade group of lawyers, passed a resolution of censure against Landis. By the end of 1921, the controversy was dying down, and Landis felt that he could resign without looking pressured. On February 18, 1922, he announced his resignation as judge effective March 1, stating, "There are not enough hours in the day for all these activities". In his final case, he fined two theatre owners for evading the federal amusement tax. One owner had refused to make restitution before sentencing; he was fined $5,000. The owner who had tried to make his shortfall good was fined one cent. By 1919, the influence of gamblers on baseball had been a problem for several years. Historian Paul Gardner wrote, Baseball had for some time been living uneasily in the knowledge that bribes were being offered by gamblers, and that some players were accepting them. The players knew it was going on, and the owners knew it was going on. 
But more important, the players knew that the owners knew—and they knew the owners were doing nothing about it for fear of a scandal that might damage organized baseball. Under such conditions it quite obviously did not pay to be honest. The 1919 World Series between the Chicago White Sox and Cincinnati Reds was much anticipated, as the nation attempted to return to normalcy in the postwar period. Baseball had seen a surge of popularity during the 1919 season, which set several attendance records. The powerful White Sox, with their superstar batter "Shoeless Joe" Jackson and star pitchers Eddie Cicotte and Claude "Lefty" Williams, were believed likely to defeat the less-well-regarded Reds. To the surprise of many, the Reds defeated the White Sox, five games to three (during 1919–21, the World Series was a best-of-nine affair). Rumors that the series was fixed began to circulate after gambling odds against the Reds winning dropped sharply before the series began, and gained more credibility after the White Sox lost four of the first five games. Cincinnati lost the next two games, and speculation began that the Reds were losing on purpose to extend the series and increase gate revenues. However, Cincinnati won Game Eight, 10–5, to end the series, as Williams lost his third game (Cicotte lost the other two). After the series, according to Gene Carney, who wrote a book about the scandal, "there was more than the usual complaining from those who had bet big on the Sox and lost". The issue of the 1919 Series came to the public eye again in September 1920, when, after allegations that a game between the Chicago Cubs and Philadelphia Phillies on August 31 had been fixed, a grand jury was empaneled in state court in Chicago to investigate baseball gambling. Additional news came from Philadelphia, where gambler Billy Maharg stated that he had worked with former boxer Abe Attell and New York gambler Arnold Rothstein to get the White Sox to throw the 1919 Series. Cicotte and Jackson were called before the grand jury, where they gave statements incriminating themselves and six teammates: Williams, first baseman Chick Gandil, shortstop Swede Risberg, third baseman Buck Weaver, center fielder Happy Felsch and reserve infielder Fred McMullin. Williams and Felsch were also called before the grand jury and incriminated themselves and their teammates. Through late September, the 1920 American League season had been one of the most exciting on record, with the White Sox, Cleveland Indians, and New York Yankees dueling for the league lead. By September 28, the Yankees were close to elimination, but the White Sox and Indians were within percentage points of each other. On that day, however, the eight players, seven of whom were still on the White Sox, were indicted. They were immediately suspended by White Sox owner Charles Comiskey. The Indians were able to pull ahead and win the pennant, taking the American League championship by two games over Chicago. Baseball had been governed by a three-man National Commission, consisting of American League President Ban Johnson, National League President John Heydler and Cincinnati Reds owner Garry Herrmann. In January 1920, Herrmann left office at the request of other club owners, leaving the Commission effectively deadlocked between Johnson and Heydler. 
A number of club owners, disliking one or both league presidents, preferred a single commissioner to rule over the game, but were willing to see the National Commission continue if Herrmann was replaced by someone who would provide strong leadership. Landis's name was mentioned in the press for this role, and the influential baseball newspaper "The Sporting News" sought his appointment. Another proposal, known as the "Lasker Plan" after Albert Lasker, a shareholder in the Chicago Cubs who had proposed it, was for a three-man commission to govern the game, drawn from outside baseball. On September 30, 1920, with the Black Sox scandal exposed, National League President Heydler began to advocate for the Lasker Plan, and by the following day, four major league teams had supported him. Among the names discussed in the press for membership on the new commission were Landis, former Secretary of the Treasury William Gibbs McAdoo, former President William Howard Taft, and General John J. Pershing. The start of the 1920 World Series on October 5 distracted the public from baseball's woes for a time, but discussions continued behind the scenes. By mid-October, 11 of the 16 team owners (all eight from the National League and the owners of the American League Yankees, White Sox and Boston Red Sox) were demanding the end of the National Commission and the appointment of a three-man commission whose members would have no financial interest in baseball. Heydler stated his views on baseball's requirements: We want a man as chairman who will rule with an iron hand ... Baseball has lacked a hand like that for years. It needs it now worse than ever. Therefore, it is our object to appoint a big man to lead the new commission. On November 8, the owners of the eight National League and three American League teams which supported the Lasker Plan met and unanimously selected Landis as head of the proposed commission. The American League clubs that supported the plan threatened to move to the National League, away from Johnson, who opposed it. Johnson had hoped that the minor leagues would support his position; when they did not, he and the "Loyal Five" teams agreed to the Lasker Plan. In the discussions among the owners that followed, they decided that Landis would be the only commissioner–no associate members would be elected. On November 12, the team owners came to Landis's courtroom to approach him. Landis was trying a bribery case; when he heard noise in the back of the courtroom from the owners, he gaveled them to silence. He made them wait 45 minutes while he completed his docket, then met with them in his chambers. The judge heard out the owners; after expressing initial reluctance, he took the job for seven years at a salary of $50,000, on condition he could remain on the federal bench. During Landis's time serving as both judge and commissioner, he allowed a $7,500 reduction in his salary as commissioner, to reflect his pay as judge. The appointment of Landis was met with acclaim in the press. A tentative agreement was signed by the parties a month later—an agreement which itemized Landis's powers over baseball, and which was drafted by the judge. The owners were still reeling from the perception that baseball was crooked, and accepted the agreement virtually without dissent. Under the terms of the contract, Landis could not be dismissed by the team owners, have his pay reduced, or even be criticized by them in public. 
He also had nearly unlimited authority over every person employed in the major or minor leagues, from owners to batboys. The owners waived any recourse to the courts to contest Landis's will. Humorist Will Rogers stated, "[D]on't kid yourself that that old judicial bird isn't going to make those baseball birds walk the chalkline". Player and manager Leo Durocher later stated, "The legend has been spread that the owners hired the Judge off the federal bench. Don't you believe it. They got him right out of Dickens." On January 30, 1921, Landis, speaking at an Illinois church, warned: Now that I am in baseball, just watch the game I play. If I catch any crook in baseball, the rest of his life is going to be a pretty hot one. I'll go to any means and to anything possible to see that he gets a real penalty for his offense. The criminal case against the Black Sox defendants suffered unexpected setbacks, with evidence vanishing, including some of the incriminating statements made to the grand jury. The prosecution was forced to dismiss the original indictments, and bring new charges against seven of the ballplayers (McMullin was not charged again). Frustrated by the delays, Landis placed all eight on an "ineligible list", banning them from major and minor league baseball. Comiskey supported Landis by giving the seven who remained under contract to the White Sox their unconditional release. Public sentiment was heavily against the ballplayers, and when Jackson, Williams, Felsch, and Weaver played in a semi-pro game, "The Sporting News" mocked the 3,000 attendees, "Just Like Nuts Go to See a Murderer". The criminal trial of the Black Sox indictees began in early July 1921. Despite what Robert C. Cottrell, in his book on the scandal, terms "the mysterious loss of evidence", the prosecution was determined to pursue the case, demanding five-year prison terms for the ballplayers for defrauding the public by throwing the World Series. On August 2, 1921, the jury returned not guilty verdicts against all defendants, leading to happy pandemonium in the courtroom, joined by the courtroom bailiffs, with even the trial judge, Hugo Friend, looking visibly pleased. The players and jury then repaired to an Italian restaurant and partied well into the night. The jubilation proved short-lived. On August 3, Landis issued a statement: Regardless of the verdict of juries, no player that throws a ball game; no player that undertakes or promises to throw a ball game; no player that sits in a conference with a bunch of crooked players and gamblers where the ways and means of throwing ball games are planned and discussed and does not promptly tell his club about it, will ever play professional baseball. Of course, I don't know that any of these men will apply for reinstatement, but if they do, the above are at least a few of the rules that will be enforced. Just keep in mind that, regardless of the verdict of juries, baseball is competent to protect itself against crooks, both inside and outside the game. According to ESPN columnist Rob Neyer, "with that single decision, Landis might have done more for the sport than anyone else, ever. Certainly, Landis never did anything more important." According to Carney, "The public amputation of the eight Sox was seen as the only acceptable cure." Over the years of Landis's commissionership, a number of the players applied for reinstatement to the game, notably Jackson and Weaver. 
Jackson, raised in rural South Carolina and with limited education, was said to have been drawn unwillingly into the conspiracy, while Weaver, though admitting his presence at the meetings, stated that he took no money. Both men stated that their play on the field and their batting averages during the series (.375 for Jackson, .324 for Weaver) indicated that they did not help to throw the series. Neither was ever reinstated, with Landis telling a group of Weaver supporters that his presence at the meetings with the gamblers was sufficient to bar him. Even today, long after the deaths of all three men, efforts are periodically made to reinstate Jackson (which would make him eligible for election to the National Baseball Hall of Fame) and Weaver (deemed by some the least culpable of the eight). In the 1990s, a petition drive to reinstate Jackson drew 60,000 signatures. He has been treated sympathetically in movies such as "Eight Men Out" and "Field of Dreams", and Hall of Famers Ted Williams and Bob Feller expressed their support for Jackson's induction into the Hall. Landis's expulsion of the eight men remains in force. Landis felt that the Black Sox scandal had been initiated by people involved in horse racing, and stated that "by God, as long as I have anything to do with this game, they'll never get another hold on it." In 1921, his first season as commissioner, New York Giants owner Charles Stoneham and manager John McGraw purchased Oriental Park Racetrack in Havana, Cuba. Landis gave Stoneham and McGraw an ultimatum—they could not be involved in both baseball and horse racing. They quickly put the track back on the market. Even before the Black Sox scandal had been resolved, Landis acted to clean up other gambling cases. Eugene Paulette, a first baseman for the Philadelphia Phillies, had been with the St. Louis Cardinals in 1919, and had met with gamblers. It is uncertain if any games were fixed, but Paulette had written a letter naming two other Cardinals who might be open to throwing games. The letter had fallen into the hands of Phillies President William F. Baker, who had taken no action until Landis's appointment, then turned the letter over to him. Paulette met with Landis once, denying any wrongdoing, then refused further meetings. Landis placed him on the ineligible list in March 1921. In November 1921, Landis banned former St. Louis Browns player Joe Gedeon, who had been released by the Browns after admitting to sitting in on meetings with gamblers who were trying to raise the money to bribe the Black Sox. When a minor league official asked if he was eligible, Landis settled the matter by placing Gedeon on the ineligible list. Two other player gambling affairs marked Landis's early years as commissioner. In 1922, Giants pitcher Phil Douglas, embittered at McGraw for disciplining him for heavy drinking, wrote a letter to Cardinals outfielder Leslie Mann, suggesting that he would take a bribe to ensure the Giants did not win the pennant. Although Mann had been a friend, the outfielder neither smoked nor drank and had long been associated with the YMCA movement; according to baseball historian Lee Allen, Douglas might as well have sent the letter to Landis himself. Mann immediately turned over the letter to his manager, Branch Rickey, who ordered Mann to contact Landis at once. The Giants placed Douglas on the ineligible list, an action backed by Landis after meeting with the pitcher. 
On September 27, 1924, Giants outfielder Jimmy O'Connell offered Phillies shortstop Heinie Sand $500 if Sand didn't "bear down too hard against us today". Sand was initially inclined to let the matter pass, but recalling the fate of Weaver and other Black Sox players, told his manager, Art Fletcher. Fletcher met with Heydler, who contacted Landis. O'Connell did not deny the bribe attempt, and was placed on the ineligible list. In total, Landis banned eighteen players from the game. Landis biographer Pietrusza details the effect of Landis's stand against gambling: Before 1920 if one player approached another player to throw a contest, there was a very good chance he would not be informed upon. Now, there was an excellent chance he would be turned in. No honest player wanted to meet the same fate as Buck Weaver ... Without the forbidding example of Buck Weaver to haunt them, it is unlikely Mann and Sand would have snitched on their fellow players. After Landis' unforgiving treatment of the popular and basically honest Weaver they dared not to. And once prospectively crooked players knew that honest players would no longer shield them, "the scandals stopped". At the time of Landis's appointment as commissioner, it was common for professional baseball players to supplement their pay by participating in postseason "barnstorming" tours, playing on teams which would visit smaller cities and towns to play games for which admission would be charged. Since 1911, however, players on the two World Series teams had been barred from barnstorming. The rule had been indifferently enforced—in 1916, several members of the champion Red Sox, including pitcher George Herman "Babe" Ruth had barnstormed and had been fined a token $100 each by the National Commission. Ruth, who after the 1919 season had been sold to the Yankees, and who by then had mostly abandoned his pitching role for the outfield, was the focus of considerable fan interest as he broke batting records in 1920 and 1921, some by huge margins. Ruth's major league record 29 home runs with the Red Sox in 1919 fell to his own efforts in 1920, when he hit 54. He then proceeded to hit 59 in 1921, leading the Yankees to their first pennant. Eight major league teams failed to hit as many home runs in 1921 as Ruth hit by himself. The Yankees lost the 1921 World Series to the Giants (Ruth was injured and missed several games) and after the series, the outfielder proposed to capitalize on fan interest by leading a team of barnstormers, including Yankees teammate Bob Meusel, in violation of the rule. According to Cottrell, [T]he two men clashed who helped the national pastime overcome the Black Sox scandal, one through his seemingly iron will, the other thanks to his magical bat. Judge Kenesaw Mountain Landis and Babe Ruth battled over the right of a ballplayer from a pennant-winning squad to barnstorm in the off-season. Also involved was the commissioner's continued determination to display, as he had through his banishment of the Black Sox, that he had established the boundaries for organized baseball. These boundaries, Landis intended to demonstrate, applied even to the sport's most popular and greatest star. Significant too, only Babe Ruth now contended with Commissioner Landis for the title of baseball's most important figure. Ruth had asked Yankees general manager Ed Barrow for permission to barnstorm. Barrow had no objection but warned Ruth he must obtain Landis's consent. 
Landis biographer Spink, who was at the time the editor of "The Sporting News", stated, "I can say that Ruth knew exactly what he was doing when he defied Landis in October, 1921. He was willing to back his own popularity and well-known drawing powers against the Judge." Ruth, to the commissioner's irritation, did not contact Landis until October 15, one day before the first exhibition. When the two spoke by telephone, Landis ordered Ruth to attend a meeting with him; Ruth refused, stating that he had to leave for Buffalo for the first game. Landis angrily refused consent for Ruth to barnstorm, and after slamming down the receiver, is recorded as saying, "Who the hell does that big ape think he is? That blankety-blank! If he goes on that trip it will be one of the sorriest things he has ever done." By one account, Yankees co-owner Colonel Tillinghast Huston attempted to dissuade Ruth as he departed, only to be told by the ballplayer, "Aw, tell the old guy to jump in a lake." The tour also featured fellow Yankees Bob Meusel and Bill Piercy (who had been called up late in the season and was ineligible for the World Series) as well as Tom Sheehan, who had been sent to the minor leagues before the end of the season. Two other Yankees, Carl Mays and Wally Schang, had been scheduled to join the tour, but given Landis's position, according to Spink, "wisely decided to pass it up". Spink describes the tour as "a fiasco." On Landis's orders, it was barred from all major and minor league ballparks. In addition, it was plagued by poor weather, and was called off in late October. In early December, Landis suspended Ruth, Piercy, and Meusel until May 20, 1922. Yankee management was actually relieved; they had feared Landis would suspend Ruth for the season or even longer. Both the Yankees and Ruth repeatedly asked Landis for the players' early reinstatement, which was refused, and when Landis visited the Yankees during spring training in New Orleans, he lectured Ruth for two hours on the value of obeying authority. "He sure can talk", noted Ruth. When Ruth returned on May 20, he went 0-for-4 and was booed by the crowd at the Polo Grounds. According to Pietrusza, "Always a politician, there was one boss Landis did fear: public opinion. He had no guarantee at the start of the Ruth controversy that the public and press would back him as he assumed unprecedented powers over baseball. Now, he knew they would." At the start of Landis's commissionership, the minor league teams were for the most part autonomous of the major leagues; in fact, the minor leagues independently chose to accept Landis's rule. To ensure players did not become mired in the minor leagues without a chance to earn their way out, major league teams were able to draft players who played two consecutive years with the same minor league team. Several minor leagues were not subject to the draft; Landis fought for the inclusion of these leagues, feeling that the non-draft leagues could prevent players from advancing as they became more skilled. By 1924, he had succeeded, as the International League, the final holdout, accepted the draft. By the mid-1920s, major league clubs were beginning to develop "farm systems", that is, minor league teams owned or controlled by them, at which they could develop young prospects without the risk of the players being acquired by major league rivals. The pioneer in this development was Branch Rickey, who then ran the St. Louis Cardinals. 
As the 1921 National Agreement among the major and minor leagues which implemented Landis's hiring lifted a ban on major league teams owning minor league ones, Landis was limited in his avenues of attack on Rickey's schemes. Developing talent at little cost thanks to Rickey, the Cardinals dominated the National League, winning nine league titles in the years from 1926 to 1946. Soon after Landis's appointment, he surprised the major league owners by requiring that they disclose their minor league interests. Landis fought against the practice of "covering up", using transfers between two teams controlled by the same major league team to make players ineligible for the draft. His first formal act as commissioner was to declare infielder Phil Todt a free agent, dissolving his contract with the St. Louis Browns (at the time run by Rickey, who soon thereafter moved across town to run the Cardinals); in 1928, he ruled future Hall of Famer Chuck Klein a free agent as he held the Cardinals had tried to cover Klein up. The following year, he freed Detroit Tigers prospect and future Hall of Famer Rick Ferrell, who attracted a significant signing bonus from the Browns. In 1936, Landis found that teenage pitching prospect Bob Feller's signing by minor league club Fargo-Moorhead had been a charade; the young pitcher was for all intents and purposes property of the Cleveland Indians. However, Feller indicated that he wanted to play for Cleveland and Landis issued a ruling which required the Indians to pay damages to minor league clubs, but allowed them to retain Feller, who went on to a Hall of Fame career with the Indians. Landis's attempts to crack down on "covering up" provoked the only time he was ever sued by one of his owners. After the 1930 season, minor leaguer Fred Bennett, convinced he was being covered up by the Browns, petitioned Landis for his release. Landis ruled that the Browns could either keep Bennett on their roster for the entire 1931 season, trade him, or release him. Instead, Browns owner Phil Ball brought suit against Landis in his old court in Chicago. Federal Judge Walter Lindley ruled for Landis, noting that the agreements and rules were intended to "endow the Commissioner with all the attributes of a benevolent but absolute despot and all the disciplinary powers of the proverbial "pater familias"". Ball intended to appeal, but after a meeting between team owners and Landis in which the commissioner reminded owners of their agreement not to sue, decided to drop the case. Landis had hoped that the large Cardinal farm system would become economically unfeasible; when it proved successful for the Cardinals, he had tolerated it for several years and was in a poor position to abolish it. In 1938, however, finding that the Cardinals effectively controlled multiple teams in the same league (a practice disliked by Landis), he freed 70 players from their farm system. As few of the players were likely prospects for the major leagues, Landis's actions generated headlines, but had little effect on the Cardinals organization, and the development of the modern farm system, whereby each major league club has several minor league teams which it uses to develop talent, proceeded apace. Rob Neyer describes Landis's effort as "a noble effort in a good cause, but it was also doomed to fail." One of the most controversial aspects of Landis's commissionership is the question of race. From 1884, black ballplayers were informally banned from organized baseball. 
No black ballplayer played in organized baseball during Landis's commissionership; Rickey (then running the Brooklyn Dodgers) broke the color line by signing Jackie Robinson to play for the minor league Montreal Royals in 1946, after Landis's death. Robinson became the first black player in the major leagues since the 19th century, playing with the Dodgers beginning in 1947. According to contemporary newspaper columns, at the time of his appointment as commissioner, Landis was considered a liberal on race questions; two Chicago African-American newspapers defended him against the 1921 efforts to impeach him from his judgeship. A number of baseball authors have ascribed racism to Landis, who they say actively perpetuated baseball's color line. James Bankes, in "The Pittsburgh Crawfords", tracing the history of that Negro League team, states that Landis, who the author suggests was a Southerner, made "little effort to disguise his racial prejudice during 25 years in office" and "remained a steadfast foe of integration". Negro League historian John Holway termed Landis "the hard-bitten Carolinian Kennesaw Mountain Landis". In a 2000 article in "Smithsonian" magazine, writer Bruce Watson states that Landis "upheld baseball's unwritten ban on black players and did nothing to push owners toward integration". A number of authors say that Landis banned major league play against black teams for fear the white teams would lose, though they ascribe various dates for this action, and the Dodgers are known to have played black teams in and around their Havana spring training base as late as 1942. Landis's documented actions on race are inconsistent. In 1938, Yankee Jake Powell was interviewed by a radio station, and when asked what he did in the offseason, made comments that were interpreted as meaning he worked as a police officer and beat up African Americans. Landis suspended Powell for ten days. In June 1942, the Negro League Kansas City Monarchs played several games against the white "Dizzy Dean All-Stars" at major league ballparks, attracting large crowds. After three games, all won by the Monarchs, Landis ordered a fourth canceled, on the ground that the games were outdrawing major league contests. On one occasion, Landis intervened in Negro League affairs, though he had no jurisdiction to do so. The Crawfords lost a game to a white semi-pro team when their star catcher, Josh Gibson, dropped a pop fly, and Gibson was accused of throwing the game at the behest of gamblers. Landis summoned the black catcher to his office, interviewed him, and announced Gibson was cleared of wrongdoing. In July 1942, Dodger manager Leo Durocher charged that there was a "grapevine understanding" keeping blacks out of baseball. He was summoned to Landis's Chicago office, and after emerging from a meeting with the commissioner, alleged that he had been misquoted. Landis then addressed the press, and stated, "Negroes are not barred from organized baseball by the commissioner and never have been in the 21 years I have served. There is no rule in organized baseball prohibiting their participation and never has been to my knowledge. If Durocher, or if any other manager, or all of them, want to sign one, or twenty-five Negro players, it is all right with me. That is the business of the managers and the club owners. The business of the commissioner is to interpret the rules of baseball, and to enforce them." 
In his 1961 memoir, "Veeck as in Wreck", longtime baseball executive and owner Bill Veeck told of his plan, in 1942, to buy the Phillies and stock the team with Negro League stars. Veeck wrote that he told Landis, who reacted with shock, and soon moved to block the purchase. In his book, Veeck placed some of the blame on National League President Ford Frick, but later reserved blame exclusively for Landis, whom he accused of racism, stating in a subsequent interview, "[a]fter all, a man who is named Kenesaw Mountain was not born and raised in the state of Maine." However, when Veeck was asked for proof of his allegations against Landis, he stated, "I have no proof of that. I can only surmise." According to baseball historian David Jordan, "Veeck, nothing if not a storyteller, seems to have added these embellishments, sticking in some guys in black hats, simply to juice up his tale." In November 1943, Landis agreed after some persuasion that black sportswriter Sam Lacy should make a case for integration of organized baseball before the owners' annual meeting. Instead of Lacy attending the meeting, actor Paul Robeson did. Robeson, though a noted black actor and advocate of civil rights, was a controversial figure due to his affiliation with the Communist Party. The owners heard Robeson out, but at Landis's suggestion, did not ask him any questions or begin any discussion with him. Neyer noted that "Landis has been blamed for delaying the integration of the major leagues, but the truth is that the owners didn't want black players in the majors any more than Landis did. And it's not likely that, even if Landis hadn't died in 1944, he could have prevented Branch Rickey from bringing Jackie Robinson to the National League in 1947." C.C. Johnson Spink, son of Landis biographer J.G. Taylor Spink and his successor as editor of "The Sporting News", noted in the introduction to the reissue of his father's biography of Landis, K.M. Landis was quite human and not infallible. If, for example, he did drag his feet at erasing baseball's color line, he was grievously wrong, but then so were many others of his post-Civil War generation. Landis took full jurisdiction over the World Series, as a contest between representatives of the two major leagues. Landis was blamed when the umpires called a game on account of darkness with the score tied during the 1922 World Series, even though there was still light. Landis decided that such decisions in future would be made by himself, moved forward the starting time of World Series games in future years, and announced that proceeds from the tied game would be donated to charity. In the 1932 World Series, Landis ordered that tickets for Game One at Yankee Stadium only be sold as part of strips, forcing fans to purchase tickets for all Yankee home games during that Series. Bad weather and the poor economy resulted in a half-filled stadium, and Landis allowed individual game sales for Game Two. During the 1933 World Series, he instituted a rule that only he could throw a player out of a World Series game, a rule which followed the ejection of Washington Senator Heinie Manush by umpire Charley Moran. The following year, with the visiting Cardinals ahead of the Detroit Tigers, 9–0 in Game Seven, he removed Cardinal Joe Medwick from the game for his own safety when Medwick, the left fielder, was pelted with fruit by Tiger fans after Medwick had been involved in a fight with one of the Tigers. 
Spink notes that Landis would most likely not have done so were the game within reach of the Tigers. In the 1938 World Series, umpire Moran was hit by a wild throw and suffered facial injuries. He was able to continue, but the incident caused Landis to order that World Series games and All-Star Games be played with six umpires. The All-Star Game began in 1933; Landis had been a strong supporter of the proposal for such a contest, and after the first game remarked, "That's a grand show, and it should be continued." He never missed an All-Star Game in his lifetime; his final public appearance was at the 1944 All-Star Game in Pittsburgh. In 1928, National League ball clubs proposed an innovation whereby each team's pitcher, usually the weakest hitter in the lineup, would not bat, but be replaced for the purposes of batting and base-running by a tenth player. There were expectations that at the interleague meetings that year, the National League teams would vote for it, and the American League teams against it, leaving Landis to cast the deciding vote. In the event, the proposal was withdrawn, and Landis did not disclose how he would have voted on this early version of the "designated hitter" rule. Landis disliked the innovation of "night baseball", played in the evening with the aid of artificial light, and sought to discourage teams from it. Despite this, he attended the first successful minor league night game, in Des Moines, Iowa, in 1930. When major league night baseball began in the late 1930s, Landis got the owners to restrict the number of such games. During World War II, many restrictions on night baseball were reduced, with the Washington Senators permitted to play all their home games (except those on Sundays and holidays) at night. With the entry of the United States into World War II in late 1941, Landis wrote to President Franklin D. Roosevelt, inquiring as to the wartime status of baseball. The President responded urging Landis to keep baseball open, foreseeing that even those fully engaged in war work would benefit from inexpensive diversions such as attending baseball games. Many major leaguers enlisted or were drafted; even so, Landis repeatedly stated, "We'll play as long as we can put nine men on the field." Although many of the teams practiced at their normal spring training sites in 1942, beginning the following year they were required to train near their home cities or in the Northeast. Landis was as virulently opposed to the Axis Powers as he had been to the Kaiser, writing that peace would not be possible until "about fifteen thousand little Hitlers, Himmlers and Hirohitos" were killed. Landis retained a firm hold on baseball despite his advancing years and, in 1943, banned Phillies owner William D. Cox from baseball for betting on his own team. In 1927, Landis's stance regarding gambling had been codified in the rules of baseball: "Any player, umpire, or club or league official or employee who shall bet any sum whatsoever upon any baseball game in connection with which the bettor had a duty to perform shall be declared permanently ineligible." Cox was required to sell his stake in the Phillies. In early October 1944, Landis, suffering from a severe cold, checked into St. Luke's Hospital in Chicago, where his wife Winifred had also been hospitalized. While in the hospital, he had a heart attack, causing him to miss the World Series for the first time in his commissionership. He remained fully alert, and as usual signed the World Series share checks to players. 
His contract was due to expire in January 1946; on November 17, 1944, baseball's owners voted him another seven-year term. However, on November 25, he died surrounded by family, five days after his 78th birthday. His longtime assistant, Leslie O'Connor, wept as he read the announcement for the press. Landis is buried at Oak Woods Cemetery in Chicago. Two weeks after his death, Landis was voted into the National Baseball Hall of Fame by a special committee vote. The Baseball Writers' Association of America renamed its Most Valuable Player Awards after Landis. American League President Will Harridge said of Landis, "He was a wonderful man. His great qualities and downright simplicity impressed themselves deeply on all who knew him." Pietrusza suggests that the legend on Landis's Hall of Fame plaque is his true legacy: "His integrity and leadership established baseball in the esteem, respect, and affection of the American people." Pietrusza notes that Landis was hired by the baseball owners to clean up the sport, and "no one could deny Kenesaw Mountain Landis had accomplished what he had been hired to do". According to his first biographer, Spink, [Landis] may have been arbitrary, self-willed and even unfair, but he 'called 'em as he saw 'em' and he turned over to his successor and the future a game cleansed of the nasty spots which followed World War I. Kenesaw Mountain Landis put the fear of God into weak characters who might otherwise have been inclined to violate their trust. And for that, I, as a lifelong lover of baseball, am eternally grateful. 
Khalid al-Mihdhar Khalid Muhammad Abdallah al-Mihdhar (also transliterated as Almihdhar) (May 16, 1975 – September 11, 2001) was one of five hijackers of American Airlines Flight 77, which was flown into the Pentagon as part of the September 11 attacks. Mihdhar was born in Saudi Arabia and fought in the Bosnian War during the 1990s. In early 1999, he traveled to Afghanistan where, as an experienced and respected jihadist, he was selected by Osama bin Laden to participate in the attacks. Mihdhar arrived in California with fellow hijacker Nawaf al-Hazmi in January 2000, after traveling to Malaysia for the Kuala Lumpur al-Qaeda Summit. At this point, the CIA was aware of Mihdhar, and he was photographed in Malaysia with another al-Qaeda member who was involved in the USS "Cole" bombing. The CIA did not inform the FBI when it learned that Mihdhar and Hazmi had entered the United States, and Mihdhar was not placed on any watchlists until late August 2001. Upon arriving in San Diego, California, Mihdhar and Hazmi were to train as pilots, but spoke English poorly and did not do well with flight lessons. In June 2000, Mihdhar left the United States for Yemen, leaving Hazmi behind in San Diego. Mihdhar spent some time in Afghanistan in early 2001 and returned to the United States in early July 2001. He stayed in New Jersey in July and August, before arriving in the Washington, D.C. area at the beginning of September. On the morning of September 11, 2001, Mihdhar boarded American Airlines Flight 77, which was hijacked approximately 30 minutes after takeoff. The plane was deliberately crashed into the Pentagon, killing all 64 people aboard the flight, along with 125 on the ground. Khalid al-Mihdhar was born on May 16, 1975, in Mecca, Saudi Arabia, to a prominent family related to the Quraysh tribe of Mecca. 
Little is known about his life before the age of 20, when he and childhood friend Nawaf al-Hazmi went to Bosnia to fight with the mujahideen in the Bosnian War. After the war, Mihdhar and Hazmi went to Afghanistan, where they fought alongside the Taliban against the Northern Alliance, and al-Qaeda would later dub Nawaf his "second in command". In 1997, Mihdhar told his family that he was leaving to fight in Chechnya, though it is not certain that he actually went to Chechnya. The same year, both men attracted the attention of Saudi intelligence, which believed they were involved in arms smuggling, and the following year they were eyed as possible collaborators in the 1998 United States embassy bombings in East Africa after it emerged that Mohamed Rashed Daoud Al-Owhali had given the FBI the phone number of Mihdhar's father-in-law, 967-1-200578. That number turned out to be a key communications hub for al-Qaeda militants and eventually tipped off the Americans about the upcoming Kuala Lumpur al-Qaeda Summit. In the late 1990s, Mihdhar married Hoda al-Hada, who was the sister of a comrade from Yemen, and they had two daughters. Through marriage, Mihdhar was related to a number of individuals involved with al-Qaeda in some way. Mihdhar's father-in-law, Ahmad Mohammad Ali al-Hada, helped facilitate al-Qaeda communications in Yemen, and in late 2001, Mihdhar's brother-in-law, Ahmed al-Darbi, was captured in Azerbaijan and sent to Guantanamo Bay on charges of supporting a plot to bomb ships in the Strait of Hormuz. In spring 1999, al-Qaeda founder Osama bin Laden committed to support the 9/11 attacks plot, which was largely organized by prominent al-Qaeda member Khalid Sheikh Mohammed. Mihdhar and Hazmi were among the first group of participants selected for the operation, along with Tawfiq bin Attash and Abu Bara al Yemeni, al-Qaeda members from Yemen. Mihdhar, who had spent time in al-Qaeda camps in the 1990s, was known to and highly regarded by bin Laden. Mihdhar was so eager to participate in jihad operations in the United States that he had already obtained a one-year B-1/B-2 (tourist/business) multiple-entry visa from the consulate in Jeddah, Saudi Arabia, on April 7, 1999, one day after obtaining a new passport. Mihdhar listed the Los Angeles Sheraton as his intended destination. Once selected, Mihdhar and Hazmi were sent to the Mes Aynak training camp in Afghanistan. In late 1999, Hazmi, Attash and Yemeni went to Karachi, Pakistan, to see Mohammed, who instructed them on Western culture and travel; however, Mihdhar did not go to Karachi, instead returning to Yemen. He was known as "Sinaan" during the preparations. The CIA was aware of Mihdhar and Hazmi's involvement with al-Qaeda, having been informed by Saudi intelligence during a 1999 meeting in Riyadh. Based on information uncovered by the FBI in the 1998 United States embassy bombings case, the National Security Agency (NSA) began tracking the communications of Hada, Mihdhar's father-in-law. In late 1999, the NSA informed the CIA of an upcoming meeting in Malaysia, which Hada mentioned would involve "Khalid", "Nawaf", and "Salem", who was Hazmi's younger brother, Salem al-Hazmi. On January 4, 2000, Mihdhar left Yemen and flew to Dubai, United Arab Emirates, where he spent the night. The CIA broke into his hotel room and photocopied his passport, which gave them his full name, birth information and passport number for the first time, and alerted them that he held an entry visa to the United States. 
The photocopy was sent to the CIA's Alec Station, which was tracking al-Qaeda. On January 5, 2000, Mihdhar traveled to Kuala Lumpur, where he joined Hazmi, Attash and Yemeni, who were all arriving from Pakistan. Hamburg cell member Ramzi bin al-Shibh was also at the summit, and Mohammed possibly attended. The group was in Malaysia to meet with Hambali, the leader of Jemaah Islamiyah, an Asian al-Qaeda affiliate. During the Kuala Lumpur al-Qaeda Summit, many key details of the 9/11 attacks may have been arranged. At the time, the attacks plot had an additional component involving hijacking aircraft in Asia, as well as in the United States. Attash and Yemeni were slated for this part of the plot; however, it was later canceled by bin Laden for being too difficult to coordinate with United States operations. In Malaysia, the group stayed with Yazid Sufaat, a local Jemaah Islamiyah member, who provided accommodation at Hambali's request. Both Mihdhar and Hazmi were secretly photographed at the meeting by Malaysian authorities, whom the CIA had asked to provide surveillance. The Malaysians reported that Mihdhar spoke at length with Attash, and he met with Fahd al-Quso and others who were later involved in the USS "Cole" bombing. After the meeting, Mihdhar and Hazmi traveled to Bangkok, Thailand, on January 8 and left a week later on January 15 for the United States. On January 15, 2000, Mihdhar and Hazmi arrived at Los Angeles International Airport from Bangkok and were admitted as tourists for a period of six months. Immediately after entering the country, Mihdhar and Hazmi met Omar al-Bayoumi in an airport restaurant. Bayoumi claimed he was merely being charitable in assisting the two seemingly out-of-place Muslims with moving to San Diego, where he helped them find an apartment near his own, co-signed their lease, and gave them $1,500 to help pay their rent. Mohammed later claimed that he suggested San Diego as their destination, based on information gleaned from a San Diego phone book that listed language and flight schools. Mohammed also recommended that the two seek assistance from the local Muslim community, since neither spoke English nor had experience with Western culture. Witnesses told the FBI that, while in San Diego, Mihdhar and Hazmi had a close relationship with Anwar al-Awlaki, an imam who served as their spiritual advisor. Authorities say the two regularly attended the Masjid Ar-Ribat al-Islami mosque Awlaki led in San Diego, and Awlaki had many closed-door meetings with them, which led investigators to believe Awlaki knew about the 9/11 attacks in advance. In early February 2000, Mihdhar and Hazmi rented an apartment at the Parkwood Apartments complex in the Clairemont Mesa area of San Diego, and Mihdhar purchased a used 1988 Toyota Corolla. Neighbors thought that Mihdhar and Hazmi were odd because months passed without the men getting any furniture, and they slept on mattresses on the floor, yet they carried briefcases, were frequently on their mobile phones, and were occasionally picked up by a limousine. Those who met Mihdhar in San Diego described him as "dark and brooding, with a disdain for American culture". Neighbors also said that the pair constantly played flight simulator games. Mihdhar and Hazmi took flight lessons on May 5, 2000, at the Sorbi Flying Club in San Diego, with Mihdhar flying an aircraft for 42 minutes. They took additional lessons on May 10; however, with poor English skills, they did not do well with flight lessons. 
Mihdhar and Hazmi raised some suspicion when they offered extra money to their flight instructor, Richard Garza, if he would train them to fly jets. Garza refused the offer but did not report them to authorities. After the 9/11 attacks, Garza described the two men as "impatient students" who "wanted to learn to fly jets, specifically Boeings". Mihdhar and Hazmi moved out of the Parkwood Apartments at the end of May 2000, and Mihdhar transferred registration for the Toyota Corolla to Hazmi. On June 10, 2000, Mihdhar left the United States and returned to Yemen to visit his wife, against the wishes of Mohammed who wanted him to remain in the United States to help Hazmi adapt. Mohammed was so angered by this that he decided to remove Mihdhar from the 9/11 plot, but he was overruled by bin Laden. Mihdhar remained part of the plot as a muscle hijacker, who would help take over the aircraft. On October 12, 2000, the USS "Cole" was bombed by a small boat laden with explosives. After the bombing, Yemeni Prime Minister Abdul Karim al-Iryani reported that Mihdhar had been one of the key planners of the attack and had been in the country at the time of the attacks. In late 2000, Mihdhar was back in Saudi Arabia, staying with a cousin in Mecca. In February 2001, Mihdhar returned to Afghanistan for several months, possibly entering across the Iranian border after a flight from Syria. FBI director Robert Mueller later stated his belief that Mihdhar served as the coordinator and organizer for the muscle hijackers. He was the last of the muscle hijackers to return to the United States. On June 10, he returned to Saudi Arabia for a month, where he applied to re-enter the United States through the Visa Express program, indicating that he intended to stay at a Marriott hotel in New York City. On his visa application, Mihdhar falsely stated that he had never previously traveled to the United States. On July 4, Mihdhar returned to the United States, arriving at New York City's John F. Kennedy International Airport, using a new passport obtained the previous month. A digital copy of one of Mihdhar's passports was later recovered during a search of an al-Qaeda safe house in Afghanistan, which held indicators, such as fake or altered passport stamps, that Mihdhar was a member of a known terrorist group. At the time when Mihdhar was admitted to the United States, immigration inspectors had not been trained to look for such indicators. Upon arriving, Mihdhar did not check into the Marriott but instead spent a night at another hotel in the city. Mihdhar bought a fake ID on July 10 from All Services Plus in Passaic County, New Jersey, which was in the business of selling counterfeit documents, including another ID to Flight 11 hijacker Abdulaziz al-Omari. On August 1, Mihdhar and fellow Flight 77 hijacker Hani Hanjour drove to Virginia in order to obtain driver's licenses. Once they arrived, they scouted out a 7-Eleven convenience store and a dollar store in Falls Church, and found two Salvadoran immigrants who, for $50 each, were willing to vouch for Mihdhar and Hanjour as Virginia residents. With notarized residency forms, Mihdhar and Hanjour were able to obtain driver's licenses at a Virginia motor vehicle office. Flight 77 hijackers Salem al-Hazmi and Majed Moqed, and United Airlines Flight 93 hijacker Ziad Jarrah used the same addresses obtained from the Salvadorans to obtain Virginia driver's licenses. 
In August 2001, Mihdhar and Hazmi made several visits to the library at William Paterson University in Wayne, New Jersey, where they used computers to look up travel information and book flights. On August 22, Mihdhar and Hazmi tried to purchase flight tickets from the American Airlines online ticket merchant, but had technical difficulties and gave up. Mihdhar and Moqed were able to make flight reservations for Flight 77 on August 25, using Moqed's credit card; however, the transaction did not fully go through because the billing address and the shipping address for the tickets did not match. On August 31, Mihdhar closed an account at Hudson United Bank in New Jersey, having opened the account when he arrived in July, and was with Hanjour when he made a withdrawal from an ATM in Paterson on September 1. The next day, Mihdhar, Moqed and Hanjour traveled to Maryland, where they stayed at budget motels in Laurel. Mihdhar was among the muscle hijackers who worked out at a Gold's Gym in Greenbelt in early September. On September 5, Mihdhar and Moqed went to the American Airlines ticket counter at Baltimore-Washington International Airport to pick up their tickets for Flight 77, paying $2,300 in cash. Mihdhar was placed on a CIA watchlist on August 21, 2001, and a note was sent on August 23 to the Department of State and the Immigration and Naturalization Service (INS) suggesting that Mihdhar and Hazmi be added to their watchlists. The Federal Aviation Administration (FAA) was not notified about the two men. On August 23, the CIA informed the FBI that Mihdhar had obtained a U.S. visa in Jeddah. FBI headquarters received a copy of the Visa Express application from the Jeddah embassy on August 24, showing the New York Marriott as Mihdhar's destination. On August 28, the FBI New York field office requested that a criminal case be opened to determine whether Mihdhar was still in the United States, but the request was refused. The FBI ended up treating Mihdhar as an intelligence case, which meant that the FBI's criminal investigators could not work on the case, due to the barrier separating intelligence and criminal operations. An agent in the New York office sent an e-mail to FBI headquarters saying, "Whatever has happened to this, someday someone will die, and the public will not understand why we were not more effective and throwing every resource we had at certain 'problems.'" The reply from headquarters was, "we [at headquarters] are all frustrated with this issue ... [t]hese are the rules. NSLU does not make them up." The FBI contacted Marriott on August 30, requesting that it check guest records, and on September 5, Marriott reported that none of its hotels had any record of Mihdhar checking in. The day before the attacks, Robert Fuller of the New York office requested that the Los Angeles FBI office check all local Sheraton Hotels, as well as Lufthansa and United Airlines bookings, because those were the two airlines Mihdhar had used to enter the country. Neither the Treasury Department's Financial Crimes Enforcement Network nor the FBI's Financial Review Group, both of which have access to credit card and other private financial records, was notified about Mihdhar prior to September 11. Regarding the CIA's refusal to inform the FBI about Mihdhar and Hazmi, author Lawrence Wright suggests the CIA wanted to protect its turf and was concerned about giving sensitive intelligence to FBI Agent John P. O'Neill, whom Alec Station chief Michael Scheuer described as duplicitous. 
Wright also speculates that the CIA may have been protecting intelligence operations overseas, and might have been eying Mihdhar and Hazmi as recruitment targets to obtain intelligence on al-Qaeda, although the CIA was not authorized to operate in the United States and might have been leaving them for Saudi intelligence to recruit. On September 10, 2001, Mihdhar and the other hijackers checked into the Marriott Residence Inn in Herndon, Virginia, near Washington Dulles International Airport. Saleh Ibn Abdul Rahman Hussayen, a prominent Saudi government official, was staying at the same hotel that night, although there is no evidence that they met or knew of each other's presence. At 06:22 on September 11, 2001, the group checked out of the hotel and headed to Dulles airport. At 07:15, Mihdhar and Moqed checked in at the American Airlines ticket counter and arrived at the passenger security checkpoint at 07:20. Both men set off the metal detector and were put through secondary screening. Security video footage later released shows that Moqed was wanded, but the screener did not identify what set off the alarm, and both Moqed and Mihdhar were able to proceed without further hindrance. Mihdhar was also selected by the Computer Assisted Passenger Prescreening System (CAPPS), which involved extra screening of his luggage; however, because Mihdhar did not check any luggage, this had no effect. By 07:50, Mihdhar and the other hijackers, carrying knives and box cutters, had made it through the airport security checkpoint and boarded Flight 77 to Los Angeles. Mihdhar was seated in seat 12B, next to Moqed. The flight was scheduled to depart from Gate D26 at 08:10 but was delayed by 10 minutes. The last routine radio communication from the plane to air traffic control occurred at 08:50:51. At 08:54, Flight 77 deviated from its assigned flight path and began to turn south, at which point the hijackers set the flight's autopilot setting for Washington, D.C. Passenger Barbara Olson called her husband, United States Solicitor General Ted Olson (whose 61st birthday was on that day), and reported that the plane had been hijacked. At 09:37:45, Flight 77 crashed into the west facade of the Pentagon, killing all 64 people aboard, along with 125 in the Pentagon. In the recovery process, remains of the five hijackers were identified through a process of elimination, since their DNA did not match any from the victims, and put into the custody of the FBI. After the attacks, the identification of Mihdhar was one of the first links suggesting that bin Laden had played a role in their organization, since Mihdhar had been seen at the Malaysian conference speaking to bin Laden's associates. The FBI interrogated Quso, who was arrested following the USS "Cole" bombing and in custody in Yemen. Quso was able to identify Mihdhar, Hazmi and Attash in photos provided by the FBI, and he also knew Marwan al-Shehhi, a hijacker aboard United Airlines Flight 175. From Quso, the FBI was able to establish an al-Qaeda link to the attacks. On September 12, 2001, the Toyota Corolla purchased by Mihdhar was found in Dulles International Airport's hourly parking lot. Inside the vehicle, authorities found a letter written by Mohamed Atta, a hijacker aboard American Airlines Flight 11; maps of Washington, D.C. 
and New York City; a cashier's check made out to a Phoenix, Arizona, flight school; four drawings of a Boeing 757 cockpit; a box cutter; and a page with notes and phone numbers, which contained evidence that led investigators to San Diego. On September 19, 2001, the Federal Deposit Insurance Corporation (FDIC) distributed a special alert that listed Mihdhar as still alive, and other reports began suggesting that a number of the alleged hijackers were likewise still alive. For instance, on September 23, 2001, the BBC published an article that suggested Mihdhar and others named as hijackers were still at large. The German magazine Der Spiegel later investigated the BBC's claims of "living" hijackers and reported they were cases of mistaken identities. In 2002, Saudi officials stated that the names of the hijackers were correct and that 15 of the 19 hijackers were Saudi. In 2006, in response to 9/11 conspiracy theories surrounding its original news story, the BBC said that confusion had arisen with the common Arabic names, and that its later reports on the hijackers superseded its original story. In 2005, Army Lt. Col. Anthony Shaffer and Congressman Curt Weldon alleged that the Defense Department data mining project Able Danger identified Mihdhar, Hazmi, Shehhi and Atta as members of a Brooklyn-based al-Qaeda cell in early 2000. Shaffer largely based his allegations on the recollections of Navy Captain Scott Phillpott, who later recanted his recollection, telling investigators that he was "convinced that Atta was not on the chart that we had". Phillpott said that Shaffer was "relying on my recollection 100 percent", and the Defense Department Inspector General's report indicated that Philpott strongly supported the social network analysis techniques used in Able Danger, and might have exaggerated claims of identifying the hijackers. Kurt Vonnegut Kurt Vonnegut Jr. (; November 11, 1922April 11, 2007) was an American writer. In a career spanning over 50 years, Vonnegut published 14 novels, three short story collections, five plays, and five works of non-fiction. He is most famous for his darkly satirical, best-selling novel "Slaughterhouse-Five" (1969). Born and raised in Indianapolis, Indiana, Vonnegut attended Cornell University but dropped out in January 1943 and enlisted in the United States Army. As part of his training, he studied mechanical engineering at Carnegie Institute of Technology (now Carnegie Mellon University) and the University of Tennessee. He was then deployed to Europe to fight in World War II and was captured by the Germans during the Battle of the Bulge. He was interned in Dresden and survived the Allied bombing of the city by taking refuge in a meat locker of the slaughterhouse where he was imprisoned. After the war, Vonnegut married Jane Marie Cox, with whom he had three children. He later adopted his sister's three sons, after she died of cancer and her husband was killed in a train accident. Vonnegut published his first novel, "Player Piano", in 1952. The novel was reviewed positively but was not commercially successful. In the nearly 20 years that followed, Vonnegut published several novels that were only marginally successful, such as "Cat's Cradle" (1963) and "God Bless You, Mr. Rosewater" (1964). Vonnegut's breakthrough was his commercially and critically successful sixth novel, "Slaughterhouse-Five". The book's anti-war sentiment resonated with its readers amidst the ongoing Vietnam War and its reviews were generally positive. 
After its release, "Slaughterhouse-Five" went to the top of "The New York Times" Best Seller list, thrusting Vonnegut into fame. He was invited to give speeches, lectures and commencement addresses around the country and received many awards and honors. Later in his career, Vonnegut published several autobiographical essays and short-story collections, including "Fates Worse Than Death" (1991), and "A Man Without a Country" (2005). After his death, he was hailed as a morbidly comical commentator on the society in which he lived and as one of the most important contemporary writers. Vonnegut's son Mark published a compilation of his father's unpublished compositions, titled "Armageddon in Retrospect". In 2017, Seven Stories Press published "Complete Stories", a collection of Vonnegut's short fiction including 5 previously unpublished stories. "Complete Stories" was collected and introduced by Vonnegut friends and scholars Jerome Klinkowitz and Dan Wakefield. Numerous scholarly works have examined Vonnegut's writing and humor. Kurt Vonnegut Jr. was born on November 11, 1922, in Indianapolis, Indiana. He was the youngest of three children of Kurt Vonnegut Sr. and his wife Edith (born Lieber). His older siblings were Bernard (born 1914) and Alice (born 1917). Vonnegut was descended from German immigrants who settled in the United States in the mid-19th century; his patrilineal great-grandfather, Clemens Vonnegut of Westphalia, Germany, settled in Indianapolis and founded the Vonnegut Hardware Company. Kurt's father, and his father before him, Bernard, were architects; the architecture firm under Kurt Sr. designed such buildings as Das Deutsche Haus (now called "The Athenæum"), the Indiana headquarters of the Bell Telephone Company, and the Fletcher Trust Building. Vonnegut's mother was born into Indianapolis high society, as her family, the Liebers, were among the wealthiest in the city, their fortune derived from ownership of a successful brewery. Although both of Vonnegut's parents were fluent German speakers, the ill feeling toward that country during and after World War I caused the Vonneguts to abandon that culture to show their American patriotism. Thus, they did not teach their youngest son German or introduce him to German literature and tradition, leaving him feeling "ignorant and rootless." Vonnegut later credited Ida Young, his family's African-American cook and housekeeper for the first 10 years of his life, for raising him and giving him values. "[She] gave me decent moral instruction and was exceedingly nice to me. So she was as great an influence on me as anybody." Vonnegut described Young as "humane and wise", adding that "the compassionate, forgiving aspects of [his] beliefs" came from her. The financial security and social prosperity that the Vonneguts once enjoyed were destroyed in a matter of years. The Liebers's brewery was closed in 1921 after the advent of Prohibition in the United States. When the Great Depression hit, few people could afford to build, causing clients at Kurt Sr.'s architectural firm to become scarce. Vonnegut's brother and sister had finished their primary and secondary educations in private schools, but Vonnegut was placed in a public school, called Public School No. 43, now known as the James Whitcomb Riley School. He was not bothered by this, but both his parents were affected deeply by their economic misfortune. His father withdrew from normal life and became what Vonnegut called a "dreamy artist". 
His mother became depressed, withdrawn, bitter, and abusive. She labored to regain the family's wealth and status, and Vonnegut said she expressed hatred "as corrosive as hydrochloric acid" for her husband. Edith Vonnegut forayed into writing and tried to sell short stories to magazines like "Collier's" and "The Saturday Evening Post" with no success. Vonnegut enrolled at Shortridge High School in Indianapolis in 1936. While there, he played clarinet in the school band and became a co-editor (along with Madelyn Pugh) for the Tuesday edition of the school newspaper, "The Shortridge Echo". Vonnegut said his tenure with the "Echo" allowed him to write for a large audience—his fellow students—rather than for a teacher, an experience he said was "fun and easy". "It just turned out that I could write better than a lot of other people", Vonnegut observed. "Each person has something he can do easily and can't imagine why everybody else has so much trouble doing it." After graduating from Shortridge in 1940, Vonnegut enrolled at Cornell University in Ithaca, New York. He wanted to study the humanities or become an architect like his father, but his father and brother, a scientist, urged him to study a "useful" discipline. As a result, Vonnegut majored in biochemistry, but he had little proficiency in the area and was indifferent towards his studies. As his father had been a member at MIT, Vonnegut was entitled to join the Delta Upsilon fraternity, and did. He overcame stiff competition for a place at the university's independent newspaper, "The Cornell Daily Sun", first serving as a staff writer, then as an editor. By the end of his freshman year, he was writing a column titled "Innocents Abroad" which reused jokes from other publications. He later penned a piece, "Well All Right", focusing on pacifism, a cause he strongly supported, arguing against U.S. intervention in World War II. The attack on Pearl Harbor brought the U.S. into the war. Vonnegut was a member of Reserve Officers' Training Corps, but poor grades and a satirical article in Cornell's newspaper cost him his place there. He was placed on academic probation in May 1942 and dropped out the following January. No longer eligible for a student deferment, he faced likely conscription into the United States Army. Instead of waiting to be drafted, he enlisted in the army and in March 1943 reported to Fort Bragg, North Carolina, for basic training. Vonnegut was trained to fire and maintain howitzers and later received instruction in mechanical engineering at the Carnegie Institute of Technology and the University of Tennessee as part of the Army Specialized Training Program (ASTP). In early 1944, the ASTP was canceled due to the Army's need for soldiers to support the D-Day invasion, and Vonnegut was ordered to an infantry battalion at Camp Atterbury, south of Indianapolis in Edinburgh, Indiana, where he trained as a scout. He lived so close to his home that he was "able to sleep in [his] own bedroom and use the family car on weekends". On May 14, 1944, Vonnegut returned home on leave for Mother's Day weekend to discover that his mother had committed suicide the previous night by overdosing on sleeping pills. Possible factors that contributed to Edith Vonnegut's suicide include the family's loss of wealth and status, Vonnegut's forthcoming deployment overseas, and her own lack of success as a writer. She was inebriated at the time and under the influence of prescription drugs. 
Three months after his mother's suicide, Vonnegut was sent to Europe as an intelligence scout with the 106th Infantry Division. In December 1944, he fought in the Battle of the Bulge, the final German offensive of the war. During the battle, the 106th Infantry Division, which had only recently reached the front and was assigned to a "quiet" sector due to its inexperience, was overrun by advancing German armored forces. Over 500 members of the division were killed and over 6,000 were captured. On December 22, Vonnegut was captured with about 50 other American soldiers. Vonnegut was taken by boxcar to a prison camp south of Dresden, in Saxony. During the journey, the Royal Air Force bombed the prisoner trains and killed about 150 men. Vonnegut was sent to Dresden, the "first fancy city [he had] ever seen". He lived in a slaughterhouse when he got to the city, and worked in a factory that made malt syrup for pregnant women. Vonnegut recalled the sirens going off whenever another city was bombed. The Germans did not expect Dresden to get bombed, Vonnegut said. "There were very few air-raid shelters in town and no war industries, just cigarette factories, hospitals, clarinet factories." On February 13, 1945, Dresden became the target of Allied forces. In the hours and days that followed, the Allies engaged in a fierce firebombing of the city. The offensive subsided on February 15, with around 25,000 civilians killed in the bombing. Vonnegut marveled at the level of both the destruction in Dresden and the secrecy that attended it. He had survived by taking refuge in a meat locker three stories underground. "It was cool there, with cadavers hanging all around", Vonnegut said. "When we came up the city was gone ... They burnt the whole damn town down." Vonnegut and other American prisoners were put to work immediately after the bombing, excavating bodies from the rubble. He described the activity as a "terribly elaborate Easter-egg hunt". The American prisoners of war were evacuated on foot to the border of Saxony and Czechoslovakia after General George S. Patton captured Leipzig. With the captives abandoned by their guards, Vonnegut reached a prisoner-of-war repatriation camp in Le Havre, France, before the end of May 1945, with the aid of the Soviets. He returned to the United States and continued to serve in the Army, stationed at Fort Riley, Kansas, typing discharge papers for other soldiers. Soon after he was awarded a Purple Heart about which he remarked "I myself was awarded my country's second-lowest decoration, a Purple Heart for frost-bite." He was discharged from the U.S. Army and returned to Indianapolis. After he returned to the United States, 22-year-old Vonnegut married Jane Marie Cox, his high school girlfriend and classmate since kindergarten, on September 1, 1945. The pair relocated to Chicago; there, Vonnegut enrolled in the University of Chicago on the G.I. Bill, as an anthropology student in an unusual five-year joint undergraduate/graduate program that conferred a master's degree. He augmented his income by working as a reporter for the City News Bureau of Chicago at night. Jane accepted a scholarship from the university to study Russian literature as a graduate student. 
Jane dropped out of the program after becoming pregnant with the couple's first child, Mark (born May 1947), while Kurt also left the University without any degree (despite having completed his undergraduate education) when his master's thesis on the Ghost Dance religious movement was unanimously rejected by the department. Shortly thereafter, General Electric (GE) hired Vonnegut as a publicist for the company's Schenectady, New York, research laboratory. Although the job required a college degree, Vonnegut was hired after claiming to hold a master's degree in anthropology from the University of Chicago. His brother Bernard had worked at GE since 1945, contributing significantly to an iodine-based cloud seeding project. In 1949, Kurt and Jane had a daughter named Edith. Still working for GE, Vonnegut had his first piece, titled "Report on the Barnhouse Effect", published in the February 11, 1950 issue of "Collier's", for which he received $750. Vonnegut wrote another story, after being coached by the fiction editor at "Collier's", Knox Burger, and again sold it to the magazine, this time for $950. Burger suggested he quit GE, a course he had contemplated before. Vonnegut moved with his family to Cape Cod, Massachusetts to write full-time, and left GE in 1951. On Cape Cod, Vonnegut made most of his money writing pieces for magazines such as "Collier's", "The Saturday Evening Post", and "Cosmopolitan". He also did a stint as an English teacher, wrote copy for an advertising agency, and opened the first USA Saab dealership, which eventually failed. In 1952, Vonnegut's first novel, "Player Piano", was published by Scribner's. The novel has a post-Third World War setting, in which factory workers have been replaced by machines. "Player Piano" draws upon Vonnegut's experience as an employee at GE. He satirizes the drive to climb the corporate ladder, one that in "Player Piano" is rapidly disappearing as automation increases, putting even executives out of work. His central character, Paul Proteus, has an ambitious wife, a backstabbing assistant, and a feeling of empathy for the poor. Sent by his boss, Kroner, as a double agent among the poor (who have all the material goods they want, but little sense of purpose), he leads them in a machine-smashing, museum-burning revolution. "Player Piano" expresses Vonnegut's opposition to McCarthyism, something made clear when the Ghost Shirts, the revolutionary organization Paul penetrates and eventually leads, is referred to by one character as "fellow travelers". In "Player Piano", Vonnegut originates many of the techniques he would use in his later works. The comic, heavy-drinking Shah of Bratpuhr, an outsider to this dystopian corporate United States, is able to ask many questions that an insider would not think to ask, or would cause offense by doing so. For example, when taken to see the artificially intelligent supercomputer EPICAC, the Shah asks it "what are people for?" and receives no answer. Speaking for Vonnegut, he dismisses it as a "false god". This type of alien visitor would recur throughout Vonnegut's literature. "The New York Times" writer and critic Granville Hicks gave "Player Piano" a positive review, favorably comparing it to Aldous Huxley's "Brave New World". Hicks called Vonnegut a "sharp-eyed satirist". None of the reviewers considered the novel particularly important. 
Several editions were printed—one by Bantam with the title "Utopia 14", and another by the Doubleday Science Fiction Book Club—whereby Vonnegut gained the repute of a science fiction writer, a genre held in disdain by writers at that time. He defended the genre, and deplored a perceived sentiment that "no one can simultaneously be a respectable writer and understand how a refrigerator works." After "Player Piano", Vonnegut continued to sell short stories to various magazines. In 1954 the couple had a third child, Nanette. With a growing family and no financially successful novels yet, Vonnegut's short stories sustained the family. In 1958, his sister, Alice, died of cancer two days after her husband, James Carmalt Adams, was killed in a train accident. Vonnegut adopted Alice's three young sons—James, Steven, and Kurt, aged 14, 11, and 9 respectively. Grappling with family challenges, Vonnegut continued to write, publishing novels vastly dissimilar in terms of plot. "The Sirens of Titan" (1959) features a Martian invasion of Earth, as experienced by a bored billionaire, Malachi Constant. He meets Winston Rumfoord, an aristocratic space traveler, who is virtually omniscient but stuck in a time warp that allows him to appear on Earth every 59 days. The billionaire learns that his actions and the events of all of history are determined by a race of robotic aliens from the planet Tralfamadore, who need a replacement part that can only be produced by an advanced civilization in order to repair their spaceship and return home—human history has been manipulated to produce it. Some human structures, such as the Kremlin, are coded signals from the aliens to their ship as to how long it may expect to wait for the repair to take place. Reviewers were uncertain what to think of the book, with one comparing it to Offenbach's opera "The Tales of Hoffmann". Rumfoord, who is based on Franklin D. Roosevelt, also physically resembles the former president. Rumfoord is described, "he put a cigarette in a long, bone cigarette holder, lighted it. He thrust out his jaw. The cigarette holder pointed straight up." William Rodney Allen, in his guide to Vonnegut's works, stated that Rumfoord foreshadowed the fictional political figures who would play major roles in "God Bless You, Mr. Rosewater" and "Jailbird". "Mother Night", published in 1961, received little attention at the time of its publication. Howard W. Campbell Jr., Vonnegut's protagonist, is an American who goes to Nazi Germany during the war as a double agent for the U.S. Office of Strategic Services, and rises to the regime's highest ranks as a radio propagandist. After the war, the spy agency refuses to clear his name and he is eventually imprisoned by the Israelis in the same cell block as Adolf Eichmann, and later commits suicide. Vonnegut wrote in a foreword to a later edition, "we are what we pretend to be, so we must be careful about what we pretend to be". Literary critic Lawrence Berkove considered the novel, like Mark Twain's "Adventures of Huckleberry Finn", to illustrate the tendency for "impersonators to get carried away by their impersonations, to become what they impersonate and therefore to live in a world of illusion". Also published in 1961 was Vonnegut's short story, "Harrison Bergeron", set in a dystopic future where all are equal, even if that means disfiguring beautiful people and forcing the strong or intelligent to wear devices that negate their advantages. 
Fourteen-year-old Harrison is a genius and athlete forced to wear record-level "handicaps" and imprisoned for attempting to overthrow the government. He escapes to a television studio, tears away his handicaps, and frees a ballerina from her lead weights. As they dance, they are killed by the Handicapper General, Diana Moon Glampers. Vonnegut, in a later letter, suggested that "Harrison Bergeron" might have sprung from his envy and self-pity as a high school misfit. In his 1976 biography of Vonnegut, Stanley Schatt suggested that the short story shows "in any leveling process, what really is lost, according to Vonnegut, is beauty, grace, and wisdom". Darryl Hattenhauer, in his 1998 journal article on "Harrison Bergeron", theorized that the story was a satire on American Cold War misunderstandings of communism and socialism. With "Cat's Cradle" (1963), Allen wrote, "Vonnegut hit full stride for the first time". The narrator, John, intends to write of Dr. Felix Hoenikker, one of the fictional fathers of the atomic bomb, seeking to cover the scientist's human side. Hoenikker, in addition to the bomb, has developed another threat to mankind: ice-9, a form of solid water that is stable at room temperature; if a particle of it is dropped into liquid water, all of that water becomes ice-9. Much of the second half of the book is spent on the fictional Caribbean island of San Lorenzo, where John explores a religion called Bokononism, whose holy books (excerpts from which are quoted) give the novel the moral core science does not supply. After the oceans are converted to ice-9, wiping out most of humankind, John wanders the frozen surface, seeking to have himself and his story survive. Vonnegut based the title character of "God Bless You, Mr. Rosewater" (1964) on an accountant he knew on Cape Cod, who specialized in clients in trouble and often had to comfort them. Eliot Rosewater, the wealthy son of a Republican senator, seeks to atone for his wartime killing of noncombatant firefighters by serving in a volunteer fire department, and by giving away money to those in trouble or need. Stress from a battle for control of his charitable foundation pushes him over the edge, and he is placed in a mental hospital. He recovers, and ends the financial battle by declaring the children of his county to be his heirs. Allen deemed "God Bless You, Mr. Rosewater" more "a cry from the heart than a novel under its author's full intellectual control", one that reflected the family and emotional stresses Vonnegut was going through at the time. After spending almost two years at the Writers' Workshop at the University of Iowa, teaching one course each term, Vonnegut was awarded a Guggenheim Fellowship for research in Germany. By the time he won it, in March 1967, he was becoming a well-known writer. He used the funds to travel in Eastern Europe, including to Dresden, where he found many prominent buildings still in ruins. At the time of the bombing, Vonnegut had not appreciated the sheer scale of destruction in Dresden; his enlightenment came only slowly as information dribbled out, and based on early figures he came to believe that 135,000 had died there. Vonnegut had been writing about his war experiences at Dresden ever since he returned from the war, but had never been able to write anything acceptable to himself or his publishers—Chapter 1 of "Slaughterhouse-Five" tells of his difficulties. Released in 1969, the novel rocketed Vonnegut to fame. 
It tells of the life of Billy Pilgrim, who, like Vonnegut, was born in 1922 and survives the bombing of Dresden. The story is told in a non-linear fashion, with many of the story's climaxes—Billy's death in 1976, his kidnapping by aliens from the planet Tralfamadore nine years earlier, and the execution of Billy's friend Edgar Derby in the ashes of Dresden for stealing a teapot—disclosed in the story's first pages. "Slaughterhouse-Five" received generally positive reviews, with Michael Crichton writing in "The New Republic", "he writes about the most excruciatingly painful things. His novels have attacked our deepest fears of automation and the bomb, our deepest political guilts, our fiercest hatreds and loves. No one else writes books on these subjects; they are inaccessible to normal novelists." The book went immediately to the top of "The New York Times" Best Seller list. Vonnegut's earlier works had appealed strongly to many college students, and the antiwar message of "Slaughterhouse-Five" resonated with a generation marked by the Vietnam War. He later stated that the loss of confidence in government that Vietnam caused finally allowed for an honest conversation regarding events like Dresden. After "Slaughterhouse-Five" was published, Vonnegut embraced the fame and financial security that attended its release. He was hailed as a hero of the burgeoning anti-war movement in the United States, was invited to speak at numerous rallies, and gave college commencement addresses around the country. In 1970, he was also a correspondent in Biafra during the Nigerian Civil War. In addition to briefly teaching at Harvard University as a lecturer in creative writing in 1970, Vonnegut taught at the City College of New York as a distinguished professor during the 1973–1974 academic year. He was later elected vice president of the National Institute of Arts and Letters, and given honorary degrees by, among others, Indiana University and Bennington College. Vonnegut also wrote a play called "Happy Birthday, Wanda June", which opened on October 7, 1970, at New York's Theatre de Lys. Receiving mixed reviews, it closed on March 14, 1971. In 1972, Universal Pictures adapted "Slaughterhouse-Five" into a film which the author said was "flawless". Meanwhile, Vonnegut's personal life was disintegrating. His wife Jane had embraced Christianity, which was contrary to Vonnegut's atheistic beliefs, and with five of their six children having left home, Vonnegut said the two were forced to find "other sorts of seemingly important work to do." The couple battled over their differing beliefs until Vonnegut moved from their Cape Cod home to New York in 1971. Vonnegut called the disagreements "painful", and said the resulting split was a "terrible, unavoidable accident that we were ill-equipped to understand." The couple divorced but remained friends until Jane's death in late 1986. Beyond his marriage, he was deeply affected when his son Mark suffered a mental breakdown in 1972, which exacerbated Vonnegut's chronic depression and led him to take Ritalin. When he stopped taking the drug in the mid-1970s, he began to see a psychologist weekly. Vonnegut's difficulties materialized in numerous ways; most distinct, though, was the painfully slow progress he was making on his next novel, the darkly comical "Breakfast of Champions". In 1971, Vonnegut stopped writing the novel altogether. When it was finally released in 1973, it was panned critically. In Thomas S. 
Hischak's book "American Literature on Stage and Screen", "Breakfast of Champions" was called "funny and outlandish", but reviewers noted that it "lacks substance and seems to be an exercise in literary playfulness." Vonnegut's 1976 novel "Slapstick", which meditates on the relationship between him and his sister (Alice), met a similar fate. In "The New York Times"'s review of "Slapstick", Christopher Lehmann-Haupt said Vonnegut "seems to be putting less effort into [storytelling] than ever before", and that "it still seems as if he has given up storytelling after all." At times, Vonnegut was disgruntled by the personal nature of his detractors' complaints. In 1979, Vonnegut married Jill Krementz, a photographer whom he met while she was working on a series about writers in the early 1970s. With Jill, he adopted a daughter, Lily, when the baby was three days old. In subsequent years, his popularity resurged as he published several satirical books, including "Jailbird" (1979), "Deadeye Dick" (1982), "Galápagos" (1985), "Bluebeard" (1987), and "Hocus Pocus" (1990). Although he remained a prolific writer in the 1980s Vonnegut struggled with depression and attempted suicide in 1984. Two years later, Vonnegut was seen by a younger generation when he played himself in Rodney Dangerfield's film "Back to School". The last of Vonnegut's fourteen novels, "Timequake" (1997), was, as University of Detroit history professor and Vonnegut biographer Gregory Sumner said, "a reflection of an aging man facing mortality and testimony to an embattled faith in the resilience of human awareness and agency." Vonnegut's final book, a collection of essays entitled "A Man Without a Country" (2005), became a bestseller. In a 2006 "Rolling Stone" interview, Vonnegut sardonically stated that he would sue the Brown & Williamson tobacco company, the maker of the Pall Mall-branded cigarettes he had been smoking since he was twelve or fourteen years old, for false advertising. "And do you know why?" he said. "Because I'm 83 years old. The lying bastards! On the package Brown & Williamson promised to kill me." He died on the night of April 11, 2007 in Manhattan, as a result of brain injuries incurred several weeks prior from a fall at his New York brownstone home. His death was reported by his wife Jill. Vonnegut was 84 years old. At the time of his death, Vonnegut had written fourteen novels, three short story collections, five plays and five non-fiction books. A book composed of Vonnegut's unpublished pieces, "Armageddon in Retrospect", was compiled and posthumously published by Vonnegut's son Mark in 2008. When asked about the impact Vonnegut had on his work, author Josip Novakovich stated that he has "much to learn from Vonnegut—how to compress things and yet not compromise them, how to digress into history, quote from various historical accounts, and not stifle the narrative. The ease with which he writes is sheerly masterly, Mozartian." "Los Angeles Times" columnist Gregory Rodriguez said that the author will "rightly be remembered as a darkly humorous social critic and the premier novelist of the counterculture", and Dinitia Smith of "The New York Times" dubbed Vonnegut the "counterculture's novelist." Vonnegut has inspired numerous posthumous tributes and works. In 2008, the Kurt Vonnegut Society was established, and in November 2010, the Kurt Vonnegut Memorial Library was opened in Vonnegut's hometown of Indianapolis. 
The Library of America published a compendium of Vonnegut's compositions between 1963 and 1973 the following April, and another compendium of his earlier works in 2012. Late 2011 saw the release of two Vonnegut biographies, Gregory Sumner's "Unstuck in Time" and Charles J. Shields's "And So It Goes". Shields's biography of Vonnegut created some controversy. According to "The Guardian", the book portrays Vonnegut as distant, cruel and nasty. "Cruel, nasty and scary are the adjectives commonly used to describe him by the friends, colleagues, and relatives Shields quotes", said "The Daily Beast"'s Wendy Smith. "Towards the end he was very feeble, very depressed and almost morose", said Jerome Klinkowitz of the University of Northern Iowa, who has examined Vonnegut in depth. Vonnegut's works have evoked ire on several occasions. His most prominent novel, "Slaughterhouse-Five", has been objected to or removed at various institutions in at least 18 instances. In the case of "Island Trees School District v. Pico", the United States Supreme Court ruled that a school district's ban on "Slaughterhouse-Five"—which the board had called "anti-American, anti-Christian, anti-Semitic, and just plain filthy"—and eight other novels was unconstitutional. When a school board in Republic, Missouri, decided to withdraw Vonnegut's novel from its libraries, the Kurt Vonnegut Memorial Library offered a free copy to all the students of the district. Tally, writing in 2013, suggests that Vonnegut has only recently become the subject of serious study rather than fan adulation, and much is yet to be written about him. "The time for scholars to say 'Here's why Vonnegut is worth reading' has definitively ended, thank goodness. We know he's worth reading. Now tell us things we don't know." Todd F. Davis notes that Vonnegut's work is kept alive by his loyal readers, who have "significant influence as they continue to purchase Vonnegut's work, passing it on to subsequent generations and keeping his entire canon in print—an impressive list of more than twenty books that [Dell Publishing] has continued to refurbish and hawk with new cover designs." Donald E. Morse notes that Vonnegut "is now firmly, if somewhat controversially, ensconced in the American and world literary canon as well as in high school, college and graduate curricula". The Science Fiction and Fantasy Hall of Fame inducted Vonnegut posthumously in 2015. The asteroid 25399 Vonnegut is named in his honor. In the introduction to "Slaughterhouse-Five", Vonnegut recounts meeting filmmaker Harrison Starr at a party; Starr asked him whether his forthcoming book was an anti-war novel, to which Vonnegut replied, "I guess". Starr responded, "Why don't you write an anti-glacier novel?" This underlined Vonnegut's belief that wars were, unfortunately, inevitable, but that it was important to ensure the wars one fought were just wars. In 2011, NPR wrote, "Kurt Vonnegut's blend of anti-war sentiment and satire made him one of the most popular writers of the 1960s." Vonnegut stated in a 1987 interview that "my own feeling is that civilization ended in World War I, and we're still trying to recover from that", and that he wanted to write war-focused works without glamorizing war itself. Vonnegut had not intended to publish again, but his anger against the George W. Bush administration led him to write "A Man Without a Country". 
"Slaughterhouse-Five" is the Vonnegut novel best known for its antiwar themes, but the author expressed his beliefs in ways beyond the depiction of the destruction of Dresden. He has one character, Mary O'Hare, opine that "wars were partly encouraged by books and movies", made by "Frank Sinatra or John Wayne or some of those other glamorous, war-loving, dirty old men". Vonnegut made a number of comparisons between Dresden and the bombing of Hiroshima in "Slaughterhouse-Five" and wrote in "Palm Sunday" (1991) that "I learned how vile that religion of mine could be when the atomic bomb was dropped on Hiroshima". Nuclear war, or at least deployed nuclear arms, is mentioned in almost all of Vonnegut's novels. In "Player Piano", the computer EPICAC is given control of the nuclear arsenal, and is charged with deciding whether to use high-explosive or nuclear arms. In "Cat's Cradle", John's original purpose in setting pen to paper is to write an account of what prominent Americans had been doing as Hiroshima was bombed. Vonnegut was an atheist and a humanist, serving as the honorary president of the American Humanist Association. In an interview for "Playboy", he stated that his forebears who came to the United States did not believe in God, and he learned his atheism from his parents. He did not however disdain those who seek the comfort of religion, hailing church associations as a type of extended family. Like his great-grandfather Clemens, Vonnegut was a freethinker. He occasionally attended a Unitarian church, but with little consistency. In his autobiographical work "Palm Sunday", Vonnegut says he is a "Christ-worshiping agnostic"; in a speech to the Unitarian Universalist Association, he called himself a "Christ-loving atheist". However, he was keen to stress that he was not a Christian. Vonnegut was an admirer of Jesus' Sermon on the Mount, particularly the Beatitudes, and incorporated it into his own doctrines. He also referred to it in many of his works. In his 1991 book "Fates Worse than Death", Vonnegut suggests that during the Reagan administration, "anything that sounded like the Sermon on the Mount was socialistic or communistic, and therefore anti-American". In "Palm Sunday", he wrote that "the Sermon on the Mount suggests a mercifulness that can never waver or fade." However, Vonnegut had a deep dislike for certain aspects of Christianity, often reminding his readers of the bloody history of the Crusades and other religion-inspired violence. He despised the televangelists of the late 20th century, feeling that their thinking was narrow-minded. Religion features frequently in Vonnegut's work, both in his novels and elsewhere. He laced a number of his speeches with religion-focused rhetoric, and was prone to using such expressions as "God forbid" and "thank God". He once wrote his own version of the Requiem Mass, which he then had translated into Latin and set to music. In "God Bless You, Dr. Kevorkian", Vonnegut goes to heaven after he is euthanized by Dr. Jack Kevorkian. Once in heaven, he interviews 21 deceased celebrities, including Isaac Asimov, William Shakespeare, and Kilgore Trout—the last a fictional character from several of his novels. Vonnegut's works are filled with characters founding new faiths, and religion often serves as a major plot device, for example in "Player Piano", "The Sirens of Titan" and "Cat's Cradle". In "The Sirens of Titan", Rumfoord proclaims The Church of God the Utterly Indifferent. 
"Slaughterhouse-Five" sees Billy Pilgrim, lacking religion himself, nevertheless become a chaplain's assistant in the military and display a large crucifix on his bedroom wall. In "Cat's Cradle", Vonnegut invented the religion of Bokononism. Vonnegut did not particularly sympathize with liberalism or conservatism, and mused on the specious simplicity of American politics. "If you want to take my guns away from me, and you're all for murdering fetuses, and love it when homosexuals marry each other [...] you're a liberal. If you are against those perversions and for the rich, you're a conservative. What could be simpler?" Regarding political parties, Vonnegut said, "The two real political parties in America are the Winners and the Losers. The people don't acknowledge this. They claim membership in two imaginary parties, the Republicans and the Democrats, instead." Vonnegut disregarded more mainstream political ideologies in favor of socialism, which he thought could provide a valuable substitute for what he saw as social Darwinism and a spirit of "survival of the fittest" in American society, believing that "socialism would be a good for the common man". Vonnegut would often return to a quote by socialist and five-time presidential candidate Eugene V. Debs: "As long as there is a lower class, I am in it. As long as there is a criminal element, I'm of it. As long as there is a soul in prison, I am not free." Vonnegut expressed disappointment that communism and socialism seemed to be unsavory topics to the average American, and believed that they may offer beneficial substitutes to contemporary social and economic systems. Vonnegut's writing was inspired by an eclectic mix of sources. When he was younger, Vonnegut stated that he read works of pulp fiction, science fiction, fantasy, and action-adventure. He also read the Classics, like those of Aristophanes. Aristophanes, like Vonnegut, wrote humorous critiques of contemporary society. Vonnegut's life and work also share similarities with that of "Adventures of Huckleberry Finn" writer Mark Twain. Both shared pessimistic outlooks on humanity, and a skeptical take on religion, and, as Vonnegut put it, were both "associated with the enemy in a major war", as Twain briefly enlisted in the South's cause during the American Civil War, and Vonnegut's German name and ancestry connected him with the United States' enemy in both world wars. Vonnegut called George Orwell his favorite writer, and admitted that he tried to emulate Orwell. "I like his concern for the poor, I like his socialism, I like his simplicity", Vonnegut said. Vonnegut also said that Orwell's "Nineteen Eighty-Four", and "Brave New World" by Aldous Huxley, heavily influenced his debut novel, "Player Piano", in 1952. Vonnegut commented that Robert Louis Stevenson's stories were emblems of thoughtfully put together works that he tried to mimic in his own compositions. Vonnegut also hailed playwright and socialist George Bernard Shaw as "a hero of [his]", and an "enormous influence." Within his own family, Vonnegut stated that his mother, Edith, had the greatest influence on him. "[My] mother thought she might make a new fortune by writing for the slick magazines. She took short-story courses at night. She studied magazines the way gamblers study racing forms." Early on in his career, Vonnegut decided to model his style after Henry David Thoreau, who wrote as if from the perspective of a child, allowing Thoreau's works to be more widely comprehensible. 
Using a youthful narrative voice allowed Vonnegut to deliver concepts in a modest and straightforward way. Other influences on Vonnegut include "The War of the Worlds" author H. G. Wells and satirist Jonathan Swift. Vonnegut credited newspaperman H. L. Mencken with inspiring him to become a journalist. In his book "Popular Contemporary Writers", Michael D. Sharp describes Vonnegut's linguistic style as straightforward: his sentences are concise, his language simple, his paragraphs brief, and his tone ordinary and conversational. Vonnegut uses this style to convey normally complex subject matter in a way that is intelligible to a large audience. He credited his time as a journalist for this ability, pointing to his work with the Chicago City News Bureau, which required him to convey stories in telephone conversations. Vonnegut's compositions are also laced with distinct references to his own life, notably in "Slaughterhouse-Five" and "Slapstick". Vonnegut believed that ideas, and the convincing communication of those ideas to the reader, were vital to literary art. He did not always sugarcoat his points: much of "Player Piano" leads up to the moment when Paul, on trial and hooked up to a lie detector, is asked to tell a falsehood, and states, "every new piece of scientific knowledge is a good thing for humanity". Robert T. Tally Jr., in his volume on Vonnegut's novels, wrote, "rather than tearing down and destroying the icons of twentieth-century, middle-class American life, Vonnegut gently reveals their basic flimsiness." Vonnegut did not simply propose utopian solutions to the ills of American society, but showed how such schemes would not allow ordinary people to live lives free from want and anxiety. The large artificial families that the U.S. population is formed into in "Slapstick" soon serve as an excuse for tribalism, with people giving no help to those not part of their group, and with the extended family's place in the social hierarchy becoming vital. In the introduction to their essay "Kurt Vonnegut and Humor", Tally and Peter C. Kunze suggest that Vonnegut was not a "black humorist", but a "frustrated idealist" who used "comic parables" to teach the reader absurd, bitter or hopeless truths, with his grim witticisms serving to make the reader laugh rather than cry. "Vonnegut makes sense through humor, which is, in the author's view, as valid a means of mapping this crazy world as any other strategies." Vonnegut resented being called a black humorist, feeling that, as with many literary labels, it allows readers to disregard aspects of a writer's work that do not fit the label's stereotype. Vonnegut's works have, at various times, been labeled science fiction, satire and postmodern. He also resisted such labels, but his works do contain common tropes that are often associated with those genres. In several of his books, Vonnegut imagines alien societies and civilizations, as is common in works of science fiction. Vonnegut does this to emphasize or exaggerate absurdities and idiosyncrasies in our own world. Furthermore, Vonnegut often finds humor in the problems that plague societies, as satirical works do. However, literary theorist Robert Scholes noted in "Fabulation and Metafiction" that Vonnegut "reject[s] the traditional satirist's faith in the efficacy of satire as a reforming instrument. [He has] a more subtle faith in the humanizing value of laughter." Examples of postmodernism may also be found in Vonnegut's works. 
Postmodernism often entails a response to the theory that the truths of the world will be discovered through science. Postmodernists contend that truth is subjective, rather than objective, as it is biased towards each individual's beliefs and outlook on the world. They often use unreliable first-person narration and narrative fragmentation. One critic has argued that Vonnegut's most famous novel, "Slaughterhouse-Five", features a metafictional, Janus-headed outlook as it seeks both to represent actual historical events and to problematize the very notion of doing exactly that. This is encapsulated in the opening lines of the novel: "All this happened, more or less. The war parts, anyway, are pretty much true." This bombastic opening – "All this happened" – "reads like a declaration of complete mimesis", which is radically called into question in the rest of the quote; "[t]his creates an integrated perspective that seeks out extratextual themes [like war and trauma] while thematizing the novel's textuality and inherent constructedness at one and the same time." While Vonnegut does use such elements as fragmentation and metafiction in some of his works, he more distinctly focuses on the peril posed by individuals who find subjective truths, mistake them for objective truths, and then proceed to impose those truths on others. Vonnegut was a vocal critic of the society in which he lived, and this was reflected in his writings. Several key social themes recur in Vonnegut's works, such as wealth, the lack of it, and its unequal distribution within a society. In "The Sirens of Titan", the novel's protagonist, Malachi Constant, is exiled to one of Saturn's moons, Titan, as a result of his vast wealth, which has made him arrogant and wayward. In "God Bless You, Mr. Rosewater", readers may find it difficult to determine whether the rich or the poor are in worse circumstances, as the lives of both groups' members are ruled by their wealth or their poverty. Further, in "Hocus Pocus", the protagonist is named Eugene Debs Hartke, a homage to the famed socialist Eugene V. Debs and to Vonnegut's own socialist views. In "Kurt Vonnegut: A Critical Companion", Thomas F. Marvin states: "Vonnegut points out that, left unchecked, capitalism will erode the democratic foundations of the United States." Marvin suggests that Vonnegut's works demonstrate what happens when a "hereditary aristocracy" develops, where wealth is inherited along familial lines: the ability of poor Americans to overcome their situations is greatly or completely diminished. Vonnegut also often laments social Darwinism, and a "survival of the fittest" view of society. He points out that social Darwinism leads to a society that condemns its poor for their own misfortune, and fails to help them out of their poverty because "they deserve their fate". Vonnegut also confronts the idea of free will in a number of his pieces. In "Slaughterhouse-Five" and "Timequake", the characters have no choice in what they do; in "Breakfast of Champions", characters are very obviously stripped of their free will and even receive it as a gift; and in "Cat's Cradle", Bokononism views free will as heretical. The majority of Vonnegut's characters are estranged from their actual families and seek to build replacement or extended families. For example, the engineers in "Player Piano" call their manager's spouse "Mom". 
In "Cat's Cradle", Vonnegut devises two separate methods for loneliness to be combated: A "karass", which is a group of individuals appointed by God to do his will, and a "granfalloon", defined by Marvin as a "meaningless association of people, such as a fraternal group or a nation". Similarly, in "Slapstick", the U.S. government codifies that all Americans are a part of large extended families. Fear of the loss of one's purpose in life is a theme in Vonnegut's works. The Great Depression forced Vonnegut to witness the devastation many people felt when they lost their jobs, and while at General Electric, Vonnegut witnessed machines being built to take the place of human labor. He confronts these things in his works through references to the growing use of automation and its effects on human society. This is most starkly represented in his first novel, "Player Piano", where many Americans are left purposeless and unable to find work as machines replace human workers. Loss of purpose is also depicted in "Galápagos", where a florist rages at her spouse for creating a robot able to do her job, and in "Timequake", where an architect kills himself when replaced by computer software. Suicide by fire is another common theme in Vonnegut's works; the author often returns to the theory that "many people are not fond of life." He uses this as an explanation for why humans have so severely damaged their environments, and made devices such as nuclear weapons that can make their creators extinct. In "Deadeye Dick", Vonnegut features the neutron bomb, which he claims is designed to kill people, but leave buildings and structures untouched. He also uses this theme to demonstrate the recklessness of those who put powerful, apocalypse-inducing devices at the disposal of politicians. "What is the point of life?" is a question Vonnegut often pondered in his works. When one of Vonnegut's characters, Kilgore Trout, finds the question "What is the purpose of life?" written in a bathroom, his response is, "To be the eyes and ears and conscience of the Creator of the Universe, you fool." Marvin finds Trout's theory curious, given that Vonnegut was an atheist, and thus for him, there is no Creator to report back to, and comments that, "[as] Trout chronicles one meaningless life after another, readers are left to wonder how a compassionate creator could stand by and do nothing while such reports come in." In the epigraph to "Bluebeard", Vonnegut quotes his son Mark, and gives an answer to what he believes is the meaning of life: "We are here to help each other get through this thing, whatever it is." Unless otherwise cited, items in this list are taken from Thomas F. Marvin's 2002 book "Kurt Vonnegut: A Critical Companion", and the date in brackets is the date the work was first published: Novels Short fiction collections Nonfiction Interviews Art Knights Templar The Poor Fellow-Soldiers of Christ and of the Temple of Solomon (), also known as the Order of Solomon's Temple, the Knights Templar or simply as Templars, were a Catholic military order recognised in 1139 by papal bull "Omne Datum Optimum" of the Holy See. The order was founded in 1119 and was active until about 1312. The order, which was among the wealthiest and most powerful, became a favoured charity throughout Christendom and grew rapidly in membership and power. They were prominent in Christian finance. Templar knights, in their distinctive white mantles with a red cross, were among the most skilled fighting units of the Crusades. 
Non-combatant members of the order, who formed as much as 90% of the order's members, managed a large economic infrastructure throughout Christendom, developing innovative financial techniques that were an early form of banking, building its own network of nearly 1,000 commanderies and fortifications across Europe and the Holy Land, and arguably forming the world's first multinational corporation. The Templars were closely tied to the Crusades; when the Holy Land was lost, support for the order faded. Rumours about the Templars' secret initiation ceremony created distrust, and King Philip IV of France – deeply in debt to the order – took advantage of the situation to gain control over them. In 1307, he had many of the order's members in France arrested, tortured into giving false confessions, and burned at the stake. Pope Clement V disbanded the order in 1312 under pressure from King Philip. The abrupt reduction in power of a significant group in European society gave rise to speculation, legend, and legacy through the ages. After Europeans in the First Crusade captured Jerusalem in 1099, many Christians made pilgrimages to various sacred sites in the Holy Land. Although the city of Jerusalem was relatively secure under Christian control, the rest of Outremer was not. Bandits and marauding highwaymen preyed upon pilgrims, who were routinely slaughtered, sometimes by the hundreds, as they attempted to make the journey from the coastline at Jaffa through to the interior of the Holy Land. In 1119, the French knight Hugues de Payens approached King Baldwin II of Jerusalem and Warmund, Patriarch of Jerusalem, and proposed creating a monastic order for the protection of these pilgrims. King Baldwin and Patriarch Warmund agreed to the request, probably at the Council of Nablus in January 1120, and the king granted the Templars a headquarters in a wing of the royal palace on the Temple Mount in the captured Al-Aqsa Mosque. The Temple Mount had a mystique because it was above what was believed to be the ruins of the Temple of Solomon. The Crusaders therefore referred to the Al-Aqsa Mosque as Solomon's Temple, and from this location the new order took the name of "Poor Knights of Christ and the Temple of Solomon", or "Templar" knights. The order, with about nine knights including Godfrey de Saint-Omer and André de Montbard, had few financial resources and relied on donations to survive. Their emblem was of two knights riding on a single horse, emphasising the order's poverty. The impoverished status of the Templars did not last long. They had a powerful advocate in Saint Bernard of Clairvaux, a leading Church figure, the French abbot primarily responsible for the founding of the Cistercian Order of monks and a nephew of André de Montbard, one of the founding knights. Bernard put his weight behind them and wrote persuasively on their behalf in the letter 'In Praise of the New Knighthood', and in 1129, at the Council of Troyes, he led a group of leading churchmen to officially approve and endorse the order on behalf of the church. With this formal blessing, the Templars became a favoured charity throughout Christendom, receiving money, land, businesses, and noble-born sons from families who were eager to help with the fight in the Holy Land. Another major benefit came in 1139, when Pope Innocent II's papal bull "Omne Datum Optimum" exempted the order from obedience to local laws. 
This ruling meant that the Templars could pass freely through all borders, were not required to pay any taxes, and were exempt from all authority except that of the pope. With its clear mission and ample resources, the order grew rapidly. Templars were often the advance shock troops in key battles of the Crusades, as the heavily armoured knights on their warhorses would set out to charge at the enemy, ahead of the main army bodies, in an attempt to break opposition lines. One of their most famous victories was in 1177 during the Battle of Montgisard, where some 500 Templar knights helped several thousand infantry to defeat Saladin's army of more than 26,000 soldiers. Although the primary mission of the order was militaristic, relatively few members were combatants. The others acted in support positions to assist the knights and to manage the financial infrastructure. The Templar Order, though its members were sworn to individual poverty, was given control of wealth beyond direct donations. A nobleman who was interested in participating in the Crusades might place all his assets under Templar management while he was away. Accumulating wealth in this manner throughout Christendom and the Outremer, the order in 1150 began generating letters of credit for pilgrims journeying to the Holy Land: pilgrims deposited their valuables with a local Templar preceptory before embarking, received a document indicating the value of their deposit, then used that document upon arrival in the Holy Land to retrieve their funds in an amount of treasure of equal value. This innovative arrangement was an early form of banking and may have been the first formal system to support the use of cheques; it improved the safety of pilgrims by making them less attractive targets for thieves, and also contributed to the Templar coffers. Based on this mix of donations and business dealing, the Templars established financial networks across the whole of Christendom. They acquired large tracts of land, both in Europe and the Middle East; they bought and managed farms and vineyards; they built massive stone cathedrals and castles; they were involved in manufacturing, import and export; they had their own fleet of ships; and at one point they even owned the entire island of Cyprus. The Order of the Knights Templar arguably qualifies as the world's first multinational corporation. In the mid-12th century, the tide began to turn in the Crusades. The Muslim world had become more united under effective leaders such as Saladin, and dissension arose amongst Christian factions in, and concerning, the Holy Land. The Knights Templar were occasionally at odds with the two other Christian military orders, the Knights Hospitaller and the Teutonic Knights, and decades of internecine feuds weakened Christian positions, both politically and militarily. After the Templars were involved in several unsuccessful campaigns, including the pivotal Battle of Hattin, Jerusalem was recaptured by Muslim forces under Saladin in 1187. The Holy Roman Emperor Frederick II reclaimed the city for Christians in the Sixth Crusade of 1229, without Templar aid, but only held it briefly for a little more than a decade. In 1244, the Ayyubid dynasty together with Khwarezmi mercenaries recaptured Jerusalem, and the city did not return to Western control until 1917 when, during World War I, the British captured it from the Ottoman Empire. 
The Templars were forced to relocate their headquarters to other cities in the north, such as the seaport of Acre, which they held for the next century. It was lost in 1291, followed by their last mainland strongholds, Tortosa (Tartus in what is now Syria) and Atlit in present-day Israel. Their headquarters then moved to Limassol on the island of Cyprus, and they also attempted to maintain a garrison on tiny Arwad Island, just off the coast from Tortosa. In 1300, there was some attempt to engage in coordinated military efforts with the Mongols via a new invasion force at Arwad. In 1302 or 1303, however, the Templars lost the island to the Egyptian Mamluk Sultanate in the Siege of Arwad. With the island gone, the Crusaders lost their last foothold in the Holy Land. With the order's military mission now less important, support for the organization began to dwindle. The situation was complex, however, since during the two hundred years of their existence, the Templars had become a part of daily life throughout Christendom. The organisation's Templar Houses, hundreds of which were dotted throughout Europe and the Near East, gave them a widespread presence at the local level. The Templars still managed many businesses, and many Europeans had daily contact with the Templar network, such as by working at a Templar farm or vineyard, or using the order as a bank in which to store personal valuables. The order was still not subject to local government, making it everywhere a "state within a state" – its standing army, though it no longer had a well-defined mission, could pass freely through all borders. This situation heightened tensions with some European nobility, especially as the Templars were indicating an interest in founding their own monastic state, just as the Teutonic Knights had done in Prussia and the Knights Hospitaller were doing in Rhodes. In 1305, the new Pope Clement V, based in Avignon, France, sent letters to both the Templar Grand Master Jacques de Molay and the Hospitaller Grand Master Fulk de Villaret to discuss the possibility of merging the two orders. Neither was amenable to the idea, but Pope Clement persisted, and in 1306 he invited both Grand Masters to France to discuss the matter. De Molay arrived first in early 1307, but de Villaret was delayed for several months. While waiting, De Molay and Clement discussed criminal charges that had been made two years earlier by an ousted Templar and were being discussed by King Philip IV of France and his ministers. It was generally agreed that the charges were false, but Clement sent the king a written request for assistance in the investigation. According to some historians, King Philip, who was already deeply in debt to the Templars from his war with the English, decided to seize upon the rumours for his own purposes. He began pressuring the church to take action against the order, as a way of freeing himself from his debts. At dawn on Friday, 13 October 1307 (a date sometimes linked with the origin of the Friday the 13th superstition) King Philip IV ordered de Molay and scores of other French Templars to be simultaneously arrested. The arrest warrant started with the phrase: "Dieu n'est pas content, nous avons des ennemis de la foi dans le Royaume" ["God is not pleased. We have enemies of the faith in the kingdom"]. 
Claims were made that during Templar admissions ceremonies, recruits were forced to spit on the Cross, deny Christ, and engage in indecent kissing; brethren were also accused of worshipping idols, and the order was said to have encouraged homosexual practices. The Templars were charged with numerous other offences such as financial corruption, fraud, and secrecy. Many of the accused confessed to these charges under torture, and their confessions, even though obtained under duress, caused a scandal in Paris. The prisoners were coerced to confess that they had spat on the Cross: "Moi, Raymond de La Fère, 21 ans, reconnais que [j'ai] craché trois fois sur la Croix, mais de bouche et pas de cœur" (free translation: "I, Raymond de La Fère, 21 years old, admit that I have spat three times on the Cross, but only from my mouth and not from my heart"). The Templars were accused of idolatry and were suspected of worshipping either a figure known as Baphomet or a mummified severed head they had recovered, amongst other artifacts, at their original headquarters on the Temple Mount, which many scholars theorize might have been that of John the Baptist. Relenting to Philip's demands, Pope Clement then issued the papal bull "Pastoralis Praeeminentiae" on 22 November 1307, which instructed all Christian monarchs in Europe to arrest all Templars and seize their assets. Pope Clement called for papal hearings to determine the Templars' guilt or innocence, and once freed of the Inquisitors' torture, many Templars recanted their confessions. Some had sufficient legal experience to defend themselves in the trials, but in 1310, having appointed the archbishop of Sens, Philippe de Marigny, to lead the investigation, Philip blocked this attempt, using the previously forced confessions to have dozens of Templars burned at the stake in Paris. With Philip threatening military action unless the pope complied with his wishes, Pope Clement finally agreed to disband the order, citing the public scandal that had been generated by the confessions. At the Council of Vienne in 1312, he issued a series of papal bulls, including "Vox in excelso", which officially dissolved the order, and "Ad providam", which turned over most Templar assets to the Hospitallers. As for the leaders of the order, the elderly Grand Master Jacques de Molay, who had confessed under torture, retracted his confession. Geoffroi de Charney, Preceptor of Normandy, also retracted his confession and insisted on his innocence. Both men were declared guilty of being relapsed heretics, and they were sentenced to burn alive at the stake in Paris on 18 March 1314. De Molay reportedly remained defiant to the end, asking to be tied in such a way that he could face the Notre Dame Cathedral and hold his hands together in prayer. According to legend, he called out from the flames that both Pope Clement and King Philip would soon meet him before God. His actual words were recorded on the parchment as follows: "Dieu sait qui a tort et a péché. Il va bientôt arriver malheur à ceux qui nous ont condamnés à mort" (free translation: "God knows who is wrong and has sinned. Soon a calamity will occur to those who have condemned us to death"). Pope Clement died only a month later, and King Philip died in a hunting accident before the end of the year. 
With the last of the order's leaders gone, the remaining Templars around Europe were either arrested and tried under the Papal investigation (with virtually none convicted), absorbed into other military orders such as the Knights Hospitaller, or pensioned off and allowed to live out their days peacefully. By papal decree, the property of the Templars was transferred to the Knights Hospitaller, which also absorbed many of the Templars' members. In effect, the dissolution of the Templars could be seen as the merger of the two rival orders. Some Templar organizations simply changed their name, from Knights Templar to the "Order of Christ" and to a parallel "Supreme Order of Christ of the Holy See"; both are considered successors of the original order. In September 2001, a document known as the "Chinon Parchment" dated 17–20 August 1308 was discovered in the Vatican Secret Archives by Barbara Frale, apparently after having been filed in the wrong place in 1628. It is a record of the trial of the Templars and shows that Clement absolved the Templars of all heresies in 1308 before formally disbanding the order in 1312, as did another Chinon Parchment dated 20 August 1308 addressed to Philip IV of France, also mentioning that all Templars that had confessed to heresy were "restored to the Sacraments and to the unity of the Church". This other Chinon Parchment has been well known to historians, having been published by Étienne Baluze in 1693 and by Pierre Dupuy in 1751. The current position of the Roman Catholic Church is that the medieval persecution of the Knights Templar was unjust, that nothing was inherently wrong with the order or its rule, and that Pope Clement was pressed into his actions by the magnitude of the public scandal and by the dominating influence of King Philip IV, who was Clement's relative. The Templars were organized as a monastic order similar to Bernard's Cistercian Order, which was considered the first effective international organization in Europe. The organizational structure had a strong chain of authority. Each country with a major Templar presence (France, Poitou, Anjou, Jerusalem, England, Aragon, Portugal, Italy, Tripoli, Antioch, Hungary, and Croatia) had a Master of the Order for the Templars in that region. All of them were subject to the Grand Master, appointed for life, who oversaw both the order's military efforts in the East and their financial holdings in the West. The Grand Master exercised his authority via the visitors-general of the order, who were knights specially appointed by the Grand Master and convent of Jerusalem to visit the different provinces, correct malpractices, introduce new regulations, and resolve important disputes. The visitors-general had the power to remove knights from office and to suspend the Master of the province concerned. No precise numbers exist, but it is estimated that at the order's peak there were between 15,000 and 20,000 Templars, of whom about a tenth were actual knights. There was a threefold division of the ranks of the Templars: the noble knights, the non-noble sergeants, and the chaplains. The Templars did not perform knighting ceremonies, so any knight wishing to become a Knight Templar had to be a knight already. The knights were the most visible branch of the order, and wore the famous white mantles to symbolize their purity and chastity. They were equipped as heavy cavalry, with three or four horses and one or two squires. 
Squires were generally not members of the order but were instead outsiders who were hired for a set period of time. Beneath the knights in the order and drawn from non-noble families were the sergeants. They brought vital skills and trades from blacksmiths and builders, including administration of many of the order's European properties. In the Crusader States, they fought alongside the knights as light cavalry with a single horse. Several of the order's most senior positions were reserved for sergeants, including the post of Commander of the Vault of Acre, who was the "de facto" Admiral of the Templar fleet. The sergeants wore black or brown. From 1139, chaplains constituted a third Templar class. They were ordained priests who cared for the Templars' spiritual needs. All three classes of brother wore the order's red cross. Starting with founder Hugues de Payens in 1118–1119, the order's highest office was that of Grand Master, a position which was held for life, though considering the martial nature of the order, this could mean a very short tenure. All but two of the Grand Masters died in office, and several died during military campaigns. For example, during the Siege of Ascalon in 1153, Grand Master Bernard de Tremelay led a group of 40 Templars through a breach in the city walls. When the rest of the Crusader army did not follow, the Templars, including their Grand Master, were surrounded and beheaded. Grand Master Gérard de Ridefort was beheaded by Saladin in 1189 at the Siege of Acre. The Grand Master oversaw all of the operations of the order, including both the military operations in the Holy Land and Eastern Europe and the Templars' financial and business dealings in Western Europe. Some Grand Masters also served as battlefield commanders, though this was not always wise: several blunders in de Ridefort's combat leadership contributed to the devastating defeat at the Battle of Hattin. The last Grand Master was Jacques de Molay, burned at the stake in Paris in 1314 by order of King Philip IV. Bernard de Clairvaux and founder Hugues de Payens devised the specific code of behavior for the Templar Order, known to modern historians as the Latin Rule. Its 72 clauses defined the ideal behavior for the Knights, such as the types of garments they were to wear and how many horses they could have. Knights were to take their meals in silence, eat meat no more than three times per week, and not have physical contact of any kind with women, even members of their own family. A Master of the Order was assigned "4 horses, and one chaplain-brother and one clerk with three horses, and one sergeant brother with two horses, and one gentleman valet to carry his shield and lance, with one horse." As the order grew, more guidelines were added, and the original list of 72 clauses was expanded to several hundred in its final form. The knights wore a white surcoat with a red cross and a white mantle also with a red cross; the sergeants wore a black tunic with a red cross on the front and a black or brown mantle. The white mantle was assigned to the Templars at the Council of Troyes in 1129, and the cross was most probably added to their robes at the launch of the Second Crusade in 1147, when Pope Eugenius III, King Louis VII of France, and many other notables attended a meeting of the French Templars at their headquarters near Paris. According to their Rule, the knights were to wear the white mantle at all times, even being forbidden to eat or drink unless they were wearing it. 
The red cross that the Templars wore on their robes was a symbol of martyrdom, and to die in combat was considered a great honour that assured a place in heaven. There was a cardinal rule that the warriors of the order should never surrender unless the Templar flag had fallen, and even then they were first to try to regroup with another of the Christian orders, such as that of the Hospitallers. Only after all flags had fallen were they allowed to leave the battlefield. Although not prescribed by the Templar Rule, it later became customary for members of the order to wear long and prominent beards. In about 1240, Alberic of Trois-Fontaines described the Templars as an "order of bearded brethren"; while during the interrogations by the papal commissioners in Paris in 1310–1311, out of nearly 230 knights and brothers questioned, 76 are described as wearing a beard, in some cases specified as being "in the style of the Templars", and 133 are said to have shaved off their beards, either in renunciation of the order or because they had hoped to escape detection. Initiation, known as Reception ("receptio") into the order, was a profound commitment and involved a solemn ceremony. Outsiders were discouraged from attending the ceremony, which aroused the suspicions of medieval inquisitors during the later trials. New members had to willingly sign over all of their wealth and goods to the order and take vows of poverty, chastity, piety, and obedience. Most brothers joined for life, although some were allowed to join for a set period. Sometimes a married man was allowed to join if he had his wife's permission, but he was not allowed to wear the white mantle. With their military mission and extensive financial resources, the Knights Templar funded a large number of building projects around Europe and the Holy Land. Many of these structures are still standing. Many sites also maintain the name "Temple" because of centuries-old association with the Templars. For example, some of the Templars' lands in London were later rented to lawyers, which led to the names of the Temple Bar gateway and the Temple Underground station. Two of the four Inns of Court which may call members to act as barristers are the Inner Temple and Middle Temple – the entire area known as Temple, London. Distinctive architectural elements of Templar buildings include the use of the image of "two knights on a single horse", representing the Knights' poverty, and round buildings designed to resemble the Church of the Holy Sepulchre in Jerusalem. The story of the persecution and sudden dissolution of the secretive yet powerful medieval Templars has drawn many other groups to use alleged connections with them as a way of enhancing their own image and mystery. The Knights Templar were dismantled by the Catholic Church in 1312, and their last Grand Master, Jacques de Molay, was executed in 1314; there is no clear historical connection between them and any modern organization, the earliest of which emerged publicly in the 18th century. Many temperance organizations named themselves after the Poor Fellow-Soldiers of Christ and of the Temple of Solomon, citing the belief that the original Knights Templar "drank sour milk, and also because they were fighting 'a great crusade' against 'this terrible vice' of alcohol." The largest of these, the International Order of Good Templars (IOGT), grew throughout the world after being started in the 19th century and continues to advocate for abstinence from alcohol and other drugs. 
Freemasonry has incorporated the symbols and rituals of several medieval military orders in a number of Masonic bodies since the 18th century at least. This can be seen in the "Red Cross of Constantine," inspired by the Military Constantinian Order; the "Order of Malta," inspired by the Knights Hospitaller; and the "Order of the Temple", inspired by the Knights Templar. The Orders of Malta and the Temple feature prominently in the York Rite. One theory on the origin of Freemasonry claims direct descent from the historical Knights Templar through its final fourteenth-century members who allegedly took refuge in Scotland and aided Robert the Bruce in his victory at Bannockburn. This theory is usually rejected by both Masonic authorities and historians due to lack of evidence. The Knights Templar have become associated with legends concerning secrets and mysteries handed down to the select from ancient times. Rumours circulated even during the time of the Templars themselves. Masonic writers added their own speculations in the 18th century, and further fictional embellishments have been added in popular novels such as "Ivanhoe", "Foucault's Pendulum", and "The Da Vinci Code", modern movies such as "National Treasure", "The Last Templar", and "Indiana Jones and the Last Crusade", as well as video games such as "Broken Sword" and "Assassin's Creed". Beginning in the 1960s, there have been speculative popular publications surrounding the order's early occupation of the Temple Mount in Jerusalem and speculation about what relics the Templars may have found there, such as the Holy Grail or the Ark of the Covenant, or the historical accusation of idol worship (Baphomet) transformed into a context of "witchcraft". The association of the Holy Grail with the Templars has precedents even in 12th century fiction; Wolfram von Eschenbach's "Parzival" calls the knights guarding the Grail Kingdom "templeisen", apparently a conscious fictionalisation of the "templarii". Karnataka Karnataka is a state in the southwestern region of India. It was formed on 1 November 1956, with the passage of the States Reorganisation Act. Originally known as the State of Mysore, it was renamed "Karnataka" in 1973. The state corresponds to the Carnatic region. The capital and largest city is Bangalore (Bengaluru). Karnataka is bordered by the Arabian Sea to the west, Goa to the northwest, Maharashtra to the north, Telangana to the northeast, Andhra Pradesh to the east, Tamil Nadu to the southeast, and Kerala to the south. The state covers an area of , or 5.83 percent of the total geographical area of India. It is the seventh largest Indian state by area. With 61,130,704 inhabitants at the 2011 census, Karnataka is the eighth largest state by population, comprising 30 districts. Kannada, one of the classical languages of India, is the most widely spoken and the official language of the state; Konkani, Tulu, Tamil, Telugu, Kodava and Beary are also spoken. Karnataka also has the only 3 naturally Sanskrit-speaking districts in India. The two main river systems of the state are the Krishna and its tributaries, the Bhima, Ghataprabha, Vedavathi, Malaprabha, and Tungabhadra, in the north, and the Kaveri and its tributaries, the Hemavati, Shimsha, Arkavati, Lakshmana Thirtha and Kabini, in the south. Most of these rivers flow out of Karnataka eastward, reaching the sea at the Bay of Bengal. 
Though several etymologies have been suggested for the name Karnataka, the generally accepted one is that "Karnataka" is derived from the Kannada words "karu" and "nādu", meaning "elevated land". "Karu nadu" may also be read as "karu", meaning "black", and "nadu", meaning "region", as a reference to the black cotton soil found in the Bayalu Seeme region of the state. The British used the word Carnatic, sometimes "Karnatak", to describe both sides of peninsular India, south of the Krishna. With an antiquity that dates to the paleolithic, Karnataka has been home to some of the most powerful empires of ancient and medieval India. The philosophers and musical bards patronised by these empires launched socio-religious and literary movements which have endured to the present day. Karnataka has contributed significantly to both forms of Indian classical music, the Carnatic and Hindustani traditions. Karnataka's pre-history goes back to a paleolithic hand-axe culture evidenced by discoveries of, among other things, hand axes and cleavers in the region. Evidence of neolithic and megalithic cultures have also been found in the state. Gold discovered in Harappa was found to be imported from mines in Karnataka, prompting scholars to hypothesise about contacts between ancient Karnataka and the Indus Valley Civilisation ca. 3300 BCE. Prior to the third century BCE, most of Karnataka formed part of the Nanda Empire before coming under the Mauryan empire of Emperor Ashoka. Four centuries of Satavahana rule followed, allowing them to control large areas of Karnataka. The decline of Satavahana power led to the rise of the earliest native kingdoms, the Kadambas and the Western Gangas, marking the region's emergence as an independent political entity. The Kadamba Dynasty, founded by Mayurasharma, had its capital at Banavasi; the Western Ganga Dynasty was formed with Talakad as its capital. These were also the first kingdoms to use Kannada in administration, as evidenced by the Halmidi inscription and a fifth-century copper coin discovered at Banavasi. These dynasties were followed by imperial Kannada empires such as the Badami Chalukyas, the Rashtrakuta Empire of Manyakheta and the Western Chalukya Empire, which ruled over large parts of the Deccan and had their capitals in what is now Karnataka. The Western Chalukyas patronised a unique style of architecture and Kannada literature which became a precursor to the Hoysala art of the 12th century. Parts of modern-day Southern Karnataka (Gangavadi) were occupied by the Chola Empire at the turn of the 11th century. The Cholas and the Hoysalas fought over the region in the early 12th century before it eventually came under Hoysala rule. At the turn of the first millennium, the Hoysalas gained power in the region. Literature flourished during this time, which led to the emergence of distinctive Kannada literary metres, and the construction of temples and sculptures adhering to the Vesara style of architecture. The expansion of the Hoysala Empire brought minor parts of modern Andhra Pradesh and Tamil Nadu under its rule. In the early 14th century, Harihara and Bukka Raya established the Vijayanagara empire with its capital, "Hosapattana" (later named Vijayanagara), on the banks of the Tungabhadra River in the modern Bellary district. The empire rose as a bulwark against Muslim advances into South India, which it completely controlled for over two centuries. 
In 1565, Karnataka and the rest of South India experienced a major geopolitical shift when the Vijayanagara empire fell to a confederation of Islamic sultanates in the Battle of Talikota. The Bijapur Sultanate, which had risen after the demise of the Bahmani Sultanate of Bidar, soon took control of the Deccan; it was defeated by the Mughals in the late 17th century. The Bahmani and Bijapur rulers encouraged Urdu and Persian literature and Indo-Saracenic architecture, the Gol Gumbaz being one of the high points of this style. During the sixteenth century, Konkani Hindus migrated to Karnataka, mostly from Salcette, Goa, while during the seventeenth and eighteenth centuries, Goan Catholics migrated to North Canara and South Canara, especially from Bardes, Goa, as a result of food shortages, epidemics and heavy taxation imposed by the Portuguese. In the period that followed, parts of northern Karnataka were ruled by the Nizam of Hyderabad, the Maratha Empire, the British, and other powers. In the south, the Mysore Kingdom, a former vassal of the Vijayanagara Empire, was briefly independent. With the death of Krishnaraja Wodeyar II, Haidar Ali, the commander-in-chief of the Mysore army, gained control of the region. After his death, the kingdom was inherited by his son Tipu Sultan. To contain European expansion in South India, Haidar Ali and later Tipu Sultan fought four significant Anglo-Mysore Wars, the last of which resulted in Tipu Sultan's death and the incorporation of Mysore into the British Raj in 1799. The Kingdom of Mysore was restored to the Wodeyars and Mysore remained a princely state under the British Raj. As the "doctrine of lapse" gave way to dissent and resistance from princely states across the country, Kittur Chennamma, Sangolli Rayanna and others spearheaded rebellions in Karnataka in 1830, nearly three decades before the Indian Rebellion of 1857. However, Kitturu was taken over by the British East India Company even before the doctrine was officially articulated by Lord Dalhousie in 1848. Other uprisings followed, such as the ones at Supa, Bagalkot, Shorapur, Nargund and Dandeli. These rebellions, which coincided with the Indian Rebellion of 1857, were led by Mundargi Bhimarao, Bhaskar Rao Bhave, the Halagali Bedas, Raja Venkatappa Nayaka and others. By the late 19th century, the independence movement had gained momentum; Karnad Sadashiva Rao, Aluru Venkata Raya, S. Nijalingappa, Kengal Hanumanthaiah, Nittoor Srinivasa Rau and others carried on the struggle into the early 20th century. After India's independence, the Maharaja, Jayachamarajendra Wodeyar, allowed his kingdom's accession to India. In 1950, Mysore became an Indian state of the same name; the former Maharaja served as its "Rajpramukh" (head of state) until 1975. Following the long-standing demand of the Ekikarana Movement, Kodagu- and Kannada-speaking regions from the adjoining states of Madras, Hyderabad and Bombay were incorporated into the Mysore state, under the States Reorganisation Act of 1956. The expanded state was renamed Karnataka seventeen years later, in 1973. From the early 1900s through the post-independence era, industrial visionaries such as Sir Mokshagundam Visvesvarayya, born in Muddenahalli, Chikballapur district, played an important role in the development of Karnataka's strong manufacturing and industrial base. The state has three principal geographical zones: the coastal Karavali strip, the hilly Malenadu region of the Western Ghats, and the Bayaluseeme plains of the interior. The bulk of the state is in the Bayaluseeme region, the northern part of which is the second-largest arid region in India. 
The highest point in Karnataka is Mullayanagiri in Chikmagalur district, which has an altitude of . Some of the important rivers in Karnataka are the Kaveri, Tungabhadra, Krishna, Malaprabha and Sharavathi. A large number of dams and reservoirs have been constructed across these rivers, which add richly to the irrigation and hydel power generation capacities of the state. Karnataka consists of four main types of geological formations — the "Archean complex" made up of Dharwad schists and granitic gneisses, the "Proterozoic" non-fossiliferous sedimentary formations of the Kaladgi and Bhima series, the "Deccan trappean and intertrappean deposits" and the tertiary and recent laterites and alluvial deposits. Significantly, about 60% of the state is composed of the "Archean complex", which consists of gneisses, granites and charnockite rocks. Laterite cappings that are found in many districts over the Deccan Traps were formed after the cessation of volcanic activity in the early tertiary period. Eleven groups of soil orders are found in Karnataka, viz. Entisols, Inceptisols, Mollisols, Spodosols, Alfisols, Ultisols, Oxisols, Aridisols, Vertisols, Andisols and Histosols. Depending on the agricultural capability of the soil, the soil types are divided into six types, "viz." red, lateritic, black, alluvio-colluvial, forest and coastal soils. Karnataka experiences four seasons. The winter in January and February is followed by summer between March and May, the monsoon season between June and September and the post-monsoon season from October till December. Meteorologically, Karnataka is divided into three zones — coastal, north interior and south interior. Of these, the coastal zone receives the heaviest rainfall with an average rainfall of about per annum, far in excess of the state average of . Agumbe in the Shivamogga district receives the second highest annual rainfall in India. The highest recorded temperature was at Raichur and the lowest recorded temperature was at Bidar. About of Karnataka (i.e. 20% of the state's geographic area) is covered by forests. The forests are classified as reserved, protected, unclosed, village and private forests. The percentage of forested area is slightly less than the all-India average of about 23%, and significantly less than the 33% prescribed in the National Forest Policy. There are 30 districts in Karnataka. Each district is governed by a district commissioner or district magistrate. The districts are further divided into sub-divisions, which are governed by sub-divisional magistrates; sub-divisions comprise blocks containing "panchayats" (village councils) and town municipalities. At the 2011 census, Karnataka's ten largest cities, sorted in order of decreasing population, were Bangalore, Hubballi-Dharwad, Mysuru, Mangaluru, Gulbarga, Belagavi, Davangere, Ballary, Vijayapur and Shivamogga. According to the 2011 census of India, the total population of Karnataka was 61,095,297, of which 30,966,657 (50.7%) were male and 30,128,640 (49.3%) were female, a ratio of 973 females for every 1000 males. This represents a 15.60% increase over the population in 2001. The population density was 319 per km² and 38.67% of the people lived in urban areas. The literacy rate was 75.36%, with 82.47% of males and 68.08% of females being literate. 84.00% of the population were Hindu, 12.92% were Muslim, 1.87% were Christian, 0.72% were Jains, 0.16% were Buddhist, 0.05% were Sikh, 0.02% belonged to other religions, and 0.27% of the population did not state their religion. 
Kannada is the official language of Karnataka and spoken as a native language by about 66.26% of the people as of 2001. Other languages spoken in the state included Urdu (10.54%), Telugu (7.03%), Tamil (3.57%), Marathi (3.6%), Tulu (3.0%), Hindi (2.56%), Konkani (1.46%), Malayalam (1.33%) and Kodava Takk (0.3%). In 2007 the state had a birth rate of 2.2%, a death rate of 0.7%, an infant mortality rate of 5.5% and a maternal mortality rate of 0.2%. The total fertility rate was 2.2. In the field of speciality health care, Karnataka's private sector competes with the best in the world. Karnataka has also established a modicum of public health services, with a better record of health care and child care than most other states of India. In spite of these advances, some parts of the state still leave much to be desired when it comes to primary health care. Karnataka has a parliamentary system of government with two democratically elected houses, the Legislative Assembly and the Legislative Council. The Legislative Assembly consists of 224 members who are elected for five-year terms. The Legislative Council is a permanent body of 75 members with one-third (25 members) retiring every two years. The government of Karnataka is headed by the Chief Minister who is chosen by the ruling party members of the Legislative Assembly. The Chief Minister, along with the council of ministers, executes the legislative agenda and exercises most of the executive powers. However, the constitutional and formal head of the state is the Governor who is appointed for a five-year term by the President of India on the advice of the Union government. The people of Karnataka also elect 28 members to the "Lok Sabha", the lower house of the Indian Parliament. The members of the state Legislative Assembly elect 12 members to the "Rajya Sabha", the upper house of the Indian Parliament. For administrative purposes, Karnataka has been divided into four revenue divisions, 49 sub-divisions, 30 districts, 175 "taluks" and 745 "hoblies" / revenue circles. The administration in each district is headed by a Deputy Commissioner who belongs to the Indian Administrative Service and is assisted by a number of officers belonging to Karnataka state services. The Deputy Commissioner of Police, an officer belonging to the Indian Police Service and assisted by the officers of the Karnataka Police Service, is entrusted with the responsibility of maintaining law and order and related issues in each district. The Deputy Conservator of Forests, an officer belonging to the Indian Forest Service, is entrusted with the responsibility of managing the forests, environment and wildlife of the district; he is assisted by officers of the Karnataka Forest Service and the Karnataka Forest Subordinate Service. Sectoral development in the districts is looked after by the district head of each development department, such as the Public Works Department, Health, Education, Agriculture, Animal Husbandry, etc. The judiciary in the state consists of the Karnataka High Court ("Attara Kacheri") in Bangalore, district and session courts in each district and lower courts and judges at the "taluk" level. Politics in Karnataka has been dominated by three political parties, the Indian National Congress, the Janata Dal (Secular) and the Bharatiya Janata Party. Politicians from Karnataka have played prominent roles in the federal government of India, with some of them having held the high positions of Prime Minister and Vice-President. 
Border disputes involving Karnataka's claim on the Kasaragod and Solapur districts and Maharashtra's claim on Belgaum have been ongoing since the states' reorganisation. The official state emblem has a "Ganda Berunda" in the centre. Surmounting this are four lions facing the four directions, taken from the Lion Capital of Ashoka at Sarnath. The emblem also carries two "Sharabhas" with the head of an elephant and the body of a lion. Karnataka had an estimated GSDP (Gross State Domestic Product) of about US$115.86 billion in the 2014–15 fiscal year. The state registered a GSDP growth rate of 7% for the year 2014–2015. Karnataka's contribution to India's GDP in the year 2014–15 was 7.54%. With GDP growth of 17.59% and per capita GDP growth of 16.04%, Karnataka ranks sixth among all states and union territories. In an employment survey conducted for the year 2013–2014, the unemployment rate in Karnataka was 1.8% compared to the national rate of 4.9%. In 2011–2012, Karnataka had an estimated poverty ratio of 20.91% compared to the national ratio of 21.92%. Nearly 56% of the workforce in Karnataka is engaged in agriculture and related activities. A total of 12.31 million hectares of land, or 64.6% of the state's total area, is cultivated. Much of the agricultural output is dependent on the southwest monsoon as only 26.5% of the sown area is irrigated. Karnataka is the manufacturing hub for some of the largest public sector industries in India, including Hindustan Aeronautics Limited, National Aerospace Laboratories, Bharat Heavy Electricals Limited, Bharat Earth Movers Limited and HMT (formerly Hindustan Machine Tools), which are based in Bangalore. Many of India's premier science and technology research centres, such as the Indian Space Research Organisation, the Central Power Research Institute, Bharat Electronics Limited and the Central Food Technological Research Institute, are also headquartered in Karnataka. Mangalore Refinery and Petrochemicals Limited is an oil refinery located in Mangalore. The state has also begun to invest heavily in solar power, centred on the Pavagada Solar Park. As of December 2017, the state had installed an estimated 2.2 gigawatts of block solar panelling, and in January 2018 announced a tender to generate a further 1.2 gigawatts in the coming years: Karnataka Renewable Energy Development suggests that this will be based on 24 separate systems (or 'blocks') generating 50 megawatts each. Since the 1980s, Karnataka has emerged as the pan-Indian leader in the field of IT (information technology). In 2007, there were nearly 2,000 IT firms operating in Karnataka. Many of them, including two of India's biggest software firms, Infosys and Wipro, are also headquartered in the state. Exports from these firms exceeded 50,000 crores ($12.5 billion) in 2006–07, accounting for nearly 38% of all IT exports from India. The Nandi Hills area in the outskirts of Devanahalli is the site of the upcoming $22 billion, 50 square kilometre BIAL IT Investment Region, one of the largest infrastructure projects in the history of Karnataka. All this has earned the state capital, Bangalore, the sobriquet "Silicon Valley of India". Karnataka also leads the nation in biotechnology. It is home to India's largest biocluster, with 158 of the country's 320 biotechnology firms being based here. The state accounts for 75% of India's floriculture, an upcoming industry which supplies flowers and ornamental plants worldwide. 
Seven of India's banks, Canara Bank, Syndicate Bank, Corporation Bank, Vijaya Bank, Karnataka Bank, ING Vysya Bank and the State Bank of Mysore originated in this state. The coastal districts of Udupi and Dakshina Kannada have a branch for every 500 persons—the best distribution of banks in India. In March 2002, Karnataka had 4767 branches of different banks with each branch serving 11,000 persons, which is lower than the national average of 16,000. A majority of the silk industry in India is headquartered in Karnataka, much of it in Doddaballapura, and the state government intends to invest 70 crore in a "Silk City" at Muddenahalli, near Bangalore International Airport. Air transport in Karnataka, as in the rest of the country, is still a fledgling but fast expanding sector. Karnataka has airports at Bangalore, Mangalore, Belgaum, Hubli, Hampi, Bellary and Mysore with international operations from Bangalore and Mangalore airports. Karnataka has a railway network with a total length of approximately . Until the creation of the South Western Zone headquartered at Hubli in 2003, the railway network in the state was in the Southern and Western railway zones. Several parts of the state now come under the South Western Zone, with the remainder under the Southern Railways. Coastal Karnataka is covered under the Konkan railway network which was considered India's biggest railway project of the century. Bangalore is well-connected with inter-state destinations, while other towns in the state are not. Karnataka has 11 ports, including the New Mangalore Port, a major port and ten minor ports, of which three were operational in 2012. The New Mangalore port was incorporated as the ninth major port in India on 4 May 1974. This port handled 32.04 million tonnes of traffic in the fiscal year 2006–07 with 17.92 million tonnes of imports and 14.12 million tonnes of exports. The port also handled 1015 vessels including 18 cruise vessels during the year 2006–07. Foreigners can enter Mangalore through the New Mangalore Port with the help of Electronic visa (e-visa). Cruise ships from Europe, North America and UAE arrive at New Mangalore Port to visit the tourist places across Coastal Karnataka. The total lengths of National Highways and state highways in Karnataka are , respectively. The KSRTC, the state public transport corporation, transports an average of 2.2 million passengers daily and employs about 25,000 people. In the late nineties, KSRTC was split into four corporations, viz., The Bangalore Metropolitan Transport Corporation, The North-East Karnataka Road Transport Corporation and The North-West Karnataka Road Transport Corporation with their headquarters in Bangalore, Gulbarga and Hubli respectively, and with the remnant of the KSRTC maintaining operations in the rest of the state from its headquarters in Bangalore. The diverse linguistic and religious ethnicities that are native to Karnataka, combined with their long histories, have contributed immensely to the varied cultural heritage of the state. Apart from Kannadigas, Karnataka is home to Tuluvas, Kodavas and Konkanis. Minor populations of Tibetan Buddhists and tribes like the Soligas, Yeravas, Todas and Siddhis also live in Karnataka. The traditional folk arts cover the entire gamut of music, dance, drama, storytelling by itinerant troupes, etc. "Yakshagana" of Malnad and coastal Karnataka, a classical dance drama, is one of the major theatrical forms of Karnataka. 
Contemporary theatre culture in Karnataka remains vibrant with organisations like "Ninasam", "Ranga Shankara", "Rangayana" and "Prabhat Kalavidaru" continuing to build on the foundations laid by Gubbi Veeranna, T. P. Kailasam, B. V. Karanth, K V Subbanna, Prasanna and others. "Veeragase", "Kamsale", "Kolata" and "Dollu Kunitha" are popular dance forms. The Mysore style of "Bharatanatya", nurtured and popularised by the likes of the legendary Jatti Tayamma, continues to hold sway in Karnataka, and Bangalore also enjoys an eminent place as one of the foremost centres of "Bharatanatya". Karnataka also has a special place in the world of Indian classical music, with both the Karnataka (Carnatic) and Hindustani styles finding a place in the state, which has produced a number of stalwarts in both traditions. The Haridasa movement of the sixteenth century contributed significantly to the development of Karnataka (Carnatic) music as a performing art form. Purandara Dasa, one of the most revered Haridasas, is known as the "Karnataka Sangeeta Pitamaha" ('Father of Karnataka, or Carnatic, music'). Celebrated Hindustani musicians like Gangubai Hangal, Mallikarjun Mansur, Bhimsen Joshi, Basavaraja Rajaguru, Sawai Gandharva and several others hail from Karnataka, and some of them have been recipients of the Kalidas Samman, Padma Bhushan and Padma Vibhushan awards. Noted Carnatic musicians include Violin T. Chowdiah, Veena Sheshanna, Mysore Vasudevachar, Doreswamy Iyengar and Thitte Krishna Iyengar. "Gamaka" is another classical music genre based on Carnatic music that is practised in Karnataka. "Kannada Bhavageete" is a genre of popular music that draws inspiration from the expressionist poetry of modern poets. The Mysore school of painting has produced painters like Sundarayya, Tanjavur Kondayya, B. Venkatappa and Keshavayya. "Chitrakala Parishat" is an organisation in Karnataka dedicated to promoting painting, mainly in the Mysore painting style. The "saree" is the traditional dress of women in Karnataka. Women in Kodagu have a distinct style of wearing the "saree", different from the rest of Karnataka. The "dhoti", known as "panche" in Karnataka, is the traditional attire of men. Shirts, trousers and the "salwar kameez" are widely worn in urban areas. The "Mysore peta" is the traditional headgear of southern Karnataka, while the "pagadi" or "pataga" (similar to the Rajasthani turban) is preferred in the northern areas of the state. Rice and "ragi" form the staple food in South Karnataka, whereas "jolada rotti", made from sorghum, is the staple of North Karnataka. "Bisi bele bath", "Jolada rotti", "Ragi mudde", "Uppittu", "Benne Dose", "Masala Dose" and "Maddur Vade" are some of the popular food items in Karnataka. Among sweets, "Mysore Pak", "Karadantu" of Gokak and "Amingad", "Belgaavi Kunda" and "Dharwad pedha" are popular. Apart from this, coastal Karnataka and Kodagu have distinctive cuisines of their own. Udupi cuisine of coastal Karnataka is popular all over India. Adi Shankaracharya (788–820) chose Sringeri in Karnataka to establish the first of his four "mathas" (monasteries). Madhvacharya (1238–1317) was the chief proponent of Tattvavada (Philosophy of Reality), popularly known as the Dvaita or Dualistic school of Hindu philosophy — one of the three most influential Vedanta philosophies. Madhvacharya was one of the important philosophers during the Bhakti movement. He was a pioneer in many ways, going against standard conventions and norms. 
According to tradition, Madhvacharya is believed to be the third incarnation of Vayu (Mukhyaprana), after Hanuman and Bhima. The Haridasa devotional movement is considered one of the turning points in the cultural history of India. Over a span of nearly six centuries, several saints and mystics helped shape the culture, philosophy and art of South India, and Karnataka in particular, by exerting considerable spiritual influence over the masses and the kingdoms that ruled South India. This movement was ushered in by the Haridasas (literally "servants of Lord Hari") and took shape in the 13th and 14th centuries CE, prior to and during the early rule of the Vijayanagara empire. The main objective of this movement was to propagate the Dvaita philosophy of Madhvacharya (Madhva Siddhanta) to the masses through a literary medium known as Dasa Sahitya, the literature of the servants of the Lord. Purandaradasa is widely recognised as the "Pithamaha" of Carnatic music for his immense contribution. Ramanujacharya, the leading expounder of "Vishishtadvaita", spent many years in Melkote. He came to Karnataka in 1098 AD and lived here until 1122 AD. He first lived in Tondanur and then moved to Melkote, where the Cheluvanarayana Swamy Temple and a well-organised "matha" were built. He was patronised by the Hoysala king, Vishnuvardhana. In the twelfth century, Lingayatism emerged in northern Karnataka as a protest against the rigidity of the prevailing social and caste system. Leading figures of this movement were Basava, Akka Mahadevi and Allama Prabhu, who established the Anubhava Mantapa, which was the centre of all religious and philosophical thought and discussion pertaining to Lingayats. These three social reformers spread their message through the literary medium of "Vachana Sahitya", famous for its simple, straightforward and easily understandable Kannada. Lingayatism preached the equality of women by letting them wear the "Ishtalinga" (a symbol of god) around their neck. Basava shunned the sharp hierarchical divisions that existed and sought to remove all distinctions between the hierarchically superior master class and the subordinate, servile class. He also supported inter-caste marriages and the principle of "Kaayaka Tatva". This was the basis of the Lingayat faith, which today counts millions among its followers. The Jain philosophy and literature have contributed immensely to the religious and cultural landscape of Karnataka. Islam, which had an early presence on the west coast of India as early as the tenth century, gained a foothold in Karnataka with the rise of the Bahmani and Bijapur sultanates that ruled parts of Karnataka. Christianity reached Karnataka in the sixteenth century with the arrival of the Portuguese and St. Francis Xavier in 1545. Buddhism was popular in Karnataka during the first millennium in places such as Gulbarga and Banavasi. A chance discovery of edicts and several Mauryan relics at Sannati in Gulbarga district in 1986 has proven that the Krishna River basin was once home to both Mahayana and Hinayana Buddhism. There are Tibetan refugee camps in Karnataka. "Mysore Dasara" is celebrated as the "Nada habba" (state festival) and is marked by major festivities at Mysore. "Ugadi" (Kannada New Year), "Makara Sankranti" (the harvest festival), "Ganesh Chaturthi", "Gowri Habba", "Ram Navami", "Nagapanchami", "Basava Jayanthi", "Deepavali", and "Ramzan" are the other major festivals of Karnataka. 
The Kannada language serves as the official language of the state of Karnataka, as the native language of approximately 65% of its population and as one of the classical languages of India. Kannada played a crucial role in the creation of Karnataka: linguistic demographics largely defined the new state in 1956. Tulu, Konkani and Kodava are other minor native languages that share a long history in the state. Urdu is spoken widely by the Muslim population. Less widely spoken languages include Beary bashe and Sankethi. Some of the regional languages in Karnataka are Tulu, Kodava, Konkani and Beary. Kannada features a rich and ancient body of literature, including religious and secular genres, covering topics as diverse as Jainism (such as "Puranas"), Veerashaivism (such as Vachanas), Vaishnavism (such as "Haridasa Sahitya") and modern literature. Evidence from edicts during the time of Ashoka (reigned 274–232 BCE) suggests that Buddhist literature influenced the Kannada script and its literature. The Halmidi inscription, the earliest attested full-length inscription in the Kannada language and script, dates from 450 CE, while the earliest available literary work, the "Kavirajamarga", has been dated to 850 CE. References made in the "Kavirajamarga", however, prove that Kannada literature flourished in native composition metres such as "Chattana", "Beddande" and "Melvadu" during earlier centuries. The classic refers to several earlier greats ("purvacharyar") of Kannada poetry and prose. Kuvempu, the renowned Kannada poet and writer who wrote "Jaya Bharata Jananiya Tanujate", the state anthem of Karnataka, was the first recipient of the "Karnataka Ratna" award, the highest civilian award bestowed by the Government of Karnataka. Contemporary Kannada literature has received considerable acknowledgement in the arena of Indian literature, with eight Kannada writers winning India's highest literary honour, the Jnanpith award. Tulu is spoken mainly in the coastal districts of Udupi and Dakshina Kannada. "Tulu Mahabharato", written by Arunabja in the Tigalari script, is the oldest surviving Tulu text. The Tigalari script was used by Brahmins to write Sanskrit. The use of the Kannada script for writing Tulu and the non-availability of printing in Tigalari contributed to the marginalisation of the Tigalari script. Konkani is mostly spoken in the Uttara Kannada and Dakshina Kannada districts and in parts of Udupi; it is usually written in the Kannada script. The Kodavas, who mainly reside in the Kodagu district, speak Kodava Takk. Two regional variations of the language exist, the northern "Mendale Takka" and the southern "Kiggaati Takka". Kodava Takk is also written in the Kannada script. English is the medium of education in many schools and is widely used for business communication in most private companies. All of the state's languages are patronised and promoted by governmental and quasi-governmental bodies. The "Kannada Sahitya Parishat" and the "Kannada Sahitya Akademi" are responsible for the promotion of Kannada, while the "Karnataka Konkani Sahitya Akademi", the "Tulu Sahitya Akademi" and the "Kodava Sahitya Akademi" promote their respective languages. As per the 2011 census, Karnataka had a literacy rate of 75.60%, with 82.85% of males and 68.13% of females in the state being literate. In 2001, the literacy rate of the state was 67.04%, with 76.29% of males and 57.45% of females being literate. 
The state is home to some of the premier educational and research institutions of India such as the Indian Institute of Science, the Indian Institute of Management, the Indian Institute of Technology Dharwad, the National Institute of Mental Health and Neurosciences, the National Institute of Technology Karnataka and the National Law School of India University. In March 2006, Karnataka had 54,529 primary schools with 252,875 teachers and 8.495 million students, and 9,498 secondary schools with 92,287 teachers and 1.384 million students. There are three kinds of schools in the state, viz., government-run, private aided (financial aid is provided by the government) and private unaided (no financial aid is provided). The primary languages of instruction in most schools are Kannada and English. The syllabus taught in the schools is either of the CBSE, the ICSE or the state syllabus (SSLC) defined by the Department of Public Instruction of the Government of Karnataka. However, some schools follow the NIOS syllabus. The state has two sainik schools, one in Kodagu and one in Bijapur. To maximise attendance in schools, the Karnataka Government has launched a midday meal scheme in government and aided schools in which free lunch is provided to the students. Statewide board examinations are conducted at the end of secondary education. Students who qualify are allowed to pursue a two-year pre-university course, after which they become eligible to pursue under-graduate degrees. There are 481 degree colleges affiliated with one of the universities in the state, viz. Bangalore University, Gulbarga University, Karnatak University, Kuvempu University, Mangalore University and Mysore University. In 1998, the engineering colleges in the state were brought under the newly formed Visvesvaraya Technological University, headquartered at Belgaum, whereas the medical colleges are run under the jurisdiction of the Rajiv Gandhi University of Health Sciences. Some of these baccalaureate colleges are accredited with the status of a deemed university. There are 186 engineering, 39 medical and 41 dental colleges in the state. Udupi, Sringeri, Gokarna and Melkote are well-known places of Sanskrit and Vedic learning. In 2015, the Central Government decided to establish the first Indian Institute of Technology in Karnataka at Dharwad. Tulu and Konkani are taught as optional subjects in the twin districts of South Canara and Udupi. The era of Kannada newspapers started in 1843, when Hermann Mögling, a missionary from the Basel Mission, published the first Kannada newspaper, "Mangalooru Samachara", in Mangalore. The first Kannada periodical, "Mysuru Vrittanta Bodhini", was started by Bhashyam Bhashyacharya in Mysore. Shortly after Indian independence, in 1948, K. N. Guruswamy founded "The Printers (Mysore) Private Limited" and began publishing two newspapers, "Deccan Herald" and "Prajavani". Presently, the "Times of India" and "Vijaya Karnataka" are the largest-selling English and Kannada newspapers respectively. A vast number of weekly, biweekly and monthly magazines are under publication in both Kannada and English. "Udayavani", "Kannadaprabha", "Samyukta Karnataka", "VarthaBharathi", "Sanjevani", "Eesanje", "Hosa digantha" and "Karavali Ale" are some other popular dailies published from Karnataka. Doordarshan is the broadcaster of the Government of India, and its channel DD Chandana is dedicated to Kannada. 
Prominent Kannada channels include Janasri News, Colors Kannada, Zee Kannada, Udaya TV, TV 9, Asianet Suvarna and Kasturi TV. Karnataka occupies a special place in the history of Indian radio. In 1935, "Aakashvani", the first private radio station in India, was started by Prof. M.V. Gopalaswamy in Mysore. The popular radio station was taken over by the local municipality and later by All India Radio (AIR) and moved to Bangalore in 1955. Later, in 1957, AIR adopted the radio station's original name, "Aakashvani", as its own. Some of the popular programs aired by AIR Bangalore included "Nisarga Sampada" and "Sasya Sanjeevini", which taught science through songs, plays and stories. These two programs became so popular that they were translated and broadcast in 18 different languages, and the entire series was recorded on cassettes by the Government of Karnataka and distributed to thousands of schools across the state. Karnataka has witnessed a growth in FM radio channels, mainly in the cities of Bangalore, Mangalore and Mysore, which have become hugely popular. Karnataka's smallest district, Kodagu, is a major contributor to Indian field hockey, producing numerous players who have represented India at the international level. The annual Kodava Hockey Festival is the largest hockey tournament in the world. Bangalore has hosted a WTA tennis event and, in 1997, it hosted the fourth National Games of India. The Sports Authority of India, the premier sports institute in the country, and the Nike Tennis Academy are also situated in Bangalore. Karnataka has been referred to as the cradle of Indian swimming because of its high standards in comparison to other states. One of the most popular sports in Karnataka is cricket. The state cricket team has won the Ranji Trophy seven times, second only to Mumbai in terms of success. Chinnaswamy Stadium in Bangalore regularly hosts international matches and is also the home of the National Cricket Academy, which was opened in 2000 to nurture potential international players. Many cricketers from the state have represented India; in one international match held in the 1990s, players from Karnataka made up the majority of the national team. The Royal Challengers Bangalore, an Indian Premier League franchise, the Bengaluru Football Club, an Indian Super League franchise, the Bengaluru Yodhas, a Pro Wrestling League franchise, the Bengaluru Blasters, a Premier Badminton League franchise, and the Bengaluru Bulls, a Pro Kabaddi League franchise, are based in Bangalore. The Karnataka Premier League is an inter-regional Twenty20 cricket tournament played in the state. Notable sportsmen from Karnataka include B.S. Chandrasekhar, Anil Kumble, Javagal Srinath, Rahul Dravid, Venkatesh Prasad, Robin Uthappa, Vinay Kumar, Gundappa Vishwanath, Syed Kirmani, Stuart Binny, Ashwini Ponnappa, Mahesh Bhupathi, Rohan Bopanna, Prakash Padukone, who won the All England Badminton Championships in 1980, and Pankaj Advani, who won three world titles in cue sports by the age of 20, including the amateur World Snooker Championship in 2003 and the World Billiards Championship in 2005. Bijapur district has produced some of the best known road cyclists in the national circuit. Premalata Sureban was part of the Indian contingent at the Perlis Open '99 in Malaysia. In recognition of the talent of cyclists in the district, the state government laid down a cycling track at the B.R. Ambedkar Stadium at a cost of ₹40 lakh. 
Sports like "kho kho", "kabaddi", "chinni daandu" and "goli" (marbles) are played mostly in Karnataka's rural areas. Karnataka has a rich diversity of flora and fauna. It has a recorded forest area of which constitutes 20.19% of the total geographical area of the state. These forests support 25% of the elephant and 10% of the tiger population of India. Many regions of Karnataka are as yet unexplored, so new species of flora and fauna are found periodically. The Western Ghats, a biodiversity hotspot, includes the western region of Karnataka. Two sub-clusters in the Western Ghats, viz. Talacauvery and Kudremukh, both in Karnataka, are on the tentative list of World Heritage Sites of UNESCO. The Bandipur and Nagarahole National Parks, which fall outside these subclusters, were included in the Nilgiri Biosphere Reserve in 1986, a UNESCO designation. The Indian roller and the Indian elephant are recognised as the state bird and animal while sandalwood and the lotus are recognised as the state tree and flower respectively. Karnataka has five national parks: Anshi, Bandipur, Bannerghatta, Kudremukh and Nagarhole. It also has 27 wildlife sanctuaries of which seven are bird sanctuaries. Wild animals that are found in Karnataka include the elephant, the tiger, the leopard, the gaur, the sambar deer, the chital or spotted deer, the muntjac, the bonnet macaque, the slender loris, the common palm civet, the small Indian civet, the sloth bear, the dhole, the striped hyena and the golden jackal. Some of the birds found here are the great hornbill, the Malabar pied hornbill, the Ceylon frogmouth, herons, ducks, kites, eagles, falcons, quails, partridges, lapwings, sandpipers, pigeons, doves, parakeets, cuckoos, owls, nightjars, swifts, kingfishers, bee-eaters and munias. Some species of trees found in Karnataka are "Callophyllum tomentosa", "Callophyllum wightianum", "Garcina cambogia", "Garcina morealla", "Alstonia scholaris", "", "Artocarpus hirsutus", "Artocarpus lacoocha", "Cinnamomum zeylanicum", "Grewia tilaefolia", "Santalum album", "Shorea talura", "Emblica officinalis", "Vitex altissima" and "Wrightia tinctoria". Wildlife in Karnataka is threatened by poaching, habitat destruction, human-wildlife conflict and pollution. By virtue of its varied geography and long history, Karnataka hosts numerous spots of interest for tourists. There is an array of ancient sculptured temples, modern cities, scenic hill ranges, forests and beaches. Karnataka has been ranked as the fourth most popular destination for tourism among the states of India. Karnataka has the second highest number of nationally protected monuments in India, second only to Uttar Pradesh, in addition to 752 monuments protected by the State Directorate of Archaeology and Museums. Another 25,000 monuments are yet to receive protection. The districts of the Western Ghats and the southern districts of the state have popular eco-tourism locations including Kudremukh, Madikeri and Agumbe. Karnataka has 25 wildlife sanctuaries and five national parks. Popular among them are Bandipur National Park, Bannerghatta National Park and Nagarhole National Park. The ruins of the Vijayanagara Empire at Hampi and the monuments of Pattadakal are on the list of UNESCO's World Heritage Sites. The cave temples at Badami and the rock-cut temples at Aihole representing the Badami Chalukyan style of architecture are also popular tourist destinations. 
The Hoysala temples at Belur and Halebidu, which were built with chloritic schist (soapstone), are proposed UNESCO World Heritage Sites. The Gol Gumbaz and Ibrahim Rauza are famous examples of the Deccan Sultanate style of architecture. The monolith of Gomateshwara Bahubali at Shravanabelagola is the tallest sculpted monolith in the world, attracting tens of thousands of pilgrims during the Mahamastakabhisheka festival. The waterfalls of Karnataka and Kudremukh are considered by some to be among the "1001 Natural Wonders of the World". Jog Falls is India's tallest single-tiered waterfall, with Gokak Falls, Unchalli Falls, Magod Falls, Abbey Falls and Shivanasamudra Falls among the other popular waterfalls. Several popular beaches dot the coastline, including Murudeshwara, Gokarna, Malpe and Karwar. In addition, Karnataka is home to several places of religious importance. Several Hindu temples, including the famous Udupi Sri Krishna Matha, the Marikamba Temple at Sirsi, the Kollur Mookambika Temple, the Sri Manjunatha Temple at Dharmasthala, Kukke Subramanya Temple and Sharadamba Temple at Shringeri, attract pilgrims from all over India. Most of the holy sites of Lingayatism, like Kudalasangama and Basavana Bagewadi, are found in northern parts of the state. Shravanabelagola, Mudabidri and Karkala are famous for Jain history and monuments. Jainism had a stronghold in Karnataka in the early medieval period, with Shravanabelagola as its most important centre. The Shettihalli Rosary Church near Shettihalli, an example of French colonial Gothic architecture and a rare example of a Christian ruin, is a popular tourist site. Recently, Karnataka has emerged as a centre of health care tourism. Karnataka has the highest number of approved health systems and alternative therapies in India. Along with some ISO certified government-owned hospitals, private institutions which provide international-quality services have caused the health care industry to grow by 30% during 2004–05. Hospitals in Karnataka treat around 8,000 health tourists every year. 
Kylie Minogue Kylie Ann Minogue (born 28 May 1968) is an Australian-British singer and actress. She achieved recognition starring in the Australian soap opera "Neighbours", where she played tomboy mechanic Charlene Robinson. During her two years on the series, Minogue's character married Scott Robinson (Jason Donovan) in an episode viewed by nearly 20 million people in the United Kingdom, making it one of the most watched Australian TV episodes ever. Since then, Minogue has been a recording artist and has achieved commercial success and critical acclaim in the entertainment industry. Minogue has been recognised with several honorific nicknames including "Princess of Pop" and "Goddess of Pop". She is recognised as the highest-selling Australian artist of all time by the Australian Recording Industry Association (ARIA). Born and raised in Melbourne, Australia, Minogue has worked and lived in the United Kingdom since the 1990s. She signed to record label PWL in 1987 and released her first studio album "Kylie" the next year. In 1992, she left PWL and signed with Deconstruction Records, where she released her self-titled studio album and "Impossible Princess", both of which received positive reviews from critics. Returning to more mainstream dance-oriented music, Minogue signed to Parlophone and released "Light Years". The follow-up, "Fever", was a hit in many countries, including the United States. 
The lead single "Can't Get You Out of My Head" became one of the most successful singles of the 2000s, selling over ten million units. It is recognised as her "signature song" and was named "the catchiest song ever" by Yahoo! Music. Other successful singles by Minogue include "I Should Be So Lucky", "The Loco-Motion", "Especially for You", "Hand on Your Heart", "Better the Devil You Know", "Confide in Me", "Spinning Around", "Love at First Sight", "Slow", "2 Hearts" and "All the Lovers". In 2005, while Minogue was on her , she was diagnosed with breast cancer. After treatment, she resumed the tour under the title , which critics viewed as a "triumph". Minogue made her film debut in "The Delinquents" (1989) and portrayed Cammy in "Street Fighter" (1994). Minogue has also appeared in the films "Moulin Rouge!" (2001), "Jack & Diane", and "Holy Motors" (2012). In 2014, she appeared as a judge on the third series of "The Voice UK" and "The Voice Australia". Her other ventures include product endorsements, children books and fashion. As of 2015, Minogue has had worldwide record sales of more than 80 million. She has mounted several successful and critically acclaimed concert world tours and received a Mo Award for "Australian Entertainer of the Year" for her live performances. Minogue was appointed an Officer of the Order of the British Empire in the 2008 New Year Honours for services to Music. She was appointed by the French government as a Chevalier (knight) of the Ordre des Arts et des Lettres for her contribution to the enrichment of French culture. Minogue was awarded an honorary Doctor of Health Science (D.H.Sc.) degree by Anglia Ruskin University for her work in raising awareness for breast cancer. In November 2011, on the 25th anniversary of the ARIA Music Awards, she was inducted by the Australian Recording Industry Association into the ARIA Hall of Fame. In December 2016, "Billboard" ranked her as the 18th most successful dance artist of all-time. Minogue signed a new global recording contract with BMG Rights Management in early 2017. Her latest album "Golden" was released on 6 April 2018, debuting at No. 1 in the UK and Australia. Kylie was born to Ronald Charles Minogue and Carol Ann Jones in Melbourne, Australia, on 28 May 1968. Her father is a fifth generation Australian, and has Irish ancestry, while her mother came from Maesteg, Wales. Jones had lived in Wales until age ten when her mother and father, Millie and Denis Jones, decided to move to Australia for a better life. Just before Kylie's birth, Ron qualified as an accountant and worked through several jobs while Carol worked as a professional dancer. Kylie's younger brother, Brendan, is a news cameraman in Australia, while her younger sister Dannii Minogue is also a singer and television host. The Minogue family frequently moved around various suburbs in Melbourne to sustain their living expenses, which Kylie found unsettling as a child. After the birth of Dannii, the family moved to South Oakleigh. Because money was tight, Ron worked as an accountant at a family-owned car company and Carol worked as a tea lady at a local hospital. After moving to Surrey Hills, Melbourne, Minogue attended Studfield Primary School briefly before attending Camberwell Primary School. She went on to Camberwell High School. During her schooling years, Minogue found it difficult to make friends. She got her HSC (graduated high school) with subjects including Arts and Graphics and English. 
Minogue described herself as being of "average intelligence" and "quite modest" during her high school years. From the age of 11, Kylie appeared in small roles in soap operas including "The Sullivans" and "Skyways". In 1985, she was cast in one of the lead roles in "The Henderson Kids". Minogue took time off school to film "The Henderson Kids", and while Carol was not impressed, Minogue felt that she needed the independence to make it into the entertainment industry. During filming, co-star Nadine Garner labelled Minogue "fragile" after producers yelled at her for forgetting her lines; she would often cry on set. Minogue was dropped from the second season of the show after producer Alan Hardy felt the need for her character to be "written off". In retrospect, Hardy stated that removing her from the show "turned out to be the best thing for her". Interested in pursuing a career in music, Minogue made a demo tape for the producers of the weekly music program "Young Talent Time", which featured Dannii as a regular performer. Kylie gave her first television singing performance on the show in 1985 but was not invited to join the cast. Kylie was cast in the soap opera "Neighbours" in 1986, as Charlene Mitchell, a schoolgirl turned garage mechanic. "Neighbours" achieved popularity in the UK, and a story arc that created a romance between her character and the character played by Jason Donovan culminated in a wedding episode in 1987 that attracted an audience of 20 million British viewers. Minogue became the first person to win four Logie Awards in one year and was the youngest recipient of the "Gold Logie" as the country's "Most Popular Television Performer", with the result determined by public vote. During a Fitzroy Football Club benefit concert with other "Neighbours" cast members, Minogue performed "I Got You Babe" as a duet with actor John Waters, and "The Loco-Motion" as an encore. She was subsequently signed to a recording contract with Mushroom Records in 1987. Her first single, "The Locomotion", spent seven weeks at number one on the Australian singles charts and became the country's highest-selling single of the 1980s. She received the ARIA Award for the year's highest-selling single. Its success resulted in Minogue travelling to England with Mushroom Records executive Gary Ashley to work with producers Stock, Aitken & Waterman. They knew little of Minogue and had forgotten that she was arriving; as a result, they wrote "I Should Be So Lucky" while she waited outside the studio. The song reached number one in the United Kingdom, Australia, Germany, Finland, Switzerland, Israel and Hong Kong. Minogue won her second consecutive ARIA Award for the year's highest-selling single, and received a "Special Achievement Award". Minogue's debut album, "Kylie", was released in July 1988. The album was a collection of dance-oriented pop tunes and spent more than a year on the UK Albums Chart, including several weeks at number one. The album went gold in the United States, and the single "The Locomotion" reached number three on the US "Billboard" Hot 100 chart, and number one on the Canadian Singles Chart. The single "Got to Be Certain" became her third consecutive number one single on the Australian music charts. Later in the year, she left "Neighbours" to focus on her music career. Minogue also collaborated with Jason Donovan for the song "Especially for You", which peaked at number one in the United Kingdom and in December 2014 sold its one millionth copy in the UK. 
Minogue was sometimes referred to as "the Singing Budgie" by her detractors over the coming years. In a review of the album "Kylie" for AllMusic, Chris True described the tunes as "standard, late-80s ... bubblegum", but added, "her cuteness makes these rather vapid tracks bearable". Minogue's second album, "Enjoy Yourself", was released in October 1989. The album was a success in the United Kingdom, Europe, New Zealand, Asia and Australia and spawned the number one singles "Hand on Your Heart" and "Tears on My Pillow". However, it failed to sell well throughout North America and Minogue was dropped by her American record label Geffen Records. She then embarked on her first concert tour, the Enjoy Yourself Tour, in the United Kingdom, Europe, Asia and Australia in February 1990. She was also one of the featured vocalists on the remake of "Do They Know It's Christmas?". Minogue's debut film, "The Delinquents", was released in December 1989. The movie received mixed reviews from critics but proved popular with audiences. In the UK it grossed more than £200,000, and in Australia, it was the fourth-highest grossing local film of 1989 and the highest grossing local film of 1990. Minogue's third album, "Rhythm of Love", was released in November 1990 and was described as "leaps and bounds more mature" than her previous albums. Her relationship with Michael Hutchence was also seen as part of her departure from her earlier persona. Its lead single, "Better the Devil You Know", became one of Minogue's most famous songs and peaked at number two in the UK and four in her native Australia. The album's second and fourth singles, "Step Back in Time" and "Shocked", were both top ten hits in the UK and Australia. She then embarked on the Rhythm of Love Tour in February 1991. Minogue's fourth album, "Let's Get to It", was released in October 1991 and reached number 15 on the UK Albums Chart. It was her first album to fail to reach the top ten. While the first single from the album, "Word Is Out", became her first single to miss the top ten of the UK Singles Chart, subsequent singles "If You Were with Me Now" and "Give Me Just a Little More Time" both reached the top five. In support of the album, she embarked on the Let's Get to It Tour in October. She later expressed her opinion that she was stifled by Stock, Aitken and Waterman, saying, "I was very much a puppet in the beginning. I was blinkered by my record company. I was unable to look left or right." Her first "Greatest Hits" album was released in August 1992. It reached number one in the United Kingdom and number three in Australia. The singles from the album, "What Kind of Fool" and her cover version of Kool & the Gang's "Celebration", both reached the top twenty of the UK Singles Chart. Minogue's signing with Deconstruction Records in 1993 marked a new phase in her career. Her fifth album, "Kylie Minogue", was released in September 1994 and sold well in Europe and Australia. It was produced by dance music producers the Brothers In Rhythm, namely Dave Seaman and Steve Anderson, who had previously produced "Finer Feelings", her last single with PWL. As of 2015, Anderson continued to be Minogue's musical director. The lead single, "Confide in Me", spent four weeks at number one on the Australian singles chart. The next two singles from the album, "Put Yourself in My Place" and "Where Is the Feeling?", reached the top twenty on the UK Singles Chart, while the album peaked at number four on the UK Albums Chart, eventually selling 250,000 copies. 
During this period, Minogue made a guest appearance as herself in an episode of the comedy "The Vicar of Dibley". Director Steven E. de Souza saw Minogue's cover photo in Australia's "Who Magazine" as one of "The 30 Most Beautiful People in the World" and offered her a role opposite Jean-Claude Van Damme in the film "Street Fighter". The film was a moderate success, earning US$70 million in the US, but received poor reviews, with "The Washington Post"'s Richard Harrington calling Minogue "the worst actress in the English-speaking world". She had a minor role in the 1996 film "Bio-Dome" starring Pauly Shore and Stephen Baldwin. She also appeared in the 1995 short film "Hayride to Hell" and in the 1997 film "Diana & Me". In 1995, Minogue collaborated with Australian artist Nick Cave on the song "Where the Wild Roses Grow". Cave had been interested in working with Minogue since hearing "Better the Devil You Know", saying it contained "one of pop music's most violent and distressing lyrics". The music video for their song was inspired by John Everett Millais's painting "Ophelia" (1851–1852), and showed Minogue as the murdered woman, floating in a pond as a serpent swam over her body. The single received widespread attention in Europe, where it reached the top 10 in several countries, and reached number two in Australia. The song won ARIA Awards for "Song of the Year" and "Best Pop Release". Following concert appearances with Cave, Minogue recited the lyrics to "I Should Be So Lucky" as poetry at London's Royal Albert Hall. By 1997, Minogue was in a relationship with French photographer Stéphane Sednaoui, who encouraged her to develop her creativity. Inspired by a mutual appreciation of Japanese culture, they created a visual combination of "geisha and manga superheroine" for the photographs taken for Minogue's sixth album "Impossible Princess" and the video for "German Bold Italic", Minogue's collaboration with Towa Tei. She drew inspiration from the music of artists such as Shirley Manson and Garbage, Björk, Tricky and U2, and Japanese pop musicians such as Pizzicato Five and Towa Tei. The album featured collaborations with musicians including James Dean Bradfield and Sean Moore of the Manic Street Preachers. Although it was mostly a dance album, Minogue countered suggestions that she was trying to become an indie artist. Acknowledging that she had attempted to escape the perceptions of her that had developed during her early career, she commented that she was ready to "forget the painful criticism" and "accept the past, embrace it, use it". The music video for "Did It Again" paid homage to her earlier incarnations. The album, retitled "Kylie Minogue" in the UK following the death of Diana, Princess of Wales, became the lowest-selling album of her career. At the end of the year, a campaign by "Virgin Radio" stated, "We've done something to improve Kylie's records: we've banned them." In Australia, the album was a success and spent 35 weeks on the album chart. Minogue's Intimate and Live tour in 1998 was extended due to demand. She gave several live performances in Australia, including the 1998 Sydney Gay and Lesbian Mardi Gras, and the opening ceremonies of Melbourne's Crown Casino and of Sydney's Fox Studios in 1999 (where she performed Marilyn Monroe's "Diamonds Are a Girl's Best Friend"), as well as a Christmas concert in Dili, East Timor, in association with the United Nations Peace-Keeping Forces. She played a small role in "Cut" (2000), an Australian-made film starring Molly Ringwald. 
After Minogue parted ways with Deconstruction Records, she performed a duet with the Pet Shop Boys on their "Nightlife" album and spent several months in Barbados performing in Shakespeare's "The Tempest". She then appeared in the film "Sample People" and recorded a cover version of Russell Morris's "The Real Thing" for the soundtrack. In April 1999, she signed with Parlophone Records, which wanted to re-establish Minogue as a pop artist. In September 2000, Minogue released her seventh studio album, "Light Years". The album was a collection of dance songs, influenced by disco music. It generated strong reviews and was successful throughout Australia, Asia, Europe and New Zealand. The lead single, "Spinning Around", became her first number one in the United Kingdom in ten years, and its accompanying video featured Minogue in revealing gold hot pants, which came to be regarded as a "trademark". The second single, "On a Night Like This", reached number one in Australia and number two in the United Kingdom. "Kids", a duet with Robbie Williams, also peaked at number two in the United Kingdom. She then embarked on the On a Night Like This Tour, which played to sell-out crowds in Australia and the United Kingdom. The tour incorporated burlesque and theatre, and cited as inspiration the style of Broadway shows such as "42nd Street", films such as "Anchors Aweigh" and "South Pacific", the Fred Astaire and Ginger Rogers musicals of the 1930s and the live performances of Bette Midler. Minogue was praised for her new material and her reinterpretations of some of her greatest successes. She won a "Mo Award" for Australian live entertainment as "Performer of the Year". She also appeared in the 2001 film "Moulin Rouge!" as "The Green Fairy". In October 2001, Minogue released her eighth studio album, "Fever". The album contained disco elements combined with 1980s electropop and synthpop. It reached number one in Australia, the United Kingdom and throughout Europe, eventually achieving worldwide sales in excess of eight million. The album's lead single, "Can't Get You Out of My Head", became the biggest success of her career, reaching number one in more than forty countries and selling over 5 million copies. She won four ARIA Awards including a "Most Outstanding Achievement" award, and two Brit Awards, for "Best international female solo artist" and "Best international album". Following extensive airplay by American radio, Capitol Records released the song and the album "Fever" in the US in 2002. The album debuted on the "Billboard" 200 albums chart at number three, and "Can't Get You out of My Head" reached number seven on the Hot 100. The subsequent singles "In Your Eyes", "Love at First Sight" and "Come into My World" were successful throughout the world, and Minogue established a presence in the mainstream North American market, particularly in the club scene. She followed the success of the album by touring the United States with the Jingle Ball festival. In April 2002, Minogue embarked on the KylieFever2002 tour, which became the biggest production she had put on to date. Later that year, she provided a voice role for the animated film "The Magic Roundabout", which was released in 2005 in Europe. In 2003, she received a Grammy Award nomination for "Best Dance Recording" for "Love at First Sight", and the following year won the same award for "Come into My World". 
In November 2003, Minogue released her ninth studio album, "Body Language", following an invitation-only concert, titled "Money Can't Buy", at the Hammersmith Apollo in London. The album downplayed the disco style and was inspired by 1980s artists such as Scritti Politti, The Human League, Adam and the Ants and Prince, blending their styles with elements of hip hop. The sales of the album were lower than anticipated after the success of "Fever", though the first single, "Slow", was a number-one hit in the United Kingdom and Australia. Two more singles from the album were released: "Red Blooded Woman" and "Chocolate". In the US, "Slow" reached number one on the club chart and received a Grammy Award nomination in the Best Dance Recording category. "Body Language" achieved first-week sales of 43,000 and declined significantly in the second week. In November 2004, Minogue released her second official greatest hits album, entitled "Ultimate Kylie". The album yielded two singles: "I Believe in You" and "Giving You Up". "I Believe in You" was later nominated for a Grammy Award in the category of "Best Dance Recording". In March 2005, Minogue commenced her Showgirl: The Greatest Hits Tour. After performing in Europe, she travelled to Melbourne, where she was diagnosed with breast cancer, forcing her to cancel the tour. She resumed the tour in November 2006, under the title Showgirl: The Homecoming Tour. Her dance routines had been reworked to accommodate her medical condition, with slower costume changes and longer breaks introduced between sections of the show to conserve her strength. The media reported that Minogue performed energetically, with the "Sydney Morning Herald" describing the show as an "extravaganza" and "nothing less than a triumph". In November 2007, Minogue released her tenth and much-discussed "comeback" album, "X". The electro-styled album included contributions from Guy Chambers, Cathy Dennis, Bloodshy & Avant and Calvin Harris. The album received some criticism for the triviality of its subject matter in light of Minogue's experiences with breast cancer. "X" and its lead single, "2 Hearts", entered at number one on the Australian albums and singles charts respectively. In the United Kingdom, "X" initially attracted lukewarm sales, although its commercial performance eventually improved. Follow-up singles from the album, "In My Arms" and "Wow", both peaked inside the top ten of the UK Singles Chart. In the US, the album was nominated at the 2009 Grammy Awards for Best Electronic/Dance Album. As part of the promotion of her album, Minogue was featured in a documentary filmed during 2006 and 2007 as she resumed her Showgirl: The Homecoming Tour. She also appeared in "The Kylie Show", which featured her performances as well as comedy sketches with Mathew Horne, Dannii Minogue, Jason Donovan and Simon Cowell. She co-starred in the 2007 "Doctor Who" Christmas special episode, "Voyage of the Damned", as Astrid Peth. The episode was watched by 13.31 million viewers, which was the show's highest viewing figure since 1979. In May 2008, Minogue embarked on the European leg of the KylieX2008 tour, her most expensive tour to date with production costs of £10 million. The tour was generally acclaimed and sold well. She was then appointed a Chevalier of the French Ordre des Arts et des Lettres, the junior grade of France's highest cultural honour. In July, she was officially invested by The Prince of Wales as an Officer of the Order of the British Empire. She also won the "Best International Female Solo Artist" award at the 2008 BRIT Awards. 
In September, she made her Middle East debut as the headline act at the opening of Atlantis, The Palm, an exclusive hotel resort in Dubai, and from November, she continued her "KylieX2008" tour, taking the show to cities across South America, Asia and Australia. The tour visited 21 countries, and was considered a success, with ticket sales estimated at $70,000,000. In 2009, Minogue hosted the BRIT Awards with James Corden and Mathew Horne. She then embarked on the For You, for Me tour, which was her first North American concert tour. She was also featured in the Hindi movie "Blue", performing an A. R. Rahman song. In July 2010, Minogue released her eleventh studio album, "Aphrodite". The album featured new songwriters and producers including Stuart Price as executive producer. Price also contributed to the songwriting along with Minogue, Calvin Harris, Jake Shears, Nerina Pallot, Pascal Gabriel, Lucas Secon, Keane's Tim Rice-Oxley and Kish Mauve. The album received favourable reviews from most music critics; Rob Sheffield from "Rolling Stone" labelled the album Minogue's "finest work since 1997's underrated "Impossible Princess"" and Tim Sendra from Allmusic commended Minogue's choice of collaborators and producers, commenting that the album is the "work of someone who knows exactly what her skills are and who to hire to help showcase them to perfection". "Aphrodite" debuted at number one in the United Kingdom, exactly twenty-two years after her first UK number one hit. The album's lead single, "All the Lovers", was a success and became her thirty-third top ten single in the United Kingdom, though subsequent singles from the album, "Get Outta My Way", "Better than Today", and "Put Your Hands Up", failed to reach the top ten of the UK Singles Chart. However, all the singles released from the album topped the US "Billboard" Hot Dance Club Songs chart. Minogue recorded a duet with synthpop duo Hurts on their song "Devotion", which was included on the group's album "Happiness". She was then featured on British recording artist Taio Cruz's single "Higher", which peaked inside the top twenty on several charts and reached number one on the US Hot Dance Club Songs chart. At one point, "Better than Today" and "Higher" occupied two of that chart's top three positions simultaneously, making Minogue the first artist in the American dance chart's history to claim two of the top three spots at the same time. To conclude her recordings in 2010, she released the extended play "A Kylie Christmas", which included covers of Christmas songs including "Let It Snow" and "Santa Baby". Minogue embarked on the Aphrodite World Tour in February 2011, travelling to Europe, North America, Asia, Australia and Africa. With a stage set inspired by the birth of the love goddess Aphrodite and by Grecian culture and history, it was greeted with positive reviews from critics, who praised the concept and the stage production. The tour was a commercial success, grossing a total of US$60 million and ranking at number six and twenty-one on the mid-year and annual Pollstar Top Concert Tours of 2011 respectively. In 2012, Minogue began a year-long celebration of her 25 years in the music industry, which was often called "K25". The anniversary started with her embarking on the Anti Tour in England and Australia, which featured b-sides, demos and rarities from her music catalogue. The tour was positively received for its intimate atmosphere and was a commercial success, grossing over two million dollars from four shows. 
She then released the single "Timebomb" in May, the greatest hits compilation album "The Best of Kylie Minogue" in June and the singles box set "K25 Time Capsule" in October. She performed at various events around the world, including Sydney Mardi Gras, Queen Elizabeth II's "Diamond Jubilee Concert", and BBC Proms in the Park London 2012. Minogue released the compilation album "The Abbey Road Sessions" in October. The album contained reworked and orchestral versions of her previous songs. It was recorded at London's Abbey Road Studios and was produced by Steve Anderson and Colin Elliot. The album received favourable reviews from music critics and debuted at number two in the United Kingdom. The album spawned two singles, "Flower" and "On a Night Like This". Minogue returned to acting and starred in two films: a cameo appearance in the American independent film "Jack & Diane" and a lead role in the French film "Holy Motors". "Jack & Diane" opened at the Tribeca Film Festival on 20 April 2012, while "Holy Motors" opened at the 2012 Cannes Film Festival, which Minogue attended. In January 2013, Minogue and her manager Terry Blamey, whom she had worked with since the start of her singing career, parted ways. The following month, she signed to Roc Nation for a management deal. In September, she was featured on Italian singer-songwriter Laura Pausini's single "Limpido", which was a number-one hit in Italy and received a nomination for "World's Best Song" at the 2013 World Music Awards. In the same month, Minogue was hired as a coach for the third series of BBC One's talent competition "The Voice UK", alongside record producer and Black Eyed Peas member will.i.am, Kaiser Chiefs' lead singer Ricky Wilson and singer Sir Tom Jones. The show opened with 9.35 million viewers in the UK, a large percentage increase from the second season. It accumulated an estimated 8.10 million viewers on average. Minogue's judging and personality on the show were singled out for praise. Ed Power from "The Daily Telegraph" gave the series premiere three stars, praising Minogue for being "glamorous, agreeably giggly [and] a card-carrying national treasure". In November, she was hired as a coach for the third season of "The Voice Australia". In March 2014, Minogue released her 12th studio album, "Kiss Me Once". The album featured contributions from Sia Furler, Mike Del Rio, Cutfather, Pharrell Williams, MNEK and Ariel Rechtshaid. It peaked at number one in Australia and number two in the United Kingdom. The singles from the album, "Into the Blue" and "I Was Gonna Cancel", did not chart inside the top ten of the UK Singles Chart, peaking at number 12 and number 59 respectively. In August, Minogue performed a seven-song set at the closing ceremony of the 2014 Commonwealth Games, donning a custom Jean Paul Gaultier corset. In September, she embarked on the Kiss Me Once Tour. In January 2015, Minogue appeared as a guest vocalist on Giorgio Moroder's single "Right Here, Right Now", which became her 12th number one hit on the U.S. Dance Chart on 18 April 2015. In March, Minogue's contract with Parlophone Records ended, leaving her future music releases with Warner Music Group in Australia and New Zealand. The same month, she parted ways with Roc Nation. In April, Minogue played tech reporter Shauna in a two-episode arc on the ABC Family series "Young & Hungry". Minogue also appeared as Susan Riddick in the disaster film "San Andreas", released in May and starring Dwayne Johnson and Carla Gugino. 
In September 2015, an extended play with Fernando Garibay titled "Kylie + Garibay" was released. Garibay and Giorgio Moroder served as producers for the extended play. In November, Minogue was a featured artist on the track "The Other Boys" by Nervo, alongside Jake Shears and Nile Rodgers. This became her thirteenth chart topper on the U.S. Dance Chart, lifting her position in the list of artists with the most U.S. Dance Chart number ones to equal eighth alongside Whitney Houston, Enrique Iglesias and Lady Gaga. In December 2015, Minogue was the guest on BBC Radio 4's "Desert Island Discs". Her choices included "Dancing Queen" by ABBA, "Purple Rain" by Prince and "Need You Tonight" by INXS. Minogue released her first Christmas album, "Kylie Christmas", in November 2015. In 2016, she recorded "This Wheel's on Fire", the theme song for a film soundtrack. Minogue's holiday album "Kylie Christmas" was re-released in November in a new edition. In February 2017, Minogue signed a new record deal with BMG Rights Management, which would release her next album internationally. In December 2017, Minogue and BMG struck a joint deal with Mushroom Music Labels, under the sub-division label Liberator, to release her new album in Australia and New Zealand. In 2017, Minogue worked with writers and producers for her fourteenth studio album, including Amy Wadge, Sky Adams, DJ Fresh, Nathan Chapman, Richard Stannard, The Invisible Men and Karen Poole, and recorded the album in London, Los Angeles and Nashville, with the latter city profoundly influencing the record. Minogue's album "Golden" was released on 6 April 2018, with "Dancing" serving as the album's lead single. Minogue has been known for her soft soprano vocal range. Dara Hickey reviewed Minogue's studio album "Aphrodite" and wrote that she is "singing in her highest vocal range ever." According to Fiona MacDonald from "Madison" magazine, Kylie "has never shied away from making some brave but questionable artistic decisions". In musical terms, Minogue has worked with many genres in pop and dance music. However, her signature music has been contemporary disco music. Her first studio albums with Stock, Aitken, and Waterman present a more bubblegum pop influence, with many critics comparing her to American recording artist Madonna. Chris True from "Allmusic" reviewed her debut "Kylie" and found her music "standard late-'80s Stock-Aitken-Waterman bubblegum"; however, he stated that she presented the most personality of any 1980s recording artist. He said of her third album "Rhythm of Love", from the early 1990s, "The songwriting is stronger, the production dynamic, and Kylie seems more confident vocally." At the time of her third studio album, she "began to trade in her cutesy, bubblegum pop image for a more mature one, and in turn, a more sexual one." Chris True stated that during her relationship with Michael Hutchence, "her shedding of the near-virginal façade that dominated her first two albums, began to have an effect, not only on how the press and her fans treated her, but in the evolution of her music." From Minogue's work on her sixth studio album, "Impossible Princess", her songwriting and musical content began to change. She was constantly writing down words, exploring the form and meaning of sentences. She had written lyrics before, but called them "safe, just neatly rhymed words and that's that". Musically, the album varied from her previous efforts, incorporating dance-pop and trip hop. 
Sal Cinquemani from "Slant Magazine" said that the album bears a resemblance to Madonna's "Ray of Light". He said that she took inspiration from "both the Brit-pop and electronica movements of the mid-'90s", saying that "Impossible Princess is the work of an artist willing to take risks". Her next effort, "Light Years", he said, was "worked up by the renaissance of pure dance-pop that was the order of the day at the onset of the 21st century." Her ninth album, "Body Language", was quite different from her past musical experiments, a "successful" attempt at broadening her sound with, for instance, electro and hip hop. Incorporating styles of dance music with funk, disco and R&B, the album was listed on "Q"'s "Best Albums of 2003". Minogue's tenth record, "X", was a move back to her pop roots. In a press interview for the promotion of the album, she stated that the album was "mainly electropop" and was a "celebration for me to be back working." However, critics said the album did not feature enough "consistency", and Chris True called the tracks "cold, calculated dance-pop numbers." By contrast, he said her eleventh album "Aphrodite" "rarely strays past sweet love songs or happy dance anthems" and "the main sound is the kind of glittery disco pop that really is her strong suit." Tim Sendra from "Allmusic" stated, "The various producers keep their eyes on the dancefloor throughout, crafting shiny and sleek tracks that sound custom-built to blast out of huge speaker columns" and found "Aphrodite" "One of her best, in fact." Minogue's efforts to be taken seriously as a recording artist were initially hindered by the perception that she had not "paid her dues" and was no more than a manufactured pop star exploiting the image she had created during her stint on "Neighbours". Minogue acknowledged this viewpoint, saying, "If you're part of a record company, I think to a degree it's fair to say that you're a manufactured product. You're a product and you're selling a product. It doesn't mean that you're not talented and that you don't make creative and business decisions about what you will and won't do and where you want to go." In 1993, Baz Luhrmann introduced Minogue to photographer Bert Stern, notable for his work with Marilyn Monroe. Stern photographed her in Los Angeles and, comparing her to Monroe, commented that Minogue had a similar mix of vulnerability and eroticism. Throughout her career, Minogue has chosen photographers who attempt to create a new "look" for her, and the resulting photographs have appeared in a variety of magazines, from the cutting-edge "The Face" to the more traditionally sophisticated "Vogue" and "Vanity Fair", making the Minogue face and name known to a broad range of people. Stylist William Baker has suggested that this is part of the reason she entered mainstream pop culture in Europe more successfully than many other pop singers who concentrate solely on selling records. By 2000, Minogue was considered to have achieved a degree of musical credibility for having maintained her career longer than her critics had expected. Her progression from the wholesome "girl next door" to a more sophisticated performer with a flirtatious and playful persona attracted new fans. Her "Spinning Around" video led to some media outlets referring to her as "SexKylie", and sex became a stronger element in her subsequent videos. 
William Baker described her status as a sex symbol as a "double edged sword", observing that "we always attempted to use her sex appeal as an enhancement of her music and to sell a record. But now it has become in danger of eclipsing what she actually is: a pop singer." After 20 years as a performer, Minogue was described as a fashion "trend-setter" and a "style icon who constantly reinvents herself". She has been acknowledged for mounting successful tours and for worldwide record sales of more than 70 million. Minogue has been inspired by and compared to Madonna throughout her career. Her producer, Pete Waterman, recalled Minogue during the early years of her success with the observation: "She was setting her sights on becoming the new Prince or Madonna ... What I found amazing was that she was outselling Madonna four to one, but still wanted to be her." Minogue received negative comments that her Rhythm of Love tour in 1991 was too similar visually to Madonna's Blond Ambition World Tour, for which critics labelled her a Madonna wannabe. Kathy McCabe for "The Telegraph" noted that Minogue and Madonna follow similar styles in music and fashion, but concluded, "Where they truly diverge on the pop-culture scale is in shock value. Minogue's clips might draw a gasp from some but Madonna's ignite religious and political debate unlike any other artist on the planet ... Simply, Madonna is the dark force; Kylie is the light force." "Rolling Stone" commented that, with the exception of the US, Minogue is regarded throughout the world as "an icon to rival Madonna", saying, "Like Madonna, Minogue was not a virtuosic singer but a canny trend spotter." Minogue has said of Madonna, "Her huge influence on the world, in pop and fashion, meant that I wasn't immune to the trends she created. I admire Madonna greatly but in the beginning she made it difficult for artists like me, she had done everything there was to be done", and "Madonna's the Queen of Pop, I'm the princess. I'm quite happy with that." Minogue has been declared by media as a sex symbol. In January 2007, Madame Tussauds in London unveiled its fourth waxwork of Minogue; only Queen Elizabeth II has had more models created. During the same week a bronze cast of her hands was added to Wembley Arena's "Square of Fame". On 23 November 2007, a bronze statue of Minogue was unveiled at Melbourne Docklands for permanent display. In March 2010, Minogue was declared by researchers as the "most powerful celebrity in Britain". The study examined how marketers identify celebrity and brand partnerships. Mark Husak, head of Millward Brown's UK media practice, said: "Kylie is widely accepted as an adopted Brit. People know her, like her and she is surrounded by positive buzz". She was named one of the "100 Hottest Women of All-Time" by "Men's Health". In May 2011, according to the Sunday Times Rich List, Minogue had a net worth of $66 million (£40m). In April 2015, the list was updated with her estimated earnings of £55 million (AU $106.61 million). Minogue is regarded as a gay icon, which she has encouraged with comments including "I am not a traditional gay icon. There's been no tragedy in my life, only tragic outfits" and "My gay audience has been with me from the beginning ... they kind of adopted me." Minogue has explained that she first became aware of her gay audience in 1988, when several drag queens performed to her music at a Sydney pub, and she later saw a similar show in Melbourne. 
She said that she felt "very touched" to have such an "appreciative crowd", and this encouraged her to perform at gay venues throughout the world, as well as headlining the 1994 Sydney Gay and Lesbian Mardi Gras. Minogue has one of the largest gay followings in the world. Minogue was diagnosed with breast cancer at age 36 on 17 May 2005, leading to the postponement of the remainder of her "Showgirl – The Greatest Hits Tour" and her withdrawal from the Glastonbury Festival. Her hospitalisation and treatment in Melbourne resulted in a brief but intense period of media coverage, particularly in Australia, where Prime Minister John Howard issued a statement of support. As media and fans began to congregate outside the Minogue residence in Melbourne, Victorian Premier Steve Bracks warned the international media that any disruption of the Minogue family's rights under Australian privacy laws would not be tolerated. His comments became part of a wider criticism of the media's overall reaction, with the paparazzi singled out in particular. Minogue underwent surgery on 21 May 2005 at Cabrini Hospital in Malvern and commenced chemotherapy treatment soon after. On 8 July 2005, she made her first public appearance after surgery when she visited a children's cancer ward at Melbourne's Royal Children's Hospital. She returned to France, where she completed her chemotherapy treatment at the Institut Gustave-Roussy in Villejuif, near Paris. In December 2005, Minogue released a digital-only single, "Over the Rainbow", a live recording from her Showgirl tour. Her children's book, "The Showgirl Princess", written during her period of convalescence, was published in October 2006, and her perfume, "Darling", was launched in November. The range was later augmented by eaux de toilette including Pink Sparkle, Couture and Inverse. On her return to Australia for her concert tour, she discussed her illness and said that her chemotherapy treatment had been like "experiencing a nuclear bomb". While appearing on "The Ellen DeGeneres Show" in 2008, Minogue said that her cancer had originally been misdiagnosed. She commented, "Because someone is in a white coat and using big medical instruments doesn't necessarily mean they're right", but later spoke of her respect for the medical profession. Minogue was acknowledged for the impact she made by publicly discussing her cancer diagnosis and treatment; in May 2008, the French Cultural Minister Christine Albanel said, "Doctors now even go as far as saying there is a "Kylie effect" that encourages young women to have regular checks." Minogue has been involved in humanitarian support in areas including health research and quality of life. Musically, she has helped fundraise on many occasions. In 1989, she participated in recording "Do They Know It's Christmas?" under the name Band Aid II to help raise money. In early 2010, Minogue, along with many other artists (under the name Helping Haiti), recorded a cover version of "Everybody Hurts". The single was a fundraiser to help after the 2010 Haiti earthquake. She also spent a week in Thailand after the 2005 tsunami. During her 2011 Aphrodite World Tour, the 2011 Tōhoku earthquake and tsunami struck Japan, which was on her itinerary. She declared she would continue to tour there, stating, "I was here to do shows and I chose not to cancel. Why did I choose not to cancel? I thought long and hard about it and it wasn't an easy decision to make." 
While she was there, she and Australian Prime Minister Julia Gillard were star guests at an Australian Embassy fundraiser for the disaster. In 2008, Minogue pledged her support for a campaign to raise money for abused children, to be donated to the British charities ChildLine and the National Society for the Prevention of Cruelty to Children. Around $93 million was reportedly raised. She spoke out in relation to the cause, saying: "Finding the courage to tell someone about being abused is one of the most difficult decisions a child will ever have to make." In 2010 and 2012, she was involved in supporting the AIDS Support Gala held by the American Foundation for AIDS Research (amfAR). Since Minogue's breast cancer diagnosis in 2005, she has been a sponsor and ambassador for the cause. In May 2010, she took part in a breast cancer campaign for the first time. She later spoke about the cause, saying, "It means so much to me to be part of this year's campaign for Fashion Targets Breast Cancer. I wholeheartedly support their efforts to raise funds for the vital work undertaken by Breakthrough Breast Cancer." For the cause, she "posed in a silk sheet emblazoned with the distinctive target logo of Fashion Targets Breast Cancer" for photographer Mario Testino. In 2010, she celebrated her fifth anniversary of being clear of the disease by hosting a charity concert to raise money for cancer charities and awareness of the condition. In April 2014, Minogue launched a new charity campaign entitled One Note Against Cancer to help cancer research. From 1989 to 1991, Minogue dated INXS frontman Michael Hutchence. She also briefly dated rock superstar Lenny Kravitz in 1991. She had a relationship with model James Gooding from 1998 until 2001. Minogue began a relationship with French actor Olivier Martinez after meeting him at the 2003 Grammy Awards ceremony. They ended their relationship in February 2007, but remained on friendly terms. Minogue was reported to have been "saddened by false [media] accusations of [Martinez's] disloyalty". She defended Martinez, and acknowledged the support he had given during her treatment for breast cancer. Minogue was in a relationship with model Andrés Velencoso from 2008 to 2013. In November 2015, Minogue confirmed she was dating British actor Joshua Sasse. On 20 February 2016, their engagement was announced in the "Forthcoming Marriages" section of "The Daily Telegraph". In February 2017, Minogue confirmed the couple had ended their relationship. In February 2012, VH1 ranked Minogue at number 47 on its list of the Greatest Women in Music and number 49 on its "50 Greatest Women of the Video Era". Channel 4 listed her as one of the world's greatest pop stars. The Official Charts Company revealed that she is the 12th best-selling singer in the United Kingdom to date, and the third best-selling female artist, with over 10.1 million sales. According to the British Phonographic Industry (BPI), all of her studio albums have been certified, and, counting her singles as well, she has a total of 27 certified records. In January 2011, she received a "Guinness World Records" citation for having the most consecutive decades with top five albums, all of her albums having reached the top five. In September, she was ranked number 27 on VH1's "100 Sexiest Artists". 
In 2008, she was honoured with the Music Industry Trusts Award in recognition of her 20-year career and was hailed as "an icon of pop and style", becoming the first female musician or act to receive the award. Minogue made history in the United States by having two songs inside the top three of the US Dance Club Songs chart, with "Better than Today" and "Higher" charting at numbers one and three, respectively. She has sold more than 70 million records worldwide. Dino Scatena of the "Sydney Morning Herald" website wrote that "A quarter of a century ago, a sequence of symbiotic events altered the fabric of Australian popular culture and set in motion the transformation of a 19-year-old soap actor from Melbourne into an international pop icon." He then stated: "Who could have imagined this tiny, unsophisticated star of Neighbours, with the bad '80s perm and questionable vocal ability, would go on to become Australia's single most successful entertainer and a world-renowned style idol?" Throughout her career, Minogue has been known for reinventing herself in fashion and musical content. "Fabulous" magazine labelled her a "Master of Reinvention". In November 2006, Minogue posed for "Vogue Australia", with Larissa Dubecki from "The Age" labelling her the "Mother of Reinvention", writing: "This unveiling is as cleverly managed as every aspect of her career, and her illness, to date. Like sharks, celebrities cannot remain static; they must keep moving or die. Kylie has beaten her early detractors by inhabiting almost a dozen identities." Fiona MacDonald from "Madison" said Kylie was "an icon, one of the handful of singers recognised around the world by her first name alone. And yet despite becoming an international music superstar, style icon and honorary Brit, those two syllables still seem as Australian as the smell of eucalyptus or a barbeque on a hot day." Minogue's hits have garnered many accolades throughout her career. In January 2012, "NME" listed her worldwide hit single "Can't Get You Out of My Head" at number 4 on its list of the Greatest Pop Songs in History. The song was also named the most-played track of the decade. The song eventually became the third best-selling UK single and the most-played song in the UK in 2001. As of 2012, "Can't Get You Out of My Head" was ranked 72nd on the UK's Official Top 100 Biggest Selling Singles of All Time. Her single "Spinning Around" has also become emblematic of her career, largely due to the iconic hot pants she wore in its music video, with critics calling the video "excellent tabloid fodder" and a trademark of her career. Her song "Come into My World" won the Grammy Award for Best Dance Recording in 2004. "Slant Magazine" placed the video for "Come into My World" at number 1 on its "Best of the Aughts: Music Videos" list. In 2007, French avant-garde guitarist Noël Akchoté released "So Lucky", featuring solo guitar versions of tunes recorded by Minogue. Minogue has been recognized with a number of honorific nicknames. She has been compared to American recording artist Madonna in many media outlets, with Madonna identified as the Queen of Pop and Minogue as the Princess of Pop. The "Birmingham Post" said in 2000: "[o]nce upon a time, long before anybody had even heard of Britney, Christina, Jessica or Mandy, Australian singer Kylie Minogue ruled the charts as princess of pop. Back in 1988 her first single, I Should Be So Lucky, spent five weeks at number one, making her the most successful female artist in the UK charts with 13 successive Top 10 entries." 
She is sometimes referred to as the Goddess of Pop or Pop Goddess. She has also been dubbed a Disco Queen. William Baker, co-author of one of her books, said of her song "Better the Devil You Know": "The track transferred well onto the dance-floor and heralded a long reign for Kylie as the new queen of disco... a pop princess." Jon O'Brien from "Allmusic", reviewing one of her box sets, stated: "Kylie has yet to make that one essential album, and the going-through-the-motions nature of her later releases suggests her time may have passed, but this box set still contains plenty of moments to justify her position as one of the all-time premier pop princesses." In November 2011, on the twenty-fifth anniversary of the ARIA Music Awards, she was inducted by the Australian Recording Industry Association into the ARIA Hall of Fame. Many celebrities have named Minogue as a role model. Many media outlets have said that many people, particularly women, have undergone regular checks for cancer symptoms since her breast cancer diagnosis. Television host Giuliana Rancic cited Minogue's cancer story as "inspirational" when she too was diagnosed with cancer. She said, "When you get diagnosed, you don't know what to do… you Google. I went online and I read your story, and it gave me the strength to go public as well." Musical entertainers who have cited Minogue as an influence include September, Diana Vickers, Paris Hilton, and The Veronicas. In April 2017, the Britain-Australia Society recognised Minogue with its 2016 award for outstanding contribution to the improvement of relations and bilateral understanding between Britain and Australia. The citation reads: "In recognition of significant contribution to the Britain-Australia relationship as an acclaimed singer-songwriter, actor, entrepreneur and iconic personality in both countries. Instantly recognisable worldwide as simply 'Kylie', with arguably your greatest following in the UK and Australia, you are a powerful contemporary symbol of the enduring links shared between our two nations and an outstanding cultural ambassador for both." The award was announced at a reception in Australia House but was personally presented the next day by HRH Prince Philip, Patron of the Society, at Windsor Castle. Killer whale The killer whale or orca ("Orcinus orca") is a toothed whale belonging to the oceanic dolphin family, of which it is the largest member. Killer whales have a diverse diet, although individual populations often specialize in particular types of prey. Some feed exclusively on fish, while others hunt marine mammals such as seals and dolphins. They have been known to attack baleen whale calves, and even adult whales. Killer whales are apex predators, as no animal preys on them. A cosmopolitan species, they can be found in each of the world's oceans in a variety of marine environments, from Arctic and Antarctic regions to tropical seas, absent only from the Baltic and Black seas, and some areas of the Arctic Ocean. Killer whales are highly social; some populations are composed of matrilineal family groups (pods) which are the most stable of any animal species. Their sophisticated hunting techniques and vocal behaviours, which are often specific to a particular group and passed across generations, have been described as manifestations of animal culture. The International Union for Conservation of Nature assesses the orca's conservation status as data deficient because of the likelihood that two or more killer whale types are separate species. 
Some local populations are considered threatened or endangered due to prey depletion, habitat loss, pollution (by PCBs), capture for marine mammal parks, and conflicts with human fisheries. In late 2005, the Southern Resident Killer Whales, which swim in British Columbia and Washington state waters, were placed on the U.S. Endangered Species list. Wild killer whales are not considered a threat to humans, but there have been cases of captive orcas killing or injuring their handlers at marine theme parks. Killer whales feature strongly in the mythologies of indigenous cultures, with their reputation ranging from being the souls of humans to merciless killers. Orcinus orca is the only recognized extant species in the genus "Orcinus", one of many animal species originally described by Linnaeus in 1758 in "Systema Naturae". Konrad Gessner wrote the first scientific description of a killer whale in his "Piscium & aquatilium animantium natura" of 1558, part of the larger "Historia animalium", based on examination of a dead stranded animal in the Bay of Greifswald that had attracted a great deal of local interest. The killer whale is one of 35 species in the oceanic dolphin family, which first appeared about 11 million years ago. The killer whale lineage probably branched off shortly thereafter. Although it has morphological similarities with the pygmy killer whale, the false killer whale and the pilot whales, a study of cytochrome b gene sequences by Richard LeDuc indicated that its closest extant relatives are the snubfin dolphins of the genus "Orcaella". Although the term "orca" is increasingly used, English-speaking scientists most often use the traditional name "killer whale". Indeed, the genus name "Orcinus" means "of the kingdom of the dead", or "belonging to Orcus". Ancient Romans originally used "orca" (pl. "orcae") for these animals, possibly borrowing the Greek "óryx", which referred (among other things) to a whale species. Since the 1960s, "orca" has steadily grown in popularity. The term "orca" is euphemistically preferred by some to avoid the negative connotations of "killer", and because, being part of the family Delphinidae, the species is more closely related to other dolphins than to whales. According to some authors, the name "killer whale" is a mistranslation of the 18th-century Spanish name "asesina-ballenas", literally "whale killer", possibly given by Basque whalers after observing pods of orcas hunting baleen whales. They are sometimes referred to as "blackfish", a name also used for other whale species. "Grampus" is a former name for the species, but is now seldom used. This meaning of "grampus" should not be confused with the genus "Grampus", whose only member is Risso's dolphin. The three to five types of killer whales may be distinct enough to be considered different races, subspecies, or possibly even species (see Species problem). The IUCN reported in 2008: "The taxonomy of this genus is clearly in need of review, and it is likely that "O. orca" will be split into a number of different species or at least subspecies over the next few years." Although large variation in the ecological distinctiveness of different killer whale groups complicates simple differentiation into types, research off the west coast of Canada and the United States in the 1970s and 1980s identified three types: residents, transients and offshores. Transients and residents live in the same areas, but avoid each other. 
Other populations have not been as well studied, although specialized fish and mammal eating killer whales have been distinguished elsewhere. In addition, separate populations of "generalist" (fish- and mammal-eating) and "specialist" (mammal-eating) killer whales have been identified off northwestern Europe. As with residents and transients, the lifestyle of these whales appears to reflect their diet; fish-eating killer whales in Alaska and Norway have resident-like social structures, while mammal-eating killer whales in Argentina and the Crozet Islands behave more like transients. Three types have been documented in the Antarctic. Two dwarf species, named "Orcinus nanus" and "Orcinus glacialis", were described during the 1980s by Soviet researchers, but most cetacean researchers are skeptical about their status, and linking these directly to the types described below is difficult. Types B and C live close to the ice pack, and diatoms in these waters may be responsible for the yellowish coloring of both types. Mitochondrial DNA sequences support the theory that these are recently diverged separate species. More recently, complete mitochondrial sequencing indicates the two Antarctic groups that eat seals and fish should be recognized as distinct species, as should the North Pacific transients, leaving the others as subspecies pending additional data. Advanced methods that sequenced the entire mitochondrial genome revealed systematic differences in DNA between different populations. Mammal-eating killer whales in different regions were long thought likely to be closely related, but genetic testing has refuted this hypothesis. There are seven identified ecotypes inhabiting isolated ecological niches. Of three orca ecotypes in the Antarctic, one preys on minke whales, the second on seals and penguins, and the third on fish. Another ecotype lives in the eastern North Atlantic, while the three Northeast Pacific ecotypes are labeled the transient, resident and offshore populations described above. Research has supported a proposal to reclassify the Antarctic seal- and fish-eating populations and the North Pacific transients as a distinct species, leaving the remaining ecotypes as subspecies. The first split in the orca population, between the North Pacific transients and the rest, occurred an estimated 700,000 years ago. Such a designation would mean that each new species becomes subject to separate conservation assessments. A typical killer whale distinctively bears a black back, white chest and sides, and a white patch above and behind the eye. Calves are born with a yellowish or orange tint, which fades to white. It has a heavy and robust body with a large dorsal fin up to tall. Behind the fin, it has a dark grey "saddle patch" across the back. Antarctic killer whales may have pale gray to nearly white backs. Adult killer whales are very distinctive, seldom confused with any other sea creature. When seen from a distance, juveniles can be confused with other cetacean species, such as the false killer whale or Risso's dolphin. The killer whale's teeth are very strong, and its jaws exert a powerful grip; the upper teeth fall into the gaps between the lower teeth when the mouth is closed. The firm middle and back teeth hold prey in place, while the front teeth are inclined slightly forward and outward to protect them from powerful jerking movements. Killer whales are the largest extant members of the dolphin family. Males typically range from long and weigh in excess of . 
Females are smaller, generally ranging from and weighing about . The largest male killer whale on record was , weighing , while the largest female was , weighing . Calves at birth weigh about and are about long. The killer whale's large size and strength make it among the fastest marine mammals, able to reach speeds in excess of . The skeleton of the killer whale is of the typical delphinid structure, but more robust. Its integument, unlike that of most other dolphin species, is characterized by a well-developed dermal layer with a dense network of fascicles of collagen fibers. Killer whale pectoral fins, analogous to forelimbs, are large and rounded, resembling paddles, with those of males significantly larger than those of females. Dorsal fins also exhibit sexual dimorphism, with those of males about high, more than twice the size of the female's, with the male's fin more like a tall, elongated isosceles triangle, whereas the female's is shorter and more curved. Males and females also have different patterns of black and white skin in their genital areas. In the skull, adult males have longer lower jaws than females, as well as larger occipital crests. An individual killer whale can often be identified from its dorsal fin and saddle patch. Variations such as nicks, scratches, and tears on the dorsal fin and the pattern of white or grey in the saddle patch are unique. Published directories contain identifying photographs and names for hundreds of North Pacific animals. Photographic identification has enabled the local population of killer whales to be counted each year rather than estimated, and has enabled great insight into lifecycles and social structures. Occasionally a killer whale is white; they have been spotted in the northern Bering Sea and around St. Lawrence Island, and near the Russian coast. In February 2008, a white killer whale was photographed off Kanaga Volcano in the Aleutian Islands. In 2010, the Far East Russia Orca Project (FEROP), co-founded and co-directed by Alexander M. Burdin and Erich Hoyt, filmed an adult male nicknamed Iceberg. Killer whales have good eyesight above and below the water, excellent hearing, and a good sense of touch. They have exceptionally sophisticated echolocation abilities, detecting the location and characteristics of prey and other objects in the water by emitting clicks and listening for echoes, as do other members of the dolphin family. The mean body temperature of the orca is . Like most marine mammals, orcas have a layer of insulating blubber ranging from thick beneath the skin. The pulse is about 60 heartbeats per minute when the orca is at the surface, dropping to 30 beats/min when submerged. Female killer whales begin to mature at around the age of 10 and reach peak fertility around 20, experiencing periods of polyestrous cycling separated by non-cycling periods of three to 16 months. Females can often breed until age 40, followed by a rapid decrease in fertility. As such, orcas are among the few animals that undergo menopause and live for decades after they have finished breeding. The lifespans of wild females average 50 years. Some are claimed to have lived substantially longer: Granny (J2) was estimated by some researchers to have been as old as 105 years at the time of her death, though a biopsy sample indicated her age as 65 to 80 years. To avoid inbreeding, males mate with females from other pods. Gestation varies from 15 to 18 months. Mothers usually calve a single offspring about once every five years. 
In resident pods, births occur at any time of year, although winter is the most common. Mortality is extremely high during the first seven months of life, when 37–50% of all calves die. Weaning begins at about 12 months of age, and is complete by two years. According to observations in several regions, all male and female pod members participate in the care of the young. Males sexually mature at the age of 15, but do not typically reproduce until age 21. Wild males live around 29 years on average, with a maximum of about 60 years. One male, known as Old Tom, was reportedly spotted every winter between the 1840s and 1930 off New South Wales, Australia. This would have made him up to 90 years old. Examination of his teeth indicated he died around age 35, but this method of age determination is now believed to be inaccurate for older animals. One male known to researchers in the Pacific Northwest (identified as J1) was estimated to have been 59 years old when he died in 2010. Killer whales are unique among cetaceans, as their caudal sections elongate with age, making their heads relatively shorter. Infanticide, once thought to occur only in captive killer whales, was observed in wild populations by researchers off British Columbia in 2018. In this incident, an adult male killed the calf of a female within the same pod, with his mother also joining in the assault. It is theorized that the male killed the young calf in order to mate with its mother (something that occurs in other carnivore species), while the male's mother supported the breeding opportunity for her son. Such behavior matches that of many smaller dolphin species such as the bottlenose dolphin. Killer whales are found in all oceans and most seas. Due to their enormous range, numbers, and density, relative distribution is difficult to estimate, but they clearly prefer higher latitudes and coastal areas over pelagic environments. Areas which serve as major study sites for the species include the coasts of Iceland, Norway, the Valdes Peninsula of Argentina, the Crozet Islands, New Zealand and parts of the west coast of North America, from California to Alaska. Systematic surveys indicate the highest densities of killer whales (>0.40 individuals per 100 km²) in the northeast Atlantic around the Norwegian coast, in the north Pacific along the Aleutian Islands and the Gulf of Alaska, and in the Southern Ocean off much of the coast of Antarctica. They are considered "common" (0.20–0.40 individuals per 100 km²) in the eastern Pacific along the coasts of British Columbia, Washington and Oregon, and in the North Atlantic Ocean around Iceland and the Faroe Islands. High densities have also been reported, but not quantified, in the western North Pacific around the Sea of Japan, Sea of Okhotsk, Kuril Islands, Kamchatka and the Commander Islands, and in the Southern Hemisphere off southern Brazil and the tip of southern Africa. They are reported as seasonally common in the Canadian Arctic, including Baffin Bay between Greenland and Nunavut, as well as around Tasmania and Macquarie Island. Regularly occurring or distinct populations exist off Northwest Europe, California, Patagonia, the Crozet Islands, Marion Island, southern Australia and New Zealand. The northwest Atlantic population of at least 67 individuals ranges from Labrador and Newfoundland to New England, with sightings to Cape Cod and Long Island. 
Information for offshore regions and warmer waters is more scarce, but widespread sightings indicate the killer whale can survive in most water temperatures. They have been sighted, though more infrequently, in the Mediterranean, the Arabian Sea, the Gulf of Mexico, and the Caribbean. Over 50 individual whales have been documented in the northern Indian Ocean, including two individuals that were sighted in the Persian Gulf in 2008 and off Sri Lanka in 2015. Those orcas may occasionally enter the Red Sea through the Gulf of Aden. The modern status of the species along coastal mainland China and its vicinity is unknown. Recorded sightings have been made from almost the entire shoreline. A wide-ranging population is likely to exist in the central Pacific, with some sightings off Hawaii. Distinct populations may also exist off the west coast of tropical Africa, and Papua New Guinea. In the Mediterranean, killer whales are considered "visitors", likely from the North Atlantic, and sightings become less frequent further east. However, a small year-round population is known to exist in the Strait of Gibraltar. Killer whales also appear to regularly occur off the Galápagos Islands. In the Antarctic, killer whales range up to the edge of the pack ice and are believed to venture into the denser pack ice, finding open leads much like beluga whales in the Arctic. However, killer whales are merely seasonal visitors to Arctic waters, and do not approach the pack ice in the summer. With the rapid Arctic sea ice decline in the Hudson Strait, their range now extends deep into the northwest Atlantic. Occasionally, killer whales swim into freshwater rivers. They have been documented up the Columbia River in the United States. They have also been found in the Fraser River in Canada and the Horikawa River in Japan. Migration patterns are poorly understood. Each summer, the same individuals appear off the coasts of British Columbia and Washington. Despite decades of research, where these animals go for the rest of the year remains unknown. Transient pods have been sighted from southern Alaska to central California. Worldwide population estimates are uncertain, but recent consensus suggests a minimum of 50,000. Local estimates include roughly 25,000 in the Antarctic, 8,500 in the tropical Pacific, 2,250–2,700 off the cooler northeast Pacific and 500–1,500 off Norway. Japan's Fisheries Agency estimated 2,321 killer whales were in the seas around Japan. Killer whales are apex predators, meaning that they themselves have no natural predators. They are sometimes called the wolves of the sea, because they hunt in groups like wolf packs. Killer whales hunt varied prey including fish, cephalopods, mammals, sea birds, and sea turtles. Different populations or ecotypes may specialize, and some can have a dramatic impact on prey species. However, whales in tropical areas appear to have more generalized diets due to lower food productivity. Fish-eating killer whales prey on around 30 species of fish. Some populations in the Norwegian and Greenland sea specialize in herring and follow that fish's autumnal migration to the Norwegian coast. Salmon account for 96% of northeast Pacific residents' diet, including 65% of large, fatty Chinook. Chum salmon are also eaten, but smaller sockeye and pink salmon are not a significant food item. Depletion of specific prey species in an area is, therefore, cause for concern for local populations, despite the high diversity of prey. On average, a killer whale eats each day. 
While salmon are usually hunted by an individual whale or a small group, herring are often caught using carousel feeding: the killer whales force the herring into a tight ball by releasing bursts of bubbles or flashing their white undersides. They then slap the ball with their tail flukes, stunning or killing up to 15 fish at a time, then eating them one by one. Carousel feeding has only been documented in the Norwegian killer whale population, as well as in some oceanic dolphin species. In New Zealand, sharks and rays appear to be important prey, including eagle rays, long-tail and short-tail stingrays, common threshers, smooth hammerheads, blue sharks, basking sharks and shortfin mako sharks. With sharks, orcas may herd them to the surface and strike them with their tail flukes, while bottom-dwelling rays are cornered, pinned to the ground and taken to the surface. In other parts of the world, killer whales have preyed on broadnose sevengill sharks, tiger sharks and even small whale sharks. Killer whales have also been recorded feeding on great white sharks, including one incident filmed near the Farallon Islands in October 1997, where a female orca killed a white shark, possibly inducing tonic immobility before feeding. A pod of orcas has been recorded killing a white shark off South Australia, with possible kills in South Africa. The whales appear to target the shark's liver. Competition between killer whales and white sharks is probable in regions where their diets overlap. Killer whales are very sophisticated and effective predators of marine mammals. Thirty-two cetacean species have been recorded as prey, based on observations of orcas' feeding activity, examination of the stomach contents of dead orcas, and scars seen on the bodies of surviving prey animals. Groups even attack larger cetaceans such as minke whales, gray whales, and, rarely, sperm whales or blue whales. Hunting a large whale usually takes several hours. Killer whales generally attack young or weak animals; however, a group of five or more may attack a healthy adult. When hunting a young whale, a group chases it and its mother to exhaustion. Eventually, they separate the pair and surround the calf, drowning it by keeping it from surfacing. Pods of female sperm whales sometimes protect themselves by forming a protective circle around their calves with their flukes facing outwards, using them to repel the attackers. Rarely, large killer whale pods can overwhelm even adult female sperm whales. Adult bull sperm whales, which are large, powerful and aggressive when threatened, and fully grown adult blue whales, which are possibly too large to overwhelm, are not believed to be prey for killer whales. Prior to the advent of industrial whaling, great whales may have been the major food source for killer whales. The introduction of modern whaling techniques may have aided killer whales, with the sound of exploding harpoons indicating the availability of prey to scavenge, and the compressed-air inflation of whale carcasses causing them to float, thus exposing them to scavenging. However, the devastation of great whale populations by unfettered whaling has possibly reduced their availability for killer whales, and caused them to expand their consumption of smaller marine mammals, thus contributing to the decline of these as well. 
It has been hypothesised that predation by orcas on whale calves in high-productivity, high-latitude areas is the reason for great whale migrations during breeding season to low-productivity tropical waters where orcas are scarcer. Other marine mammal prey species include nearly 20 species of seal, sea lion and fur seal. Walruses and sea otters are less frequently taken. Often, to avoid injury, killer whales disable their prey before killing and eating it. This may involve throwing it in the air, slapping it with their tails, ramming it, or breaching and landing on it. Sea lions are killed by head-butting or after a stunning blow from a tail fluke. In the Aleutian Islands, a decline in sea otter populations in the 1990s was controversially attributed by some scientists to killer whale predation, although with no direct evidence. The decline of sea otters followed a decline in harbour seal and Steller sea lion populations, the killer whale's preferred prey, which in turn may be substitutes for their original prey, now decimated by industrial whaling. In steeply banked beaches off Península Valdés, Argentina, and the Crozet Islands, killer whales feed on South American sea lions and southern elephant seals in shallow water, even beaching temporarily to grab prey before wriggling back to the sea. Beaching, usually fatal to cetaceans, is not an instinctive behaviour, and can require years of practice for the young. Killer whales can then release the animal near juvenile whales, allowing the younger whales to practice the difficult capture technique on the now-weakened prey. "Wave-hunting" killer whales spy-hop to locate Weddell seals, crabeater seals, leopard seals, and penguins resting on ice floes, and then swim in groups to create waves that wash over the floe. This washes the prey into the water, where other killer whales lie in wait. Killer whales have also been observed preying on terrestrial mammals, such as deer swimming between islands off the northwest coast of North America. Killer whale cannibalism has also been reported based on analysis of stomach contents, but this is likely to be the result of scavenging remains dumped by whalers. One killer whale was also attacked by its companions after being shot. Although resident killer whales have never been observed to eat other marine mammals, they occasionally harass and kill porpoises and seals for no apparent reason. Killer whales in many areas may prey on cormorants and gulls. A captive killer whale at MarineLand discovered it could regurgitate fish onto the surface, attracting sea gulls, and then eat the birds. Four others then learned to copy the behaviour. Day-to-day killer whale behaviour generally consists of foraging, travelling, resting and socializing. Killer whales frequently engage in surface behaviour such as breaching (jumping completely out of the water) and tail-slapping. These activities may have a variety of purposes, such as courtship, communication, dislodging parasites, or play. Spyhopping is a behaviour in which a whale holds its head above water to view its surroundings. Resident killer whales swim alongside porpoises, other dolphins, seals, and sea lions, which are common prey for transient killer whales. Killer whales are notable for their complex societies. Only elephants and higher primates, such as humans, live in comparably complex social structures. Due to orcas' complex social bonds, many marine experts have concerns about how humane it is to keep these animals in captivity. 
Resident killer whales in the eastern North Pacific live in particularly complex and stable social groups. Unlike any other known mammal social structure, resident whales live with their mothers for their entire lives. These family groups are based on matrilines consisting of the eldest female (matriarch) and her sons and daughters, and the descendants of her daughters, etc. The average size of a matriline is 5.5 animals. Because females can reach age 90, as many as four generations travel together. These matrilineal groups are highly stable. Individuals separate for only a few hours at a time, to mate or forage. With one exception, a killer whale named Luna, no permanent separation of an individual from a resident matriline has been recorded. Closely related matrilines form loose aggregations called pods, usually consisting of one to four matrilines. Unlike matrilines, pods may separate for weeks or months at a time. DNA testing indicates resident males nearly always mate with females from other pods. Clans, the next level of resident social structure, are composed of pods with similar dialects, and common but older maternal heritage. Clan ranges overlap, mingling pods from different clans. The final association layer, perhaps more arbitrarily defined than the familial groupings, is called the community, and is defined as a set of clans that regularly commingle. Clans within a community do not share vocal patterns. Transient pods are smaller than resident pods, typically consisting of an adult female and one or two of her offspring. Males typically maintain stronger relationships with their mothers than with other females. These bonds can extend well into adulthood. Unlike residents, extended or permanent separation of transient offspring from natal matrilines is common, with juveniles and adults of both sexes participating. Some males become "rovers" and do not form long-term associations, occasionally joining groups that contain reproductive females. As in resident clans, transient community members share an acoustic repertoire, although regional differences in vocalizations have been noted. Like all cetaceans, killer whales depend heavily on underwater sound for orientation, feeding, and communication. They produce three categories of sounds: clicks, whistles, and pulsed calls. Clicks are believed to be used primarily for navigation and discriminating prey and other objects in the surrounding environment, but are also commonly heard during social interactions. Northeast Pacific resident groups tend to be much more vocal than transient groups in the same waters. Residents feed primarily on Chinook and chum salmon, species that are insensitive to killer whale calls (inferred from the audiogram of Atlantic salmon). In contrast, the marine mammal prey of transients hear well underwater at the frequencies used in killer whale calls. As such, transients are typically silent, probably to avoid alerting their mammalian prey. They sometimes use a single click (called a cryptic click) rather than the long train of clicks observed in other populations. Residents are only silent when resting. All members of a resident pod use similar calls, known collectively as a dialect. Dialects are composed of specific numbers and types of discrete, repetitive calls. They are complex and stable over time. Call patterns and structure are distinctive within matrilines. Newborns produce calls similar to their mothers', but have a more limited repertoire. 
Individuals likely learn their dialect through contact with their mother and other pod members. For instance, family-specific calls have been observed more frequently in the days following a calf's birth, which may help the calf learn them. Dialects are probably an important means of maintaining group identity and cohesiveness. Similarity in dialects likely reflects the degree of relatedness between pods, with variation building over time. When pods meet, dominant call types decrease and subset call types increase. The use of both call types is called biphonation. The increased subset call types may be the distinguishing factor between pods and inter-pod relations. Killer whale dialects distinguish not only individual pods from one another, but also the different types of killer whale. Resident dialects contain seven to 17 (mean = 11) distinctive call types. All members of the North American west coast transient community express the same basic dialect, although minor regional variation in call types is evident. Preliminary research indicates offshore killer whales have group-specific dialects unlike those of residents and transients. The vocalizations of killer whales in other parts of the world have also been studied. Norwegian and Icelandic herring-eating orcas appear to have different vocalizations for activities like hunting and traveling. Killer whales have the second-heaviest brains among marine mammals (after sperm whales, which have the largest brain of any animal). They can be trained in captivity and are often described as intelligent, although defining and measuring "intelligence" is difficult in a species whose environment and behavioral strategies are very different from those of humans. Killer whales imitate others, and seem to deliberately teach skills to their kin. Off the Crozet Islands, mothers push their calves onto the beach, waiting to pull the youngster back if needed. People who have interacted closely with killer whales offer numerous anecdotes demonstrating the whales' curiosity, playfulness, and ability to solve problems. Alaskan killer whales have not only learned how to steal fish from longlines, but have also overcome a variety of techniques designed to stop them, such as the use of unbaited lines as decoys. Once, fishermen placed their boats several miles apart, taking turns retrieving small amounts of their catch, in the hope that the whales would not have enough time to move between boats to steal the catch as it was being retrieved. A researcher described what happened next: In other anecdotes, researchers describe incidents in which wild killer whales playfully tease humans by repeatedly moving objects the humans are trying to reach, or suddenly start to toss around a chunk of ice after a human throws a snowball. The killer whale's use of dialects and the passing of other learned behaviours from generation to generation have been described as a form of animal culture. In 2008, the IUCN (International Union for Conservation of Nature) changed its assessment of the killer whale's conservation status from conservation dependent to data deficient, recognizing that one or more killer whale types may actually be separate, endangered species. Depletion of prey species, pollution, large-scale oil spills, and habitat disturbance caused by noise and conflicts with boats are the most significant worldwide threats. Like other animals at the highest trophic levels, the killer whale is particularly at risk of poisoning from bioaccumulation of toxins, including polychlorinated biphenyls (PCBs). 
European harbor seals have problems in reproductive and immune functions associated with high levels of PCBs and related contaminants, and a survey off the Washington coast found PCB levels in killer whales were higher than levels that had caused health problems in harbor seals. Blubber samples in the Norwegian Arctic show higher levels of PCBs, pesticides and brominated flame-retardants than in polar bears. When food is scarce, killer whales metabolize blubber for energy, which increases pollutant concentrations in their blood. In the Pacific Northwest, wild salmon stocks, a main resident food source, have declined dramatically in recent years. In the Puget Sound region, only 75 whales remain, with few births over the last few years. On the west coast of Alaska and the Aleutian Islands, seal and sea lion populations have also substantially declined. In 2005, the United States government listed the southern resident community as an endangered population under the Endangered Species Act. This community comprises three pods which live mostly in the Georgia and Haro Straits and Puget Sound in British Columbia and Washington. They do not breed outside of their community, which was once estimated at around 200 animals and later shrank to around 90. In October 2008, the annual survey revealed seven were missing and presumed dead, reducing the count to 83. This is potentially the largest decline in the population in the past ten years. These deaths can be attributed to declines in Chinook salmon. Scientist Ken Balcomb has extensively studied killer whales since 1976; he is the research biologist responsible for discovering that U.S. Navy sonar may harm killer whales. He studied killer whales from the Center for Whale Research, located in Friday Harbor, Washington. He was also able to study killer whales from "his home porch perched above Puget Sound, where the animals hunt and play in summer months". In May 2003, Balcomb (along with other whale watchers near the Puget Sound coastline) noticed uncharacteristic behaviour displayed by the killer whales. The whales seemed "agitated and were moving haphazardly, attempting to lift their heads free of the water" to escape the sound of the sonar. "Balcomb confirmed at the time that strange underwater pinging noises detected with underwater microphones were sonar. The sound originated from a U.S. Navy frigate 12 miles (19 kilometers) distant, Balcomb said." The impact of sonar waves on killer whales is potentially life-threatening. Three years prior to Balcomb's discovery, research in the Bahamas showed that 14 beaked whales had washed up on shore. These whales were beached on the day U.S. Navy destroyers were engaged in a sonar exercise. Of the 14 whales beached, six of them died. These six dead whales were studied, and CAT scans of two of the whale heads showed hemorrhaging around the brain and the ears, which is consistent with decompression sickness. Another conservation concern was made public in September 2008, when the Canadian government decided it was not necessary to enforce further protections for killer whales (including measures under the Species at Risk Act, which is in place to protect endangered animals and their habitats) beyond the laws already in place. In response to this decision, six environmental groups sued the federal government, claiming killer whales were facing many threats on the British Columbia Coast and the federal government did nothing to protect them from these threats. 
A legal and scientific nonprofit organization, Ecojustice, led the lawsuit and represented the David Suzuki Foundation, Environmental Defence, Greenpeace Canada, International Fund for Animal Welfare, the Raincoast Conservation Foundation, and the Wilderness Committee. Many scientists involved in this lawsuit, including Bill Wareham, a marine scientist with the David Suzuki Foundation, noted increased boat traffic, toxic waste in the water, and low salmon populations as major threats, putting approximately 87 killer whales on the British Columbia Coast in danger. Underwater noise from shipping, drilling, and other human activities is a significant concern in some key killer whale habitats, including Johnstone Strait and Haro Strait. In the mid-1990s, loud underwater noises from salmon farms were used to deter seals. Killer whales also avoided the surrounding waters. High-intensity sonar used by the Navy disturbs killer whales along with other marine mammals. Killer whales are popular with whale watchers, which may stress the whales and alter their behavior, particularly if boats approach too closely or block their lines of travel. The "Exxon Valdez" oil spill adversely affected killer whales in Prince William Sound and Alaska's Kenai Fjords region. Eleven members (about half) of one resident pod disappeared in the following year. The spill damaged salmon and other prey populations, which in turn damaged local killer whales. By 2009, scientists estimated that the AT1 transient population (considered part of a larger population of 346 transients) numbered only seven individuals and had not reproduced since the spill. This population is expected to die out. The indigenous peoples of the Pacific Northwest Coast feature killer whales throughout their art, history, spirituality and religion. The Haida regarded killer whales as the most powerful animals in the ocean, and their mythology tells of killer whales living in houses and towns under the sea. According to these myths, they took on human form when submerged, and humans who drowned went to live with them. For the Kwakwaka'wakw, the killer whale was regarded as the ruler of the undersea world, with sea lions for slaves and dolphins for warriors. In Nuu-chah-nulth and Kwakwaka'wakw mythology, killer whales may embody the souls of deceased chiefs. The Tlingit of southeastern Alaska regarded the killer whale as custodian of the sea and a benefactor of humans. The Maritime Archaic people of Newfoundland also had great respect for killer whales, as evidenced by stone carvings found in a 4,000-year-old burial at the Port au Choix Archaeological Site. In the tales and beliefs of the Siberian Yupik people, killer whales are said to appear as wolves in winter, and wolves as killer whales in summer. Killer whales are believed to assist their hunters in driving walrus. Reverence is expressed in several forms: the boat represents the animal, and a wooden carving is hung from the hunter's belt. Small sacrifices such as tobacco are strewn into the sea for them. Killer whales were believed to have helped the hunters even when in wolf guise, by forcing reindeer to allow themselves to be killed. In the folklore and mythology of the indigenous Ainu people, killer whales were often referred to as "Repun Kamuy" ("God of the Sea/Offshore"), believed to bring fortune (whales) to the coasts, and traditional funerals were held for stranded or deceased orcas, akin to those for other animals such as brown bears. In Western cultures, killer whales were historically feared as dangerous, savage predators. 
The first written description of a killer whale was given by Pliny the Elder "circa" AD 70, who wrote, "Orcas (the appearance of which no image can express, other than an enormous mass of savage flesh with teeth) are the enemy of [other whales]... they charge and pierce them like warships ramming." Of the very few confirmed attacks on humans by wild killer whales, none have been fatal. In one instance, killer whales tried to tip ice floes on which a dog team and a photographer of the Terra Nova Expedition were standing. The sled dogs' barking is speculated to have sounded enough like seal calls to trigger the killer whales' hunting curiosity. In the 1970s, a surfer in California was bitten, and in 2005, a boy in Alaska who was splashing in a region frequented by harbor seals was bumped by a killer whale that apparently misidentified him as prey. Unlike wild killer whales, captive killer whales are reported to have made nearly two dozen attacks on humans since the 1970s, some of which have been fatal. Competition with fishermen also led to killer whales being regarded as pests. In the waters of the Pacific Northwest and Iceland, the shooting of killer whales was accepted and even encouraged by governments. As an indication of the intensity of shooting that occurred until fairly recently, about 25% of the killer whales captured in Puget Sound for aquaria through 1970 bore bullet scars. The U.S. Navy claimed to have deliberately killed hundreds of killer whales in Icelandic waters in 1956 with machine guns, rockets, and depth charges. Western attitudes towards killer whales have changed dramatically in recent decades. In the mid-1960s and early 1970s, killer whales came to much greater public and scientific awareness, starting with the first live-capture and display of a killer whale known as Moby Doll, a resident harpooned off Saturna Island in 1964. So little was known at the time that it was nearly two months before the whale's keepers discovered what food (fish) it was willing to eat. To the surprise of those who saw him, Moby Doll was a docile, nonaggressive whale that made no attempts to attack humans. Between 1964 and 1976, 50 killer whales from the Pacific Northwest were captured for display in aquaria, and public interest in the animals grew. In the 1970s, research pioneered by Michael Bigg led to the discovery of the species' complex social structure, its use of vocal communication, and its extraordinarily stable mother–offspring bonds. Through photo-identification techniques, individuals were named and tracked over decades. Bigg's techniques also revealed the Pacific Northwest population was in the low hundreds rather than the thousands that had been previously assumed. The southern resident community alone had lost 48 of its members to captivity; by 1976, only 80 remained. In the Pacific Northwest, the species that had unthinkingly been targeted became a cultural icon within a few decades. The public's growing appreciation also led to growing opposition to whale-keeping in aquaria. Only one whale has been taken in North American waters since 1976. In recent years, the extent of the public's interest in killer whales has manifested itself in several high-profile efforts surrounding individuals. Following the success of the 1993 film "Free Willy", the movie's captive star Keiko was returned to the coast of his native Iceland in 1998. 
The director of the International Marine Mammal Project for the Earth Island Institute, David Phillips, led the efforts to return Keiko to Icelandic waters. In 2002, the orphan Springer was discovered in Puget Sound, Washington. She became the first whale to be successfully reintegrated into a wild pod after human intervention, crystallizing decades of research into the vocal behavior and social structure of the region's killer whales. The saving of Springer raised hopes that another young killer whale named Luna, which had become separated from his pod, could be returned to it. However, his case was marked by controversy about whether and how to intervene, and in 2006, Luna was killed by a boat propeller. The earliest known records of commercial hunting of killer whales date to the 18th century in Japan. During the 19th and early 20th centuries, the global whaling industry caught immense numbers of baleen and sperm whales, but largely ignored killer whales because of their limited amounts of recoverable oil, their smaller populations, and the difficulty of taking them. Once the stocks of larger species were depleted, killer whales were targeted by commercial whalers in the mid-20th century. Between 1954 and 1997, Japan took 1,178 killer whales (although the Ministry of the Environment claims that there had been domestic catches of about 1,600 whales between the late 1940s and the 1960s) and Norway took 987. Over 3,000 killer whales were taken by Soviet whalers, including an Antarctic catch of 916 in 1979–80 alone, prompting the International Whaling Commission to recommend a ban on commercial hunting of the species pending further research. Today, no country carries out a substantial hunt, although Indonesia and Greenland permit small subsistence hunts (see Aboriginal whaling). Other than commercial hunts, killer whales were hunted along Japanese coasts out of public concern over potential conflicts with fisheries. Such cases include a semi-resident male-female pair in Akashi Strait and Harimanada being killed in the Seto Inland Sea in 1957, the killing of five whales from a pod of 11 members that swam into Tokyo Bay in 1970, and a catch record in southern Taiwan in the 1990s. Killer whales have helped humans hunt other whales. One well-known example was the killer whales of Eden, Australia, including the male known as Old Tom. Whalers more often considered them a nuisance, however, as orcas would gather to scavenge meat from the whalers' catch. Some populations, such as in Alaska's Prince William Sound, may have been reduced significantly by whalers shooting them in retaliation. The killer whale's intelligence, trainability, striking appearance, playfulness in captivity and sheer size have made it a popular exhibit at aquaria and aquatic theme parks. From 1976 to 1997, 55 whales were taken from the wild in Iceland, 19 from Japan, and three from Argentina. These figures exclude animals that died during capture. Live captures fell dramatically in the 1990s, and by 1999, about 40% of the 48 animals on display in the world were captive-born. Organizations such as World Animal Protection and the Whale and Dolphin Conservation Society campaign against the practice of keeping them in captivity. In captivity, they often develop pathologies, such as the dorsal fin collapse seen in 60–90% of captive males. Captives have vastly reduced life expectancies, on average only living into their 20s. 
In the wild, females who survive infancy live 46 years on average, and up to 70–80 years in rare cases. Wild males who survive infancy live 31 years on average, and up to 50–60 years. Captivity usually bears little resemblance to wild habitat, and captive whales' social groups are foreign to those found in the wild. Critics claim captive life is stressful due to these factors and to the requirement to perform circus tricks that are not part of wild killer whale behavior. Wild killer whales may travel up to in a day, and critics say the animals are too big and intelligent to be suitable for captivity. Captives occasionally act aggressively towards themselves, their tankmates, or humans, which critics say is a result of stress. Between 1991 and 2010, the bull orca known as Tilikum was involved in the deaths of three people, and was featured in the critically acclaimed 2013 film "Blackfish". Tilikum lived at SeaWorld from 1992 until his death in 2017. A 2015 study coauthored by staff at SeaWorld and the Minnesota Zoo indicates that there is no significant difference in survivorship between free-ranging and captive killer whales. The authors speculate about the future utility of studying captive populations for the purposes of understanding orca biology, and about the implications of such research for the overall health of both wild and marine park populations. In March 2016, SeaWorld announced that it would be ending its orca breeding program and its theatrical shows. It had previously announced, in November 2015, that the shows would come to an end in San Diego, but this is now to happen in Orlando and San Antonio as well. Koala The koala ("Phascolarctos cinereus", or, inaccurately, koala bear) is an arboreal herbivorous marsupial native to Australia. It is the only extant representative of the family Phascolarctidae and its closest living relatives are the wombats. The koala is found in coastal areas of the mainland's eastern and southern regions, inhabiting Queensland, New South Wales, Victoria, and South Australia. It is easily recognisable by its stout, tailless body and large head with round, fluffy ears and large, spoon-shaped nose. The koala has a body length of and weighs . Pelage colour ranges from silver grey to chocolate brown. Koalas from the northern populations are typically smaller and lighter in colour than their counterparts further south. These populations possibly are separate subspecies, but this is disputed. Koalas typically inhabit open eucalypt woodlands, and the leaves of these trees make up most of their diet. Because this eucalypt diet has limited nutritional and caloric content, koalas are largely sedentary and sleep up to 20 hours a day. They are asocial animals, and bonding exists only between mothers and dependent offspring. Adult males communicate with loud bellows that intimidate rivals and attract mates. Males mark their presence with secretions from scent glands located on their chests. Being marsupials, koalas give birth to underdeveloped young that crawl into their mothers' pouches, where they stay for the first six to seven months of their lives. These young koalas, known as joeys, are fully weaned around a year old. Koalas have few natural predators and parasites, but are threatened by various pathogens, such as Chlamydiaceae bacteria and the koala retrovirus, as well as by bushfires and droughts. Koalas were hunted by indigenous Australians and depicted in myths and cave art for millennia. 
The first recorded encounter between a European and a koala was in 1798, and an image of the animal was published in 1810 by naturalist George Perry. Botanist Robert Brown wrote the first detailed scientific description of the koala in 1814, although his work remained unpublished for 180 years. Popular artist John Gould illustrated and described the koala, introducing the species to the general British public. Further details about the animal's biology were revealed in the 19th century by several English scientists. Because of its distinctive appearance, the koala is recognised worldwide as a symbol of Australia. Koalas are listed as Vulnerable by the International Union for Conservation of Nature. The Australian government similarly lists specific populations in Queensland and New South Wales as Vulnerable. The animal was hunted heavily in the early 20th century for its fur, and large-scale cullings in Queensland resulted in a public outcry that initiated a movement to protect the species. Sanctuaries were established, and translocation efforts moved koalas whose habitat had become fragmented or reduced to new regions. The biggest threat to their existence is habitat destruction caused by agriculture and urbanisation. The word koala comes from the Dharug "gula". Although the vowel 'u' was originally written in the English orthography as "oo" (in spellings such as "coola" or "koolah"), it was changed to "oa", possibly in error. Because of the koala's supposed resemblance to a bear, it was often miscalled the koala bear, particularly by early settlers. The generic name, "Phascolarctos", is derived from the Greek words "phaskolos" ("pouch") and "arktos" ("bear"). The specific name, "cinereus", is Latin for "ash coloured". The koala was given its generic name "Phascolarctos" in 1816 by French zoologist Henri Marie Ducrotay de Blainville, who would not give it a specific name until further review. In 1819, German zoologist Georg August Goldfuss gave it the binomial "Lipurus cinereus". Because "Phascolarctos" was published first, according to the International Code of Zoological Nomenclature, it has priority as the official name of the genus. French naturalist Anselme Gaëtan Desmarest proposed the name "Phascolarctos fuscus" in 1820, suggesting that the brown-coloured versions were a different species from the grey ones. Other names suggested by European authors included "Marodactylus cinereus" by Goldfuss in 1820, "P. flindersii" by René Primevère Lesson in 1827, and "P. koala" by John Edward Gray in 1827. The koala is classified with wombats (family Vombatidae) and several extinct families (including marsupial tapirs, marsupial lions and giant wombats) in the suborder Vombatiformes within the order Diprotodontia. The Vombatiformes are a sister group to a clade that includes macropods (kangaroos and wallabies) and possums. The ancestors of vombatiforms were likely arboreal, and the koala's lineage was possibly the first to branch off around 40 million years ago during the Eocene. The modern koala is the only extant member of Phascolarctidae, a family that once included several genera and species. During the Oligocene and Miocene, koalas lived in rainforests and had less specialised diets. Some species, such as the Riversleigh rainforest koala ("Nimiokoala greystanesi") and some species of "Perikoala", were around the same size as the modern koala, while others, such as species of "Litokoala", were one-half to two-thirds its size. 
Like the modern species, prehistoric koalas had well developed ear structures which suggests that long-distance vocalising and sedentism developed early. During the Miocene, the Australian continent began drying out, leading to the decline of rainforests and the spread of open "Eucalyptus" woodlands. The genus "Phascolarctos" split from "Litokoala" in the late Miocene and had several adaptations that allowed it to live on a specialised eucalyptus diet: a shifting of the palate towards the front of the skull; larger molars and premolars; smaller pterygoid fossa; and a larger gap between the molar and the incisor teeth. During the Pliocene and Pleistocene, when Australia experienced changes in climate and vegetation, koala species grew larger. "P. cinereus" may have emerged as a dwarf form of the giant koala ("P. stirtoni"). The reduction in the size of large mammals has been seen as a common phenomenon worldwide during the late Pleistocene, and several Australian mammals, such as the agile wallaby, are traditionally believed to have resulted from this dwarfing. A 2008 study questions this hypothesis, noting that "P. cinereus" and "P. stirtoni" were sympatric during the middle to late Pleistocene, and possibly as early as the Pliocene. The fossil record of the modern koala extends back at least to the middle Pleistocene. Traditionally, three distinct subspecies have been recognised: the Queensland koala ("P. c. adustus", Thomas 1923), the New South Wales koala ("P. c. cinereus", Goldfuss 1817), and the Victorian koala ("P. c. victor", Troughton 1935). These forms are distinguished by pelage colour and thickness, body size, and skull shape. The Queensland koala is the smallest of the three, with shorter, silver fur and a shorter skull. The Victorian koala is the largest, with shaggier, brown fur and a wider skull. The boundaries of these variations are based on state borders, and their status as subspecies is disputed. A 1999 genetic study suggests that the variations represent differentiated populations with limited gene flow between them, and that the three subspecies comprise a single evolutionarily significant unit. Other studies have found that koala populations have high levels of inbreeding and low genetic variation. Such low genetic diversity may have been a characteristic of koala populations since the late Pleistocene. Rivers and roads have been shown to limit gene flow and contribute to the genetic differentiation of southeast Queensland populations. In April 2013, scientists from the Australian Museum and Queensland University of Technology announced they had fully sequenced the koala genome. The koala is a stocky animal with a large head and vestigial or non-existent tail. It has a body length of and a weight of , making it among the largest arboreal marsupials. Koalas from Victoria are twice as heavy as those from Queensland. The species is sexually dimorphic, with males 50% larger than females. Males are further distinguished from females by their more curved noses and the presence of chest glands, which are visible as hairless patches. As in most marsupials, the male koala has a bifurcated penis, and the female has two lateral vaginas and two separate uteri. The male's penile sheath contains naturally occurring bacteria that play an important role in fertilisation. The female's pouch opening is tightened by a sphincter that keeps the young from falling out. The pelage of the koala is thicker and longer on the back, and shorter on the belly. 
The ears have thick fur on both the inside and outside. The back fur colour varies from light grey to chocolate brown. The belly fur is whitish; on the rump it is dappled whitish, and darker at the back. The koala has the most effective insulating back fur of any marsupial and is highly resilient to wind and rain, while the belly fur can reflect solar radiation. The koala's curved, sharp claws are well adapted for climbing trees. The large forepaws have two opposable digits (the first and second, which are opposable to the other three) that allow them to grasp small branches. On the hindpaws, the second and third digits are fused, a typical condition for members of the Diprotodontia, and the attached claws (which are still separate) are used for grooming. As in humans and other primates, koalas have friction ridges on their paws. The animal has a sturdy skeleton and a short, muscular upper body with proportionately long upper limbs that contribute to its climbing and grasping abilities. Additional climbing strength is achieved with thigh muscles that attach to the shinbone lower than other animals. The koala has a cartilaginous pad at the end of the spine that may make it more comfortable when it perches in the fork of a tree. The koala has one of the smallest brains in proportion to body weight of any mammal, being 60% smaller than that of a typical diprotodont, weighing only . The brain's surface is fairly smooth, typical for a "primitive" animal. It occupies only 61% of the cranial cavity and is pressed against the inside surface by cerebrospinal fluid. The function of this relatively large amount of fluid is not known, although one possibility is that it acts as a shock absorber, cushioning the brain if the animal falls from a tree. The koala's small brain size may be an adaptation to the energy restrictions imposed by its diet, which is insufficient to sustain a larger brain. Because of its small brain, the koala has a limited ability to perform complex, unfamiliar behaviours. For example, when presented with plucked leaves on a flat surface, the animal cannot adapt to the change in its normal feeding routine and will not eat the leaves. The koala's olfactory senses are normal, and it is known to sniff the oils of individual branchlets to assess their edibility. Its nose is fairly large and covered in leathery skin. Its round ears provide it with good hearing, and it has a well-developed middle ear. A koala's vision is not well developed, and its relatively small eyes are unusual among marsupials in that the pupils have vertical slits. Koalas make use of a novel vocal organ to produce low-pitched sounds (see social spacing, below). Unlike typical mammalian vocal cords, which are folds in the larynx, these organs are placed in the velum (soft palate) and are called velar vocal cords. The koala has several adaptations for its eucalypt diet, which is of low nutritive value, of high toxicity, and high in dietary fibre. The animal's dentition consists of the incisors and cheek teeth (a single premolar and four molars on each jaw), which are separated by a large gap (a characteristic feature of herbivorous mammals). The incisors are used for grasping leaves, which are then passed to the premolars to be snipped at the petiole before being passed to the highly cusped molars, where they are shredded into small pieces. Koalas may also store food in their cheek pouches before it is ready to be chewed. 
The partially worn molars of middle-aged koalas are optimal for breaking the leaves into small particles, resulting in more efficient stomach digestion and nutrient absorption in the small intestine, which digests the eucalyptus leaves to provide most of the animal's energy. A koala sometimes regurgitates the food into the mouth to be chewed a second time. Unlike kangaroos and eucalyptus-eating possums, koalas are hindgut fermenters, and their digestive retention can last for up to 100 hours in the wild, or up to 200 hours in captivity. This is made possible by the extraordinary length of their caecum— long and in diameter—the largest proportionally of any animal. Koalas can select which food particles to retain for longer fermentation and which to pass through. Large particles typically pass through more quickly, as they would take more time to digest. While the hindgut is proportionally larger in the koala than in other herbivores, only 10% of the animal's energy is obtained from fermentation. Since the koala gains a low amount of energy from its diet, its metabolic rate is half that of a typical mammal, although this can vary between seasons and sexes. The koala conserves water by passing relatively dry faecal pellets high in undigested fibre, and by storing water in the caecum. The koala's geographic range covers roughly , and 30 ecoregions. It extends throughout eastern and southeastern Australia, encompassing northeastern, central and southeastern Queensland, eastern New South Wales, Victoria, and southeastern South Australia. The koala was introduced near Adelaide and on several islands, including Kangaroo Island and French Island. The population on Magnetic Island represents the northern limit of its range. Fossil evidence shows that the koala's range stretched as far west as southwestern Western Australia during the late Pleistocene. They were likely driven to extinction in these areas by environmental changes and hunting by indigenous Australians. In Queensland, koalas are unevenly distributed and uncommon except in the southeast, where they are numerous. In New South Wales, they are abundant only in Pilliga, while in Victoria they are common nearly everywhere. In South Australia, koalas were extirpated by 1920 and subsequently reintroduced. Koalas can be found in habitats ranging from relatively open forests to woodlands, and in climates ranging from tropical to cool temperate. In semi-arid climates, they prefer riparian habitats, where nearby streams and creeks provide refuge during times of drought and extreme heat. Koalas are herbivorous, and while most of their diet consists of eucalypt leaves, they can be found in trees of other genera, such as "Acacia", "Allocasuarina", "Callitris", "Leptospermum", and "Melaleuca". They are able to digest the toxins present in eucalyptus leaves due to their production of cytochrome P450, which breaks down these poisons in the liver. Though the foliage of over 600 species of "Eucalyptus" is available, the koala shows a strong preference for around 30. They tend to choose species that have a high protein content and low proportions of fibre and lignin. The most favoured species are "Eucalyptus microcorys", "E. tereticornis", and "E. camaldulensis", which, on average, make up more than 20% of their diet. Despite its reputation as a fussy eater, the koala is more generalist than some other marsupial species, such as the greater glider. 
Since eucalypt leaves have a high water content, the koala does not need to drink often; its daily water turnover rate ranges from 71 to 91 ml/kg of body weight. Although females can meet their water requirements from eating leaves, larger males require additional water found on the ground or in tree hollows. When feeding, a koala holds onto a branch with hindpaws and one forepaw while the other forepaw grasps foliage. Small koalas can move close to the end of a branch, but larger ones stay near the thicker bases. Koalas consume up to of leaves a day, spread over four to six feeding sessions. Despite their adaptations to a low-energy lifestyle, they have meagre fat reserves and need to feed often. Because they get so little energy from their diet, koalas must limit their energy use and sleep 20 hours a day; only 4 hours a day are spent in active movement. They are predominantly active at night and spend most of their waking hours feeding. They typically eat and sleep in the same tree, possibly for as long as a day. On very hot days, a koala may climb down to the coolest part of the tree which is cooler than the surrounding air. The koala hugs the tree to lose heat without panting. On warm days, a koala may rest with its back against a branch or lie on its stomach or back with its limbs dangling. During cold, wet periods, it curls itself into a tight ball to conserve energy. On windy days, a koala finds a lower, thicker branch on which to rest. While it spends most of the time in the tree, the animal descends to the ground to move to another tree, walking on all fours. The koala usually grooms itself with its hindpaws, but sometimes uses its forepaws or mouth. Koalas are asocial animals and spend just 15 minutes a day on social behaviours. In Victoria, home ranges are small and have extensive overlap, while in central Queensland they are larger and overlap less. Koala society appears to consist of "residents" and "transients", the former being mostly adult females and the latter males. Resident males appear to be territorial and dominate others with their larger body size. Alpha males tend to establish their territories close to breeding females, while younger males are subordinate until they mature and reach full size. Adult males occasionally venture outside their home ranges; when they do so, dominant ones retain their status. When a male enters a new tree, he marks it by rubbing his chest gland against the trunk or a branch; males have occasionally been observed to dribble urine on the trunk. This scent-marking behaviour probably serves as communication, and individuals are known to sniff the base of a tree before climbing. Scent marking is common during aggressive encounters. Chest gland secretions are complex chemical mixtures—about 40 compounds were identified in one analysis—that vary in composition and concentration with the season and the age of the individual. Adult males communicate with loud bellows—low pitched sounds that consist of snore-like inhalations and resonant exhalations that sound like growls. These sounds are thought to be generated by unique vocal organs found in koalas. Because of their low frequency, these bellows can travel far through air and vegetation. Koalas may bellow at any time of the year, particularly during the breeding season, when it serves to attract females and possibly intimidate other males. They also bellow to advertise their presence to their neighbours when they enter a new tree. 
These sounds signal the male's actual body size, as well as exaggerate it; females pay more attention to bellows that originate from larger males. Female koalas bellow, though more softly, in addition to making snarls, wails, and screams. These calls are produced when in distress and when making defensive threats. Young koalas squeak when in distress. As they get older, the squeak develops into a "squawk" produced both when in distress and to show aggression. When another individual climbs over it, a koala makes a low grunt with its mouth closed. Koalas make numerous facial expressions. When snarling, wailing, or squawking, the animal curls the upper lip and points its ears forward. During screams, the lips retract and the ears are drawn back. Females bring their lips forward and raise their ears when agitated. Agonistic behaviour typically consists of squabbles between individuals climbing over or passing each other. This occasionally involves biting. Males that are strangers may wrestle, chase, and bite each other. In extreme situations, a male may try to displace a smaller rival from a tree. This involves the larger aggressor climbing up and attempting to corner the victim, which tries either to rush past him and climb down or to move to the end of a branch. The aggressor attacks by grasping the target by the shoulders and repeatedly biting him. Once the weaker individual is driven away, the victor bellows and marks the tree. Pregnant and lactating females are particularly aggressive and attack individuals that come too close. In general, however, koalas tend to avoid energy-wasting aggressive behaviour. Koalas are seasonal breeders, and births take place from the middle of spring through the summer to early autumn, from October to May. Females in oestrus tend to hold their heads further back than usual and commonly display tremors and spasms. However, males do not appear to recognise these signs, and have been observed to mount non-oestrous females. Because of his much larger size, a male can usually force himself on a female, mounting her from behind, and in extreme cases, the male may pull the female out of the tree. A female may scream and vigorously fight off her suitors, but will submit to one that is dominant or is more familiar. The bellows and screams that accompany matings can attract other males to the scene, obliging the incumbent to delay mating and fight off the intruders. These fights may allow the female to assess which is dominant. Older males usually have accumulated scratches, scars, and cuts on the exposed parts of their noses and on their eyelids. The koala's gestation period lasts 33–35 days, and a female gives birth to a single joey (although twins occur on occasion). As with all marsupials, the young are born while at the embryonic stage, weighing only . However, they have relatively well-developed lips, forelimbs, and shoulders, as well as functioning respiratory, digestive, and urinary systems. The joey crawls into its mother's pouch to continue the rest of its development. Unlike most other marsupials, the koala does not clean her pouch. A female koala has two teats; the joey attaches itself to one of them and suckles for the rest of its pouch life. The koala has one of the lowest milk energy production rates in relation to body size of any mammal. The female makes up for this by lactating for as long as 12 months. 
At seven weeks of age, the joey's head grows longer and becomes proportionally large, pigmentation begins to develop, and its sex can be determined (the scrotum appears in males and the pouch begins to develop in females). At 13 weeks, the joey weighs around and its head has doubled in size. The eyes begin to open and fine fur grows on the forehead, nape, shoulders, and arms. At 26 weeks, the fully furred animal resembles an adult, and begins to poke its head out of the pouch. As the young koala approaches six months, the mother begins to prepare it for its eucalyptus diet by predigesting the leaves, producing a faecal pap that the joey eats from her cloaca. The pap is quite different in composition from regular faeces, resembling instead the contents of the caecum, which has a high concentration of bacteria. Eaten for about a month, the pap provides a supplementary source of protein at a transition time from a milk to a leaf diet. The joey fully emerges from the pouch for the first time at six or seven months of age, when it weighs . It explores its new surroundings cautiously, clinging to its mother for support. By nine months, it weighs over and develops its adult fur colour. Having permanently left the pouch, it rides on its mother's back for transportation, learning to climb by grasping branches. Gradually, it spends more time away from its mother, and at 12 months it is fully weaned, weighing around . When the mother becomes pregnant again, her bond with her previous offspring is permanently severed. Newly weaned young are encouraged to disperse by their mothers' aggressive behaviour towards them. Females become sexually mature at about three years of age and can then become pregnant; in comparison, males reach sexual maturity when they are about four years old, although they can produce sperm as early as two years. While the chest glands can be functional as early as 18 months of age, males do not begin scent-marking behaviours until they reach sexual maturity. Because the offspring have a long dependent period, female koalas usually breed in alternate years. Favourable environmental factors, such as a plentiful supply of high-quality food trees, allow them to reproduce every year. Koalas may live from 13 to 18 years in the wild. While female koalas usually live this long, males may die sooner because of their more hazardous lives. Koalas usually survive falls from trees and immediately climb back up, but injuries and deaths from falls do occur, particularly in inexperienced young and fighting males. Around six years of age, the koala's chewing teeth begin to wear down and their chewing efficiency decreases. Eventually, the cusps disappear completely and the animal will die of starvation. Koalas have few predators; dingos and large pythons may prey on them, while birds of prey (such as powerful owls and wedge-tailed eagles) are threats to young. They are generally not subject to external parasites, other than ticks in coastal areas. Koalas may also suffer mange from the mite "Sarcoptes scabiei", and skin ulcers from the bacterium "Mycobacterium ulcerans", but neither is common. Internal parasites are few and largely harmless. These include the tapeworm "Bertiella obesa", commonly found in the intestine, and the nematodes "Marsupostrongylus longilarvatus" and "Durikainema phascolarcti", which are infrequently found in the lungs. 
In a three-year study of almost 600 koalas admitted to the Australia Zoo Wildlife Hospital in Queensland, 73.8% of the animals were infected with at least one species of the parasitic protozoal genus "Trypanosoma", the most common of which was "T. irwini". Koalas can be subject to pathogens such as Chlamydiaceae bacteria, which can cause keratoconjunctivitis, urinary tract infection, and reproductive tract infection. Such infections are widespread on the mainland, but absent in some island populations. The koala retrovirus (KoRV) may cause koala immune deficiency syndrome (KIDS), which is similar to AIDS in humans. Prevalence of KoRV in koala populations suggests a trend spreading from the north to the south of Australia. Northern populations are completely infected, while some southern populations (including Kangaroo Island) are free. The animals are vulnerable to bushfires due to their slow movements and the flammability of eucalypt trees. The koala instinctively seeks refuge in the higher branches, where it is vulnerable to intense heat and flames. Bushfires also fragment the animal's habitat, which restricts their movement and leads to population decline and loss of genetic diversity. Dehydration and overheating can also prove fatal. Consequently, the koala is vulnerable to the effects of climate change. Models of climate change in Australia predict warmer and drier climates, suggesting that the koala's range will shrink in the east and south to more mesic habitats. Droughts also affect the koala's well-being. For example, a severe drought in 1980 caused many "Eucalyptus" trees to lose their leaves. Subsequently, 63% of the population in southwestern Queensland died, especially young animals that were excluded from prime feeding sites by older, dominant koalas, and recovery of the population was slow. Later, this population declined from an estimated mean population of 59,000 in 1995 to 11,600 in 2009, a reduction attributed largely to hotter and drier conditions resulting from droughts in most years between 2002 and 2007. Another predicted negative outcome of climate change is the effect of elevated atmospheric carbon dioxide levels on the koala's food supply: increases in carbon dioxide cause "Eucalyptus" trees to reduce protein and increase tannin concentrations in their leaves, reducing the quality of the food source. The first written reference to the koala was recorded by John Price, a servant of John Hunter, the Governor of New South Wales. Price encountered the "cullawine" on 26 January 1798, during an expedition to the Blue Mountains, although his account was not published until nearly a century later in "Historical Records of Australia". In 1802, French-born explorer Francis Louis Barrallier encountered the animal when his two Aboriginal guides, returning from a hunt, brought back two koala feet they were intending to eat. Barrallier preserved the appendages and sent them and his notes to Hunter's successor, Philip Gidley King, who forwarded them to Joseph Banks. As with Price's account, Barrallier's notes were not published until 1897. Reports of the first capture of a live "koolah" appeared in "The Sydney Gazette" in August 1803. Within a few weeks, Flinders' astronomer, James Inman, purchased a specimen pair for live shipment to Joseph Banks in England. They were described as 'somewhat larger than the Waumbut (Wombat)'. These encounters helped provide the impetus for King to commission the artist John Lewin to paint watercolours of the animal. 
Lewin painted three pictures, one of which was subsequently made into a print that was reproduced in Georges Cuvier's "The Animal Kingdom" (first published in 1827) and several European works on natural history. Botanist Robert Brown was the first to write a detailed scientific description of the koala in 1814, based on a female specimen captured near what is now Mount Kembla in the Illawarra region of New South Wales. Austrian botanical illustrator Ferdinand Bauer drew the animal's skull, throat, feet, and paws. Brown's work remained unpublished and largely unnoticed, however, as his field books and notes remained in his possession until his death, when they were bequeathed to the British Museum (Natural History) in London. They were not identified until 1994, while Bauer's koala watercolours were not published until 1989. British surgeon Everard Home included details of the koala based on eyewitness accounts of William Paterson, who had befriended Brown and Bauer during their stay in New South Wales. Home, who in 1808 published his report in the journal "Philosophical Transactions of the Royal Society", gave the animal the scientific name "Didelphis coola". The first published image of the koala appeared in George Perry's (1810) natural history work "Arcana". Perry called it the "New Holland Sloth" on account of its perceived similarities to the Central and South American tree-living mammals of genus "Bradypus". His disdain for the koala, evident in his description of the animal, was typical of the prevailing early 19th-century British attitude about the primitiveness and oddity of Australian fauna: "... the eye is placed like that of the Sloth, very close to the mouth and nose, which gives it a clumsy awkward appearance, and void of elegance in the combination ... they have little either in their character or appearance to interest the Naturalist or Philosopher. As Nature however provides nothing in vain, we may suppose that even these torpid, senseless creatures are wisely intended to fill up one of the great links of the chain of animated nature ...". Naturalist and popular artist John Gould illustrated and described the koala in his three-volume work "The Mammals of Australia" (1845–63) and introduced the species, as well as other members of Australia's little-known faunal community, to the general British public. Comparative anatomist Richard Owen, in a series of publications on the physiology and anatomy of Australian mammals, presented a paper on the anatomy of the koala to the Zoological Society of London. In this widely cited publication, he provided the first careful description of its internal anatomy, and noted its general structural similarity to the wombat. English naturalist George Robert Waterhouse, curator of the Zoological Society of London, was the first to correctly classify the koala as a marsupial in the 1840s. He identified similarities between it and its fossil relatives "Diprotodon" and "Nototherium", which had been discovered just a few years before. Similarly, Gerard Krefft, curator of the Australian Museum in Sydney, noted evolutionary mechanisms at work when comparing the koala to its ancestral relatives in his 1871 "The Mammals of Australia". The first living koala in Britain arrived in 1881, purchased by the Zoological Society of London. As related by William Alexander Forbes, prosector to the society, the animal suffered an accidental demise when the heavy lid of a washstand fell on it and it was unable to free itself. 
Forbes used the opportunity to dissect the fresh female specimen, thus was able to provide explicit anatomical details on the female reproductive system, the brain, and the liver—parts not previously described by Owen, who had access only to preserved specimens. Scottish embryologist William Caldwell—well known in scientific circles for determining the reproductive mechanism of the platypus—described the uterine development of the koala in 1884, and used the new information to convincingly place the koala and the monotremes into an evolutionary time frame. Prince Henry, Duke of Gloucester, visited the Koala Park Sanctuary in Sydney in 1934 and was "intensely interested in the bears". His photograph, with Noel Burnet, the founder of the park, and a koala, appeared in "The Sydney Morning Herald". After World War II, when tourism to Australia increased and the animals were exported to zoos overseas, the koala's international popularity rose. Several political leaders and members of royal families had their pictures taken with koalas, including Queen Elizabeth II, Prince Harry, Crown Prince Naruhito, Crown Princess Masako, Pope John Paul II, US President Bill Clinton, Soviet premier Mikhail Gorbachev, South African President Nelson Mandela, Prime Minister Tony Abbott, and Russian President Vladimir Putin. The koala is well known worldwide and is a major draw for Australian zoos and wildlife parks. It has been featured in advertisements, games, cartoons, and as soft toys. It benefited the national tourism industry by over an estimated billion Australian dollars in 1998, a figure that has since grown. In 1997, half of visitors to Australia, especially those from Korea, Japan, and Taiwan, sought out zoos and wildlife parks; about 75% of European and Japanese tourists placed the koala at the top of their list of animals to see. According to biologist Stephen Jackson: "If you were to take a straw poll of the animal most closely associated with Australia, it's a fair bet that the koala would come out marginally in front of the kangaroo". Factors that contribute to the koala's enduring popularity include its childlike body proportions and teddy bear-like face. The koala is featured in the Dreamtime stories and mythology of indigenous Australians. The Tharawal people believed that the animal helped row the boat that brought them to the continent. Another myth tells of how a tribe killed a koala and used its long intestines to create a bridge for people from other parts of the world. This narrative highlights the koala's status as a game animal and the length of its intestines. Several stories tell of how the koala lost its tail. In one, a kangaroo cuts it off to punish the koala for being lazy and greedy. Tribes in both Queensland and Victoria regarded the koala as a wise animal and sought its advice. Bidjara-speaking people credited the koala for turning barren lands into lush forests. The animal is also depicted in rock carvings, though not as much as some other species. Early European settlers in Australia considered the koala to be a prowling sloth-like animal with a "fierce and menacing look". At the beginning of the 20th century, the koala's reputation took a more positive turn, largely due to its growing popularity and depiction in several widely circulated children's stories. It is featured in Ethel Pedley's 1899 book "Dot and the Kangaroo", in which it is portrayed as the "funny native bear". 
Artist Norman Lindsay depicted a more anthropomorphic koala in "The Bulletin" cartoons, starting in 1904. This character also appeared as Bunyip Bluegum in Lindsay's 1918 book "The Magic Pudding". Perhaps the most famous fictional koala is Blinky Bill. Created by Dorothy Wall in 1933, the character appeared in several books and has been the subject of films, TV series, merchandise, and a 1986 environmental song by John Williamson. The first Australian stamp featuring a koala was issued by the Commonwealth in 1930. A television ad campaign for Australia's national airline Qantas, starting in 1967 and running for several decades, featured a live koala (voiced by Howard Morris), who complained that too many tourists were coming to Australia and concluded "I hate Qantas". The series has been ranked among the greatest commercials of all time. The song "Ode to a Koala Bear" appears on the B-side of the 1983 Paul McCartney/Michael Jackson duet single "Say Say Say". A koala is the main character in Hanna-Barbera's "The Kwicky Koala Show" and Nippon Animation's "Noozles", both of which were animated cartoons of the early 1980s. Food products shaped like the koala include the Caramello Koala chocolate bar and the bite-sized cookie snack Koala's March. Dadswells Bridge in Victoria features a tourist complex shaped like a giant koala, and the Queensland Reds rugby team has a koala as its mascot. The Platinum Koala coin features the animal on the reverse and Elizabeth II on the obverse. The drop bear is an imaginary creature in contemporary Australian folklore featuring a predatory, carnivorous version of the koala. This hoax animal is commonly spoken about in tall tales designed to scare tourists. While koalas are typically docile herbivores, drop bears are described as unusually large and vicious marsupials that inhabit treetops and attack unsuspecting people (or other prey) that walk beneath them by dropping onto their heads from above. While the koala was previously classified as Least Concern on the Red List, it was uplisted to Vulnerable in 2016. Australian policy makers declined a 2009 proposal to include the koala in the Environment Protection and Biodiversity Conservation Act 1999. In 2012, the Australian government listed koala populations in Queensland and New South Wales as Vulnerable, because of a 40% population decline in the former and a 33% decline in the latter. Populations in Victoria and South Australia appear to be abundant; however, the Australian Koala Foundation argues that the exclusion of Victorian populations from protective measures is based on a misconception that the total koala population is 200,000, whereas they believe it is probably less than 100,000. Koalas were hunted for food by Aboriginals. A common technique used to capture the animals was to attach a loop of ropey bark to the end of a long, thin pole, so as to form a noose. This would be used to snare an animal high in a tree, beyond the reach of a climbing hunter; an animal brought down this way would then be killed with a stone hand axe or hunting stick (waddy). According to the customs of some tribes, it was considered taboo to skin the animal, while other tribes thought the animal's head had a special status, and saved them for burial. The koala was heavily hunted by European settlers in the early 20th century, largely for its thick, soft fur. More than two million pelts are estimated to have left Australia by 1924. Pelts were in demand for use in rugs, coat linings, muffs, and as trimming on women's garments. 
Extensive cullings occurred in Queensland in 1915, 1917, and again in 1919, when over one million koalas were killed with guns, poisons, and nooses. The public outcry over these cullings was probably the first wide-scale environmental issue that rallied Australians. Novelist and social critic Vance Palmer, writing in a letter to "The Courier-Mail", expressed the popular sentiment: "The shooting of our harmless and lovable native bear is nothing less than barbarous ... No one has ever accused him of spoiling the farmer's wheat, eating the squatter's grass, or even the spreading of the prickly pear. There is no social vice that can be put down to his account ... He affords no sport to the gun-man ... And he has been almost blotted out already from some areas." Despite the growing movement to protect native species, the poverty brought about by the drought of 1926–28 led to the killing of another 600,000 koalas during a one-month open season in August 1927. In 1934, Frederick Lewis, the Chief Inspector of Game in Victoria, said that the once-abundant animal had been brought to near extinction in that state, suggesting that only 500–1000 remained. The first successful efforts at conserving the species were initiated by the establishment of Brisbane's Lone Pine Koala Sanctuary and Sydney's Koala Park Sanctuary in the 1920s and 1930s. The owner of the latter park, Noel Burnet, became the first to successfully breed koalas and earned a reputation as the foremost contemporary authority on the marsupial. In 1934, David Fleay, curator of Australian mammals at the Melbourne Zoo, established the first Australian faunal enclosure at an Australian zoo and featured the koala. This arrangement allowed him to undertake a detailed study of its diet in captivity. Fleay later continued his conservation efforts at Healesville Sanctuary and the David Fleay Wildlife Park. Since 1870, koalas have been introduced to several coastal and offshore islands, including Kangaroo Island and French Island. Their numbers have significantly increased, and since the islands are not large enough to sustain such high koala numbers, overbrowsing has become a problem. In the 1920s, Lewis initiated large-scale relocation and rehabilitation programs to transfer koalas whose habitat had become fragmented or reduced to new regions, with the intent of eventually returning them to their former range. For example, in 1930–31, 165 koalas were translocated to Quail Island. After a period of population growth, and subsequent overbrowsing of gum trees on the island, about 1,300 animals were released into mainland areas in 1944. The practice of translocating koalas became commonplace; Victorian State manager Peter Menkhorst estimated that from 1923 to 2006, about 25,000 animals were translocated to more than 250 release sites across Victoria. Since the 1990s, government agencies have tried to control their numbers by culling, but public and international outcry has forced the use of translocation and sterilisation instead. One of the biggest anthropogenic threats to the koala is habitat destruction and fragmentation. In coastal areas, the main cause of this is urbanisation, while in rural areas, habitat is cleared for agriculture. Native forest trees are also taken down to be made into wood products. In 2000, Australia ranked fifth in the world by deforestation rates, having cleared . The distribution of the koala has shrunk by more than 50% since European arrival, largely due to fragmentation of habitat in Queensland. 
The koala's "vulnerable" status in Queensland and New South Wales means that developers in these states must consider the impacts on this species when making building applications. In addition, koalas live in many protected areas. While urbanisation can pose a threat to koala populations, the animals can survive in urban areas provided enough trees are present. Urban populations have distinct vulnerabilities: collisions with vehicles and attacks by domestic dogs kill about 4,000 animals every year. Injured koalas are often taken to wildlife hospitals and rehabilitation centres. In a 30-year retrospective study performed at a New South Wales koala rehabilitation centre, trauma (usually resulting from a motor vehicle accident or dog attack) was found to be the most frequent cause of admission, followed by symptoms of "Chlamydia" infection. Wildlife caretakers are issued special permits, but must release the animals back into the wild when they are either well enough or, in the case of joeys, old enough. As with most native animals, the koala cannot legally be kept as a pet in Australia or anywhere else. Komodo dragon The Komodo dragon ("Varanus komodoensis"), also known as the Komodo monitor, is a species of lizard found in the Indonesian islands of Komodo, Rinca, Flores, Gili Motang, and Padar. A member of the monitor lizard family Varanidae, it is the largest living species of lizard, growing to a maximum length of in rare cases and weighing up to approximately . Their unusually large size has been attributed to island gigantism, since no other carnivorous animals fill the niche on the islands where they live. However, recent research suggests the large size of Komodo dragons may be better understood as representative of a relict population of very large varanid lizards that once lived across Indonesia and Australia, most of which, along with other megafauna, died out after the Pleistocene (likely as a result of human activity). Fossils very similar to "V. komodoensis" have been found in Australia dating to greater than 3.8 million years ago, and its body size remained stable on Flores, one of the handful of Indonesian islands where it is currently found, over the last 900,000 years, "a time marked by major faunal turnovers, extinction of the island's megafauna, and the arrival of early hominids by 880 ka [kiloannums]." As a result of their size, these lizards dominate the ecosystems in which they live. Komodo dragons hunt and ambush prey including invertebrates, birds, and mammals. It has been claimed that they have a venomous bite; there are two glands in the lower jaw which secrete several toxic proteins. The biological significance of these proteins is disputed, but the glands have been shown to secrete an anticoagulant. Komodo dragons' group behaviour in hunting is exceptional in the reptile world. The diet of big Komodo dragons mainly consists of Timor deer, though they also eat considerable amounts of carrion. Komodo dragons also occasionally attack humans. Mating begins between May and August, and the eggs are laid in September. About 20 eggs are deposited in abandoned megapode nests or in a self-dug nesting hole. The eggs are incubated for seven to eight months, hatching in April, when insects are most plentiful. Young Komodo dragons are vulnerable and therefore dwell in trees, safe from predators and cannibalistic adults. They take 8 to 9 years to mature, and are estimated to live up to 30 years. Komodo dragons were first recorded by Western scientists in 1910. 
Their large size and fearsome reputation make them popular zoo exhibits. In the wild, their range has contracted due to human activities, and they are listed as vulnerable by the IUCN. They are protected under Indonesian law, and a national park, Komodo National Park, was founded to aid protection efforts. Komodo dragons were first documented by Europeans in 1910, when rumors of a "land crocodile" reached Lieutenant van Steyn van Hensbroek of the Dutch colonial administration. Widespread notoriety came after 1912, when Peter Ouwens, the director of the Zoological Museum at Bogor, Java, published a paper on the topic after receiving a photo and a skin from the lieutenant, as well as two other specimens from a collector. The first two live Komodo dragons to arrive in Europe were exhibited in the Reptile House at London Zoo when it opened in 1927. Joan Beauchamp Procter made some of the earliest observations of these animals in captivity and she demonstrated the behaviour of one of these animals at a Scientific Meeting of the Zoological Society of London in 1928. The Komodo dragon was the driving factor for an expedition to Komodo Island by W. Douglas Burden in 1926. After returning with 12 preserved specimens and 2 live ones, this expedition provided the inspiration for the 1933 movie "King Kong". It was also Burden who coined the common name "Komodo dragon." Three of his specimens were stuffed and are still on display in the American Museum of Natural History. The Dutch, realizing the limited number of individuals in the wild, outlawed sport hunting and heavily limited the number of individuals taken for scientific study. Collecting expeditions ground to a halt with the occurrence of World War II, not resuming until the 1950s and 1960s, when studies examined the Komodo dragon's feeding behavior, reproduction, and body temperature. At around this time, an expedition was planned in which a long-term study of the Komodo dragon would be undertaken. This task was given to the Auffenberg family, who stayed on Komodo Island for 11 months in 1969. During their stay, Walter Auffenberg and his assistant Putra Sastrawan captured and tagged more than 50 Komodo dragons. The research from the Auffenberg expedition would prove to be enormously influential in raising Komodo dragons in captivity. Research after that of the Auffenberg family has shed more light on the nature of the Komodo dragon, with biologists such as Claudio Ciofi continuing to study the creatures. The Komodo dragon is also known as the Komodo monitor or the Komodo Island monitor in scientific literature, although this is not very common. To the natives of Komodo Island, it is referred to as "ora", "buaya darat" (land crocodile), or "biawak raksasa" (giant monitor). The evolutionary development of the Komodo dragon started with the genus "Varanus", which originated in Asia about 40 million years ago and migrated to Australia, where it evolved into giant forms (the largest of all being the recently extinct "Megalania"), helped by the absence of competing placental carnivorans. Around 15 million years ago, a collision between Australia and Southeast Asia allowed these larger varanids to move back into what is now the Indonesian archipelago, extending their range as far east as the island of Timor. The Komodo dragon was believed to have differentiated from its Australian ancestors 4 million years ago. However, recent fossil evidence from Queensland suggests the Komodo dragon actually evolved in Australia before spreading to Indonesia. 
Dramatic lowering of sea level during the last glacial period uncovered extensive stretches of continental shelf that the Komodo dragon colonised, becoming isolated in its present island range as sea levels rose afterwards. Extinct Pliocene species of similar size to the modern Komodo dragon, such as "Varanus sivalensis", have been found in Eurasia as well, indicating that they fared well even in environments containing competitors such as mammalian carnivores until the climate change and extinction events that marked the beginning of the Pleistocene. In the wild, an adult Komodo dragon usually weighs around , although captive specimens often weigh more. According to "Guinness World Records", an average adult male will weigh and measure , while an average female will weigh and measure . The largest verified wild specimen was long and weighed , including undigested food. The Komodo dragon has a tail as long as its body, as well as about 60 frequently replaced, serrated teeth that can measure up to in length. Its saliva is frequently blood-tinged, because its teeth are almost completely covered by gingival tissue that is naturally lacerated during feeding. It also has a long, yellow, deeply forked tongue. Komodo dragon skin is reinforced by armoured scales, which contain tiny bones called osteoderms that function as a sort of natural chain-mail. This rugged hide makes Komodo dragon skin a poor source of leather. As with other varanids, Komodo dragons have only a single ear bone, the stapes, for transferring vibrations from the tympanic membrane to the cochlea. This arrangement means they are likely restricted to sounds in the 400 to 2,000 hertz range, compared to humans who hear between 20 and 20,000 hertz. The Komodo dragon was formerly thought to be deaf after a study reported no agitation in wild Komodo dragons in response to whispers, raised voices, or shouts. This was disputed when London Zoological Garden employee Joan Procter trained a captive specimen to come out to feed at the sound of her voice, even when she could not be seen. The Komodo dragon can see objects as far away as , but because its retinas only contain cones, it is thought to have poor night vision. It can distinguish colours, but has poor visual discrimination of stationary objects. As with many other reptiles, the Komodo dragon uses its tongue to detect, taste, and smell stimuli, relying on the vomeronasal sense of the Jacobson's organ rather than the nostrils. With the help of a favorable wind and its habit of swinging its head from side to side as it walks, a Komodo dragon may be able to detect carrion from away. It only has a few taste buds in the back of its throat. Its scales, some of which are reinforced with bone, have sensory plaques connected to nerves to facilitate its sense of touch. The scales around the ears, lips, chin, and soles of the feet may have three or more sensory plaques. The Komodo dragon prefers hot and dry places, and typically lives in dry, open grassland, savanna, and tropical forest at low elevations. As an ectotherm, it is most active in the day, although it exhibits some nocturnal activity. Komodo dragons are solitary, coming together only to breed and eat. They are capable of running rapidly in brief sprints up to , diving up to , and climbing trees proficiently when young through use of their strong claws. To catch out-of-reach prey, the Komodo dragon may stand on its hind legs and use its tail as a support. 
As it matures, its claws are used primarily as weapons, as its great size makes climbing impractical. For shelter, the Komodo dragon digs holes that can measure from wide with its powerful forelimbs and claws. Because of its large size and habit of sleeping in these burrows, it is able to conserve body heat throughout the night and minimise its basking period the morning after. The Komodo dragon hunts in the afternoon, but stays in the shade during the hottest part of the day. These special resting places, usually located on ridges with cool sea breezes, are marked with droppings and are cleared of vegetation. They serve as strategic locations from which to ambush deer. Komodo dragons are carnivores. Although they have been considered as eating mostly carrion, they will frequently ambush live prey with a stealthy approach. When suitable prey arrives near a dragon's ambush site, it will suddenly charge at the animal at high speeds and go for the underside or the throat. Komodo dragons make no attempt to deliberately allow the prey to escape with fatal injuries, but try to kill prey outright using a combination of lacerating damage and blood loss. They have been recorded as killing wild pigs within seconds, and observations of Komodo dragons tracking prey for long distances are likely misinterpreted cases of prey escaping an attack before succumbing to infection. Komodo dragons have been observed knocking down large pigs and deer with their strong tails. It is able to locate carcasses using its keen sense of smell, which can locate a dead or dying animal from a range of up to . Komodo dragons eat by tearing large chunks of flesh and swallowing them whole while holding the carcass down with their forelegs. For smaller prey up to the size of a goat, their loosely articulated jaws, flexible skulls, and expandable stomachs allow them to swallow prey whole. The vegetable contents of the stomach and intestines are typically avoided. Copious amounts of red saliva the Komodo dragons produce help to lubricate the food, but swallowing is still a long process (15–20 minutes to swallow a goat). A Komodo dragon may attempt to speed up the process by ramming the carcass against a tree to force it down its throat, sometimes ramming so forcefully, the tree is knocked down. A small tube under the tongue that connects to the lungs allows it to breathe while swallowing. After eating up to 80% of its body weight in one meal, it drags itself to a sunny location to speed digestion, as the food could rot and poison the dragon if left undigested for too long. Because of their slow metabolism, large dragons can survive on as few as 12 meals a year. After digestion, the Komodo dragon regurgitates a mass of horns, hair, and teeth known as the gastric pellet, which is covered in malodorous mucus. After regurgitating the gastric pellet, it rubs its face in the dirt or on bushes to get rid of the mucus, suggesting it does not relish the scent of its own excretions. The largest animals eat first, while the smaller ones follow a hierarchy. The largest male asserts his dominance and the smaller males show their submission by use of body language and rumbling hisses. Dragons of equal size may resort to "wrestling". Losers usually retreat, though they have been known to be killed and eaten by victors. The Komodo dragon's diet is wide-ranging, and includes invertebrates, other reptiles (including smaller Komodo dragons), birds, bird eggs, small mammals, monkeys, wild boar, goats, deer, horses, and water buffalo. 
Young Komodos will eat insects, eggs, geckos, and small mammals, while adults prefer to hunt large mammals. Occasionally, they attack and bite humans (see first paragraphs of this article). Sometimes they consume human corpses, digging up bodies from shallow graves. This habit of raiding graves caused the villagers of Komodo to move their graves from sandy to clay ground and pile rocks on top of them to deter the lizards. The Komodo dragon may have evolved to feed on the extinct dwarf elephant "Stegodon" that once lived on Flores, according to evolutionary biologist Jared Diamond. The Komodo dragon drinks by sucking water into its mouth via buccal pumping (a process also used for respiration), lifting its head, and letting the water run down its throat. Although previous studies proposed that Komodo dragon saliva contains a variety of highly septic bacteria that would help to bring down prey, research in 2013 suggested that the bacteria in the mouths of Komodo dragons are ordinary and similar to those found in other carnivores. They actually have surprisingly good mouth hygiene. As Bryan Fry put it: "After they are done feeding, they will spend 10 to 15 minutes lip-licking and rubbing their head in the leaves to clean their mouth... Unlike people have been led to believe, they do not have chunks of rotting flesh from their meals on their teeth, cultivating bacteria." Nor do Komodo dragons wait for prey to die and track it at a distance, as vipers do; observations of them hunting deer, boar and in some cases buffalo reveal that they kill prey in less than half an hour, using their dentition to cause shock and trauma. The observation of prey dying of sepsis would then be explained by the natural instinct of water buffalos, who are not native to the islands where the Komodo dragon lives, to run into water after escaping an attack. The warm, faeces-filled water would then cause the infections. The study used samples from 16 captive dragons (10 adults and six neonates) from three U.S. zoos. Researchers have isolated a powerful antibacterial peptide from the blood plasma of Komodo dragons, VK25. Based on their analysis of this peptide, they have synthesized a short peptide dubbed DRGN-1 and tested it against multidrug-resistant (MDR) pathogens. Preliminary results of these tests show that DRGN-1 is effective in killing drug-resistant bacterial strains and even some fungi. It has the added observed benefit of significantly promoting wound healing in both uninfected and mixed biofilm infected wounds. In late 2005, researchers at the University of Melbourne speculated the perentie ("Varanus giganteus"), other species of monitors, and agamids may be somewhat venomous. The team believes the immediate effects of bites from these lizards were caused by mild envenomation. Bites on human digits by a lace monitor ("V. varius"), a Komodo dragon, and a spotted tree monitor ("V. scalaris") all produced similar effects: rapid swelling, localised disruption of blood clotting, and shooting pain up to the elbow, with some symptoms lasting for several hours. In 2009, the same researchers published further evidence demonstrating Komodo dragons possess a venomous bite. MRI scans of a preserved skull showed the presence of two glands in the lower jaw. The researchers extracted one of these glands from the head of a terminally ill dragon in the Singapore Zoological Gardens, and found it secreted several different toxic proteins. 
The known functions of these proteins include inhibition of blood clotting, lowering of blood pressure, muscle paralysis, and the induction of hypothermia, leading to shock and loss of consciousness in envenomated prey. As a result of the discovery, the previous theory that bacteria were responsible for the deaths of Komodo victims was disputed. Other scientists have stated that this allegation of venom glands "has had the effect of underestimating the variety of complex roles played by oral secretions in the biology of reptiles, produced a very narrow view of oral secretions and resulted in misinterpretation of reptilian evolution". According to these scientists "reptilian oral secretions contribute to many biological roles other than to quickly dispatch prey". These researchers concluded that, "Calling all in this clade venomous implies an overall potential danger that does not exist, misleads in the assessment of medical risks, and confuses the biological assessment of squamate biochemical systems". Evolutionary biologist Schwenk says that even if the lizards have venom-like proteins in their mouths they may be using them for a different function, and he doubts venom is necessary to explain the effect of a Komodo dragon bite, arguing that shock and blood loss are the primary factors. Mating occurs between May and August, with the eggs laid in September. During this period, males fight over females and territory by grappling with one another upon their hind legs, with the loser eventually being pinned to the ground. These males may vomit or defecate when preparing for the fight. The winner of the fight will then flick his long tongue at the female to gain information about her receptivity. Females are antagonistic and resist with their claws and teeth during the early phases of courtship. Therefore, the male must fully restrain the female during coitus to avoid being hurt. Other courtship displays include males rubbing their chins on the female, hard scratches to the back, and licking. Copulation occurs when the male inserts one of his hemipenes into the female's cloaca. Komodo dragons may be monogamous and form "pair bonds", a rare behavior for lizards. Female Komodos lay their eggs from August to September and may use several types of locality; in one study, 60% laid their eggs in the nests of orange-footed scrubfowl (a moundbuilder or megapode), 20% on ground level and 20% in hilly areas. The females make many camouflage nests/holes to prevent other dragons from eating the eggs. Clutches contain an average of 20 eggs, which have an incubation period of 7–8 months. Hatching is an exhausting effort for the neonates, which break out of their eggshells with an egg tooth that falls off before long. After cutting themselves out, the hatchlings may lie in their eggshells for hours before starting to dig out of the nest. They are born quite defenseless and are vulnerable to predation. Sixteen youngsters from a single nest were on average 46.5 cm long and weighed 105.1 grams. Young Komodo dragons spend much of their first few years in trees, where they are relatively safe from predators, including cannibalistic adults, as juvenile dragons make up 10% of their diets. The habit of cannibalism may be advantageous in sustaining the large size of adults, as medium-sized prey on the islands is rare. When the young approach a kill, they roll around in faecal matter and rest in the intestines of eviscerated animals to deter these hungry adults. 
Komodo dragons take approximately 8 to 9 years to mature, and may live for up to 30 years. A Komodo dragon at London Zoo named Sungai laid a clutch of eggs in late 2005 after being separated from male company for more than two years. Scientists initially assumed she had been able to store sperm from her earlier encounter with a male, an adaptation known as superfecundation. On 20 December 2006, it was reported that Flora, a captive Komodo dragon living in the Chester Zoo in England, was the second known Komodo dragon to have laid unfertilised eggs: she laid 11 eggs, and seven of them hatched, all of them male. Scientists at Liverpool University in England performed genetic tests on three eggs that collapsed after being moved to an incubator, and verified Flora had never been in physical contact with a male dragon. After the condition of Flora's eggs had been discovered, testing showed Sungai's eggs were also produced without outside fertilisation. On 31 January 2008, the Sedgwick County Zoo in Wichita, Kansas, became the first zoo in the Americas to document parthenogenesis in Komodo dragons. The zoo has two adult female Komodo dragons, one of which laid about 17 eggs on 19–20 May 2007. Only two eggs were incubated and hatched due to space issues; the first hatched on 31 January 2008, while the second hatched on 1 February. Both hatchlings were males. Komodo dragons have the ZW chromosomal sex-determination system, as opposed to the mammalian XY system. The all-male progeny show that Flora's unfertilised eggs were haploid (n) and doubled their chromosomes later to become diploid (2n), either by being fertilised by a polar body or by chromosome duplication without cell division, rather than her having laid diploid eggs through the failure of one of the meiotic reduction divisions in her ovaries. When a female Komodo dragon (with ZW sex chromosomes) reproduces in this manner, she provides her progeny with only one chromosome from each of her pairs of chromosomes, including only one of her two sex chromosomes. This single set of chromosomes is duplicated in the egg, which develops parthenogenetically. Eggs receiving a Z chromosome become ZZ (male); those receiving a W chromosome become WW and fail to develop, meaning that only males are produced by parthenogenesis in this species. It has been hypothesised that this reproductive adaptation allows a single female to enter an isolated ecological niche (such as an island) and by parthenogenesis produce male offspring, thereby establishing a sexually reproducing population (via reproduction with her offspring, which can result in both male and female young). Despite the advantages of such an adaptation, zoos are cautioned that parthenogenesis may be detrimental to genetic diversity. Attacks on humans are rare, but this species has been responsible for several human fatalities, in both the wild and captivity. According to data from Komodo National Park, over the 38 years between 1974 and 2012 there were 24 reported attacks on humans, five of them fatal. Most of the victims were local villagers living around the national park. The Komodo dragon is a vulnerable species and is on the IUCN Red List. Komodo National Park was founded in 1980 to protect Komodo dragon populations on islands including Komodo, Rinca, and Padar. Later, the Wae Wuul and Wolo Tado Reserves were opened on Flores to aid Komodo dragon conservation. Komodo dragons avoid encounters with humans. 
Juveniles are very shy and will flee quickly into a hideout if a human approaches too closely. Older animals will also retreat from humans, though from a shorter distance. If cornered, they will react aggressively by gaping their mouths, hissing, and swinging their tails. If they are disturbed further, they may attack and bite. Although there are anecdotes of unprovoked Komodo dragons attacking or preying on humans, most of these reports are either not reputable or describe defensive bites. Only a very few cases are truly the result of unprovoked attacks by abnormal individuals that have lost their fear of humans. Volcanic activity, earthquakes, loss of habitat, fire, loss of prey due to poaching, tourism, and illegal poaching of the dragons themselves have all contributed to the vulnerable status of the Komodo dragon. Under Appendix I of CITES (the Convention on International Trade in Endangered Species), commercial trade in skins or specimens is illegal. In 2013, the total population in the wild was assessed at 3,222 individuals, declining to 3,092 in 2014 and 3,014 in 2015. Populations remained relatively stable on the bigger islands (Komodo and Rinca), but decreased on smaller islands such as Nusa Kode and Gili Motang, likely due to diminishing prey availability. On Padar, a former population of Komodo dragons became extinct; the last individuals were seen there in 1975. It is widely assumed that the Komodo dragon died out on Padar after a strong decline in the populations of its large ungulate prey, for which poaching was most likely responsible. Komodo dragons have long been great zoo attractions, where their size and reputation make them popular exhibits. They are, however, rare in zoos because they are susceptible to infection and parasitic disease if captured from the wild, and do not readily reproduce. As of May 2009, there were 13 European, 2 African, 35 North American, 1 Singaporean, and 2 Australian institutions that kept Komodo dragons. The first Komodo dragons were displayed at London Zoo in 1927. A Komodo dragon was exhibited in 1934 at the National Zoo in Washington, D.C., but it lived for only two years. More attempts to exhibit Komodo dragons were made, but the lifespan of these animals was very short, averaging five years in the National Zoological Park. Studies by Walter Auffenberg, documented in his book "The Behavioral Ecology of the Komodo Monitor", eventually allowed for more successful management and breeding of the dragons in captivity. A variety of behaviors have been observed in captive specimens. Most individuals become relatively tame within a short time, and are capable of recognising individual humans and discriminating between familiar keepers. Komodo dragons have also been observed to engage in play with a variety of objects, including shovels, cans, plastic rings, and shoes. This behavior does not seem to be "food-motivated predatory behavior". Even seemingly docile dragons may become unpredictably aggressive, especially when the animal's territory is invaded by someone unfamiliar. In June 2001, a Komodo dragon seriously injured Phil Bronstein, the then-husband of actress Sharon Stone, when he entered its enclosure at the Los Angeles Zoo after being invited in by its keeper. Bronstein was bitten on his bare foot; the keeper had told him to take off his white shoes and socks, which the keeper said could excite the Komodo dragon because they were the same colour as the white rats the zoo fed it. 
Although he escaped, Bronstein needed to have several tendons in his foot reattached surgically. Kate Bush Catherine "Kate" Bush (born 30 July 1958) is an English singer-songwriter, musician, dancer and record producer. Bush came to notice in 1978 when, aged 19, she topped the UK Singles Chart for four weeks with her debut single "Wuthering Heights", becoming the first female artist to achieve a UK number one with a self-written song. She has since released twenty-five UK Top 40 singles, including the top-ten hits "The Man with the Child in His Eyes", "Babooshka", "Running Up That Hill", "Don't Give Up" (a duet with Peter Gabriel) and "King of the Mountain". She has released ten studio albums, all of which reached the UK Top 10, including the UK number-one albums "Never for Ever" (1980), "Hounds of Love" (1985), and the compilation "The Whole Story" (1986). She is the first British solo female artist to top the UK album charts and the first female artist to enter the album chart at number one. A diverse range of artists have cited Bush as an influence. Her work has been described as eclectic, experimental, idiosyncratic and theatrical. She has been nominated 13 times for British Phonographic Industry accolades, winning for Best British Female Artist in 1987. She has also been nominated for three Grammy Awards. In 2002, she was recognised with an Ivor Novello Award for Outstanding Contribution to British Music. In October 2017 she was nominated for induction into the Rock and Roll Hall of Fame in 2018. Bush was appointed Commander of the Order of the British Empire (CBE) in the 2013 New Year Honours for services to music. Bush was born in Bexleyheath, Kent, to an English medical doctor, Robert Bush (1920–2008), and an Irish mother, Hannah Daly (1918–1992). She was raised as a Roman Catholic in their farmhouse in East Wickham, an urban village in the neighbouring town of Welling, with her older brothers, John and Paddy. Bush came from an artistic background: her mother was an amateur traditional Irish dancer, her father was an amateur pianist, Paddy worked as a musical instrument maker, and John was a poet and photographer. Both brothers were involved in the local folk music scene. John was a karateka at Goldsmiths College karate club and Kate also trained there, becoming known as "Ee-ee" because of her squeaky kiai – the loud verbalisation accompanying some martial arts attacking manoeuvres. One of the instructors, Dave Hazard, later noted in his autobiography that her dance moves seemed to owe something to karate. Her family's musical influence inspired Bush to teach herself the piano at the age of 11. She also played the organ in a barn behind her parents' house and studied the violin. She soon began composing songs, eventually adding her own lyrics. Bush attended St Joseph's Convent Grammar School, a Catholic girls' school in nearby Abbey Wood which, in 1975, after she had left, became part of St Mary's and St Joseph's School in Sidcup. During this time her family produced a demo tape with over 50 of her compositions, which was turned down by record labels. David Gilmour of Pink Floyd received the demo from Ricky Hopper, a mutual friend of Gilmour and the Bush family. Impressed with what he heard, Gilmour helped the sixteen-year-old Bush record a more professional-sounding demo tape that would be more saleable to the record companies. Three tracks in total were recorded and paid for by Gilmour. 
The tape was produced by Gilmour's friend Andrew Powell, who would go on to produce Bush's first two albums, and sound engineer Geoff Emerick, who had previously worked with the Beatles. The tape was sent to EMI executive Terry Slater. Slater was impressed by the tape and signed her. The British record industry was reaching a point of stagnation. Progressive rock was very popular and visually oriented rock performers were growing in popularity, thus record labels looking for the next big thing were considering experimental acts. Bush was put on retainer for two years by Bob Mercer, managing director of EMI group-repertoire division. According to Mercer he felt Bush's material was good enough to be released but felt that if the album failed it would be demoralising and if it was successful Bush was too young to handle it. However, in a 1987 interview, Gilmour disputed this version of events, blaming EMI for initially using "wrong" producers. For the first two years of her contract, Bush spent more time on school work than making an album. She left school after doing her mock A-levels and having gained ten GCE O-Level qualifications. In 2005, Bush stated in an interview with Mark Radcliffe on BBC Radio 2 that she believed EMI signed her before she was ready to make an album so that no other record company could offer her a contract. After the contract signing, EMI forwarded her a sizeable advance, which she used to enroll in interpretive dance classes taught by Lindsay Kemp, a former teacher of David Bowie, and mime training with Adam Darius. Bush also wrote and made demos of close to 200 songs, a few of which today can be found on bootleg recordings and are known as the "Phoenix Recordings". From March to August 1977, she fronted the KT Bush Band at public houses around London – specifically at the Rose of Lee public house (now Dirty South) in Lewisham. The other three band members were Del Palmer (bass), Brian Bath (guitar), and Vic King (drums). She began recording her first album in August 1977, although two tracks ("The Saxophone Song" and "The Man with the Child in His Eyes") had been recorded during the summer of 1975. The latter was later released as her second single and peaked at #6 on the UK charts. For her début album, "The Kick Inside" (1978), Bush was persuaded to use established session musicians instead of the KT Bush Band. Some of these she would retain even after she had brought her bandmates back on board. Her brother Paddy played the harmonica and mandolin, unlike on later albums where he would play more exotic instruments such as the balalaika and didgeridoo. Stuart Elliott played some of the drums and would become her main drummer on subsequent albums. Bush released "The Kick Inside" when she was 19 years old, but some of the songs had been written when she was as young as 13. EMI originally wanted the more rock-oriented track "James and the Cold Gun" to be her debut single, but Bush insisted that it should be "Wuthering Heights". Even at this early stage of her career, she had gained a reputation for her determination to have a say in decisions affecting her work. "Wuthering Heights" topped the UK and Australian charts and became an international hit. Bush became the first woman to reach number one on the UK charts with a self-penned song. Despite her considerable subsequent chart success it is still her only No. 1 single (as of 2016). A second single, "The Man with the Child in His Eyes", reached number six in the UK charts. 
It also made it onto the American "Billboard" Hot 100 where it reached number 85 in early 1979, and went on to win her an Ivor Novello Award in 1979 for Outstanding British Lyric. Bob Mercer felt that Bush's relative lack of success in the United States compared to the rest of the world was due to her music being a poor fit for American radio formats, and that there were no outlets for the visual presentation central to Bush's appeal. EMI capitalised on Bush's appearance by promoting the album with a poster of her in a tight pink top that emphasised her breasts. In an interview with "NME" magazine in 1982, Bush criticised this marketing technique stating: "People weren't even generally aware that I wrote my own songs or played the piano. The media just promoted me as a female body. It's like I've had to prove that I'm an artist in a female body." In late 1978, EMI persuaded Bush to quickly record a follow-up album, "Lionheart", to take advantage of the success of "The Kick Inside". Bush has often expressed dissatisfaction with "Lionheart", feeling that she had needed more time to get it right. The album was produced by Andrew Powell, assisted by Bush. While it had spawned several hit singles, most notably "Wow", it did not garner the same reception as her first album, reaching number six in the UK album charts. Bush was displeased with being rushed into making the second album. She set up her own publishing company, Kate Bush Music, and her own management company, Novercia, to maintain complete control over her work. Members of her family, along with Bush herself, comprised the company's board of directors. Following the album's release, she was required by EMI to undertake heavy promotional work and an exhausting tour. The tour, which subsequently became known as The Tour of Life, began in April 1979 and lasted six weeks. This live show was co-devised and performed on stage with magician Simon Drake. Typical of her determination to have creative control, she was involved in every aspect of the production, choreography, set design, costume design and staff recruitment for the show. The shows were noted for her dancing, complex lighting and her 17 costume changes per show. Because of her intention to dance as she sang, her sound engineers used a wire coat hanger and a radio microphone to fashion the first headset microphone to be used by a rock performer since the Swedish group the Spotnicks used a very primitive version in the early 1960s. Released in September 1980, "Never for Ever" saw Bush's second foray into production, co-producing with Jon Kelly. Her first experience as a producer was on her "Live on Stage" EP, released after her tour the previous year. The first two albums had resulted in a definitive sound evident in every track, with orchestral arrangements supporting the live band sound. The range of styles on "Never for Ever" is much more diverse, veering from the straightforward rocker "Violin" to the wistful waltz of hit single "Army Dreamers". "Never for Ever" was the first Kate Bush album to feature synthesisers and drum machines, in particular the Fairlight CMI, to which she was introduced when providing backing vocals on Peter Gabriel's eponymous third album in early 1980. It was her first record to reach the top position in the UK album charts, also making her the first female British artist to achieve that status, and the first female artist ever to enter the album chart at the top. 
The top-selling single from the album was "Babooshka", which reached number five in the UK singles chart. In November 1980, she released the standalone Christmas single "December Will Be Magic Again", which reached number 29 in the UK charts. September 1982 saw the release of "The Dreaming", the first album Bush produced by herself. With her new-found freedom, she experimented with production techniques, creating an album that features a diverse blend of musical styles and is known for its near-exhaustive use of the Fairlight CMI. "The Dreaming" received a mixed reception in the UK, with critics baffled by the dense, "less accessible" soundscapes Bush had created. In a 1993 interview with "Q" magazine, Bush stated: "That was my 'She's gone mad' album." However, the album became her first to enter the US "Billboard" 200 chart, albeit only reaching number 157. The album entered the UK album chart at number-three, but is to date her lowest-selling album, garnering only a silver disc. "Sat in Your Lap" was the first single from the album to be released. It pre-dated the album by over a year and peaked at number 11 in the UK. The album's title track, featuring Rolf Harris and Percy Edwards, stalled at number 48, while the third single, "There Goes a Tenner", stalled at number 93, despite promotion from EMI and Bush. The track "Suspended in Gaffa" was released as a single in Europe, but not in the UK. Continuing in her storytelling tradition, Bush looked far outside her own personal experience for sources of inspiration. She drew on old crime films for "There Goes a Tenner", a documentary about the war in Vietnam for "Pull Out the Pin", and the plight of Indigenous Australians for "The Dreaming". "Houdini" is about the magician's death, and "Get Out of My House" was inspired by Stephen King's novel "The Shining". "Hounds of Love" was released in 1985. Because of the high cost of hiring studio space for her previous album, she built a private studio near her home, where she could work at her own pace. "Hounds of Love" ultimately topped the charts in the UK, knocking Madonna's "Like a Virgin" from the number-one position. The album takes advantage of the vinyl and cassette formats with two very different sides. The first side, "Hounds of Love", contains five "accessible" pop songs, including the four singles "Running Up that Hill", "Cloudbusting", "Hounds of Love", and "The Big Sky". "Running Up that Hill" reached number-three in the UK charts and re-introduced Bush to American listeners, climbing to number 30 on the "Billboard" Hot 100 in November 1985. The second side of the album, "The Ninth Wave", takes its name from Tennyson's poem "Idylls of the King", about the legendary King Arthur's reign, and comprises seven interconnected songs joined in one continuous piece of music. The album earned Bush nominations for Best Female Solo Artist, Best Album, Best Single, and Best Producer at the 1986 BRIT Awards. In the same year, Bush and Peter Gabriel had a UK Top 10 hit with the duet "Don't Give Up" (Dolly Parton, Gabriel's original choice to sing the female vocal, turned his offer down), and EMI released her "greatest hits" album, "The Whole Story". Bush provided a new lead vocal and refreshed backing track on "Wuthering Heights", and recorded a new single, "Experiment IV", for inclusion on the compilation. Dawn French and Hugh Laurie were among those featured in the video for "Experiment IV". At the 1987 BRIT Awards, Bush won the award for Best Female Solo Artist. 
The increasingly personal tone of her writing continued on 1989's "The Sensual World". One of the songs, "Heads We're Dancing", touched by Bush's black humour, is about a woman who dances all night with a charming stranger, only to discover in the morning that he is Adolf Hitler. The title track drew its inspiration from James Joyce's novel "Ulysses". "The Sensual World" went on to become her biggest-selling album in the US, receiving an RIAA Gold certification four years after its release for 500,000 copies sold. In the United Kingdom album charts, it reached the number-two position. In 1990, the boxed set "This Woman's Work" was released; it included all of her albums with their original cover art, as well as two discs of all the single B-sides recorded from 1978 to 1990. In 1991, Bush released a cover of Elton John's "Rocket Man", which reached number 12 in the UK singles chart and went as high as number two in Australia; in 2007 it was voted the greatest cover ever by readers of "The Observer" newspaper. She recorded "Candle in the Wind" as the single's B-side. Bush's seventh studio album, "The Red Shoes", was released in November 1993. The album features more high-profile cameo appearances than her previous efforts, including contributions from composer and conductor Michael Kamen, comedy actor Lenny Henry, Prince, Eric Clapton, Gary Brooker of Procol Harum, Trevor Whittaker, and Jeff Beck. Both "The Sensual World" and "The Red Shoes" featured contributions from Trio Bulgarka, the Bulgarian female vocal trio, who sang on six tracks, including "You're The One" and "Rocket's Tail". The album gave Bush her highest chart position in the US, reaching number 28, although the only song from the album to make the US singles chart was "Rubberband Girl", which peaked at number 88 in January 1994. In the UK, the album reached number-two, and the singles "Rubberband Girl", "The Red Shoes", "Moments of Pleasure", and "And So Is Love" all reached the top 30. In 1994, Bush released an accompanying short film, "The Line, the Cross & the Curve". Written, directed by, and starring Bush, along with English actress Miranda Richardson, the film was based around the concept of "The Red Shoes" and featured six of the songs from the album. The initial plan had been to tour with "The Red Shoes" release, but this did not come to fruition. Bush had therefore deliberately produced a "live-band" feel for the album, with less of the studio production that had typified her last three albums and would have been too difficult to re-create on stage. The result polarised her fan base: some, who had enjoyed the intricacy of her earlier compositions, were disappointed, while other fans claimed to have found new complexities in the lyrics and the emotions they expressed. This period had been a troubled time for Bush. She had suffered a series of bereavements, including the loss of guitarist Alan Murphy, who had started working with her on The Tour of Life in 1979, and her mother Hannah, to whom she was exceptionally close. The people she lost were honoured in the ballad "Moments of Pleasure". However, Bush's mother was still alive when "Moments of Pleasure" was written and recorded. Bush describes playing the song to her mother, who thought the line where she is quoted by Bush as saying, "Every old sock meets an old shoe", was hilarious and "couldn't stop laughing". After the release of "The Red Shoes", Kate Bush dropped out of the public eye. She had originally intended to take one year off, but despite working on material, twelve years passed before her next album release. 
Her name occasionally cropped up in the media with rumours of a new album release. The press often viewed her as an eccentric recluse, sometimes drawing a comparison with Miss Havisham from Charles Dickens's "Great Expectations". In 1998, Bush gave birth to Albert, known as "Bertie", fathered by her guitarist and now husband Danny McIntosh. In 2001, Bush was awarded a Q Award as Classic Songwriter. In 2002, she was awarded an Ivor Novello Award for Outstanding Contribution to Music, and performed "Comfortably Numb" at David Gilmour's concert at the Royal Festival Hall in London. Kate Bush's eighth studio album, "Aerial", was released on double CD and vinyl in November 2005. The first single from the album was "King of the Mountain", which was played for the first time on BBC Radio 2 on 21 September 2005. As on "Hounds of Love" (1985), the album is divided into two sections, each with its own theme and mood. The first disc, subtitled "A Sea of Honey", features a set of unrelated themed songs, including "King of the Mountain"; "Bertie", a Renaissance-style ode to her son; and "Joanni", based on the story of Joan of Arc. In the song "π", Bush sings 117 digits of the number pi, but misses 22 digits from the 80th to the 101st decimal place. The second disc, subtitled "A Sky of Honey", features one continuous piece of music describing the experience of being outdoors after waking at dawn, moving through afternoon, dusk and night, then back to the following dawn of a single summer's day. All the pieces in this suite refer or allude to sky and sea in their lyrical content. Bush mixed her voice with cooing woodpigeons to repeat the phrases "A sea of honey, a sky of honey" and "You're full of beauty" throughout the piece, and uses recordings of actual birdsong throughout. "A Sky of Honey" features Rolf Harris playing the didgeridoo on one track, and providing vocals on "The Painter's Link". Other artists making guest appearances on the album include Peter Erskine, Eberhard Weber, Lol Creme, and Gary Brooker. Two tracks feature string arrangements by Michael Kamen, performed by the London Metropolitan Orchestra. A CD release of the single "King of the Mountain" included a cover of "Sexual Healing" by Marvin Gaye. "King of the Mountain" entered the UK Downloads Chart at number-six on 17 October 2005, and by 30 October it had become Bush's third-highest-charting single ever in the UK, peaking at number-four on the full chart. "Aerial" entered the UK albums chart at number-three, and the US chart at number 48. Bush herself carried out relatively little publicity for the album, only conducting a handful of magazine and radio interviews. "Aerial" earned Bush two nominations at the 2006 BRIT Awards, for Best British Female Solo Artist and Best British Album. In late 2007, Bush composed and recorded a new song, "Lyra", for the soundtrack to the fantasy film "The Golden Compass". In May 2011, Bush released the album "Director's Cut". The album, which Bush has described as an entirely new project rather than a collection of mere remixes, contains 11 tracks of substantially reworked material from her earlier albums, "The Sensual World" and "The Red Shoes", all of which have been recorded using analogue, rather than digital, equipment to create "a warmer sound". All the tracks have new lead vocals, new drums, and reworked instrumentation. Some of them have been transposed to a lower key to accommodate her deepening voice. 
Three of the songs, including "This Woman's Work", have been completely re-recorded, with lyrics often changed in places. This is the first album on her new label, "Fish People", a division of EMI Records, with whom she has had a relationship since she started recording. In addition to its single-CD form, "Director's Cut" was also released in a box set containing the albums "The Sensual World" and the analogue-remastered "The Red Shoes". It debuted at number-two on the United Kingdom chart. The song "The Sensual World" has been renamed "Flower of the Mountain" and contains a passage of Molly Bloom's soliloquy from James Joyce's novel "Ulysses". Bush said, "Originally when I wrote the song 'The Sensual World', I had used text from the end of "Ulysses". When I asked for permission to use the text I was refused, which was disappointing. I then wrote my own lyrics for the song, although I felt that the original idea had been more interesting. Well, I'm not James Joyce am I? When I came to work on this project I thought I would ask for permission again and this time they said yes." The first single released from the album was "Deeper Understanding", which contains a new chorus featuring computerised vocals from Bush's son, Albert. A video for the song, directed by Bush, has been released through her channel on YouTube. It features Robbie Coltrane as a man consumed by his relationship with his computer (voiced by Bush's son). Frances Barber plays the man's wife, and Noel Fielding also appears. Bush's next studio album, "50 Words for Snow", was released on 21 November 2011. The album contains seven new songs "set against a backdrop of falling snow", with a total running time of 65 minutes. A radio edit of the first single, "Wild Man", was played on BBC Radio 2's "The Ken Bruce Show" on 10 October and was released as a digital download on 11 October. The album is distributed in the United States by Anti-Records. On 14 November 2011, NPR played "50 Words for Snow" in its entirety for the first time. Australia's ABC Radio National declared "50 Words for Snow" album of the week of 12 November 2011. The album's songs are built around Bush's quietly jazzy piano and Steve Gadd's drums, and utilise both sung and spoken-word vocals in what "Classic Rock" critic Stephen Dalton calls "a ... supple and experimental affair, with a contemporary chamber pop sound grounded in crisp piano, minimal percussion and light-touch electronics ... billowing jazz-rock soundscapes, interwoven with fragmentary narratives delivered in a range of voices from shrill to Laurie Anderson-style cooing." Bassist Danny Thompson appears on the album, which also features performances by Elton John and actor Stephen Fry. On the first track, "Snowflake", written specifically to use his still-high choirboy voice, Bush's son Bertie sings the role of a falling snowflake, expressing the hope that a noisy world will soon be hushed by snowfall. "Snowflake" drifts into "Lake Tahoe", where choral singer Stefan Roberts and Bush sing about a rarely seen ghost: a woman who appears in a Victorian gown to call to her dog, Snowflake. Bush told fellow musician Jamie Cullum in an interview on Dutch radio that she wished to explore using high male voices in contrast to her own, deeper voice. 
"Misty" is about a snowman lover who melts away after a night of passion, while "Wild Man" tells the story of a group of climbers in the Himalayas who, upon finding evidence of a nearby Yeti, erase all traces of it to protect it from discovery. Elton John and Bush as eternally divided lovers trade vocals on "Snowed in at Wheeler Street", while Stephen Fry recites the "50 Words for Snow". The quiet "Among Angels" finishes the album. "50 Words for Snow" received general acclaim from music critics. At Metacritic, which assigns a normalised rating out of 100 to reviews from mainstream critics, the album received an average score of 88, based on 26 reviews, which indicates "universal acclaim". She was nominated for a Brit Award in the "Best Female Artist" category, and the album won the 2012 Best Album at the South Bank Arts Awards, and was also nominated for Best Album at the Ivor Novello Awards. Bush turned down an invitation by the organisers of the 2012 Summer Olympics closing ceremony to perform at the event; instead, a recording of a new remix of her 1985 hit "Running Up that Hill" was played at the end of the ceremony. Bush released an exclusive limited-edition 10" picture disc of the 2012 remix as part of Record Store Day on 20 April 2013. In the same year, Bush became the first (and to date, only) female artist to have top five albums in the UK charts in five successive decades. In March 2014, Bush announced her first live concerts in several decades: a 22-night residency called "Before the Dawn" in London from 26 August to 1 October 2014 at the Hammersmith Apollo. Tickets sold out in 15 minutes. The concerts received rave reviews. In August 2014, bolstered by the publicity around her upcoming performances, she became the first female performer to have eight albums in the Official UK Top 40 Albums Chart simultaneously, putting her at number three for simultaneous UK Top 40 albums (behind Elvis Presley with 12 albums in 1977, and The Beatles in 2009 with 11 albums. Note that in January 2016 following his death, David Bowie joined Elvis in the number one position with 12 albums); altogether she had 11 albums in the top 50. In November 2014, Bush was awarded with "Editor's Award" at the annual "Evening Standard Theatre Awards" for her theatrically performed live comeback. An eponymous three-disc album of recordings from "Before the Dawn" was released 25 November 2016. In April 2017 Marc Geiger, the head of music at the agency William Morris Endeavour, claimed that Goldenvoice CEO Paul Tollett, main promoter of the US music festival Coachella, had declined to book Bush in the festival's lineup. This would have marked her first ever live shows in North America. Geiger claimed that Tollett had reasoned that "No one is going to understand it (her act, presumably a version of Before The Dawn)". Tollett responded to Geiger's claims, indicating that he had been after Bush for 25 years and would book her if she ever became available, while recognising that this is unlikely as she does not tour. Additionally, a spokesperson for Bush commented that she had no intention of performing in the US. Bush's musical aesthetic is eclectic, and is known to employ varied influences and meld disparate styles, often within a single song or over the course of an album. Even in her earliest works, with piano the primary instrument, she wove together diverse influences, drawing on classical music, glam rock, and a wide range of ethnic and folk sources. This would continue throughout her career. 
By the time of "Never for Ever", Bush had begun to make prominent use of the Fairlight CMI synthesizer, which allowed her to sample and manipulate sounds, expanding her sonic palette. She has been compared with other "'arty' 1970s and '80s British pop rock artists" such as Roxy Music and Peter Gabriel. "The Guardian" called Bush "the queen of art-pop." Bush has a dramatic soprano vocal range. Her vocals contain elements of British, Anglo-Irish and most prominently (southern) English accents and, in its utilisation of musical instruments from various periods and cultures, her music has differed from American pop norms. Reviewers have used the term "surreal" to describe her music. Her songs explore melodramatic emotional and musical surrealism that defies easy categorisation. It has been observed that even her more joyous pieces are often tinged with traces of melancholy, and even the most sorrowful pieces have elements of vitality struggling against all that would oppress them. Elements of Bush's lyrics employ historical or literary references, as embodied in her first single "Wuthering Heights", which is based on Emily Brontë's novel of the same name. She has described herself as a storyteller who embodies the character singing the song and has dismissed efforts by others to conceive of her work as autobiographical. Bush's lyrics have been known to touch on obscure or esoteric subject matter, and "New Musical Express" noted that Bush was not afraid to tackle sensitive and taboo subjects in her work. "The Kick Inside" is based on a traditional English folk song ("The Ballad of Lucy Wan") about an incestuous pregnancy and a resulting suicide. "Kashka from Baghdad" is a song about a homosexual male couple; "Out" magazine listed two of her albums in their "Top 100 Greatest Gayest Albums" list. She has referenced G. I. Gurdjieff in the song "Them Heavy People", while "Cloudbusting" was inspired by Peter Reich's autobiography, "A Book of Dreams", about his relationship with his father, Wilhelm Reich. "The Infant Kiss" is a song about a haunted, unstable woman's almost paedophilic infatuation with a young boy in her care (inspired by Jack Clayton's film "The Innocents" (1961), which had been based on Henry James's novella "The Turn of the Screw"); and "Breathing" explores the results of nuclear fallout from the perspective of a fetus. Other non-musical sources of inspiration for Bush include horror films, which have influenced the gothic nature of her songs, such as "Hounds of Love", which samples the 1957 horror movie "Night of the Demon". Her songs have occasionally combined comedy and horror to form dark humour, such as murder by poisoning in "Coffee Homeground", an alcoholic mother in "Ran Tan Waltz" and the upbeat "The Wedding List", a song inspired by François Truffaut's 1967 film of Cornell Woolrich's "The Bride Wore Black" about the death of a groom and the bride's subsequent revenge against the killer. Bush has also cited comedy as a significant influence. She has cited Woody Allen, "Monty Python", "Fawlty Towers", and "The Young Ones" as particular favourites. The following artists cite her as an influence: Regina Spektor, Tori Amos, Ellie Goulding, Charli XCX, Tegan and Sara, k.d. lang, Paula Cole, Kate Nash, Bat for Lashes, Erasure, Alison Goldfrapp of Goldfrapp, Tim Bowness of No-Man, Chris Braide, Kyros, Aisles, Darren Hayes and Grimes. Nerina Pallot was inspired to become a songwriter after seeing Bush play "This Woman's Work" on "Wogan". 
Coldplay took inspiration from "Running Up That Hill" to compose their hit single "Speed of Sound". In addition to those artists who state that Bush has been a direct influence on their own careers, other artists have been quoted expressing admiration for her work including Annie Lennox, Björk, Florence Welch of Florence + The Machine, Little Boots, Elizabeth Fraser of Cocteau Twins, Dido, Sky Ferreira, St. Vincent, Lily Allen, Anohni of Antony and the Johnsons, Big Boi of OutKast, Tupac Shakur, Stevie Nicks, Steven Wilson, Steve Rothery of Marillion, and André Matos. Courtney Love of Hole mentioned Bush among other artists as one of her favourites as a teenager. Tricky wrote an article about "The Kick Inside", saying: "Her music has always sounded like dreamland to me... I don't believe in God, but if I did, her music would be my bible". Suede front-man Brett Anderson stated about "Hounds of Love": "I love the way it's a record of two halves, and the second half is a concept record about fear of drowning. It's an amazing record to listen to really late at night, unsettling and really jarring". John Lydon, better known as Johnny Rotten of the Sex Pistols, declared her work to be "beauty beyond belief". Rotten once wrote a song for her, titled "Bird in Hand" (about exploitation of parrots) that Bush rejected. Bush was one of the singers who Prince thanked in the liner notes of 1991's "Diamonds and Pearls". In December 1989, Robert Smith of The Cure chose "The Sensual World" as his favourite single of the year, "The Sensual World" as his favourite album of the year and included "all of Kate Bush" plus other artists in his list, "the best things about the eighties". Kele Okereke of Bloc Party said about "Hounds of Love": "The first time I heard it I was sitting in a reclining sofa. As the beat started I was transported somewhere else. Her voice, the imagery, the huge drum sound: it seemed to capture everything for me. As a songwriter you're constantly chasing that feeling". Rufus Wainwright named Bush as one of his top ten gay icons. Outside music, Bush has been an inspiration to several fashion designers, including Hussein Chalayan. Bush's first tour took place 2 April – 13 May 1979; 35 years later she embarked on a series of 22 concerts at the Hammersmith Apollo, London, beginning 26 August 2014. Apart from these two concert series, she has given only occasional live performances. Suggestions were prompted as to why she abandoned touring, among them her reputed need to be in total control of the final product, which is incompatible with live stage performance; a rumour of a crippling fear of flying; and the suggestion that the death of 21-year-old Bill Duffield severely affected her. Duffield, her lighting engineer, was killed in an accident during her concert of 2 April 1979 at Poole Arts Centre. Bush held a benefit concert on 12 May 1979, with Peter Gabriel and Steve Harley at London's Hammersmith Odeon for his family. Duffield would be honoured in two later songs: "Blow Away" on "Never for Ever" and "Moments of Pleasure" on "The Red Shoes". Bush said in a BBC Radio 2 interview with Mark Radcliffe that she actually enjoyed the tour but was consumed with producing her subsequent records. A BBC film crew followed the preparation for the tour which was shown as a 30-minute special on the "Nationwide" programme. 
During the same period as her tour, she made television appearances around the world, including "Top of the Pops" in the United Kingdom, "Bios Bahnhof" in Germany, and "Saturday Night Live" in the United States (with Paul Shaffer on piano) – her only appearance on American television to date. On 28 December 1979, BBC TV aired the "Kate Bush Christmas Special". It was recorded in October 1979 at the BBC Studios in Birmingham, England, with choreography by Anthony Van Laast. As well as playing songs from her first two albums, she played "December Will Be Magic Again", "Violin", "The Wedding List", "Ran Tan Waltz" and "Egypt" from her forthcoming album, "Never for Ever". Peter Gabriel made a guest appearance to play "Here Comes the Flood", and a duet of Roy Harper's "Another Day" with Bush. After the Tour of Life, Bush wanted to make two more albums before touring again. At that point, she got involved with production techniques and sound experimentation that took up a lot of time and prevented her from touring. She came close to touring again following the release of "The Dreaming" and "The Red Shoes", but live shows never materialised. Bush participated in the first benefit concert in aid of The Prince's Trust in July 1982. Bush performed live for the British charity event Comic Relief in 1986, singing "Do Bears... ?", a humorous duet with Rowan Atkinson, and a rendition of "Breathing". In March 1987, Bush sang "Running Up That Hill" at The Secret Policeman's Third Ball accompanied by David Gilmour. On 28 June 1987, she made a guest appearance to duet with Peter Gabriel on "Don't Give Up" at Earl's Court, London as part of his "So" tour. On 17 January 2002, Bush appeared with her long-time champion, David Gilmour, singing the part of the doctor in "Comfortably Numb" at the Royal Festival Hall in London. In 2011, Bush told the magazine "Classic Rock": "I do hope that some time I get a chance to do some shows. Maybe not a tour, but something." In March 2014, Bush announced a 22-night residency called "Before the Dawn" in London from 26 August – 1 October 2014 at the Hammersmith Apollo. The BBC reviewed the concert positively. The set list comprised most of "Hounds of Love" featuring the entire Ninth Wave suite, most of "Aerial" including the entire second disc, two songs from "The Red Shoes", and one song from "50 Words for Snow". Bush's first four albums and "The Sensual World" were noticeably excluded from the set list. In 1978 Bush made her debut on Dutch television in "De Efteling Special", which was broadcast on 11 May 1978. The amusement park De Efteling served as a backdrop for six songs from "The Kick Inside": "Moving", "Wuthering Heights", "Them Heavy People", "Strange Phenomena", "The Man With The Child In His Eyes" and "The Kick Inside". In early 1978, the amusement park's Haunted Castle was completed and its opening was scheduled for 10 May that year. Bush, who had just had a big hit in the Netherlands with "Wuthering Heights", appeared in the special, and her popularity was used to draw attention to the Haunted Castle. In 1979 Bush's one live show, The Tour of Life, was recorded for the BBC and for release on VHS as "Kate Bush Live at Hammersmith Odeon". Bush has appeared in innovative music videos designed to accompany her singles releases. 
Among the best known are those for "Running Up that Hill", "Babooshka", "Breathing", "Wuthering Heights", "The Man with the Child in His Eyes", and "Cloudbusting", featuring actor Donald Sutherland, who made time during the filming of another project to take part in the video. EMI has released collections of her videos, including "The Single File", "Hair of the Hound", "The Whole Story", a career video overview released in conjunction with the 1986 compilation album of the same title, and "The Sensual World". In 1993, she directed and starred in the short film "The Line, the Cross and the Curve", a musical co-starring Miranda Richardson, featuring music from Bush's album "The Red Shoes", which was inspired by the classic movie of the same name. It was released on VHS in the UK in 1994 and also received a small number of cinema screenings around the world. In recent interviews, Bush has said that she considers it a failure, and stated in 2001: "I'm very pleased with four minutes of it, but I'm very disappointed with the rest." In a 2005 interview, she described the film as "A load of bollocks." In 1994, Bush provided the music used in a series of psychedelic-themed television adverts for the soft drink Fruitopia that appeared in the United States. The same company aired the adverts in the United Kingdom, but the British version featured singer Elizabeth Fraser of Cocteau Twins instead of Bush. In late 2006, a DVD documentary titled "Kate Bush Under Review" was released by Sexy Intellectual, which included archival interviews with Bush, along with interviews with a selection of music historians and journalists (including Phil Sutcliffe, Nigel Williamson, and Morris Pert). The DVD also includes clips from several of Bush's music videos. On 2 December 2008, the DVD collection of the fourth season of "Saturday Night Live", including her performances, was released. A three-DVD set of The Secret Policeman's Balls benefit concerts that includes Bush's performance was released on 27 January 2009. Bush has released four short videos for the album "50 Words for Snow". One is an advertisement for the album. Two stop-motion "Animation Segments" were posted on the Kate Bush Official website and YouTube, one to accompany a 2-minute 25-second section of "Misty", called "Mistraldespair", the other to accompany a 2-minute 33-second section of "Wild Man". "Mistraldespair" was directed by Bush and animated by Gary Pureton, while the "Wild Man" segment was created by Finn and Patrick at Brandt Animation. On 24 January 2012, a third piece called "Eider Falls at Lake Tahoe" was premiered on her website and on YouTube. Running at 5:01, the piece is a sepia-tone shadow-puppet animation. The piece was directed by Bush and photographed by award-winning British cinematographer Roger Pratt; the shadow puppets were designed by Robert Allsopp. Bush stated that "Eider Falls at Lake Tahoe" is intended to be a "self contained piece" separate from the song "Lake Tahoe". In 1990, Bush starred in the black comedy film "Les Dogs", produced by "The Comic Strip" for BBC television. Aired on 8 March 1990, the film features Bush as the bride Angela at a wedding set in a post-apocalyptic version of Britain. While Bush's character is a silent presence in a wedding dress throughout most of the film, she does have several lines of dialogue with Peter Richardson in two dream sequences. In another "Comic Strip Presents" film, "GLC", she produced the theme song "Ken", which features her own vocal performance. 
The song was written about Ken Livingstone, the leader of the Greater London Council, who at the time was working with musicians to help the Labour Party garner the youth vote and who would later be elected Mayor of London. Bush wrote and performed the song "The Magician", using a fairground-like arrangement, for Menahem Golan's 1979 film "The Magician of Lublin". The track was scored and arranged by Michael Kamen. In 1986, she wrote and recorded "Be Kind to My Mistakes" for the Nicolas Roeg film "Castaway". An edited version of this track was used as the B-side to her 1989 single "This Woman's Work". In 1988, the song "This Woman's Work" was featured in the John Hughes film "She's Having a Baby", and a slightly remixed version appeared on Bush's album "The Sensual World". The song has since appeared on television shows, and in 2005 reached number-eight on the UK download chart after featuring in a British television advertisement for the charity NSPCC. In 1999, Bush wrote and recorded a song for the Disney film "Dinosaur", but the track was ultimately not included on the soundtrack. According to the winter 1999 issue of "HomeGround", a Bush fanzine, it was scrapped when Disney asked her to rewrite the song and she refused. Also in 1999, Bush's song "The Sensual World" was featured prominently in Canadian filmmaker Atom Egoyan's film "Felicia's Journey". "The Man with the Child in His Eyes" is on the soundtrack for the 2007 British romantic comedy film "Starter for 10". In 2007, Bush was asked to write a song for "The Golden Compass" soundtrack which made reference to the lead character, Lyra Belacqua. The song, "Lyra", was used in the closing credits of the film, reached number 187 in the UK Singles Chart and was nominated for the International Press Academy's Satellite Award for original song in a motion picture. According to Del Palmer, Bush was asked to compose the song on very short notice and the whole project was completed in 10 days. The song was produced and recorded by Bush in her own studio, and features the choir of Magdalen College, Oxford. Bush provided vocals on two of Peter Gabriel's albums, including the hits "Games Without Frontiers" and "Don't Give Up", as well as "No Self-Control". Gabriel appeared on Bush's 1979 television special, where they sang a duet of Roy Harper's "Another Day". She has sung on two Roy Harper tracks: "You", on his 1979 album "The Unknown Soldier", and "Once", the title track of his 1990 album. She has also sung on the title song of the 1986 Big Country album "The Seer"; the Midge Ure song "Sister and Brother" from his 1988 album "Answers to Nothing"; Go West's 1987 single "The King Is Dead"; and two songs with Prince – "Why Should I Love You?", from her 1993 album "The Red Shoes", and "My Computer" from Prince's 1996 album "Emancipation". In 1987, she sang a verse on the Beatles cover charity single "Let It Be" by Ferry Aid. She sang a line on the charity single "Spirit of the Forest" by Spirit of the Forest in 1989. In 1990 Bush produced a song for another artist for the only time to date: Alan Stivell's "Kimiad", for his album "Again". Stivell had appeared on "The Sensual World". In 1991, Kate Bush was invited to perform a cover of Elton John's 1972 song "Rocket Man" for the tribute album "Two Rooms: Celebrating the Songs of Elton John & Bernie Taupin". In 2011, Elton John collaborated with Bush once again on "Snowed in at Wheeler Street" for her most recent album, "50 Words for Snow". In 1994, Bush covered George Gershwin's "The Man I Love" for the tribute album "The Glory of Gershwin". 
In 1996, Bush contributed a version of "Mná na hÉireann" (Irish for "Women of Ireland") to the Anglo-Irish folk-rock compilation project "Common Ground: The Voices of Modern Irish Music". Bush had to sing the song in Irish, which she learned to do phonetically. Artists who have contributed to Bush's own albums include Elton John, Eric Clapton, Jeff Beck, David Gilmour, Nigel Kennedy, Gary Brooker, Danny Thompson, and Prince. Bush provided backing vocals for "Wouldn't Change a Thing", a song recorded during the 1990s by Lionel Azulay, the drummer with the original band that later became the KT Bush Band. The song, which was engineered and produced by Del Palmer, was released on Azulay's album "Out of the Ashes". Bush declined a request by Erasure to produce one of their albums because "she didn't feel that that was her area". Bush is married to guitarist Dan McIntosh. Their son, Bertie, was born in 1998. Bertie featured prominently in the 2014 concert "Before the Dawn". She previously had a long-term relationship with bassist and engineer Del Palmer. Bush is a former resident of Eltham, southeast London. In the 1990s, she moved to a canalside residence in Sulhamstead, Berkshire, and subsequently moved to Devon in 2004. Bush is a vegetarian, and was raised a Roman Catholic. The length of time between album releases has led to rumours in the media concerning her health or appearance. In 2011 Bush told BBC Radio 4 that the amount of time between album releases is extremely stressful, noting: "It's very frustrating the albums take as long as they do ... I wish there weren't such big gaps between them". In the same interview Bush denied she was a perfectionist in the studio, saying: "I think it's important that things are flawed ... That's what makes a piece of art interesting sometimes – the bit that's wrong or the mistake you've made that's led onto an idea you wouldn't have had otherwise", and reiterated her prioritisation of her family life. Kirsten Dunst Kirsten Caroline Dunst (born April 30, 1982) is an American actress. She made her film debut in Woody Allen's short film "Oedipus Wrecks" for the anthology film "New York Stories" (1989). At the age of twelve, Dunst gained widespread recognition as Claudia in "Interview with the Vampire" (1994), a role for which she was nominated for a Golden Globe Award for Best Supporting Actress. She appeared in "Little Women" the same year and in "Jumanji" the following year. After a recurring role in the third season of the NBC medical drama "ER" (1996–97) as Charlie Chemingo and starring in films such as "Wag the Dog" (1997), "Small Soldiers" (1998), the English dub of "Kiki's Delivery Service" (U.S. release 1998), and "The Virgin Suicides" (1999), Dunst began making romantic comedies and comedy-dramas, starring in "Drop Dead Gorgeous" (1999), "Bring It On" (2000), "Get Over It" and "Crazy/Beautiful" (both released in 2001). In 2001, Dunst made her singing debut in the film "Get Over It", in which she performed two songs. She also sang the jazz song "After You've Gone" for the end credits of the film "The Cat's Meow" (2001). Dunst achieved fame for her portrayal of Mary Jane Watson in Sam Raimi's "Spider-Man" trilogy (2002–2007). Since then, her films have included the romantic comedy "Wimbledon" (2004), the science fiction romantic comedy-drama "Eternal Sunshine of the Spotless Mind" (2004) and Cameron Crowe's romantic tragicomedy "Elizabethtown" (2005). 
She played the title role in Sofia Coppola's biographical film "Marie Antoinette" (2006) and starred in the comedy film "How to Lose Friends & Alienate People" (2008). She won the Best Actress Award at the Cannes Film Festival and the Saturn Award for Best Actress for her performance in Lars von Trier's "Melancholia" (2011). She starred in the second season of the television series "Fargo" in 2015, playing the role of Peggy Blumquist, a hairdresser who gets mixed up in a war between two crime families. Her performance garnered widespread critical acclaim, leading to her winning the Critics' Choice Television Award for Best Actress and being nominated for the Golden Globe Award for Best Actress and the Primetime Emmy Award for Outstanding Lead Actress in a Limited Series or Movie, losing to Lady Gaga and Sarah Paulson, respectively. In 2017, Dunst won her first Screen Actors Guild Award for her performance in the drama "Hidden Figures" (2016), about women mathematicians in the space program, and co-starred in her third collaboration with Sofia Coppola, the historical drama thriller "The Beguiled". Dunst was born in Point Pleasant, New Jersey, to Klaus Hermann Dunst and Inez Rupprecht. She has a younger brother, Christian. Her father worked for Siemens as a medical services executive, and her mother worked for Lufthansa as a flight attendant; her mother was also an artist and one-time gallery owner. Dunst's father is German, originally from Hamburg, and her mother was born in New Jersey, of German and Swedish descent. Until the age of eleven, Dunst lived in Brick Township, New Jersey, where she attended Ranney School. In 1993, her parents separated, and she subsequently moved with her mother and brother to Los Angeles, where she attended Laurel Hall School in North Hollywood and Notre Dame High School. Among her classmates was Rami Malek, who was a grade above; they were both in a musical theater class. In 1995, her mother filed for divorce. After graduating from high school in 2000, Dunst continued acting. As a teenager, she found it difficult to deal with her rising fame, and for a period she blamed her mother for pushing her into acting as a child. However, she later said that her mother "always had the best intentions". When asked if she had any regrets about her childhood, Dunst said: "Well, it's not a natural way to grow up, but it's the way I grew up and I wouldn't change it. I have my stuff to work out... I don't think anybody can sit around and say, 'My life is more screwed up than yours.' Everybody has their issues." Dunst began her career when she was three years old as a child fashion model in television commercials. She was signed with Ford Models and Elite Model Management. At the age of six, she made her feature film debut in a minor role in Woody Allen's short film "Oedipus Wrecks"; it was released as one-third of the anthology film "New York Stories" (1989). Soon after, Dunst performed in the comedy-drama "The Bonfire of the Vanities" (1990), based on Tom Wolfe's novel of the same name, in which she played the daughter of Tom Hanks's character. In 1993, Dunst made a guest appearance in an episode of the science fiction drama "Star Trek: The Next Generation". Her breakthrough role came in 1994 in the horror drama "Interview with the Vampire" opposite Tom Cruise and Brad Pitt, based on Anne Rice's novel of the same name. She played Claudia, the child vampire who is a surrogate daughter to Cruise and Pitt's characters. The film received mixed reviews, but many critics praised Dunst's performance. 
Roger Ebert commented that Dunst's creation of the child vampire Claudia was one of the "creepier" aspects of the film, and mentioned her ability to convey the impression of great age inside apparent youth. Todd McCarthy in "Variety" said that Dunst was "just right" for the family. The film featured a scene in which Dunst shared her first on-screen kiss with Pitt, who was almost two decades older. In an interview with "Interview" magazine, she revealed that kissing him had made her feel uncomfortable: "I thought it was gross, that Brad had cooties. I mean, I was 10." Her performance earned her the MTV Movie Award for Best Breakthrough Performance, the Saturn Award for Best Young Actress, and her first Golden Globe Award nomination. Later in 1994, Dunst co-starred in the drama film "Little Women" opposite Winona Ryder and Claire Danes. The film received favorable reviews. Critic Janet Maslin of "The New York Times" wrote that the film was the greatest adaptation of Louisa May Alcott's novel of the same name and remarked on Dunst's performance, "The perfect contrast to take-charge Jo comes from Kirsten Dunst's scene-stealing Amy, whose vanity and twinkling mischief make so much more sense coming from an 11-year-old vixen than they did from grown-up Joan Bennett in 1933. Ms. Dunst, also scarily effective as the baby bloodsucker of "Interview With the Vampire", is a little vamp with a big future." In 1995, Dunst co-starred in the fantasy adventure film "Jumanji", loosely based on Chris Van Allsburg's 1981 children's book of the same name. The story is about a supernatural and ominous board game in which animals and other jungle hazards appear with each roll of the dice. She was part of an ensemble cast that included Robin Williams, Bonnie Hunt and David Alan Grier. The movie grossed $262 million worldwide. That year, and again in 2002, Dunst was named one of "People" magazine's 50 Most Beautiful People. From 1996 to 1997, Dunst had a recurring role in season three of the NBC medical drama "ER". She played Charlie Chemingo, a child prostitute who was being cared for by the ER pediatrician Dr. Doug Ross (George Clooney). In 1997, she voiced Young Anastasia in the animated musical film "Anastasia". Also in 1997, Dunst appeared in the black comedy film "Wag the Dog", opposite Robert De Niro and Dustin Hoffman. The following year she voiced the title character, Kiki, a thirteen-year-old apprentice witch who leaves her home village to spend a year on her own, in the anime movie "Kiki's Delivery Service" (1998). Dunst was offered the role of Angela in the 1999 drama film "American Beauty", but turned it down because she did not want to appear in the film's suggestive sexual scenes or kiss the film's star Kevin Spacey. She later explained: "When I read it, I was 15 and I don't think I was mature enough to understand the script's material." That same year, she co-starred in the comedy film "Dick", opposite Michelle Williams. The film is a parody retelling the events of the Watergate scandal that led to the resignation of U.S. president Richard Nixon. Dunst appeared in Savage Garden's music video "I Knew I Loved You", the first single from their second and final album "Affirmation" (1999). Dunst co-starred opposite James Woods in Sofia Coppola's drama film "The Virgin Suicides" (1999), based on Jeffrey Eugenides' novel of the same name. She played Lux Lisbon, one of the troubled teenage daughters of Ronald Lisbon (Woods). 
The film was screened as a special presentation at the 43rd San Francisco International Film Festival in 2000. The movie received generally favorable reviews. "San Francisco Chronicle" critic Peter Stack noted in his review that Dunst "beautifully balances innocence and wantonness." In 2000, Dunst starred in the comedy "Bring It On" as Torrance Shipman, the captain of a cheerleading squad. The film generated mostly positive reviews, with many critics reserving praise for her performance. In his review, A. O. Scott called her "a terrific comic actress, largely because of her great expressive range, and the nimbleness with which she can shift from anxiety to aggression to genuine hurt." Charles Taylor of "Salon" noted that "among contemporary teenage actresses, Dunst has become the sunniest imaginable parodist", even though he thought the film had failed to provide her with as good a role as she had either in "Dick" or in "The Virgin Suicides." Jessica Winter from "The Village Voice" complimented Dunst, stating that her performance was "as sprightly and knowingly daft as her turn in "Dick"" and commenting that "[Dunst] provides the only major element of "Bring It On" that plays as tweaking parody rather than slick, strident, body-slam churlishness." Peter Stack of the "San Francisco Chronicle", despite giving the film an unfavorable review, commended Dunst for her willingness "to be as silly and cloyingly agreeable as it takes to get through a slapdash film." The following year, Dunst starred in the comedy film "Get Over It" (2001). She later explained that she took the role for the chance to sing. Also in 2001, she starred in the historical drama "The Cat's Meow", directed by Peter Bogdanovich, as the American actress Marion Davies. Derek Elley of "Variety" described the film as "playful and sporty", saying that this was Dunst's best performance to date: "Believable as both a spoiled ingenue and a lover to two very different men, Dunst endows a potentially lightweight character with considerable depth and sympathy." For her work, she won the Silver Ombú for Best Actress at the 2002 Mar del Plata International Film Festival. In 2002, Dunst co-starred opposite Tobey Maguire in the superhero film "Spider-Man", the most financially successful film of her career to date. She played Mary Jane Watson, the best friend and love interest of Peter Parker (Maguire). The film was directed by Sam Raimi. Owen Gleiberman of "Entertainment Weekly" remarked on Dunst's ability to "lend even the smallest line a tickle of flirtatious music." In the "Los Angeles Times" review, critic Kenneth Turan noted that Dunst and Maguire made a real connection on screen, concluding that their relationship "involved audiences to an extent rarely seen in films." "Spider-Man" was a commercial and critical success. The movie grossed $114 million during its opening weekend in North America and went on to earn $822 million worldwide. Dunst next co-starred opposite Billy Bob Thornton, Morgan Freeman and Holly Hunter in Ed Solomon's drama "Levity" (2003). That same year, she co-starred opposite Julia Roberts, Maggie Gyllenhaal and Julia Stiles in the drama "Mona Lisa Smile" (2003). The film received mostly negative reviews, with Manohla Dargis of the "Los Angeles Times" describing it as "smug and reductive." She co-starred as Mary Svevo opposite Jim Carrey, Kate Winslet and Tom Wilkinson in Michel Gondry's science fiction romantic comedy-drama "Eternal Sunshine of the Spotless Mind" (2004). 
The latter film received very positive reviews, with "Entertainment Weekly" describing Dunst's subplot as "nifty and clever". The movie grossed $72 million worldwide. The success of the first "Spider-Man" film led Dunst to reprise her role as Mary Jane Watson in 2004 in "Spider-Man 2". The movie was well received by critics and a financial success, setting a new opening weekend box office record for North America. With revenue of $783 million worldwide, it was the second highest-grossing film in 2004. Also in 2004, Dunst co-starred opposite Paul Bettany in the romantic comedy "Wimbledon", in which she portrayed a rising tennis player in the Wimbledon Championships, while Bettany portrayed a fading former tennis star. The film received mixed reviews, but many critics enjoyed Dunst's performance. Claudia Puig of "USA Today" reported that the chemistry between Dunst and Bettany was potent, with Dunst doing a fine job as a sassy and self-assured player. In 2005, she co-starred opposite Orlando Bloom in Cameron Crowe's romantic tragicomedy "Elizabethtown" as Claire Colburn, a flight attendant. The film premiered at the 2005 Toronto International Film Festival. Dunst revealed that working with Crowe was enjoyable, but more demanding than she had expected. The movie garnered mixed reviews, with the "Chicago Tribune" rating it one out of four stars and describing Dunst's portrayal of a flight attendant as "cloying." It was a box office disappointment. In 2006, Dunst collaborated with Sofia Coppola again and starred as the title character in Coppola's historical drama "Marie Antoinette", based on a book by Antonia Fraser. The movie was screened at a special presentation at the 2006 Cannes Film Festival, and was reviewed favourably. International revenues accounted for $45 million of its $60 million worldwide gross. In 2007, Dunst reprised her role as Mary Jane Watson in "Spider-Man 3". In contrast to the previous two films' positive reviews, "Spider-Man 3" received mixed reviews from critics. Nonetheless, with a total worldwide gross of $891 million, it stands as the most commercially successful film in the series and Dunst's highest-grossing film to the end of 2008. Having initially signed on for three "Spider-Man" films, she said that she would do a fourth, but only if Raimi and Maguire also returned. In January 2010, it was announced that the fourth film was cancelled and that the "Spider-Man" film series would be restarted, dropping Dunst, Maguire and Raimi from the franchise. In 2008, Dunst co-starred opposite Simon Pegg in the comedy "How to Lose Friends & Alienate People", based on former "Vanity Fair" contributing editor Toby Young's memoir of the same name. Dunst made her screenwriting and directorial debut with the short film "Bastard", which premiered at the Tribeca Film Festival in 2010 and was later featured at the 2010 Cannes Film Festival. She co-starred opposite Ryan Gosling in the mystery drama "All Good Things" (2010), based on a true story, playing the wife of Gosling's character, a woman from a run-down neighborhood who goes missing. The film received reasonable reviews, and earned $640,000 worldwide. Dunst co-starred with Brian Geraghty in Carlos Cuarón's short film "The Second Bakery Attack", based on Haruki Murakami's short story. In 2011, Dunst co-starred opposite Charlotte Gainsbourg, Kiefer Sutherland and Charlotte Rampling in Lars von Trier's drama film "Melancholia" as a woman suffering from depression as the world ends. 
The film premiered at the 2011 Cannes Film Festival and received positive reviews, with Dunst singled out for praise. Steven Loeb of "Southampton Patch" wrote, "This film has brought the best out of von Trier, as well as his star. Dunst is so good in this film, playing a character unlike any other she has ever attempted... Even if the film itself were not the incredible work of art that it is, Dunst's performance alone would be incentive enough to recommend it." Sukhdev Sandhu wrote from Cannes in "The Daily Telegraph" that "Dunst is exceptional, so utterly convincing in the lead role – troubled, serene, a fierce savant – that it feels like a career breakthrough". Dunst won several awards for her performance, including the Best Actress Award at the Cannes Film Festival and the Best Actress Award from the U.S. National Society of Film Critics. Dunst has signed to star in "Sweet Relief" as Marla Ruzicka, a peace activist and U.S. relief worker killed by a suicide bomb in Baghdad. She has expressed interest in playing the role of Blondie frontwoman Debbie Harry in Michel Gondry's upcoming biographical film about the band. In 2012, Dunst co-starred in Juan Diego Solanas' science fiction romantic drama "Upside Down" opposite Jim Sturgess. She co-starred opposite Isla Fisher, Rebel Wilson and Lizzy Caplan in Leslye Headland's romantic comedy "Bachelorette", produced by Will Ferrell and Adam McKay. In 2012, she co-starred opposite Sam Riley, Kristen Stewart and Garrett Hedlund in the adventure drama "On the Road", based on Jack Kerouac's novel of the same name, as Camille Moriarty. She made a cameo appearance in the short film "Fight For Your Right Revisited". It premiered at the 2011 Sundance Film Festival. In 2015, Dunst co-starred as Peggy Blumquist in the second season of the critically acclaimed FX crime comedy-drama "Fargo", for which she received a Golden Globe nomination. In 2016, Dunst co-starred in Jeff Nichols' science fiction drama "Midnight Special" with Michael Shannon and Joel Edgerton. In May 2016, she was a member of the main competition jury of the 2016 Cannes Film Festival. In 2017, Dunst starred with Colin Farrell, Nicole Kidman, and Elle Fanning in the drama "The Beguiled", her third collaboration with Sofia Coppola, who directed, wrote, and produced. The film is a remake of Don Siegel's original 1971 film about a wounded Union soldier who seeks shelter at an all-girls' school deep in Confederate country. That same year, Dunst starred in the Rodarte label founders' feature directorial debut "Woodshock", about a woman who falls deeper into paranoia after taking a deadly drug. In October 2015, Dunst said that she was co-writing and set to direct a film adaptation of a novel. In July 2016, it was announced that Dunst would be making her feature film directorial debut with an adaptation of Sylvia Plath's novel "The Bell Jar", with Dakota Fanning in the lead role. Dunst made her singing debut in the comedy film "Get Over It", performing two songs written by Marc Shaiman. She recorded Henry Creamer and Turner Layton's jazz standard "After You've Gone", which was used in the end credits of "The Cat's Meow". In "Spider-Man 3", she sang two songs as Mary Jane Watson, one during a Broadway performance, and one as a singing waitress in a jazz club. Dunst recorded the songs earlier and lip-synced while filming. 
She appeared in the music videos for Savage Garden's "I Knew I Loved You", Beastie Boys' "Make Some Noise" and R.E.M.'s "We All Go Back to Where We Belong", and she sang two tracks, "This Old Machine" and "Summer Day", on Jason Schwartzman's 2007 solo album "Nighttiming". In 2007, Dunst said she had no plans to release albums, saying, "It worked when Barbra Streisand was doing it, but now it's a little cheesy, I think. It works better when singers are in movies." Dunst starred as the magical princess Majokko in the Takashi Murakami- and McG-directed short "Akihabara Majokko Princess", singing a cover of The Vapors' 1980 song "Turning Japanese". This was shown at the "Pop Life" exhibition in London's Tate Modern museum from October 1, 2009, to January 17, 2010. It shows Dunst prancing around Akihabara, a crowded shopping district in Tokyo, Japan. Dunst dated actor Jake Gyllenhaal from 2002 to 2004. She dated Razorlight frontman Johnny Borrell in 2007. She dated her "On the Road" co-star Garrett Hedlund from 2012 to 2016. Dunst began dating her "Fargo" co-star Jesse Plemons in 2016. They are currently engaged. Together, the couple have a son born in 2018. Dunst was treated for depression in early 2008 at the Cirque Lodge treatment center in Utah. She explained that she had been feeling low in the six months before her admission. In late March 2008, she checked out of the treatment center and began filming the mystery drama "All Good Things". In May 2008, she went public with this information in order to dispel rumors of drug and alcohol abuse, stating that "Now that I'm feeling stronger, I was prepared to say something (...) Depression is pretty serious and should not be gossiped about". Dunst gained German citizenship in 2011 and holds dual citizenship of Germany and the United States. Dunst supported Democratic candidate John Kerry for the 2004 U.S. presidential election. She supported Barack Obama for the 2008 presidential election, directing and narrating a documentary, "Why Tuesday", about the tradition of voting on Tuesdays and the low voter turnout in the U.S., as she felt it important to "influence people in a positive way". Dunst works with the Elizabeth Glaser Pediatric AIDS Foundation, for which she helped design and promote a necklace whose sales proceeds went to the charity. She has worked in breast cancer awareness, participating in the Stand Up to Cancer telethon in September 2008 to raise funds for cancer research. On December 5, 2009, she participated in the Teletón in Mexico, to raise funds for cancer treatment and children's rehabilitation. Dunst bought a home in Toluca Lake, Los Angeles, California, in 2001. In 2010, she sold a residence in Nichols Canyon, Los Angeles. She also lived in a lower Manhattan apartment, which she listed for sale in 2017. Lead Lead is a chemical element with symbol Pb (from the Latin "plumbum") and atomic number 82. It is a heavy metal that is denser than most common materials. Lead is soft and malleable, and has a relatively low melting point. When freshly cut, lead is silvery with a hint of blue; it tarnishes to a dull gray color when exposed to air. Lead has the highest atomic number of any stable element and three of its isotopes each conclude a major decay chain of heavier elements. Lead is a relatively unreactive post-transition metal. Its weak metallic character is illustrated by its amphoteric nature; lead and lead oxides react with acids and bases, and it tends to form covalent bonds. 
Compounds of lead are usually found in the +2 oxidation state rather than the +4 state common with lighter members of the carbon group. Exceptions are mostly limited to organolead compounds. Like the lighter members of the group, lead tends to bond with itself; it can form chains, rings and polyhedral structures. Lead is easily extracted from its ores; prehistoric people in Western Asia knew of it. Galena, a principal ore of lead, often bears silver, interest in which helped initiate widespread extraction and use of lead in ancient Rome. Lead production declined after the fall of Rome and did not reach comparable levels until the Industrial Revolution. In 2014, annual global production of lead was about ten million tonnes, over half of which was from recycling. Lead's high density, low melting point, ductility and relative inertness to oxidation make it useful. These properties, combined with its relative abundance and low cost, resulted in its extensive use in construction, plumbing, batteries, bullets and shot, weights, solders, pewters, fusible alloys, white paints, leaded gasoline, and radiation shielding. In the late 19th century, lead's toxicity was recognized, and its use has since been phased out of many applications. Lead is a toxin that accumulates in soft tissues and bones; it acts as a neurotoxin, damaging the nervous system and interfering with the function of biological enzymes. It is particularly problematic in children: even if blood levels are promptly normalized with treatment, neurological disorders, such as brain damage and behavioral problems, may result. A lead atom has 82 electrons, arranged in an electron configuration of [Xe]4f¹⁴5d¹⁰6s²6p². The sum of lead's first and second ionization energies—the total energy required to remove the two 6p electrons—is close to that of tin, lead's upper neighbor in the carbon group. This is unusual; ionization energies generally fall going down a group, as an element's outer electrons become more distant from the nucleus, and more shielded by smaller orbitals. The similarity of ionization energies is caused by the lanthanide contraction—the decrease in element radii from lanthanum (atomic number 57) to lutetium (71), and the relatively small radii of the elements from hafnium (72) onwards. This is due to poor shielding of the nucleus by the lanthanide 4f electrons. The sum of the first four ionization energies of lead exceeds that of tin, contrary to what periodic trends would predict. Relativistic effects, which become significant in heavier atoms, contribute to this behavior. One such effect is the inert pair effect: the 6s electrons of lead become reluctant to participate in bonding, making the distance between nearest atoms in crystalline lead unusually long. Lead's lighter carbon group congeners form stable or metastable allotropes with the tetrahedrally coordinated and covalently bonded diamond cubic structure. The energy levels of their outer s- and p-orbitals are close enough to allow mixing into four hybrid sp³ orbitals. In lead, the inert pair effect increases the separation between its s- and p-orbitals, and the gap cannot be overcome by the energy that would be released by extra bonds following hybridization. Rather than having a diamond cubic structure, lead forms metallic bonds in which only the p-electrons are delocalized and shared between the Pb²⁺ ions. Lead consequently has a face-centered cubic structure like the similarly sized divalent metals calcium and strontium. 
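To put the ionization-energy comparison above into rough numbers, here is a minimal Python sketch; the kJ/mol values are approximate figures from standard reference tables, supplied only for illustration and not taken from this article.

# Approximate first and second ionization energies in kJ/mol
# (illustrative reference values, an assumption of this sketch).
ionization_energies = {
    "Sn": (708.6, 1411.8),   # tin, lead's upper neighbor in the carbon group
    "Pb": (715.6, 1450.5),   # lead
}

for element, (ie1, ie2) in ionization_energies.items():
    print(f"{element}: IE1 + IE2 = {ie1 + ie2:.1f} kJ/mol")

# The two sums differ by only about 2%, which is the unusual similarity the
# text attributes to the lanthanide contraction and relativistic effects.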
Pure lead has a bright, silvery appearance with a hint of blue. It tarnishes on contact with moist air, and takes on a dull appearance, the hue of which depends on the prevailing conditions. Characteristic properties of lead include high density, malleability, ductility, and high resistance to corrosion due to passivation. Lead's close-packed face-centered cubic structure and high atomic weight result in a density of 11.34 g/cm³, which is greater than that of common metals such as iron (7.87 g/cm³), copper (8.93 g/cm³), and zinc (7.14 g/cm³). This density is the origin of the idiom "to go over like a lead balloon". Some rarer metals are denser: tungsten and gold are both at 19.3 g/cm³, and osmium—the densest metal known—has a density of 22.59 g/cm³, almost twice that of lead. Lead is a very soft metal with a Mohs hardness of 1.5; it can be scratched with a fingernail. It is quite malleable and somewhat ductile. The bulk modulus of lead—a measure of its ease of compressibility—is 45.8 GPa. In comparison, that of aluminium is 75.2 GPa; copper 137.8 GPa; and mild steel 160–169 GPa. Lead's tensile strength, at 12–17 MPa, is low (that of aluminium is 6 times higher, copper 10 times, and mild steel 15 times higher); it can be strengthened by adding small amounts of copper or antimony. The melting point of lead—at 327.5 °C (621.5 °F)—is very low compared to most metals. Its boiling point of 1749 °C (3180 °F) is the lowest among the carbon group elements. The electrical resistivity of lead at 20 °C is 192 nanoohm-meters, almost an order of magnitude higher than those of other industrial metals (copper at 15.43 nΩ·m; gold 20.51 nΩ·m; and aluminium at 24.15 nΩ·m). Lead is a superconductor at temperatures lower than 7.19 K; this is the highest critical temperature of all type-I superconductors and the third highest of the elemental superconductors. Natural lead consists of four stable isotopes with mass numbers of 204, 206, 207, and 208, and traces of five short-lived radioisotopes. The high number of isotopes is consistent with lead's atomic number being even. Lead has a magic number of protons (82), for which the nuclear shell model accurately predicts an especially stable nucleus. Lead-208 has 126 neutrons, another magic number, which may explain why lead-208 is extraordinarily stable. With its high atomic number, lead is the heaviest element whose natural isotopes are regarded as stable; lead-208 is the heaviest stable nucleus. (This distinction formerly fell to bismuth, with an atomic number of 83, until its only primordial isotope, bismuth-209, was found in 2003 to decay very slowly.) The four stable isotopes of lead could theoretically undergo alpha decay to isotopes of mercury with a release of energy, but this has not been observed for any of them; their predicted half-lives range from 10³⁵ to 10¹⁸⁹ years (at least 10²⁵ times the current age of the universe). Three of the stable isotopes are found in three of the four major decay chains: lead-206, lead-207, and lead-208 are the final decay products of uranium-238, uranium-235, and thorium-232, respectively. These decay chains are called the uranium chain, the actinium chain, and the thorium chain. Their isotopic concentrations in a natural rock sample depend greatly on the presence of these three parent uranium and thorium isotopes. For example, the relative abundance of lead-208 can range from 52% in normal samples to 90% in thorium ores; for this reason, the standard atomic weight of lead is given to only one decimal place. 
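The dependence of lead's isotopic makeup on its uranium and thorium parents, described above, can be made concrete with a short Python sketch of radiogenic in-growth; the uranium-238 half-life and the starting amounts below are illustrative assumptions for the sketch, not figures from this article.

import math

# Assumed inputs for the illustration.
HALF_LIFE_U238_YEARS = 4.468e9            # commonly quoted uranium-238 half-life
DECAY_CONSTANT = math.log(2) / HALF_LIFE_U238_YEARS
initial_u238_atoms = 1.0e6                # arbitrary starting amount
elapsed_years = 4.5e9                     # roughly the age of the Solar System

# Uranium-238 remaining and lead-206 grown in after the elapsed time.
remaining_u238 = initial_u238_atoms * math.exp(-DECAY_CONSTANT * elapsed_years)
ingrown_pb206 = initial_u238_atoms - remaining_u238
print(f"U-238 remaining: {remaining_u238:,.0f} atoms")
print(f"Pb-206 in-grown: {ingrown_pb206:,.0f} atoms")

# Inverting the same decay law turns a measured daughter/parent ratio into an
# age, which is the idea behind the uranium-lead dating mentioned below:
#   t = ln(1 + Pb206/U238) / decay_constant
age = math.log(1 + ingrown_pb206 / remaining_u238) / DECAY_CONSTANT
print(f"Age recovered from the ratio: {age:.3e} years")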
As time passes, the ratio of lead-206 and lead-207 to lead-204 increases, since the former two are supplemented by radioactive decay of heavier elements while the latter is not; this allows for lead–lead dating. As uranium decays into lead, their relative amounts change; this is the basis for uranium–lead dating. Lead-207 exhibits nuclear magnetic resonance, a property that has been used to study its compounds in solution and solid state, including in the human body. Apart from the stable isotopes, which make up almost all lead that exists naturally, there are trace quantities of a few radioactive isotopes. One of them is lead-210; although it has a half-life of only 22.3 years, small quantities occur in nature because lead-210 is produced by a long decay series that starts with uranium-238 (which has been present for billions of years on Earth). Lead-211, -212, and -214 are present in the decay chains of uranium-235, thorium-232, and uranium-238, respectively, so traces of all three of these lead isotopes are found naturally. Minute traces of lead-209 arise from the very rare cluster decay of radium-223, one of the daughter products of natural uranium-235, and the decay chain of neptunium-237, traces of which are produced by neutron capture in uranium ores. Lead-210 is particularly useful for helping to identify the ages of samples by measuring its ratio to lead-206 (both isotopes are present in a single decay chain). In total, 43 lead isotopes have been synthesized, with mass numbers 178–220. Lead-205 is the most stable radioisotope, with a half-life of around 1.5×10⁷ years. The second-most stable is lead-202, which has a half-life of about 53,000 years, longer than any of the natural trace radioisotopes. Bulk lead exposed to moist air forms a protective layer of varying composition. Lead(II) carbonate is a common constituent; the sulfate or chloride may also be present in urban or maritime settings. This layer makes bulk lead effectively chemically inert in the air. Finely powdered lead, as with many metals, is pyrophoric, and burns with a bluish-white flame. Fluorine reacts with lead at room temperature, forming lead(II) fluoride. The reaction with chlorine is similar but requires heating, as the resulting chloride layer diminishes the reactivity of the elements. Molten lead reacts with the chalcogens to give lead(II) chalcogenides. Lead metal resists sulfuric and phosphoric acid but not hydrochloric or nitric acid; the outcome depends on insolubility and subsequent passivation of the product salt. Organic acids, such as acetic acid, dissolve lead in the presence of oxygen. Concentrated alkalis will dissolve lead and form plumbites. Lead shows two main oxidation states: +4 and +2. The tetravalent state is common for the carbon group. The divalent state is rare for carbon and silicon, minor for germanium, important (but not prevailing) for tin, and is the more important of the two oxidation states for lead. This is attributable to relativistic effects, specifically the inert pair effect, which manifests itself when there is a large difference in electronegativity between lead and oxide, halide, or nitride anions, leading to a significant partial positive charge on lead. The result is a stronger contraction of the lead 6s orbital than is the case for the 6p orbital, making it rather inert in ionic compounds. The inert pair effect is less applicable to compounds in which lead forms covalent bonds with elements of similar electronegativity, such as carbon in organolead compounds. 
In these, the 6s and 6p orbitals remain similarly sized and sp³ hybridization is still energetically favorable. Lead, like carbon, is predominantly tetravalent in such compounds. There is a relatively large difference in the electronegativity of lead(II) at 1.87 and lead(IV) at 2.33. This difference marks the reversal in the trend of increasing stability of the +4 oxidation state going down the carbon group; tin, by comparison, has values of 1.80 in the +2 oxidation state and 1.96 in the +4 state. Lead(II) compounds are characteristic of the inorganic chemistry of lead. Even strong oxidizing agents like fluorine and chlorine react with lead to give only PbF₂ and PbCl₂. Lead(II) ions are usually colorless in solution, and partially hydrolyze to form Pb(OH)⁺ and finally [Pb₃(OH)₄]²⁺ (in which the hydroxyl ions act as bridging ligands), but are not reducing agents as tin(II) ions are. Techniques for identifying the presence of the Pb²⁺ ion in water generally rely on the precipitation of lead(II) chloride using dilute hydrochloric acid. As the chloride salt is sparingly soluble in water, in very dilute solutions the precipitation of lead(II) sulfide is achieved by bubbling hydrogen sulfide through the solution. Lead monoxide exists in two polymorphs, litharge α-PbO (red) and massicot β-PbO (yellow), the latter being stable only above around 488 °C. Litharge is the most commonly used inorganic compound of lead. There is no lead(II) hydroxide; increasing the pH of solutions of lead(II) salts leads to hydrolysis and condensation. Lead commonly reacts with heavier chalcogens. Lead sulfide is a semiconductor, a photoconductor, and an extremely sensitive infrared radiation detector. The other two chalcogenides, lead selenide and lead telluride, are likewise photoconducting. They are unusual in that their color becomes lighter going down the group. Lead dihalides are well-characterized; this includes the diastatide, and mixed halides, such as PbFCl. The relative insolubility of the latter forms a useful basis for the gravimetric determination of fluorine. The difluoride was the first solid ionically conducting compound to be discovered (in 1834, by Michael Faraday). The other dihalides decompose on exposure to ultraviolet or visible light, especially the diiodide. Many lead(II) pseudohalides are known, such as the cyanide, cyanate, and thiocyanate. Lead(II) forms an extensive variety of halide coordination complexes, such as [PbCl₄]²⁻, [PbCl₆]⁴⁻, and extended chloride-bridged chain anions. Lead(II) sulfate is insoluble in water, like the sulfates of other heavy divalent cations. Lead(II) nitrate and lead(II) acetate are very soluble, and this is exploited in the synthesis of other lead compounds. Few inorganic lead(IV) compounds are known. They are only formed in highly oxidizing solutions and do not normally exist under standard conditions. Lead(II) oxide gives a mixed oxide on further oxidation, Pb₃O₄. It is described as lead(II,IV) oxide, or structurally 2PbO·PbO₂, and is the best-known mixed valence lead compound. Lead dioxide is a strong oxidizing agent, capable of oxidizing hydrochloric acid to chlorine gas. This is because the expected PbCl₄ that would be produced is unstable and spontaneously decomposes to PbCl₂ and Cl₂. Analogously to lead monoxide, lead dioxide is capable of forming plumbate anions. Lead disulfide and lead diselenide are only stable at high pressures. Lead tetrafluoride, a yellow crystalline powder, is stable, but less so than the difluoride. 
Lead tetrachloride (a yellow oil) decomposes at room temperature, lead tetrabromide is less stable still, and the existence of lead tetraiodide is questionable. Some lead compounds exist in formal oxidation states other than +4 or +2. Lead(III) may be obtained, as an intermediate between lead(II) and lead(IV), in larger organolead complexes; this oxidation state is not stable, as both the lead(III) ion and the larger complexes containing it are radicals. The same applies for lead(I), which can be found in such radical species. Numerous mixed lead(II,IV) oxides are known. When PbO₂ is heated in air, it loses oxygen in stages at 293 °C, 351 °C, and 374 °C, passing through intermediate mixed oxides (including Pb₃O₄) before finally becoming PbO at 605 °C. A further sesquioxide, Pb₂O₃, can be obtained at high pressure, along with several non-stoichiometric phases. Many of them show defective fluorite structures in which some oxygen atoms are replaced by vacancies: PbO can be considered as having such a structure, with every alternate layer of oxygen atoms absent. Negative oxidation states can occur as Zintl phases, as either free lead anions, as in Ba₂Pb, with lead formally being lead(−IV), or in oxygen-sensitive ring-shaped or polyhedral cluster ions such as the trigonal bipyramidal Pb₅²⁻ ion, where two lead atoms are lead(−I) and three are lead(0). In such anions, each atom is at a polyhedral vertex and contributes two electrons to each covalent bond along an edge from their sp³ hybrid orbitals, the other two being an external lone pair. They may be made in liquid ammonia via the reduction of lead by sodium. Lead can form multiply-bonded chains, a property it shares with its lighter homologs in the carbon group. Its capacity to do so is much less because the Pb–Pb bond energy is over three and a half times lower than that of the C–C bond. With itself, lead can build metal–metal bonds of an order up to three. With carbon, lead forms organolead compounds similar to, but generally less stable than, typical organic compounds (due to the Pb–C bond being rather weak). This makes the organometallic chemistry of lead far less wide-ranging than that of tin. Lead predominantly forms organolead(IV) compounds, even when starting with inorganic lead(II) reactants; very few organolead(II) compounds are known. The most well-characterized exceptions are Pb[CH(SiMe₃)₂]₂ and Pb(η⁵-C₅H₅)₂. The lead analog of the simplest organic compound, methane, is plumbane. Plumbane may be obtained in a reaction between metallic lead and atomic hydrogen. Two simple derivatives, tetramethyllead and tetraethyllead, are the best-known organolead compounds. These compounds are relatively stable: tetraethyllead only starts to decompose if heated or if exposed to sunlight or ultraviolet light. (Tetraphenyllead is even more thermally stable, decomposing at 270 °C.) With sodium metal, lead readily forms an equimolar alloy that reacts with alkyl halides to form organometallic compounds such as tetraethyllead. The oxidizing nature of many organolead compounds is usefully exploited: lead tetraacetate is an important laboratory reagent for oxidation in organic synthesis, and tetraethyllead was once produced in larger quantities than any other organometallic compound. Other organolead compounds are less chemically stable. For many organic compounds, a lead analog does not exist. Lead's per-particle abundance in the Solar System is 0.121 ppb (parts per billion). This figure is two and a half times higher than that of platinum, eight times more than mercury, and seventeen times more than gold. 
The amount of lead in the universe is slowly increasing as most heavier atoms (all of which are unstable) gradually decay to lead. The abundance of lead in the Solar System since its formation 4.5 billion years ago has increased by about 0.75%. The solar system abundances table shows that lead, despite its relatively high atomic number, is more prevalent than most other elements with atomic numbers greater than 40. Primordial lead—which comprises the isotopes lead-204, lead-206, lead-207, and lead-208—was mostly created as a result of repetitive neutron capture processes occurring in stars. The two main modes of capture are the s- and r-processes. In the s-process (s is for "slow"), captures are separated by years or decades, allowing less stable nuclei to undergo beta decay. A stable thallium-203 nucleus can capture a neutron and become thallium-204; this undergoes beta decay to give stable lead-204; on capturing another neutron, it becomes lead-205, which has a half-life of around 15 million years. Further captures result in lead-206, lead-207, and lead-208. On capturing another neutron, lead-208 becomes lead-209, which quickly decays into bismuth-209. On capturing another neutron, bismuth-209 becomes bismuth-210, and this beta decays to polonium-210, which alpha decays to lead-206. The cycle hence ends at lead-206, lead-207, lead-208, and bismuth-209. In the r-process (r is for "rapid"), captures happen faster than nuclei can decay. This occurs in environments with a high neutron density, such as a supernova or the merger of two neutron stars. The neutron flux involved may be on the order of 10²² neutrons per square centimeter per second. The r-process does not form as much lead as the s-process. It tends to stop once neutron-rich nuclei reach 126 neutrons. At this point, the neutrons are arranged in complete shells in the atomic nucleus, and it becomes harder to energetically accommodate more of them. When the neutron flux subsides, these nuclei beta decay into stable isotopes of osmium, iridium, and platinum. Lead is classified as a chalcophile under the Goldschmidt classification, meaning it is generally found combined with sulfur. It rarely occurs in its native, metallic form. Many lead minerals are relatively light and, over the course of the Earth's history, have remained in the crust instead of sinking deeper into the Earth's interior. This accounts for lead's relatively high crustal abundance of 14 ppm; it is the 38th most abundant element in the crust. The main lead-bearing mineral is galena (PbS), which is mostly found with zinc ores. Most other lead minerals are related to galena in some way; boulangerite, Pb₅Sb₄S₁₁, is a mixed sulfide derived from galena; anglesite, PbSO₄, is a product of galena oxidation; and cerussite or white lead ore, PbCO₃, is a decomposition product of galena. Arsenic, tin, antimony, silver, gold, copper, and bismuth are common impurities in lead minerals. World lead resources exceed 2 billion tons. Significant deposits are located in Australia, China, Ireland, Mexico, Peru, Portugal, Russia, and the United States. Global reserves—resources that are economically feasible to extract—totaled 88 million tons in 2016, of which Australia had 35 million, China 17 million, and Russia 6.4 million. Typical background concentrations of lead do not exceed 0.1 μg/m³ in the atmosphere; 100 mg/kg in soil; and 5 μg/L in freshwater and seawater. 
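The s-process capture-and-decay cycle described earlier in this passage can be traced step by step; the following Python sketch simply encodes that sequence (neutron capture, beta decay, alpha decay) and deliberately ignores branchings and timescales, so it is a simplification for illustration only.

# Simplified end-of-s-process walk, encoding only the sequence given in the
# text: "+n" is neutron capture; "beta-" and "alpha" are the decays that follow.
steps = {
    "Tl-203": ("+n", "Tl-204"),
    "Tl-204": ("beta-", "Pb-204"),
    "Pb-204": ("+n", "Pb-205"),
    "Pb-205": ("+n", "Pb-206"),    # long-lived enough to capture again
    "Pb-206": ("+n", "Pb-207"),
    "Pb-207": ("+n", "Pb-208"),
    "Pb-208": ("+n", "Pb-209"),
    "Pb-209": ("beta-", "Bi-209"),
    "Bi-209": ("+n", "Bi-210"),
    "Bi-210": ("beta-", "Po-210"),
    "Po-210": ("alpha", "Pb-206"),  # the chain loops back to lead-206
}

nuclide, visited = "Tl-203", set()
while nuclide not in visited:
    visited.add(nuclide)
    process, product = steps[nuclide]
    print(f"{nuclide} --{process}--> {product}")
    nuclide = product
print(f"Reached {nuclide} again: the cycle ends at lead-206, -207, -208 and bismuth-209.")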
The modern English word "lead" is of Germanic origin; it comes from the Middle English "leed" and Old English "lēad" (with the macron above the "e" signifying that the vowel sound of that letter is long). The Old English word is derived from the hypothetical reconstructed Proto-Germanic "*lauda-" ("lead"). According to linguistic theory, this word bore descendants in multiple Germanic languages of exactly the same meaning. The origin of the Proto-Germanic "*lauda-" is not agreed upon in the linguistic community. One hypothesis suggests it is derived from Proto-Indo-European "*lAudh-" ("lead"; capitalization of the vowel is equivalent to the macron). Another hypothesis suggests it is borrowed from Proto-Celtic "*ɸloud-io-" ("lead"). This word is related to the Latin "plumbum", which gave the element its chemical symbol "Pb". The word "*ɸloud-io-" is thought to be the origin of Proto-Germanic "*bliwa-" (which also means "lead"), from which stemmed the German "Blei". The name of the chemical element is not related to the verb of the same spelling, which is derived from Proto-Germanic "*laidijan-" ("to lead"). Metallic lead beads dating back to 7000–6500 BCE have been found in Asia Minor and may represent the first example of metal smelting. At that time lead had few (if any) applications due to its softness and dull appearance. The major reason for the spread of lead production was its association with silver, which may be obtained by burning galena (a common lead mineral). The Ancient Egyptians were the first to use lead minerals in cosmetics, an application that spread to Ancient Greece and beyond; the Egyptians may have used lead for sinkers in fishing nets, glazes, glasses, enamels, and for ornaments. Various civilizations of the Fertile Crescent used lead as a writing material, as currency, and for construction. Lead was used in the Ancient Chinese royal court as a stimulant, as currency, and as a contraceptive; the Indus Valley civilization and the Mesoamericans used it for making amulets; and the eastern and southern African peoples used lead in wire drawing. Because silver was extensively used as a decorative material and an exchange medium, lead deposits came to be worked in Asia Minor from 3000 BCE; later, lead deposits were developed in the Aegean and Laurion. These three regions collectively dominated production of mined lead until c. 1200 BCE. From 2000 BCE, the Phoenicians worked deposits in the Iberian peninsula; by 1600 BCE, lead mining existed in Cyprus, Greece, and Sardinia. Rome's territorial expansion in Europe and across the Mediterranean, and its development of mining, led to it becoming the greatest producer of lead during the classical era, with an estimated annual output peaking at 80,000 tonnes. Like their predecessors, the Romans obtained lead mostly as a by-product of silver smelting. Lead mining occurred in Central Europe, Britain, the Balkans, Greece, Anatolia, and Hispania, the latter accounting for 40% of world production. Lead tablets were commonly used as a material for letters. Lead coffins, cast in flat sand forms, with interchangeable motifs to suit the faith of the deceased, were used in ancient Judea. Lead was used for making water pipes in the Roman Empire; the Latin word for the metal, "plumbum", is the origin of the English word "plumbing". Its ease of working and resistance to corrosion ensured its widespread use in other applications including pharmaceuticals, roofing, currency, and warfare. 
Writers of the time, such as Cato the Elder, Columella, and Pliny the Elder, recommended lead (or lead-coated) vessels for the preparation of sweeteners and preservatives added to wine and food. The lead conferred an agreeable taste due to the formation of "sugar of lead" (lead(II) acetate), whereas copper or bronze vessels could impart a bitter flavor through verdigris formation. The Roman author Vitruvius reported the health dangers of lead, and modern writers have suggested that lead poisoning played a major role in the decline of the Roman Empire. Other researchers have criticized such claims, pointing out, for instance, that not all abdominal pain is caused by lead poisoning. According to archaeological research, Roman lead pipes increased lead levels in tap water but such an effect was "unlikely to have been truly harmful". When lead poisoning did occur, victims were called "saturnine", dark and cynical, after the ghoulish father of the gods, Saturn. By association, lead was considered the father of all metals. Its status in Roman society was low as it was readily available and cheap. During the classical era (and even up to the 17th century), tin was often not distinguished from lead: Romans called lead "plumbum nigrum" ("black lead"), and tin "plumbum candidum" ("bright lead"). The association of lead and tin can be seen in other languages: the word "olovo" in Czech translates to "lead", but in Russian the cognate "олово" ("olovo") means "tin". To add to the confusion, lead bore a close relation to antimony: both elements commonly occur as sulfides (galena and stibnite), often together. Pliny incorrectly wrote that stibnite would give lead on heating, instead of antimony. In countries such as Turkey and India, the originally Persian name "surma" came to refer to either antimony sulfide or lead sulfide, and in some languages, such as Russian, gave its name to antimony ("сурьма"). Lead mining in Western Europe declined after the fall of the Western Roman Empire, with Arabian Iberia being the only region having a significant output. The largest production of lead occurred in South and East Asia, especially China and India, where lead mining grew rapidly. In Europe, lead production began to increase in the 11th and 12th centuries, when it was again used for roofing and piping. Starting in the 13th century, lead was used to create stained glass. In the European and Arabian traditions of alchemy, lead (symbol ♄ in the European tradition) was considered an impure base metal which, by the separation, purification and balancing of its constituent essences, could be transformed to pure and incorruptible gold. During this period, lead was used increasingly for adulterating wine. The use of such wine in Christian rites was forbidden by a papal bull in 1498, but it continued to be imbibed and resulted in mass poisonings up to the late 18th century. Lead was a key material in parts of the printing press, which was invented around 1440; lead dust was commonly inhaled by print workers, causing lead poisoning. Firearms were invented at around the same time, and lead, despite being more expensive than iron, became the chief material for making bullets. It was less damaging to iron gun barrels, had a higher density (which allowed for better retention of velocity), and its lower melting point made the production of bullets easier as they could be made using a wood fire. 
Lead, in the form of Venetian ceruse, was extensively used in cosmetics by Western European aristocracy as whitened faces were regarded as a sign of modesty. This practice later expanded to white wigs and eyeliners, and only faded out with the French Revolution in the late 18th century. A similar fashion appeared in Japan in the 18th century with the emergence of the geishas, a practice that continued long into the 20th century. The white faces of women "came to represent their feminine virtue as Japanese women", with lead commonly used in the whitener. In the New World, lead was produced soon after the arrival of European settlers. The earliest recorded lead production dates to 1621 in the English Colony of Virginia, fourteen years after its foundation. In Australia, the first mine opened by colonists on the continent was a lead mine, in 1841. In Africa, lead mining and smelting were known in the Benue Trough and the lower Congo Basin, where lead was used for trade with Europeans, and as a currency by the 17th century, well before the scramble for Africa. In the second half of the 18th century, Britain, and later continental Europe and the United States, experienced the Industrial Revolution. This was the first time during which lead production rates exceeded those of Rome. Britain was the leading producer, losing this status by the mid-19th century with the depletion of its mines and the development of lead mining in Germany, Spain, and the United States. By 1900, the United States was the leader in global lead production, and other non-European nations—Canada, Mexico, and Australia—had begun significant production; production outside Europe exceeded that within. A great share of the demand for lead came from plumbing and painting—lead paints were in regular use. At this time, more (working class) people were exposed to the metal and lead poisoning cases escalated. This led to research into the effects of lead intake. Lead was proven to be more dangerous in its fume form than as a solid metal. Lead poisoning and gout were linked; British physician Alfred Baring Garrod noted a third of his gout patients were plumbers and painters. The effects of chronic ingestion of lead, including mental disorders, were also studied in the 19th century. The first laws aimed at decreasing lead poisoning in factories were enacted during the 1870s and 1880s in the United Kingdom. Further evidence of the threat that lead posed to humans was discovered in the late 19th and early 20th centuries. Mechanisms of harm were better understood, lead blindness was documented, and the element was phased out of public use in the United States and Europe. The United Kingdom introduced mandatory factory inspections in 1878 and appointed the first Medical Inspector of Factories in 1898; as a result, a 25-fold decrease in lead poisoning incidents from 1900 to 1944 was reported. The last major human exposure to lead was the addition of tetraethyllead to gasoline as an antiknock agent, a practice that originated in the United States in 1921. It was phased out in the United States and the European Union by 2000. Most European countries banned lead paint—commonly used because of its opacity and water resistance—for interiors by 1930. In the 1970s, the United States and Western European countries introduced legislation to reduce lead air pollution. 
The impact was significant: while a study conducted by the Centers for Disease Control and Prevention in the United States in 1976–1980 showed that 77.8% of the population had elevated blood lead levels, in 1991–1994, a study by the same institute showed the share of people with such high levels dropped to 2.2%. The main product made of lead by the end of the 20th century was the lead–acid battery, which posed no direct threat to humans. From 1960 to 1990, lead output in the Western Bloc grew by a third. The share of the world's lead production by the Eastern Bloc increased from 10% to 30%, from 1950 to 1990, with the Soviet Union being the world's largest producer during the mid-1970s and the 1980s, and China starting major lead production in the late 20th century. Unlike the European communist countries, China was largely unindustrialized by the mid-20th century; in 2004, China surpassed Australia as the largest producer of lead. As was the case during European industrialization, lead has had a negative effect on health in China. Production of lead is increasing worldwide due to its use in lead–acid batteries. There are two major categories of production: primary from mined ores, and secondary from scrap. In 2014, 4.58 million metric tons came from primary production and 5.64 million from secondary production. The top three producers of mined lead concentrate in that year were China, Australia, and the United States. The top three producers of refined lead were China, the United States, and South Korea. According to the International Resource Panel's Metal Stocks in Society report of 2010, the total amount of lead in use, stockpiled, discarded, or dissipated into the environment, on a global basis, is 8 kg per capita. Much of this is in more developed countries (20–150 kg per capita) rather than less developed ones (1–4 kg per capita). The primary and secondary lead production processes are similar. Some primary production plants now supplement their operations with scrap lead, and this trend is likely to increase in the future. Given adequate techniques, lead obtained via secondary processes is indistinguishable from lead obtained via primary processes. Scrap lead from the building trade is usually fairly clean and is re-melted without the need for smelting, though refining is sometimes needed. Secondary lead production is therefore cheaper, in terms of energy requirements, than is primary production, often by 50% or more. Most lead ores contain a low percentage of lead (rich ores have a typical content of 3–8%) which must be concentrated for extraction. During initial processing, ores typically undergo crushing, dense-medium separation, grinding, froth flotation, and drying. The resulting concentrate, which has a lead content of 30–80% by mass (regularly 50–60%), is then turned into (impure) lead metal. There are two main ways of doing this: a two-stage process involving roasting followed by blast furnace extraction, carried out in separate vessels; or a direct process in which the extraction of the concentrate occurs in a single vessel. The latter has become the most common route, though the former is still significant. First, the sulfide concentrate is roasted in air to oxidize the lead sulfide: 2PbS + 3O₂ → 2PbO + 2SO₂. As the original concentrate was not pure lead sulfide, roasting yields not only the desired lead(II) oxide, but a mixture of oxides, sulfates, and silicates of lead and of the other metals contained in the ore. 
This impure lead oxide is reduced in a coke-fired blast furnace to the (again, impure) metal: PbO + C → Pb + CO and PbO + CO → Pb + CO₂. Impurities are mostly arsenic, antimony, bismuth, zinc, copper, silver, and gold. The melt is treated in a reverberatory furnace with air, steam, and sulfur, which oxidizes the impurities except for silver, gold, and bismuth. Oxidized contaminants float to the top of the melt and are skimmed off. Metallic silver and gold are removed and recovered economically by means of the Parkes process, in which zinc is added to lead. Zinc, which is immiscible in lead, dissolves the silver and gold. The zinc solution can be separated from the lead, and the silver and gold retrieved. De-silvered lead is freed of bismuth by the Betterton–Kroll process, treating it with metallic calcium and magnesium. The resulting bismuth dross can be skimmed off. Very pure lead can be obtained by processing smelted lead electrolytically using the Betts process. Anodes of impure lead and cathodes of pure lead are placed in an electrolyte of lead fluorosilicate (PbSiF₆). Once electrical potential is applied, impure lead at the anode dissolves and plates onto the cathode, leaving the majority of the impurities in solution. This is a high-cost process and thus mostly reserved for refining bullion containing high percentages of impurities. In the direct smelting process, lead bullion and slag are obtained directly from lead concentrates. The lead sulfide concentrate is melted in a furnace and oxidized, forming lead monoxide. Carbon (as coke or coal gas) is added to the molten charge along with fluxing agents. The lead monoxide is thereby reduced to metallic lead, in the midst of a slag rich in lead monoxide. As much as 80% of the lead in very high-content initial concentrates can be obtained as bullion; the remaining 20% forms a slag rich in lead monoxide. For a low-grade feed, all of the lead can be oxidized to a high-lead slag. Metallic lead is further obtained from the high-lead (25–40%) slags via submerged fuel combustion or injection, reduction assisted by an electric furnace, or a combination of both. Research on a cleaner, less energy-intensive lead extraction process continues; a major drawback is that either too much lead is lost as waste, or the alternatives result in a high sulfur content in the resulting lead metal. Hydrometallurgical extraction, in which anodes of impure lead are immersed into an electrolyte and pure lead is deposited onto a cathode, is a technique that may have potential. Smelting, which is an essential part of the primary production, is often skipped during secondary production. It is only performed when metallic lead has undergone significant oxidation. The process is similar to that of primary production in either a blast furnace or a rotary furnace, with the essential difference being the greater variability of yields: blast furnaces produce hard lead (10% antimony) while reverberatory and rotary kiln furnaces produce semisoft lead (3–4% antimony). The Isasmelt process is a more recent method that may act as an extension to primary production; battery paste from spent lead–acid batteries has sulfur removed by treating it with alkali, and is then treated in a coal-fueled furnace in the presence of oxygen, which yields impure lead, with antimony the most common impurity. Refining of secondary lead is similar to that of primary lead; some refining processes may be skipped depending on the material recycled and its potential contamination. 
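As a rough mass balance for the two-stage route described above (roasting to the oxide, then reduction to the metal), the following Python sketch estimates the metal obtainable from one tonne of flotation concentrate; the 60% lead content is one of the typical grades quoted above, and the assumption of complete recovery at every stage is an idealization.

# Idealized mass balance for the two-stage process:
#   roasting:   2 PbS + 3 O2 -> 2 PbO + 2 SO2
#   reduction:  PbO + C -> Pb + CO
# Assumes every lead atom in the concentrate ends up as metal (no losses).
ATOMIC_MASS_PB = 207.2   # g/mol
ATOMIC_MASS_S = 32.07    # g/mol
ATOMIC_MASS_O = 16.00    # g/mol

concentrate_mass_kg = 1000.0   # one tonne of concentrate
lead_fraction = 0.60           # 60% lead by mass, a typical concentrate grade

lead_mass_kg = concentrate_mass_kg * lead_fraction
moles_pb = lead_mass_kg * 1000 / ATOMIC_MASS_PB

mass_pbs_kg = moles_pb * (ATOMIC_MASS_PB + ATOMIC_MASS_S) / 1000   # sulfide carrying the lead
mass_pbo_kg = moles_pb * (ATOMIC_MASS_PB + ATOMIC_MASS_O) / 1000   # oxide after roasting

print(f"PbS carrying the lead:  {mass_pbs_kg:7.1f} kg")
print(f"PbO after roasting:     {mass_pbo_kg:7.1f} kg")
print(f"Lead metal recoverable: {lead_mass_kg:7.1f} kg (ideal, no losses)")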
Of the sources of lead for recycling, lead–acid batteries are the most important; lead pipe, sheet, and cable sheathing are also significant. Contrary to popular belief, pencil leads in wooden pencils have never been made from lead. When the pencil originated as a wrapped graphite writing tool, the particular type of graphite used was named "plumbago" (literally, "act for lead" or "lead mockup"). Lead metal has several useful mechanical properties, including high density, low melting point, ductility, and relative inertness. Many metals are superior to lead in some of these aspects but are generally less common and more difficult to extract from parent ores. Lead's toxicity has led to its phasing out for some uses. Lead has been used for bullets since their invention in the Middle Ages. It is inexpensive; its low melting point means small arms ammunition and shotgun pellets can be cast with minimal technical equipment; and it is denser than other common metals, which allows for better retention of velocity. Concerns have been raised that lead bullets used for hunting can damage the environment. Lead's high density and resistance to corrosion have been exploited in a number of related applications. It is used as ballast in sailboat keels; its density allows it to take up a small volume and minimize water resistance, thus counterbalancing the heeling effect of wind on the sails. It is used in scuba diving weight belts to counteract the diver's buoyancy. In 1993, the base of the Leaning Tower of Pisa was stabilized with 600 tonnes of lead. Because of its corrosion resistance, lead is used as a protective sheath for underwater cables. Lead has many uses in the construction industry; lead sheets are used as architectural metals in roofing material, cladding, flashing, gutters and gutter joints, and on roof parapets. Detailed lead moldings are used as decorative motifs to fix lead sheet. Lead is still used in statues and sculptures, including for armatures. In the past it was often used to balance the wheels of cars; for environmental reasons this use is being phased out in favor of other materials. Lead is added to copper alloys, such as brass and bronze, to improve machinability and for its lubricating qualities. Being practically insoluble in copper, the lead forms solid globules in imperfections throughout the alloy, such as grain boundaries. In low concentrations, as well as acting as a lubricant, the globules hinder the formation of swarf as the alloy is worked, thereby improving machinability. Copper alloys with larger concentrations of lead are used in bearings. The lead provides lubrication, and the copper provides the load-bearing support. Lead's high density, atomic number, and formability form the basis for use of lead as a barrier that absorbs sound, vibration, and radiation. Lead has no natural resonance frequencies; as a result, sheet-lead is used as a sound deadening layer in the walls, floors, and ceilings of sound studios. Organ pipes are often made from a lead alloy, mixed with various amounts of tin to control the tone of each pipe. Lead is an established shielding material against radiation in nuclear science and in X-ray rooms due to its density and high attenuation coefficient. Molten lead has been used as a coolant for lead-cooled fast reactors. The largest use of lead in the early 21st century is in lead–acid batteries. The reactions in the battery between lead, lead dioxide, and sulfuric acid provide a reliable source of voltage. 
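To give a sense of the electrochemistry behind that last point, here is a minimal Python sketch applying Faraday's law to the lead electrode of a lead–acid cell; the two-electron discharge reaction is standard lead–acid chemistry, and the figures are theoretical upper bounds rather than what practical batteries deliver.

# Theoretical charge available from the lead of the negative plate.
# Overall discharge reaction: Pb + PbO2 + 2 H2SO4 -> 2 PbSO4 + 2 H2O,
# with two electrons transferred per lead atom oxidized.
FARADAY = 96485.0        # coulombs per mole of electrons
MOLAR_MASS_PB = 207.2    # g/mol
ELECTRONS_PER_PB = 2

charge_per_gram = ELECTRONS_PER_PB * FARADAY / MOLAR_MASS_PB   # C/g
mAh_per_gram = charge_per_gram / 3.6                            # 1 mAh = 3.6 C
print(f"Theoretical capacity of lead: {charge_per_gram:.0f} C/g (about {mAh_per_gram:.0f} mAh/g)")

# At a nominal 2.0 V cell voltage this bounds the energy per kilogram of lead;
# real cells store far less per kilogram once the acid, the lead dioxide plate,
# the grids, and incomplete utilization are counted.
energy_wh_per_kg = mAh_per_gram * 2.0   # mAh/g x V = mWh/g = Wh/kg
print(f"Idealized energy per kg of negative-plate lead at 2.0 V: {energy_wh_per_kg:.0f} Wh/kg")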
The lead in batteries undergoes no direct contact with humans, so there are fewer toxicity concerns. Supercapacitors incorporating lead–acid batteries have been installed in kilowatt and megawatt scale applications in Australia, Japan, and the United States in frequency regulation, solar smoothing and shifting, wind smoothing, and other applications. These batteries have lower energy density and charge-discharge efficiency than lithium-ion batteries, but are significantly cheaper. Lead is used in high voltage power cables as sheathing material to prevent water diffusion into insulation; this use is decreasing as lead is being phased out. Its use in solder for electronics is also being phased out by some countries to reduce the amount of environmentally hazardous waste. Lead is one of three metals used in the Oddy test for museum materials, helping detect organic acids, aldehydes, and acidic gases. In addition to being the main application for lead metal, lead-acid batteries are also the main consumer of lead compounds. The energy storage/release reaction used in these devices involves lead sulfate and lead dioxide: Pb + PbO₂ + 2 H₂SO₄ ⇌ 2 PbSO₄ + 2 H₂O. Other applications of lead compounds are very specialized and often fading. Lead-based coloring agents are used in ceramic glazes and glass, especially for red and yellow shades. While lead paints have been phased out in Europe and North America, they remain in use in less developed countries such as China or India. Lead tetraacetate and lead dioxide are used as oxidizing agents in organic chemistry. Lead is frequently used in the polyvinyl chloride coating of electrical cords. It can be used to treat candle wicks to ensure a longer, more even burn. Because of its toxicity, European and North American manufacturers use alternatives such as zinc. Lead glass is composed of 12–28% lead oxide, changing its optical characteristics and reducing the transmission of ionizing radiation. Lead-based semiconductors such as lead telluride and lead selenide are used in photovoltaic cells and infrared detectors. Lead has no confirmed biological role. Its prevalence in the human body—at an adult average of 120 mg—is nevertheless exceeded only by zinc (2500 mg) and iron (4000 mg) among the heavy metals. Lead salts are very efficiently absorbed by the body. A small amount of lead (1%) is stored in bones; the rest is excreted in urine and feces within a few weeks of exposure. Only about a third of lead is excreted by a child. Continual exposure may result in the bioaccumulation of lead. Lead is a highly poisonous metal (whether inhaled or swallowed), affecting almost every organ and system in the human body. At airborne levels of 100 mg/m³, it is immediately dangerous to life and health. Most ingested lead is absorbed into the bloodstream. The primary cause of its toxicity is its predilection for interfering with the proper functioning of enzymes. It does so by binding to the sulfhydryl groups found on many enzymes, or mimicking and displacing other metals which act as cofactors in many enzymatic reactions. Among the essential metals that lead interacts with are calcium, iron, and zinc. High levels of calcium and iron tend to provide some protection from lead poisoning; low levels cause increased susceptibility. Lead can cause severe damage to the brain and kidneys and, ultimately, death. By mimicking calcium, lead can cross the blood–brain barrier. It degrades the myelin sheaths of neurons, reduces their numbers, interferes with neurotransmission routes, and decreases neuronal growth.
In the human body, lead inhibits porphobilinogen synthase and ferrochelatase, preventing both porphobilinogen formation and the incorporation of iron into protoporphyrin IX, the final step in heme synthesis. This causes ineffective heme synthesis and microcytic anemia. Symptoms of lead poisoning include nephropathy, colic-like abdominal pains, and possibly weakness in the fingers, wrists, or ankles. Small blood pressure increases, particularly in middle-aged and older people, may be apparent and can cause anemia. Several studies, mostly cross-sectional, found an association between increased lead exposure and decreased heart rate variability. In pregnant women, high levels of exposure to lead may cause miscarriage. Chronic, high-level exposure has been shown to reduce fertility in males. In a child's developing brain, lead interferes with synapse formation in the cerebral cortex, neurochemical development (including that of neurotransmitters), and the organization of ion channels. Early childhood exposure has been linked with an increased risk of sleep disturbances and excessive daytime sleepiness in later childhood. High blood levels are associated with delayed puberty in girls. The rise and fall in exposure to airborne lead from the combustion of tetraethyl lead in gasoline during the 20th century has been linked with historical increases and decreases in crime levels, a hypothesis which is not universally accepted. Lead exposure is a global issue since lead mining and smelting, and battery manufacturing/disposal/recycling, are common in many countries. Lead enters the body via inhalation, ingestion, or skin absorption. Almost all inhaled lead is absorbed into the body; for ingestion, the rate is 20–70%, with children absorbing a higher percentage than adults. Poisoning typically results from ingestion of food or water contaminated with lead, and less commonly after accidental ingestion of contaminated soil, dust, or lead-based paint. Seawater products can contain lead if affected by nearby industrial waters. Fruit and vegetables can be contaminated by high levels of lead in the soils they were grown in. Soil can be contaminated through particulate accumulation from lead in pipes, lead paint, and residual emissions from leaded gasoline. The use of lead for water pipes is problematic in areas with soft or acidic water. Hard water forms insoluble layers in the pipes whereas soft and acidic water dissolves the lead pipes. Dissolved carbon dioxide in the carried water may result in the formation of soluble lead bicarbonate; oxygenated water may similarly dissolve lead as lead(II) hydroxide. Drinking such water, over time, can cause health problems due to the toxicity of the dissolved lead. The harder the water the more calcium bicarbonate and sulfate it will contain, and the more the inside of the pipes will be coated with a protective layer of lead carbonate or lead sulfate. Ingestion of applied lead-based paint is the major source of exposure for children: a direct source is chewing on old painted window sills. Alternatively, as the applied dry paint deteriorates, it peels, is pulverized into dust and then enters the body through hand-to-mouth contact or contaminated food, water, or alcohol. Ingesting certain home remedies may result in exposure to lead or its compounds. Inhalation is the second major exposure pathway, affecting smokers and especially workers in lead-related occupations. Cigarette smoke contains, among other toxic substances, radioactive lead-210. 
Skin exposure may be significant for people working with organic lead compounds. The rate of skin absorption is lower for inorganic lead. Treatment for lead poisoning normally involves the administration of dimercaprol and succimer. Acute cases may require the use of disodium calcium edetate, the calcium chelate of the disodium salt of ethylenediaminetetraacetic acid (EDTA). It has a greater affinity for lead than calcium, with the result that lead chelate is formed by exchange and excreted in the urine, leaving behind harmless calcium. The extraction, production, use, and disposal of lead and its products have caused significant contamination of the Earth's soils and waters. Atmospheric emissions of lead were at their peak during the Industrial Revolution, and the leaded gasoline period in the second half of the twentieth century. Lead releases originate from natural sources (i.e., concentration of the naturally occurring lead), industrial production, incineration and recycling, and mobilization of previously buried lead. Elevated concentrations of lead persist in soils and sediments in post-industrial and urban areas; industrial emissions, including those arising from coal burning, continue in many parts of the world, particularly in the developing countries. Lead can accumulate in soils, especially those with a high organic content, where it remains for hundreds to thousands of years. Environmental lead can compete with other metals found in and on plant surfaces, potentially inhibiting photosynthesis, and at high enough concentrations it can negatively affect plant growth and survival. Contamination of soils and plants can allow lead to ascend the food chain, affecting microorganisms and animals. In animals, lead exhibits toxicity in many organs, damaging the nervous, renal, reproductive, hematopoietic, and cardiovascular systems after ingestion, inhalation, or skin absorption. Fish take up lead from both water and sediment; bioaccumulation in the food chain poses a hazard to fish, birds, and sea mammals. Anthropogenic lead includes lead from shot and sinkers; these are among the most potent sources of lead contamination, along with lead production sites. Lead was banned for shot and sinkers in the United States in 2017, although that ban was only effective for a month, and a similar ban is being considered in the European Union. Analytical methods for the determination of lead in the environment include spectrophotometry, X-ray fluorescence, atomic spectroscopy and electrochemical methods. A specific ion-selective electrode has been developed based on the ionophore S,S'-methylenebis(N,N-diisobutyldithiocarbamate). An important biomarker assay for lead poisoning is δ-aminolevulinic acid levels in plasma, serum, and urine. By the mid-1980s, there was a significant decline in the use of lead in industry. In the United States, environmental regulations reduced or eliminated the use of lead in non-battery products, including gasoline, paints, solders, and water systems. Particulate control devices were installed in coal-fired power plants to capture lead emissions. Lead use was further curtailed by the European Union's 2003 Restriction of Hazardous Substances Directive. A large drop in lead deposition occurred in the Netherlands after the 1993 national ban on use of lead shot for hunting and sport shooting: from 230 tonnes in 1990 to 47.5 tonnes in 1995.
In the United States, the permissible exposure limit for lead in the workplace, comprising metallic lead, inorganic lead compounds, and lead soaps, was set at 50 μg/m³ over an 8-hour workday, and the blood lead level limit at 5 μg per 100 g of blood in 2012. Lead may still be found in harmful quantities in stoneware, vinyl (such as that used for tubing and the insulation of electrical cords), and Chinese brass. Old houses may still contain lead paint. White lead paint has been withdrawn from sale in industrialized countries, but specialized uses of other pigments such as yellow lead chromate remain. Stripping old paint by sanding produces dust which can be inhaled. Lead abatement programs have been mandated by some authorities in properties where young children live. Lead waste, depending on the jurisdiction and the nature of the waste, may be treated as household waste (in order to facilitate lead abatement activities), or potentially hazardous waste requiring specialized treatment or storage. Lead is released into the environment at shooting ranges, and a number of lead management practices, such as stewardship of the environment and reduced public scrutiny, have been developed to counter the lead contamination. Lead migration can be enhanced in acidic soils; to counter that, it is advised that soils be treated with lime to neutralize them and prevent leaching of lead. Research has been conducted on how to remove lead from biosystems by biological means: Fish bones are being researched for their ability to bioremediate lead in contaminated soil. The fungus "Aspergillus versicolor" is effective at removing lead ions. Several bacteria have been researched for their ability to remove lead from the environment, including the sulfate-reducing bacteria "Desulfovibrio" and "Desulfotomaculum", both of which are highly effective in aqueous solutions. Logarithm In mathematics, the logarithm is the inverse function to exponentiation. That means the logarithm of a given number x is the exponent to which another fixed number, the "base" b, must be raised, to produce that number x. In the simplest case the logarithm counts repeated multiplication of the same factor; e.g., since 1000 = 10 × 10 × 10, the "logarithm to base 10" of 1000 is 3. The logarithm of x to "base" b is denoted as log_b(x) (or, without parentheses, as log_b x, or even without explicit base as log x, when no confusion is possible). More generally, exponentiation allows any positive real number to be raised to any real power, always producing a positive result, so the logarithm for any two positive real numbers b and x, where b is not equal to 1, is always a unique real number y. More explicitly, the defining relation between exponentiation and logarithm is: log_b(x) = y exactly if b^y = x. For example, log_2 64 = 6, as 2^6 = 64. The logarithm to base 10 (that is, b = 10) is called the common logarithm and has many applications in science and engineering. The natural logarithm has the number e (that is, b ≈ 2.718) as its base; its use is widespread in mathematics and physics, because of its simpler derivative. The binary logarithm uses base 2 (that is, b = 2) and is commonly used in computer science. Logarithms were introduced by John Napier in the early 17th century as a means to simplify calculations. They were rapidly adopted by navigators, scientists, engineers, and others to perform computations more easily, using slide rules and logarithm tables.
Tedious multi-digit multiplication steps can be replaced by table look-ups and simpler addition because of the fact—important in its own right—that the logarithm of a product is the sum of the logarithms of the factors: log_b(xy) = log_b(x) + log_b(y), provided that b, x and y are all positive and b ≠ 1. The present-day notion of logarithms comes from Leonhard Euler, who connected them to the exponential function in the 18th century. Logarithmic scales reduce wide-ranging quantities to tiny scopes. For example, the decibel (dB) is a unit used to express log-ratios, mostly for signal power and amplitude (of which sound pressure is a common example). In chemistry, pH is a logarithmic measure for the acidity of an aqueous solution. Logarithms are commonplace in scientific formulae, and in measurements of the complexity of algorithms and of geometric objects called fractals. They help describe frequency ratios of musical intervals, appear in formulas counting prime numbers or approximating factorials, inform some models in psychophysics, and can aid in forensic accounting. In the same way as the logarithm reverses exponentiation, the complex logarithm is the inverse function of the exponential function applied to complex numbers. The discrete logarithm is another variant; it has uses in public-key cryptography. Addition, multiplication, and exponentiation are three fundamental arithmetic operations. Addition, the simplest of these, can be undone by subtraction: adding, say, 2 to 3 gives 5. The process of adding 2 can be undone by subtracting 2: 5 − 2 = 3. Multiplication, the next-simplest operation, can be undone by division: doubling a number x, i.e., multiplying x by 2, gives 2x. To get back x, it is necessary to divide by 2. For example, 2 × 3 = 6, and the process of multiplying by 2 is undone by dividing by 2: 6 / 2 = 3. The idea and purpose of logarithms is also to undo a fundamental arithmetic operation, namely raising a number to a certain power, an operation also known as exponentiation. For example, raising 2 to the third power yields 8, because 8 is the product of three factors of 2: 2 × 2 × 2 = 8. The logarithm (with respect to base 2) of 8 is 3, reflecting the fact that 2 was raised to the "third" power to get 8. This subsection contains a short overview of the exponentiation operation, which is fundamental to understanding logarithms. Raising b to the n-th power, where n is a natural number, is done by multiplying n factors equal to b. The n-th power of b is written b^n, so that b^n = b × ⋯ × b with n factors. Exponentiation may be extended to b^y, where b is a positive number and the "exponent" y is any real number. For example, b^(−1) is the reciprocal of b, that is, 1/b. Raising "b" to the power 1/2 gives the square root of "b". More generally, raising "b" to a rational power "p"/"q", where "p" and "q" are integers, is given by the "q"-th root of "b"^"p". Finally, any irrational number (a real number which is not rational) "y" can be approximated to arbitrary precision by rational numbers. This can be used to compute the "y"-th power of "b": for example formula_9 and formula_10 is increasingly well approximated by formula_11. A more detailed explanation, as well as the formula b^(p/q) = (b^p)^(1/q), is contained in the article on exponentiation. The "logarithm" of a positive real number x with respect to base b is the exponent by which b must be raised to yield x. In other words, the logarithm of x to base b is the solution y to the equation b^y = x. The logarithm is denoted log_b(x) (pronounced as "the logarithm of x to base b" or "the base-b logarithm of x" or (most commonly) "the log, base b, of x").
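For reference, the defining relation and the product rule just described can be written out explicitly; the display below is only a restatement, in standard notation, of what the surrounding text already says.

```latex
\[ \log_b x = y \iff b^{y} = x \qquad (b > 0,\; b \neq 1,\; x > 0) \]
\[ \log_b (xy) = \log_b x + \log_b y \]
```

For instance, 2^3 = 8, so log_2 8 = 3, and log_2 (8 · 4) = log_2 8 + log_2 4 = 3 + 2 = 5.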
In the equation , the value is the answer to the question "To what power must be raised, in order to yield ?". Several important formulas, sometimes called "logarithmic identities" or "logarithmic laws", relate logarithms to one another. The logarithm of a product is the sum of the logarithms of the numbers being multiplied; the logarithm of the ratio of two numbers is the difference of the logarithms. The logarithm of the power of a number is "p" times the logarithm of the number itself; the logarithm of a root is the logarithm of the number divided by "p". The following table lists these identities with examples. Each of the identities can be derived after substitution of the logarithm definitions formula_15 or formula_16 in the left hand sides. The logarithm can be computed from the logarithms of and with respect to an arbitrary base "k" using the following formula: Starting from the defining identity we can apply to both sides of this equation, to get Solving for formula_20 yields: showing the conversion factor from given formula_22-values to their corresponding formula_23-values to be formula_24 Typical scientific calculators calculate the logarithms to bases 10 and . Logarithms with respect to any base can be determined using either of these two logarithms by the previous formula: Given a number and its logarithm to an unknown base , the base is given by: Among all choices for the base, three are particularly common. These are , (the irrational mathematical constant ≈ 2.71828), and (the binary logarithm). In mathematical analysis, the logarithm to base is widespread because of its particular analytical properties explained below. On the other hand, logarithms are easy to use for manual calculations in the decimal number system: Thus, is related to the number of decimal digits of a positive integer : the number of digits is the smallest integer strictly bigger than log"x". For example, is approximately 3.15. The next integer is 4, which is the number of digits of 1430. Both the natural logarithm and the logarithm to base two are used in information theory, corresponding to the use of nats or bits as the fundamental units of information, respectively. Binary logarithms are also used in computer science, where the binary system is ubiquitous, in music theory, where a pitch ratio of two (the octave) is ubiquitous and the cent is the binary logarithm (scaled by 1200) of the ratio between two adjacent equally-tempered pitches, and in photography to measure exposure values. The following table lists common notations for logarithms to these bases and the fields where they are used. Many disciplines write instead of , when the intended base can be determined from the context. The notation also occurs. The "ISO notation" column lists designations suggested by the International Organization for Standardization (ISO 31-11). Because the notation has been used for all three bases (or when the base is indeterminate or immaterial), the intended base must often be inferred based on context or discipline. In computer science and mathematics, log usually refers to and , respectively. In other contexts log often means . The history of logarithm in seventeenth century Europe is the discovery of a new function that extended the realm of analysis beyond the scope of algebraic methods. The method of logarithms was publicly propounded by John Napier in 1614, in a book titled "Mirifici Logarithmorum Canonis Descriptio" ("Description of the Wonderful Rule of Logarithms"). 
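The change-of-base rule and the link between log₁₀ and decimal digit counts mentioned above are easy to check numerically. The short Python sketch below is illustrative only; the helper name count_digits is invented for this example, and the floating-point caveat in the comment applies.

```python
import math

# Change of base: log_b(x) can be obtained from any available logarithm,
# e.g. the natural logarithm: log_b(x) = ln(x) / ln(b).
x, b = 100.0, 5.0
print(math.log(x) / math.log(b))   # ~2.8614
print(math.log(x, b))              # same value via the two-argument form

# The number of decimal digits of a positive integer N is the smallest
# integer strictly greater than log10(N), i.e. floor(log10(N)) + 1.
def count_digits(n: int) -> int:
    # Floating-point rounding can misbehave exactly at powers of ten;
    # len(str(n)) is the robust alternative for such edge cases.
    return math.floor(math.log10(n)) + 1

print(math.log10(1430))            # ~3.155
print(count_digits(1430))          # 4
```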
Prior to Napier's invention, there had been other techniques of similar scopes, such as the prosthaphaeresis or the use of tables of progressions, extensively developed by Jost Bürgi around 1600. The common logarithm of a number is the index of that power of ten which equals the number. Speaking of a number as requiring so many figures is a rough allusion to common logarithm, and was referred to by Archimedes as the "order of a number". The first real logarithms were heuristic methods to turn multiplication into addition, thus facilitating rapid computation. Some of these methods used tables derived from trigonometric identities. Such methods are called prosthaphaeresis. Invention of the function now known as natural logarithm began as an attempt to perform a quadrature of a rectangular hyperbola by Grégoire de Saint-Vincent, a Belgian Jesuit residing in Prague. Archimedes had written The Quadrature of the Parabola in the third century BC, but a quadrature for the hyperbola eluded all efforts until Saint-Vincent published his results in 1647. The relation that the logarithm provides between a geometric progression in its argument and an arithmetic progression of values prompted A. A. de Sarasa to make the connection of Saint-Vincent's quadrature and the tradition of logarithms in prosthaphaeresis, leading to the term "hyperbolic logarithm", a synonym for natural logarithm. Soon the new function was appreciated by Christiaan Huygens and James Gregory. The notation Log y was adopted by Leibniz in 1675, and the next year he connected it to the integral ∫ dy/y. By simplifying difficult calculations, logarithms contributed to the advance of science, especially astronomy. They were critical to advances in surveying, celestial navigation, and other domains. Pierre-Simon Laplace called logarithms an "admirable artifice which, by reducing to a few days the labour of many months, doubles the life of the astronomer". A key tool that enabled the practical use of logarithms before calculators and computers was the "table of logarithms". The first such table was compiled by Henry Briggs in 1617, immediately after Napier's invention. Subsequently, tables with increasing scope were written. These tables listed the values of log x for any number x in a certain range, at a certain precision, for a certain base (usually base 10). For example, Briggs' first table contained the common logarithms of all integers in the range 1–1000, with a precision of 14 digits. As the function x ↦ b^x is the inverse function of log_b x, it has been called the antilogarithm. The product and quotient of two positive numbers "c" and "d" were routinely calculated as the sum and difference of their logarithms. The product "cd" or quotient "c"/"d" came from looking up the antilogarithm of the sum or difference, also via the same table: cd = antilog(log c + log d) and c/d = antilog(log c − log d). For manual calculations that demand any appreciable precision, performing the lookups of the two logarithms, calculating their sum or difference, and looking up the antilogarithm is much faster than performing the multiplication by earlier methods such as prosthaphaeresis, which relies on trigonometric identities. Calculations of powers and roots are reduced to multiplications or divisions and look-ups by c^d = antilog(d · log c) and c^(1/d) = antilog((log c) / d). Many logarithm tables give logarithms by separately providing the characteristic and mantissa of log x, that is to say, the integer part and the fractional part of log x. The characteristic of 10 · x is one plus the characteristic of x, and their significands are the same.
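The characteristic-and-mantissa bookkeeping described above can be reproduced in a few lines; this Python fragment is a minimal illustration of the idea (the function name is invented here), not a reconstruction of any historical table.

```python
import math

def characteristic_and_mantissa(x: float) -> tuple[int, float]:
    """Split log10(x) into its integer part (characteristic) and fractional part (mantissa)."""
    lg = math.log10(x)
    c = math.floor(lg)
    return c, lg - c

# 3542 and 3.542 share the same mantissa; only the characteristic differs by 3.
print(characteristic_and_mantissa(3.542))   # (0, 0.5493...)
print(characteristic_and_mantissa(3542))    # (3, 0.5493...)
```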
This extends the scope of logarithm tables: given a table listing for all integers ranging from 1 to 1000, the logarithm of 3542 is approximated by Another critical application was the slide rule, a pair of logarithmically divided scales used for calculation. The non-sliding logarithmic scale, Gunter's rule, was invented shortly after Napier's invention. William Oughtred enhanced it to create the slide rule—a pair of logarithmic scales movable with respect to each other. Numbers are placed on sliding scales at distances proportional to the differences between their logarithms. Sliding the upper scale appropriately amounts to mechanically adding logarithms, as illustrated here: For example, adding the distance from 1 to 2 on the lower scale to the distance from 1 to 3 on the upper scale yields a product of 6, which is read off at the lower part. The slide rule was an essential calculating tool for engineers and scientists until the 1970s, because it allows, at the expense of precision, much faster computation than techniques based on tables. A deeper study of logarithms requires the concept of a "function". A function is a rule that, given one number, produces another number. An example is the function producing the power of from any real number , where the base is a fixed number. This function is written: formula_36 To justify the definition of logarithms, it is necessary to show that the equation has a solution and that this solution is unique, provided that is positive and that is positive and unequal to 1. A proof of that fact requires the intermediate value theorem from elementary calculus. This theorem states that a continuous function that produces two values "m" and "n" also produces any value that lies between "m" and "n". A function is "continuous" if it does not "jump", that is, if its graph can be drawn without lifting the pen. This property can be shown to hold for the function . Because "f" takes arbitrarily large and arbitrarily small positive values, any number lies between and for suitable and . Hence, the intermediate value theorem ensures that the equation has a solution. Moreover, there is only one solution to this equation, because the function "f" is strictly increasing (for ), or strictly decreasing (for ). The unique solution is the logarithm of to base , . The function that assigns to its logarithm is called "logarithm function" or "logarithmic function" (or just "logarithm"). The function is essentially characterized by the above product formula More precisely, the logarithm to any base is the only increasing function "f" from the positive reals to the reals satisfying and The formula for the logarithm of a power says in particular that for any number , In prose, taking the power of and then the logarithm gives back . Conversely, given a positive number , the formula says that first taking the logarithm and then exponentiating gives back . Thus, the two possible ways of combining (or composing) logarithms and exponentiation give back the original number. Therefore, the logarithm to base is the "inverse function" of . Inverse functions are closely related to the original functions. Their graphs correspond to each other upon exchanging the - and the -coordinates (or upon reflection at the diagonal line = ), as shown at the right: a point on the graph of "f" yields a point on the graph of the logarithm and vice versa. As a consequence, log("x") diverges to infinity (gets bigger than any given number) if grows to infinity, provided that is greater than one. 
In that case, is an increasing function. For , tends to minus infinity instead. When approaches zero, goes to minus infinity for (plus infinity for , respectively). Analytic properties of functions pass to their inverses. Thus, as is a continuous and differentiable function, so is . Roughly, a continuous function is differentiable if its graph has no sharp "corners". Moreover, as the derivative of evaluates to by the properties of the exponential function, the chain rule implies that the derivative of is given by That is, the slope of the tangent touching the graph of the logarithm at the point equals . The derivative of ln is 1/"x"; this implies that ln is the unique antiderivative of that has the value 0 for . It is this very simple formula that motivated to qualify as "natural" the natural logarithm; this is also one of the main reasons of the importance of the constant . The derivative with a generalised functional argument is The quotient at the right hand side is called the logarithmic derivative of "f". Computing by means of the derivative of is known as logarithmic differentiation. The antiderivative of the natural logarithm is: Related formulas, such as antiderivatives of logarithms to other bases can be derived from this equation using the change of bases. The natural logarithm of "t" equals the integral of 1/"x" "dx" from 1 to "t": In other words, equals the area between the axis and the graph of the function , ranging from to (figure at the right). This is a consequence of the fundamental theorem of calculus and the fact that the derivative of is . The right hand side of this equation can serve as a definition of the natural logarithm. Product and power logarithm formulas can be derived from this definition. For example, the product formula is deduced as: The equality (1) splits the integral into two parts, while the equality (2) is a change of variable (). In the illustration below, the splitting corresponds to dividing the area into the yellow and blue parts. Rescaling the left hand blue area vertically by the factor "t" and shrinking it by the same factor horizontally does not change its size. Moving it appropriately, the area fits the graph of the function again. Therefore, the left hand blue area, which is the integral of from "t" to "tu" is the same as the integral from 1 to "u". This justifies the equality (2) with a more geometric proof. The power formula may be derived in a similar way: The second equality uses a change of variables (integration by substitution), . The sum over the reciprocals of natural numbers, is called the harmonic series. It is closely tied to the natural logarithm: as "n" tends to infinity, the difference, converges (i.e., gets arbitrarily close) to a number known as the Euler–Mascheroni constant . This relation aids in analyzing the performance of algorithms such as quicksort. There are also some other integral representations of the logarithm that are useful in some situations: The first identity can be verified by showing that it has the same value at , and the same derivative. The second identity can be proven by writing and then inserting the Laplace transform of (and ). Real numbers that are not algebraic are called transcendental; for example, and "e" are such numbers, but formula_53 is not. Almost all real numbers are transcendental. The logarithm is an example of a transcendental function. The Gelfond–Schneider theorem asserts that logarithms usually take transcendental, i.e., "difficult" values. 
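The integral characterization of the natural logarithm discussed above, ln t as the area under 1/x between 1 and t, can be checked numerically. The midpoint-rule sketch below is purely illustrative (the step count is an arbitrary choice) and says nothing about how logarithms are computed in practice.

```python
import math

def ln_by_integration(t: float, steps: int = 100_000) -> float:
    """Approximate ln(t), for t > 1, as the area under 1/x between 1 and t (midpoint rule)."""
    h = (t - 1.0) / steps
    return sum(h / (1.0 + (i + 0.5) * h) for i in range(steps))

print(ln_by_integration(5.0))   # ~1.60944
print(math.log(5.0))            # reference value from the standard library
```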
Logarithms are easy to compute in some cases, such as . In general, logarithms can be calculated using power series or the arithmetic–geometric mean, or be retrieved from a precalculated logarithm table that provides a fixed precision. Newton's method, an iterative method to solve equations approximately, can also be used to calculate the logarithm, because its inverse function, the exponential function, can be computed efficiently. Using look-up tables, CORDIC-like methods can be used to compute logarithms if the only available operations are addition and bit shifts. Moreover, the binary logarithm algorithm calculates recursively based on repeated squarings of , taking advantage of the relation For any real number that satisfies , the following formula holds: This is a shorthand for saying that can be approximated to a more and more accurate value by the following expressions: For example, with the third approximation yields 0.4167, which is about 0.011 greater than . This series approximates with arbitrary precision, provided the number of summands is large enough. In elementary calculus, is therefore the limit of this series. It is the Taylor series of the natural logarithm at . The Taylor series of provides a particularly useful approximation to when is small, , since then For example, with the first-order approximation gives , which is less than 5% off the correct value 0.0953. Another series is based on the area hyperbolic tangent function: for any real number . Using sigma notation, this is also written as This series can be derived from the above Taylor series. It converges more quickly than the Taylor series, especially if is close to 1. For example, for , the first three terms of the second series approximate with an error of about . The quick convergence for close to 1 can be taken advantage of in the following way: given a low-accuracy approximation and putting the logarithm of is: The better the initial approximation is, the closer is to 1, so its logarithm can be calculated efficiently. can be calculated using the exponential series, which converges quickly provided is not too large. Calculating the logarithm of larger can be reduced to smaller values of by writing , so that . A closely related method can be used to compute the logarithm of integers. From the above series, it follows that: If the logarithm of a large integer is known, then this series yields a fast converging series for . The arithmetic–geometric mean yields high precision approximations of the natural logarithm. Sasaki and Kanada showed in 1982 that it was particularly fast for precisions between 400 and 1000 decimal places, while Taylor series methods were typically faster when less precision was needed. In their work is approximated to a precision of (or "p" precise bits) by the following formula (due to Carl Friedrich Gauss): Here denotes the arithmetic–geometric mean of and . It is obtained by repeatedly calculating the average (arithmetic mean) and formula_64 (geometric mean) of and then let those two numbers become the next and . The two numbers quickly converge to a common limit which is the value of . "m" is chosen such that to ensure the required precision. A larger "m" makes the calculation take more steps (the initial x and y are farther apart so it takes more steps to converge) but gives more precision. The constants and can be calculated with quickly converging series. 
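The area-hyperbolic-tangent series mentioned above, ln z = 2·((z−1)/(z+1) + (1/3)·((z−1)/(z+1))³ + ...), converges for every positive z, and especially quickly when z is close to 1. A minimal Python sketch (the number of terms is an arbitrary choice for illustration):

```python
import math

def ln_series(z: float, terms: int = 50) -> float:
    """Natural logarithm via the area hyperbolic tangent series; fast when z is near 1."""
    w = (z - 1.0) / (z + 1.0)
    return 2.0 * sum(w ** (2 * k + 1) / (2 * k + 1) for k in range(terms))

print(ln_series(1.5))   # ~0.405465
print(math.log(1.5))    # reference value
```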
While at Los Alamos National Laboratory working on the Manhattan Project, Richard Feynman developed a bit processing algorithm that is similar to long division and was later used in the Connection Machine. The algorithm uses the fact that every real number formula_66 is uniquely representable as a product of distinct factors of the form formula_67. The algorithm sequentially builds that product formula_68: if formula_69, then it changes formula_68 to formula_71. It then increases formula_72 by one regardless. The algorithm stops when formula_72 is large enough to give the desired accuracy. Because formula_74 is the sum of the terms of the form formula_75 corresponding to those formula_72 for which the factor formula_77 was included in the product formula_68, formula_74 may be computed by simple addition, using a table of formula_75 for all formula_72. Any base may be used for the logarithm table. Logarithms have many applications inside and outside mathematics. Some of these occurrences are related to the notion of scale invariance. For example, each chamber of the shell of a nautilus is an approximate copy of the next one, scaled by a constant factor. This gives rise to a logarithmic spiral. Benford's law on the distribution of leading digits can also be explained by scale invariance. Logarithms are also linked to self-similarity. For example, logarithms appear in the analysis of algorithms that solve a problem by dividing it into two similar smaller problems and patching their solutions. The dimensions of self-similar geometric shapes, that is, shapes whose parts resemble the overall picture, are also based on logarithms. Logarithmic scales are useful for quantifying the relative change of a value as opposed to its absolute difference. Moreover, because the logarithmic function grows very slowly for large x, logarithmic scales are used to compress large-scale scientific data. Logarithms also occur in numerous scientific formulas, such as the Tsiolkovsky rocket equation, the Fenske equation, or the Nernst equation. Scientific quantities are often expressed as logarithms of other quantities, using a "logarithmic scale". For example, the decibel is a unit of measurement associated with logarithmic-scale quantities. It is based on the common logarithm of ratios—10 times the common logarithm of a power ratio or 20 times the common logarithm of a voltage ratio. It is used to quantify the loss of voltage levels in transmitting electrical signals, to describe power levels of sounds in acoustics, and the absorbance of light in the fields of spectrometry and optics. The signal-to-noise ratio describing the amount of unwanted noise in relation to a (meaningful) signal is also measured in decibels. In a similar vein, the peak signal-to-noise ratio is commonly used to assess the quality of sound and image compression methods using the logarithm. The strength of an earthquake is measured by taking the common logarithm of the energy emitted by the quake. This is used in the moment magnitude scale or the Richter magnitude scale. For example, a 5.0 earthquake releases 32 times and a 6.0 releases 1000 times the energy of a 4.0. Another logarithmic scale is apparent magnitude. It measures the brightness of stars logarithmically. Yet another example is pH in chemistry; pH is the negative of the common logarithm of the activity of hydronium ions (the form hydrogen ions take in water). The activity of hydronium ions in neutral water is 10⁻⁷ mol·L⁻¹, hence a pH of 7. Vinegar typically has a pH of about 3.
The difference of 4 corresponds to a ratio of 10⁴ in the activity, that is, vinegar's hydronium ion activity is about 10⁻³ mol·L⁻¹. Semilog (log-linear) graphs use the logarithmic scale concept for visualization: one axis, typically the vertical one, is scaled logarithmically. For example, the chart at the right compresses the steep increase from 1 million to 1 trillion to the same space (on the vertical axis) as the increase from 1 to 1 million. In such graphs, exponential functions of the form f(x) = a · b^x appear as straight lines with slope equal to the logarithm of b. Log-log graphs scale both axes logarithmically, which causes functions of the form f(x) = a · x^k to be depicted as straight lines with slope equal to the exponent "k". This is applied in visualizing and analyzing power laws. Logarithms occur in several laws describing human perception: Hick's law proposes a logarithmic relation between the time individuals take to choose an alternative and the number of choices they have. Fitts's law predicts that the time required to rapidly move to a target area is a logarithmic function of the distance to and the size of the target. In psychophysics, the Weber–Fechner law proposes a logarithmic relationship between stimulus and sensation such as the actual vs. the perceived weight of an item a person is carrying. (This "law", however, is less precise than more recent models, such as Stevens' power law.) Psychological studies found that individuals with little mathematics education tend to estimate quantities logarithmically, that is, they position a number on an unmarked line according to its logarithm, so that 10 is positioned as close to 100 as 100 is to 1000. Increasing education shifts this to a linear estimate (positioning 1000 ten times as far away) in some circumstances, while logarithms are used when the numbers to be plotted are difficult to plot linearly. Logarithms arise in probability theory: the law of large numbers dictates that, for a fair coin, as the number of coin-tosses increases to infinity, the observed proportion of heads approaches one-half. The fluctuations of this proportion about one-half are described by the law of the iterated logarithm. Logarithms also occur in log-normal distributions. When the logarithm of a random variable has a normal distribution, the variable is said to have a log-normal distribution. Log-normal distributions are encountered in many fields, wherever a variable is formed as the product of many independent positive random variables, for example in the study of turbulence. Logarithms are used for maximum-likelihood estimation of parametric statistical models. For such a model, the likelihood function depends on at least one parameter that must be estimated. A maximum of the likelihood function occurs at the same parameter-value as a maximum of the logarithm of the likelihood (the "log likelihood"), because the logarithm is an increasing function. The log-likelihood is easier to maximize, especially for the multiplied likelihoods for independent random variables. Benford's law describes the occurrence of digits in many data sets, such as heights of buildings. According to Benford's law, the probability that the first decimal-digit of an item in the data sample is "d" (from 1 to 9) equals log₁₀(d + 1) − log₁₀(d), "regardless" of the unit of measurement. Thus, about 30% of the data can be expected to have 1 as first digit, 18% start with 2, etc. Auditors examine deviations from Benford's law to detect fraudulent accounting.
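Benford's law as stated above assigns the leading digit d the probability log₁₀(d + 1) − log₁₀(d) = log₁₀(1 + 1/d). The snippet below merely tabulates these expected frequencies; it is not an auditing procedure.

```python
import math

# Expected first-digit frequencies under Benford's law.
for d in range(1, 10):
    p = math.log10(1 + 1 / d)
    print(d, f"{p:.3f}")
# d = 1 gives ~0.301 (about 30% of entries), d = 2 gives ~0.176 (about 18%), and so on.
```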
Analysis of algorithms is a branch of computer science that studies the performance of algorithms (computer programs solving a certain problem). Logarithms are valuable for describing algorithms that divide a problem into smaller ones, and join the solutions of the subproblems. For example, to find a number in a sorted list, the binary search algorithm checks the middle entry and proceeds with the half before or after the middle entry if the number is still not found. This algorithm requires, on average, comparisons, where "N" is the list's length. Similarly, the merge sort algorithm sorts an unsorted list by dividing the list into halves and sorting these first before merging the results. Merge sort algorithms typically require a time approximately proportional to . The base of the logarithm is not specified here, because the result only changes by a constant factor when another base is used. A constant factor is usually disregarded in the analysis of algorithms under the standard uniform cost model. A function is said to grow logarithmically if is (exactly or approximately) proportional to the logarithm of . (Biological descriptions of organism growth, however, use this term for an exponential function.) For example, any natural number "N" can be represented in binary form in no more than bits. In other words, the amount of memory needed to store "N" grows logarithmically with "N". Entropy is broadly a measure of the disorder of some system. In statistical thermodynamics, the entropy "S" of some physical system is defined as The sum is over all possible states "i" of the system in question, such as the positions of gas particles in a container. Moreover, is the probability that the state "i" is attained and "k" is the Boltzmann constant. Similarly, entropy in information theory measures the quantity of information. If a message recipient may expect any one of "N" possible messages with equal likelihood, then the amount of information conveyed by any one such message is quantified as bits. Lyapunov exponents use logarithms to gauge the degree of chaoticity of a dynamical system. For example, for a particle moving on an oval billiard table, even small changes of the initial conditions result in very different paths of the particle. Such systems are chaotic in a deterministic way, because small measurement errors of the initial state predictably lead to largely different final states. At least one Lyapunov exponent of a deterministically chaotic system is positive. Logarithms occur in definitions of the dimension of fractals. Fractals are geometric objects that are self-similar: small parts reproduce, at least roughly, the entire global structure. The Sierpinski triangle (pictured) can be covered by three copies of itself, each having sides half the original length. This makes the Hausdorff dimension of this structure . Another logarithm-based notion of dimension is obtained by counting the number of boxes needed to cover the fractal in question. Logarithms are related to musical tones and intervals. In equal temperament, the frequency ratio depends only on the interval between two tones, not on the specific frequency, or pitch, of the individual tones. For example, the note "A" has a frequency of 440 Hz and "B-flat" has a frequency of 466 Hz. The interval between "A" and "B-flat" is a semitone, as is the one between "B-flat" and "B" (frequency 493 Hz). 
Accordingly, the frequency ratios agree: Therefore, logarithms can be used to describe the intervals: an interval is measured in semitones by taking the logarithm of the frequency ratio, while the logarithm of the frequency ratio expresses the interval in cents, hundredths of a semitone. The latter is used for finer encoding, as it is needed for non-equal temperaments. Natural logarithms are closely linked to counting prime numbers (2, 3, 5, 7, 11, ...), an important topic in number theory. For any integer , the quantity of prime numbers less than or equal to is denoted . The prime number theorem asserts that is approximately given by in the sense that the ratio of and that fraction approaches 1 when tends to infinity. As a consequence, the probability that a randomly chosen number between 1 and is prime is inversely proportional to the number of decimal digits of . A far better estimate of is given by the offset logarithmic integral function , defined by The Riemann hypothesis, one of the oldest open mathematical conjectures, can be stated in terms of comparing and . The Erdős–Kac theorem describing the number of distinct prime factors also involves the natural logarithm. The logarithm of "n" factorial, , is given by This can be used to obtain Stirling's formula, an approximation of for large "n". All the complex numbers that solve the equation are called "complex logarithms" of , when is (considered as) a complex number. A complex number is commonly represented as , where and are real numbers and is an imaginary unit, the square of which is −1. Such a number can be visualized by a point in the complex plane, as shown at the right. The polar form encodes a non-zero complex number by its absolute value, that is, the (positive, real) distance to the origin, and an angle between the real () axis "Re" and the line passing through both the origin and . This angle is called the argument of . The absolute value of is given by Using the geometrical interpretation of formula_89 and formula_90 and their periodicity in formula_91 any complex number may be denoted as for any integer number . Evidently the argument of is not uniquely specified: both and ' = + 2"k" are valid arguments of for all integers , because adding 2"k" radian or "k"⋅360° to corresponds to "winding" around the origin counter-clock-wise by turns. The resulting complex number is always , as illustrated at the right for . One may select exactly one of the possible arguments of as the so-called "principal argument", denoted , with a capital , by requiring to belong to one, conveniently selected turn, e.g., formula_93 or formula_94 These regions, where the argument of is uniquely determined are called "branches" of the argument function. Euler's formula connects the trigonometric functions sine and cosine to the complex exponential: Using this formula, and again the periodicity, the following identities hold: where is the unique real natural logarithm, denote the complex logarithms of , and is an arbitrary integer. Therefore, the complex logarithms of , which are all those complex values for which the power of equals , are the infinitely many values Taking such that formula_98 is within the defined interval for the principal arguments, then is called the "principal value" of the logarithm, denoted , again with a capital . The principal argument of any positive real number is 0; hence is a real number and equals the real (natural) logarithm. 
However, the above formulas for logarithms of products and powers do "not" generalize to the principal value of the complex logarithm. The illustration at the right depicts , confining the arguments of to the interval . This way the corresponding branch of the complex logarithm has discontinuities all along the negative real axis, which can be seen in the jump in the hue there. This discontinuity arises from jumping to the other boundary in the same branch, when crossing a boundary, i.e., not changing to the corresponding -value of the continuously neighboring branch. Such a locus is called a branch cut. Dropping the range restrictions on the argument makes the relations "argument of ", and consequently the "logarithm of ", multi-valued functions. Exponentiation occurs in many areas of mathematics and its inverse function is often referred to as the logarithm. For example, the logarithm of a matrix is the (multi-valued) inverse function of the matrix exponential. Another example is the "p"-adic logarithm, the inverse function of the "p"-adic exponential. Both are defined via Taylor series analogous to the real case. In the context of differential geometry, the exponential map maps the tangent space at a point of a manifold to a neighborhood of that point. Its inverse is also called the logarithmic (or log) map. In the context of finite groups exponentiation is given by repeatedly multiplying one group element with itself. The discrete logarithm is the integer "n" solving the equation where is an element of the group. Carrying out the exponentiation can be done efficiently, but the discrete logarithm is believed to be very hard to calculate in some groups. This asymmetry has important applications in public key cryptography, such as for example in the Diffie–Hellman key exchange, a routine that allows secure exchanges of cryptographic keys over unsecured information channels. Zech's logarithm is related to the discrete logarithm in the multiplicative group of non-zero elements of a finite field. Further logarithm-like inverse functions include the "double logarithm" ln(ln("x")), the "super- or hyper-4-logarithm" (a slight variation of which is called iterated logarithm in computer science), the Lambert W function, and the logit. They are the inverse functions of the double exponential function, tetration, of , and of the logistic function, respectively. From the perspective of group theory, the identity expresses a group isomorphism between positive reals under multiplication and reals under addition. Logarithmic functions are the only continuous isomorphisms between these groups. By means of that isomorphism, the Haar measure (Lebesgue measure) "dx" on the reals corresponds to the Haar measure on the positive reals. Logarithmic one-forms appear in complex analysis and algebraic geometry as differential forms with logarithmic poles. The polylogarithm is the function defined by It is related to the natural logarithm by . Moreover, equals the Riemann zeta function . L. Ron Hubbard Lafayette Ronald Hubbard ( ; March 13, 1911 – January 24, 1986), often referred to by his initials LRH, was an American author and the founder of the Church of Scientology. After establishing a career as a writer of science fiction and fantasy stories, in 1950 he published a "branch of self-help psychology" called Dianetics. Hubbard subsequently developed his ideas into a new religious movement that he called Scientology. 
Hubbard was cited by "Smithsonian" magazine as one of the 100 most significant Americans of all time. Born in Tilden, Nebraska in 1911, Hubbard spent much of his childhood in Helena, Montana. After his father was posted to the U.S. naval base on Guam, Hubbard traveled to Asia and the South Pacific in the late 1920s. In 1930, Hubbard enrolled at George Washington University to study civil engineering, but dropped out in his second year. He began his career as a prolific writer of pulp fiction stories and married Margaret "Polly" Grubb, who shared his interest in aviation. Hubbard served briefly in the Marine Corps Reserve and was an officer in the Navy during World War II. He briefly commanded two ships, but was removed from command both times. The last few months of his active service were spent in a hospital, being treated for a duodenal ulcer. After the war, Hubbard moved into the Pasadena mansion of occultist and engineer Jack Parsons. In early 1946, Hubbard and Parsons collaborated on Babalon Working, a series of magic ceremonies or rituals. Hubbard became sexually involved with Parsons's 21-year-old girlfriend, Sara "Betty" Northrup, ultimately marrying her despite Hubbard still being married to first wife Polly. In 1950, Hubbard authored and established a series of organizations to promote Dianetics. In 1952, Hubbard lost the rights to Dianetics in bankruptcy proceedings, and he subsequently founded Scientology. Thereafter Hubbard oversaw the growth of the Church of Scientology into a worldwide organization. During the late 1960s and early 1970s, he spent much of his time at sea on his personal fleet of ships as "Commodore" of the Sea Organization, an elite, paramilitary group of Scientologists. Some ex-members and scholars have described the Sea Org as a totalitarian organization marked by intensive surveillance and a lack of freedom. His expedition came to an end when Britain, Greece, Spain, Portugal, and Venezuela all closed their ports to his fleet. Hubbard returned to the United States in 1975 and went into seclusion in the California desert. In 1978, a trial court in France convicted Hubbard of fraud "in absentia". In 1983 Hubbard was named as an unindicted co-conspirator in an international information infiltration and theft project called "Operation Snow White". He spent the remaining years of his life in a luxury motor home on his California property, attended to by a small group of Scientology officials including his physician. In 1986, L. Ron Hubbard died at age 74. The Church of Scientology describes Hubbard in hagiographic terms, and he portrayed himself as a pioneering explorer, world traveler, and nuclear physicist with expertise in a wide range of disciplines, including photography, art, poetry, and philosophy. Though many of Hubbard's autobiographical statements have been found to be fictitious, the Church rejects any suggestion that its account of Hubbard's life is not historical fact. In Scientology publications, he is referred to as "Founder" and "Source" of Scientology and Dianetics. His critics, including son Ronald DeWolf, have characterized Hubbard as a mentally-unstable chronic liar. Lafayette Ronald Hubbard was born in 1911, in Tilden, Nebraska. He was the only child of Ledora May ( Waterbury), who had trained as a teacher, and Harry Ross Hubbard, a former United States Navy officer. After moving to Kalispell, Montana, they settled in Helena in 1913. 
Hubbard's father rejoined the Navy in April 1917, during World War I, while his mother worked as a clerk for the state government. During the 1920s the Hubbards repeatedly relocated around the United States and overseas. After Hubbard's father Harry rejoined the Navy, his posting aboard the USS "Oklahoma" in 1921 required the family to relocate to the ship's home ports, first San Diego, then Seattle. Hubbard was active in the Boy Scouts in Washington, D.C. and earned the rank of Eagle Scout in 1924, two weeks after his 13th birthday. The following year, Harry Ross Hubbard was posted to Puget Sound Naval Shipyard at Bremerton, Washington. His son was enrolled at Union High School, Bremerton, and later studied at Queen Anne High School in Seattle. In 1927 Hubbard's father was sent to the U.S. Naval Station on Guam. Although Hubbard's mother accompanied her husband, while their child was placed in his grandparents' care in Helena, Montana to complete his schooling. In 1927, Hubbard and his mother traveled to Guam. It consisted of a brief stop-over in a couple of Chinese ports before traveling on to Guam, where he stayed for six weeks before returning home. He recorded his impressions of the places he visited and disdained the poverty of the inhabitants of Japan and China, whom he described as "gooks" and "lazy [and] ignorant". After his return to the United States in September 1927, Hubbard enrolled at Helena High School, where he contributed to the school paper, but earned only poor grades. He abandoned school the following May and went back west to stay with his aunt and uncle in Seattle. He joined his parents in Guam in June 1928. His mother took over his education in the hope of putting him forward for the entrance examination to the United States Naval Academy at Annapolis, Maryland. Between October and December 1928 a number of naval families, including Hubbard's, traveled from Guam to China aboard the cargo ship . The ship stopped at Manila in the Philippines before traveling on to Qingdao (Tsingtao) in China. Hubbard and his parents made a side trip to Beijing before sailing on to Shanghai and Hong Kong, from where they returned to Guam. Back on Guam, Hubbard spent much of his time writing dozens of short stories and essays and failed the Naval Academy entrance examination. In September 1929 Hubbard was enrolled at the Swavely Preparatory School in Manassas, Virginia, to prepare him for a second attempt at the examination. However, he was ruled out of consideration due to his near-sightedness. He was instead sent to Woodward School for Boys in Washington, D.C. to qualify for admission to George Washington University. He successfully graduated from the school in June 1930 and entered the university the following September. Hubbard studied civil engineering during his two years at George Washington University at the behest of his father, who "decreed that I should study engineering and mathematics". During Hubbard's final semester he organized an expedition to the Caribbean for "fifty young gentleman rovers" aboard the schooner "Doris Hamlin" commencing in June 1932. The aims of the "Caribbean Motion Picture Expedition" were stated as being to explore and film the pirate "strongholds and bivouacs of the Spanish Main" and to "collect whatever one collects for exhibits in museums". It ran into trouble even before it left the port of Baltimore: Ten participants quit and storms blew the ship far off course to Bermuda. 
Eleven more members of the expedition quit there and more left when the ship arrived at Martinique. With the expedition running critically short of money, the ship's owners ordered it to return to Baltimore. Hubbard blamed the expedition's problems on the captain: "the ship's dour Captain Garfield proved himself far less than a Captain Courageous, requiring Ron Hubbard's hand at both the helm and the charts." Specimens and photographs collected by the expedition are said by Scientology accounts to have been acquired by the University of Michigan, the U.S. Hydrographic Office, an unspecified national museum and "The New York Times", though none of those institutions has any record of this. Hubbard later wrote that the expedition "was a crazy idea at best, and I knew it, but I went ahead anyway, chartered a four-masted schooner and embarked with some fifty luckless souls who haven't stopped their cursings yet." He called it "a two-bit expedition and financial bust", and some of its participants made legal claims against him for refunds. Hubbard traveled to Puerto Rico in November 1932 after his father volunteered him for the Red Cross relief effort following the devastating 1932 San Ciprian hurricane. In a 1957 lecture Hubbard said that he had been "a field executive with the American Red Cross in the Puerto Rico hurricane disaster". According to his own account, Hubbard spent much of his time prospecting unsuccessfully for gold. Towards the end of his stay on Puerto Rico he appears to have done some work for a Washington, D.C. firm called West Indies Minerals Incorporated, accompanying a surveyor in an investigation of a small property near the town of Luquillo, Puerto Rico. The survey was unsuccessful. Hubbard became a well-known and prolific writer for pulp fiction magazines during the 1930s. Scientology texts describe him as becoming "well established as an essayist" even before he had finished college. Scientology claims he "solved his finances, and his desire to travel by writing anything that came to hand" and to have earned an "astronomical" rate of pay for the times. His literary career began with contributions to the George Washington University student newspaper, "The University Hatchet", for which he worked as a reporter for a few months in 1931. Six of his pieces were published commercially between 1932 and 1933. The going rate for freelance writers at the time was only a cent a word, so Hubbard's total earnings from these articles would have been less than $100. The pulp magazine "Thrilling Adventure" became the first to publish one of his short stories, in February 1934. Over the next six years, pulp magazines published around 140 of his short stories under a variety of pen names, including Winchester Remington Colt, Kurt von Rachen, René Lafayette, Joe Blitz and Legionnaire 148. Although he was best known for his fantasy and science fiction stories, Hubbard wrote in a wide variety of genres, including adventure fiction, aviation, travel, mysteries, westerns and even romance. Hubbard knew and associated with writers such as Isaac Asimov, Robert A. Heinlein, L. Sprague de Camp and A. E. van Vogt. His first full-length novel, "Buckskin Brigades", was published in 1937. He became a "highly idiosyncratic" writer of science fiction after being taken under the wing of editor John W. 
Campbell, who published many of Hubbard's short stories and also serialized a number of well-received novelettes that Hubbard wrote for Campbell's magazines "Unknown" and "Astounding Science Fiction". These included "Fear", "Final Blackout" and "Typewriter in the Sky". The science fiction newsletter "Xignals" reported that Hubbard wrote "over 100,000 words a month" during his peak. Martin Gardner asserted that his writing "[wa]s done at lightning speed." He wrote the script for "The Secret of Treasure Island", a 1938 Columbia Pictures movie serial. Hubbard's literary earnings helped him to support his new wife, Margaret "Polly" Grubb. She was already pregnant when they married on April 13, 1933, but had a miscarriage shortly afterwards; a few months later, she became pregnant again. On May 7, 1934, she gave birth prematurely to a son who was named Lafayette Ronald Hubbard, Jr. and given the nickname "His Nibs", invariably shortened to "Nibs". Their second child, Katherine May, was born on January 15, 1936. The Hubbards lived for a while in Laytonsville, Maryland, but were chronically short of money. In the spring of 1936 they moved to Bremerton, Washington. They lived there for a time with Hubbard's aunts and grandmother before finding a place of their own at nearby South Colby. According to one of his friends at the time, Robert MacDonald Ford, the Hubbards were "in fairly dire straits for money" but sustained themselves on the income from Hubbard's writing. Hubbard spent an increasing amount of time in New York City, working out of a hotel room where his wife suspected him of carrying on affairs with other women. Hubbard's authorship in mid-1938 of a still-unpublished manuscript called "Excalibur" is highlighted by the Church of Scientology as a key step in developing the principles of Scientology and Dianetics. The manuscript is said by Scientologists to have outlined "the basic principles of human existence" and to have been the culmination of twenty years of research into "twenty-one races and cultures including Pacific Northwest Indian tribes, Philippine Tagalogs and, as he was wont to joke, the people of the Bronx". According to Arthur J. Cox, a contributor to John W. Campbell's "Astounding Science Fiction" magazine, Hubbard told a 1948 convention of science fiction fans that the inspiration for "Excalibur" came during an operation in which he "died" for eight minutes. (Gerry Armstrong, Hubbard's archivist, explained this as a dental extraction performed under nitrous oxide, a gas known for its hallucinogenic effects.) Arthur J. Burks, the President of the American Fiction Guild, wrote that an excited Hubbard called him and said: "I want to see you right away. I have written THE book." Hubbard believed that "Excalibur" would "revolutionize everything" and that "it was somewhat more important, and would have a greater impact upon people, than the Bible." It proposed that all human behavior could be explained in terms of survival and that to understand survival was to understand life. As Hubbard biographer Jon Atack notes, "the notion that everything that exists is trying to survive became the basis of Dianetics and Scientology." According to Burks, Hubbard "was so sure he had something 'away out and beyond' anything else that he had sent telegrams to several book publishers, telling them that he had written 'THE book' and that they were to meet him at Penn Station, and he would discuss it with them and go with whomever gave him the best offer." However, nobody bought the manuscript. 
Forrest J Ackerman, later Hubbard's literary agent, recalled that Hubbard told him "whoever read it either went insane or committed suicide. And he said that the last time he had shown it to a publisher in New York, he walked into the office to find out what the reaction was, the publisher called for the reader, the reader came in with the manuscript, threw it on the table and threw himself out of the skyscraper window." Hubbard's failure to sell "Excalibur" depressed him; he told his wife in an October 1938 letter: "Writing action pulp doesn't have much agreement with what I want to do because it retards my progress by demanding incessant attention and, further, actually weakens my name. So you see I've got to do something about it and at the same time strengthen the old financial position." The manuscript later became part of Scientology mythology. An early 1950s Scientology publication offered signed "gold-bound and locked" copies for the sum of $1,500 apiece. It warned that "four of the first fifteen people who read it went insane" and that it would be "[r]eleased only on sworn statement not to permit other readers to read it. Contains data not to be released during Mr. Hubbard's stay on earth." Hubbard joined The Explorers Club in February 1940 on the strength of his claimed explorations in the Caribbean and survey flights in the United States. He persuaded the club to let him carry its flag on an "Alaskan Radio-Experimental Expedition" to update the U.S. Coast Pilot guide to the coastlines of Alaska and British Columbia and to investigate new methods of radio position-finding. The expedition consisted of Hubbard and his wife—the children were left at South Colby—aboard his ketch "Magician". Hubbard told "The Seattle Star" in a November 1940 letter that the expedition was plagued by problems and did not get any further than Ketchikan, near the southern end of the Alaska Panhandle and far from the Aleutian Islands. "Magician's" engine broke down only two days after setting off in July 1940. The Hubbards reached Ketchikan on August 30, 1940, after many delays following repeated engine breakdowns. The "Ketchikan Chronicle" reported—making no mention of the expedition—that Hubbard's purpose in coming to Alaska "was two-fold, one to win a bet and another to gather material for a novel of Alaskan salmon fishing". Having underestimated the cost of the trip, he did not have enough money to repair the broken engine. He raised money by writing stories and contributing to the local radio station, and eventually earned enough to fix the engine, making it back to Puget Sound on December 27, 1940. After returning from Alaska, Hubbard applied to join the United States Navy. His Congressman, Warren G. Magnuson, wrote to President Roosevelt to recommend Hubbard as "a gentleman of reputation" who was "a respected explorer" and had "marine masters papers for more types of vessels than any other man in the United States". Hubbard was described as "a key figure" in writing organizations, "making him politically potent nationally". The Congressman concluded: "Anything you can do for Mr Hubbard will be appreciated." His friend Robert MacDonald Ford, by now a State Representative for Washington, sent a letter of recommendation describing Hubbard as "one of the most brilliant men I have ever known". It called Hubbard "a powerful influence" in the Northwest and said that he was "well known in many parts of the world and has considerable influence in the Caribbean and Alaska". 
The letter declared that "for courage and ability I cannot too strongly recommend him." Ford later said that Hubbard had written the letter himself: "I don't know why Ron wanted a letter. I just gave him a letter-head and said, 'Hell, you're the writer, you write it!'" Hubbard was commissioned as a Lieutenant (junior grade) in the U.S. Naval Reserve on July 19, 1941. Most of his military service was spent ashore in the continental United States on administrative or training duties. He served for a short time in Australia but was sent home after quarreling with his superiors. He briefly commanded two anti-submarine vessels, the USS "YP-422" and USS "PC-815", in coastal waters off Massachusetts, Oregon and California in 1942 and 1943 respectively. After Hubbard reported that the "PC-815" had attacked and crippled or sunk two Japanese submarines off Oregon in May 1943, his claim was rejected by the commander of the Northwest Sea Frontier. Hubbard and Thomas Moulton, his second in command on the "PC-815", later said the Navy wanted to avoid panic on the mainland. A month later Hubbard unwittingly sailed the "PC-815" into Mexican territorial waters and conducted gunnery practice off the Coronado Islands, in the belief that they were uninhabited and belonged to the United States. The Mexican government complained and Hubbard was relieved of command. A fitness report written after the incident rated Hubbard as unsuitable for independent duties and "lacking in the essential qualities of judgment, leadership and cooperation". He served for a while as the Navigation and Training Officer for the USS "Algol" while it was based at Portland. A fitness report from this period recommended promotion, describing him as "a capable and energetic officer, [but] very temperamental", and an "above average navigator". However, he never held another such position and did not serve aboard another ship after the "Algol". His medical records state that he was hospitalized with an acute duodenal ulcer rather than a war injury. He told his doctors that he was suffering from lameness caused by a hip infection and he told "Look" magazine in December 1950 that he had suffered from "ulcers, conjunctivitis, deteriorating eyesight, bursitis and something wrong with my feet". He was still complaining in 1951 of eye problems and stomach pains, which had given him "continuous trouble" for eight years, especially when "under nervous stress". This came well after Hubbard had promised that Dianetics would provide "a cure for the very ailments that plagued the author himself then and throughout his life, including allergies, arthritis, ulcers and heart problems". An October 1945 Naval Board found that Hubbard was "considered physically qualified to perform duty ashore, preferably within the continental United States". He was discharged from hospital on December 4, 1945, and transferred to inactive duty on February 17, 1946. He resigned his commission with effect from October 30, 1950. Hubbard's life underwent a turbulent period immediately after the war. According to his own account, he "was abandoned by family and friends as a supposedly hopeless cripple and a probable burden upon them for the rest of my days". His daughter Katherine presented a rather different version: his wife had refused to uproot their children from their home in Bremerton, Washington, to join him in California. Their marriage was by now in terminal difficulties and he chose to stay in California. 
In August 1945 Hubbard moved into the Pasadena mansion of John "Jack" Whiteside Parsons. A leading rocket propulsion researcher at the California Institute of Technology and a founder of the Jet Propulsion Laboratory, Parsons led a double life as an avid occultist and Thelemite, a follower of the English ceremonial magician Aleister Crowley and leader of a lodge of Crowley's magical order, Ordo Templi Orientis (OTO). He let rooms in the house only to tenants who he specified should be "atheists and those of a Bohemian disposition". Hubbard befriended Parsons and soon became sexually involved with Parsons's 21-year-old girlfriend, Sara "Betty" Northrup. Despite this, Parsons was very impressed with Hubbard and reported as much to Crowley. Hubbard, whom Parsons referred to in writing as "Frater H", became an enthusiastic collaborator in the Pasadena OTO. The two men collaborated on the "Babalon Working", a sex magic ritual intended to summon an incarnation of Babalon, the supreme Thelemite Goddess. It was undertaken over several nights in February and March 1946 in order to summon an "elemental" who would participate in further sex magic. According to Richard Metzger, the "elemental" arrived a few days later in the form of Marjorie Cameron, who agreed to participate in Parsons' rites. Soon afterwards, Parsons, Hubbard and Sara agreed to set up a business partnership, "Allied Enterprises", in which they invested nearly their entire savings—the vast majority contributed by Parsons. The plan was for Hubbard and Sara to buy yachts in Miami and sail them to the West Coast to sell for a profit. Hubbard had a different idea; he wrote to the U.S. Navy requesting permission to leave the country "to visit Central & South America & China" for the purposes of "collecting writing material"—in other words, undertaking a world cruise. Aleister Crowley strongly criticized Parsons's actions, writing: "Suspect Ron playing confidence trick—Jack Parsons weak fool—obvious victim prowling swindlers." Parsons attempted to recover his money by obtaining an injunction to prevent Hubbard and Sara from leaving the country or disposing of the remnants of his assets. They attempted to sail anyway but were forced back to port by a storm. A week later, Allied Enterprises was dissolved. Parsons received only a $2,900 promissory note from Hubbard and returned home "shattered". He had to sell his mansion to developers soon afterwards to recoup his losses. Hubbard's fellow writers were well aware of what had happened between him and Parsons; L. Sprague de Camp, for example, wrote about it to Isaac Asimov on August 27, 1946. On August 10, 1946, Hubbard bigamously married Sara, while still married to Polly. It was not until 1947 that his first wife learned that he had remarried. Hubbard agreed to divorce Polly in June that year and the marriage was dissolved shortly afterwards, with Polly given custody of the children. During this period, Hubbard authored a document called the "Affirmations" (also referred to as the "Admissions"). They consist of a series of statements by and addressed to Hubbard, relating to various physical, sexual, psychological and social issues that he was encountering in his life. The Affirmations appear to have been intended as a form of self-hypnosis aimed at resolving the author's psychological problems and instilling a positive mental attitude. Journalist Janet Reitman called the Affirmations "the most revealing psychological self-assessment, complete with exhortations to himself, that [Hubbard] had ever made." 
After Hubbard's wedding to Sara, the couple settled at Laguna Beach, California, where Hubbard took a short-term job looking after a friend's yacht before resuming his fiction writing to supplement the small disability allowance that he was receiving as a war veteran. Working from a trailer in a run-down area of North Hollywood, Hubbard sold a number of science fiction stories that included his "Ole Doc Methuselah" series and the serialized novels "The End Is Not Yet" and "To the Stars". However, he remained short of money. His son, L. Ron Hubbard Jr., later testified that Hubbard was dependent on his own father and on Margaret's parents for money, and that his writings, paid at a penny per word, never garnered him more than $10,000 prior to the founding of Scientology. He repeatedly wrote to the Veterans Administration (VA) asking for an increase in his war pension, including in an October 1947 letter. The VA eventually did increase his pension, but his money problems continued. On August 31, 1948, he was arrested in San Luis Obispo, California, and subsequently pleaded guilty to a charge of petty theft, for which he was ordered to pay a $25 fine. According to the Church of Scientology, around this time he "accept[ed] an appointment as a Special Police Officer with the Los Angeles Police Department and us[ed] the position to study society's criminal elements" and also "worked with neurotics from the Hollywood film community". In late 1948 Hubbard and Sara moved to Savannah, Georgia. Here, Scientology sources say, he "volunteer[ed] his time in hospitals and mental wards, saving the lives of patients with his counseling techniques". Hubbard began to make the first public mentions of what was to become Dianetics. His first thoughts on the subject were compiled in a short book called "The Original Thesis", which contained basic conclusions about human aberrations and how to handle them with auditing. His first published articles on Dianetics were "Terra Incognita: The Mind", in the journal of the Explorers Club, and a second, more influential piece in "Astounding Science Fiction". The positive public response to these articles led Hubbard to expand them into "Dianetics: The Modern Science of Mental Health"; Scientologists consider the volume's publication on May 9, 1950, "a seminal event of the century." He had written in January 1949 that he was working on a "book of psychology" about "the cause and cure of nervous tension", which he was going to call "The Dark Sword", "Excalibur" or "Science of the Mind". In April 1949, Hubbard wrote to several professional organizations to offer his research. None were interested, so he turned to his editor John W. Campbell, who was more receptive due to a long-standing fascination with fringe psychologies and psychic powers ("psionics") that "permeated both his fiction and non-fiction". Campbell invited Hubbard and Sara to move into a cottage at Bay Head, New Jersey, not far from his own home at Plainfield. In July 1949, Campbell recruited an acquaintance, Dr. Joseph Winter, to help develop Hubbard's new therapy of "Dianetics". Hubbard collaborated with Campbell and Winter to refine his techniques, testing them on science fiction fans recruited by Campbell. The basic principle of Dianetics was that the brain recorded every experience and event in a person's life, even when unconscious. Bad or painful experiences were stored as what he called "engrams" in a "reactive mind". 
These could be triggered later in life, causing emotional and physical problems. By carrying out a process he called "auditing", a person could be regressed through his engrams and re-experience past events. This enabled engrams to be "cleared". The subject, who would now be in a state of "Clear", would have a perfectly functioning mind with an improved IQ and photographic memory. The "Clear" would be cured of physical ailments ranging from poor eyesight to the common cold, which Hubbard asserted were purely psychosomatic. Winter submitted a paper on Dianetics to the "Journal of the American Medical Association" and the "American Journal of Psychiatry", but both journals rejected it. Hubbard and his collaborators decided to announce Dianetics in Campbell's "Astounding Science Fiction" instead. In an editorial, Campbell said: "Its power is almost unbelievable; it proves the mind not only can but does rule the body completely; following the sharply defined basic laws set forth, physical ills such as ulcers, asthma and arthritis can be cured, as can all other psychosomatic ills." The birth of Hubbard's second daughter Alexis Valerie, delivered by Winter on March 8, 1950, came in the middle of the preparations to launch Dianetics. A "Hubbard Dianetic Research Foundation" was established in April 1950 in Elizabeth, New Jersey, with Hubbard, Sara, Winter and Campbell on the board of directors. When he introduced Dianetics to the world in the 1950s, Hubbard claimed it revealed "the hidden source of all psychosomatic ills and human aberration" and that "skills have been developed for their invariable cure." Dianetics was duly launched in "Astounding's" May 1950 issue, and on May 9 Hubbard's companion book "Dianetics: The Modern Science of Mental Health" was published by Hermitage House. Hubbard abandoned freelance writing in order to promote Dianetics, writing several books about it over the next decade and delivering an estimated 4,000 lectures while founding Dianetics research organizations. The word Dianetics derives from the Greek "dia", meaning "through", and "nous", meaning "soul." Hubbard defined it as "a spiritual healing technology" and "an organized science of thought." He called Dianetics "a milestone for man comparable to his discovery of fire and superior to his invention of the wheel and the arch". It was an immediate commercial success and sparked what Martin Gardner calls "a nationwide cult of incredible proportions". By August 1950, Hubbard's book had sold 55,000 copies, was selling at the rate of 4,000 a week and was being translated into French, German and Japanese. Five hundred Dianetic auditing groups had been set up across the United States. Dianetics was poorly received by the press and the scientific and medical professions. The American Psychological Association criticized Hubbard's claims as "not supported by empirical evidence". "Scientific American" said that Hubbard's book contained "more promises and less evidence per page than any publication since the invention of printing", while "The New Republic" called it a "bold and immodest mixture of complete nonsense and perfectly reasonable common sense, taken from long acknowledged findings and disguised and distorted by a crazy, newly invented terminology". Some of Hubbard's fellow science fiction writers also criticized it; Isaac Asimov considered it "gibberish" while Jack Williamson called it "a lunatic revision of Freudian psychology". Several famous individuals became involved with Dianetics. 
Aldous Huxley received auditing from Hubbard himself; the poet Jean Toomer and the science fiction writers Theodore Sturgeon and A. E. van Vogt became trained Dianetics auditors. Van Vogt temporarily abandoned writing and became the head of the newly established Los Angeles branch of the Hubbard Dianetic Research Foundation. Other branches were established in New York, Washington, D.C., Chicago, and Honolulu. Although Dianetics was not cheap, a great many people were nonetheless willing to pay; van Vogt later recalled "doing little but tear open envelopes and pull out $500 checks from people who wanted to take an auditor's course". Financial controls were lax. Hubbard himself withdrew large sums with no explanation of what he was doing with the money. On one occasion, van Vogt saw Hubbard taking a lump sum of $56,000 (equivalent to $0.5 million at 2010 prices) out of the Los Angeles Foundation's proceeds. One of Hubbard's employees, Helen O'Brien, commented that at the Elizabeth, N.J. branch of the Foundation, the books showed that "a month's income of $90,000 is listed, with only $20,000 accounted for". Hubbard played a very active role in the Dianetics boom, writing, lecturing and training auditors. Many of those who knew him spoke of being impressed by his personal charisma. Jack Horner, who became a Dianetics auditor in 1950, later said, "He was very impressive, dedicated and amusing. The man had tremendous charisma; you just wanted to hear every word he had to say and listen for any pearl of wisdom." Isaac Asimov recalled in his autobiography how, at a dinner party, he, Robert Heinlein, L. Sprague de Camp and their wives "all sat as quietly as pussycats and listened to Hubbard. He told tales with perfect aplomb and in complete paragraphs." As Atack comments, he was "a charismatic figure who compelled the devotion of those around him". Christopher Evans likewise described the personal qualities that Hubbard brought to Dianetics and Scientology. Even so, Hubbard's supporters soon began to have doubts about Dianetics. Winter became disillusioned and wrote that he had never seen a single convincing Clear: "I have seen some individuals who are supposed to have been 'clear,' but their behavior does not conform to the definition of the state. Moreover, an individual supposed to have been 'clear' has undergone a relapse into conduct which suggests an incipient psychosis." He also deplored the Foundation's omission of any serious scientific research. Dianetics lost public credibility in August 1950 when a presentation by Hubbard before an audience of 6,000 at the Shrine Auditorium in Los Angeles failed disastrously. He introduced a Clear named Sonya Bianca and told the audience that as a result of undergoing Dianetic therapy she now possessed perfect recall. However, Gardner writes, "in the demonstration that followed, she failed to remember a single formula in physics (the subject in which she was majoring) or the color of Hubbard's tie when his back was turned. At this point, a large part of the audience got up and left." Hubbard also faced other practitioners moving into leadership positions within the Dianetics community. The movement was structured as an open, public practice in which others were free to pursue their own lines of research and claim that their approaches to auditing produced better results than Hubbard's. The community rapidly splintered and its members mingled Hubbard's ideas with a wide variety of esoteric and even occult practices. By late 1950, the Elizabeth, N.J. 
Foundation was in financial crisis and the Los Angeles Foundation was more than $200,000 in debt. Winter and Art Ceppos, the publisher of Hubbard's book, resigned under acrimonious circumstances. Campbell also resigned, criticizing Hubbard for being impossible to work with, and blamed him for the disorganization and financial ruin of the Foundations. By the summer of 1951, the Elizabeth, N.J. Foundation and all of its branches had closed. The collapse of Hubbard's marriage to Sara created yet more problems. He had begun an affair with his 20-year-old public relations assistant in late 1950, while Sara started a relationship with Dianetics auditor Miles Hollister. Hubbard secretly denounced the couple to the FBI in March 1951, portraying them in a letter as communist infiltrators. According to Hubbard, Sara was "currently intimate with [communists] but evidently under coercion. Drug addiction set in fall 1950. Nothing of this known to me until a few weeks ago." Hollister was described as having a "sharp chin, broad forehead, rather Slavic" appearance. He was said to be the "center of most turbulence in our organization" and "active and dangerous". The FBI did not take Hubbard seriously: an agent annotated his correspondence with the comment, "Appears mental." Three weeks later, Hubbard and two Foundation staff seized Sara and his year-old daughter Alexis and forcibly took them to San Bernardino, California, where he attempted unsuccessfully to find a doctor to examine Sara and declare her insane. He let Sara go but took Alexis to Havana, Cuba. Sara filed a divorce suit on April 23, 1951, that accused him of marrying her bigamously and subjecting her to sleep deprivation, beatings, strangulation, kidnapping and exhortations to commit suicide. The case led to newspaper headlines such as "Ron Hubbard Insane, Says His Wife." Sara finally secured the return of her daughter in June 1951 by agreeing to a settlement with her husband in which she signed a statement written by him. Dianetics appeared to be on the edge of total collapse. However, it was saved by Don Purcell, a millionaire businessman and Dianeticist who agreed to support a new Foundation in Wichita, Kansas. Their collaboration ended after less than a year when they fell out over the future direction of Dianetics. The Wichita Foundation became financially nonviable after a court ruled that it was liable for the unpaid debts of its defunct predecessor in Elizabeth, N.J. The ruling prompted Purcell and the other directors of the Wichita Foundation to file for voluntary bankruptcy in February 1952. Hubbard resigned immediately and accused Purcell of having been bribed by the American Medical Association to destroy Dianetics. Hubbard established a "Hubbard College" on the other side of town, where he continued to promote Dianetics while fighting Purcell in the courts over the Foundation's intellectual property. Only six weeks after setting up the Hubbard College and marrying a staff member, 18-year-old Mary Sue Whipp, Hubbard closed it down and moved with his new bride to Phoenix, Arizona. He established a Hubbard Association of Scientologists International to promote his new "Science of Certainty"—Scientology. Scientology and Dianetics have been differentiated as follows: Dianetics is concerned with releasing the mind from the "distorting influence of engrams", while Scientology "is the study and handling of the spirit in relation to itself, universes and other life". 
The Church of Scientology attributes its genesis to Hubbard's discovery of "a new line of research"—"that man is most fundamentally a spiritual being (a thetan)". Non-Scientologist writers have suggested alternative motives: that he aimed "to reassert control over his creation", that he believed "he was about to lose control of Dianetics", or that he wanted to ensure "he would be able to stay in business even if the courts eventually awarded control of Dianetics and its valuable copyrights to ... the hated Don Purcell." Hubbard expanded upon the basics of Dianetics to construct a spiritually oriented (though at this stage not religious) doctrine based on the concept that the true self of a person was a thetan — an immortal, omniscient and potentially omnipotent entity. Hubbard taught that thetans, having created the material universe, had forgotten their god-like powers and become trapped in physical bodies. Scientology aimed to "rehabilitate" each person's self (the thetan) to restore its original capacities and become once again an "Operating Thetan". Hubbard insisted humanity was imperiled by the forces of "aberration", which were the result of engrams carried by immortal thetans for billions of years. In 2012, Ohio State University professor Hugh Urban asserted that Hubbard had adopted many of his theories from the early-to-mid-20th-century astral projection pioneer Sylvan Muldoon. Urban stated that Hubbard's description of exteriorizing the thetan is extremely similar, if not identical, to the descriptions of astral projection in the occult literature popularized by Muldoon's widely read "Phenomena of Astral Projection" (1951, co-written with Hereward Carrington), and that Muldoon's description of the astral body as being connected to the physical body by a long, thin, elastic cord is virtually identical to the one described in Hubbard's "Excalibur" vision. Hubbard introduced a device called an E-meter that he presented as having, as Miller puts it, "an almost mystical power to reveal an individual's innermost thoughts". He promulgated Scientology through a series of lectures, bulletins and books such as "A History of Man" ("a cold-blooded and factual account of your last sixty trillion years") and "Scientology: 8-8008" ("With this book, the ability to make one's body old or young at will, the ability to heal the ill without physical contact, the ability to cure the insane and the incapacitated, is set forth for the physician, the layman, the mathematician and the physicist."). Scientology was organized in a very different way from the decentralized Dianetics movement. The Hubbard Association of Scientologists (HAS) was the only official Scientology organization. Training procedures and doctrines were standardized and promoted through HAS publications, and administrators and auditors were not permitted to deviate from Hubbard's approach. Branches or "orgs" were organized as franchises, rather like a fast food restaurant chain. Each franchise holder was required to pay ten percent of income to Hubbard's central organization. They were expected to find new recruits, known as "raw meat", but were restricted to providing only basic services. Costlier higher-level auditing was provided only by Hubbard's central organization. Although this model would eventually be extremely successful, Scientology was a very small-scale movement at first. Hubbard started off with only a few dozen followers, generally dedicated Dianeticists; a seventy-hour series of lectures in Philadelphia in December 1952 was attended by just 38 people. 
Hubbard was joined in Phoenix by his 18-year-old son Nibs, who had been unable to settle down in high school. Nibs had decided to become a Scientologist, moved into his father's home and went on to become a Scientology staff member and "professor". Hubbard also traveled to the United Kingdom to establish his control over a Dianetics group in London. It was very much a shoestring operation; as Helen O'Brien later recalled, "there was an atmosphere of extreme poverty and undertones of a grim conspiracy over all. At 163 Holland Park Avenue was an ill-lit lecture room and a bare-boarded and poky office some eight by ten feet—mainly infested by long haired men and short haired and tatty women." On September 24, 1952, only a few weeks after arriving in London, Hubbard's wife Mary Sue gave birth to her first child, a daughter whom they named Diana Meredith de Wolfe Hubbard. In February 1953, Hubbard acquired a doctorate from the unaccredited Sequoia University. According to a Scientology biography, this was "given in recognition of his outstanding work on Dianetics" and "as an inspiration to the many people ... who had been inspired by him to take up advanced studies in this field ..." The British government concluded in the 1970s that Sequoia University was a "degree mill" operated by Joseph Hough, a Los Angeles chiropractor. Miller cites a telegram sent by Hubbard on February 27, 1953, in which he instructed Scientologist Richard de Mille to procure him a Ph.D. from Hough urgently—"FOR GOSH SAKES EXPEDITE. WORK HERE UTTERLY DEPENDANT ON IT." Hough's "university" was closed down by the Californian authorities in 1971. British government officials noted in a report written in 1977: "It has not and never had any authority whatsoever to issue diplomas or degrees and the dean is sought by the authorities 'for questioning'." A few weeks after becoming "Dr." Hubbard, he wrote to Helen O'Brien—who had taken over the day-to-day management of Scientology in the United States—proposing that Scientology should be transformed into a religion. As membership declined and finances grew tighter, Hubbard had reversed the hostility to religion he voiced in "Dianetics". His letter to O'Brien discussed the legal and financial benefits of religious status. The idea may not have been new; Hubbard has been quoted as telling a science fiction convention in 1948: "Writing for a penny a word is ridiculous. If a man really wants to make a million dollars, the best way would be to start his own religion." Scholar J. Gordon Melton notes, "There is no record of Hubbard having ever made this statement, though several of his science fiction colleagues have noted the broaching of the subject on one of their informal conversations." The Church of Scientology has denied that Hubbard said this and insists that it is a misattributed quote that was actually said by George Orwell. Hubbard outlined plans for setting up a chain of "Spiritual Guidance Centers" charging customers $500 for twenty-four hours of auditing ("That is real money ... Charge enough and we'd be swamped."). O'Brien was not enthusiastic and resigned the following September, worn out by work. She criticized Hubbard for creating "a temperate zone voodoo, in its inelasticity, unexplainable procedures, and mindless group euphoria". He nonetheless pressed ahead and on December 18, 1953, he incorporated the Church of Scientology, the Church of American Science and the Church of Spiritual Engineering in Camden, New Jersey. 
Hubbard, his wife Mary Sue and his secretary John Galusha became the trustees of all three corporations. Hubbard later denied founding the Church of Scientology, and to this day, Scientologists maintain that the "founding church" was actually the Church of Scientology of California, established on February 18, 1954, by Scientologist Burton Farber. Scientology franchises became Churches of Scientology, and some auditors began dressing as clergymen, complete with clerical collars. If they were arrested in the course of their activities, Hubbard advised, they should sue for massive damages for molesting "a Man of God going about his business". A few years later he told Scientologists: "If attacked on some vulnerable point by anyone or anything or any organization, always find or manufacture enough threat against them to cause them to sue for peace ... Don't ever defend, always attack." Any individual breaking away from Scientology and setting up his own group was to be shut down. The 1950s saw Scientology growing steadily. Hubbard finally achieved victory over Don Purcell in 1954 when the latter, worn out by constant litigation, handed the copyrights of Dianetics back to Hubbard. Most of the formerly independent Scientology and Dianetics groups were either driven out of business or absorbed into Hubbard's organizations. Hubbard marketed Scientology through medical claims, such as attracting polio sufferers by presenting the Church of Scientology as a scientific research foundation investigating polio cases. Scientology became a highly profitable enterprise for Hubbard. He implemented a scheme under which he was paid a percentage of the Church of Scientology's gross income, and by 1957 he was being paid about $250,000. His family grew, too, with Mary Sue giving birth to three more children—Geoffrey Quentin McCaully on January 6, 1954; Mary Suzette Rochelle on February 13, 1955; and Arthur Ronald Conway on June 6, 1958. In the spring of 1959, he used his new-found wealth to purchase Saint Hill Manor, an 18th-century country house in Sussex formerly owned by Sawai Man Singh II, the Maharaja of Jaipur. The house became Hubbard's permanent residence and an international training center for Scientologists. By the start of the 1960s, Hubbard was the leader of a worldwide movement with thousands of followers. A decade later, however, he had left Saint Hill Manor and moved aboard his own private fleet of ships as the Church of Scientology faced worldwide controversy. The Church of Scientology says that the problems of this period were due to "vicious, covert international attacks" by the United States government, "all of which were proven false and baseless, which were to last 27 years and finally culminated in the Government being sued for 750 million dollars for conspiracy." Behind the attacks, stated Hubbard, lay a vast conspiracy of "psychiatric front groups" secretly controlling governments: "Every single lie, false charge and attack on Scientology has been traced directly to this group's members. They have sought at great expense for nineteen years to crush and eradicate any new development in the field of the mind. They are actively preventing any effectiveness in this field." 
Hubbard believed that Scientology was being infiltrated by saboteurs and spies and introduced "security checking" to identify those he termed "potential trouble sources" and "suppressive persons". Members of the Church of Scientology were interrogated with the aid of E-meters and were asked questions such as "Have you ever practiced homosexuality?" and "Have you ever had unkind thoughts about L. Ron Hubbard?" For a time, Scientologists were even interrogated about crimes committed in past lives: "Have you ever destroyed a culture?" "Did you come to Earth for evil purposes?" "Have you ever zapped anyone?" He also sought to exert political influence, advising Scientologists to vote against Richard Nixon in the 1960 presidential election and establishing a Department of Government Affairs "to bring government and hostile philosophies or societies into a state of complete compliance with the goals of Scientology". This, he said, "is done by high-level ability to control and in its absence by a low-level ability to overwhelm. Introvert such agencies. Control such agencies." The U.S. Government was already well aware of Hubbard's activities. The FBI had a lengthy file on him, including a 1951 interview with an agent who considered him a "mental case". Police forces in a number of jurisdictions began exchanging information about Scientology through the auspices of Interpol, which eventually led to prosecutions. In 1958, the U.S. Internal Revenue Service withdrew the Washington, D.C. Church of Scientology's tax exemption after it found that Hubbard and his family were profiting unreasonably from Scientology's ostensibly non-profit income. The Food and Drug Administration took action against Scientology's medical claims, seizing thousands of pills being marketed as "radiation cures" as well as publications and E-meters. The Church of Scientology was required to label them as being "ineffective in the diagnosis or treatment of disease". Following the FDA's actions, Scientology attracted increasingly unfavorable publicity across the English-speaking world. It faced particularly hostile scrutiny in Victoria, Australia, where it was accused of brainwashing, blackmail, extortion and damaging the mental health of its members. The Victorian state government established a Board of Inquiry into Scientology in November 1963. Its report, published in October 1965, condemned every aspect of Scientology and Hubbard himself. He was described as being of doubtful sanity, having a persecution complex and displaying strong indications of paranoid schizophrenia with delusions of grandeur. His writings were characterized as nonsensical, abounding in "self-glorification and grandiosity, replete with histrionics and hysterical, incontinent outbursts". Sociologist Roy Wallis comments that the report drastically changed public perceptions of Scientology. It led to Scientology being banned in Victoria, Western Australia and South Australia, and to more negative publicity around the world. Newspapers and politicians in the UK pressed the British government for action against Scientology. In April 1966, hoping to form a remote "safe haven" for Scientology, Hubbard traveled to the southern African country of Rhodesia (today Zimbabwe) and looked into setting up a base there at a hotel on Lake Kariba. 
Despite his attempts to curry favour with the local government—he personally delivered champagne to Prime Minister Ian Smith's house, but Smith refused to see him—Rhodesia promptly refused to renew Hubbard's visa, compelling him to leave the country. In July 1968, the British Minister of Health, Kenneth Robinson, announced that foreign Scientologists would no longer be permitted to enter the UK and that Hubbard himself was excluded from the country as an "undesirable alien". Further inquiries were launched in Canada, New Zealand and South Africa. Hubbard took three major new initiatives in the face of these challenges. "Ethics Technology" was introduced to tighten internal discipline within Scientology. It required Scientologists to "disconnect" from any organization or individual—including family members—deemed to be disruptive or "suppressive". According to church-operated websites, "A person who disconnects is simply exercising his right to communicate or not to communicate with a particular person." Hubbard stated: "Communication, however, is a two-way flow. If one has the right to communicate, then one must also have the right to not receive communication from another. It is this latter corollary of the right to communicate that gives us our right to privacy." Scientologists were also required to write "Knowledge Reports" on each other, reporting transgressions or misapplications of Scientology methods. Hubbard promulgated a long list of punishable "Misdemeanors", "Crimes", and "High Crimes". The "Fair Game" policy was introduced, applicable to anyone deemed an "enemy" of Scientology: "May be deprived of property or injured by any means by any Scientologist without any discipline of the Scientologist. May be tricked, sued or lied to or destroyed." At the start of March 1966, Hubbard created the Guardian's Office (GO), a new agency within the Church of Scientology that was headed by his wife Mary Sue. It dealt with Scientology's external affairs, including public relations, legal actions and the gathering of intelligence on perceived threats. As Scientology faced increasingly negative media attention, the GO retaliated with hundreds of writs for libel and slander; it issued more than forty on a single day. Hubbard ordered his staff to find "lurid, blood sex crime actual evidence on [Scientology's] attackers". Finally, at the end of 1966, Hubbard acquired his own fleet of ships. He established the "Hubbard Explorational Company Ltd", which purchased three ships—the "Enchanter", a forty-ton schooner; the "Avon River", an old trawler; and the "Royal Scotman", a former Irish Sea cattle ferry that he made his home and flagship. The ships were crewed by the Sea Organization, or Sea Org, a group of Scientologist volunteers, with the support of a couple of professional seamen. After Hubbard created the Sea Org "fleet" in early 1967 it began an eight-year voyage, sailing from port to port in the Mediterranean Sea and eastern North Atlantic. The fleet traveled as far as Corfu in the eastern Mediterranean and Dakar and the Azores in the Atlantic, but rarely stayed anywhere for longer than six weeks. When Hubbard established the Sea Org he publicly declared that he had relinquished his management responsibilities. According to Miller, this was not true. He received daily telex messages from Scientology organizations around the world reporting their statistics and income. 
The Church of Scientology sent him $15,000 a week, and millions of dollars were transferred to his bank accounts in Switzerland and Liechtenstein. Couriers arrived regularly, conveying luxury food for Hubbard and his family or cash that had been smuggled from England to avoid currency export restrictions. Along the way, Hubbard sought to establish a safe haven in "a friendly little country where Scientology would be allowed to prosper", as Miller puts it. The fleet stayed at Corfu for several months in 1968–1969. Hubbard renamed the ships after Greek gods—the "Royal Scotman" was rechristened "Apollo"—and he praised the recently established military dictatorship. The Sea Org was represented as "Professor Hubbard's Philosophy School" in a telegram to the Greek government. In March 1969, however, Hubbard and his ships were ordered to leave. In mid-1972, Hubbard tried again in Morocco, establishing contacts with the country's secret police and training senior policemen and intelligence agents in techniques for detecting subversives. The program ended in failure when it became caught up in internal Moroccan politics, and Hubbard left the country hastily in December 1972. At the same time, Hubbard was still developing Scientology's doctrines. A Scientology biography states that "free of organizational duties and aided by the first Sea Org members, L. Ron Hubbard now had the time and facilities to confirm in the physical universe some of the events and places he had encountered in his journeys down the track of time." In 1965, he designated several existing Scientology courses as confidential, repackaging them as the first of the esoteric "OT levels". Two years later he announced the release of OT3, the "Wall of Fire", revealing the secrets of an immense disaster that had occurred "on this planet, and on the other seventy-five planets which form this Confederacy, seventy-five million years ago". Scientologists were required to undertake the first two OT levels before learning how Xenu, the leader of the Galactic Confederacy, had shipped billions of people to Earth and blown them up with hydrogen bombs, following which their traumatized spirits were stuck together at "implant stations", brainwashed with false memories and eventually became contained within human beings. The discovery of OT3 was said to have taken a major physical toll on Hubbard, who announced that he had broken a knee, an arm, and his back during the course of his research. A year later, in 1968, he unveiled OT levels 4 to 6 and began delivering OT training courses to Scientologists aboard the "Royal Scotman". Scientologists around the world were presented with a glamorous picture of life in the Sea Org and many applied to join Hubbard aboard the fleet. What they found was rather different from the image. Most of those joining had no nautical experience at all. Mechanical difficulties and blunders by the crews led to a series of embarrassing incidents and near-disasters. Following one incident in which the rudder of the "Royal Scotman" was damaged during a storm, Hubbard ordered the ship's entire crew to be reduced to a "condition of liability" and wear gray rags tied to their arms. The ship itself was treated the same way, with dirty tarpaulins tied around its funnel to symbolize its lower status. According to those aboard, conditions were appalling; the crew was worked to the point of exhaustion, given meagre rations and forbidden to wash or change their clothes for several weeks. 
Hubbard maintained a harsh disciplinary regime aboard the fleet, punishing mistakes by confining people in the "Royal Scotman's" bilge tanks without toilet facilities and with food provided in buckets. At other times erring crew members were thrown overboard with Hubbard looking on and, occasionally, filming. From about 1970, Hubbard was attended aboard ship by the children of Sea Org members, organized as the Commodore's Messenger Organization (CMO). They were mainly young girls dressed in hot pants and halter tops, who were responsible for running errands for Hubbard such as lighting his cigarettes, dressing him or relaying his verbal commands to other members of the crew. In addition to his wife Mary Sue, he was accompanied by all four of his children by her, though not his first son Nibs, who had defected from Scientology in late 1959. The younger Hubbards were all members of the Sea Org and shared its rigors, though Quentin Hubbard reportedly found it difficult to adjust and attempted suicide in mid-1974. During the 1970s, Hubbard faced an increasing number of legal threats. French prosecutors charged him and the French Church of Scientology with fraud and customs violations in 1972. He was advised that he was at risk of being extradited to France. Hubbard left the Sea Org fleet temporarily at the end of 1972, living incognito in Queens, New York, until he returned to his flagship in September 1973, when the threat of extradition had abated. Scientology sources say that he carried out "a sociological study in and around New York City". Hubbard's health deteriorated significantly during this period. A chain-smoker, he also suffered from bursitis and excessive weight, and had a prominent growth on his forehead. He suffered serious injuries in a motorcycle accident in 1973 and had a heart attack in 1975 that required him to take anticoagulant drugs for the next year. In September 1978, Hubbard had a pulmonary embolism and fell into a coma, but recovered. He remained active in managing and developing Scientology, establishing the controversial Rehabilitation Project Force in 1974 and issuing policy and doctrinal bulletins. However, the Sea Org's voyages were coming to an end. The "Apollo" was banned from several Spanish ports and was expelled from Curaçao in October 1975. The Sea Org came to be suspected of being a CIA operation, leading to a riot in Funchal, Madeira, when the "Apollo" docked there. At the time, "The Apollo Stars", a musical group founded by Hubbard and made up entirely of shipbound members of the Sea Org, was offering free on-pier concerts in an attempt to promote Scientology, and the riot occurred at one of these events. Hubbard decided to relocate back to the United States to establish a "land base" for the Sea Org in Florida. The Church of Scientology attributes this decision to the activities on the "Apollo" having "outgrow[n] the ship's capacity". In October 1975, Hubbard moved into a hotel suite in Daytona Beach. The Fort Harrison Hotel in Clearwater, Florida, was secretly acquired as the location for the "land base". On December 5, 1975, Hubbard and his wife Mary Sue moved into a condominium complex in nearby Dunedin. Their presence was meant to be a closely guarded secret but was accidentally compromised the following month. Hubbard immediately left Dunedin and moved to Georgetown, Washington, D.C., accompanied by a handful of aides and messengers, but not his wife. 
Six months later, following another security alert in July 1976, Hubbard moved to another safe house in Culver City, California. He lived there for only about three months, relocating in October to the more private confines of the Olive Tree Ranch near La Quinta. His second son Quentin committed suicide a few weeks later in Las Vegas. Throughout this period, Hubbard was heavily involved in directing the activities of the Guardian's Office (GO), the legal bureau/intelligence agency that he had established in 1966. He believed that Scientology was being attacked by an international Nazi conspiracy, which he termed the "Tenyaka Memorial", operating through a network of drug companies, banks and psychiatrists in a bid to take over the world. In 1973, he instigated the "Snow White Program" and directed the GO to remove negative reports about Scientology from government files and track down their sources. The GO was ordered to "get all false and secret files on Scientology, LRH ... that cannot be obtained legally, by all possible lines of approach ... i.e., job penetration, janitor penetration, suitable guises utilizing covers." His involvement in the GO's operations was concealed through the use of codenames. The GO carried out covert campaigns on his behalf such as Operation Bulldozer Leak, intended "to effectively spread the rumor that will lead Government, media, and individual [Suppressive Persons] to conclude that LRH has no control of the C of S and no legal liability for Church activity". He was kept informed of GO operations, such as the theft of medical records from a hospital, harassment of psychiatrists and infiltrations of organizations that had been critical of Scientology at various times, such as the Better Business Bureau, the American Medical Association, and the American Psychiatric Association. Members of the GO infiltrated and burglarized numerous government organizations, including the U.S. Department of Justice and the Internal Revenue Service. After two GO agents were caught in the Washington, D.C. headquarters of the IRS, the FBI carried out simultaneous raids on GO offices in Los Angeles and Washington, D.C. on July 7, 1977. They retrieved wiretap equipment, burglary tools and some 90,000 pages of incriminating documents. Hubbard was not prosecuted, though he was labeled an "unindicted co-conspirator" by government prosecutors. His wife Mary Sue was indicted and subsequently convicted of conspiracy. She was sent to a federal prison along with ten other Scientologists. Hubbard's troubles increased in February 1978 when a French court convicted him in absentia for obtaining money under false pretenses. He was sentenced to four years in prison and a 35,000 FF ($7,000) fine. He went into hiding in April 1979, moving to an apartment in Hemet, California, where his only contact with the outside world was via ten trusted Messengers. He cut contact with everyone else, even his wife, whom he saw for the last time in August 1979. Hubbard faced a possible indictment for his role in Operation Freakout, the GO's campaign against New York journalist Paulette Cooper, and in February 1980 he disappeared into deep cover in the company of two trusted Messengers, Pat and Anne Broeker. For the first few years of the 1980s, Hubbard and the Broekers lived on the move, touring the Pacific Northwest in a recreational vehicle and living for a while in apartments in Newport Beach and Los Angeles. 
Hubbard used his time in hiding to write his first new works of science fiction for nearly thirty years—"Battlefield Earth" (1982) and "Mission Earth", a ten-volume series published between 1985 and 1987. They received mixed responses; as writer Jeff Walker puts it, they were "treated derisively by most critics but greatly admired by followers". Hubbard also wrote and composed music for three of his albums, which were produced by the Church of Scientology. The book soundtrack "Space Jazz" was released in 1982. "Mission Earth" and "The Road to Freedom" were released posthumously in 1986. In Hubbard's absence, members of the Sea Org staged a takeover of the Church of Scientology and purged many veteran Scientologists. A young Messenger, David Miscavige, became Scientology's "de facto" leader. Mary Sue Hubbard was forced to resign her position and her daughter Suzette became Miscavige's personal maid. For the last two years of his life, Hubbard lived in a luxury Blue Bird motorhome on Whispering Winds, a 160-acre ranch near Creston, California. He remained in deep hiding while controversy raged in the outside world about whether he was still alive and if so, where. He spent his time "writing and researching", according to a spokesperson, and pursued photography and music, overseeing construction work and checking on his animals. He repeatedly redesigned the property, spending millions of dollars remodeling the ranch house—which went virtually uninhabited—and building a quarter-mile horse-racing track with an observation tower, which reportedly was never used. He was still closely involved in managing the Church of Scientology via secretly delivered orders and continued to receive large amounts of money, of which "Forbes" magazine estimated "at least $200 million [was] gathered in Hubbard's name through 1982." In September 1985, the IRS notified the Church that it was considering indicting Hubbard for tax fraud. Hubbard suffered further ill-health, including chronic pancreatitis, during his residence at Whispering Winds. He suffered a stroke on January 17, 1986, and died a week later. His body was cremated and the ashes were scattered at sea. Scientology leaders announced that his body had become an impediment to his work and that he had decided to "drop his body" to continue his research on another planet, having "learned how to do it without a body". Hubbard was survived by his wife Mary Sue and all of his children except his second son Quentin. His will provided a trust fund to support Mary Sue; her children Arthur, Diana and Suzette; and Katherine, the daughter of his first wife Polly. He disinherited two of his other children. L. Ron Hubbard, Jr. had become estranged, changed his name to "Ronald DeWolf" and, in 1982, sued unsuccessfully for control of his father's estate. Alexis Valerie, Hubbard's daughter by his second wife Sara, had attempted to contact her father in 1971. She was rebuffed with the implied claim that her real father was Jack Parsons rather than Hubbard, and that her mother had been a Nazi spy during the war. Both later accepted settlements when litigation was threatened. In 2001, Diana and Suzette were reported to still be Church members, while Arthur had left and become an artist. Hubbard's great-grandson, Jamie DeWolf, is a noted slam poet. The copyrights of his works and much of his estate and wealth were willed to the Church of Scientology. 
In a bulletin dated May 5, 1980, Hubbard told his followers to preserve his teachings until an eventual reincarnation when he would return "not as a religious leader but as a political one". The Church of Spiritual Technology (CST), a sister organization of the Church of Scientology, has engraved Hubbard's entire corpus of Scientology and Dianetics texts on steel tablets stored in titanium containers. They are buried at the Trementina Base in a vault under a mountain near Trementina, New Mexico, on top of which the CST's logo has been bulldozed on such a gigantic scale that it is visible from space. Hubbard is the Guinness World Record holder for the most published author, with 1,084 works, most translated book (70 languages for "The Way to Happiness") and most audiobooks (185 as of April 2009). According to Galaxy Press, Hubbard's "Battlefield Earth" has sold over 6 million copies and "Mission Earth" a further 7 million, with each of its ten volumes becoming "New York Times" bestsellers on their release; however, the "Los Angeles Times" reported in 1990 that Hubbard's followers had been buying large numbers of the books and re-issuing them to stores, so as to boost sales figures. Opinions are divided about his literary legacy. Scientologists have written of their desire to "make Ron the most acclaimed and widely known author of all time". The sociologist William Sims Bainbridge writes that even at his peak in the late 1930s Hubbard was regarded by readers of "Astounding Science Fiction" as merely "a passable, familiar author but not one of the best", while by the late 1970s "the [science fiction] subculture wishes it could forget him" and fans gave him a worse rating than any other of the "Golden Age" writers. Hubbard has also received posthumous recognition: in 1996, the Los Angeles City Council named a portion of the street near the Church of Scientology headquarters in his honor. In 2011, the West Valley City Council declared March 13 as L. Ron Hubbard Centennial Day. In April 2016, the New Jersey State Board of Education approved Hubbard's birthday as one of its religious holidays. In 2004, eighteen years after Hubbard's death, the Church claimed eight million followers worldwide. According to religious scholar J. Gordon Melton, this is an overestimate, counting as Scientologists people who had merely bought a book. The City University of New York's American Religious Identification Survey found that by 2009 only 25,000 Americans identified as Scientologists. Hubbard's presence still pervades Scientology. Every Church of Scientology maintains an office reserved for Hubbard, with a desk, chair and writing equipment, ready to be used. Lonnie D. Kliever notes that Hubbard was "the only source of the religion, and he has no successor". Hubbard is referred to simply as "Source" within Scientology and the theological acceptability of any Scientology-related activity is determined by how closely it adheres to Hubbard's doctrines. Hubbard's name and signature are official trademarks of the Religious Technology Center, established in 1982 to control and oversee the use of Hubbard's works and Scientology's trademarks and copyrights. The RTC is the central organization within Scientology's complex corporate hierarchy and has put much effort into re-checking the accuracy of all Scientology publications to "ensur[e] the availability of the pure unadulterated writings of Mr. Hubbard to the coming generations". 
The Danish historian of religions Mikael Rothstein describes Scientology as "a movement focused on the figure of Hubbard". He comments: "The fact that [Hubbard's] life is mythologized is as obvious as in the cases of Jesus, Muhammad or Siddartha Gotama. This is how religion works. Scientology, however, rejects this analysis altogether, and goes to great lengths to defend every detail of Hubbard's amazing and fantastic life as plain historical fact." Hubbard is presented as "the master of a multitude of disciplines" who performed extraordinary feats as a photographer, composer, scientist, therapist, explorer, navigator, philosopher, poet, artist, humanitarian, adventurer, soldier, scout, musician and many other fields of endeavor. The Church of Scientology portrays Hubbard's life and work as having proceeded seamlessly, "as if they were a continuous set of predetermined events and discoveries that unfolded through his lifelong research" even up to and beyond his death. According to Rothstein's assessment of Hubbard's legacy, Scientology consciously aims to transfer the charismatic authority of Hubbard to institutionalize his authority over the organization, even after his death. Hubbard is presented as a virtually superhuman religious ideal just as Scientology itself is presented as the most important development in human history. As Rothstein puts it, "reverence for Scientology's scripture is reverence for Hubbard, the man who in the Scientological perspective single-handedly brought salvation to all human beings." David G. Bromley of the University of Virginia comments that the real Hubbard has been transformed into a "prophetic persona", "LRH", which acts as the basis for his prophetic authority within Scientology and transcends his biographical history. According to Dorthe Refslund Christensen, Hubbard's hagiography directly compares him with Buddha. Hubbard is viewed as having made Eastern traditions more accessible by approaching them with a scientific attitude. "Hubbard is seen as the ultimate-cross-cultural savior; he is thought to be able to release man from his miserable condition because he had the necessary background, and especially the right attitude." Hubbard, although increasingly deified after his death, is the model Operating Thetan to Scientologists and their founder, and not God. Hubbard then is the "Source", "inviting others to follow his path in ways comparable to a Bodhisattva figure" according to religious scholar Donald A. Westbrook. Scientologists refer to L. Ron Hubbard as "Ron", referring to him as a personal friend. In the late 1970s two men began to assemble a picture of Hubbard's life. Michael Linn Shannon, a resident of Portland, Oregon, became interested in Hubbard's life story after an encounter with a Scientology recruiter. Over the next four years he collected previously undisclosed records and documents. He intended to write an exposé of Hubbard and sent a copy of his findings and key records to a number of contacts but was unable to find a publisher. Shannon's findings were acquired by Gerry Armstrong, a Scientologist who had been appointed Hubbard's official archivist. He had been given the job of assembling documents relating to Hubbard's life for the purpose of helping Omar V. Garrison, a non-Scientologist who had written two books sympathetic to Scientology, to write an official biography. However, the documents that he uncovered convinced both Armstrong and Garrison that Hubbard had systematically misrepresented his life. 
Garrison refused to write a "puff piece" and declared that he would not "repeat all the falsehoods they [the Church of Scientology] had perpetuated over the years". He wrote a "warts and all" biography while Armstrong quit Scientology, taking five boxes of papers with him. The Church of Scientology and Mary Sue Hubbard sued for the return of the documents while settling out of court with Garrison, requiring him to turn over the nearly completed manuscript of the biography. In October 1984 Judge Paul G. Breckenridge ruled in Armstrong's favor. In November 1987, the British journalist and writer Russell Miller published "Bare-faced Messiah", the first full-length biography of L. Ron Hubbard. He drew on Armstrong's papers, official records and interviews with those who had known Hubbard including ex-Scientologists and family members. The book was well-received by reviewers but the Church of Scientology sought unsuccessfully to prohibit its publication on the grounds of copyright infringement. Other critical biographical accounts are found in Bent Corydon's "L. Ron Hubbard, Messiah or Madman?" (1987) and Jon Atack's "A Piece of Blue Sky" (1990). Hagiographical accounts published by the Church of Scientology describe Hubbard as "a child prodigy of sorts" who rode a horse before he could walk and was able to read and write by the age of four. A Scientology profile says that he was brought up on his grandfather's "large cattle ranch in Montana" where he spent his days "riding, breaking broncos, hunting coyote and taking his first steps as an explorer". His grandfather is described as a "wealthy Western cattleman" from whom Hubbard "inherited his fortune and family interests in America, Southern Africa, etc." Scientology claims that Hubbard became a "blood brother" of the Native American Blackfeet tribe at the age of six through his friendship with a Blackfeet medicine man. However, contemporary records show that his grandfather, Lafayette Waterbury, was a veterinarian, not a rancher, and was not wealthy. Hubbard was actually raised in a townhouse in the center of Helena. According to his aunt, his family did not own a ranch but did own one cow and four or five horses on a few acres of land outside the city. Hubbard lived over a hundred miles from the Blackfeet reservation. While some sources support Scientology's claim of Hubbard's blood brotherhood, other sources say that the tribe did not practice blood brotherhood and no evidence has been found that he had ever been a Blackfeet blood brother. According to Scientology biographies, during a journey to Washington, D.C. in 1923 Hubbard learned of Freudian psychology from Commander Joseph "Snake" Thompson, a U.S. Navy psychoanalyst and medic. Scientology biographies describe this encounter as giving Hubbard training in a particular scientific approach to the mind, which he found unsatisfying. In his diary, Hubbard claimed he was the youngest Eagle Scout in the U.S. Scientology texts present Hubbard's travels in Asia as a time when he was intensely curious for answers to human suffering and explored ancient Eastern philosophies for answers, but found them lacking. He is described as traveling to China "at a time when few Westerners could enter" and according to Scientology, spent his time questioning Buddhist lamas and meeting old Chinese magicians. According to church materials, his travels were funded by his "wealthy grandfather". 
Scientology accounts say that Hubbard "made his way deep into Manchuria's Western Hills and beyond — to break bread with Mongolian bandits, share campfires with Siberian shamans and befriend the last in the line of magicians from the court of Kublai Khan". However, Hubbard did not record these events in his diary. He remained unimpressed with China and the Chinese, writing: "A Chinaman can not live up to a thing, he always drags it down." He characterized the sights of Beijing as "rubberneck stations" for tourists and described the palaces of the Forbidden City as "very trashy-looking" and "not worth mentioning". He was impressed by the Great Wall of China near Beijing, but concluded of the Chinese: "They smell of all the baths they didn't take. The trouble with China is, there are too many chinks here." Despite not graduating from George Washington, Hubbard claimed "to be not only a graduate engineer, but 'a member of the first United States course in formal education in what is called today nuclear physics.'" However, a Church of Scientology biography describes him as "never noted for being in class" and says that he "thoroughly detest[ed] his subjects". He earned poor grades, was placed on probation in September 1931 and dropped out altogether in the fall of 1932. Scientology accounts say that he "studied nuclear physics at George Washington University in Washington, D.C., before he started his studies about the mind, spirit and life" and Hubbard himself stated that he "set out to find out from nuclear physics a knowledge of the physical universe, something entirely lacking in Asian philosophy". His university records indicate that his exposure to "nuclear physics" consisted of one class in "atomic and molecular phenomena" for which he earned an "F" grade. Scientologists claim he was more interested in extracurricular activities, particularly writing and flying. According to church materials, "he earned his wings as a pioneering barnstormer at the dawn of American aviation" and was "recognized as one of the country's most outstanding pilots. With virtually no training time, he takes up powered flight and barnstorms throughout the Midwest." His airman certificate, however, records that he qualified to fly only gliders rather than powered aircraft and gave up his certificate when he could not afford the renewal fee. After leaving university Hubbard traveled to Puerto Rico on what the Church of Scientology calls the "Puerto Rican Mineralogical Expedition". Scientologists claim he "made the first complete mineralogical survey of Puerto Rico" as a means of "augmenting his [father's] pay with a mining venture", during which he "sluiced inland rivers and crisscrossed the island in search of elusive gold" as well as carrying out "much ethnological work amongst the interior villages and native hillsmen". Hubbard's unofficial biographer Russell Miller writes that neither the United States Geological Survey nor the Puerto Rican Department of Natural Resources have any record of any such expedition. According to the Church of Scientology, Hubbard was "called to Hollywood" to work on film scripts in the mid-1930s, although Scientology accounts differ as to exactly when this was (whether 1935, 1936 or 1937). The Church of Scientology claims he also worked on the Columbia serials "The Mysterious Pilot" (1937), "The Great Adventures of Wild Bill Hickok" (1938) and "The Spider Returns" (1941), though his name does not appear on the credits. 
Hubbard also claimed to have written "Dive Bomber" (1941), Cecil B. DeMille's "The Plainsman" (1936) and John Ford's "Stagecoach" (1939). Scientology accounts of the expedition to Alaska describe "Hubbard's recharting of an especially treacherous Inside Passage, and his ethnological study of indigenous Aleuts and Haidas" and tell of how "along the way, he not only roped a Kodiak Bear, but braved seventy-mile-an-hour winds and commensurate seas off the Aleutian Islands." They are divided, however, about how far Hubbard's expedition actually traveled. The Church disputes the official record of Hubbard's naval career. It asserts that the records are incomplete and perhaps falsified "to conceal Hubbard's secret activities as an intelligence officer". In 1990 the Church provided the "Los Angeles Times" with a document that was said to be a copy of Hubbard's official record of service. The U.S. Navy told the "Times" that "its contents are not supported by Hubbard's personnel record." "The New Yorker" reported in February 2011 that the Scientology document was considered by federal archivists to be a forgery. The Church of Scientology presents him as a "much-decorated war hero who commanded a corvette and during hostilities was crippled and wounded". Scientology publications say he served as a "Commodore of Corvette squadrons" in "all five theaters of World War II" and was awarded "twenty-one medals and palms" for his service. He was "severely wounded and was taken crippled and blinded" to a military hospital, where he "worked his way back to fitness, strength and full perception in less than two years, using only what he knew and could determine about Man and his relationship to the universe". He said that he had seen combat repeatedly, telling A. E. van Vogt that he had once sailed his ship "right into the harbor of a Japanese occupied island in the Dutch East Indies. His attitude was that if you took your flag down the Japanese would not know one boat from another, so he tied up at the dock, went ashore and wandered around by himself for three days." Hubbard's war service has great significance in the history and mythology of the Church of Scientology, as he is said to have cured himself through techniques that would later underpin Scientology and Dianetics. According to Moulton, Hubbard told him that he had been machine-gunned in the back near the Dutch East Indies. Hubbard asserted that his eyes had been damaged as well, either "by the flash of a large-caliber gun" or when he had "a bomb go off in my face". Scientology texts say that he returned from the war "[b]linded with injured optic nerves, and lame with physical injuries to hip and back" and was twice pronounced dead. Hubbard's official Navy service records indicate that "his military performance was, at times, substandard" and he received only four campaign medals rather than the claimed twenty-one. He was never recorded as being injured or wounded in combat and never received a Purple Heart. The Church of Scientology says that Hubbard's key breakthrough in the development of Dianetics was made at Oak Knoll Naval Hospital in Oakland, California. Scientology accounts do not mention Hubbard's involvement in occultism; he is instead described as "continu[ing] to write to help support his research" during this period into "the development of a means to better the condition of man". 
The Church of Scientology has nonetheless acknowledged Hubbard's involvement with the OTO; a 1969 statement, written by Hubbard himself, addressed the episode. The Church of Scientology says Hubbard was "sent in" by his fellow science fiction author Robert Heinlein, "who was running off-book intelligence operations for naval intelligence at the time". However, Heinlein's authorized biographer has said that he looked into the matter at the suggestion of Scientologists but found nothing to corroborate claims that Heinlein had been involved, and his biography of Heinlein makes no mention of the matter. The Church of Scientology says Hubbard quit the Navy because it "attempted to monopolize all his researches and force him to work on a project 'to make man more suggestible' and when he was unwilling, tried to blackmail him by ordering him back to active duty to perform this function. Having many friends he was able to instantly resign from the Navy and escape this trap." The Navy said in a statement in 1980: "There is no evidence on record of an attempt to recall him to active duty." Following Hubbard's death, Bridge Publications has published several stand-alone biographical accounts of his life. Marco Frenschkowski notes that "non-Scientologist readers immediately recognize some parts of Hubbard's life are here systematically left out: no information whatsoever is given about his private life (his marriages, divorces, children), his legal affairs and so on." The Church maintains an extensive website presenting the official version of Hubbard's life. It also owns a number of properties dedicated to Hubbard including the Los Angeles-based L. Ron Hubbard Life Exhibition (a presentation of Hubbard's life), the Author Services Center (a presentation of Hubbard's writings), and the L. Ron Hubbard House in Washington, D.C. In late 2012, Bridge published a comprehensive official biography of Hubbard, titled "The L. Ron Hubbard Series: A Biographical Encyclopedia", written primarily by Dan Sherman, the official Hubbard biographer at the time. This most recent official Church of Scientology biography of Hubbard is a 17-volume series, with each volume focusing on a different aspect of Hubbard's life, including his music, photography, geographic exploration, humanitarian work, and nautical career. It is advertised as a "Biographic Encyclopedia". To date, there has not been a single-volume comprehensive official biography published. According to the Church of Scientology, Hubbard produced some 65 million words on Dianetics and Scientology, contained in about 500,000 pages of written material, 3,000 recorded lectures and 100 films. His works of fiction included some 500 novels and short stories. Hubbard "published nearly 600 books, stories, and articles during his lifetime." He sold over 23 million copies of fiction and 27 million copies of nonfiction. Leonhard Euler Leonhard Euler (15 April 1707 – 18 September 1783) was a Swiss mathematician, physicist, astronomer, logician and engineer, who made important and influential discoveries in many branches of mathematics, such as infinitesimal calculus and graph theory, while also making pioneering contributions to several branches such as topology and analytic number theory. He also introduced much of the modern mathematical terminology and notation, particularly for mathematical analysis, such as the notion of a mathematical function. 
He is also known for his work in mechanics, fluid dynamics, optics, astronomy, and music theory. Euler was one of the most eminent mathematicians of the 18th century and is held to be one of the greatest in history. He is also widely considered to be the most prolific mathematician of all time. His collected works fill 60 to 80 quarto volumes, more than anybody in the field. He spent most of his adult life in Saint Petersburg, Russia, and in Berlin, then the capital of Prussia. A statement attributed to Pierre-Simon Laplace expresses Euler's influence on mathematics: "Read Euler, read Euler, he is the master of us all." Leonhard Euler was born on 15 April 1707, in Basel, Switzerland to Paul III Euler, a pastor of the Reformed Church, and Marguerite Brucker, a pastor's daughter. He had two younger sisters: Anna Maria and Maria Magdalena, and a younger brother Johann Heinrich. Soon after the birth of Leonhard, the Eulers moved from Basel to the town of Riehen, where Euler spent most of his childhood. Paul Euler was a friend of the Bernoulli family; Johann Bernoulli was then regarded as Europe's foremost mathematician, and would eventually be the most important influence on young Leonhard. Euler's formal education started in Basel, where he was sent to live with his maternal grandmother. In 1720, aged thirteen, he enrolled at the University of Basel, and in 1723, he received a Master of Philosophy with a dissertation that compared the philosophies of Descartes and Newton. During that time, he was receiving Saturday afternoon lessons from Johann Bernoulli, who quickly discovered his new pupil's incredible talent for mathematics. At that time Euler's main studies included theology, Greek, and Hebrew at his father's urging in order to become a pastor, but Bernoulli convinced his father that Leonhard was destined to become a great mathematician. In 1726, Euler completed a dissertation on the propagation of sound with the title "De Sono". At that time, he was unsuccessfully attempting to obtain a position at the University of Basel. In 1727, he first entered the "Paris Academy Prize Problem" competition; the problem that year was to find the best way to place the masts on a ship. Pierre Bouguer, who became known as "the father of naval architecture", won and Euler took second place. Euler later won this annual prize twelve times. Around this time Johann Bernoulli's two sons, Daniel and Nicolaus, were working at the Imperial Russian Academy of Sciences in Saint Petersburg. On 31 July 1726, Nicolaus died of appendicitis after spending less than a year in Russia, and when Daniel assumed his brother's position in the mathematics/physics division, he recommended that the post in physiology that he had vacated be filled by his friend Euler. In November 1726 Euler eagerly accepted the offer, but delayed making the trip to Saint Petersburg while he unsuccessfully applied for a physics professorship at the University of Basel. Euler arrived in Saint Petersburg on 17 May 1727. He was promoted from his junior post in the medical department of the academy to a position in the mathematics department. He lodged with Daniel Bernoulli with whom he often worked in close collaboration. Euler mastered Russian and settled into life in Saint Petersburg. He also took on an additional job as a medic in the Russian Navy. The Academy at Saint Petersburg, established by Peter the Great, was intended to improve education in Russia and to close the scientific gap with Western Europe. 
As a result, it was made especially attractive to foreign scholars like Euler. The academy possessed ample financial resources and a comprehensive library drawn from the private libraries of Peter himself and of the nobility. Very few students were enrolled in the academy in order to lessen the faculty's teaching burden, and the academy emphasized research and offered to its faculty both the time and the freedom to pursue scientific questions. The Academy's benefactress, Catherine I, who had continued the progressive policies of her late husband, died on the day of Euler's arrival. The Russian nobility then gained power upon the ascension of the twelve-year-old Peter II. The nobility was suspicious of the academy's foreign scientists, and thus cut funding and caused other difficulties for Euler and his colleagues. Conditions improved slightly after the death of Peter II, and Euler swiftly rose through the ranks in the academy and was made a professor of physics in 1731. Two years later, Daniel Bernoulli, who was fed up with the censorship and hostility he faced at Saint Petersburg, left for Basel. Euler succeeded him as the head of the mathematics department. On 7 January 1734, he married Katharina Gsell (1707–1773), a daughter of Georg Gsell, a painter from the Academy Gymnasium. The young couple bought a house by the Neva River. Of their thirteen children, only five survived childhood. Concerned about the continuing turmoil in Russia, Euler left St. Petersburg on 19 June 1741 to take up a post at the "Berlin Academy", which he had been offered by Frederick the Great of Prussia. He lived for 25 years in Berlin, where he wrote over 380 articles. In Berlin, he published the two works for which he would become most renowned: the "Introductio in analysin infinitorum", a text on functions published in 1748, and the "Institutiones calculi differentialis", published in 1755 on differential calculus. In 1755, he was elected a foreign member of the Royal Swedish Academy of Sciences. In addition, Euler was asked to tutor Friederike Charlotte of Brandenburg-Schwedt, the Princess of Anhalt-Dessau and Frederick's niece. Euler wrote over 200 letters to her in the early 1760s, which were later compiled into a best-selling volume entitled "Letters of Euler on different Subjects in Natural Philosophy Addressed to a German Princess". This work contained Euler's exposition on various subjects pertaining to physics and mathematics, as well as offering valuable insights into Euler's personality and religious beliefs. This book became more widely read than any of his mathematical works and was published across Europe and in the United States. The popularity of the "Letters" testifies to Euler's ability to communicate scientific matters effectively to a lay audience, a rare ability for a dedicated research scientist. Despite Euler's immense contribution to the Academy's prestige, he eventually incurred the ire of Frederick and ended up having to leave Berlin. The Prussian king had a large circle of intellectuals in his court, and he found the mathematician unsophisticated and ill-informed on matters beyond numbers and figures. Euler was a simple, devoutly religious man who never questioned the existing social order or conventional beliefs, in many ways the polar opposite of Voltaire, who enjoyed a high place of prestige at Frederick's court. Euler was not a skilled debater and often made it a point to argue subjects that he knew little about, making him the frequent target of Voltaire's wit. 
Frederick also expressed disappointment with Euler's practical engineering abilities. Euler's eyesight worsened throughout his mathematical career. In 1738, three years after nearly dying of fever, he became almost blind in his right eye; Euler himself instead blamed his condition on the painstaking work on cartography he performed for the St. Petersburg Academy. Euler's vision in that eye worsened throughout his stay in Germany, to the extent that Frederick referred to him as "Cyclops". Euler remarked on his loss of vision, "Now I will have fewer distractions." He later developed a cataract in his left eye, which was discovered in 1766. Just a few weeks after its discovery, he was rendered almost totally blind. However, his condition appeared to have little effect on his productivity, as he compensated for it with his mental calculation skills and exceptional memory. For example, Euler could repeat the "Aeneid" of Virgil from beginning to end without hesitation, and for every page in the edition he could indicate which line was the first and which the last. With the aid of his scribes, Euler's productivity on many areas of study actually increased. He produced, on average, one mathematical paper every week in the year 1775. The Eulers bore a double name, Euler-Schölpi, the latter of which derives from "schelb" and "schief", signifying squint-eyed, cross-eyed, or crooked. This suggests that the Eulers may have had a susceptibility to eye problems. In 1760, with the Seven Years' War raging, Euler's farm in Charlottenburg was ransacked by advancing Russian troops. Upon learning of this event, General Ivan Petrovich Saltykov paid compensation for the damage caused to Euler's estate; later, Empress Elizabeth of Russia added a further payment of 4000 roubles – an exorbitant amount at the time. The political situation in Russia stabilized after Catherine the Great's accession to the throne, so in 1766 Euler accepted an invitation to return to the St. Petersburg Academy. His conditions were quite exorbitant – a 3000 ruble annual salary, a pension for his wife, and the promise of high-ranking appointments for his sons. All of these requests were granted. He spent the rest of his life in Russia. However, his second stay in the country was marred by tragedy. A fire in St. Petersburg in 1771 cost him his home, and almost his life. In 1773, he lost his wife Katharina after 40 years of marriage. Three years after his wife's death, Euler married her half-sister, Salome Abigail Gsell (1723–1794). This marriage lasted until his death. In 1782 he was elected a Foreign Honorary Member of the American Academy of Arts and Sciences. In St. Petersburg on 18 September 1783, after a lunch with his family, Euler was discussing the newly discovered planet Uranus and its orbit with a fellow academician Anders Johan Lexell, when he collapsed from a brain hemorrhage. He died a few hours later. A short obituary was written for the Russian Academy of Sciences, and Russian mathematician Nicolas Fuss, one of Euler's disciples, wrote a more detailed eulogy, which he delivered at a memorial meeting. French mathematician and philosopher Marquis de Condorcet wrote a eulogy for the French Academy. Euler was buried next to Katharina at the Smolensk Lutheran Cemetery on Goloday Island. In 1785, the Russian Academy of Sciences put a marble bust of Leonhard Euler on a pedestal next to the Director's seat and, in 1837, placed a headstone on Euler's grave. 
To commemorate the 250th anniversary of Euler's birth, the headstone was moved in 1956, together with his remains, to the 18th-century necropolis at the Alexander Nevsky Monastery. Euler worked in almost all areas of mathematics, such as geometry, infinitesimal calculus, trigonometry, algebra, and number theory, as well as continuum physics, lunar theory and other areas of physics. He is a seminal figure in the history of mathematics; if printed, his works, many of which are of fundamental interest, would occupy between 60 and 80 quarto volumes. Euler's name is associated with a large number of topics. Euler is the only mathematician to have "two" numbers named after him: the important Euler's number in calculus, "e", approximately equal to 2.71828, and the Euler–Mascheroni constant γ (gamma) sometimes referred to as just "Euler's constant", approximately equal to 0.57721. It is not known whether γ is rational or irrational. Euler introduced and popularized several notational conventions through his numerous and widely circulated textbooks. Most notably, he introduced the concept of a function and was the first to write "f"("x") to denote the function "f" applied to the argument "x". He also introduced the modern notation for the trigonometric functions, the letter "e" for the base of the natural logarithm (now also known as Euler's number), the Greek letter Σ for summations and the letter "i" to denote the imaginary unit. The use of the Greek letter "π" to denote the ratio of a circle's circumference to its diameter was also popularized by Euler, although it originated with Welsh mathematician William Jones. The development of infinitesimal calculus was at the forefront of 18th-century mathematical research, and the Bernoullis—family friends of Euler—were responsible for much of the early progress in the field. Thanks to their influence, studying calculus became the major focus of Euler's work. While some of Euler's proofs are not acceptable by modern standards of mathematical rigour (in particular his reliance on the principle of the generality of algebra), his ideas led to many great advances. Euler is well known in analysis for his frequent use and development of power series, the expression of functions as sums of infinitely many terms, such as e^x = 1 + x + x^2/2! + x^3/3! + ... Notably, Euler directly proved the power series expansions for "e" and the inverse tangent function. (Indirect proof via the inverse power series technique was given by Newton and Leibniz between 1670 and 1680.) His daring use of power series enabled him to solve the famous Basel problem in 1735 (he provided a more elaborate argument in 1741): 1 + 1/4 + 1/9 + 1/16 + ... = π^2/6. Euler introduced the use of the exponential function and logarithms in analytic proofs. He discovered ways to express various logarithmic functions using power series, and he successfully defined logarithms for negative and complex numbers, thus greatly expanding the scope of mathematical applications of logarithms. He also defined the exponential function for complex numbers, and discovered its relation to the trigonometric functions. For any real number φ (taken to be radians), Euler's formula states that the complex exponential function satisfies e^(iφ) = cos φ + i sin φ. A special case of the above formula, e^(iπ) + 1 = 0, is known as Euler's identity, called "the most remarkable formula in mathematics" by Richard P. Feynman, for its single uses of the notions of addition, multiplication, exponentiation, and equality, and the single uses of the important constants 0, 1, "e", "i" and "π". 
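These relations are easy to check numerically. The short Python sketch below is purely illustrative (it is not drawn from Euler or from the article's sources): it evaluates Euler's identity with the standard cmath module and sums an initial segment of the Basel series to show it approaching π^2/6.

```python
import cmath
import math

# Euler's identity: e^(i*pi) + 1 should vanish up to floating-point rounding.
identity_residual = cmath.exp(1j * math.pi) + 1
print(abs(identity_residual))          # ~1.2e-16

# Basel problem: the partial sums of 1/n^2 approach pi^2/6.
partial_sum = sum(1 / n**2 for n in range(1, 100_000))
print(partial_sum, math.pi**2 / 6)     # ~1.644924..., ~1.644934...
```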
In 1988, readers of the "Mathematical Intelligencer" voted it "the Most Beautiful Mathematical Formula Ever". In total, Euler was responsible for three of the top five formulae in that poll. De Moivre's formula is a direct consequence of Euler's formula. In addition, Euler elaborated the theory of higher transcendental functions by introducing the gamma function and introduced a new method for solving quartic equations. He also found a way to calculate integrals with complex limits, foreshadowing the development of modern complex analysis. He also invented the calculus of variations including its best-known result, the Euler–Lagrange equation. Euler also pioneered the use of analytic methods to solve number theory problems. In doing so, he united two disparate branches of mathematics and introduced a new field of study, analytic number theory. In breaking ground for this new field, Euler created the theory of hypergeometric series, q-series, hyperbolic trigonometric functions and the analytic theory of continued fractions. For example, he proved the infinitude of primes using the divergence of the harmonic series, and he used analytic methods to gain some understanding of the way prime numbers are distributed. Euler's work in this area led to the development of the prime number theorem. Euler's interest in number theory can be traced to the influence of Christian Goldbach, his friend in the St. Petersburg Academy. Much of Euler's early work on number theory was based on the works of Pierre de Fermat. Euler developed some of Fermat's ideas and disproved some of his conjectures. Euler linked the nature of prime distribution with ideas in analysis. He proved that the sum of the reciprocals of the primes diverges. In doing so, he discovered the connection between the Riemann zeta function and the prime numbers; this is known as the Euler product formula for the Riemann zeta function. Euler proved Newton's identities, Fermat's little theorem, Fermat's theorem on sums of two squares, and he made distinct contributions to Lagrange's four-square theorem. He also invented the totient function φ("n"), the number of positive integers less than or equal to the integer "n" that are coprime to "n". Using properties of this function, he generalized Fermat's little theorem to what is now known as Euler's theorem. He contributed significantly to the theory of perfect numbers, which had fascinated mathematicians since Euclid. He proved that the relationship between even perfect numbers and Mersenne primes, earlier established by Euclid, was one-to-one, a result otherwise known as the Euclid–Euler theorem. Euler also conjectured the law of quadratic reciprocity. The concept is regarded as a fundamental theorem of number theory, and his ideas paved the way for the work of Carl Friedrich Gauss. By 1772 Euler had proved that 2^31 − 1 = 2,147,483,647 is a Mersenne prime. It may have remained the largest known prime until 1867. In 1735, Euler presented a solution to the problem known as the Seven Bridges of Königsberg. The city of Königsberg, Prussia was set on the Pregel River, and included two large islands that were connected to each other and the mainland by seven bridges. The problem is to decide whether it is possible to follow a path that crosses each bridge exactly once and returns to the starting point. It is not possible: there is no Eulerian circuit. This solution is considered to be the first theorem of graph theory, specifically of planar graph theory. 
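Euler's reasoning about the bridges reduces to a parity count: a walk that crosses every bridge exactly once and returns to its start requires every land mass to touch an even number of bridge ends. The Python sketch below is an illustrative reconstruction of that degree check for the seven historical bridges; the letter labels for the land masses are mine.

```python
from collections import Counter

# Multigraph of 18th-century Königsberg: A is the Kneiphof island,
# B and C the two river banks, D the eastern land mass (labels are illustrative).
bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
           ("A", "D"), ("B", "D"), ("C", "D")]

degree = Counter()
for u, v in bridges:
    degree[u] += 1
    degree[v] += 1

# Euler's criterion: a circuit using every bridge exactly once exists in a
# connected multigraph only if every vertex has even degree.
odd_vertices = [node for node, d in degree.items() if d % 2 == 1]
print(dict(degree))                                    # {'A': 5, 'B': 3, 'C': 3, 'D': 3}
print("Eulerian circuit possible:", not odd_vertices)  # False
```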
Euler also discovered the formula V − E + F = 2 relating the number of vertices (V), edges (E) and faces (F) of a convex polyhedron, and hence of a planar graph. The constant in this formula is now known as the Euler characteristic for the graph (or other mathematical object), and is related to the genus of the object. The study and generalization of this formula, specifically by Cauchy and L'Huilier, is at the origin of topology. Some of Euler's greatest successes were in solving real-world problems analytically, and in describing numerous applications of the Bernoulli numbers, Fourier series, Euler numbers, the constants "e" and "π", continued fractions and integrals. He integrated Leibniz's differential calculus with Newton's Method of Fluxions, and developed tools that made it easier to apply calculus to physical problems. He made great strides in improving the numerical approximation of integrals, inventing what are now known as the Euler approximations. The most notable of these approximations are Euler's method and the Euler–Maclaurin formula. He also facilitated the use of differential equations, in particular introducing the Euler–Mascheroni constant γ = lim (1 + 1/2 + 1/3 + ... + 1/n − ln n) as n → ∞. One of Euler's more unusual interests was the application of mathematical ideas in music. In 1739 he wrote the "Tentamen novae theoriae musicae," hoping to eventually incorporate musical theory as part of mathematics. This part of his work, however, did not receive wide attention and was once described as too mathematical for musicians and too musical for mathematicians. Euler helped develop the Euler–Bernoulli beam equation, which became a cornerstone of engineering. Aside from successfully applying his analytic tools to problems in classical mechanics, Euler also applied these techniques to celestial problems. His work in astronomy was recognized by a number of Paris Academy Prizes over the course of his career. His accomplishments include determining with great accuracy the orbits of comets and other celestial bodies, understanding the nature of comets, and calculating the parallax of the sun. His calculations also contributed to the development of accurate longitude tables. In addition, Euler made important contributions in optics. He disagreed with Newton's corpuscular theory of light in the "Opticks", which was then the prevailing theory. His 1740s papers on optics helped ensure that the wave theory of light proposed by Christiaan Huygens would become the dominant mode of thought, at least until the development of the quantum theory of light. In 1757 he published an important set of equations for inviscid flow, that are now known as the Euler equations. In differential form, the equations express conservation of mass, momentum and energy: ∂ρ/∂t + ∇·(ρu) = 0, ∂(ρu)/∂t + ∇·(u ⊗ (ρu)) + ∇p = 0, and ∂E/∂t + ∇·(u(E + p)) = 0, where ρ is the mass density, u the flow velocity, p the pressure and E the total energy density. Euler is also well known in structural engineering for his formula giving the critical buckling load of an ideal strut, which depends only on its length and flexural stiffness: F = π^2 EI/L^2 for a strut pinned at both ends, where E is the modulus of elasticity, I the area moment of inertia of the cross-section (EI together being the flexural stiffness) and L the length of the strut. Euler is also credited with using closed curves to illustrate syllogistic reasoning (1768). These diagrams have become known as Euler diagrams. An Euler diagram is a diagrammatic means of representing sets and their relationships. Euler diagrams consist of simple closed curves (usually circles) in the plane that depict sets. Each Euler curve divides the plane into two regions or "zones": the interior, which symbolically represents the elements of the set, and the exterior, which represents all elements that are not members of the set. The sizes or shapes of the curves are not important; the significance of the diagram is in how they overlap. 
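As a loose modern analogue, the relationships an Euler diagram depicts map directly onto elementary set operations; the Python lines below (with made-up example sets) are only an illustration of the overlap, containment and disjointness cases discussed here.

```python
# Example sets chosen purely for illustration.
mammals = {"dog", "cat", "whale"}
pets = {"dog", "cat", "goldfish"}
fish = {"goldfish", "trout"}

# Overlapping curves: the zone inside both corresponds to the intersection.
print(mammals & pets)                 # {'dog', 'cat'}

# A curve drawn entirely inside another corresponds to a subset.
print({"dog", "cat"} <= mammals)      # True

# Curves that do not overlap correspond to disjoint sets.
print(mammals.isdisjoint(fish))       # True
```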
The spatial relationships between the regions bounded by each curve (overlap, containment or neither) correspond to set-theoretic relationships (intersection, subset and disjointness). Curves whose interior zones do not intersect represent disjoint sets. Two curves whose interior zones intersect represent sets that have common elements; the zone inside both curves represents the set of elements common to both sets (the intersection of the sets). A curve that is contained completely within the interior zone of another represents a subset of it. Euler diagrams (and their generalization in Venn diagrams) were incorporated as part of instruction in set theory as part of the new math movement in the 1960s. Since then, they have also been adopted by other curriculum fields such as reading. Even when dealing with music, Euler's approach is mainly mathematical. His writings on music are not particularly numerous (a few hundred pages, in his total production of about thirty thousand pages), but they reflect an early preoccupation and one that did not leave him throughout his life. A first point of Euler's musical theory is the definition of "genres", i.e. of possible divisions of the octave using the prime numbers 3 and 5. Euler describes 18 such genres, with the general definition 2^m·A, where A is the "exponent" of the genre (i.e. the sum of the exponents of 3 and 5) and 2^m (where "m is an indefinite number, small or large, so long as the sounds are perceptible"), expresses that the relation holds independently of the number of octaves concerned. The first genre, with A = 1, is the octave itself (or its duplicates); the second genre, 2^m·3, is the octave divided by the fifth (fifth + fourth, C–G–C); the third genre is 2^m·5, major third + minor sixth (C–E–C); the fourth is 2^m·3^2, two fourths and a tone (C–F–B♭–C); the fifth is 2^m·3·5 (C–E–G–B–C); etc. Genres 12 (2^m·3^3·5), 13 (2^m·3^2·5^2) and 14 (2^m·3·5^3) are corrected versions of the diatonic, chromatic and enharmonic, respectively, of the Ancients. Genre 18 (2^m·3^3·5^2) is the "diatonico-chromatic", "used generally in all compositions", and which turns out to be identical with the system described by Johann Mattheson. Euler later envisaged the possibility of describing genres including the prime number 7. Euler devised a specific graph, the "Speculum musicum", to illustrate the diatonico-chromatic genre, and discussed paths in this graph for specific intervals, recalling his interest in the Seven Bridges of Königsberg (see above). The device attracted renewed interest as the Tonnetz in neo-Riemannian theory (see also Lattice (music)). Euler further used the principle of the "exponent" to propose a derivation of the "gradus suavitatis" (degree of suavity, of agreeableness) of intervals and chords from their prime factors – one must keep in mind that he considered just intonation, i.e. 1 and the prime numbers 3 and 5 only. Formulas have been proposed extending this system to any number of prime numbers, e.g. in the form ds = Σ k_i(p_i − 1) + 1, where p_i are prime numbers and k_i their exponents; a short computational sketch of this form is given at the end of this passage. Euler and his friend Daniel Bernoulli were opponents of Leibniz's monadism and the philosophy of Christian Wolff. Euler insisted that knowledge is founded in part on the basis of precise quantitative laws, something that monadism and Wolffian science were unable to provide. Euler's religious leanings might also have had a bearing on his dislike of the doctrine; he went so far as to label Wolff's ideas as "heathen and atheistic". 
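To make the extended gradus formula mentioned above concrete, here is a small Python sketch. The function name is mine, and applying the formula to the least common multiple of an interval's ratio terms follows Euler's idea of the "exponent" but is offered only as an illustration, not as a transcription of his text.

```python
from math import lcm

def gradus(n: int) -> int:
    """Degree of agreeableness of n: 1 plus the sum of k*(p - 1) over the
    prime factorization p**k of n (illustrative implementation)."""
    total, p = 1, 2
    while n > 1:
        while n % p == 0:
            total += p - 1
            n //= p
        p += 1
    return total

# Applied to the least common multiple of some just-intonation ratios:
print(gradus(lcm(1, 2)))   # octave 1:2        -> 2
print(gradus(lcm(2, 3)))   # perfect fifth 2:3 -> 4
print(gradus(lcm(4, 5)))   # major third 4:5   -> 7
```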
Much of what is known of Euler's religious beliefs can be deduced from his "Letters to a German Princess" and an earlier work, "Rettung der Göttlichen Offenbahrung Gegen die Einwürfe der Freygeister" ("Defense of the Divine Revelation against the Objections of the Freethinkers"). These works show that Euler was a devout Christian who believed the Bible to be inspired; the "Rettung" was primarily an argument for the divine inspiration of scripture. There is a famous legend inspired by Euler's arguments with secular philosophers over religion, which is set during Euler's second stint at the St. Petersburg Academy. The French philosopher Denis Diderot was visiting Russia on Catherine the Great's invitation. However, the Empress was alarmed that the philosopher's arguments for atheism were influencing members of her court, and so Euler was asked to confront the Frenchman. Diderot was informed that a learned mathematician had produced a proof of the existence of God: he agreed to view the proof as it was presented in court. Euler appeared, advanced toward Diderot, and in a tone of perfect conviction announced this non-sequitur: "Sir, (a + b^n)/n = x, hence God exists—reply!" Diderot, to whom (says the story) all mathematics was gibberish, stood dumbstruck as peals of laughter erupted from the court. Embarrassed, he asked to leave Russia, a request that was graciously granted by the Empress. However amusing the anecdote may be, it is apocryphal, given that Diderot himself did research in mathematics. The legend was apparently first told by Dieudonné Thiébault with significant embellishment by Augustus De Morgan. Euler was featured on the sixth series of the Swiss 10-franc banknote and on numerous Swiss, German, and Russian postage stamps. The asteroid 2002 Euler was named in his honor. He is also commemorated by the Lutheran Church on their Calendar of Saints on 24 May—he was a devout Christian (and believer in biblical inerrancy) who wrote apologetics and argued forcefully against the prominent atheists of his time. Euler has an extensive bibliography; his best-known books include the "Introductio in analysin infinitorum" (1748) and the "Institutiones calculi differentialis" (1755). A definitive collection of Euler's works, entitled "Opera Omnia", has been published since 1911 by the Euler Commission of the Swiss Academy of Sciences. A complete chronological list of Euler's works is available in "The Eneström Index". League of Nations The League of Nations (abbreviated as LN in English; in French "Société des Nations", abbreviated as SDN) was an intergovernmental organisation founded on 10 January 1920 as a result of the Paris Peace Conference that ended the First World War. It was the first international organisation whose principal mission was to maintain world peace. Its primary goals, as stated in its Covenant, included preventing wars through collective security and disarmament and settling international disputes through negotiation and arbitration. Other issues in this and related treaties included labour conditions, just treatment of native inhabitants, human and drug trafficking, the arms trade, global health, prisoners of war, and protection of minorities in Europe. At its greatest extent from 28 September 1934 to 23 February 1935, it had 58 members. The diplomatic philosophy behind the League represented a fundamental shift from the preceding hundred years. 
The League lacked its own armed force and depended on the victorious Great Powers of World War I (France, the UK, Italy and Japan were the permanent members of the executive Council) to enforce its resolutions, keep to its economic sanctions, or provide an army when needed. The Great Powers were often reluctant to do so. Sanctions could hurt League members, so they were reluctant to comply with them. During the Second Italo-Abyssinian War, when the League accused Italian soldiers of targeting Red Cross medical tents, Benito Mussolini responded that "the League is very well when sparrows shout, but no good at all when eagles fall out." After some notable successes and some early failures in the 1920s, the League ultimately proved incapable of preventing aggression by the Axis powers in the 1930s. The credibility of the organization was weakened by the fact that the United States never officially joined the League and the Soviet Union joined late and only briefly. Germany withdrew from the League, as did Japan, Italy, Spain and others. The onset of the Second World War showed that the League had failed its primary purpose, which was to prevent any future world war. The League lasted for 26 years; the United Nations (UN) replaced it after the end of the Second World War and inherited several agencies and organisations founded by the League. The concept of a peaceful community of nations had been proposed as far back as 1795, when Immanuel Kant's "" outlined the idea of a league of nations to control conflict and promote peace between states. Kant argued for the establishment of a peaceful world community, not in a sense of a global government, but in the hope that each state would declare itself a free state that respects its citizens and welcomes foreign visitors as fellow rational beings, thus promoting peaceful society worldwide. International co-operation to promote collective security originated in the Concert of Europe that developed after the Napoleonic Wars in the 19th century in an attempt to maintain the "status quo" between European states and so avoid war. This period also saw the development of international law, with the first Geneva Conventions establishing laws dealing with humanitarian relief during wartime, and the international Hague Conventions of 1899 and 1907 governing rules of war and the peaceful settlement of international disputes. As historians William H. Harbaugh and Ronald E. Powaski point out, Theodore Roosevelt was the first American President to call for an international league. At the acceptance for his Nobel Prize, Roosevelt said: "it would be a masterstroke if those great powers honestly bent on peace would form a League of Peace." The forerunner of the League of Nations, the Inter-Parliamentary Union (IPU), was formed by the peace activists William Randal Cremer and Frédéric Passy in 1889 (and is currently still in existence as an international body with a focus on the various elected legislative bodies of the world.) The IPU was founded with an international scope, with a third of the members of parliaments (in the 24 countries that had parliaments) serving as members of the IPU by 1914. Its foundational aims were to encourage governments to solve international disputes by peaceful means. Annual conferences were established to help governments refine the process of international arbitration. Its structure was designed as a council headed by a president, which would later be reflected in the structure of the League. 
At the start of the First World War the first schemes for international organisation to prevent future wars began to gain considerable public support, particularly in Great Britain and the United States. Goldsworthy Lowes Dickinson, a British political scientist, coined the term "League of Nations" in 1914 and drafted a scheme for its organisation. Together with Lord Bryce, he played a leading role in the founding of the group of internationalist pacifists known as the Bryce Group, later the League of Nations Union. The group became steadily more influential among the public and as a pressure group within the then governing Liberal Party. In Dickinson's 1915 pamphlet "After the War" he wrote of his "League of Peace" as being essentially an organisation for arbitration and conciliation. He felt that the secret diplomacy of the early twentieth century had brought about war and thus could write that, "the impossibility of war, I believe, would be increased in proportion as the issues of foreign policy should be known to and controlled by public opinion." The ‘Proposals’ of the Bryce Group were circulated widely, both in England and the US, where they had a profound influence on the nascent international movement. Within two weeks of the start of the war, feminists began to mobilise against the war. Having been barred from participating in prior peace organizations, American women formed a Women's Peace Parade Committee to plan a silent protest to the war. Led by chairwoman Fanny Garrison Villard, women from trade unions, feminist organizations, and social reform organizations, such as Kate Waller Barrett, Mary Ritter Beard, Carrie Chapman Catt, Rose Schneiderman, Lillian Wald, and others, organized 1500 women, who marched down Manhattan's Fifth Avenue on August 29, 1914. As a result of the parade, Jane Addams became interested in proposals by two European suffragists—Hungarian Rosika Schwimmer and British Emmeline Pethick-Lawrence—to hold a peace conference. On 9–10 January 1915, a peace conference directed by Addams was held in Washington, D. C., where the delegates adopted a platform calling for creation of international bodies with administrative and legislative powers to develop a "permanent league of neutral nations" to work for peace and disarmament. Within months a call was made for an international women's conference to be held in The Hague. Coordinated by Mia Boissevain, Aletta Jacobs and Rosa Manus, the Congress, which opened on April 28, 1915 was attended by 1,136 participants from both neutral and non-belligerent nations, and resulted in the establishment of an organization which would become the Women's International League for Peace and Freedom (WILPF). At the close of the conference, two delegations of women were dispatched to meet European heads of state over the next several months. They secured agreement from reluctant Foreign Ministers, who overall felt that such a body would be ineffective, but agreed to participate or not impede creation of a neutral mediating body, if other nations agreed and if President Woodrow Wilson would initiate a body. In the midst of the War, Wilson refused. In 1915, a similar body to the Bryce group proposals was set up in the United States by a group of like-minded individuals, including William Howard Taft. It was called the League to Enforce Peace and was substantially based on the proposals of the Bryce Group. It advocated the use of arbitration in conflict resolution and the imposition of sanctions on aggressive countries. 
None of these early organisations envisioned a continuously functioning body; with the exception of the Fabian Society in England, they maintained a legalistic approach that would limit the international body to a court of justice. The Fabians were the first to argue for a "Council" of states, necessarily the Great Powers, who would adjudicate world affairs, and for the creation of a permanent secretariat to enhance international co-operation across a range of activities. In the course of the diplomatic efforts surrounding World War I, both sides had to clarify their long-term war aims. By 1916, long-range thinkers in Britain, the leader of the Allies, and in the neutral United States had begun to design a unified international organisation to prevent future wars. Historian Peter Yearwood argues that when the new coalition government of David Lloyd George took power in December 1916, there was widespread discussion among intellectuals and diplomats of the desirability of establishing such an organisation. When Lloyd George was challenged by Wilson to state his position regarding the postwar order, he endorsed such an organisation. Wilson himself included in his Fourteen Points in January 1918 a "league of nations to insure peace and justice." The British foreign secretary, Lord Balfour, argued that, as a condition of durable peace, "behind international law, and behind all treaty arrangements for preventing or limiting hostilities, some form of international sanction should be devised which would give pause to the hardiest aggressor." By the time the fighting ended in November 1918, the war had had a profound impact, affecting the social, political and economic systems of Europe and inflicting psychological and physical damage. Anti-war sentiment rose across the world; the First World War was described as "the war to end all wars", and its possible causes were vigorously investigated. The causes identified included arms races, alliances, militaristic nationalism, secret diplomacy, and the freedom of sovereign states to enter into war for their own benefit. One proposed remedy was the creation of an international organisation whose aim was to prevent future war through disarmament, open diplomacy, international co-operation, restrictions on the right to wage war, and penalties that made war unattractive. In London, Balfour commissioned the first official report into the matter in early 1918, under the initiative of Lord Robert Cecil. The British committee was finally appointed in February 1918. It was led by Walter Phillimore (and became known as the Phillimore Committee), but also included Eyre Crowe, William Tyrrell, and Cecil Hurst. The recommendations of the so-called Phillimore Commission included the establishment of a "Conference of Allied States" that would arbitrate disputes and impose sanctions on offending states. The proposals were approved by the British government, and much of the commission's work was later incorporated into the Covenant of the League of Nations. The French also drafted a much more far-reaching proposal in June 1918; they advocated annual meetings of a council to settle all disputes, as well as an "international army" to enforce its decisions. The American President Woodrow Wilson instructed Edward M. House to draft a US plan which reflected Wilson's own idealistic views (first articulated in the Fourteen Points of January 1918), as well as the work of the Phillimore Commission. 
The outcome of House's work, and Wilson's own first draft, proposed the termination of "unethical" state behaviour, including forms of espionage and dishonesty. Methods of compulsion against recalcitrant states would include severe measures, such as "blockading and closing the frontiers of that power to commerce or intercourse with any part of the world and to use any force that may be necessary..." The two principal drafters and architects of the covenant of the League of Nations were Lord Robert Cecil (a British diplomat), and Jan Smuts (a South African statesman). Smuts' proposals included the creation of a Council of the great powers as permanent members and a non-permanent selection of the minor states. He also proposed the creation of a Mandate system for colonies captured from the Central Powers during the war. Cecil focused on the administrative side, and proposed annual Council meetings and quadrennial meetings for the Assembly of all members. He also argued for a large and permanent secretariat to carry out the League's administrative duties. At the Paris Peace Conference in 1919, Wilson, Cecil and Smuts all put forward their draft proposals. After lengthy negotiations between the delegates, the Hurst-Miller draft was finally produced as a basis for the Covenant. After more negotiation and compromise, the delegates finally approved the proposal to create the League of Nations on 25 January 1919. The final Covenant of the League of Nations was drafted by a special commission, and the League was established by Part I of the Treaty of Versailles. On 28 June 1919, 44 states signed the Covenant, including 31 states which had taken part in the war on the side of the Triple Entente or joined it during the conflict. French women's rights advocates invited international feminists to participate in a parallel conference to the Paris Conference in hopes that they could gain permission to participate in the official conference. The Inter-Allied Women's Conference asked to be allowed to submit suggestions to the peace negotiations and commissions and was granted the right to sit on commissions dealing specifically with women and children. Though they asked for enfranchisement and full legal protection under the law equal with men, those rights were ignored. Women won the right to serve in all capacities, including as staff or delegates in the League of Nations organization. They also won a declaration that member nations should prevent trafficking of women and children and should equally support humane conditions for children, women and men labourers. At the Zürich Peace Conference, held between 17 and 19 May 1919, the women of the WILPF condemned the terms of the Treaty of Versailles for its punitive measures, its failure to provide for a condemnation of violence, and its exclusion of women from civil and political participation. Upon reading the Rules of Procedure for the League of Nations, Catherine Marshall, a British suffragist, discovered that the guidelines were completely undemocratic, and they were modified based on her suggestion. The League would be made up of a General Assembly (representing all member states), an Executive Council (with membership limited to major powers), and a permanent secretariat. Member states were expected to "respect and preserve as against external aggression" the territorial integrity of other members and to disarm "to the lowest point consistent with domestic safety." 
All states were required to submit complaints for arbitration or judicial inquiry before going to war. The Executive Council would create a Permanent Court of International Justice to make judgements on the disputes. Despite Wilson's efforts to establish and promote the League, for which he was awarded the Nobel Peace Prize in October 1919, the United States never joined. Senate Republicans led by Henry Cabot Lodge wanted a League with the reservation that only Congress could take the U.S. into war. Lodge gained a majority of Senators, and Wilson refused to allow a compromise, so the needed two-thirds majority was lacking. The League held its first council meeting in Paris on 16 January 1920, six days after the Versailles Treaty and the Covenant of the League of Nations came into force. On 1 November 1920, the headquarters of the League was moved from London to Geneva, where the first General Assembly was held on 15 November 1920. The Palais Wilson on Geneva's western lakeshore, named after US President Woodrow Wilson in recognition of his efforts towards the establishment of the League, was the League's first permanent home. The official languages of the League of Nations were French and English. The League declined to adopt Esperanto as its working language; China and Japan favoured Esperanto, but France was strongly opposed. In 1939, a semi-official emblem for the League of Nations emerged: two five-pointed stars within a blue pentagon. They symbolised the Earth's five continents and five "races". A bow at the top displayed the English name ("League of Nations"), while another at the bottom showed the French ("Société des Nations"). The main constitutional organs of the League were the Assembly, the Council, and the Permanent Secretariat. It also had two essential wings: the Permanent Court of International Justice and the International Labour Organisation. In addition, there were several auxiliary agencies and commissions. Each organ's budget was allocated by the Assembly (the League was supported financially by its member states). The relations between the Assembly and the Council and the competencies of each were for the most part not explicitly defined. Each body could deal with any matter within the sphere of competence of the League or affecting peace in the world. Particular questions or tasks might be referred to either. Unanimity was required for the decisions of both the Assembly and the Council, except in matters of procedure and some other specific cases such as the admission of new members. This requirement was a reflection of the League's belief in the sovereignty of its component nations; the League sought solution by consent, not by dictation. In case of a dispute, however, the consent of the parties to the dispute was not required for unanimity. The Permanent Secretariat, established at the seat of the League at Geneva, comprised a body of experts in various spheres under the direction of the general secretary. Its principal sections were Political, Financial and Economics, Transit, Minorities and Administration (administering the Saar and Danzig), Mandates, Disarmament, Health, Social (Opium and Traffic in Women and Children), Intellectual Cooperation and International Bureaux, Legal, and Information. The staff of the Secretariat was responsible for preparing the agenda for the Council and the Assembly and publishing reports of the meetings and other routine matters, effectively acting as the League's civil service. In 1931, the staff numbered 707. 
The Assembly consisted of representatives of all members of the League, with each state allowed up to three representatives and one vote. It met in Geneva and, after its initial sessions in 1920, it convened once a year in September. The special functions of the Assembly included the admission of new members, the periodical election of non-permanent members to the Council, the election with the Council of the judges of the Permanent Court, and control of the budget. In practice, the Assembly was the general directing force of League activities. The League Council acted as a type of executive body directing the Assembly's business. It began with four permanent members (Great Britain, France, Italy, and Japan) and four non-permanent members that were elected by the Assembly for a three-year term. The first non-permanent members were Belgium, Brazil, Greece, and Spain. The composition of the Council was changed several times. The number of non-permanent members was first increased to six on 22 September 1922 and to nine on 8 September 1926. Werner Dankwort of Germany pushed for his country to join the League; joining in 1926, Germany became the fifth permanent member of the Council. Later, after Germany and Japan both left the League, the number of non-permanent seats was increased from nine to eleven, and the Soviet Union was made a permanent member giving the Council a total of fifteen members. The Council met, on average, five times a year and in extraordinary sessions when required. In total, 107 sessions were held between 1920 and 1939. The League oversaw the Permanent Court of International Justice and several other agencies and commissions created to deal with pressing international problems. These included the Disarmament Commission, the International Labour Organisation (ILO), the Mandates Commission, the International Commission on Intellectual Cooperation (precursor to UNESCO), the Permanent Central Opium Board, the Commission for Refugees, and the Slavery Commission. Three of these institutions were transferred to the United Nations after the Second World War: the International Labour Organisation, the Permanent Court of International Justice (as the International Court of Justice), and the Health Organisation (restructured as the World Health Organisation). The Permanent Court of International Justice was provided for by the Covenant, but not established by it. The Council and the Assembly established its constitution. Its judges were elected by the Council and the Assembly, and its budget was provided by the latter. The Court was to hear and decide any international dispute which the parties concerned submitted to it. It might also give an advisory opinion on any dispute or question referred to it by the Council or the Assembly. The Court was open to all the nations of the world under certain broad conditions. The International Labour Organisation was created in 1919 on the basis of Part XIII of the Treaty of Versailles. The ILO, although having the same members as the League and being subject to the budget control of the Assembly, was an autonomous organisation with its own Governing Body, its own General Conference and its own Secretariat. Its constitution differed from that of the League: representation had been accorded not only to governments but also to representatives of employers' and workers' organisations. Albert Thomas was its first director. 
The ILO successfully restricted the addition of lead to paint, and convinced several countries to adopt an eight-hour work day and forty-eight-hour working week. It also campaigned to end child labour, increase the rights of women in the workplace, and make shipowners liable for accidents involving seamen. After the demise of the League, the ILO became an agency of the United Nations in 1946. The League's health organisation had three bodies: the Health Bureau, containing permanent officials of the League; the General Advisory Council or Conference, an executive section consisting of medical experts; and the Health Committee. The Committee's purpose was to conduct inquiries, oversee the operation of the League's health work, and prepare work to be presented to the Council. This body focused on ending leprosy, malaria, and yellow fever, the latter two by starting an international campaign to exterminate mosquitoes. The Health Organisation also worked successfully with the government of the Soviet Union to prevent typhus epidemics, including organising a large education campaign. The League of Nations had devoted serious attention to the question of international intellectual co-operation since its creation. The First Assembly in December 1920 recommended that the Council take action aiming at the international organisation of intellectual work, which it did by adopting a report presented by the Fifth Committee of the Second Assembly and inviting a Committee on Intellectual Cooperation to meet in Geneva in August 1922. The French philosopher Henri Bergson became the first chairman of the committee. The work of the committee included: inquiry into the conditions of intellectual life, assistance to countries where intellectual life was endangered, creation of national committees for intellectual co-operation, co-operation with international intellectual organisations, protection of intellectual property, inter-university co-operation, co-ordination of bibliographical work and international interchange of publications, and international co-operation in archaeological research. Introduced by the second International Opium Convention, the Permanent Central Opium Board had to supervise the statistical reports on trade in opium, morphine, cocaine and heroin. The board also established a system of import certificates and export authorisations for the legal international trade in narcotics. The Slavery Commission sought to eradicate slavery and slave trading across the world, and fought forced prostitution. Its main success was through pressing the governments who administered mandated countries to end slavery in those countries. The League secured a commitment from Ethiopia to end slavery as a condition of membership in 1923, and worked with Liberia to abolish forced labour and intertribal slavery. The United Kingdom had not supported Ethiopian membership of the League on the grounds that "Ethiopia had not reached a state of civilisation and internal security sufficient to warrant her admission." The League also succeeded in reducing the death rate of workers constructing the Tanganyika railway from 55 to 4 percent. Records were kept to control slavery, prostitution, and the trafficking of women and children. Partly as a result of pressure brought by the League of Nations, Afghanistan abolished slavery in 1923, Iraq in 1924, Nepal in 1926, Transjordan and Persia in 1929, Bahrain in 1937, and Ethiopia in 1942. 
Led by Fridtjof Nansen, the Commission for Refugees was established on 27 June 1921 to look after the interests of refugees, including overseeing their repatriation and, when necessary, resettlement. At the end of the First World War, there were two to three million ex-prisoners of war from various nations dispersed throughout Russia; within two years of the commission's foundation, it had helped 425,000 of them return home. It established camps in Turkey in 1922 to aid the country with an ongoing refugee crisis, helping to prevent disease and hunger. It also established the Nansen passport as a means of identification for stateless people. The Committee for the Study of the Legal Status of Women sought to inquire into the status of women all over the world. It was formed in 1937, and later became part of the United Nations as the Commission on the Status of Women. Of the League's 42 founding members, 23 (24 counting Free France) remained members until it was dissolved in 1946. In the founding year, six other states joined, only two of which remained members throughout the League's existence. An additional 15 countries joined later. The largest number of member states was 58, between 28 September 1934 (when Ecuador joined) and 23 February 1935 (when Paraguay withdrew). On 26 May 1937, Egypt became the last state to join the League. The first member to withdraw permanently from the League was Costa Rica on 22 January 1925; having joined on 16 December 1920, this also makes it the member to have most quickly withdrawn. Brazil was the first founding member to withdraw (14 June 1926), and Haiti the last (April 1942). Iraq, which joined in 1932, was the first member that had previously been a League of Nations mandate. The Soviet Union became a member on 18 September 1934, and was expelled on 14 December 1939 for invading Finland. In expelling the Soviet Union, the League broke its own rule: only 7 of 15 members of the Council voted for expulsion (United Kingdom, France, Belgium, Bolivia, Egypt, South Africa, and the Dominican Republic), short of the majority required by the Covenant. Three of these members had been made Council members the day before the vote (South Africa, Bolivia, and Egypt). This was one of the League's final acts before it practically ceased functioning due to the Second World War. At the end of the First World War, the Allied powers were confronted with the question of the disposal of the former German colonies in Africa and the Pacific, and the several Arabic-speaking provinces of the Ottoman Empire. The Peace Conference adopted the principle that these territories should be administered by different governments on behalf of the League – a system of national responsibility subject to international supervision. This plan, defined as the mandate system, was adopted by the "Council of Ten" (the heads of government and foreign ministers of the main Allied powers: Britain, France, the United States, Italy, and Japan) on 30 January 1919 and transmitted to the League of Nations. League of Nations mandates were established under Article 22 of the Covenant of the League of Nations. The Permanent Mandates Commission supervised League of Nations mandates, and also organised plebiscites in disputed territories so that residents could decide which country they would join. There were three mandate classifications: A, B and C. 
The A mandates (applied to parts of the old Ottoman Empire) were "certain communities" that were considered to have reached a stage of development at which their existence as independent nations could be provisionally recognised, subject to advice and assistance from a mandatory power. The B mandates were applied to the former German colonies that the League took responsibility for after the First World War. These were described as "peoples" for whose administration the League said the mandatory power must be responsible. South West Africa and certain South Pacific Islands were administered by League members under C mandates. These were classified as "territories" that, owing to their small populations, size or remoteness, were best administered under the laws of the mandatory power as integral portions of its territory. The territories were governed by mandatory powers, such as the United Kingdom in the case of the Mandate of Palestine, and the Union of South Africa in the case of South West Africa, until the territories were deemed capable of self-government. Fourteen mandate territories were divided up among seven mandatory powers: the United Kingdom, the Union of South Africa, France, Belgium, New Zealand, Australia and Japan. With the exception of the Kingdom of Iraq, which joined the League on 3 October 1932, these territories did not begin to gain their independence until after the Second World War, in a process that did not end until 1990. Following the demise of the League, most of the remaining mandates became United Nations Trust Territories. In addition to the mandates, the League itself governed the Territory of the Saar Basin for 15 years, before it was returned to Germany following a plebiscite, and the Free City of Danzig (now Gdańsk, Poland) from 15 November 1920 to 1 September 1939. The aftermath of the First World War left many issues to be settled, including the exact position of national boundaries and which country particular regions would join. Most of these questions were handled by the victorious Allied powers in bodies such as the Allied Supreme Council. The Allies tended to refer only particularly difficult matters to the League. This meant that, during the early interwar period, the League played little part in resolving the turmoil resulting from the war. The questions the League considered in its early years included those designated by the Paris Peace treaties. As the League developed, its role expanded, and by the middle of the 1920s it had become the centre of international activity. This change can be seen in the relationship between the League and non-members. The United States and Russia, for example, increasingly worked with the League. During the second half of the 1920s, France, Britain and Germany were all using the League of Nations as the focus of their diplomatic activity, and each of their foreign secretaries attended League meetings at Geneva during this period. They also used the League's machinery to try to improve relations and settle their differences. Åland is a collection of around 6,500 islands in the Baltic Sea, midway between Sweden and Finland. The islands are almost exclusively Swedish-speaking, but in 1809, the Åland Islands, along with Finland, were taken by Imperial Russia. In December 1917, during the turmoil of the Russian October Revolution, Finland declared its independence, but most of the Ålanders wished to rejoin Sweden. The Finnish government considered the islands to be a part of its new nation, as the Russians had included Åland in the Grand Duchy of Finland, formed in 1809. By 1920, the dispute had escalated to the point that there was danger of war. The British government referred the problem to the League's Council, but Finland would not let the League intervene, as it considered the dispute an internal matter. 
The League created a small panel to decide if it should investigate the matter and, with an affirmative response, a neutral commission was created. In June 1921, the League announced its decision: the islands were to remain a part of Finland, but with guaranteed protection of the islanders, including demilitarisation. With Sweden's reluctant agreement, this became the first European international agreement concluded directly through the League. The Allied powers referred the problem of Upper Silesia to the League after they had been unable to resolve the territorial dispute. After the First World War, Poland laid claim to Upper Silesia, which had been part of Prussia. The Treaty of Versailles had recommended a plebiscite in Upper Silesia to determine whether the territory should become part of Germany or Poland. Complaints about the attitude of the German authorities led to rioting and eventually to the first two Silesian Uprisings (1919 and 1920). A plebiscite took place on 20 March 1921, with 59.6 percent (around 500,000) of the votes cast in favour of joining Germany, but Poland claimed the conditions surrounding it had been unfair. This result led to the Third Silesian Uprising in 1921. On 12 August 1921, the League was asked to settle the matter; the Council created a commission with representatives from Belgium, Brazil, China and Spain to study the situation. The committee recommended that Upper Silesia be divided between Poland and Germany according to the preferences shown in the plebiscite and that the two sides should decide the details of the interaction between the two areas – for example, whether goods should pass freely over the border due to the economic and industrial interdependence of the two areas. In November 1921, a conference was held in Geneva to negotiate a convention between Germany and Poland. A final settlement was reached, after five meetings, in which most of the area was given to Germany, but with the Polish section containing the majority of the region's mineral resources and much of its industry. When this agreement became public in May 1922, bitter resentment was expressed in Germany, but the treaty was still ratified by both countries. The settlement produced peace in the area until the beginning of the Second World War. The frontiers of the Principality of Albania had not been set during the Paris Peace Conference in 1919, as they were left for the League to decide; they had not yet been determined by September 1921, creating an unstable situation. Greek troops conducted military operations in the south of Albania. Kingdom of Serbs, Croats and Slovenes (Yugoslav) forces became engaged, after clashes with Albanian tribesmen, in the northern part of the country. The League sent a commission of representatives from various powers to the region. In November 1921, the League decided that the frontiers of Albania should be the same as they had been in 1913, with three minor changes that favoured Yugoslavia. Yugoslav forces withdrew a few weeks later, albeit under protest. The borders of Albania again became the cause of international conflict when Italian General Enrico Tellini and four of his assistants were ambushed and killed on 24 August 1923 while marking out the newly decided border between Greece and Albania. Italian leader Benito Mussolini was incensed, and demanded that a commission investigate the incident within five days. 
Whatever the results of the investigation, Mussolini insisted that the Greek government pay Italy fifty million lire in reparations. The Greeks said they would not pay unless it was proved that the crime was committed by Greeks. Mussolini sent a warship to shell the Greek island of Corfu, and Italian forces occupied the island on 31 August 1923. This contravened the League's covenant, so Greece appealed to the League to deal with the situation. The Allies agreed (at Mussolini's insistence) that the Conference of Ambassadors should be responsible for resolving the dispute because it was the conference that had appointed General Tellini. The League Council examined the dispute, but then passed on their findings to the Conference of Ambassadors to make the final decision. The conference accepted most of the League's recommendations, forcing Greece to pay fifty million lire to Italy, even though those who committed the crime were never discovered. Italian forces then withdrew from Corfu. The port city of Memel (now Klaipėda) and the surrounding area, with a predominantly German population, was under provisional Allied control according to Article 99 of the Treaty of Versailles. The French and Polish governments favoured turning Memel into an international city, while Lithuania wanted to annex the area. By 1923, the fate of the area had still not been decided, prompting Lithuanian forces to invade in January 1923 and seize the port. After the Allies failed to reach an agreement with Lithuania, they referred the matter to the League of Nations. In December 1923, the League Council appointed a Commission of Inquiry. The commission chose to cede Memel to Lithuania and give the area autonomous rights. The Klaipėda Convention was approved by the League Council on 14 March 1924, and then by the Allied powers and Lithuania. In 1939 Germany retook the region following the rise of the Nazis and an ultimatum to Lithuania, demanding the return of the region under threat of war. The League of Nations failed to prevent the secession of the Memel region to Germany. With League oversight, the Sanjak of Alexandretta in the French Mandate of Syria was given autonomy in 1937. Renamed Hatay, its parliament declared independence as the Republic of Hatay in September 1938, after elections the previous month. It was annexed by Turkey with French consent in mid-1939. The League resolved a dispute between the Kingdom of Iraq and the Republic of Turkey over control of the former Ottoman province of Mosul in 1926. According to the British, who had been awarded a League of Nations mandate over Iraq in 1920 and therefore represented Iraq in its foreign affairs, Mosul belonged to Iraq; on the other hand, the new Turkish republic claimed the province as part of its historic heartland. A League of Nations Commission of Inquiry, with Belgian, Hungarian and Swedish members, was sent to the region in 1924; it found that the people of Mosul did not want to be part of either Turkey or Iraq, but if they had to choose, they would pick Iraq. In 1925, the commission recommended that the region stay part of Iraq, under the condition that the British hold the mandate over Iraq for another 25 years, to ensure the autonomous rights of the Kurdish population. The League Council adopted the recommendation and decided on 16 December 1925 to award Mosul to Iraq. Although Turkey had accepted League of Nations' arbitration in the Treaty of Lausanne in 1923, it rejected the decision, questioning the Council's authority. 
The matter was referred to the Permanent Court of International Justice, which ruled that, when the Council made a unanimous decision, it must be accepted. Nonetheless, Britain, Iraq and Turkey ratified a separate treaty on 5 June 1926 that mostly followed the decision of the League Council and also assigned Mosul to Iraq. It was agreed that Iraq could still apply for League membership within 25 years and that the mandate would end upon its admission. After the First World War, Poland and Lithuania both regained their independence but soon became immersed in territorial disputes. During the Polish–Soviet War, Lithuania signed the Moscow Peace Treaty with the Soviet Union that laid out Lithuania's frontiers. This agreement gave Lithuania control of the city of Vilnius, the old Lithuanian capital, but a city with a majority Polish population. This heightened tension between Lithuania and Poland and led to fears that they would resume the Polish–Lithuanian War, and on 7 October 1920, the League negotiated the Suwałki Agreement establishing a cease-fire and a demarcation line between the two nations. On 9 October 1920, General Lucjan Żeligowski, commanding a Polish military force in contravention of the Suwałki Agreement, took the city and established the Republic of Central Lithuania. After a request for assistance from Lithuania, the League Council called for Poland's withdrawal from the area. The Polish government indicated it would comply, but instead reinforced the city with more Polish troops. This prompted the League to decide that the future of Vilnius should be determined by its residents in a plebiscite and that the Polish forces should withdraw and be replaced by an international force organised by the League. The plan was met with resistance in Poland, Lithuania, and the Soviet Union, which opposed any international force in Lithuania. In March 1921, the League abandoned plans for the plebiscite. After unsuccessful proposals by Paul Hymans to create a federation between Poland and Lithuania, Vilnius and the surrounding area were formally annexed by Poland in March 1922. After Lithuania took over the Klaipėda Region, the Allied Conference set the frontier between Lithuania and Poland on 14 March 1923, leaving Vilnius within Poland. Lithuanian authorities refused to accept the decision, and officially remained in a state of war with Poland until 1927. It was not until the 1938 Polish ultimatum that Lithuania restored diplomatic relations with Poland and thus "de facto" accepted the borders. There were several border conflicts between Colombia and Peru in the early part of the 20th century, and in 1922, their governments signed the Salomón-Lozano Treaty in an attempt to resolve them. As part of this treaty, the border town of Leticia and its surrounding area was ceded from Peru to Colombia, giving Colombia access to the Amazon River. On 1 September 1932, business leaders from the Peruvian rubber and sugar industries who had lost land as a result organised an armed takeover of Leticia. At first, the Peruvian government did not recognise the military takeover, but President of Peru Luis Sánchez Cerro decided to resist a Colombian re-occupation. The Peruvian Army occupied Leticia, leading to an armed conflict between the two nations. After months of diplomatic negotiations, the governments accepted mediation by the League of Nations, and their representatives presented their cases before the Council. 
A provisional peace agreement, signed by both parties in May 1933, provided for the League to assume control of the disputed territory while bilateral negotiations proceeded. In May 1934, a final peace agreement was signed, resulting in the return of Leticia to Colombia, a formal apology from Peru for the 1932 invasion, demilitarisation of the area around Leticia, free navigation on the Amazon and Putumayo Rivers, and a pledge of non-aggression. The Saar was a province formed from parts of Prussia and the Rhenish Palatinate and placed under League control by the Treaty of Versailles. A plebiscite was to be held after fifteen years of League rule to determine whether the province should belong to Germany or France. When the referendum was held in 1935, 90.3 percent of voters supported becoming part of Germany, which was quickly approved by the League Council. In addition to territorial disputes, the League also tried to intervene in other conflicts between and within nations. Among its successes were its fight against the international trade in opium and sexual slavery, and its work to alleviate the plight of refugees, particularly in Turkey in the period up to 1926. One of its innovations in this latter area was the 1922 introduction of the Nansen passport, which was the first internationally recognised identity card for stateless refugees. After an incident involving sentries on the Greek-Bulgarian border in October 1925, fighting began between the two countries. Three days after the initial incident, Greek troops invaded Bulgaria. The Bulgarian government ordered its troops to make only token resistance, and evacuated between ten thousand and fifteen thousand people from the border region, trusting the League to settle the dispute. The League condemned the Greek invasion, and called for both Greek withdrawal and compensation to Bulgaria. Following accusations of forced labour on the large American-owned Firestone rubber plantation and American accusations of slave trading, the Liberian government asked the League to launch an investigation. The resulting commission was jointly appointed by the League, the United States, and Liberia. In 1930, a League report confirmed the presence of slavery and forced labour. The report implicated many government officials in the selling of contract labour and recommended that they be replaced by Europeans or Americans, which generated anger within Liberia and led to the resignation of President Charles D. B. King and his vice-president. The Liberian government outlawed forced labour and slavery and asked for American help in social reforms. The Mukden Incident, also known as the "Manchurian Incident", was a decisive setback that weakened the League, because its major members refused to tackle Japanese aggression; Japan itself eventually withdrew from the organisation. Under the agreed terms of the Twenty-One Demands with China, the Japanese government had the right to station its troops in the area around the South Manchurian Railway, a major trade route between the two countries, in the Chinese region of Manchuria. In September 1931, a section of the railway was lightly damaged by the Japanese Kwantung Army as a pretext for an invasion of Manchuria. The Japanese army claimed that Chinese soldiers had sabotaged the railway, and in apparent retaliation (acting contrary to orders from Tokyo) occupied all of Manchuria. They renamed the area Manchukuo, and on 9 March 1932 set up a puppet government, with Pu Yi, the former emperor of China, as its executive head. 
This new entity was recognised only by the governments of Italy, Spain and Nazi Germany; the rest of the world still considered Manchuria legally part of China. The League of Nations sent observers. The Lytton Report appeared a year later (October 1932). It declared Japan to be the aggressor and demanded Manchuria be returned to China. The report passed 42–1 in the Assembly in 1933 (only Japan voting against), but instead of removing its troops from China, Japan withdrew from the League. In the end, as British historian Charles Mowat argued, collective security was dead. The League also failed to prevent the 1932 war between Bolivia and Paraguay over the arid Gran Chaco region. Although the region was sparsely populated, it contained the Paraguay River, which would have given either landlocked country access to the Atlantic Ocean, and there was also speculation, later proved incorrect, that the Chaco would be a rich source of petroleum. Border skirmishes throughout the late 1920s culminated in an all-out war in 1932 when the Bolivian army attacked the Paraguayans at Fort Carlos Antonio López at Lake Pitiantuta. Paraguay appealed to the League of Nations, but the League did not take action when the Pan-American Conference offered to mediate instead. The war was a disaster for both sides, causing 57,000 casualties for Bolivia, whose population was around three million, and 36,000 dead for Paraguay, whose population was approximately one million. It also brought both countries to the brink of economic disaster. By the time a ceasefire was negotiated on 12 June 1935, Paraguay had seized control of most of the region, as was later recognised by the 1938 truce. In October 1935, Italian dictator Benito Mussolini sent 400,000 troops to invade Abyssinia (Ethiopia). Marshal Pietro Badoglio led the campaign from November 1935, ordering bombing, the use of chemical weapons such as mustard gas, and the poisoning of water supplies, against targets which included undefended villages and medical facilities. The modern Italian Army defeated the poorly armed Abyssinians and captured Addis Ababa in May 1936, forcing Emperor of Ethiopia Haile Selassie to flee. The League of Nations condemned Italy's aggression and imposed economic sanctions in November 1935, but the sanctions were largely ineffective since they did not ban the sale of oil or close the Suez Canal (controlled by Britain). As Stanley Baldwin, the British Prime Minister, later observed, this was ultimately because no one had the military forces on hand to withstand an Italian attack. In October 1935, the US President, Franklin D. Roosevelt, invoked the recently passed Neutrality Acts and placed an embargo on arms and munitions to both sides, but extended a further "moral embargo" to the belligerent Italians, covering other trade items. On 5 October and later on 29 February 1936, the United States endeavoured, with limited success, to limit its exports of oil and other materials to normal peacetime levels. The League sanctions were lifted on 4 July 1936, but by that point Italy had already gained control of the urban areas of Abyssinia. The Hoare–Laval Pact of December 1935 was an attempt by the British Foreign Secretary Samuel Hoare and the French Prime Minister Pierre Laval to end the conflict in Abyssinia by proposing to partition the country into an Italian sector and an Abyssinian sector. Mussolini was prepared to agree to the pact, but news of the deal leaked out. 
Both the British and French public vehemently protested against it, describing it as a sell-out of Abyssinia. Hoare and Laval were forced to resign, and the British and French governments dissociated themselves from the two men. In June 1936, although there was no precedent for a head of state addressing the Assembly of the League of Nations in person, Haile Selassie spoke to the Assembly, appealing for its help in protecting his country. The Abyssinian crisis showed how the League could be influenced by the self-interest of its members; one of the reasons why the sanctions were not very harsh was that both Britain and France feared the prospect of driving Mussolini and Adolf Hitler into an alliance. On 17 July 1936, the Spanish Army launched a coup d'état, leading to a prolonged armed conflict between Spanish Republicans (the elected leftist national government) and the Nationalists (conservative, anti-communist rebels who included most officers of the Spanish Army). Julio Álvarez del Vayo, the Spanish Minister of Foreign Affairs, appealed to the League in September 1936 for arms to defend Spain's territorial integrity and political independence. The League members would not intervene in the Spanish Civil War nor prevent foreign intervention in the conflict. Adolf Hitler and Mussolini continued to aid General Francisco Franco's Nationalists, while the Soviet Union helped the Spanish Republic. In February 1937, the League did ban foreign volunteers, but this was in practice a symbolic move. Following a long record of instigating localised conflicts throughout the 1930s, Japan began a full-scale invasion of China on 7 July 1937. On 12 September, the Chinese representative, Wellington Koo, appealed to the League for international intervention. Western countries were sympathetic to the Chinese in their struggle, particularly in their stubborn defence of Shanghai, a city with a substantial number of foreigners. The League was unable to provide any practical measures; on 4 October, it turned the case over to the Nine Power Treaty Conference. Article 8 of the Covenant gave the League the task of reducing "armaments to the lowest point consistent with national safety and the enforcement by common action of international obligations". A significant amount of the League's time and energy was devoted to this goal, even though many member governments were uncertain that such extensive disarmament could be achieved or was even desirable. The Allied powers were also under obligation by the Treaty of Versailles to attempt to disarm, and the armament restrictions imposed on the defeated countries had been described as the first step toward worldwide disarmament. The League Covenant assigned the League the task of creating a disarmament plan for each state, but the Council devolved this responsibility to a special commission set up in 1926 to prepare for the 1932–1934 World Disarmament Conference. Members of the League held different views towards the issue. The French were reluctant to reduce their armaments without a guarantee of military help if they were attacked; Poland and Czechoslovakia felt vulnerable to attack from the west and wanted the League's response to aggression against its members to be strengthened before they disarmed. Without this guarantee, they would not reduce armaments because they felt the risk of attack from Germany was too great. 
Fear of attack increased as Germany regained its strength after the First World War, especially after Adolf Hitler gained power and became German Chancellor in 1933. In particular, Germany's attempts to overturn the Treaty of Versailles and the reconstruction of the German military made France increasingly unwilling to disarm. The World Disarmament Conference was convened by the League of Nations in Geneva in 1932, with representatives from 60 states. A one-year moratorium on the expansion of armaments, later extended by a few months, was proposed at the start of the conference. The Disarmament Commission obtained initial agreement from France, Italy, Spain, Japan, and Britain to limit the size of their navies. The Kellogg–Briand Pact, facilitated by the commission in 1928, failed in its objective of outlawing war. Ultimately, the Commission failed to halt the military build-up by Germany, Italy, Spain and Japan during the 1930s. The League was mostly silent in the face of major events leading to the Second World War, such as Hitler's remilitarisation of the Rhineland, occupation of the Sudetenland and "Anschluss" of Austria, which had been forbidden by the Treaty of Versailles. In fact, League members themselves re-armed. In 1933, Japan simply withdrew from the League rather than submit to its judgement, as did Germany the same year (using the failure of the World Disarmament Conference to agree to arms parity between France and Germany as a pretext), Italy and Spain in 1937. The final significant act of the League was to expel the Soviet Union in December 1939 after it invaded Finland. The onset of the Second World War demonstrated that the League had failed in its primary purpose, the prevention of another world war. There were a variety of reasons for this failure, many connected to general weaknesses within the organisation. Additionally, the power of the League was limited by the United States' refusal to join. The origins of the League as an organisation created by the Allied powers as part of the peace settlement to end the First World War led to it being viewed as a "League of Victors". The League's neutrality tended to manifest itself as indecision. It required a unanimous vote of nine, later fifteen, Council members to enact a resolution; hence, conclusive and effective action was difficult, if not impossible. It was also slow in coming to its decisions, as certain ones required the unanimous consent of the entire Assembly. This problem mainly stemmed from the fact that the primary members of the League of Nations were not willing to accept the possibility of their fate being decided by other countries, and by enforcing unanimous voting had effectively given themselves veto power. Representation at the League was often a problem. Though it was intended to encompass all nations, many never joined, or their period of membership was short. The most conspicuous absentee was the United States. President Woodrow Wilson had been a driving force behind the League's formation and strongly influenced the form it took, but the US Senate voted not to join on 19 November 1919. Ruth Henig has suggested that, had the United States become a member, it would have also provided support to France and Britain, possibly making France feel more secure, and so encouraging France and Britain to co-operate more fully regarding Germany, thus making the rise to power of the Nazi Party less likely. 
Conversely, Henig acknowledges that if the US had been a member, its reluctance to engage in war with European states or to enact economic sanctions might have hampered the ability of the League to deal with international incidents. The structure of the US federal government might also have made its membership problematic, as its representatives at the League could not have made decisions on behalf of the executive branch without having the prior approval of the legislative branch. In January 1920, when the League was born, Germany was not permitted to join because it was seen as having been the aggressor in the First World War. Soviet Russia was also initially excluded, as Communist regimes were not welcomed. The League was further weakened when major powers left in the 1930s. Japan began as a permanent member of the Council, but withdrew in 1933 after the League voiced opposition to its occupation of Manchuria. Italy began as a permanent member of the Council, but withdrew in 1937. Spain also began as a permanent member of the Council, but withdrew in 1939. The League had accepted Germany, also as a permanent member of the Council, in 1926, deeming it a "peace-loving country", but Adolf Hitler pulled Germany out when he came to power in 1933. Another important weakness grew from the contradiction between the idea of collective security that formed the basis of the League and international relations between individual states. The League's collective security system required nations to act, if necessary, against states they considered friendly, and in a way that might endanger their national interests, to support states for which they had no normal affinity. This weakness was exposed during the Abyssinia Crisis, when Britain and France had to balance maintaining the security they had attempted to create for themselves in Europe "to defend against the enemies of internal order", in which Italy's support played a pivotal role, with their obligations to Abyssinia as a member of the League. On 23 June 1936, in the wake of the collapse of League efforts to restrain Italy's war against Abyssinia, the British Prime Minister, Stanley Baldwin, told the House of Commons that collective security had failed. Ultimately, Britain and France both abandoned the concept of collective security in favour of appeasement in the face of growing German militarism under Hitler. In this context, the League of Nations was also the institution where the first international debate on terrorism took place, following the 1934 assassination of King Alexander I of Yugoslavia in Marseille; that debate showed conspiratorial features, many of which are detectable in the discourse of terrorism among states after 9/11. American diplomatic historian Samuel Flagg Bemis originally supported the League, but after two decades changed his mind. The League of Nations lacked an armed force of its own and depended on the Great Powers to enforce its resolutions, which they were very unwilling to do. Its two most important members, Britain and France, were reluctant to use sanctions and even more reluctant to resort to military action on behalf of the League. Immediately after the First World War, pacifism became a strong force among both the people and governments of the two countries. The British Conservatives were especially tepid towards the League and preferred, when in government, to negotiate treaties without the involvement of that organisation. 
Moreover, the League's advocacy of disarmament for Britain, France, and its other members, while at the same time advocating collective security, meant that the League was depriving itself of the only forceful means by which it could uphold its authority. When the British cabinet discussed the concept of the League during the First World War, Maurice Hankey, the Cabinet Secretary, circulated a memorandum on the subject. He started by saying, "Generally it appears to me that any such scheme is dangerous to us, because it will create a sense of security which is wholly fictitious". He attacked the British pre-war faith in the sanctity of treaties as delusional. The Foreign Office minister Sir Eyre Crowe also wrote a memorandum to the British cabinet claiming that "a solemn league and covenant" would just be "a treaty, like other treaties". "What is there to ensure that it will not, like other treaties, be broken?" Crowe went on to express scepticism of the planned "pledge of common action" against aggressors because he believed the actions of individual states would still be determined by national interests and the balance of power. He also criticised the proposal for League economic sanctions, arguing that they would be ineffectual and that "It is all a question of real military preponderance". Universal disarmament was a practical impossibility, Crowe warned. As the situation in Europe escalated into war, the Assembly transferred enough power to the Secretary General on 30 September 1938 and 14 December 1939 to allow the League to continue to exist legally and carry on reduced operations. The headquarters of the League, the Palace of Nations, remained unoccupied for nearly six years until the Second World War ended. At the 1943 Tehran Conference, the Allied powers agreed to create a new body to replace the League: the United Nations. Many League bodies, such as the International Labour Organisation, continued to function and eventually became affiliated with the UN. The designers of the structures of the United Nations intended to make it more effective than the League. The final meeting of the League of Nations took place on 18 April 1946 in Geneva. Delegates from 34 nations attended the assembly. This session concerned itself with liquidating the League: it transferred assets worth approximately $22,000,000 (U.S.) in 1946 (including the Palace of Nations and the League's archives) to the UN, returned reserve funds to the nations that had supplied them, and settled the debts of the League. Robert Cecil, addressing the final session, declared: "The League is dead. Long live the United Nations!" The Assembly passed a resolution that "With effect from the day following the close of the present session of the Assembly [i.e., April 19], the League of Nations shall cease to exist except for the sole purpose of the liquidation of its affairs as provided in the present resolution." A Board of Liquidation consisting of nine persons from different countries spent the next 15 months overseeing the transfer of the League's assets and functions to the United Nations or specialised bodies, finally dissolving itself on July 31, 1947. The archive of the League of Nations was transferred to the United Nations Office at Geneva and is now an entry in the UNESCO Memory of the World Register. In recent decades, historians working with the League archives at Geneva have reviewed the legacy of the League of Nations as the United Nations has faced troubles similar to those of the interwar period. 
The current consensus is that, even though the League failed to achieve its ultimate goal of world peace, it did manage to build new roads towards expanding the rule of law across the globe; strengthened the concept of collective security, giving a voice to smaller nations; helped to raise awareness of problems like epidemics, slavery, child labour, colonial tyranny, refugee crises and general working conditions through its numerous commissions and committees; and paved the way for new forms of statehood, as the mandate system put the colonial powers under international observation. Professor David Kennedy portrays the League as a unique moment when international affairs were "institutionalised", as opposed to the pre–First World War methods of law and politics. The principal Allies in the Second World War (the UK, the USSR, France, the U.S., and the Republic of China) became permanent members of the United Nations Security Council in 1946. (In 1971, the People's Republic of China replaced the Republic of China (then only in control of Taiwan) as a permanent member of the UN Security Council, and in 1991 the Russian Federation replaced the USSR.) Decisions of the Security Council are binding on all members of the UN; unanimous decisions are not required, unlike in the League Council. Permanent members of the Security Council can wield a veto to protect their vital interests. Bodyline Bodyline, also known as fast leg theory bowling, was a cricketing tactic devised by the English cricket team for their 1932–33 Ashes tour of Australia, specifically to combat the extraordinary batting skill of Australia's Don Bradman. England's use of a tactic perceived by some as overly aggressive or even unfair ultimately threatened diplomatic relations between the two countries before the situation was calmed. A bodyline delivery was one where the cricket ball was bowled at the body of the batsman, in the hope that when he defended himself with his bat, a resulting deflection could be caught by one of several fielders standing close by. This was considered by critics to be intimidatory and physically threatening, to the point of being unfair in a game that was supposed to uphold gentlemanly traditions. Although no serious injuries arose from any short-pitched deliveries while a leg theory field was actually set, the tactic still led to considerable ill feeling between the two teams, particularly when Australian batsmen suffered actual injuries in separate incidents, which inflamed the watching crowds. The controversy eventually spilled into the diplomatic arena. Short-pitched bowling continues to be permitted in cricket, even when aimed at the batsman. However, over time, several of the Laws of Cricket were changed to render the bodyline tactic less effective. Bodyline was a tactic devised for and primarily used in the Ashes series between England and Australia in 1932–33. The tactic involved bowling at leg stump or just outside it, pitching the ball short so that it reared at the body of a batsman standing in an orthodox batting position. A ring of fielders ranged on the leg side would catch any defensive deflections from the bat. The batsman's options were to evade the ball by ducking or moving aside, to allow the ball to strike his body, or to play the ball with his bat. The latter course carried additional risks. 
Defensive shots brought few runs and could carry far enough to be caught by the fielders on the leg side; pull and hook shots could be caught on the edge of the field, where two men were usually placed for such a shot. Bodyline bowling was intimidatory, and was largely designed as an attempt to curb the prolific scoring of Donald Bradman, although other prolific Australian batsmen such as Bill Woodfull, Bill Ponsford and Alan Kippax were also targeted. Several different terms were used to describe this style of bowling before the name "bodyline" was used. Among the first to use it was the writer and former Australian Test cricketer Jack Worrall; in the match between the English team and an Australian XI, when bodyline was first used in full, he referred to "half-pitched slingers on the body line", and he first used the term in print after the first Test. Other writers used a similar phrase around this time, but the first use of "bodyline" in print seems to have been by the journalist Hugh Buggy in the Melbourne "Herald", in his report on the first day's play of the first Test. In the 19th century, most cricketers considered it unsportsmanlike to bowl the ball at the leg stump or for batsmen to hit on the leg side. But by the early years of the 20th century, some bowlers—usually slow or medium-paced—used leg theory as a tactic; the ball was aimed outside the line of leg stump and the fielders placed on that side of the field, the object being to test the batsman's patience and force a rash stroke. Two English left-arm bowlers, George Hirst in 1903–04 and Frank Foster in 1911–12, bowled leg theory to packed leg side fields in Test matches in Australia; Warwick Armstrong used it regularly for Australia. In the years immediately before the First World War, several bowlers used leg theory in county cricket. When cricket resumed after the war, few bowlers maintained the tactic, which was unpopular with spectators owing to its negativity. Fred Root, the Worcestershire bowler, used it regularly and with considerable success in county cricket. Root later defended the use of leg theory—and bodyline—observing that when bowlers bowled outside off stump, the batsmen were able to let the ball pass them without playing a shot. Some fast bowlers experimented with leg theory prior to 1932, sometimes accompanying the tactic with short-pitched bowling. In 1925, Australian Jack Scott first bowled a form of what would later be called bodyline in a state match for New South Wales; his captain Herbie Collins disliked it and would not let him use it again. Other Australian captains were less particular, including Vic Richardson, who let Scott use those tactics when he moved to South Australia. He repeated them against the MCC in 1928–29. In 1927, in a Test trial match, "Nobby" Clark bowled short to a leg-trap (a cluster of fielders placed close on the leg side). He was representing England in a side captained by Douglas Jardine. In 1928–29, Harry Alexander bowled fast leg theory at an England team, and Harold Larwood briefly used a similar tactic on that same tour in two Test matches. Freddie Calthorpe, the England captain, criticised Learie Constantine's use of short-pitched bowling to a leg side field in a Test match in 1930; one such ball struck Andy Sandham, but Constantine reverted to more conventional tactics only after a complaint from the England team. The Australian cricket team toured England in 1930. 
Australia won the five-Test series 2–1, and Donald Bradman scored 974 runs at a batting average of 139.14, an aggregate record that still stands in 2018. By the time of the next Ashes series of 1932–33, Bradman's average hovered around 100, approximately twice that of all other world-class batsmen. The English cricket authorities felt that new tactics would be required to prevent Bradman being even more successful on Australian pitches; some critics believed that Bradman could be dismissed by leg-spin as Walter Robins and Ian Peebles had supposedly caused him problems; two leg-spinners were included in the English touring party of 1932–33. Gradually, the idea developed that Bradman was vulnerable to pace bowling. In the final Test of the 1930 Ashes series, while he was batting, the pitch became briefly difficult following rain. Bradman was seen to be uncomfortable facing deliveries which bounced higher than usual at a faster pace, being seen to step back out of the line of the ball. Former England player and Surrey captain Percy Fender was one who noticed, and the incident was much discussed by cricketers. Given that Bradman scored 232, it was not initially thought that a way to curb his prodigious scoring had been found. When Douglas Jardine later saw film footage of the Oval incident and noticed Bradman's discomfort, according to his daughter he shouted, "I've got it! He's yellow!" The theory of Bradman's vulnerability developed when Fender received correspondence from Australia in 1932, describing how Australian batsmen were increasingly moving across the stumps towards the off side to play the ball on the on side. Fender showed these letters to Jardine when it became clear that he was to captain the English team in Australia during the 1932–33 tour, and he also discussed Bradman's discomfort at the Oval. It was also known in England that Bradman was dismissed for a four-ball duck by fast bowler Eddie Gilbert, and looked very uncomfortable. Bradman had also appeared uncomfortable against the pace of Sandy Bell in his innings of 299 not out at the Adelaide Oval in South Africa's tour of Australia earlier in 1932, when the desperate bowler decided to bowl short to him, and fellow South African Herbie Taylor, according to Jack Fingleton, may have mentioned this to English cricketers in 1932. Fender felt Bradman might be vulnerable to fast, short-pitched deliveries on the line of leg stump. Jardine felt that Bradman was afraid to stand his ground against intimidatory bowling, citing instances in 1930 when he shuffled about, contrary to orthodox batting technique. Jardine's first experience against Australia came when he scored an unbeaten 96 to secure a draw against the 1921 Australian touring side for Oxford University. The tourists were criticised in the press for not allowing Jardine to reach his hundred, but had tried to help him with some easy bowling. There has been speculation that this incident helped develop Jardine's antipathy towards Australians, although Jardine's biographer Christopher Douglas denies this. Jardine's attitude towards Australia hardened after he toured the country in 1928–29. When he scored three consecutive hundreds in the early games, he was frequently jeered by the crowd for slow play; the Australian spectators took an increasing dislike to him, mainly for his superior attitude and bearing, his awkward fielding, and particularly his choice of headwear—a Harlequin cap that was given to successful Oxford cricketers. 
Although Jardine may simply have worn the cap out of superstition, it conveyed a negative impression to the spectators; his general demeanour drew one comment of "Where's the butler to carry the bat for you?" By this stage Jardine had developed an intense dislike for Australian crowds. During his third century at the start of the tour, during a period of abuse from the spectators, he observed to Hunter Hendry that "All Australians are uneducated, and an unruly mob". After the innings, when teammate Patsy Hendren remarked that the Australian crowds did not like Jardine, he replied "It's fucking mutual". During the tour, Jardine fielded next to the crowd on the boundary. There, he was roundly abused and mocked for his awkward fielding, particularly when chasing the ball. On one occasion, he spat towards the crowd while fielding on the boundary as he changed position for the final time. Jardine was appointed captain of England for the 1931 season, replacing Percy Chapman who had led the team in 1930. He defeated New Zealand in his first series, but opinion was divided as to how effective he had been. The following season, he led England again and was appointed to lead the team to tour Australia for the 1932–33 Ashes series. A meeting was arranged between Jardine, Nottinghamshire captain Arthur Carr and his two fast bowlers Harold Larwood and Bill Voce at London's Piccadilly Hotel to discuss a plan to combat Bradman. Jardine asked Larwood and Voce if they could bowl on leg stump and make the ball rise into the body of the batsman. The bowlers agreed they could, and that it might prove effective. Jardine also visited Frank Foster to discuss his field-placing in Australia in 1911–12. Larwood and Voce practised the plan over the remainder of the 1932 season with varying but increasing success and several injuries to batsmen. Ken Farnes experimented with short-pitched, leg-theory bowling but was not selected for the tour. Bill Bowes also used short-pitched bowling, notably against Jack Hobbs. The England team which toured Australia in 1932–33 contained four fast bowlers and a few medium pacers; such a heavy concentration on pace was unusual at the time, and drew comment from the Australian press and players, including Bradman. On the journey, Jardine instructed his team on how to approach the tour and discussed tactics with several players, including Larwood; at this stage, he seems to have settled on leg theory, if not full bodyline, as his main tactic. Some players later reported that he told them to hate the Australians in order to defeat them, while instructing them to refer to Bradman as "the little bastard." Upon arrival, Jardine quickly alienated the press and crowds through his manner and approach. In the early matches, although there were instances of the English bowlers pitching the ball short and causing problems with their pace, full bodyline tactics were not used. There had been little unusual about the English bowling except the number of fast bowlers. Larwood and Voce were given a light workload in the early matches by Jardine. The English tactics changed in a game against an Australian XI team at Melbourne in mid-November, when full bodyline tactics were deployed for the first time. Jardine had left himself out of the English side, which was led instead by Bob Wyatt who later wrote that the team experimented with a diluted form of bodyline bowling. 
He reported to Jardine that Bradman, who was playing for the opposition, seemed uncomfortable against the bowling tactics of Larwood, Voce and Bowes. The crowd, press and Australian players were shocked by what they experienced and believed that the bowlers were targeting the batsmen's heads. Bradman adopted unorthodox tactics—ducking, weaving and moving around the crease—which did not meet with universal approval from Australians and he scored just 36 and 13 in the match. The tactic continued to be used in the next game by Voce (Larwood and Bowes did not play in this game), against New South Wales, for whom Jack Fingleton made a century and received several blows in the process. Bradman again failed twice, and had scored just 103 runs in six innings against the touring team; many Australian fans were now worried by Bradman's form. Meanwhile, Jardine wrote to tell Fender that his information about the Australian batting technique was correct and that it meant he was having to move more and more fielders onto the leg side: "if this goes on I shall have to move the whole bloody lot to the leg side." The Australian press were shocked and criticised the hostility of Larwood in particular. Some former Australian players joined the criticism, saying the tactics were ethically wrong. But at this stage, not everyone was opposed, and the Australian Board of Control believed the English team had bowled fairly. On the other hand, Jardine increasingly came into disagreement with tour manager Warner over bodyline as the tour progressed. Warner hated bodyline but would not speak out against it. He was accused of hypocrisy for not taking a stand on either side, particularly after expressing sentiments at the start of the tour that cricket "has become a synonym for all that is true and honest. To say 'that is not cricket' implies something underhand, something not in keeping with the best ideals ... all who love it as players, as officials or spectators must be careful lest anything they do should do it harm." Bradman missed the first Test at Sydney, worn out by constant cricket and the ongoing argument with the Board of Control. Jardine later wrote that the real reason was that the batsman had suffered a nervous breakdown. The English bowlers used Bodyline intermittently in the first match, to the crowd's vocal displeasure, and the Australians lost the game by ten wickets. Larwood was particularly successful, returning match figures of ten wickets for 124 runs. One of the English bowlers, Gubby Allen, refused to bowl with fielders on the leg side, clashing with Jardine over these tactics. The only Australian batsman to make an impact was Stan McCabe, who hooked and pulled everything aimed at his upper body, to score 187 not out in four hours from 233 deliveries. Behind the scenes, administrators began to express concerns to each other. Yet the English tactics still did not earn universal disapproval; former Australian captain Monty Noble praised the English bowling. Meanwhile, Woodfull was being encouraged to retaliate to the short-pitched English attack, not least by members of his own side such as Vic Richardson, or to include pace bowlers such as Eddie Gilbert or Laurie Nash to match the aggression of the opposition. But Woodfull refused to consider doing so. He had to wait until minutes before the game before he was confirmed as captain by the selectors. For the second Test, Bradman returned to the team after his newspaper employers released him from his contract. 
England continued to use bodyline and Bradman was dismissed by his first ball in the first innings. In the second innings, against the full bodyline attack, he scored an unbeaten century which helped Australia to win the match and level the series at one match each. Critics began to believe bodyline was not quite the threat that had been perceived, and Bradman's reputation, which had suffered slightly with his earlier failures, was restored. However, the pitch was slightly slower than others in the series, and Larwood was suffering from problems with his boots which reduced his effectiveness. The controversy reached its peak during the third Test at Adelaide. On the second day, a Saturday, before a crowd of 50,962 spectators, Australia bowled out England, who had batted through the first day. In the third over of the Australian innings, Larwood bowled to Woodfull. The fifth ball narrowly missed Woodfull's head and the final ball, delivered short on the line of middle stump, struck Woodfull over the heart. The batsman dropped his bat and staggered away holding his chest, bent over in pain. The England players surrounded Woodfull to offer sympathy but the crowd began to protest noisily. Jardine called to Larwood: "Well bowled, Harold!" Although the comment was aimed at unnerving Bradman, who was also batting at the time, Woodfull was appalled. Play resumed after a brief delay, once it was certain the Australian captain was fit to carry on, and since Larwood's over had ended, Woodfull did not have to face the bowling of Allen in the next over. However, when Larwood was ready to bowl at Woodfull again, play was halted once more when the fielders were moved into bodyline positions, causing the crowd to protest and call abuse at the England team. Subsequently, Jardine claimed that Larwood had requested the field change, while Larwood said that Jardine had done so. Many commentators condemned the alteration of the field as unsporting, and the angry spectators became extremely volatile. Jardine, although writing that Woodfull could have retired hurt if he was unfit, later expressed his regret at making the field change at that moment. The fury of the crowd was such that a riot might have occurred had another incident taken place, and several writers suggested that the anger of the spectators was the culmination of feelings built up over the two months in which bodyline had developed. During the over, another rising Larwood delivery knocked the bat out of Woodfull's hands. He batted for 89 minutes, being hit a few more times before Allen bowled him for 22. Later in the day, Pelham Warner, one of the England managers, visited the Australian dressing room. He expressed sympathy to Woodfull but was surprised by the Australian's response. According to Warner, Woodfull replied, "I don't want to see you, Mr Warner. There are two teams out there. One is trying to play cricket and the other is not." Fingleton wrote that Woodfull had added, "This game is too good to be spoilt. It is time some people got out of it." Woodfull was usually dignified and quietly spoken, making his reaction surprising to Warner and others present. Warner was so shaken that he was found in tears later that day in his hotel room. There was no play on the following day, Sunday being a rest day, but on Monday morning, the exchange between Warner and Woodfull was reported in several Australian newspapers. The players and officials were horrified that a sensitive private exchange had been reported to the press. 
Leaks to the press were practically unknown in 1933. David Frith notes that discretion and respect were highly prized and such a leak was "regarded as a moral offence of the first order." Woodfull made it clear that he severely disapproved of the leak, and later wrote that he "always expected cricketers to do the right thing by their team-mates." As the only full-time journalist in the Australian team, suspicion immediately fell on Fingleton, although as soon as the story was published, he told Woodfull he was not responsible. Warner offered Larwood a reward of one pound if he could dismiss Fingleton in the second innings; Larwood obliged by bowling him for a duck. Fingleton later claimed that Sydney Sun reporter Claude Corbett had received the information from Bradman; for the rest of their lives, Fingleton and Bradman made claim and counter-claim that the other man was responsible for the leak. The following day, as Australia faced a large deficit on the first innings, Bert Oldfield played a long innings in support of Bill Ponsford, who scored 85. In the course of the innings, the English bowlers used bodyline against him, and he faced several short-pitched deliveries but took several fours from Larwood to move to 41. Having just conceded a four, Larwood bowled fractionally shorter and slightly slower. Oldfield attempted to hook but lost sight of the ball and edged it onto his temple; the ball fractured his skull. Oldfield staggered away and fell to his knees and play stopped as Woodfull came onto the pitch and the angry crowd jeered and shouted, once more reaching the point where a riot seemed likely. Several English players thought about arming themselves with stumps should the crowd come onto the field. The ball which injured Oldfield was bowled to a conventional, non-bodyline field; Larwood immediately apologised but Oldfield said that it was his own fault before he was helped back to the dressing room and play continued. Jardine later secretly sent a telegram of sympathy to Oldfield's wife and arranged for presents to be given to his young daughters. At the end of the fourth day's play of the third Test match, the Australian Board of Control sent a cable to the Marylebone Cricket Club (MCC), cricket's ruling body and the club that selected the England team, in London: Not all Australians, including the press and players, believed that the cable should have been sent, particularly immediately following a heavy defeat. The suggestion of unsportsmanlike behaviour was deeply resented by the MCC, and was one of the worst accusations that could have been levelled at the team at the time. Additionally, members of the MCC believed that the Australians had over-reacted to the English bowling. The MCC took some time to draft a reply: At this point, the remainder of the series was under threat. Jardine was shaken by the events and by the hostile reactions to his team. Stories appeared in the press, possibly leaked by the disenchanted Nawab of Pataudi, about fights and arguments between the England players. Jardine offered to stop using bodyline if the team did not support him, but after a private meeting (not attended by Jardine or either of the team managers) the players released a statement fully supporting the captain and his tactics. Even so, Jardine would not have played in the fourth Test without the withdrawal of the unsportsmanlike accusation. 
The Australian Board met to draft a reply cable, which was sent on 30 January, indicating that they wished the series to continue and offering to postpone consideration of the fairness of bodyline bowling until after the series. The MCC's reply, on 2 February, suggested that continuing the series would be impossible unless the accusation of unsporting behaviour was withdrawn. The situation escalated into a diplomatic incident. Figures high up in both the British and Australian governments saw bodyline as potentially fracturing an international relationship that needed to remain strong. The Governor of South Australia, Alexander Hore-Ruthven, who was in England at the time, expressed his concern to British Secretary of State for Dominion Affairs James Henry Thomas that the dispute could have a significant impact on trade between the two nations. The standoff was settled when the Australian prime minister, Joseph Lyons, met with members of the Australian Board and outlined to them the severe economic hardships that could be caused in Australia if the British public boycotted Australian trade. Following considerable discussion and debate in the English and Australian press, the Australian Board sent a cable to the MCC which, while maintaining its opposition to bodyline bowling, stated "We do not regard the sportsmanship of your team as being in question". Even so, correspondence between the Australian Board and the MCC continued for almost a year. Voce missed the fourth Test of the series, being replaced by a leg spinner, Tommy Mitchell. Larwood continued to use bodyline, but he was the only bowler in the team using the tactic; even so, he used it less frequently than usual and seemed less effective in high temperatures and humidity. England won the game by eight wickets, thanks in part to an innings of 83 by Eddie Paynter, who had been admitted to hospital with tonsillitis but left in order to bat when England were struggling in their innings. Voce returned for the final Test, but neither he nor Allen was fully fit, and despite the use of bodyline tactics, Australia scored 435 at a rapid pace, aided by several dropped catches. Australia included a fast bowler for this final game, Harry Alexander, who bowled some short deliveries but was not allowed by his captain, Woodfull, to use many fielders on the leg side. England built a lead of 19 but their tactics in Australia's second innings were disrupted when Larwood left the field with an injured foot; Hedley Verity, a spinner, claimed five wickets to bowl Australia out; England won by eight wickets and took the series by four Tests to one. Bodyline continued to be bowled occasionally in the 1933 English season—most notably by Nottinghamshire, who had Carr, Voce and Larwood in their team. This gave the English crowds their first chance to see what all the fuss was about. Ken Farnes, the Cambridge University fast bowler, also bowled it in the University Match, hitting a few Oxford batsmen. Jardine himself had to face bodyline bowling in a Test match. The West Indian cricket team toured England in 1933, and, in the second Test at Old Trafford, Jackie Grant, their captain, decided to try bodyline. He had two fast bowlers, Manny Martindale and Learie Constantine. Facing bodyline tactics for the first time, England initially suffered, falling to 134 for 4, with Wally Hammond being hit on the chin, though he recovered to continue his innings. Then Jardine himself faced Martindale and Constantine. Jardine never flinched. 
With Les Ames finding himself in difficulties, Jardine said, "You get yourself down this end, Les. I'll take care of this bloody nonsense." He played right back to the bouncers, standing on tiptoe, and played them with a dead bat, sometimes playing the ball one-handed for more control. While the Old Trafford pitch was not as suited to bodyline as the hard Australian wickets, Martindale did take 5 for 73, but Constantine only took 1 for 55. Jardine himself made 127, his only Test century. In the West Indian second innings, Clark bowled bodyline back to the West Indians, taking 2 for 64. The match was eventually drawn but played a large part in turning English opinion against bodyline. "The Times" used the word bodyline for the first time without inverted commas or the qualification "so-called". "Wisden" also said that "most of those watching it for the first time must have come to the conclusion that, while strictly within the law, it was not nice." In 1934, Bill Woodfull led Australia back to England on a tour that had been under a cloud after the tempestuous cricket diplomacy of the previous bodyline series. Jardine had retired from international cricket in early 1934 after captaining a fraught tour of India, and under England's new captain, Bob Wyatt, agreements were put in place so that bodyline would not be used. However, there were occasions when the Australians felt that their hosts had overstepped the mark with tactics resembling bodyline. In a match between the Australians and Nottinghamshire, Voce, one of the bodyline practitioners of 1932–33, employed the strategy with the wicket-keeper standing to the leg side and took 8/66. In the second innings, Voce repeated the tactic late in the day, in fading light against Woodfull and Bill Brown. Of his 12 balls, 11 were no lower than head height. Woodfull told the Nottinghamshire administrators that, if Voce's leg-side bowling was repeated, his men would leave the field and return to London. He further said that Australia would not return to the country in the future. The following day, Voce was absent, ostensibly due to a leg injury. Already angered by the absence of Larwood, the Nottinghamshire faithful heckled the Australians all day. Australia had already complained privately that some pacemen had breached the agreement in the Tests. As a direct consequence of the 1932–33 tour, the MCC introduced a new rule to the laws of cricket for the 1935 English cricket season. Originally, the MCC hoped that captains would ensure that the game was played in the correct spirit, and passed a resolution that bodyline bowling would breach this spirit. When this proved to be insufficient, the MCC passed a law stating that "direct attack" bowling was unfair, making it the responsibility of the umpires to identify and stop it. In 1957, the laws were altered to prevent more than two fielders standing behind square on the leg side; the intention was to prevent negative bowling tactics whereby off spinners and slow inswing bowlers aimed at the leg stump of batsmen with fielders concentrated on the leg side. However, an indirect effect was to make bodyline fields impossible to implement. Later law changes, under the heading of "Intimidatory Short Pitched Bowling", also restricted the number of "bouncers" which may be bowled in an over. 
Nevertheless, the tactic of intimidating the batsman is still used to an extent that would have been shocking in 1933, although it is less dangerous now because today's players wear helmets and generally far more protective gear. The West Indies teams of the 1980s, which regularly fielded a bowling attack comprising some of the best fast bowlers in cricket history, were perhaps the most feared exponents. The English players and management were consistent in referring to their tactic as "fast leg theory", considering it to be a variant of the established and unobjectionable leg theory tactic. The inflammatory term "bodyline" was coined and perpetuated by the Australian press, while English writers used the term "fast leg theory". The terminology reflected differences in understanding, as neither the English public nor the Board of the Marylebone Cricket Club (MCC)—the governing body of English cricket—could understand why the Australians were complaining about what they perceived as a commonly used tactic. Some concluded that the Australian cricket authorities and public were sore losers. Of the four fast bowlers in the tour party, Gubby Allen was a voice of dissent in the English camp, refusing to bowl short on the leg side, and writing several letters home to England critical of Jardine, although he did not express this in public in Australia. A number of other players, while maintaining a united front in public, also deplored bodyline in private. The amateurs Bob Wyatt (the vice-captain), Freddie Brown and the Nawab of Pataudi opposed it, as did Wally Hammond and Les Ames among the professionals. During the season, Woodfull's physical courage and stoic, dignified leadership won him many admirers. He flatly refused to employ retaliatory tactics and did not publicly complain even though he and his men were repeatedly hit. Jardine, however, insisted his tactic was not designed to cause injury and that he was leading his team in a sportsmanlike and gentlemanly manner, arguing that it was up to the Australian batsmen to play their way out of trouble. It was subsequently revealed that several of the players had private reservations, but they did not express them publicly at the time. Following the 1932–33 series, several authors, including many of the players involved, released books expressing various points of view about bodyline. Many argued that it was a scourge on cricket and must be stamped out, while some did not see what all the fuss was about. The series has been described as the most controversial period in Australian cricket history, and was voted the most important Australian moment by a panel of Australian cricket identities. The MCC asked Harold Larwood to sign an apology to them for his bowling in Australia, making his selection for England again conditional upon it. Larwood was furious at the notion, pointing out that he had been following orders from his upper-class captain, and that was where any blame should lie. Larwood refused, never played for England again, and was vilified in his own country. Douglas Jardine always defended his tactics and in the book he wrote about the tour, "In Quest of the Ashes", described allegations that the England bowlers directed their attack with the intention of causing physical harm as stupid and patently untruthful. 
The immediate effect of the law change which banned bodyline in 1935 was to make commentators and spectators sensitive to the use of short-pitched bowling; bouncers became exceedingly rare and bowlers who delivered them were practically ostracised. This attitude ended after the Second World War, and among the first teams to make extensive use of short-pitched bowling was the Australian team captained by Bradman between 1946 and 1948. Other teams soon followed. Outside the sport, there were significant consequences for Anglo-Australian relations, which remained strained until the outbreak of World War II made cooperation paramount. Business between the two countries was adversely affected as citizens of each country avoided goods manufactured in the other. Australian commerce also suffered in British colonies in Asia: the "North China Daily News" published a pro-bodyline editorial, denouncing Australians as sore losers. An Australian journalist reported that several business deals in Hong Kong and Shanghai were lost by Australians because of local reactions. English immigrants in Australia found themselves shunned and persecuted by locals, and Australian visitors to England were treated similarly. In 1934–35 a statue of Prince Albert in Sydney was vandalised, with an ear being knocked off and the word "BODYLINE" painted on it. Both before and after World War II, numerous satirical cartoons and comedy skits were written, mostly in Australia, based on events of the bodyline tour. Generally, they poked fun at the English. In 1984, Australia's Network Ten produced a television mini-series titled "Bodyline", dramatising the events of the 1932–33 English tour of Australia. It starred Gary Sweet as Don Bradman, Hugo Weaving as Douglas Jardine, Jim Holt as Harold Larwood, Rhys McConnochie as Pelham Warner, and Frank Thring as Jardine's mentor Lord Harris. The series took some liberties with historical accuracy for the sake of drama, including a depiction of angry Australian fans burning a British flag at the Sydney Cricket Ground, an event which was never documented. Larwood, having emigrated to Australia in 1950, received several threatening and obscene phone calls after the series aired. The series was widely and strongly attacked by the surviving players for its inaccuracy and sensationalism. To this day, the bodyline tour remains one of the most significant events in the history of cricket, and strong in the consciousness of many cricket followers. In a poll of cricket journalists, commentators, and players in 2004, the bodyline tour was ranked the most important event in cricket history. Liverpool F.C. Liverpool Football Club is a professional football club in Liverpool, England, that competes in the Premier League, the top tier of English football. The club has won 5 European Cups, 3 UEFA Cups, 3 UEFA Super Cups, 18 League titles, 7 FA Cups, 8 League Cups, and 15 FA Community Shields. Founded in 1892, the club joined the Football League the following year and has played at Anfield since its formation. Liverpool established itself as a major force in English and European football in the 1970s and 1980s when Bill Shankly and Bob Paisley led the club to 11 League titles and seven European trophies. Under the management of Rafael Benítez and captained by Steven Gerrard, Liverpool became European champions for the fifth time in 2005. 
Liverpool was the ninth highest-earning football club in the world in 2016–17, with an annual revenue of €424.2 million, and the world's eighth most valuable football club in 2018, valued at $1.944 billion. The club is one of the best supported teams in the world. Liverpool has long-standing rivalries with Manchester United and Everton. The club's supporters have been involved in two major tragedies: the Heysel Stadium disaster, where escaping fans were pressed against a collapsing wall at the 1985 European Cup Final in Brussels, with 39 people – mostly Italians and Juventus fans – dying, after which English clubs were given a five-year ban from European competition, and the Hillsborough disaster in 1989, where 96 Liverpool supporters died in a crush against perimeter fencing. The team changed from red shirts and white shorts to an all-red home strip in 1964 which has been used ever since. The club's anthem is "You'll Never Walk Alone". Liverpool F.C. was founded following a dispute between the Everton committee and John Houlding, club president and owner of the land at Anfield. After eight years at the stadium, Everton relocated to Goodison Park in 1892 and Houlding founded Liverpool F.C. to play at Anfield. Originally named "Everton F.C. and Athletic Grounds Ltd" (Everton Athletic for short), the club became Liverpool F.C. in March 1892 and gained official recognition three months later, after The Football Association refused to recognise the club as Everton. The team won the Lancashire League in its début season, and joined the Football League Second Division at the start of the 1893–94 season. After finishing in first place the club was promoted to the First Division, which it won in 1901 and again in 1906. Liverpool reached its first FA Cup Final in 1914, losing 1–0 to Burnley. It won consecutive League championships in 1922 and 1923, but did not win another trophy until the 1946–47 season, when the club won the First Division for a fifth time under the control of ex-West Ham Utd centre half George Kay. Liverpool suffered its second Cup Final defeat in 1950, playing against Arsenal. The club was relegated to the Second Division in the 1953–54 season. Soon after Liverpool lost 2–1 to non-league Worcester City in the 1958–59 FA Cup, Bill Shankly was appointed manager. Upon his arrival he released 24 players and converted a boot storage room at Anfield into a room where the coaches could discuss strategy; here, Shankly and other "Boot Room" members Joe Fagan, Reuben Bennett, and Bob Paisley began reshaping the team. The club was promoted back into the First Division in 1962 and won it in 1964, for the first time in 17 years. In 1965, the club won its first FA Cup. In 1966, the club won the First Division but lost to Borussia Dortmund in the European Cup Winners' Cup final. Liverpool won both the League and the UEFA Cup during the 1972–73 season, and the FA Cup again a year later. Shankly retired soon afterwards and was replaced by his assistant, Bob Paisley. In 1976, Paisley's second season as manager, the club won another League and UEFA Cup double. The following season, the club retained the League title and won the European Cup for the first time, but it lost in the 1977 FA Cup Final. Liverpool retained the European Cup in 1978 and regained the First Division title in 1979. 
During Paisley's nine seasons as manager Liverpool won 21 trophies, including three European Cups, a UEFA Cup, six League titles and three consecutive League Cups; the only domestic trophy he did not win was the FA Cup. Paisley retired in 1983 and was replaced by his assistant, Joe Fagan. Liverpool won the League, League Cup and European Cup in Fagan's first season, becoming the first English side to win three trophies in a season. Liverpool reached the European Cup final again in 1985, against Juventus at the Heysel Stadium. Before kick-off, Liverpool fans breached a fence which separated the two groups of supporters, and charged the Juventus fans. The resulting weight of people caused a retaining wall to collapse, killing 39 fans, mostly Italians. The incident became known as the Heysel Stadium disaster. The match was played in spite of protests by both managers, and Liverpool lost 1–0 to Juventus. As a result of the tragedy, English clubs were banned from participating in European competition for five years; Liverpool received a ten-year ban, which was later reduced to six years. Fourteen Liverpool fans received convictions for involuntary manslaughter. Fagan had announced his retirement just before the disaster and Kenny Dalglish was appointed as player-manager. During his tenure, the club won another three league titles and two FA Cups, including a League and Cup "Double" in the 1985–86 season. Liverpool's success was overshadowed by the Hillsborough disaster: in an FA Cup semi-final against Nottingham Forest on 15 April 1989, hundreds of Liverpool fans were crushed against perimeter fencing. Ninety-four fans died that day; the 95th victim died in hospital from his injuries four days later and the 96th died nearly four years later, without regaining consciousness. After the Hillsborough disaster there was a government review of stadium safety. The resulting Taylor Report paved the way for legislation that required top-division teams to have all-seater stadiums. The report ruled that the main reason for the disaster was overcrowding due to a failure of police control. Liverpool was involved in the closest finish to a league season during the 1988–89 season. Liverpool finished equal with Arsenal on both points and goal difference, but lost the title on total goals scored when Arsenal scored the final goal in the last minute of the season. Dalglish cited the Hillsborough disaster and its repercussions as the reason for his resignation in 1991; he was replaced by former player Graeme Souness. Under his leadership Liverpool won the 1992 FA Cup Final, but their league performances slumped, with two consecutive sixth-place finishes, eventually resulting in his dismissal in January 1994. Souness was replaced by Roy Evans, and Liverpool went on to win the 1995 Football League Cup Final. While they made some title challenges under Evans, third-place finishes in 1996 and 1998 were the best they could manage, and so Gérard Houllier was appointed co-manager in the 1998–99 season, and became the sole manager in November 1998 after Evans resigned. In 2001, Houllier's second full season in charge, Liverpool won a "Treble": the FA Cup, League Cup and UEFA Cup. Houllier underwent major heart surgery during the 2001–02 season and Liverpool finished second in the League, behind Arsenal. They won a further League Cup in 2003, but failed to mount a title challenge in the two seasons that followed. Houllier was replaced by Rafael Benítez at the end of the 2003–04 season. 
Despite finishing fifth in Benítez's first season, Liverpool won the 2004–05 UEFA Champions League, beating A.C. Milan 3–2 in a penalty shootout after the match ended with a score of 3–3. The following season, Liverpool finished third in the Premier League and won the 2006 FA Cup Final, beating West Ham United in a penalty shootout after the match finished 3–3. American businessmen George Gillett and Tom Hicks became the owners of the club during the 2006–07 season, in a deal which valued the club and its outstanding debts at £218.9 million. The club reached the 2007 UEFA Champions League Final against Milan, as it had in 2005, but lost 2–1. During the 2008–09 season Liverpool achieved 86 points, its highest Premier League points total, and finished as runners up to Manchester United. In the 2009–10 season, Liverpool finished seventh in the Premier League and failed to qualify for the Champions League. Benítez subsequently left by mutual consent and was replaced by Fulham manager Roy Hodgson. At the start of the 2010–11 season Liverpool was on the verge of bankruptcy and the club's creditors asked the High Court to allow the sale of the club, overruling the wishes of Hicks and Gillett. John W. Henry, owner of the Boston Red Sox and of Fenway Sports Group, bid successfully for the club and took ownership in October 2010. Poor results during the start of that season led to Hodgson leaving the club by mutual consent and former player and manager Kenny Dalglish taking over. In the 2011–12 season, Liverpool secured a record 8th League Cup success and reached the FA Cup final, but finished in eighth position, the worst league finish in 18 years; this led to the sacking of Dalglish. He was replaced by Brendan Rodgers, whose Liverpool team in the 2013–14 season mounted an unexpected title charge to finish second behind champions Manchester City and subsequently return to the Champions League, scoring 101 goals in the process, the most since the 106 scored in the 1895–96 season. Following a disappointing 2014–15 season, where Liverpool finished sixth in the league, and a poor start to the following campaign, Rodgers was sacked in October 2015. He was replaced by Jürgen Klopp, who in his first season at Liverpool, took the club to the finals of both the Football League Cup and UEFA Europa League, finishing as runner-up in both competitions. For much of Liverpool's history its home colours have been all red, but when the club was founded its kit was more like the contemporary Everton kit. The blue and white quartered shirts were used until 1894, when the club adopted the city's colour of red. The city's symbol of the liver bird was adopted as the club's badge in 1901, although it was not incorporated into the kit until 1955. Liverpool continued to wear red shirts and white shorts until 1964, when manager Bill Shankly decided to change to an all red strip. Liverpool played in all red for the first time against Anderlecht, as Ian St. John recalled in his autobiography: The Liverpool away strip has more often than not been all yellow or white shirts and black shorts, but there have been several exceptions. An all grey kit was introduced in 1987, which was used until the 1991–92 centenary season, when it was replaced by a combination of green shirts and white shorts. After various colour combinations in the 1990s, including gold and navy, bright yellow, black and grey, and ecru, the club alternated between yellow and white away kits until the 2008–09 season, when it re-introduced the grey kit. 
A third kit is designed for European away matches, though it is also worn in domestic away matches on occasions when the current away kit clashes with a team's home kit. From 2012 to 2015, the kits were designed by Warrior Sports, who became the club's kit providers at the start of the 2012–13 season. In February 2015, Warrior's parent company New Balance announced it would be entering the global football market, with teams sponsored by Warrior now being outfitted by New Balance. The only other branded shirts worn by the club were made by Umbro until 1985, when they were replaced by Adidas, who produced the kits until 1996, when Reebok took over. Reebok produced the kits for 10 years before Adidas made them again from 2006 to 2012. Liverpool was the first English professional club to have a sponsor's logo on its shirts, after agreeing a deal with Hitachi in 1979. Since then the club has been sponsored by Crown Paints, Candy, Carlsberg and Standard Chartered Bank. The contract with Carlsberg, which was signed in 1992, was the longest-lasting agreement in English top-flight football. The association with Carlsberg ended at the start of the 2010–11 season, when Standard Chartered Bank became the club's sponsor. The Liverpool badge is based on the city's liver bird, which in the past had been placed inside a shield. In 1992, to commemorate the centennial of the club, a new badge was commissioned, including a representation of the Shankly Gates. The next year twin flames were added at either side, symbolic of the Hillsborough memorial outside Anfield, where an eternal flame burns in memory of those who died in the Hillsborough disaster. In 2012, Warrior Sports' first Liverpool kit removed the shield and gates, returning the badge to what had adorned Liverpool shirts in the 1970s; the flames were moved to the back collar of the shirt, surrounding the number 96, representing the number of people who died at Hillsborough. Anfield was built in 1884 on land adjacent to Stanley Park. It was originally used by Everton before the club moved to Goodison Park after a dispute over rent with Anfield owner John Houlding. Left with an empty ground, Houlding founded Liverpool in 1892 and the club has played at Anfield ever since. The capacity of the stadium at the time was 20,000, although only 100 spectators attended Liverpool's first match at Anfield. The Kop, a banked stand at one end of the ground, was built in 1906 because of the high turnout for matches and was initially called the Oakfield Road Embankment. Its first game was on 1 September 1906, when the home side beat Stoke City 1–0. Later that year the stand was formally renamed the Spion Kop after a hill in KwaZulu-Natal. The hill was the site of the Battle of Spion Kop in the Second Boer War, where over 300 men of the Lancashire Regiment died, many of them from Liverpool. At its peak, the stand could hold 28,000 spectators and was one of the largest single-tier stands in the world. Many stadia in England had stands named after Spion Kop, but Anfield's was the largest of them at the time; it could hold more supporters than some entire football grounds. Anfield could accommodate more than 60,000 supporters at its peak, and had a capacity of 55,000 until the 1990s. The Taylor Report and Premier League regulations obliged Liverpool to convert Anfield to an all-seater stadium in time for the 1993–94 season, reducing the capacity to 45,276. 
The findings of the Taylor Report precipitated the redevelopment of the Kemlyn Road Stand, which was rebuilt in 1992, coinciding with the centenary of the club, and was known as the Centenary Stand until 2017, when it was renamed the Kenny Dalglish Stand. An extra tier was added to the Anfield Road end in 1998, which further increased the capacity of the ground but gave rise to problems when it was opened. A series of support poles and stanchions were inserted to give extra stability to the top tier of the stand after movement of the tier was reported at the start of the 1999–2000 season. Because of restrictions on expanding the capacity at Anfield, Liverpool announced plans to move to the proposed Stanley Park Stadium in May 2002. Planning permission was granted in July 2004, and in September 2006, Liverpool City Council agreed to grant Liverpool a 999-year lease on the proposed site. Following the takeover of the club by George Gillett and Tom Hicks in February 2007, the proposed stadium was redesigned. The new design was approved by the Council in November 2007. The stadium was scheduled to open in August 2011 and would hold 60,000 spectators, with HKS, Inc. contracted to build it. Construction was halted in August 2008, as Gillett and Hicks had difficulty in financing the £300 million needed for the development. In October 2012, BBC Sport reported that Fenway Sports Group, the new owners of Liverpool FC, had decided to redevelop their current home at Anfield stadium, rather than building a new stadium in Stanley Park. As part of the redevelopment, the capacity of Anfield was to increase from 45,276 to approximately 60,000, at a cost of approximately £150m. When construction of the new Main Stand was completed, the capacity of Anfield was increased to 54,074. This £100 million expansion added a third tier to the stand. This was all part of a £260 million project to improve the Anfield area. Jürgen Klopp, the manager at the time, described the stand as "impressive". Liverpool is one of the best supported clubs in the world. The club states that its worldwide fan base includes more than 200 officially recognised LFC Official Supporters Clubs in at least 50 countries. Notable groups include Spirit of Shankly. The club takes advantage of this support through its worldwide summer tours, which have included playing in front of 101,000 in Michigan, U.S., and 95,000 in Melbourne, Australia. Liverpool fans often refer to themselves as Kopites, a reference to the fans who once stood, and now sit, on the Kop at Anfield. In 2008 a group of fans decided to form a splinter club, A.F.C. Liverpool, to play matches for fans who had been priced out of watching Premier League football. The song "You'll Never Walk Alone", originally from the Rodgers and Hammerstein musical "Carousel" and later recorded by Liverpool musicians Gerry and the Pacemakers, is the club's anthem and has been sung by the Anfield crowd since the early 1960s. It has since gained popularity among fans of other clubs around the world. The song's title adorns the top of the Shankly Gates, which were unveiled on 2 August 1982 in memory of former manager Bill Shankly. The "You'll Never Walk Alone" portion of the Shankly Gates is also reproduced on the club's crest. The club's supporters have been involved in two stadium disasters. The first was the 1985 Heysel Stadium disaster, in which 39 Juventus supporters were killed. 
They were confined to a corner by Liverpool fans who had charged in their direction; the weight of the cornered fans caused a wall to collapse. UEFA laid the blame for the incident solely on the Liverpool supporters, and banned all English clubs from European competition for five years. Liverpool was banned for an additional year, preventing it from participating in the 1990–91 European Cup, even though it won the League in 1990. Twenty-seven fans were arrested on suspicion of manslaughter and were extradited to Belgium in 1987 to face trial. In 1989, after a five-month trial in Belgium, 14 Liverpool fans were given three-year sentences for involuntary manslaughter; half of the terms were suspended. The second disaster took place during an FA Cup semi-final between Liverpool and Nottingham Forest at Hillsborough Stadium, Sheffield, on 15 April 1989. Ninety-six Liverpool fans died as a consequence of overcrowding at the Leppings Lane end, in what became known as the Hillsborough disaster. In the following days "The Sun" newspaper published an article entitled "The Truth", in which it claimed that Liverpool fans had robbed the dead and had urinated on and attacked the police. Subsequent investigations proved the allegations false, leading to a boycott of the newspaper by Liverpool fans across the city and elsewhere; many still refuse to buy "The Sun" more than 20 years later. Many support organisations were set up in the wake of the disaster, such as the Hillsborough Justice Campaign, which represents bereaved families, survivors and supporters in their efforts to secure justice. Liverpool's longest-established rivalry is with fellow Liverpool team Everton, against whom they contest the Merseyside derby. The rivalry stems from Liverpool's formation and the dispute with Everton officials and the then owners of Anfield. The Merseyside derby is one of the few local derbies which do not enforce fan segregation, and hence has been known as the "friendly derby". Since the mid-1980s, the rivalry has intensified both on and off the field and, since the inception of the Premier League in 1992, the Merseyside derby has had more players sent off than any other Premier League game. It has been referred to as "the most ill-disciplined and explosive fixture in the Premier League". Liverpool's rivalry with Manchester United stems from the cities' competition in the Industrial Revolution of the 19th century. The two clubs alternated as champions between 1964 and 1967, and Manchester United became the first English team to win the European Cup in 1968, followed by Liverpool's four European Cup victories. Despite the 38 league titles and eight European Cups between them the two rivals have rarely been successful at the same time – Liverpool's run of titles in the 1970s and 1980s coincided with Manchester United's 26-year title drought, and United's success in the Premier League-era has likewise coincided with Liverpool's ongoing drought, and the two clubs have finished first and second in the league only five times. Nonetheless, former Manchester United manager Alex Ferguson said in 2002, "My greatest challenge was knocking Liverpool right off their fucking perch", and the last player to be transferred between the two clubs was Phil Chisnall, who moved to Liverpool from Manchester United in 1964. As the owner of Anfield and founder of Liverpool, John Houlding was the club's first chairman, a position he held from its founding in 1892 until 1904. John McKenna took over as chairman after Houlding's departure. 
McKenna subsequently became President of the Football League. The chairmanship changed hands many times before John Smith, whose father was a shareholder of the club, took up the role in 1973. He oversaw the most successful period in Liverpool's history before stepping down in 1990. His successor was Noel White, who became chairman in 1990. In August 1991 David Moores, whose family had owned the club for more than 50 years, became chairman. His uncle John Moores was also a shareholder at Liverpool and was chairman of Everton from 1961 to 1973. Moores owned 51 percent of the club, and in 2004 expressed his willingness to consider a bid for his shares in Liverpool. Moores eventually sold the club to American businessmen George Gillett and Tom Hicks on 6 February 2007. The deal valued the club and its outstanding debts at £218.9 million. The pair paid £5,000 per share, or £174.1m for the total shareholding and £44.8m to cover the club's debts. Disagreements between Gillett and Hicks, and the fans' lack of support for them, resulted in the pair looking to sell the club. Martin Broughton was appointed chairman of the club on 16 April 2010 to oversee its sale. In May 2010, accounts were released showing the holding company of the club to be £350m in debt (due to the leveraged takeover) with losses of £55m, causing auditor KPMG to qualify its audit opinion. The group's creditors, including the Royal Bank of Scotland, took Gillett and Hicks to court to force them to allow the board to proceed with the sale of the club, the major asset of the holding company. A High Court judge, Mr Justice Floyd, ruled in favour of the creditors and paved the way for the sale of the club to Fenway Sports Group (formerly New England Sports Ventures), although Gillett and Hicks still had the option to appeal. Liverpool was sold to Fenway Sports Group on 15 October 2010 for £300m. Liverpool has been described as a global brand; a 2010 report valued the club's trademarks and associated intellectual property at £141m, an increase of £5m on the previous year. Liverpool was given a brand rating of AA (Very Strong). In April 2010 business magazine "Forbes" ranked Liverpool as the sixth most valuable football team in the world, behind Manchester United, Real Madrid, Arsenal, Barcelona and Bayern Munich; they valued the club at $822m (£532m), excluding debt. Accountants Deloitte ranked Liverpool eighth in the Deloitte Football Money League, which ranks the world's football clubs in terms of revenue. Liverpool's income in the 2009–10 season was €225.3m. Because of its successful history, Liverpool is often featured when football is depicted in British culture and has appeared in a number of media firsts. The club appeared in the first edition of the BBC's "Match of the Day", which screened highlights of its match against Arsenal at Anfield on 22 August 1964. The first football match to be televised in colour was between Liverpool and West Ham United, broadcast live in March 1967. Liverpool fans featured in the Pink Floyd song "Fearless", in which they sang excerpts from "You'll Never Walk Alone". To mark the club's appearance in the 1988 FA Cup Final, Liverpool released a song known as the "Anfield Rap", featuring John Barnes and other members of the squad. A documentary drama on the Hillsborough disaster, written by Jimmy McGovern, was screened in 1996. It featured Christopher Eccleston as Trevor Hicks, whose story is the focus of the script. 
Hicks, who lost two teenage daughters in the disaster, went on to campaign for safer stadiums and helped to form the Hillsborough Families Support Group. Liverpool featured in the film "The 51st State" (also known as "Formula 51"), in which ex-hitman Felix DeSouza (Robert Carlyle) is a keen supporter of the team and the last scene takes place at a match between Liverpool and Manchester United. The club was featured in a children's television show called "Scully"; the plot revolved around a young boy, Francis Scully, who tried to gain a trial match with Liverpool. The show featured prominent Liverpool players of the time such as Kenny Dalglish. Since the establishment of the club in 1892, 45 players have been club captain of Liverpool F.C. Andrew Hannah became the first captain of the club after Liverpool separated from Everton and formed its own club. Alex Raisbeck, who was club captain from 1899 to 1909, was the longest-serving captain until he was overtaken by Steven Gerrard, who served 12 seasons as Liverpool captain from the 2003–04 season onwards. The present captain is Jordan Henderson, who replaced Gerrard in the 2015–16 season following Gerrard's move to LA Galaxy. Liverpool's first trophy was the Lancashire League, which it won in the club's first season. In 1901, the club won its first League title, while its first success in the FA Cup was in 1965. In terms of the number of trophies won, Liverpool's most successful decade was the 1980s, when the club won six League titles, two FA Cups, four League Cups, five Charity Shields (one shared) and two European Cups. The club has accumulated more top-flight wins and points than any other English team. Liverpool also has the highest average league finishing position (3.3) for the 50-year period to 2015 and the second-highest average league finishing position for the period 1900–1999 after Arsenal, with an average league placing of 8.7. Liverpool has won the European Cup, UEFA's premier club competition, five times, an English record surpassed only by Real Madrid and Milan. Liverpool's fifth European Cup win, in 2005, meant that the club was awarded the trophy permanently and was also awarded a multiple-winner badge. Liverpool also hold the English record of three wins in the UEFA Cup, UEFA's secondary club competition. Shorter competitions, such as the FA Community Shield and the UEFA Super Cup, are not generally considered to contribute towards a Double or Treble. Lindow Man Lindow Man, also known as Lindow II and (in jest) as Pete Marsh, is the preserved bog body of a man discovered in a peat bog at Lindow Moss near Wilmslow in Cheshire, North West England. The human remains were found on 1 August 1984 by commercial peat-cutters. Lindow Man is not the only bog body to have been found in the moss; Lindow Woman was discovered the year before, and other body parts have also been recovered. The find, described as "one of the most significant archaeological discoveries of the 1980s", caused a media sensation. It helped invigorate study of British bog bodies, which had previously been neglected in comparison to those found in the rest of Europe. At the time of death, Lindow Man was a healthy male in his mid-20s, and he may have been someone of high status, as his body shows little evidence of heavy or rough work. 
There has been debate over the reason for Lindow Man's death, for the nature of his demise was violent, perhaps ritualistic; after a last meal of charred bread, Lindow Man was strangled, hit on the head, and his throat cut. Dating the body has proven problematic, but it is thought that Lindow Man was deposited into Lindow Moss, face down, some time between 2 BC and 119 AD, in either the Iron Age or Romano-British period. The recovered body has been preserved by freeze-drying and is on permanent display at the British Museum, although it occasionally travels to other venues such as the Manchester Museum. Lindow Moss is a peat bog in Lindow, an area of Wilmslow, Cheshire, which has been used as common land since the medieval period. It formed after the last ice age, one of many such peat bogs in north-east Cheshire and the Mersey basin that formed in hollows caused by melting ice. Investigations have not yet discovered settlement or agricultural activity around the edge of Lindow Moss that would have been contemporary with Lindow Man; however, analysis of pollen in the peat suggests there was some cultivation in the vicinity. Once far more extensive, the bog has now shrunk to a tenth of its original size. It is a dangerous place; an 18th-century writer recorded people drowning there. For centuries the peat from the bog was used as fuel, and it continued to be extracted until the 1980s, by which time the process had been mechanised. Lindow Moss is a lowland raised mire; this type of peat bog often produces the best-preserved bog bodies, allowing more detailed analysis. Lowland raised mires occur mainly in northern England and extend south to the Midlands. Lindow Man is one of 27 bodies to be recovered from such areas. On 13 May 1983, two peat workers at Lindow Moss, Andy Mould and Stephen Dooley, noticed an unusual object—about the size of a football—on the elevator taking peat to the shredding machine. They removed the object for closer inspection, joking that it was a dinosaur egg. Once the peat had been removed, their discovery turned out to be a decomposing, incomplete human head with one eye and some hair intact. Forensics identified the skull as belonging to a European woman, probably aged 30–50. Police initially thought the skull was that of Malika Reyn-Bardt, who had disappeared in 1960 and was the subject of an ongoing investigation. While in prison on another charge, her husband, Peter Reyn-Bardt, had boasted that he had killed his wife and buried her in the back garden of their bungalow, which was on the edge of the area of mossland where peat was being dug. The garden was examined but no body was recovered there. When Reyn-Bardt was confronted with the discovery of the skull from Lindow Moss, he confessed to the murder of his wife. The skull was later radiocarbon dated, revealing it to be nearly 2,000 years old. "Lindow Woman", as it became known, dated from around 210 AD. This emerged shortly before Reyn-Bardt went to trial, but he was convicted on the evidence of his confession. A year later a further discovery was made at Lindow Moss, just south-west of the Lindow Woman find. On 1 August 1984, Andy Mould, who had been involved in the discovery of Lindow Woman, took what he thought was a piece of wood off the elevator of the peat-shredding machine. He threw the object at Eddie Slack, his workmate. When it hit the ground, peat fell off the object and revealed it to be a human foot. The police were called and the foot was taken away for examination. 
Rick Turner, the Cheshire County Archaeologist, was notified of the discovery and succeeded in finding the rest of the body, which later became known as Lindow Man. Some skin had been exposed and had started to decay, so to prevent further deterioration of the body, it was re-covered with peat. The complete excavation of the block containing the remains was performed on 6 August. Until it could be dated, it was moved to the Macclesfield District General Hospital for storage. As the body of Malika Reyn-Bardt had still not been found, it was thought possible the body might be hers, until it was determined to be male, and radiocarbon dated. The owners of the land on which Lindow Man was found donated the body to the British Museum, and on 21 August it was transported to London. At the time, the body was dubbed "Pete Marsh" (a pun on "peat marsh") by Middlesex Hospital radiologists, a name subsequently adopted by local journalists, as was the similar "Pete Bogg" (a pun on "peat bog"). The find was announced to the press during the second week of investigation. As the best preserved bog body found in Britain, its discovery caused a domestic media sensation and received global coverage. Sparking excitement in the country's archaeological community, who had long expected such a find, it was hailed as one of the most important archaeological discoveries of the 1980s. A "Q.E.D." documentary about Lindow Man broadcast by the BBC in 1985 attracted 10 million viewers. Lindow Man's official name is Lindow II, as there are other finds from the area: Lindow I (Lindow Woman) refers to a human skull, Lindow III to a "fragmented headless body", and Lindow IV to the upper thigh of an adult male, possibly that of Lindow Man. After the discovery of Lindow Man, there were no further archaeological excavations at Lindow Moss until 1987. A large piece of skin was found by workmen on the elevator on 6 February 1987. On this occasion, the police left the investigation to the archaeologists. Over 70 pieces were found, constituting Lindow III. Although the bone was not as well preserved as that of Lindow Man, the other tissues survived in better condition. The final discovery was that of Lindow IV on 14 June 1988. Part of a left leg and buttocks were found on the elevator, from a site just west of where Lindow Man was found. Nearly three months later, on 12 September, a right thigh was discovered in the peat on the bucket of a digger. The proximity of the discovery sites, coupled with the fact that the remains were shown to come from an adult male, means that Lindow IV is probably part of Lindow Man. Lindow Man marked the first discovery in Britain of a well-preserved bog body; its condition was comparable to that of Grauballe Man and Tollund Man from Denmark. Before Lindow Man was found, it was estimated that 41 bog bodies had been found in England and Wales and 15 in Scotland. Encouraged by the discovery of Lindow Man, a gazetteer was compiled, which revealed a far higher number of bog bodies: over 85 in England and Wales and over 36 in Scotland. Prior to the discovery of the bodies in Lindow Moss, British bog bodies had been a relatively neglected subject compared to European examples. The interest caused by Lindow Man led to more in-depth research of accounts of discoveries in bogs since the 17th century; by 1995, the numbers had changed to 106 in England and Wales and 34 in Scotland. The remains covered a large time frame. 
In life, Lindow Man would have measured between 5'6" and 5'8" (1.68 and 1.73 m) tall. His age at death was ascertained to be around the mid-20s. The body retains a trimmed beard, moustache, and sideburns of brown hair, as well as healthy teeth with no visible cavities, and manicured fingernails, indicating he did little heavy or rough work. Apart from a fox-fur armband, Lindow Man was discovered completely naked. When he died, Lindow Man was suffering from slight osteoarthritis and an infestation of whipworm and maw worm. As a result of decalcification of the bones and pressure from the peat under which Lindow Man was buried, his skull was distorted. While some preserved human remains may contain DNA, peat bogs such as Lindow Moss are generally poor for such a purpose, and it is unlikely that DNA could be recovered from Lindow Man. Lindow Man and Lindow III were found to have elevated levels of copper on their skin. The cause of this was uncertain, as it could have occurred naturally, although a study by Pyatt "et al." proposed that the bodies may have been painted with a copper-based pigment. To test this, skin samples were taken from places likely to be painted and tested against samples from areas where painting was unlikely. It was found that the copper content of the skin of the torso was higher than in the control areas, suggesting that the theory of Pyatt "et al." may have been correct. However, the conclusion was ambiguous, as the overall copper content was above that expected for a male, and variations across the body may have been due to environmental factors. Similarly, green deposits were found in the hair, originally thought to be a copper-based pigment used for decoration, but they were later found to be the result of a reaction between the keratin in the hair and the acid of the peat bog. Dating Lindow Man is problematic, as samples from the body and surrounding peat have produced dates spanning a 900-year period. Although the peat encasing Lindow Man has been radiocarbon dated to about 300 BC, Lindow Man himself has a different date. Early tests at different laboratories returned conflicting dates for the body; later tests suggested a date between 2 BC and 119 AD. There has been a tendency to ascribe the body to the Iron Age period rather than the Roman period, due to the interpretation that Lindow Man's death may have been a ritual sacrifice or execution. Explanations have been sought for why the peat in which he was found is much older. Archaeologist P. C. Buckland suggests that, as the stratigraphy of the peat appears undisturbed, Lindow Man may have been deposited into a pool that was already some 300 years old. Geographer K. E. Barber has argued against this hypothesis, saying that pools at Lindow Moss would have been too shallow, and suggests that the peat may have been peeled back to allow the burial and then replaced, leaving the stratigraphy apparently undisturbed. Lindow Man's last meal was preserved in his stomach and intestines and was analysed in some detail. It was hoped that investigations into the contents of the stomach would shed light on the contemporary diet, as was the case with Grauballe Man and Tollund Man in the 1950s. The analysis of the contents of the digestive system of bog bodies had become one of the principal endeavours of investigating such remains. Analysis of the grains present revealed his diet to be mostly of cereals. 
He probably ate slightly charred bread, although the burning may have had ritual significance rather than being an accident. Some mistletoe pollen was also found in the stomach, indicating that Lindow Man died in March or April. One of the conclusions of the study was that the people buried in Lindow Moss may have had a less varied diet than their European counterparts. According to Jody Joy, curator of the Iron Age collection at the British Museum, the importance of Lindow Man lies more in how he lived rather than how he died, as the circumstances surrounding his demise may never be fully established. As the peat was cleaned off the body in the laboratory, it became clear that Lindow Man had suffered a violent death. The injuries included a V-shaped, cut on top of his head; a possible laceration at the back of the head, ligature marks on the neck where a sinew cord was found, a possible wound on the right side of the neck, a possible stab wound in the upper right chest, a broken neck, and a fractured rib. Xeroradiography revealed that the blow on top of the head (causing the V-shaped cut) was caused by a relatively blunt object; it had fractured the skull and driven fragments into the brain. Swelling along the edges of the wound indicated that Lindow Man had lived after being struck. The blow, possibly from a small axe, would have caused unconsciousness, but Lindow Man could have survived for several hours afterwards. The ligature marks on the neck were caused by tightening the sinew cord found around his neck, possibly a garrotte or necklace. It is not possible to confirm whether some injuries took place before or after death, due to the body's state of decay. This is the case for the wound in the upper right chest and the laceration on the back of the skull. The cut on the right of the neck may have been the result of the body becoming bloated, causing the skin to split; however, the straight edges to the wound suggest that it may have been caused by a sharp instrument, such as a knife. The ligature marks on the neck may have occurred after death. In some interpretations of Lindow Man's death, the sinew is a garrotte used to break the man's neck. However, Robert Connolly, a lecturer in physical anthropology, suggests that the sinew may have been ornamental and that ligature marks may have been caused by the body swelling when submerged. The rib fracture may also have occurred after death, perhaps during the discovery of the body, but is included in some narratives of Lindow Man's death. The broken neck would have proven the fatal injury, whether caused by the sinew cord tightening around the neck or by blows to the back of the head. After death, Lindow Man was deposited into Lindow Moss face down. Archaeologist Don Brothwell considers that many of the older bodies need re-examining with modern techniques, such as those used in the analysis of Lindow Man. The study of bog bodies, including these found in Lindow Moss, has contributed to a wider understanding of well-preserved human remains, helping to develop new methods in analysis and investigation. The use of sophisticated techniques, such as computer tomography (CT) scans, has marked the investigation of the Lindow bodies as particularly important. Such scans allow the reconstruction of the body and internal examination. Of the 27 bodies recovered from lowland raised mires in England and Wales, only those from Lindow Moss and the remains of Worsley Man have survived, together with a shoe from another body. 
The remains have a date range from the early 1st to the 4th centuries. Investigation into the other bodies relies on contemporary descriptions of the discovery. The physical evidence allows a general reconstruction of how Lindow Man was killed, although some details are debated, but it does not explain why he was killed. In North West England, there is little evidence for religious or ritual activity in the Iron Age period. What evidence does survive is usually in the form of artefacts recovered from peat bogs. Late Iron Age burials in the region often took the form of a crouched inhumation, sometimes with personal ornaments. Although dated to the mid-1st century AD, the type of burial of Lindow Man was more common in the prehistoric period. In the latter half of the 20th century, scholars widely believed that bog bodies demonstrating injuries to the neck or head area were examples of ritual sacrifice. Bog bodies were associated with Germanic and Celtic cultures, specifically relating to head worship. According to Brothwell, Lindow Man is one of the most complex examples of "overkill" in a bog body, and possibly has ritual meaning as it was "extravagant" for a straightforward murder. Archaeologists John Hodgson and Mark Brennand suggest that bog bodies may have been related to religious practice, although there is division in the academic community over this issue. In the case of Lindow Man, scholars debate whether the killing was murder or done as part of ritual. Anne Ross, an expert on Iron Age religion, proposed that the death was an example of human sacrifice and that the "triple death" (throat cut, strangled, and hit on the head) was an offering to several different gods. The wide date range for Lindow Man's death (2 BC to 119 AD) means he may have met his demise after the Romans conquered northern England in the 60s AD. As the Romans outlawed human sacrifice, such timing would open up other possibilities. This conclusion was emphasised by historian Ronald Hutton, who challenged the interpretation of sacrificial death. Connolly suggests that as Lindow Man was found naked, he could have been the victim of a violent robbery. Joy said, "The jury really is still out on these bodies, whether they were aristocrats, priests, criminals, outsiders, whether they went willingly to their deaths or whether they were executed – but Lindow was a very remote place in those days, an unlikely place for an ambush or a murder". Environment and situation are the crucial factors that determine how corpses decay. For instance, corpses will decay differently depending on the weather, the way they are buried, and the medium in which they are buried. Peat slows the decay of corpses. It was feared that, once Lindow Man was removed from that environment, which had preserved the body for nearly 2,000 years, the remains would rapidly start to deteriorate, so steps were taken to ensure preservation. After rejecting methods that had been used to maintain the integrity of other bog bodies, such as the "pit-tanning" used on Grauballe Man, which took a year and a half, scientists settled on freeze-drying. In preparation, the body was covered in a solution of 15% polyethylene glycol 400 and 85% water to prevent its becoming distorted. The body was then frozen solid and the ice vaporised to ensure Lindow Man did not shrink. Afterwards, Lindow Man was put in a specially constructed display case to control the environment, maintaining a constant temperature and a humidity of 55%. Lindow Man is held in the British Museum. 
Before the remains were transferred there, people from North West England launched an unsuccessful campaign to keep the body in Manchester. The bog body has been on temporary display in other venues: on three occasions at the Manchester Museum, and at the Great North Museum in Newcastle. The 2008–09 Manchester display, titled "Lindow Man: A Bog Body Mystery Exhibition at the Manchester Museum", won the category "Best Archaeological Innovation" in the 2010 British Archaeological Awards, run by the Council for British Archaeology. Critics have complained that, by displaying the remains in a museum, the body of Lindow Man has been objectified rather than treated with the respect due to the dead. Emma Restall Orr, a neo-druid, has questioned whether the body should be displayed at all. This is part of a wider discussion about the scientific treatment of human remains and the use of them as information sources by museum researchers and archaeologists. Luton Town F.C. Luton Town Football Club is a professional association football club that has been based at Kenilworth Road in Luton, Bedfordshire, since 1905. Founded in 1885, it is nicknamed "the Hatters" and affiliated to the Bedfordshire County Football Association. Its first team will contest the third tier of English football, League One, during the 2018–19 season. The club's history includes major trophy wins, several financial crises, numerous promotions and relegations, and some spells of sustained success. It was perhaps most prominent between 1982 and 1992, when it was a member of English football's top division, at that time the First Division; the team won its only major honour, the Football League Cup, in 1988. The club was the first in southern England to turn professional, making payments to players as early as 1890 and turning fully professional a year later. It joined the Football League before the 1897–98 season, left in 1900 because of financial problems, and rejoined in 1920. Luton reached the First Division in 1955–56 and contested a major final for the first time when playing Nottingham Forest in the 1959 FA Cup Final. The team was then relegated from the top division in 1959–60, and demoted twice more in the following five years, playing in the Fourth Division from the 1965–66 season. However, it was promoted back to the top level by 1974–75. Luton Town's most recent successful period began in 1981–82, when the club won the Second Division, and thereby gained promotion to the First. Luton defeated Arsenal 3–2 in the 1988 Football League Cup Final and remained in the First Division until relegation at the end of the 1991–92 season. Between 2007 and 2009, financial difficulties caused the club to fall from the second tier of English football to the fifth in successive seasons. The last of these relegations came during the 2008–09 season, when 30 points were docked from Luton's record for various financial irregularities. Luton thereafter spent five seasons in non-League football before winning the Conference Premier in 2013–14, securing promotion back into the Football League. Luton Town Football Club was formed on 11 April 1885, the product of a merger of the two leading local teams, Luton Town Wanderers and Excelsior. Initially based at Excelsior's Dallow Lane ground, the club began making payments to certain individual players in 1890. The following year, Luton became the first club in southern England to be fully professional. 
The club was a founder member of the Southern Football League in the 1894–95 season and finished as runners-up in its first two seasons. It then left to help form the United League and came second in that league's inaugural season before joining the Football League (then based mostly in northern and central England) for 1897–98, concurrently moving to a new ground at Dunstable Road. The club continued to enter a team to the United League for two more seasons, and won the title in 1897–98. Poor attendance, high wages and the high travel and accommodation costs that resulted from Luton's distance from the northern heartlands of the Football League crippled the club financially, and made it too expensive to compete in that league. A return to the Southern League was therefore arranged for the 1900–01 season. Eight years after arriving at Dunstable Road, Luton moved again, settling at their current ground, Kenilworth Road, in 1905. Captain and left winger Bob Hawkes became Luton's first international player when he was picked to play for England against Ireland on 16 February 1907. A poor 1911–12 season saw Luton relegated to the Southern League's Second Division; the club won promotion back two years later. After the First World War broke out, Luton took part in The London Combination during 1915–16, and afterwards filled each season with friendly matches. A key player of the period was Ernie Simms, a forward. Simms was invalided back to England after being wounded on the Italian front, but recovered enough to regain his place in the Luton team and scored 40 goals during the 1916–17 season. The Luton side first played in the white and black colours which it has retained for much of its history during the 1920–21 season, when the club rejoined the Football League; the players had previously worn an assortment of colour combinations, most permanently sky blue shirts with white shorts and navy socks. Such was the quality of Luton's team at this time that despite playing in the third tier, a fixture between Ireland and England at Windsor Park on 22 October 1921 saw three Luton players on the pitch—Louis Bookman and Allan Mathieson for Ireland, and the club's top goalscorer, Simms, for England. However, after Luton finished fourth in the division, the squad was broken up as Simms, Bookman and Mathieson joined South Shields, Port Vale and Exeter City respectively. Luton stayed in the Third Division South until 1936–37, when the team finished top and won promotion to the Second Division, at that time the second tier of English football. During the promotion season, striker Joe Payne scored 55 goals in 39 games; during the previous season he had scored 10 in one match against Bristol Rovers, which remains a Football League record today. During the early 1950s, one of Luton's greatest sides emerged under manager Dally Duncan. The team included Gordon Turner, who went on to become Luton's all-time top goalscorer, Bob Morton, who holds the record for the most club appearances, and Syd Owen, an England international. During this period, Luton sides also featured two England international goalkeepers, Ron Baynham and Bernard Streten, as well as Irish internationals Seamus Dunne, Tom Aherne and George Cummins. This team reached the top flight for the first time in 1955–56, after finishing the season in second place behind Birmingham City on goal average. 
A few years of success followed, including an FA Cup Final appearance against Nottingham Forest in 1958–59; at the end of the season, Owen was voted FWA Footballer of the Year. However, the club was relegated the following season and, by 1964–65, was playing in the fourth tier. In yo-yo club fashion, Luton were to return. A team including Bruce Rioch, John Moore and Graham French won the Fourth Division championship in 1967–68 under the leadership of former player Allan Brown; two years later Malcolm Macdonald's goals helped them to another promotion, while comedian Eric Morecambe became a director of the club. Luton Town won promotion back to the First Division in 1973–74, but were relegated the following season by a solitary point. Former Luton player David Pleat was made manager in 1978, and by 1982–83 the team was back in the top flight. The team which Pleat assembled at Kenilworth Road was notable at the time for the number of black players it included; during an era when many English squads were almost entirely white, Luton often fielded a mostly black team. Talented players such as Ricky Hill, Brian Stein and Emeka Nwajiobi made key contributions to the club's success during this period, causing it to accrue "a richer history of black stars than any in the country", in the words of journalist Gavin Willacy. On the last day of the 1982–83 season, the club's first back in the top tier, it narrowly escaped relegation: playing Manchester City at Maine Road, Luton needed to win to stay up, while City could escape with a draw. A late winner by Yugoslavian substitute Raddy Antić saved the team and prompted Pleat to dance across the pitch performing a "jig of joy", an image that has become iconic. The club achieved its highest ever league position, seventh, under John Moore in 1986–87, and, managed by Ray Harford, won the Football League Cup a year later with a 3–2 win over Arsenal. With ten minutes left on the clock and Arsenal 2–1 ahead, a penalty save from stand-in goalkeeper Andy Dibble sparked a late Luton rally: Danny Wilson equalised, before Brian Stein scored the winner with the last kick of the match. The club reached the League Cup Final once more in 1988–89, but lost 3–1 to Nottingham Forest. The club was relegated from the top division at the end of the 1991–92 season, and sank to the third tier four years later. Luton stayed in the third-tier Second Division until relegation at the end of the 2000–01 season. Under the management of Joe Kinnear, who had arrived halfway through the previous season, the team won promotion from the fourth tier at the first attempt. "Controversial" owner John Gurney unsettled the club in 2003, terminating Kinnear's contract on his arrival in May; Gurney replaced Kinnear with Mike Newell before leaving Luton as the club entered administration. Newell's team finished as champions of the rebranded third-tier Football League One in 2004–05. While Newell's place was taken first by Kevin Blackwell and later former player Mick Harford, the team was then relegated twice in a row, starting in 2006–07, and spent the latter part of the 2007–08 season in administration, thus incurring a ten-point deduction from that season's total. The club then had a total of 30 points docked from its 2008–09 record by the Football Association and the Football League for financial irregularities dating back several years. 
These deductions proved to be too large an obstacle to overcome, but Luton came from behind in the final of the Football League Trophy to win the competition for the first time. Relegation meant that 2009–10 saw Luton playing in the Conference Premier, a competition which the club had never before participated in. The club unsuccessfully contested the promotion play-offs three times in four seasons during their time as a non-League club, employing five different managers. In the 2012–13 FA Cup fourth round, Luton won their away tie against Premier League club Norwich City 1–0 and, in doing so, became the first non-League team to beat a side from England's top division since 1989. In the 2013–14 season, under the management of John Still, Luton won the Conference Premier title with three games to spare, and thereby secured a return to the Football League from 2014–15. The club's nickname, "the Hatters", reflects Luton's historical connection with the hat making trade, which has been prominent there since the 17th century. The nickname was originally a variant on the now rarely seen straw-plaiters. Supporters of the club are also called Hatters. The club is associated with two very different colour schemes—white and black (first permanently adopted in 1920), and orange, navy and white (first used in 1973, and worn by the team as of the 2015–16 season). Luton mainly wore a combination of light blue and white before 1920, when white shirts and black shorts were first adopted. These colours were retained for over half a century, with the colour of the socks varying between white and black, until Luton changed to orange, navy and white at the start of the 1973–74 season. Luton began playing in white shirts, shorts and socks in 1979, with the orange and navy motif reduced to trim; navy shorts were adopted in 1984. This palette was retained until the 1999–2000 season, when the team played in orange shirts and blue shorts. From 2000 to 2008, Luton returned to white shirts and black shorts; orange was included as trim until 2007. The white, navy and orange palette favoured in the 1980s was brought back in 2008, following the results of a club poll, but a year later the colours were changed yet again, this time to a predominantly orange strip with white shorts. Navy shorts were readopted in 2011. Luton are wearing orange shirts, navy shorts and white socks during the 2015–16 season. Luton Town have traditionally used the town's crest as its own in a manner similar to many other teams. The club's first badge was a white eight-pointed star, which was emblazoned across the team's shirts (then a deep cochineal red) in 1892. Four years later a crest comprising the club's initials intertwined was briefly adopted. The shirts were thereafter plain until 1933, when Luton first adopted a badge depicting a straw boater, which appeared on Luton shirts. The letters "LTFC" were added in 1935, and this basic design remained until 1947. The club then played without a badge until 1970, when the club began to wear the town crest regularly, having first done so in the 1959 FA Cup Final. In 1973, concurrently with the club's switch to the orange kit, a new badge was introduced featuring the new colours. The new emblem depicted a stylised orange football, bearing the letters "Lt", surrounded by the club's name in navy blue text. 
In 1987, the club switched back to a derivative of the town emblem, with the shield portion of the heraldic crest becoming the team's badge; the only similarity with the previous design was the inclusion of the club name around the shield in navy blue. The "rainbow" badge, introduced in 1994, featured the town crest below an orange and blue bow which curved around to meet two footballs, positioned on either side of the shield, with the club name underneath. This badge was used until 2005, when a replacement very similar to the 1987 version was adopted, featuring black text rather than blue and a straw boater in place of the outstretched arm depicted in the older design. The club's founding year, 1885, was added in 2008. The badge was altered once more during the 2009–10 pre-season, with the red of the town crest being replaced with orange to better reflect the club colours. The first sponsor to appear on a Luton Town shirt was Tricentrol, a local motor company based in Dunstable, who sponsored the club from March 1980 to 1982; the deal was worth £50,000. Subsequent sponsors have been Bedford Trucks (1982 to 1990), Vauxhall (1990 to 1991), Universal Salvage Auctions (1991 to 1999), SKF (1999 to 2003), Travel Extras (2003 to 2005), Electrolux (2005 to 2008), Carbrini Sportswear (2008 to 2009), EasyJet and NICEIC (concurrently, 2009 to 2015), and Barnfield College and NICEIC (concurrently, 2015 to 2016). Since June 2016, the club's kit has been sponsored by NICEIC and SsangYong Motor UK. The club released the song "Hatters, Hatters", a collaboration between the Luton team and the Bedfordshire-based musical comedy group the Barron Knights, in 1974. Eight years later another song featuring vocals by the Luton players, "We're Luton Town", was released to celebrate the club's promotion to the First Division. Luton Town's first ground was at Dallow Lane, the former ground of Excelsior. The ground was next to the Dunstable to Luton train line, and players regularly claimed to have trouble seeing the ball because of smoke from the trains. A damaging financial loss during 1896–97 forced Luton to sell the stadium to stay afloat and, as a result, the club moved across the tracks to a stadium between the railway and Dunstable Road. The Dunstable Road ground was opened by Herbrand Russell, 11th Duke of Bedford, who also donated £50 towards the £800 building costs. When the site was sold for housing in 1905, the club was forced to move again at short notice, to its present Kenilworth Road site, in time for the start of the 1905–06 season. The 10,356 capacity all-seater stadium is in the Bury Park area of Luton, and named after the road that runs along one end of it, although the official address of the club is 1 Maple Road. Opposite the eponymous Kenilworth Stand is the Oak Road End, which has evolved from a stand first used exclusively by Luton supporters, then later by away supporters, and now used by both except in times of high ticket demand from away clubs. The Main Stand is flanked by the David Preece Stand, and opposite them stands a row of executive boxes. These boxes replaced the Bobbers Stand in 1986, as the club sought to maximise income. The original Main Stand burnt down in 1921, and was replaced by the current stand before the 1922–23 season. The ground underwent extensive redevelopment during the 1930s, and the capacity by the start of the Second World War was 30,000. Floodlights were installed before the 1953–54 season, but it was 20 years before any further modernisation was carried out. 
In 1973 the Bobbers Stand became all-seated, and in 1985 the grass pitch was replaced with an artificial playing surface; it quickly became unpopular and was derided as "the plastic pitch". A serious incident involving hooliganism before, during and after a match against Millwall in 1985 caused the club's then chairman, Conservative MP David Evans, to introduce a scheme effective from the start of 1986–87 banning all visiting supporters from the ground, and requiring home fans to carry identity cards when attending matches. Conversion to an all-seater ground also began in 1986. Away fans returned for 1990–91, and grass a year later. The David Preece Stand was erected in 1991, and the conversion of the Kenilworth Stand to an all-seater was completed in 2005. The club first stated its intent to leave Kenilworth Road in 1955. Even then the ground was small compared to rival stadia, and its location made significant redevelopment difficult. The team has since made several attempts to relocate. Leaving Luton for the nearby new town of Milton Keynes was unsuccessfully proposed several times, most notably in the 1980s. The club sold Kenilworth Road to Luton Council in 1989, and has since leased it. A planning application for a new ground, the "Kohlerdome" proposed by chairman David Kohler in 1995, was turned down by the Secretary of State in 1998, and Kohler left soon after. In 2007, the club's then-owners proposed a controversial plan to relocate to a site near Junction 12 of the M1 motorway, near Harlington and Toddington. A planning application was made on the club's behalf by former chairman Cliff Bassett, but the application was withdrawn almost immediately following the club's takeover in 2008. In 2009, the club began an independent feasibility study to determine a viable location to move to. The club did not rule out redeveloping Kenilworth Road and, in October 2012, entered talks to buy the stadium back from Luton Borough Council. By 2015, these plans had been dropped in favour of a move to a new location, with Managing Director Gary Sweet confirming that the club was in a position to "buy land, secure the best possible professional advice ... and to see the [planning] application process through to the receipt of consent." In April 2016, the club announced its intention to build and move into a 17,500-capacity stadium on the Power Court site in central Luton. During the 2014–15 season, Luton Town had an average home league attendance of 8,702—the second highest in League Two behind only Portsmouth. In the 2013–14 season, when the club were in the Conference Premier, the club had significantly higher support than the other clubs in its league, with an average home attendance of 7,387; more than twice compared to the second highest of 3,568. Average attendances at Kenilworth Road fell with the installation of seats and the club's reduction in stature, dropping from 13,452 in 1982–83 to their 2014–15 level—a slump of 35% over 32 years. A supporters' trust, Trust in Luton, owns shares in the club and elects a representative to the club's board. The club's official supporters' group, Luton Town Supporters' Club, merged with Trust in Luton in 2014. The club is associated with another supporters' group, the breakaway Loyal Luton Supporters Club. Trust in Luton has, since March 2014, held the legal right to veto any changes to the club's identity, including name, nickname, colours, club crest and mascot. Luton Town supporters maintain a bitter rivalry with Hertfordshire-based Watford. 
Watford have remained the higher ranked team at the end of every season since 1997. However, overall Luton still hold the superior record in the fixture between the two clubs; out of 118 competitive matches there have been 53 Luton victories and 36 for Watford, with 29 draws. A survey taken in 2003 showed that there was also animosity between Luton Town fans and those of west London club Queens Park Rangers. The club produces an official match programme for home games, "Talk of the Town". A character known as Happy Harry, a smiling man wearing a straw boater, serves as the team's mascot and appears on the Kenilworth Road pitch before matches. In December 2014, after the seafront statue of Eric Morecambe in his birthplace Morecambe was restored, Luton and Morecambe F.C. jointly announced that the winners of future Luton–Morecambe fixtures would be awarded the "Eric Morecambe Trophy". The record for the most appearances for Luton is held by Bob Morton, who turned out for Luton 562 times in all competitions. Morton also holds the record for the most Football League appearances for the club, with 495. Fred Hawkes holds the record for the most league appearances for Luton, having played in 509 league matches. Six players, Gordon Turner, Andy Rennie, Brian Stein, Ernie Simms, Herbert Moody and Steve Howard, have scored more than 100 goals for Luton. The first player to be capped while playing for Luton was left winger Robert Hawkes, who took to the field for England against Ireland at Goodison Park on 16 February 1907. The most capped player is Mal Donaghy, who earned 58 Northern Ireland caps while at the club. The first player to score in an international match was Joe Payne, who scored twice in his only game for England against Finland on 20 May 1937. Payne also holds the Football League record for the most goals in a game—he hit 10 past Bristol Rovers on 13 April 1936. The club's largest wins have been a 15–0 victory over Great Yarmouth Town on 21 November 1914 in the FA Cup and a 12–0 win over Bristol Rovers in the Third Division South on 13 April 1936. Luton's heaviest loss was a 9–0 defeat against Small Heath in the Second Division on 12 November 1898. Luton's highest home attendances are 30,069 against Blackpool in the FA Cup on 4 March 1959 and 27,911 against Wolverhampton Wanderers in the First Division on 5 November 1955. The highest transfer fee received for a Luton Town player is the £3 million West Bromwich Albion paid for Curtis Davies on 31 August 2005. The most expensive player Luton Town have ever bought was Lars Elstrup, who cost £850,000 from Odense Boldklub on 21 August 1989. The youngest player to make a first-team appearance for Luton Town is Connor Tomlinson at 15 years and 199 days old in the EFL Trophy, replacing Zane Banton as a 92nd-minute substitute in a 2–1 win over Gillingham on 30 August 2016, after the club were given permission for him to play from his headteacher. The club operates a Development Squad, made up of contracted senior players, youth team scholars and trialists, which plays in the Southern Division of The Central League. The club also fields an under-18 team in the Football League Youth Alliance South East Conference. Luton's youth set-up consists of ten Soccer Centres across Bedfordshire and North Hertfordshire, two Centres of Excellence (one in Luton, one in Dunstable), and an Academy in Baldock that caters for players in the under-9 to under-16 age groups. 
Lung cancer Lung cancer, also known as lung carcinoma, is a malignant lung tumor characterized by uncontrolled cell growth in tissues of the lung. This growth can spread beyond the lung by the process of metastasis into nearby tissue or other parts of the body. Most cancers that start in the lung, known as primary lung cancers, are carcinomas. The two main types are small-cell lung carcinoma (SCLC) and non-small-cell lung carcinoma (NSCLC). The most common symptoms are coughing (including coughing up blood), weight loss, shortness of breath, and chest pain. The vast majority (85%) of cases of lung cancer are due to long-term tobacco smoking. About 10–15% of cases occur in people who have never smoked. These cases are often caused by a combination of genetic factors and exposure to radon gas, asbestos, second-hand smoke, or other forms of air pollution. Lung cancer may be seen on chest radiographs and computed tomography (CT) scans. The diagnosis is confirmed by biopsy, which is usually performed by bronchoscopy or CT guidance. Avoidance of risk factors, including smoking and air pollution, is the primary method of prevention. Treatment and long-term outcomes depend on the type of cancer, the stage (degree of spread), and the person's overall health. Most cases are not curable. Common treatments include surgery, chemotherapy, and radiotherapy. NSCLC is sometimes treated with surgery, whereas SCLC usually responds better to chemotherapy and radiotherapy. Worldwide in 2012, lung cancer occurred in 1.8 million people and resulted in 1.6 million deaths. This makes it the most common cause of cancer-related death in men and the second most common in women after breast cancer. The most common age at diagnosis is 70 years. Overall, 17.4% of people in the United States diagnosed with lung cancer survive five years after the diagnosis, while outcomes on average are worse in the developing world. A variety of signs and symptoms may suggest lung cancer. If the cancer grows in the airways, it may obstruct airflow, causing breathing difficulties. The obstruction can lead to accumulation of secretions behind the blockage, and predispose to pneumonia. Depending on the type of tumor, paraneoplastic phenomena—symptoms not due to the local presence of cancer—may initially attract attention to the disease. In lung cancer, these phenomena may include hypercalcemia, syndrome of inappropriate antidiuretic hormone (SIADH, abnormally concentrated urine and diluted blood), ectopic ACTH production, or Lambert–Eaton myasthenic syndrome (muscle weakness due to autoantibodies). Tumors in the top of the lung, known as Pancoast tumors, may invade the local part of the sympathetic nervous system, leading to Horner's syndrome (drooping of the eyelid and a small pupil on that side), as well as damage to the brachial plexus. Many of the symptoms of lung cancer (poor appetite, weight loss, fever, fatigue) are not specific. In many people, the cancer has already spread beyond the original site by the time they have symptoms and seek medical attention. Symptoms that suggest the presence of metastatic disease include weight loss, bone pain and neurological symptoms (headaches, fainting, convulsions, or limb weakness). Common sites of spread include the brain, bone, adrenal glands, opposite lung, liver, pericardium, and kidneys. About 10% of people with lung cancer do not have symptoms at diagnosis; these cancers are incidentally found on routine chest radiography. 
Cancer develops after genetic damage to DNA and epigenetic changes. Those changes affect the cell's normal functions, including cell proliferation, programmed cell death (apoptosis), and DNA repair. As more damage accumulates, the risk of cancer increases. Tobacco smoking is by far the main contributor to lung cancer. Cigarette smoke contains at least 73 known carcinogens, including benzo[a]pyrene, NNK, 1,3-butadiene, and a radioactive isotope of polonium, polonium-210. Across the developed world, 90% of lung cancer deaths in men and 70% of those in women during the year 2000 were attributed to smoking. Smoking accounts for about 85% of lung cancer cases. Passive smoking, the inhalation of smoke from another person's smoking, is a cause of lung cancer in nonsmokers. A passive smoker can be defined as someone either living or working with a smoker. Studies from the US, Europe, and the UK have consistently shown a significantly increased risk among those exposed to passive smoking. Those who live with someone who smokes have a 20–30% increase in risk, while those who work in an environment with secondhand smoke have a 16–19% increase in risk. Investigations of sidestream smoke suggest that it is more dangerous than direct smoke. Passive smoking results in roughly 3,400 lung cancer-related deaths each year in the U.S. Marijuana smoke contains many of the same carcinogens as those in tobacco smoke. However, the effect of smoking cannabis on lung cancer risk is not clear. A 2013 review did not find an increased risk from light to moderate use. A 2014 review found that smoking cannabis doubled the risk of lung cancer. Radon is a colorless and odorless gas generated by the breakdown of radioactive radium, which in turn is the decay product of uranium, found in the Earth's crust. The radiation decay products ionize genetic material, causing mutations that sometimes become cancerous. Radon is the second most common cause of lung cancer in the U.S., causing about 21,000 deaths each year. The risk increases 8–16% for every 100 Bq/m³ increase in the radon concentration. Radon gas levels vary by locality and the composition of the underlying soil and rocks. About one in 15 homes in the US has radon levels above the recommended guideline of 4 picocuries per liter (pCi/l) (148 Bq/m³). Asbestos can cause a variety of lung diseases, including lung cancer. Tobacco smoking and asbestos have synergistic effects on the development of lung cancer. In smokers who work with asbestos, the risk of lung cancer is increased 45-fold compared to the general population. Asbestos can also cause cancer of the pleura, called mesothelioma, which is distinct from lung cancer. Outdoor air pollutants, especially chemicals released from the burning of fossil fuels, increase the risk of lung cancer. Fine particulates (PM2.5) and sulfate aerosols, which may be released in traffic exhaust fumes, are associated with a slightly increased risk. For nitrogen dioxide, an incremental increase of 10 parts per billion increases the risk of lung cancer by 14%. Outdoor air pollution is estimated to cause 1–2% of lung cancers. Tentative evidence supports an increased risk of lung cancer from indoor air pollution in relation to the burning of wood, charcoal, dung, or crop residue for cooking and heating. Women who are exposed to indoor coal smoke have roughly twice the risk, and many of the by-products of burning biomass are known or suspected carcinogens. 
This risk affects about 2.4 billion people worldwide, and it is believed to result in 1.5% of lung cancer deaths. About 8% of lung cancer is caused by inherited factors. In relatives of people who are diagnosed with lung cancer, the risk is doubled, likely due to a combination of genes. Polymorphisms on chromosomes 5, 6, and 15 are known to affect the risk of lung cancer. Numerous other substances, occupations, and environmental exposures have been linked to lung cancer. The International Agency for Research on Cancer (IARC) states that there is "sufficient evidence" that a number of these occupational and environmental exposures are carcinogenic in the lungs. Similar to many other cancers, lung cancer is initiated by either the activation of oncogenes or the inactivation of tumor suppressor genes. Carcinogens cause mutations in these genes that induce the development of cancer. Mutations in the "K-ras" proto-oncogene cause roughly 10–30% of lung adenocarcinomas. Nearly 4% of non-small-cell lung carcinomas involve an EML4-ALK tyrosine kinase fusion gene. Epigenetic changes such as alteration of DNA methylation, histone tail modification, or microRNA regulation may result in the inactivation of tumor suppressor genes. The epidermal growth factor receptor (EGFR) regulates cell proliferation, apoptosis, angiogenesis, and tumor invasion. Mutations and amplification of EGFR are common in non-small-cell lung carcinoma, and they provide the basis for treatment with EGFR inhibitors. Her2/neu is affected less frequently. Other genes that are often mutated or amplified include "c-MET", "NKX2-1", "LKB1", "PIK3CA", and "BRAF". The cell lines of origin are not fully understood. The mechanism may involve the abnormal activation of stem cells. In the proximal airways, stem cells that express keratin 5 are more likely to be affected, typically leading to squamous-cell lung carcinoma. In the middle airways, implicated stem cells include club cells and neuroepithelial cells that express club cell secretory protein. Small-cell lung carcinoma may originate from these cell lines or neuroendocrine cells, and it may express CD44. Metastasis of lung cancer requires transition from epithelial to mesenchymal cell type. This may occur through the activation of signaling pathways such as Akt/GSK3Beta, MEK-ERK, Fas, and Par6. Performing a chest radiograph is one of the first investigative steps if a person reports symptoms that may suggest lung cancer. This may reveal an obvious mass, the widening of the mediastinum (suggestive of spread to lymph nodes there), atelectasis (collapse), consolidation (pneumonia), or pleural effusion. CT imaging is typically used to provide more information about the type and extent of disease. Bronchoscopy or CT-guided biopsy is often used to sample the tumor for histopathology. Lung cancer often appears as a solitary pulmonary nodule on a chest radiograph. However, the differential diagnosis is wide. Many other diseases can also give this appearance, including metastatic cancer, hamartomas, and infectious granulomas such as tuberculosis, histoplasmosis and coccidioidomycosis. Lung cancer can also be an incidental finding, as a solitary pulmonary nodule on a chest radiograph or CT scan done for an unrelated reason. The definitive diagnosis of lung cancer is based on the histological examination of the suspicious tissue in the context of the clinical and radiological features. Clinical practice guidelines recommend specific frequencies for pulmonary nodule surveillance. 
CT imaging should not be used for longer or more frequently than indicated, as the extended surveillance exposes people to increased radiation. Lung cancers are classified according to histological type. This classification is important for determining both the management and predicting outcomes of the disease. Lung cancers are carcinomas — malignancies that arise from epithelial cells. Lung carcinomas are categorized by the size and appearance of the malignant cells seen by a histopathologist under a microscope. For therapeutic purposes, two broad classes are distinguished: non-small-cell lung carcinoma and small-cell lung carcinoma. The three main subtypes of NSCLC are adenocarcinoma, squamous-cell carcinoma, and large-cell carcinoma. Nearly 40% of lung cancers are adenocarcinoma, which usually comes from peripheral lung tissue. Although most cases of adenocarcinoma are associated with smoking, adenocarcinoma is also the most common form of lung cancer among people who have smoked fewer than 100 cigarettes in their lifetimes ("never-smokers") and ex-smokers with a modest smoking history. A subtype of adenocarcinoma, the bronchioloalveolar carcinoma, is more common in female never-smokers, and may have a better long-term survival. Squamous-cell carcinoma causes about 30% of lung cancers. These tumors typically occur close to large airways. A hollow cavity and associated cell death are commonly found at the center of the tumor. Nearly 9% of lung cancers are large-cell carcinoma. These are so named because the cancer cells are large, with excess cytoplasm, large nuclei, and conspicuous nucleoli. In small-cell lung carcinoma (SCLC), the cells contain dense neurosecretory granules (vesicles containing neuroendocrine hormones), which give this tumor an endocrine/paraneoplastic syndrome association. Most cases arise in the larger airways (primary and secondary bronchi). Sixty to seventy percent have extensive disease (which cannot be targeted within a single radiation therapy field) at presentation. Four main histological subtypes are recognised, although some cancers may contain a combination of different subtypes, such as adenosquamous carcinoma. Rare subtypes include carcinoid tumors, bronchial gland carcinomas, and sarcomatoid carcinomas. The lungs are a common place for the spread of tumors from other parts of the body. Secondary cancers are classified by the site of origin; for example, breast cancer that has spread to the lung is called metastatic breast cancer. Metastases often have a characteristic round appearance on chest radiograph. Primary lung cancers also most commonly metastasize to the brain, bones, liver, and adrenal glands. Immunostaining of a biopsy usually helps determine the original source. The presence of Napsin-A, TTF-1, CK7, and CK20 helps confirm the subtype of lung carcinoma. SCLC that originates from neuroendocrine cells may express CD56, neural cell adhesion molecule, synaptophysin, or chromogranin. Lung cancer staging is an assessment of the degree of spread of the cancer from its original source. It is one of the factors affecting both the prognosis and the potential treatment of lung cancer. The evaluation of non-small-cell lung carcinoma (NSCLC) staging uses the TNM classification. This is based on the size of the primary tumor, lymph node involvement, and distant metastasis. Using the TNM descriptors, a group is assigned, ranging from occult cancer, through stages 0, IA (one-A), IB, IIA, IIB, IIIA, IIIB, and IV (four). 
This stage group assists with the choice of treatment and estimation of prognosis. Small-cell lung carcinoma (SCLC) has traditionally been classified as "limited stage" (confined to one-half of the chest and within the scope of a single tolerable radiotherapy field) or "extensive stage" (more widespread disease). However, the TNM classification and grouping are also useful in estimating prognosis. For both NSCLC and SCLC, the two general types of staging evaluations are clinical staging and surgical staging. Clinical staging is performed before definitive surgery. It is based on the results of imaging studies (such as CT scans and PET scans) and biopsy results. Surgical staging is evaluated either during or after the operation. It is based on the combined results of surgical and clinical findings, including surgical sampling of thoracic lymph nodes. Smoking prevention and smoking cessation are effective ways of preventing the development of lung cancer. While in most countries industrial and domestic carcinogens have been identified and banned, tobacco smoking is still widespread. Eliminating tobacco smoking is a primary goal in the prevention of lung cancer, and smoking cessation is an important preventive tool in this process. Policy interventions to decrease passive smoking in public areas such as restaurants and workplaces have become more common in many Western countries. Bhutan has had a complete smoking ban since 2005, while India introduced a ban on smoking in public in October 2008. The World Health Organization has called for governments to institute a total ban on tobacco advertising to prevent young people from taking up smoking; it estimates that such bans have reduced tobacco consumption by 16% where they have been instituted. Cancer screening uses medical tests to detect disease in large groups of people who have no symptoms. For individuals at high risk of developing lung cancer, computed tomography (CT) screening can detect cancer and give a person options to respond to it in a way that prolongs life. This form of screening reduces the chance of death from lung cancer by an absolute amount of 0.3% (a relative amount of 20%). High-risk people are those aged 55–74 who have smoked the equivalent of a pack of cigarettes daily for 30 years, including smoking within the past 15 years. CT screening is associated with a high rate of false-positive tests, which may result in unneeded treatment. For each true-positive scan there are about 19 false-positive scans. Other concerns include radiation exposure and the cost of testing and follow-up. Research has not found two other available tests, sputum cytology and chest radiograph (CXR) screening, to have any benefit. The United States Preventive Services Task Force (USPSTF) recommends yearly screening using low-dose computed tomography in those who have a total smoking history of 30 pack-years and are between 55 and 80 years old; screening is stopped once a person has not smoked for more than 15 years. Screening should not be done in those with other health problems that would rule out treatment of lung cancer if it were found. In 2014, the English National Health Service was re-examining the evidence for screening. The long-term use of supplemental vitamin A, vitamin C, vitamin D or vitamin E does not reduce the risk of lung cancer. 
Some studies suggest that people who eat diets with a higher proportion of vegetables and fruit tend to have a lower risk, but this may be due to confounding, with the lower risk actually due to the association of a high fruit and vegetable diet with less smoking. Several rigorous studies have not demonstrated a clear association between diet and lung cancer risk, although meta-analyses that account for smoking status may show a benefit from a healthy diet. Treatment for lung cancer depends on the cancer's specific cell type, how far it has spread, and the person's performance status. Common treatments include palliative care, surgery, chemotherapy, and radiation therapy. Targeted therapy is growing in importance for advanced lung cancer. If investigations confirm NSCLC, the stage is assessed to determine whether the disease is localized and amenable to surgery or whether it has spread to the point where it cannot be cured surgically. CT scanning and positron emission tomography are used for this determination. If mediastinal lymph node involvement is suspected, the nodes may be sampled to assist staging. Techniques used for this include transthoracic needle aspiration, transbronchial needle aspiration (with or without endobronchial ultrasound), endoscopic ultrasound with needle aspiration, mediastinoscopy, and thoracoscopy. Blood tests and pulmonary function testing are used to assess whether a person is well enough for surgery. If pulmonary function tests reveal poor respiratory reserve, surgery may not be possible. In most cases of early-stage NSCLC, removal of a lobe of lung (lobectomy) is the surgical treatment of choice. In people who are unfit for a full lobectomy, a smaller sublobar excision (wedge resection) may be performed. However, wedge resection has a higher risk of recurrence than lobectomy. Radioactive iodine brachytherapy at the margins of wedge excision may reduce the risk of recurrence. Rarely, removal of a whole lung (pneumonectomy) is performed. Video-assisted thoracoscopic surgery (VATS) and VATS lobectomy use a minimally invasive approach to lung cancer surgery. VATS lobectomy is as effective as conventional open lobectomy, with less postoperative illness. In SCLC, chemotherapy and/or radiotherapy is typically used. However, the role of surgery in SCLC is being reconsidered. Surgery might improve outcomes when added to chemotherapy and radiation in early-stage SCLC. Radiotherapy is often given together with chemotherapy, and may be used with curative intent in people with NSCLC who are not eligible for surgery. This form of high-intensity radiotherapy is called radical radiotherapy. A refinement of this technique is continuous hyperfractionated accelerated radiotherapy (CHART), in which a high dose of radiotherapy is given in a short time period. Postoperative thoracic radiotherapy generally should not be used after curative-intent surgery for NSCLC. Some people with mediastinal N2 lymph node involvement might benefit from post-operative radiotherapy. For potentially curable SCLC cases, chest radiotherapy is often recommended in addition to chemotherapy. If cancer growth blocks a short section of bronchus, brachytherapy (localized radiotherapy) may be given directly inside the airway to open the passage. Compared to external beam radiotherapy, brachytherapy allows a reduction in treatment time and reduced radiation exposure to healthcare staff. The evidence for brachytherapy, however, is weaker than that for external beam radiotherapy. 
Prophylactic cranial irradiation (PCI) is a type of radiotherapy to the brain, used to reduce the risk of metastasis. PCI is most useful in SCLC. In limited-stage disease, PCI increases three-year survival from 15% to 20%; in extensive disease, one-year survival increases from 13% to 27%. Recent improvements in targeting and imaging have led to the development of stereotactic radiation in the treatment of early-stage lung cancer. In this form of radiotherapy, high doses are delivered over a number of sessions using stereotactic targeting techniques. It is used primarily in patients who are not surgical candidates due to medical comorbidities. For both NSCLC and SCLC patients, smaller doses of radiation to the chest may be used for symptom control (palliative radiotherapy). The chemotherapy regimen depends on the tumor type. Small-cell lung carcinoma (SCLC), even relatively early-stage disease, is treated primarily with chemotherapy and radiation. In SCLC, cisplatin and etoposide are most commonly used. Combinations with carboplatin, gemcitabine, paclitaxel, vinorelbine, topotecan, and irinotecan are also used. In advanced NSCLC, chemotherapy improves survival and is used as first-line treatment, provided the person is well enough for the treatment. Typically, two drugs are used, of which one is often platinum-based (either cisplatin or carboplatin). Other commonly used drugs are gemcitabine, paclitaxel, docetaxel, pemetrexed, etoposide or vinorelbine. Platinum-based drugs and combinations that include platinum therapy may lead to a higher risk of serious adverse effects in people over 70 years old. Adjuvant chemotherapy refers to the use of chemotherapy after apparently curative surgery to improve the outcome. In NSCLC, samples are taken of nearby lymph nodes during surgery to assist staging. If stage II or III disease is confirmed, adjuvant chemotherapy (with or without postoperative radiotherapy) improves survival by 4% at five years. The combination of vinorelbine and cisplatin is more effective than older regimens. Adjuvant chemotherapy for people with stage IB cancer is controversial, as clinical trials have not clearly demonstrated a survival benefit. Chemotherapy before surgery in NSCLC that can be removed surgically may improve outcomes. Chemotherapy may be combined with palliative care in the treatment of NSCLC. In advanced cases, appropriate chemotherapy improves average survival over supportive care alone, as well as improving quality of life. In people with adequate physical fitness, maintaining chemotherapy during lung cancer palliation offers 1.5 to 3 months of additional survival, symptomatic relief, and an improvement in quality of life, with better results seen with modern agents. The NSCLC Meta-Analyses Collaborative Group recommends that chemotherapy be considered in advanced NSCLC if the recipient wants and can tolerate treatment. Several drugs that target molecular pathways in lung cancer are available, especially for the treatment of advanced disease. Erlotinib, gefitinib, and afatinib inhibit tyrosine kinase at the epidermal growth factor receptor. Denosumab is a monoclonal antibody directed against receptor activator of nuclear factor kappa-B ligand. It may be useful in the treatment of bone metastases. Several treatments can be administered via bronchoscopy for the management of airway obstruction or bleeding. 
If an airway becomes obstructed by cancer growth, options include rigid bronchoscopy, balloon bronchoplasty, stenting, and microdebridement. Laser photoresection involves the delivery of laser light inside the airway via a bronchoscope to remove the obstructing tumor. Palliative care, when added to usual cancer care, benefits people even when they are still receiving chemotherapy. These approaches allow additional discussion of treatment options and provide opportunities to arrive at well-considered decisions. Palliative care may avoid unhelpful but expensive care not only at the end of life, but also throughout the course of the illness. For individuals who have more advanced disease, hospice care may also be appropriate. Of all people with lung cancer in the US, 16.8% survive for at least five years after diagnosis. In England and Wales, between 2010 and 2011, overall five-year survival for lung cancer was estimated at 9.5%. Outcomes are generally worse in the developing world. The stage is often advanced at the time of diagnosis: 30–40% of cases of NSCLC and 60% of cases of SCLC are stage IV at presentation. Survival for lung cancer falls as the stage at diagnosis becomes more advanced: the English data suggest that around 70% of patients survive at least a year when diagnosed at the earliest stage, but this falls to just 14% for those diagnosed with the most advanced disease. Prognostic factors in NSCLC include the presence of pulmonary symptoms, large tumor size (>3 cm), nonsquamous cell type (histology), degree of spread (stage), metastases to multiple lymph nodes, and vascular invasion. For people with inoperable disease, outcomes are worse in those with poor performance status and weight loss of more than 10%. Prognostic factors in small-cell lung cancer include performance status, gender, stage of disease, and involvement of the central nervous system or liver at the time of diagnosis. For NSCLC, the best prognosis is achieved with complete surgical resection of stage IA disease, with up to 70% five-year survival. People with extensive-stage SCLC have an average five-year survival rate of less than 1%. The average survival time for limited-stage disease is 20 months, with a five-year survival rate of 20%. According to data provided by the National Cancer Institute, the median age at diagnosis of lung cancer in the United States is 70 years, and the median age at death is 72 years. In the US, people with medical insurance are more likely to have a better outcome. Worldwide, lung cancer is the most common cancer among men in terms of both incidence and mortality; among women it has the third-highest incidence and is second after breast cancer in mortality. In 2012, there were 1.82 million new cases worldwide, and 1.56 million deaths due to lung cancer, representing 19.4% of all deaths from cancer. The highest rates are in North America, Europe, and East Asia, with over a third of new cases in China that year. Rates in Africa and South Asia are much lower. The population segment most likely to develop lung cancer is people aged over 50 who have a history of smoking. Unlike the mortality rate in men, which began declining more than 20 years ago, women's lung cancer mortality rates have risen over the last decades and are only recently beginning to stabilize. In the US, the lifetime risk of developing lung cancer is 8% in men and 6% in women. For every 3–4 million cigarettes smoked, one lung cancer death can occur. 
The influence of "Big Tobacco" plays a significant role in smoking culture. Young nonsmokers who see tobacco advertisements are more likely to smoke. Passive smoking is increasingly being recognized as a risk factor for lung cancer, resulting in policy interventions to decrease the undesired exposure of nonsmokers to others' tobacco smoke. In the US, both black men and black women have a higher incidence of lung cancer. Lung cancer rates are currently lower in developing countries; with increased smoking there, the rates are expected to increase in the next few years, notably in both China and India. Also in the US, military veterans have a 25–50% higher rate of lung cancer, primarily due to higher rates of smoking. During World War II and the Korean War, asbestos also played a role, and Agent Orange may have caused some problems during the Vietnam War. Lung cancer is the third most common cancer in the UK (around 46,400 people were diagnosed with the disease in 2014), and it is the most common cause of cancer-related death (around 35,900 people died in 2014). From the 1960s, the rates of lung adenocarcinoma started to rise relative to other kinds of lung cancer, partially due to the introduction of filter cigarettes. The use of filters removes larger particles from tobacco smoke, thus reducing deposition in larger airways. However, the smoker has to inhale more deeply to receive the same amount of nicotine, increasing particle deposition in small airways where adenocarcinoma tends to arise. The incidence of lung adenocarcinoma continues to rise. Lung cancer was uncommon before the advent of cigarette smoking; it was not even recognized as a distinct disease until 1761. Different aspects of lung cancer were described further in 1810. Malignant lung tumors made up only 1% of all cancers seen at autopsy in 1878, but had risen to 10–15% by the early 1900s. Case reports in the medical literature numbered only 374 worldwide in 1912, but a review of autopsies showed that the incidence of lung cancer had increased from 0.3% in 1852 to 5.66% in 1952. In Germany in 1929, the physician Fritz Lickint recognized the link between smoking and lung cancer, which led to an aggressive antismoking campaign. The British Doctors' Study, published in the 1950s, was the first solid epidemiological evidence of the link between lung cancer and smoking. As a result, in 1964 the Surgeon General of the United States recommended that smokers stop smoking. The connection with radon gas was first recognized among miners in the Ore Mountains near Schneeberg, Saxony. Silver has been mined there since 1470, and these mines are rich in uranium, with its accompanying radium and radon gas. Miners developed a disproportionate amount of lung disease, eventually recognized as lung cancer in the 1870s. Despite this discovery, mining continued into the 1950s, due to the USSR's demand for uranium. Radon was confirmed as a cause of lung cancer in the 1960s. The first successful pneumonectomy for lung cancer was performed in 1933. Palliative radiotherapy has been used since the 1940s. Radical radiotherapy, initially used in the 1950s, was an attempt to use larger radiation doses in patients with relatively early-stage lung cancer who were otherwise unfit for surgery. In 1997, continuous hyperfractionated accelerated radiotherapy was seen as an improvement over conventional radical radiotherapy. With small-cell lung carcinoma, initial attempts in the 1960s at surgical resection and radical radiotherapy were unsuccessful. 
In the 1970s, successful chemotherapy regimens were developed. Current research directions for lung cancer treatment include immunotherapy, which encourages the body's immune system to attack the tumor cells; epigenetics; and new combinations of chemotherapy and radiotherapy, both on their own and together. Many of these new treatments work through immune checkpoint blockade, disrupting cancer's ability to evade the immune system. Ipilimumab blocks signaling through a receptor on T cells known as CTLA-4, which dampens the immune system. It has been approved by the U.S. Food and Drug Administration (FDA) for treatment of melanoma and is undergoing clinical trials for both non-small cell lung cancer (NSCLC) and small cell lung cancer (SCLC). Other immunotherapy treatments interfere with the binding of the programmed cell death 1 (PD-1) protein to its ligand, PD-1 ligand 1 (PD-L1), and have been approved as first- and subsequent-line treatments for various subsets of lung cancers. Signaling through PD-1 inactivates T cells. Some cancer cells appear to exploit this by expressing PD-L1 in order to switch off T cells that might recognise them as a threat. Monoclonal antibodies targeting both PD-1 and PD-L1, such as pembrolizumab, nivolumab, atezolizumab, and durvalumab, are currently in clinical trials for the treatment of lung cancer. Epigenetics is the study of small, usually heritable, molecular modifications, or "tags", that bind DNA and modify gene expression levels. Targeting these tags with drugs can kill cancer cells. Early-stage research in NSCLC using drugs aimed at epigenetic modifications shows that blocking more than one of these tags can kill cancer cells with fewer side effects. Studies also show that giving patients these drugs before standard treatment can improve its effectiveness. Clinical trials are underway to evaluate how well these drugs kill lung cancer cells in humans. Several drugs that target epigenetic mechanisms are in development. Histone deacetylase inhibitors in development include valproic acid, vorinostat, belinostat, panobinostat, entinostat, and romidepsin. DNA methyltransferase inhibitors in development include decitabine, azacytidine, and hydralazine. The TRACERx project is looking at how NSCLC develops and evolves, and how these tumors become resistant to treatment. The project will examine tumor samples from 850 NSCLC patients at various stages, including diagnosis, after first treatment, post-treatment, and relapse. By studying samples at different points of tumor development, the researchers hope to identify the changes that drive tumor growth and resistance to treatment. The results of this project will help scientists and doctors gain a better understanding of NSCLC and potentially lead to the development of new treatments for the disease. For lung cancer cases that develop resistance to epidermal growth factor receptor (EGFR) and anaplastic lymphoma kinase (ALK) tyrosine kinase inhibitors, new drugs are in development. New EGFR inhibitors include afatinib and dacomitinib. An alternative signaling pathway, c-Met, can be inhibited by tivantinib and onartuzumab. New ALK inhibitors include crizotinib and ceritinib. If the MAPK/ERK pathway is involved, the BRAF kinase inhibitor dabrafenib and the MAPK/MEK inhibitor trametinib may be beneficial. Lung cancer stem cells are often resistant to conventional chemotherapy and radiotherapy. This may lead to relapse after treatment. 
New approaches target protein or glycoprotein markers that are specific to the stem cells. Such markers include CD133, CD90, ALDH1A1, CD44 and ABCG2. Signaling pathways such as Hedgehog, Wnt and Notch are often implicated in the self-renewal of stem cell lines. Thus treatments targeting these pathways may help to prevent relapse. Laurence of Canterbury Laurence (died 2 February 619) was the second Archbishop of Canterbury from about 604 to 619. He was a member of the Gregorian mission sent from Italy to England to Christianise the Anglo-Saxons from their native Anglo-Saxon paganism, although the date of his arrival is disputed. He was consecrated archbishop by his predecessor, Augustine of Canterbury, during Augustine's lifetime, to ensure continuity in the office. While archbishop, he attempted unsuccessfully to resolve differences with the native British bishops by corresponding with them about points of dispute. Laurence faced a crisis following the death of King Æthelberht of Kent, when the king's successor abandoned Christianity; he eventually reconverted. Laurence was revered as a saint after his death in 619. Laurence was part of the Gregorian mission originally dispatched from Rome in 595 to convert the Anglo-Saxons from their native paganism to Christianity; he landed at Thanet, Kent, with Augustine in 597, or, as some sources state, first arrived in 601 and was not a part of the first group of missionaries. He had been a monk in Rome before his travels to England, but nothing else is known of his history or background. The medieval chronicler Bede says that Augustine sent Laurence back to Pope Gregory I to report on the success of converting King Æthelberht of Kent and to carry a letter with questions for the pope. Accompanied by Peter of Canterbury, another missionary, he set off some time after July 598, and had returned by June 601. He brought back with him Gregory's replies to Augustine's questions, a document commonly known as the "Libellus responsionum", that Bede incorporated in his "Historia ecclesiastica gentis Anglorum". Laurence is probably the Laurence referred to in the letter from Gregory to Bertha, queen of Kent. In that letter, Gregory praises Bertha for her part in the conversion of her husband, details of which Gregory says he received from Laurence the priest. It is known that Laurence returned to England with Mellitus and others of the second group of missionaries in the summer of 601, but there is no record of Peter being with them. Laurence succeeded Augustine to the see of Canterbury in about 604, and ruled until his death on 2 February 619. To secure the succession, Augustine had consecrated Laurence before he died, even though that was prohibited by canon law. Augustine was afraid though that if someone did not step into the office immediately, it would damage the missionary efforts in Britain. However, Laurence never received a pallium from Rome, so he may have been considered uncanonical by the papacy. Bede makes a point of comparing Augustine's action in consecrating Laurence to Saint Peter's action of consecrating Clement as Bishop of Rome during Peter's lifetime, which the theologian J. Robert Wright believes may be Bede's way of criticising the practices of the church in his day. In 610 Laurence received letters from Pope Boniface IV, addressed to him as archbishop and Augustine's successor. 
The correspondence was in response to Laurence having sent Mellitus to Rome earlier in 610, to solicit advice from the papacy on matters concerning the English Church. While in Rome Mellitus attended a synod, and brought the synodical decrees back with him to Laurence. In 613 Laurence consecrated the monastery church built by Augustine in Canterbury, and dedicated it to saints Peter and Paul; it was later re-consecrated as St Augustine's Abbey, Canterbury. Laurence also wrote to the bishops in the lands held by the Scots and by the Britons, urging them to hold Easter on the day that the Roman church celebrated it, instead of on their traditional date, a dispute that formed part of the Easter controversy. The letter is also preserved in Bede's history. In 609 Laurence stated that Dagan, a native bishop, would not eat with him or even share a roof with him, owing to the differences between the two Churches. Æthelberht died in 616, during Laurence's tenure; his son Eadbald abandoned Christianity in favour of Anglo-Saxon paganism, forcing many of the Gregorian missionaries to flee the pagan backlash that followed Æthelberht's death. Among those who fled to Gaul were Mellitus, who was Bishop of London, and Justus, who was Bishop of Rochester. Remaining in Britain, Laurence succeeded in reconverting Eadbald to Christianity. Bede relates the story that Laurence had been prepared to give up when he was visited by St Peter in a dream or vision. St Peter chastised Laurence and whipped him, and the marks of the whipping remained after the vision or dream ended. Laurence then displayed them to Eadbald, and the king was converted on the spot. Bede, however, hints that it was the death of some of the leaders of the pagan party in battle that really persuaded Laurence to stay. According to Benedicta Ward, a historian of Christianity, Bede uses the story of the whipping as an example of how suffering was a reminder of Christ's suffering for humans, and how that example could lead to conversion. Wright argues that another point Bede is making is that it is because of the intercession of St Peter himself that the mission continued. David Farmer, in the "Oxford Dictionary of Saints", suggests that the whipping story may have been a blending of the "Quo Vadis" story with some information given by Jerome in a letter. Modern historians have seen political overtones in the pagan reaction. The historian D. P. Kirby sees Eadbald's actions as a repudiation of his father's pro-Frankish policies. Alcuin, a later medieval writer, wrote that Laurence was "censured by apostolic authority". This censure may have taken the form of a letter from Pope Adeodatus I commanding Laurence to stay in Kent. Kirby goes on to argue that it was Justus, not Laurence, who converted Eadbald, and that this occurred while Justus was archbishop, sometime around 624. Not all historians agree with this argument, however. Nicholas Brooks states that the king was converted during Laurence's archiepiscopate, within a year of him succeeding his father. The historian Barbara Yorke argues that there were two co-rulers of Kent after Æthelberht's death, Eadbald and one Æthelwald, and that Eadbald was converted by Laurence while Æthelwald was converted by Justus after his return to Rochester. Another factor in the pagan reaction was Laurence's objection to Eadbald's marriage to his father's widow, something that Christians considered to be unlawful. 
All efforts to extend the church beyond Kent encountered difficulties due to the attitude of King Rædwald of East Anglia, who had become the leading king in the south after Æthelberht's death. Rædwald was converted before the death of Æthelberht, perhaps at his urging, but his kingdom was not, and Rædwald seems to have converted only to the extent of placing a Christian altar in his pagan temple. It proved impossible for Mellitus to return to London as bishop, although Justus did resume his duties at Rochester. Laurence died on 2 February 619, and was buried in the abbey of St Peter and Paul in Canterbury, later renamed St Augustine's; his relics, or remains, were moved, or translated, to the new church of St Augustine's in 1091. His shrine was in the axial chapel of the abbey church, flanking the shrine of Augustine, his predecessor. Laurence came to be regarded as a saint, and was given the feast day of 3 February. The ninth-century Stowe Missal commemorates his feast day, along with Mellitus and Justus. A "Vita" (or "Life") was written by Goscelin around the time of his translation, but it is mainly based on information in Bede. His tomb was opened in 1915. Besides his feast day, the date of his translation, 13 September, was also celebrated after his death. Laurence's tenure as archbishop is mainly remembered for his failure to secure a settlement with the Celtic church, and for his reconversion of Eadbald following Æthelberht's death. He was succeeded as archbishop by Mellitus, the Bishop of London. Louis Riel Louis David Riel (22 October 1844 – 16 November 1885) was a Canadian politician, a founder of the province of Manitoba, and a political leader of the Métis people of the Canadian Prairies. He led two rebellions against the government of Canada and its first post-Confederation prime minister, John A. Macdonald. Riel sought to preserve Métis rights and culture as their homelands in the Northwest came progressively under the Canadian sphere of influence. Over the decades, he has been made a folk hero by the Francophones, the Catholic nationalists, the native rights movement, and the New Left student movement. Riel has received more scholarly attention than practically any other figure in Canadian history. His first resistance was the Red River Rebellion of 1869–1870. The provisional government established by Riel ultimately negotiated the terms under which the modern province of Manitoba entered the Canadian Confederation. Riel ordered the execution of Thomas Scott, and fled to the United States to escape prosecution. Despite this, he is frequently referred to as the "Father of Manitoba". While a fugitive, he was elected three times to the House of Commons of Canada, although he never assumed his seat. During these years, he was frustrated by having to remain in exile despite his growing belief that he was a divinely chosen leader and prophet, a belief which would later resurface and influence his actions. Because of this new religious conviction, Catholic leaders who had supported him before increasingly repudiated him. He married in 1881 while in exile in Montana, in the United States, and fathered three children. In 1884 Riel was called upon by the Métis leaders in Saskatchewan to articulate their grievances to the Canadian government. Instead he organized a resistance that escalated into a military confrontation, the North-West Rebellion of 1885. Ottawa used the new rail lines to send in thousands of combat soldiers. 
It ended in his arrest and conviction for high treason. Rejecting many protests and popular appeals, Prime Minister Macdonald decided that he would hang. Riel was seen as a heroic victim by French Canadians; his execution had a lasting negative impact on Canada, polarizing the new nation along ethno-religious lines. Although only a few hundred people were directly affected by the Rebellion in Saskatchewan, the long-term result was that the Prairie provinces would be controlled by the Anglophones, not the Francophones. An even more important long-term impact was the bitter alienation felt by Francophones across Canada, and their anger at the repression carried out by their countrymen. Riel's historical reputation has long been polarized between portrayals of him as a dangerous, half-insane religious fanatic and rebel against the Canadian nation, and, by contrast, as a heroic rebel who fought to protect his Francophone people from the unfair encroachments of an Anglophone national government. He is increasingly celebrated as a proponent of multiculturalism, although that downplays his primary commitment to Métis nationalism and political independence. The Red River Settlement was a community in Rupert's Land nominally administered by the Hudson's Bay Company (HBC), and largely inhabited by First Nations tribes and the Métis, an ethnic group of mixed Cree, Ojibwa, Saulteaux, French Canadian, Scottish, and English descent. Louis Riel was born there in 1844, near modern Winnipeg, Manitoba, to Louis Riel, Sr. and Julie Lagimodière. Riel was the eldest of eleven children in a locally well-respected family. His father, who was of Franco-Ojibwa Métis descent, had gained prominence in this community by organizing a group that supported Guillaume Sayer, a Métis imprisoned for challenging the HBC's historical trade monopoly. Sayer's eventual release due to agitations by Louis Sr.'s group effectively ended the monopoly, and the name Riel was therefore well known in the Red River area. His mother was the daughter of Jean-Baptiste Lagimodière and Marie-Anne Gaboury, who in 1812 had been among the earliest white families to settle in the Red River Settlement. The Riels were noted for their devout Catholicism and strong family ties. Riel was first educated by Roman Catholic priests at St. Boniface. At age 13 he came to the attention of Alexandre Taché, the Suffragan Bishop of St. Boniface, who was eagerly promoting the priesthood for talented young Métis. In 1858 Taché arranged for Riel to attend the Petit Séminaire of the Collège de Montréal, under the direction of the Sulpician order. Descriptions of him at the time indicate that he was a fine scholar of languages, science, and philosophy, but exhibited a frequent and unpredictable moodiness. Following news of his father's premature death in 1864, Riel lost interest in the priesthood and withdrew from the college in March 1865. For a time he continued his studies as a day student in the convent of the Grey Nuns, but was soon asked to leave following breaches of discipline. He remained in Montreal over a year, living at the home of his aunt, Lucie Riel. Impoverished by the death of his father, Riel took employment as a law clerk in the Montreal office of Rodolphe Laflamme. During this time he was involved in a failed romance with a young woman named Marie-Julie Guernon. This progressed to the point of Riel having signed a contract of marriage, but his fiancée's family opposed her involvement with a Métis, and the engagement was soon broken. 
Compounding this disappointment, Riel found legal work unpleasant, and by early 1866 he had resolved to leave Canada East. Some of his friends said later that he worked odd jobs in Chicago, Illinois, while staying with poet Louis-Honoré Fréchette, and wrote poems himself in the manner of Lamartine; also that he was then for a time employed as a clerk in Saint Paul, Minnesota, before returning to the Red River Settlement on 26 July 1868. The majority population of the Red River had historically been Métis and First Nation people. Upon his return, Riel found that religious, nationalistic, and racial tensions were exacerbated by an influx of Anglophone Protestant settlers from Ontario. The political situation was also uncertain, as ongoing negotiations for the transfer of Rupert's Land from the Hudson's Bay Company to Canada had not addressed the political terms of transfer. Finally, despite warnings to the Macdonald government from Bishop Taché and the HBC governor William Mactavish that any such activity would precipitate unrest, the Canadian minister of public works, William McDougall, ordered a survey of the area. The arrival on 20 August 1869 of a survey party headed by Colonel John Stoughton Dennis increased anxiety among the Métis. The Métis did not possess title to their land, which was in any case laid out according to the seigneurial system rather than in English-style square lots. In late August, Riel denounced the survey in a speech, and on 11 October 1869, the survey's work was disrupted by a group of Métis that included Riel. This group organized itself as the "Métis National Committee" on 16 October, with Riel as secretary and John Bruce as president. When summoned by the HBC-controlled Council of Assiniboia to explain his actions, Riel declared that any attempt by Canada to assume authority would be contested unless Ottawa had first negotiated terms with the Métis. Nevertheless, the non-bilingual McDougall was appointed the lieutenant governor-designate, and attempted to enter the settlement on 2 November. McDougall's party was turned back near the Canada–US border, and on the same day, Métis led by Riel seized Fort Garry. On 6 November, Riel invited Anglophones to attend a convention alongside Métis representatives to discuss a course of action, and on 1 December he proposed to this convention a list of rights to be demanded as a condition of union. Much of the settlement came to accept the Métis point of view, but a passionately pro-Canadian minority began organizing in opposition. Loosely constituted as the Canadian Party, this group was led by John Christian Schultz, Charles Mair, Colonel John Stoughton Dennis, and a more reticent Major Charles Boulton. McDougall attempted to assert his authority by authorizing Dennis to raise a contingent of armed men, but the Anglophone settlers largely ignored this call to arms. Schultz, however, attracted approximately fifty recruits and fortified his house and store. Riel ordered Schultz's home surrounded, and the outnumbered Canadians soon surrendered and were imprisoned in Upper Fort Garry. Hearing of the unrest, Ottawa sent three emissaries to the Red River, including HBC representative Donald Alexander Smith. While they were en route, the Métis National Committee declared a provisional government on 8 December, with Riel becoming its president on 27 December. Meetings between Riel and the Ottawa delegation took place on 5 and 6 January 1870, but when these proved fruitless, Smith chose to present his case in a public forum. 
Smith assured large audiences of the Government's goodwill in meetings on 19 and 20 January, leading Riel to propose the formation of a new convention split evenly between French and English settlers to consider Smith's instructions. On 7 February, a new list of rights was presented to the Ottawa delegation, and Smith and Riel agreed to send representatives to Ottawa to engage in direct negotiations on that basis. The provisional government established by Louis Riel published its own newspaper, the "New Nation", and set up the Legislative Assembly of Assiniboia to pass laws. Despite the apparent progress on the political front, the Canadian party continued to plot against the provisional government. However, they suffered a setback on 17 February, when forty-eight men, including Boulton and Thomas Scott, were arrested near Fort Garry. Boulton was tried by a tribunal headed by Ambroise-Dydime Lépine and sentenced to death for his interference with the provisional government. He was pardoned, but Scott interpreted this as weakness by the Métis, whom he regarded with open contempt. After Scott repeatedly quarreled with his guards, they insisted that he be tried for insubordination. At his court martial he was found guilty and was sentenced to death. Riel was repeatedly entreated to commute the sentence, but he responded, "I have done three good things since I have commenced: I have spared Boulton's life at your instance, I pardoned Gaddy, and now I shall shoot Scott." Scott was executed by firing squad on 4 March. Riel's motivations have been the cause of much speculation, but his own justification was that he felt it necessary to demonstrate to the Canadians that the Métis must be taken seriously. Protestant Canada did take notice, swore revenge, and set up a "Canada First" movement to mobilize its anger. The delegates representing the provisional government departed for Ottawa in March. Although they initially met with legal difficulties arising from the execution of Scott, they soon entered into direct talks with Macdonald and George-Étienne Cartier. An agreement enshrining the demands in the list of rights was quickly reached, and this formed the basis for the Manitoba Act of 12 May 1870, which formally admitted Manitoba into the Canadian confederation. However, the negotiators could not secure a general amnesty for the provisional government. As a means of exercising Canadian authority in the settlement and dissuading American expansionists, a Canadian military expedition under Colonel Garnet Wolseley was dispatched to the Red River. Although the government described it as an "errand of peace", Riel learned that Canadian militia elements in the expedition meant to lynch him, and he fled as the expedition approached the Red River. The arrival of the expedition on 20 August marked the effective end of the Red River Rebellion. It was not until 2 September 1870 that the new lieutenant-governor Adams George Archibald arrived and set about the establishment of civil government. Without an amnesty, and with the Canadian militia beating and intimidating his sympathisers, Riel fled to the safety of the St. Joseph's mission across the Canada–US border in the Dakota Territory. However, the results of the first provincial election in December 1870 were promising for Riel, as many of his supporters came to power. 
Nevertheless, stress and financial troubles precipitated a serious illness (perhaps a harbinger of his future mental afflictions) that prevented his return to Manitoba until May 1871. The settlement now faced a possible threat from cross-border Fenian raids coordinated by his former associate William Bernard O'Donoghue. Archibald proclaimed a general call to arms on 4 October. Companies of armed horsemen were raised, including one led by Riel. When Archibald reviewed the troops in St. Boniface, he made the significant gesture of publicly shaking Riel's hand, signaling that a rapprochement had been effected. This was not to be: when news of this reached Ontario, Mair and members of the Canada First movement whipped up anti-Riel (and anti-Archibald) sentiment. With federal elections coming in 1872, Macdonald could ill afford a further rift in Quebec–Ontario relations, and so he did not offer an amnesty. Instead he quietly arranged for Taché to offer Riel a bribe of $1,000 to remain in voluntary exile. This was supplemented by an additional £600 from Smith for the care of Riel's family. Nevertheless, by late June Riel was back in Manitoba and was soon persuaded to run as a member of parliament for the electoral district of Provencher. However, following the early September defeat of George-Étienne Cartier in his home riding in Quebec, Riel stood aside so that Cartier, who was on record as being in favour of amnesty for Riel, might secure a seat in Provencher. Cartier won by acclamation, but Riel's hopes for a swift resolution to the amnesty question were dashed following Cartier's death on 20 May 1873. In the ensuing by-election in October 1873, Riel ran unopposed as an Independent, although he had again fled, a warrant having been issued for his arrest in September. Lépine was not so lucky; he was captured and faced trial. Riel made his way to Montreal and, fearing arrest or assassination, vacillated as to whether he should attempt to take up his seat in the House of Commons; Edward Blake, the Premier of Ontario, had announced a bounty of $5,000 for his arrest. Famously, Riel was the only Member of Parliament who was not present for the great Pacific Scandal debate of 1873 that led to the resignation of the Macdonald government in November. Liberal leader Alexander Mackenzie became the interim prime minister, and a general election was held in January 1874. Although the Liberals under Mackenzie formed the new government, Riel easily retained his seat. Formally, Riel had to sign a register book at least once upon being elected, and he did so in disguise in late January. He was nevertheless stricken from the rolls following a motion supported by Schultz, who had become the member for the electoral district of Lisgar. Undeterred, Riel prevailed again in the resulting by-election, and although he was again expelled, his symbolic point had been made and public opinion in Quebec tipped strongly in his favour. During this period, Riel had been staying with priests of the Oblate order in Plattsburgh, New York, who introduced him to Father Fabien Martin "dit" Barnabé in the nearby village of Keeseville. It was here that he received news of Lépine's fate: following his trial for the murder of Scott, which had begun on 13 October 1874, Lépine was found guilty and sentenced to death. This sparked outrage in the sympathetic Quebec press, and calls for amnesty for both Lépine and Riel were renewed. 
This presented a severe political difficulty for Mackenzie, who was hopelessly caught between the demands of Quebec and Ontario. However, a solution was forthcoming when, acting on his own initiative, the Governor General Lord Dufferin commuted Lépine's sentence in January 1875. This opened the door for Mackenzie to secure from parliament an amnesty for Riel, on the condition that he remain in exile for five years. During his time of exile, he was primarily concerned with religious rather than political matters. Spurred on by a sympathetic Roman Catholic priest in Quebec, he was increasingly influenced by his belief that he was a divinely chosen leader of the Métis. Modern biographers have speculated that he may have suffered from the psychological condition megalomania. His mental state deteriorated, and following a violent outburst he was taken to Montreal, where he was under the care of his uncle, John Lee, for a few months. But after Riel disrupted a religious service, Lee arranged to have him committed to an asylum in Longue Pointe on 6 March 1876 under the assumed name "Louis R. David". Fearing discovery, his doctors soon transferred him to the Beauport Asylum near Quebec City under the name "Louis Larochelle". While he suffered from sporadic irrational outbursts, he continued his religious writing, composing theological tracts with an admixture of Christian and Judaic ideas. He consequently began calling himself Louis "David" Riel, prophet of the new world, and he would pray standing for hours, having servants help him to hold his arms in the shape of a cross. Nevertheless, he slowly recovered, and was released from the asylum on 23 January 1878 with an admonition to lead a quiet life. He returned for a time to Keeseville, where he became involved in a passionate romance with Evelina Martin "dit" Barnabé, sister of his friend, the Oblate father Fabien Barnabé. But with insufficient means to propose marriage, Riel returned to the west, hoping that she might follow. However, she decided that she would be unsuited to prairie life, and their correspondence soon ended. In the fall of 1878, Riel returned to St. Paul, and briefly visited his friends and family. This was a time of rapid change for the Métis of the Red River: the buffalo on which they depended were becoming increasingly scarce, the influx of settlers was ever-increasing, and much land was sold to unscrupulous land speculators. Like other Red River Métis who had left Manitoba, Riel headed further west to start a new life. Travelling to the Montana Territory, he became a trader and interpreter in the area surrounding Fort Benton. Observing rampant alcoholism and its detrimental impact on the Native American and Métis people, he engaged in an unsuccessful attempt to curtail the whisky trade. On 28 April 1881, he married Marguerite Monet "dit" Bellehumeur (1861–1886), a young Métis woman, "in the fashion of the country"; the marriage was solemnized on 9 March 1882. They were to have three children: Jean-Louis (1882–1908); Marie-Angélique (1883–1897); and a boy who was born and died on 21 October 1885, less than one month before Riel was hanged. Riel soon became involved in the politics of Montana, and in 1882 actively campaigned on behalf of the Republican Party. He brought a suit against a Democrat for rigging a vote, but was then himself accused of fraudulently inducing British subjects to take part in the election. In response, Riel applied for United States citizenship and was naturalized on 16 March 1883. 
With two young children, he had by 1884 settled down and was teaching school at the St. Peter's Jesuit mission in the Sun River district of Montana. Following the Red River Rebellion, Métis travelled west and settled in the Saskatchewan Valley, especially along the south branch of the river in the country surrounding the Saint-Laurent mission (near modern St. Laurent de Grandin, Saskatchewan). But by the 1880s, it had become clear that westward migration was no panacea for the troubles of the Métis and the plains Indians. The rapid collapse of the buffalo herd was causing near starvation among the Plains Cree and Blackfoot First Nations. This was exacerbated by a reduction in government assistance in 1883, and by a general failure of Ottawa to live up to its treaty obligations. The Métis were likewise obliged to give up the hunt and take up agriculture—but this transition was accompanied by complex issues surrounding land claims similar to those that had previously arisen in Manitoba. Moreover, settlers from Europe and the eastern provinces were also moving into the Saskatchewan territories, and they too had complaints related to the administration of the territories. Virtually all parties therefore had grievances, and by 1884 English settlers, Anglo-Métis and Métis communities were holding meetings and petitioning a largely unresponsive government for redress. In the electoral district of Lorne, a meeting of the south branch Métis was held in the village of Batoche on 24 March, and thirty representatives voted to ask Riel to return and represent their cause. On 6 May a joint "Settler's Union" meeting was attended by both the Métis and English-speaking representatives from Prince Albert, including William Henry Jackson, an Ontario settler sympathetic to the Métis and known to them as Honoré Jackson, and James Isbister of the Anglo-Métis. It was here resolved to send a delegation to ask Riel's assistance in presenting their grievances to the Canadian government. The head of the delegation to Riel was Gabriel Dumont, a respected buffalo hunter and leader of the Saint-Laurent Métis who had known Riel in Manitoba. James Isbister was the lone Anglo-Métis delegate. Riel was easily swayed to support their cause—which was perhaps not surprising in view of Riel's continuing conviction that he was the divinely selected leader of the Métis and the prophet of a new form of Christianity. Riel also intended to use the new position of influence to pursue his own land claims in Manitoba. The party departed 4 June, and arrived back at Batoche on 5 July. Upon his arrival Métis and English settlers alike formed an initially favourable impression of Riel following a series of speeches in which he advocated moderation and a reasoned approach. During June 1884, the Plains Cree leaders Big Bear and Poundmaker were independently formulating their complaints, and subsequently held meetings with Riel. However, the Native grievances were quite different from those of the settlers, and nothing was then resolved. Inspired by Riel, Honoré Jackson and representatives of other communities set about drafting a petition, and Jackson on 28 July released a manifesto detailing grievances and the settler's objectives. A joint English-Métis central committee with Jackson acting as secretary worked to reconcile proposals from different communities. In the interim, Riel's support began to waver. 
As Riel's religious pronouncements became increasingly heretical, the clergy distanced themselves, and Father Alexis André cautioned Riel against mixing religion and politics. Also, in response to bribes by territorial lieutenant-governor and Indian commissioner Edgar Dewdney, local English-language newspapers adopted an editorial stance critical of Riel. Nevertheless, the work continued, and on 16 December Riel forwarded the committee's petition to the government, along with the suggestion that delegates be sent to Ottawa to engage in direct negotiation. Receipt of the petition was acknowledged by Joseph-Adolphe Chapleau, Macdonald's Secretary of State, although Macdonald himself would later deny having ever seen it. By then many original followers had left; only 250 remained at Batoche when it fell in May 1885. Historian Donald Creighton has argued that Riel had become a changed man. While Riel awaited news from Ottawa he considered returning to Montana, but by February had resolved to stay. Without a productive course of action, Riel began to engage in obsessive prayer and experienced a significant relapse of his mental agitations. This led to a deterioration in his relationship with the Catholic hierarchy, as he publicly espoused an increasingly heretical doctrine. On 11 February 1885, a response to the petition was received. The government proposed to take a census of the North-West Territories, and to form a commission to investigate grievances. This angered a faction of the Métis who saw it as a mere delaying tactic; they favoured taking up arms at once. Riel became the leader of this faction, but he lost the support of almost all Anglophones and Anglo-Métis, the Catholic Church, and the great majority of Indians. He also lost the support of the Métis faction aligned with local leader Charles Nolin. But Riel, undoubtedly influenced by his messianic delusions, became increasingly supportive of this course of action. In the church at Saint-Laurent on 15 March, Riel disrupted a sermon to argue for this position, following which he was barred from receiving the sacraments. He talked more and more about his "divine revelations". But disenchanted with the status quo, and swayed by Riel's charisma and eloquent rhetoric, hundreds of Métis remained loyal to Riel, despite his proclamations that Bishop Ignace Bourget should be accepted as pope, and that "Rome has fallen". At his trial, Riel denied that his religious beliefs were as irrational as was being (and continues to be) alleged. He explained as follows: "I wish to leave Rome aside, inasmuch as it is the cause of division between Catholics and Protestants. I did not wish to force my views...If I could have any influence in the new world it would be to help in that way, even if it takes 200 years to become practical...so my children's children can shake hands with the Protestants of the new world in a friendly manner. I do not wish those evils which exist in Europe to be continued, as much as I can influence it, among the [Métis]. I do not wish that to be repeated in America." On 18 March it became known that the North-West Mounted Police garrison at Battleford was being reinforced. Although only 100 men had been sent in response to warnings from Father Alexis André and NWMP superintendent L.N.F. Crozier, a rumour soon began to circulate that 500 heavily armed troops were advancing on the territory. 
Métis patience was exhausted, and Riel's followers seized arms, took hostages, and cut the telegraph lines between Batoche and Battleford. The Provisional Government of Saskatchewan was declared at Batoche on 19 March, with Riel as the political and spiritual leader and with Dumont assuming responsibility for military affairs. Riel formed a council called the Exovedate (a neologism meaning "those who have left the flock"), and sent representatives to court Poundmaker and Big Bear. On 21 March, Riel's emissaries demanded that Crozier surrender Fort Carlton, but this was refused. The situation was becoming critical, and on 23 March Dewdney sent a telegraph to Macdonald indicating that military intervention might be necessary. Scouting near Duck Lake on 26 March, a force led by Gabriel Dumont unexpectedly chanced upon a party from Fort Carlton. In the ensuing Battle of Duck Lake, the police were routed, and the Natives also rose up once the news became known. The die was cast for a violent outcome, and the North-West Rebellion began in earnest. Riel had counted on the Canadian government being unable to effectively respond to another uprising in the distant North-West Territories, thereby forcing it to accept political negotiation. This was essentially the same strategy that had worked to such great effect during the 1870 rebellion. In that instance, the first troops did not arrive until three months after Riel seized control. However, Riel had completely overlooked the significance of the Canadian Pacific Railway. Despite some uncompleted gaps, the first Canadian regular and militia units, under the command of Major-General Frederick Dobson Middleton, arrived in Duck Lake less than two weeks after Riel had made his demands. Knowing that he could not defeat the Canadians in direct confrontation, Dumont had hoped to force them to negotiate by engaging in a long, drawn-out campaign of guerrilla warfare; Dumont realised a modest success along these lines at the Battle of Fish Creek on 24 April 1885. Riel, however, insisted on concentrating forces at Batoche to defend his "city of God". The outcome of the ensuing Battle of Batoche, which took place from 9 to 12 May, was never in doubt, and on 15 May a disheveled Riel surrendered to Canadian forces. Although Big Bear's forces managed to hold out until the Battle of Loon Lake on 3 June, the rebellion was a dismal failure for Métis and Natives alike, as they surrendered or fled. Several individuals closely tied to the government requested that the trial be held in Winnipeg in July 1885. Some historians contend that the trial was moved to Regina because of concerns about the possibility of an ethnically mixed and sympathetic jury. Tom Flanagan states that an amendment of the North-West Territories Act (which dropped the provision that crimes punishable by death should be tried in Manitoba) meant that the trial could be convened within the North-West Territories and did not have to be held in Winnipeg. Prime Minister Sir John A. Macdonald ordered the trial to be convened in Regina, where Riel was tried before a jury of six English and Scottish Protestants, all from the area surrounding the city. The trial began on 28 July 1885, and lasted five days. Riel delivered two long speeches during his trial, defending his own actions and affirming the rights of the Métis people. 
He rejected his lawyer's attempt to argue that he was not guilty by reason of insanity. The jury found him guilty but recommended mercy; nonetheless, Judge Hugh Richardson sentenced him to death, with the date of his execution initially set for 18 September 1885. "We tried Riel for treason," one juror later said, "and he was hanged for the murder of Scott." Boulton writes in his memoirs that, as the date of his execution approached, Riel regretted his opposition to the defence of insanity and vainly attempted to provide evidence that he was not sane. Requests for a retrial and an appeal to the Judicial Committee of the Privy Council in Britain were denied. Sir John A. Macdonald was instrumental in upholding Riel's sentence and refused to commute it. Before his execution, Riel was reconciled with the Catholic Church, and was assigned Father André as his spiritual advisor. He was also given writing materials so that he could employ his time in prison to write a book. Louis Riel was hanged for treason on 16 November 1885 at the North-West Mounted Police barracks in Regina. Boulton also writes of Riel's final moments. Following the execution, Riel's body was returned to his mother's home in St. Vital, where it lay in state. On 12 December 1886, his remains were laid in the churchyard of the Saint-Boniface Cathedral following the celebration of a requiem mass. The trial and execution of Riel caused a bitter and prolonged reaction which convulsed Canadian politics for decades. The execution was supported in some provinces and opposed in others: Conservative Ontario strongly supported Riel's execution, but Quebec was vehemently opposed to it. Francophones were upset that Riel had been hanged, seeing his execution as a symbol of English dominance. The Orange Irish Protestant element in Ontario had demanded the execution as the punishment for Riel's treason and his execution of Thomas Scott in 1870. With their revenge satisfied, the Orangemen turned their attention to other matters (especially the Jesuit Estates proposal). In Quebec there was no forgetting, and the politician Honoré Mercier rose to power by mobilizing the opposition in 1886. Riel remains controversial. J. M. Bumsted in 2000 said that for Manitoba historian James Jackson, the murder of Scott – "perhaps the result of Riel's incipient madness – was the great blemish on Riel's achievement, depriving him of his proper role as the father of Manitoba." The land grants requested by the Saskatchewan Métis were all provided by the government by the end of 1887, and the government resurveyed the Métis river lots in accordance with their wishes. The Métis did not understand the long-term value of their new land, however, and it was soon bought by speculators who later turned huge profits from it. Riel's worst fears were realised: following the failed rebellion, the French language and Roman Catholic religion faced increasing marginalisation in both Saskatchewan and Manitoba, as exemplified by the controversy surrounding the Manitoba Schools Question. The Métis themselves were increasingly forced to live on undesirable land or in the shadow of Indian reserves (as they did not themselves have treaty status). Saskatchewan did not attain provincehood until 1905. Riel's execution and Macdonald's refusal to commute his sentence caused lasting discord in Quebec, and led to a fundamental alteration in the Canadian political order. In Quebec, Honoré Mercier exploited the discontent to reconstitute the Parti National. 
This party, which promoted Quebec nationalism, won a majority in the 1886 Quebec election by taking a number of seats formerly controlled by the Quebec Conservative Party. The federal election of 1887 likewise saw significant gains by the federal Liberals, again at the expense of the Conservatives. This led to the victory of the Liberal party under Sir Wilfrid Laurier in the federal election of 1896, which in turn set the stage for the domination of Canadian federal politics by the Liberal party in the 20th century. That Riel's name still has resonance in Canadian politics was evidenced on 16 November 1994, when Suzanne Tremblay, a Bloc Québécois member of parliament, introduced private members' bill C-228, "An Act to revoke the conviction of Louis David Riel". The unsuccessful bill was widely perceived in English Canada as an attempt to arouse support for Quebec nationalism before the 1995 referendum on Quebec sovereignty. Bill C-213, the Louis Riel Day Act, and Bill C-417, the Louis Riel Act, are the more notable bills that have gone before Parliament. Bill C-297 to revoke the conviction of Louis Riel was introduced to the House of Commons on 21 October and 22 November 1996; however, the motion lacked unanimous consent from the House and was dropped. Bill C-213, the Louis Riel Day Act of 1997, attempted to revoke the conviction of Louis Riel for high treason and establish a national day in his honour on 16 November. Bill C-417, the Louis Riel Act, which would also have revoked the conviction of Louis Riel and established 15 July as "Louis Riel Day", was given a first reading in Parliament. On 18 February 2008, the province of Manitoba officially recognized the first Louis Riel Day as a general provincial holiday. It now falls on the third Monday of February each year in the province of Manitoba. Historians have debated the Riel case so often and so passionately that he is the most written-about person in all of Canadian history. Interpretations have varied dramatically over time. The first amateur English-language histories hailed the triumph of civilization, represented by English-speaking Protestants, over savagery represented by the half-breed Métis who were Catholic and spoke French. Riel was portrayed as an insane traitor and an obstacle to the expansion of Canada to the West. By the mid-20th century academic historians had dropped the theme of savagery versus civilization, deemphasized the Métis, and focused on Riel, presenting his execution as a major cause of the bitter division in Canada along ethnocultural and geographical lines of religion and language. W. L. Morton argued that Riel's demands were unrealistic. The Catholic bishops had originally supported the Métis, but reversed themselves when they realized that Riel was leading a heretical movement. They made sure that he was not honored as a martyr. However, the bishops lost their influence during the Quiet Revolution, and activists in Québec found in Riel the perfect hero, with the image now of a freedom fighter who stood up for his people against an oppressive government in the face of widespread racist bigotry. His insanity was ignored and he was made a folk hero by the Francophones, the Catholic nationalists, the native rights movement, and the New Left student movement. Activists who espoused violence embraced his image; in the 1960s, the Quebec terrorist group the Front de libération du Québec adopted the name "Louis Riel" for one of its cells. 
Across Canada there emerged a new interpretation of his rebellion, holding that the Métis had major unresolved grievances; that the government was indeed unresponsive; that Riel resorted to violence only as a last resort; and that he was given a questionable trial, then executed by a vengeful government. John Foster gave voice to this interpretation in 1985. However, a leading specialist, Thomas Flanagan, reversed his views after editing Riel's writings. As for the insanity, historians have noted that many religious leaders in the past have exhibited behavior that looks exactly like insanity. Flanagan emphasizes that Riel exemplified the tradition of religious mystics involved in politics, especially those with a sense that the world was about to be totally transformed by their religious vision. In Riel's case, this meant delivering the Métis from colonial domination. More broadly, Flanagan argues that Riel was devoutly religious and rejected equalitarianism (which he equated with secularism), concluding he was "a millenarian theocrat, sympathetic to the 'ancien régime' and opposed to the French Revolution, democracy, individualism, and secular society." Métis scholars have noted that Riel is a more important figure to non-Métis than to Métis; he is the only Métis figure most non-Métis are aware of. Political scientists such as Thomas Flanagan have pointed out certain parallels between Riel's following during the North-West Rebellion and millenarian cults. A resolution was passed by Parliament on 10 March 1992 recognizing Louis Riel as the founder of Manitoba. Two statues of Riel are located in Winnipeg. One of the Winnipeg statues, the work of architect Étienne Gaboury and sculptor Marcien Lemay, depicts Riel as a naked and tortured figure. It was unveiled in 1970 and stood in the grounds of the Legislative Assembly of Manitoba for 23 years. After much outcry (especially from the Métis community) that it was an undignified misrepresentation, the statue was removed and placed at the Collège universitaire de Saint-Boniface. It was replaced in 1994 with a statue designed by Miguel Joyal depicting Riel as a dignified statesman. The unveiling ceremony was on 16 May 1996, in Winnipeg. A statue of Riel on the grounds of the Saskatchewan legislative building in Regina was installed and later removed for similar reasons. In numerous communities across Canada, Riel is commemorated in the names of streets, schools, neighbourhoods, and other buildings. Examples in Winnipeg include the landmark Esplanade Riel pedestrian bridge linking Old Saint-Boniface with Winnipeg, the Louis Riel School Division, Louis Riel Avenue in Old Saint-Boniface, and Riel Avenue in St. Vital's Minnetonka neighbourhood (which is sometimes called Riel). The student centre and campus pub at the University of Saskatchewan in Saskatoon are named after Riel ("Place Riel" and "Louis'", respectively). Highway 11, stretching from Regina to just south of Prince Albert, has been named "Louis Riel Trail" by the province; the roadway passes near locations of the 1885 rebellion. One of the student residences at Simon Fraser University in Burnaby, British Columbia is named Louis Riel House. There are Louis Riel Schools in Calgary, Alberta, and Ottawa, Ontario. On 26 September 2007, the Manitoba legislature passed a bill establishing a statutory holiday on the third Monday in February as "Louis Riel Day", the same day some other provinces celebrate Family Day, beginning in 2008. The first Louis Riel Day was celebrated on 18 February 2008. 
This new statutory holiday coincides with the celebration on 15–24 February of the Festival du Voyageur. In the spring of 2008, Saskatchewan's Minister of Tourism, Parks, Culture and Sport, Christine Tell, proclaimed in Duck Lake that "the 125th commemoration, in 2010, of the 1885 Northwest Resistance is an excellent opportunity to tell the story of the prairie Métis and First Nations peoples' struggle with Government forces and how it has shaped Canada today." One of three Territorial Government Buildings remains on Dewdney Avenue in the Saskatchewan capital of Regina; it was the site of the trial of Louis Riel, and the drama "The Trial of Louis Riel" is still performed there. Following the trial, Louis Riel was hanged on 16 November 1885. The RCMP Heritage Centre, in Regina, opened in May 2007. The Métis brought his body to his mother's home, now the Riel House National Historic Site, and he was then buried at the St. Boniface Basilica in Manitoba, his birthplace. In 1925, the French writer Maurice Constantin-Weyer, who lived in Manitoba for ten years, published a fictionalized biography of Louis Riel in French, titled "La Bourrasque". An English translation/adaptation, "A Martyr's Folly" (Toronto, The Macmillan Company), was published in 1930, and a new version, "The Half-Breed" (New York, The Macaulay Company), in 1954. Portrayals of Riel's role in the Red River Rebellion include the 1979 CBC television film "Riel" and Canadian cartoonist Chester Brown's acclaimed 2003 graphic novel "Louis Riel: A Comic-Strip Biography". In the 1940 film "North West Mounted Police", Riel is portrayed by Francis McDonald. An opera about Riel, entitled "Louis Riel", was commissioned for Canada's centennial celebrations in 1967. It is an opera in three acts, written by Harry Somers, with an English and French libretto by Mavor Moore and Jacques Languirand. The Canadian Opera Company produced and performed the first run of the opera in September and October 1967. From the late 1960s until the early 1990s, the city of Saskatoon hosted "Louis Riel Day", a summer celebration that included a relay race that combined running, backpack carrying, canoeing, hill climbing, and horseback riding along the South Saskatchewan River in the city's downtown core. Traditionally, the event also included a cabbage roll eating contest and tug-of-war competition, as well as live musical performances. Although not affiliated with the Saskatoon Exhibition, for years Louis Riel Day was scheduled for the day before the start of the fair, and as such came to be considered the Exhibition's unofficial kick-off (the scheduling of the two events was separated in later years). The event was discontinued when major sponsors pulled out. Billy Childish wrote a song entitled "Louis Riel", which was performed by Thee Headcoats. Texas musician Doug Sahm wrote a song entitled "Louis Riel," which appeared on the album "S.D.Q. '98". In the song, Sahm likens the lore surrounding Riel to David Crockett's legend in his home state, spinning an abridged tale of Riel's life as a revolutionary: "...but you gotta respect him for what he thought was right... And all around Regina they talk about him still – why did they have to kill Louis Riel?" The Seattle-based indie rock band Grand Archives also wrote a song entitled "Louis Riel" that appears on their 2008 self-titled album. 
A track entitled "Snowin' Today: A Lament for Louis Riel" appears on the 2009 album "Live: Two Nights In March" by Saskatchewan singer/guitarist Little Miss Higgins; a studio version features on her 2010 release "Across The Plains". On 22 October 2003, the Canadian news channel CBC Newsworld and its French-language equivalent, Réseau de l'information, staged a simulated retrial of Riel. Viewers were invited to enter a verdict on the trial over the internet, and more than 10,000 votes were received—87% of which were "not guilty". The results of this straw poll led to renewed calls for Riel's posthumous pardon. Also on the basis of a public poll, the CBC's "Greatest Canadian" project ranked Riel as the 11th "Greatest Canadian". A 1979 episode of the TV series "How the West Was Won" was named "L'Affaire Riel", featuring Louis Riel during his exile in the United States. In 2001, Canadian sketch comedy troupe Royal Canadian Air Farce featured Riel in its send-up of the CBC documentary series "Canada: A People's History." Significant parallels were drawn between Riel's actions and those of modern-day Québécois separatists, and the comedian who portrayed Riel was made up to look like then-Premier Lucien Bouchard. Laika Laika (c. 1954 – 3 November 1957) was a Soviet space dog who became one of the first animals in space, and the first animal to orbit the Earth. Laika, a stray dog from the streets of Moscow, was selected to be the occupant of the Soviet spacecraft Sputnik 2 that was launched into outer space on 3 November 1957. Little was known about the impact of spaceflight on living creatures at the time of Laika's mission, and the technology to de-orbit had not yet been developed, so Laika's survival was never expected. Some scientists believed humans would be unable to survive the launch or the conditions of outer space, so engineers viewed flights by animals as a necessary precursor to human missions. The experiment aimed to prove that a living passenger could survive being launched into orbit and endure a micro-g environment, paving the way for human spaceflight and providing scientists with some of the first data on how living organisms react to spaceflight environments. Laika died within hours from overheating, possibly caused by a failure of the central R-7 sustainer to separate from the payload. The true cause and time of her death were not made public until 2002; instead, it was widely reported that she died when her oxygen ran out on day six or, as the Soviet government initially claimed, she was euthanised prior to oxygen depletion. On 11 April 2008, Russian officials unveiled a monument to Laika. A small monument in her honour was built near the military research facility in Moscow that prepared Laika's flight to space. It portrays a dog standing on top of a rocket. She also appears on the Monument to the Conquerors of Space in Moscow. After the success of Sputnik 1 in October 1957, Nikita Khrushchev, the Soviet leader, wanted a spacecraft launched on 7 November 1957, the 40th anniversary of the October Revolution. Construction had already started on a more sophisticated satellite, but it would not be ready until December; this satellite would later become Sputnik 3. Meeting the November deadline meant building a new craft. Khrushchev specifically wanted his engineers to deliver a "space spectacular", a mission that would repeat the triumph of Sputnik 1, stunning the world with Soviet prowess. Planners settled on an orbital flight with a dog. 
Soviet rocket engineers had long intended a canine orbit before attempting human spaceflight; since 1951, they had lofted 12 dogs into sub-orbital space on ballistic flights, working gradually toward an orbital mission set for some time in 1958. To satisfy Khrushchev's demands, they expedited the orbital canine flight for the November launch. According to Russian sources, the official decision to launch Sputnik 2 was made on 10 or 12 October, leaving less than four weeks to design and build the spacecraft. Sputnik 2, therefore, was something of a rush job, with most elements of the spacecraft being constructed from rough sketches. Aside from the primary mission of sending a living passenger into space, Sputnik 2 also contained instrumentation for measuring solar irradiance and cosmic rays. The craft was equipped with a life-support system consisting of an oxygen generator and devices to avoid oxygen poisoning and to absorb carbon dioxide. A fan, designed to activate whenever the cabin temperature exceeded a set threshold, was added to keep the dog cool. Enough food (in a gelatinous form) was provided for a seven-day flight, and the dog was fitted with a bag to collect waste. A harness was designed to be fitted to the dog, and there were chains to restrict her movements to standing, sitting, or lying down; there was no room to turn around in the cabin. An electrocardiogram monitored heart rate and further instrumentation tracked respiration rate, maximum arterial pressure, and the dog's movements. Laika was found as a stray wandering the streets of Moscow. Soviet scientists chose to use Moscow strays since they assumed that such animals had already learned to endure conditions of extreme cold and hunger. She was an eleven-pound mongrel female, approximately three years old; another account reported her weight differently. Soviet personnel gave her several names and nicknames, among them Kudryavka (Russian for "Little Curly"), Zhuchka ("Little Bug"), and Limonchik ("Little Lemon"). Laika, the Russian name for several breeds of dogs similar to the husky, was the name popularised around the world. The American press dubbed her Muttnik ("mutt" + suffix "-nik") as a pun on Sputnik, or referred to her as "Curly". Her true pedigree is unknown, although it is generally accepted that she was part husky or other Nordic breed, and possibly part terrier. NASA refers to Laika as a "part-Samoyed terrier." A Russian magazine described her temperament as phlegmatic, saying that she did not quarrel with other dogs. Vladimir Yazdovsky, who led the program of test dogs used on rockets, wrote in a later publication that "Laika was quiet and charming". The Soviet Union and United States had previously sent animals only on sub-orbital flights. Three dogs were trained for the Sputnik 2 flight: Albina, Mushka, and Laika. Soviet space-life scientists Vladimir Yazdovsky and Oleg Gazenko trained the dogs. To adapt the dogs to the confines of the tiny cabin of Sputnik 2, they were kept in progressively smaller cages for periods of up to 20 days. The extensive close confinement caused them to stop urinating or defecating, made them restless, and caused their general condition to deteriorate. Laxatives did not improve their condition, and the researchers found that only long periods of training proved effective. The dogs were placed in centrifuges that simulated the acceleration of a rocket launch and were placed in machines that simulated the noises of the spacecraft. 
This caused their pulses to double and their blood pressure to increase by 30–65 torr. The dogs were trained to eat a special high-nutrition gel that would be their food in space. Before the launch, one of the mission scientists took Laika home to play with his children. In a book chronicling the story of Soviet space medicine, Dr. Vladimir Yazdovsky wrote, "Laika was quiet and charming...I wanted to do something nice for her: She had so little time left to live." Vladimir Yazdovsky made the final selection of dogs and their designated roles. Laika was to be the "flight dog"—a sacrifice to science on a one-way mission to space. Albina, who had already flown twice on a high-altitude test rocket, was to act as Laika's backup. The third dog, Mushka, was a "control dog"—she was to stay on the ground and be used to test instrumentation and life support. Before leaving for the Baikonur Cosmodrome, Yazdovsky and Gazenko conducted surgery on the dogs, routing the cables from the transmitters to the sensors that would measure breathing, pulse, and blood pressure. Because the existing airstrip at Turatam near the cosmodrome was small, the dogs and crew had to be flown first aboard a Tu-104 plane to Tashkent. From there, a smaller and lighter Il-14 plane took them to Turatam. Training of the dogs continued upon arrival; one after another, they were placed in the capsules to get familiar with the feeding system. According to a NASA document, Laika was placed in the capsule of the satellite on 31 October 1957—three days before the start of the mission. At that time of year, the temperatures at the launch site were extremely cold, and a hose connected to a heater was used to keep her container warm. Two assistants were assigned to keep a constant watch on Laika before launch. Just prior to liftoff on 3 November 1957 from Baikonur Cosmodrome, Laika's fur was sponged in a weak alcohol solution and carefully groomed, while iodine was painted onto the areas where sensors would be placed to monitor her bodily functions. One of the technicians preparing the capsule before final liftoff stated that "after placing Laika in the container and before closing the hatch, we kissed her nose and wished her bon voyage, knowing that she would not survive the flight." The exact time of the liftoff varies from source to source and is mentioned as 05:30:42 Moscow Time or 07:22 Moscow Time. At peak acceleration Laika's respiration increased to between three and four times the pre-launch rate. The sensors showed her heart rate was 103 beats/min before launch and increased to 240 beats/min during the early acceleration. After reaching orbit, Sputnik 2's nose cone was jettisoned successfully; however, the "Block A" core did not separate as planned, preventing the thermal control system from operating correctly. Some of the thermal insulation tore loose, raising the cabin temperature considerably. After three hours of weightlessness, Laika's pulse rate had settled back to 102 beats/min; this took three times longer than it had during earlier ground tests, an indication of the stress she was under. The early telemetry indicated that Laika was agitated but eating her food. After approximately five to seven hours into the flight, no further signs of life were received from the spacecraft. The Soviet scientists had planned to euthanise Laika with a poisoned serving of food. For many years, the Soviet Union gave conflicting statements that she had died either from asphyxia, when the batteries failed, or that she had been euthanised. 
Many rumours circulated about the exact manner of her death. In 1999, several Russian sources reported that Laika had died when the cabin overheated on the fourth orbit. In October 2002, Dimitri Malashenkov, one of the scientists behind the Sputnik 2 mission, revealed that Laika had died by the fourth circuit of flight from overheating. According to a paper he presented to the World Space Congress in Houston, TX, USA, "It turned out that it was practically impossible to create a reliable temperature control system in such limited time constraints." Over five months later, after 2,570 orbits, Sputnik 2—including Laika's remains—disintegrated during re-entry on 14 April 1958. Due to the overshadowing issue of the Soviet vs. U.S. Space Race, the ethical issues raised by this experiment went largely unaddressed for some time. As newspaper clippings from 1957 show, the press was initially focused on reporting the political perspective, while the health and retrieval—or lack thereof—of Laika only became an issue later. Sputnik 2 was not designed to be retrievable, and Laika had always been intended to die. The mission sparked a debate across the globe on the mistreatment of animals and animal testing in general to advance science. In the United Kingdom, the National Canine Defence League called on all dog owners to observe a minute's silence, while the Royal Society for the Prevention of Cruelty to Animals (RSPCA) received protests even before Radio Moscow had finished announcing the launch. Animal rights groups at the time called on members of the public to protest at Soviet embassies. Others demonstrated outside the United Nations in New York; nevertheless, laboratory researchers in the U.S. offered some support for the Soviets, at least before the news of Laika's death. In the Soviet Union, there was less controversy. Neither the media, books in the following years, nor the public openly questioned the decision to send a dog into space. In 1998, after the collapse of the Soviet regime, Oleg Gazenko, one of the scientists responsible for sending Laika into space, expressed regret for allowing her to die. In other Warsaw Pact countries, open criticism of the Soviet space program was difficult because of political censorship, but there were notable cases of criticism in Polish scientific circles. A Polish scientific periodical, "Kto, Kiedy, Dlaczego" ("Who, When, Why"), published in 1958, discussed the mission of Sputnik 2. In the periodical's section dedicated to astronautics, Krzysztof Boruń described the Sputnik 2 mission as "regrettable" and criticised not bringing Laika back to Earth alive as "undoubtedly a great loss for science". Laika is memorialised in the form of a statue and plaque at Star City, Russia, the Russian cosmonaut training facility. Created in 1997, the statue depicts Laika positioned behind the cosmonauts with her ears erect. The Monument to the Conquerors of Space, constructed in 1964, also includes Laika. On 11 April 2008, at the military research facility where staff had been responsible for readying Laika for the flight, officials unveiled a monument of her poised on top of a space rocket. Stamps and envelopes picturing Laika were produced, as well as branded cigarettes and matches. Future space missions carrying dogs would be designed to be recovered. 
Four other dogs died in Soviet space missions: Bars and Lisichka were killed when their R-7 rocket exploded shortly after launch on 28 July 1960; Pchyolka and Mushka died when Korabl-Sputnik 3 was purposely destroyed with an explosive charge to prevent foreign powers from inspecting the capsule after a wayward atmospheric reentry trajectory on 1 December 1960. Although never shown, Laika is prominently mentioned in the 1985 film "My Life as a Dog", in which the main character (a young Swedish boy in the late 1950s) identifies strongly with the dog. "Laika", a 2007 graphic novel by Nick Abadzis giving a fictionalized account of Laika's life, won the Eisner Award as "Best Publication for Teens". Master of Puppets Master of Puppets is the third studio album by American heavy metal band Metallica. It was released on March 3, 1986, by Elektra Records. Recorded at the Sweet Silence Studios with producer Flemming Rasmussen, it was the first Metallica album released on a major record label. "Master of Puppets" was the band's last album to feature bassist Cliff Burton, who died in a bus accident in Sweden during the album's promotional tour. The album peaked at number 29 on the "Billboard" 200 and became the first thrash metal album to be certified platinum. It was certified 6× platinum by the Recording Industry Association of America (RIAA) in 2003 for shipping six million copies in the United States. The album was eventually certified 6× platinum by Music Canada and gold by the British Phonographic Industry (BPI). "Master of Puppets" was released to critical acclaim and has been included in several publications' best album lists. Its driving, virtuosic music, and angry political lyrics drew praise from critics outside the metal community. The album is considered the band's strongest effort of the period and is one of the most influential heavy metal albums. Critics credit it for consolidating the American thrash metal scene with its atmospheric and meticulously performed songs. Many bands from all genres of heavy metal have covered the album's songs, including on tribute albums. "Master of Puppets" was deemed "culturally, historically, or aesthetically significant" enough for preservation in the National Recording Registry by the United States Library of Congress in 2015, the first metal recording to receive this honor. The cover was designed by Metallica and Peter Mensch and painted by Don Brautigam. It depicts a cemetery field of white crosses tethered to strings, manipulated by a pair of hands in a blood-red sky. Instead of releasing a single or video in advance of the album's release, Metallica embarked on a five-month American tour in support of Ozzy Osbourne. The European leg was canceled after Burton's death in September 1986, and the band returned home to audition a new bassist. Metallica honored the album's 20th anniversary on the Escape from the Studio '06 tour by playing it in its entirety. A remastered version of the album was also released in November 2017. Metallica's 1983 debut "Kill 'Em All" laid the foundation for thrash metal with its aggressive musicianship and vitriolic lyricism. The album revitalized the American underground scene, and records by contemporaries followed in similar manner. The band's second album "Ride the Lightning" extended the limits of the genre with its more sophisticated songwriting and improved production. 
The album caught the attention of Elektra Records representative Michael Alago, who signed the group to an eight-album deal in the fall of 1984, halfway through the album's promotional tour. Elektra reissued "Ride the Lightning" on November 19, and the band began touring larger venues and festivals throughout 1985. After parting with manager Jon Zazula, Metallica hired Q Prime executives Cliff Burnstein and Peter Mensch. During the summer, the band played the Monsters of Rock festival at Castle Donington, alongside Bon Jovi and Ratt in front of 70,000 fans. Metallica was motivated to make an album that would impress critics and fans, and began writing new material in mid-1985. Lead singer/rhythm guitarist James Hetfield and drummer Lars Ulrich were the main songwriters on the album, already titled "Master of Puppets". The two developed ideas at a garage in El Cerrito, California, before inviting bassist Cliff Burton and guitarist Kirk Hammett for rehearsals. Hetfield and Ulrich described the songwriting process as starting with "guitar riffs, assembled and reassembled until they start to sound like a song". After that, the band came up with a song title and topic, and Hetfield wrote lyrics to match the title. "Master of Puppets" was Metallica's first album not to feature songwriting contributions from former lead guitarist Dave Mustaine. Mustaine claimed he had co-written "Leper Messiah", based on an old song called "The Hills Ran Red". The band denied this, though admitting that a section incorporated ideas of Mustaine's. The band was not satisfied with the acoustics of the American studios they considered, and decided to record in Ulrich's native Denmark. Ulrich took drum lessons, and Hammett worked with Joe Satriani to learn how to record more efficiently. Ulrich was in talks with Rush's bassist and vocalist Geddy Lee to produce the album, but the collaboration never materialized because of uncoordinated schedules. Metallica recorded the album with producer Flemming Rasmussen at Sweet Silence Studios in Copenhagen, Denmark, from September 1 to December 27, 1985. The writing of all the songs except "Orion" and "The Thing That Should Not Be" was completed before the band's arrival in Copenhagen. Rasmussen stated that the band brought well-prepared demos of the songs, and only slight changes were made to the compositions in the studio. The recording took longer than the previous album because Metallica had developed a sense of perfectionism and had higher ambitions for this one. Metallica eschewed the slick production and synthesizers of contemporary hard rock and heavy metal albums by Bon Jovi, Iron Maiden, and Judas Priest. Despite a reputation for drinking, the band stayed dry on recording days. Hammett recalled that the group was "just making another album" at the time and "had no idea that the record would have such a range of influence that it went on to have". He also said that the group was "definitely peaking" at the time and that the album had "the sound of a band really gelling, really learning how to work well together". Rasmussen and Metallica did not manage to complete the mixtapes as planned. Instead, the multitracks were sent in January 1986 to Michael Wagener, who finished the album's mixing. The cover was designed by Metallica and Peter Mensch and painted by Don Brautigam. It depicts a cemetery field of white crosses tethered to strings, manipulated by a pair of hands in a blood-red sky. 
Ulrich explained that the artwork summarized the lyrical content of the album—people being subconsciously manipulated. The original artwork was sold at Rockefeller Plaza, New York City, for $28,000 in 2008. The band mocked the warning stickers promoted by the PMRC with a facetious Parental Advisory label on the cover: "The only track you probably won't want to play is 'Damage, Inc.' due to the multiple use of the infamous 'F' word. Otherwise, there aren't any 'shits', 'fucks', 'pisses', 'cunts', 'motherfuckers', or 'cocksuckers' anywhere on this record". The album was recorded with the following equipment: Hammett's guitars were a black 1974 Gibson Flying V, a black Jackson Randy Rhoads, and a black Fernandes Stratocaster-inspired model nicknamed "Edna"; Hetfield used a Jackson King V played through a Mesa Boogie Mark C+ amplifier modified as a pre-amp; Burton played an Aria Pro II SB1000 through Mesa Boogie amplifier heads and cabinets; Ulrich played Tama drum equipment, and borrowed a rare S.L.P. Black Brass from Def Leppard drummer Rick Allen. "Master of Puppets" features dynamic music and thick arrangements. Metallica delivered a more refined approach and performance compared to the previous two albums, with multilayered songs and technical dexterity. This album and its predecessor "Ride the Lightning" follow a similar track sequencing: both open with an up-tempo song with an acoustic intro, followed by a lengthy title track, and a fourth track with ballad qualities. Although both albums are similarly structured, the musicianship on "Master of Puppets" is more powerful and epic in scope, with tight rhythms and delicate guitar solos. According to music writer Joel McIver, "Master of Puppets" introduced a new level of heaviness and complexity in thrash metal, displaying atmospheric and precisely executed songs. Hetfield's vocals had matured from the hoarse shouting of the first two albums to a deeper, in-control yet aggressive style. The songs explore themes such as control and the abuse of power. The lyrics describe the consequences of alienation, oppression, and feelings of powerlessness. Author Ryan Moore thought the lyrics depicted "ominous yet unnamed forces of power wielding total control over helpless human subjects". The lyrics were considered perceptive and harrowing, and were praised for being honest and socially conscious by writer Brock Helander. Referring to the epic proportions of the songs, BBC Music's Eamonn Stack stated that "at this stage in their careers Metallica weren't even doing songs, they were telling stories". The compositions and arrangements benefited from Burton's classical training and understanding of harmony. "Battery" is about anger and refers to the term in the sense of "assault and battery". Some critics contended that the title actually refers to an artillery battery, reading the song as Hetfield's depiction of a war tactic, with the aggressor personifying destruction. The song begins with bass-heavy acoustic guitars that build layer by multitracked-layer until they are joined by a wall of distorted electric guitars. It then breaks into fast, aggressive riffing featuring off-beat rhythms and heavily distorted minor dyads where root-fifth power chords might be expected. Hetfield improvised the riff while relaxing in London. "Master of Puppets" consists of several riffs with odd meters and a cleanly picked middle section with a melodic solo. 
The song shares a similar structure with "The Four Horsemen" from the band's first album: two verse-chorus sets lead to a lengthy interlude to another verse-chorus set. The opening and pre-verse sections feature fast downstroked chromatic riffing at 220 beats per minute. The persistent and precise eighth-note riffing of the verse is made more intense by switching to an off-kilter time signature on each fourth bar. A lengthy interlude follows the second chorus, beginning with a clean, arpeggiated section over which Hetfield contributes a melodic solo; the riffing becomes distorted and progressively heavier, and Hammett provides a more virtuosic solo before the song returns to the main verse. The song closes with a fade-out of sinister laughter. The theme is cocaine addiction, a topic considered taboo at the time. "The Thing That Should Not Be" was inspired by the Cthulhu Mythos created by famed horror writer H.P. Lovecraft, with notable direct references to The Shadow Over Innsmouth and to Cthulhu himself, who is the subject matter of the song's chorus. It is considered the heaviest track on the album; its main riff emulates a beast dragging itself into the sea. The Black Sabbath-influenced guitars are downtuned, creating a slow and moody ambiance. "Welcome Home (Sanitarium)" was based on Ken Kesey's novel "One Flew Over the Cuckoo's Nest" and conveys the thoughts of a patient unjustly caged in a mental institution. The song opens with a section of clean single strings and harmonics. The clean, arpeggiated main riff is played in alternating time signatures. The song is structured with alternating somber clean guitars in the verses and distorted heavy riffing in the choruses, unfolding into an aggressive finale. This structure follows a pattern of power ballads Metallica set with "Fade to Black" on "Ride the Lightning" and was to revisit with "One" on "...And Justice for All". "Disposable Heroes" is an anti-war song about a young soldier whose fate is controlled by his superiors. With sections performed at 220 beats per minute, it is one of the most intense tracks on the record. The guitar passage at the end of each verse was Hammett's imitation of the sort of music he found in war films. The syncopated riffing of "Leper Messiah" challenges the hypocrisy of the televangelism that emerged in the 1980s. The song describes how people are willingly turned into blind religious followers who mindlessly do whatever they are told. The 136 beats per minute mid-tempo riffing of the verses culminates in a descending chromatic riff in the chorus; it increases to a galloping 184 beats per minute for the middle section that climaxes in a distorted scream of "Lie!". The title derives from the lyrics to the David Bowie song "Ziggy Stardust". "Orion" is a multipart instrumental highlighting Burton's bass playing. It opens with a fade-in bass section, heavily processed to resemble an orchestra. It continues with mid-tempo riffing, followed by a bass solo at half-tempo. The tempo accelerates during the latter part, and the song ends with the music fading out. Burton arranged the middle section, which features a moody bass line and multipart guitar harmonies. "Damage, Inc." is a rant about senseless violence and reprisal against an unspecified target. It starts with a series of reversed bass chords based on the chorale prelude of Bach's "Come, Sweet Death". The song then jumps into a rapid rhythm with a pedal-point riff in E that Hammett says was influenced by Deep Purple. 
"Master of Puppets" was hailed as a masterpiece by critics outside of the heavy metal audience and cited by some as the genre's greatest album. In a contemporary review, Tim Holmes of "Rolling Stone" asserted that the band had redefined heavy metal with the technical skill and subtlety showcased on the album, which he described as "the sound of global paranoia". "Kerrang!" wrote that "Master of Puppets" "finally put Metallica into the big leagues where they belong". Editor Tom King said Metallica was at an "incredible song-writing peak" during the recording sessions, partially because Burton contributed to the songwriting. By contrast, "Spin" magazine's Judge I-Rankin was disappointed with the album and said, although the production is exceptional and Metallica's experimentation is commendable, it eschews the less "intellectual" approach of "Kill 'Em All" for a MDC-inspired direction that is inconsistent. In a retrospective review, AllMusic's Steve Huey viewed "Master of Puppets" as Metallica's best album and remarked that, although it was not as unexpected as "Ride the Lightning", "Master of Puppets" is a more musically and thematically consistent album. Greg Kot of the "Chicago Tribune" said the songs were the band's most intense at that point, despite veering towards "the progressive tendency of Rush." Adrien Begrand of PopMatters praised the production as "a metal version of Phil Spector's Wall of Sound" and felt none of Metallica's subsequent albums could match its passionate and intense musical quality. BBC Music's Eamonn Stack called the album "hard, fast, rock with substance" and likened the songs to stories of "biblical proportions". Canadian journalist Martin Popoff compared the album to "Ride the Lightning" and found "Master of Puppets" not a remake, despite similarities in "awesome power and effect". In a less enthusiastic review, Robert Christgau said the band's energy and political motivations are respectable, but believed they evoke clichéd images of "revolutionary heroes" who are "male chauvinists too inexperienced to know better". Released on March 3, 1986, the album had a 72-week run on the "Billboard" 200 album charts and earned the band its first gold certification. The album debuted on March 29, 1986, at number 128 and peaked at number 29 on the "Billboard" 200 chart. "Billboard" reported that the album sold 300,000 copies in its first three weeks. Despite virtually no radio airplay and no music videos, the album sold more than 500,000 copies in its first year. In 2003, "Master of Puppets" was certified 6× platinum by the Recording Industry Association of America (RIAA), having shipped six million copies in the United States. Between the beginning of the Nielsen SoundScan era in 1991 and 2009, the album sold 4,578,000 copies. The album was less successful on an international level, entering the top 40 on the German and Swiss album charts in its inaugural year. In 2004, it peaked within the top 10 in Finland and into the top 15 in Sweden. In 2008, the album reached the top 40 on the Australian and Norwegian album charts. "Master of Puppets" received 6× platinum certification from Music Canada and a golden award from the British Phonographic Industry (BPI) for shipments of 600,000 and 100,000 copies, respectively. "Master of Puppets" has appeared in several publications' best album lists. 
It was ranked number 167 on the list of Rolling Stone's 500 Greatest Albums of All Time; the magazine would later rank it second on their 2017 list of "100 Greatest Metal Albums of All Time", behind Black Sabbath's "Paranoid". "Time" included the album in its list of the 100 best albums of all time. According to the magazine's Josh Tyrangiel, "Master of Puppets" reinforced the velocity of playing in heavy metal and diminished some of its clichés. "Slant Magazine" placed the album at number 90 on its list of the best albums of the 1980s, saying "Master of Puppets" is not only Metallica's best recording, but also their most sincere. The album is featured in Robert Dimery's book "1001 Albums You Must Hear Before You Die". IGN named "Master of Puppets" the best heavy metal album of all time. The website stated it was Metallica's best because it "built upon and perfected everything they had experimented with prior" and that "all the pieces come together in glorious cohesion". Music journalist Martin Popoff also ranked it the best heavy metal album. The album was voted the fourth greatest guitar album of all time by "Guitar World" in 2006, and the title track ranked number 61 on the magazine's list of the 100 greatest guitar solos. "Total Guitar" ranked the main riff of the title track at number 7 among the top 20 guitar riffs. The April 2006 edition of "Kerrang!" was dedicated to the album and gave readers a free covers album, "Master of Puppets: Remastered". "Master of Puppets" became thrash metal's first platinum album, and by the early 1990s thrash metal successfully challenged and redefined the mainstream of heavy metal. Metallica and a few other bands headlined arena concerts and appeared regularly on MTV, although radio play remained incommensurate with their popularity. "Master of Puppets" is widely accepted as the genre's most accomplished album, and paved the way for subsequent development. The album, in the words of writer Christopher Knowles, "ripped Metallica away from the underground and put them atop the metal mountain". David Hayter from "Guitar Planet" recognized the album as one of the most influential records ever made and a benchmark by which other metal albums should be judged. MTV's Kyle Anderson had similar thoughts, saying that 25 years after its release the album remained a "stone cold classic". Carlos Ramirez from Noisecreep believes that "Master of Puppets" stands as one of the most representative albums of its genre. 1986 is seen as a pinnacle year for thrash metal in which the genre broke out of the underground thanks to albums such as Megadeth's "Peace Sells... but Who's Buying?" and Slayer's "Reign in Blood". Anthrax released "Among the Living" the following year, and by the end of 1987 these bands, alongside Metallica, were being called the "Big Four" of thrash metal. "Master of Puppets" frequently tops critic and fan polls of favorite thrash metal albums—the most frequent rival is Slayer's "Reign in Blood", also released in 1986 and also considered that band's peak. The rivalry partially stemmed from a contrast in approaches on the two albums, between the sophistication of "Master of Puppets" and the velocity of "Reign in Blood". Histories of the band tend to position "Ride the Lightning", "Master of Puppets", and "...And Justice for All" as a trilogy over the course of which the band's music progressively matured and became more sophisticated. 
In 2015, the album was deemed "culturally, historically, or aesthetically significant" by the Library of Congress and was selected for preservation in the National Recording Registry. Metallica opted for extensive touring instead of releasing a single or video to promote the album. Metallica spent March to August 1986 touring as the opening act for Ozzy Osbourne in the United States, the first tour Metallica played to arena-sized audiences. The group used to play Black Sabbath riffs during sound checks, which Osbourne perceived as mockery of him. Referring to that occasion, Ulrich stated that Metallica was honored to play with Osbourne, who treated the band well on the tour. Metallica was noted by the media for its excessive drinking habits while touring and earned the nickname "Alcoholica". The band members occasionally wore satirical T-shirts reading "Alcoholica/Drank 'Em All". The band usually played a 45-minute set, often followed by an encore. According to Ulrich, the audiences in bigger cities were already familiar with Metallica's music, unlike in the smaller towns they visited. "In the B-markets, people really don't know what we're all about. But after 45 or 50 minutes we can tell we've won them over. And fans who come to hear Ozzy go home liking Metallica." Metallica won over Osbourne's fans and slowly began to establish a mainstream following. The tour, however, was notable for several incidents. Hetfield broke his wrist in a mid-tour skateboarding accident, and his guitar technician John Marshall played rhythm guitar on several dates. The European leg of the Damage, Inc. Tour commenced in September, with Anthrax as the supporting band. The morning after the September 26 performance in Stockholm, the band's bus rolled over on a stretch of icy road. Burton was thrown through a window and killed instantly. The driver was charged with manslaughter but was not convicted. The band returned to San Francisco and hired Flotsam and Jetsam bassist Jason Newsted to replace Burton. Many of the songs that appeared on the band's next album, "...And Justice for All", were composed while Burton was still alive. All of the songs have been performed live and some became permanent set list features. Four tracks were featured on the nine-song set list for the album's promotional tour: "Battery" as opener, "Master of Puppets", "Welcome Home (Sanitarium)", and "Damage, Inc." The title track, which was issued as a single in France, became a live staple and the most played Metallica song. When played live, the crowd fills in some of the vocal parts while the group performs the instrumentals. "Loudwire"'s Chad Childers characterized the band's performance as "furious" and the song as the set's highlight. "Rolling Stone" described the live performance as "a classic in all its eight-minute glory". While the band was filming its 3D movie "" (2013) at Rogers Arena in Vancouver, crosses rose from the stage during the song, reminiscent of the album's cover art. "Welcome Home (Sanitarium)" is the second-most performed song from the album. The live performance is often accompanied by lasers, pyrotechnical effects and film screens. "Battery" is usually played at the beginning of the setlist or during the encore, accompanied by lasers and flame plumes. "Disposable Heroes" is featured in the video album "" (2009) filmed in Mexico City, in which the song was played on the second of three nights at the Foro Sol. "Orion" is the least-performed song from the album. 
Its first live performance was during the Escape from the Studio '06 tour, when the band performed the album in its entirety, honoring the 20th anniversary of its release. The band performed the album in the middle of the set. "Battery", "Welcome Home (Sanitarium)", "Damage, Inc." and the full-length "Master of Puppets" were revived for the band's concerts in 1998 and 1999, after having been retired for a number of years. All lyrics written by James Hetfield. The bonus tracks on the digital re-release were recorded live at the Seattle Coliseum, Seattle, Washington, on August 29 and 30, 1989, and also appeared on the live album "" (1993). Credits are adapted from the album's liner notes. Ride the Lightning Ride the Lightning is the second studio album by American heavy metal band Metallica, released on July 27, 1984, by the independent record label Megaforce Records. The album was recorded in three weeks with producer Flemming Rasmussen at the Sweet Silence Studios in Copenhagen, Denmark. The artwork, based on a concept by the band, depicts an electric chair being struck by lightning flowing from the band logo. The title was taken from a passage in Stephen King's novel "The Stand". Although rooted in the thrash metal genre, the album showcased the band's musical growth and lyrical sophistication. This was partly because bassist Cliff Burton introduced the basics of music theory to the rest of the band and had more input in the songwriting. Instead of relying strictly on fast tempos as on its debut "Kill 'Em All", Metallica broadened its approach by employing acoustic guitars, extended instrumentals, and more complex harmonies. The overall recording costs were paid by Metallica's European label Music for Nations because Megaforce was unable to cover them. It was the last album to feature songwriting contributions from former lead guitarist Dave Mustaine, and the first to feature contributions from his replacement, Kirk Hammett. "Ride the Lightning" received a positive response from music critics, who saw it as a more ambitious effort than its predecessor. Metallica promoted the album on the Bang That Head That Doesn't Bang European tour in late 1984, and on its North American leg in the first half of 1985. The band performed at major music festivals such as Monsters of Rock and Day on the Green later that year. Two months after its release, Elektra Records signed Metallica to a multi-year deal and reissued the album. "Ride the Lightning" peaked at number 100 on the "Billboard" 200 with no radio exposure. Although 75,000 copies were initially pressed for the American market, the album sold half a million by November 1987. It was certified 6× platinum by the Recording Industry Association of America (RIAA) in 2012 for shipping six million copies in the United States. Many rock publications have ranked "Ride the Lightning" on their best album lists, saying it had a lasting impact on the genre. Metallica released their debut album, "Kill 'Em All", on the independent label Megaforce Records on July 25, 1983. The album helped to establish thrash metal, a heavy metal subgenre defined by its brisk riffs and intense percussion. After finishing its promotional tour, Metallica began composing new material, and from September began performing the songs that would make up "Ride the Lightning" at concerts. Because the band had little money, its members often ate one meal a day and stayed at fans' homes while playing at clubs across the United States. 
An incident occurred when part of Metallica's gear was stolen in Boston, and Anthrax lent Metallica some of its equipment to complete the remaining dates. When not gigging, the band stayed in a rented house in El Cerrito, California, called the Metallica Mansion. Frontman James Hetfield felt uneasy about performing double duty on vocals and rhythm guitar, so the band offered the job to Armored Saint singer John Bush, who turned down the offer because Armored Saint was doing well at the time. Hetfield gradually built confidence as lead vocalist and kept his original role. Metallica started recording on February 20, 1984 at Sweet Silence Studios in Copenhagen, Denmark. The album was produced by Flemming Rasmussen, the founder of Sweet Silence Studios. Drummer Lars Ulrich chose Rasmussen, because he liked his work on Rainbow's "Difficult to Cure" (1981), and was keen to record in Europe. Rasmussen, who had not heard of Metallica, agreed to work on the album, even though his studio employees questioned the band's talent. Rasmussen listened to Metallica's tapes before the members arrived and thought the band had great potential. Metallica rehearsed the album's material at Mercyful Fate's practice room in Copenhagen. Before entering the studio, Metallica collected ideas on "riff tape" recordings of various jam sessions. Hetfield and Ulrich went through the tapes and selected the strongest riffs to assemble into songs. Instruments were recorded separately, with Hetfield playing only rhythm guitar. Rasmussen, with the support of drum roadie Flemming Larsen, taught the basics of timing and beat duration to Ulrich, who had a tendency to increase speed and had little knowledge of rhythm theory. Drums were recorded in an empty warehouse at the back of the studio, which was not soundproof, and caused reverberation. Although four tracks were already arranged, the band members were not used to creating songs in the studio, as they had not done so for "Kill 'Em All". "For Whom the Bell Tolls", "Trapped Under Ice" and "Escape" were written from scratch in Copenhagen, and the band put finishing touches on "Fight Fire with Fire", "Ride the Lightning", "Creeping Death", and "The Call of Ktulu", which were already performed live. Lead guitarist Kirk Hammett took the album's name from a passage in Stephen King's novel "The Stand". The cover art, displaying an electric chair in the midst of lightning bolts, was conceived before recording began. Metallica initially had sound problems, because its gear was stolen three weeks before the band arrived in Copenhagen. The band members slept in the studio by day as they could not afford a hotel and recorded by night, because the studio was booked by other artists during the daytime. Because the group was looking for a major label deal, several A&R representatives from different labels visited the studio. At first, it seemed that Metallica was going to sign with Bronze Records, but the deal fell through, because Bronze executive Gerry Bron did not appreciate the work done at Sweet Silence Studios, and wanted the US edition to be remixed by engineer Eddie Kramer, and even considered re-recording the album in another studio. Metallica was put off by Bron's failure to share the band's artistic vision and decided to look for another label for the US release, in spite of the fact that Bronze had already advertised Metallica as one of their bands. Metallica had to record quickly because of European shows scheduled 29 days after it entered the studio. 
Recording finished on March 14, and Megaforce released the album on July 27. Although the original album budget was $20,000, the final expense was above $30,000. Metallica's European label Music for Nations paid the studio costs because Megaforce owner Jon Zazula could not afford them. Metallica was unhappy with the lack of promotion by Megaforce, and decided to part ways with Zazula. Major label Elektra Records employee Michael Alago noticed Metallica at The Stone gig in San Francisco, and invited Elektra's chairman and the head of promotion to see the August show in New York. The performance at Roseland Ballroom, with Anthrax and Metallica opening for Raven, pleased the Elektra staff, and the band was offered a contract the following morning. On September 12, Metallica signed with Elektra, who re-released the album on November 19. Cliff Burnstein and Peter Mensch of Q Prime were concurrently appointed as the band's new managers. "Ride the Lightning" was the last Metallica album to feature co-writing contributions from former lead guitarist Dave Mustaine, who received credit on the title track and the instrumental "The Call of Ktulu". The album also represented the first time Hammett was given writing credits. Music writers opine that "Ride the Lightning" exhibited greater musical maturity, with sonically broader songs than "Kill 'Em All", which was noted for its one-dimensional sound. This was partially because of bassist Cliff Burton's knowledge of music theory. He showed Hetfield how to augment core notes with complementary counter-melodies and how basic guitar harmony worked, which reflected on the song compositions. Hetfield developed more socially aware lyrics, as well as ominous and semi-philosophical references. Ulrich explained that Metallica opted not to rely strictly on fast tempos as on the previous album, but to explore other musical approaches that sounded powerful and heavy. "Grinder" magazine's Kevin Fisher summarized the album as "ultimate thrash, destruction and total blur" that reminded him of the speed and power of "Kill 'Em All". Music journalist Martin Popoff observed that "Ride the Lightning" offered "sophistication and brutality in equal measure" and was seen as something new at the time of its release. Discussing the album's lyrical content, philosopher William Irwin wrote: "After "Kill 'Em All", the rebellion and aggression became much more focused as the enemy became more clearly defined. Metallica was deeply concerned about various domains in which the common man was wrongfully yet ingeniously deceived. More precisely, they were highly critical of those in power". The major-key acoustic introduction to "Fight Fire with Fire" displayed Metallica's evolution towards a more harmonically complex style of songwriting. The fastest Metallica song in terms of picking speed, it is driven by nimbly tremolo-picked riffs in the verses and chorus. The extended solo at the end dissolves in a sound effect of a vast nuclear explosion. The main riff was taped during the Kill 'Em All Tour and the acoustic intro was something Burton was playing on acoustic guitar at the time. The song discouraged the "eye for an eye" approach, and its lyrical themes focused on nuclear warfare and Armageddon. "Ride the Lightning" was Metallica's first song to emphasize the misery of the criminal justice system. The lyrics were written from the perspective of someone who is anticipating execution by the electric chair. 
The song, one of the two album tracks that credited Mustaine, begins in a mid-tempo which gradually accelerates as the song progress. It features an instrumental middle section highlighted by Hammett's soloing. According to Hetfield, the song was not a criticism of capital punishment, but a tale of a man sentenced to death for a crime he did not commit, as in the opening lyrics: "Guilty as charged/But Damn it/It ain't right". "For Whom the Bell Tolls" begins with a bell tolling, followed by a marching riff and high-register bass melody. The chromatic introduction, which Burton wrote before he joined Metallica, is often mistaken for an electric guitar but is actually Burton's bass guitar augmented with distortion and a wah-wah pedal. The lyrics were inspired by Ernest Hemingway's 1940 novel of the same name, which explores the horror and dishonor of modern warfare. "For Whom the Bell Tolls" was released as a promotional single in two versions, an edit on side A and the album version on side B. "Fade to Black" is a power ballad whose lyrics contemplate suicide. Hetfield wrote the words because he felt powerless after the band's equipment was stolen before the January 1984 show in Boston. Musically, the song begins with an acoustic guitar introduction overlaid with electric soloing. The song becomes progressively heavier and faster, ending with multi-layered guitar solos. The ballad's arpeggiated chords and reserved singing was inconvenient for thrash metal bands at the time and disappointed some of Metallica's fans. The song's structure foreshadows later Metallica ballads, "Welcome Home (Sanitarium)", "One" and "The Day That Never Comes. "Fade to Black" was released as a promotional single in 1984, in glow in the dark green. "Trapped Under Ice" is about a person who wakes from a cryonic state. Realizing there is nowhere to go, and no-one will come to the rescue, the person helplessly awaits impending doom. The song is built on a fast-picked galloping riff, reminiscent of the album's opener. It was inspired by a track Hammett's former band Exodus had demoed called "Impaler", which was later released on that band's 2004 album "Tempo of the Damned". "Escape" was originally titled "The Hammer" and was intended to be released as a single due to its lighter riffs and conventional song structure. The intro features a counterpoint bass melody and a chugging guitar riff that resolves into a standard down-stroked riff. "Escape" is Hetfield's most hated Metallica song, due to it being the result of their record company forcing Metallica to write something more radio friendly. Authors Mick Wall and Malcolm Dome felt the song was influenced by the album-oriented rock of 1970s bands such as Journey and Foreigner, but fans perceived it as an attempt for airplay at rock radio. Metallica performed "Escape" live only once, at the 2012 Orion Music + More festival, while performing "Ride the Lightning" in its entirety. "Creeping Death" describes the (Exodus 12:29). The lyrics deal with the ten plagues visited on Ancient Egypt; four of them are mentioned throughout the song, as well as the Passover. The title was inspired by a scene from "The Ten Commandments" while the band was watching the movie at Burton's house. The bridge, with its chant "Die, by my hand!", was originally written by Hammett for the song "Die by His Hand" while he was playing in Exodus, who recorded it as a demo but did not feature it on a studio album. 
Journalist Joel McIver called the song a "moshpit anthem" due to its epic lyrical themes and dramatic atmosphere. "Creeping Death" was released as a single with a B-side titled "Garage Days Revisited" made up of covers of Diamond Head's "Am I Evil?" and Blitzkrieg's "Blitzkrieg". "The Call of Ktulu", tentatively titled "When Hell Freezes Over", was inspired by H. P. Lovecraft's book "The Shadow over Innsmouth", which was introduced to the rest of the band by Burton. The title was taken from one of Lovecraft's key stories featuring Cthulhu, "The Call of Cthulhu", although the original name was modified to "Ktulu" for easier pronunciation. The track begins with D minor chord progression in the intro, followed by a two-minute bass solo over a rhythmic riff pattern. Conductor Michael Kamen rearranged the piece for Metallica's 1999 "S&M" project and won a Grammy Award for Best Rock Instrumental Performance in 2001. "Ride the Lightning" received widespread acclaim from music critics. According to "Q" magazine, the album confirmed Metallica's status as the leading heavy metal band of the modern era. The magazine credited the group for redefining the norms of thrash metal with "Fade to Black", the genre's first power ballad. British rock magazine "Kerrang!" stated that the album's maturity and musical intelligence helped Metallica expand heavy metal's boundaries. Greg Kot of the "Chicago Tribune" described "Ride the Lightning" as a more refined extension of the group's debut. In a retrospective review, Sputnikmusic's Channing Freeman named it as one of the few albums that can be charming and powerful at the same time. He praised Hetfield's vocal performance and concluded that Metallica was "firing on all cylinders". AllMusic's Steve Huey saw the album as a more ambitious and remarkable effort than "Kill 'Em All". He called "Ride the Lightning" an "all-time metal classic" because of the band's rich musical imagination and lyrics that avoided heavy metal cliches. "The Rolling Stone Album Guide" viewed the album as a great step forward for the band and as an album that established the concept for Metallica's following two records. Colin Larkin, writing in the "Encyclopedia of Popular Music", singled out "For Whom the Bell Tolls" as an example of Metallica's growing music potential. Popoff regards "Ride the Lightning" as an album where "extreme metal became art". "This literally was the first album since (Judas Priest 1976's) "Sad Wings of Destiny" where the rulebook has changed. This was a new kind of heaviness; the soft, billowy but explosive production was amazing, the speed was superhuman", stated Popoff. Reviewing the 2016 reissue, Jason Anderson of "Uncut" considers "Ride the Lightning" the second best Metallica album which set the pace for metal in the years to come. Megaforce initially pressed 75,000 copies of the album for the US market, while Music for Nations took care of the European market. By the autumn of 1984, "Ride the Lightning" had moved 85,000 copies in Europe, resulting in Metallica's first cover story for "Kerrang!" in its December issue. After signing Metallica, Elektra released the single "Creeping Death" in a sleeve depicting a bridge and a skull painted grey and green. The album peaked at number 100 on the "Billboard" 200 with no radio exposure. In 1984, the French record label Bernett Records misprinted the color of the album cover in green, rather than blue, and 400 copies with the green cover were produced. 
Because of their rarity, these green albums have become collectors' items. "Ride the Lightning" went gold by November 1987 and in 2012 was certified 6× platinum by the Recording Industry Association of America (RIAA) for shipping six million copies in the US. The album, along with "Kill 'Em All", was reissued in 2016 as a boxed set including demos and live recordings. Many rock publications have ranked "Ride the Lightning" on their best album lists. The album placed fifth on IGN Music's "Top 25 Metal Albums" list. "Spin" listed it as a thrash metal essential, declaring it "the thrashiest thrash ever". According to "Guitar World", "Ride the Lightning" "didn't just change the band's trajectory—it reset the course of metal itself". Corey Deiterman of the "Houston Press" considers "Ride the Lightning" the most influential Metallica album, saying it had a lasting impact on genres such as crossover thrash and hardcore punk. In 2017, it was ranked 11th on "Rolling Stone" list of "100 Greatest Metal Albums of All Time". After recording was completed, Music for Nations founder Martin Hooker wanted to arrange a triple bill UK tour in March / April 1984 with Exciter, Metallica, and The Rods. The Hell on Earth Tour never materialized because of poor ticket sales. To promote "Ride the Lightning", Metallica commenced the Bang That Head That Doesn't Bang European tour on November 16, in Rouen, France, with British NWOBHM band Tank as support. The tour continued with dates in Belgium, Italy, Germany, and the Nordic countries to an average crowd of 1,300. After a Christmas break, the group embarked on a 50-date North American tour, firstly as a co-headlining act with W.A.S.P. and then as headliners with Armored Saint supporting. At a gig in Portland, Oregon, Metallica covered "The Money Will Roll Right In" by Fang, with Armored Saint onstage. The American leg ended in May 1985, and the band spent the following two months working on the next studio album, "Master of Puppets", whose recording sessions were scheduled to begin in September. Metallica performed at the Monsters of Rock festival held at Castle Donington in England on August 17 in front of 70,000 fans. The band was placed between Ratt and Bon Jovi, two glam metal groups whose sound and appearance were much unlike Metallica's. At the start of the set, Hetfield pronounced to the audience: "If you came here to see spandex, eye make-up, and the words 'oh baby' in every fuckin' song, this ain't the fuckin' band!" Two weeks later, Metallica appeared on the Day on the Green festival in Oakland, California, before 90,000 people. The last show Metallica played before recording began was the Loreley Metal Hammer festival in Germany, headlined by Venom. Metallica finished 1985 with a show at the Sacramento Memorial Auditorium on December 29 opening for Y&T, and a New Year's Eve concert at the Civic Auditorium in San Francisco on a bill with Metal Church, Exodus, and Megadeth, the first time Metallica and Megadeth shared a stage. At this gig, Metallica premiered "Master of Puppets" and "Disposable Heroes", songs from the then-upcoming third studio album. All lyrics written by James Hetfield (Kirk Hammett also contributed to lyrics for "Creeping Death"). The bonus tracks on the digital re-release were recorded live at the Seattle Coliseum, Seattle, Washington on August 29 and 30, 1989, and later appeared on the live album "" (1993). Credits are adapted from the album's liner notes. 
Metallica Production Packaging Metallica Metallica is an American heavy metal band. The band was formed in 1981 in Los Angeles, California by drummer Lars Ulrich and vocalist/guitarist James Hetfield, and has been based in San Francisco, California for most of its career. The group's fast tempos, instrumentals and aggressive musicianship made them one of the founding "big four" bands of thrash metal, alongside Megadeth, Anthrax and Slayer. Metallica's current lineup comprises founding members Hetfield and Ulrich, longtime lead guitarist Kirk Hammett and bassist Robert Trujillo. Guitarist Dave Mustaine (who went on to form Megadeth) and bassists Ron McGovney, Cliff Burton and Jason Newsted are former members of the band. Metallica earned a growing fan base in the underground music community and won critical acclaim with its first five albums. The band's third album, "Master of Puppets" (1986), was described as one of the heaviest and most influential thrash metal albums; its eponymous fifth album, "Metallica" (1991), the band's first to root predominantly in heavy metal, appealed to a more mainstream audience, achieving substantial commercial success and selling over 16 million copies in the United States to date, making it the best-selling album of the SoundScan era. After experimenting with different genres and directions in subsequent releases, the band returned to its thrash metal roots with the release of its ninth album, "Death Magnetic" (2008), which drew similar praise to that of the band's earlier albums. In 2000, Metallica led the case against the peer-to-peer file sharing service Napster, in which the band and several other artists filed lawsuits against the service for sharing their copyright-protected material without consent; after reaching a settlement, Napster became a pay-to-use service in 2003. Metallica was the subject of the acclaimed 2004 documentary film "Some Kind of Monster", which documented the troubled production of the band's eighth album, "St. Anger" (2003), and the internal struggles within the band at the time. In 2009, Metallica was inducted into the Rock and Roll Hall of Fame. The band wrote the screenplay for and starred in the 2013 IMAX concert film "", in which the band performed live against a fictional thriller storyline. Metallica has released ten studio albums, four live albums, a cover album, five extended plays, 37 singles and 39 music videos. The band has won nine Grammy Awards from 23 nominations, and its last six studio albums (beginning with "Metallica") have consecutively debuted at number one on the "Billboard" 200. Metallica ranks as one of the most commercially successful bands of all time, having sold over 125 million albums worldwide as of 2018. Metallica has been listed as one of the greatest artists of all time by magazines such as "Rolling Stone", which ranked them at no. 61 on its "100 Greatest Artists of All Time" list. As of 2017, Metallica is the third best-selling music artist since Nielsen SoundScan began tracking sales in 1991, selling a total of 58 million albums in the United States. Metallica was formed in Los Angeles, California, in late 1981 when Danish-born drummer Lars Ulrich placed an advertisement in a Los Angeles newspaper, "The Recycler", which read, "Drummer looking for other metal musicians to jam with Tygers of Pan Tang, Diamond Head and Iron Maiden." Guitarists James Hetfield and Hugh Tanner of Leather Charm answered the advertisement. 
Although he had not formed a band, Ulrich asked Metal Blade Records founder Brian Slagel if he could record a song for the label's upcoming compilation album, "Metal Massacre". Slagel accepted, and Ulrich recruited Hetfield to sing and play rhythm guitar. The band was officially formed on October 28, 1981, five months after Ulrich and Hetfield first met. Ulrich talked to his friend Ron Quintana, who was brainstorming names for a fanzine. Quintana had proposed the names MetalMania and Metallica. Ulrich named his band Metallica. A second advertisement was placed in "The Recycler" for a position as lead guitarist. Dave Mustaine answered; Ulrich and Hetfield recruited him after seeing his expensive guitar equipment. In early 1982, Metallica recorded its first original song, "Hit the Lights", for the "Metal Massacre I" compilation. Hetfield played bass on the song, and Lloyd Grant was credited with a guitar solo. "Metal Massacre I" was released on June 14, 1982; early pressings listed the band incorrectly as "Mettallica". Although angered by the error, Metallica created enough "buzz" with the song, and the band played its first live performance on March 14, 1982, at Radio City in Anaheim, California, with newly recruited bassist Ron McGovney. The band's first taste of live success came early; they were chosen to open for British heavy metal band Saxon at one gig of their 1982 US tour. This was Metallica's second gig. Metallica recorded its first demo, "Power Metal", whose name was inspired by Quintana's early business cards in early 1982. The term "thrash metal" was coined by "Kerrang!" journalist Malcolm Dome in reference to Anthrax's song "Metal Thrashing Mad" in "Kerrang!" issue 62, published on February 23, 1984. Prior to this, James Hetfield referred to Metallica's sound as "power metal". In late 1982, Ulrich and Hetfield attended a show at the West Hollywood nightclub Whisky a Go Go, which featured bassist Cliff Burton in a band named Trauma. The two were "blown away" by Burton's use of a wah-wah pedal and asked him to join Metallica. Hetfield and Mustaine wanted McGovney to leave because they thought he "didn't contribute anything, he just followed". Although Burton initially declined the offer, by the end of the year, he had accepted on the condition the band move to El Cerrito in the San Francisco Bay Area. Metallica's first live performance with Burton was at the nightclub The Stone in March 1983, and the first recording to feature Burton was the "Megaforce" demo (1983). Metallica was ready to record their debut album, but when Metal Blade was unable to cover the cost, the band began looking for other options. Concert promoter Johny "Z" Zazula, who had heard the demo "No Life 'til Leather" (1982), offered to broker a record deal between Metallica and New York City-based record labels. After those record labels showed no interest, Zazula borrowed enough money to cover the recording budget and signed Metallica to his own label, Megaforce Records. In May 1983, Metallica traveled to Rochester, New York to record its debut album, "Metal Up Your Ass", which was produced by Paul Curcio. The other members decided to eject Mustaine from the band because of his drug and alcohol abuse, and violent behavior just before the recording sessions on April 11, 1983. Exodus guitarist Kirk Hammett replaced Mustaine the same afternoon. Mustaine, who went on to found Megadeth, has expressed his dislike for Hammett in interviews, saying Hammett "stole" his job. 
Mustaine was "pissed off" because he believes Hammett became popular by playing guitar leads that Mustaine himself had written. In a 1985 interview with "Metal Forces", Mustaine said, "it's real funny how Kirk Hammett ripped off every lead break I'd played on that "No Life 'til Leather" tape and got voted No. 1 guitarist in your magazine". On Megadeth's debut album "Killing Is My Business... and Business Is Good!" (1985), Mustaine included the song "Mechanix", which Metallica had previously reworked and retitled "The Four Horsemen" on "Kill 'Em All". Mustaine said he did this to "straighten Metallica up" because Metallica referred to Mustaine as a drunk and said he could not play guitar. Metallica's first live performance with Hammett was on April 16, 1983, at a nightclub in Dover, New Jersey called The Showplace; the support act was Anthrax's original line-up, which included Dan Lilker and Neil Turbin. This was the first time the two bands performed live together. Because of conflicts with its record label and the distributors' refusal to release an album titled "Metal Up Your Ass", the album was renamed "Kill 'Em All". It was released on Megaforce Records in the U.S. and on Music for Nations in Europe, and peaked at number 155 on the "Billboard" 200 in 1986. Although the album was not initially a financial success, it earned Metallica a growing fan base in the underground metal scene. To support the release, Metallica embarked on the Kill 'Em All for One tour with Raven. In February 1984, Metallica supported Venom on the Seven Dates of Hell tour, during which the bands performed in front of 7,000 people at the Aardschok Festival in Zwolle, Netherlands. Metallica recorded its second studio album, "Ride the Lightning", at Sweet Silence Studios in Copenhagen, Denmark. It was released in August 1984 and reached number 100 on the "Billboard" 200. A French printing press mistakenly printed green covers for the album, which are now considered collectors' items. Mustaine received writing credit for "Ride the Lightning" and "The Call of Ktulu". Elektra Records A&R director Michael Alago, and co-founder of Q-Prime Management Cliff Burnstein, attended a Metallica concert in September 1984. They were impressed with the performance, signed Metallica to Elektra, and made the band as a client of Q-Prime Management. Metallica's growing success was such that the band's British label Music for Nations released "Creeping Death" as a limited edition single, which sold 40,000 copies as an import in the U.S. Two of the three songs on the recordcover versions of Diamond Head's "Am I Evil?" and Blitzkrieg's "Blitzkrieg"appeared on the 1988 Elektra reissue of "Kill 'Em All". Metallica embarked on its first major European tour with Tank to an average crowd of 1,300. Returning to the U.S., it embarked upon a tour co-headlining with W.A.S.P. and supported by Armored Saint. Metallica played its largest show at the Monsters of Rock festival at Donington Park, England, on August 17, 1985, with Bon Jovi and Ratt, playing to 70,000 people. At a show in Oakland, California, at the Day on the Green festival, the band played to a crowd of 60,000. Metallica's third studio album, "Master of Puppets", was recorded at Sweet Silence Studios and was released in March 1986. The album reached number 29 on the "Billboard" 200 and spent 72 weeks on the chart. It was the band's first album to be certified gold on November 4, 1986, and was certified six times platinum in 2003. 
Steve Huey of AllMusic considered the album "the band's greatest achievement". Following the release of the album, Metallica supported Ozzy Osbourne on a U.S. tour. Hetfield broke his wrist while skateboarding; he continued with the tour, performing vocals, with guitar technician John Marshall playing rhythm guitar. On September 27, 1986, during the European leg of Metallica's Damage, Inc. Tour, members drew cards to determine which bunks on the tour bus they would sleep in. Burton won and chose to sleep in Hammett's bunk. At around sunrise near Dörarp, Sweden, the bus driver lost control and skidded, which caused the bus to overturn several times. Ulrich, Hammett, and Hetfield sustained no serious injuries; however, bassist Burton was pinned under the bus and died. Hetfield said: I saw the bus lying right on him. I saw his legs sticking out. I freaked. The bus driver, I recall, was trying to yank the blanket out from under him to use for other people. I just went, 'Don't fucking do that!' I already wanted to kill the [bus driver]. I don't know if he was drunk or if he hit some ice. All I knew was, he was driving and Cliff wasn't alive anymore. Burton's death left Metallica's future in doubt. The three remaining members decided Burton would want them to carry on, and with the Burton family's blessings the band sought a replacement. Roughly 40 people, including Hammett's childhood friend, Les Claypool of Primus, Troy Gregory of Prong, and Jason Newsted, formerly of Flotsam and Jetsam, auditioned for the band. Newsted learned Metallica's entire set list; after the audition Metallica invited him to Tommy's Joynt in San Francisco. Hetfield, Ulrich, and Hammett decided on Newsted as Burton's replacement; Newsted's first live performance with Metallica was at the Country Club in Reseda, California. The members initiated Newsted by tricking him into eating a ball of wasabi. After Newsted joined Metallica, the band left its El Cerrito practice spacea suburban house formerly rented by sound engineer Mark Whitaker dubbed "the Metalli-mansion"and relocated to the adjacent cities of Berkeley and Albany before eventually settling in the Marin County city of San Rafael, north of San Francisco. Metallica finished its tour in the early months of 1987. In March 1987, Hetfield again broke his wrist while skateboarding, forcing the band to cancel an appearance on "Saturday Night Live". In August 1987, an all-covers extended play (EP) titled "" was released. The EP was recorded in an effort to use the band's newly constructed recording studio, test Newsted's talents, and to relieve grief and stress following the death of Burton. A video titled "Cliff 'Em All" commemorating Burton's three years in Metallica was released in 1987; the video included bass solos, home videos, and pictures. Metallica's first studio album since Burton's death, "...And Justice for All", was released in 1988. The album was a commercial success, reaching number six on the "Billboard" 200, and was the band's first album to enter the top 10. The album was certified platinum nine weeks after its release. There were complaints about the production; Steve Huey of AllMusic said Ulrich's drums were clicking more than thudding, and the guitars "buzz thinly". To promote the album, Metallica embarked on a tour called Damaged Justice. In 1989, Metallica received its first Grammy Award nomination for "...And Justice for All" in the new Best Hard Rock/Metal Performance Vocal or Instrument category. 
Metallica was the favorite to win but the award was given to Jethro Tull for the album "Crest of a Knave". The award was controversial with fans and the press; Metallica was standing off-stage waiting to receive the award after performing the song "One". Jethro Tull had been advised by its manager not to attend the ceremony because he was expecting Metallica to win. The award was named in "Entertainment Weekly" "Grammy's 10 Biggest Upsets". Following the release of "...And Justice for All", Metallica released its debut music video for the song "One", which the band performed in an abandoned warehouse. The footage was remixed with the film "Johnny Got His Gun". Rather than organize an ongoing licensing deal, Metallica purchased the rights to the film. The remixed video was submitted to MTV with an alternative, performance-only version that was held back in case MTV banned the remixed version. MTV accepted the remixed version; the video was viewers' first exposure to Metallica. In 1999, it was voted number 38 in MTV's "Top 100 Videos of All Time" countdown; it was featured in the network's 25th Anniversary edition of "ADD Video", which showcased the most popular videos on MTV in the last 25 years. In October 1990, Metallica entered One on One Recording's studio in North Hollywood to record its next album. Bob Rock, who had worked with Aerosmith, The Cult, Bon Jovi, and Mötley Crüe, was hired as the producer. "Metallica"also known as "The Black Album"was remixed three times, cost , and ended three marriages. Although the release was delayed until 1991, "Metallica" debuted at number one in ten countries, selling 650,000 units in the U.S. during its first week. The album brought Metallica mainstream attention; it has been certified 16 times platinum in the U.S., which makes it the 25th-best-selling album in the country. The making of "Metallica" and the following tour was documented in "A Year and a Half in the Life of Metallica". The tour in support of the album, called the Wherever We May Roam Tour, lasted 14 months and included dates in the U.S., Japan, and the UK. In April 1992, Metallica appeared at The Freddie Mercury Tribute Concert and performed a three-song set. Hetfield later performed "Stone Cold Crazy" with the remaining members of Queen and Tony Iommi. On August 8, 1992, during the co-headlining Guns N' Roses/Metallica Stadium Tour, Hetfield suffered second and third degree burns to his arms, face, hands, and legs. There had been some confusion with the new pyrotechnics setup, which resulted in Hetfield walking into a flame during "Fade to Black". Newsted said Hetfield's skin was "bubbling like on "The Toxic Avenger"". Metallica returned to the stage 17 days later with guitar technician and Metal Church member John Marshall replacing Hetfield on guitar for the remainder of the tour, although Hetfield was able to sing. Later in 1993, Metallica went on the Nowhere Else to Roam Tour, playing five shows in Mexico City. "", the band's first box set, was released in November 1993. The collection contained three live CDs, three home videos, and a book filled with riders and letters. After almost three years of touring to promote the album "Metallica", including a headlining performance at Woodstock '94, Metallica returned to the studio to write and record its sixth studio album. 
The band went on a brief hiatus in the summer of 1995 and played three outdoor shows that included headlining at Donington Park, where it was supported by Slayer, Skid Row, Slash's Snakepit, Therapy?, and Corrosion of Conformity. The short tour was titled Escape from the Studio '95. The band spent about a year writing and recording new songs, resulting in the release of "Load" in 1996. "Load" debuted at number one on the "Billboard" 200 and ARIA Charts; it was the band's second number one album. The cover art of "Load", called "Blood and Semen III", was created by Andres Serrano, who pressed a mixture of his own semen and blood between sheets of plexiglass. The release marked a change in the band's musical direction and a new image; band members' hair was cut. Metallica headlined the alternative rock festival Lollapalooza festival in mid-1996. During early production of the album, the band had recorded enough material to fill a double album. It was decided that half of the songs were to be released; the band would continue to work on the remaining songs and release them the following year. This resulted in the follow-up album, "Reload". The cover art was again created by Serrano, this time using a mixture of blood and urine. "Reload" debuted at number one on the "Billboard" 200 and reached number two on the Top Canadian Album chart. Hetfield said in the 2004 documentary film "Some Kind of Monster" that the band initially thought some of the songs on these albums were of average quality; these were "polished and reworked" until judged to be releasable. To promote "Reload", Metallica performed "Fuel" and "The Memory Remains" with Marianne Faithfull on NBC's "Saturday Night Live" in December 1997. In 1998, Metallica compiled a double album of cover songs titled "Garage Inc." The first disc contained newly recorded covers of songs by Diamond Head, Killing Joke, the Misfits, Thin Lizzy, Mercyful Fate, Black Sabbath, and others. The second disc featured the original version of "The $5.98 E.P.: Garage Days Re-Revisited", which had become a scarce collectors' item. The album entered the "Billboard" 200 at number two. On April 21 and 22, 1999, Metallica recorded two performances with the San Francisco Symphony conducted by Michael Kamen, who had previously worked with producer Rock on "Nothing Else Matters". Kamen approached Metallica in 1991 with the idea of pairing the band's music with a symphony orchestra. Kamen and his staff of over 100 composed additional orchestral material for Metallica songs. Metallica wrote two new Kamen-scored songs for the event, "No Leaf Clover" and "-Human". The audio recording and concert footage were released in 1999 as the album and concert film "S&M". It entered the "Billboard" 200 at number two, and the Australian ARIA charts and Top Internet Albums chart at number one. In 2000, Metallica discovered that a demo of its song "I Disappear", which was supposed to be released in combination with the , was receiving radio airplay. Tracing the source of the leak, the band found the file on the Napster peer-to-peer file-sharing network, and also found that the band's entire catalogue was freely available. Legal action was initiated against Napster; Metallica filed a lawsuit at the U.S. District Court, Central District of California, alleging that Napster violated three areas of the law: copyright infringement, unlawful use of digital audio interface device, and the Racketeer Influenced and Corrupt Organizations Act (RICO). 
Ulrich provided a statement to the Senate Judiciary Committee regarding copyright infringement on July 11, 2000. Federal Judge Marilyn Hall Patel ordered the site to place a filter on the program within 72 hours or be shut down. A settlement between Metallica and Napster was reached when German media conglomerate Bertelsmann BMG showed interest in purchasing the rights to Napster for $94 million. Under the terms of settlement, Napster agreed to block users who shared music by artists who do not want their music shared. On June 3, 2002, Napster filed for Chapter 11 protection under U.S. bankruptcy laws. On September 3, 2002, an American bankruptcy judge blocked the sale of Napster to Bertelsmann and forced Napster to liquidate its assets according to Chapter 7 of the U.S. bankruptcy laws. At the 2000 MTV Video Music Awards, Ulrich appeared with host Marlon Wayans in a skit that criticized the idea of using Napster to share music. Marlon played a college student listening to Metallica's "I Disappear". Ulrich walked in and asked for an explanation. Ulrich responded to Wayans' excuse that using Napster was just "sharing" by saying that Wayans' idea of sharing was "borrowing things that were not yours without asking". He called in the Metallica road crew, who proceeded to confiscate all of Wayans' belongings, leaving him almost naked in an empty room. Napster creator Shawn Fanning responded later in the ceremony by presenting an award wearing a Metallica shirt, saying, "I borrowed this shirt from a friend. Maybe, if I like it, I'll buy one of my own." Ulrich was later booed on stage at the award show when he introduced the final musical act, Blink-182. Newsted left Metallica on January 17, 2001, as plans were being made to enter the recording studio. He said he left the band for "private and personal reasons, and the physical damage I have done to myself over the years while playing the music that I love". During a "Playboy" interview with Metallica, Newsted said he wanted to release an album with his side project, Echobrain. Hetfield was opposed to the idea and said, "When someone does a side project, it takes away from the strength of Metallica", and that a side project is "like cheating on your wife in a way". Newsted said Hetfield had recorded vocals for a song used in the film "", and appeared on two Corrosion of Conformity albums. Hetfield replied, "My name isn't on those records. And I'm not out trying to sell them", and pondered questions such as, "Where would it end? Does he start touring with it? Does he sell shirts? Is it his band?" In April 2001, filmmakers Joe Berlinger and Bruce Sinofsky began following Metallica to document the recording process of the band's next studio album. Over two years they recorded more than 1,000 hours of footage. On July 19, 2001, before preparations to enter the recording studio, Hetfield entered rehab to treat his "alcoholism and other addictions". All recording plans were put on hold and the band's future was in doubt. Hetfield left rehab on December 4, 2001, and the band returned to the recording studio on April 12, 2002. Hetfield was required to limit his work to four hours a day between noon and 4 pm, and to spend the rest of his time with his family. The footage recorded by Berlinger and Sinofsky was compiled into the documentary "Some Kind of Monster", which premiered at the Sundance Film Festival in January 2004. 
In the documentary, Newsted said his former bandmates' decision to hire a therapist to help solve their problems which he felt they could have solved on their own was "really fucking lame and weak". For the duration of the recording period, producer Bob Rock played bass on the album and in several live shows at which Metallica performed during that time. Once the record was completed in early 2003, the band started to hold auditions for Newsted's permanent replacement. Bassists Pepper Keenan, Jeordie White, Scott Reeder, Eric Avery, Danny Lohner, and Chris Wyseamong othersauditioned for the role. After three months of auditions, Robert Trujillo, formerly of Suicidal Tendencies and Ozzy Osbourne's band, was chosen as the new bassist. As Metallica moved on, Newsted joined Canadian thrash metal band Voivod in 2002, and was Trujillo's replacement in Osbourne's band during the 2003 Ozzfest tour, which included Voivod. In June 2003, Metallica's eighth studio album, "St. Anger", debuted at number one on the "Billboard" 200, and drew mixed reactions from critics. Ulrich's "steely" sounding snare drum and the absence of guitar solos received particular criticism. Kevin Forest Moreau of "Shakingthrough.net" said, "the guitars stumble in a monotone of mid-level, processed rattle; the drums don't propel as much as struggle to disguise an all-too-turgid pace; and the rage is both unfocused and leavened with too much narcissistic navel-gazing". Brent DiCrescenzo of "Pitchfork" described it as "an utter mess". However, "Blender" magazine called it the "grimiest and grimmest of the band's Bob Rock productions", and "New York Magazine" called it "utterly raw and rocking". The title track, "St. Anger", won the Grammy Award for Best Metal Performance in 2004; it was used as the official theme song for WWE's "SummerSlam 2003". Before the band's set at the 2004 Download Festival, Ulrich was rushed to the hospital after having an anxiety seizure and was unable to perform. Hetfield searched for last-minute volunteers to replace Ulrich. Slayer drummer Dave Lombardo and Slipknot drummer Joey Jordison volunteered. Lombardo performed "Battery" and "The Four Horsemen", Ulrich's drum technician Flemming Larsen performed "Fade to Black", and Jordison performed the remainder of the set. Having toured for two years in support of "St. Anger" on the Summer Sanitarium Tour 2003 and the Madly in Anger with the World Tour, with multi-platinum rock band Godsmack in support, Metallica took a break from performing and spent most of 2005 with friends and family. The band opened for The Rolling Stones at AT&T Park in San Francisco on November 13 and 15, 2005. In December 2006, Metallica released a DVD titled "The Videos 1989–2004", which sold 28,000 copies in its first week and entered the "Billboard" Top Videos chart at number three. Metallica recorded a guitar-based interpretation of Ennio Morricone's "The Ecstasy of Gold" for a tribute album titled "We All Love Ennio Morricone", which was released in February 2007. The track received a Grammy nomination at the 50th Grammy Awards for the category "Best Rock Instrumental Performance". A recording of "The Ecstasy of Gold" has been played to introduce Metallica's performances since the 1980s. Earlier that year, Metallica announced on its official website that after 15 years, long-time producer Bob Rock would not be producing the band's next studio album. Instead, the band chose to work with producer Rick Rubin. 
Metallica scheduled the release of "Death Magnetic" as September 12, 2008, and the band filmed a music video for the album's first single, "The Day That Never Comes". On September 2, 2008, a record store in France began selling copies of "Death Magnetic" nearly two weeks before its scheduled worldwide release date, which resulted in the album being made available on peer-to-peer clients. This prompted the band's UK distributor Vertigo Records to officially release the album on September 10, 2008. Rumors of Metallica or Warner Bros. taking legal action against the French retailer were unconfirmed, though drummer Lars Ulrich responded to the leak by saying, "...We're ten days from release. I mean, from here, we're golden. If this thing leaks all over the world today or tomorrow, happy days. Happy days. Trust me", and, "By 2008 standards, that's a victory. If you'd told me six months ago that our record wouldn't leak until 10 days out, I would have signed up for that." "Death Magnetic" debuted at number one in the U.S. selling 490,000 units; Metallica became the first band to have five consecutive studio albums debut at number one in the history of the "Billboard" 200. A week after its release, "Death Magnetic" remained at number one on the "Billboard" 200 and the European album chart; it also became the fastest selling album of 2008 in Australia. "Death Magnetic" remained at number one on the "Billboard" 200 album chart for three consecutive weeks. Metallica was one of two artists whose albumthe other being Jack Johnson's album "Sleep Through the Static"remained on the "Billboard" 200 for three consecutive weeks at number one in 2008. "Death Magnetic" also remained at number one on "Billboard"'s Hard Rock, Modern Rock/Alternative and Rock album charts for five consecutive weeks. The album reached number one in 32 countries outside the U.S., including the UK, Canada, and Australia. In November 2008, Metallica's record deal with Warner Bros. ended and the band considered releasing its next album through the internet. On January 14, 2009, it was announced that Metallica would be inducted into the Rock and Roll Hall of Fame on April 4, 2009, and that former bassist Jason Newstedwho left the band in 2001would perform with the band at the ceremony. Initially, it was announced that the matter had been discussed and that bassist Trujillo had agreed not to play because he "wanted to see the Black Album band". However, during the band's set of "Master of Puppets" and "Enter Sandman", both Trujillo and Newsted were on stage. Ray Burton, father of the late Cliff Burton, accepted the honor on his behalf. Although he was not to be inducted with them, Metallica invited Dave Mustaine to take part in the induction ceremony. Mustaine declined because of his touring commitments in Europe. Metallica, Slayer, Megadeth, and Anthrax performed on the same bill for the first time on June 16, 2010, at Warsaw Babice Airport, Warsaw, as a part of the Sonisphere Festival series. The show in Sofia, Bulgaria, on June 22, 2010, was broadcast via satellite to cinemas. The bands also played concerts in Bucharest on June 26, 2010, and Istanbul on June 27, 2010. On June 28, 2010, "Death Magnetic" was certified double platinum by the RIAA. Metallica's World Magnetic Tour ended in Melbourne on November 21, 2010. The band had been touring for over two years in support of "Death Magnetic". 
To accompany the final tour dates in Australia and New Zealand, a live, limited edition EP of past performances in Australia called "Six Feet Down Under" was released. The EP was followed by "Six Feet Down Under (Part II)", which was released on November 12, 2010. Part 2 contains a further eight songs recorded during the first two Oceanic Legs of the World Magnetic Tour. On November 26, 2010, Metallica released a live EP titled "Live at Grimey's", which was recorded in June 2008 at Grimey's Record Store, just before the band's appearance at Bonnaroo Music Festival that year. In a June 2009 interview with Italy's Rock TV, Ulrich said Metallica was planning to continue touring until August 2010, and that there were no plans for a tenth album. He said he was sure the band would collaborate with producer Rick Rubin again. According to Blabbermouth.net, the band was considering recording its next album in the second half of 2011. In November 2010, during an interview with The Pulse of Radio, Ulrich said Metallica would return to writing in 2011. Ulrich said, "There's a bunch of balls in the air for 2011, but I think the main one is we really want to get back to writing again. We haven't really written since, what, '06, '07, and we want to get back to kind of just being creative again. Right now we are going to just chill out and then probably start up again in, I'd say, March or April, and start probably putting the creative cap back on and start writing some songs." In an interview at the April 2011 Big Four concert, Robert Trujillo said Metallica will work with Rick Rubin again as producer for the new album and were "really excited to write some new music. There's no shortage of riffage in Metallica world right now." He added, "The first album with Rick was also the first album for me, so in a lot of ways, you're kind of testing the water. Now that we're comfortable with Rick and his incredible engineer, Greg Fidelman, who worked with Slayer, actually, on this last recordit's my heroit's a great team. And it's only gonna better; I really believe that. So I'm super-excited." In June 2011, Rubin said Metallica had begun writing its new album. On November 9, 2010, Metallica announced it would be headlining the Rock in Rio festival in Rio de Janeiro on September 25, 2011. On December 13, 2010, the band announced it would again play as part of the "big four" during the Sonisphere Festival at Knebworth House, Hertfordshire, on July 8, 2011. It was the first time all of the "big four" members played on the same stage in the UK. On December 17, 2010, Another "big four" Sonisphere performance that would take place in France on July 9 was announced. On January 25, 2011, another "big four" performance on April 23, 2011, at the Empire Polo Club in Indio, California, was announced. It was the first time all of the "big four" members played on the same stage in the U.S. On February 17, 2011, a show in Gelsenkirchen, Germany, on July 2, 2011, was announced. On February 22, a "big four" show in Milan on July 6, 2011, was announced. On March 2, 2011, another "big four" concert, which took place in Gothenburg on July 3, 2011, was announced. The final "big four" concert was in New York City, at Yankee Stadium, on September 14, 2011. On June 15, 2011, Metallica announced that recording sessions with singer-songwriter Lou Reed had concluded. The album, which was titled "Lulu", was recorded over several months and comprised ten songs based on Frank Wedekind's "Lulu" plays "Earth Spirit" and "Pandora's Box". 
The album was released on October 31, 2011. The recording of the album was problematic at times; Lars Ulrich later said Lou Reed challenged him to a "street fight". On October 16, 2011, Robert Trujillo confirmed that the band was back in the studio and writing new material. He said, "The writing process for the new Metallica album has begun. We've been in the studio with Rick Rubin, working on a couple of things, and we're going to be recording during the most of next year." Metallica was due to make its first appearance in India at the "India Rocks" concert, supporting the 2011 Indian Grand Prix. However, the concert was canceled when the venue was proven to be unsafe. Fans raided the stage during the event and the organizers were later arrested for fraud. Metallica made its Indian debut in Bangalore on October 30, 2011. On November 10, it was announced that Metallica would headline the main stage on Saturday June 9, 2012, at the Download Festival at Donington Park and that the band would play "The Black Album" in its entirety. Metallica celebrated its 30th anniversary by playing four shows at the Fillmore in San Francisco in December 2011. The shows were exclusive to Met Club members and tickets were charged at $6 each or $19.81 for all four nights. The shows consisted of songs from the band's career and featured guest appearances by artists who had either helped or had influenced Metallica. These shows were notable because Lloyd Grant, Dave Mustaine, Jason Newsted, Glenn Danzig, Ozzy Osbourne, Jerry Cantrell, Apocalyptica, members of Diamond Head, and King Diamond joined Metallica on stage for all appropriate songs. In December 2011, Metallica began releasing songs that were written for "Death Magnetic" but were not included on the album online. On December 13, 2011, the band released "Beyond Magnetic", a digital EP release exclusively on iTunes. It was released on CD in January 2012. On February 7, 2012, Metallica announced that it would start a new music festival called Orion Music + More, which took place on June 23 and 24, 2012, in Atlantic City. Metallica also confirmed that it would headline the festival on both days and would perform two of its most critically acclaimed albums in their entirety: "The Black Album" on one night, and "Ride the Lightning" on the other. In a July 2012 interview with Canadian radio station 99.3 The Fox, Ulrich said Metallica would not release its new album until at least early 2014. In November 2012, Metallica left Warner Bros. Records and launched an independent record label, Blackened Recordings, which will produce the band's future releases. The band has acquired the rights to all of its studio albums, which will be reissued through the new label. Blackened releases will be licensed through Warner subsidiary Rhino Entertainment in North America and internationally through Universal Music. On September 20, 2012, Metallica announced via its official website that a new DVD containing footage of shows it performed in Quebec in 2009 would be released that December; fans would get the chance to vote for two setlists that would appear on the DVD. The film, titled "Quebec Magnetic", was released in the U.S. on December 10, 2012. In an interview with "Classic Rock" on January 8, 2013, Ulrich said regarding the band's upcoming album, "What we're doing now certainly sounds like a continuation [of "Death Magnetic"]". He also said, "I love Rick [Rubin]. We all love Rick. We're in touch with Rick constantly. We'll see where it goes. 
It would stun me if the record came out in 2013." Also in 2013, the band starred in a 3D concert film titled "", which was directed by Antal Nimród and was released in IMAX theaters on September 27. In an interview dated July 22, 2013, Ulrich told "Ultimate Guitar", "2014 will be all about making a new Metallica record"; he said the album will most likely be released during 2015. Kirk Hammett and Robert Trujillo later confirmed the band's intention to enter the studio. At the second Orion Music + More festival held in Detroit, the band played under the name "Dehaan"a reference to actor Dane DeHaan, who starred in "Metallica: Through the Never". The band performed its debut album "Kill 'Em All" in its entirety, celebrating the 30th anniversary of its release. On December 8, 2013, the band played a show called "Freeze 'Em All" in Antarctica, becoming the first band to play on all seven continents. The performance was filmed and released as a live album the same month. At the 56th Annual Grammy Awards in January 2014, Metallica performed "One" with Chinese pianist Lang Lang. In March 2014, Metallica began a tour called "Metallica By Request", in which fans request songs for the band to perform. A new song, titled "Lords of Summer" was written for the concerts and released as a "first take" demo in June 2014. In June 2014, the band headlined the Glastonbury Festival in an attempt to attract new fans. Ulrich said, "We have one shot, you never know if you'll be invited back". In November 2014, Metallica performed at the closing ceremony of BlizzCon 2014. In January 2015, Metallica announced a "Metallica Night" with the San Jose Sharks, which featured a Q&A session with the band and a charity auction benefiting the San Francisco Bay Chapter of the Sierra Club, but no performances. They were announced to headline Lollapalooza in March 2015, returning to perform there for the first time in 20 years. On May 2, 2015, Metallica performed their third annual Metallica Day at AT&T Park. Metallica were also announced to play at X Games for the first time at X Games Austin 2015 in Austin, Texas. On June 14, 2015, Hetfield and Hammett performed The Star-Spangled Banner live via electric guitars prior to game 5 of the NBA Finals between the Cleveland Cavaliers and Golden State Warriors at Oracle Arena in Oakland, California. In late October, the band unveiled a new website with an introduction from Ulrich containing footage from the studio of the band working on new material. On November 2, Metallica were announced to play "The Night Before" Super Bowl 50 at AT&T Park. Metallica announced they would be opening the U.S. Bank Stadium on August 20, 2016, with Avenged Sevenfold and Volbeat as support. In April 2016, during the week leading up to Record Store Day, for which the band was its ambassador for 2016, Ulrich told "Billboard" that the band's expanded role within the music industry had played a part in the amount of time that it had taken to write and record the album. "The way we do things now is very different than the way we did things back in the days of "Kill 'Em All" and "Ride the Lightning". Nowadays we like to do so many different things." Ulrich was also optimistic that production of the album had almost reached its completion. "Unless something radical happens it would be difficult for me to believe that it won't come out in 2016". On August 18, 2016, the band announced via their website that their tenth studio album, "Hardwired... 
to Self-Destruct", would be released worldwide on November 18, 2016, via their independent label, Blackened Recordings. They also unveiled the track listing, album artwork, and released a music video for the album's first single, "Hardwired". The album was released as scheduled and debuted at number one on the "Billboard" 200. Metallica announced they would be touring the US in summer of 2017 for the WorldWired Tour. The stadium tour also includes Avenged Sevenfold, Volbeat and Gojira as supporting acts. On August 7, 2017, Metallica was invited by the San Francisco Giants again for the fifth annual "Metallica Night" with Hammett and Hetfield performing the national anthem. In January 2018, the band announced that they would be reissuing "The $5.98 E.P.: Garage Days Re-Revisited" on April 13 for Record Store Day, and the sixth annual "Metallica Night" was also announced a few weeks later, this time in April, with all proceeds going to the All Within My Hands Foundation, which the band created in late 2017. In February 2018, the band announced a second set of North American tour dates, most of which for cities that they had not visited in up to thirty years. Metallica was influenced by early heavy metal and hard rock bands and artists Black Sabbath, Deep Purple, Kiss, Led Zeppelin, Queen, Ted Nugent, AC/DC, Rush, Aerosmith, Judas Priest, Scorpions and by new wave of British heavy metal (NWOBHM) bands Venom, Motörhead, Saxon, Diamond Head, Blitzkrieg, and Iron Maiden, and early punk rock bands Ramones, Sex Pistols, and the Misfits also influenced Metallica's style as did post-punk band Killing Joke. The band's early releases contained fast tempos, harmonized leads, and nine-minute instrumental tracks. Steve Huey of AllMusic said "Ride the Lightning" featured "extended, progressive epics; tight, concise groove-rockers". Huey said Metallica expanded its compositional technique and range of expression to take on a more aggressive approach in following releases, and lyrics dealt with personal and socially conscious issues. Religious and military leaders, rage, insanity, monsters, and drugsamong other themeswere explored on "Master of Puppets". In 1991, Huey said Metallica with new producer Bob Rock simplified and streamlined its music for a more commercial approach to appeal to mainstream audiences. Robert Palmer of "Rolling Stone" said the band abandoned its aggressive, fast tempos to expand its music and expressive range. The change in direction proved commercially successful; "Metallica" was the band's first album to peak at number one on the "Billboard" 200. Metallica noticed changes to the rock scene created by the grunge movement of the early 1990s. In "Load"—an album that has been described as having "an almost alternative rock" approach—the band changed musical direction and focused on non-metal influences. Metallica's new lyrical approach moved away from drugs and monsters, and focused on anger, loss, and retribution. Some fans and critics were not pleased with this change, which included haircuts, the cover art of "Load", and headlining the Lollapalooza festival of 1996. David Fricke of "Rolling Stone" described the move as "goodbye to the moldy stricture and dead-end Puritanism of no-frills thrash", and called "Load" the heaviest record of 1996. With the release of "ReLoad" in 1997, the band displayed blues and early hard rock influences, incorporating more rhythm and harmony in song structures. "St. Anger" marked another large change in the band's sound. 
Guitar solos were excluded from the album, leaving a "raw and unpolished sound". The band used drop C tuning; Ulrich's snare drum received particular criticism. "New York Magazine"'s Ethan Brown said it "reverberates with a thwong". The album's lyrics deal with Hetfield's drug rehabilitation and include references to the devil, anti-drug themes, claustrophobia, impending doom, and religious hypocrisy. On the advice of producer Rick Rubin, for its ninth studio album "Death Magnetic", the band returned to standard tuning and guitar solos. As a return to Metallica's thrash roots, "Death Magnetic" was a riff-oriented album featuring intense guitar solos and subtle lyrics dealing with suicide and redemption. Metallica has become one of the most influential heavy metal bands of all time, and is credited as one of the "big four" of thrash metal, along with Slayer, Anthrax, and Megadeth. The band has sold more than 110 million records worldwide, including an RIAA-certified 66 million and a Nielsen SoundScan-reported 58 million in the US, making Metallica one of the most commercially successful bands of all time. The writers of "The Rolling Stone Encyclopedia of Rock & Roll" said Metallica gave heavy metal "a much-needed charge". Stephen Thomas Erlewine and Greg Prato of Allmusic said Metallica "expanded the limits of thrash, using speed and volume not for their own sake, but to enhance their intricately structured compositions", and called the band "easily the best, most influential heavy metal band of the '80s, responsible for bringing the music back to Earth". Jonathan Davis of Korn said he respects Metallica as his favorite band; he said, "I love that they've done things their own way and they've persevered over the years and they're still relevant to this day. I think they're one of the greatest bands ever." Godsmack drummer Shannon Larkin said Metallica has been the biggest influence on the band, stating, "they really changed my life when I was 16 years old; I'd never heard anything that heavy". Vocalist and guitarist Robb Flynn of Machine Head said that when creating the band's 2007 album, "The Blackening", "What we mean is an album that has the power, influence and epic grandeur of that album "Master of Puppets", and the staying power, a timeless record like that". Trivium guitarists Corey Beaulieu and Matt Heafy said that when they heard Metallica they wanted to start playing guitar. M. Shadows of Avenged Sevenfold said touring with Metallica was the band's career highlight, and said, "Selling tons of records and playing huge shows will never compare to meeting your idols (Metallica)". God Forbid guitarists Doc and Dallas Coyle were inspired by Metallica as they grew up, and the band's bassist John Outcalt admires Burton as a "rocker". Ill Niño drummer Dave Chavarri said he finds early Metallica releases "heavy, raw, rebellious. It said, 'fuck you'", and Adema drummer Kris Kohls said the band is influenced by Metallica. On April 4, 2009, Metallica were inducted into the Rock and Roll Hall of Fame; they entered the Hall the second year they were eligible and the first year they were nominated. Metallica's induction included its current lineup, James Hetfield, Kirk Hammett, Robert Trujillo, and Lars Ulrich, and former members Jason Newsted and Cliff Burton. MTV ranked Metallica the third "Greatest Heavy Metal Band in History". 
Metallica was ranked 42nd on VH1's "100 Greatest Artists Of All Time", was listed fifth on VH1's "100 Greatest Artists of Hard Rock", and was number one on VH1's "20 Greatest Metal Bands" list. "Rolling Stone" placed the band 61st on its list of "The 100 Greatest Artists of All Time"; its albums "Master of Puppets" and "Metallica" were ranked at numbers 167 and 252 respectively on the magazine's list of "The 500 Greatest Albums of All Time". "Master of Puppets" was named in "Q Magazine"'s "50 Heaviest Albums of All Time", was ranked number one on IGN's "Top 25 Metal Albums", and was number one on Metal-rules.com's "Top 100 Heavy Metal Albums" list. "Enter Sandman" was ranked number 399 on "Rolling Stone"'s "500 Greatest Songs of All Time". "Kerrang!" released a tribute album titled "Master of Puppets: Remastered" with the April 8, 2006, edition of the magazine to celebrate the 20th anniversary of "Master of Puppets". The album featured cover versions of Metallica songs by Machine Head, Bullet for My Valentine, Chimaira, Mastodon, Mendeed, and Trivium, all of which are influenced by Metallica. At least 15 Metallica tribute albums have been released. On September 10, 2006, Metallica guest starred on "The Simpsons" eighteenth-season premiere, "The Mook, the Chef, the Wife and Her Homer". Hammett's and Hetfield's voices were used in three episodes of the animated television series "Metalocalypse". Finnish cello metal band Apocalyptica released a tribute album titled "Plays Metallica by Four Cellos", which features eight Metallica songs played on cellos. A parody band named Beatallica plays music using a combination of The Beatles and Metallica songs. Beatallica faced legal troubles when Sony, which owns The Beatles' catalog, issued a cease and desist order, claiming "substantial and irreparable injury" and ordering the group to pay damages. Ulrich, a fan of Beatallica, asked Metallica's lawyer Peter Paterno to help settle the case. On March 7, 1999, Metallica was inducted into the San Francisco Walk of Fame. The mayor of San Francisco, Willie Brown, proclaimed the day "Official Metallica Day". The band was awarded the MTV Icon award in 2003, and a concert paying tribute to the band with artists performing its songs was held. Performances included Sum 41 playing a medley of "For Whom the Bell Tolls", "Enter Sandman", and "Master of Puppets". Staind covered "Nothing Else Matters", Avril Lavigne played "Fuel", hip-hop artist Snoop Dogg performed "Sad but True", Korn played "One", and Limp Bizkit performed "Welcome Home (Sanitarium)". The "Guitar Hero" video game series included several of Metallica's songs. "One" was used in "Guitar Hero III". The album "Death Magnetic" was later released as purchasable, downloadable content for the game. "Trapped Under Ice" was featured in the sequel, "Guitar Hero World Tour". In 2009, Metallica collaborated with the game's developers to make "Guitar Hero: Metallica", which included a number of Metallica's songs. Harmonix's video game series "Rock Band" included "Enter Sandman"; "Ride the Lightning", "Blackened", and "...And Justice for All" were released as downloadable tracks. In 2013, due to expiring content licenses, "Ride the Lightning", "Blackened", and "...And Justice for All" ceased to be available for download. The 1988 re-issue of "Kill 'Em All" on Elektra Records also charted on the "Billboard" 200, peaking at number 120. 
Middle Ages In the history of Europe, the Middle Ages (or Medieval Period) lasted from the 5th to the 15th century. It began with the fall of the Western Roman Empire and merged into the Renaissance and the Age of Discovery. The Middle Ages is the middle period of the three traditional divisions of Western history: classical antiquity, the medieval period, and the modern period. The medieval period is itself subdivided into the Early, High, and Late Middle Ages. Population decline, counterurbanisation, invasion, and movement of peoples, which had begun in Late Antiquity, continued in the Early Middle Ages. The large-scale movements of the Migration Period, including various Germanic peoples, formed new kingdoms in what remained of the Western Roman Empire. In the 7th century, North Africa and the Middle East—once part of the Byzantine Empire—came under the rule of the Umayyad Caliphate, an Islamic empire, after conquest by Muhammad's successors. Although there were substantial changes in society and political structures, the break with classical antiquity was not complete. The still-sizeable Byzantine Empire, Rome's direct continuation, survived in the Eastern Mediterranean and remained a major power. The empire's law code, the "Corpus Juris Civilis" or "Code of Justinian", was rediscovered in Northern Italy in 1070 and became widely admired later in the Middle Ages. In the West, most kingdoms incorporated the few extant Roman institutions. Monasteries were founded as campaigns to Christianise pagan Europe continued. The Franks, under the Carolingian dynasty, briefly established the Carolingian Empire during the later 8th and early 9th century. It covered much of Western Europe but later succumbed to the pressures of internal civil wars combined with external invasions: Vikings from the north, Magyars from the east, and Saracens from the south. During the High Middle Ages, which began after 1000, the population of Europe increased greatly as technological and agricultural innovations allowed trade to flourish and the Medieval Warm Period climate change allowed crop yields to increase. Manorialism, the organisation of peasants into villages that owed rent and labour services to the nobles, and feudalism, the political structure whereby knights and lower-status nobles owed military service to their overlords in return for the right to rent from lands and manors, were two of the ways society was organised in the High Middle Ages. The Crusades, first preached in 1095, were military attempts by Western European Christians to regain control of the Holy Land from Muslims. Kings became the heads of centralised nation-states, reducing crime and violence but making the ideal of a unified Christendom more distant. Intellectual life was marked by scholasticism, a philosophy that emphasised joining faith to reason, and by the founding of universities. The theology of Thomas Aquinas, the paintings of Giotto, the poetry of Dante and Chaucer, the travels of Marco Polo, and the Gothic architecture of cathedrals such as Chartres are among the outstanding achievements toward the end of this period and into the Late Middle Ages. The Late Middle Ages was marked by difficulties and calamities including famine, plague, and war, which significantly diminished the population of Europe; between 1347 and 1350, the Black Death killed about a third of Europeans. 
Controversy, heresy, and the Western Schism within the Catholic Church paralleled the interstate conflict, civil strife, and peasant revolts that occurred in the kingdoms. Cultural and technological developments transformed European society, concluding the Late Middle Ages and beginning the early modern period. The Middle Ages is one of the three major periods in the most enduring scheme for analysing European history: classical civilisation, or Antiquity; the Middle Ages; and the Modern Period. Medieval writers divided history into periods such as the "Six Ages" or the "Four Empires", and considered their time to be the last before the end of the world. When referring to their own times, they spoke of them as being "modern". In the 1330s, the humanist and poet Petrarch referred to pre-Christian times as "antiqua" (or "ancient") and to the Christian period as "nova" (or "new"). Leonardo Bruni was the first historian to use tripartite periodisation in his "History of the Florentine People" (1442). Bruni and later historians argued that Italy had recovered since Petrarch's time, and therefore added a third period to Petrarch's two. The "Middle Ages" first appears in Latin in 1469 as "media tempestas" or "middle season". In early usage, there were many variants, including "medium aevum", or "middle age", first recorded in 1604, and "media saecula", or "middle ages", first recorded in 1625. The alternative term "medieval" (or occasionally "mediaeval" or "mediæval") derives from "medium aevum". Tripartite periodisation became standard after the German 17th-century historian Christoph Cellarius divided history into three periods: Ancient, Medieval, and Modern. The most commonly given starting point for the Middle Ages is around 500, with the date of 476 first used by Bruni. Later starting dates are sometimes used in the outer parts of Europe. For Europe as a whole, 1500 is often considered to be the end of the Middle Ages, but there is no universally agreed upon end date. Depending on the context, events such as Christopher Columbus's first voyage to the Americas in 1492, the conquest of Constantinople by the Turks in 1453, or the Protestant Reformation in 1517 are sometimes used. English historians often use the Battle of Bosworth Field in 1485 to mark the end of the period. For Spain, dates commonly used are the death of King Ferdinand II in 1516, the death of Queen Isabella I of Castile in 1504, or the conquest of Granada in 1492. Historians from Romance-speaking countries tend to divide the Middle Ages into two parts: an earlier "High" and later "Low" period. English-speaking historians, following their German counterparts, generally subdivide the Middle Ages into three intervals: "Early", "High", and "Late". In the 19th century, the entire Middle Ages were often referred to as the "Dark Ages", but with the adoption of these subdivisions, use of this term was restricted to the Early Middle Ages, at least among historians. The Roman Empire reached its greatest territorial extent during the 2nd century AD; the following two centuries witnessed the slow decline of Roman control over its outlying territories. Economic issues, including inflation, and external pressure on the frontiers combined to create the Crisis of the Third Century, with emperors coming to the throne only to be rapidly replaced by new usurpers. Military expenses increased steadily during the 3rd century, mainly in response to the war with the Sasanian Empire, which revived in the middle of the 3rd century. 
The army doubled in size, and cavalry and smaller units replaced the Roman legion as the main tactical unit. The need for revenue led to increased taxes and a decline in numbers of the curial, or landowning, class, and decreasing numbers of them willing to shoulder the burdens of holding office in their native towns. More bureaucrats were needed in the central administration to deal with the needs of the army, which led to complaints from civilians that there were more tax-collectors in the empire than tax-payers. The Emperor Diocletian (r. 284–305) split the empire into separately administered eastern and western halves in 286; the empire was not considered divided by its inhabitants or rulers, as legal and administrative promulgations in one division were considered valid in the other. In 330, after a period of civil war, Constantine the Great (r. 306–337) refounded the city of Byzantium as the newly renamed eastern capital, Constantinople. Diocletian's reforms strengthened the governmental bureaucracy, reformed taxation, and strengthened the army, which bought the empire time but did not resolve the problems it was facing: excessive taxation, a declining birthrate, and pressures on its frontiers, among others. Civil war between rival emperors became common in the middle of the 4th century, diverting soldiers from the empire's frontier forces and allowing invaders to encroach. For much of the 4th century, Roman society stabilised in a new form that differed from the earlier classical period, with a widening gulf between the rich and poor, and a decline in the vitality of the smaller towns. Another change was the Christianisation, or conversion of the empire to Christianity, a gradual process that lasted from the 2nd to the 5th centuries. In 376, the Goths, fleeing from the Huns, received permission from Emperor Valens (r. 364–378) to settle in the Roman province of Thracia in the Balkans. The settlement did not go smoothly, and when Roman officials mishandled the situation, the Goths began to raid and plunder. Valens, attempting to put down the disorder, was killed fighting the Goths at the Battle of Adrianople on 9 August 378. As well as the threat from such tribal confederacies from the north, internal divisions within the empire, especially within the Christian Church, caused problems. In 400, the Visigoths invaded the Western Roman Empire and, although briefly forced back from Italy, in 410 sacked the city of Rome. In 406 the Alans, Vandals, and Suevi crossed into Gaul; over the next three years they spread across Gaul and in 409 crossed the Pyrenees Mountains into modern-day Spain. The Migration Period began, when various peoples, initially largely Germanic peoples, moved across Europe. The Franks, Alemanni, and the Burgundians all ended up in northern Gaul while the Angles, Saxons, and Jutes settled in Britain, and the Vandals went on to cross the strait of Gibraltar after which they conquered the province of Africa. In the 430s the Huns began invading the empire; their king Attila (r. 434–453) led invasions into the Balkans in 442 and 447, Gaul in 451, and Italy in 452. The Hunnic threat remained until Attila's death in 453, when the Hunnic confederation he led fell apart. These invasions by the tribes completely changed the political and demographic nature of what had been the Western Roman Empire. By the end of the 5th century the western section of the empire was divided into smaller political units, ruled by the tribes that had invaded in the early part of the century. 
The deposition of the last emperor of the west, Romulus Augustulus, in 476 has traditionally marked the end of the Western Roman Empire. By 493 the Italian peninsula was conquered by the Ostrogoths. The Eastern Roman Empire, often referred to as the Byzantine Empire after the fall of its western counterpart, had little ability to assert control over the lost western territories. The Byzantine emperors maintained a claim over the territory, but while none of the new kings in the west dared to elevate himself to the position of emperor of the west, Byzantine control of most of the Western Empire could not be sustained; the reconquest of the Mediterranean periphery and the Italian Peninsula (Gothic War) in the reign of Justinian (r. 527–565) was the sole, and temporary, exception. The political structure of Western Europe changed with the end of the united Roman Empire. Although the movements of peoples during this period are usually described as "invasions", they were not just military expeditions but migrations of entire peoples into the empire. Such movements were aided by the refusal of the Western Roman elites to support the army or pay the taxes that would have allowed the military to suppress the migration. The emperors of the 5th century were often controlled by military strongmen such as Stilicho (d. 408), Aetius (d. 454), Aspar (d. 471), Ricimer (d. 472), or Gundobad (d. 516), who were partly or fully of non-Roman background. When the line of Western emperors ceased, many of the kings who replaced them were from the same background. Intermarriage between the new kings and the Roman elites was common. This led to a fusion of Roman culture with the customs of the invading tribes, including the popular assemblies that allowed free male tribal members more say in political matters than was common in the Roman state. Material artefacts left by the Romans and the invaders are often similar, and tribal items were often modelled on Roman objects. Much of the scholarly and written culture of the new kingdoms was also based on Roman intellectual traditions. An important difference was the gradual loss of tax revenue by the new polities. Many of the new political entities no longer supported their armies through taxes, instead relying on granting them land or rents. This meant there was less need for large tax revenues and so the taxation systems decayed. Warfare was common between and within the kingdoms. Slavery declined as the supply weakened, and society became more rural. Between the 5th and 8th centuries, new peoples and individuals filled the political void left by Roman centralised government. The Ostrogoths, a Gothic tribe, settled in Roman Italy in the late fifth century under Theoderic the Great (d. 526) and set up a kingdom marked by its co-operation between the Italians and the Ostrogoths, at least until the last years of Theodoric's reign. The Burgundians settled in Gaul, and after an earlier realm was destroyed by the Huns in 436 formed a new kingdom in the 440s. Between today's Geneva and Lyon, it grew to become the realm of Burgundy in the late 5th and early 6th centuries. Elsewhere in Gaul, the Franks and Celtic Britons set up small polities. Francia was centred in northern Gaul, and the first king of whom much is known is Childeric I (d. 481). His grave was discovered in 1653 and is remarkable for its grave goods, which included weapons and a large quantity of gold. Under Childeric's son Clovis I (r. 
509–511), the founder of the Merovingian dynasty, the Frankish kingdom expanded and converted to Christianity. The Britons, related to the natives of Britannia – modern-day Great Britain – settled in what is now Brittany. Other monarchies were established by the Visigothic Kingdom in the Iberian Peninsula, the Suebi in northwestern Iberia, and the Vandal Kingdom in North Africa. In the sixth century, the Lombards settled in Northern Italy, replacing the Ostrogothic kingdom with a grouping of duchies that occasionally selected a king to rule over them all. By the late sixth century, this arrangement had been replaced by a permanent monarchy, the Kingdom of the Lombards. The invasions brought new ethnic groups to Europe, although some regions received a larger influx of new peoples than others. In Gaul for instance, the invaders settled much more extensively in the north-east than in the south-west. Slavs settled in Central and Eastern Europe and the Balkan Peninsula. The settlement of peoples was accompanied by changes in languages. Latin, the literary language of the Western Roman Empire, was gradually replaced by vernacular languages which evolved from Latin, but were distinct from it, collectively known as Romance languages. These changes from Latin to the new languages took many centuries. Greek remained the language of the Byzantine Empire, but the migrations of the Slavs added Slavic languages to Eastern Europe. As Western Europe witnessed the formation of new kingdoms, the Eastern Roman Empire remained intact and experienced an economic revival that lasted into the early 7th century. There were fewer invasions of the eastern section of the empire; most occurred in the Balkans. Peace with the Sasanian Empire, the traditional enemy of Rome, lasted throughout most of the 5th century. The Eastern Empire was marked by closer relations between the political state and Christian Church, with doctrinal matters assuming an importance in Eastern politics that they did not have in Western Europe. Legal developments included the codification of Roman law; the first effort—the "Codex Theodosianus"—was completed in 438. Under Emperor Justinian (r. 527–565), another compilation took place—the "Corpus Juris Civilis". Justinian also oversaw the construction of the Hagia Sophia in Constantinople and the reconquest of North Africa from the Vandals and Italy from the Ostrogoths, under Belisarius (d. 565). The conquest of Italy was not complete, as a deadly outbreak of plague in 542 led to the rest of Justinian's reign concentrating on defensive measures rather than further conquests. At the Emperor's death, the Byzantines had control of most of Italy, North Africa, and a small foothold in southern Spain. Justinian's reconquests have been criticised by historians for overextending his realm and setting the stage for the early Muslim conquests, but many of the difficulties faced by Justinian's successors were due not just to over-taxation to pay for his wars but to the essentially civilian nature of the empire, which made raising troops difficult. In the Eastern Empire the slow infiltration of the Balkans by the Slavs added a further difficulty for Justinian's successors. It began gradually, but by the late 540s Slavic tribes were in Thrace and Illyrium, and had defeated an imperial army near Adrianople in 551. 
In the 560s the Avars began to expand from their base on the north bank of the Danube; by the end of the 6th-century, they were the dominant power in Central Europe and routinely able to force the Eastern emperors to pay tribute. They remained a strong power until 796. An additional problem to face the empire came as a result of the involvement of Emperor Maurice (r. 582–602) in Persian politics when he intervened in a succession dispute. This led to a period of peace, but when Maurice was overthrown, the Persians invaded and during the reign of Emperor Heraclius (r. 610–641) controlled large chunks of the empire, including Egypt, Syria, and Anatolia until Heraclius' successful counterattack. In 628 the empire secured a peace treaty and recovered all of its lost territories. In Western Europe, some of the older Roman elite families died out while others became more involved with ecclesiastical than secular affairs. Values attached to Latin scholarship and education mostly disappeared, and while literacy remained important, it became a practical skill rather than a sign of elite status. In the 4th century, Jerome (d. 420) dreamed that God rebuked him for spending more time reading Cicero than the Bible. By the 6th century, Gregory of Tours (d. 594) had a similar dream, but instead of being chastised for reading Cicero, he was chastised for learning shorthand. By the late 6th century, the principal means of religious instruction in the Church had become music and art rather than the book. Most intellectual efforts went towards imitating classical scholarship, but some original works were created, along with now-lost oral compositions. The writings of Sidonius Apollinaris (d. 489), Cassiodorus (d. c. 585), and Boethius (d. c. 525) were typical of the age. Changes also took place among laymen, as aristocratic culture focused on great feasts held in halls rather than on literary pursuits. Clothing for the elites was richly embellished with jewels and gold. Lords and kings supported entourages of fighters who formed the backbone of the military forces. Family ties within the elites were important, as were the virtues of loyalty, courage, and honour. These ties led to the prevalence of the feud in aristocratic society, examples of which included those related by Gregory of Tours that took place in Merovingian Gaul. Most feuds seem to have ended quickly with the payment of some sort of compensation. Women took part in aristocratic society mainly in their roles as wives and mothers of men, with the role of mother of a ruler being especially prominent in Merovingian Gaul. In Anglo-Saxon society the lack of many child rulers meant a lesser role for women as queen mothers, but this was compensated for by the increased role played by abbesses of monasteries. Only in Italy does it appear that women were always considered under the protection and control of a male relative. Peasant society is much less documented than the nobility. Most of the surviving information available to historians comes from archaeology; few detailed written records documenting peasant life remain from before the 9th century. Most of the descriptions of the lower classes come from either law codes or writers from the upper classes. Landholding patterns in the West were not uniform; some areas had greatly fragmented landholding patterns, but in other areas large contiguous blocks of land were the norm. 
These differences allowed for a wide variety of peasant societies, some dominated by aristocratic landholders and others having a great deal of autonomy. Land settlement also varied greatly. Some peasants lived in large settlements that numbered as many as 700 inhabitants. Others lived in small groups of a few families and still others lived on isolated farms spread over the countryside. There were also areas where the pattern was a mix of two or more of those systems. Unlike in the late Roman period, there was no sharp break between the legal status of the free peasant and the aristocrat, and it was possible for a free peasant's family to rise into the aristocracy over several generations through military service to a powerful lord. Roman city life and culture changed greatly in the early Middle Ages. Although Italian cities remained inhabited, they contracted significantly in size. Rome, for instance, shrank from a population of hundreds of thousands to around 30,000 by the end of the 6th century. Roman temples were converted into Christian churches and city walls remained in use. In Northern Europe, cities also shrank, while civic monuments and other public buildings were raided for building materials. The establishment of new kingdoms often meant some growth for the towns chosen as capitals. Although there had been Jewish communities in many Roman cities, the Jews suffered periods of persecution after the conversion of the empire to Christianity. Officially they were tolerated, if subject to conversion efforts, and at times were even encouraged to settle in new areas. Religious beliefs in the Eastern Empire and Iran were in flux during the late sixth and early seventh centuries. Judaism was an active proselytising faith, and at least one Arab political leader converted to it. Christianity had active missions competing with the Persians' Zoroastrianism in seeking converts, especially among residents of the Arabian Peninsula. All these strands came together with the emergence of Islam in Arabia during the lifetime of Muhammad (d. 632). After his death, Islamic forces conquered much of the Eastern Empire and Persia, starting with Syria in 634–635 and reaching Egypt in 640–641, Persia between 637 and 642, North Africa in the later seventh century, and the Iberian Peninsula in 711. By 714, Islamic forces controlled much of the peninsula in a region they called Al-Andalus. The Islamic conquests reached their peak in the mid-eighth century. The defeat of Muslim forces at the Battle of Tours in 732 led to the reconquest of southern France by the Franks, but the main reason for the halt of Islamic growth in Europe was the overthrow of the Umayyad Caliphate and its replacement by the Abbasid Caliphate. The Abbasids moved their capital to Baghdad and were more concerned with the Middle East than Europe, losing control of sections of the Muslim lands. Umayyad descendants took over the Iberian Peninsula, the Aghlabids controlled North Africa, and the Tulunids became rulers of Egypt. By the middle of the 8th century, new trading patterns were emerging in the Mediterranean; trade between the Franks and the Arabs replaced the old Roman economy. Franks traded timber, furs, swords and slaves in return for silks and other fabrics, spices, and precious metals from the Arabs. The migrations and invasions of the 4th and 5th centuries disrupted trade networks around the Mediterranean. 
African goods stopped being imported into Europe, first disappearing from the interior and by the 7th century found only in a few cities such as Rome or Naples. By the end of the 7th century, under the impact of the Muslim conquests, African products were no longer found in Western Europe. The replacement of goods from long-range trade with local products was a trend throughout the old Roman lands that happened in the Early Middle Ages. This was especially marked in the lands that did not lie on the Mediterranean, such as northern Gaul or Britain. Non-local goods appearing in the archaeological record are usually luxury goods. In the northern parts of Europe, not only were the trade networks local, but the goods carried were simple, with little pottery or other complex products. Around the Mediterranean, pottery remained prevalent and appears to have been traded over medium-range networks, not just produced locally. The various Germanic states in the west all had coinages that imitated existing Roman and Byzantine forms. Gold continued to be minted until the end of the 7th century, when it was replaced by silver coins. The basic Frankish silver coin was the denarius or denier, while the Anglo-Saxon version was called a penny. From these areas, the denier or penny spread throughout Europe during the centuries from 700 to 1000. Copper or bronze coins were not struck, nor were gold except in Southern Europe. No silver coins denominated in multiple units were minted. Christianity was a major unifying factor between Eastern and Western Europe before the Arab conquests, but the conquest of North Africa sundered maritime connections between those areas. Increasingly the Byzantine Church differed in language, practices, and liturgy from the Western Church. The Eastern Church used Greek instead of the Western Latin. Theological and political differences emerged, and by the early and middle 8th century issues such as iconoclasm, clerical marriage, and state control of the Church had widened to the extent that the cultural and religious differences were greater than the similarities. The formal break, known as the East–West Schism, came in 1054, when the papacy and the patriarchy of Constantinople clashed over papal supremacy and excommunicated each other, which led to the division of Christianity into two Churches—the Western branch became the Roman Catholic Church and the Eastern branch the Eastern Orthodox Church. The ecclesiastical structure of the Roman Empire survived the movements and invasions in the west mostly intact, but the papacy was little regarded, and few of the Western bishops looked to the bishop of Rome for religious or political leadership. Many of the popes prior to 750 were more concerned with Byzantine affairs and Eastern theological controversies. The register, or archived copies of the letters, of Pope Gregory the Great (pope 590–604) survived, and of those more than 850 letters, the vast majority were concerned with affairs in Italy or Constantinople. The only part of Western Europe where the papacy had influence was Britain, where Gregory had sent the Gregorian mission in 597 to convert the Anglo-Saxons to Christianity. Irish missionaries were most active in Western Europe between the 5th and the 7th centuries, going first to England and Scotland and then on to the continent. Under such monks as Columba (d. 597) and Columbanus (d. 615), they founded monasteries, taught in Latin and Greek, and authored secular and religious works. 
The Early Middle Ages witnessed the rise of monasticism in the West. The shape of European monasticism was determined by traditions and ideas that originated with the Desert Fathers of Egypt and Syria. Most European monasteries were of the type that focuses on community experience of the spiritual life, called cenobitism, which was pioneered by Pachomius (d. 348) in the 4th century. Monastic ideals spread from Egypt to Western Europe in the 5th and 6th centuries through hagiographical literature such as the "Life of Anthony". Benedict of Nursia (d. 547) wrote the Benedictine Rule for Western monasticism during the 6th century, detailing the administrative and spiritual responsibilities of a community of monks led by an abbot. Monks and monasteries had a deep effect on the religious and political life of the Early Middle Ages, in various cases acting as land trusts for powerful families, centres of propaganda and royal support in newly conquered regions, and bases for missions and proselytisation. They were the main and sometimes only outposts of education and literacy in a region. Many of the surviving manuscripts of the Latin classics were copied in monasteries in the Early Middle Ages. Monks were also the authors of new works on history, theology, and other subjects, written by writers such as Bede (d. 735), a native of northern England who wrote in the late 7th and early 8th centuries. The Frankish kingdom in northern Gaul split into kingdoms called Austrasia, Neustria, and Burgundy during the 6th and 7th centuries, all of them ruled by the Merovingian dynasty, who were descended from Clovis. The 7th century was a tumultuous period of wars between Austrasia and Neustria. Such warfare was exploited by Pippin (d. 640), the Mayor of the Palace for Austrasia who became the power behind the Austrasian throne. Later members of his family inherited the office, acting as advisers and regents. One of his descendants, Charles Martel (d. 741), won the Battle of Poitiers in 732, halting the advance of Muslim armies across the Pyrenees. Great Britain was divided into small states dominated by the kingdoms of Northumbria, Mercia, Wessex, and East Anglia, which were descended from the Anglo-Saxon invaders. Smaller kingdoms in present-day Wales and Scotland were still under the control of the native Britons and Picts. Ireland was divided into even smaller political units, usually known as tribal kingdoms, under the control of kings. There were perhaps as many as 150 local kings in Ireland, of varying importance. The Carolingian dynasty, as the successors to Charles Martel are known, officially took control of the kingdoms of Austrasia and Neustria in a coup of 753 led by Pippin III (r. 752–768). A contemporary chronicle claims that Pippin sought, and gained, authority for this coup from Pope Stephen II (pope 752–757). Pippin's takeover was reinforced with propaganda that portrayed the Merovingians as inept or cruel rulers, exalted the accomplishments of Charles Martel, and circulated stories of the family's great piety. At the time of his death in 768, Pippin left his kingdom in the hands of his two sons, Charles (r. 768–814) and Carloman (r. 768–771). When Carloman died of natural causes, Charles blocked the succession of Carloman's young son and installed himself as the king of the united Austrasia and Neustria. 
Charles, more often known as Charles the Great or Charlemagne, embarked upon a programme of systematic expansion in 774 that unified a large portion of Europe, eventually controlling modern-day France, northern Italy, and Saxony. In the wars that lasted beyond 800, he rewarded allies with war booty and command over parcels of land. In 774, Charlemagne conquered the Lombards, which freed the papacy from the fear of Lombard conquest and marked the beginnings of the Papal States. The coronation of Charlemagne as emperor on Christmas Day 800 is regarded as a turning point in medieval history, marking a return of the Western Roman Empire, since the new emperor ruled over much of the area previously controlled by the Western emperors. It also marks a change in Charlemagne's relationship with the Byzantine Empire, as the assumption of the imperial title by the Carolingians asserted their equivalence to the Byzantine state. There were several differences between the newly established Carolingian Empire and both the older Western Roman Empire and the concurrent Byzantine Empire. The Frankish lands were rural in character, with only a few small cities. Most of the people were peasants settled on small farms. Little trade existed and much of that was with the British Isles and Scandinavia, in contrast to the older Roman Empire with its trading networks centred on the Mediterranean. The empire was administered by an itinerant court that travelled with the emperor, as well as approximately 300 imperial officials called counts, who administered the counties the empire had been divided into. Clergy and local bishops served as officials, as well as the imperial officials called "missi dominici", who served as roving inspectors and troubleshooters. Charlemagne's court in Aachen was the centre of the cultural revival sometimes referred to as the "Carolingian Renaissance". Literacy increased, as did development in the arts, architecture and jurisprudence, as well as liturgical and scriptural studies. The English monk Alcuin (d. 804) was invited to Aachen and brought the education available in the monasteries of Northumbria. Charlemagne's chancery—or writing office—made use of a new script today known as Carolingian minuscule, allowing a common writing style that advanced communication across much of Europe. Charlemagne sponsored changes in church liturgy, imposing the Roman form of church service on his domains, as well as the Gregorian chant in liturgical music for the churches. An important activity for scholars during this period was the copying, correcting, and dissemination of basic works on religious and secular topics, with the aim of encouraging learning. New works on religious topics and schoolbooks were also produced. Grammarians of the period modified the Latin language, changing it from the Classical Latin of the Roman Empire into a more flexible form to fit the needs of the Church and government. By the reign of Charlemagne, the language had so diverged from the classical that it was later called Medieval Latin. Charlemagne planned to continue the Frankish tradition of dividing his kingdom between all his heirs, but was unable to do so as only one son, Louis the Pious (r. 814–840), was still alive by 813. Just before Charlemagne died in 814, he crowned Louis as his successor. Louis's reign of 26 years was marked by numerous divisions of the empire among his sons and, after 829, civil wars between various alliances of father and sons over the control of various parts of the empire. 
Eventually, Louis recognised his eldest son Lothair (d. 855) as emperor and gave him Italy. Louis divided the rest of the empire between Lothair and Charles the Bald (d. 877), his youngest son. Lothair took East Francia, comprising both banks of the Rhine and eastwards, leaving Charles West Francia with the empire to the west of the Rhineland and the Alps. Louis the German (d. 876), the middle child, who had been rebellious to the last, was allowed to keep Bavaria under the suzerainty of his elder brother. The division was disputed. Pepin II of Aquitaine (d. after 864), the emperor's grandson, rebelled in a contest for Aquitaine, while Louis the German tried to annex all of East Francia. Louis the Pious died in 840, with the empire still in chaos. A three-year civil war followed his death. By the Treaty of Verdun (843), a kingdom between the Rhine and Rhone rivers was created for Lothair to go with his lands in Italy, and his imperial title was recognised. Louis the German was in control of Bavaria and the eastern lands in modern-day Germany. Charles the Bald received the western Frankish lands, comprising most of modern-day France. Charlemagne's grandsons and great-grandsons divided their kingdoms between their descendants, eventually causing all internal cohesion to be lost. In 987 the Carolingian dynasty was replaced in the western lands, with the crowning of Hugh Capet (r. 987–996) as king. In the eastern lands the dynasty had died out earlier, in 911, with the death of Louis the Child, and the selection of the unrelated Conrad I (r. 911–918) as king. The breakup of the Carolingian Empire was accompanied by invasions, migrations, and raids by external foes. The Atlantic and northern shores were harassed by the Vikings, who also raided the British Isles and settled there as well as in Iceland. In 911, the Viking chieftain Rollo (d. c. 931) received permission from the Frankish King Charles the Simple (r. 898–922) to settle in what became Normandy. The eastern parts of the Frankish kingdoms, especially Germany and Italy, were under continual Magyar assault until the invaders' defeat at the Battle of Lechfeld in 955. The breakup of the Abbasid dynasty meant that the Islamic world fragmented into smaller political states, some of which began expanding into Italy and Sicily, as well as over the Pyrenees into the southern parts of the Frankish kingdoms. Efforts by local kings to fight the invaders led to the formation of new political entities. In Anglo-Saxon England, King Alfred the Great (r. 871–899) came to an agreement with the Viking invaders in the late 9th century, resulting in Danish settlements in Northumbria, Mercia, and parts of East Anglia. By the middle of the 10th century, Alfred's successors had conquered Northumbria, and restored English control over most of the southern part of Great Britain. In northern Britain, Kenneth MacAlpin (d. c. 860) united the Picts and the Scots into the Kingdom of Alba. In the early 10th century, the Ottonian dynasty had established itself in Germany, and was engaged in driving back the Magyars. Its efforts culminated in the coronation in 962 of Otto I (r. 936–973) as Holy Roman Emperor. In 972, he secured recognition of his title by the Byzantine Empire, which he sealed with the marriage of his son Otto II (r. 967–983) to Theophanu (d. 991), daughter of an earlier Byzantine Emperor Romanos II (r. 959–963). By the late 10th century Italy had been drawn into the Ottonian sphere after a period of instability; Otto III (r. 
996–1002) spent much of his later reign in the kingdom. The western Frankish kingdom was more fragmented, and although kings remained nominally in charge, much of the political power devolved to the local lords. Missionary efforts to Scandinavia during the 9th and 10th centuries helped strengthen the growth of kingdoms such as Sweden, Denmark, and Norway, which gained power and territory. Some kings converted to Christianity, although not all by 1000. Scandinavians also expanded and colonised throughout Europe. Besides the settlements in Ireland, England, and Normandy, further settlement took place in what became Russia and in Iceland. Swedish traders and raiders ranged down the rivers of the Russian steppe, and even attempted to seize Constantinople in 860 and 907. Christian Spain, initially driven into a small section of the peninsula in the north, expanded slowly south during the 9th and 10th centuries, establishing the kingdoms of Asturias and León. In Eastern Europe, Byzantium revived its fortunes under Emperor Basil I (r. 867–886) and his successors Leo VI (r. 886–912) and Constantine VII (r. 913–959), members of the Macedonian dynasty. Commerce revived and the emperors oversaw the extension of a uniform administration to all the provinces. The military was reorganised, which allowed the emperors John I (r. 969–976) and Basil II (r. 976–1025) to expand the frontiers of the empire on all fronts. The imperial court was the centre of a revival of classical learning, a process known as the Macedonian Renaissance. Writers such as John Geometres (fl. early 10th century) composed new hymns, poems, and other works. Missionary efforts by both Eastern and Western clergy resulted in the conversion of the Moravians, Bulgars, Bohemians, Poles, Magyars, and Slavic inhabitants of the Kievan Rus'. These conversions contributed to the founding of political states in the lands of those peoples—the states of Moravia, Bulgaria, Bohemia, Poland, Hungary, and the Kievan Rus'. Bulgaria, which was founded around 680, at its height reached from Budapest to the Black Sea and from the Dnieper River in modern Ukraine to the Adriatic Sea. By 1018, the last Bulgarian nobles had surrendered to the Byzantine Empire. Few large stone buildings were constructed between the Constantinian basilicas of the 4th century and the 8th century, although many smaller ones were built during the 6th and 7th centuries. By the beginning of the 8th century, the Carolingian Empire revived the basilica form of architecture. One feature of the basilica is the use of a transept, or the "arms" of a cross-shaped building that are perpendicular to the long nave. Other new features of religious architecture include the crossing tower and a monumental entrance to the church, usually at the west end of the building. Carolingian art was produced for a small group of figures around the court, and the monasteries and churches they supported. It was dominated by efforts to regain the dignity and classicism of imperial Roman and Byzantine art, but was also influenced by the Insular art of the British Isles. Insular art integrated the energy of Irish Celtic and Anglo-Saxon Germanic styles of ornament with Mediterranean forms such as the book, and established many characteristics of art for the rest of the medieval period. Surviving religious works from the Early Middle Ages are mostly illuminated manuscripts and carved ivories, originally made for metalwork that has since been melted down. 
Objects in precious metals were the most prestigious form of art, but almost all are lost except for a few crosses such as the Cross of Lothair, several reliquaries, and finds such as the Anglo-Saxon burial at Sutton Hoo and the hoards of Gourdon from Merovingian France, Guarrazar from Visigothic Spain and Nagyszentmiklós near Byzantine territory. There are survivals from the large brooches in fibula or penannular form that were a key piece of personal adornment for elites, including the Irish Tara Brooch. Highly decorated books were mostly Gospel Books and these have survived in larger numbers, including the Insular Book of Kells, the Book of Lindisfarne, and the imperial Codex Aureus of St. Emmeram, which is one of the few to retain its "treasure binding" of gold encrusted with jewels. Charlemagne's court seems to have been responsible for the acceptance of figurative monumental sculpture in Christian art, and by the end of the period near life-sized figures such as the Gero Cross were common in important churches. During the later Roman Empire, the principal military developments were attempts to create an effective cavalry force as well as the continued development of highly specialised types of troops. The creation of heavily armoured cataphract-type soldiers as cavalry was an important feature of the 5th-century Roman military. The various invading tribes had differing emphases on types of soldiers—ranging from the primarily infantry Anglo-Saxon invaders of Britain to the Vandals and Visigoths, who had a high proportion of cavalry in their armies. During the early invasion period, the stirrup had not been introduced into warfare, which limited the usefulness of cavalry as shock troops because it was not possible to put the full force of the horse and rider behind blows struck by the rider. The greatest change in military affairs during the invasion period was the adoption of the Hunnic composite bow in place of the earlier, and weaker, Scythian composite bow. Another development was the increasing use of longswords and the progressive replacement of scale armour by mail armour and lamellar armour. The importance of infantry and light cavalry began to decline during the early Carolingian period, with a growing dominance of elite heavy cavalry. The use of militia-type levies of the free population declined over the Carolingian period. Although much of the Carolingian armies were mounted, a large proportion during the early period appear to have been mounted infantry, rather than true cavalry. One exception was Anglo-Saxon England, where the armies were still composed of regional levies, known as the "fyrd", which were led by the local elites. In military technology, one of the main changes was the return of the crossbow, which had been known in Roman times and reappeared as a military weapon during the last part of the Early Middle Ages. Another change was the introduction of the stirrup, which increased the effectiveness of cavalry as shock troops. A technological advance that had implications beyond the military was the horseshoe, which allowed horses to be used in rocky terrain. The High Middle Ages was a period of tremendous expansion of population. The estimated population of Europe grew from 35 to 80 million between 1000 and 1347, although the exact causes remain unclear: improved agricultural techniques, the decline of slaveholding, a more clement climate and the lack of invasion have all been suggested. As much as 90 per cent of the European population remained rural peasants. 
Many were no longer settled in isolated farms but had gathered into small communities, usually known as manors or villages. These peasants were often subject to noble overlords and owed them rents and other services, in a system known as manorialism. There remained a few free peasants throughout this period and beyond, with more of them in the regions of Southern Europe than in the north. The practice of assarting, or bringing new lands into production by offering incentives to the peasants who settled them, also contributed to the expansion of population. The Open-field system of agriculture was commonly practiced in most of Europe, especially in "northwestern and central Europe." Open-field agricultural communities had three basic characteristics: individual peasant holdings in the form of strips of land were scattered among the different fields belonging to the manor; crops were rotated from year to year to preserve soil fertility; and common land was used for grazing livestock and other purposes. Other sections of society included the nobility, clergy, and townsmen. Nobles, both the titled nobility and simple knights, exploited the manors and the peasants, although they did not own lands outright but were granted rights to the income from a manor or other lands by an overlord through the system of feudalism. During the 11th and 12th centuries, these lands, or fiefs, came to be considered hereditary, and in most areas they were no longer divisible between all the heirs as had been the case in the early medieval period. Instead, most fiefs and lands went to the eldest son. The dominance of the nobility was built upon its control of the land, its military service as heavy cavalry, control of castles, and various immunities from taxes or other impositions. Castles, initially in wood but later in stone, began to be constructed in the 9th and 10th centuries in response to the disorder of the time, and provided protection from invaders as well as allowing lords defence from rivals. Control of castles allowed the nobles to defy kings or other overlords. Nobles were stratified; kings and the highest-ranking nobility controlled large numbers of commoners and large tracts of land, as well as other nobles. Beneath them, lesser nobles had authority over smaller areas of land and fewer people. Knights were the lowest level of nobility; they controlled but did not own land, and had to serve other nobles. The clergy was divided into two types: the secular clergy, who lived out in the world, and the regular clergy, who lived under a religious rule and were usually monks. Throughout the period monks remained a very small proportion of the population, usually less than one percent. Most of the regular clergy were drawn from the nobility, the same social class that served as the recruiting ground for the upper levels of the secular clergy. The local parish priests were often drawn from the peasant class. Townsmen were in a somewhat unusual position, as they did not fit into the traditional three-fold division of society into nobles, clergy, and peasants. During the 12th and 13th centuries, the ranks of the townsmen expanded greatly as existing towns grew and new population centres were founded. But throughout the Middle Ages the population of the towns probably never exceeded 10 percent of the total population. Jews also spread across Europe during the period. 
Communities were established in Germany and England in the 11th and 12th centuries, but Spanish Jews, long settled in Spain under the Muslims, came under Christian rule and increasing pressure to convert to Christianity. Most Jews were confined to the cities, as they were not allowed to own land or be peasants. Besides the Jews, there were other non-Christians on the edges of Europe—pagan Slavs in Eastern Europe and Muslims in Southern Europe. Women in the Middle Ages were officially required to be subordinate to some male, whether their father, husband, or other kinsman. Widows, who were often allowed much control over their own lives, were still restricted legally. Women's work generally consisted of household or other domestically inclined tasks. Peasant women were usually responsible for taking care of the household, child-care, as well as gardening and animal husbandry near the house. They could supplement the household income by spinning or brewing at home. At harvest-time, they were also expected to help with field-work. Townswomen, like peasant women, were responsible for the household, and could also engage in trade. What trades were open to women varied by country and period. Noblewomen were responsible for running a household, and could occasionally be expected to handle estates in the absence of male relatives, but they were usually restricted from participation in military or government affairs. The only role open to women in the Church was that of nuns, as they were unable to become priests. In central and northern Italy and in Flanders, the rise of towns that were to a degree self-governing stimulated economic growth and created an environment for new types of trade associations. Commercial cities on the shores of the Baltic entered into agreements known as the Hanseatic League, and the Italian Maritime republics such as Venice, Genoa, and Pisa expanded their trade throughout the Mediterranean. Great trading fairs were established and flourished in northern France during the period, allowing Italian and German merchants to trade with each other as well as local merchants. In the late 13th century new land and sea routes to the Far East were pioneered, famously described in "The Travels of Marco Polo" written by one of the traders, Marco Polo (d. 1324). Besides new trading opportunities, agricultural and technological improvements enabled an increase in crop yields, which in turn allowed the trade networks to expand. Rising trade brought new methods of dealing with money, and gold coinage was again minted in Europe, first in Italy and later in France and other countries. New forms of commercial contracts emerged, allowing risk to be shared among merchants. Accounting methods improved, partly through the use of double-entry bookkeeping; letters of credit also appeared, allowing easy transmission of money. The High Middle Ages was the formative period in the history of the modern Western state. Kings in France, England, and Spain consolidated their power, and set up lasting governing institutions. New kingdoms such as Hungary and Poland, after their conversion to Christianity, became Central European powers. The Magyars settled Hungary around 900 under King Árpád (d. c. 907) after a series of invasions in the 9th century. 
The papacy, long attached to an ideology of independence from secular kings, first asserted its claim to temporal authority over the entire Christian world; the Papal Monarchy reached its apogee in the early 13th century under the pontificate of Innocent III (pope 1198–1216). Northern Crusades and the advance of Christian kingdoms and military orders into previously pagan regions in the Baltic and Finnic north-east brought the forced assimilation of numerous native peoples into European culture. During the early High Middle Ages, Germany was ruled by the Ottonian dynasty, which struggled to control the powerful dukes ruling over territorial duchies tracing back to the Migration period. In 1024, they were replaced by the Salian dynasty, who famously clashed with the papacy under Emperor Henry IV (r. 1084–1105) over Church appointments as part of the Investiture Controversy. His successors continued to struggle against the papacy as well as the German nobility. A period of instability followed the death of Emperor Henry V (r. 1111–25), who died without heirs, until Frederick I Barbarossa (r. 1155–90) took the imperial throne. Although he ruled effectively, the basic problems remained, and his successors continued to struggle into the 13th century. Barbarossa's grandson Frederick II (r. 1220–1250), who was also heir to the throne of Sicily through his mother, clashed repeatedly with the papacy. His court was famous for its scholars and he was often accused of heresy. He and his successors faced many difficulties, including the invasion of the Mongols into Europe in the mid-13th century. The Mongols first shattered the Kievan Rus' principalities and then invaded Eastern Europe in 1241, 1259, and 1287. Under the Capetian dynasty the French monarchy slowly began to expand its authority over the nobility, growing out of the Île-de-France to exert control over more of the country in the 11th and 12th centuries. They faced a powerful rival in the Dukes of Normandy, who in 1066 under William the Conqueror (duke 1035–1087) conquered England (where William reigned as king 1066–87) and created a cross-channel empire that lasted, in various forms, throughout the rest of the Middle Ages. The Normans also settled in Sicily and southern Italy, when Robert Guiscard (d. 1085) landed there in 1059 and established a duchy that later became the Kingdom of Sicily. Under the Angevin dynasty of Henry II (r. 1154–89) and his son Richard I (r. 1189–99), the kings of England ruled over England and large areas of France, brought to the family by Henry II's marriage to Eleanor of Aquitaine (d. 1204), heiress to much of southern France. Richard's younger brother John (r. 1199–1216) lost Normandy and the rest of the northern French possessions in 1204 to the French King Philip II Augustus (r. 1180–1223). This led to dissension among the English nobility, while John's financial exactions to pay for his unsuccessful attempts to regain Normandy led in 1215 to "Magna Carta", a charter that confirmed the rights and privileges of free men in England. Under Henry III (r. 1216–72), John's son, further concessions were made to the nobility, and royal power was diminished. The French monarchy continued to make gains against the nobility during the late 12th and 13th centuries, bringing more territories within the kingdom under the king's personal rule and centralising the royal administration. Under Louis IX (r. 1226–70), royal prestige rose to new heights as Louis served as a mediator for most of Europe. 
In Iberia, the Christian states, which had been confined to the north-western part of the peninsula, began to push back against the Islamic states in the south, a period known as the "Reconquista". By about 1150, the Christian north had coalesced into the five major kingdoms of León, Castile, Aragon, Navarre, and Portugal. Southern Iberia remained under control of Islamic states, initially under the Caliphate of Córdoba, which broke up in 1031 into a shifting number of petty states known as "taifas", who fought with the Christians until the Almohad Caliphate re-established centralised rule over Southern Iberia in the 1170s. Christian forces advanced again in the early 13th century, culminating in the capture of Seville in 1248. In the 11th century, the Seljuk Turks took over much of the Middle East, occupying Persia during the 1040s, Armenia in the 1060s, and Jerusalem in 1070. In 1071, the Turkish army defeated the Byzantine army at the Battle of Manzikert and captured the Byzantine Emperor Romanus IV (r. 1068–71). The Turks were then free to invade Asia Minor, which dealt a dangerous blow to the Byzantine Empire by seizing a large part of its population and its economic heartland. Although the Byzantines regrouped and recovered somewhat, they never fully regained Asia Minor and were often on the defensive. The Turks also had difficulties, losing control of Jerusalem to the Fatimids of Egypt and suffering from a series of internal civil wars. The Byzantines also faced a revived Bulgaria, which in the late 12th and 13th centuries spread throughout the Balkans. The crusades were intended to seize Jerusalem from Muslim control. The First Crusade was proclaimed by Pope Urban II (pope 1088–99) at the Council of Clermont in 1095 in response to a request from the Byzantine Emperor Alexios I Komnenos (r. 1081–1118) for aid against further Muslim advances. Urban promised indulgence to anyone who took part. Tens of thousands of people from all levels of society mobilised across Europe and captured Jerusalem in 1099. One feature of the crusades was the pogroms against local Jews that often took place as the crusaders left their countries for the East. These were especially brutal during the First Crusade, when the Jewish communities in Cologne, Mainz, and Worms were destroyed, and other communities in cities between the rivers Seine and the Rhine suffered destruction. Another outgrowth of the crusades was the foundation of a new type of monastic order, the military orders of the Templars and Hospitallers, which fused monastic life with military service. The crusaders consolidated their conquests into crusader states. During the 12th and 13th centuries, there were a series of conflicts between those states and the surrounding Islamic states. Appeals from those states to the papacy led to further crusades, such as the Third Crusade, called to try to regain Jerusalem, which had been captured by Saladin (d. 1193) in 1187. In 1203, the Fourth Crusade was diverted from the Holy Land to Constantinople, and captured the city in 1204, setting up a Latin Empire of Constantinople and greatly weakening the Byzantine Empire. The Byzantines recaptured the city in 1261, but never regained their former strength. By 1291 all the crusader states had been captured or forced from the mainland, although a titular Kingdom of Jerusalem survived on the island of Cyprus for several years afterwards. Popes called for crusades to take place elsewhere besides the Holy Land: in Spain, southern France, and along the Baltic. 
The Spanish crusades became fused with the "Reconquista" of Spain from the Muslims. Although the Templars and Hospitallers took part in the Spanish crusades, similar Spanish military religious orders were founded, most of which had become part of the two main orders of Calatrava and Santiago by the beginning of the 13th century. Northern Europe also remained outside Christian influence until the 11th century or later, and became a crusading venue as part of the Northern Crusades of the 12th to 14th centuries. These crusades also spawned a military order, the Order of the Sword Brothers. Another order, the Teutonic Knights, although founded in the crusader states, focused much of its activity in the Baltic after 1225, and in 1309 moved its headquarters to Marienburg in Prussia. During the 11th century, developments in philosophy and theology led to increased intellectual activity. There was debate between the realists and the nominalists over the concept of "universals". Philosophical discourse was stimulated by the rediscovery of Aristotle and his emphasis on empiricism and rationalism. Scholars such as Peter Abelard (d. 1142) and Peter Lombard (d. 1164) introduced Aristotelian logic into theology. In the late 11th and early 12th centuries cathedral schools spread throughout Western Europe, signalling the shift of learning from monasteries to cathedrals and towns. Cathedral schools were in turn replaced by the universities established in major European cities. Philosophy and theology fused in scholasticism, an attempt by 12th- and 13th-century scholars to reconcile authoritative texts, most notably Aristotle and the Bible. This movement tried to employ a systematic approach to truth and reason and culminated in the thought of Thomas Aquinas (d. 1274), who wrote the "Summa Theologica", or "Summary of Theology". Chivalry and the ethos of courtly love developed in royal and noble courts. This culture was expressed in the vernacular languages rather than Latin, and comprised poems, stories, legends, and popular songs spread by troubadours, or wandering minstrels. Often the stories were written down in the "chansons de geste", or "songs of great deeds", such as "The Song of Roland" or "The Song of Hildebrand". Secular and religious histories were also produced. Geoffrey of Monmouth (d. c. 1155) composed his "Historia Regum Britanniae", a collection of stories and legends about Arthur. Other works were more clearly history, such as Otto von Freising's (d. 1158) "Gesta Friderici Imperatoris" detailing the deeds of Emperor Frederick Barbarossa, or William of Malmesbury's (d. c. 1143) "Gesta Regum" on the kings of England. Legal studies advanced during the 12th century. Both secular law and canon law, or ecclesiastical law, were studied in the High Middle Ages. Secular law, or Roman law, was advanced greatly by the discovery of the "Corpus Juris Civilis" in the 11th century, and by 1100 Roman law was being taught at Bologna. This led to the recording and standardisation of legal codes throughout Western Europe. Canon law was also studied, and around 1140 a monk named Gratian (fl. 12th century), a teacher at Bologna, wrote what became the standard text of canon law, the "Decretum". Among the results of the Greek and Islamic influence on this period in European history was the replacement of Roman numerals with the decimal positional number system and the introduction of algebra, which allowed more advanced mathematics. 
Astronomy advanced following the translation of Ptolemy's "Almagest" from Greek into Latin in the late 12th century. Medicine was also studied, especially in southern Italy, where Islamic medicine influenced the school at Salerno. In the 12th and 13th centuries, Europe experienced economic growth and innovations in methods of production. Major technological advances included the invention of the windmill, the first mechanical clocks, the manufacture of distilled spirits, and the use of the astrolabe. Convex spectacles were invented around 1286 by an unknown Italian artisan, probably working in or near Pisa. The development of a three-field rotation system for planting crops increased the usage of land from one half in use each year under the old two-field system to two-thirds under the new system, with a consequent increase in production. The development of the heavy plough allowed heavier soils to be farmed more efficiently, aided by the spread of the horse collar, which led to the use of draught horses in place of oxen. Horses are faster than oxen and require less pasture, factors that aided the implementation of the three-field system. The construction of cathedrals and castles advanced building technology, leading to the development of large stone buildings. Ancillary structures included new town halls, houses, bridges, and tithe barns. Shipbuilding improved with the use of the rib and plank method rather than the old Roman system of mortise and tenon. Other improvements to ships included the use of lateen sails and the stern-post rudder, both of which increased the speed at which ships could be sailed. In military affairs, the use of infantry with specialised roles increased. Along with the still-dominant heavy cavalry, armies often included mounted and infantry crossbowmen, as well as sappers and engineers. Crossbows, which had been known in Late Antiquity, increased in use partly because of the increase in siege warfare in the 10th and 11th centuries. The increasing use of crossbows during the 12th and 13th centuries led to the use of closed-face helmets, heavy body armour, and horse armour. Gunpowder was known in Europe by the mid-13th century with a recorded use in European warfare by the English against the Scots in 1304, although it was merely used as an explosive and not as a weapon. Cannon were being used for sieges in the 1320s, and hand-held guns were in use by the 1360s. In the 10th century the establishment of churches and monasteries led to the development of stone architecture that elaborated vernacular Roman forms, from which the term "Romanesque" is derived. Where available, Roman brick and stone buildings were recycled for their materials. From the tentative beginnings known as the First Romanesque, the style flourished and spread across Europe in a remarkably homogeneous form. Just before 1000 there was a great wave of building stone churches all over Europe. Romanesque buildings have massive stone walls, openings topped by semi-circular arches, small windows, and, particularly in France, arched stone vaults. The large portal with coloured sculpture in high relief became a central feature of façades, especially in France, and the capitals of columns were often carved with narrative scenes of imaginative monsters and animals. According to art historian C. R. Dodwell, "virtually all the churches in the West were decorated with wall-paintings", of which few survive. 
Simultaneous with the development in church architecture, the distinctive European form of the castle was developed and became crucial to politics and warfare. Romanesque art, especially metalwork, was at its most sophisticated in Mosan art, in which distinct artistic personalities including Nicholas of Verdun (d. 1205) become apparent, and an almost classical style is seen in works such as a font at Liège, contrasting with the writhing animals of the exactly contemporary Gloucester Candlestick. Large illuminated bibles and psalters were the typical forms of luxury manuscripts, and wall-painting flourished in churches, often following a scheme with a "Last Judgement" on the west wall, a Christ in Majesty at the east end, and narrative biblical scenes down the nave, or in the best surviving example, at Saint-Savin-sur-Gartempe, on the barrel-vaulted roof. From the early 12th century, French builders developed the Gothic style, marked by the use of rib vaults, pointed arches, flying buttresses, and large stained glass windows. It was used mainly in churches and cathedrals and continued in use until the 16th century in much of Europe. Classic examples of Gothic architecture include Chartres Cathedral and Reims Cathedral in France as well as Salisbury Cathedral in England. Stained glass became a crucial element in the design of churches, which continued to use extensive wall-paintings, now almost all lost. During this period the practice of manuscript illumination gradually passed from monasteries to lay workshops, so that according to Janetta Benton "by 1300 most monks bought their books in shops", and the book of hours developed as a form of devotional book for lay-people. Metalwork continued to be the most prestigious form of art, with Limoges enamel a popular and relatively affordable option for objects such as reliquaries and crosses. In Italy the innovations of Cimabue and Duccio, followed by the Trecento master Giotto (d. 1337), greatly increased the sophistication and status of panel painting and fresco. Increasing prosperity during the 12th century resulted in greater production of secular art; many carved ivory objects such as gaming-pieces, combs, and small religious figures have survived. Monastic reform became an important issue during the 11th century, as elites began to worry that monks were not adhering to the rules binding them to a strictly religious life. Cluny Abbey, founded in the Mâcon region of France in 909, was established as part of the Cluniac Reforms, a larger movement of monastic reform in response to this fear. Cluny quickly established a reputation for austerity and rigour. It sought to maintain a high quality of spiritual life by placing itself under the protection of the papacy and by electing its own abbot without interference from laymen, thus maintaining economic and political independence from local lords. Monastic reform inspired change in the secular Church. The ideals that it was based upon were brought to the papacy by Pope Leo IX (pope 1049–1054), and provided the ideology of the clerical independence that led to the Investiture Controversy in the late 11th century. This involved Pope Gregory VII (pope 1073–85) and Emperor Henry IV, who initially clashed over episcopal appointments, a dispute that turned into a battle over the ideas of investiture, clerical marriage, and simony. 
The emperor saw the protection of the Church as one of his responsibilities as well as wanting to preserve the right to appoint his own choices as bishops within his lands, but the papacy insisted on the Church's independence from secular lords. These issues remained unresolved after the compromise of 1122 known as the Concordat of Worms. The dispute represents a significant stage in the creation of a papal monarchy separate from and equal to lay authorities. It also had the permanent consequence of empowering German princes at the expense of the German emperors. The High Middle Ages was a period of great religious movements. Besides the Crusades and monastic reforms, people sought to participate in new forms of religious life. New monastic orders were founded, including the Carthusians and the Cistercians. The latter especially expanded rapidly in their early years under the guidance of Bernard of Clairvaux (d. 1153). These new orders were formed in response to a feeling that Benedictine monasticism no longer met the needs of the laity, who, along with those wishing to enter the religious life, wanted a return to the simpler eremitical monasticism of early Christianity, or to live an apostolic life. Religious pilgrimages were also encouraged. Old pilgrimage sites such as Rome, Jerusalem, and Compostela received increasing numbers of visitors, and new sites such as Monte Gargano and Bari rose to prominence. In the 13th century mendicant orders (the Franciscans and the Dominicans), who swore vows of poverty and earned their living by begging, were approved by the papacy. Religious groups such as the Waldensians and the Humiliati also attempted to return to the life of early Christianity in the middle 12th and early 13th centuries, but they were condemned as heretical by the papacy. Others joined the Cathars, another heretical movement condemned by the papacy. In 1209, a crusade was preached against the Cathars, the Albigensian Crusade, which, in combination with the medieval Inquisition, eliminated them. The first years of the 14th century were marked by famines, culminating in the Great Famine of 1315–17. The causes of the Great Famine included the slow transition from the Medieval Warm Period to the Little Ice Age, which left the population vulnerable when bad weather caused crop failures. The years 1313–14 and 1317–21 were excessively rainy throughout Europe, resulting in widespread crop failures. The climate change, which resulted in a declining average annual temperature for Europe during the 14th century, was accompanied by an economic downturn. These troubles were followed in 1347 by the Black Death, a pandemic that spread throughout Europe during the following three years. The death toll was probably about 35 million people in Europe, about one-third of the population. Towns were especially hard-hit because of their crowded conditions. Large areas of land were left sparsely inhabited, and in some places fields were left unworked. Wages rose as landlords sought to entice the reduced number of available workers to their fields. Further problems were lower rents and lower demand for food, both of which cut into agricultural income. Urban workers also felt that they had a right to greater earnings, and popular uprisings broke out across Europe. Among the uprisings were the "jacquerie" in France, the Peasants' Revolt in England, and revolts in the cities of Florence in Italy and Ghent and Bruges in Flanders. 
The trauma of the plague led to an increased piety throughout Europe, manifested by the foundation of new charities, the self-mortification of the flagellants, and the scapegoating of Jews. Conditions were further unsettled by the return of the plague throughout the rest of the 14th century; it continued to strike Europe periodically during the rest of the Middle Ages. Society throughout Europe was disturbed by the dislocations caused by the Black Death. Lands that had been marginally productive were abandoned, as the survivors were able to acquire more fertile areas. Although serfdom declined in Western Europe it became more common in Eastern Europe, as landlords imposed it on those of their tenants who had previously been free. Most peasants in Western Europe managed to change the work they had previously owed to their landlords into cash rents. The percentage of serfs amongst the peasantry declined from a high of 90 percent to closer to 50 percent by the end of the period. Landlords also became more conscious of common interests with other landholders, and they joined together to extort privileges from their governments. Partly at the urging of landlords, governments attempted to legislate a return to the economic conditions that existed before the Black Death. Non-clergy became increasingly literate, and urban populations began to imitate the nobility's interest in chivalry. Jewish communities were expelled from England in 1290 and from France in 1306. Although some were allowed back into France, most were not, and many Jews emigrated eastwards to Poland and Hungary. The Jews were expelled from Spain in 1492, and dispersed to Turkey, France, Italy, and Holland. The rise of banking in Italy during the 13th century continued throughout the 14th century, fuelled partly by the increasing warfare of the period and the needs of the papacy to move money between kingdoms. Many banking firms loaned money to royalty, at great risk, as some were bankrupted when kings defaulted on their loans. Strong, royalty-based nation states rose throughout Europe in the Late Middle Ages, particularly in England, France, and the Christian kingdoms of the Iberian Peninsula: Aragon, Castile, and Portugal. The long conflicts of the period strengthened royal control over the kingdoms and were extremely hard on the peasantry. Kings profited from warfare that extended royal legislation and increased the lands they directly controlled. Paying for the wars required that methods of taxation become more effective and efficient, and the rate of taxation often increased. The requirement to obtain the consent of taxpayers allowed representative bodies such as the English Parliament and the French Estates General to gain power and authority. Throughout the 14th century, French kings sought to expand their influence at the expense of the territorial holdings of the nobility. They ran into difficulties when attempting to confiscate the holdings of the English kings in southern France, leading to the Hundred Years' War, waged from 1337 to 1453. Early in the war the English under Edward III (r. 1327–77) and his son Edward, the Black Prince (d. 1376), won the battles of Crécy and Poitiers, captured the city of Calais, and won control of much of France. The resulting stresses almost caused the disintegration of the French kingdom during the early years of the war. In the early 15th century, France again came close to dissolving, but in the late 1420s the military successes of Joan of Arc (d. 
1431) led to the victory of the French and the capture of the last English possessions in southern France in 1453. The price was high, as the population of France at the end of the Wars was likely half what it had been at the start of the conflict. Conversely, the Wars had a positive effect on English national identity, doing much to fuse the various local identities into a national English ideal. The conflict with France also helped create a national culture in England separate from French culture, which had previously been the dominant influence. The dominance of the English longbow began during early stages of the Hundred Years' War, and cannon appeared on the battlefield at Crécy in 1346. In modern-day Germany, the Holy Roman Empire continued to rule, but the elective nature of the imperial crown meant there was no enduring dynasty around which a strong state could form. Further east, the kingdoms of Poland, Hungary, and Bohemia grew powerful. In Iberia, the Christian kingdoms continued to gain land from the Muslim kingdoms of the peninsula; Portugal concentrated on expanding overseas during the 15th century, while the other kingdoms were riven by difficulties over royal succession and other concerns. After losing the Hundred Years' War, England went on to suffer a long civil war known as the Wars of the Roses, which lasted into the 1490s and only ended when Henry Tudor (r. 1485–1509 as Henry VII) became king and consolidated power with his victory over Richard III (r. 1483–85) at Bosworth in 1485. In Scandinavia, Margaret I of Denmark (r. in Denmark 1387–1412) consolidated Norway, Denmark, and Sweden in the Union of Kalmar, which continued until 1523. The major power around the Baltic Sea was the Hanseatic League, a commercial confederation of city-states that traded from Western Europe to Russia. Scotland emerged from English domination under Robert the Bruce (r. 1306–29), who secured papal recognition of his kingship in 1328. Although the Palaeologi emperors recaptured Constantinople from the Western Europeans in 1261, they were never able to regain control of much of the former imperial lands. They usually controlled only a small section of the Balkan Peninsula near Constantinople, the city itself, and some coastal lands on the Black Sea and around the Aegean Sea. The former Byzantine lands in the Balkans were divided between the new Kingdom of Serbia, the Second Bulgarian Empire and the city-state of Venice. The power of the Byzantine emperors was threatened by a new Turkish tribe, the Ottomans, who established themselves in Anatolia in the 13th century and steadily expanded throughout the 14th century. The Ottomans expanded into Europe, reducing Bulgaria to a vassal state by 1366 and taking over Serbia after its defeat at the Battle of Kosovo in 1389. Western Europeans rallied to the plight of the Christians in the Balkans and declared a new crusade in 1396; a great army was sent to the Balkans, where it was defeated at the Battle of Nicopolis. Constantinople was finally captured by the Ottomans in 1453. During the tumultuous 14th century, disputes within the leadership of the Church led to the Avignon Papacy of 1309–76, also called the "Babylonian Captivity of the Papacy" (a reference to the Babylonian captivity of the Jews), and then to the Great Schism, lasting from 1378 to 1418, when there were two and later three rival popes, each supported by several states. 
Ecclesiastical officials convened at the Council of Constance in 1414, and in the following year the council deposed one of the rival popes, leaving only two claimants. Further depositions followed, and in November 1417 the council elected Martin V (pope 1417–31) as pope. Besides the schism, the Western Church was riven by theological controversies, some of which turned into heresies. John Wycliffe (d. 1384), an English theologian, was condemned as a heretic in 1415 for teaching that the laity should have access to the text of the Bible as well as for holding views on the Eucharist that were contrary to Church doctrine. Wycliffe's teachings influenced two of the major heretical movements of the later Middle Ages: Lollardy in England and Hussitism in Bohemia. The Bohemian movement began with the teaching of Jan Hus, who was burned at the stake in 1415 after being condemned as a heretic by the Council of Constance. The Hussite Church, although the target of a crusade, survived beyond the Middle Ages. Other heresies were manufactured, such as the accusations against the Knights Templar that resulted in their suppression in 1312 and the division of their great wealth between the French King Philip IV (r. 1285–1314) and the Hospitallers. The papacy further refined the practice of the Mass in the Late Middle Ages, holding that the clergy alone was allowed to partake of the wine in the Eucharist. This further distanced the secular laity from the clergy. The laity continued the practices of pilgrimages, veneration of relics, and belief in the power of the Devil. Mystics such as Meister Eckhart (d. 1327) and Thomas à Kempis (d. 1471) wrote works that taught the laity to focus on their inner spiritual life, which laid the groundwork for the Protestant Reformation. Besides mysticism, belief in witches and witchcraft became widespread, and by the late 15th century the Church had begun to lend credence to populist fears of witchcraft with its condemnation of witches in 1484 and the publication in 1486 of the "Malleus Maleficarum", the most popular handbook for witch-hunters. During the Later Middle Ages, theologians such as John Duns Scotus (d. 1308) and William of Ockham (d. c. 1348) led a reaction against scholasticism, objecting to the application of reason to faith. Their efforts undermined the prevailing Platonic idea of "universals". Ockham's insistence that reason operates independently of faith allowed science to be separated from theology and philosophy. Legal studies were marked by the steady advance of Roman law into areas of jurisprudence previously governed by customary law. The lone exception to this trend was in England, where the common law remained pre-eminent. Other countries codified their laws; legal codes were promulgated in Castile, Poland, and Lithuania. Education remained mostly focused on the training of future clergy. The basic learning of the letters and numbers remained the province of the family or a village priest, but the secondary subjects of the trivium (grammar, rhetoric, and logic) were studied in cathedral schools or in schools provided by cities. Commercial secondary schools spread, and some Italian towns had more than one such enterprise. Universities also spread throughout Europe in the 14th and 15th centuries. Lay literacy rates rose, but were still low; one estimate gave a literacy rate of ten percent of males and one percent of females in 1500. The publication of vernacular literature increased, with Dante (d. 1321), Petrarch (d. 1374) and Giovanni Boccaccio (d. 
1375) in 14th-century Italy, Geoffrey Chaucer (d. 1400) and William Langland (d. c. 1386) in England, and François Villon (d. 1464) and Christine de Pizan (d. c. 1430) in France. Much literature remained religious in character, and although a great deal of it continued to be written in Latin, a new demand developed for saints' lives and other devotional tracts in the vernacular languages. This was fed by the growth of the "Devotio Moderna" movement, most prominently in the formation of the Brethren of the Common Life, but also in the works of German mystics such as Meister Eckhart and Johannes Tauler (d. 1361). Theatre also developed in the guise of miracle plays put on by the Church. At the end of the period, the development of the printing press in about 1450 led to the establishment of publishing houses throughout Europe by 1500. In the early 15th century, the countries of the Iberian peninsula began to sponsor exploration beyond the boundaries of Europe. Prince Henry the Navigator of Portugal (d. 1460) sent expeditions that discovered the Canary Islands, the Azores, and Cape Verde during his lifetime. After his death, exploration continued; Bartolomeu Dias (d. 1500) went around the Cape of Good Hope in 1486 and Vasco da Gama (d. 1524) sailed around Africa to India in 1498. The combined Spanish monarchies of Castile and Aragon sponsored the voyage of exploration by Christopher Columbus (d. 1506) in 1492 that discovered the Americas. The English crown under Henry VII sponsored the voyage of John Cabot (d. 1498) in 1497, which landed on Cape Breton Island. One of the major developments in the military sphere during the Late Middle Ages was the increased use of infantry and light cavalry. The English also employed longbowmen, but other countries were unable to create similar forces with the same success. Armour continued to advance, spurred by the increasing power of crossbows, and plate armour was developed to protect soldiers from crossbows as well as the hand-held guns that were developed. Pole arms reached new prominence with the development of the Flemish and Swiss infantry armed with pikes and other long spears. In agriculture, the increased usage of sheep with long-fibred wool allowed a stronger thread to be spun. In addition, the spinning wheel replaced the traditional distaff for spinning wool, tripling production. A less technological refinement that still greatly affected daily life was the use of buttons as closures for garments, which allowed for better fitting without having to lace clothing on the wearer. Windmills were refined with the creation of the tower mill, allowing the upper part of the windmill to be spun around to face the direction from which the wind was blowing. The blast furnace appeared around 1350 in Sweden, increasing the quantity of iron produced and improving its quality. The first patent law in 1447 in Venice protected the rights of inventors to their inventions. The Late Middle Ages in Europe as a whole correspond to the Trecento and Early Renaissance cultural periods in Italy. Northern Europe and Spain continued to use Gothic styles, which became increasingly elaborate in the 15th century, until almost the end of the period. International Gothic was a courtly style that reached much of Europe in the decades around 1400, producing masterpieces such as the Très Riches Heures du Duc de Berry. 
All over Europe secular art continued to increase in quantity and quality, and in the 15th century the mercantile classes of Italy and Flanders became important patrons, commissioning small portraits of themselves in oils as well as a growing range of luxury items such as jewellery, ivory caskets, cassone chests, and maiolica pottery. These objects also included the Hispano-Moresque ware produced by mostly Mudéjar potters in Spain. Although royalty owned huge collections of plate, little survives except for the Royal Gold Cup. Italian silk manufacture developed, so that Western churches and elites no longer needed to rely on imports from Byzantium or the Islamic world. In France and Flanders tapestry weaving of sets like "The Lady and the Unicorn" became a major luxury industry. The large external sculptural schemes of Early Gothic churches gave way to more sculpture inside the building, as tombs became more elaborate and other features such as pulpits were sometimes lavishly carved, as in the Pulpit by Giovanni Pisano in Sant'Andrea. Painted or carved wooden relief altarpieces became common, especially as churches created many side-chapels. Early Netherlandish painting by artists such as Jan van Eyck (d. 1441) and Rogier van der Weyden (d. 1464) rivalled that of Italy, as did northern illuminated manuscripts, which in the 15th century began to be collected on a large scale by secular elites, who also commissioned secular books, especially histories. From about 1450 printed books rapidly became popular, though still expensive. There were around 30,000 different editions of incunabula, or works printed before 1500, by which time illuminated manuscripts were commissioned only by royalty and a few others. Very small woodcuts, nearly all religious, were affordable even by peasants in parts of Northern Europe from the middle of the 15th century. More expensive engravings supplied a wealthier market with a variety of images. The medieval period is frequently caricatured as a "time of ignorance and superstition" that placed "the word of religious authorities over personal experience and rational activity." This is a legacy from both the Renaissance and Enlightenment when scholars favourably contrasted their intellectual cultures with those of the medieval period. Renaissance scholars saw the Middle Ages as a period of decline from the high culture and civilisation of the Classical world; Enlightenment scholars saw reason as superior to faith, and thus viewed the Middle Ages as a time of ignorance and superstition. Others argue that reason was generally held in high regard during the Middle Ages. Science historian Edward Grant writes, "If revolutionary rational thoughts were expressed [in the 18th century], they were only made possible because of the long medieval tradition that established the use of reason as one of the most important of human activities". Also, contrary to common belief, David Lindberg writes, "the late medieval scholar rarely experienced the coercive power of the Church and would have regarded himself as free (particularly in the natural sciences) to follow reason and observation wherever they led". The caricature of the period is also reflected in some more specific notions. One misconception, first propagated in the 19th century and still very common, is that all people in the Middle Ages believed that the Earth was flat. This is untrue, as lecturers in the medieval universities commonly argued that evidence showed the Earth was a sphere. 
Lindberg and Ronald Numbers, another scholar of the period, state that there "was scarcely a Christian scholar of the Middle Ages who did not acknowledge [Earth's] sphericity and even know its approximate circumference". Other misconceptions such as "the Church prohibited autopsies and dissections during the Middle Ages", "the rise of Christianity killed off ancient science", or "the medieval Christian Church suppressed the growth of natural philosophy", are all cited by Numbers as examples of widely popular myths that still pass as historical truth, although they are not supported by current historical research. Manitoba Manitoba is a province at the longitudinal centre of Canada. It is often considered one of the three prairie provinces (with Alberta and Saskatchewan) and is Canada's fifth-most populous province with its estimated 1.3 million people. Manitoba covers 649,950 square kilometres with a widely varied landscape, stretching from the northern oceanic coastline to the southern border with the United States. The province is bordered by the provinces of Ontario to the east and Saskatchewan to the west, the territories of Nunavut to the north and the Northwest Territories to the northwest, and the US states of North Dakota and Minnesota to the south. Aboriginal peoples have inhabited what is now Manitoba for thousands of years. In the late 17th century, fur traders arrived in the area when it was part of Rupert's Land and owned by the Hudson's Bay Company. In 1869, negotiations for the creation of the province of Manitoba led to an armed uprising of the Métis people against the Government of Canada, a conflict known as the Red River Rebellion. The rebellion's resolution led to the Parliament of Canada passing the Manitoba Act in 1870 that created the province. Manitoba's capital and largest city, Winnipeg, is the eighth-largest census metropolitan area in Canada. Other census agglomerations in the province are Brandon, Steinbach, Portage la Prairie, and Thompson. The name "Manitoba" is believed to be derived from the Cree, Ojibwe or Assiniboine languages. The name derives from Cree "manitou-wapow" or Ojibwa "manidoobaa", both meaning "straits of Manitou, the Great Spirit", a place referring to what are now called The Narrows in the centre of Lake Manitoba. It may also be from the Assiniboine for "Lake of the Prairie". The lake was known to French explorers as "Lac des Prairies." Thomas Spence chose the name to refer to a new republic he proposed for the area south of the lake. Métis leader Louis Riel also chose the name, and it was accepted in Ottawa under the Manitoba Act of 1870. Manitoba is bordered by the provinces of Ontario to the east and Saskatchewan to the west, the territories of Nunavut to the north, and the US states of North Dakota and Minnesota to the south. The province possibly meets the Northwest Territories at the four corners quadripoint to the extreme northwest, though surveys have not been completed and laws are unclear about the exact location of the Nunavut–NWT boundary. Manitoba adjoins Hudson Bay to the northeast, and is the only prairie province to have a saltwater coastline. The Port of Churchill is Canada's only Arctic deep-water port. Lake Winnipeg is the tenth-largest freshwater lake in the world. Hudson Bay is the world's second-largest bay by area. Manitoba is at the heart of the giant Hudson Bay watershed, once known as Rupert's Land. It was a vital area of the Hudson's Bay Company, with many rivers and lakes that provided excellent opportunities for the lucrative fur trade. 
The province has a saltwater coastline bordering Hudson Bay and more than 110,000 lakes, covering approximately 15.6 percent of its surface area. Manitoba's major lakes are Lake Manitoba, Lake Winnipegosis, and Lake Winnipeg, the tenth-largest freshwater lake in the world. Some traditional Native lands and boreal forest on Lake Winnipeg's east side are a proposed UNESCO World Heritage Site. Manitoba is at the centre of the Hudson Bay drainage basin, with a high volume of water draining into Lake Winnipeg and then north down the Nelson River into Hudson Bay. This basin's rivers reach far west to the mountains, far south into the United States, and east into Ontario. Major watercourses include the Red, Assiniboine, Nelson, Winnipeg, Hayes, Whiteshell and Churchill rivers. Most of Manitoba's inhabited south has developed in the prehistoric bed of Glacial Lake Agassiz. This region, particularly the Red River Valley, is flat and fertile; receding glaciers left hilly and rocky areas throughout the province. Baldy Mountain is the province's highest point, and the Hudson Bay coast is the lowest, at sea level. Riding Mountain, the Pembina Hills, Sandilands Provincial Forest, and the Canadian Shield are also upland regions. Much of the province's sparsely inhabited north and east lies on the irregular granite Canadian Shield, including Whiteshell, Atikaki, and Nopiming Provincial Parks. Extensive agriculture is found only in the province's southern areas, although there is grain farming in the Carrot Valley Region (near The Pas). The most common agricultural activity is cattle husbandry (34.6%), followed by assorted grains (19.0%) and oilseed (7.9%). Around 12 percent of Canada's farmland is in Manitoba. Manitoba has an extreme continental climate. Temperatures and precipitation generally decrease from south to north and increase from east to west. Manitoba is far from the moderating influences of mountain ranges or large bodies of water. Because of the generally flat landscape, it is exposed to cold Arctic high-pressure air masses from the northwest during January and February. In the summer, air masses sometimes come out of the Southern United States, as warm humid air is drawn northward from the Gulf of Mexico. Temperatures exceed 30 °C numerous times each summer, and the combination of heat and humidity can bring the humidex value to the mid-40s. Carman, Manitoba recorded the second-highest humidex ever in Canada in 2007, with 53.0. According to Environment Canada, Manitoba ranked first for clearest skies year round, and ranked second for clearest skies in the summer and for the sunniest province in the winter and spring. Southern Manitoba (including the city of Winnipeg) falls into the humid continental climate zone (Köppen Dfb). This area is cold and windy in the winter and has frequent blizzards because of the open landscape. Summers are warm and of moderate length. This region is the most humid area in the prairie provinces, with moderate precipitation. Southwestern Manitoba, though under the same climate classification as the rest of Southern Manitoba, is closer to the semi-arid interior of Palliser's Triangle. The area is drier and more prone to droughts than other parts of southern Manitoba. This area is cold and windy in the winter and has frequent blizzards due to the openness of the prairie landscape. Summers are generally warm to hot, with low to moderate humidity. 
Southern parts of the province, just north of Tornado Alley, experience tornadoes, with 16 confirmed touchdowns in 2016. In 2007, on 22 and 23 June, numerous tornadoes touched down, the largest an F5 tornado that devastated parts of Elie (the strongest recorded tornado in Canada). The province's northern sections (including the city of Thompson) fall in the subarctic climate zone (Köppen climate classification "Dfc"). This region features long and extremely cold winters and brief, warm summers with little precipitation. Overnight temperatures as low as −40 °C occur on several days each winter. Manitoba's natural communities may be grouped within five ecozones: boreal plains, prairie, taiga shield, boreal shield and Hudson plains. Three of these (taiga shield, boreal shield and Hudson plains) contain part of the boreal forest of Canada which covers the province's eastern, southeastern, and northern reaches. Forests make up about 48 percent of the province's land area. The forests consist of pines (Jack Pine, Red Pine, Eastern White Pine), spruces (White Spruce, Black Spruce), Balsam Fir, Tamarack (larch), poplars (Trembling Aspen, Balsam Poplar), birches (White Birch, Swamp Birch) and small pockets of Eastern White Cedar. Two sections of the province are not dominated by forest. The province's northeast corner bordering Hudson Bay is above the treeline and is considered tundra. The tallgrass prairie once dominated the south central and southeastern parts including the Red River Valley. Mixed grass prairie is found in the southwestern region. Agriculture has replaced much of the natural prairie but prairie still can be found in parks and protected areas; some are notable for the presence of the endangered western prairie fringed orchid. Manitoba is especially noted for its northern polar bear population; Churchill is commonly referred to as the "Polar Bear Capital". Other large animals, including moose, white-tailed deer, black bears, cougars, lynx, and wolves, are common throughout the province, especially in the provincial and national parks. There is a large population of red-sided garter snakes near Narcisse; the dens there are home to the world's largest concentration of snakes. Manitoba's bird diversity is enhanced by its position on two major migration routes, with 392 confirmed identified species, 287 of which nest within the province. These include the great grey owl, the province's official bird, and the endangered peregrine falcon. Manitoba's lakes host 18 species of game fish, particularly species of trout, pike, and goldeye, as well as many smaller fish. Modern-day Manitoba was inhabited by the First Nations people shortly after the last ice age glaciers retreated in the southwest about 10,000 years ago; the first exposed land was the Turtle Mountain area. The Ojibwe, Cree, Dene, Sioux, Mandan, and Assiniboine peoples founded settlements, and other tribes entered the area to trade. In Northern Manitoba, quartz was mined to make arrowheads. The first farming in Manitoba was along the Red River, where corn and other seed crops were planted before contact with Europeans. In 1611, Henry Hudson was one of the first Europeans to sail into what is now known as Hudson Bay, where he was abandoned by his crew. The first European to reach present-day central and southern Manitoba was Sir Thomas Button, who travelled upstream along the Nelson River to Lake Winnipeg in 1612 in an unsuccessful attempt to find and rescue Hudson. 
When the British ship "Nonsuch" sailed into Hudson Bay in 1668–1669, she became the first trading vessel to reach the area; that voyage led to the formation of the Hudson's Bay Company, to which the British government gave absolute control of the entire Hudson Bay watershed. This watershed was named Rupert's Land, after Prince Rupert, who helped to subsidize the Hudson's Bay Company. York Factory was founded in 1684 after the original fort of the Hudson's Bay Company, Fort Nelson (built in 1682), was destroyed by rival French traders. Pierre Gaultier de Varennes, sieur de La Vérendrye, visited the Red River Valley in the 1730s to help open the area for French exploration and trade. As French explorers entered the area, a Montreal-based company, the North West Company, began trading with the local Indigenous people. Both the North West Company and the Hudson's Bay Company built fur-trading forts; the two companies competed in southern Manitoba, occasionally resulting in violence, until they merged in 1821 (the Hudson's Bay Company Archives in Winnipeg preserve the history of this era). Great Britain secured the territory in 1763 after its victory over France in the North American theatre of the Seven Years' War, known in North America as the French and Indian War, which lasted from 1754 to 1763. The founding of the first agricultural community and settlements in 1812 by Lord Selkirk, north of the area which is now downtown Winnipeg, led to conflict between British colonists and the Métis. Twenty colonists, including the governor, and one Métis were killed in the Battle of Seven Oaks in 1816. In 1867, Thomas Spence attempted to become president of the Republic of Manitobah, which he and his council had declared and named. Rupert's Land was ceded to Canada by the Hudson's Bay Company in 1869 and incorporated into the Northwest Territories; a lack of attention to Métis concerns caused Métis leader Louis Riel to establish a local provisional government as part of the Red River Rebellion. In response, Prime Minister John A. Macdonald introduced the Manitoba Act in the House of Commons of Canada; the bill was given Royal Assent, and Manitoba was brought into Canada as a province in 1870. Louis Riel was pursued by British army officer Garnet Wolseley because of the rebellion, and Riel fled into exile. The Canadian government blocked the Métis' attempts to obtain land promised to them as part of Manitoba's entry into confederation. Facing racism from the new flood of white settlers from Ontario, large numbers of Métis moved to what would become Saskatchewan and Alberta. Numbered Treaties were signed in the late 19th century with the chiefs of various First Nations that lived in the area. These treaties made specific promises of land for every family. As a result, a reserve system was established under the jurisdiction of the Federal Government. The prescribed amount of land promised to the native peoples was not always given; this led aboriginal groups to assert rights to the land through aboriginal land claims, many of which are still ongoing. The original province of Manitoba was a square one-eighteenth of its current size, and was known colloquially as the "postage stamp province". Its borders were expanded in 1881, taking land from the Northwest Territories and the District of Keewatin, but Ontario claimed a large portion of the land; the disputed portion was awarded to Ontario in 1889. 
Manitoba grew to its current size in 1912, absorbing land from the Northwest Territories to reach 60°N, uniform with the northern reach of its western neighbours Saskatchewan, Alberta and British Columbia. The Manitoba Schools Question showed the deep divergence of cultural values in the territory. The Catholic Franco-Manitobans had been guaranteed a state-supported separate school system in the original constitution of Manitoba, but a grassroots political movement among English Protestants from 1888 to 1890 demanded the end of French schools. In 1890, the Manitoba legislature passed a law removing funding for French Catholic schools. The French Catholic minority asked the federal government for support; however, the Orange Order and other anti-Catholic forces mobilized nationwide to oppose them. The federal Conservatives proposed remedial legislation to override Manitoba, but they were blocked by the Liberals, led by Wilfrid Laurier, who opposed the remedial legislation because of his belief in provincial rights. Once elected Prime Minister in 1896, Laurier implemented a compromise stating Catholics in Manitoba could have their own religious instruction for 30 minutes at the end of the day if there were enough students to warrant it, implemented on a school-by-school basis. By 1911, Winnipeg was the third largest city in Canada, and remained so until overtaken by Vancouver in the 1920s. A boomtown, it grew quickly around the start of the 20th century, with outside investors and immigrants contributing to its success. The drop in growth in the second half of the decade was a result of the opening of the Panama Canal in 1914, which reduced reliance on transcontinental railways for trade, as well as a decrease in immigration due to the outbreak of the First World War. Over 18,000 Manitoba residents enlisted in the first year of the war; by the end of the war, 14 Manitobans had received the Victoria Cross. After the First World War ended, severe discontent among farmers (over wheat prices) and union members (over wage rates) resulted in an upsurge of radicalism, coupled with a polarization over the rise of Bolshevism in Russia. The most dramatic result was the Winnipeg general strike of 1919. It began on 15 May and collapsed on 25 June 1919; as the workers gradually returned to their jobs, the Central Strike Committee decided to end the movement. Government efforts to violently crush the strike, including a Royal Northwest Mounted Police charge into a crowd of protesters that resulted in multiple casualties and one death, had led to the arrest of the movement's leaders. In the aftermath, eight leaders went on trial, and most were convicted on charges of seditious conspiracy, illegal combinations, and seditious libel; four were aliens who were deported under the Canadian Immigration Act. The Great Depression (1929–c. 1939) hit especially hard in Western Canada, including Manitoba. The collapse of the world market combined with a steep drop in agricultural production due to drought led to economic diversification, moving away from a reliance on wheat production. The Manitoba Co-operative Commonwealth Federation, forerunner to the New Democratic Party of Manitoba (NDP), was founded in 1932. Canada entered the Second World War in 1939. Winnipeg was one of the major commands for the British Commonwealth Air Training Plan to train fighter pilots, and there were air training schools throughout Manitoba. 
Several Manitoba-based regiments were deployed overseas, including Princess Patricia's Canadian Light Infantry. In an effort to raise money for the war effort, the Victory Loan campaign organized "If Day" in 1942. The event featured a simulated Nazi invasion and occupation of Manitoba, and eventually raised over C$65 million. Winnipeg was inundated during the 1950 Red River Flood and had to be partially evacuated. In that year, the Red River reached its highest level since 1861 and flooded most of the Red River Valley. The damage caused by the flood led then-Premier Duff Roblin to advocate for the construction of the Red River Floodway; it was completed in 1968 after six years of excavation. Permanent dikes were erected in eight towns south of Winnipeg, and clay dikes and diversion dams were built in the Winnipeg area. In 1997, the "Flood of the Century" caused hundreds of millions of dollars in damage in Manitoba, but the floodway prevented Winnipeg from flooding. In 1990, Prime Minister Brian Mulroney attempted to pass the Meech Lake Accord, a series of constitutional amendments to persuade Quebec to endorse the Canada Act 1982. Unanimous support in the legislature was needed to bypass public consultation. Manitoba politician Elijah Harper, a Cree, opposed the Accord because he did not believe First Nations had been adequately involved in its process, and thus the Accord failed. In 2013, Manitoba was the second province to make accessibility legislation law, protecting the rights of persons with disabilities. At the 2011 census, Manitoba had a population of 1,208,268, more than half of which is in the Winnipeg Capital Region; Winnipeg is Canada's eighth-largest Census Metropolitan Area, with a population of 730,018 (2011 Census). Although initial colonization of the province revolved mostly around homesteading, the last century has seen a shift towards urbanization; Manitoba is the only Canadian province with over fifty-five percent of its population located in a single city. According to the 2006 Canadian census, the largest ethnic group in Manitoba is English (22.9%), followed by German (19.1%), Scottish (18.5%), Ukrainian (14.7%), Irish (13.4%), North American Indian (10.6%), Polish (7.3%), Métis (6.4%), French (5.6%), Dutch (4.9%), and Russian (4.0%). Almost one-fifth of respondents also identified their ethnicity as "Canadian". There is a significant indigenous community: aboriginals (including Métis) are Manitoba's fastest-growing ethnic group, representing 13.6 percent of Manitoba's population as of 2001 (some reserves refused to allow census-takers to enumerate their populations or were otherwise incompletely counted). There is a significant Franco-Manitoban minority (148,370) and a growing aboriginal population (192,865, including the Métis). Gimli, Manitoba is home to the largest Icelandic community outside of Iceland. Most Manitobans belong to a Christian denomination: on the 2001 census, 758,760 Manitobans (68.7%) reported being Christian, followed by 13,040 (1.2%) Jewish, 5,745 (0.5%) Buddhist, 5,485 (0.5%) Sikh, 5,095 (0.5%) Muslim, 3,840 (0.3%) Hindu, 3,415 (0.3%) Aboriginal spirituality and 995 (0.1%) pagan. 201,825 Manitobans (18.3%) reported no religious affiliation. The largest Christian denominations by number of adherents were the Roman Catholic Church with 292,970 (27%); the United Church of Canada with 176,820 (16%); and the Anglican Church of Canada with 85,890 (8%). Manitoba has a moderately strong economy based largely on natural resources. 
Its Gross Domestic Product was C$50.834 billion in 2008. The province's economy grew 2.4 percent in 2008, the third consecutive year of growth; in 2009, it neither increased nor decreased. The average individual income in Manitoba in 2006 was C$25,100 (compared to a national average of C$26,500), ranking fifth-highest among the provinces. As of October 2009, Manitoba's unemployment rate was 5.8 percent. Manitoba's economy relies heavily on agriculture, tourism, energy, oil, mining, and forestry. Agriculture is vital and is found mostly in the southern half of the province, although grain farming occurs as far north as The Pas. Around 12 percent of Canadian farmland is in Manitoba. The most common type of farm found in rural areas is cattle farming (34.6%), followed by assorted grains (19.0%) and oilseed (7.9%). Manitoba is the nation's largest producer of sunflower seed and dry beans, and one of the leading sources of potatoes. Portage la Prairie is a major potato processing centre, and is home to the McCain Foods and Simplot plants, which provide French fries for McDonald's, Wendy's, and other commercial chains. Can-Oat Milling, one of the largest oat mills in the world, also has a plant in the municipality. Manitoba's largest employers are government and government-funded institutions, including crown corporations and services like hospitals and universities. Major private-sector employers are The Great-West Life Assurance Company, Cargill Ltd., and James Richardson and Sons Ltd. Manitoba also has large manufacturing and tourism sectors. Churchill's Arctic wildlife is a major tourist attraction; the town is a world capital for polar bear and beluga whale watchers. Manitoba is the only province with an Arctic deep-water seaport, at Churchill. In January 2018, the Canadian Federation of Independent Business claimed that Manitoba was the most improved province for tackling red tape. Manitoba's early economy depended on mobility and living off the land. Aboriginal Nations (Cree, Ojibwa, Dene, Sioux and Assiniboine) followed herds of bison and congregated to trade among themselves at key meeting places throughout the province. After the arrival of the first European traders in the 17th century, the economy centred on the trade of beaver pelts and other furs. Diversification of the economy came when Lord Selkirk brought the first agricultural settlers in 1811, though the triumph of the Hudson's Bay Company (HBC) over its competitors ensured the primacy of the fur trade over widespread agricultural colonization. HBC control of Rupert's Land ended in 1868; when Manitoba became a province in 1870, all land became the property of the federal government, with homesteads granted to settlers for farming. Transcontinental railways were constructed to simplify trade. Manitoba's economy depended mainly on farming, which persisted until drought and the Great Depression led to further diversification. CFB Winnipeg is a Canadian Forces Base at the Winnipeg International Airport. The base is home to flight operations support divisions and several training schools, as well as the 1 Canadian Air Division and Canadian NORAD Region Headquarters. 17 Wing of the Canadian Forces is based at CFB Winnipeg; the Wing has three squadrons and six schools. It supports 113 units from Thunder Bay to the Saskatchewan/Alberta border, and from the 49th parallel north to the high Arctic. 17 Wing acts as a deployed operating base for CF-18 Hornet fighter–bombers assigned to the Canadian NORAD Region. 
The two 17 Wing squadrons based in the city are: the 402 ("City of Winnipeg" Squadron), which flies the Canadian-designed and -produced de Havilland Canada CT-142 Dash 8 navigation trainer in support of the 1 Canadian Forces Flight Training School's Air Combat Systems Officer and Airborne Electronic Sensor Operator training programs (which train all Canadian Air Combat Systems Officers); and the 435 ("Chinthe" Transport and Rescue Squadron), which flies the Lockheed C-130 Hercules tanker/transport in airlift and search and rescue roles, and is the only Air Force squadron equipped and trained to conduct air-to-air refuelling of fighter aircraft. Canadian Forces Base Shilo (CFB Shilo) is an Operations and Training base of the Canadian Forces located east of Brandon. During the 1990s, Canadian Forces Base Shilo was designated as an Area Support Unit, acting as a local base of operations for Southwest Manitoba in times of military and civil emergency. CFB Shilo is the home of the 1st Regiment, Royal Canadian Horse Artillery, both battalions of the 1 Canadian Mechanized Brigade Group, and the Royal Canadian Artillery. The Second Battalion of Princess Patricia's Canadian Light Infantry (2 PPCLI), which was originally stationed in Winnipeg (first at Fort Osborne, then in Kapyong Barracks), has operated out of CFB Shilo since 2004. CFB Shilo hosts a training unit, 3rd Canadian Division Training Centre. It serves as a base for support units of 3rd Canadian Division, including 3 CDSG Signals Squadron, Shared Services Unit (West), 11 CF Health Services Centre, 1 Dental Unit, 1 Military Police Regiment, and an Integrated Personnel Support Centre. The base currently houses 1,700 soldiers. After the control of Rupert's Land was passed from Great Britain to the Government of Canada in 1869, Manitoba attained full-fledged rights and responsibilities of self-government as the first Canadian province carved out of the Northwest Territories. The Legislative Assembly of Manitoba was established on 14 July 1870. Political parties first emerged between 1878 and 1883, with a two-party system (Liberals and Conservatives). The United Farmers of Manitoba appeared in 1922, and later merged with the Liberals in 1932. Other parties, including the Co-operative Commonwealth Federation (CCF), appeared during the Great Depression; in the 1950s, Manitoban politics became a three-party system, and the Liberals gradually declined in power. The CCF became the New Democratic Party of Manitoba (NDP), which came to power in 1969. Since then, the Progressive Conservatives and the NDP have been the dominant parties. Like all Canadian provinces, Manitoba is governed by a unicameral legislative assembly. The executive branch is formed by the governing party; the party leader is the premier of Manitoba, the head of the executive branch. The head of state, Queen Elizabeth II, is represented by the Lieutenant Governor of Manitoba, who is appointed by the Governor General of Canada on advice of the Prime Minister. The head of state is primarily a ceremonial role, although the Lieutenant Governor has the official responsibility of ensuring that Manitoba has a duly constituted government. The Legislative Assembly consists of the 57 Members elected to represent the people of Manitoba. The premier of Manitoba is Brian Pallister of the PC Party. The PCs were elected with a majority government of 40 seats. The NDP holds 14 seats, and the Liberal Party holds three seats but does not have official party status in the Manitoba Legislature. 
The last provincial general election was held on 19 April 2016. The province is represented in federal politics by 14 Members of Parliament and six Senators. Manitoba's judiciary consists of the Court of Appeal, the Court of Queen's Bench, and the Provincial Court. The Provincial Court is primarily for criminal law; 95 percent of criminal cases in Manitoba are heard here. The Court of Queen's Bench is the highest trial court in the province. It has four jurisdictions: family law (child and family services cases), civil law, criminal law (for indictable offences), and appeals. The Court of Appeal hears appeals from both benches; its decisions can only be appealed to the Supreme Court of Canada. English and French are the official languages of the legislature and courts of Manitoba, according to §23 of the Manitoba Act, 1870 (part of the Constitution of Canada). In April 1890, the Manitoba legislature attempted to abolish the official status of French, and ceased to publish bilingual legislation. However, in 1985 the Supreme Court of Canada ruled in the Reference re Manitoba Language Rights that §23 still applied, and that legislation published only in English was invalid (unilingual legislation was declared valid for a temporary period to allow time for translation). Although French is an official language for the purposes of the legislature, legislation, and the courts, the Manitoba Act does not require it to be an official language for the purpose of the executive branch (except when performing legislative or judicial functions). Hence, Manitoba's government is not completely bilingual. The Manitoba French Language Services Policy of 1999 is intended to provide a comparable level of provincial government services in both official languages. According to the 2006 Census, 82.8 percent of Manitoba's population spoke only English, 3.2 percent spoke only French, 15.1 percent spoke both, and 0.9 percent spoke neither. In 2010, the provincial government of Manitoba passed the Aboriginal Languages Recognition Act, which gives official recognition to seven indigenous languages: Cree, Dakota, Dene, Inuktitut, Michif, Ojibway and Oji-Cree. Transportation and warehousing contribute a significant share of Manitoba's GDP. Total employment in the industry is estimated at 34,500, or around 5 percent of Manitoba's population. Trucks haul 95 percent of land freight in Manitoba, and trucking companies account for 80 percent of Manitoba's merchandise trade to the United States. Five of Canada's twenty-five largest employers in for-hire trucking are headquartered in Manitoba. A sizable portion of Manitoba's GDP comes directly or indirectly from trucking. Greyhound Canada and Grey Goose Bus Lines offer domestic bus service from the Winnipeg Bus Terminal. The terminal was relocated from downtown Winnipeg to the airport in 2009, and is a Greyhound hub. Municipalities also operate localized transit bus systems. Manitoba has two Class I railways: Canadian National Railway (CN) and Canadian Pacific Railway (CPR). Winnipeg is centrally located on the main lines of both carriers, and both maintain large inter-modal terminals in the city. CN and CPR together operate an extensive network of track in Manitoba. Via Rail offers transcontinental and Northern Manitoba passenger service from Winnipeg's Union Station. Numerous small regional and short-line railways also run trains within Manitoba: the Hudson Bay Railway, the Southern Manitoba Railway, Burlington Northern Santa Fe Manitoba, Greater Winnipeg Water District Railway, and Central Manitoba Railway. 
Together, these smaller lines operate an additional network of track in the province. Winnipeg James Armstrong Richardson International Airport, Manitoba's largest airport, is one of only a few 24-hour unrestricted airports in Canada and is part of the National Airports System. A new, larger terminal opened in October 2011. The airport handles a substantial volume of cargo annually, making it the third largest cargo airport in the country. Eleven regional passenger airlines and nine smaller and charter carriers operate out of the airport, as well as eleven air cargo carriers and seven freight forwarders. Winnipeg is a major sorting facility for both FedEx and Purolator, and receives daily trans-border service from UPS. Air Canada Cargo and Cargojet Airways use the airport as a major hub for national traffic. The Port of Churchill, owned by OmniTRAX, is the only Arctic deep-water port in Canada. It is nautically closer to ports in Northern Europe and Russia than any other port in Canada. It has four deep-sea berths for the loading and unloading of grain, general cargo and tanker vessels. The port is served by the Hudson Bay Railway (also owned by OmniTRAX). Grain represented 90 percent of the port's traffic in the 2004 shipping season. In that year, a large volume of agricultural products was shipped through the port. The first school in Manitoba was founded in 1818 by Roman Catholic missionaries in present-day Winnipeg; the first Protestant school was established in 1820. A provincial board of education was established in 1871; it was responsible for public schools and curriculum, and represented both Catholics and Protestants. The Manitoba Schools Question led to funding for French Catholic schools largely being withdrawn in favour of the English Protestant majority. Legislation making education compulsory for children between seven and fourteen was first enacted in 1916, and the leaving age was raised to sixteen in 1962. Public schools in Manitoba fall under the regulation of one of thirty-seven school divisions within the provincial education system (except for the Manitoba Band Operated Schools, which are administered by the federal government). Public schools follow a provincially mandated curriculum in either French or English. There are sixty-five funded independent schools in Manitoba, including three boarding schools. These schools must follow the Manitoban curriculum and meet other provincial requirements. There are forty-four non-funded independent schools, which are not required to meet those standards. There are five universities in Manitoba, regulated by the Ministry of Advanced Education and Literacy. Four of these universities are in Winnipeg: the University of Manitoba, the largest and most comprehensive; the University of Winnipeg, a downtown liberal arts school focused primarily on undergraduate studies; Université de Saint-Boniface, the province's only French-language university; and the Canadian Mennonite University, a religious-based institution. The Université de Saint-Boniface, established in 1818 and now affiliated with the University of Manitoba, is the oldest university in Western Canada. Brandon University, formed in 1899 and located in Brandon, is the province's only university not in Winnipeg. Manitoba has thirty-eight public libraries; of these, twelve have French-language collections and eight have significant collections in other languages. Twenty-one of these are part of the Winnipeg Public Library system. The first lending library in Manitoba was founded in 1848. 
Manitoba's culture has been influenced by traditional (Aboriginal and Métis) and modern Canadian artistic values, as well as by the cultures of its immigrant populations and American neighbours. The Minister of Culture, Heritage, Tourism and Sport is responsible for promoting and, to some extent, financing Manitoban culture. Manitoba is the birthplace of the Red River Jig, a combination of aboriginal pow-wows and European reels popular among early settlers. Manitoba's traditional music has strong roots in Métis and Aboriginal culture, in particular the old-time fiddling of the Métis. Manitoba's cultural scene also incorporates classical European traditions. The Winnipeg-based Royal Winnipeg Ballet (RWB) is Canada's oldest ballet and North America's longest continuously operating ballet company; it was granted its royal title in 1953 under Queen Elizabeth II. The Winnipeg Symphony Orchestra (WSO) performs classical music and new compositions at the Centennial Concert Hall. Manitoba Opera, founded in 1969, also performs out of the Centennial Concert Hall. Le Cercle Molière (founded 1925) is the oldest French-language theatre in Canada, and Royal Manitoba Theatre Centre (founded 1958) is Canada's oldest English-language regional theatre. Manitoba Theatre for Young People was the first English-language theatre to win the Canadian Institute of the Arts for Young Audiences Award, and offers plays for children and teenagers as well as a theatre school. The Winnipeg Art Gallery (WAG), Manitoba's largest art gallery and the sixth largest in the country, hosts an art school for children; the WAG's permanent collection comprises over twenty thousand works, with a particular emphasis on Manitoban and Canadian art. The 1960s pop group The Guess Who was formed in Manitoba, and later became the first Canadian band to have a No. 1 hit in the United States; Guess Who guitarist Randy Bachman later created Bachman–Turner Overdrive (BTO) with fellow Winnipeg-based musician Fred Turner. Fellow rocker Neil Young lived for a time in Manitoba, and played with Stephen Stills in Buffalo Springfield and again in the supergroup Crosby, Stills, Nash & Young. Soft-rock band Crash Test Dummies formed in the late 1980s in Winnipeg and were the 1992 Juno Awards Group of the Year. Several prominent Canadian films were produced in Manitoba, such as "The Stone Angel", based on the Margaret Laurence book of the same title, "The Saddest Music in the World", "Foodland", "For Angela", and "My Winnipeg". Major films shot in Manitoba include "The Assassination of Jesse James by the Coward Robert Ford" and "Capote", both of which received Academy Award nominations. "Falcon Beach", an internationally broadcast television drama, was filmed at Winnipeg Beach, Manitoba. Manitoba has a strong literary tradition. Manitoban writer Bertram Brooker won the first-ever Governor General's Award for Fiction in 1936. Cartoonist Lynn Johnston, author of the comic strip "For Better or For Worse", was nominated for a Pulitzer Prize and inducted into the Canadian Cartoonist Hall of Fame. Margaret Laurence's "The Stone Angel" and "A Jest of God" were set in Manawaka, a fictional town representing Neepawa; the latter title won the Governor General's Award in 1966. Carol Shields won both the Governor General's Award and the Pulitzer Prize for "The Stone Diaries". Gabrielle Roy, a Franco-Manitoban writer, won the Governor General's Award three times. A quote from her writings is featured on the Canadian $20 bill. 
Festivals take place throughout the province, with the largest centred in Winnipeg. The inaugural Winnipeg Folk Festival was held in 1974 as a one-time celebration to mark Winnipeg's 100th anniversary. Today, the five-day festival is one of the largest folk festivals in North America with over 70 acts from around the world and an annual attendance that exceeds 80,000. The Winnipeg Folk Festival's home – Birds Hill Provincial Park – is located 34 kilometres outside of Winnipeg and for the five days of the festival, it becomes Manitoba's third largest "city." The Festival du Voyageur is an annual ten-day event held in Winnipeg's French Quarter, and is Western Canada's largest winter festival. It celebrates Canada's fur-trading past and French-Canadian heritage and culture. Folklorama, a multicultural festival run by the Folk Arts Council, receives around 400,000 pavilion visits each year, of which about thirty percent are from non-Winnipeg residents. The Winnipeg Fringe Theatre Festival is an annual alternative theatre festival, the second-largest festival of its kind in North America (after the Edmonton International Fringe Festival). Manitoban museums document different aspects of the province's heritage. The Manitoba Museum is the largest museum in Manitoba and focuses on Manitoban history from prehistory to the 1920s. The full-size replica of the Nonsuch is the museum's showcase piece. The Manitoba Children's Museum at The Forks presents exhibits for children. There are two museums dedicated to the native flora and fauna of Manitoba: the Living Prairie Museum, a tall grass prairie preserve featuring 160 species of grasses and wildflowers, and FortWhyte Alive, a park encompassing prairie, lake, forest and wetland habitats, home to a large herd of bison. The Canadian Fossil Discovery Centre houses the largest collection of marine reptile fossils in Canada. Other museums feature the history of aviation, marine transport, and railways in the area. The Canadian Museum for Human Rights is the first Canadian national museum outside of the National Capital Region. Winnipeg has two daily newspapers: the "Winnipeg Free Press", a broadsheet with the highest circulation numbers in Manitoba, as well as the "Winnipeg Sun", a smaller tabloid-style paper. There are several ethnic weekly newspapers, including the weekly French-language "La Liberté", and regional and national magazines based in the city. Brandon has two newspapers: the daily "Brandon Sun" and the weekly "Wheat City Journal". Many small towns have local newspapers. There are five English-language television stations and one French-language station based in Winnipeg. The Global Television Network (owned by Canwest) is headquartered in the city. Winnipeg is home to twenty-one AM and FM radio stations, two of which are French-language stations. Brandon's five local radio stations are provided by Astral Media and Westman Communications Group. In addition to the Brandon and Winnipeg stations, radio service is provided in rural areas and smaller towns by Golden West Broadcasting, Corus Entertainment, and local broadcasters. CBC Radio broadcasts local and national programming throughout the province. Native Communications is devoted to Aboriginal programming and broadcasts to many of the isolated native communities as well as to larger cities. 
Manitoba has four professional sports teams: the Winnipeg Blue Bombers (Canadian Football League), the Winnipeg Jets (National Hockey League), the Manitoba Moose (American Hockey League), and the Winnipeg Goldeyes (American Association). The province was previously home to another team called the Winnipeg Jets, which played in the World Hockey Association and National Hockey League from 1972 until 1996, when financial troubles prompted a sale and move of the team, renamed the Phoenix Coyotes. A second incarnation of the Winnipeg Jets returned after True North Sports & Entertainment bought the Atlanta Thrashers and moved the team to Winnipeg in time for the 2011 hockey season. Manitoba has one major junior-level hockey team, the Western Hockey League's Brandon Wheat Kings, and one junior football team, the Winnipeg Rifles of the Canadian Junior Football League. The province is represented in university athletics by the University of Manitoba Bisons, the University of Winnipeg Wesmen, and the Brandon University Bobcats. All three teams compete in the Canada West Universities Athletic Association (the regional division of Canadian Interuniversity Sport). Curling is an important winter sport in the province: Manitoba has produced more men's national champions than any other province, ranks among the top three provinces for women's national champions, and has produced multiple world champions in the sport. The province also hosts the world's largest curling tournament, the MCA Bonspiel, and regularly hosts Grand Slam events, the largest cash events in the sport, such as the annual Manitoba Lotteries Women's Curling Classic. Though not as prominent as hockey and curling, long track speed skating is also a notable winter sport in Manitoba. The province has produced some of the world's best female speed skaters, including Susan Auch and the country's top Olympic medal earners, Cindy Klassen and Clara Hughes. Madagascar Madagascar, officially the Republic of Madagascar and previously known as the Malagasy Republic, is an island country in the Indian Ocean, off the coast of East Africa. The nation comprises the island of Madagascar (the fourth-largest island in the world), and numerous smaller peripheral islands. Following the prehistoric breakup of the supercontinent Gondwana, Madagascar split from the Indian peninsula around 88 million years ago, allowing native plants and animals to evolve in relative isolation. Consequently, Madagascar is a biodiversity hotspot; over 90% of its wildlife is found nowhere else on Earth. The island's diverse ecosystems and unique wildlife are threatened by the encroachment of the rapidly growing human population and other environmental threats. The first archaeological evidence for human foraging on Madagascar dates to 2000 BC. Human settlement of Madagascar occurred between 350 BC and AD 550 by Austronesian peoples, arriving on outrigger canoes from Borneo. These were joined around AD 1000 by Bantu migrants crossing the Mozambique Channel from East Africa. Other groups continued to settle on Madagascar over time, each one making lasting contributions to Malagasy cultural life. The Malagasy ethnic group is often divided into 18 or more subgroups, of which the largest are the Merina of the central highlands. Until the late 18th century, the island of Madagascar was ruled by a fragmented assortment of shifting sociopolitical alliances. 
Beginning in the early 19th century, most of the island was united and ruled as the Kingdom of Madagascar by a series of Merina nobles. The monarchy ended in 1897 when the island was absorbed into the French colonial empire, from which the island gained independence in 1960. The autonomous state of Madagascar has since undergone four major constitutional periods, termed republics. Since 1992, the nation has officially been governed as a constitutional democracy from its capital at Antananarivo. However, in a popular uprising in 2009, President Marc Ravalomanana was forced to resign and presidential power was transferred in March 2009 to Andry Rajoelina. Constitutional governance was restored in January 2014, when Hery Rajaonarimampianina was named president following a 2013 election deemed fair and transparent by the international community. Madagascar is a member of the United Nations, the African Union (AU), the Southern African Development Community (SADC), and the Organisation Internationale de la Francophonie. Madagascar belongs to the group of least developed countries, according to the United Nations. Malagasy and French are both official languages of the state. The majority of the population adheres to traditional beliefs, Christianity, or an amalgamation of both. Ecotourism and agriculture, paired with greater investments in education, health, and private enterprise, are key elements of Madagascar's development strategy. Under Ravalomanana, these investments produced substantial economic growth, but the benefits were not evenly spread throughout the population, producing tensions over the increasing cost of living and declining living standards among the poor and some segments of the middle class. The economy has been weakened by the 2009–2013 political crisis, and quality of life remains low for the majority of the Malagasy population. In the Malagasy language, the island of Madagascar is called "Madagasikara" and its people are referred to as "Malagasy". The island's appellation "Madagascar" is not of local origin, but rather was popularized in the Middle Ages by Europeans. The name "Madageiscar" was first recorded in the memoirs of 13th-century Venetian explorer Marco Polo as a corrupted transliteration of the name Mogadishu, the Somali port with which Polo had confused the island. On St. Laurence's Day in 1500, Portuguese explorer Diogo Dias landed on the island and named it "São Lourenço". Polo's name was preferred and popularized on Renaissance maps. No single Malagasy-language name predating "Madagasikara" appears to have been used by the local population to refer to the island, although some communities had their own name for part or all of the land they inhabited. Madagascar is the world's 47th largest country and the fourth-largest island. The country lies mostly between latitudes 12°S and 26°S, and longitudes 43°E and 51°E. Neighboring islands include the French territory of Réunion and the country of Mauritius to the east, as well as the state of Comoros and the French territory of Mayotte to the northwest. The nearest mainland state is Mozambique, located to the west. The prehistoric breakup of the supercontinent Gondwana separated the Madagascar–Antarctica–India landmass from the Africa–South America landmass around 135 million years ago. Madagascar later split from India about 88 million years ago, allowing plants and animals on the island to evolve in relative isolation. 
Along the length of the eastern coast runs a narrow and steep escarpment containing much of the island's remaining tropical lowland forest. To the west of this ridge lies an elevated plateau in the center of the island. These central highlands, traditionally the homeland of the Merina people and the location of their historic capital at Antananarivo, are the most densely populated part of the island and are characterized by terraced, rice-growing valleys lying between grassy hills and patches of the subhumid forests that formerly covered the highland region. To the west of the highlands, the increasingly arid terrain gradually slopes down to the Mozambique Channel and mangrove swamps along the coast. Madagascar's highest peaks rise from three prominent highland massifs: Maromokotro in the Tsaratanana Massif is the island's highest point, followed by Boby Peak in the Andringitra Massif, and Tsiafajavona in the Ankaratra Massif. To the east, the "Canal des Pangalanes" is a chain of man-made and natural lakes connected by canals built by the French just inland from the east coast and running parallel to it for much of its length. The western and southern sides, which lie in the rain shadow of the central highlands, are home to dry deciduous forests, spiny forests, and deserts and xeric shrublands. Due to their lower population densities, Madagascar's dry deciduous forests have been better preserved than the eastern rain forests or the original woodlands of the central plateau. The western coast features many protected harbors, but silting is a major problem caused by sediment from the high levels of inland erosion carried by rivers crossing the broad western plains. The combination of southeastern trade winds and northwestern monsoons produces a hot rainy season (November–April) with frequently destructive cyclones, and a relatively cooler dry season (May–October). Rain clouds originating over the Indian Ocean discharge much of their moisture over the island's eastern coast; the heavy precipitation supports the area's rainforest ecosystem. The central highlands are both drier and cooler while the west is drier still, and a semi-arid climate prevails in the southwest and southern interior of the island. Tropical cyclones annually cause damage to infrastructure and local economies as well as loss of life. In 2004 Cyclone Gafilo became the strongest cyclone ever recorded to hit Madagascar. The storm killed 172 people, left 214,260 homeless and caused more than US$250 million in damage. As a result of the island's long isolation from neighboring continents, Madagascar is home to an abundance of plants and animals found nowhere else on Earth. Approximately 90% of all plant and animal species found in Madagascar are endemic, including the lemurs (a type of strepsirrhine primate), the carnivorous fossa and many birds. This distinctive ecology has led some ecologists to refer to Madagascar as the "eighth continent", and the island has been classified by Conservation International as a biodiversity hotspot. More than 80 percent of Madagascar's 14,883 plant species are found nowhere else in the world, including five plant families. The family "Didiereaceae", composed of four genera and 11 species, is limited to the spiny forests of southwestern Madagascar. Four-fifths of the world's "Pachypodium" species are endemic to the island. Three-fourths of Madagascar's 860 orchid species are found here alone, as are six of the world's nine baobab species. 
The island is home to around 170 palm species, three times as many as on all of mainland Africa; 165 of them are endemic. Many native plant species are used as herbal remedies for a variety of afflictions. The drugs vinblastine and vincristine, "vinca" alkaloids used to treat Hodgkin's disease, leukemia, and other cancers, were derived from the Madagascar periwinkle. The traveler's palm, known locally as "ravinala" and endemic to the eastern rain forests, is highly iconic of Madagascar and is featured in the national emblem as well as the Air Madagascar logo. Like its flora, Madagascar's fauna is diverse and exhibits a high rate of endemism. Lemurs have been characterized as "Madagascar's flagship mammal species" by Conservation International. In the absence of monkeys and other competitors, these primates have adapted to a wide range of habitats and diversified into numerous species. At the last official count, there were 103 species and subspecies of lemur, 39 of which were described by zoologists between 2000 and 2008. They are almost all classified as rare, vulnerable, or endangered. At least 17 species of lemur have become extinct since humans arrived on Madagascar, all of which were larger than the surviving lemur species. A number of other mammals, including the cat-like fossa, are endemic to Madagascar. Over 300 species of birds have been recorded on the island, of which over 60 percent (including four families and 42 genera) are endemic. The few families and genera of reptile that have reached Madagascar have diversified into more than 260 species, with over 90 percent of these being endemic (including one endemic family). The island is home to two-thirds of the world's chameleon species, including the smallest known, and researchers have proposed that Madagascar may be the origin of all chameleons. Endemic fish of Madagascar include two families, 15 genera and over 100 species, primarily inhabiting the island's freshwater lakes and rivers. Although invertebrates remain poorly studied on Madagascar, researchers have found high rates of endemism among the known species. All 651 species of terrestrial snail are endemic, as are a majority of the island's butterflies, scarab beetles, lacewings, spiders and dragonflies. Madagascar's varied fauna and flora are endangered by human activity. Since the arrival of humans around 2,350 years ago, Madagascar has lost more than 90 percent of its original forest. This forest loss is largely fueled by "tavy" ("fat"), a traditional slash-and-burn agricultural practice imported to Madagascar by the earliest settlers. Malagasy farmers embrace and perpetuate the practice not only for its practical benefits as an agricultural technique, but for its cultural associations with prosperity, health and venerated ancestral custom ("fomba malagasy"). As human population density rose on the island, deforestation accelerated beginning around 1400 years ago. By the 16th century, the central highlands had been largely cleared of their original forests. More recent contributors to the loss of forest cover include the growth in cattle herd size since their introduction around 1000 years ago, a continued reliance on charcoal as a fuel for cooking, and the increased prominence of coffee as a cash crop over the past century. According to a conservative estimate, about 40 percent of the island's original forest cover was lost from the 1950s to 2000, with a thinning of remaining forest areas by 80 percent. 
In addition to traditional agricultural practice, wildlife conservation is challenged by the illicit harvesting of protected forests, as well as the state-sanctioned harvesting of precious woods within national parks. Although banned by then-President Marc Ravalomanana from 2000 to 2009, the collection of small quantities of precious timber from national parks was re-authorized in January 2009 and dramatically intensified under the administration of Andry Rajoelina as a key source of state revenues to offset cuts in donor support following Ravalomanana's ousting. It is anticipated that all the island's rainforests, excluding those in protected areas and the steepest eastern mountain slopes, will have been deforested by 2025. Invasive species have likewise been introduced by human populations. Following the 2014 discovery in Madagascar of the Asian common toad, a relative of a toad species that has severely harmed wildlife in Australia since the 1930s, researchers warned the toad could "wreak havoc on the country's unique fauna." Habitat destruction and hunting have threatened many of Madagascar's endemic species or driven them to extinction. The island's elephant birds, a family of endemic giant ratites, became extinct in the 17th century or earlier, most probably due to human hunting of adult birds and poaching of their large eggs for food. Numerous giant lemur species vanished with the arrival of human settlers to the island, while others became extinct over the course of the centuries as a growing human population put greater pressures on lemur habitats and, among some populations, increased the rate of lemur hunting for food. A July 2012 assessment found that the exploitation of natural resources since 2009 has had dire consequences for the island's wildlife: 90 percent of lemur species were found to be threatened with extinction, the highest proportion of any mammalian group. Of these, 23 species were classified as critically endangered. By contrast, a previous study in 2008 had found only 38 percent of lemur species were at risk of extinction. In 2003 Ravalomanana announced the Durban Vision, an initiative to more than triple the island's protected natural areas to around 10 percent of Madagascar's land surface. Areas protected by the state included five Strict Nature Reserves ("Réserves Naturelles Intégrales"), 21 Wildlife Reserves ("Réserves Spéciales") and 21 National Parks ("Parcs Nationaux"). In 2007 six of the national parks were declared a joint World Heritage Site under the name Rainforests of the Atsinanana. These parks are Marojejy, Masoala, Ranomafana, Zahamena, Andohahela and Andringitra. Local timber merchants are harvesting scarce species of rosewood trees from protected rainforests within Marojejy National Park and exporting the wood to China for the production of luxury furniture and musical instruments. To raise public awareness of Madagascar's environmental challenges, the Wildlife Conservation Society opened an exhibit entitled "Madagascar!" in June 2008 at the Bronx Zoo in New York. The settlement of Madagascar is a subject of ongoing research and debate. Archaeological finds such as cut marks on bones found in the northwest and stone tools in the northeast indicate that Madagascar was visited by foragers around 2000 BC. Traditionally, archaeologists have estimated that the earliest settlers arrived in successive waves throughout the period between 350 BC and 550 AD, while others are cautious about dates earlier than 250 AD. 
In either case, these dates make Madagascar one of the last major landmasses on Earth to be settled by humans. Early settlers arrived in outrigger canoes from southern Borneo. Upon arrival, early settlers practiced slash-and-burn agriculture to clear the coastal rainforests for cultivation. The first settlers encountered Madagascar's abundance of megafauna, including giant lemurs, elephant birds, giant fossa and the Malagasy hippopotamus, which have since become extinct due to hunting and habitat destruction. By 600 AD groups of these early settlers had begun clearing the forests of the central highlands. Arab traders first reached the island between the seventh and ninth centuries. A wave of Bantu-speaking migrants from southeastern Africa arrived around 1000 AD. South Indian Tamil merchants arrived around the 11th century. They introduced the zebu, a type of long-horned humped cattle, which they kept in large herds. Irrigated paddy fields were developed in the central highland Betsileo Kingdom and were extended with terraced paddies throughout the neighboring Kingdom of Imerina a century later. The rising intensity of land cultivation and the ever-increasing demand for zebu pasturage had largely transformed the central highlands from a forest ecosystem to grassland by the 17th century. The oral histories of the Merina people, who may have arrived in the central highlands between 600 and 1000 years ago, describe encountering an established population they called the Vazimba. Probably the descendants of an earlier and less technologically advanced Austronesian settlement wave, the Vazimba were assimilated or expelled from the highlands by the Merina kings Andriamanelo, Ralambo and Andrianjaka in the 16th and early 17th centuries. Today, the spirits of the Vazimba are revered as "tompontany" (ancestral masters of the land) by many traditional Malagasy communities. Madagascar was an important transoceanic trading hub connecting ports of the Indian Ocean in the early centuries following human settlement. The written history of Madagascar began with the Arabs, who established trading posts along the northwest coast by at least the 10th century and introduced Islam, the Arabic script (used to transcribe the Malagasy language in a form of writing known as "sorabe"), Arab astrology, and other cultural elements. European contact began in 1500, when the Portuguese sea captain Diogo Dias sighted the island. The French established trading posts along the east coast in the late 17th century. From about 1774 to 1824, Madagascar gained prominence among pirates and European traders, particularly those involved in the trans-Atlantic slave trade. The small island of Nosy Boroha off the northeastern coast of Madagascar has been proposed by some historians as the site of the legendary pirate utopia of Libertalia. Many European sailors were shipwrecked on the coasts of the island, among them Robert Drury, whose journal is one of the few written depictions of life in southern Madagascar during the 18th century. The wealth generated by maritime trade spurred the rise of organized kingdoms on the island, some of which had grown quite powerful by the 17th century. Among these were the Betsimisaraka alliance of the eastern coast and the Sakalava chiefdoms of Menabe and Boina on the west coast. The Kingdom of Imerina, located in the central highlands with its capital at the royal palace of Antananarivo, emerged at around the same time under the leadership of King Andriamanelo. 
Upon its emergence in the early 17th century, the highland kingdom of Imerina was initially a minor power relative to the larger coastal kingdoms and grew even weaker in the early 18th century when King Andriamasinavalona divided it among his four sons. Following almost a century of warring and famine, Imerina was reunited in 1793 by King Andrianampoinimerina (1787–1810). From his initial capital Ambohimanga, and later from the Rova of Antananarivo, this Merina king rapidly expanded his rule over neighboring principalities. His ambition to bring the entire island under his control was largely achieved by his son and successor, King Radama I (1810–28), who was recognized by the British government as King of Madagascar. Radama concluded a treaty in 1817 with the British governor of Mauritius to abolish the lucrative slave trade in return for British military and financial assistance. Artisan missionary envoys from the London Missionary Society began arriving in 1818 and included such key figures as James Cameron, David Jones and David Griffiths, who established schools, transcribed the Malagasy language using the Roman alphabet, translated the Bible, and introduced a variety of new technologies to the island. Radama's successor, Queen Ranavalona I (1828–61), responded to increasing political and cultural encroachment on the part of Britain and France by issuing a royal edict prohibiting the practice of Christianity in Madagascar and pressuring most foreigners to leave the territory. She made heavy use of the traditional practice of "fanompoana" (forced labor as tax payment) to complete public works projects and develop a standing army of between 20,000 and 30,000 Merina soldiers, whom she deployed to pacify outlying regions of the island and further expand the Kingdom of Merina to encompass most of Madagascar. Residents of Madagascar could accuse one another of various crimes, including theft, Christianity and especially witchcraft, for which the ordeal of "tangena" was routinely obligatory. Between 1828 and 1861, the "tangena" ordeal caused about 3,000 deaths annually. In 1838, it was estimated that as many as 100,000 people in Imerina died as a result of the tangena ordeal, constituting roughly 20 percent of the population. The combination of regular warfare, disease, difficult forced labor and harsh measures of justice resulted in a high mortality rate among soldiers and civilians alike during her 33-year reign. Among those who continued to reside in Imerina were Jean Laborde, an entrepreneur who developed munitions and other industries on behalf of the monarchy, and Joseph-François Lambert, a French adventurer and slave trader, with whom then-Prince Radama II signed a controversial trade agreement termed the Lambert Charter. Succeeding his mother, Radama II (1861–63) attempted to relax the queen's stringent policies, but was overthrown two years later by Prime Minister Rainivoninahitriniony (1852–1865) and an alliance of "Andriana" (noble) and "Hova" (commoner) courtiers, who sought to end the absolute power of the monarch. Following the coup, the courtiers offered Radama's queen Rasoherina (1863–68) the opportunity to rule, if she would accept a power sharing arrangement with the Prime Minister—a new social contract that would be sealed by a political marriage between them. 
Queen Rasoherina accepted, first wedding Rainivoninahitriniony, then later deposing him and wedding his brother, Prime Minister Rainilaiarivony (1864–95), who would go on to marry Queen Ranavalona II (1868–83) and Queen Ranavalona III (1883–97) in succession. Over the course of Rainilaiarivony's 31-year tenure as prime minister, numerous policies were adopted to modernize and consolidate the power of the central government. Schools were constructed throughout the island and attendance was made mandatory. Army organization was improved, and British consultants were employed to train and professionalize soldiers. Polygamy was outlawed and Christianity, declared the official religion of the court in 1869, was adopted alongside traditional beliefs among a growing portion of the populace. Legal codes were reformed on the basis of British common law and three European-style courts were established in the capital city. In his joint role as Commander-in-Chief, Rainilaiarivony also successfully ensured the defense of Madagascar against several French colonial incursions. Primarily on the basis that the Lambert Charter had not been respected, France invaded Madagascar in 1883 in what became known as the first Franco-Hova War. At the end of the war, Madagascar ceded the northern port town of Antsiranana (Diego Suarez) to France and paid 560,000 francs to Lambert's heirs. In 1890, the British accepted the full formal imposition of a French protectorate on the island, but French authority was not acknowledged by the government of Madagascar. To force capitulation, the French bombarded and occupied the harbor of Toamasina on the east coast, and Mahajanga on the west coast, in December 1894 and January 1895 respectively. A French military flying column then marched toward Antananarivo, losing many men to malaria and other diseases. Reinforcements came from Algeria and Sub-Saharan Africa. Upon reaching the city in September 1895, the column bombarded the royal palace with heavy artillery, causing heavy casualties and leading Queen Ranavalona III to surrender. France annexed Madagascar in 1896 and declared the island a colony the following year, dissolving the Merina monarchy and sending the royal family into exile on Réunion Island and to Algeria. A two-year resistance movement organized in response to the French capture of the royal palace was effectively put down at the end of 1897. Under colonial rule, plantations were established for the production of a variety of export crops. Slavery was abolished in 1896 and approximately 500,000 slaves were freed; many remained in their former masters' homes as servants or as sharecroppers; in many parts of the island strong discriminatory views against slave descendants are still held today. Wide paved boulevards and gathering places were constructed in the capital city of Antananarivo and the Rova palace compound was turned into a museum. Additional schools were built, particularly in rural and coastal areas where the schools of the Merina had not reached. Education became mandatory between the ages of 6 and 13 and focused primarily on French language and practical skills. The Merina royal tradition of taxes paid in the form of labor was continued under the French and used to construct a railway and roads linking key coastal cities to Antananarivo. Malagasy troops fought for France in World War I. In the 1930s, Nazi political thinkers developed the Madagascar Plan, which identified the island as a potential site for the deportation of Europe's Jews. 
During the Second World War, the island was the site of the Battle of Madagascar between the Vichy government and the British. The occupation of France during the Second World War tarnished the prestige of the colonial administration in Madagascar and galvanized the growing independence movement, leading to the Malagasy Uprising of 1947. This movement led the French to establish reformed institutions in 1956 under the "Loi Cadre" (Overseas Reform Act), and Madagascar moved peacefully towards independence. The Malagasy Republic was proclaimed on 14 October 1958, as an autonomous state within the French Community. A period of provisional government ended with the adoption of a constitution in 1959 and full independence on 26 June 1960. Since regaining independence, Madagascar has transitioned through four republics with corresponding revisions to its constitution. The First Republic (1960–72), under the leadership of French-appointed President Philibert Tsiranana, was characterized by a continuation of strong economic and political ties to France. Many high-level technical positions were filled by French expatriates, and French teachers, textbooks and curricula continued to be used in schools around the country. Popular resentment over Tsiranana's tolerance for this "neo-colonial" arrangement inspired a series of farmer and student protests that overturned his administration in 1972. Gabriel Ramanantsoa, a major general in the army, was appointed interim president and prime minister that same year, but low public approval forced him to step down in 1975. Colonel Richard Ratsimandrava, appointed to succeed him, was assassinated six days into his tenure. General Gilles Andriamahazo ruled after Ratsimandrava for four months before being replaced by another military appointee: Vice Admiral Didier Ratsiraka, who ushered in the socialist-Marxist Second Republic that ran under his tenure from 1975 to 1993. This period saw a political alignment with the Eastern Bloc countries and a shift toward economic insularity. These policies, coupled with economic pressures stemming from the 1973 oil crisis, resulted in the rapid collapse of Madagascar's economy and a sharp decline in living standards, and the country had become completely bankrupt by 1979. The Ratsiraka administration accepted the conditions of transparency, anti-corruption measures and free market policies imposed by the International Monetary Fund, World Bank and various bilateral donors in exchange for their bailout of the nation's broken economy. Ratsiraka's dwindling popularity in the late 1980s reached a critical point in 1991 when presidential guards opened fire on unarmed protesters during a rally. Within two months, a transitional government had been established under the leadership of Albert Zafy (1993–96), who went on to win the 1992 presidential elections and inaugurate the Third Republic (1992–2010). The new Madagascar constitution established a multi-party democracy and a separation of powers that placed significant control in the hands of the National Assembly. The new constitution also emphasized human rights, social and political freedoms, and free trade. Zafy's term, however, was marred by economic decline, allegations of corruption, and his introduction of legislation to give himself greater powers. He was consequently impeached in 1996, and an interim president, Norbert Ratsirahonana, was appointed for the three months prior to the next presidential election. 
Ratsiraka was then voted back into power on a platform of decentralization and economic reforms for a second term which lasted from 1996 to 2001. The contested 2001 presidential elections, in which then-mayor of Antananarivo Marc Ravalomanana eventually emerged victorious, caused a seven-month standoff in 2002 between supporters of Ravalomanana and Ratsiraka. The negative economic impact of the political crisis was gradually overcome by Ravalomanana's progressive economic and political policies, which encouraged investments in education and ecotourism, facilitated foreign direct investment, and cultivated trading partnerships both regionally and internationally. National GDP grew at an average rate of 7 percent per year under his administration. In the latter half of his second term, Ravalomanana was criticised by domestic and international observers who accused him of increasing authoritarianism and corruption. Opposition leader and then-mayor of Antananarivo, Andry Rajoelina, led a movement in early 2009 in which Ravalomanana was pushed from power in an unconstitutional process widely condemned as a "coup d'état". In March 2009, Rajoelina was declared by the Supreme Court as the President of the High Transitional Authority, an interim governing body responsible for moving the country toward presidential elections. In 2010, a new constitution was adopted by referendum, establishing a Fourth Republic, which sustained the democratic, multi-party structure established in the previous constitution. Hery Rajaonarimampianina was declared the winner of the 2013 presidential election, which the international community deemed fair and transparent. Madagascar is a semi-presidential representative democratic multi-party republic, wherein the popularly elected president is the head of state and selects a prime minister, who recommends candidates to the president to form his cabinet of ministers. According to the constitution, executive power is exercised by the government while legislative power is vested in the ministerial cabinet, the Senate and the National Assembly, although in reality these latter two bodies have very little power or legislative role. The constitution establishes independent executive, legislative and judicial branches and mandates a popularly elected president limited to three five-year terms. The public directly elects the president and the 127 members of the National Assembly to five-year terms. All 33 members of the Senate serve six-year terms, with 22 senators elected by local officials and 11 appointed by the president. The last National Assembly election was held on 20 December 2013 and the last Senate election was held on 30 December 2015. At the local level, the island's 22 provinces are administered by a governor and provincial council. Provinces are further subdivided into regions and communes. The judiciary is modeled on the French system, with a High Constitutional Court, High Court of Justice, Supreme Court, Court of Appeals, criminal tribunals, and tribunals of first instance. The courts, which adhere to civil law, lack the capacity to quickly and transparently try the cases in the judicial system, often forcing defendants to pass lengthy pretrial detentions in unsanitary and overcrowded prisons. Antananarivo is the administrative capital and largest city of Madagascar. It is located in the highlands region, near the geographic center of the island. 
King Andrianjaka founded Antananarivo as the capital of his Imerina Kingdom around 1610 or 1625 upon the site of a captured Vazimba capital on the hilltop of Analamanga. As Merina dominance expanded over neighboring Malagasy peoples in the early 19th century to establish the Kingdom of Madagascar, Antananarivo became the center of administration for virtually the entire island. In 1896 the French colonizers of Madagascar adopted the Merina capital as their center of colonial administration. The city remained the capital of Madagascar after regaining independence in 1960. In 2017, the capital's population was estimated at 1,391,433 inhabitants. The next largest cities are Antsirabe (500,000), Toamasina (450,000) and Mahajanga (400,000). Since Madagascar gained independence from France in 1960, the island's political transitions have been marked by numerous popular protests, several disputed elections, an impeachment, two military coups and one assassination. The island's recurrent political crises are often prolonged, with detrimental effects on the local economy, international relations and Malagasy living standards. The eight-month standoff between incumbent Ratsiraka and challenger Marc Ravalomanana following the 2001 presidential elections cost Madagascar millions of dollars in lost tourism and trade revenue as well as damage to infrastructure, such as bombed bridges and buildings damaged by arson. A series of protests led by Andry Rajoelina against Ravalomanana in early 2009 became violent, with more than 170 people killed. Modern politics in Madagascar are colored by the history of Merina subjugation of coastal communities under their rule in the 19th century. The consequent tension between the highland and coastal populations has periodically flared up into isolated events of violence. Madagascar has historically been perceived as being on the margin of mainstream African affairs despite being a founding member of the Organisation of African Unity, which was established in 1963 and dissolved in 2002 to be replaced by the African Union. Madagascar was not permitted to attend the first African Union summit because of a dispute over the results of the 2001 presidential election, but rejoined the African Union in July 2003 after a 14-month hiatus. Madagascar was again suspended by the African Union in March 2009 following the unconstitutional transfer of executive power to Rajoelina. Madagascar is a member of the International Criminal Court with a Bilateral Immunity Agreement of protection for the United States military. Eleven countries have established embassies in Madagascar, including France, the United Kingdom, the United States, China and India. Human rights in Madagascar are protected under the constitution and the state is a signatory to numerous international agreements including the Universal Declaration of Human Rights and the Convention on the Rights of the Child. Religious, ethnic and sexual minorities are protected under the law. Freedom of association and assembly are also guaranteed under the law, although in practice the denial of permits for public assembly has occasionally been used to impede political demonstrations. Torture by security forces is rare and state repression is low relative to other countries with comparably few legal safeguards, although arbitrary arrests and the corruption of military and police officers remain problems. 
Ravalomanana's 2004 creation of BIANCO, an anti-corruption bureau, resulted in reduced corruption among Antananarivo's lower-level bureaucrats in particular, although high-level officials have not been prosecuted by the bureau. The rise of centralized kingdoms among the Sakalava, Merina and other ethnic groups produced the island's first standing armies by the 16th century, initially equipped with spears but later with muskets, cannons and other firearms. By the early 19th century, the Merina sovereigns of the Kingdom of Madagascar had brought much of the island under their control by mobilizing an army of trained and armed soldiers numbering as high as 30,000. French attacks on coastal towns in the latter part of the century prompted then-Prime Minister Rainilaiarivony to solicit British assistance to provide training to the Merina monarchy's army. Despite the training and leadership provided by British military advisers, the Malagasy army was unable to withstand French weaponry and was forced to surrender following an attack on the royal palace at Antananarivo. Madagascar was declared a colony of France in 1897. The political independence and sovereignty of the Malagasy armed forces, which comprise an army, navy and air force, were restored with independence from France in 1960. Since this time, the Malagasy military has never engaged in armed conflict with another state or within its own borders, but has occasionally intervened to restore order during periods of political unrest. Under the socialist Second Republic, Admiral Didier Ratsiraka instituted mandatory national armed or civil service for all young citizens regardless of gender, a policy that remained in effect from 1976 to 1991. The armed forces are under the direction of the Minister of the Interior and have remained largely neutral during times of political crisis, as during the protracted standoff between incumbent Ratsiraka and challenger Marc Ravalomanana in the disputed 2001 presidential elections, when the military refused to intervene in favor of either candidate. This tradition was broken in 2009, when a segment of the army defected to the side of Andry Rajoelina, then-mayor of Antananarivo, in support of his attempt to force President Ravalomanana from power. The Minister of the Interior is responsible for the national police force, paramilitary force ("gendarmerie") and the secret police. The police and gendarmerie are stationed and administered at the local level. However, in 2009 fewer than a third of all communes had access to the services of these security forces, with most lacking local-level headquarters for either corps. Traditional community tribunals, called "dina", are presided over by elders and other respected figures and remain a key means by which justice is served in rural areas where state presence is weak. Historically, security has been relatively high across the island. Violent crime rates are low, and criminal activities are predominantly crimes of opportunity such as pickpocketing and petty theft, although child prostitution, human trafficking and the production and sale of marijuana and other illegal drugs are increasing. Budget cuts since 2009 have severely impacted the national police force, producing a steep increase in criminal activity in recent years. Madagascar is subdivided into 22 regions ("faritra"). The regions are further subdivided into 119 districts, 1,579 communes, and 17,485 "fokontany". 
Madagascar became a Member State of the United Nations on 20 September 1960, shortly after gaining its independence on 26 June 1960. As of January 2017, 34 police officers from Madagascar are deployed in Haiti as part of the United Nations Stabilisation Mission in Haiti. In 2015, under the direction of and with assistance from the UN, the World Food Programme launched the Madagascar Country Programme with two main goals: supporting long-term development and reconstruction efforts, and addressing food insecurity in the southern regions of Madagascar. The programme aims to accomplish these goals by providing meals at specific schools in rural and urban priority areas and by developing national school feeding policies to increase consistency of nourishment throughout the country. Small and local farmers have also been assisted in increasing both the quantity and quality of their production, as well as improving their crop yield in unfavorable weather conditions. During Madagascar's First Republic, France heavily influenced Madagascar's economic planning and policy and served as its key trading partner. Key products were cultivated and distributed nationally through producers' and consumers' cooperatives. Government initiatives such as a rural development program and state farms were established to boost production of commodities such as rice, coffee, cattle, silk and palm oil. Popular dissatisfaction over these policies was a key factor in launching the socialist-Marxist Second Republic, in which the formerly private bank and insurance industries were nationalized; state monopolies were established for such industries as textiles, cotton and power; and import–export trade and shipping were brought under state control. Madagascar's economy quickly deteriorated as exports fell, industrial production dropped by 75 percent, inflation spiked and government debt increased; the rural population was soon reduced to living at subsistence levels. Over 50 percent of the nation's export revenue was spent on debt servicing. The IMF forced Madagascar's government to accept structural adjustment policies and liberalization of the economy when the state became bankrupt in 1982, and state-controlled industries were gradually privatized over the course of the 1980s. The political crisis of 1991 led to the suspension of IMF and World Bank assistance. Conditions for the resumption of aid were not met under Zafy, who tried unsuccessfully to attract other forms of revenue for the State before aid was once again resumed under the interim government established upon Zafy's impeachment. The IMF agreed to write off half of Madagascar's debt in 2004 under the Ravalomanana administration. Having met a set of stringent economic, governance and human rights criteria, Madagascar became the first country to benefit from the Millennium Challenge Account in 2005. Madagascar's GDP in 2015 was estimated at 9.98 billion USD, with a per capita GDP of $411.82. Approximately 69 percent of the population lives below the national poverty line threshold of one dollar per day. Over the last five years, the average growth rate has been 2.6% but is expected to have reached 4.1% in 2016, due to public works programs and growth in the service sector. The agriculture sector constituted 29 percent of Malagasy GDP in 2011, while manufacturing formed 15 percent of GDP. Madagascar's other sources of growth are tourism, agriculture and the extractive industries. 
Tourism focuses on the niche eco-tourism market, capitalizing on Madagascar's unique biodiversity, unspoiled natural habitats, national parks and lemur species. An estimated 365,000 tourists visited Madagascar in 2008, but the sector declined during the political crisis, with 180,000 tourists visiting in 2010. However, the sector has since been growing steadily: in 2016, 293,000 tourists visited the island, an increase of 20% compared to 2015; the country set a goal of 366,000 visitors for 2017, and government estimates projected 500,000 annual tourists for 2018. As of 2018, the island remained a very poor country, and structural brakes on economic development persist: corruption, a cumbersome public administration, a lack of legal certainty and outdated land legislation. The economy has nonetheless been growing since 2011, with rates exceeding 4% per year, and most economic indicators have been improving; per capita income, often cited at around 1,600 dollars a year as of 2017, remains among the lowest in the world but has been rising since 2012. Gross domestic product is also growing, and unemployment fell to 2.1% in 2016, with a labor force of about 13.4 million people in 2017. The main economic resources of Madagascar are tourism, textile exports, agricultural production and exports, and mining. Madagascar's natural resources include a variety of unprocessed agricultural and mineral resources. Agriculture (including raffia), fishing and forestry are mainstays of the economy. Madagascar is the world's principal supplier of vanilla, cloves and ylang-ylang. Other key agricultural resources include coffee, lychees and shrimp. Key mineral resources include various types of precious and semi-precious stones, and Madagascar currently provides half of the world's supply of sapphires, which were discovered near Ilakaka in the late 1990s. Madagascar has one of the world's largest reserves of ilmenite (titanium ore), as well as important reserves of chromite, coal, iron, cobalt, copper and nickel. Several major projects are underway in the mining, oil and gas sectors that are anticipated to give a significant boost to the Malagasy economy. These include such projects as ilmenite and zircon mining from heavy mineral sands near Tôlanaro by Rio Tinto, extraction of nickel near Moramanga and its processing near Toamasina by Sherritt International, and the development of the giant onshore heavy oil deposits at Tsimiroro and Bemolanga by Madagascar Oil. Exports formed 28 percent of GDP in 2009. Most of the country's export revenue is derived from the textiles industry, fish and shellfish, vanilla, cloves and other foodstuffs. France is Madagascar's main trading partner, although the United States, Japan and Germany also have strong economic ties to the country. The Madagascar-U.S. Business Council was formed in May 2003, as a collaboration between USAID and Malagasy artisan producers to support the export of local handicrafts to foreign markets. Imports of such items as foodstuffs, fuel, capital goods, vehicles, consumer goods and electronics consume an estimated 52 percent of GDP. The main sources of Madagascar's imports include China, France, Iran, Mauritius and Hong Kong. In 2010, Madagascar had networks of paved roads, railways and navigable waterways. The majority of roads in Madagascar are unpaved, with many becoming impassable in the rainy season. 
Largely paved national routes connect the six largest regional towns to Antananarivo, with minor paved and unpaved routes providing access to other population centers in each district. There are several rail lines. Antananarivo is connected to Toamasina, Ambatondrazaka and Antsirabe by rail, and another rail line connects Fianarantsoa to Manakara. The most important seaport in Madagascar is located on the east coast at Toamasina. Ports at Mahajanga and Antsiranana are significantly less used due to their remoteness. The island's newest port at Ehoala, constructed in 2008 and privately managed by Rio Tinto, will come under state control upon completion of the company's mining project near Tôlanaro around 2038. Air Madagascar services the island's many small regional airports, which offer the only practical means of access to many of the more remote regions during rainy season road washouts. Running water and electricity are supplied at the national level by a government service provider, Jirama, which is unable to service the entire population. Only 6.8 percent of Madagascar's "fokontany" had access to water provided by Jirama, while 9.5 percent had access to its electricity services. Hydroelectric power plants provide 56% of Madagascar's power, with the remaining 44% provided by diesel engine generators. Mobile telephone and internet access are widespread in urban areas but remain limited in rural parts of the island. Approximately 30 percent of the districts are able to access the nation's several private telecommunications networks via mobile telephones or land lines. Radio broadcasts remain the principal means by which the Malagasy population accesses international, national and local news. Only state radio broadcasts are transmitted across the entire island. Hundreds of public and private stations with local or regional range provide alternatives to state broadcasting. In addition to the state television channel, a variety of privately owned television stations broadcast local and international programming throughout Madagascar. Several media outlets are owned by political partisans or politicians themselves, including the media groups MBS (owned by Ravalomanana) and Viva (owned by Rajoelina), contributing to political polarization in reporting. The media have historically come under varying degrees of pressure to censor their criticism of the government. Reporters are occasionally threatened or harassed and media outlets are periodically forced to close. Accusations of media censorship have increased since 2009 due to the alleged intensification of restrictions on political criticism. Access to the internet has grown dramatically over the past decade, with an estimated 352,000 residents of Madagascar accessing the internet from home or in one of the nation's many internet cafés in December 2011. Medical centers, dispensaries and hospitals are found throughout the island, although they are concentrated in urban areas and particularly in Antananarivo. Access to medical care remains beyond the reach of many Malagasy, especially in the rural areas, and many have recourse to traditional healers. In addition to the high expense of medical care relative to the average Malagasy income, the prevalence of trained medical professionals remains extremely low. In 2010 Madagascar had an average of three hospital beds per 10,000 people and a total of 3,150 doctors, 5,661 nurses, 385 community health workers, 175 pharmacists and 57 dentists for a population of 22 million. 
In 2008, 14.6 percent of government spending was directed toward the health sector. Approximately 70 percent of spending on health was contributed by the government, while 30 percent originated with international donors and other private sources. The government provides at least one basic health center per commune. Private health centers are concentrated within urban areas and particularly those of the central highlands. Despite these barriers to access, health services have shown a trend toward improvement over the past twenty years. Child immunizations against such diseases as hepatitis B, diphtheria and measles increased an average of 60 percent in this period, indicating low but increasing availability of basic medical services and treatments. The Malagasy fertility rate in 2009 was 4.6 children per woman, declining from 6.3 in 1990. Teen pregnancy rates of 14.8 percent in 2011, much higher than the African average, are a contributing factor to rapid population growth. In 2010 the maternal mortality rate was 440 per 100,000 births, compared to 373.1 in 2008 and 484.4 in 1990, indicating a decline in perinatal care following the 2009 coup. The infant mortality rate in 2011 was 41 per 1,000 births, with an under-five mortality rate at 61 per 1,000 births. Schistosomiasis, malaria and sexually transmitted diseases are common in Madagascar, although infection rates of AIDS remain low relative to many countries in mainland Africa, at only 0.2 percent of the adult population. The malaria mortality rate is also among the lowest in Africa at 8.5 deaths per 100,000 people, in part due to the highest rate of insecticide-treated net use in Africa. Adult life expectancy in 2009 was 63 years for men and 67 years for women. In 2017, Madagascar had an outbreak of the bubonic plague (also known as the Black Death) that affected urban areas. Prior to the 19th century, all education in Madagascar was informal and typically served to teach practical skills as well as social and cultural values, including respect for ancestors and elders. The first formal European-style school was established in 1818 at Toamasina by members of the London Missionary Society (LMS). The LMS was invited by King Radama I (1810–28) to expand its schools throughout Imerina to teach basic literacy and numeracy to aristocratic children. The schools were closed by Ranavalona I in 1835 but reopened and expanded in the decades after her death. By the end of the 19th century Madagascar had the most developed and modern school system in pre-colonial Sub-Saharan Africa. Access to schooling was expanded in coastal areas during the colonial period, with French language and basic work skills becoming the focus of the curriculum. During the post-colonial First Republic, a continued reliance on French nationals as teachers, and French as the language of instruction, displeased those desiring a complete separation from the former colonial power. Consequently, under the socialist Second Republic, French instructors and other nationals were expelled, Malagasy was declared the language of instruction and a large cadre of young Malagasy were rapidly trained to teach at remote rural schools under the mandatory two-year national service policy. This policy, known as "malgachization", coincided with a severe economic downturn and a dramatic decline in the quality of education. 
Those schooled during this period generally failed to master the French language or many other subjects and struggled to find employment, forcing many to take low-paying jobs in the informal or black market that mired them in deepening poverty. Excepting the brief presidency of Albert Zafy, from 1992 to 1996, Ratsiraka remained in power from 1975 to 2001 and failed to achieve significant improvements in education throughout his tenure. Education was prioritized under the Ravalomanana administration (2002–09), and is currently free and compulsory from ages 6 to 13. The primary schooling cycle is five years, followed by four years at the lower secondary level and three years at the upper secondary level. During Ravalomanana's first term, thousands of new primary schools and additional classrooms were constructed, older buildings were renovated, and tens of thousands of new primary teachers were recruited and trained. Primary school fees were eliminated and kits containing basic school supplies were distributed to primary students. Government school construction initiatives have ensured at least one primary school per "fokontany" and one lower secondary school within each commune. At least one upper secondary school is located in each of the larger urban centers. The three branches of the national public university are located at Antananarivo (founded in 1961), Mahajanga (1977) and Fianarantsoa (1988). These are complemented by public teacher-training colleges and several private universities and technical colleges. As a result of increased educational access, enrollment rates more than doubled between 1996 and 2006. However, education quality is weak, producing high rates of grade repetition and dropout. Education policy in Ravalomanana's second term focused on quality issues, including an increase in minimum education standards for the recruitment of primary teachers from a middle school leaving certificate (BEPC) to a high school leaving certificate (BAC), and a reformed teacher training program to support the transition from traditional didactic instruction to student-centered teaching methods to boost student learning and participation in the classroom. Public expenditure on education was 13.4 percent of total government expenditure and 2.9 percent of GDP in 2008. Primary classrooms are crowded, with average pupil-to-teacher ratios of 47:1 in 2008. The population of Madagascar, estimated at 2.2 million in 1900, has since grown to more than 22 million. The annual population growth rate in Madagascar was approximately 2.9 percent in 2009. Approximately 42.5 percent of the population is younger than 15 years of age, while 54.5 percent are between the ages of 15 and 64. Those aged 65 and older form three percent of the total population. Only two general censuses, in 1975 and 1993, have been carried out since independence. The most densely populated regions of the island are the eastern highlands and the eastern coast, contrasting most dramatically with the sparsely populated western plains. The Malagasy ethnic group forms over 90 percent of Madagascar's population and is typically divided into eighteen ethnic subgroups. Recent DNA research revealed that the genetic makeup of the average Malagasy person constitutes an approximately equal blend of Southeast Asian and East African genes, although the genetics of some communities show a predominance of Southeast Asian or East African origins or some Arab, Indian or European ancestry. 
Southeast Asian features – specifically from the southern part of Borneo – are most predominant among the Merina of the central highlands, who form the largest Malagasy ethnic subgroup at approximately 26 percent of the population, while certain communities among the coastal peoples (collectively called "côtiers") have relatively stronger East African features. The largest coastal ethnic subgroups are the Betsimisaraka (14.9 percent) and the Tsimihety and Sakalava (6 percent each). Chinese, Indian and Comorian minorities are present in Madagascar, as well as a small European (primarily French) populace. Emigration in the late 20th century has reduced these minority populations, occasionally in abrupt waves, such as the exodus of Comorans in 1976, following anti-Comoran riots in Mahajanga. By comparison, there has been no significant emigration of Malagasy peoples. The number of Europeans has declined since independence, reduced from 68,430 in 1958 to 17,000 three decades later. There were an estimated 25,000 Comorans, 18,000 Indians, and 9,000 Chinese living in Madagascar in the mid-1980s. The Malagasy language is of Malayo-Polynesian origin and is generally spoken throughout the island. The numerous dialects of Malagasy, which are generally mutually intelligible, can be clustered under one of two subgroups: eastern Malagasy, spoken along the eastern forests and highlands and including the Merina dialect of Antananarivo, and western Malagasy, spoken across the western coastal plains. French became the official language during the colonial period, when Madagascar came under the authority of France. In the first national Constitution of 1958, Malagasy and French were named the official languages of the Malagasy Republic. Madagascar is a francophone country, and French is mostly spoken as a second language among the educated population and used for international communication. No official languages were recorded in the Constitution of 1992, although Malagasy was identified as the national language. Nonetheless, many sources still claimed that Malagasy and French were official languages, eventually leading a citizen to initiate a legal case against the state in April 2000, on the grounds that the publication of official documents only in the French language was unconstitutional. The High Constitutional Court observed in its decision that, in the absence of a language law, French still had the character of an official language. In the Constitution of 2007, Malagasy remained the national language while official languages were reintroduced: Malagasy, French, and English. English was removed as an official language from the constitution approved by voters in the November 2010 referendum. The outcome of the referendum, and its consequences for official and national language policy, are not recognized by the political opposition, who cite lack of transparency and inclusiveness in the way the election was organized by the High Transitional Authority. Over the years, Madagascar has had different language policies under different governing authorities. The indigenous language of Madagascar, Malagasy, was the predominant language on the island until the French colonization in 1897. Malagasy has developed over the decades from an oral language into one with a written system (Latin orthography), a change instituted by King Radama I in 1823. Following the French colonization, the language of instruction and media changed from Malagasy to almost exclusively French. 
However, the first French governor-general, Gallieni, encouraged French officials to learn Malagasy as well. After independence, Madagascans tried to reinstate Malagasy as a language of instruction, especially in secondary schools. However, the language policy was inadequately planned, and Malagasy struggled to displace French as the language of instruction. Today, Madagascar has two official languages: Malagasy and French. Madagascar has managed to maintain its indigenous language, Malagasy, in society and in schools despite colonization. Malagasy and French are both languages of instruction in primary and secondary schools in Madagascar. The use of an indigenous language as a medium of instruction is uncommon among formerly colonized African countries. According to the US Department of State in 2011, 41% of Madagascans practiced Christianity and 52% adhered to traditional religions, which tend to emphasize links between the living and the "razana" (ancestors). But according to the Pew Research Center in 2010, 85% of the population practiced Christianity, while just 4.5% of Madagascans practiced folk religions; among Christians, practitioners of Protestantism outnumbered adherents of Roman Catholicism. The veneration of ancestors has led to the widespread tradition of tomb building, as well as the highlands practice of the "famadihana", whereby a deceased family member's remains are exhumed and re-wrapped in fresh silk shrouds, before being replaced in the tomb. The famadihana is an occasion to celebrate the beloved ancestor's memory, reunite with family and community, and enjoy a festive atmosphere. Residents of surrounding villages are often invited to attend the party, where food and rum are typically served and a hiragasy troupe or other musical entertainment is commonly present. Consideration for ancestors is also demonstrated through adherence to "fady", taboos that are respected during and after the lifetime of the person who establishes them. It is widely believed that ancestors who are shown respect in these ways may intervene on behalf of the living. Conversely, misfortunes are often attributed to ancestors whose memory or wishes have been neglected. The sacrifice of zebu is a traditional method used to appease or honor the ancestors. In addition, the Malagasy traditionally believe in a creator god, called Zanahary or Andriamanitra. In 1818, the London Missionary Society sent the first Christian missionaries to the island, where they built churches, translated the Bible into the Malagasy language and began to gain converts. Beginning in 1835, Queen Ranavalona I persecuted these converts as part of an attempt to halt European cultural and political influence on the island. In 1869, a successor, Queen Ranavalona II, converted the court to Christianity and encouraged Christian missionary activity, burning the "sampy" (royal idols) in a symbolic break with traditional beliefs. Today, many Christians integrate their religious beliefs with traditional ones related to honoring the ancestors. For instance, they may bless their dead at church before proceeding with traditional burial rites or invite a Christian minister to consecrate a "famadihana" reburial. The Malagasy Council of Churches comprises the four oldest and most prominent Christian denominations of Madagascar (Roman Catholic, Church of Jesus Christ in Madagascar, Lutheran, and Anglican) and has been an influential force in Malagasy politics. 
Islam is also practiced on the island. Islam was first brought to Madagascar in the Middle Ages by Arab and Somali Muslim traders, who established several Islamic schools along the eastern coast. While the use of Arabic script and loan words and the adoption of Islamic astrology would spread across the island, the Islamic religion failed to take hold in all but a handful of southeastern coastal communities. Today, Muslims constitute 3–7 percent of the population of Madagascar and are largely concentrated in the northwestern provinces of Mahajanga and Antsiranana. The vast majority of Muslims are Sunni. Muslims are divided among those of Malagasy ethnicity, Indians, Pakistanis and Comorians. More recently, Hinduism was introduced to Madagascar through Gujarati people immigrating from the Saurashtra region of India in the late 19th century. Most Hindus in Madagascar speak Gujarati or Hindi at home. Each of the many ethnic subgroups in Madagascar adheres to its own set of beliefs, practices and ways of life that have historically contributed to its unique identity. However, there are a number of core cultural features that are common throughout the island, creating a strongly unified Malagasy cultural identity. In addition to a common language and shared traditional religious beliefs around a creator god and veneration of the ancestors, the traditional Malagasy worldview is shaped by values that emphasize "fihavanana" (solidarity), "vintana" (destiny), "tody" (karma), and "hasina", a sacred life force that traditional communities believe imbues and thereby legitimates authority figures within the community or family. Other cultural elements commonly found throughout the island include the practice of male circumcision; strong kinship ties; a widespread belief in the power of magic, diviners, astrology and witch doctors; and a traditional division of social classes into nobles, commoners, and slaves. Although social castes are no longer legally recognized, ancestral caste affiliation often continues to affect social status, economic opportunity and roles within the community. Malagasy people traditionally consult "Mpanandro" ("Makers of the Days") to identify the most auspicious days for important events such as weddings or "famadihana", according to a traditional astrological system introduced by Arabs. Similarly, the nobles of many Malagasy communities in the pre-colonial period would commonly employ advisers known as the "ombiasy" (from "olona-be-hasina", "man of much virtue") of the southeastern Antemoro ethnic group, who trace their ancestry back to early Arab settlers. The diverse origins of Malagasy culture are evident in its tangible expressions. The most emblematic instrument of Madagascar, the "valiha", is a bamboo tube zither carried to Madagascar by early settlers from southern Borneo, and is very similar in form to those found in Indonesia and the Philippines today. Traditional houses in Madagascar are likewise similar to those of southern Borneo in terms of symbolism and construction, featuring a rectangular layout with a peaked roof and central support pillar. Reflecting a widespread veneration of the ancestors, tombs are culturally significant in many regions and tend to be built of more durable material, typically stone, and display more elaborate decoration than the houses of the living. The production and weaving of silk can be traced back to the island's earliest settlers, and Madagascar's national dress, the woven "lamba", has evolved into a varied and refined art. 
The Southeast Asian cultural influence is also evident in Malagasy cuisine, in which rice is consumed at every meal, typically accompanied by one of a variety of flavorful vegetable or meat dishes. African influence is reflected in the sacred importance of zebu cattle and their embodiment of their owner's wealth, traditions originating on the African mainland. Cattle rustling, originally a rite of passage for young men in the plains areas of Madagascar where the largest herds of cattle are kept, has become a dangerous and sometimes deadly criminal enterprise as herdsmen in the southwest attempt to defend their cattle with traditional spears against increasingly armed professional rustlers. A wide variety of oral and written literature has developed in Madagascar. One of the island's foremost artistic traditions is its oratory, as expressed in the forms of "hainteny" (poetry), "kabary" (public discourse) and "ohabolana" (proverbs). An epic poem exemplifying these traditions, the "Ibonia", has been handed down over the centuries in several different forms across the island, and offers insight into the diverse mythologies and beliefs of traditional Malagasy communities. This tradition was continued in the 20th century by such artists as Jean-Joseph Rabearivelo, who is considered Africa's first modern poet, and Elie Rajaonarison, an exemplar of the new wave of Malagasy poetry. Madagascar has also developed a rich musical heritage, embodied in dozens of regional musical genres such as the coastal "salegy" or highland "hiragasy" that enliven village gatherings, local dance floors and national airwaves. Madagascar also has a growing culture of classical music, fostered through youth academies, organizations and orchestras that promote young people's involvement in the genre. The plastic arts are also widespread throughout the island. In addition to the tradition of silk weaving and lamba production, the weaving of raffia and other local plant materials has been used to create a wide array of practical items such as floor mats, baskets, purses and hats. Wood carving is a highly developed art form, with distinct regional styles evident in the decoration of balcony railings and other architectural elements. Sculptors create a variety of furniture and household goods, "aloalo" funerary posts, and wooden sculptures, many of which are produced for the tourist market. The decorative and functional woodworking traditions of the Zafimaniry people of the central highlands were inscribed on UNESCO's list of Intangible Cultural Heritage in 2008. Among the Antaimoro people, the production of paper embedded with flowers and other decorative natural materials is a long-established tradition that the community has begun to market to eco-tourists. Embroidery and drawn thread work are done by hand to produce clothing, as well as tablecloths and other home textiles for sale in local crafts markets. A small but growing number of fine art galleries in Antananarivo, and several other urban areas, offer paintings by local artists, and annual art events, such as the Hosotra open-air exhibition in the capital, contribute to the continuing development of fine arts in Madagascar. A number of traditional pastimes have emerged in Madagascar. "Moraingy", a type of hand-to-hand combat, is a popular spectator sport in coastal regions. It is traditionally practiced by men, but women have recently begun to participate. The wrestling of zebu cattle, which is known as "savika" or "tolon-omby", is also practiced in many regions. 
In addition to sports, a wide variety of games are played. Among the most emblematic is "fanorona", a board game widespread throughout the Highland regions. According to folk legend, the succession of King Andrianjaka after his father Ralambo was partially due to the obsession that Andrianjaka's older brother may have had with playing "fanorona" to the detriment of his other responsibilities. Western recreational activities were introduced to Madagascar over the past two centuries. Rugby union is considered the national sport of Madagascar. Soccer is also popular. Madagascar has produced a world champion in pétanque, a French game similar to lawn bowling, which is widely played in urban areas and throughout the Highlands. School athletics programs typically include soccer, track and field, judo, boxing, women's basketball and women's tennis. Madagascar sent its first competitors to the Olympic Games in 1964 and has also competed in the African Games. Scouting is represented in Madagascar by its own local federation of three scouting clubs. Membership in 2011 was estimated at 14,905. Because of its advanced sports facilities, Antananarivo gained the hosting rights for several of Africa's top international basketball events, including the 2011 FIBA Africa Championship, the 2009 FIBA Africa Championship for Women, the 2014 FIBA Africa Under-18 Championship, the 2013 FIBA Africa Under-16 Championship, and the 2015 FIBA Africa Under-16 Championship for Women. Malagasy cuisine reflects the diverse influences of Southeast Asian, African, Indian, Chinese and European culinary traditions. The complexity of Malagasy meals can range from the simple, traditional preparations introduced by the earliest settlers, to the refined festival dishes prepared for the island's 19th-century monarchs. Throughout almost the entire island, the contemporary cuisine of Madagascar typically consists of a base of rice ("vary") served with an accompaniment ("laoka"). The many varieties of laoka may be vegetarian or include animal proteins, and typically feature a sauce flavored with such ingredients as ginger, onion, garlic, tomato, vanilla, coconut milk, salt, curry powder, green peppercorns or, less commonly, other spices or herbs. In parts of the arid south and west, pastoral families may replace rice with maize, cassava, or curds made from fermented zebu milk. A wide variety of sweet and savory fritters as well as other street foods are available across the island, as are diverse tropical and temperate-climate fruits. Locally produced beverages include fruit juices, coffee, herbal teas and teas, and alcoholic drinks such as rum, wine, and beer. Three Horses Beer is the most popular beer on the island and is considered emblematic of Madagascar. The island also produces some of the world's finest chocolate; Chocolaterie Robert, established in 1940, is the most famous chocolate company on the island. Marilyn Monroe Marilyn Monroe (born Norma Jeane Mortenson; June 1, 1926 – August 5, 1962) was an American actress, model, and singer. Famous for playing comic "blonde bombshell" characters, she became one of the most popular sex symbols of the 1950s and was emblematic of the era's attitudes towards sexuality. Although she was a top-billed actress for only a decade, her films grossed $200 million by the time of her unexpected death in 1962. More than half a century later, she continues to be a major popular culture icon. 
Born and raised in Los Angeles, Monroe spent most of her childhood in foster homes and an orphanage and married at the age of sixteen. While working in a radioplane factory in 1944 as part of the war effort, she was introduced to a photographer from the First Motion Picture Unit and began a successful pin-up modeling career. The work led to short-lived film contracts with Twentieth Century-Fox (1946–1947) and Columbia Pictures (1948). After a series of minor film roles, she signed a new contract with Fox in 1951. Over the next two years, she became a popular actress and had roles in several comedies, including "As Young as You Feel" and "Monkey Business", and in the dramas "Clash by Night" and "Don't Bother to Knock". Monroe faced a scandal when it was revealed that she had posed for nude photos before she became a star, but the story did not tarnish her career and instead resulted in increased interest in her films. By 1953, Monroe was one of the most marketable Hollywood stars; she had leading roles in the noir film "Niagara", which focused on her sex appeal, and the comedies "Gentlemen Prefer Blondes" and "How to Marry a Millionaire", which established her star image as a "dumb blonde". Although she played a significant role in the creation and management of her public image throughout her career, she was disappointed when she was typecast and underpaid by the studio. She was briefly suspended in early 1954 for refusing a film project but returned to star in one of the biggest box office successes of her career, "The Seven Year Itch" (1955). When the studio was still reluctant to change Monroe's contract, she founded a film production company in late 1954; she named it Marilyn Monroe Productions (MMP). She dedicated 1955 to building her company and began studying method acting at the Actors Studio. In late 1955, Fox awarded her a new contract, which gave her more control and a larger salary. Her subsequent roles included a critically acclaimed performance in "Bus Stop" (1956) and the first independent production of MMP, "The Prince and the Showgirl" (1957). Monroe won a Golden Globe for Best Actress for her work in "Some Like It Hot" (1959), a critical and commercial success. Her last completed film was the drama "The Misfits" (1961). Monroe's troubled private life received much attention. She struggled with substance abuse, depression, and anxiety. Her second and third marriages, to retired baseball star Joe DiMaggio and playwright Arthur Miller, respectively, were highly publicized and both ended in divorce. On August 5, 1962, she died at age 36 from an overdose of barbiturates at her home in Los Angeles. Although Monroe's death was ruled a probable suicide, several conspiracy theories have been proposed in the decades following her death. Monroe was born Norma Jeane Mortenson at the Los Angeles County Hospital on June 1, 1926 as the third child of Gladys Pearl Baker (née Monroe, 1902–1984). Gladys was the daughter of two poor Midwesterners who migrated to California. At the age of fifteen, she married a man nine years her senior, John Newton Baker, and had two children by him, Robert (1917–1933) and Berniece (b. 1919). She filed for divorce in 1921, and Baker took the children with him to his native Kentucky. Monroe was not told that she had a sister until she was twelve, and met her for the first time as an adult. Following the divorce, Gladys worked as a film negative cutter at Consolidated Film Industries. 
In 1924, she married her second husband, Martin Edward Mortensen, but they separated only a few months later and divorced in 1928. The identity of Monroe's father is unknown and she most often used Baker as her surname. Although Gladys was mentally and financially unprepared for a child, Monroe's early childhood was stable and happy. Soon after the birth, Gladys placed Monroe with foster parents Albert and Ida Bolender in the rural town of Hawthorne. They raised their foster children according to the principles of evangelical Christianity. At first, Gladys lived with the Bolenders and commuted to work in Los Angeles, until longer work shifts forced her to move back to the city in early 1927. She then began visiting her daughter on weekends, often taking her to the cinema and to sightsee in Los Angeles. Although the Bolenders wanted to adopt Monroe, by the summer of 1933 Gladys felt stable enough for Monroe to move in with her and bought a small house in Hollywood. They shared it with lodgers, actors George and Maude Atkinson and their daughter, Nellie. Some months later, in January 1934, Gladys had a mental breakdown and was diagnosed with paranoid schizophrenia. After several months in a rest home, she was committed to the Metropolitan State Hospital. She spent the rest of her life in and out of hospitals and was rarely in contact with Monroe. Monroe became a ward of the state, and her mother's friend, Grace McKee Goddard, took responsibility for her and her mother's affairs. In the following four years, she lived with several foster families and often switched schools. For the first sixteen months, she continued living with the Atkinsons; she was sexually abused during this time. Always a shy girl, she now also developed a stutter and became withdrawn. In the summer of 1935, she briefly stayed with Grace and her husband Erwin "Doc" Goddard and two other families, until Grace placed her in the Los Angeles Orphans Home in Hollywood in September 1935. While the orphanage was "a model institution" and was described in positive terms by her peers, Monroe found being placed there traumatizing, as to her "it seemed that no one wanted me". Encouraged by the orphanage staff, who thought that Monroe would be happier living in a family, Grace became her legal guardian in 1936, although she was not able to take her out of the orphanage until the summer of 1937. Monroe's second stay with the Goddards lasted only a few months because Doc molested her. After staying with several of her relatives and Grace's friends and relatives in Los Angeles and Compton, Monroe found a more permanent home in September 1938, when she began living with Grace's aunt, Ana Atchinson Lower, in the Sawtelle district. She was enrolled in Emerson Junior High School and was taken to weekly Christian Science services with Lower. Monroe was otherwise a mediocre student, but she excelled in writing and contributed to the school newspaper. Due to the elderly Lower's health issues, Monroe returned to live with the Goddards in Van Nuys in either late 1940 or early 1941. After graduating from Emerson, she began attending Van Nuys High School. In early 1942, the company that employed Doc Goddard relocated him to West Virginia. California child protection laws prevented the Goddards from taking Monroe out of state, and she faced the possibility of having to return to the orphanage. As a solution, she married their neighbors' son, 21-year-old factory worker James "Jim" Dougherty, on June 19, 1942, just after her 16th birthday. 
Monroe subsequently dropped out of high school and became a housewife; she later stated that the "marriage didn't make me sad, but it didn't make me happy, either. My husband and I hardly spoke to each other. This wasn't because we were angry. We had nothing to say. I was dying of boredom." In 1943, Dougherty enlisted in the Merchant Marine and was stationed on Catalina Island, where Monroe moved with him. In April 1944, Jim Dougherty was shipped out to the Pacific; he would remain there for most of the next two years. Monroe moved in with his parents and began a job at the Radioplane Munitions Factory in Van Nuys. In late 1944, she met photographer David Conover, who had been sent by the U.S. Army Air Forces' First Motion Picture Unit to the factory to shoot morale-boosting pictures of female workers. Although none of her pictures were used, she quit working at the factory in January 1945 and began modeling for Conover and his friends. Defying her deployed husband, she moved out on her own and signed a contract with the Blue Book Model Agency in August 1945. As a model, Monroe occasionally used the name Jean Norman. She straightened her curly brunette hair and dyed it blonde to make herself more employable. Her figure was deemed more suitable for pin-up than fashion modeling, and she was featured mostly in advertisements and men's magazines. The agency's owner, Emmeline Snively, said that Monroe was one of its most ambitious and hard-working models; by early 1946, she had appeared on 33 magazine covers for publications such as "Pageant", "U.S. Camera", "Laff", and "Peek". Through Snively, Monroe received a contract with an acting agency in June 1946. After an unsuccessful interview at Paramount Pictures, she was given a screen-test by Ben Lyon, a 20th Century-Fox executive. Head executive Darryl F. Zanuck was unenthusiastic about it, but he was persuaded to give her a standard six-month contract to avoid her being signed by rival studio RKO Pictures. Monroe's contract began in August 1946, and she and Lyon selected the stage name "Marilyn Monroe". The first name was picked by Lyon, who was reminded of Broadway star Marilyn Miller; the last was picked by Monroe after her mother's maiden name. In September 1946, she divorced Dougherty, who was against her working. Monroe had no film roles during the first months of her contract and instead dedicated her days to acting, singing and dancing classes. Eager to learn more about the film industry and to promote herself, she spent time at the studio lot to observe others working. Her contract was renewed in February 1947, and she was given her first two film roles, bit parts in "Dangerous Years" (1947) and "Scudda Hoo! Scudda Hay!" (1948). The studio also enrolled her in the Actors' Laboratory Theatre, an acting school teaching the techniques of the Group Theatre; she later stated that it was "my first taste of what real acting in a real drama could be, and I was hooked". Monroe's contract was not renewed in August 1947, and she returned to modeling while also doing occasional odd jobs at the studio. Monroe was determined to make it as an actress, and she continued studying at the Actors' Lab. In October 1947, she appeared as a blonde vamp in the short-lived play "Glamour Preferred" at the Bliss-Hayden Theater, but the production was not reviewed by any major publication. 
To promote herself, she frequented producers' offices, befriended gossip columnist Sidney Skolsky, and entertained influential male guests at studio functions, a practice she had begun at Fox. She also became a friend and occasional sex partner of Fox executive Joseph M. Schenck, who persuaded his friend Harry Cohn, the head executive of Columbia Pictures, to sign her in March 1948. While at Fox, Monroe was given roles of a "girl next door"; at Columbia, she was modeled after Rita Hayworth. Her hairline was raised and her hair was bleached to platinum blonde. She also began working with the studio's head drama coach, Natasha Lytess, who would remain her mentor until 1955. Her only film at the studio was the low-budget musical "Ladies of the Chorus" (1948), in which she had her first starring role as a chorus girl who is courted by a wealthy man. She also screentested for the lead role in "Born Yesterday" (1950), but her contract was not renewed in September 1948. "Ladies of the Chorus" was released the following month and was not a success. After Columbia, Monroe became the protégée of Johnny Hyde, who was the vice president of the William Morris Agency. Hyde represented her and their relationship soon became sexual, with him even proposing marriage. He paid for a silicone prosthesis to be implanted in Monroe's jaw and possibly for a rhinoplasty, and arranged a bit part in the Marx Brothers film "Love Happy" (1950). Monroe also continued modeling, and in May 1949 she posed nude for photos taken by Tom Kelley. Although her role in "Love Happy" was very small, she was chosen to participate in the film's promotional tour in New York that year. In 1950, Monroe had bit parts in "Love Happy", "A Ticket to Tomahawk", "Right Cross" and "The Fireball", but also appeared in minor supporting roles in two critically acclaimed films: Joseph Mankiewicz's drama "All About Eve" and John Huston's crime film "The Asphalt Jungle". Despite only appearing on screen for a few minutes in the latter, she gained a mention in "Photoplay" and according to Spoto "moved effectively from movie model to serious actress". In December 1950, Hyde was able to negotiate a seven-year contract for Monroe with 20th Century-Fox. He died of a heart attack only days later, which left her devastated. The Fox contract gave Monroe more publicity. In March 1951, she was a presenter at the 23rd Academy Awards, and in September, "Collier's" became the first national magazine to publish a full-length profile of her. The same year, she had supporting roles in four low-budget films: in the MGM drama "Home Town Story", and in three moderately successful comedies for Fox, "As Young as You Feel", "Love Nest", and "Let's Make It Legal". According to Spoto all four films featured her "essentially [as] a sexy ornament", but she received some praise from critics: Bosley Crowther of "The New York Times" described her as "superb" in "As Young As You Feel" and Ezra Goodman of the "Los Angeles Daily News" called her "one of the brightest up-and-coming [actresses]" for "Love Nest". To further develop her acting skills, Monroe began taking classes with Michael Chekhov and mime Lotte Goslar. Her popularity with audiences was also growing: she received several thousand letters of fan mail a week, and was declared "Miss Cheesecake of 1951" by the army newspaper "Stars and Stripes", reflecting the preferences of soldiers in the Korean War. 
In her private life, Monroe was in a relationship with director Elia Kazan, and also briefly dated several other men, including director Nicholas Ray and actors Yul Brynner and Peter Lawford. In the second year of her contract, Monroe became a top-billed actress. Gossip columnist Florabel Muir named her the "it girl" of 1952 and Hedda Hopper described her as the "cheesecake queen" turned "box office smash". In February, she was named the "best young box office personality" by the Foreign Press Association of Hollywood, and began a highly publicized romance with retired New York Yankee Joe DiMaggio, who was one of the most famous sports personalities of the era. In March 1952, a scandal broke when Monroe revealed during an interview that in 1949, she had posed for nude pictures, which were now featured in calendars. The studio had learned of the upcoming publication of the calendar some weeks prior, and together with Monroe decided that to avoid damaging her career it was best to admit to the photos while stressing that she had been broke at the time. The strategy gained her public sympathy and increased interest in her films; the following month, she was featured on the cover of "Life" as "The Talk of Hollywood". Monroe added to her reputation as a new sex symbol with other publicity stunts that year: she wore a revealing dress when acting as Grand Marshal at the Miss America Pageant parade, and told gossip columnist Earl Wilson that she usually wore no underwear. Despite her popularity and sex appeal, Monroe wished to demonstrate more of her acting range. She appeared in two commercially successful dramas in the summer of 1952. The first was Fritz Lang's "Clash by Night", for which she was loaned to RKO and played a fish cannery worker; to prepare, she spent time in a real fish cannery in Monterey. She received positive reviews for her performance: "The Hollywood Reporter" stated that "she deserves starring status with her excellent interpretation", and "Variety" wrote that she "has an ease of delivery which makes her a cinch for popularity". The second film was the thriller "Don't Bother to Knock", in which she starred as a mentally disturbed babysitter and which Zanuck had assigned to her to test her abilities in a heavier dramatic role. It received mixed reviews from critics, with Crowther deeming her too inexperienced for the difficult role, and "Variety" blaming the script for the film's problems. Monroe's three other films in 1952 continued her typecasting in comic roles that focused on her sex appeal. In "We're Not Married!", her starring role as a beauty pageant contestant was created solely to "present Marilyn in two bathing suits", according to its writer Nunnally Johnson. In Howard Hawks' "Monkey Business", in which she was featured opposite Cary Grant, she played a secretary who is a "dumb, childish blonde, innocently unaware of the havoc her sexiness causes around her". In "O. Henry's Full House", her final film of the year, she had a minor role as a prostitute. During this period, Monroe gained a reputation for being difficult on film sets; the difficulties worsened as her career progressed. She was often late or did not show up at all, did not remember her lines, and would demand several re-takes before she was satisfied with her performance. Monroe's dependence on her acting coaches—first Natasha Lytess and later Paula Strasberg—also irritated directors. 
Monroe's problems have been attributed to a combination of perfectionism, low self-esteem, and stage fright; she disliked the lack of control she had over her work on film sets and never experienced similar problems during photo shoots, in which she had more say over her performance and could be more spontaneous instead of following a script. To alleviate her anxiety and chronic insomnia, she began to use barbiturates, amphetamines and alcohol, which also exacerbated her problems, although she did not become severely addicted until 1956. According to Sarah Churchwell, some of Monroe's behavior—especially later in her career—was also in response to the condescension and sexism of her male co-stars and directors. Similarly, Lois Banner has stated that she was bullied by many of her directors. Monroe starred in three movies that were released in 1953 and emerged as a major sex symbol and one of Hollywood's most bankable performers. The first of these was the Technicolor film noir "Niagara", in which she played a "femme fatale" scheming to murder her husband, played by Joseph Cotten. By then, Monroe and her make-up artist Allan "Whitey" Snyder had developed the make-up look that became associated with her: dark arched brows, pale skin, "glistening" red lips and a beauty mark. According to Sarah Churchwell, "Niagara" was one of the most overtly sexual films of Monroe's career, and it included scenes in which her body was covered only by a sheet or a towel, considered shocking by contemporary audiences. Its most famous scene is a 30-second long shot filmed from behind Monroe, in which she is seen walking with her hips swaying, which was heavily used in the film's marketing. When "Niagara" was released in January, women's clubs protested that the film was immoral, but the movie proved popular with audiences and grossed $6 million at the box office. While "Variety" deemed it "clichéd" and "morbid", "The New York Times" commented that "the falls and Miss Monroe are something to see", as although Monroe may not be "the perfect actress at this point ... she can be seductive – even when she walks". Monroe continued to attract attention by wearing revealing outfits at publicity events, most famously at the "Photoplay" awards in January 1953, where she won the "Fastest Rising Star" award. She wore a skin-tight gold lamé dress, which prompted veteran star Joan Crawford to describe her behavior as "unbecoming an actress and a lady" to the press. While "Niagara" made Monroe a sex symbol and established her "look", her second film of the year, the satirical musical comedy "Gentlemen Prefer Blondes", established her screen persona as a "dumb blonde". Based on Anita Loos' bestselling novel and its Broadway version, the film focuses on two "gold-digging" showgirls, Lorelei Lee and Dorothy Shaw, played by Monroe and Jane Russell. The role of Lorelei was originally intended for Betty Grable, who had been 20th Century-Fox's most popular "blonde bombshell" in the 1940s; Monroe was fast eclipsing her as a star who could appeal to both male and female audiences. As part of the film's publicity campaign, she and Russell pressed their hand and footprints in wet concrete outside Grauman's Chinese Theatre in June. "Gentlemen Prefer Blondes" was released shortly after and became one of the biggest box office successes of the year by grossing $5.3 million, more than double its production costs. 
Crowther of "The New York Times" and William Brogdon of "Variety" both commented favorably on Monroe, especially noting her performance of "Diamonds Are a Girl's Best Friend"; according to the latter, she demonstrated the "ability to sex a song as well as point up the eye values of a scene by her presence". In September, Monroe made her television debut in the "Jack Benny Show" playing Jack's fantasy woman in the episode "Honolulu Trip". She co-starred with Betty Grable and Lauren Bacall in her third movie of the year, "How to Marry a Millionaire", released in November. It featured Monroe in the role of a naïve model who teams up with her friends to find rich husbands, repeating the successful formula of "Gentlemen Prefer Blondes". It was the second film ever released in CinemaScope, a widescreen format that Fox hoped would draw audiences back to theaters as television was beginning to cause losses to film studios. Despite mixed reviews, the film was Monroe's biggest box office success at that point in her career, earning $8 million in world rentals. Monroe was listed in the annual Top Ten Money Making Stars Poll in both 1953 and 1954, and according to Fox historian Aubrey Solomon became the studio's "greatest asset" alongside CinemaScope. Monroe's position as a leading sex symbol was confirmed in December 1953, when Hugh Hefner featured her on the cover and as centerfold in the first issue of "Playboy". The cover image was a photograph taken of her at the Miss America Pageant parade in 1952, and the centerfold featured one of her 1949 nude photographs. Although Monroe had become one of 20th Century-Fox's biggest stars, her contract had not changed since 1950, meaning that she was paid far less than other stars of her stature and could not choose her projects or co-workers. She was also tired of being typecast, and her attempts to appear in films other than comedies or musicals had been thwarted by Zanuck, who had a strong personal dislike of her and did not think she would earn the studio as much revenue in dramas. When she refused to begin shooting yet another musical comedy, a film version of "The Girl in Pink Tights", which was to co-star Frank Sinatra, the studio suspended her on January 4, 1954. The suspension was front-page news, and Monroe immediately began a publicity campaign to counter any negative press and to strengthen her position in the conflict. On January 14, she and Joe DiMaggio, whose relationship had been subject to constant media attention since 1952, were married at San Francisco City Hall. They then traveled to Japan, combining a honeymoon with his business trip. From there, she traveled alone to Korea, where she performed songs from her films as part of a USO show for over 60,000 U.S. Marines over a four-day period. After returning to Hollywood in February, she was awarded "Photoplay"s "Most Popular Female Star" prize. She reached a settlement with the studio in March: it included a new contract to be made later in the year, and a starring role in the film version of the Broadway play "The Seven Year Itch", for which she was to receive a bonus of $100,000. Monroe's next film was Otto Preminger's Western "River of No Return", which had been filmed prior to her suspension and featured Robert Mitchum as her co-star. She called it a "Z-grade cowboy movie in which the acting finished second to the scenery and the CinemaScope process", although it was popular with audiences. 
The first film she made after returning to Fox was the musical "There's No Business Like Show Business", which she strongly disliked but the studio required her to do in exchange for dropping "The Girl in Pink Tights". The musical was unsuccessful upon its release in December, and Monroe's performance was considered vulgar by many critics. In September 1954, Monroe began filming Billy Wilder's comedy "The Seven Year Itch", in which she starred opposite Tom Ewell as a woman who becomes the object of her married neighbor's sexual fantasies. Although the film was shot in Hollywood, the studio decided to generate advance publicity by staging the filming of a scene on Lexington Avenue in Manhattan. In the shoot, Monroe is standing on a subway grate with the air blowing up the skirt of her white dress, which became one of the most famous scenes of her career. The shoot lasted for several hours and attracted a crowd of nearly 2,000 spectators, including professional photographers. The publicity stunt placed Monroe on international front pages, and it also marked the end of her marriage to DiMaggio, who was furious about the stunt. The union had been troubled from the start by his jealousy and controlling attitude; Spoto and Banner have also asserted that he was physically abusive. After Monroe returned to Hollywood, she hired high-profile attorney Jerry Giesler and announced in October 1954 that she was filing for divorce from DiMaggio after only nine months of marriage. "The Seven Year Itch" was released the following June and grossed over $4.5 million at the box office, making it one of the biggest commercial successes that year. After filming for "The Seven Year Itch" wrapped in November, Monroe began a new battle for control over her career and left Hollywood for the East Coast, where she and photographer Milton Greene founded their own production company, Marilyn Monroe Productions (MMP) – an action that has later been called "instrumental" in the collapse of the studio system. Announcing its foundation in a press conference in January 1955, Monroe stated that she was "tired of the same old sex roles. I want to do better things. People have scope, you know." She asserted that she was no longer under contract to Fox, as the studio had not fulfilled its duties, such as paying her the promised bonus for "The Seven Year Itch". This began a year-long legal battle between her and the studio. The press largely ridiculed Monroe for her actions and she was parodied in "The Seven Year Itch" writer George Axelrod's "Will Success Spoil Rock Hunter?" (1955), in which her lookalike Jayne Mansfield played a dumb actress who starts her own production company. Monroe dedicated 1955 to studying her craft. She moved to Manhattan and took acting classes with Constance Collier and attended workshops on method acting at the Actors Studio, run by Lee Strasberg. She grew close to Strasberg and his wife Paula, receiving private lessons at their home due to her shyness, and soon became a family member. She dismissed her old drama coach, Natasha Lytess, and replaced her with Paula; the Strasbergs remained an important influence for the rest of her career. Monroe also started undergoing psychoanalysis at the recommendation of Strasberg, who believed that an actor must confront their emotional traumas and use them in their performances. In her private life, Monroe continued her relationship with DiMaggio despite the ongoing divorce proceedings; she also dated actor Marlon Brando and playwright Arthur Miller. 
She had first been introduced to Miller by Kazan in the early 1950s. The affair between Monroe and Miller became increasingly serious after October 1955, when her divorce from DiMaggio was finalized and Miller separated from his wife. The FBI also opened a file on her. The studio feared that Monroe would be blacklisted and urged her to end the affair, as Miller was being investigated by the FBI for allegations of communism and had been subpoenaed by the House Un-American Activities Committee. Despite the risk to her career, Monroe refused to end the relationship, later calling the studio heads "born cowards". By the end of the year, Monroe and Fox had come to an agreement about a new seven-year contract. It was clear that MMP would not be able to finance films alone, and the studio was eager to have Monroe working again. The contract required her to make four movies for Fox during the seven years. The studio would pay her $100,000 for each movie, and granted her the right to choose her own projects, directors and cinematographers. She would also be free to make one film with MMP per each completed film for Fox. Monroe began 1956 by announcing her win over 20th Century-Fox; the press, which had previously derided her, now wrote favorably about her decision to fight the studio. "Time" called her a "shrewd businesswoman" and "Look" predicted that the win would be "an example of the individual against the herd for years to come". In March, she officially changed her name to Marilyn Monroe. Her relationship with Miller prompted some negative comments from the press, including Walter Winchell's statement that "America's best-known blonde moving picture star is now the darling of the left-wing intelligentsia." Monroe and Miller were married in a civil ceremony at the Westchester County Court in White Plains, New York, on June 29, and two days later had a Jewish ceremony at his agent's house at Waccabuc, New York. With the marriage, Monroe converted to Judaism, which led Egypt to ban all of her films. The media saw the union as mismatched given her star image as a sex symbol and his position as an intellectual, as demonstrated by "Variety"s headline "Egghead Weds Hourglass". The drama "Bus Stop" was the first film that Monroe chose to make under the new contract; the movie was released in August 1956. She played Chérie, a saloon singer whose dreams of stardom are complicated by a naïve cowboy who falls in love with her. For the role, she learnt an Ozark accent, chose costumes and make-up that lacked the glamour of her earlier films, and provided deliberately mediocre singing and dancing. Broadway director Joshua Logan agreed to direct, despite initially doubting her acting abilities and knowing of her reputation for being difficult. The filming took place in Idaho and Arizona in early 1956, with Monroe "technically in charge" as the head of MMP, occasionally making decisions on cinematography and with Logan adapting to her chronic lateness and perfectionism. The experience changed Logan's opinion of Monroe, and he later compared her to Charlie Chaplin in her ability to blend comedy and tragedy. "Bus Stop" became a box office success, grossing $4.25 million, and received mainly favorable reviews. "The Saturday Review of Literature" wrote that Monroe's performance "effectively dispels once and for all the notion that she is merely a glamour personality" and Crowther proclaimed: "Hold on to your chairs, everybody, and get set for a rattling surprise. 
Marilyn Monroe has finally proved herself an actress." She received a Golden Globe for Best Actress nomination for her performance. In August 1956, Monroe began filming MMP's first independent production, "The Prince and the Showgirl", at Pinewood Studios in England. It was based on Terence Rattigan's "The Sleeping Prince", a play about an affair between a showgirl and a prince in the 1910s. The main roles had first been played on stage by Laurence Olivier and Vivien Leigh; he reprised his role and directed and co-produced the film. The production was complicated by conflicts between him and Monroe. He angered her with the patronizing statement "All you have to do is be sexy", and by wanting her to replicate Leigh's interpretation. He also disliked the constant presence of Paula Strasberg, Monroe's acting coach, on set. In retaliation to what she considered Olivier's "condescending" behavior, Monroe started arriving late and became uncooperative, stating later that "if you don't respect your artists, they can't work well." Her drug use escalated, and according to Spoto she became pregnant and miscarried during the production. She also had arguments with Greene over how MMP should be run, including whether Miller should join the company. Despite the difficulties, the film was completed on schedule by the end of the year. It was released to mixed reviews in June 1957 and proved unpopular with American audiences. It was better received in Europe, where she was awarded the Italian David di Donatello and the French Crystal Star awards, and was nominated for a BAFTA. After returning to the United States, Monroe took an 18-month hiatus from work to concentrate on married life on the East Coast. She and Miller split their time between their Manhattan apartment and an eighteenth-century farmhouse that they purchased in Roxbury, Connecticut; they spent the summer in Amagansett, Long Island. She became pregnant in mid-1957, but it was ectopic and had to be terminated. She suffered a miscarriage a year later. Her gynecological problems were largely caused by endometriosis, a disease from which she suffered throughout her adult life. Monroe was also briefly hospitalized during this time due to a barbiturate overdose. During the hiatus, she dismissed Greene from MMP and bought his share of the company as they could not settle their disagreements and she had begun to suspect that he was embezzling money from the company. Monroe returned to Hollywood in July 1958 to act opposite Jack Lemmon and Tony Curtis in Billy Wilder's comedy on gender roles, "Some Like It Hot". Although she considered the role of Sugar Kane another "dumb blonde", she accepted it due to Miller's encouragement and the offer of receiving ten percent of the film's profits in addition to her standard pay. The difficulties during the film's production have since become "legendary". Monroe would demand dozens of re-takes, and could not remember her lines or act as directed – Curtis famously stated that kissing her was "like kissing Hitler" due to the number of re-takes. Monroe herself privately likened the production to a sinking ship and commented on her co-stars and director saying "[but] why should I worry, I have no phallic symbol to lose." Many of the problems stemmed from a conflict between her and Wilder, who also had a reputation for being difficult, on how she should play the character. 
Monroe made Wilder angry by asking him to alter many of her scenes, which in turn made her stage fright worse, and it is suggested that she deliberately ruined several scenes to act them her way. In the end, Wilder was happy with Monroe's performance and stated: "Anyone can remember lines, but it takes a real artist to come on the set and not know her lines and yet give the performance she did!" Despite the difficulties of its production, "Some Like It Hot" became a critical and commercial success when it was released in March 1959. Monroe's performance earned her a Golden Globe for Best Actress, and prompted "Variety" to call her "a comedienne with that combination of sex appeal and timing that just can't be beat". It has been voted one of the best films ever made in polls by the BBC, the American Film Institute, and "Sight & Sound". After "Some Like It Hot", Monroe took another hiatus until late 1959, when she returned to Hollywood and starred in the musical comedy "Let's Make Love", about an actress and a millionaire who fall in love when performing in a satirical play. She chose George Cukor to direct and Miller re-wrote portions of the script, which she considered weak; she accepted the part solely because she was behind on her contract with Fox, having only made one of four promised films. The film's production was delayed by her frequent absences from the set. She had an affair with Yves Montand, her co-star, which was widely reported by the press and used in the film's publicity campaign. "Let's Make Love" was unsuccessful upon its release in September 1960; Crowther described Monroe as appearing "rather untidy" and "lacking ... the old Monroe dynamism", and Hedda Hopper called the film "the most vulgar picture she's ever done". Truman Capote lobbied for her to play Holly Golightly in a film adaptation of "Breakfast at Tiffany's", but the role went to Audrey Hepburn as its producers feared that Monroe would complicate the production. The last film that Monroe completed was John Huston's "The Misfits", which Miller had written to provide her with a dramatic role. She played Roslyn, a recently divorced woman who becomes friends with three aging cowboys, played by Clark Gable, Eli Wallach and Montgomery Clift. The filming in the Nevada desert between July and November 1960 was again difficult. The four-year marriage of Monroe and Miller was effectively over, and he began a new relationship with set photographer Inge Morath. Monroe disliked that he had based her role partly on her life, and thought it inferior to the male roles; she also struggled with Miller's habit of re-writing scenes the night before filming. Her health was also failing: she was in pain from gallstones, and her drug addiction was so severe that her make-up usually had to be applied while she was still asleep under the influence of barbiturates. In August, filming was halted for her to spend a week detoxing in a Los Angeles hospital. Despite her problems, Huston stated that when Monroe was playing Roslyn, she "was not pretending to an emotion. It was the real thing. She would go deep down within herself and find it and bring it up into consciousness." Monroe and Miller separated after filming wrapped, and she was granted a quick divorce in Mexico in January 1961. "The Misfits" was released the following month, but it failed at the box office. 
Its reviews were mixed, with "Variety" complaining of frequently "choppy" character development, and Bosley Crowther calling Monroe "completely blank and unfathomable" and stating that "unfortunately for the film's structure, everything turns upon her". Despite the film's initial failure, it has received more favorable reviews from critics and film scholars in the twenty-first century. Geoff Andrew of the British Film Institute has called it a classic, Huston scholar Tony Tracy has described Monroe's performance the "most mature interpretation of her career", and Geoffrey McNab of "The Independent" has praised her for being "extraordinary" in portraying Roslyn's "power of empathy". Monroe was next to star in a television adaptation of W. Somerset Maugham's short story "Rain" for NBC, but the project fell through as the network did not want to hire her choice of director, Lee Strasberg. Instead of working, she spent the first six months of 1961 preoccupied by health problems. Monroe underwent surgery for her endometriosis, had a cholecystectomy, and spent four weeks in hospital care – including a brief stint in a mental ward – for depression. She was helped by her ex-husband Joe DiMaggio, with whom she now rekindled a friendship. In spring 1961, Monroe also moved back to California after six years on the East Coast. She dated Frank Sinatra for several months, and in early 1962 purchased a house in Brentwood, Los Angeles. Monroe returned to the public eye in the spring of 1962; she received a "World Film Favorite" Golden Globe Award and began to shoot a new film for 20th Century Fox, "Something's Got to Give", a re-make of "My Favorite Wife" (1940). It was to be co-produced by MMP, directed by George Cukor and to co-star Dean Martin and Cyd Charisse. Days before filming began, Monroe caught sinusitis; despite medical advice to postpone the production, Fox began it as planned in late April. Monroe was too ill to work for the majority of the next six weeks, but despite confirmations by multiple doctors, the studio tried to put pressure on her by alleging publicly that she was faking it. On May 19, she took a break to sing "Happy Birthday" on stage at President John F. Kennedy's birthday celebration at Madison Square Garden in New York. She drew attention with her costume: a beige, skintight dress covered in rhinestones, which made her appear nude. Monroe's trip to New York caused even more irritation for Fox executives, who had wanted her to cancel it. Monroe next filmed a scene for "Something's Got to Give" in which she swam naked in a swimming pool. To generate advance publicity, the press was invited to take photographs of the scene, which were later published in "Life"; this was the first time that a major star had posed nude while at the height of their career. When she was again on sick leave for several days, Fox decided that it could not afford to have another film running behind schedule when it was already struggling to cover the rising costs of "Cleopatra" (1963). On June 7, Fox fired Monroe and sued her for $750,000 in damages. She was replaced by Lee Remick, but after Martin refused to make the film with anyone other than Monroe, Fox sued him as well and shut down the production. The studio blamed Monroe for the film's demise and began spreading negative publicity about her, even alleging that she was mentally disturbed. 
Fox soon regretted its decision and re-opened negotiations with Monroe later in June; a settlement about a new contract, including re-commencing "Something's Got to Give" and a starring role in the black comedy "What a Way to Go!" (1964), was reached later that summer. To repair her public image, Monroe engaged in several publicity ventures, including interviews for "Life" and "Cosmopolitan" and her first photo shoot for "Vogue". For "Vogue", she and photographer Bert Stern collaborated for two series of photographs, one a standard fashion editorial and another of her posing nude, which were both later published posthumously with the title "The Last Sitting". In the last weeks of her life, she was also planning on starring in a biopic of Jean Harlow. During the final months of her life, Monroe lived at 12305 Fifth Helena Drive in the Brentwood neighborhood of Los Angeles. Her housekeeper Eunice Murray was staying overnight at the home on the evening of August 5, 1962. Murray awoke at 3:00a.m. on August 6 and sensed that something was wrong. Although she saw light from under Monroe's bedroom door, she was unable to get a response and found the door locked. Murray then called Monroe's psychiatrist, Dr. Ralph Greenson, who arrived at the house shortly after and broke into the bedroom through a window, finding Monroe dead in her bed. She was pronounced dead by her physician, Dr. Hyman Engelberg, who arrived at the house at around 3:50a.m. At 4:25a.m., they notified the Los Angeles Police Department. Monroe had died between 8:30 p.m. and 10:30p.m. on August 5, and the toxicology report revealed that the cause of death was acute barbiturate poisoning. She had 8 mg% (milligrams per 100 milliliters of solution) chloral hydrate and 4.5 mg% of pentobarbital (Nembutal) in her blood, and a further 13 mg% of pentobarbital in her liver. Empty medicine bottles were found next to her bed. The possibility that Monroe had accidentally overdosed was ruled out because the dosages found in her body were several times over the lethal limit. The Los Angeles County Coroners Office was assisted in their investigation by the Los Angeles Suicide Prevention Team, who had expert knowledge on suicide. Monroe's doctors stated that she had been "prone to severe fears and frequent depressions" with "abrupt and unpredictable mood changes", and had overdosed several times in the past, possibly intentionally. Due to these facts and the lack of any indication of foul play, deputy coroner Thomas Noguchi classified her death as a probable suicide. Monroe was an international star and her sudden death was front-page news in the United States and Europe. According to Lois Banner, "it's said that the suicide rate in Los Angeles doubled the month after she died; the circulation rate of most newspapers expanded that month", and the "Chicago Tribune" reported that they had received hundreds of phone calls from members of the public who were requesting information about her death. French artist Jean Cocteau commented that her death "should serve as a terrible lesson to all those, whose chief occupation consists of spying on and tormenting film stars", her former co-star Laurence Olivier deemed her "the complete victim of ballyhoo and sensation", and "Bus Stop" director Joshua Logan stated that she was "one of the most unappreciated people in the world". Her funeral, held at the Westwood Village Memorial Park Cemetery on August 8, was private and attended by only her closest associates. 
The service was arranged by Joe DiMaggio and her business manager Inez Melson. Hundreds of spectators crowded the streets around the cemetery. Monroe was later entombed at crypt No. 24 at the Corridor of Memories. In the following decades, several conspiracy theories, including murder and accidental overdose, have been introduced to contradict suicide as the cause of Monroe's death. The speculation that Monroe had been murdered first gained mainstream attention with the publication of Norman Mailer's "" in 1973, and in the following years became widespread enough for the Los Angeles County District Attorney John Van de Kamp to conduct a "threshold investigation" in 1982 to see whether a criminal investigation should be opened. No evidence of foul play was found. When 20th Century-Fox began to develop Monroe's star image, they wanted her to replace the aging Betty Grable, who was their most popular "blonde bombshell" of the 1940s. The 1940s had been the heyday of actresses who were perceived as tough and smart, such as Katharine Hepburn and Barbara Stanwyck, film stars who had appealed to women-dominated audiences. The studio wanted Monroe to be a star of the new decade who would draw men to movie theaters. From the beginning, she played a significant part in the creation of her public image, and towards the end of her career Monroe exerted almost full control over it. Monroe devised many of her publicity strategies, cultivated friendships with gossip columnists such as Sidney Skolsky and Louella Parsons, and controlled the use of her images. In addition to Grable, she was often compared to another iconic blonde, 1930s film star Jean Harlow. The comparison was prompted partly by Monroe, who named Harlow as her childhood idol, wanted to play her in a biopic, and even employed Harlow's hair stylist to color her hair. Monroe's screen persona centered on her blonde hair and the stereotypes associated with it, especially dumbness, naïveté, sexual availability and artificiality. She often used a breathy, childish voice in her films, and in interviews gave the impression that everything she said was "utterly innocent and uncalculated", parodying herself with double entendres that came to be known as "Monroeisms". For example, when she was asked what she had on in the 1949 nude photo shoot, she replied, "I had the radio on". Monroe began her career as a pin-up model, and her hourglass figure was often one of her most noted features. Film scholar Richard Dyer wrote that Monroe was often positioned so that her curvy silhouette was on display, and that she often posed like a pin-up in her publicity photos. Her distinctive, hip-swinging walk also drew attention to her body and earned her the nickname "the girl with the horizontal walk". Clothing played an important factor in Monroe's star image. She often wore white to emphasize her blondness, and drew attention by wearing revealing outfits that showed off her figure. Her publicity stunts often revolved around her clothing, which exposed large amounts of her body or even a wardrobe malfunction, such as when one of the shoulder straps of her dress suddenly snapped during a press conference. In press stories, Monroe was portrayed as the embodiment of the American Dream, a girl who had risen from a miserable childhood to Hollywood stardom. In her studio biographies, stories of her time spent in foster families and an orphanage were exaggerated and even partly fabricated. 
Although Monroe's typecast screen persona as a dim-witted but sexually attractive blonde was a carefully crafted act, audiences and film critics believed it to be her real personality and that she was not acting in her comedies. This became an obstacle in her later career, when she wanted to change her public image and pursue other kinds of roles, or to be respected as a businesswoman. Academic Sarah Churchwell studied narratives about Monroe and has stated: Lois Banner has written that Monroe often subtly parodied her status as a sex symbol in her films and public appearances, and that "the 'Marilyn Monroe' character she created was a brilliant archetype, who stands between Mae West and Madonna in the tradition of twentieth-century gender tricksters." Monroe herself stated that she was influenced by West, learning "a few tricks from her – that impression of laughing at, or mocking, her own sexuality". In the 1950s, she also studied comedy in classes given by mime and dancer Lotte Goslar, famous for her comic stage performances, and had her accompany her on film sets to instruct her. In "Gentlemen Prefer Blondes", one of the films in which she played an archetypal dumb blonde, Monroe had the sentence "I can be smart when it's important, but most men don't like it" added to her character's lines in the script. Dyer has stated that Monroe's star image was created mainly for the male gaze and that she usually played "the girl", who is defined solely by her gender, in her films. Her roles were almost always chorus girls, secretaries, or models; occupations where "the woman is on show, there for the pleasure of men." Film scholar Thomas Harris, who analyzed Monroe's public image in 1957, wrote that her working class roots and lack of family made her appear more sexually available, "the ideal playmate", in contrast to her contemporary, Grace Kelly, who was also marketed as an attractive blonde, but due to her upper-class background came to be seen as a sophisticated actress, unattainable for the majority of male viewers. According to Dyer, Monroe became "virtually a household name for sex" in the 1950s and "her image has to be situated in the flux of ideas about morality and sexuality that characterised the fifties in America", such as Freudian ideas about sex, the Kinsey report (1953), and Betty Friedan's "The Feminine Mystique" (1963). By appearing vulnerable and unaware of her sex appeal, Monroe was the first sex symbol to present sex as natural and without danger, in contrast to the 1940s "femme fatales". Spoto likewise describes her as the embodiment of "the postwar ideal of the American girl, soft, transparently needy, worshipful of men, naïve, offering sex without demands", which is echoed in Molly Haskell's statement that "she was the fifties fiction, the lie that a woman had no sexual needs, that she is there to cater to, or enhance, a man's needs." Monroe's contemporary Norman Mailer wrote that "Marilyn suggested sex might be difficult and dangerous with others, but ice cream with her", while Groucho Marx characterized her as "Mae West, Theda Bara, and Bo Peep all rolled into one". According to Haskell, due to her status as a sex symbol, Monroe was less popular with women than with men, as they "couldn't identify with her and didn't support her", although this would change after her death. 
Dyer has also argued that Monroe's platinum blonde hair became such a defining feature because it made her "racially unambiguous" and exclusively white just as the civil rights movement was beginning, and that she should be seen as emblematic of racism in twentieth-century popular culture. Banner agreed that it may not be a coincidence that Monroe launched a trend of platinum blonde actresses during the civil rights movement, but has also criticized Dyer, pointing out that in her highly publicized private life, Monroe associated with people who were seen as "white ethnics", such as Joe DiMaggio (Italian-American) and Arthur Miller (Jewish). According to Banner, she sometimes challenged prevailing racial norms in her publicity photographs; for example, in an image featured in "Look" in 1951, she was shown in revealing clothes while practicing with African-American singing coach Phil Moore. Monroe was perceived as a specifically American star, "a national institution as well known as hot dogs, apple pie, or baseball" according to "Photoplay". Banner calls her the symbol of populuxe, a star whose joyful and glamorous public image "helped the nation cope with its paranoia in the 1950s about the Cold War, the atom bomb, and the totalitarian communist Soviet Union". Historian Fiona Handyside writes that the French female audiences associated whiteness/blondness with American modernity and cleanliness, and so Monroe came to symbolize a modern, "liberated" woman whose life takes place in the public sphere. Film historian Laura Mulvey has written of her as an endorsement for American consumer culture: Twentieth Century Fox profited from Monroe's popularity by cultivating several lookalike actresses that included Jayne Mansfield and Sheree North. Other studios also attempted to create their own Monroes: Universal Pictures with Mamie Van Doren, Columbia Pictures with Kim Novak, and Rank Organisation with Diana Dors. According to The Guide to United States Popular Culture, "as an icon of American popular culture, Monroe's few rivals in popularity include Elvis Presley and Mickey Mouse ... no other star has ever inspired such a wide range of emotions – from lust to pity, from envy to remorse." Art historian Gail Levin stated that Monroe may have been "the most photographed person of the 20th century", and The American Film Institute has named her the sixth greatest female screen legend in American film history. The Smithsonian Institution has included her on their list of "100 Most Significant Americans of All Time", and both "Variety" and VH1 have placed her in the top ten in their rankings of the greatest popular culture icons of the twentieth century.Hundreds of books have been written about Monroe. She has been the subject of films, plays, operas, and songs, and has influenced artists and entertainers such as Andy Warhol and Madonna. She also remains a valuable brand: her image and name have been licensed for hundreds of products, and she has been featured in advertising for multinational corporations and brands such as Max Factor, Chanel, Mercedes-Benz, and Absolut Vodka. Monroe's enduring popularity is linked to her conflicted public image. On the one hand, she remains a sex symbol, beauty icon and one of the most famous stars of classical Hollywood cinema. On the other, she is also remembered for her troubled private life, unstable childhood, struggle for professional respect, as well as her death and the conspiracy theories that surrounded it. 
She has been written about by scholars and journalists who are interested in gender and feminism; these writers include Gloria Steinem, Jacqueline Rose, Molly Haskell, Sarah Churchwell, and Lois Banner. Some, such as Steinem, have viewed her as a victim of the studio system. Others, such as Haskell, Rose, and Churchwell, have instead stressed Monroe's proactive role in her career and her participation in the creation of her public persona. Due to the contrast between her stardom and troubled private life, Monroe is closely linked to broader discussions about modern phenomena such as mass media, fame, and consumer culture. According to academic Susanne Hamscha, Monroe has continued relevance to ongoing discussions about modern society, and she is "never completely situated in one time or place" but has become "a surface on which narratives of American culture can be (re-)constructed", and "functions as a cultural type that can be reproduced, transformed, translated into new contexts, and enacted by other people". Similarly, Banner has called Monroe the "eternal shapeshifter" who is re-created by "each generation, even each individual ... to their own specifications". While Monroe remains a cultural icon, critics are divided on her legacy as an actress. David Thomson called her body of work "insubstantial" and Pauline Kael wrote that she could not act, but rather "used her lack of an actress's skills to amuse the public. She had the wit or crassness or desperation to turn cheesecake into acting – and vice versa; she did what others had the 'good taste' not to do". In contrast, according to Peter Bradshaw, Monroe was a talented comedian who "understood how comedy achieved its effects", and Roger Ebert wrote that "Monroe's eccentricities and neuroses on sets became notorious, but studios put up with her long after any other actress would have been blackballed because what they got back on the screen was magical". Similarly, Jonathan Rosenbaum stated that "she subtly subverted the sexist content of her material" and that "the difficulty some people have discerning Monroe's intelligence as an actress seems rooted in the ideology of a repressive era, when superfeminine women weren't supposed to be smart". Moon The Moon is an astronomical body that orbits planet Earth and is Earth's only permanent natural satellite. It is the fifth-largest natural satellite in the Solar System, and the largest among planetary satellites relative to the size of the planet that it orbits (its primary). The Moon is after Jupiter's satellite Io the second-densest satellite in the Solar System among those whose densities are known. The Moon is thought to have formed about 4.51 billion years ago, not long after Earth. The most widely accepted explanation is that the Moon formed from the debris left over after a giant impact between Earth and a Mars-sized body called Theia. The Moon is in synchronous rotation with Earth, and thus always shows the same side to earth, the near side. The near side is marked by dark volcanic maria that fill the spaces between the bright ancient crustal highlands and the prominent impact craters. After the Sun, the Moon is the second-brightest regularly visible celestial object in Earth's sky. Its surface is actually dark, although compared to the night sky it appears very bright, with a reflectance just slightly higher than that of worn asphalt. Its gravitational influence produces the ocean tides, body tides, and the slight lengthening of the day. 
The Moon's average orbital distance is , or 1.28 light-seconds. This is about thirty times the diameter of Earth. The Moon's apparent size in the sky is almost the same as that of the Sun (because it is 400x farther and larger). Therefore, the Moon covers the Sun nearly precisely during a total solar eclipse. This matching of apparent visual size will not continue in the far future, because the Moon's distance from Earth is slowly increasing. The Moon was first reached in 1959 by an unmanned spacecraft of the Soviet Union's Luna program; the United States' NASA Apollo program achieved the only manned lunar missions to date, beginning with the first manned orbital mission by Apollo 8 in 1968, and six manned landings between 1969 and 1972, with the first being Apollo 11. These missions returned lunar rocks which have been used to develop a geological understanding of the Moon's origin, internal structure, and the Moon's later history. Since the Apollo 17 mission in 1972, the Moon has been visited only by unmanned spacecraft. Both the Moon's natural prominence in the earthly sky and its regular cycle of phases as seen from Earth have provided cultural references and influences for human societies and cultures since time immemorial. Such cultural influences can be found in language, lunar based calendar systems, art, and mythology. The usual English proper name for Earth's natural satellite is "the Moon", which in nonscientific texts is usually not capitalized. The noun "moon" is derived from Old English "mōna", which (like all Germanic language cognates) stems from Proto-Germanic "*mēnô", which comes from Proto-Indo-European "*mḗh₁n̥s" "moon", "month", which comes from the Proto-Indo-European root "*meh₁-" "to measure", the month being the ancient unit of time measured by the Moon. Occasionally, the name "Luna" is used. In literature, especially science fiction, "Luna" is used to distinguish it from other moons, while in poetry, the name has been used to denote personification of our moon. The modern English adjective pertaining to the Moon is "lunar", derived from the Latin word for the Moon, "luna". The adjective "selenic" (usually only used to refer to the chemical element selenium) is so rarely used to refer to the Moon that this meaning is not recorded in most major dictionaries. It is derived from the Ancient Greek word for the Moon, (selḗnē), from which is however also derived the prefix "seleno-", as in "selenography", the study of the physical features of the Moon, as well as the element name "selenium". Both the Greek goddess Selene and the Roman goddess Diana were alternatively called Cynthia. The names Luna, Cynthia, and Selene are reflected in terminology for lunar orbits in words such as "apolune", "pericynthion", and "selenocentric". The name Diana comes from the Proto-Indo-European "*diw-yo", "heavenly", which comes from the PIE root *dyeu- "to shine," which in many derivatives means "sky, heaven, and god" and is also the origin of Latin "dies", "day". The Moon formed 4.51 billion years ago, some 60 million years after the origin of the Solar System. 
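The near-coincidence in apparent size noted above is simple geometry: the Sun is roughly 400 times farther from Earth than the Moon and also roughly 400 times larger in diameter, so the two ratios nearly cancel. A minimal sketch of that arithmetic, using rounded mean distances and diameters supplied here for illustration rather than taken from the text:

```python
import math

# Rounded mean values, assumed for illustration (not from the article text)
moon_diameter_km = 3_474
moon_distance_km = 384_400          # mean Earth-Moon distance
sun_diameter_km = 1_391_000
sun_distance_km = 149_600_000       # roughly one astronomical unit

def angular_diameter_deg(diameter_km, distance_km):
    """Apparent (angular) size of a sphere of the given diameter at the given distance."""
    return math.degrees(2 * math.atan(diameter_km / (2 * distance_km)))

print(f"Moon: {angular_diameter_deg(moon_diameter_km, moon_distance_km):.2f} deg")  # ~0.52 deg
print(f"Sun:  {angular_diameter_deg(sun_diameter_km, sun_distance_km):.2f} deg")    # ~0.53 deg
print(f"distance ratio: {sun_distance_km / moon_distance_km:.0f}")                  # ~390
print(f"diameter ratio: {sun_diameter_km / moon_diameter_km:.0f}")                  # ~400
```

Because the two ratios are nearly equal, the Moon can just cover the Sun's disc during a total solar eclipse; as the Moon slowly recedes from Earth, its angular size shrinks and this match will eventually be lost.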
Several forming mechanisms have been proposed, including the fission of the Moon from Earth's crust through centrifugal force (which would require too great an initial spin of Earth), the gravitational capture of a pre-formed Moon (which would require an unfeasibly extended atmosphere of Earth to dissipate the energy of the passing Moon), and the co-formation of Earth and the Moon together in the primordial accretion disk (which does not explain the depletion of metals in the Moon). These hypotheses also cannot account for the high angular momentum of the Earth–Moon system. The prevailing hypothesis is that the Earth–Moon system formed after an impact of a Mars-sized body (named "Theia") with the proto-Earth (giant impact). The impact blasted material into Earth's orbit and then the material accreted and formed the Moon. The Moon's far side has a crust that is thicker than that of the near side. This is thought to be because the Moon fused from two different bodies. This hypothesis, although not perfect, perhaps best explains the evidence. Eighteen months prior to an October 1984 conference on lunar origins, Bill Hartmann, Roger Phillips, and Jeff Taylor challenged fellow lunar scientists: "You have eighteen months. Go back to your Apollo data, go back to your computer, do whatever you have to, but make up your mind. Don't come to our conference unless you have something to say about the Moon's birth." At the 1984 conference at Kona, Hawaii, the giant impact hypothesis emerged as the most popular. Before the conference, there were partisans of the three "traditional" theories, plus a few people who were starting to take the giant impact seriously, and there was a huge apathetic middle who didn’t think the debate would ever be resolved. Afterward, there were essentially only two groups: the giant impact camp and the agnostics. Giant impacts are thought to have been common in the early Solar System. Computer simulations of giant impacts have produced results that are consistent with the mass of the lunar core and the angular momentum of the Earth–Moon system. These simulations also show that most of the Moon derived from the impactor, rather than the proto-Earth. However, more recent simulations suggest a larger fraction of the Moon derived from the proto-Earth. Other bodies of the inner Solar System such as Mars and Vesta have, according to meteorites from them, very different oxygen and tungsten isotopic compositions compared to Earth. However, Earth and the Moon have nearly identical isotopic compositions. The isotopic equalization of the Earth-Moon system might be explained by the post-impact mixing of the vaporized material that formed the two, although this is debated. The impact released a lot of energy and then the released material re-accreted into the Earth–Moon system. This would have melted the outer shell of Earth, and thus formed a magma ocean. Similarly, the newly formed Moon would also have been affected and had its own lunar magma ocean; its depth is estimated from about to . While the giant impact hypothesis might explain many lines of evidence, some questions are still unresolved, most of which involve the Moon's composition. In 2001, a team at the Carnegie Institute of Washington reported the most precise measurement of the isotopic signatures of lunar rocks. To their surprise, the rocks from the Apollo program had the same isotopic signature as rocks from Earth, however they differed from almost all other bodies in the Solar System. 
Indeed, this observation was unexpected, because most of the material that formed the Moon was thought to come from Theia, and it was announced in 2007 that there was less than a 1% chance that Theia and Earth had identical isotopic signatures. In 2012, other Apollo lunar samples were found to have the same titanium isotope composition as Earth, which conflicts with what is expected if the Moon formed far from Earth or is derived from Theia. These discrepancies may be explained by variations of the giant impact hypothesis. The Moon is a differentiated body: it has a geochemically distinct crust, mantle, and core. The Moon has a solid iron-rich inner core with a radius possibly as small as and a fluid outer core primarily made of liquid iron with a radius of roughly . Around the core is a partially molten boundary layer with a radius of about . This structure is thought to have developed through the fractional crystallization of a global magma ocean shortly after the Moon's formation 4.5 billion years ago. Crystallization of this magma ocean would have created a mafic mantle from the precipitation and sinking of the minerals olivine, clinopyroxene, and orthopyroxene; after about three-quarters of the magma ocean had crystallised, lower-density plagioclase minerals could form and float into a crust atop. The final liquids to crystallise would have been initially sandwiched between the crust and mantle, with a high abundance of incompatible and heat-producing elements. Consistent with this perspective, geochemical mapping made from orbit suggests a crust composed mostly of anorthosite. Moon rock samples of the flood lavas that erupted onto the surface from partial melting in the mantle confirm the mafic mantle composition, which is more iron-rich than that of Earth. The crust is on average about thick. The Moon is the second-densest satellite in the Solar System, after Io. However, the inner core of the Moon is small, with a radius of about or less, around 20% of the radius of the Moon. Its composition is not well defined, but is probably metallic iron alloyed with a small amount of sulfur and nickel; analyses of the Moon's time-variable rotation suggest that it is at least partly molten. The topography of the Moon has been measured with laser altimetry and stereo image analysis. Its most visible topographic feature is the giant far-side South Pole–Aitken basin, some in diameter, the largest crater on the Moon and the second-largest confirmed impact crater in the Solar System. At deep, its floor is the lowest point on the surface of the Moon. The highest elevations of the Moon's surface are located directly to the northeast, and it has been suggested that this area might have been thickened by the oblique formation impact of the South Pole–Aitken basin. Other large impact basins, such as Imbrium, Serenitatis, Crisium, Smythii, and Orientale, also possess regionally low elevations and elevated rims. The far side of the lunar surface is on average about higher than that of the near side. The discovery of fault scarp cliffs by the Lunar Reconnaissance Orbiter suggests that the Moon has shrunk within the past billion years, by about 90 metres (300 ft). Similar shrinkage features exist on Mercury. The dark and relatively featureless lunar plains, clearly seen with the naked eye, are called "maria" (Latin for "seas"; singular "mare"), as they were once believed to be filled with water; they are now known to be vast solidified pools of ancient basaltic lava.
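The flotation argument behind the anorthosite crust is, at bottom, a density comparison: in a largely molten Moon, crystals denser than the residual melt sink while less dense ones rise. A rough sketch with approximate textbook densities; the numbers below are illustrative assumptions, not values given in this article:

```python
# Approximate densities in g/cm^3, assumed for illustration (not from the article)
MELT_DENSITY = 3.0  # rough figure for a mafic lunar magma ocean

minerals = {
    "olivine": 3.3,
    "clinopyroxene": 3.3,
    "orthopyroxene": 3.2,
    "plagioclase (anorthite)": 2.7,
}

for name, density in minerals.items():
    if density > MELT_DENSITY:
        fate = "sinks, building the mafic mantle"
    else:
        fate = "floats, accumulating into the anorthosite crust"
    print(f"{name:24s} {density:.1f} g/cm^3 -> {fate}")
```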
Although similar to terrestrial basalts, lunar basalts have more iron and no minerals altered by water. The majority of these lavas erupted or flowed into the depressions associated with impact basins. Several geologic provinces containing shield volcanoes and volcanic domes are found within the near side "maria". Almost all maria are on the near side of the Moon, and cover 31% of the surface of the near side, compared with 2% of the far side. This is thought to be due to a concentration of heat-producing elements under the crust on the near side, seen on geochemical maps obtained by "Lunar Prospector"'s gamma-ray spectrometer, which would have caused the underlying mantle to heat up, partially melt, rise to the surface and erupt. Most of the Moon's mare basalts erupted during the Imbrian period, 3.0–3.5 billion years ago, although some radiometrically dated samples are as old as 4.2 billion years. Until recently, the youngest eruptions, dated by crater counting, appeared to have occurred only 1.2 billion years ago. In 2006, a study of Ina, a tiny depression in Lacus Felicitatis, found jagged, relatively dust-free features that, because of the lack of erosion by infalling debris, appeared to be only 2 million years old. Moonquakes and releases of gas also indicate some continued lunar activity. In 2014, NASA announced "widespread evidence of young lunar volcanism" at 70 irregular mare patches identified by the Lunar Reconnaissance Orbiter, some less than 50 million years old. This raises the possibility of a much warmer lunar mantle than previously believed, at least on the near side, where the deep crust is substantially warmer because of the greater concentration of radioactive elements. Just prior to this, evidence had been presented for basaltic volcanism as young as 2–10 million years inside Lowell crater in the Orientale basin, located in the transition zone between the near and far sides of the Moon. An initially hotter mantle and/or local enrichment of heat-producing elements in the mantle could be responsible for prolonged activity on the far side as well, in the Orientale basin. The lighter-coloured regions of the Moon are called "terrae", or more commonly "highlands", because they are higher than most maria. They have been radiometrically dated as having formed 4.4 billion years ago, and may represent plagioclase cumulates of the lunar magma ocean. In contrast to Earth, no major lunar mountains are believed to have formed as a result of tectonic events. The concentration of maria on the near side likely reflects the substantially thicker crust of the highlands of the far side, which may have formed in a slow-velocity impact of a second moon of Earth a few tens of millions of years after their formation. The other major geologic process that has affected the Moon's surface is impact cratering, with craters formed when asteroids and comets collide with the lunar surface. There are estimated to be roughly 300,000 craters wider than on the Moon's near side alone. The lunar geologic timescale is based on the most prominent impact events, including Nectaris, Imbrium, and Orientale, structures characterized by multiple rings of uplifted material, between hundreds and thousands of kilometres in diameter and associated with a broad apron of ejecta deposits that form a regional stratigraphic horizon. The lack of an atmosphere, weather, and recent geological processes means that many of these craters are well-preserved.
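Taken together, the 31% near-side and 2% far-side figures imply that maria cover only about a sixth of the whole lunar surface, since each hemisphere contributes half of the total area. A quick check of that arithmetic:

```python
near_side_fraction = 0.31  # fraction of the near side covered by maria (from the text)
far_side_fraction = 0.02   # fraction of the far side covered by maria (from the text)

# Each hemisphere supplies half of the Moon's total surface area.
global_fraction = 0.5 * near_side_fraction + 0.5 * far_side_fraction
print(f"maria cover about {global_fraction:.1%} of the entire lunar surface")  # ~16.5%
```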
Although only a few multi-ring basins have been definitively dated, they are useful for assigning relative ages. Because impact craters accumulate at a nearly constant rate, counting the number of craters per unit area can be used to estimate the age of the surface. The radiometric ages of impact-melted rocks collected during the Apollo missions cluster between 3.8 and 4.1 billion years: this has been used to propose a Late Heavy Bombardment of impacts. Blanketed on top of the Moon's crust is a highly comminuted (broken into ever smaller particles) and impact-gardened surface layer called regolith, formed by impact processes. The finer regolith, the lunar soil of silicon dioxide glass, has a texture resembling snow and a scent resembling spent gunpowder. The regolith of older surfaces is generally thicker than for younger surfaces: it varies in thickness from in the highlands and in the maria. Beneath the finely comminuted regolith layer is the "megaregolith", a layer of highly fractured bedrock many kilometres thick. Comparison of high-resolution images obtained by the Lunar Reconnaissance Orbiter has shown a contemporary crater-production rate significantly higher than previously estimated. A secondary cratering process caused by distal ejecta is thought to churn the top two centimetres of regolith a hundred times more quickly than previous models suggested, on a timescale of 81,000 years. Lunar swirls are enigmatic features found across the Moon's surface. They are characterized by a high albedo, appear optically immature (i.e. having the optical characteristics of a relatively young regolith), and often have a sinuous shape. Their shape is often accentuated by low-albedo regions that wind between the bright swirls. Liquid water cannot persist on the lunar surface. When exposed to solar radiation, water quickly decomposes through a process known as photodissociation and is lost to space. However, since the 1960s, scientists have hypothesized that water ice may be deposited by impacting comets or possibly produced by the reaction of oxygen-rich lunar rocks with hydrogen from the solar wind, leaving traces of water which could possibly persist in cold, permanently shadowed craters at either pole on the Moon. Computer simulations suggest that up to of the surface may be in permanent shadow. The presence of usable quantities of water on the Moon is an important factor in rendering lunar habitation a cost-effective plan; the alternative of transporting water from Earth would be prohibitively expensive. In the years since, signatures of water have been found to exist on the lunar surface. In 1994, the bistatic radar experiment located on the "Clementine" spacecraft indicated the existence of small, frozen pockets of water close to the surface. However, later radar observations by Arecibo suggest these findings may rather be rocks ejected from young impact craters. In 1998, the neutron spectrometer on the "Lunar Prospector" spacecraft showed that high concentrations of hydrogen are present in the first meter of depth in the regolith near the polar regions. Volcanic lava beads, brought back to Earth aboard Apollo 15, showed small amounts of water in their interior. The 2008 "Chandrayaan-1" spacecraft has since confirmed the existence of surface water ice, using the on-board Moon Mineralogy Mapper. The spectrometer observed absorption lines common to hydroxyl in reflected sunlight, providing evidence of large quantities of water ice on the lunar surface.
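The crater-counting method mentioned at the start of this passage reduces to a simple proportionality: if craters accumulate at a roughly constant, calibrated rate, then crater density divided by that rate gives the surface age. The sketch below uses purely hypothetical numbers for illustration; real lunar chronology functions are calibrated against radiometrically dated Apollo samples and are nonlinear in both time and crater size:

```python
def surface_age_gyr(crater_density_per_km2, production_rate_per_km2_per_gyr):
    """Simplest possible crater-count age: density = rate * age, so age = density / rate."""
    return crater_density_per_km2 / production_rate_per_km2_per_gyr

# Hypothetical densities and production rate, chosen only to illustrate the idea:
rate = 0.10  # craters per km^2 per billion years (assumed)
print(f"heavily cratered highlands: ~{surface_age_gyr(0.40, rate):.1f} billion years")
print(f"lightly cratered mare:      ~{surface_age_gyr(0.12, rate):.1f} billion years")
```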
The spacecraft showed that concentrations may possibly be as high as 1,000 ppm. In 2009, "LCROSS" sent an impactor into a permanently shadowed polar crater, and detected at least of water in a plume of ejected material. Another examination of the LCROSS data showed the amount of detected water to be closer to . In May 2011, 615–1410 ppm of water was reported in melt inclusions in lunar sample 74220, the famous high-titanium "orange glass soil" of volcanic origin collected during the Apollo 17 mission in 1972. The inclusions were formed during explosive eruptions on the Moon approximately 3.7 billion years ago. This concentration is comparable with that of magma in Earth's upper mantle. Although of considerable selenological interest, this announcement affords little comfort to would-be lunar colonists: the sample originated many kilometers below the surface, and the inclusions are so difficult to access that it took 39 years to find them with a state-of-the-art ion microprobe instrument. The gravitational field of the Moon has been measured through tracking the Doppler shift of radio signals emitted by orbiting spacecraft. The main lunar gravity features are mascons, large positive gravitational anomalies associated with some of the giant impact basins, partly caused by the dense mare basaltic lava flows that fill those basins. The anomalies greatly influence the orbit of spacecraft about the Moon. There are some puzzles: lava flows by themselves cannot explain all of the gravitational signature, and some mascons exist that are not linked to mare volcanism. The Moon has an external magnetic field of about 1–100 nanoteslas, less than one-hundredth that of Earth. The Moon does not currently have a global dipolar magnetic field and only has crustal magnetization, probably acquired early in its history when a dynamo was still operating. Alternatively, some of the remnant magnetization may be from transient magnetic fields generated during large impacts through the expansion of an impact-generated plasma cloud in an ambient magnetic field. This is supported by the apparent location of the largest crustal magnetizations near the antipodes of the giant impact basins. The Moon has an atmosphere so tenuous as to be nearly a vacuum, with a total mass of less than . The surface pressure of this small mass is around 3 × 10⁻¹⁵ atm (0.3 nPa); it varies with the lunar day. Its sources include outgassing and sputtering, a product of the bombardment of lunar soil by solar wind ions. Elements that have been detected include sodium and potassium, produced by sputtering (also found in the atmospheres of Mercury and Io); helium-4 and neon from the solar wind; and argon-40, radon-222, and polonium-210, outgassed after their creation by radioactive decay within the crust and mantle. The absence of such neutral species (atoms or molecules) as oxygen, nitrogen, carbon, hydrogen and magnesium, which are present in the regolith, is not understood. Water vapour has been detected by "Chandrayaan-1" and found to vary with latitude, with a maximum at ~60–70 degrees; it is possibly generated from the sublimation of water ice in the regolith. These gases either return into the regolith because of the Moon's gravity or are lost to space, either through solar radiation pressure or, if they are ionized, by being swept away by the solar wind's magnetic field. A permanent asymmetric moon dust cloud exists around the Moon, created by small particles from comets.
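The quoted surface pressure can be cross-checked from the SI figure in the same sentence: 0.3 nPa divided by one standard atmosphere (101,325 Pa) is about 3 × 10⁻¹⁵ atm. A one-line verification:

```python
pressure_pa = 0.3e-9        # 0.3 nPa, from the text
ATM_IN_PA = 101_325         # one standard atmosphere in pascals
print(f"{pressure_pa / ATM_IN_PA:.1e} atm")  # ~3.0e-15 atm, matching the figure in the text
```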
An estimated 5 tons of comet particles strike the Moon's surface every 24 hours. Each impact ejects moon dust above the surface; the dust stays aloft for approximately 10 minutes, taking 5 minutes to rise and 5 minutes to fall. On average, 120 kilograms of dust are present above the Moon, rising to 100 kilometers above the surface. The dust measurements were made by LADEE's Lunar Dust EXperiment (LDEX), between 20 and 100 kilometers above the surface, during a six-month period. LDEX detected an average of one 0.3 micrometer moon dust particle each minute. Dust particle counts peaked during the Geminid, Quadrantid, Northern Taurid, and Omicron Centaurid meteor showers, when the Earth and Moon pass through comet debris. The cloud is asymmetric, more dense near the boundary between the Moon's dayside and nightside. In October 2017, NASA scientists at the Marshall Space Flight Center and the Lunar and Planetary Institute in Houston announced their finding, based on studies of Moon magma samples retrieved by the Apollo missions, that the Moon had once possessed a relatively thick atmosphere for a period of 70 million years between 3 and 4 billion years ago. This atmosphere, sourced from gases ejected from lunar volcanic eruptions, was twice the thickness of that of present-day Mars. The ancient lunar atmosphere was eventually stripped away by solar winds and dissipated into space. The Moon's axial tilt with respect to the ecliptic is only 1.5424°, much less than the 23.44° of Earth. Because of this, the Moon's solar illumination varies much less with season, and topographical details play a crucial role in seasonal effects. From images taken by "Clementine" in 1994, it appears that four mountainous regions on the rim of Peary Crater at the Moon's north pole may remain illuminated for the entire lunar day, creating peaks of eternal light. No such regions exist at the south pole. Similarly, there are places that remain in permanent shadow at the bottoms of many polar craters, and these "craters of eternal darkness" are extremely cold: "Lunar Reconnaissance Orbiter" measured the lowest summer temperatures in craters at the southern pole, and even lower temperatures close to the winter solstice in the north polar Hermite Crater. This is the coldest temperature in the Solar System ever measured by a spacecraft, colder even than the surface of Pluto. Average temperatures of the Moon's surface are reported, but temperatures of different areas will vary greatly depending upon whether they are in sunlight or shadow. The Moon makes a complete orbit around Earth with respect to the fixed stars about once every 27.3 days (its sidereal period). However, because Earth is moving in its orbit around the Sun at the same time, it takes slightly longer for the Moon to show the same phase to Earth, which is about 29.5 days (its synodic period). Unlike most satellites of other planets, the Moon orbits closer to the ecliptic plane than to the planet's equatorial plane. The Moon's orbit is subtly perturbed by the Sun and Earth in many small, complex and interacting ways. For example, the plane of the Moon's orbit gradually rotates once every 18.61 years, which affects other aspects of lunar motion. These follow-on effects are mathematically described by Cassini's laws. The Moon is exceptionally large relative to Earth: its diameter is a quarter, and its mass 1/81, of Earth's. 
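The relation between the sidereal and synodic periods quoted above can be checked directly: because the Earth–Moon system advances around the Sun while the Moon orbits, the synodic frequency is the sidereal frequency minus Earth's orbital frequency. A minimal Python sketch of that arithmetic, using the approximate period values given above, is shown below (the rounding and variable names are our own):

```python
# Approximate periods in days, as quoted in the text above.
sidereal_month = 27.32   # Moon's orbit measured against the fixed stars
earth_year = 365.25      # Earth's orbit around the Sun

# The synodic month (same phase to same phase) is longer because the
# Earth-Moon line must also catch up with the Sun's apparent motion:
# 1/synodic = 1/sidereal - 1/year.
synodic_month = 1 / (1 / sidereal_month - 1 / earth_year)
print(round(synodic_month, 2))  # ~29.53 days, matching the ~29.5-day figure
```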
It is the largest moon in the Solar System relative to the size of its planet, though Charon is larger relative to the dwarf planet Pluto, at 1/9 Pluto's mass. The Earth and the Moon's barycentre, their common centre of mass, is located about a quarter of Earth's radius beneath Earth's surface. The Earth revolves around the Earth-Moon barycentre once a sidereal month, with 1/81 the speed of the Moon. This motion is superimposed on the much larger revolution of the Earth around the Sun at a speed of about 30 kilometres per second. The Moon is in synchronous rotation as it orbits Earth; it rotates about its axis in about the same time it takes to orbit Earth. This results in it always keeping nearly the same face turned towards Earth. However, because of the effect of libration, about 59% of the Moon's surface can actually be seen from Earth. The side of the Moon that faces Earth is called the near side, and the opposite side the far side. The far side is often inaccurately called the "dark side", but it is in fact illuminated as often as the near side: once every 29.5 Earth days. During new moon, the near side is dark. The Moon had once rotated at a faster rate, but early in its history, its rotation slowed and became tidally locked in this orientation as a result of frictional effects associated with tidal deformations caused by Earth. With time, the energy of rotation of the Moon on its axis was dissipated as heat, until there was no rotation of the Moon relative to Earth. In 2016, planetary scientists, using data collected on the much earlier NASA Lunar Prospector mission, found two hydrogen-rich areas on opposite sides of the Moon, probably in the form of water ice. It is speculated that these patches were the poles of the Moon billions of years ago, before it was tidally locked to Earth. The Moon has an exceptionally low albedo, giving it a reflectance that is slightly brighter than that of worn asphalt. Despite this, it is the brightest object in the sky after the Sun. This is due partly to the brightness enhancement of the opposition surge; the Moon at quarter phase is only one-tenth as bright, rather than half as bright, as at full moon. Additionally, color constancy in the visual system recalibrates the relations between the colors of an object and its surroundings, and because the surrounding sky is comparatively dark, the sunlit Moon is perceived as a bright object. The edges of the full moon seem as bright as the centre, without limb darkening, because of the reflective properties of lunar soil, which retroreflects light more towards the Sun than in other directions. The Moon does appear larger when close to the horizon, but this is a purely psychological effect, known as the moon illusion, first described in the 7th century BC. The full Moon's angular diameter is about 0.52° (on average) in the sky, roughly the same apparent size as the Sun. The Moon's highest altitude at culmination varies by its phase and time of year. The full moon is highest in the sky during winter (for each hemisphere). The 18.61-year nodal cycle has an influence on lunar standstill. When the ascending node of the lunar orbit is at the vernal equinox, the lunar declination can reach up to plus or minus 28° each month. This means the Moon can pass overhead if viewed from latitudes up to 28° north or south (of the Equator), instead of only 18°. 
The orientation of the Moon's crescent also depends on the latitude of the viewing location; an observer in the tropics can see a smile-shaped crescent Moon. The Moon is visible for two weeks every 27.3 days at the North and South Poles. Zooplankton in the Arctic use moonlight when the Sun is below the horizon for months on end. The distance between the Moon and Earth varies between its closest point, perigee, and its farthest point, apogee. On 14 November 2016, it was closer to Earth when at full phase than it has been since 1948, 14% closer than its farthest position at apogee. Reported as a "supermoon", this closest point coincided within an hour of a full moon, and it was 30% more luminous than when at its greatest distance because its angular diameter is 14% greater and 1.14² ≈ 1.30. At lower levels, the human perception of reduced brightness as a percentage is provided by the approximate relation: perceived reduction ≈ √(actual reduction). When the actual reduction is 1.00 / 1.30, or about 0.770, the perceived reduction is about 0.877, or 1.00 / 1.14. This gives a maximum perceived increase of 14% between apogee and perigee moons of the same phase. There has been historical controversy over whether features on the Moon's surface change over time. Today, many of these claims are thought to be illusory, resulting from observation under different lighting conditions, poor astronomical seeing, or inadequate drawings. However, outgassing does occasionally occur and could be responsible for a minor percentage of the reported lunar transient phenomena. Recently, it has been suggested that a small region of the lunar surface was modified by a gas release event about a million years ago. The Moon's appearance, like the Sun's, can be affected by Earth's atmosphere. Common optical effects are the 22° halo ring, formed when the Moon's light is refracted through the ice crystals of high cirrostratus clouds, and smaller coronal rings when the Moon is seen through thin clouds. The illuminated area of the visible sphere (degree of illumination) is given by (1 − cos "e")/2, where "e" is the elongation (i.e., the angle between the Moon, the observer on Earth, and the Sun). The gravitational attraction that masses have for one another decreases inversely with the square of the distance of those masses from each other. As a result, the slightly greater attraction that the Moon has for the side of Earth closest to the Moon, as compared to the part of the Earth opposite the Moon, results in tidal forces. Tidal forces affect both the Earth's crust and oceans. The most obvious effect of tidal forces is to cause two bulges in the Earth's oceans, one on the side facing the Moon and the other on the side opposite. This results in elevated sea levels called ocean tides. As the Earth spins on its axis, one of the ocean bulges (high tide) is held in place "under" the Moon, while another such tide is opposite. As a result, there are two high tides, and two low tides in about 24 hours. Since the Moon orbits the Earth in the same direction as the Earth's rotation, the high tides occur about every 12 hours and 25 minutes; the extra 25 minutes arises because the Moon advances along its orbit while the Earth rotates. The Sun has the same tidal effect on the Earth, but its forces of attraction are only 40% that of the Moon's; the Sun's and Moon's interplay is responsible for spring and neap tides. 
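As a quick illustration of the two relations just stated, the illuminated-fraction formula and the square-root brightness-perception rule, here is a minimal Python sketch; the specific elongation values are chosen only for illustration and the function name is our own:

```python
import math

def illuminated_fraction(elongation_deg):
    """Fraction of the visible disc that is lit, (1 - cos e) / 2."""
    e = math.radians(elongation_deg)
    return (1 - math.cos(e)) / 2

# New moon, quarter moon, full moon.
for e in (0, 90, 180):
    print(e, round(illuminated_fraction(e), 3))   # 0.0, 0.5, 1.0

# Perceived brightness change between apogee and perigee full moons:
actual_reduction = 1.00 / 1.30           # apogee moon is ~30% dimmer
perceived = math.sqrt(actual_reduction)  # square-root perception rule
print(round(perceived, 3))               # ~0.877, i.e. about 1 / 1.14
```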
If the Earth were a water world (one with no continents) it would produce a tide of only one meter, and that tide would be very predictable, but the ocean tides are greatly modified by other effects: the frictional coupling of water to Earth's rotation through the ocean floors, the inertia of water's movement, ocean basins that grow shallower near land, and the sloshing of water between different ocean basins. As a result, the timing of the tides at most points on the Earth is a product of observations that are explained, incidentally, by theory. While gravitation causes acceleration and movement of the Earth's fluid oceans, gravitational coupling between the Moon and Earth's solid body is mostly elastic and plastic. The result is a further tidal effect of the Moon on the Earth that causes a bulge of the solid portion of the Earth nearest the Moon that acts as a torque in opposition to the Earth's rotation. This "drains" angular momentum and rotational kinetic energy from Earth's spin, slowing the Earth's rotation. That angular momentum, lost from the Earth, is transferred to the Moon in a process (confusingly known as tidal acceleration), which lifts the Moon into a higher orbit and results in its lower orbital speed about the Earth. Thus the distance between Earth and Moon is increasing, and the Earth's spin is slowing in reaction. Measurements from laser reflectors left during the Apollo missions (lunar ranging experiments) have found that the Moon's distance increases by about 38 mm per year (roughly the rate at which human fingernails grow). Atomic clocks also show that Earth's day lengthens by about 15 microseconds every year, slowly increasing the rate at which UTC is adjusted by leap seconds. Left to run its course, this tidal drag would continue until the spin of Earth and the orbital period of the Moon matched, creating mutual tidal locking between the two. As a result, the Moon would be suspended in the sky over one meridian, as is already currently the case with Pluto and its moon Charon. However, the Sun will become a red giant engulfing the Earth-Moon system long before this occurrence. In a like manner, the lunar surface experiences tides of around 10 cm amplitude over 27 days, with two components: a fixed one due to Earth, because they are in synchronous rotation, and a varying component from the Sun. The Earth-induced component arises from libration, a result of the Moon's orbital eccentricity (if the Moon's orbit were perfectly circular, there would only be solar tides). Libration also changes the angle from which the Moon is seen, allowing a total of about 59% of its surface to be seen from Earth over time. The cumulative effects of stress built up by these tidal forces produce moonquakes. Moonquakes are much less common and weaker than earthquakes, although moonquakes can last for up to an hour—significantly longer than terrestrial quakes—because of the absence of water to damp out the seismic vibrations. The existence of moonquakes was an unexpected discovery from seismometers placed on the Moon by Apollo astronauts from 1969 through 1972. Eclipses only occur when the Sun, Earth, and Moon are all in a straight line (termed "syzygy"). Solar eclipses occur at new moon, when the Moon is between the Sun and Earth. In contrast, lunar eclipses occur at full moon, when Earth is between the Sun and Moon. The apparent size of the Moon is roughly the same as that of the Sun, with both being viewed at close to one-half a degree wide. 
The Sun is much larger than the Moon but it is the vastly greater distance that gives it the same apparent size as the much closer and much smaller Moon from the perspective of Earth. The variations in apparent size, due to the non-circular orbits, are nearly the same as well, though occurring in different cycles. This makes possible both total (with the Moon appearing larger than the Sun) and annular (with the Moon appearing smaller than the Sun) solar eclipses. In a total eclipse, the Moon completely covers the disc of the Sun and the solar corona becomes visible to the naked eye. Because the distance between the Moon and Earth is very slowly increasing over time, the angular diameter of the Moon is decreasing. Also, as it evolves toward becoming a red giant, the size of the Sun, and its apparent diameter in the sky, are slowly increasing. The combination of these two changes means that hundreds of millions of years ago, the Moon would always completely cover the Sun on solar eclipses, and no annular eclipses were possible. Likewise, hundreds of millions of years in the future, the Moon will no longer cover the Sun completely, and total solar eclipses will not occur. Because the Moon's orbit around Earth is inclined by about 5.145° (5° 9') to the orbit of Earth around the Sun, eclipses do not occur at every full and new moon. For an eclipse to occur, the Moon must be near the intersection of the two orbital planes. The periodicity and recurrence of eclipses of the Sun by the Moon, and of the Moon by Earth, is described by the saros, which has a period of approximately 18 years. Because the Moon is continuously blocking our view of a half-degree-wide circular area of the sky, the related phenomenon of occultation occurs when a bright star or planet passes behind the Moon and is occulted: hidden from view. In this way, a solar eclipse is an occultation of the Sun. Because the Moon is comparatively close to Earth, occultations of individual stars are not visible everywhere on the planet, nor at the same time. Because of the precession of the lunar orbit, each year different stars are occulted. Understanding of the Moon's cycles was an early development of astronomy: Babylonian astronomers had recorded the 18-year Saros cycle of lunar eclipses, and Indian astronomers had described the Moon's monthly elongation. The Chinese astronomer Shi Shen gave instructions for predicting solar and lunar eclipses. Later, the physical form of the Moon and the cause of moonlight became understood. The ancient Greek philosopher Anaxagoras reasoned that the Sun and Moon were both giant spherical rocks, and that the latter reflected the light of the former. Although the Chinese of the Han Dynasty believed the Moon to be energy equated to "qi", their 'radiating influence' theory also recognized that the light of the Moon was merely a reflection of the Sun, and Jing Fang (78–37 BC) noted the sphericity of the Moon. In the 2nd century AD, Lucian wrote the novel "A True Story", in which the heroes travel to the Moon and meet its inhabitants. In 499 AD, the Indian astronomer Aryabhata mentioned in his "Aryabhatiya" that reflected sunlight is the cause of the shining of the Moon. The astronomer and physicist Alhazen (965–1039) found that sunlight was not reflected from the Moon like a mirror, but that light was emitted from every part of the Moon's sunlit surface in all directions. 
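The claim near the start of this passage, that the Sun and Moon subtend nearly the same half-degree angle despite their vastly different sizes, follows from the ratio of diameter to distance. The Python sketch below checks it using round-number mean diameters and distances that we assume here for illustration; they are not taken from the article:

```python
import math

def angular_diameter_deg(diameter_km, distance_km):
    """Apparent angular size of a sphere seen from a given distance."""
    return math.degrees(2 * math.atan(diameter_km / (2 * distance_km)))

# Round-number mean values (assumed for this sketch).
print(round(angular_diameter_deg(3_474, 384_400), 3))         # Moon: ~0.518 deg
print(round(angular_diameter_deg(1_392_700, 149_600_000), 3)) # Sun:  ~0.533 deg
```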
Shen Kuo (1031–1095) of the Song dynasty created an allegory equating the waxing and waning of the Moon to a round ball of reflective silver that, when doused with white powder and viewed from the side, would appear to be a crescent. In Aristotle's (384–322 BC) description of the universe, the Moon marked the boundary between the spheres of the mutable elements (earth, water, air and fire), and the imperishable stars of aether, an influential philosophy that would dominate for centuries. However, Seleucus of Seleucia later correctly theorized that tides were due to the attraction of the Moon, and that their height depends on the Moon's position relative to the Sun. Aristarchus had earlier computed the size and distance of the Moon from Earth, obtaining a value of about twenty times the radius of Earth for the distance. These figures were greatly improved by Ptolemy (90–168 AD): his values of a mean distance of 59 times Earth's radius and a diameter of 0.292 Earth diameters were close to the correct values of about 60 and 0.273 respectively. Archimedes (287–212 BC) designed a planetarium that could calculate the motions of the Moon and other objects in the Solar System. During the Middle Ages, before the invention of the telescope, the Moon was increasingly recognised as a sphere, though many believed that it was "perfectly smooth". In 1609, Galileo Galilei made one of the first telescopic drawings of the Moon, published in his book "Sidereus Nuncius", and noted that it was not smooth but had mountains and craters. Telescopic mapping of the Moon followed: later in the 17th century, the efforts of Giovanni Battista Riccioli and Francesco Maria Grimaldi led to the system of naming of lunar features in use today. The more exact 1834–36 mapping by Wilhelm Beer and Johann Heinrich Mädler, and their associated 1837 book, the first trigonometrically accurate study of lunar features, included the heights of more than a thousand mountains, and introduced the study of the Moon at accuracies possible in earthly geography. Lunar craters, first noted by Galileo, were thought to be volcanic until the 1870s proposal of Richard Proctor that they were formed by collisions. This view gained support in 1892 from the experimentation of geologist Grove Karl Gilbert, and from comparative studies from 1920 to the 1940s, leading to the development of lunar stratigraphy, which by the 1950s was becoming a new and growing branch of astrogeology. The Cold War-inspired Space Race between the Soviet Union and the U.S. led to an acceleration of interest in exploration of the Moon. Once launchers had the necessary capabilities, these nations sent unmanned probes on both flyby and impact/lander missions. Spacecraft from the Soviet Union's "Luna" program were the first to accomplish a number of goals: following three unnamed, failed missions in 1958, the first human-made object to escape Earth's gravity and pass near the Moon was "Luna 1"; the first human-made object to impact the lunar surface was "Luna 2", and the first photographs of the normally occluded far side of the Moon were made by "Luna 3", all in 1959. The first spacecraft to perform a successful lunar soft landing was "Luna 9" and the first unmanned vehicle to orbit the Moon was "Luna 10", both in 1966. Rock and soil samples were brought back to Earth by three "Luna" sample return missions ("Luna 16" in 1970, "Luna 20" in 1972, and "Luna 24" in 1976), which returned 0.3 kg total. Two pioneering robotic rovers landed on the Moon in 1970 and 1973 as part of the Soviet Lunokhod programme. 
Luna 24 was the last Soviet/Russian mission to the Moon. During the late 1950s at the height of the Cold War, the United States Army conducted a classified feasibility study that proposed the construction of a manned military outpost on the Moon called Project Horizon with the potential to conduct a wide range of missions from scientific research to nuclear Earth bombardment. The study included the possibility of conducting a lunar-based nuclear test. The Air Force, which at the time was in competition with the Army for a leading role in the space program, developed its own similar plan called Lunex. However, both these proposals were ultimately passed over as the space program was largely transferred from the military to the civilian agency NASA. Following President John F. Kennedy's 1961 commitment to a manned moon landing before the end of the decade, the United States, under NASA leadership, launched a series of unmanned probes to develop an understanding of the lunar surface in preparation for manned missions: the Jet Propulsion Laboratory's Ranger program produced the first close-up pictures; the Lunar Orbiter program produced maps of the entire Moon; the Surveyor program landed its first spacecraft four months after "Luna 9". The manned Apollo program was developed in parallel; after a series of unmanned and manned tests of the Apollo spacecraft in Earth orbit, and spurred on by a potential Soviet lunar flight, in 1968 Apollo 8 made the first manned mission to lunar orbit. The subsequent landing of the first humans on the Moon in 1969 is seen by many as the culmination of the Space Race. Neil Armstrong became the first person to walk on the Moon as the commander of the American mission Apollo 11 by first setting foot on the Moon at 02:56 UTC on 21 July 1969. An estimated 500 million people worldwide watched the transmission by the Apollo TV camera, the largest television audience for a live broadcast at that time. The Apollo missions 11 to 17 (except Apollo 13, which aborted its planned lunar landing) returned of lunar rock and soil in 2,196 separate samples. The American Moon landing and return was enabled by considerable technological advances in the early 1960s, in domains such as ablation chemistry, software engineering, and atmospheric re-entry technology, and by highly competent management of the enormous technical undertaking. Scientific instrument packages were installed on the lunar surface during all the Apollo landings. Long-lived instrument stations, including heat flow probes, seismometers, and magnetometers, were installed at the Apollo 12, 14, 15, 16, and 17 landing sites. Direct transmission of data to Earth concluded in late 1977 because of budgetary considerations, but as the stations' lunar laser ranging corner-cube retroreflector arrays are passive instruments, they are still being used. Ranging to the stations is routinely performed from Earth-based stations with an accuracy of a few centimetres, and data from this experiment are being used to place constraints on the size of the lunar core. After the first Moon race there were years of near quietude but starting in the 1990s, many more countries have become involved in direct exploration of the Moon. In 1990, Japan became the third country to place a spacecraft into lunar orbit with its "Hiten" spacecraft. The spacecraft released a smaller probe, "Hagoromo", in lunar orbit, but the transmitter failed, preventing further scientific use of the mission. In 1994, the U.S. 
sent the joint Defense Department/NASA spacecraft "Clementine" to lunar orbit. This mission obtained the first near-global topographic map of the Moon, and the first global multispectral images of the lunar surface. This was followed in 1998 by the "Lunar Prospector" mission, whose instruments indicated the presence of excess hydrogen at the lunar poles, which is likely to have been caused by the presence of water ice in the upper few meters of the regolith within permanently shadowed craters. India, Japan, China, the United States, and the European Space Agency each sent lunar orbiters, and especially ISRO's "Chandrayaan-1" has contributed to confirming the discovery of lunar water ice in permanently shadowed craters at the poles and bound into the lunar regolith. The post-Apollo era has also seen two rover missions: the final Soviet Lunokhod mission in 1973, and China's ongoing Chang'e 3 mission, which deployed its Yutu rover on 14 December 2013. The Moon remains, under the Outer Space Treaty, free to all nations to explore for peaceful purposes. The European spacecraft "SMART-1", the second ion-propelled spacecraft, was in lunar orbit from 15 November 2004 until its lunar impact on 3 September 2006, and made the first detailed survey of chemical elements on the lunar surface. The ambitious Chinese Lunar Exploration Program began with "Chang'e 1", which successfully orbited the Moon from 5 November 2007 until its controlled lunar impact on 1 March 2009. It obtained a full image map of the Moon. "Chang'e 2", beginning in October 2010, reached the Moon more quickly, mapped the Moon at a higher resolution over an eight-month period, then left lunar orbit for an extended stay at the Earth–Sun L2 Lagrangian point, before finally performing a flyby of asteroid 4179 Toutatis on 13 December 2012, and then heading off into deep space. On 14 December 2013, "Chang'e 3" landed a lunar lander onto the Moon's surface, which in turn deployed a lunar rover, named "Yutu" (Chinese: 玉兔; literally "Jade Rabbit"). This was the first lunar soft landing since "Luna 24" in 1976, and the first lunar rover mission since "Lunokhod 2" in 1973. China intends to launch another rover mission ("Chang'e 4") before 2020, followed by a sample return mission ("Chang'e 5") soon after. Between 4 October 2007 and 10 June 2009, the Japan Aerospace Exploration Agency's "Kaguya (Selene)" mission, a lunar orbiter fitted with a high-definition video camera, and two small radio-transmitter satellites, obtained lunar geophysics data and took the first high-definition movies from beyond Earth orbit. India's first lunar mission, "Chandrayaan I", orbited from 8 November 2008 until loss of contact on 27 August 2009, creating a high resolution chemical, mineralogical and photo-geological map of the lunar surface, and confirming the presence of water molecules in lunar soil. The Indian Space Research Organisation planned to launch "Chandrayaan II" in 2013, which would have included a Russian robotic lunar rover. However, the failure of Russia's "Fobos-Grunt" mission has delayed this project. The U.S. co-launched the "Lunar Reconnaissance Orbiter" (LRO) and the "LCROSS" impactor and follow-up observation orbiter on 18 June 2009; "LCROSS" completed its mission by making a planned and widely observed impact in the crater Cabeus on 9 October 2009, whereas "LRO" is currently in operation, obtaining precise lunar altimetry and high-resolution imagery. In November 2011, the LRO passed over the large and bright Aristarchus crater. 
NASA released photos of the crater on 25 December 2011. Two NASA GRAIL spacecraft began orbiting the Moon around 1 January 2012, on a mission to learn more about the Moon's internal structure. NASA's "LADEE" probe, designed to study the lunar exosphere, achieved orbit on 6 October 2013. Upcoming lunar missions include Russia's "Luna-Glob": an unmanned lander with a set of seismometers, and an orbiter based on its failed Martian "Fobos-Grunt" mission. Privately funded lunar exploration has been promoted by the Google Lunar X Prize, announced 13 September 2007, which offers US$20 million to anyone who can land a robotic rover on the Moon and meet other specified criteria. Shackleton Energy Company is building a program to establish operations on the south pole of the Moon to harvest water and supply their Propellant Depots. NASA began to plan to resume manned missions following the call by U.S. President George W. Bush on 14 January 2004 for a manned mission to the Moon by 2019 and the construction of a lunar base by 2024. The Constellation program was funded and construction and testing begun on a manned spacecraft and launch vehicle, and design studies for a lunar base. However, that program has been cancelled in favor of a manned asteroid landing by 2025 and a manned Mars orbit by 2035. India has also expressed its hope to send a manned mission to the Moon by 2020. On 28 February 2018, SpaceX, Vodafone, Nokia and Audi announced a collaboration to install a 4G wireless communication network on the Moon, with the aim of streaming live footage on the surface to Earth. In 2007, the X Prize Foundation together with Google launched the Google Lunar X Prize to encourage commercial endeavors to the Moon. A prize of $20 million will be awarded to the first private venture to get to the Moon with a robotic lander by the end of March 2018, with additional prizes worth $10 million for further milestones. As of August 2016, 16 teams are participating in the competition. In January 2018 the foundation announced that the prize would go unclaimed as none of the finalist teams would be able to make a launch attempt by the deadline. In August 2016, the US government granted permission to US-based start-up Moon Express to land on the Moon. This marked the first time that a private enterprise was given the right to do so. The decision is regarded as a precedent helping to define regulatory standards for deep-space commercial activity in the future, as thus far companies' operation had been restricted to being on or around Earth. For many years, the Moon has been recognized as an excellent site for telescopes. It is relatively nearby; astronomical seeing is not a concern; certain craters near the poles are permanently dark and cold, and thus especially useful for infrared telescopes; and radio telescopes on the far side would be shielded from the radio chatter of Earth. The lunar soil, although it poses a problem for any moving parts of telescopes, can be mixed with carbon nanotubes and epoxies and employed in the construction of mirrors up to 50 meters in diameter. A lunar zenith telescope can be made cheaply with an ionic liquid. In April 1972, the Apollo 16 mission recorded various astronomical photos and spectra in ultraviolet with the Far Ultraviolet Camera/Spectrograph. Although "Luna" landers scattered pennants of the Soviet Union on the Moon, and U.S. flags were symbolically planted at their landing sites by the Apollo astronauts, no nation claims ownership of any part of the Moon's surface. 
Russia and the U.S. are party to the 1967 Outer Space Treaty, which defines the Moon and all outer space as the "province of all mankind". This treaty also restricts the use of the Moon to peaceful purposes, explicitly banning military installations and weapons of mass destruction. The 1979 Moon Agreement was created to restrict the exploitation of the Moon's resources by any single nation, but as of November 2016, it has been signed and ratified by only 18 nations, none of which engages in self-launched human space exploration or has plans to do so. Although several individuals have made claims to the Moon in whole or in part, none of these are considered credible. A 5,000-year-old rock carving at Knowth, Ireland, may represent the Moon, which would be the earliest depiction discovered. The contrast between the brighter highlands and the darker maria creates the patterns seen by different cultures as the Man in the Moon, the rabbit and the buffalo, among others. In many prehistoric and ancient cultures, the Moon was personified as a deity or other supernatural phenomenon, and astrological views of the Moon continue to be propagated today. In Proto-Indo-European religion, the moon was personified as the male god "*Mehnot". The ancient Sumerians believed that the Moon was the god Nanna, who was the father of Inanna, the goddess of the planet Venus, and Utu, the god of the sun. Nanna was later known as Sîn, and was particularly associated with magic and sorcery. In Greco-Roman mythology, the Sun and the Moon are represented as male and female, respectively (Helios/Sol and Selene/Luna); this is a development unique to the eastern Mediterranean and traces of an earlier male moon god in the Greek tradition are preserved in the figure of Menelaus. In Mesopotamian iconography, the crescent was the primary symbol of Nanna-Sîn. In ancient Greek art, the Moon goddess Selene was represented wearing a crescent on her headgear in an arrangement reminiscent of horns. The star and crescent arrangement also goes back to the Bronze Age, representing either the Sun and Moon, or the Moon and planet Venus, in combination. It came to represent the goddess Artemis or Hecate, and via the patronage of Hecate came to be used as a symbol of Byzantium. An iconographic tradition of representing Sun and Moon with faces developed in the late medieval period. The splitting of the moon is a miracle attributed to Muhammad. The Moon's regular phases make it a very convenient timepiece, and the periods of its waxing and waning form the basis of many of the oldest calendars. Tally sticks, notched bones dating as far back as 20–30,000 years ago, are believed by some to mark the phases of the Moon. The ~30-day month is an approximation of the lunar cycle. The English noun "month" and its cognates in other Germanic languages stem from Proto-Germanic "*mǣnṓth-", which is connected to the above-mentioned Proto-Germanic "*mǣnōn", indicating the usage of a lunar calendar among the Germanic peoples (Germanic calendar) prior to the adoption of a solar calendar. The PIE root of "moon", *"méhnōt", derives from the PIE verbal root *"meh"-, "to measure", "indicat[ing] a functional conception of the Moon, i.e. marker of the month" (cf. the English words "measure" and "menstrual"), and echoing the Moon's importance to many ancient cultures in measuring time (see Latin "mensis" and Ancient Greek "meis" or "mēn", meaning "month"). Most historical calendars are lunisolar. 
The 7th-century Islamic calendar is an exceptional example of a purely lunar calendar. Months are traditionally determined by the visual sighting of the hilal, or earliest crescent moon, over the horizon. The Moon has long been associated with insanity and irrationality; the words "lunacy" and "lunatic" (popular shortening "loony") are derived from the Latin name for the Moon, "Luna". Philosophers Aristotle and Pliny the Elder argued that the full moon induced insanity in susceptible individuals, believing that the brain, which is mostly water, must be affected by the Moon and its power over the tides, but the Moon's gravity is too slight to affect any single person. Even today, people who believe in a lunar effect claim that admissions to psychiatric hospitals, traffic accidents, homicides or suicides increase during a full moon, but dozens of studies invalidate these claims. Group (mathematics) In mathematics, a group is an algebraic structure consisting of a set of elements equipped with an operation that combines any two elements to form a third element and that satisfies four conditions called the group axioms, namely closure, associativity, identity and invertibility. One of the most familiar examples of a group is the set of integers together with the addition operation, but the abstract formalization of the group axioms, detached as it is from the concrete nature of any particular group and its operation, applies much more widely. It allows entities with highly diverse mathematical origins in abstract algebra and beyond to be handled in a flexible way while retaining their essential structural aspects. The ubiquity of groups in numerous areas within and outside mathematics makes them a central organizing principle of contemporary mathematics. Groups share a fundamental kinship with the notion of symmetry. For example, a symmetry group encodes symmetry features of a geometrical object: the group consists of the set of transformations that leave the object unchanged and the operation of combining two such transformations by performing one after the other. Lie groups are the symmetry groups used in the Standard Model of particle physics; Poincaré groups, which are also Lie groups, can express the physical symmetry underlying special relativity; and point groups are used to help understand symmetry phenomena in molecular chemistry. The concept of a group arose from the study of polynomial equations, starting with Évariste Galois in the 1830s. After contributions from other fields such as number theory and geometry, the group notion was generalized and firmly established around 1870. Modern group theory—an active mathematical discipline—studies groups in their own right. To explore groups, mathematicians have devised various notions to break groups into smaller, better-understandable pieces, such as subgroups, quotient groups and simple groups. In addition to their abstract properties, group theorists also study the different ways in which a group can be expressed concretely, both from a point of view of representation theory (that is, through the representations of the group) and of computational group theory. A theory has been developed for finite groups, which culminated with the classification of finite simple groups, completed in 2004. Since the mid-1980s, geometric group theory, which studies finitely generated groups as geometric objects, has become a particularly active area in group theory. 
One of the most familiar groups is the set of integers Z, which consists of the numbers ..., −4, −3, −2, −1, 0, 1, 2, 3, 4, .... The following properties of integer addition serve as a model for the abstract group axioms given in the definition below. The integers, together with the operation +, form a mathematical object belonging to a broad class sharing similar structural aspects. To appropriately understand these structures as a collective, the following abstract definition is developed. A group is a set, "G", together with an operation • (called the "group law" of "G") that combines any two elements "a" and "b" to form another element, denoted "a" • "b" or "ab". To qualify as a group, the set and operation, ("G", •), must satisfy four requirements known as the "group axioms": closure, associativity, the existence of an identity element, and the existence of inverse elements. The result of an operation may depend on the order of the operands. In other words, the result of combining element "a" with element "b" need not yield the same result as combining element "b" with element "a"; the equation "a" • "b" = "b" • "a" may not always be true. This equation always holds in the group of integers under addition, because "a" + "b" = "b" + "a" for any two integers (commutativity of addition). Groups for which the commutativity equation always holds are called "abelian groups" (in honor of Niels Henrik Abel). The symmetry group described in the following section is an example of a group that is not abelian. The identity element of a group "G" is often written as 1, a notation inherited from the multiplicative identity. If a group is abelian, then one may choose to denote the group operation by + and the identity element by 0; in that case, the group is called an additive group. The identity element can also be written as "id". The set "G" is called the "underlying set" of the group ("G", •). Often the group's underlying set "G" is used as a short name for the group ("G", •). Along the same lines, shorthand expressions such as "a subset of the group "G"" or "an element of group "G"" are used when what is actually meant is "a subset of the underlying set "G" of the group ("G", •)" or "an element of the underlying set "G" of the group ("G", •)". Usually, it is clear from the context whether a symbol like "G" refers to a group or to an underlying set. An alternate (but equivalent) definition is to expand the structure of a group to define a group as a set equipped with three operations satisfying the same axioms as above, with the "there exists" part removed in the two last axioms; these operations are the group law, as above, which is a binary operation; the "inverse operation", which is a unary operation mapping "a" to "a"⁻¹; and the identity element, which is viewed as a 0-ary operation. As this formulation of the definition avoids existential quantifiers, it is generally preferred for computing with groups and for computer-aided proofs. This formulation exhibits groups as a variety of universal algebra. It is also useful for talking of properties of the inverse operation, as needed for defining topological groups and group objects. Two figures in the plane are congruent if one can be changed into the other using a combination of rotations, reflections, and translations. Any figure is congruent to itself. However, some figures are congruent to themselves in more than one way, and these extra congruences are called symmetries. A square has eight symmetries. These are: the identity operation leaving everything unchanged; rotations about the square's centre by 90°, 180° and 270° clockwise; reflections about the horizontal and vertical mid-lines; and reflections about the two diagonals. These symmetries are represented by functions. Each of these functions sends a point in the square to the corresponding point under the symmetry. 
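As a concrete companion to the axioms listed above, the following minimal Python sketch checks closure, associativity, identity and invertibility by brute force for a small finite example, the set {0, 1, 2, 3} under addition modulo 4; the choice of example is ours, not the article's:

```python
from itertools import product

elements = [0, 1, 2, 3]
op = lambda a, b: (a + b) % 4          # group law: addition modulo 4

closure = all(op(a, b) in elements for a, b in product(elements, repeat=2))
associative = all(op(op(a, b), c) == op(a, op(b, c))
                  for a, b, c in product(elements, repeat=3))
identity = next(e for e in elements
                if all(op(e, a) == a == op(a, e) for a in elements))
invertible = all(any(op(a, b) == identity for b in elements) for a in elements)
abelian = all(op(a, b) == op(b, a) for a, b in product(elements, repeat=2))

print(closure, associative, identity, invertible, abelian)
# True True 0 True True -- an abelian group with identity element 0
```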
For example, one of the rotations sends a point to its image under a 90° clockwise rotation around the square's centre, and one of the reflections sends a point to its mirror image across the square's vertical middle line. Composing two of these symmetry functions gives another symmetry function. These symmetries determine a group called the dihedral group of degree 4 and denoted D₄. The underlying set of the group is the above set of symmetry functions, and the group operation is function composition. Two symmetries are combined by composing them as functions, that is, applying the first one to the square, and the second one to the result of the first application. The result of performing first "a" and then "b" is written symbolically "from right to left" as "b" • "a" ("apply the symmetry "b" after performing the symmetry "a""). The right-to-left notation is the same notation that is used for composition of functions. A group table listing the results of all such compositions can be drawn up. For example, rotating by 270° clockwise and then reflecting horizontally is the same as performing a single reflection along a diagonal. Given this set of symmetries and the described operation, each of the group axioms can be verified directly. In contrast to the group of integers above, where the order of the operation is irrelevant, order does matter in D₄: composing a rotation with a reflection in one order generally yields a different symmetry than composing them in the other order. In other words, D₄ is not abelian, which makes its group structure more complicated than that of the integers introduced first. The modern concept of an abstract group developed out of several fields of mathematics. The original motivation for group theory was the quest for solutions of polynomial equations of degree higher than 4. The 19th-century French mathematician Évariste Galois, extending prior work of Paolo Ruffini and Joseph-Louis Lagrange, gave a criterion for the solvability of a particular polynomial equation in terms of the symmetry group of its roots (solutions). The elements of such a Galois group correspond to certain permutations of the roots. At first, Galois' ideas were rejected by his contemporaries, and published only posthumously. More general permutation groups were investigated in particular by Augustin Louis Cauchy. Arthur Cayley's "On the theory of groups, as depending on the symbolic equation θⁿ = 1" (1854) gives the first abstract definition of a finite group. Geometry was a second field in which groups were used systematically, especially symmetry groups as part of Felix Klein's 1872 Erlangen program. After novel geometries such as hyperbolic and projective geometry had emerged, Klein used group theory to organize them in a more coherent way. Further advancing these ideas, Sophus Lie founded the study of Lie groups in 1884. The third field contributing to group theory was number theory. Certain abelian group structures had been used implicitly in Carl Friedrich Gauss' number-theoretical work "Disquisitiones Arithmeticae" (1798), and more explicitly by Leopold Kronecker. In 1847, Ernst Kummer made early attempts to prove Fermat's Last Theorem by developing groups describing factorization into prime numbers. The convergence of these various sources into a uniform theory of groups started with Camille Jordan's "Traité des substitutions et des équations algébriques" (1870). Walther von Dyck (1882) introduced the idea of specifying a group by means of generators and relations, and was also the first to give an axiomatic definition of an "abstract group", in the terminology of the time. 
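To make the non-commutativity of the square's symmetry group concrete, the sketch below models each symmetry as a permutation of the four corner labels 0–3 (numbered clockwise) and composes them right to left, as in the text; the corner numbering and helper names are our own illustrative choices:

```python
# Corners of the square labelled 0, 1, 2, 3 clockwise.
# A symmetry is a tuple p where corner i is sent to position p[i].
identity  = (0, 1, 2, 3)
rot90     = (1, 2, 3, 0)            # rotate 90 degrees clockwise
flip_vert = (1, 0, 3, 2)            # reflect across the vertical mid-line

def compose(b, a):
    """Apply symmetry a first, then b (right to left, like b . a)."""
    return tuple(b[a[i]] for i in range(4))

# Order matters: a rotation followed by a reflection is not the same
# symmetry as the reflection followed by the rotation.
print(compose(flip_vert, rot90))   # one diagonal reflection
print(compose(rot90, flip_vert))   # the other diagonal reflection
print(compose(flip_vert, rot90) == compose(rot90, flip_vert))  # False
```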
In the 20th century, groups gained wide recognition through the pioneering work of Ferdinand Georg Frobenius and William Burnside, who worked on representation theory of finite groups, Richard Brauer's modular representation theory and Issai Schur's papers. The theory of Lie groups, and more generally locally compact groups, was studied by Hermann Weyl, Élie Cartan and many others. Its algebraic counterpart, the theory of algebraic groups, was first shaped by Claude Chevalley (from the late 1930s) and later by the work of Armand Borel and Jacques Tits. The University of Chicago's 1960–61 Group Theory Year brought together group theorists such as Daniel Gorenstein, John G. Thompson and Walter Feit, laying the foundation of a collaboration that, with input from numerous other mathematicians, led to the classification of finite simple groups, with the final step taken by Aschbacher and Smith in 2004. This project exceeded previous mathematical endeavours by its sheer size, in both length of proof and number of researchers. Research is ongoing to simplify the proof of this classification. These days, group theory is still a highly active mathematical branch, impacting many other fields. Basic facts about all groups that can be obtained directly from the group axioms are commonly subsumed under "elementary group theory". For example, repeated applications of the associativity axiom show that the unambiguity of "a" • "b" • "c" = ("a" • "b") • "c" = "a" • ("b" • "c") generalizes to more than three factors. Because this implies that parentheses can be inserted anywhere within such a series of terms, parentheses are usually omitted. The axioms may be weakened to assert only the existence of a left identity and left inverses. Both can be shown to be actually two-sided, so the resulting definition is equivalent to the one given above. Two important consequences of the group axioms are the uniqueness of the identity element and the uniqueness of inverse elements. There can be only one identity element in a group, and each element in a group has exactly one inverse element. Thus, it is customary to speak of "the" identity, and "the" inverse of an element. To prove the uniqueness of an inverse element of "a", suppose that "a" has two inverses, denoted "b" and "c", in a group ("G", •). Then "b" = "b" • "e" = "b" • ("a" • "c") = ("b" • "a") • "c" = "e" • "c" = "c", where "e" denotes the identity element. The term "b" at the start of this chain and the "c" at the end are equal, since they are connected by a chain of equalities. In other words, there is only one inverse element of "a". Similarly, to prove that the identity element of a group is unique, assume "G" is a group with two identity elements "e" and "f". Then "e" = "e" • "f" = "f", hence "e" and "f" are equal. In groups, the existence of inverse elements implies that division is possible: given elements "a" and "b" of the group "G", there is exactly one solution "x" in "G" to the equation "x" • "a" = "b", namely "b" • "a"⁻¹. In fact, ("b" • "a"⁻¹) • "a" = "b" • ("a"⁻¹ • "a") = "b"; uniqueness results by multiplying the two sides of the equation "x" • "a" = "b" by "a"⁻¹. The element "b" • "a"⁻¹, often denoted "b" / "a", is called the "right quotient" of "b" by "a", or the result of the "right division" of "b" by "a". Similarly, there is exactly one solution "y" in "G" to the equation "a" • "y" = "b", namely "a"⁻¹ • "b". This solution is the "left quotient" of "b" by "a", and is sometimes denoted "a" \ "b". In general "b" / "a" and "a" \ "b" may be different, but, if the group operation is commutative (that is, if the group is abelian), they are equal. In this case, the group operation is often denoted as an addition, and one talks of "subtraction" and "difference" instead of division and quotient. A consequence of this is that multiplication by a group element "g" is a bijection. 
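The statement above that "division" always has exactly one solution can be checked by brute force in any small finite group; here is a minimal Python sketch using addition modulo 5 as the group, an example chosen only for illustration:

```python
from itertools import product

elements = range(5)
op = lambda a, b: (a + b) % 5       # group law: addition modulo 5

# For every pair (a, b) the equation x . a = b has exactly one solution x.
for a, b in product(elements, repeat=2):
    solutions = [x for x in elements if op(x, a) == b]
    assert len(solutions) == 1      # existence and uniqueness of the quotient

print("every equation x . a = b has a unique solution")
```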
Specifically, if "g" is an element of the group "G", the function (mathematics) from "G" to itself that maps to is a bijection. This function is called the "left translation" by "g" . Similarly, the "right translation" by "g" is the bijection from "G" to itself, that maps "h" to . If "G" is abelian, the left and the right translation by a group element are the same. To understand groups beyond the level of mere symbolic manipulations as above, more structural concepts have to be employed. There is a conceptual principle underlying all of the following notions: to take advantage of the structure offered by groups (which sets, being "structureless", do not have), constructions related to groups have to be "compatible" with the group operation. This compatibility manifests itself in the following notions in various ways. For example, groups can be related to each other via functions called group homomorphisms. By the mentioned principle, they are required to respect the group structures in a precise sense. The structure of groups can also be understood by breaking them into pieces called subgroups and quotient groups. The principle of "preserving structures"—a recurring topic in mathematics throughout—is an instance of working in a category, in this case the category of groups. "Group homomorphisms" are functions that preserve group structure. A function between two groups and is called a "homomorphism" if the equation holds for all elements "g", "k" in "G". In other words, the result is the same when performing the group operation after or before applying the map "a". This requirement ensures that , and also for all "g" in "G". Thus a group homomorphism respects all the structure of "G" provided by the group axioms. Two groups "G" and "H" are called "isomorphic" if there exist group homomorphisms and , such that applying the two functions one after another in each of the two possible orders gives the identity functions of "G" and "H". That is, and for any "g" in "G" and "h" in "H". From an abstract point of view, isomorphic groups carry the same information. For example, proving that for some element "g" of "G" is equivalent to proving that , because applying "a" to the first equality yields the second, and applying "b" to the second gives back the first. Informally, a "subgroup" is a group "H" contained within a bigger one, "G". Concretely, the identity element of "G" is contained in "H", and whenever "h" and "h" are in "H", then so are and "h", so the elements of "H", equipped with the group operation on "G" restricted to "H", indeed form a group. In the example above, the identity and the rotations constitute a subgroup highlighted in red in the group table above: any two rotations composed are still a rotation, and a rotation can be undone by (i.e., is inverse to) the complementary rotations 270° for 90°, 180° for 180°, and 90° for 270° (note that rotation in the opposite direction is not defined). The subgroup test is a necessary and sufficient condition for a nonempty subset "H" of a group "G" to be a subgroup: it is sufficient to check that for all elements . Knowing the subgroups is important in understanding the group as a whole. Given any subset "S" of a group "G", the subgroup generated by "S" consists of products of elements of "S" and their inverses. It is the smallest subgroup of "G" containing "S". In the introductory example above, the subgroup generated by r and f consists of these two elements, the identity element id and . 
Again, this is a subgroup, because combining any two of these four elements or their inverses (which are, in this particular case, these same elements) yields an element of this subgroup. In many situations it is desirable to consider two group elements the same if they differ by an element of a given subgroup. For example, in D₄ above, once a reflection is performed, the square never gets back to an unreflected configuration by just applying the rotation operations (and no further reflections), i.e., the rotation operations are irrelevant to the question whether a reflection has been performed. Cosets are used to formalize this insight: a subgroup "H" defines left and right cosets, which can be thought of as translations of "H" by arbitrary group elements "g". In symbolic terms, the "left" and "right" cosets of "H" containing "g" are "gH" = {"g" • "h" : "h" in "H"} and "Hg" = {"h" • "g" : "h" in "H"}, respectively. The left cosets of any subgroup "H" form a partition of "G"; that is, the union of all left cosets is equal to "G" and two left cosets are either equal or have an empty intersection. The first case happens precisely when "g"₁⁻¹ • "g"₂ is in "H", i.e., if the two elements differ by an element of "H". Similar considerations apply to the right cosets of "H". The left and right cosets of "H" may or may not be equal. If they are, i.e., if "gH" = "Hg" for all "g" in "G", then "H" is said to be a "normal subgroup". In D₄, the introductory symmetry group, the left cosets "gR" of the subgroup "R" consisting of the rotations are either equal to "R", if "g" is an element of "R" itself, or otherwise equal to the set "U" of all four reflections. The subgroup "R" is also normal, because "fR" = "U" = "Rf" for any reflection "f", and trivially "rR" = "R" = "Rr" for any rotation "r". (In fact, in the case of D₄, all the cosets determined by elements outside "R" coincide with "U".) In some situations the set of cosets of a subgroup can be endowed with a group law, giving a "quotient group" or "factor group". For this to be possible, the subgroup has to be normal. Given any normal subgroup "N", the quotient group is defined by "G" / "N" = {"gN" : "g" in "G"}, the set of cosets of "N". This set inherits a group operation (sometimes called coset multiplication, or coset addition) from the original group "G": ("gN") • ("hN") = ("gh")"N" for all "g" and "h" in "G". This definition is motivated by the idea (itself an instance of general structural considerations outlined above) that the map "G" → "G" / "N" that associates to any element "g" its coset "gN" be a group homomorphism, or by general abstract considerations called universal properties. The coset "N" itself serves as the identity in this group, and the inverse of "gN" in the quotient group is ("g"⁻¹)"N". The elements of the quotient group D₄ / "R" are "R" itself, which represents the identity, and "U". The group operation on the quotient is easily written down; for example, "U" • "U" = "R", since composing two reflections gives a rotation. Both the subgroup "R" as well as the corresponding quotient are abelian, whereas D₄ is not abelian. Building bigger groups from smaller ones, such as D₄ from its subgroup "R" and the quotient D₄ / "R", is abstracted by a notion called the semidirect product. Quotient groups and subgroups together form a way of describing every group by its "presentation": any group is the quotient of the free group over the "generators" of the group, quotiented by the subgroup of "relations". The dihedral group D₄, for example, can be generated by two elements "r" and "f" (for example, "r" the 90° clockwise rotation and "f" the vertical (or any other) reflection), which means that every symmetry of the square is a finite composition of these two symmetries or their inverses. Together with the relations "r"⁴ = "f"² = ("r" • "f")² = id, the group is completely described. 
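The coset decomposition just described, with the rotation subgroup "R" and the single non-trivial coset "U" of reflections, can be exhibited mechanically. The following Python sketch does so for the symmetries of the square encoded as corner permutations; the encoding and helper names are our illustrative choices:

```python
from itertools import product

def compose(b, a):                      # apply a, then b
    return tuple(b[a[i]] for i in range(4))

rot90    = (1, 2, 3, 0)                 # corners 0-3 labelled clockwise
flip     = (1, 0, 3, 2)                 # reflection across the vertical mid-line
identity = (0, 1, 2, 3)

# The four rotations form the subgroup R; composing each with the
# reflection gives the coset U = flip . R, which contains all reflections.
R = {identity}
while True:
    new = {compose(rot90, g) for g in R} | R
    if new == R:
        break
    R = new

U = {compose(flip, g) for g in R}

print(len(R), len(U), R & U == set())   # 4 4 True: the cosets partition D4
print({compose(u, v) for u, v in product(U, repeat=2)} == R)
# True: composing any two reflections lands back in R, i.e. U . U = R
```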
A presentation of a group can also be used to construct the Cayley graph, a device used to graphically capture discrete groups. Sub- and quotient groups are related in the following way: a subgroup "H" of "G" can be seen as an injective map "H" → "G", i.e., one for which any element of the target has at most one element that maps to it. The counterpart to injective maps are surjective maps (every element of the target is mapped onto), such as the canonical map "G" → "G" / "N". Interpreting subgroups and quotients in light of these homomorphisms emphasizes the structural concept inherent to these definitions, alluded to in the introduction. In general, homomorphisms are neither injective nor surjective. Kernel and image of group homomorphisms and the first isomorphism theorem address this phenomenon. Examples and applications of groups abound. A starting point is the group Z of integers with addition as group operation, introduced above. If instead of addition multiplication is considered, one obtains multiplicative groups. These groups are predecessors of important constructions in abstract algebra. Groups are also applied in many other mathematical areas. Mathematical objects are often examined by associating groups to them and studying the properties of the corresponding groups. For example, Henri Poincaré founded what is now called algebraic topology by introducing the fundamental group. By means of this connection, topological properties such as proximity and continuity translate into properties of groups. For example, elements of the fundamental group are represented by loops. Consider, for instance, loops in a plane with a point removed. A loop that does not enclose the missing point is considered null-homotopic (and thus irrelevant), because it can be continuously shrunk to a point. The presence of the hole prevents a loop that winds around the missing point from being shrunk to a point. The fundamental group of the plane with a point deleted turns out to be infinite cyclic, generated by such a loop winding once around the hole. This way, the fundamental group detects the hole. In more recent applications, the influence has also been reversed to motivate geometric constructions by a group-theoretical background. In a similar vein, geometric group theory employs geometric concepts, for example in the study of hyperbolic groups. Further branches crucially applying groups include algebraic geometry and number theory. In addition to the above theoretical applications, many practical applications of groups exist. Cryptography relies on the combination of the abstract group theory approach together with algorithmical knowledge obtained in computational group theory, in particular when implemented for finite groups. Applications of group theory are not restricted to mathematics; sciences such as physics, chemistry and computer science benefit from the concept. Many number systems, such as the integers and the rationals, enjoy a naturally given group structure. In some cases, such as with the rationals, both addition and multiplication operations give rise to group structures. Such number systems are predecessors to more general algebraic structures known as rings and fields. Further abstract algebraic concepts such as modules, vector spaces and algebras also form groups. The group of integers Z under addition, denoted (Z, +), has been described above. The integers, with the operation of multiplication instead of addition, (Z, ·), do "not" form a group. 
The closure, associativity and identity axioms are satisfied, but inverses do not exist: for example, "a" = 2 is an integer, but the only solution to the equation "a" • "b" = 1 in this case is "b" = 1/2, which is a rational number, but not an integer. Hence not every element of formula_1 has a (multiplicative) inverse. The desire for the existence of multiplicative inverses suggests considering fractions "a"/"b". Fractions of integers (with "b" nonzero) are known as rational numbers. The set of all such irreducible fractions is commonly denoted formula_8. There is still a minor obstacle for formula_9, the rationals with multiplication, being a group: because the rational number 0 does not have a multiplicative inverse (i.e., there is no "x" such that "x" • 0 = 1), formula_9 is still not a group. However, the set of all "nonzero" rational numbers formula_11 does form an abelian group under multiplication, generally denoted formula_12. Associativity and identity element axioms follow from the properties of integers. The closure requirement still holds true after removing zero, because the product of two nonzero rationals is never zero. Finally, the inverse of "a"/"b" is "b"/"a", therefore the axiom of the inverse element is satisfied. The rational numbers (including 0) also form a group under addition. Intertwining addition and multiplication operations yields more complicated structures called rings and—if division is possible, such as in formula_8—fields, which occupy a central position in abstract algebra. Group theoretic arguments therefore underlie parts of the theory of those entities. In modular arithmetic, two integers are added and then the sum is divided by a positive integer called the "modulus." The result of modular addition is the remainder of that division. For any modulus "n", the set of integers from 0 to "n" − 1 forms a group under modular addition: the inverse of any element "a" is "n" − "a", and 0 is the identity element. This is familiar from the addition of hours on the face of a clock: if the hour hand is on 9 and is advanced 4 hours, it ends up on 1, as shown at the right. This is expressed by saying that 9 + 4 equals 1 "modulo 12" or, in symbols, 9 + 4 ≡ 1 (mod 12). The group of integers modulo "n" is written formula_14 or formula_15. For any prime number "p", there is also the multiplicative group of integers modulo "p". Its elements are the integers 1 to "p" − 1. The group operation is multiplication modulo "p". That is, the usual product is divided by "p" and the remainder of this division is the result of modular multiplication. For example, if "p" = 5, there are four group elements 1, 2, 3, 4. In this group, 4 • 4 = 1, because the usual product 16 is equivalent to 1: divided by 5 it yields a remainder of 1, for 5 divides 16 − 1 = 15, denoted 16 ≡ 1 (mod 5). The primality of "p" ensures that the product of two integers neither of which is divisible by "p" is not divisible by "p" either, hence the indicated set of classes is closed under multiplication. The identity element is 1, as usual for a multiplicative group, and the associativity follows from the corresponding property of integers. Finally, the inverse element axiom requires that given an integer "a" not divisible by "p", there exists an integer "b" such that "a" • "b" ≡ 1 (mod "p"). The inverse "b" can be found by using Bézout's identity and the fact that the greatest common divisor of "a" and "p" equals 1. In the case above, the inverse of 4 is 4, and the inverse of 3 is 2, as 3 • 2 = 6 ≡ 1 (mod 5). Hence all group axioms are fulfilled. Actually, this example is similar to formula_16 above: it consists of exactly those elements in formula_17 that have a multiplicative inverse.
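The two modular-arithmetic groups just described can be illustrated with a short calculation. This is a sketch rather than part of the article; the helper extended_gcd is an illustrative implementation of Bézout's identity used to find multiplicative inverses modulo the prime 5.

```python
# Sketch: clock addition modulo 12 and multiplication modulo the prime p = 5.

def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)  (Bezout's identity)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

# Addition modulo 12: 9 + 4 = 1 (mod 12), and the inverse of a is 12 - a.
assert (9 + 4) % 12 == 1
assert all((a + (12 - a) % 12) % 12 == 0 for a in range(12))

# Multiplication modulo p = 5 on {1, 2, 3, 4}: closed, and every element has
# an inverse because gcd(a, 5) = 1 for each a.
p = 5
elements = range(1, p)
assert all((a * b) % p != 0 for a in elements for b in elements)   # closure
for a in elements:
    g, x, _ = extended_gcd(a, p)        # Bezout: a*x + p*y = 1
    inv = x % p
    assert g == 1 and (a * inv) % p == 1
    print(f"inverse of {a} mod {p} is {inv}")   # e.g. 4*4 = 16 = 1 and 3*2 = 6 = 1 (mod 5)
```

Under these assumptions the printed inverses reproduce the examples in the text: 4 is its own inverse and 2 is the inverse of 3 modulo 5.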
These groups are denoted F. They are crucial to public-key cryptography. A "cyclic group" is a group all of whose elements are powers of a particular element "a". In multiplicative notation, the elements of the group are the powers ..., "a"^−2, "a"^−1, "a"^0 = "e", "a", "a"^2, "a"^3, ..., where "a"^2 means "a" • "a", and "a"^−3 stands for "a"^−1 • "a"^−1 • "a"^−1 = ("a" • "a" • "a")^−1, etc. Such an element "a" is called a generator or a primitive element of the group. In additive notation, the requirement for an element to be primitive is that each element of the group can be written as ..., −"a" − "a", −"a", 0, "a", "a" + "a", ... In the groups Z/"n"Z introduced above, the element 1 is primitive, so these groups are cyclic. Indeed, each element is expressible as a sum all of whose terms are 1. Any cyclic group with "n" elements is isomorphic to this group. A second example for cyclic groups is the group of "n"-th complex roots of unity, given by complex numbers "z" satisfying "z"^"n" = 1. These numbers can be visualized as the vertices of a regular "n"-gon, as shown in blue at the right for "n" = 6. The group operation is multiplication of complex numbers. In the picture, multiplying with "z" corresponds to a counter-clockwise rotation by 60°. Using some field theory, the group F can be shown to be cyclic: for example, if "p" = 5, 3 is a generator since 3^1 = 3, 3^2 = 9 ≡ 4, 3^3 ≡ 2, and 3^4 ≡ 1 (mod 5). Some cyclic groups have an infinite number of elements. In these groups, for every non-zero element "a", all the powers of "a" are distinct; despite the name "cyclic group", the powers of the elements do not cycle. An infinite cyclic group is isomorphic to the group of integers under addition introduced above. As these two prototypes are both abelian, so is any cyclic group. The study of finitely generated abelian groups is quite mature, including the fundamental theorem of finitely generated abelian groups; and reflecting this state of affairs, many group-related notions, such as center and commutator, describe the extent to which a given group is not abelian. "Symmetry groups" are groups consisting of symmetries of given mathematical objects—be they of geometric nature, such as the introductory symmetry group of the square, or of algebraic nature, such as polynomial equations and their solutions. Conceptually, group theory can be thought of as the study of symmetry. Symmetries in mathematics greatly simplify the study of geometrical or analytical objects. A group is said to act on another mathematical object "X" if every group element performs some operation on "X" compatibly with the group law. In the rightmost example below, an element of order 7 of the (2,3,7) triangle group acts on the tiling by permuting the highlighted warped triangles (and the other ones, too). By a group action, the group pattern is connected to the structure of the object being acted on. In chemical fields, such as crystallography, space groups and point groups describe molecular symmetries and crystal symmetries. These symmetries underlie the chemical and physical behavior of these systems, and group theory enables simplification of the quantum mechanical analysis of these properties. For example, group theory is used to show that optical transitions between certain quantum levels cannot occur simply because of the symmetry of the states involved. Not only are groups useful to assess the implications of symmetries in molecules, but surprisingly they also predict that molecules sometimes can change symmetry.
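Both cyclic-group examples above can be verified numerically. The following sketch (not from the article; the tolerance and variable names are illustrative) checks that 3 generates the multiplicative group modulo 5 and that the sixth complex roots of unity are all powers of a single rotation through 60°.

```python
# Sketch: two cyclic groups — nonzero residues mod 5, and 6th roots of unity.
import cmath
import math

# Powers of 3 modulo 5 run through every nonzero residue, so 3 is a generator.
p = 5
powers = [pow(3, k, p) for k in range(1, p)]
print(powers)                              # [3, 4, 2, 1]
assert sorted(powers) == list(range(1, p))

# The 6th roots of unity: z1 = exp(2*pi*i/6) rotates by 60 degrees, and its
# six powers exhaust the whole group of solutions of z^6 = 1.
n = 6
z1 = cmath.exp(2j * math.pi / n)
roots = [z1 ** k for k in range(n)]
assert all(abs(z ** n - 1) < 1e-9 for z in roots)                   # each satisfies z^n = 1
assert all(abs(z1 ** k - 1) > 1e-9 for k in range(1, n))            # z1 has order exactly 6
```

The design choice here mirrors the text: one group is written multiplicatively with integer arithmetic, the other with complex multiplication, yet both are generated by a single element.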
The Jahn-Teller effect is a distortion of a molecule of high symmetry when it adopts a particular ground state of lower symmetry from a set of possible ground states that are related to each other by the symmetry operations of the molecule. Likewise, group theory helps predict the changes in physical properties that occur when a material undergoes a phase transition, for example, from a cubic to a tetrahedral crystalline form. An example is ferroelectric materials, where the change from a paraelectric to a ferroelectric state occurs at the Curie temperature and is related to a change from the high-symmetry paraelectric state to the lower symmetry ferroelectric state, accompanied by a so-called soft phonon mode, a vibrational lattice mode that goes to zero frequency at the transition. Such spontaneous symmetry breaking has found further application in elementary particle physics, where its occurrence is related to the appearance of Goldstone bosons. Finite symmetry groups such as the Mathieu groups are used in coding theory, which is in turn applied in error correction of transmitted data, and in CD players. Another application is differential Galois theory, which characterizes functions having antiderivatives of a prescribed form, giving group-theoretic criteria for when solutions of certain differential equations are well-behaved. Geometric properties that remain stable under group actions are investigated in (geometric) invariant theory. Matrix groups consist of matrices together with matrix multiplication. The "general linear group" consists of all invertible "n"-by-"n" matrices with real entries. Its subgroups are referred to as "matrix groups" or "linear groups". The dihedral group example mentioned above can be viewed as a (very small) matrix group. Another important matrix group is the special orthogonal group SO("n"). It describes all possible rotations in "n" dimensions. Via Euler angles, rotation matrices are used in computer graphics. "Representation theory" is both an application of the group concept and important for a deeper understanding of groups. It studies the group by its group actions on other spaces. A broad class of group representations are linear representations, i.e., the group is acting on a vector space, such as the three-dimensional Euclidean space R^3. A representation of "G" on an "n"-dimensional real vector space is simply a group homomorphism from the group to the general linear group. This way, the group operation, which may be abstractly given, translates to the multiplication of matrices, making it accessible to explicit computations. Given a group action, this gives further means to study the object being acted on. On the other hand, it also yields information about the group. Group representations are an organizing principle in the theory of finite groups, Lie groups, algebraic groups and topological groups, especially (locally) compact groups. "Galois groups" were developed to help solve polynomial equations by capturing their symmetry features. For example, the solutions of the quadratic equation "ax"^2 + "bx" + "c" = 0 are given by "x" = (−"b" ± √("b"^2 − 4"ac")) / (2"a"). Exchanging "+" and "−" in the expression, i.e., permuting the two solutions of the equation, can be viewed as a (very simple) group operation. Similar formulae are known for cubic and quartic equations, but do "not" exist in general for degree 5 and higher.
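A linear representation in the sense just described can be made concrete on a very small example. The following sketch is not from the article; the map rho and the matrix helpers are illustrative names. It sends each element of the cyclic group Z/4Z to a rotation matrix and checks that the assignment is a group homomorphism landing in the general linear group.

```python
# Sketch: a 2-dimensional real representation of (Z/4Z, +) by rotation matrices.

def mat_mult(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

R90 = ((0, -1), (1, 0))                # rotation by 90 degrees (integer entries)
I2 = ((1, 0), (0, 1))

def rho(k):
    """Representation of Z/4Z: k -> R90 raised to the k-th power."""
    M = I2
    for _ in range(k % 4):
        M = mat_mult(R90, M)
    return M

# Homomorphism property: rho(a + b) = rho(a) * rho(b) for all a, b in Z/4Z,
# so the abstract addition translates into matrix multiplication.
assert all(rho((a + b) % 4) == mat_mult(rho(a), rho(b)) for a in range(4) for b in range(4))

# Each rho(k) is invertible (determinant 1), hence lies in the general linear group.
det = lambda M: M[0][0] * M[1][1] - M[0][1] * M[1][0]
assert all(det(rho(k)) == 1 for k in range(4))
print(rho(2))        # ((-1, 0), (0, -1)), i.e. rotation by 180 degrees
```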
Abstract properties of Galois groups associated with polynomials (in particular their solvability) give a criterion for polynomials that have all their solutions expressible by radicals, i.e., solutions expressible using solely addition, multiplication, and roots similar to the formula above. The problem can be dealt with by shifting to field theory and considering the splitting field of a polynomial. Modern Galois theory generalizes the above type of Galois groups to field extensions and establishes—via the fundamental theorem of Galois theory—a precise relationship between fields and groups, underlining once again the ubiquity of groups in mathematics. A group is called "finite" if it has a finite number of elements. The number of elements is called the order of the group. An important class is the "symmetric groups" S, the groups of permutations of "N" letters. For example, the symmetric group on 3 letters S is the group consisting of all possible orderings of the three letters "ABC", i.e., it contains the elements "ABC", "ACB", "BAC", "BCA", "CAB", "CBA", in total 6 (factorial of 3) elements. This class is fundamental insofar as any finite group can be expressed as a subgroup of a symmetric group S for a suitable integer "N", according to Cayley's theorem. Parallel to the group of symmetries of the square above, S can also be interpreted as the group of symmetries of an equilateral triangle. The order of an element "a" in a group "G" is the least positive integer "n" such that "a"^"n" = "e", where "a"^"n" represents the application of the operation • to "n" copies of "a". (If • represents multiplication, then "a"^"n" corresponds to the "n"th power of "a".) In infinite groups, such an "n" may not exist, in which case the order of "a" is said to be infinity. The order of an element equals the order of the cyclic subgroup generated by this element. More sophisticated counting techniques, for example counting cosets, yield more precise statements about finite groups: Lagrange's Theorem states that for a finite group "G" the order of any finite subgroup "H" divides the order of "G". The Sylow theorems give a partial converse. The dihedral group (discussed above) is a finite group of order 8. The order of r is 4, as is the order of the subgroup "R" it generates (see above). The order of the reflection elements f etc. is 2. Both orders divide 8, as predicted by Lagrange's theorem. The groups F above have order "p" − 1. Mathematicians often strive for a complete classification (or list) of a mathematical notion. In the context of finite groups, this aim leads to difficult mathematics. According to Lagrange's theorem, finite groups of order "p", a prime number, are necessarily cyclic (abelian) groups Z/"p"Z. Groups of order "p"^2 can also be shown to be abelian, a statement which does not generalize to order "p"^3, as the non-abelian group D of order 8 = 2^3 above shows. Computer algebra systems can be used to list small groups, but there is no classification of all finite groups. An intermediate step is the classification of finite simple groups. A nontrivial group is called "simple" if its only normal subgroups are the trivial group and the group itself. The Jordan–Hölder theorem exhibits finite simple groups as the building blocks for all finite groups. Listing all finite simple groups was a major achievement in contemporary group theory.
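The symmetric group on the letters "ABC" and the statement of Lagrange's theorem for the cyclic subgroups it contains can be checked directly. This is a sketch rather than part of the article; the helpers compose and order are illustrative names.

```python
# Sketch: the symmetric group on "ABC" has 3! = 6 elements, and the order of
# every element divides the order of the group (Lagrange's theorem applied to
# the cyclic subgroup generated by that element).
from itertools import permutations

letters = "ABC"
identity = tuple(letters)
group = list(permutations(letters))          # all 6 orderings of "ABC"
print(len(group), "elements:", ["".join(p) for p in group])

def compose(p, q):
    """Apply q first, then p; a permutation is given by the images of A, B, C."""
    index = {x: i for i, x in enumerate(letters)}
    return tuple(p[index[q[index[x]]]] for x in letters)

def order(p):
    """Least n >= 1 such that applying p n times gives the identity."""
    n, current = 1, p
    while current != identity:
        current = compose(p, current)
        n += 1
    return n

for p in group:
    assert len(group) % order(p) == 0        # Lagrange: the order of <p> divides 6
    print("".join(p), "has order", order(p)) # every order is 1, 2 or 3
```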
1998 Fields Medal winner Richard Borcherds succeeded in proving the monstrous moonshine conjectures, a surprising and deep relation between the largest finite simple sporadic group—the "monster group"—and certain modular functions, a piece of classical complex analysis, and string theory, a theory supposed to unify the description of many physical phenomena. Many groups are simultaneously groups and examples of other mathematical structures. In the language of category theory, they are group objects in a category, meaning that they are objects (that is, examples of another mathematical structure) which come with transformations (called morphisms) that mimic the group axioms. For example, every group (as defined above) is also a set, so a group is a group object in the category of sets. Some topological spaces may be endowed with a group law. In order for the group law and the topology to interweave well, the group operations must be continuous functions, that is, "g" • "h" and "g"^−1 must not vary wildly if "g" and "h" vary only little. Such groups are called "topological groups," and they are the group objects in the category of topological spaces. The most basic examples are the reals R under addition, the nonzero real numbers under multiplication, and similarly with any other topological field such as the complex numbers or "p"-adic numbers. All of these groups are locally compact, so they have Haar measures and can be studied via harmonic analysis. The former offer an abstract formalism of invariant integrals. Invariance means, in the case of real numbers for example, that ∫ "f"("x") d"x" = ∫ "f"("x" + "c") d"x" for any constant "c". Matrix groups over these fields fall under this regime, as do adele rings and adelic algebraic groups, which are basic to number theory. Galois groups of infinite field extensions such as the absolute Galois group can also be equipped with a topology, the so-called Krull topology, which in turn is central to generalizing the connection of fields and groups sketched above to infinite field extensions. An advanced generalization of this idea, adapted to the needs of algebraic geometry, is the étale fundamental group. "Lie groups" (in honor of Sophus Lie) are groups which also have a manifold structure, i.e., they are spaces looking locally like some Euclidean space of the appropriate dimension. Again, the additional structure, here the manifold structure, has to be compatible, i.e., the maps corresponding to multiplication and the inverse have to be smooth. A standard example is the general linear group introduced above: it is an open subset of the space of all "n"-by-"n" matrices, because it is given by the inequality det("A") ≠ 0, where "A" denotes an "n"-by-"n" matrix. Lie groups are of fundamental importance in modern physics: Noether's theorem links continuous symmetries to conserved quantities. Rotation, as well as translations in space and time, are basic symmetries of the laws of mechanics. They can, for instance, be used to construct simple models—imposing, say, axial symmetry on a situation will typically lead to significant simplification in the equations one needs to solve to provide a physical description. Another example is the Lorentz transformations, which relate measurements of time and velocity of two observers in motion relative to each other. They can be deduced in a purely group-theoretical way, by expressing the transformations as a rotational symmetry of Minkowski space. The latter serves—in the absence of significant gravitation—as a model of spacetime in special relativity.
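The Lorentz transformations mentioned above behave like a one-parameter group of "rotations" of Minkowski space, and this can be checked numerically. The following is a sketch, not from the article, assuming 1+1-dimensional Minkowski space with c = 1; boost, mat_mult and close are illustrative helper names.

```python
# Sketch: Lorentz boosts along one spatial axis preserve the Minkowski form
# diag(1, -1) and compose like a group (their rapidities simply add).
import math

def boost(rapidity):
    """2x2 Lorentz boost in units with c = 1."""
    ch, sh = math.cosh(rapidity), math.sinh(rapidity)
    return ((ch, sh), (sh, ch))

def mat_mult(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

def transpose(A):
    return tuple(tuple(A[j][i] for j in range(2)) for i in range(2))

def close(A, B, tol=1e-9):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(2) for j in range(2))

eta = ((1, 0), (0, -1))                      # Minkowski form in 1+1 dimensions
L1, L2 = boost(0.3), boost(0.7)

# Each boost preserves the Minkowski form: L^T * eta * L = eta.
assert close(mat_mult(transpose(L1), mat_mult(eta, L1)), eta)
# Closure: composing two boosts gives the boost with the summed rapidity.
assert close(mat_mult(L1, L2), boost(1.0))
# Inverses: the boost with opposite rapidity undoes the first one.
assert close(mat_mult(L1, boost(-0.3)), ((1, 0), (0, 1)))
```

Under these assumptions the checks illustrate, on the smallest possible example, the group-theoretical derivation of the Lorentz transformations described in the text.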
The full symmetry group of Minkowski space, i.e., including translations, is known as the Poincaré group. By the above, it plays a pivotal role in special relativity and, by implication, for quantum field theories. Symmetries that vary with location are central to the modern description of physical interactions with the help of gauge theory. In abstract algebra, more general structures are defined by relaxing some of the axioms defining a group. For example, if the requirement that every element has an inverse is eliminated, the resulting algebraic structure is called a monoid. The natural numbers N (including 0) under addition form a monoid, as do the nonzero integers under multiplication , see above. There is a general method to formally add inverses to elements to any (abelian) monoid, much the same way as is derived from , known as the Grothendieck group. Groupoids are similar to groups except that the composition need not be defined for all "a" and "b". They arise in the study of more complicated forms of symmetry, often in topological and analytical structures, such as the fundamental groupoid or stacks. Finally, it is possible to generalize any of these concepts by replacing the binary operation with an arbitrary "n"-ary one (i.e., an operation taking "n" arguments). With the proper generalization of the group axioms this gives rise to an "n"-ary group. The table gives a list of several structures generalizing groups. Mariah Carey Mariah Carey (born March 27, 1969 or 1970) is an American singer and songwriter. Referred to as the "Songbird Supreme" by the "Guinness World Records", she is noted for her five-octave vocal range, vocal power, melismatic style, and signature use of the whistle register. After signing to Columbia Records, she released her debut album, "Mariah Carey" (1990), which spawned four number-one singles on the U.S. "Billboard" Hot 100 chart: "Vision of Love", "Love Takes Time", "Someday", and "I Don't Wanna Cry". She followed this chart success with the number-one single "Emotions", from her second album of the same name, making her the first and only artist to have their first five singles reach number-one on the "Billboard" Hot 100. Following her marriage to Sony Music head Tommy Mottola, Carey became the label's highest-selling act with the follow-up albums "Music Box" (1993), "Merry Christmas" (1994), and "Daydream" (1995). These albums spawned a string of successful singles including the international number-one "Without You", holiday track "All I Want for Christmas Is You", and "One Sweet Day", which became the longest-running U.S. number-one single in history with a total of 16 weeks. After divorcing Mottola, Carey adopted a new image and incorporated more elements of hip hop into her music with the release of "Butterfly" (1997). "Billboard" named her the most successful artist of the 1990s in the United States, while World Music Awards honored her as the world's best-selling recording artist of the 1990s. Carey parted with Columbia in 2000, and signed a record-breaking $100 million recording contract with Virgin Records. In the weeks prior to the release of her film "Glitter" and its accompanying soundtrack in 2001, she suffered a physical and emotional breakdown and was hospitalized for severe exhaustion. The project was poorly received and led to a general decline in the singer's career. Carey's recording contract was bought out for $50 million by Virgin and she signed a multi-million dollar deal with Island Records the following year. 
After a relatively unsuccessful period, she returned to the top of music charts with "The Emancipation of Mimi" (2005). It became the world's second best-selling album of 2005 and produced "We Belong Together", which made her the only artist to top the "Billboard" Hot 100 Decade-End chart twice. Carey revived her film career with a supporting role in "Precious" (2009), which earned her the Breakthrough Performance Award at the Palm Springs International Film Festival. Throughout her career, Carey has sold more than 200 million records worldwide, making her one of the best-selling music artists of all time. According to the RIAA, she is the third-best-selling female artist in the United States, with 63.5 million certified albums. With the release of "Touch My Body" (2008), Carey gained her 18th number-one single in the United States, more than any other solo artist. In 2012, she was ranked second on VH1's list of the 100 greatest female artists in music history. Aside from her commercial accomplishments, Carey has won 5 Grammy Awards, 19 World Music Awards, 10 American Music Awards, and 14 Billboard Music Awards, and has been consistently credited with inspiring a generation of singers. Mariah Carey was born in Huntington, New York. Her father, Alfred Roy Carey, was of African American and Afro-Venezuelan descent, while her mother, Patricia (née Hickey), is of Irish descent. The last name Carey was adopted by her Venezuelan grandfather, Francisco Núñez, after he came to New York. Patricia was an occasional opera singer and vocal coach before she met Alfred in 1960. As he began earning a living as an aeronautical engineer, the couple wed later that year, and moved into a small suburb in New York. After their elopement, Patricia's family disowned her for marrying a black man. Carey later explained that growing up, she felt neglected by her maternal family, which greatly affected her. During the years between the births of Carey's older sister Alison and herself, the Carey family struggled within the community due to their ethnicity. Carey's name was derived from the song "They Call the Wind Maria", originally from the 1951 Broadway musical "Paint Your Wagon." When Carey was three, her parents divorced. After their separation, Alison moved in with her father, while the other two children, Mariah and brother Morgan, remained with their mother. Carey would grow apart from her father, and would later stop seeing him altogether. By the age of four, Carey recalled that she had begun to sneak the radio under her covers at night, and just sing and try to find peace within the music. During elementary school, she excelled in subjects that she enjoyed, such as music, art, and literature, but did not find interest in others. After several years of financial struggles, Patricia earned enough money to move her family into a stable and more affluent sector in New York. Carey had begun writing poems and adding melodies to them, thus starting as a singer-songwriter while attending Harborfields High School in Greenlawn, New York, where she graduated in 1987. Carey excelled in her music, and demonstrated usage of the whistle register, though only beginning to master and control it through her training with her mother. Though introducing her daughter to classical opera, Patricia never pressured her to pursue a career in it, as she never seemed interested. Carey recalled that she kept her singer-songwriter works a secret and noted that Patricia had "never been a pushy mom. 
She never said, 'Give it more of an operatic feel.' I respect opera like crazy, but it didn't influence me." While in high school, Carey began writing songs with Gavin Christopher. They needed an assistant who could play the keyboard: "We called someone and he couldn't come, so by accident we stumbled upon Ben [Margulies]. Ben came to the studio, and he really couldn't play the keyboards very well – he was really more of a drummer – but after that day, we kept in touch, and we sort of clicked as writers." Carey and Christopher began writing and composing songs in the basement of his father's store during Carey's senior year. After composing their first song together, "Here We Go 'Round Again", which Carey described as having a Motown vibe, they continued writing material for a full-length demo. She began living in a one-bedroom apartment in Manhattan, which she shared with four other female students. Carey worked as a waitress for various restaurants, usually getting fired after two weeks. While requiring work to pay for her rent, Carey still had musical ambitions, as she continued working late into the night with Margulies in hopes of completing a demo. After completing her four song demo tape, Carey attempted to pass it to music labels, but failed each time. Shortly thereafter, she was introduced to rising pop singer Brenda K. Starr. As Starr's friendship with Carey grew, so did her interest in helping Carey succeed in the industry. In December 1988, Carey accompanied Starr to a record executives' gala, where she handed her demo tape to the head of Columbia Records, Tommy Mottola, who listened to it on his way back home. After the first two songs, he was interested in her; later, after searching for Carey for two weeks, he immediately signed her and began mapping out her commercial debut. While she maintained that she wanted to continue working with Margulies, Mottola enlisted top producers of the time, including Ric Wake, Narada Michael Walden and Rhett Lawrence. Mottola and the staff at Columbia had planned to market Carey as their main female pop artist, competing with Whitney Houston and Madonna (signed to Arista and Sire Records respectively). After the completion of her debut album, "Mariah Carey", Columbia spent more than $1 million promoting it. Despite a weak start, the album eventually reached the top of the "Billboard" 200, after Carey's exposure at the 33rd Annual Grammy Awards. "Mariah Carey" stayed atop the charts for eleven consecutive weeks, and she won the Best New Artist, and Best Female Pop Vocal Performance awards for her single "Vision of Love." In addition to "Vision of Love", the album yielded the "Billboard" Hot 100 number one singles "Love Takes Time", "Someday", and "I Don't Wanna Cry". Carey became the first musical act since The Jackson 5 to have their first four singles reach number one. "Mariah Carey" finished as the best-selling album in the United States in 1991, while totaling sales of over 15 million copies. Carey began recording her second studio album, "Emotions", in 1991. She described it as an homage to Motown soul music, as she felt the need to pay tribute to the type of music that had influenced her as a child. For the project, Carey worked with Walter Afanasieff, who only had a small role on her debut, as well as Robert Clivillés and David Cole, from the dance group C+C Music Factory. 
Carey's relationship with Margulies deteriorated over a personal contract Carey had signed with him before signing the record deal with Columbia, agreeing to split not only the songwriting royalties from the songs, but half of her earnings as well. However, when the time came to write music for "Emotions," Sony officials made it clear he would only be paid the fair amount given to co-writers on an album. Margulies later filed a lawsuit against Sony which ultimately led to their parting of ways. "Emotions" was released on September 17, 1991, and was accepted by critics as a more mature album than its predecessor. While praised for Carey's improved songwriting, production, and new sound, the album was criticized for its material, thought weaker than that of her debut. Though the album managed sales of over eight million copies globally, "Emotions" failed to reach the commercial and critical heights of its predecessor. As after the release of her debut, critics again questioned whether Carey would embark on a world tour to promote her material. Although Carey explained that stage fright and the style of her songs made a tour very daunting, speculation grew that Carey was a "studio worm," and that she was incapable of producing the perfect pitch and 5-octave vocal range for which she was known. In hopes of putting to rest any claims of her being a manufactured artist, Carey and Walter Afanasieff decided to book an appearance on MTV Unplugged, a television program aired by MTV. The show presented name artists "unplugged" or stripped of studio equipment. While Carey favored her more soulful and powerful songs, it was decided that her most popular content would be included. Days before the show's taping, Carey and Afanasieff thought of adding a cover version of an older song, in order to provide something different and unexpected. They chose "I'll Be There", a song made popular by The Jackson 5 in 1970. On March 16, 1992, Carey recorded a seven-piece set-list at Kaufman Astoria Studios in Queens, New York. The revue was met with critical acclaim, leading to it being aired more than three times as often as an average episode would. The success tempted Sony officials to market it. Sony decided to release it as an EP, priced low because it was short. The EP proved to be a success, contrary to critics and speculations that Carey was just a studio artist, and was given a triple-Platinum certification by the Recording Industry Association of America (RIAA), and managed Gold and Platinum certifications in several European markets. During early 1993, Carey began working on her third studio album, "Music Box". After "Emotions" failed to achieve the commercial heights of her debut album, Carey and Columbia came to the agreement that the next album would contain a more pop influenced sound, in order to appeal to a wider audience. During Carey's writing sessions, she began working mostly with Afanasieff, with whom she co-wrote and produced most of "Music Box". On August 31, "Music Box" was released around the world, debuting at number-one on the "Billboard" 200. The album was met with mixed reception from music critics; while many praised the album's pop influence and strong content, others felt that Carey made less usage of her acclaimed vocal range. Ron Wynn from AllMusic described Carey's different form of singing on the album: "It was wise for Carey to display other elements of her approach, but sometimes excessive spirit is preferable to an absence of passion." 
The album's second single, "Hero", would eventually come to be one of Carey's most popular and inspirational songs of her career. The song became Carey's eighth chart topper in the United States, and began expanding Carey's popularity throughout Europe. With the release of the album's third single, Carey achieved several career milestones. Her cover of Badfinger's "Without You" became her first number one single in Germany, Sweden, and the United Kingdom. "Music Box" spent prolonged periods at number one on the album charts of several countries, and eventually became one of the best-selling albums of all time, with worldwide sales of over 28 million copies. After declining to tour for her past two albums, Carey agreed to embark on a short stateside string of concerts, titled the Music Box Tour. Spanning only six dates across North America, the short but successful tour was a large step for Carey, who dreaded the hassle of touring. Following "Music Box", Carey took a relatively large period of time away from the public eye, and began working on an unknown project throughout 1994. In October 1994, "Billboard" announced that Carey would release a holiday album later that year. That 1994, Carey recorded a duet with Luther Vandross; a cover of Lionel Richie and Diana Ross's "Endless Love". Carey's album "Merry Christmas" was released on November 1, 1994, on the same day that the album's first single, "All I Want for Christmas Is You", was released. The album eventually became the best-selling Christmas album of all time, with global sales reaching over 15 million copies. Upon its release, "All I Want for Christmas Is You" was critically praised, and is considered "one of the few worthy modern additions to the holiday canon." "Rolling Stone" described it as a "holiday standard and ranked it fourth on its Greatest Rock and Roll Christmas Songs list. Commercially, it became the best-selling holiday ringtone of all time, and the best-selling single by a non-Asian artist in Japan, selling over 2.1 million units (both ringtone and digital download). By the end of the holiday season of 1994, Carey and Afanasieff had already begun writing material for her next studio album, which would be released late the following year. Released on October 3, 1995, "Daydream" combined the pop sensibilities of "Music Box" with downbeat R&B and hip hop influences. The album's second single, "One Sweet Day" was inspired by the death of David Cole. The song remained atop the Hot 100 for a record-holding 16 weeks, and became the longest-running number-one song in history. "Daydream" became her biggest-selling album in the United States, and became her second album to be certified Diamond by the RIAA, following "Music Box". The album again was the best-seller by an international artist in Japan, shipping over 2.2 million copies, and eventually reaching global sales of over 20 million copies. Critically, the album was heralded as Carey's best to date; "The New York Times" named it one of 1995's best albums, and wrote, "best cuts bring R&B candy-making to a new peak of textural refinement [...] Carey's songwriting has taken a leap forward and become more relaxed, sexier and less reliant on thudding clichés." Carey once again opted to embark on a short world tour titled Daydream World Tour. It had seven dates, three in Japan and four throughout Europe. 
When tickets went on sale, Carey set records when all 150,000 tickets for her three shows at Japan's largest stadium, Tokyo Dome, sold out in under three hours, breaking the previous record held by The Rolling Stones. Due to the album's success, Carey won two awards at the American Music Awards for her solo efforts: Favorite Pop/Rock Female Artist and Favorite Soul/R&B Female Artist. "Daydream" and its singles were respectively nominated in six categories at the 38th Grammy Awards. Carey, along with Boyz II Men, opened the event with a performance of "One Sweet Day". However, Carey did not receive any award, prompting her to comment "What can you do? I will never be disappointed again. After I sat through the whole show and didn't win once, I can handle anything." In 1995, due to "Daydream"s enormous Japanese sales, "Billboard" declared Carey the "Overseas Artist of the Year" in Japan. With her following albums, Carey began to take more initiative and control with her music, and started infusing more genres into her work. For "Butterfly", she sought to work with other producers and writers other than Afanasieff, such as Sean Combs, Kamaal Fareed, Missy Elliott and Jean Claude Oliver and Samuel Barnes from Trackmasters. During the album's recording, Carey and Mottola separated, with Carey citing it as her way of achieving freedom, and a new lease on life. Aside from the album's different approach, critics took notice of Carey's altered style of singing, which she described as breathy vocals. Her new-found style of singing was met with mixed reception; some critics felt this was a sign of maturity, that she did not feel the need to always show off her upper range, while others felt it was a sign of her weakening and waning voice. The album's lead single, "Honey", and its accompanying music video, introduced a more overtly sexual image than Carey had ever demonstrated, and furthered reports of her freedom from Mottola. Carey believed that her image was not "that much of a departure from what I've done in the past [...] It's not like I went psycho and thought I would be a rapper. Personally, this album is about doing whatever the hell I wanted to do." Reviews for "Butterfly" were generally positive: "Rolling Stone" wrote, "It's not as if Carey has totally dispensed with her old saccharine, Houston-style balladry [...] but the predominant mood of 'Butterfly' is one of coolly erotic reverie." AllMusic editor Stephen Thomas Erlewine described Carey's vocals as "sultrier and more controlled than ever," and heralded "Butterfly" as one of her "best records and illustrates that Carey continues to improve and refine her music, which makes her a rarity among her '90s peers.'" The album was a commercial success, although not to the degree of her previous albums "Mariah Carey", "Music Box" and "Daydream". Carey began developing other projects during the late 1990s. On April 14, 1998, Carey partook in the VH1 Divas benefit concert, where she sang alongside Aretha Franklin, Celine Dion, Shania Twain, Gloria Estefan, and Carole King. Carey had begun developing a film project "All That Glitters", later re-titled to simply "Glitter", and wrote songs for other projects, such as "Men in Black" (1997) and "How the Grinch Stole Christmas" (2000). After "Glitter" fell into developmental hell, Carey postponed the project, and began writing material for a new album. Sony Music executives wanted her to prepare a greatest hits collection in time for the holiday season. 
They wanted to release an album that featured her number one singles in the United States, and her international chart toppers on the European versions, without any new material, while Carey felt that a compilation album should reflect her most personal songs, not just her most commercial. The album, titled "#1's" (1998), featured a duet with Whitney Houston, "When You Believe", which was included on the soundtrack for "The Prince of Egypt" (1998). "#1's" became a phenomenon in Japan, selling over one million copies in its opening week, making Carey the only international artist to accomplish this feat. It sold over 3.25 million copies in Japan within its first three months, and holds the record as the best-selling album by a non-Asian artist. During the spring of 1999, Carey began working on the final album of her record contract with Sony. Due to the pressure and the awkward relationship Carey had developed with Sony, she completed the album in a period of three months in the summer of 1999, more quickly than any of her other albums. Titled "Rainbow" (1999), the album found Carey once again working with a new array of music producers and songwriters, such as Jay-Z and DJ Clue?. Carey also wrote two ballads with David Foster and Diane Warren, with whom she replaced Afanasieff. "Rainbow" was released on November 2, 1999, to the highest first-week sales of her career at that time, though it debuted at number two on the "Billboard" 200. In the meantime, Carey's relationship with Columbia grew more troubled, as the label halted promotion after the album's first two singles. They felt "Rainbow" did not have any strong single left to release, whereas Carey wanted to release a ballad. This led to a very public feud, as Carey began posting messages on her website, giving fans inside information on the dispute and instructing them to request "Can't Take That Away (Mariah's Theme)" on radio stations. The song was ultimately released, but Carey found that it had been given only a very limited, low-promotion release, which made it commercially non-viable. Critical reception of "Rainbow" was generally enthusiastic, with the "Sunday Herald" saying that the album "sees her impressively tottering between soul ballads and collaborations with R&B heavyweights like Snoop Doggy Dogg and Usher [...] It's a polished collection of pop-soul." Though a commercial success, "Rainbow" became Carey's lowest-selling album to that point in her career. After she received "Billboard"s Artist of the Decade Award and the World Music Award for Best-Selling Female Artist of the Millennium, Carey parted from Columbia and signed an estimated $100 million, five-album recording contract with Virgin Records America (EMI Records) in April 2001. Carey was given full conceptual and creative control over the "Glitter" project. She opted to record an album partly mixed with 1980s-influenced disco and other similar genres, in order to go hand-in-hand with the film's setting. She often stated that Columbia had regarded her as a commodity, with her separation from Mottola exacerbating her relations with label executives. Just a few months later, in July 2001, it was widely reported that Carey had suffered a physical and emotional breakdown. She had left messages on her website that complained of being overworked, and her three-year relationship with the singer Luis Miguel ended. In an interview the following year, she said, "I was with people who didn't really know me and I had no personal assistant.
I'd do interviews all day long and get two hours of sleep a night, if that." Due to the pressure from the media, her heavy work schedule and the split from Miguel, Carey began posting a series of disturbing messages on her official website, and displayed erratic behavior on several live promotional outings. On July 19, 2001, Carey made a surprise appearance on the MTV program "Total Request Live" (TRL). As the show's host Carson Daly began taping following a commercial break, Carey came out pushing an ice cream cart while wearing a large men's shirt, and began a striptease, in which she shed her shirt to reveal a tight yellow and green ensemble. While she later revealed that Daly was aware of her presence in the building prior to her appearance, Carey's appearance on TRL garnered strong media attention. Only days later, Carey began posting irregular voice notes and messages on her official website: "I'm trying to understand things in life right now and so I really don't feel that I should be doing music right now. What I'd like to do is just a take a little break or at least get one night of sleep without someone popping up about a video. All I really want is [to] just be me and that's what I should have done in the first place ... I don't say this much but guess what, I don't take care of myself." Following the quick removal of the messages, Berger commented that Carey had been "obviously exhausted and not thinking clearly" when she posted the letters. On July 26, she was suddenly hospitalized, citing "extreme exhaustion" and a "physical and emotional breakdown." Carey was admitted to an undisclosed hospital in Connecticut, and remained hospitalized and under doctor's care for two weeks, followed by an extended absence from the public. Following the heavy media coverage surrounding Carey's publicized breakdown and hospitalization, Virgin Records America and 20th Century Fox delayed the release of both "Glitter", as well as its soundtrack of the same name. When discussing the project's weak commercial reaction, Carey blamed both her frame of mind during the time of its release, its postponement, as well as the soundtrack having been released on September 11. Critics panned "Glitter", as well as its accompanying soundtrack; both were unsuccessful commercially. The accompanying soundtrack album, "Glitter", became Carey's lowest-selling album to that point. The "St. Louis Post-Dispatch" dismissed it as "an absolute mess that'll go down as an annoying blemish on a career that, while not always critically heralded, was at least nearly consistently successful." Following the negative cloud that was enveloping Carey's personal life at the time, as well as the project's poor reception, her $100 million five-album record deal with Virgin Records America (EMI Records) was bought out for $50 million. Soon after, Carey flew to Capri, Italy for a period of five months, in which she began writing material for her new album, stemming from all the personal experiences she had endured throughout the past year. Carey later said that her time at Virgin was "a complete and total stress-fest [...] I made a total snap decision which was based on money and I never make decisions based on money. I learned a big lesson from that." Later that year, she signed a contract with Island Records, valued at more than $24 million, and launched the record label MonarC. To add further to Carey's emotional burdens, her father, with whom she had little contact since childhood, died of cancer that year. 
In 2002, Carey was cast in the independent film "WiseGirls", alongside Mira Sorvino and Melora Walters; the three co-starred as waitresses at a mobster-operated restaurant. It premiered at the Sundance Film Festival, and received a generally negative critical response, though Carey's portrayal of her character was praised; Roger Friedman of Fox News referred to her as "a Thelma Ritter for the new millennium," and wrote, "Her line delivery is sharp and she manages to get the right laughs." Later that year, Carey performed the American national anthem to rave reviews at Super Bowl XXXVI at the Louisiana Superdome in New Orleans, Louisiana. Towards the end of 2002, Carey released her next studio album, "Charmbracelet", which she said marked "a new lease on life" for her. Though it marked Carey's return to the music scene in the wake of "Glitter", sales of "Charmbracelet" were moderate and the quality of Carey's vocals came under criticism. Joan Anderson from "The Boston Globe" declared the album "the worst of her career, and revealed a voice [that is] no longer capable of either gravity-defying gymnastics or soft coos," while AllMusic editor Stephen Thomas Erlewine expressed similar sentiments and wrote, "What is a greater problem is that Mariah's voice is shot, sounding in tatters throughout the record. She can no longer coo or softly croon nor can she perform her trademark gravity-defying vocal runs." In April 2003, Carey announced she would be touring later in the year. The Charmbracelet World Tour: An Intimate Evening with Mariah Carey spanned North America and East Asia over three months, generally playing smaller venues rather than arenas. Throughout the United States, the shows were staged in theaters with a more Broadway-influenced production; Carey explained, "It's much more intimate so you'll feel like you had an experience. You experience a night with me." However, while smaller productions were booked throughout the tour's stateside leg, Carey performed at stadiums in Asia and Europe, playing to crowds of over 35,000 in Manila, 50,000 in Malaysia, and over 70,000 in China. In the United Kingdom, it became Carey's first tour to feature shows outside London, booking arena stops in Glasgow, Birmingham and Manchester. The tour garnered generally positive reviews from music critics and concertgoers, with many complimenting the quality of Carey's live vocals, as well as the production as a whole. Throughout 2004, Carey focused on composing material for her tenth studio album, "The Emancipation of Mimi" (2005). The album found Carey working predominantly with Jermaine Dupri, as well as Bryan-Michael Cox, Manuel Seal, The Neptunes and Kanye West. The album debuted atop the charts in several countries, and was warmly accepted by critics. Caroline Sullivan of "The Guardian" defined it as "cool, focused and urban [... some of] the first Mariah Carey tunes in years which I wouldn't have to be paid to listen to again," while "USA Today"s Elysa Gardner wrote, "The ballads and midtempo numbers that truly reflect the renewed confidence of a songbird who has taken her shots and kept on flying." The album's second single, "We Belong Together", became a "career re-defining" song for Carey, at a point when many critics had considered her career over. Music critics heralded the song as her "return to form," as well as the "return of The Voice," while many felt it would revive "faith" in Carey's potential as a balladeer.
"We Belong Together" broke several records in the United States and became Carey's sixteenth chart topper on the "Billboard" Hot 100. After staying at number one for fourteen non-consecutive weeks, the song became the second longest running number one song in US chart history, behind Carey's 1996 collaboration with Boyz II Men, "One Sweet Day". "Billboard" listed it as the "song of the decade" and the ninth most popular song of all time. Besides its chart success, the song broke several airplay records, and according to Nielsen BDS, gathered both the largest one-day and one-week audiences in history. During the week of September 25, 2005, Carey set another record, becoming the first female to occupy the first two spots atop the Hot 100, as "We Belong Together" remained at number one, and her next single, "Shake It Off" moved into the number two spot (Ashanti had topped the chart in 2002 while being a "featured" singer on the number two single). On the Billboard Hot 100 Year-end Chart of 2005, the song was declared the number one song, a career first for Carey. "Billboard" listed "We Belong Together" ninth on The "Billboard" Hot 100 All-Time Top Songs and was declared the most popular song of the 2000s decade by "Billboard". The album earned ten Grammy Award nominations in 2006–07: eight in 2006 for the original release (the most received by Carey in a single year), and two in 2007 for the "Ultra Platinum Edition" (from which "Don't Forget About Us" became her seventeenth number-one hit). In 2006 Carey won Best Contemporary R&B Album for "The Emancipation of Mimi", as well as Best Female R&B Vocal Performance and Best R&B Song for "We Belong Together". "The Emancipation of Mimi" was the best-selling album in the United States in 2005, with nearly five million units sold. It was the first album by a solo female artist to become the year's best-selling album since Alanis Morissette's "Jagged Little Pill" in 1996. At the end of 2005, the IFPI reported that "The Emancipation of Mimi" had sold more than 7.7 million copies globally, and was the second-best-selling album of the year after Coldplay's "X&Y". It was the best-selling album worldwide by a solo and female artist. To date, "The Emancipation of Mimi" has sold over 12 million copies worldwide. At the 48th Grammy Awards, Carey performed a medley of "We Belong Together" and "Fly Like a Bird". In support of the album, Carey embarked on her first headlining tour in three years, named The Adventures of Mimi: The Voice, The Hits, The Tour after a "Carey-centric fan's" music diary. The tour spanned forty stops, with thirty-two in the United States and Canada, two in Africa, and six in Japan. It received warm reaction from music critics and concert goers, many of which celebrated the quality of Carey's live vocals, as well as the show as a whole. Carey played to about 60,000 fans in the two shows in Tunis. "The Adventures of Mimi" DVD was released in November 2007 internationally and December 2007 in the U.S. By spring 2007, Carey had begun to work on her eleventh studio album, "E=MC²", in a private villa in Anguilla. Although "E=MC²" was well received by most critics, some of them criticized it for being very similar to the formula used on "The Emancipation of Mimi". 
Two weeks before the album's release, "Touch My Body", the record's lead single, reached the top position on the "Billboard" Hot 100, becoming Carey's eighteenth number one and making her the solo artist with the most number-one singles in United States history, pushing her past Elvis Presley under the magazine's revised methodology. Among all acts, Carey is second only to The Beatles, who have twenty number-one singles. Additionally, it gave Carey her 79th week atop the Hot 100, tying her with Presley as the artist with the most weeks at number one in "Billboard" chart history. "E=MC²" debuted at number one on the "Billboard" 200 with 463,000 copies sold, the biggest opening-week sales of her career. In 2008, Carey also played an aspiring singer named Krystal in "Tennessee" and had a cameo appearance in Adam Sandler's film "You Don't Mess with the Zohan", playing herself. Since the album's release, Carey had planned to embark on an extensive tour in support of "E=MC²". However, the tour was suddenly cancelled in early December 2008. Carey later stated that she had been pregnant during that period and had suffered a miscarriage, which was why she cancelled the tour. On January 20, 2009, Carey performed "Hero" at the Neighborhood Inaugural Ball after Barack Obama was sworn in as the first African-American president of the United States. On July 7, 2009, Carey – alongside Trey Lorenz – performed her version of The Jackson 5 song "I'll Be There" at the memorial service for Michael Jackson. In 2009, she appeared as a social worker in "Precious", the movie adaptation of the 1996 novel "Push" by Sapphire. The film garnered mostly positive reviews from critics, including for Carey's performance; "Variety" described her acting as "pitch-perfect." In January 2010, Carey won the Breakthrough Actress Performance Award for her role in "Precious" at the Palm Springs International Film Festival. On September 25, 2009, Carey's twelfth studio album, "Memoirs of an Imperfect Angel", was released. Reception for the album was mostly mixed; Stephen Thomas Erlewine of AllMusic called it "her most interesting album in a decade," while Jon Caramanica from "The New York Times" criticized Carey's vocal performances, decrying her overuse of her softer vocal registers at the expense of her more powerful lower and upper registers. Commercially, the album debuted at number three on the "Billboard" 200, and became the lowest-selling studio album of her career. The album's lead single, "Obsessed", debuted at number eleven and peaked at number seven on the chart, becoming Carey's 27th US top-ten hit and tying her with Elton John and Janet Jackson for the fifth-most top-ten hits. The album's follow-up single, a cover of Foreigner's "I Want to Know What Love Is", managed to break airplay records in Brazil. The song spent 27 weeks atop the Brasil Hot 100 Airplay, making it the longest-running chart topper in that chart's history. On December 31, 2009, Carey embarked on her seventh concert tour, the Angels Advocate Tour, which visited the United States and Canada and ended on September 26, 2010. A planned remix album of "Memoirs of an Imperfect Angel", titled "Angels Advocate", was slated for a March 30, 2010 release, but was eventually cancelled. Following the cancellation of "Angels Advocate", it was announced that Carey would return to the studio to start work on her thirteenth studio album. It was later revealed that it would be her second Christmas album, and the follow-up to "Merry Christmas".
Longtime collaborators for the project included Jermaine Dupri, Johntá Austin, Bryan-Michael Cox, and Randy Jackson, as well as new collaborators such as Marc Shaiman. The release date for the album, titled "Merry Christmas II You", was November 2, 2010; the track list included six new songs as well as a remix of "All I Want for Christmas Is You". "Merry Christmas II You" debuted at number four on the "Billboard" 200 with sales of 56,000 copies, becoming Carey's 16th top-ten album in the United States. The album debuted at number one on the R&B/Hip-Hop Albums chart, making it only the second Christmas album to top this chart. In May 2010, Carey dropped out of her planned appearance in "For Colored Girls", the film adaptation of the play "For Colored Girls Who Have Considered Suicide When the Rainbow Is Enuf", citing medical reasons. In February 2011, Carey announced that she had officially begun writing new material for her upcoming fourteenth studio album. Carey recorded a duet with Tony Bennett for his "Duets II" album, titled "When Do The Bells Ring For Me?" In October 2011, Carey announced that she had re-recorded "All I Want for Christmas Is You" with Justin Bieber as a duet for his Christmas album, "Under the Mistletoe". In November 2011, Carey was included in the remix to the mixtape single "Warning" by Uncle Murda; the remix also features 50 Cent and Young Jeezy. That same month, Carey released a duet with John Legend titled "When Christmas Comes", originally part of "Merry Christmas II You". On March 1, 2012, Carey performed at New York City's Gotham Hall, her first performance since her pregnancy. She also performed a three-song set at a special fundraiser for US President Barack Obama held at New York's Plaza Hotel. A new song titled "Bring It On Home", which Carey wrote specifically for the event to show her support for Obama's re-election campaign, was also performed. In August 2012, she released a standalone single, "Triumphant (Get 'Em)", featuring American rappers Rick Ross and Meek Mill and co-written and co-produced by Carey, Jermaine Dupri, and Bryan-Michael Cox. Carey joined the judging panel of "American Idol" season twelve as Jennifer Lopez's replacement, joining Randy Jackson, Nicki Minaj and Keith Urban. In November 2013, she spoke of hating her time on "American Idol", adding, "It was like going to work every day in hell with Satan," a reference to her on-set squabbles with Minaj. Carey appeared in Lee Daniels' 2013 film "The Butler", about a White House butler who served eight American presidents over the course of three decades. Carey guest-starred as the voice of a redneck character on the adult animated series "American Dad!" on November 24, 2013. Earlier, in February 2013, Carey had recorded and released a song called "Almost Home" for the soundtrack of the Walt Disney Studios film "Oz the Great and Powerful"; its music video was directed by photographer David LaChapelle. News began to circulate about the singer's fourteenth studio album. Some of the people Carey worked with on the album included DJ Clue?, Randy Jackson, Q-Tip, R. Kelly, David Morales, Loris Holland, Stevie J, James Fauntleroy II, Ray Angry, Afanasieff, Dupri, Bryan-Michael Cox, James "Big Jim" Wright, Hit-Boy, The-Dream, Da Brat, and Rodney Jerkins. Carey told "Billboard": "It's about making sure I have tons of good music, because at the end of the day that's the most important thing...
There are a lot more raw ballads than people might expect...there are also uptempo and signature-type songs that represent [my] different facets as an artist." The lead single, "Beautiful", featuring singer Miguel, was released on May 6, 2013, and peaked at number 15 on the Hot 100. Carey taped a performance of "Beautiful" along with a medley of her greatest hits on May 15, 2013; the taping aired on the "American Idol" finale the following day. On October 14, 2013, Carey announced that the album's then-title track, "The Art of Letting Go", had been chosen as the second single; it premiered via Facebook on November 11, 2013. During a Q&A session following the song's release, Carey gave an update on the album, stating: "Now I've been inspired to add two more songs, so we're almost there. I can't even express this properly but I feel like this is gonna be my favorite album." Following the release of another song, "You're Mine (Eternal)", it was announced that "The Art of Letting Go" would no longer be the title of the album. After the final title was announced, "Me. I Am Mariah... The Elusive Chanteuse" was released on May 27, 2014. In October 2014, Carey announced All I Want For Christmas Is You, A Night of Joy & Festivity, an annual residency show at the Beacon Theatre in New York City. The first leg included six shows, running from December 15–22, 2014. Carey announced the second leg in October 2015; it ran for eight shows, from December 8–18, 2015. On January 15, 2015, Carey announced her Number Ones residency at The Colosseum at Caesars Palace in Las Vegas. On January 30, it was announced that Carey had left Universal Music Group's Def Jam Recordings to reunite with L.A. Reid and Sony Music via Epic Records. To coincide with the residency, Carey released "#1 to Infinity", a greatest-hits compilation album containing all eighteen of her "Billboard" Hot 100 number-one singles, along with a new recording, "Infinity", which was released as a single on April 27. In 2015, Carey made her directorial debut with the Hallmark Channel Christmas movie "A Christmas Melody", in which she also starred as one of the main characters; filming took place during October 2015. In December 2015, Carey announced The Sweet Sweet Fantasy Tour, which spanned 27 dates beginning in March 2016, marking the first time the singer had done a significant tour of mainland Europe in 13 years. Four of the stops were shows in South Africa. The tour grossed $30.3 million. On March 15, 2016, Carey announced that she was filming "Mariah's World", a docu-series for the E! network documenting her Sweet Sweet Fantasy European tour and her wedding planning process. Carey told "The New York Times", "I thought it would be a good opportunity to kind of, like, show my personality and who I am, even though I feel like my real fans have an idea of who I am... A lot of people have misperceptions about this and that." The series premiered on December 4, 2016. Carey guest-starred on the musical drama "Empire" as a superstar singer named Kitty and sang the song "Infamous" featuring Jussie Smollett. On December 5, 2016, Carey participated in the "VH1 Divas Holiday: Unsilent Night" benefit concert, alongside Vanessa Williams, Chaka Khan, Patti LaBelle and Teyana Taylor. 
On December 31, 2016, Carey's performance on "Dick Clark's New Year's Rockin' Eve" in Times Square received worldwide attention after technical difficulties caused her in-ear monitors to malfunction, resulting in what "The New York Times" referred to as a "performance train wreck." The singer cited her inability to hear the music without in-ear auditory feedback as the cause of the mishap. Carey's representatives and Dick Clark Productions placed blame on each other. On February 3, 2017, Carey released the single "I Don't" featuring YG. On February 8, 2017, she voiced the Mayor of Gotham City in the animated film "The Lego Batman Movie". In April 2017, it was announced that Carey was launching her own record label, Butterfly MC Records, a joint partnership with Epic Records. In July 2017, Carey made a cameo in the comedy film "Girls Trip", starring Queen Latifah, Jada Pinkett Smith and Regina Hall. The same month, Carey embarked on a tour with Lionel Richie, titled All the Hits Tour. Carey was featured on the official remix of French Montana's single "Unforgettable", alongside Swae Lee. In October 2017, Carey released a new soundtrack single, "The Star", for the movie of the same name. Carey also developed an animated Christmas film, titled "Mariah Carey's All I Want For Christmas Is You", recording an original song called "Lil' Snowman" for its soundtrack. The film was released direct-to-video on November 14, 2017. In the same month, the singer resumed her All I Want for Christmas Is You, a Night of Joy and Festivity concert series, which visited other countries, including England and France, for the first time. The singer will also tour Oceania in 2018 with The Number Ones Tour. Carey is set to co-executive produce, alongside Brett Ratner, a scripted drama about her life for Starz. The show will focus on Carey's early rise to fame as a young teen in New York City.

Carey is a philanthropist who has been involved with several charitable organizations. She became associated with the Fresh Air Fund in the early 1990s, and is the co-founder of a camp located in Fishkill, New York, that enables inner-city youth to embrace the arts and introduces them to career opportunities. The camp was named Camp Mariah "for her generous support and dedication to Fresh Air children," and she received a Congressional Horizon Award for her youth-related charity work. Carey also donated royalties from her hits "Hero" and "One Sweet Day" to charities. She is well known nationally for her work with the Make-A-Wish Foundation in granting the wishes of children with life-threatening illnesses, and in November 2006 she was awarded the Foundation's Wish Idol for her "extraordinary generosity and her many wish granting achievements." Carey has volunteered for the Police Athletic League of New York City and contributed to the obstetrics department of New York Presbyterian Hospital Cornell Medical Center. A percentage of the sales of "MTV Unplugged" was donated to various other charities. In 2008, Carey was named Hunger Ambassador of the World Hunger Relief Movement. In February 2010, the song "100%", originally written and recorded for the film "Precious", was used as one of the theme songs for the 2010 Winter Olympics, with all proceeds going to Team USA. One of Carey's most high-profile benefit concert appearances was on VH1's 1998 "Divas Live" special, during which she performed alongside other female singers in support of the Save the Music Foundation. 
The concert was a ratings success, and Carey participated in the Divas 2000 special. In 2007, the Save the Music Foundation honored Carey at its tenth gala event for her support of the foundation since its inception. She appeared at a nationally televised fundraiser in the aftermath of the September 11 attacks, and in December 2001, she performed before peacekeeping troops in Kosovo. Carey hosted the CBS television special "At Home for the Holidays", which documented real-life stories of adopted children and foster families. In 2005, Carey performed for Live 8 in London and at the Hurricane Katrina relief telethon "Shelter from the Storm." In August 2008, Carey and other singers recorded the charity single "Just Stand Up", produced by Babyface and L.A. Reid, to support Stand Up to Cancer. In 2008, Carey performed in a New Year's Eve concert for the family of Libyan dictator Muammar Gaddafi, something she later claimed to "feel horrible and embarrassed to have participated in." In March 2011, Carey's representative Cindi Berger stated that royalties for the song "Save The Day", which was written for her fourteenth studio album, would be donated to charities that raise awareness of human rights issues, to make amends for the Gaddafi performance. Berger also said that "Mariah has and continues to donate her time, money and countless hours of personal service to many organizations both here and abroad." "Save The Day" was never released.

Having declined offers to appear in commercials in the United States during her early career, Carey was not involved in brand marketing initiatives until 2006, when she participated in endorsements for Intel Centrino personal computers and launched a jewelry and accessories line for teenagers, Glamorized, in American Claire's and Icing stores. During this period, as part of a partnership with Pepsi and Motorola, Carey recorded and promoted a series of exclusive ringtones, including "Time of Your Life". She signed a licensing deal with the cosmetics company Elizabeth Arden, and in 2007, she released her own fragrance, "M." The Elizabeth Arden deal has netted her $150 million. In 2007, "Forbes" named her the fifth-richest woman in entertainment, with an estimated net worth of US$270 million. In November 2011, it was reported that Carey's net worth was valued at more than $500 million. On November 29, 2010, she debuted a collection on HSN, which included jewelry, shoes and fragrances. In November 2011, Carey was announced as the new global ambassador for Jenny Craig, following the weight she claims to have lost with the program after giving birth to fraternal twins in April. In 2013, human rights activists criticized Carey for performing in a concert for Angola's "father-daughter kleptocracy" and accused her of accepting "dictator cash."

Carey has said that from childhood she has been influenced by Billie Holiday, Stevie Ray Vaughan, and R&B and soul musicians such as Al Green, Stevie Wonder, Gladys Knight and Aretha Franklin. Her music contains strong influences of gospel music, and she credits The Clark Sisters, Shirley Caesar and Edwin Hawkins as the most influential in her early years. When Carey incorporated hip hop into her sound, speculation arose that she was making an attempt to take advantage of the genre's popularity, but she told "Newsweek", "People just don't understand. I grew up with this music." She has expressed appreciation for rappers such as The Sugarhill Gang, Eric B. & Rakim, the Wu-Tang Clan, The Notorious B.I.G. 
and Mobb Deep, with whom she collaborated on the single "The Roof (Back in Time)" (1998). Carey was heavily influenced by Minnie Riperton and began experimenting with the whistle register because of Riperton's pioneering use of that range. Throughout Carey's career, her vocal and musical style, along with her level of success, has been compared to those of Whitney Houston, whom she has also cited as an influence, and Celine Dion. Carey and her peers, according to Garry Mulholland, are "the princesses of wails [...] virtuoso vocalists who blend chart-oriented pop with mature MOR torch song." Author Lucy O'Brien attributed the comeback of Barbra Streisand's "old-fashioned showgirl" image to Carey and Dion, and described them and Houston as "groomed, airbrushed and overblown to perfection." Carey's musical transition and use of more revealing clothing during the late 1990s were, in part, initiated to distance herself from this image, and she subsequently said that most of her early work was "schmaltzy MOR." Some have noted that, unlike Houston and Dion, Carey co-writes and produces her own songs. Love is the subject of the majority of Carey's lyrics, although she has written about themes such as racism, social alienation, death, world hunger, and spirituality. She has said that much of her work is partly autobiographical, but "Time" magazine wrote: "If only Mariah Carey's music had the drama of her life. Her songs are often sugary and artificial—NutraSweet soul. But her life has passion and conflict," a comment directed at the early stages of her career. The magazine's writer added that as her albums progressed, her songwriting and music blossomed into more mature and meaningful material. Jim Faber of the "New York Daily News" made similar comments: "For Carey, vocalizing is all about the performance, not the emotions that inspired it. Singing, to her, represents a physical challenge, not an emotional unburdening." While reviewing "Music Box", Stephen Holden of "Rolling Stone" commented that Carey sang with "sustained passion," while Arion Berger of "Entertainment Weekly" wrote that during some voca